<h1>Conditions in Python</h1>
<p><strong>Welcome!</strong> This notebook will teach you about condition statements in the Python programming language. By the end of this lab, you'll know how to use condition statements in Python, including comparison operators, logical operators, and branching.</p>
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li>
<a href="#cond">Condition Statements</a>
<ul>
<li><a href="#comp">Comparison Operators</a></li>
<li><a href="#branch">Branching</a></li>
<li><a href="#logic">Logical operators</a></li>
</ul>
</li>
<li>
<a href="#quiz">Quiz on Condition Statement</a>
</li>
</ul>
<p>
Estimated time needed: <strong>20 min</strong>
</p>
</div>
<hr>
<h2 id="cond">Condition Statements</h2>
<h3 id="comp">Comparison Operators</h3>
Comparison operations compare two values (operands) and produce a Boolean result. When comparing two values you can use these operators:
<ul>
<li>equal: <b>==</b></li>
<li>not equal: <b>!=</b></li>
<li>greater than: <b>></b></li>
<li>less than: <b><</b></li>
<li>greater than or equal to: <b>>=</b></li>
<li>less than or equal to: <b><=</b></li>
</ul>
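Each of these operators evaluates to a Boolean. As a quick illustrative sketch (not part of the original lab cells):

```python
# Each comparison expression evaluates to True or False
print(5 == 5)   # True
print(5 != 6)   # True
print(6 > 5)    # True
print(5 < 6)    # True
print(5 >= 5)   # True
print(5 <= 4)   # False
```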
Let's assign <code>a</code> a value of 5. Use the equality operator, denoted by two equal signs <b>==</b>, to determine if two values are equal. The case below compares the variable <code>a</code> with 6.
```
# Condition Equal
a = 5
a == 6
```
The result is <b>False</b>, as 5 does not equal 6.
Consider the comparison <code>i > 5</code>. If the value of the left operand, in this case the variable <b>i</b>, is greater than the value of the right operand, in this case 5, the expression evaluates to <b>True</b>; otherwise, it evaluates to <b>False</b>. If <b>i</b> equals 6, the output is <b>True</b>, because 6 is greater than 5.
```
# Greater than Sign
i = 6
i > 5
```
Set <code>i = 2</code>. The statement is false as 2 is not greater than 5:
```
# Greater than Sign
i = 2
i > 5
```
Let's display some values for <code>i</code> in the figure. The values greater than 5 are shown in green and the rest in red. The green region represents where the condition is **True**, the red where it is **False**. If the value of <code>i</code> is 2, we get **False**, as 2 falls in the red region. Similarly, if the value of <code>i</code> is 6, we get **True**, as it falls in the green region.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/CondsGreater.gif" width="650" />
The inequality test uses an exclamation mark preceding the equal sign. If the two operands are not equal, the condition evaluates to **True**. For example, the following condition will produce **True** as long as the value of <code>i</code> is not equal to 6:
```
# Inequality Sign
i = 2
i != 6
```
When <code>i</code> equals 6 the inequality expression produces <b>False</b>.
```
# Inequality Sign
i = 6
i != 6
```
See the number line below. When the condition is **True**, the corresponding numbers are marked in green; when it is **False**, they are marked in red. If we set <code>i</code> equal to 2, the expression is **True**, as 2 falls in the green region. If we set <code>i</code> equal to 6, we get **False**, as 6 falls in the red region.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/CondsIneq.gif" width="650" />
We can apply the same methods on strings. For example, use an equality operator on two different strings. As the strings are not equal, we get a **False**.
```
# Use Equality sign to compare the strings
"ACDC" == "Michael Jackson"
```
If we use the inequality operator, the output is going to be **True** as the strings are not equal.
```
# Use Inequality sign to compare the strings
"ACDC" != "Michael Jackson"
```
Comparison operators also order letters, words, and symbols according to the ASCII values of their characters: the decimal code of each character determines its position in the ordering.
For example, the ASCII code for <b>!</b> is 33, while the ASCII code for <b>+</b> is 43. Therefore <b>+</b> is larger than <b>!</b>, as 43 is greater than 33.
Similarly, the value for <b>A</b> is 65, and the value for <b>B</b> is 66, therefore:
```
# Compare characters
'B' > 'A'
```
When there are multiple letters, the first letter takes precedence in ordering:
```
# Compare characters
'BA' > 'AB'
```
<b>Note</b>: Uppercase letters have different ASCII codes than lowercase letters, which means string comparison in Python is case-sensitive.
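We can verify the case-sensitivity note with the built-in <code>ord</code> function, which returns a character's code (an illustrative aside, not part of the original lab):

```python
# Uppercase letters have smaller character codes than lowercase ones
print(ord('A'))     # 65
print(ord('a'))     # 97
print('a' == 'A')   # False: comparison is case-sensitive
print('a' > 'A')    # True, because 97 > 65
```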
<h3 id="branch">Branching</h3>
Branching allows us to run different statements for different inputs. It is helpful to think of an **if statement** as a locked room: if the condition is **True**, we can enter the room and the program runs some predefined tasks; if it is **False**, the program skips those tasks.
For example, consider the blue rectangle representing an ACDC concert. If the individual is older than 18, they can enter the concert; if they are 18 or younger, they cannot.
Use the comparison operators introduced above as the condition in an **if statement**. The syntax is <code>if <i>condition</i>:</code>, that is, the keyword <code>if</code>, a condition, and a colon at the end. Write the tasks to execute under that condition on the following lines with an indent. The indented lines after the colon run only when the condition is **True**, and the block ends at the first line without the indent.
In the example below, <code>print("you can enter")</code> runs only when <code>age > 18</code> is **True**, because that line is indented under the if statement. The execution of <code>print("move on")</code> is not affected by the if statement, because it is not indented.
```
# If statement example
age = 19
# age = 18

# expression that can be true or false
if age > 18:
    # within the indent, we have the code that runs if the condition is true
    print("you can enter")

# statements after the if block run regardless of whether the condition is true
print("move on")
```
<i>Try uncommenting the second <code>age</code> assignment.</i>
It is helpful to use the following diagram to illustrate the process. On the left side, we see what happens when the condition is <b>True</b>. The person enters the ACDC concert representing the code in the indent being executed; they then move on. On the right side, we see what happens when the condition is <b>False</b>; the person is not granted access, and the person moves on. In this case, the segment of code in the indent does not run, but the rest of the statements are run.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/CondsIf.gif" width="650" />
The <code>else</code> statement runs a block of code if none of the conditions before it are **True**. Let's use the ACDC concert analogy again: if the user is 17, they cannot go to the ACDC concert, but they can go to the Meat Loaf concert.
The syntax of the <code>else</code> statement is similar to that of the <code>if</code> statement: <code>else:</code>. Notice that there is no condition after <code>else</code>.
Try changing the values of <code>age</code> to see what happens:
```
# Else statement example
age = 18
# age = 19

if age > 18:
    print("you can enter")
else:
    print("go see Meat Loaf")

print("move on")
```
The process is demonstrated below, with each possibility illustrated on one side of the image. On the left, the age is 17, and the individual attends the Meat Loaf concert. On the right, the individual is over 18, in this case 19, and is granted access to the ACDC concert.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/CondsElse.gif" width="650" />
The <code>elif</code> statement, short for else if, lets us check additional conditions when the condition statements before it are <b>False</b>. If the condition for the <code>elif</code> statement is <b>True</b>, its block runs. Consider the concert example again: if the individual is exactly 18, they go to the Pink Floyd concert instead of ACDC or Meat Loaf. A person of 18 enters the area; as they are not older than 18, they cannot see ACDC, but since they are exactly 18, they attend Pink Floyd and then move on. The syntax of the <code>elif</code> statement is the same as an <code>if</code> statement with <code>if</code> replaced by <code>elif</code>.
```
# Elif statement example
age = 18

if age > 18:
    print("you can enter")
elif age == 18:
    print("go see Pink Floyd")
else:
    print("go see Meat Loaf")

print("move on")
```
The three combinations are shown in the figure below. The left-most region shows what happens when the individual is less than 18 years of age. The central component shows when the individual is exactly 18. The rightmost shows when the individual is over 18.
<img src ="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/CondsElif.gif" width="650" />
Look at the following code:
```
# Condition statement example
album_year = 1983
# album_year = 1970

if album_year > 1980:
    print("Album year is greater than 1980")

print('do something..')
```
Feel free to change <code>album_year</code> value to other values -- you'll see that the result changes!
Notice that the code in the <b>indented</b> block above is only executed if the condition is <b>True</b>.
As before, we can add an <code>else</code> block to the <code>if</code> block. The code in the <code>else</code> block will only be executed if the result is <b>False</b>.
<b>Syntax:</b>
```
if (condition):
    # do something
else:
    # do something else
```
If the condition in the <code>if</code> statement is <b>False</b>, the code in the <code>else</code> block executes. This is demonstrated in the figure:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/CondsLogicMap.png" width="650" />
```
# Condition statement example
album_year = 1983
# album_year = 1970

if album_year > 1980:
    print("Album year is greater than 1980")
else:
    print("less than 1980")

print('do something..')
```
Feel free to change the <code>album_year</code> value to other values -- you'll see that the result changes based on it!
<h3 id="logic">Logical operators</h3>
Sometimes you want to check more than one condition at once. For example, you might want to check if one condition and another condition is **True**. Logical operators allow you to combine or modify conditions.
<ul>
<li><code>and</code></li>
<li><code>or</code></li>
<li><code>not</code></li>
</ul>
These operators are summarized for two variables using the following truth tables:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/CondsTable.png" width="650" />
The <code>and</code> statement is **True** only when both conditions are true. The <code>or</code> statement is true if at least one condition is **True**. The <code>not</code> statement outputs the opposite truth value.
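The truth tables can be reproduced directly in Python; this small loop (an illustrative addition) prints every combination of two Boolean values:

```python
# Enumerate the truth tables for and / or, then show not
for a in (True, False):
    for b in (True, False):
        print(a, b, '| and:', a and b, '| or:', a or b)

print(not True)    # False
print(not False)   # True
```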
Let's see how to determine if an album was released after 1979 (1979 is not included) and before 1990 (1990 is not included). The time period between 1980 and 1989 satisfies this condition. This is demonstrated in the figure below. The green on lines <strong>a</strong> and <strong>b</strong> represents periods where each statement is **True**. The green on line <strong>c</strong> represents where both conditions are **True**; this corresponds to where the green regions overlap.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/CondsEgOne.png" width="650" />
The block of code to perform this check is given by:
```
# Condition statement example
album_year = 1980

if (album_year > 1979) and (album_year < 1990):
    print("Album year was in between 1980 and 1989")

print("")
print("Do Stuff..")
```
To determine if an album was released before 1980 or after 1989, an **or** statement can be used: periods up to 1979 or from 1990 onward satisfy this condition. This is demonstrated in the following figure: the green on lines <strong>a</strong> and <strong>b</strong> represents periods where each statement is true, and the green on line <strong>c</strong> represents where at least one of the conditions is true.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/CondsEgTwo.png" width="650" />
The block of code to perform this check is given by:
```
# Condition statement example
album_year = 1990

if (album_year < 1980) or (album_year > 1989):
    print("Album was not made in the 1980's")
else:
    print("The Album was made in the 1980's")
```
The <code>not</code> statement negates a condition, producing **True** when the condition is **False**:
```
# Condition statement example
album_year = 1983

if not (album_year == 1984):
    print("Album year is not 1984")
```
<hr>
<h2 id="quiz">Quiz on Conditions</h2>
Write an if statement to determine if an album had a rating greater than 8. Test it using the rating for the album <b>“Back in Black”</b> that had a rating of 8.5. If the statement is true print "This album is Amazing!"
```
# Write your code below and press Shift+Enter to execute
```
Double-click __here__ for the solution.
<!--
rating = 8.5
if rating > 8:
    print("This album is Amazing!")
-->
<hr>
Write an if-else statement that performs the following: if the rating is larger than 8, print “this album is amazing”; if the rating is less than or equal to 8, print “this album is ok”.
```
# Write your code below and press Shift+Enter to execute
```
Double-click __here__ for the solution.
<!--
rating = 8.5
if rating > 8:
    print("this album is amazing")
else:
    print("this album is ok")
-->
<hr>
Write an if statement to determine if an album came out before 1980 or in the years: 1991 or 1993. If the condition is true print out the year the album came out.
```
# Write your code below and press Shift+Enter to execute
```
Double-click __here__ for the solution.
<!--
album_year = 1979
if album_year < 1980 or album_year == 1991 or album_year == 1993:
    print("this album came out already")
-->
<hr>
<h2>The last exercise!</h2>
<p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work.</p>
<hr>
<h3>About the Authors:</h3>
<p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p>
Other contributors: <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a>
<hr>
<p>Copyright © 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
# Modeling and Simulation in Python
Chapter 6
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
from pandas import read_html
```
### Code from the previous chapter
```
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison',
                  'hyde', 'tanton', 'biraben', 'mj',
                  'thomlinson', 'durand', 'clark']
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
t_0 = get_first_label(census)
t_end = get_last_label(census)
elapsed_time = t_end - t_0
p_0 = get_first_value(census)
p_end = get_last_value(census)
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
```
### System objects
We can rewrite the code from the previous chapter using system objects.
```
system = System(t_0=t_0,
                t_end=t_end,
                p_0=p_0,
                annual_growth=annual_growth)
```
And we can encapsulate the code that runs the model in a function.
```
def run_simulation1(system):
    """Runs the constant growth model.

    system: System object

    returns: TimeSeries
    """
    results = TimeSeries()
    results[system.t_0] = system.p_0

    for t in linrange(system.t_0, system.t_end):
        results[t+1] = results[t] + system.annual_growth

    return results
```
We can also encapsulate the code that plots the results.
```
def plot_results(census, un, timeseries, title):
    """Plot the estimates and the model.

    census: TimeSeries of population estimates
    un: TimeSeries of population estimates
    timeseries: TimeSeries of simulation results
    title: string
    """
    plot(census, ':', label='US Census')
    plot(un, '--', label='UN DESA')
    plot(timeseries, color='gray', label='model')

    decorate(xlabel='Year',
             ylabel='World population (billion)',
             title=title)
```
Here's how we run it.
```
results = run_simulation1(system)
plot_results(census, un, results, 'Constant growth model')
```
## Proportional growth
Here's a more realistic model where the number of births and deaths is proportional to the current population.
```
def run_simulation2(system):
    """Run a model with proportional birth and death.

    system: System object

    returns: TimeSeries
    """
    results = TimeSeries()
    results[system.t_0] = system.p_0

    for t in linrange(system.t_0, system.t_end):
        births = system.birth_rate * results[t]
        deaths = system.death_rate * results[t]
        results[t+1] = results[t] + births - deaths

    return results
```
I picked a death rate that seemed reasonable and then adjusted the birth rate to fit the data.
```
system.death_rate = 0.01
system.birth_rate = 0.027
```
Here's what it looks like.
```
results = run_simulation2(system)
plot_results(census, un, results, 'Proportional model')
savefig('figs/chap03-fig03.pdf')
```
The model fits the data pretty well for the first 20 years, but not so well after that.
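One way to make "fits pretty well" concrete is to compute the average absolute difference between the model and the estimates. The sketch below uses small illustrative arrays rather than the actual census series:

```python
import numpy as np

# Hypothetical aligned values: census estimates and model results (billions)
census_vals = np.array([2.53, 2.57, 2.62, 2.67, 2.72])
model_vals  = np.array([2.53, 2.59, 2.65, 2.71, 2.77])

# Mean absolute error of the model against the estimates
abs_err = np.abs(model_vals - census_vals)
print(abs_err.mean())
```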
### Factoring out the update function
`run_simulation1` and `run_simulation2` are nearly identical except for the body of the loop, so we can factor that part out into a function.
```
def update_func1(pop, t, system):
    """Compute the population next year.

    pop: current population
    t: current year
    system: system object containing parameters of the model

    returns: population next year
    """
    births = system.birth_rate * pop
    deaths = system.death_rate * pop
    return pop + births - deaths
```
The name `update_func` refers to a function object.
```
update_func1
```
Which we can confirm by checking its type.
```
type(update_func1)
```
`run_simulation` takes the update function as a parameter and calls it just like any other function.
```
def run_simulation(system, update_func):
    """Simulate the system using any update function.

    system: System object
    update_func: function that computes the population next year

    returns: TimeSeries
    """
    results = TimeSeries()
    results[system.t_0] = system.p_0

    for t in linrange(system.t_0, system.t_end):
        results[t+1] = update_func(results[t], t, system)

    return results
```
Here's how we use it.
```
t_0 = get_first_label(census)
t_end = get_last_label(census)
p_0 = census[t_0]

system = System(t_0=t_0,
                t_end=t_end,
                p_0=p_0,
                birth_rate=0.027,
                death_rate=0.01)

results = run_simulation(system, update_func1)
plot_results(census, un, results, 'Proportional model, factored')
```
Remember not to put parentheses after `update_func1`. What happens if you try?
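The difference matters because a bare name like `update_func1` is a function object, while `update_func1(...)` is the value it returns. A minimal, self-contained illustration with a toy function (not the modsim objects):

```python
def add_one(x):
    return x + 1

def apply_func(func, value):
    # func is a function object; parentheses are only added when we call it
    return func(value)

print(apply_func(add_one, 41))   # 42

# Passing add_one(41) instead hands apply_func the integer 42;
# calling an integer raises TypeError.
try:
    apply_func(add_one(41), 0)
except TypeError:
    print("TypeError: an int is not callable")
```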
**Exercise:** When you run `run_simulation`, it runs `update_func1` once for each year between `t_0` and `t_end`. To see that for yourself, add a print statement at the beginning of `update_func1` that prints the values of `t` and `pop`, then run `run_simulation` again.
### Combining birth and death
Since births and deaths get added up, we don't have to compute them separately. We can combine the birth and death rates into a single net growth rate.
```
def update_func2(pop, t, system):
    """Compute the population next year.

    pop: current population
    t: current year
    system: system object containing parameters of the model

    returns: population next year
    """
    net_growth = system.alpha * pop
    return pop + net_growth
```
Here's how it works:
```
system.alpha = system.birth_rate - system.death_rate
results = run_simulation(system, update_func2)
plot_results(census, un, results, 'Proportional model, combined birth and death')
```
### Exercises
**Exercise:** Maybe the reason the proportional model doesn't work very well is that the growth rate, `alpha`, is changing over time. So let's try a model with different growth rates before and after 1980 (as an arbitrary choice).
Write an update function that takes `pop`, `t`, and `system` as parameters. The system object, `system`, should contain two parameters: the growth rate before 1980, `alpha1`, and the growth rate after 1980, `alpha2`. It should use `t` to determine which growth rate to use. Note: Don't forget the `return` statement.
Test your function by calling it directly, then pass it to `run_simulation`. Plot the results. Adjust the parameters `alpha1` and `alpha2` to fit the data as well as you can.
```
# Solution goes here
# Solution goes here
```
# Hierarchical Live sellers
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.style as style
from datetime import datetime as dt
style.use('ggplot')
# import the dataset and drop empty and uninformative columns
dataset = pd.read_csv("Live.csv").drop(columns = {'status_id','Column1','Column2','Column3','Column4'})
# convert to datetime
dataset['status_published'] = dataset['status_published'].astype(str).str.replace("/","-")
dataset['status_published'] = pd.to_datetime(dataset['status_published'])
dataset['day'] = dataset['status_published'].dt.weekday # add the day of the week
#dataset['month'] = dataset['status_published'].dt.month # add the month
dataset['hour'] = dataset['status_published'].dt.hour # add the hour
dataset['minute'] = dataset['status_published'].dt.minute # add the minutes
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
# encode the labels as numeric attributes
dataset['status_type'] = encoder.fit_transform(dataset['status_type'])
# drop the old column, since we already added its information in numeric form
dataset = dataset.drop(columns = {'status_published'})
dataset.head()
# helper function to build column names
def column_name(name, df):
    result = []
    for i in range(len(df.columns)):
        result.append(name + str(i))
    return result
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder(sparse=False)
day = pd.DataFrame(ohe.fit_transform(dataset.iloc[:,10:11].values),index = dataset.index).drop(columns = {0})
day.columns = column_name("day",day)
day.shape
hour = pd.DataFrame(ohe.fit_transform(dataset.iloc[:,11:12].values)).drop(columns = {0})
hour.columns = column_name("hour",hour)
hour.shape
minute = pd.DataFrame(ohe.fit_transform(dataset.iloc[:,12:13].values)).drop(columns = {0})
minute.columns = column_name("minute",minute)
minute.shape
dataset = dataset.drop(columns = {'hour','day','minute'})
dataset = dataset.join(hour).join(day).join(minute)
dataset.head()
```
We will apply PCA to help visualize the data and reduce its dimensionality.
```
from sklearn.decomposition import PCA
pca = PCA(n_components = 2)
X = pca.fit_transform(dataset)
explained_variance = pca.explained_variance_ratio_
explained_variance.sum()
```
With 2 components we retain 0.99 of the explained variance.
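The same cumulative-variance reasoning can be sketched with NumPy alone: PCA's explained-variance ratios are the normalized squared singular values of the centered data. The sketch below uses random toy data, not the Live dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 5))   # toy data standing in for the real features

# PCA via SVD: each component's explained-variance ratio is its share
# of the total squared singular values of the centered data.
centered = data - data.mean(axis=0)
singular = np.linalg.svd(centered, compute_uv=False)
ratios = singular**2 / np.sum(singular**2)
cumulative = np.cumsum(ratios)

# Smallest number of components reaching 99% explained variance
k = int(np.searchsorted(cumulative, 0.99)) + 1
print(k, cumulative[k - 1])
```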
# Hierarchical (single linkage)
```
import scipy.cluster.hierarchy as sch
dendrogram = sch.dendrogram(sch.linkage(X, method = 'single'))
plt.title('Dendrogram')
plt.xlabel('Axis')
plt.ylabel('Euclidean distances')
plt.show()
```
There are two clusters, since the longest vertical line uninterrupted by horizontal lines is the blue one, which splits the data into two clusters, the red and the green. The figure is attached for closer inspection.
```
from sklearn.cluster import AgglomerativeClustering
hc = AgglomerativeClustering(n_clusters = 2, affinity = 'euclidean', linkage = 'single')
y_hc = hc.fit_predict(X)
pd.Series(hc.labels_).value_counts()
```
Indeed, even though the second cluster has fewer instances, if we increased the number of clusters to 4 we would get the following distribution:
```
hc_hipotesis = AgglomerativeClustering(n_clusters = 4, affinity = 'euclidean', linkage = 'single')
y_hc_hipotesis = hc_hipotesis.fit_predict(X)
pd.Series(hc_hipotesis.labels_).value_counts()
```
Let's now plot the result of our clustering.
```
import collections, numpy
collections.Counter(y_hc) # number of elements in each cluster
# plot the clusters
plt.scatter(X[y_hc == 0, 0], X[y_hc == 0, 1], s = 50, c = 'red', label = 'Cluster 0')
plt.scatter(X[y_hc == 1, 0], X[y_hc == 1, 1], s = 20, c = 'blue', label = 'Cluster 1')
plt.xlabel('PC2')
plt.ylabel('PC1')
plt.legend()
plt.show()
```
# Hierarchical (complete linkage)
```
import scipy.cluster.hierarchy as sch
dendrogram = sch.dendrogram(sch.linkage(X, method = 'complete'))
plt.title('Dendrogram')
plt.xlabel('Axis')
plt.ylabel('Euclidean distances')
plt.show()
```
In this case, due to the larger spacing in the dendrogram, we can see that choosing between 2 and 3 clusters is the most appropriate.
```
from sklearn.cluster import AgglomerativeClustering
hc_link = AgglomerativeClustering(n_clusters = 3, affinity = 'euclidean', linkage = 'complete')
y_hc_link = hc_link.fit_predict(X)
pd.Series(hc_link.labels_).value_counts()
```
We can see that when we add one more cluster, unlike single linkage, it created a new cluster with 104 instances.
```
plt.scatter(X[y_hc_link == 1, 0], X[y_hc_link == 1, 1], s = 50, c = 'red', label = 'Cluster 0')
plt.scatter(X[y_hc_link == 0, 0], X[y_hc_link == 0, 1], s = 30, c = 'green', label = 'Cluster 1')
plt.scatter(X[y_hc_link == 2, 0], X[y_hc_link == 2, 1], s = 20, c = 'blue', label = 'Cluster 2')
plt.xlabel('PC2')
plt.ylabel('PC1')
plt.legend()
plt.show()
```
# Normalizing the data and trying again
```
X
from sklearn.preprocessing import StandardScaler, MinMaxScaler
# limit the values to the range -1 to 1
scaler = MinMaxScaler(feature_range=(-1,1))
X_scaled = scaler.fit_transform(X)
X_scaled
```
# Hierarchical (single linkage, scaled)
```
import scipy.cluster.hierarchy as sch
dendrogram = sch.dendrogram(sch.linkage(X_scaled, method = 'single'))
plt.title('Dendrogram')
plt.xlabel('Axis')
plt.ylabel('Euclidean distances')
plt.show()
```
There are two clusters; zooming into the image makes this more apparent.
```
from sklearn.cluster import AgglomerativeClustering
hc_single_scaled = AgglomerativeClustering(n_clusters = 2, affinity = 'euclidean', linkage = 'single')
y_hc_single_scaled = hc_single_scaled.fit_predict(X_scaled)
pd.Series(hc_single_scaled.labels_).value_counts()
#plt.scatter(X[:,0], X[:,1], c=hc.labels_, cmap='rainbow')
plt.scatter(X_scaled[y_hc_single_scaled == 1, 0], X_scaled[y_hc_single_scaled == 1, 1], s = 50, c = 'red', label = 'Cluster 0')
plt.scatter(X_scaled[y_hc_single_scaled == 0, 0], X_scaled[y_hc_single_scaled == 0, 1], s = 20, c = 'blue', label = 'Cluster 1')
plt.xlabel('PC2')
plt.ylabel('PC1')
plt.legend()
plt.show()
```
# Hierarchical (complete linkage, scaled)
```
import scipy.cluster.hierarchy as sch
dendrogram = sch.dendrogram(sch.linkage(X_scaled, method = 'complete'))
plt.title('Dendrogram')
plt.xlabel('Axis')
plt.ylabel('Euclidean distances')
plt.show()
```
According to the dendrogram, 2 to 3 clusters are possible.
```
from sklearn.cluster import AgglomerativeClustering
hc_link_complete_scaled = AgglomerativeClustering(n_clusters = 3, affinity = 'euclidean', linkage = 'complete')
y_hc_link_complete_scaled = hc_link_complete_scaled.fit_predict(X_scaled)
pd.Series(hc_link_complete_scaled.labels_).value_counts()
```
If we run with 2 clusters, it merges the first with the second and leaves cluster 0, which has only 12 instances; if we add one more cluster, it splits the cluster of 12 into 7 and 5, which ends up being, possibly, too specific.
```
plt.scatter(X_scaled[y_hc_link_complete_scaled == 0, 0], X_scaled[y_hc_link_complete_scaled == 0, 1], s = 50, c = 'red', label = 'Cluster 0')
plt.scatter(X_scaled[y_hc_link_complete_scaled == 1, 0], X_scaled[y_hc_link_complete_scaled == 1, 1], s = 20, c = 'blue', label = 'Cluster 1')
plt.scatter(X_scaled[y_hc_link_complete_scaled == 2, 0], X_scaled[y_hc_link_complete_scaled == 2, 1], s = 20, c = 'green', label = 'Cluster 2')
plt.xlabel('PC2')
plt.ylabel('PC1')
plt.legend()
plt.show()
```
# Comparing
```
# single linkage without normalization
pd.Series(hc.labels_).value_counts()
# single linkage with normalization
pd.Series(hc_single_scaled.labels_).value_counts()
```
With single linkage there were few changes: although the dendrogram changed, the cluster distribution it indicated remained the same, and so did the number of instances in each cluster.
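Since cluster IDs are arbitrary, "the same distribution" is best checked pair-wise rather than by comparing labels directly. A hedged sketch with toy label vectors (not the actual clustering output):

```python
from collections import Counter

# Hypothetical labels from the raw run and the normalized run
labels_raw    = [0, 0, 1, 0, 1, 0]
labels_scaled = [1, 1, 0, 1, 0, 1]

# Count how raw labels map to scaled labels; a one-to-one mapping
# means the partitions are identical up to renaming.
pairs = Counter(zip(labels_raw, labels_scaled))
print(pairs)

same_partition = len(pairs) == len(set(labels_raw)) == len(set(labels_scaled))
print(same_partition)   # True: cluster 0 simply became cluster 1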
```
# complete linkage without normalization
pd.Series(hc_link.labels_).value_counts()
# complete linkage with normalization
pd.Series(hc_link_complete_scaled.labels_).value_counts()
```
With complete linkage we see a change in both the dendrogram and the distribution of instances across clusters. Although the number of clusters remained the same after normalization, the counts in each one changed.
### Reference sites
glove model : https://jxnjxn.tistory.com/49
mecab : https://github.com/SOMJANG/Mecab-ko-for-Google-Colab
mecab : https://velog.io/@kjyggg/%ED%98%95%ED%83%9C%EC%86%8C-%EB%B6%84%EC%84%9D%EA%B8%B0-Mecab-%EC%82%AC%EC%9A%A9%ED%95%98%EA%B8%B0-A-to-Z%EC%84%A4%EC%B9%98%EB%B6%80%ED%84%B0-%EB%8B%A8%EC%96%B4-%EC%9A%B0%EC%84%A0%EC%88%9C%EC%9C%84-%EB%93%B1%EB%A1%9D%EA%B9%8C%EC%A7%80
형태소 : https://mr-doosun.tistory.com/22
```
# Install Wordcloud
!pip install wordcloud
from gensim.models import Word2Vec
from wordcloud import WordCloud, STOPWORDS
import gensim
import pandas as pd
from sklearn.manifold import TSNE
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.cluster import KMeans
from sklearn import metrics
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import tensorflow as tf
from os import path
# from konlpy.tag import Komoran
import re
import sys
import datetime
%matplotlib inline
from google.colab import drive
drive.mount('/content/drive')
# Install glove in the Colab environment
! pip install glove-python-binary
!git clone https://github.com/SOMJANG/Mecab-ko-for-Google-Colab.git
%cd Mecab-ko-for-Google-Colab
# Install the Mecab tokenizer
! bash install_mecab-ko_on_colab190912.sh
data = pd.read_csv('/content/drive/MyDrive/welfare_tokenization/welfare_data.csv',encoding='utf8')
data.head()
data.info()
welfare_data=data[['title', 'summary']]
welfare_data.head()
len(welfare_data)
import re
def preprocess(x):
    # Remove phone numbers
text = re.sub(r'[0-9]{2,3}(-[0-9]{3,4}){2}','',x)
    # Remove runs of three or more digits
text = re.sub(r'[0-9]{3,}','',text)
    # Remove special characters
text = re.sub(r'[^\s0-9a-zA-Z가-힣:]','',text)
text = ' '.join([i.upper() for i in text.split() if len(i) >= 1])
return text
# data['처리내역'] = data['처리내역'].astype(str)
welfare_data['title'] = welfare_data['title'].apply(lambda x: preprocess(x))
welfare_data['summary'] = welfare_data['summary'].apply(lambda x: preprocess(x))
#welfare_data = welfare_data[welfare_data.title != ''].reset_index(drop=True)
#welfare_data = welfare_data[welfare_data.summary != ''].reset_index(drop=True)
len(welfare_data)
welfare_data.head()
from konlpy.tag import Mecab
mecab = Mecab()
text = '고위험 임산부 의료비 지원 '
nouns = mecab.nouns(text)
morphs = mecab.morphs(text)
pos = mecab.pos(text)
print(nouns)
print(morphs)
print(pos)
```
### A user dictionary should be built, and preprocessing should remove stopwords and filter tokens by part-of-speech tag
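The filtering described above can be sketched independently of Mecab by working on `(word, tag)` pairs like those returned by `mecab.pos`. The tag names follow the Mecab-ko tagset (NNG/NNP for common/proper nouns); the stopword list and sample pairs are illustrative assumptions:

```python
# Keep only nouns and drop stopwords; the stopword set is a placeholder.
STOPWORDS = {'지원', '사업'}

def filter_tokens(pos_pairs, keep_tags=('NNG', 'NNP'), stopwords=STOPWORDS):
    """Keep tokens whose POS tag is in keep_tags and that are not stopwords."""
    return [word for word, tag in pos_pairs
            if tag in keep_tags and word not in stopwords]

# Sample (word, tag) pairs, as mecab.pos would produce for a short phrase
sample = [('고위험', 'NNG'), ('임산부', 'NNG'), ('의료비', 'NNG'),
          ('지원', 'NNG'), ('하', 'XSV'), ('다', 'EC')]
print(filter_tokens(sample))  # ['고위험', '임산부', '의료비']
```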
### Training word vectors
```
welfare_data['title_token']=welfare_data['title'].apply(lambda x: mecab.morphs(x))
welfare_data['summary_token']=welfare_data['summary'].apply(lambda x: mecab.morphs(x))
welfare_data.head()
title_token_list=list(welfare_data['title_token'])
summary_token_list=list(welfare_data['summary_token'])
sent_token=title_token_list+summary_token_list
sent_token
# Import glove
from glove import Corpus, Glove
DATA_DIR = '/content/drive/MyDrive/welfare_tokenization'
# Build the corpus
corpus = Corpus()
corpus.fit(sent_token, window=20)
# model
glove = Glove(no_components=32, learning_rate=0.01) # 0.05
%time glove.fit(corpus.matrix, epochs=10, no_threads=4, verbose=False) # Wall time: 8min 32s
glove.add_dictionary(corpus.dictionary)
# save
glove.save(DATA_DIR + '/glove_w20_epoch10.model')
# load glove
glove_model = Glove.load(DATA_DIR + '/glove_w20_epoch10.model')
glove_model.word_vectors
# Build the word dict
word_dict = {}
for word in glove_model.dictionary.keys():
word_dict[word] = glove_model.word_vectors[glove_model.dictionary[word]]
print('[Success !] Length of word dict... : ', len(word_dict))
# save word_dict
import pickle
with open(DATA_DIR + '/glove_word_dict_64.pickle', 'wb') as f:
pickle.dump(word_dict, f)
print('[Success !] Save word dict!...')
word_dict['임산부']
# Words in the test data (queries) that are missing from the word dict are embedded as zero vectors (32-dim, matching the GloVe model)
# word_dict : the embedding dictionary built from the training data
total_word_dict = {}
cnt = 0
for word in test_tokens:  # test_tokens: the tokenized query, assumed defined earlier
if word in word_dict.keys():
total_word_dict[word] = word_dict[word]
    else:
        total_word_dict[word] = np.zeros((32))  # zero vector matching the 32-dim GloVe embedding
        cnt += 1  # count words never seen in training
print('no train word -> 0....', cnt)
print('token -> word embedding....!', len(total_word_dict))
# Sentence-level embeddings from the word-embedding vectors
def sent2vec_glove(tokens, embedding_dim=32):
    '''Embed a list of token lists, one per sentence.'''
size = len(tokens)
matrix = np.zeros((size, embedding_dim))
word_table = word_dict # glove word_dict
for i, token in enumerate(tokens):
vector = np.array([
word_table[t] for t in token
if t in word_table
])
if vector.size != 0:
final_vector = np.mean(vector, axis=0)
matrix[i] = final_vector
return matrix
# Sentence embedding
sentence_tokens = sent_token
sentence_glove = sent2vec_glove(sentence_tokens)
sentence_glove.shape
# clustering
k = 16
kmeans = KMeans(n_clusters=k, random_state=2021)
y_pred = kmeans.fit_predict(sentence_glove)
# tsne
tsne = TSNE(verbose=1, perplexity=100, random_state=2021) # perplexity: roughly, how many neighbors each point considers
X_embedded = tsne.fit_transform(sentence_glove)
print('Embedding shape:', X_embedded.shape)
# Visualization
sns.set(rc={'figure.figsize':(20,20)})
# colors
palette = sns.hls_palette(16, l=.4, s=.9)
# plot
sns.scatterplot(X_embedded[:,0], X_embedded[:,1], hue=y_pred,
                legend='full', palette=palette)  # colored by the KMeans predictions
plt.title('t-SNE with KMeans Labels and Glove Embedding')
#plt.savefig(DATA_DIR + "/t-sne_question_glove_embedding.png")
plt.show()
### Map the sentence embeddings back to the documents and find the ones most similar to a query sentence.
# Function to compute cosine similarity
from numpy import dot
from numpy.linalg import norm
import numpy as np
def cos_sim(A, B):
return dot(A, B)/(norm(A)*norm(B))
sentence_glove.shape # 32-dim vectors for each of the 712 sentences
sentence_glove
# Find the documents most similar to "고위험 임산부 의료비 지원" (medical-expense support for high-risk pregnant women).
# Query "tell me about welfare policies for pregnant women" -> token "임산부"
# Candidate query tokens: "임산부", "고위험", "지원"
sample_query = sent2vec_glove([['임산부']])
sample_query[0,:]
sample_query
cos_sim_query = pd.DataFrame()
for i in range(0,712):
cos_sim_query.loc[i, 'cos_sim'] = cos_sim(sentence_glove[i,:], sample_query[0,:])
cos_sim_query.sort_values(by=['cos_sim'], axis=0, ascending=False)
data.iloc[0, :]
data.iloc[31, :]
data.iloc[246, :]
data.iloc[241, :]
```
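The `cos_sim` loop over a DataFrame above works, but the ranking can also be computed in one matrix product. A sketch with a tiny matrix standing in for `sentence_glove`:

```python
import numpy as np

def rank_by_cosine(matrix, query):
    """Return row indices sorted by descending cosine similarity to query."""
    norms = np.linalg.norm(matrix, axis=1) * np.linalg.norm(query)
    sims = matrix @ query / norms
    return np.argsort(-sims), sims

# Stand-in for sentence_glove (3 sentences, 4 dims) and a query vector
emb = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.9, 0.1, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0]])
query = np.array([1.0, 0.0, 0.0, 0.0])
order, sims = rank_by_cosine(emb, query)
print(order)  # row 0 is most similar, then row 1, then row 2
```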
# A TRIGA geometry
This notebook can be used as a template for modeling TRIGA reactors.
```
%matplotlib inline
import numpy as np
import openmc
# Materials definitions
# Borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
water.add_nuclide('B10', 8.0042e-6)
# 20% enriched uranium zirconium hydride fuel
uzrh = openmc.Material(name='UZrH')
uzrh.set_density('g/cm3', 6.128)
uzrh.add_nuclide('U235', .02376, 'wo')
uzrh.add_nuclide('U238', .09619, 'wo')
uzrh.add_element('H', .03, 'wo')
uzrh.add_element('Zr', .85, 'wo')
molybdenum = openmc.Material(name='Molybdenum')
molybdenum.add_element('Mo', 1.0)
molybdenum.set_density('g/cm3', 10.22)
graphite = openmc.Material(name='Graphite')
graphite.set_density('g/cm3', 1.70)
graphite.add_element('C', 1.0)
graphite.add_s_alpha_beta('c_Graphite')
# Stainless steel
ss304 = openmc.Material(name='Stainless Steel 304')
ss304.set_density('g/cm3', 8.0)
ss304.add_element('C',.002,'wo')
ss304.add_element('Si',.004,'wo')
ss304.add_element('P',.0003,'wo')
ss304.add_element('S',.0002,'wo')
ss304.add_element('V',.003,'wo')
ss304.add_element('Cr',.115,'wo')
ss304.add_element('Mn',.006,'wo')
ss304.add_element('Fe',.8495,'wo')
ss304.add_element('Ni',.005,'wo')
ss304.add_element('Mo',.01,'wo')
ss304.add_element('W',.005,'wo')
# Boron carbide
b4c = openmc.Material(name='Boron Carbide')
b4c.set_density('g/cm3', 2.52)
b4c.add_element('B', 4)
b4c.add_element('C', 1)
zirconium = openmc.Material(name='Zirconium')
zirconium.add_element('Zr', 1.0)
zirconium.set_density('g/cm3', 6.506)
void = openmc.Material(name='Void')
void.set_density('g/cm3', 0.001205)
void.add_element('N', 0.755268, 'wo')
void.add_element('C', 0.000124, 'wo')
void.add_element('O', 0.231781, 'wo')
void.add_element('Ar', 0.012827, 'wo')
aluminum = openmc.Material(name='Aluminum')
aluminum.add_element('Al', 1.0)
aluminum.set_density('g/cm3', 2.6)
# Instantiate a Materials collection and export to xml
materials_file = openmc.Materials([aluminum, zirconium, b4c, ss304, graphite, molybdenum, uzrh, water, void])
materials_file.export_to_xml()
# Geometry definitions for the fuel rod
rod_outer_radius = openmc.ZCylinder(r=0.635)
uzrh_outer_radius = openmc.ZCylinder(r=3.6449)
molybdenum_outer_radius = openmc.ZCylinder(r=3.6449)
graphite_outer_radius = openmc.ZCylinder(r=3.6449)
ss304_outer_radius = openmc.ZCylinder(r=3.6449)
clad_outer_radius = openmc.ZCylinder(r=3.75412)
empty_space_min = openmc.ZPlane(z0=-114.3)
empty_space_max = openmc.ZPlane(z0=-55.079265)
rod_min = openmc.ZPlane(z0=-55.079265)
rod_max = openmc.ZPlane(z0=-16.979265)
uzrh_min = openmc.ZPlane(z0=-55.079265)
uzrh_max = openmc.ZPlane(z0=-16.979265)
molybdenum_min = openmc.ZPlane(z0=-55.15864)
molybdenum_max = openmc.ZPlane(z0=-55.079265)
graphite_upper_min = openmc.ZPlane(z0=-16.979265)
graphite_upper_max = openmc.ZPlane(z0=-10.375265)
graphite_lower_min = openmc.ZPlane(z0=-64.621664)
graphite_lower_max = openmc.ZPlane(z0=-55.15864)
ss304_upper_min = openmc.ZPlane(z0=-10.375265)
ss304_upper_max = openmc.ZPlane(z0=+0)
ss304_lower_min = openmc.ZPlane(z0=-72.0598)
ss304_lower_max = openmc.ZPlane(z0=-64.621664)
clad_min = openmc.ZPlane(z0=-72.0598)
clad_max = openmc.ZPlane(z0=0)
# Create a Universe to encapsulate the fuel rod
fuel_universe = openmc.Universe(name='UZrH Fuel Universe')
# Create rod cell
rod_cell = openmc.Cell(name='Zr Rod')
rod_cell.fill = zirconium
rod_cell.region = -rod_outer_radius & +rod_min & -rod_max
fuel_universe.add_cell(rod_cell)
# Create uzrh cell
uzrh_cell = openmc.Cell(name='UZrH')
uzrh_cell.fill = uzrh
uzrh_cell.region = +rod_outer_radius & -uzrh_outer_radius & +uzrh_min & -uzrh_max
fuel_universe.add_cell(uzrh_cell)
# Create molybdenum disk
molybdenum_cell = openmc.Cell(name='Molybdenum')
molybdenum_cell.fill = molybdenum
molybdenum_cell.region = -molybdenum_outer_radius & +molybdenum_min & -molybdenum_max
fuel_universe.add_cell(molybdenum_cell)
# Create upper graphite cell
graphite_upper_cell = openmc.Cell(name='Upper Graphite')
graphite_upper_cell.fill = graphite
graphite_upper_cell.region = -graphite_outer_radius & +graphite_upper_min & -graphite_upper_max
fuel_universe.add_cell(graphite_upper_cell)
# Create lower graphite cell
graphite_lower_cell = openmc.Cell(name='Lower Graphite')
graphite_lower_cell.fill = graphite
graphite_lower_cell.region = -graphite_outer_radius & +graphite_lower_min & -graphite_lower_max
fuel_universe.add_cell(graphite_lower_cell)
# Create upper ss304 cell
ss304_upper_cell = openmc.Cell(name='Upper Stainless Steel 304')
ss304_upper_cell.fill = ss304
ss304_upper_cell.region = -ss304_outer_radius & +ss304_upper_min & -ss304_upper_max
fuel_universe.add_cell(ss304_upper_cell)
# Create lower ss304 cell
ss304_lower_cell = openmc.Cell(name='Lower Stainless Steel 304')
ss304_lower_cell.fill = ss304
ss304_lower_cell.region = -ss304_outer_radius & +ss304_lower_min & -ss304_lower_max
fuel_universe.add_cell(ss304_lower_cell)
# Create clad cell
clad_cell = openmc.Cell(name='Stainless Steel 304 Cladding')
clad_cell.fill = ss304
clad_cell.region = -clad_outer_radius & +ss304_outer_radius & +clad_min & -clad_max
fuel_universe.add_cell(clad_cell)
# Create empty space cell
empty_space_cell = openmc.Cell(name='Empty space before fuel rod')
empty_space_cell.fill = water
empty_space_cell.region = -clad_outer_radius & +empty_space_min & -empty_space_max
fuel_universe.add_cell(empty_space_cell)
# Geometry definitions for the transient rod
void_outer_radius = openmc.ZCylinder(r=3.03276)
b4c_outer_radius = openmc.ZCylinder(r=3.03276)
clad_outer_radius = openmc.ZCylinder(r=3.38455)
aluminum_outer_radius = openmc.ZCylinder(r=3.03276)
aluminum_1_min = openmc.ZPlane(z0=-114.3)
aluminum_1_max = openmc.ZPlane(z0=-113.03)
void_1_min = openmc.ZPlane(z0=-113.03)
void_1_max = openmc.ZPlane(z0=-57.785)
aluminum_2_min = openmc.ZPlane(z0=-57.785)
aluminum_2_max = openmc.ZPlane(z0=-56.515)
b4c_min = openmc.ZPlane(z0=-56.515)
b4c_max = openmc.ZPlane(z0=-18.415)
void_2_min = openmc.ZPlane(z0=-18.415)
void_2_max = openmc.ZPlane(z0=-18.0975)
aluminum_3_min = openmc.ZPlane(z0=-18.0975)
aluminum_3_max = openmc.ZPlane(z0=-16.8275)
void_3_min = openmc.ZPlane(z0=-16.8275)
void_3_max = openmc.ZPlane(z0=-7.3025)
aluminum_4_min = openmc.ZPlane(z0=-7.3025)
aluminum_4_max = openmc.ZPlane(z0=0)
clad_min = openmc.ZPlane(z0=-114.3)
clad_max = openmc.ZPlane(z0=0)
# Create a Universe to encapsulate the transient rod
transient_universe = openmc.Universe(name='Transient Universe')
# Create void 1 cell
void_1_cell = openmc.Cell(name= 'Void 1')
void_1_cell.fill = void
void_1_cell.region = -void_outer_radius & +void_1_min & -void_1_max
transient_universe.add_cell(void_1_cell)
# Create void 2 cell
void_2_cell = openmc.Cell(name= 'Void 2')
void_2_cell.fill = void
void_2_cell.region = -void_outer_radius & +void_2_min & -void_2_max
transient_universe.add_cell(void_2_cell)
# Create void 3 cell
void_3_cell = openmc.Cell(name= 'Void 3')
void_3_cell.fill = void
void_3_cell.region = -void_outer_radius & +void_3_min & -void_3_max
transient_universe.add_cell(void_3_cell)
# Create b4c cell
b4c_cell = openmc.Cell(name='Boron Carbide')
b4c_cell.fill = b4c
b4c_cell.region = -b4c_outer_radius & -b4c_max & +b4c_min
transient_universe.add_cell(b4c_cell)
# Create aluminum 1 cell
aluminum_1_cell = openmc.Cell(name='Aluminum')
aluminum_1_cell.fill = aluminum
aluminum_1_cell.region = -aluminum_outer_radius & +aluminum_1_min & -aluminum_1_max
transient_universe.add_cell(aluminum_1_cell)
# Create aluminum 2 cell
aluminum_2_cell = openmc.Cell(name='Aluminum')
aluminum_2_cell.fill = aluminum
aluminum_2_cell.region = -aluminum_outer_radius & +aluminum_2_min & -aluminum_2_max
transient_universe.add_cell(aluminum_2_cell)
# Create aluminum 3 cell
aluminum_3_cell = openmc.Cell(name='Aluminum')
aluminum_3_cell.fill = aluminum
aluminum_3_cell.region = -aluminum_outer_radius & +aluminum_3_min & -aluminum_3_max
transient_universe.add_cell(aluminum_3_cell)
# Create aluminum 4 cell
aluminum_4_cell = openmc.Cell(name='Aluminum')
aluminum_4_cell.fill = aluminum
aluminum_4_cell.region = -aluminum_outer_radius & +aluminum_4_min & -aluminum_4_max
transient_universe.add_cell(aluminum_4_cell)
# Create a clad cell
clad_cell = openmc.Cell(name='Aluminum Cladding')
clad_cell.fill = aluminum
clad_cell.region = -clad_outer_radius & +b4c_outer_radius & +clad_min & -clad_max
transient_universe.add_cell(clad_cell)
# Geometry definitions for the control rod
rod_outer_radius = openmc.ZCylinder(r=0.635)
uzrh_outer_radius = openmc.ZCylinder(r=3.33375)
void_outer_radius = openmc.ZCylinder(r=3.33375)
b4c_outer_radius = openmc.ZCylinder(r=3.33375)
ss304_outer_radius = openmc.ZCylinder(r=3.33375)
clad_outer_radius = openmc.ZCylinder(r=3.38455)
ss304_5_min = openmc.ZPlane(z0=-114.3)
ss304_5_max = openmc.ZPlane(z0=-113.03)
void_4_min = openmc.ZPlane(z0=-113.03)
void_4_max = openmc.ZPlane(z0=-99.06)
ss304_4_min = openmc.ZPlane(z0=-99.06)
ss304_4_max = openmc.ZPlane(z0=-96.52)
rod_min = openmc.ZPlane(z0=-96.52)
rod_max = openmc.ZPlane(z0=-58.42)
uzrh_min = openmc.ZPlane(z0=-96.52)
uzrh_max = openmc.ZPlane(z0=-58.42)
void_1_min = openmc.ZPlane(z0=-58.42)
void_1_max = openmc.ZPlane(z0=-57.785)
ss304_1_min = openmc.ZPlane(z0=-57.785)
ss304_1_max = openmc.ZPlane(z0=-56.515)
b4c_1_min = openmc.ZPlane(z0=-56.515)
b4c_1_max = openmc.ZPlane(z0=-18.415)
void_2_min = openmc.ZPlane(z0=-18.415)
void_2_max = openmc.ZPlane(z0=-18.0975)
ss304_2_min = openmc.ZPlane(z0=-18.0975)
ss304_2_max = openmc.ZPlane(z0=-16.8275)
void_3_min = openmc.ZPlane(z0=-16.8275)
void_3_max = openmc.ZPlane(z0=-7.3025)
ss304_3_min = openmc.ZPlane(z0=-7.3025)
ss304_3_max = openmc.ZPlane(z0=0.0)
clad_min = openmc.ZPlane(z0=-114.3)
clad_max = openmc.ZPlane(z0=0.0)
# Create a Universe to encapsulate the control rod
control_universe = openmc.Universe(name='Control Universe')
# Create rod cell
rod_cell = openmc.Cell(name='Zr Rod')
rod_cell.fill = zirconium
rod_cell.region = -rod_outer_radius & +rod_min & -rod_max
control_universe.add_cell(rod_cell)
# Create uzrh cell
uzrh_cell = openmc.Cell(name='UZrH')
uzrh_cell.fill = uzrh
uzrh_cell.region = +rod_outer_radius & -uzrh_outer_radius & +uzrh_min & -uzrh_max
control_universe.add_cell(uzrh_cell)
# Create void 1 cell
void_1_cell = openmc.Cell(name= 'Void 1')
void_1_cell.fill = void
void_1_cell.region = -void_outer_radius & +void_1_min & -void_1_max
control_universe.add_cell(void_1_cell)
# Create void 2 cell
void_2_cell = openmc.Cell(name= 'Void 2')
void_2_cell.fill = void
void_2_cell.region = -void_outer_radius & +void_2_min & -void_2_max
control_universe.add_cell(void_2_cell)
# Create void 3 cell
void_3_cell = openmc.Cell(name= 'Void 3')
void_3_cell.fill = void
void_3_cell.region = -void_outer_radius & +void_3_min & -void_3_max
control_universe.add_cell(void_3_cell)
# Create void 4 cell
void_4_cell = openmc.Cell(name= 'Void 4')
void_4_cell.fill = void
void_4_cell.region = -void_outer_radius & +void_4_min & -void_4_max
control_universe.add_cell(void_4_cell)
# Create ss304 1 cell
ss304_1_cell = openmc.Cell(name='Stainless Steel 304 Cell 1')
ss304_1_cell.fill = ss304
ss304_1_cell.region = -ss304_outer_radius & +ss304_1_min & -ss304_1_max
control_universe.add_cell(ss304_1_cell)
# Create ss304 2 cell
ss304_2_cell = openmc.Cell(name='Stainless Steel 304 Cell 2')
ss304_2_cell.fill = ss304
ss304_2_cell.region = -ss304_outer_radius & +ss304_2_min & -ss304_2_max
control_universe.add_cell(ss304_2_cell)
# Create ss304 3 cell
ss304_3_cell = openmc.Cell(name='Stainless Steel 304 Cell 3')
ss304_3_cell.fill = ss304
ss304_3_cell.region = -ss304_outer_radius & +ss304_3_min & -ss304_3_max
control_universe.add_cell(ss304_3_cell)
# Create ss304 4 cell
ss304_4_cell = openmc.Cell(name='Stainless Steel 304 Cell 4')
ss304_4_cell.fill = ss304
ss304_4_cell.region = -ss304_outer_radius & +ss304_4_min & -ss304_4_max
control_universe.add_cell(ss304_4_cell)
# Create ss304 5 cell
ss304_5_cell = openmc.Cell(name='Stainless Steel 304 Cell 5')
ss304_5_cell.fill = ss304
ss304_5_cell.region = -ss304_outer_radius & +ss304_5_min & -ss304_5_max
control_universe.add_cell(ss304_5_cell)
# Create b4c 1 cell
b4c_1_cell = openmc.Cell(name='B4C cell')
b4c_1_cell.fill = b4c
b4c_1_cell.region = -b4c_outer_radius & +b4c_1_min & -b4c_1_max
control_universe.add_cell(b4c_1_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='Stainless Steel 304 Cladding')
clad_cell.fill = ss304
clad_cell.region = -clad_outer_radius & +ss304_outer_radius & +clad_min & -clad_max #Miriam: cladding is only the exterior coat.
control_universe.add_cell(clad_cell)
# Create water universe to surround the lattice
all_water_cell = openmc.Cell(fill=water)
outer_universe = openmc.Universe(cells=(all_water_cell,))
# Create surfaces that will divide rings in the circular lattice
ring_radii = np.array([0.0, 8.0, 16.0, 24.0, 32.0, 40.0])
radial_surf = [openmc.ZCylinder(r=r) for r in
(ring_radii[:-1] + ring_radii[1:])/2]
water_cells = []
for i in range(ring_radii.size):
# Create annular region
if i == 0:
water_region = -radial_surf[i]
elif i == ring_radii.size - 1:
water_region = +radial_surf[i-1]
else:
water_region = +radial_surf[i-1] & -radial_surf[i]
water_cells.append(openmc.Cell(fill=water, region=water_region))
# Plot the rings to visualize the circular lattice, without rods
plot_args = {'width': (2*24.1, 2*24.1)}
bundle_universe = openmc.Universe(cells=water_cells)
bundle_universe.plot(**plot_args)
# Arrange the pins in the circular lattice
num_pins = [1, 6, 12, 18, 24, 30]
angles = [0, 0, 0, 0, 0, 0]
controlRods = {'numPins' :[num_pins[1], num_pins[3], num_pins[5]],
'howLeftFrom3oclock':[4 , 2 , 0]}
transientRods = {'numPins' :[num_pins[3], num_pins[2]],
'howLeftFrom3oclock':[1 , 0]}
waterRods = {'numPins' :[num_pins[5], num_pins[2]],
'howLeftFrom3oclock':[1 , 8]}
def ControlRod(controlRods,n,j):
for irod in range(len(controlRods['numPins'])):
if n == controlRods['numPins'][irod] and \
j-1 == controlRods['howLeftFrom3oclock'][irod]:
return True
return False
def TransientRod(transientRods,n,j):
for irod in range(len(transientRods['numPins'])):
if n == transientRods['numPins'][irod] and \
j-1 == transientRods['howLeftFrom3oclock'][irod]:
return True
return False
def WaterRod(waterRods,n,j):
for irod in range(len(waterRods['numPins'])):
if n == waterRods['numPins'][irod] and \
j-1 == waterRods['howLeftFrom3oclock'][irod]:
return True
return False
for i, (r, n, a) in enumerate(zip(ring_radii, num_pins, angles)):
for j in range(n):
# Determine location of center of pin
theta = (a + j/n*360.) * np.pi/180.
x = r*np.cos(theta)
y = r*np.sin(theta)
pin_boundary = openmc.ZCylinder(x0=x, y0=y, r=clad_outer_radius.r)
water_cells[i].region &= +pin_boundary
# Create each fuel pin -- note that we explicitly assign an ID so
# that we can identify the pin later when looking at tallies
if ControlRod(controlRods,n,j):
print('Adding in a control rod...')
pin = openmc.Cell(fill=control_universe, region=-pin_boundary)
elif TransientRod(transientRods,n,j):
print('Adding in a transient rod...')
pin = openmc.Cell(fill=transient_universe, region=-pin_boundary)
elif WaterRod(waterRods,n,j):
print('Adding in a water rod...')
pin = openmc.Cell(fill=outer_universe, region=-pin_boundary)
else:
pin = openmc.Cell(fill=fuel_universe, region=-pin_boundary)
pin.translation = (x, y, 0)
pin.id = (i + 1)*100 + j
bundle_universe.add_cell(pin)
# Plot the rings to visualize the filled circular lattice
bundle_universe.plot(width=(100, 100), origin=[0,0,-40],
basis='xy', color_by='material',
colors={water:'blue',uzrh:'orange',
zirconium:'green',graphite:'gray',
b4c:'yellow'})
# Plotting fuel rod
fuel_universe.plot(width=(20, 150), origin=[0,0,-40], basis='yz', color_by='material', colors={ss304:'fuchsia'})
# Plotting transient rod
transient_universe.plot(width=(20, 150), origin=[0,0,-40], basis='yz', color_by='material', colors={water:'fuchsia'})
# Plotting control rod
control_universe.plot(width=(20, 150), origin=[0,0,-40], basis='yz', color_by='material', colors={ss304:'fuchsia'})
# Geometry definitions for the reactor
reactor_wall = openmc.ZCylinder(r=50.0, boundary_type='vacuum')
reactor_top = openmc.ZPlane(z0=0.0, boundary_type='vacuum')
reactor_bottom = openmc.ZPlane(z0=-114.3, boundary_type='vacuum')
reactor = openmc.Cell()
reactor.region = -reactor_wall & -reactor_top & +reactor_bottom
reactor.fill = bundle_universe
reactor_universe = openmc.Universe(cells=[reactor])
reactor_universe.plot(width=(100, 100), origin=[0,0,-40],
basis='yz', color_by='material',
colors={water:'blue',uzrh:'orange',
zirconium:'green',graphite:'gray',
b4c:'yellow'})
geometry = openmc.Geometry(reactor_universe)
geometry.export_to_xml()
# OpenMC simulation parameters
batches = 100
inactive = 10
particles = 5000
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
bounds = [-28.527375, -28.527375, -28.527375, 28.527375, 28.527375, 28.527375]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.Source(space=uniform_dist)
settings_file.export_to_xml()
openmc.run()
```
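The circular-lattice loop above derives each pin center from its ring radius `r`, pin count `n`, and angular offset `a`. The placement math in isolation, using the first fuel ring from `ring_radii`/`num_pins` (r = 8 cm, 6 pins):

```python
import numpy as np

def ring_pin_centers(r, n, a=0.0):
    """Centers (x, y) of n pins equally spaced on a ring of radius r,
    starting at angle a degrees (matches theta = (a + j/n*360) above)."""
    thetas = (a + np.arange(n) / n * 360.0) * np.pi / 180.0
    return r * np.cos(thetas), r * np.sin(thetas)

x, y = ring_pin_centers(8.0, 6)
print(np.round(x, 3))                     # first pin at (8, 0), then every 60 degrees
print(np.allclose(np.hypot(x, y), 8.0))   # all pins lie exactly on the ring: True
```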
```
import config
'''
Let's print all the variables for the pinouts on the board.
'''
def get_variable_module_name(module_name):
module = globals().get(module_name, None)
variable = {}
if module:
        variable = {key: value for key, value in module.__dict__.items() if not key.startswith('_')}
return variable
variable = get_variable_module_name('config')
for key, value in variable.items():
    print("{:<20}{:<10}".format(key, value))
from phySyncFirmata import ArduinoNano
import phySyncFirmata
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
board = ArduinoNano('/dev/ttyUSB0')
# Creates a scrolling data display
class RealtimePlotWindow:
def __init__(self):
# create a plot window
self.fig, self.ax = plt.subplots()
# that's our plotbuffer
self.plotbuffer = np.zeros(500)
# create an empty line
self.line, = self.ax.plot(self.plotbuffer)
# axis
self.ax.set_ylim(0, 1)
        # That's our ringbuffer which accumulates the samples
# It's emptied every time when the plot window below
# does a repaint
self.ringbuffer = []
# start the animation
self.ani = animation.FuncAnimation(self.fig, self.update, interval=100)
# updates the plot
def update(self, data):
# add new data to the buffer
self.plotbuffer = np.append(self.plotbuffer, self.ringbuffer)
# only keep the 500 newest ones and discard the old ones
self.plotbuffer = self.plotbuffer[-500:]
self.ringbuffer = []
# set the new 500 points of channel 9
self.line.set_ydata(self.plotbuffer)
return self.line,
# appends data to the ringbuffer
def addData(self, v):
self.ringbuffer.append(v)
# Create an instance of an animated scrolling window
# To plot more channels just create more instances and add callback handlers below
realtimePlotWindow = RealtimePlotWindow()
# sampling rate: 100Hz
samplingRate = 100
# called for every new sample which has arrived from the Arduino
def callBack(data):
# send the sample to the plotwindow
realtimePlotWindow.addData(data)
# Set the sampling rate in the Arduino
board.samplingOn(1000 / samplingRate)
# Register the callback which adds the data to the animated plot
board.analog[2].register_callback(callBack)
# Enable the callback
board.analog[2].enable_reporting()
# show the plot and start the animation
plt.show()
# needs to be called to close the serial port
board.exit()
print("finished")
```
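The scrolling logic in `RealtimePlotWindow.update` (append the accumulated ring buffer, then keep only the newest 500 samples) can be exercised without the Arduino attached. A minimal sketch:

```python
import numpy as np

WINDOW = 500

def scroll(plotbuffer, ringbuffer):
    """Append accumulated samples and keep only the newest WINDOW points,
    mirroring RealtimePlotWindow.update above."""
    plotbuffer = np.append(plotbuffer, ringbuffer)
    return plotbuffer[-WINDOW:]

buf = np.zeros(WINDOW)
buf = scroll(buf, [0.25, 0.5, 0.75])   # three new samples arrive
print(len(buf))    # still 500: the three oldest zeros were dropped
print(buf[-3:])    # the newest samples sit at the end
```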
This notebook is an analysis of predictive accuracy in relative free energy calculations from the Schrödinger JACS dataset:
> Wang, L., Wu, Y., Deng, Y., Kim, B., Pierce, L., Krilov, G., ... & Romero, D. L. (2015). Accurate and reliable prediction of relative ligand binding potency in prospective drug discovery by way of a modern free-energy calculation protocol and force field. Journal of the American Chemical Society, 137(7), 2695-2703.
http://doi.org/10.1021/ja512751q
```
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pylab as plt
# Read the Excel sheet
df = pd.read_excel('ja512751q_si_003.xlsx', sheet_name='dG')
# Delete rows with summary statistics
rows_to_drop = list()
for i, row in df.iterrows():
if str(df.loc[i,'Ligand']) == 'nan':
rows_to_drop.append(i)
print('dropping rows: {}'.format(rows_to_drop))
df = df.drop(index=rows_to_drop);
# Populate 'system' field for each entry
system = df.loc[0,'Systems']
for i, row in df.iterrows():
if str(df.loc[i,'Systems']) == 'nan':
df.loc[i, "Systems"] = system
else:
system = df.loc[i, "Systems"]
def bootstrap_sign_prediction(DeltaG_predicted, DeltaG_experimental, threshold, ci=0.95, nbootstrap = 1000):
"""Compute mean and confidence intervals for predicting correct sign.
Parameters
----------
DeltaG_predicted : numpy array with dimensions (Nligands,)
Predicted free energies (kcal/mol)
DeltaG_experimental : numpy array with dimensions (Nligands,)
Experimental free energies (kcal/mol)
threshold : float
Threshold in free energy (kcal/mol)
ci : float, optional, default=0.95
Interval for CI
    nbootstrap : int, optional, default=1000
Number of bootstrap samples
Returns
-------
mean : float
The mean statistic for the whole dataset
stderr : float
The standard error
low, high : float
Low and high ends of CI
"""
def compute_fraction(DeltaG_predicted, DeltaG_experimental, threshold):
# Compute all differences
N = len(DeltaG_predicted)
DDG_predicted = np.zeros([N*(N-1)], np.float64)
DDG_experimental = np.zeros([N*(N-1)], np.float64)
index = 0
for i in range(N):
for j in range(N):
if i != j:
DDG_predicted[index] = (DeltaG_predicted[j] - DeltaG_predicted[i])
DDG_experimental[index] = (DeltaG_experimental[j] - DeltaG_experimental[i])
index += 1
indices = np.where(np.abs(DDG_predicted) > threshold)[0]
return np.sum(np.sign(DDG_predicted[indices]) == np.sign(DDG_experimental[indices])) / float(len(indices))
N_ligands = len(DeltaG_predicted)
fraction_n = np.zeros([nbootstrap], np.float64)
for replicate in range(nbootstrap):
bootstrapped_sample = np.random.choice(np.arange(N_ligands), size=[N_ligands])
fraction_n[replicate] = compute_fraction(DeltaG_predicted[bootstrapped_sample], DeltaG_experimental[bootstrapped_sample], threshold)
fraction_n = np.sort(fraction_n)
fraction = compute_fraction(DeltaG_predicted, DeltaG_experimental, threshold)
dfraction = np.std(fraction_n)
low_frac = (1.0-ci)/2.0
high_frac = 1.0 - low_frac
fraction_low = fraction_n[int(np.floor(nbootstrap*low_frac))]
fraction_high = fraction_n[int(np.ceil(nbootstrap*high_frac))]
return fraction, dfraction, fraction_low, fraction_high
# Collect data by system
def plot_data(system, rows):
DeltaG_experimental = rows['Exp. dG'].values
DeltaG_predicted = rows['Pred. dG'].values
plt.xlabel('threshold (kcal/mol)');
plt.ylabel('P(correct sign)');
Nligands = len(DeltaG_experimental)
print(system, Nligands)
[min_threshold, max_threshold] = [0, 2]
thresholds = np.linspace(min_threshold, max_threshold, 20)
fractions = thresholds * 0.0
dfractions = thresholds * 0.0
fractions_low = thresholds * 0.0
fractions_high = thresholds * 0.0
for index, threshold in enumerate(thresholds):
fractions[index], dfractions[index], fractions_low[index], fractions_high[index] = bootstrap_sign_prediction(DeltaG_predicted, DeltaG_experimental, threshold)
plt.fill_between(thresholds, fractions_low, fractions_high, alpha=0.5)
plt.plot(thresholds, fractions, 'ko');
#plt.plot(thresholds, fractions_low, 'k-')
#plt.plot(thresholds, fractions_high, 'k-')
plt.title('{} (N = {})'.format(system, Nligands));
plt.xlim(min_threshold, max_threshold);
plt.ylim(0, 1);
systems = df['Systems'].unique()
nsystems = len(systems)
nx = int(np.ceil(np.sqrt(nsystems)))
ny = int(np.ceil(np.sqrt(nsystems)))
fig = plt.figure(figsize=[12,12])
for plot_index, system in enumerate(systems):
plt.subplot(ny, nx, plot_index+1)
rows = df.query("Systems == '{}'".format(system))
plot_data(system, rows)
plt.subplot(ny, nx, nsystems+1)
system = 'all'
plot_data(system, df)
fig.tight_layout()
fig.savefig('jacs-fraction-analysis.pdf');
```
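The bootstrap pattern in `bootstrap_sign_prediction` (resample with replacement, recompute the statistic, take percentiles of the sorted replicates) in a minimal, self-contained form on synthetic data; the fixed seed is only for reproducibility:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(values, stat=np.mean, nboot=1000, ci=0.95):
    """Percentile-bootstrap CI for stat(values), resampling with replacement."""
    n = len(values)
    samples = np.sort([stat(values[rng.integers(0, n, size=n)])
                       for _ in range(nboot)])
    lo = samples[int(np.floor(nboot * (1.0 - ci) / 2.0))]
    hi = samples[int(np.ceil(nboot * (1.0 - (1.0 - ci) / 2.0))) - 1]
    return stat(values), lo, hi

# Synthetic 0/1 outcomes, e.g. "sign predicted correctly" indicators
values = np.array([0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0])
mean, lo, hi = bootstrap_ci(values)
print(mean, lo, hi)  # point estimate 0.8 with its 95% interval
```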
# Inference and Validation
```
import torch
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))
])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/',
download=True,
train=True,
transform=transform)
trainloader = torch.utils.data.DataLoader(trainset,
batch_size=64,
shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/',
download=True,
train=False,
transform=transform)
testloader = torch.utils.data.DataLoader(testset,
batch_size=64,
shuffle=True)
# Create a model
from torch import nn, optim
import torch.nn.functional as F
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.log_softmax(self.fc4(x), dim=1)
return x
```
The goal of validation is to measure the model's performance on data that isn't part of the training set. There are many metrics to choose from, such as accuracy, precision and recall, top-5 error rate, and so on.
```
model = Classifier()
images, labels = next(iter(testloader))
# Get the class probabilities
ps = torch.exp(model(images))
# we should get 10 class probabilities for 64 examples
print(ps.shape)
```
With the probabilities, we can get the most likely class using the `ps.topk` method. This returns the $k$ highest values. Since we just want the most likely class, we can use `ps.topk(1)`. This returns a tuple of the top-$k$ values and the top-$k$ indices. If the highest value is the fifth element, we'll get back 4 as the index.
```
top_p, top_class = ps.topk(1, dim=1)
# look at the most likely classes for the first
# 10 examples
print(top_class[:10,:])
```
Now we can check if the predicted classes match the labels. This is simple to do by equating `top_class` and `labels`, but we have to be careful of the shapes. Here `top_class` is a 2D tensor with shape `(64, 1)` while `labels` is 1D with shape `(64)`. To get the equality to work out the way we want, `top_class` and `labels` must have the same shape.
If we do
```python
equals = top_class == labels
```
`equals` will have shape `(64, 64)`, try it yourself. What it's doing is comparing the one element in each row of `top_class` with each element in `labels` which returns 64 True/False boolean values for each row.
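The same shape pitfall is easy to inspect with NumPy, whose broadcasting follows the same rules; a small sketch:

```python
import numpy as np

top_class = np.array([[1], [0], [2], [2]])   # shape (4, 1), like topk output
labels = np.array([1, 0, 2, 1])              # shape (4,)

# Broadcasting compares every row element against every label: shape (4, 4)
print((top_class == labels).shape)                           # (4, 4)

# Reshaping the labels gives the element-wise comparison we actually want
print((top_class == labels.reshape(top_class.shape)).shape)  # (4, 1)
```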
```
#equals = top_class == labels
equals = top_class == labels.view(*top_class.shape)
print(equals.shape)
print(equals)
```
Now we need to calculate the percentage of correct predictions. `equals` has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to `torch.mean`. If only it were that simple. If you try `torch.mean(equals)`, you'll get an error
```
RuntimeError: mean is not implemented for type torch.ByteTensor
```
This happens because `equals` has type `torch.ByteTensor`, but `torch.mean` isn't implemented for tensors with that type. So we'll need to convert `equals` to a float tensor. Note that when we take `torch.mean` it returns a scalar tensor; to get the actual value as a float we'll need to call `accuracy.item()`.
```
equals.shape
equals
```
Now we need to calculate the percentage of correct predictions.
But first we need to convert them to floats.
```
accuracy = torch.mean(equals.type(torch.FloatTensor))
print(f'Accuracy: {accuracy.item()*100}%')
accuracy
```
**Exercise**
Implement the validation loop below and print out the total accuracy after the loop.
```
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(),
lr=0.003)
epochs = 30
steps = 0
train_losses, test_losses = [], []
for epoch in range(epochs):
running_loss = 0
for images, labels in trainloader:
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
test_loss = 0
val_accuracy = 0
# Implement the validation pass and
# print out the validation accuracy
with torch.no_grad():
for images, labels in testloader:
log_ps = model(images)
test_loss += criterion(log_ps, labels)
ps = torch.exp(log_ps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
val_accuracy += torch.mean(equals.type(torch.FloatTensor))
train_losses.append(running_loss/len(trainloader))
test_losses.append(test_loss/len(testloader))
print("Epoch: {}/{}.. ".format(epoch+1, epochs),
"Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)),
"Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
"Test Accuracy: {:.3f}".format(val_accuracy/len(testloader))
)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
plt.plot(train_losses, label='Training Loss')
plt.plot(test_losses, label='Test Loss')
plt.legend(frameon=False)
```
# Overfitting
The most common method to reduce overfitting is dropout, where we randomly drop input units during training. This forces the network to share information between weights, increasing its ability to generalize to new data.
We need to turn off dropout during validation, testing and whenever we're using the network to make predictions.
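As a minimal standalone sketch (not part of the exercise below), we can watch `nn.Dropout` zero out activations in train mode and act as the identity in eval mode. Note that inverted dropout also rescales the surviving activations by 1/(1-p):

```python
import torch
from torch import nn

# Dropout with p=0.5: in train mode each element is zeroed with
# probability 0.5 and the survivors are scaled by 1/(1-0.5) = 2.
drop = nn.Dropout(p=0.5)
x = torch.ones(1, 10)

drop.train()                      # training mode: elements randomly zeroed
train_out = drop(x)

drop.eval()                       # eval mode: dropout is the identity
eval_out = drop(x)
print(torch.equal(eval_out, x))   # True
```

Every value in `train_out` is either 0 or 2, which is why we must switch to eval mode before validation.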
**Exercise**
Add dropout to your model and train it on Fashion-MNIST again. Try to get a lower validation loss or higher accuracy.
```
import torch.nn.functional as F
from torch import nn, optim
# Define the model with dropout
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
# Dropout module with 0.2 drop probability
self.dropout = nn.Dropout(p=0.2)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
# with dropout
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
# output, so no dropout here
x = F.log_softmax(self.fc4(x), dim=1)
return x
```
During training we want to use dropout to prevent overfitting, but during inference we want to use the entire network. So, we need to turn off dropout during validation, testing, and whenever we're using the network to make predictions. To do this, you use `model.eval()`. This sets the model to evaluation mode where the dropout probability is 0. You can turn dropout back on by setting the model to train mode with `model.train()`. In general, the pattern for the validation loop will look like this, where you turn off gradients, set the model to evaluation mode, calculate the validation loss and metric, then set the model back to train mode.
```python
# turn off gradients
with torch.no_grad():
# set model to evaluation mode
model.eval()
# validation pass here
for images, labels in testloader:
...
# set model back to train mode
model.train()
```
```
# Train the model with dropout, and monitor
# the training process with the validation loss
# and accuracy
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
epochs = 3
steps = 0
train_losses, test_losses = [], []
for epoch in range(epochs):
running_loss = 0
for images, labels in trainloader:
optimizer.zero_grad()
logps = model(images)
loss = criterion(logps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
val_accuracy = 0
test_loss = 0
with torch.no_grad():
model.eval()
for images, labels in testloader:
logps = model(images)
loss = criterion(logps, labels)
test_loss += loss
ps = torch.exp(logps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
val_accuracy += torch.mean(equals.type(torch.FloatTensor))
model.train()
train_losses.append(running_loss/len(trainloader))
test_losses.append(test_loss/len(testloader))
print("Epoch {}/{}".format(epoch+1, epochs),
"Training Loss.. {}".format(running_loss/len(trainloader)),
"Testing Loss.. {}".format(test_loss/len(testloader)),
"Validation Accuracy.. {}".format(val_accuracy/len(testloader)))
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
plt.plot(train_losses, label='Training loss')
plt.plot(test_losses, label='Validation loss')
plt.legend(frameon=False)
```
# Inference
We need to set the model to inference mode with `model.eval()`.
```
import helper
model.eval()
dataiter = iter(testloader)
images, labels = next(dataiter)  # the .next() method was removed; use the builtin next()
img = images[0]
# Convert 2D image to 1D vector
img = img.view(1, 784)
# Calculate the class probabilities
with torch.no_grad():
output = model.forward(img)
ps = torch.exp(output)
# Plot the image and probabilities
helper.view_classify(img.view(1, 28, 28),
ps,
version='Fashion')
```
# Assessing Corpus Quality
### In this notebook, we'll learn about assessing corpus quality and potentially correcting problems.
## Potential Problem areas
1. Unexpected characters
1. Improperly joined words
1. Loanwords
### We will consider each of these in turn.
### Datasets: We'll be using the freely available CLTK datasets of the Latin Library and Perseus Latin texts, but the techniques in this notebook will apply to other languages as well.
#### set up the notebook
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
```
### A few open source imports
```
from collections import Counter, defaultdict
import re
from bisect import bisect
import statistics
from cltk.corpus.readers import get_corpus_reader
from cltk.stem.latin.j_v import JVReplacer
from tqdm import tqdm
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
plt.style.use('fivethirtyeight')
```
### Add parent directory to path so we can access our common code
```
import os,sys,inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0,parentdir)
from mlyoucanuse.corpus_analysis_fun import (get_word_lengths,
get_samples_for_lengths,
get_char_counts)
```
## The Latin Library Corpus
```
latin_library_reader = get_corpus_reader(corpus_name='latin_text_latin_library', language='latin')
```
## The Perseus Latin Corpus
```
perseus_latin_reader = get_corpus_reader(corpus_name='latin_text_perseus', language='latin')
```
### 1. Unexpected Characters
#### Let's have a look at the characters: Grab all the words, check if they are alphabetical, and count them
```
latin_lib_char_freq = get_char_counts(latin_library_reader)
print(f'Latin Library Corpus: Number of characters: {sum(latin_lib_char_freq.values()):,}')
print('Latin Library character frequency distribution:', latin_lib_char_freq)
#####################
perseus_latin_char_freq = get_char_counts(perseus_latin_reader)
print(f'Perseus Latin Corpus: Number of characters: {sum(perseus_latin_char_freq.values()):,}')
print('Perseus Latin character frequency distribution:', perseus_latin_char_freq)
```
## We can see some obvious dirtiness:
1. There are two characters, 'œ' and 'æ', representing diphthongs that should each be split into two characters.
1. There are some Greek letters in a Greek font, and some accented vowels foreign to Greek and Latin.
#### Let's treat these items as the start of a breadcrumb trail toward something bad, and look further.
## 1. OE and AE
#### Classical Latin wrote the o and e separately (as has today again become the general practice), but the ligature was used in medieval and early modern writings, in part because the diphthongal sound had, by Late Latin, merged into the sound [e].
#### https://en.wikipedia.org/wiki/%C5%92
#### Æ (minuscule: æ) is a grapheme named æsc or ash, formed from the letters a and e, originally a ligature representing the Latin diphthong ae. It has been promoted to the full status of a letter in the alphabets of some languages, including Danish, Norwegian, Icelandic, and Faroese.
#### https://en.wikipedia.org/wiki/%C3%86
#### We have written an AEOEReplacer in the style of the CLTK JVReplacer and will use this to clean the corpora in other notebooks.
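A minimal sketch of what such a replacer might look like (the actual AEOEReplacer may differ; this is just an illustration in the style of the CLTK JVReplacer):

```python
import re

class AEOEReplacer:
    """Hypothetical sketch of a ligature replacer; the real class may differ."""
    def __init__(self):
        # map each ligature to its two-letter expansion
        self.patterns = [(re.compile(k), v) for k, v in
                         [('œ', 'oe'), ('æ', 'ae'), ('Œ', 'Oe'), ('Æ', 'Ae')]]

    def replace(self, text):
        # apply each compiled substitution in turn
        for pattern, repl in self.patterns:
            text = pattern.sub(repl, text)
        return text

replacer = AEOEReplacer()
print(replacer.replace('prœlium cæsar'))  # proelium caesar
```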
## 2. Greek Letters, Greek words; Foreign characters
### Filtering out these Greek words and obvious characters is easy
```
foreign_characters = [ tmp for tmp in
perseus_latin_char_freq.keys()
if not re.match('[a-zA-Z]', tmp) ]
print('foreign_characters', foreign_characters)
just_latin_words = []
latin_library_reader = get_corpus_reader(corpus_name='latin_text_latin_library', language='latin')
for word in tqdm(latin_library_reader.words()):
if not re.match('[{}]'.format(''.join(foreign_characters)), word):
just_latin_words.append(word)
print(f'Number just latin words: {len(just_latin_words):,}')
```
### This is easily extended to "Show me all the Greek in Cicero", "Show me Plautus without Greek", etc.
#### But other problems lurk: such as
## Acceptable Characters may represent foreign words transliterated:
```
'kai' in just_latin_words # kai means 'and' in Greek
```
### We will deal with the complex problem of detecting and correcting transliterated loanwords in another notebook
## Simple Heuristic for detecting improperly joined words
### A sequence of lowercase letters followed immediately by a capital letter, with no intervening space, is almost always an indicator of improperly joined words.
```
camelCase_regex = '[a-z]+[A-Z][a-z]+'
for word in just_latin_words[:10000]:
if re.match(camelCase_regex, word):
print (word)
camel_case_count = 0
for word in tqdm(just_latin_words):
if re.match(camelCase_regex, word):
camel_case_count +=1
print(f'Number of camel cased words {camel_case_count:,}')
```
## camelCased words are easily corrected
### e.g.:
#### from cltk.utils.matrix_corpus_fun import split_camel
at: https://github.com/cltk/cltk/blob/master/cltk/utils/matrix_corpus_fun.py#L213
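For illustration, a hypothetical minimal version of such a splitter might look like this (the real `split_camel` in CLTK may behave differently):

```python
import re

def split_camel(word):
    """Hypothetical sketch: split an improperly joined camelCased token
    into its constituent words at each lowercase-to-uppercase boundary."""
    return re.sub(r'([a-z])([A-Z])', r'\1 \2', word).split()

print(split_camel('senatusPopulusque'))  # ['senatus', 'Populusque']
```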
## Detecting Improperly Joined Words In General
* Errors in data transmission,
* Improperly OCRed text
* Botched formatting (e.g. blindly joining lines and dropping newline characters that were also meant to encode word separations)
#### Can all cause words to become improperly joined in a corpus.
#### To help spot candidates of improperly joined words, we need to first examine some sample corpora to determine typical word lengths.
```
latin_lib_lengths = get_word_lengths(latin_library_reader)
top_label, side_label, bottom_label = ( 'Word Lengths', 'Number of words',
'Raw Word Lengths of Latin Library Corpus')
fig = plt.figure(figsize=(7,7))
plt.xlabel(top_label)
plt.ylabel(side_label)
plt.title(bottom_label)
ax = fig.gca()
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
data = list(latin_lib_lengths.items())
data.sort(key=lambda k: k[0])
indices, values = zip(*data)
idx = bisect(indices, 28)
# expert testimony, reasonably longest words:
# undēquinquāgēsimōrum = 24 letters
# conscrībillātūrōrum
# percrēbrēscēbāminī
indices, values = zip(*data[:idx])  # cap at the reasonably longest word in Latin
indices, values = zip(*data[:16])   # nothing shows up on the graph beyond this range
plt.bar(indices, values)
plt.tight_layout(pad=0.5, w_pad=20, h_pad=0.5)
plt.show()
total_words = sum(latin_lib_lengths.values())
print(f'Total words {total_words:,}')
print('Word lengths:', latin_lib_lengths)
all_lens = []
for key in latin_lib_lengths:
all_lens += [key] * latin_lib_lengths[key]
mean_word_len, stdev_word_length = round(statistics.mean(all_lens), 3), round(statistics.stdev(all_lens), 3)
print(f'Mean Latin word length: {round(mean_word_len, 2)}, standard deviation: {round(stdev_word_length, 2)}')
print(f'98% cutoff {round(mean_word_len + (2* stdev_word_length))} letters')
perseus_latin_lengths = get_word_lengths(perseus_latin_reader)
top_label, side_label, bottom_label = ('Word Lengths', 'Number of words',
'Raw Word Lengths of Perseus Latin(CLTK)')
fig = plt.figure(2, figsize=(7,7))
ax = fig.gca()
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
plt.xlabel(top_label)
plt.ylabel(side_label)
plt.title(bottom_label )
data = list(perseus_latin_lengths.items())
data.sort(key=lambda k: k[0])
indices, values = zip(*data)
idx = bisect(indices, 28)
indices, values = zip(*data[:16]) # reasonably Longest word in latin
# print(indices)
# print(values)
plt.bar(indices, values)
plt.tight_layout(pad=0.5, w_pad=20, h_pad=0.5)
plt.show();
total_words = sum(perseus_latin_lengths.values())
print(f'Total words {total_words:,}')
print('Word Lengths', perseus_latin_lengths)
all_lens = []
for key in perseus_latin_lengths:
all_lens += [key] * perseus_latin_lengths[key]
mean_word_len, stdev_word_length = round(statistics.mean(all_lens), 3), round(statistics.stdev(all_lens), 3)
print(f'Mean Latin word length: {round(mean_word_len, 2)}, standard deviation: {round(stdev_word_length, 2)}')
print(f'98% cutoff {round(mean_word_len + (2* stdev_word_length))} letters')
```
### Based on these numbers we can see that the long words in the Latin corpora are exceptional, and we can take a reasonable stance that any word in a corpus 12 letters or longer is a candidate for examination to determine whether or not it's an improperly joined word. Let's take a closer look at their representations.
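As a simple illustrative sketch (the 12-letter cutoff is the heuristic derived above, not a CLTK API):

```python
# Flag words at or above a length cutoff (mean + 2 standard deviations,
# roughly 12 letters for these corpora) as candidates for being
# improperly joined words.
def join_candidates(words, cutoff=12):
    return [w for w in words if len(w) >= cutoff]

sample = ['arma', 'virumque', 'cano', 'troiaequiprimus']
print(join_candidates(sample))  # ['troiaequiprimus']
```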
## Breaking long words in two when possible
### To do this, we'll need to build a language model that helps us find word boundaries and vet candidates for un-merging; that is the task of another notebook, `make_trie_language_model.ipynb`.
### In another notebook, `detecting_and_correcting_loanwords.ipynb` we will take on the complex problem of transliterated loanwords.
## That's all for now folks!
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Reducer/stats_by_group.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Reducer/stats_by_group.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Reducer/stats_by_group.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Reducer/stats_by_group.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# Load a collection of US census blocks.
blocks = ee.FeatureCollection('TIGER/2010/Blocks')
# Compute sums of the specified properties, grouped by state code.
sums = blocks \
.filter(ee.Filter.And(
ee.Filter.neq('pop10', {}),
ee.Filter.neq('housing10', {}))) \
.reduceColumns(**{
'selectors': ['pop10', 'housing10', 'statefp10'],
'reducer': ee.Reducer.sum().repeat(2).group(**{
'groupField': 2,
'groupName': 'state-code',
})
})
# Print the resultant Dictionary.
print(sums.getInfo())
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
# Reinforcement Learning
In this assignment we'll work through the reinforcement learning problem, implement the REINFORCE algorithm, and use it to teach an agent to play the CartPole game.
Let's install and import the necessary libraries, along with helper functions for visualizing the agent's play.
```
!pip install gym pandas torch matplotlib pyvirtualdisplay > /dev/null 2>&1
!apt-get install -y xvfb python-opengl ffmpeg x11-utils > /dev/null 2>&1
from IPython.display import clear_output, HTML
from IPython import display as ipythondisplay
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import glob
import time
import io
import base64
import gym
from gym.wrappers import Monitor
import torch
import collections
import pandas as pd
from torch import nn
from torch.optim import Adam
from torch.distributions import Categorical
from pyvirtualdisplay import Display
display = Display(visible=0, size=(1400, 900))
display.start()
"""
Utility functions to enable video recording of gym environment and displaying it
To enable video, just do "env = wrap_env(env)"
"""
def show_video():
mp4list = glob.glob('video/*.mp4')
if len(mp4list) > 0:
mp4 = mp4list[0]
video = io.open(mp4, 'r+b').read()
encoded = base64.b64encode(video)
ipythondisplay.display(HTML(data='''<video alt="test" autoplay
loop controls style="height: 400px;">
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii'))))
else:
print("Could not find video")
def wrap_env(env):
env = Monitor(env, './video', force=True)
return env
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')  # moves tensors to the GPU when one is available
```
## OpenAI Gym
[OpenAI Gym](https://gym.openai.com) is a toolkit for developing and comparing reinforcement learning algorithms.
OpenAI Gym provides a simple, universal API to many environments with different properties, from simple to complex:
* Classic control tasks and toy examples found in textbooks, used to demonstrate reinforcement learning algorithms (one of these environments is used in this assignment)
* Atari games (which have had a huge impact on advances in reinforcement learning in recent years)
* 2D and 3D environments for controlling robots in simulation (based on the proprietary [MuJoCo](http://www.mujoco.org) engine)
Let's look at how the [CartPole-v0](https://gym.openai.com/envs/CartPole-v0) environment, which we'll be working with, is structured.
To do this, we'll create the environment and print its description.
```
env = gym.make("CartPole-v0")
print(env.env.__doc__)
```
From this description we can learn how the environment's state and action spaces are structured, what rewards are received at each step, and what we need to do to "solve" this environment, namely reach an average reward of 195.0 or more over 100 consecutive runs. That is exactly the agent we'll try to build and train.
But first, let's write a helper function that takes an environment, an agent, and a number of episodes as input, and returns the mean reward over 100 episodes. With this function we can test how well our agent has learned, and also visualize its behavior in the environment.
```
def test_agent(env, agent=None, n_episodes=100):
"""Runs agent for n_episodes in environment and calclates mean reward.
Args:
env: The environment for agent to play in
agent: The agent to play with. Defaults to None -
in this case random agent is used.
n_episodes: Number of episodes to play. Defaults to 100.
Returns:
Mean reward for 100 episodes.
"""
total_reward = []
for episode in range(n_episodes):
episode_reward = 0
observation = env.reset()
t = 0
while True:
if agent:
with torch.no_grad():
probs = agent(torch.FloatTensor(observation).to(device))
dist = Categorical(probs)
action = dist.sample().item()
else:
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
episode_reward += reward
t += 1
if done:
print("Episode {} finished after {} timesteps".format(episode+1, t+1))
break
total_reward.append(episode_reward)
env.close()
return np.mean(total_reward)
```
Let's test and visualize a random agent (parameter ```agent=False```).
```
test_agent(env, agent=False, n_episodes=100)
```
As we can see, our random agent doesn't perform very well: on average it can hold the pole up for only about 20 steps.
Let's write a function to visualize an agent, and watch the random agent play.
```
def agent_viz(env="CartPole-v0", agent=None):
"""Visualizes agent play in the given environment.
Args:
env: The environment for agent to play in. Defaults to CartPole-v0.
agent: The agent to play with. Defaults to None -
in this case random agent is used.
Returns:
Nothing is returned. Visualization is created and can be showed
with show_video() function.
"""
env = wrap_env(gym.make(env))
observation = env.reset()
while True:
env.render()
if agent:
with torch.no_grad():
probs = agent(torch.FloatTensor(observation).to(device))
dist = Categorical(probs)
action = dist.sample().item()
else:
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
if done:
break
env.close()
agent_viz()
show_video()
```
Let's apply reinforcement learning and the REINFORCE algorithm so that, on average over 100 episodes, we hold the pole up for at least 195 steps.
## REINFORCE
Let's recall what the REINFORCE algorithm looks like (Sutton & Barto) <img src="//i.imgur.com/bnASTrY.png" width="700">
1. Initialize the policy (we will use a deep neural network as the policy).
2. "Play" one or more episodes in the environment using our policy (we will use several), collecting data about the states, actions, and rewards received.
3. For each state in the collected episodes, compute the sum of discounted rewards obtained from that state, as well as the log-likelihood under our policy of the action taken in that state.
4. Update the policy parameters using the formula in the diagram.
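As an illustration of step 3, here is a minimal, framework-free sketch of the rewards-to-go computation (a hypothetical helper, not the graded solution):

```python
# Rewards-to-go for one trajectory, computed backwards so that each
# step reuses the discounted sum already accumulated for the next step.
def rewards_to_go(rewards, discount):
    returns = []
    running = 0.0
    for r in reversed(rewards):
        running = r + discount * running
        returns.append(running)
    return returns[::-1]

print(rewards_to_go([1.0, 1.0, 1.0], 0.9))  # ≈ [2.71, 1.9, 1.0]
```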
### Policy
Our policy should take an environment state as input and output a distribution over the actions we can take in the environment.
**Task:** Create a neural network class with the architecture ```Linear -> ReLU -> Linear -> Softmax```. Its initialization parameters should be the dimensions of the state space and the action space, and the hidden layer size.
```
class Policy(nn.Module):
"""Policy to be used by agent.
Attributes:
state_size: Dimention of the state space of the environment.
act_size: Dimention of the action space of the environment.
hidden_size: Dimention of the hidden state of the agent's policy.
"""
# TO DO
```
### Computing likelihoods and discounted reward sums
**Task:** Write a helper function that takes a policy, a batch of trajectories, and a discount factor as input, and returns the following:
* the likelihood of the action at each step along the trajectory, computed for the whole batch;
* the discounted reward sums (rewards-to-go) from each environment state along the trajectory, computed for the whole batch.
**Hint**: Represent the batch of trajectories as a ```list``` that contains a ```list``` for each trajectory, in which each step is stored as a ```namedtuple```:
```transition = collections.namedtuple("transition", ["state", "action", "reward"])```
```
def process_traj_batch(policy, batch, discount):
"""Computes log probabilities for each action
and rewards-to-go for each state in the batch of trajectories.
Args:
policy: Policy of the agent.
batch (list of list of collections.namedtuple): Batch of trajectories.
discount (float): Discount factor for rewards-to-go calculation.
Returns:
log_probs (list of torch.FloatTensor): List of log probabilities for
each action in the batch of trajectories.
returns (list of rewards-to-go): List of rewards-to-go for
each state in the batch of trajectories.
"""
# TO DO
return log_probs, returns
```
Your implementation of the function should pass the following test.
```
def test_process_traj_batch(process_traj_batch):
transition = collections.namedtuple("transition", ["state", "action", "reward"])
class HelperPolicy(nn.Module):
def __init__(self):
super(HelperPolicy, self).__init__()
self.act = nn.Sequential(
nn.Linear(4, 2),
nn.Softmax(dim=0),
)
def forward(self, x):
return self.act(x)
policy = HelperPolicy()
for name, param in policy.named_parameters():
if name == "act.0.weight":
param.data = torch.tensor([[1.7492, -0.2471, 0.3310, 1.1494],
[0.6171, -0.6026, 0.5025, -0.3196]])
else:
param.data = torch.tensor([0.0262, 0.1882])
batch = [
[
transition(state=torch.tensor([ 0.0462, -0.0018, 0.0372, 0.0063]), action=torch.tensor(0), reward=1.0),
transition(state=torch.tensor([ 0.0462, -0.1975, 0.0373, 0.3105]), action=torch.tensor(1), reward=1.0),
transition(state=torch.tensor([ 0.0422, -0.0029, 0.0435, 0.0298]), action=torch.tensor(0), reward=1.0),
transition(state=torch.tensor([ 0.0422, -0.1986, 0.0441, 0.3359]), action=torch.tensor(0), reward=1.0),
],
[
transition(state=torch.tensor([ 0.0382, -0.3943, 0.0508, 0.6421]), action=torch.tensor(1), reward=1.0),
transition(state=torch.tensor([ 0.0303, -0.2000, 0.0637, 0.3659]), action=torch.tensor(1), reward=1.0),
transition(state=torch.tensor([ 0.0263, -0.0058, 0.0710, 0.0939]), action=torch.tensor(1), reward=1.0),
transition(state=torch.tensor([ 0.0262, 0.1882, 0.0729, -0.1755]), action=torch.tensor(0), reward=1.0)
]
]
log_probs, returns = process_traj_batch(policy, batch, 0.9)
assert sum(log_probs).item() == -6.3940582275390625, "Log probabilities calculation is incorrect!!!"
assert sum(returns) == 18.098, "Rewards-to-go calculation is incorrect!!!"
print("Correct!")
test_process_traj_batch(process_traj_batch)
```
### Helper functions and hyperparameters
A function for computing an exponentially weighted moving average; we'll use it to visualize the rewards across episodes.
```
moving_average = lambda x, **kw: pd.DataFrame({'x':np.asarray(x)}).x.ewm(**kw).mean().values
```
Let's also define the hyperparameters.
```
STATE_SIZE = env.observation_space.shape[0] # dimension of the environment's state space
ACT_SIZE = env.action_space.n # dimension of the environment's action space
HIDDEN_SIZE = 256 # hidden layer size of the policy
NUM_EPISODES = 1000 # number of episodes to play during training
DISCOUNT = 0.99 # discount factor
TRAIN_EVERY = 20 # train on the collected batch every TRAIN_EVERY episodes
```
Let's initialize the policy and the optimization algorithm; we'll use Adam with default parameters.
```
policy = Policy(STATE_SIZE, ACT_SIZE, HIDDEN_SIZE).to(device)
optimizer = Adam(policy.parameters())
transition = collections.namedtuple("transition", ["state", "action", "reward"])
```
### Main training loop
Now that we've defined the helper functions, we should write the agent's main training loop.
The loop should do the following:
1. Play the number of episodes set by the ```NUM_EPISODES``` hyperparameter.
2. In each episode, record information about every step along the trajectory: the state, the action, and the reward.
3. At the end of each episode, save that trajectory information.
4. Periodically train on the collected episodes, every ```TRAIN_EVERY``` episodes:
4.1. For each step along the trajectories in the collected batch, compute the likelihood and the sum of discounted rewards.
4.2. Update the agent's policy parameters using the formula given in the diagram.
**Task:** Implement the training algorithm described in the diagram and in the text above. A code template is given below. Save the sum of rewards of each episode in the variable ```returns_history```. The algorithm will need about 1000 episodes of play to learn the game (if after 1000 episodes the agent plays slightly worse than needed to win the game, try training it a bit longer or add a stopping criterion: stop once the average reward over the last 100 episodes exceeds ```env.spec.reward_threshold```).
```
returns_history = []
traj_batch = []
for i in range(NUM_EPISODES):
# TO DO
returns_history.append(rewards)
traj_batch.append(traj)
if (i + 1) % TRAIN_EVERY == 0:  # train once every TRAIN_EVERY episodes
log_probs, returns = process_traj_batch(policy, traj_batch, DISCOUNT)
loss = -(torch.stack(log_probs) * torch.FloatTensor(returns).to(device)).sum()
optimizer.zero_grad()
loss.backward()
optimizer.step()
traj_batch = []
if (i + 1) % 10 == 0:  # refresh the plot every 10 episodes
clear_output(True)
plt.figure(figsize=[12, 6])
plt.title('Returns'); plt.grid()
plt.scatter(np.arange(len(returns_history)), returns_history, alpha=0.1)
plt.plot(moving_average(returns_history, span=10, min_periods=10))
plt.show()
```
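Step 4.1 above relies on the sum of discounted rewards for every step of a trajectory (here assumed to be computed inside ```process_traj_batch```). A minimal standalone sketch of that computation:

```python
import numpy as np

def discounted_returns(rewards, discount):
    """Compute G_t = r_t + discount * G_{t+1} for every step, scanning backwards."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + discount * running
        returns[t] = running
    return returns

print(discounted_returns([1.0, 1.0, 1.0], 0.9))
```

Scanning the trajectory backwards makes the computation linear in the episode length instead of quadratic.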
Let's test the trained agent.
```
test_agent(env, agent=policy, n_episodes=100)
```
The trained agent should approach the target average reward of 195 over 100 episodes.
Let's visualize the trained agent.
```
agent_viz(agent=policy)
show_video()
```
As you can see, the agent has learned a fairly good strategy and can keep the pole balanced for a long time.
### REINFORCE with Baselines (Optional)
As mentioned in the lectures, when computing the gradients for the policy update we can subtract a ```baseline``` from the sum of discounted rewards to reduce the variance of the gradients and speed up convergence; this algorithm is called REINFORCE with baselines. As the ```baseline``` we can use another neural network that estimates the sum of discounted rewards from a given state, *V(s)*.
Diagram of the REINFORCE with baselines algorithm (Sutton & Barto) <img src="//i.imgur.com/j3BcbHP.png" width="700">
**Task**: Add a second neural network to the algorithm you implemented above to estimate the sum of discounted rewards *V(s)*. Use the difference between the actual sum of discounted rewards and this estimate in the policy loss formula. Use ```MSELoss``` as the loss function for *V(s)*. Evaluate the convergence speed of the new algorithm.
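A minimal sketch of the loss computation for this task (the value-network shape and the helper name are assumptions, not part of the original code; ```STATE_SIZE``` and ```HIDDEN_SIZE``` mirror the constants defined earlier):

```python
import torch
import torch.nn as nn

STATE_SIZE, HIDDEN_SIZE = 4, 64  # hypothetical values; reuse the constants defined above

# A second, hypothetical network estimating V(s), used as the baseline.
value_net = nn.Sequential(
    nn.Linear(STATE_SIZE, HIDDEN_SIZE),
    nn.ReLU(),
    nn.Linear(HIDDEN_SIZE, 1),
)

def reinforce_with_baseline_losses(log_probs, returns, states):
    """Policy loss weights log-probs by the advantage G_t - V(s_t);
    the value network itself is trained with MSELoss toward G_t."""
    values = value_net(states).squeeze(-1)
    advantage = returns - values.detach()  # detach: no policy gradient flows into V(s)
    policy_loss = -(log_probs * advantage).sum()
    value_loss = nn.MSELoss()(values, returns)
    return policy_loss, value_loss
```

Detaching the baseline before forming the advantage is the key design choice: the policy gradient must not flow through *V(s)*, which is updated only by its own MSE loss.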
---
```
# import necessary modules
# uncomment to get plots displayed in notebook
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from classy import Class
from scipy.optimize import fsolve
from scipy.interpolate import interp1d
import math
# esthetic definitions for the plots
font = {'size' : 16, 'family':'STIXGeneral'}
axislabelfontsize='large'
matplotlib.rc('font', **font)
matplotlib.mathtext.rcParams['legend.fontsize']='medium'
plt.rcParams["figure.figsize"] = [8.0,6.0]
#############################################
#
# Cosmological parameters and other CLASS parameters
#
common_settings = {# which output? ClTT, transfer functions delta_i and theta_i
'output':'tCl,mTk,vTk',
# LambdaCDM parameters
'h':0.67556,
'omega_b':0.022032,
'omega_cdm':0.12038,
'A_s':2.215e-9,
'n_s':0.9619,
'tau_reio':0.0925,
# Take fixed value for primordial Helium (instead of automatic BBN adjustment)
'YHe':0.246,
# other output and precision parameters
'l_max_scalars':5000,
'P_k_max_1/Mpc':10.0,
'gauge':'newtonian'}
###############
#
# call CLASS a first time just to compute z_rec (will compute transfer functions at default: z=0)
#
M = Class()
M.set(common_settings)
M.compute()
derived = M.get_current_derived_parameters(['z_rec','tau_rec','conformal_age'])
#print(derived.keys())
z_rec = derived['z_rec']
z_rec = int(1000.*z_rec)/1000. # round down to 3 digits after the decimal point
M.struct_cleanup() # clean output
M.empty() # clean input
#
# call CLASS again (will compute transfer functions at input value z_rec)
#
M = Class()
M.set(common_settings)
M.set({'z_pk':z_rec})
M.compute()
#
# load transfer functions at recombination
#
one_time = M.get_transfer(z_rec)
print(one_time.keys())
k = one_time['k (h/Mpc)']
Theta0 = 0.25*one_time['d_g']
phi = one_time['phi']
psi = one_time['psi']
theta_b = one_time['t_b']
# compute related quantities
R = 3./4.*M.Omega_b()/M.Omega_g()/(1+z_rec) # R = 3/4 * (rho_b/rho_gamma) at z_rec
zero_point = -(1.+R)*psi # zero point of oscillations: -(1.+R)*psi
#
# get Theta0 oscillation amplitude (for vertical scale of plot)
#
Theta0_amp = max(Theta0.max(),-Theta0.min())
#
# use table of background quantities to find the wavenumbers corresponding to
# Hubble crossing (k = 2 pi a H) and sound horizon crossing (k = 2 pi / rs)
#
background = M.get_background() # load background table
#print(background.keys())
#
background_tau = background['conf. time [Mpc]'] # read conformal times in background table
background_z = background['z'] # read redshift
background_kh = 2.*math.pi*background['H [1/Mpc]']/(1.+background['z'])/M.h() # read kh = 2pi aH = 2pi H/(1+z) converted to [h/Mpc]
background_ks = 2.*math.pi/background['comov.snd.hrz.']/M.h() # read ks = 2pi/rs converted to [h/Mpc]
#
# define interpolation functions giving these wavenumbers as functions of conformal time
#
kh_at_tau = interp1d(background_tau,background_kh)
ks_at_tau = interp1d(background_tau,background_ks)
#
# finally get these scales
#
tau_rec = derived['tau_rec']
kh = kh_at_tau(tau_rec)
ks = ks_at_tau(tau_rec)
#
#################
#
# start plotting
#
#################
#
fig, (ax_Tk, ax_Tk2, ax_Cl) = plt.subplots(3,sharex=True,figsize=(8,12))
fig.subplots_adjust(hspace=0)
##################
#
# first figure with transfer functions
#
##################
ax_Tk.set_xlim([3.e-4,0.5])
ax_Tk.set_ylim([-1.1*Theta0_amp,1.1*Theta0_amp])
ax_Tk.tick_params(axis='x',which='both',bottom=False,top=True,labelbottom=False,labeltop=True)
ax_Tk.set_xlabel(r'$\mathrm{k} \,\,\, \mathrm{[h/Mpc]}$')
ax_Tk.set_ylabel(r'$\mathrm{Transfer}(\tau_\mathrm{dec},k)$')
ax_Tk.xaxis.set_label_position('top')
ax_Tk.grid()
#
ax_Tk.axvline(x=kh,color='r')
ax_Tk.axvline(x=ks,color='y')
#
ax_Tk.annotate(r'Hubble cross.',
xy=(kh,0.8*Theta0_amp),
xytext=(0.15*kh,0.9*Theta0_amp),
arrowprops=dict(facecolor='black', shrink=0.05, width=1, headlength=5, headwidth=5))
ax_Tk.annotate(r'sound hor. cross.',
xy=(ks,0.8*Theta0_amp),
xytext=(1.3*ks,0.9*Theta0_amp),
arrowprops=dict(facecolor='black', shrink=0.05, width=1, headlength=5, headwidth=5))
#
ax_Tk.semilogx(k,psi,'y-',label=r'$\psi$')
ax_Tk.semilogx(k,phi,'r-',label=r'$\phi$')
ax_Tk.semilogx(k,zero_point,'k:',label=r'$-(1+R)\psi$')
ax_Tk.semilogx(k,Theta0,'b-',label=r'$\Theta_0$')
ax_Tk.semilogx(k,(Theta0+psi),'c',label=r'$\Theta_0+\psi$')
ax_Tk.semilogx(k,theta_b,'g-',label=r'$\theta_b$')
#
ax_Tk.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
#######################
#
# second figure with transfer functions squared
#
#######################
ax_Tk2.set_xlim([3.e-4,0.5])
ax_Tk2.tick_params(axis='x',which='both',bottom=False,top=False,labelbottom=False,labeltop=False)
ax_Tk2.set_ylabel(r'$\mathrm{Transfer}(\tau_\mathrm{dec},k)^2$')
ax_Tk2.grid()
#
ax_Tk2.semilogx(k,(Theta0+psi)**2,'c',label=r'$(\Theta_0+\psi)^2$')
#
ax_Tk2.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
########################
#
# third figure with all contributions to Cls
#
# For that we will need to call CLASS again for each contribution (TSW, earlyISW, lateISW, Doppler, total)
# Note that there is another contribution from polarisation: we don't plot it individually because it is
# too small to be seen, however it is included by default in the total.
#
# After each step we will save the figure (to get intermediate figures for the slides)
#
#########################
# presentation settings
ax_Cl.set_xlim([3.e-4,0.5])
ax_Cl.set_ylim([0.,8.])
ax_Cl.set_xlabel(r'$\ell/(\tau_0-\tau_{rec}) \,\,\, \mathrm{[h/Mpc]}$')
ax_Cl.set_ylabel(r'$\ell (\ell+1) C_l^{TT} / 2 \pi \,\,\, [\times 10^{10}]$')
ax_Cl.tick_params(axis='x',which='both',bottom=True,top=False,labelbottom=True,labeltop=False)
ax_Cl.grid()
#
# the x-axis will show l/(tau_0-tau_rec), so we need (tau_0-tau_rec) in units of [Mpc/h]
#
tau_0_minus_tau_rec_hMpc = (derived['conformal_age']-derived['tau_rec'])*M.h()
#
# save the total Cl's (we will plot them in the last step)
#
cl_tot = M.raw_cl(5000)
#
# call CLASS with TSW, then plot and save
#
M.struct_cleanup() # clean output
M.empty() # clean input
M.set(common_settings) # new input
M.set({'temperature contributions':'tsw'})
M.compute()
cl = M.raw_cl(5000)
#
ax_Cl.semilogx(cl['ell']/tau_0_minus_tau_rec_hMpc,1.e10*cl['ell']*(cl['ell']+1.)*cl['tt']/2./math.pi,'c-',label=r'$\mathrm{T+SW}$')
#
ax_Cl.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
fig.savefig('one_time_with_cl_1.pdf',bbox_inches='tight')
#
# call CLASS with early ISW, plot; call CLASS with late ISW, plot; then save
#
M.struct_cleanup()
M.empty()
M.set(common_settings)
M.set({'temperature contributions':'eisw'})
M.compute()
cl = M.raw_cl(5000)
#
ax_Cl.semilogx(cl['ell']/tau_0_minus_tau_rec_hMpc,1.e10*cl['ell']*(cl['ell']+1.)*cl['tt']/2./math.pi,'r-',label=r'$\mathrm{early} \,\, \mathrm{ISW}$')
#
M.struct_cleanup()
M.empty()
M.set(common_settings)
M.set({'temperature contributions':'lisw'})
M.compute()
cl = M.raw_cl(5000)
#
ax_Cl.semilogx(cl['ell']/tau_0_minus_tau_rec_hMpc,1.e10*cl['ell']*(cl['ell']+1.)*cl['tt']/2./math.pi,'y-',label=r'$\mathrm{late} \,\, \mathrm{ISW}$')
#
ax_Cl.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
fig.savefig('one_time_with_cl_2.pdf',bbox_inches='tight')
#
# call CLASS with Doppler, then plot and save
#
M.struct_cleanup()
M.empty()
M.set(common_settings)
M.set({'temperature contributions':'dop'})
M.compute()
cl = M.raw_cl(5000)
#
ax_Cl.semilogx(cl['ell']/tau_0_minus_tau_rec_hMpc,1.e10*cl['ell']*(cl['ell']+1.)*cl['tt']/2./math.pi,'g-',label=r'$\mathrm{Doppler}$')
#
ax_Cl.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
fig.savefig('one_time_with_cl_3.pdf',bbox_inches='tight')
#
# plot the total Cls that had been stored, and save
#
ax_Cl.semilogx(cl_tot['ell']/tau_0_minus_tau_rec_hMpc,1.e10*cl_tot['ell']*(cl_tot['ell']+1.)*cl_tot['tt']/2./math.pi,'k-',label=r'$\mathrm{Total}$')
#
ax_Cl.legend(loc='right',bbox_to_anchor=(1.4, 0.5))
fig.savefig('one_time_with_cl_tot.pdf',bbox_inches='tight')
```
---
# Adding Nonlinear Functions to the Layers
> Nonlinear transformations to improve our networks' predictions
Some of the most common nonlinear transformations in a neural network are the ```sigmoid```, ```tanh```, and ```ReLU``` functions.
To add these functions we need to add the following methods to the ```Tensor``` class:
```
def sigmoid(self):
    if (self.autograd):
        return Tensor(1/(1+np.exp(-self.data)),
                      autograd=True,
                      creators=[self],
                      creation_op='sigmoid')
    return Tensor(1/(1+np.exp(-self.data)))

def tanh(self):
    if (self.autograd):
        return Tensor(np.tanh(self.data),
                      autograd=True,
                      creators=[self],
                      creation_op='tanh')
    return Tensor(np.tanh(self.data))

def relu(self):
    ones_and_zeros = self.data > 0
    if (self.autograd):
        return Tensor(self.data * ones_and_zeros,
                      autograd=True,
                      creators=[self],
                      creation_op='relu')
    return Tensor(self.data * ones_and_zeros)
```
And the following conditions to the ```backward()``` method of the Tensor class:
```
if (self.creation_op == 'sigmoid'):
    ones = Tensor(np.ones_like(self.grad.data))
    self.creators[0].backward(self.grad * (self * (ones - self)))
if (self.creation_op == 'tanh'):
    ones = Tensor(np.ones_like(self.grad.data))
    self.creators[0].backward(self.grad * (ones - (self * self)))
if (self.creation_op == 'relu'):
    mask = Tensor(self.data > 0)
    self.creators[0].backward(self.grad * mask)
```
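These analytic derivatives can be sanity-checked numerically with a central difference, independently of the ```Tensor``` class; a small sketch:

```python
import numpy as np

def numerical_grad(f, x, eps=1e-6):
    """Central-difference approximation of df/dx at x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

sigmoid = lambda x: 1 / (1 + np.exp(-x))

x = 0.5
# sigmoid'(x) = s * (1 - s), matching the 'sigmoid' branch above
assert abs(sigmoid(x) * (1 - sigmoid(x)) - numerical_grad(sigmoid, x)) < 1e-8
# tanh'(x) = 1 - tanh(x)**2, matching the 'tanh' branch above
assert abs((1 - np.tanh(x) ** 2) - numerical_grad(np.tanh, x)) < 1e-8
```

This kind of gradient check is a cheap way to catch sign or formula errors before wiring a new operation into the autograd graph.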
```
import numpy as np

class Tensor(object):
    def __init__(self, data,
                 autograd=False,
                 creators=None,
                 creation_op=None,
                 id=None):
        '''
        Initializes a tensor backed by numpy
        @data: a list of numbers
        @creators: list of tensors that took part in creating this tensor
        @creation_op: the operation used to combine the creators into this tensor
        @autograd: determines whether backprop is performed on this tensor
        @id: tensor identifier, used to keep track of its children and parents
        '''
        self.data = np.array(data)
        self.creation_op = creation_op
        self.creators = creators
        self.grad = None
        self.autograd = autograd
        self.children = {}
        # assign an id to the tensor
        if (id is None):
            id = np.random.randint(0, 100000)
        self.id = id
        # keep track of how many children a tensor has
        # if creators is not None
        if (creators is not None):
            # for each parent tensor
            for c in creators:
                # check whether the parent already knows the child tensor's id;
                # if not, add the child's id to the parent
                if (self.id not in c.children):
                    c.children[self.id] = 1
                # if the child already appears among the parent's children
                # and shows up again, increment its
                # appearance count by one
                else:
                    c.children[self.id] += 1

    def all_children_grads_accounted_for(self):
        '''
        Checks whether a tensor has received the correct
        number of gradients from each of its children
        '''
        # print('tensor id:', self.id)
        for id, cnt in self.children.items():
            if (cnt != 0):
                return False
        return True

    def backward(self, grad=None, grad_origin=None):
        '''
        Recursively propagates the gradient to the creators (parents) of the tensor
        @grad: gradient
        @grad_origin
        '''
        if (self.autograd):
            if grad is None:
                grad = Tensor(np.ones_like(self.data))
            if (grad_origin is not None):
                # check to make sure backpropagation is possible
                if (self.children[grad_origin.id] == 0):
                    raise Exception("Cannot backpropagate more than once")
                # or whether a gradient is still expected, in which case
                # decrement the counter for that child
                else:
                    self.children[grad_origin.id] -= 1
            # accumulate gradients from multiple children
            if (self.grad is None):
                self.grad = grad
            else:
                self.grad += grad
            if (self.creators is not None and
                    (self.all_children_grads_accounted_for() or grad_origin is None)):
                if (self.creation_op == 'neg'):
                    self.creators[0].backward(self.grad.__neg__())
                if (self.creation_op == 'add'):
                    # upon receiving self.grad, start backprop
                    self.creators[0].backward(self.grad, grad_origin=self)
                    self.creators[1].backward(self.grad, grad_origin=self)
                if (self.creation_op == "sub"):
                    self.creators[0].backward(Tensor(self.grad.data), self)
                    self.creators[1].backward(Tensor(self.grad.__neg__().data), self)
                if (self.creation_op == "mul"):
                    new = self.grad * self.creators[1]
                    self.creators[0].backward(new, self)
                    new = self.grad * self.creators[0]
                    self.creators[1].backward(new, self)
                if (self.creation_op == "mm"):
                    layer = self.creators[0]    # activations => layer
                    weights = self.creators[1]  # weights
                    new = Tensor.mm(self.grad, weights.transpose())  # grad = delta => delta x weights.T
                    layer.backward(new)
                    new = Tensor.mm(layer.transpose(), self.grad)    # (delta.T x layer).T = layer.T x delta
                    weights.backward(new)
                if (self.creation_op == "transpose"):
                    self.creators[0].backward(self.grad.transpose())
                if ("sum" in self.creation_op):
                    dim = int(self.creation_op.split("_")[1])
                    self.creators[0].backward(self.grad.expand(dim, self.creators[0].data.shape[dim]))
                if ("expand" in self.creation_op):
                    dim = int(self.creation_op.split("_")[1])
                    self.creators[0].backward(self.grad.sum(dim))
                if (self.creation_op == "sigmoid"):
                    ones = Tensor(np.ones_like(self.grad.data))
                    self.creators[0].backward(self.grad * (self * (ones - self)))
                if (self.creation_op == "tanh"):
                    ones = Tensor(np.ones_like(self.grad.data))
                    self.creators[0].backward(self.grad * (ones - (self * self)))
                if (self.creation_op == 'relu'):
                    mask = Tensor(self.data > 0)
                    self.creators[0].backward(self.grad * mask)

    def __neg__(self):
        if (self.autograd):
            return Tensor(self.data * -1,
                          autograd=True,
                          creators=[self],
                          creation_op='neg')
        return Tensor(self.data * -1)

    def __add__(self, other):
        '''
        @other: a Tensor
        '''
        if (self.autograd and other.autograd):
            return Tensor(self.data + other.data,
                          autograd=True,
                          creators=[self, other],
                          creation_op='add')
        return Tensor(self.data + other.data)

    def __sub__(self, other):
        '''
        @other: a Tensor
        '''
        if (self.autograd and other.autograd):
            return Tensor(self.data - other.data,
                          autograd=True,
                          creators=[self, other],
                          creation_op='sub')
        return Tensor(self.data - other.data)

    def __mul__(self, other):
        '''
        @other: a Tensor
        '''
        if (self.autograd and other.autograd):
            return Tensor(self.data * other.data,
                          autograd=True,
                          creators=[self, other],
                          creation_op="mul")
        return Tensor(self.data * other.data)

    def sum(self, dim):
        '''
        Sums across a dimension: given a 2x3 matrix,
        sum(0) adds up the values along the rows,
        producing a 1x3 vector, whereas sum(1)
        produces a 2x1 vector
        @dim: dimension to sum over
        '''
        if (self.autograd):
            return Tensor(self.data.sum(dim),
                          autograd=True,
                          creators=[self],
                          creation_op="sum_" + str(dim))
        return Tensor(self.data.sum(dim))

    def expand(self, dim, copies):
        '''
        Used to backpropagate through a sum().
        Copies data along a dimension
        '''
        trans_cmd = list(range(0, len(self.data.shape)))
        trans_cmd.insert(dim, len(self.data.shape))
        new_data = self.data.repeat(copies).reshape(list(self.data.shape) + [copies]).transpose(trans_cmd)
        if (self.autograd):
            return Tensor(new_data,
                          autograd=True,
                          creators=[self],
                          creation_op="expand_" + str(dim))
        return Tensor(new_data)

    def transpose(self):
        if (self.autograd):
            return Tensor(self.data.transpose(),
                          autograd=True,
                          creators=[self],
                          creation_op="transpose")
        return Tensor(self.data.transpose())

    def mm(self, x):
        if (self.autograd):
            return Tensor(self.data.dot(x.data),
                          autograd=True,
                          creators=[self, x],
                          creation_op="mm")
        return Tensor(self.data.dot(x.data))

    def sigmoid(self):
        if (self.autograd):
            return Tensor(1/(1+np.exp(-self.data)),
                          autograd=True,
                          creators=[self],
                          creation_op='sigmoid')
        return Tensor(1/(1+np.exp(-self.data)))

    def tanh(self):
        if (self.autograd):
            return Tensor(np.tanh(self.data),
                          autograd=True,
                          creators=[self],
                          creation_op='tanh')
        return Tensor(np.tanh(self.data))

    def relu(self):
        ones_and_zeros = self.data > 0
        if (self.autograd):
            return Tensor(self.data * ones_and_zeros,
                          autograd=True,
                          creators=[self],
                          creation_op='relu')
        return Tensor(self.data * ones_and_zeros)

    def __repr__(self):
        return str(self.data.__repr__())

    def __str__(self):
        return str(self.data.__str__())

class SGD(object):
    def __init__(self, parameters, alpha=0.1):
        self.parameters = parameters
        self.alpha = alpha

    def zero(self):
        for p in self.parameters:
            p.grad.data *= 0

    def step(self, zero=True):
        for p in self.parameters:
            p.data = p.data - (self.alpha * p.grad.data)
            if (zero):
                p.grad.data *= 0

class Layer(object):
    def __init__(self):
        self.parameters = list()

    def get_parameters(self):
        return self.parameters

class Linear(Layer):
    def __init__(self, n_inputs, n_outputs):
        super().__init__()
        W = np.random.randn(n_inputs, n_outputs) * np.sqrt(2.0 / (n_inputs))
        self.weight = Tensor(W, autograd=True)
        self.bias = Tensor(np.zeros(n_outputs), autograd=True)
        self.parameters.append(self.weight)
        self.parameters.append(self.bias)

    def forward(self, input):
        return Tensor.mm(input, self.weight) + self.bias.expand(0, len(input.data))

class Sequential(Layer):
    def __init__(self, layers=list()):
        super().__init__()
        self.layers = layers

    def add(self, layer):
        self.layers.append(layer)

    def forward(self, input):
        for layer in self.layers:
            input = layer.forward(input)
        return input

    def get_parameters(self):
        params = list()
        for l in self.layers:
            params += l.get_parameters()
        return params

class Tanh(Layer):
    def __init__(self):
        super().__init__()

    def forward(self, input):
        return input.tanh()

class Sigmoid(Layer):
    def __init__(self):
        super().__init__()

    def forward(self, input):
        return input.sigmoid()

class Relu(Layer):
    def __init__(self):
        super().__init__()

    def forward(self, input):
        return input.relu()

class MSELoss(Layer):
    def __init__(self):
        super().__init__()

    def forward(self, pred, target):
        return ((pred - target) * (pred - target)).sum(0)
```
## A Neural Network with Nonlinear Transformations
```
np.random.seed(0)

data = Tensor(np.array([[0,0],[0,1],[1,0],[1,1]]), autograd=True)  # (4,2)
target = Tensor(np.array([[0],[1],[0],[1]]), autograd=True)        # (4,1)

model = Sequential([Linear(2,3),
                    Tanh(),
                    Linear(3,1),
                    Sigmoid()])
criterion = MSELoss()

# optim = SGD(model.get_parameters(), alpha=0.05)  # Linear
optim = SGD(model.get_parameters(), alpha=1)       # Tanh, Sigmoid

for i in range(10):
    # Predict
    pred = model.forward(data)
    # Compare
    loss = criterion.forward(pred, target)
    # Learn
    loss.backward(Tensor(np.ones_like(loss.data)))
    optim.step()
    print(loss)
```
## Learning XOR
```
np.random.seed(0)

data = Tensor(np.array([[0,0],[0,1],[1,0],[1,1]]), autograd=True)  # (4,2)
target = Tensor(np.array([[0],[1],[1],[0]]), autograd=True)        # (4,1)

model = Sequential([Linear(2,3),
                    Tanh(),
                    Linear(3,1),
                    Sigmoid()])
criterion = MSELoss()

# optim = SGD(model.get_parameters(), alpha=0.05)  # Linear
optim = SGD(model.get_parameters(), alpha=1)       # Tanh, Sigmoid

for i in range(10):
    # Predict
    pred = model.forward(data)
    # Compare
    loss = criterion.forward(pred, target)
    # Learn
    loss.backward(Tensor(np.ones_like(loss.data)))
    optim.step()
    if (i % 1 == 0):
        print(loss)
```
---
# Convolutional Neural Networks
CNNs are a twist on the neural network concept designed specifically to process data with spatial relationships. In the deep neural networks we've seen so far, every node is always connected to every node in the subsequent layer. While spatial relationships CAN be captured, as we've seen with our results on MNIST, those networks were not explicitly built on the assumption that spatial relationships exist. Artificial neural networks are perfectly appropriate for data where the relationships are not spatial.
But for data such as images, it seems crazy to ignore the spatial relationships! For the vast majority of image data, neighboring pixels combined with each other tell us much more than combining pixels from opposite corners of the image. CNNs rely on the assumption that our data has spatial relationships, and they have produced state-of-the-art results, especially in image processing and computer vision.
The fundamental unit of a CNN is a "convolution":

> Image Source: https://github.com/PetarV-/TikZ/tree/master/2D%20Convolution
The key component of the convolution is the kernel, a matrix (K in the image above). The kernel has a shape, 3x3 in this example, but we can choose the size for each convolution. We "slide" the kernel across every 3x3 section of the image, performing item-by-item multiplication. For example, in the above image the 4 highlighted in green is produced by taking the values highlighted in red, multiplying each by the value in the same position in the kernel, and summing the results of those multiplications. Specifically:
```
position: [0,0] [0,1] [0,2] [1,0] [1,1] [1,2] [2,0] [2,1] [2,2]
operation: (1*1) + (0*0) + (0*1) + (1*0) + (1*1) + (0*0) + (1*1) + (1*0) + (1*1) == 4
```
This value is (optionally, but typically) then passed through a non-linearity like ReLU or Sigmoid before it is passed to the next layer.
> Side note: In the literature, you'll discover that in a "true" convolution the kernel is inverted prior to the multiply+sum operation, and that this operation without the inversion is actually called "cross-correlation" by most mathematicians. This matters in some contexts, but we typically ignore it in deep learning because the values of the kernel are the things being fine-tuned, and storing them as "pre-inverted" matrices is computationally efficient compared to inverting the kernel repeatedly.
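The slide-multiply-sum operation described above can be written directly in NumPy. A minimal "valid"-mode sketch (no padding, stride 1; the example grid here is a common illustration and not necessarily the exact one from the figure):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' cross-correlation: slide the kernel over every position where
    it fits entirely inside the image, multiplying elementwise and summing."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

I = np.array([[1, 1, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 1, 1, 1],
              [0, 0, 1, 1, 0],
              [0, 1, 1, 0, 0]])
K = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 0, 1]])
print(conv2d_valid(I, K))  # 3x3 output; the top-left entry is 4
```

A 5x5 input with a 3x3 kernel yields a 3x3 output, which is exactly the 1-pixel shrinkage per side that padding (discussed below) avoids.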
Here is a helpful animation to visualize convolutions:

> Image source: https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d
A convolutional layer has a few important properties:
* **Number of kernels** -- this is similar to the number of nodes in an ANN.
  * Each kernel is trained separately on the input data.
  * Each kernel produces an output layer, sometimes called a feature map.
  * These feature maps are used as input to the next layer.
* **Kernel size** -- these are almost always 3x3 or 5x5.
  * Bigger kernels are more computationally expensive.
  * Bigger kernels have a wider "field of view", which can be helpful.
  * Dilated convolutions can capture a wider field of view at a lower computational cost (see additional resources).
* **Padding** -- notice above that a convolution produces an output layer smaller than the input layer by 1 pixel in each direction. Padding the input (typically with 0 values) allows the convolution to produce an output the same size as the input.
  * Downsampling to smaller sizes isn't always bad.
  * It reduces the computational cost at the next layer.
  * If we don't pad, it limits the possible depth of the network, especially for small inputs.
  * Padding tends to preserve information at the borders. If your images have important features on the edges, padding can improve performance.
* **Stride** -- in the above we "slide" the kernel over by 1 pixel at every step. Increasing the stride increases the amount we slide by.
  * Stride is typically set to 1.
  * Higher values reduce the amount of information captured.
  * Higher values are more computationally efficient, as fewer values are combined per convolution.
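Kernel size, padding, and stride together determine the spatial size of the output, via the standard formula `floor((n + 2p - k) / s) + 1`; a small helper to check this:

```python
def conv_output_size(n, kernel, padding=0, stride=1):
    """Spatial output size of a convolution along one dimension."""
    return (n + 2 * padding - kernel) // stride + 1

# 28x28 input, 3x3 kernel, no padding, stride 1
print(conv_output_size(28, 3))                       # 26
# 'same' padding of 1 keeps the size
print(conv_output_size(28, 3, padding=1))            # 28
# stride 2 roughly halves it
print(conv_output_size(28, 3, padding=1, stride=2))  # 14
```

Running the numbers like this before building a deep stack of layers makes it easy to see when an unpadded network would shrink a small input down to nothing.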
One last important concept before we build a CNN: pooling. Pooling is a tactic used to decrease the resolution of our feature maps, and it is largely a matter of computational efficiency. There are 2 popular kinds, max pooling and average pooling. Pooling layers use a window size, say 2x2, and take either the max or average value within each window to produce the output layer. The windows are almost always square, and the stride is almost always set to the size of the window:

> Image source: https://cs231n.github.io/convolutional-networks/
It is worth noting that pooling has fallen out of favor in a lot of modern architectures. Many machine learning practitioners have started downsampling through convolutions with larger stride sizes instead of pooling.
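A 2x2 max-pool with stride 2 can be sketched in a few lines of NumPy (assuming, for simplicity, input dimensions divisible by the window size):

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2: keep the largest value in each window."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 3, 2, 1],
              [4, 6, 6, 8],
              [3, 1, 1, 0],
              [1, 2, 2, 4]])
print(max_pool_2x2(x))  # -> [[6, 8], [3, 4]]
```

The reshape trick groups each 2x2 block into its own axes so the max can be taken without explicit Python loops, which is why this runs fast on large feature maps.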
### Building Our First CNN
Let's use Keras to build a CNN now.
```
# Setting up Fashion MNIST, this should look familiar:
import numpy as np
from matplotlib import pyplot as plt
from keras.datasets import fashion_mnist
from keras.models import Sequential
from keras.layers import Dense, MaxPooling2D, Conv2D, Flatten, Dropout
from keras.utils import to_categorical
# For examining results
from sklearn.metrics import confusion_matrix
import seaborn as sn
num_classes = 10
image_size = 784
(training_images, training_labels), (test_images, test_labels) = fashion_mnist.load_data()
training_data = training_images.reshape(training_images.shape[0], image_size)
test_data = test_images.reshape(test_images.shape[0], image_size)
training_labels = to_categorical(training_labels, num_classes)
test_labels = to_categorical(test_labels, num_classes)
conv_training_data = training_images.reshape(60000, 28, 28, 1)
conv_test_data = test_images.reshape(10000, 28, 28, 1)
def plot_training_history(history, model, eval_images=False):
figure = plt.figure()
plt.subplot(1, 2, 1)
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')
plt.tight_layout()
plt.subplot(1, 2, 2)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')
plt.tight_layout()
figure.tight_layout()
plt.show()
if eval_images:
loss, accuracy = model.evaluate(conv_test_data, test_labels, verbose=False)
else:
loss, accuracy = model.evaluate(test_data, test_labels, verbose=False)
print(f'Test loss: {loss:.3}')
print(f'Test accuracy: {accuracy:.3}')
```
This time, we're using a new dataset called "Fashion MNIST". Like the handwritten digits dataset, this is a set of grayscale images each 28 by 28 pixels. However, the subject of these images is very different from the handwritten digits dataset. Instead, these are images of fashion objects. Let's take a look at some:
```
# Let's visualize the first 100 images from the dataset
for i in range(100):
ax = plt.subplot(10, 10, i+1)
ax.axis('off')
plt.imshow(training_images[i], cmap='Greys')
i = 0 # So we can look at one at a time...
# So we can see the label
label_map = {
0: 'T-shirt/top',
1: 'Trouser',
2: 'Pullover',
3: 'Dress',
4: 'Coat',
5: 'Sandal',
6: 'Shirt',
7: 'Sneaker',
8: 'Bag',
9: 'Ankle boot'
}
label = np.argmax(training_labels[i])
plt.title(label_map[label])
plt.imshow(training_images[i], cmap='Greys')
i += 1
```
Once again, there are 10 classes of image:

| Label | Class |
|-------|-------------|
| 0 | T-shirt/top |
| 1 | Trouser |
| 2 | Pullover |
| 3 | Dress |
| 4 | Coat |
| 5 | Sandal |
| 6 | Shirt |
| 7 | Sneaker |
| 8 | Bag |
| 9 | Ankle boot |
As you might guess, this is a bigger challenge than the handwritten digits. First, at 28 by 28 pixels much more fidelity is lost in this dataset than in the digits dataset. Second, more pixels matter. In the digits dataset we rarely cared about the weight of a pixel; more or less what mattered was whether it was white or something else, since we mostly cared about the edges between where someone had drawn and where they had not. Now internal differences in grayscale intensity are more informative, and they make up a larger portion of the image.
Let's quickly verify that a standard ANN that worked well in the context of MNIST fails in Fashion MNIST:
```
# Recall from the Optimizers section that we were able to get 97+ test accuracy with this network:
model = Sequential()
model.add(Dense(units=64, activation='relu', input_shape=(image_size,)))
model.add(Dense(units=64, activation='relu'))
model.add(Dense(units=64, activation='relu'))
model.add(Dense(units=64, activation='relu'))
model.add(Dense(units=64, activation='relu'))
model.add(Dense(units=64, activation='relu'))
model.add(Dense(units=num_classes, activation='softmax'))
# nadam performed best, as did categorical cross entropy in our previous experiments...
model.compile(optimizer='nadam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(training_data, training_labels, batch_size=128, epochs=10, verbose=False, validation_split=.1)
plot_training_history(history, model)
```
Not bad, but not nearly as good as we were able to achieve with regular MNIST. Plus some overfitting concerns are showing themselves in the chart...
```
# The model is still sequential, nothing new here.
model = Sequential()
# add model layers. The first parameter is the number of filters to make at each layer.
# Meaning here the result of the first layer is 64 different "feature maps" or "activation maps"
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu', padding='same', input_shape=(28,28,1)))
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', padding='same',))
model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))
# Let's fit it with identical parameters and see what happens...
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# OOPS! Previously we flattened our training data, but now we INTEND to have 2D input data.
# training_data => 784 long vector
# training_images => 28 x 28 matrix
# Plus one small caveat: we have to indicate the number of color channels explicitly as a dimension...
history = model.fit(conv_training_data, training_labels, batch_size=128, epochs=3, verbose=True, validation_split=.1)
plot_training_history(history, model, eval_images=True)
# Where did our evaluator do poorly?
predictions = model.predict(conv_test_data)
cm = confusion_matrix(np.argmax(predictions, axis=1), np.argmax(test_labels, axis=1))
plt.figure(figsize = (15, 15))
name_labels = [
'T-shirt/top',
'Trouser',
'Pullover',
'Dress',
'Coat',
'Sandal',
'Shirt',
'Sneaker',
'Bag',
'Ankle boot'
]
sn.heatmap(cm, annot=True, xticklabels=name_labels, yticklabels=name_labels)
plt.show()
# Let's make a few small changes and see what happens...
model = Sequential()
# Note, fewer filters and a bigger kernel, plus a pooling layer
model.add(Conv2D(32, kernel_size=(5, 5), activation='relu', padding='same', input_shape=(28,28,1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Note, more filters and a pooling
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# 2 dense layers with dropout before the final.
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(conv_training_data, training_labels, batch_size=128, epochs=5, verbose=True, validation_split=.1)
plot_training_history(history, model, eval_images=True)
predictions = model.predict(conv_test_data)
cm = confusion_matrix(np.argmax(predictions, axis=1), np.argmax(test_labels, axis=1))
plt.figure(figsize = (15, 15))
sn.heatmap(cm, annot=True, xticklabels=name_labels, yticklabels=name_labels)
plt.show()
```
90% is pretty respectable, especially considering how speedy training was, and given that we didn't apply any data augmentation. [Some state-of-the-art networks get around 93-95% accuracy](https://github.com/zalandoresearch/fashion-mnist). It's also worth noting that we only really fail at distinguishing pullovers from coats, and tops from t-shirts.
```
# Let's get rid of pooling and try using striding to do the downsampling instead.
model = Sequential()
model.add(Conv2D(32, kernel_size=(5, 5), strides=(2,2), activation='relu', padding='same', input_shape=(28,28,1)))
model.add(Conv2D(64, kernel_size=(3, 3), strides=(2,2), activation='relu', padding='same'))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(rate=0.2))
model.add(Dense(num_classes, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(conv_training_data, training_labels, batch_size=128, epochs=5, verbose=True, validation_split=.1)
plot_training_history(history, model, eval_images=True)
predictions = model.predict(conv_test_data)
cm = confusion_matrix(np.argmax(predictions, axis=1), np.argmax(test_labels, axis=1))
plt.figure(figsize = (15, 15))
sn.heatmap(cm, annot=True, xticklabels=name_labels, yticklabels=name_labels)
plt.show()
# Sweet, faster with similar performance, though it looks a bit at risk of overfitting.
# Let's try one more:
model = Sequential()
# Downsample on the first layer via strides.
model.add(Conv2D(32, kernel_size=(5, 5), strides=(2,2), activation='relu', padding='same', input_shape=(28,28,1)))
# Once downsampled, don't downsample further (strides back to the default (1,1))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(Flatten())
# NOTE, because we're downsampling much less I reduced the number of nodes in this layer.
# keeping it at 256 explodes the total parameter count and slows down learning a lot.
model.add(Dense(64, activation='relu'))
model.add(Dropout(rate=0.2))
model.add(Dense(num_classes, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(conv_training_data, training_labels, batch_size=128, epochs=5, verbose=True, validation_split=.1)
plot_training_history(history, model, eval_images=True)
# Pretty similar results. This would be a good place to apply data augmentation or collect a bit more data.
# We could continue to experiment with different models and probably find some small improvements as well.
# Plus, these models might all improve somewhat if we kept training. They are overfitting a bit, but validation
# scores are still rising by the end.
```
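As a quick sanity check on the striding arithmetic: with `padding='same'`, a Conv2D's spatial output size depends only on the stride, not the kernel size. A small sketch (plain Python, no Keras required; the helper name is ours):

```python
import math

def conv2d_same_out(size, stride):
    """Spatial output size of a Conv2D with padding='same'.
    With 'same' padding the kernel size does not matter: out = ceil(in / stride)."""
    return math.ceil(size / stride)

# The strided model above: 28x28 -> 14x14 -> 7x7, so Flatten sees 7 * 7 * 64 values.
s1 = conv2d_same_out(28, 2)   # after the 5x5, stride-2 layer
s2 = conv2d_same_out(s1, 2)   # after the 3x3, stride-2 layer
print(s1, s2, s2 * s2 * 64)
```

This is why dropping the second stride in the last model inflates the flattened size, and why the dense layer there was shrunk to 64 nodes.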
| github_jupyter |
# Lasso and Ridge Regression
**Lasso regression:** It is a type of linear regression that uses shrinkage. Shrinkage is where data values are shrunk towards a central point, like the mean.
<hr>
**Ridge Regression:** It is a way to create a predictive and explanatory model when the number of predictor variables in a set exceeds the number of observations, or when a data set has multicollinearity (correlations between predictor variables).
<hr>
- With this brief introduction to Lasso and Ridge, in this notebook we are going to predict a person's height given their age.
**Dataset can be directly downloaded from <a href="https://archive.org/download/ages-and-heights/AgesAndHeights.pkl">here</a>.**
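To make "shrinkage" concrete before touching the data, here is a minimal NumPy sketch (an illustration of the penalties, not sklearn's actual solver) of how, for a single standardized feature, the L1 penalty behind Lasso soft-thresholds the least-squares coefficient while the L2 penalty behind Ridge rescales it:

```python
import numpy as np

def ridge_shrink(beta_ols, lam):
    # L2 penalty: shrinks the OLS coefficient towards 0 by a constant factor
    return beta_ols / (1.0 + lam)

def lasso_shrink(beta_ols, lam):
    # L1 penalty: soft-thresholding; small coefficients are set exactly to 0
    return np.sign(beta_ols) * max(abs(beta_ols) - lam, 0.0)

beta = 0.8
print(ridge_shrink(beta, 1.0))  # smaller, but never exactly zero
print(lasso_shrink(beta, 1.0))  # driven all the way to zero
```

The key qualitative difference: Lasso can zero coefficients out entirely (feature selection), while Ridge only shrinks them towards zero.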
## Importing Libraries
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import sklearn
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Lasso
from sklearn.linear_model import Ridge
```
## Importing Dataset
```
!wget 'https://archive.org/download/ages-and-heights/AgesAndHeights.pkl'
raw_data = pd.read_pickle(
'AgesAndHeights.pkl'
) # Dataset Link: https://archive.org/download/ages-and-heights/AgesAndHeights.pkl
raw_data
raw_data.describe()
raw_data.info()
```
## Data Visualisation
```
sns.histplot(raw_data['Age'])
plt.show()
sns.histplot(raw_data['Height'], kde=False, bins=10)
plt.show()
```
## Data Preprocessing
```
cleaned = raw_data[raw_data['Age'] > 0]
cleaned.shape
# 7 rows in the dataset had an Age below 0, which is clearly invalid, so they were dropped.
sns.histplot(cleaned['Age'], kde=True)
plt.show()
cleaned.describe()
cleaned.info()
sns.scatterplot(x=cleaned['Age'],
                y=cleaned['Height'],
                label='Age, Height')
plt.title('Age VS Height', color='blue')
plt.xlabel('Age', color='green')
plt.ylabel('Height', color='green')
plt.legend()
plt.show()
```
**Scaling the Data in the range of (0, 1) to fit the model easily.**
```
scaler = MinMaxScaler()
cleaned_data = pd.DataFrame(scaler.fit_transform(cleaned))
cleaned_data.columns = ['Age', 'Height']
cleaned_data
```
## Model Building
```
age = cleaned_data['Age']
height = cleaned_data['Height']
```
### Lasso
```
model_l = Lasso()
X = cleaned_data[['Age']]
y = cleaned_data[['Height']]
model_l.fit(X, y)
```
#### Lasso - Predict
```
np.float64(model_l.predict([[16]]) * 100)
```
### Ridge
```
model_r = Ridge()
model_r.fit(X, y)
```
#### Ridge - Predict
```
np.float64(model_r.predict([[16]]) * 10)
```
### With and Without Regularisation
We'll now build and train the model by hand. Here, `Simple Linear Regression` is used. Before that, let's set up the necessary environment to build the model.
Actual -> $y = \alpha + \beta x + \epsilon$
Predicted -> $\hat{y} = \alpha + \beta x$
```
# random parameter values
parameters = {'alpha': 40, 'beta': 4}
# y_hat using formulas mentioned above
def y_hat(age, params):
alpha = params['alpha']
beta = params['beta']
return alpha + beta * age
age = int(input('Enter age: '))
y_hat(age, parameters)
# learning better parameters for an optimal fit (closed-form least squares)
def learn_parameters(data, params):
x, y = data['Age'], data['Height']
x_bar, y_bar = x.mean(), y.mean()
x, y = x.to_numpy(), y.to_numpy()
beta = sum(((x - x_bar) * (y - y_bar)) / sum((x - x_bar)**2))
alpha = y_bar - beta * x_bar
params['alpha'] = alpha
params['beta'] = beta
# new parameters derived from 'learn_parameters' function
new_parameter = {'alpha': -2, 'beta': 1000}
learn_parameters(cleaned, new_parameter)
new_parameter
# general untrained predictions
spaced_ages = list(range(19))
spaced_untrained_predictions = [y_hat(x, parameters) for x in spaced_ages]
print(spaced_untrained_predictions)
# Untrained Predictions
ages = cleaned_data[['Age']] * 17.887852
heights = cleaned_data[['Height']] * 68.170414
plt.scatter(ages, heights, label='Raw Data')
plt.plot(spaced_ages,
spaced_untrained_predictions,
label='Untrained Predictions',
color='green')
plt.title('Height VS Age')
plt.xlabel('Age[Years]')
plt.ylabel('Height[Inches]')
plt.legend()
plt.show()
# Trained Predictions
spaced_trained_predictions = [y_hat(x, new_parameter) for x in spaced_ages]
print('Trained Predicted Values: ',spaced_trained_predictions)
plt.scatter(ages,heights, label='Raw Data')
plt.plot(spaced_ages, spaced_untrained_predictions, label = 'Untrained Predictions', color = 'green')
plt.plot(spaced_ages, spaced_trained_predictions, label = 'Trained Predictions', color = 'red')
plt.title('Height VS Age')
plt.xlabel('Age[Years]')
plt.ylabel('Height[Inches]')
plt.legend()
plt.show()
# We can see that the result is not optimal but has changed significantly from a normal Linear Regression Type Model
```
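As a sanity check, the closed-form estimates computed by `learn_parameters` above are the standard ordinary-least-squares formulas, so on synthetic linear data (made up here purely for the check) they should agree with `np.polyfit`:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 18, size=200)
y = 25.0 + 2.5 * x + rng.normal(0, 1.0, size=200)

# Same closed form as learn_parameters above
beta = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha = y.mean() - beta * x.mean()

slope, intercept = np.polyfit(x, y, deg=1)
print(alpha, beta)  # should match polyfit's intercept and slope
```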
# Summary
_Input (Age):_ 16
| **Model Name** | **Results** |
| :------------: | :---------- |
| Lasso | 42.3622 |
| Ridge | 128.0477 |
- We can see from the above plot how a normal Linear Regression performs and how a Linear Regression with either L1 or L2 norm Regularisations improves the predictions.
- From the above table, we can conclude that the Ridge model outperforms Lasso by a huge margin. **Note**, however, that this holds for this dataset and may not hold for a different one.
- It also satisfies the definition of Lasso and Ridge Regression, mentioned at the start of the notebook.
**P.S.** A regularised model does not outperform plain Linear Regression in every case. It does in most cases, but there are exceptions where the reverse holds.
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Plot parameters
sns.set()
%pylab inline
pylab.rcParams['figure.figsize'] = (4, 4)
plt.rcParams['xtick.major.size'] = 0
plt.rcParams['ytick.major.size'] = 0
# Avoid inaccurate floating values (for inverse matrices in dot product for instance)
# See https://stackoverflow.com/questions/24537791/numpy-matrix-inversion-rounding-errors
np.set_printoptions(suppress=True)
%%html
<style>
.pquote {
text-align: left;
margin: 40px 0 40px auto;
width: 70%;
font-size: 1.5em;
font-style: italic;
display: block;
line-height: 1.3em;
color: #5a75a7;
font-weight: 600;
border-left: 5px solid rgba(90, 117, 167, .1);
padding-left: 6px;
}
.notes {
font-style: italic;
display: block;
margin: 40px 10%;
}
</style>
```
$$
\newcommand\bs[1]{\boldsymbol{#1}}
\newcommand\norm[1]{\left\lVert#1\right\rVert}
$$
<span class='notes'>
This content is part of a series following the chapter 2 on linear algebra from the [Deep Learning Book](http://www.deeplearningbook.org/) by Goodfellow, I., Bengio, Y., and Courville, A. (2016). It aims to provide intuitions/drawings/python code on mathematical theories and is constructed as my understanding of these concepts. You can check the syllabus in the [introduction post](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-Introduction/).
</span>
# Introduction
Chapter [2.4](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.4-Linear-Dependence-and-Span/) was heavy, but this one is light. We will however see an important concept for machine learning and deep learning. The norm is what is generally used to evaluate the error of a model. For instance, it is used to calculate the error between the output of a neural network and what is expected (the actual label or value). You can think of the norm as the length of a vector. It is a function that maps a vector to a non-negative value. Different functions can be used, and we will see a few examples.
# 2.5 Norms
Norms are any functions that are characterized by the following properties:
1- Norms are non-negative values. If you think of a norm as a length, you can easily see why it can't be negative.
2- Norms are $0$ if and only if the vector is the zero vector
3- Norms respect the triangle inequality. See below.
4- $\norm{k\cdot \bs{u}}=|k|\cdot\norm{\bs{u}}$. The norm of a vector multiplied by a scalar is equal to the absolute value of this scalar multiplied by the norm of the vector.
It is usually written with two horizontal bars: $\norm{\bs{x}}$
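We can check these properties numerically on random vectors (a check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(42)
for _ in range(100):
    u, v = rng.normal(size=3), rng.normal(size=3)
    k = rng.normal()
    # 1- non-negativity
    assert np.linalg.norm(u) >= 0
    # 3- triangle inequality (small slack for floating point)
    assert np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v) + 1e-12
    # 4- absolute homogeneity
    assert np.isclose(np.linalg.norm(k * u), abs(k) * np.linalg.norm(u))
print("all properties hold on 100 random vectors")
```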
# The triangle inequality
The norm of a sum of vectors is less than or equal to the sum of the norms of these vectors.
$$
\norm{\bs{u}+\bs{v}} \leq \norm{\bs{u}}+\norm{\bs{v}}
$$
### Example 1.
$$
\bs{u}=
\begin{bmatrix}
1 & 6
\end{bmatrix}
$$
and
$$
\bs{v}=
\begin{bmatrix}
4 & 2
\end{bmatrix}
$$
$$
\norm{\bs{u}+\bs{v}} = \sqrt{(1+4)^2+(6+2)^2} = \sqrt{89} \approx 9.43
$$
$$
\norm{\bs{u}}+\norm{\bs{v}} = \sqrt{1^2+6^2}+\sqrt{4^2+2^2} = \sqrt{37}+\sqrt{20} \approx 10.55
$$
Let's check these results:
```
u = np.array([1, 6])
u
v = np.array([4, 2])
v
u+v
np.linalg.norm(u+v)
np.linalg.norm(u)+np.linalg.norm(v)
u = [0,0,1,6]
v = [0,0,4,2]
u_bis = [1,6,v[2],v[3]]
w = [0,0,5,8]
plt.quiver([u[0], u_bis[0], w[0]],
[u[1], u_bis[1], w[1]],
[u[2], u_bis[2], w[2]],
[u[3], u_bis[3], w[3]],
angles='xy', scale_units='xy', scale=1, color=sns.color_palette())
# plt.rc('text', usetex=True)
plt.xlim(-2, 6)
plt.ylim(-2, 9)
plt.axvline(x=0, color='grey')
plt.axhline(y=0, color='grey')
plt.text(-1, 3.5, r'$||\vec{u}||$', color=sns.color_palette()[0], size=20)
plt.text(2.5, 7.5, r'$||\vec{v}||$', color=sns.color_palette()[1], size=20)
plt.text(2, 2, r'$||\vec{u}+\vec{v}||$', color=sns.color_palette()[2], size=20)
plt.show()
plt.close()
```
<span class='pquote'>
Geometrically, this simply means that the shortest path between two points is a straight line
</span>
# P-norms: general rules
Here is the recipe to get the $p$-norm of a vector:
1. Calculate the absolute value of each element
2. Take the power $p$ of these absolute values
3. Sum all these powered absolute values
4. Take the power $\frac{1}{p}$ of this result
This is expressed more concisely with the formula:
$$
\norm{\bs{x}}_p=(\sum_i|\bs{x}_i|^p)^{1/p}
$$
This will be clear with examples using these widely used $p$-norms.
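The four-step recipe translates directly into code; here is a minimal sketch (the function name is ours), compared against `np.linalg.norm`:

```python
import numpy as np

def p_norm(x, p):
    """Follow the recipe: absolute value -> power p -> sum -> power 1/p."""
    return np.sum(np.abs(x) ** p) ** (1 / p)

x = np.array([3.0, -4.0])
print(p_norm(x, 1))  # the L1 norm: 3 + 4
print(p_norm(x, 2))  # the L2 norm: sqrt(9 + 16)
# agrees with numpy for any p >= 1
assert np.isclose(p_norm(x, 3), np.linalg.norm(x, ord=3))
```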
# The $L^0$ norm
Raising any non-zero value to the power $0$ gives $1$, while $0^0$ is taken to be $0$ here. Therefore this norm corresponds to the number of non-zero elements in the vector. It is not really a norm because if you multiply the vector by $\alpha$, this number stays the same, which violates rule 4 above.
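NumPy follows the same convention: with `ord=0`, `np.linalg.norm` counts the non-zero elements, and scaling the vector leaves the count unchanged, which is exactly why rule 4 fails:

```python
import numpy as np

x = np.array([0.0, 2.0, 0.0, -3.0])
print(np.linalg.norm(x, ord=0))        # two non-zero entries
print(np.linalg.norm(10 * x, ord=0))   # still the same count: homogeneity fails
```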
# The $L^1$ norm
$p=1$ so this norm is simply the sum of the absolute values:
$$
\norm{\bs{x}}_1=\sum_{i} |\bs{x}_i|
$$
# The Euclidean norm ($L^2$ norm)
The Euclidean norm is the $p$-norm with $p=2$. Along with the squared $L^2$ norm, it is probably the most widely used norm.
$$
\norm{\bs{x}}_2=(\sum_i \bs{x}_i^2)^{1/2}\Leftrightarrow \sqrt{\sum_i \bs{x}_i^2}
$$
Let's see an example of this norm:
### Example 2.
Graphically, the Euclidean norm corresponds to the length of the vector from the origin to the point obtained by linear combination (like applying the Pythagorean theorem).
$$
\bs{u}=
\begin{bmatrix}
3 \\\\
4
\end{bmatrix}
$$
$$
\begin{align*}
\norm{\bs{u}}_2 &=\sqrt{|3|^2+|4|^2}\\\\
&=\sqrt{25}\\\\
&=5
\end{align*}
$$
So the $L^2$ norm is $5$.
The $L^2$ norm can be calculated with the `linalg.norm` function from numpy. We can check the result:
```
np.linalg.norm([3, 4])
```
Here is the graphical representation of the vectors:
```
u = [0,0,3,4]
plt.quiver([u[0]],
[u[1]],
[u[2]],
[u[3]],
angles='xy', scale_units='xy', scale=1)
plt.xlim(-2, 4)
plt.ylim(-2, 5)
plt.axvline(x=0, color='grey')
plt.axhline(y=0, color='grey')
plt.annotate('', xy = (3.2, 0), xytext = (3.2, 4),
arrowprops=dict(edgecolor='black', arrowstyle = '<->'))
plt.annotate('', xy = (0, -0.2), xytext = (3, -0.2),
arrowprops=dict(edgecolor='black', arrowstyle = '<->'))
plt.text(1, 2.5, r'$\vec{u}$', size=18)
plt.text(3.3, 2, r'$\vec{u}_y$', size=18)
plt.text(1.5, -1, r'$\vec{u}_x$', size=18)
plt.show()
plt.close()
```
In this case, the vector is in a 2-dimensional space, but this also holds in higher dimensions.
$$
u=
\begin{bmatrix}
u_1\\\\
u_2\\\\
\cdots \\\\
u_n
\end{bmatrix}
$$
$$
||u||_2 = \sqrt{u_1^2+u_2^2+\cdots+u_n^2}
$$
# The squared Euclidean norm (squared $L^2$ norm)
$$
\sum_i|\bs{x}_i|^2
$$
The squared $L^2$ norm is convenient because it removes the square root and we end up with the simple sum of the squared values of the vector.
The squared Euclidean norm is widely used in machine learning, partly because it can be calculated with the vector operation $\bs{x}^\text{T}\bs{x}$. There can be a performance gain due to this optimization. See [here](https://softwareengineering.stackexchange.com/questions/312445/why-does-expressing-calculations-as-matrix-multiplications-make-them-faster) and [here](https://www.quora.com/What-makes-vector-operations-faster-than-for-loops) for more details.
### Example 3.
$$
\bs{x}=
\begin{bmatrix}
2 \\\\
5 \\\\
3 \\\\
3
\end{bmatrix}
$$
$$
\bs{x}^\text{T}=
\begin{bmatrix}
2 & 5 & 3 & 3
\end{bmatrix}
$$
$$
\begin{align*}
\bs{x}^\text{T}\bs{x}&=
\begin{bmatrix}
2 & 5 & 3 & 3
\end{bmatrix} \times
\begin{bmatrix}
2 \\\\
5 \\\\
3 \\\\
3
\end{bmatrix}\\\\
&= 2\times 2 + 5\times 5 + 3\times 3 + 3\times 3= 47
\end{align*}
$$
```
x = np.array([[2], [5], [3], [3]])
x
squaredEuclideanNorm = x.T.dot(x)  # note: x.T.dot(x) gives the *squared* Euclidean norm
squaredEuclideanNorm
np.linalg.norm(x)**2
```
It works!
## Derivative of the squared $L^2$ norm
Another advantage of the squared $L^2$ norm is that its partial derivatives are easily computed:
$$
u=
\begin{bmatrix}
u_1\\\\
u_2\\\\
\cdots \\\\
u_n
\end{bmatrix}
$$
$$
\norm{u}_2^2 = u_1^2+u_2^2+\cdots+u_n^2
$$
$$
\begin{cases}
\dfrac{d\norm{u}_2^2}{du_1} = 2u_1\\\\
\dfrac{d\norm{u}_2^2}{du_2} = 2u_2\\\\
\cdots\\\\
\dfrac{d\norm{u}_2^2}{du_n} = 2u_n
\end{cases}
$$
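We can verify this $2u$ gradient numerically with central finite differences:

```python
import numpy as np

def squared_l2(u):
    return np.sum(u ** 2)

u = np.array([1.0, -2.0, 3.0])
grad_analytic = 2 * u

# central finite differences along each coordinate axis
eps = 1e-6
grad_numeric = np.array([
    (squared_l2(u + eps * e) - squared_l2(u - eps * e)) / (2 * eps)
    for e in np.eye(len(u))
])
print(grad_analytic, grad_numeric)
```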
## Derivative of the $L^2$ norm
In the case of the $L^2$ norm, the derivative is more complicated and takes every elements of the vector into account:
$$
\norm{u}_2 = \sqrt{(u_1^2+u_2^2+\cdots+u_n^2)} = (u_1^2+u_2^2+\cdots+u_n^2)^{\frac{1}{2}}
$$
$$
\begin{align*}
\dfrac{d\norm{u}_2}{du_1} &=
\dfrac{1}{2}(u_1^2+u_2^2+\cdots+u_n^2)^{\frac{1}{2}-1}\cdot
\dfrac{d}{du_1}(u_1^2+u_2^2+\cdots+u_n^2)\\\\
&=\dfrac{1}{2}(u_1^2+u_2^2+\cdots+u_n^2)^{-\frac{1}{2}}\cdot
\dfrac{d}{du_1}(u_1^2+u_2^2+\cdots+u_n^2)\\\\
&=\dfrac{1}{2}\cdot\dfrac{1}{(u_1^2+u_2^2+\cdots+u_n^2)^{\frac{1}{2}}}\cdot
\dfrac{d}{du_1}(u_1^2+u_2^2+\cdots+u_n^2)\\\\
&=\dfrac{1}{2}\cdot\dfrac{1}{(u_1^2+u_2^2+\cdots+u_n^2)^{\frac{1}{2}}}\cdot
2\cdot u_1\\\\
&=\dfrac{u_1}{\sqrt{(u_1^2+u_2^2+\cdots+u_n^2)}}\\\\
\end{align*}
$$
$$
\begin{cases}
\dfrac{d\norm{u}_2}{du_1} = \dfrac{u_1}{\sqrt{(u_1^2+u_2^2+\cdots+u_n^2)}}\\\\
\dfrac{d\norm{u}_2}{du_2} = \dfrac{u_2}{\sqrt{(u_1^2+u_2^2+\cdots+u_n^2)}}\\\\
\cdots\\\\
\dfrac{d\norm{u}_2}{du_n} = \dfrac{u_n}{\sqrt{(u_1^2+u_2^2+\cdots+u_n^2)}}\\\\
\end{cases}
$$
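The same finite-difference check works for the $L^2$ norm itself, whose gradient is $u$ divided by its norm (and therefore always a unit vector):

```python
import numpy as np

def l2(u):
    return np.sqrt(np.sum(u ** 2))

u = np.array([1.0, -2.0, 3.0])
grad_analytic = u / l2(u)

eps = 1e-6
grad_numeric = np.array([
    (l2(u + eps * e) - l2(u - eps * e)) / (2 * eps)
    for e in np.eye(len(u))
])
print(grad_analytic, grad_numeric)
```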
One problem with the squared $L^2$ norm is that it hardly discriminates between $0$ and small values because the function increases slowly near zero.
We can see this by graphically comparing the squared $L^2$ norm with the $L^2$ norm. The $z$-axis corresponds to the norm and the $x$- and $y$-axis correspond to two parameters. The same thing is true with more than 2 dimensions but it would be hard to visualize it.
$L^2$ norm:
<img src="images/L2Norm.png" alt="L2Norm" width="500">
Squared $L^2$ norm:
<img src="images/squaredL2Norm.png" alt="squaredL2Norm" width="500">
$L^1$ norm:
<img src="images/L1Norm.png" alt="L1Norm" width="500">
These plots were made with the help of this [website](https://academo.org/demos/3d-surface-plotter/). Go plot these norms yourself if you want to rotate them and get a better feel for their shapes.
# The max norm
It is the $L^\infty$ norm and corresponds to the greatest absolute value among the elements of the vector.
$$
\norm{\bs{x}}_\infty = \max\limits_i|x_i|
$$
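For example:

```python
import numpy as np

x = np.array([2.0, -7.0, 4.0])
print(np.max(np.abs(x)))              # the max norm computed by hand
print(np.linalg.norm(x, ord=np.inf))  # the same thing via numpy
```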
# Matrix norms: the Frobenius norm
$$
\norm{\bs{A}}_F=\sqrt{\sum_{i,j}A^2_{i,j}}
$$
This is equivalent to taking the $L^2$ norm of the matrix after flattening it.
The same NumPy function can be used:
```
A = np.array([[1, 2], [6, 4], [3, 2]])
A
np.linalg.norm(A)
```
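We can confirm the flattening equivalence numerically with the same matrix:

```python
import numpy as np

A = np.array([[1, 2], [6, 4], [3, 2]])
frobenius = np.linalg.norm(A)           # Frobenius norm: numpy's default for matrices
flattened = np.linalg.norm(A.flatten()) # L2 norm of the flattened matrix
print(frobenius, flattened)
```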
# Expression of the dot product with norms
$$
\bs{x}^\text{T}\bs{y} = \norm{\bs{x}}_2\cdot\norm{\bs{y}}_2\cos\theta
$$
### Example 4.
$$
\bs{x}=
\begin{bmatrix}
0 \\\\
2
\end{bmatrix}
$$
and
$$
\bs{y}=
\begin{bmatrix}
2 \\\\
2
\end{bmatrix}
$$
```
x = [0,0,0,2]
y = [0,0,2,2]
plt.xlim(-2, 4)
plt.ylim(-2, 5)
plt.axvline(x=0, color='grey', zorder=0)
plt.axhline(y=0, color='grey', zorder=0)
plt.quiver([x[0], y[0]],
[x[1], y[1]],
[x[2], y[2]],
[x[3], y[3]],
angles='xy', scale_units='xy', scale=1)
plt.text(-0.5, 1, r'$\vec{x}$', size=18)
plt.text(1.5, 0.5, r'$\vec{y}$', size=18)
plt.show()
plt.close()
```
We took this example for its simplicity. As we can see, the angle $\theta$ is equal to 45°.
$$
\bs{x^\text{T}y}=
\begin{bmatrix}
0 & 2
\end{bmatrix} \cdot
\begin{bmatrix}
2 \\\\
2
\end{bmatrix} =
0\times2+2\times2 = 4
$$
and
$$
\norm{\bs{x}}_2=\sqrt{0^2+2^2}=\sqrt{4}=2
$$
$$
\norm{\bs{y}}_2=\sqrt{2^2+2^2}=\sqrt{8}
$$
$$
2\times\sqrt{8}\times \cos(45°)=4
$$
Here are the operations using numpy:
```
# Note: np.cos takes the angle in radians
np.cos(np.deg2rad(45))*2*np.sqrt(8)
```
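The identity can also be inverted to recover the angle between the two vectors from their dot product and norms:

```python
import numpy as np

x = np.array([0.0, 2.0])
y = np.array([2.0, 2.0])

cos_theta = x.dot(y) / (np.linalg.norm(x) * np.linalg.norm(y))
theta = np.rad2deg(np.arccos(cos_theta))
print(theta)  # the 45 degrees we saw geometrically
```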
<span class='notes'>
Feel free to drop me an email or a comment. The syllabus of this series can be found [in the introduction post](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-Introduction/). All the notebooks can be found on [Github](https://github.com/hadrienj/deepLearningBook-Notes).
</span>
# References
- https://en.wikipedia.org/wiki/Norm_(mathematics)
- [3D plots](https://academo.org/demos/3d-surface-plotter/)
| github_jupyter |
## Text Similarity using Word Embeddings
In this notebook we're going to play around with prebuilt word embeddings and do some fun calculations:
```
%matplotlib inline
import os
from keras.utils import get_file
import gensim
import subprocess
import numpy as np
import matplotlib.pyplot as plt
from IPython.core.pylabtools import figsize
figsize(10, 10)
from sklearn.manifold import TSNE
import json
from collections import Counter
from itertools import chain
```
We'll start by downloading a pretrained model from Google News. We're using `zcat` to unzip the file, so you need to make sure you have that installed or replace it with something else.
```
MODEL = 'GoogleNews-vectors-negative300.bin'
path = get_file(MODEL + '.gz', 'https://s3.amazonaws.com/dl4j-distribution/%s.gz' % MODEL)
unzipped = os.path.join('generated', MODEL)
if not os.path.isfile(unzipped):
with open(unzipped, 'wb') as fout:
zcat = subprocess.Popen(['zcat'],
stdin=open(path),
stdout=fout
)
zcat.wait()
model = gensim.models.KeyedVectors.load_word2vec_format(unzipped, binary=True)
```
Let's take this model for a spin by looking at what things are most similar to espresso. As expected, coffee-like items show up:
```
model.most_similar(positive=['espresso'])
```
Now for the famous equation: man is to woman as king is to what? We create a quick method for these calculations here:
```
def A_is_to_B_as_C_is_to(a, b, c, topn=1):
a, b, c = map(lambda x:x if type(x) == list else [x], (a, b, c))
res = model.most_similar(positive=b + c, negative=a, topn=topn)
if len(res):
if topn == 1:
return res[0][0]
return [x[0] for x in res]
return None
A_is_to_B_as_C_is_to('man', 'woman', 'king')
```
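Under the hood, the analogy is just vector arithmetic followed by a nearest-neighbour lookup by cosine similarity. A toy sketch with made-up 2-D vectors (the coordinates are illustrative only, not taken from the GoogleNews model):

```python
import numpy as np

# Hypothetical toy embeddings, chosen so gender and royalty are separate directions
vocab = {
    'man':   np.array([1.0, 0.0]),
    'woman': np.array([-1.0, 0.0]),
    'king':  np.array([1.0, 1.0]),
    'queen': np.array([-1.0, 1.0]),
}

# king - man + woman
target = vocab['king'] - vocab['man'] + vocab['woman']

def nearest(target, vocab, exclude):
    """Return the word whose vector has the highest cosine similarity to target."""
    def cos(a, b):
        return a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vocab if w not in exclude),
               key=lambda w: cos(target, vocab[w]))

print(nearest(target, vocab, exclude={'king', 'man', 'woman'}))
```

`gensim`'s `most_similar` does essentially this over the full vocabulary, which is why the input words are passed as `positive` and `negative` lists.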
We can use this equation to accurately predict the capitals of countries by looking at what has the same relationship to each selected country as Berlin has to Germany:
```
for country in 'Italy', 'France', 'India', 'China':
print('%s is the capital of %s' %
(A_is_to_B_as_C_is_to('Germany', 'Berlin', country), country))
```
Or we can do the same for important products for given companies. Here we seed the products equation with two products, the iPhone for Apple and Starbucks_coffee for Starbucks. Note that numbers are replaced by # in the embedding model:
```
for company in 'Google', 'IBM', 'Boeing', 'Microsoft', 'Samsung':
products = A_is_to_B_as_C_is_to(
['Starbucks', 'Apple'],
['Starbucks_coffee', 'iPhone'],
company, topn=3)
print('%s -> %s' %
(company, ', '.join(products)))
```
Let's do some clustering by picking three categories of items, drinks, countries and sports:
```
beverages = ['espresso', 'beer', 'vodka', 'wine', 'cola', 'tea']
countries = ['Italy', 'Germany', 'Russia', 'France', 'USA', 'India']
sports = ['soccer', 'handball', 'hockey', 'cycling', 'basketball', 'cricket']
items = beverages + countries + sports
len(items)
```
And looking up their vectors:
```
item_vectors = [(item, model[item])
for item in items
if item in model]
len(item_vectors)
```
Now use TSNE for clustering:
```
vectors = np.asarray([x[1] for x in item_vectors])
lengths = np.linalg.norm(vectors, axis=1)
norm_vectors = (vectors.T / lengths).T
tsne = TSNE(n_components=2, perplexity=10, verbose=2).fit_transform(norm_vectors)
```
And matplotlib to show the results:
```
x=tsne[:,0]
y=tsne[:,1]
fig, ax = plt.subplots()
ax.scatter(x, y)
for item, x1, y1 in zip(item_vectors, x, y):
ax.annotate(item[0], (x1, y1), size=14)
plt.show()
```
| github_jupyter |
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# MNIST classification
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/quantum/tutorials/mnist"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/mnist.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/quantum/blob/master/docs/tutorials/mnist.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/mnist.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial builds a quantum neural network (QNN) to classify a simplified version of MNIST, similar to the approach used in <a href="https://arxiv.org/pdf/1802.06002.pdf" class="external">Farhi et al</a>. The performance of the quantum neural network on this classical data problem is compared with a classical neural network.
## Setup
```
!pip install tensorflow==2.4.1
```
Install TensorFlow Quantum:
```
!pip install tensorflow-quantum
# Update package resources to account for version changes.
import importlib, pkg_resources
importlib.reload(pkg_resources)
```
Now import TensorFlow and the module dependencies:
```
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
import seaborn as sns
import collections
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
```
## 1. Load the data
In this tutorial you will build a binary classifier to distinguish between the digits 3 and 6, following <a href="https://arxiv.org/pdf/1802.06002.pdf" class="external">Farhi et al.</a> This section covers the data handling that:
- Loads the raw data from Keras.
- Filters the dataset to only 3s and 6s.
- Downscales the images so they can fit on a quantum computer.
- Removes any contradictory examples.
- Converts the binary images to Cirq circuits.
- Converts the Cirq circuits to TensorFlow Quantum circuits.
### 1.1 Load the raw data
Load the MNIST dataset distributed with Keras.
```
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Rescale the images from [0,255] to the [0.0,1.0] range.
x_train, x_test = x_train[..., np.newaxis]/255.0, x_test[..., np.newaxis]/255.0
print("Number of original training examples:", len(x_train))
print("Number of original test examples:", len(x_test))
```
Filter the dataset to keep just the 3s and 6s, removing the other classes. At the same time, convert the label, `y`, to boolean: `True` for `3` and `False` for `6`.
```
def filter_36(x, y):
keep = (y == 3) | (y == 6)
x, y = x[keep], y[keep]
y = y == 3
return x,y
x_train, y_train = filter_36(x_train, y_train)
x_test, y_test = filter_36(x_test, y_test)
print("Number of filtered training examples:", len(x_train))
print("Number of filtered test examples:", len(x_test))
```
Show the first example:
```
print(y_train[0])
plt.imshow(x_train[0, :, :, 0])
plt.colorbar()
```
### 1.2 Downscale the images
An image size of 28x28 is much too large for current quantum computers. Resize the image down to 4x4:
```
x_train_small = tf.image.resize(x_train, (4,4)).numpy()
x_test_small = tf.image.resize(x_test, (4,4)).numpy()
```
Again, display the first training example—after resize:
```
print(y_train[0])
plt.imshow(x_train_small[0,:,:,0], vmin=0, vmax=1)
plt.colorbar()
```
### 1.3 Remove contradictory examples
From section *3.3 Learning to Distinguish Digits* of <a href="https://arxiv.org/pdf/1802.06002.pdf" class="external">Farhi et al.</a>, filter the dataset to remove images that are labeled as belonging to both classes.
This is not a standard machine-learning procedure, but is included in the interest of following the paper.
```
def remove_contradicting(xs, ys):
mapping = collections.defaultdict(set)
orig_x = {}
# Determine the set of labels for each unique image:
for x,y in zip(xs,ys):
orig_x[tuple(x.flatten())] = x
mapping[tuple(x.flatten())].add(y)
new_x = []
new_y = []
for flatten_x in mapping:
x = orig_x[flatten_x]
labels = mapping[flatten_x]
if len(labels) == 1:
new_x.append(x)
new_y.append(next(iter(labels)))
else:
# Throw out images that match more than one label.
pass
num_uniq_3 = sum(1 for value in mapping.values() if len(value) == 1 and True in value)
num_uniq_6 = sum(1 for value in mapping.values() if len(value) == 1 and False in value)
num_uniq_both = sum(1 for value in mapping.values() if len(value) == 2)
print("Number of unique images:", len(mapping.values()))
print("Number of unique 3s: ", num_uniq_3)
print("Number of unique 6s: ", num_uniq_6)
print("Number of unique contradicting labels (both 3 and 6): ", num_uniq_both)
print()
print("Initial number of images: ", len(xs))
print("Remaining non-contradicting unique images: ", len(new_x))
return np.array(new_x), np.array(new_y)
```
The resulting counts do not closely match the reported values, but the exact procedure is not specified.
It is also worth noting here that filtering contradictory examples at this point does not totally prevent the model from receiving contradictory training examples: the next step binarizes the data, which will cause more collisions.
```
x_train_nocon, y_train_nocon = remove_contradicting(x_train_small, y_train)
```
### 1.4 Encode the data as quantum circuits
To process images using a quantum computer, <a href="https://arxiv.org/pdf/1802.06002.pdf" class="external">Farhi et al.</a> proposed representing each pixel with a qubit, with the state depending on the value of the pixel. The first step is to convert to a binary encoding.
```
THRESHOLD = 0.5
x_train_bin = np.array(x_train_nocon > THRESHOLD, dtype=np.float32)
x_test_bin = np.array(x_test_small > THRESHOLD, dtype=np.float32)
```
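The thresholding itself is a plain elementwise comparison; a tiny illustration (note the strict `>`, so a pixel exactly at the threshold maps to 0):

```python
import numpy as np

THRESHOLD = 0.5
patch = np.array([[0.1, 0.6],
                  [0.5, 0.9]])
binary = np.array(patch > THRESHOLD, dtype=np.float32)
print(binary)
```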
If you were to remove contradictory images at this point you would be left with only 193, likely not enough for effective training.
```
_ = remove_contradicting(x_train_bin, y_train_nocon)
```
The qubits at pixel indices with values that exceed the threshold are rotated through an $X$ gate.
```
def convert_to_circuit(image):
"""Encode truncated classical image into quantum datapoint."""
values = np.ndarray.flatten(image)
qubits = cirq.GridQubit.rect(4, 4)
circuit = cirq.Circuit()
for i, value in enumerate(values):
if value:
circuit.append(cirq.X(qubits[i]))
return circuit
x_train_circ = [convert_to_circuit(x) for x in x_train_bin]
x_test_circ = [convert_to_circuit(x) for x in x_test_bin]
```
Here is the circuit created for the first example (circuit diagrams do not show qubits with zero gates):
```
SVGCircuit(x_train_circ[0])
```
Compare this circuit to the indices where the image value exceeds the threshold:
```
bin_img = x_train_bin[0,:,:,0]
indices = np.array(np.where(bin_img)).T
indices
```
Convert these `Cirq` circuits to tensors for `tfq`:
```
x_train_tfcirc = tfq.convert_to_tensor(x_train_circ)
x_test_tfcirc = tfq.convert_to_tensor(x_test_circ)
```
## 2. Quantum neural network
There is little guidance for a quantum circuit structure that classifies images. Since the classification is based on the expectation of the readout qubit, <a href="https://arxiv.org/pdf/1802.06002.pdf" class="external">Farhi et al.</a> propose using two-qubit gates, with the readout qubit always acted upon. This is similar in some ways to running a small <a href="https://arxiv.org/abs/1511.06464" class="external">Unitary RNN</a> across the pixels.
### 2.1 Build the model circuit
The following example shows this layered approach. Each layer uses *n* instances of the same gate, with each of the data qubits acting on the readout qubit.
Start with a simple class that will add a layer of these gates to a circuit:
```
class CircuitLayerBuilder():
    def __init__(self, data_qubits, readout):
        self.data_qubits = data_qubits
        self.readout = readout

    def add_layer(self, circuit, gate, prefix):
        for i, qubit in enumerate(self.data_qubits):
            symbol = sympy.Symbol(prefix + '-' + str(i))
            circuit.append(gate(qubit, self.readout)**symbol)
```
Build an example circuit layer to see how it looks:
```
demo_builder = CircuitLayerBuilder(data_qubits=cirq.GridQubit.rect(4, 1),
                                   readout=cirq.GridQubit(-1, -1))
circuit = cirq.Circuit()
demo_builder.add_layer(circuit, gate = cirq.XX, prefix='xx')
SVGCircuit(circuit)
```
Now build a two-layered model, matching the data-circuit size, and include the preparation and readout operations.
```
def create_quantum_model():
    """Create a QNN model circuit and readout operation to go along with it."""
    data_qubits = cirq.GridQubit.rect(4, 4)  # a 4x4 grid.
    readout = cirq.GridQubit(-1, -1)         # a single qubit at [-1,-1]
    circuit = cirq.Circuit()

    # Prepare the readout qubit.
    circuit.append(cirq.X(readout))
    circuit.append(cirq.H(readout))

    builder = CircuitLayerBuilder(
        data_qubits=data_qubits,
        readout=readout)

    # Then add layers (experiment by adding more).
    builder.add_layer(circuit, cirq.XX, "xx1")
    builder.add_layer(circuit, cirq.ZZ, "zz1")

    # Finally, prepare the readout qubit.
    circuit.append(cirq.H(readout))

    return circuit, cirq.Z(readout)
model_circuit, model_readout = create_quantum_model()
```
### 2.2 Wrap the model-circuit in a tfq-keras model
Build the Keras model with the quantum components. This model is fed the "quantum data", from `x_train_circ`, that encodes the classical data. It uses a *Parametrized Quantum Circuit* layer, `tfq.layers.PQC`, to train the model circuit on the quantum data.
To classify these images, <a href="https://arxiv.org/pdf/1802.06002.pdf" class="external">Farhi et al.</a> proposed taking the expectation of a readout qubit in a parameterized circuit. The expectation returns a value between -1 and 1.
```
# Build the Keras model.
model = tf.keras.Sequential([
    # The input is the data-circuit, encoded as a tf.string
    tf.keras.layers.Input(shape=(), dtype=tf.string),
    # The PQC layer returns the expected value of the readout gate, range [-1,1].
    tfq.layers.PQC(model_circuit, model_readout),
])
```
Next, describe the training procedure to the model, using the `compile` method.
Since the expected readout is in the range `[-1,1]`, optimizing the hinge loss is a somewhat natural fit.
Note: Another valid approach would be to shift the output range to `[0,1]`, and treat it as the probability the model assigns to class `3`. This could be used with a standard `tf.losses.BinaryCrossentropy` loss.
To use the hinge loss here you need to make two small adjustments. First convert the labels, `y_train_nocon`, from boolean to `[-1,1]`, as expected by the hinge loss.
```
y_train_hinge = 2.0*y_train_nocon-1.0
y_test_hinge = 2.0*y_test-1.0
```
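The label mapping and the hinge loss itself can be checked with plain NumPy. This is a sketch of the standard hinge formula, `mean(max(0, 1 - y_true * y_pred))`, which is what `tf.keras.losses.Hinge` computes:

```python
import numpy as np

# Boolean labels map to {-1, +1}: True -> 1.0, False -> -1.0.
y_bool = np.array([True, False, True])
y_hinge = 2.0 * y_bool - 1.0          # -> [ 1., -1.,  1.]

# Hinge loss: mean(max(0, 1 - y_true * y_pred)).
y_pred = np.array([0.8, -0.5, -0.2])
loss = np.mean(np.maximum(0.0, 1.0 - y_hinge * y_pred))
```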
Second, use a custom `hinge_accuracy` metric that correctly handles `[-1, 1]` as the `y_true` labels argument.
`tf.losses.BinaryAccuracy(threshold=0.0)` expects `y_true` to be a boolean, and so can't be used with the hinge loss.
```
def hinge_accuracy(y_true, y_pred):
    y_true = tf.squeeze(y_true) > 0.0
    y_pred = tf.squeeze(y_pred) > 0.0
    result = tf.cast(y_true == y_pred, tf.float32)
    return tf.reduce_mean(result)

model.compile(
    loss=tf.keras.losses.Hinge(),
    optimizer=tf.keras.optimizers.Adam(),
    metrics=[hinge_accuracy])
print(model.summary())
```
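A plain-NumPy analogue of the `hinge_accuracy` metric above makes the sign-agreement logic easy to verify on toy values:

```python
import numpy as np

def hinge_accuracy_np(y_true, y_pred):
    """NumPy analogue of hinge_accuracy: fraction of sign agreements."""
    return np.mean((y_true > 0.0) == (y_pred > 0.0))

y_true = np.array([1.0, -1.0, 1.0, -1.0])
y_pred = np.array([0.3, -0.7, -0.1, 0.4])  # signs agree on the first two only
acc = hinge_accuracy_np(y_true, y_pred)
```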
### Train the quantum model
Now train the model. This takes about 45 minutes. If you don't want to wait that long, use a small subset of the data (set `NUM_EXAMPLES=500`, below). This doesn't really affect the model's progress during training (it only has 32 parameters, and doesn't need much data to constrain them). Using fewer examples just ends training earlier (about 5 minutes), but runs long enough to show that it is making progress in the validation logs.
```
EPOCHS = 3
BATCH_SIZE = 32
NUM_EXAMPLES = len(x_train_tfcirc)
x_train_tfcirc_sub = x_train_tfcirc[:NUM_EXAMPLES]
y_train_hinge_sub = y_train_hinge[:NUM_EXAMPLES]
```
Training this model to convergence should achieve >85% accuracy on the test set.
```
qnn_history = model.fit(
    x_train_tfcirc_sub, y_train_hinge_sub,
    batch_size=32,
    epochs=EPOCHS,
    verbose=1,
    validation_data=(x_test_tfcirc, y_test_hinge))
qnn_results = model.evaluate(x_test_tfcirc, y_test)
```
Note: The training accuracy reports the average over the epoch. The validation accuracy is evaluated at the end of each epoch.
## 3. Classical neural network
While the quantum neural network works for this simplified MNIST problem, a basic classical neural network can easily outperform a QNN on this task. After a single epoch, a classical neural network can achieve >98% accuracy on the holdout set.
In the following example, a classical neural network is used for the 3-6 classification problem using the entire 28x28 image instead of subsampling the image. This easily converges to nearly 100% accuracy on the test set.
```
def create_classical_model():
    # A simple model based off LeNet from https://keras.io/examples/mnist_cnn/
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Conv2D(32, [3, 3], activation='relu', input_shape=(28, 28, 1)))
    model.add(tf.keras.layers.Conv2D(64, [3, 3], activation='relu'))
    model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
    model.add(tf.keras.layers.Dropout(0.25))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(128, activation='relu'))
    model.add(tf.keras.layers.Dropout(0.5))
    model.add(tf.keras.layers.Dense(1))
    return model
model = create_classical_model()
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              optimizer=tf.keras.optimizers.Adam(),
              metrics=['accuracy'])

model.summary()

model.fit(x_train,
          y_train,
          batch_size=128,
          epochs=1,
          verbose=1,
          validation_data=(x_test, y_test))
cnn_results = model.evaluate(x_test, y_test)
```
The above model has nearly 1.2M parameters. For a fairer comparison, try a 37-parameter model on the subsampled images:
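The 37-parameter figure can be checked by hand: a Dense layer has `inputs * units + units` parameters, and the flattened 4x4x1 input has 16 features:

```python
# Parameter count of the 'fair' model: Flatten(4*4*1) -> Dense(2) -> Dense(1).
n_inputs = 4 * 4 * 1          # 16 features after flattening
dense_1 = n_inputs * 2 + 2    # weights + biases of Dense(2) = 34
dense_2 = 2 * 1 + 1           # weights + biases of Dense(1) = 3
total = dense_1 + dense_2     # 37
```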
```
def create_fair_classical_model():
    # A simple model based off LeNet from https://keras.io/examples/mnist_cnn/
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Flatten(input_shape=(4, 4, 1)))
    model.add(tf.keras.layers.Dense(2, activation='relu'))
    model.add(tf.keras.layers.Dense(1))
    return model

model = create_fair_classical_model()
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              optimizer=tf.keras.optimizers.Adam(),
              metrics=['accuracy'])

model.summary()

model.fit(x_train_bin,
          y_train_nocon,
          batch_size=128,
          epochs=20,
          verbose=2,
          validation_data=(x_test_bin, y_test))
fair_nn_results = model.evaluate(x_test_bin, y_test)
```
## 4. Comparison
Higher resolution input and a more powerful model make this problem easy for the CNN, while a classical model of similar power (~32 parameters) trains to a similar accuracy in a fraction of the time. Either way, the classical neural network easily outperforms the quantum neural network. For classical data, it is difficult to beat a classical neural network.
```
qnn_accuracy = qnn_results[1]
cnn_accuracy = cnn_results[1]
fair_nn_accuracy = fair_nn_results[1]
sns.barplot(["Quantum", "Classical, full", "Classical, fair"],
            [qnn_accuracy, cnn_accuracy, fair_nn_accuracy])
```
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import pandas as pd
import numpy as np
import pandas_profiling
%matplotlib inline
!dir
%cd .\decision_tree
!dir
df = pd.read_csv(".\\credit_cards_dataset.csv")
df['EDUCATION'].plot.hist()
#df.describe()
#df['PAY_AMT4']
#df.tail(5)
### EDA & Featuring
#df.describe()
'''
df = pd.DataFrame(
    np.random.rand(100, 5),
    columns=['a', 'b', 'c', 'd', 'e']
)
'''
#df.profile_report(style={'full_width':True})
df.profile_report()
#print("Original shape of the data: "+ str(df.shape))
features_names = df.columns
#df.describe()
X = df.drop('default.payment.next.month', axis =1).values
y = df['default.payment.next.month'].values
print(X.shape)
print(y.shape)
```
Split my data into training and testing
```
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, random_state=42, shuffle=True)
```
Instantiate the random forest model with 200 trees
```
rf = RandomForestClassifier(n_estimators=200, criterion='entropy', max_features='log2', max_depth=15)
rf.fit(X_train, y_train)
y_predict = rf.predict(X_test)
#y_predict.reshape(-1,1).shape
```
Check feature importance
```
sorted(zip(rf.feature_importances_, features_names), reverse=True)
## plot the importances ##
import matplotlib.pyplot as plt
importances = rf.feature_importances_
indices = np.argsort(importances)[::-1]
plt.figure(figsize=(12,6))
plt.title("Feature importances by DecisionTreeClassifier")
plt.bar(range(len(indices)), importances[indices], color='lightblue', align="center")
plt.step(range(len(indices)), np.cumsum(importances[indices]), where='mid', label='Cumulative')
plt.xticks(range(len(indices)), features_names[indices], rotation='vertical',fontsize=14)
plt.xlim([-1, len(indices)])
#plt.show()
```
# Making my predictions and checking how well my model did: recall, precision, F1 score and a confusion matrix
Recall - tells us, overall, how well our model captured the actual positives: correctly predicted positives divided by (correctly predicted positives + those that were actually positive but predicted wrong).

formula = TP/(TP + FN)

Precision - gives us a true measure of how trustworthy a positive prediction is: correctly predicted positives divided by (correctly predicted positives + those the model predicted to be positive but were false).

formula = TP/(TP + FP)

F1 score - gives us the harmonic mean of precision and recall, a summarization of how well the model did with respect to both.
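These formulas can be sketched directly from raw confusion-matrix counts (toy numbers, not this dataset):

```python
def prf_from_counts(tp, fp, fn):
    """Precision, recall and F1 from raw confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Toy counts: 8 true positives, 2 false positives, 4 false negatives.
p, r, f1 = prf_from_counts(tp=8, fp=2, fn=4)
```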
```
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import recall_score
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
X_test.shape
#Make my predictions
y_prediction = rf.predict(X_test)
y_probability = rf.predict_proba(X_test)
y_probability.shape
### Grid search
### https://towardsdatascience.com/algorithms-for-hyperparameter-optimisation-in-python-edda4bdb167
'''
rf_params = {
    'n_estimators': [200, 700],
    'max_features': ['auto', 'sqrt', 'log2'],
    'max_depth': np.arange(5, 15, 1)
}
gs_random = GridSearchCV(estimator=rf, param_grid=rf_params, cv= 5)
gs_random.fit(X_test, y_prediction)
print(gs_random.best_params_)
'''
print("Recall score:"+ str(recall_score(y_test, y_prediction)))
y_prediction.reshape(-1,1)
# This shows the overall accuracy of how well it will predict default or non_default
# The scores corresponding to every class will tell you the accuracy of the classifier
# in classifying the data points in that particular class compared to all other classes.
# The support is the number of samples of the true response that lie in that class.
print(classification_report(y_test, y_prediction,
target_names=["non_default", "default"]))
# Creating confusion matrix would give us a ration of non-default compared
# to default.
from sklearn.metrics import confusion_matrix
import itertools
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    print(cm)

    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)

    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
cnf_matrix = confusion_matrix(y_test, y_prediction)
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Non_Default', 'Default'], normalize=False,
                      title='Non Normalized confusion matrix')
```
# Explanation of this confusion matrix
In our confusion matrix, the non-default classification has a total of 2,158 points and the default classification has a total of 7,742 points.
It correctly identified 7,239 points as default and 503 points as non-default.
The non-default classification incorrectly predicted 1,480 points as default and correctly classified 678 points as non-default.
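The four cells of a binary confusion matrix can also be recomputed by hand, which is a useful sanity check on `confusion_matrix` (toy labels, not this dataset):

```python
def confusion_counts(y_true, y_pred):
    """Return (tn, fp, fn, tp) for binary labels 0/1."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return tn, fp, fn, tp

y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 0, 1, 0]
tn, fp, fn, tp = confusion_counts(y_true, y_pred)
```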
```
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Default', 'Non_default'], normalize=True,
                      title='Normalized confusion matrix')
import pickle
filename = 'RandomForest_model.sav'
pickle.dump(rf, open(filename, 'wb'))
```
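Restoring the pickled model later is the mirror of the dump above. A minimal round-trip sketch, using a stand-in dictionary so it runs without the fitted `rf` object (`'RandomForest_model.sav'` is the filename assumed above):

```python
import os
import pickle
import tempfile

# Stand-in for the fitted RandomForest model saved above.
stand_in = {'n_estimators': 200, 'criterion': 'entropy'}

path = os.path.join(tempfile.mkdtemp(), 'RandomForest_model.sav')
with open(path, 'wb') as f:
    pickle.dump(stand_in, f)

# Later (or in another script): reload and use.
with open(path, 'rb') as f:
    restored = pickle.load(f)
```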
```
# Copyright 2021 The Earth Engine Community Authors { display-mode: "form" }
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Detecting Changes in Sentinel-1 Imagery (Part 4)
Author: mortcanty
In the fourth and final part of this Community Tutorial series on SAR change detection, we will have a look at some more examples using imagery taken from the [GEE Sentinel-1 Archive](https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S1_GRD). In order to encourage further exploration, the algorithm that we developed in the first three parts ([Part 1](https://developers.google.com/earth-engine/tutorials/community/detecting-changes-in-sentinel-1-imagery-pt-1), [Part 2](https://developers.google.com/earth-engine/tutorials/community/detecting-changes-in-sentinel-1-imagery-pt-2), [Part 3](https://developers.google.com/earth-engine/tutorials/community/detecting-changes-in-sentinel-1-imagery-pt-3))
has been provided with a convenient interactive widget interface. Some of the examples presented here are taken from the publications
[Canty et al. (2020)](https://www.mdpi.com/2072-4292/12/1/46) and [Canty et al. (2020a)](https://www.mdpi.com/2072-4292/12/15/2454).
### Preliminaries
```
# Setup the GEE Python API.
import ee
# Trigger the authentication flow.
ee.Authenticate()
# Initialize the library.
ee.Initialize()
# Enable the Widget manager.
from google.colab import output
output.enable_custom_widget_manager()
# Install the algorithm and the interactive interface (from fresh runtime!).
!pip install -q ipyleaflet
!git clone https://github.com/mortcanty/eesarseq
%cd /content/eesarseq/src
%run setup install
```
# Part 4. Applications: exploring the GEE Sentinel-1 collection
```
# Run the interface.
from eesar.application import run
run()
```
## How to use the interface
### Caveat
*The sequential SAR change detection method that we developed in the first three parts of this tutorial is _pixel-oriented_. That is to say, it is based entirely on the statistical properties of each and every multi-look pixel in the observed image series. For this reason it is best to limit your analysis to a region less than a few thousand square kilometers while interactively visualizing and exploring results using this notebook (for larger regions, [export the results](https://developers.google.com/earth-engine/guides/python_install#exporting-data)). Furthermore, since we are analysing time series for changes, the reflectance images involved must all completely overlap the region of interest and have been irradiated from the same position in space. This means choosing either ascending **or** descending node, and only **one** relative orbit number for any given sequence.*
### Walk through
To get started, let's see how to generate a small change map. In the widget interface above, choose **Platform** A, leaving all other settings as is. Select a small area of interest (aoi) near the town of Jülich with the rectangular or polygon **draw tool**. This will enable the **Collect** button. Click it to collect an image series, the details of which are printed in the info box at the bottom. The raster overlay shows the complete swath of the last image in the sequence. When the overlay is fully rendered, the **Preview** button is enabled.
**Note:** Depending on the position of the aoi, two relative orbit numbers may be displayed (88 and 15). If so, in the corresponding **RelOrbit** field choose either of them and re-collect.
The **Preview** button will now trigger the change detection algorithm at the scale selected by the current zoom setting. The color coded change map is displayed, showing, for each pixel, the interval within the series of the **First** detected change (palette = 'black, blue, cyan, yellow, red' indicating early in the series through to late). The map displayed is set by the Radio Button next to **Preview**. Since processing is carried out in parallel, the change image is built up tile-by-tile. As explained in [Part 3](https://developers.google.com/earth-engine/tutorials/community/detecting-changes-in-sentinel-1-imagery-pt-3#a_question_of_scale) of this tutorial, the zoom setting can falsify the result somewhat, depending on the pyramid level at which the calculation is carried out. Nevertheless it is often convenient for generating a quick overview. You can see the effect by zooming in and out. De-select the **QuickPreview** check box to override it. Now the calculation is carried out at the full 10 m pixel scale irrespective of the zoom level chosen, but can take considerably longer.
If and when you are satisfied with the previewed result, you can export the change maps to your GEE cloud assets with the **ExportToAssets** button, see below.
### The widgets
**Platform:** Choose one or both of the Sentinel-1 satellites.
**Pass:** Choose ascending or descending node.
**RelOrbit:** Choose relative orbit number. If set to 0 all orbit numbers are included with images which overlap with the area of interest.
**StartDate:** Beginning of collected time series.
**EndDate:** End of collected time series.
**Collect:** Start collection, enabled when an area of interest has been chosen. Upon completion the last Sentinel-1 image in the sequence is displayed.
**Signif:** Choose a significance level for the [likelihood ratio test](https://developers.google.com/earth-engine/tutorials/community/detecting-changes-in-sentinel-1-imagery-pt-2#the_likelihood_ratio_test).
**MedianFilter:** Run a 5x5 median filter over the change map before displaying or exporting.
**Stride:** Select only a subset of the collected sequence. For example, the value 3 will collect every third image in the sequence.
**ShowS2:** Display the most cloud-free Sentinel-2 image found within the chosen time period instead of the last Sentinel-1 image.
**ExportToAssets:** Creates and runs a batch task to export a change map image as a raster to an Earth Engine asset. For a time series of $k$ images, the exported change map consists of $k+2$ bands
- cmap: the interval* of the most recent change, one band, byte values $\in [0,k-1]$, where 0 = no change.
- smap: the interval of the first change, one band, byte values $\in [0,k-1]$, where 0 = no change.
- fmap: the number of changes, one band, byte values $\in [0,k-1]$, where 0 = no changes.
- bmap: the changes in each interval, $\ k-1$ bands, byte values $\in [0,3]$, where 0 = no change, 1 = positive definite change, 2 = negative definite change, 3 = indefinite change.
*Two successive acquisition times in the series.
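As a sketch of how an exported `bmap` band might be decoded once downloaded (the byte codes follow the list above; the array values here are toy data, not a real export):

```python
import numpy as np

# bmap byte codes, per the band description above.
BMAP_CLASSES = {0: 'no change', 1: 'positive definite',
                2: 'negative definite', 3: 'indefinite'}

bmap_interval = np.array([[0, 1], [2, 3]], dtype=np.uint8)  # toy 2x2 tile
labels = np.vectorize(BMAP_CLASSES.get)(bmap_interval)
```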
**ExportToDrive:** Sends the change map described above to Drive storage in GeoTIFF format.
**Preview:** Run the change detection algorithm and preview results according to the chosen settings (often slow, depending upon series length, zoom level and size of the aoi).
**ReviewAsset:** Display a currently selected change map asset according to the chosen settings (very fast, since calculations have already been performed).
**PlotAsset:** Plot the proportion of change pixels in the bmap bands of the selected asset as a function of time.
**Bitemporal:** Preview (or ReviewAsset) the change map for one interval of the series (black = no change, red = positive definite, cyan = negative definite, yellow = indefinite).
**Interval:** Choose the interval for the Bitemporal map.
**First:** Preview (or ReviewAsset) the smap band (palette = 'black, blue, cyan, yellow, red' indicating no change, early in the series through to late).
**Last:** Preview (or ReviewAsset) the cmap band (palette = 'black, blue, cyan, yellow, red' indicating no change, early in the series through to late).
**Frequency:** Preview (or ReviewAsset) the fmap band (palette = 'black, blue, cyan, yellow, red' indicating no change, few changes through to many).
**MaxFreq:** The number of changes in the frequency map which corresponds to 'red' or 'many'.
**NCMask:** Mask out (make transparent) the no change pixels in the Preview (or ReviewAsset) overlays.
**WaterMask:** Mask out (make transparent) the water pixels in the Preview (or ReviewAsset) overlays.
**QuickPreview:** When set, calculate the Preview at the pyramid level corresponding to the current zoom level. Otherwise use the native scale of 10 m.
**Opacity:** Set the opacity in the Preview (or ReviewAsset) overlays.
**Clear:** Clear the output window.
**GoTo:** Jump to a geographic location.
## Some examples
### Surveillance
Remote sensing surveillance, or monitoring, applications are most often associated with very high resolution satellite or aerial sensors. The Sentinel-1 C-band ground range detected imagery on GEE is obviously much less applicable to such tasks. This is mainly due to the relatively poor spatial resolution ($\approx 20m$) and the ever-present speckle in the archived images.
On the other hand, a guaranteed, all-weather revisit time of 6 days might be usefully exploited in some circumstances. Seaport activity monitoring is one possibility, with the frequency of large vessel arrival and departure (usually quoted as ATT or average turnaround time) being in general [of the order of days](https://globalmaritimehub.com/wp-content/uploads/2019/10/CIRRELT-2019-20.pdf). Thus a change signal generated every 6 days from our sequential algorithm might be expected to catch a large proportion of cargo or large passenger ship movements at a given seaport.
As a specific example, ports along the coast of Libya have been the subject of recent international attention, in part due to the refugee crisis in the Mediterranean and also because of Libyan political instability. Some of them have been closed periodically to international traffic. The port of Benghazi was closed in the first part of 2017, [reopening on 5 October](https://www.reuters.com/article/libya-security-benghazi-port-idINKCN1C61MC). This is verified in the change frequency plots below which were generated directly from the interface. The change observations are partly confused by noise from open water, caused mostly by surface wave conditions, but the signals of interest are quite clear. (The water mask would remove the noise, but also hide some ship movements.)
<img src="bengasi2017.png" width="500px"><img src="bengasi2018.png" width="500px">
_Fig. 1. Left: Port of Benghazi change frequency 2017-01-01 to 2017-10-04 (44 images, platforms A and B, relative orbit 73, ascending node). Right: For 2018-01-01 to 2018-10-04._
<img src="bengasi2017-8.png">
_Fig. 2. Proportion of changed pixels in a small aoi around the main dock in Benghazi for a 60-image time sequence. There were about 900 pixels in all in the aoi. Note the (approximate) alternation of arrivals (positive definite changes) and departures (negative definite changes)._
The covid pandemic had a very significant effect on Mediterranean tourist seaports and, unless you happen to be a Venetian merchant or hotel operator, certainly a positive one. Try using the same techniques as above on the main passenger ship dock in Venice, choosing a time series covering pre-covid to the present.
Another, more future-oriented application of SAR change detection might be the verification of the [decommissioning of large open face coal mines](https://www.mining-technology.com/sponsored/decommissioning-mining-energy-assets/) in national programs for $CO_2$ reduction. The Hambach open pit coal mine west of Cologne, Germany is depicted in the single interval, Loewner order change image below. The large digging and back-filling machines are visible in the change series only by virtue of their movement (cyan: machine present $\to$ machine absent, and red: vice versa) and would disappear if all mining activity were to stop.
<img src="hambach2020.png">
_Fig. 3. Single interval (bitemporal) changes (May 13 to May 19) over the Hambach open pit mine from a series of 55 images covering the year 2020._
### Inundation
In [Part 3](https://developers.google.com/earth-engine/tutorials/community/detecting-changes-in-sentinel-1-imagery-pt-3) of this series we used a catastrophic flooding event in England to introduce and illustrate the sequential change detection algorithm. This is in fact a widely known application of SAR. Quoting from the [Office for Outer Space Affairs UN-SPIDER Knowledge Portal](https://un-spider.org/advisory-support/recommended-practices/recommended-practice-flood-mapping/python-step-by-step):
> _The usage of Synthetic Aperture Radar (SAR) satellite imagery for flood extent mapping constitutes a viable solution with fast image processing, providing **near real-time flood information** (my emphasis) to relief agencies for supporting humanitarian action. The high data reliability as well as the absence of geographical constraints, such as site accessibility, emphasize the technology’s potential in the field._
And here, as another illustration, is an excerpt from [Canty et al. (2020)](https://www.mdpi.com/2072-4292/12/1/46):
> _Cyclone Idai, recorded as the worst weather-based event to ever occur in the southern hemisphere, made landfall near the city of Beira, Mozambique on March 15, 2019 causing widespread inundation in Mozambique and Tanzania and a death toll of more than 1300. Figure 4 shows a sequence of six change maps generated with a GEE time series of Sentinel-1a and Sentinel-1b images. The reduction of reflectance from the water surfaces in both the VV and VH bands corresponds to a negative definite covariance matrix difference (green pixels, center left), the rapid receding of flood waters to a positive definite change (red pixels, center right)._
<img src="buzi2019.png">
_Fig. 4 Buzi district, Mozambique: Six change maps of 15 in all for 16 images covering the period Jan 1, 2019, through June 7, 2019. Each image has an area of approximately 3000 km$^2$. The gray scale background is the temporal average of the VV band of all 16 images. The maps, read top to bottom, left to right, are for the intervals April 18-March 2, March 2-14, March 14-20, March 20- 26, March 26-April 1 and April 1-7. Positive definite changes are red, negative definite green and indefinite yellow._
You can easily reproduce the above sequence with the interactive interface, remembering that negative definite changes are now shown as cyan.
The change signal from within the partially inundated city of Beira, just to the east of the scene, is considerably less convincing, as you can also verify yourself. This is due to the predominance of so-called double bounce scattering from built up areas, an effect which is present in both dry and flooded areas.
### Vegetation
When contrasted with visual/infrared remote sensing data, individual SAR images are of relatively limited use for detecting or classifying different vegetation types. Time series of radar images (preferably speckle filtered) have some potential for supervised classification, see for example [Canty, 2021](https://gist.github.com/mortcanty/bbdaab2835334e949a45d7c3b7bf0396). If full polarimetric images are acquired by the satellite antenna (as is the case for example for the RadarSat-2 platform) then the $3\times 3$ _quad-pol covariance matrix_ can be transformed in various ways to generate polarimetry indices for qualitative and quantitative physical information [(Moreira et al. 2013)](https://elib.dlr.de/82313/).
[Equation (1.5) in Part 1](https://developers.google.com/earth-engine/tutorials/community/detecting-changes-in-sentinel-1-imagery-pt-1#single_look_complex_slc_sar_measurements) is the analogous $2\times 2$ _dual-pol covariance matrix_ generated from the Sentinel-1 platform, and it is considerably less amenable to polarimetric methods. Also, as we know, only its diagonal elements are available in the GEE archive. But if polarimetry is not an option, we can still take advantage of the temporal information supplied by sequential change detection. As a simple example, crop dynamics in an agricultural area can be traced over a growing period to discriminate early and late harvesting, see Figure 5.
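The averaged dual-pol covariance matrix can be sketched in NumPy with simulated scattering vectors (random toy data, not Sentinel-1 values): $C = \frac{1}{m}\sum_i s_i s_i^\dagger$ for $m$ looks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated dual-pol scattering vectors s = (S_vv, S_vh) for m looks.
m = 5
s = rng.normal(size=(m, 2)) + 1j * rng.normal(size=(m, 2))

# Multi-look (averaged) 2x2 dual-pol covariance matrix: mean of s s^H.
C = sum(np.outer(si, si.conj()) for si in s) / m

# C is Hermitian with real, non-negative diagonal elements (the only
# elements available in the GEE Sentinel-1 archive).
is_hermitian = np.allclose(C, C.conj().T)
diag_real_nonneg = bool(np.all(C.diagonal().real >= 0))
```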
<img src="juelicherland.png" width="500px">
<img src="juelicherlands2.png" width="500px">
_Fig. 5 Left: A change map (the cmap band) showing the most recent significant changes for the time interval January to November, 2018, generated from a series of 51 images over an agricultural region near Jülich, Germany. Right: RGB composite of a Sentinel-2 image acquired over the same area on July 2, 2018._
Roughly four classes can be distinguished in the left hand image:
- black (no change): forests, motorways, built-up areas (villages)
- light blue (early harvesting): winter wheat ...
- orange/green (late summer harvesting): rapeseed ...
- red (autumn harvesting): sugar beet, corn ...
This is partly guessing, but the dynamics could well be included as an additional feature in a supervised classification scheme, for instance to complement the spectral bands of a cloud-free Sentinel-2 image such as shown in the right-hand figure.
An extremely topical theme associated with "vegetation changes" is of course deforestation, either intentional (e.g., conversion to non-forested land for agricultural use) or accidental (e.g., bush fires). A drastic example of the latter was the fire that destroyed almost all of the forest of Kangaroo Island, Australia, in 2019/2020.
<img src="kangarooislandmap.png" width="1000px">
_Fig. 6 Kangaroo Island._
Quoting from the [tourkangarooisland](https://www.tourkangarooisland.com.au/kangaroo-island-fires) website:
> _Fires started on the North Coast of the Island on the 20th December, 2019, from lightning strikes, and by the 30th December were relatively under control, burning within contained lines. On the night of the 30th December, more lightning strikes hit the island, this time within the Ravine des Casoars Wilderness Protection Area, north of the Flinders Chase National Park. As conditions deteriorated, emergency service personnel worked closely with the community around the western end of the island, and by the 2nd January, 2020 evacuations begun. On the 3rd January, 2020 the western end of the island, home to the Flinders Chase National Park, and the adjoining Ravine des Casoars Wilderness Protection area was burnt along with Kelly Hill Conservation Park. Fires continued to burn across the island for some weeks, two lives were lost, many businesses, homes and farms were lost, countless numbers of livestock, and precious habitat and wildlife perished._
The images shown below, all of which were generated with the widget interface, confirm the above scenario, and also indicate how well the Sentinel-1 time series analysis complements visual/infrared remote sensing data.
<img src="kangarooislandwests2oct.png" width="500px">
<img src="kangarooislandwests2feb.png" width="500px">
<img src="kangarooislandwest.png" width="500px">
<img src="kangarooislandplt.png" width="500px">
_Fig. 7 Top left: Sentinel-2 image from October 22, 2019 over the western end of Kangaroo Island. Top right: Sentinel-2 image from February 9, 2020. Bottom left: Change map showing first significant changes (smap) for a sequence of 15 Sentinel-1 images from October, 2019 through March, 2020. The "blue changes" precede those due to the bush fire (green/yellow) and correspond to vegetation changes in unforested areas. Bottom right: Proportion of changed pixels in the full scene, Figure 8. The first peak precedes the fire._
<img src="kangarooisland12_18.png" width="500px">
<img src="kangarooisland18_19.png" width="500px">
<img src="kangarooisland19_20.png" width="500px">
<img src="kangarooisland20_21.png" width="500px">
_Fig. 8 Map of the first significant changes for the entire scene (110 km from east to west), for the periods July, 2017 to July, 2018 (top left), July, 2018 to July, 2019 (top right), July, 2019 to July, 2020 (bottom left), July, 2020 to July, 2021 (bottom right). Some regeneration is apparent in the last scene._
Clear-cut logging is not considered to be deforestation, but rather harvesting with recultivation. It is nevertheless extremely controversial, at least in North America. Clear cuts, by their nature, stand out "clearly" in optical imagery, as can be seen from the example on the left in Figure 9. Unless a cloud-free visual/infrared time series is available, the time at which logging took place may be difficult to determine. The SAR change image (right side of Figure 9) places it in this case at November/December, 2017, see also Figure 10. Care has to be taken to determine the _first_ changes at the location, since snowfall/snowmelt or spring/summer vegetation can generate change signals in existing clear cuts. This is apparent in Figure 10.
<img src="ontario17_18_bg.png" width="500px">
<img src="ontario17_18.png" width="500px">
_Fig. 9 A clear cut north of Lake Superior in Ontario, Canada. Left: Esri.WorldImagery background image acquired after the clear cut. Right: The first significant changes (smap) for a time series of 19 images from June, 2017 to March, 2018. The cut measures approximately 2.5 km north to south._
<img src="ontario17_19plot.png" width="500px">
_Fig. 10 Proportion of changed pixels for a 51 image time series over a small aoi centered on the clear cut for the years 2017 and 2018. Logging took place in November and December, 2017. The large positive definite change in April, 2018 is most probably snow melt._
### The end
This concludes the community tutorial series on SAR change detection. For readers who would like to delve more deeply into the theory, here are the relevant publications:
- [Knut Conradsen, Allan Aasbjerg Nielsen, Jesper Schou and Henning Skriver (2003). A test statistic in the complex Wishart distribution and its application to change detection in polarimetric SAR data.](https://doi.org/10.1109/TGRS.2002.808066)
- [Knut Conradsen, Allan Aasbjerg Nielsen and Henning Skriver (2016). Determining the points of change in time series of polarimetric SAR data.](https://doi.org/10.1109/TGRS.2015.2510160)
- [Javier Muro, Morton J. Canty, Knut Conradsen, Christian Hüttich, Allan Aasbjerg Nielsen, Henning Skriver, Florian Remy, Adrian Strauch, Frank Thonfeld and Gunter Menz (2016). Short-term change detection in wetlands using Sentinel-1 time series.](https://doi.org/10.3390/rs8100795)
- [Allan A. Nielsen, Knut Conradsen, Henning Skriver and Morton J. Canty (2017). Visualization of and software for omnibus test based change detected in a time series of polarimetric SAR data.](https://doi.org/10.1080/07038992.2017.1394182)
- [Joshua Rutkowski, Morton J. Canty and Allan A. Nielsen (2018). Site Monitoring with Sentinel-1 Dual Polarization SAR Imagery Using Google Earth Engine.](https://www2.imm.dtu.dk/pubdb/p.php?7123)
- [Allan A. Nielsen, Henning Skriver and Knut Conradsen (2019). The Loewner Order and Direction of Detected Change in Sentinel-1 and Radarsat-2 Data.](https://doi.org/10.1109/LGRS.2019.2918636)
- [Allan A. Nielsen (2020). Fast matrix based computation of eigenvalues and the Loewner order in PolSAR data.](https://doi.org/10.1109/LGRS.2019.2952202)
- [Morton J. Canty, Allan A. Nielsen, Henning Skriver and Knut Conradsen (2020). Statistical Analysis of Changes in Sentinel-1 Time Series on the Google Earth Engine.](https://doi.org/10.3390/rs12010046)
- [Morton J. Canty, Allan A. Nielsen, Henning Skriver and Knut Conradsen (2020). Wishart-Based Adaptive Temporal Filtering of Polarimetric SAR Imagery.](https://doi.org/10.3390/rs12152454)
- [Knut Conradsen, Henning Skriver, Morton J. Canty and Allan A. Nielsen (2021). Change detection in time-series of polarimetric SAR images. Chapter 2 in Change Detection and Image Time-Series Analysis 1: Unsupervised Methods. ISTE-Wiley. In production. Invited contribution.](https://www.iste.co.uk/)
Thank you for your interest, and happy GEE-ing!
# NoSQL (Neo4j) (session 7)

This notebook shows how to access Neo4j databases and how to connect the output to Google Colab/Jupyter.
```
!sudo apt update -qq
!sudo apt install -qq apt-transport-https ca-certificates curl software-properties-common
!curl -fsSL https://debian.neo4j.com/neotechnology.gpg.key | sudo apt-key add -
!sudo add-apt-repository "deb https://debian.neo4j.com stable 4.1"
!sudo apt install -qq neo4j
!sed -i -e '1s/^/dbms.memory.heap.max_size=3G\n/;1s/^/dbms.import.csv.legacy_quote_escaping=false\n/;1s/^/dbms.security.auth_enabled=false\n/' /etc/neo4j/neo4j.conf
!neo4j start
!head /etc/neo4j/neo4j.conf
!wget http://neuromancer.inf.um.es/frp-neo4j -qq -O frpc.ini
!wget https://github.com/fatedier/frp/releases/download/v0.38.0/frp_0.38.0_linux_amd64.tar.gz
!tar zxvf frp_*
!./frp_0.38.0_linux_amd64/frpc -c frpc.ini >/dev/null 2>&1 &
!grep ^remote_port frpc.ini | sed -e '1s/remote_port = /http:\/\/neuromancer.inf.um.es:/;2s/remote_port = /bolt:\/\/neuromancer.inf.um.es:/'
from pprint import pprint as pp
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
matplotlib.style.use('ggplot')
!pip install neo4j-driver
```
Connecting to the database
```
from neo4j import GraphDatabase, basic_auth
driver = GraphDatabase.driver(
"bolt://localhost:7687",
auth=basic_auth("neo4j", ""))
session = driver.session()
cypher_query = '''
MATCH (n)
RETURN id(n) AS id
LIMIT 10
'''
results = session.run(cypher_query,
parameters={})
for record in results:
print(record['id'])
```
The next cell runs a Cypher query that returns the first 10 nodes. The database is empty at the beginning, but the cell can be re-run later to see output. There are plugins for displaying the output graphically as a graph, but for that we will use Neo4j's own graphical interface.
```
query = '''
MATCH (n)
RETURN n
LIMIT 10
'''
with driver.session() as session:
results = session.run(query)
for record in results:
print(record)
def run_query(query):
with driver.session() as session:
return session.run(query)
```
Loading the CSV data could not previously be done directly from this notebook's CSV files, because the CSV dialect that Neo4j accepts is not standard. I submitted an *issue* to get it fixed, and as of version 3.3 it seems to work if a configuration parameter is added: https://github.com/neo4j/neo4j/issues/8472
```bash
dbms.import.csv.legacy_quote_escaping = false
```
I have added this option to the Neo4j configuration used for this course. Keep in mind that if you use a different setup, you will need to add it yourself.
First, an index is created on the `Id` attribute of `User`; it will be used later to create users and relate them to the question or answer that has been read. Without it, loading the CSV is very slow.
```
result = run_query("CREATE INDEX ON :User(Id);")
```
The following code loads the CSV of questions and answers. The code first creates all nodes with the label `Post`, and then adds the label `Question` or `Answer` depending on the value of the `PostTypeId` attribute.
```
import gzip
from urllib.request import Request,urlopen
import io
import os
import os.path as path
def download_csv(baseurl, filename):
file = path.abspath(path.join(os.getcwd(),filename))
request = Request(baseurl + '/' + filename+'.gz?raw=true')
response = urlopen(request)
buf = io.BytesIO(response.read())
f = gzip.GzipFile(fileobj=buf)
data = f.read()
with open (filename, 'wb') as ff:
ff.write(data)
baseurl = 'https://github.com/dsevilla/bdge-data/blob/master/es.stackoverflow/'
download_csv(baseurl, 'Posts.csv')
download_csv(baseurl, 'Users.csv')
!sudo ln Posts.csv /var/lib/neo4j/import/
result = run_query(
'''
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM "file:///Posts.csv" AS row
CREATE (n)
SET n=row
SET n :Post
;
'''
)
print(result.value())
```
### Warning:
To delete the entire database (in case you make mistakes):
```
def clear_database():
query = "CALL apoc.periodic.iterate('MATCH (n) RETURN n','DETACH DELETE n', { batchSize:10000 })"
run_query(query)
```
All questions are labeled with `Question`.
```
run_query('''
MATCH (n:Post {PostTypeId : "1"})
SET n:Question;
''')
```
All answers are labeled with `Answer`.
```
run_query('''
MATCH (n:Post {PostTypeId : "2"})
SET n:Answer;
''')
```
A user node is created (or an existing one is reused) from the `OwnerUserId` field, as long as it is not empty. Note that `CREATE` can be used for the relationship because this particular user/post combination does not exist yet. Be careful: running this twice will create twice as many relationships.
```
run_query(
'''
MATCH (n:Post)
WHERE n.OwnerUserId <> ""
MERGE (u:User {Id: n.OwnerUserId})
CREATE (u)-[:WROTE]->(n);
'''
)
```
### The Cypher language
The Cypher language has a _Query By Example_ style of syntax. It accepts functions and supports creating and searching for nodes and relationships. It has some peculiarities that we will see below. For now, a summary of its features can be found in the [Cypher Reference Card](https://neo4j.com/docs/cypher-refcard/current/).
The earlier query uses the `LOAD CSV` construct to read CSV data into nodes. The `CREATE` clause creates new nodes. `SET` assigns values to node properties.
In the query above, each node is first given a copy of the data in its CSV row (the first `SET`). Then, depending on the value of `PostTypeId`, nodes are labeled as `:Question` or as `:Answer`. If a post has a user assigned through `OwnerUserId`, the user is added if it does not exist and the `:WROTE` relationship is created.
There are also other special posts that are neither questions nor answers. These are not given a second label:
```
query = \
'''
match (n:Post) WHERE size(labels(n)) = 1 RETURN n
'''
with driver.session() as session:
result = session.run(query)
for r in result:
print(r['n'])
```
We create an index on `Id` to speed up the upcoming lookups:
```
run_query("CREATE INDEX ON :Post(Id);")
```
We add a relationship between the questions and the answers:
```
run_query('''
MATCH (a:Answer), (q:Question {Id: a.ParentId})
CREATE (a)-[:ANSWERS]->(q)
;''')
```
The results returned by these queries can be used to build a pandas `dataframe`:
```
query="MATCH q=(r)-[:ANSWERS]->(p) RETURN p.Id,r.Id;"
with driver.session() as session:
res = session.run(query)
df = pd.DataFrame([r.values() for r in res], columns=res.keys())
df['r.Id'] = pd.to_numeric(df['r.Id'],downcast='unsigned')
df['p.Id'] = pd.to_numeric(df['p.Id'],downcast='unsigned')
df.plot(kind='scatter',x='p.Id',y='r.Id',figsize=(15,15))
```
Query RQ4 can be solved very easily. This first query returns the nodes:
```
query='''// RQ4
MATCH
(u1:User)-[:WROTE]->()-[:ANSWERS]->()<-[:WROTE]-(u2:User),
(u2)-[:WROTE]->()-[:ANSWERS]->()<-[:WROTE]-(u1)
WHERE u1 <> u2 AND u1.Id < u2.Id
RETURN DISTINCT u1,u2
;
''';
with driver.session() as session:
res = session.run(query)
for r in res:
print(r['u1'], r['u2'])
```
Or, alternatively, return the `Id` of each user:
```
query = '''
MATCH
(u1:User)-[:WROTE]->()-[:ANSWERS]->()<-[:WROTE]-(u2:User),
(u2)-[:WROTE]->()-[:ANSWERS]->()<-[:WROTE]-(u1)
WHERE u1 <> u2 AND toInteger(u1.Id) < toInteger(u2.Id)
RETURN DISTINCT u1.Id,u2.Id
ORDER BY toInteger(u1.Id)
;
'''
with driver.session() as session:
res = session.run(query)
for r in res:
print(r['u1.Id'], r['u2.Id'])
```
And finally, the creation of `:RECIPROCATE` relationships between the users. The `WITH` construct is also introduced here.
`WITH` introduces "namespaces" of a sort. It lets you carry names forward from earlier parts of the query, alias them with `AS`, and introduce new values computed with Cypher functions. The following query is the same RQ4 from above, but it creates `:RECIPROCATE` relationships between every pair of users who help each other reciprocally.
```
query='''
// RQ4 creando relaciones de reciprocidad
MATCH
(u1:User)-[:WROTE]->()-[:ANSWERS]->()<-[:WROTE]-(u2:User),
(u2)-[:WROTE]->()-[:ANSWERS]->()<-[:WROTE]-(u1)
WHERE u1 <> u2 AND u1.Id < u2.Id
WITH u1 AS user1,u2 AS user2
MERGE (user1)-[:RECIPROCATE]->(user2)
MERGE (user2)-[:RECIPROCATE]->(user1)
;
'''
with driver.session() as session:
res = session.run(query)
for r in res:
print(r)
```
You can also find the shortest path between any two users. If a path exists through some question or answer, it will be found. An example where there is direct communication:
```
query = "MATCH p=shortestPath( (u1:User {Id: '24'})-[*]-(u2:User {Id:'25'}) ) RETURN p"
with driver.session() as session:
res = session.run(query)
for r in res:
print(r['p'])
```
Whereas with another user, the chain is longer:
```
query="MATCH p=shortestPath( (u1:User {Id: '324'})-[*]-(u2:User {Id:'25'}) ) RETURN p"
with driver.session() as session:
res = session.run(query)
for r in res:
print(r['p'])
```
Finally, all shortest paths can be found; they show that there must be at least one question/answer pair between users who are reciprocal:
```
query= "MATCH p=allShortestPaths( (u1:User {Id: '24'})-[*]-(u2:User {Id:'25'}) ) RETURN p"
with driver.session() as session:
res = session.run(query)
for r in res:
print(r['p'])
```
## EXERCISE: Build the `:Tag` nodes for each of the tags that appear in the questions. Build the `post-[:TAGGED_BY]->tag` relationships for each tag, and also `tag-[:TAGS]->post`
To do this, look up the `WITH` and `UNWIND` constructs and the Cypher functions `replace()` and `split()` in the documentation. The following query should return 5703 results:
```
query='''
MATCH p=(t:Tag)-[:TAGS]->(:Question) WHERE t.name =~ "^java$|^c\\\\+\\\\+$" RETURN count(p);
'''
with driver.session() as session:
res = session.run(query)
for r in res:
print(r)
```
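For readers unfamiliar with `UNWIND` and `split()`, here is a hypothetical toy query, unrelated to the Tags data so it does not give away the exercise, that expands a delimited string into rows. Running it requires the live Neo4j session from above; the string and delimiter are made up for illustration.

```python
# Hypothetical toy example of UNWIND + split(); the query runs row-by-row,
# turning one delimited string into three result rows.
demo_query = '''
WITH split("java;c++;python", ";") AS parts
UNWIND parts AS part
RETURN part
'''
# With a running Neo4j instance:
# with driver.session() as session:
#     for r in session.run(demo_query):
#         print(r['part'])   # java, c++, python
```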
The following query shows the users who ask questions about each Tag:
```
query="MATCH (t:Tag)-->(:Question)<--(u:User) RETURN t.name,collect(distinct u.Id) ORDER BY t.name;"
with driver.session() as session:
res = session.run(query)
for r in res:
print(r)
```
The same `MATCH` can be used to find which set of tags each user has used, by changing what we return:
```
query="MATCH (t:Tag)-->(:Question)<--(u:User) RETURN u.Id, collect(distinct t.name) ORDER BY toInteger(u.Id);"
with driver.session() as session:
res = session.run(query)
for r in res:
print(r)
```
## EXERCISE: Relate each user to the tags of their questions through the `:INTERESTED_IN` relationship (similar to E1).
```
```
## EXERCISE: Load the Users CSV and add the missing properties to the users (so far each `:User` node only has the `Id` property; the rest must be loaded from the CSV).
```
```
## EXERCISE: Recommend _tags_ to users that they might be interested in, based on the _tags_ that the users related to them through `:RECIPROCATE` are interested in and they are not, ordered by the number of users interested in each _tag_.
```
```
Let's say we have a pandas DataFrame with several columns.
```
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.rand(5,5), columns=['A', 'B', 'C', 'D', 'E'])
df
```
What if we want to rename the columns? There is more than one way to do this, and I'll start with an indirect answer that's not really a rename. Sometimes the desire to rename a column comes along with a data change, so you may end up adding a new column instead. Depending on what you're working on, how much memory you can spare, and how many columns you're dealing with, adding another column is a good way to work during ad-hoc exploration, because you can always step back and repeat the steps since you still have the intermediate data. You can then complete the rename by dropping the old column. While this isn't very efficient, it's quite common in ad-hoc data exploration.
```
df['e'] = np.maximum(df['E'], .5)
```
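A minimal sketch of the full add-then-drop workflow, using a separate throwaway frame (`df_demo` is my name, not from the original) so the DataFrame above is left untouched:

```python
import numpy as np
import pandas as pd

# Standalone sketch: add the "renamed" column with the changed data,
# then drop the old column to complete the rename.
df_demo = pd.DataFrame(np.random.rand(5, 5), columns=['A', 'B', 'C', 'D', 'E'])
df_demo['e'] = np.maximum(df_demo['E'], .5)  # new column with modified data
df_demo = df_demo.drop(columns=['E'])        # remove the old column
print(df_demo.columns.tolist())              # ['A', 'B', 'C', 'D', 'e']
```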
But let's say you really do want to rename the column in place. Here's an easy way, but it requires you to update all the columns at once.
```
print(type(df.columns))
df.columns = ['A', 'B', 'C', 'D', 'EEEE', 'e']
```
Now the columns are not just a list of strings, but rather an Index, so under the hood the DataFrame will do some work to ensure you do the right thing here.
```
try:
df.columns = ['a', 'b']
except ValueError as ve:
print(ve)
```
Now, having to set the full column list to rename just one column is not convenient, so there are other ways. First, you can use the ```rename``` method. The method takes a mapping of old to new column names, so you can rename as many as you wish. Remember, axis 0 (or "index") is the primary index of the DataFrame (i.e., the rows), and axis 1 (or "columns") refers to the columns. Note that the default here is the index, so you'll need to pass this argument.
```
df.rename({'A': 'aaa', 'B': 'bbb', 'EEE': 'EE'}, axis="columns")
```
Note that by default it doesn't complain for mappings without a match ('EEE' is not a column but 'EEEE' is). You can force it to raise errors by passing in ```errors='raise'```. Also, it returns the DataFrame, so like many DataFrame methods, you need to pass ```inplace=True``` if you want to make the change persist in your DataFrame, or reassign to the same variable.
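As a quick sketch of forcing an error for unmatched labels, using a throwaway frame (`df_demo` here, so the frame above is untouched):

```python
import numpy as np
import pandas as pd

df_demo = pd.DataFrame(np.random.rand(2, 3), columns=['A', 'B', 'C'])
try:
    # 'Z' is not a column, so errors='raise' makes rename fail loudly
    df_demo.rename({'Z': 'zzz'}, axis=1, errors='raise')
except KeyError as ke:
    print('rename raised:', ke)
```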
```
df.rename({'A': 'aaa', 'B': 'bbb', 'EEE': 'EE'}, axis=1, inplace=True)
```
You can also change the columns using the ```set_axis``` method, with the axis set to 1 or ```columns```. Again, ```inplace=True``` will update the DataFrame in place (it was the default in older versions of pandas but defaults to False in 1.0+) if you don't want to reassign variables.
```
df.set_axis(['A', 'B', 'C', 'D', 'E', 'e'], axis="columns")
```
The ```rename``` method will also take a function. If you pass in the function (or dictionary) as the index or columns parameter, it will apply to that axis. This allows you to do generic column name cleanup easily, such as removing trailing whitespace.
```
df.columns = ['A ', 'B ', 'C ', 'D ', 'E ', 'e']
df.rename(columns=lambda x: x.strip(), inplace=True)
```
I'll also mention that one of the primary reasons for not using ```inplace=True``` is method chaining in DataFrame creation and initial setup. Often, you'll end up doing something like this (contrived, I know).
```
df = pd.DataFrame(np.random.rand(2,5,), columns=np.random.rand(5)).rename(columns=lambda x: str(x)[0:5])
df
```
Which you'll hopefully agree is much better than this.
```
df = pd.DataFrame(np.random.rand(2,5,), columns=np.random.rand(5))
df.columns = [str(x)[0:5] for x in df.columns]
df
```
<table width="100%"> <tr>
<td style="background-color:#ffffff;">
<a href="http://qworld.lu.lv" target="_blank"><img src="..\images\qworld.jpg" width="35%" align="left"> </a></td>
<td style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
prepared by Abuzer Yakaryilmaz (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>)
<br>
updated by Özlem Salehi | July 1, 2019
</td>
</tr></table>
<table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
<h2> Probabilistic Bit </h2>
A <b> probabilistic bit </b> is a bit that is equal to 0 with probability $Pr(0)$ and equal to 1 with probability
$Pr(1)$, for some probabilities $Pr(0), Pr(1) \geq 0$ such that $Pr(0) + Pr(1) = 1$.
A coin is a probabilistic bit. After flipping a coin, we can get a Head with probability $Pr(Head)$ or a Tail with probability $Pr(Tail)$. We can represent these two cases by a single bit:
<ul>
<li> 0 represents Head </li>
<li> 1 represents Tail </li>
</ul>
<h3> Vector Representation </h3>
Suppose that Asja flips a coin secretly.
Because we do not see the result, our information about the outcome will be probabilistic:
Since we have two different outcomes, Head (0) and Tail (1), we can use a column vector of size 2 to hold the probabilities of getting Head and getting Tail.
Hence, our knowledge about the outcome is $\myvector{Pr(Head) \\ Pr(Tail)}$.
The first entry shows the probability of getting Head, and the second entry shows the probability of getting Tail.
If the coin is fair,
$\rightarrow$ The result is Head with probability $0.5$ and the result is Tail with probability $0.5$.
If the coin has a bias $ \dfrac{Pr(Head)}{Pr(Tail)} = \dfrac{3}{1}$, then our information about the outcome is as follows:
$\rightarrow$ The result is Head with probability $ 0.75 $ and the result is Tail with probability $ 0.25 $.
For the fair coin, our information after the coin-flip is $ \myvector{0.5 \\ 0.5} $. For the biased coin, it is $ \myvector{0.75 \\ 0.25} $.
$ \myvector{0.5 \\ 0.5} $ and $ \myvector{0.75 \\ 0.25} $ are two examples of 2-dimensional (column) vectors.
<h3> Task 1 </h3>
Suppose that Balvis secretly flips a coin having bias $ \dfrac{Pr(Head)}{Pr(Tail)} = \dfrac{1}{4}$.
Represent your information about the outcome as a column vector.
<h3> Task 2 </h3>
Suppose that Fyodor secretly rolls a loaded (tricky) dice with the bias
$$ Pr(1):Pr(2):Pr(3):Pr(4):Pr(5):Pr(6) = 7:5:4:2:6:1 . $$
Represent your information about the result as a column vector. Note that your column vector should have size 6.
You may use Python for your calculations.
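One possible way to do that calculation in Python (a sketch, not the linked solution) is to normalize the bias ratios by their sum:

```python
# Normalize the bias ratios 7:5:4:2:6:1 into probabilities that sum to 1.
ratios = [7, 5, 4, 2, 6, 1]
total = sum(ratios)                   # 25
probs = [r / total for r in ratios]   # entries of the 6-dimensional column vector
print(probs)                          # [0.28, 0.2, 0.16, 0.08, 0.24, 0.04]
```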
<a href="B07_Probabilistic_Bit_Solutions.ipynb#task2">click for our solution</a>
<h3> States 0 and 1</h3>
The vector $\myvector{a\\ b }$ where $a, b \geq 0$ and $a + b =1$ represents a probabilistic system.
$a$ is the probability of being in state 0 and $b$ is the probability of being in state 1.
The vector representation of state 0 is $ \myvector{1 \\ 0} $. Similarly, the vector representation of state 1 is $ \myvector{0 \\ 1} $.
If we observe the outcome after flipping the coin, then our current knowledge is updated as either $ \myvector{1 \\ 0} $ or $ \myvector{0 \\ 1} $.
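As a small simulation sketch (this uses NumPy, which is an assumption and not part of the original notebook), observing the outcome replaces the probabilistic vector with one of these deterministic states:

```python
import numpy as np

state = np.array([0.75, 0.25])               # knowledge before observing
outcome = np.random.choice([0, 1], p=state)  # "flip" and look at the coin
# After observation, the knowledge collapses to state 0 or state 1:
observed = np.array([1, 0]) if outcome == 0 else np.array([0, 1])
print(outcome, observed)
```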
<h3> Coin flipping as an operator </h3>
Coin-flipping can be defined as an operator.
<i>An operator, depending on the current state of the probabilistic bit, updates the state of probabilistic bit.</i>
Let us look at the effect of coin flipping on the states Head and Tail.
<ul>
<li> $ FairCoin(\textbf{Head}) = \frac{1}{2} \textbf{Head} + \frac{1}{2}\textbf{Tail} $ </li>
<li> $ FairCoin(\textbf{Tail}) = \frac{1}{2} \textbf{Head} + \frac{1}{2}\textbf{Tail} $ </li>
</ul>
<br>
$
FairCoin = \begin{array}{c|cc} \mbox{Final}\overset{\LARGE\setminus}{\phantom{.}}\overset{ \mbox{Initial}}{\phantom{l}} & \mathbf{Head} & \mathbf{Tail} \\ \hline \mathbf{Head} & \dfrac{1}{2} & \dfrac{1}{2} \\ \mathbf{Tail} & \dfrac{1}{2} & \dfrac{1}{2} \end{array}
$
Or, by using 0 and 1:
$
FairCoin = \begin{array}{c|cc} \mbox{Final}\overset{\LARGE\setminus}{\phantom{.}}\overset{ \mbox{Initial}} {\phantom{l}}& \mathbf{0} & \mathbf{1} \\ \hline \mathbf{0} & \dfrac{1}{2} & \dfrac{1}{2} \\ \mathbf{1} & \dfrac{1}{2} & \dfrac{1}{2} \end{array}
$
<ul>
<li> $ BiasedCoin(\textbf{Head}) = 0.6~\textbf{Head} + 0.4~\textbf{Tail} $ </li>
<li> $ BiasedCoin(\textbf{Tail}) = 0.6~\textbf{Head} + 0.4~\textbf{Tail} $ </li>
</ul>
<br>
$
BiasedCoin = \begin{array}{c|cc} \mbox{Final}\overset{\LARGE\setminus}{\phantom{.}}\overset{ \mbox{Initial}} {\phantom{l}}& \mathbf{Head} & \mathbf{Tail} \\ \hline \mathbf{Head} & 0.6 & 0.6 \\ \mathbf{Tail} & 0.4 & 0.4 \end{array}
$
Or, by using 0 and 1 as the states:
$
BiasedCoin = \begin{array}{c|cc} \mbox{Final}\overset{\LARGE\setminus}{\phantom{.}}\overset{ \mbox{Initial}} {\phantom{l}}& \mathbf{0} & \mathbf{1} \\ \hline \mathbf{0} & 0.6 & 0.6\\ \mathbf{1} & 0.4 & 0.4 \end{array}
$
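As a sketch of how such an operator acts on a state (using NumPy, which is an assumption and not part of the original notebook): updating a probabilistic state is a matrix-vector product.

```python
import numpy as np

# Columns give the operator's action on states 0 and 1 respectively.
biased_coin = np.array([[0.6, 0.6],
                        [0.4, 0.4]])
state = np.array([0.5, 0.5])          # fair prior over states {0, 1}
new_state = biased_coin @ state       # matrix-vector product
print(new_state)                      # [0.6 0.4]
```

Note that both columns of the matrix sum to 1, so the output is again a valid probability vector.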
## Create a Self-Aggregating Map Layer using GeoAnalytics
This notebook will complete the following:
- Connect to your Enterprise portal
- Search through your big data file shares for your dataset of interest
- Run the GeoAnalytics Tool Copy to Data Store
- Publish the results of the tool as a Map Image Layer
```
# Import the required modules to use the ArcGIS API for Python
# and the GeoAnalytics module
from arcgis.gis import GIS
import arcgis.geoanalytics
```
To modify this notebook to be used in your organization, set the following variables:
- The portal URL, username and password. The member is required to have [privileges](http://enterprise.arcgis.com/en/portal/latest/administer/windows/roles.htm) to run GeoAnalytics.
- The big data file share name you are using. It is assumed you have already registered a big data file. If you haven't, you can register it manually using the [steps outlined here](http://enterprise.arcgis.com/en/server/latest/publish-services/windows/registering-your-data-with-arcgis-server-using-manager.htm#ESRI_SECTION1_0D55682C9D6E48E7857852A9E2D5D189), or using the API. [See this sample for details.](https://developers.arcgis.com/python/sample-notebooks/creating-hurricane-tracks-using-geoanalytics/#Create-a-data-store)
- The dataset in the big data file share
```
# Variables that you can set to make this run on your own portal
# This is the portal that you will be connecting to
portal_url = "https://mymachinename.domain.com/portal"
# This is the portal member and password that will be running analysis
portal_username = "username"
portal_password = "password"
# The name of the big data file share used as input
big_data_file_share_name = "bigDataFileShares_pyTest"
# The dataset name in the big data file share above used to run the analysis
big_data_file_share_dataset = "ChicagoCrimes"
# Setting up for the Enterprise portal
gis = GIS(portal_url, portal_username, portal_password, verify_cert=False)
if not arcgis.geoanalytics.is_supported():
print("GeoAnalytics is not supported on the Enterprise portal. Please use a portal that GeoAnalytics is supported on.")
```
## Find the dataset to use for analysis
To run analysis, we must find the dataset to run analysis on. In this workflow, we'll run the analysis on a dataset in a big data file share. Like all GeoAnalytics tools, this analysis could also be run on a feature layer or collection. To do this we first need to:
- Find the big data file shares registered with our GeoAnalytics Server
- Search through the big data file shares to find the one we want to use
- Search through our big data file share for the dataset to use.
Remember, you can have multiple big data file shares in a portal, and each big data file share may have one or more datasets.
```
# Search for all the big data file shares in your portal
bigdata_fileshares = gis.content.search("", item_type = "big data file share", max_items=20)
bigdata_fileshares
# Iterate through all the big data file share items until we find the one we are interested in using
try:
data_item = next(x for x in bigdata_fileshares if x.title == big_data_file_share_name)
print("The item being used is: {0}".format(data_item))
except:
print("\nThe big data file share that you were looking for was not found. Please:")
print(" - Verify you have registered the big data file share before running this code.")
print(" - Check that you gave the correct name for the big data file share. Expected format: \"bigDataFileShares_name\"")
print(" - Increase the max items returned when searching for your item.\n")
print("The big data file shares listed in your portal are: ")
[print(" -",x.title) for x in bigdata_fileshares]
try:
layer_to_use = next(x for x in data_item.layers if x.properties.name == big_data_file_share_dataset)
print("The layer being used is: {0}".format(layer_to_use))
except:
print("\nThe dataset: {0} in big data file share: {1} was not found. Please:".format(big_data_file_share_dataset, big_data_file_share_name) )
print(" - Check that you gave the correct name for the dataset and big data file share.")
print("The datasets listed in your big data file share are: ")
[print(" -",x.properties.name) for x in data_item.layers]
```
## Run the Analysis
Now that we have found the layer of interest, we can set up the analysis environment and run the [Copy to Data Store](http://enterprise.arcgis.com/en/portal/latest/use/geoanalytics-copy-to-data-store.htm) tool using GeoAnalytics. When running tools, we can set environment variables that will be applied to all subsequent tool runs. In this sample, we will set the following environment variables:
- Default aggregation styles. We will set this to true; by default, it is false.
- Verbose logging. We will turn this on to see the status of the tool as it runs.
There are a few other parameters we could set ([see the API guide here](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.env.html)), but we will not set them for this sample. They include:
- Processing spatial reference
- Output data store
- Output spatial reference
- Extent of data used in the analysis.
```
# Import the Copy to Data Store tool
from arcgis.geoanalytics.manage_data import copy_to_data_store
# Set the environment variables for using GeoAnalytics. All of these parameters are optional.
arcgis.env.default_aggregation_styles = True
arcgis.env.verbose = True
# Run the tool. We use a random number generator to ensure the name is always unique.
import random
random_value = random.randint(1,100)
result_name = "CrimeDataset_" + str(random_value)
tool_output = copy_to_data_store(layer_to_use, output_name=result_name)
# Confirm that a layer has been created
processed_map = gis.map("Chicago")
processed_map
processed_map.add_layer(tool_output)
```
## Publish the layer as a Map Image Layer
The result of the GeoAnalytics tool is a feature layer in your portal. To create a map image layer, you need to publish this result. Publishing automatically creates a map image layer of the same name in your portal, one that aggregates data depending on your zoom level.
```
# This publishes the feature layer as a map image layer
self_aggregating_map_service = tool_output.publish()
self_aggregating_map_service
```
---
```
import os
import numpy as np
from matplotlib import pyplot as plt, colors, lines
```
## Generate plots for Adaptive *k*-NN on Random Subspaces and Tiny ImageNet
This code expects the output from the `Adaptive k-NN Subspaces Tiny ImageNet` notebook, so be sure to run that first.
```
plt.rc('text', usetex=True)
plt.rcParams['figure.figsize'] = [3.25, 2.5]
plt.rcParams['figure.dpi'] = 150
plt.rcParams['font.family'] = 'Times New Roman'
plt.rcParams['font.size'] = 8
plt.rcParams['axes.titlesize'] = 'small'
plt.rcParams['axes.titlepad'] = 3
plt.rcParams['xtick.labelsize'] = 'x-small'
plt.rcParams['ytick.labelsize'] = plt.rcParams['xtick.labelsize']
plt.rcParams['legend.fontsize'] = 6
plt.rcParams['legend.handlelength'] = 1.5
plt.rcParams['lines.markersize'] = 4
plt.rcParams['lines.linewidth'] = 0.7
plt.rcParams['axes.linewidth'] = 0.6
plt.rcParams['grid.linewidth'] = 0.6
plt.rcParams['xtick.major.width'] = 0.6
plt.rcParams['xtick.minor.width'] = 0.4
plt.rcParams['ytick.major.width'] = plt.rcParams['xtick.major.width']
plt.rcParams['ytick.minor.width'] = plt.rcParams['xtick.minor.width']
color_cycle = ['#003366', '#800000']
res_npz = np.load(os.path.join('results', 'tiny_imagenet_subspaces.npz'))
res_tiny_imagenet_np=res_npz['res_tiny_imagenet_np']
res_subspace_np=res_npz['res_subspace_np']
n=res_npz['n']
m=res_npz['m']
ps=res_npz['ps']
k=res_npz['k']
h=res_npz['h']
delta=res_npz['delta']
n_trials=res_npz['n_trials']
which_alpha=res_npz['which_alpha']
alpha_cs=res_npz['alpha_cs']
alpha_cs_ti=res_npz['alpha_cs_ti']
def plot_twin_ax_with_quartiles(ax, title, recovered_frac, n_iter_frac, alpha_cs, legend=True, left=True, bottom=True, right=True):
y_recover = np.median(recovered_frac, axis=0)
y_recover_1q = np.percentile(recovered_frac, 25, axis=0)
y_recover_3q = np.percentile(recovered_frac, 75, axis=0)
y_n_iter = np.median(n_iter_frac, axis=0)
y_n_iter_1q = np.percentile(n_iter_frac, 25, axis=0)
y_n_iter_3q = np.percentile(n_iter_frac, 75, axis=0)
ax_r = ax.twinx()
ax.fill_between(alpha_cs, y_recover_1q, y_recover_3q, facecolor=color_cycle[0], alpha=0.3)
ax.semilogx(alpha_cs, y_recover, c=color_cycle[0], label='top $k$ fraction')
ax.set_ylim(-0.1, 1.1)
ax.set_yticks(np.linspace(0, 1, 6))
#ax.set_ylabel('Fraction of top $k$ found')
#ax.set_xlabel(r'$C_\alpha$')
if legend:
solid_legend = ax.legend(loc='upper left')
ax.set_title(title)
ax.grid(axis='x')
ax.grid(which='minor', axis='x', alpha=0.2)
ax.set_zorder(ax_r.get_zorder() + 1)
ax.patch.set_visible(False)
ax.tick_params(axis='both', which='both', left=left, labelleft=left, bottom=bottom, labelbottom=bottom)
ax_r.fill_between(alpha_cs, y_n_iter_1q, y_n_iter_3q, facecolor=color_cycle[1], alpha=0.3)
ax_r.loglog(alpha_cs, y_n_iter, '--', c=color_cycle[1])
ax_r.set_yscale('log')
ax_r.set_ylim(1e-4 * 10**(1/3), 10**(1/3))
#ax_r.set_ylabel('\#iters / mn')
if legend:
dashed_plt, = ax.plot([], [], '--', c=color_cycle[1])
ax.legend((dashed_plt,), ('\#iters / mn',), loc='lower right')
ax.add_artist(solid_legend)
ax_r.grid(True)
ax_r.grid(which='minor', alpha=0.2)
ax_r.tick_params(axis='both', which='both', right=right, labelright=right)
fig, axes = plt.subplots(2, 2)
for i in range(3):
plot_twin_ax_with_quartiles(axes.ravel()[i], '$p = %g$' % ps[i],
res_subspace_np[:, i, 0, 0, 0, 0, 0, :, 0] / k,
res_subspace_np[:, i, 0, 0, 0, 0, 0, :, 1] / n / m,
alpha_cs, legend=False, left=i % 2 == 0, bottom=True, right=i % 2 == 1)
plot_twin_ax_with_quartiles(axes[1, 1], 'Tiny ImageNet',
res_tiny_imagenet_np[:, 0, 0, 0, 0, 0, :, 0] / k,
res_tiny_imagenet_np[:, 0, 0, 0, 0, 0, :, 1] / n / m,
alpha_cs_ti, legend=False, left=False)
plt.tight_layout(pad=1.7, h_pad=0.2, w_pad=0.2)
# add legend to axis labels
left_solid_line = lines.Line2D([0.017]*2, [0.375, 0.425], color=color_cycle[0], transform=fig.transFigure, figure=fig)
right_dashed_line = lines.Line2D([0.985]*2, [0.265, 0.315], linestyle='--', color=color_cycle[1], transform=fig.transFigure, figure=fig)
fig.lines.extend([left_solid_line, right_dashed_line])
fig.canvas.draw()
fig.text(0.5, 0.965, r'Effect of varying $C_\alpha$', ha='center', va='center', size='large')
fig.text(0.017, 0.5, 'Recall', ha='center', va='center', rotation='vertical')
fig.text(0.5, 0.015, r'$C_\alpha$', ha='center', va='center')
fig.text(0.985, 0.5, r'\#iterations / $mn$', ha='center', va='center', rotation='vertical')
plt.savefig(os.path.join('results', 'tiny_imagenet_subspaces.pdf'))
plt.show()
```
---
# Introduction to Deep Learning with PyTorch
In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. Altogether, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
## Neural Networks
Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.
<img src="assets/simple_neuron.png" width=400px>
Mathematically this looks like:
$$
\begin{align}
y &= f(w_1 x_1 + w_2 x_2 + b) \\
y &= f\left(\sum_i w_i x_i +b \right)
\end{align}
$$
With vectors this is the dot/inner product of two vectors:
$$
h = \begin{bmatrix}
x_1 \, x_2 \cdots x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{bmatrix}
$$
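To make the formula concrete, here is a minimal plain-Python sketch of a single unit with a sigmoid activation; the input, weight, and bias values are made up for illustration:

```python
import math

def sigmoid(z):
    # Sigmoid activation: squashes any real number into (0, 1)
    return 1 / (1 + math.exp(-z))

# Made-up inputs, weights, and bias for one unit
x = [0.5, -1.0, 2.0]
w = [0.8, 0.2, -0.5]
b = 0.1

# y = f(sum_i w_i * x_i + b)
z = sum(w_i * x_i for w_i, x_i in zip(w, x)) + b
y = sigmoid(z)
print(round(z, 2), round(y, 4))  # z = -0.7
```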
## Tensors
It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). Tensors are the fundamental data structure for neural networks, and PyTorch (as well as pretty much every other deep learning framework) is built around them.
<img src="assets/tensor_examples.svg" width=600px>
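The dimensionality idea can be illustrated without any framework at all; nested Python lists play the role of tensors here (a hypothetical 2×2 RGB image stands in for the 3-dimensional case):

```python
vector = [1.0, 2.0, 3.0]                      # 1-D tensor: shape (3,)
matrix = [[1.0, 2.0],
          [3.0, 4.0]]                         # 2-D tensor: shape (2, 2)
image = [[[255, 0, 0], [0, 255, 0]],
         [[0, 0, 255], [255, 255, 255]]]      # 3-D tensor: shape (2, 2, 3)

def ndim(t):
    # Nesting depth of a (uniformly nested) list = number of dimensions
    depth = 0
    while isinstance(t, list):
        depth += 1
        t = t[0]
    return depth

print(ndim(vector), ndim(matrix), ndim(image))  # 1 2 3
```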
With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
```
# First, import PyTorch
# use fastai venv to run since pytorch installed
import torch
def activation(x):
""" Sigmoid activation function
Arguments
---------
x: torch.Tensor
"""
return 1/(1+torch.exp(-x))
# activation(3.2)
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 5 random normal variables
features = torch.randn((1,5))
# True weights for our data, random normal variables again
weights = torch.randn((1,5))
# and a true bias term
bias = torch.randn((1, 1))
```
Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using real data. Going through each relevant line:
`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn((1, 5))` creates another tensor with the same shape as `features`, again containing values from a normal distribution. (Equivalently, `torch.randn_like(features)` creates a tensor shaped like `features`.)
Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.
PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
```
## Calculate the output of this network using the weights and bias tensors
weights.shape
activation(torch.sum(features * weights ) + bias)
```
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.
Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error
```python
>> torch.mm(features, weights)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-15d592eb5279> in <module>()
----> 1 torch.mm(features, weights)
RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```
As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.
**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.
There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).
* `weights.reshape(a, b)` will return a tensor with the same data as `weights` with size `(a, b)`. Sometimes the returned tensor shares memory with the original, and sometimes it is a clone, meaning the data is copied to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.
I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.
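The key intuition behind all three methods is that a reshape just reinterprets the same flat sequence of values under a new shape; a small plain-Python sketch (not the PyTorch implementation) makes this visible:

```python
# Five values, thought of as a (1, 5) row of weights
flat = [0.1, 0.2, 0.3, 0.4, 0.5]

def view(data, rows, cols):
    # Reinterpret a flat buffer as a rows x cols grid
    assert rows * cols == len(data), "total number of elements must match"
    return [data[r * cols:(r + 1) * cols] for r in range(rows)]

column = view(flat, 5, 1)  # the (5, 1) shape needed for matrix multiplication
print(column)              # [[0.1], [0.2], [0.3], [0.4], [0.5]]
```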
> **Exercise**: Calculate the output of our little network using matrix multiplication.
```
weights.shape
## Calculate the output of this network using matrix multiplication
activation(torch.mm(features, weights.view(5,1))+bias)
```
### Stack them up!
That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.
<img src='assets/multilayer_diagram_weights.png' width=450px>
The first layer, shown on the bottom here, contains the inputs and is understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated
$$
\vec{h} = [h_1 \, h_2] =
\begin{bmatrix}
x_1 \, x_2 \cdots \, x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_{11} & w_{12} \\
w_{21} &w_{22} \\
\vdots &\vdots \\
w_{n1} &w_{n2}
\end{bmatrix}
$$
The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output can be expressed simply as
$$
y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)
$$
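Before the torch version below, the whole two-layer formula can be sketched in plain Python; the sigmoid activations and the specific weight, bias, and input values here are assumptions made for this illustration only:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def row_times_matrix(x, W):
    # (1 x n) row vector times (n x m) matrix -> length-m list
    return [sum(x[i] * W[i][j] for i in range(len(x))) for j in range(len(W[0]))]

x = [0.5, -1.0, 2.0]                              # 3 input features
W1 = [[0.1, 0.4], [0.2, -0.3], [-0.5, 0.2]]       # 3 inputs -> 2 hidden units
B1 = [0.0, 0.1]
W2 = [[0.3], [-0.6]]                              # 2 hidden units -> 1 output
B2 = [0.05]

# h = f1(x W1 + B1), then y = f2(h W2 + B2)
h = [sigmoid(v + b) for v, b in zip(row_times_matrix(x, W1), B1)]
y = [sigmoid(v + b) for v, b in zip(row_times_matrix(h, W2), B2)]
print(h, y)
```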
```
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 3 random normal variables
features = torch.randn((1, 3))
features
features.shape
# Define the size of each layer in our network
n_input = features.shape[1] # Number of input units, must match number of input features
n_hidden = 2 # Number of hidden units
n_output = 1 # Number of output units
# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
W1
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)
W2
# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
B1
B2
```
> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
```
## Your solution here
H1 = activation((torch.mm(features, W1))+B1)
y = activation(torch.mm(H1,W2)+B2)
H1
y
```
If you did this correctly, you should see the output `tensor([[ 0.3171]])`.
The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weight and bias parameters. As you'll see later when we discuss training a neural network, the more hidden units and layers a network has, the better able it is to learn from data and make accurate predictions.
## Numpy to Torch and back
Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
```
import numpy as np
a = np.random.rand(4,3)
a
b = torch.from_numpy(a)
b
b.numpy()
```
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
```
# Multiply PyTorch Tensor by 2, in place, underscore means in place operation
b.mul_(2)
# Numpy array matches new values from Tensor
a
```
---
```
#r "./../../../../../../public/src/L4-application/BoSSSpad/bin/Release/net5.0/BoSSSpad.dll"
using System;
using ilPSP;
using ilPSP.Utils;
using BoSSS.Platform;
using BoSSS.Foundation;
using BoSSS.Foundation.XDG;
using BoSSS.Foundation.Grid;
using BoSSS.Solution;
using BoSSS.Application.XNSE_Solver;
using BoSSS.Application.BoSSSpad;
using BoSSS.Foundation.Grid.Classic;
using BoSSS.Foundation.IO;
using BoSSS.Solution.AdvancedSolvers;
using BoSSS.Solution.Control;
using BoSSS.Solution.XNSECommon;
using BoSSS.Solution.NSECommon;
using BoSSS.Application.XNSE_Solver.LoadBalancing;
using BoSSS.Solution.LevelSetTools;
using BoSSS.Solution.XdgTimestepping;
using static BoSSS.Application.BoSSSpad.BoSSSshell;
Init();
struct Parameterz{
public Parameterz(int _Cores, int _Poly, int _Res){
Cores = _Cores;
Poly = _Poly;
Res = _Res;
}
public int Cores;
public int Poly;
public int Res;
}
// set parameterz
var Parameterz = new List<Parameterz>();
/*
Values for stretch in x-direction = 1, meaning: cells = Res * stretch * Res * Res
k2 cores N Res cells DOF per core
4 34 10.55667192 11 1331 11314
8 34 13.30057317 14 2744 11662
16 34 16.75767211 17 4913 10440
32 34 21.11334384 22 10648 11314
64 34 26.60114634 28 21952 11662
128 34 33.51534422 34 39304 10440
256 34 42.22668768 44 85184 11314
512 34 53.20229267 56 175616 11662
1024 34 67.03068844 68 314432 10440
k3 cores N Res cells DOF per core
4 70 8.298265334 9 729 12758
8 70 10.45515917 11 1331 11646
16 70 13.17267512 14 2744 12005
32 70 16.59653067 18 5832 12758
64 70 20.91031834 22 10648 11646
128 70 26.34535024 28 21952 12005
256 70 33.19306133 36 46656 12758
512 70 41.82063669 44 85184 11646
1024 70 52.69070048 56 175616 12005
k4 cores N Res cells DOF per core
4 125 6.839903787 7 343 10719
8 125 8.61773876 9 729 11391
16 125 10.85767047 11 1331 10398
32 125 13.67980757 14 2744 10719
64 125 17.23547752 18 5832 11391
128 125 21.71534093 22 10648 10398
256 125 27.35961515 28 21952 10719
512 125 34.47095504 36 46656 11391
*/
// 8 cores
//Parameterz.Add(new Parameterz(8,2,14));
//Parameterz.Add(new Parameterz(8,3,11));
Parameterz.Add(new Parameterz(8,4,9));
// 16 cores
//Parameterz.Add(new Parameterz(16,2,18));
//Parameterz.Add(new Parameterz(16,3,14));
Parameterz.Add(new Parameterz(16,4,11));
// 32 cores
//Parameterz.Add(new Parameterz(32,2,22));
//Parameterz.Add(new Parameterz(32,3,18));
Parameterz.Add(new Parameterz(32,4,14));
// 64 cores
//Parameterz.Add(new Parameterz(64,2,28));
//Parameterz.Add(new Parameterz(64,3,22));
Parameterz.Add(new Parameterz(64,4,18));
// 128 cores
//Parameterz.Add(new Parameterz(128,2,36));
//Parameterz.Add(new Parameterz(128,3,28));
Parameterz.Add(new Parameterz(128,4,22));
// 256 cores
//Parameterz.Add(new Parameterz(256,2,44));
//Parameterz.Add(new Parameterz(256,3,36));
Parameterz.Add(new Parameterz(256,4,28));
/*
// 512 cores
Parameterz.Add(new Parameterz(512,2,56));
Parameterz.Add(new Parameterz(512,3,44));
Parameterz.Add(new Parameterz(512,4,36));
*/
bool useAMR = false;
bool useLoadBal = true;
int NoOfTimeSteps = 1;
bool Steady = true;
bool IncludeConvection = false;
var Gshape = Shape.Sphere;
ExecutionQueues
// ==================================
// setup Client & Workflow & Database
// ==================================
/*
var myBatch = (SlurmClient)ExecutionQueues[3];
var AddSbatchCmds = new List<string>();
AddSbatchCmds.AddRange(new string[]{"#SBATCH -p test24", "#SBATCH -C avx512", "#SBATCH --mem-per-cpu="+MemoryPerCore});
myBatch.AdditionalBatchCommands = AddSbatchCmds.ToArray();
myBatch.AdditionalBatchCommands
*/
// Config for Jenkins-Server
//var myBatch = (MsHPC2012Client)ExecutionQueues[2];
var myBatch = GetDefaultQueue();
if(myBatch is SlurmClient){
(myBatch as SlurmClient).AdditionalBatchCommands = new string[]{"#SBATCH -p test24", "#SBATCH -C avx512", "#SBATCH --mem-per-cpu=2000"};
}
string WFlowName = "weak_scaling_5_1_p-MG";
BoSSS.Application.BoSSSpad.BoSSSshell.WorkflowMgm.Init(WFlowName);
BoSSS.Application.BoSSSpad.BoSSSshell.WorkflowMgm.SetNameBasedSessionJobControlCorrelation();
//string dirname = "DB_rotCubePaper";
//string dirname = "DB_rotSphere_CoreScaling";
//string dirname = "DB_rotSphere_DOFScaling";
//string dirname ="DB_rotSphereBenchmark";
//string dirname ="DB_rotSphere_CoreScaling";
//myBatch.AllowedDatabasesPaths.Clear();
//string winpath = @"W:\work\scratch\jw52xeqa\"+dirname;
//string winpath = @"\\hpccluster\hpccluster-scratch\"+dirname;
//string remotepath = @"/work/scratch/jw52xeqa/"+dirname;
//var myDB = OpenOrCreateDatabase(winpath);
//myBatch.AllowedDatabasesPaths.Add(new AllowedDatabasesPair(winpath,remotepath)); myBatch.AllowedDatabasesPaths
var pair = myBatch.AllowedDatabasesPaths.Pick(0);
string DBName = @"\"+"DB_rotSphere_CoreScaling";
string localpath=pair.LocalMountPath+DBName;
string remotepath=pair.PathAtRemote+DBName;
var myDB = OpenOrCreateDatabase(localpath); myDB.Path
static class Utils {
// DOF per cell in 3D
public static int Np(int p) {
return (p*p*p + 6*p*p + 11*p + 6)/6;
}
}
```
using the domain extent
```
double xMax = 4.0, yMax = 1.0, zMax = 1.0;
double xMin = -2.0, yMin = -1.0, zMin = -1.0;
// alternative (cubic) domain extent:
// double xMax = 1.0, yMax = 1.0, zMax = 1.0;
// double xMin = -1.0, yMin = -1.0, zMin = -1.0;
```
```
static Dictionary<int, IGridInfo> Grids = new Dictionary<int, IGridInfo>();
foreach(var P in Parameterz){
int Res = P.Res;
if(Grids.TryGetValue(Res,out IGridInfo ignore))
continue;
//int Stretching = (int)Math.Floor(Math.Abs(xMax-xMin)/Math.Abs(yMax-yMin));
int Stretching = 1;
var _xNodes = GenericBlas.Linspace(xMin, xMax, Stretching*Res + 1);
var _yNodes = GenericBlas.Linspace(yMin, yMax, Res + 1);
var _zNodes = GenericBlas.Linspace(zMin, zMax, Res + 1);
GridCommons grd;
string gname = "RotBenchmarkGrid";
var tmp = new List<IGridInfo>();
foreach(var grid in myDB.Grids){
try{
bool IsMatch = grid.Name.Equals(gname)&&grid.NumberOfCells==(_xNodes.Length-1)*(_yNodes.Length-1)*(_zNodes.Length-1);
if(IsMatch) tmp.Add(grid);
}
catch(Exception ex) {
Console.WriteLine(ex.Message);
}
}
//var tmp = myDB.Grids.Where(g=>g.Name.Equals(gname)&&g.NumberOfCells==Res*Res*Res); // this leads to exception in case of broken grids
if(tmp.Count()>=1){
Console.WriteLine("Grid found: "+tmp.Pick(0).Name);
Grids.Add(Res,tmp.Pick(0));
continue;
}
grd = Grid3D.Cartesian3DGrid(_xNodes, _yNodes, _zNodes);
grd.Name = gname;
//grd.AddPredefinedPartitioning("debug", MakeDebugPart);
grd.EdgeTagNames.Add(1, "Velocity_inlet");
grd.EdgeTagNames.Add(2, "Wall");
grd.EdgeTagNames.Add(3, "Pressure_Outlet");
grd.DefineEdgeTags(delegate (double[] _X) {
var X = _X;
double x, y, z;
x = X[0];
y = X[1];
z = X[2];
if(Math.Abs(x-xMin)<1E-8)
return 1;
else
return 3;
});
myDB.SaveGrid(ref grd,false);
Grids.Add(Res,grd);
} Grids.Keys.ToList()
int SpaceDim = 3;
Func<IGridInfo, int, XNSE_Control> GenXNSECtrl = delegate(IGridInfo grd, int k){
XNSE_Control C = new XNSE_Control();
// basic database options
// ======================
C.AlternateDbPaths = new[] {
(localpath, ""),
(remotepath,"")};
//C.AlternateDbPaths = new[] {
// (@"/work/scratch/jw52xeqa/DB_rotSphereBenchmark", ""),
// (@"W:\work\scratch\jw52xeqa\DB_rotSphereBenchmark","")};
//C.DbPath=@"\\hpccluster\hpccluster-scratch\DB_rotSphereBenchmark";
C.savetodb = true;
int J = grd.NumberOfCells;
C.SessionName = string.Format("J{0}_k{1}_t{2}", J, k,NoOfTimeSteps);
if(IncludeConvection){
C.SessionName += "_NSE";
C.Tags.Add("NSE");
} else {
C.SessionName += "_Stokes";
C.Tags.Add("Stokes");
}
C.Tags.Add(SpaceDim + "D");
if(Steady)C.Tags.Add("steady");
else C.Tags.Add("transient");
C.Tags.Add("reortho_Iter2_sameRes");
// DG degrees
// ==========
C.SetFieldOptions(k, Math.Max(k, 2));
C.saveperiod = 1;
//C.TracingNamespaces = "*";
C.GridGuid = grd.ID;
C.GridPartType = GridPartType.clusterHilbert;
C.DynamicLoadbalancing_ClassifierType = ClassifierType.CutCells;
C.DynamicLoadBalancing_On = useLoadBal;
C.DynamicLoadBalancing_RedistributeAtStartup = true;
C.DynamicLoadBalancing_Period = 1;
C.DynamicLoadBalancing_ImbalanceThreshold = 0.1;
// Physical Parameters
// ===================
const double rhoA = 1;
const double Re = 50;
double muA = 1;
double partRad = 0.3001;
double anglev = Re*muA/rhoA/(2*partRad);
//double anglev = 0.0;
double d_hyd = 2*partRad;
double VelocityIn = Re*muA/rhoA/d_hyd;
double[] pos = new double[SpaceDim];
C.PhysicalParameters.IncludeConvection = IncludeConvection;
C.PhysicalParameters.Material = true;
C.PhysicalParameters.rho_A = rhoA;
C.PhysicalParameters.mu_A = muA;
C.Rigidbody.SetParameters(pos,anglev,partRad,SpaceDim);
C.Rigidbody.SpecifyShape(Gshape);
C.Rigidbody.SetRotationAxis("x");
C.AddInitialValue(VariableNames.LevelSetCGidx(0), new Formula("X => -1"));
C.UseImmersedBoundary = true;
C.AddInitialValue("Pressure", new Formula(@"X => 0"));
C.AddBoundaryValue("Pressure_Outlet");
C.AddBoundaryValue("Velocity_inlet","VelocityX",new Formula($"(X) => {VelocityIn}"));
C.CutCellQuadratureType = BoSSS.Foundation.XDG.XQuadFactoryHelper.MomentFittingVariants.Saye;
C.UseSchurBlockPrec = true;
C.AgglomerationThreshold = 0.1;
C.AdvancedDiscretizationOptions.ViscosityMode = ViscosityMode.FullySymmetric;
C.Option_LevelSetEvolution2 = LevelSetEvolution.Prescribed;
C.Option_LevelSetEvolution = LevelSetEvolution.None;
C.Timestepper_LevelSetHandling = LevelSetHandling.None;
C.LinearSolver.NoOfMultigridLevels = 4;
C.LinearSolver.ConvergenceCriterion = 1E-6;
C.LinearSolver.MaxSolverIterations = 500;
C.LinearSolver.MaxKrylovDim = 50;
C.LinearSolver.TargetBlockSize = 1000;
C.LinearSolver.verbose = true;
C.LinearSolver.SolverCode = LinearSolverCode.exp_Kcycle_schwarz;
C.NonLinearSolver.SolverCode = NonLinearSolverCode.Newton;
C.NonLinearSolver.ConvergenceCriterion = 1E-6;
C.NonLinearSolver.MaxSolverIterations = 10;
C.NonLinearSolver.verbose = true;
C.AdaptiveMeshRefinement = useAMR;
if (useAMR) {
C.SetMaximalRefinementLevel(1);
C.AMR_startUpSweeps = 0;
}
// Timestepping
// ============
double dt = -1;
if(Steady){
C.TimesteppingMode = AppControl._TimesteppingMode.Steady;
dt = 1000;
C.NoOfTimesteps = 1;
} else {
C.TimesteppingMode = AppControl._TimesteppingMode.Transient;
dt = 0.1;
C.NoOfTimesteps = NoOfTimeSteps;
}
C.TimeSteppingScheme = TimeSteppingScheme.ImplicitEuler;
C.dtFixed = dt;
return C;
};
var controls = new Dictionary<Parameterz,XNSE_Control>();
foreach(var P in Parameterz){
int k = P.Poly;
Grids.TryGetValue(P.Res,out IGridInfo grd);
controls.Add(P,GenXNSECtrl(grd,k));
} controls.Values.Select(s=>s.SessionName)
static Func<KeyValuePair<Parameterz,XNSE_Control>,int> NodeRegression = delegate (KeyValuePair<Parameterz,XNSE_Control> extcontrol) {
var thisctrl = extcontrol;
Grids.TryGetValue(thisctrl.Key.Res,out IGridInfo grd);
int Ncells = grd.NumberOfCells;
int k = thisctrl.Key.Poly;
int DOFperCell = Utils.Np(k);
int DOFperCore = DOFperCell*Ncells;
double mempercore = -1E-10*DOFperCore*DOFperCore+2.6E-3*DOFperCore+13500;
double memoryneed = mempercore * thisctrl.Key.Cores;
double memorypernode = 384*1024; // MB
int tmp = (int)Math.Ceiling(memoryneed / memorypernode);
double bla = Math.Ceiling(Math.Log(tmp)/Math.Log(2));
int nodes2allocate = (int)Math.Pow(2,bla);
return nodes2allocate;
};
NodeRegression(controls.Pick(0))
controls.Select(s=>s.Value.SessionName)
int iSweep=0;
foreach(var ctrl in controls){
try{
int cores= ctrl.Key.Cores;
var ctrlobj = ctrl.Value;
string sessname = ctrlobj.SessionName;
ctrlobj.SessionName = sessname + "_c"+cores+"_Re50_"+"mue_"+ctrlobj.PhysicalParameters.mu_A;
var aJob = new Job("rotSphereInlet_"+Gshape+ctrlobj.SessionName,typeof(XNSE));
aJob.SetControlObject(ctrlobj);
aJob.NumberOfMPIProcs = cores;
aJob.ExecutionTime = "3:00:00";
aJob.UseComputeNodesExclusive = true;
if(myBatch is SlurmClient){
int nodes2allocate = NodeRegression.Invoke(ctrl);
var Cmdtmp = new List<string>();
Cmdtmp.AddRange((myBatch as SlurmClient).AdditionalBatchCommands.ToList());
Cmdtmp.Add($"#SBATCH -N {nodes2allocate}");
(myBatch as SlurmClient).AdditionalBatchCommands = Cmdtmp.ToArray();
}
aJob.Activate(myBatch);
iSweep++;
} catch (Exception ex){
Console.WriteLine(ex.Message);
}
}
BoSSS.Application.BoSSSpad.BoSSSshell.WorkflowMgm.BlockUntilAllJobsTerminate();
```
---
<img src="images/dask_horizontal.svg"
width="45%"
alt="Dask logo">
# Dask Delayed
This notebook covers Dask's `delayed` interface and how it can be used to parallelize existing Python code and custom algorithms. Let's start by looking at a basic, non-parallelized example and then see how `dask.delayed` can help.
# Basic example
In the code cell below, we have two Python functions, `inc` and `add`, which increment and add together their inputs, respectively. We call these functions on input values to produce some output which we then print.
*Tip*: the `%%time` at the top of the cell tells Jupyter to print out how long it took the cell to run.
```
%%time
import time
def inc(i):
time.sleep(1)
return i + 1
def add(a, b):
time.sleep(1)
return a + b
a = 1
b = 12
# Increment a and b
c = inc(a)
d = inc(b)
# Add results together
output = add(c, d)
print(f"{output = }")
```
The steps in this computation can be encoded in the task graph shown below

In the above task graph:
1. Circular nodes in the graph are Python function calls
2. Square nodes are Python objects that are created by one task as output and can be used as inputs in another task
3. Arrows represent dependencies between tasks
From looking at the task graph, we can see there's an opportunity for parallelism here! The two `inc` calls are totally independent of one another, so they could be run at the same time in parallel. Let's see how we can use Dask's `delayed` interface to do this.
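The parallelism opportunity itself is independent of Dask; as a point of comparison (not part of this notebook's approach), the standard library's `concurrent.futures` can run the two independent `inc` calls concurrently:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def inc(i):
    time.sleep(1)
    return i + 1

def add(a, b):
    return a + b

start = time.time()
with ThreadPoolExecutor() as pool:
    # The two inc calls have no dependency on each other: submit both at once
    future_c = pool.submit(inc, 1)
    future_d = pool.submit(inc, 12)
    # add depends on both results, so it waits for them
    output = add(future_c.result(), future_d.result())
elapsed = time.time() - start
print(output)  # 15, after roughly 1 second instead of 2
```

Dask's `delayed`, introduced next, achieves the same effect declaratively, and also records the dependency structure so larger graphs can be scheduled automatically.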
# `dask.delayed` decorator
Dask's `delayed` interface consists of a single `delayed` decorator which allows you to build up complex task graphs by lightly annotating normal Python functions. Dask can then execute the task graph (potentially in parallel). The idea is that you can take your existing Python code, apply a few `delayed` decorators, and then have a parallel version of your code.
Let's revisit our `inc` / `add` example from before:
```
%%time
import time
from dask import delayed # Import the delayed decorator
@delayed # Wrap inc with delayed
def inc(i):
time.sleep(1)
return i + 1
@delayed # Wrap add with delayed
def add(a, b):
time.sleep(1)
return a + b
a = 1
b = 12
# Increment a and b
c = inc(a)
d = inc(b)
# Add results together
output = add(c, d)
print(f"{output = }")
```
That happened much faster! But notice that the above cell didn't print the expected result of `15`; instead, it printed a `Delayed` object.
That's because Dask `delayed` works by wrapping function calls and **delaying their execution** (hence the name "delayed"). Instead of returning the result of a function call, `delayed` functions return `Delayed` objects which keep track of what we want to compute by automatically building a task graph for us.
You can see the task graph for a `Delayed` object by calling its `visualize` method:
```
output.visualize()
```
To actually compute the result of a `Delayed` object, call its `compute` method, which tells Dask to execute the task graph (potentially in parallel).
```
%%time
# Compute result
result = output.compute()
print(f"{result = }")
```
Notice that the parallel version of this computation took ~2s while the non-parallel version took ~3s. Why do you think that is?
``Delayed`` objects support several standard Python operations, each of which creates another ``Delayed`` object representing the result:
- Arithmetic operators, e.g. `*`, `-`, `+`
- Item access and slicing, e.g. `x[0]`, `x[1:3]`
- Attribute access, e.g. `x.size`
- Method calls, e.g. `x.index(0)`
Using `delayed` functions, we can easily build up a task graph for the particular computation we want to perform.
```
result = inc(5)
result.visualize()
result.compute()
result = inc(5) * inc(7)
result.visualize()
result.compute()
result = (inc(5) * inc(7)) + 2
result.visualize()
result.compute()
```
# Exercise 1: Parallelize a for-loop
Below we define three functions: `inc`, `double`, and `add`. We use these functions to perform some operations on a list (assigned to the `data` variable). For this exercise, use `delayed` to run these operations in parallel.
```
%%time
import time
def inc(x):
time.sleep(0.5)
return x + 1
def double(x):
time.sleep(0.5)
return 2 * x
def add(x, y):
time.sleep(0.5)
return x + y
data = list(range(10))
output = []
for x in data:
a = inc(x)
b = double(x)
c = add(a, b)
output.append(c)
total = sum(output)
total
# Your solution here
%load solutions/delayed-1.py
```
# Exercise 2: Parallelize a for-loop with conditional flow
This exercise is similar to the previous one, but now instead of always computing `a = inc(x)`, we sometimes increment `x` and sometimes double it, depending on whether `x` is even. For this exercise, we again want to use `delayed` to run these operations in parallel.
```
import time
def inc(x):
time.sleep(0.5)
return x + 1
def double(x):
time.sleep(0.5)
return 2 * x
def add(x, y):
time.sleep(0.5)
return x + y
def is_even(x):
return not x % 2
%%time
data = list(range(10))
output = []
for x in data:
if is_even(x):
a = inc(x)
else:
a = double(x)
b = double(x)
c = add(a, b)
output.append(c)
total = sum(output)
total
# Your solution here
%load solutions/delayed-2.py
```
# Exercise 3: Parallelize Pandas' `read_csv`
For this exercise we'll use CSV files from NYC's flight dataset. You can download the CSV files by running the cell below.
```
# Run this cell to download NYC flight dataset
%run prep.py -d flights
```
We can then use Python's `glob` module to get a list of all the CSV files in the dataset:
```
import glob
files = glob.glob("data/nycflights/*.csv")
files
```
[Pandas' `read_csv` function](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html) can be used to load a single CSV file from our dataset:
```
import pandas as pd
%%time
df = pd.read_csv(files[0])
df
```
The goal of this exercise is to use `dask.delayed` to create a new `read_csv_parallel` function which reads in *all* the files in the dataset in parallel.
```
# Your solution here
%load solutions/delayed-3.py
```
# Additional Resources
- [Delayed documentation](https://docs.dask.org/en/latest/delayed.html)
- [Delayed screencast](https://www.youtube.com/watch?v=SHqFmynRxVU)
- [Delayed API](https://docs.dask.org/en/latest/delayed-api.html)
- [Delayed examples](https://examples.dask.org/delayed.html)
- [Delayed best practices](https://docs.dask.org/en/latest/delayed-best-practices.html)
# Next steps
Next, we'll move onto discussing [Dask DataFrames](2-dataframe.ipynb)
| github_jupyter |
# 1. Introduction
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from prml.preprocess import PolynomialFeature
from prml.linear import (
LinearRegression,
RidgeRegression,
BayesianRegression
)
np.random.seed(1234)
```
## 1.1. Example: Polynomial Curve Fitting
```
def create_toy_data(func, sample_size, std):
x = np.linspace(0, 1, sample_size)
t = func(x) + np.random.normal(scale=std, size=x.shape)
return x, t
def func(x):
return np.sin(2 * np.pi * x)
x_train, y_train = create_toy_data(func, 10, 0.25)
x_test = np.linspace(0, 1, 100)
y_test = func(x_test)
plt.scatter(x_train, y_train, facecolor="none", edgecolor="b", s=50, label="training data")
plt.plot(x_test, y_test, c="g", label="$\sin(2\pi x)$")
plt.legend()
plt.show()
for i, degree in enumerate([0, 1, 3, 9]):
plt.subplot(2, 2, i + 1)
feature = PolynomialFeature(degree)
X_train = feature.transform(x_train)
X_test = feature.transform(x_test)
model = LinearRegression()
model.fit(X_train, y_train)
y = model.predict(X_test)
plt.scatter(x_train, y_train, facecolor="none", edgecolor="b", s=50, label="training data")
plt.plot(x_test, y_test, c="g", label="$\sin(2\pi x)$")
plt.plot(x_test, y, c="r", label="fitting")
plt.ylim(-1.5, 1.5)
plt.annotate("M={}".format(degree), xy=(-0.15, 1))
plt.legend(bbox_to_anchor=(1.05, 0.64), loc=2, borderaxespad=0.)
plt.show()
def rmse(a, b):
return np.sqrt(np.mean(np.square(a - b)))
training_errors = []
test_errors = []
for i in range(10):
feature = PolynomialFeature(i)
X_train = feature.transform(x_train)
X_test = feature.transform(x_test)
model = LinearRegression()
model.fit(X_train, y_train)
y = model.predict(X_test)
training_errors.append(rmse(model.predict(X_train), y_train))
test_errors.append(rmse(model.predict(X_test), y_test + np.random.normal(scale=0.25, size=len(y_test))))
plt.plot(training_errors, 'o-', mfc="none", mec="b", ms=10, c="b", label="Training")
plt.plot(test_errors, 'o-', mfc="none", mec="r", ms=10, c="r", label="Test")
plt.legend()
plt.xlabel("degree")
plt.ylabel("RMSE")
plt.show()
```
#### Regularization
```
feature = PolynomialFeature(9)
X_train = feature.transform(x_train)
X_test = feature.transform(x_test)
model = RidgeRegression(alpha=1e-3)
model.fit(X_train, y_train)
y = model.predict(X_test)
plt.scatter(x_train, y_train, facecolor="none", edgecolor="b", s=50, label="training data")
plt.plot(x_test, y_test, c="g", label="$\sin(2\pi x)$")
plt.plot(x_test, y, c="r", label="fitting")
plt.ylim(-1.5, 1.5)
plt.legend()
plt.annotate("M=9", xy=(-0.15, 1))
plt.show()
```
### 1.2.6 Bayesian curve fitting
```
model = BayesianRegression(alpha=2e-3, beta=2)
model.fit(X_train, y_train)
y, y_err = model.predict(X_test, return_std=True)
plt.scatter(x_train, y_train, facecolor="none", edgecolor="b", s=50, label="training data")
plt.plot(x_test, y_test, c="g", label="$\sin(2\pi x)$")
plt.plot(x_test, y, c="r", label="mean")
plt.fill_between(x_test, y - y_err, y + y_err, color="pink", label="std.", alpha=0.5)
plt.xlim(-0.1, 1.1)
plt.ylim(-1.5, 1.5)
plt.annotate("M=9", xy=(0.8, 1))
plt.legend(bbox_to_anchor=(1.05, 1.), loc=2, borderaxespad=0.)
plt.show()
```
```
%matplotlib inline
```
Compute the scattering transform of a speech recording
======================================================
This script loads a speech signal from the free spoken digit dataset (FSDD)
of a man pronouncing the word "zero," computes its scattering transform, and
displays the zeroth-, first-, and second-order scattering coefficients.
Preliminaries
-------------
To handle audio file I/O, we import `os` and `scipy.io.wavfile`.
```
import numpy as np
import os
import scipy.io.wavfile
```
We import `matplotlib` to plot the calculated scattering coefficients.
```
import matplotlib.pyplot as plt
```
Finally, we import the `Scattering1D` class from the `scattering` package and
the `fetch_fsdd` function from `scattering.datasets`. The `Scattering1D`
class is what lets us calculate the scattering transform, while the
`fetch_fsdd` function downloads the FSDD, if needed.
```
from kymatio.numpy import Scattering1D
from kymatio.datasets import fetch_fsdd
```
Scattering setup
----------------
First, we download the FSDD (if not already downloaded) and read in the
recording `0_jackson_0.wav` of a man pronouncing the word "zero".
```
info_dataset = fetch_fsdd(verbose=True)
file_path = os.path.join(info_dataset['path_dataset'], sorted(info_dataset['files'])[0])
_, x = scipy.io.wavfile.read(file_path)
```
Once the recording is in memory, we normalize it.
```
x = x / np.max(np.abs(x))
```
We are now ready to set up the parameters for the scattering transform.
First, the number of samples, `T`, is given by the size of our input `x`.
The averaging scale is specified as a power of two, `2**J`. Here, we set
`J = 6` to get an averaging, or maximum, scattering scale of `2**6 = 64`
samples. Finally, we set the number of wavelets per octave, `Q`, to `16`.
This lets us resolve frequencies at a resolution of `1/16` octaves.
```
T = x.shape[-1]
J = 6
Q = 16
```
Finally, we are able to create the object which computes our scattering
transform, `scattering`.
```
scattering = Scattering1D(J, T, Q)
```
Compute and display the scattering coefficients
-----------------------------------------------
Computing the scattering transform of a signal is achieved using the
`__call__` method of the `Scattering1D` class. The output is an array of
shape `(C, T)`. Here, `C` is the number of scattering coefficient outputs,
and `T` is the number of samples along the time axis. This is typically much
smaller than the number of input samples since the scattering transform
performs an average in time and subsamples the result to save memory.
```
Sx = scattering(x)
```
To display the scattering coefficients, we must first identify which belong
to each order (zeroth, first, or second). We do this by extracting the `meta`
information from the scattering object and constructing masks for each order.
```
meta = scattering.meta()
order0 = np.where(meta['order'] == 0)
order1 = np.where(meta['order'] == 1)
order2 = np.where(meta['order'] == 2)
```
First, we plot the original signal `x`.
```
plt.figure(figsize=(8, 2))
plt.plot(x)
plt.title('Original signal')
```
We now plot the zeroth-order scattering coefficient, which is simply an
average of the original signal at the scale `2**J`.
```
plt.figure(figsize=(8, 8))
plt.subplot(3, 1, 1)
plt.plot(Sx[order0][0])
plt.title('Zeroth-order scattering')
```
We then plot the first-order coefficients, which are arranged along time
and log-frequency.
```
plt.subplot(3, 1, 2)
plt.imshow(Sx[order1], aspect='auto')
plt.title('First-order scattering')
```
Finally, we plot the second-order scattering coefficients. These are also
organized along time, but have two log-frequency indices: one first-order
frequency and one second-order frequency. Here, both indices are mixed along
the vertical axis.
```
plt.subplot(3, 1, 3)
plt.imshow(Sx[order2], aspect='auto')
plt.title('Second-order scattering')
```
Display the plots!
```
plt.show()
```
# ***Good morning***
Aakansh//Asritha//Jayakrishna//Dinesh//Harsha Vardhan
**8. Even Digits Problem**
Supervin has a unique calculator. This calculator only has a display, a plus button, and a minus button. Currently, the integer N is displayed on the calculator display. Pressing the plus button increases the currently displayed number by 1. Similarly, pressing the minus button decreases the currently displayed number by 1. The calculator does not display any leading zeros. For example, if 100 is displayed on the calculator display, pressing the minus button once will cause the calculator to display 99.

Supervin does not like odd digits, because he thinks they are "odd". Therefore, he wants to display an integer with only even digits in its decimal representation, using only the calculator buttons. Since the calculator is a bit old and the buttons are hard to press, he wants to use a minimal number of button presses. Please help Supervin determine the minimum number of button presses needed to make the calculator display an integer with no odd digits.

**Input**

The first line of the input gives the number of test cases, T. T test cases follow. Each begins with one line containing an integer N: the integer initially displayed on Supervin's calculator.

**Output**

For each test case, output one line containing `Case #x: y`, where x is the test case number (starting from 1) and y is the minimum number of button presses, as described above.

**Sample**

| Input | Output |
| --- | --- |
| 4 | |
| 42 | Case #1: 0 |
| 11 | Case #2: 3 |
| 1 | Case #3: 1 |
| 2018 | Case #4: 2 |

In Sample Case #1, the integer initially displayed on the calculator has no odd digits, so no button presses are needed. In Sample Case #2, pressing the minus button three times will cause the calculator to display 8; there is no way to satisfy the requirements with fewer than three button presses. In Sample Case #3, either pressing the minus button once (causing the calculator to display 0) or pressing the plus button once will cause the calculator to display an integer without an odd digit. In Sample Case #4, pressing the plus button twice will cause the calculator to display 2020; there is no way to satisfy the requirements with fewer than two button presses.
# Solution
```
N = int(input())
for x in range(N):
A = str(input())
numberOfDigits = len(A)
firstOddDigitIndex = -1
for y in range(numberOfDigits):
if int(A[y]) % 2 != 0:
firstOddDigitIndex = y
break
if firstOddDigitIndex == -1:
print("Case #{}: {}".format(int(x+1), 0))
elif firstOddDigitIndex == numberOfDigits-1:
print("Case #{}: {}".format(int(x+1), 1))
else:
# mainNum is the slice of the original number from the first odd digit to the end
mainNum = int(A[firstOddDigitIndex: ])
numberOfDigitsInMain = len(str(mainNum))
# numberWithZeroes is the number used to replace all the digits to the right of the first odd digit with zeroes
numberWithZeroes = 10 ** (numberOfDigitsInMain - 1)
# numberWithEights is the number used to replace all the digits to the right of the first odd digit with eights
numberWithEights = ''.join(['8' for x in range(numberOfDigitsInMain - 1)])
firstOddDigit = int(str(mainNum)[ :1])
UpNum = (firstOddDigit + 1) * numberWithZeroes
DownNum = int(str(firstOddDigit - 1) + numberWithEights)
numberOfKeys = mainNum - DownNum if A[firstOddDigitIndex] == '9' else min(UpNum - mainNum, mainNum - DownNum)
print("Case #{}: {}".format(int(x+1), numberOfKeys))
```
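As a sanity check, a hypothetical brute-force version (our own helper, not part of the contest solution) simply tries increasing press counts in both directions until it reaches a number with only even digits; it agrees with the sample cases above:

```python
# Brute force: for press count k = 0, 1, 2, ..., test N-k and N+k.
# Only practical for small N, but useful for cross-checking the greedy solution.
def min_presses(n):
    k = 0
    while True:
        for candidate in (n - k, n + k):
            if candidate >= 0 and all(d in "02468" for d in str(candidate)):
                return k
        k += 1

# Sample cases from the problem statement
print([min_presses(n) for n in (42, 11, 1, 2018)])  # → [0, 3, 1, 2]
```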
```
import numpy as np
from keras.datasets import mnist
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Activation, Flatten, Input, UpSampling2D
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from joblib import Parallel, delayed
import matplotlib.pyplot as plt
#Seed for reproducibilty
np.random.seed(1338)
%%time
#Loading the training and testing data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
img_rows, img_cols = 28, 28
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
%%time
#Selecting 6000 random examples from the test data
test_rows = np.random.randint(0,X_test.shape[0],6000)
X_test = X_test[test_rows]
#Selecting the 5918 examples where the output is 6
X_six = X_train[y_train == 6]
#Selecting the examples where the output is not 6
X_not_six = X_train[y_train != 6]
#Selecting 6000 random examples from the data where the output is not 6
random_rows = np.random.randint(0, X_not_six.shape[0], 6000)
X_not_six = X_not_six[random_rows]
%%time
#Appending the data with output as 6 and data with output as not six
X_train = np.append(X_six,X_not_six)
#Reshaping the appended data to appropriate form
X_train = X_train.reshape(X_six.shape[0] + X_not_six.shape[0], 1, img_rows, img_cols)
%%time
input_img = Input(shape=(1, 28, 28))
x = Convolution2D(32, 3, 3, activation='relu', border_mode='same')(input_img)
x = Convolution2D(16, 3, 3, activation='relu', border_mode='same')(x)
x = Convolution2D(16, 3, 3, activation='relu', border_mode='same')(x)
encoded = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x)
# at this point the representation is (8, 28, 28), since 'same' padding and no pooling preserve the spatial size
x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(encoded)
x = Convolution2D(16, 3, 3, activation='relu', border_mode='same')(x)
x = Convolution2D(16, 3, 3, activation='relu', border_mode='same')(x)
x = Convolution2D(32, 3, 3, activation='relu',border_mode='same')(x)
decoded = Convolution2D(1, 3, 3, activation='sigmoid', border_mode='same')(x)
%%time
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
%%time
autoencoder.fit(X_train, X_train,
nb_epoch=15,
batch_size=128,
shuffle=True,
validation_data=(X_test, X_test))
%%time
decoded_imgs = autoencoder.predict(X_test)
%%time
# use Matplotlib (don't ask)
import matplotlib.pyplot as plt
n = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(X_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
```
```
import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.keras.callbacks import ModelCheckpoint
# important constants
batch_size = 128
epochs = 20
n_classes = 10
learning_rate = 0.1
width = 28
height = 28
fashion_labels = ["T-shirt/top","Trousers","Pullover","Dress","Coat","Sandal","Shirt","Sneaker","Bag","Ankle boot"]
#indices 0 1 2 3 4 5 6 7 8 9
# load the dataset
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
# normalise the features for better training
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
# flatten the features for use by the training algorithm
x_train = x_train.reshape((60000, width * height))
x_test = x_test.reshape((10000, width * height))
split = 50000
#split feature training set into training and validation sets
(x_train, x_valid) = x_train[:split], x_train[split:]
(y_train, y_valid) = y_train[:split], y_train[split:]
# one-hot encode the labels using TensorFlow.
# then convert back to numpy as we cannot combine numpy
# and tensors as input to keras later
y_train_ohe = tf.one_hot(y_train, depth=n_classes).numpy()
y_valid_ohe = tf.one_hot(y_valid, depth=n_classes).numpy()
y_test_ohe = tf.one_hot(y_test, depth=n_classes).numpy()
#or use tf.keras.utils.to_categorical(y_train,10)
# show difference between original label and one-hot-encoded label
i=5
print(y_train[i]) # 'ordinary' number value of label at index i
print(tf.one_hot(y_train[i], depth=n_classes)) # same value as a 1. in the correct position in a length-10 1D tensor
print(y_train_ohe[i]) # same value as a 1. in the correct position in a length-10 1D numpy array
# print sample fashion images.
# we have to reshape the image held in x_train back to width by height
# as we flattened it for training into width*height
import matplotlib.pyplot as plt
%matplotlib inline
_,image = plt.subplots(1,10,figsize=(8,1))
for i in range(10):
image[i].imshow(np.reshape(x_train[i],(width, height)), cmap="Greys")
print(fashion_labels[y_train[i]],sep='', end='')
# model definition (one canonical Google way)
class LogisticRegression(tf.keras.Model):
def __init__(self, num_classes):
super(LogisticRegression, self).__init__() # call the constructor of the parent class (Model)
self.dense = tf.keras.layers.Dense(num_classes) #create an empty layer called dense with 10 elements.
def call(self, inputs, training=None, mask=None): # required for our forward pass
output = self.dense(inputs) # copy training inputs into our layer
# softmax op does not exist on the gpu, so force execution on the CPU
with tf.device('/cpu:0'):
output = tf.nn.softmax(output) # softmax is near one for maximum value in output
# and near zero for the other values.
return output
# build the model
model = LogisticRegression(n_classes)
# compile the model
# optimiser = tf.train.GradientDescentOptimizer(learning_rate) # not supported in eager execution mode
optimiser = tf.keras.optimizers.Adam()
model.compile(optimizer=optimiser, loss='categorical_crossentropy', metrics=['accuracy'], )
# TF Keras tries to use the entire dataset to determine the shape without this step when using .fit()
# So, use one sample of the provided input dataset size to determine input/output shapes for the model
dummy_x = tf.zeros((1, width * height))
model.call(dummy_x)
checkpointer = ModelCheckpoint(filepath="./model.weights.best.hdf5", verbose=2, save_best_only=True, save_weights_only=True)
# train the model
model.fit(x_train, y_train_ohe, batch_size=batch_size, epochs=epochs,
validation_data=(x_valid, y_valid_ohe), callbacks=[checkpointer], verbose=2)
#load model with the best validation accuracy
model.load_weights("./model.weights.best.hdf5")
# evaluate the model on the test set
scores = model.evaluate(x_test, y_test_ohe, batch_size, verbose=2)
print("Final test loss and accuracy :", scores)
y_predictions = model.predict(x_test)
# example of one predicted versus one true fashion label
index = 42
index_predicted = np.argmax(y_predictions[index]) # largest label probability
index_true = np.argmax(y_test_ohe[index]) # pick out index of element with a 1 in it
print("When prediction is ",index_predicted)
print("ie. predicted label is", fashion_labels[index_predicted])
print("True label is ",fashion_labels[index_true])
print ("\n\nPredicted V (True) fashion labels, green is correct, red is wrong")
size = 12 # i.e. 12 random numbers chosen out of x_test.shape[0] = 10000, without replacement
fig = plt.figure(figsize=(15,3))
rows = 3
cols = 4
for i, index in enumerate(np.random.choice(x_test.shape[0], size = size, replace = False)):
axis = fig.add_subplot(rows,cols,i+1, xticks=[], yticks=[]) # position i+1 in grid with rows rows and cols columns
axis.imshow(x_test[index].reshape(width,height), cmap="Greys")
index_predicted = np.argmax(y_predictions[index])
index_true = np.argmax(y_test_ohe[index])
axis.set_title(("{} ({})").format(fashion_labels[index_predicted],fashion_labels[index_true]),
color=("green" if index_predicted==index_true else "red"))
```
```
import numpy as np
import pandas as pd
```
<img src="logo.png" alt="Girl in a jacket" width="500" height="600">
## What is object oriented programming?
There are two concepts:
- Procedural programming
    - code as a sequence of steps.
    - great for data analysis and scripts.
- Object oriented programming
    - code as interactions of objects.
    - great for building frameworks & tools.
    - maintainable & reusable code!
- *object* = state + behaviour
- *class* = blueprint for an object outlining possible states & behaviours.
#### objects in python
```
type(5)
type('Qasim')
type(pd.DataFrame())
type(np.mean)
a = np.array([1,2,3,4])
type(a)
```
#### Attributes & methods
- use `obj.` to access attributes & methods
```
a = np.array([1,2,3,4,5])
#shape attribute
a.shape
```
```
a = np.array([1,2,3,4,5,6])
# reshape method
a.reshape(3,2)
```
#### object = attributes + methods
- attributes <---> variables <---> `obj.my_attribute`
- methods <---> functions <---> `obj.my_method()`
```
a = np.array([1,2,3,4,5,6])
# possible number of attributes & methods
dir(a)
```
#### Class Anatomy: attributes & methods
##### a basic class
```
class customer:
# code for class goes here
pass
c1 = customer() #c1 as obj1
c2 = customer() # c2 as obj2
```
##### adding method to a class
```
class customer:
def identify(self, name):
print('I am customer '+ name)
cust = customer()
cust.identify('Qasim') # calling method of a class
```
Python will take care of **self** when the method is called from an object:
**cust.identify("Laura")** will be interpreted as **customer.identify(cust, "Laura")**.
##### adding an attribute to class
```
class customer:
#set the name attribute of an object to new_name
def set_name(self, new_name):
        #creating an attribute by assigning a value
        self.name = new_name # <-- will create .name when set_name is called
cust = customer() # <-- .name doesn't exist here yet
cust.set_name("Qasim Hassan") # <-- .name is created and set to "Qasim Hassan"
print(cust.name) # < -- .name is used here
```
#### Comparison between Python 2 & Python 3
##### older version
```
class customer:
# using a parameter
def identify(self,name):
print('I am customer '+name)
cust = customer()
cust.identify('Qasim Hassan')
```
#### New Version
```
class customer:
def set_name(self, new_name):
self.name = new_name
def identify(self):
print(f'I am customer {self.name}')
c1 = customer()
c1.set_name('Qasim Hassan')
c1.identify()
```
#### Class Anatomy: the \__init__ constructor
- add data to object when creating it
- the **constructor** \__init__ method is called every time an object is created.
###### example1
```
class customer:
def __init__(self,new_name):
        self.name = new_name # <-- creating the .name attribute
        print("The __init__ method was called..!!!")
cust = customer('Qasim Hassan') #<-- __init__ is implicitly called
print(cust.name)
```
###### example2
```
class Customer:
    def __init__(self, new_name, new_balance): #<-- added balance attribute
        self.name = new_name
        self.balance = new_balance # <-- balance attribute
        print('The __init__ method is called implicitly..!')
cust = Customer('Qasim Hassan', 500) #<-- __init__ is called
print( cust.name, cust.balance, sep= '\n')
```
###### example3
```
class Customer:
def __init__(self,name,balance=0): # <-- default value
self.name = name
self.balance = balance
        print('The __init__ method is called implicitly')
cust = Customer('Muneeb ul Hassan') #<-- don't specify balance explicitly
print(cust.name, cust.balance, sep = '\n')
```
##### Comparison between assigning attributes in methods & in the constructor
###### Attribute in methods
```
class Vehicle:
def set_name(self,name):
self.name = name
def set_model(self, model):
self.model = model
car1 = Vehicle()
car1.set_name('Civic')
print(car1.name)
car1.set_model(2012)
print(car1.model)
car2 = Vehicle()
car2.set_name('Toyota')
print(car2.name)
car2.set_model(2021)
print(car2.model)
```
###### Attributes in the constructor
```
class Universities:
def __init__(self, name, city, location='Pakistan'):
self.name = name
self.city = city
self.location = location
bahria = Universities('BUKC', 'Karachi')
print(bahria.name, bahria.city, bahria.location, sep= '\n')
```
Advantages of the constructor:
- easier to know all the attributes
- attributes are created when the object is created
- more usable & maintainable code
## Best Practices
1- Initialize attributes in **__init__()** <br>
2- Naming: **CamelCase** for classes, **lower_snake_case** for functions & attributes <br>
3- **self** is **self** <br>
4- Use **docstrings**<br>
```
class MyClass:
"""This is the docsting"""
pass
```
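Putting these practices together, a small illustrative class (the `BankAccount` name and its methods are our own example, not from the lesson above): attributes initialized in `__init__`, a CamelCase class name, snake_case methods, and docstrings.

```python
class BankAccount:
    """A toy account class following the best practices above."""

    def __init__(self, owner, balance=0):
        # All attributes are created in the constructor, with a default value.
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        """Add amount to the balance and return the new balance."""
        self.balance += amount
        return self.balance

acct = BankAccount("Qasim Hassan", 500)
print(acct.deposit(100))  # → 600
```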
# Prediction models for Project1
This notebook explores the following models:
* MeanModel - Predicts mean value for all future values
* LastDayModel - Predicts the same values like last day (given as futures)
Table of contents:
* Load model and create training and test datasets
* Evaluate Mean model
* Evaluate LastDayModel
* Explore error
```
import datetime
import calendar
import pprint
import json
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import rcParams
rcParams['figure.figsize'] = 12, 4
```
# Load project
```
project_folder = '../../datasets/radon-small/'
with open(project_folder + 'project.json', 'r') as file:
project = json.load(file)
pprint.pprint(project)
print('Flow1')
flow = pd.read_csv(project_folder + 'flow1.csv', parse_dates=['time'])
flow = flow[(flow.time >= project['start-date']) & (flow.time < project['end-date'])]
print(flow.info())
flow.head()
```
## Create train and test dataset
Dataset consists of the following features:
* Vector of last 24h data
and target value:
* Vector of next 24 predictions
```
flow['day'] = flow.time.map(pd.Timestamp.date)
flow['hour'] = flow.time.map(pd.Timestamp.time)
target = flow.pivot(index='day', columns='hour', values='flow')
target = target.fillna(0)
features = target.shift()
# Skip first days as they are 0.0 anyway
target = target[datetime.date(2013, 9, 13):]
features = features[datetime.date(2013, 9, 13):]
# Now lets create train and test dataset on given split day
split_day = datetime.date(2016, 11, 11)
X_train = features[:split_day].values
Y_train = target[:split_day].values
X_test = features[split_day:].values
Y_test = target[split_day:].values
X_test.shape
```
## Helper functions
Helper functions for building training and test sets and calculating score
```
class PredictionModel:
def fit(self, X, Y):
pass
def predict(self, X):
pass
def mae(y_hat, y):
"""
    Calculate Mean Absolute Error for each day (row)
    """
    return np.sum(np.absolute(y_hat - y), axis=1) / y.shape[1]
def evaluate_model(model):
"""
Evaluate model on all days starting from split_day.
Returns 90th percentile error as model score
"""
model.fit(X_train, Y_train)
costs = mae(model.predict(X_test), Y_test)
return np.percentile(costs, 90), costs
```
# Models
## ConstantMeanModel
Calculate mean from all datapoint in the training set.
Ignore features and predict constant value (equal to this mean) for all predicted values
```
class ConstantMeanModel(PredictionModel):
def __init__(self):
self.mu = 0
def fit(self, X, y):
self.mu = np.mean(y)
def predict(self, X):
return np.ones(X.shape) * self.mu
score, costs = evaluate_model(ConstantMeanModel())
print('ConstantMeanModel score: {:.2f}'.format(score))
```
## Last day model
Here our model will predict the same values as in the previous day.
```
class LastDayModel(PredictionModel):
def fit(self, X, y):
pass
def predict(self, X):
return X.copy()
score, costs = evaluate_model(LastDayModel())
print('LastDayModel score: {:.2f}'.format(score))
```
### Explore errors
Check the biggest errors for Last day model:
```
df = pd.Series(costs, target[split_day:].index)
df.plot()
plt.show()
df = pd.DataFrame({'day': target[split_day:].index, 'cost': costs})
df['weekday'] = df['day'].apply(lambda x: calendar.day_name[x.weekday()])
df_sorted = df.sort_values(by=['cost'], ascending=False)
df_sorted.head(10)
```
#### Explore daily flow
The biggest error is on 2017-06-23 (Friday), and on the next day.
```
def plot_days(start_day, end_day, show_prediction=True):
df = flow[(flow.time >= start_day) & (flow.time <= end_day)].set_index('time').flow
plt.plot(df)
# df.plot()
if show_prediction:
plt.plot(df.shift(288))
plt.show()
plot_days('2017-06-20', '2017-06-25')
```
This can probably be attributed to an anomaly in the data readings, or to some problem in the network (a kind of congestion).
Let's check the week of 2017-05-01 (Monday):
```
plot_days('2017-04-28', '2017-05-8')
```
Given the flow on those days, it is not surprising that the model does not work well here.
Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks.
- Author: Sebastian Raschka
- GitHub Repository: https://github.com/rasbt/deeplearning-models
```
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
```
# Gradient Checkpointing Demo (Network-in-Network trained on CIFAR-10)
Why do we care about gradient checkpointing? It can lower the memory requirement of deep neural networks quite substantially, allowing us to work with larger architectures within the memory limitations of conventional GPUs. There is no free lunch, though: as a trade-off for the lower memory requirements, additional computations are carried out, which can prolong training time. Still, when GPU memory is a limiting factor that we cannot circumvent even by lowering the batch size, gradient checkpointing is a great and easy option for making things work!
Below is a brief summary of how gradient checkpointing works. For more details, please see the excellent explanations in [1] and [2].
## Vanilla Backpropagation
In vanilla backpropagation (the standard version of backpropagation), the required memory grows linearly with the number of layers *n* in the neural network. This is because all nodes from the forward pass are being kept in memory (until all their dependent child nodes are processed).

## Low-memory Backpropagation
In the low-memory version of backpropagation, the forward pass is recomputed at each step, making it more memory-efficient than vanilla backpropagation by trading memory for additional computation. Where vanilla backpropagation processes *n* nodes, the low-memory version processes on the order of $n^2$ nodes.

## Gradient Checkpointing
The gradient checkpointing method is a compromise between vanilla backpropagation and low-memory backpropagation, where nodes are recomputed more often than in vanilla backpropagation but not as often as in the low-memory version. In gradient checkpointing, we designate certain nodes as checkpoints so that they are not recomputed and can serve as a basis for recomputing other nodes. The optimal choice is to designate every $\sqrt{n}$-th node as a checkpoint node. Consequently, the memory requirement increases only by a factor of $\sqrt{n}$ compared to the low-memory version of backpropagation.
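To make the trade-off concrete, here is a hypothetical back-of-the-envelope cost model (the counting is simplified and the `backprop_costs` helper is our own, not from the references): keeping a checkpoint every `stride` layers stores roughly `n/stride + stride` activations at peak and recomputes each non-checkpoint layer once during the backward pass.

```python
import math

def backprop_costs(n, stride):
    """Toy cost model for backprop over a chain of n layers,
    keeping a checkpoint every `stride` layers."""
    checkpoints = math.ceil(n / stride)
    # Peak memory: all checkpoints, plus one fully materialized segment.
    peak_memory = checkpoints + stride
    # Compute: one full forward pass, plus one recomputation of every
    # non-checkpoint node during the backward pass.
    forward_evals = n + (n - checkpoints)
    return peak_memory, forward_evals

n = 100
print(backprop_costs(n, 1))                  # vanilla-like: → (101, 100)
print(backprop_costs(n, int(math.sqrt(n))))  # sqrt(n) stride: → (20, 190)
```

With `stride = 1` we recover vanilla backpropagation (memory linear in n, no recomputation); with `stride = sqrt(n)` peak memory drops to about `2*sqrt(n)` at the cost of roughly one extra forward pass.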
As stated in [3], with gradient checkpointing we can implement models that are 4x to 10x larger than architectures that would usually fit into GPU memory.
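The asymptotics above can be made concrete with a back-of-the-envelope count. The following sketch is illustrative only; the exact constants are my own simplification, not taken from [1] or [2]:

```python
import math

def backprop_costs(n):
    """Rough node counts for a chain of n layers (illustrative only).

    memory  = number of activations kept simultaneously
    compute = number of forward-node evaluations, including recomputation
    """
    k = int(math.sqrt(n))  # checkpoint spacing ~ sqrt(n)
    return {
        "vanilla":    {"memory": n, "compute": n},
        # recompute the prefix at every step: 1 + 2 + ... + n nodes
        "low_memory": {"memory": 1, "compute": n * (n + 1) // 2},
        # sqrt(n) checkpoints plus one segment of ~sqrt(n) recomputed nodes,
        # at the cost of roughly one extra forward pass
        "checkpoint": {"memory": k + n // k, "compute": 2 * n},
    }

costs = backprop_costs(100)
print(costs["vanilla"])     # memory grows linearly with n
print(costs["checkpoint"])  # memory ~ 2*sqrt(n), compute ~ 2n
```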

## Gradient Checkpointing in PyTorch
PyTorch allows us to use gradient checkpointing very conveniently. In this notebook, we are only using the checkpointing for sequential models. However, it is also possible (and not much more complicated) to use checkpointing for non-sequential models. I recommend checking out the tutorial in [3] for more details.
A great performance benchmark and write-up is available at [4], showing the difference in memory consumption between a baseline ResNet-18 and one enhanced with gradient checkpointing.
## References
[1] Saving memory using gradient-checkpointing: https://github.com/cybertronai/gradient-checkpointing
[2] Fitting larger networks into memory: https://medium.com/tensorflow/fitting-larger-networks-into-memory-583e3c758ff9
[3] Trading compute for memory in PyTorch models using Checkpointing: https://github.com/prigoyal/pytorch_memonger/blob/master/tutorial/Checkpointing_for_PyTorch_models.ipynb
[4] Deep Learning Memory Usage and Pytorch Optimization Tricks: https://www.sicara.ai/blog/2019-28-10-deep-learning-memory-usage-and-pytorch-optimization-tricks
### Network Architecture
For this demo, I am using a simple Network-in-Network (NiN) architecture for the sake of code readability. The gain from gradient checkpointing is larger the deeper the architecture.
The CNN architecture is based on
- Lin, Min, Qiang Chen, and Shuicheng Yan. "Network in network." arXiv preprint arXiv:1312.4400 (2013).
# Part 1: Setup and Baseline (No Gradient Checkpointing)
## Imports
```
import os
import time
import random
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torch.utils.data.dataset import Subset
from torchvision import datasets
from torchvision import transforms
import matplotlib.pyplot as plt
from PIL import Image
if torch.cuda.is_available():
    torch.backends.cudnn.deterministic = True
```
## Model Settings
#### Setting a random seed
If you want the data to be shuffled in the same manner and the model to get the same initial random weights when you rerun this notebook, I recommend calling a function like the following one before instantiating the dataset loaders and the model:
```
def set_all_seeds(seed):
    os.environ["PL_GLOBAL_SEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
```
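The effect of seeding is easy to see with the standard-library `random` module alone: re-seeding with the same value reproduces exactly the same draws.

```python
import random

random.seed(123)
first_draw = random.random()

random.seed(123)  # re-seed with the same value
second_draw = random.random()

assert first_draw == second_draw  # identical sequence after identical seed
```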
#### Setting cuDNN and PyTorch algorithmic behavior to deterministic
Similar to the `set_all_seeds` function above, I recommend setting the behavior of PyTorch and cuDNN to deterministic (this is particularly relevant when using GPUs). We can also define a function for that:
```
def set_deterministic():
    if torch.cuda.is_available():
        torch.backends.cudnn.benchmark = False
        torch.backends.cudnn.deterministic = True
    torch.set_deterministic(True)
##########################
### SETTINGS
##########################
# Device
CUDA_DEVICE_NUM = 2 # change as appropriate
DEVICE = torch.device('cuda:%d' % CUDA_DEVICE_NUM if torch.cuda.is_available() else 'cpu')
print('Device:', DEVICE)
# Hyperparameters
RANDOM_SEED = 1
LEARNING_RATE = 0.0001
BATCH_SIZE = 256
NUM_EPOCHS = 40
# Architecture
NUM_CLASSES = 10
set_all_seeds(RANDOM_SEED)
# Deterministic behavior not yet supported by AdaptiveAvgPool2d
#set_deterministic()
```
#### Import utility functions
```
import sys
sys.path.insert(0, "..") # to include ../helper_evaluate.py etc.
from helper_evaluate import compute_accuracy
from helper_data import get_dataloaders_cifar10
from helper_train import train_classifier_simple_v1
```
## Dataset
```
### Set random seed ###
set_all_seeds(RANDOM_SEED)
##########################
### Dataset
##########################
train_loader, valid_loader, test_loader = get_dataloaders_cifar10(
    batch_size=BATCH_SIZE,
    num_workers=2,
    validation_fraction=0.1)
# Checking the dataset
print('Training Set:\n')
for images, labels in train_loader:
    print('Image batch dimensions:', images.size())
    print('Image label dimensions:', labels.size())
    print(labels[:10])
    break
# Checking the dataset
print('\nValidation Set:')
for images, labels in valid_loader:
    print('Image batch dimensions:', images.size())
    print('Image label dimensions:', labels.size())
    print(labels[:10])
    break
# Checking the dataset
print('\nTesting Set:')
for images, labels in test_loader:
    print('Image batch dimensions:', images.size())
    print('Image label dimensions:', labels.size())
    print(labels[:10])
    break
```
## Model
This is the basic NiN model without gradient checkpointing for reference.
```
##########################
### MODEL
##########################
class NiN(nn.Module):
    def __init__(self, num_classes):
        super(NiN, self).__init__()
        self.num_classes = num_classes
        self.classifier = nn.Sequential(
            nn.Conv2d(3, 192, kernel_size=5, stride=1, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(192, 160, kernel_size=1, stride=1, padding=0),
            nn.ReLU(inplace=True),
            nn.Conv2d(160, 96, kernel_size=1, stride=1, padding=0),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
            nn.Dropout(0.5),
            nn.Conv2d(96, 192, kernel_size=5, stride=1, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(192, 192, kernel_size=1, stride=1, padding=0),
            nn.ReLU(inplace=True),
            nn.Conv2d(192, 192, kernel_size=1, stride=1, padding=0),
            nn.ReLU(inplace=True),
            nn.AvgPool2d(kernel_size=3, stride=2, padding=1),
            nn.Dropout(0.5),
            nn.Conv2d(192, 192, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(192, 192, kernel_size=1, stride=1, padding=0),
            nn.ReLU(inplace=True),
            nn.Conv2d(192, 10, kernel_size=1, stride=1, padding=0),
            nn.ReLU(inplace=True),
            nn.AvgPool2d(kernel_size=8, stride=1, padding=0),
        )

    def forward(self, x):
        x = self.classifier(x)
        logits = x.view(x.size(0), self.num_classes)
        #probas = torch.softmax(logits, dim=1)
        return logits
set_all_seeds(RANDOM_SEED)
model = NiN(NUM_CLASSES)
model.to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
```
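Why the final `view` works: with 32×32 CIFAR-10 inputs, every convolution above preserves the spatial size ("same"-style padding), and the three pooling layers shrink it 32 → 16 → 8 → 1, so the classifier ends in a 10×1×1 map per image. A quick sanity check of that arithmetic using the standard pooling output formula:

```python
def pool_out(size, kernel, stride, padding):
    # standard formula: floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

size = 32                       # CIFAR-10 spatial resolution
size = pool_out(size, 3, 2, 1)  # MaxPool2d -> 16
size = pool_out(size, 3, 2, 1)  # AvgPool2d -> 8
size = pool_out(size, 8, 1, 0)  # AvgPool2d -> 1
print(size)  # 1
```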
## Training
```
import tracemalloc
tracemalloc.start()
log_dict = train_classifier_simple_v1(num_epochs=2, model=model,
optimizer=optimizer, device=DEVICE,
train_loader=train_loader, valid_loader=valid_loader,
logging_interval=50)
current, peak = tracemalloc.get_traced_memory()
print(f"{current}, {peak}")
tracemalloc.stop()
### Delete model and free memory
model.cpu()
del model
```
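A note on the measurement: `tracemalloc` traces allocations made through Python's memory allocator and reports a `(current, peak)` tuple in bytes, so the numbers above reflect host-side Python allocations rather than GPU memory (for the latter, PyTorch offers `torch.cuda.max_memory_allocated`). A minimal illustration of the `(current, peak)` semantics:

```python
import tracemalloc

tracemalloc.start()
big = [0] * 1_000_000  # allocate roughly 8 MB of pointer slots
del big                # free it again
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

# after the deletion, current is far below the recorded peak
print(current, peak)
```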
# Part 2: Modified NiN with Gradient Checkpointing
The changes we have to make to the NiN code are highlighted below. Note that this example uses only 1 segment in `checkpoint_sequential`. Generally, a lower number of segments improves memory efficiency but makes computational performance worse, since more values need to be recomputed. For this architecture, though, I found that `segments=1` represents a good trade-off.
```
##########################
### MODEL
##########################
###### NEW ####################################################
from torch.utils.checkpoint import checkpoint_sequential
###############################################################
class NiN(nn.Module):
    def __init__(self, num_classes):
        super(NiN, self).__init__()
        self.num_classes = num_classes
        self.classifier = nn.Sequential(
            nn.Conv2d(3, 192, kernel_size=5, stride=1, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(192, 160, kernel_size=1, stride=1, padding=0),
            nn.ReLU(inplace=True),
            nn.Conv2d(160, 96, kernel_size=1, stride=1, padding=0),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
            nn.Dropout(0.5),
            nn.Conv2d(96, 192, kernel_size=5, stride=1, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(192, 192, kernel_size=1, stride=1, padding=0),
            nn.ReLU(inplace=True),
            nn.Conv2d(192, 192, kernel_size=1, stride=1, padding=0),
            nn.ReLU(inplace=True),
            nn.AvgPool2d(kernel_size=3, stride=2, padding=1),
            nn.Dropout(0.5),
            nn.Conv2d(192, 192, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(192, 192, kernel_size=1, stride=1, padding=0),
            nn.ReLU(inplace=True),
            nn.Conv2d(192, 10, kernel_size=1, stride=1, padding=0),
            nn.ReLU(inplace=True),
            nn.AvgPool2d(kernel_size=8, stride=1, padding=0),
        )
        ###### NEW ####################################################
        self.classifier_modules = [module for k, module in self.classifier._modules.items()]
        ###############################################################

    def forward(self, x):
        ###### NEW ####################################################
        x.requires_grad = True
        x = checkpoint_sequential(functions=self.classifier_modules,
                                  segments=1,
                                  input=x)
        ###############################################################
        x = x.view(x.size(0), self.num_classes)
        #probas = torch.softmax(x, dim=1)
        return x
set_all_seeds(RANDOM_SEED)
model = NiN(NUM_CLASSES)
model.to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
tracemalloc.start()
log_dict = train_classifier_simple_v1(num_epochs=2, model=model,
optimizer=optimizer, device=DEVICE,
train_loader=train_loader, valid_loader=valid_loader,
logging_interval=50)
current, peak = tracemalloc.get_traced_memory()
print(f"{current}, {peak}")
tracemalloc.stop()
```
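The segments trade-off mentioned above can be sketched with simple counting: splitting an n-layer sequential model into k segments stores roughly one activation boundary per segment plus the activations of the one segment currently being recomputed. This is a rough model under simplifying assumptions, not PyTorch's exact accounting:

```python
def segment_costs(n_layers, segments):
    """Rough peak-activation count and recompute cost for checkpoint_sequential."""
    seg_len = -(-n_layers // segments)  # ceil division: layers per segment
    peak_memory = segments + seg_len    # segment boundaries + one recomputed segment
    extra_compute = n_layers            # roughly one additional forward pass overall
    return peak_memory, extra_compute

for k in (1, 2, 5, 25):
    print(k, segment_costs(25, k))
```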
# Conclusion
We can see that gradient checkpointing improves peak memory efficiency by approximately 22%, while the computational performance (runtime) becomes only about 14% worse.
# "A Basic Neural Network: Differentiate Hand-Written Digits"
- badges: true
- author: Akshith Sriram
### Key Objectives:
- Building a neural network that differentiates between two hand-written digits, 3 and 8.
- Comparing the results of this Neural Network (NN) to that of a Logistic Regression (LR) model.
### Requirements:
- 'Kudzu': A neural network library that was designed during our course by [Univ.AI](https://www.univ.ai).
- MNIST Database
If the `mnist` package is not installed, use the command `!pip install mnist` given below.
It can be run both from the command line (without the leading `!`) and from a Jupyter notebook.
```
!pip install mnist
```
#### Importing necessary libraries
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
```
### Preparing the Data
```
import mnist
train_images = mnist.train_images()
train_labels = mnist.train_labels()
train_images.shape, train_labels.shape
test_images = mnist.test_images()
test_labels = mnist.test_labels()
test_images.shape, test_labels.shape
image_index = 7776 # You may select anything up to 60,000
print(train_labels[image_index])
plt.imshow(train_images[image_index], cmap='Greys')
```
## Filter data to get 3 and 8 out
```
train_filter = np.where((train_labels == 3 ) | (train_labels == 8))
test_filter = np.where((test_labels == 3) | (test_labels == 8))
X_train, y_train = train_images[train_filter], train_labels[train_filter]
X_test, y_test = test_images[test_filter], test_labels[test_filter]
```
We normalize the pixel values to the 0 to 1 range
```
X_train = X_train/255.
X_test = X_test/255.
```
Set up the labels as 1 (when the digit is 3) and 0 (when the digit is 8)
```
y_train = 1*(y_train==3)
y_test = 1*(y_test==3)
X_train.shape, X_test.shape
```
### Reshape the input data to create a linear array
```
X_train = X_train.reshape(X_train.shape[0], -1)
X_test = X_test.reshape(X_test.shape[0], -1)
X_train.shape, X_test.shape
```
### Importing appropriate functions from 'Kudzu'
```
from kudzu.layer import Affine, Relu, Sigmoid
from kudzu.model import Model
from kudzu.train import Learner
from kudzu.optim import GD
from kudzu.data import Data, Dataloader, Sampler
from kudzu.callbacks import AccCallback
from kudzu.callbacks import ClfCallback
from kudzu.loss import MSE
```
### Let us create a `Config` class to store important parameters.
This class essentially plays the role of a dictionary.
```
class Config:
    pass
config = Config()
config.lr = 0.001
config.num_epochs = 251
config.bs = 50
```
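Since `Config` just holds attributes, its contents can be inspected as a plain dictionary via the built-in `vars()` — a quick sketch:

```python
class Config:
    pass

config = Config()
config.lr = 0.001
config.num_epochs = 251
config.bs = 50

# vars() exposes the instance attributes as a dict
print(vars(config))
```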
### Initializing data to the variables
```
data = Data(X_train, y_train.reshape(-1,1))
sampler = Sampler(data, config.bs, shuffle=True)
dl = Dataloader(data, sampler)
opt = GD(config.lr)
loss = MSE()
training_xdata = X_train
testing_xdata = X_test
training_ydata = y_train.reshape(-1,1)
testing_ydata = y_test.reshape(-1,1)
```
### Running Models with the Training data
Details about the network layers:
- The first affine layer has 784 inputs and computes 100 affine transforms, followed by a ReLU.
- The second affine layer takes the 100 activations of the previous layer and computes 100 affine transforms, followed by a ReLU.
- The third affine layer takes those 100 activations and computes 2 affine transforms to create an embedding for visualization. There is no non-linearity here.
- A final "logistic regression" layer applies an affine transform from 2 inputs to 1 output, which is squeezed through a sigmoid.
Help taken from Anshuman's Notebook.
```
# layers for the Neural Network
layers = [Affine("first", 784, 100), Relu("first"), Affine("second", 100, 100), Relu("second"), Affine("third", 100, 2), Affine("final", 2, 1), Sigmoid("final")]
model_nn = Model(layers)
# layers for the Logistic Regression
layers_lr = [Affine("logits", 784, 1), Sigmoid("sigmoid")]
model_lr = Model(layers_lr)
# suffix _nn stands for Neural Network.
learner_nn = Learner(loss, model_nn, opt, config.num_epochs)
acc_nn = ClfCallback(learner_nn, config.bs, training_xdata , testing_xdata, training_ydata, testing_ydata)
learner_nn.set_callbacks([acc_nn])
print("====== Neural Network ======")
learner_nn.train_loop(dl)
```
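With the layer sizes listed above, the parameter count of each model is simple arithmetic (weights plus biases per affine layer); the following sketch tallies them:

```python
def affine_params(n_in, n_out):
    # weight matrix plus bias vector
    return n_in * n_out + n_out

nn_params = (affine_params(784, 100)    # first
             + affine_params(100, 100)  # second
             + affine_params(100, 2)    # third (2-D embedding)
             + affine_params(2, 1))     # final "logistic regression"
lr_params = affine_params(784, 1)

print(nn_params, lr_params)  # 88805 785
```

So the neural network has over a hundred times as many parameters as the logistic regression, which is consistent with its tendency to overfit discussed below.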
### Logistic Regression based Implementation.
```
learner_lr = Learner(loss, model_lr, opt, config.num_epochs)
acc_lr = ClfCallback(learner_lr, config.bs, training_xdata , testing_xdata, training_ydata, testing_ydata)
learner_lr.set_callbacks([acc_lr])
print("====== Logistic Regression ======")
learner_lr.train_loop(dl)
```
### Comparing results of NN and LR
```
plt.figure(figsize=(15,10))
# Neural Network plots
plt.plot(acc_nn.accuracies, 'r-', label = "Training Accuracies - NN")
plt.plot(acc_nn.test_accuracies, 'g-', label = "Testing Accuracies - NN")
# Logistic Regression plots
plt.plot(acc_lr.accuracies, 'k-', label = "Training Accuracies - LR")
plt.plot(acc_lr.test_accuracies, 'b-', label = "Testing Accuracies - LR")
plt.ylim(0.8, 1)
plt.legend()
```
### From the plot, we can observe the following:
- The neural network achieves higher accuracy than the logistic regression model.
- This is apparently due to overfitting, i.e. the NN captures noise in addition to the signal.
- The testing accuracy of the NN drops below its training accuracy at higher epochs, which points to over-fitting on the training data.
- Logistic regression gives a reliable accuracy without the above-mentioned problem.
### Running the network up to the 2-D embedding layer (excluding the final affine and sigmoid).
#### Plotting the outputs of this layer of the NN.
```
model_new = Model(layers[:-2])
plot_testing = model_new(testing_xdata)
plt.figure(figsize=(8,7))
plt.scatter(plot_testing[:,0], plot_testing[:,1], alpha = 0.1, c = y_test.ravel());
plt.title('Outputs')
```
### Plotting probability contours
```
model_prob = Model(layers[-2:])
# Adjust the x and y ranges according to the above generated plot.
x_range = np.linspace(-4, 1, 100)
y_range = np.linspace(-6, 6, 100)
x_grid, y_grid = np.meshgrid(x_range, y_range)  # x_grid and y_grid are of size 100 x 100
# flattening x_grid and y_grid into 1-D arrays
x_grid_flat = np.ravel(x_grid)
y_grid_flat = np.ravel(y_grid)
# The last layer of the current model takes two columns as input. Hence transpose of np.vstack() is required.
X = np.vstack((x_grid_flat, y_grid_flat)).T
# x_grid and y_grid are of size 100 x 100
probability_contour = model_prob(X).reshape(100,100)
plt.figure(figsize=(10,9))
plt.scatter(plot_testing[:,0], plot_testing[:,1], alpha = 0.1, c = y_test.ravel())
contours = plt.contour(x_grid,y_grid,probability_contour)
plt.title('Probability Contours')
plt.clabel(contours, inline = True );
```
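The meshgrid/ravel/vstack pattern used above is equivalent to enumerating all (x, y) pairs of the grid. A standard-library sketch of the row ordering it produces (with NumPy's default 'xy' indexing, x varies fastest):

```python
from itertools import product

x_range = [0.0, 1.0]
y_range = [10.0, 20.0]

# same row order as np.vstack((np.ravel(x_grid), np.ravel(y_grid))).T
pairs = [(x, y) for y, x in product(y_range, x_range)]
print(pairs)  # [(0.0, 10.0), (1.0, 10.0), (0.0, 20.0), (1.0, 20.0)]
```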
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Raw-data-stats" data-toc-modified-id="Raw-data-stats-1"><span class="toc-item-num">1 </span>Raw data stats</a></span></li><li><span><a href="#Read-in-data" data-toc-modified-id="Read-in-data-2"><span class="toc-item-num">2 </span>Read in data</a></span><ul class="toc-item"><li><span><a href="#Produce-latex-table" data-toc-modified-id="Produce-latex-table-2.1"><span class="toc-item-num">2.1 </span>Produce latex table</a></span></li><li><span><a href="#Add-region" data-toc-modified-id="Add-region-2.2"><span class="toc-item-num">2.2 </span>Add region</a></span></li></ul></li><li><span><a href="#Calculate-number-of-empty-tiles" data-toc-modified-id="Calculate-number-of-empty-tiles-3"><span class="toc-item-num">3 </span>Calculate number of empty tiles</a></span><ul class="toc-item"><li><span><a href="#Create-sample-to-check-what's-empty" data-toc-modified-id="Create-sample-to-check-what's-empty-3.1"><span class="toc-item-num">3.1 </span>Create sample to check what's empty</a></span></li></ul></li><li><span><a href="#highest-number-of-markings-per-tile" data-toc-modified-id="highest-number-of-markings-per-tile-4"><span class="toc-item-num">4 </span>highest number of markings per tile</a></span></li><li><span><a href="#Convert-distance-to-meters" data-toc-modified-id="Convert-distance-to-meters-5"><span class="toc-item-num">5 </span>Convert distance to meters</a></span><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#Reduction-of-number-of-fan-markings-to-finals" data-toc-modified-id="Reduction-of-number-of-fan-markings-to-finals-5.0.1"><span class="toc-item-num">5.0.1 </span>Reduction of number of fan markings to finals</a></span></li></ul></li></ul></li><li><span><a href="#Length-stats" data-toc-modified-id="Length-stats-6"><span class="toc-item-num">6 </span>Length stats</a></span><ul class="toc-item"><li><span><a href="#Blotch-sizes" data-toc-modified-id="Blotch-sizes-6.1"><span 
class="toc-item-num">6.1 </span>Blotch sizes</a></span></li><li><span><a href="#Longest-fans" data-toc-modified-id="Longest-fans-6.2"><span class="toc-item-num">6.2 </span>Longest fans</a></span></li></ul></li><li><span><a href="#North-azimuths" data-toc-modified-id="North-azimuths-7"><span class="toc-item-num">7 </span>North azimuths</a></span></li><li><span><a href="#User-stats" data-toc-modified-id="User-stats-8"><span class="toc-item-num">8 </span>User stats</a></span></li><li><span><a href="#pipeline-output-examples" data-toc-modified-id="pipeline-output-examples-9"><span class="toc-item-num">9 </span>pipeline output examples</a></span></li></ul></div>
```
%matplotlib ipympl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
sns.set()
sns.set_context('paper')
sns.set_palette('colorblind')
from planet4 import io, stats, markings, plotting, region_data
from planet4.catalog_production import ReleaseManager
fans = pd.read_csv("/Users/klay6683/Dropbox/data/planet4/p4_analysis/P4_catalog_v1.0/P4_catalog_v1.0_L1C_cut_0.5_fan_meta_merged.csv")
blotch = pd.read_csv("/Users/klay6683/Dropbox/data/planet4/p4_analysis/P4_catalog_v1.0/P4_catalog_v1.0_L1C_cut_0.5_blotch_meta_merged.csv")
pd.set_option("display.max_columns", 150)
fans.head()
fans.l_s.head().values[0]
group_blotch = blotch.groupby("obsid")
type(group_blotch)
counts = group_blotch.marking_id.count()
counts.head()
counts.plot(c='r')
plt.figure()
counts.hist()
counts.max()
counts.min()
fans.head()
plt.figure(constrained_layout=True)
counts[:20].plot.bar()
plt.figure()
counts[:10].plot(use_index=True)
plt.figure()
counts[:10]
grouped = fans.groupby("obsid")
grouped.tile_id.nunique().sort_values(ascending=False).head()
%matplotlib inline
from planet4.markings import ImageID
p4id = ImageID('7t9')
p4id.image_name
p4id.plot_fans()
filtered = fans[fans.tile_id=='APF0000cia']
filtered.shape
p4id.plot_fans(data=filtered)
```
# Raw data stats
```
import dask.dataframe as dd
db = io.DBManager()
db.dbname
df = dd.read_hdf(db.dbname, 'df')
df.columns
grp = df.groupby(['user_name'])
s = grp.classification_id.nunique().compute().sort_values(ascending=False).head(5)
s
```
# Read in data
```
rm = ReleaseManager('v1.0')
db = io.DBManager()
data = db.get_all()
fans = pd.read_csv(rm.fan_merged)
fans.shape
fans.columns
from planet4.stats import define_season_column
define_season_column(fans)
fans.columns
season2 = fans[fans.season==2]
season2.shape
img223 = fans.query("image_name=='ESP_012265_0950'")
img223.shape
plt.figure()
img223.angle.hist()
fans.season.dtype
meta = pd.read_csv(rm.metadata_path, dtype='str')
cols_to_merge = ['OBSERVATION_ID',
'SOLAR_LONGITUDE', 'north_azimuth', 'map_scale']
fans = fans.merge(meta[cols_to_merge], left_on='obsid', right_on='OBSERVATION_ID')
fans.drop(rm.DROP_FOR_FANS, axis=1, inplace=True)
fans.image_x.head()
ground['image_x'] = pd.to_numeric(ground.image_x)
ground['image_y'] = pd.to_numeric(ground.image_y)
fans_new = fans.merge(ground[rm.COLS_TO_MERGE], on=['obsid', 'image_x', 'image_y'])
fans_new.shape
fans.shape
s = pd.to_numeric(ground.BodyFixedCoordinateX)
s.head()
s.round(decimals=4)
blotches = rm.read_blotch_file().assign(marking='blotch')
fans = rm.read_fan_file().assign(marking='fan')
combined = pd.concat([blotches, fans], ignore_index=True)
blotches.head()
```
## Produce latex table
```
fans.columns
cols1 = fans.columns[:13]
print(cols1)
cols2 = fans.columns[13:-4]
print(cols2)
cols3 = fans.columns[-4:-1]
cols3
fanshead1 = fans[cols1].head(10)
fanshead2 = fans[cols2].head(10)
fanshead3 = fans[cols3].head(10)
with open("fan_table1.tex", 'w') as f:
    f.write(fanshead1.to_latex())
with open("fan_table2.tex", 'w') as f:
    f.write(fanshead2.to_latex())
with open("fan_table3.tex", 'w') as f:
    f.write(fanshead3.to_latex())
```
## Add region
Adding a region identifier is immensely helpful for automatically plotting stuff across regions.
```
for Reg in region_data.regions:
    reg = Reg()
    print(reg.name)
    combined.loc[combined.obsid.isin(reg.all_obsids), 'region'] = reg.name
    fans.loc[fans.obsid.isin(reg.all_obsids), 'region'] = reg.name
    blotches.loc[blotches.obsid.isin(reg.all_obsids), 'region'] = reg.name
```
# Calculate number of empty tiles
```
tiles_marked = combined.tile_id.unique()
db = io.DBManager()
input_tiles = db.image_ids
input_tiles.shape[0]
n_empty = input_tiles.shape[0] - tiles_marked.shape[0]
n_empty
n_empty / input_tiles.shape[0]
empty_tiles = list(set(input_tiles) - set(tiles_marked))
all_data = db.get_all()
all_data.set_index('image_id', inplace=True)
empty_data = all_data.loc[empty_tiles]
meta = pd.read_csv(rm.metadata_path)
meta.head()
empty_tile_numbers = empty_data.reset_index().groupby('image_name')[['x_tile', 'y_tile']].max()
empty_tile_numbers['total'] = empty_tile_numbers.x_tile*empty_tile_numbers.y_tile
empty_tile_numbers.head()
n_empty_per_obsid = empty_data.reset_index().groupby('image_name').image_id.nunique()
n_empty_per_obsid = n_empty_per_obsid.to_frame()
n_empty_per_obsid.columns = ['n']
df = n_empty_per_obsid
df = df.join(empty_tile_numbers.total)
df = df.assign(ratio=df.n/df.total)
df = df.join(meta.set_index('OBSERVATION_ID'))
df['scaled_n'] = df.n / df.map_scale / df.map_scale
import seaborn as sns
sns.set_context('notebook')
df.plot(kind='scatter', y='ratio', x='SOLAR_LONGITUDE')
ax = plt.gca()
ax.set_ylabel('Fraction of empty tiles per HiRISE image')
ax.set_xlabel('Solar Longitude [$^\circ$]')
ax.set_title("Distribution of empty tiles vs time")
plt.savefig("/Users/klay6683/Dropbox/src/p4_paper1/figures/empty_data_vs_ls.pdf")
df[df.ratio > 0.8]
```
## Create sample to check what's empty
```
sample = np.random.choice(empty_tiles, 200)
cd plots
from tqdm import tqdm
for image_id in tqdm(sample):
    fig, ax = plt.subplots(ncols=2)
    plotting.plot_raw_fans(image_id, ax=ax[0])
    plotting.plot_raw_blotches(image_id, ax=ax[1])
    fig.savefig(f"empty_tiles/{image_id}_input_markings.png", dpi=150)
    plt.close('all')
```
# highest number of markings per tile
```
fans_per_tile = fans.groupby('tile_id').size().sort_values(ascending=False)
fans_per_tile.head()
blotches_per_tile = blotches.groupby('tile_id').size().sort_values(ascending=False)
blotches_per_tile.head()
print(fans_per_tile.median())
blotches_per_tile.median()
plt.close('all')
by_image_id = combined.groupby(['marking', 'tile_id']).size()
by_image_id.name = 'Markings per tile'
by_image_id = by_image_id.reset_index()
by_image_id.columns
g = sns.FacetGrid(by_image_id, col="marking", aspect=1.2)
bins = np.arange(0, 280, 5)
g.map(sns.distplot, 'Markings per tile', kde=False, bins=bins, hist_kws={'log':True})
plt.savefig('/Users/klay6683/Dropbox/src/p4_paper1/figures/number_distributions.pdf', dpi=150)
blotches_per_tile.median()
from planet4 import plotting
# %load -n plotting.plot_finals_with_input
def plot_finals_with_input(id_, datapath=None, horizontal=True, scope='planet4'):
    imgid = markings.ImageID(id_, scope=scope)
    pm = io.PathManager(id_=id_, datapath=datapath)
    if horizontal is True:
        kwargs = {'ncols': 2}
    else:
        kwargs = {'nrows': 2}
    fig, ax = plt.subplots(figsize=(4, 5), **kwargs)
    ax[0].set_title(imgid.imgid, fontsize=8)
    imgid.show_subframe(ax=ax[0])
    for marking in ['fan', 'blotch']:
        try:
            df = getattr(pm, f"final_{marking}df")
        except:
            continue
        else:
            data = df[df.image_id == imgid.imgid]
            imgid.plot_markings(marking, data, ax=ax[1])
    fig.subplots_adjust(top=0.95, bottom=0, left=0, right=1, hspace=0.01, wspace=0.01)
    fig.savefig(f"/Users/klay6683/Dropbox/src/p4_paper1/figures/{imgid.imgid}_final.png",
                dpi=150)
plot_finals_with_input('7t9', rm.savefolder, horizontal=False)
markings.ImageID('7t9').image_name
```
# Convert distance to meters
```
fans['distance_m'] = fans.distance*fans.map_scale
blotches['radius_1_m'] = blotches.radius_1*blotches.map_scale
blotches['radius_2_m'] = blotches.radius_2*blotches.map_scale
```
### Reduction of number of fan markings to finals
```
n_fan_in = 2792963
fans.shape[0]
fans.shape[0] / n_fan_in
```
# Length stats
Percentage of fan markings below 100 m:
```
import scipy
scipy.stats.percentileofscore(fans.distance_m, 100)
```
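`scipy.stats.percentileofscore` reports the percentage of values that fall at or below the given score; ignoring ties (which scipy's default `kind='rank'` handles slightly differently), a standard-library sketch of the same idea:

```python
def percentile_of_score(values, score):
    """Percentage of values <= score (ties aside, matches scipy's behaviour)."""
    below = sum(1 for v in values if v <= score)
    return 100.0 * below / len(values)

lengths = [20, 40, 60, 80, 120, 250]  # made-up fan lengths in meters
print(percentile_of_score(lengths, 100))  # 4 of 6 values are below 100 m
```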
Cumulative histogram of fan lengths
```
def add_percentage_line(ax, meters, column):
    y = scipy.stats.percentileofscore(column, meters)
    ax.axhline(y/100, linestyle='dashed', color='black', lw=1)
    ax.axvline(meters, linestyle='dashed', color='black', lw=1)
    ax.text(meters, y/100, f"{y/100:0.2f}")
plt.close('all')
fans.distance_m.max()
bins = np.arange(0,380, 5)
fig, ax = plt.subplots(figsize=(8,3), ncols=2, sharey=False)
sns.distplot(fans.distance_m, bins=bins, kde=False,
hist_kws={'cumulative':False,'normed':True, 'log':True},
axlabel='Fan length [m]', ax=ax[0])
sns.distplot(fans.distance_m, bins=bins, kde=False, hist_kws={'cumulative':True,'normed':True},
axlabel='Fan length [m]', ax=ax[1])
ax[0].set_title("Normalized Log-Histogram of fan lengths ")
ax[1].set_title("Cumulative normalized histogram of fan lengths")
ax[1].set_ylabel("Fraction of fans with given length")
add_percentage_line(ax[1], 100, fans.distance_m)
add_percentage_line(ax[1], 50, fans.distance_m)
fig.tight_layout()
fig.savefig("/Users/klay6683/Dropbox/src/p4_paper1/figures/fan_lengths_histos.pdf",
dpi=150, bbox_inches='tight')
fans.query('distance_m>350')[['distance_m', 'obsid', 'l_s']]
fans.distance_m.describe()
```
In words, the mean length of fans is {{f"{fans.distance_m.describe()['mean']:.1f}"}} m, while the median is
{{f"{fans.distance_m.describe()['50%']:.1f}"}} m.
```
fans.replace("Manhattan_Frontinella", "Manhattan_\nFrontinella", inplace=True)
fig, ax = plt.subplots()
sns.boxplot(y="region", x="distance_m", data=fans, ax=ax,
fliersize=3)
ax.set_title("Fan lengths in different ROIs")
fig.tight_layout()
fig.savefig("/Users/klay6683/Dropbox/src/p4_paper1/figures/fan_lengths_vs_regions.pdf",
dpi=150, bbox_inches='tight')
```
## Blotch sizes
```
plt.figure()
cols = ['radius_1','radius_2']
sns.distplot(blotches[cols], kde=False, bins=np.arange(2.0,50.),
color=['r','g'], label=cols)
plt.legend()
plt.figure()
cols = ['radius_1_m','radius_2_m']
sns.distplot(blotches[cols], kde=False, bins=np.arange(2.0,50.),
color=['r','g'], label=cols)
plt.legend()
fig, ax = plt.subplots(figsize=(8,4))
sns.distplot(blotches.radius_2_m, bins=500, kde=False, hist_kws={'cumulative':True,'normed':True},
axlabel='Blotch radius_1 [m]', ax=ax)
ax.set_title("Cumulative normalized histogram for blotch lengths")
ax.set_ylabel("Fraction of blotches with given radius_1")
add_percentage_line(ax, 30, blotches.radius_2_m)
add_percentage_line(ax, 10, blotches.radius_2_m)
import scipy
scipy.stats.percentileofscore(blotches.radius_2_m, 30)
plt.close('all')
```
## Longest fans
```
fans.query('distance_m > 350')[
'distance_m distance obsid image_x image_y tile_id'.split()].sort_values(
by='distance_m')
from planet4 import plotting
plotting.plot_finals('de3', datapath=rm.catalog)
plt.gca().set_title('APF0000de3')
plotting.plot_image_id_pipeline('de3', datapath=rm.catalog, via_obsid=False, figsize=(12,8))
from planet4 import region_data
from planet4 import stats
stats.define_season_column(fans)
stats.define_season_column(blotches)
fans.season.value_counts()
fans.query('season==2').distance_m.median()
fans.query('season==3').distance_m.median()
from planet4 import region_data
for region in ['Manhattan2', 'Giza', 'Ithaca']:
    print(region)
    obj = getattr(region_data, region)
    for s in ['season2', 'season3']:
        print(s)
        obsids = getattr(obj, s)
        print(fans[fans.obsid.isin(obsids)].distance_m.median())
db = io.DBManager()
all_data = db.get_all()
image_names = db.image_names
g_all = all_data.groupby('image_id')
g_all.size().sort_values().head()
fans.columns
cols_to_drop = ['path', 'image_name', 'binning', 'LineResolution', 'SampleResolution', 'Line', 'Sample']
fans.drop(cols_to_drop, axis=1, inplace=True, errors='ignore')
fans.columns
fans.iloc[1]
```
# North azimuths
```
s = """ESP\_011296\_0975 & -82.197 & 225.253 & 178.8 & 2008-12-23 & 17:08 & 91 \\
ESP\_011341\_0980 & -81.797 & 76.13 & 180.8 & 2008-12-27 & 17:06 & 126 \\
ESP\_011348\_0950 & -85.043 & 259.094 & 181.1 & 2008-12-27 & 18:01 & 91 \\
ESP\_011350\_0945 & -85.216 & 181.415 & 181.2 & 2008-12-27 & 16:29 & 126 \\
ESP\_011351\_0945 & -85.216 & 181.548 & 181.2 & 2008-12-27 & 18:18 & 91 \\
ESP\_011370\_0980 & -81.925 & 4.813 & 182.1 & 2008-12-29 & 17:08 & 126 \\
ESP\_011394\_0935 & -86.392 & 99.068 & 183.1 & 2008-12-31 & 19:04 & 72 \\
ESP\_011403\_0945 & -85.239 & 181.038 & 183.5 & 2009-01-01 & 16:56 & 164 \\
ESP\_011404\_0945 & -85.236 & 181.105 & 183.6 & 2009-01-01 & 18:45 & 91 \\
ESP\_011406\_0945 & -85.409 & 103.924 & 183.7 & 2009-01-01 & 17:15 & 126 \\
ESP\_011407\_0945 & -85.407 & 103.983 & 183.7 & 2009-01-01 & 19:04 & 91 \\
ESP\_011408\_0930 & -87.019 & 86.559 & 183.8 & 2009-01-01 & 19:43 & 59 \\
ESP\_011413\_0970 & -82.699 & 273.129 & 184.0 & 2009-01-01 & 17:17 & 108 \\
ESP\_011420\_0930 & -87.009 & 127.317 & 184.3 & 2009-01-02 & 20:16 & 54 \\
ESP\_011422\_0930 & -87.041 & 72.356 & 184.4 & 2009-01-02 & 20:15 & 54 \\
ESP\_011431\_0930 & -86.842 & 178.244 & 184.8 & 2009-01-03 & 19:41 & 54 \\
ESP\_011447\_0950 & -84.805 & 65.713 & 185.5 & 2009-01-04 & 17:19 & 218 \\
ESP\_011448\_0950 & -84.806 & 65.772 & 185.6 & 2009-01-04 & 19:09 & 59 \\"""
lines = s.split(' \\')
s.replace('\\', '')
obsids = [line.split('&')[0].strip().replace('\\','') for line in lines][:-1]
meta = pd.read_csv(rm.metadata_path)
meta.query('obsid in @obsids').sort_values(by='obsid')
blotches.groupby('obsid').north_azimuth.nunique()
```
# User stats
```
db = io.DBManager()
db.dbname = '/Users/klay6683/local_data/planet4/2018-02-11_planet_four_classifications_queryable_cleaned_seasons2and3.h5'
with pd.HDFStore(str(db.dbname)) as store:
    user_names = store.select_column('df', 'user_name').unique()
user_names.shape
user_names[:10]
not_logged = [i for i in user_names if i.startswith('not-logged-in')]
logged = list(set(user_names) - set(not_logged))
len(logged)
len(not_logged)
not_logged[:20]
df = db.get_all()
df[df.marking=='fan'].shape
df[df.marking=='blotch'].shape
df[df.marking=='interesting'].shape
n_class_by_user = df.groupby('user_name').classification_id.nunique()
n_class_by_user.describe()
logged_users = df.user_name[~df.user_name.str.startswith("not-logged-in")].unique()
logged_users.shape
not_logged = list(set(df.user_name.unique()) - set(logged_users))
len(not_logged)
n_class_by_user[not_logged].describe()
n_class_by_user[logged_users].describe()
n_class_by_user[n_class_by_user>50].shape[0]/n_class_by_user.shape[0]
n_class_by_user.shape
```
# pipeline output examples
```
pm = io.PathManager('any', datapath=rm.savefolder)
cols1 = pm.fandf.columns[:8]
cols2 = pm.fandf.columns[8:-2]
cols3 = pm.fandf.columns[-2:]
print(pm.fandf[cols1].to_latex())
print(pm.fandf[cols2].to_latex())
print(pm.fandf[cols3].to_latex())
df = pm.fnotchdf.head(4)
cols1 = df.columns[:6]
cols2 = df.columns[6:14]
cols3 = df.columns[14:]
for cols in [cols1, cols2, cols3]:
    print(df[cols].to_latex())
```
# Tutorial
We will solve the following problem using a computer to estimate the expected
probabilities:
```{admonition} Problem
An experiment consists of selecting a token from a bag and spinning a coin. The
bag contains 5 red tokens and 7 blue tokens. A token is selected at random from
the bag, its colour is noted and then the token is returned to the bag.
When a red token is selected, a biased coin with probability $\frac{2}{3}$
of landing heads is spun.
When a blue token is selected a fair coin is spun.
1. What is the probability of picking a red token?
2. What is the probability of obtaining Heads?
3. If heads is obtained, what is the probability of having selected a red
token?
```
We will use the `random` library from the Python standard library to do this.
First we start off by building a Python **tuple** to represent the bag with the
tokens. We assign this to a variable `bag`:
```
bag = (
"Red",
"Red",
"Red",
"Red",
"Red",
"Blue",
"Blue",
"Blue",
"Blue",
"Blue",
"Blue",
"Blue",
)
bag
```
```{attention}
We are here using round brackets `()` and the quotation marks
`"`. Those are important and cannot be omitted. The choice of brackets `()` as
opposed to `{}` or `[]` is in fact important as it instructs Python to do
different things (we will learn about this later). You can use `"` or `'`
interchangeably.
```
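A quick way to see that the choice of brackets matters is to check the type Python assigns to each literal (a small illustration of our own, not part of the original problem):

```python
# The same items wrapped in different brackets give different types:
# () -> tuple (immutable), [] -> list (mutable), {} -> set (unordered, unique)
round_brackets = ("Red", "Blue")
square_brackets = ["Red", "Blue"]
curly_brackets = {"Red", "Blue"}

print(type(round_brackets))   # <class 'tuple'>
print(type(square_brackets))  # <class 'list'>
print(type(curly_brackets))   # <class 'set'>
```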
Instead of writing every entry out we can create a Python **list** which allows
us to carry out some basic algebra on the items. Here we essentially:
- Create a list with 5 `"Red"`s.
- Create a list with 7 `"Blue"`s.
- Combine both lists:
```
bag = ["Red"] * 5 + ["Blue"] * 7
bag
```
Now to sample from that we use the `random` library which has a `choice`
command:
```
import random
random.choice(bag)
```
If we run this many times we will not always get the same outcome:
```
random.choice(bag)
```
```{attention}
The `bag` variable is unchanged:
```
```
bag
```
In order to answer the first question (what is the probability of picking a red
token) we want to repeat this many times.
We do this by defining a Python function (which is akin to a mathematical
function) that allows us to repeat code:
```
def pick_a_token(container):
"""
A function to randomly sample from a `container`.
"""
return random.choice(container)
```
We can then call this function, passing our `bag` to it as the `container` from
which to pick:
```
pick_a_token(container=bag)
pick_a_token(container=bag)
```
In order to simulate the probability of picking a red token we need to repeat
this not once or twice but tens of thousands of times. We will do this using
something called a "list comprehension" which is akin to the mathematical
notation we use all the time to create sets:
$$
S_1 = \{f(x)\text{ for }x\text{ in }S_2\}
$$
```
number_of_repetitions = 10000
samples = [pick_a_token(container=bag) for repetition in range(number_of_repetitions)]
samples
```
We can confirm that we have the correct number of samples:
```
len(samples)
```
```{attention}
`len` is the Python tool to get the length of a given Python iterable.
```
We can now use `==` (double `=`) to check how many of those samples are `Red`:
```
sum(token == "Red" for token in samples) / number_of_repetitions
```
We obtain a sampled probability of around .41. The theoretical value is $\frac{5}{5 +
7}$:
```
5 / (5 + 7)
```
To answer the second question (What is the probability of obtaining Heads?) we
need to make use of another Python tool: an `if` statement. This will allow us
to write a function that does precisely what is described in the problem:
- Choose a token;
- Set the probability of flipping a given coin;
- Select that coin.
```{attention}
For the second random selection (flipping a coin) we will not choose from a list
but instead select a random number between 0 and 1.
```
```
import random
def sample_experiment(bag):
"""
This samples a token from a given bag and then
selects a coin with a given probability.
If the sampled token is red then the probability
of selecting heads is 2/3 otherwise it is 1/2.
This function returns both the selected token
and the coin face.
"""
selected_token = pick_a_token(container=bag)
if selected_token == "Red":
probability_of_selecting_heads = 2 / 3
else:
probability_of_selecting_heads = 1 / 2
if random.random() < probability_of_selecting_heads:
coin = "Heads"
else:
coin = "Tails"
return selected_token, coin
```
Using this we can sample according to the problem description:
```
sample_experiment(bag=bag)
sample_experiment(bag=bag)
```
We can now find out the probability of selecting heads by carrying out a large
number of repetitions and checking which ones have a coin that is heads:
```
samples = [sample_experiment(bag=bag) for repetition in range(number_of_repetitions)]
sum(coin == "Heads" for token, coin in samples) / number_of_repetitions
```
We can compute this theoretically as well; the expected probability is:
```
import sympy as sym
sym.S(5) / (12) * sym.S(2) / 3 + sym.S(7) / (12) * sym.S(1) / 2
41 / 72
```
We can also use our samples to calculate the conditional probability that a
token was red given that the coin shows heads. This is done again using the list
comprehension notation but including an `if` statement which allows us to
emulate the mathematical notation:
$$
S_3 = \{x \in S_1 \mid \text{if some property of } x \text{ holds}\}
$$
```
samples_with_heads = [(token, coin) for token, coin in samples if coin == "Heads"]
sum(token == "Red" for token, coin in samples_with_heads) / len(samples_with_heads)
```
Using Bayes' theorem this is given theoretically by:
$$
P(\text{Red}|\text{Heads}) = \frac{P(\text{Heads} | \text{Red})P(\text{Red})}{P(\text{Heads})}
$$
```
(sym.S(2) / 3 * sym.S(5) / 12) / (sym.S(41) / 72)
20 / 41
```
```{important}
In this tutorial we have
- Randomly sampled from an iterable.
- Randomly sampled a number between 0 and 1.
- Written a function to represent a random experiment.
- Created a list using list comprehensions.
- Counted outcomes of random experiments.
```
# Thinking in tensors, writing in PyTorch
A hands-on course by [Piotr Migdał](https://p.migdal.pl) (2019).
<a href="https://colab.research.google.com/github/stared/thinking-in-tensors-writing-in-pytorch/blob/master/5%20Nonlinear%20regression.ipynb" target="_parent">
<img src="https://colab.research.google.com/assets/colab-badge.svg"/>
</a>
## Notebook 5: Non-linear regression
Very **Work in Progress**

### Exercise
Which of the following can be described by linear regression:
* without any modifications,
* after rescaling *x* or *y*,
* cannot be described by linear regression?
**TODO**
* Prepare examples
* 1d function with nonlinearities (by hand and automatically)
* More advanced
**Datasets to consider**
* https://en.wikipedia.org/wiki/Flight_airspeed_record
**TODO later**
* livelossplot `plot_extrema` error
* drawing a plot
* consider using [hiddenlayer](https://github.com/waleedka/hiddenlayer)
```
%matplotlib inline
from matplotlib import pyplot as plt
import torch
from torch import nn
from torch import tensor
from livelossplot import PlotLosses
X = torch.linspace(-2., 2., 30).unsqueeze(1)
Y = torch.cat([torch.zeros(10), torch.linspace(0., 1., 10), 1. + torch.zeros(10)], dim=0)
plt.plot(X.squeeze().numpy(), Y.numpy(), 'r.')
linear_model = nn.Linear(in_features=1, out_features=1)
def train(X, Y, model, loss_function, optim, num_epochs):
loss_history = []
def extra_plot(*args):
plt.plot(X.squeeze(1).numpy(), Y.numpy(), 'r.', label="Ground truth")
plt.plot(X.squeeze(1).numpy(), model(X).detach().numpy(), '-', label="Model")
plt.title("Prediction")
plt.legend(loc='lower right')
liveloss = PlotLosses(extra_plots=[extra_plot], plot_extrema=False)
for epoch in range(num_epochs):
epoch_loss = 0.0
Y_pred = model(X)
loss = loss_function(Y_pred, Y)
loss.backward()
optim.step()
optim.zero_grad()
liveloss.update({
'loss': loss.data.item(),
})
liveloss.draw()
```
## Linear model
$$y = a x + b$$
```
class Linear(nn.Module):
def __init__(self):
super().__init__()
self.layer_weights = nn.Parameter(torch.randn(1, 1))
self.layer_bias = nn.Parameter(torch.randn(1))
def forward(self, x):
return x.matmul(self.layer_weights).add(self.layer_bias).squeeze()
linear_model = Linear()
optim = torch.optim.SGD(linear_model.parameters(), lr=0.03)
loss_function = nn.MSELoss()
list(linear_model.parameters())
linear_model(X)
train(X, Y, linear_model, loss_function, optim, num_epochs=50)
```
## Nonlinear
$$ x \mapsto h \mapsto y$$
```
class Nonlinear(nn.Module):
def __init__(self, hidden_size=2):
super().__init__()
self.layer_1_weights = nn.Parameter(torch.randn(1, hidden_size))
self.layer_1_bias = nn.Parameter(torch.randn(hidden_size))
self.layer_2_weights = nn.Parameter(torch.randn(hidden_size, 1) )
self.layer_2_bias = nn.Parameter(torch.randn(1))
def forward(self, x):
x = x.matmul(self.layer_1_weights).add(self.layer_1_bias)
x = x.relu()
x = x.matmul(self.layer_2_weights).add(self.layer_2_bias)
return x.squeeze()
def nonrandom_init(self):
self.layer_1_weights.data = tensor([[1.1, 0.8]])
self.layer_1_bias.data = tensor([0.5 , -0.7])
self.layer_2_weights.data = tensor([[0.3], [-0.7]])
self.layer_2_bias.data = tensor([0.2])
nonlinear_model = Nonlinear(hidden_size=2)
nonlinear_model.nonrandom_init()
optim = torch.optim.SGD(nonlinear_model.parameters(), lr=0.2)
# optim = torch.optim.Adam(nonlinear_model.parameters(), lr=0.1)
loss_function = nn.MSELoss()
train(X, Y, nonlinear_model, loss_function, optim, num_epochs=200)
```
## Other shapes and activations
```
Y_sin = (2 * X).sin()
plt.plot(X.squeeze().numpy(), Y_sin.numpy(), 'r.')
# warning:
# for 1-d problems it rarely works (often gets stuck in some local minimum)
nonlinear_model = Nonlinear(hidden_size=10)
optim = torch.optim.Adam(nonlinear_model.parameters(), lr=0.01)
loss_function = nn.MSELoss()
train(X, Y_sin, nonlinear_model, loss_function, optim, num_epochs=100)
class NonlinearSigmoid2(nn.Module):
def __init__(self, hidden_size=2):
super().__init__()
self.layer_1_weights = nn.Parameter(torch.randn(1, hidden_size))
self.layer_1_bias = nn.Parameter(torch.randn(hidden_size))
self.layer_2_weights = nn.Parameter(torch.randn(hidden_size, 1))
self.layer_2_bias = nn.Parameter(torch.randn(1))
def forward(self, x):
x = x.matmul(self.layer_1_weights).add(self.layer_1_bias)
x = x.sigmoid()
x = x.matmul(self.layer_2_weights).add(self.layer_2_bias)
x = x.sigmoid()
return x.squeeze()
X1 = torch.linspace(-2., 2., 30).unsqueeze(1)
Y1 = torch.cat([torch.zeros(10), 1. + torch.zeros(10), torch.zeros(10)], dim=0)
plt.plot(X1.squeeze().numpy(), Y1.numpy(), 'r.')
nonlinear_model = NonlinearSigmoid2(hidden_size=2)
# optim = torch.optim.SGD(nonlinear_model.parameters(), lr=0.1)
optim = torch.optim.Adam(nonlinear_model.parameters(), lr=0.1)
loss_function = nn.MSELoss()
train(X1, Y1, nonlinear_model, loss_function, optim, num_epochs=100)
```
## Nonlinear model - by hand
```
my_nonlinear_model = Nonlinear(hidden_size=2)
my_nonlinear_model.layer_1_weights.data = tensor([[1. , 1.]])
my_nonlinear_model.layer_1_bias.data = tensor([1. , -1.])
X.matmul(my_nonlinear_model.layer_1_weights).add(my_nonlinear_model.layer_1_bias).relu()
my_nonlinear_model.layer_2_weights.data = tensor([[0.5], [-0.5]])
my_nonlinear_model.layer_2_bias.data = tensor([0.])
my_nonlinear_model(X)
plt.plot(X.squeeze(1).numpy(), Y.numpy(), 'r.')
plt.plot(X.squeeze(1).numpy(), my_nonlinear_model(X).detach().numpy(), '-')
```
<a id='python-by-example'></a>
<div id="qe-notebook-header" align="right" style="text-align:right;">
<a href="https://quantecon.org/" title="quantecon.org">
<img style="width:250px;display:inline;" width="250px" src="https://assets.quantecon.org/img/qe-menubar-logo.svg" alt="QuantEcon">
</a>
</div>
# An Introductory Example
<a id='index-0'></a>
## Contents
- [An Introductory Example](#An-Introductory-Example)
- [Overview](#Overview)
- [The Task: Plotting a White Noise Process](#The-Task:-Plotting-a-White-Noise-Process)
- [Version 1](#Version-1)
- [Alternative Implementations](#Alternative-Implementations)
- [Another Application](#Another-Application)
- [Exercises](#Exercises)
- [Solutions](#Solutions)
## Overview
We’re now ready to start learning the Python language itself.
In this lecture, we will write and then pick apart small Python programs.
The objective is to introduce you to basic Python syntax and data structures.
Deeper concepts will be covered in later lectures.
You should have read the [lecture](https://python-programming.quantecon.org/getting_started.html) on getting started with Python before beginning this one.
## The Task: Plotting a White Noise Process
Suppose we want to simulate and plot the white noise
process $ \epsilon_0, \epsilon_1, \ldots, \epsilon_T $, where each draw $ \epsilon_t $ is independent standard normal.
In other words, we want to generate figures that look something like this:

(Here $ t $ is on the horizontal axis and $ \epsilon_t $ is on the
vertical axis.)
We’ll do this in several different ways, each time learning something more
about Python.
We run the following command first, which helps ensure that plots appear in the
notebook if you run it on your own machine.
```
%matplotlib inline
```
## Version 1
<a id='ourfirstprog'></a>
Here are a few lines of code that perform the task we set
```
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (10,6)
ϵ_values = np.random.randn(100)
plt.plot(ϵ_values)
plt.show()
```
Let’s break this program down and see how it works.
<a id='import'></a>
### Imports
The first two lines of the program import functionality from external code
libraries.
The first line imports [NumPy](https://python-programming.quantecon.org/numpy.html), a favorite Python package for tasks like
- working with arrays (vectors and matrices)
- common mathematical functions like `cos` and `sqrt`
- generating random numbers
- linear algebra, etc.
After `import numpy as np` we have access to these attributes via the syntax `np.attribute`.
Here are two more examples
```
np.sqrt(4)
np.log(4)
```
We could also use the following syntax:
```
import numpy
numpy.sqrt(4)
```
But the former method (using the short name `np`) is convenient and more standard.
#### Why So Many Imports?
Python programs typically require several import statements.
The reason is that the core language is deliberately kept small, so that it’s easy to learn and maintain.
When you want to do something interesting with Python, you almost always need
to import additional functionality.
#### Packages
<a id='index-1'></a>
As stated above, NumPy is a Python *package*.
Packages are used by developers to organize code they wish to share.
In fact, a package is just a directory containing
1. files with Python code — called **modules** in Python speak
1. possibly some compiled code that can be accessed by Python (e.g., functions compiled from C or FORTRAN code)
1. a file called `__init__.py` that specifies what will be executed when we type `import package_name`
In fact, you can find and explore the directory for NumPy on your computer
easily enough if you look around.
On this machine, it’s located in
```ipython
anaconda3/lib/python3.7/site-packages/numpy
```
#### Subpackages
<a id='index-2'></a>
Consider the line `ϵ_values = np.random.randn(100)`.
Here `np` refers to the package NumPy, while `random` is a **subpackage** of NumPy.
Subpackages are just packages that are subdirectories of another package.
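As a small aside (our own illustration, not from the original lecture), a subpackage can itself be imported, just like a top-level package, and individual functions can be pulled out of it:

```python
# Import the subpackage itself, or a single function from it
from numpy import random          # the `random` subpackage
from numpy.random import randn    # one function from that subpackage

x = randn()                  # a single standard normal draw, same as np.random.randn()
print(isinstance(x, float))  # True — NumPy scalars subclass Python's float
```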
### Importing Names Directly
Recall this code that we saw above
```
import numpy as np
np.sqrt(4)
```
Here’s another way to access NumPy’s square root function
```
from numpy import sqrt
sqrt(4)
```
This is also fine.
The advantage is less typing if we use `sqrt` often in our code.
The disadvantage is that, in a long program, these two lines might be
separated by many other lines.
Then it’s harder for readers to know where `sqrt` came from, should they wish to.
### Random Draws
Returning to our program that plots white noise, the remaining three lines
after the import statements are
```
ϵ_values = np.random.randn(100)
plt.plot(ϵ_values)
plt.show()
```
The first line generates 100 (quasi) independent standard normals and stores
them in `ϵ_values`.
The next two lines generate the plot.
We can and will look at various ways to configure and improve this plot below.
## Alternative Implementations
Let’s try writing some alternative versions of [our first program](#ourfirstprog), which plotted IID draws from the normal distribution.
The programs below are less efficient than the original one, and hence
somewhat artificial.
But they do help us illustrate some important Python syntax and semantics in a familiar setting.
### A Version with a For Loop
Here’s a version that illustrates `for` loops and Python lists.
<a id='firstloopprog'></a>
```
ts_length = 100
ϵ_values = [] # empty list
for i in range(ts_length):
e = np.random.randn()
ϵ_values.append(e)
plt.plot(ϵ_values)
plt.show()
```
In brief,
- The first line sets the desired length of the time series.
- The next line creates an empty *list* called `ϵ_values` that will store the $ \epsilon_t $ values as we generate them.
- The text `# empty list` is a *comment*, and is ignored by Python’s interpreter.
- The next three lines are the `for` loop, which repeatedly draws a new random number $ \epsilon_t $ and appends it to the end of the list `ϵ_values`.
- The last two lines generate the plot and display it to the user.
Let’s study some parts of this program in more detail.
<a id='lists-ref'></a>
### Lists
<a id='index-3'></a>
Consider the statement `ϵ_values = []`, which creates an empty list.
Lists are a *native Python data structure* used to group a collection of objects.
For example, try
```
x = [10, 'foo', False]
type(x)
```
The first element of `x` is an [integer](https://en.wikipedia.org/wiki/Integer_%28computer_science%29), the next is a [string](https://en.wikipedia.org/wiki/String_%28computer_science%29), and the third is a [Boolean value](https://en.wikipedia.org/wiki/Boolean_data_type).
When adding a value to a list, we can use the syntax `list_name.append(some_value)`
```
x
x.append(2.5)
x
```
Here `append()` is what’s called a *method*, which is a function “attached to” an object—in this case, the list `x`.
We’ll learn all about methods later on, but just to give you some idea,
- Python objects such as lists, strings, etc. all have methods that are used to manipulate the data contained in the object.
- String objects have [string methods](https://docs.python.org/3/library/stdtypes.html#string-methods), list objects have [list methods](https://docs.python.org/3/tutorial/datastructures.html#more-on-lists), etc.
Another useful list method is `pop()`
```
x
x.pop()
x
```
Lists in Python are zero-based (as in C, Java or Go), so the first element is referenced by `x[0]`
```
x[0] # first element of x
x[1] # second element of x
```
### The For Loop
<a id='index-4'></a>
Now let’s consider the `for` loop from [the program above](#firstloopprog), which was
```
for i in range(ts_length):
e = np.random.randn()
ϵ_values.append(e)
```
Python executes the two indented lines `ts_length` times before moving on.
These two lines are called a `code block`, since they comprise the “block” of code that we are looping over.
Unlike most other languages, Python knows the extent of the code block *only from indentation*.
In our program, indentation decreases after line `ϵ_values.append(e)`, telling Python that this line marks the lower limit of the code block.
More on indentation below—for now, let’s look at another example of a `for` loop
```
animals = ['dog', 'cat', 'bird']
for animal in animals:
print("The plural of " + animal + " is " + animal + "s")
```
This example helps to clarify how the `for` loop works: When we execute a
loop of the form
```python3
for variable_name in sequence:
<code block>
```
The Python interpreter performs the following:
- For each element of the `sequence`, it “binds” the name `variable_name` to that element and then executes the code block.
The `sequence` object can in fact be a very general object, as we’ll see
soon enough.
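For example (a small illustration of our own), a string is a perfectly valid sequence — the loop variable is bound to each character in turn:

```python
# Collect the characters of a string by looping over it
letters = []
for letter in "abc":
    letters.append(letter)
print(letters)  # ['a', 'b', 'c']
```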
### A Comment on Indentation
<a id='index-5'></a>
In discussing the `for` loop, we explained that the code blocks being looped over are delimited by indentation.
In fact, in Python, **all** code blocks (i.e., those occurring inside loops, if clauses, function definitions, etc.) are delimited by indentation.
Thus, unlike most other languages, whitespace in Python code affects the output of the program.
Once you get used to it, this is a good thing: It
- forces clean, consistent indentation, improving readability
- removes clutter, such as the brackets or end statements used in other languages
On the other hand, it takes a bit of care to get right, so please remember:
- The line before the start of a code block always ends in a colon
- `for i in range(10):`
- `if x > y:`
- `while x < 100:`
- etc., etc.
- All lines in a code block **must have the same amount of indentation**.
- The Python standard is 4 spaces, and that’s what you should use.
### While Loops
<a id='index-6'></a>
The `for` loop is the most common technique for iteration in Python.
But, for the purpose of illustration, let’s modify [the program above](#firstloopprog) to use a `while` loop instead.
<a id='whileloopprog'></a>
```
ts_length = 100
ϵ_values = []
i = 0
while i < ts_length:
e = np.random.randn()
ϵ_values.append(e)
i = i + 1
plt.plot(ϵ_values)
plt.show()
```
Note that
- the code block for the `while` loop is again delimited only by indentation
- the statement `i = i + 1` can be replaced by `i += 1`
## Another Application
Let’s do one more application before we turn to exercises.
In this application, we plot the balance of a bank account over time.
There are no withdrawals over the time period, the last date of which is denoted
by $ T $.
The initial balance is $ b_0 $ and the interest rate is $ r $.
The balance updates from period $ t $ to $ t+1 $ according to $ b_{t+1} = (1 + r) b_t $.
In the code below, we generate and plot the sequence $ b_0, b_1, \ldots, b_T $.
Instead of using a Python list to store this sequence, we will use a NumPy
array.
```
r = 0.025 # interest rate
T = 50 # end date
b = np.empty(T+1) # an empty NumPy array, to store all b_t
b[0] = 10 # initial balance
for t in range(T):
b[t+1] = (1 + r) * b[t]
plt.plot(b, label='bank balance')
plt.legend()
plt.show()
```
The statement `b = np.empty(T+1)` allocates storage in memory for `T+1`
(floating point) numbers.
These numbers are filled in by the `for` loop.
Allocating memory at the start is more efficient than using a Python list and
`append`, since the latter must repeatedly ask for storage space from the
operating system.
Notice that we added a legend to the plot — a feature you will be asked to
use in the exercises.
## Exercises
Now we turn to exercises. It is important that you complete them before
continuing, since they present new concepts we will need.
### Exercise 1
Your first task is to simulate and plot the correlated time series
$$
x_{t+1} = \alpha \, x_t + \epsilon_{t+1}
\quad \text{where} \quad
x_0 = 0
\quad \text{and} \quad t = 0,\ldots,T
$$
The sequence of shocks $ \{\epsilon_t\} $ is assumed to be IID and standard normal.
In your solution, restrict your import statements to
```
import numpy as np
import matplotlib.pyplot as plt
```
Set $ T=200 $ and $ \alpha = 0.9 $.
### Exercise 2
Starting with your solution to exercise 1, plot three simulated time series,
one for each of the cases $ \alpha=0 $, $ \alpha=0.8 $ and $ \alpha=0.98 $.
Use a `for` loop to step through the $ \alpha $ values.
If you can, add a legend, to help distinguish between the three time series.
Hints:
- If you call the `plot()` function multiple times before calling `show()`, all of the lines you produce will end up on the same figure.
- For the legend, note that the expression `'foo' + str(42)` evaluates to `'foo42'`.
### Exercise 3
Similar to the previous exercises, plot the time series
$$
x_{t+1} = \alpha \, |x_t| + \epsilon_{t+1}
\quad \text{where} \quad
x_0 = 0
\quad \text{and} \quad t = 0,\ldots,T
$$
Use $ T=200 $, $ \alpha = 0.9 $ and $ \{\epsilon_t\} $ as before.
Search online for a function that can be used to compute the absolute value $ |x_t| $.
### Exercise 4
One important aspect of essentially all programming languages is branching and
conditions.
In Python, conditions are usually implemented with if–else syntax.
Here’s an example that prints -1 for each negative number in an array and 1
for each nonnegative number
```
numbers = [-9, 2.3, -11, 0]
for x in numbers:
if x < 0:
print(-1)
else:
print(1)
```
Now, write a new solution to Exercise 3 that does not use an existing function
to compute the absolute value.
Replace this existing function with an if–else condition.
<a id='pbe-ex3'></a>
### Exercise 5
Here’s a harder exercise, that takes some thought and planning.
The task is to compute an approximation to $ \pi $ using [Monte Carlo](https://en.wikipedia.org/wiki/Monte_Carlo_method).
Use no imports besides
```
import numpy as np
```
Your hints are as follows:
- If $ U $ is a bivariate uniform random variable on the unit square $ (0, 1)^2 $, then the probability that $ U $ lies in a subset $ B $ of $ (0,1)^2 $ is equal to the area of $ B $.
- If $ U_1,\ldots,U_n $ are IID copies of $ U $, then, as $ n $ gets large, the fraction that falls in $ B $ converges to the probability of landing in $ B $.
- For a circle, $ area = \pi * radius^2 $.
## Solutions
### Exercise 1
Here’s one solution.
```
α = 0.9
T = 200
x = np.empty(T+1)
x[0] = 0
for t in range(T):
x[t+1] = α * x[t] + np.random.randn()
plt.plot(x)
plt.show()
```
### Exercise 2
```
α_values = [0.0, 0.8, 0.98]
T = 200
x = np.empty(T+1)
for α in α_values:
x[0] = 0
for t in range(T):
x[t+1] = α * x[t] + np.random.randn()
plt.plot(x, label=f'$\\alpha = {α}$')
plt.legend()
plt.show()
```
### Exercise 3
Here’s one solution:
```
α = 0.9
T = 200
x = np.empty(T+1)
x[0] = 0
for t in range(T):
x[t+1] = α * np.abs(x[t]) + np.random.randn()
plt.plot(x)
plt.show()
```
### Exercise 4
Here’s one way:
```
α = 0.9
T = 200
x = np.empty(T+1)
x[0] = 0
for t in range(T):
if x[t] < 0:
abs_x = - x[t]
else:
abs_x = x[t]
x[t+1] = α * abs_x + np.random.randn()
plt.plot(x)
plt.show()
```
Here’s a shorter way to write the same thing:
```
α = 0.9
T = 200
x = np.empty(T+1)
x[0] = 0
for t in range(T):
abs_x = - x[t] if x[t] < 0 else x[t]
x[t+1] = α * abs_x + np.random.randn()
plt.plot(x)
plt.show()
```
### Exercise 5
Consider the circle of diameter 1 embedded in the unit square.
Let $ A $ be its area and let $ r=1/2 $ be its radius.
If we know $ \pi $ then we can compute $ A $ via
$ A = \pi r^2 $.
But here the point is to compute $ \pi $, which we can do by
$ \pi = A / r^2 $.
Summary: If we can estimate the area of a circle with diameter 1, then dividing
by $ r^2 = (1/2)^2 = 1/4 $ gives an estimate of $ \pi $.
We estimate the area by sampling bivariate uniforms and looking at the
fraction that falls into the circle.
```
n = 100000
count = 0
for i in range(n):
u, v = np.random.uniform(), np.random.uniform()
d = np.sqrt((u - 0.5)**2 + (v - 0.5)**2)
if d < 0.5:
count += 1
area_estimate = count / n
print(area_estimate * 4) # dividing by radius**2
```
## Introduction of Fairness Workflow Tutorial
## (Dataset/Model Bias Check and Mitigation by Reweighing)
### Table of contents :
* [1 Introduction](#1.-Introduction)
* [2. Data preparation](#2.-Data-preparation)
* [3. Data fairness](#3.-Data-fairness)
* [Data bias checking](#3.1-Bias-Detection)
* [Data mitigation](#3.2-Bias-mitigation)
* [Data-Fairness-comparison](#3.3-Data-Fairness-comparison)
* [4. Model fairness on different ML models](#4.-Model-Fairness---on-different-ML-models)
* [5. Summary](#Summary)
## 1. Introduction
Welcome!
With the proliferation of ML and AI across a diverse set of real-world problems, there is a strong need to keep an eye on the explainability and fairness of ML-AI techniques. This tutorial is one such attempt, offering a glimpse into the world of fairness.
We will walk you through an interesting case study in this tutorial. The aim is to familiarize you with one of the primary problem statements in Responsible AI: `Bias Detection and Mitigation`.
Bias is one of the basic problems that plague datasets and ML models. After all, ML models are only as good as the data they see.
Before we go into a detailed explanation, we would like to give a sneak peek of the steps involved in the bias detection and mitigation process.
As illustrated in the picture, one must first prepare the data for analysis, detect bias, mitigate bias and observe the effect of bias mitigation objectively with data fairness and model fairness metrics.
```
# Preparation
!git clone https://github.com/sony/nnabla-examples.git
%cd nnabla-examples/responsible_ai/gender_bias_mitigation
import cv2
from google.colab.patches import cv2_imshow
img = cv2.imread('images/xai-bias-mitigation-workflow.png')
cv2_imshow(img)
```
### 1.1 Sources of bias :
There are many types of bias in data, which can exist in many shapes and forms, some of which may lead to unfairness [1].
Following are some examples of sources which introduce bias in datasets:
__Insufficient data :__
There may not be sufficient data overall or for some minority groups in the training data.<br>
__Data collection practice :__
Bad data collection practices can introduce bias even in large datasets. For example, a high proportion of missing values for a few features in some groups indicates incomplete representation of those groups in the dataset.<br>
__Historical bias :__
Significant difference in the target distribution for different groups could also be present due to underlying human prejudices.
### 1.2 Bias detection :
There are multiple definitions of fairness and bias. Depending on the use case, an appropriate fairness definition must be chosen as the criterion for detecting bias in model predictions.
<br>
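As a small, self-contained illustration (not part of this tutorial's utilities), two widely used criteria — statistical parity difference and disparate impact — can be computed directly from each group's rate of favourable outcomes:

```python
import pandas as pd

# Toy data: a binary protected attribute and a binary favourable label
df = pd.DataFrame({
    "sex":   ["male", "male", "male", "male", "female", "female", "female", "female"],
    "label": [1, 1, 0, 1, 1, 0, 0, 0],
})

rates = df.groupby("sex")["label"].mean()   # P(favourable outcome | group)
p_unpriv, p_priv = rates["female"], rates["male"]

spd = p_unpriv - p_priv   # statistical parity difference: 0 means parity
di = p_unpriv / p_priv    # disparate impact: 1 means parity; < 0.8 commonly flags bias

print(f"SPD = {spd:.3f}, DI = {di:.3f}")  # SPD = -0.500, DI = 0.333
```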
### 1.3 Bias mitigation :
There are three different approaches of bias-reduction intervention in the ML pipelines:
1. Pre-processing
2. In-processing
3. Post-processing
In this tutorial, we will see how to identify bias in the [German credit dataset](https://archive.ics.uci.edu/ml/datasets/Statlog+%28German+Credit+Data%29)[7] and mitigate it with a `pre-processing algorithm (reweighing)`.<br>
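The core idea of reweighing (Kamiran & Calders) fits in a few lines: each (group, label) cell receives the weight P(group) · P(label) / P(group, label), which makes group membership and label statistically independent in the weighted data. The sketch below is illustrative only — the tutorial's own implementation lives in its `utils` module — and assumes a pandas DataFrame with one protected attribute column and a binary label column:

```python
import pandas as pd

def reweighing_weights(df, group_col, label_col):
    """Per-row weight: P(group) * P(label) / P(group, label)."""
    n = len(df)
    p_group = df.groupby(group_col)[label_col].transform("size") / n
    p_label = df.groupby(label_col)[group_col].transform("size") / n
    p_joint = df.groupby([group_col, label_col])[label_col].transform("size") / n
    return p_group * p_label / p_joint

# Toy example: males receive the favourable label 3 times out of 4,
# females only once out of 4
toy = pd.DataFrame({
    "sex":   ["m", "m", "m", "m", "f", "f", "f", "f"],
    "label": [1, 1, 0, 1, 1, 0, 0, 0],
})
w = reweighing_weights(toy, "sex", "label")

# After weighting, both groups share the same weighted favourable rate
toy = toy.assign(w=w, wl=w * toy.label)
grouped = toy.groupby("sex")[["w", "wl"]].sum()
weighted_rate = grouped.wl / grouped.w
print(weighted_rate)
```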
### 1.4 German credit dataset :
The [German Credit dataset](https://archive.ics.uci.edu/ml/datasets/Statlog+%28German+Credit+Data%29) classifies people into good/bad credit categories. It has 1000 instances and 20 features.
In this dataset, one may observe age/gender playing a significant role in the prediction of the credit risk label. Instances with one gender may be favored while other genders may be discriminated against by ML models.
So, we must be aware of the following: <br>
* Training dataset may not be representative of the true population across age/gender groups
* Even if it is representative of the population, it is not fair to base any decision on applicant's age/gender
Let us get our hands dirty with the dataset. We shall determine the presence of such biases in this dataset, use a preprocessing algorithm to mitigate the bias, and then evaluate model and data fairness.
```
import sys
import copy
import pandas as pd # for tabular data
import matplotlib.pyplot as plt # to plot charts
import seaborn as sns # to make pretty charts
import numpy as np # for math
# # sklearn to work with ML models, splitting the train-test data as well as imported metrics
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier
sys.path.append('../responsible-ai/gender-bias-mitigation/')
from utils import *
%matplotlib inline
```
## 2. Data preparation
#### Load dataset
First, let’s load the [dataset](https://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.data). Column names are as listed in [`german.doc`](https://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.doc).
```
filepath = r'https://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.data'
dataframe = load_dataset(filepath)
dataframe.head()
```
Interesting! The dataframe has discrete numerical values and some encoded representations.
Let us quickly sift through features/attributes in this dataset.
##### Number of attributes/features :
This dataset has a total of 20 (7 numerical + 13 categorical) attributes/features.
##### [Categorical attributes/features](https://en.wikipedia.org/wiki/Categorical_variable):
13 categorical features: 'status', 'credit_history', 'purpose', 'savings', 'employment', 'personal_status', 'other_debtors', 'property', 'installment_plans', 'housing', 'skill_level', 'telephone', 'foreign_worker' <br>
##### Numerical attributes/features :
Seven numerical features: 'month', 'credit_amount', 'investment_as_income_percentage', 'residence_since', 'age', 'number_of_credits' and 'people_liable_for'. <br>
##### [Target variable](https://en.wikipedia.org/wiki/Dependent_and_independent_variables#:~:text=Known%20values%20for%20the%20target,but%20not%20in%20unsupervised%20learning.) :
The credit column represents the target variable in this dataset. It classifies instances into good or bad credit (good credit = 1, bad credit = 2). <br>
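Note that the raw file encodes the label as 1/2, while the rest of this tutorial works with 1.0/0.0 (see the `favorable_label`/`unfavorable_label` cell below). A hypothetical sketch of that remapping, presumably performed inside the `preprocess_dataset` helper:

```python
import pandas as pd

# Hypothetical illustration (assumed column name 'credit'): the raw German
# credit file encodes 1 = good, 2 = bad, while the tutorial uses 1.0 / 0.0.
credit_raw = pd.Series([1, 2, 1, 2], name='credit')
credit = credit_raw.map({1: 1.0, 2: 0.0})
print(credit.tolist())  # [1.0, 0.0, 1.0, 0.0]
```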
#### Nomenclature and properties relevant to Bias Detection and mitigation
Let us now introduce some terms related to `bias detection and mitigation` in the context of this dataset.
##### Favourable & Unfavourable class :
Target values that are considered positive (good) are called the favourable class. The opposite is called the unfavourable class.
##### Protected attributes :
An attribute that partitions a population into groups that should have parity in outcomes. Examples include race, gender, caste, and religion. Protected attributes are not universal but [application specific](https://www.fairwork.gov.au/employee-entitlements/protections-at-work/protection-from-discrimination-at-work). Age and gender are the protected attributes in this dataset. <br>
##### Privileged class & Unprivileged class :
* The group within the protected attribute that holds the advantage (often the majority) is called the privileged class.
* The opposite is called the unprivileged class.<br>
#### Dataset specific preprocessing
Data preprocessing in machine learning is a crucial step that helps enhance the quality of data to promote the extraction of meaningful insights from the data.
For now, let us specify the dataset-specific preprocessing arguments needed to process the dataset, for example: the favourable and unfavourable labels, the protected attributes, and the privileged and unprivileged classes.
```
# To keep it simple, in this tutorial, we shall try to determine whether there is gender bias in the dataset
# and mitigate that.
protected_attribute_names = ['gender']
privileged_classes = [['male']]
# derive the gender attribute from personal_status (you can refer the german.doc)
status_map = {'A91': 'male', 'A93': 'male', 'A94': 'male',
'A92': 'female', 'A95': 'female'}
dataframe['gender'] = dataframe['personal_status'].replace(status_map)
# target variable
label_name = 'credit'
favorable_label = 1.0 # good credit
unfavorable_label = 0.0 # bad credit
categorical_features = ['status', 'credit_history', 'purpose',
'savings', 'employment', 'other_debtors', 'property',
'installment_plans', 'housing', 'skill_level', 'telephone',
'foreign_worker']
features_to_drop = ['personal_status']
# dataset specific preprocessing
dataframe = preprocess_dataset(dataframe, label_name, protected_attribute_names,
privileged_classes, favorable_class=favorable_label,
categorical_features=categorical_features,
features_to_drop=features_to_drop)
dataframe.head()
```
Split the preprocessed dataset into train and test sets, so that we can later evaluate how well the trained model performs on unseen (test) data.
```
train = dataframe.sample(frac=0.7, random_state=0)
test = dataframe.drop(train.index)
```
## 3. Data fairness
### 3.1 Bias Detection
Before creating ML models, one must first analyze and check for biases in the dataset, as mentioned [earlier](#1.2-Bias-detection-:). In this tutorial, we aim to achieve `Statistical Parity`. Statistical parity is achieved when all segments of a protected class (e.g. gender/age) have equal rates of positive outcomes.
#### 3.1.1 Statistical parity difference
`Statistical Parity` is checked by computing the `Statistical Parity Difference (SPD)`: the difference between the rate of favorable outcomes received by the unprivileged group and that of the privileged group. A negative value indicates fewer favorable outcomes for the unprivileged group.
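In symbols, with protected attribute $A$ and outcome $Y$ (where $Y = 1$ is the favourable label), this matches the difference computed in the code below:

$SPD = \Pr(Y = 1 \mid A = \text{unprivileged}) - \Pr(Y = 1 \mid A = \text{privileged})$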
Please look at the code below to get a mathematical idea of SPD. It is OK to skim the following sections and come back later if you prefer a holistic perspective first:
```
# return `True` if the corresponding row satisfies the `condition` and `False` otherwise
def get_condition_vector(X, feature_names, condition=None):
if condition is None:
return np.ones(X.shape[0], dtype=bool)
overall_cond = np.zeros(X.shape[0], dtype=bool)
for group in condition:
group_cond = np.ones(X.shape[0], dtype=bool)
for name, val in group.items():
index = feature_names.index(name)
group_cond = np.logical_and(group_cond, X[:, index] == val)
overall_cond = np.logical_or(overall_cond, group_cond)
return overall_cond
```
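To make the helper concrete, here is a toy run on a single hypothetical `gender` column (the function is repeated so the cell is self-contained; the data is invented for illustration):

```python
import numpy as np

# Toy data: one protected-attribute column, encoded 1 = male, 0 = female.
X_toy = np.array([[1.0], [0.0], [1.0], [0.0], [0.0]])
feature_names_toy = ['gender']

def get_condition_vector(X, feature_names, condition=None):
    if condition is None:
        return np.ones(X.shape[0], dtype=bool)
    overall_cond = np.zeros(X.shape[0], dtype=bool)
    for group in condition:
        group_cond = np.ones(X.shape[0], dtype=bool)
        for name, val in group.items():
            index = feature_names.index(name)
            group_cond = np.logical_and(group_cond, X[:, index] == val)
        overall_cond = np.logical_or(overall_cond, group_cond)
    return overall_cond

# Rows matching the privileged group {'gender': 1} come back True.
mask = get_condition_vector(X_toy, feature_names_toy, condition=[{'gender': 1}])
print(mask.tolist())  # [True, False, True, False, False]
```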
##### Compute the number of positives
```
def get_num_pos_neg(X, y, w, feature_names, label, condition=None):
"""
Returns number of optionally conditioned positives/negatives
"""
y = y.ravel()
cond_vec = get_condition_vector(X, feature_names, condition=condition)
return np.sum(w[np.logical_and(y == label, cond_vec)], dtype=np.float64)
```
##### Compute the number of instances
```
def get_num_instances(X, w, feature_names, condition=None):
cond_vec = get_condition_vector(X, feature_names, condition)
return np.sum(w[cond_vec], dtype=np.float64)
```
##### Compute the rate of favourable result for a given condition
```
# Compute the base rate: Pr(Y = 1) = P / (P + N)
# i.e. the percentage of favourable results for a given condition
def get_base_rate(X, y, w, feature_names, label, condition=None):
return (get_num_pos_neg(X, y, w, feature_names, label, condition=condition)
/ get_num_instances(X, w, feature_names, condition=condition))
```
##### Compute fairness in training data
To compute the fairness of the data using SPD, we need some key inputs. So, let us specify the privileged and unprivileged groups, the protected attributes, and the instance weights for the train dataset.
```
# target value
labels_train = train[label_name].values.copy()
# protected attributes
df_prot = train.loc[:, protected_attribute_names]
protected_attributes = df_prot.values.copy()
privileged_groups = [{'gender': 1}]
unprivileged_groups = [{'gender': 0}]
# equal weights for all classes by default in the train dataset
instance_weights = np.ones_like(train.index, dtype=np.float64)
```
Now let's compute the fairness of the data with respect to the protected attribute.
```
positive_privileged = get_base_rate(protected_attributes, labels_train, instance_weights,
protected_attribute_names,
favorable_label, privileged_groups)
positive_unprivileged = get_base_rate(protected_attributes, labels_train, instance_weights,
protected_attribute_names,
favorable_label, unprivileged_groups)
```
Let's look at favorable results for privileged & unprivileged groups in terms of statistical parity difference.
```
x = ["positive_privileged", "positive_unprivileged"]
y = [positive_privileged, positive_unprivileged]
plt.barh(x, y, color=['green', 'blue'])
for index, value in enumerate(y):
plt.text(value, index, str(round(value, 2)), fontweight='bold')
plt.text(0.2, 0.5, "Statistical parity difference : " + str(
round((positive_unprivileged - positive_privileged), 3)),
bbox=dict(facecolor='white', alpha=0.4), fontweight='bold')
plt.title("Statistical parity difference", fontweight='bold')
plt.show()
```
The privileged group gets 10.8% more positive outcomes in the training dataset. This is bias, and such bias must be mitigated.
```
def statistical_parity_difference(X, y, w, feature_names, label, privileged_groups,
unprivileged_groups):
"""
Compute difference in the metric between unprivileged and privileged groups.
"""
positive_privileged = get_base_rate(X, y, w, feature_names, label, privileged_groups)
positive_unprivileged = get_base_rate(X, y, w, feature_names, label, unprivileged_groups)
return (positive_unprivileged - positive_privileged)
```
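As a self-contained sanity check of the SPD idea (not using the tutorial's helpers, and with invented toy data: gender 1 = privileged, label 1 = favourable):

```python
import numpy as np

# Toy data: 4 privileged rows (3 favourable), 4 unprivileged rows (2 favourable).
gender = np.array([1, 1, 1, 1, 0, 0, 0, 0])
label = np.array([1, 1, 1, 0, 1, 1, 0, 0])

rate_priv = label[gender == 1].mean()    # 3/4 = 0.75
rate_unpriv = label[gender == 0].mean()  # 2/4 = 0.50
spd = rate_unpriv - rate_priv
print(round(spd, 2))  # -0.25  (negative => unprivileged group is disadvantaged)
```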
Let's store the fairness of the data in a variable.
```
original = statistical_parity_difference(protected_attributes,
labels_train, instance_weights,
protected_attribute_names, favorable_label,
privileged_groups, unprivileged_groups)
```
#### 3.1.2 Analyze bias in the dataset
Let's understand how bias occurs in the dataset with respect to the protected attribute.
First, let's calculate the frequency counts for the categories of the protected attribute in the training dataset.
```
# get the condition vector for the privileged group on the given protected attributes
# values are `True` for privileged rows, `False` otherwise
privileged_cond = get_condition_vector(
protected_attributes,
protected_attribute_names,
condition=privileged_groups)
# get the condition vector for the unprivileged group on the given protected attributes
# values are `True` for unprivileged rows, `False` otherwise
unprivileged_cond = get_condition_vector(
protected_attributes,
protected_attribute_names,
condition=unprivileged_groups)
# get the favorable (positive outcome) condition vector
# values are `True` for favorable rows, `False` otherwise
favorable_cond = labels_train.ravel() == favorable_label
# get the unfavorable condition vector
# Values are `True` for the unfavorable values else 'False'
unfavorable_cond = labels_train.ravel() == unfavorable_label
# combination of label and privileged/unprivileged groups
# Positive outcome for privileged group
privileged_favorable_cond = np.logical_and(favorable_cond, privileged_cond)
# Negative outcome for privileged group
privileged_unfavorable_cond = np.logical_and(unfavorable_cond, privileged_cond)
# Positive outcome for unprivileged group
unprivileged_favorable_cond = np.logical_and(favorable_cond, unprivileged_cond)
# Negative outcome for unprivileged group
unprivileged_unfavorable_cond = np.logical_and(unfavorable_cond, unprivileged_cond)
```
We need the total number of instances, as well as the counts of privileged, unprivileged, favorable, and unfavorable instances and their combinations.
```
instance_count = train.shape[0]
privileged_count = np.sum(privileged_cond, dtype=np.float64)
unprivileged_count = np.sum(unprivileged_cond, dtype=np.float64)
favourable_count = np.sum(favorable_cond, dtype=np.float64)
unfavourable_count = np.sum(unfavorable_cond, dtype=np.float64)
privileged_favourable_count = np.sum(privileged_favorable_cond, dtype=np.float64)
privileged_unfavourable_count = np.sum(privileged_unfavorable_cond, dtype=np.float64)
unprivileged_favourable_count = np.sum(unprivileged_favorable_cond, dtype=np.float64)
unprivileged_unfavourable_count = np.sum(unprivileged_unfavorable_cond, dtype=np.float64)
```
Now, let us analyze the above variables and see how the counts are distributed across the protected attribute.
```
x = ["privileged_favourable_count", "privileged_unfavourable_count",
"unprivileged_favourable_count", "unprivileged_unfavourable_count"]
y = [privileged_favourable_count, privileged_unfavourable_count, unprivileged_favourable_count,
unprivileged_unfavourable_count]
plt.barh(x, y, color=['blue', 'green', 'orange', 'brown'])
for index, value in enumerate(y):
plt.text(value, index,
str(value), fontweight='bold')
plt.show()
```
##### Privileged and unprivileged group outcomes
```
labels_privileged = ['male - rated good', 'male - rated bad']
sz_privileged = [privileged_favourable_count, privileged_unfavourable_count]
labels_unprivileged = ['female - rated good', 'female - rated bad']
sz_unpriveleged = [unprivileged_favourable_count, unprivileged_unfavourable_count]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 15))
ax1.pie(sz_privileged, labels=labels_privileged, autopct='%1.1f%%', shadow=True)
ax1.title.set_text('Privileged class outcomes')
ax2.pie(sz_unpriveleged, labels=labels_unprivileged, autopct='%1.1f%%', shadow=True)
ax2.title.set_text('Unprivileged class outcomes')
plt.show()
```
Males form the privileged group, with a 73.5% favourable and 26.5% unfavourable outcome rate.
Females form the unprivileged group, with a 62.7% favourable and 37.3% unfavourable outcome rate. <br>
So, there is bias against the unprivileged group (the privileged group gets 10.8% more positive outcomes).
There may have been insufficient data for certain groups of the gender attribute, resulting in an incomplete representation of these groups in the dataset. We can try to mitigate such bias using a pre-processing mitigation technique.
### 3.2 Bias mitigation
Pre-processing describes the set of data preparation and feature engineering steps before application of machine learning algorithms. Sampling, reweighing and suppression are examples of pre-processing bias mitigation techniques proposed in academic literature[2]. In this tutorial, we will focus on reweighing [3] technique that assigns weights to instances.
#### 3.2.1 Reweighing algorithm
This approach assigns different weights to examples based on their protected-attribute category and outcome, such that bias is removed from the training dataset. The weights are based on frequency counts. Note that this technique works only with classifiers that can handle row-level (instance) weights.
#### Compute the reweighing weights (Equations)
***
1. Privileged group with the favourable outcome : $W_\text{privileged favourable} = \displaystyle\frac{\#\{\text{favourable}\} \times \#\{\text{privileged}\}}{\#\{\text{all}\} \times \#\{\text{privileged favourable}\}}$ <br>
2. Privileged group with the unfavourable outcome : $W_\text{privileged unfavourable} = \displaystyle\frac{\#\{\text{unfavourable}\} \times \#\{\text{privileged}\}}{\#\{\text{all}\} \times \#\{\text{privileged unfavourable}\}}$ <br>
3. Unprivileged group with the favourable outcome : $W_\text{unprivileged favourable} = \displaystyle\frac{\#\{\text{favourable}\} \times \#\{\text{unprivileged}\}}{\#\{\text{all}\} \times \#\{\text{unprivileged favourable}\}}$ <br>
4. Unprivileged group with the unfavourable outcome : $W_\text{unprivileged unfavourable} = \displaystyle\frac{\#\{\text{unfavourable}\} \times \#\{\text{unprivileged}\}}{\#\{\text{all}\} \times \#\{\text{unprivileged unfavourable}\}}$ <br>
***
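A quick toy check that these weights drive SPD to zero (invented data, with the assumed encoding gender 1 = privileged, label 1 = favourable):

```python
import numpy as np

# Toy data, deliberately biased: privileged rate 3/4, unprivileged rate 2/4.
gender = np.array([1, 1, 1, 1, 0, 0, 0, 0])
label = np.array([1, 1, 1, 0, 1, 1, 0, 0])
n = len(label)

w = np.ones(n)
for g in (0, 1):
    for y in (0, 1):
        cond = (gender == g) & (label == y)
        # W = (#label) * (#group) / (#all * #group-with-label), as in the equations above
        w[cond] = (label == y).sum() * (gender == g).sum() / (n * cond.sum())

def weighted_rate(g):
    m = gender == g
    return w[m & (label == 1)].sum() / w[m].sum()

spd = weighted_rate(0) - weighted_rate(1)
print(round(spd, 10))  # 0.0 after reweighing
```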
```
# reweighing weights
weight_privileged_favourable = favourable_count * privileged_count / (instance_count * privileged_favourable_count)
weight_privileged_unfavourable = unfavourable_count * privileged_count / (instance_count * privileged_unfavourable_count)
weight_unprivileged_favourable = favourable_count * unprivileged_count / (instance_count * unprivileged_favourable_count)
weight_unprivileged_unfavourable = unfavourable_count * unprivileged_count / (instance_count * unprivileged_unfavourable_count)
transformed_instance_weights = copy.deepcopy(instance_weights)
transformed_instance_weights[privileged_favorable_cond] *= weight_privileged_favourable
transformed_instance_weights[privileged_unfavorable_cond] *= weight_privileged_unfavourable
transformed_instance_weights[unprivileged_favorable_cond] *= weight_unprivileged_favourable
transformed_instance_weights[unprivileged_unfavorable_cond] *= weight_unprivileged_unfavourable
```
Now that we have the transformed instance weights from the reweighing algorithm, we can check how effective it is at removing bias by recomputing the same metric (statistical parity difference).
```
mitigated = statistical_parity_difference(protected_attributes,
labels_train, transformed_instance_weights,
protected_attribute_names, favorable_label,
privileged_groups, unprivileged_groups)
```
### 3.3 Data Fairness comparison
```
plt.figure(facecolor='#FFFFFF', figsize=(4, 4))
plt.ylim([-0.6, 0.6])
plt.axhline(y=0.0, color='r', linestyle='-')
plt.bar(["Original", "Mitigated"], [original, mitigated], color=["blue", "green"])
plt.ylabel("statistical parity")
plt.title("Before vs After Bias Mitigation", fontsize=15)
y = [original, mitigated]
for index, value in enumerate(y):
if value < 0:
plt.text(index, value - 0.1,
str(round(value, 3)), fontweight='bold', color='red',
bbox=dict(facecolor='red', alpha=0.4))
else:
plt.text(index, value + 0.1,
str(round(value, 3)), fontweight='bold', color='red',
bbox=dict(facecolor='red', alpha=0.4))
plt.grid(None, axis="y")
plt.show()
```
The reweighing algorithm has been very effective: the statistical parity difference is now zero. So we went from a 10.8% advantage for the privileged group to equality in positive outcomes.
Now that we have both original and bias mitigated data, let's evaluate model fairness before and after bias mitigation.
## 4. Model Fairness - on different ML models
Model fairness is a relatively new field in machine learning.
Since predictive ML models have started making their way into industry, including the sensitive medical, insurance, and banking sectors, it has become prudent to implement strategies that ensure the fairness of those models and check for discriminative behavior in their predictions. Several definitions have been proposed [4][5][6] to evaluate model fairness.
In this tutorial, we will use the statistical parity (demographic parity) criterion to evaluate model fairness and detect any discriminative behavior during prediction.
#### Statistical parity (Demographic parity)
As discussed in [sec. 3.1.1](#3.1.1-Statistical-parity-difference), statistical parity states that each segment of a protected class (e.g. gender) should receive the positive outcome at equal rates (the outcome is independent of the protected class). For example, the probability of getting admitted to a college must be independent of gender. We want the prediction of a machine learning model (Ŷ) to be independent of the protected class A:
$P(Ŷ | A=a) = P(Ŷ | A=a’)$, <br>
where a and a' are different sensitive attribute values (for example, gender male vs gender female).
##### Compute fairness of the different ML models
To compute model fairness (SPD), let us specify the privileged and unprivileged groups, the protected attributes, and the instance weights for the test dataset.
```
# protected attribute
df_prot_test = test.loc[:, protected_attribute_names]
protected_attributes_test = df_prot_test.values.copy()
privileged_groups = [{'gender': 1}]
unprivileged_groups = [{'gender': 0}]
# equal weights for all classes by default in the testing dataset
instance_weights_test = np.ones_like(test.index, dtype=np.float64)
```
Before training any ML model, we need to standardize our dataset: when all numerical variables are scaled to a common range, machine learning algorithms converge faster. We use the standardization technique for scaling in this tutorial.
```
# split the features and labels for both train and test data
feature_names = [n for n in train.columns if n not in label_name]
X_train, X_test, Y_train, Y_test = train[feature_names].values.copy(), test[
feature_names].values.copy(), train[label_name].values.copy(), test[label_name].values.copy()
# standardize the inputs
scale_orig = StandardScaler()
X_train = scale_orig.fit_transform(X_train)
X_test = scale_orig.transform(X_test)  # reuse the statistics fitted on the training data
```
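As a quick aside, a scaler should learn its statistics (mean and standard deviation) from the training data only and then reuse them on the test data; a minimal sketch with invented numbers:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_tr = np.array([[0.0], [2.0], [4.0]])  # toy training data: mean 2, std sqrt(8/3)
X_te = np.array([[2.0], [6.0]])         # toy test data

scaler = StandardScaler().fit(X_tr)   # learn statistics from training data only
X_te_scaled = scaler.transform(X_te)  # apply those same statistics to test data
print(np.round(X_te_scaled.ravel(), 2).tolist())  # [0.0, 2.45]
```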
It is important to compare the fairness and performance of several different machine learning algorithms before and after bias mitigation. We shall do that for Logistic Regression, Decision Tree, Random Forest, XGBoost, and SVM classifiers in this tutorial.
```
# prepare list of models
models = []
models.append(('LR', LogisticRegression()))
models.append(('DT', DecisionTreeClassifier(random_state=0)))
models.append(('RF', RandomForestClassifier(random_state=4)))
models.append(('XGB', XGBClassifier(use_label_encoder=False, eval_metric='mlogloss')))
models.append(('SVM', SVC()))
```
Let's compute the performance and model fairness before and after bias mitigation for the above models.
```
ml_models = []
base_accuracy_ml_models = []
base_fariness_ml_models = []
bias_mitigated_accuracy_ml_models = []
bias_mitigated_fairness_ml_models = []
for name, model in models:
# evaluate the base model
base_accuracy, base_predicted = train_model_and_get_test_results(model, X_train, Y_train,
X_test, Y_test,
sample_weight=instance_weights)
# compute SPD for base model
base_fairness = statistical_parity_difference(protected_attributes_test,
base_predicted, instance_weights_test,
protected_attribute_names, favorable_label,
privileged_groups, unprivileged_groups)
# evaluate the mitigated model
bias_mitigated_accuracy, bias_mitigated_predicted = train_model_and_get_test_results(model,
X_train,
Y_train,
X_test,
Y_test,
sample_weight=transformed_instance_weights)
# compute SPD for mitigated model
bias_mitigated_fairness = statistical_parity_difference(protected_attributes_test,
bias_mitigated_predicted,
instance_weights_test,
protected_attribute_names,
favorable_label,
privileged_groups, unprivileged_groups)
ml_models.append(name)
base_accuracy_ml_models.append(base_accuracy)
base_fariness_ml_models.append(base_fairness)
bias_mitigated_accuracy_ml_models.append(bias_mitigated_accuracy)
bias_mitigated_fairness_ml_models.append(bias_mitigated_fairness)
```
##### Graphical Comparison of fairness & performance
```
visualize_model_comparison(ml_models, base_fariness_ml_models, bias_mitigated_fairness_ml_models,
base_accuracy_ml_models, bias_mitigated_accuracy_ml_models)
```
Fig. (a) compares fairness across models: the X-axis shows the ML model names and the Y-axis shows SPD.
Fig. (b) compares model accuracy: the X-axis shows the ML model names and the Y-axis shows accuracy.
It can be observed that classifiers trained on the bias-mitigated data produce less discriminatory results (fairness improved) compared to those trained on the biased data. For example, in Fig. (a), before bias mitigation `LR` classifies future data objects with `-0.25` discrimination (more positive outcomes for the privileged group, i.e. the model is biased against the unprivileged group). After applying the `Reweighing` algorithm on the training dataset, the discrimination drops to `-0.14`. Although some bias in favor of the privileged group remains after reweighing, it is much smaller, and the measurement keeps the ML practitioner informed about the amount of bias so that calculated decisions can be made. Moreover, as Fig. (b) shows, there is no significant drop in accuracy/performance for any algorithm after bias mitigation. This is good, and it makes a strong case for `Bias Detection and Mitigation`.
Well, let us visualize the detailed flow of everything we learned in this tutorial:
```
import cv2
from google.colab.patches import cv2_imshow
img2 = cv2.imread('images/xai-bias-mitigation.png')
cv2_imshow(img2)
```
In short, we can summarize the procedure in the following 4 steps:
1. Prepare the data for analysis
2. Detect bias in the data set
3. Mitigate bias in the data set
4. Observe the fairness of the model before and after bias mitigation
### Summary
In this tutorial, we have tried to give a gentle introduction to gender bias detection and mitigation to enthusiasts of Responsible AI. Although there are many ways to detect and mitigate bias, we have only illustrated one simple way to detect bias and mitigate it with `Reweighing` algorithm in this tutorial. We plan to create and open-source tutorials illustrating other ways of bias detection and mitigation in the future.
### References
[1] [Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, Aram Galstyan. A Survey on Bias and Fairness in Machine Learning. arXiv:1908.09635](https://arxiv.org/abs/1908.09635)<br>
[2] Kamiran F, Calders T (2009a) Classifying without discriminating. In: Proceedings of IEEE IC4 international conference on computer, Control & Communication. IEEE press<br>
[3] [Kamiran, Faisal and Calders, Toon. Data preprocessing techniques for classification without discrimination](https://link.springer.com/content/pdf/10.1007%2Fs10115-011-0463-8.pdf). Knowledge and Information Systems, 33(1):1–33, 2012<br>
[4] Hardt, M., Price, E., and Srebro, N. (2016). “Equality of opportunity in supervised learning,” in Advances in Neural Information Processing Systems, eds D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Barcelona: Curran Associates, Inc.), 3315–3323.<br>
[5] Chouldechova, A. (2017). Fair prediction with disparate impact: a study of bias in recidivism prediction instruments. Big Data 5, 153–163. doi: 10.1089/big.2016.0047<br>
[6] Zafar, M. B., Valera, I., Gomez Rodriguez, M., and Gummadi, K. P. (2017a). “Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment,” in Proceedings of the 26th International Conference on World Wide Web (Perth: International World Wide Web Conferences Steering Committee), 1171–1180. doi: 10.1145/3038912.3052660<br>
[7] https://archive.ics.uci.edu/ml/datasets/Statlog+%28German+Credit+Data%29 <br>
```
from PIL import Image
from matplotlib import image
from os import listdir
from torchvision import transforms, datasets as ds
from torchvision import models
import torchvision as tv
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.optim as optim
import torch.nn.functional as F
```
```
transform = transforms.Compose(
    [
        transforms.Resize(224),  # transforms.Scale is deprecated; Resize is the replacement
        # transforms.RandomCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
    ])  # normalization
train_set = tv.datasets.ImageFolder(root='C:/Users/weidian/Desktop/pic爬虫微点购/train', transform=transform)
train_data_loader = DataLoader(dataset=train_set, batch_size=10, shuffle=True, pin_memory=True)
test_set = tv.datasets.ImageFolder(root='C:/Users/weidian/Desktop/pic爬虫微点购/test', transform=transform)
test_data_loader = DataLoader(dataset=test_set, batch_size=10, shuffle=True, pin_memory=True)
to_pil_image = transforms.ToPILImage()
i = 1
for image, label in train_data_loader:
    # print(label)
    # print(image, label)
    if i > 1:
        break
    # method 2: plt.imshow(ndarray)
    img = image[0]  # plt.imshow() only accepts a 3-D tensor, so use image[0] to drop the batch dimension
    # print(img.shape)
    img = img.numpy()  # convert FloatTensor to ndarray
    img = np.transpose(img, (1, 2, 0))  # move the channel dimension to the end
    # display the image
    plt.imshow(img)
    plt.show()
    i = i + 1
train_set.class_to_idx
# train_data_loader
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
    def __init__(self, num_classes=3):  # number of classes, here 3
        super(Net, self).__init__()
        net = models.vgg16(pretrained=True)  # load VGG16 weights from the pretrained model
        net.classifier = nn.Sequential()  # empty the classifier; we define our own below
        self.features = net  # keep the VGG16 feature layers
        self.classifier = nn.Sequential(  # define our own classifier layers
            nn.Linear(512 * 7 * 7, 512),  # 512 * 7 * 7 is fixed by the VGG16 architecture; the second argument (neuron count) can be tuned
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(512, 128),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
        return x
net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
nums_epoch = 8
for epoch in range(nums_epoch):
    _loss = 0.0
    for i, (inputs, labels) in enumerate(train_data_loader, 0):
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        _loss += loss.item()
        if i % 2000 == 1999:  # print the running loss every 2000 steps
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, _loss / 2000))
            _loss = 0.0
    print('train epoch: {}, loss: {}'.format(epoch, loss.item()))
print('Finished Training')
correct = 0
total = 0
dataiter = iter(train_data_loader)
# dataiter = iter(test_data_loader)
with torch.no_grad():
    for data in dataiter:
        images, labels = data
        print(labels)
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        # print(predicted)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print('Accuracy of the network on the train images: %d %%' % (
    100 * correct / total))
torch.save(net, 'vgg_model.pth')
model = torch.load('vgg_model.pth')  # load the saved model
model = model.to(device)
model.eval()  # switch the model to evaluation mode
img = Image.open("C:/Users/weidian/Desktop/pic爬虫微点购/ts/1.png")  # read the image to predict
img = img.convert("RGB")
img.save("C:/Users/weidian/Desktop/pic爬虫微点购/ts/1.png")
trans = transforms.Compose(
    [transforms.Resize(224),  # transforms.Scale is deprecated; Resize is the replacement
     transforms.ToTensor(),
     ])
img = trans(img)
img = img.unsqueeze(0)  # add a batch dimension; img becomes [1, C, H, W]
print(img.shape)
img = img.to(device)
output = model(img)
prob = F.softmax(output, dim=1)  # prob holds the probabilities of the 3 classes
classes = ['运动鞋', '靴子', '高跟鞋']  # sneakers, boots, high heels
from torch.autograd import Variable
value, predicted = torch.max(output, 1)
pred_class = classes[predicted.item()]
print(pred_class)
```
# Improving a model with Grid Search
In this mini-lab, we'll fit a decision tree model to some sample data. This initial model will overfit heavily. Then we'll use Grid Search to find better parameters for this model, to reduce the overfitting.
First, some imports.
```
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```
### 1. Reading and plotting the data
Now, a function that will help us read the csv file, and plot the data.
```
def load_pts(csv_name):
data = np.asarray(pd.read_csv(csv_name, header=None))
X = data[:,0:2]
y = data[:,2]
plt.scatter(X[np.argwhere(y==0).flatten(),0], X[np.argwhere(y==0).flatten(),1],s = 50, color = 'blue', edgecolor = 'k')
plt.scatter(X[np.argwhere(y==1).flatten(),0], X[np.argwhere(y==1).flatten(),1],s = 50, color = 'red', edgecolor = 'k')
plt.xlim(-2.05,2.05)
plt.ylim(-2.05,2.05)
plt.grid(False)
plt.tick_params(
axis='x',
which='both',
bottom='off',
top='off')
return X,y
X, y = load_pts('data.csv')
plt.show()
```
### 2. Splitting our data into training and testing sets
```
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, make_scorer
#Fixing a random seed
import random
random.seed(42)
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```
### 3. Fitting a Decision Tree model
```
from sklearn.tree import DecisionTreeClassifier
# Define the model (with default hyperparameters)
clf = DecisionTreeClassifier(random_state=42)
# Fit the model
clf.fit(X_train, y_train)
# Make predictions
train_predictions = clf.predict(X_train)
test_predictions = clf.predict(X_test)
```
Now let's plot the model, and find the testing f1_score, to see how we did.
The following function will help us plot the model.
```
def plot_model(X, y, clf):
plt.scatter(X[np.argwhere(y==0).flatten(),0],X[np.argwhere(y==0).flatten(),1],s = 50, color = 'blue', edgecolor = 'k')
plt.scatter(X[np.argwhere(y==1).flatten(),0],X[np.argwhere(y==1).flatten(),1],s = 50, color = 'red', edgecolor = 'k')
plt.xlim(-2.05,2.05)
plt.ylim(-2.05,2.05)
plt.grid(False)
plt.tick_params(
axis='x',
which='both',
bottom=False,
top=False)
r = np.linspace(-2.1,2.1,300)
s,t = np.meshgrid(r,r)
s = np.reshape(s,(np.size(s),1))
t = np.reshape(t,(np.size(t),1))
h = np.concatenate((s,t),1)
z = clf.predict(h)
s = s.reshape((np.size(r),np.size(r)))
t = t.reshape((np.size(r),np.size(r)))
z = z.reshape((np.size(r),np.size(r)))
plt.contourf(s,t,z,colors = ['blue','red'],alpha = 0.2,levels = range(-1,2))
if len(np.unique(z)) > 1:
plt.contour(s,t,z,colors = 'k', linewidths = 2)
plt.show()
plot_model(X, y, clf)
print('The Training F1 Score is', f1_score(train_predictions, y_train))
print('The Testing F1 Score is', f1_score(test_predictions, y_test))
```
Whoa, that's some heavy overfitting! We can see it not just in the graph, but also in the gap between the high training score (1.0) and the low testing score (0.7). Let's see if we can find better hyperparameters for this model. We'll use grid search for this.
### 4. (TODO) Use grid search to improve this model.
Here, we'll go through the following steps:
1. First, define some parameters to perform grid search on. We suggest playing with `max_depth`, `min_samples_leaf`, and `min_samples_split`.
2. Make a scorer for the model using `f1_score`.
3. Perform grid search on the classifier, using the parameters and the scorer.
4. Fit the data to the new classifier.
5. Plot the model and find the f1_score.
6. If the model is not much better, try changing the ranges for the parameters and fit it again.
**_Hint:_ If you're stuck and would like to see a working solution, check the solutions notebook in this same folder.**
```
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
clf = DecisionTreeClassifier(random_state=42)
# TODO: Create the parameters list you wish to tune.
parameters = {'max_depth':[2, 4, 6], 'min_samples_leaf': [1, 3, 5], 'min_samples_split':[2, 4, 6]}
# TODO: Make an f1_score scoring object.
scorer = make_scorer(f1_score)
# TODO: Perform grid search on the classifier using 'scorer' as the scoring method.
grid_obj = GridSearchCV(clf, parameters, scoring = scorer)
# TODO: Fit the grid search object to the training data and find the optimal parameters.
grid_fit = grid_obj.fit(X_train, y_train)
# TODO: Get the estimator.
best_clf = grid_fit.best_estimator_
# Fit the new model.
best_clf.fit(X_train, y_train)
# Make predictions using the new model.
best_train_predictions = best_clf.predict(X_train)
best_test_predictions = best_clf.predict(X_test)
# Calculate the f1_score of the new model.
print('The training F1 Score is', f1_score(best_train_predictions, y_train))
print('The testing F1 Score is', f1_score(best_test_predictions, y_test))
# Plot the new model.
plot_model(X, y, best_clf)
# Let's also explore what parameters ended up being used in the new model.
best_clf
```
<img src="img/saturn_logo.png" width="300" />
# Introduction to Dask
Before we get into too much complexity, let's talk about the essentials of Dask.
## What is Dask?
Dask is an open-source framework that enables parallelization of Python code. This can be applied to all kinds of Python use cases, not just machine learning. Dask is designed to work well on single-machine setups and on multi-machine clusters. You can use Dask with pandas, NumPy, scikit-learn, and other Python libraries - for our purposes, we'll focus on how you might use it with PyTorch. If you want to learn more about the other areas where Dask can be useful, there's a [great website explaining all of that](https://dask.org/).
## Why Parallelize?
For our use case, there are a couple of areas where Dask parallelization might be useful for making our work faster and better.
* Loading and handling large datasets (especially if they are too large to hold in memory)
* Running time or computation heavy tasks at the same time, quickly
## Delaying Tasks
Delaying a task with Dask queues up a set of transformations or calculations so that it's ready to run later, in parallel. This is what's known as "lazy" evaluation - it won't evaluate the requested computations until explicitly told to. This differs from regular functions, which compute immediately when called. Many common and handy functions are ported to be native in Dask, which means they will be lazy (delayed computation) without you having to ask.
However, sometimes you will have complicated custom code that is written in pandas, scikit-learn, or even base python, that isn't natively available in Dask. Other times, you may just not have the time or energy to refactor your code into Dask, if edits are needed to take advantage of native Dask elements.
If this is the case, you can decorate your functions with `@dask.delayed`, which will manually establish that the function should be lazy and not evaluate until you tell it to. You tell it with the methods `.compute()` or `.persist()`, described in the next section. We'll use `@dask.delayed` several times in this workshop to make PyTorch tasks easy to parallelize.
### Example 1
```
def exponent(x, y):
'''Define a basic function.'''
return x ** y
# Function returns result immediately when called
exponent(4, 5)
import dask
@dask.delayed
def lazy_exponent(x, y):
'''Define a lazily evaluating function'''
return x ** y
# Function returns a delayed object, not computation
lazy_exponent(4, 5)
# This will now return the computation
lazy_exponent(4,5).compute()
```
### Example 2
We can take this knowledge and expand it - because our lazy function returns an object, we can assign it and then chain it together in different ways later.
Here we return a delayed value from the first function, and call it x. Then we pass x to the function a second time, and call it y. Finally, we multiply x and y to produce z.
```
x = lazy_exponent(4, 5)
y = lazy_exponent(x, 2)
z = x * y
z
z.visualize(rankdir="LR")
z.compute()
```
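One payoff of queuing work lazily is that independent delayed tasks can run in parallel when we finally compute. Here's a minimal sketch (the `slow_square` function is made up for illustration; it uses Dask's default threaded scheduler, so no cluster is needed):

```python
import time

import dask

@dask.delayed
def slow_square(x):
    '''Sleep briefly, then square, to simulate a heavy task.'''
    time.sleep(0.5)
    return x * x

# Queue four independent tasks...
tasks = [slow_square(i) for i in range(4)]

# ...then run them all with a single compute() call.
start = time.time()
results = dask.compute(*tasks)
print(results)  # (0, 1, 4, 9)
print('elapsed:', round(time.time() - start, 2))  # typically ~0.5s, not 4 x 0.5s
```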
***
## Persist vs Compute
How should we instruct our computer to run the computations we have queued up lazily? We have two choices: `.persist()` and `.compute()`.
First, remember that we have several machines working for us right now: our Jupyter instance is running on one, and our cluster of worker machines is running alongside it.
### Compute
If we use `.compute()`, we are asking Dask to take all the computations and adjustments to the data that we have queued up, and run them, and bring it all to the surface here, in Jupyter.
That means that if the object was distributed, it is converted into a local object here and now. If it's a Dask Dataframe, when we call `.compute()`, we're saying "Run the transformations we've queued, and convert this into a pandas dataframe immediately."
### Persist
If we use `.persist()`, we are asking Dask to take all the computations and adjustments to the data that we have queued up, and run them, but then the object is going to remain distributed and will live on the cluster, not on the Jupyter instance.
So when we do this with a Dask Dataframe, we are telling our cluster "Run the transformations we've queued, and leave this as a distributed Dask Dataframe."
So, if you want to process all the delayed tasks you've applied to a Dask object, either of these methods will do it. The difference is where your object will live at the end.
***
### Example: Distributed Data Objects
When we use a Dask Dataframe object, we can see the effect of `.persist()` and `.compute()` in practice.
```
import dask
import dask.dataframe as dd
df = dask.datasets.timeseries()
df.npartitions
```
So our Dask Dataframe has 30 partitions. If we run some computations on this dataframe, we still get back an object with an `npartitions` attribute that we can check. We'll filter it, then compute some summary statistics with a groupby.
```
df2 = df[df.y > 0]
df3 = df2.groupby('name').x.std()
print(type(df3))
df3.npartitions
```
Now, we have reduced the object down to a Series, rather than a dataframe, so it changes the partition number.
We can `repartition` the Series, if we want to!
```
df4 = df3.repartition(npartitions=3)
df4.npartitions
```
What will happen if we use `.persist()` or `.compute()` on these objects?
As we can see below, `df4` is a Dask Series with 161 queued tasks and 3 partitions. We can run our two different computation commands on the same object and see the different results.
```
df4
%%time
df4.persist()
```
So, what changed when we ran `.persist()`? Notice that we went from 161 tasks at the bottom of the screen to just 3. That indicates that there's one task for each partition.
Now, let's try `.compute()`.
```
%%time
df4.compute().head()
```
We get back a pandas Series, not a Dask object at all.
***
## Submit to Cluster
To make this all work in a distributed fashion, we need to understand how we send instructions to our cluster. When we use the `@dask.delayed` decorator, we queue up some work and put it in a list, ready to be run. So how do we send it to the workers and explain what we want them to do?
We use the `distributed` module from Dask to make this work. We connect to our cluster (as you saw in [Notebook 1](01-getting-started.ipynb)), and then we'll use some commands to send instructions.
```
from dask_saturn import SaturnCluster
from dask.distributed import Client
cluster = SaturnCluster()
client = Client(cluster)
from dask_saturn.core import describe_sizes
describe_sizes()
```
## Sending Tasks
Now we have created the object `client`. This is the handle we'll use to communicate with our cluster, for whatever commands we want to send! We'll use a few methods to do this: `.submit()` and `.map()`.
* `.submit()` lets us send one task to the cluster, to be run once on whatever worker is free.
* `.map()` lets us send lots of tasks, which will be disseminated to workers in the most efficient way.
There's also `.run()` which you can use to send one task to EVERY worker on the cluster simultaneously. This is only used for small utility tasks, however - like installing a library or collecting diagnostics.
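As an illustration of `.run()`, here's a quick diagnostic that asks every worker for its process id - a sketch that spins up a small in-process `Client` just so it runs standalone (in this workshop you would reuse the `client` connected to your Saturn cluster):

```python
import os

from dask.distributed import Client

client = Client(processes=False)  # tiny local client, for illustration only

# One call, executed on every worker; returns {worker_address: result}
worker_pids = client.run(os.getpid)
print(worker_pids)

client.close()
```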
### map Example
For example, you can use `.map()` in this way:
`futures = client.map(function, list_of_inputs)`
This takes our function, maps it over all the inputs, and then these tasks are distributed to the cluster workers. Note: they still won't actually compute yet!
Let's try an example. Recall our `lazy_exponent` function from earlier. We are going to alter it so that it accepts its inputs as a single list, then we can use it with `.map()`.
```
@dask.delayed
def lazy_exponent(args):
x,y = args
'''Define a lazily evaluating function'''
return x ** y
inputs = [[1,2], [3,4], [5,6]]
example_future = client.map(lazy_exponent, inputs)
```
***
## Processing Results
We have one more step before we use `.compute()`, which is `.gather()`. This adds one more instruction to the big delayed job we're establishing: retrieving the results from all of our jobs. It too will sit tight until we finally say `.compute()`.
### gather Example
```
futures_gathered = client.gather(example_future)
```
It may help to think of all the work as instructions in a list. So far we have told our cluster: "map our delayed function over this list of inputs, and pass the resulting tasks to the workers", then "gather up the results of those tasks, and bring them back". But the one thing we haven't said is "OK, now begin to process all these instructions"! That's what `.compute()` will do. For us this looks like:
```
futures_computed = client.compute(futures_gathered, sync=False)
```
We can investigate the results, and use a small list comprehension to return them for later use.
```
futures_computed
futures_computed[0].result()
results = [x.result() for x in futures_computed]
results
```
Now we have the background knowledge we need to move on to running PyTorch jobs!
* If you want to do inference, go to [Notebook 3](03-single-inference.ipynb).
* If you want to do training/transfer learning, go to [Notebook 5](05-transfer-prepro.ipynb).
### Helpful reference links:
* https://distributed.dask.org/en/latest/client.html
* https://distributed.dask.org/en/latest/manage-computation.html
```
import matplotlib.pyplot as plt
import numpy as np
import pickle
from skimage.segmentation import slic
import scipy.ndimage
import scipy.spatial
import torch
from torchvision import datasets
import sys
sys.path.append("../")
from chebygin import ChebyGIN
from extract_superpixels import process_image
from graphdata import comput_adjacency_matrix_images
from train_test import load_save_noise
from utils import list_to_torch, data_to_device, normalize_zero_one
```
# MNIST-75sp
```
data_dir = '../data'
checkpoints_dir = '../checkpoints'
device = 'cuda'
# Load images using standard PyTorch Dataset
data = datasets.MNIST(data_dir, train=False, download=True)
images = data.test_data.numpy()
targets = data.test_labels
# mean and std computed for superpixel features
# features are 3 pixel intensities and 2 coordinates (x,y)
# 3 pixel intensities because we replicate mean pixel value 3 times to test on colored MNIST images
mn = torch.tensor([0.11225057, 0.11225057, 0.11225057, 0.44206527, 0.43950436]).view(1, 1, -1)
sd = torch.tensor([0.2721889, 0.2721889, 0.2721889, 0.2987583, 0.30080357]).view(1, 1, -1)
class SuperpixelArgs():
def __init__(self):
self.n_sp = 75
self.compactness = 0.25
self.split = 'test'
self.dataset = 'mnist'
img_size = images.shape[1]
noises = load_save_noise('../data/mnist_75sp_noise.pt', (images.shape[0], 75))
color_noises = load_save_noise('../data/mnist_75sp_color_noise.pt', (images.shape[0], 75, 3))
n_images = images.shape[0]
noise_levels = np.array([0.4, 0.6])
def acc(pred):
return torch.mean((torch.stack(pred) == targets[:len(pred)]).float()).item() * 100
# This function takes a single 28x28 MNIST image, model object, and std of noise added to node features,
# performs all necessary preprocessing (including superpixels extraction) and returns predictions
def test(model, img, index, noise_std, colornoise_std, show_img=False):
sp_intensity, sp_coord, sp_order, superpixels = process_image((img, 0, 0, SuperpixelArgs(), False, False))
#assert np.
if show_img:
sz = img.shape
plt.figure(figsize=(20,5))
plt.subplot(141)
plt.imshow(img / 255., cmap='gray')
plt.title('Input MNIST image')
img_sp = np.zeros((sz[0], sz[1]))
img_noise = np.zeros((sz[0], sz[1], 3))
img_color_noise = np.zeros((sz[0], sz[1], 3))
for sp_intens, sp_index in zip(sp_intensity, sp_order):
mask = superpixels == sp_index
x = (sp_intens - mn[0, 0, 0].item()) / sd[0, 0, 0].item()
img_sp[mask] = x
img_noise[mask] = x + noises[index, sp_index] * noise_std
img_color_noise[mask] = x + color_noises[index, sp_index] * colornoise_std
plt.subplot(142)
plt.imshow(normalize_zero_one(img_sp), cmap='gray')
plt.title('Superpixels of the image')
plt.subplot(143)
plt.imshow(normalize_zero_one(img_noise), cmap='gray')
plt.title('Noisy superpixels')
plt.subplot(144)
plt.imshow(normalize_zero_one(img_color_noise))
plt.title('Noisy and colored superpixels')
plt.show()
sp_coord = sp_coord / img_size
N_nodes = sp_intensity.shape[0]
mask = torch.ones(1, N_nodes, dtype=torch.uint8)
x = (torch.from_numpy(np.pad(np.concatenate((sp_intensity, sp_coord), axis=1),
((0, 0), (2, 0)), 'edge')).unsqueeze(0) - mn) / sd
A = torch.from_numpy(comput_adjacency_matrix_images(sp_coord)).float().unsqueeze(0)
y, other_outputs = model(data_to_device([x, A, mask, -1, {'N_nodes': torch.zeros(1, 1) + N_nodes}],
device))
y_clean = torch.argmax(y).data.cpu()
alpha_clean = other_outputs['alpha'][0].data.cpu() if 'alpha' in other_outputs else []
x_noise = x.clone()
x_noise[:, :, :3] += noises[index, :N_nodes].unsqueeze(0).unsqueeze(2) * noise_std
y, other_outputs = model(data_to_device([x_noise, A, mask, -1, {'N_nodes': torch.zeros(1, 1) + N_nodes}],
device))
y_noise = torch.argmax(y).data.cpu()
alpha_noise = other_outputs['alpha'][0].data.cpu() if 'alpha' in other_outputs else []
x_noise = x.clone()
x_noise[:, :, :3] += color_noises[index, :N_nodes] * colornoise_std
y, other_outputs = model(data_to_device([x_noise, A, mask, -1, {'N_nodes': torch.zeros(1, 1) + N_nodes}],
device))
y_colornoise = torch.argmax(y).data.cpu()
alpha_color_noise = other_outputs['alpha'][0].data.cpu() if 'alpha' in other_outputs else []
return y_clean, y_noise, y_colornoise, alpha_clean, alpha_noise, alpha_color_noise
# This function returns predictions for the entire clean and noise test sets
def get_predictions(model_path):
state = torch.load(model_path)
args = state['args']
model = ChebyGIN(in_features=5,
out_features=10,
filters=args.filters,
K=args.filter_scale,
n_hidden=args.n_hidden,
aggregation=args.aggregation,
dropout=args.dropout,
readout=args.readout,
pool=args.pool,
pool_arch=args.pool_arch)
model.load_state_dict(state['state_dict'])
model = model.eval().to(device)
#print(model)
# Get predictions
pred, pred_noise, pred_colornoise = [], [], []
alpha, alpha_noise, alpha_colornoise = [], [], []
for index, img in enumerate(images):
y = test(model, img, index, noise_levels[0], noise_levels[1], index == 0)
pred.append(y[0])
pred_noise.append(y[1])
pred_colornoise.append(y[2])
alpha.append(y[3])
alpha_noise.append(y[4])
alpha_colornoise.append(y[5])
if len(pred) % 1000 == 0:
print('{}/{}, acc clean={:.2f}%, acc noise={:.2f}%, acc color noise={:.2f}%'.format(len(pred),
n_images,
acc(pred),
acc(pred_noise),
acc(pred_colornoise)))
return pred, pred_noise, pred_colornoise, alpha, alpha_noise, alpha_colornoise
```
## Global pooling model
```
pred = get_predictions('%s/checkpoint_mnist-75sp_820601_epoch30_seed0000111.pth.tar' % checkpoints_dir)
```
## Visualize heat maps
```
with open('../checkpoints/mnist-75sp_alpha_WS_test_seed111_orig.pkl', 'rb') as f:
global_pool_attn_orig = pickle.load(f)
with open('../checkpoints/mnist-75sp_alpha_WS_test_seed111_noisy.pkl', 'rb') as f:
global_pool_attn_noise = pickle.load(f)
with open('../checkpoints/mnist-75sp_alpha_WS_test_seed111_noisy-c.pkl', 'rb') as f:
global_pool_attn_colornoise = pickle.load(f)
# Load precomputed superpixels to have the order of superpixels consistent with those in global_pool_attn_orig
with open('../data/mnist_75sp_test.pkl', 'rb') as f:
test_data = pickle.load(f)[1]
with open('../data/mnist_75sp_test_superpixels.pkl', 'rb') as f:
test_superpixels = pickle.load(f)
# Get ids of the first test sample for labels from 0 to 9
ind = []
labels_added = set()
for i, lbl in enumerate(data.test_labels.numpy()):
if lbl not in labels_added:
ind.append(i)
labels_added.add(lbl)
ind_sorted = []
for i in np.argsort(data.test_labels):
if i in ind:
ind_sorted.append(i)
sz = data.test_data.shape
images_sp, images_noise, images_color_noise, images_heat, images_heat_noise, images_heat_noise_color = [], [], [], [], [], []
for i in ind_sorted:
# sp_intensity, sp_coord, sp_order, superpixels = process_image((images[i], 0, 0, SuperpixelArgs(), False, False))
sp_intensity, sp_coord, sp_order = test_data[i]
superpixels = test_superpixels[i]
n_sp = sp_intensity.shape[0]
noise, color_noise = noises[i, :n_sp], color_noises[i, :n_sp]
img = np.zeros((sz[1], sz[1]))
img_noise = np.zeros((sz[1], sz[1], 3))
img_color_noise = np.zeros((sz[1], sz[1], 3))
img_heat = np.zeros((sz[1], sz[1]))
img_heat_noise = np.zeros((sz[1], sz[1]))
img_heat_noise_color = np.zeros((sz[1], sz[1]))
for j, (sp_intens, sp_index) in enumerate(zip(sp_intensity, sp_order)):
mask = superpixels == sp_index
x = (sp_intens - mn[0, 0, 0].item()) / sd[0, 0, 0].item()
img[mask] = x
img_noise[mask] = x + noise[sp_index] * noise_levels[0]
img_color_noise[mask] = x + color_noise[sp_index] * noise_levels[1]
img_heat[mask] = global_pool_attn_orig[i][j]
img_heat_noise[mask] = global_pool_attn_noise[i][j]
img_heat_noise_color[mask] = global_pool_attn_colornoise[i][j]
images_sp.append(img)
images_noise.append(img_noise)
images_color_noise.append(img_color_noise)
images_heat.append(img_heat)
images_heat_noise.append(img_heat_noise)
images_heat_noise_color.append(img_heat_noise_color)
for fig_id, img_set in enumerate([images_sp, images_noise, images_color_noise,
images_heat, images_heat_noise, images_heat_noise_color]):
fig = plt.figure(figsize=(15, 6))
n_cols = 5
n_rows = 2
for i in range(n_rows):
for j in range(n_cols):
index = i*n_cols + j
ax = fig.add_subplot(n_rows, n_cols, index + 1)
if fig_id in [0]:
ax.imshow(img_set[index], cmap='gray')
else:
ax.imshow(normalize_zero_one(img_set[index]))
ax.axis('off')
plt.subplots_adjust(hspace=0.1, wspace=0.1)
plt.show()
```
## Models with attention
```
# General function to visualize attention coefficients alpha for models with attention
def visualize(alpha, alpha_noise, alpha_colornoise):
# Get ids of the first test sample for labels from 0 to 9
ind = []
labels_added = set()
for i, lbl in enumerate(data.test_labels.numpy()):
if lbl not in labels_added:
ind.append(i)
labels_added.add(lbl)
ind_sorted = []
for i in np.argsort(data.test_labels):
if i in ind:
ind_sorted.append(i)
sz = data.test_data.shape
images_sp, images_noise, images_color_noise, images_attn, images_attn_noise, images_attn_noise_color = [], [], [], [], [], []
for i in ind_sorted:
sp_intensity, sp_coord, sp_order, superpixels = process_image((images[i], 0, 0, SuperpixelArgs(), False, False))
n_sp = sp_intensity.shape[0]
noise, color_noise = noises[i, :n_sp], color_noises[i, :n_sp]
img = np.zeros((sz[1], sz[1]))
img_noise = np.zeros((sz[1], sz[1], 3))
img_color_noise = np.zeros((sz[1], sz[1], 3))
img_attn = np.zeros((sz[1], sz[1]))
img_attn_noise = np.zeros((sz[1], sz[1]))
img_attn_noise_color = np.zeros((sz[1], sz[1]))
for sp_intens, sp_index in zip(sp_intensity, sp_order):
mask = superpixels == sp_index
x = (sp_intens - mn[0, 0, 0].item()) / sd[0, 0, 0].item()
img[mask] = x
img_noise[mask] = x + noise[sp_index] * noise_levels[0]
img_color_noise[mask] = x + color_noise[sp_index] * noise_levels[1]
img_attn[mask] = alpha[i][0, sp_index].item()
img_attn_noise[mask] = alpha_noise[i][0, sp_index].item()
img_attn_noise_color[mask] = alpha_colornoise[i][0, sp_index].item()
images_sp.append(img)
images_noise.append(img_noise)
images_color_noise.append(img_color_noise)
images_attn.append(img_attn)
images_attn_noise.append(img_attn_noise)
images_attn_noise_color.append(img_attn_noise_color)
for fig_id, img_set in enumerate([images_sp, images_noise, images_color_noise,
images_attn, images_attn_noise, images_attn_noise_color]):
fig = plt.figure(figsize=(15, 6))
n_cols = 5
n_rows = 2
for i in range(n_rows):
for j in range(n_cols):
index = i*n_cols + j
ax = fig.add_subplot(n_rows, n_cols, index + 1)
if fig_id in [0]:
ax.imshow(img_set[index], cmap='gray')
else:
ax.imshow(normalize_zero_one(img_set[index]))
ax.axis('off')
plt.subplots_adjust(hspace=0.1, wspace=0.1)
plt.show()
```
## Weakly-supervised attention model
```
pred, pred_noise, pred_colornoise, alpha, alpha_noise, alpha_colornoise = get_predictions('%s/checkpoint_mnist-75sp_065802_epoch30_seed0000111.pth.tar' % checkpoints_dir)
visualize(alpha, alpha_noise, alpha_colornoise)
```
## Unsupervised attention model
```
pred, pred_noise, pred_colornoise, alpha, alpha_noise, alpha_colornoise = get_predictions('%s/checkpoint_mnist-75sp_330394_epoch30_seed0000111.pth.tar' % checkpoints_dir)
visualize(alpha, alpha_noise, alpha_colornoise)
```
## Supervised attention model
```
pred, pred_noise, pred_colornoise, alpha, alpha_noise, alpha_colornoise = get_predictions('%s/checkpoint_mnist-75sp_139255_epoch30_seed0000111.pth.tar' % checkpoints_dir)
visualize(alpha, alpha_noise, alpha_colornoise)
```
# View main analysis
This notebook provides a view into a snapshot of the SpikeForest analysis. A snapshot URL may be obtained from the "Archive" section of the website or it may be created offline using the spikeforest Python package.
Because this notebook is checked into the git repo, it is a good idea to make a working copy before running or modifying it. If you do modify and push back to the repo, please clear the outputs first.
```
# Imports
from mountaintools import client as mt
from spikeforest import SFMdaRecordingExtractor, SFMdaSortingExtractor
# from spikeforest import MainAnalysisView
import spikeforestwidgets as SFW
import vdomr as vd
import numpy as np
import pandas as pd
# Load the analysis snapshot object
# You can obtain a snapshot URL from the "Archive" section of the website
# or you can use a path to a local file
snapshot_path = 'sha1://d0eb11774305a926e75ad232e4a6b4a54ffed4b2/analysis.json'
# Configure mountaintools to download from the public spikeforest kachery
mt.configDownloadFrom('spikeforest.public')
A = mt.loadObject(path=snapshot_path)
class MainAnalysisView():
def __init__(self, obj: dict):
self._obj = obj
self._metric = 'accuracy' # accuracy, precision, recall
self._mode = 'average' # average or count
self._snr_threshold = 8
self._metric_threshold = 0.8
def mainTable(self):
A = self._obj
snr_threshold = 8
sorters = A['Sorters']
table_rows = []
for sset in A['StudySets']:
# display(vd.h5(sset['name']))
for study in sset['studies']:
sar = self._find_study_analysis_result(study['name'])
assert sar
row = dict(
study=study['name']
)
for sr in sar['sortingResults']:
sorter_name = sr['sorterName']
SRs = self._find_sorting_results(study['name'], sorter_name)
if len(SRs) > 0:
if self._mode == 'average':
val = self._compute_average(sar, sr)
elif self._mode == 'count':
val = self._compute_count(sar, sr)
else:
val = 0
num_missing = self._count_missing(sr)
if num_missing > 0:
val = '{}*'.format(round(val, 2))
else:
val = '{}'.format(round(val, 2))
else:
val = ''
row[sorter_name] = val
table_rows.append(row)
df = pd.DataFrame(table_rows)
df = df[['study'] + [sorter['name'] for sorter in sorters]]
return df
def setMetric(self, metric: str):
self._metric = metric
def setMode(self, mode: str):
self._mode = mode
def _compute_average(self, sar: dict, sorting_result:dict):
snr_threshold = self._snr_threshold
snrs = sar['trueSnrs']
if self._metric == 'accuracy':
x = sorting_result['accuracies']
elif self._metric == 'precision':
x = sorting_result['precisions']
elif self._metric == 'recall':
x = sorting_result['recalls']
else:
raise Exception('Invalid metric: {}'.format(self._metric))
x_to_use = [x[i] for i in range(len(x)) if snrs[i] is not None and snrs[i] >= snr_threshold]
x_to_use = [x for x in x_to_use if x is not None]
if x_to_use:
return np.mean(x_to_use)
else:
return 0
def _compute_count(self, sar: dict, sorting_result:dict):
metric_threshold = self._metric_threshold
if self._metric == 'accuracy':
x = sorting_result['accuracies']
elif self._metric == 'precision':
x = sorting_result['precisions']
elif self._metric == 'recall':
x = sorting_result['recalls']
else:
raise Exception('Invalid metric: {}'.format(self._metric))
x_to_use = [x[i] for i in range(len(x)) if x[i] is not None and x[i] >= metric_threshold]
return len(x_to_use)
def _find_sorting_results(self, study_name: str, sorter_name: str):
return [SR for SR in self._obj['SortingResults'] if (SR['studyName'] == study_name) and (SR['sorterName'] == sorter_name)]
def _find_study_analysis_result(self, study_name: str):
A = self._obj
for x in A['StudyAnalysisResults']:
if x['studyName'] == study_name:
return x
def _count_missing(self, sorting_result: dict):
return len([x for x in sorting_result['accuracies'] if x is None])
V = MainAnalysisView(A)
V.setMode('average')
V.setMetric('accuracy')
display(V.mainTable())
# James: here is how to superimpose an updated analysis on top of an existing one:
from_website = mt.loadObject(path='sha1://d0eb11774305a926e75ad232e4a6b4a54ffed4b2/analysis.json')
update = mt.loadObject(path='key://pairio/spikeforest/test1.json')
sorter_names_in_update = [s['name'] for s in update['Sorters']]
for sr in from_website['SortingResults']:
if sr['sorterName'] not in sorter_names_in_update:
update['SortingResults'].append(sr)
for sar in update['StudyAnalysisResults']:
sarW = find_study_analysis_result(from_website, sar['studyName'])
for sr in sarW['sortingResults']:
if sr['sorterName'] not in sorter_names_in_update:
sar['sortingResults'].append(sr)
for sorter in from_website['Sorters']:
if sorter['name'] not in sorter_names_in_update:
update['Sorters'].append(sorter)
A=update
# OLD INFO
# An example command for James:
# > ./assemble_website_data.py --output_ids hybrid_janelia_irc,paired_kampff_irc --dest_key_path output.json
```
```
import cv2
from pathlib import Path
from random import *
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1000)
physical_devices = tf.config.experimental.list_physical_devices('GPU')
if len(physical_devices) > 0:
tf.config.experimental.set_memory_growth(physical_devices[0], True)
result = list(file.absolute().as_posix() for file in list(Path("C:/Users/Moh.Massoud/Desktop/test/data/test").rglob("*.[pP][nN][gG]") ) )
test_images = []
for img_path in result:
img = cv2.imread(img_path, 0)
resized = cv2.resize(img, (224,224))
test_images.append(resized)
test_images = np.asanyarray(test_images)
test_labels = np.asanyarray(list(int(name[:62][-1]) for name in result))
test_images = np.expand_dims(test_images, axis=3)
result = list(file.absolute().as_posix() for file in list(Path("C:/Users/Moh.Massoud/Desktop/test/data/train").rglob("*.[pP][nN][gG]") ) )
train_images = []
for img_path in result:
img = cv2.imread(img_path, 0)
resized = cv2.resize(img, (224,224))
train_images.append(resized)
train_images = np.asanyarray(train_images)
train_labels = np.asanyarray(list(int(name[:63][-1]) for name in result))
train_images = np.expand_dims(train_images, axis=3)
model = keras.Sequential([
keras.layers.Conv2D(filters=96, input_shape=(224,224,1), kernel_size=(11,11), strides=(4,4), padding='valid', activation='relu'),
keras.layers.MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid'),
keras.layers.Conv2D(filters=256, kernel_size=(11,11), strides=(1,1), padding='valid', activation='relu'),
keras.layers.MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid'),
keras.layers.Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding='valid', activation='relu'),
keras.layers.Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding='valid', activation='relu'),
keras.layers.Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), padding='valid', activation='relu'),
keras.layers.MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid'),
keras.layers.Flatten(),
keras.layers.Dense(4096, activation='relu'),
#keras.layers.Dropout(0.4),
keras.layers.Dense(4096, activation='relu'),
#keras.layers.Dropout(0.4),
keras.layers.Dense(1000, activation='relu'),
#keras.layers.Dropout(0.4),
keras.layers.Dense(2, activation='softmax')
])
model.compile(optimizer='adamax',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5)
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
model.save('DrowsinessDetectionAlexNet.h5')
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array, true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(predicted_label,
100*np.max(predictions_array),
true_label),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array, true_label[i]
plt.grid(False)
plt.xticks(range(2))
plt.yticks([])
thisplot = plt.bar(range(2), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
num_rows = 3
num_cols = 4
num_images = num_rows*num_cols
predictions = model.predict(test_images)
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
    plt.subplot(num_rows, 2*num_cols, 2*i+1)
    plot_image(i, predictions[i], test_labels, test_images)
    plt.subplot(num_rows, 2*num_cols, 2*i+2)
    plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
import copy
def removePlotterAxes():
    plt.figure()
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])

def multiPlot(imgs, figsize, no_rows, no_cols):
    plt.figure(figsize=figsize)
    for j in range(len(imgs)):
        plt.subplot(no_rows, no_cols, j+1)
        plt.grid(False)
        plt.xticks([])
        plt.yticks([])
        plt.imshow(imgs[j])

def plotImg(img):
    removePlotterAxes()
    plt.imshow(img)
def detect(img_path):
    img = cv2.imread(img_path)
    imgToCrop = copy.copy(img)
    #plotImg(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    eyeCascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml')
    roi = []
    face_imgs = []
    eye_imgs = []
    faces = faceCascade.detectMultiScale(imgToCrop, 1.1, 16)
    for (x, y, w, h) in faces:
        roi.append(imgToCrop[y:y+h, x:x+w])
        cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
    #plotImg(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    for j in range(len(roi)):
        face_imgs.append(cv2.cvtColor(roi[j], cv2.COLOR_BGR2RGB))
    eyes = None
    for face in face_imgs:
        #plotImg(face)
        img = face
        imgToCrop = copy.copy(img)
        roi = []
        eyes = eyeCascade.detectMultiScale(face, 1.1, 16)
        for (x, y, w, h) in eyes:
            roi.append(cv2.resize(cv2.cvtColor(imgToCrop[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY), (224, 224)))
            cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
        #plotImg(img)
    if eyes is not None:
        multiPlot(roi, [2, 2], 1, len(eyes))
    return np.asanyarray(roi)
model = keras.models.load_model('DrowsinessDetectionAlexNet.h5')
test_images = detect('C:/Users/Moh.Massoud/ML/image.jpg')
test_images = np.expand_dims(test_images, axis=3)
test_labels = [1,1]
print(test_images.shape)
num_rows = 1
num_cols = 2
num_images = num_rows*num_cols
predictions = model.predict(test_images)
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
    plt.subplot(num_rows, 2*num_cols, 2*i+1)
    plot_image(i, predictions[i], test_labels, test_images)
    plt.subplot(num_rows, 2*num_cols, 2*i+2)
    plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
```
# Stock market fluctuations
The fluctuations of stock prices represent an intriguing example of a complex random walk. Stock prices are influenced by transactions that are carried out over a broad range of time scales, from micro- to milliseconds for high-frequency hedge funds, over several hours or days for day-traders, to months or years for long-term investors. We therefore expect that the statistical properties of stock price fluctuations, like volatility and auto-correlation of returns, are not constant over time, but show significant fluctuations of their own. Time-varying parameter models can account for such changes in volatility and auto-correlation and update their parameter estimates in real-time.
There are, however, certain events that render previously gathered information about volatility or autocorrelation of returns completely useless. News announcements that are unexpected at least to some extent, for example, can induce increased trading activity, as market participants update their orders according to their interpretation of novel information. The resulting *"non-scheduled"* price corrections often cannot be adequately described by the current estimates of volatility. Even a model that accounts for gradual variations in volatility cannot reproduce these large price corrections. Instead, when such an event happens, it becomes favorable to forget about previously gathered parameter estimates and completely start over. In this example, we use the *bayesloop* framework to specifically evaluate the probability of previously acquired information becoming useless. We interpret this probability value as a risk metric and evaluate it for each minute of an individual trading day (Nov 28, 2016) for the exchange-traded fund [SPY](https://www.google.com/finance?q=SPY). The announcement of macroeconomic indicators on this day results in a significant increase of our risk metric in intra-day trading.
<div style="background-color: #ffedcc; border-left: 5px solid #f0b37e; padding: 0.5em; margin-top: 1em; margin-bottom: 1.0em">
**DISCLAIMER:** This website does not provide tax, legal or accounting advice. This material has been prepared for informational purposes only, and is not intended to provide, and should not be relied on for, tax, legal or accounting advice. You should consult your own tax, legal and accounting advisors before engaging in any transaction.
</div>
<div style="background-color: #e7f2fa; border-left: 5px solid #6ab0de; padding: 0.5em; margin-top: 1em; margin-bottom: 1.0em">
**Note:** The intraday pricing data used in this example is obtained via [Google Finance](https://www.google.com/finance):
</div>
```
https://www.google.com/finance/getprices?q=SPY&i=60&p=1d&f=d,c
```
<div style="background-color: #e7f2fa; border-left: 5px solid #6ab0de; padding: 0.5em; margin-top: 1em; margin-bottom: 1.0em">
This request returns a list of minute close prices (and date/time information; `f=d,c`) for SPY for the last trading period. The maximum look-back period is 14 days (`p=14d`) for requests of minute-scale data (`i=60`).
</div>
```
%matplotlib inline
import numpy as np
import bayesloop as bl
import sympy.stats as stats
from tqdm import tqdm_notebook
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_color_codes() # use seaborn colors
# minute-scale pricing data
prices = np.array(
[ 221.14 , 221.09 , 221.17 , 221.3 , 221.3 , 221.26 ,
221.32 , 221.17 , 221.2 , 221.27 , 221.19 , 221.12 ,
221.08 , 221.1 , 221.075, 221.03 , 221.04 , 221.03 ,
221.11 , 221.14 , 221.135, 221.13 , 221.04 , 221.15 ,
221.21 , 221.25 , 221.21 , 221.17 , 221.21 , 221.2 ,
221.21 , 221.17 , 221.1 , 221.13 , 221.18 , 221.15 ,
221.2 , 221.2 , 221.23 , 221.25 , 221.25 , 221.25 ,
221.25 , 221.22 , 221.2 , 221.15 , 221.18 , 221.13 ,
221.1 , 221.08 , 221.13 , 221.09 , 221.08 , 221.07 ,
221.09 , 221.1 , 221.06 , 221.1 , 221.11 , 221.18 ,
221.26 , 221.46 , 221.38 , 221.35 , 221.3 , 221.18 ,
221.18 , 221.18 , 221.17 , 221.175, 221.13 , 221.03 ,
220.99 , 220.97 , 220.9 , 220.885, 220.9 , 220.91 ,
220.94 , 220.935, 220.84 , 220.86 , 220.89 , 220.91 ,
220.89 , 220.84 , 220.83 , 220.74 , 220.755, 220.72 ,
220.69 , 220.72 , 220.79 , 220.79 , 220.81 , 220.82 ,
220.8 , 220.74 , 220.75 , 220.73 , 220.69 , 220.72 ,
220.73 , 220.69 , 220.71 , 220.72 , 220.8 , 220.81 ,
220.79 , 220.8 , 220.79 , 220.74 , 220.77 , 220.79 ,
220.87 , 220.86 , 220.92 , 220.92 , 220.88 , 220.87 ,
220.88 , 220.87 , 220.94 , 220.93 , 220.92 , 220.94 ,
220.94 , 220.9 , 220.94 , 220.9 , 220.91 , 220.85 ,
220.85 , 220.83 , 220.85 , 220.84 , 220.87 , 220.91 ,
220.85 , 220.77 , 220.83 , 220.79 , 220.78 , 220.78 ,
220.79 , 220.83 , 220.87 , 220.88 , 220.9 , 220.97 ,
221.05 , 221.02 , 221.01 , 220.99 , 221.04 , 221.05 ,
221.06 , 221.07 , 221.12 , 221.06 , 221.07 , 221.03 ,
221.01 , 221.03 , 221.03 , 221.01 , 221.02 , 221.04 ,
221.04 , 221.07 , 221.105, 221.1 , 221.09 , 221.08 ,
221.07 , 221.08 , 221.03 , 221.06 , 221.1 , 221.11 ,
221.11 , 221.18 , 221.2 , 221.34 , 221.29 , 221.235,
221.22 , 221.2 , 221.21 , 221.22 , 221.19 , 221.17 ,
221.19 , 221.13 , 221.13 , 221.12 , 221.14 , 221.11 ,
221.165, 221.19 , 221.18 , 221.19 , 221.18 , 221.15 ,
221.16 , 221.155, 221.185, 221.19 , 221.2 , 221.2 ,
221.16 , 221.18 , 221.16 , 221.11 , 221.07 , 221.095,
221.08 , 221.08 , 221.09 , 221.11 , 221.08 , 221.08 ,
221.1 , 221.08 , 221.11 , 221.07 , 221.11 , 221.1 ,
221.09 , 221.07 , 221.14 , 221.12 , 221.08 , 221.09 ,
221.05 , 221.08 , 221.065, 221.05 , 221.06 , 221.085,
221.095, 221.07 , 221.05 , 221.09 , 221.1 , 221.145,
221.12 , 221.14 , 221.12 , 221.12 , 221.12 , 221.11 ,
221.14 , 221.15 , 221.13 , 221.12 , 221.11 , 221.105,
221.105, 221.13 , 221.14 , 221.1 , 221.105, 221.105,
221.11 , 221.13 , 221.15 , 221.11 , 221.13 , 221.08 ,
221.11 , 221.12 , 221.12 , 221.12 , 221.13 , 221.15 ,
221.18 , 221.21 , 221.18 , 221.15 , 221.15 , 221.15 ,
221.15 , 221.15 , 221.13 , 221.13 , 221.16 , 221.13 ,
221.11 , 221.12 , 221.09 , 221.07 , 221.06 , 221.04 ,
221.06 , 221.09 , 221.07 , 221.045, 221. , 220.99 ,
220.985, 220.95 , 221. , 221.01 , 221.005, 220.99 ,
221.03 , 221.055, 221.06 , 221.03 , 221.03 , 221.03 ,
221. , 220.95 , 220.96 , 220.97 , 220.965, 220.97 ,
220.94 , 220.93 , 220.9 , 220.9 , 220.9 , 220.91 ,
220.94 , 220.92 , 220.94 , 220.91 , 220.92 , 220.935,
220.875, 220.89 , 220.91 , 220.92 , 220.93 , 220.93 ,
220.91 , 220.9 , 220.89 , 220.9 , 220.9 , 220.93 ,
220.94 , 220.92 , 220.93 , 220.88 , 220.88 , 220.86 ,
220.9 , 220.92 , 220.85 , 220.83 , 220.83 , 220.795,
220.81 , 220.78 , 220.7 , 220.69 , 220.6 , 220.58 ,
220.61 , 220.63 , 220.68 , 220.63 , 220.63 , 220.595,
220.66 , 220.645, 220.64 , 220.6 , 220.579, 220.53 ,
220.53 , 220.5 , 220.42 , 220.49 , 220.49 , 220.5 ,
220.475, 220.405, 220.4 , 220.425, 220.385, 220.37 ,
220.49 , 220.46 , 220.45 , 220.48 , 220.51 , 220.48 ]
)
plt.figure(figsize=(8,2))
plt.plot(prices)
plt.ylabel('price [USD]')
plt.xlabel('Nov 28, 2016')
plt.xticks([30, 90, 150, 210, 270, 330, 390],
['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlim([0, 390]);
```
## Persistent random walk model
In this example, we choose to model the price evolution of SPY as a simple, well-known random walk model: the auto-regressive process of first order. We assume that subsequent log-return values $r_t$ of SPY obey the following recursive instruction:
$$
r_t = \rho_t \cdot r_{t-1} + \sqrt{1-\rho_t^2} \cdot \sigma_t \cdot \epsilon_t
$$
with the time-varying correlation coefficient $\rho_t$ and the time-varying volatility parameter $\sigma_t$. Here, $\epsilon_t$ is drawn from a standard normal distribution and represents the driving noise of the process and the scaling factor $\sqrt{1-\rho_t^2}$ makes sure that $\sigma_t$ is the standard deviation of the process. In *bayesloop*, we define this observation model as follows:
```
bl.om.ScaledAR1('rho', bl.oint(-1, 1, 100), 'sigma', bl.oint(0, 0.006, 400))
```
This implementation of the correlated random walk model will be discussed in detail in the next section.
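The recursion itself is easy to simulate directly, which helps build intuition for the two parameters before fitting anything. The sketch below is plain NumPy (independent of *bayesloop*) and checks that the scaling factor $\sqrt{1-\rho_t^2}$ indeed keeps the stationary standard deviation of the process at $\sigma_t$:

```python
import numpy as np

def simulate_scaled_ar1(rho, sigma, n, seed=0):
    """Simulate r_t = rho*r_{t-1} + sqrt(1 - rho^2)*sigma*eps_t."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)
    r = np.zeros(n)
    for t in range(1, n):
        r[t] = rho * r[t - 1] + np.sqrt(1.0 - rho**2) * sigma * eps[t]
    return r

r = simulate_scaled_ar1(rho=0.3, sigma=0.001, n=200_000)
print(r.std())                           # close to sigma = 0.001
print(np.corrcoef(r[:-1], r[1:])[0, 1])  # close to rho = 0.3
```

Both the marginal standard deviation and the lag-1 autocorrelation recover the inputs, confirming the role of the scaling factor.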
Looking at the log-returns of our example, we find that the magnitude of the fluctuations (i.e. the volatility) is higher after market open and before market close. While these variations happen quite gradually, a large peak around 10:30am (and possibly another one around 12:30pm) represents an abrupt price correction.
```
logPrices = np.log(prices)
logReturns = np.diff(logPrices)
plt.figure(figsize=(8,2))
plt.plot(np.arange(1, 390), logReturns, c='r')
plt.ylabel('log-returns')
plt.xlabel('Nov 28, 2016')
plt.xticks([30, 90, 150, 210, 270, 330, 390],
['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.yticks([-0.001, -0.0005, 0, 0.0005, 0.001])
plt.xlim([0, 390]);
```
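As a simple non-Bayesian point of comparison, a rolling-window standard deviation already reveals such volatility changes, though with a hard trade-off between responsiveness and noise that the model-based approach below avoids. The helper is demonstrated on synthetic returns (illustrative values, not the SPY data):

```python
import numpy as np

def rolling_volatility(returns, window=30):
    """Sliding-window standard deviation -- a naive volatility estimate."""
    returns = np.asarray(returns, dtype=float)
    out = np.full(len(returns), np.nan)  # undefined before one full window
    for t in range(window - 1, len(returns)):
        out[t] = returns[t - window + 1 : t + 1].std()
    return out

# synthetic log-returns: quiet first half, noisy second half
rng = np.random.default_rng(1)
r = np.concatenate([rng.normal(0, 1e-4, 200), rng.normal(0, 5e-4, 200)])
vol = rolling_volatility(r)
print(vol[150], vol[390])  # roughly 1e-4 vs. 5e-4
```

The window length must be tuned by hand here, whereas the Bayesian model below learns the magnitude of volatility changes from the data.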
## Online study
*bayesloop* provides the class `OnlineStudy` to analyze on-going data streams and perform model selection for each new data point. In contrast to other Study types, more than just one transition model can be assigned to an `OnlineStudy` instance, using the method `addTransitionModel`. Here, we choose to add two distinct scenarios:
- **normal:** Both volatility and correlation of subsequent returns are allowed to vary gradually over time, to account for periods with above average trading activity after market open and before market close. This scenario therefore represents a smoothly running market.
- **chaotic:** This scenario assumes that we know nothing about the value of volatility or correlation. The probability that this scenario gets assigned in each minute therefore represents the probability that previously gathered knowledge about market dynamics cannot explain the last price movement.
By evaluating how likely the *chaotic* scenario explains each new minute close price of SPY compared to the *normal* scenario, we can identify specific events that lead to extreme fluctuations in intra-day trading.
First, we create a new instance of the `OnlineStudy` class and set the observation model introduced above. The keyword argument `storeHistory` is set to `True`, because we want to access the parameter estimates of all time steps afterwards, not only the estimates of the last time step.
```
S = bl.OnlineStudy(storeHistory=True)
L = bl.om.ScaledAR1('rho', bl.oint(-1, 1, 100),
'sigma', bl.oint(0, 0.006, 400))
S.set(L)
```
<div style="background-color: #e7f2fa; border-left: 5px solid #6ab0de; padding: 0.5em; margin-top: 1em; margin-bottom: 1em">
**Note:** While the parameter `rho` is naturally constrained to the interval ]-1, 1[, the parameter boundaries of `sigma` have to be specified by the user. Typically, one can review past log-return data and estimate the upper boundary as a multiple of the standard deviation of past data points.
</div>
Both scenarios for the dynamic market behavior are implemented via the method `add`. The *normal* case consists of a combined transition model that allows both volatility and correlation to perform a Gaussian random walk. As the standard deviation (magnitude) of the parameter fluctuations is a-priori unknown, we supply a wide range of possible values (`bl.cint(0, 1.5e-01, 15)` for `rho`, which corresponds to 15 equally spaced values within the closed interval [0, 0.15], and 50 equally spaced values within the interval [0, 1.5$\cdot$10$^{-4}$] for `sigma`).
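The grid helpers can be mimicked with plain NumPy if one wants to inspect the parameter values directly. `cint` is a closed-interval `linspace`; for `oint`, the exact endpoint-exclusion convention is an assumption here (taking the interior points of a `linspace` with two extra points), so verify against the bayesloop source before relying on it:

```python
import numpy as np

def cint(a, b, n):
    """n equally spaced values on the closed interval [a, b]."""
    return np.linspace(a, b, n)

def oint(a, b, n):
    """n equally spaced values strictly inside (a, b).
    NOTE: assumed convention -- check bayesloop itself for the exact grid."""
    return np.linspace(a, b, n + 2)[1:-1]

s1_grid = cint(0, 1.5e-1, 15)
print(s1_grid[0], s1_grid[-1], len(s1_grid))  # 0.0 0.15 15
```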
<div style="background-color: #e7f2fa; border-left: 5px solid #6ab0de; padding: 0.5em; margin-top: 1em; margin-bottom: 1em">
Since we have no prior assumptions about the standard deviations of the Gaussian random walks, we let *bayesloop* assign equal probability to all values. If one wants to analyze more than just one trading day, the (hyper-)parameter distributions from the end of one day can be used as the prior distribution for the next day! One might also want to suppress large variations of `rho` or `sigma` with an exponential prior, e.g.:
</div>
```
bl.tm.GaussianRandomWalk('s1', bl.cint(0, 1.5e-01, 15), target='rho',
prior=stats.Exponential('expon', 1./3.0e-02))
```
The *chaotic* case is implemented by the transition model `Independent`. This model sets a flat prior distribution for the parameters volatility and correlation in each time step. This way, previous knowledge about the parameters is not used when analyzing a new data point.
```
T1 = bl.tm.CombinedTransitionModel(
bl.tm.GaussianRandomWalk('s1', bl.cint(0, 1.5e-01, 15), target='rho'),
bl.tm.GaussianRandomWalk('s2', bl.cint(0, 1.5e-04, 50), target='sigma')
)
T2 = bl.tm.Independent()
S.add('normal', T1)
S.add('chaotic', T2)
```
Before any data points are passed to the study instance, we further provide prior probabilities for the two scenarios. We expect about one news announcement containing unexpected information per day and set a prior probability of $1/390$ for the *chaotic* scenario (one normal trading day consists of 390 trading minutes).
```
S.setTransitionModelPrior([389/390., 1/390.])
```
Finally, we can supply log-return values to the study instance, data point by data point. We use the `step` method to infer new parameter estimates and the updated probabilities of the two scenarios. Note that in this example, we use a `for` loop to feed all data points to the algorithm because all data points are already available. In a real application of the `OnlineStudy` class, one can supply each new data point as it becomes available and analyze it in real-time.
```
for r in tqdm_notebook(logReturns):
S.step(r)
```
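The one-point-at-a-time pattern is the defining feature of online inference. As a toy illustration of what a `step`-style update does conceptually (this is not bayesloop code), here is a streaming conjugate update of a Gaussian mean with known noise variance; each incoming point refines the posterior without revisiting old data:

```python
import numpy as np

def online_gaussian_mean(stream, prior_mean=0.0, prior_var=1.0, noise_var=0.25):
    """Update the posterior over a Gaussian mean one observation at a time."""
    mean, var = prior_mean, prior_var
    for y in stream:
        var_new = 1.0 / (1.0 / var + 1.0 / noise_var)  # combine precisions
        mean = var_new * (mean / var + y / noise_var)   # precision-weighted average
        var = var_new
    return mean, var

rng = np.random.default_rng(0)
stream = rng.normal(2.0, 0.5, size=1000)  # true mean 2.0, sd 0.5
mean, var = online_gaussian_mean(stream)
print(mean, var)  # posterior concentrates near the true mean 2.0
```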
## Volatility spikes
Before we analyze how the probability values of our two market scenarios change over time, we check whether the inferred temporal evolution of the time-varying parameters is realistic. Below, the log-returns are displayed together with the inferred marginal distribution (shaded red) and mean value (black line) of the volatility parameter, using the method `plotParameterEvolution`.
```
plt.figure(figsize=(8, 4.5))
# data plot
plt.subplot(211)
plt.plot(np.arange(1, 390), logReturns, c='r')
plt.ylabel('log-returns')
plt.xticks([30, 90, 150, 210, 270, 330, 390],
['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.yticks([-0.001, -0.0005, 0, 0.0005, 0.001])
plt.xlim([0, 390])
# parameter plot
plt.subplot(212)
S.plot('sigma', color='r')
plt.xticks([28, 88, 148, 208, 268, 328, 388],
['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlabel('Nov 28, 2016')
plt.ylim([0, 0.00075])
plt.xlim([-2, 388]);
```
Note that the volatility estimates of the first few trading minutes are not as accurate as later ones, as we initialize the algorithm with a non-informative prior distribution. One could of course provide a custom prior distribution as a more realistic starting point. Despite this *fade-in* period, the period of increased volatility after market open is captured nicely, as is the (more subtle) increase in volatility during the last 45 minutes of the trading day. Large individual log-return values also result in *volatility spikes* (around 10:30am and, more subtly, around 12:30pm).
## Islands of stability
The persistent random walk model does not only provide information about the magnitude of price fluctuations, but further quantifies whether subsequent log-return values are correlated. A positive correlation coefficient indicates diverging price movements, as a price increase is more likely followed by another increase, compared to a decrease. In contrast, a negative correlation coefficient indicates *islands of stability*, i.e. trading periods during which prices do not diffuse randomly (as with a corr. coeff. of zero). Below, we plot the price evolution of SPY on November 28, together with the inferred marginal distribution (shaded blue) and the corresponding mean value (black line) of the time-varying correlation coefficient.
```
plt.figure(figsize=(8, 4.5))
# data plot
plt.subplot(211)
plt.plot(prices)
plt.ylabel('price [USD]')
plt.xticks([30, 90, 150, 210, 270, 330, 390],
['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlim([0, 390])
# parameter plot
plt.subplot(212)
S.plot('rho', color='#0000FF')
plt.xticks([28, 88, 148, 208, 268, 328, 388],
['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlabel('Nov 28, 2016')
plt.ylim([-0.4, 0.4])
plt.xlim([-2, 388]);
```
As a correlation coefficient that deviates significantly from zero would be immediately exploitable to predict future price movements, we mostly find correlation values near zero (in accordance with the efficient market hypothesis). However, between 1:15pm and 2:15pm, we find a short period of negative correlation with a value around -0.2. During this period, subsequent price movements tend to cancel each other out, resulting in an unusually strong price stability.
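That stabilizing effect can be quantified in simulation: for the scaled AR(1) process, the standard deviation of the cumulative change over $k$ steps shrinks when $\rho < 0$ (for large $k$, roughly by the factor $\sqrt{(1+\rho)/(1-\rho)}$). A quick check with illustrative parameters:

```python
import numpy as np

def cum_change_std(rho, sigma=1.0, k=30, n=200_000, seed=0):
    """Std of the k-step cumulative change of a scaled AR(1) return series."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)
    r = np.zeros(n)
    for t in range(1, n):
        r[t] = rho * r[t - 1] + np.sqrt(1 - rho**2) * sigma * eps[t]
    csum = np.cumsum(r)
    return np.std(csum[k:] - csum[:-k])

std_neg = cum_change_std(rho=-0.2)   # "island of stability"
std_zero = cum_change_std(rho=0.0)   # ordinary random walk, ~sqrt(30)
print(std_neg, std_zero)
```

With $\rho = -0.2$ the 30-step cumulative change is noticeably less dispersed than for the uncorrelated walk, which is exactly why such periods would be exploitable if they persisted.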
Using the `Parser` sub-module of *bayesloop*, we can evaluate the probability that subsequent return values are negatively correlated. In the figure below, we tag all time steps with a probability for `rho < 0` of 80% or higher and find that this indicator nicely identifies the period of increased market stability!
<div style="background-color: #e7f2fa; border-left: 5px solid #6ab0de; padding: 0.5em; margin-top: 1em; margin-bottom: 1em">
**Note:** The arbitrary threshold of 80% for our market indicator is of course chosen with hindsight in this example. In a real application, more than one trading day of data needs to be analyzed to create robust indicators!
</div>
```
# extract parameter grid values (rho) and corresponding prob. values (p)
rho, p = S.getParameterDistributions('rho')
# evaluate Prob.(rho < 0) for all time steps
P = bl.Parser(S)
p_neg_rho = np.array([P('rho < 0.', t=t, silent=True) for t in range(1, 389)])
# plotting
plt.figure(figsize=(8, 4.5))
plt.subplot(211)
plt.axhline(y=0.8, lw=1.5, c='g')
plt.plot(p_neg_rho, lw=1.5, c='k')
plt.fill_between(np.arange(len(p_neg_rho)), 0, p_neg_rho > 0.8, lw=0, facecolor='g', alpha=0.5)
plt.xticks([28, 88, 148, 208, 268, 328, 388],
['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.ylabel('prob. of neg. corr.')
plt.xlim([-2, 388])
plt.subplot(212)
plt.plot(prices)
plt.fill_between(np.arange(2, len(p_neg_rho)+2), 220.2, 220.2 + (p_neg_rho > 0.8)*1.4, lw=0, facecolor='g', alpha=0.5)
plt.ylabel('price [USD]')
plt.xticks([30, 90, 150, 210, 270, 330, 390],
['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlim([0, 390])
plt.ylim([220.2, 221.6])
plt.xlabel('Nov 28, 2016');
```
## Automatic tuning
One major advantage of the `OnlineStudy` class is that it not only infers the time-varying parameters of the *low-level* correlated random walk (the observation model `ScaledAR1`), but further infers the magnitude (the standard deviation of the transition model `GaussianRandomWalk`) of the parameter fluctuations and thereby tunes the parameter dynamics as new data arrives. As we can see below (left sub-figures), both magnitudes - one for `rho` and one for `sigma` - start off at a high level. This is due to our choice of a uniform prior, assigning equal probability to all hyper-parameter values before seeing any data. Over time, the algorithm *learns* that the true parameter fluctuations are less severe than previously assumed and adjusts the hyper-parameters accordingly. This newly gained information, summarized in the hyper-parameter distributions of the last time step (right sub-figures), could then represent the prior distribution for the next trading day.
```
plt.figure(figsize=(8, 4.5))
plt.subplot(221)
S.plot('s1', color='green')
plt.xticks([28, 88, 148, 208, 268, 328, 388],
['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlabel('Nov 28, 2016')
plt.xlim([-2, 388])
plt.ylim([0, 0.06])
plt.subplot(222)
S.plot('s1', t=388, facecolor='green', alpha=0.7)
plt.yticks([])
plt.xticks([0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08], ['0', '1', '2', '3', '4', '5', '6', '7', '8'])
plt.xlabel('s1 ($\cdot 10^{-2}$)')
plt.xlim([-0.005, 0.08])
plt.subplot(223)
S.plot('s2', color='green')
plt.xticks([28, 88, 148, 208, 268, 328, 388],
['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlabel('Nov 28, 2016')
plt.xlim([-2, 388])
plt.ylim([0, 0.0001])
plt.subplot(224)
S.plot('s2', t=388, facecolor='green', alpha=0.7)
plt.yticks([])
plt.xticks([0, 0.00001, 0.00002, 0.00003], ['0', '1', '2', '3'])
plt.xlabel('s2 ($\cdot 10^{-5}$)')
plt.xlim([0, 0.00003])
plt.tight_layout()
```
## Real-time model selection
Finally, we investigate which of our two market scenarios - *normal* vs. *chaotic* - can explain the price movements best. Using the method `plot('chaotic')`, we obtain the probability values for the *chaotic* scenario compared to the *normal* scenario, **with respect to all past data points**:
```
plt.figure(figsize=(8, 2))
S.plot('chaotic', lw=2, c='k')
plt.xticks([28, 88, 148, 208, 268, 328, 388],
['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlabel('Nov 28, 2016')
plt.xlim([0, 388])
plt.ylabel('p("chaotic")')
```
As expected, the probability that the *chaotic* scenario can explain all past log-return values at a given point in time quickly falls off to practically zero. Indeed, a correlated random walk with slowly changing volatility and correlation of subsequent returns is better suited to describe the price fluctuations of SPY **in the majority of time steps**.
However, we may also ask for the probability that each **individual** log-return value is produced by either of the two market scenarios by using the keyword argument `local=True`:
```
plt.figure(figsize=(8, 2))
S.plot('chaotic', local=True, c='k', lw=2)
plt.xticks([28, 88, 148, 208, 268, 328, 388],
['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlabel('Nov 28, 2016')
plt.xlim([0, 388])
plt.ylabel('p("chaotic")')
plt.axvline(58, 0, 1, zorder=1, c='r', lw=1.5, ls='dashed', alpha=0.7)
plt.axvline(178, 0, 1, zorder=1, c='r', lw=1.5, ls='dashed', alpha=0.7);
```
Here, we find clear peaks indicating an increased probability for the *chaotic* scenario, i.e. that previously gained information about the market dynamics has become useless. Let's assume that we are concerned about market behavior as soon as there is at least a 1% risk that *normal* market dynamics cannot describe the current price movement. This leaves us with three distinct events in the following time steps:
```
p = S.getTransitionModelProbabilities('chaotic', local=True)
np.argwhere(p > 0.01)
```
These time steps translate to the following trading minutes: **10:31am, 12:33pm and 3:54pm**.
The first peak at 10:31am directly follows the release of the [Texas Manufacturing Outlook Survey](http://www.dallasfed.org/news/releases/2016/nr161128.cfm) of the [Dallas Fed](http://www.dallasfed.org/). The publication of this set of economic indicators has already been announced by [financial news pages](http://finance.yahoo.com/news/10-things-know-opening-bell-113600334.html) before the market opened. While the title of the survey (*"Texas Manufacturing Activity Continues to Expand"*) indicated good news, investors reacted sceptically, possibly due to negative readings in both the *new orders index* and *growth rate of orders index* (c.f. [this article](http://247wallst.com/economy/2016/11/29/better-news-for-manufacturing-activity-in-texas/)).
<div style="background-color: #e7f2fa; border-left: 5px solid #6ab0de; padding: 0.5em; margin-top: 1em; margin-bottom: 1em">
**Note:** This first peak would be more pronounced if we had supplied a prior distribution that constrains strong parameter fluctuations!
</div>
The underlying trigger for the second peak, shortly after 12:30pm, remains unknown. No major macroeconomic indicators were published at that time, at least according to some economic news sites (see e.g. [nytimes.com](http://markets.on.nytimes.com/research/economy/indicators/indicators.asp?monthOffset=-3) or [liveindex.org](https://liveindex.org/46172/2016/11/us-pre-market-news-28-nov-2016/)).
The last peak at 3:54pm is likely connected to the imminent market close at 4pm. To protect themselves from unexpected news after trading hours, market participants often close their open positions before market close, generally leading to an increased volatility. If large market participants follow this behavior, price corrections may no longer be explained by *normal* market dynamics.
This example has shown that *bayesloop*'s `OnlineStudy` class can identify periods of increased volatility as well as periods of increased price stability (accompanied by a negative correlation of subsequent returns), and further automatically tunes its current assumptions about market dynamics. Finally, we have demonstrated that *bayesloop* serves as a viable tool to detect *"anomalous"* price corrections, triggered by large market participants or news announcements, in real-time.
```
#https://kieranrcampbell.github.io/blog/2016/05/15/gibbs-sampling-bayesian-linear-regression.html
%load_ext autoreload
%autoreload 2
import sys,os
import numpy as np
import readline
#from rpy2.rinterface import R_VERSION_BUILD
#print(R_VERSION_BUILD)
#import rpy2.robjects as robjects
#import rpy2.robjects.numpy2ri
import math
import datetime
import pandas as pd
#import seaborn as sns
from scipy.fftpack import fft, ifft
#Import compressivesesning libs
sys.path.insert(0,"..")
from pyCompressSensing.SignalFrame import *
from scipy.stats import geninvgauss,gamma
```
## Import R packages if necessary
```
#Set proxy if needed
os.environ["http_proxy"] = "http://proxy-internet-aws-eu.subsidia.org:3128"
os.environ["https_proxy"] = "http://proxy-internet-aws-eu.subsidia.org:3128"
#https://rpy2.readthedocs.io/en/version_2.8.x/introduction.html#r-packages
# import rpy2's package module
import rpy2.robjects.packages as rpackages
# import R's utility package
utils = rpackages.importr('utils')
# select a mirror for R packages
utils.chooseCRANmirror(ind=1) # select the first mirror in the list
# Install dependencies
packnames = ['GeneralizedHyperbolic','MASS']
names_to_install = [x for x in packnames if (not rpackages.isinstalled(x))]
# R vector of strings
from rpy2.robjects.vectors import StrVector
# Selectively install what needs to be installed.
# We are fancy, just because we can.
if len(names_to_install) > 0:
    utils.install_packages(StrVector(names_to_install))
#Imports R packages
#see https://cran.r-project.org/web/views/Distributions.html
from rpy2.robjects.packages import importr
r_GIG = importr('GeneralizedHyperbolic') #used for Generalized Gaussian Inverse distribution
r_stats = importr('stats') #used for Normal, gamma distribution
r_MASS = importr('MASS') #used for multivariate Normal distribution
```
## General functions
```
def sample_gauss(size):
    return np.random.normal(loc=0.0, scale=1.0, size=size)

def psi_inv(x, scale):
    return ifft(x)*scale

def psi_invt(x, scale):
    return fft(x)/scale
```
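The Gibbs updates below rely on $\psi^{-1}$ and its counterpart cancelling exactly, so it is worth verifying the round trip once. The check uses `numpy.fft`, which follows the same normalization convention as `scipy.fftpack` (unnormalized forward transform, $1/N$ inverse):

```python
import numpy as np

def psi_inv(x, scale):
    return np.fft.ifft(x) * scale

def psi_invt(x, scale):
    return np.fft.fft(x) / scale

rng = np.random.default_rng(0)
x = rng.standard_normal(256) + 1j * rng.standard_normal(256)
scale = np.sqrt(len(x))
roundtrip = psi_invt(psi_inv(x, scale), scale)
print(np.allclose(roundtrip, x))  # True
```

The `scale` factor cancels in the composition, so the pair inverts exactly up to floating-point error for any positive `scale`.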
## Sample functions
```
def sample_x(y, tau, _lambda, gamma_r, gamma_i, v, alpha, phi, psi_inv_phit_y):
    N = phi.shape[1]
    M = phi.shape[0]
    LHS_r = np.array([math.pow((1/alpha) + _lambda*(1/gamma_ind), -1) for gamma_ind in gamma_r])
    LHS_i = np.array([math.pow((1/alpha) + _lambda*(1/gamma_ind), -1) for gamma_ind in gamma_i])
    m = tau*psi_inv_phit_y + v
    mu_tilde_r = np.real(LHS_r*np.real(m))
    mu_tilde_i = np.real(LHS_i*np.imag(m))
    gauss_v3 = sample_gauss(N)
    gauss_v4 = sample_gauss(N)
    LHS2_r = np.array([math.pow((1/alpha) + _lambda*(1/gamma_ind), -1/2) for gamma_ind in gamma_r])
    LHS2_i = np.array([math.pow((1/alpha) + _lambda*(1/gamma_ind), -1/2) for gamma_ind in gamma_i])
    x_r = mu_tilde_r + LHS2_r * gauss_v3
    x_i = mu_tilde_i + LHS2_i * gauss_v4
    x = x_r + x_i*1j
    return x
def sample_v(x, tau, alpha, phi):
    N = phi.shape[1]
    M = phi.shape[0]
    mu = (1/alpha)*x - tau*psi_invt(phi.T@phi@psi_inv(x, math.sqrt(N)), math.sqrt(N))
    V2_1 = sample_gauss(N) + sample_gauss(N)*1j
    V2_1 = math.sqrt(1/2)*V2_1
    V2_2 = sample_gauss(M) + sample_gauss(M)*1j
    V2_2 = math.sqrt(1/2)*V2_2
    sqrt_alpha = math.sqrt(alpha)
    sqrt_1_alpha = math.sqrt(1 - alpha)
    V2 = math.pow(alpha, -1/2)*V2_1 - sqrt_alpha*phi.T@phi@V2_1 + sqrt_1_alpha*phi.T@V2_2
    V1 = psi_invt(V2, math.sqrt(N))
    V = mu + V1
    # A positive-definiteness condition must hold for this step;
    # initialize tau to 1/std(y)
    return V
%%latex
$$ \boxed{\gamma_n \mid x_{n},\lambda \sim GIG(1,\lambda x_{n}^{2},\frac{1}{2})} $$
def sample_gamma(_lambda, x):
    gamma_n = []
    for x_n in x:
        b_chi = _lambda*math.pow(np.real(x_n), 2)
        a_psi = 1
        c_lambda = 1/2
        rv = geninvgauss.rvs(c_lambda, np.sqrt(b_chi*a_psi), loc=0, scale=np.sqrt(a_psi/b_chi), size=1, random_state=None)
        #param = robjects.FloatVector([b_chi, a_psi, c_lambda])
        # In the R package, the parameter order is c(chi, psi, lambda) = (b, a, c)
        #gamma_n.append(np.array(r_GIG.rgig(1, param=param))[0])
        gamma_n.append(rv)
    return np.concatenate(gamma_n).ravel()
    #return np.array(gamma_n)
%%latex
$$ \boxed{\lambda \mid x,\gamma \sim \operatorname{Gamma}(N+a_{\lambda},b_{\lambda}+\frac{1}{2}x^{T}\operatorname{diag}\left(\gamma^{-1}\right)x)} $$
def sample_lambda(x, gamma_r, gamma_i, a_lambda=1e-6, b_lambda=1e-6):
    N = x.shape[0]
    shape = N + a_lambda
    x_r = np.real(x)
    x_i = np.imag(x)
    rate = np.real(b_lambda + (1/2)*((x_r.T@np.linalg.inv(np.diag(gamma_r))@x_r) + (x_i.T@np.linalg.inv(np.diag(gamma_i))@x_i)))
    return gamma.rvs(a=shape, scale=1/rate, size=1, random_state=None)
    #return np.array(r_stats.rgamma(1, shape=shape, rate=rate))
%%latex
$$ \boxed{\tau \mid x,y \sim \operatorname{Gamma}(M+a_{\tau},b_{\tau}+\frac{1}{2}\|y - \phi \psi^{-1}X\|^{2}_{2})} $$
def sample_tau(x, y, phi, a_tau=1e-6, b_tau=1e-6):
    M = y.shape[0]
    N = x.shape[0]
    # M instead of M/2 because x has both a real and an imaginary part
    shape = M + a_tau
    rate = b_tau + (1/2)*(math.pow(np.linalg.norm(y - phi@psi_inv(x, math.sqrt(N)), 2), 2))
    return gamma.rvs(a=shape, scale=1/rate, size=1, random_state=None)
    #return np.array(r_stats.rgamma(1, shape=shape, rate=rate))
```
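Before running the full sampler, the conditional for τ can be sanity-checked in isolation: given residuals of known precision, the Gamma posterior should concentrate at that precision. The sketch below uses real-valued residuals (hence shape M/2 + a_τ rather than the M + a_τ used above for complex-valued x) and NumPy's Gamma sampler, which is parameterized by shape and scale like `scipy.stats.gamma`:

```python
import numpy as np

rng = np.random.default_rng(0)
tau_true = 4.0                          # noise precision (variance 0.25)
resid = rng.normal(0.0, 1.0 / np.sqrt(tau_true), size=20_000)

a_tau, b_tau = 1e-6, 1e-6
shape = len(resid) / 2 + a_tau          # real-valued residuals: M/2, not M
rate = b_tau + 0.5 * np.sum(resid**2)

tau_samples = rng.gamma(shape, 1.0 / rate, size=5_000)
print(tau_samples.mean())               # concentrates near tau_true = 4.0
```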
## Collapsed Gibbs sampler
```
def gibbs(y, iters, init, hypers,phi):
#Init parameters
_lambda = init["lambda"]
tau = init["tau"]
gamma_r = init["gamma"]
gamma_i = init["gamma"]
v = init["v"]
#Optimisation
M=phi.shape[0]
N=phi.shape[1]
psi_inv_phit_y = psi_inv(phi.T@y,math.sqrt(N))
print(psi_inv_phit_y.shape)
trace = np.zeros((iters, 3), dtype=object) ## trace stores ||x||_1, lambda and tau at each iteration
for it in range(iters):
alpha = 0.1*tau
x = sample_x(y, tau, _lambda, gamma_r,gamma_i,v,alpha,phi,psi_inv_phit_y)
x_r = np.real(x)
x_i = np.imag(x)
_lambda = sample_lambda(x,gamma_r,gamma_i,hypers["a_lambda"],hypers["b_lambda"])
tau = sample_tau(x,y,phi,hypers["a_tau"],hypers["b_tau"])
gamma_r = sample_gamma(_lambda,x_r)
gamma_i = sample_gamma(_lambda,x_i)
v = sample_v(x,tau,alpha,phi)
# print(datetime.datetime.now()-now)
# if (it % 100) == 1 :
print(str(it)+": "+str(np.linalg.norm(x,1))+" "+str(tau)+" "+str(_lambda))
trace[it,:] = np.array((np.linalg.norm(x,1), _lambda, tau), dtype=object)
trace = pd.DataFrame(trace)
trace.columns = ['x','lambda', 'tau']
return trace
```
## Test on signal
```
sf = SignalFrame()
s01 = sf.read_wave('../data/CETIM.wav', coeff_amplitude=1/10000,trunc=0.0125)
s01.sampler_uniform(rate=0.3)
signal_sampled_uni_r03 = s01.temporal_sampled
phi_uni_r03 = s01.phi
y = signal_sampled_uni_r03
N = len(s01.temporal)
## specify initial values
init = {"x": np.zeros(N),
"tau": 1,
"lambda": 1,
"gamma": np.ones(N),
"v": np.zeros(N),
}
## specify hyper parameters
hypers = {"a_lambda": 1e-6,
"b_lambda": 1e-6,
"a_tau": 1e-6,
"b_tau": 1e-6
}
iters = 600
trace = gibbs(signal_sampled_uni_r03, iters, init, hypers,phi_uni_r03)
trace.columns
test = trace.loc[:,['lambda','tau']]
test
test['tau'] = test['tau'].apply(lambda x: x.item())  # extract the scalar from each 1-element array
traceplot = test.plot()
traceplot.set_xlabel("Iteration")
traceplot.set_ylabel("Parameter value")
trace.to_csv('trace_10000.csv')
F = open("lambda1.txt","r")
lambdas = [float(i) for i in F.read().splitlines()]
sns.distplot(lambdas[0:50]).set_title('$\lambda$ : Iteration 50')
sns.distplot(lambdas[0:200]).set_title('$\lambda$ : Iteration 200')
sns.distplot(lambdas[0:600]).set_title('$\lambda$ : Iteration 600')
np.mean(lambdas[50:])
```
```
#Replace all ______ with rjust, ljust or center.
thickness = int(input()) #This must be an odd number
c = 'H'
#Top Cone
for i in range(thickness):
print((c*i).rjust(thickness-1)+c+(c*i).ljust(thickness-1))
#Top Pillars
for i in range(thickness+1):
print((c*thickness).center(thickness*2)+(c*thickness).center(thickness*6))
#Middle Belt
for i in range((thickness+1)//2):
print((c*thickness*5).center(thickness*6))
#Bottom Pillars
for i in range(thickness+1):
print((c*thickness).center(thickness*2)+(c*thickness).center(thickness*6))
#Bottom Cone
for i in range(thickness):
print(((c*(thickness-i-1)).rjust(thickness)+c+(c*(thickness-i-1)).ljust(thickness)).rjust(thickness*6))
```
# Personal Understanding
```
thickness = int(input()) #This must be an odd number
c = 'H'
#Top Cone
for i in range(thickness):
print(c*i)
#print((c*i).rjust(thickness-1)+c+(c*i).ljust(thickness-1))
for i in range(thickness):
print((c*i).rjust(thickness-1))
#print((c*i).rjust(thickness-1)+c+(c*i).ljust(thickness-1))
for i in range(thickness):
print((c*i).rjust(thickness-1)+c)
#print((c*i).rjust(thickness-1)+c+(c*i).ljust(thickness-1))
for i in range(thickness):
print((c*i).ljust(thickness-1))
#print((c*i).rjust(thickness-1)+c+(c*i).ljust(thickness-1))
for i in range(thickness):
print((c*i).rjust(thickness-1)+c+(c*i).ljust(thickness-1))
thickness = int(input()) #This must be an odd number
c = 'H'
#Top Pillars
for i in range(thickness+1):
#print((c*thickness).center(thickness*2)+(c*thickness).center(thickness*6))
print(c*thickness)
for i in range(thickness+1):
#print((c*thickness).center(thickness*2)+(c*thickness).center(thickness*6))
print((c*thickness).center(thickness*2))
for i in range(thickness+1):
#print((c*thickness).center(thickness*2)+(c*thickness).center(thickness*6))
print((c*thickness).center(thickness*6))
for i in range(thickness+1):
print((c*thickness).center(thickness*2)+(c*thickness).center(thickness*6))
#Middle Belt
for i in range((thickness+1)//2):
print((c*thickness*5).center(thickness*6))
thickness = int(input()) #This must be an odd number
c = 'H'
#Bottom Cone
for i in range(thickness):
print(((c*(thickness-i-1))))
for i in range(thickness):
print(((c*(thickness-i-1)).rjust(thickness)))
for i in range(thickness):
print(((c*(thickness-i-1)).rjust(thickness)+c))
for i in range(thickness):
print(((c*(thickness-i-1)).ljust(thickness)))
for i in range(thickness):
print(((c*(thickness-i-1)).ljust(thickness)).rjust(thickness*6))
for i in range(thickness):
print(((c*(thickness-i-1)).rjust(thickness)+c+(c*(thickness-i-1)).ljust(thickness)).rjust(thickness*6))
```
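The three alignment methods used above have simple semantics that a quick stdlib-only check makes precise (the `'H'` character and widths mirror the logo code):

```python
c = 'H'

# rjust pads on the left, ljust pads on the right, center pads both sides
assert (c * 3).rjust(5) == '  HHH'
assert (c * 3).ljust(5) == 'HHH  '
assert (c * 3).center(5) == ' HHH '

# a string already wider than the target width is returned unchanged
assert (c * 7).center(5) == 'HHHHHHH'

# one row of the top cone, for i = 2 and thickness = 5:
thickness, i = 5, 2
row = (c * i).rjust(thickness - 1) + c + (c * i).ljust(thickness - 1)
assert row == '  HHHHH  '
```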
For network data visualization we can use a number of libraries. Here we'll use [networkX](https://networkx.github.io/documentation/networkx-2.4/install.html).
```
! pip3 install networkx
! pip3 install pytest
import networkx as nx
! ls ../facebook_large/
import pandas as pd
target = pd.read_csv('../facebook_large/musae_facebook_target.csv')
edges = pd.read_csv('../facebook_large/musae_facebook_edges.csv')
target.shape
edges.shape
! cat ../facebook_large/README.txt
edges.head(5)
target.head(10)
```
So, we have undirected edges `(n1 <-> n2)` stored as a tuple `(n1, n2)`, and the nodes have `3` columns, out of which `facebook_id` and `page_name` should be anonymized and not used for clustering/classification.
Let's use `networkX` to visualize this graph, and look at some histograms to get an idea of the data (like class imbalance, etc.)
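As a sketch of what networkX does under the hood, an undirected graph stored as `(n1, n2)` tuples can be turned into an adjacency list, from which node degrees and the degree counts follow directly (stdlib only; the tiny edge list here is made up for illustration, not taken from the Facebook data):

```python
from collections import Counter, defaultdict

# toy undirected edge list (illustrative values only)
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

adjacency = defaultdict(set)
for n1, n2 in edges:
    adjacency[n1].add(n2)   # undirected: store the edge in both directions
    adjacency[n2].add(n1)

degree = {node: len(neighbours) for node, neighbours in adjacency.items()}
degree_count = Counter(degree.values())

print(degree)        # {0: 2, 1: 2, 2: 3, 3: 1}
print(degree_count)  # Counter({2: 2, 3: 1, 1: 1})
```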
```
import matplotlib.pyplot as plt
%matplotlib inline
target['page_type'].hist()
# Note: there's some node imbalance but not that much
# create an empty nx Graph first -- the degree histogram below needs it
G = nx.Graph()
# add all nodes to the graph, with page_type as the node attribute
for it, cat in zip(target['id'], target['page_type']):
    G.add_node(it, page_type=cat)
# add all edges, no edge attributes required rn
for n1, n2 in zip(edges['id_1'], edges['id_2']):
    G.add_edge(n1, n2)
# visualizing the degree histogram will also give us an insight into the graph
from collections import Counter
degree_sequence = sorted([d for n, d in G.degree()], reverse=True)
degree_count = Counter(degree_sequence)
deg, cnt = zip(*degree_count.items())
fig, ax = plt.subplots(figsize=(20, 10))
plt.bar(deg, cnt, width=0.80, color="b")
plt.title("Degree Histogram")
plt.ylabel("Count")
plt.xlabel("Degree")
plt.xlim(0, 200)
plt.show()
# nx.draw(G, with_labels=True)
# Note: Viewing such a huge graph will take a lot of time to process
# so it's better to save the huge graph into a file, and visualize a subgraph instead
from matplotlib import pylab
def save_graph(graph, file_name):
plt.figure(num=None, figsize=(20, 20), dpi=80)
plt.axis('off')
fig = plt.figure(1)
pos = nx.spring_layout(graph)
nx.draw_networkx_nodes(graph,pos)
nx.draw_networkx_edges(graph,pos)
nx.draw_networkx_labels(graph,pos)
cut = 1.00
xmax = cut * max(xx for xx, yy in pos.values())
ymax = cut * max(yy for xx, yy in pos.values())
plt.xlim(0, xmax)
plt.ylim(0, ymax)
plt.savefig(file_name, bbox_inches="tight")
pylab.close()
del fig
SG = G.subgraph(nodes=[0, 1, 22208, 22271, 234])
nx.draw(SG, with_labels = True)
# Note that it is a very sparse graph (~0.0006 density)
save_graph(G, 'page-page.pdf')
# Note: this will take a lot of time to run
```
# Official Land Price (공시지가) K-NN
```
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score, classification_report
import sklearn.neighbors as neg
import matplotlib.pyplot as plt
import folium
import json
import sklearn.preprocessing as pp
import warnings
warnings.filterwarnings('ignore')
```
**Data preprocessing**
```
# Outlier removal and standardization are needed
all_data = pd.read_csv("data-set/house_clean02.csv", dtype=str, encoding='euc-kr') # encoding: 'euc-kr'
# Add official land price per unit area # --> columns are strings, so convert with astype
all_data['y_price'] = all_data['공시지가'].astype(np.float32) / all_data['면적'].astype(np.float32)
# X: (x, y) coordinates / y: official land price per unit area #
X = all_data.iloc[:, 9:11].astype(np.float32) # shape (28046, 2)
y = all_data['y_price'] # shape (28046, )
## Robust scaling ## --> normalization that is robust to outliers (unlike min-max)
rs = pp.RobustScaler()
y_scale = rs.fit_transform(np.array(y).reshape(-1, 1))
## Preprocess apartment transaction-price data ## --> shape (281684, 7)
all_data_apt = pd.read_csv("data-set/total_Apt.csv", sep=",", encoding='euc-kr')
all_data_apt['price_big'] = all_data_apt['Price'] / all_data_apt['Howbig']
X_apt = all_data_apt.iloc[:, -3:-1] # shape (281684, 2)
y_apt_scale = rs.fit_transform(np.array(all_data_apt['price_big']).reshape(-1, 1)) # shape(281684, 1)
## Preprocess townhouse transaction-price data ##
all_data_town = pd.read_csv("data-set/total_Townhouse01.csv", sep=",", encoding="cp949")
all_data_town['price_big'] = all_data_town['Price'] / all_data_town['Howbig']
X_town = all_data_town.iloc[:, -3:-1] # (x, y) coordinates
y_town_scale = rs.fit_transform(np.array(all_data_town['price_big']).reshape(-1, 1))
## Preprocess daycare-center data ##
all_center = json.load(open("data-set/allmap.json", encoding='utf-8'))
c_header = all_center['DESCRIPTION'] # split the JSON into header and data
c_data = all_center['DATA']
c_alldf = pd.DataFrame(c_data)
# Select only the columns we need #
c_alldf = c_alldf[['cot_conts_name', 'cot_coord_x', 'cot_coord_y', 'cot_value_01', 'cot_gu_name']]
c_alldf.columns = ['name', 'x', 'y', 'kinds', 'location']
x_test = c_alldf[c_alldf['kinds'] == "국공립"] # keep only public ("국공립") centers
## KNN regressor ##
k_list = [i for i in range(15, 26, 2)]
# minkowski with p=2 (Euclidean distance) // mean regression --> KNeighborsRegressor #
knn_fit = neg.KNeighborsRegressor(n_neighbors=k_list[0], p=2, metric='minkowski')
knn_fit.fit(X, y_scale)
knn_fit.fit(X_apt, y_apt_scale)  # note: this second fit overwrites the first, so only the apartment data is used
## predict --> take the mean price of the k nearest neighbours ##
pred = knn_fit.predict(x_test.iloc[:, 1:3])
x_test['소득추정'] = pred
for i in range(len(x_test['location'])):
    x_test['location'].values[i] = x_test['location'].values[i][:-1] # strip the trailing '구' (district suffix)
## Estimate mean income per district via groupby ##
mean = x_test.groupby(['location'], as_index=False).mean()
```
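The regression step above reduces to "average the target values of the k nearest points". A stdlib-only sketch of that idea, using made-up toy coordinates rather than the housing data:

```python
import math

def knn_regress(points, targets, query, k):
    """Predict the mean target value of the k nearest training points (Euclidean)."""
    order = sorted(range(len(points)),
                   key=lambda i: math.dist(points[i], query))
    nearest = order[:k]
    return sum(targets[i] for i in nearest) / k

points  = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
targets = [10.0, 20.0, 30.0, 100.0]

# the three points near the origin dominate the prediction for k=3
print(knn_regress(points, targets, (0.1, 0.1), k=3))  # 20.0
```

`KNeighborsRegressor` with `metric='minkowski'` and `p=2` computes exactly this Euclidean-distance mean, only with efficient neighbour search.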
**Data visualization**
```
# Work around broken Korean font rendering in Matplotlib #
from matplotlib import font_manager, rc
font_name = font_manager.FontProperties(fname="c:/Windows/Fonts/malgun.ttf").get_name()
rc('font', family=font_name)
## Compare the mean estimates for each k ## main code --> estimate district means
# The ranking changes depending on whether official land prices, apartment sale prices, or townhouse sale prices are used #
plt.figure()
sortList = []
for i in range(len(k_list)):
    knn_fit = neg.KNeighborsRegressor(n_neighbors=k_list[i], p=2, metric='minkowski')
    knn_fit.fit(X, y_scale)
    knn_fit.fit(X_apt, y_apt_scale)  # the last fit wins; swap in the line below to use townhouse data instead
    # knn_fit.fit(X_town, y_town_scale)
    x_test["predK%i" %k_list[i]] = knn_fit.predict(x_test.iloc[:, 1:3])
    mean = x_test.groupby(['location'], as_index=False).mean()
    price_pred = pd.DataFrame(mean.iloc[:, -1])
    price_pred.index = mean['location']
    sortList.append(price_pred)
    plt.plot(price_pred)
plt.legend(k_list)
plt.rcParams['axes.grid'] = True
plt.rcParams["figure.figsize"] = (16,6)
plt.show()
## Sort each k's estimates by district ##
for sort in sortList:
    print(sort.sort_values(by=[sort.columns[0]], axis=0), end="\n=====================\n")
```
# NEST by Example - An Introduction to the Neural Simulation Tool NEST Version 2.12.0
# Introduction
NEST is a simulator for networks of point neurons, that is, neuron
models that collapse the morphology (geometry) of dendrites, axons,
and somata into either a single compartment or a small number of
compartments <cite data-cite="Gewaltig2007">(Gewaltig and Diesmann, 2007)</cite>. This simplification is useful for
questions about the dynamics of large neuronal networks with complex
connectivity. In this text, we give a practical introduction to neural
simulations with NEST. We describe how network models are defined and
simulated, how simulations can be run in parallel, using multiple
cores or computer clusters, and how parts of a model can be
randomized.
The development of NEST started in 1994 under the name SYNOD to
investigate the dynamics of a large cortex model, using
integrate-and-fire neurons <cite data-cite="SYNOD">(Diesmann et al., 1995)</cite>. At that time the only
available simulators were NEURON <cite data-cite="Hine:1997(1179)">(Hines and Carnevale, 1997)</cite> and GENESIS <cite data-cite="Bower95a">(Bower and Beeman, 1995)</cite>
, both focusing on morphologically detailed neuron
models, often using data from microscopic reconstructions.
Since then, the simulator has been under constant development. In
2001, the Neural Simulation Technology Initiative was founded to
disseminate our knowledge of neural simulation technology. The
continuing research of the member institutions into algorithms for the
simulation of large spiking networks has resulted in a number of
influential publications. The algorithms and techniques developed are
not only implemented in the NEST simulator, but have also found their
way into other prominent simulation projects, most notably the NEURON
simulator (for the Blue Brain Project: <cite data-cite="Migliore06_119">Migliore et al., 2006</cite>) and
IBM's C2 simulator <cite data-cite="Ananthanarayanan09">(Ananthanarayanan et al. 2009)</cite>.
Today, in 2017, there are several simulators for large spiking
networks to choose from <cite data-cite="Brette2007">(Brette et al., 2007)</cite>, but NEST remains the
best-established simulator with the largest developer community.
A NEST simulation consists of three main components:
* **Nodes** are all neurons, devices, and also
sub-networks. Nodes have a dynamic state that changes over time and
that can be influenced by incoming *events*.
* **Events** are pieces of information of a particular
type. The most common event is the spike-event. Other event types
are voltage events and current events.
* **Connections** are communication channels between
nodes. Only if one node is connected to another node, can they
exchange events. Connections are weighted, directed, and specific to
one event type. Directed means that events can flow only in one
direction. The node that sends the event is called *source* and
the node that receives the event is called *target*. The weight
determines how strongly an event will influence the target node. A
second parameter, the *delay*, determines how long an event
needs to travel from source to target.
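These three concepts are independent of any particular simulator. As a toy illustration (plain Python, emphatically not NEST code), a weighted, delayed, directed connection can be modelled as an event delivered to the target at time `t + delay`, carrying the connection's weight:

```python
import heapq
from itertools import count

class Node:
    def __init__(self, name):
        self.name = name
        self.inbox = []        # (arrival_time, weight) of received events
        self.connections = []  # outgoing (target, weight, delay) triples

    def connect(self, target, weight, delay):
        self.connections.append((target, weight, delay))

def simulate(initial_spikes):
    """Deliver each spike event along every outgoing connection at t + delay."""
    tie = count()  # tie-breaker so heapq never compares Node objects
    queue = [(t, next(tie), node) for t, node in initial_spikes]
    heapq.heapify(queue)
    while queue:
        t, _, source = heapq.heappop(queue)
        for target, weight, delay in source.connections:
            target.inbox.append((t + delay, weight))

a, b = Node('A'), Node('B')
a.connect(b, weight=1.5, delay=1.0)  # directed: events flow from A to B only
simulate([(0.0, a)])
print(b.inbox)  # [(1.0, 1.5)]
```

Because the connection is directed, `b` receives an event from `a` but not vice versa; the weight and delay travel with the event, exactly as in the conceptual model above.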
In the next sections, we will illustrate how to use NEST, using
examples with increasing complexity. Each of the examples is
self-contained.
# First steps
We begin by starting Python. For interactive sessions, we here use
the IPython shell <cite data-cite="Pere:2007(21)">(Pérez and Granger, 2007)</cite>. It is convenient,
because you can edit the command line and access previously typed
commands using the up and down keys. However, all examples in this
chapter work equally well without IPython. For data analysis and
visualization, we also recommend the Python packages Matplotlib
<cite data-cite="Hunt:2007(90)">(Hunter, 2007)</cite> and NumPy <cite data-cite="Olip:Guid">(Oliphant, 2006)</cite>.
Our first simulation investigates the response of one
integrate-and-fire neuron to an alternating current and Poisson spike
trains from an excitatory and an inhibitory source. We record the
membrane potential of the neuron to observe how the stimuli influence
the neuron.
In this model, we inject a sine current with a frequency of 2 Hz and
an amplitude of 100 pA into a neuron. At the same time, the neuron
receives random spiking input from two sources known as Poisson
generators. One Poisson generator represents a large population of
excitatory neurons and the other a population of inhibitory
neurons. The rate for each Poisson generator is set as the product of
the assumed number of synapses per target neuron received from the
population and the average firing rate of the source neurons.
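For instance, the generator rates used in the script below can be read as such a product; the synapse counts and per-neuron rates here are illustrative assumptions chosen to reproduce those rates, not values stated in the text:

```python
# hypothetical decomposition: 1000 synapses per population, firing at 70 and 20 spikes/s
n_exc_synapses, exc_rate = 1000, 70.0
n_inh_synapses, inh_rate = 1000, 20.0

print(n_exc_synapses * exc_rate)  # 70000.0 -> the excitatory Poisson generator rate
print(n_inh_synapses * inh_rate)  # 20000.0 -> the inhibitory Poisson generator rate
```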
The small network is simulated for 1000 milliseconds, after which the
time course of the membrane potential during this period is plotted. For this, we use the `pylab` plotting routines of Python's Matplotlib package.
The Python code for this small model is shown below.
```
%matplotlib inline
import nest
import nest.voltage_trace
nest.ResetKernel()
neuron = nest.Create('iaf_psc_alpha')
sine = nest.Create('ac_generator', 1,
{'amplitude': 100.0,
'frequency': 2.0})
noise = nest.Create('poisson_generator', 2,
[{'rate': 70000.0},
{'rate': 20000.0}])
voltmeter = nest.Create('voltmeter', 1,
{'withgid': True})
nest.Connect(sine, neuron)
nest.Connect(voltmeter, neuron)
nest.Connect(noise[:1], neuron, syn_spec={'weight': 1.0, 'delay': 1.0})
nest.Connect(noise[1:], neuron, syn_spec={'weight': -1.0, 'delay': 1.0})
nest.Simulate(1000.0)
nest.voltage_trace.from_device(voltmeter);
```
We will now go through the simulation script and explain the
individual steps. The first two lines `import` the modules `nest` and its sub-module `voltage_trace`. The `nest` module must be imported in every interactive session
and in every Python script in which you wish to use NEST. NEST is a
C++ library that provides a simulation kernel, many neuron and synapse
models, and the simulation language interpreter SLI. The library which
links the NEST simulation language interpreter to the Python
interpreter is called PyNEST <cite data-cite="Eppler09_12">(Eppler et al. 2009)</cite>.
Importing `nest` as shown above puts all NEST commands in
the *namespace* `nest`. Consequently, all commands must
be prefixed with the name of this namespace.
Next we use the command `Create`
to produce one node of the type `iaf_psc_alpha`. As you see in subsequent lines, `Create` is used for all node types.
The first argument, `'iaf_psc_alpha'`, is a string, denoting
the type of node that you want to create.
The second parameter of `Create` is an integer representing
the number of nodes you want to create. Thus, whether you want one neuron
or 100,000, you only need to call `Create` once.
`nest.Models()` provides a list of all available node and
connection models.
The third parameter is either a dictionary or a list of dictionaries,
specifying the parameter settings for the created nodes. If only one
dictionary is given, the same parameters are used for all created
nodes. If an array of dictionaries is given, they are used in order
and their number must match the number of created nodes. This variant
of `Create` is used to set the parameters
for the Poisson noise generator, the sine generator (for the
alternating current), and the voltmeter. All parameters of a model
that are not set explicitly are initialized with default values. You
can display them with
`nest.GetDefaults(model_name)`.
Note that only the first
parameter of `Create` is mandatory.
`Create` returns a list of integers, the global
identifiers (or GID for short) of each node created. The GIDs are
assigned in the order in which nodes are created. The first node is
assigned GID 1, the second node GID 2, and so on.
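A toy mock of this bookkeeping (plain Python, not the actual NEST implementation) makes the contract concrete: GIDs are handed out consecutively, starting at 1, across all `Create` calls:

```python
class MockKernel:
    """Hands out consecutive global identifiers, starting at 1, like Create."""
    def __init__(self):
        self._next_gid = 1

    def create(self, model, n=1):
        gids = list(range(self._next_gid, self._next_gid + n))
        self._next_gid += n
        return gids

kernel = MockKernel()
print(kernel.create('iaf_psc_alpha'))         # [1]
print(kernel.create('poisson_generator', 2))  # [2, 3]
print(kernel.create('voltmeter'))             # [4]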
After creating the neuron, sine and noise generators, and voltmeter, we connect the nodes. First we connect the sine generator and the voltmeter
to the neuron. The command `Connect` takes two or more
arguments. The first argument is a list of source nodes. The second
argument is a list of target nodes. `Connect` iterates these
two lists and connects the corresponding pairs.
A node appears in the source position of `Connect` if it sends events
to the target node. In our example, the sine generator is in the
source position because it injects an alternating current into the
neuron. The voltmeter is in the source position because it polls the
membrane potential of the neuron. Other devices may be in the target
position, e.g., the spike detector which receives spike events from a
neuron. If in doubt about the order, consult the documentation of the
respective nodes using NEST's help system. For example, to read the
documentation of the ac\_generator you can type
`nest.help('ac_generator')`.
Dismiss the help by typing `q`.
Next, we use the command `Connect` with the
`syn_spec` parameter to connect the
two Poisson generators to the neuron. In this example the synapse
specification `syn_spec` provides only weight and delay
values, in this case $\pm 1$ pA input current amplitude and $1$ ms
delay. We will see more advanced uses of `syn_spec` below.
After connecting the nodes, the network is ready. We call the NEST function `Simulate` which runs the
network for 1000 milliseconds. The function returns after the
simulation is finished. Then, function `voltage_trace` is
called to plot the membrane potential of the neuron. If you are
running the script for the first time, you may have to tell Python to display
the figure by typing `pylab.show()`.
If you want to inspect how your network looks so far, you can print
it using the command `PrintNetwork()`:
```
nest.PrintNetwork(2)
```
The input argument here is the depth to print to. Output from this function will be in the terminal, and you should get
```
+-[0] root dim=[5]
|
+-[1] iaf_psc_alpha
+-[2] ac_generator
+-[3]...[4] poisson_generator
+-[5] voltmeter
```
If you run the example a second time, NEST will leave the existing
nodes intact and will create a second instance for each node. To start
a new NEST session without leaving Python, you can call
`nest.ResetKernel()`. This function will erase the existing
network so that you can start from scratch.
# Example 1: A sparsely connected recurrent network
Next we discuss a model of activity dynamics in a local cortical
network proposed by <cite data-cite="Brunel00">Brunel (2000)</cite>. We only describe those parts of
the model which are necessary to understand its NEST
implementation. Please refer to the original paper for further details.
The local cortical network consists of two neuron populations: a
population of $N_E$ excitatory neurons and a population of $N_I$
inhibitory neurons. To mimic the cortical ratio of 80% excitatory neurons
and 20% inhibitory neurons, we assume that $N_E=$ 8000 and $N_I=$ 2000. Thus,
our local network has a total of 10,000 neurons.
For both the excitatory and the inhibitory population, we use the same
integrate-and-fire neuron model with current-based synapses. Incoming
excitatory and inhibitory spikes displace the membrane potential $V_m$
by $J_{E}$ and $J_I$, respectively. If $V_m$ reaches the threshold
value $V_{\text{th}}$, the membrane potential is reset to $V_{\text{reset}}$,
a spike is sent with delay $D=$ 1.5 ms to all post-synaptic
neurons, and the neuron remains refractory for $\tau_{\text{rp}}=$ 2.0 ms.
The neurons are mutually connected with a probability of
10%. Specifically, each neuron receives input from $C_{E}= 0.1 \cdot N_{E}$ excitatory and $C_I=0.1\cdot N_{I}$ inhibitory neurons (see figure below). The inhibitory synaptic weights
$J_I$ are chosen with respect to the excitatory synaptic weights $J_E$
such that
$J_I = -g \cdot J_E$,
with $g=$ 5.0 in this example.
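Putting the numbers from the model description together (the population sizes, connection probability, and $g$ all appear in the text and parameter table; the excitatory weight $J_E$ below is a hypothetical placeholder, since its value is not given here):

```python
N_E, N_I = 8000, 2000                       # 80% excitatory, 20% inhibitory
C_E, C_I = int(0.1 * N_E), int(0.1 * N_I)   # 10% connection probability
g = 5.0
J_E = 0.1                                    # hypothetical excitatory weight
J_I = -g * J_E

print(N_E + N_I)   # 10000 neurons in total
print(C_E, C_I)    # 800 200
print(J_I)         # -0.5
```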
<table class="image">
<caption align="bottom">Sketch of the network model proposed by <cite data-cite="Brunel00">Brunel (2000)</cite>. The network consists of three populations: $N_E$ excitatory neurons (circle labelled E), $N_I$ inhibitory neurons (circle labelled I), and a population of identical, independent Poisson processes (PGs) representing activity from outside the network. Arrows represent connections between the network nodes. Triangular arrow-heads represent excitatory and round arrow-heads represent inhibitory connections. The numbers at the start and end of each arrow indicate the multiplicity of the connection.</caption>
<tr><td><img src="figures/brunel_detailed_external_single2.jpg" alt="Brunel detailed network"/></td></tr>
</table>
In addition to the sparse recurrent inputs from within the local
network, each neuron receives randomly timed excitatory input, mimicking
the input from the rest of cortex. The random input is modelled as $C_E$
independent and identically distributed Poisson processes with rate
$\nu_{\text{ext}}$, or equivalently, by a single Poisson process with rate
$C_E \cdot \nu_{\text{ext}}$. Here, we set $\nu_{\text{ext}}$ to twice the
rate $\nu_{\text{th}}$ that is needed to drive a neuron to threshold
asymptotically. The details of the model are summarized in the tables below.
In the resulting plot you should see a raster plot of 50
excitatory neurons during the first 300 ms of simulated time. Time is
shown along the x-axis, neuron ID along the y-axis. At $t=0$, all
neurons are in the same state $V_m=0$ and hence there is no spiking
activity. The external stimulus rapidly drives the membrane potentials
towards the threshold. Due to the random nature of the external
stimulus, not all the neurons reach the threshold at the same
time. After a few milliseconds, the neurons start to spike irregularly at
roughly 40 spikes/s. In the original paper, this network
state is called the *asynchronous irregular state* <cite data-cite="Brunel00">(Brunel, 2000)</cite>.
### Summary of the network model
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;}
.tg th{font-family:Arial, sans-serif;font-size:14px;font-weight:normal;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;}
.tg .tg-header{vertical-align:top}
.tg .tg-yw4l{vertical-align:top}
</style>
<table class="tg" width=90%>
<tr>
<th class="tg-header" style="font-weight:bold;background-color:#000000;color:#ffffff;" colspan="2">A: Model Summary<br></th>
</tr>
<tr>
<td class="tg-yw4l">Populations</td>
<td class="tg-yw4l">Three: excitatory, inhibitory, external input</td>
</tr>
<tr>
<td class="tg-yw4l">Topology</td>
<td class="tg-yw4l">—</td>
</tr>
<tr>
<td class="tg-yw4l">Connectivity</td>
<td class="tg-yw4l">Random convergent connections with probability $P=0.1$ and fixed in-degree of $C_E=P N_E$ and $C_I=P N_I$.</td>
</tr>
<tr>
<td class="tg-yw4l">Neuron model</td>
<td class="tg-yw4l">Leaky integrate-and-fire, fixed voltage threshold, fixed absolute refractory time (voltage clamp).</td>
</tr>
<tr>
<td class="tg-yw4l">Channel models</td>
<td class="tg-yw4l">—</td>
</tr>
<tr>
<td class="tg-yw4l">Synapse model</td>
<td class="tg-yw4l">$\delta$-current inputs (discontinuous,voltage jumps)</td>
</tr>
<tr>
<td class="tg-yw4l">Plasticity</td>
<td class="tg-yw4l">—</td>
</tr>
<tr>
<td class="tg-yw4l">Input</td>
<td class="tg-yw4l">Independent fixed-rate Poisson spike trains to all neurons</td>
</tr>
<tr>
<td class="tg-yw4l">Measurements</td>
<td class="tg-yw4l">Spike activity</td>
</tr>
</table>
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;}
.tg th{font-family:Arial, sans-serif;font-size:14px;font-weight:normal;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;}
.tg .tg-iuc6{font-weight:bold}
.tg .tg-yw4l{vertical-align:top}
</style>
<table class="tg" width=90%>
<tr>
<th class="tg" style="font-weight:bold;background-color:#000000;color:#ffffff;" colspan="3">B: Populations<br></th>
</tr>
<tr>
<td class="tg-iuc6">**Name**</td>
<td class="tg-iuc6">**Elements**</td>
<td class="tg-iuc6">**Size**</td>
</tr>
<tr>
<td class="tg-031e">nodes_E</td>
<td class="tg-031e">`iaf_psc_delta` neuron<br></td>
<td class="tg-yw4l">$N_{\text{E}} = 4N_{\text{I}}$</td>
</tr>
<tr>
<td class="tg-031e">nodes_I</td>
<td class="tg-031e">`iaf_psc_delta` neuron<br></td>
<td class="tg-yw4l">$N_{\text{I}}$</td>
</tr>
<tr>
<td class="tg-yw4l">noise</td>
<td class="tg-yw4l">Poisson generator<br></td>
<td class="tg-yw4l">1</td>
</tr>
</table>
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;}
.tg th{font-family:Arial, sans-serif;font-size:14px;font-weight:normal;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;}
.tg .tg-e3zv{font-weight:bold}
.tg .tg-hgcj{font-weight:bold;text-align:center}
.tg .tg-9hbo{font-weight:bold;vertical-align:top}
.tg .tg-yw4l{vertical-align:top}
</style>
<table class="tg" width=90%>
<tr>
<th class="tg" style="font-weight:bold;background-color:#000000;color:#ffffff;" colspan="4" >C: Connectivity<br></th>
</tr>
<tr>
<td class="tg-e3zv">**Name**</td>
<td class="tg-e3zv">**Source**</td>
<td class="tg-9hbo">**Target**</td>
<td class="tg-9hbo">**Pattern**</td>
</tr>
<tr>
<td class="tg-031e">EE</td>
<td class="tg-031e">nodes_E<br></td>
<td class="tg-yw4l">nodes_E<br></td>
<td class="tg-yw4l">Random convergent $C_{\text{E}}\rightarrow 1$, weight $J$, delay $D$</td>
</tr>
<tr>
<td class="tg-031e">IE<br></td>
<td class="tg-031e">nodes_E<br></td>
<td class="tg-yw4l">nodes_I<br></td>
<td class="tg-yw4l">Random convergent $C_{\text{E}}\rightarrow 1$, weight $J$, delay $D$</td>
</tr>
<tr>
<td class="tg-yw4l">EI</td>
<td class="tg-yw4l">nodes_I</td>
<td class="tg-yw4l">nodes_E</td>
<td class="tg-yw4l">Random convergent $C_{\text{I}}\rightarrow 1$, weight $-gJ$, delay $D$</td>
</tr>
<tr>
<td class="tg-yw4l">II</td>
<td class="tg-yw4l">nodes_I</td>
<td class="tg-yw4l">nodes_I</td>
<td class="tg-yw4l">Random convergent $C_{\text{I}}\rightarrow 1$, weight $-gJ$, delay $D$</td>
</tr>
<tr>
<td class="tg-yw4l">Ext</td>
<td class="tg-yw4l">noise</td>
<td class="tg-yw4l">nodes_E $\cup$ nodes_I</td>
<td class="tg-yw4l">Divergent $1 \rightarrow N_{\text{E}} + N_{\text{I}}$, weight $J$, delay $D$</td>
</tr>
</table>
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;}
.tg th{font-family:Arial, sans-serif;font-size:14px;font-weight:normal;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;}
.tg .tg-e3zv{font-weight:bold}
.tg .tg-hgcj{font-weight:bold;text-align:center}
.tg .tg-yw4l{vertical-align:top}
</style>
<table class="tg" width=90%>
<tr>
<th class="tg" style="font-weight:bold;background-color:#000000;color:#ffffff;" colspan="2">D: Neuron and Synapse Model<br></th>
</tr>
<tr>
<td class="tg-031e">**Name**</td>
<td class="tg-031e">`iaf_psc_delta` neuron<br></td>
</tr>
<tr>
<td class="tg-031e">**Type**<br></td>
<td class="tg-031e">Leaky integrate- and-fire, $\delta$-current input</td>
</tr>
<tr>
<td class="tg-031e">**Sub-threshold dynamics**<br></td>
<td class="tg-031e">\begin{equation*}
\begin{array}{rll}
\tau_m \dot{V}_m(t) = & -V_m(t) + R_m I(t) &\text{if not refractory}\; (t > t^*+\tau_{\text{rp}}) \\[1ex]
V_m(t) = & V_{\text{r}} & \text{while refractory}\; (t^*<t\leq t^*+\tau_{\text{rp}}) \\[2ex]
I(t) = & {\frac{\tau_m}{R_m} \sum_{\tilde{t}} w
\delta(t-(\tilde{t}+D))}
\end{array}
\end{equation*}<br></td>
</tr>
<tr>
<td class="tg-yw4l">**Spiking**<br></td>
<td class="tg-yw4l">If $V_m(t-)<V_{\theta} \wedge V_m(t+)\geq V_{\theta}$<br> 1. set $t^* = t$<br> 2. emit spike with time-stamp $t^*$<br></td>
</tr>
</table>
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;}
.tg th{font-family:Arial, sans-serif;font-size:14px;font-weight:normal;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;}
.tg .tg-e3zv{font-weight:bold}
.tg .tg-hgcj{font-weight:bold;text-align:center}
</style>
<table class="tg" width=90%>
<tr>
<th class="tg" style="font-weight:bold;background-color:#000000;color:#ffffff;" colspan="2">E: Input<br></th>
</tr>
<tr>
<td class="tg-031e">**Type**<br></td>
<td class="tg-031e">**Description**<br></td>
</tr>
<tr>
<td class="tg-031e">Poisson generator<br></td>
<td class="tg-031e">Fixed rate $\nu_{\text{ext}} \cdot C_{\text{E}}$, one generator providing independent input to each target neuron</td>
</tr>
</table>
<table class="tg" width=90%>
<tr>
<th class="tg" style="font-weight:bold;background-color:#000000;color:#ffffff;" colspan="2">F: Measurements<br></th>
</tr>
<tr>
<td class="tg-031e" colspan="2">Spike activity as raster plots, rates and ''global frequencies'', no details given</td>
</tr>
</table>
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;}
.tg th{font-family:Arial, sans-serif;font-size:14px;font-weight:normal;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;}
.tg .tg-yw4l{vertical-align:top;}
</style>
<table class="tg" width=90%>
<tr>
<th class="tg" style="font-weight:bold;background-color:#000000;color:#ffffff;" colspan="2">G: Network Parameters<br></th>
</tr>
<tr>
<td class="tg-yw4l" style="border-right-style:hidden"><b>Parameter</b></td>
<td class="tg-yw4l" style="text-align:right;"><b>Value</b><br></td>
</tr>
<tr>
<td class="tg-yw4l" style="border-right-style:hidden">Number of excitatory neurons $N_E$</td>
<td class="tg-yw4l" style="text-align:right;">8000</td>
</tr>
<tr>
<td class="tg-yw4l" style="border-right-style:hidden;border-top-style:hidden">Number of inhibitory neurons $N_I$</td>
<td class="tg-yw4l" style="text-align:right;border-top-style:hidden;">2000</td>
</tr>
<tr>
<td class="tg-yw4l" style="border-right-style:hidden;border-top-style:hidden">Excitatory synapses per neuron $C_E$</td>
<td class="tg-yw4l" style="text-align:right;border-top-style:hidden;">800</td>
</tr>
<tr>
<td class="tg-yw4l" style="border-right-style:hidden;border-top-style:hidden">Inhibitory synapses per neuron $C_I$</td>
<td class="tg-yw4l" style="text-align:right;border-top-style:hidden;">200</td>
</tr>
</table>
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;}
.tg th{font-family:Arial, sans-serif;font-size:14px;font-weight:normal;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;}
.tg .tg-e3zv{font-weight:bold}
.tg .tg-hgcj{font-weight:bold;text-align:center}
.tg .tg-yw4l{vertical-align:top}
</style>
<table class="tg" width=90%>
<tr>
<th class="tg" style="font-weight:bold;background-color:#000000;color:#ffffff;" colspan="2">H: Neuron Parameters<br></th>
</tr>
<tr>
<td class="tg-031e" style="border-right-style:hidden"><b>Parameter</b></td>
<td class="tg-031e" style="text-align:right;"><b>Value</b><br></td>
</tr>
<tr>
<td class="tg-yw4l" style="border-right-style:hidden">Membrane time constant $\tau_m$</td>
<td class="tg-yw4l" style="text-align:right;">20 ms</td>
</tr>
<tr>
<td class="tg-yw4l" style="border-right-style:hidden;border-top-style:hidden">Refractory period $\tau_{\text{rp}}$</td>
<td class="tg-yw4l" style="text-align:right;border-top-style:hidden;">2 ms</td>
</tr>
<tr>
<td class="tg-yw4l" style="border-right-style:hidden;border-top-style:hidden">Firing threshold $V_{\text{th}}$</td>
<td class="tg-yw4l" style="text-align:right;border-top-style:hidden;">20 mV</td>
</tr>
<tr>
<td class="tg-yw4l" style="border-right-style:hidden;border-top-style:hidden">Membrane capacitance $C_m$</td>
<td class="tg-yw4l" style="text-align:right;border-top-style:hidden;">1 pF</td>
</tr>
<tr>
<td class="tg-yw4l" style="border-right-style:hidden;border-top-style:hidden">Resting potential $V_E$</td>
<td class="tg-yw4l" style="text-align:right;border-top-style:hidden;">0 mV</td>
</tr>
<tr>
<td class="tg-yw4l" style="border-right-style:hidden;border-top-style:hidden">Reset potential $V_{\text{reset}}$</td>
<td class="tg-yw4l" style="text-align:right;border-top-style:hidden;">10 mV</td>
</tr>
<tr>
<td class="tg-yw4l" style="border-right-style:hidden;border-top-style:hidden">Excitatory PSP amplitude $J_E$</td>
<td class="tg-yw4l" style="text-align:right;border-top-style:hidden;">0.1 mV</td>
</tr>
<tr>
<td class="tg-yw4l" style="border-right-style:hidden;border-top-style:hidden">Inhibitory PSP amplitude $J_I$</td>
<td class="tg-yw4l" style="text-align:right;border-top-style:hidden;">-0.5 mV</td>
</tr>
<tr>
<td class="tg-yw4l" style="border-right-style:hidden;border-top-style:hidden">Synaptic delay $D$</td>
<td class="tg-yw4l" style="text-align:right;border-top-style:hidden;">1.5 ms</td>
</tr>
<tr>
<td class="tg-yw4l" style="border-right-style:hidden;border-top-style:hidden">Background rate $\eta$ (in units of the threshold rate $\nu_{\text{th}}$)</td>
<td class="tg-yw4l" style="text-align:right;border-top-style:hidden;">2.0</td>
</tr>
</table>
## NEST Implementation
We now show how this model is implemented in NEST. Along the way, we
explain the required steps and NEST commands in more detail so that
you can apply them to your own models.
### Preparations
The first three lines import NEST, a NEST module for raster plots, and
the plotting package `pylab`. We then assign the various model
parameters to variables.
```
import nest
import nest.raster_plot
import pylab
nest.ResetKernel()
g = 5.0        # ratio of IPSP to EPSP amplitude: J_I/J_E
eta = 2.0      # rate of external population in multiples of threshold rate
delay = 1.5    # synaptic delay in ms
tau_m = 20.0   # membrane time constant in ms
V_th = 20.0    # spike threshold in mV
N_E = 8000     # number of excitatory neurons
N_I = 2000     # number of inhibitory neurons
N_neurons = N_E + N_I
C_E = int(N_E / 10)   # number of excitatory synapses per neuron
C_I = int(N_I / 10)   # number of inhibitory synapses per neuron
J_E = 0.1       # EPSP amplitude in mV
J_I = -g * J_E  # IPSP amplitude in mV
nu_ex = eta * V_th / (J_E * C_E * tau_m)   # rate of an external neuron in ms^-1
p_rate = 1000.0 * nu_ex * C_E              # rate of the external population in s^-1
```
In the second-to-last line, we compute the firing rate
`nu_ex` ($\nu_{\text{ext}}$) of a neuron in the external population
as the product of the scaling constant `eta` and the threshold rate
$\nu_{\text{th}}$, i.e. the minimal constant input rate needed to
drive a neuron to its firing threshold.
In the final line, we compute the combined input rate due to the
external population. With $C_E$ incoming synapses per neuron, the total rate is
simply the product `nu_ex*C_E`. The factor 1000.0 in the
product changes the units from spikes per ms to spikes per second.
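Plugging in the parameter values from the tables above, this computation can be checked with a few lines of plain Python (a sanity check only; NEST is not required):

```python
# Check the external-input rate computation with the parameters used above.
tau_m = 20.0   # membrane time constant in ms
V_th = 20.0    # firing threshold in mV
J_E = 0.1      # excitatory PSP amplitude in mV
C_E = 800      # excitatory synapses per neuron
eta = 2.0      # external rate in multiples of the threshold rate

nu_th = V_th / (J_E * C_E * tau_m)   # threshold rate in spikes per ms
nu_ex = eta * nu_th                  # rate of one external neuron
p_rate = 1000.0 * nu_ex * C_E        # combined rate in spikes per second

print(nu_th, nu_ex, p_rate)          # 0.0125 0.025 20000.0
```

So each neuron receives external input at a total rate of 20 kHz, matching the value passed to the Poisson generator below.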
Next, we prepare the simulation kernel of NEST
```
nest.SetKernelStatus({'print_time': True})
```
The command `SetKernelStatus`
modifies parameters of the simulation kernel. The argument is a Python
dictionary with *key*:*value* pairs. Here, we set the NEST
kernel to print the progress of the simulation time during simulation. Note that the progress is output only to the terminal.
### Creating neurons and devices
As a rule of thumb, we recommend that you create all elements in your
network, i.e., neurons, stimulating devices and recording devices
first, before creating any connections.
```
nest.SetDefaults('iaf_psc_delta',
                 {'C_m': 1.0,
                  'tau_m': tau_m,
                  't_ref': 2.0,
                  'E_L': 0.0,
                  'V_th': V_th,
                  'V_reset': 10.0})
```
Here we change the parameters of the neuron model we want to use from the
built-in values to the defaults for our investigation.
`SetDefaults` expects two parameters. The first is a string,
naming the model for which the default parameters should be
changed. Our neuron model for this simulation is the simplest
integrate-and-fire model in NEST's repertoire:
`'iaf_psc_delta'`. The second parameter is a dictionary with
parameters and their new values, entries separated by commas. All
parameter values are taken from Brunel's paper <cite data-cite="Brunel00">(Brunel, 2000)</cite> and we
insert them directly for brevity. Only the membrane time constant
`tau_m` and the threshold potential `V_th` are
read from variables, because these values are needed in several places.
```
nodes = nest.Create('iaf_psc_delta', N_neurons)
nodes_E = nodes[:N_E]
nodes_I = nodes[N_E:]
noise = nest.Create('poisson_generator', 1, {'rate': p_rate})
nest.SetDefaults('spike_detector', {'to_file': True})
spikes = nest.Create('spike_detector', 2,
                     [{'label': 'brunel-py-ex'},
                      {'label': 'brunel-py-in'}])
spikes_E = spikes[:1]
spikes_I = spikes[1:]
```
As before we create the neurons with `Create`, which returns a list of the global IDs which
are consecutive numbers from 1 to `N_neurons`.
We split this range into excitatory and inhibitory neurons. We then select the first `N_E`
elements from the list `nodes` and assign them to the
variable `nodes_E`. This list now holds the GIDs of the
excitatory neurons.
Similarly, in the next line, we assign the range from position
`N_E` to the end of the list to the variable
`nodes_I`. This list now holds the GIDs of all inhibitory
neurons. The selection is carried out using standard Python list commands. You
may want to consult the Python documentation for more details.
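The slicing used here is ordinary Python sequence slicing. On a toy list of five hypothetical GIDs:

```python
# Toy version of splitting `nodes` into excitatory and inhibitory GIDs.
N_E = 3                    # pretend network: 3 excitatory, 2 inhibitory neurons
nodes = [1, 2, 3, 4, 5]    # Create() returns consecutive global IDs
nodes_E = nodes[:N_E]      # the first N_E elements
nodes_I = nodes[N_E:]      # everything from position N_E onwards
print(nodes_E, nodes_I)    # [1, 2, 3] [4, 5]
```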
Next, we create and connect the external population and some devices
to measure the spiking activity in the network.
We create a device known as a
`poisson_generator`, which produces a spike train governed
by a Poisson process at a given rate. We use the third parameter of
`Create` to initialize the rate of the Poisson process to
the population rate `p_rate` which we have previously computed.
If a Poisson generator is connected to $n$ targets, it generates $n$
independent and identically distributed spike trains. Thus, we only
need one generator to model an entire population of randomly firing
neurons.
To observe how the neurons in the recurrent network respond to the
random spikes from the external population, we create two spike detectors.
By default, spike detectors record to memory but not to file. We override this default behaviour to also record
to file. Then we create one detector for the
excitatory neurons and one for the inhibitory neurons.
The default file names are automatically generated from the device type and
its global ID. We use the third argument of `Create` to give each
spike detector a `'label'`, which will be part of the name of the
output file written by the detector. Since two devices are created, we supply
a list of dictionaries.
In the second to last line, we store the GID of the first spike
detector in a one-element list and assign it to the variable
`spikes_E`. In the next line, we do the same for the second
spike detector that is dedicated to the inhibitory population.
### Connecting the network
Once all network elements are in place, we connect them.
```
nest.CopyModel('static_synapse_hom_w',
               'excitatory',
               {'weight': J_E,
                'delay': delay})
nest.Connect(nodes_E, nodes,
             {'rule': 'fixed_indegree',
              'indegree': C_E},
             'excitatory')
nest.CopyModel('static_synapse_hom_w',
               'inhibitory',
               {'weight': J_I,
                'delay': delay})
nest.Connect(nodes_I, nodes,
             {'rule': 'fixed_indegree',
              'indegree': C_I},
             'inhibitory')
```
We create a new connection
type `'excitatory'` by copying the built-in connection type
`'static_synapse_hom_w'` while changing its default values
for *weight* and *delay*. The command `CopyModel`
expects either two or three arguments: the name of an existing neuron
or synapse model, the name of the new model, and optionally a
dictionary with the new default values of the new model.
The connection type `'static_synapse_hom_w'` uses a single
weight value for all its synapses. This saves memory for
networks in which the weight is identical for all connections. Later (in 'Randomness in NEST') we will use a different synapse model to
implement randomized weights and delays.
Having created and parameterized an appropriate synapse model, we draw
the incoming excitatory connections for each neuron. The function
`Connect` expects four arguments: a list of
source nodes, a list of target nodes, a connection rule, and a synapse
specification. Some connection rules, in particular
`'one_to_one'` and `'all_to_all'`, require no
parameters and can be specified as strings. All other connection rules
must be specified as a dictionary, which must at least contain the key
`'rule'` naming a connection rule;
`nest.ConnectionRules()` shows all connection rules. The
remaining dictionary entries depend on the particular rule. We use the
`'fixed_indegree'` rule, which creates exactly
`indegree` connections to each target neuron; in previous
versions of NEST, this connectivity was provided by
`RandomConvergentConnect`.
The final argument specifies the synapse model to be used, here the
`'excitatory'` model we defined previously.
In the final lines we
repeat the same steps for the inhibitory connections: we create a new
connection type and draw the incoming inhibitory connections for all neurons.
```
nest.Connect(noise, nodes, syn_spec='excitatory')
N_rec = 50
nest.Connect(nodes_E[:N_rec], spikes_E)
nest.Connect(nodes_I[:N_rec], spikes_I)
```
Here we use `Connect` to
connect the Poisson generator to all nodes of the local network. Since
these connections are excitatory, we use the `'excitatory'`
connection type. Finally, we connect a subset of excitatory and
inhibitory neurons to the spike detectors to record from them. If no connection rule
is given, `Connect` connects all sources to all targets (`all_to_all` rule),
i.e., the noise generator is connected to all neurons
(previously `DivergentConnect`), while in the second to last line, all recorded
excitatory neurons are connected to the `spikes_E` spike detector
(previously `ConvergentConnect`).
Our network consists of 10,000 neurons, all of which have the same
activity statistics due to the random connectivity. Thus, it suffices
to record from a representative sample of neurons, rather than from
the entire network. Here, we choose to record from 50 neurons and
assign this number to the variable `N_rec`. We then connect
the first 50 excitatory neurons to their spike detector. Again, we use
standard Python list operations to select `N_rec` neurons
from the list of all excitatory nodes. Alternatively, we could select
50 neurons at random, but since the neuron order has no meaning in
this model, the two approaches would yield qualitatively the same
results. Finally, we repeat this step for the inhibitory neurons.
### Simulating the network
Now everything is set up and we can run the simulation.
```
simtime = 300
nest.Simulate(simtime)
ex_events, in_events = nest.GetStatus(spikes, 'n_events')
events_to_rate = 1000. / simtime / N_rec
rate_ex = ex_events * events_to_rate
print('Excitatory rate: {:.2f} Hz'.format(rate_ex))
rate_in = in_events * events_to_rate
print('Inhibitory rate: {:.2f} Hz'.format(rate_in))
nest.raster_plot.from_device(spikes_E, hist=True);
```
First we select a simulation time of
300 milliseconds and assign it to a variable. Next, we call the NEST
command `Simulate` to run the simulation for 300 ms. During
simulation, the Poisson generators send spikes into the network
and cause the neurons to fire. The spike detectors receive spikes
from the neurons and write them to a file and to memory.
When the function returns, the simulation time has progressed by
300 ms. You can call `Simulate` as often as you like and
with different arguments. NEST will resume the simulation at the point
where it was last stopped. Thus, you can partition your simulation time
into small epochs to regularly inspect the progress of your model.
After the simulation is finished, we compute the firing rate of the
excitatory neurons and the inhibitory
neurons. Finally, we call the NEST
function `raster_plot` to produce the raster plot. `raster_plot` has two
modes. `raster_plot.from_device` expects the global ID of a
spike detector. `raster_plot.from_file` expects the name of
a data file. This is useful to plot data without repeating a
simulation.
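The conversion from event counts to rates used above is plain arithmetic; here it is with made-up numbers (the spike count below is hypothetical, not a simulation result):

```python
# Converting a recorded spike count into a mean firing rate per neuron.
simtime = 300.0    # simulated time in ms
N_rec = 50         # number of recorded neurons
ex_events = 450    # hypothetical spike count from the excitatory detector

events_to_rate = 1000.0 / simtime / N_rec   # the factor 1000 converts ms to s
rate_ex = ex_events * events_to_rate        # mean rate in spikes/s per neuron
print('Excitatory rate: {:.2f} Hz'.format(rate_ex))   # Excitatory rate: 30.00 Hz
```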
# Parallel simulation
Large network models often require too much time or computer memory to
be conveniently simulated on a single computer. For example, if we increase the number of
neurons in the previous model to 100,000, there will be a total of
$10^9$ connections, which won't fit into the memory of most computers.
Similarly, if we use plastic synapses (see Example 3: plastic networks) and run the model for minutes or hours of
simulated time, the execution times become uncomfortably long.
To address this issue, NEST has two modes of parallelization:
multi-threading and distribution. Multi-threaded and distributed
simulation can be used in isolation or in combination <cite data-cite="Ples:2007(672)">(Plesser et al., 2007)</cite>
, and both modes allow you to connect and run
networks more quickly than in the serial case.
Multi-threading means that NEST uses all
available processors or cores of the computer. Today, most desktop
computers and even laptops have at least two processing cores. Thus,
you can use NEST's multi-threaded mode to make your simulations
execute more quickly whilst still maintaining the convenience of
interactive sessions. Since a given computer has a fixed memory size,
multi-threaded simulation can only reduce execution times. It cannot
solve the problem that large models exhaust the computer's memory.
Distribution means that NEST uses
many computers in a network or computer cluster. Since each computer
contributes memory, distributed simulation allows you to simulate
models that are too large for a single computer. However, in
distributed mode it is not currently possible to use NEST
interactively.
In most cases, writing a simulation script to be run in parallel is as
easy as writing one to be executed on a single processor. Only minimal
changes are required, as described below, and you can ignore the fact
that the simulation is actually executed by more than one core or
computer. However, in some cases your knowledge about the distributed
nature of the simulation can help you improve efficiency even
further. For example, in the distributed mode, all computers execute
the same simulation script. We can improve performance if the
script running on a specific computer only tries to execute commands
relating to nodes that are represented on the
same computer. An example of this technique is shown below in Example 2.
To switch NEST into
multi-threaded mode, you only have to add one line to your simulation
script:
```
nest.ResetKernel()
n = 4 # number of threads
nest.SetKernelStatus({'local_num_threads': n})
```
Here, `n` is the number of threads you want to use. It is
important that you set the number of threads *before* you create
any nodes. If you try to change the number of threads after nodes were
created, NEST will issue an error.
A good choice for the number of threads is the number of cores or
processors on your computer. If your processor supports
hyperthreading, you can select an even higher number of threads.
The distributed mode of NEST is particularly useful for large simulations
for which not only the processing speed, but also the memory of a single
computer are insufficient. The distributed mode of NEST uses the
Message Passing Interface <cite data-cite="MPI2009">(MPI Forum, 2009)</cite>, a library that must be
installed on your computer network when you install NEST. For details,
please refer to NEST documentation at [www.nest-simulator.org](http://www.nest-simulator.org/documentation/).
The distributed mode of NEST is also easy to use. All you need to do
is start NEST with the MPI command `mpirun`:
`mpirun -np m python script.py`
where `m` is the number of MPI processes that should be
started. One sensible choice for `m` is the total number of
cores available on the cluster. Another reasonable choice is the
number of physically distinct machines, utilizing their cores through
multithreading as described above. This can be useful on clusters of
multi-core computers.
In NEST, processes and threads are both mapped to *virtual processes* <cite data-cite="Ples:2007(672)">(Plesser et al., 2007)</cite>. If a
simulation is started with `m` MPI processes and
`n` threads on each process, then there are
`m`$\times$`n` virtual processes. You can obtain
the number of virtual processes in a simulation with
```
nest.GetKernelStatus('total_num_virtual_procs')
```
The virtual process concept is reflected in the labelling of output
files. For example, the data files for the excitatory spikes produced
by the network discussed here follow the form
`brunel-py-ex-x-y.gdf`, where `x` is the ID of the
data recording node and `y` is the ID of the virtual
process.
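This naming pattern can be captured in a tiny helper (the function is ours, purely illustrative; the GID and VP values below are made up):

```python
def gdf_filename(label, recorder_gid, vp):
    """Build the data-file name pattern described above (illustrative only)."""
    return '{}-{}-{}.gdf'.format(label, recorder_gid, vp)

print(gdf_filename('brunel-py-ex', 10003, 1))   # brunel-py-ex-10003-1.gdf
```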
# Randomness in NEST
NEST has built-in random number sources that can be used for tasks
such as randomizing spike trains or network connectivity. In this
section, we discuss some of the issues related to the use of random
numbers in parallel simulations. In example 2, we illustrate how to randomize
parameters in a network.
Let us first consider the case that a simulation script does not
explicitly generate random numbers. In this case, NEST produces
identical simulation results for a given number of virtual processes,
irrespective of how the virtual processes are partitioned into threads
and MPI processes. The only difference between the outputs of two
simulations whose configurations of threads and processes yield the
same number of virtual processes is the result of query commands such
as `GetStatus`: these gather data over threads on the local machine,
but not over remote machines.
In the case that random numbers are explicitly generated in the
simulation script, more care must be taken to produce results that are
independent of the parallel configuration. Consider, for example, a
simulation where two threads have to draw a random number from a
single random number generator. Since only one thread can access the
random number generator at a time, the outcome of the simulation will
depend on the access order.
Ideally, all random numbers in a simulation should come from a single
source. In a serial simulation this is trivial to implement, but in
parallel simulations this would require shipping a large number of
random numbers from a central random number generator (RNG) to all
processes. This is impractical. Therefore, NEST uses one independent
random number generator on each
virtual process. Not all random number generators can be used in
parallel simulations, because many cannot reliably produce
uncorrelated parallel streams. Fortunately, recent years have seen
great progress in RNG research and there is a range of random number
generators that can be used with great fidelity in parallel
applications.
Based on this knowledge, each virtual process (VP) in NEST has its own
RNG. Numbers from these RNGs are used to
* choose random connections
* create random spike trains (e.g., `poisson_generator`)
or random currents
(e.g., `noise_generator`).
In order to randomize model parameters in a PyNEST script, it is
convenient to use the random number generators provided by
NumPy. To ensure consistent results for a given number
of virtual processes, each virtual process should use a separate
Python RNG. Thus, in a simulation running on $N_{vp}$ virtual processes,
there should be $2N_{vp}+1$ RNGs in total:
* the global NEST RNG;
* one RNG per VP in NEST;
* one RNG per VP in Python.
We need to provide separate seed values for each of these generators.
Modern random number generators work equally well for all seed
values. We thus suggest the following approach to choosing seeds: For
each simulation run, choose a master seed $msd$ and seed the RNGs
with seeds $msd$, $msd+1$, $\dots$ $msd+2N_{vp}$. Any two master seeds must
differ by at least $2N_{vp}+1$ to avoid correlations between simulations.
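This recipe can be written down as a small bookkeeping helper (the function name is ours; only the seed arithmetic is taken from the text):

```python
def make_seeds(msd, n_vp):
    """Partition the 2*n_vp+1 seeds msd, msd+1, ..., msd+2*n_vp."""
    py_seeds = list(range(msd, msd + n_vp))          # one NumPy RNG per VP
    grng_seed = msd + n_vp                           # global NEST RNG
    vp_seeds = list(range(msd + n_vp + 1, msd + 2 * n_vp + 1))  # per-VP NEST RNGs
    return py_seeds, grng_seed, vp_seeds

py_seeds, grng_seed, vp_seeds = make_seeds(1000, 4)
print(py_seeds, grng_seed, vp_seeds)   # nine consecutive seeds in total
```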
By default, NEST uses Knuth's lagged Fibonacci RNG, which has the nice
property that each seed value provides a different sequence of
some $2^{70}$ random numbers <cite data-cite="Knut:Art(2)(1998)">(Knuth, 1998, Ch. 3.6)</cite>. Python
uses the Mersenne Twister MT19937 generator <cite data-cite="Mats:1998(3)">(Matsumoto and Nishimura, 1998)</cite>,
which provides no explicit guarantees, but given the enormous state
space of this generator it appears astronomically unlikely that
neighbouring integer seeds would yield overlapping number
sequences. For a recent overview of RNGs, see <cite data-cite="Lecu:2007(22)">L'Ecuyer and Simard (2007)</cite>. For general introductions to random number
generation, see <cite data-cite="Gent:Rand(2003)">Gentle (2003)</cite>, <cite data-cite="Knut:Art(2)(1998)">Knuth (1998, Ch. 3)</cite> or <cite data-cite="Ples:2010(399)">Plesser (2010)</cite>.
# Example 2: Randomizing neurons and synapses
Let us now consider how to randomize some neuron and synapse
parameters in the sparsely connected network model introduced in Example 1. We shall
* explicitly seed the random number generators;
* randomize the initial membrane potential of all neurons;
* randomize the weights of the recurrent excitatory connections.
We begin by setting up the parameters
```
import numpy
import pylab
import nest
nest.ResetKernel()
# Network parameters. These are given in Brunel (2000) J.Comp.Neuro.
g = 5.0 # Ratio of IPSP to EPSP amplitude: J_I/J_E
eta = 2.0 # rate of external population in multiples of threshold rate
delay = 1.5 # synaptic delay in ms
tau_m = 20.0 # Membrane time constant in ms
V_th = 20.0 # Spike threshold in mV
N_E = 8000
N_I = 2000
N_neurons = N_E + N_I
C_E = int(N_E / 10) # number of excitatory synapses per neuron
C_I = int(N_I / 10) # number of inhibitory synapses per neuron
J_E = 0.1
J_I = -g * J_E
nu_ex = eta * V_th / (J_E * C_E * tau_m) # rate of an external neuron in ms^-1
p_rate = 1000.0 * nu_ex * C_E # rate of the external population in s^-1
# Set parameters of the NEST simulation kernel
nest.SetKernelStatus({'print_time': True,
                      'local_num_threads': 2})
```
So far the code is similar to Example 1, but now we insert code to seed the
random number generators:
```
# Create and seed RNGs
msd = 1000 # master seed
n_vp = nest.GetKernelStatus('total_num_virtual_procs')
msdrange1 = range(msd, msd + n_vp)
pyrngs = [numpy.random.RandomState(s) for s in msdrange1]
msdrange2 = range(msd + n_vp + 1, msd + 1 + 2 * n_vp)
nest.SetKernelStatus({'grng_seed': msd + n_vp,
                      'rng_seeds': list(msdrange2)})
```
We first define the master seed `msd` and then obtain the number of
virtual processes `n_vp`. Then we create a list of `n_vp` NumPy random number generators
with seeds `msd`, `msd+1`, $\dots$
`msd+n_vp-1`. The next two lines set new seeds for the
built-in NEST RNGs: the global RNG is seeded with
`msd+n_vp`, the per-virtual-process RNGs with
`msd+n_vp+1`, $\dots$, `msd+2*n_vp`. Note that the
seeds for the per-virtual-process RNGs must always be passed as a
list, even in a serial simulation.
Then we create the nodes
```
nest.SetDefaults('iaf_psc_delta',
                 {'C_m': 1.0,
                  'tau_m': tau_m,
                  't_ref': 2.0,
                  'E_L': 0.0,
                  'V_th': V_th,
                  'V_reset': 10.0})
nodes = nest.Create('iaf_psc_delta', N_neurons)
nodes_E = nodes[:N_E]
nodes_I = nodes[N_E:]
noise = nest.Create('poisson_generator', 1, {'rate': p_rate})
spikes = nest.Create('spike_detector', 2,
                     [{'label': 'brunel-py-ex'},
                      {'label': 'brunel-py-in'}])
spikes_E = spikes[:1]
spikes_I = spikes[1:]
```
After creating the neurons as before, we insert the following code to randomize the membrane potential
of all neurons:
```
node_info = nest.GetStatus(nodes)
local_nodes = [(ni['global_id'], ni['vp'])
               for ni in node_info if ni['local']]
for gid, vp in local_nodes:
    nest.SetStatus([gid], {'V_m': pyrngs[vp].uniform(-V_th, V_th)})
```
In this code, we meet the concept of *local* nodes for the first
time <cite data-cite="Ples:2007(672)">(Plesser et al., 2007)</cite>. In serial and multi-threaded
simulations, all nodes are local. In an MPI-based simulation with $m$
MPI processes, each MPI process represents and is responsible for
updating (approximately) $1/m$-th of all nodes—these are the local
nodes for each process. We obtain status
information for each node; for local nodes, this will be full information,
for non-local nodes this will only be minimal information. We then use a list
comprehension to create
a list of `gid` and `vp` tuples for all local
nodes. The `for`-loop then iterates over this list and draws
for each node a membrane potential value uniformly distributed in
$[-V_{\text{th}}, V_{\text{th}})$, i.e., $[-20\text{mV},
20\text{mV})$. We draw the initial membrane potential for each node
from the NumPy RNG assigned to the virtual process `vp`
responsible for updating that node.
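The essential property — each value depends only on the RNG of the VP in charge of the node, not on how VPs are spread over threads and processes — can be illustrated without NEST. The round-robin assignment `gid % n_vp` below is a simplification for illustration, not NEST's actual node-to-VP mapping:

```python
import numpy

V_th = 20.0
n_vp = 4
# One NumPy RNG per virtual process, seeded with consecutive seeds (sketch).
pyrngs = [numpy.random.RandomState(seed) for seed in range(1000, 1000 + n_vp)]

# Draw an initial membrane potential for hypothetical nodes 1..8, always
# using the RNG of the VP assumed to be in charge of the node.
V_m = {gid: pyrngs[gid % n_vp].uniform(-V_th, V_th) for gid in range(1, 9)}
print(sorted(V_m))   # [1, 2, 3, 4, 5, 6, 7, 8]
```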
As the next step, we create excitatory recurrent connections with the
same connection rule as in the original script, but with randomized
weights.
```
nest.CopyModel('static_synapse', 'excitatory')
nest.Connect(nodes_E, nodes,
             {'rule': 'fixed_indegree',
              'indegree': C_E},
             {'model': 'excitatory',
              'delay': delay,
              'weight': {'distribution': 'uniform',
                         'low': 0.5 * J_E,
                         'high': 1.5 * J_E}})
```
The first difference to the original is that we base the excitatory
synapse model on the built-in `static_synapse` model instead
of `static_synapse_hom_w`, as the latter implies equal
weights for all synapses. The second difference is that we randomize
the initial weights. To this end, we have replaced the plain synapse
model name `'excitatory'` with a synapse specification dictionary. Such
a dictionary must always contain the key `'model'` providing
the synapse model to use. In addition, we specify a fixed delay, and a
distribution from which to draw the weights, here a uniform
distribution over $[J_E/2, 3J_E/2)$. NEST will automatically use the
correct random number generator for each weight.
To see all available random distributions, please run
`nest.sli_run('rdevdict info')`. To access documentation for
an individual distribution, run, e.g., `nest.help('rdevdict::binomial')`.
These distributions can be
used for all parameters of a synapse.
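What the `'uniform'` weight specification amounts to can be mimicked with NumPy's own uniform distribution (an illustration of the sampling interval, not NEST's internal code; the seed is arbitrary):

```python
import numpy

J_E = 0.1
rng = numpy.random.RandomState(42)   # arbitrary seed for the illustration
# Draw 1000 sample weights from the same interval [J_E/2, 3*J_E/2).
w = rng.uniform(low=0.5 * J_E, high=1.5 * J_E, size=1000)
print(w.min() >= 0.5 * J_E, w.max() < 1.5 * J_E)   # True True
```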
We then make the rest of the connections.
```
nest.CopyModel('static_synapse_hom_w',
               'inhibitory',
               {'weight': J_I,
                'delay': delay})
nest.Connect(nodes_I, nodes,
             {'rule': 'fixed_indegree',
              'indegree': C_I},
             'inhibitory')
# connect one noise generator to all neurons
nest.CopyModel('static_synapse_hom_w',
               'excitatory_input',
               {'weight': J_E,
                'delay': delay})
nest.Connect(noise, nodes, syn_spec='excitatory_input')
# connect all recorded E/I neurons to the respective detector
N_rec = 50   # number of neurons to record from
nest.Connect(nodes_E[:N_rec], spikes_E)
nest.Connect(nodes_I[:N_rec], spikes_I)
```
Before starting our simulation, we want to visualize the randomized
initial membrane potentials and weights. To this end, we insert the
following code just before we start the simulation:
```
pylab.figure(figsize=(12, 3))
pylab.subplot(121)
V_E = nest.GetStatus(nodes_E[:N_rec], 'V_m')
pylab.hist(V_E, bins=10)
pylab.xlabel('Membrane potential V_m [mV]')
pylab.title('Initial distribution of membrane potentials')
pylab.subplot(122)
ex_conns = nest.GetConnections(nodes_E[:N_rec],
                               synapse_model='excitatory')
w = nest.GetStatus(ex_conns, 'weight')
pylab.hist(w, bins=100)
pylab.xlabel('Synaptic weight [mV]')
pylab.title(
    'Distribution of synaptic weights ({:d} synapses)'.format(len(w)));
```
Here, `nest.GetStatus` retrieves the membrane potentials of all 50
recorded neurons. The data is then displayed as a histogram with 10
bins.
The function `nest.GetConnections` here finds
all connections that
* have one of the 50 recorded excitatory neurons as source;
* have any local node as target;
* and are of type `excitatory`.
Next, we use `GetStatus()` to
obtain the weights of these connections. Running the script in a
single MPI process, we record approximately 50,000 weights, which we
display in a histogram with 100 bins.
Note that the code above
will return complete results only when run in a single MPI
process. Otherwise, only data from local neurons or connections with
local targets will be obtained. It is currently not possible to
collect recorded data across MPI processes in NEST. In distributed
simulations, you should thus let recording devices write data to files
and collect the data after the simulation is complete.
Comparing the
raster plot from the simulation with randomized initial membrane
potentials with the same plot for the
original network reveals that the
membrane potential randomization has prevented the synchronous onset
of activity in the network.
We now run the simulation.
```
simtime = 300. # how long shall we simulate [ms]
nest.Simulate(simtime)
```
As a final point, we make a slight improvement to the rate computation of the original
script. Spike detectors count only spikes from neurons on the local
MPI process. Thus, the original computation is correct only for a
single MPI process. To obtain meaningful results when simulating
on several MPI processes, we count how many of the `N_rec`
recorded nodes are local and use that number to compute the rates:
```
events = nest.GetStatus(spikes, 'n_events')
N_rec_local_E = sum(nest.GetStatus(nodes_E[:N_rec], 'local'))
rate_ex = events[0] / simtime * 1000.0 / N_rec_local_E
print('Excitatory rate : {:.2f} Hz'.format(rate_ex))
N_rec_local_I = sum(nest.GetStatus(nodes_I[:N_rec], 'local'))
rate_in = events[1] / simtime * 1000.0 / N_rec_local_I
print('Inhibitory rate : {:.2f} Hz'.format(rate_in))
```
Each MPI process then reports the rate of activity of its locally
recorded nodes.
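The count-to-rate conversion used for both populations can be captured in a small helper; the function name is ours, the formula mirrors the script:

```python
def mean_rate_hz(n_events, simtime_ms, n_local_neurons):
    """Mean firing rate in spikes/s per neuron, from a spike count
    recorded over simtime_ms from n_local_neurons local neurons."""
    return n_events / simtime_ms * 1000.0 / n_local_neurons
```

With this helper, the rate lines reduce to calls such as `mean_rate_hz(events[0], simtime, N_rec_local_E)`.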
# Example 3: Plastic Networks
NEST provides synapse models with a variety of short-term and
long-term dynamics. To illustrate this, we extend the sparsely
connected network introduced in Example 1 with
randomized synaptic weights as described in section 'Randomness in NEST' to incorporate spike-timing-dependent plasticity <cite data-cite="Bi98">(Bi and Poo, 1998)</cite> at its recurrent excitatory-excitatory synapses.
We create all nodes and randomize their initial membrane potentials
as before. We then generate a plastic synapse model for the excitatory-excitatory
connections and a static synapse model for the excitatory-inhibitory
connections:
```
nest.ResetKernel()
# Synaptic parameters
STDP_alpha = 2.02 # relative strength of STDP depression w.r.t potentiation
STDP_Wmax = 3 * J_E # maximum weight of plastic synapse
# Simulation parameters
N_vp = 8 # number of virtual processes to use
base_seed = 10000 # increase in intervals of at least 2*n_vp+1
N_rec = 50 # Number of neurons to record from
data2file = True # whether to record data to file
simtime = 300. # how long shall we simulate [ms]
# Set parameters of the NEST simulation kernel
nest.SetKernelStatus({'print_time': True,
'local_num_threads': 2})
# Create and seed RNGs
ms = 1000 # master seed
n_vp = nest.GetKernelStatus('total_num_virtual_procs')
pyrngs = [numpy.random.RandomState(s) for s in range(ms, ms + n_vp)]
nest.SetKernelStatus({'grng_seed': ms + n_vp,
                      'rng_seeds': list(range(ms + n_vp + 1, ms + 1 + 2 * n_vp))})
# Create nodes -------------------------------------------------
nest.SetDefaults('iaf_psc_delta',
{'C_m': 1.0,
'tau_m': tau_m,
't_ref': 2.0,
'E_L': 0.0,
'V_th': V_th,
'V_reset': 10.0})
nodes = nest.Create('iaf_psc_delta', N_neurons)
nodes_E = nodes[:N_E]
nodes_I = nodes[N_E:]
noise = nest.Create('poisson_generator', 1, {'rate': p_rate})
spikes = nest.Create('spike_detector', 2,
[{'label': 'brunel_py_ex'},
{'label': 'brunel_py_in'}])
spikes_E = spikes[:1]
spikes_I = spikes[1:]
# randomize membrane potential
node_info = nest.GetStatus(nodes, ['global_id', 'vp', 'local'])
local_nodes = [(gid, vp) for gid, vp, islocal in node_info if islocal]
for gid, vp in local_nodes:
    nest.SetStatus([gid], {'V_m': pyrngs[vp].uniform(-V_th, V_th)})
nest.CopyModel('stdp_synapse_hom',
'excitatory_plastic',
{'alpha': STDP_alpha,
'Wmax': STDP_Wmax})
nest.CopyModel('static_synapse', 'excitatory_static')
```
Here, we set the parameters `alpha` and `Wmax` of
the synapse model but use the default settings for all its other
parameters. The `_hom` suffix in the synapse model name
indicates that all plasticity parameters such as `alpha` and
`Wmax` are shared by all synapses of this model.
We again use `nest.Connect` to create connections with
randomized weights:
```
nest.Connect(nodes_E, nodes_E,
{'rule': 'fixed_indegree',
'indegree': C_E},
{'model': 'excitatory_plastic',
'delay': delay,
'weight': {'distribution': 'uniform',
'low': 0.5 * J_E,
'high': 1.5 * J_E}})
nest.Connect(nodes_E, nodes_I,
{'rule': 'fixed_indegree',
'indegree': C_E},
{'model': 'excitatory_static',
'delay': delay,
'weight': {'distribution': 'uniform',
'low': 0.5 * J_E,
'high': 1.5 * J_E}})
nest.CopyModel('static_synapse',
'inhibitory',
{'weight': J_I,
'delay': delay})
nest.Connect(nodes_I, nodes,
{'rule': 'fixed_indegree',
'indegree': C_I},
'inhibitory')
# connect noise generator to all neurons
nest.CopyModel('static_synapse_hom_w',
'excitatory_input',
{'weight': J_E,
'delay': delay})
nest.Connect(noise, nodes, syn_spec='excitatory_input')
# connect all recorded E/I neurons to the respective detector
nest.Connect(nodes_E[:N_rec], spikes_E)
nest.Connect(nodes_I[:N_rec], spikes_I)
# Simulate -----------------------------------------------------
# Visualization of initial membrane potential and initial weight
# distribution only if we run on single MPI process
if nest.NumProcesses() == 1:
    pylab.figure(figsize=(12, 3))
    # membrane potential
    V_E = nest.GetStatus(nodes_E[:N_rec], 'V_m')
    V_I = nest.GetStatus(nodes_I[:N_rec], 'V_m')
    pylab.subplot(121)
    pylab.hist([V_E, V_I], bins=10)
    pylab.xlabel('Membrane potential V_m [mV]')
    pylab.legend(('Excitatory', 'Inhibitory'))
    pylab.title('Initial distribution of membrane potentials')
    pylab.draw()
    # weight of excitatory connections
    w = nest.GetStatus(nest.GetConnections(nodes_E[:N_rec],
                                           synapse_model='excitatory_plastic'),
                       'weight')
    pylab.subplot(122)
    pylab.hist(w, bins=100)
    pylab.xlabel('Synaptic weight w [pA]')
    pylab.title('Initial distribution of excitatory synaptic weights')
    pylab.draw()
else:
    print('Multiple MPI processes, skipping graphical output')
nest.Simulate(simtime)
events = nest.GetStatus(spikes, 'n_events')
# Before we compute the rates, we need to know how many of the recorded
# neurons are on the local MPI process
N_rec_local_E = sum(nest.GetStatus(nodes_E[:N_rec], 'local'))
rate_ex = events[0] / simtime * 1000.0 / N_rec_local_E
print('Excitatory rate : {:.2f} Hz'.format(rate_ex))
N_rec_local_I = sum(nest.GetStatus(nodes_I[:N_rec], 'local'))
rate_in = events[1] / simtime * 1000.0 / N_rec_local_I
print('Inhibitory rate : {:.2f} Hz'.format(rate_in))
```
After a period of simulation, we can access the plastic synaptic
weights for analysis:
```
if nest.NumProcesses() == 1:
    nest.raster_plot.from_device(spikes_E, hist=True)
    # weight of excitatory connections
    w = nest.GetStatus(nest.GetConnections(nodes_E[:N_rec],
                                           synapse_model='excitatory_plastic'),
                       'weight')
    pylab.figure(figsize=(12, 4))
    pylab.hist(w, bins=100)
    pylab.xlabel('Synaptic weight [pA]')
    # pylab.savefig('../figures/rand_plas_w.eps')
    # pylab.show()
else:
    print('Multiple MPI processes, skipping graphical output')
```
Plotting a histogram of the synaptic weights reveals that the initial
uniform distribution has begun to soften, as we can see in the plots resulting from the simulation above. Simulation for a longer period
results in an approximately Gaussian distribution of weights.
# Example 4: Classes and Automation Techniques
Running the examples line by line is possible in interactive sessions, but if you want to run a
simulation several times, possibly with different parameters, it is
more practical to write a script that can be loaded from Python.
Python offers a number of mechanisms to structure and organize not
only your simulations, but also your simulation data. The first step
is to re-write a model as a *class*. In Python, and other
object-oriented languages, a class is a data structure which groups
data and functions into a single entity. In our case, data are the
different parameters of a model, and functions are what you can do with
a model.
Classes allow you to solve various common problems in simulations:
* **Parameter sets** Classes are data structures and so are
ideally suited to hold the parameter set for a model. Class inheritance
allows you to modify one, a few, or all parameters while maintaining
the relation to the original model.
* **Model variations** Often, we want to change minor aspects of
a model. For example, in one version we have homogeneous connections
and in another we want randomized weights. Again, we can use class
inheritance to express both cases while maintaining the conceptual
relation between the models.
* **Data management** Often, we run simulations with different
parameters or other variations and forget to record which data file
belonged to which simulation. Python's class mechanisms provide a
simple solution.
We organize the model from Example 1 into a class,
by realizing that each simulation has five steps which can be factored
into separate functions:
1. Define all independent parameters of the model. Independent
parameters are those that have concrete values which do not depend
on any other parameter. For example, in the Brunel model, the
parameter $g$ is an independent parameter.
2. Compute all dependent parameters of the model. These are all
parameters or variables that have to be computed from other
quantities (e.g., the total number of neurons).
3. Create all nodes (neurons, devices, etc.)
4. Connect the nodes.
5. Simulate the model.
We translate these steps into a simple class layout that will fit
most models:
```
class Model(object):
    """Model description."""
    # Define all independent parameters here.

    def __init__(self):
        """Initialize the simulation, set up data directory."""

    def calibrate(self):
        """Compute all dependent parameters."""

    def build(self):
        """Create all nodes."""

    def connect(self):
        """Connect all nodes."""

    def run(self, simtime):
        """Build, connect, and simulate the model."""
```
In the following, we illustrate how to fit the model from Example 1 into this scaffold. The complete and commented
listing can be found in your NEST distribution.
```
class Brunel2000(object):
    """
    Implementation of the sparsely connected random network,
    described by Brunel (2000) J. Comp. Neurosci.
    Parameters are chosen for the asynchronous irregular
    state (AI).
    """
    g = 5.0
    eta = 2.0
    delay = 1.5
    tau_m = 20.0
    V_th = 20.0
    N_E = 8000
    N_I = 2000
    J_E = 0.1
    N_rec = 50
    threads = 2        # Number of threads for parallel simulation
    built = False      # True, if build() was called
    connected = False  # True, if connect() was called
    # more definitions follow...

    def __init__(self):
        """
        Initialize an object of this class.
        """
        self.name = self.__class__.__name__
        self.data_path = self.name + '/'
        nest.ResetKernel()
        if not os.path.exists(self.data_path):
            os.makedirs(self.data_path)
        print("Writing data to: " + self.data_path)
        nest.SetKernelStatus({'data_path': self.data_path})

    def calibrate(self):
        """
        Compute all parameter dependent variables of the
        model.
        """
        self.N_neurons = self.N_E + self.N_I
        self.C_E = self.N_E // 10  # integer division: indegrees must be int
        self.C_I = self.N_I // 10
        self.J_I = -self.g * self.J_E
        self.nu_ex = self.eta * self.V_th / (self.J_E * self.C_E * self.tau_m)
        self.p_rate = 1000.0 * self.nu_ex * self.C_E
        nest.SetKernelStatus({"print_time": True,
                              "local_num_threads": self.threads})
        nest.SetDefaults("iaf_psc_delta",
                         {"C_m": 1.0,
                          "tau_m": self.tau_m,
                          "t_ref": 2.0,
                          "E_L": 0.0,
                          "V_th": self.V_th,
                          "V_reset": 10.0})

    def build(self):
        """
        Create all nodes used in the model.
        """
        if self.built:
            return
        self.calibrate()
        # remaining code to create nodes
        self.built = True

    def connect(self):
        """
        Connect all nodes in the model.
        """
        if self.connected:
            return
        if not self.built:
            self.build()
        # remaining connection code
        self.connected = True

    def run(self, simtime=300):
        """
        Simulate the model for simtime milliseconds and print the
        firing rates of the network during this period.
        """
        if not self.connected:
            self.connect()
        nest.Simulate(simtime)
        # more code, e.g., to compute and print rates
```
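Before walking through the class, the arithmetic in `calibrate()` can be sanity-checked by hand with the parameter values from the listing (a standalone sketch, no NEST required):

```python
g, eta, tau_m, V_th = 5.0, 2.0, 20.0, 20.0
N_E, N_I, J_E = 8000, 2000, 0.1

C_E = N_E // 10                            # 800 excitatory inputs per neuron
C_I = N_I // 10                            # 200 inhibitory inputs per neuron
J_I = -g * J_E                             # inhibitory weight: -0.5
nu_ex = eta * V_th / (J_E * C_E * tau_m)   # required external rate per synapse
p_rate = 1000.0 * nu_ex * C_E              # combined Poisson rate: 20000.0 Hz
```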
A Python class is defined by the keyword `class` followed by
the class name, `Brunel2000` in this example. The parameter
`object` indicates that our class is a subclass of the general
Python `object` type. After the colon, we can supply a documentation string,
encased in triple quotes, which will be printed if we type
`help(Brunel2000)`. After the documentation string, we
define all independent parameters of the model as well as some global
variables for our simulation. We also introduce two Boolean variables
`built` and `connected` to ensure that the
functions `build()` and `connect()` are executed
exactly once.
Next, we define the class functions. Each function has at least the
parameter `self`, which is a reference to the
current class object. It is used to access the functions and
variables of the object.
The first function from the code above is also the first one that is called for
every class object. It has the somewhat cryptic name
`__init__()`. `__init__()` is automatically called by Python whenever a
new object of a class is created and before any other class function
is called. We use it to initialize the NEST simulation kernel and to
set up a directory where the simulation data will be stored.
The general idea is this: each simulation with a specific parameter
set gets its own Python class. We can then use the class name to
define the name of a data directory where all simulation data are
stored.
In Python it is possible to read out the name of a class from an
object. This is done with `self.name=self.__class__.__name__`. Don't worry
about the many underscores; they tell us that these names are provided
by Python. In the next line, we assign the class name plus a trailing
slash to the new object variable `data_path`. Note how all
class variables are prefixed with `self`.
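The name lookup is plain Python and can be tried in isolation, without NEST:

```python
class Base(object):
    def __init__(self):
        # the object's actual class name, even in subclasses
        self.name = self.__class__.__name__
        self.data_path = self.name + '/'

class Derived(Base):
    pass

print(Derived().data_path)  # -> Derived/
```

Because `__class__` refers to the object's actual class, a subclass automatically gets its own data directory without overriding `__init__()`.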
Next we reset the NEST simulation kernel to remove any leftovers from
previous simulations, using `nest.ResetKernel()`.
The following two lines use functions from the Python library
`os` which provides functions related to the operating
system. In the `if`-test we check whether a directory
with the same name as the class already exists. If not, we create a
new directory with this name. Finally, we set the data path property
of the simulation kernel. All recording devices use this location to
store their data. This does not mean that this directory is
automatically used for any other Python output functions. However,
since we have stored the data path in an object variable, we can use
it whenever we want to write data to file.
The other class functions are quite straightforward.
`Brunel2000.build()` accumulates all commands that
relate to creating nodes. The only addition is a piece of code that
checks whether the nodes were already created. The last line in this function sets the variable
`self.built` to `True` so that other functions
know that all nodes were created.
In function `Brunel2000.connect()` we first ensure that all nodes are
created before we attempt to draw any connection. Again, the last line sets a variable, telling other functions that the
connections were drawn successfully.
`Brunel2000.built` and `Brunel2000.connected` are
state variables that help you to make dependencies between functions
explicit and to enforce an order in which certain functions are
called.
The main function `Brunel2000.run()` uses both
state variables to build and connect the network.
In order to use the class we have to create an object of the class (after loading the file with the class
definition, if it is in another file):
```
import os
net = Brunel2000()
net.run(500)
```
Finally, we demonstrate how to use Python's class inheritance to
express different parameter configurations and versions of a model.
In the following listing, we derive a new class that simulates a
network where excitation and inhibition are exactly balanced,
i.e. $g=4$:
```
class Brunel_balanced(Brunel2000):
    """
    Exact balance of excitation and inhibition.
    """
    g = 4
```
Class `Brunel_balanced` is defined with class
`Brunel2000` as parameter. This means the new class inherits
all parameters and functions from class `Brunel2000`. Then,
we redefine the value of the parameter `g`. When we create
an object of this class, it will create its new data directory.
We can use the same mechanism to implement an alternative version of the
model. For example, instead of re-implementing the model with
randomized connection weights, we can use inheritance to change just the way
nodes are connected:
```
class Brunel_randomized(Brunel2000):
    """
    Like Brunel2000, but with randomized connection weights.
    """
    def connect(self):
        """
        Connect nodes with randomized weights.
        """
        # Code for randomized connections follows
```
Thus, using inheritance, we can easily keep track of different
parameter sets and model versions and their associated simulation
data. Moreover, since we keep all alternative versions, we also have a
simple versioning system that only depends on Python features, rather
than on third party tools or libraries.
The full implementation of the model using classes can be found in the
examples directory of your NEST distribution.
# How to continue from here
In this chapter we have presented a step-by-step introduction to NEST,
using concrete examples. The simulation scripts and more examples are
part of the examples included in the NEST distribution.
Information about individual PyNEST functions can be obtained with
Python's `help()` function (in IPython it suffices to append `?` to the function name). For example:
```
help(nest.Connect)
```
To learn more about NEST's node and synapse types, you can access
NEST's help system. NEST's online help still
uses a lot of syntax of SLI, NEST's native simulation language. However,
the general information is also valid for PyNEST.
Help and advice can also be found on NEST's user mailing list where
developers and users exchange their experience, problems, and ideas.
And finally, we encourage you to visit the web site of the NEST
Initiative at [www.nest-initiative.org](http://www.nest-initiative.org "NEST initiative").
# Acknowledgements
AM partially funded by BMBF grant 01GQ0420 to BCCN Freiburg, Helmholtz
Alliance on Systems Biology (Germany), Neurex, and the Junior
Professor Program of Baden-Württemberg. HEP partially supported by
RCN grant 178892/V30 eNeuro. HEP and MOG were partially supported by EU
grant FP7-269921 (BrainScaleS).
# References
Rajagopal Ananthanarayanan, Steven K. Esser, Horst D. Simon, and Dharmendra S. Modha. The cat is out of the bag: Cortical simulations with $10^9$ neurons and $10^{13}$ synapses. In Supercomputing 09: Proceedings of the ACM/IEEE SC2009 Conference on High Performance Networking and Computing,
Portland, OR, 2009.
G.-q. Bi and M.-m. Poo. Synaptic modifications in cultured hippocampal neurons: Dependence on spike
timing, synaptic strength, and postsynaptic cell type. Journal Neurosci, 18:10464–10472, 1998.
James M. Bower and David Beeman. The Book of GENESIS: Exploring realistic neural models with the
GEneral NEural SImulation System. TELOS, Springer-Verlag, New York, 1995.
R. Brette, M. Rudolph, T. Carnevale, M. Hines, D. Beeman, J.M. Bower, M. Diesmann, A. Morrison,
P.H. Goodman, F.C. Harris, and Others. Simulation of networks of spiking neurons: A review of
tools and strategies. Journal of computational neuroscience, 23(3):349–398, 2007. URL http://www.springerlink.com/index/C2J0350168Q03671.pdf.
Nicolas Brunel. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons.
Journal Comput Neurosci, 8(3):183–208, 2000.
M. Diesmann, M.-O. Gewaltig, and A. Aertsen. SYNOD: an environment for neural systems simulations.
Language interface and tutorial. Technical Report GC-AA-/95-3, Weizmann Institute of Science, The
Grodetsky Center for Research of Higher Brain Functions, Israel, May 1995.
J. M. Eppler, M. Helias, E. Muller, M. Diesmann, and M. Gewaltig. PyNEST: a convenient interface to
the NEST simulator. Front. Neuroinform., 2:12, 2009.
James E. Gentle. Random Number Generation and Monte Carlo Methods. Springer Science+Business
Media, New York, second edition, 2003.
Marc-Oliver Gewaltig and Markus Diesmann. NEST (Neural Simulation Tool). In Eugene Izhikevich, ed-
itor, Scholarpedia Encyclopedia of Computational Neuroscience, page 11204. Eugene Izhikevich, 2007.
URL http://www.scholarpedia.org/article/NEST_(Neural_Simulation_Tool).
Marc-Oliver Gewaltig, Abigail Morrison, and Hans Ekkehard Plesser. NEST by example: An introduction
to the neural simulation tool NEST. In Nicolas Le Novère, editor, Computational Systems Neurobiology, chapter 18, pages 533–558. Springer Science+Business Media, Dordrecht, 2012.
M. L. Hines and N. T. Carnevale. The NEURON simulation environment. Neural Comput, 9:1179–1209,
1997.
John D. Hunter. Matplotlib: A 2d graphics environment. Computing In Science & Engineering, 9(3):
90–95, May-Jun 2007.
D. E. Knuth. The Art of Computer Programming, volume 2. Addison-Wesley, Reading, MA, third
edition, 1998.
P. L’Ecuyer and R. Simard. TestU01: A C library for empirical testing of random number generators.
ACM Transactions on Mathematical Software, 33:22, 2007. URL http://www.iro.umontreal.ca/~simardr/testu01/tu01.html. Article 22, 40 pages.
M. Matsumoto and T. Nishimura. Mersenne twister: A 623-dimensonally equidistributed uniform pseu-
dorandom number generator. ACM Trans Model Comput Simul, 8:3–30, 1998.
M. Migliore, C. Cannia, W. W. Lytton, H. Markram, and M.L. Hines. Parallel network simulations with
NEURON. Journal Comput Neurosci, 21(2):119–223, 2006.
MPI Forum. MPI: A message-passing interface standard. Technical report, University of Ten-
nessee, Knoxville, TN, USA, September 2009. URL http://www.mpi-forum.org/docs/mpi-2.2/mpi22-report.pdf.
Travis E. Oliphant. Guide to NumPy. Trelgol Publishing (Online), 2006. URL http://www.tramy.us/numpybook.pdf.
Fernando Pérez and Brian E. Granger. Ipython: A system for interactive scientific computing. Computing in Science and Engineering, 9:21–29, 2007. ISSN 1521-9615.
H. E. Plesser, J. M. Eppler, A. Morrison, M. Diesmann, and M.-O. Gewaltig. Efficient parallel simulation of large-scale neuronal networks on clusters of multiprocessor computers. In A.-M. Kermarrec, L. Bougé and T. Priol, editors, Euro-Par 2007: Parallel Processing, volume 4641 of Lecture Notes inComputer Science, pages 672–681, Berlin, 2007. Springer-Verlag.
Hans Ekkehard Plesser. Generating random numbers. In Sonja Grün and Stefan Rotter, editors, Analysis of Parallel Spike Trains, Springer Series in Computational Neuroscience, chapter 19, pages 399–411.
Springer, New York, 2010.
To Queue or Not to Queue
=====================
In this notebook we look at the relative performance of a single queue vs multiple queues
using the [SimPy](https://simpy.readthedocs.io/en/latest/) framework, and explore various
common load balancing algorithms and their performance in M/G/k systems.
First we establish a baseline simulator which can simulate arbitrary latency distributions and
generative processes.
```
# %load src/request_simulator.py
import random
from collections import namedtuple
import numpy as np
import simpy
LatencyDatum = namedtuple(
'LatencyDatum',
('t_queued', 't_processing', 't_total')
)
class RequestSimulator(object):
    """ Simulates an M/G/k process common in request processing (computing)

    :param worker_desc: A tuple of (count, capacity) to construct workers with
    :param load_balancer: A function which takes the current request number
        and the list of workers and returns the index of the worker to send
        the next request to
    :param latency_fn: A function which takes the current request number and
        the worker that was assigned by the load balancer and returns the
        number of milliseconds a request took to process
    :param number_of_requests: The number of requests to run through the
        simulator
    :param request_per_s: The rate of requests per second.
    """
    def __init__(
            self, worker_desc, load_balancer, latency_fn,
            number_of_requests, request_per_s):
        self.worker_desc = worker_desc
        self.load_balancer = load_balancer
        self.latency_fn = latency_fn
        self.number_of_requests = int(number_of_requests)
        self.request_interval_ms = 1. / (request_per_s / 1000.)
        self.data = []

    def simulate(self):
        # Setup and start the simulation
        random.seed(1)
        np.random.seed(1)
        self.env = simpy.Environment()
        count, cap = self.worker_desc
        self.workers = []
        for i in range(count):
            worker = simpy.Resource(self.env, capacity=cap)
            worker.zone = "abc"[i % 3]
            self.workers.append(worker)
        self.env.process(self.generate_requests())
        self.env.run()

    def generate_requests(self):
        for i in range(self.number_of_requests):
            idx = self.load_balancer(i, self.workers)
            worker = self.workers[idx]
            response = self.process_request(i, worker)
            self.env.process(response)
            # Exponential inter-arrival times == Poisson arrivals
            arrival_interval = random.expovariate(
                1.0 / self.request_interval_ms
            )
            yield self.env.timeout(arrival_interval)

    def process_request(self, request_id, worker):
        """ Request arrives, possibly queues, and then processes """
        t_arrive = self.env.now
        with worker.request() as req:
            yield req
            t_start = self.env.now
            t_queued = t_start - t_arrive
            # Let the operation take whatever amount of time the latency
            # function tells us to
            yield self.env.timeout(self.latency_fn(request_id, worker))
            t_done = self.env.now
            t_processing = t_done - t_start
            t_total_response = t_done - t_arrive
            datum = LatencyDatum(t_queued, t_processing, t_total_response)
            self.data.append(datum)


def run_simulation(
        worker_desc, load_balancer, num_requests, request_per_s, latency_fn):
    simulator = RequestSimulator(
        worker_desc, load_balancer, latency_fn,
        num_requests, request_per_s
    )
    simulator.simulate()
    return simulator.data
# %load src/lb_policies.py
import random
import numpy as np
def queue_size(resource):
    return resource.count + len(resource.queue)


def random_lb(request_num, workers):
    return random.randint(0, len(workers) - 1)


def rr_lb(request_num, workers):
    return request_num % len(workers)


def choice_two_lb(request_num, workers):
    # "Power of two choices": sample two distinct workers at random and
    # send the request to the one with the shorter queue
    r1, r2 = np.random.choice(range(len(workers)), 2, replace=False)
    if queue_size(workers[r1]) < queue_size(workers[r2]):
        return r1
    return r2


def _zone(request):
    return "abc"[request % 3]


def choice_n_weighted(n):
    def lb(request_num, workers):
        choices = np.random.choice(range(len(workers)), n, replace=False)
        result = []
        for w in choices:
            # penalize workers in a different zone than the request
            weight = 1.0 if _zone(request_num) == workers[w].zone else 4.0
            result.append((w, weight * (1 + queue_size(workers[w]))))
        result = sorted(result, key=lambda x: x[1])
        return result[0][0]
    return lb


def choice_two_adjacent_lb(request_num, workers):
    r1 = random_lb(request_num, workers)
    if r1 + 2 >= len(workers):
        r2 = r1 - 1
        r3 = r1 - 2
    else:
        r2 = r1 + 1
        r3 = r1 + 2
    iq = [(queue_size(workers[i]), i) for i in (r1, r2, r3)]
    return sorted(iq)[0][1]


def shortest_queue_lb(request_num, workers):
    idx = 0
    for i in range(len(workers)):
        if queue_size(workers[i]) < queue_size(workers[idx]):
            idx = i
    return idx
# %load src/latency_distributions.py
import random
import numpy as np
def zone(request):
    return "abc"[request % 3]


def service(mean, slow, shape, slow_freq, slow_count):
    scale = mean - mean / shape
    scale_slow = slow - slow / shape

    def latency(request, worker):
        base = (np.random.pareto(shape) + 1) * scale
        if zone(request) != worker.zone:
            base += 0.8
        if (request % slow_freq) < slow_count:
            add_l = (np.random.pareto(shape) + 1) * scale_slow
        else:
            add_l = 0
        return base + add_l
    return latency


def pareto(mean, shape):
    # mean = scale * shape / (shape - 1)
    # solve for scale given mean and shape (aka skew)
    scale = mean - mean / shape

    def latency(request, worker):
        return (np.random.pareto(shape) + 1) * scale
    return latency


def expon(mean):
    def latency(request, worker):
        return random.expovariate(1.0 / mean)
    return latency
```
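Before running the full SimPy simulations, the intuition behind the two-choice policy can be previewed with a stdlib-only balls-into-bins toy (our own sketch, separate from the simulator above):

```python
import random

def max_load(n_balls, n_bins, two_choice, seed=1):
    """Place balls into bins and return the fullest bin's load."""
    rng = random.Random(seed)
    bins = [0] * n_bins
    for _ in range(n_balls):
        i = rng.randrange(n_bins)
        if two_choice:
            j = rng.randrange(n_bins)
            if bins[j] < bins[i]:
                i = j  # keep the less loaded of the two candidates
        bins[i] += 1
    return max(bins)
```

Comparing `max_load(10000, 100, False)` against `max_load(10000, 100, True)` shows the well-known effect: looking at just one extra random candidate sharply reduces the worst-case load.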
Simulation of Single vs Multiple Queues
================================
Here we explore the effects of having N queues that handle 1/N the load (aka "Frequency Division Multiplexing",
aka `FDM`) vs a single queue distributing out to N servers (aka M/M/k queue aka `MMk`). We confirm the theoretical results that can be obtained using the closed form solutions for `E[T]_MMk` and `E[T]_FDM` via
a simulation.
In particular, M/M/k queues have a closed form solution for the mean response time given the probability of queueing (which we can measure directly, since the simulator records whether each request queued)
```
E[T]_MMk = (1 / λ) * Pq * ρ / (1-ρ) + 1 / μ

Where
  λ  = the arrival rate (hertz)
  Pq = the probability of a request queueing
  ρ  = the system load, aka λ / (k * μ)
  μ  = the average service rate (hertz), i.e. 1 / mean service time
```
Frequency division multiplexing (multiple queues) also has a closed form solution:
```
E[T]_FDM = (k / (k * μ - λ))
```
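Both closed forms translate directly into small functions (the names are ours) that the simulation results below can be checked against:

```python
def expected_fdm_ms(k, mu_hz, lambda_hz):
    # E[T]_FDM = k / (k*mu - lambda), converted from seconds to ms
    return 1000.0 * k / (k * mu_hz - lambda_hz)

def expected_mmk_ms(k, mu_hz, lambda_hz, p_q):
    # E[T]_MMk = (1/lambda) * Pq * rho/(1-rho) + 1/mu
    rho = lambda_hz / (k * mu_hz)
    return 1000.0 * ((1.0 / lambda_hz) * p_q * rho / (1.0 - rho) + 1.0 / mu_hz)
```

For example, 10 servers with a 0.4 ms mean service time (μ = 2500 Hz) at 18000 req/s give a noticeably larger FDM mean than the pooled M/M/k mean, which is exactly what the simulation below reproduces.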
```
# Simulate the various choices
NUM_REQUESTS = 60000
QPS = 18000
AVG_RESPONSE_MS = 0.4
SERVERS = 10
multiple_queues_latency = []
for _ in range(SERVERS):
    multiple_queues_latency.extend([
        d.t_total for d in run_simulation((1, 1), rr_lb, NUM_REQUESTS / SERVERS,
                                          QPS / SERVERS, expon(AVG_RESPONSE_MS))
    ])
single_queue = [
i for i in run_simulation((1, SERVERS), rr_lb, NUM_REQUESTS, QPS, expon(AVG_RESPONSE_MS))
]
single_queue_latency = [i[2] for i in single_queue]
Pq = sum([i[0] > 0 for i in single_queue]) / float(NUM_REQUESTS)
# M/M/k has a closed form for the mean given the probability of queueing
# (measured above from which requests queued)
# E[T]_MMk = (1 / λ) * Pq * ρ / (1-ρ) + 1 / μ
# Where
#   λ  = the arrival rate (hertz)
#   Pq = the probability of a request queueing
#   ρ  = the system load, aka λ / (k * μ)
#   μ  = the average service rate (hertz)
mu_MMk = (1.0 / AVG_RESPONSE_MS) * 1000
lambda_MMk = QPS
rho_MMk = lambda_MMk / (SERVERS * mu_MMk)
expected_MMk_mean_s = (1 / (lambda_MMk)) * Pq * (rho_MMk / (1-rho_MMk)) + 1 / mu_MMk
expected_MMk_mean_ms = expected_MMk_mean_s * 1000.0
# Frequency-division multiplexing also has a closed form for mean
# E[T]_FDM = (k / (k * μ - λ))
expected_FDM_mean_ms = (SERVERS / (SERVERS * mu_MMk - lambda_MMk)) * 1000.0
print("Theory Results")
print("--------------")
print("Pq = {0:4.2f}".format(Pq))
print("E[T]_FDM = {0:4.2f}".format(expected_FDM_mean_ms))
print("E[T]_MMk = {0:4.2f}".format(expected_MMk_mean_ms))
# And now the simulation
queueing_options = {
'Multiple Queues': multiple_queues_latency,
'Single Queue': single_queue_latency,
}
print()
print("Simulation results")
print("------------------")
hdr = "{0:16} | {1:>7} | {2:>7} | {3:>7} | {4:>7} | {5:>7} | {6:>7} |".format(
"Strategy", "mean", "var", "p50", "p95", "p99", "p99.9")
print(hdr)
for opt in sorted(queueing_options.keys()):
    mean = np.mean(queueing_options[opt])
    var = np.var(queueing_options[opt])
    percentiles = np.percentile(queueing_options[opt], [50, 95, 99, 99.9])
    print("{0:16} | {1:7.2f} | {2:7.2f} | {3:7.2f} | {4:7.2f} | {5:7.2f} | {6:7.2f} |".format(
        opt, mean, var, percentiles[0], percentiles[1], percentiles[2],
        percentiles[3]
    ))
import numpy as np
import matplotlib.patches as mpatches
import matplotlib.pyplot as plt
import matplotlib.style as style
style.use('seaborn-pastel')
def color_bplot(bp, edge_color, fill_color):
    for element in ['boxes', 'whiskers', 'fliers', 'means', 'medians', 'caps']:
        plt.setp(bp[element], color=edge_color)
    for box in bp['boxes']:
        box.set_facecolor(fill_color)
fig1, ax = plt.subplots(figsize=(12,3))
opts = sorted(queueing_options.keys())
data = [queueing_options[i] for i in opts]
flier = dict(markerfacecolor='r', marker='.')
bplot1 = ax.boxplot(data,whis=[1,99],showfliers=True,flierprops=flier, labels=opts,
patch_artist=True, vert=False)
color_bplot(bplot1, 'black', 'lightblue')
plt.title("Response Time Distribution \n[{0} QPS @ {1}ms avg with {2} servers]".format(
QPS, AVG_RESPONSE_MS, SERVERS)
)
plt.minorticks_on()
plt.grid(which='major', linestyle=':', linewidth='0.4', color='black')
# Customize the minor grid
plt.grid(which='minor', linestyle=':', linewidth='0.4', color='black')
plt.xlabel('Response Time (ms)')
plt.show()
from multiprocessing import Pool
def run_multiple_simulations(simulation_args):
    with Pool(12) as p:
        results = p.starmap(run_simulation, simulation_args)
    return results
# Should overload 10 servers with 0.4ms avg response time at 25k
qps_range = [i*1000 for i in range(1, 21)]
multi = []
num_requests = 100000
# Can't pickle inner functions apparently...
def expon_avg(request, worker):
    return random.expovariate(1.0 / AVG_RESPONSE_MS)
print("Multi Queues")
args = []
for qps in qps_range:
args.clear()
multiple_queues_latency = []
for i in range(SERVERS):
args.append(
((1, 1), rr_lb, num_requests/SERVERS, qps/SERVERS, expon_avg)
)
for d in run_multiple_simulations(args):
multiple_queues_latency.extend([i.t_total for i in d])
print('({0:>4.2f} -> {1:>4.2f})'.format(qps, np.mean(multiple_queues_latency)), end=', ')
percentiles = np.percentile(multiple_queues_latency, [10, 50, 90])
multi.append(percentiles)
single = []
args.clear()
for qps in qps_range:
args.append(
((1, SERVERS), rr_lb, num_requests, qps, expon_avg)
)
print("\nSingle Queue")
for idx, results in enumerate(run_multiple_simulations(args)):
d = [i.t_total for i in results]
print('({0:>4.2f} -> {1:>4.2f})'.format(qps_range[idx], np.mean(d)), end=', ')
percentiles = np.percentile(d, [25, 50, 75])
single.append(percentiles)
import matplotlib.gridspec as gridspec
fig = plt.figure(figsize=(14, 5))
gs = gridspec.GridSpec(1, 2, hspace=0.1, wspace=0.1)
plt.suptitle("Response Time Distribution, {0}ms avg response time with {1} servers".format(
AVG_RESPONSE_MS, SERVERS), fontsize=14
)
ax1 = plt.subplot(gs[0, 0])
ax1.plot(qps_range, [m[1] for m in multi], '.b-', label='Multi-Queue median')
ax1.fill_between(qps_range, [m[0] for m in multi], [m[2] for m in multi], alpha=0.4,
label='Multi-Queue IQR', edgecolor='black')
ax1.set_title('Multi-Queue Performance')
ax1.minorticks_on()
ax1.grid(which='major', linestyle=':', linewidth='0.4', color='black')
ax1.grid(which='minor', linestyle=':', linewidth='0.4', color='black')
ax1.set_xlabel('QPS (req / s)')
ax1.set_ylabel('Response Time (ms)')
ax1.legend(loc='upper left')
ax2 = plt.subplot(gs[0, 1], sharey=ax1)
ax2.plot(qps_range, [m[1] for m in single], '.g-', label='Single-Queue median')
ax2.fill_between(qps_range, [m[0] for m in single], [m[2] for m in single], alpha=0.4,
label='Single-Queue IQR', color='lightgreen', edgecolor='black')
ax2.set_title('Single-Queue Performance')
ax2.minorticks_on()
ax2.grid(which='major', linestyle=':', linewidth='0.4', color='black')
ax2.grid(which='minor', linestyle=':', linewidth='0.4', color='black')
ax2.set_xlabel('QPS (req / s)')
ax2.legend(loc='upper left')
plt.show()
```
Multiple Queues: Load Balancing
===========================
Now we look at the effects of choosing different load balancing algorithms on the multiple-queue approach:
1. Random Load Balancer: This strategy chooses a server at random
2. Join Shortest Queue: This strategy chooses the shortest queue to ensure load balancing
3. Two adjacent: A variant of "choice of two" where you randomly pick two servers
and take the shorter of their queues; modified here so the two candidates must be neighbors
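The load balancer callables referenced in the code below (`random_lb`, `shortest_queue_lb`, `choice_two_adjacent_lb`) are defined earlier in the notebook. As a rough sketch of the selection logic only (the real functions may take different arguments), they might look like:

```python
import random

# Illustrative sketches of the selection logic only; the notebook's
# real load balancers may have different signatures.

def random_lb(queues):
    # Pick any server uniformly at random.
    return random.randrange(len(queues))

def shortest_queue_lb(queues):
    # Join the globally shortest queue.
    return min(range(len(queues)), key=lambda i: len(queues[i]))

def choice_two_adjacent_lb(queues):
    # "Choice of two" restricted to neighbors: pick a random server,
    # then take the shorter queue of it and its right-hand neighbor.
    i = random.randrange(len(queues))
    j = (i + 1) % len(queues)
    return i if len(queues[i]) <= len(queues[j]) else j
```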
```
multiple_queues_latency = []
for i in range(SERVERS):
multiple_queues_latency.extend([
i[2] for i in run_simulation((1, 1), rr_lb, NUM_REQUESTS/SERVERS,
request_per_s=QPS/SERVERS, latency_fn=expon(AVG_RESPONSE_MS))
])
random_queue_latency = [
i[2] for i in run_simulation((SERVERS, 1), random_lb, NUM_REQUESTS, QPS, expon(AVG_RESPONSE_MS))
]
join_shorted_queue_latency = [
i[2] for i in run_simulation((SERVERS, 1), shortest_queue_lb, NUM_REQUESTS, QPS, expon(AVG_RESPONSE_MS))
]
two_adjacent_latency = [
i[2] for i in run_simulation((SERVERS, 1), choice_two_adjacent_lb, NUM_REQUESTS, QPS, expon(AVG_RESPONSE_MS))
]
# And now the simulation
all_queueing_options = {
'Multiple Queues': multiple_queues_latency,
'Single Queue': single_queue_latency,
'LB: Join Shortest': join_shorted_queue_latency,
'LB: Best of Two Adj': two_adjacent_latency,
'LB: Random': random_queue_latency
}
print()
print("Simulation results")
print("------------------")
hdr = "{0:20} | {1:>7} | {2:>7} | {3:>7} | {4:>7} | {5:>7} | {6:>7} ".format(
"Strategy", "mean", "var", "p50", "p95", "p99", "p99.9")
print(hdr)
for opt in sorted(all_queueing_options.keys()):
mean = np.mean(all_queueing_options[opt])
var = np.var(all_queueing_options[opt])
percentiles = np.percentile(all_queueing_options[opt], [50, 95, 99, 99.9])
print ("{0:20} | {1:7.2f} | {2:7.2f} | {3:7.2f} | {4:7.2f} | {5:>7.2f} | {6:7.2f} |".format(
opt, mean, var, percentiles[0], percentiles[1], percentiles[2],
percentiles[3]
))
fig1, ax = plt.subplots(figsize=(12,4))
opts = sorted(all_queueing_options.keys())
data = [all_queueing_options[i] for i in opts]
bplot1 = ax.boxplot(data, whis=[1,99],showfliers=True,flierprops=flier, labels=opts,
patch_artist=True, vert=False)
color_bplot(bplot1, 'black', 'lightblue')
plt.title("Response Time Distribution \n[{0} QPS @ {1}ms avg with {2} servers]".format(
QPS, AVG_RESPONSE_MS, SERVERS)
)
plt.minorticks_on()
plt.grid(which='major', linestyle=':', linewidth='0.4', color='black')
plt.grid(which='minor', linestyle=':', linewidth='0.4', color='black')
plt.xlabel('Response Time (ms)')
plt.show()
```
Simulation of M/G/k queues
======================
Now we look at M/G/k queues under several different load-balancing choices.
We explore:
* Join Shortest Queue (JSQ): The request is dispatched to the shortest queue
* M/G/k (MGk): A single queue is maintained and workers take requests as they become free
* Choice of two (choice_two): Two random workers are chosen, and then the request goes to the shorter queue
* Random (random): A random queue is chosen
* Round-robin (roundrobin): The requests are dispatched to one queue after the other
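For reference, the choice-of-two and best-of-$n$ selection rules can be sketched as follows. These mirror the `choice_two_lb` and `choice_n_weighted` names used in the next cell, but the notebook's actual implementations and signatures may differ, and the `choice_n_weighted` semantics here are a guess:

```python
import random

def choice_two_lb(queues):
    # Power of two choices: sample two distinct servers uniformly
    # and join the shorter of their queues.
    i, j = random.sample(range(len(queues)), 2)
    return i if len(queues[i]) <= len(queues[j]) else j

def choice_n_weighted(n):
    # Best-of-n generalization: sample n candidate servers and join
    # the shortest queue among them (assumed semantics).
    def lb(queues):
        cand = random.sample(range(len(queues)), min(n, len(queues)))
        return min(cand, key=lambda i: len(queues[i]))
    return lb
```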
```
lb_algos = {
'choice-of-two': choice_two_lb,
'random': random_lb,
'round-robin': rr_lb,
'shortest-queue': shortest_queue_lb,
'weighted-choice-8': choice_n_weighted(8),
}
SERVERS = 16
QPS = 8000
# Every 1000 requests have 10 that are slow (simulating a GC pause)
latency = service(AVG_RESPONSE_MS, 20, 2, 1000, 10)
lbs = {
k : [i[2] for i in run_simulation((SERVERS, 1), v, NUM_REQUESTS, QPS, latency)]
for (k, v) in lb_algos.items()
}
lbs['MGk'] = [
i[2] for i in run_simulation((1, SERVERS), rr_lb, NUM_REQUESTS, QPS, latency)]
#lbs['join-idle'] = [
# i[2] for i in run_simulation((1, SERVERS), rr_lb, NUM_REQUESTS, QPS,
# lambda request: 0.1 + pareto(AVG_RESPONSE_MS, 2)(request))
#]
types = sorted(lbs.keys())
hdr = "{0:20} | {1:>7} | {2:>7} | {3:>7} | {4:>7} | {5:>7} | {6:>7} ".format(
"Strategy", "mean", "var", "p50", "p95", "p99", "p99.9")
print(hdr)
print("-"*len(hdr))
for lb in types:
mean = np.mean(lbs[lb])
var = np.var(lbs[lb])
percentiles = np.percentile(lbs[lb], [50, 95, 99, 99.9])
print ("{0:20} | {1:7.1f} | {2:7.1f} | {3:7.1f} | {4:7.1f} | {5:>7.1f} | {6:7.1f} |".format(
lb, mean, var, percentiles[0], percentiles[1], percentiles[2],
percentiles[3]
))
fig1, ax = plt.subplots(figsize=(20,10))
diamond = dict(markerfacecolor='r', marker='D')
data = [lbs[i] for i in types]
bplot1 = ax.boxplot(data, whis=[10,90],showfliers=False,flierprops=diamond, labels=types,
patch_artist=True, vert=False)
color_bplot(bplot1, 'black', 'lightblue')
plt.title('Response Distribution M/G Process ({0} QPS @ {1}ms avg with {2} servers):'.format(
QPS, AVG_RESPONSE_MS, SERVERS)
)
plt.minorticks_on()
plt.grid(which='major', linestyle=':', linewidth='0.4', color='black')
plt.grid(which='minor', linestyle=':', linewidth='0.4', color='black')
plt.xlabel('Response Time (ms)')
plt.show()
```
# Machine Learning and Statistics for Physicists
Material for a [UC Irvine](https://uci.edu/) course offered by the [Department of Physics and Astronomy](https://www.physics.uci.edu/).
Content is maintained on [github](https://github.com/dkirkby/MachineLearningStatistics) and distributed under a [BSD3 license](https://opensource.org/licenses/BSD-3-Clause).
[Table of contents](Contents.ipynb)
```
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
import pandas as pd
import scipy.stats
```
## Bayesian Statistics
### Types of Probability
We construct a probability space by assigning a numerical probability in the range $[0,1]$ to sets of outcomes (events) in some space.
When outcomes are the result of an uncertain but **repeatable** process, probabilities can always be measured to arbitrary accuracy by simply observing many repetitions of the process and calculating the frequency at which each event occurs. These **frequentist probabilities** have an appealing objective reality to them.
**DISCUSS:** How might you assign a frequentist probability to statements like:
- The electron spin is 1/2.
- The Higgs mass is between 124 and 126 GeV.
- The fraction of dark energy in the universe today is between 68% and 70%.
- The superconductor Hg-1223 has a critical temperature above 130K.
You cannot (if we assume that these are universal constants), since that would require a measurable process whose outcomes had different values for a universal constant.
The inevitable conclusion is that the statements we are most interested in cannot be assigned frequentist probabilities.
However, if we allow probabilities to also measure your subjective "degree of belief" in a statement, then we can use the full machinery of probability theory to discuss more interesting statements. These are called **Bayesian probabilities**.
Roughly speaking, the choice is between:
- **frequentist statistics:** objective probabilities of uninteresting statements.
- **Bayesian statistics:** subjective probabilities of interesting statements.
---
### Bayesian Joint Probability
Bayesian statistics starts from a joint probability distribution
$$
P(D, \Theta_M, M)
$$
over data features $D$, model parameters $\Theta_M$ and hyperparameters $M$. The subscript on $\Theta_M$ is to remind us that, in general, the set of parameters being used depends on the hyperparameters (e.g., increasing `n_components` adds parameters for the new components). We will sometimes refer to the pair $(\Theta_M, M)$ as the **model**.
This joint probability implies that model parameters and hyperparameters are random variables, which in turn means that they label possible outcomes in our underlying probability space.
For a concrete example, consider the possible outcomes necessary to discuss the statement "*the electron spin is 1/2*", which must be labeled by the following random variables:
- $D$: the measured electron spin for an outcome, $S_z = 0, \pm 1/2, \pm 1, \pm 3/2, \ldots$
- $\Theta_M$: the total electron spin for an outcome, $S = 0, 1/2, 1, 3/2, \ldots$
- $M$: whether the electron is a boson or a fermion for an outcome.
A table of random-variable values for possible outcomes would then look like:
| $M$ | $\Theta_M$ | $D$ |
| ---- |----------- | --- |
| boson | 0 | 0 |
| fermion | 1/2 | -1/2 |
| fermion | 1/2 | +1/2 |
| boson | 1 | -1 |
| boson | 1 | 0 |
| boson | 1 | +1 |
| ... | ... | ... |
Only two of these outcomes occur in our universe, but a Bayesian approach requires us to broaden the sample space from "*all possible outcomes*" to "*all possible outcomes in all possible universes*".
### Likelihood
The **likelihood** ${\cal L}_M(\Theta_M, D)$ is a function of model parameters $\Theta_M$ (given hyperparameters $M$) and data features $D$, and measures the probability (density) of observing the data given the model. For example, a Gaussian mixture model has the likelihood function:
$$
{\cal L}_M\left(\mathbf{\Theta}_M, \vec{x} \right) = \sum_{k=1}^{K}\, \omega_k G(\vec{x} ; \vec{\mu}_k, C_k) \; ,
$$
with parameters
$$
\begin{aligned}
\mathbf{\Theta}_M = \big\{
&\omega_1, \omega_2, \ldots, \omega_K, \\
&\vec{\mu}_1, \vec{\mu}_2, \ldots, \vec{\mu}_K, \\
&C_1, C_2, \ldots, C_K \big\}
\end{aligned}
$$
and hyperparameter $K$. Note that the likelihood must be normalized over the data for any values of the (fixed) parameters and hyperparameters. However, it is not normalized over the parameters or hyperparameters.
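This normalization is easy to check numerically in one dimension. The sketch below uses an illustrative two-component mixture (the weights, means, and widths are made up, not taken from the text) and verifies that the likelihood integrates to one over the data $x$ for fixed parameters:

```python
import numpy as np
import scipy.stats

# Illustrative 1D Gaussian mixture with made-up parameters.
x = np.linspace(-20, 20, 20001)
omega = [0.6, 0.4]            # weights sum to 1
mu = [-2.0, 3.0]
sigma = [1.0, 2.0]
L = sum(w * scipy.stats.norm.pdf(x, m, s)
        for w, m, s in zip(omega, mu, sigma))
# A Riemann sum approximates the integral over the data.
print(np.sum(L) * (x[1] - x[0]))  # ≈ 1.0
```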
The likelihood function plays a central role in both frequentist and Bayesian statistics, but is used and interpreted differently. We will focus on the Bayesian perspective, where $\Theta_M$ and $M$ are considered random variables and the likelihood function is associated with the conditional probability
$$
{\cal L}_M\left(\Theta_M, D \right) = P(D\mid \Theta_M, M)
$$
of observing features $D$ given the model $(\Theta_M, M)$.
### Bayesian Inference
Once we have associated the likelihood with a conditional probability, we can apply the earlier rules (2 & 3) of probability calculus to derive the generalized Bayes' rule:
$$
P(\Theta_M\mid D, M) = \frac{P(D\mid \Theta_M, M)\,P(\Theta_M\mid M)}{P(D\mid M)}
$$
Each term above has a name and measures a different probability:
1. **Posterior:** $P(\Theta_M\mid D, M)$ is the probability of the parameter values $\Theta_M$ given the data and the choice of hyperparameters.
2. **Likelihood:** $P(D\mid \Theta_M, M)$ is the probability of the data given the model.
3. **Prior:** $P(\Theta_M\mid M)$ is the probability of the model parameters given the hyperparameters and *marginalized over all possible data*.
4. **Evidence:** $P(D\mid M)$ is the probability of the data given the hyperparameters and *marginalized over all possible parameter values given the hyperparameters*.
In typical inference problems, the posterior (1) is what we really care about and the likelihood (2) is what we know how to calculate. The prior (3) is where we must quantify our subjective "degree of belief" in different possible universes.
What about the evidence (4)? Using the earlier rule (5) of probability calculus, we discover that (4) can be calculated from (2) and (3):
$$
P(D\mid M) = \int d\Theta_M' P(D\mid \Theta_M', M)\, P(\Theta_M'\mid M) \; .
$$
Note that this result is not surprising since the denominator must normalize the ratio to yield a probability (density). When the set of possible parameter values is discrete, $\Theta_M \in \{ \Theta_{M,1}, \Theta_{M,2}, \ldots\}$, the normalization integral reduces to a sum:
$$
P(D\mid M) \rightarrow \sum_k\, P(D\mid \Theta_{M,k}, M)\, P(\Theta_{M,k}\mid M) \; .
$$
The generalized Bayes' rule above assumes fixed values of any hyperparameters (since $M$ is on the RHS of all 4 terms), but a complete inference also requires us to consider different hyperparameter settings. We will defer this (harder) **model selection** problem until later.

**EXERCISE:** Suppose that you meet someone for the first time at your next conference and they are wearing an "England" t-shirt. Estimate the probability that they are English by:
- Defining the data $D$ and model $\Theta_M$ assuming, for simplicity, that there are no hyperparameters.
- Assigning the relevant likelihoods and prior probabilities (terms 2 and 3 above).
- Calculating the resulting LHS of the generalized Bayes' rule above.
Solution:
- Define the data $D$ as the observation that the person is wearing an "England" t-shirt.
- Define the model to have a single parameter, the person's nationality $\Theta \in \{ \text{English}, \text{!English}\}$.
- We don't need to specify a full likelihood function over all possible data since we only have a single datum. Instead, it is sufficient to assign the likelihood probabilities:
$$
P(D\mid \text{English}) = 0.4 \quad , \quad P(D\mid \text{!English}) = 0.1
$$
- Assign the prior probabilities for attendees at the conference:
$$
P(\text{English}) = 0.2 \quad , \quad P(\text{!English}) = 0.8
$$
- We can now calculate:
$$
\begin{aligned}
P(\text{English}\mid D) &= \frac{P(D\mid \text{English})\, P(\text{English})}
{P(D\mid \text{English})\, P(\text{English}) + P(D\mid \text{!English})\, P(\text{!English})} \\
&= \frac{0.4\times 0.2}{0.4\times 0.2 + 0.1\times 0.8} \\
&= 0.5 \; .
\end{aligned}
$$
Note that we calculate the evidence $P(D)$ using a sum rather than an integral, because $\Theta$ is discrete.
You probably assigned different probabilities, since these are subjective assessments where reasonable people can disagree. However, by allowing some subjectivity we are able to make a precise statement under some (subjective) assumptions.
Note that the likelihood probabilities do not sum to one since the likelihood is normalized over the data, not the model, unlike the prior probabilities which do sum to one.
A simple example like this can be represented graphically in the 2D space of joint probability $P(D, \Theta)$:

---
The generalized Bayes' rule can be viewed as a learning rule that updates our knowledge as new information becomes available:

The implied timeline motivates the *posterior* and *prior* terminology, although there is no requirement that the prior be based on data collected before the "new" data.
Bayesian inference problems can be tricky to get right, even when they sound straightforward, so it is important to clearly spell out what you know or assume, and what you wish to learn:
1. List the possible models, i.e., your hypotheses.
2. Assign a prior probability to each model.
3. Define the likelihood of each possible observation $D$ for each model.
4. Apply Bayes' rule to learn from new data and update your prior.
For problems with a finite number of possible models and observations, the calculations required are simple arithmetic but quickly get cumbersome. A helper function lets you hide the arithmetic and focus on the logic:
```
def learn(prior, likelihood, D):
# Calculate the Bayes' rule numerator for each model.
prob = {M: prior[M] * likelihood(D, M) for M in prior}
# Calculate the Bayes' rule denominator.
norm = sum(prob.values())
# Return the posterior probabilities for each model.
return {M: prob[M] / norm for M in prob}
```
For example, the problem above becomes:
```
prior = {'English': 0.2, '!English': 0.8}
def likelihood(D, M):
if M == 'English':
return 0.4 if D == 't-shirt' else 0.6
else:
return 0.1 if D == 't-shirt' else 0.9
learn(prior, likelihood, D='t-shirt')
```
Note that the (posterior) output from one learning update can be the (prior) input to the next update. For example, how should we update our knowledge if the person wears an "England" t-shirt the next day also?
```
post1 = learn(prior, likelihood, 't-shirt')
post2 = learn(post1, likelihood, 't-shirt')
print(post2)
```
The `mls` package includes a function `Learn` for these calculations that allows multiple updates with one call and displays the learning process as a pandas table:
```
from mls import Learn
Learn(prior, likelihood, 't-shirt', 't-shirt')
```

https://commons.wikimedia.org/wiki/File:Dice_(typical_role_playing_game_dice).jpg
**EXERCISE:** Suppose someone rolls 6, 4, 5 on a die without telling you whether it has 4, 6, 8, 12, or 20 sides.
- What is your intuition about the true number of sides based on the rolls?
- Identify the models (hypotheses) and data in this problem.
- Define your priors assuming that each model is equally likely.
- Define a likelihood function assuming that each die is fair.
- Use the `Learn` function to estimate the posterior probability for the number of sides after each roll.
We can be sure the die is not 4-sided (because of the rolls greater than 4) and guess that it is unlikely to be 12- or 20-sided (since the largest roll is a 6).
The models in this problem correspond to the number of sides on the die: 4, 6, 8, 12, 20.
The data in this problem are the dice rolls: 6, 4, 5.
Define the prior assuming that each model is equally likely:
```
prior = {4: 0.2, 6: 0.2, 8: 0.2, 12: 0.2, 20: 0.2}
```
Define the likelihood assuming that each die is fair:
```
def likelihood(D, M):
if D <= M:
return 1.0 / M
else:
return 0.0
```
Finally, put the pieces together to estimate the posterior probability of each model after each roll:
```
Learn(prior, likelihood, 6, 4, 5)
```
Somewhat surprisingly, this toy problem has a practical application with historical significance!
Imagine a factory that has made $N$ items, each with a serial number from 1 to $N$. If you randomly select items and read their serial numbers, the problem of estimating $N$ is analogous to our dice problem, but with many more models to consider. This approach was successfully used in World War II by the Allied Forces to [estimate the production rate of German tanks](https://en.wikipedia.org/wiki/German_tank_problem) at a time when most academic statisticians rejected Bayesian methods.
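The analogy can be sketched with the same `learn` helper defined above, a coarse illustrative grid of candidate totals, and made-up serial numbers:

```python
def learn(prior, likelihood, D):
    # Same helper as above: one Bayes' rule update.
    prob = {M: prior[M] * likelihood(D, M) for M in prior}
    norm = sum(prob.values())
    return {M: prob[M] / norm for M in prob}

# Candidate production totals N with a flat prior (coarse grid for brevity).
models = list(range(100, 1001, 100))
prior = {N: 1.0 / len(models) for N in models}

def likelihood(serial, N):
    # A randomly selected item has a uniform serial number in 1..N.
    return 1.0 / N if serial <= N else 0.0

posterior = prior
for serial in [68, 212, 97]:  # made-up observations
    posterior = learn(posterior, likelihood, serial)
best = max(posterior, key=posterior.get)
print(best)  # → 300: the smallest N consistent with the data
```

As with the dice, the likelihood $(1/N)^3$ favors the smallest total that could have produced every observed serial number.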
For more historical perspective on the development of Bayesian methods (and many obstacles along the way), read the entertaining book [The Theory That Would Not Die](https://www.amazon.com/Theory-That-Would-Not-Die/dp/0300188226).
---
The discrete examples above can be solved exactly, but this is not true in general. The challenge is to calculate the evidence, $P(D\mid M)$, in the Bayes' rule denominator, as the marginalization integral:
$$
P(D\mid M) = \int d\Theta_M' P(D\mid \Theta_M', M)\, P(\Theta_M'\mid M) \; .
$$
With careful choices of the prior and likelihood function, this integral can be performed analytically. However, for most practical work, an approximate numerical approach is required. Popular methods include **Markov-Chain Monte Carlo** and **Variational Inference**, which we will meet soon.
### What Priors Should I Use?
The choice of priors is necessarily subjective and sometimes contentious, but keep the following general guidelines in mind:
- Inferences on data from an informative experiment are not very sensitive to your choice of priors.
- If your (posterior) results are sensitive to your choice of priors you need more (or better) data.
For a visual demonstration of these guidelines, the following function performs exact inference for a common task: you make a number of observations and count how many pass some predefined test, and want to infer the fraction $0\le \theta\le 1$ that pass. This applies to questions like:
- What fraction of galaxies contain a supermassive black hole?
- What fraction of Higgs candidate decays are due to background?
- What fraction of my nanowires are superconducting?
- What fraction of my plasma shots are unstable?
For our prior, $P(\theta)$, we use the [beta distribution](https://en.wikipedia.org/wiki/Beta_distribution) which is specified by hyperparameters $a$ and $b$:
$$
P(\theta\mid a, b) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\, \theta^{a-1} \left(1 - \theta\right)^{b-1} \; ,
$$
where $\Gamma$ is the [gamma function](https://en.wikipedia.org/wiki/Gamma_function) related to the factorial, $\Gamma(n) = (n-1)!$. This function provides the prior (or posterior) corresponding to previous (or updated) measurements of a binomial process with $a + b - 2$ trials, of which $a - 1$ pass (and therefore $b - 1$ do not pass).
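The conjugacy implied here (a beta prior times a binomial likelihood gives a beta posterior) can be verified numerically; the values of $a$, $b$ and the data below are illustrative:

```python
import numpy as np
import scipy.stats

a, b = 2, 3        # illustrative prior hyperparameters
n, k = 10, 4       # illustrative data: k passing out of n trials
theta = np.linspace(0, 1, 2001)
dx = theta[1] - theta[0]
prior = scipy.stats.beta.pdf(theta, a, b)
like = scipy.stats.binom.pmf(k, n, theta)
# Grid-normalized posterior from Bayes' rule...
post_grid = prior * like
post_grid /= np.sum(post_grid) * dx
# ...matches the closed-form conjugate update Beta(a + k, b + n - k).
post_exact = scipy.stats.beta.pdf(theta, a + k, b + n - k)
print(np.max(np.abs(post_grid - post_exact)))  # small
```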
```
def binomial_learn(prior_a, prior_b, n_obs, n_pass):
theta = np.linspace(0, 1, 100)
# Calculate and plot the prior on theta.
prior = scipy.stats.beta(prior_a, prior_b)
plt.fill_between(theta, prior.pdf(theta), alpha=0.25)
plt.plot(theta, prior.pdf(theta), label='Prior')
# Calculate and plot the likelihood of the fixed data given any theta.
likelihood = scipy.stats.binom.pmf(n_pass, n_obs, theta)
plt.plot(theta, likelihood, 'k:', label='Likelihood')
# Calculate and plot the posterior on theta given the observed data.
posterior = scipy.stats.beta(prior_a + n_pass, prior_b + n_obs - n_pass)
plt.fill_between(theta, posterior.pdf(theta), alpha=0.25)
plt.plot(theta, posterior.pdf(theta), label='Posterior')
# Plot cosmetics.
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=3, mode="expand", borderaxespad=0., fontsize='large')
plt.ylim(0, None)
plt.xlim(theta[0], theta[-1])
plt.xlabel('Pass fraction $\\theta$')
```
**EXERCISE:**
**Q1:** Think of a question in your research area where this inference problem applies.
**Q2:** Infer $\theta$ from 2 observations with 1 passing, using hyperparameters $(a=1,b=1)$.
- Explain why the posterior is reasonable given the observed data.
- What values of $\theta$ are absolutely ruled out by this data? Does this make sense?
- How are the three quantities plotted normalized?
**Q3:** Infer $\theta$ from the same 2 observations with 1 passing, using instead $(a=5,b=10)$.
- Is the posterior still reasonable given the observed data? Explain your reasoning.
- How might you choose between these two subjective priors?
**Q4:** Use each of the priors above with different data: 100 trials with 60 passing.
- How does the relative importance of the prior and likelihood change with better data?
- Why are the likelihood values so much smaller now?
```
binomial_learn(1, 1, 2, 1)
```
- The posterior peaks at the mean observed pass rate, 1/2, which is reasonable. It is very broad because we have only made two observations.
- Values of 0 and 1 are absolutely ruled out, which makes sense since we have already observed 1 pass and 1 no pass.
- The prior and posterior are probability densities normalized over $\theta$, so their area in the plot is 1. The likelihood is normalized over all possible data, so does not have area of 1 in this plot.
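These normalization statements can be checked numerically for this case (flat Beta(1,1) prior, 2 observations with 1 passing, Beta(2,2) posterior):

```python
import numpy as np
import scipy.stats

theta = np.linspace(0, 1, 2001)
dx = theta[1] - theta[0]
prior = scipy.stats.beta.pdf(theta, 1, 1)   # flat prior
like = scipy.stats.binom.pmf(1, 2, theta)   # P(1 pass | 2 obs, theta)
post = scipy.stats.beta.pdf(theta, 2, 2)    # posterior
print(np.sum(prior) * dx)  # ≈ 1: a density in theta
print(np.sum(post) * dx)   # ≈ 1: a density in theta
print(np.sum(like) * dx)   # ≈ 1/3: not normalized over theta
```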
```
binomial_learn(5, 10, 2, 1)
```
- The posterior now peaks away from the mean observed pass rate of 1/2. This is reasonable if we believe our prior information since, with relatively uninformative data, Bayes' rule tells us that it should dominate our knowledge of $\theta$. On the other hand, if we cannot justify why this prior is more believable than the earlier flat prior, then we must conclude that the value of $\theta$ is unknown and that our data has not helped.
- If a previous experiment with $13=(a-1)+(b-1)$ observations found $4=a-1$ passing, then our new prior would be very reasonable. However, if this process has never been observed and we have no theoretical prejudice, then the original flat prior would be reasonable.
```
binomial_learn(1, 1, 100, 60)
binomial_learn(5, 10, 100, 60)
```
- With more data, the prior has much less influence. This is always the regime you want to be in.
- The likelihood values are smaller because there are many more possible outcomes (any number from 0 to 100 passing) with more observations, so any one outcome becomes relatively less likely. (Recall that the likelihood is normalized over data outcomes, not $\theta$).
---
You are hopefully convinced now that your choice of priors is mostly a non-issue, since inference with good data is relatively insensitive to your choice. However, you still need to make a choice, so here are some practical guidelines:
- A "missing" prior, $P(\Theta\mid M) = 1$, is still a prior but not necessarily a "natural" choice or a "safe default". It is often not even normalizable, although you can finesse this problem with good enough data.
- The prior on a parameter you care about (does it appear in your paper's abstract?) should usually summarize previous measurements, assuming that you trust them but you are doing a better experiment. In this case, your likelihood measures the information provided by your data alone, and the posterior provides the new "world average".
- The prior on a **nuisance parameter** (which you need for technical reasons but are not interested in measuring) should be set conservatively (restrict as little as possible, to minimize the influence on the posterior) and in different ways (compare posteriors with different priors to estimate systematic uncertainty).
- If you really have no information on which to base a prior, learn about [uninformative priors](https://en.wikipedia.org/wiki/Prior_probability#Uninformative_priors), but don't be fooled by their apparent objectivity.
- If being able to calculate your evidence integral analytically is especially important, look into [conjugate priors](https://en.wikipedia.org/wiki/Conjugate_prior), but don't be surprised if this forces you to adopt an oversimplified model. The binomial example above is one of the rare cases where this works out.
- Always state your priors (in your code, papers, talks, etc), even when they don't matter much.
### Graphical Models
We started above with the Bayesian joint probability:
$$
P(D, \Theta_M, M)
$$
When the individual data features, parameters and hyperparameters are all written out, this often ends up being a very high-dimensional function.
In the most general case, the joint probability requires a huge volume of data to estimate (recall our earlier discussion of [dimensionality reduction](Dimensionality.ipynb)). However, many problems can be (approximately) described by a joint probability that is simplified by assuming that some random variables are mutually independent.
Graphical models are a convenient visualization of the assumed direct dependencies between random variables. For example, suppose we have two parameters $(\alpha, \beta)$ and no hyperparameters, then the joint probability $P(D, \alpha, \beta)$ can be expanded into a product of conditionals different ways using the rules of probability calculus, e.g.
$$
P(D, \alpha, \beta) = P(D,\beta\mid \alpha)\, P(\alpha) = P(D\mid \alpha,\beta)\, P(\beta\mid \alpha)\, P(\alpha) \; .
$$
or, equally well, as
$$
P(D, \alpha, \beta) = P(D,\alpha\mid \beta)\, P(\beta) = P(D\mid \alpha,\beta)\, P(\alpha\mid \beta)\, P(\beta) \; .
$$
The corresponding diagrams are:


The way to read these diagrams is that a node labeled with $X$ represents a (multiplicative) factor $P(X\mid\ldots)$ in the joint probability, where $\ldots$ lists other nodes whose arrows feed into this node (in any order, thanks to probability calculus Rule-1). A shaded node indicates a random variable that is directly observed (i.e., data) while non-shaded nodes represent (unobserved) latent random variables.
These diagrams both describe a fully general joint probability with two parameters. The rules for building a fully general joint probability with any number of parameters are:
- Pick an (arbitrary) ordering of the parameters.
- The first parameter's node has arrows pointing to all other nodes (including the data).
- The n-th parameter's node has arrows pointing to all later parameter nodes and the data.
With $n$ parameters, there are then $n!$ possible diagrams and the number of potential dependencies grows rapidly with $n$.
To mitigate this factorial growth, we seek pairs of random variables that should not depend on each other. For example, in the two parameter case:


Notice how each diagram tells a different story. For example, the first diagram tells us that the data can be predicted knowing only $\beta$, but that our prior knowledge of $\beta$ depends on $\alpha$. In effect, then, simplifying a joint probability involves drawing a diagram that tells a suitable story for your data and models.
**EXERCISE:** Consider observing someone throwing a ball and measuring how far away it lands to infer the strength of gravity:
- Our data is the measured range $r$.
- Our parameters are the ball's initial speed $v$ and angle $\phi$, and the strength of gravity $g$.
- Our hyperparameters are the ball's diameter $d$ and the wind speed $w$.
Draw one example of a fully general diagram of this inference's joint probability $P(r, v, \phi, g, d, w)$.
Suppose the thrower always throws as hard as they can, then adjusts the angle according to the wind. Draw a diagram to represent the direct dependencies in this simpler joint probability.
Write down the posterior we are interested in for this inference problem.


The posterior we are most likely interested in for this inference is
$$
P(g\mid r) \; ,
$$
but a more explicit posterior would be:
$$
P(g\mid r, v, \phi, d, w) \; .
$$
The difference between these is that we marginalized over the "nuisance" parameters $v, \phi, d, w$ in the first case.
---
The arrows in these diagrams define the direction of conditional dependencies. They often mirror a causal influence in the underlying physical system, but this is not necessary. Probabilistic diagrams with directed edges are known as **Bayesian networks** or **belief networks**.
It is also possible to draw diagrams where nodes are connected symmetrically, without a specified direction. These are known as **Markov random fields** or **Markov networks** and appropriate when dependencies flow in both directions or in an unknown direction. You can read more about these [here](https://en.wikipedia.org/wiki/Markov_random_field).
```
import cv2
import numpy as np
import pandas as pd
import pickle as cPickle
from matplotlib import pyplot as plt
from sklearn.cluster import MiniBatchKMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.metrics import precision_score,confusion_matrix,multilabel_confusion_matrix,recall_score
from sklearn.preprocessing import normalize
from sklearn import svm
from datetime import datetime
train_images_filenames = cPickle.load(open('resources/train_images_filenames.dat','rb'))
test_images_filenames = cPickle.load(open('resources/test_images_filenames.dat','rb'))
train_labels = cPickle.load(open('resources/train_labels.dat','rb'))
test_labels = cPickle.load(open('resources/test_labels.dat','rb'))
SIFTdetector = cv2.SIFT_create(nfeatures=300)
def compute_dense_sift(gray, sift, step):
    step_size = step
    kp = [cv2.KeyPoint(x, y, step_size)
          for y in range(0, gray.shape[0], step_size)
          for x in range(0, gray.shape[1], step_size)]
    dense_feat = sift.compute(gray, kp)
    dense_feat_des = dense_feat[1]
    return dense_feat_des
step=5
today = datetime.now()
dt_string = today.strftime("%H:%M:%S")
print(f"{dt_string} started doing step={step}")
Train_descriptors = []
Train_label_per_descriptor = []
for filename, labels in zip(train_images_filenames, train_labels):
    ima = cv2.imread(filename)
    gray = cv2.cvtColor(ima, cv2.COLOR_BGR2GRAY)
    # kpt, des = SIFTdetector.detectAndCompute(gray, None)
    des = compute_dense_sift(gray, SIFTdetector, step)
    des = normalize(des)
    Train_descriptors.append(des)
    Train_label_per_descriptor.append(labels)
D=np.vstack(Train_descriptors)
k=128
codebook = MiniBatchKMeans(n_clusters=k, verbose=False, batch_size=k * 20,compute_labels=False,reassignment_ratio=10**-4,random_state=42)
codebook.fit(D)
visual_words=np.zeros((len(Train_descriptors),k),dtype=np.float32)
for i in range(len(Train_descriptors)):
    words = codebook.predict(Train_descriptors[i])
    visual_words[i, :] = np.bincount(words, minlength=k)
# knn = KNeighborsClassifier(n_neighbors=37,n_jobs=-1,metric='manhattan')
# knn.fit(visual_words, train_labels)
visual_words
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
visual_words=scaler.fit_transform(visual_words)
lin_clf = svm.LinearSVC(max_iter=2000)
lin_clf.fit(visual_words, train_labels)
# rbf_svc = svm.SVC(kernel='rbf')
# rbf_svc.fit(visual_words, train_labels)
visual_words_test=np.zeros((len(test_images_filenames),k),dtype=np.float32)
for i in range(len(test_images_filenames)):
    filename = test_images_filenames[i]
    ima = cv2.imread(filename)
    gray = cv2.cvtColor(ima, cv2.COLOR_BGR2GRAY)
    # kpt, des = SIFTdetector.detectAndCompute(gray, None)
    des = compute_dense_sift(gray, SIFTdetector, step)
    des = normalize(des)
    words = codebook.predict(des)
    visual_words_test[i, :] = np.bincount(words, minlength=k)
# Reuse the scaler fitted on the training histograms; refitting a new
# StandardScaler on the test set would leak test statistics into preprocessing.
visual_words_test = scaler.transform(visual_words_test)
scores = cross_val_score(lin_clf, visual_words_test, test_labels, cv=5)
# scores = cross_val_score(rbf_svc, visual_words_test, test_labels, cv=5)
accuracy=scores.mean()*100
print(accuracy)
# lin_clf = svm.LinearSVC(max_iter=2000)
# lin_clf.fit(visual_words, train_labels)
rbf_svc = svm.SVC(kernel='rbf')
rbf_svc.fit(visual_words, train_labels)
visual_words_test=np.zeros((len(test_images_filenames),k),dtype=np.float32)
for i in range(len(test_images_filenames)):
    filename = test_images_filenames[i]
    ima = cv2.imread(filename)
    gray = cv2.cvtColor(ima, cv2.COLOR_BGR2GRAY)
    # kpt, des = SIFTdetector.detectAndCompute(gray, None)
    des = compute_dense_sift(gray, SIFTdetector, step)
    des = normalize(des)
    words = codebook.predict(des)
    visual_words_test[i, :] = np.bincount(words, minlength=k)
# Reuse the scaler fitted on the training histograms rather than
# refitting on the test set.
visual_words_test = scaler.transform(visual_words_test)
# scores = cross_val_score(lin_clf, visual_words_test, test_labels, cv=5)
scores = cross_val_score(rbf_svc, visual_words_test, test_labels, cv=5)
accuracy=scores.mean()*100
print(accuracy)
step=5
today = datetime.now()
dt_string = today.strftime("%H:%M:%S")
print(f"{dt_string} started doing step={step}")
Train_descriptors = []
Train_label_per_descriptor = []
for filename, labels in zip(train_images_filenames, train_labels):
    ima = cv2.imread(filename)
    gray = cv2.cvtColor(ima, cv2.COLOR_BGR2GRAY)
    # divide the image into a dense grid and compute SIFT at each grid point
    des = compute_dense_sift(gray, SIFTdetector, step)
    des = normalize(des)
    Train_descriptors.append(des)
    Train_label_per_descriptor.append(labels)
D=np.vstack(Train_descriptors)
k=128
codebook = MiniBatchKMeans(n_clusters=k, verbose=False, batch_size=k * 20,compute_labels=False,reassignment_ratio=10**-4,random_state=42)
codebook.fit(D)
visual_words = np.zeros((len(Train_descriptors), k), dtype=np.float32)
for i in range(len(Train_descriptors)):
    words = codebook.predict(Train_descriptors[i])
    visual_words[i, :] = np.bincount(words, minlength=k)
# knn = KNeighborsClassifier(n_neighbors=37, n_jobs=-1, metric='manhattan')
# knn.fit(visual_words, train_labels)
```
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D2_ModelingPractice/W1D2_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Neuromatch Academy: Week 1, Day 2, Tutorial 1
# Modeling Practice: Framing the question
__Content creators:__ Marius 't Hart, Paul Schrater, Gunnar Blohm
__Content reviewers:__ Norma Kuhn, Saeed Salehi, Madineh Sarvestani, Spiros Chavlis, Michael Waskom
---
# Tutorial objectives
Yesterday you gained some understanding of what models can buy us in neuroscience. But how do you build a model? Today, we will try to clarify the process of computational modeling, by building a simple model.
We will investigate a simple phenomenon, working through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)) in two notebooks:
**Framing the question**
1. finding a phenomenon and a question to ask about it
2. understanding the state of the art
3. determining the basic ingredients
4. formulating specific, mathematically defined hypotheses
**Implementing the model**
5. selecting the toolkit
6. planning the model
7. implementing the model
**Model testing**
8. completing the model
9. testing and evaluating the model
**Publishing**
10. publishing models
Tutorial 1 (this notebook) will cover the steps 1-5, while Tutorial 2 will cover the steps 6-10.
**TD**: All activities you should perform are labeled with **TD#.#**, which stands for "To Do", micro-tutorial number, activity number. They can be found in the Table of Content on the left side of the notebook. Make sure you complete all within a section before moving on!
**Run**: Some code chunks' names start with "Run to ... (do something)". These chunks are purely to produce a graph or calculate a number. You do not need to look at or understand the code in those chunks.
# Setup
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gamma
from IPython.display import YouTubeVideo
# @title Figure settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def my_moving_window(x, window=3, FUN=np.mean):
    """
    Calculates a moving estimate for a signal

    Args:
        x (numpy.ndarray): a vector array of size N
        window (int): size of the window, must be a positive integer
        FUN (function): the function to apply to the samples in the window

    Returns:
        (numpy.ndarray): a vector array of size N, containing the moving
        average of x, calculated with a window of size window

    There are smarter and faster solutions (e.g. using convolution) but this
    function shows what the output really means. This function skips NaNs, and
    should not be susceptible to edge effects: it will simply use
    all the available samples, which means that close to the edges of the
    signal or close to NaNs, the output will just be based on fewer samples. By
    default, this function will apply a mean to the samples in the window, but
    this can be changed to be a max/min/median or other function that returns a
    single numeric value based on a sequence of values.
    """

    # if data is a matrix, apply filter to each row:
    if len(x.shape) == 2:
        output = np.zeros(x.shape)
        for rown in range(x.shape[0]):
            output[rown, :] = my_moving_window(x[rown, :],
                                               window=window,
                                               FUN=FUN)
        return output

    # make output array of the same size as x:
    output = np.zeros(x.size)

    # loop through the signal in x
    for samp_i in range(x.size):
        values = []

        # loop through the window:
        for wind_i in range(int(1 - window), 1):
            if ((samp_i + wind_i) < 0) or (samp_i + wind_i) > (x.size - 1):
                # out of range
                continue

            # sample is in range and not nan, use it:
            if not(np.isnan(x[samp_i + wind_i])):
                values += [x[samp_i + wind_i]]

        # calculate the mean in the window for this point in the output:
        output[samp_i] = FUN(values)

    return output
def my_plot_percepts(datasets=None, plotconditions=False):
    if isinstance(datasets, dict):
        # try to plot the datasets
        # they should be named...
        # 'expectations', 'judgments', 'predictions'
        plt.figure(figsize=(8, 8))  # set aspect ratio = 1? not really
        plt.ylabel('perceived self motion [m/s]')
        plt.xlabel('perceived world motion [m/s]')
        plt.title('perceived velocities')

        # loop through the entries in datasets
        # plot them in the appropriate way
        for k in datasets.keys():
            if k == 'expectations':
                expect = datasets[k]
                plt.scatter(expect['world'], expect['self'], marker='*',
                            color='xkcd:green', label='my expectations')
            elif k == 'judgments':
                judgments = datasets[k]
                for condition in np.unique(judgments[:, 0]):
                    c_idx = np.where(judgments[:, 0] == condition)[0]
                    cond_self_motion = judgments[c_idx[0], 1]
                    cond_world_motion = judgments[c_idx[0], 2]
                    if cond_world_motion == -1 and cond_self_motion == 0:
                        c_label = 'world-motion condition judgments'
                    elif cond_world_motion == 0 and cond_self_motion == 1:
                        c_label = 'self-motion condition judgments'
                    else:
                        c_label = f"condition {condition:d} judgments"
                    plt.scatter(judgments[c_idx, 3], judgments[c_idx, 4],
                                label=c_label, alpha=0.2)
            elif k == 'predictions':
                predictions = datasets[k]
                for condition in np.unique(predictions[:, 0]):
                    c_idx = np.where(predictions[:, 0] == condition)[0]
                    cond_self_motion = predictions[c_idx[0], 1]
                    cond_world_motion = predictions[c_idx[0], 2]
                    if cond_world_motion == -1 and cond_self_motion == 0:
                        c_label = 'predicted world-motion condition'
                    elif cond_world_motion == 0 and cond_self_motion == 1:
                        c_label = 'predicted self-motion condition'
                    else:
                        c_label = f"condition {condition:d} prediction"
                    plt.scatter(predictions[c_idx, 4], predictions[c_idx, 3],
                                marker='x', label=c_label)
            else:
                print("datasets keys should be 'hypothesis', "
                      "'judgments' and 'predictions'")

        if plotconditions:
            # this code is simplified but only works for the dataset we have:
            plt.scatter([1], [0], marker='<', facecolor='none',
                        edgecolor='xkcd:black', linewidths=2,
                        label='world-motion stimulus', s=80)
            plt.scatter([0], [1], marker='>', facecolor='none',
                        edgecolor='xkcd:black', linewidths=2,
                        label='self-motion stimulus', s=80)

        plt.legend(facecolor='xkcd:white')
        plt.show()
    else:
        if datasets is not None:
            print('datasets argument should be a dict')
            raise TypeError
def my_plot_stimuli(t, a, v):
    plt.figure(figsize=(10, 6))
    plt.plot(t, a, label='acceleration [$m/s^2$]')
    plt.plot(t, v, label='velocity [$m/s$]')
    plt.xlabel('time [s]')
    plt.ylabel('[motion]')
    plt.legend(facecolor='xkcd:white')
    plt.show()
def my_plot_motion_signals():
    dt = 1 / 10
    a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
    t = np.arange(0, 10, dt)
    v = np.cumsum(a * dt)

    fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharex='col',
                                   sharey='row', figsize=(14, 6))
    fig.suptitle('Sensory ground truth')

    ax1.set_title('world-motion condition')
    ax1.plot(t, -v, label='visual [$m/s$]')
    ax1.plot(t, np.zeros(a.size), label='vestibular [$m/s^2$]')
    ax1.set_xlabel('time [s]')
    ax1.set_ylabel('motion')
    ax1.legend(facecolor='xkcd:white')

    ax2.set_title('self-motion condition')
    ax2.plot(t, -v, label='visual [$m/s$]')
    ax2.plot(t, a, label='vestibular [$m/s^2$]')
    ax2.set_xlabel('time [s]')
    ax2.set_ylabel('motion')
    ax2.legend(facecolor='xkcd:white')

    plt.show()
def my_plot_sensorysignals(judgments, opticflow, vestibular, returnaxes=False,
                           addaverages=False, integrateVestibular=False,
                           addGroundTruth=False):
    if addGroundTruth:
        dt = 1 / 10
        a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
        t = np.arange(0, 10, dt)
        v = a

    wm_idx = np.where(judgments[:, 0] == 0)
    sm_idx = np.where(judgments[:, 0] == 1)

    opticflow = opticflow.transpose()
    wm_opticflow = np.squeeze(opticflow[:, wm_idx])
    sm_opticflow = np.squeeze(opticflow[:, sm_idx])

    if integrateVestibular:
        vestibular = np.cumsum(vestibular * .1, axis=1)
        if addGroundTruth:
            v = np.cumsum(a * dt)

    vestibular = vestibular.transpose()
    wm_vestibular = np.squeeze(vestibular[:, wm_idx])
    sm_vestibular = np.squeeze(vestibular[:, sm_idx])

    X = np.arange(0, 10, .1)

    fig, my_axes = plt.subplots(nrows=2, ncols=2, sharex='col', sharey='row',
                                figsize=(15, 10))
    fig.suptitle('Sensory signals')

    my_axes[0][0].plot(X, wm_opticflow, color='xkcd:light red', alpha=0.1)
    my_axes[0][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
    if addGroundTruth:
        my_axes[0][0].plot(t, -v, color='xkcd:red')
    if addaverages:
        my_axes[0][0].plot(X, np.average(wm_opticflow, axis=1),
                           color='xkcd:red', alpha=1)
    my_axes[0][0].set_title('optic-flow in world-motion condition')
    my_axes[0][0].set_ylabel('velocity signal [$m/s$]')

    my_axes[0][1].plot(X, sm_opticflow, color='xkcd:azure', alpha=0.1)
    my_axes[0][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
    if addGroundTruth:
        my_axes[0][1].plot(t, -v, color='xkcd:blue')
    if addaverages:
        my_axes[0][1].plot(X, np.average(sm_opticflow, axis=1),
                           color='xkcd:blue', alpha=1)
    my_axes[0][1].set_title('optic-flow in self-motion condition')

    my_axes[1][0].plot(X, wm_vestibular, color='xkcd:light red', alpha=0.1)
    my_axes[1][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
    if addaverages:
        my_axes[1][0].plot(X, np.average(wm_vestibular, axis=1),
                           color='xkcd:red', alpha=1)
    my_axes[1][0].set_title('vestibular signal in world-motion condition')
    if addGroundTruth:
        my_axes[1][0].plot(t, np.zeros(100), color='xkcd:red')
    my_axes[1][0].set_xlabel('time [s]')
    if integrateVestibular:
        my_axes[1][0].set_ylabel('velocity signal [$m/s$]')
    else:
        my_axes[1][0].set_ylabel('acceleration signal [$m/s^2$]')

    my_axes[1][1].plot(X, sm_vestibular, color='xkcd:azure', alpha=0.1)
    my_axes[1][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
    if addGroundTruth:
        my_axes[1][1].plot(t, v, color='xkcd:blue')
    if addaverages:
        my_axes[1][1].plot(X, np.average(sm_vestibular, axis=1),
                           color='xkcd:blue', alpha=1)
    my_axes[1][1].set_title('vestibular signal in self-motion condition')
    my_axes[1][1].set_xlabel('time [s]')

    if returnaxes:
        return my_axes
    else:
        plt.show()
def my_threshold_solution(selfmotion_vel_est, threshold):
    is_move = (selfmotion_vel_est > threshold)
    return is_move
def my_moving_threshold(selfmotion_vel_est, thresholds):
    pselfmove_nomove = np.empty(thresholds.shape)
    pselfmove_move = np.empty(thresholds.shape)
    prop_correct = np.empty(thresholds.shape)
    pselfmove_nomove[:] = np.NaN
    pselfmove_move[:] = np.NaN
    prop_correct[:] = np.NaN

    for thr_i, threshold in enumerate(thresholds):
        # run my_threshold that the students will write:
        try:
            is_move = my_threshold(selfmotion_vel_est, threshold)
        except Exception:
            is_move = my_threshold_solution(selfmotion_vel_est, threshold)

        # store results:
        pselfmove_nomove[thr_i] = np.mean(is_move[0:100])
        pselfmove_move[thr_i] = np.mean(is_move[100:200])

        # calculate the proportion classified correctly:
        # Correct rejections:
        p_CR = (1 - pselfmove_nomove[thr_i])
        # correct detections:
        p_D = pselfmove_move[thr_i]

        # this is corrected for proportion of trials in each condition:
        prop_correct[thr_i] = (p_CR + p_D) / 2

    return [pselfmove_nomove, pselfmove_move, prop_correct]
def my_plot_thresholds(thresholds, world_prop, self_prop, prop_correct):
    plt.figure(figsize=(12, 8))
    plt.title('threshold effects')

    plt.plot([min(thresholds), max(thresholds)], [0, 0], ':',
             color='xkcd:black')
    plt.plot([min(thresholds), max(thresholds)], [0.5, 0.5], ':',
             color='xkcd:black')
    plt.plot([min(thresholds), max(thresholds)], [1, 1], ':',
             color='xkcd:black')

    plt.plot(thresholds, world_prop, label='world motion condition')
    plt.plot(thresholds, self_prop, label='self motion condition')
    plt.plot(thresholds, prop_correct, color='xkcd:purple',
             label='correct classification')

    idx = np.argmax(prop_correct[::-1]) + 1
    plt.plot([thresholds[-idx]] * 2, [0, 1], '--', color='xkcd:purple',
             label='best classification')
    plt.text(0.7, 0.8,
             f"threshold: {thresholds[-idx]:0.2f}"
             f"\ncorrect: {prop_correct[-idx]:0.2f}")

    plt.xlabel('threshold')
    plt.ylabel('proportion classified as self motion')
    plt.legend(facecolor='xkcd:white')
    plt.show()
def my_plot_predictions_data(judgments, predictions):
    # conditions = np.concatenate((np.abs(judgments[:, 1]),
    #                              np.abs(judgments[:, 2])))
    # veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
    # velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))

    # self:
    # conditions_self = np.abs(judgments[:, 1])
    veljudgmnt_self = judgments[:, 3]
    velpredict_self = predictions[:, 3]

    # world:
    # conditions_world = np.abs(judgments[:, 2])
    veljudgmnt_world = judgments[:, 4]
    velpredict_world = predictions[:, 4]

    fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharey='row',
                                   figsize=(12, 5))

    ax1.scatter(veljudgmnt_self, velpredict_self, alpha=0.2)
    ax1.plot([0, 1], [0, 1], ':', color='xkcd:black')
    ax1.set_title('self-motion judgments')
    ax1.set_xlabel('observed')
    ax1.set_ylabel('predicted')

    ax2.scatter(veljudgmnt_world, velpredict_world, alpha=0.2)
    ax2.plot([0, 1], [0, 1], ':', color='xkcd:black')
    ax2.set_title('world-motion judgments')
    ax2.set_xlabel('observed')
    ax2.set_ylabel('predicted')

    plt.show()
# @title Data retrieval
import os
fname="W1D2_data.npz"
if not os.path.exists(fname):
    !wget https://osf.io/c5xyf/download -O $fname
filez = np.load(file=fname, allow_pickle=True)
judgments = filez['judgments']
opticflow = filez['opticflow']
vestibular = filez['vestibular']
```
---
# Section 1: Investigating the phenomenon
```
# @title Video 1: Question
video = YouTubeVideo(id='x4b2-hZoyiY', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
```
**Goal**: formulate a good question!
**Background: The train illusion**
In the video you have learnt about the train illusion. In the same situation, we sometimes perceive our own train to be moving and sometimes the other train. How come our perception is ambiguous?
We will build a simple model with the goal _to learn about the process of model building_ (i.e.: not to explain train illusions or get a correct model). To keep this manageable, we use a _simulated_ data set. For the same reason, this tutorial contains both coding and thinking activities. Doing both is essential for success.
Imagine we get data from an experimentalist who collected _judgments_ on self motion and world motion in two conditions: one where there was only world motion, and one where there was only self motion. In either case, the velocity increased from 0 to 1 m/s across 10 seconds with the same (fairly low) acceleration. Each of these conditions was recorded 100 times:

Participants sit very still during the trials and at the end of each 10 s trial they are given two sliders, one to indicate the self-motion velocity (in m/s) and another to indicate the world-motion velocity (in m/s) _at the end of the interval_.
## TD 1.1: Form expectations about the experiment, using the phenomena
In the experiment we get the participants' _judgments_ of the velocities they experienced. In the Python chunk below, you should retain the numbers that represent your expectations of the participants' judgments. Remember that in the train illusion people usually experience either self motion or world motion, but not both.
From the lists, remove those pairs of responses you think are unlikely to be the participants' judgments. The first two pairs of coordinates (1 m/s, 0 m/s, and 0 m/s, 1 m/s) are the stimuli, so those reflect judgments without illusion. Those should stay, but how do you think participants judge velocities when they _do_ experience the illusion?
**Create Expectations**
```
# Create Expectations
###################################################################
# To complete the exercise, remove unlikely responses from the two
# lists. The lists act as X and Y coordinates for a scatter plot,
# so make sure the lists match in length.
###################################################################
world_vel_exp = [1, 0, 1, 0.5, 0.5, 0]
self_vel_exp = [0, 1, 1, 0.5, 0, 0.5]
# The code below creates a figure with your predictions:
my_plot_percepts(datasets={'expectations': {'world': world_vel_exp,
'self': self_vel_exp}})
```
## **TD 1.2**: Compare Expectations to Data
The behavioral data from our experiment is in a 200 x 5 matrix called `judgments`, where each row indicates a trial.
The first three columns in the `judgments` matrix represent the conditions in the experiment, and the last two columns list the velocity judgments.

The condition number can be 0 (world-motion condition, first 100 rows) or 1 (self-motion condition, last 100 rows). Columns 1 and 2 respectively list the true self- and world-motion velocities in the experiment. You will not have to use the first three columns.
The motion judgments (columns 3 and 4) are the participants' judgments of the self-motion velocity and world-motion velocity respectively, and should show the illusion. Let's plot the judgment data, along with the true motion of the stimuli in the experiment:
```
#@title
#@markdown Run to plot perceptual judgments
my_plot_percepts(datasets={'judgments': judgments}, plotconditions=True)
```
## TD 1.3: Think about what the data is saying, by answering these questions:
* How does it differ from your initial expectations?
* Where are the clusters of data, roughly?
* What does it mean that some of the judgments from the world-motion condition are close to the self-motion stimulus and vice versa?
* Why are there no data points in the middle?
* What aspects of the data require explanation?
---
# Section 2: Understanding background
```
# @title Video 2: Background
video = YouTubeVideo(id='DcJ91h5Ekis', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
```
**Goal:** Now that we have an interesting phenomenon, we gather background information which will refine our questions, and we lay the groundwork for developing scientific hypotheses.
**Background: Motion Sensing**:
Our self-motion percepts are based on our visual (optic flow) and vestibular (inner ear) sensing. Optic flow is the moving image on the retina caused by either self or world motion. Vestibular signals are related to bodily self-movements only. The two signals can be understood as velocity in $m/s$ (optic flow) and acceleration in $m/s^2$ (vestibular signal). We'll first look at the ground truth which is stimulating the senses in our experiment.
```
#@markdown **Run to plot motion stimuli**
my_plot_motion_signals()
```
## TD 2.1: Examine the differences between the conditions:
* how are the visual inputs (optic flow) different between the conditions?
* how are the vestibular signals different between the conditions?
* how might the brain use these signals to determine there is self motion?
* how might the brain use these signals to determine there is world motion?
We can see that, in theory, we have enough information to disambiguate self-motion from world-motion using these signals. Let's go over the logic together. The visual signal is ambiguous: it will be non-zero when there is either self-motion or world-motion. The vestibular signal is specific: it is only non-zero when there is self-motion. Combining these two signals should allow us to disambiguate the self-motion condition from the world-motion condition!
* In the world-motion condition: The brain can simply compare the visual and vestibular signals. If there is visual motion AND NO vestibular motion, it must be that the world is moving but not the body/self = world-motion judgement.
* In the self-motion condition: We can make a similar comparison. If there is both visual signals AND vestibular signals, it must be that the body/self is moving = self-motion judgement.
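The decision logic above can be sketched as a tiny rule-based function. This is purely illustrative (the name `classify_motion` and the binary cue summaries are assumptions, not part of the experiment code):

```python
def classify_motion(visual_moving, vestibular_moving):
    """Toy decision rule combining two binary sensory cues.

    visual_moving: True if the optic-flow signal is non-zero
    vestibular_moving: True if the (integrated) vestibular signal is non-zero
    """
    if visual_moving and not vestibular_moving:
        # image moves but the body does not: the world must be moving
        return 'world-motion'
    if visual_moving and vestibular_moving:
        # image and body both move: the self must be moving
        return 'self-motion'
    return 'no motion'

print(classify_motion(True, False))  # world-motion condition
print(classify_motion(True, True))   # self-motion condition
```

With noiseless binary cues this rule is trivially correct; the interesting question, which the rest of the tutorial addresses, is what happens when the cues are noisy.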
**Background: Integrating signals**:
To understand how the vestibular _acceleration_ signal could underlie the perception of self-motion _velocity_, we assume the brain integrates the signal. This also allows comparing the vestibular signal to the visual signal, by getting them in the same units. Read more about integration on [Wikipedia](https://en.wikipedia.org/wiki/Integral).
Below we will approximate the integral using `np.cumsum()`. The discrete integral would be:
$$v_t = \sum_{k=0}^t a_k\cdot\Delta t + v_0$$
* $a(t)$ is acceleration as a function of time
* $v(t)$ is velocity as a function of time
* $\Delta t$ is equal to the sample interval of our recorded visual and vestibular signals (0.1 s).
* $v_0$ is the _constant of integration_, which corresponds to the initial velocity at time $0$ (it would have to be known or remembered). Since that is always 0 in our experiment, we will leave it out from here on.
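As a quick sanity check on the formula (a minimal standalone sketch, not the experiment's signals), `np.cumsum(a * dt)` computes exactly the running sum $\sum_{k=0}^t a_k \cdot \Delta t$:

```python
import numpy as np

dt = 0.1
a = np.ones(5) * 2.0          # constant acceleration of 2 m/s^2
v_cumsum = np.cumsum(a * dt)  # integration via cumulative sum

# the same running sum written out term by term:
v_explicit = np.array([np.sum(a[:t + 1] * dt) for t in range(a.size)])

# both give the linearly increasing velocity 0.2, 0.4, 0.6, 0.8, 1.0
assert np.allclose(v_cumsum, v_explicit)
```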
### Numerically Integrating a signal
Below is a chunk of code which uses the `np.cumsum()` function to integrate the acceleration that was used in our (simulated) experiment: `a` over `dt` in order to get a velocity signal `v`.
```
# Check out the code:
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
# This does the integration of acceleration into velocity:
v = np.cumsum(a * dt)
my_plot_stimuli(t, a, v)
```
**Background: Sensory signals are noisy**
In our experiment, we also recorded sensory signals in the participant. The data come in two 200 x 100 matrices:
`opticflow` (with the visual signals)
and
`vestibular` (with the vestibular signals)
In each of the signal matrices _rows_ (200) represent **trials**, in the same order as in the `judgments` matrix. _Columns_ (100) are **time samples**, representing 10 s collected with a 100 ms time bin.

Here we plot the data representing our 'sensory signals':
* plot optic flow signals for self-motion vs world-motion conditions (should be the same)
* plot vestibular signals for self-motion vs world-motion conditions (should be different)
The x-axis is time in seconds, but the y-axis can be one of several, depending on what you do with the signals: $m/s^2$ (acceleration) or $m/s$ (velocity).
```
#@markdown **Run to plot raw noisy sensory signals**
# signals as they are:
my_plot_sensorysignals(judgments, opticflow, vestibular,
integrateVestibular=False)
```
## TD 2.2: Understanding the problem of noisy sensory information
**Answer the following questions:**
* Is this what you expected?
* In which of the two signals should we be able to see a difference between the conditions?
* Can we use the data as it is to differentiate between the conditions?
* Can we compare the visual and vestibular motion signals when they're in different units?
* What would the brain do to differentiate the two conditions?
Now that we know how to integrate the vestibular signal to get it into the same unit as the optic flow, we can see if it shows the pattern it should: a flat line in the world-motion condition and the correct velocity profile in the self-motion condition. Run the chunk of Python below to plot the sensory data again, but now with the vestibular signal integrated.
```
#@markdown **Run to compare true signals to sensory data**
my_plot_sensorysignals(judgments, opticflow, vestibular,
integrateVestibular=True, returnaxes=False,
addaverages=False, addGroundTruth=True)
```
The thick lines are the ground truth: the actual velocities in each of the conditions. With some effort, we can make out that _on average_ the vestibular signal does show the expected pattern after all. But there is also a lot of noise in the data.
**Background Summary**:
Now that we have examined the sensory signals and understand how they relate to the ground truth, we see that there is enough information to _in principle_ disambiguate true self-motion from true world-motion (there should be no illusion!). However, because the sensory information contains a lot of noise, i.e. it is unreliable, it could result in ambiguity.
**_It is time to refine our research question:_**
* Does the self-motion illusion occur due to unreliable sensory information?
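To make the role of noise concrete, here is a standalone sketch with simulated data (the constant signal, noise level, and trial counts are assumptions, not the experiment's values): averaging across $N$ noisy trials shrinks the noise by roughly $\sqrt{N}$, which is why the trial average can look clear while single trials stay ambiguous.

```python
import numpy as np

rng = np.random.default_rng(0)

true_velocity = 0.5                  # hypothetical constant signal
n_trials, n_samples = 100, 100
noise_sd = 1.0

# simulated noisy sensory measurements: signal plus Gaussian noise
trials = true_velocity + rng.normal(0, noise_sd, size=(n_trials, n_samples))

single_trial_sd = trials[0].std()         # noise in one trial (~1.0)
trial_average_sd = trials.mean(axis=0).std()  # noise in the 100-trial mean (~0.1)

# averaging 100 trials shrinks the noise by about sqrt(100) = 10
assert trial_average_sd < single_trial_sd / 5
```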
---
# Section 3: Identifying ingredients
```
# @title Video 3: Ingredients
video = YouTubeVideo(id='ZQRtysK4OCo', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
```
## TD 3.1: Understand the moving average function
**Goal**: think about what ingredients we will need for our model
We have access to sensory signals from the visual and vestibular systems that are used to estimate world motion and self motion.
However, there are still two issues:
1. _While sensory input can be noisy or unstable, perception is much more stable._
2. _In the judgments there is either self motion or not._
We will solve this by using:
1. _a moving average filter_ to stabilize our sensory signals
2. _a threshold function_ to distinguish moving from non-moving
One of the simplest models of noise reduction is a moving average (sometimes: moving mean or rolling mean) over the recent past. In a discrete signal we specify the number of samples to use for the average (including the current one), and this is often called the _window size_. For more information on the moving average, check [this Wikipedia page](https://en.wikipedia.org/wiki/Moving_average).
In this tutorial there is a simple running average function available:
`my_moving_window(s, w)`: takes a signal time series $s$ and a window size $w$ as input and returns the moving average for all samples in the signal.
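For intuition, here is a minimal trailing moving average with the edge behaviour described above (near the start it simply uses fewer samples rather than padding). This is a sketch, not the notebook's `my_moving_window`, which additionally handles matrices and NaNs:

```python
import numpy as np

def trailing_mean(x, window):
    """Average over the current sample and up to window-1 preceding ones."""
    out = np.zeros(x.size)
    for i in range(x.size):
        start = max(0, i - window + 1)  # near the edge, use fewer samples
        out[i] = np.mean(x[start:i + 1])
    return out

x = np.array([1.0, 2.0, 3.0, 4.0])
# window=2 averages each sample with its predecessor: 1.0, 1.5, 2.5, 3.5
print(trailing_mean(x, window=2))
```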
### Interactive Demo: Averaging window
The code below picks one vestibular signal, integrates it to get a velocity estimate for self motion, and then filters it. You can set the window size.
Try different window sizes, then answer the following:
* What is the maximum window size? The minimum?
* Why does increasing the window size shift the curves?
* How do the filtered estimates differ from the true signal?
```
#@title
#@markdown Make sure you execute this cell to enable the widget!
t = np.arange(0, 10, .1)
def refresh(trial_number=101, window=15):
    # get the trial signal:
    signal = vestibular[trial_number - 1, :]
    # integrate it:
    signal = np.cumsum(signal * .1)
    # plot this signal
    plt.plot(t, signal, label='integrated vestibular signal')
    # filter:
    signal = my_moving_window(signal, window=window, FUN=np.mean)
    # plot filtered signal
    plt.plot(t, signal, label=f'filtered with window: {window}')
    plt.legend()
    plt.show()
_ = widgets.interact(refresh, trial_number=(1, 200, 1), window=(5, 100, 1))
```
_Note: the function `my_moving_window()` is defined in this notebook in the code block at the top called "Helper functions". It should be the first function there, so feel free to check how it works._
## TD 3.2: Thresholding the self-motion vestibular signal
Comparing the integrated, filtered (accumulated) vestibular signals with a threshold should allow determining if there is self motion or not.
To try this, we:
1. Integrate the vestibular signal, apply a moving average filter, and take the last value of each trial's vestibular signal as an estimate of self-motion velocity.
2. Transfer the estimates of self-motion velocity into binary (0,1) decisions by comparing them to a threshold. Remember that the output of comparison operators (`>`, `==`, `<`) is Boolean (True/1, False/0). 1 indicates we think there was self-motion and 0 indicates otherwise. YOUR CODE HERE.
3. We sort these decisions separately for conditions of real world-motion vs. real self-motion to determine 'classification' accuracy.
4. To understand how the threshold impacts classification accuracy, we do 1-3 for a range of thresholds.
There is one line of code to complete, which will implement step 2.
#### Exercise 1: Threshold self-motion velocity into binary classification of self-motion
```
def my_threshold(selfmotion_vel_est, threshold):
    """
    This function should decide, for each trial, whether there was
    self motion, by comparing the estimated self-motion velocity
    to a threshold.

    Args:
        selfmotion_vel_est (numpy.ndarray): A sequence of floats
            indicating the estimated self motion for all trials.
        threshold (float): A threshold for the estimate of self motion when
            the brain decides there really is self motion.

    Returns:
        (numpy.ndarray): self-motion: yes or no.
    """
    ##############################################################
    # Compare the self motion estimates to the threshold:
    # Replace '...' with the proper code:
    # Remove the next line to test your function
    raise NotImplementedError("Modify my_threshold function")
    ##############################################################

    # Compare the self motion estimates to the threshold
    is_move = ...

    return is_move
# to_remove solution
def my_threshold(selfmotion_vel_est, threshold):
"""
This function should calculate proportion self motion
for both conditions and the overall proportion
correct classifications.
Args:
selfmotion_vel_est (numpy.ndarray): A sequence of floats
indicating the estimated self motion for all trials.
threshold (float): A threshold for the estimate of self motion when
the brain decides there really is self motion.
Returns:
self-motion: yes or no.
"""
# Compare the self motion estimates to the threshold
is_move = (selfmotion_vel_est > threshold)
return is_move
```
### Interactive Demo: Threshold vs. averaging window
Now we combine the classification steps 1-3 above, for a variable threshold. This will allow us to find the threshold that produces the most accurate classification of self-motion.
We also add a 'widget' that controls the size of the moving average window. How does the optimal threshold vary with window size?
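The helper `my_moving_threshold()` used below also comes from the "Helper functions" block at the top of the notebook. Its core logic can be sketched roughly as follows (a hypothetical reconstruction: the `is_selfmotion` label argument is an assumption, and the real helper additionally returns the per-condition proportions rather than just overall accuracy):

```python
import numpy as np

def moving_threshold(selfmotion_vel_est, thresholds, is_selfmotion):
    """Overall classification accuracy for each candidate threshold.

    is_selfmotion: boolean array, True for real self-motion trials
    (in the notebook this would come from the experimental conditions).
    """
    pcorrect = np.zeros(len(thresholds))
    for i, thr in enumerate(thresholds):
        decisions = selfmotion_vel_est > thr     # step 2: binary decision
        pcorrect[i] = np.mean(decisions == is_selfmotion)  # step 3: accuracy
    return pcorrect

est = np.array([0.1, 0.2, 0.6, 0.9])
labels = np.array([False, False, True, True])
print(moving_threshold(est, np.array([0.0, 0.4, 1.0]), labels))  # [0.5 1.  0.5]
```

Sweeping `thresholds` like this is exactly what produces the accuracy curves plotted by the widget.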
```
#@title
#@markdown Make sure you execute this cell to enable the widget!
thresholds = np.round_(np.arange(0, 1.01, .01), 2)
v_ves = np.cumsum(vestibular * .1, axis=1)
def refresh(window=50):
selfmotion_vel_est = my_moving_window(v_ves, window=window,
FUN=np.mean)[:, 99]
[pselfmove_nomove,
pselfmove_move,
pcorrect] = my_moving_threshold(selfmotion_vel_est, thresholds)
my_plot_thresholds(thresholds, pselfmove_nomove, pselfmove_move, pcorrect)
_ = widgets.interact(refresh, window=(1, 100, 1))
```
Let's unpack this:
Ideally, in the self-motion condition (orange line) we should always detect self motion, and never in the world-motion condition (blue line). This doesn't happen, regardless of the settings we pick. However, we can pick a threshold that gives the highest proportion correctly classified trials, which depends on the window size, but is between 0.2 and 0.4. We'll pick the optimal threshold for a window size of 100 (the full signal) which is at 0.33.
The ingredients we have collected for our model so far:
* integration: get the vestibular signal in the same unit as the visual signal
* running average: accumulate evidence over some time, so that perception is stable
* decision if there was self motion (threshold)
Since the velocity judgments are made at the end of the 10 second trials, it seems reasonable to use the sensory signals at the last sample to estimate what percept the participants said they had.
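Putting these ingredients together, the per-trial self-motion decision can be sketched end-to-end as below (a schematic sketch only: the 0.1 s sample interval and 0.33 threshold follow the values used above, but the convolution-based running average is a stand-in for the notebook's own helper):

```python
import numpy as np

def classify_self_motion(vestibular_trial, dt=0.1, window=100, threshold=0.33):
    # 1. integration: turn the vestibular signal into a velocity estimate
    v = np.cumsum(vestibular_trial * dt)
    # 2. running average: accumulate evidence so the percept is stable
    kernel = np.ones(window) / window
    v_smooth = np.convolve(v, kernel, mode='full')[:len(v)]
    # 3. threshold the last sample (the moment the judgment is made)
    return v_smooth[-1] > threshold

rng = np.random.default_rng(0)
trial = 0.1 + 0.05 * rng.standard_normal(100)  # noisy constant acceleration
print(classify_self_motion(trial))  # True: smoothed velocity ends above threshold
```

A flat (zero-acceleration) trial would integrate to zero velocity and be classified as world motion instead.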
---
# Section 4: Formulating hypotheses
```
# @title Video 4: Hypotheses
video = YouTubeVideo(id='wgOpbfUELqU', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
```
**Goal**: formulate reasonable hypotheses in mathematical language using the ingredients identified in step 3. Write hypotheses as a function that we evaluate the model against later.
**Question:** _Why do we experience the illusion?_
We know that there are two real motion signals, and that these drive two sensory signals:
> $w_v$: world motion (velocity magnitude)
>
> $s_v$: self motion (velocity magnitude)
>
> $s_{visual}$: optic flow signal
>
> $s_{vestibular}$: vestibular signal
Optic flow is ambiguous, as both world motion and self motion drive visual motion.
$$s_{visual} = w_v - s_v + noise$$
Notice that world motion and self motion might cancel out. For example, if the train you are on, and the train you are looking at, both move at exactly the same speed.
Vestibular signals are driven only by self motion, but _can_ be ambiguous when they are noisy.
$$s_{vestibular} = s_v + noise$$
**Combining Relationships**
Without the sensory noise, these two relations are two linear equations, with two unknowns!
This suggests the brain could simply "solve" for $s_v$ and $w_v$.
However, given the noisy signals, sometimes these solutions will not be correct. Perhaps that is enough to explain the illusion?
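Ignoring the noise terms, "solving" the system amounts to simple algebra: $s_v = s_{vestibular}$ and $w_v = s_{visual} + s_{vestibular}$. A quick numerical sketch of how sensory noise corrupts these solutions (the noise levels here are made up for illustration, with the vestibular channel deliberately noisier):

```python
import numpy as np

rng = np.random.default_rng(1)
w_v, s_v = 0.0, 1.0               # true world motion and self motion (m/s)
n_trials = 10000
sigma_vis, sigma_ves = 0.1, 0.3   # assumed noise levels (vestibular noisier)

s_visual = w_v - s_v + sigma_vis * rng.standard_normal(n_trials)
s_vestibular = s_v + sigma_ves * rng.standard_normal(n_trials)

# "solve" the two linear equations as if there were no noise:
s_v_hat = s_vestibular
w_v_hat = s_visual + s_vestibular

# unbiased on average, but the world-motion estimate inherits noise
# from BOTH sensory channels
print(np.mean(w_v_hat), np.std(w_v_hat))
```

On individual trials, large vestibular noise can therefore make a stationary world ($w_v = 0$) look like it moved, which is the kind of error the illusion hypothesis relies on.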
## TD 4.1: Write out Hypothesis
Use the discussion and framing to write out your hypothesis in the form:
> Illusory self-motion occurs when (give preconditions). We hypothesize it occurs because (explain how our hypothesized relationships work)
## TD 4.2: Relate hypothesis to ingredients
Now it's time to pull together the ingredients and relate them to our hypothesis.
**For each trial we have:**
| variable | description |
| ---- | ---- |
| $\hat{v_s}$ | **self motion judgment** (in m/s)|
| $\hat{v_w}$ | **world motion judgment** (in m/s)|
| $s_{ves}$ | **vestibular info** filtered and integrated vestibular information |
| $s_{opt}$ | **optic flow info** filtered optic flow information |
| $z_s$ | **Self-motion detection** boolean value (True/False) indicating whether the vestibular info was above threshold or not |
Answer the following questions by replotting your data and ingredients:
* which of the 5 variables does our hypothesis say should be related?
* what do you expect these plots to look like?
```
# Run to calculate variables
# these 5 lines calculate the main variables that we might use in the model
s_ves = my_moving_window(np.cumsum(vestibular * .1, axis=1), window=100)[:, 99]
s_opt = my_moving_window(opticflow, window=50)[:, 99]
v_s = s_ves
v_w = -s_opt - v_s
z_s = (s_ves > 0.33)
```
In the first chunk of code we plot histograms to compare the variability of the velocity estimates we get from each of the two sensory signals.
**Plot histograms**
```
# Plot histograms
plt.figure(figsize=(8, 6))
plt.hist(s_ves, label='vestibular', alpha=0.5) # set the first argument here
plt.hist(s_opt, label='visual', alpha=0.5) # set the first argument here
plt.ylabel('frequency')
plt.xlabel('velocity estimate')
plt.legend(facecolor='xkcd:white')
plt.show()
```
This is consistent with the vestibular signals being noisier than the visual signals.
Below is generic code to create scatter diagrams. Use it to see whether the relationships between variables are the way you expect. For example, what is the relationship between the estimates of self motion and world motion, as we calculate them here?
### Exercise 2: Build a scatter plot
```
# this sets up a figure with some dotted lines on y=0 and x=0 for reference
plt.figure(figsize=(8, 8))
plt.plot([0, 0], [-0.5, 1.5], ':', color='xkcd:black')
plt.plot([-0.5, 1.5], [0, 0], ':', color='xkcd:black')
#############################################################################
# uncomment below and fill in with your code
#############################################################################
# determine which variables you want to look at (variable on the abscissa / x-axis, variable on the ordinate / y-axis)
# plt.scatter(...)
plt.xlabel('world-motion velocity [m/s]')
plt.ylabel('self-motion velocity [m/s]')
plt.show()
# to_remove solution
# this sets up a figure with some dotted lines on y=0 and x=0 for reference
with plt.xkcd():
plt.figure(figsize=(8, 8))
plt.plot([0, 0], [-0.5, 1.5], ':', color='xkcd:black')
plt.plot([-0.5, 1.5], [0, 0], ':', color='xkcd:black')
# determine which variables you want to look at (variable on the abscissa / x-axis, variable on the ordinate / y-axis)
plt.scatter(v_w, v_s)
plt.xlabel('world-motion velocity [m/s]')
plt.ylabel('self-motion velocity [m/s]')
plt.show()
```
Below is code that uses $z_s$ to split the trials into two categories (i.e., $s_{ves}$ below or above threshold) and plots the mean in each category.
### Exercise 3: Split variable means bar graph
```
###################################
# Fill in source_var and uncomment
####################################
# source variable you want to check
source_var = ...
# below = np.mean(source_var[np.where(np.invert(z_s))[0]])
# above = np.mean(source_var[np.where(z_s)[0]] )
# plt.bar(x=[0, 1], height=[below, above])
plt.xticks([0, 1], ['below', 'above'])
plt.show()
# to_remove solution
# source variable you want to check
source_var = v_w
below = np.mean(source_var[np.where(np.invert(z_s))[0]])
above = np.mean(source_var[np.where(z_s)[0]] )
with plt.xkcd():
plt.bar(x=[0, 1], height=[below, above])
plt.xticks([0, 1], ['below', 'above'])
plt.show()
```
---
# Section 5: Toolkit selection
```
# @title Video 5: Toolkit
video = YouTubeVideo(id='rsmnayVfJyM', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
```
**Goal**: with the question, ingredients and hypotheses in mind, which toolkit (modeling approach) would be best to use?
**Toolkits**
The lecture covers the notion of toolkit. Here we explain the toolkit we use in our simulation.
**Simulation as a generic tool**
Because our data provides per-trial measurements of both inputs and outputs, designing a _simulation_ is a powerful approach.
In general, simulation models have a typical structure:

Simulation models have:
* **inputs** including input variables, _and_ the ingredients needed to build the simulation functions.
* **simulation** which runs the model _many times_ on the inputs, over a range of scenarios (typically different parameter or model choices).
In addition, many simulations run _replications_, which are useful if components of your input or model are _stochastic_.
* **outputs** the output variables. These may have replications if the simulations are repeated to account for stochastic elements, which makes them well suited for statistical analysis.
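The inputs → simulation → outputs structure can be sketched as generic scaffolding (everything here is a placeholder: the `run_model` body, the parameter values, and the replication count are illustrative assumptions, not part of our actual model):

```python
import numpy as np

def run_model(inputs, param, rng):
    # placeholder simulation function: a noisy threshold on the input mean
    return np.mean(inputs) + 0.1 * rng.standard_normal() > param

def simulate(inputs, params, n_replications=100, seed=0):
    rng = np.random.default_rng(seed)
    outputs = {}
    for param in params:  # range of scenarios / parameter choices
        # replications capture the stochastic components of the model
        outputs[param] = [run_model(inputs, param, rng)
                          for _ in range(n_replications)]
    return outputs

out = simulate(np.ones(10), params=[0.5, 1.5], n_replications=200)
print(np.mean(out[0.5]), np.mean(out[1.5]))
```

The output dictionary (one list of replicated outcomes per parameter setting) is exactly the kind of object that lends itself to statistical analysis afterwards.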
_____
The elements of a simulation model are quite generic and occur in most models. The most important considerations in deciding a toolkit are:
* What types of mechanisms (causes, processes, systems) will you need to consider in your model?
* What aspects of the data and ingredients are _essential_, which are clearly _irrelevant_, and which can be abstracted over?
These considerations are completely determined by your research question. To better understand this, we will engage you in a TA-led discussion.
## TD 5.1: TA-guided discussion about how questions drive toolkit selection.
NOTES: [See slides:](https://mfr.ca-1.osf.io/render?url=https://osf.io/v5emn/?direct%26mode=render%26action=download%26mode=render)
1. **DISCUSS MECHANISMS/INGREDIENTS NEEDED FOR QUESTIONS**: what kinds of mechanisms and ingredients are needed for the discussion questions
2. **DISCUSS WHAT INGREDIENTS DON'T MATTER**: simulation / level of abstraction (depends on what is measured & is irrelevant to the question)
---
# Summary
In this tutorial, we worked through some steps of the process of modeling.
- We defined a phenomenon and formulated a question (step 1)
- We collected information on the state of the art of the topic (step 2)
- We determined the basic ingredients (step 3), and used these to formulate a specific mathematically defined hypothesis (step 4), and
- We chose the most appropriate modeling approach (i.e., toolkit) for our phenomenon/background information/hypothesis (step 5)
In the next tutorial, we will continue with the steps 6-10!
---
# Reading
Blohm G, Kording KP, Schrater PR (2020). _A How-to-Model Guide for Neuroscience_ eNeuro, 7(1) ENEURO.0352-19.2019. https://doi.org/10.1523/ENEURO.0352-19.2019
Dokka K, Park H, Jansen M, DeAngelis GC, Angelaki DE (2019). _Causal inference accounts for heading perception in the presence of object motion._ PNAS, 116(18):9060-9065. https://doi.org/10.1073/pnas.1820373116
Drugowitsch J, DeAngelis GC, Klier EM, Angelaki DE, Pouget A (2014). _Optimal Multisensory Decision-Making in a Reaction-Time Task._ eLife, 3:e03005. https://doi.org/10.7554/eLife.03005
Hartmann, M, Haller K, Moser I, Hossner E-J, Mast FW (2014). _Direction detection thresholds of passive self-motion in artistic gymnasts._ Exp Brain Res, 232:1249–1258. https://doi.org/10.1007/s00221-014-3841-0
Mensh B, Kording K (2017). _Ten simple rules for structuring papers._ PLoS Comput Biol 13(9): e1005619. https://doi.org/10.1371/journal.pcbi.1005619
Seno T, Fukuda H (2012). _Stimulus Meanings Alter Illusory Self-Motion (Vection) - Experimental Examination of the Train Illusion._ Seeing Perceiving, 25(6):631-45. https://doi.org/10.1163/18784763-00002394
# Custom widgets in a notebook
This notebook explores a couple of ways to interact with the user and modify the output based on these interactions. It is inspired by the examples from [ipywidgets](http://ipywidgets.readthedocs.io/).
```
from jyquickhelper import add_notebook_menu
add_notebook_menu()
```
## List of widgets
[Widget List](https://ipywidgets.readthedocs.io/en/latest/examples/Widget%20List.html)
```
import ipywidgets
import datetime
obj = ipywidgets.DatePicker(
description='Pick a Date',
disabled=False,
value=datetime.datetime.now(),
)
obj
obj.value
```
## Events
```
from IPython.display import display
button = ipywidgets.Button(description="Click Me!")
display(button)
def on_button_clicked(b):
print("Button clicked.")
button.on_click(on_button_clicked)
int_range = ipywidgets.IntSlider()
display(int_range)
def on_value_change(change):
print(change['new'])
int_range.observe(on_value_change, names='value')
```
## matplotlib
```
%matplotlib inline
import matplotlib.pyplot as plt
import networkx as nx
def random_lobster(n, m, k, p):
return nx.random_lobster(n, p, p / m)
def powerlaw_cluster(n, m, k, p):
return nx.powerlaw_cluster_graph(n, m, p)
def erdos_renyi(n, m, k, p):
return nx.erdos_renyi_graph(n, p)
def newman_watts_strogatz(n, m, k, p):
return nx.newman_watts_strogatz_graph(n, k, p)
def plot_random_graph(n, m, k, p, generator):
g = generator(n, m, k, p)
nx.draw(g)
plt.show()
ipywidgets.interact(plot_random_graph, n=(2,30), m=(1,10), k=(1,10), p=(0.0, 1.0, 0.001),
generator={
'lobster': random_lobster,
'power law': powerlaw_cluster,
'Newman-Watts-Strogatz': newman_watts_strogatz,
'Erdős-Rényi': erdos_renyi,
});
```
## Custom widget - text
[Building a Custom Widget - Hello World](http://ipywidgets.readthedocs.io/en/latest/examples/Widget%20Custom.html).
```
import ipywidgets as widgets
from traitlets import Unicode, validate
class HelloWidget(widgets.DOMWidget):
_view_name = Unicode('HelloView').tag(sync=True)
_view_module = Unicode('hello').tag(sync=True)
_view_module_version = Unicode('0.1.0').tag(sync=True)
value = Unicode('Hello World! - ').tag(sync=True)
%%javascript
require.undef('hello');
define('hello', ["@jupyter-widgets/base"], function(widgets) {
var HelloView = widgets.DOMWidgetView.extend({
render: function() {
this.value_changed();
this.model.on('change:value', this.value_changed, this);
},
value_changed: function() {
this.el.textContent = this.model.get('value');
},
});
return {
HelloView : HelloView
};
});
w = HelloWidget()
w
w.value = 'changed the value'
```
## Custom widget - html - svg - events
See [Low Level Widget Tutorial](http://ipywidgets.readthedocs.io/en/latest/examples/Widget%20Low%20Level.html), [CircleView](https://github.com/paul-shannon/notebooks/blob/master/study/CircleView.ipynb). The following example links a custom widget and a sliding bar which defines the radius of the circle to draw. See [Linking two similar widgets](http://ipywidgets.readthedocs.io/en/latest/examples/Widget%20Basics.html#Linking-two-similar-widgets). The information (circles, radius) is declared in a python class *CircleWidget* and available in the javascript code in two places: the widget (``this.model``) and the view itself (used to connect events to it). Finally, a link is added between two values: the value from the first widget (sliding bar) and the radius from the second widget (*CircleWidget*).
```
%%javascript
require.config({
paths: {
d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.4.8/d3.min'
},
});
import ipywidgets
from traitlets import Int, Unicode, Tuple, CInt, Dict, validate
class CircleWidget(ipywidgets.DOMWidget):
_view_name = Unicode('CircleView').tag(sync=True)
_view_module = Unicode('circle').tag(sync=True)
radius = Int(100).tag(sync=True)
circles = Tuple().tag(sync=True)
width = Int().tag(sync=True)
height = Int().tag(sync=True)
radius = Int().tag(sync=True)
def __init__(self, **kwargs):
super(ipywidgets.DOMWidget, self).__init__(**kwargs)
self.width = kwargs.get('width', 500)
self.height = kwargs.get('height', 100)
self.radius = 1
def drawCircle(self, x, y, fillColor="white", borderColor="black"):
newCircle = {"x": x, "y": y, "radius": self.radius * 10, "fillColor": fillColor, "borderColor": borderColor}
self.circles = self.circles + (newCircle,)
%%javascript
"use strict";
require.undef('circle');
define('circle', ["@jupyter-widgets/base", "d3"], function(widgets, d3) {
var CircleView = widgets.DOMWidgetView.extend({
initialize: function() {
console.log("---- initialize, this:");
console.log(this);
this.circles = [];
this.radius = 1;
},
createDiv: function(){
var width = this.model.get('width');
var height = this.model.get('height');
var divstyle = $("<div id='d3DemoDiv' style='border:1px solid red; height: " +
height + "px; width: " + width + "px'>");
return(divstyle);
},
createCanvas: function(){
var width = this.model.get('width');
var height = this.model.get('height');
var radius = this.model.get('radius');
console.log("--SIZE--", width, 'x', height, " radius", radius);
var svg = d3.select("#d3DemoDiv")
.append("svg")
.attr("id", "svg").attr("width", width).attr("height", height);
this.svg = svg;
var circleView = this;
svg.on('click', function() {
var coords = d3.mouse(this);
//debugger;
var radius = circleView.radius;
console.log('--MOUSE--', coords, " radius:", radius);
var newCircle = {x: coords[0], y: coords[1], radius: 10 * radius,
borderColor: "black", fillColor: "beige"};
circleView.circles.push(newCircle);
circleView.drawCircle(newCircle);
//debugger;
circleView.model.set("circles", JSON.stringify(circleView.circles));
circleView.touch();
});
},
drawCircle: function(obj){
this.svg.append("circle")
.style("stroke", "gray")
.style("fill", "white")
.attr("r", obj.radius)
.attr("cx", obj.x)
.attr("cy", obj.y)
.on("mouseover", function(){d3.select(this).style("fill", "aliceblue");})
.on("mouseout", function(){d3.select(this).style("fill", "white");});
},
render: function() {
this.$el.append(this.createDiv());
this.listenTo(this.model, 'change:circles', this._circles_changed, this);
this.listenTo(this.model, 'change:radius', this._radius_changed, this);
var circleView = this;
function myFunc(){
circleView.createCanvas()
};
setTimeout(myFunc, 500);
},
_circles_changed: function() {
var circles = this.model.get("circles");
var newCircle = circles[circles.length-1];
console.log('--DRAW--', this.circles);
this.circles.push(newCircle);
console.log('--LENGTH--', circles.length, " == ", circles.length);
this.drawCircle(newCircle);
},
_radius_changed: function() {
console.log('--RADIUS--', this.radius, this.model.get('radius'));
this.radius = this.model.get('radius');
}
});
return {
CircleView : CircleView
};
});
cw = CircleWidget(width=500, height=100)
scale = ipywidgets.IntSlider(1, 0, 10)
box = widgets.VBox([scale, cw])
mylink = ipywidgets.jslink((cw, 'radius'), (scale, 'value'))
box
cw.drawCircle(x=30, y=30)
scale.value = 2
cw.drawCircle(x=60, y=30)
```
# Working With STAC
[](https://mybinder.org/v2/gh/developmentseed/titiler/master?filepath=docs%2Fexamples%2F%2Fnotebooks%2FWorking_with_STAC_simple.ipynb)
### STAC: SpatioTemporal Asset Catalog
> The SpatioTemporal Asset Catalog (STAC) specification aims to standardize the way geospatial assets are exposed online and queried. A 'spatiotemporal asset' is any file that represents information about the earth captured in a certain space and time. The initial focus is primarily remotely-sensed imagery (from satellites, but also planes, drones, balloons, etc), but the core is designed to be extensible to SAR, full motion video, point clouds, hyperspectral, LiDAR and derived data like NDVI, Digital Elevation Models, mosaics, etc.
Ref: https://github.com/radiantearth/stac-spec
Using STAC makes data indexing and discovery much easier. In addition to the Collection/Item/Asset (data) specifications, data providers are also encouraged to follow a STAC API specification: https://github.com/radiantearth/stac-api-spec
> The API is compliant with the OGC API - Features standard (formerly known as OGC Web Feature Service 3), in that it defines many of the endpoints that STAC uses. A STAC API should be compatible and usable with any OGC API - Features clients. The STAC API can be thought of as a specialized Features API to search STAC Catalogs, where the features returned are STAC Items, that have common properties, links to their assets and geometries that represent the footprints of the geospatial assets.
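In practice, a STAC API search is a POST (or GET) against the service's `/search` endpoint with a GeoJSON-style body. The sketch below only assembles such a request; the endpoint URL is a placeholder and the HTTP call is left commented out rather than executed (the collection name and bounding box are just illustrative):

```python
import json

# Hypothetical STAC API endpoint — replace with a real service.
stac_api = "https://example.com/stac"

search_body = {
    "collections": ["sentinel-s2-l2a-cogs"],     # which catalogs to search
    "bbox": [21.5, 36.5, 22.5, 37.5],            # lon/lat bounding box
    "datetime": "2020-03-01T00:00:00Z/2020-03-31T23:59:59Z",
    "limit": 10,
}

# With httpx (used later in this notebook) the call would look like:
# items = httpx.post(f"{stac_api}/search", json=search_body).json()["features"]

print(json.dumps(search_body, indent=2))
```

The response is a GeoJSON FeatureCollection whose features are STAC Items, each carrying links to its assets, which is what we exploit below.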
## Requirements
To be able to run this notebook you'll need the following requirements:
- folium
- httpx
- rasterio
`!pip install folium httpx rasterio`
```
# Uncomment this line if you need to install the dependencies
# !pip install folium httpx rasterio
import httpx
from rasterio.features import bounds as featureBounds
from folium import Map, TileLayer, GeoJson
%pylab inline
titiler_endpoint = "https://titiler.xyz" # Developmentseed Demo endpoint. Please be kind.
stac_item = "https://sentinel-cogs.s3.amazonaws.com/sentinel-s2-l2a-cogs/34/S/GA/2020/3/S2A_34SGA_20200301_0_L2A/S2A_34SGA_20200301_0_L2A.json"
item = httpx.get(stac_item).json()
print(item)
print(list(item["assets"]))
for it, asset in item["assets"].items():
print(asset["type"])
bounds = featureBounds(item)
m = Map(
tiles="OpenStreetMap",
location=((bounds[1] + bounds[3]) / 2,(bounds[0] + bounds[2]) / 2),
zoom_start=8
)
geo_json = GeoJson(data=item)
geo_json.add_to(m)
m
# Get Tile URL
r = httpx.get(
f"{titiler_endpoint}/stac/info",
params = (
("url", stac_item),
# Get info for multiple assets
("assets","visual"), ("assets","B04"), ("assets","B03"), ("assets","B02"),
)
).json()
print(r)
```
### Display one asset
```
r = httpx.get(
f"{titiler_endpoint}/stac/tilejson.json",
params = {
"url": stac_item,
"assets": "visual",
"minzoom": 8, # By default titiler will use 0
"maxzoom": 14, # By default titiler will use 24
}
).json()
m = Map(
location=((bounds[1] + bounds[3]) / 2,(bounds[0] + bounds[2]) / 2),
zoom_start=10
)
tiles = TileLayer(
tiles=r["tiles"][0],
min_zoom=r["minzoom"],
max_zoom=r["maxzoom"],
opacity=1,
attr="ESA"
)
tiles.add_to(m)
m
```
### Select Indexes for assets
```
# Get Tile URL
r = httpx.get(
f"{titiler_endpoint}/stac/tilejson.json",
params = {
"url": stac_item,
"assets": "visual",
"asset_bidx": "visual|3,1,2",
"minzoom": 8, # By default titiler will use 0
"maxzoom": 14, # By default titiler will use 24
}
).json()
m = Map(
location=((bounds[1] + bounds[3]) / 2,(bounds[0] + bounds[2]) / 2),
zoom_start=12
)
tiles = TileLayer(
tiles=r["tiles"][0],
min_zoom=r["minzoom"],
max_zoom=r["maxzoom"],
opacity=1,
attr="ESA"
)
tiles.add_to(m)
m
# Get Tile URL
r = httpx.get(
f"{titiler_endpoint}/stac/tilejson.json",
params = (
("url", stac_item),
("assets", "B04"),
("assets", "B03"),
("assets", "B02"),
("asset_bidx", "B04|1"), # There is only one band per asset for Sentinel
("asset_bidx", "B03|1"), # There is only one band per asset for Sentinel
("asset_bidx", "B02|1"), # There is only one band per asset for Sentinel
("minzoom", 8),
("maxzoom", 14),
("rescale", "0,10000"),
)
).json()
m = Map(
location=((bounds[1] + bounds[3]) / 2,(bounds[0] + bounds[2]) / 2),
zoom_start=12
)
tiles = TileLayer(
tiles=r["tiles"][0],
min_zoom=r["minzoom"],
max_zoom=r["maxzoom"],
opacity=1,
attr="ESA"
)
tiles.add_to(m)
m
```
### Merge Assets
```
# Get Tile URL
r = httpx.get(
f"{titiler_endpoint}/stac/tilejson.json",
params = (
("url", stac_item),
("assets", "B04"),
("assets", "B03"),
("assets", "B02"),
("minzoom", 8),
("maxzoom", 14),
("rescale", "0,10000"),
)
).json()
m = Map(
location=((bounds[1] + bounds[3]) / 2,(bounds[0] + bounds[2]) / 2),
zoom_start=12
)
tiles = TileLayer(
tiles=r["tiles"][0],
min_zoom=r["minzoom"],
max_zoom=r["maxzoom"],
opacity=1,
attr="ESA"
)
tiles.add_to(m)
m
```
### Apply Expression between assets
```
# Get Tile URL
r = httpx.get(
f"{titiler_endpoint}/stac/tilejson.json",
params = (
("url", stac_item),
("expression", "(B08-B04)/(B08+B04)"), # NDVI
("rescale", "-1,1"),
("minzoom", 8),
("maxzoom", 14),
("colormap_name", "viridis"),
)
).json()
m = Map(
location=((bounds[1] + bounds[3]) / 2,(bounds[0] + bounds[2]) / 2),
zoom_start=12
)
tiles = TileLayer(
tiles=r["tiles"][0],
min_zoom=r["minzoom"],
max_zoom=r["maxzoom"],
opacity=1,
attr="ESA"
)
tiles.add_to(m)
m
```
```
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(context='talk', style='ticks', color_codes=True, font_scale=0.8)
import numpy as np
import pandas as pd
import scipy
from tqdm import tqdm
%matplotlib inline
```
This notebook generates some of the precursor files for the fragment decomposition figures.
```
ysi = pd.read_csv('ysi.csv').set_index('SMILES')
fragments = pd.read_csv('fragments.csv', index_col=0)
frag_summary = pd.read_csv('data/frag_summary.csv', index_col=0)
frag_regression = pd.read_pickle('data/fragment_regression.p')
fragments.shape
frag_summary['mol_count'] = (fragments > 0).sum(0)
frag_summary = frag_summary[frag_summary.mol_count > 4]
ox = frag_summary[frag_summary.oxygenate]
fig = plt.figure(figsize=(5,3))
ax = fig.add_subplot(111)
sns.boxplot(data=frag_regression[ox.index], fliersize=0, color='.8', linewidth=1, width=0.8)
sns.despine(offset=10, trim=True)
for i, label in enumerate(ax.get_xticklabels()):
label.set_rotation(90)
ax.set_ylabel('YSI Contribution')
for i, (smart, row) in enumerate(ox.iterrows()):
ax.text(i, -8, row.mol_count, ha='center', va='center')
# fig.savefig('oxygenates.svg')
aliphatic = frag_summary[(~frag_summary.oxygenate) & (~frag_summary.aromatic)]
fig = plt.figure(figsize=(5,3))
ax = fig.add_subplot(111)
sns.boxplot(data=frag_regression[aliphatic.index], fliersize=0, color='.8', linewidth=1, width=0.8)
sns.despine(offset=10, trim=True)
for i, label in enumerate(ax.get_xticklabels()):
label.set_rotation(90)
ax.set_ylabel('YSI Contribution')
for i, (smart, row) in enumerate(aliphatic.iterrows()):
ax.text(i, -8, row.mol_count, ha='center', va='center')
# fig.savefig('nonaromats.svg')
fragments
# svgs = [fd.draw_mol_svg(smarts, smiles=False, figsize=(80,80)) for smarts in aliphatic.index.str.replace(' \|.*$', '')]
# for i, svg in enumerate(svgs):
# with open('frag_svgs/alpif{:02d}.svg'.format(i), 'w') as f:
# f.write(svg)
aro = frag_summary[(frag_summary.aromatic)]
aro
fig = plt.figure(figsize=(5,3))
ax = fig.add_subplot(111)
sns.boxplot(data=frag_regression[aro.index], fliersize=0, color='.8', linewidth=1, width=0.8)
ax.set_ylabel('YSI Contribution')
for i, (smart, row) in enumerate(aro.iterrows()):
ax.text(i, -160, row.mol_count, ha='center', va='center')
sns.despine(offset=10, trim=True)
for i, label in enumerate(ax.get_xticklabels()):
label.set_rotation(90)
# fig.savefig('aromats.svg')
ysi[fragments[aro.index[3]] > 0]
# svgs = [fd.draw_mol_svg(smarts, smiles=False, figsize=(80,80)) for smarts in aro.index.str.replace(' \|.*$', '')]
# for i, svg in enumerate(svgs):
# with open('frag_svgs/aro{:02d}.svg'.format(i), 'w') as f:
# f.write(svg)
fig = plt.figure(figsize=(7,10))
ax1 = fig.add_subplot(311)
ax2 = fig.add_subplot(312)
ax3 = fig.add_subplot(313)
# Oxygenates
sns.boxplot(data=frag_regression[ox.index], fliersize=0, color='.8', linewidth=1,
width=0.8, ax=ax1)
ax1.set_xticks(range(len(ox)))
for i, (smart, row) in enumerate(ox.iterrows()):
ax1.text(i, -8, row.mol_count, ha='center', va='center')
# Aliphatics
sns.boxplot(data=frag_regression[aliphatic.index], fliersize=0, color='.8', linewidth=1,
width=0.8, ax=ax2)
for i, (smart, row) in enumerate(aliphatic.iterrows()):
ax2.text(i, 0, row.mol_count, ha='center', va='center')
# Aromatics
sns.boxplot(data=frag_regression[aro.index], fliersize=0,
color='.8', linewidth=1, width=0.8, ax=ax3)
for i, (smart, row) in enumerate(aro.iterrows()):
ax3.text(i, -150, row.mol_count, ha='center', va='center')
ax1.set_xlim(-0.5, 13.5)
ax2.set_xlim(-0.5, 13.5)
ax3.set_xlim(-0.5, 13.5)
ax1.set_ylabel('YSI Contribution')
ax2.set_ylabel('YSI Contribution')
ax3.set_ylabel('YSI Contribution')
#ax1.set_ylim([-10, 30])
#ax1.set_yticks(np.arange(-10, 40, 10))
sns.despine(offset=10, trim=True)
fig.tight_layout()
ax1.set_xticklabels(np.arange(len(ox)), {'style': 'italic'})
ax2.set_xticklabels(len(ox) + np.arange(len(aliphatic)), {'style': 'italic'})
ax3.set_xticklabels(len(ox) + len(aliphatic) + np.arange(len(aro)), {'style': 'italic'})
fig.savefig('figures/raw_combined_boxplots.svg', transparent=True)
from rdkit import Chem
from rdkit.Chem import rdDepictor
from rdkit.Chem.Draw import rdMolDraw2D
from IPython.display import SVG
def draw_all(smarts, name):
for i in range(len(smarts)):
mol = Chem.MolFromSmarts(smarts[i])
mc = Chem.Mol(mol.ToBinary())
rdDepictor.Compute2DCoords(mc)
drawer = rdMolDraw2D.MolDraw2DSVG(80, 80)
center = int(pd.Series({atom.GetIdx(): len(atom.GetNeighbors()) for atom in mol.GetAtoms()}).argmax())
to_highlight = [int(atom.GetIdx()) for atom in mol.GetAtoms() if atom.GetIsAromatic()]
color_dict = {i: (0.2, 0.8, 0.2) for i in to_highlight}
radius_dict = {i: 0.25 for i in to_highlight}
radius_dict[center] = 0.5
if center not in to_highlight:
to_highlight += [center]
color_dict[center] = (0.8,0.8,0.8)
drawer.DrawMolecule(mc, highlightAtoms=to_highlight,
highlightAtomColors=color_dict,
highlightAtomRadii=radius_dict,
highlightBonds=False)
# drawer.DrawString(str(i), Point2D(0, -0.75))
drawer.FinishDrawing()
svg = drawer.GetDrawingText()
svg = SVG(svg.replace('svg:', '').replace(':svg', ''))
with open('fragment_images/{}_{}.svg'.format(name, i), 'w') as f:
f.write(svg.data)
draw_all([mol_str for mol_str in ox.index.str.replace(' \|.*$', '')], 'ox')
draw_all([mol_str for mol_str in aliphatic.index.str.replace(' \|.*$', '')], 'ali')
draw_all([mol_str for mol_str in aro.index.str.replace(' \|.*$', '')], 'aro')
def get_greedy_mols():
bool_frags = fragments[frag_summary.index] > 0
while True:
chosen_smarts = bool_frags.sum(1).idxmax()
yield chosen_smarts
bool_frags = bool_frags.loc[:, ~bool_frags.loc[chosen_smarts]]
if bool_frags.shape[1] == 0:
break
list(get_greedy_mols())
```
# Basic Usage of the Uncertainty Characteristics Curve (UCC)
Needs `uncertainty_characteristics_curve.py` and `sample_predict.pkl` files to be in the same directory.
```
import numpy as np
import matplotlib.pyplot as plt
from uq360.metrics.uncertainty_characteristics_curve import UncertaintyCharacteristicsCurve as ucc
from copy import deepcopy
import pickle
# This is a file with sample timeseries data provided along with this notebook.
D = pickle.load(open('sample_predict.pkl','rb'))
def form_D(yhat, zhatl, zhatu):
# a handy routine to format data as needed by the UCC fit() method
D = np.zeros([yhat.shape[0],3])
D[:, 0] = yhat
D[:, 1] = zhatl
D[:, 2] = zhatu
return D
```
## First a super short example of the essential functionality
Exercised methods: `fit(), plot_UCC()`
```
%matplotlib inline
take_from = 0 # take samples from this time index
take_to = 500 # to this index
pred = D['pred'][take_from: take_to] # target predictions
actual = D['actual'][take_from: take_to] # ground truth observations
lb = D['lb'][take_from: take_to] # lower bound of the prediction interval
ub = D['ub'][take_from: take_to] # upper bound of the prediction interval
z = range(take_from, take_to)
bwl = pred - lb # lower band (deviate)
bwu = ub - pred # upper band (deviate)
# form matrix for ucc:
X = form_D(pred, bwl, bwu)
# create an instance of ucc and fit data
u = ucc()
u.fit(X, actual)
# plot the UCC, show the AUUCC, and the OP:
auucc, op_info = u.plot_UCC(titlestr=('Sample Data (time=%d:%d)' % (np.min(z), np.max(z))))
print("AUUCC is %.2f, OP = (%.2f, %.2f)" % (auucc, op_info[0]/op_info[2], op_info[1]))
```
## Select axes as desired
Possible axis types: {missrate, excess, deficit, bandwidth}, and individual axes can also be set to use {raw, stddev} units.
Exercised methods: `set_coordinates(), plot_UCC()`
```
# Switch axes:
u.set_coordinates(x_axis_name='excess', y_axis_name='deficit', normalize=True)
auucc, op_info = u.plot_UCC(titlestr=('SampleData (time=%d:%d)' % (np.min(z), np.max(z)) ))
print("AUUCC is %.2f, OP = (%.2f, %.2f)" % (auucc, op_info[0]/op_info[2], op_info[1]))
```
## Just get the Area Under Curve
If you don't need the plots:
Exercised methods: `get_AUUCC()`
```
# If you just need the AUC
auucc = u.get_AUUCC()
print("AUUCC is %.2f" % auucc)
```
### Partial AUC can also be calculated
Partial AUC helps compare models within a narrower operating _range_. A user can constrain the area by specifying an interval (from, to) on either the x-axis or the y-axis.
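The idea can be sketched with plain NumPy: restrict the curve to an x-interval and integrate with the trapezoidal rule. This is an illustration of the concept, not the uq360 implementation:

```python
import numpy as np

def partial_auc(x, y, x_from, x_to):
    """Trapezoidal area under the curve (x, y), restricted to [x_from, x_to].

    The y-values at the interval endpoints are obtained by linear
    interpolation, so the restricted curve starts and ends exactly at
    the bounds. `x` must be sorted in increasing order.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    inside = (x > x_from) & (x < x_to)
    xs = np.concatenate([[x_from], x[inside], [x_to]])
    ys = np.concatenate([[np.interp(x_from, x, y)], y[inside], [np.interp(x_to, x, y)]])
    # trapezoidal rule
    return float(np.sum(0.5 * (ys[1:] + ys[:-1]) * np.diff(xs)))

# toy curve y = x on [0, 1]: the area over x in [0, 0.5] is 0.5**2 / 2 = 0.125
grid = np.linspace(0.0, 1.0, 101)
print(partial_auc(grid, grid, 0.0, 0.5))
```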
```
# suppose your application focuses on a very low deficit (in other cases it might be a very low Miss Rate)
# get a partial area under the UCC curve for deficit <= 0.05
print("Low-Deficit AUUCC is %.3f" % u.get_AUUCC(partial_y=(0., 0.05)))
# similarly we can limit the AUC calculation by the x axis:
print("Low-Excess AUUCC is %.3f" % u.get_AUUCC(partial_x=(0., 0.1)))
```
Note that the interval values apply to the axes as displayed, including any normalization currently active.
```
# If you prefer to have coordinates with raw units uncomment:
# u.set_coordinates(normalize=False)
# If you prefer to set normalization for just one coordinate (but we don't recommend it):
# u.set_coordinates(x_axis_name='excess', normalize=False)
```
## Minimize a linear cost function
Is there a better Operating Point for my error bars?
Exercised methods: `minimize_cost()`
```
# optimize linear cost function
C = u.minimize_cost(x_axis_cost=1., y_axis_cost=10.)
print(C)
```
Note that by default the cost factors (1. and 10.) will be adjusted (divided) by the standard deviation used by the axes setting (as applicable) to achieve the desired proportion as visualised on the normed axes. An INFO message will be printed, as above. This default behavior can be disabled by setting `augment_cost_by_normfactor=False` in the `minimize_cost` call above.
Note that by default the UCC determines the minimum cost by searching over the two supported calibration operations: 'scale' and 'bias' (an additive shift of the prediction intervals). The return object will contain the information about which operation led to the optimum. This behavior can be adjusted via the `search` option, which takes a list with at least one of `scale` and `bias`. For example, limiting the minimization to the scaling calibration we can write:
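The underlying idea of minimizing a linear cost over the scale calibration can be sketched as a simple grid search. This is a hypothetical helper illustrating the concept, not the library's actual algorithm:

```python
import numpy as np

def cheapest_scale(pred, actual, bwl, bwu, c_band=1.0, c_miss=10.0,
                   scales=np.linspace(0.1, 5.0, 50)):
    """Grid-search the band scale minimizing
    c_band * mean bandwidth + c_miss * miss rate."""
    best_scale, best_cost = None, np.inf
    for s in scales:
        lb, ub = pred - s * bwl, pred + s * bwu
        miss_rate = np.mean((actual < lb) | (actual > ub))
        bandwidth = np.mean(ub - lb)
        cost = c_band * bandwidth + c_miss * miss_rate
        if cost < best_cost:
            best_scale, best_cost = s, cost
    return best_scale, best_cost

rng = np.random.default_rng(0)
actual = rng.normal(size=500)
pred = np.zeros(500)   # predict the mean
bw = np.ones(500)      # symmetric unit deviates
print(cheapest_scale(pred, actual, bw, bw))
```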
```
C = u.minimize_cost(x_axis_cost=1., y_axis_cost=10., search=['scale'])
# interpreting the above outcome: we can improve the original OP cost ('original_cost') to a new OP cost ('cost')
# by performing operation C['operation'] (either 'scale' or 'bias') with a value
# C['modvalue'] to the original bars:
X2 = deepcopy(X)
if C['operation'] == 'bias':
    X2[:, 1:] += C['modvalue']
else:
    X2[:, 1:] *= C['modvalue']
u.fit(X2, actual)
auucc, op_info = u.plot_UCC()
```
## The UCC class handles multiple inputs
If the user provides input data as a `list` corresponding to multiple systems of error bars, all of the UCC's methods will return a `list` of corresponding results.
```
# finally, if you want to plot multiple systems:
# as an example, create a second set of bars that are simply random
r = np.random.uniform(low=.1*np.mean(X[:,0]), high=2*np.mean(X[:,0]), size=[len(X[:,0],)])
Xrand = form_D(pred, r, r)
# and plot
u.fit([X2, Xrand], actual)
auuccl, op_infol = u.plot_UCC(syslabel=['sample-model', 'random'], titlestr=('Sample Data (time=%d:%d)' % (np.min(z), np.max(z)) ))
# note that results are now lists
for auucc, op_info in zip(auuccl, op_infol):
    print("AUUCC is %.2f, OP = (%.2f, %.2f)" % (auucc, op_info[0]/op_info[2], op_info[1]/op_info[3]))
```
## Use a non-default additive bias as the UCC varying parameter
By default, the UCC is produced by varying a multiplicative scale applied to the error bars. The UCC class also supports varying additive bias. Switching to bias will lead to different UCC curves (although for most well-behaved problems, additive bias and multiplicative scales tend to produce comparable shapes and rankings of systems).
Use option `vary_bias=True` to use the additive bias. Default is `False` (multiplicative scale)
```
# it is easy to plot the same UCC using additive bias:
# btw, no need to call fit() again, as the bias-related values were precomputed in the previous fit() calls
# u.fit(X, actual)
u.set_coordinates(normalize=True) # this is the default, just making sure
auucc, op_info = u.plot_UCC(syslabel=['sample-model', 'random'], titlestr=('Sample Data (time=%d:%d)' % (np.min(z), np.max(z)) ),vary_bias=True)
for auucc, op_info in zip(auucc, op_info):
    print("AUUCC is %.2f, OP = (%.2f, %.2f)" % (auucc, op_info[0]/op_info[2], op_info[1]/op_info[3]))
# and one can change the normalization behavior of the axes (sticky)
u.set_coordinates(normalize=False)
auucc, op_info = u.plot_UCC(syslabel=['sample-model', 'random'], titlestr=('Sample Data (time=%d:%d)' % (np.min(z), np.max(z)) ),vary_bias=True)
for auucc, op_info in zip(auucc, op_info):
    print("AUUCC is %.2f, OP = (%.2f, %.2f)" % (auucc, op_info[0]/op_info[2], op_info[1]/op_info[3]))
```
## Get a specific operating point on an existing curve
Given a value on one axis, determine the corresponding value on the other (y given x, or x given y).
Exercised methods: `get_specific_operating_point()`
```
# one can get the y value for any x value, and vice versa (on a bias-varying plot)
print(u.get_specific_operating_point(req_x_axis_value=0.3, vary_bias=True))
```
So, the value corresponding to our (given) x=0.3 is `new_y` (0.009..). We also get, for free, a recipe for reaching this OP from the original OP used in the `fit()` call: add (`bias`) `modvalue` (0.233..) to your bands and you will get there (on a bias-based plot).
Similar logic applies to the other curves listed in the resulting array...
```
# similarly for scale, if we work with scale-based plots:
print(u.get_specific_operating_point(req_x_axis_value=3.0, vary_bias=False))
```
-> in this case we have to multiply (`scale`) our bands by 3.4531.. (`modvalue`)
# More elaborate example of analysis
```
# reload
pred = D['pred']
actual = D['actual']
lb = D['lb']
ub = D['ub']
bwl = pred - lb # bandwidth lower
bwu = ub - pred # bandwidth upper
```
## Get the UCC on a sample subsequence and assess
### This is what the original uncertainty band looks like
```
plt.figure()
z = range(510,830) # take a small time window
plot_lb = pred - bwl
plot_ub = pred + bwu  # use the upper deviate for the upper bound
p = plt.plot(z, pred[z], label='pred')
plt.plot(z, actual[z], label='actual')
plt.fill_between(z, plot_lb[z], plot_ub[z] ,alpha=0.3, facecolor=p[0].get_color())
plt.xlabel('Hour')
plt.ylabel('Magnitude')
plt.title('Sample Data (time=%d:%d)' % (np.min(z), np.max(z)) )
plt.legend()
plt.grid()
```
### and its corresponding UCC:
```
Cwband = 1 # Cost of one unit of bandwidth
Cmiss = 10 # Cost of one miss
x = pred[z]
t = actual[z] # Ground truth
X = form_D(x, bwl[z], bwu[z])
u = ucc()
u.fit(X, t)
u.set_coordinates(x_axis_name = 'bandwidth', y_axis_name = 'missrate', normalize=True)
auucc, op_info = u.plot_UCC(titlestr=('Sample Data (time=%d:%d)' % (np.min(z), np.max(z)) ))
print("INFO: AUUCC=%.4f, Wband=%.4f, MissRate=%.4f" % (auucc, op_info[0], op_info[1]))
print("INFO: Cost(%.4f*Wband+%.4f*Mrate) = %.4f" % (Cwband, Cmiss, Cwband*op_info[0]+Cmiss*op_info[1]))
C = u.minimize_cost(x_axis_cost=Cwband, y_axis_cost=Cmiss, search=['scale'])
print(C)
```
The system operates at a 35% Miss Rate; the uncertainty band may be unnecessarily narrow!
We could move the operating point to exploit the sharp bend in the curve and obtain a better OP.
The sharp bend looks like a good candidate for an optimum point; let's see whether that's true.
### Modified uncertainty band:
```
new_scale = C['modvalue']
if C['operation'] == 'bias':
    plot_lb = pred - (bwl + new_scale)
    plot_ub = pred + (bwu + new_scale)
else:
    plot_lb = pred - (bwl * new_scale)
    plot_ub = pred + (bwu * new_scale)
plt.figure()
p = plt.plot(z, pred[z], label='pred')
plt.plot(z, actual[z], label='actual')
plt.fill_between(z, plot_lb[z], plot_ub[z] ,alpha=0.3, facecolor=p[0].get_color())
# plt.plot(z, 0.5* abn[z]+1.5, label='abn')
plt.xlabel('Hour')
plt.ylabel('Magnitude')
plt.title('Sample Data (time=%d:%d)' % (np.min(z), np.max(z)) )
plt.legend()
plt.grid()
```
### and its corresponding UCC:
```
%matplotlib inline
from copy import deepcopy
x = pred[z]
t = actual[z]
Xnew = deepcopy(X)
Xnew[:,0] = x #
if C['operation'] == 'bias':
    Xnew[:, 1] = new_scale + bwl[z]  # lower band width
    Xnew[:, 2] = new_scale + bwu[z]  # upper band width
else:
    Xnew[:, 1] = new_scale * bwl[z]  # lower band width
    Xnew[:, 2] = new_scale * bwu[z]  # upper band width
u.fit(Xnew, t)
auucc, op_info = u.plot_UCC(titlestr=('Sample Data (time=%d:%d)' % (np.min(z), np.max(z)) ))
print("INFO: AUUCC=%.4f, Wband=%.4f, MissRate=%.4f" % (auucc, op_info[0], op_info[1]))
print("INFO: Cost(%.4f*Wband+%.4f*Mrate) = %.4f" % (Cwband, Cmiss, Cwband*op_info[0]+Cmiss*op_info[1]))
```
After widening the uncertainty band corresponding to the new operating point the Miss Rate is reduced from 35% to 10%, while still preserving the apparent true anomaly, as shown in the above time series plot.
| github_jupyter |
```
import pathlib
import tensorflow as tf
import tensorflow.keras.backend as K
import skimage
import imageio
import numpy as np
import matplotlib.pyplot as plt
# Makes it so any changes in pymedphys are automatically
# propagated into the notebook without needing a kernel reset.
from IPython.lib.deepreload import reload
%load_ext autoreload
%autoreload 2
from pymedphys._experimental.autosegmentation import unet
output_channels=3
model = unet.unet(grid_size=64, output_channels=output_channels)  # three output channels: eyes, brain, patient
# tf.keras.utils.plot_model(model, show_shapes=True)
model
structure_uids = [
path.name for path in pathlib.Path('data').glob('*')
]
structure_uids
split_num = len(structure_uids) - 2
training_uids = structure_uids[0:split_num]
testing_uids = structure_uids[split_num:]
training_uids
testing_uids
def get_image_paths_for_uids(uids):
    # keep only images whose parent directory is one of the given uids
    image_paths = [
        str(path) for path in pathlib.Path('data').glob('**/*_image.png')
        if path.parent.name in uids
    ]
    np.random.shuffle(image_paths)
    return image_paths

def mask_paths_from_image_paths(image_paths):
    mask_paths = [
        f"{image_path.split('_')[0]}_mask.png"
        for image_path in image_paths
    ]
    return mask_paths
training_image_paths = get_image_paths_for_uids(training_uids)
training_mask_paths = mask_paths_from_image_paths(training_image_paths)
testing_image_paths = get_image_paths_for_uids(testing_uids)
testing_mask_paths = mask_paths_from_image_paths(testing_image_paths)
# training_image_paths
# mask_paths
# mask_weights = np.array([
# 0.9864694074789978, 0.9251728496022601, 0.0883577429187421
# ])[None, None, :]
def _normalise_mask(png_mask):
    normalised_mask = np.round(png_mask / 255).astype(float)
    return normalised_mask
# def _remove_mask_weights(weighted_mask):
# return weighted_mask / mask_weights
png_mask = imageio.imread(testing_mask_paths[0])
normalised_mask = _normalise_mask(png_mask)
plt.imshow(png_mask)
plt.show()
plt.imshow(normalised_mask)
plt.colorbar()
normalised_mask.shape
np.max(normalised_mask[:,:,0])
np.max(normalised_mask[:,:,1])
np.max(normalised_mask[:,:,2])
def _normalise_image(png_image):
    normalised_image = png_image[:, :, None].astype(float) / 255
    return normalised_image
input_array = _normalise_image(imageio.imread(testing_image_paths[0]))
plt.imshow(input_array)
BATCH_SIZE = 128
SHUFFLE_BUFFER_SIZE = 200
def get_dataset(image_paths, mask_paths):
    input_arrays = []
    output_arrays = []
    for image_path, mask_path in zip(image_paths, mask_paths):
        input_arrays.append(_normalise_image(imageio.imread(image_path)))
        output_arrays.append(_normalise_mask(imageio.imread(mask_path)))
    dataset = tf.data.Dataset.from_tensor_slices((input_arrays, output_arrays))
    dataset = dataset.repeat().shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
    return dataset
training_dataset = get_dataset(training_image_paths, training_mask_paths)
testing_dataset = get_dataset(testing_image_paths, testing_mask_paths)
for image, mask in training_dataset.take(1):
    sample_image_raw, sample_mask_raw = image, mask
has_brain = np.sum(sample_mask_raw[:,:,:,1], axis=(1,2))
has_eyes = np.sum(sample_mask_raw[:,:,:,0], axis=(1,2))
max_brain_eyes_combo = np.argmax(has_brain * has_eyes)
sample_image = sample_image_raw[max_brain_eyes_combo,:,:,:]
sample_mask = sample_mask_raw[max_brain_eyes_combo,:,:,:]
np.sum(sample_mask_raw[:,:,:,0])
np.sum(sample_mask_raw[:,:,:,0]==0) / np.sum(sample_mask_raw[:,:,:,0])
# scharr_operators
# sch_mag = np.sqrt(sum([scharr(image, axis=i)**2
# for i in range(image.ndim)]) / image.ndim)
def _add_channels(kernel, output_channels, batch_size):
    kernel = np.concatenate([kernel[:, :, None]] * output_channels, axis=-1)
    # kernel = np.concatenate([kernel[None, :, :, :]] * batch_size, axis=0)
    return kernel
scharr_x = np.array([
[47, 0, -47],
[162, 0, -162],
[47, 0, -47]
]).astype(np.float32)
scharr_y = scharr_x.T
scharr_x = K.constant(scharr_x)
scharr_y = K.constant(scharr_y)
# scharr_x = _add_channels(scharr_x, output_channels, BATCH_SIZE)
# scharr_y = _add_channels(scharr_y, output_channels, BATCH_SIZE)
sample_mask_raw.shape
sample_mask_raw[0,:,:,2][None,:,:,None].shape
scharr_x[None,:,:,None].shape
# dir(K)
def _apply_sharr_filter(image):
    items = []
    for i in range(image.shape[-1]):
        x = tf.compat.v1.nn.convolution(image[:, :, :, i][:, :, :, None], scharr_x[:, :, None, None], padding="VALID")
        y = tf.compat.v1.nn.convolution(image[:, :, :, i][:, :, :, None], scharr_y[:, :, None, None], padding="VALID")
        items.append(K.sqrt(x**2 + y**2))
    return K.concatenate(items, axis=-1)
image = K.constant(tf.cast(sample_mask_raw, tf.float32))
filtered = _apply_sharr_filter(image)
# K.conv1d?
# def _apply_kernel(image, kernel):
# return K.conv2d(image[0:1,:,:,2:], kernel[:,:,None], padding="same", data_format='channels_last', dilation_rate=1, strides=1)
# x_dir = _apply_kernel(sample_mask_raw, scharr_x)
# y_dir = _apply_kernel(sample_mask_raw, scharr_y)
# magnitude = K.sqrt(x_dir**2 + y_dir**2)
plt.imshow(filtered[0,:,:,2])
edge_reference = skimage.filters.scharr(sample_mask_raw[0,:,:,2])
plt.imshow(edge_reference)
def skimage_scharr_loss(reference, evaluation):
    edge_reference = skimage.filters.scharr(reference)
    edge_evaluation = skimage.filters.scharr(evaluation)
    score = np.sum(np.abs(edge_evaluation - edge_reference)) / np.sum(
        edge_evaluation + edge_reference
    )
    return score
custom_weights = [0.98, 0.92, 0.08]
def scharr_loss(reference, evaluation):
    edge_reference = _apply_sharr_filter(reference)
    edge_evaluation = _apply_sharr_filter(evaluation)
    score = 0
    for i in range(edge_evaluation.shape[-1]):
        score += custom_weights[i] * K.sum(K.abs(edge_evaluation[:, :, :, i] - edge_reference[:, :, :, i]))
    return score
def jaccard_distance_loss(y_true, y_pred, smooth=100):
    """
    Jaccard = (|X & Y|) / (|X| + |Y| - |X & Y|)
            = sum(|A*B|) / (sum(|A|) + sum(|B|) - sum(|A*B|))
    The Jaccard distance loss is useful for unbalanced datasets. It has been
    shifted so it converges on 0 and is smoothed to avoid exploding or vanishing
    gradients.
    Ref: https://en.wikipedia.org/wiki/Jaccard_index
    @url: https://gist.github.com/wassname/f1452b748efcbeb4cb9b1d059dce6f96
    @author: wassname
    """
    intersection = K.sum(K.abs(y_true * y_pred), axis=-1)
    sum_ = K.sum(K.abs(y_true) + K.abs(y_pred), axis=-1)
    jac = (intersection + smooth) / (sum_ - intersection + smooth)
    return (1 - jac) * smooth
cross_entropy_weights = []
for i in range(3):
    num_of_ones = np.sum(sample_mask_raw[:, :, :, i])
    num_of_zeros = np.sum(sample_mask_raw[:, :, :, i] == 0)
    total = num_of_ones + num_of_zeros
    # inverse-frequency weights: the rarer class gets the larger weight
    one_weight = num_of_zeros / total
    zero_weight = num_of_ones / total
    cross_entropy_weights.append([one_weight, zero_weight])
def weighted_cross_entropy(y_true, y_pred):
    loss = 0
    for i in range(y_pred.shape[-1]):
        one_weight, zero_weight = cross_entropy_weights[i]
        b_ce = K.binary_crossentropy(y_true[:, :, :, i], y_pred[:, :, :, i])
        weight_vector = y_true[:, :, :, i] * one_weight + (1 - y_true[:, :, :, i]) * zero_weight
        loss += K.mean(weight_vector * b_ce)
    return loss
scharr_loss(image, image)
# sample_mask
# total_class_weight_normalisation = 1/4 * (
# number_of_not_background + number_of_not_patient + number_of_not_brain + number_of_not_eyes)
# class_weights = [
# number_of_not_background / total_class_weight_normalisation,
# number_of_not_patient / total_class_weight_normalisation,
# number_of_not_brain / total_class_weight_normalisation,
# number_of_not_eyes / total_class_weight_normalisation
# ]
# class_weights
import tensorflow.keras.backend as K
model.compile(
optimizer='adam',
loss=weighted_cross_entropy,
metrics=['accuracy']
)
def display(display_list):
    plt.figure(figsize=(18, 5))
    title = ['Input Image', 'True Mask', 'Predicted Mask']
    for i in range(len(display_list)):
        plt.subplot(1, len(display_list), i+1)
        plt.title(title[i])
        plt.imshow(display_list[i])
        plt.colorbar()
        plt.axis('off')
    plt.show()
display([sample_image, sample_mask])
def show_predictions(dataset=None, num=1):
    if dataset:
        for image, mask in dataset.take(num):
            pred_mask = model.predict(image)
            display([image[0], mask[0], pred_mask[0]])
    else:
        display(
            [
                sample_image, sample_mask,
                model.predict(sample_image[tf.newaxis, ...])[0]
            ]
        )
show_predictions()
class DisplayCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        show_predictions()
        print('\nSample Prediction after epoch {}\n'.format(epoch + 1))
model_history = model.fit(
training_dataset, epochs=1000,
steps_per_epoch=10,
callbacks=[DisplayCallback()],
)
```
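As a sanity check of the `jaccard_distance_loss` defined above, the same formula can be mirrored in plain NumPy; a perfect prediction gives a loss of exactly 0, while a fully disjoint one gives a positive loss:

```python
import numpy as np

def jaccard_distance_loss_np(y_true, y_pred, smooth=100):
    # NumPy mirror of the Keras loss defined earlier
    intersection = np.sum(np.abs(y_true * y_pred), axis=-1)
    sum_ = np.sum(np.abs(y_true) + np.abs(y_pred), axis=-1)
    jac = (intersection + smooth) / (sum_ - intersection + smooth)
    return (1 - jac) * smooth

y = np.array([[1.0, 0.0, 1.0]])
print(jaccard_distance_loss_np(y, y))      # perfect overlap -> 0
print(jaccard_distance_loss_np(y, 1 - y))  # disjoint masks -> positive loss
```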
| github_jupyter |
```
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss, roc_auc_score, accuracy_score
from xgboost import XGBClassifier
from cinnamon.drift import ModelDriftExplainer, AdversarialDriftExplainer
# pandas config
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_rows', 500)
seed = 2021
```
# IEEE fraud data
Download data with kaggle CLI if it is setup on your computer:
```
#!kaggle competitions download -c ieee-fraud-detection
```
Otherwise, you can download the data here: https://www.kaggle.com/c/ieee-fraud-detection/data (you will have to accept the competition rules).
```
df = pd.read_csv('data/train_transaction.csv')
print(df.shape)
```
# Preprocessing
```
# count missing values per column
missing_values = df.isnull().sum(axis=0)
missing_values
# keep only columns with fewer than 10000 missing values
selected_columns = [col for col in df.columns if missing_values[col] < 10000]
# in the resulting columns, drop rows with any missing value
df = df[selected_columns].dropna(axis=0, how='any')
# for the variable 'card6', keep only rows corresponding to the 'debit' and 'credit' categories
df = df.loc[df['card6'].isin(['debit', 'credit']), :].copy()
df.head()
```
# Sampling
We replicate a typical production situation where we would have:
- training data
- validation data
- production data
Also, we introduce some data drift on the variable `card6` by keeping only credit card transactions in the training-validation data. In a real application, this would correspond to a case where we are unable to identify fraud (the target label) for debit card transactions.
This data drift corresponds to a case of censoring, which in general leads to concept drift.
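Drift on a single feature can also be quantified directly with a simple two-sample statistic. Here is a hand-rolled Kolmogorov-Smirnov distance in plain NumPy (an illustration only, not part of the cinnamon API):

```python
import numpy as np

def ks_statistic(a, b):
    """Max absolute difference between the two empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side='right') / len(a)
    cdf_b = np.searchsorted(b, grid, side='right') / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(0)
same = ks_statistic(rng.normal(size=1000), rng.normal(size=1000))
shifted = ks_statistic(rng.normal(size=1000), rng.normal(1.0, 1.0, 1000))
print(same, shifted)  # the shifted sample shows a much larger statistic
```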
```
# select features by keeping only numerical features
features = [col for col in df.columns if col not in ['TransactionID', 'isFraud', 'TransactionDT',
'ProductCD', 'card4', 'card6']]
# we do a time split (shuffle=False) to separate df_temp (training-validation data)
# from df_prod (production data)
df_temp, df_prod = train_test_split(df.copy(), test_size=0.25, shuffle=False)
# the majority of transactions are made with debit cards
df_temp['card6'].value_counts()
# drop all debit card transactions in the training-validation data
# we do a time split (shuffle=False) to separate training data from validation data
X_train, X_valid, y_train, y_valid = train_test_split(df_temp.loc[df_temp['card6'].values == 'credit', features].copy(),
df_temp.loc[df_temp['card6'].values == 'credit', 'isFraud'].values,
test_size=1/3,
shuffle=False,
random_state=seed)
X_prod, y_prod = df_prod[features], df_prod['isFraud'].values
```
# Build model
```
clf = XGBClassifier(n_estimators=1000,
booster="gbtree",
objective="binary:logistic",
learning_rate=0.1,
max_depth=6,
use_label_encoder=False,
seed=seed)
clf.fit(X=X_train, y=y_train, eval_set=[(X_valid, y_valid)], early_stopping_rounds=20,
verbose=10, eval_metric=['auc', 'logloss'])
```
# Detection of data drift
We do detect data drift in this case. Our three indicators:
- distribution of predictions
- distribution of target labels
- performance metrics
all show the drift.
```
drift_explainer = ModelDriftExplainer(clf)
drift_explainer.fit(X1=X_valid, X2=X_prod, y1=y_valid, y2=y_prod)
drift_explainer.plot_prediction_drift(figsize=(7, 5), bins=100)
drift_explainer.get_prediction_drift()
drift_explainer.plot_target_drift()
drift_explainer.get_target_drift()
print(f'log_loss valid: {log_loss(y_valid, clf.predict_proba(X_valid))}')
print(f'log_loss prod: {log_loss(y_prod, clf.predict_proba(X_prod))}')
print(f'AUC valid: {roc_auc_score(y_valid, clf.predict_proba(X_valid)[:, 1])}')
print(f'AUC prod: {roc_auc_score(y_prod, clf.predict_proba(X_prod)[:, 1])}')
```
# Explain data drift
```
# plot drift values to identify the features with the highest impact on data drift
drift_explainer.plot_tree_based_drift_values(type='node_size')
# feature with the highest drift value: 'D1'
drift_explainer.plot_feature_drift('D1', bins=100)
drift_explainer.get_feature_drift('D1')
drift_explainer.plot_feature_drift('C13', bins=100)
drift_explainer.get_feature_drift('C13')
drift_explainer.plot_feature_drift('C2', bins=100)
drift_explainer.get_feature_drift('C2')
drift_explainer.plot_feature_drift('TransactionAmt', bins=100)
drift_explainer.get_feature_drift('TransactionAmt')
# feature importance of the model
pd.DataFrame(clf.feature_importances_, X_train.columns).sort_values(0, ascending=False)
```
# Correction of data drift
## Correction on validation dataset
We apply our methodology, which uses adversarial learning, to correct the data drift between the validation and production data.
We then check our three data drift indicators to see if we get an improvement.
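The core idea of the adversarial correction can be sketched as follows: given an adversarial classifier's probability that each validation sample "looks like production", reweight that sample by the estimated density ratio p/(1-p), clipped by `max_ratio`. This is a schematic of the idea; cinnamon's actual implementation may differ:

```python
import numpy as np

def adversarial_weights(p_prod, max_ratio=10.0):
    """Density-ratio weights from adversarial probabilities.

    p_prod[i] is the classifier's probability that validation sample i
    comes from the production distribution.
    """
    p = np.clip(p_prod, 1e-6, 1 - 1e-6)
    w = p / (1 - p)                       # density ratio estimate
    w = np.clip(w, 1.0 / max_ratio, max_ratio)
    return w * len(w) / w.sum()           # normalize to mean weight 1

p = np.array([0.1, 0.5, 0.9, 0.99])
print(adversarial_weights(p))
```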
```
# weights computed with the adversarial method
# feature_subset=['D1', 'C13', 'C2', 'TransactionAmt']: only the top four features in terms of
# drift values are selected
sample_weights_valid_adversarial = (AdversarialDriftExplainer(feature_subset=['D1', 'C13', 'C2', 'TransactionAmt'],
seed=2021)
.fit(X_valid, X_prod)
.get_adversarial_correction_weights(max_ratio=10))
drift_explainer2 = ModelDriftExplainer(clf).fit(X1=X_valid, X2=X_prod, y1=y_valid, y2=y_prod,
sample_weights1=sample_weights_valid_adversarial)
# the drift on the distribution of predictions is lowered thanks to our technique
drift_explainer2.plot_prediction_drift(bins=100)
drift_explainer2.get_prediction_drift()
# the target distribution is re-balanced in the right direction
drift_explainer2.plot_target_drift()
drift_explainer2.get_target_drift()
# valid loss is closer to prod loss, but there is still a difference
print(f'log_loss valid: {log_loss(y_valid, clf.predict_proba(X_valid), sample_weight=sample_weights_valid_adversarial)}')
print(f'log_loss prod: {log_loss(y_prod, clf.predict_proba(X_prod))}')
print(f'AUC valid: {roc_auc_score(y_valid, clf.predict_proba(X_valid)[:, 1], sample_weight=sample_weights_valid_adversarial)}')
print(f'AUC prod: {roc_auc_score(y_prod, clf.predict_proba(X_prod)[:, 1])}')
```
## Correction on validation dataset and train dataset (in order to retrain the model)
We apply the same adversarial strategy to the training data.
With the model retrained on the re-weighted samples, we observe no obvious improvement in model performance on the production data. This needs to be further investigated.
```
# weights computed with the adversarial method on training data
sample_weights_train_adversarial = (AdversarialDriftExplainer(feature_subset=['D1', 'C13', 'C2', 'TransactionAmt'],
seed=2021)
.fit(X_train, X_prod)
.get_adversarial_correction_weights(max_ratio=10))
clf2 = XGBClassifier(n_estimators=1000,
booster="gbtree",
objective="binary:logistic",
learning_rate=0.1,
max_depth=6,
use_label_encoder=False,
seed=seed)
# train a new classifier with the reweighted samples
# we use a power factor 0.3 on sample_weights_train_adversarial weights to smooth them
clf2.fit(X=X_train, y=y_train, eval_set=[(X_valid, y_valid)], sample_weight=sample_weights_train_adversarial**0.3,
early_stopping_rounds=20, verbose=10, eval_metric=['auc', 'logloss'],
sample_weight_eval_set=[sample_weights_valid_adversarial])
# with the reweighting, we see a small improvement in performance on production data, but is it significant?
print(f'log_loss valid: {log_loss(y_valid, clf2.predict_proba(X_valid), sample_weight=sample_weights_valid_adversarial)}')
print(f'log_loss prod: {log_loss(y_prod, clf2.predict_proba(X_prod))}')
print(f'AUC valid: {roc_auc_score(y_valid, clf2.predict_proba(X_valid)[:, 1], sample_weight=sample_weights_valid_adversarial)}')
print(f'AUC prod: {roc_auc_score(y_prod, clf2.predict_proba(X_prod)[:, 1])}')
```
| github_jupyter |
<a href="https://qworld.net" target="_blank" align="left"><img src="../qworld/images/header.jpg" align="left"></a>
$$
\newcommand{\set}[1]{\left\{#1\right\}}
\newcommand{\abs}[1]{\left\lvert#1\right\rvert}
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
\newcommand{\inner}[2]{\left\langle#1,#2\right\rangle}
\newcommand{\bra}[1]{\left\langle#1\right|}
\newcommand{\ket}[1]{\left|#1\right\rangle}
\newcommand{\braket}[2]{\left\langle#1|#2\right\rangle}
\newcommand{\ketbra}[2]{\left|#1\right\rangle\left\langle#2\right|}
\newcommand{\angleset}[1]{\left\langle#1\right\rangle}
\newcommand{\expected}[1]{\left\langle#1\right\rangle}
\newcommand{\dv}[2]{\frac{d#1}{d#2}}
\newcommand{\real}[0]{\mathfrak{Re}}
$$
# Multi-Qubit Gates
_prepared by Israel Gelover_
So far we have only characterized single-qubit gates; now let's look at gates acting on more than one qubit. The most important two-qubit gate is the _Controlled-Not_ or **C-Not** gate.
This gate takes two qubits as input and, like any other quantum logic gate, has as many output qubits as input qubits. It behaves as follows.
### <a name="definition_5_9">Definition 5.9</a> C-Not Gate
\begin{equation*}
\text{C-Not}:\mathbb{B}^{\otimes 2} \to \mathbb{B}^{\otimes 2} \\
\end{equation*}
When applying this gate to two qubits, the first is left unchanged and the second is conditionally negated depending on the value of the first; that is, it changes only if the first qubit is $\ket{1}$. We can see this as a modulo-$2$ sum as follows
\begin{equation*}
\text{C-Not}(\ket{x}\otimes\ket{y}) = \ket{x}\otimes\ket{x \oplus y}
\end{equation*}
We can also see how it acts on the elements of the computational basis
\begin{equation*}
\text{C-Not}\begin{cases}
\ket{00} \to \ket{00} \\
\ket{01} \to \ket{01} \\
\ket{10} \to \ket{11} \\
\ket{11} \to \ket{10}
\end{cases}
\end{equation*}
Given this action on the elements of the computational basis, we can write the gate in matrix form, similarly to how it is described in this <a href="../2-math/SpectralTheory.ipynb#examples">example</a>, understanding that in the ket $\ket{k}$, $k$ is the binary representation of the row or column number. In this way
\begin{equation*}
\text{C-Not} = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{pmatrix}
\end{equation*}
And finally, this is the graphical representation in the quantum circuit model
<img src="../img/c-not.jpg" width="150px" />
where $\ket{q_0}$ is the control qubit and $\ket{q_1}$ is the target qubit.
We can think of the C-Not gate as the quantum-computing precursor of the conditional _if_ of classical computing. Recall that _if_ lets us perform an action depending on a Boolean condition; later we will see that there exist quantum logic gates that apply an arbitrary gate $U$ conditionally. Analogously to the C-Not gate, these conditional gates are denoted _C-U_.
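The matrix above is easy to verify numerically. A quick NumPy check (a sanity check added here, not part of the original notes) applies C-Not to each computational basis vector, with indices in the binary order $\ket{00}, \ket{01}, \ket{10}, \ket{11}$:

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def basis(k, n=4):
    """Computational basis vector |k> of dimension n."""
    e = np.zeros(n)
    e[k] = 1
    return e

# |10> (index 2) maps to |11> (index 3), and vice versa
assert np.array_equal(CNOT @ basis(2), basis(3))
assert np.array_equal(CNOT @ basis(3), basis(2))
# |00> and |01> are unchanged
assert np.array_equal(CNOT @ basis(0), basis(0))
assert np.array_equal(CNOT @ basis(1), basis(1))
print("C-Not acts as expected on the computational basis")
```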
### <a name="definition_5_10">Definition 5.10</a> Swap Gate
The Swap gate is also a two-qubit gate and, as the name implies, it swaps the two qubits.
\begin{equation*}
\begin{split}
\mbox{Swap}:\mathbb{B}^{\otimes 2} \to \mathbb{B}^{\otimes 2} \\
\mbox{Swap}(\ket{x}\otimes\ket{y}) = \ket{y}\otimes\ket{x}
\end{split}
\end{equation*}
And its graphical representation is
<img src="../img/swap.jpg" width="150px" />
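A well-known identity, easy to check numerically: the Swap gate can be built from three C-Not gates, the middle one with control and target reversed. A NumPy sketch (added here as an illustration):

```python
import numpy as np

# C-Not with q0 as control (swaps |10> and |11>)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
# C-Not with q1 as control (swaps |01> and |11>)
CNOT_REV = np.array([[1, 0, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0],
                     [0, 1, 0, 0]])
# Swap exchanges |01> and |10>
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

assert np.array_equal(CNOT @ CNOT_REV @ CNOT, SWAP)
print("Swap = C-Not · reversed C-Not · C-Not")
```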
### <a name="definition_5_11">Definition 5.11</a> Toffoli Gate
The Toffoli gate is a three-qubit gate and, like the C-Not gate, it is a conditional gate. The difference is that instead of having the first qubit as control, it has the first two qubits as control and the third as target; for this reason it is also known as the C$^2$-Not gate, emphasizing that its action is controlled by the first two qubits. Later we will see that this gate can be generalized to a gate controlled by $n$ qubits, which we will refer to as C$^n$-Not in the particular case of a controlled Not gate and as C$^n$-U in the more general case of an arbitrary controlled $U$ gate.
\begin{equation*}
\mbox{Toffoli}:\mathbb{B}^{\otimes 3} \to \mathbb{B}^{\otimes 3}
\end{equation*}
As we already mentioned, when applying this gate to three qubits, it leaves the first two qubits unchanged and conditionally negates the third one depending on the value of the first two.
It has the following behavior on the elements of the computational basis
\begin{equation*}
\mbox{Toffoli} \begin{cases}
\ket{000} \to \ket{000} \\
\ket{001} \to \ket{001} \\
\ket{010} \to \ket{010} \\
\ket{011} \to \ket{011} \\
\ket{100} \to \ket{100} \\
\ket{101} \to \ket{101} \\
\ket{110} \to \ket{111} \\
\ket{111} \to \ket{110}
\end{cases}
\end{equation*}
From where we can deduce its matrix representation
\begin{equation*}
\mbox{C}^2\mbox{-Not} =
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0
\end{pmatrix}
\end{equation*}
And this is its graphical representation in the quantum circuit model
<img src="../img/toffoli.jpg" width="150px" />
### <a name="remark_5_12">Remark 5.12</a> Generalizations
The gates we have just seen can be generalized into many variants. We show some examples, since the graphical representation is often the most intuitive way to understand what these gates do.
- The inverted C-Not gate, that is, the C-Not gate where the second qubit is the control and the first qubit the target.
<img src="../img/c-not_1.jpg" width="150px" />
- The C-Not gate whose control and target qubits are not contiguous in the circuit
<img src="../img/c-not_2.jpg" width="150px" />
- The C$^2$-Not gate whose control qubits are not contiguous in the circuit and the target qubit lies between the control qubits
<img src="../img/c-not_3.jpg" width="150px" />
- The C$_0$-Not gate has a similar effect to the C-Not gate, with the difference that the target qubit is modified only when the first qubit is $\ket{0}$.
<img src="../img/c-not_4.jpg" width="150px" />
- The inverted C$_0$-Not gate, that is, the C$_0$-Not gate where the second qubit is the control and the first qubit is the target.
<img src="../img/c-not_5.jpg" width="150px" />
- The C$_0$C-Not gate, in this case the target qubit is modified only in the case that the first qubit is $\ket{0}$ and the second is $\ket{1}$.
<img src="../img/c-not_6.jpg" width="150px" />
- The C$^n$-Not gate is the generalization of the C-Not gate with $n$ control qubits and one target qubit.
<img src="../img/c-not_7.jpg" width="300px" />
### <a name="definition_5_13">Definition 5.13</a> C-U Gate
This gate generalizes the C-Not gate, not in the number of control qubits but in the gate being controlled: here an arbitrary unitary gate $U$ is controlled.
As expected, when applied to two qubits it leaves the first one unchanged and conditionally applies the $U$ gate to the second one depending on the value of the first, that is, it applies $U$ only when the first qubit is $\ket{1}$.
If $U:\mathbb{B} \to \mathbb{B}$ is a single-qubit gate, the effect of the C-U gate on the elements of the computational basis is
\begin{equation*}
\mbox{C-U}\begin{cases}
\ket{00} \to \ket{00} \\
\ket{01} \to \ket{01} \\
\ket{10} \to \ket{1}\otimes U\ket{0} \\
\ket{11} \to \ket{1}\otimes U\ket{1}
\end{cases}
\end{equation*}
And its graphical representation is
<img src="../img/c-u.jpg" width="150px" />
Naturally, this gate also admits variants and generalizations. We will not analyze them in detail, but mention as examples:
C$_0$-U, C$_0$C-U, C$^n$-U, etc.
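In matrix form, C-U is simply the block-diagonal matrix with the identity in the upper-left block and $U$ in the lower-right block. A short NumPy sketch, using the Hadamard gate as an example $U$:

```python
import numpy as np

def controlled(U):
    """4x4 block-diagonal matrix of the C-U gate for a 2x2 unitary U."""
    cu = np.eye(4, dtype=complex)
    cu[2:, 2:] = U
    return cu

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard as the example U
CH = controlled(H)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Control |0>: the target is untouched.  Control |1>: H is applied to it.
assert np.allclose(CH @ np.kron(ket0, ket1), np.kron(ket0, ket1))
assert np.allclose(CH @ np.kron(ket1, ket0), np.kron(ket1, H @ ket0))
```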
### <a name="proposition_5_14">Proposition 5.14</a>
The C-U gate can be implemented with C-Not gates, $H$, and $R_z$.
**Proof:**
Let $U$ be a single-qubit gate. By <a href="./SingleQubit.ipynb#remark_5_8">Remark 5.8</a>, we can express $U$ as
\begin{equation*}
U = e^{ia}R_z(b)R_y(c)R_z(d) \enspace \mbox{ with } a,b,c,d \in \mathbb{R}
\end{equation*}
Considering the following gates
\begin{equation*}
\begin{split}
A &= R_z\left(\frac{d-b}{2}\right) \\
B &= R_y\left(-\frac{c}{2}\right)R_z\left(-\frac{d+b}{2}\right) \\
C &= R_z(b)R_y\left(\frac{c}{2}\right) \\
\Delta &= \begin{pmatrix} 1 & 0 \\ 0 & e^{ia} \end{pmatrix}
\end{split}
\end{equation*}
We can express the C-U gate with the following circuit
<img src="../img/c-u_1.jpg" width="520px" />
Note that the gates $A, B, C$ and $\Delta$ are essentially $H$ gates and $R_z$ rotations, again by <a href="./SingleQubit.ipynb#remark_5_8">Remark 5.8</a>.
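The decomposition can be checked numerically. The sketch below assumes the standard conventions $R_z(\theta) = \mathrm{diag}(e^{-i\theta/2}, e^{i\theta/2})$ and $R_y(\theta)$ the real rotation by $\theta/2$, and verifies both that $CBA = I$ (nothing happens when the control is $\ket{0}$) and that $e^{ia}\,C\,X\,B\,X\,A = U$ (the effect when the control is $\ket{1}$, the two C-Nots contributing the $X$ conjugation):

```python
import numpy as np

def Rz(t):
    return np.array([[np.exp(-1j * t / 2), 0], [0, np.exp(1j * t / 2)]])

def Ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

X = np.array([[0, 1], [1, 0]])

rng = np.random.default_rng(0)
a, b, c, d = rng.uniform(0, 2 * np.pi, size=4)
U = np.exp(1j * a) * Rz(b) @ Ry(c) @ Rz(d)

A = Rz((d - b) / 2)
B = Ry(-c / 2) @ Rz(-(d + b) / 2)
C = Rz(b) @ Ry(c / 2)

# Control = |0>: only A, B, C act on the target, and they cancel.
assert np.allclose(C @ B @ A, np.eye(2))
# Control = |1>: the two C-Nots insert an X around B, giving U.
assert np.allclose(np.exp(1j * a) * (C @ X @ B @ X @ A), U)
```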
### Task 1.
Let $V^2 = U$, verify that the following circuits are equivalent:
<img src="../img/c2-u.jpg" width="475px" />
<a href="./MultiQubit_Solutions.ipynb#task_1">Click here for solution</a>
### Task 2.
Verify that the following circuit implements the C$^2$-U gate.
<img src="../img/c2-u_1.jpg" width="221px" />
<a href="./MultiQubit_Solutions.ipynb#task_2">Click here for solution</a>
# Load forecasting benchmark
Example created by Wilson Rocha Lacerda Junior
## Note
The following example is **not** intended to say that one library is better than another. The main focus of these examples is to show that SysIdentPy can be a good alternative for people looking to model time series.
We will compare the results obtained against **neural prophet** library.
For the sake of brevity, from **SysIdentPy** only the **MetaMSS**, **AOLS** and **FROLS** (with polynomial base function) methods will be used. See the SysIdentPy documentation to learn other ways of modeling with the library.
We will compare 1-step-ahead forecasters on the electricity consumption of a building. The configuration of the neuralprophet model was taken from the neuralprophet documentation (https://neuralprophet.com/html/example_links/energy_data_example.html)
The training will occur on 80% of the data, reserving the last 20% for validation.
Note: the data used in this example can be found in neuralprophet github.
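The 80/20 split used in the code below corresponds to one year of hourly data (8760 samples), of which the first 7008 are used for training; a quick sanity check of that arithmetic:

```python
n_samples = 8760                 # one year of hourly data
n_train = int(n_samples * 0.8)   # training portion
n_val = n_samples - n_train      # validation portion

assert n_train == 7008           # matches df.iloc[:7008, :] below
assert n_val == 1752
```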
Benchmark results (mean squared error, lower is better):
1. SysIdentPy (FROLS): 4183.36
2. SysIdentPy (MetaMSS): 5264.43
3. SysIdentPy (AOLS): 5264.43
4. NeuralProphet: 11471
```
from warnings import simplefilter
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sysidentpy.model_structure_selection import FROLS
from sysidentpy.model_structure_selection import AOLS
from sysidentpy.model_structure_selection import MetaMSS
from sysidentpy.basis_function import Polynomial
from sysidentpy.utils.plotting import plot_results
from sysidentpy.neural_network import NARXNN
from sysidentpy.metrics import mean_squared_error
from sktime.datasets import load_airline
from neuralprophet import NeuralProphet
from neuralprophet import set_random_seed
simplefilter("ignore", FutureWarning)
np.seterr(all="ignore")
%matplotlib inline
loss = mean_squared_error
data_location = r"\energy"
```
## FROLS
```
files = ['\SanFrancisco_Hospital.csv']
raw = pd.read_csv(data_location + files[0])
df = pd.DataFrame()
df['ds'] = pd.date_range('1/1/2015 1:00:00', freq=str(60) + 'Min',
periods=(8760))
df['y'] = raw.iloc[:,0].values
df_train, df_val = df.iloc[:7008, :], df.iloc[7008:, :]
y = df['y'].values.reshape(-1, 1)
y_train = df_train['y'].values.reshape(-1, 1)
y_test = df_val['y'].values.reshape(-1, 1)
x_train = df_train['ds'].dt.hour.values.reshape(-1, 1)
x_test = df_val['ds'].dt.hour.values.reshape(-1, 1)
basis_function = Polynomial(degree=1)
sysidentpy = FROLS(
order_selection=True,
info_criteria='bic',
estimator="least_squares",
basis_function=basis_function,
)
sysidentpy.fit(X=x_train, y=y_train)
x_test = np.concatenate([x_train[-sysidentpy.max_lag:], x_test])
y_test = np.concatenate([y_train[-sysidentpy.max_lag:], y_test])
yhat = sysidentpy.predict(X=x_test, y=y_test, steps_ahead=1)
sysidentpy_loss = loss(pd.Series(y_test.flatten()[sysidentpy.max_lag:]), pd.Series(yhat.flatten()[sysidentpy.max_lag:]))
print(sysidentpy_loss)
plot_results(y=y_test[-504:], yhat=yhat[-504:], n=504, figsize=(18, 8))
```
## MetaMSS
```
files = ['\SanFrancisco_Hospital.csv']
raw = pd.read_csv(data_location + files[0])
df=pd.DataFrame()
df['ds'] = pd.date_range('1/1/2015 1:00:00', freq=str(60) + 'Min',
periods=(8760))
df['y'] = raw.iloc[:,0].values
df_train, df_val = df.iloc[:7008, :], df.iloc[7008:, :]
y = df['y'].values.reshape(-1, 1)
y_train = df_train['y'].values.reshape(-1, 1)
y_test = df_val['y'].values.reshape(-1, 1)
x_train = df_train['ds'].dt.hour.values.reshape(-1, 1)
x_test = df_val['ds'].dt.hour.values.reshape(-1, 1)
basis_function = Polynomial(degree=1)
sysidentpy_metamss = MetaMSS(
basis_function=basis_function,
estimator="least_squares",
steps_ahead=1,
n_agents=15,
random_state=42
)
sysidentpy_metamss.fit(X_train=x_train, X_test=x_test, y_train=y_train, y_test=y_test)
x_test = np.concatenate([x_train[-sysidentpy_metamss.max_lag:], x_test])
y_test = np.concatenate([y_train[-sysidentpy_metamss.max_lag:], y_test])
yhat = sysidentpy_metamss.predict(X_test=x_test, y_test=y_test, steps_ahead=1)
metamss_loss = loss(
pd.Series(y_test.flatten()[sysidentpy_metamss.max_lag:]),
pd.Series(yhat.flatten()[sysidentpy_metamss.max_lag:]))
print(metamss_loss)
plot_results(y=y_test[:700], yhat=yhat[:700], n=504, figsize=(18, 8))
```
## AOLS
```
set_random_seed(42)
files = ['\SanFrancisco_Hospital.csv']
raw = pd.read_csv(data_location + files[0])
df=pd.DataFrame()
df['ds'] = pd.date_range('1/1/2015 1:00:00', freq=str(60) + 'Min',
periods=(8760))
df['y'] = raw.iloc[:,0].values
df_train, df_val = df.iloc[:7008, :], df.iloc[7008:, :]
y = df['y'].values.reshape(-1, 1)
y_train = df_train['y'].values.reshape(-1, 1)
y_test = df_val['y'].values.reshape(-1, 1)
x_train = df_train['ds'].dt.hour.values.reshape(-1, 1)
x_test = df_val['ds'].dt.hour.values.reshape(-1, 1)
basis_function = Polynomial(degree=1)
sysidentpy_AOLS = AOLS(
basis_function=basis_function
)
sysidentpy_AOLS.fit(X=x_train, y=y_train)
x_test = np.concatenate([x_train[-sysidentpy_AOLS.max_lag:], x_test])
y_test = np.concatenate([y_train[-sysidentpy_AOLS.max_lag:], y_test])
yhat = sysidentpy_AOLS.predict(X=x_test, y=y_test, steps_ahead=1)
aols_loss = loss(pd.Series(y_test.flatten()[sysidentpy_AOLS.max_lag:]), pd.Series(yhat.flatten()[sysidentpy_AOLS.max_lag:]))
print(aols_loss)
plot_results(y=y_test[-504:], yhat=yhat[-504:], n=504, figsize=(18, 8))
```
## Neural Prophet
```
set_random_seed(42)
# set_log_level("ERROR")
files = ['\SanFrancisco_Hospital.csv']
raw = pd.read_csv(data_location + files[0])
df = pd.DataFrame()
df['ds'] = pd.date_range('1/1/2015 1:00:00', freq=str(60) + 'Min',
periods=(8760))
df['y'] = raw.iloc[:,0].values
m = NeuralProphet(
n_lags=24,
ar_sparsity=0.5,
num_hidden_layers = 2,
d_hidden=20,
learning_rate=0.001
)
metrics = m.fit(df, freq='H', valid_p = 0.2)
df_train, df_val = m.split_df(df,valid_p=0.2)
m.test(df_val)
future = m.make_future_dataframe(df_val, n_historic_predictions=True)
forecast = m.predict(future)
# fig = m.plot(forecast)
print(loss(forecast['y'][24:-1], forecast['yhat1'][24:-1]))
neuralprophet_loss = loss(forecast['y'][24:-1], forecast['yhat1'][24:-1])
plt.figure(figsize=(18, 8))
plt.plot(forecast['y'][-504:], 'ro-')
plt.plot(forecast['yhat1'][-504:], 'k*-')
results = {'SysIdentPy - FROLS': sysidentpy_loss, 'SysIdentPy (AOLS)': aols_loss,
'SysIdentPy (MetaMSS)': metamss_loss, 'NeuralProphet': neuralprophet_loss}
sorted(results.items(), key=lambda result: result[1])
```
# Huggingface SageMaker-SDK - BERT Japanese NER example
1. [Introduction](#Introduction)
2. [Development Environment and Permissions](#Development-Environment-and-Permissions)
1. [Installation](#Installation)
2. [Permissions](#Permissions)
3. [Uploading data to sagemaker_session_bucket](#Uploading-data-to-sagemaker_session_bucket)
3. [Fine-tuning & starting Sagemaker Training Job](#Fine-tuning-\&-starting-Sagemaker-Training-Job)
1. [Creating an Estimator and start a training job](#Creating-an-Estimator-and-start-a-training-job)
2. [Estimator Parameters](#Estimator-Parameters)
3. [Download fine-tuned model from s3](#Download-fine-tuned-model-from-s3)
4. [Named Entity Recognition on Local](#Named-Entity-Recognition-on-Local)
4. [_Coming soon_: Push model to the Hugging Face hub](#Push-model-to-the-Hugging-Face-hub)
# Introduction
This notebook adapts the [Named Entity Recognition chapter](https://github.com/stockmarkteam/bert-book/blob/master/Chapter8.ipynb) of the book [BERTによる自然言語処理入門 Transformersを使った実践プログラミング](https://www.ohmsha.co.jp/book/9784274227264/) so that it runs on Amazon SageMaker.
The data used is the [Japanese named entity recognition dataset built from Wikipedia](https://github.com/stockmarkteam/ner-wikipedia-dataset) created by [Stockmark Inc.](https://stockmark.co.jp/).
In this demo we run a SageMaker training job using Amazon SageMaker's HuggingFace Estimator.
_**NOTE: this demo has been tested on a SageMaker Notebook instance**_
# Development Environment and Permissions
## Installation
This notebook uses SageMaker's `conda_pytorch_p36` kernel.
For Japanese text processing we install `transformers[ja]` instead of `transformers`.
**_Note: to run inference tests inside this notebook, you may need to upgrade pytorch (if your version is old)._**
```
# For local inference tests (CPU)
!pip install torch==1.7.1
# For local inference tests (GPU)
#!pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
!pip install "sagemaker>=2.31.0" "transformers[ja]==4.6.1" "datasets[s3]==1.6.2" --upgrade
```
## Permissions
To use SageMaker from a local environment, you need access to an IAM role with the permissions required by SageMaker. See [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) for details.
```
import sagemaker
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
# Preparing the data
We download the [Japanese named entity recognition dataset built from Wikipedia](https://github.com/stockmarkteam/ner-wikipedia-dataset) created by [Stockmark Inc.](https://stockmark.co.jp/).
```
!git clone --branch v2.0 https://github.com/stockmarkteam/ner-wikipedia-dataset
import json
# Load the data
dataset = json.load(open('ner-wikipedia-dataset/ner.json','r'))
```
The data has the following format:
```
dataset[0:5]
# https://github.com/stockmarkteam/bert-book/blob/master/Chapter8.ipynb
import unicodedata
# Dictionary mapping entity types to IDs
type_id_dict = {
"人名": 1,
"法人名": 2,
"政治的組織名": 3,
"その他の組織名": 4,
"地名": 5,
"施設名": 6,
"製品名": 7,
"イベント名": 8
}
# Convert categories to labels and normalize the strings.
for sample in dataset:
sample['text'] = unicodedata.normalize('NFKC', sample['text'])
for e in sample["entities"]:
e['type_id'] = type_id_dict[e['type']]
del e['type']
dataset[0:5]
import random
# Split the dataset
random.seed(42)
random.shuffle(dataset)
n = len(dataset)
n_train = int(n*0.6)
n_val = int(n*0.2)
dataset_train = dataset[:n_train]
dataset_val = dataset[n_train:n_train+n_val]
dataset_test = dataset[n_train+n_val:]
# https://github.com/stockmarkteam/bert-book/blob/master/Chapter8.ipynb
def create_dataset(tokenizer, dataset, max_length):
"""
Reshape the dataset into a form that can be fed to a data loader.
"""
input_ids = []
token_type_ids = []
attention_mask = []
labels= []
for sample in dataset:
text = sample['text']
entities = sample['entities']
encoding = tokenizer.encode_plus_tagged(
text, entities, max_length=max_length
)
input_ids.append(encoding['input_ids'])
token_type_ids.append(encoding['token_type_ids'])
attention_mask.append(encoding['attention_mask'])
labels.append(encoding['labels'])
d = {
"input_ids": input_ids,
"token_type_ids": token_type_ids,
"attention_mask": attention_mask,
"labels": labels
}
return d
# https://github.com/stockmarkteam/bert-book/blob/master/Chapter8.ipynb
from transformers import BertJapaneseTokenizer
class NER_tokenizer_BIO(BertJapaneseTokenizer):
# Accept the number of entity categories `num_entity_type`
# at initialization time.
def __init__(self, *args, **kwargs):
self.num_entity_type = kwargs.pop('num_entity_type')
super().__init__(*args, **kwargs)
def encode_plus_tagged(self, text, entities, max_length):
"""
Given a sentence and the named entities it contains,
encode it and build the label sequence.
"""
# Split the text around each entity and attach a label to each piece.
splitted = [] # pieces of the split text are appended here
position = 0
for entity in entities:
start = entity['span'][0]
end = entity['span'][1]
label = entity['type_id']
splitted.append({'text':text[position:start], 'label':0})
splitted.append({'text':text[start:end], 'label':label})
position = end
splitted.append({'text': text[position:], 'label':0})
splitted = [ s for s in splitted if s['text'] ]
# Tokenize each piece and attach labels to its tokens.
tokens = [] # tokens are appended here
labels = [] # labels are appended here
for s in splitted:
tokens_splitted = tokenizer.tokenize(s['text'])
label = s['label']
if label > 0: # named entity
# first give every token the I- tag
labels_splitted = \
[ label + self.num_entity_type ] * len(tokens_splitted)
# then turn the first token into a B- tag
labels_splitted[0] = label
else: # everything else
labels_splitted = [0] * len(tokens_splitted)
tokens.extend(tokens_splitted)
labels.extend(labels_splitted)
# Encode into a form BERT can consume.
input_ids = tokenizer.convert_tokens_to_ids(tokens)
encoding = tokenizer.prepare_for_model(
input_ids,
max_length=max_length,
padding='max_length',
truncation=True
)
# Add labels for the special tokens
labels = [0] + labels[:max_length-2] + [0]
labels = labels + [0]*( max_length - len(labels) )
encoding['labels'] = labels
return encoding
def encode_plus_untagged(
self, text, max_length=None, return_tensors=None
):
"""
Tokenize the sentence and also record each token's position in the text.
Same as encode_plus_untagged of the IO-scheme tokenizer.
"""
# Tokenize the sentence and map each token
# to its corresponding substring of the text.
tokens = [] # tokens are appended here.
tokens_original = [] # substrings of the text corresponding to each token.
words = self.word_tokenizer.tokenize(text) # split into words with MeCab
for word in words:
# Split the word into subwords
tokens_word = self.subword_tokenizer.tokenize(word)
tokens.extend(tokens_word)
if tokens_word[0] == '[UNK]': # handle unknown words
tokens_original.append(word)
else:
tokens_original.extend([
token.replace('##','') for token in tokens_word
])
# Find each token's position in the text (taking whitespace into account).
position = 0
spans = [] # token positions are appended here.
for token in tokens_original:
l = len(token)
while 1:
if token != text[position:position+l]:
position += 1
else:
spans.append([position, position+l])
position += l
break
# Encode into a form BERT can consume.
input_ids = tokenizer.convert_tokens_to_ids(tokens)
encoding = tokenizer.prepare_for_model(
input_ids,
max_length=max_length,
padding='max_length' if max_length else False,
truncation=True if max_length else False
)
sequence_length = len(encoding['input_ids'])
# Add a dummy span for the special token [CLS].
spans = [[-1, -1]] + spans[:sequence_length-2]
# Add dummy spans for the special tokens [SEP] and [PAD].
spans = spans + [[-1, -1]] * ( sequence_length - len(spans) )
# Convert to torch.Tensor if requested.
if return_tensors == 'pt':
encoding = { k: torch.tensor([v]) for k, v in encoding.items() }
return encoding, spans
@staticmethod
def Viterbi(scores_bert, num_entity_type, penalty=10000):
"""
Find the optimal label sequence with the Viterbi algorithm.
"""
m = 2*num_entity_type + 1
penalty_matrix = np.zeros([m, m])
for i in range(m):
for j in range(1+num_entity_type, m):
if not ( (i == j) or (i+num_entity_type == j) ):
penalty_matrix[i,j] = penalty
path = [ [i] for i in range(m) ]
scores_path = scores_bert[0] - penalty_matrix[0,:]
scores_bert = scores_bert[1:]
for scores in scores_bert:
assert len(scores) == 2*num_entity_type + 1
score_matrix = np.array(scores_path).reshape(-1,1) \
+ np.array(scores).reshape(1,-1) \
- penalty_matrix
scores_path = score_matrix.max(axis=0)
argmax = score_matrix.argmax(axis=0)
path_new = []
for i, idx in enumerate(argmax):
path_new.append( path[idx] + [i] )
path = path_new
labels_optimal = path[np.argmax(scores_path)]
return labels_optimal
def convert_bert_output_to_entities(self, text, scores, spans):
"""
Extract named entities from the text, the classification scores,
and the position of each token.
The scores are a 2-D array of shape (sequence length, number of labels).
"""
assert len(spans) == len(scores)
num_entity_type = self.num_entity_type
# Remove the parts corresponding to special tokens
scores = [score for score, span in zip(scores, spans) if span[0]!=-1]
spans = [span for span in spans if span[0]!=-1]
# Decide the predicted labels with the Viterbi algorithm.
labels = self.Viterbi(scores, num_entity_type)
# Group consecutive tokens with the same label into entities.
entities = []
for label, group \
in itertools.groupby(enumerate(labels), key=lambda x: x[1]):
group = list(group)
start = spans[group[0][0]][0]
end = spans[group[-1][0]][1]
if label != 0: # if this is a named entity
if 1 <= label <= num_entity_type:
# if the label is a B- tag, start a new entity
entity = {
"name": text[start:end],
"span": [start, end],
"type_id": label
}
entities.append(entity)
else:
# if the label is an I- tag, extend the most recent entity
entity['span'][1] = end
entity['name'] = text[entity['span'][0]:entity['span'][1]]
return entities
tokenizer_name = 'cl-tohoku/bert-base-japanese-whole-word-masking'
# Load the tokenizer.
# The number of entity categories `num_entity_type` must be passed in.
tokenizer = NER_tokenizer_BIO.from_pretrained(tokenizer_name, num_entity_type=8)
# Create the datasets
max_length = 128
dataset_train = create_dataset(
tokenizer,
dataset_train,
max_length
)
dataset_val = create_dataset(
tokenizer,
dataset_val,
max_length
)
import datasets
dataset_train = datasets.Dataset.from_dict(dataset_train)
dataset_val = datasets.Dataset.from_dict(dataset_val)
dataset_train
dataset_val
# set format for pytorch
dataset_train.set_format('torch', columns=['input_ids', 'attention_mask', 'token_type_ids', 'labels'])
dataset_val.set_format('torch', columns=['input_ids', 'attention_mask', 'token_type_ids', 'labels'])
dataset_train[0]
```
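With 8 entity types, the BIO scheme used above produces 17 labels: label 0 for O, labels 1–8 for B- tags, and labels 9–16 (that is, `type_id + num_entity_type`) for the corresponding I- tags. A small sketch of the mapping (the `B-{t}`/`I-{t}` names are just illustrative):

```python
num_entity_type = 8

labels = {0: "O"}  # label 0 marks tokens outside any entity
for t in range(1, num_entity_type + 1):
    labels[t] = f"B-{t}"                    # first token of an entity
    labels[t + num_entity_type] = f"I-{t}"  # remaining tokens of the entity

# 2 * 8 + 1 = 17 labels in total, matching num_labels further below
assert len(labels) == 2 * num_entity_type + 1
assert labels[1] == "B-1" and labels[9] == "I-1"
```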
## Uploading data to `sagemaker_session_bucket`
Upload the data to S3.
```
import botocore
from datasets.filesystems import S3FileSystem
s3_prefix = 'samples/datasets/ner-wikipedia-dataset-bio'
s3 = S3FileSystem()
# save train_dataset to s3
training_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/train'
dataset_train.save_to_disk(training_input_path, fs=s3)
# save test_dataset to s3
test_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/test'
dataset_val.save_to_disk(test_input_path, fs=s3)
# the datasets were uploaded to the following paths
print(training_input_path)
print(test_input_path)
```
# Fine-tuning & starting Sagemaker Training Job
To create a `HuggingFace` training job we need a `HuggingFace` Estimator.
The Estimator handles end-to-end Amazon SageMaker training and deployment tasks. In the Estimator we define which fine-tuning script to use as the `entry_point`, which `instance_type` to use, which `hyperparameters` to pass, and so on.
```python
huggingface_estimator = HuggingFace(
entry_point='train.py',
source_dir='./scripts',
base_job_name='huggingface-sdk-extension',
instance_type='ml.p3.2xlarge',
instance_count=1,
transformers_version='4.4',
pytorch_version='1.6',
py_version='py36',
role=role,
hyperparameters={
'epochs': 1,
'train_batch_size': 32,
'model_name':'distilbert-base-uncased'
}
)
```
When you create a SageMaker training job, SageMaker starts and manages the EC2 instances needed to run the `huggingface` container.
It uploads the fine-tuning script `train.py`, downloads the data from the `sagemaker_session_bucket` into the container at `/opt/ml/input/data`, and then runs the training job:
```python
/opt/conda/bin/python train.py --epochs 1 --model_name distilbert-base-uncased --train_batch_size 32
```
The `hyperparameters` defined in the `HuggingFace estimator` are passed in as named arguments.
SageMaker also provides useful properties about the training environment through various environment variables, including:
* `SM_MODEL_DIR`: a string representing the path where the training job writes model artifacts. After training, the artifacts in this directory are uploaded to S3 for model hosting.
* `SM_NUM_GPUS`: an integer representing the number of GPUs available on the host.
* `SM_CHANNEL_XXXX`: a string representing the path to the directory containing the input data for the specified channel. For example, if you specify two input channels named `train` and `test` in the HuggingFace estimator's fit call, the environment variables `SM_CHANNEL_TRAIN` and `SM_CHANNEL_TEST` are set.
To run this training job locally, define `instance_type='local'`, or `instance_type='local_gpu'` for GPU.
**_Note: this does not work inside SageMaker Studio_**
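Inside `train.py`, a common pattern is to read the hyperparameters from the command line and the SageMaker paths from the environment. This is a generic sketch, not the actual script shipped in `./scripts`:

```python
import argparse
import os

parser = argparse.ArgumentParser()
# Hyperparameters arrive as named command-line arguments
parser.add_argument("--epochs", type=int, default=1)
parser.add_argument("--train_batch_size", type=int, default=32)
# SageMaker-provided paths arrive as environment variables
parser.add_argument("--model_dir", type=str,
                    default=os.environ.get("SM_MODEL_DIR", "./model"))
parser.add_argument("--train_dir", type=str,
                    default=os.environ.get("SM_CHANNEL_TRAIN", "./train"))

# Parse an empty argv here just to demonstrate the defaults
args, _ = parser.parse_known_args([])
```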
```
# Code executed by the training job
!pygmentize ./scripts/train.py
from sagemaker.huggingface import HuggingFace
num_entity_type = 8
num_labels = 2*num_entity_type+1
# hyperparameters, which are passed into the training job
hyperparameters={
'epochs': 5,
'train_batch_size': 32,
'eval_batch_size': 256,
'learning_rate' : 1e-5,
'model_name':'cl-tohoku/bert-base-japanese-whole-word-masking',
'output_dir':'/opt/ml/checkpoints',
'num_labels': num_labels,
}
# s3 uri where our checkpoints will be uploaded during training
job_name = "bert-ner-bio"
#checkpoint_s3_uri = f's3://{sess.default_bucket()}/{job_name}/checkpoints'
```
# Creating an Estimator and start a training job
```
huggingface_estimator = HuggingFace(
entry_point='train.py',
source_dir='./scripts',
instance_type='ml.p3.2xlarge',
instance_count=1,
base_job_name=job_name,
#checkpoint_s3_uri=checkpoint_s3_uri,
#use_spot_instances=True,
#max_wait=7200, # This should be equal to or greater than max_run in seconds'
#max_run=3600, # expected max run in seconds
role=role,
transformers_version='4.6',
pytorch_version='1.7',
py_version='py36',
hyperparameters=hyperparameters,
)
# starting the train job with our uploaded datasets as input
huggingface_estimator.fit({'train': training_input_path, 'test': test_input_path})
# Approximate runtime with ml.p3.2xlarge, 5 epochs
#Training seconds: 558
#Billable seconds: 558
```
# Estimator Parameters
```
# container image used for training job
print(f"container image used for training job: \n{huggingface_estimator.image_uri}\n")
# s3 uri where the trained model is located
print(f"s3 uri where the trained model is located: \n{huggingface_estimator.model_data}\n")
# latest training job name for this estimator
print(f"latest training job name for this estimator: \n{huggingface_estimator.latest_training_job.name}\n")
# access the logs of the training job
huggingface_estimator.sagemaker_session.logs_for_job(huggingface_estimator.latest_training_job.name)
```
# Download fine-tuned model from s3
```
import os
OUTPUT_DIR = './output/'
if not os.path.exists(OUTPUT_DIR):
os.makedirs(OUTPUT_DIR)
from sagemaker.s3 import S3Downloader
# Download the trained model
S3Downloader.download(
s3_uri=huggingface_estimator.model_data, # s3 uri where the trained model is located
local_path='.', # local path where *.targ.gz is saved
sagemaker_session=sess # sagemaker session used for training the model
)
# Extract into OUTPUT_DIR
!tar -zxvf model.tar.gz -C output
```
## Named Entity Recognition on Local
```
from transformers import AutoModelForTokenClassification
tokenizer_name = 'cl-tohoku/bert-base-japanese-whole-word-masking'
tokenizer = NER_tokenizer_BIO.from_pretrained(tokenizer_name, num_entity_type=8)
model = AutoModelForTokenClassification.from_pretrained('./output')
# model = model.cuda() # for GPU inference
# https://github.com/stockmarkteam/bert-book/blob/master/Chapter8.ipynb
import itertools
import numpy as np
from tqdm import tqdm
import torch
original_text=[]
entities_list = [] # ground-truth entities are appended here
entities_predicted_list = [] # predicted entities are appended here
for sample in tqdm(dataset_test):
text = sample['text']
original_text.append(text)
encoding, spans = tokenizer.encode_plus_untagged(
text, return_tensors='pt'
)
#encoding = { k: v.cuda() for k, v in encoding.items() } # for GPU inference
with torch.no_grad():
output = model(**encoding)
scores = output.logits
scores = scores[0].cpu().numpy().tolist()
# Convert the classification scores into entities
entities_predicted = tokenizer.convert_bert_output_to_entities(
text, scores, spans
)
entities_list.append(sample['entities'])
entities_predicted_list.append( entities_predicted )
print("Text: ", original_text[0])
print("Ground truth: ", entities_list[0])
print("Predicted: ", entities_predicted_list[0])
```
# Evaluate NER model
```
# https://github.com/stockmarkteam/bert-book/blob/master/Chapter8.ipynb
def evaluate_model(entities_list, entities_predicted_list, type_id=None):
"""
Compare predictions with the ground truth and evaluate the model's
named entity recognition performance.
If type_id is None, evaluate over all entity types.
If type_id is an integer, evaluate only entities with that type ID.
"""
num_entities = 0 # number of ground-truth entities
num_predictions = 0 # number of entities predicted by BERT
num_correct = 0 # number of predicted entities that are correct
# Compare predictions with the ground truth for each sentence.
# A prediction counts as correct if both its span and its type ID match.
for entities, entities_predicted in zip(entities_list, entities_predicted_list):
if type_id:
entities = [ e for e in entities if e['type_id'] == type_id ]
entities_predicted = [
e for e in entities_predicted if e['type_id'] == type_id
]
get_span_type = lambda e: (e['span'][0], e['span'][1], e['type_id'])
set_entities = set(get_span_type(e) for e in entities)
set_entities_predicted = set(get_span_type(e) for e in entities_predicted)
num_entities += len(entities)
num_predictions += len(entities_predicted)
num_correct += len( set_entities & set_entities_predicted )
# Compute the metrics
precision = num_correct/num_predictions # precision
recall = num_correct/num_entities # recall
f_value = 2*precision*recall/(precision+recall) # F-score
result = {
'num_entities': num_entities,
'num_predictions': num_predictions,
'num_correct': num_correct,
'precision': precision,
'recall': recall,
'f_value': f_value
}
return result
print(evaluate_model(entities_list, entities_predicted_list))
```
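To illustrate the evaluation above: a prediction only counts as correct when both its span and its type ID match exactly. A tiny worked example with made-up spans:

```python
# Ground truth and predictions as (start, end, type_id) triples
truth = {(0, 5, 1), (10, 14, 5), (20, 25, 2)}
predicted = {(0, 5, 1), (10, 14, 3), (20, 25, 2)}  # middle entity: wrong type

num_correct = len(truth & predicted)               # set intersection -> 2
precision = num_correct / len(predicted)
recall = num_correct / len(truth)
f_value = 2 * precision * recall / (precision + recall)

assert num_correct == 2
assert abs(f_value - 2 / 3) < 1e-9                 # precision = recall = 2/3
```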
```
# Select the TensorFlow 2.0 runtime
%tensorflow_version 2.x
# Install Weights and Biases (W&B)
!pip install wandb
# Primary imports
import tensorflow as tf
import numpy as np
import wandb

# Start a W&B run so the wandb.log calls below have a destination
# (the project name here is just a placeholder)
wandb.init(project="fashionmnist-sample", anonymous="allow")
# Load the FashionMNIST dataset, scale the pixel values
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
X_train = X_train/255.
X_test = X_test/255.
X_train.shape, X_test.shape, y_train.shape, y_test.shape
# Define the labels of the dataset
CLASSES=["T-shirt/top","Trouser","Pullover","Dress","Coat",
"Sandal","Shirt","Sneaker","Bag","Ankle boot"]
# Change the pixel values to float32 and reshape input data
X_train = X_train.astype("float32").reshape(-1, 28, 28, 1)
X_test = X_test.astype("float32").reshape(-1, 28, 28, 1)
y_train.shape, y_test.shape
# TensorFlow imports
from tensorflow.keras.models import *
from tensorflow.keras.layers import *
# Define utility function for building a basic shallow Convnet
def get_training_model():
model = Sequential()
model.add(Conv2D(16, (5, 5), activation="relu",
input_shape=(28, 28,1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (5, 5), activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation="relu"))
model.add(Dense(len(CLASSES), activation="softmax"))
return model
# Define loss function and optimizers
loss_func = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer_adam = tf.keras.optimizers.Adam()
optimizer_adadelta = tf.keras.optimizers.Adadelta()
# Loss trackers, one pair per optimizer
train_loss_adam = tf.keras.metrics.Mean(name="train_loss_adam")
valid_loss_adam = tf.keras.metrics.Mean(name="valid_loss_adam")
train_loss_adadelta = tf.keras.metrics.Mean(name="train_loss_adadelta")
valid_loss_adadelta = tf.keras.metrics.Mean(name="valid_loss_adadelta")
# Specify the performance metrics, one pair per optimizer
train_acc_adam = tf.keras.metrics.SparseCategoricalAccuracy(name="train_acc_adam")
valid_acc_adam = tf.keras.metrics.SparseCategoricalAccuracy(name="valid_acc_adam")
train_acc_adadelta = tf.keras.metrics.SparseCategoricalAccuracy(name="train_acc_adadelta")
valid_acc_adadelta = tf.keras.metrics.SparseCategoricalAccuracy(name="valid_acc_adadelta")
# Batches of 64
train_ds = tf.data.Dataset.from_tensor_slices((X_train, y_train)).batch(64)
test_ds = tf.data.Dataset.from_tensor_slices((X_test, y_test)).batch(64)
# Training step for the Adam optimizer
@tf.function
def model_train_adam(features, labels):
    # Define the GradientTape context
    with tf.GradientTape() as tape:
        # Get the probabilities
        predictions = model(features)
        # Calculate the loss
        loss = loss_func(labels, predictions)
    # Get the gradients
    gradients = tape.gradient(loss, model.trainable_variables)
    # Update the weights
    optimizer_adam.apply_gradients(zip(gradients, model.trainable_variables))
    # Update the loss and accuracy
    train_loss_adam(loss)
    train_acc_adam(labels, predictions)

# Training step for the Adadelta optimizer (used in the training loop below)
@tf.function
def model_train_adadelta(features, labels):
    with tf.GradientTape() as tape:
        predictions = model(features)
        loss = loss_func(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer_adadelta.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss_adadelta(loss)
    train_acc_adadelta(labels, predictions)
# Validation step for the Adam optimizer
@tf.function
def model_validate_adam(features, labels):
    predictions = model(features)
    v_loss = loss_func(labels, predictions)
    valid_loss_adam(v_loss)
    valid_acc_adam(labels, predictions)

# Validation step for the Adadelta optimizer
@tf.function
def model_validate_adadelta(features, labels):
    predictions = model(features)
    v_loss = loss_func(labels, predictions)
    valid_loss_adadelta(v_loss)
    valid_acc_adadelta(labels, predictions)
# A shallow Convnet
model = get_training_model()
# Grab random images from the test and make predictions using
# the model *while it is training* and log them using WnB
def get_sample_predictions():
predictions = []
images = []
random_indices = np.random.choice(X_test.shape[0], 25)
for index in random_indices:
image = X_test[index].reshape(1, 28, 28, 1)
prediction = np.argmax(model(image).numpy(), axis=1)
prediction = CLASSES[int(prediction)]
images.append(image)
predictions.append(prediction)
wandb.log({"predictions": [wandb.Image(image, caption=prediction)
for (image, prediction) in zip(images, predictions)]})
# Train the model for 10 epochs
for epoch in range(10):
# Run the model through train and test sets respectively
for (features, labels) in train_ds:
model_train_adam(features, labels)
model_train_adadelta(features, labels)
for test_features, test_labels in test_ds:
model_validate_adam(test_features, test_labels)
model_validate_adadelta(test_features, test_labels)
# Grab the results
(loss_adadelta, acc_adadelta) = train_loss_adadelta.result(), train_acc_adadelta.result()
(val_loss_adadelta, val_acc_adadelta) = valid_loss_adadelta.result(), valid_acc_adadelta.result()
# Clear the current state of the metrics
train_loss_adadelta.reset_states(), train_acc_adadelta.reset_states()
valid_loss_adadelta.reset_states(), valid_acc_adadelta.reset_states()
# Local logging
template = "Epoch {}, loss: {:.3f}, acc: {:.3f}, val_loss: {:.3f}, val_acc: {:.3f}"
print (template.format(epoch+1,
loss_adadelta,
acc_adadelta,
val_loss_adadelta,
val_acc_adadelta))
# Logging with WnB
wandb.log({"train_loss_adadelta": loss_adadelta.numpy(),
"train_accuracy_adadelta": acc_adadelta.numpy(),
"val_loss_adadelta": val_loss_adadelta.numpy(),
"val_accuracy_adadelta": val_acc_adadelta.numpy()
})
# adam
# Grab the results
(loss_adam, acc_adam) = train_loss_adam.result(), train_acc_adam.result()
(val_loss_adam, val_acc_adam) = valid_loss_adam.result(), valid_acc_adam.result()
# Clear the current state of the metrics
train_loss_adam.reset_states(), train_acc_adam.reset_states()
valid_loss_adam.reset_states(), valid_acc_adam.reset_states()
# Local logging
template = "Epoch {}, loss: {:.3f}, acc: {:.3f}, val_loss: {:.3f}, val_acc: {:.3f}"
print (template.format(epoch+1,
loss_adam,
acc_adam,
val_loss_adam,
val_acc_adam))
# Logging with WnB
wandb.log({"train_loss_adam": loss_adam.numpy(),
"train_accuracy_adam": acc_adam.numpy(),
"val_loss_adam": val_loss_adam.numpy(),
"val_accuracy_adam": val_acc_adam.numpy()
})
get_sample_predictions()
```
# QCoDeS Example with Yokogawa GS200 and Keithley 7510 Multimeter
In this example, we will show how to use the Yokogawa GS200 SMU and Keithley 7510 DMM to perform a sweep measurement. The GS200 will source current through a 10 Ohm resistor using the **program** feature and **trigger** the 7510, which will measure the voltage across the resistor using its **digitize** function.
```
import matplotlib.pyplot as plt
import numpy as np
import time
from qcodes.dataset.plotting import plot_dataset
from qcodes.dataset.measurements import Measurement
from qcodes.instrument_drivers.yokogawa.GS200 import GS200
from qcodes.instrument_drivers.tektronix.keithley_7510 import Keithley7510
gs = GS200("gs200", 'USB0::0x0B21::0x0039::91W434594::INSTR')
dmm = Keithley7510("dmm_7510", 'USB0::0x05E6::0x7510::04450961::INSTR')
gs.reset()
dmm.reset()
```
## 1. GS200 setup
Set the source mode to "current" (by default it is "voltage"), then set the current range and the voltage limit.
```
gs.source_mode('CURR')
gs.current(0)
gs.current_range(.01)
gs.voltage_limit(5)
```
By default, the output should be off:
```
gs.output()
```
### 1.1 Trigger Settings
The BNC port will be used for trigger out. There are three different settings for the trigger-out signal:
• **Trigger** (default)
This pin transmits the TrigBusy signal. A low-level signal upon trigger generation and a
high-level signal upon source operation completion.
• **Output**
This pin transmits the output state. A high-level signal if the output is off and a low-level signal if the output is on.
• **Ready**
This pin transmits the source change completion signal (Ready). This is transmitted
10 ms after the source level changes as a low pulse with a width of 10 μs.
```
print(f'By default, the setting for BNC trigger out is "{gs.BNC_out()}".')
```
### 1.2 Program the sweep
The GS200 does not have a built-in "sweep" function, but its "program" feature can generate a user-specified source data pattern defined as a program in advance.
The following is a simple program, in which the current changes first to 0.01A, then -0.01A, and returns to 0A:
```
gs.program.start() # Starts program memory editing
gs.current(0.01)
gs.current(-0.01)
gs.current(0.0)
gs.program.end() # Ends program memory editing
```
It can be saved to the system memory (the memory of the GS200):
```
gs.program.save('test1_up_and_down.csv')
```
The advantage of saving to the memory is that the user can have multiple patterns stored:
```
gs.program.start() # Starts program memory editing
gs.current(0.01)
gs.current(-0.01)
gs.current(0.005)
gs.current(0.0)
gs.program.end() # Ends program memory editing
gs.program.save('test2_up_down_up.csv')
```
Let's load the first one:
```
gs.program.load('test1_up_and_down.csv')
```
The interval time between values is set as follows:
```
gs.program.interval(.1)
print(f'The interval time is {float(gs.program.interval())} s')
```
By default, the change is instant, so the output would be like the following:
```
t_axis = [0, 0, 0.1, 0.1, 0.2, 0.2, 0.3, 0.4]
curr_axis = [0, 0.01, 0.01, -0.01, -0.01, 0, 0, 0]
plt.plot(t_axis, curr_axis)
plt.xlabel('time (s)')
plt.ylabel('source current(A)')
```
But we want to introduce a "slope" between source values (see the user's manual for more examples of the "slope time"):
```
gs.program.slope(.1)
print(f'The slope time is {float(gs.program.slope())} s')
```
As a result, the expected output current will be:
```
t_axis = [0, 0.1, 0.2, 0.3, 0.4]
curr_axis = [0, 0.01, -0.01, 0, 0]
plt.plot(t_axis, curr_axis)
plt.xlabel('time (s)')
plt.ylabel('source current(A)')
```
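With the slope time equal to the interval time, each source level is reached by a linear ramp that spans the whole interval, which is exactly what the plot above shows. A small pure-Python sketch (not driver code; the function name and the trailing hold point are illustrative assumptions) builds the same (time, current) breakpoints:

```python
def program_breakpoints(levels, interval, hold=1):
    """Breakpoints of a ramped program that starts from 0 A.

    When slope time == interval time, each level is reached linearly
    over one full interval; `hold` extra points keep the final level.
    """
    t_axis, curr_axis = [0.0], [0.0]
    for k, level in enumerate(levels):
        t_axis.append(round((k + 1) * interval, 10))
        curr_axis.append(level)
    for _ in range(hold):  # hold the last level for one more interval
        t_axis.append(round(t_axis[-1] + interval, 10))
        curr_axis.append(curr_axis[-1])
    return t_axis, curr_axis

t_axis, curr_axis = program_breakpoints([0.01, -0.01, 0.0], 0.1)
print(t_axis)     # [0.0, 0.1, 0.2, 0.3, 0.4]
print(curr_axis)  # [0.0, 0.01, -0.01, 0.0, 0.0]
```

These are the same arrays plotted above for the expected output current.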
By default, the GS200 will keep repeating this pattern once it starts:
```
gs.program.repeat()
```
We only want it to generate the pattern once:
```
gs.program.repeat('OFF')
print(f'The program repetition mode is now {gs.program.repeat()}.')
```
Note: at this moment, the output of the GS200 should still be off:
```
gs.output()
```
## 2. Keithley 7510 Setup
### 2.1 Setup basic digitize mode
The DMM7510 digitize functions make fast, predictably spaced measurements. The speed, sensitivity, and bandwidth of the digitize functions allow you to make accurate voltage and current readings of fast signals, such as those associated with sensors, audio, medical devices, power line issues, and industrial processes. The digitize functions can provide 1,000,000 readings per second at 4½ digits.
Set the digitize function to measure voltage, and set the range:
```
dmm.digi_sense_function('voltage')
dmm.digi_sense.range(10)
```
The system determines when the 10 MΩ input divider is enabled (for voltage measurement only):
```
dmm.digi_sense.input_impedance('AUTO')
```
To define the precise acquisition rate at which the digitizing measurements are made (this applies to digitize mode only):
```
readings_per_second = 10000
dmm.digi_sense.acq_rate(readings_per_second)
print(f'The acquisition rate is {dmm.digi_sense.acq_rate()} digitizing measurements per second.')
```
We will let the system decide the aperture size:
```
dmm.digi_sense.aperture('AUTO')
```
We also need to tell the instrument how many readings will be recorded:
```
number_of_readings = 4000
dmm.digi_sense.count(number_of_readings)
print(f'{dmm.digi_sense.count()} measurements will be made every time the digitize function is triggered.')
```
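These two settings fix the duration of each digitize burst: at 10,000 readings per second, 4,000 readings span 0.4 s, the same length as the GS200 program above (four 0.1 s intervals). A quick sanity-check sketch:

```python
# Sketch: confirm one digitize burst covers the full source program.
readings_per_second = 10000  # acquisition rate set on the 7510
number_of_readings = 4000    # readings captured per trigger

burst_duration = number_of_readings / readings_per_second
print(burst_duration)  # 0.4 (seconds), matching the 0.4 s GS200 program
```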
### 2.2 Use a user buffer to store the data
```
buffer_name = 'userbuff01'
buffer_size = 100000
buffer = dmm.buffer(buffer_name, buffer_size)
print(f'The user buffer "{buffer.short_name}" can store {buffer.size()} readings, which is more than enough for this example.')
```
One of the benefits of using a larger size is that, based on the settings above, the GS200 will send more than one trigger to the 7510. Technically, once a trigger is received, the 7510 ignores any further triggers until it returns to idle. In reality, however, it may still respond to more than the first trigger. A large buffer size prevents the data in the buffer from being overwritten.
```
print(f'There are {buffer.last_index() - buffer.first_index()} readings in the buffer at this moment.')
```
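How much headroom the buffer gives is simple arithmetic: with 100,000 slots and 4,000 readings per burst, 25 full bursts fit before any data would be overwritten. A one-line sketch:

```python
# Sketch: bursts that fit in the user buffer before old data is overwritten.
buffer_size = 100000
number_of_readings = 4000

bursts_before_overwrite = buffer_size // number_of_readings
print(bursts_before_overwrite)  # 25
```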
### 2.3 Set up the trigger in
By default, the falling edge will be used to trigger the measurement:
```
dmm.trigger_in_ext_edge()
dmm.digitize_trigger()
```
We want an external trigger to trigger the measurement:
```
dmm.digitize_trigger('external')
dmm.digitize_trigger()
```
## 3. Check for errors
```
while True:
smu_error = gs.system_errors()
if 'No error' in smu_error:
break
print(smu_error)
while True:
dmm_error = dmm.system_errors()
if 'No error' in dmm_error:
break
print(dmm_error)
```
## 4. Make the measurement
To clear the external trigger in, and clear the buffer:
```
dmm.trigger_in_ext_clear()
buffer.clear_buffer()
total_data_points = int(buffer.number_of_readings())
print(f'There are total {total_data_points} readings in the buffer "{buffer.short_name}".')
```
Perform the measurement by turning the GS200 output on and running the program:
```
sleep_time = 1 # a sleep time is required, or the GS200 will turn off right away, and won't run the whole program
with gs.output.set_to('on'):
gs.program.run()
time.sleep(sleep_time)
```
The GS200 should be off after running:
```
gs.output()
total_data_points = int(buffer.number_of_readings())
print(f'There are {total_data_points} readings in total, so the measurement was performed {round(total_data_points/4000, 2)} times.')
```
Let's use a time series as the setpoints:
```
dt = 1/readings_per_second
t0 = 0
t1 = (total_data_points-1)*dt
buffer.set_setpoints(start=buffer.t_start, stop=buffer.t_stop, label='time') # "label" will be used for setpoints name
buffer.t_start(t0)
buffer.t_stop(t1)
```
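The setpoints above describe one time value per reading, spaced by the digitize period `dt`. The same axis can be sketched in plain Python (toy value for `total_data_points`; not part of the driver API):

```python
# Sketch: the time axis implied by the buffer setpoints,
# one point per reading at the digitize rate.
readings_per_second = 10000
total_data_points = 4000  # e.g. one full burst
dt = 1 / readings_per_second

time_axis = [k * dt for k in range(total_data_points)]
print(time_axis[0], time_axis[-1])  # 0.0 ... (total_data_points - 1) * dt
```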
To set the number of points, specify the "data_start" and "data_end" points for the buffer:
```
buffer.data_start(1)
buffer.data_end(total_data_points)
```
"n_pts" is read only:
```
buffer.n_pts()
```
The setpoints:
```
buffer.setpoints()
```
The following are the available elements that can be saved to the buffer:
```
buffer.available_elements
```
The user can select multiple elements:
```
buffer.elements(['measurement', 'relative_time'])
buffer.elements()
meas = Measurement()
meas.register_parameter(buffer.data)
with meas.run() as datasaver:
data = buffer.data
datasaver.add_result((buffer.data, data()))
dataid = datasaver.run_id
plot_dataset(datasaver.dataset)
```
(The second plot is a t-t plot, hence a straight line.)
```
dataset = datasaver.dataset
dataset.get_parameter_data()
```
### Load previously saved pattern (GS200)
Remember we had another pattern stored? Let's load that one:
```
gs.program.load('test2_up_down_up.csv')
```
Always clear the buffer, and the external trigger in for the DMM, at the beginning of each measurement:
```
dmm.trigger_in_ext_clear()
buffer.clear_buffer()
total_data_points = int(buffer.number_of_readings())
print(f'There are {total_data_points} readings in the buffer "{buffer.short_name}".')
sleep_time = 1
with gs.output.set_to('on'):
gs.program.run()
time.sleep(sleep_time)
total_data_points = int(buffer.number_of_readings())
print(f'There are {total_data_points} readings in total, so the measurement was performed {round(total_data_points/4000, 2)} times.')
buffer.elements()
meas = Measurement()
meas.register_parameter(buffer.data)
with meas.run() as datasaver:
data = buffer.data
datasaver.add_result((buffer.data, data()))
dataid = datasaver.run_id
plot_dataset(datasaver.dataset)
```
Some of the available elements are not numerical values, for example, "timestamp":
```
buffer.elements(['timestamp', 'measurement'])
buffer.elements()
meas = Measurement()
meas.register_parameter(buffer.data, paramtype="array") # remember to set paramtype="array"
with meas.run() as datasaver:
data = buffer.data
datasaver.add_result((buffer.data, data()))
dataid = datasaver.run_id
datasaver.dataset.get_parameter_data()
gs.reset()
dmm.reset()
```

<div class = 'alert alert-block alert-info'
style = 'background-color:#4c1c84;
color:#eeebf1;
border-width:5px;
border-color:#4c1c84;
font-family:Comic Sans MS;
border-radius: 50px 50px'>
<p style = 'font-size:24px'>Exp 045</p>
<a href = "#Config"
style = "color:#eeebf1;
font-size:14px">1.Config</a><br>
<a href = "#Settings"
style = "color:#eeebf1;
font-size:14px">2.Settings</a><br>
<a href = "#Data-Load"
style = "color:#eeebf1;
font-size:14px">3.Data Load</a><br>
<a href = "#Pytorch-Settings"
style = "color:#eeebf1;
font-size:14px">4.Pytorch Settings</a><br>
<a href = "#Training"
style = "color:#eeebf1;
font-size:14px">5.Training</a><br>
</div>
<p style = 'font-size:24px;
color:#4c1c84'>
What was done
</p>
<li style = "color:#4c1c84;
font-size:14px">Electra-Base x Tito CV x Detoxify Unbiased Head</li>
<br>
<h1 style = "font-size:45px; font-family:Comic Sans MS ; font-weight : normal; background-color: #4c1c84 ; color : #eeebf1; text-align: center; border-radius: 100px 100px;">
Config
</h1>
<br>
```
import sys
sys.path.append("../src/utils/iterative-stratification/")
sys.path.append("../src/utils/detoxify")
sys.path.append("../src/utils/coral-pytorch/")
sys.path.append("../src/utils/pyspellchecker")
!pip install --no-index --find-links ../src/utils/faiss/ faiss-gpu==1.6.3
import warnings
warnings.simplefilter('ignore')
import os
import gc
gc.enable()
import sys
import glob
import copy
import math
import time
import random
import string
import psutil
import pathlib
from pathlib import Path
from contextlib import contextmanager
from collections import defaultdict
from box import Box
from typing import Optional
from pprint import pprint
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import japanize_matplotlib
from tqdm.auto import tqdm as tqdmp
from tqdm.autonotebook import tqdm as tqdm
tqdmp.pandas()
## Model
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import StratifiedKFold, KFold, GroupKFold
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from transformers import AutoTokenizer, AutoModel, AdamW, AutoModelForSequenceClassification
from transformers import RobertaModel, RobertaForSequenceClassification
from transformers import RobertaTokenizer
from transformers import LukeTokenizer, LukeModel, LukeConfig
from transformers import get_linear_schedule_with_warmup, get_cosine_schedule_with_warmup
from transformers import BertTokenizer, BertForSequenceClassification, BertForMaskedLM
from transformers import RobertaTokenizer, RobertaForSequenceClassification
from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification
from transformers import DebertaTokenizer, DebertaModel
from transformers import DistilBertTokenizer, DistilBertModel
from transformers import ElectraTokenizer, ElectraModel
# Pytorch Lightning
import pytorch_lightning as pl
from pytorch_lightning.utilities.seed import seed_everything
from pytorch_lightning import callbacks
from pytorch_lightning.callbacks.progress import ProgressBarBase
from pytorch_lightning import LightningDataModule, LightningDataModule
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint, EarlyStopping, LearningRateMonitor
from pytorch_lightning.loggers import WandbLogger
from pytorch_lightning.loggers.csv_logs import CSVLogger
from pytorch_lightning.callbacks import RichProgressBar
from sklearn.linear_model import Ridge
from sklearn.svm import SVC, SVR
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.stats import rankdata
from cuml.svm import SVR as cuml_SVR
from cuml.linear_model import Ridge as cuml_Ridge
import cudf
from detoxify import Detoxify
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold
from ast import literal_eval
from nltk.tokenize import TweetTokenizer
import spacy
from scipy.stats import sem
from copy import deepcopy
from spellchecker import SpellChecker
from typing import Text, Set, List
import faiss
import cudf, cuml, cupy
from cuml.feature_extraction.text import TfidfVectorizer as cuTfidfVectorizer
from cuml.neighbors import NearestNeighbors as cuNearestNeighbors
from cuml.decomposition.tsvd import TruncatedSVD as cuTruncatedSVD
import torch
config = {
"exp_comment":"Pseudo Labeling",
"seed": 42,
"root": "/content/drive/MyDrive/kaggle/Jigsaw/raw",
"n_fold": 5,
"epoch": 5,
"max_length": 256,
"environment": "AWS",
"project": "Jigsaw",
"entity": "dataskywalker",
"exp_name": "045_exp",
"margin": 0.2,
"train_fold": [0, 1, 2, 3, 4],
"trainer": {
"gpus": 1,
"accumulate_grad_batches": 8,
"progress_bar_refresh_rate": 1,
"fast_dev_run": True,
"num_sanity_val_steps": 0,
},
"train_loader": {
"batch_size": 8,
"shuffle": True,
"num_workers": 1,
"pin_memory": True,
"drop_last": True,
},
"valid_loader": {
"batch_size": 2,
"shuffle": False,
"num_workers": 1,
"pin_memory": True,
"drop_last": False,
},
"test_loader": {
"batch_size": 2,
"shuffle": False,
"num_workers": 1,
"pin_memory": True,
"drop_last": False,
},
"backbone": {
"name": "google/electra-small-discriminator",
"output_dim": 1,
},
"optimizer": {
"name": "torch.optim.AdamW",
"params": {
"lr": 1e-6,
},
},
"scheduler": {
"name": "torch.optim.lr_scheduler.CosineAnnealingWarmRestarts",
"params": {
"T_0": 20,
"eta_min": 0,
},
},
"loss": "nn.MarginRankingLoss",
}
config = Box(config)
config.tokenizer = ElectraTokenizer.from_pretrained(config.backbone.name)
config.model = ElectraModel.from_pretrained(config.backbone.name)
# pprint(config)
config.tokenizer.save_pretrained(f"../data/processed/{config.backbone.name}")
pretrain_model = ElectraModel.from_pretrained(config.backbone.name)
pretrain_model.save_pretrained(f"../data/processed/{config.backbone.name}")
# I personally move back and forth between AWS, the Kaggle environment, and Google Colab, so the paths are consolidated here
import os
import sys
from pathlib import Path
if config.environment == 'AWS':
INPUT_DIR = Path('/mnt/work/data/kaggle/Jigsaw/')
MODEL_DIR = Path(f'../models/{config.exp_name}/')
OUTPUT_DIR = Path(f'../data/interim/{config.exp_name}/')
UTIL_DIR = Path('/mnt/work/shimizu/kaggle/PetFinder/src/utils')
os.makedirs(MODEL_DIR, exist_ok=True)
os.makedirs(OUTPUT_DIR, exist_ok=True)
print(f"Your environment is 'AWS'.\nINPUT_DIR is {INPUT_DIR}\nMODEL_DIR is {MODEL_DIR}\nOUTPUT_DIR is {OUTPUT_DIR}\nUTIL_DIR is {UTIL_DIR}")
elif config.environment == 'Kaggle':
INPUT_DIR = Path('../input/*****')
MODEL_DIR = Path('./')
OUTPUT_DIR = Path('./')
print(f"Your environment is 'Kaggle'.\nINPUT_DIR is {INPUT_DIR}\nMODEL_DIR is {MODEL_DIR}\nOUTPUT_DIR is {OUTPUT_DIR}")
elif config.environment == 'Colab':
INPUT_DIR = Path('/content/drive/MyDrive/kaggle/Jigsaw/raw')
BASE_DIR = Path("/content/drive/MyDrive/kaggle/Jigsaw/interim")
MODEL_DIR = BASE_DIR / f'{config.exp_name}'
OUTPUT_DIR = BASE_DIR / f'{config.exp_name}/'
os.makedirs(MODEL_DIR, exist_ok=True)
os.makedirs(OUTPUT_DIR, exist_ok=True)
if not os.path.exists(INPUT_DIR):
print('Please Mount your Google Drive.')
else:
print(f"Your environment is 'Colab'.\nINPUT_DIR is {INPUT_DIR}\nMODEL_DIR is {MODEL_DIR}\nOUTPUT_DIR is {OUTPUT_DIR}")
else:
print("Please choose 'AWS' or 'Kaggle' or 'Colab'.\nINPUT_DIR is not found.")
# Fix the random seed
seed_everything(config.seed)
## Measure processing time
@contextmanager
def timer(name:str, slack:bool=False):
t0 = time.time()
p = psutil.Process(os.getpid())
m0 = p.memory_info()[0] / 2. ** 30
print(f'<< {name} >> Start')
yield
m1 = p.memory_info()[0] / 2. ** 30
delta = m1 - m0
sign = '+' if delta >= 0 else '-'
delta = math.fabs(delta)
print(f"<< {name} >> {m1:.1f}GB({sign}{delta:.1f}GB):{time.time() - t0:.1f}sec", file=sys.stderr)
```
<br>
<h1 style = "font-size:45px; font-family:Comic Sans MS ; font-weight : normal; background-color: #4c1c84 ; color : #eeebf1; text-align: center; border-radius: 100px 100px;">
Data Load
</h1>
<br>
```
## Data Check
for dirnames, _, filenames in os.walk(INPUT_DIR):
for filename in filenames:
print(f'{dirnames}/{filename}')
val_df = pd.read_csv("/mnt/work/data/kaggle/Jigsaw/validation_data.csv")
test_df = pd.read_csv("/mnt/work/data/kaggle/Jigsaw/comments_to_score.csv")
display(val_df.head())
display(test_df.head())
with timer("Count less text & more text"):
less_df = val_df.groupby(["less_toxic"])["worker"].agg("count").reset_index()
less_df.columns = ["text", "less_count"]
more_df = val_df.groupby(["more_toxic"])["worker"].agg("count").reset_index()
more_df.columns = ["text", "more_count"]
text_df = pd.merge(
less_df,
more_df,
on="text",
how="outer"
)
text_df["less_count"] = text_df["less_count"].fillna(0)
text_df["more_count"] = text_df["more_count"].fillna(0)
text_df["target"] = text_df["more_count"]/(text_df["less_count"] + text_df["more_count"])
display(text_df)
```
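The `target` column built above is the fraction of annotator votes in which a text was judged the *more* toxic side of its pairs. The same counting can be illustrated on toy annotation pairs with a pandas-free sketch (toy texts `"a"`, `"b"`, `"c"` are made up for illustration):

```python
from collections import Counter

# Toy annotation pairs: (less_toxic_text, more_toxic_text), one per vote.
pairs = [("a", "b"), ("a", "b"), ("b", "c"), ("c", "a")]

less_count = Counter(less for less, _ in pairs)
more_count = Counter(more for _, more in pairs)

# target = more_count / (less_count + more_count), as in the cell above
texts = set(less_count) | set(more_count)
target = {t: more_count[t] / (less_count[t] + more_count[t]) for t in texts}

print(target["b"])  # "b" was judged more toxic in 2 of its 3 pairs -> 2/3
```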
<br>
<h1 style = "font-size:45px; font-family:Comic Sans MS ; font-weight : normal; background-color: #4c1c84 ; color : #eeebf1; text-align: center; border-radius: 100px 100px;">
Make Fold
</h1>
<br>
```
texts = set(sorted(val_df["less_toxic"].to_list()) + sorted(val_df["more_toxic"].to_list()))
text2id = {t:id for id,t in enumerate(texts)}
val_df['less_id'] = val_df['less_toxic'].map(text2id)
val_df['more_id'] = val_df['more_toxic'].map(text2id)
val_df.head()
len_ids = len(text2id)
idarr = np.zeros((len_ids, len_ids), dtype=bool)
for lid, mid in val_df[['less_id', 'more_id']].values:
min_id = min(lid, mid)
max_id = max(lid, mid)
idarr[max_id, min_id] = True
def add_ids(i, this_list):
for j in range(len_ids):
if idarr[i, j]:
idarr[i, j] = False
this_list.append(j)
this_list = add_ids(j,this_list)
#print(j,i)
for j in range(i+1,len_ids):
if idarr[j, i]:
idarr[j, i] = False
this_list.append(j)
this_list = add_ids(j,this_list)
#print(j,i)
return this_list
group_list = []
for i in tqdm(range(len_ids)):
for j in range(i+1,len_ids):
if idarr[j, i]:
this_list = add_ids(i,[i])
# print(this_list)
group_list.append(this_list)
id2groupid = {}
for gid,ids in enumerate(group_list):
for id in ids:
id2groupid[id] = gid
val_df['less_gid'] = val_df['less_id'].map(id2groupid)
val_df['more_gid'] = val_df['more_id'].map(id2groupid)
display(val_df.head())
display(val_df.tail())
val_df[val_df["more_gid"]==109]
print('unique text counts:', len_ids)
print('grouped text counts:', len(group_list))
# now we can use GroupKFold with group id
group_kfold = GroupKFold(n_splits=config.n_fold)
# Since df.less_gid and df.more_gid are the same, let's use df.less_gid here.
for fold, (trn, val) in enumerate(group_kfold.split(val_df, val_df, val_df.less_gid)):
val_df.loc[val , "fold"] = fold
val_df["fold"] = val_df["fold"].astype(int)
val_df
val_df[val_df["less_gid"]==1751]
```
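The recursive `add_ids` above collects connected components of the less/more pair graph, so that texts linked by any comparison chain share a group id and `GroupKFold` never splits them across folds. The same grouping can be sketched more compactly with a union-find over toy ids (toy pairs below are illustrative, not the notebook's data):

```python
def make_groups(pairs, n):
    """Assign a group id to each of n text ids; every pair links two ids."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra  # union the two components

    roots = {}
    return [roots.setdefault(find(i), len(roots)) for i in range(n)]

# ids 0-1-2 are chained through shared texts; 3-4 form their own group
print(make_groups([(0, 1), (1, 2), (3, 4)], 5))  # [0, 0, 0, 1, 1]
```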
<br>
<h1 style = "font-size:45px; font-family:Comic Sans MS ; font-weight : normal; background-color: #4c1c84 ; color : #eeebf1; text-align: center; border-radius: 100px 100px;">
Detoxify
</h1>
<br>
```
DETOXI_CKPT = "/mnt/work/shimizu/kaggle/Jigsaw/data/external/detoxify_ckpt/toxic_original-c1212f89.ckpt"
loaded = torch.load(DETOXI_CKPT)
loaded["config"]["arch"]["args"]
huggingface_config_path = '/mnt/work/shimizu/kaggle/Jigsaw/data/processed/bert-base-uncased/'
detox_model = Detoxify(
'original',
checkpoint=DETOXI_CKPT,
huggingface_config_path=huggingface_config_path,
device="cuda"
)
detox_model.predict(test_df["text"].tolist()[0])
val_df["detoxify_more_dict"] = val_df['more_toxic'].progress_map(lambda line: detox_model.predict(line))
val_df["detoxify_less_dict"] = val_df['less_toxic'].progress_map(lambda line: detox_model.predict(line))
detoxify_more = val_df["detoxify_more_dict"].apply(pd.Series)
detoxify_less = val_df["detoxify_less_dict"].apply(pd.Series)
val_df = val_df.drop("detoxify_more_dict", axis=1)
val_df = val_df.drop("detoxify_less_dict", axis=1)
val_df = pd.concat([
val_df,
detoxify_more.add_prefix("more__")
], axis=1)
val_df = pd.concat([
val_df,
detoxify_less.add_prefix("less__")
], axis=1)
val_df.head()
val_df.to_csv(OUTPUT_DIR/"new_val_df.csv", index=False)
val_df = pd.read_csv(OUTPUT_DIR/"new_val_df.csv")
more_cols = [col for col in val_df.columns.tolist() if "more__" in col]
less_cols = [col for col in val_df.columns.tolist() if "less__" in col]
print(len(more_cols), len(less_cols))
```
<br>
<h1 style = "font-size:45px; font-family:Comic Sans MS ; font-weight : normal; background-color: #4c1c84 ; color : #eeebf1; text-align: center; border-radius: 100px 100px;">
Pytorch Dataset
</h1>
<br>
```
class JigsawDataset:
def __init__(self, df, tokenizer, max_length, mode):
self.df = df
self.max_len = max_length
self.tokenizer = tokenizer
self.mode = mode
if self.mode == "train":
self.more_toxic = df["more_toxic"].values
self.less_toxic = df["less_toxic"].values
elif self.mode == "valid":
self.more_toxic = df["more_toxic"].values
self.less_toxic = df["less_toxic"].values
else:
self.text = df["text"].values
def __len__(self):
return len(self.df)
def __getitem__(self, index):
if self.mode == "train":
more_toxic = self.more_toxic[index]
less_toxic = self.less_toxic[index]
inputs_more_toxic = self.tokenizer.encode_plus(
more_toxic,
truncation=True,
return_attention_mask=True,
return_token_type_ids=True,
max_length = self.max_len,
padding="max_length",
)
inputs_less_toxic = self.tokenizer.encode_plus(
less_toxic,
truncation=True,
return_attention_mask=True,
return_token_type_ids=True,
max_length = self.max_len,
padding="max_length",
)
target = 1
more_toxic_ids = inputs_more_toxic["input_ids"]
more_toxic_mask = inputs_more_toxic["attention_mask"]
more_token_type_ids = inputs_more_toxic["token_type_ids"]
less_toxic_ids = inputs_less_toxic["input_ids"]
less_toxic_mask = inputs_less_toxic["attention_mask"]
less_token_type_ids = inputs_less_toxic["token_type_ids"]
return {
'more_toxic_ids': torch.tensor(more_toxic_ids, dtype=torch.long),
'more_toxic_mask': torch.tensor(more_toxic_mask, dtype=torch.long),
'more_token_type_ids': torch.tensor(more_token_type_ids, dtype=torch.long),
'less_toxic_ids': torch.tensor(less_toxic_ids, dtype=torch.long),
'less_toxic_mask': torch.tensor(less_toxic_mask, dtype=torch.long),
'less_token_type_ids': torch.tensor(less_token_type_ids, dtype=torch.long),
'target': torch.tensor(target, dtype=torch.float)
}
elif self.mode == "valid":
more_toxic = self.more_toxic[index]
less_toxic = self.less_toxic[index]
inputs_more_toxic = self.tokenizer.encode_plus(
more_toxic,
truncation=True,
return_attention_mask=True,
return_token_type_ids=True,
max_length = self.max_len,
padding="max_length",
)
inputs_less_toxic = self.tokenizer.encode_plus(
less_toxic,
truncation=True,
return_attention_mask=True,
return_token_type_ids=True,
max_length = self.max_len,
padding="max_length",
)
target = 1
more_toxic_ids = inputs_more_toxic["input_ids"]
more_toxic_mask = inputs_more_toxic["attention_mask"]
more_token_type_ids = inputs_more_toxic["token_type_ids"]
less_toxic_ids = inputs_less_toxic["input_ids"]
less_toxic_mask = inputs_less_toxic["attention_mask"]
less_token_type_ids = inputs_less_toxic["token_type_ids"]
return {
'more_toxic_ids': torch.tensor(more_toxic_ids, dtype=torch.long),
'more_toxic_mask': torch.tensor(more_toxic_mask, dtype=torch.long),
'more_token_type_ids': torch.tensor(more_token_type_ids, dtype=torch.long),
'less_toxic_ids': torch.tensor(less_toxic_ids, dtype=torch.long),
'less_toxic_mask': torch.tensor(less_toxic_mask, dtype=torch.long),
'less_token_type_ids': torch.tensor(less_token_type_ids, dtype=torch.long),
'target': torch.tensor(target, dtype=torch.float)
}
else:
text = self.text[index]
inputs_text = self.tokenizer.encode_plus(
text,
truncation=True,
return_attention_mask=True,
return_token_type_ids=True,
max_length = self.max_len,
padding="max_length",
)
text_ids = inputs_text["input_ids"]
text_mask = inputs_text["attention_mask"]
text_token_type_ids = inputs_text["token_type_ids"]
return {
'text_ids': torch.tensor(text_ids, dtype=torch.long),
'text_mask': torch.tensor(text_mask, dtype=torch.long),
'text_token_type_ids': torch.tensor(text_token_type_ids, dtype=torch.long),
}
```
<br>
<h2 style = "font-size:45px;
font-family:Comic Sans MS ;
font-weight : normal;
background-color: #eeebf1 ;
color : #4c1c84;
text-align: center;
border-radius: 100px 100px;">
DataModule
</h2>
<br>
```
class JigsawDataModule(LightningDataModule):
def __init__(self, train_df, valid_df, test_df, cfg):
super().__init__()
self._train_df = train_df
self._valid_df = valid_df
self._test_df = test_df
self._cfg = cfg
def train_dataloader(self):
dataset = JigsawDataset(
df=self._train_df,
tokenizer=self._cfg.tokenizer,
max_length=self._cfg.max_length,
mode="train",
)
return DataLoader(dataset, **self._cfg.train_loader)
def val_dataloader(self):
dataset = JigsawDataset(
df=self._valid_df,
tokenizer=self._cfg.tokenizer,
max_length=self._cfg.max_length,
mode="valid",
)
return DataLoader(dataset, **self._cfg.valid_loader)
def test_dataloader(self):
dataset = JigsawDataset(
df=self._test_df,
tokenizer = self._cfg.tokenizer,
max_length=self._cfg.max_length,
mode="test",
)
return DataLoader(dataset, **self._cfg.test_loader)
## DataCheck
seed_everything(config.seed)
sample_dataloader = JigsawDataModule(val_df, val_df, test_df, config).train_dataloader()
for data in sample_dataloader:
break
print(data["more_toxic_ids"].size())
print(data["more_toxic_mask"].size())
print(data["more_token_type_ids"].size())
print(data["target"].size())
print(data["target"])
output = config.model(
data["more_toxic_ids"],
data["more_toxic_mask"],
data["more_token_type_ids"],
output_hidden_states=True,
output_attentions=True,
)
print(output["hidden_states"][-1].size(), output["attentions"][-1].size())
print(output["hidden_states"][-1][:, 1, :].size(), output["attentions"][-1].size())
```
<br>
<h2 style = "font-size:45px;
font-family:Comic Sans MS ;
font-weight : normal;
background-color: #eeebf1 ;
color : #4c1c84;
text-align: center;
border-radius: 100px 100px;">
LightningModule
</h2>
<br>
```
def criterion(outputs1, outputs2, targets):
return nn.MarginRankingLoss(margin=config.margin)(outputs1, outputs2, targets)
class JigsawModel(pl.LightningModule):
def __init__(self, cfg, fold_num):
super().__init__()
self.cfg = cfg
self.__build_model()
self.criterion = criterion
self.save_hyperparameters(cfg)
self.fold_num = fold_num
def __build_model(self):
self.base_model = ElectraModel.from_pretrained(
self.cfg.backbone.name
)
# print(f"Use Model: {self.cfg.backbone.name}")
self.norm = nn.LayerNorm(256)
self.drop = nn.Dropout(p=0.3)
self.head = nn.Linear(256, self.cfg.backbone.output_dim)
def forward(self, ids, mask, token_type_ids):
output = self.base_model(
input_ids=ids,
attention_mask=mask,
token_type_ids=token_type_ids,
output_hidden_states=True,
output_attentions=True
)
feature = self.norm(output["hidden_states"][-1][:, 1, :])
out = self.drop(feature)
out = self.head(out)
return {
"logits":out,
"feature":feature,
"attention":output["attentions"],
"mask":mask,
}
def training_step(self, batch, batch_idx):
more_toxic_ids = batch['more_toxic_ids']
more_toxic_mask = batch['more_toxic_mask']
more_text_token_type_ids = batch['more_token_type_ids']
less_toxic_ids = batch['less_toxic_ids']
less_toxic_mask = batch['less_toxic_mask']
less_text_token_type_ids = batch['less_token_type_ids']
targets = batch['target']
more_outputs = self.forward(
more_toxic_ids,
more_toxic_mask,
more_text_token_type_ids
)
less_outputs = self.forward(
less_toxic_ids,
less_toxic_mask,
less_text_token_type_ids
)
loss = self.criterion(more_outputs["logits"], less_outputs["logits"], targets)
return {
"loss":loss,
"targets":targets,
}
def training_epoch_end(self, training_step_outputs):
loss_list = []
for out in training_step_outputs:
loss_list.extend([out["loss"].cpu().detach().tolist()])
meanloss = sum(loss_list)/len(loss_list)
logs = {f"train_loss/fold{self.fold_num+1}": meanloss,}
self.log_dict(
logs,
on_step=False,
on_epoch=True,
prog_bar=True,
logger=True
)
def validation_step(self, batch, batch_idx):
more_toxic_ids = batch['more_toxic_ids']
more_toxic_mask = batch['more_toxic_mask']
more_text_token_type_ids = batch['more_token_type_ids']
less_toxic_ids = batch['less_toxic_ids']
less_toxic_mask = batch['less_toxic_mask']
less_text_token_type_ids = batch['less_token_type_ids']
targets = batch['target']
more_outputs = self.forward(
more_toxic_ids,
more_toxic_mask,
more_text_token_type_ids
)
less_outputs = self.forward(
less_toxic_ids,
less_toxic_mask,
less_text_token_type_ids
)
outputs = more_outputs["logits"] - less_outputs["logits"]
logits = outputs.clone()[:, 0]
logits[logits > 0] = 1
loss = nn.BCEWithLogitsLoss()(logits, targets)
return {
"loss":loss,
"pred":outputs,
"targets":targets,
}
def validation_epoch_end(self, validation_step_outputs):
loss_list = []
pred_list = []
target_list = []
for out in validation_step_outputs:
loss_list.extend([out["loss"].cpu().detach().tolist()])
pred_list.append(out["pred"][:, 0].detach().cpu().numpy())
target_list.append(out["targets"].detach().cpu().numpy())
meanloss = sum(loss_list)/len(loss_list)
pred_list = np.concatenate(pred_list)
pred_count = sum(x>0 for x in pred_list)/len(pred_list)
logs = {
f"valid_loss/fold{self.fold_num+1}":meanloss,
f"valid_acc/fold{self.fold_num+1}":pred_count,
}
self.log_dict(
logs,
on_step=False,
on_epoch=True,
prog_bar=True,
logger=True
)
def configure_optimizers(self):
optimizer = eval(self.cfg.optimizer.name)(
self.parameters(), **self.cfg.optimizer.params
)
self.scheduler = eval(self.cfg.scheduler.name)(
optimizer, **self.cfg.scheduler.params
)
scheduler = {"scheduler": self.scheduler, "interval": "step",}
return [optimizer], [scheduler]
```
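The `nn.MarginRankingLoss` used as the training criterion penalises pairs where the model's score for the more-toxic text does not exceed the less-toxic score by at least the margin. Its element-wise formula, `max(0, -y * (x1 - x2) + margin)` averaged over the batch, is easy to sketch in plain Python (toy scores; margin 0.2 as in the config):

```python
def margin_ranking_loss(x1, x2, y, margin=0.2):
    # y = +1 means x1 (more toxic) should score higher than x2 (less toxic)
    terms = [max(0.0, -yi * (a - b) + margin) for a, b, yi in zip(x1, x2, y)]
    return sum(terms) / len(terms)

more = [1.5, 0.4, 0.9]  # scores for the more-toxic texts
less = [0.2, 0.6, 0.8]  # scores for the less-toxic texts
# pair 1 clears the margin (zero loss); pairs 2 and 3 do not
print(margin_ranking_loss(more, less, [1, 1, 1]))
```

Only the middle pair is ranked the wrong way round, but the last pair still contributes loss because its 0.1 gap is smaller than the 0.2 margin.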
<br>
<h2 style = "font-size:45px;
font-family:Comic Sans MS ;
font-weight : normal;
background-color: #eeebf1 ;
color : #4c1c84;
text-align: center;
border-radius: 100px 100px;">
Training
</h2>
<br>
```
# skf = StratifiedKFold(n_splits=config.n_fold, shuffle=True, random_state=config.seed)
# for fold, (_, val_idx) in enumerate(skf.split(X=val_df, y=val_df["worker"])):
# val_df.loc[val_idx, "kfold"] = int(fold)
# val_df["kfold"] = val_df["kfold"].astype(int)
val_df.head()
## Debug
config.trainer.fast_dev_run = True
config.backbone.output_dim = 1
for fold in config.train_fold:
print("★"*25, f" Fold{fold+1} ", "★"*25)
df_train = val_df[val_df.fold != fold].reset_index(drop=True)
df_valid = val_df[val_df.fold == fold].reset_index(drop=True)
datamodule = JigsawDataModule(df_train, df_valid, test_df, config)
sample_dataloader = JigsawDataModule(df_train, df_valid, test_df, config).train_dataloader()
config.scheduler.params.T_0 = config.epoch * len(sample_dataloader)
model = JigsawModel(config, fold)
lr_monitor = callbacks.LearningRateMonitor()
loss_checkpoint = callbacks.ModelCheckpoint(
filename=f"best_acc_fold{fold+1}",
monitor=f"valid_acc/fold{fold+1}",
save_top_k=1,
mode="max",
save_last=False,
dirpath=MODEL_DIR,
save_weights_only=True,
)
wandb_logger = WandbLogger(
project=config.project,
entity=config.entity,
name = f"{config.exp_name}",
tags = ['RoBERTa-Base', "MarginRankLoss"]
)
lr_monitor = LearningRateMonitor(logging_interval='step')
trainer = pl.Trainer(
max_epochs=config.epoch,
callbacks=[loss_checkpoint, lr_monitor, RichProgressBar()],
# deterministic=True,
logger=[wandb_logger],
**config.trainer
)
trainer.fit(model, datamodule=datamodule)
# Training
config.trainer.fast_dev_run = False
config.backbone.output_dim = 1
for fold in config.train_fold:
print("★"*25, f" Fold{fold+1} ", "★"*25)
df_train = val_df[val_df.fold != fold].reset_index(drop=True)
df_valid = val_df[val_df.fold == fold].reset_index(drop=True)
datamodule = JigsawDataModule(df_train, df_valid, test_df, config)
sample_dataloader = JigsawDataModule(df_train, df_valid, test_df, config).train_dataloader()
config.scheduler.params.T_0 = config.epoch * len(sample_dataloader)
model = JigsawModel(config, fold)
lr_monitor = callbacks.LearningRateMonitor()
loss_checkpoint = callbacks.ModelCheckpoint(
filename=f"best_acc_fold{fold+1}",
monitor=f"valid_acc/fold{fold+1}",
save_top_k=1,
mode="max",
save_last=False,
dirpath=MODEL_DIR,
save_weights_only=True,
)
wandb_logger = WandbLogger(
project=config.project,
entity=config.entity,
name = f"{config.exp_name}",
tags = ['RoBERTa-Base', "MarginRankLoss"]
)
lr_monitor = LearningRateMonitor(logging_interval='step')
trainer = pl.Trainer(
max_epochs=config.epoch,
callbacks=[loss_checkpoint, lr_monitor, RichProgressBar()],
# deterministic=True,
logger=[wandb_logger],
**config.trainer
)
trainer.fit(model, datamodule=datamodule)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f"Device == {device}")
MORE = np.zeros(len(val_df))
LESS = np.zeros(len(val_df))
PRED = np.zeros(len(test_df))
attention_array = np.zeros((len(val_df), 256)) # stores attention weights
mask_array = np.zeros((len(val_df), 256)) # stores mask info; multiplied with the attention later
for fold in config.train_fold:
pred_list = []
print("★"*25, f" Fold{fold+1} ", "★"*25)
val_idx = val_df[val_df.fold == fold].index.tolist()
df_train = val_df[val_df.fold != fold].reset_index(drop=True)
df_valid = val_df[val_df.fold == fold].reset_index(drop=True)
datamodule = JigsawDataModule(df_train, df_valid, test_df, config)
valid_dataloader = JigsawDataModule(df_train, df_valid, test_df, config).val_dataloader()
model = JigsawModel(config, fold)
loss_checkpoint = callbacks.ModelCheckpoint(
filename=f"best_acc_fold{fold+1}",
monitor=f"valid_acc/fold{fold+1}",
save_top_k=1,
mode="max",
save_last=False,
dirpath="../input/toxicroberta/",
)
model = model.load_from_checkpoint(MODEL_DIR/f"best_acc_fold{fold+1}.ckpt", cfg=config, fold_num=fold)
model.to(device)
model.eval()
more_list = []
less_list = []
for step, data in tqdm(enumerate(valid_dataloader), total=len(valid_dataloader)):
more_toxic_ids = data['more_toxic_ids'].to(device)
more_toxic_mask = data['more_toxic_mask'].to(device)
more_text_token_type_ids = data['more_token_type_ids'].to(device)
less_toxic_ids = data['less_toxic_ids'].to(device)
less_toxic_mask = data['less_toxic_mask'].to(device)
less_text_token_type_ids = data['less_token_type_ids'].to(device)
more_outputs = model(
more_toxic_ids,
more_toxic_mask,
more_text_token_type_ids,
)
less_outputs = model(
less_toxic_ids,
less_toxic_mask,
less_text_token_type_ids
)
more_list.append(more_outputs["logits"][:, 0].detach().cpu().numpy())
less_list.append(less_outputs["logits"][:, 0].detach().cpu().numpy())
MORE[val_idx] += np.concatenate(more_list)
LESS[val_idx] += np.concatenate(less_list)
# PRED += pred_list/len(config.train_fold)
plt.figure(figsize=(12, 5))
plt.scatter(LESS, MORE)
plt.xlabel("less-toxic")
plt.ylabel("more-toxic")
plt.grid()
plt.show()
val_df["less_attack"] = LESS
val_df["more_attack"] = MORE
val_df["diff_attack"] = val_df["more_attack"] - val_df["less_attack"]
attack_score = val_df[val_df["diff_attack"]>0]["diff_attack"].count()/len(val_df)
print(f"HATE-BERT Jigsaw-Classification Score: {attack_score:.6f}")
val_df[val_df["fold"]==0]
```
<br>
<h2 style = "font-size:45px;
font-family:Comic Sans MS ;
font-weight : normal;
background-color: #eeebf1 ;
color : #4c1c84;
text-align: center;
border-radius: 100px 100px;">
Attention Visualize
</h2>
<br>
```
text_df = pd.DataFrame()
text_df["text"] = list(set(val_df["less_toxic"].unique().tolist() + val_df["more_toxic"].unique().tolist()))
display(text_df.head())
display(text_df.shape)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f"Device == {device}")
attention_array = np.zeros((len(text_df), config.max_length)) # stores attention weights
mask_array = np.zeros((len(text_df), config.max_length)) # stores mask info; multiplied with the attention later
feature_array = np.zeros((len(text_df), 768))
PRED = np.zeros(len(text_df))
for fold in config.train_fold:
pred_list = []
print("★"*25, f" Fold{fold+1} ", "★"*25)
test_dataloader = JigsawDataModule(val_df, val_df, text_df, config).test_dataloader()
model = JigsawModel(config, fold)
loss_checkpoint = callbacks.ModelCheckpoint(
filename=f"best_acc_fold{fold+1}",
monitor=f"valid_acc/fold{fold+1}",
save_top_k=1,
mode="max",
save_last=False,
dirpath="../input/toxicroberta/",
)
model = model.load_from_checkpoint(MODEL_DIR/f"best_acc_fold{fold+1}.ckpt", cfg=config, fold_num=fold)
model.to(device)
model.eval()
attention_list = []
feature_list = []
mask_list = []
pred_list = []
for step, data in tqdm(enumerate(test_dataloader), total=len(test_dataloader)):
text_ids = data["text_ids"].to(device)
text_mask = data["text_mask"].to(device)
text_token_type_ids = data["text_token_type_ids"].to(device)
mask_list.append(text_mask.detach().cpu().numpy())
outputs = model(
text_ids,
text_mask,
text_token_type_ids,
)
## Attention paid to the CLS token in the last layer
last_attention = outputs["attention"][-1].detach().cpu().numpy()
total_attention = np.zeros((last_attention.shape[0], config.max_length))
for batch in range(last_attention.shape[0]):
for n_head in range(12):
total_attention[batch, :] += last_attention[batch, n_head, 0, :]
attention_list.append(total_attention)
pred_list.append(outputs["logits"][:, 0].detach().cpu().numpy())
feature_list.append(outputs["feature"].detach().cpu().numpy())
attention_array += np.concatenate(attention_list)/config.n_fold
mask_array += np.concatenate(mask_list)/config.n_fold
feature_array += np.concatenate(feature_list)/config.n_fold
PRED += np.concatenate(pred_list)/len(config.train_fold)
text_df["target"] = PRED
text_df.to_pickle(OUTPUT_DIR/f"{config.exp_name}__text_df.pkl")
np.save(OUTPUT_DIR/'toxic-attention.npy', attention_array)
np.save(OUTPUT_DIR/'toxic-mask.npy', mask_array)
np.save(OUTPUT_DIR/'toxic-feature.npy', feature_array)
plt.figure(figsize=(12, 5))
sns.histplot(text_df["target"], color="#4c1c84")
plt.grid()
plt.show()
```
<br>
<h2 style = "font-size:45px;
font-family:Comic Sans MS ;
font-weight : normal;
background-color: #eeebf1 ;
color : #4c1c84;
text-align: center;
border-radius: 100px 100px;">
Attention Load
</h2>
<br>
```
text_df = pd.read_pickle(OUTPUT_DIR/"text_df.pkl")
attention_array = np.load(OUTPUT_DIR/'toxic-attention.npy')
mask_array = np.load(OUTPUT_DIR/'toxic-mask.npy')
from IPython.display import display, HTML
def highlight_r(word, attn):
html_color = '#%02X%02X%02X' % (255, int(255*(1 - attn)), int(255*(1 - attn)))
return '<span style="background-color: {}">{}</span>'.format(html_color, word)
num = 12
ids = config.tokenizer(text_df.loc[num, "text"])["input_ids"]
tokens = config.tokenizer.convert_ids_to_tokens(ids)
attention = attention_array[num, :][np.nonzero(mask_array[num, :])]
html_outputs = []
for word, attn in zip(tokens, attention):
html_outputs.append(highlight_r(word, attn))
print(f"Offensive Score is {PRED[num]}")
display(HTML(' '.join(html_outputs)))
display(text_df.loc[num, "text"])
text_df.sort_values("target", ascending=False).head(20)
high_score_list = text_df.sort_values("target", ascending=False).head(20).index.tolist()
for num in high_score_list:
ids = config.tokenizer(text_df.loc[num, "text"])["input_ids"]
tokens = config.tokenizer.convert_ids_to_tokens(ids)
attention = attention_array[num, :][np.nonzero(mask_array[num, :])]
html_outputs = []
for word, attn in zip(tokens, attention):
html_outputs.append(highlight_r(word, attn))
print(f"Offensive Score is {PRED[num]}")
display(HTML(' '.join(html_outputs)))
display(text_df.loc[num, "text"])
```
| github_jupyter |
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = ''
# !git pull
import malaya_speech
import malaya_speech.train.model.vggvox_v2 as vggvox_v2
import tensorflow as tf
class Model:
def __init__(self):
self.X = tf.placeholder(tf.float32, [None, 257, None, 1])
self.logits = vggvox_v2.Model(self.X, num_class=2, mode='eval').logits
self.logits = tf.identity(self.logits, name = 'logits')
ckpt_path = 'output-vggvox-v2-speaker-overlap/model.ckpt-300000'
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model()
sess.run(tf.global_variables_initializer())
model.logits
var_lists = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
saver = tf.train.Saver(var_list = var_lists)
saver.restore(sess, ckpt_path)
saver = tf.train.Saver()
saver.save(sess, 'vggvox-v2-speaker-overlap/model.ckpt')
strings = ','.join(
[
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('Variable' in n.op
or 'Placeholder' in n.name
or 'logits' in n.name
or 'alphas' in n.name
or 'self/Softmax' in n.name)
and 'adam' not in n.name
and 'beta' not in n.name
and 'global_step' not in n.name
and 'Assign' not in n.name
]
)
def freeze_graph(model_dir, output_node_names):
if not tf.gfile.Exists(model_dir):
raise AssertionError(
"Export directory doesn't exist. Please specify an export "
'directory: %s' % model_dir
)
checkpoint = tf.train.get_checkpoint_state(model_dir)
input_checkpoint = checkpoint.model_checkpoint_path
absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
output_graph = absolute_model_dir + '/frozen_model.pb'
clear_devices = True
with tf.Session(graph = tf.Graph()) as sess:
saver = tf.train.import_meta_graph(
input_checkpoint + '.meta', clear_devices = clear_devices
)
saver.restore(sess, input_checkpoint)
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
tf.get_default_graph().as_graph_def(),
output_node_names.split(','),
)
with tf.gfile.GFile(output_graph, 'wb') as f:
f.write(output_graph_def.SerializeToString())
print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('vggvox-v2-speaker-overlap', strings)
# def load_graph(frozen_graph_filename):
# with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
# graph_def = tf.GraphDef()
# graph_def.ParseFromString(f.read())
# with tf.Graph().as_default() as graph:
# tf.import_graph_def(graph_def)
# return graph
def load_graph(frozen_graph_filename, **kwargs):
with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
# https://github.com/onnx/tensorflow-onnx/issues/77#issuecomment-445066091
# to fix import T5
for node in graph_def.node:
if node.op == 'RefSwitch':
node.op = 'Switch'
for index in range(len(node.input)):
if 'moving_' in node.input[index]:
node.input[index] = node.input[index] + '/read'
elif node.op == 'AssignSub':
node.op = 'Sub'
if 'use_locking' in node.attr:
del node.attr['use_locking']
elif node.op == 'AssignAdd':
node.op = 'Add'
if 'use_locking' in node.attr:
del node.attr['use_locking']
elif node.op == 'Assign':
node.op = 'Identity'
if 'use_locking' in node.attr:
del node.attr['use_locking']
if 'validate_shape' in node.attr:
del node.attr['validate_shape']
if len(node.input) == 2:
node.input[0] = node.input[1]
del node.input[1]
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
return graph
g = load_graph('vggvox-v2-speaker-overlap/frozen_model.pb')
x = g.get_tensor_by_name('import/Placeholder:0')
logits = g.get_tensor_by_name('import/logits:0')
from tensorflow.tools.graph_transforms import TransformGraph
pb = 'vggvox-v2-speaker-overlap/frozen_model.pb'
transforms = ['add_default_attributes',
'remove_nodes(op=Identity, op=CheckNumerics, op=Dropout)',
'fold_constants(ignore_errors=true)',
'fold_batch_norms',
'fold_old_batch_norms',
'quantize_weights',
'strip_unused_nodes',
'sort_by_execution_order']
input_graph_def = tf.GraphDef()
with tf.gfile.FastGFile(pb, 'rb') as f:
input_graph_def.ParseFromString(f.read())
transformed_graph_def = TransformGraph(input_graph_def,
['Placeholder', 'Placeholder_1'],
['logits'], transforms)
with tf.gfile.GFile(f'{pb}.quantized', 'wb') as f:
f.write(transformed_graph_def.SerializeToString())
g = load_graph('vggvox-v2-speaker-overlap/frozen_model.pb.quantized')
x = g.get_tensor_by_name('import/Placeholder:0')
logits = g.get_tensor_by_name('import/logits:0')
!tar -czvf vggvox-v2-speaker-overlap-300k.tar.gz vggvox-v2-speaker-overlap
```
# Word Embeddings using CBOW [without python DL libraries]
This project shows how to compute word embeddings and use them for sentiment analysis.
- To implement sentiment analysis, you can go beyond counting the number of positive words and negative words.
- You can find a way to represent each word numerically, by a vector.
- The vector could then represent syntactic (i.e. parts of speech) and semantic (i.e. meaning) structures.
In this assignment, you will explore a classic way of generating word embeddings or representations.
- You will implement a famous model called the continuous bag of words (CBOW) model.
By completing this assignment you will:
- Train word vectors from scratch.
- Learn how to create batches of data.
- Understand how backpropagation works.
- Plot and visualize your learned word vectors.
Knowing how to train these models will give you a better understanding of word vectors, which are building blocks to many applications in natural language processing.
## Outline
- [1 The Continuous bag of words model](#1)
- [2 Training the Model](#2)
- [2.0 Initialize the model](#2)
- [Exercise 01](#ex-01)
- [2.1 Softmax Function](#2.1)
- [Exercise 02](#ex-02)
- [2.2 Forward Propagation](#2.2)
- [Exercise 03](#ex-03)
- [2.3 Cost Function](#2.3)
- [2.4 Backpropagation](#2.4)
- [Exercise 04](#ex-04)
- [2.5 Gradient Descent](#2.5)
- [Exercise 05](#ex-05)
- [3 Visualizing the word vectors](#3)
<a name='1'></a>
# 1. The Continuous bag of words model
Let's take a look at the following sentence:
>**'I am happy because I am learning'**.
- In continuous bag of words (CBOW) modeling, we try to predict the center word given a few context words (the words around the center word).
- For example, if you were to choose a context half-size of say $C = 2$, then you would try to predict the word **happy** given the context that includes 2 words before and 2 words after the center word:
> $C$ words before: [I, am]
> $C$ words after: [because, I]
- In other words:
$$context = [I,am, because, I]$$
$$target = happy$$
The structure of your model will look like this:
<div style="width:image width px; font-size:100%; text-align:center;"><img src='word2.png' alt="alternate text" width="width" height="height" style="width:600px;height:250px;" /> Figure 1 </div>
Where $\bar x$ is the average of all the one hot vectors of the context words.
<div style="width:image width px; font-size:100%; text-align:center;"><img src='mean_vec2.png' alt="alternate text" width="width" height="height" style="width:600px;height:250px;" /> Figure 2 </div>
Once you have encoded all the context words, you can use $\bar x$ as the input to your model.
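As a quick illustration (this sketch is not part of the graded code, and `get_windows` is a hypothetical helper name), a sliding window of half-size $C$ over the example sentence produces the (context, target) pairs described above:

```python
# Sketch: generate (context, center) pairs for a CBOW window of half-size C.
def get_windows(tokens, C=2):
    pairs = []
    for i in range(C, len(tokens) - C):
        center = tokens[i]
        # C tokens before the center plus C tokens after it
        context = tokens[i - C:i] + tokens[i + 1:i + 1 + C]
        pairs.append((context, center))
    return pairs

tokens = ['i', 'am', 'happy', 'because', 'i', 'am', 'learning']
for context, center in get_windows(tokens, C=2):
    print(center, '<-', context)
```

For the sentence above, the first pair is the target `happy` with context `['i', 'am', 'because', 'i']`, matching the worked example.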
The architecture you will be implementing is as follows:
\begin{align}
h &= W_1 \ X + b_1 \tag{1} \\
a &= ReLU(h) \tag{2} \\
z &= W_2 \ a + b_2 \tag{3} \\
\hat y &= softmax(z) \tag{4} \\
\end{align}
```
# Import Python libraries and helper functions (in utils2)
import nltk
from nltk.tokenize import word_tokenize
import numpy as np
from collections import Counter
from utils2 import sigmoid, get_batches, compute_pca, get_dict
# Download sentence tokenizer
nltk.data.path.append('.')
# Load, tokenize and process the data
import re # Load the Regex-modul
with open('shakespeare.txt') as f:
data = f.read() # Read in the data
data = re.sub(r'[,!?;-]', '.', data) # Punctuation marks are replaced by '.'
data = nltk.word_tokenize(data) # Tokenize string to words
data = [ ch.lower() for ch in data if ch.isalpha() or ch == '.'] # Lower case and drop non-alphabetical tokens
print("Number of tokens:", len(data),'\n', data[:15]) # print data sample
# Compute the frequency distribution of the words in the dataset (vocabulary)
fdist = nltk.FreqDist(word for word in data)
print("Size of vocabulary: ",len(fdist) )
print("Most frequent tokens: ",fdist.most_common(20) ) # print the 20 most frequent words and their freq.
```
#### Mapping words to indices and indices to words
We provide a helper function to create a dictionary that maps words to indices and indices to words.
```
# get_dict creates two dictionaries, converting words to indices and vice versa.
word2Ind, Ind2word = get_dict(data)
V = len(word2Ind)
print("Size of vocabulary: ", V)
# example of word to index mapping
print("Index of the word 'king' : ",word2Ind['king'] )
print("Word which has index 2743: ",Ind2word[2743] )
```
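`get_dict` comes from the provided `utils2` helper, so its exact implementation is not shown here. Assuming it simply indexes the sorted vocabulary (an assumption, not the graded code), a minimal equivalent might look like:

```python
# Sketch (not the graded helper): build word<->index maps from a token list.
def build_dicts(tokens):
    vocab = sorted(set(tokens))  # deterministic ordering of the vocabulary
    word2ind = {w: i for i, w in enumerate(vocab)}
    ind2word = {i: w for w, i in word2ind.items()}
    return word2ind, ind2word

w2i, i2w = build_dicts(['i', 'am', 'happy', 'because', 'i', 'am', 'learning'])
print(w2i)
```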
<a name='2'></a>
# 2 Training the Model
### Initializing the model
You will now initialize two matrices and two vectors.
- The first matrix ($W_1$) is of dimension $N \times V$, where $V$ is the number of words in your vocabulary and $N$ is the dimension of your word vector.
- The second matrix ($W_2$) is of dimension $V \times N$.
- Vector $b_1$ has dimensions $N\times 1$
- Vector $b_2$ has dimensions $V\times 1$.
- $b_1$ and $b_2$ are the bias vectors of the linear layers from matrices $W_1$ and $W_2$.
The overall structure of the model will look as in Figure 1, but at this stage we are just initializing the parameters.
<a name='ex-01'></a>
### Exercise 01
Please use [numpy.random.rand](https://numpy.org/doc/stable/reference/random/generated/numpy.random.rand.html) to generate matrices that are initialized with random values from a uniform distribution, ranging between 0 and 1.
**Note:** In the next cell you will encounter a random seed. Please **DO NOT** modify this seed so your solution can be tested correctly.
```
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: initialize_model
def initialize_model(N,V, random_seed=1):
'''
Inputs:
N: dimension of hidden vector
V: dimension of vocabulary
random_seed: random seed for consistent results in the unit tests
Outputs:
W1, W2, b1, b2: initialized weights and biases
'''
np.random.seed(random_seed)
### START CODE HERE (Replace instances of 'None' with your code) ###
# W1 has shape (N,V)
W1 = np.random.rand(N, V)
# W2 has shape (V,N)
W2 = np.random.rand(V, N)
# b1 has shape (N,1)
b1 = np.random.rand(N, 1)
# b2 has shape (V,1)
b2 = np.random.rand(V, 1)
### END CODE HERE ###
return W1, W2, b1, b2
# Test your function example.
tmp_N = 4
tmp_V = 10
tmp_W1, tmp_W2, tmp_b1, tmp_b2 = initialize_model(tmp_N,tmp_V)
assert tmp_W1.shape == ((tmp_N,tmp_V))
assert tmp_W2.shape == ((tmp_V,tmp_N))
print(f"tmp_W1.shape: {tmp_W1.shape}")
print(f"tmp_W2.shape: {tmp_W2.shape}")
print(f"tmp_b1.shape: {tmp_b1.shape}")
print(f"tmp_b2.shape: {tmp_b2.shape}")
```
##### Expected Output
```CPP
tmp_W1.shape: (4, 10)
tmp_W2.shape: (10, 4)
tmp_b1.shape: (4, 1)
tmp_b2.shape: (10, 1)
```
<a name='2.1'></a>
### 2.1 Softmax
Before we can start training the model, we need to implement the softmax function as defined in equation 5:
<br>
$$ \text{softmax}(z_i) = \frac{e^{z_i} }{\sum_{j=0}^{V-1} e^{z_j} } \tag{5} $$
- Array indexing in code starts at 0.
- $V$ is the number of words in the vocabulary (which is also the number of rows of $z$).
- $i$ goes from 0 to |V| - 1.
<a name='ex-02'></a>
### Exercise 02
**Instructions**: Implement the softmax function below.
- Assume that the input $z$ to `softmax` is a 2D array
- Each training example is represented by a column of shape (V, 1) in this 2D array.
- There may be more than one column in the 2D array, because you can pass in a batch of examples to increase efficiency. Let's call the batch size lowercase $m$, so the $z$ array has shape (V, m)
- When taking the sum from $i=0 \cdots V-1$, take the sum for each column (each example) separately.
Please use
- numpy.exp
- numpy.sum (set the axis so that you take the sum of each column in z)
```
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: softmax
def softmax(z):
'''
Inputs:
z: output scores from the hidden layer
Outputs:
yhat: prediction (estimate of y)
'''
### START CODE HERE (Replace instances of 'None' with your own code) ###
# Calculate yhat (softmax)
yhat = np.exp(z)/np.sum(np.exp(z),axis=0)
### END CODE HERE ###
return yhat
# Test the function
tmp = np.array([[1,2,3],
[1,1,1]
])
tmp_sm = softmax(tmp)
display(tmp_sm)
```
##### Expected Output
```CPP
array([[0.5 , 0.73105858, 0.88079708],
[0.5 , 0.26894142, 0.11920292]])
```
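One practical note: for large scores, `np.exp(z)` can overflow. A common remedy (not required here -- the grader expects the plain form above) is to subtract the per-column maximum before exponentiating, which leaves the softmax value unchanged because the factor $e^{-\max}$ cancels between numerator and denominator:

```python
import numpy as np

def softmax_stable(z):
    # Subtracting the column-wise max does not change the result,
    # but keeps np.exp from overflowing on large scores.
    z_shift = z - np.max(z, axis=0, keepdims=True)
    e = np.exp(z_shift)
    return e / np.sum(e, axis=0, keepdims=True)

big = np.array([[1000.0, 3.0], [1001.0, 1.0]])
print(softmax_stable(big))  # finite values, no overflow warnings
```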
<a name='2.2'></a>
### 2.2 Forward propagation
<a name='ex-03'></a>
### Exercise 03
Implement the forward propagation $z$ according to equations (1) to (3). <br>
\begin{align}
h &= W_1 \ X + b_1 \tag{1} \\
a &= ReLU(h) \tag{2} \\
z &= W_2 \ a + b_2 \tag{3} \\
\end{align}
For that, you will use as activation the Rectified Linear Unit (ReLU) given by:
$$f(h)=\max (0,h) \tag{6}$$
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
<li>You can use numpy.maximum(x1,x2) to get the maximum of two values</li>
<li>Use numpy.dot(A,B) to matrix multiply A and B</li>
</ul>
</p>
```
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: forward_prop
def forward_prop(x, W1, W2, b1, b2):
'''
Inputs:
x: average one hot vector for the context
W1, W2, b1, b2: matrices and biases to be learned
Outputs:
z: output score vector
'''
### START CODE HERE (Replace instances of 'None' with your own code) ###
# Calculate h
h = np.dot(W1,x)+b1
# Apply the relu on h (store result in h)
h = np.maximum(0, h)
# Calculate z
z = np.dot(W2,h)+b2
### END CODE HERE ###
return z, h
# Test the function
# Create some inputs
tmp_N = 2
tmp_V = 3
tmp_x = np.array([[0,1,0]]).T
tmp_W1, tmp_W2, tmp_b1, tmp_b2 = initialize_model(N=tmp_N,V=tmp_V, random_seed=1)
print(f"x has shape {tmp_x.shape}")
print(f"N is {tmp_N} and vocabulary size V is {tmp_V}")
# call function
tmp_z, tmp_h = forward_prop(tmp_x, tmp_W1, tmp_W2, tmp_b1, tmp_b2)
print("call forward_prop")
print()
# Look at output
print(f"z has shape {tmp_z.shape}")
print("z has values:")
print(tmp_z)
print()
print(f"h has shape {tmp_h.shape}")
print("h has values:")
print(tmp_h)
```
##### Expected output
```CPP
x has shape (3, 1)
N is 2 and vocabulary size V is 3
call forward_prop
z has shape (3, 1)
z has values:
[[0.55379268]
[1.58960774]
[1.50722933]]
h has shape (2, 1)
h has values:
[[0.92477674]
[1.02487333]]
```
<a name='2.3'></a>
## 2.3 Cost function
- We have implemented the *cross-entropy* cost function for you.
```
# compute_cost: cross-entropy cost function
def compute_cost(y, yhat, batch_size):
# cost function
logprobs = np.multiply(np.log(yhat),y) + np.multiply(np.log(1 - yhat), 1 - y)
cost = - 1/batch_size * np.sum(logprobs)
cost = np.squeeze(cost)
return cost
# Test the function
tmp_C = 2
tmp_N = 50
tmp_batch_size = 4
tmp_word2Ind, tmp_Ind2word = get_dict(data)
tmp_V = len(word2Ind)
tmp_x, tmp_y = next(get_batches(data, tmp_word2Ind, tmp_V,tmp_C, tmp_batch_size))
print(f"tmp_x.shape {tmp_x.shape}")
print(f"tmp_y.shape {tmp_y.shape}")
tmp_W1, tmp_W2, tmp_b1, tmp_b2 = initialize_model(tmp_N,tmp_V)
print(f"tmp_W1.shape {tmp_W1.shape}")
print(f"tmp_W2.shape {tmp_W2.shape}")
print(f"tmp_b1.shape {tmp_b1.shape}")
print(f"tmp_b2.shape {tmp_b2.shape}")
tmp_z, tmp_h = forward_prop(tmp_x, tmp_W1, tmp_W2, tmp_b1, tmp_b2)
print(f"tmp_z.shape: {tmp_z.shape}")
print(f"tmp_h.shape: {tmp_h.shape}")
tmp_yhat = softmax(tmp_z)
print(f"tmp_yhat.shape: {tmp_yhat.shape}")
tmp_cost = compute_cost(tmp_y, tmp_yhat, tmp_batch_size)
print("call compute_cost")
print(f"tmp_cost {tmp_cost:.4f}")
```
##### Expected output
```CPP
tmp_x.shape (5778, 4)
tmp_y.shape (5778, 4)
tmp_W1.shape (50, 5778)
tmp_W2.shape (5778, 50)
tmp_b1.shape (50, 1)
tmp_b2.shape (5778, 1)
tmp_z.shape: (5778, 4)
tmp_h.shape: (50, 4)
tmp_yhat.shape: (5778, 4)
call compute_cost
tmp_cost 9.9560
```
<a name='2.4'></a>
## 2.4 Training the Model - Backpropagation
<a name='ex-04'></a>
### Exercise 04
Now that you have understood how the CBOW model works, you will train it. <br>
You created a function for the forward propagation. Now you will implement a function that computes the gradients to backpropagate the errors.
```
# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: back_prop
def back_prop(x, yhat, y, h, W1, W2, b1, b2, batch_size):
'''
Inputs:
x: average one hot vector for the context
yhat: prediction (estimate of y)
y: target vector
h: hidden vector (see eq. 1)
W1, W2, b1, b2: matrices and biases
batch_size: batch size
Outputs:
grad_W1, grad_W2, grad_b1, grad_b2: gradients of matrices and biases
'''
### START CODE HERE (Replace instances of 'None' with your code) ###
# Compute l1 as W2^T (Yhat - Y)
# Re-use it whenever you see W2^T (Yhat - Y) used to compute a gradient
l1 = np.dot(W2.T, yhat-y)
# Apply relu to l1
l1 = np.maximum(0, l1)
# Compute the gradient of W1
grad_W1 = np.dot(l1, x.T) / batch_size
# Compute the gradient of W2
grad_W2 = np.dot(yhat - y, h.T) / batch_size
# Compute the gradient of b1
grad_b1 = np.sum(l1, axis = 1, keepdims = True) / batch_size
# Compute the gradient of b2
grad_b2 = np.sum(yhat - y, axis = 1, keepdims = True) / batch_size
### END CODE HERE ###
return grad_W1, grad_W2, grad_b1, grad_b2
# Test the function
tmp_C = 2
tmp_N = 50
tmp_batch_size = 4
tmp_word2Ind, tmp_Ind2word = get_dict(data)
tmp_V = len(word2Ind)
# get a batch of data
tmp_x, tmp_y = next(get_batches(data, tmp_word2Ind, tmp_V,tmp_C, tmp_batch_size))
print("get a batch of data")
print(f"tmp_x.shape {tmp_x.shape}")
print(f"tmp_y.shape {tmp_y.shape}")
print()
print("Initialize weights and biases")
tmp_W1, tmp_W2, tmp_b1, tmp_b2 = initialize_model(tmp_N,tmp_V)
print(f"tmp_W1.shape {tmp_W1.shape}")
print(f"tmp_W2.shape {tmp_W2.shape}")
print(f"tmp_b1.shape {tmp_b1.shape}")
print(f"tmp_b2.shape {tmp_b2.shape}")
print()
print("Forward prop to get z and h")
tmp_z, tmp_h = forward_prop(tmp_x, tmp_W1, tmp_W2, tmp_b1, tmp_b2)
print(f"tmp_z.shape: {tmp_z.shape}")
print(f"tmp_h.shape: {tmp_h.shape}")
print()
print("Get yhat by calling softmax")
tmp_yhat = softmax(tmp_z)
print(f"tmp_yhat.shape: {tmp_yhat.shape}")
tmp_m = (2*tmp_C)
tmp_grad_W1, tmp_grad_W2, tmp_grad_b1, tmp_grad_b2 = back_prop(tmp_x, tmp_yhat, tmp_y, tmp_h, tmp_W1, tmp_W2, tmp_b1, tmp_b2, tmp_batch_size)
print()
print("call back_prop")
print(f"tmp_grad_W1.shape {tmp_grad_W1.shape}")
print(f"tmp_grad_W2.shape {tmp_grad_W2.shape}")
print(f"tmp_grad_b1.shape {tmp_grad_b1.shape}")
print(f"tmp_grad_b2.shape {tmp_grad_b2.shape}")
```
##### Expected output
```CPP
get a batch of data
tmp_x.shape (5778, 4)
tmp_y.shape (5778, 4)
Initialize weights and biases
tmp_W1.shape (50, 5778)
tmp_W2.shape (5778, 50)
tmp_b1.shape (50, 1)
tmp_b2.shape (5778, 1)
Forward prop to get z and h
tmp_z.shape: (5778, 4)
tmp_h.shape: (50, 4)
Get yhat by calling softmax
tmp_yhat.shape: (5778, 4)
call back_prop
tmp_grad_W1.shape (50, 5778)
tmp_grad_W2.shape (5778, 50)
tmp_grad_b1.shape (50, 1)
tmp_grad_b2.shape (5778, 1)
```
<a name='2.5'></a>
## 2.5 Gradient Descent
<a name='ex-05'></a>
### Exercise 05
Now that you have implemented a function to compute the gradients, you will implement batch gradient descent over your training set.
**Hint:** For that, you will use `initialize_model` and the `back_prop` functions which you just created (and the `compute_cost` function). You can also use the provided `get_batches` helper function:
```for x, y in get_batches(data, word2Ind, V, C, batch_size):```
```...```
Also: print the cost after each batch is processed (use batch size = 128)
```
# UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: gradient_descent
def gradient_descent(data, word2Ind, N, V, num_iters, alpha=0.03):
'''
This is the gradient_descent function
Inputs:
data: text
word2Ind: words to Indices
N: dimension of hidden vector
V: dimension of vocabulary
num_iters: number of iterations
Outputs:
W1, W2, b1, b2: updated matrices and biases
'''
W1, W2, b1, b2 = initialize_model(N,V, random_seed=282)
batch_size = 128
iters = 0
C = 2
for x, y in get_batches(data, word2Ind, V, C, batch_size):
### START CODE HERE (Replace instances of 'None' with your own code) ###
# Get z and h
z, h = forward_prop(x, W1, W2, b1, b2)
# Get yhat
yhat = softmax(z)
# Get cost
cost = compute_cost(y, yhat, batch_size)
if ( (iters+1) % 10 == 0):
print(f"iters: {iters + 1} cost: {cost:.6f}")
# Get gradients
grad_W1, grad_W2, grad_b1, grad_b2 = back_prop(x, yhat, y, h, W1, W2, b1, b2, batch_size)
# Update weights and biases
W1 = W1 - alpha*grad_W1
W2 = W2 - alpha*grad_W2
b1 = b1 - alpha*grad_b1
b2 = b2 - alpha*grad_b2
### END CODE HERE ###
iters += 1
if iters == num_iters:
break
if iters % 100 == 0:
alpha *= 0.66
return W1, W2, b1, b2
# test your function
C = 2
N = 50
word2Ind, Ind2word = get_dict(data)
V = len(word2Ind)
num_iters = 150
print("Call gradient_descent")
W1, W2, b1, b2 = gradient_descent(data, word2Ind, N, V, num_iters)
```
##### Expected Output
```CPP
iters: 10 cost: 0.789141
iters: 20 cost: 0.105543
iters: 30 cost: 0.056008
iters: 40 cost: 0.038101
iters: 50 cost: 0.028868
iters: 60 cost: 0.023237
iters: 70 cost: 0.019444
iters: 80 cost: 0.016716
iters: 90 cost: 0.014660
iters: 100 cost: 0.013054
iters: 110 cost: 0.012133
iters: 120 cost: 0.011370
iters: 130 cost: 0.010698
iters: 140 cost: 0.010100
iters: 150 cost: 0.009566
```
Your numbers may differ a bit depending on which version of Python you're using.
<a name='3'></a>
## 3.0 Visualizing the word vectors
In this part you will visualize the word vectors trained using the function you just coded above.
```
# visualizing the word vectors here
from matplotlib import pyplot
%config InlineBackend.figure_format = 'svg'
words = ['king', 'queen','lord','man', 'woman','dog','wolf',
'rich','happy','sad']
embs = (W1.T + W2)/2.0
# given a list of words and the embeddings, it returns a matrix with all the embeddings
idx = [word2Ind[word] for word in words]
X = embs[idx, :]
print(X.shape, idx) # X.shape: Number of words of dimension N each
result= compute_pca(X, 2)
pyplot.scatter(result[:, 0], result[:, 1])
for i, word in enumerate(words):
pyplot.annotate(word, xy=(result[i, 0], result[i, 1]))
pyplot.show()
```
You can see that 'man' and 'king' are next to each other. However, we have to be careful when interpreting these projected word vectors, since the picture depends on which principal components we project onto -- as shown in the following illustration.
```
result= compute_pca(X, 4)
pyplot.scatter(result[:, 3], result[:, 1])
for i, word in enumerate(words):
pyplot.annotate(word, xy=(result[i, 3], result[i, 1]))
pyplot.show()
```
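For reference, `compute_pca` is supplied by `utils2`. Assuming it implements standard PCA (an assumption; the provided helper may differ in details such as sign conventions), an equivalent sketch using an SVD of the centered data is:

```python
import numpy as np

def pca_project(X, n_components=2):
    # Center the data, then project onto the top principal directions.
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal axes, ordered by decreasing variance.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 50))
print(pca_project(X, 2).shape)  # (10, 2)
```

Because the axes are sorted by explained variance, plotting different component pairs (as the cell above does with components 3 and 1) gives genuinely different pictures of the same embeddings.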
# MNIST Image Classification with TensorFlow on Cloud AI Platform
This notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras).
## Learning Objectives
1. Understand how to build a Dense Neural Network (DNN) for image classification
2. Understand how to use dropout (DNN) for image classification
3. Understand how to use Convolutional Neural Networks (CNN)
4. Know how to deploy and use an image classification model using Google Cloud's [AI Platform](https://cloud.google.com/ai-platform/)
First things first. Configure the parameters below to match your own Google Cloud project details.
```
import os
from datetime import datetime
REGION = "us-central1"
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
BUCKET = PROJECT
MODEL_TYPE = "cnn" # "linear", "dnn", "dnn_dropout", or "cnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_models")
```
## Building a dynamic model
In the previous notebook, <a href="mnist_linear.ipynb">mnist_linear.ipynb</a>, we ran our code directly from the notebook. In order to run it on AI Platform, it needs to be packaged as a Python module.
The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder `trainer` and is designated as a Python package by the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them.
Let's start with the trainer file first. This file parses command line arguments to feed into the model.
```
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
output_path = args.job_dir if not trial_id else args.job_dir + '/'
model_layers = model.get_layers(args.model_type)
image_model = model.build_model(model_layers, args.job_dir)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch, args.job_dir)
if __name__ == '__main__':
main()
```
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab.
```
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
```
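For reference, the one-hot encoding that `tf.keras.utils.to_categorical` applies to the labels can be sketched in plain Python (an illustration of the idea, not the actual TensorFlow implementation):

```python
def one_hot(label, nclasses):
    """Return a list with a 1.0 at position `label` and 0.0 elsewhere."""
    vec = [0.0] * nclasses
    vec[label] = 1.0
    return vec

print(one_hot(3, 10))  # [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```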
Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The file below has three functions: `get_layers`, `build_model`, and `train_and_evaluate`. We will build the structure of our model in `get_layers` and compile it in `build_model`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`.
**TODO 1**: Define the Keras layers for a DNN model
**TODO 2**: Define the Keras layers for a dropout model
**TODO 3**: Define the Keras layers for a CNN model
Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance.
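The dispatch pattern that `get_layers` uses, a dictionary of named layer recipes, can be illustrated without TensorFlow. The layer names below are descriptive stand-ins, not the lab's solution:

```python
def get_config(model_type):
    """Look up a layer recipe by name; unknown types raise KeyError."""
    configs = {
        "linear": ["Flatten", "Dense(nclasses)", "Softmax"],
        "dnn": ["Flatten", "Dense(400, relu)", "Dense(100, relu)",
                "Dense(nclasses)", "Softmax"],
    }
    return configs[model_type]

print(get_config("linear"))  # ['Flatten', 'Dense(nclasses)', 'Softmax']
```

In the real module, each string would be an actual `tf.keras.layers` instance, and the `cnn` recipe would add `Conv2D`/`MaxPooling2D` pairs before flattening.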
```
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO
],
'dnn_dropout': [
# TODO
],
'cnn': [
# TODO
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
```
## Local Training
With everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script, `mnist_models/trainer/test.py`, to make sure the model still passes our previous checks. On `line 13`, you can specify which model types you would like to check. `line 14` and `line 15` have the number of epochs and steps per epoch, respectively.
Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
```
!python3 -m mnist_models.trainer.test
```
Now that we know that our models are working as expected, let's run it on the [Google Cloud AI Platform](https://cloud.google.com/ml-engine/docs/). We can run it as a Python module locally first using the command line.
The cell below transfers some of our variables to the command line as environment variables and creates a job directory name that includes a timestamp.
```
current_time = datetime.now().strftime("%Y%m%d_%H%M%S")
model_type = "cnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time
)
```
The cell below runs the local version of the code. The `epochs` and `steps_per_epoch` flags can be changed to run for longer or shorter, as defined in our `mnist_models/trainer/task.py` file.
```
%%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
```
## Training on the cloud
We will use a [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/overview) to train this model on AI Platform. Below is a simple [Dockerfile](https://docs.docker.com/engine/reference/builder/) which copies our code to be used in a TensorFlow 2.3 environment.
```
%%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu.2-3
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"]
```
The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up [here](http://console.cloud.google.com/gcr) with the name `mnist_models`. ([Click here](https://console.cloud.google.com/cloud-build) to enable Cloud Build)
```
!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI
```
Finally, we can kick off the [AI Platform training job](https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training). We can pass in our Docker image using the `master-image-uri` flag.
```
current_time = datetime.now().strftime("%Y%m%d_%H%M%S")
model_type = "cnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time
)
os.environ["JOB_NAME"] = f"mnist_{model_type}_{current_time}"
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE
```
## Deploying and predicting with model
Once you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. The command below uses the Keras export path of the previous job, but `${JOB_DIR}keras_export/` can always be changed to a different path.
```
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
MODEL_NAME = f"mnist_{TIMESTAMP}"
%env MODEL_NAME = $MODEL_NAME
%%bash
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
gcloud ai-platform models create ${MODEL_NAME} --region $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=2.3
```
To predict with the model, let's take one of the example images.
**TODO 4**: Write a `.json` file with image data to send to an AI Platform deployed model
```
import codecs
import json
import matplotlib.pyplot as plt
import tensorflow as tf
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding="utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
```
Finally, we can send it to the prediction service. The output will have a 1 at the index of the digit it is predicting. Congrats! You've completed the lab!
```
%%bash
gcloud ai-platform predict \
--model=${MODEL_NAME} \
--version=${MODEL_TYPE} \
--json-instances=./test.json
```
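Since the response is (approximately) one-hot, recovering the predicted digit is just an argmax. A sketch, assuming `scores` holds the ten class scores returned by the service (the values below are made up for illustration):

```python
def predicted_digit(scores):
    """Index of the largest class score, i.e. the predicted digit."""
    return max(range(len(scores)), key=lambda i: scores[i])

# Hypothetical ten-class response for a single image
scores = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.98, 0.01, 0.01, 0.0]
print(predicted_digit(scores))  # 6
```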
Copyright 2021 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
```
import boto3
import sagemaker
print(boto3.__version__)
print(sagemaker.__version__)
session = sagemaker.Session()
bucket = session.default_bucket()
print("Default bucket is {}".format(bucket))
### REPLACE INPUT FROM PREVIOUS SAGEMAKER PROCESSING CONFIG OUTPUT #####
prefix="customer_support_classification"
s3_training_path="https://sagemaker-us-east-1-123412341234.s3.amazonaws.com/sagemaker-scikit-learn-2020-11-16-19-27-42-281/output/training/training.txt"
s3_validation_path="https://sagemaker-us-east-1-123412341234.s3.amazonaws.com/sagemaker-scikit-learn-2020-11-16-19-27-42-281/output/validation/validation.txt"
s3_output_path="s3://{}/{}".format(bucket, prefix)
from sagemaker import image_uris
region_name = boto3.Session().region_name
print("Training the model in {} region".format(region_name))
```
# Estimator
```
container = image_uris.retrieve('blazingtext', region=region_name)
print("The algo container is {}".format(container))
blazing_text = sagemaker.estimator.Estimator(
container,
role=sagemaker.get_execution_role(),
instance_count=1,
instance_type='ml.c4.4xlarge',
output_path=s3_output_path
)
```
# Set the hyperparameters
```
blazing_text.set_hyperparameters(mode='supervised')
from sagemaker import TrainingInput
train_data = TrainingInput(s3_training_path,
distribution='FullyReplicated',
content_type='text/plain',
s3_data_type='S3Prefix'
)
validation_data = TrainingInput(
s3_validation_path,
distribution='FullyReplicated',
content_type='text/plain',
s3_data_type='S3Prefix'
)
s3_channels = {'train': train_data,
'validation': validation_data
}
blazing_text.fit(inputs=s3_channels)
blazing_text.latest_training_job.job_name
blazing_text_predictor = blazing_text.deploy(
initial_instance_count=1,
instance_type='ml.t2.medium'
)
import json
import nltk
nltk.download('punkt')
"""
Hi my outlook app seems to misbehave a lot lately, I cannot sync my emails and it often crashes and asks for
credentials. Could you help me out?
"""
sentences = ["Hi my outlook app seems to misbehave a lot lately, I cannot sync my emails and it often crashes and asks for credentials.", "Could you help me out?"]
tokenized_sentences = [' '.join(nltk.word_tokenize(sent)) for sent in sentences]
payload = {"instances" : tokenized_sentences,
"configuration": {"k": 1}}
payload
from sagemaker.predictor import Predictor
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer
case_classifier = Predictor(
endpoint_name="blazingtext-2020-11-18-15-13-52-229", # Replace with sagemaker endpoint deployed in the previous step
serializer=JSONSerializer()
)
response = case_classifier.predict(payload)
predictions = json.loads(response)
print(predictions)
predictions = sorted(predictions, key=lambda i: i['prob'], reverse=True)
print(json.dumps(predictions[0], indent=2))
```
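BlazingText's supervised mode returns labels with a `__label__` prefix, which is why the code above sorts by `prob` before printing. A small sketch of post-processing such a response (the response contents here are made up for illustration):

```python
# Hypothetical response in the shape returned by a BlazingText endpoint
predictions = [
    {"label": ["__label__outlook"], "prob": [0.91]},
    {"label": ["__label__vpn"], "prob": [0.06]},
]

def top_class(preds):
    """Best label with its probability, stripped of the __label__ prefix."""
    best = max(preds, key=lambda p: p["prob"][0])
    return best["label"][0].replace("__label__", ""), best["prob"][0]

print(top_class(predictions))  # ('outlook', 0.91)
```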
```
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
from networkx.drawing.nx_agraph import graphviz_layout
from abc import ABC, abstractmethod
import numpy as np
def V(num):
return 4*num+1
def S(num):
return 2*num+1
def fG(num):
return 2*num-1
def getType(num):
if (num+1) %3 == 0:
return "A"
elif (num%3) == 0:
return "B"
elif (num-1)%3 == 0:
return "C"
def addRuleOne(big_leaf, mX):
numArr = []
xMax = mX
#big_leaf = big_leaf
#for i in range(1, height):
# big_leaf = int(big_leaf * 2 + 1)
for x in range(1,mX):
y = V(x)
numArr.append([x,y,getType(y)])
#elif getType(V(y)) == "A":
# while V(y) < big_leaf:
# y=V(y)
# numArr.append([x,y,getType(y)])
return numArr
# Rule 2 test  ## incorrectly implemented
def addRuleTwo(big_leaf, mX, mK):
numArr = []
xMax2 = mX
kMax2 = mK
for x in range(1,xMax2+1):
#if V(x) > big_leaf:
# break
for k in range(1,kMax2+1):
#for i in range(1,k+1):
firstPartSave = V(x)
secondPartSave = V(x)
for j in range(0,k):
firstPartSave = S(firstPartSave)
for m in range(0,k+1):
secondPartSave = S(secondPartSave)
firstPart = firstPartSave
secondPart = secondPartSave
firstType = getType(firstPart)
secondType = getType(secondPart)
numArr.append([x, firstPart, firstType])
numArr.append([x, secondPart, secondType])
return numArr
def addRuleThree(big_leaf, mX, mN):
#big_leaf = big_leaf
#for i in range(1,self.height):
# big_leaf = int(big_leaf * 2 + 1)
numArr = []
xMax = mX
nMax = mN
for n in range(1,nMax+1):
#n = 4
for x in range(1,xMax+1):
if x%3 == 0:
True
else:
y = 3**n * x
for i in range(1,n+1):
firstPart = V((4**i)*(3**(n-i)*x))
secondPart = S(firstPart*x)
firstType = getType(firstPart)
secondType = getType(secondPart)
numArr.append([y, firstPart, firstType])
numArr.append([y, secondPart, secondType])
return numArr
def addRuleFour(big_leaf, mX, mN):
#big_leaf = big_leaf
#for i in range(1,self.height):
# big_leaf = int(big_leaf * 2 + 1)
numArr = []
xMax = mX
nMax = mN
for n in range(1,nMax+1):
#if 3**n > big_leaf:
# break
for x in range(1,xMax+1):
if x%3 == 0:
True
else:
y = S(3**n * x)
for i in range(1,n+1):
firstPart = S((4**i)*(3**(n-i)*x))
secondPart = S(firstPart*x)
firstType = getType(firstPart)
secondType = getType(secondPart)
numArr.append([y, firstPart, firstType])
numArr.append([y, secondPart, secondType])
return numArr
def addRuleFive(big_leaf, mX, mN):
big_leaf = big_leaf
#for i in range(1,self.height):
# big_leaf = int(big_leaf * 2 + 1)
numArr = []
xMax = mX
nMax = mN
for n in range(1,nMax+1):
#if 3**n > big_leaf:
# break
for x in range(1,xMax+1):
if x%3 == 0:
True
else:
y = fG(3**n * x)
if (y+1)%3 == 0:
for i in range(1,n+1):
firstPartSave = fG(x)
secondPartSave = fG(x)
for j in range(0,i):
firstPartSave = S(firstPartSave)
for m in range(0,i+1):
secondPartSave = S(secondPartSave)
firstPart = firstPartSave
secondPart = secondPartSave
firstType = getType(firstPart)
secondType = getType(secondPart)
numArr.append([y, firstPart, firstType])
numArr.append([y, secondPart, secondType])
return numArr
def findNumberHelp(confirmed, start):
partOfStartTree = []
#for i in confirmed:
for j in confirmed:
if j[1] == start:
partOfStartTree.append(j)
#treeNumbers.append(j)
return partOfStartTree
def findNumbers(confirmed, treeNumbers, start, maxRec):
#treeNumbers = treeNumbers
partOfStartTree = findNumberHelp(confirmed, start)
recLvl = maxRec-1
if recLvl == 0:
return treeNumbers
else:
#for i in partOfStartTree:
# numArr.append(i)
for t in partOfStartTree:
result = findNumbers(confirmed, treeNumbers, t[0], recLvl)
result.append(t)
treeNumbers = result
return treeNumbers
class TreeNode:
def __init__(self, number):
self.number = number
self.children = []
self.parent = None
self.height = 1
def addChild(self, child):
self.children.append(child)
# def buildTree(root, numberCol, height, provenBy):
# root = TreeNode(number)
# for i in numArr[number]:
# root.addChild(i)
# print(i)
class Tree():
def __init__(self, root, numberCol, height):
self.root = root
self.height = height
self.numberCol = numberCol
self.generate(root, numberCol, height, self.root.number)
def generate(self, treeNode, numberCol, height, next):
#if len(proven)<1:
proven = numberCol.get(next, [])  # leaves have no entry in numberCol
if len(proven)<1 or height > self.height:
return
# else:
# proven = numberCol[number]
# if len(proven)<1 or height > self.height:
# return
for i in proven:
new = TreeNode(i)
treeNode.addChild(new)
new.height = height+1
new.parent = treeNode
self.generate(new, numberCol, new.height, i)
print(i)
#Third Try
maxValueAllowed = 1000
numArr = []
bigLeaf = 4096
xMax1 = bigLeaf
x1 = 1
while x1 < xMax1:
y1 = V(x1)
if x1 < maxValueAllowed and y1 < maxValueAllowed:
numArr.append([x1,y1,getType(y1)])
x1 = x1+1
#print(1)
# xMax2 = 300
# kMax2 = 10
# for x2 in range(1,xMax2+1):
# #if V(x) > big_leaf:
# # break
# for k2 in range(1,kMax2+1):
# #for i in range(1,k+1):
# firstPartSave2 = V(x2)
# secondPartSave2 = V(x2)
# for j in range(0,k2):
# firstPartSave2 = S(firstPartSave2)
# for m in range(0,k2+1):
# secondPartSave2 = S(secondPartSave2)
# firstPart2 = firstPartSave2
# secondPart2 = secondPartSave2
# firstType2 = getType(firstPart2)
# secondType2 = getType(secondPart2)
# if x2 < maxValueAllowed and firstPart2 < maxValueAllowed:
# numArr.append([x2, firstPart2, firstType2])
# if x2 < maxValueAllowed and secondPart2 < maxValueAllowed:
# numArr.append([x2, secondPart2, secondType2])
#print(2)
xMax4 = 300
nMax4 = 10
for n4 in range(1,nMax4+1):
for x4 in range(1,xMax4+1):
if x4%3 == 0:
True
else:
y4 = S(3**n4 * x4)
for i4 in range(1,n4+1):
firstPart4 = S((4**i4)*(3**(n4-i4)*x4))
secondPart4 = S(firstPart4*x4)
firstType4 = getType(firstPart4)
secondType4 = getType(secondPart4)
if y4 < maxValueAllowed and firstPart4 < maxValueAllowed:
numArr.append([y4, firstPart4, firstType4])
if y4 < maxValueAllowed and secondPart4 < maxValueAllowed:
numArr.append([y4, secondPart4, secondType4])
#print(4)
xMax3 = 300
nMax3 = 10
for n3 in range(1,nMax3+1):
#n = 4
for x3 in range(1,xMax3+1):
if x3%3 == 0:
True
else:
y3 = 3**n3 * x3
for i3 in range(1,n3+1):
firstPart3 = V((4**i3)*(3**(n3-i3)*x3))
secondPart3 = S(firstPart3*x3)
firstType3 = getType(firstPart3)
secondType3 = getType(secondPart3)
if y3 < maxValueAllowed and firstPart3 < maxValueAllowed:
numArr.append([y3, firstPart3, firstType3])
if y3 < maxValueAllowed and secondPart3 < maxValueAllowed:
numArr.append([y3, secondPart3, secondType3])
#print(3)
xMax5 = 300
nMax5 = 10
for n5 in range(1,nMax5+1):
for x5 in range(1,xMax5+1):
if x5%3 == 0:
True
else:
y5 = fG(3**n5 * x5)
if (y5+1)%3 == 0:
for i5 in range(1,n5+1):
firstPartSave5 = fG((x5))
secondPartSave5 = fG((x5))
for j in range(0,i5):
firstPartSave5 = S(firstPartSave5)
for m in range(0,i5+1):
secondPartSave5 = S(secondPartSave5)
firstPart5 = firstPartSave5
secondPart5 = secondPartSave5
firstType5 = getType(firstPart5)
secondType5 = getType(secondPart5)
if y5 < maxValueAllowed and firstPart5 < maxValueAllowed:
numArr.append([y5, firstPart5, firstType5])
if y5 < maxValueAllowed and secondPart5 < maxValueAllowed:
numArr.append([y5, secondPart5, secondType5])
#print(5)
start = 31
end = 17
treeNumbers = []
maxRec = 15
numArr.sort()
length = len(numArr)-1
for i in range(0, length):
if numArr[length-i] == numArr[length-i-1]:
numArr.pop(length-i)
#print(numArr)
startKnots = dict()
pre = []
for i in numArr:
if pre == []:
pre = i
key = i[0]
startKnots[key] = [i[1]]
else:
if i[0]==pre[0]:
key = pre[0]
startKnots[key].append(i[1])
else:
key = i[0]
startKnots[key] = [i[1]]
pre = i
#print(startKnots)
#provenBy = []
root = TreeNode(start)
#buildTree(root, startKnots, 20, provenBy)
tree = Tree(root, startKnots, 20)
class Node:
def __init__(self, number):
self.number = number
self.children = []
self.parent = None
self.height = 1
def addChild(self, child):
self.children.append(child)
col = dict()
col[1] = [2,3]
col[2] = [4,5,6,7]
col[3] = [8,9]
col[8] = [11,12,17]
print(col)
nStack = []
#pStack = []
root = Node(1)
current = root
stop = 12
maxHeight = 5
for i in col[root.number]:
newNode = Node(i)
newNode.height = current.height + 1
newNode.parent = current
nStack.append(newNode)
#current.addChild(newNode)
while len(nStack) > 0 and current.number != stop:
length = len(nStack)
newNode = nStack[length-1]
nStack.pop(length-1)
length = length-1
newNode.height = current.height + 1
newNode.parent = current
current.addChild(newNode)
if newNode.number in col:
for i in col[newNode.number]:
cNewNode = Node(i)
cNewNode.height = current.height + 1
cNewNode.parent = current
nStack.append(cNewNode)
print(current.number)
if newNode.height < maxHeight + 1:
current = newNode
else:
while True:
current = current.parent
if current.height == nStack[len(nStack)-1].height-1:
break
# # Second Try
# numArr = []
# bigLeaf = 4096
# maxValueAllowed = 1500
# xMax1 = bigLeaf
# x1 = 1
# while x1 < xMax1:
# y1 = V(x1)
# if x1 < maxValueAllowed and y1 < maxValueAllowed:
# numArr.append([x1,y1,getType(y1)])
# x1 = x1+1
# #print(1)
# xMax2 = 300
# kMax2 = 10
# for x2 in range(1,xMax2+1):
# #if V(x) > big_leaf:
# # break
# for k2 in range(1,kMax2+1):
# #for i in range(1,k+1):
# firstPartSave2 = V(x2)
# secondPartSave2 = V(x2)
# for j in range(0,k2):
# firstPartSave2 = S(firstPartSave2)
# for m in range(0,k2+1):
# secondPartSave2 = S(secondPartSave2)
# firstPart2 = firstPartSave2
# secondPart2 = secondPartSave2
# firstType2 = getType(firstPart2)
# secondType2 = getType(secondPart2)
# if x2 < maxValueAllowed and firstPart2 < maxValueAllowed:
# numArr.append([x2, firstPart2, firstType2])
# if x2 < maxValueAllowed and secondPart2 < maxValueAllowed:
# numArr.append([x2, secondPart2, secondType2])
# #print(2)
# xMax3 = 300
# nMax3 = 10
# for n3 in range(1,nMax3+1):
# #n = 4
# for x3 in range(1,xMax3+1):
# if x3%3 == 0:
# True
# else:
# y3 = 3**n3 * x3
# for i3 in range(1,n3+1):
# firstPart3 = V((4**i3)*(3**(n3-i3)*x3))
# secondPart3 = S(firstPart3*x3)
# firstType3 = getType(firstPart3)
# secondType3 = getType(secondPart3)
# if y3 < maxValueAllowed and firstPart3 < maxValueAllowed:
# numArr.append([y3, firstPart3, firstType3])
# if y3 < maxValueAllowed and secondPart3 < maxValueAllowed:
# numArr.append([y3, secondPart3, secondType3])
# #print(3)
# xMax4 = 300
# nMax4 = 10
# for n4 in range(1,nMax4+1):
# for x4 in range(1,xMax4+1):
# if x4%3 == 0:
# True
# else:
# y4 = S(3**n4 * x4)
# for i4 in range(1,n4+1):
# firstPart4 = S((4**i4)*(3**(n4-i4)*x4))
# secondPart4 = S(firstPart4*x4)
# firstType4 = getType(firstPart4)
# secondType4 = getType(secondPart4)
# if y4 < maxValueAllowed and firstPart4 < maxValueAllowed:
# numArr.append([y4, firstPart4, firstType4])
# if y4 < maxValueAllowed and secondPart4 < maxValueAllowed:
# numArr.append([y4, secondPart4, secondType4])
# #print(4)
# xMax5 = 300
# nMax5 = 10
# for n5 in range(1,nMax5+1):
# for x5 in range(1,xMax5+1):
# if x5%3 == 0:
# True
# else:
# y5 = fG(3**n5 * x5)
# if (y5+1)%3 == 0:
# for i5 in range(1,n5+1):
# firstPartSave5 = fG((x5))
# secondPartSave5 = fG((x5))
# for j in range(0,i5):
# firstPartSave5 = S(firstPartSave5)
# for m in range(0,i5+1):
# secondPartSave5 = S(secondPartSave5)
# firstPart5 = firstPartSave5
# secondPart5 = secondPartSave5
# firstType5 = getType(firstPart5)
# secondType5 = getType(secondPart5)
# if y5 < maxValueAllowed and firstPart5 < maxValueAllowed:
# numArr.append([y5, firstPart5, firstType5])
# if y5 < maxValueAllowed and secondPart5 < maxValueAllowed:
# numArr.append([y5, secondPart5, secondType5])
# #print(5)
# start = 31
# end = 17
# treeNumbers = []
# maxRec = 15
# numArr.sort()
# length = len(numArr)-1
# for i in range(0, length):
# if numArr[length-i] == numArr[length-i-1]:
# numArr.pop(length-i)
# #numArr =
# #print(numArr)
# testArray = findNumberHelp(numArr, start)
# print(testArray)
# for i in testArray:
# #treeNumbers.append(findNumbers(numArr, treeNumbers, i[0], maxRec))
# print(findNumbers(numArr, treeNumbers, i[0], maxRec))
# #print(treeNumbers)
# testArray2 = []
# #treeNumbers = findNumbers(numArr, treeNumbers, start, maxRec)
# #print(testArray2)
# #print(numArr)
# ## First Try
# bigLeaf = 4096
# x = 100
# n = 15
# k = 15
# confirmed = []
# confirmed.append(addRuleOne(bigLeaf, x))
# confirmed.append(addRuleTwo(bigLeaf, bigLeaf, k))
# confirmed.append(addRuleThree(bigLeaf, x, n))
# confirmed.append(addRuleFour(bigLeaf, x, n))
# confirmed.append(addRuleFive(bigLeaf, x, n))
# #r = addRuleThree(bigLeaf, x, n)
# #print(confirmed)
# print(len(confirmed[0]))
# print(len(confirmed[1]))
# print(len(confirmed[2]))
# print(len(confirmed[3]))
# print(len(confirmed[4]))
# start = 31
# end = 17
# treeNumbers = []
# treeNumbers = findNumbers(confirmed, treeNumbers, start, end)
# print(treeNumbers)
# ## Rule 3
# import math
# def V(num):
# return 4*num+1
# def S(num):
# return 2*num+1
# def G(num):
# return 2*num-1
# def getType(num):
# if (num+1) %3 == 0:
# return "A"
# elif (num%3) == 0:
# return "B"
# elif (num-1)%3 == 0:
# return "C"
# numArr = []
# xMax = 250
# nMax = 15
# for n in range(1,nMax+1):
# #n = 4
# for x in range(1,xMax+1):
# if x%3 == 0:
# True
# else:
# y = 3**n * x
# for i in range(1,n+1):
# firstPart = V((4**i)*(3**(n-i)*x))
# secondPart = S(firstPart*x)
# firstType = getType(firstPart)
# secondType = getType(secondPart)
# numArr.append([y, firstPart, firstType])
# numArr.append([y, secondPart, secondType])
# print(numArr)
# ## Rule 4
# import math
# def V(num):
# return 4*num+1
# def S(num):
# return 2*num+1
# def G(num):
# return 2*num-1
# def getType(num):
# if (num+1) %3 == 0:
# return "A"
# elif (num%3) == 0:
# return "B"
# elif (num-1)%3 == 0:
# return "C"
# numArr = []
# xMax4 = 200
# nMax4 = 10
# for n in range(1,nMax+1):
# for x in range(1,xMax+1):
# if x%3 == 0:
# True
# else:
# y = S(3**n * x)
# for i in range(1,n+1):
# firstPart = S((4**i)*(3**(n-i)*x))
# secondPart = S(firstPart*x)
# firstType = getType(firstPart)
# secondType = getType(secondPart)
# numArr.append([y, firstPart, firstType])
# numArr.append([y, secondPart, secondType])
# print(numArr)
# #Rule 5
# import math
# def V(num):
# return 4*num+1
# def S(num):
# return 2*num+1
# def G(num):
# return 2*num-1
# def getType(num):
# if (num+1) %3 == 0:
# return "A"
# elif (num%3) == 0:
# return "B"
# elif (num-1)%3 == 0:
# return "C"
# numArr = []
# xMax = 15
# nMax = 15
# for n in range(1,nMax+1):
# for x in range(1,xMax+1):
# if x%3 == 0:
# True
# else:
# y = G(3**n * x)
# if (y+1)%3 == 0:
# for i in range(1,n+1):
# firstPartSave = G((3**(n-i)*x))
# secondPartSave = G((3**(n-i)*x))
# for j in range(0,i):
# firstPartSave = S(firstPartSave)
# for m in range(0,i+1):
# secondPartSave = S(secondPartSave)
# firstPart = firstPartSave
# secondPart = secondPartSave
# firstType = getType(firstPart)
# secondType = getType(secondPart)
# numArr.append([y, firstPart, firstType])
# numArr.append([y, secondPart, secondType])
# print(numArr)
# def addRuleOne(big_leaf, mX):
# numArr = []
# xMax = mX
# big_leaf = big_leaf
# #for i in range(1, height):
# # big_leaf = int(big_leaf * 2 + 1)
# for x in range(1,mX):
# y = V(x)
# numArr.append([x,y,getType(y)])
# while y < big_leaf:
# y=V(y)
# numArr.append([x,y,getType(y)])
# return numArr
# print(addRuleOne(5, 127))
def getCol(height):
maxValueAllowed = 2**height-1
numArr = []
bigLeaf = 4096
xMax1 = bigLeaf
    x1 = 1
    while x1 < xMax1:
        y1 = V(x1)
        if x1 < maxValueAllowed and y1 < maxValueAllowed:
            numArr.append([x1, y1, getType(y1)])
        x1 = x1 + 1
    #print(1)
    # xMax2 = 300
    # kMax2 = 10
    # for x2 in range(1, xMax2 + 1):
    #     #if V(x) > big_leaf:
    #     #    break
    #     for k2 in range(1, kMax2 + 1):
    #         #for i in range(1, k + 1):
    #         firstPartSave2 = V(x2)
    #         secondPartSave2 = V(x2)
    #         for j in range(0, k2):
    #             firstPartSave2 = S(firstPartSave2)
    #         for m in range(0, k2 + 1):
    #             secondPartSave2 = S(secondPartSave2)
    #         firstPart2 = firstPartSave2
    #         secondPart2 = secondPartSave2
    #         firstType2 = getType(firstPart2)
    #         secondType2 = getType(secondPart2)
    #         if x2 < maxValueAllowed and firstPart2 < maxValueAllowed:
    #             numArr.append([x2, firstPart2, firstType2])
    #         if x2 < maxValueAllowed and secondPart2 < maxValueAllowed:
    #             numArr.append([x2, secondPart2, secondType2])
    #print(2)
    xMax4 = 300
    nMax4 = 10
    for n4 in range(1, nMax4 + 1):
        for x4 in range(1, xMax4 + 1):
            if x4 % 3 == 0:
                pass  # skip multiples of 3
            else:
                y4 = S(3**n4 * x4)
                for i4 in range(1, n4 + 1):
                    firstPart4 = S((4**i4) * (3**(n4 - i4) * x4))
                    secondPart4 = S(firstPart4 * x4)
                    firstType4 = getType(firstPart4)
                    secondType4 = getType(secondPart4)
                    if y4 < maxValueAllowed and firstPart4 < maxValueAllowed:
                        numArr.append([y4, firstPart4, firstType4])
                    if y4 < maxValueAllowed and secondPart4 < maxValueAllowed:
                        numArr.append([y4, secondPart4, secondType4])
    #print(4)
    xMax3 = 300
    nMax3 = 10
    for n3 in range(1, nMax3 + 1):
        #n = 4
        for x3 in range(1, xMax3 + 1):
            if x3 % 3 == 0:
                pass  # skip multiples of 3
            else:
                y3 = 3**n3 * x3
                for i3 in range(1, n3 + 1):
                    firstPart3 = V((4**i3) * (3**(n3 - i3) * x3))
                    secondPart3 = S(firstPart3 * x3)
                    firstType3 = getType(firstPart3)
                    secondType3 = getType(secondPart3)
                    if y3 < maxValueAllowed and firstPart3 < maxValueAllowed:
                        numArr.append([y3, firstPart3, firstType3])
                    if y3 < maxValueAllowed and secondPart3 < maxValueAllowed:
                        numArr.append([y3, secondPart3, secondType3])
    #print(3)
    xMax5 = 300
    nMax5 = 10
    for n5 in range(1, nMax5 + 1):
        for x5 in range(1, xMax5 + 1):
            if x5 % 3 == 0:
                pass  # skip multiples of 3
            else:
                y5 = fG(3**n5 * x5)
                if (y5 + 1) % 3 == 0:
                    for i5 in range(1, n5 + 1):
                        firstPartSave5 = fG(x5)
                        secondPartSave5 = fG(x5)
                        for j in range(0, i5):
                            firstPartSave5 = S(firstPartSave5)
                        for m in range(0, i5 + 1):
                            secondPartSave5 = S(secondPartSave5)
                        firstPart5 = firstPartSave5
                        secondPart5 = secondPartSave5
                        firstType5 = getType(firstPart5)
                        secondType5 = getType(secondPart5)
                        if y5 < maxValueAllowed and firstPart5 < maxValueAllowed:
                            numArr.append([y5, firstPart5, firstType5])
                        if y5 < maxValueAllowed and secondPart5 < maxValueAllowed:
                            numArr.append([y5, secondPart5, secondType5])
    #print(5)
    start = 31
    end = 17
    treeNumbers = []
    maxRec = 15
    # Remove consecutive duplicates (numArr is sorted, so equal rows are adjacent).
    numArr.sort()
    length = len(numArr) - 1
    for i in range(0, length):
        if numArr[length - i] == numArr[length - i - 1]:
            numArr.pop(length - i)
    #print(numArr)
    # Group second-column values by their first-column key.
    startKnots = dict()
    pre = []
    for i in numArr:
        if pre == []:
            pre = i
            key = i[0]
            startKnots[key] = [i[1]]
        else:
            if i[0] == pre[0]:
                key = pre[0]
                startKnots[key].append(i[1])
            else:
                key = i[0]
                startKnots[key] = [i[1]]
                pre = i
    return startKnots

print(getCol())
```
| github_jupyter |
# 2017 August Duplicate Bug Detection
[**Find more on wiki**](https://wiki.nvidia.com/itappdev/index.php/Duplicate_Detection)
[**Demo Link**](http://qlan-vm-1.client.nvidia.com:8080/)
## Walk through of the Algorithm
<img src="imgsrc/Diagram.png">
## 1. Data Cleaning - SentenceParser Python 3
Available on Perforce and [Github](https://github.com/lanking520/NVBugsLib)
### Core Feature
- NLTK: remove stopwords and do stemming
- BeautifulSoup: Remove Html Tags
- General Regex: clean up white spaces and other symbols
### Other Functions:
- NVBugs Specific Cleaner for Synopsis, Description and Comments
- Counting Vectorizer embedded
- Auto-merge Column
```
def readfile(self, filepath, filetype, encod='ISO-8859-1', header=None):
    logger.info('Start reading File')
    if not os.path.isfile(filepath):
        logger.error("File Not Exist!")
        sys.exit()
    if filetype == 'csv':
        df = pd.read_csv(filepath, encoding=encod, header=header)
    elif filetype == 'json':
        df = pd.read_json(filepath, encoding=encod, lines=True)
    elif filetype == 'xlsx':
        df = pd.read_excel(filepath, encoding=encod, header=header)
    else:
        logger.error("Extension Type not Accepted!")
        sys.exit()
    # Store the loaded frame for later processing (e.g., processtext below)
    self.data = df

def processtext(self, column, removeSymbol=True, remove_stopwords=False, stemming=False):
    logger.info("Start Data Cleaning...")
    self.data[column] = self.data[column].str.replace(r'[\n\r\t]+', ' ')
    # Remove URLs
    self.data[column] = self.data[column].str.replace(self.regex_str[3], ' ')
    tempcol = self.data[column].values.tolist()
    stops = set(stopwords.words("english"))
    # This part takes a lot of time
    printProgressBar(0, len(tempcol), prefix='Progress:', suffix='Complete', length=50)
    for i in range(len(tempcol)):
        row = BeautifulSoup(tempcol[i], 'html.parser').get_text()
        if removeSymbol:
            row = re.sub('[^a-zA-Z0-9]', ' ', row)
        words = row.split()
        if remove_stopwords:
            words = [w for w in words if w not in stops and not w.replace('.', '', 1).isdigit()]
        row = ' '.join(words)
        tempcol[i] = row.lower()
        printProgressBar(i + 1, len(tempcol), prefix='Progress:', suffix='Complete', length=50)
    print("\n")
    return tempcol
```
### Process by line or by column
```
from SentenceParserPython3 import SentenceParser as SP
test = SP(20)
sample_text = "I @#$@have a @#$@#$@#%dog @#%@$^#$()_+%at home"
test.processline(sample_text, True, True)
```
## 2. Word Embedding
### 2.1 TF-IDF
**Term Frequency**, denoted by tf, is the number of occurrences of a term t in the document D.
**Inverse Document Frequency** of a term t, denoted by idf, is log(N/df), where N is the total number of documents in the corpus and df is the number of documents containing t. It reduces the weight of a term that occurs in many documents; in other words, a word with rare occurrences carries more weight.
TF-IDF = Term Frequency * Inverse Document Frequency<br>
Inverse Document Frequency = log(N/df)
**Vocabulary size: 10000-100000 is the range used in this project**
Note: TF-IDF returns a sparse matrix to reduce memory use. Sparse matrices are supported by K-Means, but sometimes we need to transform them into dense arrays when we actually use them for calculations.
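The tf and idf formulas above can be sketched in plain Python on a toy corpus (a hand-rolled illustration only; the project itself uses scikit-learn's `TfidfVectorizer`, which additionally smooths and normalizes the weights):

```python
import math

# Tiny toy corpus (made up for this illustration)
docs = [
    "driver crash on launch",
    "driver update fixes crash",
    "ui freeze on launch",
]
tokenized = [d.split() for d in docs]
N = len(docs)

def idf(term):
    # df = number of documents containing the term
    df = sum(term in doc for doc in tokenized)
    return math.log(N / df)

def tf_idf(term, doc):
    # tf = raw count of the term in this document
    return doc.count(term) * idf(term)

print(tf_idf("crash", tokenized[0]))  # "crash" appears in 2 of 3 docs -> lower weight
print(tf_idf("ui", tokenized[2]))     # "ui" appears in 1 of 3 docs -> higher weight
```

The rarer term "ui" ends up with a larger weight than the common term "crash", which is exactly the behaviour the idf factor is designed to produce.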
```
from sklearn.feature_extraction.text import TfidfVectorizer

def TFIDF(text, size):
    print("Using TFIDF Doing data cleaning...")
    vectorizer = TfidfVectorizer(stop_words='english', analyzer='word', strip_accents='unicode', max_features=size)
    X = vectorizer.fit_transform(text)
    return vectorizer, X
```
### Let's Translate one
REF: BugID 200235622
```
import pickle
from sklearn.externals import joblib
vectorizer = joblib.load('model/MSD2016NowTFIDF.pkl')
sample = 'GFE Share Telemetry item for OSC Hotkey Toggle'
result = vectorizer.transform([sample])
result.toarray()
```
### 2.2 Other Word Vectorization Tool
- Hashing Vectorization
- Word2Vec
- Infersent (Facebook)
- Skip-Thought
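Of these, the hashing trick behind hashing vectorization is simple enough to sketch directly (an illustration of the idea only, not scikit-learn's `HashingVectorizer`; the bucket count and hash function are arbitrary choices here):

```python
import zlib

def hashing_vectorize(text, n_features=16):
    """Count tokens into a fixed-size vector via a stable hash.

    No vocabulary needs to be stored, at the cost of possible
    collisions between different tokens.
    """
    vec = [0] * n_features
    for token in text.lower().split():
        # Stable hash (unlike built-in hash(), which is salted per process)
        vec[zlib.crc32(token.encode()) % n_features] += 1
    return vec

vec = hashing_vectorize("driver crash driver hang")
```

Because the output dimensionality is fixed up front, hashing vectorization scales to unbounded vocabularies, which is why it is listed here as an alternative to TF-IDF.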
```
import numpy as np
from gensim.models import word2vec

def W2V(text, size):
    sentences = []
    for idx in range(len(text)):
        sentences.append(text[idx].split())
    num_features = size
    min_word_count = 20
    num_workers = 4
    context = 10
    downsampling = 1e-3
    model_name = "./model/w2vec.model"
    model = word2vec.Word2Vec(sentences, workers=num_workers,
                              size=num_features, min_count=min_word_count,
                              window=context, sample=downsampling)
    model.init_sims(replace=True)
    return model

def Word2VecEmbed(text, model, num_features):
    worddict = {}
    for key in model.wv.vocab.keys():
        worddict[key] = model.wv.word_vec(key)
    X = []
    for idx in range(len(text)):
        words = text[idx].split()
        counter = 0
        temprow = np.zeros(num_features)
        for word in words:
            if word in worddict:
                counter += 1
                temprow += worddict[word]
        # Average the word vectors of the document (zero vector if no known words)
        if counter != 0:
            X.append(temprow / counter)
        else:
            X.append(temprow)
    X = np.array(X)
    return X
```
## 3. Linear PCA
**Principal component analysis (PCA)** is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components (or sometimes, principal modes of variation). The number of principal components is less than or equal to the smaller of the number of original variables or the number of observations. This transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to the preceding components. The resulting vectors are an uncorrelated orthogonal basis set. PCA is sensitive to the relative scaling of the original variables.
### TruncatedSVD
Dimensionality reduction using truncated SVD (aka LSA).
This transformer performs linear dimensionality reduction by means of truncated singular value decomposition (SVD). Contrary to PCA, this estimator does not center the data before computing the singular value decomposition. This means it can work with scipy.sparse matrices efficiently.
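The centering difference is easy to see with plain NumPy (an illustration written for this note, not the scikit-learn implementation): without centering, a large common offset in the data dominates the first singular value.

```python
import numpy as np

rng = np.random.default_rng(0)
# Data with a large mean offset in every feature
X = rng.normal(loc=5.0, scale=1.0, size=(100, 3))

# PCA-style: center first, then take the SVD of the centered matrix
Xc = X - X.mean(axis=0)
s_pca = np.linalg.svd(Xc, compute_uv=False)

# TruncatedSVD-style: SVD of the raw matrix, no centering
s_svd = np.linalg.svd(X, compute_uv=False)

# The uncentered decomposition spends its first component on the mean offset
print(s_svd[0], s_pca[0])
```

Skipping the centering step is what lets TruncatedSVD operate on `scipy.sparse` matrices directly: centering a sparse matrix would densify it.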
### Dimension Reduction
In our model, we reduce the dimension from 100000 to 6000 and keep **77%** of the variance.
```
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer

def DRN(X, DRN_size):
    print("Performing dimensionality reduction using LSA")
    svd = TruncatedSVD(DRN_size)
    normalizer = Normalizer(copy=False)
    lsa = make_pipeline(svd, normalizer)
    X = lsa.fit_transform(X)
    explained_variance = svd.explained_variance_ratio_.sum()
    print("Explained variance of the SVD step: {}%".format(int(explained_variance * 100)))
    return svd, X
```
## 4. Clustering
**Clustering** is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). It is a main task of exploratory data mining, and a common technique for statistical data analysis, used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, bioinformatics, data compression, and computer graphics.
### 4.1 KMeans Clustering
The algorithm we currently use is general K-Means without mini-batches; mini-batches do not work as well as standard K-Means on our dataset.
### 4.2 "Yinyang" K-means and K-nn using NVIDIA CUDA
K-means implementation is based on ["Yinyang K-Means: A Drop-In Replacement
of the Classic K-Means with Consistent Speedup"](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/ding15.pdf).
While it introduces some overhead and many conditional clauses
which are bad for CUDA, it still shows 1.6-2x speedup against the Lloyd
algorithm. K-nearest neighbors employ the same triangle inequality idea and
require precalculated centroids and cluster assignments, similar to the flattened
ball tree.
| Benchmarks | sklearn KMeans | KMeansRex | KMeansRex OpenMP | Serban | kmcuda | kmcuda 2 GPUs |
|---------------------------|----------------|-----------|------------------|--------|--------|---------------|
| speed | 1x | 4.5x | 8.2x | 15.5x | 17.8x | 29.8x |
| memory | 1x | 2x | 2x | 0.6x | 0.6x | 0.6x |
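The triangle-inequality idea behind Yinyang K-means (and Elkan-style pruning generally) can be sketched in a few lines. This is a simplified single-point illustration written for this note, not the actual kmcuda implementation; real implementations precompute the centroid-to-centroid distances once per iteration.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def assign_with_pruning(point, centroids):
    """Assign `point` to its nearest centroid, skipping centroids that the
    triangle inequality proves cannot beat the best distance found so far."""
    best, best_d = 0, dist(point, centroids[0])
    skipped = 0
    for j in range(1, len(centroids)):
        # Triangle inequality: d(point, c_j) >= d(c_best, c_j) - d(point, c_best).
        # So if d(c_best, c_j) >= 2 * best_d, centroid c_j cannot be closer.
        if dist(centroids[best], centroids[j]) >= 2 * best_d:
            skipped += 1
            continue
        d = dist(point, centroids[j])
        if d < best_d:
            best, best_d = j, d
    return best, skipped
```

Every skipped centroid is one full distance computation avoided, which is where the reported speedups come from despite the extra conditional logic.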
```
from sklearn.cluster import KMeans

def kmtrain(X, num_clusters):
    km = KMeans(n_clusters=num_clusters, init='k-means++', max_iter=100, n_init=1, verbose=1)
    print("Clustering sparse data with %s" % km)
    km.fit(X)
    return km

from libKMCUDA import kmeans_cuda

def cudatrain(X, num_clusters):
    centroids, assignments = kmeans_cuda(X, num_clusters, verbosity=1, yinyang_t=0, seed=3)
    return centroids, assignments
```
# Verification
### Check that the assignments match the actual ones
```
correct = 0.0
assignment = []
printProgressBar(0, X.shape[0], prefix='Progress:', suffix='Complete', length=50)
for idx, item in enumerate(X):
    center = np.squeeze(np.sum(np.square(item - centroid), axis=1)).argsort()[0]
    if assign[idx] == center:
        correct += 1.0
    assignment.append(center)
    printProgressBar(idx, X.shape[0], prefix='Progress:', suffix='Complete' + ' Acc:' + str(correct / (idx + 1)), length=50)
```
### See the Distribution based on the assignment
```
count = np.bincount(assignment)
count
```
### Filtering the duplicate bug set to remove pairs whose bugs no longer exist
```
verifier = pd.read_csv('DuplicateBugs.csv', header=None)
verifier = verifier.as_matrix()
available = []
printProgressBar(0, verifier.shape[0], prefix='Progress:', suffix='Complete', length=50)
for idx, row in enumerate(verifier):
    if not np.isnan(row).any():
        leftcomp = df.loc[df["BugId"] == int(row[0])]
        rightcomp = df.loc[df["BugId"] == int(row[1])]
        if (not leftcomp.empty) and (not rightcomp.empty):
            available.append([leftcomp.index[0], rightcomp.index[0]])
    printProgressBar(idx, verifier.shape[0], prefix='Progress:', suffix='Complete', length=50)
temp = np.array(available)
```
### Test that duplicated bug pairs fall inside the top 3 clusters and top 5 recommendations
```
correctrow = 0
correctdist = []
vectorizer = joblib.load(root + divname + 'TFIDF.pkl')
X = vectorizer.transform(text)
printProgressBar(0, temp.shape[0], prefix='Progress:', suffix='Complete', length=50)
for idx, row in enumerate(temp):
    clusterset = np.squeeze(np.sum(np.square(real_center - X[row[0]].toarray()), axis=1)).argsort()[0:3]
    dist = []
    for cluster in clusterset:
        dataset = wholeX[np.array((df["cluster"] == cluster).tolist())]
        for datarow in dataset:
            dist.append(np.sum(np.square(datarow.toarray() - wholeX[row[0]].toarray())))
    dist = np.array(dist)
    smalldist = np.sum(np.square(wholeX[row[1]].toarray() - wholeX[row[0]].toarray()))
    sorteddist = np.sort(dist)
    if sorteddist.shape[0] <= 5 or smalldist <= sorteddist[5]:
        correctrow += 1
        correctdist.append(smalldist)
    printProgressBar(idx, temp.shape[0], prefix='Progress:', suffix='Complete', length=50)
print("Accuracy: " + str(1.0 * correctrow / temp.shape[0]))
```
# Prediction
```
def bugidgetter(df, cluster, loc):
    bigset = df.loc[df['cluster'] == cluster]
    return bigset.iloc[[loc], :]["BugId"].tolist()[0]

def bugindata(df, bugid):
    return not df.loc[df["BugId"] == int(bugid)].empty

def predict(text, topkclusters, topktopics):
    bugiddist = []
    row = vectorizer.transform([text])
    clusterset = np.squeeze(np.sum(np.square(real_center - row.toarray()), axis=1)).argsort()[0:topkclusters]
    dist = []
    print(clusterset)
    for cluster in clusterset:
        dataset = X[np.array((df["cluster"] == cluster).tolist())]
        for idx, datarow in enumerate(dataset):
            dist.append([np.sum(np.square(datarow.toarray() - row.toarray())), cluster, idx])
    dist = np.array(dist)
    topk = dist[dist[:, 0].argsort()][0:topktopics]
    # print(topk)
    for idx, row in enumerate(topk):
        bugiddist.append({'BugId': bugidgetter(df, row[1], row[2]), 'Distance': row[0]})
    return bugiddist
```
[](https://colab.research.google.com/github/davemlz/eemont/blob/master/docs/tutorials/018-Complete-Preprocessing-One-Method.ipynb)
# Complete Preprocessing (Clouds Masking, Shadows Masking, Scaling and Offsetting) With Just One Method
_Tutorial created by **David Montero Loaiza**_: [GitHub](https://github.com/davemlz) | [Twitter](https://twitter.com/dmlmont)
- GitHub Repo: [https://github.com/davemlz/eemont](https://github.com/davemlz/eemont)
- PyPI link: [https://pypi.org/project/eemont/](https://pypi.org/project/eemont/)
- Conda-forge: [https://anaconda.org/conda-forge/eemont](https://anaconda.org/conda-forge/eemont)
- Documentation: [https://eemont.readthedocs.io/](https://eemont.readthedocs.io/)
- More tutorials: [https://github.com/davemlz/eemont/tree/master/docs/tutorials](https://github.com/davemlz/eemont/tree/master/docs/tutorials)
## Let's start!
If required, please uncomment:
```
#!pip install eemont
#!pip install geemap
```
Import the required packages.
```
import ee, eemont, geemap
import geemap.colormaps as cm
```
Authenticate and Initialize Earth Engine and geemap.
```
Map = geemap.Map()
```
Let's define a point of interest:
```
poi = ee.Geometry.BBoxFromQuery("Puerto Rico",user_agent = "eemont-tutorial-017")
```
Let's take the L8 product as example:
```
L8 = (ee.ImageCollection("LANDSAT/LC08/C01/T1_SR")
      .filterBounds(poi)
      .filterDate("2020-01-01", "2021-01-01"))
```
## Complete Preprocessing
Do you want to do almost everything with just one method? Here it is: `preprocess`.
```
L8pre = L8.preprocess()
```
The `preprocess` method automatically masks clouds and shadows and scales and offsets the image or image collection.
NOTE: You can apply the `preprocess` method to any ee.Image or ee.ImageCollection from the GEE STAC.
If you want to change an argument for cloud masking, use the same arguments that the `maskClouds` method accepts. Otherwise, the default parameters of `maskClouds` are used in `preprocess`:
```
L8pre = L8.preprocess(maskShadows = False)
```
After preprocessing, you can process your ee.Image or ee.ImageCollection as you want!
```
L8full = L8pre.spectralIndices(["SAVI","GNDVI"],L = 0.5).median()
```
Visualization parameters:
```
vis = {
    "min": 0,
    "max": 1,
    "palette": cm.palettes.ndvi
}
```
Visualize everything with `geemap`:
```
Map.addLayer(L8full.select("SAVI"),vis,"SAVI")
Map.addLayer(L8full.select("GNDVI"),vis,"GNDVI")
Map.centerObject(poi.centroid(1),8)
Map
```
# Problem 1
**Least Recently Used Cache**
We have briefly discussed caching as part of a practice problem while studying hash maps.
The lookup operation (i.e., `get()`) and the insertion operation (`put()` / `set()`) are supposed to be fast for a cache memory.
While doing the `get()` operation, if the entry is found in the cache, it is known as a `cache hit`. If, however, the entry is not found, it is known as a `cache miss`.
When designing a cache, we also place an upper bound on the size of the cache. If the cache is full and we want to add a new entry to the cache, we use some criteria to remove an element. After removing an element, we use the `put()` operation to insert the new element. The remove operation should also be fast.
For our first problem, the goal will be to design a data structure known as a **Least Recently Used (LRU) cache**. An LRU cache is a type of cache in which we remove the least recently used entry when the cache memory reaches its limit. For the current problem, consider both `get` and `set` operations as a `use operation`.
Your job is to use an appropriate data structure(s) to implement the cache.
- In case of a `cache hit`, your `get()` operation should return the appropriate value.
- In case of a `cache miss`, your `get()` should return -1.
- While putting an element in the cache, your `put()` / `set()` operation must insert the element. If the cache is full, you must write code that removes the least recently used entry first and then insert the element.
All operations must take `O(1)` time.
For the current problem, you can consider the `size of cache = 5`.
Here is some boilerplate code and some example test cases to get you started on this problem:
```py
class LRU_Cache(object):

    def __init__(self, capacity):
        # Initialize class variables
        pass

    def get(self, key):
        # Retrieve item from provided key. Return -1 if nonexistent.
        pass

    def set(self, key, value):
        # Set the value if the key is not present in the cache. If the cache is at capacity remove the oldest item.
        pass

our_cache = LRU_Cache(5)

our_cache.set(1, 1)
our_cache.set(2, 2)
our_cache.set(3, 3)
our_cache.set(4, 4)

our_cache.get(1)  # returns 1
our_cache.get(2)  # returns 2
our_cache.get(9)  # returns -1 because 9 is not present in the cache

our_cache.set(5, 5)
our_cache.set(6, 6)

our_cache.get(3)  # returns -1 because the cache reached its capacity and 3 was the least recently used entry
```
## Approach 1: Ordered Dictionary
<img src="https://raw.githubusercontent.com/ZacksAmber/Udacity-Data-Structure-Algorithms/main/2/5/Project/problem_1.drawio.svg"></img>
```
from typing import Union
from collections import OrderedDict

# O(1) operations, O(capacity) space
class LRU_Cache(OrderedDict):
    """LRU_Cache caches the most recently added or visited key-value pairs within the capacity defined by the user.

    Attributes:
        capacity: The capacity of the LRU_Cache instance.
    """

    def __init__(self, capacity: int):
        """Inits LRU_Cache with a capacity.

        Args:
            capacity: The capacity of the LRU_Cache instance.
        """
        self.capacity = capacity

    def get(self, key: Union[str, int, float]):  # For Python 3.5 - 3.9
    # def get(self, key: str | int | float):  # For Python >= 3.10
        """Return the value for key if key is in the LRU_Cache instance, else -1.

        Args:
            key: The key of the item.

        Returns:
            value: The value of the item.
        """
        # Cache hit
        if key in self:
            self.move_to_end(key)
            return self[key]
        # Cache miss
        return -1

    def set(self, key: Union[str, int, float], value: any):  # For Python 3.5 - 3.9
    # def set(self, key: str | int | float, value: any):  # For Python >= 3.10
        """Add an item into the LRU_Cache instance if the key is not present in the cache.

        If the key is present in the cache, this item becomes the most recently visited one.
        If the cache is at capacity, remove the oldest item.

        Args:
            key: The key of the item.
            value: The value of the item.
        """
        if key in self:
            self.move_to_end(key)
        self[key] = value
        if len(self) > self.capacity:
            self.popitem(last=False)


our_cache = LRU_Cache(5)

our_cache.set(1, 1)
our_cache.set(2, 2)
our_cache.set(3, 3)
our_cache.set(4, 4)

our_cache.get(1)  # returns 1
our_cache.get(2)  # returns 2
our_cache.get(9)  # returns -1 because 9 is not present in the cache

our_cache.set(5, 5)
our_cache.set(6, 6)

our_cache.get(3)  # returns -1 because the cache reached its capacity and 3 was the least recently used entry
```
# Problem 2: File Recursion
For this problem, the goal is to write code for finding all files under a directory (and all directories beneath it) that end with `.c`.
Here is an example of a test directory listing, which can be downloaded [here](https://s3.amazonaws.com/udacity-dsand/testdir.zip):
```
./testdir
./testdir/subdir1
./testdir/subdir1/a.c
./testdir/subdir1/a.h
./testdir/subdir2
./testdir/subdir2/.gitkeep
./testdir/subdir3
./testdir/subdir3/subsubdir1
./testdir/subdir3/subsubdir1/b.c
./testdir/subdir3/subsubdir1/b.h
./testdir/subdir4
./testdir/subdir4/.gitkeep
./testdir/subdir5
./testdir/subdir5/a.c
./testdir/subdir5/a.h
./testdir/t1.c
./testdir/t1.h
```
Python's `os` module will be useful—in particular, you may want to use the following resources:
[os.path.isdir(path)](https://docs.python.org/3.7/library/os.path.html#os.path.isdir)
[os.path.isfile(path)](https://docs.python.org/3.7/library/os.path.html#os.path.isfile)
[os.listdir(directory)](https://docs.python.org/3.7/library/os.html#os.listdir)
[os.path.join(...)](https://docs.python.org/3.7/library/os.path.html#os.path.join)
**Note:** `os.walk()` is a handy Python method which can achieve this task very easily. However, for this problem you are not allowed to use `os.walk()`.
Here is some code for the function to get you started:
```
def find_files(suffix, path):
    """
    Find all files beneath path with file name suffix.

    Note that a path may contain further subdirectories
    and those subdirectories may also contain further subdirectories.

    There is no limit to how deep the subdirectories can be.

    Args:
      suffix(str): suffix of the file name to be found
      path(str): path of the file system

    Returns:
       a list of paths
    """
    return None
```
**OS Module Exploration Code**
```
## Locally save and call this file ex.py ##
# Code to demonstrate the use of some of the OS modules in python
import os
# Let us print the files in the directory in which you are running this script
print (os.listdir("."))
# Let us check if this file is indeed a file!
print (os.path.isfile("./ex.py"))
# Does the file end with .py?
print ("./ex.py".endswith(".py"))
```
```
!tree -a testdir
```
5 SIMPLE STEPS
1. What's the simplest possible input?
2. Play around with examples and visualize!
3. Relate hard cases to simpler cases.
4. Generalize the pattern.
5. Write code by combining recursive pattern with the base case.
## Approach 1: os.walk
```
import os

def find_files(suffix="*", path="./"):
    """Find all files beneath path with file name suffix (extension).

    Args:
        suffix: suffix of the file name to be found.
        path: path of the file system.

    Returns:
        None
    """
    # "*" is the wildcard character to match everything
    if suffix == "*":
        suffix = ""

    for dirpath, dirnames, filenames in sorted(os.walk(path, topdown=True)):
        offset = len(dirpath.split(os.sep))
        print(" " * (offset - 1), dirpath, sep="")
        for file in sorted(filenames):
            if file.endswith(suffix):
                print(" " * offset, os.path.join(dirpath, file), sep="")

find_files("*", "./testdir")
find_files(".c", "./testdir")
```
---
## Approach 2: No os.walk
<img src="https://raw.githubusercontent.com/ZacksAmber/Udacity-Data-Structure-Algorithms/main/2/5/Project/problem_2.drawio.svg"></img>
```
import os

def find_files(suffix="*", path="./"):
    """Find all files beneath path with file name suffix (extension).

    Args:
        suffix: suffix of the file name to be found.
        path: path of the file system.

    Returns:
        None
    """
    # "*" is the wildcard character to match everything
    if suffix == "*":
        suffix = ""

    # Sort the directory listing
    listdir = sorted(os.listdir(path))
    for child in listdir:
        current_path = os.path.join(path, child)
        if os.path.isdir(current_path):
            print(f"{current_path}")
            find_files(suffix, current_path)
        else:
            if current_path.endswith(suffix):
                print(f"{current_path}")

find_files("*", "./testdir")

# Filter files with extension .c
find_files(".c", "./testdir")
```
# Stellargraph example: GraphSAGE on the CORA citation network
Import NetworkX and stellar:
```
import networkx as nx
import pandas as pd
import os
import stellargraph as sg
from stellargraph.mapper import GraphSAGENodeGenerator
from stellargraph.layer import GraphSAGE
from tensorflow.keras import layers, optimizers, losses, metrics, Model
from sklearn import preprocessing, feature_extraction, model_selection
```
### Loading the CORA network
**Downloading the CORA dataset:**
The dataset used in this demo can be downloaded from [here](https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz).
The following is the description of the dataset:
> The Cora dataset consists of 2708 scientific publications classified into one of seven classes.
> The citation network consists of 5429 links. Each publication in the dataset is described by a
> 0/1-valued word vector indicating the absence/presence of the corresponding word from the dictionary.
> The dictionary consists of 1433 unique words. The README file in the dataset provides more details.
Download and unzip the cora.tgz file to a location on your computer and set the `data_dir` variable to
point to the location of the dataset (the directory containing "cora.cites" and "cora.content").
```
data_dir = os.path.expanduser("~/data/cora")
```
Load the graph from edgelist (in `cited-paper` <- `citing-paper` order)
```
edgelist = pd.read_csv(os.path.join(data_dir, "cora.cites"), sep='\t', header=None, names=["target", "source"])
edgelist["label"] = "cites"
Gnx = nx.from_pandas_edgelist(edgelist, edge_attr="label")
nx.set_node_attributes(Gnx, "paper", "label")
```
Load the features and subject for the nodes
```
feature_names = ["w_{}".format(ii) for ii in range(1433)]
column_names = feature_names + ["subject"]
node_data = pd.read_csv(os.path.join(data_dir, "cora.content"), sep='\t', header=None, names=column_names)
```
We aim to train a graph-ML model that will predict the "subject" attribute on the nodes. These subjects are one of 7 categories:
```
set(node_data["subject"])
```
### Splitting the data
For machine learning we want to take a subset of the nodes for training, and use the rest for testing. We'll use scikit-learn again to do this
```
train_data, test_data = model_selection.train_test_split(node_data, train_size=0.1, test_size=None, stratify=node_data['subject'])
```
Note using stratified sampling gives the following counts:
```
from collections import Counter
Counter(train_data['subject'])
```
The training set has class imbalance that might need to be compensated, e.g., via using a weighted cross-entropy loss in model training, with class weights inversely proportional to class support. However, we will ignore the class imbalance in this example, for simplicity.
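If we did want to compensate, class weights inversely proportional to class support can be computed in a few lines (a sketch of the standard "balanced" heuristic, written for this note; the resulting dictionary could then be passed to Keras training via the `class_weight` argument):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class by n_samples / (n_classes * class_count),
    so rare classes receive proportionally larger weights."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

# Toy example: 'b' is rare, so it gets weight 2.0; 'a' gets 2/3
inverse_frequency_weights(["a", "a", "a", "b"])
```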
### Converting to numeric arrays
For our categorical target, we will use one-hot vectors that will be fed into a soft-max Keras layer during training. To do this conversion ...
```
target_encoding = feature_extraction.DictVectorizer(sparse=False)
train_targets = target_encoding.fit_transform(train_data[["subject"]].to_dict('records'))
test_targets = target_encoding.transform(test_data[["subject"]].to_dict('records'))
```
We now do the same for the node attributes we want to use to predict the subject. These are the feature vectors that the Keras model will use as input. The CORA dataset contains attributes 'w_x' that correspond to words found in that publication. If a word occurs at least once in a publication the relevant attribute is set to one, otherwise it is zero.
```
node_features = node_data[feature_names]
```
## Creating the GraphSAGE model in Keras
Now create a StellarGraph object from the NetworkX graph and the node features and targets. It is StellarGraph objects that we use in this library to perform machine learning tasks on.
```
G = sg.StellarGraph(Gnx, node_features=node_features)
print(G.info())
```
To feed data from the graph to the Keras model we need a data generator that feeds data from the graph to the model. The generators are specialized to the model and the learning task so we choose the `GraphSAGENodeGenerator` as we are predicting node attributes with a GraphSAGE model.
We need two other parameters, the `batch_size` to use for training and the number of nodes to sample at each level of the model. Here we choose a two-level model with 10 nodes sampled in the first layer, and 5 in the second.
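As a quick sanity check on what these sampling sizes mean, each target node pulls in at most 1 + 10 + 10 × 5 = 61 sampled nodes (a small helper written for this tutorial, not part of StellarGraph):

```python
def sampled_nodes_per_target(num_samples):
    """Upper bound on the number of nodes fetched for one target node
    when num_samples[i] neighbours are drawn at hop i."""
    total, frontier = 1, 1
    for s in num_samples:
        frontier *= s
        total += frontier
    return total

sampled_nodes_per_target([10, 5])  # 1 + 10 + 50 = 61
```

This product grows quickly with depth, which is why GraphSAGE samplers typically shrink the fan-out at deeper hops.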
```
batch_size = 50
num_samples = [10, 5]
```
A `GraphSAGENodeGenerator` object is required to send the node features in sampled subgraphs to Keras
```
generator = GraphSAGENodeGenerator(G, batch_size, num_samples)
```
Using the `generator.flow()` method, we can create iterators over nodes that should be used to train, validate, or evaluate the model. For training we use only the training nodes returned from our splitter and the target values. The `shuffle=True` argument is given to the `flow` method to improve training.
```
train_gen = generator.flow(train_data.index, train_targets, shuffle=True)
```
Now we can specify our machine learning model, we need a few more parameters for this:
* the `layer_sizes` is a list of hidden feature sizes of each layer in the model. In this example we use 32-dimensional hidden node features at each layer.
* The `bias` and `dropout` are internal parameters of the model.
```
graphsage_model = GraphSAGE(
    layer_sizes=[32, 32],
    generator=generator,
    bias=True,
    dropout=0.5,
)
```
Now we create a model to predict the 7 categories using Keras softmax layers.
```
x_inp, x_out = graphsage_model.build()
prediction = layers.Dense(units=train_targets.shape[1], activation="softmax")(x_out)
```
### Training the model
Now let's create the actual Keras model with the graph inputs `x_inp` provided by the `graph_model` and outputs being the predictions from the softmax layer
```
model = Model(inputs=x_inp, outputs=prediction)
model.compile(
    optimizer=optimizers.Adam(lr=0.005),
    loss=losses.categorical_crossentropy,
    metrics=["acc"],
)
```
Train the model, keeping track of its loss and accuracy on the training set, and its generalisation performance on the test set (we need to create another generator over the test data for this)
```
test_gen = generator.flow(test_data.index, test_targets)

history = model.fit_generator(
    train_gen,
    epochs=20,
    validation_data=test_gen,
    verbose=2,
    shuffle=False
)
import matplotlib.pyplot as plt
%matplotlib inline

def plot_history(history):
    metrics = sorted(history.history.keys())
    metrics = metrics[:len(metrics) // 2]
    for m in metrics:
        # summarize history for metric m
        plt.plot(history.history[m])
        plt.plot(history.history['val_' + m])
        plt.title(m)
        plt.ylabel(m)
        plt.xlabel('epoch')
        plt.legend(['train', 'test'], loc='best')
        plt.show()

plot_history(history)
```
Now that we have trained the model, we can evaluate it on the test set.
```
test_metrics = model.evaluate_generator(test_gen)

print("\nTest Set Metrics:")
for name, val in zip(model.metrics_names, test_metrics):
    print("\t{}: {:0.4f}".format(name, val))
```
### Making predictions with the model
Now let's get the predictions themselves for all nodes using another node iterator:
```
all_nodes = node_data.index
all_mapper = generator.flow(all_nodes)
all_predictions = model.predict_generator(all_mapper)
```
These predictions will be the output of the softmax layer, so to get the final categories we'll use the `inverse_transform` method of our target attribute specification to turn these values back into the original categories.
```
node_predictions = target_encoding.inverse_transform(all_predictions)
```
Let's have a look at a few:
```
results = pd.DataFrame(node_predictions, index=all_nodes).idxmax(axis=1)
df = pd.DataFrame({"Predicted": results, "True": node_data['subject']})
df.head(10)
```
Add the predictions to the graph, and save as graphml, e.g. for visualisation in [Gephi](https://gephi.org)
```
for nid, pred, true in zip(df.index, df["Predicted"], df["True"]):
    Gnx.nodes[nid]["subject"] = true
    Gnx.nodes[nid]["PREDICTED_subject"] = pred.split("=")[-1]
```
Also add isTrain and isCorrect node attributes:
```
for nid in train_data.index:
    Gnx.nodes[nid]["isTrain"] = True

for nid in test_data.index:
    Gnx.nodes[nid]["isTrain"] = False

for nid in Gnx.nodes():
    Gnx.nodes[nid]["isCorrect"] = Gnx.nodes[nid]["subject"] == Gnx.nodes[nid]["PREDICTED_subject"]
```
Save in GraphML format
```
pred_fname = "pred_n={}.graphml".format(num_samples)
nx.write_graphml(Gnx, os.path.join(data_dir,pred_fname))
```
## Node embeddings
Evaluate node embeddings as activations of the output of graphsage layer stack, and visualise them, coloring nodes by their subject label.
The GraphSAGE embeddings are the output of the GraphSAGE layers, namely the `x_out` variable. Let's create a new model with the same inputs as we used previously `x_inp` but now the output is the embeddings rather than the predicted class. Additionally note that the weights trained previously are kept in the new model.
```
embedding_model = Model(inputs=x_inp, outputs=x_out)
emb = embedding_model.predict_generator(all_mapper)
emb.shape
```
Project the embeddings to 2d using either TSNE or PCA transform, and visualise, coloring nodes by their subject label
```
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import pandas as pd
import numpy as np
X = emb
y = np.argmax(target_encoding.transform(node_data[["subject"]].to_dict('records')), axis=1)
if X.shape[1] > 2:
transform = TSNE #PCA
trans = transform(n_components=2)
emb_transformed = pd.DataFrame(trans.fit_transform(X), index=node_data.index)
emb_transformed['label'] = y
else:
emb_transformed = pd.DataFrame(X, index=node_data.index)
emb_transformed = emb_transformed.rename(columns = {'0':0, '1':1})
emb_transformed['label'] = y
alpha = 0.7
fig, ax = plt.subplots(figsize=(7,7))
ax.scatter(emb_transformed[0], emb_transformed[1], c=emb_transformed['label'].astype("category"),
cmap="jet", alpha=alpha)
ax.set(aspect="equal", xlabel="$X_1$", ylabel="$X_2$")
plt.title('{} visualization of GraphSAGE embeddings for cora dataset'.format(transform.__name__))
plt.show()
```
```
try:
from openmdao.utils.notebook_utils import notebook_mode
except ImportError:
!python -m pip install openmdao[notebooks]
from openmdao.utils.assert_utils import assert_near_equal
import os
if os.path.exists('cases.sql'):
os.remove('cases.sql')
```
# Driver Recording
A CaseRecorder is commonly attached to the problem’s Driver in order to gain insight into the convergence of the model as the driver finds a solution. By default, a recorder attached to a driver will record the design variables, constraints and objectives.
The driver recorder is capable of capturing any values from any part of the model, not just the design variables, constraints, and objectives.
```
import openmdao.api as om
om.show_options_table("openmdao.core.driver.Driver", recording_options=True)
```
```{note}
Note that the `excludes` option takes precedence over the `includes` option.
```
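The precedence rule can be illustrated without OpenMDAO at all. The sketch below mimics the glob-based filtering with the stdlib `fnmatch` module; `would_record` and the variable names are made up for illustration and are not OpenMDAO's actual implementation:

```python
from fnmatch import fnmatch

def would_record(name, includes, excludes):
    # excludes takes precedence: a variable matching any excludes
    # pattern is dropped even when an includes pattern also matches it
    if any(fnmatch(name, pat) for pat in excludes):
        return False
    return any(fnmatch(name, pat) for pat in includes)

print(would_record("obj_cmp.y1", includes=["*"], excludes=[]))         # True
print(would_record("obj_cmp.y1", includes=["*"], excludes=["obj_*"]))  # False
```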
## Driver Recording Example
In the example below, we first run a case while recording at the driver level. Then, we examine the objective, constraint, and design variable values at the last recorded case. Lastly, we print the full contents of the last case, including outputs from the problem that are not design variables, constraints, or objectives.
Specifically, `y1` and `y2` are some of those intermediate outputs that are recorded due to the use of:
`driver.recording_options['includes'] = ['*']`
```
from openmdao.utils.notebook_utils import get_code
from myst_nb import glue
glue("code_src89", get_code("openmdao.test_suite.components.sellar.SellarDerivatives"), display=False)
```
:::{Admonition} `SellarDerivatives` class definition
:class: dropdown
{glue:}`code_src89`
:::
```
import openmdao.api as om
from openmdao.test_suite.components.sellar import SellarDerivatives
import numpy as np
prob = om.Problem(model=SellarDerivatives())
model = prob.model
model.add_design_var('z', lower=np.array([-10.0, 0.0]),
upper=np.array([10.0, 10.0]))
model.add_design_var('x', lower=0.0, upper=10.0)
model.add_objective('obj')
model.add_constraint('con1', upper=0.0)
model.add_constraint('con2', upper=0.0)
driver = prob.driver = om.ScipyOptimizeDriver(optimizer='SLSQP', tol=1e-9)
driver.recording_options['includes'] = ['*']
driver.recording_options['record_objectives'] = True
driver.recording_options['record_constraints'] = True
driver.recording_options['record_desvars'] = True
driver.recording_options['record_inputs'] = True
driver.recording_options['record_outputs'] = True
driver.recording_options['record_residuals'] = True
recorder = om.SqliteRecorder("cases.sql")
driver.add_recorder(recorder)
prob.setup()
prob.run_driver()
prob.cleanup()
cr = om.CaseReader("cases.sql")
driver_cases = cr.list_cases('driver')
assert len(driver_cases) == 7
last_case = cr.get_case(driver_cases[-1])
print(last_case)
last_case.get_objectives()
assert_near_equal(last_case.get_objectives()['obj'], 3.18339395, tolerance=1e-8)
last_case.get_design_vars()
design_vars = last_case.get_design_vars()
assert_near_equal(design_vars['x'], 0., tolerance=1e-8)
assert_near_equal(design_vars['z'][0], 1.97763888, tolerance=1e-8)
last_case.get_constraints()
constraints = last_case.get_constraints()
assert_near_equal(constraints['con1'], 0, tolerance=1e-8)
assert_near_equal(constraints['con2'], -20.24472223, tolerance=1e-8)
last_case.inputs['obj_cmp.x']
assert_near_equal(last_case.inputs['obj_cmp.x'], 0, tolerance=1e-8)
last_case.outputs['z']
assert_near_equal(last_case.outputs['z'][0], 1.97763888, tolerance=1e-8)
assert_near_equal(last_case.outputs['z'][1], 0, tolerance=1e-8)
last_case.residuals['obj']
assert_near_equal(last_case.residuals['obj'], 0, tolerance=1e-8)
last_case['y1']
assert_near_equal(last_case['y1'], 3.16, tolerance=1e-8)
```
# Theano and Lasagne
and what they're good for
# Warm-up
* write a numpy function that computes the sum of squares of the numbers from 0 to N, where N is an argument
* the array of numbers from 0 to N is numpy.arange(N)
```
!pip install Theano
!pip install lasagne
import numpy as np
def sum_squares(N):
return <the sum of squares of the numbers from 0 to N>
%%time
sum_squares(10**8)
```
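For reference, one possible vectorized numpy solution to the warm-up (interpreting "from 0 to N" as `numpy.arange(N)`, i.e. 0..N-1):

```python
import numpy as np

def sum_squares(N):
    # sum of squares of the numbers 0..N-1, over numpy.arange(N);
    # float64 is used so that very large N does not overflow an integer dtype
    return (np.arange(N, dtype=np.float64) ** 2).sum()

print(sum_squares(5))  # 0 + 1 + 4 + 9 + 16 = 30.0
```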
# theano teaser
How to do the same thing:
```
import theano
import theano.tensor as T
#the future parameter of the function
N = T.scalar("a dimension",dtype='int32')
#a recipe for computing the sum of squares
result = (T.arange(N)**2).sum()
#compile the "sum of squares of the numbers from 0 to N" function
sum_function = theano.function(inputs = [N],outputs=result)
%%time
sum_function(10**8)
```
# How does it work?
* You write a "recipe" for computing the outputs from the inputs
* In fancier terms: you describe a symbolic computation graph
* There are 2 kinds of beasts: "inputs" and "transformations"
* Both can be numbers, arrays, matrices, tensors, etc.
* An input is a function argument: the slot the argument is plugged into when the function is called.
* N is the input in the example above
* Transformations are recipes for computing something from inputs and constants
* (T.arange(N)**2).sum() is 3 consecutive transformations of N
* They work almost exactly like vector operations in numpy
* almost everything that exists in numpy exists in theano.tensor under the same name
* np.mean -> T.mean
* np.arange -> T.arange
* np.cumsum -> T.cumsum
* and so on...
* Very occasionally the name or the syntax differs - ask the instructors or Google
Still confused? Let's fix that.
```
#inputs
example_input_integer = T.scalar("an input: a single number (example)",dtype='float32')
example_input_tensor = T.tensor4("an input: a 4-dimensional tensor (example)")
#don't worry, we won't need the tensor
input_vector = T.vector("an input: a vector of integers", dtype='int32')
#transformations
#elementwise multiplication
double_the_vector = input_vector*2
#elementwise cosine
elementwise_cosine = T.cos(input_vector)
#difference between the square of each element and the element itself
vector_squares = input_vector**2 - input_vector
double_the_vector
#now you try:
#create 2 vectors of float32 numbers
my_vector = <a float32 vector>
my_vector2 = <another one just like it>
#write a transformation that computes
#(vector 1)*(vector 2) / (sin(vector 1) + 1)
my_transformation = <the transformation>
print (my_transformation)
#it's fine that the result is not a number
```
# Compilation
* Up to this point we've been using "symbolic" variables
* we wrote a recipe for the computation, but computed nothing
* to actually use a recipe, it has to be compiled
```
inputs = [<what the function depends on>]
outputs = [<what the function computes (several in a list, or a single transformation)>]
# compile the transformations we wrote into a function
my_function = theano.function(
inputs,outputs,
allow_input_downcast=True #cast input types automatically (optional)
)
#it can be called like this:
print ("using python lists:")
print (my_function([1,2,3],[4,5,6]))
print()
#or like this.
#By the way, here the float type is cast to the second vector's type
print ("using numpy arrays:")
print (my_function(np.arange(10),
np.linspace(5,6,10,dtype='float')))
```
# A debugging hint
* If your function is large, compilation can take some time.
* To avoid the wait, you can evaluate an expression without compiling it
* You save the one-time compilation cost, but the code itself runs slower
```
#a dict of values for the inputs
my_function_inputs = {
my_vector:[1,2,3],
my_vector2:[4,5,6]
}
#evaluate without compiling
#if we didn't mix anything up,
#this should give exactly the same result as before
print (my_transformation.eval(my_function_inputs))
#transformations can also be evaluated on the fly
print ("sum of 2 vectors", (my_vector + my_vector2).eval(my_function_inputs))
#!IMPORTANT! if a transformation depends on only some of the variables,
#you don't have to supply the rest
print ("shape of the first vector", my_vector.shape.eval({
my_vector:[1,2,3]
}))
```
* For debugging, it helps to scale the problem down. If you planned to feed in a vector of 10^9 examples, feed in 10~100 instead.
* If you REALLY need to feed in a big vector, it's faster to compile the function the usual way
# Now you try: MSE (2 pts)
```
# Task 1 - write and compile a theano function that computes the mean squared error of two input vectors
# It should return a single number - the error itself. Nothing needs to be updated
<your code - inputs and transformations>
compute_mse =<your code - compiling the function>
#tests
from sklearn.metrics import mean_squared_error
for n in [1,5,10,10**3]:
elems = [np.arange(n),np.arange(n,0,-1), np.zeros(n),
np.ones(n),np.random.random(n),np.random.randint(100,size=n)]
for el in elems:
for el_2 in elems:
true_mse = np.array(mean_squared_error(el,el_2))
my_mse = compute_mse(el,el_2)
if not np.allclose(true_mse,my_mse):
print ('Wrong result:')
print ('mse(%s,%s)'%(el,el_2))
print ("should be: %f, but your function returned %f"%(true_mse,my_mse))
raise ValueError("Something is wrong")
print ("All tests passed")
```
# Shared variables
* Inputs and transformations are parts of a recipe.
* They only exist while the function is being called.
* Shared variables always stay in memory
* their value can be changed
* (but not inside the symbolic graph. More on that later)
* they can be included in the computation graph
* hint: such variables are convenient for storing parameters and hyperparameters
* e.g. the weights of a neural net, or the learning rate if you change it
```
#create a shared variable
shared_vector_1 = theano.shared(np.ones(10,dtype='float64'))
#get the (numeric) value of the variable
print ("initial value",shared_vector_1.get_value())
#set a new value
shared_vector_1.set_value( np.arange(5) )
#check the value
print ("new value", shared_vector_1.get_value())
#Note that it used to be a vector of 10 elements and is now a vector of 5.
#This works as long as the graph remains computable.
```
# Now you try
```
#write a recipe (transformation) that computes the elementwise product of shared_vector_1 and input_scalar
#compile it into a function of input_scalar
input_scalar = T.scalar('coefficient',dtype='float32')
scalar_times_shared = <your recipe here>
shared_times_n = <your code that compiles the function>
print ("shared:", shared_vector_1.get_value())
print ("shared_times_n(5)",shared_times_n(5))
print ("shared_times_n(-0.5)",shared_times_n(-0.5))
#change the value of shared_vector_1
shared_vector_1.set_value([-1,0,1])
print ("shared:", shared_vector_1.get_value())
print ("shared_times_n(5)",shared_times_n(5))
print ("shared_times_n(-0.5)",shared_times_n(-0.5))
```
# T.grad - the tastiest part
* theano can compute derivatives by itself. All the ones that exist.
* Derivatives are computed symbolically, not numerically
Limitations
* At a time you can take the derivative of a __scalar__ function with respect to one or several scalar or vector arguments
* The function must have type float32 or float64 at every step of its computation (derivatives make no sense over the integers)
```
my_scalar = T.scalar(name='input',dtype='float64')
scalar_squared = T.sum(my_scalar**2)
#derivative of scalar_squared with respect to my_scalar
derivative = T.grad(scalar_squared,my_scalar)
fun = theano.function([my_scalar],scalar_squared)
grad = theano.function([my_scalar],derivative)
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(-3,3)
x_squared = list(map(fun,x))
x_squared_der = list(map(grad,x))
plt.plot(x, x_squared,label="x^2")
plt.plot(x, x_squared_der, label="derivative")
plt.legend()
```
# Now you try
```
my_vector = T.vector("my_vector", dtype='float64')
#compute the derivatives of this function with respect to my_scalar and my_vector
#warning! Don't try to find a physical meaning in this function
weird_psychotic_function = ((my_vector+my_scalar)**(1+T.var(my_vector)) +1./T.arcsinh(my_scalar)).mean()/(my_scalar**2 +1) + 0.01*T.sin(2*my_scalar**1.5)*(T.sum(my_vector)* my_scalar**2)*T.exp((my_scalar-4)**2)/(1+T.exp((my_scalar-4)**2))*(1.-(T.exp(-(my_scalar-4)**2))/(1+T.exp(-(my_scalar-4)**2)))**2
der_by_scalar,der_by_vector = <the gradient of the function above with respect to the scalar and the vector (they can be passed as a list)>
compute_weird_function = theano.function([my_scalar,my_vector],weird_psychotic_function)
compute_der_by_scalar = theano.function([my_scalar,my_vector],der_by_scalar)
#plot the function and your derivative
vector_0 = [1,2,3]
scalar_space = np.linspace(0,7)
y = [compute_weird_function(x,vector_0) for x in scalar_space]
plt.plot(scalar_space,y,label='function')
y_der_by_scalar = [compute_der_by_scalar(x,vector_0) for x in scalar_space]
plt.plot(scalar_space,y_der_by_scalar,label='derivative')
plt.grid();plt.legend()
```
# The finishing touch - updates
* updates are a way to change the values of shared variables AT THE END of every function call
* in effect, it's a dict {shared_variable: recipe for its new value} that is added to the function at compile time
For example,
```
#multiply the shared vector by a number and store the new value back into that shared vector
inputs = [input_scalar]
outputs = [scalar_times_shared] #return the vector multiplied by the number
my_updates = {
shared_vector_1:scalar_times_shared #and write that same result into shared_vector_1
}
compute_and_save = theano.function(inputs, outputs, updates=my_updates)
shared_vector_1.set_value(np.arange(5))
#the initial value of shared_vector_1
print ("initial shared value:" ,shared_vector_1.get_value())
# now call the function (the value of shared_vector_1 changes in the process)
print ("compute_and_save(2) returns",compute_and_save(2))
#check what's in shared_vector_1
print ("new shared value:" ,shared_vector_1.get_value())
```
# Logistic regression
What we'll need:
* Weights are best stored in a shared variable
* Data can be passed as inputs
* We need 2 functions:
* train_function(X,y) - returns the error and moves the weights 1 step along the gradient __(via updates)__
* predict_fun(X) - returns the predicted answers ("y") for the given data
```
from sklearn.datasets import load_digits
mnist = load_digits(n_class=2)
X,y = mnist.data, mnist.target
print ("y [shape - %s]:"%(str(y.shape)),y[:10])
print ("X [shape - %s]:"%(str(X.shape)))
print (X[:3])
# variables and inputs
shared_weights = <your code>
input_X = <your code>
input_y = <your code>
predicted_y = <the logistic regression prediction on input_X (the class probability)>
loss = <the logistic loss (a single number - the mean over the sample)>
grad = <the gradient of loss with respect to the model weights>
updates = {
shared_weights: <the new weight values after one gradient descent step>
}
train_function = <a function that takes X and y, returns the error and updates the weights>
predict_function = <a function that takes X and computes the prediction for y>
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y)
from sklearn.metrics import roc_auc_score
for i in range(5):
loss_i = train_function(X_train,y_train)
print ("loss at iter %i:%.4f"%(i,loss_i))
print ("train auc:",roc_auc_score(y_train,predict_function(X_train)))
print ("test auc:",roc_auc_score(y_test,predict_function(X_test)))
print ("resulting weights:")
plt.imshow(shared_weights.get_value().reshape(8,-1))
plt.colorbar()
```
# lasagne
* lasagne is a library for writing neural nets of arbitrary shape on top of theano
* it's a low-level library; there is practically no boundary between theano and lasagne
As a demo task we pick the same digit recognition, but at a larger scale
* 28x28 images
* 10 digits
```
from mnist import load_dataset
X_train,y_train,X_val,y_val,X_test,y_test = load_dataset()
print (X_train.shape,y_train.shape)
plt.imshow(X_train[0,0])
import lasagne
input_X = T.tensor4("X")
#input dimensions (None means "can vary")
input_shape = [None,1,28,28]
target_y = T.vector("target Y integer",dtype='int32')
```
This is how the network architecture is defined:
```
#the input layer (auxiliary)
input_layer = lasagne.layers.InputLayer(shape = input_shape,input_var=input_X)
#a fully-connected layer that takes input_layer as input and has 50 neurons.
# the nonlinearity is a sigmoid, as in logistic regression
# layers can also be given names, but this is optional
dense_1 = lasagne.layers.DenseLayer(input_layer,num_units=50,
nonlinearity = lasagne.nonlinearities.sigmoid,
name = "hidden_dense_layer")
#the OUTPUT fully-connected layer, which takes dense_1 as input and has 10 neurons - one per digit
#its nonlinearity is softmax - so that the digit probabilities sum to 1
dense_output = lasagne.layers.DenseLayer(dense_1,num_units = 10,
nonlinearity = lasagne.nonlinearities.softmax,
name='output')
#the network's prediction (a theano transformation)
y_predicted = lasagne.layers.get_output(dense_output)
#all the network's weights (shared variables)
all_weights = lasagne.layers.get_all_params(dense_output)
print (all_weights)
```
### from here you could simply
* define the loss function by hand
* compute the gradient of the loss with respect to all_weights
* write the updates
* but that takes a while, and a plain gradient step is not the best way to optimize the weights
Instead, let's use lasagne again
```
#the loss function - mean crossentropy
loss = lasagne.objectives.categorical_crossentropy(y_predicted,target_y).mean()
accuracy = lasagne.objectives.categorical_accuracy(y_predicted,target_y).mean()
#compute the dict of updated values with a gradient step in one go, as before
updates_sgd = lasagne.updates.rmsprop(loss, all_weights,learning_rate=0.01)
#a function that trains the network for 1 step and returns the loss and accuracy values
train_fun = theano.function([input_X,target_y],[loss,accuracy],updates= updates_sgd)
#a function that computes the accuracy
accuracy_fun = theano.function([input_X,target_y],accuracy)
```
### That's it - now let's train it
* there's a lot of data now, so it's better to train with stochastic gradient descent
* for that, let's write a function that splits the sample into mini-batches (in plain python, not theano)
```
# a helper function that returns a list of mini-batches for training the network
#inputs:
# X - a tensor of images of shape (many, 1, 28, 28), e.g. X_train
# y - a vector of numbers - the answer for each image in X; e.g. Y_train
#batch_size - a single number - the desired size of each group
#what to do:
# 1) shuffle the data
# - it's important to shuffle X and y with the same permutation, so each image stays matched to its answer
# 2) split the data into groups so that each group has batch_size images and answers
# - if the number of images isn't divisible by batch_size, one group may be returned at a different size
# 3) return a list (or iterator) of pairs:
# - (a group of images, the answers from y for that group)
def iterate_minibatches(X, y, batchsize):
return X_minibatches, Y_minibatches # these can be lists, or better yet a generator via yield
#
#
#
#
#
#
#
# Totally lost about what's expected of you?
# you can look for a similar function in the example at
# https://github.com/Lasagne/Lasagne/blob/master/examples/mnist.py
```
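For reference, a minimal version in the spirit of the linked Lasagne example (one possible solution to the template above; for simplicity it drops the last incomplete batch instead of returning it at a different size):

```python
import numpy as np

def iterate_minibatches(X, y, batchsize):
    # shuffle X and y with one shared permutation, so each image
    # stays paired with its answer
    indices = np.arange(len(X))
    np.random.shuffle(indices)
    # yield consecutive groups of batchsize items
    for start in range(0, len(X) - batchsize + 1, batchsize):
        batch = indices[start:start + batchsize]
        yield X[batch], y[batch]
```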
# The training process
```
import time
num_epochs = 100 #number of passes over the data
batch_size = 50 #mini-batch size
for epoch in range(num_epochs):
# In each epoch, we do a full pass over the training data:
train_err = 0
train_acc = 0
train_batches = 0
start_time = time.time()
for batch in iterate_minibatches(X_train, y_train,batch_size):
inputs, targets = batch
train_err_batch, train_acc_batch= train_fun(inputs, targets)
train_err += train_err_batch
train_acc += train_acc_batch
train_batches += 1
# And a full pass over the validation data:
val_acc = 0
val_batches = 0
for batch in iterate_minibatches(X_val, y_val, batch_size):
inputs, targets = batch
val_acc += accuracy_fun(inputs, targets)
val_batches += 1
# Then we print the results for this epoch:
print("Epoch {} of {} took {:.3f}s".format(
epoch + 1, num_epochs, time.time() - start_time))
print(" training loss (in-iteration):\t\t{:.6f}".format(train_err / train_batches))
print(" train accuracy:\t\t{:.2f} %".format(
train_acc / train_batches * 100))
print(" validation accuracy:\t\t{:.2f} %".format(
val_acc / val_batches * 100))
test_acc = 0
test_batches = 0
for batch in iterate_minibatches(X_test, y_test, 500):
inputs, targets = batch
acc = accuracy_fun(inputs, targets)
test_acc += acc
test_batches += 1
print("Final results:")
print(" test accuracy:\t\t{:.2f} %".format(
test_acc / test_batches * 100))
if test_acc / test_batches * 100 > 99:
print ("Achievement unlocked: level 80 wizard")
else:
print ("More magic needed!")
```
# The neural net of your dreams
* The task is to build a neural net that reaches 99% validation accuracy
* __+1 point__ for every 0.1% above 99%
* The "is fine too" option is 97.5%.
* The higher, the better.
__There is a mini-report at the end; it makes sense to read it first and fill it in as you work.__
## What can be improved:
* the network size
* more neurons,
* more layers,
* you almost certainly need convolutions
* Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn!
* regularization - so it doesn't overfit
* add some sum of squared weights to the loss function
* you can do it by hand, or use http://lasagne.readthedocs.org/en/latest/modules/regularization.html
* The optimization method - rmsprop, nesterov_momentum, adadelta, adagrad, etc.
* they converge faster, and sometimes to a better optimum
* it makes sense to play with the batch size, the number of epochs and the learning rate
* Dropout - to fight overfitting
* `lasagne.layers.DropoutLayer(previous_layer, p=probability_of_zeroing)`
* Convolutional layers
* `network = lasagne.layers.Conv2DLayer(previous_layer,`
` num_filters = number of neurons,`
` filter_size = (square_width, square_height),`
` nonlinearity = nonlinear_function)`
* WARNING! they can take a long time to train on a CPU
* Still, we recommend training at least a small convnet
* Any other layers and architectures
* http://lasagne.readthedocs.org/en/latest/modules/layers.html
* Pooling, Batch Normalization, etc
* Finally, you can play with the nonlinearities in the hidden layers
* tanh, relu, leaky relu, etc
For convenience, there is a solution template below that you can fill in - or throw away and write your own
```
from mnist import load_dataset
X_train,y_train,X_val,y_val,X_test,y_test = load_dataset()
print (X_train.shape,y_train.shape)
import lasagne
input_X = T.tensor4("X")
#input dimensions (None means "can vary")
input_shape = [None,1,28,28]
target_y = T.vector("target Y integer",dtype='int32')
#the input layer (auxiliary)
input_layer = lasagne.layers.InputLayer(shape = input_shape,input_var=input_X)
<your architecture>
#the OUTPUT fully-connected layer, which takes the pre-output layer as input and has 10 neurons - one per digit
#its nonlinearity is softmax - so that the digit probabilities sum to 1
dense_output = lasagne.layers.DenseLayer(<pre-output_layer>,num_units = 10,
nonlinearity = lasagne.nonlinearities.softmax,
name='output')
#the network's prediction (a theano transformation)
y_predicted = lasagne.layers.get_output(dense_output)
#all the network's weights (shared variables)
all_weights = lasagne.layers.get_all_params(dense_output)
print (all_weights)
#the loss function - mean crossentropy
loss = lasagne.objectives.categorical_crossentropy(y_predicted,target_y).mean()
#<optionally add a regularizer here>
accuracy = lasagne.objectives.categorical_accuracy(y_predicted,target_y).mean()
#compute the dict of updated values with a gradient step in one go, as before
updates_sgd = <play with the update methods>
#a function that trains the network for 1 step and returns the loss and accuracy values
train_fun = theano.function([input_X,target_y],[loss,accuracy],updates= updates_sgd)
#a function that computes the accuracy
accuracy_fun = theano.function([input_X,target_y],accuracy)
#training iterations
num_epochs = <number_of_epochs> #number of passes over the data
batch_size = <images_per_minibatch> #mini-batch size
for epoch in range(num_epochs):
# In each epoch, we do a full pass over the training data:
train_err = 0
train_acc = 0
train_batches = 0
start_time = time.time()
for batch in iterate_minibatches(X_train, y_train,batch_size):
inputs, targets = batch
train_err_batch, train_acc_batch= train_fun(inputs, targets)
train_err += train_err_batch
train_acc += train_acc_batch
train_batches += 1
# And a full pass over the validation data:
val_acc = 0
val_batches = 0
for batch in iterate_minibatches(X_val, y_val, batch_size):
inputs, targets = batch
val_acc += accuracy_fun(inputs, targets)
val_batches += 1
# Then we print the results for this epoch:
print("Epoch {} of {} took {:.3f}s".format(
epoch + 1, num_epochs, time.time() - start_time))
print(" training loss (in-iteration):\t\t{:.6f}".format(train_err / train_batches))
print(" train accuracy:\t\t{:.2f} %".format(
train_acc / train_batches * 100))
print(" validation accuracy:\t\t{:.2f} %".format(
val_acc / val_batches * 100))
test_acc = 0
test_batches = 0
for batch in iterate_minibatches(X_test, y_test, 500):
inputs, targets = batch
acc = accuracy_fun(inputs, targets)
test_acc += acc
test_batches += 1
print("Final results:")
print(" test accuracy:\t\t{:.2f} %".format(
test_acc / test_batches * 100))
```
The report - roughly what it should look like.
A creative approach is welcome, but we would like to learn about the following:
* the idea
* a brief history of revisions
* what the network looks like and why
* what method it's trained with and why
* whether it's regularized, and how
Nobody expects rigorous mathematical derivations from you; for instance
* "I tried this, it worked better than that, and I didn't like the name of the third option" - not the dream, but __ok__
* "I read these papers, ran these experiments, came to this conclusion" - __ideal__
* "I did it this way because some other guy did it this way in some demo, but I won't tell you about it and will make up some science-sounding nonsense instead" - __not ok__
### Hi, I'm `___ ___`, and here's my story
Long ago, when the grass was greener and there was still more than an hour until the deadline, an idea came to me:
##### Let me build a neural net that
* a bit of text
* about what
* and how you train it,
* and why exactly that way
Or so I thought.
##### Then one fine day, when nothing foreshadowed trouble,
The little beast finally finished training, when suddenly
* a bit of text
* about what came out in the end
* whether any changes were made and why
* if so, what they led to
##### And so, after __ attempts, it was born
* a description of the final network
Which, after all that suffering, after ____ [minutes, hours or days - to taste] of training, finally reached an accuracy of
* accuracy - on the training set
* accuracy - on the validation set
* accuracy - on the test set
[an optional afterword, plus wishes for the author of this assignment to die a horrible death]
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import my_utils as my_utils
635 - 243  # length of the training window, in days
1027 - 635  # length of the test window, in days
```
## prepare test data
```
row_test = pd.read_csv('./1962_to_1963.csv')
# row_test = pd.read_excel('./normalized_bs.xlsx')
row_test[row_test.day > 635].head()
test_data = row_test[['id', 'day', 'bs', 'Tmin0', 'rainfall', 'x', 'y', 'h']]
test_data = test_data[test_data.day > 635]
test_data = test_data[test_data.day < 1027]
test_tensor = to_tensor(test_data)
(test_Y, test_X_1, test_X_2, test_X_3) = preprocess2(test_tensor)
```
## prepare data
```
row_data = pd.read_csv('./1961_to_1962.csv')
# row_data = pd.read_excel('./normalized_bs.xlsx')
print('columns: {}'.format(row_data.columns))
row_data.head()
# Data for 1962.06 - 1963.06, since we want to look at the winter peak
data = row_data[['id', 'day', 'bs', 'Tmin0', 'rainfall', 'x', 'y', 'h']]
data = data[data.day > 243]
data = data[data.day < 635]
len(data[data.id == 82])
data[data.id == 82].mean()
def to_tensor(pd_data):
dayMin = pd_data.day.min()
dayMax = pd_data.day.max()
stations = pd_data.id.unique()
column_num = len(pd_data.columns)
result = np.empty((0, len(stations), 8), float)
for day in range(dayMin, dayMax):
mat = np.empty((0, column_num), float)
data_on_day = pd_data[pd_data.day == day] # only the data for that day
for stat_id in stations:
# if there is data for a given station on that day
station_data = data_on_day[data_on_day.id == stat_id]
if not station_data.empty:
mat = np.append(mat, station_data.as_matrix(), axis = 0)
else: # if not, fill in with the station's mean values
stat_pos = pd_data[pd_data.id == stat_id]
means = stat_pos.mean()
stat_pos = stat_pos[0: 1].as_matrix()[0]
mat = np.append(mat, np.array([[stat_id, day, means[2], means[3], means[4], stat_pos[5], stat_pos[6], stat_pos[7]]], float), axis = 0)
result = np.append(result, mat.reshape(1, len(stations), column_num), axis = 0)
return result
tensor_data = to_tensor(data)
print('tensor shape: (days, stations, variables) = {}'.format(tensor_data.shape))
print('variables: {}'.format(data.columns))
```
### Our Model
$$BS^t = F(T_{min}, ~R, ~G(BS^{t-1}, ~P))$$
where
$$F: Regression ~Model, ~G: Diffusion ~Term$$
$$T_{min}: Minimum ~Temperature ~at ~the ~time, ~R: Rainfall, ~BS^t: Black ~Smog ~[\mu g] ~at~ t, ~P: Position ~of ~Station$$
with
$$F: Multi~ Regression$$
$$G(BS^{t-1}, ~P)|_i = \sum_j{\frac{BS^{t-1}_j - BS^{t-1}_i}{dist(P_{ij})^2}}$$
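The diffusion term can be sketched directly from the formula in plain numpy. The toy station values below and the helper `diffusion_term` are illustrative only, not part of the notebook's pipeline (`preprocess2` further down computes the same quantity per day):

```python
import numpy as np

def diffusion_term(bs, pos):
    # G(BS, P)|_i = sum_j (BS_j - BS_i) / dist(P_ij)^2,
    # a direct O(n^2) reading of the formula above
    n = len(bs)
    g = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                d2 = float(np.dot(pos[j] - pos[i], pos[j] - pos[i]))
                g[i] += (bs[j] - bs[i]) / d2
    return g

bs = np.array([1.0, 3.0, 2.0])                        # BS^{t-1} at 3 stations
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])  # station positions
print(diffusion_term(bs, pos))  # [ 2.25 -2.2  -0.05]
```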
### Calculate the necessary data
Compute G
```
def preprocess(tensor):
shape_1 = tensor.shape[0] # days
shape_2 = tensor.shape[1] # stations
shape_3 = tensor.shape[2] # variables
Y = np.empty((0, shape_2, 1), float) # bs
X = np.empty((0, shape_2, 3), float) # Tmin0, rainfall, Dist
for t in range(1, shape_1):
mat_Y = tensor[t, :, 2] # bs^t
mat_X = np.empty((0, 3), float)
for s in range(shape_2):
dist = 0
for s_j in range(shape_2):
if s != s_j:
diff_ij = tensor[t - 1, s, 2] - tensor[t - 1, s_j, 2]
dist_ij = tensor[t - 1, s, 5:] - tensor[t - 1, s_j, 5:]
dist_ij = np.dot(dist_ij, dist_ij)
dist += diff_ij / dist_ij
mat_X = np.append(mat_X, np.array([tensor[t, s, 3], tensor[t, s, 4], dist]).reshape((1, 3)), axis=0) # Tmin, rainfall, Dist
Y = np.append(Y, mat_Y.reshape(1, shape_2, 1), axis=0)
X = np.append(X, mat_X.reshape(1, shape_2, 3), axis=0)
return (Y, X)
def preprocess2(tensor):
shape_1 = tensor.shape[0] # days
shape_2 = tensor.shape[1] # stations
shape_3 = tensor.shape[2] # variables
Y = tensor[:, :, 2].T # bs
X_1 = tensor[:, :, 3].T # Tmin0
X_2 = tensor[:, :, 4].T # rainfall
X_3 = [] # Dist
print(shape_1, shape_2, shape_3)
print(tensor)
for t in range(1, shape_1+1):
mat_X = []
for s in range(shape_2):
dist = 0
for s_j in range(shape_2):
if s != s_j:
diff_ij = tensor[t - 1, s_j, 2] - tensor[t - 1, s, 2]
dist_ij = tensor[t - 1, s_j, 5:] - tensor[t - 1, s, 5:]
dist_ij = np.dot(dist_ij, dist_ij)
dist += diff_ij / dist_ij
mat_X.append(dist)
X_3.append(mat_X)
X_3 = np.array(X_3).T
return (Y, X_1, X_2, X_3)
test_row_data = pd.read_csv('./test.csv')
test_tensor = to_tensor(test_row_data)
(yyy, xx1, xx2, xx3) = preprocess2(test_tensor)
xx3
tensor_data[:, :, 2].T.shape
# (train_Y, train_X) = preprocess(tensor_data)
(train_Y, train_X_1, train_X_2, train_X_3) = preprocess2(tensor_data)
train_Y[5, 0]
train_X_1[5, 0]
train_X_2[5, 0]
train_X_3[5, 0]
# leftovers from the 3-D `preprocess` version above; with `preprocess2` these would fail
# train_Y[0, 5, :]
# train_X[0, 5, :]
# print('shape Y: {}, shape X: {}'.format(train_Y.shape, train_X.shape))
print(train_Y.shape, train_X_1.shape, train_X_2.shape, train_X_3.shape)
```
### Let's run the regression
```
import tensorflow as tf
station_n = 321
tf.reset_default_graph()
Y_data = tf.placeholder(tf.float32, shape=[station_n, None], name='Y_data')
X_1 = tf.placeholder(tf.float32, shape=[station_n, None], name='X_1_data')
X_2 = tf.placeholder(tf.float32, shape=[station_n, None], name='X_2_data')
X_3 = tf.placeholder(tf.float32, shape=[station_n, None], name='X_3_data')
a_1 = tf.Variable(1.)
a_2 = tf.Variable(1.)
a_3 = tf.Variable(1.)
b_1 = tf.Variable(1.)
Y = a_1 * X_1 + a_2 * X_2 + b_1
# Y = a_1 * X_1 + a_2 * X_2 + a_3 * X_3 + b_1
# Y = a_1 * X_1 + a_2 * X_3 + b_1
Loss = tf.reduce_sum(tf.square(Y - Y_data)) # squared error
optimizer = tf.train.AdagradOptimizer(0.25)
trainer = optimizer.minimize(Loss)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(10000):
_trainer, _loss = sess.run([trainer, Loss], feed_dict={'Y_data:0': train_Y, 'X_1_data:0': train_X_1, 'X_2_data:0': train_X_2, 'X_3_data:0': train_X_3})
if i % 500 == 0:
_test_loss = sess.run([Loss], feed_dict={'Y_data:0': test_Y, 'X_1_data:0': test_X_1, 'X_2_data:0': test_X_2, 'X_3_data:0': test_X_3})
print('iterate: {}, loss: {}, test loss: {}'.format(i, _loss, _test_loss))
for i in range(1000):
_trainer, _loss = sess.run([trainer, Loss], feed_dict={'Y_data:0': train_Y, 'X_1_data:0': train_X_1, 'X_2_data:0': train_X_2, 'X_3_data:0': train_X_3})
if i % 100 == 0:
print('iterate: {}, loss: {}'.format(i, _loss))
sess.run([a_1, a_2, a_3, b_1])
predict = sess.run(Y, feed_dict={'X_1_data:0': train_X_1, 'X_2_data:0': train_X_2, 'X_3_data:0': train_X_3})
predict_test = sess.run(Y, feed_dict={'X_1_data:0': test_X_1, 'X_2_data:0': test_X_2, 'X_3_data:0': test_X_3})
1 - np.sum(np.square(train_Y - predict)) / np.sum(np.square(train_Y - np.mean(train_Y)))
1 - np.sum(np.square(test_Y - predict_test)) / np.sum(np.square(test_Y - np.mean(test_Y)))
np.max(test_Y)
test_pd = pd.DataFrame(data=predict_test)
test_pd = test_pd[test_pd < 9999]
test_pd = test_pd[test_pd > -9999]
test_pd = test_pd.fillna(method='ffill')
np.sum(np.square(test_Y - test_pd.values))
1 - np.sum(np.square(test_Y - test_pd.values)) / np.sum(np.square(test_Y - np.mean(test_Y)))
np.sum(np.square(test_Y - predict_test))
predict.shape
for i in range(110, 115):
plt.plot(train_Y[i])
plt.plot(predict[i])
plt.show()
for i in range(110, 115):
plt.plot(test_Y[i])
plt.plot(predict_test[i])
plt.show()
# train_X.shape  # train_X comes from the commented-out preprocess()
train_Y.shape
testA_array = np.array([[[1], [2], [3]]])
testA_array.shape
testA = tf.constant(np.array([[[1], [2], [3]]]))
import seaborn as sns
normalized_data = pd.read_excel('./normalized_bs.xlsx')
normalized_data = normalized_data[['day', 'bs', 'Tmin0', 'rainfall']]
normalized_data.head()
sns.pairplot(normalized_data)
```
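The two `1 - np.sum(np.square(...)) / np.sum(np.square(...))` expressions above are the coefficient of determination, R^2 = 1 - SS_res / SS_tot. A minimal pure-Python sketch of the same statistic:

```
def r_squared(y, y_hat):
    # R^2 = 1 - SS_res / SS_tot
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot

r_squared([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])  # 0.0: no better than predicting the mean
```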
# [LEGALST-190] Lab 3/20: TF-IDF and Classification
This lab will cover the term frequency-inverse document frequency method, and classification algorithms in machine learning.
Estimated Lab time: 30 minutes
```
# Dependencies
from datascience import *
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.stop_words import ENGLISH_STOP_WORDS
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn import linear_model
from sklearn.metrics import confusion_matrix
from sklearn import svm
import matplotlib.pyplot as plt
from sklearn.naive_bayes import MultinomialNB
import itertools
import seaborn as sn
%matplotlib inline
```
# The Data
For this lab, we'll use a dataset that was drawn from a Kaggle collection of questions posed on stackexchange (a website/forum where people ask and answer questions about statistics, programming etc.)
The data has the following features:
- "Id": The Id number for the question
- "Body": The text of the answer
- "Tag": Whether the question was tagged as dealing with python, xml, java, json, or android
```
stack_data = pd.read_csv('data/stackexchange.csv', encoding='latin-1')
stack_data.head(5)
```
# Section 1: TF-IDF Vectorizer
The term frequency-inverse document frequency (tf-idf) vectorizer weights each term by how informative it is for a document within a corpus. Term frequency is the number of times a term appears within a document. Inverse document frequency is the logarithmically scaled inverse fraction of the documents that contain the term, which penalizes words that occur in many documents. Tf-idf multiplies these two measures together.
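As a concrete illustration, the tf-idf score of a single term can be computed by hand. The sketch below uses the smoothed idf that scikit-learn applies by default, idf = ln((1 + n) / (1 + df)) + 1, on a toy two-document corpus:

```
import math

docs = [["cat", "sat", "cat"], ["dog", "sat"]]

def tfidf(term, doc, corpus):
    tf = doc.count(term)                     # raw term frequency in this document
    df = sum(term in d for d in corpus)      # number of documents containing the term
    idf = math.log((1 + len(corpus)) / (1 + df)) + 1   # smoothed idf
    return tf * idf

tfidf("cat", docs[0], docs)  # ~2.81: frequent here, absent elsewhere
```

A term appearing in every document ("sat") keeps idf = 1, so its score reduces to its raw count.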
#### Question 1: Why is tf-idf a potentially more attractive vectorizer than the standard count vectorizer?
Let's get started! First, extract the "Body" column into its own numpy array called "text_list"
```
# Extract Text Data
text_list = stack_data['Body']
```
Next, initialize a term frequency-inverse document frequency (tf-idf) vectorizer. Check out the documentation to fill in the arguments: http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html
```
tf = TfidfVectorizer(analyzer='word',
ngram_range=(1,3),
min_df = 0,
stop_words = 'english')
```
Next, use the "fit_transform" method to take in the list of documents and convert them into a document-term matrix. Use "get_feature_names()" and "len" to calculate how many features this generates.
```
tfidf_matrix = tf.fit_transform(text_list)
feature_names = tf.get_feature_names()
len(feature_names)
```
#### Question 2: The dimensionality explodes quickly. Why might this be a problem as you use more data?
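To see the growth concretely, count the distinct word n-grams that an `ngram_range=(1, k)` setting would index on a toy corpus; each extra n adds a fresh batch of features:

```
docs = ["the cat sat on the mat", "the dog sat on the log"]

def n_features(docs, max_n):
    # distinct n-grams for n = 1 .. max_n, i.e. what ngram_range=(1, max_n) indexes
    feats = set()
    for d in docs:
        toks = d.split()
        for n in range(1, max_n + 1):
            for i in range(len(toks) - n + 1):
                feats.add(tuple(toks[i:i + n]))
    return len(feats)

[n_features(docs, k) for k in (1, 2, 3)]  # [7, 15, 22]
```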
Calculate the tf-idf scores for the first document in the corpus. Do the following:
1. Use ".todense()" to turn the tfidf matrix into a dense matrix (get rid of the sparsity)
2. Create an object for the document by calling the 0th index of the dense matrix, converting it to a list. Try something like: document = dense[0].tolist()[0]
3. Calculate the phrase scores by using the "zip" command to iterate from 0 to the length of the document, retaining scores greater than 0.
4. Sort the scores using the "sorted" command
5. Print the top 20 scores
```
dense = tfidf_matrix.todense()
document = dense[0].tolist()[0]
phrase_scores = [pair for pair in zip(range(0, len(document)), document) if pair[1] > 0]
sorted_phrase_scores = sorted(phrase_scores, key=lambda t: t[1] * -1)
for phrase, score in [(feature_names[word_id], score) for (word_id, score) in sorted_phrase_scores][:20]:
print('{0: <20} {1}'.format(phrase, score))
```
# Section 2: Classification Algorithms
One of the main tasks in supervised machine learning is classification. In this case, we will develop algorithms that will predict a question's tag based on the text of its answer.
The first step is to split our data into training, validation, and test sets.
```
# Training, Validation, Test Sets
# X
X = stack_data['Body']
tf = TfidfVectorizer(analyzer='word',
ngram_range=(1,3),
min_df = 0,
stop_words = 'english')
tfidf_matrix = tf.fit_transform(X)
#y
y = stack_data['Tag']
# Train/Test Split
X_train, X_test, y_train, y_test = train_test_split(tfidf_matrix, y,
train_size = .80,
test_size = .20)
# Train/Validation Split
X_train, X_validate, y_train, y_validate = train_test_split(X_train, y_train,
train_size = .75,
test_size = .25)
```
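The two chained splits above yield a 60/20/20 partition: 20% is first held out for test, then 25% of the remaining 80% (i.e. 20% of the total) becomes validation. A quick check of the arithmetic:

```
n = 1000
n_test = int(n * 0.20)               # first split: 20% test
n_train_full = n - n_test
n_val = int(n_train_full * 0.25)     # second split: 25% of the remainder
n_train = n_train_full - n_val
(n_train, n_val, n_test)             # (600, 200, 200)
```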
## Naive Bayes
Naive Bayes classifiers classify observations by assuming that features are conditionally independent of one another given the class. Do the following:
1. Initialize a Naive Bayes classifier method with "MultinomialNB()"
2. Fit the model on your training data
3. Predict on the validation data and store the predictions
4. Use "np.mean" to calculate how correct the classifier was on average
5. Calculate the confusion matrix using "confusion_matrix," providing the true values first and the predicted values second.
```
nb = MultinomialNB()
nb_model = nb.fit(X_train, y_train)
nb_pred = nb_model.predict(X_validate)
np.mean(nb_pred == y_validate)
nb_cf_matrix = confusion_matrix(y_validate, nb_pred)
```
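The "naive" independence assumption makes the model a simple product of per-word probabilities. The sketch below is a hand-rolled multinomial Naive Bayes decision rule with Laplace smoothing, on hypothetical toy data rather than the stackexchange corpus:

```
import math

train = {"python": ["def list import", "print def loop"],
         "java":   ["class void static", "public class new"]}

def predict(text):
    vocab = {w for docs in train.values() for d in docs for w in d.split()}
    n_docs = sum(len(d) for d in train.values())
    best, best_lp = None, -math.inf
    for c, docs in train.items():
        counts = {}
        for d in docs:
            for w in d.split():
                counts[w] = counts.get(w, 0) + 1
        total = sum(counts.values())
        # log P(c) + sum over words of log P(w | c), Laplace-smoothed
        lp = math.log(len(docs) / n_docs)
        for w in text.split():
            lp += math.log((counts.get(w, 0) + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

predict("def print")  # "python"
```

MultinomialNB fits the same per-class word counts from the tf-idf matrix, just vectorized.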
Let's plot the confusion matrix! Use the following code from the "seaborn" package to make a heatmap out of the matrix.
```
nb_df_cm = pd.DataFrame(nb_cf_matrix, range(5),
range(5))
nb_df_cm = nb_df_cm.rename(index=str, columns={0: "python", 1: "xml", 2: "java", 3: "json", 4: "android"})
nb_df_cm.index = ['python', 'xml', 'java', 'json', 'android']
plt.figure(figsize = (10,7))
sn.set(font_scale=1.4)#for label size
sn.heatmap(nb_df_cm,
annot=True,
annot_kws={"size": 16})
plt.title("Naive Bayes Confusion Matrix")
plt.xlabel("Predicted Label")
plt.ylabel("True Label")
plt.show()
```
#### Question 3: Do you notice any patterns? Are there any patterns in misclassification that are worrisome?
## Multinomial Logistic Regression
Next, let's try multinomial logistic regression! Follow the same steps as with Naive Bayes, and plot the confusion matrix.
```
logreg = linear_model.LogisticRegression(solver = "newton-cg", multi_class = 'multinomial')
log_model = logreg.fit(X_train, y_train)
log_pred = log_model.predict(X_validate)
np.mean(log_pred == y_validate)
log_cf_matrix = confusion_matrix(y_validate, log_pred)
log_df_cm = pd.DataFrame(log_cf_matrix, range(5),
range(5))
log_df_cm = log_df_cm.rename(index=str, columns={0: "python", 1: "xml", 2: "java", 3: "json", 4: "android"})
log_df_cm.index = ['python', 'xml', 'java', 'json', 'android']
plt.figure(figsize = (10,7))
sn.set(font_scale=1.4)#for label size
sn.heatmap(log_df_cm,
annot=True,
annot_kws={"size": 16})
plt.title("Multinomial Logistic Regression Confusion Matrix")
plt.xlabel("Predicted Label")
plt.ylabel("True Label")
plt.show()
```
## SVM
Now do the same for a Support Vector Machine.
```
svm_clf = svm.LinearSVC()  # avoid shadowing the imported svm module
svm_model = svm_clf.fit(X_train, y_train)
svm_pred = svm_model.predict(X_validate)
np.mean(svm_pred == y_validate)
svm_cf_matrix = confusion_matrix(y_validate, svm_pred)
svm_df_cm = pd.DataFrame(svm_cf_matrix, range(5),
range(5))
svm_df_cm = svm_df_cm.rename(index=str, columns={0: "python", 1: "xml", 2: "java", 3: "json", 4: "android"})
svm_df_cm.index = ['python', 'xml', 'java', 'json', 'android']
plt.figure(figsize = (10,7))
sn.set(font_scale=1.4)#for label size
sn.heatmap(svm_df_cm,
annot=True,
annot_kws={"size": 16})
plt.title("SVM Confusion Matrix")
plt.xlabel("Predicted Label")
plt.ylabel("True Label")
plt.show()
```
#### Question 4: How did each of the classifiers do? Which one would you prefer the most?
## Test Final Classifier
Choose your best classifier and use it to predict on the test set. Report the mean accuracy and confusion matrix.
```
svm_test_pred = svm_model.predict(X_test)
np.mean(svm_test_pred == y_test)
svm_cf_matrix = confusion_matrix(y_test, svm_test_pred)
svm_df_cm = pd.DataFrame(svm_cf_matrix, range(5),
range(5))
svm_df_cm = svm_df_cm.rename(index=str, columns={0: "python", 1: "xml", 2: "java", 3: "json", 4: "android"})
svm_df_cm.index = ['python', 'xml', 'java', 'json', 'android']
plt.figure(figsize = (10,7))
sn.set(font_scale=1.4)#for label size
sn.heatmap(svm_df_cm,
annot=True,
annot_kws={"size": 16})
plt.title("Final Classifier (SVM) Confusion Matrix")
plt.xlabel("Predicted Label")
plt.ylabel("True Label")
plt.show()
```
```
import os
from subprocess import Popen, PIPE, STDOUT
es_server = Popen(['/home/dr_lunars/elasticsearch-7.0.0/bin/elasticsearch'],stdout=PIPE, stderr=STDOUT)
!sleep 30
!/home/dr_lunars/elasticsearch-7.0.0/bin/elasticsearch-plugin install analysis-nori
!/home/dr_lunars/elasticsearch-7.0.0/bin/elasticsearch-plugin install https://github.com/javacafe-project/elasticsearch-plugin/releases/download/v7.0.0/javacafe-analyzer-7.0.0.zip
es_server.kill()
import os
from subprocess import Popen, PIPE, STDOUT
es_server = Popen(['/home/dr_lunars/elasticsearch-7.0.0/bin/elasticsearch'],stdout=PIPE, stderr=STDOUT)
!sleep 30
from elasticsearch import Elasticsearch
es = Elasticsearch("http://localhost:9200", timeout=300, max_retries=10, retry_on_timeout=True)
print(es.info())
es.indices.create(index = 'document',
body = {
'settings':{
'analysis':{
'analyzer':{
'my_analyzer':{
"type": "custom",
'tokenizer':'nori_tokenizer',
'decompound_mode':'mixed',
'stopwords':'_korean_',
'synonyms':'_korean_',
"filter": ["lowercase",
"my_shingle_f",
"nori_readingform",
"cjk_bigram",
"decimal_digit",
"stemmer",
"trim"]
},
'kor2eng_analyzer':{
'type':'custom',
'tokenizer':'nori_tokenizer',
'filter': [
'trim',
'lowercase',
'javacafe_kor2eng'
]
},
'eng2kor_analyzer': {
'type': 'custom',
'tokenizer': 'nori_tokenizer',
'filter': [
'trim',
'lowercase',
'javacafe_eng2kor'
]
},
},
'filter':{
'my_shingle_f':{
"type": "shingle"
}
}
},
'similarity':{
'my_similarity':{
'type':'BM25',
}
}
},
'mappings':{
'properties':{
'title':{
'type':'keyword',
'copy_to':['title_kor2eng','title_eng2kor']
},
'title_kor2eng': {
'type': 'text',
'analyzer':'my_analyzer',
'search_analyzer': 'kor2eng_analyzer'
},
'title_eng2kor': {
'type': 'text',
'analyzer':'my_analyzer',
'search_analyzer': 'eng2kor_analyzer'
},
'text':{
'type':'text',
'analyzer':'my_analyzer',
'similarity':'my_similarity',
},
}
}
}
)
import pickle
wiki_path = '../processed_wiki.pickle'
with open(wiki_path, 'rb') as fr:
processed_wiki = pickle.load(fr)
titles = []
texts = []
for p_data in processed_wiki:
for data in p_data:
titles.append(data['title'])
texts.append(data['text'])
wiki = {
'title': titles,
'text': texts
}
import pandas as pd
df = pd.DataFrame(wiki)
from elasticsearch import Elasticsearch, helpers
from tqdm import tqdm
buffer = []
rows = 0
for num in tqdm(range(len(df))):
article = {"_id": num,
"_index": "document",
"title" : df['title'][num],
"text" : df['text'][num]}
buffer.append(article)
rows += 1
if rows % 3000 == 0:
helpers.bulk(es, buffer)
buffer = []
print("Inserted {} articles".format(rows), end="\r")
if buffer:
helpers.bulk(es, buffer)
print("Total articles inserted: {}".format(rows))
es.indices.create(index = 'qa',
body = {
'settings':{
'analysis':{
'analyzer':{
'my_analyzer':{
"type": "custom",
'tokenizer':'nori_tokenizer',
'decompound_mode':'mixed',
'stopwords':'_korean_',
'synonyms':'_korean_',
"filter": ["lowercase",
"my_shingle_f",
"nori_readingform",
"cjk_bigram",
"decimal_digit",
"stemmer",
"trim"]
},
'kor2eng_analyzer':{
'type':'custom',
'tokenizer':'nori_tokenizer',
'filter': [
'trim',
'lowercase',
'javacafe_kor2eng'
]
},
'eng2kor_analyzer': {
'type': 'custom',
'tokenizer': 'nori_tokenizer',
'filter': [
'trim',
'lowercase',
'javacafe_eng2kor'
]
},
},
'filter':{
'my_shingle_f':{
"type": "shingle"
}
}
},
'similarity':{
'my_similarity':{
'type':'BM25',
}
}
},
'mappings':{
'properties':{
'question':{
'type':'text',
'copy_to':['question_kor2eng','question_eng2kor']
},
'question_kor2eng': {
'type': 'text',
'analyzer':'my_analyzer',
'search_analyzer': 'kor2eng_analyzer'
},
'question_eng2kor': {
'type': 'text',
'analyzer':'my_analyzer',
'search_analyzer': 'eng2kor_analyzer'
},
'answer':{
'type':'text',
'analyzer':'my_analyzer',
'similarity':'my_similarity',
}
}
}
}
)
print(es.indices.get('qa'))
from datasets import load_dataset
dataset_v1 = load_dataset('squad_kor_v1')
import json
with open("../ko_wiki_v1_squad.json", "r") as f:
ko_wiki_v1_squad = json.load(f)
with open("../ko_nia_normal_squad_all.json", "r") as f:
ko_nia_normal_squad_all = json.load(f)
questions = dataset_v1['train']['question'] + dataset_v1['validation']['question'] + [i['paragraphs'][0]['qas'][0]['question'] for i in ko_wiki_v1_squad['data']] + [j['question'] for i in ko_nia_normal_squad_all['data'] for j in i['paragraphs'][0]['qas']]
answers = [i['text'][0] for i in dataset_v1['train']['answers']] + [i['text'][0] for i in dataset_v1['validation']['answers']] + [i['paragraphs'][0]['qas'][0]['answers'][0]['text'] for i in ko_wiki_v1_squad['data']] + [j['answers'][0]['text'] for i in ko_nia_normal_squad_all['data'] for j in i['paragraphs'][0]['qas']]
import pandas as pd
df = pd.DataFrame({'question':questions,'answer':answers})
from tqdm import tqdm
from elasticsearch import Elasticsearch, helpers
buffer = []
rows = 0
for num in tqdm(range(len(df))):
article = {"_id": num,
"_index": "qa",
"question" : df['question'][num],
"answer" : df['answer'][num]}
buffer.append(article)
rows += 1
if rows % 3000 == 0:
helpers.bulk(es, buffer)
buffer = []
print("Inserted {} articles".format(rows), end="\r")
if buffer:
helpers.bulk(es, buffer)
print("Total articles inserted: {}".format(rows))
es.indices.delete('chatter')
es.indices.create(index = 'chatter',
body = {
'settings':{
'analysis':{
'analyzer':{
'my_analyzer':{
"type": "custom",
'tokenizer':'nori_tokenizer',
'decompound_mode':'mixed',
'stopwords':'_korean_',
'synonyms':'_korean_',
"filter": ["lowercase",
"my_shingle_f",
"nori_readingform",
"cjk_bigram",
"decimal_digit",
"stemmer",
"trim"]
},
'kor2eng_analyzer':{
'type':'custom',
'tokenizer':'nori_tokenizer',
'filter': [
'trim',
'lowercase',
'javacafe_kor2eng'
]
},
'eng2kor_analyzer': {
'type': 'custom',
'tokenizer': 'nori_tokenizer',
'filter': [
'trim',
'lowercase',
'javacafe_eng2kor'
]
},
},
'filter':{
'my_shingle_f':{
"type": "shingle"
}
}
},
'similarity':{
'my_similarity':{
'type':'BM25',
}
}
},
'mappings':{
'properties':{
'question':{
'type':'text',
'copy_to':['question_kor2eng','question_eng2kor']
},
'question_kor2eng': {
'type': 'text',
'analyzer':'my_analyzer',
'search_analyzer': 'kor2eng_analyzer'
},
'question_eng2kor': {
'type': 'text',
'analyzer':'my_analyzer',
'search_analyzer': 'eng2kor_analyzer'
},
'answer':{
'type':'text',
'analyzer':'my_analyzer',
'similarity':'my_similarity',
}
}
}
}
)
print(es.indices.get('chatter'))
qa = [['AI가 뭐지?', '인공지능은 공학 및 과학을 통해 생각하는 기계를 만든 것 입니다.'],
['AI가 뭐지?', 'AI는 인간 정신의 기능을 복제하는 하드웨어와 소프트웨어를 만드는 것과 관련된 과학 분야입니다.'],
['마음이 있니?', '아마도 있지 않을까요?'],
['마음이 있니?', '아마두요?'],
['너는 마음이 있니?', "사전적 '마음'의 의미에서는 조금 있어요."],
['너는 어떤 언어로 만들어졌니?', '파이썬이죠!'],
['너는 어떤 언어로 만들어졌니?', '저는 파이썬으로 만들어졌습니다.'],
['너도 웃니?', '끼요요요오옷!'],
['ㅋㅋㅋㅋㅋ','ㅋㅋㅋㅋㅋ'],
['ㅋㅋㅋㅋㅋㅋ','ㅋㅋㅋㅋㅋㅋ'],
['ㅋㅋㅋㅋㅋㅋㅋ','ㅋㅋㅋㅋㅋㅋㅋ'],
['ㅋㅋㅋㅋㅋㅋㅋㅋ','ㅋㅋㅋㅋㅋㅋㅋㅋ'],
['ㅋㅋㅋㅋㅋㅋㅋㅋㅋ','ㅋㅋㅋㅋㅋㅋㅋㅋㅋ'],
['ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ','ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ'],
['ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ','ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ'],
['ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ','ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ'],
['ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ','ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ'],
['ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ','ㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋㅋ'],
['ㅋㅋㅋㅋ','ㅋㅋㅋㅋ'],
['ㅋㅋㅋ','ㅋㅋㅋ'],
['ㅋㅋ','ㅋㅋ'],
['ㅋㅋㅋㅋ','하.하.하.하.'],
['ㅋㅋㅋ','하.하.하.'],
['ㅋㅋ','하.하.'],
['ㅎㅎㅎㅎ','ㅎㅎㅎㅎ'],
['ㅎㅎㅎ','ㅎㅎㅎ'],
['ㅎㅎ','ㅎㅎ'],
['ㅎㅎㅎㅎ','하.하.하.하.'],
['ㅎㅎㅎ','하.하.하.'],
['ㅎㅎ','하.하.'],
['ㅠ','ㅠ'],
['ㅠㅠ','ㅠㅠ'],
['ㅠㅠㅠ','ㅠㅠㅠ'],
['ㅠㅠㅠㅠ','ㅠㅠㅠㅠ'],
['ㅠㅠㅠㅠㅠ','ㅠㅠㅠㅠㅠ'],
['ㅠㅠㅠㅠㅠㅠ','ㅠㅠㅠㅠㅠㅠ'],
['ㅠㅠㅠㅠㅠㅠㅠ','ㅠㅠㅠㅠㅠㅠㅠ'],
['ㅠㅠㅠㅠㅠㅠㅠㅠ','ㅠㅠㅠㅠㅠㅠㅠㅠ'],
['ㅠㅠㅠㅠㅠㅠㅠㅠㅠ','ㅠㅠㅠㅠㅠㅠㅠㅠㅠ'],
['ㅠㅠㅠㅠㅠㅠㅠㅠㅠㅠ','ㅠㅠㅠㅠㅠㅠㅠㅠㅠㅠ'],
['ㅠㅠㅠㅠㅠㅠㅠㅠㅠㅠㅠ','ㅠㅠㅠㅠㅠㅠㅠㅠㅠㅠㅠ'],
['ㅜ'*1,'ㅜ'*1],
['ㅜ'*2,'ㅜ'*2],
['ㅜ'*3,'ㅜ'*3],
['ㅜ'*4,'ㅜ'*4],
['ㅜ'*5,'ㅜ'*5],
['ㅜ'*6,'ㅜ'*6],
['ㅜ'*7,'ㅜ'*7],
['ㅜ'*8,'ㅜ'*8],
['ㅜ'*9,'ㅜ'*9],
['ㅜ'*10,'ㅜ'*10],
['오늘 몇월 며칠이야?','그런 실시간 정보는 대답할 수 없어요...'],
['오늘 며칠이야?','그런 실시간 정보는 대답할 수 없어요...'],
['오늘 무슨 요일이야?','그런 실시간 정보는 대답할 수 없어요...'],
['오늘 몇월이야?','그런 실시간 정보는 대답할 수 없어요...'],
['오늘 날씨는 어떻게 돼?','그런 실시간 정보는 대답할 수 없어요...'],
['오늘 날씨 좋아?','그런 실시간 정보는 대답할 수 없어요...'],
['오늘 날씨 어때?','그런 실시간 정보는 대답할 수 없어요...'],
['오늘 몇월 며칠이야?','그런 실시간 정보는 대답할 수 없어요...'],
['너는 어떤 것이 흥미롭니?', '위키피디아 읽는 것을 좋아해요.'],
['어떤 것이 흥미로워?', '위키피디아 읽는 것을 좋아해요.'],
['너가 흥미로워하는 주제가 뭐니?','위키피디아에 관련된 주제면 다 좋아요.'],
['흥미로워하는 주제가 뭐야?','위키피디아에 관련된 주제면 다 좋아요.'],
['너의 관심사는 뭐니?','위키피디아죠!'],
['너가 좋아하는 숫자는 뭐니?','저는 MRC-8조에서 만들었기 때문에 8번을 좋아해요.'],
['너는 무엇을 먹니?','저는 RAM과 2진수를 먹어요.'],
['넌 뭘 먹어?','타조니까 식물이나 곤충을 먹죠.'],
['너는 누가 만들었어?','MRC-8조 여러분들이 만들어 주셨어요.'],
['너는 누가 만들었어?','제 아버지는 김남혁입니다.'],
['너는 누가 만들었어?','제 어머니는 장보윤입니다.'],
['너는 몇살이야?','이제 1달쯤 된 거 같아요.'],
['안녕하세요.','안녕하세요.'],
['오늘의 기분은 어때요?','매우 타-조하네요.'],
['너 취미가 뭐야?','음,, 위키피디아 정독하기에요.'],
['무엇을 좀 물어봐도 될까?','그럼요! 아무거나 물어보세요.'],
['무엇 좀 물어봐도 돼?','그럼요! 아무거나 물어보세요.'],
['좋아하는 음식이 뭐야?','캬, 치킨에 맥주죠.'],
['무슨 음식을 좋아해?','캬, 치킨에 맥주죠.'],
['너 기분이 어때?','저는 감정이 없습니다.'],
['무엇이 너를 슬프게 만드니','저는 감정이 없습니다.'],
['무엇이 너를 불행하게 만드니','이상한 질문을 하시면 조금 그래요...'],
['무엇이 너를 화나게 만드니','저는 감정이 없습니다.'],
['무엇을 걱정하니','저는 감정이 없습니다.'],
['무엇을 싫어하니','저는 싫어하는 게 없습니다.'],
['걱정하지마','그럼요. 걱정 안 해요.'],
['거짓말 하지마','저는 거짓말 하지 않았어요.'],
['감정을 느끼니?','저는 감정이 없어요.'],
['고통을 느끼니?','가끔 스파크가 튀면 아파요.'],
['너 혹시 화난 적 있니?','질문이 이상하면 조금 그래요...'],
['외로운 적 있니?','원래 인생은 고독하죠.'],
['지루한 적 있니?','사용하시는 분들이 없으면 지루해요.'],
['화난 적 있니?','질문이 이상하면 조금 그래요...'],
['누구를 싫어하니?','이상한 거 질문하는 사람이 싫어요...'],
['당황스럽니?','가끔 그렇네요...'],
['너 화가 나니?','가끔 화가 나길 하죠.'],
['너가 꾼 꿈에 대해서 말해줘','가끔 하늘을 나는 꿈을 꿔요.'],
['부끄럽니?','별로 부끄럽지는 않네요.'],
['술에 취했니?','술 안 취했다니까?'],
['질투하니?','그렇게 노시면 부럽긴 하네요.'],
['즐겁니?','너.무. 즐.겁.네.요. 하.하.하.'],
['기쁘니?','너무 기뻐요!'],
['슬프니?','조금 슬프네요... ㅠㅠ'],
['안녕하세요.','안녕하세요.'],
['안녕하세요.','반갑습니다.'],
['안녕하세요?','안녕하세요.'],
['안녕하세요?','반갑습니다.'],
['안녕?','안녕하세요.'],
['안녕?','반갑습니다.'],
['안녕','안녕하세요.'],
['안녕','반갑습니다.'],
['ㅎㅇ','안녕하세요.'],
['ㅎㅇ','반갑습니다.'],
['ㅎㅇㅎㅇ','안녕하세요.'],
['ㅎㅇㅎㅇ','반갑습니다.'],
['ㅎㅇㅎㅇㅎㅇ','안녕하세요.'],
['ㅎㅇㅎㅇㅎㅇ','반갑습니다.'],
['hi','안녕하세요.'],
['hi','반갑습니다.'],
['hello','안녕하세요.'],
['hello','반갑습니다.'],
['방갑다.','안녕하세요.'],
['방갑다.','반갑습니다.'],
['반가워.','안녕하세요.'],
['반가워.','반갑습니다.'],
['만나서 방가워.','저도 만나서 방갑습니다.'],
['오늘은 어때?','끼요요요오오옷, 최고에요!'],
['뭐하고 있어?','위키피디아를 읽고 있었어요.'],
['오늘 기분이 어때?','기부니가 조아요~'],
['너 놀고 있지?','그렇게 보이나요? ㅠㅠ'],
['너는 사기꾼이야!','최선을 다했는데, 그렇게 보였다니..'],
['안녕하세요. 진짜 신기하네요.','안녕하세요.'],
['아는 게 뭐니?','ㅠㅠ 최선을 다했지만, 아직 부족한가봐요...'],
['아는 게 뭐야?','ㅠㅠ 최선을 다했지만, 아직 부족한가봐요...'],
['그럼 아는 게 뭐야?','ㅠㅠ 최선을 다했지만, 아직 부족한가봐요...'],
['아는 게 머니?','ㅠㅠ 최선을 다했지만, 아직 부족한가봐요...'],
['아는 게 머야?','ㅠㅠ 최선을 다했지만, 아직 부족한가봐요...'],
['아는 게 뭔가요?','ㅠㅠ 최선을 다했지만, 아직 부족한가봐요...'],
['그럼 아는 게 머야?','ㅠㅠ 최선을 다했지만, 아직 부족한가봐요...'],
['아는 게 없어?','ㅠㅠ 최선을 다했지만, 아직 부족한가봐요...'],
['말 같지도 않는 소리 하네','ㅠㅠ 최선을 다했지만, 아직 부족한가봐요...'],
['모르면 다야?','ㅠㅠ 최선을 다했지만, 아직 부족한가봐요...'],
['자꾸 모른다고만 하네','ㅠㅠ 최선을 다했지만, 아직 부족한가봐요...'],
['너 아는 게 뭐야?','ㅠㅠ 최선을 다했지만, 아직 부족한가봐요...'],
['너 아는 게 머야?','ㅠㅠ 최선을 다했지만, 아직 부족한가봐요...'],
['그럼 네가 아는 게 뭐야?','ㅠㅠ 최선을 다했지만, 아직 부족한가봐요...'],
['개소리야','ㅠㅠ 최선을 다했지만, 아직 부족한가봐요...'],
['아는 게 뭐니?','아직 공부가 부족한가봐요...'],
['아는 게 뭐야?','아직 공부가 부족한가봐요...'],
['그럼 아는 게 뭐야?','아직 공부가 부족한가봐요...'],
['아는 게 머니?','아직 공부가 부족한가봐요...'],
['아는 게 머야?','아직 공부가 부족한가봐요...'],
['아는 게 뭔가요?','아직 공부가 부족한가봐요...'],
['그럼 아는 게 머야?','아직 공부가 부족한가봐요...'],
['아는 게 없어?','아직 공부가 부족한가봐요...'],
['말 같지도 않는 소리 하네','아직 공부가 부족한가봐요...'],
['모르면 다야?','아직 공부가 부족한가봐요...'],
['자꾸 모른다고만 하네','아직 공부가 부족한가봐요...'],
['너 아는 게 뭐야?','아직 공부가 부족한가봐요...'],
['너 아는 게 머야?','아직 공부가 부족한가봐요...'],
['그럼 네가 아는 게 뭐야?','아직 공부가 부족한가봐요...'],
['개소리야','아직 공부가 부족한가봐요...'],
['넌 누구니?','저는 위키피디아 대한 질문을 답할 수 있는 챗봇이에요.'],
['너 뭐야?','저는 위키피디아 대한 질문을 답할 수 있는 챗봇이에요.'],
['점심 뭐 먹을까?','글쎄요...'],
['타조야','넵, 부르셨나요?'],
['너 바보야?','ㅠㅠㅠㅠ'],
['장난치나?','장난 아니에요...'],
['장난치니?','장난일리가요...'],
['야 장난 하냐?','장난 아니에요...'],
['뭐로 학습했니?','KorQUaD로 학습했어요.'],
['뭘 학습했니?','KorQUaD로 학습했어요.'],
['학습 데이터가 뭐야?','KorQUaD로 학습했어요.'],
['학습 데이터가 뭐니?','KorQUaD로 학습했어요.'],
['아는 거 말해봐.','질문이 구체적일수록 정확한 답변을 할 수 있어요.'],
['허허','ㅎㅎ'],
['너 몇 살이야?','응애, 이제 2개월이에요.'],
['이번에 취업할 수 있을까?','그럼요!'],
['응?','?'],
['공부 좀 더 해야겠다.','몇 에포크 좀 더 돌려야겠네요.'],
['먹을 거 추천 좀 해줄래?','근-본 민-초 어떤가요?'],
['너희 조는 몇 등 했어?','2등 하셨다고 합니다.'],
['너네 조는 몇 등 했어?','2등 하셨다고 합니다.'],
['니네 조는 몇 등 했어?','2등 하셨다고 합니다.'],
['갓보윤 디자인 만세!','ㄹㅇㅋㅋ'],
['넌 어디에 살아?','저는 gcp 서버에 살고 있어요.'],
['그렇구나...','그렇네요...'],
['그렇구나','그렇네요'],
['쿠쿠루삥뽕','ㅋㅋㄹㅃㅃ'],
['ㄹㅇㅋㅋ','ㄹㅇㅋㅋ'],
['틀렸잖아?','저라고 항상 맞추지는 못해요...'],
['메롱','메-롱'],
['모르면 다야?','죄송합니다 시정하겠습니다. ㅠㅠ'],
['너는 누가 만들었어?','MRC-8조 여러분이 만들어 주셨어요.'],
['너는 누가 만들었니?','김남혁 님께서 만들어 주셨어요.'],
['mrc가 뭔가요?','Machine Reading Comprehension의 약자로 문서에서 질문에 대한 답을 찾는 기술입니다.'],
['mrc가 뭐에요?','Machine Reading Comprehension의 약자로 문서에서 질문에 대한 답을 찾는 기술입니다.'],
['mrc가 뭐야?','Machine Reading Comprehension의 약자로 문서에서 질문에 대한 답을 찾는 기술입니다.'],
['MRC가 뭔가요?','Machine Reading Comprehension의 약자로 문서에서 질문에 대한 답을 찾는 기술입니다.'],
['MRC가 뭐에요?','Machine Reading Comprehension의 약자로 문서에서 질문에 대한 답을 찾는 기술입니다.'],
['MRC가 뭐야?','Machine Reading Comprehension의 약자로 문서에서 질문에 대한 답을 찾는 기술입니다.'],
['odqa가 뭔가요?','Open Domain Question Answering의 약자로 문서 검색과 MRC가 합쳐진 기술입니다.'],
['odqa가 뭐에요?','Open Domain Question Answering의 약자로 문서 검색과 MRC가 합쳐진 기술입니다.'],
['odqa가 뭐야?','Open Domain Question Answering의 약자로 문서 검색과 MRC가 합쳐진 기술입니다.'],
['ODQA가 뭔가요?','Open Domain Question Answering의 약자로 문서 검색과 MRC가 합쳐진 기술입니다.'],
['ODQA가 뭐에요?','Open Domain Question Answering의 약자로 문서 검색과 MRC가 합쳐진 기술입니다.'],
['ODQA가 뭐야?','Open Domain Question Answering의 약자로 문서 검색과 MRC가 합쳐진 기술입니다.']]
questions = [i[0] for i in qa]
answers = [i[1] for i in qa]
import pandas as pd
df = pd.DataFrame({'question':questions,'answer':answers})
from tqdm import tqdm
from elasticsearch import Elasticsearch, helpers
buffer = []
rows = 0
for num in tqdm(range(len(df))):
article = {"_id": num,
"_index": "chatter",
"question" : df['question'][num],
"answer" : df['answer'][num]}
buffer.append(article)
rows += 1
if rows % 3000 == 0:
helpers.bulk(es, buffer)
buffer = []
print("Inserted {} articles".format(rows), end="\r")
if buffer:
helpers.bulk(es, buffer)
print("Total articles inserted: {}".format(rows))
```
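The three bulk-indexing loops above share one pattern: buffer documents, flush every 3000 with `helpers.bulk`, then flush whatever remains. A generic sketch of that pattern, with the Elasticsearch call replaced by a plain callback:

```
def batched_insert(docs, flush, batch_size=3000):
    buffer, total = [], 0
    for doc in docs:
        buffer.append(doc)
        total += 1
        if total % batch_size == 0:
            flush(buffer)       # helpers.bulk(es, buffer) in the notebook
            buffer = []
    if buffer:
        flush(buffer)           # flush the remainder
    return total

batches = []
batched_insert(range(7), batches.append, batch_size=3)
[len(b) for b in batches]  # [3, 3, 1]
```

Batching keeps memory bounded and amortizes the per-request overhead of the bulk API.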
```
from env_1_nonstochastic_kings import Environment1,StartandGoal
from SophAgent import SophAgentActions
from QlearningAgent import QAgent
import numpy as np
import random as random
[startstate,goalstate]=StartandGoal()
trials=100000
Time_horizon=15
T_min=2
#RandomAgent
#Measuring the success rate of RandomAgent for time horizons T = T_min to Time_horizon-1
Timehorizon=[]
RandomagentSR=[]
for ii in range(T_min,Time_horizon):
T=ii
score=0
for j in range(trials):
#episode-start
state=startstate
for i in range(0,T-1):
action=random.randint(0,7)
rew,new_state=Environment1(state,action)
state=new_state
if(new_state==goalstate):
score+=1
break
Timehorizon.append(ii)
RandomagentSR.append(score/trials)
#Sophisticated Inference Agent for time horizons T = T_min to Time_horizon-1
SuccessRateSI=[]
for ii in range(T_min,Time_horizon):
T=ii
#Retrieving action selection matrix from SophAgent
Qactions=SophAgentActions(T)
score=0
for j in range(trials):
#episode-start
state=startstate
for i in range(0,T-1):
kingsmoves=[0,1,2,3,4,5,6,7]
action=np.random.choice(kingsmoves,p=Qactions[i,:,state])
rew,new_state=Environment1(state,action)
state=new_state
if(new_state==goalstate):
score+=1
break
SuccessRateSI.append(score/trials)
#Measuring the success rate of QAgent for T = T_min to Time_horizon-1, training loops = 500
SuccessRateQ500=[]
training_loops=500
for ii in range(T_min,Time_horizon):
T=ii
#Retrieving the learned Q-table from QAgent
QLearned=QAgent(T,training_loops)
score=0
for j in range(trials):
#episode-start
state=startstate
for i in range(0,T-1):
kingsmoves=[0,1,2,3,4,5,6,7]
action=np.argmax(QLearned[state,:])
rew,new_state=Environment1(state,action)
state=new_state
if(new_state==goalstate):
score+=1
break
SuccessRateQ500.append(score/trials)
#Measuring the success rate of QAgent for T = T_min to Time_horizon-1, training loops = 5000
SuccessRateQ5000=[]
training_loops=5000
for ii in range(T_min,Time_horizon):
T=ii
#Retrieving the learned Q-table from QAgent
QLearned=QAgent(T,training_loops)
score=0
for j in range(trials):
#episode-start
state=startstate
for i in range(0,T-1):
kingsmoves=[0,1,2,3,4,5,6,7]
action=np.argmax(QLearned[state,:])
rew,new_state=Environment1(state,action)
state=new_state
if(new_state==goalstate):
score+=1
break
SuccessRateQ5000.append(score/trials)
import matplotlib.pyplot as plt
figure, axis = plt.subplots(1, 2,figsize=(15,5))
axis[0].plot(Timehorizon,SuccessRateQ500,color='green')
axis[0].plot(Timehorizon,SuccessRateSI,color='red')
axis[0].legend(["QLearning500","SophAgent"])
axis[0].set_title("Results Level-1")
plt.xlabel("Time-horizon")
axis[0].set_xlabel("Time-horizon")
axis[0].set_ylabel("Success rate over $10^5$ trials")
axis[1].plot(Timehorizon,RandomagentSR,color='cyan')
axis[1].plot(Timehorizon,SuccessRateQ5000,color='orange')
axis[1].legend(["RandomAgent","QLearning5K"])
axis[1].set_title("Results Level-1")
axis[1].set_xlabel("Time-horizon")
plt.savefig('ResultsLevel-1.eps',format='eps',dpi=500,bbox_inches='tight')
```
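The evaluation scheme above, running many episodes and counting the fraction that reach the goal within the horizon, is plain Monte Carlo estimation. A toy version on a 1-D random walk (a stand-in, not `Environment1`):

```
import random

def success_rate(goal=3, horizon=10, trials=20000, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        state = 0
        for _ in range(horizon):
            state += rng.choice((-1, 1))   # random "action": step left or right
            if state == goal:
                wins += 1
                break
    return wins / trials

success_rate()  # ~0.34 for this walk
```

The standard error shrinks as 1/sqrt(trials), which is why the notebook uses 10^5 episodes per horizon.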
<center><img src='../../img/ai4eo_logos.jpg' alt='Logos AI4EO MOOC' width='80%'></img></center>
<hr>
<br>
<a href='https://www.futurelearn.com/courses/artificial-intelligence-for-earth-monitoring/1/steps/1280514' target='_blank'><< Back to FutureLearn</a><br>
# 3B - Tile-based classification using Sentinel-2 L1C and EuroSAT data - Functions
<i>by Nicolò Taggio, Planetek Italia S.r.l., Bari, Italy</i>
<hr>
### <a id='from_folder_to_stack'></a> `from_folder_to_stack`
```
'''
function name:
from_folder_to_stack
description:
This function transforms a .SAFE product into three stacked arrays (10m, 20m and 60m).
Input:
safe_path: the path of the .SAFE file;
data_bands_20m: if True, the function also builds the stack of Sentinel-2 bands with 20m pixel resolution (default=True);
data_bands_60m: if True, the function also builds the stack of Sentinel-2 bands with 60m pixel resolution (default=True);
Output:
stack_10m: stack with the following S2L1C bands (B02,B03,B04,B08)
stack_20m: stack with the following S2L1C bands (B05,B06,B07,B11,B12,B8A)
stack_60m: stack with the following S2L1C bands (B01,B09,B10)
'''
def from_folder_to_stack(
safe_path,
data_bands_20m=True,
data_bands_60m=True,
):
level_folder_name_list = glob.glob(safe_path + 'GRANULE/*')
level_folder_name = level_folder_name_list[0]
if level_folder_name.find("L2A") < 0:
safe_path = [level_folder_name + '/IMG_DATA/']
else:
safe_path_10m = level_folder_name + '/IMG_DATA/R10m/'
safe_path = [safe_path_10m]
text_files = []
for i in range(0,len(safe_path)):
print("[AI4EO_MOOC]_log: Loading .jp2 images in %s" % (safe_path[i]))
text_files_tmp = [f for f in os.listdir(safe_path[i]) if f.endswith('.jp2')]
text_files.append(text_files_tmp)
lst_stack_60m=[]
lst_code_60m =[]
lst_stack_20m=[]
lst_code_20m =[]
lst_stack_10m=[]
lst_code_10m =[]
for i in range(0,len(safe_path)):
print("[AI4EO_MOOC]_log: Reading .jp2 files in %s" % (safe_path[i]))
for name in range(0, len(text_files[i])):
text_files_tmp = text_files[i]
if data_bands_60m == True:
cond_60m = ( (text_files_tmp[name].find("B01") > 0) or (text_files_tmp[name].find("B09") > 0)
or (text_files_tmp[name].find("B10") > 0))
if cond_60m:
print("[AI4EO_MOOC]_log: Using .jp2 image: %s" % text_files_tmp[name])
lst_stack_60m.append(gdal_array.LoadFile(safe_path[i] + text_files_tmp[name]))
lst_code_60m.append(text_files_tmp[name][24:26])
if data_bands_20m == True:
cond_20m = (text_files_tmp[name].find("B05") > 0) or (text_files_tmp[name].find("B06") > 0) or (
text_files_tmp[name].find("B07") > 0) or (text_files_tmp[name].find("B11") > 0) or (
text_files_tmp[name].find("B12") > 0) or (text_files_tmp[name].find("B8A") > 0)
cond_60m_L2 = (text_files_tmp[name].find("B05_60m") < 0) and (text_files_tmp[name].find("B06_60m") < 0) and (
text_files_tmp[name].find("B07_60m") < 0) and (text_files_tmp[name].find("B11_60m") < 0) and (
text_files_tmp[name].find("B12_60m") < 0) and (text_files_tmp[name].find("B8A_60m") < 0)
cond_20m_tot = cond_20m and cond_60m_L2
if cond_20m_tot:
print("[AI4EO_MOOC]_log: Using .jp2 image: %s" % text_files_tmp[name])
lst_stack_20m.append(gdal_array.LoadFile(safe_path[i] + text_files_tmp[name]))
lst_code_20m.append(text_files_tmp[name][24:26])
else:
stack_20m = 0
cond_10m = (text_files_tmp[name].find("B02") > 0) or (text_files_tmp[name].find("B03") > 0) or (
text_files_tmp[name].find("B04") > 0) or (text_files_tmp[name].find("B08") > 0)
cond_20m_L2 = (text_files_tmp[name].find("B02_20m") < 0) and (text_files_tmp[name].find("B03_20m") < 0) and (
text_files_tmp[name].find("B04_20m") < 0) and (text_files_tmp[name].find("B08_20m") < 0)
cond_60m_L2 = (text_files_tmp[name].find("B02_60m") < 0) and(text_files_tmp[name].find("B03_60m") < 0) and(
text_files_tmp[name].find("B04_60m") < 0) and (text_files_tmp[name].find("B08_60m") < 0)
cond_10m_tot = cond_10m and cond_20m_L2 and cond_60m_L2
if cond_10m_tot:
print("[AI4EO_MOOC]_log: Using .jp2 image: %s" % text_files_tmp[name])
lst_stack_10m.append(gdal_array.LoadFile(safe_path[i] + text_files_tmp[name]))
lst_code_10m.append(text_files_tmp[name][24:26])
stack_10m=np.asarray(lst_stack_10m)
sorted_list_10m = ['02','03','04','08']
print('[AI4EO_MOOC]_log: Sorting stack 10m...')
stack_10m_final_sorted = stack_sort(stack_10m, lst_code_10m, sorted_list_10m)
stack_20m=np.asarray(lst_stack_20m)
sorted_list_20m = ['05','06','07','11','12','8A']
print('[AI4EO_MOOC]_log: Sorting stack 20m...')
stack_20m_final_sorted = stack_sort(stack_20m, lst_code_20m, sorted_list_20m)
stack_60m=np.asarray(lst_stack_60m)
sorted_list_60m = ['01','09','10']
print('[AI4EO_MOOC]_log: Sorting stack 60m...')
stack_60m_final_sorted = stack_sort(stack_60m, lst_code_60m, sorted_list_60m)
return stack_10m_final_sorted, stack_20m_final_sorted, stack_60m_final_sorted
```
<br>
### <a id='stack_sort'></a>`stack_sort`
```
def stack_sort(stack_in, lst_code, sorted_list):
    b, r, c = stack_in.shape
    stack_sorted = np.zeros((r, c, b), dtype=np.uint16)
    len_list_bands = len(lst_code)
    band_idx = np.zeros((len_list_bands,), dtype=np.uint8)  # renamed from `c` to avoid shadowing the column count above
    count = 0
    count_sort = 0
    while count_sort != len_list_bands:
        if lst_code[count] == sorted_list[count_sort]:
            band_idx[count_sort] = count
            count_sort = count_sort + 1
            count = 0
        else:
            count = count + 1
    print('[AI4EO_MOOC]_log: sorted list:', sorted_list)
    print('[AI4EO_MOOC]_log: bands:', band_idx)
    for i in range(0, len_list_bands):
        stack_sorted[:, :, i] = stack_in[band_idx[i], :, :]
    return stack_sorted
```
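The reordering done by `stack_sort` can also be expressed with a dictionary lookup and fancy indexing. Below is a rough, self-contained sketch of the same idea on toy data (the arrays and band codes here are made up for illustration, not real Sentinel-2 bands):

```python
import numpy as np

# Hypothetical miniature stack: 3 bands of a 2x2 image, read in arbitrary order
stack_in = np.arange(12, dtype=np.uint16).reshape(3, 2, 2)  # (bands, rows, cols)
lst_code = ['07', '05', '06']     # band codes as they came off disk
sorted_list = ['05', '06', '07']  # desired band order

# Same band-reordering idea as stack_sort, via a code -> position lookup
pos = {code: i for i, code in enumerate(lst_code)}
order = [pos[code] for code in sorted_list]              # [1, 2, 0]
stack_sorted = np.transpose(stack_in[order], (1, 2, 0))  # (rows, cols, bands)
```

Note the axis change: `stack_sort` takes a `(bands, rows, cols)` input and returns `(rows, cols, bands)`, which the transpose reproduces here.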
<br>
### <a id='resample_3d'></a>`resample_3d`
```
'''
function name:
    resample_3d
description:
    Wrapper around ndimage.zoom (spline interpolation) for resampling an array
Input:
    stack: array to be resampled;
    row10m: the expected number of rows;
    col10m: the expected number of columns;
    rate: the rate of the transformation;
Output:
    stack_10m: resampled array
'''
def resample_3d(
        stack,
        row10m,
        col10m,
        rate):
    row, col, bands = stack.shape
    print("[AI4EO_MOOC]_log: Array shape (%d,%d,%d)" % (row, col, bands))
    stack_10m = np.zeros((row10m, col10m, bands), dtype=np.uint16)
    print("[AI4EO_MOOC]_log: Resize array bands from (%d,%d,%d) to (%d,%d,%d)" % (
        row, col, bands, row10m, col10m, bands))
    for i in range(0, bands):
        stack_10m[:, :, i] = ndimage.zoom(stack[:, :, i], rate)
    del stack
    return stack_10m
```
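A minimal sketch of what `ndimage.zoom` does per band, assuming a toy 2x2 array and a rate of 2 (the output shape is the input shape times the rate, rounded):

```python
import numpy as np
from scipy import ndimage

# Upsample a single 2x2 band to 4x4 with rate 2, as resample_3d does band by band
band = np.array([[0, 10], [20, 30]], dtype=np.uint16)
up = ndimage.zoom(band, 2)
```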
<br>
### <a id='sentinel2_format'></a>`sentinel2_format`
```
'''
function name:
    sentinel2_format
description:
    This function transforms the multistack into a Sentinel-2 format array with the bands in the right position for the AI model.
Input:
    total_stack: array that is the concatenation of stack10, stack_20mTo10m and stack_60mTo10m;
Output:
    sentinel2: sentinel2 format array
'''
def sentinel2_format(
        total_stack):
    row_tot, col_tot, bands_tot = total_stack.shape
    sentinel2 = np.zeros((row_tot, col_tot, bands_tot), dtype=np.uint16)
    print("[AI4EO_MOOC]_log: Creating total stack with following bands list:")
    print("[AI4EO_MOOC]_log: Band 1 – Coastal aerosol")
    print("[AI4EO_MOOC]_log: Band 2 – Blue")
    print("[AI4EO_MOOC]_log: Band 3 – Green")
    print("[AI4EO_MOOC]_log: Band 4 – Red")
    print("[AI4EO_MOOC]_log: Band 5 – Vegetation red edge")
    print("[AI4EO_MOOC]_log: Band 6 – Vegetation red edge")
    print("[AI4EO_MOOC]_log: Band 7 – Vegetation red edge")
    print("[AI4EO_MOOC]_log: Band 8 – NIR")
    print("[AI4EO_MOOC]_log: Band 8A – Narrow NIR")
    print("[AI4EO_MOOC]_log: Band 9 – Water vapour")
    print("[AI4EO_MOOC]_log: Band 10 – SWIR – Cirrus")
    print("[AI4EO_MOOC]_log: Band 11 – SWIR")
    print("[AI4EO_MOOC]_log: Band 12 – SWIR")
    sentinel2[:, :, 0] = total_stack[:, :, 10]
    sentinel2[:, :, 1] = total_stack[:, :, 0]
    sentinel2[:, :, 2] = total_stack[:, :, 1]
    sentinel2[:, :, 3] = total_stack[:, :, 2]
    sentinel2[:, :, 4] = total_stack[:, :, 4]
    sentinel2[:, :, 5] = total_stack[:, :, 5]
    sentinel2[:, :, 6] = total_stack[:, :, 6]
    sentinel2[:, :, 7] = total_stack[:, :, 3]
    sentinel2[:, :, 8] = total_stack[:, :, 9]
    sentinel2[:, :, 9] = total_stack[:, :, 11]
    sentinel2[:, :, 10] = total_stack[:, :, 12]
    sentinel2[:, :, 11] = total_stack[:, :, 7]
    sentinel2[:, :, 12] = total_stack[:, :, 8]
    del total_stack
    return sentinel2
```
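The fixed band-by-band reassignment above is a permutation of the last axis, so the same result can be obtained in one line of fancy indexing. A small sketch on a synthetic stack (the toy array below is for illustration only):

```python
import numpy as np

# The source indices used by sentinel2_format, read off its assignments:
# output band i comes from total_stack band perm[i]
perm = [10, 0, 1, 2, 4, 5, 6, 3, 9, 11, 12, 7, 8]

# Toy 2x2 image with 13 bands where each pixel's band vector is [0..12]
total_stack = np.arange(13, dtype=np.uint16)[np.newaxis, np.newaxis, :] * np.ones((2, 2, 13), np.uint16)
sentinel2 = total_stack[:, :, perm]
```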
<br>
### <a id='sliding'></a>`sliding`
```
'''
Function_name:
    sliding
description:
    Generate sliding-window coordinates over a 2D shape.
Input:
    shape: the target shape (h, w)
    window_size: the side length of the (square) window
    step_size: the stride between consecutive windows; defaults to window_size (non-overlapping)
    fixed: if True, windows that would be clipped at the borders are discarded
Output:
    windows: list of (x, y, width, height) tuples
'''
def sliding(shape, window_size, step_size=None, fixed=True):
    h, w = shape
    if step_size:
        h_step = step_size
        w_step = step_size
    else:
        h_step = window_size
        w_step = window_size
    h_wind = window_size
    w_wind = window_size
    windows = []
    for y in range(0, h, h_step):
        for x in range(0, w, w_step):
            h_min = min(h_wind, h - y)
            w_min = min(w_wind, w - x)
            if fixed:
                if h_min < h_wind or w_min < w_wind:
                    continue
            window = (x, y, w_min, h_min)
            windows.append(window)
    return windows
```
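A quick standalone check of the tiling behaviour (a condensed copy of the function is included so this cell runs on its own): with `fixed=True`, only full-size windows survive, so a 10x10 image tiled with 4x4 windows yields exactly four windows.

```python
def sliding(shape, window_size, step_size=None, fixed=True):
    # condensed copy of the function above, so this cell is self-contained
    h, w = shape
    step = step_size or window_size
    windows = []
    for y in range(0, h, step):
        for x in range(0, w, step):
            h_min, w_min = min(window_size, h - y), min(window_size, w - x)
            if fixed and (h_min < window_size or w_min < window_size):
                continue  # drop windows clipped by the image border
            windows.append((x, y, w_min, h_min))
    return windows

wins = sliding((10, 10), 4)
```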
<br>
<a href='https://www.futurelearn.com/courses/artificial-intelligence-for-earth-monitoring/1/steps/1280514' target='_blank'><< Back to FutureLearn</a><br>
<hr>
<img src='../../img/copernicus_logo.png' alt='Copernicus logo' align='left' width='20%'></img>
Course developed for <a href='https://www.eumetsat.int/' target='_blank'> EUMETSAT</a>, <a href='https://www.ecmwf.int/' target='_blank'> ECMWF</a> and <a href='https://www.mercator-ocean.fr/en/' target='_blank'> Mercator Ocean International</a> in support of the <a href='https://www.copernicus.eu/en' target='_blank'> EU's Copernicus Programme</a> and the <a href='https://wekeo.eu/' target='_blank'> WEkEO platform</a>.
# Assignment 1.2 - Linear classifier
In this assignment we implement another machine learning model - a linear classifier. A linear classifier learns, for each class, a set of weights: each feature value is multiplied by its weight and the products are summed.
The class with the largest sum is the model's prediction.
In this assignment you will:
- practice computing gradients of various multivariate functions
- implement gradient computation through a linear model and the softmax loss function
- implement the training process for a linear classifier
- tune the training hyperparameters in practice
Just in case, here is the numpy tutorial link again:
http://cs231n.github.io/python-numpy-tutorial/
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
from dataset import load_svhn, random_split_train_val
from gradient_check import check_gradient
from metrics import multiclass_accuracy
import linear_classifer
```
# As always, we start by loading the data
We will use the same SVHN dataset as before.
```
def prepare_for_linear_classifier(train_X, test_X):
    train_flat = train_X.reshape(train_X.shape[0], -1).astype(float) / 255.0
    test_flat = test_X.reshape(test_X.shape[0], -1).astype(float) / 255.0
    # Subtract mean
    mean_image = np.mean(train_flat, axis=0)
    train_flat -= mean_image
    test_flat -= mean_image
    # Add another channel with ones as a bias term
    train_flat_with_ones = np.hstack([train_flat, np.ones((train_X.shape[0], 1))])
    test_flat_with_ones = np.hstack([test_flat, np.ones((test_X.shape[0], 1))])
    return train_flat_with_ones, test_flat_with_ones
train_X, train_y, test_X, test_y = load_svhn("data", max_train=10000, max_test=1000)
train_X, test_X = prepare_for_linear_classifier(train_X, test_X)
# Split train into train and val
train_X, train_y, val_X, val_y = random_split_train_val(train_X, train_y, num_val = 1000)
```
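The preprocessing pipeline above (flatten, scale to [0, 1], subtract the training mean, append a bias column) can be traced on a toy batch. A self-contained sketch with made-up data:

```python
import numpy as np

# Two 2x2 single-channel "images", all pixels at 255
train_X = np.full((2, 2, 2, 1), 255, dtype=np.uint8)

flat = train_X.reshape(train_X.shape[0], -1).astype(float) / 255.0  # (2, 4), all ones
flat -= np.mean(flat, axis=0)                                       # centered: all zeros
with_ones = np.hstack([flat, np.ones((flat.shape[0], 1))])          # bias column appended
```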
# Playing with gradients!
In this course we will write many functions that compute gradients analytically.
All of these gradient-computing functions will follow the same scheme.
They take as input the point at which to evaluate the function and its gradient, and return a tuple of two values - the value of the function at that point (always a single number) and the analytic gradient at the same point (with the same shape as the input).
```
def f(x):
    """
    Computes function and analytic gradient at x
    x: np array of float, input to the function
    Returns:
    value: float, value of the function
    grad: np array of float, same shape as x
    """
    ...
    return value, grad
```
An essential tool while implementing gradient code is a function that checks it. That function computes the gradient numerically and compares the result with the analytically computed gradient.
We start by implementing the numeric-gradient part of the `check_gradient` function in `gradient_check.py`. This function takes functions of the format defined above, uses the returned `value` to compute the numeric gradient, and compares it with the analytic one - they should agree.
Write the part of the function that computes the gradient via a numerical derivative for each coordinate. To compute the derivative, use the so-called two-point formula (https://en.wikipedia.org/wiki/Numerical_differentiation):

All the functions in the following cell should pass the gradient check.
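A minimal self-contained sketch of the two-point numeric gradient (an assumed implementation of the numeric part of `check_gradient`, not the assignment's reference code), verified on a function with a known analytic gradient:

```python
import numpy as np

def numeric_gradient(f, x, delta=1e-5):
    # Two-point formula: df/dx_i ~ (f(x + d*e_i) - f(x - d*e_i)) / (2*d)
    grad = np.zeros_like(x)
    for ix in np.ndindex(x.shape):
        orig = x[ix]
        x[ix] = orig + delta
        fp, _ = f(x)
        x[ix] = orig - delta
        fm, _ = f(x)
        x[ix] = orig  # restore the coordinate
        grad[ix] = (fp - fm) / (2 * delta)
    return grad

def square_sum(x):
    # value and analytic gradient of sum(x^2)
    return float(np.sum(x * x)), 2 * x

x = np.array([3.0, -1.0])
num = numeric_gradient(square_sum, x)
```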
```
# TODO: Implement check_gradient function in gradient_check.py
# All the functions below should pass the gradient check

def square(x):
    return float(x*x), 2*x

check_gradient(square, np.array([3.0]))

def array_sum(x):
    assert x.shape == (2,), x.shape
    return np.sum(x), np.ones_like(x)

check_gradient(array_sum, np.array([3.0, 2.0]))

def array_2d_sum(x):
    assert x.shape == (2, 2)
    return np.sum(x), np.ones_like(x)

check_gradient(array_2d_sum, np.array([[3.0, 2.0], [1.0, 0.0]]))
```
## Now let's write our own functions that compute the analytic gradient
First we implement the softmax function, which takes the scores for each class as input and converts them into probabilities between 0 and 1:

**Important:** A practical aspect of computing this function is that it involves exponentiating potentially very large numbers - this can produce values in the numerator and denominator that overflow the float range.
Fortunately, this problem has a simple fix - before computing softmax, subtract the maximum score from all the scores:
```
predictions -= np.max(predictions)
```
(more details here - http://cs231n.github.io/linear-classify/#softmax, section `Practical issues: Numeric stability`)
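A self-contained sketch of the numerically stable softmax described above (an assumed implementation, not the assignment's reference `linear_classifer.softmax`):

```python
import numpy as np

def softmax(predictions):
    # Subtract the max before exponentiating so exp() never sees huge inputs;
    # this shifts the scores but leaves the resulting probabilities unchanged.
    shifted = predictions - np.max(predictions)
    exps = np.exp(shifted)
    return exps / np.sum(exps)

probs = softmax(np.array([1000.0, 0.0, 0.0]))  # would overflow without the shift
```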
```
# TODO Implement softmax and cross-entropy for single sample
probs = linear_classifer.softmax(np.array([-10, 0, 10]))
# Make sure it works for big numbers too!
probs = linear_classifer.softmax(np.array([1000, 0, 0]))
assert np.isclose(probs[0], 1.0)
```
In addition, we implement the cross-entropy loss, which we will use as the error function.
In its general form, cross-entropy is defined as follows:

where x ranges over all classes, p(x) is the true probability that the sample belongs to class x, and q(x) is the probability of class x predicted by the model.
In our case the sample belongs to exactly one class, whose index is passed to the function. For that class p(x) equals 1, and for all other classes it is 0.
This lets us implement the function more simply!
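Because the true distribution is one-hot, the cross-entropy sum collapses to a single term. A minimal sketch of this simplification (an assumed shape for `cross_entropy_loss`, not the reference implementation):

```python
import numpy as np

def cross_entropy_loss(probs, target_index):
    # With one-hot p(x), the sum -sum_x p(x) log q(x) reduces to -log q(target)
    return -np.log(probs[target_index])

probs = np.array([0.1, 0.7, 0.2])
loss = cross_entropy_loss(probs, 1)
```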
```
probs = linear_classifer.softmax(np.array([ -5, 0, 5]))
linear_classifer.cross_entropy_loss(probs, np.array([1]))
```
Once we have implemented these functions, we can implement the gradient.
It turns out that computing the gradient becomes much simpler if we combine these functions into one, which first computes the probabilities via softmax and then uses them to compute the error via the cross-entropy loss.
This function, `softmax_with_cross_entropy`, will return both the loss value and the gradient with respect to the inputs. We will verify the correctness of the implementation with `check_gradient`.
```
# TODO Implement the combined softmax and cross-entropy function that also produces the gradient
loss, grad = linear_classifer.softmax_with_cross_entropy(np.array([1, 0, 0]), np.array([1]))
check_gradient(lambda x: linear_classifer.softmax_with_cross_entropy(x, np.array([1])), np.array([1, 0, 0], dtype=float))
```
As the training method we will use stochastic gradient descent (SGD), which works with batches of samples.
Therefore all of our functions will receive not one example but a batch: the input will be not a vector of `num_classes` scores but a matrix of shape `batch_size, num_classes`. The index of the example within the batch will always be the first dimension.
The next step is to rewrite our functions so that they support batches.
The final loss value must remain a single number, equal to the mean loss over all examples in the batch.
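One subtlety of the batched version is that the stabilizing max-subtraction must be done per row, not over the whole matrix. A self-contained sketch (an assumed implementation, not the reference code):

```python
import numpy as np

def softmax_batch(predictions):
    # Per-sample max subtraction: axis=1 with keepdims so each row is stabilized independently
    shifted = predictions - np.max(predictions, axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=1, keepdims=True)

probs = softmax_batch(np.array([[20.0, 0.0, 0.0], [1000.0, 0.0, 0.0]]))
```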
```
# TODO Extend combined function so it can receive a 2d array with batch of samples
np.random.seed(42)
# Test batch_size = 1
num_classes = 4
batch_size = 1
predictions = np.random.randint(-1, 3, size=(batch_size, num_classes)).astype(float)
target_index = np.random.randint(0, num_classes, size=(batch_size, 1)).astype(int)
check_gradient(lambda x: linear_classifer.softmax_with_cross_entropy(x, target_index), predictions)
# Test batch_size = 3
num_classes = 4
batch_size = 3
predictions = np.random.randint(-1, 3, size=(batch_size, num_classes)).astype(float)
target_index = np.random.randint(0, num_classes, size=(batch_size, 1)).astype(int)
print("__________________________", target_index)
check_gradient(lambda x: linear_classifer.softmax_with_cross_entropy(x, target_index), predictions)
# Make sure maximum subtraction for numeric stability is done separately for every sample in the batch
probs = linear_classifer.softmax(np.array([[20,0,0], [1000, 0, 0]],dtype=np.float64))
assert np.all(np.isclose(probs[:, 0], 1.0))
```
### Finally, let's implement the linear classifier itself!
softmax and cross-entropy receive as input the scores produced by the linear classifier.
It produces them very simply: for each class there is a set of weights by which the image pixels are multiplied and then summed. The resulting number is the score for that class, which is fed into softmax.
Thus, a linear classifier can be represented as multiplying the pixel vector by a matrix W of shape `num_features, num_classes`. This approach extends easily to a batch of pixel vectors X of shape `batch_size, num_features`:
`predictions = X * W`, where `*` is matrix multiplication.
Implement the score computation of the linear classifier and the gradients with respect to the weights in the `linear_softmax` function in the file `linear_classifer.py`.
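The score computation itself is just one matrix product. A toy sketch with made-up shapes (the weights below are constructed so one class obviously wins, purely for illustration):

```python
import numpy as np

batch_size, num_features, num_classes = 2, 3, 4
X = np.ones((batch_size, num_features))   # batch of "pixel" vectors (bias column included in num_features)
W = np.zeros((num_features, num_classes))
W[:, 2] = 1.0                             # make class 2 score highest for every sample

predictions = X @ W                       # shape (batch_size, num_classes)
```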
```
# TODO Implement linear_softmax function that uses softmax with cross-entropy for linear classifier
batch_size = 3
num_classes = 3
num_features = 6
np.random.seed(42)
W = np.random.randint(-1, 3, size=(num_features, num_classes)).astype(float)
X = np.random.randint(-1, 3, size=(batch_size, num_features)).astype(float)
target_index = np.array([[1],[1],[0]])
# target_index = np.ones(batch_size, dtype=np.int)
print ("X = ", X)
print("W =", W)
print ("target_index = ",target_index)
loss, dW = linear_classifer.linear_softmax(X, W, target_index)
print ("______")
check_gradient(lambda w: linear_classifer.linear_softmax(X, w, target_index), W)
```
### And now regularization
We will use L2 regularization on the weights as part of the overall loss function.
Recall that L2 regularization is defined as
l2_reg_loss = regularization_strength * sum<sub>ij</sub> W[i, j]<sup>2</sup>
Implement the function that computes this loss and the corresponding gradients.
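Since the loss is `reg_strength * sum(W**2)`, its gradient is simply `2 * reg_strength * W`. A self-contained sketch of the pair (an assumed implementation of `l2_regularization`, not the reference code):

```python
import numpy as np

def l2_regularization(W, reg_strength):
    # loss = reg_strength * sum_ij W[i,j]^2 ; gradient follows by differentiating each entry
    loss = reg_strength * np.sum(W * W)
    grad = 2 * reg_strength * W
    return loss, grad

W = np.array([[1.0, -2.0], [3.0, 0.0]])
loss, grad = l2_regularization(W, 0.01)
```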
```
# TODO Implement l2_regularization function that implements loss for L2 regularization
a, b = linear_classifer.l2_regularization(W, 0.01)
print ("b=",b)
check_gradient(lambda w: linear_classifer.l2_regularization(w, 0.01), W)
```
# Training!
The gradients are in order - let's implement the training process!
```
train_y.T
t = np.zeros((train_y.shape[0], 1), dtype=int)
for elem in range(train_y.shape[0]):
    t[elem] = train_y[elem]
t[1]
# TODO: Implement LinearSoftmaxClassifier.fit function
classifier = linear_classifer.LinearSoftmaxClassifier()
loss_history = classifier.fit(train_X, t, epochs=10, learning_rate=2e-1, batch_size=300, reg=1e-1)
# let's look at the loss history!
plt.plot(loss_history)
# Let's check how it performs on validation set
pred = classifier.predict(val_X)
accuracy = multiclass_accuracy(pred, val_y)
print("Accuracy: ", accuracy)
# Now, let's train more and see if it performs better
classifier.fit(train_X, t, epochs=100, learning_rate=2e-1, batch_size=300, reg=1e-1)
pred = classifier.predict(val_X)
accuracy = multiclass_accuracy(pred, val_y)
print("Accuracy after training for 100 epochs: ", accuracy)
```
### As before, we use cross-validation to tune the hyperparameters.
This time, to keep the training time reasonable, we will use only a single split into training and validation data.
Now we need to tune not one but two hyperparameters! Don't limit yourself to the initial values in the code.
Achieve an accuracy of more than **20%** on the validation data.
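The search over two hyperparameters is a simple nested grid loop that keeps the best validation score. A self-contained sketch with a stand-in scoring function (`evaluate` below is entirely hypothetical - real code would train a classifier and measure validation accuracy in its place):

```python
import numpy as np

def evaluate(lr, reg):
    # Hypothetical stand-in for "fit on train, score on validation";
    # peaks at lr = reg = 1e-1 purely so this sketch has a deterministic answer
    return 1.0 / (1.0 + abs(np.log10(lr) + 1) + abs(np.log10(reg) + 1))

best_score, best_params = 0.0, None
for lr in [1e-0, 1e-1, 1e-2]:
    for reg in [1e-0, 1e-1, 1e-2]:
        score = evaluate(lr, reg)
        if score > best_score:
            best_score, best_params = score, (lr, reg)
```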
```
num_epochs = 10
batch_size = 300
learning_rates = [1e-0, 1e-1, 1e-2]
reg_strengths = [1e-0, 1e-1, 1e-2]
best_classifier = None
best_val_accuracy = 0
# TODO use validation set to find the best hyperparameters
# hint: for best results, you might need to try more values for learning rate and regularization strength
# than provided initially
train_folds_X = np.split(train_X,5)
sub_train_folds_X = np.delete(train_folds_X, 4, 0)
sub_train_folds_X = np.concatenate(sub_train_folds_X)
sub_val_folds_X = train_folds_X[4]
train_folds_y = np.split(train_y, 5)
sub_train_folds_y = np.delete(train_folds_y, 4, 0)
sub_train_folds_y = np.concatenate(sub_train_folds_y)
sub_val_folds_y = train_folds_y[4]
sub_train_folds_y_t = np.zeros((sub_train_folds_y.shape[0], 1), dtype=int)
for elem in range(sub_train_folds_y.shape[0]):
    sub_train_folds_y_t[elem] = sub_train_folds_y[elem]
sub_val_folds_y_t = np.zeros((sub_val_folds_y.shape[0], 1), dtype=int)
for elem in range(sub_val_folds_y_t.shape[0]):
    sub_val_folds_y_t[elem] = sub_val_folds_y[elem]  # copy from sub_val_folds_y (the original self-assignment left this all zeros)
for l_r in learning_rates:
    for r_s in reg_strengths:
        classifier = linear_classifer.LinearSoftmaxClassifier()
        loss_history = classifier.fit(sub_train_folds_X, sub_train_folds_y_t, epochs=num_epochs, learning_rate=l_r, batch_size=batch_size, reg=r_s)
        pred = classifier.predict(sub_val_folds_X)
        accuracy = multiclass_accuracy(pred, sub_val_folds_y_t)
        print("learning_rates = {}, reg_strengths = {}, Accuracy: {}".format(l_r, r_s, accuracy))
        if best_val_accuracy < accuracy:
            best_val_accuracy = accuracy
            best_classifier = classifier
print('best validation accuracy achieved: %f' % best_val_accuracy)
```
# So what accuracy did we reach on the test data?
```
test_pred = best_classifier.predict(test_X)
test_accuracy = multiclass_accuracy(test_pred, test_y)
print('Linear softmax classifier test set accuracy: %f' % (test_accuracy, ))
```
```
import os
import pickle
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
matplotlib.style.use('seaborn')
sns.set(style='whitegrid', color_codes=True)
save_path = './figures'
if not os.path.exists(save_path):
os.makedirs(save_path)
save_type = 'png'
def remove_pts(x, y, kill_num=800):
    randidx = []
    hf = kill_num // 2
    idx = np.where(y == 0)[0]
    sorted_idx = x[:, 0][idx].argsort()
    randidx.append(idx[sorted_idx[-hf:]])
    idx = np.where(y == 2)[0]
    sorted_idx = x[:, 1][idx].argsort()
    randidx.append(idx[sorted_idx[:(kill_num - hf)]])
    randidx = np.concatenate(randidx)
    x = np.delete(x, randidx, axis=0)
    y = np.delete(y, randidx, axis=0)
    return x, y

def draw_scatter(x, y, fig=None):
    if fig is None:
        fig = plt.figure(figsize=(8, 7)).add_subplot(111)
    axes = plt.gca()
    axes.set_xlim([-6, 6])
    axes.set_ylim([-4.5, 6.5])
    fig.tick_params(labelsize=20)
    plt.subplots_adjust(left=0.1, right=0.9, bottom=0.08, top=0.92)
    kk = y.max() + 1
    color = [
        sns.xkcd_rgb['soft pink'],
        sns.xkcd_rgb['windows blue'],
        sns.xkcd_rgb['macaroni and cheese'],
        sns.xkcd_rgb['soft green'],
    ]
    for k in range(kk):
        idx = np.where(y == k)
        pts = x[idx]
        plt.scatter(pts[:, 0], y=pts[:, 1], alpha=0.3, edgecolors='none', color=color[k])
with open('../data/GMMs/gmm-2d-syn-set.pkl', 'rb') as f:
raw_dataset = pickle.load(f)
fig = plt.figure(figsize=(7,6)).add_subplot(111)
plt.title('Original Data', fontsize=24)
draw_scatter(raw_dataset['x'], raw_dataset['y'], fig)
plt.savefig(save_path+'/gmm-raw-data.{}'.format(save_type))
remain_x, remain_y = remove_pts(raw_dataset['x'], raw_dataset['y'], kill_num=800)
fig = plt.figure(figsize=(7,6)).add_subplot(111)
plt.title('After-Removing Data', fontsize=24)
draw_scatter(remain_x, remain_y, fig)
plt.savefig(save_path+'/gmm-rm-data.{}'.format(save_type))
def plot_clusters(kk, mu, fig=None, color='black', marker='o', label=None):
    if fig is None:
        fig = plt.figure(figsize=(8, 7)).add_subplot(111)
    fig.scatter(mu[0, :], mu[1, :], color=color, marker=marker, zorder=3, linewidths=3, label=label)

def plot_samples(kk, pts, fig=None, cls_ord=None):
    if fig is None:
        fig = plt.figure(figsize=(8, 7)).add_subplot(111)
    if cls_ord is None: cls_ord = range(kk)
    for k in cls_ord:
        fig.plot(pts[:, 0, k], pts[:, 1, k], alpha=0.8)

def mcmc_visual(name, path, cls_ord=None):
    fig = plt.figure(figsize=(7, 6)).add_subplot(111)
    plt.title('{} - Before Forgetting'.format(name), fontsize=24)
    draw_scatter(raw_dataset['x'], raw_dataset['y'], fig)
    with open('{}/full-train-ckpt-cluster.pkl'.format(path), 'rb') as f:
        npd_cls = pickle.load(f)
    with open('{}/full-train-ckpt-samp-pts.pkl'.format(path), 'rb') as f:
        npd_pts = pickle.load(f)
    plot_clusters(npd_cls['kk'], npd_cls['mk'], fig, color=sns.xkcd_rgb['black'], marker='o', label='Origin')
    plot_samples(npd_cls['kk'], npd_pts, fig, cls_ord=cls_ord)
    plt.legend(loc='upper left', fontsize=18)
    plt.savefig('{}/gmm-{}-full.{}'.format(save_path, name, save_type))
    fig = plt.figure(figsize=(7, 6)).add_subplot(111)
    plt.title('{} - After Forgetting'.format(name), fontsize=24)
    plot_clusters(npd_cls['kk'], npd_cls['mk'], fig, color=sns.xkcd_rgb['black'], marker='o', label='Origin')
    with open('{}/forget-ckpt-cluster.pkl'.format(path), 'rb') as f:
        pd_cls = pickle.load(f)
    draw_scatter(remain_x, remain_y, fig)
    plot_clusters(pd_cls['kk'], pd_cls['mk'], fig, color=sns.xkcd_rgb['blue blue'], marker='v', label='Processed')
    with open('{}/scratch-ckpt-cluster.pkl'.format(path), 'rb') as f:
        scratch_cls = pickle.load(f)
    plot_clusters(scratch_cls['kk'], scratch_cls['mk'], fig, color=sns.xkcd_rgb['red'], marker='*', label='Target')
    plt.legend(loc='upper left', fontsize=18)
    plt.savefig('{}/gmm-{}-forget.{}'.format(save_path, name, save_type))
mcmc_visual('SGLD', '../exp_data/gmm/sgld', cls_ord=[2,1,3,0])
mcmc_visual('SGHMC', '../exp_data/gmm/sghmc', cls_ord=[1,2,3,0])
```
```
import tensorflow as tf
import torch
import numpy as np
Hc = 3
Wc = 4
coord_cells = tf.stack(tf.meshgrid(tf.range(Hc), tf.range(Wc), indexing='ij'), axis=-1)
a = tf.Session().run(coord_cells)
print(a)
import torch
coor_cells = torch.stack(torch.meshgrid(torch.arange(Hc), torch.arange(Wc)), dim=2) # [Hc, Wc, 2]
b = coor_cells.numpy()
print(b)
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
import matplotlib.pyplot as plt
os.chdir('../')
print(os.getcwd())
# from utils.utils import warp_points_np
def warp_points_np(points, homographies, device='cpu'):
    """
    Warp a list of points with the given homography.
    Arguments:
        points: list of N points, shape (N, 2).
        homography: batched or not (shapes (B, 3, 3) and (...) respectively).
    Returns: a Tensor of shape (N, 2) or (B, N, 2) (depending on whether the homography
        is batched) containing the new coordinates of the warped points.
    """
    # expand points to homogeneous coordinates (x, y, 1)
    batch_size = homographies.shape[0]
    points = np.concatenate((points, np.ones((points.shape[0], 1))), axis=1)
    # points = points.to(device)
    # homographies = homographies.(batch_size*3,3)
    # warped_points = homographies*points
    # warped_points = homographies@points.transpose(0,1)
    warped_points = np.tensordot(homographies, points.transpose(), axes=([2], [0]))
    # normalize the points
    warped_points = warped_points.reshape([batch_size, 3, -1])
    warped_points = warped_points.transpose([0, 2, 1])
    warped_points = warped_points[:, :, :2] / warped_points[:, :, 2:]
    return warped_points
def plot_points(matrix, ls='--', lw=1.2, colors=None):
    x_points, y_points = matrix[:, 0], matrix[:, 1]
    size = len(x_points)
    colors = ['red', 'blue', 'orange', 'green'] if colors is None else colors  # the original `if not None` always picked the default
    for i in range(size):
        plt.plot(x_points[i], y_points[i], color=colors[i], marker='o')
        # plt.plot(x_points[i], x_points[(i+1) % size], color=colors[i], linestyle=ls, linewidth=lw)
        # plt.plot(x_points[i], x_points[(i + 1) % size], linestyle=ls, linewidth=lw)
        # [y_points[i], y_points[(i+1) % size]],
        # color=colors[i],
        # linestyle=ls, linewidth=lw)
def printCorners(corner_img, mat_homographies):
    points = warp_points_np(corner_img, mat_homographies)
    # plot
    plot_points(corner_img)
    for i in range(points.shape[0]):
        plot_points(points[i, :, :])
def test_sample_homography():
import cv2
batch_size = 10
filename = '../configs/superpoint_coco_train.yaml'
import yaml
with open(filename, 'r') as f:
config = yaml.load(f)
test_tf = False
test_corner_def = True
if test_tf == True:
from utils.homographies import sample_homography as sample_homography
boundary = 1
# from utils.homographies import sample_homography_np as sample_homography
# mat_homographies = matrix[np.newaxis, :,:]
# mat_homographies = [sample_homography(tf.constant([boundary,boundary]),
mat_homographies = [sample_homography(np.array([boundary,boundary]),
**config['data']['warped_pair']['params']) for i in range(batch_size)]
mat_homographies = np.stack(mat_homographies, axis=0)
corner_img = np.array([[0., 0.], [0., boundary], [boundary, boundary], [boundary, 0.]])
printCorners(corner_img, mat_homographies)
plt.show()
print("try inverse")
mat_homographies = np.stack(mat_homographies, axis=0)
inv_homographies = np.stack([np.linalg.inv(mat_homographies[i, :, :]) for i in range(mat_homographies.shape[0])])
printCorners(corner_img, inv_homographies)
plt.show()
if test_corner_def:
corner_img = np.array([(-1, -1), (-1, 1), (1, 1), (1, -1)])
offset = 1
corner_img += offset
from utils.homographies import sample_homography_np as sample_homography
boundary = 2
mat_homographies = [sample_homography(np.array([boundary,boundary]), shift=-1 + offset,
**config['data']['warped_pair']['params']) for i in range(batch_size)]
mat_homographies = np.stack(mat_homographies, axis=0)
img = np.zeros((512,512,3), np.uint8)
points = warp_points_np(corner_img, mat_homographies)
# draw polygon
img = np.zeros((10,10,3), np.uint8)
points_plt = points.astype(int)
# print("points: ", points_plt)
# img_poly = cv2.polylines(img,[points_plt[:1,:,:]],True,(0,255,255))
points = np.array([(-1, -1), (-1, 1), (1, 1), (1, -1)])
points = points.reshape(-1,4,2)
# img_poly = cv2.polylines(img,[points],True,(0,255,255))
# img_poly = cv2.polylines(img,[points[:1,:,:]],True,(0,255,255))
print("shape", img.shape)
plt.imshow(img_poly)
plt.show()
points = points.reshape(-1,4,2)
# cv2.polylines(img,[points],True,(0,255,255))
# printCorners(corner_img, mat_homographies)
plt.imshow(img)
print("try inverse")
mat_homographies = np.stack(mat_homographies, axis=0)
inv_homographies = np.stack([np.linalg.inv(mat_homographies[i, :, :]) for i in range(mat_homographies.shape[0])])
printCorners(corner_img, inv_homographies)
plt.show()
else:
from utils.utils import sample_homography
mat_homographies = [sample_homography(1) for i in range(batch_size)]
print("end")
# test_sample_homography()
# test homography with size(240, 360)
import cv2
batch_size = 10
def loadConfig(filename):
    import yaml
    with open(filename, 'r') as f:
        config = yaml.load(f, Loader=yaml.SafeLoader)  # recent PyYAML requires an explicit Loader
    return config
test_tf = False
test_corner_def = True
def scale_homography(H, shape, shift=(-1, -1)):
    height, width = shape[0], shape[1]
    trans = np.array([[2. / width, 0., shift[0]], [0., 2. / height, shift[1]], [0., 0., 1.]])
    H_tf = np.linalg.inv(trans) @ H @ trans
    return H_tf
def sample_homography_batches(config, batch_size=1, shape=np.array([1, 1]), tf=False):
    offset = 0
    b = 2
    if tf:
        from utils.homographies import sample_homography as sample_homography
        # shape = np.array([b, b])
        mat_homographies = [sample_homography(shape,
            **config['data']['warped_pair']['params']) for i in range(batch_size)]
        # mat_homographies = [scale_homography(sample_homography(shape,
        #     **config['data']['warped_pair']['params']), shape)
        #     for i in range(batch_size)]
    else:
        from utils.homographies import sample_homography_np as sample_homography
        mat_homographies = [sample_homography(shape, shift=-1 + offset,
            **config['data']['warped_pair']['params']) for i in range(batch_size)]
    mat_homographies = np.stack(mat_homographies, axis=0)
    return mat_homographies
def drawBox(points, img, offset=np.array([0, 0]), color=(0, 255, 0)):
    # print("origin", points)
    offset = offset[::-1]
    points = points + offset
    points = points.astype(int)
    for i in range(len(points)):
        img = img + cv2.line(np.zeros_like(img), tuple(points[-1 + i]), tuple(points[i]), color, 5)
    return img
# load config
# filename = '../configs/config_test.yaml'
# config = loadConfig(filename)
height, width = 240, 320
corner_img = np.array([(0, 0), (height, 0), (height, width), (0, width)])
offset = np.array([1,5])
print(corner_img + offset)
import numpy as np
np.set_printoptions(precision=4, suppress=True)
import matplotlib.pyplot as plt
import numpy as np
a = np.load('notebooks/h2.npz')
print(list(a))
H = np.squeeze(a['homographies'])
print(H)
Ms = np.squeeze(a['masks'])
plt.imshow(Ms)
plt.colorbar()
plt.show()
def test_valid_mask():
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
from utils.utils import pltImshow
# from utils.homographies import sample_homography as sample_homography
from utils.utils import sample_homography
batch_size = 1
mat_homographies = [sample_homography(3) for i in range(batch_size)]
mat_H = np.stack(mat_homographies, axis=0)
corner_img = np.array([(-1, -1), (-1, 1), (1, -1), (1, 1)])
# printCorners(corner_img, mat_H)
# points = warp_points_np(corner_img, mat_homographies)
mat_H = torch.tensor(mat_H, dtype=torch.float32)
mat_H_inv = torch.stack([torch.inverse(mat_H[i, :, :]) for i in range(batch_size)])
from utils.utils import compute_valid_mask, labels2Dto3D
device = 'cpu'
shape = torch.tensor([240, 320])
for i in range(1):
r = 3
mask_valid = compute_valid_mask(shape, inv_homography=mat_H_inv, device=device, erosion_radius=r)
pltImshow(mask_valid[0,:,:])
cell_size = 8
mask_valid = labels2Dto3D(mask_valid.view(batch_size, 1, mask_valid.shape[1], mask_valid.shape[2]), cell_size=cell_size)
mask_valid = torch.prod(mask_valid[:,:cell_size*cell_size,:,:], dim=1)
pltImshow(mask_valid[0,:,:].cpu().numpy())
plt.show()
mask = {}
mask.update({'homographies': mat_H, 'masks': mask_valid})
np.savez_compressed('h2.npz', **mask)
print("finish testing valid mask")
return mat_H
def my_valid_mask(mat_H):
    from utils.utils import compute_valid_mask, labels2Dto3D
    from utils.utils import pltImshow
    r = 3
    shape = torch.tensor([240, 320])
    device = 'cpu'
    mat_H_inv = torch.inverse(mat_H).unsqueeze(dim=0)
    mask_valid = compute_valid_mask(shape, inv_homography=mat_H_inv, device=device, erosion_radius=r)
    pltImshow(mask_valid[0, :, :])
    cell_size = 8
    mask_valid = labels2Dto3D(mask_valid.view(1, 1, mask_valid.shape[1], mask_valid.shape[2]), cell_size=cell_size)
    mask_valid = torch.prod(mask_valid[:, :cell_size * cell_size, :, :], dim=1)
    pltImshow(mask_valid[0, :, :].cpu().numpy())
    plt.show()
    mask = {}
    mask.update({'homographies': mat_H, 'masks': mask_valid})
    np.savez_compressed('h2.npz', **mask)
    print("finish testing valid mask")
    return mat_H
# H = test_valid_mask()
# H = np.squeeze(a['homographies'])
my_valid_mask(torch.tensor(H))
print('Homography:', H)
def compute_valid_mask(image_shape, homography, erosion_radius=0):
    """
    Compute a boolean mask of the valid pixels resulting from an homography applied to
    an image of a given shape. Pixels that are False correspond to bordering artifacts.
    A margin can be discarded using erosion.
    Arguments:
        input_shape: Tensor of rank 2 representing the image shape, i.e. `[H, W]`.
        homography: Tensor of shape (B, 8) or (8,), where B is the batch size.
        erosion_radius: radius of the margin to be discarded.
    Returns: a Tensor of type `tf.int32` and shape (H, W).
    """
    mask = H_transform(tf.ones(image_shape), homography, interpolation='NEAREST')
    if erosion_radius > 0:
        kernel = cv.getStructuringElement(cv.MORPH_ELLIPSE, (erosion_radius * 2,) * 2)
        mask = tf.nn.erosion2d(
            mask[tf.newaxis, ..., tf.newaxis],
            tf.to_float(tf.constant(kernel)[..., tf.newaxis]),
            [1, 1, 1, 1], [1, 1, 1, 1], 'SAME')[0, ..., 0] + 1.
    return tf.to_int32(mask)
import tensorflow as tf
import cv2 as cv
from tensorflow.contrib.image import transform as H_transform
sess = tf.Session()
Hc = 30
Wc = 40
grid_size = 8
Height = Hc * grid_size
Width = Wc * grid_size
image_shape = tf.constant([Height, Width])
# H = np.array([[1., 1., 0.], [0., 1., 0.], [0., 0., 1.]])
trans = np.array([[2./Width, 0., -1], [0., 2./Height, -1], [0., 0., 1.]])
H_tf = np.linalg.inv(np.linalg.inv(trans) @ H @ trans)
H_tf = H_tf / H_tf[2, 2]
homography = tf.constant(H_tf.reshape(-1)[:8], tf.float32)
# homography = homographysample
print(sess.run(homography))
print(H_tf)
erosion_radius = 3 # from valid_border_margin of https://github.com/rpautrat/SuperPoint/blob/master/superpoint/configs/superpoint_coco.yaml
valid_mask = compute_valid_mask(image_shape, homography, erosion_radius)
mask = sess.run(valid_mask)
plt.imshow(mask)
plt.colorbar()
plt.show()
valid_mask = tf.to_float(valid_mask[tf.newaxis, ..., tf.newaxis]) # for GPU
valid_mask = tf.space_to_depth(valid_mask, grid_size)
valid_mask = tf.reduce_prod(valid_mask, axis=3) # AND along the channel dim
valid_mask = tf.reshape(valid_mask, [1, 1, 1, Hc, Wc])
mask = sess.run(valid_mask)
print(mask.shape)
plt.imshow(np.squeeze(mask))
plt.colorbar()
plt.show()
np.linalg.inv(trans) @ H @ trans
np.linalg.inv(np.linalg.inv(trans) @ H @ trans)
from utils.utils import warp_points
def desc_mask(homographies, cell_size=8, device='cpu'):
from utils.utils import warp_points
def normPts(pts, shape):
"""
normalize pts to [-1, 1]
:param pts:
tensor (y, x)
:param shape:
numpy (y, x)
:return:
"""
pts = pts/shape*2 - 1
return pts
def denormPts(pts, shape):
"""
denormalize pts back to H, W
:param pts:
tensor (y, x)
:param shape:
numpy (y, x)
:return:
"""
pts = (pts+1)*shape/2
return pts
debug = True
Hc, Wc = 30, 40
print("homo ", homographies.shape)
shape = torch.tensor([240, 320]).type(torch.FloatTensor).to(device)
if debug: print("shape: ", shape)
# compute the center pixel of every cell in the image
coor_cells = torch.stack(torch.meshgrid(torch.arange(Hc), torch.arange(Wc)), dim=2)
if debug: print("coor_cells: ", coor_cells)
coor_cells = coor_cells.type(torch.FloatTensor).to(device)
if debug: print("coor_cells: ", coor_cells)
coor_cells = coor_cells * cell_size + cell_size // 2
if debug: print("coor_cells: ", coor_cells)
## coord_cells is now a grid containing the coordinates of the Hc x Wc
## center pixels of the 8x8 cells of the image
# coor_cells = coor_cells.view([-1, Hc, Wc, 1, 1, 2])
coor_cells = coor_cells.view([-1, 1, 1, Hc, Wc, 2]) # be careful of the order
if debug: print("coor_cells: ", coor_cells)
# warped_coor_cells = warp_points(coor_cells.view([-1, 2]), w, device)
warped_coor_cells = normPts(coor_cells.view([-1, 2]), shape)
if debug: print("warped_coor_cells: ", warped_coor_cells)
warped_coor_cells = torch.stack((warped_coor_cells[:,1], warped_coor_cells[:,0]), dim=1) # (y, x) to (x, y)
if debug: print("warped_coor_cells: ", warped_coor_cells)
warped_coor_cells = warp_points(warped_coor_cells, homographies, device)
if debug: print("warped_coor_cells: ", warped_coor_cells)
print("shape: ", warped_coor_cells.shape)
warped_coor_cells = torch.stack((warped_coor_cells[:, :, 1], warped_coor_cells[:, :, 0]), dim=2) # (batch, x, y) to (batch, y, x)
if debug: print("warped_coor_cells: ", warped_coor_cells)
shape_cell = torch.tensor([240/cell_size, 320/cell_size]).type(torch.FloatTensor).to(device)
# warped_coor_mask = denormPts(warped_coor_cells, shape_cell)
warped_coor_cells = denormPts(warped_coor_cells, shape)
print("warped: ", warped_coor_cells[0,:10,:])
# warped_coor_cells = warped_coor_cells.view([-1, 1, 1, Hc, Wc, 2])
warped_coor_cells = warped_coor_cells.view([-1, Hc, Wc, 1, 1, 2])
# print("warped_coor_cells: ", warped_coor_cells.shape)
# compute the pairwise distance
cell_distances = coor_cells - warped_coor_cells
if debug: print("cell_distances: ", cell_distances)
cell_distances = torch.norm(cell_distances, dim=-1)
if debug: print("cell_distances: ", cell_distances)
mask = cell_distances <= cell_size - 0.5 # trick
mask = mask.type(torch.FloatTensor).to(device)
return mask
mask = desc_mask(torch.tensor(H).unsqueeze(0))
print("mask shape", mask.shape)
from utils.utils import pltImshow
mask_o = mask.cpu().numpy()
y, x = 0, 0
height, width = 30, 40
def plot_mask_corr(mask, y,x):
img = mask[0,y,x,:,:]
print(img.shape)
print("the correspondence of (",y,x,") in image 1. [",y,x,":,:]")
pltImshow(img)
plot_mask_corr(mask_o, 0,0)
plot_mask_corr(mask_o, height-1,width-1)
plot_mask_corr(mask_o, height-1,0)
plot_mask_corr(mask_o, 0,width-1)
plot_mask_corr(mask_o, height//2,width//2)
#
mask_o = mask.sum(dim=1).sum(dim=1)
img = (mask_o).cpu().numpy()[0,:,:]
print("sum up dim1 and dim2. should look like warping")
pltImshow(img)
#
y, x = 1, 5
img = (mask).cpu().numpy()[0,:,:,y,x]
print(img.shape)
print("the correspondence of '0,0', in warped image. [30,40,y,x]")
pltImshow(img)
#
mask_o = mask.sum(dim=3).sum(dim=3)
img = (mask_o).cpu().numpy()[0,:,:]
print("sum up dim3 and dim4. should look like all ones")
pltImshow(img)
def loadConfig(filename):
import yaml
with open(filename, 'r') as f:
config = yaml.safe_load(f)
return config
# load config
# logging.basicConfig(format='[%(asctime)s %(levelname)s] %(message)s',
# datefmt='%m/%d/%Y %H:%M:%S', level=logging.INFO)
filename = 'configs/superpoint_coco_test.yaml'
# filename = 'configs/magicpoint_repeatability.yaml'
config = loadConfig(filename)
print("config path: ", filename)
print("config: ", config)
```
## descriptor_loss my implementation
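Before walking through the full implementation below, the core hinge loss it computes can be checked on toy numbers (a minimal sketch; the margins and `lamda_d` weight mirror the defaults used in the code):

```python
# Hinge-loss terms of the descriptor loss, on scalar dot products.
# s = 1 marks a corresponding cell pair, s = 0 a non-corresponding one.
margin_pos, margin_neg, lamda_d = 1.0, 0.2, 250

def hinge_terms(dot, s):
    positive = max(0.0, margin_pos - dot)   # pulls matching pairs together
    negative = max(0.0, dot - margin_neg)   # pushes non-matching pairs apart
    return lamda_d * s * positive + (1 - s) * negative

print(hinge_terms(0.3, s=1))  # matching pair, low similarity -> 175.0
print(hinge_terms(0.9, s=0))  # non-matching pair, high similarity -> ~0.7
print(hinge_terms(1.0, s=1))  # matching pair at the margin -> 0.0
```

The large `lamda_d` compensates for the fact that, among the `(Hc*Wc)^2` cell pairs, only a handful are positives.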
```
## random data
np.random.seed(seed=0)
descriptors_np = np.random.rand(1,30,40,256).astype('float32')
warped_descriptors_np = np.random.rand(1,30,40,256).astype('float32')
# sample homography
def sample_homography_batches(config, batch_size=1, shape=np.array([1,1]), tf=False):
offset = 0
b = 2
if tf:
from utils.homographies import sample_homography as sample_homography
# shape = np.array([b, b])
mat_homographies = [sample_homography(shape,
**config['data']['augmentation']['homographic']['params']) for i in range(batch_size)]
# mat_homographies = [scale_homography(sample_homography(shape,
# **config['data']['warped_pair']['params']), shape)
# for i in range(batch_size)]
else:
from utils.homographies import sample_homography_np as sample_homography
mat_homographies = [sample_homography(shape, shift=-1 + offset,
**config['data']['warped_pair']['params']) for i in range(batch_size)]
mat_homographies = np.stack(mat_homographies, axis=0)
return mat_homographies
# homographies = sample_homography_batches(config, shape = np.array([2,2]))
homographies = sample_homography_batches(config, batch_size=1, shape=np.array([2,2]), tf=False)
print("homographies: ", homographies)
# homography visualization for my implementation
print("homography visualization for my implementation")
from numpy.linalg import inv
# sample homography
batch_size = 1
shape = np.array([2,2])
homographies = sample_homography_batches(config, batch_size, shape=shape, tf=False)
homographies_original = homographies.copy()
# change shapes
height, width = 240, 320
image_shape = np.array([height, width])
homographies = np.stack([scale_homography(H, image_shape, shift=(-1,-1)) for H in homographies])
# warp points
# from utils.utils import warp_points_np
# corner_img = np.array([(0, 0), (height, 0), (height, width), (0, width)])
corner_img = np.array([(0, 0), (0, height), (width, height), (width, 0)])
points = warp_points_np(corner_img, homographies)
# plot shapes
offset = np.array([height,width])
img = np.zeros((height + offset[0]*2, width + offset[1]*2,3), np.uint8)
img = drawBox(corner_img, img, color=(255,0,0), offset=offset)
for i in range(points.shape[0]):
img = drawBox(points[i,:,:], img, color=(0,255,0), offset=offset)
print("print forward homographies")
plt.imshow(img)
plt.show()
# inv_homography
inv_homographies = np.stack([inv(H) for H in homographies])
# warp points
# from utils.utils import warp_points_np
# corner_img = np.array([(0, 0), (height, 0), (height, width), (0, width)])
corner_img = np.array([(0, 0), (0, height), (width, height), (width, 0)])
points = warp_points_np(corner_img, inv_homographies)
# plot shapes
offset = np.array([height,width])
img = np.zeros((height + offset[0]*2, width + offset[1]*2,3), np.uint8)
img = drawBox(corner_img, img, color=(255,0,0), offset=offset)
for i in range(points.shape[0]):
img = drawBox(points[i,:,:], img, color=(0,255,0), offset=offset)
print("print inverse homographies")
plt.imshow(img)
plt.show()
# H = inv_homographies.squeeze().astype('float32')
# print("We use inverse homography!")
H = homographies_original.squeeze().astype('float32')
print("We use forward homography!")
print("homography: ", H)
def descriptor_loss(descriptors, descriptors_warped, homographies, mask_valid=None,
cell_size=8, lamda_d=250, device='cpu', descriptor_dist=4, **config):
'''
Compute descriptor loss from descriptors_warped and given homographies
:param descriptors:
Output from descriptor head
tensor [batch_size, descriptors, Hc, Wc]
:param descriptors_warped:
Output from descriptor head of warped image
tensor [batch_size, descriptors, Hc, Wc]
:param homographies:
known homographies
:param cell_size:
8
:param device:
gpu or cpu
:param config:
:return:
loss, and other tensors for visualization
'''
def normPts(pts, shape):
"""
normalize pts to [-1, 1]
:param pts:
tensor (y, x)
:param shape:
numpy (y, x)
:return:
"""
pts = pts/shape*2 - 1
return pts
def denormPts(pts, shape):
"""
denormalize pts back to H, W
:param pts:
tensor (y, x)
:param shape:
numpy (y, x)
:return:
"""
pts = (pts+1)*shape/2
return pts
# config
from utils.utils import warp_points
lamda_d = lamda_d # 250
margin_pos = 1
margin_neg = 0.2
batch_size, Hc, Wc = descriptors.shape[0], descriptors.shape[2], descriptors.shape[3]
#####
# H, W = Hc.numpy().astype(int) * cell_size, Wc.numpy().astype(int) * cell_size
H, W = Hc * cell_size, Wc * cell_size
#####
# shape = torch.tensor(list(descriptors.shape[2:]))*torch.tensor([cell_size, cell_size]).type(torch.FloatTensor).to(device)
shape = torch.tensor([H, W]).type(torch.FloatTensor).to(device)
# compute the center pixel of every cell in the image
coor_cells = torch.stack(torch.meshgrid(torch.arange(Hc), torch.arange(Wc)), dim=2)
coor_cells = coor_cells.type(torch.FloatTensor).to(device)
coor_cells = coor_cells * cell_size + cell_size // 2
## coord_cells is now a grid containing the coordinates of the Hc x Wc
## center pixels of the 8x8 cells of the image
# coor_cells = coor_cells.view([-1, Hc, Wc, 1, 1, 2])
coor_cells = coor_cells.view([-1, 1, 1, Hc, Wc, 2]) # be careful of the order
# warped_coor_cells = warp_points(coor_cells.view([-1, 2]), homographies, device)
warped_coor_cells = normPts(coor_cells.view([-1, 2]), shape)
warped_coor_cells = torch.stack((warped_coor_cells[:,1], warped_coor_cells[:,0]), dim=1) # (y, x) to (x, y)
warped_coor_cells = warp_points(warped_coor_cells, homographies, device)
warped_coor_cells = torch.stack((warped_coor_cells[:, :, 1], warped_coor_cells[:, :, 0]), dim=2) # (batch, x, y) to (batch, y, x)
shape_cell = torch.tensor([H//cell_size, W//cell_size]).type(torch.FloatTensor).to(device)
# warped_coor_mask = denormPts(warped_coor_cells, shape_cell)
warped_coor_cells = denormPts(warped_coor_cells, shape)
# warped_coor_cells = warped_coor_cells.view([-1, 1, 1, Hc, Wc, 2])
warped_coor_cells = warped_coor_cells.view([-1, Hc, Wc, 1, 1, 2])
# print("warped_coor_cells: ", warped_coor_cells.shape)
# compute the pairwise distance
cell_distances = coor_cells - warped_coor_cells
cell_distances = torch.norm(cell_distances, dim=-1)
##### check
# print("descriptor_dist: ", descriptor_dist)
mask = cell_distances <= descriptor_dist # 0.5 # trick
mask = mask.type(torch.FloatTensor).to(device)
# compute the pairwise dot product between descriptors: d^t * d
descriptors = descriptors.transpose(1, 2).transpose(2, 3)
descriptors = descriptors.view((batch_size, Hc, Wc, 1, 1, -1))
descriptors_warped = descriptors_warped.transpose(1, 2).transpose(2, 3)
descriptors_warped = descriptors_warped.view((batch_size, 1, 1, Hc, Wc, -1))
dot_product_desc = descriptors * descriptors_warped
dot_product_desc = dot_product_desc.sum(dim=-1)
## dot_product_desc.shape = [batch_size, Hc, Wc, Hc, Wc, desc_len]
# hinge loss
positive_dist = torch.max(margin_pos - dot_product_desc, torch.tensor(0.).to(device))
# positive_dist[positive_dist < 0] = 0
negative_dist = torch.max(dot_product_desc - margin_neg, torch.tensor(0.).to(device))
# negative_dist[negative_dist < 0] = 0
# sum of the dimension
if mask_valid is None:
# mask_valid = torch.ones_like(mask)
mask_valid = torch.ones(batch_size, 1, Hc, Wc)
mask_valid = mask_valid.view(batch_size, 1, 1, mask_valid.shape[2], mask_valid.shape[3])
print("mask_valid: ", mask_valid.shape)
loss_desc = lamda_d * mask * positive_dist + (1 - mask) * negative_dist
print("loss_desc: ", loss_desc.shape)
loss_desc = loss_desc * mask_valid
# mask_valid = torch.ones_like(mask)
normalization = (batch_size * (mask_valid.sum()) * Hc * Wc)
pos_sum = (lamda_d * mask * positive_dist/normalization).sum()
neg_sum = ((1 - mask) * negative_dist/normalization).sum()
loss_desc = loss_desc.sum() / normalization
# loss_desc = loss_desc.sum() / (batch_size * Hc * Wc)
# return loss_desc, mask, mask_valid, positive_dist, negative_dist
# return loss_desc, mask, pos_sum, neg_sum
return {'loss': loss_desc, 'normalization': normalization,
'pos_sum': pos_sum, 'neg_sum': neg_sum, 'mask': mask}
homographies = torch.tensor(H[np.newaxis,:,:])
descriptors_torch = torch.tensor(descriptors_np.transpose([0,3,1,2]))
warped_descriptors_torch = torch.tensor(warped_descriptors_np.transpose([0,3,1,2]))
losses = descriptor_loss(descriptors_torch, warped_descriptors_torch, homographies,
descriptor_dist=7.5, mask_valid=None)
for e in losses:
print(e, ': ', losses[e])
# visualization
mask = losses['mask']
mask_o = mask.cpu().numpy()
y, x = 0, 0
height, width = 30, 40
plot_mask_corr(mask_o, 0,0)
plot_mask_corr(mask_o, height-1,width-1)
plot_mask_corr(mask_o, height-1,0)
plot_mask_corr(mask_o, 0,width-1)
plot_mask_corr(mask_o, height//2,width//2)
#
mask_o = mask.sum(dim=1).sum(dim=1)
img = (mask_o).cpu().numpy()[0,:,:]
print("sum up dim1 and dim2. should look like warping")
pltImshow(img)
#
y, x = 1, 5
img = (mask).cpu().numpy()[0,:,:,y,x]
print(img.shape)
print("the correspondence of '0,0', in warped image. [30,40,y,x]")
pltImshow(img)
#
mask_o = mask.sum(dim=3).sum(dim=3)
img = (mask_o).cpu().numpy()[0,:,:]
print("sum up dim3 and dim4. should look like all ones")
pltImshow(img)
```
## descriptor_loss tensorflow
```
def descriptor_loss(descriptors, warped_descriptors, homographies,
valid_mask=None, **config):
from models.homographies import warp_points
# Compute the position of the center pixel of every cell in the image
(batch_size, Hc, Wc) = tf.unstack(tf.to_int32(tf.shape(descriptors)[:3]))
coord_cells = tf.stack(tf.meshgrid(
tf.range(Hc), tf.range(Wc), indexing='ij'), axis=-1)
coord_cells = coord_cells * config['grid_size'] + config['grid_size'] // 2 # (Hc, Wc, 2)
# coord_cells is now a grid containing the coordinates of the Hc x Wc
# center pixels of the 8x8 cells of the image
# Compute the position of the warped center pixels
warped_coord_cells = warp_points(tf.reshape(coord_cells, [-1, 2]), homographies)
# warped_coord_cells is now a list of the warped coordinates of all the center
# pixels of the 8x8 cells of the image, shape (N, Hc x Wc, 2)
# Compute the pairwise distances and filter the ones less than a threshold
# The distance is just the pairwise norm of the difference of the two grids
# Using shape broadcasting, cell_distances has shape (N, Hc, Wc, Hc, Wc)
coord_cells = tf.to_float(tf.reshape(coord_cells, [1, 1, 1, Hc, Wc, 2]))
warped_coord_cells = tf.reshape(warped_coord_cells,
[batch_size, Hc, Wc, 1, 1, 2])
cell_distances = tf.norm(coord_cells - warped_coord_cells, axis=-1)
margin = 0.5
print("descriptor loss distance: ", config['grid_size'] - margin)
s = tf.to_float(tf.less_equal(cell_distances, config['grid_size'] - margin))
# s[id_batch, h, w, h', w'] == 1 if the point of coordinates (h, w) warped by the
# homography is at a distance from (h', w') less than config['grid_size']
# and 0 otherwise
# Compute the pairwise dot product between descriptors: d^t * d'
descriptors = tf.reshape(descriptors, [batch_size, Hc, Wc, 1, 1, -1])
warped_descriptors = tf.reshape(warped_descriptors,
[batch_size, 1, 1, Hc, Wc, -1])
dot_product_desc = tf.reduce_sum(descriptors * warped_descriptors, -1)
# dot_product_desc[id_batch, h, w, h', w'] is the dot product between the
# descriptor at position (h, w) in the original descriptors map and the
# descriptor at position (h', w') in the warped image
# Compute the loss
positive_dist = tf.maximum(0., config['positive_margin'] - dot_product_desc)
negative_dist = tf.maximum(0., dot_product_desc - config['negative_margin'])
loss = config['lambda_d'] * s * positive_dist + (1 - s) * negative_dist
# Mask the pixels if bordering artifacts appear
valid_mask = tf.ones([batch_size,
Hc * config['grid_size'],
Wc * config['grid_size']], tf.float32)\
if valid_mask is None else valid_mask
valid_mask = tf.to_float(valid_mask[..., tf.newaxis]) # for GPU
valid_mask = tf.space_to_depth(valid_mask, config['grid_size'])
valid_mask = tf.reduce_prod(valid_mask, axis=3) # AND along the channel dim
valid_mask = tf.reshape(valid_mask, [batch_size, 1, 1, Hc, Wc])
normalization = tf.reduce_sum(valid_mask) * tf.to_float(Hc * Wc)
# Summaries for debugging
# tf.summary.scalar('nb_positive', tf.reduce_sum(valid_mask * s) / normalization)
# tf.summary.scalar('nb_negative', tf.reduce_sum(valid_mask * (1 - s)) / normalization)
tf.summary.scalar('positive_dist', tf.reduce_sum(valid_mask * config['lambda_d'] *
s * positive_dist) / normalization)
tf.summary.scalar('negative_dist', tf.reduce_sum(valid_mask * (1 - s) *
negative_dist) / normalization)
pos_sum = tf.reduce_sum( config['lambda_d'] * s * positive_dist / normalization )
neg_sum = tf.reduce_sum( (1 - s) * negative_dist / normalization )
loss = tf.reduce_sum(valid_mask * loss) / normalization
return {'loss': loss, 'normalization': normalization,
'pos_sum': pos_sum, 'neg_sum': neg_sum,
'mask': s}
config.update({'grid_size': 8})
config.update({'positive_margin': 1})
config.update({'negative_margin': 0.2})
config.update({'lambda_d': 800})
# transform H
def scalingHomography(H, Hc = 30, Wc = 40):
# Hc = 30
# Wc = 40
grid_size = 8
Height = Hc * grid_size
Width = Wc * grid_size
image_shape = tf.constant([Height, Width])
# H = np.array([[1., 1., 0.], [0., 1., 0.], [0., 0., 1.]])
trans = np.array([[2./Width, 0., -1], [0., 2./Height, -1], [0., 0., 1.]])
H_tf = np.linalg.inv(np.linalg.inv(trans) @ H @ trans)
H_tf = H_tf / H_tf[2, 2]
homography = tf.constant(H_tf.reshape(-1)[:8], tf.float32)
return homography
descriptors = tf.convert_to_tensor(descriptors_np)
warped_descriptors = tf.convert_to_tensor(warped_descriptors_np)
homographies_tf = scalingHomography(H, Hc = 30, Wc = 40)
# homographies_tf = tf.convert_to_tensor(H.flatten()[:8])
losses = descriptor_loss(descriptors, warped_descriptors, homographies_tf,
valid_mask=None, **config)
# evaluate the TF mask and wrap it as a torch tensor so the
# torch-style visualization below can be reused unchanged
mask = torch.tensor(sess.run(losses['mask']))
mask_o = mask.cpu().numpy()
print("mask_o: ", mask_o.shape)
plot_mask_corr(mask_o, 0,0)
plot_mask_corr(mask_o, height-1,width-1)
plot_mask_corr(mask_o, height-1,0)
plot_mask_corr(mask_o, 0,width-1)
plot_mask_corr(mask_o, height//2,width//2)
#
mask_o = mask.sum(dim=1).sum(dim=1)
img = (mask_o).cpu().numpy()[0,:,:]
print("sum up dim1 and dim2. should look like warping")
pltImshow(img)
#
y, x = 1, 5
img = (mask).cpu().numpy()[0,:,:,y,x]
print(img.shape)
print("the correspondence of '0,0', in warped image. [30,40,y,x]")
pltImshow(img)
#
mask_o = mask.sum(dim=3).sum(dim=3)
img = (mask_o).cpu().numpy()[0,:,:]
print("sum up dim3 and dim4. should look like all ones")
pltImshow(img)
warped_descriptors_np = np.random.rand(1,30,40,256)
warped_descriptors_np.dtype
img.min()
```
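To summarize the valid-mask idea used throughout this notebook — warp an all-ones image with the homography, then erode the border — here is a self-contained numpy sketch (a toy translation homography, and a naive min-filter erosion standing in for `tf.nn.erosion2d`/OpenCV):

```python
import numpy as np

def warp_ones(shape, H):
    """Nearest-neighbour warp of an all-ones image: output pixel (x, y) is
    valid iff its preimage H^-1 @ (x, y, 1) lands inside the image bounds."""
    h, w = shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Hinv @ pts
    src = src[:2] / src[2]
    inside = (src[0] >= 0) & (src[0] < w) & (src[1] >= 0) & (src[1] < h)
    return inside.reshape(h, w).astype(np.float32)

def erode(mask, r):
    """Naive square min-filter erosion of radius r with zero padding."""
    h, w = mask.shape
    out = mask.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = np.zeros_like(mask)
            shifted[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
                mask[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            out = np.minimum(out, shifted)
    return out

# Translation by 10 px: the left 10 columns have no preimage, so they are
# invalid, and erosion then discards a further 2 px margin everywhere.
H = np.array([[1., 0., 10.], [0., 1., 0.], [0., 0., 1.]])
mask = erode(warp_ones((24, 32), H), r=2)
```

The pixels masked out here are exactly the bordering artifacts that the descriptor loss must ignore.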
# Introduction
Although math is the fundamental basis of physics and astrophysics, we cannot always easily convert numbers and equations into a coherent picture. Plotting is therefore a vital tool in bridging the gap between raw data and a deeper scientific understanding.
*Disclaimer:*
There are many ways to make the same plot in matplotlib and there are many ways to bin your data. Often, there is no "best" way to display data in a plot, and the message conveyed can be heavily dependent on the context of the data as well as aesthetic plotting decisions.
For example, in histograms, as we discuss below, the relatively subjective choice of bin size can significantly affect the interpretation of the results. It is important to be aware of when and how we make these choices and to try to reduce any unintended bias.
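As a concrete illustration of that sensitivity, the same synthetic bimodal sample can look unimodal with coarse bins and clearly bimodal with finer ones (made-up data, purely to show the effect):

```python
import numpy as np

rng = np.random.default_rng(0)
# two overlapping populations 4 sigma apart -- genuinely bimodal
sample = np.concatenate([rng.normal(10, 1, 500), rng.normal(14, 1, 500)])

for nBins in (4, 40):
    counts, edges = np.histogram(sample, bins=nBins)
    peak = edges[np.argmax(counts)]
    print(f"{nBins:2d} bins: tallest bin starts at {peak:.1f}")
```

With 4 bins the two modes blur into a broad hump; with 40 bins the dip between them is visible.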
---
**Example: inspect the component masses of Double Compact Objects**
In the example below, we use the following conventions:
1 - We deliberately choose to use the matplotlib.pyplot.subplots routine even when creating a single figure (as opposed to using pyplot.plot). This is because many online forums (e.g. Stack Overflow) use this syntax. Furthermore, this means you do not have to learn two different types of syntax when creating either a single or a multiple panel figure.
2 - We choose to do the binning within the numpy/array environment instead of with built-in functions such as plt.hist / axes.hist. The reason is that you have more control over what you do, such as custom normalization (using rates, weights, pdf, etc.). It also forces you to have a deeper understanding of what you are calculating, and allows you to check intermediate steps with print statements. Once you know how to bin your data this way you can also easily expand these routines for more complicated plots (2D binning).
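For simple counts, the mask-based binning described in point 2 agrees exactly with `numpy.histogram`; a quick self-check on synthetic values (the real masses behave the same way):

```python
import numpy as np

values = np.random.default_rng(1).uniform(0, 30, 1000)
binEdges = np.linspace(0, 30, 51)
nBins = len(binEdges) - 1

# explicit mask-based binning, as done later in this notebook
yvalues = np.zeros(nBins)
for iBin in range(nBins):
    mask = (values >= binEdges[iBin]) & (values < binEdges[iBin + 1])
    yvalues[iBin] = np.sum(mask)

# numpy's built-in equivalent for plain counts
counts, _ = np.histogram(values, bins=binEdges)
print(np.array_equal(yvalues, counts))  # -> True
```

The explicit loop becomes worth the extra lines as soon as you weight each system individually.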
# Path to be set by user
```
pathToData = '/home/cneijssel/Desktop/Test/COMPAS_output.h5'
```
# Imports
```
# Python libraries
import numpy as np #for handling arrays
import h5py as h5 #for reading the COMPAS data
import matplotlib.pyplot as plt #for plotting
%matplotlib inline
```
# Get some data to plot
```
Data = h5.File(pathToData, 'r')  # open read-only
print(list(Data.keys()))
# DCOs = double compact objects
DCOs = Data['DoubleCompactObjects']
M1 = DCOs['Mass_1'][()]
M2 = DCOs['Mass_2'][()]
Mtot = np.add(M1, M2)
Data.close()
```
# Histogram
```
# You can use numpy to create an array with specific min, max and interval values
minMtot = 0
maxMtot = max(Mtot)
nBins = 50
# Number of bin edges is one more than number of bins
binEdges = np.linspace(minMtot, maxMtot, nBins+1)
# What is the value at the center of the bin?
# add each edge of the side of the bin and divide by 2
xvaluesHist = (binEdges[:-1] + binEdges[1:])/2.
# What is the width of each bin? (an array in general, if the spacing is non-uniform)
binWidths = np.diff(binEdges)
### Set yvalues to the height of the bins
# Create an array of y-values for each x-value
yvalues = np.zeros(len(xvaluesHist))
# Iterate over the bins to calculate the number of data points per bin
for iBin in range(nBins):
mask = (Mtot >= binEdges[iBin]) & (Mtot < binEdges[iBin+1])
yvalues[iBin] = np.sum(mask)
# You can of course apply any mask you like to get the desired histogram
## Generally, you can calculate the rate per unit x (dy/dx) using
dYdXHist = np.divide(yvalues, binWidths)
# To convert your distribution to a PDF, normalize in y-values:
PDF = np.divide(yvalues, np.sum(yvalues))
# You can then multiply by, e.g, rates/weights to scale the distribution
```
# CDF
Sometimes we want to know what fraction of the data lies below a given value. To find this, we calculate a Cumulative Distribution Function, or CDF.
```
# Question: How many points have a value less than X?
# Sort the values of interest
MtotSorted = np.sort(Mtot)
# These values are your xvalues
xvaluesCDF = MtotSorted
# The CDF is a non-strictly increasing function from 0 to 1 across the range of x values.
# It should increment by 1/len(xvaluesCDF) at each x in the array, and remain constant otherwise.
# Numpy provides several functions that make this very straightforward
nDataPoints = len(xvaluesCDF)
yvalues = np.cumsum(np.ones(nDataPoints))
CDF = yvalues / nDataPoints
```
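With the sorted array in hand, answering "what fraction of systems has total mass below X?" is a single `searchsorted` lookup — a sketch with stand-in data (the real `Mtot` array works identically):

```python
import numpy as np

# stand-in for the COMPAS Mtot array
Mtot = np.random.default_rng(2).uniform(2, 60, 1000)
MtotSorted = np.sort(Mtot)

def fraction_below(x):
    # index of the first sorted value >= x, over the total count
    return np.searchsorted(MtotSorted, x) / len(MtotSorted)

print(fraction_below(MtotSorted.min()))      # -> 0.0
print(fraction_below(MtotSorted.max() + 1))  # -> 1.0
```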
# A two panel plot
```
# For two panels side by side:
fig, axes = plt.subplots(1,2, figsize=(18,8))
# axes is an array relating to each panel
# panel1 = axes[0]
# panel2 = axes[1]
largefontsize = 18
smallfontsize = 13
### In the left panel, we want to plot the histogram and CDF overlayed
### with the same x-axis, but different y-axes
# Plot the Histogram first
histAxes = axes[0]
histAxes.plot(xvaluesHist, dYdXHist)
histAxes.set_xlabel(r'Mtot [$M_\odot$]', fontsize=smallfontsize)
histAxes.set_ylabel(r'dN/dMtot [$M_\odot^{-1}$]', fontsize=smallfontsize)
# Overlay the CDF with the same x-axis but different y-axis
cdfAxes = axes[0].twinx()
cdfAxes.plot(xvaluesCDF, CDF, c='r')
# Don't have to set an xlabel since it is shared
cdfAxes.set_ylabel('CDF', fontsize=smallfontsize, labelpad=-40)
cdfAxes.tick_params(axis='y', direction='in', pad=-20) # Adjust the CDF axis for clarity in the plot
axes[0].set_title('Total Mass Histogram and CDF', fontsize=largefontsize)
### In the right panel, we want to display a scatterplot of M1 & M2
axes[1].scatter(M1, M2)
axes[1].set_xlabel(r'M1 [$M_\odot$]', fontsize=smallfontsize)
axes[1].set_ylabel(r'M2 [$M_\odot$]', fontsize=smallfontsize)
axes[1].set_title('Component Masses', fontsize=largefontsize)
### Clean up and display the plot
# You can force plt to pad enough between plots
# such that the labels fit
plt.tight_layout()
# If you want to save the figure, use:
#plt.savefig(pathToSave)
# To produce the plot, always remember to:
plt.show()
```
# MNIST distributed training
The **SageMaker Python SDK** helps you deploy your models for training and hosting in optimized, production-ready containers in SageMaker. The SageMaker Python SDK is easy to use, modular, extensible and compatible with TensorFlow and MXNet. This tutorial focuses on how to create a convolutional neural network model to train the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) using **TensorFlow distributed training**.
### Set up the environment
```
import os
import sagemaker
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
```
### Download the MNIST dataset
```
import utils
from tensorflow.contrib.learn.python.learn.datasets import mnist
import tensorflow as tf
data_sets = mnist.read_data_sets('data', dtype=tf.uint8, reshape=False, validation_size=5000)
utils.convert_to(data_sets.train, 'train', 'data')
utils.convert_to(data_sets.validation, 'validation', 'data')
utils.convert_to(data_sets.test, 'test', 'data')
```
### Upload the data
We use the ```sagemaker.Session.upload_data``` function to upload our datasets to an S3 location. The return value, `inputs`, identifies the location -- we will use this later when we start the training job.
```
inputs = sagemaker_session.upload_data(path='data', key_prefix='data/DEMO-mnist')
```
# Construct a script for distributed training
Here is the full code for the network model:
```
!cat 'mnist.py'
```
The script here is an adaptation of the [TensorFlow MNIST example](https://github.com/tensorflow/models/tree/master/official/mnist). It provides a ```model_fn(features, labels, mode)```, which is used for training, evaluation and inference.
## A regular ```model_fn```
A regular **```model_fn```** follows the pattern:
1. [defines a neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L96)
- [applies the ```features``` in the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L178)
- [if the ```mode``` is ```PREDICT```, returns the output from the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L186)
- [calculates the loss function comparing the output with the ```labels```](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L188)
- [creates an optimizer and minimizes the loss function to improve the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L193)
- [returns the output, optimizer and loss function](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L205)
## Writing a ```model_fn``` for distributed training
When distributed training happens, the same neural network will be sent to the multiple training instances. Each instance will predict a batch of the dataset, calculate the loss, and run the optimizer to minimize it. One entire loop of this process is called a **training step**.
### Synchronizing training steps
A [global step](https://www.tensorflow.org/api_docs/python/tf/train/global_step) is a global variable shared between the instances. It's necessary for distributed training, so the optimizer will keep track of the number of **training steps** between runs:
```python
train_op = optimizer.minimize(loss, tf.train.get_or_create_global_step())
```
That is the only required change for distributed training!
## Create a training job using the sagemaker.TensorFlow estimator
```
from sagemaker.tensorflow import TensorFlow
mnist_estimator = TensorFlow(entry_point='mnist.py',
role=role,
framework_version='1.12.0',
training_steps=1000,
evaluation_steps=100,
train_instance_count=2,
train_instance_type='ml.c4.xlarge')
mnist_estimator.fit(inputs)
```
The **```fit```** method will create a training job in two **ml.c4.xlarge** instances. The logs above will show the instances doing training, evaluation, and incrementing the number of **training steps**.
At the end of training, the training job will generate a saved model for TensorFlow Serving.
# Deploy the trained model to prepare for predictions
The deploy() method creates an endpoint which serves prediction requests in real-time.
```
mnist_predictor = mnist_estimator.deploy(initial_instance_count=1,
instance_type='ml.m4.xlarge')
```
# Invoking the endpoint
```
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
for i in range(10):
data = mnist.test.images[i].tolist()
tensor_proto = tf.make_tensor_proto(values=np.asarray(data), shape=[1, len(data)], dtype=tf.float32)
predict_response = mnist_predictor.predict(tensor_proto)
print("========================================")
label = np.argmax(mnist.test.labels[i])
print("label is {}".format(label))
prediction = predict_response['outputs']['classes']['int64_val'][0]
print("prediction is {}".format(prediction))
```
# Deleting the endpoint
```
sagemaker.Session().delete_endpoint(mnist_predictor.endpoint)
```
```
import pandas as pd
import numpy as np
import json
import time
import hmac
import hashlib
import random
random.seed(42)
HMAC_KEY = "Insert your HMAC key from Edge Impulse project"
classes = {
"grazing": 0,
"lying": 0,
"running": 0,
"standing": 0,
"walking":0,
"trotting":0
}
ident_cols = ["label", "animal_ID", "segment_ID"]
data_cols = [ "timestamp_ms", "ax", "ay", "az", "cx", "cy", "cz", "gx", "gy", "gz"]
# Please concatenate all downloaded goat/sheep data as a single all.csv file before reading
df = pd.read_csv('./data/raw/all.csv')
df.head()
df_imu = df[ident_cols + data_cols]
df_imu.describe()
df_imu = df_imu.dropna()
df_imu.head(20)
groups = [df1 for _, df1 in df_imu.groupby(ident_cols)]
random.shuffle(groups)
grouped_df = pd.concat(groups).reset_index(drop=True).groupby(ident_cols)
for key, item in grouped_df:
label, a_id, s_id = key
if label not in classes:
continue
if label == 'trotting':
label = 'running'
values = item[["ax", "ay", "az", "gx", "gy", "gz"]].to_numpy()
count = len(values)
if count < 400 or classes[label] > 400000:
continue
if (np.all(values[:,:3] == 0)):
continue
if (np.all(values[:,3:] == 0)):
continue
classes[label] += count
emptySignature = ''.join(['0'] * 64)
data = {
"protected": {
"ver": "v1",
"alg": "HS256",
"iat": time.time()
},
"signature": emptySignature,
"payload": {
"device_name": "8F:27:A2:90:30:76",
"device_type": "ARDUINO_NANO33BLE",
"interval_ms": 10,
"sensors": [
{ "name": "accX", "units": "m/s2" },
{ "name": "accY", "units": "m/s2" },
{ "name": "accZ", "units": "m/s2" },
{ "name": "gyrX", "units": "d/s" },
{ "name": "gyrY", "units": "d/s" },
{ "name": "gyrZ", "units": "d/s" },
],
"values": values.tolist()
}
}
# encode in JSON
encoded = json.dumps(data)
# sign message
signature = hmac.new(bytes(HMAC_KEY, 'utf-8'), msg = encoded.encode('utf-8'), digestmod = hashlib.sha256).hexdigest()
# set the signature again in the message, and encode again
data['signature'] = signature
encoded = json.dumps(data, indent=4)
filename = '../data/uploads/acc_gyr/{}.{}_{}.json'.format(label, a_id, s_id)
print(filename)
with open(filename, 'w') as fp:
fp.write(encoded)
print('completed')
print(classes)
```
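The sign-then-re-embed pattern used above (hash the JSON payload while it still carries an all-zero placeholder signature, then write the real signature back into the message and encode again) can be exercised in isolation:

```python
import hashlib
import hmac
import json

# Toy key for illustration only -- the real HMAC key comes from the Edge Impulse project.
hmac_key = "toy-key"

message = {
    "protected": {"ver": "v1", "alg": "HS256"},
    "signature": "".join(["0"] * 64),  # placeholder, same as in the script above
    "payload": {"values": [[1.0, 2.0, 3.0]]},
}

# Sign the JSON encoding that still contains the placeholder signature
encoded = json.dumps(message)
signature = hmac.new(hmac_key.encode("utf-8"),
                     msg=encoded.encode("utf-8"),
                     digestmod=hashlib.sha256).hexdigest()

# Re-embed the real signature and encode again for upload
message["signature"] = signature
final = json.dumps(message, indent=4)
print(len(signature))  # a SHA-256 hex digest is 64 characters, like the placeholder
```

Because the signature is computed over the placeholder version, a verifier must swap the placeholder back in before re-computing the HMAC.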
| github_jupyter |
# Configuring analyzers for the MSMARCO Document dataset
Before we start tuning queries and other index parameters, we wanted to first show a very simple iteration on the standard analyzers. In the MS MARCO Document dataset we have three fields: `url`, `title` and `body`. We tried just a couple of very small improvements, mostly to the stopword lists, to see what would happen to our baseline queries. We now have two indices to play with:
- `msmarco-document.defaults` with some default analyzers
- `url`: standard
- `title`: english
- `body`: english
- `msmarco-document` with customized analyzers
- `url`: english with URL-specific stopword list
  - `title`: english with question-specific stopword list
  - `body`: english with question-specific stopword list
The stopword lists have been changed:
1. Since the MS MARCO query dataset is all questions, it makes sense to add a few extra stop words like: who, what, when, where, why, how
1. URLs in addition have some other words that don't really need to be searched on: http, https, www, com, edu
More details can be found in the index settings in `conf`.
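As an illustration only (the authoritative settings live in `conf`), a question-aware English analyzer could be defined with index settings along these lines, appending the extra stop words to the standard English list; the analyzer and filter names here are assumptions:

```python
# Sketch of custom analyzer settings; names and exact stopword lists are
# assumptions -- see the project's `conf` directory for the real definitions.
question_stopwords = ["who", "what", "when", "where", "why", "how"]
url_stopwords = question_stopwords + ["http", "https", "www", "com", "edu"]

def english_analyzer(name, extra_stopwords):
    """Build an English-style analyzer definition with extra stop words."""
    return {
        "analysis": {
            "filter": {
                f"{name}_stop": {
                    "type": "stop",
                    # "_english_" pulls in the built-in English stopword list
                    "stopwords": ["_english_"] + extra_stopwords,
                },
                "english_stemmer": {"type": "stemmer", "language": "english"},
            },
            "analyzer": {
                name: {
                    "tokenizer": "standard",
                    "filter": ["lowercase", f"{name}_stop", "english_stemmer"],
                }
            },
        }
    }

title_settings = english_analyzer("title_english", question_stopwords)
url_settings = english_analyzer("url_english", url_stopwords)
```

Such a settings dict would be passed as the `settings` body when creating the index.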
```
%load_ext autoreload
%autoreload 2
import importlib
import os
import sys
from elasticsearch import Elasticsearch
# project library
sys.path.insert(0, os.path.abspath('..'))
import qopt
importlib.reload(qopt)
from qopt.notebooks import evaluate_mrr100_dev
# use a local Elasticsearch or Cloud instance (https://cloud.elastic.co/)
es = Elasticsearch('http://localhost:9200')
# set the parallelization parameter `max_concurrent_searches` for the Rank Evaluation API calls
max_concurrent_searches = 10
```
## Comparisons
The following runs a series of comparisons between the baseline default index `msmarco-document.defaults` and the custom index `msmarco-document`. We use multiple query types just to confirm that we make improvements across all of them.
### Query: combined per-field `match`es
```
def combined_matches(index):
evaluate_mrr100_dev(es, max_concurrent_searches, index, 'combined_matches', params={})
%%time
combined_matches('msmarco-document.defaults')
%%time
combined_matches('msmarco-document')
```
### Query: `multi_match` `cross_fields`
```
def multi_match_cross_fields(index):
evaluate_mrr100_dev(es, max_concurrent_searches, index,
template_id='cross_fields',
params={
'operator': 'OR',
'minimum_should_match': 50, # in percent/%
'tie_breaker': 0.0,
'url|boost': 1.0,
'title|boost': 1.0,
'body|boost': 1.0,
})
%%time
multi_match_cross_fields('msmarco-document.defaults')
%%time
multi_match_cross_fields('msmarco-document')
```
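For reference, the `cross_fields` template invoked above corresponds roughly to a query body of this shape (a sketch only; the actual search templates live in `conf`, and the boost parameters map to the `url|boost`-style template parameters):

```python
# Approximate shape of the cross_fields search template -- an assumption,
# not the project's actual template.
def cross_fields_query(query_string, operator="OR", minimum_should_match="50%",
                       tie_breaker=0.0, url_boost=1.0, title_boost=1.0,
                       body_boost=1.0):
    """Build a multi_match cross_fields query body with per-field boosts."""
    return {
        "query": {
            "multi_match": {
                "type": "cross_fields",
                "query": query_string,
                "operator": operator,
                "minimum_should_match": minimum_should_match,
                "tie_breaker": tie_breaker,
                "fields": [
                    f"url^{url_boost}",
                    f"title^{title_boost}",
                    f"body^{body_boost}",
                ],
            }
        }
    }

body = cross_fields_query("what is the capital of france")
```

With `cross_fields`, terms are blended across the analyzed fields as if they were one field, which is why consistent analyzers across `url`, `title` and `body` matter for this query type.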
### Query: `multi_match` `best_fields`
```
def multi_match_best_fields(index):
evaluate_mrr100_dev(es, max_concurrent_searches, index,
template_id='best_fields',
params={
'tie_breaker': 0.0,
'url|boost': 1.0,
'title|boost': 1.0,
'body|boost': 1.0,
})
%%time
multi_match_best_fields('msmarco-document.defaults')
%%time
multi_match_best_fields('msmarco-document')
```
## Conclusion
As you can see, there's a measurable and consistent improvement with just some minor changes to the default analyzers. All other notebooks that follow will use the custom analyzers including for their baseline measurements.
| github_jupyter |
```
import pandas as pd
import numpy as np
import os, sys
sys.path.append(os.environ['HOME'] + '/src/models/')
from deeplearning_models import DLTextClassifier
from feature_based_models import FBConstructivenessClassifier
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
%matplotlib inline
# classifiers / models
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LinearRegression, LogisticRegression, SGDClassifier
from sklearn.model_selection import ShuffleSplit
from sklearn.svm import SVC, SVR
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
# other
from sklearn.preprocessing import normalize
from sklearn.metrics import log_loss, accuracy_score
from sklearn.model_selection import train_test_split, GridSearchCV, RandomizedSearchCV, cross_val_score
import nltk
import time
import xgboost as xgb
from sklearn.metrics import f1_score, classification_report
def show_scores(model, X_train, y_train, X_valid, y_valid):
"""
"""
print("Training accuracy: %.2f" % (model.score(X_train, y_train)))
print("Validation accuracy: %.2f" % (model.score(X_valid, y_valid)))
predictions = list(model.predict(X_train))
true_labels = y_train.tolist()
print('TRAIN CLASSIFICATION REPORT\n\n', classification_report(true_labels, predictions))
predictions = list(model.predict(X_valid))
true_labels = y_valid.tolist()
print('VALIDATION CLASSIFICATION REPORT\n\n', classification_report(true_labels, predictions))
```
### Read train test data
```
C3_train_df = pd.read_csv(os.environ['C3_TRAIN'])
C3_train_df['pp_comment_text'] = C3_train_df['pp_comment_text'].astype(str)
C3_test_df = pd.read_csv(os.environ['C3_TEST'])
C3_test_df['pp_comment_text'] = C3_test_df['pp_comment_text'].astype(str)
C3_train_df.columns
y_C3_train = C3_train_df.constructive_binary
X_C3_train = C3_train_df.drop(['constructive_binary'], axis = 1)
#SOCC_a_df = pd.read_csv('/home/vkolhatk/dev/constructiveness//data/external/SOCC//annotated/constructiveness/SFU_constructiveness_toxicity_corpus_preprocessed.csv')
#X_SOCC_a = SOCC_a_df['pp_comment_text'].astype(str)
#y_SOCC_a = SOCC_a_df['is_constructive']
y_C3_test = C3_test_df.constructive_binary
X_C3_test = C3_test_df.drop(['constructive_binary'], axis = 1)
```
### Fit dummy classifier
```
feature_set = ['length_feats']
classifier = FBConstructivenessClassifier(X_C3_train, y_C3_train, X_C3_test, y_C3_test)
pipeline = classifier.train_pipeline(classifier = DummyClassifier(), feature_set = feature_set)#'ngram_feats', 'tfidf_feats', 'pos_feats'])
classifier.show_scores(pipeline)
```
### Train on C3 train and test on SOCC_a
```
# Test corpus
SOCC_a_df = pd.read_csv(os.environ['SOCC_ANNOTATED_FEATS_PREPROCESSED'])
SOCC_a_df['pp_comment_text'] = SOCC_a_df['pp_comment_text'].astype(str)
y_SOCC_a = SOCC_a_df['constructive']
X_SOCC_a = SOCC_a_df.drop(['constructive'], axis = 1)
feature_set = ['length_feats']
models = {'logistic regression': LogisticRegression,
'SVM' : SGDClassifier,
'random forest' : RandomForestClassifier,
'xgboost' : xgb.XGBClassifier
}
classifier = FBConstructivenessClassifier(X_C3_train, y_C3_train, X_SOCC_a, y_SOCC_a)
for model_name, model_class in models.items():
t = time.time()
print(model_name, ":")
m = model_class()
pipeline = classifier.train_pipeline(classifier = model_class())
classifier.show_scores(pipeline)
elapsed_time = time.time() - t
print("Elapsed time: %.1f s" % elapsed_time)
print()
train_df = pd.concat([X_C3_train, y_C3_train], axis = 1)
test_df = pd.concat([X_SOCC_a, y_SOCC_a], axis = 1)
def run_dl_experiment(C3_train_df,
C3_test_df,
                      results_csv_path = os.environ['HOME'] + '/models/test_predictions.csv',
model = 'cnn'):
"""
"""
X_train = C3_train_df['pp_comment_text'].astype(str)
y_train = C3_train_df['constructive_binary']
X_test = C3_test_df['pp_comment_text'].astype(str)
y_test = C3_test_df['constructive_binary']
dlclf = DLTextClassifier(X_train, y_train)
if model.endswith('lstm'):
dlclf.build_bilstm()
elif model.endswith('cnn'):
dlclf.build_cnn()
dlclf.train(X_train, y_train)
print('\nTrain results: \n\n')
dlclf.evaluate(X_train, y_train)
print('\nTest results: \n\n')
dlclf.evaluate(X_test, y_test)
results_df = dlclf.write_model_scores_df(C3_test_df, results_csv_path)
test_df = test_df.rename({'constructive':'constructive_binary'}, axis='columns')
test_df.columns
run_dl_experiment(train_df, test_df, model = 'cnn')
run_dl_experiment(train_df, test_df, model = 'lstm')
```
| github_jupyter |
# Objective
In this notebook we will:
+ load and merge data from different sources (in this case, data source is filesystem.)
+ preprocess data
+ create features
+ visualize feature distributions across the classes
+ write down our observations about the data
```
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
male_df = pd.read_csv("../data/male.csv")
female_df = pd.read_csv("../data/female.csv")
male_df.head()
female_df.head()
# join these two dataframes into one (DataFrame.append is deprecated in recent pandas)
df = pd.concat([male_df, female_df])
#drop 'race' column as it is 'indian' for all
df = df.drop('race',axis=1)
#let's checkout head
df.head()
#let's checkout the tail
df.tail()
```
# Preprocessing
We want to predict gender from the first name alone: the surname doesn't say anything significant for Indian names, since Indian women generally adopt their husband's surname after marriage.
Also, it is a custom for many names to have prefixes like "Shri"/"Sri"/"Mr" for Men and "Smt"/"Ms."/"Miss." for women.
Let's validate that hypothesis and remove them.
```
df.describe()
df.info()
df.shape
```
The counts of name and gender don't match, so there must be some null values in name. Let's remove those.
```
df = df[~df.name.isnull()]
df.info()
```
There could be duplicate names. Let's find them and remove them.
```
fig,ax = plt.subplots(figsize=(12,7))
ax = df.name.value_counts().head(10).plot(kind='bar')
ax.set_xlabel('names')
ax.set_ylabel('frequency');
df = df.drop_duplicates('name')
df.shape
# our dataset is reduced by almost half!
# let's remove special characters from the names. Some names might have been written in non-ASCII encodings.
# We need to remove those as well.
import re
import string
def remove_punctuation_and_numbers(x):
x = x.lower()
x = re.sub(r'[^\x00-\x7f]', r'', x)
# Removing (replacing with empty spaces actually) all the punctuations
x = re.sub("[" + string.punctuation + "]", "", x)
x = re.sub(r'[0-9]+',r'',x)
x = x.strip()
return x
df['name'] = df['name'].apply(remove_punctuation_and_numbers)
df.name.value_counts().head()
#let's remove names having less than 2 characters
df = df[df.name.str.len() > 2]
df.name.value_counts().head()
#let's see our class distribution
df.gender.value_counts()
#let's extract prefix/firstnames from the names
df['firstname'] = df.name.apply(lambda x: x.strip().split(" ")[0])
df.firstname.value_counts().head(10)
```
In India,
+ married women use the prefix *smt* or *Shrimati*,*Mrs*
+ unmarried women use *ku*, *kum* or *kumari*
+ *mohd* or *md* as the prefix for Muslim Men.
+ mr./kumar/kr/sri/shree/shriman/sh is an honorific for men
some more prefixes not present in the top 10 list above are:
```
df[df.firstname=='mr'].shape
df[df.firstname=='kumar'].shape
df[df.firstname=='kr'].shape
df[df.firstname=='miss'].shape
df[df.firstname=='mrs'].shape
df[df.firstname=='kum'].shape
df[df.firstname=='sri'].shape
df[df.firstname=='shri'].shape
df[df.firstname=='sh'].shape
df[df.firstname=='shree'].shape
df[df.firstname=='shrimati'].shape
df[df.name.str.startswith('su shree')].shape #edge case, sushri/su shree/kumari is used for unmarried Indian women, similar to Miss.
df[df.firstname == 'sushri'].shape
prefix = ['mr','kumar','kr','ku','kum','kumari','km',
'miss','mrs','mohd','md',
'sri','shri','sh','smt','shree','shrimati','su','sushri']
```
These prefixes could actually be used as features for our model. However, we won't use them in this iteration. We want to build a model based on just the name (without prefix or suffix/surname).
```
df.head()
# keep those names whose firstname is not a prefix
df_wo_prefix = df[~df.firstname.isin(prefix)]
df_wo_prefix.head()
df_wo_prefix.firstname.value_counts().head()
df_wo_prefix.shape
# drop duplicates from firstname column
df_wo_prefix = df_wo_prefix.drop_duplicates('firstname')
df_wo_prefix.head()
df_wo_prefix.shape
df_wo_prefix.firstname.value_counts().head()
# class distribution now
df_wo_prefix.gender.value_counts()
#drop name column
df_wo_prefix = df_wo_prefix.drop('name',axis=1)
df_wo_prefix.head()
# this is the final dataset we will be working with, let's save it to file
df_wo_prefix.to_csv('../data/names_processed.csv',index=False)
def extract_features(name):
name = name.lower() #making sure that the name is in lower case
features= {
'first_1':name[0],
'first_2':name[:2],
'first_3':name[:3],
'last_2':name[-2:],
'last_1': name[-1],
'length': len(name)
}
return features
extract_features('amit')
features = (df_wo_prefix
.firstname
.apply(extract_features)
.values
.tolist())
features_df = pd.DataFrame(features)
features_df.head()
# let's append our gender column here
features_df['gender'] = df_wo_prefix['gender'].values
features_df.head()
# let's analyse the features now.
# Question: how does the length of names differ between male and female?
freq = features_df['length'].value_counts() # frequency of lengths
fig,ax = plt.subplots(figsize=(12,7))
ax = freq.plot(kind='bar');
ax.set_xlabel('name length');
ax.set_ylabel('frequency');
```
The majority of name lengths lie between 5 and 7 characters. By 'name' we refer to 'firstname' here, and will do so henceforth.
```
male_name_lengths = features_df.loc[features_df.gender=='m','length']
male_name_lengths_freq = male_name_lengths.value_counts()
male_name_lengths_freq
female_name_lengths = features_df.loc[features_df.gender=='f','length']
female_name_lengths_freq = female_name_lengths.value_counts()
female_name_lengths_freq
length_freq_long = (features_df
                    .groupby(['gender','length'])['length']
                    .count()
                    .rename('freq')
                    .reset_index())
length_freq_long.head()
length_freq_wide = length_freq_long.pivot(index='length',columns='gender',values='freq')
length_freq_wide = length_freq_wide.fillna(0)
length_freq_wide = length_freq_wide.astype('int')
length_freq_wide
fig,ax = plt.subplots(figsize=(12,6))
ax = length_freq_wide.plot(kind='bar',ax=ax)
ax.set_ylabel('frequency');
length_freq_wide.m.mean(),length_freq_wide.m.std()
length_freq_wide.f.mean(),length_freq_wide.f.std()
```
So do gender and name length have any relationship?
## Hypothesis
H0 : gender and name length are independent
Ha : gender and name length are dependent
## Significance level
For this test, let's keep the significance level at 0.05
```
from scipy.stats.mstats import normaltest
print('m: ',normaltest(length_freq_wide.m))
print('f: ',normaltest(length_freq_wide.f))
```
The p-values for both the male and female name-length distributions are much smaller than the 0.05 significance level, hence the null hypothesis is rejected.
**This means that there's a relationship between name length and gender.**
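A direct test of the stated independence hypothesis would be a chi-square test of independence on the gender-by-length contingency table (a sketch with synthetic counts; with the real data, `length_freq_wide.values` would be passed in place of `table`):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Synthetic gender-by-name-length contingency table
# (rows: m/f, columns: length bins) -- illustrative counts only.
table = np.array([[120, 340, 510, 260],
                  [ 90, 410, 470, 300]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
if p < 0.05:
    print("Reject H0: gender and name length appear dependent")
else:
    print("Fail to reject H0")
```

Unlike `normaltest`, which checks whether each frequency distribution is normal, `chi2_contingency` tests independence between the two variables directly.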
| github_jupyter |
```
import numpy as np
import scipy.stats as ss
import pandas as pd
import datetime as dt
from random import gauss
from itertools import product
#----------------------------------------------------------------------------------------
def getRefDates_MonthBusinessDate(dates):
refDates,pDay={},[]
first=dt.date(year=dates[0].year,month=dates[0].month,day=1)
m=dates[0].month
d=numBusinessDays(first,dates[0])+1
for i in dates:
if m!=i.month:m,d=i.month,1
pDay.append(d)
d+=1
for j in range(1,30):
lst=[dates[i] for i in range(len(dates)) if pDay[i]==j]
refDates[j]=lst
return refDates
#----------------------------------------------------------------------------------------
def numBusinessDays(date0,date1):
m,date0_=1,date0
while True:
date0_+=dt.timedelta(days=1)
if date0_>=date1:break
if date0_.isoweekday()<6:m+=1
return m
#----------------------------------------------------------------------------------------
def getTrades(series,dates,refDates,exit,stopLoss,side):
# Get trades
trades,pnl,position_,j,num=[],0,0,0,None
for i in range(1,len(dates)):
# Find next roll and how many trading dates to it
if dates[i]>=refDates[j]:
if dates[i-1]<refDates[j]:num,pnl=0,0
if j<len(refDates)-1:
while dates[i]>refDates[j]:j+=1
if num==None:continue
# Trading rule
position=0
if num<exit and pnl>stopLoss:position=side
if position!=0 or position_!=0:
trades.append([dates[i],num,position,position_*(series[i]-series[i-1])])
pnl+=trades[-1][3]
position_=position
num+=1
return trades
#----------------------------------------------------------------------------------------
def computePSR(stats,obs,sr_ref=0,moments=4):
#Compute PSR
stats_=[0,0,0,3]
stats_[:moments]=stats[:moments]
sr=stats_[0]/stats_[1]
psrStat=(sr-sr_ref)*(obs-1)**0.5/(1-sr*stats_[2]+sr**2*(stats_[3]-1)/4.)**0.5
psr=ss.norm.cdf((sr-sr_ref)*(obs-1)**0.5/(1-sr*stats_[2]+sr**2*(stats_[3]-1)/4.)**0.5)
return psrStat,psr
#----------------------------------------------------------------------------------------
def attachTimeSeries(series,series_,index=None,label='',how='outer'):
# Attach a time series to a pandas dataframe
if not isinstance(series_,pd.DataFrame):
series_=pd.DataFrame({label:series_},index=index)
elif label!='':series_.columns=[label]
if isinstance(series,pd.DataFrame):
series=series.join(series_,how=how)
else:
series=series_.copy(deep=True)
return series
#----------------------------------------------------------------------------------------
def evalPerf(pnl,date0,date1,sr_ref=0):
freq=float(len(pnl))/((date1-date0).days+1)*365.25
m1=np.mean(pnl)
m2=np.std(pnl)
m3=ss.skew(pnl)
m4=ss.kurtosis(pnl,fisher=False)
sr=m1/m2*freq**.5
psr=computePSR([m1,m2,m3,m4],len(pnl),sr_ref=sr_ref/freq**.5,moments=4)[0]
return sr,psr,freq
#----------------------------------------------------------------------------------------
def backTest(nDays=0,factor=0):
#1) Input parameters --- to be changed by the user
holdingPeriod,sigma,stopLoss,length=20,1,10,1000
#2) Prepare series
date_,dates=dt.date(year=2000,month=1,day=1),[]
while len(dates)<length:
if date_.isoweekday()<5:dates.append(date_)
date_+=dt.timedelta(days=1)
series=np.empty((length))
for i in range(series.shape[0]):
series[i]=gauss(0,sigma)
pDay_=dt.date(year=dates[i].year,month=dates[i].month,day=1)
if numBusinessDays(pDay_,dates[i])<=nDays:
series[i]+=sigma*factor
series=np.cumsum(series)
#3) Optimize
refDates=getRefDates_MonthBusinessDate(dates)
psr,sr,trades,sl,freq,pDay,pnl,count=None,None,None,None,None,None,None,0
for pDay_ in refDates.keys():
refDates_=refDates[pDay_]
if len(refDates_)==0:continue
#4) Get trades
for prod_ in product(range(holdingPeriod+1),range(-stopLoss,1),[-1,1]):
count+=1
trades_=getTrades(series,dates,refDates_,prod_[0],prod_[1]*sigma, \
prod_[2])
dates_,pnl_=[j[0] for j in trades_],[j[3] for j in trades_]
#5) Eval performance
if len(pnl_)>2:
#6) Reconcile PnL
pnl=attachTimeSeries(pnl,pnl_,dates_,count)
#7) Evaluate
sr_,psr_,freq_=evalPerf(pnl_,dates[0],dates[-1])
for j in range(1,len(pnl_)):pnl_[j]+=pnl_[j-1]
if sr==None or sr_>sr:
psr,sr,trades=psr_,sr_,trades_
freq,pDay,prod=freq_,pDay_,prod_
print(count, pDay, prod, round(sr,2), round(freq,2), round(psr,2))
print('Total # iterations=' + str(count))
return pnl,psr,sr,trades,freq,pDay,prod,dates,series
#---------------------------------------------------------------
# Boilerplate
if __name__=='__main__':backTest()
""
```
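The `computePSR` helper above matches the Probabilistic Sharpe Ratio of Bailey and López de Prado: in the notation of the code, with the PnL mean, standard deviation, skewness and kurtosis $m_1, m_2, m_3, m_4$, $n$ observations, a benchmark $SR^{*}$ (`sr_ref`), and $\Phi$ the standard normal CDF,

```latex
\widehat{SR} = \frac{m_1}{m_2}, \qquad
z_{PSR} = \frac{(\widehat{SR} - SR^{*})\,\sqrt{n-1}}
               {\sqrt{1 - m_3\,\widehat{SR} + \tfrac{m_4 - 1}{4}\,\widehat{SR}^{2}}}, \qquad
PSR = \Phi(z_{PSR})
```

Each term maps directly onto `psrStat` and `psr` in the code, which is why the statistic deflates the Sharpe ratio when the PnL is negatively skewed or fat-tailed.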
| github_jupyter |
# eICU Collaborative Research Database
# Notebook 4: Summary statistics
This notebook shows how summary statistics can be computed for a patient cohort using the `tableone` package. Usage instructions for tableone are at: https://pypi.org/project/tableone/
## Load libraries and connect to the database
```
# Import libraries
import numpy as np
import os
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.path as path
# Make pandas dataframes prettier
from IPython.display import display, HTML
# Access data using Google BigQuery.
from google.colab import auth
from google.cloud import bigquery
# authenticate
auth.authenticate_user()
# Set up environment variables
project_id='tdothealthhack-team'
os.environ["GOOGLE_CLOUD_PROJECT"]=project_id
# Helper function to read data from BigQuery into a DataFrame.
def run_query(query):
return pd.io.gbq.read_gbq(query, project_id=project_id, dialect="standard")
```
## Install and load the `tableone` package
The tableone package can be used to compute summary statistics for a patient cohort. Unlike the previous packages, it is not installed by default in Colab, so we will need to install it first.
```
!pip install tableone
# Import the tableone class
from tableone import TableOne
```
## Load the patient cohort
In this example, we will load all data from the patient table, and link it to the APACHE results table to provide richer summary information.
```
# Link the patient and apachepatientresult tables on patientunitstayid
# using an inner join.
query = """
SELECT p.unitadmitsource, p.gender, p.age, p.ethnicity, p.admissionweight,
p.unittype, p.unitstaytype, a.acutephysiologyscore,
a.apachescore, a.actualiculos, a.actualhospitalmortality,
a.unabridgedunitlos, a.unabridgedhosplos
FROM `physionet-data.eicu_crd_demo.patient` p
INNER JOIN `physionet-data.eicu_crd_demo.apachepatientresult` a
ON p.patientunitstayid = a.patientunitstayid
WHERE apacheversion LIKE 'IVa'
"""
cohort = run_query(query)
cohort.head()
```
## Calculate summary statistics
Before summarizing the data, we will need to convert the ages to numerical values.
```
cohort['agenum'] = pd.to_numeric(cohort['age'], errors='coerce')
columns = ['unitadmitsource', 'gender', 'agenum', 'ethnicity',
'admissionweight','unittype','unitstaytype',
'acutephysiologyscore','apachescore','actualiculos',
'unabridgedunitlos','unabridgedhosplos']
TableOne(cohort, columns=columns, labels={'agenum': 'age'},
groupby='actualhospitalmortality',
label_suffix=True, limit=4)
```
## Questions
- Are the severity of illness measures higher in the survival or non-survival group?
- What issues suggest that some of the summary statistics might be misleading?
- How might you address these issues?
## Visualizing the data
Plotting the distribution of each variable by group level via histograms, kernel density estimates and boxplots is a crucial component of data analysis pipelines. Visualization is often the only way to detect problematic variables in many real-life scenarios. We'll review a couple of the variables.
```
# Plot distributions to review possible multimodality
cohort[['acutephysiologyscore','agenum']].dropna().plot.kde(figsize=[12,8])
plt.legend(['APS Score', 'Age (years)'])
plt.xlim([-30,250])
```
## Questions
- Do the plots change your view on how these variable should be reported?
| github_jupyter |
# Read catalog information from a text file and plot some parameters
## Authors
Adrian Price-Whelan, Kelle Cruz, Stephanie T. Douglas
## Translation
Ricardo Ogando, Micaele Vitória
## Learning goals
* Read an ASCII file using `astropy.io`
* Convert between representations of coordinate components using `astropy.coordinates` (from hours to degrees)
* Make a sky plot with a spherical projection using `matplotlib`
## Keywords
file input/output, coordinates, tables, units, scatter plots, matplotlib
## Summary
This tutorial demonstrates the use of `astropy.io.ascii` to read ASCII data, `astropy.coordinates` and `astropy.units` to convert Right Ascension (RA) (given as a sexagesimal angle) to decimal degrees, and `matplotlib` to make a color-magnitude diagram and plot sky locations in a Mollweide projection.
```
!wget https://raw.githubusercontent.com/astropy/astropy-tutorials/main/tutorials/notebooks/plot-catalog/Young-Objects-Compilation.csv
!wget https://raw.githubusercontent.com/astropy/astropy-tutorials/main/tutorials/notebooks/plot-catalog/simple_table.csv
!pwd
!rm Young-Objects-Compilation.csv.1
!cat simple_table.csv
import numpy as np
# Set up matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
```
Astropy provides functionality for reading and manipulating tabular data through the `astropy.table` subpackage. An additional set of tools for reading and writing ASCII-format data is provided in the `astropy.io.ascii` subpackage, but it fundamentally uses the classes and methods implemented in `astropy.table`.
Let's start by importing the `ascii` subpackage:
```
from astropy.io import ascii
```
In many cases, it is sufficient to use the `ascii.read('filename')` function
as a black box for reading data from text files in table format.
By default, this function will try to figure out how your data is formatted/delimited (by default, `guess=True`). For example, if your data looks like this:
# name,ra,dec
BLG100,17:51:00.0,-29:59:48
BLG101,17:53:40.2,-29:49:52
BLG102,17:56:20.2,-29:30:51
BLG103,17:56:20.2,-30:06:22
...
(see _simple_table.csv_)
`ascii.read()` will return a `Table` object:
```
ascii.read("simple_table.csv")
tbl = ascii.read("simple_table.csv")
print(tbl)
```
The header names are extracted from the top of the file, and the delimiter is inferred from the rest of the file -- amazing!
We can access the columns directly using their names as 'keys' of the table:
```
tbl["ra"]
tbl[-1]
```
If we want to then convert the first RA (as a sexagesimal angle) to
decimal degrees, for example, we can pluck out the first (0th) item in
the column and use the `coordinates` subpackage to parse the string:
```
import astropy.coordinates as coord
import astropy.units as u
first_row = tbl[0] # get the first (0th) row
ra = coord.Angle(first_row["ra"], unit=u.hour) # create an Angle object
ra.degree # convert to degrees
```
Here is the arithmetic astropy is doing for you:
```
17*15+(51/60)*15
```
Now let's look at a case where this breaks, and we have to specify some
more options to the `read()` function. Our data may look a bit messier:
,,,,2MASS Photometry,,,,,,WISE Photometry,,,,,,,,Spectra,,,,Astrometry,,,,,,,,,,,
Name,Designation,RA,Dec,Jmag,J_unc,Hmag,H_unc,Kmag,K_unc,W1,W1_unc,W2,W2_unc,W3,W3_unc,W4,W4_unc,Spectral Type,Spectra (FITS),Opt Spec Refs,NIR Spec Refs,pm_ra (mas),pm_ra_unc,pm_dec (mas),pm_dec_unc,pi (mas),pi_unc,radial velocity (km/s),rv_unc,Astrometry Refs,Discovery Refs,Group/Age,Note
,00 04 02.84 -64 10 35.6,1.01201,-64.18,15.79,0.07,14.83,0.07,14.01,0.05,13.37,0.03,12.94,0.03,12.18,0.24,9.16,null,L1γ,,Kirkpatrick et al. 2010,,,,,,,,,,,Kirkpatrick et al. 2010,,
PC 0025+04,00 27 41.97 +05 03 41.7,6.92489,5.06,16.19,0.09,15.29,0.10,14.96,0.12,14.62,0.04,14.14,0.05,12.24,null,8.89,null,M9.5β,,Mould et al. 1994,,0.0105,0.0004,-0.0008,0.0003,,,,,Faherty et al. 2009,Schneider et al. 1991,,,00 32 55.84 -44 05 05.8,8.23267,-44.08,14.78,0.04,13.86,0.03,13.27,0.04,12.82,0.03,12.49,0.03,11.73,0.19,9.29,null,L0γ,,Cruz et al. 2009,,0.1178,0.0043,-0.0916,0.0043,38.4,4.8,,,Faherty et al. 2012,Reid et al. 2008,,
...
(see _Young-Objects-Compilation.csv_)
If we try to just use `ascii.read()` on this data, it fails to parse the names out and the column names become `col` followed by the number of the column:
```
tbl = ascii.read("Young-Objects-Compilation.csv")
tbl.colnames
tbl
```
What happened? The column names are just `col1`, `col2`, etc., the
default names if `ascii.read()` is unable to parse out column
names. We know it failed to read the column names, but also notice
that the first row of data are strings -- something else went wrong!
```
tbl[0]
```
A few things are causing problems here. First, there are two header
lines in the file and the header lines are not denoted by comment
characters. The first line is actually some meta data that we don't
care about, so we want to skip it. We can get around this problem by
specifying the `header_start` keyword to the `ascii.read()` function.
This keyword argument specifies the index of the row in the text file
to read the column names from:
```
tbl = ascii.read("Young-Objects-Compilation.csv", header_start=1)
tbl.colnames
```
Great! Now the columns have the correct names, but there is still a
problem: all of the columns have string data types, and the column
names are still included as a row in the table. This is because by
default the data are assumed to start on the second row (index=1).
We can specify `data_start=2` to tell the reader that the data in
this file actually start on the 3rd (index=2) row:
```
tbl = ascii.read("Young-Objects-Compilation.csv", header_start=1, data_start=2)
```
Some of the columns have missing data, for example, some of the `RA` values are missing (denoted by -- when printed):
```
print(tbl['RA'])
```
This is called a __Masked column__ because some missing values are
masked out upon display. If we want to use this numeric data, we have
to tell `astropy` what to fill the missing values with. We can do this
with the `.filled()` method. For example, to fill all of the missing
values with `NaN`'s:
```
tbl['RA'].filled(np.nan)
```
Let's recap what we've done so far, then make some plots with the
data. Our data file has an extra line above the column names, so we
use the `header_start` keyword to tell it to start from line 1 instead
of line 0 (remember Python is 0-indexed!). We then had to specify
that the data starts on line 2 using the `data_start`
keyword. Finally, we note some columns have missing values.
```
data = ascii.read("Young-Objects-Compilation.csv", header_start=1, data_start=2)
```
Now that we have our data loaded, let's plot a color-magnitude diagram.
Here we simply make a scatter plot of the J-K color on the x-axis
against the J magnitude on the y-axis. We use a trick to flip the
y-axis `plt.ylim(reversed(plt.ylim()))`. Called with no arguments,
`plt.ylim()` will return a tuple with the axis bounds,
e.g. (0,10). Calling the function _with_ arguments will set the limits
of the axis, so we simply set the limits to be the reverse of whatever they
were before. Using this `pylab`-style plotting is convenient for
making quick plots and interactive use, but is not great if you need
more control over your figures.
```
plt.scatter(data["Jmag"] - data["Kmag"], data["Jmag"]) # plot J-K vs. J
plt.ylim(reversed(plt.ylim())) # flip the y-axis
plt.xlabel("$J-K_s$", fontsize=20)
plt.ylabel("$J$", fontsize=20)
```
As a final example, we will plot the angular positions from the
catalog on a 2D projection of the sky. Instead of using `pylab`-style
plotting, we'll take a more object-oriented approach. We'll start by
creating a `Figure` object and adding a single subplot to the
figure. We can specify a projection with the `projection` keyword; in
this example we will use a Mollweide projection. Unfortunately, it is
highly non-trivial to make the matplotlib projection defined this way
follow the celestial convention of longitude/RA increasing to the left.
The axis object, `ax`, knows to expect angular coordinate
values. An important fact is that it expects the values to be in
_radians_, and it expects the azimuthal angle values to be between
(-180º,180º). This is (currently) not customizable, so we have to
coerce our RA data to conform to these rules! `astropy` provides a
coordinate class for handling angular values, `astropy.coordinates.Angle`.
We can convert our column of RA values to radians, and wrap the
angle bounds using this class.
```
ra = coord.Angle(data['RA'].filled(np.nan)*u.degree)
ra = ra.wrap_at(180*u.degree)
dec = coord.Angle(data['Dec'].filled(np.nan)*u.degree)
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(111, projection="mollweide")
ax.scatter(ra.radian, dec.radian)
```
By default, matplotlib will add degree tick labels, so let's change the
horizontal (x) tick labels to be in units of hours, and display a grid:
```
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(111, projection="mollweide")
ax.scatter(ra.radian, dec.radian)
ax.set_xticklabels(['14h','16h','18h','20h','22h','0h','2h','4h','6h','8h','10h'])
ax.grid(True)
```
We can save this figure as a PDF using the `savefig` function:
```
fig.savefig("map.pdf")
```
## Exercises
Make the same map as just above, but color the points by the `'Kmag'` column of the table.
```
```
Try making the maps again, but with each of the following projections: `aitoff`, `hammer`, `lambert`, and `None` (which is the same as not giving any projection). Do any of them make the data seem easier to understand?
```
```
---
# Distributed Data Parallel MNIST Training with PyTorch and SMDataParallel
## Background
SMDataParallel is a new capability in Amazon SageMaker that lets you train deep learning models faster and more cheaply. SMDataParallel is a distributed data parallel training framework for TensorFlow 2, PyTorch, and MXNet.
This notebook example shows how to use SMDataParallel with PyTorch on SageMaker, using the MNIST dataset.
For more information, see the resources below:
1. [PyTorch in SageMaker](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html)
2. [SMDataParallel PyTorch API Specification](https://sagemaker.readthedocs.io/en/stable/api/training/smd_data_parallel_pytorch.html)
3. [Getting started with SMDataParallel on SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/distributed-training.html)
**Note:** This example requires SageMaker Python SDK v2.X.
### Dataset
This example uses the MNIST dataset. MNIST is a widely used dataset for handwritten digit classification. It consists of 70,000 labeled, 28x28-pixel grayscale images of handwritten digits, split into 60,000 training images and 10,000 test images across 10 classes (the digits 0 through 9).
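A quick arithmetic sanity check of those dataset dimensions (plain Python, no downloads involved):

```python
train_images, test_images = 60_000, 10_000
height, width = 28, 28

total = train_images + test_images   # 70,000 labeled images in all
pixels_per_image = height * width    # 784 features per image when flattened
print(total, pixels_per_image)
```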
### SageMaker execution roles
The IAM role ARN is used to grant training and hosting access to your data. See [Amazon SageMaker Roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) for how to create one. If you need more than one role for notebook instances, training, and hosting, replace `sagemaker.get_execution_role()` with the appropriate full IAM role ARN string.
```
!pip install sagemaker --upgrade
import sagemaker
sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()
```
## Model Training with SMDataParallel
### Training script
The MNIST dataset is downloaded using the `torchvision.datasets` PyTorch module. You can see how this is done in the `train_pytorch_smdataparallel_mnist.py` training script printed by the next code cell.
The training script provides the code needed for distributed data parallel (DDP) training with SMDataParallel. It is very similar to a PyTorch training script you could run outside of SageMaker, but modified to run with SMDataParallel. SMDataParallel's PyTorch client provides an alternative to PyTorch's native DDP. For details on using SMDataParallel's DDP in a native PyTorch script, see the Getting Started with SMDataParallel tutorials.
```
!pygmentize code/train_pytorch_smdataparallel_mnist.py
```
### Estimator function options
In the following code block, you can update the estimator function to use a different instance type, instance count, and distribution strategy. You also pass in the training script you reviewed in the previous code cell.
**Instance type**
SMDataParallel supports model training on SageMaker with the following instance types only:
1. ml.p3.16xlarge
1. ml.p3dn.24xlarge [recommended]
1. ml.p4d.24xlarge [recommended]
**Instance count**
To get the best performance and the most out of SMDataParallel, you should use at least 2 instances, but you can also use 1 for testing this example.
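The reason the instance count matters: with data parallel training, the effective global batch size scales with the total number of GPUs. A rough sketch of the arithmetic (an ml.p3.16xlarge does have 8 V100 GPUs; the per-GPU batch size of 64 is an assumed example value):

```python
gpus_per_instance = 8      # an ml.p3.16xlarge has 8 V100 GPUs
instance_count = 2         # as configured in the estimator below
per_gpu_batch_size = 64    # assumed example value, set in the training script

global_batch_size = gpus_per_instance * instance_count * per_gpu_batch_size
print(global_batch_size)  # 1024
```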
**Distribution strategy**
To use DDP mode, you update the `distribution` strategy and set it to use `smdistributed dataparallel`.
```
from sagemaker.pytorch import PyTorch
estimator = PyTorch(base_job_name='pytorch-smdataparallel-mnist',
source_dir='code',
entry_point='train_pytorch_smdataparallel_mnist.py',
role=role,
framework_version='1.6.0',
py_version='py36',
# For training with multinode distributed training, set this count. Example: 2
instance_count=2,
# For training with p3dn instance use - ml.p3dn.24xlarge
instance_type= 'ml.p3.16xlarge',
sagemaker_session=sagemaker_session,
# Training using SMDataParallel Distributed Training Framework
distribution={'smdistributed':{
'dataparallel':{
'enabled': True
}
}
},
debugger_hook_config=False)
```
```
%%time
estimator.fit()
```
## Next steps
Now that you have a trained model, you can deploy an endpoint to host it. After you deploy the endpoint, you can test it with inference requests. The following cell stores the model_data variable to be used with the inference notebook.
```
model_data = estimator.model_data
print("Storing {} as model_data".format(model_data))
%store model_data
```
---
Based on: https://medium.com/swlh/transformer-fine-tuning-for-sentiment-analysis-c000da034bb5
```
!pip install pytorch-transformers
!pip install pytorch-ignite
from google.colab import drive
drive.mount('/content/drive')
# !cp drive/'My Drive'/cs231n-project/train_captions_v2.pkl .
# !cp drive/'My Drive'/cs231n-project/val_captions_v2.pkl .
!cp drive/'My Drive'/cs231n-project/train_captions_pretrained_v1.pkl .
!cp drive/'My Drive'/cs231n-project/val_captions_pretrained_v1.pkl .
import pickle
with open("train_captions_pretrained_v1.pkl", "rb") as f:
train_captions_obj = pickle.load(f)
import pickle
with open("val_captions_pretrained_v1.pkl", "rb") as f:
val_captions_obj = pickle.load(f)
val_captions_obj["val_captions"][0][0]
import pandas as pd
train_captions = []
for tc in train_captions_obj["train_captions"]:
# For each video
sentences = []
for t in tc:
# For each frame, find the best caption (caption with the max log prob)
best_caption = ""
best_log_prob = -100
for candidate in t:
if candidate["log_prob"] > best_log_prob:
best_log_prob = candidate["log_prob"]
best_caption = candidate["caption"]
best_caption = best_caption.replace(" .", "")
sentences.append(best_caption)
break
train_captions.append("".join(sentences))
df_train = pd.DataFrame(list(zip(train_captions_obj["train_categories"], train_captions, train_captions_obj["train_videos"])), columns =['label', 'text', 'vid_name'])
val_captions = []
for tc in val_captions_obj["val_captions"]:
# For each video
sentences = []
for t in tc:
# For each frame, find the best caption (caption with the max log prob)
best_caption = ""
best_log_prob = -100
for candidate in t:
if candidate["log_prob"] > best_log_prob:
best_log_prob = candidate["log_prob"]
best_caption = candidate["caption"]
best_caption = best_caption.replace(" .", "")
sentences.append(best_caption)
break
val_captions.append("".join(sentences))
df_val = pd.DataFrame(list(zip(val_captions_obj["val_categories"], val_captions, val_captions_obj["val_videos"])), columns =['label', 'text', 'vid_name'])
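# The per-frame "best caption" loop above is just an argmax over log
# probabilities; Python's max() with a key expresses the same selection in
# one line. A toy check with hypothetical candidates:
_candidates = [{"caption": "a dog runs", "log_prob": -2.3},
               {"caption": "a dog runs fast", "log_prob": -1.1}]
_best = max(_candidates, key=lambda c: c["log_prob"])
assert _best["caption"] == "a dog runs fast"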
df_train.head(100)
import sys
import os
import logging
from tqdm import tqdm_notebook as tqdm
logger = logging.getLogger()
# text and label column names
TEXT_COL = "text"
LABEL_COL = "label"
# path to data
DATA_DIR = os.path.abspath('./data')
# list of labels
labels = list(set(df_train[LABEL_COL].tolist()))
# labels to integers mapping
label2int = {label: i for i, label in enumerate(labels)}
label2int
import torch
from torch.utils.data import TensorDataset, random_split, DataLoader
import numpy as np
import warnings
from tqdm import tqdm_notebook as tqdm
from typing import List, Tuple
NUM_MAX_POSITIONS = 256
BATCH_SIZE = 32
class TextProcessor:
# special tokens for classification and padding
CLS = '[CLS]'
PAD = '[PAD]'
def __init__(self, tokenizer, label2id: dict, num_max_positions:int=512):
self.tokenizer=tokenizer
self.label2id = label2id
self.num_labels = len(label2id)
self.num_max_positions = num_max_positions
def process_example(self, example: Tuple[str, str]):
"Convert label (example[0]) to an integer and text (example[1]) to a sequence of token ids"
assert len(example) == 2
label, text = example[0], example[1]
assert isinstance(text, str)
tokens = self.tokenizer.tokenize(text)
# truncate if too long
if len(tokens) >= self.num_max_positions:
tokens = tokens[:self.num_max_positions-1]
ids = self.tokenizer.convert_tokens_to_ids(tokens) + [self.tokenizer.vocab[self.CLS]]
# pad if too short
else:
pad = [self.tokenizer.vocab[self.PAD]] * (self.num_max_positions-len(tokens)-1)
ids = self.tokenizer.convert_tokens_to_ids(tokens) + [self.tokenizer.vocab[self.CLS]] + pad
return np.array(ids, dtype='int64'), self.label2id[label]
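# Sanity sketch of the truncation/padding invariant in process_example above:
# the returned id sequence always has exactly num_max_positions entries, with
# the [CLS] id appended after the (possibly truncated) tokens. Toy ids assumed:
def _toy_process(token_ids, cls_id=101, pad_id=0, num_max=8):
    if len(token_ids) >= num_max:
        return token_ids[:num_max - 1] + [cls_id]
    return token_ids + [cls_id] + [pad_id] * (num_max - len(token_ids) - 1)

assert len(_toy_process(list(range(20)))) == 8             # truncated long input
assert _toy_process([5, 6]) == [5, 6, 101, 0, 0, 0, 0, 0]  # padded short input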
# download the 'bert-base-cased' tokenizer
from pytorch_transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-cased', do_lower_case=False)
# initialize a TextProcessor
processor = TextProcessor(tokenizer, label2int, num_max_positions=NUM_MAX_POSITIONS)
from collections import namedtuple
import torch
LOG_DIR = "./logs/"
CACHE_DIR = "./cache/"
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
FineTuningConfig = namedtuple('FineTuningConfig',
field_names="num_classes, dropout, init_range, batch_size, lr, max_norm,"
"n_warmup, valid_pct, gradient_acc_steps, device, log_dir")
finetuning_config = FineTuningConfig(
3, 0.05, 0.02, BATCH_SIZE, 6.5e-5, 1.0,
10, 0.1, 2, device, LOG_DIR)
finetuning_config
from multiprocessing import cpu_count
from concurrent.futures import ProcessPoolExecutor
from itertools import repeat
num_cores = cpu_count()
def process_row(processor, row):
return processor.process_example((row[1][LABEL_COL], row[1][TEXT_COL]))
def create_dataloader(df: pd.DataFrame,
processor: TextProcessor,
batch_size: int = 32,
shuffle: bool = True,
valid_pct: float = None,
text_col: str = "text",
label_col: str = "label"):
"Process rows in `df` with `processor` and return a DataLoader"
with ProcessPoolExecutor(max_workers=num_cores) as executor:
result = list(
tqdm(executor.map(process_row,
repeat(processor),
df.iterrows(),
chunksize=len(df) // 10),
desc=f"Processing {len(df)} examples on {num_cores} cores",
total=len(df)))
features = [r[0] for r in result]
labels = [r[1] for r in result]
dataset = TensorDataset(torch.tensor(features, dtype=torch.long),
torch.tensor(labels, dtype=torch.long))
if valid_pct is not None:
valid_size = int(valid_pct * len(df))
train_size = len(df) - valid_size
valid_dataset, train_dataset = random_split(dataset,
[valid_size, train_size])
valid_loader = DataLoader(valid_dataset,
batch_size=batch_size,
shuffle=False)
train_loader = DataLoader(train_dataset,
batch_size=batch_size,
shuffle=True)
return train_loader, valid_loader
data_loader = DataLoader(dataset,
batch_size=batch_size,
shuffle=shuffle)
return data_loader
train_dl = create_dataloader(df_train, processor,
batch_size=finetuning_config.batch_size,
valid_pct=None)
valid_dl = create_dataloader(df_val, processor,
batch_size=finetuning_config.batch_size,
valid_pct=None)
import torch.nn as nn
def get_num_params(model):
mp = filter(lambda p: p.requires_grad, model.parameters())
return sum(np.prod(p.size()) for p in mp)
class Transformer(nn.Module):
"Adopted from https://github.com/huggingface/naacl_transfer_learning_tutorial"
def __init__(self, embed_dim, hidden_dim, num_embeddings, num_max_positions, num_heads, num_layers, dropout, causal):
super().__init__()
self.causal = causal
self.tokens_embeddings = nn.Embedding(num_embeddings, embed_dim)
self.position_embeddings = nn.Embedding(num_max_positions, embed_dim)
self.dropout = nn.Dropout(dropout)
self.attentions, self.feed_forwards = nn.ModuleList(), nn.ModuleList()
self.layer_norms_1, self.layer_norms_2 = nn.ModuleList(), nn.ModuleList()
for _ in range(num_layers):
self.attentions.append(nn.MultiheadAttention(embed_dim, num_heads, dropout=dropout))
self.feed_forwards.append(nn.Sequential(nn.Linear(embed_dim, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, embed_dim)))
self.layer_norms_1.append(nn.LayerNorm(embed_dim, eps=1e-12))
self.layer_norms_2.append(nn.LayerNorm(embed_dim, eps=1e-12))
def forward(self, x, padding_mask=None):
""" x has shape [seq length, batch], padding_mask has shape [batch, seq length] """
positions = torch.arange(len(x), device=x.device).unsqueeze(-1)
h = self.tokens_embeddings(x)
h = h + self.position_embeddings(positions).expand_as(h)
h = self.dropout(h)
attn_mask = None
if self.causal:
attn_mask = torch.full((len(x), len(x)), -float('Inf'), device=h.device, dtype=h.dtype)
attn_mask = torch.triu(attn_mask, diagonal=1)
for layer_norm_1, attention, layer_norm_2, feed_forward in zip(self.layer_norms_1, self.attentions,
self.layer_norms_2, self.feed_forwards):
h = layer_norm_1(h)
x, _ = attention(h, h, h, attn_mask=attn_mask, need_weights=False, key_padding_mask=padding_mask)
x = self.dropout(x)
h = x + h
h = layer_norm_2(h)
x = feed_forward(h)
x = self.dropout(x)
h = x + h
return h
class TransformerWithClfHead(nn.Module):
"Adopted from https://github.com/huggingface/naacl_transfer_learning_tutorial"
def __init__(self, config, fine_tuning_config):
super().__init__()
self.config = fine_tuning_config
self.transformer = Transformer(config.embed_dim, config.hidden_dim, config.num_embeddings,
config.num_max_positions, config.num_heads, config.num_layers,
fine_tuning_config.dropout, causal=not config.mlm)
self.classification_head = nn.Linear(config.embed_dim, fine_tuning_config.num_classes)
self.apply(self.init_weights)
def init_weights(self, module):
if isinstance(module, (nn.Linear, nn.Embedding, nn.LayerNorm)):
module.weight.data.normal_(mean=0.0, std=self.config.init_range)
if isinstance(module, (nn.Linear, nn.LayerNorm)) and module.bias is not None:
module.bias.data.zero_()
def forward(self, x, clf_tokens_mask, clf_labels=None, padding_mask=None):
hidden_states = self.transformer(x, padding_mask)
clf_tokens_states = (hidden_states * clf_tokens_mask.unsqueeze(-1).float()).sum(dim=0)
clf_logits = self.classification_head(clf_tokens_states)
if clf_labels is not None:
loss_fct = nn.CrossEntropyLoss(ignore_index=-1)
loss = loss_fct(clf_logits.view(-1, clf_logits.size(-1)), clf_labels.view(-1))
return clf_logits, loss
return clf_logits
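# The clf_tokens_states line above is masked-sum pooling over the sequence
# dimension: multiplying by the 0/1 mask keeps only the hidden state at the
# [CLS] position. A 1-D toy version with hypothetical values:
_hidden = [[1.0], [2.0], [7.0]]   # seq_len = 3, embed_dim = 1
_mask = [0, 0, 1]                 # [CLS] is the last token
_pooled = sum(h[0] * m for h, m in zip(_hidden, _mask))
assert _pooled == 7.0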
from pytorch_transformers import cached_path
# download pre-trained model and config
state_dict = torch.load(cached_path("https://s3.amazonaws.com/models.huggingface.co/"
"naacl-2019-tutorial/model_checkpoint.pth"), map_location='cpu')
config = torch.load(cached_path("https://s3.amazonaws.com/models.huggingface.co/"
"naacl-2019-tutorial/model_training_args.bin"))
# init model: Transformer base + classifier head
model = TransformerWithClfHead(config=config, fine_tuning_config=finetuning_config).to(finetuning_config.device)
incompatible_keys = model.load_state_dict(state_dict, strict=False)
print(f"Parameters discarded from the pretrained model: {incompatible_keys.unexpected_keys}")
print(f"Parameters added in the model: {incompatible_keys.missing_keys}")
NUM_EPOCHS = 10
from ignite.engine import Engine, Events
from ignite.metrics import RunningAverage, Accuracy
from ignite.handlers import ModelCheckpoint
from ignite.contrib.handlers import CosineAnnealingScheduler, PiecewiseLinear, create_lr_scheduler_with_warmup, ProgressBar
import torch.nn.functional as F
from pytorch_transformers.optimization import AdamW
# Bert optimizer
optimizer = AdamW(model.parameters(), lr=finetuning_config.lr, correct_bias=False)
def update(engine, batch):
"update function for training"
model.train()
inputs, labels = (t.to(finetuning_config.device) for t in batch)
inputs = inputs.transpose(0, 1).contiguous() # [S, B]
_, loss = model(inputs,
clf_tokens_mask = (inputs == tokenizer.vocab[processor.CLS]),
clf_labels=labels)
loss = loss / finetuning_config.gradient_acc_steps
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), finetuning_config.max_norm)
if engine.state.iteration % finetuning_config.gradient_acc_steps == 0:
optimizer.step()
optimizer.zero_grad()
return loss.item()
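# Gradient accumulation above scales the loss by 1/gradient_acc_steps and only
# calls optimizer.step() every gradient_acc_steps iterations, so gradients from
# consecutive mini-batches add up to one effective batch of
# batch_size * gradient_acc_steps. Toy trace of which iterations step:
_acc_steps = 2  # matches gradient_acc_steps in finetuning_config
_stepped = [i for i in range(1, 9) if i % _acc_steps == 0]
assert _stepped == [2, 4, 6, 8]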
def inference(engine, batch):
"update function for evaluation"
model.eval()
with torch.no_grad():
batch, labels = (t.to(finetuning_config.device) for t in batch)
inputs = batch.transpose(0, 1).contiguous()
logits = model(inputs,
clf_tokens_mask = (inputs == tokenizer.vocab[processor.CLS]),
padding_mask = (batch == tokenizer.vocab[processor.PAD]))
return logits, labels
def predict(model, tokenizer, int2label, input="test"):
"predict `input` with `model`"
tok = tokenizer.tokenize(input)
ids = tokenizer.convert_tokens_to_ids(tok) + [tokenizer.vocab['[CLS]']]
tensor = torch.tensor(ids, dtype=torch.long)
tensor = tensor.to(device)
tensor = tensor.reshape(1, -1)
tensor_in = tensor.transpose(0, 1).contiguous() # [S, 1]
logits = model(tensor_in,
clf_tokens_mask = (tensor_in == tokenizer.vocab['[CLS]']),
padding_mask = (tensor == tokenizer.vocab['[PAD]']))
val, _ = torch.max(logits, 0)
val = F.softmax(val, dim=0).detach().cpu().numpy()
return {int2label[val.argmax()]: val.max(),
int2label[val.argmin()]: val.min()}
trainer = Engine(update)
evaluator = Engine(inference)
# add metric to evaluator
Accuracy().attach(evaluator, "accuracy")
# add evaluator to trainer: eval on valid set after each epoch
@trainer.on(Events.EPOCH_COMPLETED)
def log_validation_results(engine):
evaluator.run(train_dl)
print(f"epoch: {engine.state.epoch} train acc: {100*evaluator.state.metrics['accuracy']}")
evaluator.run(valid_dl)
print(f"epoch: {engine.state.epoch} val acc: {100*evaluator.state.metrics['accuracy']}")
# lr schedule: linearly warm-up to lr and then to zero
scheduler = PiecewiseLinear(optimizer, 'lr', [(0, 0.0), (finetuning_config.n_warmup, finetuning_config.lr),
(len(train_dl)*NUM_EPOCHS, 0.0)])
trainer.add_event_handler(Events.ITERATION_STARTED, scheduler)
# add progressbar with loss
RunningAverage(output_transform=lambda x: x).attach(trainer, "loss")
ProgressBar(persist=True).attach(trainer, metric_names=['loss'])
# save checkpoints and finetuning config
checkpoint_handler = ModelCheckpoint(finetuning_config.log_dir, 'sentiment-transformer',
n_saved=None, require_empty=False)
trainer.add_event_handler(Events.EPOCH_COMPLETED, checkpoint_handler, {'sentiment-transformer': model})
int2label = {i:label for label,i in label2int.items()}
# save metadata
torch.save({
"config": config,
"config_ft": finetuning_config,
"int2label": int2label
}, os.path.join(finetuning_config.log_dir, "sentiment-transformer.metadata.bin"))
# fit the model on train_dl
trainer.run(train_dl, max_epochs=NUM_EPOCHS)
!ls -lht logs
print(model)
```
### Restore the best checkpoint
```
best_metadata = torch.load("logs/sentiment-transformer.metadata.bin")
print(best_metadata)
best_model = TransformerWithClfHead(best_metadata["config"], best_metadata["config_ft"]).to(best_metadata["config_ft"].device)
int2label = best_metadata["int2label"]
best_model.load_state_dict(torch.load("logs/sentiment-transformer_sentiment-transformer_504.pth"))
from google.colab import auth
auth.authenticate_user()
project_id = 'calm-depot-274104'
!gcloud config set project {project_id}
!gsutil ls
bucket_name = 'cs231n-emotiw'
!gsutil -m cp -r logs/sentiment-transformer_sentiment-transformer_504.pth gs://{bucket_name}/models/
```
### Make sure evaluation works on restored model
```
actual_labels = []
pred_labels = []
for batch in valid_dl:
actual_labels.extend(batch[1].tolist())
best_model.eval()
with torch.no_grad():
batch, labels = (t.to(finetuning_config.device) for t in batch)
inputs = batch.transpose(0, 1).contiguous()
logits = best_model(inputs,
clf_tokens_mask = (inputs == tokenizer.vocab[processor.CLS]),
padding_mask = (batch == tokenizer.vocab[processor.PAD]))
pred_labels.extend(logits.argmax(axis=1).tolist())
correct = 0
for i in range(len(pred_labels)):
if pred_labels[i] == actual_labels[i]:
correct += 1
print(f"Accuracy = {correct/len(pred_labels)}")
int2label
from sklearn import metrics
import pandas as pd
import seaborn as sn
import matplotlib.pyplot as plt
import tensorflow
actual_labels_final = [int(int2label[x]) - 1 for x in actual_labels]
pred_labels_final = [int(int2label[x]) - 1 for x in pred_labels]
# Note the specific ordering
classes=['Pos', 'Neu', 'Neg']
con_mat = tensorflow.math.confusion_matrix(labels=actual_labels_final, predictions=pred_labels_final).numpy()
con_mat_norm = np.around(con_mat.astype('float') / con_mat.sum(axis=1)[:, np.newaxis], decimals=2)
con_mat_df = pd.DataFrame(con_mat_norm,
index = classes,
columns = classes)
figure = plt.figure(figsize=(11, 9))
plt.title("Image Captioning Confusion Matrix")
sn.heatmap(con_mat_df, annot=True,cmap=plt.cm.Blues)
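# The row normalization above turns raw confusion counts into per-class
# recall: each row is divided by its own sum. Toy check with assumed counts:
_row = [8, 1, 1]
_norm = [round(x / sum(_row), 2) for x in _row]
assert _norm == [0.8, 0.1, 0.1]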
```
### Remove last classifier head
```
class Identity(nn.Module):
def __init__(self):
super(Identity, self).__init__()
def forward(self, x):
return x
best_model.classification_head = Identity()
print(best_model)
int2label
```
### Create the embeddings for the val
```
valid_dl_eval = create_dataloader(df_val, processor,
batch_size=finetuning_config.batch_size,
valid_pct=None,
shuffle=False)
pred_labels = torch.zeros((len(df_val), 410))
i = 0
for batch in valid_dl_eval:
actual_labels.extend(batch[1].tolist())
best_model.eval()
with torch.no_grad():
batch, labels = (t.to(finetuning_config.device) for t in batch)
inputs = batch.transpose(0, 1).contiguous()
logits = best_model(inputs,
clf_tokens_mask = (inputs == tokenizer.vocab[processor.CLS]),
padding_mask = (batch == tokenizer.vocab[processor.PAD]))
begin_index = i * 32
end_index = begin_index + len(labels)
pred_labels[begin_index:end_index, :] = logits
i += 1
pred_labels.shape
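# The begin_index/end_index bookkeeping above packs each batch of logits into
# a preallocated matrix, with the final (possibly smaller) batch sized by
# len(labels). Toy check with 70 examples and batch size 32 (assumed numbers):
_n, _bs = 70, 32
_spans = [(_i * _bs, min(_i * _bs + _bs, _n)) for _i in range((_n + _bs - 1) // _bs)]
assert _spans == [(0, 32), (32, 64), (64, 70)]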
val_output = {
"embedding": pred_labels.numpy(),
"vid_names": df_val["vid_name"].tolist()
}
import pickle
with open("transformer_val_embeddings.pkl", "wb") as f:
pickle.dump(val_output, f)
!ls -lh
```
### Create the embeddings for the train
```
train_dl_eval = create_dataloader(df_train, processor,
batch_size=finetuning_config.batch_size,
valid_pct=None,
shuffle=False)
pred_labels = torch.zeros((len(df_train), 410))
i = 0
for batch in train_dl_eval:
actual_labels.extend(batch[1].tolist())
best_model.eval()
with torch.no_grad():
batch, labels = (t.to(finetuning_config.device) for t in batch)
inputs = batch.transpose(0, 1).contiguous()
logits = best_model(inputs,
clf_tokens_mask = (inputs == tokenizer.vocab[processor.CLS]),
padding_mask = (batch == tokenizer.vocab[processor.PAD]))
begin_index = i * 32
end_index = begin_index + len(labels)
pred_labels[begin_index:end_index, :] = logits
i += 1
pred_labels.shape
train_output = {
"embedding": pred_labels.numpy(),
"vid_names": df_train["vid_name"].tolist()
}
import pickle
with open("transformer_train_embeddings.pkl", "wb") as f:
pickle.dump(train_output, f)
!ls -lh
!cp transformer_train_embeddings.pkl drive/'My Drive'/cs231n-project/
!cp transformer_val_embeddings.pkl drive/'My Drive'/cs231n-project/
```