# Assignment 1
# Part One: Network Models
## 1. Watts-Strogatz Networks
* Use `nx.watts_strogatz_graph` to generate 3 graphs with 500 nodes each, average degree = 4, and rewiring probability $p = 0$, $0.1$, and $1$. Calculate the average shortest path length $\langle d \rangle$ for each one. Describe what happens to the network when $p = 1$.
The 3 graphs with the characteristics specified are generated with the code below.
```
import networkx as nx
# Generate three graphs with different rewiring probabilities
gr = nx.watts_strogatz_graph(500, 4, 0)
gr1 = nx.watts_strogatz_graph(500, 4, 0.1)
gr2 = nx.watts_strogatz_graph(500, 4, 1)
```
After the creation of the graphs, the average shortest path for each of them is calculated as follows.
```
# Calculate the average shortest path length for each graph
print(nx.average_shortest_path_length(gr))
print(nx.average_shortest_path_length(gr1))
print(nx.average_shortest_path_length(gr2))
```
The results show that the larger $p$ is, the smaller the average shortest path length becomes.
When $p = 0$, each node is connected only to its immediate neighbours, so reaching distant nodes takes many hops. When $p = 1$, every edge has been rewired at random, which, as the Watts-Strogatz model predicts, introduces shortcuts between parts of the ring that were initially far apart, making it much easier to travel between any two nodes.
In conclusion, the larger $p$ is, the more random the network becomes and the shorter the distances between nodes get, exactly as the Watts-Strogatz model states.
Let's understand the behavior of the WS model as *p* is increased in more detail.
Generate 50 networks with $N = 500$, $\langle k \rangle = 4$, for each of $p = \{0, 0.01, 0.03, 0.05, 0.1, 0.2\}$. Calculate the average of $\langle d \rangle$ as well as the standard deviation over the 50 networks, to create a plot that shows how the path length decreases very quickly with only a small fraction of re-wiring. Use the standard deviation to add error bars to the plot.
The $\langle d \rangle$ of each of the $50 \times 6$ networks generated is calculated with the code below.
```
j = [0, 0.01, 0.03, 0.05, 0.1, 0.2]
myDict = {}
for i in j:
    myList = []
    for z in range(0, 50):
        # Generate a new graph with the given rewiring probability
        gr = nx.watts_strogatz_graph(500, 4, i)
        # Save the average shortest path length of the generated graph
        myList.append(nx.average_shortest_path_length(gr))
    myDict[i] = myList
```
The average of $\langle d \rangle$ as well as the standard deviation for each of the networks generated is calculated.
```
import numpy as np
means = []
stds = []
for i in j:
    # Mean
    means.append(np.mean(myDict[i]))
    # Standard deviation
    stds.append(np.std(myDict[i]))
```
The results calculated are plotted below.
```
import matplotlib.pyplot as pyplot
import matplotlib.pyplot as plt  # both aliases are used in later cells
pyplot.errorbar(j, means, yerr=stds, marker='o')
pyplot.ylabel('avg shortest path length')
pyplot.xlabel('rewiring probability')
pyplot.show()
```
From the plot, it can be seen that the path length decreases very quickly with only a small fraction of re-wiring.
## 2. The Barabasi-Albert Model
We're going to create our own Barabasi-Albert model (a special case) right in a `notebook`. Follow the recipe below for success.
* Create a 100 node BA network using a BA model that you've coded on your own (so don't use the built-in NetworkX function, but the one you created during week 3). And plot it using NetworkX.
The code used to create and display the specified network is shown below.
```
import itertools
from random import choice
# Create a graph consisting of a single link
G = nx.Graph()
G.add_edge(0, 1)
# Add new nodes and links until the network has 100 nodes
for i in range(2, 100):
    # Flatten the edge list so nodes appear proportionally to their degree
    flattenList = list(itertools.chain(*G.edges))
    # Choose one endpoint at random (preferential attachment)
    random_node = choice(flattenList)
    # Add the new link
    G.add_edge(i, random_node)
# Plot the generated network
nx.draw(G, node_size=10, with_labels=False)
```
* Now create a 5000 node network.
* What's the maximum and minimum degree?
* Now, bin the degree distribution, for example using `numpy.histogram`.
* Plot the distribution. Plot it with both linear and log-log axes.
First, the 5000 node network has been created reusing the 100 node BA network generated during the previous exercise.
```
for i in range(100, 5000):
    flattenList = list(itertools.chain(*G.edges))
    random_node = choice(flattenList)
    G.add_edge(i, random_node)
options = {
    'node_color': 'red',
    'node_size': 10,
    'width': 1,
}
plt.figure(3, figsize=(12, 12))
# Plot the generated network
nx.draw(G, with_labels=False, **options)
plt.show()
```
For the generated network, the maximum degree is large (over 100 in our run) while the minimum degree is 1, which means that no node is isolated: every node is connected to at least one other.
```
degrees = [val for (node, val) in G.degree()]
print("Maximum degree: "+str(max(degrees)))
print("Minimum degree: "+str(min(degrees)))
```
The degree distribution is plotted below.
```
import numpy as np
# Bin the degree distribution
histo, bin_edges = np.histogram(degrees, bins=range(max(degrees) + 1))
bincenters = 0.5 * (bin_edges[1:] + bin_edges[:-1])
# Plot the distribution on linear axes
pyplot.plot(bincenters, histo, '.')
pyplot.ylabel('count')
pyplot.xlabel('k')
pyplot.show()
# Plot the distribution on log-log axes
pyplot.loglog(bincenters, histo, '.')
pyplot.ylabel('count')
pyplot.xlabel('k')
pyplot.show()
```
## 3. Power-laws and the friendship paradox
The next step is to explore the [Friendship paradox](https://en.wikipedia.org/wiki/Friendship_paradox). This paradox states that, on average, almost everyone has fewer friends than their friends have.
This may contradict most people's intuition, but it is a direct consequence of a scale-free network that follows a power law, such as the friendship/social network between people.
The BA scale free model should be able to help us to depict that social network more accurately by introducing the elements of Growth and of Preferential Attachment.
Thus, the Friendship paradox will be validated by selecting nodes at random and comparing their degrees with the average degree of their neighbors.
The process followed is:
* Pick a node $i$ at random (e.g. use `random.choice`). Find its degree.
* Find its neighbors. Calculate their average degree.
* Compare the two numbers to check whether it is true that $i$'s friends (on average) have more friends than $i$.
* Repeat 1000 times and report how many out of those 1000 times the friendship paradox is true.
Below a node is picked at random and its degree is found.
Afterwards its neighbors are returned and their average degree is calculated. Finally the script prints True if the random node data validates the Friendship paradox and False otherwise.
```
# Choose a random node
flattenList = list(G.nodes)
random_node = choice(flattenList)
# Calculate the degree of the random node
nodDe = G.degree(random_node)
# Obtain all neighbors of the random node
neigh = [n for n in G.neighbors(random_node)]
# Calculate their degrees
neighDe = G.degree(neigh)
# Keep all degree values
values = [x[1] for x in neighDe]
# Calculate the mean neighbour degree (np.mean also handles a single value)
neighmean = np.mean(values)
print(nodDe < neighmean)
```
As the result shows, the mean degree of the randomly chosen node's neighbours is indeed larger than the node's own degree.
Now, the above procedure is repeated for 1000 randomly selected nodes and the number of times that the Friendship paradox has been validated is presented below.
```
count = 0
flattenList = list(G.nodes)
for i in range(1000):
    random_node = choice(flattenList)
    nodDe = G.degree(random_node)
    neigh = [n for n in G.neighbors(random_node)]
    neighDe = G.degree(neigh)
    values = [x[1] for x in neighDe]
    neighmean = np.mean(values)
    if nodDe < neighmean:
        count += 1
print("Friendship paradox has been true", count, "times")
```
As the result shows, the Friendship paradox holds for roughly 870 of the 1000 sampled nodes, i.e. about $87\%$ of the time.
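The paradox can also be checked at the network level: in any graph, the average degree of a randomly chosen neighbour equals $\langle k^2 \rangle / \langle k \rangle$, which is at least $\langle k \rangle$. A minimal sketch, using NetworkX's built-in BA generator as a stand-in for the graph built above:

```python
import networkx as nx
import numpy as np

# Stand-in BA graph; the notebook's own G could be used instead
G_check = nx.barabasi_albert_graph(5000, 1, seed=42)
degrees = np.array([d for _, d in G_check.degree()])
mean_k = degrees.mean()                            # <k>
mean_neighbour_k = (degrees ** 2).mean() / mean_k  # <k^2>/<k>
print(mean_k, mean_neighbour_k)  # the second is larger in any non-regular graph
```

The gap between the two numbers grows with the variance of the degree distribution, which is why the paradox is so pronounced in heavy-tailed networks.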
# Part Two: The network of American politics
This exercise assumes that you have already downloaded the wiki pages and created the directed network of members of Congress from Wikipedia. If you want to re-run our code, please note the `pd.read_csv()` paths and recreate the same file structure.
```
import pandas as pd
#Load all the information from the three csv files, previously downloaded
df = pd.read_csv('./data/H113.csv')
df['congress_number'] = 113
df_1 = pd.read_csv('./data/H114.csv')
df_1['congress_number'] = 114
df_2 = pd.read_csv('./data/H115.csv')
df_2['congress_number'] = 115
all_members = pd.concat([df, df_1, df_2]).reset_index(drop= True)
#all_members
```
* Plot the number of *members* of the House of Representatives over time. You choose whether you want to use a line chart or a bar chart. Is this development over time what you would expect? Why? Explain in your own words.
In the code snippets below, the number of members of the House of Representatives is plotted over time.
First, the members are grouped by the congress number of each representative, and the size of each group is computed.
```
groups = all_members.groupby('congress_number')
members_in_congress = groups.size()
#members_in_congress
```
Finally, the aforementioned results are plotted in a line chart to visually compare the values obtained.
```
import matplotlib.pyplot as plt
members_in_congress.plot()
plt.ylabel('members')
plt.xlabel('congress number')
#plt.xticks(members_in_congress.index.values, 1)
plt.show()
```
The obtained development over time is not the expected one. According to the [United States Congress Wikipedia page](https://en.wikipedia.org/wiki/United_States_Congress), the number of representatives is constant at 435, plus 6 non-voting delegates from Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, the U.S. Virgin Islands, and Washington, D.C., for a total of 441 representatives. These numbers are the subject of a recent article on the political repercussions they can have in a country with a growing population; you can read it [here](http://www.pewresearch.org/fact-tank/2018/05/31/u-s-population-keeps-growing-but-house-of-representatives-is-same-size-as-in-taft-era/).
Thus, it can be inferred that there is some issue with the data that has been collected and processed. There are two hypotheses:
On the one hand, the data may include congressmen and congresswomen who resigned or otherwise departed mid-term, inflating the counts.
On the other hand, the collection and parsing of the data may have been erroneous.
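The first hypothesis can be probed directly: if the per-congress counts exceed the official 441 seats, the surplus rows are candidates for mid-term turnover. A hedged sketch on toy data (in the notebook one would use the real `all_members` frame and the real seat total):

```python
import pandas as pd

# Toy stand-in for all_members (the real frame comes from the CSVs above);
# B2 and C2 represent hypothetical mid-term replacements
all_members_toy = pd.DataFrame({
    "WikiPageName": ["A", "B", "B2", "C", "C2", "D"],
    "congress_number": [113, 113, 113, 113, 114, 114],
})
per_congress = all_members_toy.groupby("congress_number").size()
expected_seats = 4  # stands in for the real 441
surplus = per_congress - expected_seats
print(surplus)  # positive values suggest members counted beyond the seat total
```

A complementary check is comparing `size()` with `nunique()` of the page names per congress, which would reveal duplicated entries rather than turnover.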
* How many members appear in all the three congresses? How many in two? How many in one? Plot your results using a histogram.
Below, the number of re-elected congressmen and congresswomen is found and the results are plotted.
Again, the members are first grouped, this time by the name of their Wikipedia page.
```
groups = all_members.groupby('WikiPageName')
```
Then, the size of each group, i.e. the number of congresses each representative was elected to, is computed and sorted.
```
times_in_congress = groups.size().sort_values()
```
Since *times_in_congress* is a Pandas Series, its data is grouped to find how many congressmen and congresswomen were members of one, two, or three congresses. The results are then plotted.
```
# times_in_congress is a Series, not a DataFrame
result = times_in_congress.groupby(times_in_congress).size()
result.plot(kind='bar')
plt.ylabel('# members')
plt.xlabel('times in congress')
plt.show()
```
Now, an answer is given to the following questions:
* Which states are more represented in the house of representatives?
* Which are less?
```
groups = all_members.groupby('State')
states = groups.size().sort_values()
states_rep_num_list = list(zip(states, states.index))
min_num_rep = states_rep_num_list[0][0]
max_num_rep = states_rep_num_list[-1][0]
print("Least represented states (Nr. repr.: " + str(min_num_rep) + "):")
for state in states_rep_num_list:
    if state[0] == min_num_rep:
        print(state[1])
    else:
        break
print("Most represented states (Nr. repr.: " + str(max_num_rep) + "):")
index = -1
while states_rep_num_list[index][0] == max_num_rep:
    print(states_rep_num_list[index][1])
    index -= 1
```
Just above we can see which states are most and least represented in the House of Representatives.
Finally, a bar chart showing the number of members per state is plotted.
```
states.plot(kind='bar')
plt.ylabel('# members')
plt.xlabel('states')
plt.show()
```
Now, an answer is given to the following question:
* How has the party composition of the house of representative changed over time? Plot your results.
This is achieved by first grouping by two variables: the Party and the Congress Number of each Representative.
```
party = all_members.groupby(['Party', 'congress_number'])
partySize = party.size()
```
Then, the result is plotted, with the number of Democratic representatives shown as blue bars and the Republicans as red ones.
```
fig = plt.figure()
ax = fig.add_subplot(111)  # Create matplotlib axis
width = 0.2
partySize['Democratic'].plot(kind='bar', color='blue', ax=ax, width=width, position=1)
partySize['Republican'].plot(kind='bar', color='red', ax=ax, width=width, position=0)
ax.set_ylabel('Number of rep.')
ax.set_xlabel('Congress Number')
plt.show()
```
## 5. Basic stats for the network
Create simple network statistics for the **113th house of representatives**.
Below, a network involving the Representatives of the 113th House is created.
Specifically, the page content for every representative in the 113th House is fetched and saved into a file in the data/113 directory, using the title of the representative's Wikipedia page as the filename.
For example, [Aaron Schock's](https://en.wikipedia.org/wiki/Aaron_Schock) page contents can be found in data/113/Aaron_Schock.
```
# NOTE: this download step is kept commented out; it was run once with
# Python 2 (urllib2, unicode) to cache the pages under ./data/113/.
#import urllib2
#import json
#import io
## Construction of the API query
#baseurl = "http://en.wikipedia.org/w/api.php/?"
#action = "action=query"
#content = "prop=revisions"
#rvprop = "rvprop=timestamp|content"
#dataformat = "format=json"
#rvdir = "rvdir=older" # sort revisions from newest to oldest
#start = "rvend=2000-01-03T00:00:00Z" # start of my time period
#end = "rvstart=2015-01-01T00:00:00Z" # end of my time period
#limit = "rvlimit=1" # consider only the first revision
#memberCongress113 = all_members[all_members.congress_number == 113]
#for member in memberCongress113.WikiPageName:
#    title = "titles=" + member
#    query = "%s%s&%s&%s&%s&%s&%s&%s&%s&%s" % (baseurl, action, title, content, rvprop, dataformat, rvdir, end, start, limit)
#    req = urllib2.Request(query)
#    response = urllib2.urlopen(req)
#    the_page = response.read()
#    json_string = json.loads(the_page)
#    f = io.open('./data/113/' + member, 'a+', encoding='utf-8')
#    f.write(unicode(the_page, "utf-8"))
#    f.close()
```
The construction of the aforementioned graph is detailed below.
The code follows this schema:
* For every member (member1) of the 113th Congress:
* Get all pages that redirect to this congressman's page and add them to our member_redirect_dict
* For every member (member1) of the 113th Congress:
* Create a node in the graph
* Open the member's file in /data/113
* For every link:
* Strip link to its useful parts and create new link string without any parentheses
* If link in member_redirect_dict then match
* In case of a match, create the edge (member1, member2)
```
import re
import io
import networkx as nx
import json
import requests

memberCongress113 = all_members[all_members.congress_number == 113]
G = nx.DiGraph()
member_list = set(memberCongress113.WikiPageName.tolist())
member_redirect_set = set()
member_redirect_dict = {}
count = 0
for member in member_list:
    print("Congressman " + str(count) + " passed")
    count += 1
    query = requests.get(r'https://en.wikipedia.org/w/api.php?action=query&titles={}&prop=redirects&format=json'.format(member))
    data = json.loads(query.text)
    for page, value_page in data['query']['pages'].items():
        try:
            for title_dict in value_page['redirects']:
                title = title_dict['title'].replace(" ", "_")
                member_redirect_set.add(title)
                member_redirect_dict[title] = member
        except KeyError:
            # page has no redirects
            pass
    member_redirect_dict[member] = member
for member in memberCongress113.values.tolist():
    print(member)
    # Create the node and its attributes
    G.add_node(member[0])
    nx.set_node_attributes(G, {member[0]: {"State": member[2]}})
    nx.set_node_attributes(G, {member[0]: {"Party": member[1]}})
    path_folder = './data/113/' + member[0]
    f = io.open(path_folder, 'r', encoding='utf-8').read()
    # Get all links
    links = set(re.findall(r"\[\[(.*?)\]\]", f))
    for link in links:
        # Clean the link: keep the target part and normalise spaces
        link = link.split("|")[0].replace(" ", "_")
        try:
            # avoid self-loops by comparing the resolved page name
            if member_redirect_dict[link] in member_list and member_redirect_dict[link] != member[0]:
                G.add_edge(member[0], member_redirect_dict[link])
        except KeyError:
            # link does not point to a member's page
            pass
```
* What is the number of nodes in the network? And the number of links?
The number of nodes in the graph is printed below, followed by the number of edges.
```
print(len(G.nodes))
print(len(G.edges))
```
* Plot the in and out-degree distributions.
Immediately below, *in_degree* and *out_degree* hold the in- and out-degrees of each node respectively.
```
in_degree = list(G.in_degree(G.nodes))
out_degree = list(G.out_degree(G.nodes))
#print(out_degree)
```
The distribution of the in-degrees of the graph is plotted below.
```
import matplotlib.pyplot as plt
in_degree_df = pd.DataFrame(in_degree)
groups = in_degree_df.groupby(1)  # column 1 holds the degree values
groups.size().plot(kind='bar')
plt.show()
```
Below, the average in-degree of the graph is computed as a weighted average over the degree distribution. This number is consistent with the plot above.
```
weighted_avg_degree = 0
for index, item in groups.size().items():
weighted_avg_degree += index * item
weighted_avg_degree /= groups.size().sum()
print(weighted_avg_degree)
```
The distribution of the out-degrees of the graph is plotted below. We observe that the results look roughly like a Gaussian bell curve.
```
out_degree_df = pd.DataFrame(out_degree)
groups = out_degree_df.groupby(1)
groups.size().plot()
plt.show()
```
* Who is the most connected representative?
Finally, the most connected representative is selected based on the sum of in- and out-degree.
This is done by sorting the combined degrees and reporting the representative with the largest total.
```
degree = [(x[0], x[1] + y[1]) for x, y in zip(in_degree, out_degree)]
sorted_degree = sorted(degree, key=lambda x: x[1], reverse=True)
print("Most connected representative is : "+sorted_degree[0][0])
```
# Univariate time series classification with sktime
In this notebook, we will use sktime for univariate time series classification. Here, we have a single time series variable and an associated label for multiple instances. The goal is to find a classifier that can learn the relationship between time series and label and accurately predict the label of new series.
When you have multiple time series variables and want to learn the relationship between them and a label, you can take a look at our [multivariate time series classification notebook](https://github.com/alan-turing-institute/sktime/blob/main/examples/03_classification_multivariate.ipynb).
## Preliminaries
```
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier
from sktime.classification.compose import ComposableTimeSeriesForestClassifier
from sktime.datasets import load_arrow_head
from sktime.utils.slope_and_trend import _slope
```
## Load data
In this notebook, we use the [arrow head problem](https://timeseriesclassification.com/description.php?Dataset=ArrowHead).
The arrowhead dataset consists of outlines of the images of arrow heads. The classification of projectile points is an important topic in anthropology. The classes are based on shape distinctions such as the presence and location of a notch in the arrow.
<img src="./img/arrow-heads.png" width="400" alt="arrow heads">
The shapes of the projectile points are converted into a sequence using the angle-based method as described in this [blog post](https://izbicki.me/blog/converting-images-into-time-series-for-data-mining.html) about converting images into time series for data mining.
<img src="./img/from-shapes-to-time-series.png" width="400" alt="from shapes to time series">
### Data representation
Throughout sktime, the expected data format is a `pd.DataFrame`, but in a slightly unusual format. A single column can contain not only primitives (floats, integers or strings), but also entire time series in the form of a `pd.Series` or `np.array`.
For more details on our choice of data container, see this [wiki entry](https://github.com/alan-turing-institute/sktime/wiki/Time-series-data-container).
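For illustration, a minimal nested frame can be built by hand (a sketch; real datasets come from loaders such as `load_arrow_head` below):

```python
import numpy as np
import pandas as pd

# Each cell of the "dim_0" column holds an entire time series
s1 = pd.Series(np.sin(np.linspace(0, 6, 20)))
s2 = pd.Series(np.cos(np.linspace(0, 6, 20)))
X_nested = pd.DataFrame({"dim_0": [s1, s2]})
print(X_nested.shape)            # (2, 1): two instances, one variable
print(len(X_nested.iloc[0, 0]))  # 20 time points inside the first cell
```

So the outer frame indexes instances and variables, while the time axis lives inside each cell.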
```
X, y = load_arrow_head(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
# univariate time series input data
X_train.head()
# binary target variable
labels, counts = np.unique(y_train, return_counts=True)
print(labels, counts)
fig, ax = plt.subplots(1, figsize=plt.figaspect(0.25))
for label in labels:
X_train.loc[y_train == label, "dim_0"].iloc[0].plot(ax=ax, label=f"class {label}")
plt.legend()
ax.set(title="Example time series", xlabel="Time");
```
## Why not just use scikit-learn?
We can still use scikit-learn, but doing so comes with some implicit modelling choices.
### Reduction: from time-series classification to tabular classification
To use scikit-learn, we have to convert the data into the required tabular format. There are different ways we can do that:
#### Treating time points as separate features (tabularisation)
The simplest choice is to treat each time point as a separate feature column. Alternatively, we could bin and aggregate observations in time bins of different lengths.
```
from sklearn.ensemble import RandomForestClassifier
from sktime.datatypes._panel._convert import from_nested_to_2d_array
X_train_tab = from_nested_to_2d_array(X_train)
X_test_tab = from_nested_to_2d_array(X_test)
X_train_tab.head()
# let's get a baseline for comparison
from sklearn.dummy import DummyClassifier
classifier = DummyClassifier(strategy="prior")
classifier.fit(X_train_tab, y_train)
classifier.score(X_test_tab, y_test)
# now we can apply any scikit-learn classifier
classifier = RandomForestClassifier(n_estimators=100)
classifier.fit(X_train_tab, y_train)
y_pred = classifier.predict(X_test_tab)
accuracy_score(y_test, y_pred)
from sklearn.pipeline import make_pipeline
# with sktime, we can write this as a pipeline
from sktime.transformations.panel.reduce import Tabularizer
classifier = make_pipeline(Tabularizer(), RandomForestClassifier())
classifier.fit(X_train, y_train)
classifier.score(X_test, y_test)
```
What's the implicit modelling choice here?
> We treat each observation as a separate feature and thus ignore that they are ordered in time. A tabular algorithm cannot make use of the fact that features are ordered in time, i.e. if we changed the order of the features, the fitted model and predictions wouldn't change. Sometimes this works well, sometimes it doesn't.
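This invariance is easy to demonstrate on toy data (a sketch, not the arrow-head problem): permuting the feature columns consistently in train and test leaves a tabular classifier's performance essentially unchanged.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
X_toy = rng.randn(300, 10)
y_toy = (X_toy[:, 0] + X_toy[:, 5] > 0).astype(int)
X_tr, X_te = X_toy[:200], X_toy[200:]
y_tr, y_te = y_toy[:200], y_toy[200:]

perm = rng.permutation(10)  # one fixed reordering of the columns
acc_original = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).score(X_te, y_te)
acc_permuted = RandomForestClassifier(random_state=0).fit(X_tr[:, perm], y_tr).score(X_te[:, perm], y_te)
print(acc_original, acc_permuted)  # comparable: column order carries no information
```

For a genuine time series, the temporal ordering does carry information, and that is exactly what this reduction throws away.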
#### Feature extraction
Another modelling choice: we could extract features from the time series and then use the features to fit our tabular classifier. Here we use [tsfresh](https://tsfresh.readthedocs.io) for automatic feature extraction.
```
from sktime.transformations.panel.tsfresh import TSFreshFeatureExtractor
transformer = TSFreshFeatureExtractor(default_fc_parameters="minimal")
extracted_features = transformer.fit_transform(X_train)
extracted_features.head()
classifier = make_pipeline(
TSFreshFeatureExtractor(show_warnings=False), RandomForestClassifier()
)
classifier.fit(X_train, y_train)
classifier.score(X_test, y_test)
```
What's the implicit modelling choice here?
> Instead of working in the domain of the time series, we extract features from time series and choose to work in the domain of the features. Again, sometimes this works well, sometimes it doesn't. The main difficulty is finding discriminative features for the classification problem.
## Time series classification with sktime
sktime has a number of specialised time series algorithms.
### Time series forest
Time series forest is a modification of the random forest algorithm to the time series setting:
1. Split the series into multiple random intervals,
2. Extract features (mean, standard deviation and slope) from each interval,
3. Train a decision tree on the extracted features,
4. Ensemble steps 1 - 3.
For more details, take a look at the [paper](https://www.sciencedirect.com/science/article/pii/S0020025513001473).
In sktime, we can write:
```
from sktime.transformations.panel.summarize import RandomIntervalFeatureExtractor
steps = [
(
"extract",
RandomIntervalFeatureExtractor(
n_intervals="sqrt", features=[np.mean, np.std, _slope]
),
),
("clf", DecisionTreeClassifier()),
]
time_series_tree = Pipeline(steps)
```
We can directly fit and evaluate the single time series tree (which is simply a pipeline).
```
time_series_tree.fit(X_train, y_train)
time_series_tree.score(X_test, y_test)
```
For time series forest, we can simply use the single tree as the base estimator in the forest ensemble.
```
tsf = ComposableTimeSeriesForestClassifier(
estimator=time_series_tree,
n_estimators=100,
bootstrap=True,
oob_score=True,
random_state=1,
n_jobs=-1,
)
```
Fit and obtain the out-of-bag score:
```
tsf.fit(X_train, y_train)
if tsf.oob_score:
print(tsf.oob_score_)
tsf = ComposableTimeSeriesForestClassifier()
tsf.fit(X_train, y_train)
tsf.score(X_test, y_test)
```
We can also obtain feature importances for the different features and intervals that the algorithms looked at and plot them in a feature importance graph over time.
```
fi = tsf.feature_importances_
# renaming _slope to slope.
fi.rename(columns={"_slope": "slope"}, inplace=True)
fig, ax = plt.subplots(1, figsize=plt.figaspect(0.25))
fi.plot(ax=ax)
ax.set(xlabel="Time", ylabel="Feature importance");
```
#### More about feature importances
The feature importances method is based on the example showcased in [this paper](https://arxiv.org/abs/1302.2277).
In addition to the feature importances method [available in scikit-learn](https://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_importances.html), our method collects the feature importances values from each estimator for their respective intervals, calculates the sum of feature importances values on each timepoint, and normalises the values first by the number of estimators and then by the number of intervals.
As a result, the temporal importance curves can be plotted, as shown in the previous example.
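The aggregation described above can be sketched with made-up interval data (hypothetical numbers, just to show the bookkeeping, not sktime's internal code):

```python
import numpy as np

n_timepoints = 10
# hypothetical (start, end, importance) triples, one list per estimator
per_estimator_intervals = [
    [(0, 4, 0.6), (3, 8, 0.4)],
    [(2, 6, 0.9), (5, 10, 0.1)],
]
curve = np.zeros(n_timepoints)
for intervals in per_estimator_intervals:
    for start, end, importance in intervals:
        curve[start:end] += importance  # sum importances per covered timepoint
# normalise first by the number of estimators, then by the number of intervals
curve /= len(per_estimator_intervals)
curve /= sum(len(iv) for iv in per_estimator_intervals)
print(curve)
```

Each timepoint's value thus reflects how much importance the ensemble assigned to intervals covering it, on a comparable scale across runs.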
Please note that this method currently supports only one particular structure of the TSF, where `RandomIntervalFeatureExtractor()` is used in the pipeline, or simply the default `ComposableTimeSeriesForestClassifier()` setting. For instance, two possible approaches are:
```
# Method 1: Default time-series forest classifier
tsf1 = ComposableTimeSeriesForestClassifier()
tsf1.fit(X_train, y_train)
fi1 = tsf1.feature_importances_
# renaming _slope to slope.
fi1.rename(columns={"_slope": "slope"}, inplace=True)
fig, ax = plt.subplots(1, figsize=plt.figaspect(0.25))
fi1.plot(ax=ax)
# Method 2: Pipeline
features = [np.mean, np.std, _slope]
steps = [
("transform", RandomIntervalFeatureExtractor(features=features)),
("clf", DecisionTreeClassifier()),
]
base_estimator = Pipeline(steps)
tsf2 = ComposableTimeSeriesForestClassifier(estimator=base_estimator)
tsf2.fit(X_train, y_train)
fi2 = tsf2.feature_importances_
# renaming _slope to slope.
fi2.rename(columns={"_slope": "slope"}, inplace=True)
fig, ax = plt.subplots(1, figsize=plt.figaspect(0.25))
fi2.plot(ax=ax);
```
### RISE
Another popular variant of time series forest is the so-called Random Interval Spectral Ensemble (RISE), which makes use of several series-to-series feature extraction transformers, including:
* Fitted auto-regressive coefficients,
* Estimated autocorrelation coefficients,
* Power spectrum coefficients.
```
from sktime.classification.interval_based import RandomIntervalSpectralForest
rise = RandomIntervalSpectralForest(n_estimators=10)
rise.fit(X_train, y_train)
rise.score(X_test, y_test)
```
### K-nearest-neighbours classifier for time series
For time series, the most popular k-nearest-neighbours algorithm is based on [dynamic time warping](https://en.wikipedia.org/wiki/Dynamic_time_warping) (dtw) distance measure.
<img src="img/dtw.png" width=500 />
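To make the distance concrete, here is a minimal pure-Python DTW sketch (an illustrative O(nm) dynamic-programming implementation with squared local cost; sktime's classifiers use optimised versions):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0: warping absorbs the repeat
```

Unlike the Euclidean distance, DTW aligns similar shapes even when they are shifted or stretched in time.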
Here we look at the [BasicMotions data set](http://www.timeseriesclassification.com/description.php?Dataset=BasicMotions). The data was generated as part of a student project where four students performed four activities whilst wearing a smart watch. The watch collects 3D accelerometer and 3D gyroscope data. It consists of four classes: walking, resting, running and badminton. Participants were required to record motion a total of five times, and the data is sampled once every tenth of a second, for a ten-second period.
```
from sktime.datasets import load_basic_motions
X, y = load_basic_motions(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X.iloc[:, [0]], y)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
labels, counts = np.unique(y_train, return_counts=True)
print(labels, counts)
fig, ax = plt.subplots(1, figsize=plt.figaspect(0.25))
for label in labels:
X_train.loc[y_train == label, "dim_0"].iloc[0].plot(ax=ax, label=label)
plt.legend()
ax.set(title="Example time series", xlabel="Time");
for label in labels[:2]:
fig, ax = plt.subplots(1, figsize=plt.figaspect(0.25))
for instance in X_train.loc[y_train == label, "dim_0"]:
ax.plot(instance)
ax.set(title=f"Instances of {label}")
```
```
from sklearn.neighbors import KNeighborsClassifier

knn = make_pipeline(
    Tabularizer(),
    KNeighborsClassifier(n_neighbors=1, metric="euclidean"))
knn.fit(X_train, y_train)
knn.score(X_test, y_test)
```
With sktime, we can instead use a k-nearest-neighbours classifier with the DTW distance directly on the time series.
```
from sktime.classification.distance_based import KNeighborsTimeSeriesClassifier

knn = KNeighborsTimeSeriesClassifier(n_neighbors=1, distance="dtw")
knn.fit(X_train, y_train)
knn.score(X_test, y_test)
```
### Other classifiers
To find out what other algorithms we have implemented in sktime, you can use our utility function:
```
from sktime.registry import all_estimators
all_estimators(estimator_types="classifier", as_dataframe=True)
```
# End-to-End Example #1
1. [Introduction](#Introduction)
2. [Prerequisites and Preprocessing](#Prequisites-and-Preprocessing)
1. [Permissions and environment variables](#Permissions-and-environment-variables)
2. [Data ingestion](#Data-ingestion)
3. [Data inspection](#Data-inspection)
4. [Data conversion](#Data-conversion)
3. [Training the K-Means model](#Training-the-K-Means-model)
4. [Set up hosting for the model](#Set-up-hosting-for-the-model)
5. [Validate the model for use](#Validate-the-model-for-use)
## Introduction
Welcome to our first end-to-end example! Today, we're working through a classification problem, specifically of images of handwritten digits, from zero to nine. Let's imagine that this dataset doesn't have labels, so we don't know for sure what the true answer is. In later examples, we'll show the value of "ground truth", as it's commonly known.
Today, however, we need to get these digits classified without ground truth. A common method for doing this is a set of methods known as "clustering", and in particular, the method that we'll look at today is called k-means clustering. In this method, each point belongs to the cluster with the closest mean, and the data is partitioned into a number of clusters that is specified when framing the problem. In this case, since we know there are 10 clusters, and we have no labeled data (in the way we framed the problem), this is a good fit.
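The assignment-and-update loop at the heart of k-means can be sketched in a few lines of NumPy — a toy illustration of the idea, not the distributed implementation SageMaker uses:

```python
import numpy as np

def kmeans_step(X, centroids):
    """One Lloyd iteration: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    # Pairwise squared distances between points and centroids
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)
    new_centroids = np.array([
        X[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
        for k in range(len(centroids))
    ])
    return labels, new_centroids

# Two well-separated blobs: the centroids converge to the blob centers quickly
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
centroids = np.array([[1.0, 1.0], [4.0, 4.0]])
for _ in range(5):
    labels, centroids = kmeans_step(X, centroids)
print(np.round(centroids))
```

The same two steps repeat until the assignments stop changing; with k = 10 on MNIST the cost of each step is dominated by the distance computation, which is what SageMaker parallelizes.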
To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on.
## Prerequisites and Preprocessing
### Permissions and environment variables
Here we set up the linkage and authentication to AWS services. There are two parts to this:
1. The role(s) used to give learning and hosting access to your data. Here we extract the role you created earlier for accessing your notebook. See the documentation if you want to specify a different role.
1. The S3 bucket name that you want to use for training and model data. Here we use a default in the form of `sagemaker-{region}-{AWS account ID}`, but you may specify a different one if you wish.
```
%matplotlib inline
from sagemaker import get_execution_role
from sagemaker.session import Session
role = get_execution_role()
bucket = Session().default_bucket()
```
### Data ingestion
Next, we read the dataset from the existing repository into memory, for preprocessing prior to training. In this case we'll use the MNIST dataset, which contains 70K 28 x 28 pixel images of handwritten digits. For more details, please see [here](http://yann.lecun.com/exdb/mnist/).
This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
```
%%time
import pickle, gzip, numpy, boto3, json
# Load the dataset
region = boto3.Session().region_name
boto3.Session().resource('s3', region_name=region).Bucket('sagemaker-sample-data-{}'.format(region)).download_file('algorithms/kmeans/mnist/mnist.pkl.gz', 'mnist.pkl.gz')
with gzip.open('mnist.pkl.gz', 'rb') as f:
train_set, valid_set, test_set = pickle.load(f, encoding='latin1')
```
### Data inspection
Once the dataset is imported, it's typical as part of the machine learning process to inspect the data, understand the distributions, and determine what type(s) of preprocessing might be needed. You can perform those tasks right here in the notebook. As an example, let's go ahead and look at one of the digits that is part of the dataset.
```
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (2, 10)

def show_digit(img, caption='', subplot=None):
    if subplot is None:
        _, subplot = plt.subplots(1, 1)
    imgr = img.reshape((28, 28))
    subplot.axis('off')
    subplot.imshow(imgr, cmap='gray')
    plt.title(caption)

show_digit(train_set[0][30], 'This is a {}'.format(train_set[1][30]))
```
## Training the K-Means model
Once we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. Since this data is relatively small, it isn't meant to show off the performance of the k-means training algorithm. But Amazon SageMaker's k-means has been tested on, and scales well with, multi-terabyte datasets.
After setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes around 4 minutes.
```
from sagemaker import KMeans
data_location = 's3://{}/kmeans_highlevel_example/data'.format(bucket)
output_location = 's3://{}/kmeans_example/output'.format(bucket)
print('training data will be uploaded to: {}'.format(data_location))
print('training artifacts will be uploaded to: {}'.format(output_location))
kmeans = KMeans(role=role,
                train_instance_count=2,
                train_instance_type='ml.c4.xlarge',
                output_path=output_location,
                k=10,
                data_location=data_location)
%%time
kmeans.fit(kmeans.record_set(train_set[0]))
```
## Set up hosting for the model
Now, we can deploy the model we just trained behind a real-time hosted endpoint. This next step can take, on average, 7 to 11 minutes to complete.
```
%%time
kmeans_predictor = kmeans.deploy(initial_instance_count=1,
                                 instance_type='ml.m4.xlarge')
```
## Validate the model for use
Finally, we'll validate the model for use. Let's generate a classification for a single observation from the trained model using the endpoint we just created.
```
result = kmeans_predictor.predict(train_set[0][30:31])
print(result)
```
OK, a single prediction works.
Let's do a whole batch and see how well the clustering works.
```
%%time
result = kmeans_predictor.predict(valid_set[0][0:100])
clusters = [r.label['closest_cluster'].float32_tensor.values[0] for r in result]

for cluster in range(10):
    print('\n\n\nCluster {}:'.format(int(cluster)))
    digits = [img for l, img in zip(clusters, valid_set[0]) if int(l) == cluster]
    height = ((len(digits) - 1) // 5) + 1
    width = 5
    plt.rcParams["figure.figsize"] = (width, height)
    _, subplots = plt.subplots(height, width)
    subplots = numpy.ndarray.flatten(subplots)
    for subplot, image in zip(subplots, digits):
        show_digit(image, subplot=subplot)
    for subplot in subplots[len(digits):]:
        subplot.axis('off')
    plt.show()
```
### The bottom line
K-Means clustering is not the best algorithm for image analysis problems, but we do see pretty reasonable clusters being built.
### (Optional) Delete the Endpoint
If you're ready to be done with this notebook, make sure to run the cell below. This will remove the hosted endpoint you created and avoid charges from a stray instance being left running.
```
print(kmeans_predictor.endpoint)
import sagemaker
sagemaker.Session().delete_endpoint(kmeans_predictor.endpoint)
```
# Keras
Keras is fairly well-known in the Python deep learning community. It started out as a high-level, framework-agnostic API that made frameworks like CNTK, Theano, and TensorFlow easier to use (you only had to set the backend for processing; everything else was abstracted). A few years ago, Keras was migrated into the TensorFlow repository and dropped support for other backends. It is now the de facto high-level API for TF.
### Layers and models
The basic component of a Keras model is a layer. A layer is comprised of one or more operations and is meant to be easily reusable. A model is a graph of layers.
```
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import models
tf.random.set_seed(1024)
```
We will use the well-known Boston housing dataset. It is a small dataset of 506 examples (homes in Boston) with 13 features, where the target variable is the price of a home. Keras makes this dataset easily accessible.
```
# Boston housing dataset
(x_train, y_train), (x_test, y_test) = keras.datasets.boston_housing.load_data()
print(f'x_train shape: {x_train.shape}')
print(f'y_train shape: {y_train.shape}')
print(f'y_test shape: {y_test.shape}')
# Preprocess the data (these are Numpy arrays)
# Standardize by subtracting the mean and dividing with the standard deviation
x_mean = np.mean(x_train, axis=0)
x_std = np.std(x_train, axis=0)
x_train = (x_train - x_mean) / x_std
x_test = (x_test - x_mean) / x_std
n_features = x_train.shape[1]
```
#### Building a linear regression model
You'll see examples in the TF guide immediately start with neural networks. We will begin with a linear regression model and then move onto a simple neural network, showing that it is very easy to construct both.
The simplest way to construct a Keras model is a sequential model, which is just a sequence of layers (or operations). For simple models, this works well. Our linear regression model has just one layer - a [dense layer](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense), which means a multiplication with a weight and a bias addition. This is just the equation $y = Wx + b$.
This layer can also be used for neural networks, where we would be multiplying matrices. We choose `units = 1` (output a vector of size 1, i.e. a scalar for each example) and `input_shape = (n_features, )`, which means that we have 13 features for each example.
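Before handing this to Keras, it can help to see the same computation spelled out in NumPy: a dense layer with one unit is just a matrix-vector product plus a bias, vectorized over the batch (the weight values below are random stand-ins, not trained parameters):

```python
import numpy as np

n_features = 13
rng = np.random.default_rng(0)

x = rng.normal(size=(4, n_features))   # a mini-batch of 4 examples
W = rng.normal(size=(n_features, 1))   # the layer's kernel
b = np.zeros(1)                        # the layer's bias

y = x @ W + b                          # y = Wx + b for every example at once
print(y.shape)
```

Keras stores exactly these two arrays (`kernel` and `bias`) inside the layer and learns them during training.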
```
linear_layer = layers.Dense(
    units=1,
    input_shape=(n_features,),
    name='linear_layer'
)
linear_model = models.Sequential([
    linear_layer
], name='linear_model')
# View the weights (parameters) of a layer
linear_layer.weights
```
Let's look at a very useful method that gives us a summary of a model. It shows all the layers, their output shapes, and the number of parameters in the model.
```
linear_model.summary()
```
#### Compiling and training the model
Before training, we need to specify the optimizer we will use and which loss we want to minimize. We'll pick the simplest options: stochastic gradient descent for the optimizer and mean squared error for the loss function. We do this by "compiling" the model, which prepares it for training. After that, we can call `.fit`, which is similar to scikit-learn. However, here we have to specify the number of iterations (epochs) we want to run.
```
linear_model.compile(
    # Optimizer: stochastic gradient descent
    optimizer=keras.optimizers.SGD(),
    # Loss function to minimize: mean squared error
    loss=keras.losses.MeanSquaredError()
)

linear_model.fit(
    x_train,
    y_train,
    batch_size=len(x_train),  # Use all data as one batch
    verbose=0,                # Don't print progress
    epochs=20                 # Run 20 iterations of gradient updates
)
y_hat = linear_model.predict(x_test)
mse = keras.metrics.MeanSquaredError()(y_test, y_hat)
print(f'Test MSE: {mse:.4f}')
```
We see that we also managed to get a decent fit (MSE around 20 should be OK for this dataset). Observe how we managed to do this by avoiding all optimization-related mathematics - we only had to specify how the model computes predictions (the forward pass). Since our model and optimization procedure were very simple, the code isn't really much more complicated than if we used lower-level tools, but for more advanced approaches, it's usually much simpler to use Keras.
### More details for layers and models
A layer holds state and performs computations. Layers should be easily reusable and composable - you can create more complex layers from simpler ones.
A model is a graph of layers and offers train / predict / evaluate APIs. It is easily serializable.
### Functional API example
A core principle of Keras is the progressive increase of complexity. You should always be able to get into lower-level workflows in a gradual way. You shouldn't fall off a cliff if the high-level functionality doesn't exactly match your use case. You should be able to gain more control over the small details while keeping some high-level convenience.
For more sophisticated use cases, we might want a more flexible way to construct models (recurrent models, multiple inputs and outputs, ...). We'll move on to the Functional API, which offers us just that. Instead of specifying a sequence of layers, we create each layer individually and apply it to the previous ones. This is the most popular and recommended way of creating models in Keras. Each layer is a Python class which you apply by calling it with its inputs. The inputs to each layer are tensors (`tf.Tensor`). Once you have applied all the layers, you simply create a model by telling it which tensors are its inputs and outputs. TensorFlow will find the path through the graph of layers you have created and note which layers are part of the graph (unused parts will not be included).
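As a quick sketch of that extra flexibility (with hypothetical layer sizes), here is a model with two inputs whose branches are merged before the output — something the sequential API cannot express:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

in_a = layers.Input(shape=(13,), name='in_a')
in_b = layers.Input(shape=(5,), name='in_b')

# Each input gets its own small branch
branch_a = layers.Dense(4, activation='relu')(in_a)
branch_b = layers.Dense(4, activation='relu')(in_b)

# Merge the branches and map to a single output
merged = layers.concatenate([branch_a, branch_b])
out = layers.Dense(1)(merged)

model = models.Model(inputs=[in_a, in_b], outputs=out)
print(model.count_params())
```

Calling `model.fit` on such a model takes a list of two arrays, one per input.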
#### Neural networks
Now, we'll move on to a neural network example. We will apply a neural network with a small hidden layer to the same dataset. Due to the simplicity of Keras, this won't be much more difficult than the linear regression example!
We'll use only dense layers and the ReLU activation function. In this example, our inputs have 13 features. With the matrix multiplications in the dense layers, we transform them first into a 3-dimensional hidden representation and then into the 1-dimensional output.
We need to start with the Input class, which is a placeholder for the actual data we'll send in. The shape we are specifying is the number of features, i.e. length of each vector (example). Even though we can (and should) feed in matrices of data, the batch size (the number of rows in the matrix) is always omitted in Keras.
```
inputs = layers.Input(shape=(n_features,), name='inputs')
# After we create a layer object, we get its output by calling it and passing the input tensor.
# The activation function can also be added by creating a `layers.ReLU` or `layers.Activation` layer.
layer1 = layers.Dense(3, activation='relu', name='dense_1')
x1 = layer1(inputs)
# Our outputs will be prices (a single value for each example)
layer2 = layers.Dense(1, name='dense_2')
predictions = layer2(x1)
nn_model = models.Model(inputs=inputs, outputs=predictions, name='nn_model')
nn_model.summary()
```
You can access input / output / weights attributes on both layers and models to see what they contain.
```
print(f'Layer 1 inputs: {layer1.input}\n')
print(f'Layer 1 outputs: {layer1.output}\n')
print(f'Model inputs: {nn_model.input}\n')
print(f'Model outputs: {nn_model.output}\n')
print(f'Layer 1 weights: {layer1.weights}\n')
```
Again, we'll need to compile the model before training. We'll use a different optimizer this time.
```
tf.random.set_seed(999)
nn_model.compile(
# Adam optimizer, better suited for neural networks
optimizer=keras.optimizers.Adam(0.15),
# Loss function to minimize: mean squared error
loss=keras.losses.MeanSquaredError()
)
```
We will also use a validation set (we don't train on it) to monitor the loss during training. Although training is typically done in minibatches, we will use the whole dataset here as it is fairly small.
```
history = nn_model.fit(
    x_train,
    y_train,
    batch_size=len(x_train),  # The minibatch size (tradeoff between speed and quality) - use all data
    epochs=20,
    validation_split=0.1      # Fraction of the training data to hold out for validation
)
```
The `.fit` method returns an object which contains the losses for each epoch during training. You can use it to save them or plot them, but we will show an easier and more powerful tool for this in the next notebook.
```
# Show the loss and metrics history
history.history
```
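Even before the dedicated tooling mentioned above, the history dictionary is easy to plot by hand. A minimal sketch (using a stand-in dictionary with made-up loss values so the example is self-contained; in the notebook you would pass `history.history` instead):

```python
import matplotlib.pyplot as plt

# `history.history` is a plain dict: metric name -> list of per-epoch values.
hist = {'loss': [550.0, 320.0, 180.0, 95.0],
        'val_loss': [600.0, 350.0, 210.0, 120.0]}

fig, ax = plt.subplots()
for name, values in hist.items():
    ax.plot(range(1, len(values) + 1), values, label=name)
ax.set(xlabel='Epoch', ylabel='MSE')
ax.legend()
```

A widening gap between the two curves is the classic sign of overfitting.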
The validation results seem good. To make sure, we'll also evaluate our model on the test set.
```
# Evaluate the model on the test data
mse = nn_model.evaluate(x_test, y_test, batch_size=len(x_test))
print(f'Test MSE: {mse}')
```
We see that our test loss is higher than our validation loss, but overall still OK, even lower than the linear regression loss. Since we're dealing with a small amount of data, overfitting can easily happen - we'll demonstrate some useful techniques to combat it in the next notebook.
### Going further
The [Keras API docs](https://www.tensorflow.org/api_docs/python/tf/keras) are pretty nice and contain a lot more material - definitely check out the list of layers. And even if you can't find something you need - it's easy to create your own layer, metric or loss function with plain TF operations.
# Project 1: Trading with Momentum
## Instructions
Each problem consists of a function to implement and instructions on how to implement the function. The parts of the function that need to be implemented are marked with a `# TODO` comment. After implementing the function, run the cell to test it against the unit tests we've provided. For each problem, we provide one or more unit tests from our `project_tests` package. These unit tests won't tell you if your answer is correct, but will warn you of any major errors. Your code will be checked for the correct solution when you submit it to Udacity.
## Packages
When you implement the functions, you'll only need to use the packages you've used in the classroom, like [Pandas](https://pandas.pydata.org/) and [Numpy](http://www.numpy.org/). These packages will be imported for you. We recommend you don't add any import statements; otherwise the grader might not be able to run your code.
The other packages that we're importing are `helper`, `project_helper`, and `project_tests`. These are custom packages built to help you solve the problems. The `helper` and `project_helper` module contains utility functions and graph functions. The `project_tests` contains the unit tests for all the problems.
### Install Packages
```
import sys
!{sys.executable} -m pip install -r requirements.txt
```
### Load Packages
```
import pandas as pd
import numpy as np
import helper
import project_helper
import project_tests
```
## Market Data
The data source we use for most of the projects is the [Wiki End of Day data](https://www.quandl.com/databases/WIKIP) hosted at [Quandl](https://www.quandl.com). This contains data for many stocks, but we'll just be looking at stocks in the S&P 500. We also made things a little easier to run by narrowing our range of time to limit the size of the data.
### Set API Key
Set the `quandl_api_key` variable to your Quandl api key. You can find your Quandl api key [here](https://www.quandl.com/account/api).
```
# TODO: Add your Quandl API Key
quandl_api_key = ''
```
### Download Data
```
import os
snp500_file_path = 'data/tickers_SnP500.txt'
wiki_file_path = 'data/WIKI_PRICES.csv'
start_date, end_date = '2013-07-01', '2017-06-30'
use_columns = ['date', 'ticker', 'adj_close']
if not os.path.exists(wiki_file_path):
    with open(snp500_file_path) as f:
        tickers = f.read().split()
    helper.download_quandl_dataset(quandl_api_key, 'WIKI', 'PRICES', wiki_file_path, use_columns, tickers, start_date, end_date)
else:
    print('Data already downloaded')
```
### Load Data
```
df = pd.read_csv(wiki_file_path, parse_dates=['date'], index_col=False)
close = df.reset_index().pivot(index='date', columns='ticker', values='adj_close')
print('Loaded Data')
```
### View Data
Run the cell below to see what the data looks like for `close`.
```
project_helper.print_dataframe(close)
```
### Stock Example
Let's see what a single stock looks like from the closing prices. For this example and future display examples in this project, we'll use Apple's stock (AAPL). If we tried to graph all the stocks, it would be too much information.
```
apple_ticker = 'AAPL'
project_helper.plot_stock(close[apple_ticker], '{} Stock'.format(apple_ticker))
```
## Resample Adjusted Prices
The trading signal you'll develop in this project does not need to be based on daily prices, for instance, you can use month-end prices to perform trading once a month. To do this, you must first resample the daily adjusted closing prices into monthly buckets, and select the last observation of each month.
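To get a feel for what month-end bucketing does before implementing it yourself, here is a toy demonstration on synthetic daily prices (hypothetical ticker names; this is a generic pandas illustration, not the required solution):

```python
import pandas as pd

dates = pd.date_range('2013-07-01', periods=10, freq='D')
daily = pd.DataFrame({'AAA': range(10), 'BBB': range(10, 20)}, index=dates)

# Bucket the rows by calendar month and keep the last observation in each bucket
monthly = daily.resample('M').last()
print(monthly)
```

All ten July days collapse into a single month-end row holding the final observation of the month.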
Implement the `resample_prices` to resample `close_prices` at the sampling frequency of `freq`.
```
def resample_prices(close_prices, freq='M'):
    """
    Resample close prices for each ticker at specified frequency.

    Parameters
    ----------
    close_prices : DataFrame
        Close prices for each ticker and date
    freq : str
        What frequency to sample at
        For valid freq choices, see http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases

    Returns
    -------
    prices_resampled : DataFrame
        Resampled prices for each ticker and date
    """
    # TODO: Implement Function

    return None

project_tests.test_resample_prices(resample_prices)
```
### View Data
Let's apply this function to `close` and view the results.
```
monthly_close = resample_prices(close)
project_helper.plot_resampled_prices(
    monthly_close.loc[:, apple_ticker],
    close.loc[:, apple_ticker],
    '{} Stock - Close Vs Monthly Close'.format(apple_ticker))
```
## Compute Log Returns
Compute log returns ($R_t$) from prices ($P_t$) as your primary momentum indicator:
$$R_t = log_e(P_t) - log_e(P_{t-1})$$
Implement the `compute_log_returns` function below, such that it accepts a dataframe (like one returned by `resample_prices`), and produces a similar dataframe of log returns. Use Numpy's [log function](https://docs.scipy.org/doc/numpy/reference/generated/numpy.log.html) to help you calculate the log returns.
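The formula above is equivalent to taking the log of the prices and differencing once. A toy check on synthetic prices (a generic illustration, not the required solution):

```python
import numpy as np
import pandas as pd

prices = pd.DataFrame({'AAA': [100.0, 110.0, 121.0]})

manual = np.log(prices) - np.log(prices.shift(1))   # R_t = ln(P_t) - ln(P_{t-1})
via_diff = np.log(prices).diff()                    # the same thing in one call

print(manual)
```

The first row is NaN (there is no previous price), and each 10% price increase produces the same log return, ln(1.1).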
```
def compute_log_returns(prices):
    """
    Compute log returns for each ticker.

    Parameters
    ----------
    prices : DataFrame
        Prices for each ticker and date

    Returns
    -------
    log_returns : DataFrame
        Log returns for each ticker and date
    """
    # TODO: Implement Function

    return None

project_tests.test_compute_log_returns(compute_log_returns)
```
### View Data
Using the same data returned from `resample_prices`, we'll generate the log returns.
```
monthly_close_returns = compute_log_returns(monthly_close)
project_helper.plot_returns(
    monthly_close_returns.loc[:, apple_ticker],
    'Log Returns of {} Stock (Monthly)'.format(apple_ticker))
```
## Shift Returns
Implement the `shift_returns` function to shift the log returns to the previous or future returns in the time series. For example, the parameter `shift_n` is 2 and `returns` is the following:
```
Returns
A B C D
2013-07-08 0.015 0.082 0.096 0.020 ...
2013-07-09 0.037 0.095 0.027 0.063 ...
2013-07-10 0.094 0.001 0.093 0.019 ...
2013-07-11 0.092 0.057 0.069 0.087 ...
... ... ... ... ...
```
the output of the `shift_returns` function would be:
```
Shift Returns
A B C D
2013-07-08 NaN NaN NaN NaN ...
2013-07-09 NaN NaN NaN NaN ...
2013-07-10 0.015 0.082 0.096 0.020 ...
2013-07-11 0.037 0.095 0.027 0.063 ...
... ... ... ... ...
```
Using the same `returns` data as above, the `shift_returns` function should generate the following with `shift_n` as -2:
```
Shift Returns
A B C D
2013-07-08 0.094 0.001 0.093 0.019 ...
2013-07-09 0.092 0.057 0.069 0.087 ...
... ... ... ... ... ...
... ... ... ... ... ...
... NaN NaN NaN NaN ...
... NaN NaN NaN NaN ...
```
_Note: The "..." represents data points we're not showing._
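The behaviour illustrated in the tables above can be reproduced on a toy column with pandas shifting (a generic demonstration with `shift_n = 2`, not the required solution):

```python
import pandas as pd

returns = pd.DataFrame({'A': [0.015, 0.037, 0.094, 0.092]})

down = returns.shift(2)    # positive shift: first two rows become NaN, values move down
up = returns.shift(-2)     # negative shift: last two rows become NaN, values move up
print(down)
print(up)
```

A positive shift looks "back" in time (each row shows an earlier return), while a negative shift looks "ahead".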
```
def shift_returns(returns, shift_n):
    """
    Generate shifted returns

    Parameters
    ----------
    returns : DataFrame
        Returns for each ticker and date
    shift_n : int
        Number of periods to move, can be positive or negative

    Returns
    -------
    shifted_returns : DataFrame
        Shifted returns for each ticker and date
    """
    # TODO: Implement Function

    return None

project_tests.test_shift_returns(shift_returns)
```
### View Data
Let's get the previous month's and next month's returns.
```
prev_returns = shift_returns(monthly_close_returns, 1)
lookahead_returns = shift_returns(monthly_close_returns, -1)
project_helper.plot_shifted_returns(
    prev_returns.loc[:, apple_ticker],
    monthly_close_returns.loc[:, apple_ticker],
    'Previous Returns of {} Stock'.format(apple_ticker))
project_helper.plot_shifted_returns(
    lookahead_returns.loc[:, apple_ticker],
    monthly_close_returns.loc[:, apple_ticker],
    'Lookahead Returns of {} Stock'.format(apple_ticker))
```
## Generate Trading Signal
A trading signal is a sequence of trading actions, or results that can be used to take trading actions. A common form is to produce a "long" and "short" portfolio of stocks on each date (e.g. end of each month, or whatever frequency you desire to trade at). This signal can be interpreted as rebalancing your portfolio on each of those dates, entering long ("buy") and short ("sell") positions as indicated.
Here's a strategy that we will try:
> For each month-end observation period, rank the stocks by _previous_ returns, from the highest to the lowest. Select the top performing stocks for the long portfolio, and the bottom performing stocks for the short portfolio.
Implement the `get_top_n` function to get the top performing stock for each month. Get the top performing stocks from `prev_returns` by assigning them a value of 1. For all other stocks, give them a value of 0. For example, using the following `prev_returns`:
```
Previous Returns
A B C D E F G
2013-07-08 0.015 0.082 0.096 0.020 0.075 0.043 0.074
2013-07-09 0.037 0.095 0.027 0.063 0.024 0.086 0.025
... ... ... ... ... ... ... ...
```
The function `get_top_n` with `top_n` set to 3 should return the following:
```
Previous Returns
A B C D E F G
2013-07-08 0 1 1 0 1 0 0
2013-07-09 0 1 0 1 0 1 0
... ... ... ... ... ... ... ...
```
*Note: You may have to use Pandas' [`DataFrame.iterrows`](https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.iterrows.html) with [`Series.nlargest`](https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.Series.nlargest.html) in order to implement the function. This is one of those cases where creating a vectorized solution is too difficult.*
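To get a feel for the hinted approach, here is what `Series.nlargest` does on a single toy row of previous returns (hypothetical tickers; an illustration of the building block, not the required solution):

```python
import pandas as pd

row = pd.Series({'A': 0.015, 'B': 0.082, 'C': 0.096, 'D': 0.020, 'E': 0.075})

top = row.nlargest(3)               # the 3 best performers in this row
marks = row.isin(top).astype(int)   # 1 for top performers, 0 otherwise
print(marks)
```

Repeating this per date (e.g. with `iterrows`) builds the 0/1 DataFrame the function should return.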
```
def get_top_n(prev_returns, top_n):
    """
    Select the top performing stocks

    Parameters
    ----------
    prev_returns : DataFrame
        Previous shifted returns for each ticker and date
    top_n : int
        The number of top performing stocks to get

    Returns
    -------
    top_stocks : DataFrame
        Top stocks for each ticker and date marked with a 1
    """
    # TODO: Implement Function

    return None

project_tests.test_get_top_n(get_top_n)
```
### View Data
We want to get the best performing and worst performing stocks. To get the best performing stocks, we'll use the `get_top_n` function. To get the worst performing stocks, we'll also use the `get_top_n` function. However, we pass in `-1*prev_returns` instead of just `prev_returns`. Multiplying by negative one will flip all the positive returns to negative and negative returns to positive. Thus, it will return the worst performing stocks.
```
top_bottom_n = 50
df_long = get_top_n(prev_returns, top_bottom_n)
df_short = get_top_n(-1*prev_returns, top_bottom_n)
project_helper.print_top(df_long, 'Longed Stocks')
project_helper.print_top(df_short, 'Shorted Stocks')
```
## Projected Returns
It's now time to check if your trading signal has the potential to become profitable!
We'll start by computing the net returns this portfolio would return. For simplicity, we'll assume every stock gets an equal dollar amount of investment. This makes it easier to compute a portfolio's returns as the simple arithmetic average of the individual stock returns.
Implement the `portfolio_returns` function to compute the expected portfolio returns. Using `df_long` to indicate which stocks to long and `df_short` to indicate which stocks to short, calculate the returns using `lookahead_returns`. To help with calculation, we've provided you with `n_stocks` as the number of stocks we're investing in a single period.
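On a toy example, the equal-weighted long/short arithmetic works out like this (hypothetical returns and masks; a sketch of the idea, not the required implementation):

```python
import numpy as np

n_stocks = 4                          # 2 long + 2 short positions
lookahead = np.array([0.02, 0.05, -0.01, -0.03])
long_mask = np.array([1, 1, 0, 0])    # long the first two stocks
short_mask = np.array([0, 0, 1, 1])   # short the last two

# Longs earn their return, shorts earn the negative of theirs;
# equal dollar amounts make the portfolio return a simple average.
port_return = ((long_mask - short_mask) * lookahead).sum() / n_stocks
print(port_return)
```

Here the shorted stocks fell, so their negated returns add to the portfolio's gain.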
```
def portfolio_returns(df_long, df_short, lookahead_returns, n_stocks):
    """
    Compute expected returns for the portfolio, assuming equal investment in each long/short stock.

    Parameters
    ----------
    df_long : DataFrame
        Top stocks for each ticker and date marked with a 1
    df_short : DataFrame
        Bottom stocks for each ticker and date marked with a 1
    lookahead_returns : DataFrame
        Lookahead returns for each ticker and date
    n_stocks : int
        The number of stocks chosen for each month

    Returns
    -------
    portfolio_returns : DataFrame
        Expected portfolio returns for each ticker and date
    """
    # TODO: Implement Function

    return None

project_tests.test_portfolio_returns(portfolio_returns)
```
### View Data
Time to see how the portfolio did.
```
expected_portfolio_returns = portfolio_returns(df_long, df_short, lookahead_returns, 2*top_bottom_n)
project_helper.plot_returns(expected_portfolio_returns.T.sum(), 'Portfolio Returns')
```
## Statistical Tests
### Annualized Rate of Return
```
expected_portfolio_returns_by_date = expected_portfolio_returns.T.sum().dropna()
portfolio_ret_mean = expected_portfolio_returns_by_date.mean()
portfolio_ret_ste = expected_portfolio_returns_by_date.sem()
portfolio_ret_annual_rate = (np.exp(portfolio_ret_mean * 12) - 1) * 100
print("""
Mean: {:.6f}
Standard Error: {:.6f}
Annualized Rate of Return: {:.2f}%
""".format(portfolio_ret_mean, portfolio_ret_ste, portfolio_ret_annual_rate))
```
The annualized rate of return allows you to compare the rate of return from this strategy to other quoted rates of return, which are usually quoted on an annual basis.
### T-Test
Our null hypothesis ($H_0$) is that the actual mean return from the signal is zero. We'll perform a one-sample, one-sided t-test on the observed mean return, to see if we can reject $H_0$.
We'll need to first compute the t-statistic, and then find its corresponding p-value. The p-value will indicate the probability of observing a mean return equally or more extreme than the one we observed if the null hypothesis were true. A small p-value means that the chance of observing the mean we observed under the null hypothesis is small, and thus casts doubt on the null hypothesis. It's good practice to set a desired level of significance or alpha ($\alpha$) _before_ computing the p-value, and then reject the null hypothesis if $p < \alpha$.
For this project, we'll use $\alpha = 0.05$, since it's a common value to use.
Implement the `analyze_alpha` function to perform a t-test on the sample of portfolio returns. We've imported the `scipy.stats` module for you to perform the t-test.
Note: [`scipy.stats.ttest_1samp`](https://docs.scipy.org/doc/scipy-1.0.0/reference/generated/scipy.stats.ttest_1samp.html) performs a two-sided test, so divide the p-value by 2 to get the one-sided p-value.
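A sketch of the hinted recipe on synthetic returns (a generic illustration, not the required solution; halving the p-value is only valid when the t-statistic points in the hypothesized direction, here positive):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.01, scale=0.02, size=48)  # fake monthly returns

t_value, p_two_sided = stats.ttest_1samp(sample, 0.0)
p_one_sided = p_two_sided / 2  # one-sided p-value, since t_value > 0

print(t_value, p_one_sided)
```

You would then compare `p_one_sided` against the chosen alpha of 0.05.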
```
from scipy import stats
def analyze_alpha(expected_portfolio_returns_by_date):
    """
    Perform a t-test with the null hypothesis being that the expected mean return is zero.

    Parameters
    ----------
    expected_portfolio_returns_by_date : Pandas Series
        Expected portfolio returns for each date

    Returns
    -------
    t_value
        T-statistic from t-test
    p_value
        Corresponding p-value
    """
    # TODO: Implement Function

    return None

project_tests.test_analyze_alpha(analyze_alpha)
```
### View Data
Let's see what values we get with our portfolio. After you run this, make sure to answer the question below.
```
t_value, p_value = analyze_alpha(expected_portfolio_returns_by_date)
print("""
Alpha analysis:
t-value: {:.3f}
p-value: {:.6f}
""".format(t_value, p_value))
```
### Question: What p-value did you observe? And what does that indicate about your signal?
*#TODO: Put Answer In this Cell*
## Submission
Now that you're done with the project, it's time to submit it. Click the submit button in the bottom right. One of our reviewers will give you feedback on your project with a pass or not passed grade. You can continue to the next section while you wait for feedback.
# Introduction
## Guided Project - Visualizing The Gender Gap In College Degrees
In this guided project, we'll extend the work we did in the last two missions on visualizing the gender gap across college degrees. So far, we mostly focused on the STEM degrees but now we will generate line charts to compare across all degree categories. In the last step of this guided project, we'll explore how to export the final diagram we create as an image file.
```
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
# This will need to be done in the future,
# so get accustomed to using it now
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
women_degrees = pd.read_csv('percent-bachelors-degrees-women-usa.csv')
cb_dark_blue = (0/255,107/255,164/255)
cb_orange = (255/255, 128/255, 14/255)
stem_cats = ['Engineering', 'Computer Science', 'Psychology', 'Biology', 'Physical Sciences', 'Math and Statistics']
fig = plt.figure(figsize=(18, 3))
for sp in range(0,6):
ax = fig.add_subplot(1,6,sp+1)
ax.plot(women_degrees['Year'], women_degrees[stem_cats[sp]], c=cb_dark_blue, label='Women', linewidth=3)
ax.plot(women_degrees['Year'], 100-women_degrees[stem_cats[sp]], c=cb_orange, label='Men', linewidth=3)
for key,spine in ax.spines.items():
spine.set_visible(False)
ax.set_xlim(1968, 2011)
ax.set_ylim(0,100)
ax.set_title(stem_cats[sp])
    ax.tick_params(bottom=False, top=False, left=False, right=False)
if sp == 0:
ax.text(2005, 87, 'Men')
ax.text(2002, 8, 'Women')
elif sp == 5:
ax.text(2005, 62, 'Men')
ax.text(2001, 35, 'Women')
plt.show()
```
### Comparing across all Degree Categories
Because there are seventeen degrees that we need to generate line charts for, we'll use a subplot grid layout of 6 rows by 3 columns.
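As a quick sanity check on the grid indexing used below, this sketch (plain Python, no plotting) computes which `add_subplot` cell numbers each `range(start, stop, 3)` loop visits, and how `sp // 3` recovers the category index:

```python
# In a 6x3 grid, add_subplot numbers cells 1..18 row by row.
# Stepping the 0-based counter sp by 3 walks down a single column.
first_col = [sp + 1 for sp in range(0, 18, 3)]    # column 1 cells
second_col = [sp + 1 for sp in range(1, 16, 3)]   # column 2 cells (5 rows used)
rows_for_first_col = [sp // 3 for sp in range(0, 18, 3)]  # category index per cell
print(first_col)           # [1, 4, 7, 10, 13, 16]
print(second_col)          # [2, 5, 8, 11, 14]
print(rows_for_first_col)  # [0, 1, 2, 3, 4, 5]
```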
```
stem_cats = ['Psychology', 'Biology', 'Math and Statistics', 'Physical Sciences', 'Computer Science', 'Engineering']
lib_arts_cats = ['Foreign Languages', 'English', 'Communications and Journalism', 'Art and Performance', 'Social Sciences and History']
other_cats = ['Health Professions', 'Public Administration', 'Education', 'Agriculture','Business', 'Architecture']
fig = plt.figure(figsize=(16, 20))
## Generate first column of line charts. STEM degrees.
for sp in range(0,18,3):
cat_index = int(sp/3)
ax = fig.add_subplot(6,3,sp+1)
ax.plot(women_degrees['Year'], women_degrees[stem_cats[cat_index]], c=cb_dark_blue, label='Women', linewidth=3)
ax.plot(women_degrees['Year'], 100-women_degrees[stem_cats[cat_index]], c=cb_orange, label='Men', linewidth=3)
for key,spine in ax.spines.items():
spine.set_visible(False)
ax.set_xlim(1968, 2011)
ax.set_ylim(0,100)
ax.set_title(stem_cats[cat_index])
    ax.tick_params(bottom=False, top=False, left=False, right=False)
if cat_index == 0:
ax.text(2003, 85, 'Women')
ax.text(2005, 10, 'Men')
elif cat_index == 5:
ax.text(2005, 87, 'Men')
ax.text(2003, 7, 'Women')
## Generate second column of line charts. Liberal arts degrees.
for sp in range(1,16,3):
cat_index = int((sp-1)/3)
ax = fig.add_subplot(6,3,sp+1)
ax.plot(women_degrees['Year'], women_degrees[lib_arts_cats[cat_index]], c=cb_dark_blue, label='Women', linewidth=3)
ax.plot(women_degrees['Year'], 100-women_degrees[lib_arts_cats[cat_index]], c=cb_orange, label='Men', linewidth=3)
for key,spine in ax.spines.items():
spine.set_visible(False)
ax.set_xlim(1968, 2011)
ax.set_ylim(0,100)
ax.set_title(lib_arts_cats[cat_index])
    ax.tick_params(bottom=False, top=False, left=False, right=False)
if cat_index == 0:
ax.text(2003, 78, 'Women')
ax.text(2005, 18, 'Men')
## Generate third column of line charts. Other degrees.
for sp in range(2,20,3):
cat_index = int((sp-2)/3)
ax = fig.add_subplot(6,3,sp+1)
ax.plot(women_degrees['Year'], women_degrees[other_cats[cat_index]], c=cb_dark_blue, label='Women', linewidth=3)
ax.plot(women_degrees['Year'], 100-women_degrees[other_cats[cat_index]], c=cb_orange, label='Men', linewidth=3)
for key,spine in ax.spines.items():
spine.set_visible(False)
ax.set_xlim(1968, 2011)
ax.set_ylim(0,100)
ax.set_title(other_cats[cat_index])
    ax.tick_params(bottom=False, top=False, left=False, right=False)
if cat_index == 0:
ax.text(2003, 90, 'Women')
ax.text(2005, 5, 'Men')
elif cat_index == 5:
ax.text(2005, 62, 'Men')
ax.text(2003, 30, 'Women')
plt.show()
```
### Hiding x-axis labels
With seventeen line charts in one diagram, the non-data elements quickly clutter the field of view.
The most immediate issue that sticks out is the titles of some line charts overlapping with the x-axis labels for the line chart above it. If we remove the titles for each line chart, the viewer won't know what degree each line chart refers to.
Let's instead remove the x-axis labels for every line chart in a column except for the bottommost one.
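A minimal standalone sketch of the idea (it uses the non-interactive Agg backend rather than the notebook's inline backend): hide the x-axis tick labels on every axes except the bottommost one via `tick_params(labelbottom=...)`.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for a standalone script
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 1)
for ax in axes:
    ax.plot([1970, 2010], [20, 80])
# Hide x-axis tick labels everywhere except on the bottommost axes.
for ax in axes[:-1]:
    ax.tick_params(labelbottom=False)
axes[-1].tick_params(labelbottom=True)
```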
```
fig = plt.figure(figsize=(16, 28))
## Generate first column of line charts. STEM degrees.
for sp in range(0,18,3):
cat_index = int(sp/3)
ax = fig.add_subplot(6,3,sp+1)
ax.plot(women_degrees['Year'], women_degrees[stem_cats[cat_index]], c=cb_dark_blue, label='Women', linewidth=3)
ax.plot(women_degrees['Year'], 100-women_degrees[stem_cats[cat_index]], c=cb_orange, label='Men', linewidth=3)
for key,spine in ax.spines.items():
spine.set_visible(False)
ax.set_xlim(1968, 2011)
ax.set_ylim(0,100)
ax.set_title(stem_cats[cat_index])
    ax.tick_params(bottom=False, top=False, left=False, right=False)
    if cat_index == 0:
        ax.text(2003, 85, 'Women')
        ax.text(2005, 10, 'Men')
    elif cat_index == 5:
        ax.text(2005, 87, 'Men')
        ax.text(2003, 7, 'Women')
        ax.tick_params(labelbottom=True)
## Generate second column of line charts. Liberal arts degrees.
for sp in range(1,16,3):
cat_index = int((sp-1)/3)
ax = fig.add_subplot(6,3,sp+1)
ax.plot(women_degrees['Year'], women_degrees[lib_arts_cats[cat_index]], c=cb_dark_blue, label='Women', linewidth=3)
ax.plot(women_degrees['Year'], 100-women_degrees[lib_arts_cats[cat_index]], c=cb_orange, label='Men', linewidth=3)
for key,spine in ax.spines.items():
spine.set_visible(False)
ax.set_xlim(1968, 2011)
ax.set_ylim(0,100)
ax.set_title(lib_arts_cats[cat_index])
    ax.tick_params(bottom=False, top=False, left=False, right=False)
    if cat_index == 0:
        ax.text(2003, 78, 'Women')
        ax.text(2005, 18, 'Men')
    elif cat_index == 4:
        ax.tick_params(labelbottom=True)
## Generate third column of line charts. Other degrees.
for sp in range(2,20,3):
cat_index = int((sp-2)/3)
ax = fig.add_subplot(6,3,sp+1)
ax.plot(women_degrees['Year'], women_degrees[other_cats[cat_index]], c=cb_dark_blue, label='Women', linewidth=3)
ax.plot(women_degrees['Year'], 100-women_degrees[other_cats[cat_index]], c=cb_orange, label='Men', linewidth=3)
for key,spine in ax.spines.items():
spine.set_visible(False)
ax.set_xlim(1968, 2011)
ax.set_ylim(0,100)
ax.set_title(other_cats[cat_index])
    ax.tick_params(bottom=False, top=False, left=False, right=False)
    if cat_index == 0:
        ax.text(2003, 90, 'Women')
        ax.text(2005, 5, 'Men')
    elif cat_index == 5:
        ax.text(2005, 62, 'Men')
        ax.text(2003, 30, 'Women')
        ax.tick_params(labelbottom=True)
plt.show()
```
### Setting y-axis labels
Removing the x-axis labels for all but the bottommost plots solved the issue we noticed with the overlapping text. In addition, the plots are cleaner and more readable.
The trade-off we made is that it's now more difficult for the viewer to discern approximately which years some interesting changes in trends may have happened. This is acceptable because we're primarily interested in enabling the viewer to quickly get a high level understanding of which degrees are prone to gender imbalance and how that has changed over time.
In the same vein of reducing clutter, let's also simplify the y-axis labels. Currently, all seventeen plots have six y-axis labels, and even though they are consistent across the plots, they still add to the visual clutter. By keeping just the starting and ending labels (`0` and `100`), we retain some of the benefit of having y-axis labels at all.
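A minimal standalone sketch of trimming the y-axis down to its endpoint labels (Agg backend assumed):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for a standalone script
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1970, 2010], [20, 80])
ax.set_ylim(0, 100)
ax.set_yticks([0, 100])  # keep only the endpoint labels
```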
```
fig = plt.figure(figsize=(18, 30))
## Generate first column of line charts. STEM degrees.
for sp in range(0,18,3):
cat_index = int(sp/3)
ax = fig.add_subplot(6,3,sp+1)
ax.plot(women_degrees['Year'], women_degrees[stem_cats[cat_index]], c=cb_dark_blue, label='Women', linewidth=3)
ax.plot(women_degrees['Year'], 100-women_degrees[stem_cats[cat_index]], c=cb_orange, label='Men', linewidth=3)
for key,spine in ax.spines.items():
spine.set_visible(False)
ax.set_xlim(1968, 2011)
ax.set_ylim(0,100)
ax.set_title(stem_cats[cat_index])
    ax.tick_params(bottom=False, top=False, left=False, right=False)
    ax.set_yticks([0,100])
    if cat_index == 0:
        ax.text(2003, 85, 'Women')
        ax.text(2005, 10, 'Men')
    elif cat_index == 5:
        ax.text(2005, 87, 'Men')
        ax.text(2003, 7, 'Women')
        ax.tick_params(labelbottom=True)
## Generate second column of line charts. Liberal arts degrees.
for sp in range(1,16,3):
cat_index = int((sp-1)/3)
ax = fig.add_subplot(6,3,sp+1)
ax.plot(women_degrees['Year'], women_degrees[lib_arts_cats[cat_index]], c=cb_dark_blue, label='Women', linewidth=3)
ax.plot(women_degrees['Year'], 100-women_degrees[lib_arts_cats[cat_index]], c=cb_orange, label='Men', linewidth=3)
for key,spine in ax.spines.items():
spine.set_visible(False)
ax.set_xlim(1968, 2011)
ax.set_ylim(0,100)
ax.set_title(lib_arts_cats[cat_index])
    ax.tick_params(bottom=False, top=False, left=False, right=False)
    ax.set_yticks([0,100])
    if cat_index == 0:
        ax.text(2003, 78, 'Women')
        ax.text(2005, 18, 'Men')
    elif cat_index == 4:
        ax.tick_params(labelbottom=True)
## Generate third column of line charts. Other degrees.
for sp in range(2,20,3):
cat_index = int((sp-2)/3)
ax = fig.add_subplot(6,3,sp+1)
ax.plot(women_degrees['Year'], women_degrees[other_cats[cat_index]], c=cb_dark_blue, label='Women', linewidth=3)
ax.plot(women_degrees['Year'], 100-women_degrees[other_cats[cat_index]], c=cb_orange, label='Men', linewidth=3)
for key,spine in ax.spines.items():
spine.set_visible(False)
ax.set_xlim(1968, 2011)
ax.set_ylim(0,100)
ax.set_title(other_cats[cat_index])
    ax.tick_params(bottom=False, top=False, left=False, right=False)
    ax.set_yticks([0,100])
    if cat_index == 0:
        ax.text(2003, 90, 'Women')
        ax.text(2005, 5, 'Men')
    elif cat_index == 5:
        ax.text(2005, 62, 'Men')
        ax.text(2003, 30, 'Women')
        ax.tick_params(labelbottom=True)
plt.show()
```
### Adding a horizontal line
While removing most of the y-axis labels definitely reduced clutter, it also made it hard to understand which degrees have close to 50-50 gender breakdown. While keeping all of the y-axis labels would have made it easier, we can actually do one better and use a horizontal line across all of the line charts where the y-axis label `50` would have been.
We can generate a horizontal line across an entire subplot using the `Axes.axhline()` method.
#### For all plots:
- Generate a horizontal line with the following properties:
- Starts at the y-axis position 50
- Set to the third color (light gray) in the Color Blind 10 palette
- Has a transparency of 0.3
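A minimal standalone sketch of `Axes.axhline()` with these properties (Agg backend assumed; the RGB triple is the light gray used later in this notebook):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for a standalone script
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1970, 2010], [20, 80])
ax.set_ylim(0, 100)
# Horizontal reference line at the 50% mark: light gray, mostly transparent.
line = ax.axhline(50, c=(171/255, 171/255, 171/255), alpha=0.3)
```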
```
fig = plt.figure(figsize=(16, 26))
## Generate first column of line charts. STEM degrees.
for sp in range(0,18,3):
cat_index = int(sp/3)
ax = fig.add_subplot(6,3,sp+1)
ax.plot(women_degrees['Year'], women_degrees[stem_cats[cat_index]], c=cb_dark_blue, label='Women', linewidth=3)
ax.plot(women_degrees['Year'], 100-women_degrees[stem_cats[cat_index]], c=cb_orange, label='Men', linewidth=3)
for key,spine in ax.spines.items():
spine.set_visible(False)
ax.set_xlim(1968, 2011)
ax.set_ylim(0,100)
ax.set_title(stem_cats[cat_index])
    ax.tick_params(bottom=False, top=False, left=False, right=False)
    ax.set_yticks([0,100])
    ax.axhline(50, c=(171/255, 171/255, 171/255), alpha=0.3)
    if cat_index == 0:
        ax.text(2003, 85, 'Women')
        ax.text(2005, 10, 'Men')
    elif cat_index == 5:
        ax.text(2005, 87, 'Men')
        ax.text(2003, 7, 'Women')
        ax.tick_params(labelbottom=True)
## Generate second column of line charts. Liberal arts degrees.
for sp in range(1,16,3):
cat_index = int((sp-1)/3)
ax = fig.add_subplot(6,3,sp+1)
ax.plot(women_degrees['Year'], women_degrees[lib_arts_cats[cat_index]], c=cb_dark_blue, label='Women', linewidth=3)
ax.plot(women_degrees['Year'], 100-women_degrees[lib_arts_cats[cat_index]], c=cb_orange, label='Men', linewidth=3)
for key,spine in ax.spines.items():
spine.set_visible(False)
ax.set_xlim(1968, 2011)
ax.set_ylim(0,100)
ax.set_title(lib_arts_cats[cat_index])
    ax.tick_params(bottom=False, top=False, left=False, right=False)
    ax.set_yticks([0,100])
    ax.axhline(50, c=(171/255, 171/255, 171/255), alpha=0.3)
    if cat_index == 0:
        ax.text(2003, 78, 'Women')
        ax.text(2005, 18, 'Men')
    elif cat_index == 4:
        ax.tick_params(labelbottom=True)
## Generate third column of line charts. Other degrees.
for sp in range(2,20,3):
cat_index = int((sp-2)/3)
ax = fig.add_subplot(6,3,sp+1)
ax.plot(women_degrees['Year'], women_degrees[other_cats[cat_index]], c=cb_dark_blue, label='Women', linewidth=3)
ax.plot(women_degrees['Year'], 100-women_degrees[other_cats[cat_index]], c=cb_orange, label='Men', linewidth=3)
for key,spine in ax.spines.items():
spine.set_visible(False)
ax.set_xlim(1968, 2011)
ax.set_ylim(0,100)
ax.set_title(other_cats[cat_index])
    ax.tick_params(bottom=False, top=False, left=False, right=False)
    ax.set_yticks([0,100])
    ax.axhline(50, c=(171/255, 171/255, 171/255), alpha=0.3)
    if cat_index == 0:
        ax.text(2003, 90, 'Women')
        ax.text(2005, 5, 'Men')
    elif cat_index == 5:
        ax.text(2005, 62, 'Men')
        ax.text(2003, 30, 'Women')
        ax.tick_params(labelbottom=True)
plt.show()
```
## Exporting to a file
With the current backend we're using, we can use `Figure.savefig()` or `pyplot.savefig()` to export all of the plots contained in the figure as a single image file. Note that these have to be called before we display the figure using `pyplot.show()`.
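A minimal standalone sketch (Agg backend; the filename here is just an example): save first, then show.

```python
import os
import matplotlib
matplotlib.use("Agg")  # file-writing backend; no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1970, 2010], [20, 80])
# Save before plt.show(): once a figure has been shown (and closed),
# it may no longer be available for export in some environments.
fig.savefig("example_chart.png", dpi=100, bbox_inches="tight")
```

`bbox_inches="tight"` trims surrounding whitespace, which helps when subplot labels run close to the figure edge.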
```
# Note: to export without a display, switch to the Agg backend first.
# Use a taller figure below, since the labels were overlapping.
fig = plt.figure(figsize=(16, 32))
## Generate first column of line charts. STEM degrees.
for sp in range(0,18,3):
cat_index = int(sp/3)
ax = fig.add_subplot(6,3,sp+1)
ax.plot(women_degrees['Year'], women_degrees[stem_cats[cat_index]], c=cb_dark_blue, label='Women', linewidth=3)
ax.plot(women_degrees['Year'], 100-women_degrees[stem_cats[cat_index]], c=cb_orange, label='Men', linewidth=3)
for key,spine in ax.spines.items():
spine.set_visible(False)
ax.set_xlim(1968, 2011)
ax.set_ylim(0,100)
ax.set_title(stem_cats[cat_index])
    ax.tick_params(bottom=False, top=False, left=False, right=False)
    ax.axhline(50, c=(171/255, 171/255, 171/255), alpha=0.3)
    ax.set_yticks([0,100])
    if cat_index == 0:
        ax.text(2003, 85, 'Women')
        ax.text(2005, 10, 'Men')
    elif cat_index == 5:
        ax.text(2005, 87, 'Men')
        ax.text(2003, 7, 'Women')
        ax.tick_params(labelbottom=True)
## Generate second column of line charts. Liberal arts degrees.
for sp in range(1,16,3):
cat_index = int((sp-1)/3)
ax = fig.add_subplot(6,3,sp+1)
ax.plot(women_degrees['Year'], women_degrees[lib_arts_cats[cat_index]], c=cb_dark_blue, label='Women', linewidth=3)
ax.plot(women_degrees['Year'], 100-women_degrees[lib_arts_cats[cat_index]], c=cb_orange, label='Men', linewidth=3)
for key,spine in ax.spines.items():
spine.set_visible(False)
ax.set_xlim(1968, 2011)
ax.set_ylim(0,100)
ax.set_title(lib_arts_cats[cat_index])
    ax.tick_params(bottom=False, top=False, left=False, right=False)
    ax.axhline(50, c=(171/255, 171/255, 171/255), alpha=0.3)
    ax.set_yticks([0,100])
    if cat_index == 0:
        ax.text(2003, 75, 'Women')
        ax.text(2005, 20, 'Men')
    elif cat_index == 4:
        ax.tick_params(labelbottom=True)
## Generate third column of line charts. Other degrees.
for sp in range(2,20,3):
cat_index = int((sp-2)/3)
ax = fig.add_subplot(6,3,sp+1)
ax.plot(women_degrees['Year'], women_degrees[other_cats[cat_index]], c=cb_dark_blue, label='Women', linewidth=3)
ax.plot(women_degrees['Year'], 100-women_degrees[other_cats[cat_index]], c=cb_orange, label='Men', linewidth=3)
for key,spine in ax.spines.items():
spine.set_visible(False)
ax.set_xlim(1968, 2011)
ax.set_ylim(0,100)
ax.set_title(other_cats[cat_index])
    ax.tick_params(bottom=False, top=False, left=False, right=False)
    ax.axhline(50, c=(171/255, 171/255, 171/255), alpha=0.3)
    ax.set_yticks([0,100])
    if cat_index == 0:
        ax.text(2003, 90, 'Women')
        ax.text(2005, 5, 'Men')
    elif cat_index == 5:
        ax.text(2005, 62, 'Men')
        ax.text(2003, 30, 'Women')
        ax.tick_params(labelbottom=True)
# Export file before calling pyplot.show()
fig.savefig("gender_degrees.png")
plt.show()
```
```
import os
import sys
import networkx as nx
import pandas as pd
import community as community_louvain
import networkx.algorithms.community as nx_comm
nb_dir = os.path.split(os.getcwd())[0]
if nb_dir not in sys.path:
sys.path.append(nb_dir)
os.chdir('/home/tduricic/Development/workspace/structure-in-gnn')
from src.data import dataset_handle
os.getcwd()
def generate_conf_model(G,seed=0):
din=[x[1] for x in G.in_degree()]
dout=[x[1] for x in G.out_degree()]
GNoisy=nx.directed_configuration_model(din,dout,create_using=nx.DiGraph(),seed=seed)
keys = [x[0] for x in G.in_degree()]
G_mapping = dict(zip(range(len(G.nodes())),keys))
G_rev_mapping = dict(zip(keys,range(len(G.nodes()))))
GNoisy = nx.relabel_nodes(GNoisy,G_mapping)
return GNoisy
def generate_modified_conf_model(G, seed=0):
node_labels_dict = nx.get_node_attributes(G,'label')
    unique_node_labels = set(node_labels_dict.values())
same_label_subgraphs = {}
for node_label in unique_node_labels:
same_label_subgraphs[node_label] = nx.DiGraph()
edges_to_remove = []
for edge in G.edges:
if edge[0] in node_labels_dict and edge[1] in node_labels_dict and node_labels_dict[edge[0]] == node_labels_dict[edge[1]]:
node_label = G.nodes(data=True)[edge[0]]['label']
same_label_subgraphs[node_label].add_edge(edge[0], edge[1])
edges_to_remove.append((edge[0], edge[1]))
G.remove_edges_from(edges_to_remove)
for label in same_label_subgraphs:
G.add_edges_from(generate_conf_model(same_label_subgraphs[label], seed).edges)
return G
# Download datasets
dataset_handle.create_cora()
dataset_handle.create_citeseer()
dataset_handle.create_texas()
dataset_handle.create_washington()
dataset_handle.create_wisconsin()
# dataset_handle.create_cornell()
dataset_handle.create_webkb()
dataset_handle.create_pubmed()
# Create G and label CM G from datasets
datasets = ['cora', 'citeseer', 'webkb', 'pubmed'] #'cornell']
graphs = {}
for dataset in datasets:
graph_filepath = 'data/graphs/processed/'+ dataset +'/'+ dataset +'.cites'
original_G = nx.Graph()
with open(graph_filepath, 'r') as f:
for line in f:
edge = line.split()
original_G.add_edge(edge[0], edge[1])
node_labels = {}
features_filepath = 'data/graphs/processed/' + dataset + '/' + dataset + '.content'
with open(features_filepath, 'r') as f:
for line in f:
features = line.split()
node_id = features[0]
label = features[-1]
node_labels[node_id] = label
for node_id in original_G.nodes:
if node_id in node_labels:
original_G.nodes[node_id]['label'] = node_labels[node_id]
    label_cm_G = generate_modified_conf_model(original_G.copy())  # copy, since the function mutates its argument
graphs[dataset] = {'original': original_G, 'label_cm':label_cm_G}
# Run community detection and calculate modularity on original and CM graphs
for run_iteration in range(1,11):
print(run_iteration)
for dataset in datasets:
if run_iteration not in graphs[dataset]:
graphs[dataset][run_iteration] = {}
original_G_best_partition = community_louvain.best_partition(graphs[dataset]['original'])
label_cm_G_best_partition = community_louvain.best_partition(graphs[dataset]['label_cm'])
graphs[dataset][run_iteration]['num_communities_original'] = len(set(original_G_best_partition.values()))
graphs[dataset][run_iteration]['num_communities_label_cm'] = len(set(label_cm_G_best_partition.values()))
original_communities = {}
for node_id in original_G_best_partition:
community_id = original_G_best_partition[node_id]
if community_id not in original_communities:
original_communities[community_id] = []
original_communities[community_id].append(node_id)
label_cm_communities = {}
for node_id in label_cm_G_best_partition:
community_id = label_cm_G_best_partition[node_id]
if community_id not in label_cm_communities:
label_cm_communities[community_id] = []
label_cm_communities[community_id].append(node_id)
graphs[dataset][run_iteration]['modularity_original'] = nx_comm.modularity(graphs[dataset]['original'], list(original_communities.values()))
graphs[dataset][run_iteration]['modularity_label_cm'] = nx_comm.modularity(graphs[dataset]['label_cm'], list(label_cm_communities.values()))
boxplot_values = []
for dataset in graphs:
for run_iteration in range(1,11):
boxplot_values.append({
'dataset':dataset,
'Number of communities':graphs[dataset][run_iteration]['num_communities_original'],
'Modularity':graphs[dataset][run_iteration]['modularity_original'],
'Graph type':'Original',
'run_iteration':run_iteration
})
boxplot_values.append({
'dataset':dataset,
'Number of communities':graphs[dataset][run_iteration]['num_communities_label_cm'],
'Modularity':graphs[dataset][run_iteration]['modularity_label_cm'],
'Graph type':'Label CM',
'run_iteration':run_iteration
})
df = pd.DataFrame(boxplot_values)
df
# Calculate d=2 UMAP embeddings for original and CM graphs
# Visualize UMAP embeddings and color by labels
import seaborn as sns  # needed for the boxplots below
sns.boxplot(y='Modularity', x='dataset',
data=df,
palette="colorblind",
hue='Graph type')
sns.boxplot(y='Number of communities', x='dataset',
data=df,
palette="colorblind",
hue='Graph type')
```
```
activations = [nn.ELU(),nn.LeakyReLU(),nn.PReLU(),nn.ReLU(),nn.ReLU6(),nn.RReLU(),nn.SELU(),nn.CELU(),nn.GELU(),nn.SiLU(),nn.Tanh()]
for activation in activations:
model = Test_Model(activation=activation)
optimizer = torch.optim.SGD(model.parameters(),lr=0.1)
criterion = nn.CrossEntropyLoss()
index = random.randint(0,29)
print(index)
wandb.init(project=PROJECT_NAME,name=f'activation-{activation}')
for _ in tqdm(range(EPOCHS)):
for i in range(0,len(X_train),BATCH_SIZE):
X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
y_batch = y_train[i:i+BATCH_SIZE].to(device)
model.to(device)
preds = model(X_batch.float())
            loss = criterion(preds, y_batch.long())
optimizer.zero_grad()
loss.backward()
optimizer.step()
        wandb.log({'loss': loss.item(), 'accuracy': test(model,X_train,y_train)*100, 'val_accuracy': test(model,X_test,y_test)*100, 'pred': torch.argmax(preds[index]), 'real': y_batch[index], 'val_loss': criterion(model(X_test.view(-1,1,112,112).float().to(device)), y_test.long().to(device)).item()})
print(f'{torch.argmax(preds[index])} \n {y_batch[index]}')
print(f'{torch.argmax(preds[1])} \n {y_batch[1]}')
print(f'{torch.argmax(preds[2])} \n {y_batch[2]}')
print(f'{torch.argmax(preds[3])} \n {y_batch[3]}')
print(f'{torch.argmax(preds[4])} \n {y_batch[4]}')
wandb.finish()
NAME = "change the conv2d"
BATCH_SIZE = 32
import os
import cv2
import torch
import numpy as np
def load_data(img_size=112):
data = []
index = -1
labels = {}
for directory in os.listdir('./data/'):
index += 1
labels[f'./data/{directory}/'] = [index,-1]
print(len(labels))
for label in labels:
for file in os.listdir(label):
filepath = label + file
img = cv2.imread(filepath,cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img,(img_size,img_size))
img = img / 255.0
data.append([
np.array(img),
labels[label][0]
])
labels[label][1] += 1
    # One in-place shuffle is enough to randomize the sample order.
    np.random.shuffle(data)
print(len(data))
    np.save('./data.npy', np.array(data, dtype=object))  # object dtype: items are ragged [image, label] pairs
return data
import torch
def other_loading_data_proccess(data):
X = []
y = []
print('going through the data..')
for d in data:
X.append(d[0])
y.append(d[1])
print('splitting the data')
VAL_SPLIT = 0.25
VAL_SPLIT = len(X)*VAL_SPLIT
VAL_SPLIT = int(VAL_SPLIT)
X_train = X[:-VAL_SPLIT]
y_train = y[:-VAL_SPLIT]
X_test = X[-VAL_SPLIT:]
y_test = y[-VAL_SPLIT:]
print('turning data to tensors')
X_train = torch.from_numpy(np.array(X_train))
y_train = torch.from_numpy(np.array(y_train))
X_test = torch.from_numpy(np.array(X_test))
y_test = torch.from_numpy(np.array(y_test))
return [X_train,X_test,y_train,y_test]
REBUILD_DATA = True
if REBUILD_DATA:
data = load_data()
np.random.shuffle(data)
X_train,X_test,y_train,y_test = other_loading_data_proccess(data)
import torch
import torch.nn as nn
import torch.nn.functional as F
# class Test_Model(nn.Module):
# def __init__(self):
# super().__init__()
# self.conv1 = nn.Conv2d(1, 6, 5)
# self.pool = nn.MaxPool2d(2, 2)
# self.conv2 = nn.Conv2d(6, 16, 5)
# self.fc1 = nn.Linear(16 * 25 * 25, 120)
# self.fc2 = nn.Linear(120, 84)
# self.fc3 = nn.Linear(84, 36)
# def forward(self, x):
# x = self.pool(F.relu(self.conv1(x)))
# x = self.pool(F.relu(self.conv2(x)))
# x = x.view(-1, 16 * 25 * 25)
# x = F.relu(self.fc1(x))
# x = F.relu(self.fc2(x))
# x = self.fc3(x)
# return x
class Test_Model(nn.Module):
def __init__(self):
super().__init__()
self.pool = nn.MaxPool2d(2, 2)
self.conv1 = nn.Conv2d(1, 32, 5)
self.conv3 = nn.Conv2d(32,64,5)
self.conv2 = nn.Conv2d(64, 128, 5)
self.fc1 = nn.Linear(128 * 10 * 10, 512)
self.fc2 = nn.Linear(512, 256)
self.fc4 = nn.Linear(256,128)
self.fc3 = nn.Linear(128, 36)
def forward(self, x,shape=False):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv3(x)))
x = self.pool(F.relu(self.conv2(x)))
if shape:
print(x.shape)
x = x.view(-1, 128 * 10 * 10)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc4(x))
x = self.fc3(x)
return x
device = torch.device('cuda')
model = Test_Model().to(device)
preds = model(X_test.reshape(-1,1,112,112).float().to(device),True)
preds[0]
optimizer = torch.optim.SGD(model.parameters(),lr=0.1)
criterion = nn.CrossEntropyLoss()
EPOCHS = 5
loss_logs = []
from tqdm import tqdm
PROJECT_NAME = "Sign-Language-Recognition"
def test(net,X,y):
correct = 0
total = 0
net.eval()
with torch.no_grad():
for i in range(len(X)):
            real_class = y[i].to(device)  # labels are integer class indices, not one-hot
net_out = net(X[i].view(-1,1,112,112).to(device).float())
net_out = net_out[0]
            predicted_class = torch.argmax(net_out)
            if predicted_class == real_class:
correct += 1
total += 1
return round(correct/total,3)
import wandb
len(os.listdir('./data/'))
import random
# index = random.randint(0,29)
# print(index)
# wandb.init(project=PROJECT_NAME,name=NAME)
# for _ in tqdm(range(EPOCHS)):
# for i in range(0,len(X_train),BATCH_SIZE):
# X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
# y_batch = y_train[i:i+BATCH_SIZE].to(device)
# model.to(device)
# preds = model(X_batch.float())
# loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long))
# optimizer.zero_grad()
# loss.backward()
# optimizer.step()
# wandb.log({'loss':loss.item(),'accuracy':test(model,X_train,y_train)*100,'val_accuracy':test(model,X_test,y_test)*100,'pred':torch.argmax(preds[index]),'real':torch.argmax(y_batch[index])})
# wandb.finish()
import matplotlib.pyplot as plt
import pandas as pd
df = pd.Series(loss_logs)
df.plot.line(figsize=(12,6))
test(model,X_test,y_test)
test(model,X_train,y_train)
preds
X_testing = X_train
y_testing = y_train
correct = 0
total = 0
model.eval()
with torch.no_grad():
for i in range(len(X_testing)):
        real_class = y_testing[i].to(device)  # labels are integer class indices, not one-hot
net_out = model(X_testing[i].view(-1,1,112,112).to(device).float())
net_out = net_out[0]
        predicted_class = torch.argmax(net_out)
        # print(predicted_class)
        if predicted_class == real_class:
correct += 1
total += 1
print(round(correct/total,3))
# for real,pred in zip(y_batch,preds):
# print(real)
# print(torch.argmax(pred))
# print('\n')
# conv2d_output
# conv2d_1_ouput
# conv2d_2_ouput
# output_fc1
# output_fc2
# output_fc4
# max_pool2d_keranl
# max_pool2d
# num_of_linear
# activation
# best num of epochs
# best optimizer
# best loss
## best lr
class Test_Model(nn.Module):
def __init__(self,conv2d_output=128,conv2d_1_ouput=32,conv2d_2_ouput=64,output_fc1=512,output_fc2=256,output_fc4=128,output=36,activation=F.relu,max_pool2d_keranl=2):
super().__init__()
self.conv2d_output = conv2d_output
self.pool = nn.MaxPool2d(max_pool2d_keranl)
self.conv1 = nn.Conv2d(1, conv2d_1_ouput, 5)
self.conv3 = nn.Conv2d(conv2d_1_ouput,conv2d_2_ouput,5)
self.conv2 = nn.Conv2d(conv2d_2_ouput, conv2d_output, 5)
self.fc1 = nn.Linear(conv2d_output * 10 * 10, output_fc1)
self.fc2 = nn.Linear(output_fc1, output_fc2)
self.fc4 = nn.Linear(output_fc2,output_fc4)
self.fc3 = nn.Linear(output_fc4, output)
self.activation = activation
def forward(self, x,shape=False):
x = self.pool(self.activation(self.conv1(x)))
x = self.pool(self.activation(self.conv3(x)))
x = self.pool(self.activation(self.conv2(x)))
if shape:
print(x.shape)
x = x.view(-1, self.conv2d_output * 10 * 10)
x = self.activation(self.fc1(x))
x = self.activation(self.fc2(x))
x = self.activation(self.fc4(x))
x = self.fc3(x)
return x
# conv2d_output
# conv2d_1_ouput
# conv2d_2_ouput
# output_fc1
# output_fc2
# output_fc4
# max_pool2d_keranl
# max_pool2d
# num_of_linear
# best num of epochs
# best optimizer
# best loss
## best lr
# batch size
EPOCHS = 3
BATCH_SIZE = 32
activations = [nn.ELU(),nn.LeakyReLU(),nn.PReLU(),nn.ReLU(),nn.ReLU6(),nn.RReLU(),nn.SELU(),nn.CELU(),nn.GELU(),nn.SiLU(),nn.Tanh()]
for activation in activations:
model = Test_Model(activation=activation)
optimizer = torch.optim.SGD(model.parameters(),lr=0.1)
criterion = nn.CrossEntropyLoss()
index = random.randint(0,29)
print(index)
wandb.init(project=PROJECT_NAME,name=f'activation-{activation}')
for _ in tqdm(range(EPOCHS)):
for i in range(0,len(X_train),BATCH_SIZE):
X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
y_batch = y_train[i:i+BATCH_SIZE].to(device)
model.to(device)
preds = model(X_batch.float())
            loss = criterion(preds, y_batch.long())
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'accuracy':test(model,X_train,y_train)*100,'val_accuracy':test(model,X_test,y_test)*100,'pred':torch.argmax(preds[index]),'real':torch.argmax(y_batch[index]),'val_loss':criterion(model(X_test),y_test).item()})
print(f'{torch.argmax(preds[index])} \n {y_batch[index]}')
print(f'{torch.argmax(preds[1])} \n {y_batch[1]}')
print(f'{torch.argmax(preds[2])} \n {y_batch[2]}')
print(f'{torch.argmax(preds[3])} \n {y_batch[3]}')
print(f'{torch.argmax(preds[4])} \n {y_batch[4]}')
wandb.finish()
activations = [nn.ELU(),nn.LeakyReLU(),nn.PReLU(),nn.ReLU(),nn.ReLU6(),nn.RReLU(),nn.SELU(),nn.CELU(),nn.GELU(),nn.SiLU(),nn.Tanh()]
for activation in activations:
model = Test_Model(activation=activation)
optimizer = torch.optim.SGD(model.parameters(),lr=0.1)
criterion = nn.CrossEntropyLoss()
index = random.randint(0,29)
print(index)
wandb.init(project=PROJECT_NAME,name=f'activation-{activation}')
for _ in tqdm(range(EPOCHS)):
for i in range(0,len(X_train),BATCH_SIZE):
X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
y_batch = y_train[i:i+BATCH_SIZE].to(device)
model.to(device)
preds = model(X_batch.float())
            loss = criterion(preds, y_batch.long())
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'accuracy':test(model,X_train,y_train)*100,'val_accuracy':test(model,X_test,y_test)*100,'pred':torch.argmax(preds[index]),'real':torch.argmax(y_batch[index]),'val_loss':criterion(model(X_test.view(-1,1,112,112)),y_test).item()})
print(f'{torch.argmax(preds[index])} \n {y_batch[index]}')
print(f'{torch.argmax(preds[1])} \n {y_batch[1]}')
print(f'{torch.argmax(preds[2])} \n {y_batch[2]}')
print(f'{torch.argmax(preds[3])} \n {y_batch[3]}')
print(f'{torch.argmax(preds[4])} \n {y_batch[4]}')
wandb.finish()
activations = [nn.ELU(),nn.LeakyReLU(),nn.PReLU(),nn.ReLU(),nn.ReLU6(),nn.RReLU(),nn.SELU(),nn.CELU(),nn.GELU(),nn.SiLU(),nn.Tanh()]
for activation in activations:
model = Test_Model(activation=activation)
optimizer = torch.optim.SGD(model.parameters(),lr=0.1)
criterion = nn.CrossEntropyLoss()
index = random.randint(0,29)
print(index)
wandb.init(project=PROJECT_NAME,name=f'activation-{activation}')
for _ in tqdm(range(EPOCHS)):
for i in range(0,len(X_train),BATCH_SIZE):
X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
y_batch = y_train[i:i+BATCH_SIZE].to(device)
model.to(device)
preds = model(X_batch.float())
            loss = criterion(preds, y_batch.long())
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'accuracy':test(model,X_train,y_train)*100,'val_accuracy':test(model,X_test,y_test)*100,'pred':torch.argmax(preds[index]),'real':torch.argmax(y_batch[index]),'val_loss':criterion(model(X_test.view(-1,1,112,112)).to(device),y_test).item()})
print(f'{torch.argmax(preds[index])} \n {y_batch[index]}')
print(f'{torch.argmax(preds[1])} \n {y_batch[1]}')
print(f'{torch.argmax(preds[2])} \n {y_batch[2]}')
print(f'{torch.argmax(preds[3])} \n {y_batch[3]}')
print(f'{torch.argmax(preds[4])} \n {y_batch[4]}')
wandb.finish()
activations = [nn.ELU(),nn.LeakyReLU(),nn.PReLU(),nn.ReLU(),nn.ReLU6(),nn.RReLU(),nn.SELU(),nn.CELU(),nn.GELU(),nn.SiLU(),nn.Tanh()]
for activation in activations:
    model = Test_Model(activation=activation)
    optimizer = torch.optim.SGD(model.parameters(),lr=0.1)
    criterion = nn.CrossEntropyLoss()
    index = random.randint(0,29)
    print(index)
    wandb.init(project=PROJECT_NAME,name=f'activation-{activation}')
    for _ in tqdm(range(EPOCHS)):
        for i in range(0,len(X_train),BATCH_SIZE):
            X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
            y_batch = y_train[i:i+BATCH_SIZE].to(device)
            model.to(device)
            preds = model(X_batch.float())
            loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        wandb.log({'loss':loss.item(),'accuracy':test(model,X_train,y_train)*100,'val_accuracy':test(model,X_test,y_test)*100,'pred':torch.argmax(preds[index]),'real':torch.argmax(y_batch[index]),'val_loss':criterion(model(X_test.view(-1,1,112,112)).to(device),torch.tensor(y_test,dtype=torch.long)).item()})
    print(f'{torch.argmax(preds[index])} \n {y_batch[index]}')
    print(f'{torch.argmax(preds[1])} \n {y_batch[1]}')
    print(f'{torch.argmax(preds[2])} \n {y_batch[2]}')
    print(f'{torch.argmax(preds[3])} \n {y_batch[3]}')
    print(f'{torch.argmax(preds[4])} \n {y_batch[4]}')
    wandb.finish()
def get_loss(criterion,y,model,X):
    preds = model(X.view(-1,1,112,112).to(device))
    preds.to(device)
    loss = criterion(preds,torch.tensor(y,dtype=torch.long))
    loss.backward()
    return loss.item()
activations = [nn.ELU(),nn.LeakyReLU(),nn.PReLU(),nn.ReLU(),nn.ReLU6(),nn.RReLU(),nn.SELU(),nn.CELU(),nn.GELU(),nn.SiLU(),nn.Tanh()]
for activation in activations:
    model = Test_Model(activation=activation)
    optimizer = torch.optim.SGD(model.parameters(),lr=0.1)
    criterion = nn.CrossEntropyLoss()
    index = random.randint(0,29)
    print(index)
    wandb.init(project=PROJECT_NAME,name=f'activation-{activation}')
    for _ in tqdm(range(EPOCHS)):
        for i in range(0,len(X_train),BATCH_SIZE):
            X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
            y_batch = y_train[i:i+BATCH_SIZE].to(device)
            model.to(device)
            preds = model(X_batch.float())
            loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        wandb.log({'loss':loss.item(),'accuracy':test(model,X_train,y_train)*100,'val_accuracy':test(model,X_test,y_test)*100,'pred':torch.argmax(preds[index]),'real':torch.argmax(y_batch[index]),'val_loss':get_loss(criterion,y_test,model,X_test)})
    print(f'{torch.argmax(preds[index])} \n {y_batch[index]}')
    print(f'{torch.argmax(preds[1])} \n {y_batch[1]}')
    print(f'{torch.argmax(preds[2])} \n {y_batch[2]}')
    print(f'{torch.argmax(preds[3])} \n {y_batch[3]}')
    print(f'{torch.argmax(preds[4])} \n {y_batch[4]}')
    wandb.finish()
def get_loss(criterion,y,model,X):
    preds = model(X.view(-1,1,112,112).to(device).float())
    preds.to(device)
    loss = criterion(preds,torch.tensor(y,dtype=torch.long))
    loss.backward()
    return loss.item()
activations = [nn.ELU(),nn.LeakyReLU(),nn.PReLU(),nn.ReLU(),nn.ReLU6(),nn.RReLU(),nn.SELU(),nn.CELU(),nn.GELU(),nn.SiLU(),nn.Tanh()]
for activation in activations:
    model = Test_Model(activation=activation)
    optimizer = torch.optim.SGD(model.parameters(),lr=0.1)
    criterion = nn.CrossEntropyLoss()
    index = random.randint(0,29)
    print(index)
    wandb.init(project=PROJECT_NAME,name=f'activation-{activation}')
    for _ in tqdm(range(EPOCHS)):
        for i in range(0,len(X_train),BATCH_SIZE):
            X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
            y_batch = y_train[i:i+BATCH_SIZE].to(device)
            model.to(device)
            preds = model(X_batch.float())
            loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        wandb.log({'loss':loss.item(),'accuracy':test(model,X_train,y_train)*100,'val_accuracy':test(model,X_test,y_test)*100,'pred':torch.argmax(preds[index]),'real':torch.argmax(y_batch[index]),'val_loss':get_loss(criterion,y_test,model,X_test)})
    print(f'{torch.argmax(preds[index])} \n {y_batch[index]}')
    print(f'{torch.argmax(preds[1])} \n {y_batch[1]}')
    print(f'{torch.argmax(preds[2])} \n {y_batch[2]}')
    print(f'{torch.argmax(preds[3])} \n {y_batch[3]}')
    print(f'{torch.argmax(preds[4])} \n {y_batch[4]}')
    wandb.finish()
```
# Rendezvous
Rendezvous problems involve the relative position, velocity, and acceleration of two objects in orbit around another (large) body: for example, two spacecraft in orbit around Earth.
```
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
```
## Relative coordinate system
Given two spacecraft A and B, where we know the position and velocity vectors at an instant in time (potentially via their orbital elements), we can determine the relative position, velocity, and acceleration of B from the local coordinate system of A.
The problem: given $\vec{R}_A$, $\vec{R}_B$, $\vec{V}_A$, and $\vec{V}_B$, find $\vec{r}_{\text{rel}}$, $\vec{v}_{\text{rel}}$, and $\vec{a}_{\text{rel}}$ of B with respect to A.
First, determine the relative vectors of B with respect to A in the geocentric equatorial frame:
$$
\begin{align}
\vec{R}_{B/A} = \vec{R}_{\text{rel}} &= \vec{R}_B - \vec{R}_A \\
\vec{V}_{\text{rel}} &= \vec{V}_B - \vec{V}_A - \vec{\Omega} \times \vec{R}_{\text{rel}} \\
\vec{A}_{\text{rel}} &= \vec{A}_B - \vec{A}_A - \vec{\dot{\Omega}} \times \vec{R}_{\text{rel}} - \vec{\Omega} \times \left( \vec{\Omega} \times \vec{R}_{\text{rel}} \right) - 2 \vec{\Omega} \times \vec{V}_{\text{rel}} \;,
\end{align}
$$
where
$$
\begin{align}
\vec{A}_A &= -\frac{\mu}{R_A^3} \vec{R}_A \\
\vec{A}_B &= -\frac{\mu}{R_B^3} \vec{R}_B \;.
\end{align}
$$
We need the angular velocity $\Omega$ and angular acceleration $\dot{\Omega}$ associated with the moving coordinate system of spacecraft A, where $\vec{V}_A = \vec{\Omega} \times \vec{R}_A$, and we can determine the angular momentum
$$
\vec{h}_A = \vec{R}_A \times \left( \vec{\Omega} \times \vec{R}_A \right) \;.
$$
Then,
$$
\begin{align}
\vec{\Omega} &= \frac{\vec{h}_A}{R_A^2} = \frac{\vec{R}_A \times \vec{V}_A}{R_A^2} \\
\vec{\dot{\Omega}} &= \frac{d}{dt} \vec{\Omega} = -\frac{2 \left( \vec{V}_A \cdot \vec{R}_A \right)}{R_A^2} \vec{\Omega}
\end{align}
$$
These vectors give us the relative position, velocity, and acceleration in the geocentric equatorial frame, but we would like them in the local coordinate frame of spacecraft A.
In other words, where is spacecraft B and how fast is it moving, from the perspective of spacecraft A?
First, define the local coordinate system of spacecraft A:
$$
\begin{align}
\hat{i} &= \frac{\vec{R}_A}{R_A} \\
\hat{k} &= \frac{\vec{h}_A}{h_A} \\
\hat{j} &= \hat{k} \times \hat{i}
\end{align}
$$
We can write these unit direction vectors with respect to the geocentric equatorial frame:
$$
\begin{align}
\hat{i} &= L_x \hat{I} + M_x \hat{J} + N_x \hat{K} \\
\hat{j} &= L_y \hat{I} + M_y \hat{J} + N_y \hat{K} \\
\hat{k} &= L_z \hat{I} + M_z \hat{J} + N_z \hat{K} \;,
\end{align}
$$
where $L_x$ is the first element of $\hat{i}$, and so on.
The rotation matrix for converting from the local coordinate system to the geocentric equatorial frame is
$$
[Q_{xX}] = \begin{bmatrix} L_x & M_x & N_x \\
L_y & M_y & N_y \\
L_z & M_z & N_z \end{bmatrix}
$$
Then, we can determine the relative vectors from the local coordinate system of A:
$$
\begin{align}
\vec{r}_{\text{rel}} &= [Q_{xX}]^{\intercal} \vec{R}_{\text{rel}} \\
\vec{v}_{\text{rel}} &= [Q_{xX}]^{\intercal} \vec{V}_{\text{rel}} \\
\vec{a}_{\text{rel}} &= [Q_{xX}]^{\intercal} \vec{A}_{\text{rel}}
\end{align}
$$
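The transformation above is straightforward to implement with NumPy. The sketch below follows the equations in this section; the function name and the value of $\mu$ are assumptions for illustration, not part of the text.

```python
import numpy as np

MU = 398600.0  # km^3/s^2, Earth's gravitational parameter (assumed value)

def relative_state(R_A, V_A, R_B, V_B, mu=MU):
    """Position, velocity, and acceleration of B in A's comoving frame."""
    R_A, V_A, R_B, V_B = (np.asarray(x, dtype=float) for x in (R_A, V_A, R_B, V_B))
    r_A = np.linalg.norm(R_A)
    h_A = np.cross(R_A, V_A)
    # Angular velocity and acceleration of A's rotating frame
    Omega = h_A / r_A**2
    Omega_dot = -2.0 * np.dot(V_A, R_A) / r_A**2 * Omega
    # Two-body gravitational accelerations in the inertial frame
    A_A = -mu / r_A**3 * R_A
    A_B = -mu / np.linalg.norm(R_B)**3 * R_B
    # Relative vectors in the geocentric equatorial frame
    R_rel = R_B - R_A
    V_rel = V_B - V_A - np.cross(Omega, R_rel)
    A_rel = (A_B - A_A - np.cross(Omega_dot, R_rel)
             - np.cross(Omega, np.cross(Omega, R_rel))
             - 2.0 * np.cross(Omega, V_rel))
    # Unit vectors of A's local frame; stacking them as rows gives the matrix
    # that projects a geocentric vector onto A's axes
    i_hat = R_A / r_A
    k_hat = h_A / np.linalg.norm(h_A)
    j_hat = np.cross(k_hat, i_hat)
    Q = np.vstack((i_hat, j_hat, k_hat))
    return Q @ R_rel, Q @ V_rel, Q @ A_rel
```

A quick sanity check: if A and B share the same state vector, all three relative quantities must vanish.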
## Linear orbit theory
We may also want to know how close or far apart two objects are as a function of time.
For example, if the Space Shuttle releases a satellite with some initial relative velocity,
how far away is it after some time?
We can answer this using linear orbit theory.
First, define the position of the object B relative to that of object A: $\vec{r} = \vec{R} + \vec{\delta}_r$, where $\vec{\delta}_r$ is relatively small (compared to orbital distances),
and insert this into the equation of motion:
$$
\vec{\ddot{R}} + \vec{\ddot{\delta}_r} = -\frac{\mu}{r^3} \left( \vec{R} + \vec{\delta}_r \right) \;,
$$
where we can express the position of the second object in the denominator using a binomial expansion:
$$
r^{-3} = \left(r^2 \right)^{-3/2} \approx R^{-3} \left[ 1 - \frac{3 \, \vec{R} \cdot \vec{\delta}_r}{R^2} \right] \;.
$$
Then, based on the assumptions that $\vec{R} = R \hat{i}$ and that the orbit of object A
is circular, we can solve for the relative position vector of object B as a function of time
using the **Clohessy-Wiltshire equations** (i.e., modified Hill's equations):
$$
\begin{align}
\ddot{\delta_x} - 3 n^2 \delta_x - 2 n \dot{\delta_y} &= 0 \\
\ddot{\delta_y} + 2 n \dot{\delta_x} &= 0 \\
\ddot{\delta_z} + n^2 \delta_z &= 0 \;,
\end{align}
$$
where $n = \sqrt{\mu / R^3}$ is the mean motion.
To solve for the relative position vector as a function of time, we can
decompose this into a system of six first-order ODEs, and integrate in time.
The initial conditions would be that at $t = 0$,
the initial position components are zero ($\delta_x, \delta_y, \delta_z = 0$),
and the initial velocity components $\dot{\delta}_x$, $\dot{\delta}_y$, and $\dot{\delta}_z$ are given.
After some time, the distance of object B from A would then be $\delta_r = \sqrt{\delta_x^2 + \delta_y^2 + \delta_z^2}$.
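The integration described above can be sketched as follows; the function names, the sample mean motion in the usage note, and the use of `scipy.integrate.solve_ivp` are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def cw_rhs(t, s, n):
    """Clohessy-Wiltshire equations as six first-order ODEs.
    State s = [dx, dy, dz, dx_dot, dy_dot, dz_dot]."""
    dx, dy, dz, dxd, dyd, dzd = s
    return [dxd, dyd, dzd,
            3.0 * n**2 * dx + 2.0 * n * dyd,  # delta_x'' = 3 n^2 delta_x + 2 n delta_y'
            -2.0 * n * dxd,                   # delta_y'' = -2 n delta_x'
            -n**2 * dz]                       # delta_z'' = -n^2 delta_z

def propagate_cw(v0, t_end, n, num=200):
    """Propagate from zero offset with initial relative velocity v0,
    returning times, states, and the separation delta_r over time."""
    s0 = [0.0, 0.0, 0.0, *v0]
    t_eval = np.linspace(0.0, t_end, num)
    sol = solve_ivp(cw_rhs, (0.0, t_end), s0, t_eval=t_eval, args=(n,),
                    rtol=1e-10, atol=1e-12)
    delta_r = np.linalg.norm(sol.y[:3], axis=0)
    return sol.t, sol.y, delta_r
```

For a purely out-of-plane release ($\dot{\delta}_z$ only), the $z$ equation decouples and the numerical solution should match $\delta_z(t) = (\dot{\delta}_z(0)/n) \sin nt$.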
## Cowell's method
We may also want to keep track of the original orbit along with the new orbit,
both in the case of a rendezvous problem but also when analyzing orbital perturbations.
**Cowell's method** allows us to track/integrate the original Kepler orbit,
along with the *perturbed* orbit (i.e., the osculating orbit) of either the
original vehicle or an ejected object.
First, define the position of the object in the perturbed orbit:
$\vec{r} = \vec{\rho} + \vec{\delta}_r$, where $\vec{\rho}$ is the position in the
original Kepler orbit.
Then, we can write equations of motion for both orbits:
$$
\begin{align}
\vec{\ddot{\rho}} + \frac{\mu}{\rho^3} \vec{\rho} &= 0 \\
\vec{\ddot{\delta}}_r + \frac{\mu}{\delta_r^3} \vec{\delta}_r &= \vec{a}_p \;,
\end{align}
$$
where $\vec{a}_p$ is the acceleration of a perturbing force.
In the case of thrust, this is
$$
\vec{a}_p = \frac{T \vec{\dot{\delta}}_r}{\dot{\delta}_r m} \;,
$$
but this term can also include other perturbations such as atmospheric drag.
## Encke formulation
In the Encke formulation, we integrate the deviation $\vec{\delta}_r$ of the perturbed orbit from the reference Kepler orbit $\vec{\rho}$ directly, along with the reference orbit itself and the vehicle mass:
$$
\begin{align}
\vec{\ddot{\delta}}_r &= \frac{\mu}{\rho^3} \left( F \vec{r} - \vec{\delta}_r \right) + \frac{T \vec{v}}{v m} \\
\vec{\ddot{\rho}} &= -\frac{\mu}{\rho^3} \vec{\rho} \\
\dot{m} &= -\frac{T}{I_{\text{sp}} g_0} \;,
\end{align}
$$
where
$$
\begin{align}
\phi &= \frac{-(\delta_x \rho_x + \delta_y \rho_y + \delta_z \rho_z )}{\rho^2} \\
F &= 1 - (1 - 2 \phi)^{-3/2} \\
\vec{v} &= \vec{\dot{\delta}}_r + \vec{\dot{\rho}} \\
\vec{r} &= \vec{\delta}_r + \vec{\rho} \;.
\end{align}
$$
In three dimensions, this problem requires the integration of 13 first-order ODEs in time for 13 variables, all of which need initial conditions.
For example, if we are looking for the new perturbed orbit after some engine burn for
some time, then the initial relative position and velocity components would be zero:
$\delta_x, \delta_y, \delta_z, \dot{\delta}_x, \dot{\delta}_y, \dot{\delta}_z = 0$,
the initial position and velocity vectors of the Kepler orbit would be based on the
state vector where the burn starts ($\rho_x, \rho_y, \rho_z, \dot{\rho}_x, \dot{\rho}_y, \dot{\rho}_z$), and the initial mass is the starting mass of the vehicle ($m = m_0$).
After integrating for some time, we can determine the new position and velocity:
$$
\begin{align}
\vec{R} &= \left[ \rho_x + \delta_x , \rho_y + \delta_y, \rho_z + \delta_z \right] \\
\vec{V} &= \left[ \dot{\rho}_x + \dot{\delta}_x , \dot{\rho}_y + \dot{\delta}_y, \dot{\rho}_z + \dot{\delta}_z \right] \;.
\end{align}
$$
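The 13-ODE system described above can be sketched as a single right-hand-side function; the constants and names are assumptions, and thrust is taken along the instantaneous velocity as in the equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 398600.0      # km^3/s^2, Earth's gravitational parameter (assumed)
G0 = 9.80665e-3    # km/s^2, standard gravity

def encke_rhs(t, s, thrust, isp):
    """Right-hand side of the 13 first-order Encke ODEs.
    State s = [delta (3), delta_dot (3), rho (3), rho_dot (3), m]."""
    delta, delta_dot = s[0:3], s[3:6]
    rho, rho_dot, m = s[6:9], s[9:12], s[12]
    rho_mag = np.linalg.norm(rho)
    v = delta_dot + rho_dot          # velocity in the perturbed orbit
    r = delta + rho                  # position in the perturbed orbit
    phi = -np.dot(delta, rho) / rho_mag**2
    F = 1.0 - (1.0 - 2.0 * phi)**(-1.5)
    # Thrust is applied along the instantaneous velocity direction
    delta_ddot = MU / rho_mag**3 * (F * r - delta) + thrust * v / (np.linalg.norm(v) * m)
    rho_ddot = -MU / rho_mag**3 * rho
    m_dot = -thrust / (isp * G0)
    return np.concatenate([delta_dot, delta_ddot, rho_dot, rho_ddot, [m_dot]])
```

With zero thrust and zero initial deviation, no deviation should build up and the reference circular orbit should keep its radius.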
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
import sys
from pathlib import Path
sys.path.append(str(Path.cwd().parent))
from typing import Tuple
import numpy as np
import pandas as pd
from statsmodels.graphics import tsaplots
from load_dataset import Dataset
import matplotlib.pyplot as plt
import plotting
from statsmodels.tsa.stattools import adfuller
```
### Example of how stl works
Do not import or run the cells below until the "Task" block.
```
from stl import detect_ts, extract_trend, extract_seasonality
```
#### Let's take a typical example of a series with a trend and seasonality
```
dataset = Dataset('../data/dataset/')
ts = dataset['stl_example.csv']
ts.plot(figsize=(10, 5))
```
#### Extract the linear trend
```
trend = extract_trend(ts)[0]
trend.plot()
```
#### Subtract the trend from the original series
```
ts_detrended = ts - trend
ts_detrended.plot()
```
#### Extract the seasonality from the resulting series
```
season = extract_seasonality(ts_detrended, period=6)
season
season.plot(figsize=(10, 5))
```
#### Subtract the seasonality from the ts_detrended series and obtain the residuals
```
resid = ts_detrended - season
plotting.plot_ts(resid)
```
#### Since we removed the trend and seasonality from the series, the resulting residuals should, in theory, be stationary. Let's check this with the Dickey-Fuller test.
```
adfuller(resid.dropna())
```
### Task 1: implement a "naive" STL decomposition
Series: stl_example.csv
1. Approximate the series with a linear trend (the `extract_trend` function).
2. Subtract the linear trend from the series.
3. Find the seasonality period of the resulting series from its correlogram.
4. Obtain the seasonality with a median filter whose window equals period/k, choosing k yourself (usually 2-3) (`extract_seasonality`).
5. Subtract the trend and seasonality to obtain the residuals.
6. Check the residuals for stationarity.
`detect_ts` must return a tuple of (trend, seasonality, residuals)
```
def extract_trend(ts: pd.Series) -> Tuple[pd.Series, float, float]:
    """
    Extracts the linear trend from a time series.
    Must return trend, k, b: trend is a Series containing the fitted trend,
    k and b are the slope and bias (intercept), respectively.
    """
    # <your code here>
    return trend, k, b

def extract_seasonality(ts_detrended, period=None) -> pd.Series:
    """
    Extracts the seasonal component.
    """
    # <your code here>
    return season
```
### Task 2: find anomalies in the time series using the residuals obtained above
1. Compute the standard deviation of the residuals, std.
2. Compute a threshold on the residuals with the formula `threshold = k * std`; k is usually taken between 2 and 3.
3. Find anomalies as the points of the series whose absolute values exceed the threshold.
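The thresholding step could be sketched like this (the helper name is an assumption, and this is only one possible solution, not the reference one):

```python
import pandas as pd

def find_anomalies(resid: pd.Series, k: float = 3.0) -> pd.Series:
    """Return the points whose absolute value exceeds k * std (k usually 2 to 3)."""
    threshold = k * resid.std()
    return resid[resid.abs() > threshold]
```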
```
# your code here
```
### Task 3: forecast the series 6 periods ahead (36 points)
1. Extrapolate the linear trend.
2. Make a recursive forecast of the seasonal component with the formula y(t) = y(t-6).
3. Ideally, the residuals should be modeled with an ARMA model, but in our case simply forecast them with their mean value. (Since in our case it is 0, the residuals can be ignored entirely.)
4. Add up the resulting components to obtain the final forecast.
5. Profit!
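Steps 1, 2, and 4 could be sketched as follows (the function name and signature are assumptions; k and b are the trend parameters as returned by `extract_trend`):

```python
import numpy as np
import pandas as pd

def forecast_naive(k: float, b: float, season: pd.Series,
                   last_t: int, horizon: int = 36, period: int = 6) -> pd.Series:
    """Extrapolate the linear trend (k*t + b) and repeat the last observed
    seasonal period recursively (y(t) = y(t - period)); residuals are ignored."""
    t = np.arange(last_t + 1, last_t + 1 + horizon)
    trend_fc = k * t + b
    season_tail = season.iloc[-period:].to_numpy()
    season_fc = np.tile(season_tail, horizon // period + 1)[:horizon]
    return pd.Series(trend_fc + season_fc, index=t)
```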
```
# your code here
```
### Out-of-the-box STL decomposition: statsmodels
```
from statsmodels.tsa.seasonal import seasonal_decompose
decomp = seasonal_decompose(ts, period=6)
decomp.seasonal.plot()
decomp.trend.plot()
decomp.resid.plot()
adfuller(decomp.resid.dropna())
```
### Other packages
- stldecompose (also a "naive" implementation)
- pyloess (no updates in a long time)
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Tutorials/GlobalSurfaceWater/1_water_occurrence.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Tutorials/GlobalSurfaceWater/1_water_occurrence.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Tutorials/GlobalSurfaceWater/1_water_occurrence.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import geemap.eefolium as emap
except:
    import geemap as emap

# Authenticates and initializes Earth Engine
import ee
try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
###############################
# Asset List
###############################
gsw = ee.Image('JRC/GSW1_1/GlobalSurfaceWater')
occurrence = gsw.select('occurrence')
###############################
# Constants
###############################
VIS_OCCURRENCE = {
    'min': 0,
    'max': 100,
    'palette': ['red', 'blue']
}
VIS_WATER_MASK = {
    'palette': ['white', 'black']
}
###############################
# Calculations
###############################
# Create a water mask layer, and set the image mask so that non-water areas
# are opaque.
water_mask = occurrence.gt(90).selfMask()
###############################
# Initialize Map Location
###############################
# Uncomment one of the following statements to center the map.
# Map.setCenter(-90.162, 29.8597, 10) # New Orleans, USA
# Map.setCenter(-114.9774, 31.9254, 10) # Mouth of the Colorado River, Mexico
# Map.setCenter(-111.1871, 37.0963, 11) # Lake Powell, USA
# Map.setCenter(149.412, -35.0789, 11) # Lake George, Australia
# Map.setCenter(105.26, 11.2134, 9) # Mekong River Basin, SouthEast Asia
# Map.setCenter(90.6743, 22.7382, 10) # Meghna River, Bangladesh
# Map.setCenter(81.2714, 16.5079, 11) # Godavari River Basin Irrigation Project, India
# Map.setCenter(14.7035, 52.0985, 12) # River Oder, Germany & Poland
# Map.setCenter(-59.1696, -33.8111, 9) # Buenos Aires, Argentina
Map.setCenter(-74.4557, -8.4289, 11) # Ucayali River, Peru
###############################
# Map Layers
###############################
Map.addLayer(occurrence.updateMask(occurrence.divide(100)), VIS_OCCURRENCE, "Water Occurrence (1984-2018)")
Map.addLayer(water_mask, VIS_WATER_MASK, '90% occurrence water mask', False)
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
```
import math
import numpy as np
from joblib import load
from sklearn.ensemble import GradientBoostingClassifier
# Loading in final features test flows dicts
#
# Returns: all unknown test flows dict, mirror test flows dict, known test flows dict
def load_final_test_dicts(N):
    if N == 100:
        mirror_test_flows = load(f"../Feature-Vectors/train-test-flows/{N}-p-subflows/Final-80-Mirror-TEST-Flows-Dict")
        unknown_test_flows = load(f"../Feature-Vectors/train-test-flows/{N}-p-subflows/Final-80-Random-Unknown-TEST-Flows-Dict")
        known_test_flows = load(f"../Feature-Vectors/train-test-flows/{N}-p-subflows/Final-80-Random-Known-TEST-Flows-Dict")
    else:
        mirror_test_flows = load(f"../Feature-Vectors/train-test-flows/{N}-p-subflows/Final-80-Mirror-TEST-Flows-Dict")
        unknown_test_flows = load(f"../Feature-Vectors/train-test-flows/{N}-p-subflows/Final-80-Unknown-TEST-Flows-Dict")
        known_test_flows = load(f"../Feature-Vectors/train-test-flows/{N}-p-subflows/Final-80-Known-TEST-Flows-Dict")
    return unknown_test_flows, mirror_test_flows, known_test_flows
# Same as above but naive features
def load_naive_test_dicts(N):
    unknown_test_flows = load(f"../Feature-Vectors/train-test-flows/Basic-Features/{N}-p-subflows/Unknown-TEST-Flows-Dict")
    mirror_test_flows = load(f"../Feature-Vectors/train-test-flows/Basic-Features/{N}-p-subflows/Mirror-TEST-Flows-Dict")
    known_test_flows = load(f"../Feature-Vectors/train-test-flows/Basic-Features/{N}-p-subflows/Known-TEST-Flows-Dict")
    return unknown_test_flows, mirror_test_flows, known_test_flows
# Create dict of partial flows given a percentage of subflows to include and flow dict
# Partial flows must have at least 15 subflows to be included
def get_partial_flows(flow_dict, percentage):
    partial_flows = {}
    for flow in flow_dict:
        subflows = flow_dict[flow]
        portion = math.ceil(percentage * len(subflows))
        if len(subflows) < 15:
            continue
        partial_flows[flow] = subflows[:portion]
    return partial_flows
# Classify with both confidences, get uncertain percentage
# NOTE: uncertain count if maj=True is just count of flows classified with majority
def classify_confidences(label, known_con, unknown_con, con_thresh, maj=False):
    correct = 0
    uncertain = 0
    # Find the likelihood ratio that the confidence requires
    needed_ratio = con_thresh / (1 - con_thresh)
    needed_ratio = math.log(needed_ratio)
    assert len(known_con) == len(unknown_con)
    for con_ind in range(len(unknown_con)):
        k_con = known_con[con_ind]
        u_con = unknown_con[con_ind]
        assert not (k_con > needed_ratio and u_con > needed_ratio)
        if k_con > needed_ratio:
            if label == 1: correct += 1
        elif u_con > needed_ratio:
            if label == 0: correct += 1
        else:
            uncertain += 1
            # Classify by whichever confidence is larger if majority is true
            if maj:
                classification = 1 if k_con > u_con else 0
                if classification == label: correct += 1
    uncertain = uncertain/len(known_con)
    return correct, uncertain, correct/len(known_con)
# TODO: ONLY GET CONFIDENCE IF FLOW HAS > 15 SUBFLOWS
# Get known and unknown confidences for all flows in given flow dictionary,
# Given a trained GBDT model and bins/likelihoods
# Returned "flows" list IS NOT USED DON'T USE IT (doesn't reflect thrown out < 15 subflow flows)
def get_flow_confidences(flow_dict, known_bin_likelihoods, unknown_bin_likelihoods, gbdt):
    flows = list(flow_dict.keys())
    known_confidences = []
    unknown_confidences = []
    for flow in flow_dict:
        subflows = flow_dict[flow]
        if len(subflows) < 15:
            continue
        # Get predictions on the flow's subflows
        predictions = gbdt.predict(subflows)
        # Calculate sum of logs of known and unknown likelihoods for the flow's subflows
        known_ll_sum = 0
        unknown_ll_sum = 0
        for p in predictions:
            # DEBUG/TESTING CODE
            # print(f"Prediction: {p}")
            if p == 1:
                known_l = known_bin_likelihoods[1]
                unknown_l = known_bin_likelihoods[0]
                # print(f"Known L: {known_l} \t Unknown L: {unknown_l}")
            elif p == 0:
                known_l = unknown_bin_likelihoods[1]
                unknown_l = unknown_bin_likelihoods[0]
                # print(f"Known L: {known_l} \t Unknown L: {unknown_l}")
            known_ll_sum += known_l
            unknown_ll_sum += unknown_l
        # Known confidence is sum of known lls - sum of unknown lls (vice versa for unknown)
        # Which are just the likelihood ratios in the log space
        # print(f"Known Confidence Sum: {known_ll_sum} \t Unknown Confidence Sum: {unknown_ll_sum}")
        known_confidence = known_ll_sum - unknown_ll_sum
        unknown_confidence = unknown_ll_sum - known_ll_sum
        known_confidences.append(known_confidence)
        unknown_confidences.append(unknown_confidence)
    return flows, known_confidences, unknown_confidences
```
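The decision rule inside `classify_confidences` can be illustrated in isolation. The helper below is a simplified, hypothetical restatement (not part of the original code): a flow is labeled only when one of its log-likelihood-ratio confidences exceeds $\log(c/(1-c))$ for the chosen probability threshold $c$.

```python
import math

def decide(k_con, u_con, con_thresh=0.95):
    """Label a flow 'known', 'unknown', or 'uncertain' from its two
    log-likelihood-ratio confidences and a probability threshold."""
    needed = math.log(con_thresh / (1.0 - con_thresh))  # ~2.944 for 0.95
    if k_con > needed:
        return 'known'
    if u_con > needed:
        return 'unknown'
    return 'uncertain'
```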
### Naive Features
```
############################# 25 PACKET SUBFLOWS - GBDT
N = 25
unknown_test_flows, mirror_test_flows, known_test_flows = load_naive_test_dicts(N)
con_thresh = 0.95
# MIRROR UNKNOWN
print("MIRROR UNKNOWN ############################################")
name = f'{N}_M_n'
# Load models
mirror_known_bins = np.load(f"Models/{name}_Known_Bin_Ls.npy")
mirror_unknown_bins = np.load(f"Models/{name}_Unknown_Bin_Ls.npy")
gbdt = load(f'Models/{name}_GBDT')
# 25, 50, 75% of flows
p = [0.25, 0.5, 0.75]
for percentage in p:
    p_unknown_flows = get_partial_flows(mirror_test_flows, percentage)
    p_known_flows = get_partial_flows(known_test_flows, percentage)
    print(f"Flow Percentage: {percentage}")
    print(f"Full Unknown Flows Length: {len(mirror_test_flows)} \t Full Known Flows Length: {len(known_test_flows)}")
    print(f"Partial Unknown Flows Length: {len(p_unknown_flows)} \t Partial Known Flows Length: {len(p_known_flows)}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_known_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
    print(f"Known Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Known Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_unknown_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
    print(f"Unknown Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Unknown Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}\n")
# Full flows
flows, known_confidences, unknown_confidences = get_flow_confidences(known_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(mirror_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
# ALL UNKNOWN
print("\n\nALL UNKNOWN #############################################")
name = f'{N}_U_n'
# Load models
mirror_known_bins = np.load(f"Models/{name}_Known_Bin_Ls.npy")
mirror_unknown_bins = np.load(f"Models/{name}_Unknown_Bin_Ls.npy")
gbdt = load(f'Models/{name}_GBDT')
# 25, 50, 75% of flows
p = [0.25, 0.5, 0.75]
for percentage in p:
    p_unknown_flows = get_partial_flows(unknown_test_flows, percentage)
    p_known_flows = get_partial_flows(known_test_flows, percentage)
    print(f"Flow Percentage: {percentage}")
    print(f"Full Unknown Flows Length: {len(unknown_test_flows)} \t Full Known Flows Length: {len(known_test_flows)}")
    print(f"Partial Unknown Flows Length: {len(p_unknown_flows)} \t Partial Known Flows Length: {len(p_known_flows)}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_known_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
    print(f"Known Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Known Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_unknown_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
    print(f"Unknown Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Unknown Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}\n")
# Full flows
flows, known_confidences, unknown_confidences = get_flow_confidences(known_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(unknown_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
############################# 100 PACKET SUBFLOWS - GBDT
N = 100
unknown_test_flows, mirror_test_flows, known_test_flows = load_naive_test_dicts(N)
con_thresh = 0.95
# MIRROR UNKNOWN
print("MIRROR UNKNOWN ############################################")
name = f'{N}_M_n'
# Load models
mirror_known_bins = np.load(f"Models/{name}_Known_Bin_Ls.npy")
mirror_unknown_bins = np.load(f"Models/{name}_Unknown_Bin_Ls.npy")
gbdt = load(f'Models/{name}_GBDT')
# 25, 50, 75% of flows
p = [0.25, 0.5, 0.75]
for percentage in p:
    p_unknown_flows = get_partial_flows(mirror_test_flows, percentage)
    p_known_flows = get_partial_flows(known_test_flows, percentage)
    print(f"Flow Percentage: {percentage}")
    print(f"Full Unknown Flows Length: {len(mirror_test_flows)} \t Full Known Flows Length: {len(known_test_flows)}")
    print(f"Partial Unknown Flows Length: {len(p_unknown_flows)} \t Partial Known Flows Length: {len(p_known_flows)}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_known_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
    print(f"Known Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Known Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_unknown_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
    print(f"Unknown Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Unknown Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}\n")
# Full flows
flows, known_confidences, unknown_confidences = get_flow_confidences(known_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(mirror_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
# ALL UNKNOWN
print("\n\nALL UNKNOWN #############################################")
name = f'{N}_U_n'
# Load models
mirror_known_bins = np.load(f"Models/{name}_Known_Bin_Ls.npy")
mirror_unknown_bins = np.load(f"Models/{name}_Unknown_Bin_Ls.npy")
gbdt = load(f'Models/{name}_GBDT')
# 25, 50, 75% of flows
p = [0.25, 0.5, 0.75]
for percentage in p:
    p_unknown_flows = get_partial_flows(unknown_test_flows, percentage)
    p_known_flows = get_partial_flows(known_test_flows, percentage)
    print(f"Flow Percentage: {percentage}")
    print(f"Full Unknown Flows Length: {len(unknown_test_flows)} \t Full Known Flows Length: {len(known_test_flows)}")
    print(f"Partial Unknown Flows Length: {len(p_unknown_flows)} \t Partial Known Flows Length: {len(p_known_flows)}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_known_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
    print(f"Known Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Known Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_unknown_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
    print(f"Unknown Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Unknown Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}\n")
# Full flows
flows, known_confidences, unknown_confidences = get_flow_confidences(known_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(unknown_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
############################# 1000 PACKET SUBFLOWS - GBDT
N = 1000
unknown_test_flows, mirror_test_flows, known_test_flows = load_naive_test_dicts(N)
con_thresh = 0.95
# MIRROR UNKNOWN
print("MIRROR UNKNOWN ############################################")
name = f'{N}_M_n'
# Load models
mirror_known_bins = np.load(f"Models/{name}_Known_Bin_Ls.npy")
mirror_unknown_bins = np.load(f"Models/{name}_Unknown_Bin_Ls.npy")
gbdt = load(f'Models/{name}_GBDT')
# 25, 50, 75% of flows
p = [0.25, 0.5, 0.75]
for percentage in p:
    p_unknown_flows = get_partial_flows(mirror_test_flows, percentage)
    p_known_flows = get_partial_flows(known_test_flows, percentage)
    print(f"Flow Percentage: {percentage}")
    print(f"Full Unknown Flows Length: {len(mirror_test_flows)} \t Full Known Flows Length: {len(known_test_flows)}")
    print(f"Partial Unknown Flows Length: {len(p_unknown_flows)} \t Partial Known Flows Length: {len(p_known_flows)}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_known_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
    print(f"Known Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Known Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_unknown_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
    print(f"Unknown Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Unknown Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}\n")
# Full flows
flows, known_confidences, unknown_confidences = get_flow_confidences(known_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(mirror_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
# ALL UNKNOWN
print("\n\nALL UNKNOWN #############################################")
name = f'{N}_U_n'
# Load models
mirror_known_bins = np.load(f"Models/{name}_Known_Bin_Ls.npy")
mirror_unknown_bins = np.load(f"Models/{name}_Unknown_Bin_Ls.npy")
gbdt = load(f'Models/{name}_GBDT')
# 25, 50, 75% of flows
p = [0.25, 0.5, 0.75]
for percentage in p:
    p_unknown_flows = get_partial_flows(unknown_test_flows, percentage)
    p_known_flows = get_partial_flows(known_test_flows, percentage)
    print(f"Flow Percentage: {percentage}")
    print(f"Full Unknown Flows Length: {len(unknown_test_flows)} \t Full Known Flows Length: {len(known_test_flows)}")
    print(f"Partial Unknown Flows Length: {len(p_unknown_flows)} \t Partial Known Flows Length: {len(p_known_flows)}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_known_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
    print(f"Known Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Known Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_unknown_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
    print(f"Unknown Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Unknown Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}\n")
# Full flows
flows, known_confidences, unknown_confidences = get_flow_confidences(known_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(unknown_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
############################# 100 PACKET SUBFLOWS - TREE
N = 100
unknown_test_flows, mirror_test_flows, known_test_flows = load_naive_test_dicts(N)
con_thresh = 0.95
# MIRROR UNKNOWN
print("MIRROR UNKNOWN ############################################")
name = f'{N}_M_Tree'
# Load models
mirror_known_bins = np.load(f"Models/{name}_Known_Bin_Ls.npy")
mirror_unknown_bins = np.load(f"Models/{name}_Unknown_Bin_Ls.npy")
gbdt = load(f'Models/{name}')
# 25, 50, 75% of flows
p = [0.25, 0.5, 0.75]
for percentage in p:
    p_unknown_flows = get_partial_flows(mirror_test_flows, percentage)
    p_known_flows = get_partial_flows(known_test_flows, percentage)
    print(f"Flow Percentage: {percentage}")
    print(f"Full Unknown Flows Length: {len(mirror_test_flows)} \t Full Known Flows Length: {len(known_test_flows)}")
    print(f"Partial Unknown Flows Length: {len(p_unknown_flows)} \t Partial Known Flows Length: {len(p_known_flows)}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_known_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
    print(f"Known Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Known Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_unknown_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
    print(f"Unknown Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Unknown Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}\n")
# Full flows
flows, known_confidences, unknown_confidences = get_flow_confidences(known_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(mirror_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
# ALL UNKNOWN
print("\n\nALL UNKNOWN #############################################")
name = f'{N}_U_Tree'
# Load models
mirror_known_bins = np.load(f"Models/{name}_Known_Bin_Ls.npy")
mirror_unknown_bins = np.load(f"Models/{name}_Unknown_Bin_Ls.npy")
gbdt = load(f'Models/{name}')
# 25, 50, 75% of flows
p = [0.25, 0.5, 0.75]
for percentage in p:
    p_unknown_flows = get_partial_flows(unknown_test_flows, percentage)
    p_known_flows = get_partial_flows(known_test_flows, percentage)
    print(f"Flow Percentage: {percentage}")
    print(f"Full Unknown Flows Length: {len(unknown_test_flows)} \t Full Known Flows Length: {len(known_test_flows)}")
    print(f"Partial Unknown Flows Length: {len(p_unknown_flows)} \t Partial Known Flows Length: {len(p_known_flows)}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_known_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
    print(f"Known Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Known Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_unknown_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
    print(f"Unknown Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Unknown Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}\n")
# Full flows
flows, known_confidences, unknown_confidences = get_flow_confidences(known_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(unknown_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
############################# 1000 PACKET SUBFLOWS - TREE
N = 1000
unknown_test_flows, mirror_test_flows, known_test_flows = load_naive_test_dicts(N)
con_thresh = 0.95
# MIRROR UNKNOWN
print("MIRROR UNKNOWN ############################################")
name = f'{N}_M_Tree'
# Load models
mirror_known_bins = np.load(f"Models/{name}_Known_Bin_Ls.npy")
mirror_unknown_bins = np.load(f"Models/{name}_Unknown_Bin_Ls.npy")
gbdt = load(f'Models/{name}')
# 25, 50, 75% of flows
p = [0.25, 0.5, 0.75]
for percentage in p:
    p_unknown_flows = get_partial_flows(mirror_test_flows, percentage)
    p_known_flows = get_partial_flows(known_test_flows, percentage)
    print(f"Flow Percentage: {percentage}")
    print(f"Full Unknown Flows Length: {len(mirror_test_flows)} \t Full Known Flows Length: {len(known_test_flows)}")
    print(f"Partial Unknown Flows Length: {len(p_unknown_flows)} \t Partial Known Flows Length: {len(p_known_flows)}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_known_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
    print(f"Known Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Known Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_unknown_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
    print(f"Unknown Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Unknown Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}\n")
# Full flows
flows, known_confidences, unknown_confidences = get_flow_confidences(known_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(mirror_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
# ALL UNKNOWN
print("\n\nALL UNKNOWN #############################################")
name = f'{N}_U_Tree'
# Load models
mirror_known_bins = np.load(f"Models/{name}_Known_Bin_Ls.npy")
mirror_unknown_bins = np.load(f"Models/{name}_Unknown_Bin_Ls.npy")
gbdt = load(f'Models/{name}')
# 25, 50, 75% of flows
p = [0.25, 0.5, 0.75]
for percentage in p:
    p_unknown_flows = get_partial_flows(unknown_test_flows, percentage)
    p_known_flows = get_partial_flows(known_test_flows, percentage)
    print(f"Flow Percentage: {percentage}")
    print(f"Full Unknown Flows Length: {len(unknown_test_flows)} \t Full Known Flows Length: {len(known_test_flows)}")
    print(f"Partial Unknown Flows Length: {len(p_unknown_flows)} \t Partial Known Flows Length: {len(p_known_flows)}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_known_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
    print(f"Known Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Known Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_unknown_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
    print(f"Unknown Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Unknown Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}\n")
# Full flows
flows, known_confidences, unknown_confidences = get_flow_confidences(known_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(unknown_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
```
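Every (subflow length, model, unknown-set) combination above repeats the same strict and majority-vote evaluation. A small helper could collapse that repetition. The sketch below is only illustrative: `evaluate_flows` and the `fake_*` stand-ins are new names introduced here so the example runs on its own; in the notebook the real `get_flow_confidences` and `classify_confidences` would be passed in instead, assuming `classify_confidences` takes `maj` as a keyword defaulting to `False`.

```python
def evaluate_flows(label, flows, known_bins, unknown_bins, model,
                   get_confidences, classify, thresh=0.95, tag="Known", pct="1.00"):
    """Run one strict pass and one majority-vote pass over a flow set,
    print the results in the notebook's format, and return them."""
    _, known_conf, unknown_conf = get_confidences(flows, known_bins, unknown_bins, model)
    results = []
    for maj, mode in ((False, "Strict"), (True, "Maj.")):
        correct, uncertain, acc = classify(label, known_conf, unknown_conf, thresh, maj=maj)
        print(f"{tag} {mode} {pct} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
        results.append((mode, acc, uncertain, correct))
    return results

# Hypothetical stand-ins for get_flow_confidences / classify_confidences,
# present only so this sketch is self-contained.
def fake_confidences(flows, known_bins, unknown_bins, model):
    return flows, [0.99] * len(flows), [0.01] * len(flows)

def fake_classify(label, known_conf, unknown_conf, thresh, maj=False):
    correct = sum(c >= thresh for c in known_conf)
    return correct, len(known_conf) - correct, correct / len(known_conf)

res = evaluate_flows(1, ["flow_a", "flow_b"], None, None, None,
                     fake_confidences, fake_classify)
```

With the real functions supplied, each "Known"/"Unknown" pair in the blocks above reduces to two `evaluate_flows` calls, so adding a new subflow length or model variant only changes the loading code.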
### Final Features
```
############################# 25 PACKET SUBFLOWS - GBDT
N = 25
unknown_test_flows, mirror_test_flows, known_test_flows = load_final_test_dicts(N)
con_thresh = 0.95
# MIRROR UNKNOWN
print("MIRROR UNKNOWN ############################################")
name = f'{N}_M'
# Load models
mirror_known_bins = np.load(f"Models/{name}_Known_Bin_Ls.npy")
mirror_unknown_bins = np.load(f"Models/{name}_Unknown_Bin_Ls.npy")
gbdt = load(f'Models/{name}_GBDT')
# 25, 50, 75% of flows
p = [0.25, 0.5, 0.75]
for percentage in p:
    p_unknown_flows = get_partial_flows(mirror_test_flows, percentage)
    p_known_flows = get_partial_flows(known_test_flows, percentage)
    print(f"Flow Percentage: {percentage}")
    print(f"Full Unknown Flows Length: {len(mirror_test_flows)} \t Full Known Flows Length: {len(known_test_flows)}")
    print(f"Partial Unknown Flows Length: {len(p_unknown_flows)} \t Partial Known Flows Length: {len(p_known_flows)}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_known_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
    print(f"Known Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Known Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_unknown_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
    print(f"Unknown Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Unknown Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}\n")
# Full flows
flows, known_confidences, unknown_confidences = get_flow_confidences(known_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(mirror_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
# ALL UNKNOWN
print("\n\nALL UNKNOWN #############################################")
name = f'{N}_U'
# Load models
mirror_known_bins = np.load(f"Models/{name}_Known_Bin_Ls.npy")
mirror_unknown_bins = np.load(f"Models/{name}_Unknown_Bin_Ls.npy")
gbdt = load(f'Models/{name}_GBDT')
# 25, 50, 75% of flows
p = [0.25, 0.5, 0.75]
for percentage in p:
    p_unknown_flows = get_partial_flows(unknown_test_flows, percentage)
    p_known_flows = get_partial_flows(known_test_flows, percentage)
    print(f"Flow Percentage: {percentage}")
    print(f"Full Unknown Flows Length: {len(unknown_test_flows)} \t Full Known Flows Length: {len(known_test_flows)}")
    print(f"Partial Unknown Flows Length: {len(p_unknown_flows)} \t Partial Known Flows Length: {len(p_known_flows)}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_known_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
    print(f"Known Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Known Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_unknown_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
    print(f"Unknown Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Unknown Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}\n")
# Full flows
flows, known_confidences, unknown_confidences = get_flow_confidences(known_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(unknown_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
############################# 100 PACKET SUBFLOWS - GBDT
N = 100
unknown_test_flows, mirror_test_flows, known_test_flows = load_final_test_dicts(N)
con_thresh = 0.95
# MIRROR UNKNOWN
print("MIRROR UNKNOWN ############################################")
name = f'{N}_M'
# Load models
mirror_known_bins = np.load(f"Models/{name}_Known_Bin_Ls.npy")
mirror_unknown_bins = np.load(f"Models/{name}_Unknown_Bin_Ls.npy")
gbdt = load(f'Models/{name}_GBDT')
# 25, 50, 75% of flows
p = [0.25, 0.5, 0.75]
for percentage in p:
    p_unknown_flows = get_partial_flows(mirror_test_flows, percentage)
    p_known_flows = get_partial_flows(known_test_flows, percentage)
    print(f"Flow Percentage: {percentage}")
    print(f"Full Unknown Flows Length: {len(mirror_test_flows)} \t Full Known Flows Length: {len(known_test_flows)}")
    print(f"Partial Unknown Flows Length: {len(p_unknown_flows)} \t Partial Known Flows Length: {len(p_known_flows)}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_known_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
    print(f"Known Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Known Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    flows, known_confidences, unknown_confidences = get_flow_confidences(p_unknown_flows, \
        mirror_known_bins, mirror_unknown_bins, gbdt)
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
    print(f"Unknown Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
    correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
    print(f"Unknown Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}\n")
# Full flows
flows, known_confidences, unknown_confidences = get_flow_confidences(known_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(mirror_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
# ALL UNKNOWN
print("\n\nALL UNKNOWN #############################################")
name = f'{N}_U'
# Load models
mirror_known_bins = np.load(f"Models/{name}_Known_Bin_Ls.npy")
mirror_unknown_bins = np.load(f"Models/{name}_Unknown_Bin_Ls.npy")
gbdt = load(f'Models/{name}_GBDT')
# 25, 50, 75% of flows
p = [0.25, 0.5, 0.75]
for percentage in p:
p_unknown_flows = get_partial_flows(unknown_test_flows, percentage)
p_known_flows = get_partial_flows(known_test_flows, percentage)
print(f"Flow Percentage: {percentage}")
print(f"Full Unknown Flows Length: {len(unknown_test_flows)} \t Full Known Flows Length: {len(known_test_flows)}")
print(f"Partial Unknown Flows Length: {len(p_unknown_flows)} \t Partial Known Flows Length: {len(p_known_flows)}")
flows, known_confidences, unknown_confidences = get_flow_confidences(p_known_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(p_unknown_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}\n")
# Full flows
flows, known_confidences, unknown_confidences = get_flow_confidences(known_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(unknown_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
############################# 1000 PACKET SUBFLOWS - GBDT
N = 1000
unknown_test_flows, mirror_test_flows, known_test_flows = load_test_dicts(N)
con_thresh = 0.95
# MIRROR UNKNOWN
print("MIRROR UNKNOWN ############################################")
name = f'{N}_M'
# Load models
mirror_known_bins = np.load(f"Models/{name}_Known_Bin_Ls.npy")
mirror_unknown_bins = np.load(f"Models/{name}_Unknown_Bin_Ls.npy")
gbdt = load(f'Models/{name}_GBDT')
# 25, 50, 75% of flows
p = [0.25, 0.5, 0.75]
for percentage in p:
p_unknown_flows = get_partial_flows(mirror_test_flows, percentage)
p_known_flows = get_partial_flows(known_test_flows, percentage)
print(f"Flow Percentage: {percentage}")
print(f"Full Unknown Flows Length: {len(mirror_test_flows)} \t Full Known Flows Length: {len(known_test_flows)}")
print(f"Partial Unknown Flows Length: {len(p_unknown_flows)} \t Partial Known Flows Length: {len(p_known_flows)}")
flows, known_confidences, unknown_confidences = get_flow_confidences(p_known_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(p_unknown_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}\n")
# Full flows
flows, known_confidences, unknown_confidences = get_flow_confidences(known_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(mirror_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
# ALL UNKNOWN
print("\n\nALL UNKNOWN #############################################")
name = f'{N}_U'
# Load models
mirror_known_bins = np.load(f"Models/{name}_Known_Bin_Ls.npy")
mirror_unknown_bins = np.load(f"Models/{name}_Unknown_Bin_Ls.npy")
gbdt = load(f'Models/{name}_GBDT')
# 25, 50, 75% of flows
p = [0.25, 0.5, 0.75]
for percentage in p:
p_unknown_flows = get_partial_flows(unknown_test_flows, percentage)
p_known_flows = get_partial_flows(known_test_flows, percentage)
print(f"Flow Percentage: {percentage}")
print(f"Full Unknown Flows Length: {len(unknown_test_flows)} \t Full Known Flows Length: {len(known_test_flows)}")
print(f"Partial Unknown Flows Length: {len(p_unknown_flows)} \t Partial Known Flows Length: {len(p_known_flows)}")
flows, known_confidences, unknown_confidences = get_flow_confidences(p_known_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(p_unknown_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}\n")
# Full flows
flows, known_confidences, unknown_confidences = get_flow_confidences(known_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(unknown_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
############################# 25 PACKET SUBFLOWS - TREE NAIVE FEATURES
N = 25
unknown_test_flows, mirror_test_flows, known_test_flows = load_naive_test_dicts(N)
con_thresh = 0.95
# MIRROR UNKNOWN
print("MIRROR UNKNOWN ############################################")
name = f'{N}_M_Tree'
# Load models
mirror_known_bins = np.load(f"Models/{name}_Known_Bin_Ls.npy")
mirror_unknown_bins = np.load(f"Models/{name}_Unknown_Bin_Ls.npy")
gbdt = load(f'Models/{name}')
# 25, 50, 75% of flows
p = [0.25, 0.5, 0.75]
for percentage in p:
p_unknown_flows = get_partial_flows(mirror_test_flows, percentage)
p_known_flows = get_partial_flows(known_test_flows, percentage)
print(f"Flow Percentage: {percentage}")
print(f"Full Unknown Flows Length: {len(mirror_test_flows)} \t Full Known Flows Length: {len(known_test_flows)}")
print(f"Partial Unknown Flows Length: {len(p_unknown_flows)} \t Partial Known Flows Length: {len(p_known_flows)}")
flows, known_confidences, unknown_confidences = get_flow_confidences(p_known_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(p_unknown_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}\n")
# Full flows
flows, known_confidences, unknown_confidences = get_flow_confidences(known_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(mirror_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
# ALL UNKNOWN
print("\n\nALL UNKNOWN #############################################")
name = f'{N}_U_Tree'
# Load models
mirror_known_bins = np.load(f"Models/{name}_Known_Bin_Ls.npy")
mirror_unknown_bins = np.load(f"Models/{name}_Unknown_Bin_Ls.npy")
gbdt = load(f'Models/{name}')
# 25, 50, 75% of flows
p = [0.25, 0.5, 0.75]
for percentage in p:
p_unknown_flows = get_partial_flows(unknown_test_flows, percentage)
p_known_flows = get_partial_flows(known_test_flows, percentage)
print(f"Flow Percentage: {percentage}")
print(f"Full Unknown Flows Length: {len(unknown_test_flows)} \t Full Known Flows Length: {len(known_test_flows)}")
print(f"Partial Unknown Flows Length: {len(p_unknown_flows)} \t Partial Known Flows Length: {len(p_known_flows)}")
flows, known_confidences, unknown_confidences = get_flow_confidences(p_known_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(p_unknown_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. {percentage} Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}\n")
# Full flows
flows, known_confidences, unknown_confidences = get_flow_confidences(known_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh)
print(f"Known Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(1, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Known Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
flows, known_confidences, unknown_confidences = get_flow_confidences(unknown_test_flows, \
mirror_known_bins, mirror_unknown_bins, gbdt)
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh)
print(f"Unknown Strict 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
correct, uncertain, acc = classify_confidences(0, known_confidences, unknown_confidences, con_thresh, maj=True)
print(f"Unknown Maj. 1.00 Flows \t Acc: {acc} \t Uncertain: {uncertain} \t Correct: {correct}")
```
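The `classify_confidences` helper used throughout these cells is not defined here. A plausible, purely illustrative sketch of the strict vs. majority-vote decision rules its `maj` flag suggests (the name, signature, and accuracy convention are assumptions; the real helper may differ):

```python
# Hypothetical sketch of strict vs. majority-vote confidence classification.
# `label` is the ground truth (1 = known traffic, 0 = unknown traffic).
def classify_confidences_sketch(label, known_confs, unknown_confs, thresh, maj=False):
    """Classify each flow from its per-subflow confidence lists."""
    correct = uncertain = 0
    for known, unknown in zip(known_confs, unknown_confs):
        # Per-subflow votes: 1 if the "known" confidence clears the threshold,
        # 0 if the "unknown" confidence does, None if neither is confident.
        votes = []
        for k, u in zip(known, unknown):
            if k >= thresh:
                votes.append(1)
            elif u >= thresh:
                votes.append(0)
            else:
                votes.append(None)
        decided = [v for v in votes if v is not None]
        if maj:
            # Majority vote over the confidently decided subflows.
            pred = (1 if sum(decided) > len(decided) / 2 else 0) if decided else None
        else:
            # Strict: every subflow must agree on the same confident label.
            if votes and votes[0] is not None and all(v == votes[0] for v in votes):
                pred = votes[0]
            else:
                pred = None
        if pred is None:
            uncertain += 1
        elif pred == label:
            correct += 1
    # Accuracy among the flows that received a decision (assumed convention).
    acc = correct / max(len(known_confs) - uncertain, 1)
    return correct, uncertain, acc
```

Under this sketch, strict classification refuses to decide any flow whose subflows disagree, while majority voting decides more flows at the cost of more confident mistakes.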
| github_jupyter |
# Building DNN Models for Classification with TF core
Here we are using just a small subset of the data for demonstration purposes. The complete dataset can be accessed here:
https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz
```
import tensorflow as tf
import matplotlib.pyplot as plt
import os
from sklearn.metrics import confusion_matrix, precision_score, recall_score, accuracy_score
%matplotlib inline
```
## 1. Load and prepare the data
```
from numpy import genfromtxt
current_dir = os.getcwd()
## Training data
train_dataset_path = os.path.join(current_dir, os.pardir, 'data', 'small_higgs.csv')
higgs_train = genfromtxt(train_dataset_path, delimiter=',')
X_train = higgs_train[:,1:]
y_train = higgs_train[:,0]
del higgs_train
# Validation data
validation_dataset_path = os.path.join(os.getcwd(), os.pardir, 'data', 'validation_higgs.csv')
higgs_val = genfromtxt(validation_dataset_path, delimiter=',')
X_val = higgs_val[:,1:]
y_val = higgs_val[:,0]
del higgs_val
```
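The slicing above assumes the class label sits in column 0 and the features follow it; a minimal check of that convention on a toy array:

```python
import numpy as np

# Toy stand-in for the HIGGS CSV layout: label in column 0, features after it.
toy = np.array([[1.0, 0.5, 0.7],
                [0.0, 0.1, 0.9]])
X_toy = toy[:, 1:]   # feature matrix
y_toy = toy[:, 0]    # label vector

assert X_toy.shape == (2, 2)
assert y_toy.tolist() == [1.0, 0.0]
```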
## 2. Build the input pipeline
```
BATCH_SIZE = 128
train_dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=10000)
train_dataset = train_dataset.batch(BATCH_SIZE)
iterator = train_dataset.make_initializable_iterator()
next_element = iterator.get_next()
```
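Conceptually, `shuffle(...).batch(BATCH_SIZE)` yields the (shuffled) examples in fixed-size chunks, with a smaller final batch when the dataset size is not a multiple of the batch size. A plain-Python sketch of just the batching step:

```python
def batch(items, batch_size):
    """Yield successive fixed-size chunks; the last one may be smaller."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

batches = list(batch(list(range(10)), 4))
assert batches == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```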
## 3. Build a function containing the DNN
```
n_hidden1 = 200
n_hidden2 = 200
n_hidden3 = 200
n_hidden4 = 200
n_outputs = 1
def DNN(inputs):
wt_init = tf.contrib.layers.xavier_initializer()
hidden1 = tf.layers.dense(inputs, units=n_hidden1, activation=tf.nn.elu, kernel_initializer=wt_init)
hidden2 = tf.layers.dense(hidden1, units=n_hidden2, activation=tf.nn.elu, kernel_initializer=wt_init)
hidden3 = tf.layers.dense(hidden2, units=n_hidden3, activation=tf.nn.elu, kernel_initializer=wt_init)
hidden4 = tf.layers.dense(hidden3, units=n_hidden4, activation=tf.nn.elu, kernel_initializer=wt_init)
logits = tf.layers.dense(hidden4, units=n_outputs, activation=None)
return tf.squeeze(logits)
```
## 4. Create the placeholders to pass values for training and evaluation
```
n_inputs = X_train.shape[1] # number of features in the dataset
X = tf.placeholder(tf.float32, shape=[None, n_inputs], name='X')
y = tf.placeholder(tf.float32, name='target')
```
## 5. Define the loss
```
## We will use the binary cross-entropy loss
logits = DNN(X)
cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(cross_entropy)
tf.summary.scalar('loss', loss)
## Optional: recording some metrics for visualization
prob_of_signal = tf.nn.sigmoid(logits)
y_pred = tf.cast(prob_of_signal > 0.5, dtype=tf.int32)
accuracy, accuracy_update_op= tf.metrics.accuracy(labels=y, predictions=y_pred)
tf.summary.scalar('accuracy', accuracy)
auc, auc_update_op = tf.metrics.auc(labels=y, predictions=prob_of_signal)
tf.summary.scalar('AUC', auc)
## Summary and writer objects for TensorBoard
summary_values = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(os.path.join(current_dir, 'higgs_logs','train'))
val_writer = tf.summary.FileWriter(os.path.join(current_dir, 'higgs_logs','validation'))
```
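For reference, the per-example loss computed by `tf.nn.sigmoid_cross_entropy_with_logits` above is the binary cross-entropy applied to the logit $z$ with label $y \in \{0, 1\}$:

$$
\ell(y, z) = -\,y \log \sigma(z) - (1 - y) \log\bigl(1 - \sigma(z)\bigr),
\qquad \sigma(z) = \frac{1}{1 + e^{-z}},
$$

and `loss` is then the mean of $\ell$ over the batch.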
## 6. Define the optimizer and training operation
```
optimizer = tf.train.AdamOptimizer()
training_op = optimizer.minimize(loss)
```
## 7. (Optional) Write a function for running the training operation
```
def train_model(epoch_number):
print(epoch_number, end=',')
iterator.initializer.run()
## This is necessary for the metrics:
tf.local_variables_initializer().run()
while True:
try:
X_values, y_values = sess.run(next_element)
sess.run([training_op, accuracy_update_op, auc_update_op],
feed_dict={X: X_values, y:y_values})
except tf.errors.OutOfRangeError:
break
## Training metrics
summaries = sess.run(summary_values, feed_dict={X:X_values, y:y_values})
train_writer.add_summary(summaries, epoch_number)
## The values for the metrics must be re-initialized
tf.local_variables_initializer().run()
sess.run([accuracy_update_op, auc_update_op], feed_dict={X: X_val, y:y_val})
summaries = sess.run(summary_values, feed_dict={X:X_val, y:y_val})
val_writer.add_summary(summaries, epoch_number)
```
## 8. Run the computation graph
```
N_EPOCHS = 400
with tf.Session() as sess:
tf.global_variables_initializer().run()
train_writer.add_graph(sess.graph)
## Training loop
print("Epoch: ")
for epoch in range(1,N_EPOCHS+1):
train_model(epoch)
print("\nDone Training!")
# Closing the file writers
train_writer.close()
val_writer.close()
# Getting the predictions
predictions = sess.run(y_pred, feed_dict={X: X_val})
```
## 9. Visualize/analyze the results of the model
```
confusion_matrix(y_true=y_val, y_pred=predictions)
print("Precision: ", precision_score(y_true=y_val, y_pred=predictions))
print("Recall: ", recall_score(y_true=y_val, y_pred=predictions))
print("Accuracy: ", accuracy_score(y_true=y_val, y_pred=predictions))
```
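The three scores printed above reduce to simple ratios of the confusion-matrix counts; a stdlib-only sketch with hypothetical counts:

```python
# Confusion-matrix counts (hypothetical values for illustration only).
tp, fp, fn, tn = 40, 10, 20, 30

precision = tp / (tp + fp)                    # of predicted positives, how many are real
recall    = tp / (tp + fn)                    # of real positives, how many were found
accuracy  = (tp + tn) / (tp + fp + fn + tn)   # fraction of all examples classified correctly

assert precision == 0.8
assert abs(recall - 40 / 60) < 1e-12
assert accuracy == 0.7
```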
| github_jupyter |
```
import os
import json
from datetime import datetime
import shutil
import subprocess
import pandas as pd
import seqeval
os.environ['MKL_THREADING_LAYER'] = 'GNU'
ROOT_DIR = !pwd
ROOT_DIR = "/".join(ROOT_DIR[0].split("/")[:-1])
ROOT_DIR
```
## Training Pipeline
```
downstream_dir = ROOT_DIR + "/token-classification/"
os.chdir(downstream_dir)
!pip install tokenizers sentencepiece sacremoses
def generate_command(config):
command = "python3"
command += " " + config["run_file"] + " "
command += "--output_dir " + config["output_dir"] + " "
command += "--model_name_or_path " + config["model_name_or_path"] + " "
command += "--data_dir " + config["data_dir"] + " "
command += "--num_train_epochs " + str(config["num_train_epochs"]) + " "
command += "--per_device_train_batch_size " + str(config["per_device_train_batch_size"]) + " "
command += "--learning_rate " + str(config["learning_rate"]) + " "
command += "--max_seq_length " + str(config["max_seq_length"]) + " "
if "do_train" in config:
command += "--do_train "
if "do_eval" in config:
command += "--do_eval "
if "do_predict" in config:
command += "--do_predict "
command += "--seed " + str(config["seed"]) + " "
if "umls" in config:
command += "--umls "
command += "--med_document " + str(config["med_document"]) + " "
command += "--labels " + config["labels"]
command += " --save_steps 50000"
return command
'''
#############################################
Clinical/umls(both 1500000 and 3000000) BERT fine-tuning params
- Output dir: ./models/clinicalBert-v1 | ./models/umls-clinicalBert-v1
- model_name_or_path: emilyalsentzer/Bio_ClinicalBERT | ../checkpoint/clinicalbert300000
- Learning Rate: {2e-5, 3e-5, 5e-5}
- Batch size: {16, 32}
- Epochs: {20} # ner needs longer training
#############################################
'''
# seeds = [6809, 36275, 5317, 82958, 25368] # five seeds average
seeds = [6809] # fine tuning
learning_rate_set = [2e-5, 3e-5, 5e-5]
batch_size_set = [16, 32]
epoch_set = [20]
path_set = [
("./models/i2b2_2012/clinicalBert", "emilyalsentzer/Bio_ClinicalBERT"),
("./models/i2b2_2012/BertBased", "bert-base-cased")
]
for seed in seeds:
for lr in learning_rate_set:
for epoch in epoch_set:
for batch_size in batch_size_set:
for path in path_set:
config = {
"run_file" : "run_ner.py",
"labels" : "dataset/NER/2006/label.txt",
"output_dir" : path[0] + "-" + str(seed)+"-"+ datetime.now().strftime("%Y-%m-%d-%H-%M-%S"),
"model_name_or_path" : path[1],
"data_dir" : "dataset/NER/2006",
"num_train_epochs" : epoch,
"per_device_train_batch_size" : batch_size,
"learning_rate" : lr,
"max_seq_length" : 258,
"seed" : seed,
"do_train" : True,
"do_eval" : True,
"do_predict" : True
}
# Run Downstream tasks with given config
!rm dataset/NER/2006/cache*
command = generate_command(config)
print(command)
subprocess.run(command, shell=True)
# Save config to output dir
with open(config["output_dir"] + '/fine_tune_config.json', 'w') as f:
json.dump(config, f)
assert "fine_tune_config.json" in os.listdir(config["output_dir"])
# delete all checkpoints
for path in os.listdir(config["output_dir"]):
if path.startswith("checkpoint"):
shutil.rmtree(config["output_dir"] + "/" +path)
if path.startswith("pytorch_model.bin"):
os.remove(config["output_dir"] + "/" +path)
path_set = [("./models/2006-umlsbert/", "../checkpoint/umlsbert")]
for seed in seeds:
for lr in learning_rate_set:
for epoch in epoch_set:
for batch_size in batch_size_set:
for path in path_set:
config = {
"run_file" : "run_ner.py",
"labels" : "dataset/NER/2006/label.txt",
"output_dir" : path[0] + "-" + str(seed)+"-"+ datetime.now().strftime("%Y-%m-%d-%H-%M-%S"),
"model_name_or_path" : path[1],
"data_dir" : "dataset/NER/2006",
"num_train_epochs" : epoch,
"per_device_train_batch_size" : batch_size,
"learning_rate" : lr,
"max_seq_length" : 258,
"seed" : seed,
"do_train" : True,
"do_eval" : True,
"umls" : True,
"med_document" : "voc/vocab_updated.txt",
"do_predict" : True
}
# Run Downstream tasks with given config
!rm dataset/NER/2006/cache*
command = generate_command(config)
print(command)
subprocess.run(command, shell=True)
# Save config to output dir
with open(config["output_dir"] + '/fine_tune_config.json', 'w') as f:
json.dump(config, f)
assert "fine_tune_config.json" in os.listdir(config["output_dir"])
# delete all checkpoints
for path in os.listdir(config["output_dir"]):
if path.startswith("checkpoint"):
shutil.rmtree(config["output_dir"] + "/" +path)
if path.startswith("pytorch_model.bin"):
os.remove(config["output_dir"] + "/" +path)
```
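As a sanity check on the command builder above, a trimmed re-statement of its logic (only a few of the flags, with hypothetical config values) shows the CLI string it assembles:

```python
# Trimmed, self-contained re-statement of the generate_command logic above,
# just enough to demonstrate the flag handling; values are hypothetical.
def build_command(config):
    command = "python3 " + config["run_file"] + " "
    command += "--output_dir " + config["output_dir"] + " "
    command += "--learning_rate " + str(config["learning_rate"]) + " "
    if "do_train" in config:
        command += "--do_train "
    command += "--labels " + config["labels"]
    return command

cmd = build_command({
    "run_file": "run_ner.py",
    "output_dir": "./models/demo",
    "learning_rate": 2e-5,
    "do_train": True,
    "labels": "dataset/NER/2006/label.txt",
})
assert cmd == ("python3 run_ner.py --output_dir ./models/demo "
               "--learning_rate 2e-05 --do_train "
               "--labels dataset/NER/2006/label.txt")
```

Note that, like the original, this checks key *membership* rather than truthiness, so a config containing `"do_train": False` would still add the flag.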
| github_jupyter |
## Tirmzi Analysis
n=1000 m+=1000 nm-=120 istep= 4 min=150 max=700
```
import sys
sys.path
import matplotlib.pyplot as plt
import numpy as np
import os
from scipy import signal
ls
import capsol.newanalyzecapsol as ac
ac.get_gridparameters
import glob
cd Output-Fortran
ls
folders = glob.glob("*NewTirmzi_large_range*/")
folders
all_data= dict()
for folder in folders:
params = ac.get_gridparameters(folder + 'capsol.in')
data = ac.np.loadtxt(folder + 'Z-U.dat')
process_data = ac.process_data(params, data, smoothing=False, std=5*10**-9)
all_data[folder]= (process_data)
all_params= dict()
for folder in folders:
params=ac.get_gridparameters(folder + 'capsol.in')
all_params[folder]= (params)
all_data
all_data.keys()
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 9.98}:
data=all_data[key]
thickness =all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
plt.plot(data['z'], data['c'], label= f'{rtip} nm, {er}, {thickness} nm')
plt.title('C v. Z for 1nm thick sample')
plt.ylabel("C(m)")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("C' v. Z for 1nm thick sample 06-28-2021.png")
```
The last experiment is cut off because its capacitance was off the scale.
```
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == .5}:
data=all_data[key]
thickness=all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
s=slice(4,-3)
plt.plot(data['z'][s], data['cz'][s], label=f'{rtip} nm, {er}, {thickness} nm' )
plt.title('Cz vs. Z for 1.0nm')
plt.ylabel("Cz")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Cz v. Z for varying sample thickness, 06-28-2021.png")
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == .5}:
data=all_data[key]
thickness=all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
s=slice(5,-5)
plt.plot(data['z'][s], data['czz'][s], label=f'{rtip} nm, {er}, {thickness} nm' )
plt.title('Czz vs. Z for 1.0nm')
plt.ylabel("Czz")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Czz v. Z for varying sample thickness, 06-28-2021.png")
params
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == .5}:
data=all_data[key]
thickness=all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
s=slice(8,-8)
plt.plot(data['z'][s], data['alpha'][s], label=f'{rtip} nm, {er}, {thickness} nm' )
plt.title('alpha vs. Z for 1.0nm')
plt.ylabel("$\\alpha$")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Alpha v. Z for varying sample thickness, 06-28-2021.png")
data
from scipy.optimize import curve_fit
def Cz_model(z, a, n, b):
    return a * z**n + b
all_data.keys()
data= all_data['capsol-calc\\0001-capsol\\']
z= data['z'][1:-1]
cz= data['cz'][1:-1]
popt, pcov= curve_fit(Cz_model, z, cz, p0=[cz[0]*z[0], -1, 0])
a=popt[0]
n=popt[1]
b=popt[2]
std_devs= np.sqrt(pcov.diagonal())
sigma_a = std_devs[0]
sigma_n = std_devs[1]
model_output= Cz_model(z, a, n, b)
rmse= np.sqrt(np.mean((cz - model_output)**2))
f"a= {a} ± {sigma_a}"
f"n= {n} ± {sigma_n}"
model_output
"Root Mean Square Error"
rmse/np.mean(-cz)
```
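The power-law fit above can be sanity-checked on synthetic data where the true parameters are known. A minimal sketch (arbitrary O(1) parameter values rather than the physical scales above, to keep the demo well-conditioned; scipy assumed available):

```python
import numpy as np
from scipy.optimize import curve_fit

def cz_model(z, a, n, b):
    return a * z**n + b

# Synthetic, noiseless data from known parameters; the fit should recover them.
z = np.linspace(1.0, 10.0, 50)
true_params = [2.0, -1.5, 0.5]
cz = cz_model(z, *true_params)

# Same initial-guess convention as the notebook: p0 = [cz[0]*z[0], -1, 0].
popt, pcov = curve_fit(cz_model, z, cz, p0=[cz[0] * z[0], -1, 0])
assert np.allclose(popt, true_params, rtol=1e-3)
```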
| github_jupyter |
# Polynomial Regression
Polynomial regression is a technique used to model nonlinear relationships by taking polynomial functions of the independent variable.
The data are transformed into polynomial features. Polynomial regression is a special case of the general linear regression model and is useful for describing curvilinear relationships, which are obtained by squaring or adding higher-order terms of the predictor variables.
Quadratic - 2nd order
Cubic - 3rd order
Higher order
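In model form, the orders listed above are still linear in the coefficients $\beta_i$, which is why they can be fit with ordinary linear regression:

$$
\begin{aligned}
\text{Quadratic (2nd order):}\quad & y = \beta_0 + \beta_1 x + \beta_2 x^2 + \varepsilon \\
\text{Cubic (3rd order):}\quad & y = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3 + \varepsilon
\end{aligned}
$$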
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import warnings
warnings.filterwarnings("ignore")
# fix_yahoo_finance is used to fetch data
import fix_yahoo_finance as yf
yf.pdr_override()
# input
symbol = 'AMD'
start = '2014-01-01'
end = '2018-08-27'
# Read data
dataset = yf.download(symbol,start,end)
# View Columns
dataset.head()
dataset.shape
X = dataset.iloc[ : , 0:4].values
Y = dataset.iloc[ : , 4].values
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2, random_state = 0)
# PolynomialFeatures (prepreprocessing)
poly = PolynomialFeatures(degree=3)
X_ = poly.fit_transform(X)
X_test_ = poly.fit_transform(X_test)
# Linear Model
lg = LinearRegression()
# Fit
lg.fit(X_, Y)
# Obtain coefficients
lg.coef_
lg.intercept_
lg.score(X_, Y)
# Predict
lg.predict(X_test_[[0]])
X = dataset.iloc[:,0:1].values
Y = dataset.iloc[:,4].values
poly=PolynomialFeatures(degree=3)
poly_x=poly.fit_transform(X)
regressor=LinearRegression()
regressor.fit(poly_x,Y)
plt.scatter(X,Y,color='red')
plt.plot(X,regressor.predict(poly.fit_transform(X)),color='blue')
plt.show()
```
Calculate Polynomial of 3rd order using one independent variable
```
X = np.array(dataset['Open'].values)
Y = np.array(dataset['Adj Close'].values)
from scipy import *
f = np.polyfit(X,Y,3)
p = np.poly1d(f)
print(p)
```
Polynomial of multiple independent variables
```
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
X = np.array(dataset[['Open', 'High', 'Low']].values)
Y = np.array(dataset['Adj Close'].values)
Y = Y.reshape(1172, -1)
poly = PolynomialFeatures(degree=3)
X_ = poly.fit_transform(X)
predict_ = poly.fit_transform(Y)
```
Polynomial Regression with more than One Dimension
```
pr = PolynomialFeatures(degree=2)
X = np.array(dataset[['High', 'Low']].values)
Y = np.array(dataset['Adj Close'].values)
X_poly = pr.fit_transform(X)
# Pre-processing
from sklearn.preprocessing import StandardScaler
# Normalize the each feature simultaneously
SCALE = StandardScaler()
SCALE.fit(X)
x_scale = SCALE.transform(X)
```
Example Polynomial
```
# Pipeline
from sklearn.pipeline import Pipeline
X = np.array(dataset['Open'].values)
Y = np.array(dataset['Adj Close'].values)
X = X.reshape(-1, 1)  # reshape to column vectors without hard-coding the row count
Y = Y.reshape(-1, 1)
Input = [('scale', StandardScaler()),
         ('polynomial', PolynomialFeatures(include_bias=False)),
         ('model', LinearRegression())]
pipe = Pipeline(Input)
pipe.fit(X, Y)
yhat = pipe.predict(X)  # predict from the inputs, not the targets
yhat[0:4]
```
Different Example Method
```
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
msk = np.random.rand(len(dataset)) < 0.8
train = dataset[msk]
test = dataset[~msk]
train_x = np.asanyarray(train[['Open']])
train_y = np.asanyarray(train[['Adj Close']])
test_x = np.asanyarray(test[['Open']])
test_y = np.asanyarray(test[['Adj Close']])
poly = PolynomialFeatures(degree=2)
train_x_poly = poly.fit_transform(train_x)
train_x_poly
clf = linear_model.LinearRegression()
train_y_ = clf.fit(train_x_poly, train_y)
# The coefficients
print('Coefficients: ', clf.coef_)
print('Intercept: ',clf.intercept_)
plt.scatter(train[['Open']], train[['Adj Close']], color='blue')
XX = np.arange(0.0, 10.0, 0.1)
yy = clf.intercept_[0]+ clf.coef_[0][1]*XX+ clf.coef_[0][2]*np.power(XX, 2)
plt.plot(XX, yy, '-r' )
plt.xlabel("Open")
plt.ylabel("Adj Close")
# Evaluation
from sklearn.metrics import r2_score
test_x_poly = poly.transform(test_x)  # transform only; poly was already fit on the training data
test_y_ = clf.predict(test_x_poly)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y, test_y_))  # r2_score takes (y_true, y_pred)
```
# Assignment 1: Bandits and Exploration/Exploitation
Welcome to Assignment 1. This notebook will:
- Help you create your first bandit algorithm
- Help you understand the effect of epsilon on exploration and learn about the exploration/exploitation tradeoff
- Introduce you to some of the reinforcement learning software we are going to use for this specialization
This class uses RL-Glue to implement most of our experiments. It was originally designed by Adam White, Brian Tanner, and Rich Sutton. This library will give you a solid framework to understand how reinforcement learning experiments work and how to run your own. If it feels a little confusing at first, don't worry - we are going to walk you through it slowly and introduce you to more and more parts as you progress through the specialization.
We are assuming that you have used a Jupyter notebook before. But if not, it is quite simple. Simply press the run button, or shift+enter to run each of the cells. The places in the code that you need to fill in will be clearly marked for you.
## Section 0: Preliminaries
```
# Import necessary libraries
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm
import time
from rlglue.rl_glue import RLGlue
import main_agent
import ten_arm_env
import test_env
```
In the above cell, we import the libraries we need for this assignment. We use numpy throughout the course and occasionally provide hints for which methods to use in numpy. Other than that we mostly use vanilla python and the occasional other library, such as matplotlib for making plots.
You might have noticed that we import ten_arm_env. This is the __10-armed Testbed__ introduced in [section 2.3](http://www.incompleteideas.net/book/RLbook2018.pdf) of the textbook. We use this throughout this notebook to test our bandit agents. It has 10 arms, which are the actions the agent can take. Pulling an arm generates a stochastic reward from a Gaussian distribution with unit-variance. For each action, the expected value of that action is randomly sampled from a normal distribution, at the start of each run. If you are unfamiliar with the 10-armed Testbed please review it in the textbook before continuing.
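To make the testbed concrete, here is a minimal sketch in plain numpy (illustrative only; this is not the provided `ten_arm_env` implementation, and `TestbedSketch` is a made-up name, so do not add it to your notebook):

```python
import numpy as np

class TestbedSketch:
    """Illustrative k-armed testbed; NOT the provided ten_arm_env."""
    def __init__(self, num_arms=10):
        # each arm's expected reward is drawn once per run from N(0, 1)
        self.arms = np.random.randn(num_arms)

    def pull(self, action):
        # pulling an arm returns a unit-variance Gaussian reward around its mean
        return self.arms[action] + np.random.randn()

env = TestbedSketch()
rewards = [env.pull(3) for _ in range(1000)]
# the empirical mean of repeated pulls approaches the arm's expected value
```

The agent only ever sees the sampled rewards, never `self.arms`, which is why it must estimate each arm's value from experience.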
__DO NOT IMPORT OTHER LIBRARIES as this will break the autograder.__
__DO NOT SET A RANDOM SEED as this will break the autograder.__
Please **do not** duplicate cells. This will put your notebook into a bad state and break Coursera's autograder.
Before you submit, please click "Kernel" -> "Restart and Run All" and make sure all cells pass.
## Section 1: Greedy Agent
We want to create an agent that will find the action with the highest expected reward. One way an agent could operate is to always choose the action with the highest value based on the agent's current estimates. This is called a greedy agent, as it greedily chooses the action that it thinks has the highest value. Let's look at what happens in this case.
First we are going to implement the argmax function, which takes in a list of action values and returns an action with the highest value. Why are we implementing our own instead of using the argmax function that numpy uses? Numpy's argmax function returns the first instance of the highest value. We do not want that to happen as it biases the agent to choose a specific action in the case of ties. Instead we want to break ties between the highest values randomly. So we are going to implement our own argmax function. You may want to look at [np.random.choice](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.choice.html) to randomly select from a list of values.
```
# -----------
# Graded Cell
# -----------
def argmax(q_values):
    """
    Takes in a list of q_values and returns the index of the item
    with the highest value. Breaks ties randomly.
    returns: int - the index of the highest value in q_values
    """
    top_value = float("-inf")
    ties = []
    for i in range(len(q_values)):
        # if a value in q_values is greater than the highest value, update top_value and reset ties
        # if a value is equal to the top value, add the index to ties
        # YOUR CODE HERE
        if q_values[i] > top_value:
            top_value = q_values[i]
            ties = [i]
        elif q_values[i] == top_value:
            ties.append(i)
    # return a random selection from ties
    return np.random.choice(ties)
# --------------
# Debugging Cell
# --------------
# Feel free to make any changes to this cell to debug your code
test_array = [1, 110, 110, 10, 110, 4, 0, 0, 1, 7]
# assert argmax(test_array) == 8, "Check your argmax implementation returns the index of the largest value"
# make sure np.random.choice is called correctly
# np.random.seed(0)
# test_array = [1, 0, 0, 1]
argmax(test_array)
# assert argmax(test_array) == 0
# -----------
# Tested Cell
# -----------
# The contents of the cell will be tested by the autograder.
# If they do not pass here, they will not pass there.
test_array = [0, 0, 0, 0, 0, 0, 0, 0, 1, 0]
assert argmax(test_array) == 8, "Check your argmax implementation returns the index of the largest value"
# set random seed so results are deterministic
np.random.seed(0)
test_array = [1, 0, 0, 1]
counts = [0, 0, 0, 0]
for _ in range(100):
    a = argmax(test_array)
    counts[a] += 1
# make sure argmax does not always choose first entry
assert counts[0] != 100, "Make sure your argmax implementation randomly chooses among the largest values."
# make sure argmax does not always choose last entry
assert counts[3] != 100, "Make sure your argmax implementation randomly chooses among the largest values."
# make sure the random number generator is called exactly once whenever `argmax` is called
expected = [44, 0, 0, 56]  # <-- notice not perfectly uniform due to randomness
assert counts == expected
```
Now we introduce the first part of an RL-Glue agent that you will implement. Here we are going to create a GreedyAgent and implement the agent_step method. This method gets called each time the agent takes a step. The method has to return the action selected by the agent. This method also ensures the agent's estimates are updated based on the signals it gets from the environment.
Fill in the code below to implement a greedy agent.
```
# -----------
# Graded Cell
# -----------
class GreedyAgent(main_agent.Agent):
    def agent_step(self, reward, observation=None):
        """
        Takes one step for the agent. It takes in a reward and observation and
        returns the action the agent chooses at that time step.
        Arguments:
        reward -- float, the reward the agent received from the environment after taking the last action.
        observation -- float, the observed state the agent is in. Do not worry about this as you will not use it
                       until future lessons
        Returns:
        current_action -- int, the action chosen by the agent at the current time step.
        """
        ### Useful Class Variables ###
        # self.q_values : An array with what the agent believes each of the values of the arm are.
        # self.arm_count : An array with a count of the number of times each arm has been pulled.
        # self.last_action : The action that the agent took on the previous time step
        #######################
        # Update Q values. Hint: Look at the algorithm in section 2.4 of the textbook.
        # increment the counter in self.arm_count for the action from the previous time step
        # update the step size using self.arm_count
        # update self.q_values for the action from the previous time step
        # YOUR CODE HERE
        self.arm_count[self.last_action] += 1
        self.q_values[self.last_action] += (reward - self.q_values[self.last_action]) / self.arm_count[self.last_action]
        # current action = ? # Use the argmax function you created above
        # YOUR CODE HERE
        current_action = argmax(self.q_values)
        self.last_action = current_action
        return current_action
# --------------
# Debugging Cell
# --------------
# Feel free to make any changes to this cell to debug your code
# build a fake agent for testing and set some initial conditions
# np.random.seed(1)
greedy_agent = GreedyAgent()
greedy_agent.q_values = [0, 0, 0.5, 0, 0]
greedy_agent.arm_count = [0, 1, 0, 0, 0]
greedy_agent.last_action = 0
action = greedy_agent.agent_step(reward=-1)
action = greedy_agent.agent_step(reward=-4)
action = greedy_agent.agent_step(reward=-2)
action = greedy_agent.agent_step(reward=-5)
action = greedy_agent.agent_step(reward=-2)
action = greedy_agent.agent_step(reward=-5)
print(greedy_agent.q_values)
print(greedy_agent.arm_count)
print(greedy_agent.last_action)
# make sure the q_values were updated correctly
# assert greedy_agent.q_values == [0, 0.5, 0.5, 0, 0]
# make sure the agent is using the argmax that breaks ties randomly
# assert action == 2
# -----------
# Tested Cell
# -----------
# The contents of the cell will be tested by the autograder.
# If they do not pass here, they will not pass there.
# build a fake agent for testing and set some initial conditions
greedy_agent = GreedyAgent()
greedy_agent.q_values = [0, 0, 1.0, 0, 0]
greedy_agent.arm_count = [0, 1, 0, 0, 0]
greedy_agent.last_action = 1
# take a fake agent step
action = greedy_agent.agent_step(reward=1)
# make sure agent took greedy action
assert action == 2
# make sure q_values were updated correctly
assert greedy_agent.q_values == [0, 0.5, 1.0, 0, 0]
# take another step
action = greedy_agent.agent_step(reward=2)
assert action == 2
assert greedy_agent.q_values == [0, 0.5, 2.0, 0, 0]
```
Let's visualize the result. Here we run an experiment using RL-Glue to test our agent. For now, we will set up the experiment code; in future lessons, we will walk you through running experiments so that you can create your own.
```
# ---------------
# Discussion Cell
# ---------------
num_runs = 200 # The number of times we run the experiment
num_steps = 1000 # The number of pulls of each arm the agent takes
env = ten_arm_env.Environment # We set what environment we want to use to test
agent = GreedyAgent # We choose what agent we want to use
agent_info = {"num_actions": 10} # We pass the agent the information it needs. Here how many arms there are.
env_info = {} # We pass the environment the information it needs. In this case nothing.
all_averages = []
average_best = 0
for run in tqdm(range(num_runs)):  # tqdm is what creates the progress bar below
    np.random.seed(run)
    rl_glue = RLGlue(env, agent)  # Creates a new RLGlue experiment with the env and agent we chose above
    rl_glue.rl_init(agent_info, env_info)  # We pass RLGlue what it needs to initialize the agent and environment
    rl_glue.rl_start()  # We start the experiment
    average_best += np.max(rl_glue.environment.arms)
    scores = [0]
    averages = []
    for i in range(num_steps):
        reward, _, action, _ = rl_glue.rl_step()  # The environment and agent take a step and return the reward and action taken
        scores.append(scores[-1] + reward)
        averages.append(scores[-1] / (i + 1))
    all_averages.append(averages)
plt.figure(figsize=(15, 5), dpi= 80, facecolor='w', edgecolor='k')
plt.plot([average_best / num_runs for _ in range(num_steps)], linestyle="--")
plt.plot(np.mean(all_averages, axis=0))
plt.legend(["Best Possible", "Greedy"])
plt.title("Average Reward of Greedy Agent")
plt.xlabel("Steps")
plt.ylabel("Average reward")
plt.show()
greedy_scores = np.mean(all_averages, axis=0)
```
How did our agent do? Is it possible for it to do better?
## Section 2: Epsilon-Greedy Agent
We learned about [another way for an agent to operate](https://www.coursera.org/learn/fundamentals-of-reinforcement-learning/lecture/tHDck/what-is-the-trade-off), where it does not always take the greedy action. Instead, it sometimes takes an exploratory action, so that it can find out what the best action really is. If we always choose what we think is the current best action, we may miss out on the true best action, because we haven't explored enough to find it.
Implement an epsilon-greedy agent below. Hint: we are implementing the algorithm from [section 2.4](http://www.incompleteideas.net/book/RLbook2018.pdf#page=52) of the textbook. You may want to use your greedy code from above and look at [np.random.random](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.random.html), as well as [np.random.randint](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.randint.html), to help you select random actions.
```
# -----------
# Graded Cell
# -----------
class EpsilonGreedyAgent(main_agent.Agent):
    def agent_step(self, reward, observation):
        """
        Takes one step for the agent. It takes in a reward and observation and
        returns the action the agent chooses at that time step.
        Arguments:
        reward -- float, the reward the agent received from the environment after taking the last action.
        observation -- float, the observed state the agent is in. Do not worry about this as you will not use it
                       until future lessons
        Returns:
        current_action -- int, the action chosen by the agent at the current time step.
        """
        ### Useful Class Variables ###
        # self.q_values : An array with what the agent believes each of the values of the arm are.
        # self.arm_count : An array with a count of the number of times each arm has been pulled.
        # self.last_action : The action that the agent took on the previous time step
        # self.epsilon : The probability an epsilon greedy agent will explore (ranges between 0 and 1)
        #######################
        # Update Q values - this should be the same update as your greedy agent above
        # YOUR CODE HERE
        self.arm_count[self.last_action] += 1
        self.q_values[self.last_action] += (reward - self.q_values[self.last_action]) / self.arm_count[self.last_action]
        # Choose action using epsilon greedy.
        # Randomly choose a number between 0 and 1 and see if it's less than self.epsilon
        # (hint: look at np.random.random()). If it is, set current_action to a random action.
        # Otherwise choose current_action greedily as you did above.
        # YOUR CODE HERE
        if np.random.random() < self.epsilon:
            current_action = np.random.randint(len(self.q_values))
        else:
            current_action = argmax(self.q_values)
        self.last_action = current_action
        return current_action
# --------------
# Debugging Cell
# --------------
# Feel free to make any changes to this cell to debug your code
# build a fake agent for testing and set some initial conditions
np.random.seed(0)
e_greedy_agent = EpsilonGreedyAgent()
e_greedy_agent.q_values = [0, 0.0, 0.5, 0, 0]
e_greedy_agent.arm_count = [0, 1, 0, 0, 0]
e_greedy_agent.num_actions = 5
e_greedy_agent.last_action = 1
e_greedy_agent.epsilon = 0.5
# given this random seed, we should see a greedy action (action 2) here
action = e_greedy_agent.agent_step(reward=1, observation=0)
# -----------------------------------------------
# we'll try to guess a few of the trickier places
# -----------------------------------------------
# make sure to update for the *last_action* not the current action
assert e_greedy_agent.q_values != [0, 0.5, 1.0, 0, 0], "A"
# make sure the stepsize is based on the *last_action* not the current action
assert e_greedy_agent.q_values != [0, 1, 0.5, 0, 0], "B"
# make sure the agent is using the argmax that breaks ties randomly
assert action == 2, "C"
# -----------------------------------------------
# let's see what happens for another action
np.random.seed(1)
e_greedy_agent = EpsilonGreedyAgent()
e_greedy_agent.q_values = [0, 0.5, 0.5, 0, 0]
e_greedy_agent.arm_count = [0, 1, 0, 0, 0]
e_greedy_agent.num_actions = 5
e_greedy_agent.last_action = 1
e_greedy_agent.epsilon = 0.5
# given this random seed, we should see a random action (action 4) here
action = e_greedy_agent.agent_step(reward=1, observation=0)
# The agent saw a reward of 1, so should increase the value for *last_action*
assert e_greedy_agent.q_values == [0, 0.75, 0.5, 0, 0], "D"
# the agent should have picked a random action for this particular random seed
assert action == 4, "E"
# -----------
# Tested Cell
# -----------
# The contents of the cell will be tested by the autograder.
# If they do not pass here, they will not pass there.
np.random.seed(0)
e_greedy_agent = EpsilonGreedyAgent()
e_greedy_agent.q_values = [0, 0, 1.0, 0, 0]
e_greedy_agent.arm_count = [0, 1, 0, 0, 0]
e_greedy_agent.num_actions = 5
e_greedy_agent.last_action = 1
e_greedy_agent.epsilon = 0.5
action = e_greedy_agent.agent_step(reward=1, observation=0)
assert e_greedy_agent.q_values == [0, 0.5, 1.0, 0, 0]
# manipulate the random seed so the agent takes a random action
np.random.seed(1)
action = e_greedy_agent.agent_step(reward=0, observation=0)
assert action == 4
# check to make sure we update value for action 4
action = e_greedy_agent.agent_step(reward=1, observation=0)
assert e_greedy_agent.q_values == [0, 0.5, 0.0, 0, 1.0]
```
Now that we have created our epsilon-greedy agent, let's compare it against the greedy agent, using an epsilon of 0.1.
```
# ---------------
# Discussion Cell
# ---------------
# Plot Epsilon greedy results and greedy results
num_runs = 200
num_steps = 1000
epsilon = 0.1
agent = EpsilonGreedyAgent
env = ten_arm_env.Environment
agent_info = {"num_actions": 10, "epsilon": epsilon}
env_info = {}
all_averages = []
for run in tqdm(range(num_runs)):
    np.random.seed(run)
    rl_glue = RLGlue(env, agent)
    rl_glue.rl_init(agent_info, env_info)
    rl_glue.rl_start()
    scores = [0]
    averages = []
    for i in range(num_steps):
        reward, _, action, _ = rl_glue.rl_step()  # The environment and agent take a step and return the reward and action taken
        scores.append(scores[-1] + reward)
        averages.append(scores[-1] / (i + 1))
    all_averages.append(averages)
plt.figure(figsize=(15, 5), dpi= 80, facecolor='w', edgecolor='k')
plt.plot([1.55 for _ in range(num_steps)], linestyle="--")
plt.plot(greedy_scores)
plt.title("Average Reward of Greedy Agent vs. E-Greedy Agent")
plt.plot(np.mean(all_averages, axis=0))
plt.legend(("Best Possible", "Greedy", "Epsilon: 0.1"))
plt.xlabel("Steps")
plt.ylabel("Average reward")
plt.show()
```
Notice how much better the epsilon-greedy agent did. Because we occasionally choose a random action, we were able to find a better long-term policy. By acting greedily before our value estimates are accurate, we risk settling on a suboptimal action.
## Section 2.1 Averaging Multiple Runs
Did you notice that we averaged over 200 runs? Why did we do that?
To get some insight, let's look at the results of two individual runs by the same agent.
```
# ---------------
# Discussion Cell
# ---------------
# Plot runs of e-greedy agent
agent = EpsilonGreedyAgent
env = ten_arm_env.Environment
agent_info = {"num_actions": 10, "epsilon": 0.1}
env_info = {}
all_averages = []
plt.figure(figsize=(15, 5), dpi= 80, facecolor='w', edgecolor='k')
num_steps = 1000
for run in (0, 1):
    np.random.seed(run)  # Here we set the seed so that we can compare two different runs
    averages = []
    rl_glue = RLGlue(env, agent)
    rl_glue.rl_init(agent_info, env_info)
    rl_glue.rl_start()
    scores = [0]
    for i in range(num_steps):
        reward, state, action, is_terminal = rl_glue.rl_step()
        scores.append(scores[-1] + reward)
        averages.append(scores[-1] / (i + 1))
    plt.plot(averages)
plt.title("Comparing two independent runs")
plt.xlabel("Steps")
plt.ylabel("Average reward")
plt.show()
```
Notice how the two runs were different? But, if this is the exact same algorithm, why does it behave differently in these two runs?
The answer is that it is due to randomness in the environment and in the agent. Depending on what action the agent randomly starts with, or when it randomly chooses to explore, the results of a run can change. And even if the agent chooses the same action, the reward from the environment is randomly sampled from a Gaussian. The agent could get lucky and see larger rewards for the best action early on, and so settle on the best action faster. Or it could get unlucky, see smaller rewards for the best action early on, and take longer to recognize that it is in fact the best action.
To be more concrete, let's look at how many times an exploratory action is taken, for different seeds.
```
# ---------------
# Discussion Cell
# ---------------
print("Random Seed 1")
np.random.seed(1)
for _ in range(15):
    if np.random.random() < 0.1:
        print("Exploratory Action")
print()
print()
print("Random Seed 2")
np.random.seed(2)
for _ in range(15):
    if np.random.random() < 0.1:
        print("Exploratory Action")
```
With the first seed, we take an exploratory action three times out of 15, but with the second, we only take an exploratory action once. This can significantly affect the performance of our agent because the amount of exploration has changed significantly.
To compare algorithms, we therefore report performance averaged across many runs. We do this to ensure that we are not simply reporting a result that is due to stochasticity, as explained [in the lectures](https://www.coursera.org/learn/fundamentals-of-reinforcement-learning/lecture/PtVBs/sequential-decision-making-with-evaluative-feedback). Rather, we want statistically significant outcomes. We will not use statistical significance tests in this course. Instead, because we have access to simulators for our experiments, we use the simpler strategy of running for a large number of runs and ensuring that the confidence intervals do not overlap.
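As a rough sketch of that strategy (the variable names below are illustrative stand-ins, not taken from the experiment code), the per-step standard error of the mean across runs can be turned into an approximate confidence band:

```python
import numpy as np

num_runs, num_steps = 200, 1000
# stand-in for `all_averages`: fake per-run average-reward curves (illustrative data)
all_averages = np.random.randn(num_runs, num_steps) * 0.5 + 1.0

mean_curve = np.mean(all_averages, axis=0)
# the standard error of the mean shrinks as 1/sqrt(num_runs)
sem = np.std(all_averages, axis=0) / np.sqrt(num_runs)
# an approximate 95% confidence band around the mean curve
lower, upper = mean_curve - 1.96 * sem, mean_curve + 1.96 * sem
# plt.fill_between(range(num_steps), lower, upper, alpha=0.3) would draw the band
```

If two agents' bands do not overlap over most of the run, the difference between them is unlikely to be an artifact of randomness.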
## Section 3: Comparing values of epsilon
Can we do better than an epsilon of 0.1? Let's try several different values for epsilon and see how they perform. We try different settings of key performance parameters to understand how the agent might perform under different conditions.
Below we run an experiment where we sweep over different values for epsilon:
```
# ---------------
# Discussion Cell
# ---------------
# Experiment code for different e-greedy
epsilons = [0.0, 0.01, 0.1, 0.4]
plt.figure(figsize=(15, 5), dpi= 80, facecolor='w', edgecolor='k')
plt.plot([1.55 for _ in range(num_steps)], linestyle="--")
n_q_values = []
n_averages = []
n_best_actions = []
num_runs = 200
for epsilon in epsilons:
    all_averages = []
    for run in tqdm(range(num_runs)):
        agent = EpsilonGreedyAgent
        agent_info = {"num_actions": 10, "epsilon": epsilon}
        env_info = {"random_seed": run}
        rl_glue = RLGlue(env, agent)
        rl_glue.rl_init(agent_info, env_info)
        rl_glue.rl_start()
        best_arm = np.argmax(rl_glue.environment.arms)
        scores = [0]
        averages = []
        best_action_chosen = []
        for i in range(num_steps):
            reward, state, action, is_terminal = rl_glue.rl_step()
            scores.append(scores[-1] + reward)
            averages.append(scores[-1] / (i + 1))
            if action == best_arm:
                best_action_chosen.append(1)
            else:
                best_action_chosen.append(0)
            if epsilon == 0.1 and run == 0:
                n_q_values.append(np.copy(rl_glue.agent.q_values))
        if epsilon == 0.1:
            n_averages.append(averages)
            n_best_actions.append(best_action_chosen)
        all_averages.append(averages)
    plt.plot(np.mean(all_averages, axis=0))
plt.legend(["Best Possible"] + epsilons)
plt.xlabel("Steps")
plt.ylabel("Average reward")
plt.show()
```
Why did 0.1 perform better than 0.01?
If exploration helps, why did 0.4 perform worse than 0.0 (the greedy agent)?
Think about how you would answer these questions; they appear in the practice quiz, so if you are unsure, retake it.
## Section 4: The Effect of Step Size
In Section 1 of this assignment, we decayed the step size over time based on action-selection counts. The step-size was 1/N(A), where N(A) is the number of times action A was selected. This is the same as computing a sample average. We could also set the step size to be a constant value, such as 0.1. What would be the effect of doing that? And is it better to use a constant or the sample average method?
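To see why the 1/N(A) step size is exactly the sample-average method, here is a quick numeric check (illustrative only, not part of the graded assignment):

```python
import numpy as np

np.random.seed(42)
rewards = np.random.randn(50) + 2.0  # a stream of rewards for a single action
q, n = 0.0, 0
for r in rewards:
    n += 1
    q += (r - q) / n  # incremental update with step size 1/N(A)
# the incremental estimate matches the ordinary sample mean of the rewards
print(q, np.mean(rewards))
```

The incremental form is preferred in practice because it uses constant memory: the agent never needs to store the full reward history.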
To investigate this question, let's start by creating a new agent that has a constant step size. This will be nearly identical to the agent created above. You will use the same code to select the epsilon-greedy action. You will change the update to have a constant step size instead of using the 1/N(A) update.
```
# -----------
# Graded Cell
# -----------
class EpsilonGreedyAgentConstantStepsize(main_agent.Agent):
    def agent_step(self, reward, observation):
        """
        Takes one step for the agent. It takes in a reward and observation and
        returns the action the agent chooses at that time step.
        Arguments:
        reward -- float, the reward the agent received from the environment after taking the last action.
        observation -- float, the observed state the agent is in. Do not worry about this as you will not use it
                       until future lessons
        Returns:
        current_action -- int, the action chosen by the agent at the current time step.
        """
        ### Useful Class Variables ###
        # self.q_values : An array with what the agent believes each of the values of the arm are.
        # self.arm_count : An array with a count of the number of times each arm has been pulled.
        # self.last_action : An int of the action that the agent took on the previous time step.
        # self.step_size : A float which is the current step size for the agent.
        # self.epsilon : The probability an epsilon greedy agent will explore (ranges between 0 and 1)
        #######################
        # Update q_values for action taken at previous time step
        # using self.step_size instead of using self.arm_count
        # YOUR CODE HERE
        self.arm_count[self.last_action] += 1
        self.q_values[self.last_action] += self.step_size * (reward - self.q_values[self.last_action])
        # Choose action using epsilon greedy. This is the same as you implemented above.
        # YOUR CODE HERE
        if np.random.random() < self.epsilon:
            current_action = np.random.randint(len(self.q_values))
        else:
            current_action = argmax(self.q_values)
        self.last_action = current_action
        return current_action
# --------------
# Debugging Cell
# --------------
# Feel free to make any changes to this cell to debug your code
for step_size in [0.01, 0.1, 0.5, 1.0]:
    e_greedy_agent = EpsilonGreedyAgentConstantStepsize()
    e_greedy_agent.q_values = [0, 0, 1.0, 0, 0]
    e_greedy_agent.num_actions = 5
    e_greedy_agent.last_action = 1
    e_greedy_agent.epsilon = 0.0
    e_greedy_agent.step_size = step_size
    action = e_greedy_agent.agent_step(1, 0)
    assert e_greedy_agent.q_values == [0, step_size, 1.0, 0, 0], "Check that you are updating q_values correctly using the stepsize."
# -----------
# Tested Cell
# -----------
# The contents of the cell will be tested by the autograder.
# If they do not pass here, they will not pass there.
np.random.seed(0)
# Check Epsilon Greedy with Different Constant Stepsizes
for step_size in [0.01, 0.1, 0.5, 1.0]:
    e_greedy_agent = EpsilonGreedyAgentConstantStepsize()
    e_greedy_agent.q_values = [0, 0, 1.0, 0, 0]
    e_greedy_agent.num_actions = 5
    e_greedy_agent.last_action = 1
    e_greedy_agent.epsilon = 0.0
    e_greedy_agent.step_size = step_size
    action = e_greedy_agent.agent_step(1, 0)
    assert e_greedy_agent.q_values == [0, step_size, 1.0, 0, 0]
# ---------------
# Discussion Cell
# ---------------
# Experiment code for different step sizes
step_sizes = [0.01, 0.1, 0.5, 1.0, '1/N(A)']
epsilon = 0.1
num_steps = 1000
num_runs = 200
fig, ax = plt.subplots(figsize=(15, 5), dpi= 80, facecolor='w', edgecolor='k')
q_values = {step_size: [] for step_size in step_sizes}
true_values = {step_size: None for step_size in step_sizes}
best_actions = {step_size: [] for step_size in step_sizes}
for step_size in step_sizes:
    all_averages = []
    for run in tqdm(range(num_runs)):
        np.random.seed(run)
        agent = EpsilonGreedyAgentConstantStepsize if step_size != '1/N(A)' else EpsilonGreedyAgent
        agent_info = {"num_actions": 10, "epsilon": epsilon, "step_size": step_size, "initial_value": 0.0}
        env_info = {}
        rl_glue = RLGlue(env, agent)
        rl_glue.rl_init(agent_info, env_info)
        rl_glue.rl_start()
        best_arm = np.argmax(rl_glue.environment.arms)
        scores = [0]
        averages = []
        if run == 0:
            true_values[step_size] = np.copy(rl_glue.environment.arms)
        best_action_chosen = []
        for i in range(num_steps):
            reward, state, action, is_terminal = rl_glue.rl_step()
            scores.append(scores[-1] + reward)
            averages.append(scores[-1] / (i + 1))
            if action == best_arm:
                best_action_chosen.append(1)
            else:
                best_action_chosen.append(0)
            if run == 0:
                q_values[step_size].append(np.copy(rl_glue.agent.q_values))
        best_actions[step_size].append(best_action_chosen)
    ax.plot(np.mean(best_actions[step_size], axis=0))
plt.legend(step_sizes)
plt.title("% Best Arm Pulled")
plt.xlabel("Steps")
plt.ylabel("% Best Arm Pulled")
vals = ax.get_yticks()
ax.set_yticklabels(['{:,.2%}'.format(x) for x in vals])
plt.show()
```
Notice first that we are now plotting the fraction of the time the best action is taken, rather than the average reward. To better understand the performance of an agent, it can be useful to measure specific behaviors beyond just how much reward is accumulated. This measure indicates how close the agent's behaviour is to optimal.
It seems as though 1/N(A) performed better than the others, in that it reaches a solution where it takes the best action most frequently. Now why might this be? Why did a step size of 0.5 start out better but end up performing worse? Why did a step size of 0.01 perform so poorly?
Let's dig into this further by plotting how well each agent, with its different step size method, tracks the true value. You do not have to enter any code here, just follow along.
```
# ---------------
# Discussion Cell
# ---------------
largest = 0
num_steps = 1000
for step_size in step_sizes:
    plt.figure(figsize=(15, 5), dpi=80, facecolor='w', edgecolor='k')
    largest = np.argmax(true_values[step_size])
    plt.plot([true_values[step_size][largest] for _ in range(num_steps)], linestyle="--")
    plt.title("Step Size: {}".format(step_size))
    plt.plot(np.array(q_values[step_size])[:, largest])
    plt.legend(["True Expected Value", "Estimated Value"])
    plt.xlabel("Steps")
    plt.ylabel("Value")
    plt.show()
```
These plots help clarify the performance differences between the different step sizes. A step size of 0.01 makes such small updates that the agent's value estimate of the best action does not get close to the actual value. Step sizes of 0.5 and 1.0 both get close to the true value quickly, but are very susceptible to stochasticity in the rewards. The updates overcorrect too much towards recent rewards, and so oscillate around the true value. This means that on many steps, the action that pulls the best arm may seem worse than it actually is. A step size of 0.1 updates fairly quickly to the true value, and does not oscillate as widely around the true values as 0.5 and 1.0. This is one of the reasons that 0.1 performs quite well. Finally, we see why 1/N(A) performed well. Early on, while the step size is still reasonably high, it moves quickly to the true expected value; but as the arm gets pulled more, its step size is reduced, which makes it less susceptible to the stochasticity of the rewards.
Does this mean that 1/N(A) is always the best? When might it not be? One setting where it might not be as effective is in non-stationary problems. You learned about non-stationarity in the lessons: it means the environment may change over time, either gradually or all at once.
Let's look at how a sudden change in the reward distributions affects a step size like 1/N(A). This time we will run the environment for 2000 steps, and after 1000 steps we will randomly change the expected value of all of the arms. We compare two agents, both using epsilon-greedy with epsilon = 0.1. One uses a constant step size of 0.1, the other a step size of 1/N(A) that reduces over time.
```
# ---------------
# Discussion Cell
# ---------------
epsilon = 0.1
num_steps = 2000
num_runs = 200
step_size = 0.1
plt.figure(figsize=(15, 5), dpi= 80, facecolor='w', edgecolor='k')
plt.plot([1.55 for _ in range(num_steps)], linestyle="--")
for agent in [EpsilonGreedyAgent, EpsilonGreedyAgentConstantStepsize]:
    all_averages = []
    for run in tqdm(range(num_runs)):
        agent_info = {"num_actions": 10, "epsilon": epsilon, "step_size": step_size}
        np.random.seed(run)
        rl_glue = RLGlue(env, agent)
        rl_glue.rl_init(agent_info, env_info)
        rl_glue.rl_start()
        scores = [0]
        averages = []
        for i in range(num_steps):
            reward, state, action, is_terminal = rl_glue.rl_step()
            scores.append(scores[-1] + reward)
            averages.append(scores[-1] / (i + 1))
            if i == 1000:
                rl_glue.environment.arms = np.random.randn(10)
        all_averages.append(averages)
    plt.plot(np.mean(all_averages, axis=0))
plt.legend(["Best Possible", "1/N(A)", "0.1"])
plt.xlabel("Steps")
plt.ylabel("Average reward")
plt.show()
```
Now the agent with a step size of 1/N(A) performed better at the start but then performed worse when the environment changed! What happened?
Think about what the step size would be after 1000 steps. Let's say the best action gets chosen 500 times. That means the step size for that action is 1/500, or 0.002. At each step when we update the value of that action, the estimate moves by only 0.002 times the error. That is a very tiny adjustment, and it will take a long time for the estimate to reach the true value.
The agent with step size 0.1, however, will always update in 1/10th of the direction of the error. This means that on average it will take ten steps for it to update its value to the sample mean.
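This tradeoff can be checked with a tiny deterministic sketch (assumed values, not the notebook's experiment): a reward signal that is exactly +1.0 for 1000 steps and then jumps to -1.0, tracked by a sample-average (1/N) update and by a constant step size of 0.1:

```python
q_avg = 0.0    # sample-average estimate (step size 1/n)
q_const = 0.0  # constant step size of 0.1
n = 0

for step in range(1100):
    r = 1.0 if step < 1000 else -1.0  # sudden change after step 1000
    n += 1
    q_avg += (1.0 / n) * (r - q_avg)
    q_const += 0.1 * (r - q_const)

# 100 steps after the change, the constant step size has essentially
# reached the new value, while the average is still dominated by old data.
print(q_avg)    # ~0.82, still far from -1.0
print(q_const)  # ~-1.0
assert abs(q_const - (-1.0)) < abs(q_avg - (-1.0))
```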
These are the types of tradeoffs we have to think about in reinforcement learning. A larger step size moves us more quickly toward the true value, but can make our estimated values oscillate around the expected value. A step size that reduces over time can converge close to the expected value, without oscillating. On the other hand, such a decaying step size is not able to adapt to changes in the environment. Nonstationarity, and the related concept of partial observability, is a common feature of reinforcement learning problems, particularly when learning online.
## Section 5: Conclusion
Great work! You have:
- Implemented your first agent
- Learned about the effect of epsilon, an exploration parameter, on the performance of an agent
- Learned about the effect of step size on the performance of the agent
- Learned about a good experiment practice of averaging across multiple runs
# NumPy and Pandas for 2D Data
This notebook contains the code assignments that are in the _NumPy and Pandas for 2D data_ lesson.
## Two-dimensional NumPy Arrays
In this section we will learn how to deal with NumPy two-dimensional arrays.
```
import numpy as np
# Subway ridership for 5 stations on 10 different days
ridership = np.array([
[ 0, 0, 2, 5, 0],
[1478, 3877, 3674, 2328, 2539],
[1613, 4088, 3991, 6461, 2691],
[1560, 3392, 3826, 4787, 2613],
[1608, 4802, 3932, 4477, 2705],
[1576, 3933, 3909, 4979, 2685],
[ 95, 229, 255, 496, 201],
[ 2, 0, 1, 27, 0],
[1438, 3785, 3589, 4174, 2215],
[1342, 4043, 4009, 4665, 3033]
])
```
### Accessing elements
```
print ridership[1, 3]
print ridership[1:3, 3:5]
print ridership[1, :]
```
### Vectorized operations on rows or columns
```
print ridership[0, :] + ridership[1, :]
print ridership[:, 0] + ridership[:, 1]
```
### Vectorized operations on entire arrays
```
a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
b = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
print a + b
```
### Quiz
```
def mean_riders_for_max_station(ridership):
    '''
    Fill in this function to find the station with the maximum riders on the
    first day, then return the mean riders per day for that station. Also
    return the mean ridership overall for comparison.
    Hint: NumPy's argmax() function might be useful:
    http://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html
    '''
    # Find the station with the maximum riders on the first day
    max_station = ridership[0, :].argmax()
    # Find the mean riders per day for that station
    mean_for_max = ridership[:, max_station].mean()
    # Find the mean ridership overall for comparison
    overall_mean = ridership.mean()
    return (overall_mean, mean_for_max)
print mean_riders_for_max_station(ridership)
```
### NumPy axis argument
```
a = np.array([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
print a.sum()
print a.sum(axis=0)
print a.sum(axis=1)
```
### Quiz
```
def min_and_max_riders_per_day(ridership):
    '''
    Fill in this function. First, for each subway station, calculate the
    mean ridership per day. Then, out of all the subway stations, return the
    maximum and minimum of these values. That is, find the maximum
    mean-ridership-per-day and the minimum mean-ridership-per-day for any
    subway station.
    '''
    # Find mean ridership per day for each subway station
    station_riders = ridership.mean(axis=0)
    max_daily_ridership = station_riders.max()
    min_daily_ridership = station_riders.min()
    return (max_daily_ridership, min_daily_ridership)
print min_and_max_riders_per_day(ridership)
```
## NumPy and Pandas Data Types
In this section we will see the limitations of numpy data types.
```
np.array([1, 2, 3, 4, 5]).dtype
np.array([['aaa', 1], ['bbbb', 2], ['cc', 3]]).dtype
```
In pandas dataframes, each column is assumed to be a different type whereas in numpy all columns must be of the same type. Also, pandas dataframes have indexes: an index for each row and a name for each column.
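A small sketch of that difference: the same mixed data forced into a single NumPy dtype versus kept as one dtype per DataFrame column (the column names here are made up for illustration):

```python
import numpy as np
import pandas as pd

# NumPy promotes everything to one common dtype (strings, here) ...
arr = np.array([['aaa', 1], ['bbbb', 2], ['cc', 3]])
print(arr.dtype)  # a single string dtype for the whole array

# ... while a DataFrame keeps an integer column as integers.
df = pd.DataFrame({'name': ['aaa', 'bbbb', 'cc'], 'count': [1, 2, 3]})
print(df.dtypes)  # 'name' is object, 'count' is int64
```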
## Pandas DataFrames
In this section we will introduce pandas dataframes.
```
import pandas as pd
# Subway ridership for 5 stations on 10 different days
ridership_df = pd.DataFrame(
data=[[ 0, 0, 2, 5, 0],
[1478, 3877, 3674, 2328, 2539],
[1613, 4088, 3991, 6461, 2691],
[1560, 3392, 3826, 4787, 2613],
[1608, 4802, 3932, 4477, 2705],
[1576, 3933, 3909, 4979, 2685],
[ 95, 229, 255, 496, 201],
[ 2, 0, 1, 27, 0],
[1438, 3785, 3589, 4174, 2215],
[1342, 4043, 4009, 4665, 3033]],
index=['05-01-11', '05-02-11', '05-03-11', '05-04-11', '05-05-11',
'05-06-11', '05-07-11', '05-08-11', '05-09-11', '05-10-11'],
columns=['R003', 'R004', 'R005', 'R006', 'R007']
)
```
### DataFrame creation
```
# You can create a DataFrame out of a dictionary mapping column names to values
df_1 = pd.DataFrame({'A': [0, 1, 2], 'B': [3, 4, 5]})
print df_1
# You can also use a list of lists or a 2D NumPy array
df_2 = pd.DataFrame([[0, 1, 2], [3, 4, 5]], columns=['A', 'B', 'C'])
print df_2
```
### Accessing elements
```
print ridership_df.iloc[0]
print ridership_df.loc['05-05-11']
print ridership_df['R003']
print ridership_df.iloc[1, 3]
```
### Accessing multiple rows
```
ridership_df.iloc[1:4]
```
### Accessing multiple columns
```
ridership_df[['R003', 'R005']]
```
### Pandas axis
```
df = pd.DataFrame({'A': [0, 1, 2], 'B': [3, 4, 5]})
print df.sum()
print df.sum(axis=1)
print df.values.sum()
```
### Quiz
```
def mean_riders_for_max_station(ridership):
    '''
    Fill in this function to find the station with the maximum riders on the
    first day, then return the mean riders per day for that station. Also
    return the mean ridership overall for comparison.
    This is the same as a previous exercise, but this time the
    input is a Pandas DataFrame rather than a 2D NumPy array.
    '''
    # Find the station with the maximum riders on the first day
    max_station = ridership.iloc[0].argmax()
    # Find the mean riders per day for that station
    mean_for_max = ridership[max_station].mean()
    # Find the mean ridership overall for comparison
    overall_mean = ridership.values.mean()
    return (overall_mean, mean_for_max)
print mean_riders_for_max_station(ridership_df)
ridership_df.iloc[0].argmax()
```
### Reading CSV files
DataFrames are a great data structure for representing CSV files.
```
path = 'data/'
subway_df = pd.read_csv(path + 'nyc_subway_weather.csv')
subway_df.head()
subway_df.describe()
```
### Quiz: Calculating correlation
```
def correlation(x, y):
    '''
    Fill in this function to compute the correlation between the two
    input variables. Each input is either a NumPy array or a Pandas
    Series.
    correlation = average of (x in standard units) times (y in standard units)
    Remember to pass the argument "ddof=0" to the Pandas std() function!
    '''
    std_x = (x - x.mean()) / x.std(ddof=0)
    std_y = (y - y.mean()) / y.std(ddof=0)
    return (std_x * std_y).mean()
entries = subway_df['ENTRIESn_hourly']
cum_entries = subway_df['ENTRIESn']
rain = subway_df['meanprecipi']
temp = subway_df['meantempi']
print correlation(entries, rain)
print correlation(entries, temp)
print correlation(rain, temp)
print correlation(entries, cum_entries)
```
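Since the standard-units definition above is exactly the Pearson correlation (using the same ddof in both standardizations makes the choice cancel out), it should agree with NumPy's `corrcoef`; a quick cross-check on made-up data:

```python
import numpy as np
import pandas as pd

def correlation(x, y):
    # Same standard-units definition as in the quiz above.
    std_x = (x - x.mean()) / x.std(ddof=0)
    std_y = (y - y.mean()) / y.std(ddof=0)
    return (std_x * std_y).mean()

x = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
y = pd.Series([2.0, 1.0, 4.0, 3.0, 6.0])

# The standard-units average is exactly the Pearson correlation coefficient.
assert np.isclose(correlation(x, y), np.corrcoef(x, y)[0, 1])
print(correlation(x, y))
```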
## DataFrame Vectorized Operations
In this section we will work with vectorized operations on dataframes.
### Adding DataFrames with the column names
```
df1 = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]})
df2 = pd.DataFrame({'a': [10, 20, 30], 'b': [40, 50, 60], 'c': [70, 80, 90]})
df1 + df2
```
### Adding DataFrames with overlapping column names
```
df1 = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]})
df2 = pd.DataFrame({'d': [10, 20, 30], 'c': [40, 50, 60], 'b': [70, 80, 90]})
df1 + df2
```
### Adding DataFrames with overlapping row indexes
```
df1 = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]},
index=['row1', 'row2', 'row3'])
df2 = pd.DataFrame({'a': [10, 20, 30], 'b': [40, 50, 60], 'c': [70, 80, 90]},
index=['row4', 'row3', 'row2'])
df1 + df2
```
### Quiz
```
# Cumulative entries and exits for one station for a few hours.
entries_and_exits = pd.DataFrame({
'ENTRIESn': [3144312, 3144335, 3144353, 3144424, 3144594,
3144808, 3144895, 3144905, 3144941, 3145094],
'EXITSn': [1088151, 1088159, 1088177, 1088231, 1088275,
1088317, 1088328, 1088331, 1088420, 1088753]
})
def get_hourly_entries_and_exits(entries_and_exits):
    '''
    Fill in this function to take a DataFrame with cumulative entries
    and exits (entries in the first column, exits in the second) and
    return a DataFrame with hourly entries and exits (entries in the
    first column, exits in the second).
    '''
    return entries_and_exits - entries_and_exits.shift(1)
    # Equivalent alternative: return entries_and_exits.diff()
get_hourly_entries_and_exits(entries_and_exits)
```
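The commented-out alternative in the quiz is equivalent: `df.diff()` is exactly `df - df.shift(1)`, including the NaN in the first row. A small sketch with made-up counts:

```python
import pandas as pd

df = pd.DataFrame({'ENTRIESn': [10, 13, 19], 'EXITSn': [4, 5, 9]})

by_shift = df - df.shift(1)  # first row becomes NaN: nothing to subtract
by_diff = df.diff()

assert by_shift.equals(by_diff)
print(by_shift)
```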
### DataFrame applymap()
```
df = pd.DataFrame({
'a': [1, 2, 3],
'b': [10, 20, 30],
'c': [5, 10, 15]
})
def add_one(x):
    return x + 1
df.applymap(add_one)
```
### Quiz
```
grades_df = pd.DataFrame(
data={'exam1': [43, 81, 78, 75, 89, 70, 91, 65, 98, 87],
'exam2': [24, 63, 56, 56, 67, 51, 79, 46, 72, 60]},
index=['Andre', 'Barry', 'Chris', 'Dan', 'Emilio',
'Fred', 'Greta', 'Humbert', 'Ivan', 'James']
)
def convert_grades(grades):
    '''
    Fill in this function to convert the given DataFrame of numerical
    grades to letter grades. Return a new DataFrame with the converted
    grade.
    The conversion rule is:
        90-100 -> A
        80-89  -> B
        70-79  -> C
        60-69  -> D
        0-59   -> F
    '''
    def convert_grade(grade):
        if grade >= 90:
            return 'A'
        if grade >= 80:
            return 'B'
        if grade >= 70:
            return 'C'
        if grade >= 60:
            return 'D'
        return 'F'
    return grades.applymap(convert_grade)
convert_grades(grades_df)
```
### DataFrame apply()
```
def convert_grades_curve(exam_grades):
    # Pandas has a built-in function that will perform this calculation.
    # This will give the bottom 0% to 10% of students the grade 'F',
    # 10% to 20% the grade 'D', and so on. You can read more about
    # the qcut() function here:
    # http://pandas.pydata.org/pandas-docs/stable/generated/pandas.qcut.html
    return pd.qcut(exam_grades,
                   [0, 0.1, 0.2, 0.5, 0.8, 1],
                   labels=['F', 'D', 'C', 'B', 'A'])
# qcut() operates on a list, array, or Series. This is the
# result of running the function on a single column of the
# DataFrame.
print convert_grades_curve(grades_df['exam1'])
# qcut() does not work on DataFrames, but we can use apply()
# to call the function on each column separately
print grades_df.apply(convert_grades_curve)
```
### Quiz
```
def standardize(df):
    '''
    Fill in this function to standardize each column of the given
    DataFrame. To standardize a variable, convert each value to the
    number of standard deviations it is above or below the mean.
    '''
    def standardize_column(column):
        return (column - column.mean()) / column.std(ddof=0)
    return df.apply(standardize_column)
standardize(grades_df)
```
### DataFrame apply() use case 2
We can apply a function that, for each column, returns a single element. The input will be a dataframe and the output a series.
```
df = pd.DataFrame({
'a': [4, 5, 3, 1, 2],
'b': [20, 10, 40, 50, 30],
'c': [25, 20, 5, 15, 10]
})
print df.apply(np.mean)
print df.apply(np.max)
def second_largest(df):
    '''
    Fill in this function to return the second-largest value of each
    column of the input DataFrame.
    '''
    def second_largest_in_column(column):
        return column.nlargest(2).iloc[-1]
    return df.apply(second_largest_in_column)
second_largest(df)
```
## Operations between DataFrames and Series
In this section we will see how operations between DataFrames and Series are defined.
### Adding a Series to a square DataFrame
```
s = pd.Series([1, 2, 3, 4])
df = pd.DataFrame({
0: [10, 20, 30, 40],
1: [50, 60, 70, 80],
2: [90, 100, 110, 120],
3: [130, 140, 150, 160]
})
print df
print '' # Create a blank line between outputs
print df + s
```
### Adding a Series to a one-row DataFrame
```
s = pd.Series([1, 2, 3, 4])
df = pd.DataFrame({0: [10], 1: [20], 2: [30], 3: [40]})
print df
print '' # Create a blank line between outputs
print df + s
```
### Adding a Series to a one-column DataFrame
```
s = pd.Series([1, 2, 3, 4])
df = pd.DataFrame({0: [10, 20, 30, 40]})
print df
print '' # Create a blank line between outputs
print df + s
```
### Adding when DataFrame column names match Series index
```
s = pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])
df = pd.DataFrame({
'a': [10, 20, 30, 40],
'b': [50, 60, 70, 80],
'c': [90, 100, 110, 120],
'd': [130, 140, 150, 160]
})
print df
print '' # Create a blank line between outputs
print df + s
```
### Adding when DataFrame column names don't match Series index
```
s = pd.Series([1, 2, 3, 4])
df = pd.DataFrame({
'a': [10, 20, 30, 40],
'b': [50, 60, 70, 80],
'c': [90, 100, 110, 120],
'd': [130, 140, 150, 160]
})
print df
print '' # Create a blank line between outputs
print df + s
```
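In the mismatched case above, the Series' integer index (0-3) shares no labels with the column names ('a'-'d'), so pandas aligns on the union of both label sets and every cell comes out NaN. A minimal sketch of that behavior:

```python
import pandas as pd

s = pd.Series([1, 2], index=[0, 1])
df = pd.DataFrame({'a': [10, 20], 'b': [30, 40]})

# No Series label matches any column name, so alignment produces
# the union of labels as columns and NaN everywhere.
result = df + s
print(result)
assert result.isnull().all().all()
```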
### Adding with axis='index'
```
s = pd.Series([1, 2, 3, 4])
df = pd.DataFrame({
0: [10, 20, 30, 40],
1: [50, 60, 70, 80],
2: [90, 100, 110, 120],
3: [130, 140, 150, 160]
})
print df
print '' # Create a blank line between outputs
print df.add(s, axis='index')
# The functions sub(), mul(), and div() work similarly to add()
```
### Adding with axis='columns'
```
s = pd.Series([1, 2, 3, 4])
df = pd.DataFrame({
0: [10, 20, 30, 40],
1: [50, 60, 70, 80],
2: [90, 100, 110, 120],
3: [130, 140, 150, 160]
})
print df
print '' # Create a blank line between outputs
print df.add(s, axis='columns')
# The functions sub(), mul(), and div() work similarly to add()
```
### Quiz
```
grades_df = pd.DataFrame(
data={'exam1': [43, 81, 78, 75, 89, 70, 91, 65, 98, 87],
'exam2': [24, 63, 56, 56, 67, 51, 79, 46, 72, 60]},
index=['Andre', 'Barry', 'Chris', 'Dan', 'Emilio',
'Fred', 'Greta', 'Humbert', 'Ivan', 'James']
)
def standardize(df):
    '''
    Fill in this function to standardize each column of the given
    DataFrame. To standardize a variable, convert each value to the
    number of standard deviations it is above or below the mean.
    This time, try to use vectorized operations instead of apply().
    You should get the same results as you did before.
    '''
    return (df - df.mean()) / df.std(ddof=0)

def standardize_rows(df):
    '''
    Optional: Fill in this function to standardize each row of the given
    DataFrame. Again, try not to use apply().
    This one is more challenging than standardizing each column!
    '''
    mean_diffs = df.sub(df.mean(axis='columns'), axis='index')
    return mean_diffs.div(df.std(axis='columns'), axis='index')
print standardize(grades_df)
print standardize_rows(grades_df)
```
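A quick sanity check of the row version (made-up grades): after standardizing each row, every row mean should be zero. Note that `df.std(axis='columns')` defaults to the sample standard deviation (ddof=1), unlike the column version above, which passed ddof=0.

```python
import numpy as np
import pandas as pd

def standardize_rows(df):
    # Same approach as the quiz above: subtract the row mean,
    # then divide by the row standard deviation.
    mean_diffs = df.sub(df.mean(axis='columns'), axis='index')
    return mean_diffs.div(df.std(axis='columns'), axis='index')

df = pd.DataFrame({'exam1': [43.0, 81.0, 78.0], 'exam2': [24.0, 63.0, 56.0]})
z = standardize_rows(df)

assert np.allclose(z.mean(axis='columns'), 0.0)
print(z)
```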
## Pandas groupby()
In this section we will use pandas groupby() function.
### Examine DataFrame
```
import matplotlib.pyplot as plt
import seaborn as sns
%pylab inline
values = np.array([1, 3, 2, 4, 1, 6, 4])
example_df = pd.DataFrame({
'value': values,
'even': values % 2 == 0,
'above_three': values > 3
}, index=['a', 'b', 'c', 'd', 'e', 'f', 'g'])
example_df
```
### Examine groups
```
grouped_data = example_df.groupby('even')
# The groups attribute is a dictionary mapping keys to lists of row indexes
print grouped_data.groups
```
### Group by multiple columns
```
grouped_data = example_df.groupby(['even', 'above_three'])
print grouped_data.groups
```
### Get sum of each group
```
grouped_data = example_df.groupby('even')
print grouped_data.sum()
```
### Limit columns in result
```
grouped_data = example_df.groupby('even')
# You can take one or more columns from the result DataFrame
print grouped_data.sum()['value']
print '\n' # Blank line to separate results
# You can also take a subset of columns from the grouped data before
# collapsing to a DataFrame. In this case, the result is the same.
print grouped_data['value'].sum()
```
### Quiz
```
path = 'data/'
subway_df = pd.read_csv(path + 'nyc_subway_weather.csv')
ridership_by_day = subway_df.groupby('day_week').mean()['ENTRIESn_hourly']
ridership_by_day.plot()
```
## Calculating Hourly Entries and Exits
In this section we will see how to obtain hourly entries and exits from cumulative ones.
### Standardize each group
```
values = np.array([1, 3, 2, 4, 1, 6, 4])
example_df = pd.DataFrame({
'value': values,
'even': values % 2 == 0,
'above_three': values > 3
}, index=['a', 'b', 'c', 'd', 'e', 'f', 'g'])
example_df
def standardize(xs):
    return (xs - xs.mean()) / xs.std()
grouped_data = example_df.groupby('even')
print grouped_data['value'].apply(standardize)
```
### Find second largest value in each group
```
def second_largest(xs):
    sorted_xs = xs.sort_values(inplace=False, ascending=False)
    return sorted_xs.iloc[1]
grouped_data = example_df.groupby('even')
print grouped_data['value'].apply(second_largest)
```
### Quiz
```
# DataFrame with cumulative entries and exits for multiple stations
ridership_df = pd.DataFrame({
'UNIT': ['R051', 'R079', 'R051', 'R079', 'R051', 'R079', 'R051', 'R079', 'R051'],
'TIMEn': ['00:00:00', '02:00:00', '04:00:00', '06:00:00', '08:00:00', '10:00:00', '12:00:00', '14:00:00', '16:00:00'],
'ENTRIESn': [3144312, 8936644, 3144335, 8936658, 3144353, 8936687, 3144424, 8936819, 3144594],
'EXITSn': [1088151, 13755385, 1088159, 13755393, 1088177, 13755598, 1088231, 13756191, 1088275]
})
def get_hourly_entries_and_exits(entries_and_exits):
    '''
    Fill in this function to take a DataFrame with cumulative entries
    and exits and return a DataFrame with hourly entries and exits.
    The hourly entries and exits should be calculated separately for
    each station (the 'UNIT' column).
    Hint: Use the `get_hourly_entries_and_exits()` function you wrote
    in a previous quiz, DataFrame Vectorized Operations, and the `.apply()`
    function, to help solve this problem.
    '''
    def hourly_for_group(entries_and_exits):
        return entries_and_exits - entries_and_exits.shift(1)
    return entries_and_exits.groupby('UNIT')[['ENTRIESn', 'EXITSn']].apply(hourly_for_group)
print get_hourly_entries_and_exits(ridership_df)
```
## Combining Pandas DataFrames
In this section we will use the merge() method to combine two pandas dataframes. We will list the most common parameters:
- **on**: names of the columns to join on
- **left_on**: Columns from the left DataFrame to use as keys
- **right_on**: Columns from the right DataFrame to use as keys
- **how**: One of 'left', 'right', 'outer', 'inner'
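A small sketch (made-up station data) of `left_on`/`right_on`, for the case where the key columns are named differently in each frame:

```python
import pandas as pd

stations = pd.DataFrame({'unit': ['R003', 'R004', 'R005'],
                         'line': ['A', 'B', 'C']})
readings = pd.DataFrame({'UNIT': ['R003', 'R004', 'R004'],
                         'entries': [100, 200, 250]})

# Inner join on differently-named key columns: R005 has no reading,
# so it is dropped; R004 matches twice, so it appears twice.
merged = stations.merge(readings, left_on='unit', right_on='UNIT', how='inner')
print(merged)
assert len(merged) == 3
```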
### Quiz
```
subway_df = pd.DataFrame({
'UNIT': ['R003', 'R003', 'R003', 'R003', 'R003', 'R004', 'R004', 'R004',
'R004', 'R004'],
'DATEn': ['05-01-11', '05-02-11', '05-03-11', '05-04-11', '05-05-11',
'05-01-11', '05-02-11', '05-03-11', '05-04-11', '05-05-11'],
'hour': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'ENTRIESn': [ 4388333, 4388348, 4389885, 4391507, 4393043, 14656120,
14656174, 14660126, 14664247, 14668301],
'EXITSn': [ 2911002, 2911036, 2912127, 2913223, 2914284, 14451774,
14451851, 14454734, 14457780, 14460818],
'latitude': [ 40.689945, 40.689945, 40.689945, 40.689945, 40.689945,
40.69132 , 40.69132 , 40.69132 , 40.69132 , 40.69132 ],
'longitude': [-73.872564, -73.872564, -73.872564, -73.872564, -73.872564,
-73.867135, -73.867135, -73.867135, -73.867135, -73.867135]
})
weather_df = pd.DataFrame({
'DATEn': ['05-01-11', '05-01-11', '05-02-11', '05-02-11', '05-03-11',
'05-03-11', '05-04-11', '05-04-11', '05-05-11', '05-05-11'],
'hour': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'latitude': [ 40.689945, 40.69132 , 40.689945, 40.69132 , 40.689945,
40.69132 , 40.689945, 40.69132 , 40.689945, 40.69132 ],
'longitude': [-73.872564, -73.867135, -73.872564, -73.867135, -73.872564,
-73.867135, -73.872564, -73.867135, -73.872564, -73.867135],
'pressurei': [ 30.24, 30.24, 30.32, 30.32, 30.14, 30.14, 29.98, 29.98,
30.01, 30.01],
'fog': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'rain': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'tempi': [ 52. , 52. , 48.9, 48.9, 54. , 54. , 57.2, 57.2, 48.9, 48.9],
'wspdi': [ 8.1, 8.1, 6.9, 6.9, 3.5, 3.5, 15. , 15. , 15. , 15. ]
})
def combine_dfs(subway_df, weather_df):
    '''
    Fill in this function to take 2 DataFrames, one with subway data and one with weather data,
    and return a single dataframe with one row for each date, hour, and location. Only include
    times and locations that have both subway data and weather data available.
    '''
    return subway_df.merge(weather_df, on=['DATEn', 'hour', 'latitude', 'longitude'], how='inner')
combine_dfs(subway_df, weather_df)
```
## Plotting for DataFrames
In this section we will plot some data.
```
values = np.array([1, 3, 2, 4, 1, 6, 4])
example_df = pd.DataFrame({
'value': values,
'even': values % 2 == 0,
'above_three': values > 3
}, index=['a', 'b', 'c', 'd', 'e', 'f', 'g'])
example_df
```
### groupby() with as_index=False
```
first_even = example_df.groupby('even', as_index=False).first()
print first_even
print first_even['even'] # Now 'even' is still a column in the DataFrame
```
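For comparison, a sketch of the same grouping with and without `as_index=False` (made-up data): with the default, the group keys move into the index; with `as_index=False` they remain a regular column.

```python
import pandas as pd

df = pd.DataFrame({'even': [True, False, True], 'value': [2, 3, 4]})

indexed = df.groupby('even').first()               # 'even' becomes the index
flat = df.groupby('even', as_index=False).first()  # 'even' stays a column

assert 'even' not in indexed.columns
assert 'even' in flat.columns
print(flat)
```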
### Quiz
```
path = 'data/'
subway_df = pd.read_csv(path + 'nyc_subway_weather.csv')
subway_df.head()
subway_df.groupby('rain')['ENTRIESn_hourly'].mean()
# Make a scatterplot of subway stations with latitude and longitude
# as the x and y axes and ridership as the bubble size
data_by_location = subway_df.groupby(['latitude', 'longitude'], as_index=False).mean()
data_by_location.head()
scaled_entries = (data_by_location['ENTRIESn_hourly'] /
data_by_location['ENTRIESn_hourly'].std())
plt.scatter(data_by_location['latitude'], data_by_location['longitude'], s=scaled_entries)
```
# Metadata management in kubeflow
```
!pip install kubeflow-metadata --user
from kubeflow.metadata import metadata
from datetime import datetime
from uuid import uuid4
METADATA_STORE_HOST = "metadata-grpc-service.kubeflow" # default DNS of Kubeflow Metadata gRPC service.
METADATA_STORE_PORT = 8080
#Define a workspace
ws_tf = metadata.Workspace(
store=metadata.Store(grpc_host=METADATA_STORE_HOST, grpc_port=METADATA_STORE_PORT),
name="kubeflow_dnn_tf",
description="Simple DNN workspace",
labels={"Execution_key_1": "value_1"})
#Create a run inside the workspace
r = metadata.Run(
workspace=ws_tf,
name="run-" + datetime.utcnow().isoformat("T") ,
description="An example run"
)
#Create an execution
exec = metadata.Execution(
name = "execution" + datetime.utcnow().isoformat("T") ,
workspace=ws_tf,
run=r,
description="DNN Execution example",
)
print("An execution was created with id %s" % exec.id)
##Run model ....
#Log information about the input data used
date_set_version = "data_set_version_" + str(uuid4())
data_set = exec.log_input(
metadata.DataSet(
description="Sample dataset - fashion mnist",
name="data-extraction",
owner="luis@luis.com",
uri="gs://...",
version=date_set_version,
query="SELECT * FROM table ..."))
print("Data set id is {0.id} with version '{0.version}'".format(data_set))
#Log information about a trained model
model_version = "model_version_" + str(uuid4())
model = exec.log_output(
metadata.Model(
name="MNIST Fashion",
description="model to recognize classify fashion",
owner="luis@luis.com",
uri="gs://...",
model_type="neural network",
training_framework={
"name": "tensorflow",
"version": "v1.0"
},
hyperparameters={
"layers": [10, 3, 1],
"early_stop": True
},
version=model_version,
labels={"mylabel": "l1"}))
print(model)
print("\nModel id is {0.id} and version is {0.version}".format(model))
#Log metrics information about the model
metrics = exec.log_output(
metadata.Metrics(
name="Fashion MNIST-evaluation",
description="validating the Fashion MNIST model to recognize fashion clothes",
owner="luis@luis.com",
uri="gs://...",
data_set_id=str(data_set.id),
model_id=str(model.id),
metrics_type=metadata.Metrics.VALIDATION,
values={"accuracy": 0.95},
labels={"mylabel": "l1"}))
print("Metrics id is %s" % metrics.id)
```
To get started, let's import graphcat and create an empty computational graph:
```
import graphcat
graph = graphcat.StaticGraph()
```
The first step in our workflow will be to load an image from disk. We're going to use [Pillow](https://pillow.readthedocs.org) to do the heavy lifting, so you'll need to install it with
$ pip install pillow
if you don't already have it. With that out of the way, the first thing we need is a parameter with the filename of the image to be loaded. In Graphcat, everything that affects your computation - including parameters - should be represented as a task:
```
graph.set_task("filename", graphcat.constant("astronaut.jpg"))
```
```
import PIL.Image
def load(graph, name, inputs):
    path = inputs.get("path")
    return PIL.Image.open(path)
graph.set_task("load", load)
```
The `load` function expects an input named *path* which will supply the filename to be loaded, and returns a Pillow image as output. Our "filename" task produces a filename, so we connect it to the "load" task's *path* input:
```
graph.set_links("filename", ("load", "path"))
```
Finally, let's stop and take stock of what we've done so far, with a diagram of the current computational graph:
```
import graphcat.notebook
graphcat.notebook.display(graph)
```
For our next step, we'll resize the incoming image:
```
def resize(graph, name, inputs):
    image = inputs.get("image")
    scale = inputs.get("scale")
    return image.resize((int(image.width * scale), int(image.height * scale)))
graph.set_task("resize", resize)
```
```
graph.set_parameter(target="resize", input="scale", source="scale_parameter", value=0.2)
```
And of course, we need to connect the `load` function to `resize`:
```
graph.set_links("load", ("resize", "image"))
graphcat.notebook.display(graph)
```
Before going any further, let's execute the current graph to see what the loaded image looks like:
```
graph.output("resize")
```
```
graphcat.notebook.display(graph)
```
Creating the blurred version of the input image works much like the resize operation - the `blur` task function takes an image and a blur radius as inputs, and produces a modified image as output:
```
import PIL.ImageFilter
def blur(graph, name, inputs):
    image = inputs.get("image")
    radius = inputs.get("radius")
    return image.filter(PIL.ImageFilter.GaussianBlur(radius))
graph.set_task("blur", blur)
graph.set_parameter("blur", "radius", "radius_parameter", 5)
graph.set_links("resize", ("blur", "image"))
graphcat.notebook.display(graph)
```
Notice that the tasks we just added have white backgrounds in the diagram, indicating that they haven't been executed yet.
Now, we're ready to combine the blurred and unblurred versions of the image. Notably, our "blend" task will take *three* inputs: one for each version of the image, plus one for the "alpha" parameter that will control how much each image contributes to the final result:
```
import PIL.ImageChops
def blend(graph, name, inputs):
    image1 = inputs.get("image1")
    image2 = inputs.get("image2")
    alpha = inputs.get("alpha")
    return PIL.ImageChops.blend(image1, image2, alpha)
graph.set_task("blend", blend)
graph.set_parameter("blend", "alpha", "alpha_parameter", 0.65)
graph.add_links("resize", ("blend", "image1"))
graph.set_links("blur", ("blend", "image2"))
graphcat.notebook.display(graph)
```
That's it! Now we're ready to execute the graph and see the softened result:
```
graph.output("blend")
```
Don't ask how, but I can confirm that the image now looks like it was taken in a department store, circa 1975.
Of course, executing the graph once doesn't really demonstrate Graphcat's true abilities. The real benefit of a computational graph only becomes clear when its parameters are changing, with the graph only executing the tasks that need to be recomputed.
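The idea behind that incremental re-execution can be sketched with a toy dependency graph (this is an illustration of the caching concept only, not graphcat's actual implementation): each node caches its output and recomputes only after being marked stale.

```python
class Node:
    """Toy task node: caches its output until invalidated."""
    def __init__(self, task, *parents):
        self.task = task
        self.parents = parents
        self.cache = None
        self.stale = True

    def invalidate(self):
        # Mark this node and everything downstream as needing recomputation.
        self.stale = True
        for child in children.get(self, []):
            child.invalidate()

    def output(self):
        if self.stale:  # recompute only when necessary
            self.cache = self.task(*[p.output() for p in self.parents])
            self.stale = False
        return self.cache

children = {}

a = Node(lambda: 2)
b = Node(lambda x: x * 10, a)
children[a] = [b]

print(b.output())  # computes a, then b: 20
a.task = lambda: 3
a.invalidate()     # only a and its downstream node b become stale
print(b.output())  # recomputed with the new parameter: 30
```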
To demonstrate this, we will use Jupyter notebook widgets - https://ipywidgets.readthedocs.io - to provide a simple, interactive user interface. In particular, we'll use interactive sliders to drive the "scale", "radius", and "alpha" parameters in the computational graph. We won't discuss how the widgets work in any detail, focusing instead on just the places where they are integrated with Graphcat. To begin, we will need to define some callback functions that will be called when the value of a widget changes:
```
def set_graph_value(name):
    def implementation(change):
        graph.set_task(name, graphcat.constant(change["new"]))
    return implementation
```
```
import ipywidgets as widgets
scale_widget = widgets.FloatSlider(description="scale:", min=0.01, max=1, value=0.2, step=0.01, continuous_update=False)
scale_widget.observe(set_graph_value("scale_parameter"), names="value")
radius_widget = widgets.FloatSlider(description="radius:", min=0, max=10, value=5, step=1, continuous_update=False)
radius_widget.observe(set_graph_value("radius_parameter"), names="value")
alpha_widget = widgets.FloatSlider(description="alpha:", min=0, max=1, value=0.7, step=0.01, continuous_update=False)
alpha_widget.observe(set_graph_value("alpha_parameter"), names="value")
```
We'll also need an output widget where our results will be displayed:
```
output_widget = widgets.Output()
output_widget.layout.height="1000px"
```
So we can see exactly which tasks are executed when a slider is moved, we will create our own custom logging function and connect it to the graph:
```
def log_execution(graph, name, inputs):
    with output_widget:
        print(f"Executing {name}")
graph.on_execute.connect(log_execution);
```
This function will be called every time a task is executed.
We also need a function that will be called whenever the graph changes. This function will be responsible for clearing the previous output, displaying an up-to-date graph diagram, and displaying the new graph output:
```
import IPython.display
def graph_changed(graph):
    with output_widget:
        IPython.display.clear_output(wait=True)
        graphcat.notebook.display(graph)
        IPython.display.display(graph.output("blend"))
graph.on_changed.connect(graph_changed);
```
```
IPython.display.display(scale_widget)
IPython.display.display(radius_widget)
IPython.display.display(alpha_widget)
IPython.display.display(output_widget)
graph_changed(graph)
```
# Data analysis
This extension enables the user to develop real-time data analysis. It consists of a complete Python-powered environment with a set of custom methods for agile development.
Extensions > New Extension > Data analysis
## Bare minimum
```
from bci_framework.extensions.data_analysis import DataAnalysis
class Analysis(DataAnalysis):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

if __name__ == '__main__':
    Analysis()
```
## Data stream access
The data stream is accessed asynchronously with the `loop_consumer` decorator from `bci_framework.extensions.data_analysis`; this decorator requires the Kafka topics to access.
There are two topics available for `loop_consumer`: `eeg` and `marker`.
```
from bci_framework.extensions.data_analysis import DataAnalysis, loop_consumer
class Analysis(DataAnalysis):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.stream()
@loop_consumer('eeg', 'marker')
def stream(self):
print('Incoming data...')
if __name__ == '__main__':
Analysis()
```
The decorated method can receive the following optional arguments:
**data:** `(eeg, aux)` if the topic is `eeg`; `marker_value` if the topic is `marker`.
**kafka_stream:** The `stream` object from Kafka.
**topic:** The topic of the Kafka stream; this value is also available as `data.topic`.
**frame:** Incremental counter of the streamed data.
**latency:** The time between acquisition and read.
```
@loop_consumer('eeg')
def stream(self, data, topic, frame, latency):
eeg, aux = data
print(f'Incoming data #{frame}')
print(f'EEG{eeg.shape}')
print(f'AUX{aux.shape}')
print(f'Topic: {topic}')
print(f'Latency: {latency}')
```
The above code executes on every data stream input, while the code below executes only when a marker is streamed.
```
@loop_consumer('marker')
def stream(self, data, topic, frame):
marker_value = data
print(f'Incoming marker: {marker_value}')
```
It is **not possible** to use the `loop_consumer` decorator in more than one place, so the `topic` argument can be used for flow control.
```
@loop_consumer('eeg', 'marker')
def stream(self, data, topic, frame):
if topic == 'eeg':
eeg, aux = data
        print("EEG data incoming...")
elif topic == 'marker':
marker_value = data
        print("Marker incoming...")
```
## Simulate data stream
Using `fake_loop_consumer` instead of `loop_consumer`, it is possible to create a fake data stream.
```
from bci_framework.extensions.data_analysis import DataAnalysis, fake_loop_consumer
import logging
class Analysis(DataAnalysis):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.stream()
@fake_loop_consumer('eeg')
def stream(self):
logging.debug('Incoming data...')
if __name__ == '__main__':
Analysis()
```
## Built-in methods
### Buffer / Sliding window
We can use `self.create_buffer` to implement an automatic buffer with a fixed time view, for example, a buffer of 30 seconds:
```
self.create_buffer(seconds=30)
```
The data can be accessed with `self.buffer_eeg` and `self.buffer_aux`:
```
class Analysis(DataAnalysis):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.create_buffer(seconds=30)
self.stream()
@loop_consumer('eeg')
def stream(self):
eeg = self.buffer_eeg
aux = self.buffer_aux
```
The `self.create_buffer` method receives other arguments, such as `aux_shape`, `fill` and `resampling`.
```
self.create_buffer(seconds=30, aux_shape=3, fill=0, resampling=1000)
```
**aux_shape:** The dimension of the auxiliary data; 3 by default.
**fill:** Initializes the buffer with this value; 0 by default.
**resampling:** This value is used to resample the data.
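The idea behind such a fixed-time sliding window can be sketched with plain NumPy. This is an illustrative toy, not the framework's actual implementation; the class name, method names, and default parameters are all assumptions:

```python
import numpy as np

class SlidingBuffer:
    """Toy fixed-time sliding window over multichannel samples."""

    def __init__(self, channels=8, sample_rate=250, seconds=30, fill=0):
        # Total number of samples kept in view at any time.
        self.size = sample_rate * seconds
        self.buffer = np.full((channels, self.size), fill, dtype=float)

    def append(self, chunk):
        """Shift the window left and write the newest chunk at the end."""
        n = chunk.shape[1]
        self.buffer = np.roll(self.buffer, -n, axis=1)
        self.buffer[:, -n:] = chunk

# 2 channels, 4 Hz, 2 s view -> an 8-sample window.
buf = SlidingBuffer(channels=2, sample_rate=4, seconds=2)
buf.append(np.ones((2, 3)))
print(buf.buffer[0])  # the newest 3 samples are 1, the rest are still the fill value
```

The oldest samples fall off the left edge as new chunks arrive, which is the behavior a time-limited buffer like `self.create_buffer(seconds=30)` suggests.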
### Resampling
The resampling is defined when the buffer is created, with the `resampling` argument. This value is not used strictly; instead, a near-optimal value is calculated based on the sampling rate and the buffer size.
```
class Analysis(DataAnalysis):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.create_buffer(seconds=30, resampling=1000)
self.stream()
@loop_consumer('eeg')
def stream(self):
eeg = self.buffer_eeg_resampled
aux = self.buffer_aux_resampled
print(f'EEG{eeg.shape}')
print(f'AUX{aux.shape}')
```
The resampling does not affect the buffer; both the raw and the resampled data are accessible at all times.
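One plausible way such a "near-optimal" value could be chosen is to pick an integer decimation factor and derive the actual resampled length from it. The function name and the strategy below are assumptions for illustration, not the framework's actual algorithm:

```python
def nearest_resampling(buffer_samples, requested):
    """Return a resampled length close to `requested` that divides the
    buffer by an integer decimation step (assumed strategy)."""
    factor = max(1, round(buffer_samples / requested))  # integer decimation step
    return buffer_samples // factor                     # actual resampled length

# A 30 s buffer at 250 Hz holds 7500 samples; requesting 1000 yields a
# slightly different, decimation-friendly length.
print(nearest_resampling(7500, 1000))
```

This explains why the shapes printed by `self.buffer_eeg_resampled` may not match the requested `resampling` value exactly.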
### Data slicing referenced by markers
This feature consists of a method that crops the available data relative to a marker reference; the `@marker_slicing` decorator does the trick. It receives the `markers` to watch for, a `t0` that indicates how many seconds to crop before the marker, and the `duration` of the slice.
```
from bci_framework.extensions.data_analysis import DataAnalysis, marker_slicing
class Analysis(DataAnalysis):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# Needs to be greater than the duration of the slice.
self.create_buffer(seconds=30, aux_shape=3)
self.slicing()
@marker_slicing(['Right', 'Left'], t0=-2, duration=6)
def slicing(self, eeg, aux, timestamp, marker):
print(eeg.shape)
print(aux.shape)
print(timestamp.shape)
print(marker)
print()
if __name__ == '__main__':
Analysis()
```
The above code will be executed each time a slice is available. The `buffer` must be longer than the duration of the desired slice.
**eeg:** The cropped EEG data (`channels, time`).
**aux:** The cropped AUX data (`aux, time`).
**timestamp:** The `timestamp` vector.
**marker:** The `marker` that triggered the crop.
**latency:** The time between acquisition and read.
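The cropping logic can be sketched in plain NumPy: find the sample index nearest to `marker + t0`, then take `duration` seconds of data from there. This is an illustrative guess at the mechanism, not the framework's code, and the function name is an assumption:

```python
import numpy as np

def slice_around_marker(eeg, timestamps, marker_time, t0, duration, fs):
    """Crop `duration` seconds of `eeg` starting `t0` seconds relative
    to `marker_time` (t0 is negative for pre-marker data)."""
    start = np.searchsorted(timestamps, marker_time + t0)
    n = int(duration * fs)
    return eeg[:, start:start + n]

fs = 10
timestamps = np.arange(300) / fs                    # 30 s of data at 10 Hz
eeg = np.arange(2 * timestamps.size).reshape(2, -1)  # 2 fake channels
crop = slice_around_marker(eeg, timestamps, marker_time=10, t0=-2, duration=6, fs=fs)
print(crop.shape)  # (2, 60): 2 s before the marker plus 4 s after it
```

This also makes the constraint above concrete: if the buffer holds fewer seconds than `duration`, the window around the marker cannot be fully extracted.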
## Receive markers
The markers can be accessed by specifying the `marker` topic in `loop_consumer`:
```
@loop_consumer('marker')
```
## Send commands, annotations and feedbacks
Commands are used to communicate outputs to the real world or to other systems; they can also be read in the **Stimuli delivery** to create neurofeedback applications. To activate this feature, set the `enable_produser` argument to `True` in the `DataAnalysis` subclass.
```
if __name__ == '__main__':
Analysis(enable_produser=True)
```
Once the producer is activated, the methods `self.send_command`, `self.send_annotation` and `self.send_feedback` are available.
```
@loop_consumer('eeg')
def stream(self):
eeg = self.buffer_eeg_resampled
aux = self.buffer_aux_resampled
[...] # amazing data analysis
self.send_command('MyCommand', value=45)
```
`self.send_annotation` also accepts the optional argument `duration`.
```
self.send_annotation('Data record start')
self.send_annotation('The subject yawn', duration=5)
```
`self.send_feedback` accepts any kind of Python data structure.
```
feed = {'var1': 0,
'var2': True,
'var3': 'Right',
}
self.send_feedback(**feed)
self.send_feedback(a=0, b=2.3, c='Left')
```
A generic producer is also available:
```
self.generic_produser(topic, data)
```
## Communication between analysis process
Let's build a script that acts like a **Kafka transformer**: it reads the raw EEG data, calculates the EEG spectrum using a Fourier transform, and injects the result back into the stream. The same pattern can be used for other advanced processing tasks, such as classification with neural networks.
```
from bci_framework.extensions.data_analysis import DataAnalysis, loop_consumer
from bci_framework.extensions import properties as prop
from gcpds.utils.processing import fourier
class Analysis(DataAnalysis):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.create_buffer(seconds=30, resampling=1000)
self.stream()
@loop_consumer('eeg')
def stream(self):
W, EEG = fourier(self.buffer_eeg, fs=prop.SAMPLE_RATE, axis=1)
data = {'amplitude': EEG,
'frequency': W}
self.generic_produser('spectrum', data)
if __name__ == '__main__':
Analysis(enable_produser=True)
```
Now, in another script, we will write a **Kafka consumer** that consumes from the previously created stream.
```
from bci_framework.extensions.data_analysis import DataAnalysis, loop_consumer
from bci_framework.extensions import properties as prop
from gcpds.utils.processing import fourier
class Analysis(DataAnalysis):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.stream()
@loop_consumer('spectrum')
def stream(self, data):
data = data.value['data']
EEG = data['amplitude']
W = data['frequency']
if __name__ == '__main__':
Analysis()
```
These examples are available by default in the framework's extensions explorer.
The `spectrum` topic must be created before it is used:
```
kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic spectrum
```
## Framework integration
BCI-Framework can execute any number of scripts as independent processes; the system handles interruptions and shows information about CPU and memory usage.
Data analysis > Data analysis
<img src='images/analysis_process.png'></img>
```
# Initialize Otter
import otter
grader = otter.Notebook("hw07.ipynb")
```
# Homework 7 – Visualization Fundamentals 🐧
## Data 94, Spring 2021
This homework is due on **Thursday, April 8th at 11:59PM.** You must submit the assignment to Gradescope. Submission instructions can be found at the bottom of this notebook. See the [syllabus](http://data94.org/syllabus/#late-policy-and-extensions) for our late submission policy.
### Acknowledgements
Many of the pictures and descriptions in this assignment are taken from [Dr. Allison Horst](https://twitter.com/allison_horst) and [Dr. Kristen Gorman](https://www.uaf.edu/cfos/people/faculty/detail/kristen-gorman.php). See [here](https://allisonhorst.github.io/palmerpenguins/) for more details.
```
# Run this cell, don't change anything.
!pip install kaleido
# Run this cell, don't change anything.
from datascience import *
import numpy as np
Table.interactive_plots()
import seaborn as sns
import plotly.io
import plotly.express as px
from IPython.display import display, Image
from ipywidgets import interact
def save_and_show(fig, path):
if fig == None:
print('Error: Make sure to use the argument show = False above.')
plotly.io.write_image(fig, path)
display(Image(path))
```
## Disclaimer
When creating graphs in this assignment, there are two things you will always have to do that we didn't talk about in lecture:
1. Assign the graph to a variable name. (As in previous assignments, we will tell you which variable names to use.)
2. Use the argument `show = False` in your graphing method, in addition to the other arguments you want to use.
These steps are **required** in order for your work to be graded. The distinction is subtle, since you will see the same visual output either way. See below for an example.
<b style="color:green;">Good:</b>
```py
fig_5 = table.sort('other_column').barh('column_name', show = False)
```
<b style="color:red;">Bad:</b>
```py
table.sort('other_column').barh('column_name')
```
Also note that most of this homework will be graded manually by us rather than being graded by an autograder.
## The Data
In this assignment, we will explore a dataset containing size measurements for three penguin species observed on three islands in the Palmer Archipelago, Antarctica. The data was collected by Dr. Kristen Gorman, a marine biologist, from 2007 to 2009.
Here's a photo of Dr. Gorman in the wild collecting the data:
<img src='images/gorman1.png' width=500>
Run the cell below to load in our data.
```
penguins = Table.from_df(sns.load_dataset('penguins').dropna())
penguins
```
Let's make sure we understand what each of the columns in our data represents before proceeding.
### `'species'`
There are three species of penguin in our dataset: Adelie, Chinstrap, and Gentoo.
<img src='images/lter_penguins.png' width=500>
### `'island'`
The penguins in our dataset come from three islands: Biscoe, Dream, and Torgersen. (The smaller image of Anvers Island may initially be confusing; the dark region is land and the light region is water.)
<img src='images/island.png' width=500>
<div align = center>
Image taken from <a href=https://journals.plos.org/plosone/article/figure?id=10.1371/journal.pone.0090081.g001>here</a>
</div>
### `'bill_length_mm'` and `'bill_depth_mm'`
See the illustration below.
<img src='images/culmen_depth.png' width=350>
### `'flipper_length_mm'`, `'body_mass_g'`, `'sex'`
[Flippers](https://www.thespruce.com/flipper-definition-penguin-wings-385251) are the equivalent of wings on penguins. Body mass and sex should be relatively clear.
## Question 1 – Bar Charts
### Question 1a
Let's start by visualizing the distribution of the islands that the penguins in our dataset come from. As discussed in Lecture 25, we use bar charts to display the distribution of categorical variables, so `barh` sounds like it will be useful here.
Run the following line of code. Make sure to scroll to the very bottom of the box that appears to see the x-axis.
```
penguins.barh('island')
```
<!-- BEGIN QUESTION -->
Hmm... that doesn't look right. In two sentences, explain what's wrong with the above graph and describe how you would fix it.
_Hint: Recall the three-step visualization process._
<!--
BEGIN QUESTION
name: q1a
points: 2
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
### Question 1b
Now, fix the above visualization so that it correctly shows the distribution of the islands our penguins come from. Specifically, assign `fig_1b` to a bar chart with three bars, one for each island. The length of each bar should correspond to the number of penguins from each island.
**Note:** Remember the disclaimer at the top of the assignment. Don't forget to set `show = False`.
```
fig_1b = ...
fig_1b
```
<!-- BEGIN QUESTION -->
After creating your visualization above, run the following cell. You should see a picture of your graph. You must run this cell in order to get credit for this question.
<!--
BEGIN QUESTION
name: q1b
points: 1
manual: true
-->
```
# Run this cell, don't change anything.
save_and_show(fig_1b, 'images/saved/1b.png')
```
<!-- END QUESTION -->
### Question 1c
Let's instead display the distribution of our penguins' species.
Below, assign `fig_1c` to a bar chart with three bars, one for each species. The length of each bar should correspond to the number of penguins from each species. Unlike in Question 1b, you will need to:
1. Sort the table before grouping to match the example below.
2. Edit the title and axis labels to match the example below (hint: set the arguments `xaxis_title`, `yaxis_title`, and `title`).
<img src='images/examples/1c.png' width=750>
**Note:** Remember the disclaimer at the top of the assignment. Don't forget to set `show = False`.
_Hint: Remember that you can use the `\` character to break your work into multiple lines._
```
fig_1c = ...
fig_1c
```
<!-- BEGIN QUESTION -->
After creating your visualization above, run the following cell. You should see a picture of your plot. You must run this cell in order to get credit for this question.
<!--
BEGIN QUESTION
name: q1c
points: 3
manual: true
-->
```
# Run this cell, don't change anything.
save_and_show(fig_1c, 'images/saved/1c.png')
```
<!-- END QUESTION -->
### Question 1d
Let's now try and create some grouped bar charts, in order to look at the relationship between `'species'` and `'island'`. We'll first need to create a table with one categorical column and several numerical columns, as that's what `barh` will need to create a grouped bar chart.
Below, assign `species_by_island` to a table with three rows and four columns. Each row should correspond to a single species of penguin, and the columns should correspond to the islands where our penguins are from. The entries in `species_by_island` should describe the number of penguins of a particular species from a particular island.
The first row of `species_by_island` is given below; remember there are supposed to be three rows.
| species | Biscoe | Dream | Torgersen |
|----------|---------:|--------:|------------:|
| Adelie | 44 | 55 | 47 |
_Hint: This should be very straightforward; there is a method we studied that does exactly what you need to do. Also, if you see a warning that starts with `Creating an ndarray from...`, you can safely ignore it._
<!--
BEGIN QUESTION
name: q1d
points: 1
-->
```
species_by_island = ...
species_by_island
grader.check("q1d")
```
### Question 1e
Now, using the table `species_by_island` you created in the previous part, replicate the grouped bar chart below and assign the result to `fig_1e`. Make sure to match the axis labels and title.
<img src='images/examples/1e.png' width=750>
**Note:** Remember the disclaimer at the top of the assignment. Don't forget to set `show = False`.
```
fig_1e = ...
fig_1e
```
<!-- BEGIN QUESTION -->
After creating your visualization above, run the following cell. You should see a picture of your graph. You must run this cell in order to get credit for this question.
<!--
BEGIN QUESTION
name: q1e
points: 2
manual: true
-->
```
# Run this cell, don't change anything.
save_and_show(fig_1e, 'images/saved/1e.png')
```
<!-- END QUESTION -->
### Question 1f
Great, now we know how to create grouped bar charts starting from our `penguins` table. Let's try it again – but this time, there will be less scaffolding.
Assign `fig_1f` to the grouped bar chart below, which describes the mean body weight of each species of penguin, separated by sex. Once again, make sure to match the axis label and title.
<img src='images/examples/1f.png' width=750>
**Note:** Remember the disclaimer at the top of the assignment. Don't forget to set `show = False`.
_Hint: Think about how this task is similar to what you did in the previous two parts. Don't forget the three-step visualization process, which says that you should first create a table with the information you need in your visualization; in that first step, `np.mean` will need to be used somewhere._
```
fig_1f = ...
fig_1f
```
<!-- BEGIN QUESTION -->
After creating your visualization above, run the following cell. You should see a picture of your graph. You must run this cell in order to get credit for this question.
<!--
BEGIN QUESTION
name: q1f
points: 3
manual: true
-->
```
# Run this cell, don't change anything.
save_and_show(fig_1f, 'images/saved/1f.png')
```
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
### Question 1g
Comment on what you see in the grouped bar chart from the previous part. Specifically, which species of penguin appears to be the largest on average, and which sex of penguins appears to be larger on average? Is there a pair of species whose sizes are roughly the same on average?
_Hint: We use the term "on average" a lot in the question, and that's because the graph you created only shows statistics for each species and sex on average (because you used `np.mean`). It is **not** saying that all female Gentoo penguins are larger than all female Chinstrap penguins, for example._
<!--
BEGIN QUESTION
name: q1g
points: 2
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
## Question 2 – Histograms
<img src='images/all3.png' width=400>
<div align=center>
Adelie, Chinstrap, and Gentoo penguins.
</div>
Great! Now that we've explored the distributions of some of the categorical variables in our dataset (species and island), it's time to study the distributions of some of the numerical variables. As covered in Lecture 26, we can visualize a numerical distribution by creating a histogram.
### Question 2a
Before we draw any histograms, we'll make sure that we understand how histograms work.
Run the cell below to draw a histogram displaying the distribution of our penguins' bill lengths.
```
penguins.select('bill_length_mm').hist(bins = np.arange(30, 60, 5), density = False)
```
<!-- BEGIN QUESTION -->
Remember, in a histogram, each bin is inclusive of the left endpoint and exclusive of the right endpoint. What this means is that, for example, the bin between 35 mm and 40 mm above corresponds to bill lengths that are greater than or equal to 35 mm and less than 40 mm.
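As a quick aside, this half-open bin convention can be checked directly with plain `numpy` (one caveat: NumPy's `histogram` treats the very last bin as closed on the right, unlike all the middle bins):

```python
import numpy as np

# Bins are [35, 40) and [40, 45]: each middle bin includes its left edge
# and excludes its right edge.
values = np.array([35.0, 39.9, 40.0, 44.9])
counts, edges = np.histogram(values, bins=[35, 40, 45])
print(counts)  # 40.0 lands in the second bin, not the first
```

The same left-inclusive, right-exclusive rule is what determines which bar a borderline bill length contributes to in the plot above.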
In the Markdown cell below, answer the following three questions by looking at the graph above; do **not** write any code. **If you don't believe it is possible to determine the answer by looking at the above graph, write "impossible to tell" and explain why.** Remember that you can hover over the bars above to get their exact heights.
1. How many penguins have a bill length between 50 mm (inclusive) and 55 mm (exclusive)?
2. True or False: `penguins.where('bill_length_mm', are.between(50, 55)).num_rows` is equal to the correct answer from the previous question.
3. How many penguins have a bill length between 43 mm (inclusive) and 50 mm (exclusive)?
<!--
BEGIN QUESTION
name: q2a
points: 3
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
### Question 2b
Below, assign `fig_2b` to a histogram that visualizes the distribution of our penguins' bill depths (**not** lengths, as in the previous part). Specifically, you must re-create the histogram below. Make sure that your histogram has the same bins, y-axis scale, axis labels, and title as the example.
<img src='images/examples/2b.png' width=750>
**Note:** Remember the disclaimer at the top of the assignment. Don't forget to set `show = False`.
_Hint: Our solution sets the arguments `density`, `bins`, `xaxis_title`, `yaxis_title`, `title`, and `show`._
```
fig_2b = ...
fig_2b
```
<!-- BEGIN QUESTION -->
After creating your visualization above, run the following cell. You should see a picture of your graph. You must run this cell in order to get credit for this question.
<!--
BEGIN QUESTION
name: q2b
points: 3
manual: true
-->
```
# Run this cell, don't change anything.
save_and_show(fig_2b, 'images/saved/2b.png')
```
<!-- END QUESTION -->
### Question 2c
When creating histograms, it's important to try several different bin sizes in order to make sure that we're satisfied with the level of detail (or lack thereof) in our histogram.
Run the code cell below. It will present you with a histogram of the distribution of our penguins' body masses, along with a slider for bin widths. Use the slider to try several different bin widths and look at the resulting histograms.
```
# Don't worry about the code, just play with the slider that appears after running.
def draw_mass_histogram(bin_width):
fig = penguins.select('body_mass_g').hist(bins = np.arange(2700, 6300+2*bin_width, bin_width), density = False, show = False)
display(fig)
interact(draw_mass_histogram, bin_width=(25, 525, 25));
```
<!-- BEGIN QUESTION -->
In the cell below, compare the two histograms that result from setting the bin width to 50 and 500. What are the pros and cons of each? (Remember that these histograms are displaying the same data, just with different bin sizes.)
<!--
BEGIN QUESTION
name: q2c
points: 2
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
### Question 2d
We can also draw overlaid histograms to compare the distribution of a numerical variable, separated by some category (by using the `group` argument). Run the cell below to generate a histogram of body masses, separated by species.
```
penguins.select('species', 'body_mass_g') \
.hist('body_mass_g', group = 'species', xaxis_title = 'Body Mass (g)', density = False)
```
The above graph is... okay. It's a little hard to extract any insight from it, since there are so many overlapping regions. However, you can click the text in the legend (`species = Gentoo` for example) to make certain categories disappear. Try it!
Also, keep in mind that there are fewer Chinstrap and Gentoo penguins in our dataset than Adelie penguins, which is why the bars for Adelie are all significantly taller. If you hide the Gentoo distribution and look at just the Adelie and Chinstrap distributions, you'll see that the "shapes" of the distributions are very similar (which is consistent with your result from Question 1f).
However, there's another solution – we can make three separate histograms, one for each species. Below, assign `fig_2d` to the graph below, which displays the distributions of all three species' body masses on separate axes. Unlike the previous parts of this assignment, you will need to set the `height` and `width` arguments in `hist` to 700 and 500, respectively (otherwise the resulting graph is too big).
<img src='images/examples/2d.png' width=300>
**Note:** Remember the disclaimer at the top of the assignment. Don't forget to set `show = False`.
_Hint: Our solution used the arguments `group`, `density`, `overlay`, `height`, `width`, `title`, and `show`. You should start by copying the code we used to create the overlaid histogram and then add and modify arguments._
```
fig_2d = ...
fig_2d
```
<!-- BEGIN QUESTION -->
After creating your visualization above, run the following cell. You should see a picture of your graph. You must run this cell in order to get credit for this question.
<!--
BEGIN QUESTION
name: q2d
points: 2
manual: true
-->
```
# Run this cell, don't change anything.
save_and_show(fig_2d, 'images/saved/2d.png')
```
<!-- END QUESTION -->
## Question 3 – Moving Forward
<img src='images/biscoe.png' width=400>
<div align=center>
A Gentoo penguin colony at Biscoe Point.
</div>
In Week 11's lectures we focused on bar charts and histograms. In this question, we will introduce the scatter plot, which you will learn about in Week 12 (and gain experience with in Homework 8).
Run the following cell to generate a scatter plot.
```
penguins.scatter('bill_length_mm', 'bill_depth_mm',
group = 'species', s = 35, sizes = 'body_mass_g',
title = 'Bill Length vs. Bill Depth')
```
<!-- BEGIN QUESTION -->
Here's what you're seeing above:
- Position on the x-axis represents bill length.
- Position on the y-axis represents bill depth.
- Color represents species, as per the legend on the right.
- Size represents body mass (this is a little hard to see, but some points are slightly larger than others).
You'll note that there are three general "clusters" or "groups" of points, corresponding to the three penguin species.
Use the scatter plot to fill in the blanks below. Both blanks should be a species of penguin.
_"It appears that the distribution of bill lengths of Chinstrap penguins is very similar to the distribution of bill lengths of _ _ _ _ penguins, while the distribution of bill depths of Chinstrap penguins is very similar to the distribution of bill depths of _ _ _ _ penguins."_
<!--
BEGIN QUESTION
name: q3
points: 2
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
### Fun Demo
We won't cover boxplots in depth this semester, but we'll draw some for you here. If you aren't familiar with boxplots (also known as "box-and-whisker plots"), you can watch the first few minutes of [this](https://www.youtube.com/watch?v=sDZSljMKkPw) video by Suraj or [this](https://www.khanacademy.org/math/cc-sixth-grade-math/cc-6th-data-statistics/cc-6th-box-whisker-plots/v/constructing-a-box-and-whisker-plot) video by Khan Academy.
Below, we'll draw boxplots for the distribution of body masses, separated by species.
```
px.box(penguins.to_df(), x = 'species', y = 'body_mass_g', color = 'species')
```
This boxplot is showing the same information as the histograms in Question 2d, just in a different way.
We can even take things a step further, by separating by species and sex:
```
px.box(penguins.to_df(), x = 'species', y = 'body_mass_g', color = 'sex')
```
In a single graph, we now see the distribution of penguin body masses, separated by species and sex. Pretty cool!
# Done!
Congrats! You've finished another Data 94 homework assignment!
To submit your work, follow the steps outlined on Ed. **Remember that for this homework in particular, almost all problems will be graded manually, rather than by the autograder.**
The point breakdown for this assignment is given in the table below:
| **Category** | Points |
| --- | --- |
| Autograder | 1 |
| Written (Including Visualizations) | 25 |
| **Total** | 26 |
---
To double-check your work, the cell below will rerun all of the autograder tests.
```
grader.check_all()
```
## Submission
Make sure you have run all cells in your notebook in order before running the cell below, so that all images/graphs appear in the output. The cell below will generate a zip file for you to submit. **Please save before exporting!**
```
# Save your notebook first, then run this cell to export your submission.
grader.export()
```
```
import tensorflow.keras
tensorflow.keras.__version__
```
# Understanding recurrent neural networks
This notebook contains the code samples found in Chapter 6, Section 2 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
---
[...]
## A first recurrent layer in Keras
The process we just naively implemented in Numpy corresponds to an actual Keras layer: the `SimpleRNN` layer:
```
from tensorflow.keras.layers import SimpleRNN
```
There is just one minor difference: `SimpleRNN` processes batches of sequences, like all other Keras layers, not just a single sequence like
in our Numpy example. This means that it takes inputs of shape `(batch_size, timesteps, input_features)`, rather than `(timesteps,
input_features)`.
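For reference, the naive single-sequence NumPy implementation that the text alludes to looks roughly like this; treat it as a sketch (the shapes and variable names follow the book's conventions, but the sizes chosen here are arbitrary):

```python
import numpy as np

timesteps, input_features, output_features = 100, 32, 64
inputs = np.random.random((timesteps, input_features))  # one sequence, no batch axis

# Weight matrices and bias of a simple RNN cell.
W = np.random.random((output_features, input_features))
U = np.random.random((output_features, output_features))
b = np.random.random((output_features,))

state_t = np.zeros((output_features,))  # initial state: all zeros
successive_outputs = []
for input_t in inputs:  # loop over timesteps of the single sequence
    output_t = np.tanh(np.dot(W, input_t) + np.dot(U, state_t) + b)
    successive_outputs.append(output_t)
    state_t = output_t  # the output becomes the state for the next step

outputs = np.stack(successive_outputs)
print(outputs.shape)  # (timesteps, output_features)
```

`SimpleRNN` does the same recurrence, but vectorized over an extra leading `batch_size` axis.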
Like all recurrent layers in Keras, `SimpleRNN` can be run in two different modes: it can return either the full sequences of successive
outputs for each timestep (a 3D tensor of shape `(batch_size, timesteps, output_features)`), or it can return only the last output for each
input sequence (a 2D tensor of shape `(batch_size, output_features)`). These two modes are controlled by the `return_sequences` constructor
argument. Let's take a look at an example:
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN
model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32))
model.summary()
model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32, return_sequences=True))
model.summary()
```
It is sometimes useful to stack several recurrent layers one after the other in order to increase the representational power of a network.
In such a setup, you have to get all intermediate layers to return full sequences:
```
model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32)) # This last layer only returns the last outputs.
model.summary()
```
Now let's try to use such a model on the IMDB movie review classification problem. First, let's preprocess the data:
```
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence
max_features = 10000 # number of words to consider as features
maxlen = 500 # cut texts after this number of words (among top max_features most common words)
batch_size = 32
print('Loading data...')
(input_train, y_train), (input_test, y_test) = imdb.load_data(num_words=max_features)
print(len(input_train), 'train sequences')
print(len(input_test), 'test sequences')
print('Pad sequences (samples x time)')
input_train = sequence.pad_sequences(input_train, maxlen=maxlen)
input_test = sequence.pad_sequences(input_test, maxlen=maxlen)
print('input_train shape:', input_train.shape)
print('input_test shape:', input_test.shape)
```
Let's train a simple recurrent network using an `Embedding` layer and a `SimpleRNN` layer:
```
from tensorflow.keras.layers import Dense
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
history = model.fit(input_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
```
Let's display the training and validation loss and accuracy:
```
import matplotlib.pyplot as plt
%matplotlib inline
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
As a reminder, in chapter 3, our very first naive approach to this very dataset got us to 88% test accuracy. Unfortunately, our small
recurrent network doesn't perform very well at all compared to this baseline (only up to 85% validation accuracy). Part of the problem is
that our inputs only consider the first 500 words rather than the full sequences --
hence our RNN has access to less information than our earlier baseline model. The remainder of the problem is simply that `SimpleRNN` isn't very good at processing long sequences, like text. Other types of recurrent layers perform much better. Let's take a look at some
more advanced layers.
[...]
## A concrete LSTM example in Keras
Now let's switch to more practical concerns: we will set up a model using an LSTM layer and train it on the IMDB data. Here's the network,
similar to the one with `SimpleRNN` that we just presented. We only specify the output dimensionality of the LSTM layer, and leave every
other argument (there are lots) to the Keras defaults. Keras has good defaults, and things will almost always "just work" without you
having to spend time tuning parameters by hand.
```
from tensorflow.keras.layers import LSTM
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(LSTM(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(input_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
# Validation of gens_eia860
This notebook runs sanity checks on the Generators data reported in EIA Form 860. These are the same checks run by the PyTest-based gens_eia860 data validations. The notebook and its visualizations are meant to be used as a diagnostic tool, to help understand what's wrong when the PyTest data validations fail.
```
%load_ext autoreload
%autoreload 2
import sys
import pandas as pd
import sqlalchemy as sa
import pudl
import warnings
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(stream=sys.stdout)
formatter = logging.Formatter('%(message)s')
handler.setFormatter(formatter)
logger.handlers = [handler]
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
plt.style.use('ggplot')
mpl.rcParams['figure.figsize'] = (10,4)
mpl.rcParams['figure.dpi'] = 150
pd.options.display.max_columns = 56
pudl_settings = pudl.workspace.setup.get_defaults()
ferc1_engine = sa.create_engine(pudl_settings['ferc1_db'])
pudl_engine = sa.create_engine(pudl_settings['pudl_db'])
```
## Get the original EIA 860 data
First we pull the original (post-ETL) EIA 860 data out of the database. We will use the values in this dataset as a baseline for checking that the later aggregated data and derived values remain valid. We will also eyeball these values here to make sure they are within the expected range. This may take a minute or two depending on the speed of your machine.
```
pudl_out_orig = pudl.output.pudltabl.PudlTabl(pudl_engine, freq=None)
gens_eia860_orig = pudl_out_orig.gens_eia860()
```
# Validation Against Fixed Bounds
Some of the variables reported in this table have a fixed range of reasonable values, like the capacity. These variables can be tested for validity against external standards directly. In general we have two kinds of tests in this section:
* **Tails:** are the extreme values too extreme? Typically, this is checked at the 5% and 95% levels, but depending on the distribution, sometimes other thresholds are used.
* **Middle:** Is the central value of the distribution where it should be?
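To make these two kinds of tests concrete, here is a minimal pure-Python sketch of a tails-and-middle check (the thresholds and capacity values are made up; the real checks live in `pudl.validate`):

```
def quantile(sorted_vals, q):
    """Nearest-rank quantile of a pre-sorted list (simplified)."""
    idx = min(int(q * len(sorted_vals)), len(sorted_vals) - 1)
    return sorted_vals[idx]

def check_bounds(values, low_tail, high_tail, mid_low, mid_high):
    """Tails: 5%/95% quantiles inside [low_tail, high_tail].
    Middle: median inside [mid_low, mid_high]."""
    vals = sorted(values)
    ok_tails = low_tail <= quantile(vals, 0.05) and quantile(vals, 0.95) <= high_tail
    ok_middle = mid_low <= quantile(vals, 0.50) <= mid_high
    return ok_tails and ok_middle

capacities = [1, 5, 10, 20, 50, 80, 120, 200, 350, 500]  # made-up capacity_mw values
print(check_bounds(capacities, low_tail=0, high_tail=1500, mid_low=10, mid_high=300))
# -> True
```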
### Fields that need checking:
The main column we are checking right now is the capacity.
Future data-like columns to potentially check:
- 'minimum_load_mw'
- 'nameplate_power_factor'
- 'planned_net_summer_capacity_derate_mw'
- 'planned_net_summer_capacity_uprate_mw'
- 'planned_net_winter_capacity_derate_mw'
- 'planned_net_winter_capacity_uprate_mw'
- 'planned_new_capacity_mw'
- 'summer_capacity_mw'
- 'winter_capacity_mw'
```
gens_eia860_orig.sample(10)
```
## Capacity
```
pudl.validate.plot_vs_bounds(gens_eia860_orig, pudl.validate.gens_eia860_vs_bound)
```
# Validating Historical Distributions
As a sanity check of the testing process itself, we can check whether the entire historical distribution has attributes that place it within the extremes of a historical subsampling of the distribution. In this case, we sample each historical year, look at the range of values taken on by some quantile, and see whether the same quantile for the whole of the dataset fits within that range.
```
pudl.validate.plot_vs_self(gens_eia860_orig, pudl.validate.gens_eia860_self)
```
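The idea behind this self-validation can be sketched in plain Python (hypothetical data and the median as the quantile; the real implementation is `pudl.validate.plot_vs_self`): compute a quantile within each yearly subsample, then check that the same quantile of the pooled data falls inside the min-max range of the per-year values.

```
def median(vals):
    s = sorted(vals)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def self_check(samples_by_year):
    """samples_by_year: dict mapping year -> list of observed values."""
    per_year = [median(v) for v in samples_by_year.values()]
    pooled = median([x for v in samples_by_year.values() for x in v])
    # the pooled quantile should fall inside the range of per-year quantiles
    return min(per_year) <= pooled <= max(per_year)

data = {2018: [10, 12, 14], 2019: [11, 13, 15], 2020: [9, 12, 16]}
print(self_check(data))  # -> True
```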
# Intro to Hidden Markov Models (optional)
---
### Introduction
In this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.
<div class="alert alert-block alert-info">
**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.
</div>
The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
<div class="alert alert-block alert-info">
**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
</div>
<hr>
<div class="alert alert-block alert-warning">
**Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
</div>
```
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
```
## Build a Simple HMM
---
You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).
> You are the security guard stationed at a secret underground installation. Each day, you try to guess whether it's raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.
A simplified diagram of the required network topology is shown below.

### Describing the Network
<div class="alert alert-block alert-warning">
$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of a state transition probability distribution $A$ and an emission probability distribution $B$.
</div>
HMM networks are parameterized by two distributions: the emission probabilities giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.
<div class="alert alert-block alert-warning">
At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.
</div>
In this problem, $t$ corresponds to each day of the week, the hidden states represent the weather outside (whether it is Rainy or Sunny), and the observations record whether the security guard sees the director carrying an umbrella or not.
For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.)
### Initializing an HMM Network with Pomegranate
The Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.html#initialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
```
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
```
### **IMPLEMENTATION**: Add the Hidden States
When the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution.
#### Observation Emission Probabilities: $P(Y_t | X_t)$
We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)
| | $yes$ | $no$ |
| --- | --- | --- |
| $Sunny$ | 0.10 | 0.90 |
| $Rainy$ | 0.80 | 0.20 |
```
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
```
### **IMPLEMENTATION:** Adding Transitions
Once the states are added to the model, we can build up the desired topology of individual state transitions.
#### Initial Probability $P(X_0)$:
We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:
| $Sunny$ | $Rainy$ |
| --- | ---
| 0.5 | 0.5 |
#### State transition probabilities $P(X_{t} | X_{t-1})$
Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)
| | $Sunny$ | $Rainy$ |
| --- | --- | --- |
|$Sunny$| 0.80 | 0.20 |
|$Rainy$| 0.40 | 0.60 |
```
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
```
## Visualize the Network
---
We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument to True will add the model start & end states that are included in every Pomegranate network.
```
show_model(model, figsize=(100, 5), filename="example.png", overwrite=True, show_ends=False)
```
### Checking the Model
The states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the default column order specified, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.
Run the next cell to inspect the full state transition matrix.
```
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
```
## Inference in Hidden Markov Models
---
Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:
<div class="alert alert-block alert-info">
**Likelihood Evaluation**<br>
Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the model
</div>
We can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other state sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.
<div class="alert alert-block alert-info">
**Hidden State Decoding**<br>
Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observations
</div>
We can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states.
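To make the "filtering" case concrete, here is a minimal pure-Python forward filter for the umbrella model, using the emission and transition tables defined above (a sketch independent of Pomegranate; the function name is illustrative):

```
states = ["Sunny", "Rainy"]
start = {"Sunny": 0.5, "Rainy": 0.5}
trans = {"Sunny": {"Sunny": 0.8, "Rainy": 0.2},
         "Rainy": {"Sunny": 0.4, "Rainy": 0.6}}
emit = {"Sunny": {"yes": 0.1, "no": 0.9},
        "Rainy": {"yes": 0.8, "no": 0.2}}

def filter_states(observations):
    """Return P(X_t | Y_1..Y_t), the filtered belief after the last observation."""
    prior = dict(start)  # P(X_1) before seeing any evidence
    for obs in observations:
        # update step: weight the prior by the emission likelihood and renormalize
        unnorm = {s: prior[s] * emit[s][obs] for s in states}
        total = sum(unnorm.values())
        belief = {s: unnorm[s] / total for s in states}
        # predict step: propagate the belief through the transition model
        prior = {s: sum(belief[p] * trans[p][s] for p in states) for s in states}
    return belief

print(filter_states(["yes"]))  # Rainy is ~0.889 after seeing one umbrella
```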
<div class="alert alert-block alert-info">
**Parameter Learning**<br>
Given a model topology (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$
</div>
We don't need to learn the model parameters for the weather problem or POS tagging, but it is supported by Pomegranate.
### IMPLEMENTATION: Calculate Sequence Likelihood
Calculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood over all possible hidden state paths that the specified model generated the observation sequence.
Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
```
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
```
### IMPLEMENTATION: Decoding the Most Likely Hidden State Sequence
The [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.
This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.
Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
```
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
```
### Forward likelihood vs Viterbi likelihood
Run the cells below to see the likelihood of each sequence of observations with length 3, and compare with the viterbi path.
```
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
```
### Congratulations!
You've now finished the HMM warmup. You should have all the tools you need to complete the part of speech tagger project.
<a id='ppd'></a>
<div id="qe-notebook-header" align="right" style="text-align:right;">
<a href="https://quantecon.org/" title="quantecon.org">
<img style="width:250px;display:inline;" width="250px" src="https://assets.quantecon.org/img/qe-menubar-logo.svg" alt="QuantEcon">
</a>
</div>
# Pandas for Panel Data
<a id='index-1'></a>
## Contents
- [Pandas for Panel Data](#Pandas-for-Panel-Data)
- [Overview](#Overview)
- [Slicing and Reshaping Data](#Slicing-and-Reshaping-Data)
- [Merging Dataframes and Filling NaNs](#Merging-Dataframes-and-Filling-NaNs)
- [Grouping and Summarizing Data](#Grouping-and-Summarizing-Data)
- [Final Remarks](#Final-Remarks)
- [Exercises](#Exercises)
- [Solutions](#Solutions)
## Overview
In an [earlier lecture on pandas](https://python.quantecon.org/pandas.html), we looked at working with simple data sets.
Econometricians often need to work with more complex data sets, such as panels.
Common tasks include
- Importing data, cleaning it and reshaping it across several axes.
- Selecting a time series or cross-section from a panel.
- Grouping and summarizing data.
`pandas` (derived from “panel” and “data”) contains powerful and
easy-to-use tools for solving exactly these kinds of problems.
In what follows, we will use a panel data set of real minimum wages from the OECD to create:
- summary statistics over multiple dimensions of our data
- a time series of the average minimum wage of countries in the dataset
- kernel density estimates of wages by continent
We will begin by reading in our long format panel data from a CSV file and
reshaping the resulting `DataFrame` with `pivot_table` to build a `MultiIndex`.
Additional detail will be added to our `DataFrame` using pandas’
`merge` function, and data will be summarized with the `groupby`
function.
Most of this lecture was created by [Natasha Watkins](https://github.com/natashawatkins).
## Slicing and Reshaping Data
We will read in a dataset from the OECD of real minimum wages in 32
countries and assign it to `realwage`.
The dataset `pandas_panel/realwage.csv` can be downloaded
<a href=_static/lecture_specific/pandas_panel/realwage.csv download>here</a>.
Make sure the file is in your current working directory
```
import pandas as pd
# Display 6 columns for viewing purposes
pd.set_option('display.max_columns', 6)
# Reduce decimal points to 2
pd.options.display.float_format = '{:,.2f}'.format
realwage = pd.read_csv('https://github.com/QuantEcon/QuantEcon.lectures.code/raw/master/pandas_panel/realwage.csv')
```
Let’s have a look at what we’ve got to work with
```
realwage.head() # Show first 5 rows
```
The data is currently in long format, which is difficult to analyze when there are several dimensions to the data.
We will use `pivot_table` to create a wide format panel, with a `MultiIndex` to handle higher dimensional data.
`pivot_table` arguments should specify the data (values), the index, and the columns we want in our resulting dataframe.
By passing a list in columns, we can create a `MultiIndex` in our column axis
```
realwage = realwage.pivot_table(values='value',
index='Time',
columns=['Country', 'Series', 'Pay period'])
realwage.head()
```
To more easily filter our time series data later on, we will convert the index into a `DatetimeIndex`
```
realwage.index = pd.to_datetime(realwage.index)
type(realwage.index)
```
The columns contain multiple levels of indexing, known as a
`MultiIndex`, with levels being ordered hierarchically (Country >
Series > Pay period).
A `MultiIndex` is the simplest and most flexible way to manage panel
data in pandas
```
type(realwage.columns)
realwage.columns.names
```
Like before, we can select the country (the top level of our
`MultiIndex`)
```
realwage['United States'].head()
```
Stacking and unstacking levels of the `MultiIndex` will be used
throughout this lecture to reshape our dataframe into a format we need.
`.stack()` rotates the lowest level of the column `MultiIndex` to
the row index (`.unstack()` works in the opposite direction - try it
out)
```
realwage.stack().head()
```
We can also pass in an argument to select the level we would like to
stack
```
realwage.stack(level='Country').head()
```
Using a `DatetimeIndex` makes it easy to select a particular time
period.
Selecting one year and stacking the two lower levels of the
`MultiIndex` creates a cross-section of our panel data
```
realwage['2015'].stack(level=(1, 2)).transpose().head()
```
For the rest of lecture, we will work with a dataframe of the hourly
real minimum wages across countries and time, measured in 2015 US
dollars.
To create our filtered dataframe (`realwage_f`), we can use the `xs`
method to select values at lower levels in the multiindex, while keeping
the higher levels (countries in this case)
```
realwage_f = realwage.xs(('Hourly', 'In 2015 constant prices at 2015 USD exchange rates'),
level=('Pay period', 'Series'), axis=1)
realwage_f.head()
```
## Merging Dataframes and Filling NaNs
Similar to relational databases like SQL, pandas has built in methods to
merge datasets together.
Using country information from
[WorldData.info](https://www.worlddata.info/downloads/), we’ll add
the continent of each country to `realwage_f` with the `merge`
function.
The CSV file can be found in `pandas_panel/countries.csv` and can be downloaded
<a href=_static/lecture_specific/pandas_panel/countries.csv download>here</a>.
```
worlddata = pd.read_csv('https://github.com/QuantEcon/QuantEcon.lectures.code/raw/master/pandas_panel/countries.csv', sep=';')
worlddata.head()
```
First, we’ll select just the country and continent variables from
`worlddata` and rename the column to “Country”
```
worlddata = worlddata[['Country (en)', 'Continent']]
worlddata = worlddata.rename(columns={'Country (en)': 'Country'})
worlddata.head()
```
We want to merge our new dataframe, `worlddata`, with `realwage_f`.
The pandas `merge` function allows dataframes to be joined together by
rows.
Our dataframes will be merged using country names, requiring us to use
the transpose of `realwage_f` so that rows correspond to country names
in both dataframes
```
realwage_f.transpose().head()
```
We can use either left, right, inner, or outer join to merge our
datasets:
- left join includes only countries from the left dataset
- right join includes only countries from the right dataset
- outer join includes countries that are in either the left or right datasets
- inner join includes only countries common to both the left and right datasets
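The difference between the four join types comes down to which key values survive the merge; a plain-Python sketch using sets of country names (the countries here are illustrative, not taken from the datasets):

```
left = {"Ireland", "Japan", "Korea"}   # keys in the left dataframe (illustrative)
right = {"Japan", "Korea", "Brazil"}   # keys in the right dataframe (illustrative)

joins = {
    "inner": left & right,   # only keys common to both sides
    "outer": left | right,   # keys from either side
    "left":  left,           # all left keys (unmatched right keys dropped)
    "right": right,          # all right keys (unmatched left keys dropped)
}
# left keys with no match on the right end up with NaN in the merged columns
print(joins["left"] - joins["inner"])  # -> {'Ireland'}
```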
By default, `merge` will use an inner join.
Here we will pass `how='left'` to keep all countries in
`realwage_f`, but discard countries in `worlddata` that do not have
a corresponding data entry in `realwage_f`.
This is illustrated by the red shading in the following diagram
<img src="https://s3-ap-southeast-2.amazonaws.com/python.quantecon.org/_static/lecture_specific/pandas_panel/venn_diag.png" style="">
We will also need to specify where the country name is located in each
dataframe, which will be the `key` that is used to merge the
dataframes “on”.
Our “left” dataframe (`realwage_f.transpose()`) contains countries in
the index, so we set `left_index=True`.
Our “right” dataframe (`worlddata`) contains countries in the
“Country” column, so we set `right_on='Country'`
```
merged = pd.merge(realwage_f.transpose(), worlddata,
how='left', left_index=True, right_on='Country')
merged.head()
```
Countries that appeared in `realwage_f` but not in `worlddata` will
have `NaN` in the Continent column.
To check whether this has occurred, we can use `.isnull()` on the
continent column and filter the merged dataframe
```
merged[merged['Continent'].isnull()]
```
We have three missing values!
One option to deal with NaN values is to create a dictionary containing
these countries and their respective continents.
`.map()` will match countries in `merged['Country']` with their
continent from the dictionary.
Notice how countries not in our dictionary are mapped with `NaN`
```
missing_continents = {'Korea': 'Asia',
'Russian Federation': 'Europe',
'Slovak Republic': 'Europe'}
merged['Country'].map(missing_continents)
```
We don’t want to overwrite the entire series with this mapping.
`.fillna()` only fills in `NaN` values in `merged['Continent']`
with the mapping, while leaving other values in the column unchanged
```
merged['Continent'] = merged['Continent'].fillna(merged['Country'].map(missing_continents))
# Check for whether continents were correctly mapped
merged[merged['Country'] == 'Korea']
```
We will also combine the Americas into a single continent - this will make our visualization nicer later on.
To do this, we will use `.replace()` and loop through a list of the continent values we want to replace
```
replace = ['Central America', 'North America', 'South America']
for country in replace:
merged['Continent'].replace(to_replace=country,
value='America',
inplace=True)
```
Now that we have all the data we want in a single `DataFrame`, we will
reshape it back into panel form with a `MultiIndex`.
We should also make sure to sort the index using `.sort_index()` so that we
can efficiently filter our dataframe later on.
By default, levels will be sorted top-down
```
merged = merged.set_index(['Continent', 'Country']).sort_index()
merged.head()
```
While merging, we lost our `DatetimeIndex`, as we merged columns that
were not in datetime format
```
merged.columns
```
Now that we have set the merged columns as the index, we can recreate a
`DatetimeIndex` using `.to_datetime()`
```
merged.columns = pd.to_datetime(merged.columns)
merged.columns = merged.columns.rename('Time')
merged.columns
```
The `DatetimeIndex` tends to work more smoothly in the row axis, so we
will go ahead and transpose `merged`
```
merged = merged.transpose()
merged.head()
```
## Grouping and Summarizing Data
Grouping and summarizing data can be particularly useful for
understanding large panel datasets.
A simple way to summarize data is to call an [aggregation
method](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html#descriptive-statistics)
on the dataframe, such as `.mean()` or `.max()`.
For example, we can calculate the average real minimum wage for each
country over the period 2006 to 2016 (the default is to aggregate over
rows)
```
merged.mean().head(10)
```
Using this series, we can plot the average real minimum wage over the
past decade for each country in our data set
```
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib
matplotlib.style.use('seaborn')
merged.mean().sort_values(ascending=False).plot(kind='bar', title="Average real minimum wage 2006 - 2016")
#Set country labels
country_labels = merged.mean().sort_values(ascending=False).index.get_level_values('Country').tolist()
plt.xticks(range(0, len(country_labels)), country_labels)
plt.xlabel('Country')
plt.show()
```
Passing in `axis=1` to `.mean()` will aggregate over columns (giving
the average minimum wage for all countries over time)
```
merged.mean(axis=1).head()
```
We can plot this time series as a line graph
```
merged.mean(axis=1).plot()
plt.title('Average real minimum wage 2006 - 2016')
plt.ylabel('2015 USD')
plt.xlabel('Year')
plt.show()
```
We can also specify a level of the `MultiIndex` (in the column axis)
to aggregate over
```
merged.mean(level='Continent', axis=1).head()
```
We can plot the average minimum wages in each continent as a time series
```
merged.mean(level='Continent', axis=1).plot()
plt.title('Average real minimum wage')
plt.ylabel('2015 USD')
plt.xlabel('Year')
plt.show()
```
We will drop Australia as a continent for plotting purposes
```
merged = merged.drop('Australia', level='Continent', axis=1)
merged.mean(level='Continent', axis=1).plot()
plt.title('Average real minimum wage')
plt.ylabel('2015 USD')
plt.xlabel('Year')
plt.show()
```
`.describe()` is useful for quickly retrieving a number of common
summary statistics
```
merged.stack().describe()
```
This is a simplified way to use `groupby`.
Using `groupby` generally follows a "split-apply-combine" process:
- split: data is grouped based on one or more keys
- apply: a function is called on each group independently
- combine: the results of the function calls are combined into a new data structure
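The three steps can be seen on a small toy dataframe (the values below are illustrative, not taken from the minimum wage dataset):

```python
import pandas as pd

# toy long-format data to illustrate split-apply-combine
df = pd.DataFrame({'Continent': ['Europe', 'Europe', 'Asia', 'Asia'],
                   'Country': ['France', 'Germany', 'Japan', 'Korea'],
                   'Wage': [10.0, 9.0, 7.0, 6.0]})

grouped = df.groupby('Continent')         # split: rows grouped by the key
continent_means = grouped['Wage'].mean()  # apply + combine: one mean per group
print(continent_means)
```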
The `groupby` method achieves the first step of this process, creating
a new `DataFrameGroupBy` object with data split into groups.
Let's split `merged` by continent again, this time using the
`groupby` function, and name the resulting object `grouped`
```
grouped = merged.groupby(level='Continent', axis=1)
grouped
```
Calling an aggregation method on the object applies the function to each
group, the results of which are combined in a new data structure.
For example, we can return the number of countries in our dataset for
each continent using `.size()`.
In this case, our new data structure is a `Series`
```
grouped.size()
```
Calling `.get_group()` to return just the countries in a single group,
we can create a kernel density estimate of the distribution of real
minimum wages in 2015 for each continent.
`grouped.groups.keys()` will return the keys from the `groupby`
object
```
import seaborn as sns
continents = grouped.groups.keys()
for continent in continents:
sns.kdeplot(grouped.get_group(continent)['2015'].unstack(), label=continent, shade=True)
plt.title('Real minimum wages in 2015')
plt.xlabel('US dollars')
plt.show()
```
## Final Remarks
This lecture has provided an introduction to some of pandas' more
advanced features, including multiindices, merging, grouping and
plotting.
Other tools that may be useful in panel data analysis include [xarray](http://xarray.pydata.org/en/stable/), a python package that
extends pandas to N-dimensional data structures.
## Exercises
### Exercise 1
In these exercises, you'll work with a dataset of employment rates
in Europe by age and sex from [Eurostat](http://ec.europa.eu/eurostat/data/database).
The dataset `pandas_panel/employ.csv` can be downloaded
<a href=_static/lecture_specific/pandas_panel/employ.csv download>here</a>.
Reading in the CSV file returns a panel dataset in long format. Use `.pivot_table()` to construct
a wide format dataframe with a `MultiIndex` in the columns.
Start off by exploring the dataframe and the variables available in the
`MultiIndex` levels.
Write a program that quickly returns all values in the `MultiIndex`.
### Exercise 2
Filter the above dataframe to only include employment as a percentage of
'active population'.
Create a grouped boxplot using `seaborn` of employment rates in 2015
by age group and sex.
**Hint:** `GEO` includes both areas and countries.
## Solutions
### Exercise 1
```
employ = pd.read_csv('https://github.com/QuantEcon/QuantEcon.lectures.code/raw/master/pandas_panel/employ.csv')
employ = employ.pivot_table(values='Value',
index=['DATE'],
columns=['UNIT','AGE', 'SEX', 'INDIC_EM', 'GEO'])
employ.index = pd.to_datetime(employ.index) # ensure that dates are datetime format
employ.head()
```
This is a large dataset so it is useful to explore the levels and
variables available
```
employ.columns.names
```
Variables within levels can be quickly retrieved with a loop
```
for name in employ.columns.names:
print(name, employ.columns.get_level_values(name).unique())
```
### Exercise 2
To easily filter by country, swap `GEO` to the top level and sort the
`MultiIndex`
```
employ.columns = employ.columns.swaplevel(0,-1)
employ = employ.sort_index(axis=1)
```
We need to get rid of a few items in `GEO` which are not countries.
A fast way to get rid of the EU areas is to use a list comprehension to
find the level values in `GEO` that begin with 'Euro'
```
geo_list = employ.columns.get_level_values('GEO').unique().tolist()
countries = [x for x in geo_list if not x.startswith('Euro')]
employ = employ[countries]
employ.columns.get_level_values('GEO').unique()
```
Select only percentage employed in the active population from the
dataframe
```
employ_f = employ.xs(('Percentage of total population', 'Active population'),
level=('UNIT', 'INDIC_EM'),
axis=1)
employ_f.head()
```
Drop the 'Total' value before creating the grouped boxplot
```
employ_f = employ_f.drop('Total', level='SEX', axis=1)
box = employ_f['2015'].unstack().reset_index()
sns.boxplot(x="AGE", y=0, hue="SEX", data=box, palette=("husl"), showfliers=False)
plt.xlabel('')
plt.xticks(rotation=35)
plt.ylabel('Percentage of population (%)')
plt.title('Employment in Europe (2015)')
plt.legend(bbox_to_anchor=(1,0.5))
plt.show()
```
```
import math
import random
import gym
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.distributions import Normal,Beta
from sklearn import preprocessing
from IPython.display import clear_output
import matplotlib.pyplot as plt
%matplotlib inline
seed_number = 2018
```
<h2>Use CUDA</h2>
```
np.random.seed(seed_number)
torch.backends.cudnn.deterministic = True
torch.manual_seed(seed_number)
use_cuda = torch.cuda.is_available()
if use_cuda:
torch.cuda.manual_seed_all(seed_number)
device = torch.device("cuda")
else:
device = torch.device("cpu")
```
<h2>Neural Network</h2>
```
def init_weights(m):
if isinstance(m, nn.Linear):
nn.init.normal_(m.weight, mean=0., std=0.1)
# nn.init.xavier_uniform_(m.weight)
# nn.init.kaiming_uniform_(m.weight)
nn.init.constant_(m.bias, 0.1)
class ActorCritic(nn.Module):
def __init__(self, num_inputs, num_outputs, hidden_size):
super(ActorCritic, self).__init__()
self.critic = nn.Sequential(
nn.Linear(num_inputs, hidden_size),
nn.ReLU(),
nn.Linear(hidden_size, 1)
)
self.actor = nn.Sequential(
nn.Linear(num_inputs, hidden_size),
nn.ReLU(),
nn.Linear(hidden_size, 2 * num_outputs),
nn.Softplus()
)
self.apply(init_weights)
def forward(self, x):
value = self.critic(x)
alpha_beta = self.actor(x)
alpha = alpha_beta[:,0]+1
beta = alpha_beta[:,1]+1
alpha = alpha.reshape(len(alpha),1)
beta = beta.reshape(len(beta),1)
dist = Beta (alpha, beta)
return dist, value, alpha, beta
def plot(frame_idx, rewards):
clear_output(True)
plt.figure(figsize=(20,5))
plt.subplot(131)
plt.title('frame %s. reward: %s' % (frame_idx, rewards[-1]))
plt.plot(rewards)
plt.show()
def test_env(model, goal, vis=False):
state = env.reset()
if vis: env.render()
done = False
total_reward = 0
while not done:
state_goal = np.concatenate((state,goal),0)
state_goal = torch.FloatTensor(state_goal).unsqueeze(0).to(device)
dist, _, a_, b_ = model(state_goal)
next_state, reward, done, _ = env.step(dist.sample().cpu().numpy()[0])
state = next_state
if vis: env.render()
total_reward += reward
return total_reward
```
<h2>Basic Hindsight GAE</h2>
```
def compute_gae(next_value, rewards, masks, values, gamma=0.99, lamda=0.95):
values = values + [next_value]
gae = 0
returns = []
for step in reversed(range(len(rewards))):
delta = rewards[step] + gamma * values[step + 1] * masks[step] - values[step]
gae = delta + gamma * lamda * masks[step] * gae
returns.insert(0, gae + values[step])
return returns
```
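As a sanity check, the function above also runs on plain Python floats, which makes a toy episode easy to verify by hand (the episode below is illustrative only; zero value estimates and no terminal states):

```python
# compute_gae restated from the cell above; with scalar inputs it
# reduces to plain float arithmetic
def compute_gae(next_value, rewards, masks, values, gamma=0.99, lamda=0.95):
    values = values + [next_value]
    gae = 0
    returns = []
    for step in reversed(range(len(rewards))):
        delta = rewards[step] + gamma * values[step + 1] * masks[step] - values[step]
        gae = delta + gamma * lamda * masks[step] * gae
        returns.insert(0, gae + values[step])
    return returns

# toy 3-step episode: unit rewards, zero value baseline
returns = compute_gae(next_value=0.0, rewards=[1.0, 1.0, 1.0],
                      masks=[1.0, 1.0, 1.0], values=[0.0, 0.0, 0.0])
# the last step has no bootstrap, so its return is just the reward, 1.0;
# the step before it is 1 + 0.99 * 0.95 * 1 = 1.9405
print(returns)
```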
<h2> Hindsight GAE (Importance Sampling) </h2>
```
def hindsight_gae(rewards, current_logprobs, desired_logprobs, masks, values, gamma = 0.995, lamda = 0):
lambda_ret = 1
hindsight_gae = 0
returns = []
for step in range(len(rewards)):
temp = 0
is_weight_ratio = 1
for step_ in range(step, len(rewards)):
ratio = (current_logprobs[step_] - desired_logprobs[step_]).exp()
clipped_ratio = lambda_ret * torch.clamp(ratio, max = 1)
is_weight_ratio = is_weight_ratio * clipped_ratio
for step_ in range(step, len(rewards)):
temp = temp + ((gamma ** (step_+1)) * rewards[step_] - (gamma ** (step_)) * rewards[step_])
temp = temp - (gamma ** (step + 1)) * rewards[step]
delta = rewards[step] + is_weight_ratio * temp
hindsight_gae = delta + gamma * lamda * masks[step] * hindsight_gae
returns.insert(0, hindsight_gae + values[step])
return returns
```
<h2> Compute Return </h2>
```
def compute_returns(rewards, gamma = 0.995):
returns = 0
returns_list = []
for step in range(len(rewards)):
returns = returns + (gamma ** step) * rewards[step]
returns_list.insert(0,returns)
return returns_list
```
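For comparison, the conventional discounted return-to-go $G_t = r_t + \gamma G_{t+1}$ is usually accumulated backwards. This is a sketch of that standard definition on toy rewards, not a drop-in replacement for the function above:

```python
def returns_to_go(rewards, gamma=0.995):
    # conventional discounted return-to-go: G_t = r_t + gamma * G_{t+1}
    g = 0.0
    out = []
    for r in reversed(rewards):
        g = r + gamma * g
        out.insert(0, g)
    return out

# with gamma = 0.5 the returns are easy to check by hand
rtg = returns_to_go([1.0, 1.0, 1.0], gamma=0.5)
print(rtg)  # [1.75, 1.5, 1.0]
```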
<h1> Proximal Policy Optimization Algorithm</h1>
<h2><a href="https://arxiv.org/abs/1707.06347">PPO Paper</a></h2>
```
def ppo_iter(mini_batch_size, states, actions, log_probs, returns, advantage):
batch_size = states.size(0)
for _ in range(batch_size // mini_batch_size):
rand_ids = np.random.randint(0, batch_size, mini_batch_size)
yield states[rand_ids, :], actions[rand_ids, :], log_probs[rand_ids, :], returns[rand_ids, :], advantage[rand_ids, :]
def ppo_update(ppo_epochs, mini_batch_size, states, actions, log_probs, returns, advantages, episode_count, test_reward, best_return, clip_param=0.2):
actor_loss_list = []
critic_loss_list = []
clipped = False
for ppo_epoch in range(ppo_epochs):
for state, action, old_log_probs, return_, advantage in ppo_iter(mini_batch_size, states, actions, log_probs, returns, advantages):
dist, value, a_, b_ = model(state)
entropy = dist.entropy().mean()
new_log_probs = dist.log_prob((action+2)/4)
ratio = (new_log_probs - old_log_probs).exp()
surr1 = ratio * advantage
surr2 = torch.clamp(ratio, 1.0 - clip_param, 1.0 + clip_param) * advantage
actor_loss = - torch.min(surr1, surr2).mean()
# MSE Loss
critic_loss = (return_ - value).pow(2).mean()
# Huber Loss
# critic_loss = nn.functional.smooth_l1_loss(value, return_)
actor_loss_list.append(actor_loss.data.cpu().numpy().item(0))
critic_loss_list.append(critic_loss.data.cpu().numpy().item(0))
loss = 0.5 * critic_loss + actor_loss - 0.001 * entropy
optimizer.zero_grad()
loss.backward()
optimizer.step()
mean_actor_loss = np.mean(actor_loss_list)
mean_critic_loss = np.mean(critic_loss_list)
mean_actor_loss_list.append(mean_actor_loss)
mean_critic_loss_list.append(mean_critic_loss)
assert ~np.isnan(mean_critic_loss), "Assert error: critic loss has nan value."
assert ~np.isinf(mean_critic_loss), "Assert error: critic loss has inf value."
print ('episode: {0}, actor_loss: {1:.3f}, critic_loss: {2:.3f}, mean_reward: {3:.3f}, best_return: {4:.3f}'
.format(episode_count, mean_actor_loss, mean_critic_loss, test_reward, best_return))
```
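The random minibatch scheme used by `ppo_iter` can be sketched with NumPy alone (the batch sizes and seed below are illustrative): each pass over the data draws `batch_size // mini_batch_size` index arrays, sampling with replacement.

```python
import numpy as np

def minibatch_indices(batch_size, mini_batch_size, seed=0):
    # yields batch_size // mini_batch_size random index arrays,
    # mirroring the sampling used by ppo_iter above
    rng = np.random.RandomState(seed)
    for _ in range(batch_size // mini_batch_size):
        yield rng.randint(0, batch_size, mini_batch_size)

batches = list(minibatch_indices(batch_size=20, mini_batch_size=5))
print(len(batches))  # 4 minibatches of 5 indices each
```

Indexing the state, action, log-probability, return, and advantage tensors with each index array then produces the minibatches consumed by `ppo_update`.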
# Create Environment
```
from multiprocessing_env import SubprocVecEnv
num_envs = 16
env_name = "Pendulum-v0"
def make_env(i):
def _thunk():
env = gym.make(env_name)
env.seed(i+seed_number)
return env
return _thunk
envs = [make_env(i) for i in range(num_envs)]
envs = SubprocVecEnv(envs)
env = gym.make(env_name)
env.seed(seed_number)
```
# Initial Goal Distribution Generation
```
class RandomAgent(object):
def __init__(self, action_space):
self.action_space = action_space
def act(self, observation, reward, done):
return self.action_space.sample()
agent = RandomAgent(env.action_space)
episode_count = 50
reward = 0
done = False
initial_subgoals = []
for i in range(episode_count):
state = env.reset()
# print (state)
done_count = 0
while True:
action = agent.act(state, reward, done)
next_state, reward, done, _ = env.step(action)
state = next_state
if done:
break
initial_subgoals.append(state)
random.seed(seed_number)
initial_subgoal = initial_subgoals[random.randint(0, len(initial_subgoals)-1)]
print ('Initial subgoal sampled is: ', initial_subgoal)
```
# Training Agent
```
num_inputs = envs.observation_space.shape[0]
num_outputs = envs.action_space.shape[0]
#Hyper params:
hidden_size = 256
lr = 3e-4
num_steps = 20 # 20
mini_batch_size = 5
ppo_epochs = 4
threshold_reward = -200
use_hindsight = True
model = ActorCritic(2*num_inputs, num_outputs, hidden_size).to(device)
optimizer = optim.Adam(model.parameters(), lr=lr)
# optimizer = optim.SGD(model.parameters(), lr = lr)
max_frames = 10000000 # 50000
frame_idx = 0
test_rewards = []
mean_actor_loss_list = []
mean_critic_loss_list = []
state = envs.reset()
early_stop = False
episode_count = 0
best_return = -9999
multi_goal = True
desired_goal = np.asarray([0,0,0])
while frame_idx < max_frames and not early_stop:
# sample state from previous episode
if multi_goal:
if frame_idx == 1:
goal = initial_subgoal
else:
if len(state) > 1:
goal = state[random.randint(0, num_envs - 1)]
else:
goal = state[0]
else:
goal = desired_goal
log_probs = []
log_probs_desired = []
values = []
states = []
actions = []
rewards = []
masks = []
entropy = 0
for i in range(num_steps):
state_goals = []
state_desired_goals = []
next_state_goals = []
# append state with subgoal and desired goal
for s in state:
state_goal = np.concatenate((s,goal),0)
state_goals.append((state_goal))
state_desired_goal = np.concatenate((s, desired_goal), 0)
state_desired_goals.append((state_desired_goal))
state_goals = np.array(state_goals)
state_goals = torch.FloatTensor(state_goals).to(device)
state_desired_goals = np.array(state_desired_goals)
state_desired_goals = torch.FloatTensor(state_desired_goals).to(device)
# for subgoal
dist, value, alpha, beta = model(state_goals)
action = dist.sample() * 4 - 2 # pendulum's action range [-2, 2] real value
action = action.reshape(len(action), 1)
next_state, reward, done, _ = envs.step(action.cpu().numpy())
# for desired goal
dist_desired, value_desired, alpha_desired, beta_desired = model(state_desired_goals)
action_desired = dist_desired.sample() * 4 - 2
action_desired = action_desired.reshape(len(action_desired), 1)
# append next state with sub goal
for n_s in next_state:
next_state_goal = np.concatenate((n_s, goal), 0)
next_state_desired_goal = np.concatenate((n_s, desired_goal), 0)
next_state_goals.append((next_state_goal))
next_state_goals = np.array(next_state_goals)
# clip action to range from 0 to 1 for beta distribution
# for subgoal
log_prob = dist.log_prob((action+2)/4)
# for desired goal
log_prob_desired = dist_desired.log_prob((action_desired+2)/4)
entropy += dist.entropy().mean()
log_probs.append(log_prob)
log_probs_desired.append(log_prob_desired)
values.append(value)
# normalized reward
reward = (reward - np.mean(reward))/(np.std(reward) + 1e-5)
rewards.append(torch.FloatTensor(reward).unsqueeze(1).to(device))
masks.append(torch.FloatTensor(1 - done).unsqueeze(1).to(device))
states.append(state_goals)
actions.append(action)
state = next_state
frame_idx += 1
if frame_idx % num_steps == 0:
test_reward = np.mean([test_env(model, desired_goal) for _ in range(5)])
test_rewards.append(test_reward)
print ('episode: ', frame_idx/num_steps, 'alpha+beta: ', (alpha.mean(0)+beta.mean(0)).data.cpu().numpy()[0])
if test_reward >= best_return:
best_return = test_reward
# plot(frame_idx, test_rewards)
if test_reward > threshold_reward: early_stop = True
episode_count += 1
# print ('rewards: ', rewards)
# print ('values: ', values)
next_state_goals = torch.FloatTensor(next_state_goals).to(device)
_, next_value, next_alpha, next_beta = model(next_state_goals)
old_logprobs = log_probs
current_logprobs = log_probs_desired
returns = hindsight_gae(rewards, old_logprobs, current_logprobs, masks, values)
# returns = compute_gae (next_value, rewards, masks, values)
returns = torch.cat(returns).detach()
log_probs = torch.cat(log_probs).detach()
values = torch.cat(values).detach()
states = torch.cat(states)
actions = torch.cat(actions)
advantage = returns - values
# advantage = (advantage - advantage.mean()) / (advantage.std() + 1e-5)
ppo_update(ppo_epochs, mini_batch_size, states, actions, log_probs, returns, advantage, episode_count,
test_reward, best_return)
if frame_idx % (num_steps * 50) == 0:
lower_bound = int((frame_idx - (num_steps * 50)) / num_steps)
upper_bound = int(frame_idx / num_steps)
last_fifty_episode_mean_reward = np.mean(test_rewards[lower_bound:upper_bound])
print ('last 50 episode mean reward: ', last_fifty_episode_mean_reward)
print ('\n')
# print(values)
```
# Saving and Loading Testing Reward
```
import pickle
with open('./Test Reward Plot/test_rewards_beta', 'wb') as fp1:
pickle.dump(test_rewards, fp1)
with open('./Loss Plot/mean_actor_loss_beta', 'wb') as fp2:
pickle.dump(mean_actor_loss_list, fp2)
with open('./Loss Plot/mean_critic_loss_beta', 'wb') as fp3:
pickle.dump(mean_critic_loss_list, fp3)
```
<h1> Save and Load Model </h1>
```
torch.save(model, './Model/model_beta' )
expert_model = torch.load('./Model/model_beta')
# expert_test_rewards = []
# for i in range(10):
# # env = gym.wrappers.Monitor(env, 'test_video'+str(i), video_callable=lambda episode_id: True)
# expert_test_reward = test_env(expert_model, [0,0,0], False)
# expert_test_rewards.append(expert_test_reward)
# print ('test {0}, total_reward from 28000 steps load model: {1}'.format(i+1, expert_test_reward))
# print ('mean expert test reward: ', np.mean(expert_test_rewards))
```
# Planar data classification with one hidden layer
Welcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression.
**You will learn how to:**
- Implement a 2-class classification neural network with a single hidden layer
- Use units with a non-linear activation function, such as tanh
- Compute the cross entropy loss
- Implement forward and backward propagation
## 1 - Packages ##
Let's first import all the packages that you will need during this assignment.
- [numpy](http://www.numpy.org) is the fundamental package for scientific computing with Python.
- [sklearn](http://scikit-learn.org/stable/) provides simple and efficient tools for data mining and data analysis.
- [matplotlib](http://matplotlib.org) is a library for plotting graphs in Python.
- testCases provides some test examples to assess the correctness of your functions
- planar_utils provide various useful functions used in this assignment
```
# Package imports
import numpy as np
import matplotlib.pyplot as plt
from testCases_v2 import *
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets
%matplotlib inline
np.random.seed(1) # set a seed so that the results are consistent
```
## 2 - Dataset ##
First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables `X` and `Y`.
```
X, Y = load_planar_dataset()
```
Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data.
```
# Visualize the data:
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
```
You have:
- a numpy-array (matrix) X that contains your features (x1, x2)
- a numpy-array (vector) Y that contains your labels (red:0, blue:1).
Let's first get a better sense of what our data is like.
**Exercise**: How many training examples do you have? In addition, what is the `shape` of the variables `X` and `Y`?
**Hint**: How do you get the shape of a numpy array? [(help)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html)
```
### START CODE HERE ### (≈ 3 lines of code)
shape_X = np.shape(X)
shape_Y = np.shape(Y)
m = X.shape[1] # training set size
### END CODE HERE ###
print ('The shape of X is: ' + str(shape_X))
print ('The shape of Y is: ' + str(shape_Y))
print ('I have m = %d training examples!' % (m))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**shape of X**</td>
<td> (2, 400) </td>
</tr>
<tr>
<td>**shape of Y**</td>
<td>(1, 400) </td>
</tr>
<tr>
<td>**m**</td>
<td> 400 </td>
</tr>
</table>
## 3 - Simple Logistic Regression
Before building a full neural network, let's first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.
```
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV();
clf.fit(X.T, Y.T);
```
You can now plot the decision boundary of these models. Run the code below.
```
# Plot the decision boundary for logistic regression
plot_decision_boundary(lambda x: clf.predict(x), X, Y)
plt.title("Logistic Regression")
# Print accuracy
LR_predictions = clf.predict(X.T)
print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +
'% ' + "(percentage of correctly labelled datapoints)")
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**Accuracy**</td>
<td> 47% </td>
</tr>
</table>
**Interpretation**: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now!
## 4 - Neural Network model
Logistic regression did not work well on the "flower dataset". You are going to train a Neural Network with a single hidden layer.
**Here is our model**:
<img src="images/classification_kiank.png" style="width:600px;height:300px;">
**Mathematically**:
For one example $x^{(i)}$:
$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1]}\tag{1}$$
$$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$
$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2]}\tag{3}$$
$$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$
$$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$
Given the predictions on all the examples, you can also compute the cost $J$ as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$
**Reminder**: The general methodology to build a Neural Network is to:
1. Define the neural network structure ( # of input units, # of hidden units, etc).
2. Initialize the model's parameters
3. Loop:
- Implement forward propagation
- Compute loss
- Implement backward propagation to get the gradients
- Update parameters (gradient descent)
You often build helper functions to compute steps 1-3 and then merge them into one function we call `nn_model()`. Once you've built `nn_model()` and learnt the right parameters, you can make predictions on new data.
### 4.1 - Defining the neural network structure ####
**Exercise**: Define three variables:
- n_x: the size of the input layer
- n_h: the size of the hidden layer (set this to 4)
- n_y: the size of the output layer
**Hint**: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.
```
# GRADED FUNCTION: layer_sizes
def layer_sizes(X, Y):
"""
Arguments:
X -- input dataset of shape (input size, number of examples)
Y -- labels of shape (output size, number of examples)
Returns:
n_x -- the size of the input layer
n_h -- the size of the hidden layer
n_y -- the size of the output layer
"""
### START CODE HERE ### (≈ 3 lines of code)
n_x = X.shape[0] # size of input layer
n_h = 4
n_y = Y.shape[0] # size of output layer
### END CODE HERE ###
return (n_x, n_h, n_y)
X_assess, Y_assess = layer_sizes_test_case()
(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)
print("The size of the input layer is: n_x = " + str(n_x))
print("The size of the hidden layer is: n_h = " + str(n_h))
print("The size of the output layer is: n_y = " + str(n_y))
```
**Expected Output** (these are not the sizes you will use for your network, they are just used to assess the function you've just coded).
<table style="width:20%">
<tr>
<td>**n_x**</td>
<td> 5 </td>
</tr>
<tr>
<td>**n_h**</td>
<td> 4 </td>
</tr>
<tr>
<td>**n_y**</td>
<td> 2 </td>
</tr>
</table>
### 4.2 - Initialize the model's parameters ####
**Exercise**: Implement the function `initialize_parameters()`.
**Instructions**:
- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.
- You will initialize the weights matrices with random values.
- Use: `np.random.randn(a,b) * 0.01` to randomly initialize a matrix of shape (a,b).
- You will initialize the bias vectors as zeros.
- Use: `np.zeros((a,b))` to initialize a matrix of shape (a,b) with zeros.
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
params -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h,n_x) * 0.01
b1 = np.zeros((n_h,1))
W2 = np.random.randn(n_y,n_h) * 0.01
b2 = np.zeros((n_y,1))
### END CODE HERE ###
assert (W1.shape == (n_h, n_x))
assert (b1.shape == (n_h, 1))
assert (W2.shape == (n_y, n_h))
assert (b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
n_x, n_h, n_y = initialize_parameters_test_case()
parameters = initialize_parameters(n_x, n_h, n_y)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td>**W1**</td>
<td> [[-0.00416758 -0.00056267]
[-0.02136196 0.01640271]
[-0.01793436 -0.00841747]
[ 0.00502881 -0.01245288]] </td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 0.]
[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01057952 -0.00909008 0.00551454 0.02292208]]</td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.]] </td>
</tr>
</table>
### 4.3 - The Loop ####
**Question**: Implement `forward_propagation()`.
**Instructions**:
- Look above at the mathematical representation of your classifier.
- You can use the function `sigmoid()`. It is built-in (imported) in the notebook.
- You can use the function `np.tanh()`. It is part of the numpy library.
- The steps you have to implement are:
1. Retrieve each parameter from the dictionary "parameters" (which is the output of `initialize_parameters()`) by using `parameters[".."]`.
2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).
- Values needed in the backpropagation are stored in "`cache`". The `cache` will be given as an input to the backpropagation function.
```
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Argument:
X -- input data of size (n_x, m)
parameters -- python dictionary containing your parameters (output of initialization function)
Returns:
A2 -- The sigmoid output of the second activation
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
### END CODE HERE ###
# Implement Forward Propagation to calculate A2 (probabilities)
### START CODE HERE ### (≈ 4 lines of code)
Z1 = np.dot(W1, X) + b1
A1 = np.tanh(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = sigmoid(Z2)
### END CODE HERE ###
assert(A2.shape == (1, X.shape[1]))
cache = {"Z1": Z1,
"A1": A1,
"Z2": Z2,
"A2": A2}
return A2, cache
X_assess, parameters = forward_propagation_test_case()
A2, cache = forward_propagation(X_assess, parameters)
# Note: we use the mean here just to make sure that your output matches ours.
print(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))
```
**Expected Output**:
<table style="width:50%">
<tr>
<td> 0.262818640198 0.091999045227 -1.30766601287 0.212877681719 </td>
</tr>
</table>
Now that you have computed $A^{[2]}$ (in the Python variable "`A2`"), which contains $a^{[2](i)}$ for every example, you can compute the cost function as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$
**Exercise**: Implement `compute_cost()` to compute the value of the cost $J$.
**Instructions**:
- There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented
$- \sum\limits_{i=0}^{m} y^{(i)}\log(a^{[2](i)})$:
```python
logprobs = np.multiply(np.log(A2),Y)
cost = - np.sum(logprobs) # no need to use a for loop!
```
(you can use either `np.multiply()` and then `np.sum()` or directly `np.dot()`).
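A quick check on toy probabilities (illustrative numbers only) confirms that the two forms agree:

```python
import numpy as np

A2 = np.array([[0.8, 0.3, 0.6]])  # toy predicted probabilities, shape (1, 3)
Y  = np.array([[1,   0,   1  ]])  # toy labels, shape (1, 3)

# elementwise product then sum ...
loss_mul = -np.sum(np.multiply(Y, np.log(A2)) + np.multiply(1 - Y, np.log(1 - A2)))
# ... equals an inner product of the same vectors
loss_dot = -(np.dot(Y, np.log(A2).T) + np.dot(1 - Y, np.log(1 - A2).T)).item()
print(loss_mul, loss_dot)
```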
```
# GRADED FUNCTION: compute_cost
def compute_cost(A2, Y, parameters):
"""
Computes the cross-entropy cost given in equation (13)
Arguments:
A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
parameters -- python dictionary containing your parameters W1, b1, W2 and b2
Returns:
cost -- cross-entropy cost given equation (13)
"""
m = Y.shape[1] # number of examples
# Compute the cross-entropy cost
### START CODE HERE ### (≈ 2 lines of code)
logprobs = np.multiply(np.log(A2), Y) + np.multiply(np.subtract(1,Y), np.log(np.subtract(1,A2)))
cost = (-1/m) * np.sum(logprobs)
### END CODE HERE ###
cost = np.squeeze(cost) # makes sure cost is the dimension we expect.
# E.g., turns [[17]] into 17
assert(isinstance(cost, float))
return cost
A2, Y_assess, parameters = compute_cost_test_case()
print("cost = " + str(compute_cost(A2, Y_assess, parameters)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**cost**</td>
<td> 0.693058761... </td>
</tr>
</table>
Using the cache computed during forward propagation, you can now implement backward propagation.
**Question**: Implement the function `backward_propagation()`.
**Instructions**:
Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation.
<img src="images/grad_summary.png" style="width:600px;height:300px;">
<!--
$\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$
$\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $
$\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$
$\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $
$\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $
$\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$
- Note that $*$ denotes elementwise multiplication.
- The notation you will use is common in deep learning coding:
- dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$
- db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$
- dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$
- db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$
-->
- Tips:
- To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute
$g^{[1]'}(Z^{[1]})$ using `(1 - np.power(A1, 2))`.
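Before coding the function, it can help to convince yourself of that derivative identity numerically. This quick check (not part of the graded function) compares `1 - np.power(A1, 2)` against a central finite difference of tanh:

```python
import numpy as np

# Verify tanh'(z) = 1 - tanh(z)^2 at a few sample points
z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
a = np.tanh(z)
analytic = 1 - np.power(a, 2)  # the expression used for dZ1 below
eps = 1e-6
numeric = (np.tanh(z + eps) - np.tanh(z - eps)) / (2 * eps)  # central difference
print(np.max(np.abs(analytic - numeric)))  # tiny (~1e-10): the two agree
```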
```
# GRADED FUNCTION: backward_propagation
def backward_propagation(parameters, cache, X, Y):
"""
Implement the backward propagation using the instructions above.
Arguments:
parameters -- python dictionary containing our parameters
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
X -- input data of shape (2, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
Returns:
grads -- python dictionary containing your gradients with respect to different parameters
"""
m = X.shape[1]
# First, retrieve W1 and W2 from the dictionary "parameters".
    ### START CODE HERE ### (≈ 2 lines of code)
W1 = parameters["W1"]
W2 = parameters["W2"]
### END CODE HERE ###
# Retrieve also A1 and A2 from dictionary "cache".
    ### START CODE HERE ### (≈ 2 lines of code)
A1 = cache["A1"]
A2 = cache["A2"]
### END CODE HERE ###
# Backward propagation: calculate dW1, db1, dW2, db2.
    ### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)
dZ2 = A2 - Y
dW2 = (1/m) * np.dot(dZ2, A1.T)
db2 = (1/m) * np.sum(dZ2, axis=1, keepdims=True)
dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))
dW1 = (1/m) * np.dot(dZ1, X.T)
db1 = (1/m) * np.sum(dZ1, axis=1, keepdims=True)
### END CODE HERE ###
grads = {"dW1": dW1,
"db1": db1,
"dW2": dW2,
"db2": db2}
return grads
parameters, cache, X_assess, Y_assess = backward_propagation_test_case()
grads = backward_propagation(parameters, cache, X_assess, Y_assess)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("db2 = "+ str(grads["db2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td>**dW1**</td>
<td> [[ 0.00301023 -0.00747267]
[ 0.00257968 -0.00641288]
[-0.00156892 0.003893 ]
[-0.00652037 0.01618243]] </td>
</tr>
<tr>
<td>**db1**</td>
<td> [[ 0.00176201]
[ 0.00150995]
[-0.00091736]
[-0.00381422]] </td>
</tr>
<tr>
<td>**dW2**</td>
<td> [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]] </td>
</tr>
<tr>
<td>**db2**</td>
<td> [[-0.16655712]] </td>
</tr>
</table>
**Question**: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).
**General gradient descent rule**: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter.
**Illustration**: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.
<img src="images/sgd.gif" style="width:400;height:400;"> <img src="images/sgd_bad.gif" style="width:400;height:400;">
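Before applying the rule to all four parameters, here is a minimal one-dimensional sketch of the same update on $f(\theta) = \theta^2$ (so $\frac{\partial J}{\partial \theta} = 2\theta$), just to see convergence toward the minimum:

```python
# Gradient descent on f(theta) = theta^2; the gradient is 2*theta
theta = 5.0
alpha = 0.1  # learning rate
for _ in range(100):
    grad = 2 * theta
    theta = theta - alpha * grad  # theta shrinks by a factor (1 - 2*alpha) each step
print(theta)  # very close to the minimum at 0
```

With a bad learning rate (e.g. `alpha = 1.1`), the same loop diverges instead, which is exactly what the second animation illustrates.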
```
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate = 1.2):
"""
Updates parameters using the gradient descent update rule given above
    Arguments:
    parameters -- python dictionary containing your parameters
    grads -- python dictionary containing your gradients
    learning_rate -- step size used in the gradient descent update
Returns:
parameters -- python dictionary containing your updated parameters
"""
# Retrieve each parameter from the dictionary "parameters"
    ### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
### END CODE HERE ###
# Retrieve each gradient from the dictionary "grads"
    ### START CODE HERE ### (≈ 4 lines of code)
dW1 = grads["dW1"]
db1 = grads["db1"]
dW2 = grads["dW2"]
db2 = grads["db2"]
    ### END CODE HERE ###
# Update rule for each parameter
    ### START CODE HERE ### (≈ 4 lines of code)
W1 = W1 - learning_rate * dW1
b1 = b1 - learning_rate * db1
W2 = W2 - learning_rate * dW2
b2 = b2 - learning_rate * db2
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:80%">
<tr>
<td>**W1**</td>
<td> [[-0.00643025 0.01936718]
[-0.02410458 0.03978052]
[-0.01653973 -0.02096177]
[ 0.01046864 -0.05990141]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ -1.02420756e-06]
[ 1.27373948e-05]
[ 8.32996807e-07]
[ -3.20136836e-06]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01041081 -0.04463285 0.01758031 0.04747113]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.00010457]] </td>
</tr>
</table>
### 4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model() ####
**Question**: Build your neural network model in `nn_model()`.
**Instructions**: The neural network model has to use the previous functions in the right order.
```
# GRADED FUNCTION: nn_model
def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):
"""
Arguments:
X -- dataset of shape (2, number of examples)
Y -- labels of shape (1, number of examples)
n_h -- size of the hidden layer
num_iterations -- Number of iterations in gradient descent loop
print_cost -- if True, print the cost every 1000 iterations
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(3)
n_x = layer_sizes(X, Y)[0]
n_y = layer_sizes(X, Y)[2]
# Initialize parameters, then retrieve W1, b1, W2, b2. Inputs: "n_x, n_h, n_y". Outputs = "W1, b1, W2, b2, parameters".
    ### START CODE HERE ### (≈ 5 lines of code)
parameters = initialize_parameters(n_x, n_h, n_y)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
    ### START CODE HERE ### (≈ 4 lines of code)
# Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
A2, cache = forward_propagation(X, parameters)
# Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
cost = compute_cost(A2, Y, parameters)
# Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
grads = backward_propagation(parameters, cache, X, Y)
# Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
parameters = update_parameters(parameters, grads)
### END CODE HERE ###
# Print the cost every 1000 iterations
if print_cost and i % 1000 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
return parameters
X_assess, Y_assess = nn_model_test_case()
parameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=True)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td>
**cost after iteration 0**
</td>
<td>
0.692739
</td>
</tr>
<tr>
<td>
<center> $\vdots$ </center>
</td>
<td>
<center> $\vdots$ </center>
</td>
</tr>
<tr>
<td>**W1**</td>
<td> [[-0.65848169 1.21866811]
[-0.76204273 1.39377573]
[ 0.5792005 -1.10397703]
[ 0.76773391 -1.41477129]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 0.287592 ]
[ 0.3511264 ]
[-0.2431246 ]
[-0.35772805]] </td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-2.45566237 -3.27042274 2.00784958 3.36773273]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.20459656]] </td>
</tr>
</table>
### 4.5 Predictions
**Question**: Use your model to predict by building predict().
Use forward propagation to predict results.
**Reminder**: predictions = $y_{prediction} = \mathbb{1}_{\{activation > 0.5\}} = \begin{cases}
1 & \text{if}\ activation > 0.5 \\
0 & \text{otherwise}
\end{cases}$
As an example, if you would like to set the entries of a matrix X to 0 or 1 based on a threshold, you would do: ```X_new = (X > threshold)```
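For instance, with a small made-up matrix of activations, the comparison yields a boolean array whose entries behave like 0/1 in arithmetic:

```python
import numpy as np

A = np.array([[0.1, 0.7, 0.4, 0.9]])
predictions = (A > 0.5)  # boolean array
print(predictions.astype(int))  # [[0 1 0 1]]
print(np.mean(predictions))     # 0.5 -- True/False act as 1/0
```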
```
# GRADED FUNCTION: predict
def predict(parameters, X):
"""
Using the learned parameters, predicts a class for each example in X
Arguments:
parameters -- python dictionary containing your parameters
X -- input data of size (n_x, m)
Returns
predictions -- vector of predictions of our model (red: 0 / blue: 1)
"""
# Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
    ### START CODE HERE ### (≈ 2 lines of code)
A2, cache = forward_propagation(X, parameters)
predictions = A2 > 0.5
### END CODE HERE ###
return predictions
parameters, X_assess = predict_test_case()
predictions = predict(parameters, X_assess)
print("predictions mean = " + str(np.mean(predictions)))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td>**predictions mean**</td>
<td> 0.666666666667 </td>
</tr>
</table>
It is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.
```
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
plt.title("Decision Boundary for hidden layer size " + str(4))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td>**Cost after iteration 9000**</td>
<td> 0.218607 </td>
</tr>
</table>
```
# Print accuracy
predictions = predict(parameters, X)
print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')
```
**Expected Output**:
<table style="width:15%">
<tr>
<td>**Accuracy**</td>
<td> 90% </td>
</tr>
</table>
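The accuracy expression above counts agreements via dot products: `np.dot(Y, predictions.T)` counts examples where both label and prediction are 1, and `np.dot(1-Y, 1-predictions.T)` counts examples where both are 0. A toy check with made-up labels:

```python
import numpy as np

Y = np.array([[1, 0, 1, 1]])
predictions = np.array([[1, 0, 0, 1]])  # one mistake, on the third example
# True positives + true negatives, extracted as a Python scalar
correct = (np.dot(Y, predictions.T) + np.dot(1 - Y, 1 - predictions.T)).item()
accuracy = correct / Y.size * 100
print(accuracy)  # 75.0 -- 3 of 4 examples classified correctly
```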
Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression.
Now, let's try out several hidden layer sizes.
### 4.6 - Tuning hidden layer size (optional/ungraded exercise) ###
Run the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes.
```
# This may take about 2 minutes to run
plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]
for i, n_h in enumerate(hidden_layer_sizes):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer of size %d' % n_h)
parameters = nn_model(X, Y, n_h, num_iterations = 5000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
predictions = predict(parameters, X)
accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)
print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
```
**Interpretation**:
- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data.
- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting.
- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting.
**Optional questions**:
**Note**: Remember to submit the assignment by clicking the blue "Submit Assignment" button at the upper-right.
Some optional/ungraded questions that you can explore if you wish:
- What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?
- Play with the learning_rate. What happens?
- What if we change the dataset? (See part 5 below!)
<font color='blue'>
**You've learnt to:**
- Build a complete neural network with a hidden layer
- Make good use of a non-linear unit
- Implement forward propagation and backpropagation, and train a neural network
- See the impact of varying the hidden layer size, including overfitting
Nice work!
## 5) Performance on other datasets
If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
```
# Datasets
noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()
datasets = {"noisy_circles": noisy_circles,
"noisy_moons": noisy_moons,
"blobs": blobs,
"gaussian_quantiles": gaussian_quantiles}
### START CODE HERE ### (choose your dataset)
dataset = "noisy_moons"
### END CODE HERE ###
X, Y = datasets[dataset]
X, Y = X.T, Y.reshape(1, Y.shape[0])
# make blobs binary
if dataset == "blobs":
Y = Y%2
# Visualize the data
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
```
Congrats on finishing this Programming Assignment!
Reference:
- http://scs.ryerson.ca/~aharley/neural-networks/
- http://cs231n.github.io/neural-networks-case-study/
| github_jupyter |
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D2_HiddenDynamics/W3D2_Intro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Intro
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
## Overview
Today you will learn about Hidden Markov Models (HMMs), which allow us to infer things in the world from a stream of data. The tutorials continue our two running examples that help provide intuition: fishing (for a binary latent state) and tracking astrocat (for a Gaussian latent state). In both examples, we've set up interactive visualizations to provide intuition, and then you will recreate the key inferences step by step. For the binary case, we start with a simple version where the latent state doesn't change; then we'll allow the latent state to change over time. There's plenty of bonus material, but your core learning objective is to understand and implement an algorithm to infer a changing hidden state from observations.
The HMM combines ideas from the linear dynamics lessons (which used Markov models) with the inference methods described on the Bayes day (which used hidden variables). It also connects directly to later lessons on Optimal Control and Reinforcement Learning, which often use the HMM to guide actions.
The HMM is a pervasive model in neuroscience. It is used for data analysis, such as inferring neural activity from fluorescence images. It is also a foundational model for what the brain should compute as it interprets a physical world that is observed only through its senses.
## Video
```
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1bt4y1X7UX", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"bJIAWgycuVU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
## Slides
```
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/8u92f/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
```
| github_jupyter |
```
# A script to calculate tolerance factors of ABX3 perovskites using bond valences from 2016
# Data from the International Union of Crystallography
# Author: Nick Wagner
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
bv = pd.read_csv("../Bond_valences2016.csv")
bv.head()
def calc_tol_factor(ion_list, valence_list, rp=0):
if len(ion_list) != 3:
print("Error: there should be three elements")
return None
for i in range(len(valence_list)): # If charge is 2-, make -2 to match tables
if valence_list[i][-1] == '-':
valence_list[i] = valence_list[i][-1] + valence_list[i][:-1]
for i in range(len(valence_list)): # Similarly, change 2+ to 2
valence_list[i] = int(valence_list[i].strip("+"))
AO_row = bv[(bv['Atom1'] == ion_list[0]) & (bv['Atom1_valence'] == valence_list[0])
& (bv['Atom2'] == ion_list[2]) & (bv['Atom2_valence'] == valence_list[2])]
BO_row = bv[(bv['Atom1'] == ion_list[1]) & (bv['Atom1_valence'] == valence_list[1])
& (bv['Atom2'] == ion_list[2]) & (bv['Atom2_valence'] == valence_list[2])]
avg_AO_row = AO_row.mean(axis=0) # If multiple values exist, take average
avg_BO_row = BO_row.mean(axis=0)
if rp == 0:
AO_bv = avg_AO_row['Ro']-avg_AO_row['B'] * np.log(avg_AO_row['Atom1_valence']/12)
BO_bv = avg_BO_row['Ro']-avg_BO_row['B'] * np.log(avg_BO_row['Atom1_valence']/6)
else: # Currently for Ruddlesden-Popper phases a naive weighted sum is used between A-site coordination of
# 9 in the rocksalt layer and 12 in perovskite
AO_bv = avg_AO_row['Ro']-avg_AO_row['B'] * np.log(avg_AO_row['Atom1_valence']/((9+12*(rp-1))/rp))
BO_bv = avg_BO_row['Ro']-avg_BO_row['B'] * np.log(avg_BO_row['Atom1_valence']/6)
tol_fact = AO_bv / (2**0.5 * BO_bv)
return tol_fact
# Test using BaMnO3
print(calc_tol_factor(['Ba', 'Mn','O'], ['2+', '4+', '2-']))
print(calc_tol_factor(['Ba', 'Mn','O'], ['2+', '4+', '2-'], rp=2))
names = ['Yb','Dy','Ho','Y','Tb','Gd', 'Eu','Sm','Nd','Pr','Ce','La']
nicole = [0.883, 0.901, 0.897, 0.827, 0.906, 0.912, 0.916, 0.92, 0.93, 0.936, 0.942, 0.95]
nick = []
for name in names:
nick.append(float('{:0.3f}'.format(calc_tol_factor([name, 'V','O'], ['3+', '3+', '2-']))))
d = {'nicole': nicole, 'nick': nick}
vanadates = pd.DataFrame(data=d ,index=names)
vanadates
ax = vanadates.plot.bar(figsize=(16,14),fontsize=32)
ax.set_ylabel('Tolerance Factor', fontsize=32)
ax.set_ylim(0.8, 1)
ax.legend(fontsize=32)
ax.set_title('Vanadate Tolerance Factors', fontsize=36)
plt.show()
nickels = ['Lu', 'Y', 'Dy', 'Gd', 'Eu', 'Sm', 'Nd', 'Pr', 'La']
nicole = [0.904, 0.851, 0.928, 0.938, 0.942, 0.947, 0.957, 0.964, 0.977]
nick= []
for nickel in nickels:
nick.append(float('{:0.3f}'.format(calc_tol_factor([nickel, 'Ni','O'], ['3+', '3+', '2-']))))
d = {'nicole': nicole, 'nick': nick}
nickelates = pd.DataFrame(data=d, index=nickels)
ax = nickelates.plot.bar(figsize=(16,14),fontsize=32)
ax.set_ylabel('Tolerance Factor', fontsize=32)
ax.set_ylim(0.8, 1)
ax.legend(fontsize=32)
ax.set_title('Nickelate Tolerance Factors', fontsize=36)
plt.show()
```
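The core relation used in `calc_tol_factor` above is the bond-valence equation $d = R_0 - B\,\ln(s)$, where $s$ is the valence per bond (cation charge divided by coordination number), and the tolerance factor is $t = d_{AO} / (\sqrt{2}\, d_{BO})$. The sketch below recomputes a tolerance factor directly from this relation; the $R_0$ and $B$ values are illustrative placeholders, not the tabulated 2016 parameters loaded from the CSV:

```python
import numpy as np

def bond_length(R0, B, valence, coordination):
    """Bond-valence bond length: d = R0 - B*ln(s), with s = valence/coordination."""
    return R0 - B * np.log(valence / coordination)

# Illustrative (not tabulated) parameters for an A-O and a B-O bond
d_AO = bond_length(R0=2.29, B=0.37, valence=2, coordination=12)  # A site, CN = 12
d_BO = bond_length(R0=1.75, B=0.37, valence=4, coordination=6)   # B site, CN = 6
t = d_AO / (np.sqrt(2) * d_BO)
print(round(t, 3))  # about 1.1 for these placeholder values
```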
| github_jupyter |
# Variations on Binary Search
Now that you've gone through the work of building a binary search function, let's take some time to try out a few exercises that are variations (or extensions) of binary search. We'll provide the function for you to start:
```
def recursive_binary_search(target, source, left=0):
if len(source) == 0:
return None
center = (len(source)-1) // 2
if source[center] == target:
return center + left
elif source[center] < target:
return recursive_binary_search(target, source[center+1:], left+center+1)
else:
return recursive_binary_search(target, source[:center], left)
```
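One detail worth checking is the `left` bookkeeping: each recursive call searches a *slice*, yet the returned index refers to the *original* array. A standalone check (the function is repeated here so the cell runs on its own):

```python
def recursive_binary_search(target, source, left=0):
    if len(source) == 0:
        return None
    center = (len(source) - 1) // 2
    if source[center] == target:
        return center + left
    elif source[center] < target:
        # Searching the right half: shift `left` past the discarded elements
        return recursive_binary_search(target, source[center+1:], left + center + 1)
    else:
        return recursive_binary_search(target, source[:center], left)

source = [1, 3, 5, 7, 8, 11, 12]
# Every element is found at its own index in the ORIGINAL list
for i, value in enumerate(source):
    assert recursive_binary_search(value, source) == i
print(recursive_binary_search(6, source))  # None: 6 is not present
```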
## Find First
The binary search function is guaranteed to return _an_ index for the element you're looking for in an array, but what if the element appears more than once?
Consider this array:
`[1, 3, 5, 7, 7, 7, 8, 11, 12]`
Let's find the number 7:
```
multiple = [1, 3, 5, 7, 7, 7, 8, 11, 12]
recursive_binary_search(7, multiple)
```
### Hmm...
Looks like we got the index 4, which is _correct_, but what if we wanted to find the _first_ occurrence of an element, rather than just any occurrence?
Write a new function: `find_first()` that uses binary_search as a starting point.
> Hint: You shouldn't need to modify binary_search() at all.
```
def find_first(target, source):
idx = recursive_binary_search(target, source)
if idx is None:
return None
while source[idx] == target:
if idx == 0:
return 0
if source[idx-1] == target:
idx -= 1
else:
return idx
multiple = [1, 3, 5, 7, 7, 7, 8, 11, 12, 13, 14, 15]
print(find_first(7, multiple)) # Should return 3
print(find_first(9, multiple)) # Should return None
## Add your own tests to verify that your code works!
```
## Spoiler - Solution below:
Here's what we came up with! Your answer might be a little different.
```python
def find_first(target, source):
index = recursive_binary_search(target, source)
    if index is None:
return None
while source[index] == target:
if index == 0:
return 0
if source[index-1] == target:
index -= 1
else:
return index
```
```
def find_first(target, source):
index = recursive_binary_search(target, source)
    if index is None:
return None
while source[index] == target:
if index == 0:
return 0
if source[index-1] == target:
index -= 1
else:
return index
multiple = [1, 3, 5, 7, 7, 7, 8, 11, 12, 13, 14, 15]
print(find_first(7, multiple)) # Should return 3
print(find_first(9, multiple)) # Should return None
```
## Contains
The second variation is a function that returns a boolean value indicating whether an element is _present_, but with no information about the location of that element.
For example:
```python
letters = ['a', 'c', 'd', 'f', 'g']
print(contains('a', letters)) ## True
print(contains('b', letters)) ## False
```
There are a few different ways to approach this, so try it out, and we'll share two solutions after.
```
def contains(target, source):
index = recursive_binary_search(target, source)
if index is not None:
return True
else:
return False
letters = ['a', 'c', 'd', 'f', 'g']
print(contains('a', letters)) ## True
print(contains('b', letters)) ## False
```
## Spoiler - Solution below:
Here are two solutions we came up with:
One option is just to wrap binary search:
```python
def contains(target, source):
return recursive_binary_search(target, source) is not None
```
Another choice is to build a simpler binary search directly into the function:
```python
def contains(target, source):
# Since we don't need to keep track of the index, we can remove the `left` parameter.
if len(source) == 0:
return False
center = (len(source)-1) // 2
if source[center] == target:
return True
elif source[center] < target:
return contains(target, source[center+1:])
else:
return contains(target, source[:center])
```
Try these functions out below:
```
# Loose wrapper for recursive binary search, returning True if the index is found and False if not
def contains(target, source):
return recursive_binary_search(target, source) is not None
letters = ['a', 'c', 'd', 'f', 'g']
print(contains('a', letters)) ## True
print(contains('b', letters)) ## False
# Native implementation of binary search in the `contains` function.
def contains(target, source):
if len(source) == 0:
return False
center = (len(source)-1) // 2
if source[center] == target:
return True
elif source[center] < target:
return contains(target, source[center+1:])
else:
return contains(target, source[:center])
letters = ['a', 'c', 'd', 'f', 'g']
print(contains('c', letters)) ## True
print(contains('b', letters)) ## False
```
## Awesome work!
| github_jupyter |
<a href="https://colab.research.google.com/github/phenix-project/Colabs/blob/main/CCTBX_Quickstart.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# CCTBX Quickstart
Get all dependencies installed and start coding using CCTBX
# Installation
### Installation is in three steps:
#### 1. Install the Anaconda Python package manager, CCTBX, and py3Dmol
#### 2. Retrieve the chemical data repositories and some example data
#### 3. Restart the Runtime and load CCTBX environment variables
```
#@title Install software
#@markdown Please execute this cell by pressing the _Play_ button
#@markdown on the left. Installation may take a few minutes.
#@markdown Double click on this text to show/hide the installation script
from IPython.utils import io
import tqdm.notebook
total = 50
MINICONDA_PREFIX="/usr/local"
MINICONDA_INSTALLER_SCRIPT="Miniconda3-py37_4.9.2-Linux-x86_64.sh"
with tqdm.notebook.tqdm(total=total) as pbar:
with io.capture_output() as captured:
# install anaconda
%shell wget https://repo.continuum.io/miniconda/$MINICONDA_INSTALLER_SCRIPT
pbar.update(10)
%shell chmod +x {MINICONDA_INSTALLER_SCRIPT}
%shell ./{MINICONDA_INSTALLER_SCRIPT} -b -f -p {MINICONDA_PREFIX}
%shell conda install -c conda-forge cctbx -y
pbar.update(30)
# provide location of libtbx environment
%shell ln -s /usr/local/share/cctbx /usr/share/cctbx
# replace system libstdc++ with conda version
# restart runtime after running
# to keep changes persistent, modify the filesystem, not environment variables
%shell sudo cp /usr/local/lib/libstdc++.so.6.0.29 /usr/lib/x86_64-linux-gnu/libstdc++.so.6
# non cctbx installs
%shell pip install py3dmol
%shell apt install -y subversion
# python scripts for py3dmol + cctbx
%shell mkdir cctbx_py3dmol
%shell cd cctbx_py3dmol
%shell wget https://raw.githubusercontent.com/cschlick/cctbx-notebooks/master/cctbx_py3dmol_wrapper.py
%shell wget https://raw.githubusercontent.com/cschlick/cctbx-notebooks/master/cubetools.py
%shell cd ../
pbar.update(10)
#@title Download data
#@markdown Please execute this cell by pressing the _Play_ button
#@markdown on the left. This should take a minute.
%%capture
%%bash
# get chemical data
cd /usr/local/lib/python3.7/site-packages
mkdir chem_data
cd chem_data
svn --quiet --non-interactive --trust-server-cert co https://github.com/phenix-project/geostd.git/trunk geostd
svn --quiet --non-interactive --trust-server-cert co https://github.com/rlabduke/mon_lib.git/trunk mon_lib
svn --quiet --non-interactive --trust-server-cert export https://github.com/rlabduke/reference_data.git/trunk/Top8000/Top8000_rotamer_pct_contour_grids rotarama_data
svn --quiet --non-interactive --trust-server-cert --force export https://github.com/rlabduke/reference_data.git/trunk/Top8000/Top8000_ramachandran_pct_contour_grids rotarama_data
svn --quiet --non-interactive --trust-server-cert co https://github.com/rlabduke/reference_data.git/trunk/Top8000/Top8000_cablam_pct_contour_grids cablam_data
svn --quiet --non-interactive --trust-server-cert co https://github.com/rlabduke/reference_data.git/trunk/Top8000/rama_z rama_z
# update rotamer and cablam cache
/usr/local/bin/mmtbx.rebuild_rotarama_cache
/usr/local/bin/mmtbx.rebuild_cablam_cache
# get probe
git clone https://github.com/cschlick/cctbx-notebooks.git
mkdir -p /usr/local/share/cctbx/probe/exe
cp cctbx-notebooks/probe /usr/local/share/cctbx/probe/exe
chmod +x /usr/local/share/cctbx/probe/exe/probe
mkdir -p /usr/lib/python3.7/site-packages/probe
mkdir -p /usr/lib/python3.7/site-packages/reduce
# tutorial data
cd /content/
wget https://gitcdn.link/repo/phenix-lbl/cctbx_tutorial_files/master/2019_melk/1aba_pieces.pdb
wget https://gitcdn.link/repo/phenix-lbl/cctbx_tutorial_files/master/2019_melk/1aba_model.pdb
wget https://gitcdn.link/repo/phenix-lbl/cctbx_tutorial_files/master/2019_melk/4zyp.mtz
wget https://gitcdn.link/repo/phenix-lbl/cctbx_tutorial_files/master/2019_melk/1aba_reflections.mtz
wget https://gitcdn.link/repo/phenix-lbl/cctbx_tutorial_files/master/2019_melk/resname_mix.pdb
```
**WAIT!**...don't click the next button yet...Go to "Runtime" and select "Restart Runtime" first. Then click the button below as noted.
```
#@title Set environment variables and do imports
#@markdown Execute this cell by clicking the run button the left
%%capture
# Manually update sys.path
%env PYTHONPATH="$/env/python:/usr/local/lib:/usr/local/lib/python3.7/lib-dynload:/usr/local/lib/python3.7/site-packages"
# add conda paths to sys.path
import sys, os
sys.path.extend(['/usr/local/lib', '/usr/local/lib/python3.7/lib-dynload', '/usr/local/lib/python3.7/site-packages',"/content/cctbx_py3dmol"])
os.environ["MMTBX_CCP4_MONOMER_LIB"]="/usr/local/lib/python3.7/site-packages/chem_data/geostd"
os.environ["LIBTBX_BUILD"]= "/usr/local/share/cctbx"
import cctbx
import libtbx
libtbx.env.add_repository("/usr/local/lib/python3.7/site-packages/")
libtbx.env.module_dist_paths['probe']=""
from cubetools import *
from cctbx_py3dmol_wrapper import CCTBX3dmolWrapper
```
#### Test
We should (hopefully) not see any errors. Use the Run button or Shift+Enter to execute the cells
```
import cctbx
from scitbx.array_family import flex
a = flex.random_double(100)
print(flex.min(a))
# your code here...
```
| github_jupyter |
# Conversations query
```
from rekall.interval_list import IntervalList, Interval
from rekall.temporal_predicates import overlaps
```
## Using Identity Labels
```
def conversationsq(video_name):
from query.models import FaceCharacterActor, Shot
from rekall.video_interval_collection import VideoIntervalCollection
from rekall.parsers import in_array, bbox_payload_parser, merge_dict_parsers, dict_payload_parser
from rekall.merge_ops import payload_plus
from rekall.payload_predicates import payload_satisfies
from rekall.spatial_predicates import scene_graph
from esper.rekall import intrvllists_to_result_bbox
from query.models import Face
from rekall.video_interval_collection import VideoIntervalCollection
from rekall.parsers import in_array, bbox_payload_parser
from rekall.merge_ops import payload_plus, merge_named_payload, payload_second
from esper.rekall import intrvllists_to_result_bbox
from rekall.payload_predicates import payload_satisfies
from rekall.list_predicates import length_at_most
from rekall.logical_predicates import and_pred, or_pred, true_pred
from rekall.spatial_predicates import scene_graph, make_region
from rekall.temporal_predicates import before, after, overlaps
from rekall.bbox_predicates import height_at_least
from esper.rekall import intrvllists_to_result, intrvllists_to_result_with_objects, add_intrvllists_to_result
from esper.prelude import esper_widget
from rekall.interval_list import Interval, IntervalList
# faces are sampled every 12 frames
ONE_FRAME = 1
faces_with_character_actor_qs = FaceCharacterActor.objects.annotate(
min_frame=F('face__frame__number'),
max_frame=F('face__frame__number'),
video_id=F('face__frame__video_id'),
bbox_x1=F('face__bbox_x1'),
bbox_y1=F('face__bbox_y1'),
bbox_x2=F('face__bbox_x2'),
bbox_y2=F('face__bbox_y2'),
character_name=F('characteractor__character__name')
).filter(face__frame__video__name__contains=video_name)
faces_with_identity = VideoIntervalCollection.from_django_qs(
faces_with_character_actor_qs,
with_payload=in_array(merge_dict_parsers([
bbox_payload_parser(VideoIntervalCollection.django_accessor),
dict_payload_parser(VideoIntervalCollection.django_accessor, { 'character': 'character_name' }),
]))
).coalesce(payload_merge_op=payload_plus)
shots_qs = Shot.objects.filter(cinematic=True)
shots = VideoIntervalCollection.from_django_qs(shots_qs)
def payload_unique_characters(payload1, payload2):
if 'characters' not in payload1[0]:
unique_characters = set([p['character'] for p in payload1])
for p in payload2:
unique_characters.add(p['character'])
payload1[0]['characters'] = list(unique_characters)
else:
unique_characters = set([p['character'] for p in payload2])
unique_characters.update(payload1[0]['characters'])
payload1[0]['characters'] = list(unique_characters)
return payload1
shots_with_faces = shots.merge(faces_with_identity,
predicate=overlaps(),
payload_merge_op=payload_second)
shots_with_faces = shots_with_faces.coalesce(payload_merge_op=payload_unique_characters)
def cross_product_faces(intrvl1, intrvl2):
payload1 = intrvl1.get_payload()
payload2 = intrvl2.get_payload()
chrtrs1 = payload1[0]['characters'] if 'characters' in payload1[0] else list(set([p['character'] for p in payload1]))
chrtrs2 = payload2[0]['characters'] if 'characters' in payload2[0] else list(set([p['character'] for p in payload2]))
new_intervals = []
for i in chrtrs1:
for j in chrtrs2:
if i!=j:
new_payload = {'A': i, 'B': j}
start = min(intrvl1.start, intrvl2.start)
end = max(intrvl1.end, intrvl2.end)
new_intervals.append(Interval(start, end, {'A': i, 'B': j}))
return new_intervals
def faces_equal(payload1, payload2):
p1 = [payload1]
if type(payload1) is dict and 'chrs' in payload1:
p1 = payload1['chrs']
elif type(payload1) is list:
p1 = payload1
p2 = [payload2]
if type(payload2) is dict and 'chrs' in payload2:
p2 = payload2['chrs']
elif type(payload2) is list:
p2 = payload2
payload1 = p1
payload2 = p2
    if type(payload1) is not list and type(payload2) is not list:
return (payload1['A'] == payload2['A'] and payload1['B'] == payload2['B']) or (payload1['A'] == payload2['B'] and payload1['B'] == payload2['A'])
elif type(payload1) is list and type(payload2) is list:
for i in payload1:
for j in payload2:
if i['A'] == j['A'] and i['B'] == j['B']:
return True
if i['A'] == j['B'] and i['B'] == j['A']:
return True
elif type(payload1) is list:
for i in payload1:
if i['A'] == payload2['A'] and i['B'] == payload2['B']:
return True
if i['A'] == payload2['B'] and i['B'] == payload2['A']:
return True
else:
for i in payload2:
if i['A'] == payload1['A'] and i['B'] == payload1['B']:
return True
if i['A'] == payload1['B'] and i['B'] == payload1['A']:
return True
return False
    def times_equal(intrvl1, intrvl2):
        # Despite the name, this is true when either interval contains the other
        return (intrvl1.start >= intrvl2.start and intrvl1.end <= intrvl2.end) or (intrvl2.start >= intrvl1.start and intrvl2.end <= intrvl1.end)
def times_overlap(intrvl1, intrvl2):
return intrvl1.start <= intrvl2.end and intrvl2.start <= intrvl1.end
def merge_to_list(payload1, payload2):
p1 = payload1 if type(payload1) is list else [payload1]
p2 = payload2 if type(payload2) is list else [payload2]
return p1+p2
def count_shots(payload1, payload2):
p1 = [payload1]
if type(payload1) is dict and 'chrs' in payload1:
p1 = payload1['chrs']
elif type(payload1) is list:
p1 = payload1
p2 = [payload2]
if type(payload2) is dict and 'chrs' in payload2:
p2 = payload2['chrs']
elif type(payload2) is list:
p2 = payload2
p1_shots = payload1['shots'] if type(payload1) is dict and 'shots' in payload1 else 1
p2_shots = payload2['shots'] if type(payload2) is dict and 'shots' in payload2 else 1
return {'shots': p1_shots + p2_shots, 'chrs': p1 + p2}
def shots_equal(payload1, payload2):
p1 = [payload1]
if type(payload1) is dict and 'chrs' in payload1:
p1 = payload1['chrs']
elif type(payload1) is list:
p1 = payload1
p2 = [payload2]
if type(payload2) is dict and 'chrs' in payload2:
p2 = payload2['chrs']
elif type(payload2) is list:
p2 = payload2
p1_shots = payload1['shots'] if type(payload1) is dict and 'shots' in payload1 else 1
p2_shots = payload2['shots'] if type(payload2) is dict and 'shots' in payload2 else 1
shots = p1_shots if p1_shots > p2_shots else p2_shots
return {'shots': shots, 'chrs': p1 + p2}
two_shots = shots_with_faces.join(shots_with_faces, predicate=after(max_dist=ONE_FRAME, min_dist=ONE_FRAME),
merge_op=cross_product_faces)
convs = two_shots.coalesce(predicate=times_equal, payload_merge_op=merge_to_list)
convs = convs.coalesce(predicate=payload_satisfies(faces_equal, arity=2), payload_merge_op=count_shots)
adjacent_seq = convs.merge(convs, predicate=and_pred(after(max_dist=ONE_FRAME, min_dist=ONE_FRAME), payload_satisfies(faces_equal, arity=2), arity=2), payload_merge_op=count_shots)
convs = convs.set_union(adjacent_seq)
# convs = convs.coalesce(predicate=times_equal, payload_merge_op=shots_equal)
def filter_fn(intvl):
payload = intvl.get_payload()
if type(payload) is dict and 'shots' in payload:
return payload['shots'] >= 2
return False
convs = convs.filter(filter_fn)
convs = convs.coalesce(predicate=times_overlap)
for video_id in convs.intervals.keys():
print(video_id)
intvllist = convs.get_intervallist(video_id)
for intvl in intvllist.get_intervals():
print(intvl.payload)
print(str(intvl.start) + ':' + str(intvl.end))
return convs
```
### Validation Numbers
```
Godfather Part III
Precision: 0.7562506843815017
Recall: 0.9028280099350734
Precision Per Item: 0.5555555555555556
Recall Per Item: 1.0
Apollo 13
Precision: 0.9801451458304806
Recall: 0.7144069065322621
Precision Per Item: 1.0
Recall Per Item: 0.9333333333333333
Harry Potter 2
Precision: 0.8393842579146094
Recall: 0.5495863839497955
Precision Per Item: 0.75
Recall Per Item: 0.875
Fight Club
Precision: 0.7107177395618719
Recall: 0.8310226155358899
Precision Per Item: 0.6666666666666666
Recall Per Item: 0.9285714285714286
```
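These four metrics match the `compute_statistics` routine defined later in this section: time-based precision is the duration of overlap between detected and ground-truth intervals divided by total detected duration, recall divides by total ground-truth duration, and the per-item variants count how many segments on each side overlap anything on the other side. A toy check with hypothetical intervals:

```python
def overlap(a, b):
    # Length of the intersection of two (start, end) intervals; 0 if disjoint.
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

detected = [(0, 10), (20, 30)]      # hypothetical query intervals
ground_truth = [(5, 15), (22, 28)]  # hypothetical annotations

total_overlap = sum(overlap(d, g) for d in detected for g in ground_truth)
precision = total_overlap / sum(e - s for s, e in detected)       # 11 / 20 = 0.55
recall = total_overlap / sum(e - s for s, e in ground_truth)      # 11 / 16 = 0.6875

# Per-item variants: fraction of segments that overlap anything on the other side.
precision_per_item = sum(
    any(overlap(d, g) > 0 for g in ground_truth) for d in detected) / len(detected)
recall_per_item = sum(
    any(overlap(g, d) > 0 for d in detected) for g in ground_truth) / len(ground_truth)
```

This simple sum would double-count time if the detected intervals overlapped each other, which is why the real routine coalesces each list before measuring total time.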
## Using Face Embeddings
Strategy: cluster the face embeddings within each shot (the number of clusters is the maximum number of faces visible in any single frame of the shot), then compare cluster centroids across shots to decide whether the same people appear.
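The idea can be sketched in isolation with synthetic embeddings; everything below (the 128-d vectors, the minimal k-means, and the distance threshold) is illustrative and does not use the real `esper.face_embeddings` API:

```python
import numpy as np

def kmeans2(X, iters=10):
    # Minimal 2-means with a deterministic farthest-point initialization.
    first = X[0]
    second = X[np.argmax(np.linalg.norm(X - first, axis=1))]
    centers = np.stack([first, second])
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        centers = np.stack([X[labels == i].mean(axis=0) for i in range(2)])
    return labels, centers

def same_face(c1, c2, threshold=1.0):
    # Treat two centroids as the same person if they are close in embedding space.
    return float(np.linalg.norm(c1 - c2)) < threshold

# Two hypothetical shots: shot A contains two well-separated identities,
# shot B repeats only the first, so exactly one centroid should match.
rng = np.random.default_rng(1)
shot_a = np.vstack([rng.normal(0.0, 0.05, (10, 128)),
                    rng.normal(3.0, 0.05, (10, 128))])
shot_b = rng.normal(0.0, 0.05, (10, 128))

_, centers_a = kmeans2(shot_a)
center_b = shot_b.mean(axis=0)
matches = [same_face(c, center_b) for c in centers_a]  # one True, one False
```

The real query additionally picks a medoid face per cluster (`cluster_center` below) so that distances can be computed through the embedding store rather than on raw vectors.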
```
def conversationsq_face_embeddings(video_name):
    from query.models import FaceCharacterActor, Shot
    from django.db.models import F
from rekall.video_interval_collection import VideoIntervalCollection
from rekall.parsers import in_array, bbox_payload_parser, merge_dict_parsers, dict_payload_parser
from rekall.merge_ops import payload_plus
from rekall.payload_predicates import payload_satisfies
from rekall.spatial_predicates import scene_graph
from esper.rekall import intrvllists_to_result_bbox
from query.models import Face
from rekall.video_interval_collection import VideoIntervalCollection
from rekall.parsers import in_array, bbox_payload_parser
from rekall.merge_ops import payload_plus, merge_named_payload, payload_second
from esper.rekall import intrvllists_to_result_bbox
from rekall.payload_predicates import payload_satisfies
from rekall.list_predicates import length_at_most
from rekall.logical_predicates import and_pred, or_pred, true_pred
from rekall.spatial_predicates import scene_graph, make_region
from rekall.temporal_predicates import before, after, overlaps, equal
from rekall.bbox_predicates import height_at_least
from esper.rekall import intrvllists_to_result, intrvllists_to_result_with_objects, add_intrvllists_to_result
from esper.prelude import esper_widget
from rekall.interval_list import Interval, IntervalList
import esper.face_embeddings as face_embeddings
EMBEDDING_EQUALITY_THRESHOLD = 1.
ONE_FRAME = 1
faces_qs = Face.objects.annotate(
min_frame=F('frame__number'),
max_frame=F('frame__number'),
video_id=F('frame__video_id')
).filter(frame__video__name__contains=video_name, frame__regularly_sampled=True)
faces_per_frame = VideoIntervalCollection.from_django_qs(
faces_qs,
with_payload=in_array(merge_dict_parsers([
bbox_payload_parser(VideoIntervalCollection.django_accessor),
dict_payload_parser(VideoIntervalCollection.django_accessor, { 'face_id': 'id' }),
]))
).coalesce(payload_merge_op=payload_plus)
shots_qs = Shot.objects.filter(cinematic=True)
shots = VideoIntervalCollection.from_django_qs(shots_qs)
shots_with_faces = shots.merge(
faces_per_frame,
predicate=overlaps(),
payload_merge_op=lambda shot_id, faces_in_frame: [faces_in_frame]
).coalesce(payload_merge_op=payload_plus)
    def cluster_center(face_ids):
        # Medoid: return the face whose embedding is closest to the cluster mean
        mean_embedding = face_embeddings.mean(face_ids)
        dists = face_embeddings.dist(face_ids, [mean_embedding])
        return min(zip(dists, face_ids))[1]
def cluster_and_compute_centers(faces_in_frame_list):
num_people = max(len(faces_in_frame) for faces_in_frame in faces_in_frame_list)
face_ids = [face['face_id'] for faces_in_frame in faces_in_frame_list for face in faces_in_frame]
if num_people == 1:
clusters = [(fid, 0) for fid in face_ids]
else:
clusters = face_embeddings.kmeans(face_ids, num_people)
centers = [
(
cluster_center([
face_id
for face_id, cluster_id in clusters
if cluster_id == i
]), [
face_id
for face_id, cluster_id in clusters
if cluster_id == i
]
)
for i in range(num_people)
]
return centers
print("Clusters computed")
shots_with_centers = shots_with_faces.map(
lambda intrvl: (intrvl.start, intrvl.end, cluster_and_compute_centers(intrvl.payload))
)
def same_face(center1, center2):
return face_embeddings.dist([center1], target_ids=[center2])[0] < EMBEDDING_EQUALITY_THRESHOLD
def cross_product_faces(intrvl1, intrvl2):
payload1 = intrvl1.get_payload()
payload2 = intrvl2.get_payload()
payload = []
for cluster1 in payload1:
for cluster2 in payload2:
if not same_face(cluster1[0], cluster2[0]):
new_payload = {'A': cluster1, 'B': cluster2}
payload.append(new_payload)
        return [Interval(min(intrvl1.get_start(), intrvl2.get_start()),
                         max(intrvl1.get_end(), intrvl2.get_end()), {
                             'chrs': payload,
                             'shots': 1
                         })]
two_shots = shots_with_centers.join(
shots_with_centers,
predicate=after(max_dist=ONE_FRAME, min_dist=ONE_FRAME),
merge_op=cross_product_faces
)
print("Cross product done")
def faces_equal(payload1, payload2):
for face_pair1 in payload1['chrs']:
for face_pair2 in payload2['chrs']:
if (same_face(face_pair1['A'][0], face_pair2['A'][0]) and
same_face(face_pair1['B'][0], face_pair2['B'][0])):
return True
if (same_face(face_pair1['A'][0], face_pair2['B'][0]) and
same_face(face_pair1['B'][0], face_pair2['A'][0])):
return True
return False
convs = two_shots.coalesce(
predicate=payload_satisfies(faces_equal, arity=2),
payload_merge_op = lambda payload1, payload2: {
'chrs': payload1['chrs'] + payload2['chrs'],
'shots': payload1['shots'] + payload2['shots']
}
)
print("Coalesce done")
adjacent_seq = convs.merge(
convs,
predicate=and_pred(
after(max_dist=ONE_FRAME, min_dist=ONE_FRAME),
payload_satisfies(faces_equal, arity=2),
arity=2),
payload_merge_op = lambda payload1, payload2: {
'chrs': payload1['chrs'] + payload2['chrs'],
'shots': payload1['shots'] + payload2['shots']
},
working_window=1
)
convs = convs.set_union(adjacent_seq)
# convs = convs.coalesce(predicate=times_equal, payload_merge_op=shots_equal)
print("Two-shot adjacencies done")
def filter_fn(intvl):
payload = intvl.get_payload()
if type(payload) is dict and 'shots' in payload:
return payload['shots'] >= 2
return False
convs = convs.filter(filter_fn)
convs = convs.coalesce()
print("Final filter done")
# for video_id in convs.intervals.keys():
# print(video_id)
# intvllist = convs.get_intervallist(video_id)
# for intvl in intvllist.get_intervals():
# print(intvl.payload)
# print(str(intvl.start) + ':' + str(intvl.end))
return convs
convs = conversationsq_face_embeddings('apollo 13')
convs.get_intervallist(15).size()
# Returns precision, recall, precision_per_item, recall_per_item
def compute_statistics(query_intrvllists, ground_truth_intrvllists):
total_query_time = 0
total_query_segments = 0
total_ground_truth_time = 0
total_ground_truth_segments = 0
for video in query_intrvllists:
total_query_time += query_intrvllists[video].coalesce().get_total_time()
total_query_segments += query_intrvllists[video].size()
for video in ground_truth_intrvllists:
total_ground_truth_time += ground_truth_intrvllists[video].coalesce().get_total_time()
total_ground_truth_segments += ground_truth_intrvllists[video].size()
total_overlap_time = 0
overlapping_query_segments = 0
overlapping_ground_truth_segments = 0
for video in query_intrvllists:
if video in ground_truth_intrvllists:
query_list = query_intrvllists[video]
gt_list = ground_truth_intrvllists[video]
total_overlap_time += query_list.overlaps(gt_list).coalesce().get_total_time()
overlapping_query_segments += query_list.filter_against(gt_list, predicate=overlaps()).size()
overlapping_ground_truth_segments += gt_list.filter_against(query_list, predicate=overlaps()).size()
if total_query_time == 0:
precision = 1.0
precision_per_item = 1.0
else:
precision = total_overlap_time / total_query_time
precision_per_item = overlapping_query_segments / total_query_segments
if total_ground_truth_time == 0:
recall = 1.0
recall_per_item = 1.0
else:
recall = total_overlap_time / total_ground_truth_time
recall_per_item = overlapping_ground_truth_segments / total_ground_truth_segments
return precision, recall, precision_per_item, recall_per_item
def print_statistics(query_intrvllists, ground_truth_intrvllists):
precision, recall, precision_per_item, recall_per_item = compute_statistics(
query_intrvllists, ground_truth_intrvllists)
print("Precision: ", precision)
print("Recall: ", recall)
print("Precision Per Item: ", precision_per_item)
print("Recall Per Item: ", recall_per_item)
apollo_convs = conversationsq_face_embeddings("apollo 13")
apollo_data = [
(2578, 4100), (4244, 4826), (5098, 5828), (7757, 9546),
(9602, 10300), (12393, 12943), (13088, 13884), (14146, 15212),
(15427, 16116), (18040, 19198), (20801, 23368), (24572, 26185),
(26735, 28753), (29462, 30873), (31768, 34618)]
apollo_gt = {15: IntervalList([Interval(start, end, payload=None) for (start,end) in apollo_data])}
print_statistics({15: apollo_convs.filter(lambda intrvl: intrvl.start < 34618).get_intervallist(15)}, apollo_gt)
godfather_convs = conversationsq_face_embeddings("the godfather part iii")
godfather_data = [(12481, 13454), (13673, 14729), (16888, 17299), (21101, 27196),
(27602, 29032), (29033, 33204), (34071, 41293), (41512, 43103)]
godfather_gt = {216: IntervalList([Interval(start, end, payload=None) for (start,end) in godfather_data])}
print_statistics({216: godfather_convs.filter(lambda intrvl: intrvl.start < 43103).get_intervallist(216)},
godfather_gt)
hp2_convs = conversationsq_face_embeddings('harry potter and the chamber of secrets')
hp2_query = hp2_convs.filter(lambda inv: inv.start < 20308)
hp2_query = {'374': hp2_query.get_intervallist(374)}
hp2_data = [(2155, 4338), (4687, 6188), (6440, 10134), (12921, 13151), (16795, 17370),
(17766, 18021), (18102, 19495), (19622, 20308)]
hp2_gt = {'374': IntervalList([Interval(start, end, payload=None) for (start,end) in hp2_data])}
print_statistics(hp2_query, hp2_gt)
fc_query = conversationsq_face_embeddings('fight club')
fc_query = fc_query.filter(lambda inv: inv.start < 58258)
fc_query = {'61': fc_query.get_intervallist(61)}
fc_data = [(4698, 5602), (6493, 6865), (8670, 9156), (9517, 10908), (11087, 13538), (22039, 24188),
(25603, 27656), (31844, 32812), (32918, 33451), (33698, 35363), (42072, 45143),
(45272, 46685), (49162, 50618), (56830, 58258)]
fc_gt = {'61': IntervalList([Interval(start, end, payload=None) for (start,end) in fc_data])}
print_statistics(fc_query, fc_gt)
```
Results with a threshold of 0.9:
```
Precision: 0.9390051766824218
Recall: 0.8327760866310694
Precision Per Item: 0.9411764705882353
Recall Per Item: 1.0
Precision: 0.7085401799565622
Recall: 0.7960695455139658
Precision Per Item: 0.6
Recall Per Item: 0.875
Precision: 0.7528127623845507
Recall: 0.8525244841684891
Precision Per Item: 0.5625
Recall Per Item: 1.0
Precision: 0.6229706390328152
Recall: 0.7093411996066863
Precision Per Item: 0.6451612903225806
Recall Per Item: 0.9285714285714286
Average precision: 75.6
Average recall: 79.8
```
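The two averages are the unweighted means of the four per-film time-based numbers above, expressed as percentages and rounded to one decimal place:

```python
# Time-based precision/recall values from the four films in the 0.9-threshold run.
precisions = [0.9390051766824218, 0.7085401799565622,
              0.7528127623845507, 0.6229706390328152]
recalls = [0.8327760866310694, 0.7960695455139658,
           0.8525244841684891, 0.7093411996066863]

avg_precision = round(sum(precisions) / len(precisions) * 100, 1)  # 75.6
avg_recall = round(sum(recalls) / len(recalls) * 100, 1)           # 79.8
```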
Results with a threshold of 1.0:
```
Precision: 0.9040439021791251
Recall: 0.8467488397624632
Precision Per Item: 0.8888888888888888
Recall Per Item: 1.0
Precision: 0.6720145787179304
Recall: 0.8195128328031722
Precision Per Item: 0.5238095238095238
Recall Per Item: 1.0
Precision: 0.7255309325946445
Recall: 0.8965484453741561
Precision Per Item: 0.5625
Recall Per Item: 1.0
Precision: 0.5912671438282219
Recall: 0.7269911504424779
Precision Per Item: 0.5555555555555556
Recall Per Item: 0.9285714285714286
Average precision: 72.3
Average recall: 82.3
```
## Face Embeddings Algorithm on Identities
```
def conversationsq_face_embeddings_with_identities(video_name):
    from query.models import FaceCharacterActor, Shot
    from django.db.models import F
from rekall.video_interval_collection import VideoIntervalCollection
from rekall.parsers import in_array, bbox_payload_parser, merge_dict_parsers, dict_payload_parser
from rekall.merge_ops import payload_plus
from rekall.payload_predicates import payload_satisfies
from rekall.spatial_predicates import scene_graph
from esper.rekall import intrvllists_to_result_bbox
from query.models import Face
from rekall.video_interval_collection import VideoIntervalCollection
from rekall.parsers import in_array, bbox_payload_parser
from rekall.merge_ops import payload_plus, merge_named_payload, payload_second
from esper.rekall import intrvllists_to_result_bbox
from rekall.payload_predicates import payload_satisfies
from rekall.list_predicates import length_at_most
from rekall.logical_predicates import and_pred, or_pred, true_pred
from rekall.spatial_predicates import scene_graph, make_region
from rekall.temporal_predicates import before, after, overlaps, equal
from rekall.bbox_predicates import height_at_least
from esper.rekall import intrvllists_to_result, intrvllists_to_result_with_objects, add_intrvllists_to_result
from esper.prelude import esper_widget
from rekall.interval_list import Interval, IntervalList
import esper.face_embeddings as face_embeddings
EMBEDDING_EQUALITY_THRESHOLD = 10
ONE_FRAME = 1
faces_with_character_actor_qs = FaceCharacterActor.objects.annotate(
min_frame=F('face__frame__number'),
max_frame=F('face__frame__number'),
video_id=F('face__frame__video_id'),
bbox_x1=F('face__bbox_x1'),
bbox_y1=F('face__bbox_y1'),
bbox_x2=F('face__bbox_x2'),
bbox_y2=F('face__bbox_y2'),
character_name=F('characteractor__character__name')
).filter(face__frame__video__name__contains=video_name)
faces_per_frame = VideoIntervalCollection.from_django_qs(
faces_with_character_actor_qs,
with_payload=in_array(merge_dict_parsers([
bbox_payload_parser(VideoIntervalCollection.django_accessor),
dict_payload_parser(VideoIntervalCollection.django_accessor, { 'character': 'character_name' }),
]))
).coalesce(payload_merge_op=payload_plus)
shots_qs = Shot.objects.filter(cinematic=True)
shots = VideoIntervalCollection.from_django_qs(shots_qs)
shots_with_faces = shots.merge(
faces_per_frame,
predicate=overlaps(),
payload_merge_op=lambda shot_id, faces_in_frame: [faces_in_frame]
).coalesce(payload_merge_op=payload_plus)
def cluster_center(face_ids):
mean_embedding = face_embeddings.mean(face_ids)
dists = face_embeddings.dist(face_ids, [mean_embedding])
return min(zip(dists, face_ids))[1]
def cluster_and_compute_centers(faces_in_frame_list):
# num_people = max(len(faces_in_frame) for faces_in_frame in faces_in_frame_list)
# face_ids = [face['face_id'] for faces_in_frame in faces_in_frame_list for face in faces_in_frame]
# if num_people == 1:
# clusters = [(fid, 0) for fid in face_ids]
# else:
# clusters = face_embeddings.kmeans(face_ids, num_people)
# centers = [
# (
# cluster_center([
# face_id
# for face_id, cluster_id in clusters
# if cluster_id == i
# ]), [
# face_id
# for face_id, cluster_id in clusters
# if cluster_id == i
# ]
# )
# for i in range(num_people)
# ]
# return centers
return set([face['character'] for faces_in_frame in faces_in_frame_list for face in faces_in_frame])
print("Clusters computed")
shots_with_centers = shots_with_faces.map(
lambda intrvl: (intrvl.start, intrvl.end, cluster_and_compute_centers(intrvl.payload))
)
def same_face(center1, center2):
return center1 == center2
def cross_product_faces(intrvl1, intrvl2):
payload1 = intrvl1.get_payload()
payload2 = intrvl2.get_payload()
payload = []
for cluster1 in list(payload1):
for cluster2 in list(payload2):
if not same_face(cluster1, cluster2):
new_payload = {'A': cluster1, 'B': cluster2}
payload.append(new_payload)
return [Interval(min(intrvl1.get_start(), intrvl2.get_start()),
max(intrvl1.get_end(), intrvl2.get_end()), {
'chrs': payload,
'shots': 1
})]
two_shots = shots_with_centers.join(
shots_with_centers,
predicate=after(max_dist=ONE_FRAME, min_dist=ONE_FRAME),
merge_op=cross_product_faces,
working_window=ONE_FRAME
)
print("Cross product done")
    def faces_equal(payload1, payload2):
        for face_pair1 in payload1['chrs']:
            for face_pair2 in payload2['chrs']:
                # 'A' and 'B' hold character name strings here, so compare them directly
                if (same_face(face_pair1['A'], face_pair2['A']) and
                        same_face(face_pair1['B'], face_pair2['B'])):
                    return True
                if (same_face(face_pair1['A'], face_pair2['B']) and
                        same_face(face_pair1['B'], face_pair2['A'])):
                    return True
        return False
convs = two_shots.coalesce(
predicate=payload_satisfies(faces_equal, arity=2),
payload_merge_op = lambda payload1, payload2: {
            'chrs': payload1['chrs'] + payload2['chrs'],
'shots': payload1['shots'] + payload2['shots']
}
)
print("Coalesce done")
adjacent_seq = convs.merge(
convs,
predicate=and_pred(
after(max_dist=ONE_FRAME, min_dist=ONE_FRAME),
payload_satisfies(faces_equal, arity=2),
arity=2),
payload_merge_op = lambda payload1, payload2: {
'chrs': payload1['chrs'] + payload2['chrs'],
'shots': payload1['shots'] + payload2['shots']
},
working_window=1
)
convs = convs.set_union(adjacent_seq)
# convs = convs.coalesce(predicate=times_equal, payload_merge_op=shots_equal)
print("Two-shot adjacencies done")
def filter_fn(intvl):
payload = intvl.get_payload()
if type(payload) is dict and 'shots' in payload:
return payload['shots'] >= 2
return False
convs = convs.filter(filter_fn)
convs = convs.coalesce()
print("Final filter done")
for video_id in convs.intervals.keys():
print(video_id)
intvllist = convs.get_intervallist(video_id)
for intvl in intvllist.get_intervals():
print(intvl.payload)
print(str(intvl.start) + ':' + str(intvl.end))
return convs
convs2 = conversationsq_face_embeddings_with_identities('apollo 13')
apollo_data = [
(2578, 4100), (4244, 4826), (5098, 5828), (7757, 9546),
(9602, 10300), (12393, 12943), (13088, 13884), (14146, 15212),
(15427, 16116), (18040, 19198), (20801, 23368), (24572, 26185),
(26735, 28753), (29462, 30873), (31768, 34618)]
apollo_gt = {15: IntervalList([Interval(start, end, payload=None) for (start,end) in apollo_data])}
print_statistics({15: convs2.filter(lambda intrvl: intrvl.start < 34618).get_intervallist(15)}, apollo_gt)
```
Results:
```
Precision: 0.9756586483390607
Recall: 0.510055391985628
Precision Per Item: 1.0
Recall Per Item: 0.9333333333333333
```
## Dan's Scratchpad
```
from query.models import FaceCharacterActor, Shot
from django.db.models import F
from rekall.video_interval_collection import VideoIntervalCollection
from rekall.parsers import in_array, bbox_payload_parser, merge_dict_parsers, dict_payload_parser
from rekall.merge_ops import payload_plus
from rekall.payload_predicates import payload_satisfies
from rekall.spatial_predicates import scene_graph
from esper.rekall import intrvllists_to_result_bbox
from query.models import Face
from rekall.video_interval_collection import VideoIntervalCollection
from rekall.parsers import in_array, bbox_payload_parser
from rekall.merge_ops import payload_plus, merge_named_payload, payload_second
from esper.rekall import intrvllists_to_result_bbox
from rekall.payload_predicates import payload_satisfies
from rekall.list_predicates import length_at_most
from rekall.logical_predicates import and_pred, or_pred, true_pred
from rekall.spatial_predicates import scene_graph, make_region
from rekall.temporal_predicates import before, after, overlaps
from rekall.bbox_predicates import height_at_least
from esper.rekall import intrvllists_to_result, intrvllists_to_result_with_objects, add_intrvllists_to_result
from esper.prelude import esper_widget
from rekall.interval_list import Interval, IntervalList
import esper.face_embeddings as face_embeddings
# faces are sampled every 12 frames
SAMPLING_RATE = 12
ONE_FRAME = 1
video_name='apollo 13'
faces_qs = Face.objects.annotate(
min_frame=F('frame__number'),
max_frame=F('frame__number'),
video_id=F('frame__video_id')
).filter(
frame__video__name__contains=video_name,
frame__regularly_sampled=True,
probability__gte=0.9
)
faces_per_frame = VideoIntervalCollection.from_django_qs(
faces_qs,
with_payload=in_array(merge_dict_parsers([
bbox_payload_parser(VideoIntervalCollection.django_accessor),
dict_payload_parser(VideoIntervalCollection.django_accessor, { 'face_id': 'id' }),
]))
).coalesce(payload_merge_op=payload_plus)
shots_qs = Shot.objects.filter(cinematic=True)
shots = VideoIntervalCollection.from_django_qs(shots_qs)
shots_with_faces = shots.merge(
faces_per_frame,
predicate=overlaps(),
payload_merge_op=lambda shot_id, faces_in_frame: [faces_in_frame]
).coalesce(payload_merge_op=payload_plus)
def cluster_center(face_ids):
mean_embedding = face_embeddings.mean(face_ids)
dists = face_embeddings.dist(face_ids, [mean_embedding])
return min(zip(dists, face_ids))[1]
def cluster_and_compute_centers(faces_in_frame_list):
num_people = max(len(faces_in_frame) for faces_in_frame in faces_in_frame_list)
face_ids = [face['face_id'] for faces_in_frame in faces_in_frame_list for face in faces_in_frame]
if num_people == 1:
clusters = [(fid, 0) for fid in face_ids]
else:
clusters = face_embeddings.kmeans(face_ids, num_people)
centers = [
(
cluster_center([
face_id
for face_id, cluster_id in clusters
if cluster_id == i
]), [
face_id
for face_id, cluster_id in clusters
if cluster_id == i
]
)
for i in range(num_people)
]
return centers
shots_with_centers = shots_with_faces.map(
lambda intrvl: (intrvl.start, intrvl.end, cluster_and_compute_centers(intrvl.payload))
)
shots_with_centers.get_intervallist(15).filter(payload_satisfies(lambda p: len(p) > 1))
a_list = [706034, 706036, 706038, 706040, 706042, 706043, 706046, 706048]
b_list = [706033, 706035, 706037, 706039, 706041, 706044, 706045, 706047, 706049, 706050]
a_mean = face_embeddings.mean(a_list)
b_mean = face_embeddings.mean(b_list)
a_mean
b_mean
import numpy as np
np.sqrt(sum((a-b) ** 2 for a, b in zip(a_mean, b_mean))) * 8
a_mean_small1 = face_embeddings.mean(a_list[:4])
a_mean_small2 = face_embeddings.mean(a_list[4:8])
np.sqrt(sum((a-b) ** 2 for a, b in zip(a_mean_small1, a_mean_small2))) * 4
a_mean = face_embeddings.mean()
b_mean = face_embeddings.mean()
np.sqrt(sum((a-b) ** 2 for a, b in zip(a_mean, b_mean)))
def cluster_center(face_ids):
mean_embedding = face_embeddings.mean(face_ids)
dists = face_embeddings.dist(face_ids, [mean_embedding])
return min(zip(dists, face_ids))[1]
cluster_center(a_list)
cluster_center(b_list)
face_embeddings.dist([cluster_center(a_list)], target_ids=[cluster_center(b_list)])
c_list = [706334, 706336, 706338, 706341, 706343, 706345, 706346, 706348, 706350, 706353, 706355, 706357, 706359, 706361, 706362, 706364, 706365, 706366, 706367, 706368, 706369]
face_embeddings.dist([cluster_center(a_list)], target_ids=[cluster_center(c_list)])
face_embeddings.dist([cluster_center(b_list)], target_ids=[cluster_center(c_list)])
```
# Scratchpad
```
from query.models import FaceCharacterActor, Shot, Labeler
from django.db.models import F
from rekall.video_interval_collection import VideoIntervalCollection
from rekall.parsers import in_array, bbox_payload_parser, merge_dict_parsers, dict_payload_parser
from rekall.merge_ops import payload_plus
from rekall.payload_predicates import payload_satisfies
from rekall.spatial_predicates import scene_graph
from esper.rekall import intrvllists_to_result_bbox
from query.models import Face
from rekall.video_interval_collection import VideoIntervalCollection
from rekall.parsers import in_array, bbox_payload_parser
from rekall.merge_ops import payload_plus, merge_named_payload, payload_second
from esper.rekall import intrvllists_to_result_bbox
from rekall.payload_predicates import payload_satisfies
from rekall.list_predicates import length_at_most
from rekall.logical_predicates import and_pred, or_pred, true_pred
from rekall.spatial_predicates import scene_graph, make_region
from rekall.temporal_predicates import before, after, overlaps
from rekall.bbox_predicates import height_at_least
from esper.rekall import intrvllists_to_result, intrvllists_to_result_with_objects, add_intrvllists_to_result
from esper.prelude import esper_widget
from rekall.interval_list import Interval, IntervalList
RIGHT_HALF_MIN_X = 0.45
LEFT_HALF_MAX_X = 0.55
MIN_FACE_HEIGHT = 0.4
MAX_FACES_ON_SCREEN = 2
# faces are sampled every 12 frames
SAMPLING_RATE = 12
ONE_SECOND = 1  # misnomer: this is one frame, used for shot adjacency
FOUR_SECONDS = 96
TEN_SECONDS = 240
# Annotate face rows with start and end frames and the video ID
faces_with_character_actor_qs = FaceCharacterActor.objects.annotate(
min_frame=F('face__frame__number'),
max_frame=F('face__frame__number'),
video_id=F('face__frame__video_id'),
bbox_x1=F('face__bbox_x1'),
bbox_y1=F('face__bbox_y1'),
bbox_x2=F('face__bbox_x2'),
bbox_y2=F('face__bbox_y2'),
character_name=F('characteractor__character__name')
)
faces_with_identity = VideoIntervalCollection.from_django_qs(
faces_with_character_actor_qs,
with_payload=in_array(merge_dict_parsers([
bbox_payload_parser(VideoIntervalCollection.django_accessor),
dict_payload_parser(VideoIntervalCollection.django_accessor, { 'character': 'character_name' }),
]))
).coalesce(payload_merge_op=payload_plus)
shots_qs = Shot.objects.filter(
labeler=Labeler.objects.get(name='shot-hsvhist-face'))
shots = VideoIntervalCollection.from_django_qs(shots_qs)
def payload_unique_characters(payload1, payload2):
if 'characters' not in payload1[0]:
unique_characters = set([p['character'] for p in payload1])
for p in payload2:
unique_characters.add(p['character'])
payload1[0]['characters'] = list(unique_characters)
else:
unique_characters = set([p['character'] for p in payload2])
unique_characters.update(payload1[0]['characters'])
payload1[0]['characters'] = list(unique_characters)
return payload1
shots_with_faces = shots.merge(faces_with_identity,
predicate=overlaps(),
payload_merge_op=payload_second)
shots_with_faces = shots_with_faces.coalesce(payload_merge_op=payload_unique_characters)
def cross_product_faces(intrvl1, intrvl2):
payload1 = intrvl1.get_payload()
payload2 = intrvl2.get_payload()
chrtrs1 = payload1[0]['characters'] if 'characters' in payload1[0] else list(set([p['character'] for p in payload1]))
chrtrs2 = payload2[0]['characters'] if 'characters' in payload2[0] else list(set([p['character'] for p in payload2]))
new_intervals = []
    for i in chrtrs1:
        for j in chrtrs2:
            if i != j:
                start = min(intrvl1.start, intrvl2.start)
                end = max(intrvl1.end, intrvl2.end)
                new_intervals.append(Interval(start, end, {'A': i, 'B': j}))
    return new_intervals
def faces_equal(payload1, payload2):
return (payload1['A'] == payload2['A'] and payload1['B'] == payload2['B']) or (payload1['A'] == payload2['B'] and payload1['B'] == payload2['A'])
def faces_equal(payload1, payload2):
    if type(payload1) is not list and type(payload2) is not list:
return (payload1['A'] == payload2['A'] and payload1['B'] == payload2['B']) or (payload1['A'] == payload2['B'] and payload1['B'] == payload2['A'])
    elif type(payload1) is list and type(payload2) is list:
for i in payload1:
for j in payload2:
if i['A'] == j['A'] and i['B'] == j['B']:
return True
elif type(payload1) is list:
for i in payload1:
if i['A'] == payload2['A'] and i['B'] == payload2['B']:
return True
else:
for i in payload2:
if i['A'] == payload1['A'] and i['B'] == payload1['B']:
return True
return False
def times_equal(intrvl1, intrvl2):
    return intrvl1.start == intrvl2.start and intrvl1.end == intrvl2.end
def merge_to_list(payload1, payload2):
p1 = payload1 if type(payload1) is list else [payload1]
p2 = payload2 if type(payload2) is list else [payload2]
return p1+p2
two_shots = shots_with_faces.join(shots_with_faces, predicate=after(max_dist=ONE_SECOND, min_dist=ONE_SECOND),
merge_op=cross_product_faces)
num_intervals = 0
for video_id in two_shots.intervals.keys():
intvllist = two_shots.get_intervallist(video_id)
s = intvllist.size()
print(s)
num_intervals += s
print(num_intervals)
conversations = two_shots.coalesce(predicate=payload_satisfies(faces_equal, arity=2))
num_intervals = 0
for video_id in conversations.intervals.keys():
intvllist = conversations.get_intervallist(video_id)
s = intvllist.size()
print(s)
num_intervals += s
print(num_intervals)
scene = three_shot.merge(three_shot, predicate=and_pred(after(max_dist=ONE_SECOND, min_dist=ONE_SECOND),
payload_satisfies(check_B_intersects, arity=2), arity=2)).coalesce()#, payload_merge_op=updateA))
esper_widget(intrvllists_to_result_with_objects(
conversations.get_allintervals(), lambda payload, video: []),
crop_bboxes=False,
disable_playback=False,
jupyter_keybindings=False)
conversations.get_allintervals().keys()
conversations.intervals.keys()
conversations
# Returns precision, recall, precision_per_item, recall_per_item
def compute_statistics(query_intrvllists, ground_truth_intrvllists):
total_query_time = 0
total_query_segments = 0
total_ground_truth_time = 0
total_ground_truth_segments = 0
for video in query_intrvllists:
total_query_time += query_intrvllists[video].coalesce().get_total_time()
total_query_segments += query_intrvllists[video].size()
for video in ground_truth_intrvllists:
total_ground_truth_time += ground_truth_intrvllists[video].coalesce().get_total_time()
total_ground_truth_segments += ground_truth_intrvllists[video].size()
total_overlap_time = 0
overlapping_query_segments = 0
overlapping_ground_truth_segments = 0
for video in query_intrvllists:
if video in ground_truth_intrvllists:
query_list = query_intrvllists[video]
gt_list = ground_truth_intrvllists[video]
total_overlap_time += query_list.overlaps(gt_list).coalesce().get_total_time()
overlapping_query_segments += query_list.filter_against(gt_list, predicate=overlaps()).size()
overlapping_ground_truth_segments += gt_list.filter_against(query_list, predicate=overlaps()).size()
if total_query_time == 0:
precision = 1.0
precision_per_item = 1.0
else:
precision = total_overlap_time / total_query_time
precision_per_item = overlapping_query_segments / total_query_segments
if total_ground_truth_time == 0:
recall = 1.0
recall_per_item = 1.0
else:
recall = total_overlap_time / total_ground_truth_time
recall_per_item = overlapping_ground_truth_segments / total_ground_truth_segments
return precision, recall, precision_per_item, recall_per_item
def print_statistics(query_intrvllists, ground_truth_intrvllists):
precision, recall, precision_per_item, recall_per_item = compute_statistics(
query_intrvllists, ground_truth_intrvllists)
print("Precision: ", precision)
print("Recall: ", recall)
print("Precision Per Item: ", precision_per_item)
print("Recall Per Item: ", recall_per_item)
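# The time-based precision/recall computed above can be illustrated with a
# self-contained sketch over plain (start, end) tuples instead of rekall
# IntervalLists (the intervals below are made up; each list is assumed to be
# coalesced, i.e. internally non-overlapping):
def overlap_time(a, b):
    """Total time where intervals in list a overlap intervals in list b."""
    return sum(max(0, min(e1, e2) - max(s1, s2)) for (s1, e1) in a for (s2, e2) in b)

def interval_precision_recall(query, ground_truth):
    """Time-based precision and recall for two lists of (start, end) tuples."""
    q_time = sum(e - s for s, e in query)
    gt_time = sum(e - s for s, e in ground_truth)
    ov = overlap_time(query, ground_truth)
    return (ov / q_time if q_time else 1.0, ov / gt_time if gt_time else 1.0)

interval_precision_recall([(0, 10), (20, 30)], [(5, 10), (20, 25), (40, 50)])  # (0.5, 0.5)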
godfather_query = conversationsq('the godfather part iii')
for k in godfather_query.intervals.keys():
print(k)
godfather_query = godfather_query.filter(lambda inv: inv.start < 43103)
godfather_query = {'216': godfather_query.get_intervallist(216)}
data = [(12481, 13454), (13673, 14729), (16888, 17299), (21101, 27196), (27602, 29032), (29033, 33204), (34071, 41293), (41512, 43103)]
godfather_gt = {'216': IntervalList([Interval(start, end, payload=None) for (start,end) in data])}
print_statistics(godfather_query, godfather_gt)
apollo_query = conversationsq('apollo 13')
apollo_query = apollo_query.filter(lambda inv: inv.start < 34618)
apollo_query = {'15': apollo_query.get_intervallist(15)}
data = [(2578, 4100), (4244, 4826), (5098, 5828), (7757, 9546), (9602, 10300), (12393, 12943), (13088, 13884), (14146, 15212), (15427, 16116), (18040, 19198), (20801, 23368), (24572, 26185), (26735, 28753), (29462, 30873), (31768, 34618)]
apollo_gt = {'15': IntervalList([Interval(start, end, payload=None) for (start,end) in data])}
print_statistics(apollo_query, apollo_gt)
invllist = caption_metadata_for_video(15)
hp2_query = conversationsq('harry potter and the chamber')
hp2_query = hp2_query.filter(lambda inv: inv.start < 20308)
hp2_query = {'374': hp2_query.get_intervallist(374)}
data = [(2155, 4338), (4687, 6188), (6440, 10134), (12921, 13151), (16795, 17370), (17766, 18021), (18102, 19495), (19622, 20308)]
hp2_gt = {'374': IntervalList([Interval(start, end, payload=None) for (start,end) in data])}
print_statistics(hp2_query, hp2_gt)
fc_query = conversationsq('fight club')
fc_query = fc_query.filter(lambda inv: inv.start < 58258)
fc_query = {'61': fc_query.get_intervallist(61)}
data = [(4698, 5602), (6493, 6865), (8670, 9156), (9517, 10908), (11087, 13538), (22039, 24188), (25603, 27656), (31844, 32812), (32918, 33451), (33698, 35363), (42072, 45143), (45272, 46685), (49162, 50618), (56830, 58258)]
fc_gt = {'61': IntervalList([Interval(start, end, payload=None) for (start,end) in data])}
print_statistics(fc_query, fc_gt)
for intvl in invllist.get_intervals():
if 'speaker' in intvl.payload:
print(intvl.payload)
# Apollo 13 labelling notes:
# 1:48 --> 2:50 V
# 2:57 --> 3:20 V
# 5:24 --> 7:44 V (5:24 - 6:24) (6:45 - 7:18) (shot broken up because of shots of the moon)
# 8:26 --> 9:02; 8:36 - 9:02
# 10:00 --> 10:33; 9:41 - 10:33 - long shot times w multiple people present
# 10:33 --> 11:11; 10:44 - 11:12 - skipped the daughter
# 12:40 --> 13:17; 12:33 - 13:21 (shot is a bit over-extended)
# [14:28 - 14:51] - reaction sequence; not dialogue
# 17:03 --> 18:02; 18:15; V - over-extended; catches him in the next scene
# 20:29 --> 21:27 [DID NOT CATCH]
# 22:04 --> 23:56 V
# 27:04 --> 27:30 and 27:34 --> 27:47 [caught and combined in unexpected ways]
# Godfather labelling notes:
data = [
(8757,9049),
(12750,13463),
(13683,14227),
(21357,22236),
(22294,22758),
(23147,25854),
(26007,26942),
(27620,28172),
(28382,28623),
(28785,29036),
(29904,31014),
(33936,35339),
(35421,36248),
(39388,40062),
(41675,42689),
(51246,52118),
(53117,54776),
(54895,55762),
(56819,59963),
(60253,61875),
(66533,67846),
(68729,69040),
(69421,70153),
(70285,71102)]
intrvllist = IntervalList([Interval(start, end, payload=None) for (start,end) in data])
shot_reverse_shot_labelled = {216: intrvllist}
esper_widget(intrvllists_to_result_with_objects(shot_reverse_shot_labelled, lambda payload, video: []), disable_captions=True)
```
| github_jupyter |
# The `Particle` Classes
The `Particle` class is the base class for all particles, whether they are introduced discretely one by one or as a distribution. In practice, `Particle` builds on two intermediate classes, `ParticleDistribution` and `ParticleInstances`, which instantiate particle distributions and individual particles, respectively.
The `ParticleDistribution` inherits from the `Vapor` class, with the following added attributes and methods:
## `ParticleDistribution` attributes
| attribute | unit | default value
| --------- | ---- | -------------
| `spacing` | | `"linspace"`
| `nbins` | | `1e3`
| `nparticles` | | `1e5`
| `volume` | m^3 | `1e-6`
| `cutoff` | | `0.9999`
| `gsigma` | | `1.25`
| `mode` | m | `1e-7`
| `disttype` | | `"lognormal"`
## `ParticleDistribution` methods
### `ParticleDistribution.pre_radius()`
The `pre_radius` method uses the attributes `cutoff`, `gsigma`, and `mode` to determine the starting and ending radii according to the utility `cutoff_radius.cut_rad`. Here, `cutoff` refers to the fraction cutoff of the distribution (e.g. `cutoff=0.9999` means taking only 99.99% of the lognormal distribution). Moreover, `gsigma` and `mode` are the lognormal parameters of the distribution, referring to the geometric standard deviation and mode (geometric mean) respectively. We use `scipy.stats.lognorm` to construct the distribution in the `pre_discretize` method below.
- [`particula.util.cutoff_radius`](./utilities/radius_cutoff.ipynb)
Then, the `spacing` is used to create a vector of radii with `nbins` entries. The `spacing` attribute is a `string` type and for now, it can only be `"linspace"` or `"logspace"`: using the `numpy.linspace` or `numpy.logspace` functions, respectively, to construct the vector.
Finally, the `pre_radius` method returns a radius vector with units of meter.
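As a rough standalone illustration (not particula's own code), the `spacing` choice maps directly onto the two numpy constructors:

```python
import numpy as np

def radius_vector(r_start, r_end, nbins, spacing="linspace"):
    """Radius vector with uniform steps in radius ("linspace") or in log10(radius) ("logspace")."""
    if spacing == "linspace":
        return np.linspace(r_start, r_end, nbins)
    if spacing == "logspace":
        return np.logspace(np.log10(r_start), np.log10(r_end), nbins)
    raise ValueError("spacing must be 'linspace' or 'logspace'")

lin = radius_vector(1e-9, 1e-6, 1000)
log = radius_vector(1e-9, 1e-6, 1000, spacing="logspace")
```

A logspace grid packs more bins into the small-radius end, which is often where a lognormal distribution has most of its structure.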
### `ParticleDistribution.pre_discretize()`
The `pre_discretize` method uses the result of the `pre_radius()` method above, `disttype`, `gsigma`, and `mode` attributes to produce a probability density function distribution based on the `scipy.stats.lognorm` (lognormal) function. This is done via the `distribution_discretization` utility.
- [`particula.util.distribution_discretization`](./utilities/distribution_discretization.ipynb)
### `ParticleDistribution.pre_distribution()`
The `pre_distribution` method simply constructs the distribution by multiplying `pre_discretize()` by `nparticles` and dividing by `volume`.
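A rough, self-contained sketch of the cutoff-then-discretize idea, using `scipy.stats.lognorm` directly rather than particula's utilities (the shape/scale parameter mapping below is an assumption for illustration):

```python
import numpy as np
from scipy.stats import lognorm

gsigma, mode, cutoff, nbins = 1.25, 1e-7, 0.9999, 1000

# Lognormal over radius: shape = ln(geometric standard deviation), scale sets the center.
dist = lognorm(s=np.log(gsigma), scale=mode)

# Keep the central `cutoff` fraction of the distribution (equal tails on both sides).
r_start, r_end = dist.interval(cutoff)

radii = np.linspace(r_start, r_end, nbins)  # the "linspace" spacing choice
pdf = dist.pdf(radii)                       # probability density over radius (1/m)

# Trapezoid integral of the discretized pdf should recover roughly the cutoff fraction.
area = float(np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(radii)))
print(area)  # ≈ 0.9999
```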
```
from particula.particle import ParticleDistribution
PDcutoff1 = ParticleDistribution()
PDcutoff2 = ParticleDistribution(
cutoff=.99,
)
print("Starting radius from 99.99% cutoff is ", PDcutoff1.pre_radius().min())
print("Starting radius from 99% cutoff is ", PDcutoff2.pre_radius().min())
from particula.particle import ParticleDistribution
PDspacing1 = ParticleDistribution()
PDspacing2 = ParticleDistribution(
spacing="logspace",
)
print("First few radii from linspace spacing are ", PDspacing1.pre_radius()[:6])
print("First few radii from logspace spacing are ", PDspacing2.pre_radius()[:6])
print("The units of the distribution (density units): ", PDspacing1.pre_distribution().u)
```
The `ParticleInstances` inherits from the `ParticleDistribution` class, with the following added attributes and methods:
## `ParticleInstances` attributes
| attribute | unit | default value
| --------- | ---- | -------------
| `particle_radius` | m | `None` or `ParticleDistribution.pre_radius()`
| `particle_number` | | `1` or `ParticleDistribution.nparticles`
| `particle_density`| kg / m^3 | `1e3`
| `shape_factor` | | `1`
| `volume_void` | | `0`
| `particle_charge` | | `0`
We note that the `particle_radius` attribute defaults to `ParticleDistribution.pre_radius()` if it is not given explicitly. Therefore, one could either provide one or more radii via `particle_radius` or provide the parameters for a distribution as described above. The `particle_number` attribute defaults to `ParticleDistribution.nparticles` if the `particle_radius` attribute is not given explicitly; otherwise, it is set to `1` by default, but the user could provide a different value, for example, `particle_radius=[1e-9, 2e-9]` coupled with `particle_number=[1, 2]` would mean one particle of radius 1e-9 and two particles of radius 2e-9.
In any event, the attributes here have higher precedence than the attributes in `ParticleDistribution`. Below, we reaffirm the `particle_distribution` as well.
## `ParticleInstances` methods
### `ParticleInstances.particle_distribution()`
The `particle_distribution` method either returns the `ParticleDistribution.pre_distribution()` if the `particle_radius` attribute is not given explicitly, or it constructs the distribution by dividing `particle_number` by `particle_radius` and by `volume`.
Note: the idea of distribution density necessitates normalizing by the variable (in our case, it is the radius) and here we divide by radius to get the unit of 1 / m^3 / m (i.e. number concentration density).
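A minimal numeric sketch of that normalization (illustrative values only):

```python
import numpy as np

particle_radius = np.array([1e-9, 2e-9])  # m
particle_number = np.array([1, 2])        # particle counts
volume = 1e-6                             # m^3

# Number concentration density: counts per volume, normalized by radius (1 / m^3 / m).
distribution = particle_number / particle_radius / volume
print(distribution)  # [1.e+15 1.e+15]
```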
### `ParticleInstances.particle_mass()`
The `particle_mass` method returns the mass of the particle by using the `particle_radius`, `particle_density`, `shape_factor`, and `volume_void` attributes.
- [`particula.util.particle_mass`](./utilities/particle_mass.ipynb)
### `ParticleInstances.knudsen_number()`
The `knudsen_number` method returns the Knudsen number of the particle by using the `particle_radius` attribute as well as the `mean_free_path()` method.
- [`particula.util.knudsen_number`](./utilities/knudsen_number.ipynb)
### `ParticleInstances.slip_correction_factor()`
The `slip_correction_factor` method returns the slip correction factor of the particle by using the `particle_radius` attribute as well as the `knudsen_number()` method.
- [`particula.util.slip_correction`](./utilities/slip_correction.ipynb)
### `ParticleInstances.friction_factor()`
The `friction_factor` method returns the friction factor of the particle by using the `particle_radius` attribute as well as the `dynamic_viscosity()` and `slip_correction_factor()` methods.
- [`particula.util.friction_factor`](./utilities/friction_factor.ipynb)
```
from particula.particle import ParticleInstances
PD_dist = ParticleInstances()
PD_disc = ParticleInstances(
particle_radius=[1e-9, 2e-9],
particle_number=[1, 2]
)
print("The units of the 'continuous' distribution (density units): ", PD_dist.particle_distribution().u)
print("The units of the 'discrete' distribution (density units): ", PD_disc.particle_distribution().u)
print("First few points of continuous: ", PD_dist.particle_distribution()[:2])
print("First few points of discrete: ", PD_disc.particle_distribution()[:2])
print("Particle mass: ", PD_disc.particle_mass())
print("Friction factor: ", PD_disc.friction_factor())
```
The `Particle` inherits from the `ParticleInstances` class, with the following added attributes and methods:
## The `Particle` attributes
| attribute | unit | default value
| --------- | ---- | -------------
| `elementary_charge_value` | C | `constants.ELEMENTARY_CHARGE_VALUE`
| `electric_permittivity` | F / m | `constants.ELECTRIC_PERMITTIVITY`
| `boltzmann_constant` | kg * m^2 / K / s^2 | `constants.BOLTZMANN_CONSTANT`
| `coagulation_approximation` | | `"hardsphere"`
## The `Particle` methods
The following methods are available for the `Particle` class:
- `Particle._coag_prep()`
- `Particle.reduced_mass()`
- `Particle.reduced_friction_factor()`
- `Particle.coulomb_potential_ratio()`
- `Particle.coulomb_enhancement_kinetic_limit()`
- `Particle.coulomb_enhancement_continuum_limit()`
- `Particle.diffusive_knudsen_number()`
- `Particle.dimensionless_coagulation()`
- `Particle.coagulation()`
They all rely on one underlying utility: the class `DimensionlessCoagulation` from `dimensionless_coagulation`, which is documented in the [`particula.util.dimensionless_coagulation`](./utilities/dimensionless_coagulation.ipynb) notebook.
All these added methods rely on previously defined attributes and methods to construct particle--particle interaction (e.g. coagulation). Thus, each method takes an input argument `particle` which is an instance of the `Particle` class. If not explicitly provided, the `particle` argument defaults to `self` (itself).
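As a concrete anchor for these pairwise quantities, here is a hedged sketch of the classical reduced-mass formula that `reduced_mass()` is named after (not particula's implementation):

```python
def reduced_mass(m1, m2):
    """Reduced mass of a two-body system: mu = m1 * m2 / (m1 + m2)."""
    return m1 * m2 / (m1 + m2)

print(reduced_mass(2.0, 2.0))  # 1.0  (equal masses reduce to half of either)
print(reduced_mass(1.0, 1e6))  # ~1.0 (a much heavier partner barely matters)
```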
```
from particula.particle import Particle
two_particles = Particle(
particle_radius=[1e-9, 2e-9],
particle_number=[1, 1]
)
another_particle = Particle(particle_radius=1e-8)
two_particles.coagulation()
another_particle.coagulation()
another_particle.coagulation(another_particle)
two_particles.coagulation(another_particle)
```
| github_jupyter |
# Building a blueqat backend (simple edition)
This time we will look at how to build a blueqat backend on top of QASM. We will implement two backends, using IBM's Qiskit and Google's Cirq as the underlying engines.
First, the installation:
```
pip install blueqat qiskit cirq
```
## Qiskit first
We start with Qiskit. Load the tools, set up the arguments, call the backend, and configure the values returned when it runs — that's all there is to it.
```
import warnings
from collections import Counter
from blueqat.backends.qasm_parser_backend_generator import generate_backend
def _qasm_runner_qiskit(qasm, qiskit_backend=None, shots=None, returns=None, **kwargs):
if returns is None:
returns = "shots"
elif returns not in ("shots", "draw", "_exception",
"qiskit_circuit", "qiskit_job", "qiskit_result"):
raise ValueError("`returns` shall be None, 'shots', 'draw', " +
"'qiskit_circuit', 'qiskit_job', 'qiskit_result' or '_exception'")
import_error = None
try:
with warnings.catch_warnings():
warnings.simplefilter("ignore")
from qiskit import Aer, QuantumCircuit, execute
except Exception as e:
import_error = e
if import_error:
if returns == "_exception":
return e
if isinstance(import_error, ImportError):
raise ImportError("Cannot import qiskit. To use this backend, please install qiskit." +
" `pip install qiskit`.")
else:
raise ValueError("Unknown error raised when importing qiskit. To get exception, " +
'run this backend with arg `returns="_exception"`')
else:
if returns == "_exception":
return None
qk_circuit = QuantumCircuit.from_qasm_str(qasm)
if returns == "qiskit_circuit":
return qk_circuit
if returns == "draw":
return qk_circuit.draw(**kwargs)
if shots is None:
shots = 1024
if qiskit_backend is None:
qiskit_backend = Aer.get_backend("qasm_simulator")
job = execute(qk_circuit, backend=qiskit_backend, shots=shots, **kwargs)
if returns == "qiskit_job":
return job
result = job.result()
if returns == "qiskit_result":
return result
counts = Counter({bits[::-1]: val for bits, val in result.get_counts().items()})
return counts
ibmq_backend = generate_backend(_qasm_runner_qiskit)
```
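One detail worth flagging: Qiskit reports bitstrings with qubit 0 as the rightmost character, while blueqat's shot counter puts qubit 0 first, hence the `bits[::-1]` reversal when building the `Counter`:

```python
from collections import Counter

# Qiskit-style counts: qubit 0 is the rightmost character of each key.
qiskit_counts = {'01': 600, '11': 424}

# Reverse each key so qubit 0 comes first, matching blueqat's convention.
blueqat_counts = Counter({bits[::-1]: n for bits, n in qiskit_counts.items()})
print(blueqat_counts)  # Counter({'10': 600, '11': 424})
```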
Finally, register the backend and we are done.
```
from blueqat import BlueqatGlobalSetting
BlueqatGlobalSetting.register_backend("myibmq", ibmq_backend)
```
Now it is ready to use. By default it returns the sampled measurement results, so let's run a superposition circuit. Just run blueqat as usual and specify the backend.
```
from blueqat import Circuit
Circuit().h[0].m[:].run(backend="myibmq")
```
It worked — simple, right? Next, let's try the circuit-drawing feature.
```python
if returns == "draw":
return qk_circuit.draw(**kwargs)
```
This part implements the circuit drawing.
```
Circuit().h[0].m[:].run(backend="myibmq",returns="draw")
```
It worked.
## Implementing Cirq
This part is reproduced, with permission, from the following article:
SJSY's article on building a Cirq backend
https://sjsy.hatenablog.com/entry/20191212/1576158640
```
def _qasm_runner_cirq(qasm, shots=None, returns=None, **kwargs):
if returns is None:
returns = "cirq_result"
elif returns not in ("cirq_result", "draw", "_exception"):
raise ValueError("`returns` shall be None, 'draw', 'cirq_result' or '_exception'")
import_error = None
try:
with warnings.catch_warnings():
from cirq import Circuit, Simulator
from cirq.contrib.qasm_import import circuit_from_qasm
import cirq
except Exception as e:
import_error = e
if import_error:
if returns == "_exception":
return e
if isinstance(import_error, ImportError):
raise ImportError("Cannot import Cirq. To use this backend, please install cirq." +
" `pip install cirq`.")
else:
raise ValueError("Unknown error raised when importing Cirq. To get exception, " +
'run this backend with arg `returns="_exception"`')
else:
if returns == "_exception":
return None
cirq_circuit = circuit_from_qasm(qasm)
if returns == "draw":
return cirq_circuit
if shots is None:
shots = 1024
simulator = cirq.Simulator()
result = simulator.run(cirq_circuit, repetitions=shots, **kwargs)
return result
cirq_backend = generate_backend(_qasm_runner_cirq)
BlueqatGlobalSetting.register_backend("mycirq", cirq_backend)
```
It is basically the same as the Qiskit case. Let's try drawing the circuit.
```
Circuit().h[0].m[:].run(backend="mycirq",returns="draw")
```
Registration succeeded. The basic steps are the same as in the code above, so other backends can be integrated in the same way.
| github_jupyter |
```
import numpy as np
import pandas as pd
import os
import seaborn as sns
import matplotlib.pyplot as plt
import missingno as msno
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from scipy import stats
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import SMOTE
from collections import Counter
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score, f1_score
from xgboost import XGBClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
import warnings
warnings.filterwarnings("ignore")
rain = pd.read_csv("weatherAUS.csv")
rain
print(f'The number of rows are {rain.shape[0] } and the number of columns are {rain.shape[1]}')
rain.info()
categorical_col, contin_val=[],[]
for i in rain.columns:
if rain[i].dtype == 'object':
categorical_col.append(i)
else:
contin_val.append(i)
print(categorical_col)
print(contin_val)
rain.isnull().sum()
msno.bar(rain, sort='ascending')
rain['RainTomorrow'] = rain['RainTomorrow'].map({'Yes': 1, 'No': 0})
rain['RainToday'] = rain['RainToday'].map({'Yes': 1, 'No': 0})
print(rain.RainToday)
print(rain.RainTomorrow)
(rain.isnull().sum()/len(rain))*100
#Filling the missing values for continuous variables with mean
rain['MinTemp']=rain['MinTemp'].fillna(rain['MinTemp'].mean())
rain['MaxTemp']=rain['MaxTemp'].fillna(rain['MaxTemp'].mean())
rain['Rainfall']=rain['Rainfall'].fillna(rain['Rainfall'].mean())
rain['Evaporation']=rain['Evaporation'].fillna(rain['Evaporation'].mean())
rain['Sunshine']=rain['Sunshine'].fillna(rain['Sunshine'].mean())
rain['WindGustSpeed']=rain['WindGustSpeed'].fillna(rain['WindGustSpeed'].mean())
rain['WindSpeed9am']=rain['WindSpeed9am'].fillna(rain['WindSpeed9am'].mean())
rain['WindSpeed3pm']=rain['WindSpeed3pm'].fillna(rain['WindSpeed3pm'].mean())
rain['Humidity9am']=rain['Humidity9am'].fillna(rain['Humidity9am'].mean())
rain['Humidity3pm']=rain['Humidity3pm'].fillna(rain['Humidity3pm'].mean())
rain['Pressure9am']=rain['Pressure9am'].fillna(rain['Pressure9am'].mean())
rain['Pressure3pm']=rain['Pressure3pm'].fillna(rain['Pressure3pm'].mean())
rain['Cloud9am']=rain['Cloud9am'].fillna(rain['Cloud9am'].mean())
rain['Cloud3pm']=rain['Cloud3pm'].fillna(rain['Cloud3pm'].mean())
rain['Temp9am']=rain['Temp9am'].fillna(rain['Temp9am'].mean())
rain['Temp3pm']=rain['Temp3pm'].fillna(rain['Temp3pm'].mean())
#Filling the missing values for binary variables with mode
rain['RainToday']=rain['RainToday'].fillna(rain['RainToday'].mode()[0])
rain['RainTomorrow']=rain['RainTomorrow'].fillna(rain['RainTomorrow'].mode()[0])
#Filling the missing values for categorical variables with mode
rain['WindDir9am'] = rain['WindDir9am'].fillna(rain['WindDir9am'].mode()[0])
rain['WindGustDir'] = rain['WindGustDir'].fillna(rain['WindGustDir'].mode()[0])
rain['WindDir3pm'] = rain['WindDir3pm'].fillna(rain['WindDir3pm'].mode()[0])
#Checking percentage of missing data in every column
(rain.isnull().sum()/len(rain))*100
#Dropping date column
rain=rain.iloc[:,1:]
rain
le = preprocessing.LabelEncoder()
rain['Location'] = le.fit_transform(rain['Location'])
rain['WindDir9am'] = le.fit_transform(rain['WindDir9am'])
rain['WindDir3pm'] = le.fit_transform(rain['WindDir3pm'])
rain['WindGustDir'] = le.fit_transform(rain['WindGustDir'])
rain.head(5)
plt.figure(figsize=(15,15))
ax = sns.heatmap(rain.corr(), square=True, annot=True, fmt='.2f')
ax.set_xticklabels(ax.get_xticklabels(), rotation=90)
plt.show()
```
* MinTemp and Temp9am highly correlated.
* MinTemp and Temp3pm highly correlated.
* MaxTemp and Temp9am highly correlated.
* MaxTemp and Temp3pm highly correlated.
* Temp3pm and Temp9am highly correlated.
* Humidity9am and Humidity3pm highly correlated.
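The pairs listed above can also be extracted programmatically; here is a sketch that scans the absolute correlation matrix for pairs above a threshold (synthetic data and a hypothetical 0.8 threshold, for illustration only):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
base = rng.normal(size=500)
df = pd.DataFrame({
    'MinTemp': base,
    'Temp9am': base + rng.normal(scale=0.1, size=500),  # nearly a copy of MinTemp
    'Rainfall': rng.normal(size=500),                   # independent
})

corr = df.corr().abs()
pairs = [(a, b) for i, a in enumerate(corr.columns)
         for b in corr.columns[i + 1:] if corr.loc[a, b] > 0.8]
print(pairs)  # [('MinTemp', 'Temp9am')]
```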
```
fig, ax =plt.subplots(2,1)
plt.figure(figsize=(10,10))
sns.boxplot(rain['Humidity3pm'],orient='v',color='c',ax=ax[0])
sns.boxplot(rain['Humidity9am'],orient='v',color='c',ax=ax[1])
fig.tight_layout()
fig, ax =plt.subplots(2,1)
plt.figure(figsize=(10,10))
sns.boxplot(rain['Pressure3pm'],orient='v',color='c',ax=ax[0])
sns.boxplot(rain['Pressure9am'],orient='v',color='c',ax=ax[1])
fig.tight_layout()
sns.violinplot(x='RainToday',y='MaxTemp',data=rain,hue='RainTomorrow')
sns.violinplot(x='RainToday',y='MinTemp',data=rain,hue='RainTomorrow')
print('Shape of DataFrame Before Removing Outliers', rain.shape )
rain=rain[(np.abs(stats.zscore(rain)) < 3).all(axis=1)]
print('Shape of DataFrame After Removing Outliers', rain.shape )
rain=rain.drop(['Temp3pm','Temp9am','Humidity9am'],axis=1)
rain.columns
smote = SMOTE()  # renamed from `os` to avoid shadowing the os module imported above
x, y = smote.fit_resample(rain.iloc[:,:-1], rain.iloc[:,-1])
count = Counter(y)
print(count)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
```
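Under the hood, SMOTE balances the classes by interpolating between a minority-class sample and one of its minority-class neighbours. A stripped-down sketch of that interpolation step (not imblearn's implementation — real SMOTE samples among the k nearest neighbours):

```python
import numpy as np

def smote_like_sample(minority_X, rng):
    """One synthetic minority sample: interpolate towards the nearest minority neighbour."""
    i = rng.integers(len(minority_X))
    x = minority_X[i]
    dists = np.linalg.norm(minority_X - x, axis=1)  # brute-force nearest neighbour
    dists[i] = np.inf
    neighbour = minority_X[np.argmin(dists)]
    gap = rng.random()  # random point on the segment between x and its neighbour
    return x + gap * (neighbour - x)

minority = np.array([[0.0, 0.0], [1.0, 1.0], [10.0, 10.0]])
new = smote_like_sample(minority, np.random.default_rng(0))
```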
## **Training The Models**
**1. Logistic Regression**
```
LR_model = LogisticRegression(max_iter=500)
LR_model.fit(x_train, y_train)
predicted=LR_model.predict(x_test)
conf = confusion_matrix(y_test, predicted)
print ("The accuracy of Logistic Regression is : ", accuracy_score(y_test, predicted)*100, "%")
print()
print("F1 score for logistic regression is :",f1_score(y_test, predicted,)*100, "%")
```
**2. XGBoost**
```
xgbc = XGBClassifier(objective='binary:logistic')
xgbc.fit(x_train,y_train)
predicted = xgbc.predict(x_test)
print ("The accuracy of XGBoost is : ", accuracy_score(y_test, predicted)*100, "%")
print()
print("F1 score for XGBoost is :",f1_score(y_test, predicted,)*100, "%")
```
**3. Gaussian Naive Bayes**
```
GN_model = GaussianNB()
GN_model.fit(x_train, y_train)
predicted = GN_model.predict(x_test)
print("The accuracy of Gaussian Naive Bayes model is : ", accuracy_score(y_test, predicted)*100, "%")
print()
print("F1 score for Gaussian Naive Bayes is :",f1_score(y_test, predicted,)*100, "%")
```
**4. Bernoulli Naive Bayes**
```
BN_model = BernoulliNB()
BN_model.fit(x_train, y_train)
predicted = BN_model.predict(x_test)
print("The accuracy of Bernoulli Naive Bayes model is : ", accuracy_score(y_test, predicted)*100, "%")
print()
print("F1 score for Bernoulli Naive Bayes is :",f1_score(y_test, predicted,)*100, "%")
```
**5. Random Forest**
```
# A regressor is used here; its continuous predictions are rounded to 0/1 for the accuracy score below.
RF_model = RandomForestRegressor(n_estimators = 100, random_state = 0)
RF_model.fit(x_train, y_train)
predicted = RF_model.predict(x_test)
print("The accuracy of Random Forest is : ", accuracy_score(y_test, predicted.round())*100, "%")
table = pd.DataFrame({"y_test": y_test, "predicted": predicted})
table
import pickle
pickle.dump(RF_model, open("RandomforestModelFinal2.pkl", 'wb'))
```
| github_jupyter |
```
import gc
import os
import cv2
import sys
import json
import time
import timm
import torch
import random
import sklearn.metrics
from PIL import Image
from pathlib import Path
from functools import partial
from contextlib import contextmanager
import numpy as np
import scipy as sp
import pandas as pd
import torch.nn as nn
from torch.optim import Adam, SGD, AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR
from torch.utils.data import DataLoader, Dataset
from albumentations import Compose, Normalize, Resize
from albumentations.pytorch import ToTensorV2
os.environ["CUDA_VISIBLE_DEVICES"]="1"
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
!nvidia-smi
train_metadata = pd.read_csv("../../../resources/DF20-Mini/DanishFungi2020-Mini_train_metadata_DEV.csv")
print(len(train_metadata))
test_metadata = pd.read_csv("../../../resources/DF20-Mini/DanishFungi2020-Mini_test_metadata_DEV.csv")
print(len(test_metadata))
train_metadata['image_path'] = train_metadata.apply(lambda x: '/local/nahouby/Datasets/DF20/' + x['image_path'].split('/SvampeAtlas-14.12.2020/')[-1], axis=1)
test_metadata['image_path'] = test_metadata.apply(lambda x: '/local/nahouby/Datasets/DF20/' + x['image_path'].split('/SvampeAtlas-14.12.2020/')[-1], axis=1)
train_metadata['image_path'] = train_metadata.apply(lambda x: x['image_path'].split('.')[0] + '.JPG', axis=1)
test_metadata['image_path'] = test_metadata.apply(lambda x: x['image_path'].split('.')[0] + '.JPG', axis=1)
train_metadata.head()
@contextmanager
def timer(name):
t0 = time.time()
LOGGER.info(f'[{name}] start')
yield
LOGGER.info(f'[{name}] done in {time.time() - t0:.0f} s.')
def init_logger(log_file='train.log'):
from logging import getLogger, DEBUG, FileHandler, Formatter, StreamHandler
log_format = '%(asctime)s %(levelname)s %(message)s'
stream_handler = StreamHandler()
stream_handler.setLevel(DEBUG)
stream_handler.setFormatter(Formatter(log_format))
file_handler = FileHandler(log_file)
file_handler.setFormatter(Formatter(log_format))
logger = getLogger('Herbarium')
logger.setLevel(DEBUG)
logger.addHandler(stream_handler)
logger.addHandler(file_handler)
return logger
LOG_FILE = '../../logs/DF20M/EfficientNet-B1.log'
LOGGER = init_logger(LOG_FILE)
def seed_torch(seed=777):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
SEED = 777
seed_torch(SEED)
class TrainDataset(Dataset):
def __init__(self, df, transform=None):
self.df = df
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
file_path = self.df['image_path'].values[idx]
label = self.df['class_id'].values[idx]
image = cv2.imread(file_path)
try:
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
except:
print(file_path)
if self.transform:
augmented = self.transform(image=image)
image = augmented['image']
return image, label
WIDTH, HEIGHT = 299, 299
from albumentations import RandomCrop, HorizontalFlip, VerticalFlip, RandomBrightnessContrast, CenterCrop, PadIfNeeded, RandomResizedCrop
def get_transforms(*, data):
assert data in ('train', 'valid')
if data == 'train':
return Compose([
RandomResizedCrop(WIDTH, HEIGHT, scale=(0.8, 1.0)),
HorizontalFlip(p=0.5),
VerticalFlip(p=0.5),
RandomBrightnessContrast(p=0.2),
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
ToTensorV2(),
])
elif data == 'valid':
return Compose([
Resize(WIDTH, HEIGHT),
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
ToTensorV2(),
])
N_CLASSES = len(train_metadata['class_id'].unique())
train_dataset = TrainDataset(train_metadata, transform=get_transforms(data='train'))
valid_dataset = TrainDataset(test_metadata, transform=get_transforms(data='valid'))
# Adjust BATCH_SIZE and ACCUMULATION_STEPS so that their product is 64.
BATCH_SIZE = 32
ACCUMULATION_STEPS = 2
EPOCHS = 100
WORKERS = 8
train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=WORKERS)
valid_loader = DataLoader(valid_dataset, batch_size=BATCH_SIZE, shuffle=False, num_workers=WORKERS)
from efficientnet_pytorch import EfficientNet
model = EfficientNet.from_pretrained('efficientnet-b1')
model._fc = nn.Linear(model._fc.in_features, N_CLASSES)
from torch.optim.lr_scheduler import ReduceLROnPlateau
from sklearn.metrics import f1_score, accuracy_score, top_k_accuracy_score
import tqdm
with timer('Train model'):
accumulation_steps = ACCUMULATION_STEPS
n_epochs = EPOCHS
lr = 0.01
model.to(device)
optimizer = SGD(model.parameters(), lr=lr, momentum=0.9)
scheduler = ReduceLROnPlateau(optimizer, 'min', factor=0.9, patience=1, verbose=True, eps=1e-6)
criterion = nn.CrossEntropyLoss()
best_score = 0.
best_loss = np.inf
for epoch in range(n_epochs):
start_time = time.time()
model.train()
avg_loss = 0.
optimizer.zero_grad()
for i, (images, labels) in tqdm.tqdm(enumerate(train_loader)):
images = images.to(device)
labels = labels.to(device)
y_preds = model(images)
loss = criterion(y_preds, labels)
# Scale the loss to the mean of the accumulated batch size
loss = loss / accumulation_steps
loss.backward()
if (i + 1) % accumulation_steps == 0:  # step once every `accumulation_steps` mini-batches
optimizer.step()
optimizer.zero_grad()
avg_loss += loss.item() / len(train_loader)
model.eval()
avg_val_loss = 0.
preds = np.zeros((len(valid_dataset)))
preds_raw = []
for i, (images, labels) in enumerate(valid_loader):
images = images.to(device)
labels = labels.to(device)
with torch.no_grad():
y_preds = model(images)
preds[i * BATCH_SIZE: (i+1) * BATCH_SIZE] = y_preds.argmax(1).to('cpu').numpy()
preds_raw.extend(y_preds.to('cpu').numpy())
loss = criterion(y_preds, labels)
avg_val_loss += loss.item() / len(valid_loader)
scheduler.step(avg_val_loss)
score = f1_score(test_metadata['class_id'], preds, average='macro')
accuracy = accuracy_score(test_metadata['class_id'], preds)
recall_3 = top_k_accuracy_score(test_metadata['class_id'], preds_raw, k=3)
elapsed = time.time() - start_time
LOGGER.debug(f' Epoch {epoch+1} - avg_train_loss: {avg_loss:.4f} avg_val_loss: {avg_val_loss:.4f} F1: {score:.6f} Accuracy: {accuracy:.6f} Recall@3: {recall_3:.6f} time: {elapsed:.0f}s')
if accuracy>best_score:
best_score = accuracy
LOGGER.debug(f' Epoch {epoch+1} - Save Best Accuracy: {best_score:.6f} Model')
torch.save(model.state_dict(), f'../../checkpoints/DF20M-EfficientNet-B1_best_accuracy.pth')
if avg_val_loss<best_loss:
best_loss = avg_val_loss
LOGGER.debug(f' Epoch {epoch+1} - Save Best Loss: {best_loss:.4f} Model')
torch.save(model.state_dict(), f'../../checkpoints/DF20M-EfficientNet-B1_best_loss.pth')
torch.save(model.state_dict(), f'../../checkpoints/DF20M-EfficientNet-B1-100E.pth')
```
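The accumulation logic above (scale the loss by `accumulation_steps`, step every `accumulation_steps` mini-batches) is equivalent to training on a batch `accumulation_steps` times larger. A framework-free sketch of that equivalence, using the gradient of a mean-squared-error loss:

```python
import numpy as np

X = np.random.default_rng(0).normal(size=(8, 3))
y = np.random.default_rng(1).normal(size=8)
w = np.zeros(3)

def grad(Xb, yb, w):
    """Gradient of the mean squared error 0.5 * mean((Xb @ w - yb)^2) wrt w."""
    return Xb.T @ (Xb @ w - yb) / len(yb)

# Full-batch gradient over all 8 samples.
full = grad(X, y, w)

# Accumulate over 2 mini-batches of 4, scaling each by 1/accumulation_steps.
accumulation_steps = 2
acc = np.zeros(3)
for i in range(accumulation_steps):
    Xb, yb = X[4*i:4*(i+1)], y[4*i:4*(i+1)]
    acc += grad(Xb, yb, w) / accumulation_steps

print(np.allclose(full, acc))  # True
```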
| github_jupyter |
```
from random import randint, seed
import numpy as np
def random_sum_pairs(n_examples, n_numbers, largest):
X, y = [], []
for i in range(n_examples):
in_pattern = [randint(1, largest) for _ in range(n_numbers)]
out_pattern = sum(in_pattern)
X.append(in_pattern)
y.append(out_pattern)
return X, y
seed(42)
n_samples=2
n_numbers = 3
largest = 10
X, y = random_sum_pairs(n_samples, n_numbers, largest)
print(X, y)
def to_string(X, y, n_numbers, largest):
X_max_len = int(n_numbers * np.ceil(np.log10(largest+1)) + n_numbers - 1)
X_str = []
for pattern in X:
strp = '+'.join([str(n) for n in pattern])
strp = ''.join([' ' for _ in range(X_max_len - len(strp))]) + strp
X_str.append(strp)
y_max_len = int(np.ceil(np.log10(n_numbers * (largest + 1))))
y_str = []
for pattern in y:
strp = str(pattern)
strp = ''.join([' ' for _ in range(y_max_len - len(strp))]) + strp
y_str.append(strp)
return X_str, y_str
Xstr, ystr = to_string(X, y, n_numbers, largest)
ystr
def integer_encode(X, y, alphabet):
"""
Encode string representation of integer as indices in some alphabet
"""
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
Xenc = []
for pattern in X:
int_enc = [char_to_int[c] for c in pattern]
Xenc.append(int_enc)
yenc = []
for pattern in y:
int_enc = [char_to_int[c] for c in pattern]
yenc.append(int_enc)
return Xenc, yenc
alphabet = tuple("0123456789+ ")
n_chars = len(alphabet)
X, y = integer_encode(Xstr, ystr, alphabet)
def one_hot_encode(X, y, max_int):
    Xenc = list()
    for seq in X:
        pattern = []
        for i in seq:
            vector = [0 for _ in range(max_int)]
            vector[i] = 1
            pattern.append(vector)
        Xenc.append(pattern)
    yenc = list()
    for seq in y:
        pattern = []
        for i in seq:
            vec = [0 for _ in range(max_int)]
            vec[i] = 1
            pattern.append(vec)
        yenc.append(pattern)
    return Xenc, yenc
X, y = one_hot_encode(X, y, len(alphabet))
def invert(seq, alphabet):
    int_to_char = dict((i, c) for i, c in enumerate(alphabet))
    strings = []
    for pattern in seq:
        s = []
        for sym in pattern:
            s.append(int_to_char[np.argmax(sym)])
        strings.append(''.join(s))
    return strings
invert(X, alphabet)
def prep_data(n_samples, n_numbers, largest, alphabet):
    X, y = random_sum_pairs(n_samples, n_numbers, largest)
    X, y = to_string(X, y, n_numbers, largest)
    X, y = integer_encode(X, y, alphabet)
    X, y = one_hot_encode(X, y, len(alphabet))
    return np.array(X), np.array(y)

def gen_data(n_numbers, largest, alphabet, batch_size=32):
    while True:
        X, y = random_sum_pairs(batch_size, n_numbers, largest)
        X, y = to_string(X, y, n_numbers, largest)
        X, y = integer_encode(X, y, alphabet)
        X, y = one_hot_encode(X, y, len(alphabet))
        yield np.array(X), np.array(y)
n_in_seq_length = int(n_numbers * np.ceil(np.log10(largest+1)) + n_numbers - 1)
n_out_seq_length = int(np.ceil(np.log10(n_numbers * (largest+1))))
from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed, RepeatVector, Dense
n_batches = 10
n_epoch = 7
model = Sequential()
model.add(LSTM(10, input_shape=(n_in_seq_length, n_chars)))
model.add(RepeatVector(n_out_seq_length))
model.add(LSTM(5, return_sequences=True))
model.add(TimeDistributed(Dense(n_chars, activation='softmax')))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
hist = model.fit(gen_data(n_numbers, largest, alphabet), epochs=10, steps_per_epoch=1000)  # fit_generator is deprecated; Model.fit accepts generators
n_samples = 32
X, y = prep_data(n_samples, n_numbers, largest, alphabet)
res = model.predict(X)
expected = invert(y, alphabet)
predicted = invert(res, alphabet)
for i in range(n_samples):
print(f"Expected={expected[i]}, Predicted={predicted[i]}")
```
+ In short, one small issue remains: training with Y gives better results than with Y_norm. Of the two reference answers, one appears to compute the mean incorrectly, and the other runs into the same problem I do.
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import scipy.io as scio
```
# 1 Anomaly detection
```
fpath = 'data/ex8data1.mat'
data = scio.loadmat(fpath)
data.keys()
X,Xval,yval = data['X'],data['Xval'],data['yval']
X.shape,Xval.shape,yval.shape
def draw_Figure1(data):
    plt.scatter(data[:,0],data[:,1],marker='x',c='blue',linewidths=0.5,s=10)
    scale = np.linspace(0,30,7)
    plt.xticks(scale)
    plt.yticks(scale)
    plt.xlabel('Latency (ms)')
    plt.ylabel('Throughput (mb/s)')
    plt.title('Figure 1: The first dataset.',y=-0.3)
    plt.show()
draw_Figure1(X)
```
## 1.1 Gaussian distribution
Before ex8 I went through some LaTeX, so typing out all the formulas that appear in ex8 makes for good practice.
For a single feature, the Gaussian distribution is:
$$
p(x;\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}
$$
where $\mu$ is the mean and $\sigma^2$ controls the variance.
To decide whether a point is anomalous, evaluate the Gaussian density of each of its features, take the product over all features, and compare the result against a threshold: greater than or equal to the threshold means normal, smaller means anomalous:
$$
p(x) = \prod_{j=1}^{n}p(x_j;\mu_j,\sigma_j^2)
$$
```
mu,var = X.mean(axis=0),X.var(axis=0)
mu,var
def Gaussian(x,mu,var):
    return 1/(np.sqrt(2*(np.pi)*var))*np.exp(-((x-mu)**2)/(2*var))
Gaussian(X[0],mu,var)
```
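As a quick sanity check (not part of the exercise), the hand-rolled density above can be compared against `scipy.stats.norm.pdf`; the values below are made up for illustration:

```python
import numpy as np
from scipy import stats

def gaussian(x, mu, var):
    # same univariate density as the Gaussian() above
    return 1/np.sqrt(2*np.pi*var) * np.exp(-((x-mu)**2)/(2*var))

x = np.array([1.5, 2.0])
mu = np.array([1.0, 2.5])
var = np.array([0.5, 2.0])

ours = gaussian(x, mu, var)
ref = stats.norm.pdf(x, loc=mu, scale=np.sqrt(var))  # scale is the std dev
print(np.allclose(ours, ref))  # → True
```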
## 1.2 Estimating parameters for a Gaussian
$\mu$: $$\mu_i = \frac{1}{m}\sum_{j=1}^{m}x_i^{(j)}$$
$\sigma^2$: $$ \sigma^2 = \frac{1}{m}\sum_{j=1}^{m}(x_i^{(j)}-\mu_i)^2 $$
Actually, just as above, `X.mean(axis=0), X.var(axis=0)` already gives the answer, but since the exercise asks for an implementation, let's implement it.
```
def estimateGaussian(data):
    m = data.shape[0]
    ones = np.ones(shape=(1,m),dtype=int)
    mu = ones.dot(data)/m  # the shape is (1, n_features)
    sigma2 = ones.dot((data - mu) **2)/m
    return mu,sigma2
mu,sigma2 = estimateGaussian(X)
mu,sigma2
# Plot the 2D Gaussian curve
# def draw_2DGaussFigure(mu,sigma2):
#     x = np.linspace(-5,5)
#     x = x.reshape(-1,1)
#     gauss = Gaussian(x,mu,sigma2)
#     plt.plot(x,gauss)
#     plt.show()
# draw_2DGaussFigure(0,1)
# Plot the contour lines of the 3D Gaussian
# def draw_contourfGaussFigure(mu,sigma2):
#     x = np.linspace(0,30)
#     X,Y = np.meshgrid(x,x)
#     def Gaussian3D(x,y,mu,sigma2):
#         return Gaussian(x,mu[0,0],sigma2[0,0])*Gaussian(y,mu[0,1],sigma2[0,1])
#     plt.contour(X,Y,Gaussian3D(X,Y,mu,sigma2))
#     plt.show()
# draw_contourfGaussFigure(mu,sigma2)
def draw_Figure2(data,mu,sigma2):
    plt.scatter(data[:,0],data[:,1],marker='x',c='blue',linewidths=0.5,s=10)
    x = np.linspace(0,30)
    X,Y = np.meshgrid(x,x)
    def Gaussian3D(x,y,mu,sigma2):
        return Gaussian(x,mu[0,0],sigma2[0,0])*Gaussian(y,mu[0,1],sigma2[0,1])
    plt.contour(X,Y,Gaussian3D(X,Y,mu,sigma2),[10 ** x for x in range(-20,0,3)])
    scale = np.linspace(0,30,7)
    plt.xticks(scale)
    plt.yticks(scale)
    plt.xlabel('Latency (ms)')
    plt.ylabel('Throughput (mb/s)')
    plt.title('Figure 2: The Gaussian distribution contours of the distribution fit to the dataset',y=-0.3)
    plt.show()
draw_Figure2(X,mu,sigma2)
```
## 1.3 Selecting the threshold, ε
```
Xval.shape,yval.shape
```
First, let's visualize the cross-validation set.
```
def showCrossValidationSet(Xval,yval):
    for i in set(yval.reshape(-1).tolist()):
        xval_i = Xval[yval.reshape(-1) == i]
        plt.scatter(xval_i[:,0],xval_i[:,1],marker='x',linewidths=0.5,s=10)
    scale = np.linspace(0,30,7)
    plt.xticks(scale)
    plt.yticks(scale)
    plt.xlabel('Latency (ms)')
    plt.ylabel('Throughput (mb/s)')
    plt.show()
showCrossValidationSet(Xval,yval)
p = Gaussian(Xval,Xval.mean(axis=0),Xval.var(axis=0))
pval = p[:,0] * p[:,1]
```
The F1 score is computed using precision (prec) and recall (rec):
$$
F_1 = \frac{2 \cdot prec \cdot rec}{prec + rec}
$$
You compute precision and recall by:
$$
prec = \frac{tp}{tp+fp}
$$
$$
rec = \frac{tp}{tp+fn}
$$
where
+ tp is the number of true positives: the ground truth label says it's an anomaly and our algorithm correctly classified it as an anomaly.
+ fp is the number of false positives: the ground truth label says it's not an anomaly, but our algorithm incorrectly classified it as an anomaly.
+ fn is the number of false negatives: the ground truth label says it's an anomaly, but our algorithm incorrectly classified it as not being anomalous.
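To make the definitions concrete, here is a tiny example with invented labels (not the exercise data); the manual F1 matches `sklearn.metrics.f1_score`:

```python
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([1, 1, 0, 1, 0, 0])  # 1 = anomaly (ground truth)
y_pred = np.array([1, 0, 0, 1, 1, 0])  # 1 = flagged as anomaly

tp = np.sum((y_true == 1) & (y_pred == 1))  # 2
fp = np.sum((y_true == 0) & (y_pred == 1))  # 1
fn = np.sum((y_true == 1) & (y_pred == 0))  # 1

prec = tp / (tp + fp)  # 2/3
rec = tp / (tp + fn)   # 2/3
F1 = 2 * prec * rec / (prec + rec)
print(F1, f1_score(y_true, y_pred))  # both 2/3 ≈ 0.667
```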
```
def selectThreshold(pval,yval):
    epsilons = np.arange(pval.min(),pval.max(),(pval.max()-pval.min())/1000)
    best_eps = 0
    best_F = 0
    p_val = pval.reshape(-1)
    y_val = yval.reshape(-1)
    for epsilon in epsilons:
        prediction = (p_val <= epsilon)
        tp = sum((y_val == 1) & (prediction == True))  # a real pitfall; it has to be written this way, see: https://blog.csdn.net/weixin_40041218/article/details/80868521
        fp = sum((y_val == 0) & (prediction == True))
        fn = sum((y_val == 1) & (prediction == False))
        prec = tp/(tp+fp)
        rec = tp/(tp+fn)
        F_score = 2*prec*rec/(prec+rec)
        if F_score > best_F:
            best_eps = epsilon
            best_F = F_score
    return best_eps,best_F
epsilon,F_score = selectThreshold(pval,yval)
epsilon,F_score
def draw_Figure3(data,mu,sigma2,epsilon):
    p = Gaussian(data,mu,sigma2)
    pval = p[:,0] * p[:,1]
    normal = data[pval >= epsilon]
    anormal = data[pval < epsilon]
    plt.scatter(normal[:,0],normal[:,1],marker='x',c='blue',linewidths=0.5,s=10)
    plt.scatter(anormal[:,0],anormal[:,1],marker='x',c='red',linewidths=0.5,s=10)
    x = np.linspace(0,30)
    X,Y = np.meshgrid(x,x)
    def Gaussian3D(x,y,mu,sigma2):
        return Gaussian(x,mu[0,0],sigma2[0,0])*Gaussian(y,mu[0,1],sigma2[0,1])
    plt.contour(X,Y,Gaussian3D(X,Y,mu,sigma2),[10 ** x for x in range(-20,0,3)])
    scale = np.linspace(0,30,7)
    plt.xticks(scale)
    plt.yticks(scale)
    plt.xlabel('Latency (ms)')
    plt.ylabel('Throughput (mb/s)')
    plt.title('Figure 3: The classified anomalies.',y=-0.3)
    plt.show()
draw_Figure3(X,mu,sigma2,epsilon)
```
Although this $\epsilon$ is not identical to the one given in the exercise, the anomaly-detection result still looks good.
## 1.4 High dimensional dataset
```
fpath = 'data/ex8data2.mat'
data2 = scio.loadmat(fpath)
data2.keys()
X,Xval,yval = data2['X'],data2['Xval'],data2['yval']
X.shape,Xval.shape,yval.shape
Xmu,Xvar = X.mean(axis=0),X.var(axis=0)
Xvalmu,Xvalvar = Xval.mean(axis=0),Xval.var(axis=0)
Xvalmu,Xvalvar
p_vec = Gaussian(X,Xmu,Xvar)
pval_vec = Gaussian(Xval,Xvalmu,Xvalvar)
pval_vec.shape
n = p_vec.shape[1]
p = p_vec[:,0].copy()
pval = pval_vec[:,0].copy()
for i in range(1,n):
    p *= p_vec[:,i]
    pval *= pval_vec[:,i]
p.shape,pval.shape
epsilon,F_score = selectThreshold(pval,yval)
epsilon,F_score
anormal = X[p < epsilon]
len(anormal)
```
Honestly, something feels a bit off here: aren't we taking the $\epsilon$ derived from the validation set and using it to test the training set?
# 2 Recommender Systems
## 2.1 Movie ratings dataset
```
fpath = 'data/ex8_movies.mat'
data3 = scio.loadmat(fpath)
data3.keys()
Y,R = data3['Y'],data3['R']
Y.shape,R.shape
```
To help you understand the matrix Y, the script ex8cofi.m will compute the average movie rating for the first movie (Toy Story) and output the average rating to the screen.
the average movie rating for the first movie (Toy Story):
```
Y[0, R[0, :] == 1].mean()  # average only over users who actually rated it
fpath = 'data/ex8_movieParams.mat'
data4 = scio.loadmat(fpath)
data4.keys()
```
? The exercise says num_features is 100, but why do only 10 come out when reading the file?
```
X,Theta = data4['X'],data4['Theta']
X.shape,Theta.shape
```
## 2.2 Collaborative filtering learning algorithm
### 2.2.1 Collaborative filtering cost function
The collaborative filtering cost function (without regularization) is given by:
$$
J(x^{(1)},\dots,x^{(n_m)},\theta^{(1)},\dots,\theta^{(n_u)}) = \frac{1}{2}\sum_{(i,j):r(i,j)=1}((\theta^{(j)})^Tx^{(i)} - y^{(i,j)})^2
$$
Note that later, when using scipy.optimize.minimize() to find the optimum, X and Theta need to be serialized into a single vector; the same issue came up earlier in ex4 with neural networks.
```
def serialization(X,Theta):
    return np.append(X.reshape(-1),Theta.reshape(-1))

def deSerialization(datas,num_users=943,num_movies=1682,num_features=10):
    return datas[:num_movies*num_features].reshape(num_movies,num_features),datas[num_movies*num_features:].reshape(num_users,num_features)

def cofiCostFunc(datas,Y,R,num_users=943,num_movies=1682,num_features=10):
    X,Theta = deSerialization(datas,num_users=num_users,num_movies=num_movies,num_features=num_features)
    m,n = Y.shape
    M = np.multiply(X.dot(Theta.T),R)
    return np.sum((M - Y)**2)/2
cofiCostFunc(serialization(X,Theta),Y,R)
```
Verify that serialization followed by deserialization round-trips correctly.
```
X_de,Theta_de = deSerialization(serialization(X,Theta))
(X - X_de).min(),(X - X_de).max(),(Theta - Theta_de).min(),(Theta - Theta_de).max()
```
The value 22.22 quoted in the exercise is computed from the first 4 users, the first 5 movies, and the first 3 features...
```
num_users,num_movies,num_features = 4,5,3
cofiCostFunc(serialization(X[:num_movies,:num_features],Theta[:num_users,:num_features]),Y[:num_movies,:num_users],R[:num_movies,:num_users],num_features=num_features,num_movies=num_movies,num_users=num_users)
```
### 2.2.2 Collaborative filtering gradient
The gradients of the cost function are given by:
$$
\frac{\partial{J}}{\partial{x^{(i)}_k}} = \sum_{j:r(i,j)=1}((\theta^{(j)})^Tx^{(i)} - y^{(i,j)})\theta_k^{(j)}
$$
$$
\frac{\partial{J}}{\partial{\theta^{(j)}_k}} = \sum_{i:r(i,j)=1}((\theta^{(j)})^Tx^{(i)} - y^{(i,j)})x_k^{(i)}
$$
```
def cofiGradient(datas,Y,R,num_users=943,num_movies=1682,num_features=10):
    X,Theta = deSerialization(datas,num_users=num_users,num_movies=num_movies,num_features=num_features)
    M = np.multiply(X.dot(Theta.T),R)
    X_grad = (M - Y).dot(Theta)
    Theta_grad = (M - Y).T.dot(X)
    return serialization(X_grad,Theta_grad)
datas_grad = cofiGradient(serialization(X,Theta),Y,R)
X_grad,Theta_grad = deSerialization(datas_grad)
X_grad.shape,Theta_grad.shape
```
### 2.2.3 Regularized cost function
The value 31.34 quoted in the exercise is computed from the first 4 users, the first 5 movies, the first 3 features, and $\lambda$ = 1.5...
```
def cofiRegularCostFunc(datas,Y,R,parameter,num_users=943,num_movies=1682,num_features=10):
    value1 = cofiCostFunc(datas,Y,R,num_users=num_users,num_movies=num_movies,num_features=num_features)
    X,Theta = deSerialization(datas,num_users=num_users,num_movies=num_movies,num_features=num_features)
    value2 = parameter/2*(np.sum(Theta[:num_users,:num_features] ** 2) + np.sum(X[:num_movies,:num_features]**2))
    return value1 + value2
parameter = 1.5
cofiRegularCostFunc(serialization(X[:num_movies,:num_features],Theta[:num_users,:num_features]),Y[:num_movies,:num_users],R[:num_movies,:num_users],parameter,num_features=num_features,num_movies=num_movies,num_users=num_users)
```
### 2.2.4 Regularized gradient
```
def cofiRegularGradient(datas,Y,R,parameter,num_users=943,num_movies=1682,num_features=10):
    X,Theta = deSerialization(datas,num_users=num_users,num_movies=num_movies,num_features=num_features)
    datas1 = cofiGradient(datas,Y,R,num_users=num_users,num_movies=num_movies,num_features=num_features)
    X_grad,Theta_grad = deSerialization(datas1,num_users=num_users,num_movies=num_movies,num_features=num_features)
    X_grad += (parameter * X)
    Theta_grad += (parameter * Theta)
    return serialization(X_grad,Theta_grad)
datas_grad = cofiRegularGradient(serialization(X,Theta),Y,R,parameter)
X_grad,Theta_grad = deSerialization(datas_grad)
X_grad.shape,Theta_grad.shape
```
## 2.3 Learning movie recommendations
```
movie_list = []
with open('./data/movie_ids.txt', encoding='latin-1') as f:
    for line in f:
        tokens = line.strip().split(' ')
        movie_list.append(' '.join(tokens[1:]))
movie_list = np.array(movie_list)
movie_list.shape
```
### 2.3.1 Recommendations
The recommendation workflow:
1. Prepare the data
```
my_ratings = np.zeros(shape=(1682,),dtype='int8')
my_ratings[0] = 4
my_ratings[97] = 2
my_ratings[6] = 3
my_ratings[11]= 5
my_ratings[53] = 4
my_ratings[63]= 5
my_ratings[65]= 3
my_ratings[68] = 5
my_ratings[182] = 4
my_ratings[225] = 5
my_ratings[354]= 5
def ShowMyRatings(my_ratings):
    print('Original ratings provided:')
    for i in range(my_ratings.shape[0]):
        if my_ratings[i] != 0:
            print('Rated {rate} for {names}'.format(rate=my_ratings[i],names=movie_list[i]))
ShowMyRatings(my_ratings)
```
Ah, at this point it suddenly clicked: the X and Theta read from the file at the start are not actually needed. They are just initial values, exactly like the initial values in every algorithm implemented before, which were either all zeros or randomly initialized (except here they happen to be read from a given file). X and Theta are then trained from Y and R until optimal, and it is the trained X_pred and Theta_pred that are useful. The former represents the movies (one row per movie, each row holding that movie's 10 feature values), the latter represents the users (one row per user, each row holding that user's 10 feature values), so Y_pred is every user's predicted rating of every movie (its cost-function value should be minimal, i.e. closest to Y).
The next step is to make recommendations for a new user: from the ratings he has already given (e.g. my_ratings[1] = 4), estimate his Theta_pred, multiply it with X_pred to predict his ratings for all movies, then sort in descending order and recommend the top few.
That is the theory of the recommender system. In practice we can simply append my_ratings first, retrain, and finally take the last column of Y_pred as the predicted ratings of all movies for this user.
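The prediction step described above is just a matrix product followed by a sort; a minimal sketch with made-up shapes and random numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
X_pred = rng.random((5, 3))      # 5 movies x 3 latent features
Theta_pred = rng.random((2, 3))  # 2 users x 3 latent features

# predicted rating of every movie (rows) by every user (columns)
Y_pred = X_pred @ Theta_pred.T

# movies ranked for the last user, best first
ranking = np.argsort(Y_pred[:, -1])[::-1]
print(ranking[:3])  # indices of the top-3 recommendations
```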
```
num_features=50
Y,R,num_movies,num_users = data3['Y'],data3['R'],int(data4['num_movies'].sum()),int(data4['num_users'].sum())
X,Theta = np.random.rand(num_movies,num_features),np.random.rand(num_users,num_features)
Y = np.insert(Y,Y.shape[1],values=my_ratings,axis=1)
R = np.insert(R,R.shape[1],values=(my_ratings != 0),axis=1)  # in Python, True/False behave exactly like 1/0
Theta = np.insert(Theta,Theta.shape[0],values=0,axis=0)
num_users +=1
# an implementation detail of recommender systems: mean normalization
def normalizeRatings(Y,R):
    Y_means = (Y.sum(axis=1) / R.sum(axis=1)).reshape(-1,1)
    Y_norm = Y - Y_means
    return Y_means,Y_norm
Y_means,Y_norm = normalizeRatings(Y,R)
X.shape,Theta.shape,Y_means.shape,Y_norm.shape,R.shape
```
2. Train the model
```
import scipy.optimize as opt
parameter = 10
# Honestly it's odd: training with Y gives results very similar to the reference answer in the assignment, and in principle Y_norm should be even closer, yet the Y_norm result is wildly off.
result = opt.minimize(fun=cofiRegularCostFunc,x0=serialization(X,Theta),args=(Y,R,parameter,num_users,num_movies,num_features),jac=cofiRegularGradient,method='TNC')
result
```
3. Unpack the results
```
X_pred,Theta_pred = deSerialization(result['x'],num_users=num_users,num_movies=num_movies,num_features=num_features)
# The commented line below is the Y_pred obtained from Y_norm; the uncommented one is obtained directly from Y.
# In short, training directly on Y gives the better result, which should not be the case.
# Y_pred = X_pred.dot(Theta_pred.T) + Y_means
Y_pred = X_pred.dot(Theta_pred.T)
X_pred.shape,Theta_pred.shape,Y_pred.shape
rated,ix = np.sort(Y_pred[:,-1],axis=0)[::-1],np.argsort(Y_pred[:,-1],axis=0)[::-1]
def ShowRecommendations(rated,ix,n=10):
    print('Top recommendations for you:')
    for i in range(n):
        print('Predicting rating {rate} for {names}'.format(rate=rated[i],names=movie_list[ix[i]]))
ShowRecommendations(rated,ix)
```
Training with Y gives far better results than with Y_norm, which is not normal, but after two days of staring at it I still can't find where the problem lies. I'll set it aside for now and check the reference answers later to see if they shed any light.
# END
# SGDClassifier with RobustScaler & Quantile Transformer
This code template is for classification analysis using the SGD Classifier, with rescaling done via RobustScaler and feature transformation via QuantileTransformer.
### Required Packages
```
import numpy as np
import pandas as pd
import seaborn as se
import warnings
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from imblearn.over_sampling import RandomOverSampler
from sklearn.preprocessing import RobustScaler, QuantileTransformer
from sklearn.metrics import classification_report,plot_confusion_matrix
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features which are required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
It is the process of reducing the number of input variables when developing a predictive model, both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since the majority of machine learning models in the sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below contains functions that remove null values, if any exist, and encode string-class columns as dummy (one-hot) variables.
```
def NullClearner(df):
    if isinstance(df, pd.Series) and (df.dtype in ["float64","int64"]):
        df.fillna(df.mean(),inplace=True)
        return df
    elif isinstance(df, pd.Series):
        df.fillna(df.mode()[0],inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
    X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Model
Stochastic Gradient Descent (SGD) is a simple yet very efficient approach to fitting linear classifiers and regressors under convex loss functions such as (linear) Support Vector Machines and Logistic Regression. SGD is merely an optimization technique and does not correspond to a specific family of machine learning models. It is only a way to train a model. Often, an instance of SGDClassifier or SGDRegressor will have an equivalent estimator in the scikit-learn API, potentially using a different optimization technique.
For example, SGDRegressor(loss='squared_loss', penalty='l2') and Ridge solve the same optimization problem via different means.
#### Model Tuning Parameters
> - **loss** -> The loss function to be used. For SGDClassifier the options include 'hinge' (the default, a linear SVM), 'log' (logistic regression), 'modified_huber', 'squared_hinge', and 'perceptron', as well as the regression losses 'squared_loss', 'huber', 'epsilon_insensitive', and 'squared_epsilon_insensitive'.
> - **penalty** -> The penalty (aka regularization term) to be used. Defaults to 'l2', which is the standard regularizer for linear SVM models. 'l1' and 'elasticnet' might bring sparsity to the model (feature selection) not achievable with 'l2'.
> - **alpha** -> Constant that multiplies the regularization term. The higher the value, the stronger the regularization. Also used to compute the learning rate when learning_rate is set to 'optimal'.
> - **l1_ratio** -> The Elastic Net mixing parameter, with 0 <= l1_ratio <= 1. l1_ratio=0 corresponds to the L2 penalty, l1_ratio=1 to L1. Only used if penalty is 'elasticnet'.
> - **tol** -> The stopping criterion.
> - **learning_rate** -> The learning rate schedule; possible values are {'optimal', 'constant', 'invscaling', 'adaptive'}.
> - **eta0** -> The initial learning rate for the 'constant', 'invscaling' or 'adaptive' schedules.
> - **power_t** -> The exponent for inverse scaling learning rate.
> - **epsilon** -> Epsilon in the epsilon-insensitive loss functions; only used if loss is 'huber', 'epsilon_insensitive', or 'squared_epsilon_insensitive'.
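For illustration, a few of these parameters set explicitly on a toy problem (the values are arbitrary, not tuned for any dataset):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# illustrative hyperparameter choices, not recommendations
clf = SGDClassifier(
    loss='hinge',           # linear SVM (the default)
    penalty='elasticnet',
    alpha=1e-4,
    l1_ratio=0.15,
    learning_rate='optimal',
    tol=1e-3,
    random_state=0,
)

# tiny made-up dataset, separable along the single feature
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
clf.fit(X, y)
print(clf.predict(X))
```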
### Robust Scaler
Standardization of a dataset is a common requirement for many machine learning estimators. Typically this is done by removing the mean and scaling to unit variance. However, outliers can often influence the sample mean / variance in a negative way. In such cases, the median and the interquartile range often give better results.
This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and the 3rd quartile (75th quantile).
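A toy illustration (made-up numbers): the median of the sample below is 3 and the IQR is 2, so the inliers map into a small range while the outlier stays far away without distorting them:

```python
import numpy as np
from sklearn.preprocessing import RobustScaler

X = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])  # 100 is an outlier
scaled = RobustScaler().fit_transform(X)  # (x - median) / IQR = (x - 3) / 2
print(scaled.ravel())  # → [-1.  -0.5  0.   0.5 48.5]
```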
### Quantile Transformer
This method transforms the features to follow a uniform or a normal distribution. Therefore, for a given feature, this transformation tends to spread out the most frequent values. It also reduces the impact of (marginal) outliers: this is therefore a robust preprocessing scheme.
Transform features using quantiles information.
```
model=make_pipeline(RobustScaler(), QuantileTransformer(), SGDClassifier())
model.fit(x_train,y_train)
```
#### Model Accuracy
We will use the trained model to make predictions on the test set. Then we use the predicted values to measure the accuracy of our model.
score: For classifiers, the score function returns the mean accuracy of the predictions on the given test data and labels.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the proportion of variability in the target that is explained by the model.
> **mae**: The **mean absolute error** function measures the average absolute difference between the real and predicted values.
> **mse**: The **mean squared error** function squares the errors, penalizing the model more heavily for large errors.
```
plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues)
```
#### Classification Report
A classification report summarizes the precision, recall, F1-score, and support for each class, computed on the test-set predictions.
```
print(classification_report(y_test,model.predict(x_test)))
```
#### Creator: Ayush Gupta , Github: [Profile](https://github.com/guptayush179)
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plotly.com/python/getting-started/) by downloading the client and [reading the primer](https://plotly.com/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plotly.com/python/getting-started/#initialization-for-online-plotting) or [offline](https://plotly.com/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plotly.com/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
### Version Check
Note: The static image export API is available in version <b>3.2.0.+</b><br>
```
import plotly
plotly.__version__
```
### Static Image Export
New in version 3.2.0. It's now possible to programmatically export figures as high quality static images while fully offline.
#### Install Dependencies
Static image generation requires the [orca](https://github.com/plotly/orca) command-line utility and the [psutil](https://github.com/giampaolo/psutil) Python library. There are three general approaches to installing these dependencies.
##### conda
Using the [conda](https://conda.io/docs/) package manager, you can install these dependencies in a single command:
```
$ conda install -c plotly plotly-orca psutil
```
**Note:** Even if you don't want to use conda to manage your Python dependencies, it is still useful as a cross platform tool for managing native libraries and command-line utilities (e.g. git, wget, graphviz, boost, gcc, nodejs, cairo, etc.). For this use-case, start with [Miniconda](https://conda.io/miniconda.html) (~60MB) and tell the installer to add itself to your system `PATH`. Then run `conda install plotly-orca` and the orca executable will be available system wide.
##### npm + pip
You can use the [npm](https://www.npmjs.com/get-npm) package manager to install `orca` (and its `electron` dependency), and then use pip to install `psutil`:
```
$ npm install -g electron@1.8.4 orca
$ pip install psutil
```
##### Standalone Binaries + pip
If you are unable to install conda or npm, you can install orca as a precompiled binary for your operating system. Follow the instructions in the orca [README](https://github.com/plotly/orca) to install orca and add it to your system `PATH`. Then use pip to install `psutil`.
```
$ pip install psutil
```
### Create a Figure
Now let's create a simple scatter plot with 100 random points of varying color and size.
```
from plotly.offline import iplot, init_notebook_mode
import plotly.graph_objs as go
import plotly.io as pio
import os
import numpy as np
```
We'll configure the notebook for use in [offline](https://plotly.com/python/getting-started/#initialization-for-offline-plotting) mode
```
init_notebook_mode(connected=True)
N = 100
x = np.random.rand(N)
y = np.random.rand(N)
colors = np.random.rand(N)
sz = np.random.rand(N)*30
fig = go.Figure()
fig.add_scatter(x=x,
y=y,
mode='markers',
marker={'size': sz,
'color': colors,
'opacity': 0.6,
'colorscale': 'Viridis'
})
iplot(fig)
```
### Write Image File
The `plotly.io.write_image` function is used to write an image to a file or file-like python object.
Let's first create an output directory to store our images
```
if not os.path.exists('images'):
    os.mkdir('images')
```
If you are running this notebook live, click to [open the output directory](./images) so you can examine the images as they're written.
#### Raster Formats: PNG, JPEG, and WebP
Orca can output figures to several raster image formats including **PNG**, ...
```
pio.write_image(fig, 'images/fig1.png')
```
**JPEG**, ...
```
pio.write_image(fig, 'images/fig1.jpeg')
```
and **WebP**
```
pio.write_image(fig, 'images/fig1.webp')
```
#### Vector Formats: SVG and PDF...
Orca can also output figures in several vector formats including **SVG**, ...
```
pio.write_image(fig, 'images/fig1.svg')
```
**PDF**, ...
```
pio.write_image(fig, 'images/fig1.pdf')
```
and **EPS** (requires the poppler library)
```
pio.write_image(fig, 'images/fig1.eps')
```
**Note:** It is important to note that any figures containing WebGL traces (i.e. of type `scattergl`, `heatmapgl`, `contourgl`, `scatter3d`, `surface`, `mesh3d`, `scatterpolargl`, `cone`, `streamtube`, `splom`, or `parcoords`) that are exported in a vector format will include encapsulated rasters, instead of vectors, for some parts of the image.
### Get Image as Bytes
The `plotly.io.to_image` function is used to return an image as a bytes object.
Let's convert the figure to a **PNG** bytes object...
```
img_bytes = pio.to_image(fig, format='png')
```
and then display the first 20 bytes.
```
img_bytes[:20]
```
#### Display Bytes as Image Using `IPython.display.Image`
A bytes object representing a PNG image can be displayed directly in the notebook using the `IPython.display.Image` class. This also works in the [Qt Console for Jupyter](https://qtconsole.readthedocs.io/en/stable/)!
```
from IPython.display import Image
Image(img_bytes)
```
### Change Image Dimensions and Scale
In addition to the image format, the `to_image` and `write_image` functions provide arguments to specify the image `width` and `height` in logical pixels. They also provide a `scale` parameter that can be used to increase (`scale` > 1) or decrease (`scale` < 1) the physical resolution of the resulting image.
```
img_bytes = pio.to_image(fig, format='png', width=600, height=350, scale=2)
Image(img_bytes)
```
### Summary
In summary, to export high-quality static images from plotly.py all you need to do is install orca and psutil and then use the `plotly.io.write_image` and `plotly.io.to_image` functions.
If you want to know more about how the orca integration works, or if you need to troubleshoot an issue, please check out the [Orca Management](../orca-management/) section.
```
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'static-image-export.ipynb', 'python/static-image-export/', 'Static Image Export | plotly',
'Plotly allows you to save static images of your plots. Save the image to your local computer, or embed it inside your Jupyter notebooks as a static image.',
title = 'Static Image Export | plotly',
name = 'Static Image Export',
thumbnail='thumbnail/static-image-export.png',
language='python',
uses_plotly_offline=True,
page_type='example_index', has_thumbnail='true', display_as='file_settings', order=1,
ipynb='~notebook_demo/252')
```
# Data Science Fundamentals 5
Basic introduction on how to perform typical machine learning tasks with Python.
Prepared by Mykhailo Vladymyrov & Aris Marcolongo,
Science IT Support, University Of Bern, 2020
This work is licensed under <a href="https://creativecommons.org/share-your-work/public-domain/cc0/">CC0</a>.
# Solutions to Part 2.
```
from sklearn import tree
from sklearn import ensemble
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from matplotlib import pyplot as plt
import seaborn as sns
from time import time as timer
from imageio import imread
import pandas as pd
import numpy as np
import os
from sklearn.manifold import TSNE
import umap
import tensorflow as tf
%matplotlib inline
from matplotlib import animation
from IPython.display import HTML
if not os.path.exists('data'):
    path = os.path.abspath('.')+'/colab_material.tgz'
    tf.keras.utils.get_file(path, 'https://github.com/neworldemancer/DSF5/raw/master/colab_material.tgz')
    !tar -xvzf colab_material.tgz > /dev/null 2>&1
from utils.routines import *
```
# Datasets
In this course we will use several synthetic and real-world datasets to illustrate the behavior of the models and exercise our skills.
## 1. Synthetic linear
```
def get_linear(n_d=1, n_points=10, w=None, b=None, sigma=5):
x = np.random.uniform(0, 10, size=(n_points, n_d))
w = w or np.random.uniform(0.1, 10, n_d)
b = b or np.random.uniform(-10, 10)
y = np.dot(x, w) + b + np.random.normal(0, sigma, size=n_points)
print('true slopes: w =', w, '; b =', b)
return x, y
x, y = get_linear(n_d=1, sigma=0)
plt.plot(x[:, 0], y, '*')
n_d = 2
x, y = get_linear(n_d=n_d, n_points=100)
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x[:,0], x[:,1], y, marker='x', color='b',s=40)
```
## 2. House prices
Subset of the Ames Houses dataset: http://jse.amstat.org/v19n3/decock.pdf
```
def house_prices_dataset(return_df=False, price_max=400000, area_max=40000):
path = 'data/AmesHousing.csv'
df = pd.read_csv(path, na_values=('NaN', ''), keep_default_na=False)
rename_dict = {k:k.replace(' ', '').replace('/', '') for k in df.keys()}
df.rename(columns=rename_dict, inplace=True)
useful_fields = ['LotArea',
'Utilities', 'OverallQual', 'OverallCond',
'YearBuilt', 'YearRemodAdd', 'ExterQual', 'ExterCond',
'HeatingQC', 'CentralAir', 'Electrical',
'1stFlrSF', '2ndFlrSF','GrLivArea',
'FullBath', 'HalfBath',
'BedroomAbvGr', 'KitchenAbvGr', 'KitchenQual', 'TotRmsAbvGrd',
'Functional','PoolArea',
'YrSold', 'MoSold'
]
target_field = 'SalePrice'
df.dropna(axis=0, subset=useful_fields+[target_field], inplace=True)
cleanup_nums = {'Street': {'Grvl': 0, 'Pave': 1},
'LotFrontage': {'NA':0},
'Alley': {'NA':0, 'Grvl': 1, 'Pave': 2},
'LotShape': {'IR3':0, 'IR2': 1, 'IR1': 2, 'Reg':3},
'Utilities': {'ELO':0, 'NoSeWa': 1, 'NoSewr': 2, 'AllPub': 3},
'LandSlope': {'Sev':0, 'Mod': 1, 'Gtl': 3},
'ExterQual': {'Po':0, 'Fa': 1, 'TA': 2, 'Gd': 3, 'Ex':4},
'ExterCond': {'Po':0, 'Fa': 1, 'TA': 2, 'Gd': 3, 'Ex':4},
'BsmtQual': {'NA':0, 'Po':1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex':5},
'BsmtCond': {'NA':0, 'Po':1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex':5},
'BsmtExposure':{'NA':0, 'No':1, 'Mn': 2, 'Av': 3, 'Gd': 4},
'BsmtFinType1':{'NA':0, 'Unf':1, 'LwQ': 2, 'Rec': 3, 'BLQ': 4, 'ALQ':5, 'GLQ':6},
'BsmtFinType2':{'NA':0, 'Unf':1, 'LwQ': 2, 'Rec': 3, 'BLQ': 4, 'ALQ':5, 'GLQ':6},
'HeatingQC': {'Po':0, 'Fa': 1, 'TA': 2, 'Gd': 3, 'Ex':4},
'CentralAir': {'N':0, 'Y': 1},
'Electrical': {'':0, 'NA':0, 'Mix':1, 'FuseP':2, 'FuseF': 3, 'FuseA': 4, 'SBrkr': 5},
'KitchenQual': {'Po':0, 'Fa': 1, 'TA': 2, 'Gd': 3, 'Ex':4},
'Functional': {'Sal':0, 'Sev':1, 'Maj2': 2, 'Maj1': 3, 'Mod': 4, 'Min2':5, 'Min1':6, 'Typ':7},
'FireplaceQu': {'NA':0, 'Po':1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex':5},
'PoolQC': {'NA':0, 'Fa': 1, 'TA': 2, 'Gd': 3, 'Ex':4},
'Fence': {'NA':0, 'MnWw': 1, 'GdWo': 2, 'MnPrv': 3, 'GdPrv':4},
}
df_X = df[useful_fields].copy()
df_X.replace(cleanup_nums, inplace=True)  # convert ordinal categorical variables to numerical
df_Y = df[target_field].copy()
x = df_X.to_numpy().astype(np.float32)
y = df_Y.to_numpy().astype(np.float32)
if price_max>0:
idxs = y<price_max
x = x[idxs]
y = y[idxs]
if area_max>0:
idxs = x[:,0]<area_max
x = x[idxs]
y = y[idxs]
return (x, y, df) if return_df else (x,y)
def house_prices_dataset_normed():
x, y = house_prices_dataset(return_df=False, price_max=-1, area_max=-1)
scaler=StandardScaler()
features_scaled=scaler.fit_transform(x)
return features_scaled
x, y, df = house_prices_dataset(return_df=True)
print(x.shape, y.shape)
df.head()
plt.plot(x[:, 0], y, '.')
plt.xlabel('area, sq.ft')
plt.ylabel('price, $');
```
## 3. Blobs
```
x, y = make_blobs(n_samples=1000, centers=[[0,0], [5,5], [10, 0]])
colors = "ygr"
for i, color in enumerate(colors):
idx = y == i
plt.scatter(x[idx, 0], x[idx, 1], c=color, edgecolor='gray', s=25)
```
## 4. MNIST
The MNIST database of handwritten digits has a training set of 60,000 examples, and a test set of 10,000 examples. The digits have been size-normalized and centered in a fixed-size image.
It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal efforts on preprocessing and formatting (taken from http://yann.lecun.com/exdb/mnist/). Each example is a 28x28 grayscale image and the dataset can be readily downloaded from Tensorflow.
```
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
```
Let's check a few samples:
```
n = 3
fig, ax = plt.subplots(n, n, figsize=(2*n, 2*n))
ax = [ax_xy for ax_y in ax for ax_xy in ax_y]
for axi, im_idx in zip(ax, np.random.choice(len(train_images), n**2)):
im = train_images[im_idx]
im_class = train_labels[im_idx]
axi.imshow(im, cmap='gray')
axi.text(1, 4, f'{im_class}', color='r', size=16)
axi.grid(False)
plt.tight_layout(pad=0, h_pad=0, w_pad=0)
```
## 5. Fashion MNIST
`Fashion-MNIST` is a dataset of Zalando's article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. (from https://github.com/zalandoresearch/fashion-mnist)
```
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
```
Let's check a few samples:
```
n = 3
fig, ax = plt.subplots(n, n, figsize=(2*n, 2*n))
ax = [ax_xy for ax_y in ax for ax_xy in ax_y]
for axi, im_idx in zip(ax, np.random.choice(len(train_images), n**2)):
im = train_images[im_idx]
im_class = train_labels[im_idx]
axi.imshow(im, cmap='gray')
axi.text(1, 4, f'{im_class}', color='r', size=16)
axi.grid(False)
plt.tight_layout(pad=0, h_pad=0, w_pad=0)
```
Each of the training and test examples is assigned to one of the following labels:
| Label | Description |
| --- | --- |
| 0 | T-shirt/top |
| 1 | Trouser |
| 2 | Pullover |
| 3 | Dress |
| 4 | Coat |
| 5 | Sandal |
| 6 | Shirt |
| 7 | Sneaker |
| 8 | Bag |
| 9 | Ankle boot |
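When plotting samples it is convenient to show the class name rather than the raw integer label. A small helper of our own (the dataset itself provides only the integers; the `class_names` list below simply transcribes the table above):

```python
# Hypothetical helper: the dataset ships only integer labels, so we keep
# the names from the table above in a list indexed by label.
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

def label_to_name(label):
    """Map an integer Fashion-MNIST label to its human-readable class name."""
    return class_names[label]

print(label_to_name(9))  # -> Ankle boot
```

This could replace the raw `f'{im_class}'` annotation in the plotting loop above.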
# EXERCISE 1 : Random forest classifier for FMNIST
```
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
n = len(train_labels)
x_train = train_images.reshape((n, -1))
y_train = train_labels
n_test = len(test_labels)
x_test = test_images.reshape((n_test, -1))
y_test = test_labels
# 1. Create the classifier. As the number of features is large, use a bigger
# tree depth (max_depth parameter). At the same time, to reduce variance,
# limit the total number of tree leaves (max_leaf_nodes parameter).
# Try different numbers of estimators (n_estimators).
n_est = 20
dtc = ensemble.RandomForestClassifier(max_depth=700, n_estimators=n_est, max_leaf_nodes=500)
# 2. fit the model
t1 = timer()
dtc.fit(x_train, y_train)
t2 = timer()
print ('training time: %.1fs'%(t2-t1))
# 3. Inspect training and test accuracy
print("training score : %.3f (n_est=%d)" % (dtc.score(x_train, y_train), n_est))
print("test score : %.3f (n_est=%d)" % (dtc.score(x_test, y_test), n_est))
```
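To get a feel for how the `n_estimators` parameter affects accuracy without the cost of repeatedly refitting on FMNIST, here is a small sketch of our own on a synthetic blobs dataset (not part of the exercise solution):

```python
# Sketch: sweep n_estimators on a small synthetic dataset and watch the
# test accuracy. On well-separated blobs even a few trees do well; on
# FMNIST the gains from more estimators are larger.
from sklearn import ensemble
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split

x, y = make_blobs(n_samples=500, centers=3, random_state=0)
x_tr, x_te, y_tr, y_te = train_test_split(x, y, random_state=0)

for n_est in (1, 5, 20):
    clf = ensemble.RandomForestClassifier(n_estimators=n_est, random_state=0)
    clf.fit(x_tr, y_tr)
    print(n_est, clf.score(x_te, y_te))
```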
# EXERCISE 2 : PCA with a non-linear data-set
```
# 1. Load the data using the function load_ex1_data_pca() , check the dimensionality of the data and plot them.
# Solution:
data = load_ex1_data_pca()
n_samples,n_dim=data.shape
print('We have ',n_samples, 'samples of dimension ', n_dim)
plt.figure(figsize=((5,5)))
plt.grid()
plt.plot(data[:,0],data[:,1],'o')
# 2. Define a PCA object and perform the PCA fitting.
pca=PCA()
pca.fit(data)
# 3. Check the explained variance ratio and select best number of components.
print('Explained variance ratio: ' ,pca.explained_variance_ratio_)
# 4. Plot the reconstructed vectors for different values of k.
scores=pca.transform(data)
for k in range(1,3):
res=np.dot(scores[:,:k], pca.components_[:k,:])
plt.figure(figsize=((5,5)))
plt.title('Reconstructed vector for k ='+ str(k))
plt.plot(res[:,0],res[:,1],'o')
plt.plot(data[:,0],data[:,1],'o')
for a,b,c,d in zip(data[:,0],data[:,1],res[:,0],res[:,1]) :
plt.plot([a,c],[b,d],'-', linestyle = '--', color='red')
plt.grid()
# Message: if the manifold is non-linear, one is forced to use a high number of principal components.
# For example, in the parabola example the projection for k=1 looks bad. But using too many principal components,
# the reconstructed vectors are almost equal to the original ones (for k=2 we get exact reconstruction in our example)
# and the advantages of dimensionality reduction are lost. This is a general pattern.
```
# EXERCISE 3 : Find the hidden drawing.
```
# 1. Load the data using the function load_ex2_data_pca(seed=1235) , check the dimensionality of the data and plot them.
data= load_ex2_data_pca(seed=1235)
n_samples,n_dim=data.shape
print('We have ',n_samples, 'samples of dimension ', n_dim)
# 2. Define a PCA object and perform the PCA fitting.
pca=PCA()
pca.fit(data)
# 3. Check the explained variance ratio and select best number of components.
plt.figure()
print('Explained variance ratio: ' ,pca.explained_variance_ratio_)
plt.plot(pca.explained_variance_ratio_,'-o')
plt.xlabel('k')
plt.ylabel('Explained variance ratio')
plt.grid()
# 4. Plot the reconstructed vectors for the best value of k.
plt.figure()
k=2
data_transformed=pca.transform(data)
plt.plot(data_transformed[:,0],data_transformed[:,1],'o')
# **Message:** Sometimes the data hides simple patterns in high dimensional datasets.
# PCA can be very useful in identifying these patterns.
```
| github_jupyter |
```
import pandas as pd
from bicm import BipartiteGraph
import numpy as np
from tqdm import tqdm
import csv
import itertools
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, f1_score, classification_report
from sklearn.metrics import roc_curve, roc_auc_score, precision_recall_curve, average_precision_score
from sklearn.metrics import confusion_matrix, f1_score, classification_report
import math
```
### Functions
```
def dump_degree_sequences(train,test,fold=0,unseen_folder='VecNet_Unseen_Nodes'):
ligands = list(set(train['InChiKey'].tolist()))
targets = list(set(train['target_aa_code'].tolist()))
ligands_degree_dict = dict()
for inchikey_chem in tqdm(ligands):
sum_df = train[train['InChiKey'] == inchikey_chem]
ligands_degree_dict[inchikey_chem] = dict()
ligands_degree_dict[inchikey_chem]['deg_0'] = len(sum_df[sum_df['Y'] == 0])
ligands_degree_dict[inchikey_chem]['deg_1'] = len(sum_df[sum_df['Y'] == 1])
targets_degree_dict = dict()
for aa_target in tqdm(targets):
sum_df = train[train['target_aa_code'] == aa_target]
targets_degree_dict[aa_target] = dict()
targets_degree_dict[aa_target]['deg_0'] = len(sum_df[sum_df['Y'] == 0])
targets_degree_dict[aa_target]['deg_1'] = len(sum_df[sum_df['Y'] == 1])
degree_train_1_0_ligands = [ligands_degree_dict[key_val]['deg_1'] for key_val in tqdm(ligands_degree_dict.keys())]
degree_train_0_1_ligands = [ligands_degree_dict[key_val]['deg_0'] for key_val in tqdm(ligands_degree_dict.keys())]
degree_train_1_0_targets = [targets_degree_dict[key_val]['deg_1'] for key_val in tqdm(targets_degree_dict.keys())]
degree_train_0_1_targets = [targets_degree_dict[key_val]['deg_0'] for key_val in tqdm(targets_degree_dict.keys())]
with open('../data/sars-busters-consolidated/GitData/' + unseen_folder + '/Degree_Sequences/degreetrain10ligands_' + str(fold) + '.txt', 'w') as file:
for degree in degree_train_1_0_ligands:
file.write("%i\n" % degree)
file.close()
with open('../data/sars-busters-consolidated/GitData/' + unseen_folder + '/Degree_Sequences/degreetrain01ligands_' + str(fold) + '.txt', 'w') as file:
for degree in degree_train_0_1_ligands:
file.write("%i\n" % degree)
file.close()
with open('../data/sars-busters-consolidated/GitData/' + unseen_folder + '/Degree_Sequences/degreetrain10targets_' + str(fold) + '.txt', 'w') as file:
for degree in degree_train_1_0_targets:
file.write("%i\n" % degree)
file.close()
with open('../data/sars-busters-consolidated/GitData/' + unseen_folder + '/Degree_Sequences/degreetrain01targets_' + str(fold) + '.txt', 'w') as file:
for degree in degree_train_0_1_targets:
file.write("%i\n" % degree)
file.close()
textfile = open('../data/sars-busters-consolidated/GitData/' + unseen_folder + '/Degree_Sequences/ligands_' + str(fold) + '.txt', "w")
for element in ligands:
textfile.write(element + "\n")
textfile.close()
textfile = open('../data/sars-busters-consolidated/GitData/' + unseen_folder + '/Degree_Sequences/targets_' + str(fold) + '.txt', "w")
for element in targets:
textfile.write(element + "\n")
textfile.close()
return
def get_configuration_model_performance(train,test,ligand_file_path,target_file_path,summat10_file_path,summat01_file_path):
text_file = open(ligand_file_path, "r") # Rows of the adjacency matrix in order
ligands = text_file.readlines()
text_file = open(target_file_path, "r") # Columns of the adjacency matrix in order
targets = text_file.readlines()
ligands = [j.replace('\n','') for j in tqdm(ligands)]
targets = [j.replace('\n','') for j in tqdm(targets)]
number_ligands = len(ligands)
number_targets = len(targets)
train_pos = train[train['Y'] == 1]
train_neg = train[train['Y'] == 0]
pos_deg_0_ligands = []
pos_deg_0_targets = []
neg_deg_0_ligands = []
neg_deg_0_targets = []
ligand_degree_ratio = dict()
ligand_all_average = []
for ligand in tqdm(ligands):
pos_deg = len(train_pos[train_pos['InChiKey'] == ligand])
neg_deg = len(train_neg[train_neg['InChiKey'] == ligand])
ligand_degree_ratio[ligand] = dict()
ligand_degree_ratio[ligand]['deg_ratio'] = pos_deg / (pos_deg + neg_deg)
ligand_degree_ratio[ligand]['deg_avg'] = pos_deg / number_targets
ligand_all_average.append(pos_deg / number_targets)
if pos_deg == 0:
pos_deg_0_ligands.append(ligand)
if neg_deg == 0:
neg_deg_0_ligands.append(ligand)
ligands_all_avg = sum(ligand_all_average) / number_ligands
targets_degree_ratio = dict()
target_all_average = []
for target in tqdm(targets):
pos_deg = len(train_pos[train_pos['target_aa_code'] == target])
neg_deg = len(train_neg[train_neg['target_aa_code'] == target])
targets_degree_ratio[target] = dict()
targets_degree_ratio[target]['deg_ratio'] = pos_deg / (pos_deg + neg_deg)
targets_degree_ratio[target]['deg_avg'] = pos_deg / number_ligands
target_all_average.append(pos_deg / number_ligands)
if pos_deg == 0:
pos_deg_0_targets.append(target)
if neg_deg == 0:
neg_deg_0_targets.append(target)
targets_all_avg = sum(target_all_average) / number_targets
print('Ligands with positive degree 0: ',len(pos_deg_0_ligands))
print('Ligands with negative degree 0: ',len(neg_deg_0_ligands))
print('Targets with positive degree 0: ',len(pos_deg_0_targets))
print('Targets with negative degree 0: ',len(neg_deg_0_targets))
pos_annotated_ligands = list(set(ligands)-set(pos_deg_0_ligands))
pos_annotated_targets = list(set(targets)-set(pos_deg_0_targets))
neg_annotated_ligands = list(set(ligands)-set(neg_deg_0_ligands))
neg_annotated_targets = list(set(targets)-set(neg_deg_0_targets))
summat10 = np.loadtxt(open(summat10_file_path, "rb"), delimiter=",", skiprows=0) # Output of MATLAB run
summat01 = np.loadtxt(open(summat01_file_path, "rb"), delimiter=",", skiprows=0) # Output of MATLAB run
test_probabilty_predicted_conditioned = []
## Average conditional probability
#conditoned_summat = np.divide(summat10,np.add(summat10,summat01)) # Elementwise pos_deg / (pos_deg + neg_deg)
#conditoned_summat = conditoned_summat[~np.isnan(conditoned_summat)]
#average_conditional_probability = sum(conditoned_summat) / len(conditoned_summat) # Average over valid conditional probabilities
p10_avg = np.mean(summat10)
p01_avg = np.mean(summat01)
average_conditional_probability = p10_avg / (p10_avg + p01_avg)
drop_nan = []
for index, row in tqdm(test.iterrows()):
if row['InChiKey'] in pos_annotated_ligands and row['target_aa_code'] in pos_annotated_targets:
p10 = summat10[ligands.index(row['InChiKey']),targets.index(row['target_aa_code'])]
p01 = summat01[ligands.index(row['InChiKey']),targets.index(row['target_aa_code'])]
p10_conditioned = p10 / (p10 + p01)
elif row['InChiKey'] in pos_annotated_ligands and row['target_aa_code'] not in pos_annotated_targets:
p10_conditioned = ligand_degree_ratio[row['InChiKey']]['deg_ratio'] ## k_+ / (k_+ + k_-)
elif row['InChiKey'] not in pos_annotated_ligands and row['target_aa_code'] in pos_annotated_targets:
p10_conditioned = targets_degree_ratio[row['target_aa_code']]['deg_ratio'] ## k_+ / (k_+ + k_-)
else:
p10_conditioned = average_conditional_probability
if math.isnan(p10_conditioned):
drop_nan.append(index)
else:
test_probabilty_predicted_conditioned.append(p10_conditioned)
## Performance on the test dataset
print('AUC: ', roc_auc_score(test.drop(drop_nan)['Y'].tolist(), test_probabilty_predicted_conditioned))
print('AUP: ', average_precision_score(test.drop(drop_nan)['Y'].tolist(), test_probabilty_predicted_conditioned))
return roc_auc_score(test.drop(drop_nan)['Y'].tolist(), test_probabilty_predicted_conditioned), average_precision_score(test.drop(drop_nan)['Y'].tolist(), test_probabilty_predicted_conditioned)
```
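The branching inside `get_configuration_model_performance` above amounts to a fallback ladder for the conditional probability: prefer the pairwise model output, fall back to a per-ligand or per-target degree ratio, and finally to the global average. A minimal sketch of our own with hypothetical inputs:

```python
# Fallback ladder for the conditioned probability P(1 | 1 or 0), mirroring
# the if/elif chain above (inputs here are illustrative, not real data).
def conditioned_p10(p10=None, p01=None,
                    ligand_ratio=None, target_ratio=None,
                    global_avg=0.5):
    if p10 is not None and p01 is not None:   # both nodes positively annotated
        return p10 / (p10 + p01)
    if ligand_ratio is not None:              # only the ligand is annotated
        return ligand_ratio                   # k_+ / (k_+ + k_-)
    if target_ratio is not None:              # only the target is annotated
        return target_ratio
    return global_avg                         # neither node annotated

assert conditioned_p10(p10=3.0, p01=1.0) == 0.75
assert conditioned_p10(ligand_ratio=0.2) == 0.2
assert conditioned_p10() == 0.5
```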
### Generate the degree files - Unseen Nodes and Edges
```
for fold in tqdm(range(5)):
train = pd.read_csv('../data/sars-busters-consolidated/GitData/VecNet_Unseen_Nodes/train_' + str(fold) + '.csv')
edges_test = pd.read_csv('../data/sars-busters-consolidated/GitData/VecNet_Unseen_Nodes/test_unseen_edges_' + str(fold) + '.csv')
nodes_test = pd.read_csv('../data/sars-busters-consolidated/GitData/VecNet_Unseen_Nodes/test_unseen_nodes_' + str(fold) + '.csv')
dump_degree_sequences(train,nodes_test,fold=fold,unseen_folder='VecNet_Unseen_Nodes')
```
**Now run the MATLAB code in the Configuration Model - 5 fold folder to generate the matrices.**
### Get Unseen Nodes and Edges Test Performance
```
auc_nodes = []
aup_nodes = []
auc_edges =[]
aup_edges = []
for fold in tqdm(range(5)):
train = pd.read_csv('../data/sars-busters-consolidated/GitData/VecNet_Unseen_Nodes/train_' + str(fold) + '.csv')
edges_test = pd.read_csv('../data/sars-busters-consolidated/GitData/VecNet_Unseen_Nodes/test_unseen_edges_' + str(fold) + '.csv')
nodes_test = pd.read_csv('../data/sars-busters-consolidated/GitData/VecNet_Unseen_Nodes/test_unseen_nodes_' + str(fold) + '.csv')
ligand_file_path = '/data/sars-busters-consolidated/GitData/VecNet_Unseen_Nodes/Degree_Sequences/ligands_' + str(fold) + '.txt'
target_file_path = '/data/sars-busters-consolidated/GitData/VecNet_Unseen_Nodes/Degree_Sequences/targets_' + str(fold) + '.txt'
summat10_file_path = '/data/sars-busters-consolidated/GitData/VecNet_Unseen_Nodes/Degree_Sequences/summat10_' + str(fold) + '.csv'
summat01_file_path = '/data/sars-busters-consolidated/GitData/VecNet_Unseen_Nodes/Degree_Sequences/summat01_' + str(fold) + '.csv'
auc, aup = get_configuration_model_performance(train,nodes_test,ligand_file_path,target_file_path,summat10_file_path,summat01_file_path)
auc_nodes.append(auc)
aup_nodes.append(aup)
auc, aup = get_configuration_model_performance(train,edges_test,ligand_file_path,target_file_path,summat10_file_path,summat01_file_path)
auc_edges.append(auc)
aup_edges.append(aup)
print('Unseen Nodes')
print('AUC: ', np.mean(auc_nodes), '+-', np.std(auc_nodes))
print('AUP: ', np.mean(aup_nodes), '+-', np.std(aup_nodes))
print('Unseen Edges')
print('AUC: ', np.mean(auc_edges), '+-', np.std(auc_edges))
print('AUP: ', np.mean(aup_edges), '+-', np.std(aup_edges))
```
### Generate the degree files - Unseen Targets
```
for fold in tqdm(range(5)):
train = pd.read_csv('../data/sars-busters-consolidated/GitData/VecNet_Unseen_Targets/train_' + str(fold) + '.csv')
edges_test = pd.read_csv('../data/sars-busters-consolidated/GitData/VecNet_Unseen_Targets/test_unseen_edges_' + str(fold) + '.csv')
nodes_test = pd.read_csv('../data/sars-busters-consolidated/GitData/VecNet_Unseen_Targets/test_unseen_nodes_' + str(fold) + '.csv')
dump_degree_sequences(train,nodes_test,fold=fold,unseen_folder='VecNet_Unseen_Targets')
```
### Get Unseen Targets Test Performance
```
auc_targets = []
aup_targets = []
for fold in tqdm(range(5)):
train = pd.read_csv('../data/sars-busters-consolidated/GitData/VecNet_Unseen_Targets/train_' + str(fold) + '.csv')
edges_test = pd.read_csv('../data/sars-busters-consolidated/GitData/VecNet_Unseen_Targets/test_unseen_edges_' + str(fold) + '.csv')
nodes_test = pd.read_csv('../data/sars-busters-consolidated/GitData/VecNet_Unseen_Targets/test_unseen_nodes_' + str(fold) + '.csv')
ligand_file_path = '/data/sars-busters-consolidated/GitData/VecNet_Unseen_Targets/Degree_Sequences/ligands_' + str(fold) + '.txt'
target_file_path = '/data/sars-busters-consolidated/GitData/VecNet_Unseen_Targets/Degree_Sequences/targets_' + str(fold) + '.txt'
summat10_file_path = '/data/sars-busters-consolidated/GitData/VecNet_Unseen_Targets/Degree_Sequences/summat10_' + str(fold) + '.csv'
summat01_file_path = '/data/sars-busters-consolidated/GitData/VecNet_Unseen_Targets/Degree_Sequences/summat01_' + str(fold) + '.csv'
try:
auc, aup = get_configuration_model_performance(train,nodes_test,ligand_file_path,target_file_path,summat10_file_path,summat01_file_path)
auc_targets.append(auc)
aup_targets.append(aup)
except Exception:
continue
print('Unseen Targets')
print('AUC: ', np.mean(auc_targets), '+-', np.std(auc_targets))
print('AUP: ', np.mean(aup_targets), '+-', np.std(aup_targets))
```
| github_jupyter |
##### Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Bayesian Gaussian Mixture Model and Hamiltonian MCMC
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/Bayesian_Gaussian_Mixture_Model"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Bayesian_Gaussian_Mixture_Model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Bayesian_Gaussian_Mixture_Model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Bayesian_Gaussian_Mixture_Model.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this colab we'll explore sampling from the posterior of a Bayesian Gaussian Mixture Model (BGMM) using only TensorFlow Probability primitives.
## Model
For $k\in\{1,\ldots, K\}$ mixture components each of dimension $D$, we'd like to model $i\in\{1,\ldots,N\}$ iid samples using the following Bayesian Gaussian Mixture Model:
$$\begin{align*}
\theta &\sim \text{Dirichlet}(\text{concentration}=\alpha_0)\\
\mu_k &\sim \text{Normal}(\text{loc}=\mu_{0k}, \text{scale}=I_D)\\
T_k &\sim \text{Wishart}(\text{df}=5, \text{scale}=I_D)\\
Z_i &\sim \text{Categorical}(\text{probs}=\theta)\\
Y_i &\sim \text{Normal}(\text{loc}=\mu_{z_i}, \text{scale}=T_{z_i}^{-1/2})\\
\end{align*}$$
Note, the `scale` arguments all have `cholesky` semantics. We use this convention because it is that of TF Distributions (which itself uses this convention in part because it is computationally advantageous).
Our goal is to generate samples from the posterior:
$$p\left(\theta, \{\mu_k, T_k\}_{k=1}^K \Big| \{y_i\}_{i=1}^N, \alpha_0, \{\mu_{ok}\}_{k=1}^K\right)$$
Notice that $\{Z_i\}_{i=1}^N$ is not present--we're interested in only those random variables which don't scale with $N$. (And luckily there's a TF distribution which handles marginalizing out $Z_i$.)
It is not possible to directly sample from this distribution owing to a computationally intractable normalization term.
[Metropolis-Hastings algorithms](https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm) are a technique for sampling from intractable-to-normalize distributions.
TensorFlow Probability offers a number of MCMC options, including several based on Metropolis-Hastings. In this notebook, we'll use [Hamiltonian Monte Carlo](https://en.wikipedia.org/wiki/Hamiltonian_Monte_Carlo) (`tfp.mcmc.HamiltonianMonteCarlo`). HMC is often a good choice because it can converge rapidly, samples the state space jointly (as opposed to coordinatewise), and leverages one of TF's virtues: automatic differentiation. That said, sampling from a BGMM posterior might actually be better done by other approaches, e.g., [Gibbs sampling](https://en.wikipedia.org/wiki/Gibbs_sampling).
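To make the Metropolis-Hastings idea concrete before we wire up TFP, here is a toy sketch of our own: a random-walk sampler targeting a 1-D standard normal, using only an *unnormalized* log-density (not the BGMM, and not using TFP):

```python
import numpy as np

# Unnormalized log-density of a standard normal (the constant is dropped
# on purpose: MH only needs density *ratios*).
def unnormalized_log_prob(x):
    return -0.5 * x**2

rng = np.random.default_rng(0)
x, samples = 0.0, []
for _ in range(20000):
    proposal = x + rng.normal(scale=1.0)            # symmetric random-walk proposal
    log_accept = unnormalized_log_prob(proposal) - unnormalized_log_prob(x)
    if np.log(rng.uniform()) < log_accept:          # accept with prob min(1, ratio)
        x = proposal
    samples.append(x)

# After discarding burn-in, the sample mean and std should be near 0 and 1.
print(np.mean(samples[5000:]), np.std(samples[5000:]))
```

HMC improves on this random walk by using gradients to propose distant, high-acceptance moves, which is why it needs a differentiable target.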
```
%matplotlib inline
import functools
import matplotlib.pyplot as plt; plt.style.use('ggplot')
import numpy as np
import seaborn as sns; sns.set_context('notebook')
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
physical_devices = tf.config.experimental.list_physical_devices('GPU')
if len(physical_devices) > 0:
tf.config.experimental.set_memory_growth(physical_devices[0], True)
```
Before actually building the model, we'll need to define a new type of distribution. From the model specification above, it's clear we're parameterizing the MVN with an inverse covariance matrix, i.e., [precision matrix](https://en.wikipedia.org/wiki/Precision_(statistics%29). To accomplish this in TF, we'll need to roll our own `Bijector`. This `Bijector` will use the forward transformation:
- `Y = tf.linalg.triangular_solve(tf.linalg.matrix_transpose(chol_precision_tril), X, adjoint=True) + loc`.
And the `log_prob` calculation is just the inverse, i.e.:
- `X = tf.linalg.matmul(chol_precision_tril, X - loc, adjoint_a=True)`.
Since all we need for HMC is `log_prob`, this means we avoid ever calling `tf.linalg.triangular_solve` (as would be the case for `tfd.MultivariateNormalTriL`). This is advantageous since `tf.linalg.matmul` is usually faster owing to better cache locality.
```
class MVNCholPrecisionTriL(tfd.TransformedDistribution):
"""MVN from loc and (Cholesky) precision matrix."""
def __init__(self, loc, chol_precision_tril, name=None):
super(MVNCholPrecisionTriL, self).__init__(
distribution=tfd.Independent(tfd.Normal(tf.zeros_like(loc),
scale=tf.ones_like(loc)),
reinterpreted_batch_ndims=1),
bijector=tfb.Chain([
tfb.Shift(shift=loc),
tfb.Invert(tfb.ScaleMatvecTriL(scale_tril=chol_precision_tril,
adjoint=True)),
]),
name=name)
```
The `tfd.Independent` distribution turns independent draws of one distribution into a multivariate distribution with statistically independent coordinates. In terms of computing `log_prob`, this "meta-distribution" manifests as a simple sum over the event dimension(s).
Also notice that we took the `adjoint` ("transpose") of the scale matrix. This is because if precision is inverse covariance, i.e., $P=C^{-1}$ and if $C=AA^\top$, then $P=BB^{\top}$ where $B=A^{-\top}$.
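The adjoint identity is easy to check numerically; a quick sketch with NumPy (using the same lower-triangular matrix as the verification cell below):

```python
import numpy as np

# Numeric check of the identity: if C = A A^T and P = C^{-1},
# then P = B B^T with B = A^{-T}.
A = np.array([[1., 0.],
              [2., 8.]])        # lower-triangular "scale"
C = A @ A.T                     # covariance
P = np.linalg.inv(C)            # precision
B = np.linalg.inv(A).T          # A^{-T}
assert np.allclose(P, B @ B.T)
```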
Since this distribution is kind of tricky, let's quickly verify that our `MVNCholPrecisionTriL` works as we think it should.
```
def compute_sample_stats(d, seed=42, n=int(1e6)):
x = d.sample(n, seed=seed)
sample_mean = tf.reduce_mean(x, axis=0, keepdims=True)
s = x - sample_mean
sample_cov = tf.linalg.matmul(s, s, adjoint_a=True) / tf.cast(n, s.dtype)
sample_scale = tf.linalg.cholesky(sample_cov)
sample_mean = sample_mean[0]
return [
sample_mean,
sample_cov,
sample_scale,
]
dtype = np.float32
true_loc = np.array([1., -1.], dtype=dtype)
true_chol_precision = np.array([[1., 0.],
[2., 8.]],
dtype=dtype)
true_precision = np.matmul(true_chol_precision, true_chol_precision.T)
true_cov = np.linalg.inv(true_precision)
d = MVNCholPrecisionTriL(
loc=true_loc,
chol_precision_tril=true_chol_precision)
[sample_mean, sample_cov, sample_scale] = [
t.numpy() for t in compute_sample_stats(d)]
print('true mean:', true_loc)
print('sample mean:', sample_mean)
print('true cov:\n', true_cov)
print('sample cov:\n', sample_cov)
```
Since the sample mean and covariance are close to the true mean and covariance, it seems like the distribution is correctly implemented. Now, we'll use `MVNCholPrecisionTriL` and `tfp.distributions.JointDistributionNamed` to specify the BGMM model. For the observational model, we'll use `tfd.MixtureSameFamily` to automatically integrate out the $\{Z_i\}_{i=1}^N$ draws.
```
dtype = np.float64
dims = 2
components = 3
num_samples = 1000
bgmm = tfd.JointDistributionNamed(dict(
mix_probs=tfd.Dirichlet(
concentration=np.ones(components, dtype) / 10.),
loc=tfd.Independent(
tfd.Normal(
loc=np.stack([
-np.ones(dims, dtype),
np.zeros(dims, dtype),
np.ones(dims, dtype),
]),
scale=tf.ones([components, dims], dtype)),
reinterpreted_batch_ndims=2),
precision=tfd.Independent(
tfd.WishartTriL(
df=5,
scale_tril=np.stack([np.eye(dims, dtype=dtype)]*components),
input_output_cholesky=True),
reinterpreted_batch_ndims=1),
s=lambda mix_probs, loc, precision: tfd.Sample(tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(probs=mix_probs),
components_distribution=MVNCholPrecisionTriL(
loc=loc,
chol_precision_tril=precision)),
sample_shape=num_samples)
))
def joint_log_prob(observations, mix_probs, loc, chol_precision):
"""BGMM with priors: loc=Normal, precision=Inverse-Wishart, mix=Dirichlet.
Args:
observations: `[n, d]`-shaped `Tensor` representing Bayesian Gaussian
Mixture model draws. Each sample is a length-`d` vector.
mix_probs: `[K]`-shaped `Tensor` representing random draw from
`Dirichlet` prior.
loc: `[K, d]`-shaped `Tensor` representing the location parameter of the
`K` components.
chol_precision: `[K, d, d]`-shaped `Tensor` representing `K` lower
triangular `cholesky(Precision)` matrices, each being sampled from
a Wishart distribution.
Returns:
log_prob: `Tensor` representing joint log-density over all inputs.
"""
return bgmm.log_prob(
mix_probs=mix_probs, loc=loc, precision=chol_precision, s=observations)
```
## Generate "Training" Data
For this demo, we'll sample some random data.
```
true_loc = np.array([[-2., -2],
[0, 0],
[2, 2]], dtype)
random = np.random.RandomState(seed=43)
true_hidden_component = random.randint(0, components, num_samples)
observations = (true_loc[true_hidden_component] +
random.randn(num_samples, dims).astype(dtype))
```
## Bayesian Inference using HMC
Now that we've used TFD to specify our model and obtained some observed data, we have all the necessary pieces to run HMC.
To do this, we'll use a [partial application](https://en.wikipedia.org/wiki/Partial_application) to "pin down" the things we don't want to sample. In this case that means we need only pin down `observations`. (The hyper-parameters are already baked in to the prior distributions and not part of the `joint_log_prob` function signature.)
```
unnormalized_posterior_log_prob = functools.partial(joint_log_prob, observations)
initial_state = [
tf.fill([components],
value=np.array(1. / components, dtype),
name='mix_probs'),
tf.constant(np.array([[-2., -2],
[0, 0],
[2, 2]], dtype),
name='loc'),
tf.linalg.eye(dims, batch_shape=[components], dtype=dtype, name='chol_precision'),
]
```
### Unconstrained Representation
Hamiltonian Monte Carlo (HMC) requires the target log-probability function be differentiable with respect to its arguments. Furthermore, HMC can exhibit dramatically higher statistical efficiency if the state-space is unconstrained.
This means we'll have to work out two main issues when sampling from the BGMM posterior:
1. $\theta$ represents a discrete probability vector, i.e., must be such that $\sum_{k=1}^K \theta_k = 1$ and $\theta_k>0$.
2. $T_k$ represents an inverse covariance matrix, i.e., must be such that $T_k \succ 0$, i.e., is [positive definite](https://en.wikipedia.org/wiki/Positive-definite_matrix).
To address these requirements we'll need to:
1. transform the constrained variables to an unconstrained space
2. run the MCMC in unconstrained space
3. transform the unconstrained variables back to the constrained space.
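As a concrete picture of steps 1 and 3 for the probability vector, here is a plain-NumPy sketch of mapping a point of the simplex to and from unconstrained space via a centered softmax (illustrative only — in the actual code this is handled by TFP's bijectors):

```python
import numpy as np

def to_simplex(x):
    # Forward: map an unconstrained vector in R^{K-1} onto the K-simplex
    # by appending a fixed 0 logit and applying softmax (the idea behind
    # a centered softmax transform).
    logits = np.append(x, 0.0)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def from_simplex(p):
    # Inverse: log-ratios against the last coordinate recover the logits.
    return np.log(p[:-1]) - np.log(p[-1])

x = np.array([0.3, -1.2])     # unconstrained point in R^2
p = to_simplex(x)             # valid 3-way probability vector
```

The round trip `from_simplex(to_simplex(x))` returns `x`, so MCMC can move freely in the unconstrained space while the model always sees a valid probability vector.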
As with `MVNCholPrecisionTriL`, we'll use [`Bijector`s](https://www.tensorflow.org/api_docs/python/tf/distributions/bijectors/Bijector) to transform random variables to unconstrained space.
- The [`Dirichlet`](https://en.wikipedia.org/wiki/Dirichlet_distribution) is transformed to unconstrained space via the [softmax function](https://en.wikipedia.org/wiki/Softmax_function).
- Our precision random variable is a distribution over positive-definite matrices. To unconstrain these we'll use the `FillTriangular` and `TransformDiagonal` bijectors. These convert vectors to lower-triangular matrices and ensure the diagonal is positive. The former is useful because it enables sampling only $d(d+1)/2$ floats rather than $d^2$.
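A plain-NumPy sketch of the fill-triangular idea — packing $d(d+1)/2$ unconstrained floats into a lower-triangular matrix with a softplus-positive diagonal (the fill order here is simple row-major, which need not match TFP's exact convention):

```python
import numpy as np

def fill_triangular_softplus_diag(v, d):
    # Pack d*(d+1)/2 unconstrained floats into a lower-triangular d x d
    # matrix, then force a positive diagonal via softplus.
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = v          # row-major fill of the lower triangle
    idx = np.diag_indices(d)
    L[idx] = np.log1p(np.exp(L[idx]))  # softplus keeps the diagonal > 0
    return L

L = fill_triangular_softplus_diag(np.array([0.5, -1.0, 0.0, 2.0, 1.0, -0.3]), 3)
```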
```
unconstraining_bijectors = [
tfb.SoftmaxCentered(),
tfb.Identity(),
tfb.Chain([
tfb.TransformDiagonal(tfb.Softplus()),
tfb.FillTriangular(),
])]
@tf.function(autograph=False)
def sample():
return tfp.mcmc.sample_chain(
num_results=2000,
num_burnin_steps=500,
current_state=initial_state,
kernel=tfp.mcmc.SimpleStepSizeAdaptation(
tfp.mcmc.TransformedTransitionKernel(
inner_kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=unnormalized_posterior_log_prob,
step_size=0.065,
num_leapfrog_steps=5),
bijector=unconstraining_bijectors),
num_adaptation_steps=400),
trace_fn=lambda _, pkr: pkr.inner_results.inner_results.is_accepted)
[mix_probs, loc, chol_precision], is_accepted = sample()
```
Having run the chain above, we can now compute and print the posterior means.
```
acceptance_rate = tf.reduce_mean(tf.cast(is_accepted, dtype=tf.float32)).numpy()
mean_mix_probs = tf.reduce_mean(mix_probs, axis=0).numpy()
mean_loc = tf.reduce_mean(loc, axis=0).numpy()
mean_chol_precision = tf.reduce_mean(chol_precision, axis=0).numpy()
precision = tf.linalg.matmul(chol_precision, chol_precision, transpose_b=True)
print('acceptance_rate:', acceptance_rate)
print('avg mix probs:', mean_mix_probs)
print('avg loc:\n', mean_loc)
print('avg chol(precision):\n', mean_chol_precision)
loc_ = loc.numpy()
ax = sns.kdeplot(loc_[:,0,0], loc_[:,0,1], shade=True, shade_lowest=False)
ax = sns.kdeplot(loc_[:,1,0], loc_[:,1,1], shade=True, shade_lowest=False)
ax = sns.kdeplot(loc_[:,2,0], loc_[:,2,1], shade=True, shade_lowest=False)
plt.title('KDE of loc draws');
```
## Conclusion
This simple colab demonstrated how TensorFlow Probability primitives can be used to build hierarchical Bayesian mixture models.
```
from ciml import gather_results
from ciml import tf_trainer
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.cm as cmx
import matplotlib.colors as pltcolors
import matplotlib.pyplot as plt
import plotly.express as px
from plotly.subplots import make_subplots
import collections
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from subunit2sql.db import api
from tensorflow.python.training import adagrad
from sklearn.metrics import confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
from sklearn.metrics import classification_report
data_path = '/Users/kw/ciml_data/output/data'
dataset = '1m-1min-node_provider'
experiment = 'dnn-3x10-100epochs-bs128'
file_name = 'prediction_' + dataset
#Prediction_Data consists of EID, Pred, Class
prediction_data = gather_results.load_data_json(dataset, file_name,
sub_folder=experiment, data_path=data_path)
examples_id = [x[0] for x in prediction_data]
pred_classes = [x[1]['classes'] for x in prediction_data]
pred_classes_norm = [x[0][:10] for x in pred_classes]
labels = [x[2] for x in prediction_data]
labels_norm = [x[:10] for x in labels]
#Choose class_names based on classification problem
class_names = ['ovh','rax','vexxhost','inap-mtl01','fortnebula-regionone','limestone-regionone','packethost-us-west-1']
class_names_norm = [x[:10] for x in class_names]
#class_names = ['passed','failed']
pred_classes_norm
#Generates confusion matrix
confusion = confusion_matrix(labels_norm,pred_classes_norm,labels=class_names_norm)
confusion
#Plots confusion matrix
fig, ax = plot_confusion_matrix(conf_mat=confusion,
show_absolute=True,
show_normed=True,
colorbar=True,
class_names=class_names_norm)
plt.show()
# Generate and print metrics report using sklearn
# sklearn.metrics.classification_report(y_true, y_pred, labels=None, target_names=None, sample_weight=None, digits=2, output_dict=False)
#The precision is the ratio tp / (tp + fp).
#The precision is intuitively the ability of the classifier not to label as positive a sample that is negative.
#The recall is the ratio tp / (tp + fn).
#The recall is intuitively the ability of the classifier to find all the positive samples.
#The F-beta score can be interpreted as a weighted harmonic mean of the precision and recall,
#where an F-beta score reaches its best value at 1 and worst score at 0.
#The F-beta score weights recall more than precision by a factor of beta.
#beta == 1.0 means recall and precision are equally important.
#F1 = 2 * (precision * recall) / (precision + recall)
#The support is the number of occurrences of each class in y_true.
report = classification_report(labels_norm, pred_classes_norm)
print(report)
```
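As an illustration of how the precision, recall, and support described in the comments above fall out of the confusion matrix, here is a small self-contained sketch (the helper name and toy matrix are made up for the example):

```python
import numpy as np

def per_class_metrics(confusion):
    # Rows = true class, columns = predicted class (sklearn's convention).
    confusion = np.asarray(confusion, dtype=float)
    tp = np.diag(confusion)                  # true positives per class
    support = confusion.sum(axis=1)          # occurrences of each true class
    predicted = confusion.sum(axis=0)        # predictions made per class
    precision = np.divide(tp, predicted, out=np.zeros_like(tp), where=predicted > 0)
    recall = np.divide(tp, support, out=np.zeros_like(tp), where=support > 0)
    denom = precision + recall
    f1 = np.divide(2 * precision * recall, denom, out=np.zeros_like(tp), where=denom > 0)
    return precision, recall, f1, support

# Toy 2-class matrix: class 0 has 8 true positives, 2 false negatives,
# and picks up 1 false positive from class 1.
p, r, f1, s = per_class_metrics([[8, 2], [1, 9]])
```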
# Prospect demo 2: inspect a set of targets
See the companion notebook `Prospect_demo.ipynb` for more general information.
Note that standalone VI pages (html files) can also be created from a list of targets; see examples 6 and 10 in prospect/bin/examples_prospect_pages.sh
```
import os, sys
# If not using the desiconda version of prospect: EDIT THIS to your path
#sys.path.insert(0,"/global/homes/X/XXXXX/prospect/py")
from prospect import viewer, utilities
```
#### Prepare a set of Spectra + redrock outputs matching a list of targets
This makes use of functions implemented in `prospect.utilities`
```
### 1) List all targets in files where to look for:
datadir = os.environ['DESI_SPECTRO_REDUX']+'/denali/tiles/cumulative' # EDIT THIS
tiles = ['81062', '80654'] # EDIT THIS
# help(subset_db):
# - can filter pixels, tiles, nights, expids, petals, survey-program
# - The directory tree and file names must be among the following (supports daily, andes... everest):
# dirtree_type='healpix': {datadir}/{survey}/{program}/{pixel//100}/{pixel}/{spectra_type}-{survey}-{program}-{pixel}.fits
# dirtree_type='pernight': {datadir}/{tileid}/{night}/{spectra_type}-{petal}-{tile}-{night}.fits
# dirtree_type='perexp': {datadir}/{tileid}/{expid}/{spectra_type}-{petal}-{tile}-exp{expid}.fits
# dirtree_type='cumulative': {datadir}/{tileid}/{night}/{spectra_type}-{petal}-{tile}-thru{night}.fits
# To use blanc/cascades 'all' (resp 'deep') coadds, use dirtree_type='pernight' and nights=['all'] (resp ['deep']).
subset_db = utilities.create_subsetdb(datadir, dirtree_type='cumulative', tiles=tiles)
# (
# Another example with everest/healpix data:
# subset_db = utilities.create_subsetdb(datadir, dirtree_type='healpix', survey-program=['main','dark'], pixels=['9557'])
# )
target_db = utilities.create_targetdb(datadir, subset_db, dirtree_type='cumulative')
### 2) Enter your list of targets here. Warning: ** Targetids must be int64 **
targets = [ 616094114412233066, 39632930179383639, 39632930179384420, 39632930179384518,
616094111199396534, 616094114420622028, 616094111195201658 ]
### Prepare adapted set of Spectra + catalogs
# spectra, zcat, rrcat are entry-matched
# if with_redrock==False, rrcat is None
spectra, zcat, rrcat = utilities.load_spectra_zcat_from_targets(targets, datadir, target_db, dirtree_type='cumulative', with_redrock_details=True)
```
#### VI interface in notebook
```
# Run this cell to have the VI tool !
viewer.plotspectra(spectra, zcatalog=zcat, redrock_cat=rrcat, notebook=True,
title='My TARGETIDs', top_metadata=['TARGETID', 'TILEID', 'mag_G'])
```
<a href="https://colab.research.google.com/github/google-research/tapas/blob/master/notebooks/tabfact_predictions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
##### Copyright 2020 The Google AI Language Team Authors
Licensed under the Apache License, Version 2.0 (the "License");
```
# Copyright 2019 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
Running a Tapas fine-tuned checkpoint
---
This notebook shows how to load and make predictions with a TAPAS model, which was introduced in the paper: [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349)
# Clone and install the repository
First, let's install the code.
```
! pip install tapas-table-parsing
```
# Fetch models from Google Storage
Next we fetch a pretrained checkpoint from Google Storage. For the sake of speed, this is a medium-sized model trained on [TABFACT](https://tabfact.github.io/). Note that the best results in the paper were obtained with a large model.
```
! gsutil cp "gs://tapas_models/2020_10_07/tapas_tabfact_inter_masklm_medium_reset.zip" "tapas_model.zip" && unzip tapas_model.zip
! mv tapas_tabfact_inter_masklm_medium_reset tapas_model
```
# Imports
```
import tensorflow.compat.v1 as tf
import os
import shutil
import csv
import pandas as pd
import IPython
tf.get_logger().setLevel('ERROR')
from tapas.utils import tf_example_utils
from tapas.protos import interaction_pb2
from tapas.utils import number_annotation_utils
import math
```
# Load checkpoint for prediction
Here's the prediction code, which will create an `interaction_pb2.Interaction` protobuf object (the data structure we use to store examples) and then call the prediction script.
```
os.makedirs('results/tabfact/tf_examples', exist_ok=True)
os.makedirs('results/tabfact/model', exist_ok=True)
with open('results/tabfact/model/checkpoint', 'w') as f:
f.write('model_checkpoint_path: "model.ckpt-0"')
for suffix in ['.data-00000-of-00001', '.index', '.meta']:
shutil.copyfile(f'tapas_model/model.ckpt{suffix}', f'results/tabfact/model/model.ckpt-0{suffix}')
max_seq_length = 512
vocab_file = "tapas_model/vocab.txt"
config = tf_example_utils.ClassifierConversionConfig(
vocab_file=vocab_file,
max_seq_length=max_seq_length,
max_column_id=max_seq_length,
max_row_id=max_seq_length,
strip_column_names=False,
add_aggregation_candidates=False,
)
converter = tf_example_utils.ToClassifierTensorflowExample(config)
def convert_interactions_to_examples(tables_and_queries):
"""Calls Tapas converter to convert interaction to example."""
for idx, (table, queries) in enumerate(tables_and_queries):
interaction = interaction_pb2.Interaction()
for position, query in enumerate(queries):
question = interaction.questions.add()
question.original_text = query
question.id = f"{idx}-0_{position}"
for header in table[0]:
interaction.table.columns.add().text = header
for line in table[1:]:
row = interaction.table.rows.add()
for cell in line:
row.cells.add().text = cell
number_annotation_utils.add_numeric_values(interaction)
for i in range(len(interaction.questions)):
try:
yield converter.convert(interaction, i)
except ValueError as e:
print(f"Can't convert interaction: {interaction.id} error: {e}")
def write_tf_example(filename, examples):
with tf.io.TFRecordWriter(filename) as writer:
for example in examples:
writer.write(example.SerializeToString())
def predict(table_data, queries):
table = [list(map(lambda s: s.strip(), row.split("|")))
for row in table_data.split("\n") if row.strip()]
examples = convert_interactions_to_examples([(table, queries)])
write_tf_example("results/tabfact/tf_examples/test.tfrecord", examples)
write_tf_example("results/tabfact/tf_examples/dev.tfrecord", [])
! python -m tapas.run_task_main \
--task="TABFACT" \
--output_dir="results" \
--noloop_predict \
--test_batch_size={len(queries)} \
--tapas_verbosity="ERROR" \
--compression_type= \
--reset_position_index_per_cell \
--init_checkpoint="tapas_model/model.ckpt" \
--bert_config_file="tapas_model/bert_config.json" \
--mode="predict" 2> error
results_path = "results/tabfact/model/test.tsv"
all_results = []
df = pd.DataFrame(table[1:], columns=table[0])
display(IPython.display.HTML(df.to_html(index=False)))
print()
with open(results_path) as csvfile:
reader = csv.DictReader(csvfile, delimiter='\t')
for row in reader:
supported = int(row["pred_cls"])
all_results.append(supported)
score = float(row["logits_cls"])
position = int(row['position'])
if supported:
print("> SUPPORTS:", queries[position])
else:
print("> REFUTES:", queries[position])
return all_results
```
# Predict
```
# Based on TabFact table 2-1610384-4.html.csv
result = predict("""
tournament | wins | top - 10 | top - 25 | events | cuts made
masters tournament | 0 | 0 | 1 | 3 | 2
us open | 0 | 0 | 0 | 4 | 3
the open championship | 0 | 0 | 0 | 2 | 1
pga championship | 0 | 1 | 1 | 4 | 2
totals | 0 | 1 | 2 | 13 | 8
""", ["The most frequently occurring number of events is 4", "The most frequently occurring number of events is 3"])
```
# Parts of a function
## Bibliography
- Wai Kai Chen, chapter 2
- Araujo, chapter 2
- Schaumann & M.E. Van Valkenburg, chapter 11
## Introduction
The study of network functions is one of the main threads of this course, and the detailed analysis of their parts leads to applications specific to each of the units we will study throughout the year. A *part* is a function, derived from the network function in the Laplace domain, that conceptually represents one aspect of the characteristic under study.
The parts of the network function $F_{(s)}$ that we will be interested in analyzing are:
- The even part of $F_{(s)}$.
- The odd part of $F_{(s)}$.
- The real part of $F_{(s)}$.
- The imaginary part of $F_{(s)}$.
- The magnitude of $F_{(s)}$.
- The phase of $F_{(s)}$.
- The delay of $F_{(s)}$.
## Analysis methodology
The parts of $F_{(s)}$ are related to one another according to the following diagram:
<img src="parte_de_funcion/relacion_entre_partes.svg" />
In the section "Analysis of the parts of a function" we will see in detail how these links arise. What is interesting about this diagram is that it clearly shows the relationships between the parts going downward, from $F_{(s)}$ to a part (an analysis problem), and from a part back to $F_{(s)}$ (generally associated with synthesis problems).
## Analysis of the parts of a function
Let us start from a general form of $F_{(s)}$:
$F_{(s)} = \frac{a_n s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0}{b_m s^m + b_{m-1} s^{m-1} + ... + b_1 s + b_0}$
$F_{(s)} = \frac{P_{(s)}}{Q_{(s)}}$
Where:
- $P_{(s)} = a_n s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0 $
- $Q_{(s)} = b_m s^m + b_{m-1} s^{m-1} + ... + b_1 s + b_0 $
- $m \geq n$
That is, we express the network function $F_{(s)}$ as a ratio of polynomials in which the degree of the denominator is greater than or equal to that of the numerator.
Moreover, $F_{(s)}$ is a function with Hermitian symmetry. That is:
$|F_{(s)}|^2 = F_{(s)} F_{(-s)}$
For the analysis that follows, it is convenient to express $P_{(s)}$ and $Q_{(s)}$ each as a sum of two polynomials, one collecting the [monomials](https://es.wikipedia.org/wiki/Monomio) of $s$ of even degree and the other those of odd degree. We denote these polynomials $m_i$ and $n_i$, where $m_i$ is the sum of the even monomials, $n_i$ is the sum of the odd monomials, and $i$ is 1 or 2 depending on whether it belongs to the numerator or the denominator.
$P_{(s)} = m_1 + n_1 \to m_1 = a_0 + a_2 s^2 + ... , n_1 = a_1 s + a_3 s^3 + ...$
$Q_{(s)} = m_2 + n_2 \to m_2 = b_0 + b_2 s^2 + ... , n_2 = b_1 s + b_3 s^3 + ...$
Thus:
$F_{(s)} = \frac{m_1 + n_1}{m_2 + n_2}$
### Obtaining the even and odd parts of $F_{(s)}$
As noted above, the polynomials $m_i$ are even and the $n_i$ are odd, i.e.
<table width="300">
<tr>
<td style="text-align:center">Even</td>
<td style="text-align:center">Odd</td>
</tr>
<tr>
<th style="text-align:center">$m_{1(s)} = m_{1(-s)}$</th>
<th style="text-align:center">$n_{1(s)} = -n_{1(-s)}$</th>
</tr>
<tr>
<th style="text-align:center">$m_{2(s)} = m_{2(-s)}$</th>
<th style="text-align:center">$n_{2(s)} = -n_{2(-s)}$</th>
</tr>
</table>
Therefore, if we multiply the numerator and denominator of $F_{(s)}$ by $Q_{(-s)} = m_2 - n_2$, $F_{(s)}$ is unaffected, but:
$F_{(s)} = \frac{P_{(s)} Q_{(-s)}}{Q_{(s)} Q_{(-s)}}$
$F_{(s)} = \frac{(m_1 + n_1)(m_2 - n_2)}{(m_2 + n_2)(m_2 - n_2)}$
$F_{(s)} = \frac{m_1 m_2 - n_1 n_2 + n_1 m_2 - m_1 n_2}{m_2^2 - n_2^2}$
$F_{(s)} = \frac{m_1 m_2 - n_1 n_2}{m_2^2 - n_2^2} + \frac{n_1 m_2 - m_1 n_2}{m_2^2 - n_2^2}$
We can see that:
- The product of an even polynomial with an odd one is an odd polynomial $\to m_i n_j = n_k$.
- The product of an even polynomial with an even one is an even polynomial $\to m_i m_j = m_k$.
- The product of an odd polynomial with an odd one is an even polynomial $\to n_i n_j = m_k$.
In other words, multiplying an even or odd polynomial by an even one preserves its parity, while multiplying it by an odd one inverts it. Thus the resulting denominator of $F_{(s)}$ is even, the first term has an even numerator, and the second has an odd numerator:
$F_{(s)} = M_{(s)} + N_{(s)}$
$M_{(s)} = \frac{m_1 m_2 - n_1 n_2}{m_2^2 - n_2^2} \to$ the even part of $F_{(s)}$.
$N_{(s)} = \frac{n_1 m_2 - m_1 n_2}{m_2^2 - n_2^2} \to$ the odd part of $F_{(s)}$.
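As a quick numeric sanity check (not from the original text), the even and odd parts can equivalently be computed as $M_{(s)} = (F_{(s)} + F_{(-s)})/2$ and $N_{(s)} = (F_{(s)} - F_{(-s)})/2$; the sketch below verifies this decomposition for an arbitrarily chosen rational function:

```python
def F(s):
    # An arbitrary rational example: F(s) = (s^2 + 3s + 2) / (s^3 + 2s^2 + s + 1)
    return (s**2 + 3*s + 2) / (s**3 + 2*s**2 + s + 1)

def even_part(f, s):
    return (f(s) + f(-s)) / 2

def odd_part(f, s):
    return (f(s) - f(-s)) / 2

s0 = 0.7 + 0.3j          # arbitrary complex test point
M = even_part(F, s0)
N = odd_part(F, s0)
# M is even, N is odd, and they sum back to F.
```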
### Obtaining the real and imaginary parts of $F_{(jw)}$
Starting from the previous expressions, we now analyze the real and imaginary parts of $F_{(s)}$. To do so, it helps to first examine what happens to the complex variable $s$ when it is evaluated purely on the imaginary axis, $jw$.
If $s = jw$, then
<table width="300">
<tr>
<td style="text-align:center">Degree</td>
<td style="text-align:center"> $s$ </td>
<td style="text-align:center"> $w$ </td>
</tr>
<tr>
<th style="text-align:center">$1$</th>
<th style="text-align:center">$s$</th>
<th style="text-align:center">$jw$</th>
</tr>
<tr>
<th style="text-align:center">$2$</th>
<th style="text-align:center">$s^2$</th>
<th style="text-align:center">$-w^2$</th>
</tr>
<tr>
<th style="text-align:center">$3$</th>
<th style="text-align:center">$s^3$</th>
<th style="text-align:center">$-jw^3$</th>
</tr>
<tr>
<th style="text-align:center">$4$</th>
<th style="text-align:center">$s^4$</th>
<th style="text-align:center">$w^4$</th>
</tr>
</table>
As can be seen, degree 5 is equivalent to degree 1 in terms of phase, and degree 6 to degree 2. We can then draw the following conclusions:
- Even polynomials in $s$ become real functions of $w$ when evaluated at $s=jw$.
- Odd polynomials in $s$ become imaginary functions of $jw$ when evaluated at $s=jw$.
Hence:
- $Real\{F_{(s=jw)}\} = M_{(s = jw)}$
- $Imag\{F_{(s=jw)}\} = N_{(s = jw)}$
### Obtaining the magnitude of $F_{(jw)}$
The magnitude part of $F$ is nothing more than the modulus of a complex number whose real and imaginary parts we computed in the previous section.
The squared magnitude of the network function is obtained as follows:
$|F_{(s)}|^2 = F_{(s)} F_{(-s)} = (M_{(s)} + N_{(s)})(M_{(-s)} + N_{(-s)})$
$|F_{(s)}|^2 = M_{(s)} M_{(-s)} + M_{(s)} N_{(-s)} + M_{(-s)} N_{(s)} + N_{(s)} N_{(-s)}$
$|F_{(s)}|^2 = M_{(s)}^2 - N_{(s)}^2$
Now, evaluating the above expression at $s = jw$, where $M_{(jw)}$ is real and $N_{(jw)}$ is purely imaginary (so $N_{(jw)}^2 = -|N_{(jw)}|^2$):
$|F_{(s=jw)}|^2 = |M_{(s=jw)}|^2 + |N_{(s=jw)}|^2$
$|F_{(jw)}|^2 = Real\{F_{(jw)}\}^2 + Imag\{F_{(jw)}\}^2$
$|F_{(jw)}| = \sqrt{Real\{F_{(jw)}\}^2 + Imag\{F_{(jw)}\}^2}$
### Obtaining the phase of $F_{(jw)}$
To obtain the phase function we start from the polar form of $F_{(jw)}$ and its relationship to the phase:
$F_{(jw)} = |F_{(jw)}| e^{j\Theta_{(jw)}}$
$F_{(-jw)} = |F_{(jw)}| e^{-j\Theta_{(jw)}}$
$\frac{F_{(jw)}}{F_{(-jw)}} = \frac{e^{j\Theta_{(jw)}}}{e^{-j\Theta_{(jw)}}}$
$\frac{F_{(jw)}}{F_{(-jw)}} = \frac{cos(\Theta_{(jw)}) + jsin(\Theta_{(jw)})}{cos(\Theta_{(jw)}) - jsin(\Theta_{(jw)})}$
$\frac{F_{(jw)}}{F_{(-jw)}} = \frac{1 + j\frac{sin(\Theta_{(jw)})}{cos(\Theta_{(jw)})}}{1 - j\frac{sin(\Theta_{(jw)})}{cos(\Theta_{(jw)})}}$
$\frac{F_{(jw)}}{F_{(-jw)}} = \frac{1 + jtan(\Theta_{(jw)})}{1 - jtan(\Theta_{(jw)})}$
$jtan(\Theta_{(jw)}) = \frac{F_{(jw)} - F_{(-jw)}}{F_{(jw)} + F_{(-jw)}}$
$jtan(\Theta_{(jw)}) = \frac{2jImag\{F_{(jw)}\}}{2Real\{F_{(jw)}\}}$
$tan(\Theta_{(jw)}) = \frac{Imag\{F_{(jw)}\}}{Real\{F_{(jw)}\}}$
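A small numeric check of the phase relation derived above, using a simple one-pole example ($F_{(jw)} = 1/(jw + 1)$ is an arbitrary choice, not from the text):

```python
import cmath
import math

def F(w):
    # Example network function evaluated on the jw axis: F(jw) = 1 / (jw + 1)
    return 1 / (1j * w + 1)

w = 2.0
theta = cmath.phase(F(w))
# tan(theta) must equal Imag{F(jw)} / Real{F(jw)}, as derived above.
```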
### Obtaining the delay of $F_{(jw)}$
As mentioned earlier, the delay is defined as:
$ \tau_{(w)} = -\frac{d\Theta_{(w)}}{dw}$
Starting from the phase expression:
$\Theta_{(w)} = arctan\left(\frac{Imag\{F_{(jw)}\}}{Real\{F_{(jw)}\}}\right)$
and knowing that:
$ \frac{d}{dx}\{arctan(f_{(x)})\} = \frac{1}{1 + f_{(x)}^2} \frac{df_{(x)}}{dx}$
we can substitute into the original expression:
$ \tau_{(w)} = -\frac{d\Theta_{(w)}}{dw} = -\frac{1}{1 + \frac{Imag^2\{F_{(jw)}\}}{Real^2\{F_{(jw)}\}}} \frac{d}{dw}\{\frac{Imag\{F_{(jw)}\}}{Real\{F_{(jw)}\}}\} $
The above expression can be expanded further, but doing so is pointless without explicit expressions for the real and imaginary parts. It is useful, however, to analyze what happens when we express $F_{(jw)}$ as a product of zeros divided by a product of poles:
$F_{(jw)} = \frac{(jw + \sigma_{z1} + jw_{z1})(jw + \sigma_{z1} - jw_{z1})(jw + \sigma_{z2} + jw_{z2}) (jw + \sigma_{z2} - jw_{z2}) ... }{(jw + \sigma_{p1} + jw_{p1})(jw + \sigma_{p1} - jw_{p1})(jw + \sigma_{p2} + jw_{p2}) (jw + \sigma_{p2} - jw_{p2}) ... }$
whose phase can be expressed as:
$ \Theta_{(w)} = \sum{\Theta_{zi_{(w)}}} - \sum{\Theta_{pi_{(w)}}}$
$ \Theta_{(w)} = \sum{arctan\left(\frac{w - w_{zi}}{\sigma_{zi}} \right)} - \sum{arctan\left(\frac{w - w_{pj}}{\sigma_{pj}} \right)}$
where the derivative of a term $arctan\left(\frac{w - w_{i}}{\sigma_{i}} \right)$ is:
$\frac{d}{dw}\{arctan\left(\frac{w - w_{i}}{\sigma_{i}} \right)\} = \frac{1}{\sigma_i}\frac{1}{1 + \left(\frac{w - w_i}{\sigma_i}\right)^2}$
$\frac{d}{dw}\{arctan\left(\frac{w - w_{i}}{\sigma_{i}} \right)\} = \frac{\sigma_i}{\sigma_i^2 + (w - w_{i})^2}$
There are two kinds of singularities, complex-conjugate pairs and real ones; each contributes:
- Simple real: $\frac{\sigma_i}{\sigma_i^2 + w^2}$
- Complex conjugate: $\frac{\sigma_i}{\sigma_i^2 + (w + w_i)^2} + \frac{\sigma_i}{\sigma_i^2 + (w - w_i)^2} $
Thus, the delay can be rewritten as:
$\tau_{(w)} = - \sum{\left(\frac{\sigma_{zi}}{\sigma_{zi}^2 + (w + w_{zi})^2} + \frac{\sigma_{zi}}{\sigma_{zi}^2 + (w - w_{zi})^2} \right)} + \sum{\left(\frac{\sigma_{pi}}{\sigma_{pi}^2 + (w + w_{pi})^2} + \frac{\sigma_{pi}}{\sigma_{pi}^2 + (w - w_{pi})^2} \right)}$
*Note 1*: the above expression must be modified slightly to incorporate simple real singularities, which contribute a single term instead of two.
*Note 2*: poles and zeros at the origin have been removed, since they contribute $K\frac{\pi}{2}rad$ of phase ($K$ being the multiplicity) but do so constantly, adding nothing to the delay beyond a discontinuity at zero frequency.
## Obtaining $F_{(s)}$ from $\tau_{(w)}$
Let the delay function be $\tau_{(w)} = \frac{2w^2 + 7}{w^4 + 7 w^2 + 10}$; find $F_{(s)}$.
A convenient way to attack this kind of problem is to bring the delay into the summation form we saw earlier, which allows the pole and zero values to be identified. We therefore analyze the roots of the denominator polynomial:
$w^4 + 7 w^2 + 10 = 0 \to w^2_1 = -2; w^2_2 = -5$
This yields:
$\tau_{(w)} = \frac{2w^2 + 7}{(w^2 + 2)(w^2 + 5)}$
which can be expanded in partial fractions:
$\tau_{(w)} = \frac{A}{w^2 + 2} + \frac{B}{w^2 + 5}$
We pick two values of $w^2$ to obtain the equations that let us compute $A$ and $B$. Choosing $w^2 = \{0, 1\}$:
- $5A + 2B = 7$
- $6A + 3B = 9$
from which it is clear that $A = 1$ and $B = 1$. Thus:
$\tau_{(w)} = \frac{1}{w^2 + 2} + \frac{1}{w^2 + 5}$
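A quick numeric check of the partial-fraction expansion, sampling a few arbitrary values of $w$:

```python
def tau(w):
    # Original delay function.
    return (2 * w**2 + 7) / (w**4 + 7 * w**2 + 10)

def tau_pf(w):
    # Partial-fraction form with A = B = 1.
    return 1 / (w**2 + 2) + 1 / (w**2 + 5)

samples = [0.0, 0.5, 1.0, 3.0, 10.0]
max_err = max(abs(tau(w) - tau_pf(w)) for w in samples)
```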
We then see that one singularity lies at $w = \sqrt{2}$ and the other at $w = \sqrt{5}$. From this, the following possibilities for constructing $F_{(s)}$ emerge:
* $F_{(s)} = \frac{K}{(s + \sqrt{2})(s + \sqrt{5})}$
* $F_{(s)} = \frac{K}{s(s + \sqrt{2})(s + \sqrt{5})}$
* $F_{(s)} = \frac{Ks}{(s + \sqrt{2})(s + \sqrt{5})}$
* $F_{(s)} = \frac{K(s - \sqrt{2})}{(s + \sqrt{5})}$
* $F_{(s)} = \frac{K(s - \sqrt{2})}{s(s + \sqrt{5})}$
* $F_{(s)} = \frac{K(s - \sqrt{5})}{(s + \sqrt{2})}$
* $F_{(s)} = \frac{K(s - \sqrt{5})}{s(s + \sqrt{2})}$
All of these options arise because singularities at the origin contribute a constant amount of phase, which leaves no trace in the delay function. To narrow down the function we must supply phase information that lets us choose among the options. For example, suppose the phase at $w=0$ is $\Theta_{(w=0)} = \frac{\pi}{2}$. Then:
* $F_{(s)} = \frac{Ks}{(s + \sqrt{2})(s + \sqrt{5})}$
* $F_{(s)} = \frac{K(s - \sqrt{2})}{s(s + \sqrt{5})}$
* $F_{(s)} = \frac{K(s - \sqrt{5})}{s(s + \sqrt{2})}$
So from seven options we are down to three. Two of these are what is known as non-minimum-phase systems: functions with zeros in the right half-plane. Here we pick the first of the three options in order to move forward with a design, but any of the three would be valid unless some additional constraint rules it out.
We must still determine the value of $K$, and for that we need the magnitude of $F_{(jw)}$ at some value of $w$. Choose, for example (and arbitrarily for the purposes of this problem), $|F_{(jw = 1)}| = 2$:
$|F_{(jw=1)}| = K \frac{1}{\sqrt{5 + 1} \sqrt{2 + 1}} = \frac{K}{\sqrt{3}\sqrt{6}} = 2$
$K = 2 \sqrt{3}\sqrt{6}$
$K = 6\sqrt{2}$
Finally:
$F_{(s)} = \frac{6\sqrt{2}s}{(s + \sqrt{2})(s + \sqrt{5})}$
*Notes:*
- This problem highlights the loss of information incurred by working with the delay function, since it only carries information about the derivative of the phase (a double loss of information).
- In general, problems that ask for $F_{(s)}$ like this one will provide the constraints needed to obtain it (for example a magnitude value and a phase value, and whether or not the function is minimum-phase). In this problem we added the constraints as we went along, as they were needed, to highlight the information-loss effect.
## Obtaining $F_{(s)}$ from the imaginary part
Let the imaginary-part function be $Imag\{Y_{(jw)}\} = j \frac{1-w^2}{w}$, with the value $|Y_{(jw=j1)}| = 2$.
This is a simple example, but like the previous one it illustrates the ambiguities involved in recovering $F_{(s)}$ from one of its parts. Although shortcuts could be taken to reach the result, we will take a few extra steps so as to walk the full path, which will serve as a road map when analyzing other cases.
From the imaginary part we can obtain the odd part by setting $w = \frac{s}{j}$, which gives:
$N_{(s)} = \frac{s^2 + 1}{s}$
We know that $N_{(s)} = \frac{n_1 m_2 - m_1 n_2}{m_2^2 - n_2^2}$, and that the denominator of $N_{(s)}$ must be an even polynomial; the one we have is not, so at least one degree must have been lost through root cancellation when the imaginary part was formed. We raise the degree of numerator and denominator by multiplying by $1 = \frac{s}{s}$:
$N_{(s)} = \frac{(s^2 + 1)s}{s^2} = \frac{n_1 m_2 - m_1 n_2}{m_2^2 - n_2^2}$
From this it follows that $m_2 = 0$, since $m_2$ is an even polynomial and $m_2^2 = s^2$ is impossible. Hence $n_2 = s$, and consequently $m_1 = s^2 + 1$. It remains to find the value of $n_1$.
We can write down the even part with the information gathered so far:
$M_{(s)} = \frac{m_1 m_2 - n_1 n_2}{m_2^2 - n_2^2} = \frac{n_1 n_2}{n_2^2}$
$M_{(s)} = \frac{n_1}{n_2}$
$M_{(s)} = \frac{n_1}{s}$
Taking the real part from $M_{(s)}$:
$Real\{Y_{(jw)}\} = \frac{n_1}{jw}$
$n_1$ is an odd polynomial, so it can be rewritten as $n_{1(jw)} = jG_{(w)}$; the real part therefore has no imaginary component. Let us then use the magnitude condition to compute $n_1$:
$|Y_{(jw)}|^2 = Real^2\{Y_{(jw)}\} + Imag^2\{Y_{(jw)}\}$
With $jw = j1$:
$2^2 = \frac{n_{1(j)}^2}{(j1)^2} + \left( \frac{1 - 1^2}{1} \right)^2$
$4 \cdot (-1) = n_{1(j)}^2$
$n_{1(j)}^2 = -4$
$n_{1(j)} = 2j$
Knowing that $n_1$ is an odd polynomial, we can propose $n_1 = 2s$, which indeed gives $n_{1(j1)} = 2j$. Assembling $Y_{(s)}$:
$Y_{(s)} = \frac{s^2 + 2s + 1}{s}$
#### Copyright 2017 Google LLC.
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# First Steps with TensorFlow
**Learning Objectives:**
* Learn fundamental TensorFlow concepts
* Use the `LinearRegressor` class in TensorFlow to predict median housing price, at the granularity of city blocks, based on one input feature
* Evaluate the accuracy of a model's predictions using Root Mean Squared Error (RMSE)
* Improve the accuracy of a model by tuning its hyperparameters
The [data](https://developers.google.com/machine-learning/crash-course/california-housing-data-description) is based on 1990 census data from California.
## Setup
In this first cell, we'll load the necessary libraries.
```
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
```
Next, we'll load our data set.
```
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
```
We'll randomize the data, just to be sure not to get any pathological ordering effects that might harm the performance of Stochastic Gradient Descent. Additionally, we'll scale `median_house_value` to be in units of thousands, so it can be learned a little more easily with learning rates in a range that we usually use.
```
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
california_housing_dataframe["median_house_value"] /= 1000.0
california_housing_dataframe
```
## Examine the Data
It's a good idea to get to know your data a little bit before you work with it.
We'll print out a quick summary of a few useful statistics on each column: count of examples, mean, standard deviation, max, min, and various quantiles.
```
california_housing_dataframe.describe()
```
## Build the First Model
In this exercise, we'll try to predict `median_house_value`, which will be our label (sometimes also called a target). We'll use `total_rooms` as our input feature.
**NOTE:** Our data is at the city block level, so this feature represents the total number of rooms in that block.
To train our model, we'll use the [LinearRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/LinearRegressor) interface provided by the TensorFlow [Estimator](https://www.tensorflow.org/get_started/estimator) API. This API takes care of a lot of the low-level model plumbing, and exposes convenient methods for performing model training, evaluation, and inference.
### Step 1: Define Features and Configure Feature Columns
In order to import our training data into TensorFlow, we need to specify what type of data each feature contains. There are two main types of data we'll use in this and future exercises:
* **Categorical Data**: Data that is textual. In this exercise, our housing data set does not contain any categorical features, but examples you might see elsewhere include the home style or the words in a real-estate ad.
* **Numerical Data**: Data that is a number (integer or float) and that you want to treat as a number. As we will discuss more later, sometimes you might want to treat numerical data (e.g., a postal code) as if it were categorical.
In TensorFlow, we indicate a feature's data type using a construct called a **feature column**. Feature columns store only a description of the feature data; they do not contain the feature data itself.
To start, we're going to use just one numeric input feature, `total_rooms`. The following code pulls the `total_rooms` data from our `california_housing_dataframe` and defines the feature column using `numeric_column`, which specifies its data is numeric:
```
# Define the input feature: total_rooms.
my_feature = california_housing_dataframe[["total_rooms"]]
# Configure a numeric feature column for total_rooms.
feature_columns = [tf.feature_column.numeric_column("total_rooms")]
```
**NOTE:** The shape of our `total_rooms` data is a one-dimensional array (a list of the total number of rooms for each block). This is the default shape for `numeric_column`, so we don't have to pass it as an argument.
### Step 2: Define the Target
Next, we'll define our target, which is `median_house_value`. Again, we can pull it from our `california_housing_dataframe`:
```
# Define the label.
targets = california_housing_dataframe["median_house_value"]
```
### Step 3: Configure the LinearRegressor
Next, we'll configure a linear regression model using LinearRegressor. We'll train this model using the `GradientDescentOptimizer`, which implements Mini-Batch Stochastic Gradient Descent (SGD). The `learning_rate` argument controls the size of the gradient step.
**NOTE:** To be safe, we also apply [gradient clipping](https://developers.google.com/machine-learning/glossary/#gradient_clipping) to our optimizer via `clip_gradients_by_norm`. Gradient clipping ensures the magnitude of the gradients do not become too large during training, which can cause gradient descent to fail.
```
# Use gradient descent as the optimizer for training the model.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.0000001)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
# Configure the linear regression model with our feature columns and optimizer.
# Set a learning rate of 0.0000001 for Gradient Descent.
linear_regressor = tf.estimator.LinearRegressor(
feature_columns=feature_columns,
optimizer=my_optimizer
)
```
### Step 4: Define the Input Function
To import our California housing data into our `LinearRegressor`, we need to define an input function, which instructs TensorFlow how to preprocess
the data, as well as how to batch, shuffle, and repeat it during model training.
First, we'll convert our *pandas* feature data into a dict of NumPy arrays. We can then use the TensorFlow [Dataset API](https://www.tensorflow.org/programmers_guide/datasets) to construct a dataset object from our data, and then break
our data into batches of `batch_size`, to be repeated for the specified number of epochs (num_epochs).
**NOTE:** When the default value of `num_epochs=None` is passed to `repeat()`, the input data will be repeated indefinitely.
Next, if `shuffle` is set to `True`, we'll shuffle the data so that it's passed to the model randomly during training. The `buffer_size` argument specifies
the size of the dataset from which `shuffle` will randomly sample.
Finally, our input function constructs an iterator for the dataset and returns the next batch of data to the LinearRegressor.
```
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a linear regression model of one feature.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(buffer_size=10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
```
**NOTE:** We'll continue to use this same input function in later exercises. For more
detailed documentation of input functions and the `Dataset` API, see the [TensorFlow Programmer's Guide](https://www.tensorflow.org/programmers_guide/datasets).
### Step 5: Train the Model
We can now call `train()` on our `linear_regressor` to train the model. We'll wrap `my_input_fn` in a `lambda`
so we can pass in `my_feature` and `target` as arguments (see this [TensorFlow input function tutorial](https://www.tensorflow.org/get_started/input_fn#passing_input_fn_data_to_your_model) for more details), and to start, we'll
train for 100 steps.
```
_ = linear_regressor.train(
input_fn = lambda:my_input_fn(my_feature, targets),
steps=100
)
```
### Step 6: Evaluate the Model
Let's make predictions on that training data, to see how well our model fit it during training.
**NOTE:** Training error measures how well your model fits the training data, but it **_does not_** measure how well your model **_generalizes to new data_**. In later exercises, you'll explore how to split your data to evaluate your model's ability to generalize.
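As a taste of that idea, here is a minimal, framework-free sketch of a random train/test split; the `train_test_split_df` helper is hypothetical and not part of this exercise:

```python
import numpy as np
import pandas as pd

def train_test_split_df(df, test_fraction=0.2, seed=0):
    """Shuffle a DataFrame and split it into train and test partitions."""
    rng = np.random.RandomState(seed)
    shuffled = df.iloc[rng.permutation(len(df))]
    n_test = int(len(df) * test_fraction)
    return shuffled.iloc[n_test:], shuffled.iloc[:n_test]

# Toy frame standing in for california_housing_dataframe.
df = pd.DataFrame({"total_rooms": range(100), "median_house_value": range(100)})
train_df, test_df = train_test_split_df(df)
print(len(train_df), len(test_df))  # 80 20
```

Evaluating RMSE on the held-out rows rather than the training rows is what measures generalization.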
```
# Create an input function for predictions.
# Note: Since we're making just one prediction for each example, we don't
# need to repeat or shuffle the data here.
prediction_input_fn = lambda: my_input_fn(my_feature, targets, num_epochs=1, shuffle=False)
# Call predict() on the linear_regressor to make predictions.
predictions = linear_regressor.predict(input_fn=prediction_input_fn)
# Format predictions as a NumPy array, so we can calculate error metrics.
predictions = np.array([item['predictions'][0] for item in predictions])
# Print Mean Squared Error and Root Mean Squared Error.
mean_squared_error = metrics.mean_squared_error(predictions, targets)
root_mean_squared_error = math.sqrt(mean_squared_error)
print("Mean Squared Error (on training data): %0.3f" % mean_squared_error)
print("Root Mean Squared Error (on training data): %0.3f" % root_mean_squared_error)
```
Is this a good model? How would you judge how large this error is?
Mean Squared Error (MSE) can be hard to interpret, so we often look at Root Mean Squared Error (RMSE)
instead. A nice property of RMSE is that it can be interpreted on the same scale as the original targets.
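To make that property concrete, here is a quick hand check with toy numbers (not the housing data):

```python
import math
import numpy as np

# Targets and predictions in thousands of dollars, like median_house_value above.
targets = np.array([150.0, 200.0, 250.0])
predictions = np.array([160.0, 190.0, 280.0])

mse = float(np.mean((predictions - targets) ** 2))  # (100 + 100 + 900) / 3
rmse = math.sqrt(mse)
print(round(mse, 2), round(rmse, 2))  # 366.67 19.15
```

The RMSE of about 19.15 reads directly as an error of roughly $19k on the target scale, while the MSE of 366.67 has no such direct interpretation.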
Let's compare the RMSE to the difference of the min and max of our targets:
```
min_house_value = california_housing_dataframe["median_house_value"].min()
max_house_value = california_housing_dataframe["median_house_value"].max()
min_max_difference = max_house_value - min_house_value
print("Min. Median House Value: %0.3f" % min_house_value)
print("Max. Median House Value: %0.3f" % max_house_value)
print("Difference between Min. and Max.: %0.3f" % min_max_difference)
print("Root Mean Squared Error: %0.3f" % root_mean_squared_error)
```
Our error spans nearly half the range of the target values. Can we do better?
This is the question that nags at every model developer. Let's develop some basic strategies to reduce model error.
The first thing we can do is take a look at how well our predictions match our targets, in terms of overall summary statistics.
```
calibration_data = pd.DataFrame()
calibration_data["predictions"] = pd.Series(predictions)
calibration_data["targets"] = pd.Series(targets)
calibration_data.describe()
```
Okay, maybe this information is helpful. How does the mean value compare to the model's RMSE? How about the various quantiles?
We can also visualize the data and the line we've learned. Recall that linear regression on a single feature can be drawn as a line mapping input *x* to output *y*.
First, we'll get a uniform random sample of the data so we can make a readable scatter plot.
```
sample = california_housing_dataframe.sample(n=300)
```
Next, we'll plot the line we've learned, drawing from the model's bias term and feature weight, together with the scatter plot. The line will show up red.
```
# Get the min and max total_rooms values.
x_0 = sample["total_rooms"].min()
x_1 = sample["total_rooms"].max()
# Retrieve the final weight and bias generated during training.
weight = linear_regressor.get_variable_value('linear/linear_model/total_rooms/weights')[0]
bias = linear_regressor.get_variable_value('linear/linear_model/bias_weights')
# Get the predicted median_house_values for the min and max total_rooms values.
y_0 = weight * x_0 + bias
y_1 = weight * x_1 + bias
# Plot our regression line from (x_0, y_0) to (x_1, y_1).
plt.plot([x_0, x_1], [y_0, y_1], c='r')
# Label the graph axes.
plt.ylabel("median_house_value")
plt.xlabel("total_rooms")
# Plot a scatter plot from our data sample.
plt.scatter(sample["total_rooms"], sample["median_house_value"])
# Display graph.
plt.show()
```
This initial line looks way off. See if you can look back at the summary stats and see the same information encoded there.
Together, these initial sanity checks suggest we may be able to find a much better line.
## Tweak the Model Hyperparameters
For this exercise, we've put all the above code in a single function for convenience. You can call the function with different parameters to see the effect.
In this function, we'll proceed in 10 evenly divided periods so that we can observe the model improvement at each period.
For each period, we'll compute and graph training loss. This may help you judge when a model is converged, or if it needs more iterations.
We'll also plot the feature weight and bias term values learned by the model over time. This is another way to see how things converge.
```
def train_model(learning_rate, steps, batch_size, input_feature="total_rooms"):
"""Trains a linear regression model of one feature.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
input_feature: A `string` specifying a column from `california_housing_dataframe`
to use as input feature.
"""
periods = 10
steps_per_period = steps / periods
my_feature = input_feature
my_feature_data = california_housing_dataframe[[my_feature]]
my_label = "median_house_value"
targets = california_housing_dataframe[my_label]
# Create feature columns.
feature_columns = [tf.feature_column.numeric_column(my_feature)]
# Create input functions.
training_input_fn = lambda:my_input_fn(my_feature_data, targets, batch_size=batch_size)
prediction_input_fn = lambda: my_input_fn(my_feature_data, targets, num_epochs=1, shuffle=False)
# Create a linear regressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
linear_regressor = tf.estimator.LinearRegressor(
feature_columns=feature_columns,
optimizer=my_optimizer
)
# Set up to plot the state of our model's line each period.
plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
plt.title("Learned Line by Period")
plt.ylabel(my_label)
plt.xlabel(my_feature)
sample = california_housing_dataframe.sample(n=300)
plt.scatter(sample[my_feature], sample[my_label])
colors = [cm.coolwarm(x) for x in np.linspace(-1, 1, periods)]
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
root_mean_squared_errors = []
for period in range (0, periods):
# Train the model, starting from the prior state.
linear_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
predictions = linear_regressor.predict(input_fn=prediction_input_fn)
predictions = np.array([item['predictions'][0] for item in predictions])
# Compute loss.
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(predictions, targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, root_mean_squared_error))
# Add the loss metrics from this period to our list.
root_mean_squared_errors.append(root_mean_squared_error)
# Finally, track the weights and biases over time.
# Apply some math to ensure that the data and line are plotted neatly.
y_extents = np.array([0, sample[my_label].max()])
weight = linear_regressor.get_variable_value('linear/linear_model/%s/weights' % input_feature)[0]
bias = linear_regressor.get_variable_value('linear/linear_model/bias_weights')
x_extents = (y_extents - bias) / weight
x_extents = np.maximum(np.minimum(x_extents,
sample[my_feature].max()),
sample[my_feature].min())
y_extents = weight * x_extents + bias
plt.plot(x_extents, y_extents, color=colors[period])
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.subplot(1, 2, 2)
plt.ylabel('RMSE')
plt.xlabel('Periods')
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(root_mean_squared_errors)
# Output a table with calibration data.
calibration_data = pd.DataFrame()
calibration_data["predictions"] = pd.Series(predictions)
calibration_data["targets"] = pd.Series(targets)
display.display(calibration_data.describe())
print("Final RMSE (on training data): %0.2f" % root_mean_squared_error)
```
## Task 1: Achieve an RMSE of 180 or Below
Tweak the model hyperparameters to improve loss and better match the target distribution.
If, after 5 minutes or so, you're having trouble beating a RMSE of 180, check the solution for a possible combination.
```
train_model(
learning_rate=0.00005,
steps=600,
batch_size=5
)
```
### Solution
Click below for one possible solution.
```
train_model(
learning_rate=0.00002,
steps=500,
batch_size=5
)
```
This is just one possible configuration; there may be other combinations of settings that also give good results. Note that in general, this exercise isn't about finding the *one best* setting, but about helping you build intuitions for how tweaking the model configuration affects prediction quality.
### Is There a Standard Heuristic for Model Tuning?
This is a commonly asked question. The short answer is that the effects of different hyperparameters are data dependent. So there are no hard-and-fast rules; you'll need to test on your data.
That said, here are a few rules of thumb that may help guide you:
* Training error should steadily decrease, steeply at first, and should eventually plateau as training converges.
* If the training has not converged, try running it for longer.
* If the training error decreases too slowly, increasing the learning rate may help it decrease faster.
* But sometimes the exact opposite may happen if the learning rate is too high.
* If the training error varies wildly, try decreasing the learning rate.
* Lower learning rate plus larger number of steps or larger batch size is often a good combination.
* Very small batch sizes can also cause instability. First try larger values like 100 or 1000, and decrease until you see degradation.
Again, never go strictly by these rules of thumb, because the effects are data dependent. Always experiment and verify.
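The learning-rate rule of thumb can be felt on a toy problem. This sketch is unrelated to the TensorFlow model above: it runs plain gradient descent on the loss w², whose gradient is 2w, at three learning rates:

```python
def final_loss(learning_rate, steps=50, start=10.0):
    """Run plain gradient descent on loss(w) = w**2 and return the final loss."""
    w = start
    for _ in range(steps):
        w -= learning_rate * 2 * w  # gradient of w**2 is 2w
    return w * w

for lr in (0.001, 0.1, 1.1):
    print(lr, final_loss(lr))
# Too small a rate barely reduces the loss, a moderate rate converges,
# and too large a rate makes the loss diverge -- the "wild variation" case.
```

The same three regimes show up in the RMSE-vs-periods plot produced by `train_model`, which is why watching that curve is more reliable than any fixed recipe.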
## Task 2: Try a Different Feature
See if you can do any better by replacing the `total_rooms` feature with the `population` feature.
Don't take more than 5 minutes on this portion.
```
# YOUR CODE HERE
```
### Solution
Click below for one possible solution.
```
train_model(
learning_rate=0.00002,
steps=1000,
batch_size=5,
input_feature="population"
)
```
# Playing "Merge Big Watermelon" with the DQN Reinforcement Learning Algorithm!
<iframe src="//player.bilibili.com/player.html?aid=586526003&bvid=BV1Tz4y1U7HE&cid=293880206&page=1" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"> </iframe>
<iframe src="//player.bilibili.com/player.html?aid=801504295&bvid=BV1Wy4y1n73E&cid=294254486&page=1" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"> </iframe>
<iframe src="//player.bilibili.com/player.html?aid=501711447&bvid=BV1gN411d7dr&cid=296365416&page=1" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"> </iframe>
## 1. Install Dependencies
> The game code is rendered with pygame
> The physics model uses pymunk

Note: paddlepaddle version 1.8.0, parl version 1.3.1
```
# !pip install pygame -i https://mirror.baidu.com/pypi/simple
# !pip install parl==1.3.1 -i https://mirror.baidu.com/pypi/simple
# !pip install pymunk
# !unzip work/code.zip -d ./
```
## 2. Set Environment Variables
> Since the notebook cannot display the pygame window, we set the following environment variables
```
import os
os.putenv('SDL_VIDEODRIVER', 'fbcon')
os.environ["SDL_VIDEODRIVER"] = "dummy"
```
## 3. Build the Multi-Layer Neural Network
> This version uses two fully connected hidden layers
> A convolutional-network version is available at: https://aistudio.baidu.com/aistudio/projectdetail/1540300
```
import parl
from parl import layers
import paddle.fluid as fluid
import copy
import numpy as np
import os
from parl.utils import logger
import random
import collections
LEARN_FREQ = 5 # training frequency: learn every few steps after accumulating new experience, instead of every step, for efficiency
MEMORY_SIZE = 20000 # size of the replay memory; larger uses more RAM
MEMORY_WARMUP_SIZE = 200 # pre-fill the replay memory with some experience before training starts
BATCH_SIZE = 32 # number of samples per agent.learn() call, randomly sampled from the replay memory
LEARNING_RATE = 0.001 # learning rate
GAMMA = 0.99 # reward discount factor, usually between 0.9 and 0.999
class Model(parl.Model):
def __init__(self, act_dim):
hid1_size = 256
hid2_size = 256
# 3-layer fully connected network
self.fc1 = layers.fc(size=hid1_size, act='relu')
self.fc2 = layers.fc(size=hid2_size, act='relu')
self.fc3 = layers.fc(size=act_dim, act=None)
def value(self, obs):
# Define the network:
# input the state, output Q for every action: [Q(s,a1), Q(s,a2), Q(s,a3), ...]
h1 = self.fc1(obs)
h2 = self.fc2(h1)
Q = self.fc3(h2)
return Q
```
## 4. Build the DQN Algorithm, Agent, and Replay Memory
```
# from parl.algorithms import DQN # the DQN algorithm can also be imported directly from the parl library
import pygame
import cv2
class DQN(parl.Algorithm):
def __init__(self, model, act_dim=None, gamma=None, lr=None):
""" DQN algorithm
Args:
model (parl.Model): forward network structure defining the Q function
act_dim (int): dimension of the action space, i.e. the number of actions
gamma (float): reward discount factor
lr (float): learning rate.
"""
self.model = model
self.target_model = copy.deepcopy(model)
assert isinstance(act_dim, int)
assert isinstance(gamma, float)
assert isinstance(lr, float)
self.act_dim = act_dim
self.gamma = gamma
self.lr = lr
def predict(self, obs):
""" ไฝฟ็จself.model็value็ฝ็ปๆฅ่ทๅ [Q(s,a1),Q(s,a2),...]
"""
return self.model.value(obs)
def learn(self, obs, action, reward, next_obs, terminal):
""" ไฝฟ็จDQN็ฎๆณๆดๆฐself.model็value็ฝ็ป
"""
# get max Q' from target_model, used to compute target_Q
next_pred_value = self.target_model.value(next_obs)
best_v = layers.reduce_max(next_pred_value, dim=1)
best_v.stop_gradient = True # block gradient propagation
terminal = layers.cast(terminal, dtype='float32')
target = reward + (1.0 - terminal) * self.gamma * best_v
pred_value = self.model.value(obs) # get predicted Q values
# convert action to a one-hot vector, e.g. 3 => [0,0,0,1,0]
action_onehot = layers.one_hot(action, self.act_dim)
action_onehot = layers.cast(action_onehot, dtype='float32')
# the lines below multiply element-wise to extract Q(s,a) for the taken action
# e.g. pred_value = [[2.3, 5.7, 1.2, 3.9, 1.4]], action_onehot = [[0,0,0,1,0]]
# ==> pred_action_value = [[3.9]]
pred_action_value = layers.reduce_sum(
layers.elementwise_mul(action_onehot, pred_value), dim=1)
# compute the squared error between Q(s,a) and target_Q to get the loss
cost = layers.square_error_cost(pred_action_value, target)
cost = layers.reduce_mean(cost)
optimizer = fluid.optimizer.Adam(learning_rate=self.lr) # use the Adam optimizer
optimizer.minimize(cost)
return cost
def sync_target(self):
""" ๆ self.model ็ๆจกๅๅๆฐๅผๅๆญฅๅฐ self.target_model
"""
self.model.sync_weights_to(self.target_model)
class Agent(parl.Agent):
def __init__(self,
algorithm,
obs_dim,
act_dim,
e_greed=0.1,
e_greed_decrement=0):
assert isinstance(obs_dim, int)
assert isinstance(act_dim, int)
self.obs_dim = obs_dim
self.act_dim = act_dim
super(Agent, self).__init__(algorithm)
self.global_step = 0
self.update_target_steps = 200 # copy model's parameters into target_model every 200 training steps
self.e_greed = e_greed # probability of taking a random action (exploration)
self.e_greed_decrement = e_greed_decrement # gradually reduce exploration as training converges
def build_program(self):
self.pred_program = fluid.Program()
self.learn_program = fluid.Program()
with fluid.program_guard(self.pred_program): # build the graph for predicting actions; define input/output variables
obs = layers.data(
name='obs', shape=[self.obs_dim], dtype='float32')
self.value = self.alg.predict(obs)
with fluid.program_guard(self.learn_program): # build the graph for updating the Q network; define input/output variables
obs = layers.data(
name='obs', shape=[self.obs_dim], dtype='float32')
action = layers.data(name='act', shape=[1], dtype='int32')
reward = layers.data(name='reward', shape=[], dtype='float32')
next_obs = layers.data(
name='next_obs', shape=[self.obs_dim], dtype='float32')
terminal = layers.data(name='terminal', shape=[], dtype='bool')
self.cost = self.alg.learn(obs, action, reward, next_obs, terminal)
def sample(self, obs):
sample = np.random.rand() # draw a random number in [0, 1)
if sample < self.e_greed:
act = np.random.randint(self.act_dim) # explore: every action has some probability of being picked
else:
act = self.predict(obs) # exploit: pick the best action
self.e_greed = max(
0.01, self.e_greed - self.e_greed_decrement) # gradually reduce exploration as training converges
return act
def predict(self, obs): # pick the best action
obs = np.expand_dims(obs, axis=0)
pred_Q = self.fluid_executor.run(
self.pred_program,
feed={'obs': obs.astype('float32')},
fetch_list=[self.value])[0]
pred_Q = np.squeeze(pred_Q, axis=0)
act = np.argmax(pred_Q) # pick the index with the largest Q, i.e. the corresponding action
return act
def learn(self, obs, act, reward, next_obs, terminal):
# sync model and target_model parameters every 200 training steps
if self.global_step % self.update_target_steps == 0:
self.alg.sync_target()
self.global_step += 1
act = np.expand_dims(act, -1)
feed = {
'obs': obs.astype('float32'),
'act': act.astype('int32'),
'reward': reward,
'next_obs': next_obs.astype('float32'),
'terminal': terminal
}
cost = self.fluid_executor.run(
self.learn_program, feed=feed, fetch_list=[self.cost])[0] # run one training step
return cost
class ReplayMemory(object):
def __init__(self, max_size):
self.buffer = collections.deque(maxlen=max_size)
# append one experience to the replay memory
def append(self, exp):
self.buffer.append(exp)
# randomly sample N experiences from the replay memory
def sample(self, batch_size):
mini_batch = random.sample(self.buffer, batch_size)
obs_batch, action_batch, reward_batch, next_obs_batch, done_batch = [], [], [], [], []
for experience in mini_batch:
s, a, r, s_p, done = experience
obs_batch.append(s)
action_batch.append(a)
reward_batch.append(r)
next_obs_batch.append(s_p)
done_batch.append(done)
return np.array(obs_batch).astype('float32'), \
np.array(action_batch).astype('float32'), np.array(reward_batch).astype('float32'),\
np.array(next_obs_batch).astype('float32'), np.array(done_batch).astype('float32')
def __len__(self):
return len(self.buffer)
# train one episode
def run_episode(env, agent, rpm, episode):
total_reward = 0
env.reset()
action = np.random.randint(0, env.action_num - 1)
obs, _, _ = env.next(action)
step = 0
while True:
step += 1
action = agent.sample(obs) # sample an action; every action has some probability of being tried
next_obs, reward, done = env.next(action)
rpm.append((obs, action, reward, next_obs, done))
# train model
if (len(rpm) > MEMORY_WARMUP_SIZE) and (step % LEARN_FREQ == 0):
(batch_obs, batch_action, batch_reward, batch_next_obs,
batch_done) = rpm.sample(BATCH_SIZE)
train_loss = agent.learn(batch_obs, batch_action, batch_reward,
batch_next_obs,
batch_done) # s,a,r,s',done
total_reward += reward
obs = next_obs
if done:
break
if not step % 20:
logger.info('step:{} e_greed:{} action:{} reward:{}'.format(
step, agent.e_greed, action, reward))
if not step % 500:
image = pygame.surfarray.array3d(
pygame.display.get_surface()).copy()
image = np.flip(image[:, :, [2, 1, 0]], 0)
image = np.rot90(image, 3)
img_pt = os.path.join('outputs', 'snapshoot_{}_{}.jpg'.format(episode, step))
cv2.imwrite(img_pt, image)
return total_reward
```
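The core of the `learn()` method above is the TD target `target = reward + (1 - terminal) * gamma * max Q'(s', a')`. Here is a framework-free NumPy sketch of just that computation, with made-up values rather than outputs of the parl model:

```python
import numpy as np

gamma = 0.99  # same discount factor as GAMMA above

# Target-network Q-values for the next states, one row per sampled transition.
next_q = np.array([[1.0, 3.0, 2.0],
                   [0.5, 0.2, 0.1]])
reward = np.array([1.0, -1.0])
terminal = np.array([0.0, 1.0])  # the second transition ends the episode

best_v = next_q.max(axis=1)  # max over actions: [3.0, 0.5]
target = reward + (1.0 - terminal) * gamma * best_v
# First entry: 1 + 0.99 * 3 = 3.97; second entry: -1 (terminal masks the bootstrap).
print(target)
```

This is exactly what `layers.reduce_max` plus the `(1.0 - terminal)` mask do inside `DQN.learn`, only expressed with eager NumPy instead of a fluid graph.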
## 5. Create the Agent Instance
```
from State2NN import AI_Board
env = AI_Board()
action_dim = env.action_num
obs_shape = 16 * 13
e_greed = 0.2
rpm = ReplayMemory(MEMORY_SIZE) # DQN's experience replay memory
# build the agent with the parl framework
model = Model(act_dim=action_dim)
algorithm = DQN(model, act_dim=action_dim, gamma=GAMMA, lr=LEARNING_RATE)
agent = Agent(
algorithm,
obs_dim=obs_shape,
act_dim=action_dim,
e_greed=e_greed, # probability of taking a random action (exploration)
e_greed_decrement=1e-6) # gradually reduce exploration as training converges
```
## 6. Train the Model
```
from State2NN import AI_Board
import os
dirs = ['weights', 'outputs']
for d in dirs:
if not os.path.exists(d):
os.mkdir(d)
# load a saved model
# save_path = './dqn_model.ckpt'
# agent.restore(save_path)
# pre-fill the replay memory so training does not start with too few samples
while len(rpm) < MEMORY_WARMUP_SIZE:
run_episode(env, agent, rpm, episode=0)
max_episode = 2000
# start training
episode = 0
while episode < max_episode: # train for max_episode episodes; the test part does not count toward the episode total
# train part
for i in range(0, 50):
total_reward = run_episode(env, agent, rpm, episode+1)
episode += 1
save_path = './weights/dqn_model_episode_{}.ckpt'.format(episode)
agent.save(save_path)
print('-[INFO] episode:{}, model saved at {}'.format(episode, save_path))
env.reset()
# training finished; save the model
save_path = './final.ckpt'
agent.save(save_path)
```
## 7. Supplementary Notes on the Game Environment
### 7.1 The game has 11 kinds of fruit:

### 7.2 Collision detection:
```python
def setup_collision_handler(self):
def post_solve_bird_line(arbiter, space, data):
if not self.lock:
self.lock = True
b1, b2 = None, None
i = arbiter.shapes[0].collision_type + 1
x1, y1 = arbiter.shapes[0].body.position
x2, y2 = arbiter.shapes[1].body.position
```
### 7.3 Reward mechanism:
Each time two fruits of the same kind merge, the corresponding score is added to the reward.
| Fruit | Score |
| -------- | -------- |
| Cherry | 2 |
| Orange | 3 |
| ... | ... |
| Watermelon | 10 |
| Big Watermelon | 100 |
```python
if i < 11:
self.last_score = self.score
self.score += i
elif i == 11:
self.last_score = self.score
self.score += 100
```
### 7.4 Penalty mechanism:
If no fruits merge within 1s after an action (i.e. the interval before the next fruit is spawned), the score of the dropped fruit is subtracted from the reward.
```python
_, reward, _ = self.next_frame(action=action)
for _ in range(int(self.create_time * self.FPS)):
_, nreward, _ = self.next_frame(action=None, generate=False)
reward += nreward
if reward == 0:
reward = -i
```
### 7.5 Input features:
The previous version (https://aistudio.baidu.com/aistudio/projectdetail/1540300) used a game screenshot as the input, with a ResNet extracting features.
But feeding raw images directly makes it hard for the model to learn useful features.
So this version uses the pygame interface to read the current game state.
```python
def get_feature(self, N_class=12, Keep=15):
# feature engineering
c_t = self.i
# class of the fruit currently held
feature_t = np.zeros((1, N_class + 1), dtype=np.float)
feature_t[0, c_t] = 1.
feature_t[0, 0] = 0.5
feature_p = np.zeros((Keep, N_class + 1), dtype=np.float)
Xcs = []
Ycs = []
Ts = []
for i, ball in enumerate(self.balls):
if ball:
x = int(ball.body.position[0])
y = int(ball.body.position[1])
t = self.fruits[i].type
Xcs.append(x/self.WIDTH)
Ycs.append(y/self.HEIGHT)
Ts.append(t)
sorted_id = sorted_index(Ycs)
for i, id_ in enumerate(sorted_id):
if i == Keep:
break
feature_p[i, Ts[id_]] = 1.
feature_p[i, 0] = Xcs[id_]
feature_p[i, -1] = Ycs[id_]
image = np.concatenate((feature_t, feature_p), axis=0)
return image
```
Note: N_class = number of fruit classes + 1
#### feature_t:
> a one-hot vector representing the class of the fruit currently held
#### feature_p:
> represents the current game state, with shape (Keep, N_class + 1)
> Keep means only the Keep highest-positioned fruits on screen are tracked
> the middle N_class - 1 columns hold each fruit's one-hot class vector; column 0 is the x coordinate and the last column is the y coordinate (both normalized)
For now these are the only features used.
# Boss Zhang is here hahh (generous boss, funny.jpg)
## 8. My WeChat Official Account

```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '3'
import numpy as np
import tensorflow as tf
import json
with open('dataset-bpe.json') as fopen:
data = json.load(fopen)
train_X = data['train_X']
train_Y = data['train_Y']
test_X = data['test_X']
test_Y = data['test_Y']
EOS = 2
GO = 1
vocab_size = 32000
train_Y = [i + [2] for i in train_Y]
test_Y = [i + [2] for i in test_Y]
from tensor2tensor.utils import beam_search
def pad_second_dim(x, desired_size):
padding = tf.tile([[[0.0]]], tf.stack([tf.shape(x)[0], desired_size - tf.shape(x)[1], tf.shape(x)[2]], 0))
return tf.concat([x, padding], 1)
class Translator:
def __init__(self, size_layer, num_layers, embedded_size, learning_rate):
def cells(reuse=False):
return tf.nn.rnn_cell.GRUCell(size_layer,reuse=reuse)
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None, None])
self.X_seq_len = tf.count_nonzero(self.X, 1, dtype = tf.int32)
self.Y_seq_len = tf.count_nonzero(self.Y, 1, dtype = tf.int32)
batch_size = tf.shape(self.X)[0]
embeddings = tf.Variable(tf.random_uniform([vocab_size, embedded_size], -1, 1))
def forward(x, y, reuse = False):
batch_size = tf.shape(x)[0]
X_seq_len = tf.count_nonzero(x, 1, dtype = tf.int32)
Y_seq_len = tf.count_nonzero(y, 1, dtype = tf.int32)
with tf.variable_scope('model',reuse=reuse):
encoder_embedded = tf.nn.embedding_lookup(embeddings, x)
decoder_embedded = tf.nn.embedding_lookup(embeddings, y)
rnn_cells = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)])
last_output, last_state = tf.nn.dynamic_rnn(rnn_cells, encoder_embedded,
sequence_length=X_seq_len,
dtype = tf.float32)
with tf.variable_scope("decoder",reuse=reuse):
attention_mechanism = tf.contrib.seq2seq.LuongAttention(num_units = size_layer,
memory = last_output)
rnn_cells = tf.contrib.seq2seq.AttentionWrapper(
cell = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)]),
attention_mechanism = attention_mechanism,
attention_layer_size = size_layer)
initial_state = rnn_cells.zero_state(batch_size, tf.float32).clone(cell_state=last_state)
outputs, _ = tf.nn.dynamic_rnn(rnn_cells, decoder_embedded,
sequence_length=Y_seq_len,
initial_state = initial_state,
dtype = tf.float32)
return tf.layers.dense(outputs,vocab_size)
main = tf.strided_slice(self.X, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1)
self.training_logits = forward(self.X, decoder_input, reuse = False)
self.training_logits = self.training_logits[:, :tf.reduce_max(self.Y_seq_len)]
self.training_logits = pad_second_dim(self.training_logits, tf.reduce_max(self.Y_seq_len))
masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.training_logits,
targets = self.Y,
weights = masks)
self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost)
y_t = tf.argmax(self.training_logits,axis=2)
y_t = tf.cast(y_t, tf.int32)
self.prediction = tf.boolean_mask(y_t, masks)
mask_label = tf.boolean_mask(self.Y, masks)
correct_pred = tf.equal(self.prediction, mask_label)
correct_index = tf.cast(correct_pred, tf.float32)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
initial_ids = tf.fill([batch_size], GO)
def symbols_to_logits(ids):
x = tf.contrib.seq2seq.tile_batch(self.X, 1)
logits = forward(x, ids, reuse = True)
return logits[:, tf.shape(ids)[1]-1, :]
final_ids, final_probs, _ = beam_search.beam_search(
symbols_to_logits,
initial_ids,
1,
tf.reduce_max(self.X_seq_len),
vocab_size,
0.0,
eos_id = EOS)
self.fast_result = final_ids
size_layer = 512
num_layers = 2
embedded_size = 256
learning_rate = 1e-3
batch_size = 128
epoch = 20
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Translator(size_layer, num_layers, embedded_size, learning_rate)
sess.run(tf.global_variables_initializer())
pad_sequences = tf.keras.preprocessing.sequence.pad_sequences
batch_x = pad_sequences(train_X[:10], padding='post')
batch_y = pad_sequences(train_Y[:10], padding='post')
sess.run([model.fast_result, model.cost, model.accuracy],
feed_dict = {model.X: batch_x, model.Y: batch_y})
import tqdm
for e in range(epoch):
pbar = tqdm.tqdm(
range(0, len(train_X), batch_size), desc = 'minibatch loop')
train_loss, train_acc, test_loss, test_acc = [], [], [], []
for i in pbar:
index = min(i + batch_size, len(train_X))
batch_x = pad_sequences(train_X[i : index], padding='post')
batch_y = pad_sequences(train_Y[i : index], padding='post')
feed = {model.X: batch_x,
model.Y: batch_y}
accuracy, loss, _ = sess.run([model.accuracy,model.cost,model.optimizer],
feed_dict = feed)
train_loss.append(loss)
train_acc.append(accuracy)
pbar.set_postfix(cost = loss, accuracy = accuracy)
pbar = tqdm.tqdm(
range(0, len(test_X), batch_size), desc = 'minibatch loop')
for i in pbar:
index = min(i + batch_size, len(test_X))
batch_x = pad_sequences(test_X[i : index], padding='post')
batch_y = pad_sequences(test_Y[i : index], padding='post')
feed = {model.X: batch_x,
model.Y: batch_y,}
accuracy, loss = sess.run([model.accuracy,model.cost],
feed_dict = feed)
test_loss.append(loss)
test_acc.append(accuracy)
pbar.set_postfix(cost = loss, accuracy = accuracy)
print('epoch %d, training avg loss %f, training avg acc %f'%(e+1,
np.mean(train_loss),np.mean(train_acc)))
print('epoch %d, testing avg loss %f, testing avg acc %f'%(e+1,
np.mean(test_loss),np.mean(test_acc)))
from tensor2tensor.utils import bleu_hook
results = []
for i in tqdm.tqdm(range(0, len(test_X), batch_size)):
index = min(i + batch_size, len(test_X))
batch_x = pad_sequences(test_X[i : index], padding='post')
feed = {model.X: batch_x}
p = sess.run(model.fast_result,feed_dict = feed)[:,0,:]
result = []
for row in p:
result.append([i for i in row if i > 3])
results.extend(result)
rights = []
for r in test_Y:
rights.append([i for i in r if i > 3])
bleu_hook.compute_bleu(reference_corpus = rights,
translation_corpus = results)
```
# Day 1, Part 6: Two body motion, analytical and numeric
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
Let's begin by defining the mass of the star we are interested in. We'll start with something that is the mass of the Sun.
```
# mass of particle 1 in solar masses
mass_of_star = 1.0
```
Let's also initialize $v_0 = v_p$ and $r_0 = r_p$, the values at perihelion. For reference, let's check out the image again:

```
# distance of m2 at closest approach (pericenter)
rp = 1.0 # in AU - these are the distance from Sun to Earth units
# velocity of m2 at this closest approach distance
# we assume vp of the larger mass (m1) is negligible
vp = 35.0 # in km/s
```
We'll also need some conversion factors:
```
# unit conversions
MassOfSun = 2e33 # g
MassOfJupiter = 1.898e30 # g
AUinCM = 1.496e13 # cm
kmincm = 1e5 # cm/km
G = 6.674e-8 # gravitational constant in cm^3 g^-1 s^-2
```
Note that astronomers use weird units - centimeters and grams - wait until we get to parsecs!
Now, let's convert units so we can calculate things.
```
mass_of_star = mass_of_star*MassOfSun
vp = vp*kmincm
rp = rp*AUinCM
```
## Analytical solution
The first thing we want to do is construct the analytical solution. We'll use some relations to calculate the eccentricity and semi-major axis:
```
# analytically here are the constants we need to define to solve:
ecc = rp*vp*vp/(G*(mass_of_star)) - 1.0
a = rp/(1.0 - ecc)
# print some interesting things
print('Eccentricity = ' + str(ecc))
Porb = np.sqrt( 4.0*np.pi**2.0*a**3.0/(G*(mass_of_star)) )
print('Orbital Period = ' + str(Porb) + ' sec')
```
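As a quick sanity check on these relations, we can plug in approximate Earth values (an assumption for this check, not values defined in the notebook: $v_p \approx 29.8$ km/s at $r_p = 1$ AU); the eccentricity should come out near zero and the period near one year:

```python
import numpy as np

# cgs constants as above
G = 6.674e-8          # cm^3 g^-1 s^-2
M_sun = 2e33          # g
AU = 1.496e13         # cm

# approximate Earth values (assumed for this check)
rp = 1.0 * AU
vp = 29.8e5           # ~29.8 km/s in cm/s

ecc = rp * vp**2 / (G * M_sun) - 1.0
a = rp / (1.0 - ecc)
Porb = np.sqrt(4.0 * np.pi**2 * a**3 / (G * M_sun))

print(ecc)             # close to 0: Earth's orbit is nearly circular
print(Porb / 3.156e7)  # close to 1 (period in years)
```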
Also, for the time being, we only want to handle elliptical/circular orbits, so let's put in some checks for that:
```
if (ecc < 0):
print('eccentricity is less than zero, we should stop!')
elif (ecc >= 1):
print('eccentricity is greater than one')
print(' this is a parabolic or hyperbolic orbit, we should stop!')
else:
print('everything looks good, lets go!')
```
### Exercise
Use the above information to plot r(theta).
Recall: $r(\theta) = \frac{a (1-e^2)}{1 + e \cos(\theta)}$
Don't forget to label your axes with the correct coordinates!
Bonus: use other $r_p$ and $v_p$ values to see how your plot changes!
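A possible compute-only sketch for this exercise (the `a` and `ecc` values here are illustrative stand-ins; in the notebook, reuse the ones computed above):

```python
import numpy as np

# illustrative values; in the notebook, reuse the a and ecc computed earlier
a, ecc = 1.5e13, 0.2

theta = np.linspace(0.0, 2.0 * np.pi, 500)
r = a * (1.0 - ecc**2) / (1.0 + ecc * np.cos(theta))

# perihelion (theta = 0) gives a*(1 - ecc); aphelion (theta = pi) gives a*(1 + ecc)
x = r * np.cos(theta)  # Cartesian coords, ready for ax.plot(x, y)
y = r * np.sin(theta)
```

Dividing `x` and `y` by `AUinCM` before plotting puts the axes in AU, matching the labels asked for above.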
## Euler's Numerical Solution
Redo this calculation with our numerical methods. A few things might be useful; first, calculating the acceleration due to gravity:
```
# mStar is the mass of the central star, rStar is the *vector*
# from planet to mass of star
def calcAcc(mStar, rStar):
mag_r = (rStar[0]**2 + rStar[1]**2)**0.5
mag_a = -G*mStar/mag_r**2
# how about direction? It's along rStar
# but we need to make sure this direction
# vector is a "hat" i.e. a unit vector
# We want the direction only:
unitVector = rStar/mag_r
return mag_a*unitVector
```
Let's recall that $\vec{r}_p$ is the vector from the star to the planet, and $\vec{v}_p$ is the velocity vector; at perihelion this velocity is perpendicular to $\vec{r}_p$ (i.e. tangent to the orbit).
We can plot these on our elliptical plot from before!
We'll use: https://matplotlib.org/3.1.0/api/_as_gen/matplotlib.pyplot.arrow.html
First we'll plot the elliptical path, with a dashed line.
```
# now, generate the theta array
ntheta = 500 # number of points for theta
th_an = np.linspace(0, 360, ntheta)
# now, create r(theta)
r_an = (a*(1-ecc*ecc))/(1.0 + ecc*np.cos( th_an*np.pi/180.0 ))
# for plotting -> x/y coords for m2
x_an = r_an*np.cos( th_an*np.pi/180.0 )/AUinCM
y_an = r_an*np.sin( th_an*np.pi/180.0 )/AUinCM
# plot x/y coords
fig, ax = plt.subplots(1, figsize = (10, 10))
fig.suptitle('Coordinates Plot')
ax.plot(x_an, y_an, 'b--', linewidth=5)
ax.plot(0.0, 0.0, 'kx')
ax.set_xlabel('x in AU')
ax.set_ylabel('y in AU')
ax.set_xlim(-2.5, 2.5)
ax.set_ylim(-2.5, 2.5)
# now, let's plot these vectors
xarrow = 0; yarrow = 0 # coords of base
# dx/dy point to direction
dy = 0 # easy peasy, just along x-axis
# we can calculate r(theta = 0)
dx = (a*(1-ecc*ecc)/(1.0+ecc*np.cos(0)))/AUinCM
plt.arrow(xarrow, yarrow, dx, dy,head_width=0.1,
length_includes_head=True)
# we can use this same location as the head of our velocity arrow
vxarrow = dx; vyarrow = 0
# what length should we make? This is a little hard
# to define since we are plotting something in
# units of km/s and the x/y coords so we can
# just choose a value
dy = 1.0 # arbitrary
dx = 0
plt.arrow(vxarrow, vyarrow, dx, dy, head_width=0.1,
length_includes_head=True, color='red')
plt.show()
```
To use Euler's Method to calculate, we'll specify these initial conditions as 2D vectors:
```
r_0 = np.array([rp, 0])
v_0 = np.array([0, vp])
```
One thing we want to do is calculate for many time steps. How do we select $\Delta t$? Well, let's try fractions of an orbit.
```
Porb = np.sqrt( 4.0*np.pi**2.0*a**3.0/(G*(mass_of_star)) )
print(Porb, ' seconds')
```
Let's first start with time steps of 1e6 seconds. This means ~63 steps per orbit.
```
delta_t = 1e6 # seconds
```
For how many timesteps? Let's start with 1 orbit, or 64 steps.
```
n_steps = 64
```
### Exercise
Use Euler's method to calculate this orbit. Plot it on top of your analytical solution.
Recall:
$\vec{r}_{i+1} = \vec{r}_i + \vec{v}_i \Delta t$
and
$\vec{v}_{i+1} = \vec{v}_i + \vec{a}_g(\vec{r}_i) \Delta t$
Bonus: what happens if you increase the number of steps? Or change $\Delta t$?
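The two update rules above can be sketched as a standalone Euler integrator (a sketch using the cgs constants defined earlier; `calc_acc` mirrors the `calcAcc` function above):

```python
import numpy as np

G = 6.674e-8       # cm^3 g^-1 s^-2
M = 2e33           # mass of the star in g (1 solar mass)
rp = 1.496e13      # perihelion distance in cm (1 AU)
vp = 35.0e5        # perihelion speed in cm/s (35 km/s)

def calc_acc(m_star, r_vec):
    # gravitational acceleration, pointing from the planet back toward the star
    mag_r = np.sqrt(r_vec[0]**2 + r_vec[1]**2)
    return -G * m_star / mag_r**2 * (r_vec / mag_r)

def euler_orbit(r0, v0, dt, n_steps):
    r, v = np.asarray(r0, dtype=float), np.asarray(v0, dtype=float)
    history = [r.copy()]
    for _ in range(n_steps):
        a = calc_acc(M, r)   # acceleration at the *current* position
        r = r + v * dt       # r_{i+1} = r_i + v_i * dt
        v = v + a * dt       # v_{i+1} = v_i + a(r_i) * dt
        history.append(r.copy())
    return np.array(history)

orbit = euler_orbit([rp, 0.0], [0.0, vp], dt=1e6, n_steps=64)
```

Plotting `orbit[:, 0]` against `orbit[:, 1]` (divided by `AUinCM`) on top of the analytical ellipse shows the familiar Euler drift; shrinking `dt` tightens the match.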
### Possible Ans
```
ri = r_0
vi = v_0
r = []
for i in range(n_steps):
# ...
# what does it look like?
r = np.array(r)
r
# now, generate the theta array
ntheta = 500 # number of points for theta
th_an = np.linspace(0, 360, ntheta)
# now, create r(theta)
r_an = (a*(1-ecc*ecc))/(1.0 + ecc*np.cos( th_an*np.pi/180.0 ))
# for plotting -> x/y coords for m2
x_an = r_an*np.cos( th_an*np.pi/180.0 )/AUinCM
y_an = r_an*np.sin( th_an*np.pi/180.0 )/AUinCM
# plot x/y coords
fig, ax = plt.subplots(1, figsize = (10, 10))
fig.suptitle('Coordinates Plot')
ax.plot(x_an, y_an, 'b--', linewidth=5)
ax.plot(0.0, 0.0, 'kx')
ax.set_xlabel('x in AU')
ax.set_ylabel('y in AU')
ax.set_xlim(-2.5, 2.5)
ax.set_ylim(-2.5, 2.5)
# plot Euler's solution
ax.plot(r[:,0]/AUinCM, r[:,1]/AUinCM)
plt.show()
```
### Exercise
How small does $\Delta t$ need to be for an accurate solution? How many orbits can you get?
### Exercises
Let's calculate some orbits, analytical and numerical for our planetary system. Again, we'll assume our Sun is massive and centered at the origin, and the planets are much less massive and orbit around the Sun.
1. Calculate the orbit of the Earth
* T, the orbital period = 365 days, i.e. 1 year
* the eccentricity, ecc = 0.02
Recall: $T = \sqrt{\frac{4 \pi^2 a^3}{G M_{star}}}$
2. Bonus - others. Note, you probably want to write a function:
* Mercury, T = 0.48 years, eccentricity = 0.21
* Venus, T = 0.62 years, eccentricity = 0.01
* Mars, T = 1.88 years, eccentricity = 0.09
* Jupiter, T = 11.86 years, eccentricity = 0.05
* Saturn, T = 29.46 years, eccentricity = 0.05
* Uranus, T = 84.02 years, eccentricity = 0.05
* Neptune, T = 164.8 years, eccentricity = 0.01
* Pluto, T = 248.0 years, eccentricity = 0.25
Question: how does $\Delta t$ change with eccentricity and period?
Plot the whole solar system, both analytically and numerically. Think strategically - what things do you need to make as "variables" in your function? (T, ecc... what else?)
# Tile Coding
---
Tile coding is an innovative way of discretizing a continuous space that enables better generalization compared to a single grid-based approach. The fundamental idea is to create several overlapping grids or _tilings_; then for any given sample value, you need only check which tiles it lies in. You can then encode the original continuous value by a vector of integer indices or bits that identifies each activated tile.
### 1. Import the Necessary Packages
```
# Import common libraries
import sys
import gym
import numpy as np
import matplotlib.pyplot as plt
# Set plotting options
%matplotlib inline
plt.style.use('ggplot')
np.set_printoptions(precision=3, linewidth=120)
```
### 2. Specify the Environment, and Explore the State and Action Spaces
We'll use [OpenAI Gym](https://gym.openai.com/) environments to test and develop our algorithms. These simulate a variety of classic as well as contemporary reinforcement learning tasks. Let's begin with an environment that has a continuous state space, but a discrete action space.
```
# Create an environment
env = gym.make('Acrobot-v1')
env.seed(505);
# Explore state (observation) space
print("State space:", env.observation_space)
print("- low:", env.observation_space.low)
print("- high:", env.observation_space.high)
# Explore action space
print("Action space:", env.action_space)
```
Note that the state space is multi-dimensional: the first four dimensions range from -1 to 1 (the cosines and sines of the two joint angles), while the final two dimensions (the joint angular velocities) have a larger range. How do we discretize such a space using tiles?
### 3. Tiling
Let's first design a way to create a single tiling for a given state space. This is very similar to a uniform grid! The only difference is that you should include an offset for each dimension that shifts the split points.
For instance, if `low = [-1.0, -5.0]`, `high = [1.0, 5.0]`, `bins = (10, 10)`, and `offsets = (-0.1, 0.5)`, then return a list of 2 NumPy arrays (2 dimensions) each containing the following split points (9 split points per dimension):
```
[array([-0.9, -0.7, -0.5, -0.3, -0.1, 0.1, 0.3, 0.5, 0.7]),
array([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5, 4.5])]
```
Notice how the split points for the first dimension are offset by `-0.1`, and for the second dimension are offset by `+0.5`. This might mean that some of our tiles, especially along the perimeter, are partially outside the valid state space, but that is unavoidable and harmless.
```
def create_tiling_grid(low, high, bins=(10, 10), offsets=(0.0, 0.0)):
return [(np.arange(bins[i] - 1) + 1) * ((high[i] - low[i]) / bins[i]) + low[i] + offsets[i] for i in range(len(bins))]
low = [-1.0, -5.0]
high = [1.0, 5.0]
create_tiling_grid(low, high, bins=(10, 10), offsets=(-0.1, 0.5)) # [test]
```
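As a standalone check (repeating the one-liner so this cell runs on its own), the grid should reproduce exactly the split points listed above:

```python
import numpy as np

def create_tiling_grid(low, high, bins=(10, 10), offsets=(0.0, 0.0)):
    # one array of (bins[i] - 1) interior split points per dimension
    return [(np.arange(bins[i] - 1) + 1) * ((high[i] - low[i]) / bins[i]) + low[i] + offsets[i]
            for i in range(len(bins))]

grid = create_tiling_grid([-1.0, -5.0], [1.0, 5.0], bins=(10, 10), offsets=(-0.1, 0.5))
print(np.round(grid[0], 2))  # [-0.9 -0.7 -0.5 -0.3 -0.1  0.1  0.3  0.5  0.7]
print(np.round(grid[1], 2))  # [-3.5 -2.5 -1.5 -0.5  0.5  1.5  2.5  3.5  4.5]
```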
You can now use this function to define a set of tilings that are a little offset from each other.
```
def create_tilings(low, high, tiling_specs):
return [create_tiling_grid(low, high, bins, offsets) for bins, offsets in tiling_specs]
# Tiling specs: [(<bins>, <offsets>), ...]
tiling_specs = [((10, 10), (-0.066, -0.33)),
((10, 10), (0.0, 0.0)),
((10, 10), (0.066, 0.33))]
tilings = create_tilings(low, high, tiling_specs)
```
It may be hard to gauge whether you are getting the desired results or not. So let's try to visualize these tilings.
```
from matplotlib.lines import Line2D
def visualize_tilings(tilings):
"""Plot each tiling as a grid."""
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
linestyles = ['-', '--', ':']
legend_lines = []
fig, ax = plt.subplots(figsize=(10, 10))
for i, grid in enumerate(tilings):
for x in grid[0]:
l = ax.axvline(x=x, color=colors[i % len(colors)], linestyle=linestyles[i % len(linestyles)], label=i)
for y in grid[1]:
l = ax.axhline(y=y, color=colors[i % len(colors)], linestyle=linestyles[i % len(linestyles)])
legend_lines.append(l)
ax.grid(False)
ax.legend(legend_lines, ["Tiling #{}".format(t) for t in range(len(legend_lines))], facecolor='white', framealpha=0.9)
ax.set_title("Tilings")
return ax # return Axis object to draw on later, if needed
visualize_tilings(tilings);
```
Great! Now that we have a way to generate these tilings, we can next write our encoding function that will convert any given continuous state value to a discrete vector.
### 4. Tile Encoding
Implement the following to produce a vector that contains the indices for each tile that the input state value belongs to. The shape of the vector can match the arrangement of tilings you have, or it can ultimately be flattened for convenience.
You can use the same `discretize()` function here from grid-based discretization, and simply call it for each tiling.
```
def discretize(sample, grid):
return [np.digitize(sample[i], grid[i]) for i in range(len(sample))]
def tile_encode(sample, tilings, flatten=False):
encoded = [list(np.array(discretize(sample, grid))) for grid in tilings]
return list(np.array(encoded).flatten()) if flatten else encoded
# Test with some sample values
samples = [(-1.2 , -5.1 ),
(-0.75, 3.25),
(-0.5 , 0.0 ),
( 0.25, -1.9 ),
( 0.15, -1.75),
( 0.75, 2.5 ),
( 0.7 , -3.7 ),
( 1.0 , 5.0 )]
encoded_samples = [tile_encode(sample, tilings) for sample in samples]
print("\nSamples:", repr(samples), sep="\n")
print("\nEncoded samples:", repr(encoded_samples), sep="\n")
```
Note that we did not flatten the encoding above, which is why each sample's representation is a pair of indices for each tiling. This makes it easy to visualize it using the tilings.
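Run standalone, the encoding for the sample `(0.25, -1.9)` looks like this (redefining the helpers above so the cell is self-contained):

```python
import numpy as np

def create_tiling_grid(low, high, bins=(10, 10), offsets=(0.0, 0.0)):
    return [(np.arange(bins[i] - 1) + 1) * ((high[i] - low[i]) / bins[i]) + low[i] + offsets[i]
            for i in range(len(bins))]

def discretize(sample, grid):
    return [np.digitize(sample[i], grid[i]) for i in range(len(sample))]

def tile_encode(sample, tilings, flatten=False):
    encoded = [list(np.array(discretize(sample, grid))) for grid in tilings]
    return list(np.array(encoded).flatten()) if flatten else encoded

low, high = [-1.0, -5.0], [1.0, 5.0]
tiling_specs = [((10, 10), (-0.066, -0.33)),
                ((10, 10), (0.0, 0.0)),
                ((10, 10), (0.066, 0.33))]
tilings = [create_tiling_grid(low, high, bins, offsets) for bins, offsets in tiling_specs]

encoded = tile_encode((0.25, -1.9), tilings)
print(encoded)   # one [x_index, y_index] pair per tiling
```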
```
from matplotlib.patches import Rectangle
def visualize_encoded_samples(samples, encoded_samples, tilings, low=None, high=None):
"""Visualize samples by activating the respective tiles."""
samples = np.array(samples) # for ease of indexing
# Show tiling grids
ax = visualize_tilings(tilings)
# If bounds (low, high) are specified, use them to set axis limits
if low is not None and high is not None:
ax.set_xlim(low[0], high[0])
ax.set_ylim(low[1], high[1])
else:
# Pre-render (invisible) samples to automatically set reasonable axis limits, and use them as (low, high)
ax.plot(samples[:, 0], samples[:, 1], 'o', alpha=0.0)
low = [ax.get_xlim()[0], ax.get_ylim()[0]]
high = [ax.get_xlim()[1], ax.get_ylim()[1]]
# Map each encoded sample (which is really a list of indices) to the corresponding tiles it belongs to
tilings_extended = [np.hstack((np.array([low]).T, grid, np.array([high]).T)) for grid in tilings] # add low and high ends
tile_centers = [(grid_extended[:, 1:] + grid_extended[:, :-1]) / 2 for grid_extended in tilings_extended] # compute center of each tile
tile_toplefts = [grid_extended[:, :-1] for grid_extended in tilings_extended] # compute topleft of each tile
tile_bottomrights = [grid_extended[:, 1:] for grid_extended in tilings_extended] # compute bottomright of each tile
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
for sample, encoded_sample in zip(samples, encoded_samples):
for i, tile in enumerate(encoded_sample):
# Shade the entire tile with a rectangle
topleft = tile_toplefts[i][0][tile[0]], tile_toplefts[i][1][tile[1]]
bottomright = tile_bottomrights[i][0][tile[0]], tile_bottomrights[i][1][tile[1]]
ax.add_patch(Rectangle(topleft, bottomright[0] - topleft[0], bottomright[1] - topleft[1],
color=colors[i], alpha=0.33))
# In case sample is outside tile bounds, it may not have been highlighted properly
if any(sample < topleft) or any(sample > bottomright):
# So plot a point in the center of the tile and draw a connecting line
cx, cy = tile_centers[i][0][tile[0]], tile_centers[i][1][tile[1]]
ax.add_line(Line2D([sample[0], cx], [sample[1], cy], color=colors[i]))
ax.plot(cx, cy, 's', color=colors[i])
# Finally, plot original samples
ax.plot(samples[:, 0], samples[:, 1], 'o', color='r')
ax.margins(x=0, y=0) # remove unnecessary margins
ax.set_title("Tile-encoded samples")
return ax
visualize_encoded_samples(samples, encoded_samples, tilings);
```
Inspect the results and make sure you understand how the corresponding tiles are being chosen. Note that some samples may have one or more tiles in common.
### 5. Q-Table with Tile Coding
The next step is to design a special Q-table that is able to utilize this tile coding scheme. It should have the same kind of interface as a regular table, i.e. given a `<state, action>` pair, it should return a `<value>`. Similarly, it should also allow you to update the `<value>` for a given `<state, action>` pair (note that this should update all the tiles that `<state>` belongs to).
The `<state>` supplied here is assumed to be from the original continuous state space, and `<action>` is discrete (an integer index). The Q-table should internally convert the `<state>` to its tile-coded representation when required.
```
class QTable:
"""Simple Q-table."""
def __init__(self, state_size, action_size):
self.state_size = state_size
self.action_size = action_size
self.q_table = np.zeros(list(state_size) + [action_size])
print("QTable(): size =", self.q_table.shape)
class TiledQTable:
"""Composite Q-table with an internal tile coding scheme."""
def __init__(self, low, high, tiling_specs, action_size):
self.tilings = create_tilings(low, high, tiling_specs)
self.state_sizes = [tuple(len(splits)+1 for splits in tiling_grid) for tiling_grid in self.tilings]
self.action_size = action_size
self.q_tables = [QTable(state_size, self.action_size) for state_size in self.state_sizes]
print("TiledQTable(): no. of internal tables = ", len(self.q_tables))
def get(self, state, action):
"""
avg = 0
for i in range(len(encodings)):
Q = self.q_tables[i].q_table
idx = tuple(encodings[i] + [action])
avg += Q[idx]
return avg / len(self.q_tables)
"""
encodings = tile_encode(state, self.tilings)
return np.average([self.q_tables[i].q_table[tuple(encodings[i] + [action])] for i in range(len(encodings))])
def update(self, state, action, value, alpha=0.1):
encodings = tile_encode(state, self.tilings)
for i in range(len(encodings)):
Q = self.q_tables[i].q_table
idx = tuple(encodings[i] + [action])
Q[idx] = alpha * value + (1 - alpha) * Q[idx]
# Q[idx] += alpha * (value - Q[idx])
# Test with a sample Q-table
tq = TiledQTable(low, high, tiling_specs, 2)
s1 = 3; s2 = 4; a = 0; q = 1.0
print("[GET] Q({}, {}) = {}".format(samples[s1], a, tq.get(samples[s1], a))) # check value at sample = s1, action = a
print("[UPDATE] Q({}, {}) = {}".format(samples[s2], a, q)); tq.update(samples[s2], a, q) # update value for sample with some common tile(s)
print("[GET] Q({}, {}) = {}".format(samples[s1], a, tq.get(samples[s1], a))) # check value again, should be slightly updated
```
If you update the q-value for a particular state (say, `(0.25, -1.9)`) and action (say, `0`), then you should notice the q-value of a nearby state (e.g. `(0.15, -1.75)` with the same action) has changed as well! This is how tile coding is able to generalize values across the state space better than a single uniform grid.
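That generalization is easy to demonstrate in isolation. The following minimal 1-D sketch (hypothetical unit-width tilings, not the `TiledQTable` above) updates one point and shows a nearby point's value move because they share tiles, while a distant point is unaffected:

```python
import numpy as np

# three 1-D tilings over [0, 10), each with unit-wide tiles, offset by ~1/3 tile
offsets = [0.0, 0.33, 0.66]
weights = [np.zeros(11) for _ in offsets]   # one weight table per tiling

def active_tiles(x):
    return [int(np.floor(x + off)) for off in offsets]

def q(x):
    # value = average of the activated tile's weight in each tiling
    return float(np.mean([w[t] for w, t in zip(weights, active_tiles(x))]))

def update(x, target, alpha=0.5):
    for w, t in zip(weights, active_tiles(x)):
        w[t] += alpha * (target - w[t])

update(2.5, 1.0)
print(q(2.5), q(2.9), q(7.0))   # updated point, nearby point, distant point
```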
### 6. Implement a Q-Learning Agent using Tile-Coding
Now it's your turn to apply this discretization technique to design and test a complete learning agent!
```
class Agent:
def __init__(self, env, q_table, alpha=0.02, gamma=0.8, eps=0.1, eps_decay=0.998, final_eps=0.0001):
self.env = env
self.q_table = q_table
self.alpha = alpha
self.gamma = gamma
self.eps = eps
self.eps_decay = eps_decay
self.final_eps = final_eps
def greedy_action(self, state):
return np.argmax([self.q_table.get(state, action) for action in range(self.env.action_space.n)])
def random_action(self):
return np.random.choice(np.arange(self.env.action_space.n))
def eps_greedy_action(self, state):
return self.random_action() if np.random.uniform() <= self.eps else self.greedy_action(state)
def episode(self):
cumulative_reward = 0
done = False
state = env.reset()
while not done:
action = self.eps_greedy_action(state) # follow behaviour policy
next_state, reward, done, _ = env.step(action)
cumulative_reward += reward
A_max = self.greedy_action(next_state) # estimate return from target policy
est_return = self.q_table.get(next_state, A_max)
value = reward + self.gamma * est_return
self.q_table.update(state, action, value, self.alpha) # update table
state = next_state
self.eps = max(self.eps * self.eps_decay, self.final_eps)
return cumulative_reward
# three tilings offset by -w/3, 0, +w/3, where w = (high - low)/5 is the tile width
tiling_specs = [(tuple([5]) * 6, -(env.observation_space.high - env.observation_space.low) / 15),
                (tuple([5]) * 6, tuple([0.0]) * 6),
                (tuple([5]) * 6, (env.observation_space.high - env.observation_space.low) / 15)]
tq = TiledQTable(env.observation_space.low, env.observation_space.high, tiling_specs, env.action_space.n)
agent = Agent(env, tq)
last_hundred = []
def train(num_episodes):
best = float('-inf')
best_avg = float('-inf')
for i in range(1, num_episodes+1):
new_episode = agent.episode()
best = max(best, new_episode)
last_hundred.append(new_episode)
if (len(last_hundred) > 100):
last_hundred.pop(0)
best_avg = max(best_avg, np.average(last_hundred))
if i % 10 == 0:
print("\rEpisode {}/{}, Best {}, Best100 {}".format(i, num_episodes, best, best_avg), end="")
sys.stdout.flush()
train(10000)
```
```
%matplotlib inline
```
# Violin plot basics
Violin plots are similar to histograms and box plots in that they show
an abstract representation of the probability distribution of the
sample. Rather than showing counts of data points that fall into bins
or order statistics, violin plots use kernel density estimation (KDE) to
compute an empirical distribution of the sample. That computation
is controlled by several parameters. This example demonstrates how to
modify the number of points at which the KDE is evaluated (``points``)
and how to modify the bandwidth of the KDE (``bw_method``).
For more information on violin plots and KDE, the scikit-learn docs
have a great section: http://scikit-learn.org/stable/modules/density.html
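Under the hood, the density for each violin is estimated with a Gaussian KDE, and the bandwidth is what ``bw_method`` controls. A minimal pure-NumPy sketch (the function name and values here are illustrative, not matplotlib's internals) shows the effect of the bandwidth:

```python
import numpy as np

def gaussian_kde_1d(sample, bandwidth):
    """Return a function evaluating a Gaussian KDE of `sample`."""
    sample = np.asarray(sample, dtype=float)
    norm = len(sample) * bandwidth * np.sqrt(2.0 * np.pi)
    def evaluate(xs):
        xs = np.atleast_1d(np.asarray(xs, dtype=float))
        # one Gaussian bump per data point, averaged
        z = (xs[:, None] - sample[None, :]) / bandwidth
        return np.exp(-0.5 * z**2).sum(axis=1) / norm
    return evaluate

rng = np.random.default_rng(19680801)
data = rng.normal(0.0, 1.0, size=300)

wide = gaussian_kde_1d(data, bandwidth=1.0)     # heavy smoothing
narrow = gaussian_kde_1d(data, bandwidth=0.05)  # follows individual samples
```

A small bandwidth produces a spiky, sample-hugging estimate; a large one smooths the violin out. Both still integrate to one.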
```
import numpy as np
import matplotlib.pyplot as plt
# Fixing random state for reproducibility
np.random.seed(19680801)
# fake data
fs = 10 # fontsize
pos = [1, 2, 4, 5, 7, 8]
data = [np.random.normal(0, std, size=100) for std in pos]
fig, axs = plt.subplots(nrows=2, ncols=5, figsize=(10, 6))
axs[0, 0].violinplot(data, pos, points=20, widths=0.3,
showmeans=True, showextrema=True, showmedians=True)
axs[0, 0].set_title('Custom violinplot 1', fontsize=fs)
axs[0, 1].violinplot(data, pos, points=40, widths=0.5,
showmeans=True, showextrema=True, showmedians=True,
bw_method='silverman')
axs[0, 1].set_title('Custom violinplot 2', fontsize=fs)
axs[0, 2].violinplot(data, pos, points=60, widths=0.7, showmeans=True,
showextrema=True, showmedians=True, bw_method=0.5)
axs[0, 2].set_title('Custom violinplot 3', fontsize=fs)
axs[0, 3].violinplot(data, pos, points=60, widths=0.7, showmeans=True,
showextrema=True, showmedians=True, bw_method=0.5,
quantiles=[[0.1], [], [], [0.175, 0.954], [0.75], [0.25]])
axs[0, 3].set_title('Custom violinplot 4', fontsize=fs)
axs[0, 4].violinplot(data[-1:], pos[-1:], points=60, widths=0.7,
showmeans=True, showextrema=True, showmedians=True,
quantiles=[0.05, 0.1, 0.8, 0.9], bw_method=0.5)
axs[0, 4].set_title('Custom violinplot 5', fontsize=fs)
axs[1, 0].violinplot(data, pos, points=80, vert=False, widths=0.7,
showmeans=True, showextrema=True, showmedians=True)
axs[1, 0].set_title('Custom violinplot 6', fontsize=fs)
axs[1, 1].violinplot(data, pos, points=100, vert=False, widths=0.9,
showmeans=True, showextrema=True, showmedians=True,
bw_method='silverman')
axs[1, 1].set_title('Custom violinplot 7', fontsize=fs)
axs[1, 2].violinplot(data, pos, points=200, vert=False, widths=1.1,
showmeans=True, showextrema=True, showmedians=True,
bw_method=0.5)
axs[1, 2].set_title('Custom violinplot 8', fontsize=fs)
axs[1, 3].violinplot(data, pos, points=200, vert=False, widths=1.1,
showmeans=True, showextrema=True, showmedians=True,
quantiles=[[0.1], [], [], [0.175, 0.954], [0.75], [0.25]],
bw_method=0.5)
axs[1, 3].set_title('Custom violinplot 9', fontsize=fs)
axs[1, 4].violinplot(data[-1:], pos[-1:], points=200, vert=False, widths=1.1,
showmeans=True, showextrema=True, showmedians=True,
quantiles=[0.05, 0.1, 0.8, 0.9], bw_method=0.5)
axs[1, 4].set_title('Custom violinplot 10', fontsize=fs)
for ax in axs.flat:
ax.set_yticklabels([])
fig.suptitle("Violin Plotting Examples")
fig.subplots_adjust(hspace=0.4)
plt.show()
```
# Cart-pole Balancing Model with Amazon SageMaker and Coach library
---
## Introduction
In this notebook we'll start with the cart-pole balancing problem, where a pole is attached by an un-actuated joint to a cart moving along a frictionless track. Instead of applying control theory to solve the problem, this example shows how to solve it with reinforcement learning on Amazon SageMaker and Coach.
(For a similar Cart-pole example using Ray RLlib, see this [link](../rl_cartpole_ray/rl_cartpole_ray_gymEnv.ipynb). Another Cart-pole example using Coach library and offline data can be found [here](../rl_cartpole_batch_coach/rl_cartpole_batch_coach.ipynb).)
1. *Objective*: Prevent the pole from falling over
2. *Environment*: The environment used in this example is part of OpenAI Gym, corresponding to the version of the cart-pole problem described by Barto, Sutton, and Anderson [1]
3. *State*: Cart position, cart velocity, pole angle, pole velocity at tip
4. *Action*: Push cart to the left, push cart to the right
5. *Reward*: Reward is 1 for every step taken, including the termination step
References
1. AG Barto, RS Sutton and CW Anderson, "Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problems", IEEE Transactions on Systems, Man, and Cybernetics, 1983.
## Pre-requisites
### Imports
To get started, we'll import the Python libraries we need, set up the environment with a few prerequisites for permissions and configurations.
```
import sagemaker
import boto3
import sys
import os
import glob
import re
import numpy as np
import subprocess
from IPython.display import HTML
import time
from time import gmtime, strftime
sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
```
### Setup S3 bucket
Set up the linkage and authentication to the S3 bucket that you want to use for checkpoints and metadata.
```
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
s3_output_path = "s3://{}/".format(s3_bucket)
print("S3 bucket path: {}".format(s3_output_path))
```
### Define Variables
We define variables such as the job prefix for the training jobs *and the image path for the container (only when this is BYOC).*
```
# create unique job name
job_name_prefix = "rl-cart-pole"
```
### Configure where training happens
You can run your RL training jobs on a SageMaker notebook instance or on your own machine. In both of these scenarios, you can run the following in either local or SageMaker modes. The local mode uses the SageMaker Python SDK to run your code in a local container before deploying to SageMaker. This can speed up iterative testing and debugging while using the same familiar Python SDK interface. You just need to set `local_mode = True`.
```
# run in local mode?
local_mode = False
if local_mode:
instance_type = "local"
else:
instance_type = "ml.m4.4xlarge"
```
### Create an IAM role
Either get the execution role when running from a SageMaker notebook instance `role = sagemaker.get_execution_role()` or, when running from local notebook instance, use utils method `role = get_execution_role()` to create an execution role.
```
try:
role = sagemaker.get_execution_role()
except:
role = get_execution_role()
print("Using IAM role arn: {}".format(role))
```
### Install docker for `local` mode
In order to work in `local` mode, you need to have docker installed. When running from your local machine, please make sure that you have docker and docker-compose (for local CPU machines) or nvidia-docker (for local GPU machines) installed. Alternatively, when running from a SageMaker notebook instance, you can simply run the following script to install the dependencies.
Note: you can only run a single local notebook at a time.
```
# only run from SageMaker notebook instance
if local_mode:
!/bin/bash ./common/setup.sh
```
## Setup the environment
Cartpole environment used in this example is part of OpenAI Gym.
## Configure the presets for RL algorithm
The presets that configure the RL training jobs are defined in the `preset-cartpole-clippedppo.py` file, which is also uploaded to the `/src` directory. Using the preset file, you can define agent parameters to select the specific agent algorithm. You can also set the environment parameters, define the schedule and visualization parameters, and define the graph manager. The schedule presets define the number of heat-up steps, periodic evaluation steps, and training steps between evaluations.
These can be overridden at runtime by specifying the RLCOACH_PRESET hyperparameter. Additionally, it can be used to define custom hyperparameters.
```
!pygmentize src/preset-cartpole-clippedppo.py
```
## Write the Training Code
The training code is written in the file `train-coach.py`, which is uploaded in the `/src` directory.
First import the environment files and the preset files, and then define the main() function.
```
!pygmentize src/train-coach.py
```
## Train the RL model using the Python SDK Script mode
If you are using local mode, the training will run on the notebook instance. When using SageMaker for training, you can select a GPU or CPU instance. The RLEstimator is used for training RL jobs.
1. Specify the source directory where the environment, presets, and training code are uploaded.
2. Specify the entry point as the training code.
3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL container.
4. Define the training parameters such as the instance count, job name, and S3 path for output.
5. Specify the hyperparameters for the RL agent algorithm. The RLCOACH_PRESET hyperparameter can be used to specify the RL agent algorithm you want to use.
6. Define the metric definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks.
```
estimator = RLEstimator(
entry_point="train-coach.py",
source_dir="src",
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version="0.11.0",
framework=RLFramework.MXNET,
role=role,
instance_type=instance_type,
instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
hyperparameters={
"RLCOACH_PRESET": "preset-cartpole-clippedppo",
"rl.agent_params.algorithm.discount": 0.9,
"rl.evaluation_steps:EnvironmentEpisodes": 8,
"improve_steps": 10000,
"save_model": 1,
},
)
estimator.fit(wait=local_mode)
```
## Store intermediate training output and model checkpoints
The output from the training job above is stored on S3. The intermediate folder contains gifs and metadata of the training.
```
job_name = estimator._current_job_name
print("Job name: {}".format(job_name))
s3_url = "s3://{}/{}".format(s3_bucket, job_name)
if local_mode:
output_tar_key = "{}/output.tar.gz".format(job_name)
else:
output_tar_key = "{}/output/output.tar.gz".format(job_name)
intermediate_folder_key = "{}/output/intermediate/".format(job_name)
output_url = "s3://{}/{}".format(s3_bucket, output_tar_key)
intermediate_url = "s3://{}/{}".format(s3_bucket, intermediate_folder_key)
print("S3 job path: {}".format(s3_url))
print("Output.tar.gz location: {}".format(output_url))
print("Intermediate folder path: {}".format(intermediate_url))
tmp_dir = "/tmp/{}".format(job_name)
os.system("mkdir {}".format(tmp_dir))
print("Create local folder {}".format(tmp_dir))
```
## Visualization
### Plot metrics for training job
We can pull the reward metric of the training and plot it to see the performance of the model over time.
```
%matplotlib inline
import pandas as pd
csv_file_name = "worker_0.simple_rl_graph.main_level.main_level.agent_0.csv"
key = os.path.join(intermediate_folder_key, csv_file_name)
wait_for_s3_object(s3_bucket, key, tmp_dir, training_job_name=job_name)
csv_file = "{}/{}".format(tmp_dir, csv_file_name)
df = pd.read_csv(csv_file)
df = df.dropna(subset=["Training Reward"])
x_axis = "Episode #"
y_axis = "Training Reward"
plt = df.plot(x=x_axis, y=y_axis, figsize=(12, 5), legend=True, style="b-")
plt.set_ylabel(y_axis)
plt.set_xlabel(x_axis);
```
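Raw episode rewards can be noisy; a rolling mean often makes the learning trend easier to see. A minimal, self-contained sketch (the column names mirror the Coach CSV above, but the data here is synthetic):

```python
import numpy as np
import pandas as pd

# synthetic stand-in for the Coach training CSV loaded above
df = pd.DataFrame({
    "Episode #": np.arange(100),
    "Training Reward": np.random.RandomState(0).normal(50, 10, 100),
})
# 10-episode rolling mean; min_periods=1 avoids NaNs at the start
df["Smoothed Reward"] = df["Training Reward"].rolling(window=10, min_periods=1).mean()
```

The smoothed column can then be passed to `df.plot(...)` alongside the raw reward, exactly as in the cell above.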
### Visualize the rendered gifs
The latest gif file of the training is displayed. You can change the `gif_index` below to visualize other generated files.
```
key = os.path.join(intermediate_folder_key, "gifs")
wait_for_s3_object(s3_bucket, key, tmp_dir, training_job_name=job_name)
print("Copied gifs files to {}".format(tmp_dir))
glob_pattern = os.path.join("{}/*.gif".format(tmp_dir))
gifs = [file for file in glob.iglob(glob_pattern, recursive=True)]
extract_episode = lambda string: int(
    re.search(r".*episode-(\d*)_.*", string, re.IGNORECASE).group(1)
)
gifs.sort(key=extract_episode)
print("GIFs found:\n{}".format("\n".join([os.path.basename(gif) for gif in gifs])))
# visualize a specific episode
gif_index = -1 # since we want last gif
gif_filepath = gifs[gif_index]
gif_filename = os.path.basename(gif_filepath)
print("Selected GIF: {}".format(gif_filename))
os.system("mkdir -p ./src/tmp/ && cp {} ./src/tmp/{}.gif".format(gif_filepath, gif_filename))
HTML('<img src="./src/tmp/{}.gif">'.format(gif_filename))
```
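The `extract_episode` sort key above relies on the episode number embedded in each gif filename. A standalone sketch with hypothetical filenames shows why a numeric sort is needed (a plain lexicographic sort would put `episode-120` before `episode-3`):

```python
import os
import re

# hypothetical gif paths following the Coach intermediate-folder naming
gifs = [
    "/tmp/job/episode-12_score-30.gif",
    "/tmp/job/episode-3_score-10.gif",
    "/tmp/job/episode-120_score-200.gif",
]

def extract_episode(path):
    # pull the integer between 'episode-' and the following '_'
    return int(re.search(r".*episode-(\d*)_.*", path, re.IGNORECASE).group(1))

gifs.sort(key=extract_episode)
names = [os.path.basename(g) for g in gifs]
```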
## Evaluation of RL models
We use the last checkpointed model to run evaluation for the RL Agent.
### Load checkpointed model
Checkpointed data from the previously trained models will be passed on for evaluation / inference in the checkpoint channel. In local mode, we can simply use the local directory, whereas in the SageMaker mode, it needs to be moved to S3 first.
```
wait_for_s3_object(s3_bucket, output_tar_key, tmp_dir, training_job_name=job_name)
if not os.path.isfile("{}/output.tar.gz".format(tmp_dir)):
raise FileNotFoundError("File output.tar.gz not found")
os.system("tar -xvzf {}/output.tar.gz -C {}".format(tmp_dir, tmp_dir))
if local_mode:
checkpoint_dir = "{}/data/checkpoint".format(tmp_dir)
else:
checkpoint_dir = "{}/checkpoint".format(tmp_dir)
print("Checkpoint directory {}".format(checkpoint_dir))
if local_mode:
checkpoint_path = "file://{}".format(checkpoint_dir)
print("Local checkpoint file path: {}".format(checkpoint_path))
else:
checkpoint_path = "s3://{}/{}/checkpoint/".format(s3_bucket, job_name)
if not os.listdir(checkpoint_dir):
raise FileNotFoundError("Checkpoint files not found under the path")
os.system("aws s3 cp --recursive {} {}".format(checkpoint_dir, checkpoint_path))
print("S3 checkpoint file path: {}".format(checkpoint_path))
```
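As an aside, shelling out with `os.system("tar ...")` silently ignores failures; Python's standard-library `tarfile` module is a more portable alternative. A sketch (not what the notebook uses) that builds and extracts a throwaway archive:

```python
import os
import tarfile
import tempfile

def extract_output_tar(tar_path, dest_dir):
    """Extract an output.tar.gz into dest_dir, raising on a missing archive."""
    if not os.path.isfile(tar_path):
        raise FileNotFoundError(tar_path)
    with tarfile.open(tar_path, "r:gz") as tar:
        tar.extractall(dest_dir)

# demo on a throwaway archive
tmp = tempfile.mkdtemp()
member = os.path.join(tmp, "hello.txt")
with open(member, "w") as f:
    f.write("checkpoint")
tar_path = os.path.join(tmp, "output.tar.gz")
with tarfile.open(tar_path, "w:gz") as tar:
    tar.add(member, arcname="hello.txt")
extract_output_tar(tar_path, os.path.join(tmp, "out"))
```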
### Run the evaluation step
Use the checkpointed model to run the evaluation step.
```
estimator_eval = RLEstimator(
role=role,
source_dir="src/",
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version="0.11.0",
framework=RLFramework.MXNET,
entry_point="evaluate-coach.py",
instance_count=1,
instance_type=instance_type,
base_job_name=job_name_prefix + "-evaluation",
hyperparameters={"RLCOACH_PRESET": "preset-cartpole-clippedppo", "evaluate_steps": 2000},
)
estimator_eval.fit({"checkpoint": checkpoint_path})
```
### Visualize the output
Optionally, you can run the steps defined earlier to visualize the output.
## Model deployment
Since we specified MXNet when configuring the RLEstimator, the MXNet deployment container will be used for hosting.
```
predictor = estimator.deploy(
initial_instance_count=1, instance_type=instance_type, entry_point="deploy-mxnet-coach.py"
)
```
We can test the endpoint with 2 sample observations, starting with the cart stationary at the center of the environment but the pole tilted to the right and falling. Since the environment vector is of the form `[cart_position, cart_velocity, pole_angle, pole_velocity]` and we used observation normalization in our preset, we choose an observation of `[0, 0, 2, 2]`. Since we're deploying a PPO model, the model returns both the state value and the action probabilities.
```
value, action = predictor.predict(np.array([0.0, 0.0, 2.0, 2.0]))
action
```
We see the policy decides to move the cart to the right (the 2nd value) with higher probability in order to recover the situation, and similarly in the other direction.
```
value, action = predictor.predict(np.array([0.0, 0.0, -2.0, -2.0]))
action
```
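For Gym's Cartpole, the two actions are 0 (push left) and 1 (push right), so the returned probabilities can be turned into a discrete action with an argmax. A standalone sketch with a hypothetical model output:

```python
import numpy as np

# hypothetical PPO output: a state value plus probabilities for [left, right]
value, action_probs = -0.5, np.array([0.2, 0.8])
chosen_action = int(np.argmax(action_probs))  # index of the most likely action
```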
### Clean up endpoint
```
predictor.delete_endpoint()
```
```
import os
import sys
import re
import json
import numpy as np
import pandas as pd
from collections import defaultdict
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
module_path = os.path.abspath(os.path.join('../onmt'))
if module_path not in sys.path:
sys.path.append(module_path)
# import kp_evaluate
# import onmt.keyphrase.utils as utils
import seaborn as sns
import matplotlib.pyplot as plt
import scipy
from nltk.stem.porter import PorterStemmer
from nltk.stem.porter import *
stemmer = PorterStemmer()
def stem_word_list(word_list):
return [stemmer.stem(w.strip()) for w in word_list]
def if_present_duplicate_phrases(src_seq, tgt_seqs, stemming=True, lowercase=True):
"""
Check if each given target sequence verbatim appears in the source sequence
:param src_seq:
:param tgt_seqs:
:param stemming:
:param lowercase:
:param check_duplicate:
:return:
"""
if lowercase:
src_seq = [w.lower() for w in src_seq]
if stemming:
src_seq = stem_word_list(src_seq)
present_indices = []
present_flags = []
duplicate_flags = []
    phrase_set = set()  # after stemming, phrases like "model" and "models" collapse to the same string, so later occurrences are flagged as duplicates
for tgt_seq in tgt_seqs:
if lowercase:
tgt_seq = [w.lower() for w in tgt_seq]
if stemming:
tgt_seq = stem_word_list(tgt_seq)
# check if the phrase appears in source text
# iterate each word in source
match_flag, match_pos_idx = if_present_phrase(src_seq, tgt_seq)
# if it reaches the end of source and no match, means it doesn't appear in the source
present_flags.append(match_flag)
present_indices.append(match_pos_idx)
# check if it is duplicate
if '_'.join(tgt_seq) in phrase_set:
duplicate_flags.append(True)
else:
duplicate_flags.append(False)
phrase_set.add('_'.join(tgt_seq))
assert len(present_flags) == len(present_indices)
return np.asarray(present_flags), \
np.asarray(present_indices), \
np.asarray(duplicate_flags)
def if_present_phrase(src_str_tokens, phrase_str_tokens):
"""
:param src_str_tokens: a list of strings (words) of source text
:param phrase_str_tokens: a list of strings (words) of a phrase
:return:
"""
match_flag = False
match_pos_idx = -1
for src_start_idx in range(len(src_str_tokens) - len(phrase_str_tokens) + 1):
match_flag = True
# iterate each word in target, if one word does not match, set match=False and break
for seq_idx, seq_w in enumerate(phrase_str_tokens):
src_w = src_str_tokens[src_start_idx + seq_idx]
if src_w != seq_w:
match_flag = False
break
if match_flag:
match_pos_idx = src_start_idx
break
return match_flag, match_pos_idx
dataset_names = ['inspec', 'krapivin', 'nus', 'semeval', 'kp20k', 'duc', 'stackex']
# dataset_names = ['kp20k', 'magkp']
dataset_names = ['inspec', 'krapivin', 'nus', 'semeval', 'duc']
# json_base_dir = '/Users/memray/project/kp/OpenNMT-kpg/data/keyphrase/json/' # path to the json folder
json_base_dir = '/zfs1/hdaqing/rum20/kp/data/kp/json/' # path on CRC
src_lens = {}
tgt_nums = {}
for dataset_name in dataset_names:
src_len = []
tgt_num = []
num_present_doc, num_present_tgt = 0, 0
num_absent_doc, num_absent_tgt = 0, 0
print(dataset_name)
input_json_path = os.path.join(json_base_dir, dataset_name, 'test.json')
with open(input_json_path, 'r') as input_json:
for json_line in input_json:
json_dict = json.loads(json_line)
if dataset_name.startswith('stackex'):
json_dict['abstract'] = json_dict['question']
json_dict['keywords'] = json_dict['tags']
del json_dict['question']
del json_dict['tags']
title = json_dict['title']
abstract = json_dict['abstract']
fulltext = json_dict['fulltext'] if 'fulltext' in json_dict else ''
keywords = json_dict['keywords']
if isinstance(keywords, str):
keywords = keywords.split(';')
json_dict['keywords'] = keywords
src = title + ' . ' + abstract
tgt = keywords
src_seq = [t for t in re.split(r'\W', src) if len(t) > 0]
tgt_seqs = [[t for t in re.split(r'\W', p) if len(t) > 0] for p in tgt]
present_tgt_flags, _, _ = if_present_duplicate_phrases(src_seq, tgt_seqs, stemming=True, lowercase=True)
# print(' '.join(src_seq))
# print('[GROUND-TRUTH] #(all)=%d, #(present)=%d, #(absent)=%d\n' % \
# (len(present_tgt_flags), sum(present_tgt_flags), len(present_tgt_flags)-sum(present_tgt_flags)))
# print('\n'.join(['\t\t[%s]' % ' '.join(phrase) if is_present else '\t\t%s' % ' '.join(phrase) for phrase, is_present in zip(tgt_seqs, present_tgt_flags)]))
present_tgts = [tgt for tgt, present in zip(tgt_seqs, present_tgt_flags) if present]
absent_tgts = [tgt for tgt, present in zip(tgt_seqs, present_tgt_flags) if ~present]
num_present_tgt += len(present_tgts)
num_absent_tgt += len(absent_tgts)
if len(present_tgts) > 0: num_present_doc += 1
if len(absent_tgts) > 0: num_absent_doc += 1
src_len.append(len(title.split()) + len(abstract.split()) + len(fulltext.split()))
tgt_num.append(len(keywords))
# break
print('num_doc=', len(tgt_num))
print('num_tgt=', sum(tgt_num))
print('num_present_doc=', num_present_doc)
print('num_present_tgt=', num_present_tgt, ', #avg=%.2f' % (num_present_tgt / len(tgt_num)))
print('num_absent_doc=', num_absent_doc)
print('num_absent_tgt=', num_absent_tgt, ', #avg=%.2f' % (num_absent_tgt / len(tgt_num)))
src_lens[dataset_name] = src_len
tgt_nums[dataset_name] = tgt_num
# print(scipy.stats.describe(src_lens[dataset_name]))
# print(scipy.stats.describe(tgt_nums[dataset_name]))
```
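At its core, `if_present_phrase` is a token-level substring search, and `if_present_duplicate_phrases` wraps it with lowercasing and stemming. A condensed, self-contained version of the matching step (stemming omitted for brevity; the example data is made up):

```python
def contains_phrase(src_tokens, phrase_tokens):
    """Return the start index of the first verbatim match of phrase_tokens
    inside src_tokens (compared after lowercasing), or -1 if absent."""
    src = [w.lower() for w in src_tokens]
    phrase = [w.lower() for w in phrase_tokens]
    for i in range(len(src) - len(phrase) + 1):
        if src[i:i + len(phrase)] == phrase:
            return i
    return -1

src = "We train a deep neural network model".split()
present_idx = contains_phrase(src, "neural network".split())  # found at index 4
absent_idx = contains_phrase(src, "transformer".split())      # not found -> -1
```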
### Visualize histogram of two datasets
```
sns.__version__
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(8, 6))
tmp_tgt_nums = [n for n in tgt_nums["magkp"] if n <= 30]
sns.distplot(tmp_tgt_nums, color=sns.color_palette("Greens_r", 8)[6], label="MagKP", bins=np.arange(31) - 0.5, kde=False, rug=False, hist_kws=dict(alpha=1.0, edgecolor="w", linewidth=0.2))
tmp_tgt_nums = [n for n in tgt_nums["magkp"] if n >= 3 and n <= 6]
sns.distplot(tmp_tgt_nums, label="MagKP-LN", bins=np.arange(31)-0.5,
color="c",
kde=False, rug=False, hist_kws=dict(alpha=0.7, edgecolor="w", linewidth=0.1))
tmp_tgt_nums = [n for n in tgt_nums["magkp"] if n > 10 and n <= 30]
sns.distplot(tmp_tgt_nums, color=sns.color_palette("Blues_r", 8)[4], label="MagKP-N", bins=np.arange(31)-0.5, kde=False, rug=False, hist_kws=dict(alpha=0.5, edgecolor="w", linewidth=0.2))
tmp_tgt_nums = [n for n in tgt_nums["kp20k"] if n <= 30]
sns.distplot(tmp_tgt_nums, color=sns.color_palette("hls", 8)[0], label="KP20k", bins=np.arange(31) - 0.5, kde=False, rug=False, hist_kws=dict(alpha=0.6, edgecolor="k", linewidth=1.5))
plt.xlim([-1, 30])
plt.legend(loc='upper right')
ax.set_title('Histogram of #(kp per document) of KP20k and MagKP')
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(8, 6))
tmp_tgt_nums = [n for n in tgt_nums["magkp"] if n <= 30]
print(len(tmp_tgt_nums))
sns.distplot(tmp_tgt_nums, label="MagKP",
bins=np.arange(31) - 0.5, color="w",
hist_kws=dict(alpha=1.0, edgecolor="k", linewidth=5.0),
kde=False, kde_kws={"color": "k", "lw": 3, "label": "KDE"})
tmp_tgt_nums = [n for n in tgt_nums["magkp"] if n >= 3 and n <= 6]
magkpln_tgt_nums = tmp_tgt_nums
print(len(tmp_tgt_nums))
sns.distplot(tmp_tgt_nums, label="MagKP-LN",
bins=np.arange(31)-0.5, color=sns.color_palette("Blues_r", 8)[3],
kde=False, rug=False, hist_kws=dict(alpha=0.8, edgecolor="k", linewidth=0.0))
tmp_tgt_nums = [n for n in tgt_nums["magkp"] if n > 10]
print(len(tmp_tgt_nums))
sns.distplot(tmp_tgt_nums, label="MagKP-Nlarge",
bins=np.arange(31)-0.5, color=sns.color_palette("Greys_r", 8)[5],
kde=False, rug=False, hist_kws=dict(alpha=0.8, edgecolor="k", linewidth=0.0))
tmp_tgt_nums = [n for n in tgt_nums["magkp"] if n > 10]
tmp_tgt_nums = tmp_tgt_nums[: len(magkpln_tgt_nums)]
print(len(tmp_tgt_nums))
sns.distplot(tmp_tgt_nums, label="MagKP-Nsmall",
bins=np.arange(31)-0.5, color=sns.color_palette("Greys_r", 8)[2],
kde=False, rug=False, hist_kws=dict(alpha=1.0, edgecolor="k", linewidth=0.0))
tmp_tgt_nums = [n for n in tgt_nums["kp20k"] if n <= 30]
print(len(tmp_tgt_nums))
sns.distplot(tmp_tgt_nums, label="KP20k",
bins=np.arange(31) - 0.5, color=sns.color_palette("hls", 8)[0],
kde=False, rug=False, hist_kws=dict(alpha=1.0, edgecolor="red", linewidth=2.5))
plt.xlim([-1, 30])
plt.legend(loc='upper right')
ax.set_ylabel('#(papers)')
ax.set_xlabel('#(phrase) per paper')
# ax.set_title('Histogram of #(kp per document) of KP20k and MagKP')
```
#### Check #(unique_kp) in each dataset
```
dataset_names = ['kp20k', 'magkp']
# json_base_dir = '/Users/memray/project/kp/OpenNMT-kpg/data/keyphrase/json/' # path to the json folder
json_base_dir = '/zfs1/pbrusilovsky/rum20/kp/OpenNMT-kpg/data/keyphrase/json' # path on CRC
dataset_tgt_dict = {}
for dataset_name in dataset_names:
dataset_tgt_dict[dataset_name] = []
print(dataset_name)
input_json_path = os.path.join(json_base_dir, dataset_name, '%s_train.json' % dataset_name)
with open(input_json_path, 'r') as input_json:
for json_line in input_json:
json_dict = json.loads(json_line)
if dataset_name == 'stackexchange':
json_dict['abstract'] = json_dict['question']
json_dict['keywords'] = json_dict['tags']
del json_dict['question']
del json_dict['tags']
title = json_dict['title']
abstract = json_dict['abstract']
fulltext = json_dict['fulltext'] if 'fulltext' in json_dict else ''
keywords = json_dict['keywords']
if isinstance(keywords, str):
keywords = keywords.split(';')
json_dict['keywords'] = keywords
keywords = [k.lower().strip() for k in keywords]
dataset_tgt_dict[dataset_name].append(keywords)
# prepare Magkp subsets
dataset_tgt_dict['magkp_ln'] = [kps for kps in dataset_tgt_dict["magkp"] if len(kps) >= 3 and len(kps) <= 6]
dataset_tgt_dict['magkp_nlarge'] = [kps for kps in dataset_tgt_dict["magkp"] if len(kps) > 10]
dataset_tgt_dict['magkp_nsmall'] = dataset_tgt_dict['magkp_nlarge'][: len(dataset_tgt_dict['magkp_ln'])]
for dataset, kps_list in dataset_tgt_dict.items():
kp_set = set()
num_kp = 0
max_kp_in_doc = 0
max_len_kp = 0
len_kp_list = []
for kps in kps_list:
for kp in kps:
kp_set.add(kp)
num_kp += 1
num_word = len(kp.split())
len_kp_list.append(num_word)
if num_word > max_len_kp:
max_len_kp = num_word
if len(kps) > max_kp_in_doc:
max_kp_in_doc = len(kps)
num_unique_kp = len(kp_set)
print('*' * 50)
print(dataset)
print('num_doc=', len(kps_list))
print('num_unique_kp=', num_unique_kp)
print('num_kp=', num_kp)
print('len_kp=', np.mean(len_kp_list))
print('max_kp_in_doc=', max_kp_in_doc)
    print('max_len_kp=', max_len_kp)
```
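The same per-dataset accounting can be expressed compactly with `collections.Counter`; a standalone sketch on toy data (the keyphrase lists are made up):

```python
from collections import Counter

# toy keyphrase lists, one list per document
kps_list = [
    ["neural network", "deep learning"],
    ["deep learning", "keyphrase generation", "deep learning"],
]
all_kps = [kp for kps in kps_list for kp in kps]
kp_counts = Counter(all_kps)

num_kp = len(all_kps)                              # total keyphrases
num_unique_kp = len(kp_counts)                     # distinct keyphrases
max_kp_in_doc = max(len(kps) for kps in kps_list)  # largest per-doc count
avg_kp_len = sum(len(kp.split()) for kp in all_kps) / num_kp  # words per phrase
```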
#### print num_paper binned by num_kp
```
tmp_tgt_nums = [n for n in tgt_nums["kp20k"] if n <= 30]
for bin_count in np.bincount(tmp_tgt_nums):
print(bin_count)
```
#### Hatch-filled histograms
```
import itertools
from collections import OrderedDict
from functools import partial
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
from cycler import cycler
from six.moves import zip
def filled_hist(ax, edges, values, bottoms=None, orientation='v',
**kwargs):
"""
Draw a histogram as a stepped patch.
Extra kwargs are passed through to `fill_between`
Parameters
----------
ax : Axes
The axes to plot to
edges : array
A length n+1 array giving the left edges of each bin and the
right edge of the last bin.
values : array
A length n array of bin counts or values
bottoms : scalar or array, optional
A length n array of the bottom of the bars. If None, zero is used.
orientation : {'v', 'h'}
Orientation of the histogram. 'v' (default) has
the bars increasing in the positive y-direction.
Returns
-------
ret : PolyCollection
Artist added to the Axes
"""
print(orientation)
if orientation not in set('hv'):
raise ValueError("orientation must be in {{'h', 'v'}} "
"not {o}".format(o=orientation))
kwargs.setdefault('step', 'post')
edges = np.asarray(edges)
values = np.asarray(values)
if len(edges) - 1 != len(values):
raise ValueError('Must provide one more bin edge than value not: '
'len(edges): {lb} len(values): {lv}'.format(
lb=len(edges), lv=len(values)))
if bottoms is None:
bottoms = np.zeros_like(values)
if np.isscalar(bottoms):
bottoms = np.ones_like(values) * bottoms
values = np.r_[values, values[-1]]
bottoms = np.r_[bottoms, bottoms[-1]]
if orientation == 'h':
return ax.fill_betweenx(edges, values, bottoms,
**kwargs)
elif orientation == 'v':
return ax.fill_between(edges, values, bottoms,
**kwargs)
else:
raise AssertionError("you should never be here")
def stack_hist(ax, stacked_data, sty_cycle, bottoms=None,
hist_func=None, labels=None,
plot_func=None, plot_kwargs=None):
"""
ax : axes.Axes
        The axes to add artists to
stacked_data : array or Mapping
A (N, M) shaped array. The first dimension will be iterated over to
compute histograms row-wise
sty_cycle : Cycler or operable of dict
Style to apply to each set
bottoms : array, optional
The initial positions of the bottoms, defaults to 0
hist_func : callable, optional
Must have signature `bin_vals, bin_edges = f(data)`.
`bin_edges` expected to be one longer than `bin_vals`
labels : list of str, optional
The label for each set.
If not given and stacked data is an array defaults to 'default set {n}'
If stacked_data is a mapping, and labels is None, default to the keys
(which may come out in a random order).
        If stacked_data is a mapping and labels is given then only
        the columns listed will be plotted.
plot_func : callable, optional
Function to call to draw the histogram must have signature:
ret = plot_func(ax, edges, top, bottoms=bottoms,
label=label, **kwargs)
plot_kwargs : dict, optional
Any extra kwargs to pass through to the plotting function. This
will be the same for all calls to the plotting function and will
over-ride the values in cycle.
Returns
-------
arts : dict
Dictionary of artists keyed on their labels
"""
# deal with default binning function
if hist_func is None:
hist_func = np.histogram
# deal with default plotting function
if plot_func is None:
plot_func = filled_hist
# deal with default
if plot_kwargs is None:
plot_kwargs = {}
print(plot_kwargs)
try:
l_keys = stacked_data.keys()
label_data = True
if labels is None:
labels = l_keys
except AttributeError:
label_data = False
if labels is None:
labels = itertools.repeat(None)
if label_data:
loop_iter = enumerate((stacked_data[lab], lab, s) for lab, s in
zip(labels, sty_cycle))
else:
loop_iter = enumerate(zip(stacked_data, labels, sty_cycle))
arts = {}
for j, (data, label, sty) in loop_iter:
if label is None:
label = 'dflt set {n}'.format(n=j)
label = sty.pop('label', label)
vals, edges = hist_func(data)
if bottoms is None:
bottoms = np.zeros_like(vals)
top = bottoms + vals # stack
top = vals # non-stack
print(label)
print(sty)
sty.update(plot_kwargs)
print(sty)
ret = plot_func(ax, edges, top, bottoms=bottoms,
label=label, **sty)
bottoms = top
arts[label] = ret
ax.legend(fontsize=10)
return arts
kp_data = OrderedDict()
kp_data["MagKP"] = [n for n in tgt_nums["magkp"] if n <= 10 and (n < 3 or n > 6)]
kp_data["MagKP-Nlarge"] = [n for n in tgt_nums["magkp"] if n > 10 and n <= 30]
kp_data["MagKP-Nsmall"] = [n for n in tgt_nums["magkp"] if n > 10 and n <= 30]
kp_data["MagKP-Nsmall"] = kp_data["MagKP-Nsmall"][: len(kp_data["MagKP-Nsmall"]) // 2]
kp_data["MagKP-LN"] = [n for n in tgt_nums["magkp"] if n >= 3 and n <= 6]
kp_data["KP20k"] = [n for n in tgt_nums["kp20k"] if n <= 30]
# set up histogram function to fixed bins
edges = np.linspace(-1, 30, 31, endpoint=True)
hist_func = partial(np.histogram, bins=edges)
print(kp_data.keys())
# set up style cycles
color_cycle = cycler(facecolor=[sns.color_palette("hls", 8)[4],
sns.color_palette("hls", 8)[4],
sns.color_palette("hls", 8)[4],
sns.color_palette("hls", 8)[4],
sns.color_palette("hls", 8)[0],
])
hatch_cycle = cycler(hatch=[' ', '/', 'o', '+', ' '])
# hatch_cycle = cycler(hatch=[' ', '/', 'o', '+', '|'])
alpha_cycle = cycler(alpha=[0.6, 0.6, 0.5, 0.5, 1.0])
# hist_kws=dict(alpha=0.5, edgecolor="w", linewidth=0.2)
# Fixing random state for reproducibility
np.random.seed(19680801)
fig, ax = plt.subplots(figsize=(8, 9), tight_layout=True, sharey=True)
arts = stack_hist(ax, kp_data,
sty_cycle=color_cycle + hatch_cycle + alpha_cycle,
labels=kp_data.keys(), hist_func=hist_func)
ax.set_xlabel('#(phrase) per paper')
ax.set_ylabel('#(papers)')
plt.show()
```
### Stats of KP20k
##### w/o preprocessing
All documents
- #(data examples)=514,154
- #(KP)=2,710,067
- #(unique KP)=710,218
For documents whose \#(kp)>10
- #(DP)=19,336 (3.76%)
- #(KP)=401,763 (14.82%)
- #(unique KP)=52,176 (7.35%)
##### w/ preprocessing
All documents
- #(DP)=514,154
- #(KP)=2,710,067
- #(unique KP)=625,058 (diff between w/&w/o preprocessing: 85,160)
For documents whose \#(kp)>10
- #(DP)=19,336
- #(KP)=401,763 (14.82%)
- #(unique KP)=48,125 (7.70%, diff between w/&w/o preprocessing: 4,051)
#### Count #kp per document
```
data = tgt_nums["kp20k"]
print(scipy.stats.describe(data))
for p in np.linspace(0, 100, 101):
percentile = np.percentile(data, p, interpolation='lower')
print('Percentile@%.0f = %.6f' % (p, percentile))
tmp_tgt_nums = [n for n in tgt_nums["kp20k"] if n >=3 and n <= 6]
print('%d/%d' % (len(tmp_tgt_nums), len(tgt_nums["kp20k"])))
fig, ax = plt.subplots(figsize=(8, 6))
tmp_tgt_nums = [n for n in tgt_nums["kp20k"] if n <= 10]
sns.distplot(tmp_tgt_nums, color="teal", label="KP20k", bins=10, kde=False, rug=False, hist_kws=dict(alpha=0.7, edgecolor="k", linewidth=1))
ax.set_title('Histogram of #(kp per document) of KP20k (truncated at 10)')
fig, ax = plt.subplots(figsize=(8, 6))
tmp_tgt_nums = [n for n in tgt_nums["kp20k"]]
sns.distplot(tmp_tgt_nums, color="teal", label="KP20k", bins=100, kde=False, rug=False, hist_kws=dict(alpha=0.7, edgecolor="k", linewidth=1))
ax.set_title('Histogram of #(kp per document) of KP20k')
```
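The `interpolation='lower'` argument in the cell above snaps each percentile to the nearest data point at or below the exact quantile position (NumPy >= 1.22 spells this `method='lower'`, which is assumed here). A small standalone check:

```python
import numpy as np

data = [1, 2, 2, 3, 5, 8, 13, 21]
# with n=8 points, the 50th percentile position is (n-1)*0.5 = 3.5,
# and 'lower' rounds down to index 3 -> value 3
p50 = np.percentile(data, 50, method="lower")
p90 = np.percentile(data, 90, method="lower")  # position 6.3 -> index 6 -> 13
```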
#### Count unique phrases
##### only count documents that #(kp)>10
```
dataset_name = 'kp20k'
do_preprocess = True
stemmer = PorterStemmer()
json_base_dir = '/zfs1/pbrusilovsky/rum20/kp/OpenNMT-kpg/data/keyphrase/json' # path on CRC
input_json_path = os.path.join(json_base_dir, dataset_name, '%s_train.json' % dataset_name)
unique_kp_counter = defaultdict(lambda: 0)
num_data = 0
num_kp = 0
with open(input_json_path, 'r') as input_json:
for json_line in input_json:
json_dict = json.loads(json_line)
if dataset_name == 'stackexchange':
json_dict['abstract'] = json_dict['question']
json_dict['keywords'] = json_dict['tags']
del json_dict['question']
del json_dict['tags']
title = json_dict['title']
abstract = json_dict['abstract']
fulltext = json_dict['fulltext'] if 'fulltext' in json_dict else ''
keywords = json_dict['keywords']
if isinstance(keywords, str):
keywords = keywords.split(';')
json_dict['keywords'] = keywords
if len(keywords) > 10:
num_data += 1
for keyword in keywords:
num_kp += 1
if do_preprocess:
tokens = [stemmer.stem(t) for t in keyword.lower().split()]
keyword = '_'.join(tokens)
unique_kp_counter[keyword] = unique_kp_counter[keyword] + 1
print('#(DP)=%d' % num_data)
print('#(KP)=%d' % num_kp)
print('#(unique KP)=%d' % len(unique_kp_counter))
```
##### count all documents #(kp)>0
```
dataset_name = 'kp20k'
do_preprocess = False
stemmer = PorterStemmer()
json_base_dir = '/zfs1/pbrusilovsky/rum20/kp/OpenNMT-kpg/data/keyphrase/json' # path on CRC
input_json_path = os.path.join(json_base_dir, dataset_name, '%s_train.json' % dataset_name)
unique_kp_counter = defaultdict(lambda: 0)
kp_len_counter = defaultdict(lambda: 0)
num_data = 0
num_kp = 0
with open(input_json_path, 'r') as input_json:
for json_line in input_json:
json_dict = json.loads(json_line)
if dataset_name == 'stackexchange':
json_dict['abstract'] = json_dict['question']
json_dict['keywords'] = json_dict['tags']
del json_dict['question']
del json_dict['tags']
title = json_dict['title']
abstract = json_dict['abstract']
fulltext = json_dict['fulltext'] if 'fulltext' in json_dict else ''
keywords = json_dict['keywords']
if isinstance(keywords, str):
keywords = keywords.split(';')
json_dict['keywords'] = keywords
if len(keywords) > 0:
num_data += 1
for keyword in keywords:
num_kp += 1
if do_preprocess:
tokens = [stemmer.stem(t) for t in keyword.lower().split()]
keyword = ' '.join(tokens)
tokens = [t for t in keyword.split()]
kp_len_counter[len(tokens)] = kp_len_counter[len(tokens)] + 1
unique_kp_counter[keyword] = unique_kp_counter[keyword] + 1
print('#(DP)=%d' % num_data)
print('#(KP)=%d' % num_kp)
print('#(unique KP)=%d' % len(unique_kp_counter))
fig, ax = plt.subplots(figsize=(8, 6))
tmp_kp_freqs = [v for k,v in unique_kp_counter.items() if v > 1000]
sns.distplot(tmp_kp_freqs, color="teal", label="KP20k", bins=15, kde=False, rug=False, hist_kws=dict(alpha=0.7, edgecolor="k", linewidth=1))
```
#### KP length distribution
```
fig, ax = plt.subplots(figsize=(16, 12))
sns.set(style="whitegrid")
kp_lens = sorted([(kp_len, freq) for kp_len, freq in kp_len_counter.items()], key=lambda k:k[0])
accum_kp_count = 0
total_kp_count = sum(freq for _, freq in kp_lens)
for kp_len, freq in kp_lens:
accum_kp_count += freq
print('#kp_len=%d, freq=%d, accum/total=%.2f%%' % (kp_len, freq, accum_kp_count / total_kp_count * 100))
print(len(kp_lens))
kp_lens_df = pd.DataFrame(kp_lens, columns=['#kp_len', 'freq'])
ax = sns.barplot(x="#kp_len", y="freq", data=kp_lens_df)
```
### Stats of MagKP
##### w/o preprocessing
All documents
- #(DP)=2,699,094
- #(KP)=41,605,964
- #(unique KP)=6,880,853
For documents whose \#(kp)>10
- #(DP)=1,520,307 (56.33%)
- #(KP)=35,525,765 (85.39%)
- #(unique KP)=5,784,959 (84.07%)
##### w/ preprocessing (lowercase and stemming)
All documents
- #(DP)=2,699,094
- #(KP)=41,605,964
- #(unique KP)=6,537,481 (diff between w/&w/o preprocessing: 343,372, 5.25% difference)
For documents whose \#(kp)>10
- #(DP)=1,520,307
- #(KP)=35,525,765 (85.39%)
- #(unique KP)=5,493,997 (84.04%, diff between w/&w/o preprocessing: 290,962)
#### Count #kp per document
```
data = tgt_nums["magkp"]
print(scipy.stats.describe(data))
data = tgt_nums["magkp"]
for p in np.linspace(0, 100, 101):
percentile = np.percentile(data, p, interpolation='lower')
print('Percentile@%.0f = %.6f' % (p, percentile))
```
#### Histogram of #(kp per document) < 61
```
fig, ax = plt.subplots(figsize=(8, 6))
tmp_tgt_nums = [n for n in tgt_nums["magkp"] if n < 61]
sns.distplot(tmp_tgt_nums, color="teal", label="MagKP", bins=60, kde=False, rug=False, hist_kws=dict(alpha=0.7, edgecolor="k", linewidth=1))
ax.set_title('Histogram of #(kp per document) of MagKP (truncated at 60)')
```
#### Histogram of #(kp per document) < 11
```
fig, ax = plt.subplots(figsize=(8, 6))
tmp_tgt_nums = [n for n in tgt_nums["magkp"] if n <= 10]
sns.distplot(tmp_tgt_nums, color="teal", label="MagKP", bins=10, kde=False, rug=False, hist_kws=dict(alpha=0.7, edgecolor="k", linewidth=1))
ax.set_title('Histogram of #(kp per document) of MagKP (truncated at 10)')
# sns.distplot(np.asarray(tgt_nums, dtype=int), bins=15, color="r", kde=False, rug=False);
# Plot a simple histogram with binsize determined automatically
# sns.distplot(tgt_nums, kde=False, color="b", ax=ax)
# # Plot a kernel density estimate and rug plot
# sns.distplot(tgt_nums, hist=False, rug=True, color="r")
# # Plot a filled kernel density estimate
# sns.distplot(tgt_nums, hist=False, color="g", kde_kws={"shade": True})
# # Plot a histogram and kernel density estimate
# sns.distplot(tgt_nums, hist=True, color="m", ax=ax)
# sns.distplot(tgt_nums["kp20k"] , color="skyblue", label="KP20k", bins=15, kde=False, rug=False, hist_kws=dict(alpha=0.7))
# sns.distplot(tgt_nums["kp20k"] , color="teal", label="KP20k", bins=15, kde=False, rug=False, hist_kws=dict(alpha=0.7, edgecolor="k", linewidth=1))
sns.distplot(tgt_nums["magkp"] , color="teal", label="MagKP", bins=15, kde=False, rug=False, hist_kws=dict(alpha=0.7, edgecolor="k", linewidth=1))
# sns.distplot(tgt_nums["inspec"] , color="red", label="Inspec", bins=15, kde=False, rug=False, hist_kws=dict(alpha=0.7))
# sns.distplot(tgt_nums["krapivin"] , color="olive", label="Krapivin", bins=15, kde=False, rug=False, hist_kws=dict(alpha=0.7))
# sns.distplot(tgt_nums["nus"] , color="gold", label="NUS", bins=15, kde=False, rug=False, hist_kws=dict(alpha=0.7))
# sns.distplot(tgt_nums["semeval"] , color="teal", label="Semeval", bins=15, kde=False, rug=False, hist_kws=dict(alpha=0.7))
ax.set(xlabel='Number of keyphrases in doc', ylabel='Number of documents')
plt.legend()
plt.show()
```
#### Count unique phrases
##### only count documents that #(kp)>10
```
dataset_name = 'magkp'
do_preprocess = True
stemmer = PorterStemmer()
json_base_dir = '/zfs1/pbrusilovsky/rum20/kp/OpenNMT-kpg/data/keyphrase/json' # path on CRC
input_json_path = os.path.join(json_base_dir, dataset_name, '%s_train.json' % dataset_name)
unique_kp_counter = defaultdict(lambda: 0)
num_data = 0
num_kp = 0
with open(input_json_path, 'r') as input_json:
for json_line in input_json:
json_dict = json.loads(json_line)
if dataset_name == 'stackexchange':
json_dict['abstract'] = json_dict['question']
json_dict['keywords'] = json_dict['tags']
del json_dict['question']
del json_dict['tags']
title = json_dict['title']
abstract = json_dict['abstract']
fulltext = json_dict['fulltext'] if 'fulltext' in json_dict else ''
keywords = json_dict['keywords']
if isinstance(keywords, str):
keywords = keywords.split(';')
json_dict['keywords'] = keywords
if len(keywords) > 10:
num_data += 1
for keyword in keywords:
num_kp += 1
if do_preprocess:
tokens = [stemmer.stem(t) for t in keyword.lower().split()]
keyword = '_'.join(tokens)
unique_kp_counter[keyword] = unique_kp_counter[keyword] + 1
print('#(DP)=%d' % num_data)
print('#(KP)=%d' % num_kp)
print('#(unique KP)=%d' % len(unique_kp_counter))
```
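The stem-and-join normalization above is what collapses surface variants of the same phrase onto one dictionary key before counting. A simplified, stdlib-only sketch (stemming omitted; the notebook additionally runs PorterStemmer on each token) illustrates the counting pattern:

```python
from collections import defaultdict

def normalize(phrase):
    # Simplified stand-in for the notebook's stem-and-join step:
    # lowercase, split on whitespace, join with underscores.
    return '_'.join(phrase.lower().split())

unique_kp_counter = defaultdict(int)
for kp in ["Neural Network", "neural  network", "Deep Learning"]:
    unique_kp_counter[normalize(kp)] += 1

print(len(unique_kp_counter))               # 2 unique phrases
print(unique_kp_counter["neural_network"])  # 2
```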
##### Count all documents with #(kp) > 0
```
dataset_name = 'magkp'
do_preprocess = False
stemmer = PorterStemmer()
json_base_dir = '/zfs1/pbrusilovsky/rum20/kp/OpenNMT-kpg/data/keyphrase/json' # path on CRC
input_json_path = os.path.join(json_base_dir, dataset_name, '%s_train.json' % dataset_name)
unique_kp_counter = defaultdict(lambda: 0)
kp_len_counter = defaultdict(lambda: 0)
num_data = 0
num_kp = 0
with open(input_json_path, 'r') as input_json:
for json_line in input_json:
json_dict = json.loads(json_line)
if dataset_name == 'stackexchange':
json_dict['abstract'] = json_dict['question']
json_dict['keywords'] = json_dict['tags']
del json_dict['question']
del json_dict['tags']
title = json_dict['title']
abstract = json_dict['abstract']
fulltext = json_dict['fulltext'] if 'fulltext' in json_dict else ''
keywords = json_dict['keywords']
if isinstance(keywords, str):
keywords = keywords.split(';')
json_dict['keywords'] = keywords
if len(keywords) > 0:
num_data += 1
for keyword in keywords:
num_kp += 1
if do_preprocess:
tokens = [stemmer.stem(t) for t in keyword.lower().split()]
keyword = ' '.join(tokens)
# print(keyword)
tokens = [t for t in keyword.split()]
kp_len_counter[len(tokens)] = kp_len_counter[len(tokens)] + 1
unique_kp_counter[keyword] = unique_kp_counter[keyword] + 1
print('#(DP)=%d' % num_data)
print('#(KP)=%d' % num_kp)
print('#(unique KP)=%d' % len(unique_kp_counter))
fig, ax = plt.subplots(figsize=(8, 6))
tmp_kp_freqs = [v for k,v in unique_kp_counter.items() if v > 5000]
sns.distplot(tmp_kp_freqs, color="teal", label="MagKP",
             bins=50, kde=False, rug=False,
             hist_kws=dict(alpha=0.7, edgecolor="k", linewidth=1), ax=ax)
ax.set_title("Frequency of unique phrases")  # distplot has no `title` argument; set it on the axes
```
#### KP length distribution
```
fig, ax = plt.subplots(figsize=(16, 12))
sns.set(style="whitegrid")
kp_lens = sorted([(kp_len, freq) for kp_len, freq in kp_len_counter.items()], key=lambda k:k[0])
accum_kp_count = 0
total_kp_count = sum(freq for _, freq in kp_lens)
for kp_len, freq in kp_lens:
accum_kp_count += freq
print('#kp_len=%d, freq=%d, accum/total=%.2f%%' % (kp_len, freq, accum_kp_count / total_kp_count * 100))
print(len(kp_lens))
kp_lens_df = pd.DataFrame(kp_lens, columns=['#kp_len', 'freq'])
ax = sns.barplot(x="#kp_len", y="freq", data=kp_lens_df)
```

[](https://colab.research.google.com/github/JohnSnowLabs/nlu/blob/master/examples/colab/Training/multi_class_text_classification/NLU_training_multi_class_text_classifier_demo_musical_instruments.ipynb)
# Training a Deep Learning Classifier with NLU
## ClassifierDL (Multi-class Text Classification)
## 4 class Amazon Musical Instruments review classifier training
With the [ClassifierDL model](https://nlp.johnsnowlabs.com/docs/en/annotators#classifierdl-multi-class-text-classification) from Spark NLP you can achieve state-of-the-art results on multi-class text classification problems.
This notebook showcases the following features:
- How to train the deep learning classifier
- How to store a pipeline to disk
- How to load the pipeline from disk (Enables NLU offline mode)
You can achieve results like these, or better, on this dataset's training data:
<br>

You can achieve results like these, or better, on this dataset's test data:
<br>

# 1. Install Java 8 and NLU
```
!wget https://setup.johnsnowlabs.com/nlu/colab.sh -O - | bash
import nlu
```
# 2. Download musical instruments classification dataset
https://www.kaggle.com/eswarchandt/amazon-music-reviews
A dataset of Amazon musical-instrument reviews with product ratings as class labels.
```
! wget http://ckl-it.de/wp-content/uploads/2021/01/Musical_instruments_reviews.csv
import pandas as pd
csv_path = '/content/Musical_instruments_reviews.csv'
train_df = pd.read_csv(csv_path, sep=",")
cols = ["y","text"]
train_df = train_df[cols]
from sklearn.model_selection import train_test_split
train_df, test_df = train_test_split(train_df, test_size=0.2)
train_df
```
# 3. Train Deep Learning Classifier using nlu.load('train.classifier')
By default, Universal Sentence Encoder (USE) embeddings are downloaded to provide embeddings for the classifier. You can use any of the 50+ other sentence embeddings in NLU, though!
Your dataset's label column should be named 'y' and the text feature column should be named 'text'
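A minimal illustration of the expected input shape (made-up rows; the column names are the ones NLU requires):

```python
import pandas as pd

# NLU's trainable classifier expects exactly these column names
train_df = pd.DataFrame({
    "y":    ["positive", "negative"],
    "text": ["Great sound quality", "Strings broke after a week"],
})
print(train_df.columns.tolist())  # ['y', 'text']
```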
```
# Load a trainable pipeline by specifying the train. prefix and fit it on a dataset with label and text columns
trainable_pipe = nlu.load('train.classifier')
fitted_pipe = trainable_pipe.fit(train_df.iloc[:50] )
# predict with the trainable pipeline on dataset and get predictions
preds = fitted_pipe.predict(train_df.iloc[:50],output_level='document' )
preds
```
# 4. Evaluate the model
```
from sklearn.metrics import classification_report
print(classification_report(preds['y'], preds['trained_classifier']))
```
# 5. Let's try different Sentence Embeddings
```
# We can use nlu.print_components(action='embed_sentence') to see every possible sentence embedding we could use. Let's use BERT!
nlu.print_components(action='embed_sentence')
# Load pipe with bert embeds
# Using large embeddings can take a few hours.
# fitted_pipe = nlu.load('en.embed_sentence.bert_large_uncased train.classifier').fit(train_df)
fitted_pipe = nlu.load('en.embed_sentence.bert train.classifier').fit(train_df.iloc[:100])
# predict with the trained pipeline on dataset and get predictions
preds = fitted_pipe.predict(train_df.iloc[:100],output_level='document')
from sklearn.metrics import classification_report
print(classification_report(preds['y'], preds['trained_classifier']))
# Load pipe with bert embeds
fitted_pipe = nlu.load('embed_sentence.bert train.classifier').fit(train_df.iloc[:100])
# predict with the trained pipeline on dataset and get predictions
preds = fitted_pipe.predict(train_df.iloc[:100],output_level='document')
from sklearn.metrics import classification_report
print(classification_report(preds['y'], preds['trained_classifier']))
from sklearn.metrics import classification_report
trainable_pipe = nlu.load('en.embed_sentence.small_bert_L12_768 train.classifier')
# Non-USE sentence embeddings usually need longer training and a smaller learning rate
# We could tune the hyperparameters further with hyperparameter tuning methods like gridsearch
# Also longer training gives more accuracy
trainable_pipe['classifier_dl'].setMaxEpochs(90)
trainable_pipe['classifier_dl'].setLr(0.0005)
fitted_pipe = trainable_pipe.fit(train_df)
# predict with the trainable pipeline on dataset and get predictions
preds = fitted_pipe.predict(train_df,output_level='document')
# The sentence detector in the pipe generates some NaNs; drop them first
preds.dropna(inplace=True)
print(classification_report(preds['y'], preds['trained_classifier']))
#preds
```
# 6. Evaluate on Test Data
```
preds = fitted_pipe.predict(test_df,output_level='document')
# The sentence detector in the pipe generates some NaNs; drop them first
preds.dropna(inplace=True)
print(classification_report(preds['y'], preds['trained_classifier']))
```
# 7. Let's save the model
```
stored_model_path = './models/classifier_dl_trained'
fitted_pipe.save(stored_model_path)
```
# 8. Let's load the model from disk
This makes offline NLU usage possible!
You need to call nlu.load(path=path_to_the_pipe) to load a model/pipeline from disk.
```
hdd_pipe = nlu.load(path=stored_model_path)
preds = hdd_pipe.predict('It was really good ')
preds
hdd_pipe.print_info()
```
```
import re
text_to_search = '''
abcdefghijklmnopqurtuvwxyz
ABCDEFGHIJKLMNOPQRSTUVWXYZ
1234567890
Ha HaHa
MetaCharacters (Need to be escaped):
. ^ $ * + ? { } [ ] \ | ( )
coreyms.com
321-555-4321
123.555.1234
123*555*1234
800-555-1234
900-555-1234
Mr. Schafer
Mr Smith
Ms Davis
Mrs. Robinson
Mr. T
cat
mat
bat
'''
sentence = 'Start a sentence and then bring it to an end'
pattern = re.compile(r'abc')
matches = pattern.finditer(text_to_search)
for match in matches:
print(match)
text_to_search[1:4]
pattern = re.compile(r'123')
matches = pattern.finditer(text_to_search)
for match in matches:
print(match)
pattern = re.compile(r'\.')
matches = pattern.finditer(text_to_search)
for match in matches:
print(match)
pattern = re.compile(r'coreyms\.com')
matches = pattern.finditer(text_to_search)
for match in matches:
print(match)
pattern = re.compile(r'.')
matches = pattern.finditer(text_to_search)
for match in matches:
print(match)
```
- `\d`: matches any decimal digit; equivalent to the class `[0-9]`.
- `\D`: matches any non-digit character; equivalent to the class `[^0-9]`.
- `\s`: matches any whitespace character; equivalent to the class `[ \t\n\r\f\v]`.
- `\S`: matches any non-whitespace character; equivalent to the class `[^ \t\n\r\f\v]`.
- `\w`: matches any alphanumeric character; equivalent to the class `[a-zA-Z0-9_]`.
- `\W`: matches any non-alphanumeric character; equivalent to the class `[^a-zA-Z0-9_]`.
- `\b`: word boundary.
- `\B`: another zero-width assertion; the opposite of `\b`.
```
pattern = re.compile(r'\bHa')
matches = pattern.finditer(text_to_search)
for match in matches:
print(match)
print(text_to_search[66:75])
```
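The character classes listed above can be tried quickly with `re.findall`; a small sketch:

```python
import re

s = "Call 555-1234 by Jan 5."

print(re.findall(r'\d+', s))      # runs of digits: ['555', '1234', '5']
print(re.findall(r'\w+', s))      # runs of word characters
print(re.findall(r'\bJan\b', s))  # whole-word match: ['Jan']
```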
- `^`: matches at the beginning of lines.
- `$`: matches at the end of a line.
- `\A`: matches only at the start of the string. When not in MULTILINE mode, `\A` and `^` are effectively the same. In MULTILINE mode they differ: `\A` still matches only at the beginning of the string, but `^` may match at any location inside the string that follows a newline character.
- `\Z`: matches only at the end of the string.
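The difference between `^` and `\A` only shows up in MULTILINE mode; a short sketch:

```python
import re

text = "start one\nanother start"

print(re.findall(r'^\w+', text))                 # ['start']
# With re.MULTILINE, ^ also matches right after each newline
print(re.findall(r'^\w+', text, re.MULTILINE))   # ['start', 'another']
# \A matches only at the very start, even in MULTILINE mode
print(re.findall(r'\A\w+', text, re.MULTILINE))  # ['start']
```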
```
pattern = re.compile(r'\d\d\d.\d\d\d.\d\d\d\d')
matches = pattern.finditer(text_to_search)
for match in matches:
print(match)
with open('data.txt', 'r') as f:
contents = f.read()
pattern = re.compile(r'\d\d\d.\d\d\d.\d\d\d\d')
matches = pattern.finditer(contents)
for match in matches:
print(match)
pattern = re.compile(r'[89]00.\d\d\d.\d\d\d\d')
matches = pattern.finditer(text_to_search)
for match in matches:
print(match)
with open('data.txt', 'r') as f:
contents = f.read()
pattern = re.compile(r'[89]00.\d\d\d.\d\d\d\d')
matches = pattern.finditer(contents)
for match in matches:
print(match)
pattern = re.compile(r'[1-5]')
matches = pattern.finditer(text_to_search)
for match in matches:
print(match)
pattern = re.compile(r'[^a-zA-Z0-9]')
matches = pattern.finditer(text_to_search)
for match in matches:
print(match)
pattern = re.compile(r'[^b]at')
matches = pattern.finditer(text_to_search)
for match in matches:
print(match)
```
| Character | Description | Example |
|---|---|---|
| `[]` | A set of characters | `[a-m]` |
| `\` | Signals a special sequence (can also be used to escape special characters) | `\d` |
| `.` | Any character (except newline) | `he..o` |
| `^` | Starts with | `^hello` |
| `$` | Ends with | `world$` |
| `*` | Zero or more occurrences | `aix*` |
| `+` | One or more occurrences | `aix+` |
| `{}` | Exactly the specified number of occurrences | `al{2}` |
| `\|` | Either/or | `falls\|stays` |
| `()` | Capture and group | |
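The quantifier, alternation, and grouping rows of the table can be sketched in a few lines:

```python
import re

s = "all fall falls stays"

print(re.findall(r'al{2}', s))        # 'a' followed by exactly two 'l's
print(re.findall(r'falls|stays', s))  # alternation: ['falls', 'stays']

m = re.search(r'(fa)(ll)', s)         # capture groups
print(m.group(0), m.group(1), m.group(2))  # fall fa ll
```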
```
pattern = re.compile(r'Mr\.')
matches = pattern.finditer(text_to_search)
for match in matches:
print(match)
emails = '''
CoreyMSchafer@gmail.com
corey.schafer@university.edu
corey-321-schafer@my-work.net
'''
pattern = re.compile(r'[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+')
matches = pattern.finditer(emails)
for match in matches:
print(match)
urls = '''
https://www.google.com
http://coreyms.com
https://youtube.com
https://www.nasa.gov
'''
pattern = re.compile(r'https?://(www\.)?(\w+)(\.\w+)')
subbed_urls = pattern.sub(r'\2\3', urls)
print(subbed_urls)
pattern = re.compile(r'\d{3}.\d{3}.\d{4}')
matches = pattern.findall(text_to_search)
for match in matches:
print(match)
pattern = re.compile(r'start', re.I)
matches = pattern.findall(sentence)
for match in matches:
print(match)
```
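One caveat worth noting about `findall`, used above: when the pattern contains groups, it returns tuples of the captured groups rather than the full matches:

```python
import re

urls = "https://www.google.com http://coreyms.com"

# Without groups, findall returns the full matches
print(re.findall(r'https?://\S+', urls))

# With groups, findall returns one tuple of captured groups per match
print(re.findall(r'https?://(www\.)?(\w+)(\.\w+)', urls))
# [('www.', 'google', '.com'), ('', 'coreyms', '.com')]
```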
This page explains the multiple layouts components and all the options to control the layout of the dashboard.
There are 4 main components in a jupyter-flex dashboard in this hierarchy:
1. Pages
2. Sections
3. Cards
4. Cells
Pages contain one or more Sections, Sections contain one or more Cards, and Cards contain one or more Cells.
Cells are the same as in Jupyter: they can be Markdown or code cells and can contain one output (text or images).
## Cards
A Card is an object that holds one or more Cells, these can be markdown cells or code cells that have outputs such as plots, text, widgets and more.
To define a new Card you use a level-3 markdown header (`###`). Each card belongs in a Section and one Section can have one or more Cards.
Any tagged Cell will be added to the current Card until a new Card, Section or Page is defined.
The components of a Card are:
1. Title: Based on the value of level-3 markdown header (`###`) used to define it
2. One (or more) code cells tagged with `body` that contain outputs
3. One (or more) markdown cells tagged with `body` that contain some narrative for the dashboard
4. Footer: one or more markdown or code cells tagged with `footer`
5. Info dialog: one or more markdown or code cells tagged with `info`
For example, take this notebook with one Card, two plots, and some text. Note that code cells get expanded to occupy all the space in the Card, while markdown cells use just the space they need to display their content.
```
### Card header
# This is a markdown cell, **all** *regular* syntax ~~should~~ work.
# This is another markdown cell. Any output from a code cell will have priority and be expanded.
import altair as alt
from vega_datasets import data
alt.renderers.set_embed_options(actions=False)
source = data.cars()
plot = alt.Chart(source).mark_circle(size=60).encode(
x='Horsepower',
y='Miles_per_Gallon',
color='Origin',
tooltip=['Name', 'Origin', 'Horsepower', 'Miles_per_Gallon']
)
plot
plot.properties(
width='container',
height='container'
)
```
This is more markdown below the main content
Click on the help button in the corner to open the help modal.
This is a MD cell on the footer, we can use [regular md](https://google.com) an even show code, for example:
```
"Altair version: " + alt.__version__
```
This is the help modal, the title above ^^ is the same card header and this is just a regular markdown cell.
```
"This is code output"
```
You can have also have plots here
```
plot
```
[](/examples/card-complete.html)
<p class="img-caption">Click on the image to open dashboard</p>
## Sections
A Section is an object that holds one or more Cards; those Cards can be arranged as rows (default) or columns inside the Section.
To define a new Section you use a level-2 markdown header (`##`). Each Section belongs in a Page and one Page can have one or more Sections.
The default orientation of Sections in a Page is to show each section as a column.
For example take this Notebook with two Sections, the first with one Card and the second one with two:
```
## Column 1
### Column 1
# <code>
## Column 2
### Column 2 - Row 1
# <code>
### Column 2 - Row 2
# <code>
```
[](/examples/section-columns-rows.html)
<p class="img-caption">Click on the image to open dashboard</p>
### Orientation
The default orientation for Sections is `columns` and for Cards is `rows`. This means that each Section will be shown as one column in a Page and each Card in a Section will be shown as a row.
Each Section default parameters can be overwritten by adding tags to the Section markdown cell (the one with a level-2 header: `##`).
For example, in the last Notebook, to make the second Section (right column) also be divided into columns instead of the default rows, we add an `orientation=columns` tag like this:
```
## Column 2
```
[](/examples/section-columns-columns.html)
<p class="img-caption">Click on the image to open dashboard</p>
To change the orientation of Sections in a Page use the global `flex_orientation` parameter.
The default is to use the opposite orientation between Sections and Cards, as follows:
- If dashboard `flex_orientation` is `columns` (default), then Section orientation will default to `rows`
- If dashboard `flex_orientation` is `rows`, then Section orientation will default to `columns`
To set the global parameters you tag one code Cell in the Notebook with `parameters`.
<div class="admonition tip">
<p class="admonition-title">The <code>parameters</code> tag</p>
<ol>
<li>It's usually a good idea to have this cell at the beginning of the notebook</li>
<li>This is the same tag used by <a href="https://github.com/nteract/papermill">papermill</a> so you can use it as part of a pipeline</li>
</ol>
</div>
For example adding the following cell as the first cell of the Notebook we had before:
```
flex_orientation = "rows"
```
[](/examples/section-rows-columns.html)
<p class="img-caption">Click on the image to open dashboard</p>
### Size: Width and Height
You might have noticed that by default all the Card and Section space is divided equally: if there are 2 Cards in a Section each Card gets 50% of the space, if there are 3 Cards each gets 33.3%, and so on. The same applies to multiple Sections in one Page.
These proportions can be controlled using the `size={value}` tag in Sections and Cards cells.
For example take this notebook that focuses most of the dashboard space on the top Section with one Card.
```
flex_orientation = "rows"
## Row 1
### Row 1
# <code>
## Row 2
### Row 2 - Card 1
# <code>
### Row 2 - Card 2
# <code>
```
[](/examples/focal-chart-top.html)
<div class="admonition info">
<p class="admonition-title">What does the value of size mean?</p>
<p>Internally, jupyter-flex uses <a href="https://material-ui.com">Material UI</a>, which heavily uses the <a href="https://material-ui.com/components/grid/">Grid component</a>.</p>
<p>The <code>size</code> tag is passed directly to the <code>xs</code> property of the Grid items.</p>
<p>The item sizes in a Section should add up to 12.</p>
</div>
### Card size
In the same way that you can control Section proportions in a Page, you can control the Card proportions inside a Section using the `size` parameter as a tag in the Card header (level-3 markdown header: `###`).
In the last example if we change these two cells:
```
### Row 2 - Card 1
# <code>
### Row 2 - Card 2
# <code>
```
[](/examples/focal-chart-top-card-size.html)
<p class="img-caption">Click on the image to open dashboard</p>
### Section Tabs
You can make a Section display all its Cards as tabs, making each Card its own tab. This gives each Card more screen space. Tabs are especially useful when you have a large number of components to display.
To do this just tag a Section with `tabs` (the one with the level-2 markdown header: `##`).
For example this notebook:
```
## Column 1
### Tab 1
1
### Tab 2
2
## Column 2
### Regular Column
# <code>
## Column 3
### Tab A
"A"
### Tab B
"B"
### Tab C
"C"
```
[](/examples/section-tabs-columns.html)
<p class="img-caption">Click on the image to open dashboard</p>
## Pages
For bigger dashboards with a lot components you can divide the dashboard into multiple pages using level-1 markdown headers (`#`).
A Page is an object that holds one or more Sections, those Sections can be shown as columns (default) or rows inside the Page.
If a dashboard has more than one Page, the top navigation will include links to switch between them.
Page parameters, such as orientation and size, can be overwritten by tagging the level-1 (`#`) markdown header cell.
Take this example Notebook with 2 Pages and multiple sections (including tabs):
```
# Page 1
## Column 1
### Column 1
"page 1 - col 1"
## Column 2
### Column 2 - Card 1
"page 1 - col 2 - card 1"
### Column 2 - Card 2
"page 1 - col 2 - card 2"
# Page 2
## Row 1
### Row 1
"page 2 - row 1"
## Row 2
### Card 1
"page 2 - row 2 - card 1"
### Card 2
"page 2 - row 2 - card 2"
```
[](/examples/pages.html)
<p class="img-caption">Click on the image to open dashboard</p>
## Sidebars
Sidebars are a special type of Section or Page. They behave in the same way as regular Sections: they can contain one or more Cards, which will be shown in the sidebar of the dashboard.
If you tag a Page with `sidebar` it will be a global Sidebar, meaning that all pages will have the same sidebar.
If you tag a Section with `sidebar` then it will only appear for the page that contains that Section.
This is mostly useful for defining inputs and using Jupyter widgets, see [Voila and Widgets](/voila-widgets), but it can also be used to display other types of information.
This example uses a global sidebar by tagging the first page with `sidebar`.
```
# Sidebar
### Sidebar
"""
The sidebar is the sidebar of the dashboard.
It will always be there even after switching pages.
This content is a regular Card, for example *this* **is** [markdown](https://daringfireball.net/projects/markdown/).
"""
""
### This is a second card
# Since we have two cards in the sidebar the content was split equally, as happens in Sections by default, but it can be controlled by the `size` tag.
# This is a code cell output on the side bar
# Page 1
## Col 1
### This is the first page
# <code>
## Col 2
### Column 2
# <code>
# Page 2
## Row 1
### This is the second page
# <code>
## Row 2
### Row 2
# <code>
```
[](/examples/sidebar-global.html)
<p class="img-caption">Click on the image to open dashboard</p>
### Section sidebar
If you want a sidebar that is only available to one of the Pages, tag a Section with `sidebar`.
[](/examples/sidebar-pages.html)
<p class="img-caption">Click on the image to open dashboard</p>
## Vertical Layouts: Scroll
By default, Jupyter-flex components are laid out to fill the height of the browser, so multiple components (Sections, Cards) expand to the available space and there is no vertical scroll in the body.
This is a good default that works well for a small to medium number of components; however, if you have lots of charts you'll probably want to scroll rather than fit them all onto the page.
You can control this behavior globally using the `flex_vertical_layout` option which has a default value of `fill`, change it to `scroll` to layout charts at their natural height, scrolling the page if necessary.
```
flex_vertical_layout = "scroll"
```
This can also be set for each page using the `layout=scroll` tag.
[](/examples/pages-diff-layouts.html)
## Examples
<div class="image-grid">
<a class="image-card" href="/examples/focal-chart-top.html">
<figure>
<img src="/assets/img/screenshots/jupyter_flex.tests.test_layouts/layouts_focal-chart-top-reference.png">
<figcaption>focal-chart-top</figcaption>
</figure>
</a>
<a class="image-card" href="/examples/focal-chart-top-card-size.html">
<figure>
<img src="/assets/img/screenshots/jupyter_flex.tests.test_layouts/layouts_focal-chart-top-card-size-reference.png">
<figcaption>focal-chart-top-card-size</figcaption>
</figure>
</a>
<a class="image-card" href="/examples/grid-2x2.html">
<figure>
<img src="/assets/img/screenshots/jupyter_flex.tests.test_layouts/layouts_grid-2x2-reference.png">
<figcaption>grid-2x2</figcaption>
</figure>
</a>
<a class="image-card" href="/examples/grid-2x3.html">
<figure>
<img src="/assets/img/screenshots/jupyter_flex.tests.test_layouts/layouts_grid-2x3-reference.png">
<figcaption>grid-2x3</figcaption>
</figure>
</a>
<a class="image-card" href="/examples/header-columns-footer.html">
<figure>
<img src="/assets/img/screenshots/jupyter_flex.tests.test_layouts/layouts_header-columns-footer-reference.png">
<figcaption>header-columns-footer.ipynb</figcaption>
</figure>
</a>
<a class="image-card" href="/examples/layout-fill.html">
<figure>
<img src="/assets/img/screenshots/jupyter_flex.tests.test_layouts/layouts_layout-fill-reference.png">
<figcaption>layout-fill</figcaption>
</figure>
</a>
<a class="image-card" href="/examples/layout-scroll.html">
<figure>
<img src="/assets/img/screenshots/jupyter_flex.tests.test_layouts/layouts_layout-scroll-reference.png">
<figcaption>layout-scroll</figcaption>
</figure>
</a>
<a class="image-card" href="/examples/pages-diff-layouts.html">
<figure>
<img src="/assets/img/screenshots/jupyter_flex.tests.test_layouts/layouts_pages-diff-layouts-reference.png">
<figcaption>pages-diff-layouts</figcaption>
</figure>
</a>
<a class="image-card" href="/examples/pages.html">
<figure>
<img src="/assets/img/screenshots/jupyter_flex.tests.test_layouts/layouts_pages-reference.png">
<figcaption>pages</figcaption>
</figure>
</a>
<a class="image-card" href="/examples/section-columns-columns.html">
<figure>
<img src="/assets/img/screenshots/jupyter_flex.tests.test_layouts/layouts_section-columns-columns-reference.png">
<figcaption>section-columns + columns</figcaption>
</figure>
</a>
<a class="image-card" href="/examples/section-columns-rows.html">
<figure>
<img src="/assets/img/screenshots/jupyter_flex.tests.test_layouts/layouts_section-columns-rows-reference.png">
<figcaption>section-columns-rows</figcaption>
</figure>
</a>
<a class="image-card" href="/examples/section-rows-columns.html">
<figure>
<img src="/assets/img/screenshots/jupyter_flex.tests.test_layouts/layouts_section-rows-columns-reference.png">
<figcaption>section-rows-columns</figcaption>
</figure>
</a>
<a class="image-card" href="/examples/section-rows-rows.html">
<figure>
<img src="/assets/img/screenshots/jupyter_flex.tests.test_layouts/layouts_section-rows-rows-reference.png">
<figcaption>section-rows + rows</figcaption>
</figure>
</a>
<a class="image-card" href="/examples/section-tabs-columns.html">
<figure>
<img src="/assets/img/screenshots/jupyter_flex.tests.test_layouts/layouts_section-tabs-columns-reference.png">
<figcaption>section-tabs-columns</figcaption>
</figure>
</a>
<a class="image-card" href="/examples/section-tabs-rows.html">
<figure>
<img src="/assets/img/screenshots/jupyter_flex.tests.test_layouts/layouts_section-tabs-rows-reference.png">
<figcaption>section-tabs-rows</figcaption>
</figure>
</a>
<a class="image-card" href="/examples/sidebar-global-and-pages.html">
<figure>
<img src="/assets/img/screenshots/jupyter_flex.tests.test_layouts/layouts_sidebar-global-and-pages-reference.png">
<figcaption>sidebar-global-and-pages</figcaption>
</figure>
</a>
<a class="image-card" href="/examples/sidebar-global.html">
<figure>
<img src="/assets/img/screenshots/jupyter_flex.tests.test_layouts/layouts_sidebar-global-reference.png">
<figcaption>sidebar-global</figcaption>
</figure>
</a>
<a class="image-card" href="/examples/sidebar-pages.html">
<figure>
<img src="/assets/img/screenshots/jupyter_flex.tests.test_layouts/layouts_sidebar-pages-reference.png">
<figcaption>sidebar-pages</figcaption>
</figure>
</a>
</div>
```
# Import libraries
import numpy as np
import matplotlib.pyplot as plt
# Import libraries
import keras
import keras.backend as K
from keras.models import Model
# Activation and Regularization
from keras.regularizers import l2
from keras.activations import softmax
# Keras layers
from keras.layers.convolutional import Conv2D, Conv2DTranspose
from keras.layers import Dense, Dropout, Flatten, Input, BatchNormalization, Activation
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
# Model architecture
from elu_resnet_2d_distances import *
```
# Loading the Dataset
```
def parse_lines(raw):
return [[float(x) for x in line.split("\t") if x != ""] for line in raw]
def parse_line(line):
return [float(x) for x in line.split("\t") if x != ""]
path = "../data/full_under_200.txt"
# Open file and read text
with open(path, "r") as f:
lines = f.read().split('\n')
# Scan first n proteins
names = []
seqs = []
dists = []
pssms = []
# Extract numeric data from text
for i,line in enumerate(lines):
if len(names) == 111:
break
# Read each protein separately
if line == "[ID]":
names.append(lines[i+1])
elif line == "[PRIMARY]":
seqs.append(lines[i+1])
elif line == "[EVOLUTIONARY]":
pssms.append(parse_lines(lines[i+1:i+21]))
elif line == "[DIST]":
dists.append(parse_lines(lines[i+1:i+len(seqs[-1])+1]))
# Progress control
if len(names)%50 == 0:
        print("Currently @", len(names), "out of 111")
def wider(seq, l=200, n=20):
""" Converts a seq into a one-hot tensor. Not LxN but LxLxN"""
key = "HRKDENQSYTCPAVLIGFWM"
tensor = []
for i in range(l):
d2 = []
for j in range(l):
d1 = [1 if (j<len(seq) and i<len(seq) and key[x] == seq[i] and key[x] == seq[j]) else 0 for x in range(n)]
d2.append(d1)
tensor.append(d2)
return np.array(tensor)
inputs_aa = np.array([wider(seq) for seq in seqs])
inputs_aa.shape
def wider_pssm(pssm, seq, l=200, n=20):
""" Converts a seq into a one-hot tensor. Not LxN but LxLxN"""
key = "HRKDENQSYTCPAVLIGFWM"
key_alpha = "ACDEFGHIKLMNPQRSTVWY"
tensor = []
for i in range(l):
d2 = []
for j in range(l):
if (i==j and j<len(seq) and i<len(seq)):
d1 = [aa[i] for aa in pssm]
else:
d1 = [0 for i in range(n)]
# Append pssm[i]*pssm[j]
if j<len(seq) and i<len(seq):
d1.append(pssm[key_alpha.index(seq[i])][i] *
pssm[key_alpha.index(seq[j])][j])
else:
d1.append(0)
# Append custom distance to diagonal
d1.append(1 - abs(i-j)/200)
d2.append(d1)
tensor.append(d2)
return np.array(tensor)
inputs_pssm = np.array([wider_pssm(pssms[i], seqs[i]) for i in range(len(pssms))])
inputs_pssm.shape
inputs = np.concatenate((inputs_aa, inputs_pssm), axis=3)
inputs.shape
# Delete unnecessary data
del inputs_pssm # = inputs_pssm[:10]
del inputs_aa # = inputs_aa[:10]
# Embed number of rows
def embedding_matrix(matrix, l=200):
# Embed with extra columns
for i in range(len(matrix)):
while len(matrix[i])<l:
matrix[i].extend([-1 for i in range(l-len(matrix[i]))])
#Embed with extra rows
while len(matrix)<l:
matrix.append([-1 for x in range(l)])
return np.array(matrix)
dists = np.array([embedding_matrix(matrix) for matrix in dists])
def treshold(matrix, cuts=[-0.5, 500, 750, 1000, 1400, 1700, 2000], l=200):
# Turns an L*L*1 tensor into an L*L*N
trash = (np.array(matrix)<cuts[0]).astype(np.int)
first = (np.array(matrix)<cuts[1]).astype(np.int)-trash
sec = (np.array(matrix)<cuts[2]).astype(np.int)-trash-first
third = (np.array(matrix)<=cuts[3]).astype(np.int)-trash-first-sec
fourth = (np.array(matrix)<=cuts[4]).astype(np.int)-trash-first-sec-third
fifth = (np.array(matrix)<=cuts[5]).astype(np.int)-trash-first-sec-third-fourth
sixth = np.array(matrix)>cuts[5]
return np.concatenate((trash.reshape(l,l,1),
first.reshape(l,l,1),
sec.reshape(l,l,1),
third.reshape(l,l,1),
fourth.reshape(l,l,1),
fifth.reshape(l,l,1),
sixth.reshape(l,l,1)),axis=2)
outputs = np.array([treshold(d) for d in dists])
print(outputs.shape)
del dists
```
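The cut-point binning in `treshold` above maps each real-valued distance to a one-hot class channel. The same idea can be sketched more compactly with `np.digitize` (note this is an alternative formulation yielding 8 bins from the 7 cut points, not the notebook's exact code):

```python
import numpy as np

# Toy 2x2 "distance" map and the same cut points as above
toy_dists = np.array([[100., 600.], [1500., 2500.]])
cuts = [-0.5, 500, 750, 1000, 1400, 1700, 2000]

# Class index per cell: 0 = below -0.5 ("trash" bin), 7 = above 2000
classes = np.digitize(toy_dists, cuts)
print(classes)        # [[1 2] [5 7]]

# One-hot encode into an L x L x 8 tensor
one_hot = np.eye(8)[classes]
print(one_hot.shape)  # (2, 2, 8)
```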
# Loading the model
```
# Using the Adam optimizer with AMSGrad for speed
kernel_size, filters = 3, 16
adam = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=True)
# Create model
model = resnet_v2(input_shape=(200,200, 42), depth=28, num_classes=7)
model.compile(optimizer=adam, loss=weighted_categorical_crossentropy(np.array([1e-07, 0.45, 1.65, 1.75, 0.73, 0.77, 0.145])), metrics=["accuracy"])
model.summary()
```
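`weighted_categorical_crossentropy` is imported from `elu_resnet_2d_distances`, so its exact implementation is not shown here. A common formulation of class-weighted cross-entropy (an assumption, sketched in NumPy rather than Keras backend ops) scales the true class's log-likelihood by a per-class weight, which is how rare distance bins get up-weighted:

```python
import numpy as np

def weighted_cce(y_true, y_pred, weights, eps=1e-7):
    """Per-sample cross-entropy, scaling each class's term by weights[c].

    y_true: one-hot targets (N, C); y_pred: predicted probabilities (N, C).
    """
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    # Only the true class contributes; its log-prob is scaled by its weight
    return -np.sum(weights * y_true * np.log(y_pred), axis=-1)

y_true = np.array([[0., 1., 0.]])
y_pred = np.array([[0.2, 0.7, 0.1]])
weights = np.array([1.0, 2.0, 1.0])
loss = weighted_cce(y_true, y_pred, weights)
print(loss)  # -2 * log(0.7), roughly 0.713
```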
## Model training
```
his = model.fit(inputs, outputs, epochs=35, batch_size=2, verbose=1, shuffle=True, validation_split=0.1)
print(his.history)
model.save("tester_28.h5")
plt.figure()
plt.plot(his.history["loss"])
plt.plot(his.history["val_loss"])
plt.legend(["loss", "val_loss"], loc="lower left")
plt.show()
```
## Check the model has learned something
```
i,k = 0, 5
sample_pred = model.predict([inputs[i:i+k]])
preds4 = np.argmax(sample_pred, axis=3)
preds4[preds4==0] = 6 # Replace the trash class with the long-distance class
outs4 = np.argmax(outputs[i:i+k], axis=3)
outs4[outs4==0] = 6 # Change trash class by class for long distance
# Select the best prediction to display it - (proportional by protein length(area of contact map))
results = [np.sum(np.equal(pred[:len(seqs[i+j]), :len(seqs[i+j])], outs4[j, :len(seqs[i+j]), :len(seqs[i+j]),]),axis=(0,1))/
len(seqs[i+j])**2
for j,pred in enumerate(preds4)]
best_score = max(results)
print("Best score (Accuracy): ", best_score)
sorted_scores = [acc for acc in sorted(results, key=lambda x: x, reverse=True)]
print("Best 5 scores: ", sorted_scores[:5])
print("Best 5 indices: ", [results.index(x) for x in sorted_scores[:5]])
best_score_index = results.index(best_score)
print("Index of best score: ", best_score_index)
# best_score_index = results.index(0.6597390187822867)
# print(best_score_index)
best_score_index = 3  # manual override: inspect this particular sample instead of the top-scoring one
i=0
plt.title('Ground Truth')
plt.imshow(outs4[best_score_index, :len(seqs[i+best_score_index]), :len(seqs[i+best_score_index])],
cmap='viridis_r', interpolation='nearest')
plt.colorbar()
plt.show()
# Then plot predictions by the model - v2
plt.title("Prediction by model")
plt.imshow(preds4[best_score_index, :len(seqs[i+best_score_index]), :len(seqs[i+best_score_index])],
cmap='viridis_r', interpolation='nearest')
plt.colorbar()
plt.show()
# model.save("elu_resnet_2d_distance_valacc_0_59_acc.h5")
# Done!
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
preds_crop = np.concatenate( [pred[:len(seqs[i+j]), :len(seqs[i+j])].flatten() for j,pred in enumerate(preds4)] )
outs_crop = np.concatenate( [outs4[j, :len(seqs[i+j]), :len(seqs[i+j])].flatten() for j,pred in enumerate(preds4)] )
matrix = cm = confusion_matrix(outs_crop, preds_crop)
classes = [i+1 for i in range(7)]
title = "Confusion matrix"
cmap = "coolwarm"
normalize = True
if normalize:
    cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    print("Normalized confusion matrix")
else:
    print('Confusion matrix, without normalization')
# print(cm)
fig, ax = plt.subplots()
im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
ax.figure.colorbar(im, ax=ax)
# We want to show all ticks...
ax.set(xticks=np.arange(cm.shape[1]),
yticks=np.arange(cm.shape[0]),
# ... and label them with the respective list entries
xticklabels=classes, yticklabels=classes,
title=title,
ylabel='True label',
xlabel='Predicted label')
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
rotation_mode="anchor")
# Loop over data dimensions and create text annotations.
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i in range(cm.shape[0]):
    for j in range(cm.shape[1]):
        ax.text(j, i, format(cm[i, j], fmt),
                ha="center", va="center",
                color="white" if cm[i, j] > thresh else "black")
fig.tight_layout()
print("Total L2 error between predicted and true classes: ", np.linalg.norm(outs_crop-preds_crop))
print("L2 error per protein: ", np.linalg.norm(outs_crop-preds_crop)/len(preds4))
```
## Done! Let's continue training the model with the whole data!
# PaddlePaddle Regular Competition: Fovea Localization in PALM Fundus Color Photographs (December, 3rd-place solution)
# (1) Competition Introduction
## Task Description
The PALM fovea localization competition focuses on researching and developing algorithms for locating the macular structure in patients' fundus photographs. Its goal is to evaluate and compare automatic fovea localization algorithms on a common dataset of retinal fundus images; concretely, the task is to predict the coordinates of the fovea center in each image.

The fovea is the region of the retina with the keenest color discrimination and visual acuity. In humans, about 3.5 mm temporal to the optic disc, there is a small yellowish area called the macula; the depression at its center is the fovea. Accurately locating the fovea helps clinicians diagnose diseases such as diabetic retinopathy and macular degeneration.
# (2) Dataset Introduction
For this competition, the Zhongshan Ophthalmic Center of Sun Yat-sen University provides 800 fundus color photographs annotated with fovea coordinates for training the model, plus another 400 annotated images that the platform uses for model testing.
## Data Description
The gold standard was annotated by hand by seven ophthalmologists from the Zhongshan Ophthalmic Center and then merged into the final annotation by another senior expert. The fovea coordinates for the dataset are stored in an xlsx file named "Fovea_Location_train": the first column holds the fundus image file name (including the ".jpg" extension), the second column the x coordinate, and the third column the y coordinate.
## Training Set
File name: Train
The Train folder contains a subfolder fundus_images and an xlsx file.
The fundus_images folder holds 800 fundus color photographs with a resolution of 1444×1444 or 2124×2056, named like H0001.jpg, P0001.jpg, N0001.jpg, and V0001.jpg.
The xlsx file holds the x and y coordinates corresponding to the 800 photographs.
## Test Set
File name: PALM-Testing400-Images. This folder contains 400 fundus color photographs, named like T0001.jpg.
# 1. Data Processing
## 1.1 Unpack the Dataset
```
!unzip -oq /home/aistudio/data/data116960/常规赛：PALM眼底彩照中黄斑中央凹定位.zip
!mv โะณโฃัโโะณโPALMโคโโกโซโโฉโโโโจโโโโโโจโคัโโโขะธโฌโ  常规赛：PALM眼底彩照中黄斑中央凹定位
!rm -rf __MACOSX
!mv /home/aistudio/PALM眼底彩照中黄斑中央凹定位/* /home/aistudio/work/常规赛：PALM眼底彩照中黄斑中央凹定位/
```
## 1.2 Inspect the Labels
```
import blackhole.dataframe as pd
df=pd.read_excel('常规赛：PALM眼底彩照中黄斑中央凹定位/Train/Fovea_Location_train.xlsx')
df.head()
# Compute the mean and standard deviation of the labels, used for label normalization
key_pts_values = df.values[:,1:] # extract the label columns
data_mean = key_pts_values.mean() # mean
data_std = key_pts_values.std() # standard deviation
print('Label mean:', data_mean)
print('Label standard deviation:', data_std)
```
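The mean and standard deviation computed above are intended for z-score normalization of the coordinate labels (it is applied in a commented-out line of the `GrayNormalize` transform below). A small numpy sketch of the normalize/denormalize pair:

```
import numpy as np

def normalize_labels(pts, mean, std):
    # z-score the coordinates so the regression targets are centred
    return (pts - mean) / std

def denormalize_labels(pts_norm, mean, std):
    # invert the z-score to recover pixel coordinates
    return pts_norm * std + mean

# hypothetical fovea coordinates for two images
coords = np.array([[1200.0, 800.0], [1000.0, 900.0]])
m, s = coords.mean(), coords.std()
z = normalize_labels(coords, m, s)
```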
## 1.3 Data Augmentation
* Augmentation is applied to the images to reduce overfitting and improve generalization
* The main operations used in this solution include cropping and grayscale conversion
```
import paddle.vision.transforms.functional as F
class Resize(object):
    # Resize the input image to the given size
    def __init__(self, output_size):
        assert isinstance(output_size, (int, tuple))
        self.output_size = output_size
    def __call__(self, data):
        image = data[0]    # image
        key_pts = data[1]  # label
        image_copy = np.copy(image)
        key_pts_copy = np.copy(key_pts)
        h, w = image_copy.shape[:2]
        new_h, new_w = self.output_size,self.output_size
        new_h, new_w = int(new_h), int(new_w)
        img = F.resize(image_copy, (new_h, new_w))
        # scale the pts, too
        #key_pts_copy[::2] = key_pts_copy[::2] * new_w / w
        #key_pts_copy[1::2] = key_pts_copy[1::2] * new_h / h
        return img, key_pts_copy
class GrayNormalize(object):
    # Convert the image to grayscale and scale its values to [-1, 1]
    # (the labels can optionally be scaled to [-1, 1] as well)
    def __call__(self, data):
        image = data[0]    # image
        key_pts = data[1]  # label
        image_copy = np.copy(image)
        key_pts_copy = np.copy(key_pts)
        # grayscale the image
        gray_scale = paddle.vision.transforms.Grayscale(num_output_channels=3)
        image_copy = gray_scale(image_copy)
        # scale the pixel values to [-1, 1]
        image_copy = (image_copy-127.5) / 127.5
        # scale the coordinates to [-1, 1]
        #mean = data_mean  # label mean
        #std = data_std    # label standard deviation
        #key_pts_copy = (key_pts_copy - mean)/std
        return image_copy, key_pts_copy
class ToCHW(object):
    # Change the image layout from HWC to CHW
    def __call__(self, data):
        image = data[0]
        key_pts = data[1]
        transpose = T.Transpose((2, 0, 1))  # to CHW
        image = transpose(image)
        return image, key_pts
import paddle.vision.transforms as T
data_transform = T.Compose([
    Resize(224),
    GrayNormalize(),
    ToCHW(),
])
data_transform2 = T.Compose([
    Resize(224),
    GrayNormalize(),
    ToCHW(),
])
```
## 1.4 Custom Dataset
```
path='常规赛：PALM眼底彩照中黄斑中央凹定位/Train/fundus_image/'
df=df.sample(frac=1).reset_index(drop=True)  # shuffle; reset the index so positional access below follows the shuffled order
image_list=[]
label_listx=[]
label_listy=[]
for i in range(len(df)):
    image_list.append(path+df['imgName'][i])
    label_listx.append(df['Fovea_X'][i])
    label_listy.append(df['Fovea_Y'][i])
import os
test_path='常规赛：PALM眼底彩照中黄斑中央凹定位/PALM-Testing400-Images'
test_list=[]
test_labelx=[]
test_labely=[]
list = os.listdir(test_path) # list every file and directory in the test folder
for i in range(0, len(list)):
    path = os.path.join(test_path, list[i])
    test_list.append(path)
    test_labelx.append(0)
    test_labely.append(0)
import paddle
import cv2
import numpy as np
class dataset(paddle.io.Dataset):
    def __init__(self,img_list,label_listx,label_listy,transform=None,transform2=None,mode='train'):
        self.image=img_list
        self.labelx=label_listx
        self.labely=label_listy
        self.mode=mode
        self.transform=transform
        self.transform2=transform2
    def load_img(self, image_path):
        img=cv2.imread(image_path,1)
        img=cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
        h,w,c=img.shape
        return img,h,w
    def __getitem__(self,index):
        img,h,w = self.load_img(self.image[index])
        labelx = self.labelx[index]/w
        labely = self.labely[index]/h
        img_size=img.shape
        if self.transform:
            if self.mode=='train':
                img, label = self.transform([img, [labelx,labely]])
            else:
                img, label = self.transform2([img, [labelx,labely]])
        label=np.array(label,dtype='float32')
        img=np.array(img,dtype='float32')
        return img,label
    def __len__(self):
        return len(self.image)
```
## 1.5 Split the Dataset
* The data are split with a ratio of 0.85: 85% of the samples form the training set and 15% the validation set
```
ratio=0.85
train_list=image_list[:int(len(image_list)*ratio)]
train_labelx=label_listx[:int(len(label_listx)*ratio)]
train_labely=label_listy[:int(len(label_listy)*ratio)]
val_list=image_list[int(len(image_list)*ratio):]
val_labelx=label_listx[int(len(label_listx)*ratio):]
val_labely=label_listy[int(len(label_listy)*ratio):]
train_ds=dataset(train_list,train_labelx,train_labely,data_transform,data_transform2,'train')
val_ds=dataset(val_list,val_labelx,val_labely,data_transform,data_transform2,'valid')
test_ds=dataset(test_list,test_labelx,test_labely,data_transform,data_transform2,'test')
```
## 1.6 Preview a Sample
```
import matplotlib.pyplot as plt
for i,data in enumerate(train_ds):
    img,label=data
    img=img.transpose([1,2,0])
    print(img.shape)
    plt.title(label)
    plt.imshow(img)
    plt.show()
    if i==0:
        break
```
# 2. Model Construction
## 2.1 Network Definition
* This solution uses resnet152 as the backbone; it proved more accurate on this task than resnet50
* See the [official documentation](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/02_paddle2.0_develop/04_model_cn.html) for details on building network models
```
class MyNet(paddle.nn.Layer):
    def __init__(self):
        super(MyNet, self).__init__()
        self.resnet = paddle.vision.resnet152(pretrained=True, num_classes=0)
        self.flatten = paddle.nn.Flatten()
        self.linear_1 = paddle.nn.Linear(2048, 512)
        self.linear_2 = paddle.nn.Linear(512, 256)
        self.linear_3 = paddle.nn.Linear(256, 2)
        self.relu = paddle.nn.ReLU()
        self.dropout = paddle.nn.Dropout(0.2)
    def forward(self, inputs):
        y = self.resnet(inputs)
        y = self.flatten(y)
        y = self.linear_1(y)
        y = self.linear_2(y)
        y = self.relu(y)
        y = self.dropout(y)
        y = self.linear_3(y)
        y = paddle.nn.functional.sigmoid(y)
        return y
```
## 2.2 Asynchronous Data Loading
```
train_loader = paddle.io.DataLoader(train_ds, places=paddle.CPUPlace(), batch_size=32, shuffle=True, num_workers=0)
val_loader = paddle.io.DataLoader(val_ds, places=paddle.CPUPlace(), batch_size=32, shuffle=True, num_workers=0)
test_loader=paddle.io.DataLoader(test_ds, places=paddle.CPUPlace(), batch_size=32, shuffle=False, num_workers=0)
```
## 2.3 Custom Loss Function
```
from sklearn.metrics.pairwise import euclidean_distances
import paddle.nn as nn
# loss function
def cal_coordinate_Loss(logit, label, alpha = 0.5):
    """
    logit: shape [batch, ndim]
    label: shape [batch, ndim]
    ndim = 2 represents coordinate_x and coordinate_y
    alpha: weight for MSELoss and 1-alpha for ED loss
    return: combined MSELoss and ED loss for x and y, shape [batch, 1]
    """
    mse_loss = nn.MSELoss(reduction='mean')
    mse_x = mse_loss(logit[:,0],label[:,0])
    mse_y = mse_loss(logit[:,1],label[:,1])
    mse_l = 0.5*(mse_x + mse_y)
    ed_loss = []
    for i in range(logit.shape[0]):
        logit_tmp = logit[i,:].numpy()
        label_tmp = label[i,:].numpy()
        ed_tmp = euclidean_distances([logit_tmp], [label_tmp])
        ed_loss.append(ed_tmp)
    ed_l = sum(ed_loss)/len(ed_loss)
    loss = alpha * mse_l + (1-alpha) * ed_l
    return loss
class SelfDefineLoss(paddle.nn.Layer):
    """
    1. Subclass paddle.nn.Layer
    """
    def __init__(self):
        """
        2. Define any parameters your algorithm needs in the constructor
        """
        super(SelfDefineLoss, self).__init__()
    def forward(self, input, label):
        """
        3. Implement forward, which receives two arguments: input and label
           - input: the model's forward-pass output for a sample or batch
           - label: the corresponding ground-truth labels
           It returns a Tensor holding the loss, summed or averaged according to the custom logic
        """
        # custom computation built from PaddlePaddle APIs
        output = cal_coordinate_Loss(input,label)
        return output
```
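As a sanity check, the combined loss above can be mirrored in plain numpy. This is a hypothetical sketch; the real version operates on Paddle tensors.

```
import numpy as np

def coordinate_loss(pred, target, alpha=0.5):
    """alpha * MSE + (1 - alpha) * mean Euclidean distance,
    mirroring cal_coordinate_Loss in plain numpy."""
    mse_x = np.mean((pred[:, 0] - target[:, 0]) ** 2)
    mse_y = np.mean((pred[:, 1] - target[:, 1]) ** 2)
    mse = 0.5 * (mse_x + mse_y)
    ed = np.mean(np.linalg.norm(pred - target, axis=1))
    return alpha * mse + (1 - alpha) * ed

# toy batch of two normalized coordinate pairs
t = np.array([[0.5, 0.5], [0.2, 0.8]])
p = t.copy()  # a perfect prediction gives zero loss
```

Mixing the two terms keeps the squared-error smoothness while the Euclidean term directly reflects the competition's distance metric.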
# 3. Training, Evaluation, and Prediction
## 3.1 Model Training and Visualization
* The final tuning run for the competition uses 401 training epochs (epoch 1 through epoch 401)
* The batch size (Batch_size) is set to 32; adjust it to fit your GPU memory
* Since computer memory is organized in powers of two, prefer batch sizes such as 4, 8, 16, 32, 64, 128, or 256
* The batch size affects both how well and how fast the model optimizes, and it directly determines GPU memory usage; if GPU memory is limited, keep this value small
* A cosine annealing schedule adjusts the learning rate dynamically, starting from learning_rate = 1e-5
* These settings were found to give good results in testing
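The cosine annealing schedule follows lr(t) = lr_min + (lr_max - lr_min) * (1 + cos(pi * t / T_max)) / 2, decaying smoothly from the base rate to zero. A minimal sketch of that formula:

```
import math

def cosine_annealing(step, total_steps, base_lr=1e-5, min_lr=0.0):
    """Cosine-annealed learning rate, the same curve that
    paddle.optimizer.lr.CosineAnnealingDecay implements (T_max = total_steps)."""
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * step / total_steps))

# learning rate over a hypothetical 100-step run
lrs = [cosine_annealing(s, 100) for s in range(101)]
```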
```
from utils import NME
visualdl=paddle.callbacks.VisualDL(log_dir='visual_log')
# define the training inputs
Batch_size=32
EPOCHS=401
step_each_epoch = len(train_ds)//Batch_size
# wrap the network with paddle.Model
model = paddle.Model(MyNet())
# load a previously saved model
#model.load('/home/aistudio/work/lup/final')
lr = paddle.optimizer.lr.CosineAnnealingDecay(learning_rate=1e-5,
                                              T_max=step_each_epoch * EPOCHS)
# define the Adam optimizer
optimizer = paddle.optimizer.Adam(learning_rate=lr,
                                  weight_decay=1e-5,
                                  parameters=model.parameters())
# define SmoothL1Loss
loss =paddle.nn.SmoothL1Loss()
#loss =SelfDefineLoss()
# use the custom metric
metric = NME()
model.prepare(optimizer=optimizer, loss=loss, metrics=metric)
# the VisualDL callback logs training for visualization
# launch the full training loop
model.fit(train_loader,           # training dataset
          val_loader,             # validation dataset
          epochs=EPOCHS,          # total number of training epochs
          batch_size=Batch_size,  # batch size used for training
          save_dir="/home/aistudio/work/lup",  # folder for model and optimizer checkpoints
          save_freq=20,           # save parameters every 20 epochs
          verbose=1,              # logging verbosity
          callbacks=[visualdl])   # enable visualization
```
## 3.2 Model Evaluation
* The final competition submission uses the **final** checkpoint under the /home/aistudio/work/lup/ path
* Evaluate the model as follows
```
# evaluate the model
model.load('/home/aistudio/work/lup/final')
result = model.evaluate(val_loader, verbose=1)
print(result)
```
## 3.3 Prediction
```
# run prediction
result = model.predict(test_loader)
# collect the test image sizes and file names
test_path='常规赛：PALM眼底彩照中黄斑中央凹定位/PALM-Testing400-Images'
test_size=[]
FileName=[]
for i in range(len(list)):
    path = os.path.join(test_path, list[i])
    img=cv2.imread(path,1)
    test_size.append(img.shape)
    FileName.append(list[i])
test_size=np.array(test_size)
```
* Write the predictions to result.csv
```
result=np.array(result)
pred=[]
for i in range(len(result[0])):
    pred.extend(result[0][i])
pred=np.array(pred)
pred = paddle.to_tensor(pred)
out=np.array(pred).reshape(-1,2)
#Fovea_X=out[:,0]*data_std+data_mean
#Fovea_Y=out[:,1]*data_std+data_mean
Fovea_X=out[:,0]
Fovea_Y=out[:,1]
Fovea_X=Fovea_X*test_size[:,1]
Fovea_Y=Fovea_Y*test_size[:,0]
submission = pd.DataFrame(data={
    "FileName": FileName,
    "Fovea_X": Fovea_X,
    "Fovea_Y": Fovea_Y
})
submission=submission.sort_values(by='FileName')
submission.to_csv("result.csv", index=False)
```
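The rescaling above multiplies the normalized x coordinate by the image width (shape index 1) and y by the height (shape index 0). A hypothetical numpy sketch of that mapping:

```
import numpy as np

def to_pixel_coords(pred_norm, sizes):
    """pred_norm: (N, 2) with x, y in [0, 1];
    sizes: (N, 3) image shapes as (height, width, channels)."""
    x = pred_norm[:, 0] * sizes[:, 1]  # width scales x
    y = pred_norm[:, 1] * sizes[:, 0]  # height scales y
    return np.stack([x, y], axis=1)

# toy predictions for two images at the dataset's two resolutions
preds = np.array([[0.5, 0.25], [0.1, 0.9]])
shapes = np.array([[2056, 2124, 3], [1444, 1444, 3]])
pix = to_pixel_coords(preds, shapes)
```

Swapping the two shape indices is an easy mistake here, since OpenCV reports (height, width) while the labels are (x, y).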
# Summary
* Further tune the learning rate and other hyperparameters
* Try other optimizers
* Try defining deeper networks
* The model's capacity is limited, so try newer network architectures
* Enrich the data augmentation pipeline
# References
[PaddlePaddle Regular Competition: PALM Fovea Localization, September 1st-place solution](https://aistudio.baidu.com/aistudio/projectdetail/2332679)
[PaddlePaddle Regular Competition: PALM Fovea Localization, August 9th-place solution](https://aistudio.baidu.com/aistudio/projectdetail/2309957?channelType=0&channel=0)
[7-Day Deep Learning Bootcamp: Facial Keypoint Detection](https://aistudio.baidu.com/aistudio/projectdetail/1487972)
```
import pandas as pd
import re
import os
import time
import random
import numpy as np
try:
    %tensorflow_version 2.x  # enable TF 2.x in Colab
except Exception:
    pass
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from sklearn.model_selection import train_test_split
from google.colab import drive
import pickle
from nltk.translate.bleu_score import corpus_bleu
tf.__version__
# Mount drive
drive.mount('/gdrive')
drive_root = '/gdrive/My Drive/'
```
### Creating a custom dataset of expressions (target) and math word problems (input)
```
## Function to convert integers to English words
def numberToWords(num):
    """
    :type num: int
    :rtype: str
    """
    def one(num):
        switcher = {
            1: 'One',
            2: 'Two',
            3: 'Three',
            4: 'Four',
            5: 'Five',
            6: 'Six',
            7: 'Seven',
            8: 'Eight',
            9: 'Nine'
        }
        return switcher.get(num)
    def two_less_20(num):
        switcher = {
            10: 'Ten',
            11: 'Eleven',
            12: 'Twelve',
            13: 'Thirteen',
            14: 'Fourteen',
            15: 'Fifteen',
            16: 'Sixteen',
            17: 'Seventeen',
            18: 'Eighteen',
            19: 'Nineteen'
        }
        return switcher.get(num)
    def ten(num):
        switcher = {
            2: 'Twenty',
            3: 'Thirty',
            4: 'Forty',
            5: 'Fifty',
            6: 'Sixty',
            7: 'Seventy',
            8: 'Eighty',
            9: 'Ninety'
        }
        return switcher.get(num)
    def two(num):
        if not num:
            return ''
        elif num < 10:
            return one(num)
        elif num < 20:
            return two_less_20(num)
        else:
            tenner = num // 10
            rest = num - tenner * 10
            return ten(tenner) + ' ' + one(rest) if rest else ten(tenner)
    def three(num):
        hundred = num // 100
        rest = num - hundred * 100
        if hundred and rest:
            return one(hundred) + ' Hundred ' + two(rest)
        elif not hundred and rest:
            return two(rest)
        elif hundred and not rest:
            return one(hundred) + ' Hundred'
    billion = num // 1000000000
    million = (num - billion * 1000000000) // 1000000
    thousand = (num - billion * 1000000000 - million * 1000000) // 1000
    rest = num - billion * 1000000000 - million * 1000000 - thousand * 1000
    if not num:
        return 'Zero'
    result = ''
    if billion:
        result = three(billion) + ' Billion'
    if million:
        result += ' ' if result else ''
        result += three(million) + ' Million'
    if thousand:
        result += ' ' if result else ''
        result += three(thousand) + ' Thousand'
    if rest:
        result += ' ' if result else ''
        result += three(rest)
    return result
## Create a simple dataset with a random number of operations (two, three or four),
## random operands (up to 50000) and random operators
def create_simple_dataset(dataset_len = 1000):
    ops_dict = {'+':['plus','add'],'-':['minus','subtract'],'*':['multiply','into'],'/':['divide','by']}
    nums_dict = {}
    for i in range(50000):
        nums_dict[i] = numberToWords(i)
    num_ops = [2,3,4]
    nums_dict_keys_list = list(nums_dict.keys())
    ops_dict_keys_list = list(ops_dict.keys())
    all_input_exps = []
    all_target_exps = []
    for _ in range(dataset_len):
        n_o = random.choice(num_ops)
        nums = []
        num_vals = []
        for i in range(n_o+1):
            nums.append(random.choice(nums_dict_keys_list))
            num_vals.append(nums_dict[nums[i]])
        ops = []
        ops_vals = []
        for i in range(n_o):
            ops.append(random.choice(ops_dict_keys_list))
            ops_vals.append(random.choice(ops_dict[ops[i]]))
        target_exp = ' '.join(list(str(nums[0])))
        inp_exp = num_vals[0]
        for i in range(n_o):
            target_exp = target_exp + ' ' + ops[i] + ' ' + ' '.join(list(str(nums[i+1])))
            inp_exp = inp_exp + ' ' + ops_vals[i] + ' ' + num_vals[i+1]
        all_input_exps.append(inp_exp)
        all_target_exps.append(target_exp)
    return all_input_exps,all_target_exps
input_exps, target_exps = create_simple_dataset(2000000)
input_exps[:5]
target_exps[:5]
len(pd.Series(target_exps)), len(pd.Series(target_exps).unique())
data = pd.DataFrame({'Input':input_exps, 'Target':target_exps})
data.to_pickle('data-baseline.pkl')
```
### Preprocessing and Tokenizing the Input and Target exps
```
with open('data-baseline.pkl', 'rb') as f:
data = pickle.load(f)
input_exps = list(data.Input.values)
target_exps = list(data.Target.values)
input_exps[0], target_exps[0]
## right now, only adding start and end tokens
## Could later preprocess more: add spaces around punctuation, replace unimportant tokens, etc.
def preprocess_strings(sentence):
    sentence = sentence.lower().strip()
    return '<start> ' + sentence + ' <end>'
preprocessed_input_exps = list(map(preprocess_strings, input_exps))
preprocessed_target_exps = list(map(preprocess_strings, target_exps))
def tokenize(lang):
    lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
    lang_tokenizer.fit_on_texts(lang)
    tensor = lang_tokenizer.texts_to_sequences(lang)
    tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor, padding='post')
    return tensor, lang_tokenizer
input_tensor, inp_lang_tokenizer = tokenize(preprocessed_input_exps)
len(inp_lang_tokenizer.word_index)
input_tensor
target_tensor, targ_lang_tokenizer = tokenize(preprocessed_target_exps)
len(targ_lang_tokenizer.word_index)
target_tensor
```
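Keras' `Tokenizer` assigns integer ids to words (0 is reserved for padding) and `pad_sequences` post-pads every sequence to a fixed length. A hypothetical pure-Python stand-in; note that Keras orders ids by word frequency, whereas this sketch orders by first occurrence:

```
def fit_tokenizer(sentences):
    """Tiny stand-in for Tokenizer + pad_sequences: build a 1-based
    word -> id map, encode each sentence, then post-pad with zeros."""
    vocab = {}
    for s in sentences:
        for w in s.split():
            if w not in vocab:
                vocab[w] = len(vocab) + 1  # 0 stays free for padding
    seqs = [[vocab[w] for w in s.split()] for s in sentences]
    max_len = max(len(seq) for seq in seqs)
    padded = [seq + [0] * (max_len - len(seq)) for seq in seqs]
    return padded, vocab

padded, vocab = fit_tokenizer(["<start> one plus two <end>", "<start> two <end>"])
```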
### Create a tf.data dataset
```
# Creating training and validation sets
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.01, random_state=42)
len(input_tensor_train)
len(input_tensor_val)
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 512
steps_per_epoch = len(input_tensor_train)//BATCH_SIZE
embedding_dim = 64
units = 256
vocab_inp_size = len(inp_lang_tokenizer.word_index)+1
vocab_tar_size = len(targ_lang_tokenizer.word_index)+1
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
vocab_inp_size, vocab_tar_size
example_input_batch, example_target_batch = next(iter(dataset))
example_input_batch.shape, example_target_batch.shape
```
### Encoder Decoder Model
```
class Encoder(tf.keras.Model):
    def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
        super(Encoder, self).__init__()
        self.batch_sz = batch_sz
        self.enc_units = enc_units
        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
        self.gru = tf.keras.layers.GRU(self.enc_units,
                                       return_sequences=True,
                                       return_state=True,
                                       recurrent_initializer='glorot_uniform')
    def call(self, x, hidden):
        x = self.embedding(x)
        output, state = self.gru(x, initial_state = hidden)
        return output, state
    def initialize_hidden_state(self):
        return tf.zeros((self.batch_sz, self.enc_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
# sample input
sample_hidden = encoder.initialize_hidden_state()
sample_output, sample_hidden = encoder(example_input_batch, sample_hidden)
print ('Encoder output shape: (batch size, sequence length, units) {}'.format(sample_output.shape))
print ('Encoder Hidden state shape: (batch size, units) {}'.format(sample_hidden.shape))
class BahdanauAttention(tf.keras.layers.Layer):
    def __init__(self, units):
        super(BahdanauAttention, self).__init__()
        self.W1 = tf.keras.layers.Dense(units)
        self.W2 = tf.keras.layers.Dense(units)
        self.V = tf.keras.layers.Dense(1)
    def call(self, query, values):
        # hidden shape == (batch_size, hidden size)
        # hidden_with_time_axis shape == (batch_size, 1, hidden size)
        # we are doing this to perform addition to calculate the score
        hidden_with_time_axis = tf.expand_dims(query, 1)
        # score shape == (batch_size, max_length, 1)
        # we get 1 at the last axis because we are applying score to self.V
        # the shape of the tensor before applying self.V is (batch_size, max_length, units)
        score = self.V(tf.nn.tanh(
            self.W1(values) + self.W2(hidden_with_time_axis)))
        # attention_weights shape == (batch_size, max_length, 1)
        attention_weights = tf.nn.softmax(score, axis=1)
        # context_vector shape after sum == (batch_size, hidden_size)
        context_vector = attention_weights * values
        context_vector = tf.reduce_sum(context_vector, axis=1)
        return context_vector, attention_weights
attention_layer = BahdanauAttention(100)
attention_result, attention_weights = attention_layer(sample_hidden, sample_output)
print("Attention result shape: (batch size, units) {}".format(attention_result.shape))
print("Attention weights shape: (batch_size, sequence_length, 1) {}".format(attention_weights.shape))
class Decoder(tf.keras.Model):
    def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
        super(Decoder, self).__init__()
        self.batch_sz = batch_sz
        self.dec_units = dec_units
        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
        self.gru = tf.keras.layers.GRU(self.dec_units,
                                       return_sequences=True,
                                       return_state=True,
                                       recurrent_initializer='glorot_uniform')
        self.fc = tf.keras.layers.Dense(vocab_size)
        # used for attention
        self.attention = BahdanauAttention(self.dec_units)
    def call(self, x, hidden, enc_output):
        # enc_output shape == (batch_size, max_length, hidden_size)
        context_vector, attention_weights = self.attention(hidden, enc_output)
        # x shape after passing through embedding == (batch_size, 1, embedding_dim)
        x = self.embedding(x)
        # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
        x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
        # passing the concatenated vector to the GRU
        output, state = self.gru(x)
        # output shape == (batch_size * 1, hidden_size)
        output = tf.reshape(output, (-1, output.shape[2]))
        # output shape == (batch_size, vocab)
        x = self.fc(output)
        return x, state, attention_weights
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
sample_decoder_output, _, _ = decoder(tf.random.uniform((BATCH_SIZE, 1)),
sample_hidden, sample_output)
print ('Decoder output shape: (batch_size, vocab size) {}'.format(sample_decoder_output.shape))
sample_hidden.shape
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)  # `lr` is deprecated in TF 2.x
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction='none')
def loss_function(real, pred):
    mask = tf.math.logical_not(tf.math.equal(real, 0))
    loss_ = loss_object(real, pred)
    mask = tf.cast(mask, dtype=loss_.dtype)
    loss_ *= mask
    return tf.reduce_mean(loss_)
```
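The additive (Bahdanau) attention defined earlier scores each encoder step with v^T tanh(W1 h_s + W2 h_t), normalizes the scores over time, and takes the weighted sum as the context vector. A hypothetical numpy sketch with randomly initialized weight matrices:

```
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bahdanau_attention(query, values, W1, W2, v):
    """query: (batch, units) decoder state; values: (batch, max_len, units)
    encoder outputs. Returns the context vector and attention weights."""
    q = query[:, None, :]                      # add a time axis for broadcasting
    score = np.tanh(values @ W1 + q @ W2) @ v  # (batch, max_len, 1)
    weights = softmax(score, axis=1)           # normalize over time steps
    context = (weights * values).sum(axis=1)   # (batch, units)
    return context, weights

rng = np.random.default_rng(0)
B, T, U, A = 2, 4, 8, 5  # batch, time steps, units, attention units
ctx, w = bahdanau_attention(
    rng.normal(size=(B, U)), rng.normal(size=(B, T, U)),
    rng.normal(size=(U, A)), rng.normal(size=(U, A)), rng.normal(size=(A, 1)))
```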
### Checkpoints
```
# !rm -r /gdrive/My\ Drive/checkpoints/ADL\ Project/training_checkpoints/aashna_baseline
checkpoint_dir = os.path.join(drive_root, "checkpoints")
checkpoint_dir = os.path.join(checkpoint_dir, "ADL Project/training_checkpoints/aashna_baseline")
print("Checkpoints directory is", checkpoint_dir)
if os.path.exists(checkpoint_dir):
    print("Checkpoints folder already exists")
else:
    print("Creating a checkpoints directory")
    os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
                                 encoder=encoder,
                                 decoder=decoder)
latest = tf.train.latest_checkpoint(checkpoint_dir)
latest
if latest:
    epoch_num = int(latest.split('/')[-1].split('-')[-1])
    checkpoint.restore(latest)
else:
    epoch_num = 0
epoch_num
```
### Training
```
@tf.function
def train_step(inp, targ, enc_hidden):
    loss = 0
    with tf.GradientTape() as tape:
        enc_output, enc_hidden = encoder(inp, enc_hidden)
        dec_hidden = enc_hidden
        dec_input = tf.expand_dims([targ_lang_tokenizer.word_index['<start>']] * BATCH_SIZE, 1)
        for t in range(1, targ.shape[1]):
            # passing enc_output to the decoder
            predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
            loss += loss_function(targ[:, t], predictions)
            # using teacher forcing
            dec_input = tf.expand_dims(targ[:, t], 1)
    batch_loss = (loss / int(targ.shape[1]))
    variables = encoder.trainable_variables + decoder.trainable_variables
    gradients = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(gradients, variables))
    return batch_loss
EPOCHS = 5
for epoch in range(epoch_num, EPOCHS):
    start = time.time()
    enc_hidden = encoder.initialize_hidden_state()
    total_loss = 0
    for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
        batch_loss = train_step(inp, targ, enc_hidden)
        total_loss += batch_loss
        if batch % 100 == 0:
            print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
                                                         batch,
                                                         batch_loss.numpy()))
    checkpoint.save(file_prefix = checkpoint_prefix)
    print('Saved epoch: {}'.format(epoch+1))
    print('Epoch {} Loss {:.4f}'.format(epoch + 1,
                                        total_loss / steps_per_epoch))
    print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
```
### Evaluation
```
def max_length(tensor):
    return max(len(t) for t in tensor)
max_length_targ, max_length_inp = max_length(target_tensor), max_length(input_tensor)
def evaluate(sentence):
    attention_plot = np.zeros((max_length_targ, max_length_inp))
    sentence = preprocess_strings(sentence)
    inputs = [inp_lang_tokenizer.word_index[i] for i in sentence.split(' ')]
    inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs],
                                                           maxlen=max_length_inp,
                                                           padding='post')
    inputs = tf.convert_to_tensor(inputs)
    result = ''
    hidden = [tf.zeros((1, units))]
    enc_out, enc_hidden = encoder(inputs, hidden)
    dec_hidden = enc_hidden
    dec_input = tf.expand_dims([targ_lang_tokenizer.word_index['<start>']], 0)
    for t in range(max_length_targ):
        predictions, dec_hidden, attention_weights = decoder(dec_input,
                                                             dec_hidden,
                                                             enc_out)
        # storing the attention weights to plot later on
        attention_weights = tf.reshape(attention_weights, (-1, ))
        attention_plot[t] = attention_weights.numpy()
        predicted_id = tf.argmax(predictions[0]).numpy()
        result += targ_lang_tokenizer.index_word[predicted_id] + ' '
        if targ_lang_tokenizer.index_word[predicted_id] == '<end>':
            return result, sentence, attention_plot
        # the predicted ID is fed back into the model
        dec_input = tf.expand_dims([predicted_id], 0)
    return result, sentence, attention_plot
# function for plotting the attention weights
def plot_attention(attention, sentence, predicted_sentence):
    fig = plt.figure(figsize=(10,10))
    ax = fig.add_subplot(1, 1, 1)
    ax.matshow(attention, cmap='viridis')
    fontdict = {'fontsize': 14}
    ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
    ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
    ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
    ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
    plt.show()
def translate(sentence):
    result, sentence, attention_plot = evaluate(sentence)
    print('Input: %s' % (sentence))
    print('Predicted translation: {}'.format(result))
    attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))]
    plot_attention(attention_plot, sentence.split(' '), result.split(' '))
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
def evaluate_accuracy(inputs):
    attention_plot = np.zeros((max_length_targ, max_length_inp))
    sentence = ''
    for i in range(len(inputs.numpy()[0])):
        if inputs.numpy()[0][i] != 0:
            sentence += inp_lang_tokenizer.index_word[inputs.numpy()[0][i]] + ' '
    inputs = tf.convert_to_tensor(inputs)
    result = ''
    result_seq = ''
    hidden = [tf.zeros((1, units))]
    enc_out, enc_hidden = encoder(inputs, hidden)
    dec_hidden = enc_hidden
    dec_input = tf.expand_dims([targ_lang_tokenizer.word_index['<start>']], 0)
    for t in range(max_length_targ):
        predictions, dec_hidden, attention_weights = decoder(dec_input,
                                                             dec_hidden,
                                                             enc_out)
        # storing the attention weights to plot later on
        attention_weights = tf.reshape(attention_weights, (-1, ))
        attention_plot[t] = attention_weights.numpy()
        predicted_id = tf.argmax(predictions[0]).numpy()
        result_seq += str(predicted_id) +' '
        result += targ_lang_tokenizer.index_word[predicted_id] + ' '
        if targ_lang_tokenizer.index_word[predicted_id] == '<end>':
            return result_seq, result, sentence, attention_plot
        # the predicted ID is fed back into the model
        dec_input = tf.expand_dims([predicted_id], 0)
    return result_seq, result, sentence, attention_plot
dataset_val = tf.data.Dataset.from_tensor_slices((input_tensor_val, target_tensor_val)).shuffle(BUFFER_SIZE)
dataset_val = dataset_val.batch(1, drop_remainder=True)
y_true = []
y_pred = []
acc_cnt = 0
a = 0
for (inp_val_batch, target_val_batch) in iter(dataset_val):
    a += 1
    if a % 500 == 0:
        print(a)
        print("Accuracy count: ",acc_cnt)
        print('------------------')
    target_sentence = ''
    for i in target_val_batch.numpy()[0]:
        if i!= 0:
            target_sentence += (targ_lang_tokenizer.index_word[i] + ' ')
    target_sentence = target_sentence.split('<start> ')[1]
    y_true.append([target_sentence.split(' ')])
    res_seq, res, sent, att = evaluate_accuracy(inp_val_batch)
    y_pred.append(res.split(' '))
    if target_sentence == res:
        acc_cnt += 1
print('Corpus BLEU score of the model: ', corpus_bleu(y_true, y_pred))
print('Accuracy of the model: ',acc_cnt/len(input_tensor_val))
```
#### Translation and Attention Plots
```
check_str = 'One Thousand Three Hundred Nine subtract Fourteen Thousand Three Hundred Nine'
check_str in input_exps
translate(check_str)
translate("thirty plus one hundred twenty five")
translate('One plus Two')
```
### Conclusions
We can see that this baseline model, on this simple dataset, is clearly memorizing the data rather than learning how to translate the words into an equation.
The corpus BLEU and accuracy scores are very high, yet the translation fails on inputs as simple as "One plus Two".
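To make that gap concrete, a toy unigram-precision check (a deliberately simplified stand-in for BLEU, not NLTK's implementation; the example strings are hypothetical) shows how a wrong translation can still share most tokens with the reference:

```python
def unigram_precision(reference, candidate):
    """Fraction of candidate tokens that also appear in the reference.

    A crude, unclipped stand-in for the unigram part of BLEU,
    used only to illustrate the metric's blind spot.
    """
    ref_tokens = reference.split()
    cand_tokens = candidate.split()
    if not cand_tokens:
        return 0.0
    hits = sum(1 for tok in cand_tokens if tok in ref_tokens)
    return hits / len(cand_tokens)

# A wrong equation that reuses most reference tokens still scores well.
reference = "1 + 2"
wrong_candidate = "3 + 2"
print(unigram_precision(reference, wrong_candidate))  # 2 of 3 tokens match
```

This is why an exact-match accuracy check, as computed above, is a useful complement to BLEU on this task.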
Sources:
1. https://www.tensorflow.org/tutorials/text/nmt_with_attention#top_of_page
| github_jupyter |
[](https://github.com/labmlai/annotated_deep_learning_paper_implementations)
[](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/normalization/deep_norm/experiment.ipynb)
[](https://app.labml.ai/run/ec8e4dacb7f311ec8d1cd37d50b05c3d)
[](https://www.comet.ml/labml/deep-norm/61d817f80ff143c8825fba4aacd431d4?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&xAxis=step)
## DeepNorm
This is an experiment training Shakespeare dataset with a deep transformer using [DeepNorm](https://nn.labml.ai/normalization/deep_norm/index.html).
### Install the packages
```
!pip install labml-nn comet_ml --quiet
```
### Enable [Comet](https://www.comet.ml)
```
#@markdown Select in order to enable logging this experiment to [Comet](https://www.comet.ml).
use_comet = True #@param {type:"boolean"}
if use_comet:
import comet_ml
comet_ml.init(project_name='deep_norm')
```
### Imports
```
import torch
import torch.nn as nn
from labml import experiment
from labml.configs import option
from labml_nn.normalization.deep_norm.experiment import Configs
```
### Create an experiment
```
experiment.create(name="deep_norm", writers={"screen", "comet"} if use_comet else {'screen'})
```
### Configurations
```
conf = Configs()
```
Set the experiment configurations, passing a dictionary of values to override the defaults
```
experiment.configs(conf, {
# Use character level tokenizer
'tokenizer': 'character',
# Prompt separator is blank
'prompt_separator': '',
# Starting prompt for sampling
'prompt': 'It is ',
# Use Tiny Shakespeare dataset
'text': 'tiny_shakespeare',
# Use a context size of $256$
'seq_len': 256,
# Train for 32 epochs
'epochs': 32,
# Batch size $16$
'batch_size': 16,
# Switch between training and validation $10$ times per epoch
'inner_iterations': 10,
# Adam optimizer with no warmup
'optimizer.optimizer': 'Adam',
'optimizer.learning_rate': 3e-4,
})
```
Set PyTorch models for loading and saving
```
experiment.add_pytorch_models({'model': conf.model})
```
### Start the experiment and run the training loop.
```
# Start the experiment
with experiment.start():
conf.run()
```
| github_jupyter |
# 14. Image classification by machine learning: Optical text recognition
There are different types of machine learning. In some cases, as in the pixel classification task, the algorithm performs the classification on its own by trying to optimize groups according to a given rule (unsupervised). In other cases, one has to feed the algorithm a set of annotated examples to train it (supervised). Here we are going to train an ML algorithm to recognize digits, so the first thing we need is a good set of annotated examples. Luckily, since this is a "popular" problem, such datasets can be found online. In general this is not the case, and one has to create such a dataset manually. One can then either decide on a set of features that the algorithm should use for learning, or let the algorithm define them itself. Here we look at the first case, and we will look at the second one in the following chapters.
Note that this notebook does not present a complete OCR solution. The goal is rather to show the underlying principles of the machine learning methods used for OCR.
```
import glob
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import skimage
import skimage.feature
import course_functions
datapath = course_functions.define_data_path()
```
## 14.1 Exploring the dataset
We found a good dataset [here](http://www.ee.surrey.ac.uk/CVSSP/demos/chars74k/) and downloaded it. Let's first have a look at it.
```
data_path = '/Users/gw18g940/Desktop/Test_data/ImageProcessingCourse/English_print/Fnt/'
```
We have a folder with 62 sub-folders corresponding to digits and lower and upper-case characters:
```
samples = np.sort(glob.glob(data_path+'*'))
print(samples)
```
Let's check the contents by plotting the first 10 images of a folder:
```
files = glob.glob(samples[7]+'/*.png')
plt.figure(figsize=(15,15))
for i in range(10):
plt.subplot(1,10,i+1)
image = skimage.io.imread(files[i])
plt.imshow(image, cmap = 'gray')
plt.show()
```
So we have samples of each character written with different fonts and types (italic, bold).
## 14.2 Classifying digits
We are first going to try to classify digits. Our goal is to be able to pass an image of the type shown above to our ML algorithm so that the latter can say what digit is present in that image.
First, we have to decide what information the algorithm should use to make that decision. The simplest thing to do is to just say that each pixel is a "feature", and thus to use a flattened version of each image as feature space.
To make the process a bit faster, we rescale all the images to 32x32 pixels, so that we have $32^2 = 1024$ features.
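The rescale-and-flatten step can be sketched with plain numpy. Block averaging here is an assumed stand-in for `skimage.transform.rescale`, and the 128x128 input size is a hypothetical example consistent with a 1/4 rescale:

```python
import numpy as np

def flatten_features(image, size=32):
    """Downscale a square image to size x size by block averaging,
    then flatten it into a 1-D feature vector of length size**2."""
    factor = image.shape[0] // size
    # Average non-overlapping factor x factor blocks (a crude stand-in
    # for skimage.transform.rescale with a 1/factor factor).
    small = image.reshape(size, factor, size, factor).mean(axis=(1, 3))
    return small.ravel()

img = np.random.rand(128, 128)   # stand-in for a 128x128 character image
features = flatten_features(img)
print(features.shape)            # one 1024-long feature vector per image
```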
### 14.2.1 Loading and scaling images
For each digit, we randomly select a set of images, rescale them, and reshape them in a single list comprehension. Let's see what happens for one digit (10 images here):
```
data = [np.reshape(skimage.transform.rescale(skimage.io.imread(files[x]),1/4),32**2)
for x in np.random.choice(np.arange(len(files)),10,replace=False)]
data[0].shape
plt.imshow(np.reshape(data[2],(32,32)),cmap = 'gray')
plt.show()
```
Now let's do this for all digits and aggregate all these data into ```all_data```:
```
num_samples = 500
all_data = []
for ind, s in enumerate(samples[0:10]):
files = glob.glob(s+'/*.png')
data = np.array([np.reshape(skimage.transform.rescale(skimage.io.imread(files[x]),1/4),32**2)
for x in np.random.choice(np.arange(len(files)),num_samples,replace=False)])
all_data.append(data)
```
Now we concatenate all these data into one single matrix:
```
data = np.concatenate(all_data,axis = 0)
data.shape
```
### 14.2.2 Creating categories
We have 500 examples for each of the 10 digits, and each example has 1024 features. We also need an array that records which digit is present in each row of the ```data``` array: 500 copies of each digit's label:
```
cats = [str(i) for i in range(len(all_data))]
category = np.concatenate([[cats[i] for j in range(num_samples)] for i in range(len(cats))])
category
```
### 14.2.3 Running the ML algorithm
Now we are ready to use our dataset of features and our corresponding list of categories to train a classifier. We are going to use a Random Forest classifier implemented in scikit-learn:
```
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
from sklearn.metrics import confusion_matrix
```
First we have to split the dataset into a training and a testing dataset. It is very important to test the classifier on data it has not seen during training!
```
Xtrain, Xtest, ytrain, ytest = train_test_split(data, category, random_state=0)
```
Now we can do the actual learning:
```
model = RandomForestClassifier(n_estimators=1000)
model.fit(Xtrain, ytrain)
```
Finally we can verify the predictions on the test dataset. The predict function returns a list of the category to which each testing sample has been assigned.
```
ypred = model.predict(Xtest)
ypred
```
We can look at a few examples:
```
fig, ax = plt.subplots(1, 10, figsize = (15,10))
for x in range(10):
ax[x].imshow(np.reshape(Xtest[x],(32,32)),cmap='gray')
ax[x].set_title(ypred[x])
plt.show()
```
In order to get a more comprehensive view, we can look at some statistics:
```
print(metrics.classification_report(ypred, ytest))
```
We see that our very simple features, basically the raw pixel values, together with 500 examples per class, are sufficient to reach a very good result.
## 14.3 Using the classifier on "real" data
Let's try to segment a real-life case: an image of a digital screen:
```
jpg = skimage.io.imread('/Users/gw18g940/Desktop/Test_data/ImageProcessingCourse/digit.jpg')
plt.imshow(jpg, cmap = 'gray')
plt.show()
```
### 14.3.1 Pre-processing
We trained our classifier on black and white pictures, so let's first convert the image to grayscale and create a black and white version by thresholding:
```
jpg = skimage.color.rgb2gray(jpg)
th = skimage.filters.threshold_li(jpg)
jpg_th = jpg<th
plt.imshow(jpg_th, cmap = 'gray')
plt.show()
```
### 14.3.2 Identifying numbers
First we need to isolate each digit, which appears in the second row. If we project the image along the horizontal direction, we clearly see an "empty" region. By detecting where the steps are, we can isolate the two lines of text:
```
plt.plot(np.max(jpg_th,axis = 1))
plt.show()
#create projection
proj = np.max(jpg_th,axis = 1)
#select "positive" regions and find their indices
regions = proj > 0.5
text_indices = np.arange(jpg.shape[0])[regions]
#find the steps and split the indices into two groups
splits = np.split(text_indices,np.where(np.diff(np.arange(jpg.shape[0])[regions])>1)[0]+1)
plt.imshow(jpg[splits[1],:],cmap = 'gray')
plt.show()
```
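The gap-splitting trick used above, finding the active indices and splitting wherever consecutive indices jump by more than one, works on any 1-D mask. A minimal sketch:

```python
import numpy as np

def split_runs(mask):
    """Split the indices where `mask` is True into contiguous runs."""
    idx = np.flatnonzero(mask)
    # A jump larger than 1 between consecutive indices marks a gap.
    return np.split(idx, np.where(np.diff(idx) > 1)[0] + 1)

mask = np.array([0, 1, 1, 0, 0, 1, 1, 1, 0], dtype=bool)
runs = split_runs(mask)
print([r.tolist() for r in runs])  # [[1, 2], [5, 6, 7]]
```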
To separate the individual digits we proceed in the same way, projecting along the vertical dimension:
```
#select line to process
line_ind = 1
proj2 = np.min(jpg[splits[line_ind],:],axis = 0)
regions = proj2 < 0.5
text_indices = np.arange(jpg.shape[1])[regions]
splits2 = np.split(text_indices,np.where(np.diff(np.arange(jpg.shape[1])[regions])>1)[0]+1)
```
```splits2``` contains all column indices for each digit:
```
characters = [jpg_th[splits[line_ind],x[0]:x[-1]] for x in splits2]
for ind, x in enumerate(characters):
plt.subplot(1,10, ind+1)
plt.imshow(x)
plt.show()
```
### 14.3.3 Rescaling
Since we rely on raw pixel values at fixed positions as features, we have to make sure that the images we pass to the classifier are similar to those used for training. Those had on average a height of 24 pixels. So let's rescale:
```
im_re = skimage.transform.rescale(characters[2],1/(characters[2].shape[0]/24), preserve_range=True, order = 0)
```
Additionally, the training images are square, with 32 pixels per side. So let's pad our images: we fill an empty 32x32 image with our image at the center. We also have to make sure that the intensity scale is correct:
```
empty = np.zeros((32,32))
empty[int((32-im_re.shape[0])/2):int((32-im_re.shape[0])/2)+im_re.shape[0],
int((32-im_re.shape[1])/2):int((32-im_re.shape[1])/2)+im_re.shape[1]] = im_re
empty = empty<0.5
to_pass = (255*empty).astype(np.uint8)
plt.imshow(empty)
plt.show()
```
Finally we can pass this to the classifier:
```
ypred = model.predict(np.reshape(to_pass,32**2)[np.newaxis,:])
fig,ax = plt.subplots()
plt.imshow(to_pass)
ax.set_title('Prediction: '+ ypred[0])
plt.show()
```
Let's do the same exercise for all digits:
```
fig, ax = plt.subplots(1, 10, figsize = (15,10))
for x in range(10):
final_size = 32
im_re = skimage.transform.rescale(characters[x],1/(characters[x].shape[0]/24),preserve_range=True, order = 0)
empty = np.zeros((32,32))
empty[int((32-im_re.shape[0])/2):int((32-im_re.shape[0])/2)+im_re.shape[0],
int((32-im_re.shape[1])/2):int((32-im_re.shape[1])/2)+im_re.shape[1]] = im_re
to_pass = empty<0.5
to_pass = (255*to_pass).astype(np.uint8)
ypred = model.predict(np.reshape(to_pass,32**2)[np.newaxis,:])
ax[x].imshow(to_pass)
ax[x].set_title(ypred[0])
plt.show()
```
## 14.4 With all characters
```
category.shape
data.shape
num_samples = 100
all_data = []
for ind, s in enumerate(samples[0:62]):
files = glob.glob(s+'/*.png')
data = np.array([np.reshape(skimage.transform.rescale(skimage.io.imread(files[x]),1/4),32**2)
for x in np.random.choice(np.arange(len(files)),num_samples,replace=False)])
all_data.append(data)
data = np.concatenate(all_data,axis = 0)
chars = 'abcdefghijklmnopqrstuvwxyz'
cats = [str(i) for i in range(10)]+[i for i in chars.upper()]+[i for i in chars]
category = np.concatenate([[cats[i] for j in range(num_samples)] for i in range(len(cats))])
Xtrain, Xtest, ytrain, ytest = train_test_split(data, category, random_state=0)
model = RandomForestClassifier(n_estimators=1000)
model.fit(Xtrain, ytrain)
ypred = model.predict(Xtest)
print(metrics.classification_report(ypred, ytest))
mat = confusion_matrix(ytest, ypred)
fig, ax = plt.subplots(figsize=(10,10))
plt.imshow(mat.T,vmin = 0,vmax = 10)#, square=True, annot=True, fmt='d', cbar=False)
plt.xticks(ticks=np.arange(62),labels=cats)
plt.yticks(ticks=np.arange(62),labels=cats)
plt.xlabel('true label')
plt.ylabel('predicted label');
```
| github_jupyter |
##### Copyright 2021 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Converting TensorFlow Text operators to TensorFlow Lite
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/guide/text_tf_lite"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/guide/text_tf_lite.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/guide/text_tf_lite.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/text/docs/guide/text_tf_lite.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Overview
Machine learning models are frequently deployed using TensorFlow Lite to mobile, embedded, and IoT devices to improve data privacy and lower response times. These models often require support for text processing operations. TensorFlow Text version 2.7 and higher provides improved performance, reduced binary sizes, and operations specifically optimized for use in these environments.
## Text operators
The following TensorFlow Text classes can be used from within a TensorFlow Lite model.
* `FastWordpieceTokenizer`
* `WhitespaceTokenizer`
## Model Example
```
!pip install -U "tensorflow-text==2.8.*"
from absl import app
import numpy as np
import tensorflow as tf
import tensorflow_text as tf_text
from tensorflow.lite.python import interpreter
```
The following code example shows the conversion process and interpretation in Python using a simple test model. Note that the output of a model cannot be a `tf.RaggedTensor` object when you are using TensorFlow Lite. However, you can return the components of a `tf.RaggedTensor` object or convert it using its `to_tensor` function. See [the RaggedTensor guide](https://www.tensorflow.org/guide/ragged_tensor) for more details.
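The effect of `to_tensor`, padding rows of unequal length into a rectangle, can be sketched without TensorFlow. This simplified stand-in only mimics the default zero-padding behavior:

```python
import numpy as np

def ragged_to_dense(rows, pad_value=0):
    """Pad variable-length rows to the longest row, mimicking
    what tf.RaggedTensor.to_tensor does (a simplified sketch)."""
    width = max(len(r) for r in rows)
    dense = np.full((len(rows), width), pad_value)
    for i, row in enumerate(rows):
        dense[i, :len(row)] = row
    return dense

tokens = [[3, 1, 4], [1, 5], [9, 2, 6, 5]]   # hypothetical ragged token ids
print(ragged_to_dense(tokens))
```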
```
class TokenizerModel(tf.keras.Model):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.tokenizer = tf_text.WhitespaceTokenizer()
@tf.function(input_signature=[
tf.TensorSpec(shape=[None], dtype=tf.string, name='input')
])
def call(self, input_tensor):
return { 'tokens': self.tokenizer.tokenize(input_tensor).flat_values }
# Test input data.
input_data = np.array(['Some minds are better kept apart'])
# Define a Keras model.
model = TokenizerModel()
# Perform TensorFlow Text inference.
tf_result = model(tf.constant(input_data))
print('TensorFlow result = ', tf_result['tokens'])
```
## Convert the TensorFlow model to TensorFlow Lite
When converting a TensorFlow model with TensorFlow Text operators to TensorFlow Lite, you need to
indicate to the `TFLiteConverter` that there are custom operators using the
`allow_custom_ops` attribute as in the example below. You can then run the model conversion as you normally would. Review the [TensorFlow Lite converter](https://www.tensorflow.org/lite/convert) documentation for a detailed guide on the basics of model conversion.
```
# Convert to TensorFlow Lite.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
converter.allow_custom_ops = True
tflite_model = converter.convert()
```
## Inference
For the TensorFlow Lite interpreter to properly read your model containing TensorFlow Text operators, you must configure it to use these custom operators, and provide registration methods for them. Use `tf_text.tflite_registrar.SELECT_TFTEXT_OPS` to provide the full suite of registration functions for the supported TensorFlow Text operators to `InterpreterWithCustomOps`.
Note that while the example below shows inference in Python, the steps are similar in other languages with some minor API translations, and the necessity to build the `tflite_registrar` into your binary. See [TensorFlow Lite Inference](https://www.tensorflow.org/lite/guide/inference) for more details.
```
# Perform TensorFlow Lite inference.
interp = interpreter.InterpreterWithCustomOps(
model_content=tflite_model,
custom_op_registerers=tf_text.tflite_registrar.SELECT_TFTEXT_OPS)
interp.get_signature_list()
```
Next, the TensorFlow Lite interpreter is invoked with the input, providing a result which matches the TensorFlow result from above.
```
tokenize = interp.get_signature_runner('serving_default')
output = tokenize(input=input_data)
print('TensorFlow Lite result = ', output['tokens'])
```
| github_jupyter |
# Part I: Set Up
- Import Packages
```
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.pyplot as plt2
import pandas as pd
import math, time
import itertools
from sklearn import preprocessing
import datetime
from sklearn.metrics import mean_squared_error
from math import sqrt
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.layers.recurrent import LSTM
from keras.models import load_model
import keras
import h5py
import os
from statistics import mean
from keras import backend as K
from keras.layers.convolutional import Convolution1D, MaxPooling1D
from keras.layers.core import Flatten
```
- Initialize Variables
```
seq_len = 22
shape = [seq_len, 9, 1]
neurons = [256, 256, 32, 1]
dropout = 0.3
decay = 0.5
epochs = 90
os.chdir("/Users/youssefberrada/Dropbox (MIT)/15.961 Independant Study/Data")
#os.chdir("/Users/michelcassard/Dropbox (MIT)/15.960 Independant Study/Data")
file = 'FX-5.xlsx'
# Load spreadsheet
xl = pd.ExcelFile(file)
close = pd.ExcelFile('close.xlsx')
df_close=np.array(close.parse(0))
```
# Part 2: Data
- Load Data
```
def get_stock_data(stock_name, ma=[]):
"""
Return a dataframe of that stock and normalize all the values.
(Optional: create moving average)
"""
df = xl.parse(stock_name)
df.drop(['VOLUME'], 1, inplace=True)
df.set_index('Date', inplace=True)
# Renaming all the columns so that we can use the old version code
df.rename(columns={'OPEN': 'Open', 'HIGH': 'High', 'LOW': 'Low', 'NUMBER_TICKS': 'Volume', 'LAST_PRICE': 'Adj Close'}, inplace=True)
# Percentage change
df['Pct'] = df['Adj Close'].pct_change()
df.dropna(inplace=True)
# Moving Average
if ma != []:
for moving in ma:
df['{}ma'.format(moving)] = df['Adj Close'].rolling(window=moving).mean()
df.dropna(inplace=True)
# Move Adj Close to the rightmost for the ease of training
adj_close = df['Adj Close']
df.drop(labels=['Adj Close'], axis=1, inplace=True)
df = pd.concat([df, adj_close], axis=1)
return df
```
- Visualize the data
```
def plot_stock(df):
print(df.head())
plt.subplot(211)
plt.plot(df['Adj Close'], color='red', label='Adj Close')
plt.legend(loc='best')
plt.subplot(212)
plt.plot(df['Pct'], color='blue', label='Percentage change')
plt.legend(loc='best')
plt.show()
```
- Training/Test Set
```
def load_data(stock,normalize,seq_len,split,ma):
amount_of_features = len(stock.columns)
print ("Amount of features = {}".format(amount_of_features))
sequence_length = seq_len + 1
result_train = []
result_test= []
row = round(split * stock.shape[0])
df_train=stock[0:row].copy()
print ("Amount of training data = {}".format(df_train.shape[0]))
df_test=stock[row:len(stock)].copy()
print ("Amount of testing data = {}".format(df_test.shape[0]))
if normalize:
#Training
min_max_scaler = preprocessing.MinMaxScaler()
df_train['Open'] = min_max_scaler.fit_transform(df_train.Open.values.reshape(-1,1))
df_train['High'] = min_max_scaler.fit_transform(df_train.High.values.reshape(-1,1))
df_train['Low'] = min_max_scaler.fit_transform(df_train.Low.values.reshape(-1,1))
df_train['Volume'] = min_max_scaler.fit_transform(df_train.Volume.values.reshape(-1,1))
df_train['Adj Close'] = min_max_scaler.fit_transform(df_train['Adj Close'].values.reshape(-1,1))
df_train['Pct'] = min_max_scaler.fit_transform(df_train['Pct'].values.reshape(-1,1))
if ma != []:
for moving in ma:
df_train['{}ma'.format(moving)] = min_max_scaler.fit_transform(df_train['{}ma'.format(moving)].values.reshape(-1,1))
#Test
df_test['Open'] = min_max_scaler.fit_transform(df_test.Open.values.reshape(-1,1))
df_test['High'] = min_max_scaler.fit_transform(df_test.High.values.reshape(-1,1))
df_test['Low'] = min_max_scaler.fit_transform(df_test.Low.values.reshape(-1,1))
df_test['Volume'] = min_max_scaler.fit_transform(df_test.Volume.values.reshape(-1,1))
df_test['Adj Close'] = min_max_scaler.fit_transform(df_test['Adj Close'].values.reshape(-1,1))
df_test['Pct'] = min_max_scaler.fit_transform(df_test['Pct'].values.reshape(-1,1))
if ma != []:
for moving in ma:
df_test['{}ma'.format(moving)] = min_max_scaler.fit_transform(df_test['{}ma'.format(moving)].values.reshape(-1,1))
#Training
data_train = df_train.to_numpy()  # as_matrix() was removed in pandas 1.0
for index in range(len(data_train) - sequence_length):
result_train.append(data_train[index: index + sequence_length])
train = np.array(result_train)
X_train = train[:, :-1].copy() # all data until day m
y_train = train[:, -1][:,-1].copy() # day m + 1 adjusted close price
#Test
data_test = df_test.to_numpy()  # as_matrix() was removed in pandas 1.0
for index in range(len(data_test) - sequence_length):
result_test.append(data_test[index: index + sequence_length])
test = np.array(result_test)  # fixed: was result_train, which reused training windows as the test set
X_test = test[:, :-1].copy()
y_test = test[:, -1][:,-1].copy()
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], amount_of_features))
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], amount_of_features))
return [X_train, y_train, X_test, y_test]
```
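The window construction inside `load_data`, sliding a `seq_len + 1` window over the series and using the last entry of each window as the target, can be sketched on a toy 1-D series (a stand-in for a single feature column):

```python
import numpy as np

def make_windows(series, seq_len):
    """Build (X, y) pairs: each X is seq_len consecutive values,
    each y is the value that immediately follows the window."""
    windows = [series[i:i + seq_len + 1] for i in range(len(series) - seq_len)]
    windows = np.array(windows)
    return windows[:, :-1], windows[:, -1]

series = np.arange(10, dtype=float)   # toy price series 0..9
X, y = make_windows(series, seq_len=3)
print(X.shape, y.shape)               # (7, 3) (7,)
print(X[0], y[0])                     # [0. 1. 2.] 3.0
```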
# 3. Model
```
def build_model(shape, neurons, dropout, decay):
model = Sequential()
#model.add(Dense(neurons[0],activation="relu", input_shape=(shape[0], shape[1])))
model.add(LSTM(neurons[0], input_shape=(shape[0], shape[1]), return_sequences=True))
model.add(Dropout(dropout))
model.add(LSTM(neurons[1], input_shape=(shape[0], shape[1]), return_sequences=False))
model.add(Dropout(dropout))
model.add(Dense(neurons[2],kernel_initializer="uniform",activation='relu'))
model.add(Dense(neurons[3],kernel_initializer="uniform",activation='linear'))
adam = keras.optimizers.Adam(decay=decay)
model.compile(loss='mse', optimizer=adam, metrics=['accuracy'])  # use the Adam instance carrying the decay setting, not the string 'adam'
model.summary()
return model
def build_model_CNN(shape, neurons, dropout, decay):
model = Sequential()
model.add(Convolution1D(input_shape = (shape[0], shape[1]),
nb_filter=64,
filter_length=2,
border_mode='valid',
activation='relu',
subsample_length=1))
model.add(MaxPooling1D(pool_length=2))
model.add(Convolution1D(input_shape = (shape[0], shape[1]),
nb_filter=64,
filter_length=2,
border_mode='valid',
activation='relu',
subsample_length=1))
model.add(MaxPooling1D(pool_length=2))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(250))
model.add(Dropout(0.25))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('linear'))
adam = keras.optimizers.Adam(decay=decay)
model.compile(loss='mse', optimizer=adam, metrics=['accuracy'])  # use the Adam instance carrying the decay setting, not the string 'adam'
model.summary()
return model
model = build_model_CNN(shape, neurons, dropout, decay)
```
# 4. Results
- Model Fit
```
model.fit(X_train,y_train,batch_size=512,epochs=epochs,validation_split=0.3,verbose=1)
```
- Model Score
```
def model_score(model, X_train, y_train, X_test, y_test):
trainScore = model.evaluate(X_train, y_train, verbose=0)
print('Train Score: %.5f MSE (%.2f RMSE)' % (trainScore[0], math.sqrt(trainScore[0])))
testScore = model.evaluate(X_test, y_test, verbose=0)
print('Test Score: %.5f MSE (%.2f RMSE)' % (testScore[0], math.sqrt(testScore[0])))
return trainScore[0], testScore[0]
model_score(model, X_train, y_train, X_test, y_test)
def percentage_difference(model, X_test, y_test):
percentage_diff=[]
p = model.predict(X_test)
for u in range(len(y_test)): # for each data index in test data
pr = p[u][0] # pr = prediction on day u
percentage_diff.append(((pr-y_test[u])/pr)*100)  # fixed: parentheses were missing around the difference
print(mean(percentage_diff))
return p
p = percentage_difference(model, X_test, y_test)
def plot_result_norm(stock_name, normalized_value_p, normalized_value_y_test):
newp=normalized_value_p
newy_test=normalized_value_y_test
plt2.plot(newp, color='red', label='Prediction')
plt2.plot(newy_test,color='blue', label='Actual')
plt2.legend(loc='best')
plt2.title('The test result for {}'.format(stock_name))
plt2.xlabel('5 Min ahead Forecast')
plt2.ylabel('Price')
plt2.show()
def denormalize(stock_name, normalized_value,split=0.7,predict=True):
"""
Return a dataframe of that stock and normalize all the values.
(Optional: create moving average)
"""
df = xl.parse(stock_name)
df.drop(['VOLUME'], 1, inplace=True)
df.set_index('Date', inplace=True)
# Renaming all the columns so that we can use the old version code
df.rename(columns={'OPEN': 'Open', 'HIGH': 'High', 'LOW': 'Low', 'NUMBER_TICKS': 'Volume', 'LAST_PRICE': 'Adj Close'}, inplace=True)
df.dropna(inplace=True)
df = df['Adj Close'].values.reshape(-1,1)
normalized_value = normalized_value.reshape(-1,1)
row = round(split * df.shape[0])
if predict:
df_p=df[0:row].copy()
else:
df_p=df[row:len(df)].copy()
#return df.shape, p.shape
mean_df=np.mean(df_p)
std_df=np.std(df_p)
new = normalized_value*std_df + mean_df  # fixed: mean and std were swapped; note the data was Min-Max scaled, so this inverse is only approximate
return new
```
- Portfolio construction
```
def portfolio(currency_list,file = 'FX-5.xlsx',seq_len = 22,shape = [seq_len, 9, 1],neurons = [256, 256, 32, 1],dropout = 0.3,decay = 0.5,
epochs = 90,ma=[50, 100, 200],split=0.7):
i=0
mini=99999999
for currency in currency_list:
df=get_stock_data(currency, ma)
X_train, y_train, X_test, y_test = load_data(df,True,seq_len,split,ma)
model = build_model_CNN(shape, neurons, dropout, decay)
model.fit(X_train,y_train,batch_size=512,epochs=epochs,validation_split=0.3,verbose=1)
p = percentage_difference(model, X_test, y_test)
newp = denormalize(currency, p,predict=True)
if mini>p.size:
mini=p.size
if i==0:
predict=p.copy()
else:
predict=np.hstack((predict[0:mini],p[0:mini]))
i+=1
return predict
currency_list=[ 'GBP Curncy',
'JPY Curncy',
'EUR Curncy',
'CAD Curncy',
'NZD Curncy',
'SEK Curncy',
'AUD Curncy',
'CHF Curncy',
'NOK Curncy',
'ZAR Curncy']
#currency_list=['JPY Curncy']
predictcur=portfolio(currency_list,file = 'FX-5.xlsx',seq_len = 22,shape = [seq_len, 9, 1],neurons = [256, 256, 32, 1],dropout = 0.3,decay = 0.5,
epochs = 1,ma=[50, 100, 200],split=0.7)
```
- Backtest
```
def rebalance(n,previous_prices,x0,w,mu,gamma=1):
GT=np.cov(previous_prices)
f = n*[0]
g = n*[0.0005]
_,weights=MarkowitzWithTransactionsCost(n,mu,GT,x0,w,gamma,f,g)
return weights
def log_diff(data):
return np.diff(np.log(data))
def backtest(prices, predictions, initial_weights):
t_prices = len(prices[1,:])
t_predictions = len(predictions[:,1])
length_past = t_prices - t_predictions
returns = np.apply_along_axis(log_diff, 1, prices)
prediction_return = []
for k in range(t_predictions):
prediction_return.append(np.log(predictions[k]/prices[:,length_past+k]))
weights = initial_weights
portfolio_return = []
prev_weight = weights
for i in range(0, t_predictions-1):
print(i)
predicted_return = prediction_return[i]
previous_return = returns[:,length_past+i]
previous_returns = returns[:,0:length_past+i]
new_weight = rebalance(10,previous_returns,mu=predicted_return.tolist(),x0=prev_weight,w=1,gamma=0.5)
period_return = new_weight*np.log(prices[:,length_past+i+1]/prices[:,length_past+i])
portfolio_return.append(np.sum(period_return))
prev_weight = new_weight
print(new_weight)
return portfolio_return
x=backtest(dq.T, predictcur, np.repeat(1/10,10))
def plot_result(stock_name, normalized_value_p, normalized_value_y_test):
newp = denormalize(stock_name, normalized_value_p,predict=True)
newy_test = denormalize(stock_name, normalized_value_y_test,predict=False)
plt2.plot(newp, color='red', label='Prediction')
plt2.plot(newy_test,color='blue', label='Actual')
plt2.legend(loc='best')
plt2.title('The test result for {}'.format(stock_name))
plt2.xlabel('5 Min ahead Forecast')
plt2.ylabel('Price')
plt2.show()
```
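The per-period accounting in `backtest`, weights applied to log price ratios and summed across assets, can be checked on a toy two-asset case (the prices and weights below are hypothetical). Note that weighting log returns is only an approximation, since log returns do not aggregate exactly across assets:

```python
import numpy as np

# Toy two-asset example: prices at a rebalance date and one period later.
prices_t0 = np.array([100.0, 50.0])
prices_t1 = np.array([110.0, 45.0])
weights = np.array([0.5, 0.5])

# Per-asset log returns over the period.
log_returns = np.log(prices_t1 / prices_t0)

# Weighted sum across assets, as in the backtest loop above.
portfolio_log_return = float(np.sum(weights * log_returns))
print(round(portfolio_log_return, 5))  # slightly negative: the -10% loss outweighs the +10% gain in log space
```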
| github_jupyter |
```
import numpy as np
import pandas as pd
import torch
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
import torch.utils.data as DataUtils
import time
import sys
import torch.nn as nn
import torch.nn.functional as F
# Readymade data loading function
DATA_ROOT='./MNISTData/'
def getMNISTDataLoaders(batchSize=64, nTrain=50000, nVal=10000, nTest=10000):
# You can technically use the same transform instance for all 3 sets
assert (60000 - nVal) == nTrain, 'nTrain + nVal must be equal to 60000'
trainTransform = transforms.Compose([transforms.ToTensor()])
valTransform = transforms.Compose([transforms.ToTensor()])
testTransform = transforms.Compose([transforms.ToTensor()])
trainSet = datasets.MNIST(root=DATA_ROOT, download=True, train=True, \
transform=trainTransform)
valSet = datasets.MNIST(root=DATA_ROOT, download=True, train=True, \
transform=valTransform)
testSet = datasets.MNIST(root=DATA_ROOT, download=True, train=False, \
transform=testTransform)
indices = np.arange(0, 60000)
np.random.shuffle(indices)
trainSampler = SubsetRandomSampler(indices[:nTrain])
valSampler = SubsetRandomSampler(indices[nTrain:])
testSampler = SubsetRandomSampler(np.arange(0, nTest))
trainLoader = DataUtils.DataLoader(trainSet, batch_size=batchSize, \
sampler=trainSampler)
valLoader = DataUtils.DataLoader(valSet, batch_size=batchSize, \
sampler=valSampler)
testLoader = DataUtils.DataLoader(testSet, batch_size=batchSize, \
sampler=testSampler)
return trainLoader, valLoader, testLoader
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(1, 32, 3, padding=1),
nn.ReLU(),
nn.BatchNorm2d(32),
nn.Conv2d(32, 32, 3, stride=2, padding=1),
nn.ReLU(),
nn.BatchNorm2d(32),
nn.MaxPool2d(2, 2),
nn.Dropout(0.25)
)
self.conv2 = nn.Sequential(
nn.Conv2d(32, 64, 3, padding=1),
nn.ReLU(),
nn.BatchNorm2d(64),
nn.Conv2d(64, 64, 3, stride=2, padding=1),
nn.ReLU(),
nn.BatchNorm2d(64),
nn.MaxPool2d(2, 2),
nn.Dropout(0.25)
)
self.conv3 = nn.Sequential(
nn.Conv2d(64, 128, 3, padding=1),
nn.ReLU(),
nn.BatchNorm2d(128),
nn.MaxPool2d(2, 2),
nn.Dropout(0.25)
)
self.fc = nn.Sequential(
nn.Linear(128, 10),
)
def forward(self, x):
x = self.conv1(x)
x = self.conv2(x)
x = self.conv3(x)
x = x.view(x.size(0), -1)
return self.fc(x)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print('Notebook will use PyTorch Device: ' + device.upper())
# Utility Progress Bar Function
def progress(curr, total, suffix=''):
bar_len = 48
filled = int(round(bar_len * curr / float(total)))
if filled == 0:
filled = 1
bar = '=' * (filled - 1) + '>' + '-' * (bar_len - filled)
sys.stdout.write('\r[%s] .. %s' % (bar, suffix))
sys.stdout.flush()
if curr == total:
bar = bar_len * '='
sys.stdout.write('\r[%s] .. %s .. Completed\n' % (bar, suffix))
model = Net().to(device)
criterion = nn.CrossEntropyLoss()
learning_rate = 1e-2
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
n_epochs = 1
lr = 1e-2
step = 0
model.train()
train_loader, val_loader, test_loader = getMNISTDataLoaders()
start_time = time.time()
"""
for i in range(n_epochs):
for j, (images, labels) in enumerate(train_loader):
images, labels = images.to(device), labels.to(device)
optimizer.zero_grad()
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
if j % 8 == 0:
progress(j+1, len(train_loader), 'Batch [{}/{}] Epoch [{}/{}] Loss = {:.3f}'.format(j+1, len(train_loader), i+1, n_epochs, loss.item()))
step += 1
end_time = time.time()
print('\nTotal training steps = {}'.format(step))
print('Total time taken = {}'.format(end_time - start_time))
"""
!pip install advertorch > /dev/null
import advertorch
print(advertorch.__version__)
# Evaluating against FGSM attack
from advertorch.attacks import GradientSignAttack
"""
# Documentation for this attack can be found at the link below
# https://advertorch.readthedocs.io/en/latest/advertorch/attacks.html#advertorch.attacks.GradientSignAttack
adversary_1 = GradientSignAttack(model, eps=0.3)
correct = 0
model.eval()
for j, (images, labels) in enumerate(test_loader):
images, labels = images.to(device), labels.to(device)
adv_images_1 = adversary_1.perturb(images, labels) # This is extra step as compared to normal clean accuracy testing
logits = model(adv_images_1)
_, preds = torch.max(logits, 1)
correct += (preds == labels).sum().item()
progress(j+1, len(test_loader), 'Batch [{}/{}]'.format(j+1, len(test_loader)))
model.train()
print('Accuracy on FGSM adversarial samples = {}%'.format(float(correct) * 100 / 10000))
"""
# Evaluating against I-FGSM attack
from advertorch.attacks import LinfBasicIterativeAttack
"""
# Documentation for this attack can be found at the link below
# https://advertorch.readthedocs.io/en/latest/advertorch/attacks.html#advertorch.attacks.LinfBasicIterativeAttack
adversary_2 = LinfBasicIterativeAttack(model, eps=0.1 , nb_iter=40)
correct = 0
model.eval()
for j, (images, labels) in enumerate(test_loader):
images, labels = images.to(device), labels.to(device)
adv_images_2 = adversary_2.perturb(images, labels) # This is extra step as compared to normal clean accuracy testing
logits = model(adv_images_2)
_, preds = torch.max(logits, 1)
correct += (preds == labels).sum().item()
progress(j+1, len(test_loader), 'Batch [{}/{}]'.format(j+1, len(test_loader)))
model.train()
print('Accuracy on I-FGSM adversarial samples = {}%'.format(float(correct) * 100 / 10000))
"""
# Evaluating against PGD attack
from advertorch.attacks import LinfPGDAttack
"""
# Documentation for this attack can be found at the link below
# https://advertorch.readthedocs.io/en/latest/advertorch/attacks.html#advertorch.attacks.LinfPGDAttack
adversary_3 = LinfPGDAttack(model, eps=0.3 , nb_iter = 40)
correct = 0
model.eval()
for j, (images, labels) in enumerate(test_loader):
images, labels = images.to(device), labels.to(device)
adv_images_3 = adversary_3.perturb(images, labels) # This is extra step as compared to normal clean accuracy testing
logits = model(adv_images_3)
_, preds = torch.max(logits, 1)
correct += (preds == labels).sum().item()
progress(j+1, len(test_loader), 'Batch [{}/{}]'.format(j+1, len(test_loader)))
model.train()
print('Accuracy on PGD adversarial samples = {}%'.format(float(correct) * 100 / 10000))
"""
n_epochs = 70
lr = 1e-2
step = 0
xent_loss = nn.CrossEntropyLoss()
adv_model = model  # NOTE: plain assignment creates an alias, not a copy -- training adv_model also updates model
adv_model.train()
optimizer = torch.optim.SGD(adv_model.parameters(), lr=lr, momentum=0.9)
train_loader, val_loader, test_loader = getMNISTDataLoaders()
start_time = time.time()
"""
Although not officially mentioned, making `size_average=False` for the loss
function improves reliability of the result in PyTorch 0.4.0. This is required
since we are taking step against the gradient for "every" image in the batch.
So reducing them to a single value won't cut it.
"""
#training on FSGM
advertorch_loss_fn = nn.CrossEntropyLoss(size_average=False)
for i in range(n_epochs):
for j, (images, labels) in enumerate(train_loader):
images, labels = images.to(device), labels.to(device)
"""
Creating the adversary :
------------------------
Adversarial examples should be typically generated when model parameters are not
changing i.e. model parameters are frozen. This step may not be required for very
simple linear models, but is a must for models using components such as dropout
or batch normalization.
"""
adv_model.eval() # Freezes the model parameters
"""
The `clip` values here determine the clipping range after taking the adversarial step
The clipping is essential to keep the domain of input images within the range
MNIST images for this notebook are normalized to [0, 1]. If you're using something else,
make sure to modify these values accordingly. The `eps` value decides the magnitude
of the attack. For all MNIST models, the threat model advises to stick to maximum eps of 0.3
for input in range [0, 1]
"""
fgsm_adversary = GradientSignAttack(adv_model, advertorch_loss_fn, eps=0.3, clip_min=0., \
clip_max=1., targeted=False)
adv_images = fgsm_adversary.perturb(images, labels) # Generate adversarial samples
ifgsm_adversary = LinfBasicIterativeAttack(adv_model, advertorch_loss_fn, eps=0.1, clip_min=0., \
clip_max=1., targeted=False , nb_iter=40)
adv_images_2 = ifgsm_adversary.perturb(images, labels) # Generate adversarial samples
pgd_adversary = LinfPGDAttack(adv_model, advertorch_loss_fn, eps=0.3, clip_min=0., \
clip_max=1., targeted=False , nb_iter = 40)
adv_images_3 = pgd_adversary.perturb(images, labels)
        adv_model.train()  # Switch back to train mode before the parameter updates below
optimizer.zero_grad()
logits = adv_model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
train_images = adv_images
train_labels = labels
optimizer.zero_grad()
logits = adv_model(train_images)
loss = xent_loss(logits, train_labels)
loss.backward()
optimizer.step()
train_images = adv_images_2
train_labels = labels
optimizer.zero_grad()
logits = adv_model(train_images)
loss = xent_loss(logits, train_labels)
loss.backward()
optimizer.step()
train_images = adv_images_3
train_labels = labels
optimizer.zero_grad()
logits = adv_model(train_images)
loss = xent_loss(logits, train_labels)
loss.backward()
optimizer.step()
if j % 8 == 0:
progress(j+1, len(train_loader), 'Batch [{}/{}] Epoch [{}/{}] Loss = {:.3f}'.format(j+1, len(train_loader), i+1, n_epochs, loss.item()))
step += 1
end_time = time.time()
print('\nTotal training steps = {}'.format(step))
print('Total time taken = {}'.format(end_time - start_time))
correct = 0
model.eval()
for j, (images, labels) in enumerate(test_loader):
images, labels = images.to(device), labels.to(device)
logits = model(images)
_, preds = torch.max(logits, 1)
correct += (preds == labels).sum().item()
progress(j+1, len(test_loader), 'Batch [{}/{}]'.format(j+1, len(test_loader)))
model.train()
print('Accuracy = {}%'.format(float(correct) * 100 / 10000))
# Evaluating against FGSM attack
from advertorch.attacks import GradientSignAttack
# Documentation for this attack can be found at the link below
# https://advertorch.readthedocs.io/en/latest/advertorch/attacks.html#advertorch.attacks.GradientSignAttack
adversary_1 = GradientSignAttack(adv_model, eps=0.3)
correct = 0
adv_model.eval()
for j, (images, labels) in enumerate(test_loader):
images, labels = images.to(device), labels.to(device)
adv_images_1 = adversary_1.perturb(images, labels)
logits = adv_model(adv_images_1)
_, preds = torch.max(logits, 1)
correct += (preds == labels).sum().item()
progress(j+1, len(test_loader), 'Batch [{}/{}]'.format(j+1, len(test_loader)))
adv_model.train()
print('Accuracy on FGSM adversarial samples = {}%'.format(float(correct) * 100 / 10000))
# Evaluating against I-FGSM attack
from advertorch.attacks import LinfBasicIterativeAttack
# Documentation for this attack can be found at the link below
# https://advertorch.readthedocs.io/en/latest/advertorch/attacks.html#advertorch.attacks.LinfBasicIterativeAttack
adversary_2 = LinfBasicIterativeAttack(adv_model, eps=0.1 , nb_iter=40)
correct = 0
adv_model.eval()
for j, (images, labels) in enumerate(test_loader):
images, labels = images.to(device), labels.to(device)
adv_images_2 = adversary_2.perturb(images, labels)
logits = adv_model(adv_images_2)
_, preds = torch.max(logits, 1)
correct += (preds == labels).sum().item()
progress(j+1, len(test_loader), 'Batch [{}/{}]'.format(j+1, len(test_loader)))
adv_model.train()
print('Accuracy on I-FGSM adversarial samples = {}%'.format(float(correct) * 100 / 10000))
# Evaluating against PGD attack
from advertorch.attacks import LinfPGDAttack
# Documentation for this attack can be found at the link below
# https://advertorch.readthedocs.io/en/latest/advertorch/attacks.html#advertorch.attacks.LinfPGDAttack
adversary_3 = LinfPGDAttack(adv_model, eps=0.3)
correct = 0
adv_model.eval()
for j, (images, labels) in enumerate(test_loader):
images, labels = images.to(device), labels.to(device)
adv_images_3 = adversary_3.perturb(images, labels)
logits = adv_model(adv_images_3)
_, preds = torch.max(logits, 1)
correct += (preds == labels).sum().item()
progress(j+1, len(test_loader), 'Batch [{}/{}]'.format(j+1, len(test_loader)))
adv_model.train()
print('Accuracy on PGD adversarial samples = {}%'.format(float(correct) * 100 / 10000))
torch.save(adv_model.state_dict(), 'model.pt')
```
# Imports
```
%matplotlib inline
import pandas as pd
import numpy as np
from sklearn import model_selection, linear_model
import matplotlib.pyplot as plt
```
# Functions
```
def normalize(a):
return (a - np.min(a)) / (np.max(a) - np.min(a))
def linear_regression(x, y, iters, alpha):
m = len(x)
cost = np.empty(iters)
rmse = np.empty(iters)
theta = np.zeros(x.shape[1])
pred = predict(x, theta)
for i in range(iters):
        cost[i] = (1 / (2 * m)) * np.sum((pred - y) ** 2)
for j in range(x.shape[1]):
theta[j] = theta[j] - (alpha / m) * np.sum((pred - y) * x[:, j])
pred = predict(x, theta)
rmse[i] = calc_rmse(y, pred)
return theta, cost, rmse
def linear_regression_with_L1(x, y, iters, alpha, lambda_):
m = len(x)
cost = np.empty(iters)
rmse = np.empty(iters)
theta = np.zeros(x.shape[1])
pred = predict(x, theta)
for i in range(iters):
        cost[i] = (1 / (2 * m)) * (np.sum((pred - y) ** 2) + lambda_ * np.sum(np.abs(theta[1:])))
theta[0] = theta[0] - (alpha / m) * np.sum((pred - y) * x[:, 0])
for j in range(1, x.shape[1]):
theta[j] = theta[j] - (alpha / m) * np.sum((pred - y) * x[:, j]) - (alpha * lambda_ / (2 * m)) * np.sign(theta[j])
pred = predict(x, theta)
rmse[i] = calc_rmse(y, pred)
return theta, cost, rmse
def linear_regression_with_L2(x, y, iters, alpha, lambda_):
m = len(x)
cost = np.empty(iters)
rmse = np.empty(iters)
theta = np.zeros(x.shape[1])
pred = predict(x, theta)
for i in range(iters):
        cost[i] = (1 / (2 * m)) * (np.sum((pred - y) ** 2) + lambda_ * np.sum(theta[1:] ** 2))
theta[0] = theta[0] - (alpha / m) * np.sum((pred - y) * x[:, 0])
for j in range(1, x.shape[1]):
theta[j] = theta[j] - (alpha / m) * np.sum((pred - y) * x[:, j]) - (alpha * lambda_ / m) * theta[j]
pred = predict(x, theta)
rmse[i] = calc_rmse(y, pred)
return theta, cost, rmse
def calc_rmse(y, y_pred):
rss = sum((y - y_pred) ** 2)
rmse = np.sqrt(rss / len(y))
return rmse
def predict(x, theta):
return x @ theta
def preprocess(x):
m = len(x)
x_std = np.apply_along_axis(normalize, 0, x)
x_std = np.c_[np.ones(m), x_std]
return x_std
def find_best_lambdas(x, y):
lasso = linear_model.Lasso(random_state=0, max_iter=10000)
ridge = linear_model.Ridge(random_state=0, max_iter=10000)
lambdas = np.logspace(-1, 1, 30)
tuned_parameters = [{'alpha': lambdas}]
n_folds = 5
clf_l1 = model_selection.GridSearchCV(lasso, tuned_parameters, cv=n_folds, refit=False)
clf_l1.fit(x, y)
clf_l2 = model_selection.GridSearchCV(ridge, tuned_parameters, cv=n_folds, refit=False)
clf_l2.fit(x, y)
return clf_l1.best_params_['alpha'], clf_l2.best_params_['alpha']
def plot_rmse(iters, rmse):
plt.xlabel('Iterations')
plt.ylabel('RMSE')
plt.title('RMSE vs. Iterations Curve')
plt.plot(np.arange(iters), rmse)
plt.show()
def plot_head_brain(x, y, y_pred):
plt.scatter(x, y, color='red')
plt.plot(x, y_pred, color='black')
plt.xlabel('Head Size(cm^3)(Normalized)')
plt.ylabel('Brain Weight(grams)')
plt.legend(['Best Fit Line', 'Datapoints'])
plt.show()
```
# Loading Abalone Dataset and Preprocessing
```
abalone = pd.read_csv('Datasets/abalone.data', header=None)
x, y = abalone[abalone.columns[:-1]], abalone[abalone.columns[-1]].values
x = pd.get_dummies(x).values
x = preprocess(x)
x_train, x_test, y_train, y_test = model_selection.train_test_split(x, y, test_size=0.2, random_state=1)
iters = 10000
alpha = 0.1
```
# Linear regression on Abalone dataset without regularization
```
theta, cost, rmse_train = linear_regression(x_train, y_train, iters, alpha)
y_train_pred = predict(x_train, theta)
y_test_pred = predict(x_test, theta)
rmse_test = calc_rmse(y_test, y_test_pred)
plot_rmse(iters, rmse_train)
print("Final RMSE for training set: ", rmse_train[-1])
print("RMSE for testing set: ", rmse_test)
```
# Closed Form Implementation of Linear Regression
```
theta = np.linalg.pinv(x_train.T @ x_train) @ x_train.T @ y_train
y_train_pred = predict(x_train, theta)
y_test_pred = predict(x_test, theta)
```
# RMSE for Training and Testing set for Closed Form Implementation
```
rmse_train = calc_rmse(y_train, y_train_pred)
rmse_test = calc_rmse(y_test, y_test_pred)
print("RMSE for training set: ", rmse_train)
print("RMSE for testing set: ", rmse_test)
```
> **_Closed Form_ implementation gives better RMSE for both the training and testing sets.
> However, the performance of _gradient descent_ can be improved by choosing an optimal alpha and number of iterations.**
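To make the comparison concrete, here is a minimal, self-contained sketch of the closed-form solution on a toy one-feature dataset (the numbers are invented for illustration; for a single feature, the normal equation reduces to the familiar slope/intercept formulas):

```python
# Closed-form least squares for one feature: theta1 = cov(x, y) / var(x),
# theta0 = mean(y) - theta1 * mean(x). Toy data, invented for illustration.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]
n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n

# Slope and intercept from the normal equation specialized to one feature
theta1 = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) \
       / sum((x - x_mean) ** 2 for x in xs)
theta0 = y_mean - theta1 * x_mean

preds = [theta0 + theta1 * x for x in xs]
rmse = (sum((y - p) ** 2 for y, p in zip(ys, preds)) / n) ** 0.5
print(theta0, theta1, rmse)
```

This is the same solution `np.linalg.pinv` computes in one step for the multi-feature case, which is why the closed form needs no learning-rate or iteration-count tuning.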
# Getting optimal regularization parameters for L1 and L2
```
lambda_l1, lambda_l2 = find_best_lambdas(x_train, y_train)
print("Optimal regularization parameter for L1: ", lambda_l1)
print("Optimal regularization parameter for L2: ", lambda_l2)
```
# Gradient Descent with L1 regularization on Abalone Dataset
```
theta, cost, rmse_train = linear_regression_with_L1(x_train, y_train, iters, alpha, lambda_l1)
y_train_pred = predict(x_train, theta)
y_test_pred = predict(x_test, theta)
rmse_test = calc_rmse(y_test, y_test_pred)
plot_rmse(iters, rmse_train)
print("Final RMSE for training set: ", rmse_train[-1])
print("RMSE for testing set: ", rmse_test)
```
# Gradient Descent with L2 regularization on Abalone Dataset
```
theta, cost, rmse_train = linear_regression_with_L2(x_train, y_train, iters, alpha, lambda_l2)
y_train_pred = predict(x_train, theta)
y_test_pred = predict(x_test, theta)
rmse_test = calc_rmse(y_test, y_test_pred)
plot_rmse(iters, rmse_train)
print("Final RMSE for training set: ", rmse_train[-1])
print("RMSE for testing set: ", rmse_test)
```
# Loading head_brain dataset and Preprocessing
```
head_brain = pd.read_csv('Datasets/headbrain.csv')
x = head_brain['Head Size(cm^3)'].values
y = head_brain['Brain Weight(grams)'].values
x = preprocess(x)
iters = 10000
alpha = 0.1
```
# Linear regression on head_brain dataset without regularization
```
theta, cost, rmse_train = linear_regression(x, y, iters, alpha)
y_pred = predict(x, theta)
plot_head_brain(x[:, 1], y, y_pred)
print("RMSE :", rmse_train[-1])
```
# Getting optimal regularization parameters for L1 and L2
```
lambda_l1, lambda_l2 = find_best_lambdas(x, y)
print("Optimal regularization parameter for L1: ", lambda_l1)
print("Optimal regularization parameter for L2: ", lambda_l2)
```
# Linear Regression on head_brain dataset with L1 Regularization
```
theta, cost, rmse_train = linear_regression_with_L1(x, y, iters, alpha, lambda_l1)
y_pred_L1 = predict(x, theta)
plot_head_brain(x[:, 1], y, y_pred_L1)
print("RMSE :", rmse_train[-1])
```
# Linear Regression on head_brain dataset with L2 Regularization
```
theta, cost, rmse_train = linear_regression_with_L2(x, y, iters, alpha, lambda_l2)
y_pred_L2 = predict(x, theta)
plot_head_brain(x[:, 1], y, y_pred_L2)
print("RMSE :", rmse_train[-1])
```
# Comparison of Best Fit Lines
```
plt.scatter(x[:, 1], y, label='Datapoints', color='red')
plt.plot(x[:, 1], y_pred, label='w/o Regularization', color='black', linewidth=3)
plt.plot(x[:, 1], y_pred_L1, label='with L1', color='green', linewidth=3)
plt.plot(x[:, 1], y_pred_L2, label='with L2', color='orange', linewidth=3)
plt.xlabel('Head Size(cm^3)(Normalized)')
plt.ylabel('Brain Weight(grams)')
plt.legend()
plt.show()
```
> **There is not much difference visually but since _RMSE for linear regression without regularization_ < _RMSE for linear regression with L1 regularization_ < _RMSE for linear regression with L2 regularization_,
_linear regression without regularization_ gives the best _best fit line_ whereas _linear regression with L2 regularization_ gives the worst _best fit line_.**
# OpenRXN Example: Membrane slab
### This notebook demonstrates how a complicated system can be set up easily with the OpenRXN package
We are interested in setting up a 3D system with a membrane slab at the bottom (both lower and upper leaflets), and a bulk region on top. There will be three Species in our model (drug, receptor and drug-receptor complex), with only the drug allowed to move around in the bulk. The receptor and drug-receptor complex are assumed to be locked in the membrane region.
**Our goal here is to build a model that describes both:**
1) The diffusion of the drug in both the membrane and the bulk
2) The binding of the drug to the receptor
First we import the necessary things from the OpenRXN package:
```
from openrxn.reactions import Reaction, Species
from openrxn.compartments.arrays import CompartmentArray3D
from openrxn.connections import IsotropicConnection, AnisotropicConnection, FicksConnection
from openrxn.model import Model
import pint
unit = pint.UnitRegistry()
import numpy as np
```
Then we define the Species objects to use in our model.
```
drug = Species('drug')
receptor = Species('receptor')
dr_complex = Species('complex')
```
In OpenRXN, we can define Reaction objects as follows:
```
Reaction?
kon = 1e6/(unit.molar*unit.sec)
koff = 0.1/unit.sec
binding = Reaction('binding',[drug,receptor],[dr_complex],[1,1],[1],kf=kon,kr=koff)
binding.display()
```
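Independent of OpenRXN's internals, a reversible binding reaction like this implies the mass-action rate law $d[C]/dt = k_{on}[D][R] - k_{off}[C]$. A minimal forward-Euler sketch (plain Python, with dimensionless toy rate constants rather than the notebook's `kon`/`koff`) shows the concentrations relaxing to the equilibrium where forward and reverse fluxes balance:

```python
# Forward-Euler integration of D + R <-> C with mass-action kinetics.
# All numbers are dimensionless toy values chosen for illustration.
kon, koff = 10.0, 1.0      # forward / reverse rate constants
D, R, C = 1.0, 1.0, 0.0    # initial concentrations
dt = 1e-3
for _ in range(20000):     # integrate to t = 20, well past equilibration
    step = (kon * D * R - koff * C) * dt   # net forward flux over one step
    D -= step
    R -= step
    C += step
print(D, R, C)             # at equilibrium, kon*D*R matches koff*C
```

Note that the totals `D + C` and `R + C` are conserved exactly by construction, one simple sanity check for any reaction-network integrator.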
The biggest utility of OpenRXN lies in the ability to create compartments (and arrays of compartments) within which these reactions can occur. Species can also diffuse between compartments with specified rates.
```
conn_in_slab = FicksConnection({'drug' : 1e-8*unit.cm**2/unit.sec})
x_pos = np.linspace(-50,50,101)*unit.nanometer
y_pos = np.linspace(-50,50,101)*unit.nanometer
z_pos1 = np.array([-1,0])*unit.nanometer
lower_slab = CompartmentArray3D('lower_slab',x_pos,y_pos,z_pos1,conn_in_slab,periodic=[True,True,False])
```
This created a 3D array of compartments with positions as specified by x_pos, y_pos and z_pos1. It is periodic in the x and y dimensions, and the compartments are automatically connected:
```
lower_slab.compartments[(0,0,0)].connections
lower_slab.compartments[(0,0,0)].pos
```
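As a rough order-of-magnitude check on what a Fick's-law connection implies (this is an assumption about how the discretization works, not taken from the OpenRXN source): for equal-size neighbouring compartments of width $h$, finite-volume discretization of Fick's law gives a first-order exchange rate of about $k \approx D/h^2$.

```python
# Hop rate between neighbouring 1 nm compartments for the in-slab
# diffusion coefficient used above (assumed k = D / h**2 discretization).
D = 1e-8        # cm^2/s, as passed to FicksConnection for 'drug'
h = 1e-7        # cm, i.e. the 1 nm compartment spacing
k = D / h ** 2  # per-second exchange rate, roughly 1e6 hops per second
print(k)
```

The rate grows as the grid is refined, which is the usual trade-off between spatial resolution and integration cost.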
Let's create another 3D array for the upper slab:
```
z_pos2 = np.array([0,1])*unit.nanometer
upper_slab = CompartmentArray3D('upper_slab',x_pos,y_pos,z_pos2,conn_in_slab,periodic=[True,True,False])
```
And now we can define another connection type for between the slabs:
```
conn_between_slab = IsotropicConnection({'drug' : 1e-5/unit.sec})
```
Then use the join3D method to make connections between our 3D compartment arrays:
```
lower_slab.join3D(upper_slab,conn_between_slab,append_side='z+')
```
Now let's define a bulk compartment array, and connect it to the upper slab:
```
conn_bulk_to_bulk = FicksConnection({'drug' : 1e-5*unit.cm**2/unit.sec})
z_pos3 = np.linspace(1,25,23)*unit.nanometer
bulk = CompartmentArray3D('bulk',x_pos,y_pos,z_pos3,conn_bulk_to_bulk,periodic=[True,True,False])
conn_slab_to_bulk = AnisotropicConnection({'drug' : (1e-5/unit.sec, 1e-1/unit.sec)})
bulk.join3D(upper_slab,conn_slab_to_bulk,append_side='z-')
```
Now we can put all this together in a "Model" object:
```
my_model = Model([lower_slab,upper_slab,bulk])
my_model.arrays
```
In order to visualize the connections between the compartments, and to turn this model into a "system" that can be integrated forward in time, we turn it into a "FlatModel" using the flatten() method:
```
my_flat_model = my_model.flatten()
len(my_flat_model.compartments)
b000 = my_flat_model.compartments['bulk-0_0_0']
b000.connections
```
```
## A taste of things to come
# Print the list created using the Non-Pythonic approach
i = 0
new_list = []
while i < len(names):
if len(names[i]) >= 6:
new_list.append(names[i])
i += 1
print(new_list)
# Print the list created by looping over the contents of names
better_list = []
for name in names:
if len(name) >= 6:
better_list.append(name)
print(better_list)
# Print the list created by using list comprehension
best_list = [name for name in names if len(name) >= 6]
print(best_list)
## Built-in practice: range()
# Create a range object that goes from 0 to 5
nums = range(0,6)
print(type(nums))
# Convert nums to a list
nums_list = list(nums)
print(nums_list)
# Create a new list of odd numbers from 1 to 11 by unpacking a range object
nums_list2 = [*range(1,13,2)]
print(nums_list2)
## Built-in practice: enumerate()
# Rewrite the for loop to use enumerate
indexed_names = []
for i, name in enumerate(names):
index_name = (i, name)
indexed_names.append(index_name)
print(indexed_names)
# Rewrite the above for loop using list comprehension
indexed_names_comp = [(i,name) for i,name in enumerate(names)]
print(indexed_names_comp)
# Unpack an enumerate object with a starting index of one
indexed_names_unpack = [*enumerate(names, 1)]
print(indexed_names_unpack)
## Built-in practice: map()
# Use map to apply str.upper to each element in names
names_map = map(str.upper, names)
# Print the type of the names_map
print(type(names_map))
# Unpack names_map into a list
names_uppercase = [*names_map]
# Print the list created above
print(names_uppercase)
## Practice with NumPy arrays (here the exercise provides `nums` as a 2-D NumPy array, replacing the range object above)
# Print second row of nums
print(nums[1,:])
# Print all elements of nums that are greater than six
print(nums[nums > 6])
# Double every element of nums
nums_dbl = nums * 2
print(nums_dbl)
# Replace the third column of nums
nums[:, 2] = nums[:, 2] + 1
print(nums)
## Bringing it all together: Festivus!
# Create a list of arrival times
arrival_times = [*range(10,60,10)]
# Convert arrival_times to an array and update the times
arrival_times_np = np.array(arrival_times)
new_times = arrival_times_np - 3
# Use list comprehension and enumerate to pair guests to new times
guest_arrivals = [(names[i],time) for i,time in enumerate(new_times)]
# Map the welcome_guest function to each (guest,time) pair
welcome_map = map(welcome_guest, guest_arrivals)
guest_welcomes = [*welcome_map]
print(*guest_welcomes, sep='\n')
## Using %timeit: your turn!
# Create a list of integers (0-50) using list comprehension
# (%timeit runs the statement in a temporary namespace, so the assignment
# does not persist -- assign again before printing)
%timeit nums_list_comp = [num for num in range(51)]
nums_list_comp = [num for num in range(51)]
print(nums_list_comp)
# Create a list of integers (0-50) by unpacking range
%timeit nums_unpack = [*range(51)]
nums_unpack = [*range(51)]
print(nums_unpack)
## Using %timeit: formal name or literal syntax
# Create a list using the formal name
formal_list = list()
print(formal_list)
# Create a list using the literal syntax
literal_list = []
print(literal_list)
# Print out the type of formal_list
print(type(formal_list))
# Print out the type of literal_list
print(type(literal_list))
## Using %mprun: Hero BMI
def calc_bmi_lists(sample_indices, hts, wts):
# Gather sample heights and weights as lists
s_hts = [hts[i] for i in sample_indices]
s_wts = [wts[i] for i in sample_indices]
# Convert heights from cm to m and square with list comprehension
s_hts_m_sqr = [(ht / 100) ** 2 for ht in s_hts]
# Calculate BMIs as a list with list comprehension
bmis = [s_wts[i] / s_hts_m_sqr[i] for i in range(len(sample_indices))]
return bmis
# %mprun takes the function name after -f, then the statement to profile
# (requires `%load_ext memory_profiler` and the function imported from a file)
%mprun -f calc_bmi_lists calc_bmi_lists(heroes, hts, wts)
## Bringing it all together: Star Wars profiling
def get_publisher_heroes(heroes, publishers, desired_publisher):
desired_heroes = []
for i,pub in enumerate(publishers):
if pub == desired_publisher:
desired_heroes.append(heroes[i])
return desired_heroes
def get_publisher_heroes_np(heroes, publishers, desired_publisher):
heroes_np = np.array(heroes)
pubs_np = np.array(publishers)
desired_heroes = heroes_np[pubs_np == desired_publisher]
return desired_heroes
# Use get_publisher_heroes() to gather Star Wars heroes
star_wars_heroes = get_publisher_heroes(heroes, publishers, 'George Lucas')
print(star_wars_heroes)
print(type(star_wars_heroes))
# Use get_publisher_heroes_np() to gather Star Wars heroes
star_wars_heroes_np = get_publisher_heroes_np(heroes, publishers, 'George Lucas')
print(star_wars_heroes_np)
print(type(star_wars_heroes_np))
```
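The `%timeit` comparisons above only work inside IPython; the same experiment can be scripted with the standard-library `timeit` module. A standalone sketch (the `names` values here are assumed to match the course's Seinfeld list):

```python
import timeit

names = ['Jerry', 'Kramer', 'Elaine', 'George', 'Newman']

def loop_version():
    out = []
    for name in names:
        if len(name) >= 6:
            out.append(name)
    return out

def comp_version():
    return [name for name in names if len(name) >= 6]

# Both produce the same result; the comprehension avoids the repeated
# out.append attribute lookup inside the loop.
assert loop_version() == comp_version()
t_loop = timeit.timeit(loop_version, number=10_000)
t_comp = timeit.timeit(comp_version, number=10_000)
print(f'loop: {t_loop:.4f}s  comprehension: {t_comp:.4f}s')
```

Passing the callables directly to `timeit.timeit` avoids quoting the statements as strings and keeps the timed code identical to the code under test.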
```
## Combining Pokรฉmon names and types
# Combine names and primary_types
names_type1 = [*zip(names, primary_types)]
print(*names_type1[:5], sep='\n')
# Combine all three lists together
names_types = [*zip(names, primary_types, secondary_types)]
print(*names_types[:5], sep='\n')
# Combine five items from names and three items from primary_types
differing_lengths = [*zip(names[:5], primary_types[:3])]
print(*differing_lengths, sep='\n')
## Counting Pokรฉmon from a sample
# Collect the count of primary types
type_count = Counter(primary_types)
print(type_count, '\n')
# Collect the count of generations
gen_count = Counter(generations)
print(gen_count, '\n')
# Use list comprehension to get each Pokรฉmon's starting letter
starting_letters = [name[:1] for name in names]
# Collect the count of Pokรฉmon for each starting_letter
starting_letters_count = Counter(starting_letters)
print(starting_letters_count)
## Combinations of Pokรฉmon
# Import combinations from itertools
from itertools import combinations
# Create a combination object with pairs of Pokรฉmon
combos_obj = combinations(pokemon, 2)
print(type(combos_obj), '\n')
# Convert combos_obj to a list by unpacking
combos_2 = [*combos_obj]
print(combos_2, '\n')
# Collect all possible combinations of 4 Pokรฉmon directly into a list
combos_4 = [*combinations(pokemon, 4)]
print(combos_4)
## Comparing Pokรฉdexes
# Convert both lists to sets
ash_set = set(ash_pokedex)
misty_set = set(misty_pokedex)
# Find the Pokรฉmon that exist in both sets
both = ash_set.intersection(misty_set)
print(both)
# Find the Pokรฉmon that Ash has and Misty does not have
ash_only = ash_set.difference(misty_set)
print(ash_only)
# Find the Pokรฉmon that are in only one set (not both)
unique_to_set = ash_set.symmetric_difference(misty_set)
print(unique_to_set)
## Searching for Pokรฉmon
# Convert Brock's Pokรฉdex to a set
brock_pokedex_set = set(brock_pokedex)
print(brock_pokedex_set)
# Check if Psyduck is in Ash's list and Brock's set
print('Psyduck' in ash_pokedex)
print('Psyduck' in brock_pokedex_set)
# Check if Machop is in Ash's list and Brock's set
print('Machop' in ash_pokedex)
print('Machop' in brock_pokedex_set)
## Gathering unique Pokรฉmon
# Use find_unique_items() to collect unique Pokรฉmon names
uniq_names_func = find_unique_items(names)
print(len(uniq_names_func))
# Convert the names list to a set to collect unique Pokรฉmon names
uniq_names_set = set(names)
print(len(uniq_names_set))
# Check that both unique collections are equivalent
print(sorted(uniq_names_func) == sorted(uniq_names_set))
# Use the best approach to collect unique primary types and generations
uniq_types = set(primary_types)
uniq_gens = set(generations)
print(uniq_types, uniq_gens, sep='\n')
## Gathering Pokรฉmon without a loop
# Collect Pokรฉmon that belong to generation 1 or generation 2
gen1_gen2_pokemon = [name for name, gen in zip(poke_names, poke_gens) if gen < 3]
# Create a map object that stores the name lengths
name_lengths_map = map(len, gen1_gen2_pokemon)
# Combine gen1_gen2_pokemon and name_lengths_map into a list
gen1_gen2_name_lengths = [*zip(gen1_gen2_pokemon, name_lengths_map)]
print(gen1_gen2_name_lengths_loop[:5])
print(gen1_gen2_name_lengths[:5])
## List comprehension
[(name, len(name)) for name,gen in zip(poke_names, poke_gens) if gen < 3]
## Pokรฉmon totals and averages without a loop
# Create a total stats array
total_stats_np = stats.sum(axis=1)
# Create an average stats array
avg_stats_np = stats.mean(axis=1)
# Combine names, total_stats_np, and avg_stats_np into a list
poke_list_np = [*zip(names, total_stats_np, avg_stats_np)]
print(poke_list_np == poke_list, '\n')
print(poke_list_np[:3])
print(poke_list[:3], '\n')
top_3 = sorted(poke_list_np, key=lambda x: x[1], reverse=True)[:3]
print('3 strongest Pokรฉmon:\n{}'.format(top_3))
## One-time calculation loop
# Import Counter
from collections import Counter
# Collect the count of each generation
gen_counts = Counter(generations)
# Improve for loop by moving one calculation above the loop
total_count = len(generations)
for gen,count in gen_counts.items():
gen_percent = round(count/ total_count * 100,2)
print('generation {}: count = {:3} percentage = {}'
.format(gen, count, gen_percent))
## Holistic conversion loop
# Collect all possible pairs using combinations()
possible_pairs = [*combinations(pokemon_types, 2)]
# Create an empty list called enumerated_tuples
enumerated_tuples = []
# Add a line to append each enumerated_pair_tuple to the empty list above
for i,pair in enumerate(possible_pairs, 1):
enumerated_pair_tuple = (i,) + pair
enumerated_pair_list = list(enumerated_pair_tuple)
enumerated_tuples.append(enumerated_pair_list)
# Convert all tuples in enumerated_tuples to a list
enumerated_pairs = [*map(list, enumerated_tuples)]
print(enumerated_pairs)
## Bringing it all together: Pokรฉmon z-scores
# Calculate the total HP avg and total HP standard deviation
hp_avg = hps.mean()
hp_std = hps.std()
# Use NumPy to eliminate the previous for loop
z_scores = (hps - hp_avg)/hp_std
# Combine names, hps, and z_scores
poke_zscores2 = [*zip(names, hps, z_scores)]
print(*poke_zscores2[:3], sep='\n')
# Use list comprehension with the same logic as the highest_hp_pokemon code block
highest_hp_pokemon = [(names, hps, z_scores) for names, hps, z_scores in poke_zscores2 if z_scores > 2]
print(*highest_hp_pokemon, sep='\n')
```
```
## Iterating with .iterrows()
# Iterate over pit_df and print each row
for i, row in pit_df.iterrows():
print(row)
# Iterate over pit_df and print each index variable and then each row
for i,row in pit_df.iterrows():
print(i)
print(row)
print(type(row))
# Print the row and type of each row
for row_tuple in pit_df.iterrows():
print(row_tuple)
print(type(row_tuple))
## Run differentials with .iterrows()
# Create an empty list to store run differentials
run_diffs = []
# Write a for loop and collect runs allowed and runs scored for each row
for i,row in giants_df.iterrows():
runs_scored = row['RS']
runs_allowed = row['RA']
# Use the provided function to calculate run_diff for each row
run_diff = calc_run_diff(runs_scored, runs_allowed)
# Append each run differential to the output list
run_diffs.append(run_diff)
giants_df['RD'] = run_diffs
print(giants_df)
## Iterating with .itertuples()
# Loop over the DataFrame and print each row's Index, Year and Wins (W)
for row in rangers_df.itertuples():
i = row.Index
year = row.Year
wins = row.W
# Check if rangers made Playoffs (1 means yes; 0 means no)
if row.Playoffs == 1:
print(i, year, wins)
## Run differentials with .itertuples()
run_diffs = []
# Loop over the DataFrame and calculate each row's run differential
for row in yankees_df.itertuples():
runs_scored = row.RS
runs_allowed = row.RA
run_diff = calc_run_diff(runs_scored, runs_allowed)
run_diffs.append(run_diff)
# Append new column
yankees_df['RD'] = run_diffs
print(yankees_df)
## Analyzing baseball stats with .apply()
# Gather sum of all columns
stat_totals = rays_df.apply(sum, axis=0)
print(stat_totals)
# Gather total runs scored in all games per year
total_runs_scored = rays_df[['RS', 'RA']].apply(sum, axis=1)
print(total_runs_scored)
# Convert numeric playoffs to text
textual_playoffs = rays_df.apply(lambda row: text_playoffs(row['Playoffs']), axis=1)
print(textual_playoffs)
## Settle a debate with .apply()
# Display the first five rows of the DataFrame
print(dbacks_df.head())
# Create a win percentage Series
win_percs = dbacks_df.apply(lambda row: calc_win_perc(row['W'], row['G']), axis=1)
print(win_percs, '\n')
# Append a new column to dbacks_df
dbacks_df['WP'] = win_percs
print(dbacks_df, '\n')
# Display dbacks_df where WP is greater than or equal to 0.50
print(dbacks_df[dbacks_df['WP'] >= 0.50])
## Replacing .iloc with underlying arrays
# Use the W array and G array to calculate win percentages
win_percs_np = calc_win_perc(baseball_df['W'].values, baseball_df['G'].values)
# Time the array-based calculation (a variable assigned inside %timeit does
# not persist, so the assignment above is done separately)
%timeit calc_win_perc(baseball_df['W'].values, baseball_df['G'].values)
# Append a new column to baseball_df that stores all win percentages
baseball_df['WP'] = win_percs_np
print(baseball_df.head())
## Bringing it all together: Predict win percentage
win_perc_preds_loop = []
# Use a loop and .itertuples() to collect each row's predicted win percentage
for row in baseball_df.itertuples():
runs_scored = row.RS
runs_allowed = row.RA
win_perc_pred = predict_win_perc(runs_scored, runs_allowed)
win_perc_preds_loop.append(win_perc_pred)
# Apply predict_win_perc to each row of the DataFrame
win_perc_preds_apply = baseball_df.apply(lambda row: predict_win_perc(row['RS'], row['RA']), axis=1)
# Calculate the win percentage predictions using NumPy arrays
win_perc_preds_np = predict_win_perc(baseball_df['RS'].values, baseball_df['RA'].values)
baseball_df['WP_preds'] = win_perc_preds_np
print(baseball_df.head())
```
```
import numpy as np
import matplotlib.pyplot as pp
import pandas as pd
import seaborn
%matplotlib inline
import zipfile
zipfile.ZipFile('names.zip').extractall('.')
import os
os.listdir('names')
open('names/yob2011.txt','r').readlines()[:10]
names2011 = pd.read_csv('names/yob2011.txt')
names2011.head()
names2011 = pd.read_csv('names/yob2011.txt',names=['name','sex','number'])
names2011.head()
names_all = []
for year in range(1880,2014+1):
names_all.append(pd.read_csv('names/yob{}.txt'.format(year),names=['name','sex','number']))
names_all[-1]['year'] = year
allyears = pd.concat(names_all)
allyears.head()
allyears.tail()
allyears_indexed = allyears.set_index(['sex','name','year']).sort_index()
allyears_indexed
allyears_indexed.loc['F','Mary']
def plotname(sex,name):
data = allyears_indexed.loc[sex,name]
pp.plot(data.index,data.values)
pp.figure(figsize=(12,2.5))
names = ['Michael','John','David','Martin']
for name in names:
plotname('M',name)
pp.legend(names)
pp.figure(figsize=(12,2.5))
names = ['Emily','Anna','Claire','Elizabeth']
for name in names:
plotname('F',name)
pp.legend(names)
pp.figure(figsize=(12,2.5))
names = ['Chiara','Claire','Clare','Clara','Ciara']
for name in names:
plotname('F',name)
pp.legend(names)
allyears_indexed.loc['F'].loc[names].head()
allyears_indexed.loc['F'].loc[names].unstack(level=0).head()
allyears_indexed.loc['F'].loc[names].unstack(level=0).fillna(0).head()
variants = allyears_indexed.loc['F'].loc[names].unstack(level=0).fillna(0)
pp.figure(figsize=(12,2.5))
pp.stackplot(variants.index,variants.values.T,label=names)
pp.figure(figsize=(12,2.5))
palette = seaborn.color_palette()
pp.stackplot(variants.index,variants.values.T,colors=palette)
for i,name in enumerate(names):
pp.text(1882,5000 + 800*i,name,color=palette[i])
allyears_indexed.loc['M',:,2008].sort_values('number',ascending=False).head()
pop2008 = allyears_indexed.loc['M',:,2008].sort_values('number',ascending=False).head()
pop2008.reset_index().drop(['sex','year','number'],axis=1).head()
def topten(sex,year):
    simple = allyears_indexed.loc[sex,:,year].sort_values('number',ascending=False).reset_index()
    simple = simple.drop(['sex','year','number'],axis=1).head(10)
    simple.columns = [year]
    simple.index = simple.index + 1
    return simple
topten('M',2009)
def toptens(sex,year0,year1):
years = [topten(sex,year) for year in range(year0,year1+1)]
return years[0].join(years[1:])
toptens('M',2000,2010)
toptens('F',1985,1995)
toptens('F',1985,1995).stack().head()
toptens('F',1985,1995).stack().value_counts()
popular = toptens('F',1985,1995).stack().value_counts().index[:6]
pp.figure(figsize=(12,2.5))
for name in popular:
plotname('F',name)
pp.legend(popular)
```
# Day 7 "The Treachery of Whales"
## Part 1
### Problem
A giant whale has decided your submarine is its next meal, and it's much faster than you are. There's nowhere to run!
Suddenly, a swarm of crabs (each in its own tiny submarine - it's too deep for them otherwise) zooms in to rescue you! They seem to be preparing to blast a hole in the ocean floor; sensors indicate a massive underground cave system just beyond where they're aiming!
The crab submarines all need to be aligned before they'll have enough power to blast a large enough hole for your submarine to get through. However, it doesn't look like they'll be aligned before the whale catches you! Maybe you can help?
There's one major catch - crab submarines can only move horizontally.
You quickly make a list of the horizontal position of each crab (your puzzle input). Crab submarines have limited fuel, so you need to find a way to make all of their horizontal positions match while requiring them to spend as little fuel as possible.
For example, consider the following horizontal positions:
16,1,2,0,4,2,7,1,2,14
This means there's a crab with horizontal position 16, a crab with horizontal position 1, and so on.
Each change of 1 step in horizontal position of a single crab costs 1 fuel. You could choose any horizontal position to align them all on, but the one that costs the least fuel is horizontal position 2:
Move from 16 to 2: 14 fuel
Move from 1 to 2: 1 fuel
Move from 2 to 2: 0 fuel
Move from 0 to 2: 2 fuel
Move from 4 to 2: 2 fuel
Move from 2 to 2: 0 fuel
Move from 7 to 2: 5 fuel
Move from 1 to 2: 1 fuel
Move from 2 to 2: 0 fuel
Move from 14 to 2: 12 fuel
This costs a total of 37 fuel. This is the cheapest possible outcome; more expensive outcomes include aligning at position 1 (41 fuel), position 3 (39 fuel), or position 10 (71 fuel).
Determine the horizontal position that the crabs can align to using the least fuel possible. How much fuel must they spend to align to that position?
### Setup
```
from utils import *
def parse(text):
    return [int(h) for h in re.findall(r'\d+', text)]
_inputData = parse(initDay('day7'))
_sampleData = parse(getMarkdown('For example'))
plotStyle({'axes.formatter.useoffset': False, 'axes.titley': .8})
def plot(result, costs, title):
ax = plt.subplots()[1]
ax.ticklabel_format(style='plain')
ax.yaxis.tick_right()
plt.ylabel('cost')
plt.xlabel('position')
plt.scatter(range(len(costs)), costs, s=10, c='#f99f50')
plt.scatter(costs.index(result), result, s=50, c='#ed1c24')
plt.title(' ' + title)
```
### Solution
Brute force the full range, calculating the cost to move all crabs to each position, and return the minimum of those costs. Cost is the sum of the distances to a given position for each crab.
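As a side note (separate from the brute-force approach used below), the position minimizing a sum of absolute distances is always a median of the positions, so Part 1 also has a closed-form answer; `min_fuel_l1` is just a local helper for this check:

```python
from statistics import median

def min_fuel_l1(crabs):
    """Fuel needed when aligning all crabs at the median position."""
    m = int(median(crabs))
    return sum(abs(c - m) for c in crabs)

sample = [16, 1, 2, 0, 4, 2, 7, 1, 2, 14]
print(min_fuel_l1(sample))  # 37, matching the worked example
```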
```
def solve(crabs, cost):
    # Try every candidate position from 0 to max(crabs) inclusive
    costs = [
        sum(cost(crab, pos) for crab in crabs)
        for pos in range(max(crabs) + 1)]
    return min(costs), costs
def cost1(start, end):
return abs(end - start)
sampleResult1, sampleCosts1 = solve(_sampleData, cost1)
check(sampleResult1, 37)
plot(sampleResult1, sampleCosts1, 'Part 1 Sample Data')
inputResult1, inputCosts1 = solve(_inputData, cost1)
check1(inputResult1)
plot(inputResult1, inputCosts1, 'Part 1 Input Data')
```
## Part 2
### Problem
The crabs don't seem interested in your proposed solution. Perhaps you misunderstand crab engineering?
As it turns out, crab submarine engines don't burn fuel at a constant rate. Instead, each change of 1 step in horizontal position costs 1 more unit of fuel than the last: the first step costs 1, the second step costs 2, the third step costs 3, and so on.
As each crab moves, moving further becomes more expensive. This changes the best horizontal position to align them all on; in the example above, this becomes 5:
Move from 16 to 5: 66 fuel
Move from 1 to 5: 10 fuel
Move from 2 to 5: 6 fuel
Move from 0 to 5: 15 fuel
Move from 4 to 5: 1 fuel
Move from 2 to 5: 6 fuel
Move from 7 to 5: 3 fuel
Move from 1 to 5: 10 fuel
Move from 2 to 5: 6 fuel
Move from 14 to 5: 45 fuel
This costs a total of 168 fuel. This is the new cheapest possible outcome; the old alignment position (2) now costs 206 fuel instead.
Determine the horizontal position that the crabs can align to using the least fuel possible so they can make you an escape route! How much fuel must they spend to align to that position?
### Solution
Same as Part 1, with a tweak to the cost function. Instead of the plain distance, each crab now pays for every intermediate step as well: 1 + 2 + ... + d fuel for a distance d, which is the triangular number `d*(d+1)/2` (the code below computes the same value as `d*(d-1)/2` after first adding 1 to the distance). Something I still remember from grade school!
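The triangular-number cost can be sanity-checked against the worked example above, independently of the solution code (`triangular_cost` is just a local helper here):

```python
def triangular_cost(start, end):
    # fuel for distance d is 1 + 2 + ... + d = d*(d+1)//2
    d = abs(end - start)
    return d * (d + 1) // 2

# Matches the example: 16 -> 5 costs 66 fuel, 0 -> 5 costs 15 fuel
assert triangular_cost(16, 5) == 66
assert triangular_cost(0, 5) == 15
print('checks pass')
```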
```
def cost2(start, end):
d = abs(end - start)+1
return d*(d-1)//2
sampleResult2, sampleCosts2 = solve(_sampleData, cost2)
check(sampleResult2, 168)
plot(sampleResult2, sampleCosts2, 'Part 2 Sample Data')
inputResult2, inputCosts2 = solve(_inputData, cost2)
check2(inputResult2)
plot(inputResult2, inputCosts2, 'Part 2 Input Data')
```
# Simulation and comparison of dual pol and dual pol diagonal only
```
%matplotlib inline
import numpy as np
from osgeo import gdal
from osgeo.gdalconst import GDT_Float32, GA_ReadOnly
def make_simimage(fn,m=5,bands=9,sigma=1,alpha=0.2,beta=0.2):
simimage = np.zeros((100**2,9))
ReSigma = np.zeros((3,3))
ImSigma = np.zeros((3,3))
for i in range(3):
for j in range(3):
if i==j:
ReSigma[i,j]=sigma**2
elif i<j:
ReSigma[i,j] = alpha*sigma**2
ImSigma[i,j] = beta*sigma**2
else:
ReSigma[i,j] = alpha*sigma**2
ImSigma[i,j] = -beta*sigma**2
Sigma = np.mat(ReSigma +1j*ImSigma)
C = np.linalg.cholesky(Sigma)
for i in range(100**2):
X = np.mat(np.random.randn(m,3))
Y = np.mat(np.random.randn(m,3))
Wr = X.T*X + Y.T*Y
Wi = X.T*Y - Y.T*X
W = (Wr - 1j*Wi)/2
W = C*W*C.H
simimage[i,0] = np.real(W[0,0])
simimage[i,1] = np.real(W[0,1])
simimage[i,2] = np.imag(W[0,1])
simimage[i,3] = np.real(W[0,2])
simimage[i,4] = np.imag(W[0,2])
simimage[i,5] = np.real(W[1,1])
simimage[i,6] = np.real(W[1,2])
simimage[i,7] = np.imag(W[1,2])
simimage[i,8] = np.real(W[2,2])
driver = gdal.GetDriverByName('GTiff')
outDataset = driver.Create(fn,100,100,bands,GDT_Float32)
if bands == 9:
for i in range(bands):
outband = outDataset.GetRasterBand(i+1)
outband.WriteArray(np.reshape(simimage[:,i],(100,100)),0,0)
outband.FlushCache()
elif bands == 4:
outband = outDataset.GetRasterBand(1)
outband.WriteArray(np.reshape(simimage[:,0],(100,100)),0,0)
outband.FlushCache()
outband = outDataset.GetRasterBand(2)
outband.WriteArray(np.reshape(simimage[:,1],(100,100)),0,0)
outband.FlushCache()
outband = outDataset.GetRasterBand(3)
outband.WriteArray(np.reshape(simimage[:,2],(100,100)),0,0)
outband.FlushCache()
outband = outDataset.GetRasterBand(4)
outband.WriteArray(np.reshape(simimage[:,5],(100,100)),0,0)
outband.FlushCache()
elif bands == 3:
outband = outDataset.GetRasterBand(1)
outband.WriteArray(np.reshape(simimage[:,0],(100,100)),0,0)
outband.FlushCache()
outband = outDataset.GetRasterBand(2)
outband.WriteArray(np.reshape(simimage[:,5],(100,100)),0,0)
outband.FlushCache()
outband = outDataset.GetRasterBand(3)
outband.WriteArray(np.reshape(simimage[:,8],(100,100)),0,0)
outband.FlushCache()
elif bands == 2:
outband = outDataset.GetRasterBand(1)
outband.WriteArray(np.reshape(simimage[:,0],(100,100)),0,0)
outband.FlushCache()
outband = outDataset.GetRasterBand(2)
outband.WriteArray(np.reshape(simimage[:,5],(100,100)),0,0)
outband.FlushCache()
elif bands == 1:
outband = outDataset.GetRasterBand(1)
outband.WriteArray(np.reshape(simimage[:,0],(100,100)),0,0)
outband.FlushCache()
outDataset = None
print('written to %s' % fn)
```
### Simulate dual pol series with change in last (5th) image
```
bands = 4
m = 5
make_simimage('myimagery/simx1.tif',m=m,sigma=1.0,bands=bands)
make_simimage('myimagery/simx2.tif',m=m,sigma=1.0,bands=bands)
make_simimage('myimagery/simx3.tif',m=m,sigma=1.0,bands=bands)
make_simimage('myimagery/simx4.tif',m=m,sigma=1.0,bands=bands)
make_simimage('myimagery/simx5.tif',m=m,sigma=2.5,bands=bands)
```
### Run sequential omnibus, significance = 0.01, ENL = 5
```
!scripts/run_sar_seq.sh simx myimagery/ 5 0.01
run scripts/dispms -f myimagery/sarseq_smap.tif -c
```
### Repeat above for dual pol diagonal only
### Count positives in each of the four intervals
```
def count(infile):
gdal.AllRegister()
inDataset = gdal.Open(infile,GA_ReadOnly)
cols = inDataset.RasterXSize
rows = inDataset.RasterYSize
bands = inDataset.RasterCount
for b in range(bands):
band = inDataset.GetRasterBand(b+1)
data=band.ReadAsArray(0,0,cols,rows)
print('interval %i positives: %f' % (b+1, np.sum(data)/255.0))
inDataset = None
count('myimagery/sarseq_bmap.tif')
```
```
import os
import pandas as pd
from pvoutput import *
```
* Uses PVOutput.org API search to try to get all systems in UK.
* The API search only allows us to get all systems within a search radius of <= 25 km.
* This script loads the appropriately-spaced UK grid points (generated with `get_grid_points_for_UK.ipynb`) and gets metadata for all those systems.
* HOWEVER, the API search returns a maximum of 30 PV systems, and the API provides no mechanism to iterate through pages of search results. So, for any search that returns 30 PV systems, we should assume there are more systems in the PVOutput.org database. I can't think of an efficient way to query those: for each circle of 25 km radius, we would need to split it up into multiple smaller circles that completely cover the outer circle. This can be done with 4 inner circles, but not very efficiently. So I gave up on the API search and instead scraped the PVOutput.org map pages, using `PVOutput_download_HTML_for_country.ipynb` and `PVOutput_process_HTML.ipynb`
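The "split each 25 km circle into 4 smaller covering circles" idea can be sketched in a few lines. This is a hypothetical helper, not part of the original notebooks: four circles of radius R/√2, centred at the corners of a square inscribed in the search circle, fully cover it, and the lat/lon conversion below uses a flat-earth approximation (fine at these scales):

```python
import math

def split_search_circle(lat, lon, radius_km=25.0):
    """Cover one search circle with four smaller ones.

    Four circles of radius R/sqrt(2), centred at (+/- R/2, +/- R/2)
    relative to the original centre, cover the whole disk of radius R.
    Uses ~111 km per degree of latitude (flat-earth approximation).
    """
    sub_radius = radius_km / math.sqrt(2)   # ~17.68 km for R = 25
    offset_km = radius_km / 2
    dlat = offset_km / 111.0
    dlon = offset_km / (111.0 * math.cos(math.radians(lat)))
    centres = [(lat + sy * dlat, lon + sx * dlon)
               for sy in (-1, 1) for sx in (-1, 1)]
    return centres, sub_radius

centres, r = split_search_circle(51.5, -0.1)
print(len(centres), round(r, 2))  # 4 17.68
```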
## Iterate through all grid points for the UK
```
grid_points = pd.read_csv('~/dev/python/openclimatefix/solar/pvoutput/uk_grid_points.csv')
grid_points.head()
pv_systems_filename = os.path.expanduser('~/dev/python/openclimatefix/solar/pvoutput/uk_pv_systems.csv')
if os.path.exists(pv_systems_filename):
previously_read_grid_indicies = pd.read_csv(pv_systems_filename, usecols=['query_grid_index']).squeeze()
start_grid_index = previously_read_grid_indicies.max() + 1
header = False
else:
start_grid_index = 0
header = True
print("Starting at grid index", start_grid_index)
for index, row in grid_points.loc[start_grid_index:].iterrows():
lat_lon = '{},{}'.format(row.lat, row.lon)
print(index, lat_lon)
pv_systems = pv_system_search(query="25km", lat_lon=lat_lon)
print(" Retrieved", len(pv_systems), "pv systems")
pv_systems['query_latitude'] = row.lat
pv_systems['query_longitude'] = row.lon
pv_systems['query_grid_index'] = index
with open(pv_systems_filename, mode='a') as fh:
pv_systems.to_csv(fh, header=header)
pv_systems = pd.read_csv(pv_systems_filename, index_col='system_id')
pv_systems.query('system_name == "Berghem Umeå"')
len(pv_systems)
pv_systems.index.duplicated().sum()
pv_systems_deduped = pv_systems.loc[~pv_systems.index.duplicated()]
len(pv_systems_deduped)
len(pv_systems_deduped.query('num_outputs > 288'))
```
## Plot map of PV systems
```
import geopandas as gpd
pv_system_gdf = gpd.GeoDataFrame(
pv_systems_deduped,
geometry=gpd.points_from_xy(
pv_systems_deduped.longitude,
pv_systems_deduped.latitude
)
)
shapefile = os.path.expanduser('~/data/geospatial/GBR_adm0.shp')
country = gpd.read_file(shapefile)
# country.crs = {'init': WGS84_PROJECTION}
ax = country.plot(alpha=0.2, figsize=(15, 15), color='white', edgecolor='black', linewidth=0.5);
pv_system_gdf.query('num_outputs > 50').plot(ax=ax, alpha=0.7, markersize=2)
ax.set_xlim((-10, 2))
```
Next steps:
* De-dupe
* for any queries where we got 30 results, split that search area into smaller search areas and search again to get all pv systems.
* select systems with > ~100 readings
* pull metadata for these systems
* download timeseries for systems with 5-minutely data. Maybe get, say, June's data
```
num_results_per_query_grid_index = pv_systems.groupby('query_grid_index')['system_name'].count()
query_grid_indexes_to_revisit = num_results_per_query_grid_index[num_results_per_query_grid_index == 30].index
# These are the 'search circles' which returned 30 results (the max number of results the API
# will return in one go); and so we should re-visit these search circles with smaller search
# circles.
query_grid_indexes_to_revisit
```
<img src="http://i67.tinypic.com/2jcbwcw.png" align="left"></img><br><br><br><br>
## Notebook: Web Scraping & Web Crawling
**Author List**: Alexander Fred Ojala
**Original Sources**: https://www.crummy.com/software/BeautifulSoup/bs4/doc/ & https://www.dataquest.io/blog/web-scraping-tutorial-python/
**License**: Feel free to do whatever you want to with this code
**Compatibility:** Python 2.x and 3.x
## Other Web Scraping tools
This notebook mainly goes over how to get data with the Python package `BeautifulSoup`. However, there are many other Python packages that can be used for scraping.
Two are very popular and widely used:
* **Selenium:** Browser automation tool with Python bindings that can act like a human visitor, useful for Javascript-heavy websites
* **Scrapy:** A full scraping framework, really popular for automated, long-running crawl jobs
# Table of Contents
(Clickable document links)
___
### [0: Pre-setup](#sec0)
Document setup and Python 2 and Python 3 compatibility
### [1: Simple webscraping intro](#sec1)
Simple example of webscraping on a premade HTML template
### [2: Scrape Data-X Schedule](#sec2)
Find and scrape the current Data-X schedule.
### [3: Scrape Images and Files](#sec3)
Scrape a website of Images, PDF's, CSV data or any other file type.
## [Breakout Problem: Scrape Weather Data](#secBK)
Scrape real time weather data in Berkeley.
### [Appendix](#sec5)
#### [Scrape Bloomberg sitemap for political news headlines](#sec6)
#### [Webcrawl Twitter, recursive URL link fetcher + depth](#sec7)
#### [SEO, visualize website categories as a tree](#sec8)
<a id='sec0'></a>
## Pre-Setup
```
# stretch Jupyter coding blocks to fit screen
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
# if 100% it would fit the screen
# make it run on py2 and py3
from __future__ import division, print_function
```
<a id='sec1'></a>
# Webscraping intro
In order to scrape content from a website we first need to download the HTML contents of the website. This can be done with the Python library **requests** (with its `.get` method).
Then when we want to extract certain information from a website we use the scraping tool **BeautifulSoup4** (import bs4). In order to extract information with beautifulsoup we have to create a soup object from the HTML source code of a website.
```
import requests # The requests library is an
# HTTP library for getting content and posting etc.
import bs4 as bs # BeautifulSoup4 is a Python library
# for pulling data out of HTML and XML code.
# we can query markup languages for specific content
```
# Scraping a simple website
```
source = requests.get("https://afo.github.io/data-x")
# a GET request will download the HTML webpage.
print(source) # If <Response [200]> then
# the website has been downloaded succesfully
```
**Different types of responses:**
Generally, a status code starting with 2 indicates success; a status code starting with 4 or 5 indicates an error.
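As a side note, `requests` can also classify and raise on error statuses for you. A small offline illustration, using a synthetic `Response` object so no network call is needed:

```python
import requests

resp = requests.Response()   # synthetic response object, no network needed
resp.status_code = 404
resp.url = 'https://example.com/missing'

print(resp.ok)               # False: .ok is True only for codes below 400
try:
    resp.raise_for_status()  # raises requests.HTTPError for 4xx / 5xx codes
except requests.HTTPError as err:
    print('request failed:', err)
```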
```
print(source.content) # This is the HTML content of the website,
# as you can see it's quite hard to decipher
print(type(source.content)) # type byte in Python 3
# Convert source.content to a beautifulsoup object
# beautifulsoup can parse (extract specific information) HTML code
soup = bs.BeautifulSoup(source.content, features='lxml')
# we pass in the source content
# features specifies what type of code we are parsing,
# here 'lxml' specifies that we want beautiful soup to parse HTML code
print(type(soup))
print(soup) # looks a lot nicer!
```
Above we printed the HTML code of the website,
decoded as a beautiful soup object
`<xxx> </xxx>`: these are the HTML tags that specify certain sections, styling etc. of the website; for more info:
https://www.w3schools.com/tags/ref_byfunc.asp
**class and id:** These are attributes of HTML tags; they are used as hooks to give unique styling to certain elements and an id for sections / parts of the page.
Full list of HTML tags: https://developer.mozilla.org/en-US/docs/Web/HTML/Element
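In addition to `find`/`find_all`, BeautifulSoup supports CSS selectors through `.select()` and `.select_one()`, which combine tag, class and id hooks in one query. A small self-contained illustration on inline HTML, so it runs without a network connection (the built-in `'html.parser'` is used here instead of `'lxml'`):

```python
import bs4 as bs

html = '''
<div class="header"><h1 id="main-title">Hello</h1></div>
<p class="intro">First paragraph</p>
<p>Second paragraph</p>
'''
soup = bs.BeautifulSoup(html, 'html.parser')

print(soup.select_one('#main-title').text)  # id selector
print(soup.select_one('p.intro').text)      # tag + class selector
print(len(soup.select('p')))                # all <p> tags: 2
```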
### Suppose we want to extract content that is shown on the website
```
# Inside the <body> tag of the website is where all the main content is
print(soup.body)
print(soup.title) # Title of the website
print(soup.find('title')) # same as .title
# If we want to extract specific text
print(soup.find('p')) # will only return first <p> tag
print(soup.find('p').text) # extracts the string within the <p> tag, strips it of tag
# If we want to extract all <p> tags
print(soup.find_all('p')) # returns list of all <p> tags
# we can also search for classes within all tags, using class_
# note _ is used to distinguish with Python's builtin class function
print(soup.find(class_='header'))
# We can also find tags with a speific id
print(soup.find(id='second'))
print(soup.find_all(class_='regular_list'))
for p in soup.find_all('p'): # print all text paragraphs on the webpage
print(p.text)
# Extract links / urls
# Links in html is usually coded as <a href="url">
# where the link is url
print(soup.a)
print(type(soup.a))
soup.a.get('href')
# to get the link from href attribute
# if we want to list links and their text info
links = soup.find_all('a')
for l in links:
print("\nInfo about {}: ".format(l.text), l.get('href'))
# then we have extracted the link
```
<a id='sec2'></a>
# Data-X website Scraping
### Now let us scrape the current Syllabus Schedule from the Data-X website
```
source = requests.get('https://data-x.blog/').content
# get the source content
soup = bs.BeautifulSoup(source,'lxml')
print(soup.prettify())
# .prettify() method makes the HTML code more readable
# as you can see this code is more difficult
# to read then the simple example above
# mostly because this is a real Wordpress website
```
#### Print the Title of the website
```
print(soup.find('title').text)
# check that we are at the correct website
```
#### Extract all paragraphs of text
```
for p in soup.find_all('p'):
print(p.text)
```
### Look at the navigation bar
```
navigation_bar = soup.find('nav')
print(navigation_bar)
# These are the linked subpages in the navigation bar
nav_bar = navigation_bar.text
print(nav_bar)
```
### Scrape the Syllabus of its content
(maybe to use in an App)
```
# Now we want to find the Syllabus,
# however we are at the root web page, not displaying the Syllabus
# Get all links from navigation bar at the data-x home webpage
for url in navigation_bar.find_all('a'):
link = url.get('href')
if 'data-x.blog' in link: # check link to a subpage
print(link)
if 'syllabus' in link:
syllabus_url = link
# syllabus is located at https://data-x.blog/syllabus/
print(syllabus_url)
# Open new connection to the Syllabus url. Replace soup object.
source = requests.get(syllabus_url).content
soup = bs.BeautifulSoup(source, 'lxml')
print(soup.body.prettify())
# we can see that the Syllabus is built up of <td>, <tr> and <table> tags
```
### Find the course schedule table from the syllabus:
Usually organized data in HTML format on a website is stored in tables under `<table>, <tr>,` and `<td>` tags. Here we want to extract the information in the Data-X syllabus.
**NOTE:** To identify element, class or id name of the object of your interest on a web page, you can go to the link address in your browser, under 'more tools' option click __'developer tools'__. This opens the 'Document object Model' of the webpage. Hover on the element of your interest on the webpage to check its location. This will help you in deciding which parts of 'soup content' you want to parse. More info at: https://developer.chrome.com/devtools
```
# We can see that course schedule is in <table><table/> elements
# We can also get the table
full_table = soup.find_all('table')
# A new row in an HTML table starts with <tr> tag
# A new column entry is defined by <td> tag
table_result = list()
for table in full_table:
for row in table.find_all('tr'):
row_cells = row.find_all('td') # find all table data
row_entries = [cell.text for cell in row_cells]
print(row_entries)
table_result.append(row_entries)
# get all the table data into a list
# We can also read it in to a Pandas DataFrame
import pandas as pd
pd.set_option('display.max_colwidth', 10000)
df = pd.DataFrame(table_result)
df
# Pandas can also grab tables from a website automatically
import pandas as pd
import html5lib
# requires html5lib:
#!conda install --yes html5
dfs = pd.read_html('https://data-x.blog/syllabus/')
# returns a list of all tables at url
dfs
print(type(dfs)) #list of tables
print(len(dfs)) # we only have one table
print(type(dfs[0])) # stored as DataFrame
df = pd.concat(dfs,ignore_index=True)
# Looks so-so, however striped from break line characters etc.
df
# Make it nicer
# Assign column names
df.columns= ['Part','Detailed Description']
# Assing week number
weeks = list()
for i in range(1,13):
weeks = weeks+['Week{}'.format(i) for tmp in range(4)]
df['Week'] = weeks
df.head()
# Set Week and Part as Multiindex
df = df.set_index(['Week','Part'])
df.head(10)
```
<a id='sec3'></a>
# Scrape images and other files
```
# As we can see there are two images on the data-x.blog/resources
# say that we want to download them
# Images are displayed with the <img> tag in HTML
# open connection and create new soup
raw = requests.get('https://data-x.blog/resources/').content
soup = bs.BeautifulSoup(raw,features='lxml')
print(soup.find('img'))
# as we can see below the image urls
# are stored in the src attribute inside the img tag
# Parse all url to the images
img_urls = list()
for img in soup.find_all('img'):
img_url = img.get('src')
if '.jpeg' in img_url or '.jpg' in img_url:
print(img_url)
img_urls.append(img_url)
print(img_urls)
!ls
# To download and save files with Python we can use
# the shutil library which is a file operations library
import shutil
for idx, img_url in enumerate(img_urls):
#enumarte to create a file integer name for every image
# make a request to the image URL
img_source = requests.get(img_url, stream=True)
# we set stream = True to download/
# stream the content of the data
with open('img'+str(idx)+'.jpg', 'wb') as file:
# open file connection, create file and write to it
shutil.copyfileobj(img_source.raw, file)
# save the raw file object
del img_source # to remove the file from memory
!ls
```
## Scraping function to download files of any type from a website
Below is a function that takes in a website and a specific file type, and downloads up to a given number of matching files from the website.
```
# Extended scraping function of any file format
import os # To interact with operating system and format file name
import shutil # To copy file object from python to disk
import requests
import bs4 as bs
def py_file_scraper(url, html_tag='img', source_tag='src', file_type='.jpg',max=-1):
'''
Function that scrapes a website for certain file formats.
The files will be placed in a folder called "files"
in the working directory.
url = the url we want to scrape from
html_tag = the file tag (usually img for images or
a for file links)
source_tag = the source tag for the file url
(usually src for images or href for files)
file_type = .png, .jpg, .pdf, .csv, .xls etc.
max = integer (max number of files to scrape,
if = -1 it will scrape all files)
'''
# make a directory called 'files'
# for the files if it does not exist
if not os.path.exists('files/'):
os.makedirs('files/')
print('Loading content from the url...')
source = requests.get(url).content
print('Creating content soup...')
soup = bs.BeautifulSoup(source,'lxml')
i=0
print('Finding tag:%s...'%html_tag)
for n, link in enumerate(soup.find_all(html_tag)):
file_url=link.get(source_tag)
print ('\n',n+1,'. File url',file_url)
if 'http' in file_url: # check that it is a valid link
print('It is a valid url..')
if file_type in file_url: #only check for specific
# file type
print('%s FILE TYPE FOUND IN THE URL...'%file_type)
file_name = os.path.splitext(os.path.basename(file_url))[0] + file_type
#extract file name from url
file_source = requests.get(file_url, stream = True)
# open new stream connection
with open('./files/'+file_name, 'wb') as file:
# open file connection, create file and
# write to it
shutil.copyfileobj(file_source.raw, file)
# save the raw file object
print('DOWNLOADED:',file_name)
i+=1
del file_source # delete from memory
else:
print('%s file type NOT found in url:'%file_type)
print('EXCLUDED:',file_url)
# urls not downloaded from
if i == max:
print('Max reached')
break
print('Done!')
```
# Scrape funny cat pictures
```
py_file_scraper('https://funcatpictures.com/')
# scrape cats
!ls ./files
```
# Scrape pdf's from Data-X site
```
py_file_scraper('https://data-x.blog/resources',
html_tag='a',source_tag='href',file_type='.pdf', \
max=5)
```
# Scrape real data CSV files from websites
```
py_file_scraper('http://www-eio.upc.edu/~pau/cms/rdata/datasets.html',
html_tag='a', # R data sets
source_tag='href', file_type='.csv',max=5)
```
---
<a id='secBK'></a>
# Breakout problem
In this Breakout Problem you should extract live weather data in Berkeley from:
[http://forecast.weather.gov/MapClick.php?lat=37.87158815800046&lon=-122.27274583799971](http://forecast.weather.gov/MapClick.php?lat=37.87158815800046&lon=-122.27274583799971)
* Task scrape
* period / day (as Tonight, Friday, FridayNight etc.)
* the temperature for the period (as Low, High)
* the long weather description (e.g. Partly cloudy, with a low around 49..)
Store the scraped data strings in a Pandas DataFrame
**Hint:** The weather information is found in a div tag with `id='seven-day-forecast'`
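One possible sketch of a solution, shown here on a minimal inline stand-in for the page so it runs offline. The `id='seven-day-forecast'` hook comes from the hint above; the `tombstone-container`, `period-name` and `temp` class names, and the long description living in the `<img>` `title` attribute, are assumptions about the live NWS page and may change:

```python
import bs4 as bs
import pandas as pd

# Minimal stand-in mimicking the assumed structure of the NWS forecast page
html = '''
<div id="seven-day-forecast">
  <div class="tombstone-container">
    <p class="period-name">Tonight</p>
    <p class="temp temp-low">Low: 49 F</p>
    <img title="Partly cloudy, with a low around 49.">
  </div>
  <div class="tombstone-container">
    <p class="period-name">Friday</p>
    <p class="temp temp-high">High: 63 F</p>
    <img title="Mostly sunny, with a high near 63.">
  </div>
</div>
'''

soup = bs.BeautifulSoup(html, 'html.parser')
seven_day = soup.find(id='seven-day-forecast')          # hook from the hint
items = seven_day.find_all(class_='tombstone-container')  # assumed class name

df = pd.DataFrame({
    'period': [i.find(class_='period-name').get_text() for i in items],
    'temp': [i.find(class_='temp').get_text() for i in items],
    'desc': [i.find('img')['title'] for i in items],
})
print(df)
# For the live page, replace `html` with requests.get(<forecast url>).content
```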
# Appendix
<a id='sec6'></a>
# Scrape Bloomberg sitemap (XML) for current political news
```
# XML sitemaps list all of a site's URLs between tags;
# XML is both human and machine readable.
# Sitemaps hold the newest links: to find all the links, find the sitemap!
# News websites will have sitemaps for politics; bots that constantly
# track news follow the sitemaps.
# Before scraping a website, look at its robots.txt file
bs.BeautifulSoup(requests.get('https://www.bloomberg.com/robots.txt').content,'lxml')
source = requests.get('https://www.bloomberg.com/feeds/bpol/sitemap_news.xml').content
soup = bs.BeautifulSoup(source,'xml') # Note parser 'xml'
print(soup.prettify())
# Find political news headlines
for news in soup.find_all({'news'}):
print(news.title.text)
print(news.publication_date.text)
#print(news.keywords.text)
print('\n')
```
<a id='sec7'></a>
# Web crawl
Web crawling is almost like webscraping, but instead you crawl a specific website (and often its subsites) and extract meta information. It can be seen as simple, recursive scraping. This can be used for web indexing (in order to build a web search engine).
## Web crawl Twitter account
**Authors:** Kunal Desai & Alexander Fred Ojala
```
import bs4
from bs4 import BeautifulSoup
import requests
# Helper function to maintain the urls and the number of times they appear
url_dict = dict()
def add_to_dict(url_d, key):
if key in url_d:
url_d[key] = url_d[key] + 1
else:
url_d[key] = 1
# Recursive function which extracts links from the given url upto a given 'depth'.
def get_urls(url, depth):
if depth == 0:
return
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')
for link in soup.find_all('a'):
if link.has_attr('href') and "https://" in link['href']:
# print(link['href'])
add_to_dict(url_dict, link['href'])
get_urls(link['href'], depth - 1)
# Iterative function which extracts links from the given url upto a given 'depth'.
def get_urls_iterative(url, depth):
urls = [url]
for url in urls:
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')
for link in soup.find_all('a'):
if link.has_attr('href') and "https://" in link['href']:
add_to_dict(url_dict, link['href'])
urls.append(link['href'])
if len(urls) > depth:
break
get_urls("https://twitter.com/GolfWorld", 2)
for key in url_dict:
print(str(key) + " ---- " + str(url_dict[key]))
```
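The `add_to_dict` bookkeeping above reimplements counting that the standard library already provides; a minimal equivalent sketch using `collections.Counter` (the URLs here are made up for illustration):

```python
from collections import Counter

# Count occurrences of URLs as they are discovered during the crawl
url_counts = Counter()
for url in ['https://a.example', 'https://b.example', 'https://a.example']:
    url_counts[url] += 1  # same effect as add_to_dict(url_dict, url)

print(url_counts['https://a.example'])  # 2
print(url_counts['https://b.example'])  # 1
```

`Counter` also gives `most_common(n)` for free, which is handy when ranking the most frequently linked URLs.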
<a id='sec8'></a>
# SEO: Visualize sitemap and categories in a website
**Source:** https://www.ayima.com/guides/how-to-visualize-an-xml-sitemap-using-python.html
```
# Visualize XML sitemap with categories!
import requests
from bs4 import BeautifulSoup
url = 'https://www.sportchek.ca/sitemap.xml'
url = 'https://www.bloomberg.com/feeds/bpol/sitemap_index.xml'
page = requests.get(url)
print('Loaded page with: %s' % page)
sitemap_index = BeautifulSoup(page.content, 'html.parser')
print('Created %s object' % type(sitemap_index))
urls = [element.text for element in sitemap_index.findAll('loc')]
print(urls)
def extract_links(url):
''' Open an XML sitemap and find content wrapped in loc tags. '''
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
links = [element.text for element in soup.findAll('loc')]
return links
sitemap_urls = []
for url in urls:
links = extract_links(url)
sitemap_urls += links
print('Found {:,} URLs in the sitemap'.format(len(sitemap_urls)))
with open('sitemap_urls.dat', 'w') as f:
for url in sitemap_urls:
f.write(url + '\n')
'''
Categorize a list of URLs by site path.
The file containing the URLs should exist in the working directory and be
named sitemap_urls.dat. It should contain one URL per line.
Categorization depth can be specified by executing a call like this in the
terminal (where we set the granularity depth level to 5):
python categorize_urls.py --depth 5
The same result can be achieved by setting the categorization_depth variable
manually at the head of this file and running the script with:
python categorize_urls.py
'''
from __future__ import print_function
import pandas as pd  # needed by peel_layers below
categorization_depth = 3
# Main script functions
def peel_layers(urls, layers=3):
''' Builds a dataframe containing all unique page identifiers up
to a specified depth and counts the number of sub-pages for each.
Prints results to a CSV file.
urls : list
List of page URLs.
layers : int
Depth of automated URL search. Large values for this parameter
may cause long runtimes depending on the number of URLs.
'''
# Store results in a dataframe
sitemap_layers = pd.DataFrame()
# Get base levels
bases = pd.Series([url.split('//')[-1].split('/')[0] for url in urls])
sitemap_layers[0] = bases
# Get specified number of layers
for layer in range(1, layers+1):
page_layer = []
for url, base in zip(urls, bases):
try:
page_layer.append(url.split(base)[-1].split('/')[layer])
except IndexError:
# There is nothing that deep!
page_layer.append('')
sitemap_layers[layer] = page_layer
# Count and drop duplicate rows + sort
sitemap_layers = sitemap_layers.groupby(list(range(0, layers+1)))[0].count()\
.rename('counts').reset_index()\
.sort_values('counts', ascending=False)\
.sort_values(list(range(0, layers)), ascending=True)\
.reset_index(drop=True)
# Convert column names to string types and export
sitemap_layers.columns = [str(col) for col in sitemap_layers.columns]
sitemap_layers.to_csv('sitemap_layers.csv', index=False)
# Return the dataframe
return sitemap_layers
sitemap_urls = open('sitemap_urls.dat', 'r').read().splitlines()
print('Loaded {:,} URLs'.format(len(sitemap_urls)))
print('Categorizing up to a depth of %d' % categorization_depth)
sitemap_layers = peel_layers(urls=sitemap_urls,
layers=categorization_depth)
print('Printed {:,} rows of data to sitemap_layers.csv'.format(len(sitemap_layers)))
'''
Visualize a list of URLs by site path.
This script reads in the sitemap_layers.csv file created by the
categorize_urls.py script and builds a graph visualization using Graphviz.
Graph depth can be specified by executing a call like this in the
terminal:
python visualize_urls.py --depth 4 --limit 10 --title "My Sitemap" --style "dark" --size "40"
The same result can be achieved by setting the variables manually at the head
of this file and running the script with:
python visualize_urls.py
'''
from __future__ import print_function
# Set global variables
graph_depth = 3 # Number of layers deep to plot categorization
limit = 3 # Maximum number of nodes for a branch
title = '' # Graph title
style = 'light' # Graph style, can be "light" or "dark"
size = '8,5' # Size of rendered PDF graph
# Import external library dependencies
import pandas as pd
import graphviz
# Main script functions
def make_sitemap_graph(df, layers=3, limit=50, size='8,5'):
''' Make a sitemap graph up to a specified layer depth.
sitemap_layers : DataFrame
The dataframe created by the peel_layers function
containing sitemap information.
layers : int
Maximum depth to plot.
limit : int
The maximum number node edge connections. Good to set this
low for visualizing deep into site maps.
'''
# Check to make sure we are not trying to plot too many layers
if layers > len(df) - 1:
layers = len(df)-1
print('There are only %d layers available to plot, setting layers=%d'
% (layers, layers))
# Initialize graph
f = graphviz.Digraph('sitemap', filename='sitemap_graph_%d_layer' % layers)
f.body.extend(['rankdir=LR', 'size="%s"' % size])
def add_branch(f, names, vals, limit, connect_to=''):
''' Adds a set of nodes and edges to nodes on the previous layer. '''
# Get the currently existing node names
node_names = [item.split('"')[1] for item in f.body if 'label' in item]
# Only add a new branch if it will connect to a previously created node
if connect_to:
if connect_to in node_names:
for name, val in list(zip(names, vals))[:limit]:
f.node(name='%s-%s' % (connect_to, name), label=name)
f.edge(connect_to, '%s-%s' % (connect_to, name), label='{:,}'.format(val))
f.attr('node', shape='rectangle') # Plot nodes as rectangles
# Add the first layer of nodes
for name, counts in df.groupby(['0'])['counts'].sum().reset_index()\
.sort_values(['counts'], ascending=False).values:
f.node(name=name, label='{} ({:,})'.format(name, counts))
if layers == 0:
return f
f.attr('node', shape='oval') # Plot nodes as ovals
f.graph_attr.update()
# Loop over each layer adding nodes and edges to prior nodes
for i in range(1, layers+1):
cols = [str(i_) for i_ in range(i)]
nodes = df[cols].drop_duplicates().values
for j, k in enumerate(nodes):
# Compute the mask to select correct data
mask = True
for j_, ki in enumerate(k):
mask &= df[str(j_)] == ki
# Select the data then count branch size, sort, and truncate
data = df[mask].groupby([str(i)])['counts'].sum()\
.reset_index().sort_values(['counts'], ascending=False)
# Add to the graph
add_branch(f,
names=data[str(i)].values,
vals=data['counts'].values,
limit=limit,
connect_to='-'.join(['%s']*i) % tuple(k))
print(('Built graph up to node %d / %d in layer %d' % (j, len(nodes), i))\
.ljust(50), end='\r')
return f
def apply_style(f, style, title=''):
''' Apply the style and add a title if desired. More styling options are
documented here: http://www.graphviz.org/doc/info/attrs.html#d:style
f : graphviz.dot.Digraph
The graph object as created by graphviz.
style : str
Available styles: 'light', 'dark'
title : str
Optional title placed at the bottom of the graph.
'''
dark_style = {
'graph': {
'label': title,
'bgcolor': '#3a3a3a',
'fontname': 'Helvetica',
'fontsize': '18',
'fontcolor': 'white',
},
'nodes': {
'style': 'filled',
'color': 'white',
'fillcolor': 'black',
'fontname': 'Helvetica',
'fontsize': '14',
'fontcolor': 'white',
},
'edges': {
'color': 'white',
'arrowhead': 'open',
'fontname': 'Helvetica',
'fontsize': '12',
'fontcolor': 'white',
}
}
light_style = {
'graph': {
'label': title,
'fontname': 'Helvetica',
'fontsize': '18',
'fontcolor': 'black',
},
'nodes': {
'style': 'filled',
'color': 'black',
'fillcolor': '#dbdddd',
'fontname': 'Helvetica',
'fontsize': '14',
'fontcolor': 'black',
},
'edges': {
'color': 'black',
'arrowhead': 'open',
'fontname': 'Helvetica',
'fontsize': '12',
'fontcolor': 'black',
}
}
if style == 'light':
style_attrs = light_style
elif style == 'dark':
style_attrs = dark_style
else:
raise ValueError("style must be 'light' or 'dark'")
f.graph_attr = style_attrs['graph']
f.node_attr = style_attrs['nodes']
f.edge_attr = style_attrs['edges']
return f
# Read in categorized data
sitemap_layers = pd.read_csv('sitemap_layers.csv', dtype=str)
# Convert numerical column to integer
sitemap_layers.counts = sitemap_layers.counts.apply(int)
print('Loaded {:,} rows of categorized data from sitemap_layers.csv'\
.format(len(sitemap_layers)))
print('Building %d layer deep sitemap graph' % graph_depth)
f = make_sitemap_graph(sitemap_layers, layers=graph_depth,
limit=limit, size=size)
f = apply_style(f, style=style, title=title)
f.render(cleanup=True)
print('Exported graph to sitemap_graph_%d_layer.pdf' % graph_depth)
```
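To see concretely what `peel_layers` extracts, here is the same base/layer splitting applied to a single made-up URL (the URL is illustrative, not taken from the sitemap):

```python
# A hypothetical URL of the kind found in a news sitemap
url = 'https://example.com/politics/2024/story-1'

# Layer 0: the domain, i.e. the text between '//' and the first '/'
base = url.split('//')[-1].split('/')[0]

# Deeper layers: successive path segments after the domain
path_parts = url.split(base)[-1].split('/')

print(base)           # example.com
print(path_parts[1])  # politics
print(path_parts[2])  # 2024
```

Each row of the resulting dataframe is therefore one unique path prefix, with `counts` recording how many sitemap URLs share it.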
```
import os
os.chdir(os.path.split(os.getcwd())[0])
import random
import numpy as np
import matplotlib.pyplot as plt
import gym
from agent import *
from optionpricing import *
import yaml
import torch
from collections import defaultdict
import matplotlib.style as style
style.use('seaborn-poster')
experiment_folder = None  # placeholder: set to an experiment directory name before running this cell
with open(os.path.join('experiments', experiment_folder, 'config.yaml'), 'r') as f:
args_dict = yaml.load(f, Loader = yaml.SafeLoader)
class Args:
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
args = Args(**args_dict)
config = {
'S': 100,
'T': 10, # 10 days
'L': 1,
'm': 100, # L options for m stocks
'n': 0,
'K': [95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105],
'D': 5,
'mu': 0,
'sigma': 0.01,
'r': 0,
'ss': 5,
'kappa': 0.1,
'multiplier': args.trc_multiplier,
'ticksize': args.trc_ticksize
}
env = OptionPricingEnv(config)
env.configure()
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
def load_estimator(env, device, nhidden, nunits, experiment_folder, kind = 'best'):
state_shape = env.observation_space.shape
state_space_dim = state_shape[0] if len(state_shape) == 1 else state_shape
estimator = Estimator(nhidden, nunits, state_space_dim, env.action_space.n)
if kind == 'best':
checkpoint = torch.load(os.path.join('experiments', experiment_folder, 'best.pt'), map_location = torch.device('cpu'))
elif kind == 'checkpoint':
checkpoint = torch.load(os.path.join('experiments', experiment_folder, 'checkpoint.pt'), map_location = torch.device('cpu'))
else:
raise ValueError('Invalid choice for kind')
estimator.load_state_dict(checkpoint['estimator'])
estimator.eval()
return estimator
def delta_neutral_policy(env):
return env.inv_action_map[-1 * int(env.delta * (env.L * env.m)) - env.n]
def simulate_episodes(config, n = 10000, kind = 'agent'):
env = OptionPricingEnv(config)
env.configure()
estimator = load_estimator(env, device, args.nhidden, args.nunits, experiment_folder, 'best')
full_history = {}
for i in range(1, n + 1):
print(f'\r{i}/{n} | {100 * i / n:.2f} %', end = '', flush = True)
state = torch.from_numpy(env.reset()).to(device)
history = defaultdict(list)
done = False
while not done:
history['delta'].append(env.delta)
if kind == 'agent':
with torch.no_grad():
action = np.argmax(estimator(state).numpy())
else:
action = delta_neutral_policy(env)
state, reward, done, info = env.step(action)
history['reward'].append(reward)
history['n'].append(env.n)
history['stock_value'].append(env.stock_value)
history['option_value'].append(env.option_value)
history['cash'].append(env.cash)
history['cost'].append(info['cost'])
history['pnl'].append(info['pnl'])
state = torch.from_numpy(state).to(device)
full_history[i] = history
return full_history
from scipy.stats import gaussian_kde
```
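`gaussian_kde` (imported above) fits a smooth density estimate to a sample; the plots below evaluate such a density on a fixed grid. A minimal sketch on synthetic data (the normal sample is an assumption standing in for the per-episode PnL volatilities):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic stand-in for a list of per-episode statistics
rng = np.random.default_rng(0)
sample = rng.normal(loc=5.0, scale=1.0, size=1000)

kde = gaussian_kde(sample)          # bandwidth chosen automatically (Scott's rule)
grid = np.linspace(0.0, 10.0, 200)
density = kde(grid)                 # estimated density at each grid point

# The estimated density should peak near the sample mean of ~5.0
print(grid[np.argmax(density)])
```

Plotting `density` against `grid`, as done for the agent and delta-hedging distributions below, gives a smoothed alternative to a histogram.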
### K = 100
```
config['K'] = 100
random.seed(1)
np.random.seed(1)
torch.manual_seed(1)
agent_history = simulate_episodes(config, n = 1000)
random.seed(1)
np.random.seed(1)
torch.manual_seed(1)
delta_history = simulate_episodes(config, n = 1000, kind = 'delta')
agent_pnl_volatility = [np.std(agent_history[i]['pnl']) for i in range(1, len(agent_history) + 1)]
delta_pnl_volatility = [np.std(delta_history[i]['pnl']) for i in range(1, len(delta_history) + 1)]
agent_costs = [sum(agent_history[i]['cost']) for i in range(1, len(agent_history) + 1)]
delta_costs = [sum(delta_history[i]['cost']) for i in range(1, len(delta_history) + 1)]
fig, ax = plt.subplots(figsize = (12, 18), nrows = 2, ncols = 1)
x_pnl_volatility = np.linspace(0, 25, 1000)
x_costs = np.linspace(0, 300, 1000)
delta_pnl_volatility_kernel = gaussian_kde(delta_pnl_volatility)
agent_pnl_volatility_kernel = gaussian_kde(agent_pnl_volatility)
delta_costs_kernel = gaussian_kde(delta_costs)
agent_costs_kernel = gaussian_kde(agent_costs)
y_delta_pnl_volatility = delta_pnl_volatility_kernel(x_pnl_volatility)
y_agent_pnl_volatility = agent_pnl_volatility_kernel(x_pnl_volatility)
y_delta_costs = delta_costs_kernel(x_costs)
y_agent_costs = agent_costs_kernel(x_costs)
ax[0].plot(x_pnl_volatility, y_delta_pnl_volatility, lw = 1.5, label = f'Delta | Mean: {np.mean(delta_pnl_volatility):.2f}, Std: {np.std(delta_pnl_volatility):.2f}', color = 'blue')
ax[0].plot(x_pnl_volatility, y_agent_pnl_volatility, lw = 1.5, label = f'Agent | Mean: {np.mean(agent_pnl_volatility):.2f}, Std: {np.std(agent_pnl_volatility):.2f}', color = 'red')
#ax[0].set_title('Volatility')
ax[0].set_xlabel('Volatility')
ax[0].set_ylabel('Density')
ax[0].legend()
ax[1].plot(x_costs, y_delta_costs, lw = 1.5, label = f'Delta | Mean: {np.mean(delta_costs):.2f}, Std: {np.std(delta_costs):.2f}', color = 'blue')
ax[1].plot(x_costs, y_agent_costs, lw = 1.5, label = f'Agent | Mean: {np.mean(agent_costs):.2f}, Std: {np.std(agent_costs):.2f}', color = 'red')
#ax[1].set_title('Costs')
ax[1].set_xlabel('Cost')
ax[1].set_ylabel('Density')
ax[1].legend()
plt.show()
```
### K = 95
```
config['K'] = 95
random.seed(1)
np.random.seed(1)
torch.manual_seed(1)
agent_history = simulate_episodes(config, n = 1000)
random.seed(1)
np.random.seed(1)
torch.manual_seed(1)
delta_history = simulate_episodes(config, n = 1000, kind = 'delta')
agent_pnl_volatility = [np.std(agent_history[i]['pnl']) for i in range(1, len(agent_history) + 1)]
delta_pnl_volatility = [np.std(delta_history[i]['pnl']) for i in range(1, len(delta_history) + 1)]
agent_costs = [sum(agent_history[i]['cost']) for i in range(1, len(agent_history) + 1)]
delta_costs = [sum(delta_history[i]['cost']) for i in range(1, len(delta_history) + 1)]
fig, ax = plt.subplots(figsize = (12, 18), nrows = 2, ncols = 1)
x_pnl_volatility = np.linspace(0, 25, 1000)
x_costs = np.linspace(0, 300, 1000)
delta_pnl_volatility_kernel = gaussian_kde(delta_pnl_volatility)
agent_pnl_volatility_kernel = gaussian_kde(agent_pnl_volatility)
delta_costs_kernel = gaussian_kde(delta_costs)
agent_costs_kernel = gaussian_kde(agent_costs)
y_delta_pnl_volatility = delta_pnl_volatility_kernel(x_pnl_volatility)
y_agent_pnl_volatility = agent_pnl_volatility_kernel(x_pnl_volatility)
y_delta_costs = delta_costs_kernel(x_costs)
y_agent_costs = agent_costs_kernel(x_costs)
ax[0].plot(x_pnl_volatility, y_delta_pnl_volatility, lw = 1.5, label = f'Delta | Mean: {np.mean(delta_pnl_volatility):.2f}, Std: {np.std(delta_pnl_volatility):.2f}', color = 'blue')
ax[0].plot(x_pnl_volatility, y_agent_pnl_volatility, lw = 1.5, label = f'Agent | Mean: {np.mean(agent_pnl_volatility):.2f}, Std: {np.std(agent_pnl_volatility):.2f}', color = 'red')
#ax[0].set_title('Volatility')
ax[0].set_xlabel('Volatility')
ax[0].set_ylabel('Density')
ax[0].legend()
ax[1].plot(x_costs, y_delta_costs, lw = 1.5, label = f'Delta | Mean: {np.mean(delta_costs):.2f}, Std: {np.std(delta_costs):.2f}', color = 'blue')
ax[1].plot(x_costs, y_agent_costs, lw = 1.5, label = f'Agent | Mean: {np.mean(agent_costs):.2f}, Std: {np.std(agent_costs):.2f}', color = 'red')
#ax[1].set_title('Costs')
ax[1].set_xlabel('Cost')
ax[1].set_ylabel('Density')
ax[1].legend()
plt.show()
```
### K = 105
```
config['K'] = 105
random.seed(1)
np.random.seed(1)
torch.manual_seed(1)
agent_history = simulate_episodes(config, n = 1000)
random.seed(1)
np.random.seed(1)
torch.manual_seed(1)
delta_history = simulate_episodes(config, n = 1000, kind = 'delta')
agent_pnl_volatility = [np.std(agent_history[i]['pnl']) for i in range(1, len(agent_history) + 1)]
delta_pnl_volatility = [np.std(delta_history[i]['pnl']) for i in range(1, len(delta_history) + 1)]
agent_costs = [sum(agent_history[i]['cost']) for i in range(1, len(agent_history) + 1)]
delta_costs = [sum(delta_history[i]['cost']) for i in range(1, len(delta_history) + 1)]
fig, ax = plt.subplots(figsize = (12, 18), nrows = 2, ncols = 1)
x_pnl_volatility = np.linspace(0, 25, 1000)
x_costs = np.linspace(0, 300, 1000)
delta_pnl_volatility_kernel = gaussian_kde(delta_pnl_volatility)
agent_pnl_volatility_kernel = gaussian_kde(agent_pnl_volatility)
delta_costs_kernel = gaussian_kde(delta_costs)
agent_costs_kernel = gaussian_kde(agent_costs)
y_delta_pnl_volatility = delta_pnl_volatility_kernel(x_pnl_volatility)
y_agent_pnl_volatility = agent_pnl_volatility_kernel(x_pnl_volatility)
y_delta_costs = delta_costs_kernel(x_costs)
y_agent_costs = agent_costs_kernel(x_costs)
ax[0].plot(x_pnl_volatility, y_delta_pnl_volatility, lw = 1.5, label = f'Delta | Mean: {np.mean(delta_pnl_volatility):.2f}, Std: {np.std(delta_pnl_volatility):.2f}', color = 'blue')
ax[0].plot(x_pnl_volatility, y_agent_pnl_volatility, lw = 1.5, label = f'Agent | Mean: {np.mean(agent_pnl_volatility):.2f}, Std: {np.std(agent_pnl_volatility):.2f}', color = 'red')
#ax[0].set_title('Volatility')
ax[0].set_xlabel('Volatility')
ax[0].set_ylabel('Density')
ax[0].legend()
ax[1].plot(x_costs, y_delta_costs, lw = 1.5, label = f'Delta | Mean: {np.mean(delta_costs):.2f}, Std: {np.std(delta_costs):.2f}', color = 'blue')
ax[1].plot(x_costs, y_agent_costs, lw = 1.5, label = f'Agent | Mean: {np.mean(agent_costs):.2f}, Std: {np.std(agent_costs):.2f}', color = 'red')
#ax[1].set_title('Costs')
ax[1].set_xlabel('Cost')
ax[1].set_ylabel('Density')
ax[1].legend()
plt.show()
```
# Similar-image detection sample model demo
Demo of a DNN-based image-similarity detection sample model (BaseNet).
```
# load package
import tensorflow as tf
from functools import partial
import itertools
from tensorflow.keras.datasets import mnist
import numpy as np
import cv2
from matplotlib import pyplot as plt
import os.path as osp
from pathlib import Path
from model import create_model
```
## Load the dataset
```
number_of_dataset = 1000
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_test = np.array(x_test) / 255.0  # normalize pixel values to [0, 1]
x_test = x_test[:number_of_dataset]
# make pair dataset
test_dataset = [[x, y] for x, y in zip(x_test, y_test)]
test_dataset = [[x[0][0], x[1][0], 0 if x[0][1] == x[1][1] else 1] for x in
itertools.combinations(test_dataset, 2)]
def display(idx):
img = test_dataset[idx]
f = plt.figure()
f.add_subplot(1,2, 1)
plt.imshow(img[0])
plt.xticks([])
plt.yticks([])
f.add_subplot(1,2, 2)
plt.imshow(img[1])
plt.xticks([])
plt.yticks([])
plt.show(block=True)
print('Same digit' if img[2] == 0 else 'Different digit')
```
## Load the model
```
# model directory
ROOT_DIR = Path.cwd().parent
OUTPUT_DIR = osp.join(ROOT_DIR, 'output')
MODEL_DIR = osp.join(OUTPUT_DIR, 'model_dump')
model = create_model()
model.load_weights(osp.join(MODEL_DIR, 'model_epoch_10.h5'))
model.summary()
```
## Test
### Different digits
```
idx = 2
display(idx)
test_img = test_dataset[idx]
vector_f = model(np.array([test_img[0]], dtype=np.float32))
vector_s = model(np.array([test_img[1]], dtype=np.float32))
distance = tf.reduce_mean(tf.square(vector_f - vector_s))
print(distance)
idx = 20001
display(idx)
test_img = test_dataset[idx]
vector_f = model(np.array([test_img[0]], dtype=np.float32))
vector_s = model(np.array([test_img[1]], dtype=np.float32))
distance = tf.reduce_mean(tf.square(vector_f - vector_s))
print(distance)
```
### Same digits
```
idx = 16
display(idx)
test_img = test_dataset[idx]
vector_f = model(np.array([test_img[0]], dtype=np.float32))
vector_s = model(np.array([test_img[1]], dtype=np.float32))
distance = tf.reduce_mean(tf.square(vector_f - vector_s))
print(distance)
idx = 59
display(idx)
test_img = test_dataset[idx]
vector_f = model(np.array([test_img[0]], dtype=np.float32))
vector_s = model(np.array([test_img[1]], dtype=np.float32))
distance = tf.reduce_mean(tf.square(vector_f - vector_s))
print(distance)
idx = 4033
display(idx)
test_img = test_dataset[idx]
vector_f = model(np.array([test_img[0]], dtype=np.float32))
vector_s = model(np.array([test_img[1]], dtype=np.float32))
distance = tf.reduce_mean(tf.square(vector_f - vector_s))
print(distance)
```
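The cells above print raw embedding distances; turning a distance into a same/different decision only requires a threshold. A minimal sketch, where the 0.5 cutoff is a placeholder assumption rather than a value tuned for this model:

```python
def is_same_digit(distance, threshold=0.5):
    """Small embedding distance -> likely the same digit.

    The 0.5 cutoff is illustrative only; in practice it should be chosen
    by validating on held-out labelled pairs (e.g. maximizing accuracy
    or F1 over candidate thresholds).
    """
    return float(distance) < threshold

print(is_same_digit(0.08))  # True
print(is_same_digit(1.30))  # False
```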
# Cotton Disease Detection Using Deep Learning
```
from tensorflow.compat.v1 import ConfigProto, InteractiveSession
config=ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction=0.5
config.gpu_options.allow_growth=True
session = InteractiveSession(config=config)
from tensorflow.keras.layers import Input,Lambda,Dense,Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.applications import InceptionV3,ResNet152V2
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator,load_img
from tensorflow.keras.models import Sequential
import numpy as np
from glob import glob
IMAGE_SIZE=[224,224]
train_path='./drive/My Drive/data/train'
valid_path='./drive/My Drive/data/val'
resnet=ResNet152V2(input_shape=IMAGE_SIZE+[3],weights='imagenet',include_top=False)
for layers in resnet.layers:
layers.trainable=False
folders=glob('./drive/My Drive/data/train/*')
x=Flatten()(resnet.output)
prediction=Dense(len(folders),activation='softmax')(x)
model=Model(inputs=resnet.input,outputs=prediction)
model.summary()
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
train_datagen=ImageDataGenerator(rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen=ImageDataGenerator(rescale=1./255)
training_set=train_datagen.flow_from_directory('./drive/My Drive/data/train',
target_size=(224,224),
batch_size=32,
class_mode='categorical')
test_set=test_datagen.flow_from_directory('./drive/My Drive/data/val',
target_size=(224,224),
batch_size=32,
class_mode='categorical')
r=model.fit_generator(training_set,
validation_data=test_set,
epochs=20,
steps_per_epoch=len(training_set),
validation_steps=len(test_set))
model.save('resnet.h5')
import matplotlib.pyplot as plt
plt.plot(r.history['accuracy'],label='train_acc')
plt.plot(r.history['val_accuracy'],label='val_acc')
plt.legend()
plt.savefig('resnet_model_acc')  # save before show(), which clears the figure
plt.show()
plt.plot(r.history['loss'],label='train_loss')
plt.plot(r.history['val_loss'],label='val_loss')
plt.legend()
plt.savefig('resnet_model_loss')  # save before show(), which clears the figure
plt.show()
inception=InceptionV3(input_shape=IMAGE_SIZE+[3],weights='imagenet',include_top=False)
for layers in inception.layers:
layers.trainable=False
x=Flatten()(inception.output)
prediction=Dense(len(folders),activation='softmax')(x)
model2=Model(inputs=inception.input,outputs=prediction)
model2.summary()
model2.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
r=model2.fit_generator(training_set,
validation_data=test_set,
epochs=20,
steps_per_epoch=len(training_set),
validation_steps=len(test_set))
model2.save('inception.h5')
plt.plot(r.history['accuracy'],label='train_acc')
plt.plot(r.history['val_accuracy'],label='val_acc')
plt.legend()
plt.savefig('inception_model_acc')  # save before show(), which clears the figure
plt.show()
plt.plot(r.history['loss'],label='train_loss')
plt.plot(r.history['val_loss'],label='val_loss')
plt.legend()
plt.savefig('inception_model_loss')  # save before show(), which clears the figure
plt.show()
import tensorflow
model_load = tensorflow.keras.models.load_model('./drive/My Drive/inception.h5')
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.preprocessing.image import img_to_array
image=load_img('./drive/My Drive/data/test/diseased cotton leaf/dis_leaf (124).jpg',target_size=(224, 224))
x = img_to_array(image)
x = x/255
x = np.expand_dims(x ,axis=0)
preds = model_load.predict(x)
preds=np.argmax(preds, axis=1)
if preds==0:
preds="The leaf is diseased cotton leaf"
elif preds==1:
preds="The leaf is diseased cotton plant"
elif preds==2:
preds="The leaf is fresh cotton leaf"
else:
preds="The leaf is fresh cotton plant"
print(preds)
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Train and evaluate with Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/keras/train_and_evaluate"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/keras/train_and_evaluate.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/keras/train_and_evaluate.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/keras/train_and_evaluate.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This guide covers training, evaluation, and prediction (inference) models in TensorFlow 2.0 in two broad situations:
- When using built-in APIs for training & validation (such as `model.fit()`, `model.evaluate()`, `model.predict()`). This is covered in the section **"Using built-in training & evaluation loops"**.
- When writing custom loops from scratch using eager execution and the `GradientTape` object. This is covered in the section **"Writing your own training & evaluation loops from scratch"**.
In general, whether you are using built-in loops or writing your own, model training & evaluation works in exactly the same way across every kind of Keras model -- Sequential models, models built with the Functional API, and models written from scratch via model subclassing.
This guide doesn't cover distributed training.
## Setup
```
import tensorflow as tf
import numpy as np
```
## Part I: Using built-in training & evaluation loops
When passing data to the built-in training loops of a model, you should either use **Numpy arrays** (if your data is small and fits in memory) or **tf.data Dataset** objects. In the next few paragraphs, we'll use the MNIST dataset as Numpy arrays, in order to demonstrate how to use optimizers, losses, and metrics.
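For the `tf.data` path mentioned above, NumPy arrays can be wrapped into a shuffled, batched `Dataset`; a minimal sketch (the array shapes are illustrative, chosen to match the flattened MNIST data used in this section):

```python
import numpy as np
import tensorflow as tf

# Illustrative arrays shaped like flattened MNIST inputs and labels
features = np.random.rand(100, 784).astype('float32')
labels = np.random.randint(0, 10, size=(100,)).astype('float32')

# Wrap the arrays, shuffle, and batch; the resulting Dataset can be
# passed straight to model.fit(dataset, epochs=...)
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
dataset = dataset.shuffle(buffer_size=100).batch(32)

for batch_x, batch_y in dataset.take(1):
    print(batch_x.shape)  # (32, 784)
```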
### API overview: a first end-to-end example
Let's consider the following model (here, we build it with the Functional API, but it could be a Sequential model or a subclassed model as well):
```
from tensorflow import keras
from tensorflow.keras import layers
inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, name='predictions')(x)
model = keras.Model(inputs=inputs, outputs=outputs)
```
Here's what the typical end-to-end workflow looks like, consisting of training, validation on a holdout set generated from the original training data, and finally evaluation on the test data:
Load a toy dataset for the sake of this example
```
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Preprocess the data (these are Numpy arrays)
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
y_train = y_train.astype('float32')
y_test = y_test.astype('float32')
# Reserve 10,000 samples for validation
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
```
Specify the training configuration (optimizer, loss, metrics)
```
model.compile(optimizer=keras.optimizers.RMSprop(), # Optimizer
# Loss function to minimize
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
# List of metrics to monitor
metrics=['sparse_categorical_accuracy'])
```
Train the model by slicing the data into "batches"
of size "batch_size", and repeatedly iterating over
the entire dataset for a given number of "epochs"
```
print('# Fit model on training data')
history = model.fit(x_train, y_train,
batch_size=64,
epochs=3,
# We pass some validation for
# monitoring validation loss and metrics
# at the end of each epoch
validation_data=(x_val, y_val))
print('\nhistory dict:', history.history)
```
The returned "history" object holds a record
of the loss values and metric values during training
```
# Evaluate the model on the test data using `evaluate`
print('\n# Evaluate on test data')
results = model.evaluate(x_test, y_test, batch_size=128)
print('test loss, test acc:', results)
# Generate predictions (probabilities -- the output of the last layer)
# on new data using `predict`
print('\n# Generate predictions for 3 samples')
predictions = model.predict(x_test[:3])
print('predictions shape:', predictions.shape)
```
### Specifying a loss, metrics, and an optimizer
To train a model with `fit`, you need to specify a loss function, an optimizer, and optionally, some metrics to monitor.
You pass these to the model as arguments to the `compile()` method:
```
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.sparse_categorical_accuracy])
```
The `metrics` argument should be a list -- your model can have any number of metrics.
If your model has multiple outputs, you can specify different losses and metrics for each output,
and you can modulate the contribution of each output to the total loss of the model. You will find more details about this in the section "**Passing data to multi-input, multi-output models**".
Note that if you're satisfied with the default settings, in many cases the optimizer, loss, and metrics can be specified via string identifiers as a shortcut:
```
model.compile(optimizer='rmsprop',
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['sparse_categorical_accuracy'])
```
For later reuse, let's put our model definition and compile step in functions; we will call them several times across different examples in this guide.
```
def get_uncompiled_model():
inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, name='predictions')(x)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
def get_compiled_model():
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['sparse_categorical_accuracy'])
return model
```
#### Many built-in optimizers, losses, and metrics are available
In general, you won't have to create your own losses, metrics, or optimizers from scratch, because what you need is likely already part of the Keras API:
Optimizers:
- `SGD()` (with or without momentum)
- `RMSprop()`
- `Adam()`
- etc.
Losses:
- `MeanSquaredError()`
- `KLDivergence()`
- `CosineSimilarity()`
- etc.
Metrics:
- `AUC()`
- `Precision()`
- `Recall()`
- etc.
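As a quick sanity check of the built-ins above, loss and metric objects can also be used directly, outside of `compile()`/`fit()` (values below are made up for illustration):

```python
from tensorflow import keras

# Loss objects are callable on (y_true, y_pred)
mse = keras.losses.MeanSquaredError()
print(float(mse([0., 0.], [1., 3.])))  # mean of (1, 9) = 5.0

# Metric objects accumulate state across update_state() calls
auc = keras.metrics.AUC()
auc.update_state([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
print(float(auc.result()))  # approximately 0.75
```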
#### Custom losses
There are two ways to provide custom losses with Keras. The first example creates a function that accepts inputs `y_true` and `y_pred`. The following example shows a loss function that computes the average absolute error between the real data and the predictions:
```
def basic_loss_function(y_true, y_pred):
return tf.math.reduce_mean(tf.abs(y_true - y_pred))
model.compile(optimizer=keras.optimizers.Adam(),
loss=basic_loss_function)
model.fit(x_train, y_train, batch_size=64, epochs=3)
```
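As a quick sanity check, the same computation can be reproduced with plain NumPy (a framework-free sketch, not the Keras code path):

```python
import numpy as np

def basic_loss_np(y_true, y_pred):
    # Mean absolute error, matching the TF version above.
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

print(basic_loss_np([1.0, 2.0], [1.5, 1.5]))  # 0.5
```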
If you need a loss function that takes in parameters beside `y_true` and `y_pred`, you can subclass the `tf.keras.losses.Loss` class and implement the following two methods:
* `__init__(self)`: accept parameters to pass during the call of your loss function
* `call(self, y_true, y_pred)`: use the targets (`y_true`) and the model predictions (`y_pred`) to compute the model's loss
Parameters passed into `__init__()` can be used during `call()` when calculating loss.
The following example shows how to implement a `WeightedBinaryCrossEntropy` loss that calculates a `BinaryCrossEntropy` loss, where the loss of a certain class or the whole function can be scaled by a scalar.
```
class WeightedBinaryCrossEntropy(keras.losses.Loss):
"""
Args:
pos_weight: Scalar to affect the positive labels of the loss function.
weight: Scalar to affect the entirety of the loss function.
from_logits: Whether to compute loss from logits or the probability.
reduction: Type of tf.keras.losses.Reduction to apply to loss.
name: Name of the loss function.
"""
def __init__(self, pos_weight, weight, from_logits=False,
reduction=keras.losses.Reduction.AUTO,
name='weighted_binary_crossentropy'):
super().__init__(reduction=reduction, name=name)
self.pos_weight = pos_weight
self.weight = weight
self.from_logits = from_logits
def call(self, y_true, y_pred):
ce = tf.losses.binary_crossentropy(
y_true, y_pred, from_logits=self.from_logits)[:,None]
ce = self.weight * (ce*(1-y_true) + self.pos_weight*ce*(y_true))
return ce
```
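To see what the weighting does, the same math can be checked framework-free with NumPy (a sketch that assumes probabilities rather than logits, i.e. `from_logits=False`):

```python
import numpy as np

def weighted_bce_np(y_true, y_pred, pos_weight, weight, eps=1e-7):
    # Plain per-sample binary cross-entropy on probabilities.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    ce = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    # Positive-label terms are scaled by pos_weight; everything by weight.
    return weight * (ce * (1 - y_true) + pos_weight * ce * y_true)

# With pos_weight=0.5 and weight=2, a positive sample at p=0.5 keeps its
# unweighted loss (2 * 0.5 * ln 2), while a negative one at p=0.5 doubles.
```

Note that this sketch returns the per-sample values unreduced; the Keras class above delegates the reduction (e.g. averaging) to `keras.losses.Reduction`.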
This is a binary loss but the dataset has 10 classes, so apply the loss as if the model were making an independent binary prediction for each class. To do that, start by creating one-hot vectors from the class indices:
```
one_hot_y_train = tf.one_hot(y_train.astype(np.int32), depth=10)
```
Now use those one-hots, and the custom loss to train a model:
```
model = get_uncompiled_model()
model.compile(
optimizer=keras.optimizers.Adam(),
loss=WeightedBinaryCrossEntropy(
pos_weight=0.5, weight = 2, from_logits=True)
)
model.fit(x_train, one_hot_y_train, batch_size=64, epochs=5)
```
#### Custom metrics
If you need a metric that isn't part of the API, you can easily create custom metrics by subclassing the `Metric` class. You will need to implement 4 methods:
- `__init__(self)`, in which you will create state variables for your metric.
- `update_state(self, y_true, y_pred, sample_weight=None)`, which uses the targets `y_true` and the model predictions `y_pred` to update the state variables.
- `result(self)`, which uses the state variables to compute the final results.
- `reset_states(self)`, which reinitializes the state of the metric.
State update and results computation are kept separate (in `update_state()` and `result()`, respectively) because in some cases, results computation might be very expensive, and would only be done periodically.
Here's a simple example showing how to implement a `CategoricalTruePositives` metric that counts how many samples were correctly classified as belonging to a given class:
```
class CategoricalTruePositives(keras.metrics.Metric):
def __init__(self, name='categorical_true_positives', **kwargs):
super(CategoricalTruePositives, self).__init__(name=name, **kwargs)
self.true_positives = self.add_weight(name='tp', initializer='zeros')
def update_state(self, y_true, y_pred, sample_weight=None):
y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))
values = tf.cast(y_true, 'int32') == tf.cast(y_pred, 'int32')
values = tf.cast(values, 'float32')
if sample_weight is not None:
sample_weight = tf.cast(sample_weight, 'float32')
values = tf.multiply(values, sample_weight)
self.true_positives.assign_add(tf.reduce_sum(values))
def result(self):
return self.true_positives
def reset_states(self):
# The state of the metric will be reset at the start of each epoch.
self.true_positives.assign(0.)
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[CategoricalTruePositives()])
model.fit(x_train, y_train,
batch_size=64,
epochs=3)
```
#### Handling losses and metrics that don't fit the standard signature
The overwhelming majority of losses and metrics can be computed from `y_true` and `y_pred`, where `y_pred` is an output of your model. But not all of them. For instance, a regularization loss may only require the activation of a layer (there are no targets in this case), and this activation may not be a model output.
In such cases, you can call `self.add_loss(loss_value)` from inside the `call` method of a custom layer. Here's a simple example that adds activity regularization (note that activity regularization is built-in in all Keras layers -- this layer is just for the sake of providing a concrete example):
```
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(tf.reduce_sum(inputs) * 0.1)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, name='predictions')(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True))
# The displayed loss will be much higher than before
# due to the regularization component.
model.fit(x_train, y_train,
batch_size=64,
epochs=1)
```
You can do the same for logging metric values:
```
class MetricLoggingLayer(layers.Layer):
def call(self, inputs):
# The `aggregation` argument defines
# how to aggregate the per-batch values
# over each epoch:
# in this case we simply average them.
self.add_metric(keras.backend.std(inputs),
name='std_of_activation',
aggregation='mean')
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
# Insert std logging as a layer.
x = MetricLoggingLayer()(x)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, name='predictions')(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.fit(x_train, y_train,
batch_size=64,
epochs=1)
```
In the [Functional API](functional.ipynb), you can also call `model.add_loss(loss_tensor)`, or `model.add_metric(metric_tensor, name, aggregation)`.
Here's a simple example:
```
inputs = keras.Input(shape=(784,), name='digits')
x1 = layers.Dense(64, activation='relu', name='dense_1')(inputs)
x2 = layers.Dense(64, activation='relu', name='dense_2')(x1)
outputs = layers.Dense(10, name='predictions')(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
model.add_loss(tf.reduce_sum(x1) * 0.1)
model.add_metric(keras.backend.std(x1),
name='std_of_activation',
aggregation='mean')
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.fit(x_train, y_train,
batch_size=64,
epochs=1)
```
#### Automatically setting apart a validation holdout set
In the first end-to-end example you saw, we used the `validation_data` argument to pass a tuple
of Numpy arrays `(x_val, y_val)` to the model for evaluating a validation loss and validation metrics at the end of each epoch.
Here's another option: the argument `validation_split` allows you to automatically reserve part of your training data for validation. The argument value represents the fraction of the data to be reserved for validation, so it should be set to a number higher than 0 and lower than 1. For instance, `validation_split=0.2` means "use 20% of the data for validation", and `validation_split=0.6` means "use 60% of the data for validation".
The way the validation is computed is by *taking the last x% samples of the arrays received by the `fit` call, before any shuffling*.
You can only use `validation_split` when training with Numpy data.
```
model = get_compiled_model()
model.fit(x_train, y_train, batch_size=64, validation_split=0.2, epochs=1, steps_per_epoch=1)
```
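The holdout itself is simple index arithmetic: the *last* fraction of the arrays, in order, becomes the validation set. Here is a minimal NumPy sketch of that rule (illustrative only; the actual Keras internals also handle rounding and multiple arrays):

```python
import numpy as np

def split_validation(x, y, validation_split):
    # Keras reserves the last fraction of samples, before any shuffling.
    n_val = int(len(x) * validation_split)
    return (x[:-n_val], y[:-n_val]), (x[-n_val:], y[-n_val:])

x, y = np.arange(10), np.arange(10)
(train_x, train_y), (val_x, val_y) = split_validation(x, y, 0.2)
# train_x keeps the first 8 samples; val_x holds the last 2 (8 and 9).
```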
### Training & evaluation from tf.data Datasets
In the past few paragraphs, you've seen how to handle losses, metrics, and optimizers, and you've seen how to use the `validation_data` and `validation_split` arguments in `fit`, when your data is passed as Numpy arrays.
Let's now take a look at the case where your data comes in the form of a tf.data Dataset.
The tf.data API is a set of utilities in TensorFlow 2.0 for loading and preprocessing data in a way that's fast and scalable.
For a complete guide about creating Datasets, see [the tf.data documentation](https://www.tensorflow.org/guide/data).
You can pass a Dataset instance directly to the methods `fit()`, `evaluate()`, and `predict()`:
```
model = get_compiled_model()
# First, let's create a training Dataset instance.
# For the sake of our example, we'll use the same MNIST data as before.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Now we get a test dataset.
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_dataset = test_dataset.batch(64)
# Since the dataset already takes care of batching,
# we don't pass a `batch_size` argument.
model.fit(train_dataset, epochs=3)
# You can also evaluate or predict on a dataset.
print('\n# Evaluate')
result = model.evaluate(test_dataset)
dict(zip(model.metrics_names, result))
```
Note that the Dataset is reset at the end of each epoch, so it can be reused for the next epoch.
If you want to run training only on a specific number of batches from this Dataset, you can pass the `steps_per_epoch` argument, which specifies how many training steps the model should run using this Dataset before moving on to the next epoch.
If you do this, the dataset is not reset at the end of each epoch; instead, we just keep drawing batches. The dataset will eventually run out of data (unless it is an infinitely-looping dataset).
```
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64).repeat()
# Only use 100 batches per epoch (that's 64 * 100 samples)
model.fit(train_dataset, steps_per_epoch=100, epochs=3)
```
#### Using a validation dataset
You can pass a Dataset instance as the `validation_data` argument in `fit`:
```
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(train_dataset, epochs=3, validation_data=val_dataset)
```
At the end of each epoch, the model will iterate over the validation Dataset and compute the validation loss and validation metrics.
If you want to run validation only on a specific number of batches from this Dataset, you can pass the `validation_steps` argument, which specifies how many validation steps the model should run with the validation Dataset before interrupting validation and moving on to the next epoch:
```
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(train_dataset, epochs=3,
# Only run validation using the first 10 batches of the dataset
# using the `validation_steps` argument
validation_data=val_dataset, validation_steps=10)
```
Note that the validation Dataset will be reset after each use (so that you will always be evaluating on the same samples from epoch to epoch).
The argument `validation_split` (generating a holdout set from the training data) is not supported when training from Dataset objects, since this feature requires the ability to index the samples of the dataset, which is not possible in general with the Dataset API.
### Other input formats supported
Besides Numpy arrays and TensorFlow Datasets, it's possible to train a Keras model using Pandas dataframes, or from Python generators that yield batches.
In general, we recommend that you use Numpy input data if your data is small and fits in memory, and Datasets otherwise.
### Using sample weighting and class weighting
Besides input data and target data, it is possible to pass sample weights or class weights to a model when using `fit`:
- When training from Numpy data: via the `sample_weight` and `class_weight` arguments.
- When training from Datasets: by having the Dataset return a tuple `(input_batch, target_batch, sample_weight_batch)` .
A "sample weights" array is an array of numbers that specify how much weight each sample in a batch should have in computing the total loss. It is commonly used in imbalanced classification problems (the idea being to give more weight to rarely-seen classes). When the weights used are ones and zeros, the array can be used as a *mask* for the loss function (entirely discarding the contribution of certain samples to the total loss).
A "class weights" dict is a more specific instance of the same concept: it maps class indices to the sample weight that should be used for samples belonging to this class. For instance, if class "0" is twice less represented than class "1" in your data, you could use `class_weight={0: 1., 1: 0.5}`.
Here's a Numpy example where we use class weights or sample weights to give more importance to the correct classification of class #5 (which is the digit "5" in the MNIST dataset).
```
import numpy as np
class_weight = {0: 1., 1: 1., 2: 1., 3: 1., 4: 1.,
# Set weight "2" for class "5",
# making this class 2x more important
5: 2.,
6: 1., 7: 1., 8: 1., 9: 1.}
print('Fit with class weight')
model.fit(x_train, y_train,
class_weight=class_weight,
batch_size=64,
epochs=4)
# Here's the same example using `sample_weight` instead:
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.
print('\nFit with sample weight')
model = get_compiled_model()
model.fit(x_train, y_train,
sample_weight=sample_weight,
batch_size=64,
epochs=4)
```
Here's a matching Dataset example:
```
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.
# Create a Dataset that includes sample weights
# (3rd element in the return tuple).
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train, sample_weight))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model = get_compiled_model()
model.fit(train_dataset, epochs=3)
```
### Passing data to multi-input, multi-output models
In the previous examples, we were considering a model with a single input (a tensor of shape `(784,)`) and a single output (a prediction tensor of shape `(10,)`). But what about models that have multiple inputs or outputs?
Consider the following model, which has an image input of shape `(32, 32, 3)` (that's `(height, width, channels)`) and a timeseries input of shape `(None, 10)` (that's `(timesteps, features)`). Our model will have two outputs computed from the combination of these inputs: a "score" (of shape `(1,)`) and a probability distribution over five classes (of shape `(5,)`).
```
from tensorflow import keras
from tensorflow.keras import layers
image_input = keras.Input(shape=(32, 32, 3), name='img_input')
timeseries_input = keras.Input(shape=(None, 10), name='ts_input')
x1 = layers.Conv2D(3, 3)(image_input)
x1 = layers.GlobalMaxPooling2D()(x1)
x2 = layers.Conv1D(3, 3)(timeseries_input)
x2 = layers.GlobalMaxPooling1D()(x2)
x = layers.concatenate([x1, x2])
score_output = layers.Dense(1, name='score_output')(x)
class_output = layers.Dense(5, name='class_output')(x)
model = keras.Model(inputs=[image_input, timeseries_input],
outputs=[score_output, class_output])
```
Let's plot this model, so you can clearly see what we're doing here (note that the shapes shown in the plot are batch shapes, rather than per-sample shapes).
```
keras.utils.plot_model(model, 'multi_input_and_output_model.png', show_shapes=True)
```
At compilation time, we can specify different losses to different outputs, by passing the loss functions as a list:
```
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(),
keras.losses.CategoricalCrossentropy(from_logits=True)])
```
If we only passed a single loss function to the model, the same loss function would be applied to every output, which is not appropriate here.
Likewise for metrics:
```
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(),
keras.losses.CategoricalCrossentropy(from_logits=True)],
metrics=[[keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError()],
[keras.metrics.CategoricalAccuracy()]])
```
Since we gave names to our output layers, we could also specify per-output losses and metrics via a dict:
```
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={'score_output': keras.losses.MeanSquaredError(),
'class_output': keras.losses.CategoricalCrossentropy(from_logits=True)},
metrics={'score_output': [keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError()],
'class_output': [keras.metrics.CategoricalAccuracy()]})
```
We recommend the use of explicit names and dicts if you have more than 2 outputs.
It's possible to give different weights to different output-specific losses (for instance, one might wish to privilege the "score" loss in our example, by giving it 2x the importance of the class loss), using the `loss_weights` argument:
```
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={'score_output': keras.losses.MeanSquaredError(),
'class_output': keras.losses.CategoricalCrossentropy(from_logits=True)},
metrics={'score_output': [keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError()],
'class_output': [keras.metrics.CategoricalAccuracy()]},
loss_weights={'score_output': 2., 'class_output': 1.})
```
You could also choose not to compute a loss for certain outputs, if these outputs are meant for prediction but not for training:
```
# List loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[None, keras.losses.CategoricalCrossentropy(from_logits=True)])
# Or dict loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={'class_output':keras.losses.CategoricalCrossentropy(from_logits=True)})
```
Passing data to a multi-input or multi-output model in `fit` works in a similar way as specifying a loss function in `compile`:
you can pass *lists of Numpy arrays (with 1:1 mapping to the outputs that received a loss function)* or *dicts mapping output names to Numpy arrays of training data*.
```
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(),
keras.losses.CategoricalCrossentropy(from_logits=True)])
# Generate dummy Numpy data
img_data = np.random.random_sample(size=(100, 32, 32, 3))
ts_data = np.random.random_sample(size=(100, 20, 10))
score_targets = np.random.random_sample(size=(100, 1))
class_targets = np.random.random_sample(size=(100, 5))
# Fit on lists
model.fit([img_data, ts_data], [score_targets, class_targets],
batch_size=32,
epochs=3)
# Alternatively, fit on dicts
model.fit({'img_input': img_data, 'ts_input': ts_data},
{'score_output': score_targets, 'class_output': class_targets},
batch_size=32,
epochs=3)
```
Here's the Dataset use case: similarly to what we did for Numpy arrays, the Dataset should return a tuple of dicts.
```
train_dataset = tf.data.Dataset.from_tensor_slices(
({'img_input': img_data, 'ts_input': ts_data},
{'score_output': score_targets, 'class_output': class_targets}))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model.fit(train_dataset, epochs=3)
```
### Using callbacks
Callbacks in Keras are objects that are called at different points during training (at the start of an epoch, at the end of a batch, at the end of an epoch, etc.) and which can be used to implement behaviors such as:
- Doing validation at different points during training (beyond the built-in per-epoch validation)
- Checkpointing the model at regular intervals or when it exceeds a certain accuracy threshold
- Changing the learning rate of the model when training seems to be plateauing
- Doing fine-tuning of the top layers when training seems to be plateauing
- Sending email or instant message notifications when training ends or when a certain performance threshold is exceeded
- Etc.
Callbacks can be passed as a list to your call to `fit`:
```
model = get_compiled_model()
callbacks = [
keras.callbacks.EarlyStopping(
# Stop training when `val_loss` is no longer improving
monitor='val_loss',
# "no longer improving" being defined as "no better than 1e-2 less"
min_delta=1e-2,
# "no longer improving" being further defined as "for at least 2 epochs"
patience=2,
verbose=1)
]
model.fit(x_train, y_train,
epochs=20,
batch_size=64,
callbacks=callbacks,
validation_split=0.2)
```
#### Many built-in callbacks are available
- `ModelCheckpoint`: Periodically save the model.
- `EarlyStopping`: Stop training when validation metrics have stopped improving.
- `TensorBoard`: periodically write model logs that can be visualized in TensorBoard (more details in the section "Visualization").
- `CSVLogger`: streams loss and metrics data to a CSV file.
- etc.
#### Writing your own callback
You can create a custom callback by extending the base class `keras.callbacks.Callback`. A callback has access to its associated model through the class property `self.model`.
Here's a simple example saving a list of per-batch loss values during training:
```python
class LossHistory(keras.callbacks.Callback):
def on_train_begin(self, logs):
self.losses = []
def on_batch_end(self, batch, logs):
self.losses.append(logs.get('loss'))
```
### Checkpointing models
When you're training a model on relatively large datasets, it's crucial to save checkpoints of your model at frequent intervals.
The easiest way to achieve this is with the `ModelCheckpoint` callback:
```
model = get_compiled_model()
callbacks = [
keras.callbacks.ModelCheckpoint(
filepath='mymodel_{epoch}',
# Path where to save the model
# The two parameters below mean that we will overwrite
# the current checkpoint if and only if
# the `val_loss` score has improved.
save_best_only=True,
monitor='val_loss',
verbose=1)
]
model.fit(x_train, y_train,
epochs=3,
batch_size=64,
callbacks=callbacks,
validation_split=0.2)
```
You can also write your own callback for saving and restoring models.
For a complete guide on serialization and saving, see [Guide to Saving and Serializing Models](./save_and_serialize.ipynb).
### Using learning rate schedules
A common pattern when training deep learning models is to gradually reduce the learning rate as training progresses. This is generally known as "learning rate decay".
The learning rate decay schedule could be static (fixed in advance, as a function of the current epoch or the current batch index), or dynamic (responding to the current behavior of the model, in particular the validation loss).
#### Passing a schedule to an optimizer
You can easily use a static learning rate decay schedule by passing a schedule object as the `learning_rate` argument in your optimizer:
```
initial_learning_rate = 0.1
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate,
decay_steps=100000,
decay_rate=0.96,
staircase=True)
optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule)
```
Several built-in schedules are available: `ExponentialDecay`, `PiecewiseConstantDecay`, `PolynomialDecay`, and `InverseTimeDecay`.
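For intuition, the staircase variant of `ExponentialDecay` follows a simple closed form; here is a plain-Python sketch of that formula (not the Keras implementation):

```python
def exponential_decay(step, initial_lr=0.1, decay_steps=100000,
                      decay_rate=0.96, staircase=True):
    # With staircase=True, integer division makes the rate piecewise-constant.
    exponent = step // decay_steps if staircase else step / decay_steps
    return initial_lr * decay_rate ** exponent

# The rate stays at 0.1 for the first 100000 steps,
# then drops to 0.1 * 0.96 = 0.096, and so on.
```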
#### Using callbacks to implement a dynamic learning rate schedule
A dynamic learning rate schedule (for instance, decreasing the learning rate when the validation loss is no longer improving) cannot be achieved with these schedule objects since the optimizer does not have access to validation metrics.
However, callbacks do have access to all metrics, including validation metrics! You can thus achieve this pattern by using a callback that modifies the current learning rate on the optimizer. In fact, this is even built-in as the `ReduceLROnPlateau` callback.
### Visualizing loss and metrics during training
The best way to keep an eye on your model during training is to use [TensorBoard](https://www.tensorflow.org/tensorboard), a browser-based application that you can run locally and that provides you with:
- Live plots of the loss and metrics for training and evaluation
- (optionally) Visualizations of the histograms of your layer activations
- (optionally) 3D visualizations of the embedding spaces learned by your `Embedding` layers
If you have installed TensorFlow with pip, you should be able to launch TensorBoard from the command line:
```
tensorboard --logdir=/full_path_to_your_logs
```
#### Using the TensorBoard callback
The easiest way to use TensorBoard with a Keras model and the `fit` method is the `TensorBoard` callback.
In the simplest case, just specify where you want the callback to write logs, and you're good to go:
```python
tensorboard_cbk = keras.callbacks.TensorBoard(log_dir='/full_path_to_your_logs')
model.fit(dataset, epochs=10, callbacks=[tensorboard_cbk])
```
The `TensorBoard` callback has many useful options, including whether to log embeddings, histograms, and how often to write logs:
```python
keras.callbacks.TensorBoard(
log_dir='/full_path_to_your_logs',
histogram_freq=0, # How often to log histogram visualizations
embeddings_freq=0, # How often to log embedding visualizations
update_freq='epoch') # How often to write logs (default: once per epoch)
```
## Part II: Writing your own training & evaluation loops from scratch
If you want lower-level control over your training & evaluation loops than what `fit()` and `evaluate()` provide, you should write your own. It's actually pretty simple! But you should be prepared to do a lot more debugging on your own.
### Using the GradientTape: a first end-to-end example
Calling a model inside a `GradientTape` scope enables you to retrieve the gradients of the trainable weights of the layer with respect to a loss value. Using an optimizer instance, you can use these gradients to update these variables (which you can retrieve using `model.trainable_weights`).
Let's reuse our initial MNIST model from Part I, and let's train it using mini-batch gradient descent with a custom training loop.
```
# Get the model.
inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, name='predictions')(x)
model = keras.Model(inputs=inputs, outputs=outputs)
# Instantiate an optimizer.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Prepare the training dataset.
batch_size = 64
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
```
Run a training loop for a few epochs:
```
epochs = 3
for epoch in range(epochs):
print('Start of epoch %d' % (epoch,))
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
# Open a GradientTape to record the operations run
# during the forward pass, which enables autodifferentiation.
with tf.GradientTape() as tape:
# Run the forward pass of the layer.
# The operations that the layer applies
# to its inputs are going to be recorded
# on the GradientTape.
logits = model(x_batch_train, training=True) # Logits for this minibatch
# Compute the loss value for this minibatch.
loss_value = loss_fn(y_batch_train, logits)
# Use the gradient tape to automatically retrieve
# the gradients of the trainable variables with respect to the loss.
grads = tape.gradient(loss_value, model.trainable_weights)
# Run one step of gradient descent by updating
# the value of the variables to minimize the loss.
optimizer.apply_gradients(zip(grads, model.trainable_weights))
# Log every 200 batches.
if step % 200 == 0:
print('Training loss (for one batch) at step %s: %s' % (step, float(loss_value)))
print('Seen so far: %s samples' % ((step + 1) * 64))
```
### Low-level handling of metrics
Let's add metrics to the mix. You can readily reuse the built-in metrics (or custom ones you wrote) in such training loops written from scratch. Here's the flow:
- Instantiate the metric at the start of the loop
- Call `metric.update_state()` after each batch
- Call `metric.result()` when you need to display the current value of the metric
- Call `metric.reset_states()` when you need to clear the state of the metric (typically at the end of an epoch)
Let's use this knowledge to compute `SparseCategoricalAccuracy` on validation data at the end of each epoch:
```
# Get model
inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, name='predictions')(x)
model = keras.Model(inputs=inputs, outputs=outputs)
# Instantiate an optimizer to train the model.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Prepare the metrics.
train_acc_metric = keras.metrics.SparseCategoricalAccuracy()
val_acc_metric = keras.metrics.SparseCategoricalAccuracy()
# Prepare the training dataset.
batch_size = 64
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
# Prepare the validation dataset.
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
```
Run a training loop for a few epochs:
```
epochs = 3
for epoch in range(epochs):
print('Start of epoch %d' % (epoch,))
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
with tf.GradientTape() as tape:
logits = model(x_batch_train)
loss_value = loss_fn(y_batch_train, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
# Update training metric.
train_acc_metric(y_batch_train, logits)
# Log every 200 batches.
if step % 200 == 0:
print('Training loss (for one batch) at step %s: %s' % (step, float(loss_value)))
print('Seen so far: %s samples' % ((step + 1) * 64))
# Display metrics at the end of each epoch.
train_acc = train_acc_metric.result()
print('Training acc over epoch: %s' % (float(train_acc),))
# Reset training metrics at the end of each epoch
train_acc_metric.reset_states()
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
val_logits = model(x_batch_val)
# Update val metrics
val_acc_metric(y_batch_val, val_logits)
val_acc = val_acc_metric.result()
val_acc_metric.reset_states()
print('Validation acc: %s' % (float(val_acc),))
```
### Low-level handling of extra losses
You saw in the previous section that regularization losses can be added by a layer by calling `self.add_loss(value)` in its `call` method.
In the general case, you will want to take these losses into account in your training loops (unless you've written the model yourself and you already know that it creates no such losses).
Recall this example from the previous section, featuring a layer that creates a regularization loss:
```
class ActivityRegularizationLayer(layers.Layer):
    def call(self, inputs):
        self.add_loss(1e-2 * tf.reduce_sum(inputs))
        return inputs

inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, name='predictions')(x)
model = keras.Model(inputs=inputs, outputs=outputs)
```
When you call a model, like this:
```python
logits = model(x_train)
```
the losses it creates during the forward pass are added to the `model.losses` attribute:
```
logits = model(x_train[:64])
print(model.losses)
```
The tracked losses are first cleared at the start of the model `__call__`, so you will only see the losses created during this one forward pass. For instance, calling the model repeatedly and then querying `losses` only displays the latest losses, created during the last call:
```
logits = model(x_train[:64])
logits = model(x_train[64: 128])
logits = model(x_train[128: 192])
print(model.losses)
```
To take these losses into account during training, modify your training loop to add `sum(model.losses)` to your total loss:
```
optimizer = keras.optimizers.SGD(learning_rate=1e-3)

epochs = 3
for epoch in range(epochs):
    print('Start of epoch %d' % (epoch,))

    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            logits = model(x_batch_train)
            loss_value = loss_fn(y_batch_train, logits)
            # Add extra losses created during this forward pass:
            loss_value += sum(model.losses)
        grads = tape.gradient(loss_value, model.trainable_weights)
        optimizer.apply_gradients(zip(grads, model.trainable_weights))

        # Log every 200 batches.
        if step % 200 == 0:
            print('Training loss (for one batch) at step %s: %s' % (step, float(loss_value)))
            print('Seen so far: %s samples' % ((step + 1) * 64))
```
That was the last piece of the puzzle! You've reached the end of this guide.
Now you know everything there is to know about using built-in training loops and writing your own from scratch.
# Monitor a Model
When you've deployed a model into production as a service, you'll want to monitor it to track usage and explore the requests it processes. You can use Azure Application Insights to monitor activity for a model service endpoint.
## Install the Azure Machine Learning SDK
The Azure Machine Learning SDK is updated frequently. Run the following cell to upgrade to the latest release.
```
!pip install --upgrade azureml-sdk
```
## Connect to your workspace
With the latest version of the SDK installed, now you're ready to connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
```
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to work with', ws.name)
```
## Prepare a model for deployment
Now we need a model to deploy. Run the code below to:
1. Create and register a dataset.
2. Train a model using the dataset.
3. Register the model.
```
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score, roc_curve
from azureml.core import Dataset

# Upload data files to the default datastore
default_ds = ws.get_default_datastore()
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'],
                        target_path='diabetes-data/',
                        overwrite=True,
                        show_progress=True)

# Create a tabular dataset from the path on the datastore
print('Creating dataset...')
data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))

# Register the tabular dataset
print('Registering dataset...')
try:
    data_set = data_set.register(workspace=ws,
                                 name='diabetes dataset',
                                 description='diabetes data',
                                 tags={'format': 'CSV'},
                                 create_new_version=True)
except Exception as ex:
    print(ex)

# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name='mslearn-train-diabetes')
run = experiment.start_logging()
print("Starting experiment:", experiment.name)

# Load the diabetes dataset
print("Loading Data...")
diabetes = data_set.to_pandas_dataframe()

# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values

# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)

# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)

# Calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', float(acc))

# Calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test, y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', float(auc))

# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name='outputs/' + model_file, path_or_stream='./' + model_file)

# Complete the run
run.complete()

# Register the model
print('Registering model...')
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
                   tags={'Training context': 'Inline Training'},
                   properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})

# Get the registered model
model = ws.models['diabetes_model']

print('Model trained and registered.')
```
## Deploy a model as a web service
Now you're ready to deploy the registered model as a web service.
First, create a folder for the deployment configuration files.
```
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set path for scoring script
script_file = os.path.join(experiment_folder,"score_diabetes.py")
```
Now you need an entry script that the service will use to score new data.
```
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model

# Called when the service is loaded
def init():
    global model
    # Get the path to the deployed model file and load it
    model_path = Model.get_model_path('diabetes_model')
    model = joblib.load(model_path)

# Called when a request is received
def run(raw_data):
    # Get the input data as a numpy array
    data = json.loads(raw_data)['data']
    np_data = np.array(data)
    # Get a prediction from the model
    predictions = model.predict(np_data)
    # Print the data and predictions (so they'll be logged!)
    log_text = 'Data:' + str(data) + ' - Predictions:' + str(predictions)
    print(log_text)
    # Get the corresponding class name for each prediction (0 or 1)
    classnames = ['not-diabetic', 'diabetic']
    predicted_classes = []
    for prediction in predictions:
        predicted_classes.append(classnames[prediction])
    # Return the predictions as JSON
    return json.dumps(predicted_classes)
```
You'll also need a Conda configuration file for the service environment.
```
from azureml.core.conda_dependencies import CondaDependencies

# Add the dependencies for our model (AzureML defaults is already included)
myenv = CondaDependencies()
myenv.add_conda_package("scikit-learn")

# Save the environment config as a .yml file
env_file = folder_name + "/diabetes_env.yml"
with open(env_file, "w") as f:
    f.write(myenv.serialize_to_string())
print("Saved dependency info in", env_file)

# Print the .yml file
with open(env_file, "r") as f:
    print(f.read())
```
Now you can deploy the service (in this case, as an Azure Container Instance, or ACI).
> **Note**: This can take a few minutes - wait until the state is shown as **Healthy**.
```
from azureml.core.webservice import AciWebservice, Webservice
from azureml.core.model import Model
from azureml.core.model import InferenceConfig

# Configure the scoring environment
inference_config = InferenceConfig(runtime="python",
                                   entry_script=script_file,
                                   conda_file=env_file)

service_name = "diabetes-service-app-insights"
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
aci_service = Model.deploy(workspace=ws,
                           name=service_name,
                           models=[model],
                           inference_config=inference_config,
                           deployment_config=deployment_config)
aci_service.wait_for_deployment(show_output=True)
print(aci_service.state)
```
## Enable Application Insights
Next, you need to enable Application Insights for the service.
```
# Enable AppInsights
aci_service.update(enable_app_insights=True)
print('AppInsights enabled!')
```
## Use the web service
With the service deployed, now you can consume it from a client application.
First, determine the URL to which these applications must submit their requests.
```
endpoint = aci_service.scoring_uri
print(endpoint)
```
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON (or binary) format, and receive back the predicted class(es).
> **Tip**: If an error occurs because the service endpoint isn't ready, wait a few seconds and try again.
```
import requests
import json

# Create new data for inferencing
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
         [0,148,58,11,179,39.19207553,0.160829008,45]]

# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})

# Set the content type
headers = {'Content-Type': 'application/json'}

# Get the predictions
predictions = requests.post(endpoint, input_json, headers=headers)
print(predictions.status_code)
if predictions.status_code == 200:
    predicted_classes = json.loads(predictions.json())
    for i in range(len(x_new)):
        print("Patient {}".format(x_new[i]), predicted_classes[i])
```
Now you can view the data logged for the service endpoint:
1. In the [Azure portal](https://portal.azure.com), open your Machine Learning workspace.
2. On the **Overview** page, click the link for the associated **Application Insights** resource.
3. On the Application Insights blade, click **Logs**.
> **Note**: If this is the first time you've opened log analytics, you may need to click **Get Started** to open the query editor. If a tip explaining how to write a query is displayed, close it.
4. Paste the following query into the query editor and click **Run**
```
traces
|where message == "STDOUT"
and customDimensions.["Service Name"] == "diabetes-service-app-insights"
|project timestamp, customDimensions.Content
```
5. View the results. At first there may be none, because an ACI web service can take as long as five minutes to send the telemetry to Application Insights. Wait a few minutes and re-run the query until you see the logged data and predictions.
6. When you've reviewed the logged data, close the Application Insights query page.
## Delete the service
When you no longer need your service, you should delete it.
> **Note**: If the service is in use, you may not be able to delete it immediately.
```
try:
    aci_service.delete()
    print('Service deleted.')
except Exception as ex:
    print(ex)
```
For more information about using Application Insights to monitor a deployed service, see the [Azure Machine Learning documentation](https://docs.microsoft.com/azure/machine-learning/how-to-enable-app-insights).
<a href="https://colab.research.google.com/github/PUC-RecSys-Class/RecSysPUC-2020/blob/master/practicos/pyRecLab_MostPopular.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<a href="https://youtu.be/MEY4UK4QCP4" target="_parent"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/0/09/YouTube_full-color_icon_%282017%29.svg/71px-YouTube_full-color_icon_%282017%29.svg.png" alt="Open In Colab"/></a>
# Recommender Systems Lab: pyreclab - Most Popular and Item Average Rating
In this lab we will use the Python library [pyreclab](https://github.com/gasevi/pyreclab), developed by the IALab and SocVis laboratories of the Pontificia Universidad Católica de Chile, for non-personalized recommendation: **Most Popular** and **Item Average Rating**.
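Before using the library, here is a minimal sketch (with a made-up ratings table, not pyreclab's API) of what these two non-personalized baselines compute: Most Popular ranks items by how many ratings they received, while Item Average Rating ranks them by their mean rating.

```python
import pandas as pd

# Hypothetical (user, item, rating) interactions for illustration only
ratings = pd.DataFrame({
    'userid': [1, 2, 3, 1, 2, 3],
    'itemid': [10, 10, 10, 20, 20, 30],
    'rating': [5, 4, 3, 2, 4, 5],
})

# Most Popular: rank items by number of ratings received
popularity = ratings.groupby('itemid')['rating'].count().sort_values(ascending=False)
print(popularity.index.tolist())  # [10, 20, 30]

# Item Average Rating: rank items by their mean rating
avg_rating = ratings.groupby('itemid')['rating'].mean().sort_values(ascending=False)
print(avg_rating.index.tolist())  # [30, 10, 20]
```

Note how the two rankings can disagree: an item rated only once but highly (item 30) tops the average-rating list, which is one of that baseline's known weaknesses.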
## Initial setup
**Step 1:** Download the dataset files directly to Colab by running the following three cells:
```
!curl -L -o "u2.base" "https://drive.google.com/uc?export=download&id=1bGweNw7NbOHoJz11v6ld7ymLR8MLvBsA"
!curl -L -o "u2.test" "https://drive.google.com/uc?export=download&id=1f_HwJWC_1HFzgAjKAWKwkuxgjkhkXrVg"
!curl -L -o "u.item" "https://drive.google.com/uc?export=download&id=10YLhxkO2-M_flQtyo9OYV4nT9IvSESuz"
```
**Step 2**: Install [`pyreclab`](https://github.com/gasevi/pyreclab) and [`seaborn`](https://seaborn.pydata.org/index.html) using `pip`.
```
!pip3 install pyreclab --upgrade
```
**Step 3:** Import the Python libraries we are going to use.
```
import pandas as pd
import pyreclab
import seaborn as sns
import numpy as np
import scipy.sparse as sparse
import matplotlib.pyplot as plt
%matplotlib inline
sns.set(style="whitegrid")
```
## Before recommending
**Step 4**: The files `u2.base` and `u2.test` contain (user, item, rating, timestamp) tuples, i.e., user preference information about movies in a sample of the [MovieLens](https://grouplens.org/datasets/movielens/) dataset. Let's inspect one of these files and then make plots that let us draw conclusions from it.
```
# First we create the dataframe with the data
df_train = pd.read_csv('u2.base',
                       sep='\t',
                       names=['userid', 'itemid', 'rating', 'timestamp'],
                       header=None)
df_train.head()

# Now let's take a quick look at the ratings
df_train.describe()[['rating']]
```
On the other hand, to obtain additional information about each movie, such as **title**, **release date**, **genre**, etc., we will load the downloaded item file (`u.item`) so we can map each item identifier to the data that describes it. Let's inspect the contents of this file.
```
columns = ['movieid', 'title', 'release_date', 'video_release_date', \
           'IMDb_URL', 'unknown', 'Action', 'Adventure', 'Animation', \
           'Children', 'Comedy', 'Crime', 'Documentary', 'Drama', 'Fantasy', \
           'Film-Noir', 'Horror', 'Musical', 'Mystery', 'Romance', 'Sci-Fi', \
           'Thriller', 'War', 'Western']

# Load the item dataset
df_items = pd.read_csv('u.item',
                       sep='|',
                       index_col=0,
                       names=columns,
                       header=None,
                       encoding='latin-1')
df_items.head()
```
Distribution of movies by genre:
```
genre_columns = ['Action', 'Adventure', 'Animation', \
'Children', 'Comedy', 'Crime', 'Documentary', 'Drama', 'Fantasy', \
'Film-Noir', 'Horror', 'Musical', 'Mystery', 'Romance', 'Sci-Fi', \
'Thriller', 'War', 'Western']
genre_count = df_items[genre_columns].sum().sort_values()
sns.barplot(x=genre_count.values, y=genre_count.index, label="Total", palette="Blues_d")
```
**Question:** Explain how Most Popular and Average Rating work.
What problems might you encounter when using each of them?
**Answer:**
## Most popular
```
# Define the "most popular" recommender object
most_popular = pyreclab.MostPopular(dataset='u2.base',
                                    dlmchar=b'\t',
                                    header=False,
                                    usercol=0,
                                    itemcol=1,
                                    ratingcol=2)

# Strictly speaking this is not "training": the "most_popular" object just records the most popular items in the training set
most_popular.train()
```
We compute ranking metrics ($nDCG@10$ and $MAP$) to measure our recommender's performance on the test set.
1. nDCG@10: normalized discounted cumulative gain at 10
$DCG = \sum_i \frac{2^{rel_i} - 1}{\log_2(i+1)}$
$nDCG = \frac{DCG}{IDCG}$
$IDCG$: the ideal DCG, obtained when all the relevant items occupy the first positions.
$nDCG@10$ is the nDCG metric computed over the first 10 recommended items.
2. MAP: mean average precision
$AveP = \frac{\sum_k P(k) \cdot rel(k)}{\textrm{total relevant items}}$
$MAP = \frac{1}{|U|} \sum_{u \in U} AveP(u)$
where:
$P(k)$: precision at position k (P@10, or "precision at 10", is the fraction of relevant items among the first 10 positions) <br>
$rel(k)$: 1 if the item at position k is relevant, 0 otherwise. <br>
**TIP:** an item can be considered relevant, for example, if the user gave it more than 3 stars. <br>
**Example**
The system recommended these movies:
[13, 15, 35, 28, 3, 1, 100, 122]
The user actually preferred these movies:
[13, 90, 12, 2, 3, 384, 219, 12938]
We build a list assigning 1 if the item is relevant and 0 if it is not: <br>
[1, 0, 0, 0, 1, 0, 0, 0]
Over this list we compute the nDCG@K and MAP metrics (they will be covered in more detail in class).
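The worked example above can be checked numerically. The following sketch (plain Python, independent of pyreclab) computes DCG, nDCG, and average precision for the relevance list [1, 0, 0, 0, 1, 0, 0, 0]; the AP variant shown divides by min(#relevant items, list length), which is one common convention.

```python
import math

recommended = [13, 15, 35, 28, 3, 1, 100, 122]
relevant = {13, 90, 12, 2, 3, 384, 219, 12938}

# Binary relevance of each recommended position: [1, 0, 0, 0, 1, 0, 0, 0]
rel = [1 if item in relevant else 0 for item in recommended]

# DCG: sum over positions i (1-indexed) of (2^rel - 1) / log2(i + 1)
dcg = sum((2 ** r - 1) / math.log2(i + 1) for i, r in enumerate(rel, start=1))

# IDCG: the same sum with all relevant items moved to the top positions
ideal = sorted(rel, reverse=True)
idcg = sum((2 ** r - 1) / math.log2(i + 1) for i, r in enumerate(ideal, start=1))

ndcg = dcg / idcg

# Average precision: sum of P(k) at each relevant position,
# divided here by min(#relevant, list length) -- one common AP@k convention
hits, precision_sum = 0, 0.0
for k, r in enumerate(rel, start=1):
    if r:
        hits += 1
        precision_sum += hits / k
ap = precision_sum / min(len(relevant), len(recommended))

print(round(ndcg, 4), round(ap, 4))  # 0.8503 0.175
```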
```
# Evaluate recommendations over the top-10 items
top_n = 10
recommendList, maprec, ndcg = most_popular.testrec(input_file='u2.test',
                                                   dlmchar=b'\t',
                                                   header=False,
                                                   usercol=0,
                                                   itemcol=1,
                                                   ratingcol=2,
                                                   topn=top_n,
                                                   relevance_threshold=2,
                                                   includeRated=False)
print('MAP: {}\nNDCG@{}: {}'.format(maprec, top_n, ndcg))

# Compute the recommendations for a particular user (id = 2)
user_id = 2
ranking = [int(r) for r in most_popular.recommend(str(user_id), top_n, includeRated=False)]
print('Recommendation for user {}: {}'.format(user_id, ranking))

# Explicitly inspect the recommendations for the given user
df_items.loc[ranking]
```
**Question:** Change the user id. What can you observe about the recommendations?
**Answer:**
## Item average rating
```
# Define the "item average rating" recommender object
item_avg = pyreclab.ItemAvg(dataset='u2.base',
                            dlmchar=b'\t',
                            header=False,
                            usercol=0,
                            itemcol=1,
                            ratingcol=2)

# Strictly speaking this is not "training": the "item_avg" object just records the items with the highest average rating in the training set
item_avg.train()

# Evaluate recommendations over the top-10 items
top_n = 10
recommendList, maprec, ndcg = item_avg.testrec(input_file='u2.test',
                                               dlmchar=b'\t',
                                               header=False,
                                               usercol=0,
                                               itemcol=1,
                                               ratingcol=2,
                                               topn=top_n,
                                               relevance_threshold=2,
                                               includeRated=False)
print('MAP: {}\nNDCG@{}: {}'.format(maprec, top_n, ndcg))

# Compute the recommendations for a particular user (id = 1)
user_id = 1
ranking = [int(r) for r in item_avg.recommend(str(user_id), top_n, includeRated=False)]
print('Recommendation for user {}: {}'.format(user_id, ranking))

# Explicitly inspect the recommendations for the given user
df_items.loc[ranking]
```
Best model: LSTM on a 16-bin histogram of pixel intensities
```
import math
import numpy
import numpy as np
import pandas
import cv2
import matplotlib.pyplot as plt
%matplotlib inline

from pandas import DataFrame
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error, mean_absolute_error

from keras import backend as K
from keras import optimizers
from keras.models import Sequential
from keras.layers import (Dense, Dropout, Activation, Flatten, Reshape, LSTM,
                          Bidirectional, Conv2D, Convolution2D, MaxPooling2D,
                          ZeroPadding2D, GlobalAveragePooling2D)
from keras.layers.normalization import BatchNormalization
from keras.layers.wrappers import TimeDistributed
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator, img_to_array
from keras.utils import np_utils

# Accumulators for per-model evaluation results
results_rmse = []
results_mae = []
results_std = []
```
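The per-frame feature used by the first model is a 16-bin histogram of grayscale pixel intensities. A minimal sketch of that feature extraction with plain NumPy (standing in for the `cv2.calcHist` call used below, and with a random array in place of a real video frame):

```python
import numpy as np

num_bins = 16

# Stand-in for a 250x700 8-bit grayscale frame read with cv2.imread(path, 0)
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(250, 700), dtype=np.uint8)

# 16-bin histogram over intensity range [0, 256) -- the per-frame feature vector
hist, _ = np.histogram(frame, bins=num_bins, range=(0, 256))
hist = hist.astype(np.float32)

print(hist.shape)        # (16,)
print(int(hist.sum()))   # one count per pixel: 250 * 700 = 175000
```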
# Model 1: LSTM with a 16-bin histogram of pixel values
```
def train_bin_models(num_bins):
    numpy.random.seed(3)
    time_steps = 19

    # Load the training dataset
    dataframe = pandas.read_csv('./Trainold.csv')
    dataset = dataframe.values
    scaler = MinMaxScaler(feature_range=(0, 1))

    # Group by video name so we can process one video at a time
    grouped = dataframe.groupby(dataframe.VidName)
    per_vid = []
    for _, group in grouped:
        per_vid.append(group)
    print(len(per_vid))

    trainX = []
    trainY = []
    # Generate sequences one video at a time
    for i, vid in enumerate(per_vid):
        histValuesList = []
        # If we have fewer than `time_steps` datapoints for a video we skip it,
        # assuming something is missing in the raw data
        total = vid.iloc[:, 4:20].values
        vidImPath = vid.iloc[:, 0:2].values
        if len(total) < time_steps:
            continue
        scoreVal = vid["Score"].values[0] + 1
        max_total_for_vid = scoreVal.tolist()
        for j in range(0, time_steps):
            videoName = vidImPath[j][0]
            imgName = vidImPath[j][1]
            path = "./IMAGES/Train/" + videoName + "/" + imgName
            image = cv2.imread(path, 0)
            hist = cv2.calcHist([image], [0], None, [num_bins], [0, 256])
            hist_arr = hist.flatten()
            histValuesList.append(hist_arr)
        trainX.append(histValuesList)
        trainY.append([max_total_for_vid])
    print(len(trainX[0]))

    trainX = numpy.array(trainX)
    trainY = numpy.array(trainY)
    print(trainX.shape, trainY.shape)

    # Load the test dataset
    dataframe = pandas.read_csv('./Test.csv')
    dataset = dataframe.values

    # Group by video name so we can process one video at a time
    grouped = dataframe.groupby(dataframe.VidName)
    per_vid = []
    for _, group in grouped:
        per_vid.append(group)
    print(len(per_vid))

    testX = []
    testY = []
    # Generate sequences one video at a time
    for i, vid in enumerate(per_vid):
        histValuesList = []
        total = vid.iloc[:, 4:20].values
        vidImPath = vid.iloc[:, 0:2].values
        if len(total) < time_steps:
            continue
        scoreVal = vid["Score"].values[0] + 1
        max_total_for_vid = scoreVal.tolist()
        for j in range(0, time_steps):
            videoName = vidImPath[j][0]
            imgName = vidImPath[j][1]
            path = "./IMAGES/Test/" + videoName + "/" + imgName
            image = cv2.imread(path, 0)
            hist = cv2.calcHist([image], [0], None, [num_bins], [0, 256])
            hist_arr = hist.flatten()
            histValuesList.append(hist_arr)
        testX.append(histValuesList)
        testY.append([max_total_for_vid])
    print(len(testX[0]))

    testX = numpy.array(testX)
    testY = numpy.array(testY)
    print(testX.shape, testY.shape)

    # Inspect the raw histogram ranges before scaling
    trainX = trainX.reshape(-1, num_bins).reshape(-1, time_steps, num_bins)
    print(numpy.max(trainX))
    testX = testX.reshape(-1, num_bins).reshape(-1, time_steps, num_bins)
    print(numpy.max(testX))

    # Scale histogram counts to [0, 1] by the global maximum
    trainX = trainX.reshape(-1, num_bins)
    trainX = trainX / numpy.max(trainX)
    trainX = trainX.reshape(-1, time_steps, num_bins)
    print(trainX.shape, trainY.shape)

    testX = testX.reshape(-1, num_bins)
    testX = testX / numpy.max(testX)
    testX = testX.reshape(-1, time_steps, num_bins)
    print(testX.shape, testY.shape)

    # Optimizers (SGD with a small learning rate worked best here)
    adam1 = optimizers.Adam(lr=0.001)
    sgd1 = optimizers.SGD(lr=0.005)  # 0.005 or 6, 100 neurons (1.24, 1.12 with 0.003 and 0.2)

    print('Build model...')
    # Build model
    model = Sequential()
    model.add(LSTM(100, input_shape=(time_steps, num_bins)))  # 100 units
    model.add(Dropout(0.1))  # dropout before the output layer (not after it)
    model.add(Dense(1))
    model.compile(loss='mse', optimizer=sgd1, metrics=['mse'])
    history = model.fit(trainX, trainY, epochs=500, batch_size=20, verbose=2, shuffle=True)

    # Make predictions
    trainPredict = model.predict(trainX)
    trainScore = mean_squared_error(trainY, trainPredict)
    print('Train Score: %.2f MSE' % (trainScore))

    def root_mean_squared_error(y_true, y_pred):
        return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))

    pred = model.predict(testX)
    print(pred.shape)
    print(testY.shape)

    testScore = mean_squared_error(testY, pred)
    print('Test Score: %.2f MSE' % (testScore))

    rmse = np.sqrt(((pred - testY) ** 2).mean(axis=0))
    print('RMSE Score: %.2f rmse' % (rmse))
    mae = mean_absolute_error(testY, pred)
    print('MAE Score: %.2f mae' % (mae))

    list1 = []
    list2 = []
    diff = []
    for k in range(0, len(pred)):
        print(testY[k], pred[k])
        list1.append(testY[k])
        list2.append(pred[k])
        diff.append(abs(testY[k] - pred[k]))
    print(numpy.mean(diff))

    stdVals = numpy.std(diff)
    results_rmse.append(rmse)
    results_mae.append(mae)
    print(stdVals)
    results_std.append(stdVals)

# Model 1: LSTM with a 16-bin histogram of pixel values
num_bins = 16
train_bin_models(num_bins)
```
# Model 2: LSTM with a 3D color histogram (10 bins per channel)
```
def train_colorbin_models(num_bins):
    numpy.random.seed(3)
    time_steps = 19

    # Load the training dataset
    dataframe = pandas.read_csv('./Trainold.csv')
    dataset = dataframe.values
    scaler = MinMaxScaler(feature_range=(0, 1))

    # Group by video name so we can process one video at a time
    grouped = dataframe.groupby(dataframe.VidName)
    per_vid = []
    for _, group in grouped:
        per_vid.append(group)
    print(len(per_vid))

    trainX = []
    trainY = []
    # Generate sequences one video at a time
    for i, vid in enumerate(per_vid):
        histValuesList = []
        # If we have fewer than `time_steps` datapoints for a video we skip it,
        # assuming something is missing in the raw data
        total = vid.iloc[:, 4:20].values
        vidImPath = vid.iloc[:, 0:2].values
        if len(total) < time_steps:
            continue
        scoreVal = vid["Score"].values[0] + 1
        max_total_for_vid = scoreVal.tolist()
        for j in range(0, time_steps):
            videoName = vidImPath[j][0]
            imgName = vidImPath[j][1]
            path = "./IMAGES/Train/" + videoName + "/" + imgName
            image = cv2.imread(path)
            # 3D color histogram over the B, G and R channels
            hist = cv2.calcHist([image], [0, 1, 2], None,
                                [num_bins, num_bins, num_bins],
                                [0, 256, 0, 256, 0, 256])
            hist_arr = hist.flatten()
            histValuesList.append(hist_arr)
        trainX.append(histValuesList)
        trainY.append([max_total_for_vid])
    print(len(trainX[0]))

    trainX = numpy.array(trainX)
    trainY = numpy.array(trainY)
    print(trainX.shape, trainY.shape)

    # Load the test dataset
    dataframe = pandas.read_csv('./Test.csv')
    dataset = dataframe.values

    grouped = dataframe.groupby(dataframe.VidName)
    per_vid = []
    for _, group in grouped:
        per_vid.append(group)
    print(len(per_vid))

    testX = []
    testY = []
    for i, vid in enumerate(per_vid):
        histValuesList = []
        total = vid.iloc[:, 4:20].values
        vidImPath = vid.iloc[:, 0:2].values
        if len(total) < time_steps:
            continue
        scoreVal = vid["Score"].values[0] + 1
        max_total_for_vid = scoreVal.tolist()
        for j in range(0, time_steps):
            videoName = vidImPath[j][0]
            imgName = vidImPath[j][1]
            path = "./IMAGES/Test/" + videoName + "/" + imgName
            image = cv2.imread(path)
            hist = cv2.calcHist([image], [0, 1, 2], None,
                                [num_bins, num_bins, num_bins],
                                [0, 256, 0, 256, 0, 256])
            hist_arr = hist.flatten()
            histValuesList.append(hist_arr)
        testX.append(histValuesList)
        testY.append([max_total_for_vid])
    print(len(testX[0]))

    testX = numpy.array(testX)
    testY = numpy.array(testY)
    print(testX.shape, testY.shape)

    # Scale histogram counts to [0, 1] by the global maximum
    trainX = trainX.reshape(-1, num_bins ** 3)
    trainX = trainX / numpy.max(trainX)
    trainX = trainX.reshape(-1, time_steps, num_bins ** 3)
    print(trainX.shape, trainY.shape)

    testX = testX.reshape(-1, num_bins ** 3)
    testX = testX / numpy.max(testX)
    testX = testX.reshape(-1, time_steps, num_bins ** 3)
    print(testX.shape, testY.shape)

    adam1 = optimizers.Adam(lr=0.001)
    sgd1 = optimizers.SGD(lr=0.005)

    print('Build model...')
    # Build model
    model = Sequential()
    model.add(LSTM(100, input_shape=(time_steps, num_bins ** 3)))  # 200 also tried
    model.add(Dropout(0.1))  # dropout before the output layer (not after it)
    model.add(Dense(1))
    model.compile(loss='mse', optimizer=sgd1, metrics=['mse'])
    model.fit(trainX, trainY, epochs=500, batch_size=19, verbose=2, shuffle=True)

    # Make predictions
    trainPredict = model.predict(trainX)
    trainScore = mean_squared_error(trainY, trainPredict)
    print('Train Score: %.2f MSE' % (trainScore))

    def root_mean_squared_error(y_true, y_pred):
        return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))

    pred = model.predict(testX)
    print(pred.shape)
    print(testY.shape)

    testScore = mean_squared_error(testY, pred)
    print('Test Score: %.2f MSE' % (testScore))

    rmse = np.sqrt(((pred - testY) ** 2).mean(axis=0))
    print('RMSE Score: %.2f rmse' % (rmse))
    mae = mean_absolute_error(testY, pred)
    print('MAE Score: %.2f mae' % (mae))

    list1 = []
    list2 = []
    diff = []
    for k in range(0, len(pred)):
        print(testY[k], pred[k])
        list1.append(testY[k])
        list2.append(pred[k])
        diff.append(abs(testY[k] - pred[k]))
    print(numpy.mean(diff))

    stdVals = numpy.std(diff)
    results_rmse.append(rmse)
    results_mae.append(mae)
    print(stdVals)
    results_std.append(stdVals)

# Model 2: LSTM with a 3D color histogram (10 bins per channel)
num_bins = 10
train_colorbin_models(num_bins)
```
# Model 3: CNN-LSTM
```
# Model 3 : CNN-LSTM
from keras.models import load_model
model = load_model('./CNNLSTM.h5')
time_steps=19
# load the dataset
dataframe = pandas.read_csv('./Test.csv')
dataset = dataframe.values
#print(dataset)
# we group by day so we can process a video at a time.
grouped = dataframe.groupby(dataframe.VidName)
per_vid = []
for _, group in grouped:
per_vid.append(group)
print(len(per_vid))
testX=[]
testY=[]
# generate sequences a vid at a time
for i,vid in enumerate(per_vid):
histValuesList=[]
scoreList=[]
# if we have less than 8 datapoints for a vid we skip over the
# vid assuming something is missing in the raw data
total = vid.iloc[:,4:20].values
vidImPath=vid.iloc[:,0:2].values
if len(total)<time_steps :
continue
scoreVal=vid["Score"].values[0] + 1
#max_total_for_vid = scoreVal.tolist()
max_total_for_vid = scoreVal.tolist()
for i in range(0,time_steps):
#histValuesList.append(total[i])
#print("Vid and Img name")
#print(req[i][0],req[i][1])
videoName=vidImPath[i][0]
imgName=vidImPath[i][1]
path="./IMAGES/Test/"+videoName+"/"+imgName
image = cv2.imread(path,0)
img_arr = img_to_array(image)
histValuesList.append(img_arr)
scoreList.append(max_total_for_vid)
testX.append(histValuesList)
testY.append([max_total_for_vid])
#testY.append(scoreList)
print(len(testX[0]))
#trainX = np.array([np.array(xi) for xi in trainX])
testX=numpy.array(testX)
testY=numpy.array(testY)
print(testX.shape,testY.shape)
testX=numpy.array(testX)
testX=testX.reshape(-1,250,700,1)
testX=testX/255
testX=testX.reshape(-1,19,250,700,1)
print(testX.shape,testY.shape)
from keras import backend as K
def root_mean_squared_error(y_true, y_pred):
return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))
pred=model.predict(testX)
print(pred.shape)
print(testY.shape)
# calculate root mean squared error
testScore = mean_squared_error(testY, pred)
print('Test Score: %.2f MSE' % (testScore))
rmse = np.sqrt(((pred - testY) ** 2).mean())
print('RMSE Score: %.2f rmse' % (rmse))
mae = mean_absolute_error(testY, pred)
print('MAE Score: %.2f mae' % (mae))
list1=[]
list2=[]
diff=[]
for i in range(0,len(pred)):
print(testY[i],pred[i])
list1.append(testY[i])
list2.append(pred[i])
diff.append(abs(testY[i]-pred[i]))
print(numpy.mean(diff))
stdVals=numpy.std(diff)
results_rmse.append(rmse)
results_mae.append(mae)
#stdVals = np.std(testY-pred)
print(stdVals)
results_std.append(stdVals)
```
# Model 4 : SVR
```
np.random.seed(3)
num_bins=16
time_steps=19
# load the dataset
dataframe = pandas.read_csv('./Trainold.csv')
dataset = dataframe.values
scaler = MinMaxScaler(feature_range=(0, 1))
#print(dataset)
# we group by day so we can process a video at a time.
grouped = dataframe.groupby(dataframe.VidName)
per_vid = []
for _, group in grouped:
per_vid.append(group)
print(len(per_vid))
trainX=[]
trainY=[]
# generate sequences a vid at a time
for i,vid in enumerate(per_vid):
histValuesList=[]
scoreList=[]
# if we have fewer than time_steps (19) datapoints for a vid we skip over the
# vid assuming something is missing in the raw data
total = vid.iloc[:,4:20].values
vidImPath=vid.iloc[:,0:2].values
if len(total) < time_steps :
continue
scoreVal=vid["Score"].values[0] + 1
max_total_for_vid = scoreVal.tolist()
#max_total_for_vid = vid["Score"].values[0].tolist()
for i in range(0,time_steps):
videoName=vidImPath[i][0]
imgName=vidImPath[i][1]
path="./IMAGES/Train/"+videoName+"/"+imgName
image = cv2.imread(path,0)
hist = cv2.calcHist([image],[0],None,[num_bins],[0,256])
hist_arr = hist.flatten()
#img_arr = img_to_array(image)
histValuesList.append(hist_arr)
scoreList.append(max_total_for_vid)
trainX.append(histValuesList)
#trainY.append([max_total_for_vid])
trainY.append(scoreList)
print(len(trainX[0]))
#trainX = np.array([np.array(xi) for xi in trainX])
trainX=numpy.array(trainX)
trainY=numpy.array(trainY)
trainX=trainX.reshape(-1,num_bins)
trainX = trainX/np.max(trainX)
#trainX=trainX.reshape(-1,1,16)
trainY=trainY.reshape(-1,1)
print(trainX.shape,trainY.shape)
time_steps=19
# load the dataset
dataframe = pandas.read_csv('./Test.csv')
dataset = dataframe.values
#print(dataset)
# we group by day so we can process a video at a time.
grouped = dataframe.groupby(dataframe.VidName)
per_vid = []
for _, group in grouped:
per_vid.append(group)
print(len(per_vid))
testX=[]
testY=[]
# generate sequences a vid at a time
for i,vid in enumerate(per_vid):
histValuesList=[]
scoreList=[]
# if we have fewer than time_steps (19) datapoints for a vid we skip over the
# vid assuming something is missing in the raw data
total = vid.iloc[:,4:20].values
vidImPath=vid.iloc[:,0:2].values
if len(total)<time_steps :
continue
scoreVal=vid["Score"].values[0] + 1
max_total_for_vid = scoreVal.tolist()
#max_total_for_vid = vid["Score"].values[0].tolist()
for i in range(0,time_steps):
#histValuesList.append(total[i])
#print("Vid and Img name")
#print(req[i][0],req[i][1])
videoName=vidImPath[i][0]
imgName=vidImPath[i][1]
path="./IMAGES/Test/"+videoName+"/"+imgName
image = cv2.imread(path,0)
hist = cv2.calcHist([image],[0],None,[num_bins],[0,256])
hist_arr = hist.flatten()
histValuesList.append(hist_arr)
scoreList.append(max_total_for_vid)
testX.append(histValuesList)
#testY.append([max_total_for_vid])
testY.append(scoreList)
print(len(testX[0]))
#trainX = np.array([np.array(xi) for xi in trainX])
testX=numpy.array(testX)
testY=numpy.array(testY)
testX=testX.reshape(-1,num_bins)
testX = testX/np.max(testX)
#testY=scaler.fit_transform(testY)
#testX=testX.reshape(-1,1,16)
testY=testY.reshape(-1,1)
print(testX.shape,testY.shape)
# 2. Import libraries and modules
from sklearn.svm import SVR
import numpy as np
import pandas as pd
from sklearn import preprocessing, svm, linear_model, metrics, model_selection
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import MinMaxScaler
import joblib  # sklearn.externals.joblib, cross_validation and grid_search are deprecated; use model_selection and the standalone joblib
# hyperparameters to tune
hyperparameters = {'svr__C':[0.001, 0.01, 0.1, 1], 'svr__gamma':[0.001, 0.01, 0.1, 1], 'svr__kernel': ['linear','poly','rbf','sigmoid'] }
# 7. Tune model using cross-validation pipeline
clf = GridSearchCV(make_pipeline(svm.SVR()), hyperparameters, cv=10)
clf.fit(trainX, trainY.ravel())  # ravel to 1-D targets, as scikit-learn regressors expect
#print(clf.get_params())
print("The best parameters are ")
print(clf.best_params_)
# 9. Evaluate model pipeline on test data
pred = clf.predict(testX)
# make predictions
trainPredict = clf.predict(trainX)
trainScore = mean_squared_error(trainY, trainPredict)
print('Train Score: %.2f MSE' % (trainScore))
from keras import backend as K
def root_mean_squared_error(y_true, y_pred):
return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))
print(pred.shape)
print(testY.shape)
# calculate root mean squared error
testScore = mean_squared_error(testY, pred)
print('Test Score: %.2f MSE' % (testScore))
#rmse = np.sqrt(((pred - testY) ** 2).mean(axis=0))
#print('RMSE Score: %.2f rmse' % (rmse))
rmse = np.sqrt(metrics.mean_squared_error(testY, pred))
print('RMSE Score: %.2f rmse' % (rmse))
mae = mean_absolute_error(testY, pred)
print('MAE Score: %.2f mae' % (mae))
list1=[]
list2=[]
diff=[]
for i in range(0,len(pred)):
print(testY[i],pred[i])
list1.append(testY[i])
list2.append(pred[i])
diff.append(abs(testY[i]-pred[i]))
print(numpy.mean(diff))
stdVals=numpy.std(diff)
results_rmse.append(rmse)
results_mae.append(mae)
#stdVals = np.std(testY-pred)
print(stdVals)
results_std.append(stdVals)
```
# Model 5 : Bidirectional LSTM (16 bins hist pixel)
```
num_bins = 16
def train_merge_models(mode_name):
np.random.seed(3)
time_steps=19
# load the dataset
dataframe = pandas.read_csv('./Trainold.csv')
dataset = dataframe.values
scaler = MinMaxScaler(feature_range=(0, 1))
#print(dataset)
# we group by day so we can process a video at a time.
grouped = dataframe.groupby(dataframe.VidName)
per_vid = []
for _, group in grouped:
per_vid.append(group)
print(len(per_vid))
trainX=[]
trainY=[]
# generate sequences a vid at a time
for i,vid in enumerate(per_vid):
histValuesList=[]
scoreList=[]
# if we have fewer than time_steps (19) datapoints for a vid we skip over the
# vid assuming something is missing in the raw data
total = vid.iloc[:,4:20].values
vidImPath=vid.iloc[:,0:2].values
if len(total) < time_steps :
continue
scoreVal=vid["Score"].values[0] + 1
max_total_for_vid = scoreVal.tolist()
#max_total_for_vid = vid["Score"].values[0].tolist()
for i in range(0,time_steps):
videoName=vidImPath[i][0]
imgName=vidImPath[i][1]
path="./IMAGES/Train/"+videoName+"/"+imgName
image = cv2.imread(path,0)
hist = cv2.calcHist([image],[0],None,[num_bins],[0,256])
hist_arr = hist.flatten()
#img_arr = img_to_array(image)
histValuesList.append(hist_arr)
#scoreList.append(max_total_for_vid)
trainX.append(histValuesList)
trainY.append([max_total_for_vid])
#trainY.append(scoreList)
print(len(trainX[0]))
#trainX = np.array([np.array(xi) for xi in trainX])
trainX=numpy.array(trainX)
trainY=numpy.array(trainY)
print(trainX.shape,trainY.shape)
time_steps=19
# load the dataset
dataframe = pandas.read_csv('./Test.csv')
dataset = dataframe.values
#print(dataset)
# we group by day so we can process a video at a time.
grouped = dataframe.groupby(dataframe.VidName)
per_vid = []
for _, group in grouped:
per_vid.append(group)
print(len(per_vid))
testX=[]
testY=[]
# generate sequences a vid at a time
for i,vid in enumerate(per_vid):
histValuesList=[]
scoreList=[]
# if we have fewer than time_steps (19) datapoints for a vid we skip over the
# vid assuming something is missing in the raw data
total = vid.iloc[:,4:20].values
vidImPath=vid.iloc[:,0:2].values
if len(total)<time_steps :
continue
scoreVal=vid["Score"].values[0] + 1
max_total_for_vid = scoreVal.tolist()
#max_total_for_vid = vid["Score"].values[0].tolist()
for i in range(0,time_steps):
#histValuesList.append(total[i])
#print("Vid and Img name")
#print(req[i][0],req[i][1])
videoName=vidImPath[i][0]
imgName=vidImPath[i][1]
path="./IMAGES/Test/"+videoName+"/"+imgName
image = cv2.imread(path,0)
hist = cv2.calcHist([image],[0],None,[num_bins],[0,256])
hist_arr = hist.flatten()
histValuesList.append(hist_arr)
#scoreList.append(max_total_for_vid)
testX.append(histValuesList)
testY.append([max_total_for_vid])
#testY.append(scoreList)
print(len(testX[0]))
#trainX = np.array([np.array(xi) for xi in trainX])
testX=numpy.array(testX)
testY=numpy.array(testY)
print(testX.shape,testY.shape)
trainX=numpy.array(trainX)
trainX=trainX.reshape(-1,num_bins)
trainX=trainX.reshape(-1,19,num_bins)
print(numpy.max(trainX))
testX=numpy.array(testX)
testX=testX.reshape(-1,num_bins)
testX=testX.reshape(-1,19,num_bins)
print(numpy.max(testX))
trainX=numpy.array(trainX)
trainX=trainX.reshape(-1,num_bins)
trainX = trainX/numpy.max(trainX)
trainX=trainX.reshape(-1,19,num_bins)
print(trainX.shape,trainY.shape)
testX=numpy.array(testX)
testX=testX.reshape(-1,num_bins)
testX = testX/numpy.max(testX)
testX=testX.reshape(-1,19,num_bins)
print(testX.shape,testY.shape)
print(trainX.shape,trainY.shape)
print(testX.shape,testY.shape)
#print(valX.shape,valY.shape)
adam1 = optimizers.Adam(lr=0.001)
sgd1 = optimizers.SGD(lr=0.005)
print('Build model...')
# Build Model
model = Sequential()
#model.add(LSTM(100, input_shape=(19, num_bins), return_sequences=False, go_backwards=True))
#mode = 'sum','mul','concat'
model.add(Bidirectional(LSTM(100, return_sequences=True), input_shape=(19, num_bins), merge_mode=mode_name))
model.add(Bidirectional(LSTM(50)))
model.add(Dense(1))
model.compile(loss='mse', optimizer=sgd1, metrics=['mse'])
#model.compile(loss="mean_squared_error", optimizer="rmsprop")
model.summary()
history = model.fit(trainX, trainY, epochs=500, batch_size=20, verbose=2, shuffle=True)  # nb_epoch was renamed to epochs in Keras 2
# make predictions
trainPredict = model.predict(trainX)
trainScore = mean_squared_error(trainY, trainPredict)
print('Train Score: %.2f MSE' % (trainScore))
def root_mean_squared_error(y_true, y_pred):
return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))
pred=model.predict(testX)
print(pred.shape)
print(testY.shape)
testScore = mean_squared_error(testY, pred)
print('Test Score: %.2f MSE' % (testScore))
#maeScore = root_mean_squared_error(testY, pred)
#print('RMSE Score: %.2f MAE' % (maeScore))
rmse = np.sqrt(((pred - testY) ** 2).mean())
print('RMSE Score: %.2f rmse' % (rmse))
mae = mean_absolute_error(testY, pred)
print('MAE Score: %.2f mae' % (mae))
list1=[]
list2=[]
diff=[]
for i in range(0,len(pred)):
print(testY[i],pred[i])
list1.append(testY[i])
list2.append(pred[i])
diff.append(abs(testY[i]-pred[i]))
print(numpy.mean(diff))
stdVals=numpy.std(diff)
results_rmse.append(rmse)
results_mae.append(mae)
#stdVals = np.std(testY-pred)
print(stdVals)
results_std.append(stdVals)
# Model 5 : Bidirectional LSTM ( 16 bin hist pix)
train_merge_models('sum')
```
# Model 6 : Stacked LSTM hist of color 10 bin
```
# Model 6 : Stacked LSTM with 10 bin hist of color values
num_bins = 10
def train_optimizer_models(opt):
np.random.seed(3)
time_steps=19
# load the dataset
dataframe = pandas.read_csv('./Trainold.csv')
dataset = dataframe.values
scaler = MinMaxScaler(feature_range=(0, 1))
#print(dataset)
# we group by day so we can process a video at a time.
grouped = dataframe.groupby(dataframe.VidName)
per_vid = []
for _, group in grouped:
per_vid.append(group)
print(len(per_vid))
trainX=[]
trainY=[]
# generate sequences a vid at a time
for i,vid in enumerate(per_vid):
histValuesList=[]
scoreList=[]
# if we have fewer than time_steps (19) datapoints for a vid we skip over the
# vid assuming something is missing in the raw data
total = vid.iloc[:,4:20].values
vidImPath=vid.iloc[:,0:2].values
if len(total) < time_steps :
continue
scoreVal=vid["Score"].values[0] + 1
max_total_for_vid = scoreVal.tolist()
#max_total_for_vid = vid["Score"].values[0].tolist()
for i in range(0,time_steps):
videoName=vidImPath[i][0]
imgName=vidImPath[i][1]
path="./IMAGES/Train/"+videoName+"/"+imgName
image = cv2.imread(path)
hist = cv2.calcHist([image], [0, 1, 2], None, [num_bins, num_bins, num_bins], [0, 256, 0, 256, 0, 256])
hist_arr = hist.flatten()
#img_arr = img_to_array(image)
histValuesList.append(hist_arr)
#scoreList.append(max_total_for_vid)
trainX.append(histValuesList)
trainY.append([max_total_for_vid])
#trainY.append(scoreList)
print(len(trainX[0]))
#trainX = np.array([np.array(xi) for xi in trainX])
trainX=numpy.array(trainX)
trainY=numpy.array(trainY)
print(trainX.shape,trainY.shape)
# load the dataset
dataframe = pandas.read_csv('./Test.csv')
dataset = dataframe.values
#print(dataset)
# we group by day so we can process a video at a time.
grouped = dataframe.groupby(dataframe.VidName)
per_vid = []
for _, group in grouped:
per_vid.append(group)
print(len(per_vid))
testX=[]
testY=[]
# generate sequences a vid at a time
for i,vid in enumerate(per_vid):
histValuesList=[]
scoreList=[]
# if we have fewer than time_steps (19) datapoints for a vid we skip over the
# vid assuming something is missing in the raw data
total = vid.iloc[:,4:20].values
vidImPath=vid.iloc[:,0:2].values
if len(total)<time_steps :
continue
scoreVal=vid["Score"].values[0] + 1
max_total_for_vid = scoreVal.tolist()
#max_total_for_vid = vid["Score"].values[0].tolist()
for i in range(0,time_steps):
#histValuesList.append(total[i])
#print("Vid and Img name")
#print(req[i][0],req[i][1])
videoName=vidImPath[i][0]
imgName=vidImPath[i][1]
path="./IMAGES/Test/"+videoName+"/"+imgName
image = cv2.imread(path)
#img_arr = img_to_array(image)
hist = cv2.calcHist([image], [0, 1, 2], None, [num_bins, num_bins, num_bins], [0, 256, 0, 256, 0, 256])
hist_arr = hist.flatten()
histValuesList.append(hist_arr)
#scoreList.append(max_total_for_vid)
testX.append(histValuesList)
testY.append([max_total_for_vid])
#testY.append(scoreList)
print(len(testX[0]))
#trainX = np.array([np.array(xi) for xi in trainX])
testX=numpy.array(testX)
testY=numpy.array(testY)
print(testX.shape,testY.shape)
trainX=numpy.array(trainX)
trainX=trainX.reshape(-1,num_bins**3)
trainX = trainX/numpy.max(trainX)
trainX=trainX.reshape(-1,19,num_bins**3)
print(trainX.shape,trainY.shape)
testX=numpy.array(testX)
testX=testX.reshape(-1,num_bins**3)
testX = testX/numpy.max(testX)
testX=testX.reshape(-1,19,num_bins**3)
print(testX.shape,testY.shape)
print('Build model...')
# Build Model
model = Sequential()
model.add(LSTM(100, input_shape=(19, num_bins**3),return_sequences=True))#100
model.add(LSTM(50, input_shape=(19, num_bins**3),return_sequences=True)) #100
model.add(LSTM(20, input_shape=(19, num_bins**3))) #100
model.add(Dense(1))
#model.add(Dropout(0.1))
#activation=act_func, recurrent_activation=act_func,dropout=0.1,recurrent_dropout=0.1
model.compile(loss='mse', optimizer=opt, metrics=['mse'])
#model.compile(optimizer='adam', loss = 'mae')
model.summary()
history = model.fit(trainX, trainY, epochs=500, batch_size=20, verbose=2, shuffle=True)  # nb_epoch was renamed to epochs in Keras 2
# make predictions
trainPredict = model.predict(trainX)
trainScore = mean_squared_error(trainY, trainPredict)
print('Train Score: %.2f MSE' % (trainScore))
from keras import backend as K
def root_mean_squared_error(y_true, y_pred):
return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))
pred=model.predict(testX)
print(pred.shape)
print(testY.shape)
# calculate root mean squared error
#trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:,0]))
#print('Train Score: %.2f RMSE' % (trainScore))
testScore = mean_squared_error(testY, pred)
print('Test Score: %.2f MSE' % (testScore))
#maeScore = root_mean_squared_error(testY, pred)
#print('RMSE Score: %.2f MAE' % (maeScore))
rmse = np.sqrt(((pred - testY) ** 2).mean())
print('RMSE Score: %.2f rmse' % (rmse))
mae = mean_absolute_error(testY, pred)
print('MAE Score: %.2f mae' % (mae))
list1=[]
list2=[]
diff=[]
for i in range(0,len(pred)):
print(testY[i],pred[i])
list1.append(testY[i])
list2.append(pred[i])
diff.append(abs(testY[i]-pred[i]))
print(numpy.mean(diff))
stdVals=numpy.std(diff)
results_rmse.append(rmse)
results_mae.append(mae)
#stdVals = np.std(testY-pred)
print(stdVals)
results_std.append(stdVals)
adam1 = optimizers.Adam(lr=0.001)
sgd1 = optimizers.SGD(lr=0.005) #0.005 or 6,100 neurons (1.24,1.12 with 0.003 and 0.2 )
train_optimizer_models(sgd1)
```
# Plotting graphs
```
results_mae
results_rmse
results_std
newArr1=[]
newArr2=[]
newArr3=[]
for item in results_rmse:
newArr1.append(float(item))
for item in results_mae:
newArr2.append(float(item))
for item in results_std:
newArr3.append(float(item))
rmse_val = tuple(newArr1)
mae_val = tuple(newArr2)
std_val=tuple(newArr3)
mae_val
rmse_val
std_val
# data to plot
n_groups = 6
# create plot
fig, ax = plt.subplots()
index = np.arange(n_groups)
bar_width = 0.3
opacity = 0.8
rects1 = plt.bar(index, rmse_val, bar_width,alpha=opacity,color='b',label='RMSE')
rects2 = plt.bar(index + bar_width, mae_val, bar_width,alpha=opacity,color='g',label='MAE')
plt.xlabel('Model')
plt.ylabel('Error Values')
plt.title('Error by Model')
plt.xticks(index + bar_width, ('M-1', 'M-2','M-3','M-4','M-5', 'M-6'))
plt.legend()
plt.tight_layout()
axes = plt.gca()
axes.set_ylim([0.70,1.3])
plt.show()
newArr1=[]
newArr2=[]
newArr3=[]
for item in results_rmse:
newArr1.append(float(item))
for item in results_mae:
newArr2.append(float(item))
for item in results_std:
newArr3.append(float(item))
rmse_val = newArr1
mae_val = newArr2
std_val= newArr3
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
modelNames= ['M1','M2','M3','M4','M5','M6']
df = pd.DataFrame(mae_val,index=modelNames,columns=['MAE'])
dfStd = pd.DataFrame(std_val,index=modelNames,columns=['StdDev'])
dfStd
df
p = df.plot(figsize=(10,5),legend=False,kind="bar",rot=10,color="blue",fontsize=10,yerr=dfStd['StdDev'],width=0.3);
p.set_title("MAE (Std) for different models", fontsize=10);
p.set_xlabel("Models", fontsize=10);
p.set_ylabel("Error : MAE (Std)", fontsize=10);
p.set_ylim(0,2);
import pandas as pd
import matplotlib.pylab as plt
plt.errorbar(range(len(df['MAE'])), df['MAE'], yerr=dfStd['StdDev'], fmt='^')
plt.xticks(range(len(df['MAE'])), df.index.values)
plt.ylim([-1, 3])
plt.rcParams['errorbar.capsize']=4
plt.show()
import pandas as pd
import matplotlib.pylab as plt
import numpy as np
results_mae=[0.8969300022492042,
0.9593815402342722,
1.0824612929270818,
0.9911895095714871,
0.9546054349495814,
0.9426209605657138]
results_std=[0.592690230798271,
0.6237677673564315,
0.6335018816522697,
0.693901551170415,
0.5902626748442625,
0.606875061333133]
#newArr1=[]
newArr2=[]
newArr3=[]
#for item in results_rmse:
#newArr1.append(float(item))
for item in results_mae:
newArr2.append(float(item))
for item in results_std:
newArr3.append(float(item))
#rmse_val = newArr1
mae_val = newArr2
std_val= newArr3
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
modelNames= ['M1','M2','M3','M4','M5','M6']
df = pd.DataFrame(mae_val,index=modelNames,columns=['MAE'])
dfStd = pd.DataFrame(std_val,index=modelNames,columns=['StdDev'])
dfStd
```
# Artificial Intelligence Nanodegree
## Convolutional Neural Networks
---
In this notebook, we train an MLP to classify images from the MNIST database.
### 1. Load MNIST Database
```
from keras.datasets import mnist
# use Keras to import pre-shuffled MNIST database
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print("The MNIST database has a training set of %d examples." % len(X_train))
print("The MNIST database has a test set of %d examples." % len(X_test))
```
### 2. Visualize the First Six Training Images
```
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib.cm as cm
import numpy as np
# plot first six training images
fig = plt.figure(figsize=(20,20))
for i in range(6):
ax = fig.add_subplot(1, 6, i+1, xticks=[], yticks=[])
ax.imshow(X_train[i], cmap='gray')
ax.set_title(str(y_train[i]))
```
### 3. View an Image in More Detail
```
def visualize_input(img, ax):
ax.imshow(img, cmap='gray')
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
ax.annotate(str(round(img[x][y],2)), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if img[x][y]<thresh else 'black')
fig = plt.figure(figsize = (12,12))
ax = fig.add_subplot(111)
visualize_input(X_train[0], ax)
```
### 4. Rescale the Images by Dividing Every Pixel in Every Image by 255
```
# rescale [0,255] --> [0,1]
X_train = X_train.astype('float32')/255
X_test = X_test.astype('float32')/255
```
### 5. Encode Categorical Integer Labels Using a One-Hot Scheme
```
from keras.utils import np_utils
# print first ten (integer-valued) training labels
print('Integer-valued labels:')
print(y_train[:10])
# one-hot encode the labels
y_train = np_utils.to_categorical(y_train, 10)
y_test = np_utils.to_categorical(y_test, 10)
# print first ten (one-hot) training labels
print('One-hot labels:')
print(y_train[:10])
```
### 6. Define the Model Architecture
```
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
# define the model
model = Sequential()
model.add(Flatten(input_shape=X_train.shape[1:]))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
# summarize the model
model.summary()
```
### 7. Compile the Model
```
# compile the model
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
metrics=['accuracy'])
```
### 8. Calculate the Classification Accuracy on the Test Set (Before Training)
```
# evaluate test accuracy
score = model.evaluate(X_test, y_test, verbose=0)
accuracy = 100*score[1]
# print test accuracy
print('Test accuracy: %.4f%%' % accuracy)
```
### 9. Train the Model
```
from keras.callbacks import ModelCheckpoint
# train the model
checkpointer = ModelCheckpoint(filepath='mnist.model.best.hdf5',
verbose=1, save_best_only=True)
hist = model.fit(X_train, y_train, batch_size=128, epochs=10,
validation_split=0.2, callbacks=[checkpointer],
verbose=1, shuffle=True)
```
### 10. Load the Model with the Best Classification Accuracy on the Validation Set
```
# load the weights that yielded the best validation accuracy
model.load_weights('mnist.model.best.hdf5')
```
### 11. Calculate the Classification Accuracy on the Test Set
```
# evaluate test accuracy
score = model.evaluate(X_test, y_test, verbose=0)
accuracy = 100*score[1]
# print test accuracy
print('Test accuracy: %.4f%%' % accuracy)
```
Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks.
- Author: Sebastian Raschka
- GitHub Repository: https://github.com/rasbt/deeplearning-models
---
```
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
```
# MobileNet-v3 (large) trained on Cifar-10
## Imports
```
import torch
import torchvision
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.path.insert(0, "..") # to include ../helper_evaluate.py etc.
# From local helper files
from helper_utils import set_all_seeds, set_deterministic
from helper_evaluate import compute_confusion_matrix, compute_accuracy
from helper_train import train_classifier_simple_v2
from helper_plotting import plot_training_loss, plot_accuracy, show_examples, plot_confusion_matrix
from helper_data import get_dataloaders_cifar10, UnNormalize
```
## Settings and Dataset
```
##########################
### SETTINGS
##########################
RANDOM_SEED = 123
BATCH_SIZE = 128
NUM_EPOCHS = 150
DEVICE = torch.device('cuda:3' if torch.cuda.is_available() else 'cpu')
set_all_seeds(RANDOM_SEED)
#set_deterministic()
##########################
### CIFAR-10 DATASET
##########################
### Note: Network trains about 2-3x faster if you don't
# resize (keeping the orig. 32x32 res.)
# Test acc. I got via the 32x32 was lower though; ~77%
train_transforms = torchvision.transforms.Compose([
torchvision.transforms.Resize((70, 70)),
torchvision.transforms.RandomCrop((64, 64)),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
test_transforms = torchvision.transforms.Compose([
torchvision.transforms.Resize((70, 70)),
torchvision.transforms.CenterCrop((64, 64)),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
train_loader, valid_loader, test_loader = get_dataloaders_cifar10(
batch_size=BATCH_SIZE,
validation_fraction=0.1,
train_transforms=train_transforms,
test_transforms=test_transforms,
num_workers=2)
# Checking the dataset
for images, labels in train_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
print('Class labels of 10 examples:', labels[:10])
break
plt.figure(figsize=(8, 8))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(torchvision.utils.make_grid(images[:64],
padding=2, normalize=True),
(1, 2, 0)))
```
## Model
```
##########################
### MODEL
##########################
model = torch.hub.load('pytorch/vision:v0.9.0', 'mobilenet_v3_large',
pretrained=False)
model.classifier[-1] = torch.nn.Linear(in_features=1280, # as in original
out_features=10) # number of class labels in CIFAR-10
model = model.to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
minibatch_loss_list, train_acc_list, valid_acc_list = train_classifier_simple_v2(
model=model,
num_epochs=NUM_EPOCHS,
train_loader=train_loader,
valid_loader=valid_loader,
test_loader=test_loader,
optimizer=optimizer,
best_model_save_path='mobilenet-v2-best-1.pt',
device=DEVICE,
scheduler_on='valid_acc',
logging_interval=100)
plot_training_loss(minibatch_loss_list=minibatch_loss_list,
num_epochs=NUM_EPOCHS,
iter_per_epoch=len(train_loader),
results_dir=None,
averaging_iterations=200)
plt.show()
plot_accuracy(train_acc_list=train_acc_list,
valid_acc_list=valid_acc_list,
results_dir=None)
plt.ylim([60, 100])
plt.show()
model.load_state_dict(torch.load('mobilenet-v2-best-1.pt'))
model.eval()
test_acc = compute_accuracy(model, test_loader, device=DEVICE)
print(f'Test accuracy: {test_acc:.2f}%')
model.cpu()
unnormalizer = UnNormalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
class_dict = {0: 'airplane',
1: 'automobile',
2: 'bird',
3: 'cat',
4: 'deer',
5: 'dog',
6: 'frog',
7: 'horse',
8: 'ship',
9: 'truck'}
show_examples(model=model, data_loader=test_loader, unnormalizer=unnormalizer, class_dict=class_dict)
mat = compute_confusion_matrix(model=model, data_loader=test_loader, device=torch.device('cpu'))
plot_confusion_matrix(mat, class_names=class_dict.values())
plt.show()
```
```
import numpy as np
from numba import jit
from scipy import ndimage
from osgeo import gdal, osr, ogr
import matplotlib.pyplot as plt
plt.style.use('default')
@jit(nopython=True)
def np_mean(neighborhood):
return np.nanmean(neighborhood)
def lonlat_to_utm(lon, lat):
if lat < 0:
return int(32700 + np.floor((180+lon)/6) + 1)
else:
return int(32600 + np.floor((180+lon)/6) + 1)
def generate_contours(dem_band, dem_nodata, interval=50):
contours = ogr.GetDriverByName('Memory').CreateDataSource('')
contours_lyr = contours.CreateLayer('contour',
geom_type=ogr.wkbLineString25D,
srs=srs)
field_defn = ogr.FieldDefn('ID', ogr.OFTInteger)
contours_lyr.CreateField(field_defn)
field_defn = ogr.FieldDefn('elev', ogr.OFTReal)
contours_lyr.CreateField(field_defn)
# Generate contours at the given elevation interval
gdal.ContourGenerate(dem_band, interval, 0, [], 1,
dem_nodata, contours_lyr, 0, 1)
features = []
for fc in contours_lyr:
features.append(fc)
return features
in_dem = r"..\Data\N46E009.hgt"
dem_ds = gdal.Open(in_dem)
prj = dem_ds.GetProjection()
srs = osr.SpatialReference(wkt=prj)
gt = dem_ds.GetGeoTransform()
xmin, xpixel, _, ymax, _, ypixel = gt
# Project to UTM zone
utm_epsg = lonlat_to_utm(xmin, ymax)
dem_ds = gdal.Warp("", dem_ds, format="MEM", dstSRS=f'EPSG:{utm_epsg}')
# Get SRS
prj = dem_ds.GetProjection()
srs = osr.SpatialReference(wkt=prj)
# Get the new gt
gt = dem_ds.GetGeoTransform()
xmin, xpixel, _, ymax, _, ypixel = gt
dem_band = dem_ds.GetRasterBand(1)
dem_nodata = dem_band.GetNoDataValue()
width, height = dem_ds.RasterXSize, dem_ds.RasterYSize
xmax = xmin + width * xpixel
ymin = ymax + height * ypixel
# Get the array
dem_array = dem_band.ReadAsArray().astype(float)
dem_array[dem_array == dem_nodata] = np.nan
# Apply a local filter where the resulting value is the mean of the neighborhood
kernel_size = 9
filtered_dem = ndimage.generic_filter(dem_array,
np_mean,
size=kernel_size,
cval=np.nan)
# Create a layer from the filtered_dem
srs = osr.SpatialReference()
srs.ImportFromWkt(prj)
filtered_dem[np.isnan(filtered_dem)] = dem_nodata
driver = gdal.GetDriverByName("MEM")
filtered_dem_ds = driver.Create("", width, height, 1, gdal.GDT_Float32)
filtered_dem_ds.GetRasterBand(1).WriteArray(filtered_dem)
filtered_dem_ds.GetRasterBand(1).SetNoDataValue(dem_nodata)
filtered_dem_ds.SetGeoTransform(gt)
filtered_dem_ds.SetProjection(srs.ExportToWkt())
# Create a layer for non-smoothed contours
contours_nogen = generate_contours(dem_ds.GetRasterBand(1), dem_nodata)
# Create a layer for smoothed contours
contours_gen = generate_contours(filtered_dem_ds.GetRasterBand(1), dem_nodata)
# Plot the contours
xlim_min, xlim_max = 530000, 550000
ylim_min, ylim_max = 5102000, 5125000
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10))
ax1.get_xaxis().set_visible(False)
ax1.get_yaxis().set_visible(False)
ax2.get_xaxis().set_visible(False)
ax2.get_yaxis().set_visible(False)
# Plot non-smoothed contours
for fc in contours_nogen:
geom = fc.GetGeometryRef()
elev = fc.GetField("elev")
pts = np.array([(pt[0], pt[1]) for pt in geom.GetPoints()])
if elev % 250 == 0:
lw = 0.3
else:
lw = 0.1
ax1.plot(pts[:, 0], pts[:, 1], lw=lw, c=(0,0,0, 0.75))
# Plot smoothed contours
for fc in contours_gen:
geom = fc.GetGeometryRef()
elev = fc.GetField("elev")
pts = np.array([(pt[0], pt[1]) for pt in geom.GetPoints()])
if elev % 250 == 0:
lw = 0.3
else:
lw = 0.1
ax2.plot(pts[:, 0], pts[:, 1], lw=lw, c=(0, 0, 0, 0.75))
ax1.set_xlim(xlim_min, xlim_max)
ax1.set_ylim(ylim_min, ylim_max)
ax1.set_aspect(True)
ax2.set_xlim(xlim_min, xlim_max)
ax2.set_ylim(ylim_min, ylim_max)
ax2.set_aspect(True)
plt.savefig("contours_gdal.jpg",
dpi=300,
bbox_inches="tight")
plt.show()
```
<a href="https://colab.research.google.com/github/Rajansharma05/A-mobile-based-photo-editing-app/blob/master/Copy_of_Copy_of_Welcome_to_Colaboratory.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<p><img alt="Colaboratory logo" height="45px" src="/img/colab_favicon.ico" align="left" hspace="10px" vspace="0px"></p>
<h1>What is Colaboratory?</h1>
Colaboratory, or 'Colab' for short, allows you to write and execute Python in your browser, with
- Zero configuration required
- Free access to GPUs
- Easy sharing
Whether you're a <strong>student</strong>, a <strong>data scientist</strong> or an <strong>AI researcher</strong>, Colab can make your work easier. Watch <a href="https://www.youtube.com/watch?v=inN8seMm7UI">Introduction to Colab</a> to find out more, or just get started below!
```
x = [1,2,3,4]
y = [3,4,7,10]
n = len(x)
xm = sum(x)/n
ym = sum(y)/n
num = 0
dem = 0
for i in range(0, n):
    num = num + (y[i] - ym) * (x[i] - xm)
    dem = dem + (x[i] - xm) ** 2
m = num / dem
print("Slope of the line : ", m)
c = ym - (xm * m )
print("constant of the linear regression : ", c)
```
Linear regression
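The cell above implements ordinary least squares by hand. As a quick sanity check (assuming `numpy` is available in the runtime), the same slope and intercept can be recovered with `numpy.polyfit`:

```
import numpy as np

x = [1, 2, 3, 4]
y = [3, 4, 7, 10]

# a degree-1 polynomial fit returns [slope, intercept]
m, c = np.polyfit(x, y, 1)
print(m, c)  # slope ≈ 2.4, intercept ≈ 0.0
```

Because `polyfit` minimizes the same squared-error objective, it agrees with the hand-rolled loop above up to floating-point error.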
## <strong>Getting started</strong>
The document that you are reading is not a static web page, but an interactive environment called a <strong>Colab notebook</strong> that lets you write and execute code.
For example, here is a <strong>code cell</strong> with a short Python script that computes a value, stores it in a variable and prints the result:
```
seconds_in_a_day = 24 * 60 * 60
seconds_in_a_day
```
To execute the code in the above cell, select it with a click and then either press the play button to the left of the code, or use the keyboard shortcut 'Command/Ctrl+Enter'. To edit the code, just click the cell and start editing.
Variables that you define in one cell can later be used in other cells:
```
seconds_in_a_week = 7 * seconds_in_a_day
seconds_in_a_week
```
Colab notebooks allow you to combine <strong>executable code</strong> and <strong>rich text</strong> in a single document, along with <strong>images</strong>, <strong>HTML</strong>, <strong>LaTeX</strong> and more. When you create your own Colab notebooks, they are stored in your Google Drive account. You can easily share your Colab notebooks with co-workers or friends, allowing them to comment on your notebooks or even edit them. To find out more, see <a href="/notebooks/basic_features_overview.ipynb">Overview of Colab</a>. To create a new Colab notebook you can use the File menu above, or use the following link: <a href="http://colab.research.google.com#create=true">Create a new Colab notebook</a>.
Colab notebooks are Jupyter notebooks that are hosted by Colab. To find out more about the Jupyter project, see <a href="https://www.jupyter.org">jupyter.org</a>.
## Data science
With Colab you can harness the full power of popular Python libraries to analyse and visualise data. The code cell below uses <strong>numpy</strong> to generate some random data, and uses <strong>matplotlib</strong> to visualise it. To edit the code, just click the cell and start editing.
```
import numpy as np
from matplotlib import pyplot as plt
ys = 200 + np.random.randn(100)
x = [x for x in range(len(ys))]
plt.plot(x, ys, '-')
plt.fill_between(x, ys, 195, where=(ys > 195), facecolor='g', alpha=0.6)
plt.title("Sample Visualization")
plt.show()
```
You can import your own data into Colab notebooks from your Google Drive account, including from spreadsheets, as well as from GitHub and many other sources. To find out more about importing data, and how Colab can be used for data science, see the links below under <a href="#working-with-data">Working with data</a>.
## Machine learning
With Colab you can import an image dataset, train an image classifier on it, and evaluate the model, all in just <a href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/beginner.ipynb">a few lines of code</a>. Colab notebooks execute code on Google's cloud servers, meaning you can leverage the power of Google hardware, including <a href="#using-accelerated-hardware">GPUs and TPUs</a>, regardless of the power of your machine. All you need is a browser.
Colab is used extensively in the machine learning community with applications including:
- Getting started with TensorFlow
- Developing and training neural networks
- Experimenting with TPUs
- Disseminating AI research
- Creating tutorials
To see sample Colab notebooks that demonstrate machine learning applications, see the <a href="#machine-learning-examples">machine learning examples</a> below.
## More resources
### Working with notebooks in Colab
- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)
- [Guide to markdown](/notebooks/markdown_guide.ipynb)
- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)
- [Saving and loading notebooks in GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb)
- [Interactive forms](/notebooks/forms.ipynb)
- [Interactive widgets](/notebooks/widgets.ipynb)
- <img src="/img/new.png" height="20px" align="left" hspace="4px" alt="New"></img>
[TensorFlow 2 in Colab](/notebooks/tensorflow_version.ipynb)
<a name="working-with-data"></a>
### Working with data
- [Loading data: Drive, Sheets and Google Cloud Storage](/notebooks/io.ipynb)
- [Charts: visualising data](/notebooks/charts.ipynb)
- [Getting started with BigQuery](/notebooks/bigquery.ipynb)
### Machine learning crash course
These are a few of the notebooks from Google's online machine learning course. See the <a href="https://developers.google.com/machine-learning/crash-course/">full course website</a> for more.
- [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb)
- [TensorFlow concepts](/notebooks/mlcc/tensorflow_programming_concepts.ipynb)
- [First steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb)
- [Intro to neural nets](/notebooks/mlcc/intro_to_neural_nets.ipynb)
- [Intro to sparse data and embeddings](/notebooks/mlcc/intro_to_sparse_data_and_embeddings.ipynb)
<a name="using-accelerated-hardware"></a>
### Using accelerated hardware
- [TensorFlow with GPUs](/notebooks/gpu.ipynb)
- [TensorFlow with TPUs](/notebooks/tpu.ipynb)
<a name="machine-learning-examples"></a>
## Machine learning examples
To see end-to-end examples of the interactive machine-learning analyses that Colaboratory makes possible, take a look at these tutorials using models from <a href="https://tfhub.dev">TensorFlow Hub</a>.
A few featured examples:
- <a href="https://tensorflow.org/hub/tutorials/tf2_image_retraining">Retraining an Image Classifier</a>: Build a Keras model on top of a pre-trained image classifier to distinguish flowers.
- <a href="https://tensorflow.org/hub/tutorials/tf2_text_classification">Text Classification</a>: Classify IMDB film reviews as either <em>positive</em> or <em>negative</em>.
- <a href="https://tensorflow.org/hub/tutorials/tf2_arbitrary_image_stylization">Style Transfer</a>: Use deep learning to transfer style between images.
- <a href="https://tensorflow.org/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa">Multilingual Universal Sentence Encoder Q&A</a>: Use a machine-learning model to answer questions from the SQuAD dataset.
- <a href="https://tensorflow.org/hub/tutorials/tweening_conv3d">Video Interpolation</a>: Predict what happened in a video between the first and the last frame.
# Prepare Data for TPZ
* query GCR with the same cuts we used for BPZ
* deredden the magnitudes
* fill in missing values
* reformat to TPZ output
## Query GCR with the same cuts we used for BPZ
```
# everything we need for the whole notebook
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import dustmaps
from dustmaps.sfd import SFDQuery
from astropy.coordinates import SkyCoord
from dustmaps.config import config
import GCRCatalogs
from GCR import GCRQuery
object_cat = GCRCatalogs.load_catalog('dc2_object_run2.2i_dr6a')
tract_ids = [2731, 2904, 2906, 3081, 3082, 3084,
             3262, 3263, 3265, 3448, 3450, 3831,
             3832, 3834, 4029, 4030, 4031, 2905,
             3083, 3264, 3449, 3833]
basic_cuts = [
    GCRQuery('extendedness > 0'),      # Extended objects
    GCRQuery((np.isfinite, 'mag_i')),  # Select objects that have i-band magnitudes
    GCRQuery('clean'),                 # The source has no flagged pixels (interpolated, saturated, edge, clipped...)
                                       # and was not skipped by the deblender
    GCRQuery('xy_flag == 0'),          # Bad centroiding
    GCRQuery('snr_i_cModel >= 10'),
    GCRQuery('detect_isPrimary'),      # (from this and below) basic flag cuts
    ~GCRQuery('deblend_skipped'),
    ~GCRQuery('base_PixelFlags_flag_edge'),
    ~GCRQuery('base_PixelFlags_flag_interpolatedCenter'),
    ~GCRQuery('base_PixelFlags_flag_saturatedCenter'),
    ~GCRQuery('base_PixelFlags_flag_crCenter'),
    ~GCRQuery('base_PixelFlags_flag_bad'),
    ~GCRQuery('base_PixelFlags_flag_suspectCenter'),
    ~GCRQuery('base_PixelFlags_flag_clipped')
]
mag_filters = [
    (np.isfinite, 'mag_i'),
    'mag_i < 25.',
]
allcols = ['ra', 'dec', 'blendedness', 'extendedness', 'clean', 'objectId', 'tract', 'patch']
for band in ('u', 'g', 'r', 'i', 'z', 'y'):
    allcols.append(f"mag_{band}_cModel")
    allcols.append(f"magerr_{band}_cModel")
    allcols.append(f"snr_{band}_cModel")
print(allcols)
object_df_list = []
for i in tract_ids:
    print("reading tract {}".format(i))
    object_data = object_cat.get_quantities(allcols,
                                            filters=basic_cuts + mag_filters,
                                            native_filters=['tract == {}'.format(i)])
    object_df_list.append(pd.DataFrame(object_data))
coadd_df = pd.concat(object_df_list)
```
## Deredden the magnitudes
```
band_a_ebv = np.array([4.81,3.64,2.70,2.06,1.58,1.31])
filters = ['u','g','r','i','z','y']
coords = SkyCoord(coadd_df['ra'], coadd_df['dec'], unit='deg', frame='fk5')
sfd = SFDQuery()
ebvvec = sfd(coords)
coadd_df['ebv'] = ebvvec
for ebv, filt in zip(band_a_ebv, filters):
    coadd_df['mag_{}_cModel_dered'.format(filt)] = coadd_df['mag_{}_cModel'.format(filt)] - coadd_df['ebv']*ebv
```
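The dereddening step above is a per-band linear correction: the dereddened magnitude is the observed magnitude minus the band coefficient times the line-of-sight E(B-V). A minimal sketch with one made-up object (the 4.81 u-band coefficient comes from the cell above; the E(B-V) value here is invented for illustration):

```
# hypothetical object: observed u-band magnitude and line-of-sight E(B-V)
mag_u_obs = 24.00
ebv = 0.05   # invented E(B-V), for illustration only
a_u = 4.81   # u-band coefficient per unit E(B-V), from band_a_ebv above

mag_u_dered = mag_u_obs - a_u * ebv
print(mag_u_dered)  # ≈ 23.76
```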
## Fill in missing values
```
bands = ['u','g','r','i','z','y']
patches = np.unique(coadd_df['patch'])
def impute_nan_mags(df, verbose=False):
    for ii, band in enumerate(bands):
        #add dereddened magnitudes and re-calculate log version of errors
        # deredden_mag = ebv_vec*band_a_ebv[ii]
        # cmod_dered = df[f"mag_{band}_cModel"] - deredden_mag
        # df[f"cModel_{band}_obj_dered"] = cmod_dered
        invsn = 1. / df[f"snr_{band}_cModel"]
        logmagerr = 2.5 * np.log10(1. + invsn)
        df[f"magerrlog_{band}_dered"] = logmagerr
        #now, replace the non-detections in each band: set the mag to the
        #1 sigma limit determined from 1 sigma objects in the same band.
        #do this patch by patch to partially account for variable depth/conditions!
        siglow = 0.73
        sighigh = 0.77
        defaultlimmag = 25.8
        for patch in patches:
            goodpatch = True
            sigselect = ((df[f'magerrlog_{band}_dered'] > siglow) & (df[f'magerrlog_{band}_dered'] < sighigh)
                         & (df['patch'] == patch)
                         & (np.isfinite(df[f'mag_{band}_cModel_dered'])))
            if np.sum(sigselect) == 0:
                #widen the selection window and try again
                siglow = 0.71
                sighigh = 0.79
                sigselect = ((df[f'magerrlog_{band}_dered'] > siglow) & (df[f'magerrlog_{band}_dered'] < sighigh)
                             & (df['patch'] == patch)
                             & (np.isfinite(df[f'mag_{band}_cModel_dered'])))
                if np.sum(sigselect) == 0:
                    print(f"bad data in patch {patch} for band {band}: no 1 sig objects, put in hard coded 25.8 limit")
                    goodpatch = False
            if verbose: print(f"{np.sum(sigselect)} total in cut for patch {patch}")
            if goodpatch:
                sigmag = df[f'mag_{band}_cModel_dered'][sigselect]
                limmag = np.median(sigmag)
                defaultlimmag = limmag
            else:
                limmag = 25.8  #hardcoded temp solution
            if verbose: print(f"1 sigma mag for patch {patch} in band {band} is {limmag}")
            #find all NaN and Inf and replace mag with the 1 sigma limit and magerr with 0.7525
            nondet = ((~np.isfinite(df[f'mag_{band}_cModel_dered']) | (~np.isfinite(df[f'magerrlog_{band}_dered'])))
                      & (df['patch'] == patch))
            #use .loc to avoid pandas chained-assignment warnings
            df.loc[nondet, f'mag_{band}_cModel_dered'] = limmag  #changed by ih
            df.loc[nondet, f'magerrlog_{band}_dered'] = .7525  #changed by ih
            if verbose: print(f"replacing inf and nan for {np.sum(nondet)} {band} band non-detections for patch {patch}")
    return df
impute_coadd_df = impute_nan_mags(coadd_df, verbose=True)
f, ax = plt.subplots(nrows=2, ncols=3)
for a, band in zip(ax.flatten(), bands):
    a.scatter(impute_coadd_df['mag_{}_cModel_dered'.format(band)],
              impute_coadd_df['magerrlog_{}_dered'.format(band)], s=5, alpha=.3,
              label='{}'.format(band))
    a.legend()
ax[1,1].set_xlabel('Magnitude')
ax[0,0].set_ylabel('Magnitude error')
```
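The `siglow`/`sighigh` window of 0.73-0.77 used above comes directly from the log error formula in `impute_nan_mags`: an object detected at exactly SNR = 1 has magerr = 2.5·log10(1 + 1/SNR) ≈ 0.7526, so selecting errors near 0.75 picks out roughly 1-sigma detections. A sketch of that relation:

```
import numpy as np

def log_magerr(snr):
    # same formula as in impute_nan_mags: 2.5 * log10(1 + 1/SNR)
    return 2.5 * np.log10(1.0 + 1.0 / snr)

print(log_magerr(1.0))   # ≈ 0.7526, inside the 0.73-0.77 selection window
print(log_magerr(10.0))  # ≈ 0.1035, a well-detected object
```

This is also why non-detections get their error set to the fixed value 0.7525 above.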
## Reformat for TPZ input
```
len(impute_coadd_df)
features = {
    'u': impute_coadd_df['mag_u_cModel_dered'].values,
    'g': impute_coadd_df['mag_g_cModel_dered'].values,
    'r': impute_coadd_df['mag_r_cModel_dered'].values,
    'i': impute_coadd_df['mag_i_cModel_dered'].values,
    'z': impute_coadd_df['mag_z_cModel_dered'].values,
    'y': impute_coadd_df['mag_y_cModel_dered'].values,
    'u-g': impute_coadd_df['mag_u_cModel_dered'].values - impute_coadd_df['mag_g_cModel_dered'].values,
    'g-r': impute_coadd_df['mag_g_cModel_dered'].values - impute_coadd_df['mag_r_cModel_dered'].values,
    'r-i': impute_coadd_df['mag_r_cModel_dered'].values - impute_coadd_df['mag_i_cModel_dered'].values,
    'i-z': impute_coadd_df['mag_i_cModel_dered'].values - impute_coadd_df['mag_z_cModel_dered'].values,
    'z-y': impute_coadd_df['mag_z_cModel_dered'].values - impute_coadd_df['mag_y_cModel_dered'].values,
    'eu': impute_coadd_df['magerrlog_u_dered'].values,
    'eg': impute_coadd_df['magerrlog_g_dered'].values,
    'er': impute_coadd_df['magerrlog_r_dered'].values,
    'ei': impute_coadd_df['magerrlog_i_dered'].values,
    'ez': impute_coadd_df['magerrlog_z_dered'].values,
    'ey': impute_coadd_df['magerrlog_y_dered'].values,
    'ra': impute_coadd_df['ra'].values,
    'dec': impute_coadd_df['dec'].values,
    'objectId': impute_coadd_df['objectId'].values
}
pd.DataFrame(features).to_csv('/global/cfs/cdirs/lsst/groups/PZ/PZBLEND/Run2.2i_dr6_dered_test.txt',
                              sep=' ', index=False, header=False, float_format='%.7g')
```
# making an aggregate master dataframe for baseline model 10/22/18 - 11/6/18
This notebook makes standardized long-format dataframes from each source dataframe, which I will then adjust for each model.
For my first pass, I will work to establish a baseline model by using the single "worst" value for each variable, i.e. the value that most indicates a poor clinical outcome, so each variable has only one row per patient.
- make a long-format table (i.e. variable, patient, time, value)
- step 1: standardize all columns, formats, etc.
- step 2: maybe make a long-format table for each dataframe
- step 3: impute
- combine the features from each long table into one wide table (i.e. each patient has a row, each parameter has a column)
- feature-select for the "clinical worst case"
- establish a baseline (i.e. train the model initially) using an obvious choice: the last valid measurement of each variable. Hopefully it doesn't perform too well or too poorly. A second option would be to pick an aggregate within a time window (over 3 days, or of each day, i.e. the granularity can be changed).
- next, try a temporal trend, maybe vector autoregression
So my first step will be to pick either the last recorded value for each variable or the "worst" value, i.e. the one we might expect to indicate a poor outcome (NEED TO CHOOSE)
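As a toy sketch of that long-to-wide step (the column and label names here are hypothetical stand-ins, not the actual MIMIC labels), a "worst value" per patient/variable can be taken with a groupby aggregate and then pivoted:

```
import pandas as pd

# hypothetical long-format table: one row per (patient, variable, time) measurement
long_df = pd.DataFrame({
    'icustay_id': [1, 1, 1, 2, 2],
    'label':      ['lactate', 'lactate', 'creatinine', 'lactate', 'creatinine'],
    'value':      [2.0, 4.5, 1.1, 1.8, 0.9],
})

# "worst" taken here as the max per patient/variable
# (in practice the direction depends on the variable)
worst = long_df.groupby(['icustay_id', 'label'])['value'].max().reset_index()
wide = worst.pivot(index='icustay_id', columns='label', values='value')
print(wide)
```

Swapping `.max()` for `.last()` after a sort on time would give the "last valid measurement" baseline instead.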
## last run:
* (1) 6/9/19: sensitivity analysis, 1-day time window
* (2) 11/9/19: rerun 72-hour time window because the 72 hr run had a low patient count
```
import pandas as pd
import matplotlib.pyplot as plt
import os
from pathlib import Path
import seaborn as sns
import numpy as np
import glob
from joblib import Memory  # sklearn.externals.joblib is deprecated; use joblib directly
memory = Memory(location='/tmp', verbose=0)
#@memory.cache above any def fxn.
%matplotlib inline
plt.style.use('ggplot')
from notebook.services.config import ConfigManager
cm = ConfigManager()
cm.update('livereveal', {
    'width': 1024,
    'height': 768,
    'scroll': True,
})
%load_ext autotime
#cohort import
os.chdir('/Users/geickelb1/Documents/GitHub/mimiciii-antibiotics-modeling') #use to change working directory
wd= os.getcwd() #'/Users/geickelb1/Documents/GitHub/mimiciii-antibiotics-modeling'
most_updated_patient_df= "04042019"
final_pt_df2 = pd.read_csv('/Users/geickelb1/Documents/GitHub/mimiciii-antibiotics-modeling/data/raw/csv/%s_final_pt_df2.csv'%(most_updated_patient_df), index_col=0) #only for patients with minimum vitals
patients= list(final_pt_df2['subject_id'].unique())
hadm_id= list(final_pt_df2['hadm_id'].unique())
icustay_id= list(final_pt_df2['icustay_id'].unique())
icustay_id= [int(x) for x in icustay_id]
final_pt_df2['icustay_id'].nunique() #14478
#import all clinical variables
##ensure they are the versions with UOM
#importing in all clinical_variable files
# ##24 hr sensitivity
# #importing in all clinical_variable files
lower_window=0
upper_window=1
time_col="charttime"
time_var="t_0"
folder="24_hr_window"
timewindowdays="24"
date= '09062019'
patient_df= final_pt_df2
# #48 hr sensitivity
# lower_window=0
# upper_window=2
# time_var="t_0"
# folder="48_hr_window"
# timewindowdays="48"
# date='16052019'
# time_col="charttime"
# time_var= 't_0'
# patient_df= final_pt_df2
# # 24 hr sensitivity
# lower_window=0
# upper_window=1
# time_var="t_0"
# folder="24_hr_window"
# date='04042019'
# timewindowdays="24"
# # # 48 hr sensitivity
# lower_window=0
# upper_window=2
# time_var="t_0"
# folder="48_hr_window"
# date='16052019'
# timewindowdays="48"
# # # 78 hr elixhauser-redo
# lower_window=0
# upper_window=3
# folder="72_hr_window"
# date='11062019'
# time_col="charttime"
# time_var= 't_0'
# timewindowdays="72"
os.chdir(r'/Users/geickelb1/Documents/GitHub/mimiciii-antibiotics-modeling/data/processed/') #folder with all prepped clinical data stored in date_file_prepped.csv format
allFiles = glob.glob(os.getcwd() + "/{}_*.csv".format(date))
os.getcwd()
allFiles #need to rerun 03.1-clinical_variables and have a new date to make it easier.
#making a dictionary of all my dataframes for easier cycling through
df_list=[]
for element in allFiles:
    df_list.append(element.split('{}_'.format(date))[1].split('_prepped.csv')[0])  #making a list of my dataframe names in the order they appear in the file listing
dfs = {}
for name, fname in zip(df_list, allFiles):
    dfs[name] = pd.read_csv(fname, index_col=0)
#all of the column names
for element in df_list:
    print(element, ':', list(dfs[element]))
```
## standardizing columns
#### adding icustay_id, dropping hadm_id
```
##dropping hadm_id from all:
list1 = []
for element in df_list:
    if 'hadm_id' in (list(dfs[element])):
        list1.append(element)
for element in list1:
    dfs[element] = dfs[element].drop('hadm_id', axis=1)

##dropping subject_id from all:
list1 = []
for element in df_list:
    if 'subject_id' in (list(dfs[element])):
        list1.append(element)
for element in list1:
    dfs[element] = dfs[element].drop('subject_id', axis=1)
#all of the column names
for element in df_list:
    print(element, ':', sorted(list(dfs[element])))
#dropping charttime, endtime and first_charttime
list1 = []
list2 = []
for element in df_list:
    if 'charttime' in (list(dfs[element])):
        list1.append(element)
    if 'endtime' in (list(dfs[element])):
        list2.append(element)
for element in list1:
    dfs[element] = dfs[element].drop('charttime', axis=1)
for element in list2:
    dfs[element] = dfs[element].drop('endtime', axis=1)
#dfs['rrt']= dfs['rrt'].drop('first_charttime', axis=1)
#converting valuenum and value to same label
list1 = []
for element in df_list:
    if 'valuenum' in (list(dfs[element])):
        list1.append(element)
for element in list1:
    dfs[element] = dfs[element].rename(index=str, columns={'valuenum': 'value'})
del(list1, list2)
def label_lower(df_name):
    dfs[df_name]['label'] = dfs[df_name]['label'].apply(lambda x: x.lower())

#turning all labels to lowercase
for element in df_list:
    label_lower(element)

#adding a df source table label to each df.
for element in df_list:
    dfs[element]['source'] = element

#adding a patient id to each
for element in df_list:
    dfs[element] = pd.merge(dfs[element], final_pt_df2[['icustay_id', 'subject_id']], how='left')

#all of the column names
for element in df_list:
    print(element, ':', sorted(list(dfs[element])))
pd.to_timedelta(dfs['norepinephrine']['delta']).describe()
```
# converting data formats
# looking at measured values
```
df_list
def value_viewer(df_name):
    return dfs[df_name]['label'].unique()
value_viewer('bg_ART')
value_viewer('bg_all')
#value_viewer('uti')
value_viewer('labs')
value_viewer('vitals')
value_viewer('pt_info')
list(dfs)
```
# combining data
```
set(value_viewer('labs')) & set(value_viewer('bg_all'))
set(value_viewer('labs')) & set(value_viewer('vitals'))
set(value_viewer('bg_all')) & set(value_viewer('vitals'))
# (dfs['labs'].loc[
# dfs['labs'].loc[:,'label']=='glucose', ['label','value']
# ]
# .groupby(['label'])
# .describe(percentiles=[.25, .5, .75,.95, .99])
# )
# (dfs['vitals'].loc[
# dfs['vitals'].loc[:,'label']=='glucose', ['label','value']
# ]
# .groupby(['label'])
# .describe(percentiles=[.25, .5, .75,.95, .99])
# )
# (dfs['bg_all'].loc[
# dfs['bg_all'].loc[:,'label']=='glucose', ['label','valuenum']
# ]
# .groupby(['label'])
# .describe(percentiles=[.25, .5, .75,.95, .99])
# )
```
### merging labs together
```
# lab_bg_vital= pd.concat([dfs['labs'],dfs['bg_all'],dfs['vitals']], sort=False).sort_values(['icustay_id','delta','label','source'], ascending=True)
# lab_bg_vital.head()
# #rounding timedeltas to the 2 minute mark.
# #pd.to_datetime(lab_bg_vital['delta'])#.dt.round('m')
# lab_bg_vital['delta']= pd.to_timedelta(lab_bg_vital['delta'])
# lab_bg_vital['delta']= pd.to_datetime(lab_bg_vital['delta']).dt.round('2min')
# lab_bg_vital['delta']= pd.to_timedelta(lab_bg_vital['delta'])
# #note this is more efficient than rounding timedeltas for some reason.
# lab_bg_vital.drop_duplicates(subset=['icustay_id','label','value','delta',], keep='last', inplace=False) #n=7001349 at 1 min, 6913527 at 2min rounding, vs 7222647 without.
# list(lab_bg_vital['label'].unique())
```
# combining all df
```
#testing combining all df's
##this may not be a useful exercise, but experimenting.
#big_df= pd.concat([dfs['labs'],dfs['bg_all'],dfs['vitals']], sort=False).sort_values(['icustay_id','delta','label','source'], ascending=True)
# making one big dataframe via pd. concat
big_df= pd.concat(dfs.values(), sort=False).sort_values(['icustay_id','delta','label','source'], ascending=True)
#converting delta to time delta, to datetime rounded to 2 minutes, and back to time delta (more efficient than rounding timedeltas)
big_df['delta']= pd.to_timedelta(big_df['delta'])
big_df['delta']= pd.to_datetime(big_df['delta']).dt.round('2min')
big_df['delta']= pd.to_timedelta(big_df['delta'])
len(big_df)
big_df= big_df.drop_duplicates(subset=['icustay_id','label','value','delta',], keep='last', inplace=False) #7638425 -> 7315304 at 2 min.
len(big_df)
#big_df['sum_elix']
big_df.groupby('label')['value'].describe() #14478 icustay_id's
big_df['label'].unique()
big_df[big_df['label']=='leukocyte'].head()
big_df['icustay_id'].nunique() #also odd this is not 14668
```
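The delta rounding above round-trips through datetime for speed; in recent pandas a timedelta Series can be rounded directly with `.dt.round`, which gives the same result (a sketch, assuming a reasonably modern pandas version):

```
import pandas as pd

# two example measurement offsets from t_0
deltas = pd.Series(pd.to_timedelta(['0 days 00:01:10', '0 days 00:03:40']))

# round to the nearest 2-minute mark in one step
rounded = deltas.dt.round('2min')
print(rounded)
```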
## last minute data cleaning
```
#removing firstpos else neg ssc col
big_df=big_df.loc[:,list(big_df.columns!="first_pos_else_neg_ssc")]
wd
date
pd.DataFrame(big_df).to_csv(Path(
wd+'/data/processed/merged/{}_longdf_preImp_{}.csv'.format(date,timewindowdays)))
del big_df
del dfs
```
#### making a patient missingness visualization
```
# #big_agg= big_df.groupby(['icustay_id','label'], as_index=False)['value'].agg(['min'])
# big_agg= big_df.groupby(['icustay_id','label'], as_index=False)['value'].size()
# big_agg_count= big_agg.reset_index().pivot(index='icustay_id',columns='label', values=0)#, levels='icustay_id')
# big_agg_count= big_agg.reset_index().pivot(index='icustay_id',columns='label', values=0)#, levels='icustay_id')
# big_agg_count
# sns.set(rc={'figure.figsize':(25,15)})
# #big_agg_min
# #%matplotlib inline
# sns.set(rc={'figure.figsize':(25,15)})
# big_agg_count= big_agg_count.fillna(0)
# big_agg_count = big_agg_count[big_agg_count.columns].astype(float)
# sns.heatmap(big_agg_count,vmin=0, vmax=1, cmap=sns.color_palette("RdBu_r", 5))
###f
# len(list(big_agg_count)) #38 columns.
# big_agg_count
# big_agg_count[big_agg_count>0] =1
# big_agg_pt_missing= big_agg_count.T.apply(lambda x:100*(len(list(big_agg_count))-sum(x))/len(list(big_agg_count)))
# big_agg_pt_missing= pd.DataFrame(big_agg_pt_missing).rename(index=str, columns={0:'%_of_values_missing'})
# big_agg_pt_missing.sort_values('%_of_values_missing',ascending=False).plot()
# len(big_agg_pt_missing)
# big_agg_pt_missing.describe()
```
# 04-08-19: need to visualize current data. need to see matrix of patient by time.
see 07.X-PatientTime for this
```
# list1=[]
# list2=[]
# for element in df_list:
# if 'value' in (list(dfs[element])):
# list1.append(element)
# if 'valuenum' in (list(dfs[element])):
# list2.append(element)
# for element in list2:
# print(dfs[element].groupby('label')['valuenum'].describe())
# for element in list1:
# print(dfs[element].groupby('label')['value'].describe())
# dfs['bg_ART'].groupby('label')['valuenum'].apply(lambda x: type(x))
```
### Primitive Data Types: Booleans
These are the basic data types that constitute all of the more complex data structures in python. The basic data types are the following:
* Strings (for text)
* Numeric types (integers and decimals)
* Booleans
### Booleans
Booleans represent the truth or success of a statement, and are commonly used for branching and checking status in code.
They can take two values: `True` or `False`.
```
bool_1 = True
bool_2 = False
print(bool_1)
print(bool_2)
```
If you remember from our strings session, we could execute a command that checks if a string appears within another. For example:
```
lookfor = "Trump"
text = """Three American prisoners freed from North Korea arrived here early
Thursday to a personal welcome from President Trump, who traveled to an air
base in the middle of the night to meet them."""
trump_in_text = lookfor in text
print("Does Trump appear in the text?", trump_in_text)
```
### Boolean Operations:
Frequently, one wants to combine or modify boolean values. Python has several operations for just this purpose:
+ `not a`: returns the opposite value of `a`.
+ `a and b`: returns true if and only if both `a` and `b` are true.
+ `a or b`: returns true if either `a` or `b` is true, or both.
See LPTHW [Exercise 27](http://learnpythonthehardway.org/book/ex27.html)
Like mathematical expressions, boolean expressions can be nested using parentheses.
```
var1 = 5
var2 = 6
var3 = 7
```
Consider the outcomes of the following examples
```
print(var1 + var2 == 11)
print(var2 + var3 == 13)
print(var1 + var2 == 11 and var2 + var3 == 13)
print(var1 + var2 == 12 and var2 + var3 == 13)
print(var1 + var2 == 12 or var2 + var3 == 13)
print((not var1 + var2 == 12) or (var2 + var3 == 14))
```
### Exercise
Complete Exercises 1-12 in [28](http://learnpythonthehardway.org/book/ex28.html) at LPTHW. You can find them also below. Try to find the outcome before executing the cell.
```
# 1
True and True
# 2
False and True
# 3
1 == 1 and 2 == 1
# 4
"test" == "test"
# 5
1 == 1 or 2 != 1
# 6
True and 1 == 1
# 7
False and 0 != 0
# 8
True or 1 == 1
# 9
"test" == "testing"
# 10
1 != 0 and 2 == 1
# 11
"test" != "testing"
# 12
"test" == 1
```
Now complete Exercises 13-20 in [28](http://learnpythonthehardway.org/book/ex28.html). But this time let's examine how to evaluate these expressions on a step-by-step basis.
```
# 13
not (True and False)
# 14
not (1 == 1 and 0 != 1)
# 15
not (10 == 1 or 1000 == 1000)
# 16
not (1 != 10 or 3 == 4)
# 17
not ("testing" == "testing" and "Zed" == "Cool Guy")
# 18
1 == 1 and (not ("testing" == 1 or 1 == 0))
# 19
"chunky" == "bacon" and (not (3 == 4 or 3 == 3))
# 20
3 == 3 and (not ("testing" == "testing" or "Python" == "Fun"))
# bonus
3 != 4 and not ("testing" != "test" or "Python" == "Python")
```
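As an illustration of the step-by-step evaluation described above, the first expression (#13) can be unpacked one operator at a time:

```
# 13: not (True and False), evaluated inside-out
inner = True and False   # the parenthesized part evaluates first -> False
result = not inner       # not False -> True
print(result)  # True
```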
### Exercise
Now let's try to write the boolean expressions that will evaluate different conditions, given a set of other variables.
```
# To drink alcohol, you need to be above 21 yo
age = 18
# your code here, replace "False" with an expression
can_drink_alcohol = False
print(f"Age: {age}; can drink alcohol? {can_drink_alcohol}")
# To get a driving license you need to be above 16 yo
age = 18
# your code here, replace "False" with an expression
can_get_driving_license = False
print(f"Age: {age}; can get driving license? {can_get_driving_license}")
# You need to be a US Citizen to have a passport
us_citizen = True
# your code here, replace "False" with an expression
can_get_us_passport = False
print(f"US Citizen: {us_citizen}; can get US passport? {can_get_us_passport}")
# You need to be above 18 and a US Citizen
age = 18
us_citizen = True
# your code here, replace "False" with an expression
can_vote = False
print(f"US Citizen: {us_citizen}; Age: {age}\nCan Vote? {can_vote}")
# You need to be above 35, a US Citizen, and born in the US
age = 70
born_in_us = True
us_citizen = True
# your code here, replace "False" with an expression
can_be_president = False
print(f"US Citizen: {us_citizen}; Age: {age}; Born in US? {born_in_us}")
print(f"Can be president? {can_be_president}")
# Can you become citizen?
# You qualify for a citizen if any of the following holds
# * Your parents are US Citizens and you are under 18
# * You have been born in the US
age = 19
parents_citizens = False
born_in_us = True
# your code here, replace "False" with an expression
citizen_eligible = False
print(f"Citizen parents: {parents_citizens}")
print(f"Age: {age}")
print(f"Born in US? {born_in_us}")
print(f"Eligible for Citizen? {citizen_eligible}")
```
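For reference, one possible expression for the first exercise above (the drinking-age check) is a direct comparison; the variable names match the exercise cell:

```
age = 18
# age must be 21 or above to drink alcohol
can_drink_alcohol = age >= 21
print(f"Age: {age}; can drink alcohol? {can_drink_alcohol}")  # False
```

The remaining exercises follow the same pattern, combining comparisons with `and`/`or` as needed.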
### Control Structures: if statements
Traversing over data and making decisions based upon data are a common aspect of every programming language, known as control flow. Python provides a rich control flow, with a lot of conveniences for the power users. Here, we're just going to talk about the basics, to learn more, please [consult the documentation](http://docs.python.org/2/tutorial/controlflow.html).
A common theme throughout this discussion of control structures is the notion of a "block of code." Blocks of code are **demarcated by a specific level of indentation**, typically separated from the surrounding code by some control structure elements, immediately preceded by a colon, `:`. We'll see examples below.
Finally, note that control structures can be nested arbitrarily, depending on the tasks you're trying to accomplish.
### if Statements:
**See also LPTHW, Exp 29, 30, and 31.**
If statements are perhaps the most widely used of all control structures. An if statement consists of a code block and an argument. The if statement evaluates the boolean value of its argument, executing the code block if that argument is true.
```
execute = True
if execute:
    print("Of course!")
    print("This will execute as well")

execute = False
if execute:
    print("Me? Nobody?")
    print("Really? Nobody?")
print("I am not nested, I will show up!")
```
And here is an `if` statement paired with an `else`.
```
lookfor = "Trump"
text = """
Three American prisoners freed from North Korea arrived here early Thursday
to a personal welcome from President Trump, who traveled to an air
base in the middle of the night to meet them.
"""
if lookfor in text:
    print(lookfor, "appears in the text")
else:
    print(lookfor, "does not appear in the text")
lookfor = "Obama"
text = """
Three American prisoners freed from North Korea arrived
here early Thursday to a personal welcome from President Trump,
who traveled to an air base in the middle of the night to meet them.
"""
if lookfor in text:
    print(lookfor, "appears in the text")
else:
    print(lookfor, "does not appear in the text")
```
Each argument in the above if statements is a boolean expression. Often you want to have alternatives, blocks of code that get evaluated in the event that the argument to an if statement is false. This is where **`elif`** (else if) and else come in.
An **`elif`** is evaluated if all preceding if or elif arguments have evaluated to false. The else statement is the last resort, assigning the code that gets executed if no if or elif above it is true. These statements are optional, and can be added to an if statement in any order, with at most one code block being evaluated. An else will always have its code executed if nothing above it is true.
```
status = "Senior"
if status == "Freshman":
    print("Hello newbie!")
    print("How's college treating you?")
elif status == "Sophomore":
    print("Welcome back!")
elif status == "Junior":
    print("Almost there, almost there")
elif status == "Senior":
    print("You can drink now! You will need it.")
elif status == "Senior":
    print("The secret of life is 42. But you will never see this")
else:
    print("Are you a graduate student? Or (gasp!) faculty?")
```
#### Exercise
* You need to be 21 years old and above to drink alcohol. Write a conditional expression that checks the age, and prints out whether the person is allowed to drink alcohol.
```
age = 20
if age >= 21:
    print("You are above 21, you can drink")
else:
    print("You are too young. Wait for", 21 - age, "years")
age = 18
if age >= 21:
    print("You are above 21, you can drink")
else:
    print("You are too young. Wait for", 21 - age, "years")
```
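One way to avoid repeating the same `if`/`else` for every age value is to wrap the check in a function (a sketch; the function name and default parameter are my own):

```python
def drinking_message(age, legal_age=21):
    # Return the message instead of printing it, so the logic is reusable
    if age >= legal_age:
        return "You are above 21, you can drink"
    return f"You are too young. Wait for {legal_age - age} years"

print(drinking_message(20))
print(drinking_message(18))
```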
* You need to be 16 years old and above to drive. If you are above 16, you also need to have a driving license. Write a conditional expression that checks the age and prints out: (a) whether the person is too young to drive, (b) whether the person satisfies the age criteria but needs a driving license, or (c) the person can drive.
```
age = 18
has_driving_license = False
if age < 16:
print("You are too young to drive")
else:
if has_driving_license:
print("You can drive")
else:
print("You are old enough to drive, but you need a driving license")
age = 15
has_driving_license = True
if age >= 16 and has_driving_license:
print("You can drive")
elif age >= 16 and not has_driving_license:
print("You are old enough to drive, but you need a driving license")
else:
print("You are too young to drive")
age = 18
has_driving_license = False
if age < 16:
print("You are too young to drive")
else:
if has_driving_license:
print("You can drive")
else:
print("You are old enough to drive, but you need a driving license")
age = 18
has_driving_license = False
if age >= 16 and has_driving_license:
print("You can drive")
elif age >= 16 and not has_driving_license:
print("You are old enough to drive, but you need a driving license")
else:
print("You are too young to drive")
```
* You need to be above 18 and a US Citizen, to be able to vote. You also need to be registered. Write the conditional expression that checks for these conditions and prints out whether the person can vote. If the person cannot vote, the code should print out the reason (below 18, or not a US citizen, or not registered, or a combination thereof).
```
age = 15
us_citizen = False
registered = True
if age >= 18 and us_citizen and registered:
print("You can vote")
else:
print("You cannot vote")
# Now let's explain the reason
if age < 18:
print("You are below 18")
if not us_citizen:
print("You are not a US citizen")
if not registered:
print("You are not registered")
age = 15
us_citizen = False
registered = True
if age >= 18 and us_citizen and registered:
print("You can vote")
else:
print("You cannot vote")
if age < 18:
print("You are below 18")
if not us_citizen:
print("You are not a US citizen")
if not registered:
print("You are not registered")
```
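The two-step pattern above (decide first, then explain) can also be written as a single function that collects every applicable reason (a sketch; the function name is my own):

```python
def voting_status(age, us_citizen, registered):
    # Collect every reason the person cannot vote
    reasons = []
    if age < 18:
        reasons.append("below 18")
    if not us_citizen:
        reasons.append("not a US citizen")
    if not registered:
        reasons.append("not registered")
    if not reasons:
        return "You can vote"
    return "You cannot vote: " + ", ".join(reasons)

print(voting_status(15, False, True))
```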
* You qualify for US citizen if any of the following holds: (a) Your parents are US Citizens and you are under 18, (b) You have been born in the US. Write the conditional expression that checks if a person is eligible to become a US citizen. If the person is not eligible, the code should print out the reason.
```
age = 15
parents_citizens = False
born_in_us = False
if (age < 18 and parents_citizens) or born_in_us:
print("You can become a US citizen")
else: # none of the two conditions around the or were True
print("You cannot become a US citizen")
if not born_in_us:
print("You were not born in the US")
if not (age < 18 and parents_citizens):
print("You need to be below 18 and your parents need to be citizens")
age = 16
parents_citizens = True
born_in_us = False
if (age < 18 and parents_citizens) or born_in_us:
print("You can become a US citizen")
else: # none of the conditions were true
if not born_in_us:
print("You were not born in the US")
if not (age < 18 and parents_citizens):
print("You need to be below 18 and your parents need to be citizens")
```
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
SPDX-License-Identifier: Apache-2.0
# Using the AWS Batch Architecture for AlphaFold
This notebook allows you to predict protein structures using AlphaFold on AWS Batch.
**Differences to AlphaFold Notebooks**
In comparison to AlphaFold v2.1.0, this notebook uses AWS Batch to submit protein analysis jobs to a scalable compute cluster. The accuracy should be the same as if you ran it locally. However, by using HPC services like AWS Batch and Amazon FSx for Lustre, we can support parallel job execution and optimize the resources for each run.
**Citing this work**
Any publication that discloses findings arising from using this notebook should [cite](https://github.com/deepmind/alphafold/#citing-this-work) the [AlphaFold paper](https://doi.org/10.1038/s41586-021-03819-2).
**Licenses**
Please refer to the `LICENSE` and `THIRD-PARTY-NOTICES` file for more information about third-party software/licensing.
## Table of Contents
0. [Install Dependencies](#0.-Install-Dependencies)
1. [Run a monomer analysis job](#1.-Run-a-monomer-analysis-job)
2. [Run a multimer analysis job](#2.-Run-a-multimer-analysis-job)
3. [Analyze multiple proteins](#3.-Analyze-multiple-proteins)
## 0. Install Dependencies
```
%pip install -r notebook-requirements.txt -q -q
# Import required Python packages
from Bio import SeqIO
from Bio.SeqRecord import SeqRecord
import boto3
from datetime import datetime
from IPython import display
from nbhelpers import nbhelpers
import os
import pandas as pd
import sagemaker
from time import sleep
pd.set_option("max_colwidth", None)
# Get client information
boto_session = boto3.session.Session()
sm_session = sagemaker.session.Session(boto_session)
region = boto_session.region_name
s3_client = boto_session.client("s3", region_name=region)
batch_client = boto_session.client("batch")
S3_BUCKET = sm_session.default_bucket()
print(f" S3 bucket name is {S3_BUCKET}")
```
If you have multiple AWS-AlphaFold stacks deployed in your account, list them here to decide which one to use. If no stack is specified, the `submit_batch_alphafold_job` function defaults to the first item in this list.
```
nbhelpers.list_alphafold_stacks()
```
## 1. Run a monomer analysis job
Provide sequences for analysis
```
## Enter the amino acid sequence to fold
id_1 = "T1084"
sequence_1 = "MAAHKGAEHHHKAAEHHEQAAKHHHAAAEHHEKGEHEQAAHHADTAYAHHKHAEEHAAQAAKHDAEHHAPKPH"
DB_PRESET = "reduced_dbs"
```
Validate the input and determine which models to use.
```
input_sequences = (sequence_1,)
input_ids = (id_1,)
input_sequences, model_preset = nbhelpers.validate_input(input_sequences)
sequence_length = len(input_sequences[0])
```
Upload input file to S3
```
job_name = nbhelpers.create_job_name()
object_key = nbhelpers.upload_fasta_to_s3(
input_sequences, input_ids, S3_BUCKET, job_name, region=region
)
```
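For reference, a FASTA payload like the one `upload_fasta_to_s3` presumably assembles from the id and sequence tuples can be sketched as follows (a hypothetical helper, not the actual `nbhelpers` implementation):

```python
def to_fasta(ids, sequences):
    # One ">id" header line followed by the sequence, in standard FASTA format
    return "".join(f">{seq_id}\n{seq}\n" for seq_id, seq in zip(ids, sequences))

print(to_fasta(("T1084",), ("MAAHKGAEHH",)))
```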
Submit two Batch jobs, the first one for the data prep and the second one (dependent on the first) for the structure prediction.
```
# Define resources for data prep and prediction steps
if DB_PRESET == "reduced_dbs":
prep_cpu = 4
prep_mem = 16
prep_gpu = 0
else:
prep_cpu = 16
prep_mem = 32
prep_gpu = 0
if sequence_length < 700:
predict_cpu = 4
predict_mem = 16
predict_gpu = 1
else:
predict_cpu = 16
predict_mem = 64
predict_gpu = 1
step_1_response = nbhelpers.submit_batch_alphafold_job(
job_name=str(job_name),
fasta_paths=object_key,
output_dir=job_name,
db_preset=DB_PRESET,
model_preset=model_preset,
s3_bucket=S3_BUCKET,
cpu=prep_cpu,
memory=prep_mem,
gpu=prep_gpu,
run_features_only=True,
#stack_name = "MyStack" # Update if you have multiple stacks deployed
)
print(f"Job ID {step_1_response['jobId']} submitted")
step_2_response = nbhelpers.submit_batch_alphafold_job(
job_name=str(job_name),
fasta_paths=object_key,
output_dir=job_name,
db_preset=DB_PRESET,
model_preset=model_preset,
s3_bucket=S3_BUCKET,
cpu=predict_cpu,
memory=predict_mem,
gpu=predict_gpu,
features_paths=os.path.join(job_name, job_name, "features.pkl"),
depends_on=step_1_response["jobId"],
#stack_name = "MyStack" # Update if you have multiple stacks deployed
)
print(f"Job ID {step_2_response['jobId']} submitted")
```
Check status of jobs
```
status_1 = nbhelpers.get_batch_job_info(step_1_response["jobId"])
status_2 = nbhelpers.get_batch_job_info(step_2_response["jobId"])
print(f"Data prep job {status_1['jobName']} is in {status_1['status']} status")
print(f"Predict job {status_2['jobName']} is in {status_2['status']} status")
```
Once the job has a status of "RUNNING", "SUCCEEDED", or "FAILED", we can view the run logs.
```
nbhelpers.get_batch_logs(status_1["logStreamName"])
```
Download results from S3
```
# job_name = status_1["jobName"]
# job_name = "1234567890" # You can also provide the name of a previous job here
nbhelpers.download_results(bucket=S3_BUCKET, job_name=job_name, local="data")
```
View MSA information
```
nbhelpers.plot_msa_output_folder(
path=f"data/{job_name}/{job_name}/msas", id=input_ids[0]
)
```
View predicted structure
```
pdb_path = os.path.join(f"data/{job_name}/{job_name}/ranked_0.pdb")
print("Default coloring is by plDDT")
nbhelpers.display_structure(pdb_path)
print("Can also use rainbow coloring")
nbhelpers.display_structure(pdb_path, color="rainbow")
```
## 2. Run a multimer analysis job
Provide sequences for analysis
```
## Enter the amino acid sequence to fold
id_1 = "5ZNG_1"
sequence_1 = "MDLSNMESVVESALTGQRTKIVVKVHMPCGKSRAKAMALAASVNGVDSVEITGEDKDRLVVVGRGIDPVRLVALLREKCGLAELLMVELVEKEKTQLAGGKKGAYKKHPTYNLSPFDYVEYPPSAPIMQDINPCSTM"
id_2 = "5ZNG_2"
sequence_2 = (
"MAWKDCIIQRYKDGDVNNIYTANRNEEITIEEYKVFVNEACHPYPVILPDRSVLSGDFTSAYADDDESCYRHHHHHH"
)
# Add additional sequences, if necessary
input_ids = (
id_1,
id_2,
)
input_sequences = (
sequence_1,
sequence_2,
)
DB_PRESET = "reduced_dbs"
input_sequences, model_preset = nbhelpers.validate_input(input_sequences)
sequence_length = max(len(s) for s in input_sequences)
# Upload input file to S3
job_name = nbhelpers.create_job_name()
object_key = nbhelpers.upload_fasta_to_s3(
input_sequences, input_ids, S3_BUCKET, job_name, region=region
)
```
Submit batch jobs
```
# Define resources for data prep and prediction steps
if DB_PRESET == "reduced_dbs":
prep_cpu = 4
prep_mem = 16
prep_gpu = 0
else:
prep_cpu = 16
prep_mem = 32
prep_gpu = 0
if sequence_length < 700:
predict_cpu = 4
predict_mem = 16
predict_gpu = 1
else:
predict_cpu = 16
predict_mem = 64
predict_gpu = 1
step_1_response = nbhelpers.submit_batch_alphafold_job(
job_name=str(job_name),
fasta_paths=object_key,
output_dir=job_name,
db_preset=DB_PRESET,
model_preset=model_preset,
s3_bucket=S3_BUCKET,
cpu=prep_cpu,
memory=prep_mem,
gpu=prep_gpu,
run_features_only=True,
)
print(f"Job ID {step_1_response['jobId']} submitted")
step_2_response = nbhelpers.submit_batch_alphafold_job(
job_name=str(job_name),
fasta_paths=object_key,
output_dir=job_name,
db_preset=DB_PRESET,
model_preset=model_preset,
s3_bucket=S3_BUCKET,
cpu=predict_cpu,
memory=predict_mem,
gpu=predict_gpu,
features_paths=os.path.join(job_name, job_name, "features.pkl"),
depends_on=step_1_response["jobId"],
)
print(f"Job ID {step_2_response['jobId']} submitted")
```
Check status of jobs
```
status_1 = nbhelpers.get_batch_job_info(step_1_response["jobId"])
status_2 = nbhelpers.get_batch_job_info(step_2_response["jobId"])
print(f"Data prep job {status_1['jobName']} is in {status_1['status']} status")
print(f"Predict job {status_2['jobName']} is in {status_2['status']} status")
```
Download results from S3
```
job_name = status_1["jobName"]
# job_name = "1234567890" # You can also provide the name of a previous job here
nbhelpers.download_results(bucket=S3_BUCKET, job_name=job_name, local="data")
```
View MSA information
```
nbhelpers.plot_msa_output_folder(
path=f"data/{job_name}/{job_name}/msas", id=input_ids[0]
)
```
View predicted structure
```
pdb_path = os.path.join(f"data/{job_name}/{job_name}/ranked_0.pdb")
print("Can also color by chain")
nbhelpers.display_structure(pdb_path, chains=2, color="chain")
```
## 3. Analyze multiple proteins
Download and process CASP14 sequences
```
!wget "https://predictioncenter.org/download_area/CASP14/sequences/casp14.seq.txt"
!sed '137,138d' "casp14.seq.txt" > "casp14_dedup.fa" # Remove duplicate entry for T1085
casp14_iterator = SeqIO.parse("casp14_dedup.fa", "fasta")
casp14_df = pd.DataFrame(
(
(record.id, record.description, len(record), record.seq)
for record in casp14_iterator
),
columns=["id", "description", "length", "seq"],
).sort_values(by="length")
!rm data/casp14*
```
Display information about CASP14 proteins
```
with pd.option_context("display.max_rows", None):
display.display(casp14_df.loc[:, ("id", "description")])
```
Plot distribution of the protein lengths
```
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
plt.hist(casp14_df.length, bins=50)
plt.ylabel("Sample count")
plt.xlabel("Residue count")
plt.title("CASP-14 Protein Length Distribution")
plt.show()
```
Submit analysis jobs for a subset of CASP14 proteins
```
protein_count = (
5 # Change this to analyze a larger number of CASP14 targets, smallest to largest
)
job_name_list = []
for row in casp14_df[:protein_count].itertuples(index=False):
record = SeqRecord(row.seq, id=row.id, description=row.description)
print(f"Protein sequence for analysis is \n{record.description}")
sequence_length = len(record.seq)
print(f"Sequence length is {sequence_length}")
print(record.seq)
input_ids = (record.id,)
input_sequences = (str(record.seq),)
DB_PRESET = "reduced_dbs"
input_sequences, model_preset = nbhelpers.validate_input(input_sequences)
# Upload input file to S3
job_name = nbhelpers.create_job_name()
object_key = nbhelpers.upload_fasta_to_s3(
input_sequences, input_ids, S3_BUCKET, job_name, region=region
)
# Define resources for data prep and prediction steps
if DB_PRESET == "reduced_dbs":
prep_cpu = 4
prep_mem = 16
prep_gpu = 0
else:
prep_cpu = 16
prep_mem = 32
prep_gpu = 0
if sequence_length < 700:
predict_cpu = 4
predict_mem = 16
predict_gpu = 1
else:
predict_cpu = 16
predict_mem = 64
predict_gpu = 1
step_1_response = nbhelpers.submit_batch_alphafold_job(
job_name=str(job_name),
fasta_paths=object_key,
output_dir=job_name,
db_preset=DB_PRESET,
model_preset=model_preset,
s3_bucket=S3_BUCKET,
cpu=prep_cpu,
memory=prep_mem,
gpu=prep_gpu,
run_features_only=True,
)
print(f"Job ID {step_1_response['jobId']} submitted")
step_2_response = nbhelpers.submit_batch_alphafold_job(
job_name=str(job_name),
fasta_paths=object_key,
output_dir=job_name,
db_preset=DB_PRESET,
model_preset=model_preset,
s3_bucket=S3_BUCKET,
cpu=predict_cpu,
memory=predict_mem,
gpu=predict_gpu,
features_paths=os.path.join(job_name, job_name, "features.pkl"),
depends_on=step_1_response["jobId"],
)
print(f"Job ID {step_2_response['jobId']} submitted")
sleep(1)
```
# Convolutional Neural Networks: Step by Step
Welcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation.
**Notation**:
- Superscript $[l]$ denotes an object of the $l^{th}$ layer.
- Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.
- Superscript $(i)$ denotes an object from the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example input.
- Subscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer.
- $n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$.
- $n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$.
We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started!
## 1 - Packages
Let's first import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org) is the fundamental package for scientific computing with Python.
- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
```
import numpy as np
import h5py
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
```
## 2 - Outline of the Assignment
You will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed:
- Convolution functions, including:
- Zero Padding
- Convolve window
- Convolution forward
- Convolution backward (optional)
- Pooling functions, including:
- Pooling forward
- Create mask
- Distribute value
- Pooling backward (optional)
This notebook will ask you to implement these functions from scratch in `numpy`. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model:
<img src="images/model.png" style="width:800px;height:300px;">
**Note** that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation.
## 3 - Convolutional Neural Networks
Although programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below.
<img src="images/conv_nn.png" style="width:350px;height:200px;">
In this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself.
### 3.1 - Zero-Padding
Zero-padding adds zeros around the border of an image:
<img src="images/PAD.png" style="width:600px;height:400px;">
<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **Zero-Padding**<br> Image (3 channels, RGB) with a padding of 2. </center></caption>
The main benefits of padding are the following:
- It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer.
- It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels at the edges of an image.
**Exercise**: Implement the following function, which pads all the images of a batch of examples X with zeros. [Use np.pad](https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html). Note if you want to pad the array "a" of shape $(5,5,5,5,5)$ with `pad = 1` for the 2nd dimension, `pad = 3` for the 4th dimension and `pad = 0` for the rest, you would do:
```python
a = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..))
```
```
# GRADED FUNCTION: zero_pad
def zero_pad(X, pad):
"""
Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image,
as illustrated in Figure 1.
Argument:
X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
pad -- integer, amount of padding around each image on vertical and horizontal dimensions
Returns:
X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
"""
### START CODE HERE ### (≈ 1 line)
X_pad = np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)), 'constant', constant_values=(0, 0))
### END CODE HERE ###
return X_pad
np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
print ("x.shape =", x.shape)
print ("x_pad.shape =", x_pad.shape)
print ("x[1,1] =", x[1,1])
print ("x_pad[1,1] =", x_pad[1,1])
fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
```
**Expected Output**:
<table>
<tr>
<td>
**x.shape**:
</td>
<td>
(4, 3, 3, 2)
</td>
</tr>
<tr>
<td>
**x_pad.shape**:
</td>
<td>
(4, 7, 7, 2)
</td>
</tr>
<tr>
<td>
**x[1,1]**:
</td>
<td>
[[ 0.90085595 -0.68372786]
[-0.12289023 -0.93576943]
[-0.26788808 0.53035547]]
</td>
</tr>
<tr>
<td>
**x_pad[1,1]**:
</td>
<td>
[[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]]
</td>
</tr>
</table>
### 3.2 - Single step of convolution
In this part, implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which:
- Takes an input volume
- Applies a filter at every position of the input
- Outputs another volume (usually of different size)
<img src="images/Convolution_schematic.gif" style="width:500px;height:300px;">
<caption><center> <u> <font color='purple'> **Figure 2** </u><font color='purple'> : **Convolution operation**<br> with a filter of 3x3 and a stride of 1 (stride = amount you move the window each time you slide) </center></caption>
In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output.
Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation.
**Exercise**: Implement conv_single_step(). [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.sum.html).
```
# GRADED FUNCTION: conv_single_step
def conv_single_step(a_slice_prev, W, b):
"""
Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation
of the previous layer.
Arguments:
a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)
Returns:
Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
"""
### START CODE HERE ### (≈ 2 lines of code)
# Element-wise product between a_slice_prev and W. Do not add the bias yet.
s = a_slice_prev * W
# Sum over all entries of the volume s.
Z = np.sum(s)
# Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
Z = Z + float(b)
### END CODE HERE ###
return Z
np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)
Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
```
**Expected Output**:
<table>
<tr>
<td>
**Z**
</td>
<td>
-6.99908945068
</td>
</tr>
</table>
### 3.3 - Convolutional Neural Networks - Forward pass
In the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume:
<center>
<video width="620" height="440" src="images/conv_kiank.mp4" type="video/mp4" controls>
</video>
</center>
**Exercise**: Implement the function below to convolve the filters W on an input activation A_prev. This function takes as input A_prev, the activations output by the previous layer (for a batch of m inputs), F filters/weights denoted by W, and a bias vector denoted by b, where each filter has its own (single) bias. Finally you also have access to the hyperparameters dictionary which contains the stride and the padding.
**Hint**:
1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do:
```python
a_slice_prev = a_prev[0:2,0:2,:]
```
This will be useful when you will define `a_slice_prev` below, using the `start/end` indexes you will define.
2. To define a_slice you will need to first define its corners `vert_start`, `vert_end`, `horiz_start` and `horiz_end`. This figure may be helpful for you to find how each of the corner can be defined using h, w, f and s in the code below.
<img src="images/vert_horiz_kiank.png" style="width:400px;height:300px;">
<caption><center> <u> <font color='purple'> **Figure 3** </u><font color='purple'> : **Definition of a slice using vertical and horizontal start/end (with a 2x2 filter)** <br> This figure shows only a single channel. </center></caption>
**Reminder**:
The formulas relating the output shape of the convolution to the input shape is:
$$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor + 1 $$
$$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor + 1 $$
$$ n_C = \text{number of filters used in the convolution}$$
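The shape formulas above can be sanity-checked with a small helper (a hedged sketch for self-checking; the function name is my own, and it is not part of the graded code):

```python
import math

def conv_output_shape(n_h_prev, n_w_prev, f, pad, stride, n_filters):
    # Floor formulas for the CONV output volume, as given above
    n_h = math.floor((n_h_prev - f + 2 * pad) / stride) + 1
    n_w = math.floor((n_w_prev - f + 2 * pad) / stride) + 1
    return n_h, n_w, n_filters
```

For example, with 4x4 inputs, f = 2, pad = 2, stride = 2 and 8 filters, this gives an output of shape (4, 4, 8).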
For this exercise, we won't worry about vectorization, and will just implement everything with for-loops.
```
# GRADED FUNCTION: conv_forward
def conv_forward(A_prev, W, b, hparameters):
"""
Implements the forward propagation for a convolution function
Arguments:
A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
b -- Biases, numpy array of shape (1, 1, 1, n_C)
hparameters -- python dictionary containing "stride" and "pad"
Returns:
Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward() function
"""
### START CODE HERE ###
# Retrieve dimensions from A_prev's shape (≈1 line)
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve dimensions from W's shape (≈1 line)
(f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters" (≈2 lines)
stride = hparameters["stride"]
pad = hparameters["pad"]
# Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)
n_H = 1 + (n_H_prev - f + 2 * pad) // stride
n_W = 1 + (n_W_prev - f + 2 * pad) // stride
# Initialize the output volume Z with zeros. (โ1 line)
Z = np.zeros((m,n_H,n_W,n_C))
# Create A_prev_pad by padding A_prev
A_prev_pad = zero_pad(A_prev, pad)
for i in range(m): # loop over the batch of training examples
a_prev_pad = A_prev_pad[i,:,:,:] # Select ith training example's padded activation
for h in range(n_H): # loop over vertical axis of the output volume
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over channels (= #filters) of the output volume
# Find the corners of the current "slice" (≈4 lines)
vert_start = h * stride
vert_end = h * stride + f
horiz_start = w * stride
horiz_end = w * stride + f
# Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)
a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
# Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)
Z[i, h, w, c] = conv_single_step(a_slice_prev, W[:, :, :, c], b[:, :, :, c])
### END CODE HERE ###
# Making sure your output shape is correct
assert(Z.shape == (m, n_H, n_W, n_C))
# Save information in "cache" for the backprop
cache = (A_prev, W, b, hparameters)
return Z, cache
np.random.seed(1)
A_prev = np.random.randn(10,4,4,3)
W = np.random.randn(2,2,3,8)
b = np.random.randn(1,1,1,8)
hparameters = {"pad" : 2,
"stride": 2}
Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z's mean =", np.mean(Z))
print("Z[3,2,1] =", Z[3,2,1])
print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3])
```
**Expected Output**:
<table>
<tr>
<td>
**Z's mean**
</td>
<td>
0.0489952035289
</td>
</tr>
<tr>
<td>
**Z[3,2,1]**
</td>
<td>
[-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437
5.18531798 8.75898442]
</td>
</tr>
<tr>
<td>
**cache_conv[0][1][2][3]**
</td>
<td>
[-0.20075807 0.18656139 0.41005165]
</td>
</tr>
</table>
Finally, a CONV layer should also apply an activation, in which case we would add the following lines of code:
```python
# Convolve the window to get back one output neuron
Z[i, h, w, c] = ...
# Apply activation
A[i, h, w, c] = activation(Z[i, h, w, c])
```
You don't need to do it here.
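For completeness, here is a minimal sketch of what that activation step could look like using a ReLU (one common choice; the helper name is my own):

```python
import numpy as np

def relu(Z):
    # Element-wise max(0, z), applied to the full conv output volume
    return np.maximum(0, Z)

print(relu(np.array([-1.0, 0.5, 2.0])))
```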
## 4 - Pooling layer
The pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, and it helps make feature detectors more invariant to the position of features in the input. The two types of pooling layers are:
- Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output.
- Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output.
<table>
<td>
<img src="images/max_pool1.png" style="width:500px;height:300px;">
<td>
<td>
<img src="images/a_pool.png" style="width:500px;height:300px;">
<td>
</table>
These pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$, which specifies the height and width of the $f \times f$ window you would compute a max or average over.
### 4.1 - Forward Pooling
Now, you are going to implement MAX-POOL and AVG-POOL, in the same function.
**Exercise**: Implement the forward pass of the pooling layer. Follow the hints in the comments below.
**Reminder**:
As there's no padding, the formulas binding the output shape of the pooling to the input shape is:
$$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 $$
$$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 $$
$$ n_C = n_{C_{prev}}$$
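As with the convolution, these formulas can be sanity-checked with a small helper (a hedged sketch with a name of my own choosing, not part of the graded code):

```python
def pool_output_shape(n_h_prev, n_w_prev, n_c_prev, f, stride):
    # No padding in pooling, so the formulas above simplify to:
    n_h = (n_h_prev - f) // stride + 1
    n_w = (n_w_prev - f) // stride + 1
    return n_h, n_w, n_c_prev
```

For example, with 4x4x3 inputs, f = 3 and stride = 2, this gives (1, 1, 3), matching the expected output shape in the test cell below.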
```
# GRADED FUNCTION: pool_forward
def pool_forward(A_prev, hparameters, mode = "max"):
"""
Implements the forward pass of the pooling layer
Arguments:
A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
hparameters -- python dictionary containing "f" and "stride"
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters
"""
# Retrieve dimensions from the input shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve hyperparameters from "hparameters"
f = hparameters["f"]
stride = hparameters["stride"]
# Define the dimensions of the output
n_H = int(1 + (n_H_prev - f) / stride)
n_W = int(1 + (n_W_prev - f) / stride)
n_C = n_C_prev
# Initialize output matrix A
A = np.zeros((m, n_H, n_W, n_C))
### START CODE HERE ###
for i in range(m): # loop over the training examples
for h in range(n_H): # loop on the vertical axis of the output volume
for w in range(n_W): # loop on the horizontal axis of the output volume
for c in range (n_C): # loop over the channels of the output volume
# Find the corners of the current "slice" (≈4 lines)
vert_start = h * stride
vert_end = h * stride + f
horiz_start = w * stride
horiz_end = w * stride + f
# Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]
# Compute the pooling operation on the slice. Use an if statement to differentiate the modes. Use np.max/np.mean.
if mode == "max":
A[i, h, w, c] = np.max(a_prev_slice)
elif mode == "average":
A[i, h, w, c] = np.mean(a_prev_slice)
### END CODE HERE ###
# Store the input and hparameters in "cache" for pool_backward()
cache = (A_prev, hparameters)
# Making sure your output shape is correct
assert(A.shape == (m, n_H, n_W, n_C))
return A, cache
np.random.seed(1)
A_prev = np.random.randn(2, 4, 4, 3)
hparameters = {"stride" : 2, "f": 3}
A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A =", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A =", A)
```
**Expected Output:**
<table>
<tr>
<td>
A =
</td>
<td>
[[[[ 1.74481176 0.86540763 1.13376944]]]
[[[ 1.13162939 1.51981682 2.18557541]]]]
</td>
</tr>
<tr>
<td>
A =
</td>
<td>
[[[[ 0.02105773 -0.20328806 -0.40389855]]]
[[[-0.22154621 0.51716526 0.48155844]]]]
</td>
</tr>
</table>
Congratulations! You have now implemented the forward passes of all the layers of a convolutional network.
The remainder of this notebook is optional, and will not be graded.
## 5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED)
In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish, however, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like.
When you implemented a simple (fully connected) neural network in an earlier course, you used backpropagation to compute the derivatives of the cost in order to update the parameters. Similarly, in convolutional neural networks you need to calculate the derivatives of the cost in order to update the parameters. The backprop equations are not trivial; they were not derived in lecture, but they are briefly presented below.
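The gradients derived in this section ultimately feed an ordinary gradient-descent step. As a minimal standalone sketch (the learning rate and toy filter shapes below are illustrative assumptions, not part of the assignment):

```python
import numpy as np

def update_parameters(W, b, dW, db, learning_rate=0.01):
    # plain gradient-descent step on conv filter weights and biases
    W = W - learning_rate * dW
    b = b - learning_rate * db
    return W, b

# toy shapes: 2x2 filters, 3 input channels, 8 filters
W = np.ones((2, 2, 3, 8))
b = np.zeros((1, 1, 1, 8))
dW = np.full((2, 2, 3, 8), 0.5)
db = np.ones((1, 1, 1, 8))
W, b = update_parameters(W, b, dW, db, learning_rate=0.1)
print(W[0, 0, 0, 0], b[0, 0, 0, 0])  # 0.95 -0.1
```

The `conv_backward` function you implement below produces exactly the `dW` and `db` such a step consumes.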
### 5.1 - Convolutional layer backward pass
Let's start by implementing the backward pass for a CONV layer.
#### 5.1.1 - Computing dA:
This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example:
$$ dA += \sum _{h=0} ^{n_H} \sum_{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}$$
Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that each time, we multiply the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed with a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices.
In code, inside the appropriate for-loops, this formula translates into:
```python
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]
```
#### 5.1.2 - Computing dW:
This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss:
$$ dW_c += \sum _{h=0} ^{n_H} \sum_{w=0} ^ {n_W} a_{slice} \times dZ_{hw} \tag{2}$$
Where $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{hw}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$.
In code, inside the appropriate for-loops, this formula translates into:
```python
dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
```
#### 5.1.3 - Computing db:
This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$:
$$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$
As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost.
In code, inside the appropriate for-loops, this formula translates into:
```python
db[:,:,:,c] += dZ[i, h, w, c]
```
**Exercise**: Implement the `conv_backward` function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above.
```
def conv_backward(dZ, cache):
"""
Implement the backward propagation for a convolution function
Arguments:
dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward(), output of conv_forward()
Returns:
dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),
numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
dW -- gradient of the cost with respect to the weights of the conv layer (W)
numpy array of shape (f, f, n_C_prev, n_C)
db -- gradient of the cost with respect to the biases of the conv layer (b)
numpy array of shape (1, 1, 1, n_C)
"""
### START CODE HERE ###
# Retrieve information from "cache"
(A_prev, W, b, hparameters) = cache
# Retrieve dimensions from A_prev's shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve dimensions from W's shape
(f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters"
stride = hparameters["stride"]
pad = hparameters["pad"]
# Retrieve dimensions from dZ's shape
(m, n_H, n_W, n_C) = dZ.shape
# Initialize dA_prev, dW, db with the correct shapes
dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))
dW = np.zeros((f, f, n_C_prev, n_C))
db = np.zeros((1, 1, 1, n_C))
# Pad A_prev and dA_prev
A_prev_pad = zero_pad(A_prev, pad)
dA_prev_pad = zero_pad(dA_prev, pad)
for i in range(m): # loop over the training examples
# select ith training example from A_prev_pad and dA_prev_pad
a_prev_pad = A_prev_pad[i]
da_prev_pad = dA_prev_pad[i]
for h in range(n_H): # loop over vertical axis of the output volume
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over the channels of the output volume
# Find the corners of the current "slice"
vert_start = h*stride
vert_end = h*stride+f
horiz_start = w*stride
horiz_end = w*stride+f
# Use the corners to define the slice from a_prev_pad
a_slice = a_prev_pad[vert_start:vert_end,horiz_start:horiz_end,:]
# Update gradients for the window and the filter's parameters using the code formulas given above
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]
dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
db[:,:,:,c] += dZ[i, h, w, c]
# Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])
dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :]
### END CODE HERE ###
# Making sure your output shape is correct
assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))
return dA_prev, dW, db
np.random.seed(1)
dA, dW, db = conv_backward(Z, cache_conv)
print("dA_mean =", np.mean(dA))
print("dW_mean =", np.mean(dW))
print("db_mean =", np.mean(db))
```
**Expected Output:**
<table>
<tr>
<td>
**dA_mean**
</td>
<td>
1.45243777754
</td>
</tr>
<tr>
<td>
**dW_mean**
</td>
<td>
1.72699145831
</td>
</tr>
<tr>
<td>
**db_mean**
</td>
<td>
7.83923256462
</td>
</tr>
</table>
### 5.2 - Pooling layer backward pass
Next, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagate the gradient through the pooling layer in order to compute gradients for the layers that came before it.
#### 5.2.1 - Max pooling backward pass
Before jumping into the backpropagation of the pooling layer, you are going to build a helper function called `create_mask_from_window()` which does the following:
$$ X = \begin{bmatrix}
1 && 3 \\
4 && 2
\end{bmatrix} \quad \rightarrow \quad M =\begin{bmatrix}
0 && 0 \\
1 && 0
\end{bmatrix}\tag{4}$$
As you can see, this function creates a "mask" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling will be similar to this but using a different mask.
**Exercise**: Implement `create_mask_from_window()`. This function will be helpful for pooling backward.
Hints:
- `np.max()` may be helpful. It computes the maximum of an array.
- If you have a matrix X and a scalar x: `A = (X == x)` will return a matrix A of the same size as X such that:
```
A[i,j] = True if X[i,j] = x
A[i,j] = False if X[i,j] != x
```
- Here, you don't need to consider cases where there are several maxima in a matrix.
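The boolean-mask idiom from the hints can be checked on a toy array (a quick illustrative demo with made-up values, separate from the exercise below): comparing an array to a scalar broadcasts element-wise.

```python
import numpy as np

X = np.array([[1, 3],
              [4, 2]])
mask = (X == np.max(X))  # True only where X attains its maximum
print(mask)
# [[False False]
#  [ True False]]
```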
```
def create_mask_from_window(x):
"""
Creates a mask from an input matrix x, to identify the max entry of x.
Arguments:
x -- Array of shape (f, f)
Returns:
mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.
"""
### START CODE HERE ### (≈1 line)
mask = (x == np.max(x))
### END CODE HERE ###
return mask
np.random.seed(1)
x = np.random.randn(2,3)
mask = create_mask_from_window(x)
print('x = ', x)
print("mask = ", mask)
```
**Expected Output:**
<table>
<tr>
<td>
**x =**
</td>
<td>
[[ 1.62434536 -0.61175641 -0.52817175] <br>
[-1.07296862 0.86540763 -2.3015387 ]]
</td>
</tr>
<tr>
<td>
**mask =**
</td>
<td>
[[ True False False] <br>
[False False False]]
</td>
</tr>
</table>
Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will "propagate" the gradient back to this particular input value that had influenced the cost.
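This gradient routing can be seen on a toy window (illustrative values assumed, not from the notebook's test cells): only the argmax position of the window receives the upstream gradient.

```python
import numpy as np

window = np.array([[1., 3.],
                   [4., 2.]])
mask = (window == np.max(window))  # same mask create_mask_from_window builds
upstream_grad = 5.0                # dA for this single output position
d_window = mask * upstream_grad
print(d_window)
# [[0. 0.]
#  [5. 0.]]
```

Every other input position contributed nothing to the pooled output, so its gradient is zero.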
#### 5.2.2 - Average pooling backward pass
In max pooling, for each input window, all the "influence" on the output came from a single input value: the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this.
For example if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like:
$$ dZ = 1 \quad \rightarrow \quad dZ =\begin{bmatrix}
1/4 && 1/4 \\
1/4 && 1/4
\end{bmatrix}\tag{5}$$
This implies that each position in the $dZ$ matrix contributes equally to the output because, in the forward pass, we took an average.
**Exercise**: Implement the function below to equally distribute a value dz through a matrix of dimension shape. [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ones.html)
```
def distribute_value(dz, shape):
"""
Distributes the input value in the matrix of dimension shape
Arguments:
dz -- input scalar
shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz
Returns:
a -- Array of size (n_H, n_W) for which we distributed the value of dz
"""
### START CODE HERE ###
# Retrieve dimensions from shape (≈1 line)
(n_H, n_W) = shape
# Compute the value to distribute on the matrix (≈1 line)
average = dz/(n_H*n_W)
# Create a matrix where every entry is the "average" value (≈1 line)
a = np.ones(shape)*average
### END CODE HERE ###
return a
a = distribute_value(2, (2,2))
print('distributed value =', a)
```
**Expected Output**:
<table>
<tr>
<td>
distributed_value =
</td>
<td>
[[ 0.5 0.5]
<br>
[ 0.5 0.5]]
</td>
</tr>
</table>
#### 5.2.3 - Putting it together: Pooling backward
You now have everything you need to compute backward propagation on a pooling layer.
**Exercise**: Implement the `pool_backward` function in both modes (`"max"` and `"average"`). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an `if/elif` statement to see if the mode is equal to `'max'` or `'average'`. If it is equal to 'average' you should use the `distribute_value()` function you implemented above to create a matrix of the same shape as `a_slice`. Otherwise, the mode is equal to '`max`', and you will create a mask with `create_mask_from_window()` and multiply it by the corresponding value of dA.
```
def pool_backward(dA, cache, mode = "max"):
"""
Implements the backward pass of the pooling layer
Arguments:
dA -- gradient of cost with respect to the output of the pooling layer, same shape as A
cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev
"""
### START CODE HERE ###
# Retrieve information from cache (≈1 line)
(A_prev, hparameters) = cache
# Retrieve hyperparameters from "hparameters" (≈2 lines)
stride = hparameters["stride"]
f = hparameters["f"]
# Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)
m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape
m, n_H, n_W, n_C = dA.shape
# Initialize dA_prev with zeros (≈1 line)
dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))
for i in range(m): # loop over the training examples
# select training example from A_prev (≈1 line)
a_prev = A_prev[i]
for h in range(n_H): # loop on the vertical axis
for w in range(n_W): # loop on the horizontal axis
for c in range(n_C): # loop over the channels (depth)
# Find the corners of the current "slice" (≈4 lines)
vert_start = h*stride
vert_end = h*stride+f
horiz_start = w*stride
horiz_end = w*stride+f
# Compute the backward propagation in both modes.
if mode == "max":
# Use the corners and "c" to define the current slice from a_prev (≈1 line)
a_prev_slice = a_prev[vert_start:vert_end,horiz_start:horiz_end,c]
# Create the mask from a_prev_slice (≈1 line)
mask = create_mask_from_window(a_prev_slice)
# Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += mask*dA[i,h,w,c]
elif mode == "average":
# Get the value da from dA (≈1 line)
da = dA[i,h,w,c]
# Define the shape of the filter as fxf (≈1 line)
shape = (f,f)
# Distribute it to get the correct slice of dA_prev, i.e. add the distributed value of da. (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += distribute_value(da, shape)
### END CODE ###
# Making sure your output shape is correct
assert(dA_prev.shape == A_prev.shape)
return dA_prev
np.random.seed(1)
A_prev = np.random.randn(5, 5, 3, 2)
hparameters = {"stride" : 1, "f": 2}
A, cache = pool_forward(A_prev, hparameters)
dA = np.random.randn(5, 4, 2, 2)
dA_prev = pool_backward(dA, cache, mode = "max")
print("mode = max")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
print()
dA_prev = pool_backward(dA, cache, mode = "average")
print("mode = average")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
```
**Expected Output**:
mode = max:
<table>
<tr>
<td>
**mean of dA =**
</td>
<td>
0.145713902729
</td>
</tr>
<tr>
<td>
**dA_prev[1,1] =**
</td>
<td>
[[ 0. 0. ] <br>
[ 5.05844394 -1.68282702] <br>
[ 0. 0. ]]
</td>
</tr>
</table>
mode = average
<table>
<tr>
<td>
**mean of dA =**
</td>
<td>
0.145713902729
</td>
</tr>
<tr>
<td>
**dA_prev[1,1] =**
</td>
<td>
[[ 0.08485462 0.2787552 ] <br>
[ 1.26461098 -0.25749373] <br>
[ 1.17975636 -0.53624893]]
</td>
</tr>
</table>
### Congratulations!
Congratulations on completing this assignment. You now understand how convolutional neural networks work. You have implemented all the building blocks of a neural network. In the next assignment you will implement a ConvNet using TensorFlow.
```
#Importing relevant libraries
import numpy as np
import pandas as pd
from pathlib import Path
import os.path
import matplotlib.pyplot as plt
from IPython.display import Image, display
import matplotlib.cm as cm
from sklearn.model_selection import train_test_split
import tensorflow as tf
#Set the image directory using Path and isolate the labels and image names with os.path.split via a lambda.
image_dir = Path(r'Data\Fish_Dataset\Fish_Dataset')
filespaths = list(image_dir.glob(r'**/*.png'))
labelspaths = list(map(lambda x: os.path.split(os.path.split(x)[0])[1], filespaths))
filespaths = pd.Series(filespaths, name='Filepath').astype(str)
labels = pd.Series(labelspaths, name='Label')
#Create the image dataframe with the image paths and labels as the columns.
image_frame = pd.concat([filespaths, labels], axis=1)
#Removing the GT Images
image_frame = image_frame[image_frame['Label'].apply(lambda x: x[-2:] != 'GT')]
#Shuffle the dataframe by sampling it.
image_frame = image_frame.sample(frac=1).reset_index(drop= True)
image_frame.head()
#Splitting the dataset into the training and test dataframes with a 10% test size.
df_train, df_test = train_test_split(image_frame, train_size=0.9, shuffle= True, random_state=1)
#Initialize the generators and set the validation size to 20% of the training set. The validation dataframe allows for overfitting monitoring through the validation loss.
train_gen = tf.keras.preprocessing.image.ImageDataGenerator(
preprocessing_function = tf.keras.applications.mobilenet_v2.preprocess_input, validation_split = 0.2
)
test_gen = tf.keras.preprocessing.image.ImageDataGenerator(
preprocessing_function = tf.keras.applications.mobilenet_v2.preprocess_input
)
train_im = train_gen.flow_from_dataframe(
dataframe = df_train,
x_col = 'Filepath',
y_col = 'Label',
target_size = (224, 224),
color_mode = 'rgb',
class_mode = 'categorical',
batch_size = 32,
shuffle = True,
seed = 50,
subset = 'training'
)
val_im = train_gen.flow_from_dataframe(
dataframe=df_train,
x_col='Filepath',
y_col='Label',
target_size=(224, 224),
color_mode='rgb',
class_mode='categorical',
batch_size=32,
shuffle=True,
seed=50,
subset='validation'
)
test_im = test_gen.flow_from_dataframe(
dataframe=df_test,
x_col='Filepath',
y_col='Label',
target_size=(224, 224),
color_mode='rgb',
class_mode='categorical',
batch_size=32,
shuffle=False
)
#Load the Imagenet pretrained MobileNetV2 architecture without the output layer and an average pooling.
pre_mod = tf.keras.applications.MobileNetV2(
input_shape = (224, 224, 3),
include_top = False,
weights = 'imagenet',
pooling = 'avg'
)
#Freeze the lower layers in order for the model to perform as a stand-alone feature extractor and predictor
pre_mod.trainable = False
inputs = pre_mod.input
#Replacing the FC layers of the MobileNetV2 with 2 128 FC Layers.
x = tf.keras.layers.Dense(128, activation='relu')(pre_mod.output)
x = tf.keras.layers.Dense(128, activation='relu')(x)
outputs = tf.keras.layers.Dense(9, activation='softmax')(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer = 'adam',
loss = 'categorical_crossentropy',
metrics = ['accuracy']
)
#To guard against overfitting, use early stopping with a patience of 1, so training stops after one epoch without improvement in validation loss and the best weights are restored.
history = model.fit(
train_im,
validation_data = val_im,
epochs = 50,
callbacks = [
tf.keras.callbacks.EarlyStopping(
monitor = 'val_loss',
patience=1,
restore_best_weights = True
)
]
)
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
plt.title('Loss')
plt.show()
results = model.evaluate(test_im, verbose=0)
print("Loss: ", results[0])
print("Accuracy: ", results[1])
```
```
%matplotlib inline
import gym
import itertools
import matplotlib
import numpy as np
import pandas as pd
import sys
if "../" not in sys.path:
sys.path.append("../")
from collections import defaultdict
from lib.envs.windy_gridworld import WindyGridworldEnv
from lib import plotting
matplotlib.style.use('ggplot')
env = WindyGridworldEnv()
def make_epsilon_greedy_policy(Q, epsilon, nA):
"""
Creates an epsilon-greedy policy based on a given Q-function and epsilon.
Args:
Q: A dictionary that maps from state -> action-values.
Each value is a numpy array of length nA (see below)
epsilon: The probability of selecting a random action. Float between 0 and 1.
nA: Number of actions in the environment.
Returns:
A function that takes the observation as an argument and returns
the probabilities for each action in the form of a numpy array of length nA.
"""
def policy_fn(observation):
A = np.ones(nA, dtype=float) * epsilon / nA
best_action = np.argmax(Q[observation])
A[best_action] += (1.0 - epsilon)
return A
return policy_fn
def sarsa(env, num_episodes, discount_factor=1.0, alpha=0.5, epsilon=0.1):
"""
SARSA algorithm: On-policy TD control. Finds the optimal epsilon-greedy policy.
Args:
env: OpenAI environment.
num_episodes: Number of episodes to run for.
discount_factor: Gamma discount factor.
alpha: TD learning rate.
epsilon: Chance to sample a random action. Float between 0 and 1.
Returns:
A tuple (Q, stats).
Q is the optimal action-value function, a dictionary mapping state -> action values.
stats is an EpisodeStats object with two numpy arrays for episode_lengths and episode_rewards.
"""
# The final action-value function.
# A nested dictionary that maps state -> (action -> action-value).
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# Keeps track of useful statistics
stats = plotting.EpisodeStats(
episode_lengths=np.zeros(num_episodes),
episode_rewards=np.zeros(num_episodes))
# The policy we're following
policy = make_epsilon_greedy_policy(Q, epsilon, env.action_space.n)
for i_episode in range(num_episodes):
# Print out which episode we're on, useful for debugging.
if (i_episode + 1) % 100 == 0:
print("\rEpisode {}/{}.".format(i_episode + 1, num_episodes), end="")
sys.stdout.flush()
# Reset the environment and pick the first action
state = env.reset()
action_probs = policy(state)
action = np.random.choice(np.arange(len(action_probs)), p=action_probs)
# One step in the environment
for t in itertools.count():
# Take a step
next_state, reward, done, _ = env.step(action)
# Pick the next action
next_action_probs = policy(next_state)
next_action = np.random.choice(np.arange(len(next_action_probs)), p=next_action_probs)
# Update statistics
stats.episode_rewards[i_episode] += reward
stats.episode_lengths[i_episode] = t
# TD Update
td_target = reward + discount_factor * Q[next_state][next_action]
td_delta = td_target - Q[state][action]
Q[state][action] += alpha * td_delta
if done:
break
action = next_action
state = next_state
return Q, stats
Q, stats = sarsa(env, 200)
plotting.plot_episode_stats(stats)
```
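To see what `make_epsilon_greedy_policy` actually returns, here is a standalone sketch (the helper is repeated and the Q-values are made up, so the snippet runs without the WindyGridworld environment): every action keeps a floor probability of epsilon/nA, and the greedy action receives the remaining mass.

```python
import numpy as np
from collections import defaultdict

def make_epsilon_greedy_policy(Q, epsilon, nA):
    def policy_fn(observation):
        A = np.ones(nA, dtype=float) * epsilon / nA
        best_action = np.argmax(Q[observation])
        A[best_action] += (1.0 - epsilon)
        return A
    return policy_fn

Q = defaultdict(lambda: np.zeros(4))
Q["s0"] = np.array([0.0, 2.0, 1.0, 0.5])  # action 1 is greedy
policy = make_epsilon_greedy_policy(Q, epsilon=0.1, nA=4)
probs = policy("s0")
print(probs)  # [0.025 0.925 0.025 0.025]
```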
<a href="https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/beta/alphafold_output_at_each_recycle.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
%%bash
if [ ! -d alphafold ]; then
pip -q install biopython dm-haiku ml-collections py3Dmol
# download model
git clone --quiet https://github.com/deepmind/alphafold.git
cd /content/alphafold
git checkout --quiet 1d43aaff941c84dc56311076b58795797e49107b
cd /content
# apply patch to return model at each recycle step
wget -qnc https://raw.githubusercontent.com/sokrypton/af_tests/main/model.patch
wget -qnc https://raw.githubusercontent.com/sokrypton/af_tests/main/modules.patch
patch -u alphafold/alphafold/model/model.py -i model.patch
patch -u alphafold/alphafold/model/modules.py -i modules.patch
# download model params (~1 min)
mkdir params
curl -fsSL https://storage.googleapis.com/alphafold/alphafold_params_2021-07-14.tar | tar x -C params
# colabfold
wget -qnc https://raw.githubusercontent.com/sokrypton/ColabFold/main/beta/colabfold.py
fi
# import libraries
import os
import sys
sys.path.append('/content/alphafold')
import numpy as np
import jax.numpy as jnp
from alphafold.common import protein
from alphafold.data import pipeline
from alphafold.data import templates
from alphafold.model import data
from alphafold.model import config
from alphafold.model import model
import colabfold as cf
# setup which model params to use
model_name = "model_2_ptm" # model we want to use
model_config = config.model_config("model_5_ptm") # configure based on model that doesn't use templates
model_config.model.num_recycle = 24
model_config.data.common.num_recycle = 24
# since we'll be using single sequence input, setting size of MSA to 1
model_config.data.common.max_extra_msa = 1 # 5120
model_config.data.eval.max_msa_clusters = 1 # 512
# setup model
model_params = data.get_model_haiku_params(model_name=model_name, data_dir=".")
model_runner = model.RunModel(model_config, model_params)
# setup inputs
query_sequence = "MQDGPGTLDVFVAAGWNTDNTIEITGGATYQLSPYIMVKAGYGWNNSSLNRFEFGGGLQYKVTPDLEPYAWAGATYNTDNTLVPAAGAGFRYKVSPEVKLVVEYGWNNSSLQFLQAGLSYRIQP"
feature_dict = {
**pipeline.make_sequence_features(sequence=query_sequence,description="none",num_res=len(query_sequence)),
**pipeline.make_msa_features(msas=[[query_sequence]],deletion_matrices=[[[0]*len(query_sequence)]]),
}
inputs = model_runner.process_features(feature_dict, random_seed=0)
# get outputs
outputs, prev_outputs = model_runner.predict(inputs)
plddts = np.asarray(jnp.concatenate([prev_outputs["prev_plddt"], outputs['plddt'][None]],0))
positions = np.asarray(jnp.concatenate([prev_outputs["prev_pos"], outputs['structure_module']["final_atom_positions"][None]],0))
```
LET'S ANIMATE
```
import matplotlib
from matplotlib import animation
import matplotlib.pyplot as plt
from IPython.display import HTML
def make_animation(positions, plddts=None, line_w=2.0):
def ca_align_to_last(positions):
def align(P, Q):
p = P - P.mean(0,keepdims=True)
q = Q - Q.mean(0,keepdims=True)
return p @ cf.kabsch(p,q)
pos = positions[-1,:,1,:] - positions[-1,:,1,:].mean(0,keepdims=True)
best_2D_view = pos @ cf.kabsch(pos,pos,return_v=True)
new_positions = []
for i in range(len(positions)):
new_positions.append(align(positions[i,:,1,:],best_2D_view))
return np.asarray(new_positions)
# align all to last recycle
pos = ca_align_to_last(positions)
fig, (ax1, ax2, ax3) = plt.subplots(1,3)
fig.subplots_adjust(top = 0.90, bottom = 0.10, right = 1, left = 0, hspace = 0, wspace = 0)
fig.set_figwidth(13)
fig.set_figheight(5)
fig.set_dpi(100)
xy_min = pos[...,:2].min() - 1
xy_max = pos[...,:2].max() + 1
for ax in [ax1,ax3]:
ax.set_xlim(xy_min, xy_max)
ax.set_ylim(xy_min, xy_max)
ax.axis(False)
ims=[]
for k,(xyz,plddt) in enumerate(zip(pos,plddts)):
ims.append([])
im2 = ax2.plot(plddt, animated=True, color="black")
tt1 = cf.add_text("colored by N->C", ax1)
tt2 = cf.add_text(f"recycle={k}", ax2)
tt3 = cf.add_text(f"pLDDT={plddt.mean():.3f}", ax3)
ax2.set_xlabel("positions")
ax2.set_ylabel("pLDDT")
ax2.set_ylim(0,100)
ims[-1] += [cf.plot_pseudo_3D(xyz, ax=ax1, line_w=line_w)]
ims[-1] += [im2[0],tt1,tt2,tt3]
ims[-1] += [cf.plot_pseudo_3D(xyz, c=plddt, cmin=50, cmax=90, ax=ax3, line_w=line_w)]
ani = animation.ArtistAnimation(fig, ims, blit=True, interval=120)
plt.close()
return ani.to_html5_video()
HTML(make_animation(positions, plddts))
# save all recycles as PDB files
for n,(plddt,pos) in enumerate(zip(plddts,positions)):
b_factors = plddt[:,None] * outputs['structure_module']['final_atom_mask']
p = protein.Protein(aatype=inputs['aatype'][0],
atom_positions=pos,
atom_mask=outputs['structure_module']['final_atom_mask'],
residue_index=inputs['residue_index'][0] + 1,
b_factors=b_factors)
pdb_lines = protein.to_pdb(p)
with open(f"tmp.{n}.pdb", 'w') as f:
f.write(pdb_lines)
```
<small><small><i>
All the IPython Notebooks in this **Python Examples** series by Dr. Milaan Parmar are available @ **[GitHub](https://github.com/milaan9/90_Python_Examples)**
</i></small></small>
# Python Program to Differentiate Between `del`, `remove`, and `pop` on a List
In this example, you will learn to differentiate between **`del`**, **`remove`**, and **`pop`** on a list.
To understand this example, you should have the knowledge of the following **[Python programming](https://github.com/milaan9/01_Python_Introduction/blob/main/000_Intro_to_Python.ipynb)** topics:
* **[Python List](https://github.com/milaan9/02_Python_Datatypes/blob/main/003_Python_List.ipynb)**
## Use of `del`
**`del`** deletes items at a specified position.
```
# Example 1:
my_list = [1, 2, 3, 4]
del my_list[1]
print(my_list)
'''
>>Output/Runtime Test Cases:
[1, 3, 4]
'''
```
**Explanation:**
**`del`** can delete the entire list with a single statement whereas **`remove()`** and **`pop()`** cannot.
```
# Example 2:
my_list = [1, 2, 3, 4]
del my_list
print(my_list)
'''
>>Output/Runtime Test Cases:
name 'my_list' is not defined
'''
```
Moreover, it can also remove a specific range of values, unlike **`remove()`** and **`pop()`**.
```
# Example 3:
my_list = [1, 2, 3, 4]
del my_list[3:]
print(my_list)
'''
>>Output/Runtime Test Cases:
[1, 2, 3]
'''
```
### Error mode
If the given index is out of range, **`del`** raises an **`IndexError`**.
```
# Example 4:
my_list = [1, 2, 3, 4]
del my_list[4]
print(my_list)
'''
>>Output/Runtime Test Cases:
list assignment index out of range
'''
```
## Use of `remove`
**`remove()`** deletes the specified item.
```
# Example 5:
my_list = [1, 2, 3, 4]
my_list.remove(2)
print(my_list)
'''
>>Output/Runtime Test Cases:
[1, 3, 4]
'''
```
### Error mode
If the given item is not present in the list, **`remove()`** raises a **`ValueError`**.
```
# Example 6:
my_list = [1, 2, 3, 4]
my_list.remove(12)
print(my_list)
'''
>>Output/Runtime Test Cases:
list.remove(x): x not in list
'''
```
## Use of `pop`
**`pop()`** removes the item at a specified position and returns it.
```
# Example 7:
my_list = [1, 2, 3, 4]
print(my_list.pop(2))
print(my_list)
'''
>>Output/Runtime Test Cases:
3
[1, 2, 4]
'''
```
### Error mode
If the given index is out of range, **`pop()`** raises an **`IndexError`**.
```
# Example 8:
my_list = [1, 2, 3, 4]
my_list.pop(4)
print(my_list)
'''
>>Output/Runtime Test Cases:
pop index out of range
'''
```
If you want to learn more about these individual methods/statements, you can learn at:
* **[Python del Statement](https://github.com/milaan9/02_Python_Datatypes/blob/main/003_Python_List_Methods/Python_del_statement.ipynb)**
* **[Python List remove()](https://github.com/milaan9/02_Python_Datatypes/blob/main/003_Python_List_Methods/005_Python_List_remove%28%29.ipynb)**
* **[Python List pop()](https://github.com/milaan9/02_Python_Datatypes/blob/main/003_Python_List_Methods/007_Python_List_pop%28%29.ipynb)**
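As a compact side-by-side recap (an illustrative snippet with made-up values, not taken from the examples above):

```python
# del    -> deletes by index or slice, returns nothing
# remove -> deletes by value, returns nothing
# pop    -> deletes by index and returns the removed item
my_list = [10, 20, 30, 40, 50]
del my_list[0]         # [20, 30, 40, 50]
my_list.remove(30)     # [20, 40, 50]
item = my_list.pop(1)  # item = 40, list is now [20, 50]
print(my_list, item)   # [20, 50] 40
```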
```
import scipy.io
import torch
import numpy as np
import torch.nn as nn
import torch.utils.data as Data
import matplotlib.pyplot as plt
import torch.nn.functional as F
#from tensorboardX import SummaryWriter
from sklearn.metrics import roc_auc_score,roc_curve,auc,average_precision_score,precision_recall_curve
torch.manual_seed(1337)
np.random.seed(1337)
torch.cuda.manual_seed(1337)
torch.backends.cudnn.benchmark=True
```
# when loading the pred file directly
```
print('starting loading the data')
np_test_data = scipy.io.loadmat('test.mat')
testY_data = torch.FloatTensor(np_test_data['testdata'])
BATCH_SIZE = 100
print('starting loading the data')
np_test_data = scipy.io.loadmat('test.mat')
testX_data = torch.FloatTensor(np_test_data['testxdata'])
testY_data = torch.FloatTensor(np_test_data['testdata'])
test_loader = Data.DataLoader(
dataset=Data.TensorDataset(testX_data, testY_data),
batch_size=BATCH_SIZE,
shuffle=False,
num_workers=2,
drop_last=False,
)
print('compiling the network')
class DanQ(nn.Module):
def __init__(self, ):
super(DanQ, self).__init__()
self.Conv1 = nn.Conv1d(in_channels=4, out_channels=320, kernel_size=13)
#nn.init.uniform_(self.Conv1.weight, -0.05, 0.05)
self.Maxpool = nn.MaxPool1d(kernel_size=13, stride=6)
self.Drop1 = nn.Dropout(p=0.2)
self.BiLSTM = nn.LSTM(input_size=320, hidden_size=320, num_layers=2,
batch_first=True,
dropout=0.5,
bidirectional=True)
self.Linear1 = nn.Linear(163*640, 925)
self.Linear2 = nn.Linear(925, 919)
def forward(self, input):
x = self.Conv1(input)
x = F.relu(x)
x = self.Maxpool(x)
x = self.Drop1(x)
x_x = torch.transpose(x, 1, 2)
x, (h_n,h_c) = self.BiLSTM(x_x)
#x, h_n = self.BiGRU(x_x)
x = x.contiguous().view(-1, 163*640)
x = self.Linear1(x)
x = F.relu(x)
x = self.Linear2(x)
#x = torch.sigmoid(x)
return x
danq = DanQ()
danq.load_state_dict(torch.load('model/model0512/danq_net_params_4.pkl'))
danq.cuda()
loss_func = nn.BCEWithLogitsLoss()
print('starting testing')
# testing loop
pred_y = np.zeros([455024, 919])
i=0;j = 0
test_losses = []
danq.eval()
for step, (seq, label) in enumerate(test_loader):
#print(step)
seq = seq.cuda()
label = label.cuda()
test_output = danq(seq)
cross_loss = loss_func(test_output, label)
test_losses.append(cross_loss.item())
test_output = torch.sigmoid(test_output.cpu().data)
    # copy this batch's predictions into the right rows (last batch may be short)
    start = step * BATCH_SIZE
    pred_y[start:start + test_output.shape[0], :] = test_output.numpy()
#print(test_output.numpy())
test_loss = np.average(test_losses)
print_msg = (f'test_loss: {test_loss:.5f}')
print(print_msg)
#np.save('0522pred.npy',pred_y)
pred_y = np.load('pred/0419pred.npy')
aucs_file = open('aucs_pyDanQ_13bp.txt', 'w')
aucs_file.write('pyDanQ AU ROC\tpyDanQ AU PR\n')
# index 598 is skipped, matching the ranges used throughout this notebook
for i in list(range(0, 598)) + list(range(599, 919)):
    aucs_file.write('%.5f\t%.5f\n' % (roc_auc_score(testY_data.data[:, i], pred_y[:, i]),
                                      average_precision_score(testY_data.data[:, i], pred_y[:, i])))
aucs_file.close()
roc_auc_score(testY_data.data[:, 598], pred_y[:, 598])
for i in (211,277):
precision, recall, thresholds = precision_recall_curve(testY_data.data[:, i], pred_y[:, i])
ap = average_precision_score(testY_data.data[:, i], pred_y[:, i])
plt.plot(recall, precision, lw=1, label='%s(AP = %0.4f)' % (str(i),ap))
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.xlim([0, 1.05])
plt.ylim([0, 1.05])
plt.title('pr_curve')
plt.legend()
plt.show()
for i in (211,277):
fpr, tpr, thresholds = roc_curve(testY_data.data[:, i], pred_y[:, i])
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, lw=1, label='%s(AUC = %0.4f)' % (str(i),roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6))
plt.xlim([0, 1.05])
plt.ylim([0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('roc_curve')
plt.legend()
plt.show()
def mean_score(metric, index_ranges):
    """Average a per-task metric over the given column ranges."""
    scores = [metric(testY_data.data[:, i], pred_y[:, i])
              for index_range in index_ranges
              for i in index_range]
    return np.average(scores)

print(f'ap_DNase: {mean_score(average_precision_score, [range(0, 125)]):.5f}')
print(f'ap_TFBinding: {mean_score(average_precision_score, [range(125, 598), range(599, 815)]):.5f}')
print(f'ap_Histone: {mean_score(average_precision_score, [range(815, 919)]):.5f}')
print(f'auc_DNase: {mean_score(roc_auc_score, [range(0, 125)]):.5f}')
print(f'auc_TFBinding: {mean_score(roc_auc_score, [range(125, 598), range(599, 815)]):.5f}')
print(f'auc_Histone: {mean_score(roc_auc_score, [range(815, 919)]):.5f}')
num=910
print(roc_auc_score(testY_data.data[:, num], pred_y[:, num]))
print(average_precision_score(testY_data.data[:, num], pred_y[:, num]))
print('printing the pr_curve_125_DNase')
for i in range(0,125):
precision, recall, thresholds = precision_recall_curve(testY_data.data[:, i], pred_y[:, i])
#ap = average_precision_score(testY_data.data[:, i], pred_y[:, i])
#plt.plot(recall, precision, lw=1, label='%s(AP = %0.4f)' % (str(i),ap))
plt.plot(recall, precision, lw=1)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.xlim([0, 1.05])
plt.ylim([0, 1.05])
plt.title('pr_curve_125_DNase')
plt.savefig('pr_curve_125_DNase.svg')
plt.show()
print('printing the pr_curve_690_TFbinding')
for i in range(125,815):
precision, recall, thresholds = precision_recall_curve(testY_data.data[:, i], pred_y[:, i])
#ap = average_precision_score(testY_data.data[:, i], pred_y[:, i])
plt.plot(recall, precision, lw=1)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.xlim([0, 1.05])
plt.ylim([0, 1.05])
plt.title('pr_curve_690_TFbinding')
plt.savefig('pr_curve_690_TFbinding.png')
plt.show()
print('printing the pr_curve_104_Histone')
for i in range(815,919):
precision, recall, thresholds = precision_recall_curve(testY_data.data[:, i], pred_y[:, i])
#ap = average_precision_score(testY_data.data[:, i], pred_y[:, i])
plt.plot(recall, precision, lw=1)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.xlim([0, 1.05])
plt.ylim([0, 1.05])
plt.title('pr_curve_104_Histone')
plt.savefig('pr_curve_104_Histone.svg')
plt.show()
print('printing the roc_curve_125_DNase')
for i in range(0,125):
fpr, tpr, thresholds = roc_curve(testY_data.data[:, i], pred_y[:, i])
roc_auc = auc(fpr, tpr)
#plt.plot(fpr, tpr, lw=1, label='%s(AUC = %0.4f)' % (str(i),roc_auc))
plt.plot(fpr, tpr, lw=1)
plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6))
plt.xlim([0, 1.05])
plt.ylim([0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('roc_curve_125_DNase')
plt.savefig('roc_curve_125_DNase.svg')
plt.show()
print('printing the roc_curve_690_TFbinding')
for i in range(125,815):
fpr, tpr, thresholds = roc_curve(testY_data.data[:, i], pred_y[:, i])
#roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, lw=1)
plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6))
plt.xlim([0, 1.05])
plt.ylim([0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('roc_curve_690_TFbinding')
plt.savefig('roc_curve_690_TFbinding.svg')
plt.show()
print('printing the roc_curve_104_Histone')
for i in range(815,919):
fpr, tpr, thresholds = roc_curve(testY_data.data[:, i], pred_y[:, i])
#roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, lw=1)
plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6))
plt.xlim([0, 1.05])
plt.ylim([0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('roc_curve_104_Histone')
plt.savefig('roc_curve_104_Histone.svg')
plt.show()
import scipy.io
import numpy as np
from sklearn.metrics import roc_auc_score,roc_curve,auc,average_precision_score,precision_recall_curve
print('starting loading the data')
np_test_data = scipy.io.loadmat('test.mat')
testY_data = np_test_data['testdata']
pred_y = np.load('0419pred.npy')
def count_auc_bins(index_ranges):
    """Count how many tasks fall into each ROC AUC bucket (0.5-0.6, ..., >0.9)."""
    bins = {9: 0, 8: 0, 7: 0, 6: 0, 5: 0}
    for index_range in index_ranges:
        for i in index_range:
            score = roc_auc_score(testY_data[:, i], pred_y[:, i])
            if score > 0.9:
                bins[9] += 1
            elif 0.8 < score < 0.9:
                bins[8] += 1
            elif 0.7 < score < 0.8:
                bins[7] += 1
            elif 0.6 < score < 0.7:
                bins[6] += 1
            elif 0.5 < score < 0.6:
                bins[5] += 1
    for k in (9, 8, 7, 6, 5):
        print(f'auc_{k}: {bins[k]}')

print('125_DNase')
count_auc_bins([range(0, 125)])
print('690_TFbinding')
count_auc_bins([range(125, 598), range(599, 815)])
print('104_Histone')
count_auc_bins([range(815, 919)])
```
# Python and Web Tutorial
# The Web
The World Wide Web is a space where users connected to the Internet can share information with one another. It is abbreviated WWW or W3, and is most commonly called simply "the Web".
The term is often used interchangeably with "the Internet", but strictly speaking the Web is just one popular service that runs on the Internet. The two words are conflated so often precisely because the Web makes up the largest part of the Internet today.
# The Client-Server (CS) Model
A model in which a client and a server exchange data with each other.
When the client requests data from the server, the server responds to that request and provides the data.
## HTTP (Hyper Text Transfer Protocol)
HTTP is the protocol for exchanging information on the W3 (WWW: World Wide Web).
Hypertext is text that lets a reader jump immediately from the current document to another document. In other words, by embedding references that link one document to another, documents on the Web can refer to each other.
Such a reference from one document to another is called a hyperlink. The representative hypertext format is HTML (Hyper Text Markup Language), a markup-style document.
## Markup & Markdown
The "markup" here refers to tags. There is also Markdown, the counterpart of markup. Unlike markup, Markdown can be written with plain text and a few symbols, which makes it easy both to read and to write. The explanatory text in the Jupyter notebook you are reading now is written in Markdown.
## Web Browser
A web browser is the general name for an application used to retrieve and navigate Internet content, such as multimedia files, based on the W3.
Put simply, you can think of it as an interpreter that understands HTML written in markup format.
## Request
When a client requests data from a server, it makes the request via a URI or URL.
### URI (Uniform Resource Identifier)
A URI is a string that identifies a resource on the Internet.
### URL (Uniform Resource Locator)
A URL, one form of URI, indicates the location of a resource on the Internet.
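As a small illustration (the example URL is made up), the standard library can split a URL into its named parts:

```python
from urllib.parse import urlparse

parts = urlparse('http://www.google.co.kr/search?q=python#top')
print(parts.scheme)    # protocol: http
print(parts.netloc)    # domain address: www.google.co.kr
print(parts.path)      # resource path: /search
print(parts.query)     # query string: q=python
print(parts.fragment)  # fragment: top
```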
### Request Methods
There are many HTTP request methods; five are common, and of those five, two (GET and POST) are used most often.
1. GET: read data
    - Asks the web server to send data
    - Only retrieves the requested data from the server
    - Appends the query string (parameters) directly to the URL
    - Responses to this request can be cached
2. POST: send data
    - Sends data from the client to the web server
    - Can upload files
    - Carries the data in the body of the request message
3. PUT: update data
    - Sends data from the client to the web server
    - Can upload files
    - Intended to update existing data rather than create new data
    - The web server therefore reuses the URI the client supplied
4. DELETE: delete data
    - Asks the web server to delete data
    - The opposite of PUT
5. HEAD: read headers
    - Asks the web server for the headers only
    - Requests the metadata (header) about a document, without the document body itself
    - Used for health checks or to obtain web server information such as the version
You do not need to memorize them all; remembering the two most common ones, GET and POST, is enough.
### Python Code
Below is an example of writing Python code that sends a request to a web server.
First, import the `requests` package, which is needed to send requests to a web server.
```
import requests
```
#### Sending a GET request
A GET request and its URL take the following form.
- protocol://domain_address?key1=value1&key2=value2...
As an example, the code below sends a GET request to Google.
```
url = 'http://www.google.co.kr'
resp = requests.get(url)
```
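The `protocol://domain_address?key=value` form shown above can also be assembled by hand with the standard library (the keys and values here are illustrative):

```python
from urllib.parse import urlencode

base = 'http://www.google.co.kr/search'
query = urlencode({'key1': 'value1', 'key2': 'value2'})
full_url = f'{base}?{query}'
print(full_url)  # http://www.google.co.kr/search?key1=value1&key2=value2
```

In practice, `requests.get(url, params={...})` builds this query string for you.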
## Response
Here, the response means the data received back from the server after the client sends a request. On the W3, the final result of a response is an HTML document.
### Status Code
Status codes range from 1xx to 5xx, with the following meanings.
- 1xx: conditional response
- 2xx: success
- 3xx: redirection
- 4xx: client error (the request was at fault)
- 5xx: server error (the server was at fault)
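The grouping by leading digit can be expressed directly as a small helper (a sketch, not part of the tutorial itself):

```python
def status_class(code):
    """Map an HTTP status code to its class by the leading digit."""
    classes = {1: 'conditional response', 2: 'success', 3: 'redirection',
               4: 'client error', 5: 'server error'}
    return classes.get(code // 100, 'unknown')

print(status_class(200))  # success
print(status_class(404))  # client error
print(status_class(503))  # server error
```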
Let's inspect the response object we received from the Google server earlier.
```
resp
```
The result is a `Response` object whose status code is 200, meaning the response was received successfully.
Now let's look at the HTML document we received. Since it is quite long, only the first 300 characters are printed.
```
resp.text[:300]
```
## DOM
DOM is short for Document Object Model, a programming interface for HTML and XML documents.
The DOM provides a structured representation of a document and gives programming languages a way to access that structure, so they can change the document's structure, style, and content.
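As a minimal sketch of "a programming language accessing the DOM", here is the standard library's `minidom` (purely for illustration; the tutorial itself uses BeautifulSoup below):

```python
from xml.dom.minidom import parseString

dom = parseString('<html><body><p id="greet">hello</p></body></html>')
p = dom.getElementsByTagName('p')[0]
print(p.getAttribute('id'))    # greet
print(p.firstChild.nodeValue)  # hello

# the DOM also allows programmatic modification of content
p.firstChild.nodeValue = 'bye'
print(dom.getElementsByTagName('p')[0].firstChild.nodeValue)  # bye
```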
### XML Parser
Most major web browsers today have a built-in XML parser for accessing and manipulating XML documents.
The XML DOM includes a variety of methods for accessing and manipulating XML documents.
To use these methods, however, the XML document must first be converted into an XML DOM object.
An XML parser reads the plain-text data of an XML document and returns it as an XML DOM object.
To build the DOM tree structure here, we use the `BeautifulSoup` class from the `bs4` library.
```
from bs4 import BeautifulSoup
```
#### Supported XML parsers
- html.parser: fast but not lenient; used for simple HTML documents
- lxml: very fast and lenient
- xml: used only for XML files
- html5lib: used for HTML with complex structure
```
dom = BeautifulSoup(resp.text, 'lxml')
```
## CSS
CSS stands for Cascading Style Sheets, a language that defines how documents written in markup such as HTML are actually presented on a website. Simply put, it is used to express visual design and layout.
In CSS you can define rules for elements and set their property values.
### CSS Selector
A selector is, as the name suggests, the part that does the selecting: it picks out the elements that CSS rules apply to.
#### Pattern
There are quite a few kinds of patterns; the main ones and their usage are as follows.
- * (asterisk): selects every tag in the HTML page
- E: selects tags whose tag name is E
- .my_class: selects elements whose class attribute is my_class
- #my_id: selects elements whose id attribute is my_id
- E E2: selects E2 elements that are descendants of an E element
- E > E2: selects E2 elements that are direct children of an E element
- E+E2: selects an E2 element that immediately follows an E element (not selected if another element sits between E and E2)
- E~E2: selects an E2 element that has an E sibling before it (not selected unless E appears before E2)
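A quick sketch of the difference between `E E2` (descendant) and `E > E2` (direct child), assuming `bs4` with its default CSS-selector support; the tiny document is made up for the demo:

```python
from bs4 import BeautifulSoup

doc = BeautifulSoup(
    "<div id='a'><span><p>deep</p></span><p>shallow</p></div>",
    'html.parser')
print([t.text for t in doc.select('#a p')])    # descendants: ['deep', 'shallow']
print([t.text for t in doc.select('#a > p')])  # direct children: ['shallow']
```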
### Attribute Selector
Used to select elements that have a particular attribute.
#### Pattern
- E[attr]: selects E elements that have an attr attribute
- E[attr="val"]: selects E elements whose attr attribute exactly equals 'val'
- E[attr~="val"]: selects elements whose attr attribute contains 'val' as a whitespace-separated word
- E[attr^="val"]: selects elements whose attr attribute starts with 'val'
- E[attr$="val"]: selects elements whose attr attribute ends with 'val'
- E[attr*="val"]: selects elements whose attr attribute contains 'val' as a substring
- E[attr|="val"]: selects E elements whose attr attribute is exactly 'val' or starts with 'val-'
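The difference between `~=` (whole word) and `*=` (any substring) is easy to miss; a small sketch, again assuming `bs4` with its default CSS-selector support:

```python
from bs4 import BeautifulSoup

doc = BeautifulSoup("<p class='f b'>x</p><p class='fb'>y</p>", 'html.parser')
# ~= matches whole whitespace-separated words; *= matches any substring
print(len(doc.select("p[class~='f']")))  # only class='f b' has the word 'f'
print(len(doc.select("p[class*='f']")))  # both values contain the letter 'f'
```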
### Exercises
The following HTML document was written for the examples.
```
html = '''
<!DOCTYPE HTML>
<html>
<head>
<meta charset='utf8' />
<title>Site Name</title>
</head>
<body>
<div id='animal' class='a' name='animal'>
<p id='cat' class='a c' name='cat'>Cat</p>
<p id='dog' class='a d' name='dog'>Dog</p>
</div>
<div id='food' class='f' name='food'>
<p>Food</p>
<div id='fruit' class='f f'>
<p id='apple' class='f f a'>Apple</p>
<p id='banana' class='f f b'>Banana</p>
</div>
<div id='bread' class='f b'>
<p id='bread' class='f b' name='bread'>Bread</p>
</div>
</div>
</body>
</html>
'''
print(html)
```
Now we turn the HTML document above into a DOM object and use selectors to fetch the elements matching particular patterns.
```
dom = BeautifulSoup(html, 'lxml')
```
#### 1. Select every HTML tag
```
dom.select('*')
```
#### 2. Select a specific tag by its tag name
```
dom.select('body')
```
#### 3. Select the tag whose id is `fruit`
```
dom.select('#fruit')
```
#### 4. Select the `p` tags that are descendants of the tag whose id is `food`
```
dom.select('#food p')
```
#### 5. Select the `p` tags that are direct children of the tag whose id is `food`
```
dom.select('#food > p')
```
#### 6. Select the tags whose class is `f`
```
dom.select('.f')
```
#### 7. Select the tags whose class is `f f a`
```
dom.select('.f.f.a')
```
#### 8. Select the `p` tags whose class is `f b`
```
dom.select('p.f.b')
```
#### 9. Select the `p` tag whose id is `apple` and whose class is `f f a`
```
dom.select('p#apple.f.f.a')
```
#### 10. Select the tags that have a `name` attribute
```
dom.select('[name]')
```
#### 11. Select the `p` tags that have a `name` attribute
```
dom.select('p[name]')
```
#### 12. Select the `p` tags whose `name` attribute has the value `bread`
```
dom.select('p[name="bread"]')
```
### Practical Example
Send a GET request to the Google site, then download the logo image.
```
url = 'http://www.google.co.kr'
resp = requests.get(url)
dom = BeautifulSoup(resp.text, 'lxml')
tags = dom.select('img#hplogo')
logo_tag = tags[0]
logo_tag
```
We succeeded in fetching the logo image tag. Among the tag's attributes, `src` holds the link to the logo image.
```
logo_tag['src']
```
Sending another request to that link yields the result in binary form; images are also delivered on the web as binary data.
First, the value of the `src` attribute is a URL. On its own it does not tell us which domain the resource lives under, so we join it with the domain address.
This step is called URL joining (parsing).
The code below combines Google's domain address with the logo image's path to build the final, complete URL.
```
from urllib.parse import urljoin
logo_url = urljoin(url, logo_tag['src'])
```
Now we need the actual logo image data. The procedure is the same as before: send a request to the URL and receive the response.
```
resp = requests.get(logo_url)
```
### Header
Let's examine the header of the response object we received.
A header is the metadata that defines the state information used when exchanging requests and responses.
```
resp.headers
```
The response's `Content-Type`, i.e. the form of its content, states that it is an `image` file in `png` format.
Let's look at the content.
```
resp.content
```
Printing the content shows what looks like a jumble of broken characters; that is because an image is binary data.
```
type(resp.content)
```
Now let's save the logo image in binary form.
```
with open('logo.png', 'wb') as image:
image.write(resp.content)
```
Let's check that the image was saved correctly.
```
from PIL import Image
Image.open('logo.png')
```
As shown above, the logo was fetched successfully and the download is complete.
```
import os
import os.path as op
import psutil
from pathlib import Path
from IPython.display import display
import findspark
findspark.init()
import pyspark.sql.functions as F
from pyspark import SparkConf
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
if os.getenv('SLURM_TMPDIR'):
SPARK_TMPDIR = Path(os.getenv('SLURM_TMPDIR')).resolve(strict=True)
elif os.getenv("TMPDIR"):
SPARK_TMPDIR = Path(os.getenv('TMPDIR'))
elif os.getenv('SCRATCH'):
SPARK_TMPDIR = Path(os.getenv('SCRATCH')).joinpath('tmp')
else:
raise Exception("Could not find a temporary directory for SPARK data!")
SPARK_TMPDIR.mkdir(parents=True, exist_ok=True)
vmem = psutil.virtual_memory().total // 1024**2
spark_conf = SparkConf()
spark_conf.set("spark.sql.execution.arrow.enabled", "true")
if "SPARK_MASTER_HOST" in os.environ:
SPARK_MASTER = f"spark://{os.environ['SPARK_MASTER_HOST']}:7077"
CORES_PER_WORKER = 16
num_workers = max(1, psutil.cpu_count() // CORES_PER_WORKER)
print(f"num_workers: {num_workers}")
# Make sure we are not wasting any cores
if num_workers != psutil.cpu_count() / CORES_PER_WORKER:
print("WARNING!!! Not using all available CPUs!")
spark_conf.set("spark.driver.memory", "65000M")
spark_conf.set("spark.driver.maxResultSize", "65000M")
spark_conf.set("spark.executor.cores", f"{CORES_PER_WORKER}")
spark_conf.set("spark.executor.memory", f"{int((vmem - 1024) * 0.8 / num_workers)}M")
spark_conf.set("spark.network.timeout", "600s")
spark_conf.set("spark.sql.shuffle.partitions", "2001")
# spark_conf.set("spark.local.dirs", SPARK_TMPDIR.as_posix())
else:
SPARK_MASTER = f"local[{psutil.cpu_count()}]"
driver_memory = min(64000, int(vmem // 2))
executor_memory = int(vmem - driver_memory)
spark_conf.set("spark.driver.memory", f"{driver_memory}M")
spark_conf.set("spark.driver.maxResultSize", f"{driver_memory}M")
spark_conf.set("spark.executor.memory", f"{executor_memory}M")
# spark_conf.set("spark.network.timeout", "600s")
spark_conf.set("spark.sql.shuffle.partitions", "200")
spark_conf.set("spark.local.dirs", SPARK_TMPDIR.as_posix())
spark_conf.set("spark.driver.extraJavaOptions", f"-Djava.io.tmpdir={SPARK_TMPDIR.as_posix()}")
spark_conf.set("spark.executor.extraJavaOptions", f"-Djava.io.tmpdir={SPARK_TMPDIR.as_posix()}")
try:
SPARK_CONF_EXTRA
except NameError:
pass
else:
for key, value in SPARK_CONF_EXTRA.items():
spark_conf.set(key, value)
spark = (
SparkSession
.builder
.master(SPARK_MASTER)
.appName(op.basename(op.dirname(os.getcwd())))
.config(conf=spark_conf)
.getOrCreate()
)
assert spark.conf.get("spark.sql.execution.arrow.enabled") == "true"
display(spark)
```
```
import sys, os, re, csv, codecs, numpy as np, pandas as pd
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation
from keras.layers import Bidirectional, GlobalMaxPool1D
from keras.models import Model
from keras import initializers, regularizers, constraints, optimizers, layers
BASEDIR = '../data/raw'
train = pd.read_csv(os.path.join(BASEDIR, 'train.csv'))
train.head()
test = pd.read_csv(os.path.join(BASEDIR, 'test.csv'))
test.head()
train['comment_text'] = train['comment_text'].fillna(' ')
test['comment_text'] = test['comment_text'].fillna(' ')
list_classes = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
y = train[list_classes].values
```
# Preprocessing
```
import spacy
nlp = spacy.load('en', disable=['parser', 'ner', 'textcat'])
def reduce_to_double_max(text):
    """Removes unnecessary doubling/tripling/etc. of characters.

    Steps:
    1. Replace every run of 3+ identical word characters by exactly 2
    2. Replace every run of 2+ identical non-word characters by a single one
    """
    text = re.sub(r'(\w)\1{2,}', r'\1\1', text)
    return re.sub(r'(\W)\1+', r'\1', text)
def preprocess_corpus(corpus):
"""Applies all preprocessing rules to the corpus"""
corpus = (reduce_to_double_max(s.lower()) for s in corpus)
docs = nlp.pipe(corpus, batch_size=1000, n_threads=4)
return [' '.join([x.lemma_ for x in doc if x.is_alpha]) for doc in docs]
fname_train_processed = '../data/processed/train.txt'
if os.path.isfile(fname_train_processed):
with open(fname_train_processed, 'r') as fin:
train_processed = [line.strip() for line in fin if line]
else:
train_processed = preprocess_corpus(train['comment_text'])
with open(fname_train_processed, 'w') as fout:
for doc in train_processed:
fout.write('{}\n'.format(doc))
train['comment_text_processed'] = train_processed
fname_test_processed = '../data/processed/test.txt'
if os.path.isfile(fname_test_processed):
with open(fname_test_processed, 'r') as fin:
test_processed = [line.strip() for line in fin if line]
else:
test_processed = preprocess_corpus(test['comment_text'])
with open(fname_test_processed, 'w') as fout:
for doc in test_processed:
fout.write('{}\n'.format(doc))
test['comment_text_processed'] = test_processed
embed_size = 50 # how big is each word vector
max_features = 20000 # how many unique words to use (i.e num rows in embedding vector)
maxlen = 100 # max number of words in a comment to use
EMBEDDING_FILE = '/Users/mathieu/datasets/glove.6B.50d.txt'
def get_coefs(word,*arr): return word, np.asarray(arr, dtype='float32')
embeddings_index = dict(get_coefs(*o.strip().split()) for o in open(EMBEDDING_FILE))
all_embs = np.stack(list(embeddings_index.values()))
emb_mean,emb_std = all_embs.mean(), all_embs.std()
emb_mean,emb_std
list_sentences_train = train['comment_text_processed'].values
list_sentences_test = test['comment_text_processed'].values
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(list(list_sentences_train))
list_tokenized_train = tokenizer.texts_to_sequences(list_sentences_train)
list_tokenized_test = tokenizer.texts_to_sequences(list_sentences_test)
X_t = pad_sequences(list_tokenized_train, maxlen=maxlen)
X_te = pad_sequences(list_tokenized_test, maxlen=maxlen)

# The tokenizer must be fitted before its word_index can be used below.
word_index = tokenizer.word_index
nb_words = min(max_features, len(word_index))
embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size))
for word, i in word_index.items():
    if i >= max_features: continue
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None: embedding_matrix[i] = embedding_vector
```
# Train Network
```
inp = Input(shape=(maxlen,))
x = Embedding(max_features, embed_size, weights=[embedding_matrix])(inp)
x = Bidirectional(LSTM(50, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))(x)
x = GlobalMaxPool1D()(x)
x = Dense(50, activation="relu")(x)
x = Dropout(0.1)(x)
x = Dense(6, activation="sigmoid")(x)
model = Model(inputs=inp, outputs=x)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_t, y, batch_size=32, epochs=2)
import time
y_test = model.predict([X_te], batch_size=1024, verbose=1)
sample_submission = pd.read_csv('../data/raw/sample_submission.csv')
sample_submission[list_classes] = y_test
sample_submission.to_csv('../data/external/submission-{}.csv'.format(time.strftime('%Y%m%d_%H%M', time.localtime())), index=False)
```
| github_jupyter |
# Rust Crash Course - 01 - Variables and Data Types
In order to process data correctly and efficiently, Rust needs to know the data type of a variable.
In the following, variables and common data types of the Rust programming language are explained.
The contents represent a brief and compact introduction to the topic, inspired by the [Rust Book](https://doc.rust-lang.org/book/), the [Rust Reference](https://doc.rust-lang.org/reference/), and [Rust By Example](https://doc.rust-lang.org/rust-by-example/).
## Variables and Mutability
By default, variables are immutable in Rust, i.e. once a value is bound to a name, it cannot be changed.
Values are bound to names by the keyword ``let``, while adding ``mut`` makes the given variable mutable.
The advantage of this approach is that, if someone accidentally alters the value of an immutable variable in the code, the compiler will output an error.
If there is no specific data type given, as in the following simple example, Rust automatically determines the data type of the variable.
Also note that idiomatic Rust uses ``snake_case`` in variable names.
```
let x = 1;
x = 5; // will lead to a compiler error
let mut y = 2;
y = 17; // perfectly fine, because y is mutable
let snake_case_example_number = 7;
// by the way: comments are always denoted by two slashes at the beginning
println!("y = {}", y); // the println!() macro enables formatted output
```
However, it is possible to *shadow* a variable by using the ``let`` keyword again and binding it to a new value.
```
let z = 123;
println!("z = {} (original)", z);
{
let z = 456; // z (and bound value) are "shadowed" with new binding
println!("z = {} (in-block shadowing)", z);
}
println!("z = {} (after in-block shadowing)", z);
let z = 789; // original z (and bound value) are "shadowed" with new binding
println!("z = {} (same-level shadowing)", z);
```
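A common use of shadowing not shown above is re-binding a name to a value of a *different type*, which plain mutation does not allow. A small sketch (the function name `parse_and_double` is made up for illustration):

```rust
fn parse_and_double(input: &str) -> i32 {
    // shadow `input`: the same name now holds an i32 instead of a &str
    let input: i32 = input.trim().parse().unwrap_or(0);
    input * 2
}

fn main() {
    let spaces = "   ";        // &str
    let spaces = spaces.len(); // shadowed: the name now holds a usize
    println!("number of spaces = {}", spaces);
    println!("parse_and_double(\"21\") = {}", parse_and_double("21"));
}
```

With `let mut`, re-assigning a value of a different type would be a compile error; shadowing creates a fresh binding instead.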
Using ``const``, you can declare constant values that may never be changed and may not depend on runtime computations. Further, it is mandatory to add the data type of the constant value. At compilation time, these constants will be inlined, wherever possible.
In Rust, constants are denoted in ``SCREAMING_SNAKE_CASE``.
```
const CPU_FREQ: u32 = 16_000_000; // underscores can be used to improve readability of numbers
// u32 denotes an unsigned integer type with 32 bits
```
In Rust, global variables are called *static variables*. The corresponding keyword ``static`` is very similar to ``const``. However, values introduced with ``static`` are not inlined upon compilation but can be accessed at one specific location in memory.
```
static N_BYTES_MAX: u32 = 4096;
```
## Scalar Data Types
Scalar data types describe data that represents a single value. Rust has four scalar data types:
- integers
- floating-point numbers
- booleans
- characters
**Note:** *Using ``std::mem::size_of`` and ``std::mem::size_of_val``, you can check the memory footprint of types and variables, respectively. For more information, see https://doc.rust-lang.org/std/mem/index.html.*
### Integers
Integers can be signed (``i8``, ``i16``, ``i32``, ``i64``, ``i128``) or unsigned (``u8``, ``u16``, ``u32``, ``u64``, ``u128``). The type annotation states the bit width of the specific data type.
There are two further types, ``isize`` and ``usize``, whose width matches the pointer size of the CPU architecture the program runs on.
```
let int_1: i8 = 42;
let int_2: i32 = -768;
let int_3: i128 = 123 * -456;
let int_4: u8 = 42;
let int_5: u32 = 2018;
let int_6: u128 = 2048 * 2048;
let int_7: isize = 1 - 129;
let int_8: usize = 127 + 1;
println!("Type <i8> uses {} byte(s)!", std::mem::size_of::<i8>());
println!("Type <u16> uses {} byte(s)!", std::mem::size_of::<u16>());
println!("Type <i32> uses {} byte(s)!", std::mem::size_of::<i32>());
println!("Type <u64> uses {} byte(s)!", std::mem::size_of::<u64>());
println!("int_1 = {} and uses {} byte(s)!", int_1, std::mem::size_of_val(&int_1));
println!("int_2 = {} and uses {} byte(s)!", int_2, std::mem::size_of_val(&int_2));
println!("int_3 = {} and uses {} byte(s)!", int_3, std::mem::size_of_val(&int_3));
println!("int_4 = {} and uses {} byte(s)!", int_4, std::mem::size_of_val(&int_4));
println!("int_5 = {} and uses {} byte(s)!", int_5, std::mem::size_of_val(&int_5));
println!("int_6 = {} and uses {} byte(s)!", int_6, std::mem::size_of_val(&int_6));
println!("int_7 = {} and uses {} byte(s)!", int_7, std::mem::size_of_val(&int_7));
println!("int_8 = {} and uses {} byte(s)!", int_8, std::mem::size_of_val(&int_8));
```
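One detail the section above does not show: integer arithmetic that exceeds the type's range panics in debug builds. If wrap-around or graceful failure is intended, the standard library provides explicit method families — a short sketch:

```rust
fn main() {
    let max: u8 = u8::MAX; // 255
    // checked_add returns None on overflow instead of panicking
    assert_eq!(max.checked_add(1), None);
    assert_eq!(254u8.checked_add(1), Some(255));
    // wrapping_add wraps around modulo 2^8
    assert_eq!(max.wrapping_add(1), 0);
    // saturating_add clamps the result at the type's bounds
    assert_eq!(max.saturating_add(10), 255);
    println!("all overflow checks passed");
}
```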
### Floats
For floating point numbers, Rust offers two data types, one for single-precision (``f32``) and one for double-precision (``f64``) calculations.
```
let pi_singleprec: f32 = 3.14159265;
let pi_doubleprec: f64 = 3.141592653589793;
println!("Type <f32> uses {} byte(s)!", std::mem::size_of::<f32>());
println!("Type <f64> uses {} byte(s)!", std::mem::size_of::<f64>());
println!("pi_singleprec = {} and uses {} byte(s)!", pi_singleprec, std::mem::size_of_val(&pi_singleprec));
println!("pi_doubleprec = {} and uses {} byte(s)!", pi_doubleprec, std::mem::size_of_val(&pi_doubleprec));
```
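The practical difference between the two float types is precision, and neither stores most decimal fractions exactly. A small sketch of the usual consequence — comparing floats with a tolerance rather than `==`:

```rust
fn main() {
    let third_f32: f32 = 1.0 / 3.0;
    let third_f64: f64 = 1.0 / 3.0;
    println!("f32: {:.17}", third_f32); // fewer significant digits
    println!("f64: {:.17}", third_f64);
    // 0.1 + 0.2 is not exactly 0.3 in binary floating point
    let sum = 0.1_f64 + 0.2_f64;
    assert!(sum != 0.3);                 // exact comparison fails
    assert!((sum - 0.3).abs() < 1e-10);  // tolerance-based comparison succeeds
}
```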
### Booleans
In Rust, boolean variables are introduced using the data type ``bool`` and can only take two possible values: ``true`` or ``false``.
```
let bool_1 = true;
let bool_2: bool = false;
println!("Type <bool> uses {} byte(s)!", std::mem::size_of::<bool>());
println!("bool_1 = {} and uses {} byte(s)!", bool_1, std::mem::size_of_val(&bool_1));
println!("bool_2 = {} and uses {} byte(s)!", bool_2, std::mem::size_of_val(&bool_2));
```
### Characters
Characters in Rust are introduced by the keyword ``char`` and are not a single byte (as e.g. in C), but a 4-byte "Unicode Scalar Value". Also note that characters have to be written in single quotes, while strings in Rust use double quotes.
Since characters in Rust are different, e.g. compared to C, you might want to have a closer look here: https://doc.rust-lang.org/std/primitive.char.html
```
let char_1 = 'h';
let char_2: char = '🦀';
let char_3: char = '\u{1F601}';
println!("Type <char> uses {} byte(s)!", std::mem::size_of::<char>());
println!("char_1 = {} and uses {} byte(s)!", char_1, std::mem::size_of_val(&char_1));
println!("char_2 = {} and uses {} byte(s)!", char_2, std::mem::size_of_val(&char_2));
println!("char_3 = {} and uses {} byte(s)!", char_3, std::mem::size_of_val(&char_3));
```
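Because a ``char`` is a full Unicode scalar value, it comes with useful classification and conversion methods, and its in-memory size (4 bytes) differs from its UTF-8 encoded size in a string. A brief sketch:

```rust
fn main() {
    let c = 'ß';
    assert!(c.is_alphabetic());
    assert_eq!('7'.to_digit(10), Some(7));
    // to_uppercase returns an iterator, because uppercasing can
    // yield more than one char: 'ß' uppercases to "SS"
    let upper: String = c.to_uppercase().collect();
    assert_eq!(upper, "SS");
    // a char always occupies 4 bytes in memory, but its UTF-8
    // encoding inside a string takes 1 to 4 bytes
    assert_eq!('a'.len_utf8(), 1);
    assert_eq!('ß'.len_utf8(), 2);
    println!("char checks passed");
}
```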
## Compound Data Types
There are two primitive compound data types in Rust:
- tuples
- arrays
### Tuples
Tuples can group several different data types into one compound type that can be used to annotate a variable. However, tuple variables cannot grow or shrink in size once they have been created.
```
let mixed_tuple: (char, f32, u8) = ('a', 1.23, 42); // tuple variable consisting of a char, a float, and a byte
println!("mixed_tuple = {:?} and uses {} byte(s)!", mixed_tuple, std::mem::size_of_val(&mixed_tuple));
// {:?} enables "debug printing" of compound types
let point = (1,2); // if no data types are given for tuples, Rust automatically determines them
let (x, y) = point;
println!("point.0 = {} = {} = x", point.0, x); // parts of tuple can be accessed by their index number
println!("point.1 = {} = {} = y", point.1, y);
let mut mutable_point: (u8, u8) = (11,7);
println!("mutable_point.0 = {}", mutable_point.0);
println!("mutable_point.1 = {}", mutable_point.1);
mutable_point.0 = 5;
mutable_point.1 = point.0;
println!("mutable_point = {:?}", mutable_point);
```
### Empty Tuple
The empty tuple ``()`` can be used to express that there is no data, e.g. as a return value of a function.
```
let no_data = ();
```
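To make this concrete: a function without an explicit return type implicitly returns ``()``, and the unit value can be bound like any other value. A small sketch (the function name `log_value` is made up for illustration):

```rust
// no return type given, so this function implicitly returns ()
fn log_value(v: i32) {
    println!("value = {}", v);
}

fn main() {
    let nothing: () = log_value(42); // binding the unit value is legal
    // the empty tuple carries no data and occupies zero bytes
    assert_eq!(std::mem::size_of_val(&nothing), 0);
}
```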
### Arrays
Array variables in Rust can also contain several elements. However, each element has the same data type. Like tuples, arrays are also fixed-length.
```
let primes: [u8; 5] = [2, 3, 5, 7, 11];
let time_series: [f32; 8] = [0.73, 0.81, 0.88, 0.92, 0.72, 0.83, 0.85, 0.90];
println!("primes[0] = {} and the array uses {} byte(s)!", primes[0], std::mem::size_of_val(&primes));
println!("time_series[4] = {} and the array uses {} byte(s)!", time_series[4], std::mem::size_of_val(&time_series));
```
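Although arrays are fixed-length, functions can borrow a *slice* (``&[T]``) into them, which works for arrays of any length and for sub-ranges. A short sketch under that assumption:

```rust
fn sum(values: &[u32]) -> u32 {
    // a slice borrows a view into the array, so this one function
    // works for arrays of any length and for sub-ranges
    values.iter().sum()
}

fn main() {
    let primes: [u32; 5] = [2, 3, 5, 7, 11];
    assert_eq!(primes.len(), 5);
    assert_eq!(sum(&primes), 28);
    assert_eq!(sum(&primes[1..3]), 8); // slice of the sub-range [3, 5]
    // note: out-of-bounds indexing panics at runtime in Rust,
    // instead of reading arbitrary memory as in C
}
```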
## Custom Data Types
There are two common ways to create custom data types in Rust:
- structs
- enums
Further, both offer useful variations (e.g. tuple structs and enums with values) and applications (e.g. the ``Option`` enum) that will be explained in the following.
### Structs
Structs, like tuples, can contain different data types. They are introduced by the keyword ``struct``, and each field of a struct has a specific name, so it can be defined and accessed in a more convenient way.
By the way: The default memory layout of structs in Rust is different to the one used in the C language. If you would like to dig deeper into this topic, take a look at this: https://doc.rust-lang.org/reference/type-layout.html
```
struct Point {
x: i16,
y: i16,
mark: char,
transparency: u8,
}
let p = Point { x: 1, y: 7, mark: 'P', transparency: 100 };
println!("p.x = {}", p.x);
println!("p.y = {}", p.y);
println!("p uses {} byte(s)!", std::mem::size_of::<Point>());
```
### Tuple Structs
If element names are not required, but a distinct data type name is desired in order to detect accidental misuse, tuple structs can be useful.
```
struct Point (i16, i16);
struct Cell (i16, i16);
let p = Point(1, 2);
println!("p.0 = {}", p.0);
println!("p.1 = {}", p.1);
let mut c = Cell(7,3);
c = p; // leads to a compile error
```
### Enums
If a data type can only take a few specific values, we can enumerate them. The keyword ``enum`` allows us to specify a data type with fixed possible values.
```
enum Direction {
Right,
Left,
Up,
Down,
}
let direction = Direction::Left;
```
### Enums with Values
Sometimes, it might also be useful to add a value, e.g. a number, to an enum value.
```
enum Movement {
Right(i32),
Left(i32),
}
let movement = Movement::Right(12);
// match constructs allow extracting enum values (more on match expressions later)
match movement {
Movement::Right(steps) => {
println!("Movement to the right, {} steps", steps);
}
Movement::Left(value) => {
println!("Movement to the left, {} steps", value);
}
};
```
### The ``Option`` Enum
The ``Option`` enum is a special enum for the common case that a value can be "something" or "nothing". Since there is no ``null`` value in Rust, this is the idiomatic way to express a value or the absence of this value, e.g. in return values of functions. In the case of ``Some()``, one can call ``unwrap()`` to extract the returned value. Calling it on ``None`` will make your application panic!
The standard library defines it as follows:
```rust
enum Option<T> {
Some(T),
None,
}
```
```
let result_int = Some(42);
let absent_value: Option<u8> = None;
println!("result_int = {:?}", result_int);
println!("absent_value = {:?}", absent_value);
println!("result_int.unwrap() = {}", result_int.unwrap());
//println!("absent_value.unwrap() = {}", absent_value.unwrap()); // unwrap() on None --> panic!
```
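Instead of risking a panic with ``unwrap()``, an ``Option`` is usually consumed with ``match`` or a fallback via ``unwrap_or``. A small sketch (the function `first_even` is made up for illustration):

```rust
fn first_even(values: &[i32]) -> Option<i32> {
    // find returns Some(first match) or None if nothing matches
    values.iter().copied().find(|v| v % 2 == 0)
}

fn main() {
    // unwrap_or supplies a fallback value instead of panicking on None
    assert_eq!(first_even(&[1, 3, 4]).unwrap_or(0), 4);
    assert_eq!(first_even(&[1, 3, 5]).unwrap_or(0), 0);
    // match forces both variants to be handled explicitly
    match first_even(&[7, 8]) {
        Some(v) => println!("found even value {}", v),
        None => println!("no even value found"),
    }
}
```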
### The ``Result`` Enum
The ``Result`` enum is another special enum for the common case that a function either returns a valid result or an error. This is a common way of basic error handling in Rust. In the success case, one might want to call ``unwrap()`` to recover the value inside ``Ok()``. However, calling it on ``Err()`` will make your application panic!
The standard library defines it as follows:
```rust
enum Result<T, E> {
Ok(T),
Err(E),
}
```
```
let int_str = "10"; // correct input
let int_number = int_str.parse::<i32>(); // operation returns a value
let int_str_fail = "abc"; // invalid input
let int_number_fail = int_str_fail.parse::<i32>(); // operation returns an error
println!("int_number = {:?}", int_number);
println!("int_number_fail = {:?}", int_number_fail);
println!("int_number.unwrap() = {}", int_number.unwrap());
//println!("int_number_fail.unwrap() = {}", int_number_fail.unwrap()); // unwrap() on error --> panic!
```
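As with ``Option``, a ``Result`` is usually handled without ``unwrap()``, e.g. via ``match`` or ``unwrap_or``. A short sketch (the function `parse_or_zero` is made up for illustration):

```rust
fn parse_or_zero(s: &str) -> i32 {
    // match handles both variants without risking a panic
    match s.parse::<i32>() {
        Ok(n) => n,
        Err(_) => 0,
    }
}

fn main() {
    assert_eq!(parse_or_zero("10"), 10);
    assert_eq!(parse_or_zero("abc"), 0);
    // unwrap_or(0) is a shorter equivalent for this fallback pattern
    assert_eq!("7".parse::<i32>().unwrap_or(0), 7);
}
```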