# Download Patent DB & Adding Similarity Data
The similarity data on its own provides data on patent doc2vec vectors, and some pre-calculated similarity scores. However, it is much more useful in conjunction with a dataset containing other patent metadata. To achieve this it is useful to download a patent dataset and join it with the similarity data.
There are a number of sources of patent data, if you have a working dataset already it may be easiest to join the similarity data to your own dataset. If however, you do not have a local dataset you can easily download the data from <a href="http://www.patentsview.org/download/">Patentsview</a>
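If you already have your own patent dataset loaded as a dataframe, the join is a standard pandas merge on the patent number. The sketch below uses made-up rows and illustrative column names (`patent_id`, `similarity`); substitute the key columns from your own data:

```python
import pandas as pd

# Toy stand-ins: your own patent metadata, and a slice of the similarity data.
# Column names here are illustrative, not the canonical schema.
my_patents = pd.DataFrame({
    'patent_id': ['9000000', '9000001', '9000002'],
    'grant_year': [2015, 2015, 2015],
})
sim_data = pd.DataFrame({
    'patent_id': ['9000000', '9000001'],
    'citation_id': ['8500000', '8600000'],
    'similarity': [0.31, 0.27],
})

# A left join keeps every row of your dataset and attaches
# similarity rows where a matching patent number exists
merged = my_patents.merge(sim_data, on='patent_id', how='left')
print(merged)
```

Patents with no similarity row simply come through with NaN in the similarity columns, so you can tell at a glance which of your patents are covered by the similarity data.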
Patentsview offers a lot of data on their bulk download page. For ease of downloading, I have created a Python script that will take care of parsing all those URLs, downloading the CSV files, and reading them into a SQLite database. If you want a local version of the patent data, I recommend you use that script (available <a href = "https://github.com/ryanwhalen/patentsview_data_download">here</a>). Download the 'patentsview_download.py' file to the same folder you have this iPython notebook in and run the code below. Note that downloading may take a significant amount of time. So, run the script using the code below and then go make a cup of coffee. Then go to bed, do whatever you want to do over the course of the next couple of days, and then come back and check up on it.
```
%run ./patentsview_download.py
```
Once you've run the script above, you'll have a local database called 'patent_db.sqlite.' If you want a GUI to check out the contents, I recommend <a href="https://sqlitestudio.pl/">SQLite Studio</a> as a nice open-source option.
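If you'd rather stay at the Python prompt than install a GUI, you can list the database's tables directly through `sqlite3`. A quick sketch, shown here against a throwaway in-memory database with two stand-in tables; point the connection at your `patent_db.sqlite` instead:

```python
import sqlite3

# Stand-in DB; replace ':memory:' with the path to patent_db.sqlite
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE patent (id TEXT, date TEXT)')
conn.execute('CREATE TABLE cite_similarity (patent_id TEXT, citation_id TEXT, similarity REAL)')

# sqlite_master catalogs every table in the database
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
print(tables)  # → ['cite_similarity', 'patent']
```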
The next step is to add the similarity tables to your database. We'll run a separate python script to do so.
```
%run ./write_sim_data_to_db.py
```
# Initial Similarity Explorations
Everything from here on out assumes that you're using the SQLite database as constructed above. If you've chosen to marry the similarity data to your own dataset, you'll need to adapt the code below as required.
First, let's import a few packages and connect to the DB.
```
import pandas as pd
import sqlite3
import seaborn as sns
import numpy as np
import random
import gensim
from matplotlib import pyplot as plt
import networkx as nx
import itertools
import os
from sklearn.metrics.pairwise import cosine_similarity
from scipy import stats
from collections import defaultdict
import json
import csv
db_path ='/mnt/BigDisk1/patent_db_20191231/' #file path to your db file here
conn = sqlite3.connect(db_path+'patent_db.sqlite')
cur = conn.cursor()
cur2 = conn.cursor()
```
Let's make a pandas dataframe containing the similarity scores between citing/cited patents and the date the citations were made. Note that this may take a few moments, but once the dataframe has loaded working with it should be relatively quick provided your machine has sufficient memory.
```
df = pd.read_sql_query('''SELECT cite_similarity.similarity,
patent.date FROM cite_similarity
JOIN patent ON cite_similarity.patent_id = patent.id''', conn)
```
Let's have a quick look at the dataframe to see what we've loaded
```
df.head()
df.describe()
```
### Plotting the similarity distribution
Plotting the distribution of similarity scores for all citations shows that most patents tend to cite to other somewhat similar patents, but that there is also substantial variation.
```
sns.distplot(df['similarity'])
```
We saw above that citing/cited patents have an average similarity of about 0.26. How should we interpret that number? One way is to compare citing/cited similarity with the similarity scores we would expect to see between random patents.
The pre-calculated similarity dataset doesn't contain all pairwise similarity scores, so random pairs are unlikely to have a pre-calculated score. We'll need some code that can take two patent numbers, find their vectors and return the similarity score.
```
def patent_pair_sim(patent1, patent2):
'''takes 2 patent numbers, finds their doc2vec vectors and returns their cosine similarity'''
v1 = cur.execute('''SELECT vector FROM doc2vec WHERE patent_id = ?''',[patent1]).fetchone()
v2 = cur.execute('''SELECT vector FROM doc2vec WHERE patent_id = ?''',[patent2]).fetchone()
if v1 is None or v2 is None: #if either patent has no pre-calculated vector, return None
return None
v1 = json.loads(v1[0])
v2 = json.loads(v2[0])
sim = float(cosine_similarity([v1],[v2])[0])
return sim
```
Let's try that similarity calculating function out. Feel free to tweak the patent numbers below if there's a pair you're interested in comparing.
```
print(patent_pair_sim('9000000','9000001'))
```
To do some sanity checks, let's compare the similarity of patents randomly paired on various criteria. The CPC codes are a handy place to start. The code below compares the similarity score distributions for patents that share the same section (highest level), class (second level), or subclass (third level) as their primary categorization. We would expect that patents sharing lower-level CPC classifications will have more in common with one another than those that do not.
```
def match_on_cpc(patent, level):
'''takes a patent number and returns a second patent number
that shares the same cpc group codes'''
if level == 'subclass':
group = cur.execute('''SELECT group_id FROM cpc_current WHERE
sequence = '0' and patent_id = ?''',[patent]).fetchone()
if group is None:
return None
group = group[0]
match = cur.execute('''SELECT patent_id FROM cpc_current WHERE
group_id = ? ORDER BY RANDOM() LIMIT 1''',[group]).fetchone()
match = match[0]
if level == 'section':
section = cur.execute('''SELECT section_id FROM cpc_current
WHERE sequence = '0' and patent_id = ?''',[patent]).fetchone()
if section is None:
return None
section = section[0]
match = cur.execute('''SELECT patent_id FROM cpc_current WHERE
section_id = ? ORDER BY RANDOM() LIMIT 1''',[section]).fetchone()
match = match[0]
if level == 'class':
class_id = cur.execute('''SELECT subsection_id FROM cpc_current
WHERE sequence = '0' and patent_id = ?''',[patent]).fetchone()
if class_id is None:
return None
class_id = class_id[0]
match = cur.execute('''SELECT patent_id FROM cpc_current WHERE
subsection_id = ? ORDER BY RANDOM() LIMIT 1''',[class_id]).fetchone()
match = match[0]
return match
def get_cpc_match_sims(n, level):
'''returns n random pairwise similarities where the pairs
share the same primary cpc classification at the hierarchical
level indicated'''
patents = cur2.execute('''SELECT id FROM patent ORDER BY RANDOM()''')
sims = []
for p in patents:
p = p[0]
if not p.isdigit():
continue
match = match_on_cpc(p, level)
if match is None or match == p:
continue
sim = patent_pair_sim(p,match)
if sim is None:
continue
sims.append(sim)
if len(sims) == n:
return sims
```
We can use those functions to get similarity scores for each level of the CPC categorization. This can take some time and requires proper indexing on the DB to work well.
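What 'proper indexing' means here: the matching functions above repeatedly filter `cpc_current` by `patent_id` and by each classification column (and `patent_pair_sim` looks up `doc2vec` by `patent_id`), so those columns should each carry an index. A sketch of the `CREATE INDEX` statements, with column names taken from the queries above; run them once against your real database (demonstrated here on an in-memory stand-in table):

```python
import sqlite3

# In-memory stand-in for cpc_current; run these statements against patent_db.sqlite instead
conn = sqlite3.connect(':memory:')
conn.execute('''CREATE TABLE cpc_current (patent_id TEXT, section_id TEXT,
                subsection_id TEXT, group_id TEXT, sequence TEXT)''')

index_statements = [
    'CREATE INDEX IF NOT EXISTS idx_cpc_patent ON cpc_current (patent_id)',
    'CREATE INDEX IF NOT EXISTS idx_cpc_section ON cpc_current (section_id)',
    'CREATE INDEX IF NOT EXISTS idx_cpc_subsection ON cpc_current (subsection_id)',
    'CREATE INDEX IF NOT EXISTS idx_cpc_group ON cpc_current (group_id)',
]
for stmt in index_statements:
    conn.execute(stmt)

# Confirm the indexes were created
indexes = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index' ORDER BY name")]
print(indexes)
```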
```
n = 1000
section_match_sims = get_cpc_match_sims(n, level='section')
class_match_sims = get_cpc_match_sims(n, level='class')
subclass_match_sims = get_cpc_match_sims(n, level='subclass')
```
For good measure, we can also compare with randomly paired patents. We would expect these patents to have the least in common with one another.
```
def get_random_pairwise_sims(patents, n):
'''returns the similarities between n randomly paired patents'''
sims = []
while len(sims) < n:
patent1, patent2 = random.sample(patents,2)
sim = patent_pair_sim(patent1, patent2)
if sim is None:
continue
sims.append(sim)
return sims
patents = cur2.execute('''SELECT id FROM patent ORDER BY RANDOM()''').fetchall()
patents = [p[0] for p in patents if p[0].isdigit()]
random_sims = get_random_pairwise_sims(patents, n)
```
And now, we can compare each of these types of pairs and how similar they are to one another
```
fig = plt.figure(1, figsize=(9, 6))
ax = fig.add_subplot(111)
bp = ax.boxplot([random_sims, section_match_sims, class_match_sims, subclass_match_sims])
ax.set_xticklabels(['Random','Section', 'Class', 'Subclass'])
fig.savefig('cpc_sim_comparisons_boxplots.png', bbox_inches='tight', dpi=300)
```
As you can see, the similarity scores track what we would expect: random patent pairs are the least similar, pairs sharing the same section are somewhat more similar, pairs sharing the same class are more similar still, and pairs sharing the same subclass are the most similar. As we can see below, all of these differences are statistically significant.
```
print('Random '+str(np.mean(random_sims)))
print('Section '+str(np.mean(section_match_sims)))
t = stats.ttest_ind(random_sims, section_match_sims)
print(t)
print('Class '+str(np.mean(class_match_sims)))
t = stats.ttest_ind(section_match_sims, class_match_sims)
print(t)
print('Subclass '+str(np.mean(subclass_match_sims)))
t = stats.ttest_ind(class_match_sims, subclass_match_sims)
print(t)
```
Now, let's get a list of all of the patents, so that we can select some random pairs to compare.
```
def get_all_patents():
'''returns a list of all patent numbers in the DB'''
patents = cur.execute('''SELECT id FROM patent''').fetchall()
patents = [p[0] for p in patents]
patents = [p for p in patents if p.isdigit()] #this removes non-numerical patents like design, plant, etc.
return patents
patents = get_all_patents()
```
Now let's find the scores for some random pairs and plot that distribution.
```
sims = []
for i in range(10000):
pair = random.choices(patents, k=2)
sim = patent_pair_sim(pair[0],pair[1])
if sim is not None:
sims.append(sim)
sns.distplot(sims)
print(np.mean(sims))
```
### Comparing citing/cited similarity to random pairwise similarity
Plotting the two distributions side-by-side shows that - as we would expect - patents that share a citation relationship tend to be more similar than those that do not.
```
fig, ax = plt.subplots()
sns.kdeplot(df['similarity'], shade=True, ax=ax, label='Citation Similarity', linestyle = '--')
sns.kdeplot(sims, shade=True, ax = ax, label = 'Random Pairwise Similarity')
fig = ax.get_figure()
fig.savefig('cite_vs_random_sim.png', dpi=300)
```
### Citation similarity over time
Plotting the citation similarity by yearly mean reveals a trend towards decreasing similarity between citing and cited patents.
```
df['date'] = pd.to_datetime(df['date'])
yearly_means = df.groupby(df.date.dt.year)['similarity'].mean() #select the similarity column so the datetime column isn't averaged
ax = yearly_means.plot()
fig = ax.get_figure()
fig.savefig('yearly_cite_sim.png', dpi=300)
```
# Patent-Level Similarity Metrics
As well as identifying global trends, similarity metrics can also provide insight into single inventions. Many patent metrics use citations in combination with metadata such as technical classifications as proxy measures of either knowledge inputs (e.g. Originality) or as a measure of impact (e.g. Generality)(_See_ Trajtenberg, Jaffe and Henderson, 1997).
The code below can be used to generate a network of forward or backward (i.e. citing or cited) references and their similarity scores. These networks can subsequently be used to define measures of impact or input knowledge diversity. The blue arrows in the diagram below show backward and forward citation relationships in relation to the focal patent document, while the red arrows represent four different proposed similarity-based citation metrics: (a) knowledge proximity; (b) knowledge homogeneity; (c) impact proximity; and (d) impact homogeneity.
<img src = "cite_metrics.png">
## Forward and backward distance (knowledge proximity, and impact proximity)
By comparing a patent with its cited or citing prior art, these measures provide insight into the degree to which an invention draws on distant information, or alternately goes on to impact similar or dissimilar inventions.
Knowledge proximity measures the similarity between the focal patent and its cited backward references. To do so, we calculate the similarities between a patent and its cited prior art, and take the minimum of these similarities as the knowledge proximity score. This provides insight into the degree to which the invention integrates any one piece of particularly distant knowledge. A low knowledge proximity score indicates that the invention in question cited prior art from a very dissimilar field.
Impact proximity is calculated in a similar manner, but instead measures the similarity between the focal patent and its citing forward references. This provides an impact measure that accounts for the degree to which an invention goes on to influence technical areas that are similar or dissimilar to its own.
For some of our measures, we'll want to know both a patent's grant year and the years of other related patents. The function below determines the grant year of any patent. Meanwhile, the yearly_maxes dictionary stores the highest patent number granted in each year of the dataset.
```
def patent_year(patent):
'''takes a patent number and returns an integer of the year it was granted'''
date = cur.execute('''SELECT date FROM patent WHERE id = ?''',[patent]).fetchone()
year = int(date[0].split('-')[0])
return year
def find_yearly_maxes():
'''returns a dictionary keyed by year, with values for the highest patent number
granted in that year'''
yearly_maxes = {}
years = range(1976,2020)
for year in years:
patents = cur.execute('''SELECT id FROM patent
WHERE strftime('%Y', date) = ?''', [str(year)]).fetchall()
patents = [p[0] for p in patents]
patents = [int(p) for p in patents if p.isdigit()]
yearly_maxes[year] = max(patents)
return yearly_maxes
yearly_maxes = find_yearly_maxes()
def prior_art_proximity(patent):
'''takes a patent number, identifies similarity scores for backwards citations and returns
the min similarity score - a demonstration of the degree to which the invention draws on distant knowledge'''
sims = cur.execute('''SELECT similarity FROM cite_similarity WHERE patent_id = ?''',[patent]).fetchall()
sims = [s[0] for s in sims]
if len(sims) == 0:
return None
return min(sims)
def impact_proximity(patent):
'''takes a patent number, identifies similarity scores for forward citations and returns
the min similarity score - a demonstration of the degree to which the invention has influenced distant areas'''
year = patent_year(patent)
if year + 10 not in yearly_maxes: #10-year forward window isn't available for recent patents
return None
max_patent = yearly_maxes[year + 10] #the maximum patent number for forward metric comparisons
sims = []
cites = cur.execute('''SELECT patent_id, similarity FROM cite_similarity WHERE citation_id = ?''',[patent]).fetchall()
for cite in cites:
try:
patent = int(cite[0])
except:
continue #skip design, plant and other non numeric patents
if patent > max_patent: #skip patents granted more than 10-years after focal patent
continue
sims.append(cite[1])
if len(sims) == 0:
return None
return min(sims)
```
We'll want to plot our data by year, which the below function will allow us to do.
```
def plot_yearly_means(data, label):
'''takes dictionary with year keys and mean values and plots change over time'''
xs = sorted(data.keys())
ys = [data[x] for x in xs]
plt.plot(xs,ys)
plt.legend([label])
plt.tight_layout()
plt.savefig(label.replace(' ','')+'.png', dpi=300)
plt.show()
```
To use the above proximity code and assess potential changes over time, we can use a random sample of patents. The function below will randomly sample _n_ patents per year and return those patents as lists in a dictionary keyed by year. To address the truncation in citation data availability, we create two different samples: one to demonstrate the backward-oriented measures and one to demonstrate the forward-oriented measures.
```
def random_yearly_sample(n, years):
'''takes a vector of years and returns a dict of patents with n randomly sampled per year where year is the key'''
sample = {}
for year in years:
patents = cur.execute('''SELECT id FROM patent WHERE strftime('%Y', date) = ?
ORDER BY RANDOM() LIMIT ?''',[str(year), n]).fetchall()
patents = [p[0] for p in patents]
sample[year]=patents
return sample
backward_sample = random_yearly_sample(10000,range(1986,2020)) #sample for backward citation metrics
forward_sample = random_yearly_sample(10000,range(1976,2010)) #sample for forward citation metrics
```
### Prior Art Proximity
With the sample in hand, we can calculate the average prior art or impact proximity by year to determine whether there have been changes over time. Note that depending on the size of the sample, this might take some time, as it may require many database calls. The cell below computes the knowledge proximity scores for the backward-oriented random sample created above.
```
data = {}
for year in backward_sample:
kp = [prior_art_proximity(i) for i in backward_sample[year]]
kp = [k for k in kp if k is not None]
data[year] = np.mean(kp)
plot_yearly_means(data, 'Prior Art Proximity')
```
### Impact proximity
Now let's do the same but calculate the forward-oriented impact proximity.
```
data = {}
for year in forward_sample:
kp = [impact_proximity(i) for i in forward_sample[year]]
kp = [k for k in kp if k is not None]
data[year] = np.mean(kp)
plot_yearly_means(data, 'Impact Proximity')
```
### Co-citing and co-cited similarities
Having seen the changes in knowledge and impact proximity over time, let us now look at whether knowledge homogeneity or impact homogeneity have changed over time. To do so, we will again use our random yearly samples of patents. This time, however, because knowledge homogeneity and impact homogeneity require comparing co-cited or co-citing prior art, we calculate the pairwise similarities between all of the cited or citing prior art for the focal patent. The functions below perform these calculations and return the minimum similarity among all of the patents cited by the focal patent (knowledge homogeneity) or all of the patents that cite the focal patent (impact homogeneity).
```
def impact_homogeneity(patent, metric = 'min'):
'''takes patent number and returns the minimum similarity
between co-citing prior art (similar to generality)
currently implemented to only work for patents we have pre-modeled vectors for
By default returns the minimum similarity between citing patents;
passing metric = mean or median will return those instead '''
year = patent_year(patent)
if year + 10 not in yearly_maxes: #10-year forward window isn't available for recent patents
return None
max_patent = yearly_maxes[year + 10] #the maximum patent number for forward metric comparisons
sims = []
cites = cur.execute('''SELECT patent_id FROM uspatentcitation WHERE citation_id = ?''',[patent]).fetchall()
if len(cites) < 2: #undefined if fewer than 2 forward cites
return None
cites = [c[0] for c in cites if c[0].isdigit()] #slice patent numbers out of returned tuples
cites = [c for c in cites if int(c) < max_patent]
for p1, p2 in itertools.combinations(cites, 2):
try: #not all patents will have vectors, so use this try loop here
sim = patent_pair_sim(p1, p2)
sims.append(sim)
except:
continue
sims = [s for s in sims if s is not None]
if len(sims) < 1:
return None
if metric == 'min':
return min(sims)
if metric == 'mean':
return np.mean(sims)
if metric == 'median':
return np.median(sims)
def prior_art_homogeneity(patent, metric = 'min'):
'''takes patent number and returns the minimum similarity
between co-cited prior art (similar to originality)
By default returns the minimum similarity between cited patents;
passing metric = mean or median will return those instead '''
sims = []
cites = cur.execute('''SELECT citation_id FROM cite_similarity WHERE patent_id = ?''',[patent]).fetchall()
if len(cites) < 2:
return None
cites = [c[0] for c in cites]
for p1, p2 in itertools.combinations(cites, 2):
sim = patent_pair_sim(p1, p2)
sims.append(sim)
sims = [s for s in sims if s is not None]
if len(sims) < 1:
return None
if metric == 'min':
return min(sims)
if metric == 'mean':
return np.mean(sims)
if metric == 'median':
return np.median(sims)
```
### Prior Art Homogeneity
Now let's apply the homogeneity analyses on our backward sample for the knowledge homogeneity score:
```
data = {}
for year in backward_sample:
kp = [prior_art_homogeneity(patent) for patent in backward_sample[year]]
kp = [k for k in kp if k is not None]
data[year] = np.mean(kp)
plot_yearly_means(data, 'Prior Art Homogeneity')
```
### Impact Homogeneity
And on forward samples for the impact homogeneity score:
```
data = {}
for year in forward_sample:
kp = [impact_homogeneity(patent) for patent in forward_sample[year]]
kp = [k for k in kp if k is not None]
data[year] = np.mean(kp)
plot_yearly_means(data, 'Impact Homogeneity')
```
### Changes in technology space
The above shows that both backward/forward citation similarity and co-cited/co-citing citation similarity have decreased over time. Part of this is likely due to the increasing 'size' of the technological space: as more new inventions are produced, the possible distances between them increase. We can estimate the magnitude of this by randomly sampling patents granted within a given year and plotting their average similarity. If desired, the raw similarity measures above can be adjusted to show their divergence from the similarities we would expect at random.
```
def patents_by_year(year):
'''returns a list of utility patent numbers granted in the year passed
as argument'''
patents = cur.execute('''SELECT id FROM patent
WHERE strftime('%Y', date) = ?''', [str(year)]).fetchall()
patents = [p[0] for p in patents]
patents = [int(p) for p in patents if p.isdigit()]
return patents
data = {}
years = range(1976,2019)
for year in years:
patents = patents_by_year(year)
sims = get_random_pairwise_sims(patents, 10000)
data[year] = np.mean(sims)
plot_yearly_means(data, 'Technological Space Change')
```
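A minimal sketch of the adjustment described above: subtract the yearly random-pair baseline from a raw yearly similarity series, leaving only the portion of similarity beyond what random pairing would predict. The dictionaries below are toy values standing in for the `data` dictionaries computed in the cells above:

```python
# Toy yearly means; in practice these come from the proximity and
# random-baseline calculations in the cells above
raw_citation_sim = {2000: 0.26, 2001: 0.25, 2002: 0.24}
random_baseline = {2000: 0.12, 2001: 0.12, 2002: 0.11}

# Divergence from the random expectation for each year
adjusted = {year: raw_citation_sim[year] - random_baseline[year]
            for year in raw_citation_sim}
print(adjusted)
```

The resulting dictionary can be passed to `plot_yearly_means` just like the raw series.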
### Similarity by citation type
The above four patent-level citation measures provide insight into how inventions are related to the prior art that they cite, and to the patents that go on to cite them. However, one might also be interested in citations as traces of the patent application and examination process. Research has suggested that the citations added by patent examiners are qualitatively different from those added by the patent applicants themselves. We can use the patent similarity data to get a sense of the degree to which this is reflected in the semantic similarity of the cited prior art.
The function below returns a vector of similarity scores for a random sample of citations. It takes as an argument either 'cited by examiner' or 'cited by applicant'.
```
def get_sims_by_cite_type(n, cite_type):
'''takes a citation type (cited by applicant, cited by examiner, or cited by other)
and returns n random similarity scores between the cited and citing patent'''
cites = cur.execute('''SELECT patent_id, citation_id FROM uspatentcitation
WHERE category = ? ORDER BY RANDOM() LIMIT ?''', [cite_type, n]).fetchall()
sims = []
for cite in cites:
try:
sims.append(patent_pair_sim(cite[0], cite[1]))
except:
pass #skip combos not in pre-calculated model
return sims
examiner_sims = get_sims_by_cite_type(50000, 'cited by examiner')
applicant_sims = get_sims_by_cite_type(50000, 'cited by applicant')
examiner_sims = [s for s in examiner_sims if s is not None]
applicant_sims = [s for s in applicant_sims if s is not None]
fig, ax = plt.subplots()
sns.kdeplot(examiner_sims, shade=True, ax=ax, label='Examiner')
sns.kdeplot(applicant_sims, shade=True, ax = ax, label = 'Applicant', linestyle = '--')
plt.savefig('examiner_applicant_sims'+'.png', dpi=300)
t = stats.ttest_ind(examiner_sims, applicant_sims)
print(t)
```
## Nearest Neighbors
The patent similarity dataset also includes data on each patent’s 100 nearest neighbors. These are the 100 patents from the dataset that are closest to the focal patent, along with their accompanying similarity scores. These data can be used for a wide variety of analyses, including those that provide perspective on how crowded an invention’s “neighborhood” is.
As an example, consider the neighborhoods of litigated and non-litigated patents. To examine whether they differ, we begin with the litigated patent data and identify the similarity between each litigated patent and its nearest neighbor. We then compare these scores with the similarity between non-litigated patents and their nearest neighbors. Having a very similar nearest neighbor suggests that the patent in question is in a ‘crowded’ intellectual property space, with perhaps many other competing, blocking, or related patents, whereas having only more distant neighbors suggests an invention is relatively unique. Comparing the distributions of nearest-neighbor similarities for litigated and non-litigated patents shows that, on average, litigated patents tend to have much more similar nearest neighbors than their non-litigated counterparts, and a wider distribution of these scores.
```
def make_litigated_patent_set(path):
'''uses data file from the Schwartz et al. litigated patent dataset, returns a set of
patent numbers involved in infringement litigation'''
infile = open(path ,encoding = 'utf-8')
reader = csv.DictReader(infile)
infringement_litigated_patents = set()
count = 0
for row in reader:
patent = row['patent']
doc_type = row['patent_doc_type']
case_types = [row['case_type_1'], row['case_type_2'],row['case_type_3']]
if '1' in case_types and doc_type == 'Patent':
count += 1
infringement_litigated_patents.add(patent)
return infringement_litigated_patents
def get_nearest_neighbor_sim(patent):
'''takes a patent number, returns the similarity score for its nearest neighbor
'''
sims = cur.execute('''SELECT top_100 FROM most_similar
WHERE patent_id = ?''',[patent]).fetchone()
if sims is None:
return None
sims = json.loads(sims[0])
sims = [s[1] for s in sims]
return max(sims)
path_to_litigated_dataset = '' #add the path to this dataset file here
litigated_patents = make_litigated_patent_set(path_to_litigated_dataset)
litigated_sims = [get_nearest_neighbor_sim(p) for p in litigated_patents]
litigated_sims = [s for s in litigated_sims if s is not None]
all_patents = get_all_patents()
random_sims = []
while len(random_sims) < len(litigated_sims):
patent = random.choice(all_patents)
sim = get_nearest_neighbor_sim(patent)
if sim is not None:
random_sims.append(sim)
fig, ax = plt.subplots()
sns.kdeplot(litigated_sims, shade = 1, color = 'red', label = 'litigated', linestyle='--')
sns.kdeplot(random_sims, shade = 1, color='blue', label = 'non-litigated')
plt.savefig('litigated_vs_non_litigated.png', dpi=300)
```
# Inventor-Level Metrics
Patent similarity data can also be used to help understand the career of a given inventor. By locating each of an inventor's inventions within semantic space, one can produce a network of their inventions, measure their average, minimum, and maximum similarity scores, identify clusters, or find their mean invention.
The below code demonstrates how to identify and visualize the invention networks for four well known tech company CEOs.
```
def make_inventor_net(inventor, save_path = False):
'''takes inventor ID and returns networkx Graph object containing
nodes representing each of his/her inventions with links between them
weighted by their doc2vec similarity
if save_path is defined will save a graphml file at the designated path
'''
inventions = cur.execute('''SELECT patent_id FROM patent_inventor
WHERE inventor_id = ?''',[inventor]).fetchall()
g = nx.Graph()
if len(inventions) < 2:
return None
inventions = [i[0] for i in inventions if i[0].isdigit()]
for p1, p2 in itertools.combinations(inventions, 2):
sim = patent_pair_sim(p1, p2)
if sim is None:
continue
g.add_edge(p1, p2, weight = sim)
if save_path != False:
nx.write_graphml(g, save_path)
return g
def make_mst(g):
'''takes a graph object and returns a spanning tree that maximizes the sum
of edge weights (a maximum spanning tree), computed by running the standard
MST on 1 - similarity, since the default MST treats weight as distance rather than sim'''
ng = nx.Graph()
for edge in g.edges(data=True):
ng.add_edge(edge[0], edge[1], weight = 1 - edge[2]['weight'])
ng = nx.minimum_spanning_tree(ng)
return ng
def net_stats(g):
'''takes a nx Graph object and returns least similar score (i.e. the similarity
between the most dissimilar inventions) and average pairwise similarity'''
ew = [e[2]['weight'] for e in g.edges(data=True)]
return round(min(ew),3), round(np.mean(ew), 3)
def draw_inventor_net(g, firstname, lastname):
d = dict(g.degree(weight='weight'))
size = [v * 5 for v in d.values()] #rescale weights for visibility
least_sim, mean_sim = net_stats(g)
g = make_mst(g)
pos = nx.spring_layout(g, iterations = 100)
fig, ax = plt.subplots()
nx.draw_networkx_nodes(g, pos, node_size = size,
node_color = 'darkslategrey')
nx.draw_networkx_edges(g, pos)
plt.xticks([])
plt.yticks([])
textstr = '\n'.join((
r"$\bf{"+firstname+"}$"+" "+r"$\bf{"+lastname+"}$",
'Minimum sim=%s' % (least_sim,),
'Mean sim=%s' % (mean_sim,)))
plt.title(textstr)
plt.tight_layout()
plt.savefig(firstname+lastname, dpi=300)
plt.show()
```
The first step is to find the inventor IDs of interest. We can do this by looking through the 'inventor' table of the patent_db. Below are the inventor IDs for four well known tech CEOs. We can use these to plot each of their invention networks.
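To look up IDs for other inventors, you can query the 'inventor' table by name. The sketch below runs against a one-row in-memory stand-in table; the `name_first`/`name_last` columns follow the PatentsView schema, but verify them against your own database:

```python
import sqlite3

# One-row stand-in for the real inventor table; the row's values are illustrative
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE inventor (id TEXT, name_first TEXT, name_last TEXT)')
conn.execute("INSERT INTO inventor VALUES ('5715399-1', 'Jeffrey', 'Bezos')")

# Inventor IDs take the form '<patent number>-<sequence>'
rows = conn.execute('''SELECT id, name_first, name_last FROM inventor
                       WHERE name_last = ?''', ['Bezos']).fetchall()
print(rows)  # → [('5715399-1', 'Jeffrey', 'Bezos')]
```

Against the real table a common surname will return many rows, so you may want to filter on `name_first` as well and cross-check a known patent before settling on an ID.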
```
jb_id = '5715399-1'
sj_id = 'D268584-1'
mz_id = '7669123-1'
bg_id = '5552982-2'
jb = make_inventor_net(jb_id)
draw_inventor_net(jb, 'Jeff', 'Bezos')
sj = make_inventor_net(sj_id)
draw_inventor_net(sj, 'Steve', 'Jobs')
bg = make_inventor_net(bg_id)
draw_inventor_net(bg, 'Bill', 'Gates')
mz = make_inventor_net(mz_id)
draw_inventor_net(mz, 'Mark', 'Zuckerberg')
```
These visualized networks show the minimum spanning tree of each inventor's patent similarity network, and some basic statistics. Each of these provides insight into the degree to which an inventor has worked within a single technological domain, or has alternately created a wide variety of dissimilar inventions.
### Inter-inventor similarity
Just as we can visualize a given inventor's invention similarity network, we can also compare inventors to one another by identifying their 'mean' invention (i.e. the mean vector of all their invention vectors) and subsequently calculating the similarity between those means.
```
def find_inventor_mean(inventor):
'''takes inventor ID, finds their patent vectors and returns mean vector'''
inventions = cur.execute('''SELECT patent_inventor.patent_id,
doc2vec.vector FROM patent_inventor
JOIN doc2vec
ON patent_inventor.patent_id = doc2vec.patent_id
WHERE inventor_id = ?''',[inventor]).fetchall()
inventions = [i[1][1:-1] for i in inventions if i is not None] #strip the JSON list brackets
inventions = [i.split(',') for i in inventions]
for idx in range(len(inventions)):
inventions[idx] = [float(x) for x in inventions[idx]]
if len(inventions) < 1:
return None
return np.mean(inventions, axis = 0)
def make_mean_sim_net(means):
'''takes a list of tuples (node_id, vector) and constructs a network of nodes
with edges weighted by the similarity between their vectors'''
g = nx.Graph()
for i1, i2 in itertools.combinations(means, 2):
inv1 = i1[0]
v1 = i1[1]
inv2 = i2[0]
v2 = i2[1]
sim = float(cosine_similarity(v1.reshape(1,-1), v2.reshape(1,-1))[0])
g.add_edge(inv1, inv2, weight = sim)
return g
def plot_inventor_sim_net(g, filename):
'''takes network of inventors with edges between them weighted by similarity of their mean invention vectors
plots network'''
pos = nx.spring_layout(g, iterations = 100)
nx.draw(g,pos, with_labels = True, node_size = 2000)
labels = nx.get_edge_attributes(g,'weight')
nx.draw_networkx_edge_labels(g,pos,edge_labels=labels)
plt.savefig(filename, dpi=300)
plt.show()
sj = ('Jobs', find_inventor_mean(sj_id))
bg = ('Gates', find_inventor_mean(bg_id))
jb = ('Bezos', find_inventor_mean(jb_id))
mz = ('Zuckerberg', find_inventor_mean(mz_id))
mean_vectors = [sj, bg, jb, mz]
inter_inv_net = make_mean_sim_net(mean_vectors)
plot_inventor_sim_net(inter_inv_net, 'inventor_net.png')
```
# Team-level metrics
In addition to providing insight into individual patents or inventors, similarity data can be useful at the team level to characterize different types of collaborative teams. Some teams are comprised largely of members from the same or similar disciplines, while others feature more diverse expertise in their makeup.
To calculate team-level metrics it is often useful to first typify each individual member's expertise by locating their average semantic location (i.e. the average vector of all of their invention vectors). These mid-points can then be used to typify teams—those with large degrees of similarity between their average vectors are made up of members with similar inventing backgrounds, whereas those with little similarity between them have more knowledge-diverse membership.
In the sample code below, we compare the knowledge diversity of two teams, both inventors on Nest thermostat related patents assigned to Google. The first patent (8,757,507) relates to an easy-to-install thermostat, while the second (9,256,230) relates to scheduling a network-connected thermostat. As we can see from the histogram generated below, the team on the first patent has more concentrated expertise (i.e. generally high similarity scores) whereas the second features more knowledge diversity.
```
def get_inventors(patent):
    '''takes patent_id returns inventor_ids for listed inventors'''
    inventors = cur.execute('''SELECT inventor_id FROM patent_inventor
            WHERE patent_id = ?''',[patent]).fetchall()
    inventors = [i[0] for i in inventors]
    return inventors

def make_team_network(inventors, save_path = False):
    '''takes a list of inventor IDs, finds mean semantic location for each,
    measures distance between each of their means and returns a network
    object w/ inventor nodes and weighted edges between them representing
    the similarity of their average inventions'''
    averages = [(i, find_inventor_mean(i)) for i in inventors]
    g = nx.Graph()
    for i1, i2 in itertools.combinations(averages, 2):
        inv1, v1 = i1
        inv2, v2 = i2
        if v1 is None or v2 is None:
            continue
        sim = float(cosine_similarity(v1.reshape(1,-1), v2.reshape(1,-1))[0])
        g.add_edge(inv1, inv2, weight = sim)
    if save_path != False:
        nx.write_graphml(g, save_path)
    return g

def plot_degree_dists(g1, label1, g2, label2):
    '''takes two network objects (g1 and g2) and accompanying labels
    and plots a kde of each network's edge-weight distribution'''
    ew1 = [e[2]['weight'] for e in g1.edges(data=True)]
    ew2 = [e[2]['weight'] for e in g2.edges(data=True)]
    print(label1 + ' average sim: ' + str(np.mean(ew1)))
    print(label2 + ' average sim: ' + str(np.mean(ew2)))
    fig, ax = plt.subplots()
    sns.kdeplot(ew1, shade = True, ax = ax, label = label1)
    sns.kdeplot(ew2, shade = True, ax = ax, label = label2, linestyle = '--')
    plt.tight_layout()
    plt.savefig(label1.replace(',','') + '.png', dpi = 300)

team_net_1 = make_team_network(get_inventors('8757507'))
team_net_2 = make_team_network(get_inventors('9256230'))
plot_degree_dists(team_net_1, '8,757,507', team_net_2, '9,256,230')
```
# Location and firm-level metrics
Because it interfaces easily with other patent data, the patent similarity dataset can also be used to assess innovation at the firm or location level. The code below does a simple comparison of the similarity between inventions made by inventors in California, compared with those located in Louisiana. We see that although the distributions are almost identical, inventions originating in Louisiana are somewhat more likely to be similar to one another than those from California. Similar analyses can be performed to compare firms, or with slight modifications to track changes over time at the firm or location level.
```
def calc_pairwise_state_sims(state, n):
    '''takes a state abbreviation and returns a list of n random pairwise
    similarities between patents granted to inventors associated
    with that state in the db'''
    patents = cur.execute('''SELECT patent_id FROM patent_inventor WHERE patent_inventor.inventor_id in (
        SELECT inventor_id FROM location_inventor WHERE location_inventor.location_id in
        (SELECT id FROM location WHERE state = ?)) ORDER BY RANDOM() LIMIT ?''',[state, n]).fetchall()
    patents = [p[0] for p in patents]
    sims = []
    while len(sims) < n:
        p1, p2 = random.sample(patents, 2)
        sim = patent_pair_sim(p1, p2)
        if sim is not None:
            sims.append(sim)
    return sims

CA_sims = calc_pairwise_state_sims('CA', 10000)
LA_sims = calc_pairwise_state_sims('LA', 10000)

fig, ax = plt.subplots()
sns.kdeplot(CA_sims, shade=True, ax=ax, label='CA mean = %s' % round(np.mean(CA_sims), 4), linestyle = '--')
sns.kdeplot(LA_sims, shade=True, ax=ax, label='LA mean = %s' % round(np.mean(LA_sims), 4))

t = stats.ttest_ind(LA_sims, CA_sims)
print(t)

fig.savefig('CA_vs_LA_sim.png', bbox_inches='tight', dpi=300)
conn.close()
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Train and explain models remotely via Azure Machine Learning Compute and deploy model and scoring explainer
_**This notebook illustrates how to use the Azure Machine Learning Interpretability SDK to train and explain a classification model remotely on an Azure Machine Learning Compute Target (AMLCompute), and use Azure Container Instances (ACI) for deploying your model and its corresponding scoring explainer as a web service.**_
Problem: IBM employee attrition classification with scikit-learn (train a model and run an explainer remotely via AMLCompute, and deploy model and its corresponding explainer.)
---
## Table of Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Run model explainer locally at training time](#Explain)
1. Apply feature transformations
1. Train a binary classification model
1. Explain the model on raw features
1. Generate global explanations
1. Generate local explanations
1. [Visualize results](#Visualize)
1. [Deploy model and scoring explainer](#Deploy)
1. [Next steps](#Next)
## Introduction
This notebook showcases how to train and explain a classification model remotely via Azure Machine Learning Compute (AMLCompute), download the calculated explanations locally for visualization and inspection, and deploy the final model and its corresponding explainer to Azure Container Instances (ACI).
It demonstrates the API calls you need to make to submit a run for training and explaining a model on AMLCompute, download the computed explanations locally, visualize the global and local explanations via a dashboard that provides an interactive way of discovering patterns in model predictions and downloaded explanations, and use Azure Machine Learning MLOps capabilities to deploy your model and its corresponding explainer.
We will showcase one of the tabular data explainers: TabularExplainer (SHAP) and follow these steps:
1. Develop a machine learning script in Python which involves the training script and the explanation script.
2. Create and configure a compute target.
3. Submit the scripts to the configured compute target to run in that environment. During training, the scripts can read from or write to datastore. And the records of execution (e.g., model, metrics, prediction explanations) are saved as runs in the workspace and grouped under experiments.
4. Query the experiment for logged metrics and explanations from the current and past runs. Use the interpretability toolkit’s visualization dashboard to visualize predictions and their explanation. If the metrics and explanations don't indicate a desired outcome, loop back to step 1 and iterate on your scripts.
5. After a satisfactory run is found, create a scoring explainer and register the persisted model and its corresponding explainer in the model registry.
6. Develop a scoring script.
7. Create an image and register it in the image registry.
8. Deploy the image as a web service in Azure.
|  |
|:--:|
## Setup
Make sure you go through the [configuration notebook](../../../../configuration.ipynb) first if you haven't.
```
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
```
## Initialize a Workspace
Initialize a workspace object from persisted configuration
```
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
```
## Explain
Create An Experiment: **Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments.
```
from azureml.core import Experiment
experiment_name = 'explainer-remote-run-on-amlcompute'
experiment = Experiment(workspace=ws, name=experiment_name)
```
## Introduction to AmlCompute
Azure Machine Learning Compute is managed compute infrastructure that allows the user to easily create single to multi-node compute of the appropriate VM Family. It is created **within your workspace region** and is a resource that can be used by other users in your workspace. It autoscales by default to the max_nodes, when a job is submitted, and executes in a containerized environment packaging the dependencies as specified by the user.
Since it is managed compute, job scheduling and cluster management are handled internally by Azure Machine Learning service.
For more information on Azure Machine Learning Compute, please read [this article](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute)
If you are an existing BatchAI customer who is migrating to Azure Machine Learning, please read [this article](https://aka.ms/batchai-retirement)
**Note**: As with other Azure services, there are limits on certain resources (for eg. AmlCompute quota) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
The training script `train_explain.py` is already created for you. Let's have a look.
## Submit an AmlCompute run
First let's check which VM families are available in your region. Azure is a regional service and some specialized SKUs (especially GPUs) are only available in certain regions. Since AmlCompute is created in the region of your workspace, we will use the supported_vmsizes() function to see if the VM family we want to use ('STANDARD_D2_V2') is supported.
You can also pass a different region to check availability and then re-create your workspace in that region through the [configuration notebook](../../../configuration.ipynb)
```
from azureml.core.compute import ComputeTarget, AmlCompute
AmlCompute.supported_vmsizes(workspace=ws)
# AmlCompute.supported_vmsizes(workspace=ws, location='southcentralus')
```
### Create project directory
Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script and any additional files your training script depends on.
```
import os
import shutil
project_folder = './explainer-remote-run-on-amlcompute'
os.makedirs(project_folder, exist_ok=True)
shutil.copy('train_explain.py', project_folder)
```
### Provision a compute target
> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.
You can provision an AmlCompute resource by simply defining two parameters thanks to smart defaults. By default it autoscales from 0 nodes and provisions dedicated VMs to run your job in a container. This is useful when you want to continuously re-use the same target, debug it between jobs, or simply share the resource with other users of your workspace.
* `vm_size`: VM family of the nodes provisioned by AmlCompute. Simply choose from the supported_vmsizes() above
* `max_nodes`: Maximum nodes to autoscale to while running a job on AmlCompute
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster"
# Verify that cluster does not exist already
try:
    cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)
    print('Found existing cluster, use it.')
except ComputeTargetException:
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
                                                           max_nodes=4)
    cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)

cpu_cluster.wait_for_completion(show_output=True)
```
### Configure & Run
```
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.runconfig import DEFAULT_CPU_IMAGE
# Create a new runconfig object
run_config = RunConfiguration()
# Set compute target to AmlCompute target created in previous step
run_config.target = cpu_cluster.name
# Set Docker base image to the default CPU-based image
run_config.environment.docker.base_image = DEFAULT_CPU_IMAGE
# Use conda_dependencies.yml to create a conda environment in the Docker image for execution
run_config.environment.python.user_managed_dependencies = False
azureml_pip_packages = [
'azureml-defaults', 'azureml-telemetry', 'azureml-interpret'
]
# Note: this is to pin the scikit-learn version to be same as notebook.
# In production scenario user would choose their dependencies
import pkg_resources
available_packages = pkg_resources.working_set
sklearn_ver = None
pandas_ver = None
for dist in available_packages:
    if dist.key == 'scikit-learn':
        sklearn_ver = dist.version
    elif dist.key == 'pandas':
        pandas_ver = dist.version
sklearn_dep = 'scikit-learn'
pandas_dep = 'pandas'
if sklearn_ver:
    sklearn_dep = 'scikit-learn=={}'.format(sklearn_ver)
if pandas_ver:
    pandas_dep = 'pandas=={}'.format(pandas_ver)
# Specify CondaDependencies obj
# The CondaDependencies specifies the conda and pip packages that are installed in the environment
# the submitted job is run in. Note the remote environment(s) needs to be similar to the local
# environment, otherwise if a model is trained or deployed in a different environment this can
# cause errors. Please take extra care when specifying your dependencies in a production environment.
azureml_pip_packages.extend(['pyyaml', sklearn_dep, pandas_dep])
run_config.environment.python.conda_dependencies = CondaDependencies.create(pip_packages=azureml_pip_packages)
# Now submit a run on AmlCompute
from azureml.core.script_run_config import ScriptRunConfig
script_run_config = ScriptRunConfig(source_directory=project_folder,
script='train_explain.py',
run_config=run_config)
run = experiment.submit(script_run_config)
# Show run details
run
```
Note: if you need to cancel a run, you can follow [these instructions](https://aka.ms/aml-docs-cancel-run).
```
%%time
# Shows output of the run on stdout.
run.wait_for_completion(show_output=True)
# delete() is used to deprovision and delete the AmlCompute target. Useful if you want to re-use the compute name
# 'cpu-cluster' in this case but use a different VM family for instance.
# cpu_cluster.delete()
```
## Download Model Explanation, Model, and Data
```
# Retrieve model for visualization and deployment
from azureml.core.model import Model
import joblib
original_model = Model(ws, 'amlcompute_deploy_model')
model_path = original_model.download(exist_ok=True)
original_svm_model = joblib.load(model_path)
# Retrieve global explanation for visualization
from azureml.interpret import ExplanationClient
# get model explanation data
client = ExplanationClient.from_run(run)
global_explanation = client.download_model_explanation()
# Retrieve x_test for visualization
import joblib
x_test_path = './x_test.pkl'
run.download_file('x_test_ibm.pkl', output_file_path=x_test_path)
x_test = joblib.load(x_test_path)
```
## Visualize
Visualize the explanations
```
from interpret_community.widget import ExplanationDashboard
ExplanationDashboard(global_explanation, original_svm_model, datasetX=x_test)
```
## Deploy
Deploy Model and ScoringExplainer
```
from azureml.core.conda_dependencies import CondaDependencies
# WARNING: to install this, g++ needs to be available on the Docker image and is not by default (look at the next cell)
azureml_pip_packages = [
'azureml-defaults', 'azureml-core', 'azureml-telemetry',
'azureml-interpret'
]
# Note: this is to pin the scikit-learn and pandas versions to be same as notebook.
# In production scenario user would choose their dependencies
import pkg_resources
available_packages = pkg_resources.working_set
sklearn_ver = None
pandas_ver = None
for dist in available_packages:
    if dist.key == 'scikit-learn':
        sklearn_ver = dist.version
    elif dist.key == 'pandas':
        pandas_ver = dist.version
sklearn_dep = 'scikit-learn'
pandas_dep = 'pandas'
if sklearn_ver:
    sklearn_dep = 'scikit-learn=={}'.format(sklearn_ver)
if pandas_ver:
    pandas_dep = 'pandas=={}'.format(pandas_ver)
# Specify CondaDependencies obj
# The CondaDependencies specifies the conda and pip packages that are installed in the environment
# the submitted job is run in. Note the remote environment(s) needs to be similar to the local
# environment, otherwise if a model is trained or deployed in a different environment this can
# cause errors. Please take extra care when specifying your dependencies in a production environment.
azureml_pip_packages.extend(['pyyaml', sklearn_dep, pandas_dep])
myenv = CondaDependencies.create(pip_packages=azureml_pip_packages)
with open("myenv.yml","w") as f:
    f.write(myenv.serialize_to_string())
with open("myenv.yml","r") as f:
    print(f.read())
# Retrieve scoring explainer for deployment
scoring_explainer_model = Model(ws, 'IBM_attrition_explainer')
from azureml.core.webservice import Webservice
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.model import Model
from azureml.core.environment import Environment
from azureml.exceptions import WebserviceException
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={"data": "IBM_Attrition",
"method" : "local_explanation"},
description='Get local explanations for IBM Employee Attrition data')
myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
inference_config = InferenceConfig(entry_script="score_remote_explain.py", environment=myenv)
# Use configs and models generated above
service = Model.deploy(ws, 'model-scoring-service', [scoring_explainer_model, original_model], inference_config, aciconfig)
try:
    service.wait_for_deployment(show_output=True)
except WebserviceException as e:
    print(e.message)
    print(service.get_logs())
    raise
import requests
# Create data to test service with
examples = x_test[:4]
input_data = examples.to_json()
headers = {'Content-Type':'application/json'}
# Send request to service
print("POST to url", service.scoring_uri)
resp = requests.post(service.scoring_uri, input_data, headers=headers)
# Can convert back to Python objects from JSON string if desired
print("prediction:", resp.text)
service.delete()
```
## Next
Learn about other use cases of the explain package on a:
1. [Training time: regression problem](https://github.com/interpretml/interpret-community/blob/master/notebooks/explain-regression-local.ipynb)
1. [Training time: binary classification problem](https://github.com/interpretml/interpret-community/blob/master/notebooks/explain-binary-classification-local.ipynb)
1. [Training time: multiclass classification problem](https://github.com/interpretml/interpret-community/blob/master/notebooks/explain-multiclass-classification-local.ipynb)
1. Explain models with engineered features:
1. [Simple feature transformations](https://github.com/interpretml/interpret-community/blob/master/notebooks/simple-feature-transformations-explain-local.ipynb)
1. [Advanced feature transformations](https://github.com/interpretml/interpret-community/blob/master/notebooks/advanced-feature-transformations-explain-local.ipynb)
1. [Save model explanations via Azure Machine Learning Run History](../run-history/save-retrieve-explanations-run-history.ipynb)
1. [Run explainers remotely on Azure Machine Learning Compute (AMLCompute)](../remote-explanation/explain-model-on-amlcompute.ipynb)
1. [Inferencing time: deploy a locally-trained model and explainer](./train-explain-model-locally-and-deploy.ipynb)
1. [Inferencing time: deploy a locally-trained keras model and explainer](./train-explain-model-keras-locally-and-deploy.ipynb)
# Reconstructing MNIST images using Autoencoder
Now that we have understood how autoencoders reconstruct the inputs, in this section we will learn how autoencoders reconstruct the images of handwritten digits using the MNIST dataset.
In this chapter, we use the Keras API from TensorFlow to build the models, so that we become familiar with using high-level APIs.
## Import Libraries
First, let us import the necessary libraries:
```
import warnings
warnings.filterwarnings('ignore')
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
#plotting
import matplotlib.pyplot as plt
%matplotlib inline
#dataset
from tensorflow.keras.datasets import mnist
import numpy as np
```
## Prepare the Dataset
Let us load the MNIST dataset. Since an autoencoder reconstructs its input, we don't need the labels. So, we just load x_train for training and x_test for testing:
```
(x_train, _), (x_test, _) = mnist.load_data()
```
Normalize the data by dividing by the max pixel value, which is 255:
```
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
```
Shape of our dataset:
```
print(x_train.shape, x_test.shape)
```
Reshape the images into a 2D array of shape (number of samples, 784) by flattening each 28 x 28 image:
```
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
```
Now, the shape of data would become:
```
print(x_train.shape, x_test.shape)
```
# Define the Encoder
Now, we define the encoder which takes the images as an input and returns the encodings.
Define the size of the encodings:
```
encoding_dim = 32
```
Define the placeholders for the input:
```
input_image = Input(shape=(784,))
```
Define the encoder which takes the input_image and returns the encodings:
```
encoder = Dense(encoding_dim, activation='relu')(input_image)
```
# Define the Decoder
Let us define the decoder which takes the encoded values from the encoder and returns the reconstructed image:
```
decoder = Dense(784, activation='sigmoid')(encoder)
```
# Build the model
Now that we have defined the encoder and decoder, we build the model, which takes images as input and returns the decoder's output, i.e. the reconstructed image:
```
model = Model(inputs=input_image, outputs=decoder)
```
Let us look at summary of the model:
```
model.summary()
```
Compile the model with binary cross-entropy as the loss, and minimize the loss using the AdaDelta optimizer:
```
model.compile(optimizer='adadelta', loss='binary_crossentropy')
```
Now, let us train the model.
Generally, we feed the data to the model as model.fit(x, y), where x is the input and y is the label. But since an autoencoder reconstructs its input, the input and the output to the model should be the same. So we feed the data to the model as model.fit(x_train, x_train):
```
model.fit(x_train, x_train, epochs=50, batch_size=256, shuffle=True, validation_data=(x_test, x_test))
```
## Reconstruct images
Let us see how our model is performing in the test dataset. Feed the test images to the model and get the reconstructed images:
```
reconstructed_images = model.predict(x_test)
```
## Plotting reconstructed images
First let us plot the actual images, i.e., the input images:
```
n = 7
plt.figure(figsize=(20, 4))
for i in range(n):
    ax = plt.subplot(1, n, i+1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
```
Plot the reconstructed image:
```
n = 7
plt.figure(figsize=(20, 4))
for i in range(n):
    ax = plt.subplot(2, n, i + n + 1)
    plt.imshow(reconstructed_images[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
```
As you can notice, autoencoders have learned to reconstruct the given input image. In the next section, we will learn about convolutional autoencoder which uses convolutional layers in the encoder and decoder network.
# Ch 2: Supervised Learning
2.1: Classification and regression
----
Code for Chapter 2 by authors can be found here:
https://github.com/amueller/introduction_to_ml_with_python/blob/master/02-supervised-learning.ipynb
Two major types of supervised learning:
* classification: goal is to predict a class label (discrete)
* regression: goal is to predict a real number (continuous)
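To make the distinction concrete, here is a toy sketch (NumPy only; the data and helper names are made up, not from the book) of a 1-nearest-neighbor predictor used both ways — classification returns a discrete class label, regression a continuous value:

```
import numpy as np

# toy 1-D training data
X_train = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y_class = np.array([0, 0, 0, 1, 1, 1])               # discrete class labels
y_reg = np.array([1.1, 2.1, 2.9, 9.8, 11.2, 12.1])   # continuous targets

def nearest_index(x):
    # index of the training point closest to x
    return int(np.argmin(np.abs(X_train - x)))

def predict_class(x):
    # classification: predict a discrete class label
    return y_class[nearest_index(x)]

def predict_reg(x):
    # regression: predict a real number
    return y_reg[nearest_index(x)]

print(predict_class(2.4))  # → 0 (nearest point is 2.0, class 0)
print(predict_reg(2.4))    # → 2.1 (target of the nearest point, 2.0)
```

Same algorithm, same neighbors — the only difference is whether the output is drawn from a discrete label set or a continuous range.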
2.2: Generalization, Overfitting, Underfitting
----
generalization: a model should work well on the training data and test data.
overfitting: model fits too closely to the particularities of the training set and so does not generalize well (too complex)
underfitting: model does not even fit the data set well (too simple)
overfitting: https://en.wikipedia.org/wiki/Overfitting
Bias-variance tradeoff: https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff
2.3 Supervised Machine Learning Algorithms
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# The mglearn packages has some helper function used for plotting.
import mglearn # https://github.com/amueller/introduction_to_ml_with_python/tree/master/mglearn
from IPython.display import display
from sklearn.datasets import load_iris
from sklearn.datasets import make_blobs
# Render matplotlib figures inline in the notebook
%matplotlib inline
from preamble import *
# generate dataset
X, y = mglearn.datasets.make_forge()
# plot dataset
mglearn.discrete_scatter(X[:, 0], X[:, 1], y)
plt.legend(["Class 0", "Class 1"], loc=4)
plt.xlabel("First feature")
plt.ylabel("Second feature")
print("X.shape:", X.shape)
X, y = mglearn.datasets.make_wave(n_samples=40)
plt.plot(X, y, 'o')
plt.ylim(-3, 3)
plt.xlabel("Feature")
plt.ylabel("Target")
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
print("cancer.keys():\n", cancer.keys())
print("Shape of cancer data:", cancer.data.shape)
print("Sample counts per class:\n",
{n: v for n, v in zip(cancer.target_names, np.bincount(cancer.target))})
print("Feature names:\n", cancer.feature_names)
from sklearn.datasets import load_boston
boston = load_boston()
print("Data shape:", boston.data.shape)
X, y = mglearn.datasets.load_extended_boston()
print("X.shape:", X.shape)
```
# 2.3.2 k-Nearest Neighbors
k-Neighbors classification
```
mglearn.plots.plot_knn_classification(n_neighbors=1)
mglearn.plots.plot_knn_classification(n_neighbors=3)
from sklearn.model_selection import train_test_split
X, y = mglearn.datasets.make_forge()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)
print("Test set predictions:", clf.predict(X_test))
print("Test set accuracy: {:.2f}".format(clf.score(X_test, y_test)))
```
Analyzing KNeighborsClassifier
```
fig, axes = plt.subplots(1, 3, figsize=(10, 3))
for n_neighbors, ax in zip([1, 3, 9], axes):
    # the fit method returns the object self, so we can instantiate
    # and fit in one line
    clf = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X, y)
    mglearn.plots.plot_2d_separator(clf, X, fill=True, eps=0.5, ax=ax, alpha=.4)
    mglearn.discrete_scatter(X[:, 0], X[:, 1], y, ax=ax)
    ax.set_title("{} neighbor(s)".format(n_neighbors))
    ax.set_xlabel("feature 0")
    ax.set_ylabel("feature 1")
axes[0].legend(loc=3)
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, stratify=cancer.target, random_state=66)
training_accuracy = []
test_accuracy = []
# try n_neighbors from 1 to 20
neighbors_settings = range(1, 20)
for n_neighbors in neighbors_settings:
    # build the model
    clf = KNeighborsClassifier(n_neighbors=n_neighbors)
    clf.fit(X_train, y_train)
    # record training set accuracy
    training_accuracy.append(clf.score(X_train, y_train))
    # record generalization accuracy
    test_accuracy.append(clf.score(X_test, y_test))
plt.plot(neighbors_settings, training_accuracy, label="training accuracy")
plt.plot(neighbors_settings, test_accuracy, label="test accuracy")
plt.ylabel("Accuracy")
plt.xlabel("n_neighbors")
plt.legend()
```
**k-neighbors regression**
```
mglearn.plots.plot_knn_regression(n_neighbors=1)
mglearn.plots.plot_knn_regression(n_neighbors=3)
from sklearn.neighbors import KNeighborsRegressor
X, y = mglearn.datasets.make_wave(n_samples=40)
# split the wave dataset into a training and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# instantiate the model and set the number of neighbors to consider to 3
reg = KNeighborsRegressor(n_neighbors=3)
# fit the model using the training data and training targets
reg.fit(X_train, y_train)
print("Test set predictions:\n", reg.predict(X_test))
print("Test set R^2: {:.2f}".format(reg.score(X_test, y_test)))
fig, axes = plt.subplots(1, 3, figsize=(15, 4))
# create 1,000 data points, evenly spaced between -3 and 3
line = np.linspace(-3, 3, 1000).reshape(-1, 1)
for n_neighbors, ax in zip([1, 3, 9], axes):
    # make predictions using 1, 3, or 9 neighbors
    reg = KNeighborsRegressor(n_neighbors=n_neighbors)
    reg.fit(X_train, y_train)
    ax.plot(line, reg.predict(line))
    ax.plot(X_train, y_train, '^', c=mglearn.cm2(0), markersize=8)
    ax.plot(X_test, y_test, 'v', c=mglearn.cm2(1), markersize=8)
    ax.set_title(
        "{} neighbor(s)\n train score: {:.2f} test score: {:.2f}".format(
            n_neighbors, reg.score(X_train, y_train),
            reg.score(X_test, y_test)))
    ax.set_xlabel("Feature")
    ax.set_ylabel("Target")
axes[0].legend(["Model predictions", "Training data/target",
"Test data/target"], loc="best")
```
**Strength, weaknesses, and parameters**
Parameters
* number of neighbors
* distance measure
Strengths
* Easy to understand
* quick to implement
* good baseline method
Weaknesses
* can be slow with large datasets
* does not perform will with more than 100 features
* does not perform well with sparse datasets
# 2.3.3 Linear Models
Linear models \[IMLP, p. 47\] make predictions using a linear function of the input features
Linear model: y = w\[0\] * x\[0\] + w\[1\] * x\[1\] + ... + w\[p\] * x\[p\] + b
where the w\[i\] and b are learned and the x\[i\] are features.
The w\[i\] can be thought of as weights.
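As a quick numeric illustration of the formula (with made-up weights and features, not learned from data), the prediction is just a weighted sum of the features plus the intercept:

```
import numpy as np

# made-up learned parameters for p = 3 features
w = np.array([0.5, -1.0, 2.0])   # weights w[0], w[1], w[2]
b = 0.1                          # intercept b

x = np.array([1.0, 2.0, 3.0])    # one sample's feature values

# y = w[0]*x[0] + w[1]*x[1] + ... + w[p]*x[p] + b
y = np.dot(w, x) + b
print(y)  # 0.5*1.0 + (-1.0)*2.0 + 2.0*3.0 + 0.1 = 4.6
```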
```
mglearn.plots.plot_linear_regression_wave()
```
**Linear Models for Regression**
* linear regression (ordinary least squares or OLS)
* ridge regression
* lasso
**Linear Regression or OLS**
```
from sklearn.linear_model import LinearRegression
# A small data set
X, y = mglearn.datasets.make_wave(n_samples=60)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
lr = LinearRegression().fit(X_train, y_train)
print("lr.coef_:", lr.coef_)
print("lr.intercept_:", lr.intercept_)
print("Training set score: {:.2f}".format(lr.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lr.score(X_test, y_test)))
# Large data set
X, y = mglearn.datasets.load_extended_boston()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
lr = LinearRegression().fit(X_train, y_train)
print(f'The Boston Housing Data has length {len(X_train)}.')
print("Training set score: {:.2f}".format(lr.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lr.score(X_test, y_test)))
```
The discrepancy here is a sign of overfitting. We need a model that will allow us to control complexity.
Ridge regression is a common alternative to ordinary linear regression.
In ridge regression the weights are chosen so that they predict well on the training data while keeping the magnitude of the coefficients as small as possible. This is an example of regularization, specifically L2 regularization.
To chase down more technical details see https://en.wikipedia.org/wiki/Ridge_regression and https://en.wikipedia.org/wiki/Tikhonov_regularization.
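For intuition, ridge has a closed-form solution: minimizing the squared error plus alpha times the sum of squared weights gives w = (X^T X + alpha*I)^(-1) X^T y. A small sketch on synthetic data (no intercept; the coefficients are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
X_demo = rng.normal(size=(200, 3))
true_w = np.array([1.0, -2.0, 0.5])
y_demo = X_demo @ true_w + rng.normal(scale=0.1, size=200)

alpha = 1.0
# closed-form ridge solution: w = (X^T X + alpha * I)^-1 X^T y
w_ridge = np.linalg.solve(X_demo.T @ X_demo + alpha * np.eye(3), X_demo.T @ y_demo)
print(w_ridge)   # close to true_w, shrunk slightly toward zero
```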
**Ridge Regression**
```
from sklearn.linear_model import Ridge
ridge = Ridge().fit(X_train, y_train)
print("Training set score: {:.2f}".format(ridge.score(X_train, y_train)))
print("Test set score: {:.2f}".format(ridge.score(X_test, y_test)))
ridge10 = Ridge(alpha=10).fit(X_train, y_train)
print("Training set score: {:.2f}".format(ridge10.score(X_train, y_train)))
print("Test set score: {:.2f}".format(ridge10.score(X_test, y_test)))
ridge01 = Ridge(alpha=0.1).fit(X_train, y_train)
print("Training set score: {:.2f}".format(ridge01.score(X_train, y_train)))
print("Test set score: {:.2f}".format(ridge01.score(X_test, y_test)))
plt.plot(ridge.coef_, 's', label="Ridge alpha=1")
plt.plot(ridge10.coef_, '^', label="Ridge alpha=10")
plt.plot(ridge01.coef_, 'v', label="Ridge alpha=0.1")
plt.plot(lr.coef_, 'o', label="LinearRegression")
plt.xlabel("Coefficient index")
plt.ylabel("Coefficient magnitude")
xlims = plt.xlim()
plt.hlines(0, xlims[0], xlims[1])
plt.xlim(xlims)
plt.ylim(-25, 25)
plt.legend()
mglearn.plots.plot_ridge_n_samples()
```
##### **Lasso**
Start with Lasso (linear regression with L1 regularization)
\[IMLP, p. 55\] in the book and In \[36\]
https://github.com/amueller/introduction_to_ml_with_python/blob/master/02-supervised-learning.ipynb
In statistics and machine learning, lasso (least absolute shrinkage and selection operator; also Lasso or LASSO) is a regression analysis method that performs both variable selection and regularization in order to enhance the prediction accuracy and interpretability of the resulting statistical model. It was originally introduced in geophysics, and later by Robert Tibshirani, who coined the term.
See https://en.wikipedia.org/wiki/Lasso_(statistics) for more information.
Original paper on [Lasso](https://www.jstor.org/stable/2346178?seq=1#metadata_info_tab_contents).
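The reason lasso produces exactly-zero coefficients (and thus performs feature selection) is its proximal operator, soft-thresholding: coefficients are shrunk toward zero by the penalty, and anything smaller than the threshold becomes exactly zero. A one-function sketch:

```python
import numpy as np

def soft_threshold(z, t):
    """Shrink z toward 0 by t; values with |z| <= t become exactly 0."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

print(soft_threshold(np.array([3.0, -0.5, 1.2]), 1.0))
```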
```
from sklearn.linear_model import Lasso
lasso = Lasso().fit(X_train, y_train)
print("Training set score: {:.2f}".format(lasso.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lasso.score(X_test, y_test)))
print("Number of features used:", np.sum(lasso.coef_ != 0))
# we increase the default setting of "max_iter",
# otherwise the model would warn us that we should increase max_iter.
lasso001 = Lasso(alpha=0.01, max_iter=100000).fit(X_train, y_train)
print("Training set score: {:.2f}".format(lasso001.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lasso001.score(X_test, y_test)))
print("Number of features used:", np.sum(lasso001.coef_ != 0))
# We try setting alpha lower. In this case we remove the effect of regularization and
# achieve a result similar to linear regression.
lasso00001 = Lasso(alpha=0.0001, max_iter=100000).fit(X_train, y_train)
print("Training set score: {:.2f}".format(lasso00001.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lasso00001.score(X_test, y_test)))
print("Number of features used:", np.sum(lasso00001.coef_ != 0))
plt.plot(lasso.coef_, 's', label="Lasso alpha=1")
plt.plot(lasso001.coef_, '^', label="Lasso alpha=0.01")
plt.plot(lasso00001.coef_, 'v', label="Lasso alpha=0.0001")
plt.plot(ridge01.coef_, 'o', label="Ridge alpha=0.1")
plt.legend(ncol=2, loc=(0, 1.05))
plt.ylim(-25, 25)
plt.xlabel("Coefficient index")
plt.ylabel("Coefficient magnitude")
```
The ElasticNet class of scikit-learn combines the penalties of Lasso and Ridge, and in practice this combination often works best. The price is that both the L1 and L2 regularization parameters must be tuned.
Logistic Regression: https://en.wikipedia.org/wiki/Logistic_regression
Linear Support Vector Machine: Found here https://en.wikipedia.org/wiki/Support-vector_machine
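Both are linear classifiers: they learn w and b and classify by the sign of w · x + b. Logistic regression additionally maps this score through the sigmoid to get a probability. A sketch with made-up parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.array([2.0, -1.0]), 0.5      # hypothetical learned parameters
x_sample = np.array([1.0, 1.0])
p = sigmoid(w @ x_sample + b)          # P(class 1 | x); the decision boundary is w @ x + b = 0
print(p, int(p > 0.5))
```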
```
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
X, y = mglearn.datasets.make_forge()
fig, axes = plt.subplots(1, 2, figsize=(10, 3))
for model, ax in zip([LinearSVC(), LogisticRegression()], axes):
clf = model.fit(X, y)
mglearn.plots.plot_2d_separator(clf, X, fill=False, eps=0.5,
ax=ax, alpha=.7)
mglearn.discrete_scatter(X[:, 0], X[:, 1], y, ax=ax)
ax.set_title(clf.__class__.__name__)
ax.set_xlabel("Feature 0")
ax.set_ylabel("Feature 1")
axes[0].legend()
mglearn.plots.plot_linear_svc_regularization()
```
Start with Linear models for classification
\[IMLP, p. 61\] in the book and In \[42\]
https://github.com/amueller/introduction_to_ml_with_python/blob/master/02-supervised-learning.ipynb
```
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, stratify=cancer.target, random_state=42)
logreg = LogisticRegression(C=1).fit(X_train, y_train)
# print("Training set score: {:.3f}".format(logreg.score(X_train, y_train)))
print(f"Training set score: {logreg.score(X_train, y_train):.3f}")
print(f"Test set score: {logreg.score(X_test, y_test):.3f}")
```
Regularization means restricting a model to avoid overfitting. The parameter C determines the strength of the regularization: a higher value of C corresponds to less regularization.
```
logreg100 = LogisticRegression(C=100).fit(X_train, y_train)
print("Training set score: {:.3f}".format(logreg100.score(X_train, y_train)))
print("Test set score: {:.3f}".format(logreg100.score(X_test, y_test)))
logreg001 = LogisticRegression(C=0.01).fit(X_train, y_train)
print("Training set score: {:.3f}".format(logreg001.score(X_train, y_train)))
print("Test set score: {:.3f}".format(logreg001.score(X_test, y_test)))
plt.plot(logreg.coef_.T, 'o', label="C=1")
plt.plot(logreg100.coef_.T, '^', label="C=100")
plt.plot(logreg001.coef_.T, 'v', label="C=0.01")
plt.xticks(range(cancer.data.shape[1]), cancer.feature_names, rotation=90)
xlims = plt.xlim()
plt.hlines(0, xlims[0], xlims[1])
plt.xlim(xlims)
plt.ylim(-5, 5)
plt.xlabel("Feature")
plt.ylabel("Coefficient magnitude")
plt.legend()
for C, marker in zip([0.001, 1, 100], ['o', '^', 'v']):
lr_l1 = LogisticRegression(C=C, solver='liblinear', penalty="l1").fit(X_train, y_train)
print("Training accuracy of l1 logreg with C={:.3f}: {:.2f}".format(
C, lr_l1.score(X_train, y_train)))
print("Test accuracy of l1 logreg with C={:.3f}: {:.2f}".format(
C, lr_l1.score(X_test, y_test)))
plt.plot(lr_l1.coef_.T, marker, label="C={:.3f}".format(C))
plt.xticks(range(cancer.data.shape[1]), cancer.feature_names, rotation=90)
xlims = plt.xlim()
plt.hlines(0, xlims[0], xlims[1])
plt.xlim(xlims)
plt.xlabel("Feature")
plt.ylabel("Coefficient magnitude")
plt.ylim(-5, 5)
plt.legend(loc=3)
from sklearn.datasets import make_blobs
X, y = make_blobs(random_state=42)
mglearn.discrete_scatter(X[:, 0], X[:, 1], y)
plt.xlabel("Feature 0")
plt.ylabel("Feature 1")
plt.legend(["Class 0", "Class 1", "Class 2"])
linear_svm = LinearSVC().fit(X, y)
print("Coefficient shape: ", linear_svm.coef_.shape)
print("Intercept shape: ", linear_svm.intercept_.shape)
mglearn.discrete_scatter(X[:, 0], X[:, 1], y)
line = np.linspace(-15, 15)
for coef, intercept, color in zip(linear_svm.coef_, linear_svm.intercept_,
mglearn.cm3.colors):
plt.plot(line, -(line * coef[0] + intercept) / coef[1], c=color)
print(f"y = {-1 * coef[0]/coef[1]} x + {-1 * intercept/coef[1]}")
plt.ylim(-10, 15)
plt.xlim(-10, 8)
plt.xlabel("Feature 0")
plt.ylabel("Feature 1")
plt.legend(['Class 0', 'Class 1', 'Class 2', 'Line class 0', 'Line class 1',
'Line class 2'], loc=(1.01, 0.3))
mglearn.plots.plot_2d_classification(linear_svm, X, fill=True, alpha=.7)
mglearn.discrete_scatter(X[:, 0], X[:, 1], y)
line = np.linspace(-15, 15)
for coef, intercept, color in zip(linear_svm.coef_, linear_svm.intercept_,
mglearn.cm3.colors):
plt.plot(line, -(line * coef[0] + intercept) / coef[1], c=color)
plt.legend(['Class 0', 'Class 1', 'Class 2', 'Line class 0', 'Line class 1',
'Line class 2'], loc=(1.01, 0.3))
plt.xlabel("Feature 0")
plt.ylabel("Feature 1")
```
### Strength, weaknesses, and parameters
[IMLP, p. 69] Start with In[50] here:
https://github.com/amueller/introduction_to_ml_with_python/blob/master/02-supervised-learning.ipynb
If only a few features are important use L1 regularization
Otherwise, default should be L2 regularization.
Linear models are very fast to train, fast to predict, scale to large datasets, work well with sparse data, and are relatively easy to understand. Highly correlated features can make it difficult to interpret the models.
For very large datasets, consider using the solver='sag' option in LogisticRegression or Ridge. For an even more scalable version of linear models, try the SGDClassifier or SGDRegressor classes.
```
# instantiate model and fit it in one line
logreg = LogisticRegression().fit(X_train, y_train)
logreg = LogisticRegression()
y_pred = logreg.fit(X_train, y_train).predict(X_test)
y_pred = LogisticRegression().fit(X_train, y_train).predict(X_test)
```
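As a hedged sketch of the more scalable SGD route mentioned above (the dataset here is synthetic; with the default hinge loss, SGDClassifier trains a linear SVM):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X_syn, y_syn = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X_syn, y_syn, random_state=0)

# a linear SVM trained with stochastic gradient descent
sgd = SGDClassifier(loss="hinge", alpha=1e-4, random_state=0).fit(X_tr, y_tr)
print("Test set score: {:.2f}".format(sgd.score(X_te, y_te)))
```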
### Naive Bayes Classifiers
```
X = np.array([[0, 1, 0, 1],
[1, 0, 1, 1],
[0, 0, 0, 1],
[1, 0, 1, 0]])
y = np.array([0, 1, 0, 1])
counts = {}
for label in np.unique(y):
# iterate over each class
# count (sum) entries of 1 per feature
counts[label] = X[y == label].sum(axis=0)
print("Feature counts:\n", counts)
```
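Those per-class counts are essentially all a Bernoulli naive Bayes classifier needs. A minimal sketch of prediction from the counts, with Laplace smoothing (an illustration of the idea, not scikit-learn's `BernoulliNB` internals):

```python
import numpy as np

X = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 0, 0, 1],
              [1, 0, 1, 0]])
y = np.array([0, 1, 0, 1])

classes = np.unique(y)
# Laplace-smoothed P(feature = 1 | class), one row per class
theta = np.array([(X[y == c].sum(axis=0) + 1) / (np.sum(y == c) + 2) for c in classes])
prior = np.array([np.mean(y == c) for c in classes])

x_new = np.array([1, 0, 1, 0])
# log posterior (up to a constant): log prior + Bernoulli log-likelihood per feature
log_post = np.log(prior) + (x_new * np.log(theta) + (1 - x_new) * np.log(1 - theta)).sum(axis=1)
print("Predicted class:", classes[np.argmax(log_post)])
```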
```
# default_exp core
```
# Few-shot Learning with GPT-J
> API details.
```
# export
import os
import pandas as pd
#hide
from nbdev.showdoc import *
import toml
s = toml.load("../.streamlit/secrets.toml", _dict=dict)
```
Using the `GPT-J` model API from [Nlpcloud](https://nlpcloud.io/home/token)
```
import nlpcloud
client = nlpcloud.Client("gpt-j", s['nlpcloud_token'], gpu=True)
```

## Aoe2 Civ Builder
https://ageofempires.fandom.com/wiki/Civilizations_(Age_of_Empires_II)
```
# example API call
generation = client.generation("""Civilisation: Britons
Specialty: Foot archers
Unique unit: Longbowman
Unique technologies: Yeomen (+1 range for foot archers and +2 attack for towers)
Unique technologies: Warwolf (Trebuchets do blast damage)
Wonder: Chichester Cathedral
Civilization bonuses: Shepherds work 25% faster.
Team bonus: Town Centers cost -50% wood (starting in the Castle Age).
###
Civilisation: Mongols
Specialty: Cavalry archers
Unique unit: Mangudai
Unique technologies: Nomads (Houses retain population when destroyed)
Unique technologies: Drill (Siege Workshop units move 50% faster)
Wonder: Great Tent of Genghis Khan
Civilization bonuses: Hunters work 40% faster.
Team bonus: The Scout Cavalry line has +2 Line of Sight.
###
Civilisation: Celts
Specialty: Infantry and siege weapons
Unique unit: Woad Raider
Unique technologies: Stronghold (Castles and towers fire 25% faster)
Unique technologies: Furor Celtica (Siege Workshop units have +40% HP)
Wonder: Rock of Cashel
Civilization bonuses: Infantry units move 15% faster (starting in the Feudal Age).
Civilization bonuses: Lumberjacks work 15% faster.
Civilization bonuses: Siege weapons fire 25% faster.
Civilization bonuses: Enemy herdables can be converted regardless of enemy units next to them.
Team bonus: Siege Workshops work 20% faster.
###
Civilisation: New Zealand Maori""",
max_length=250,
length_no_input=True,
end_sequence="###",
remove_input=True)
print('Civilisation: New Zealand Maori\n ', generation["generated_text"])
def create_input_string(civname):
return f"""Civilisation: Britons
Specialty: Foot archers
Unique unit: Longbowman
Unique technologies: Yeomen (+1 range for foot archers and +2 attack for towers)
Unique technologies: Warwolf (Trebuchets do blast damage)
Wonder: Chichester Cathedral
Civilization bonuses: Shepherds work 25% faster.
Team bonus: Town Centers cost -50% wood (starting in the Castle Age).
###
Civilisation: Mongols
Specialty: Cavalry archers
Unique unit: Mangudai
Unique technologies: Nomads (Houses retain population when destroyed)
Unique technologies: Drill (Siege Workshop units move 50% faster)
Wonder: Great Tent of Genghis Khan
Civilization bonuses: Hunters work 40% faster.
Team bonus: The Scout Cavalry line has +2 Line of Sight.
###
Civilisation: Celts
Specialty: Infantry and siege weapons
Unique unit: Woad Raider
Unique technologies: Stronghold (Castles and towers fire 25% faster)
Unique technologies: Furor Celtica (Siege Workshop units have +40% HP)
Wonder: Rock of Cashel
Civilization bonuses: Infantry units move 15% faster (starting in the Feudal Age).
Civilization bonuses: Lumberjacks work 15% faster.
Civilization bonuses: Siege weapons fire 25% faster.
Civilization bonuses: Enemy herdables can be converted regardless of enemy units next to them.
Team bonus: Siege Workshops work 20% faster.
###
Civilisation: {civname}"""
def generate_civ(civname, client):
"""
Creates input string and sends to nlpcloud for few-shot learning
"""
print(f'🌐 Generating New Civ: {civname} \n')
input_str = create_input_string(civname)
generation = client.generation(input_str,
max_length=250,
length_no_input=True,
end_sequence='###',
remove_input=True)
civgen = generation["generated_text"].strip('\n')
print(f"🛡️ **{civname}**\n{civgen}")
return civgen
c = generate_civ(civname='New Zealand Maori', client=client)
c = generate_civ(civname='Fijians', client=client)
```

```
c = generate_civ(civname='Canadians', client=client)
c = generate_civ(civname='European Union', client=client)
c = generate_civ(civname='Dutch', client=client)
c = generate_civ(civname='Star Wars Death Star', client=client)
```
# Synthetic Images from simulated data
## Authors
Yi-Hao Chen, Sebastian Heinz, Kelle Cruz, Stephanie T. Douglas
## Learning Goals
- Assign WCS astrometry to an image using ```astropy.wcs```
- Construct a PSF using ```astropy.modeling.model```
- Convolve raw data with PSF using ```astropy.convolution```
- Calculate polarization fraction and angle from Stokes I, Q, U data
- Overplot quivers on the image
## Keywords
modeling, convolution, coordinates, WCS, FITS, radio astronomy, matplotlib, colorbar
## Summary
In this tutorial, we will:
[1. Load and examine the FITS file](#1.-Load-and-examine-the-FITS-file)
[2. Set up astrometry coordinates](#2.-Set-up-astrometry-coordinates)
[3. Prepare a Point Spread Function (PSF)](#3.-Prepare-a-Point-Spread-Function-(PSF))
>[3.a How to do this without astropy kernels](#3.a-How-to-do-this-without-astropy-kernels)
[4. Convolve image with PSF](#4.-Convolve-image-with-PSF)
[5. Convolve Stokes Q and U images](#5.-Convolve-Stokes-Q-and-U-images)
[6. Calculate polarization angle and fraction for quiver plot](#6.-Calculate-polarization-angle-and-fraction-for-quiver-plot)
```
from astropy.utils.data import download_file
from astropy.io import fits
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.wcs import WCS
from astropy.convolution import Gaussian2DKernel
from astropy.modeling.models import Lorentz1D
from astropy.convolution import convolve_fft
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
```
## 1. Load and examine the FITS file
Here we begin with 2-dimensional data that were stored in FITS format from some simulations. We have Stokes I, Q, and U maps. We'll first load a FITS file and examine the header.
```
file_i = download_file(
'http://data.astropy.org/tutorials/synthetic-images/synchrotron_i_lobe_0700_150MHz_sm.fits',
cache=True)
hdulist = fits.open(file_i)
hdulist.info()
hdu = hdulist['NN_EMISSIVITY_I_LOBE_150.0MHZ']
hdu.header
```
We can see this FITS file, which was created in [yt](https://yt-project.org/), has x and y coordinates in physical units (cm). We want to convert them into sky coordinates. Before we proceed, let's find out the range of the data and plot a histogram.
```
print(hdu.data.max())
print(hdu.data.min())
np.seterr(divide='ignore') #suppress the warnings raised by taking log10 of data with zeros
plt.hist(np.log10(hdu.data.flatten()), range=(-3, 2), bins=100);
```
Once we know the range of the data, we can do a visualization with the proper range (```vmin``` and ```vmax```).
```
fig = plt.figure(figsize=(6,12))
fig.add_subplot(111)
# We plot it in log-scale and add a small number to avoid nan values.
plt.imshow(np.log10(hdu.data+1E-3), vmin=-1, vmax=1, origin='lower')
```
## 2. Set up astrometry coordinates
From the header, we know that the x and y axes are in centimeters. However, in an observation we usually have RA and Dec. To convert physical units into sky coordinates, we need to make some assumptions about where the object is located, i.e. the distance to the object and its central RA and Dec.
```
# distance to the object
dist_obj = 200*u.Mpc
# We have the RA in hh:mm:ss and DEC in dd:mm:ss format.
# We will use Skycoord to convert them into degrees later.
ra_obj = '19h59m28.3566s'
dec_obj = '+40d44m02.096s'
```
Here we convert the pixel scale from cm to degrees by dividing by the distance to the object.
```
cdelt1 = ((hdu.header['CDELT1']*u.cm/dist_obj.to('cm'))*u.rad).to('deg')
cdelt2 = ((hdu.header['CDELT2']*u.cm/dist_obj.to('cm'))*u.rad).to('deg')
print(cdelt1, cdelt2)
```
Use ```astropy.wcs.WCS``` to prepare a FITS header.
```
w = WCS(naxis=2)
# reference pixel coordinate
w.wcs.crpix = [hdu.data.shape[0]/2,hdu.data.shape[1]/2]
# sizes of the pixel in degrees
w.wcs.cdelt = [-cdelt1.base, cdelt2.base]
# converting ra and dec into degrees
c = SkyCoord(ra_obj, dec_obj)
w.wcs.crval = [c.ra.deg, c.dec.deg]
# the units of the axes are in degrees
w.wcs.cunit = ['deg', 'deg']
```
Now we can convert the WCS coordinate into header and update the hdu.
```
wcs_header = w.to_header()
hdu.header.update(wcs_header)
```
Let's take a look at the header. ```CDELT1```, ```CDELT2```, ```CUNIT1```, ```CUNIT2```, ```CRVAL1```, and ```CRVAL2``` are in sky coordinates now.
```
hdu.header
wcs = WCS(hdu.header)
fig = plt.figure(figsize=(6,12))
fig.add_subplot(111, projection=wcs)
plt.imshow(np.log10(hdu.data+1e-3), vmin=-1, vmax=1, origin='lower')
plt.xlabel('RA')
plt.ylabel('Dec')
```
Now we have the sky coordinate for the image!
## 3. Prepare a Point Spread Function (PSF)
Simple PSFs are included in ```astropy.convolution.kernel```. We'll use ```astropy.convolution.Gaussian2DKernel``` here.
First we need to set the telescope resolution. For a 2D Gaussian, we can calculate sigma in pixels by using our pixel scale keyword ```cdelt2``` from above.
```
# assume our telescope has 1 arcsecond resolution
telescope_resolution = 1*u.arcsecond
# calculate the sigma in pixels.
# since cdelt is in degrees, we use _.to('deg')
sigma = telescope_resolution.to('deg')/cdelt2
# By default, the Gaussian kernel will go to 4 sigma
# in each direction
psf = Gaussian2DKernel(sigma)
# let's take a look:
plt.imshow(psf.array.value)
```
## 3.a How to do this without astropy kernels
Maybe your PSF is more complicated. Here's an alternative way to build one, using a 2D Lorentzian.
```
# set FWHM and psf grid
telescope_resolution = 1*u.arcsecond
gamma = telescope_resolution.to('deg')/cdelt2
x_grid = np.outer(np.linspace(-gamma*4,gamma*4,int(8*gamma)),np.ones(int(8*gamma)))
r_grid = np.sqrt(x_grid**2 + np.transpose(x_grid**2))
lorentzian = Lorentz1D(fwhm=2*gamma)
# extrude a 2D azimuthally symmetric PSF
lorentzian_psf = lorentzian(r_grid)
# normalization
lorentzian_psf /= np.sum(lorentzian_psf)
# let's take a look again:
plt.imshow(lorentzian_psf.value, interpolation='none')
```
## 4. Convolve image with PSF
Here we use ```astropy.convolution.convolve_fft``` to convolve the image. This routine uses Fourier transforms for faster calculation, and since our data is $2^n$-sized it is particularly fast. Using an FFT, however, causes boundary effects, so we need to specify how to handle the boundary. Here we choose to "wrap" the data, which makes it periodic.
```
convolved_image = convolve_fft(hdu.data, psf, boundary='wrap')
# Put a psf at the corner of the image
delta_x_psf=100 # number of pixels from the edges
xmin, xmax = -psf.shape[1]-delta_x_psf, -delta_x_psf
ymin, ymax = delta_x_psf, delta_x_psf+psf.shape[0]
convolved_image[xmin:xmax, ymin:ymax] = psf.array/psf.array.max()*10
```
Now let's take a look at the convolved image.
```
wcs = WCS(hdu.header)
fig = plt.figure(figsize=(8,12))
i_plot = fig.add_subplot(111, projection=wcs)
plt.imshow(np.log10(convolved_image+1e-3), vmin=-1, vmax=1.0, origin='lower')#, cmap=plt.cm.viridis)
plt.xlabel('RA')
plt.ylabel('Dec')
plt.colorbar()
```
## 5. Convolve Stokes Q and U images
```
hdulist.info()
file_q = download_file(
'http://data.astropy.org/tutorials/synthetic-images/synchrotron_q_lobe_0700_150MHz_sm.fits',
cache=True)
hdulist = fits.open(file_q)
hdu_q = hdulist['NN_EMISSIVITY_Q_LOBE_150.0MHZ']
file_u = download_file(
'http://data.astropy.org/tutorials/synthetic-images/synchrotron_u_lobe_0700_150MHz_sm.fits',
cache=True)
hdulist = fits.open(file_u)
hdu_u = hdulist['NN_EMISSIVITY_U_LOBE_150.0MHZ']
# Update the header with the wcs_header we created earlier
hdu_q.header.update(wcs_header)
hdu_u.header.update(wcs_header)
# Convolve the images with the the psf
convolved_image_q = convolve_fft(hdu_q.data, psf, boundary='wrap')
convolved_image_u = convolve_fft(hdu_u.data, psf, boundary='wrap')
```
Let's plot the Q and U images.
```
wcs = WCS(hdu.header)
fig = plt.figure(figsize=(16,12))
fig.add_subplot(121, projection=wcs)
plt.imshow(convolved_image_q, cmap='seismic', vmin=-0.5, vmax=0.5, origin='lower')#, cmap=plt.cm.viridis)
plt.xlabel('RA')
plt.ylabel('Dec')
plt.colorbar()
fig.add_subplot(122, projection=wcs)
plt.imshow(convolved_image_u, cmap='seismic', vmin=-0.5, vmax=0.5, origin='lower')#, cmap=plt.cm.viridis)
plt.xlabel('RA')
plt.ylabel('Dec')
plt.colorbar()
```
## 6. Calculate polarization angle and fraction for quiver plot
Note that rotating the Stokes Q and U maps requires changing the signs of both. Here we assume that the Stokes Q and U maps were calculated defining the y/declination axis as vertical, such that Q is positive for polarization vectors along the x/right-ascension axis.
```
# First, we plot the background image
fig = plt.figure(figsize=(8,16))
i_plot = fig.add_subplot(111, projection=wcs)
i_plot.imshow(np.log10(convolved_image+1e-3), vmin=-1, vmax=1, origin='lower')
# ranges of the axis
xx0, xx1 = i_plot.get_xlim()
yy0, yy1 = i_plot.get_ylim()
# binning factor
factor = [64, 66]
# re-binned number of points in each axis
nx_new = convolved_image.shape[1] // factor[0]
ny_new = convolved_image.shape[0] // factor[1]
# These are the positions of the quivers
X,Y = np.meshgrid(np.linspace(xx0,xx1,nx_new,endpoint=True),
np.linspace(yy0,yy1,ny_new,endpoint=True))
# bin the data
I_bin = convolved_image.reshape(nx_new, factor[0], ny_new, factor[1]).sum(3).sum(1)
Q_bin = convolved_image_q.reshape(nx_new, factor[0], ny_new, factor[1]).sum(3).sum(1)
U_bin = convolved_image_u.reshape(nx_new, factor[0], ny_new, factor[1]).sum(3).sum(1)
# polarization angle
psi = 0.5*np.arctan2(U_bin, Q_bin)
# polarization fraction
frac = np.sqrt(Q_bin**2+U_bin**2)/I_bin
# mask for low signal area
mask = I_bin < 0.1
frac[mask] = 0
psi[mask] = 0
pixX = frac*np.cos(psi) # X-vector
pixY = frac*np.sin(psi) # Y-vector
# keyword arguments for quiverplots
quiveropts = dict(headlength=0, headwidth=1, pivot='middle')
i_plot.quiver(X, Y, pixX, pixY, scale=8, **quiveropts)
```
## Exercise
### Convert the units of the data from Jy/arcsec^2 to Jy/beam
The intensity of the data is given in units of Jy/arcsec^2. Observational data usually have intensity units of Jy/beam. Assuming a beam size, or taking the PSF we created earlier, you can convert the data into Jy/beam.
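One possible starting point for the exercise (assuming the 1-arcsecond Gaussian beam from above): the conversion factor is the beam solid angle in arcsec², which for a Gaussian beam is 2πσ² = π·FWHM² / (4 ln 2).

```python
import numpy as np

fwhm_arcsec = 1.0                                  # the telescope resolution assumed earlier
sigma_arcsec = fwhm_arcsec / (2.0 * np.sqrt(2.0 * np.log(2.0)))
beam_area = 2.0 * np.pi * sigma_arcsec**2          # beam solid angle in arcsec^2 (~1.133)
# data_in_jy_per_beam = data_in_jy_per_arcsec2 * beam_area
print(beam_area)
```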
# Candlestick Upside Gap Two Crows
https://www.investopedia.com/terms/u/upside-gap-two-crows.asp
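Before handing detection to TA-Lib below, here is a rough three-bar sketch of the pattern logic in plain pandas. These simplified rules are my own reading of the pattern, not TA-Lib's exact, calibrated conditions: bar 1 is bullish, bar 2 is a bearish candle gapping above bar 1's close, and bar 3 is a bearish candle engulfing bar 2's body yet still closing above bar 1.

```python
import pandas as pd

def upside_gap_two_crows(o, c):
    """Simplified three-bar check of Upside Gap Two Crows (open/close only)."""
    bull1 = c.shift(2) > o.shift(2)                     # bar 1 bullish
    bear2 = c.shift(1) < o.shift(1)                     # bar 2 bearish
    open_gap2 = o.shift(1) > c.shift(2)                 # bar 2 opens above bar 1 close
    body_gap2 = c.shift(1) > c.shift(2)                 # bar 2 body stays above bar 1 close
    bear3 = c < o                                       # bar 3 bearish
    engulf3 = (o > o.shift(1)) & (c < c.shift(1))       # bar 3 engulfs bar 2 body
    above1 = c > c.shift(2)                             # bar 3 still closes above bar 1
    return bull1 & bear2 & open_gap2 & body_gap2 & bear3 & engulf3 & above1

# toy three-bar example that satisfies the rules
o = pd.Series([10.0, 13.0, 13.2])
c = pd.Series([12.0, 12.5, 12.3])
print(upside_gap_two_crows(o, c))   # True only at the third bar
```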
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import talib
import warnings
warnings.filterwarnings("ignore")
# yahoo finance is used to fetch data
import yfinance as yf
yf.pdr_override()
# input
symbol = 'ICLR'
start = '2012-01-01'
end = '2021-10-22'
# Read data
df = yf.download(symbol,start,end)
# View Columns
df.head()
```
## Candlestick with Upside Gap Two Crows
```
from matplotlib import dates as mdates
import datetime as dt
dfc = df.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
#dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = pd.to_datetime(dfc['Date'])
dfc['Date'] = dfc['Date'].apply(mdates.date2num)
dfc.head()
from mplfinance.original_flavor import candlestick_ohlc
fig = plt.figure(figsize=(14,10))
ax = plt.subplot(2, 1, 1)
candlestick_ohlc(ax,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax.xaxis_date()
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax.grid(True, which='both')
ax.minorticks_on()
axv = ax.twinx()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
axv.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
axv.axes.yaxis.set_ticklabels([])
axv.set_ylim(0, 3*df.Volume.max())
ax.set_title('Stock '+ symbol +' Closing Price')
ax.set_ylabel('Price')
two_crows = talib.CDLUPSIDEGAP2CROWS(df['Open'], df['High'], df['Low'], df['Close'])
two_crows = two_crows[two_crows != 0]
df['two_crows'] = talib.CDLUPSIDEGAP2CROWS(df['Open'], df['High'], df['Low'], df['Close'])
df.loc[df['two_crows'] !=0]
df['Adj Close'].loc[df['two_crows'] !=0]
df['two_crows'].loc[df['two_crows'] !=0].index
two_crows
two_crows.index
df
fig = plt.figure(figsize=(20,16))
ax = plt.subplot(2, 1, 1)
candlestick_ohlc(ax,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax.xaxis_date()
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax.grid(True, which='both')
ax.minorticks_on()
axv = ax.twinx()
ax.plot_date(df['Adj Close'].loc[df['two_crows'] !=0].index, df['Adj Close'].loc[df['two_crows'] !=0],
'Dc', # marker style 'D' (diamond), color 'c' (cyan)
fillstyle='none', # marker is not filled (with color)
ms=10.0)
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
axv.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
axv.axes.yaxis.set_ticklabels([])
axv.set_ylim(0, 3*df.Volume.max())
ax.set_title('Stock '+ symbol +' Closing Price')
ax.set_ylabel('Price')
```
## Plot Certain dates
```
df = df['2019-04-20':'2019-05-05']
dfc = df.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
#dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = pd.to_datetime(dfc['Date'])
dfc['Date'] = dfc['Date'].apply(mdates.date2num)
dfc.head()
fig = plt.figure(figsize=(20,16))
ax = plt.subplot(2, 1, 1)
ax.set_facecolor('white')
candlestick_ohlc(ax,dfc.values, width=0.5, colorup='black', colordown='red', alpha=1.0)
ax.xaxis_date()
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
#ax.grid(True, which='both')
#ax.minorticks_on()
axv = ax.twinx()
ax.plot_date(df['Adj Close'].loc[df['two_crows'] !=0].index, df['Adj Close'].loc[df['two_crows'] !=0],
'*y', # marker style '*' (star), color 'y' (yellow)
fillstyle='none', # marker is not filled (with color)
ms=40.0)
colors = dfc.VolumePositive.map({True: 'black', False: 'red'})
axv.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
axv.axes.yaxis.set_ticklabels([])
axv.set_ylim(0, 3*df.Volume.max())
ax.set_title('Stock '+ symbol +' Closing Price')
ax.set_ylabel('Price')
```
# Highlight Candlestick
```
from matplotlib.dates import date2num
from datetime import datetime
fig = plt.figure(figsize=(20,16))
ax = plt.subplot(2, 1, 1)
candlestick_ohlc(ax,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax.xaxis_date()
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
#ax.grid(True, which='both')
#ax.minorticks_on()
axv = ax.twinx()
ax.axvspan(date2num(datetime(2019,4,28)), date2num(datetime(2019,4,30)),
label="Upside Gap Two Crows Bearish",color="red", alpha=0.3)
ax.legend()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
axv.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
axv.axes.yaxis.set_ticklabels([])
axv.set_ylim(0, 3*df.Volume.max())
ax.set_title('Stock '+ symbol +' Closing Price')
ax.set_ylabel('Price')
```
# Slope Analysis
This project uses the change in holding-current slope to identify drug responders.
## Analysis Steps
The `getBaselineAndMaxDrugSlope` function smooths the raw data with a moving window whose width is set by `filterSize`, analyzes the smoothed holding current in an ABF, and returns the baseline slope and drug slope.
The _slope of baseline_ is calculated as the linear regression slope over the 3-minute period before drug onset.
In addition, the smoothed data are separated into segments of n = `regressionSize` data points each, and the linear regression slope is calculated for each segment.
The _peak slope of drug_ is the most negative segment slope during the chosen drug period (1-5 minutes after drug onset, in this case).
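A minimal sketch of those steps on a toy trace. The helper names, the sample rate, and the synthetic trace are invented here for illustration; the real implementation lives in `slopeTools`.

```python
import numpy as np

def moving_average(x, filter_size):
    """Smooth with a simple moving window (stand-in for the filterSize smoothing)."""
    kernel = np.ones(filter_size) / filter_size
    return np.convolve(x, kernel, mode="valid")

def segment_slopes(y, sample_rate_hz, regression_size):
    """Linear-regression slope of each consecutive regression_size-point segment."""
    slopes = []
    for start in range(0, len(y) - regression_size + 1, regression_size):
        seg = y[start:start + regression_size]
        t = np.arange(regression_size) / sample_rate_hz
        slope, _ = np.polyfit(t, seg, 1)   # degree-1 fit: (slope, intercept)
        slopes.append(slope)
    return np.array(slopes)

# toy trace: flat baseline, then a steady downward drug response
trace = np.concatenate([np.zeros(100), -0.5 * np.arange(100)])
slopes = segment_slopes(moving_average(trace, 5), sample_rate_hz=1, regression_size=20)
peak_drug_slope = slopes.min()   # most negative segment slope
print(peak_drug_slope)
```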
## Set-Up the Environment
```
%load_ext autoreload
import sys
sys.path.append("../src")
from os.path import basename
import slopeTools
import plotTools
import statsTools
import matplotlib.pyplot as plt
```
## Define ABF Files and Filter Settings
The user can list the ABF files they want to analyze below.
```
#opto:
abfFilePaths = [
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21124006.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21124013.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21124020.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21124026.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21124033.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21126007.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21126016.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21126030.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21126050.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21126056.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21218033.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21219006.abf"
]
```
#opto+l368:
abfFilePaths = [
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21218077.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21219013.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21219039.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21219069.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21323006.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21323036.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21323047.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21325007.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21325019.abf"
]
#10nM TGOT
abfFilePaths = [
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20804007.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20804030.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20804043.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20804048.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20804060.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20804066.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20805008.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20805029.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20805035.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20811011.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20811021.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20817012.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20831011.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20831017.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/2021_05_14_DIC1_0008.abf"
]
#10nM TGOT+L368
abfFilePaths = [
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20805041.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20805047.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20805053.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20806018.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20806036.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20811034.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20811041.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20817020.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20817026.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20817032.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20817039.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20901022.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20901035.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20902011.abf",
]
#50nM TGOT
abfFilePaths = [
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/20723038.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/20723029.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/20724011.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/20724017.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/20724023.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/20724027.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/20724033.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/20724045.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/2021_05_13_DIC1_0005.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/2021_05_13_DIC1_0021.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/2021_05_13_DIC1_0025.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/2021_05_13_DIC3_0050.abf"
]
#50nM TGOT+L368
abfFilePaths = [
"X:/Data/C57/TGOT on PVT/2020-07-27 50nM TGOT w L368/20727010.abf",
"X:/Data/C57/TGOT on PVT/2020-07-27 50nM TGOT w L368/20727026.abf",
"X:/Data/C57/TGOT on PVT/2020-07-27 50nM TGOT w L368/20727032.abf",
"X:/Data/C57/TGOT on PVT/2020-07-27 50nM TGOT w L368/20727039.abf",
"X:/Data/C57/TGOT on PVT/2020-07-27 50nM TGOT w L368/20728005.abf",
"X:/Data/C57/TGOT on PVT/2020-07-27 50nM TGOT w L368/20728011.abf",
"X:/Data/C57/TGOT on PVT/2020-07-27 50nM TGOT w L368/20728026.abf",
"X:/Data/C57/TGOT on PVT/2020-07-27 50nM TGOT w L368/2021_05_13_DIC3_0043.abf"
]
#50nM TGOT
abfFilePaths = [
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20n19022.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20n19029.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20n19036.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20n19052.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d03006.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d03032.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d03055.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d04012.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d04023.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d04030.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d04038.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d04045.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d04052.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d16012.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d16020.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d16035.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d17022.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d17028.abf"
]
Users can set the data-analysis parameters below. Note that each of the file-list cells above reassigns `abfFilePaths`, so run only the cell for the condition you want to analyze.
`filterSize` sets the number of points (sweeps) in the moving-window average.
`regressionSize` sets the number of smoothed data points used to calculate linear regression slopes within the drug application range.
```
filterSize = 10
regressionSize = 17
```
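The smoothing and regression themselves are handled by the repository's `slopeTools` module; the core idea can be sketched with NumPy as follows (a simplified illustration with made-up data, not the repository's actual implementation):

```python
import numpy as np

def moving_average(values, filter_size):
    """Smooth a 1-D trace with a simple moving-window average."""
    kernel = np.ones(filter_size) / filter_size
    return np.convolve(values, kernel, mode="valid")

def regression_slope(times, values):
    """Slope (e.g. pA/min) of the least-squares line through the points."""
    slope, _intercept = np.polyfit(times, values, 1)
    return slope

# Made-up example: a holding current that drifts at exactly -2 pA/min
times = np.arange(0, 10, 0.5)      # minutes
currents = -2.0 * times + 100.0    # pA
smoothed = moving_average(currents, filter_size=4)
print(regression_slope(times[:len(smoothed)], smoothed))  # ~ -2.0
```

Smoothing first and then fitting a line over a short window is what makes the "peak drug slope" robust to sweep-to-sweep noise.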
## Analyze All ABFs
```
baselineSlopes = []
drugSlopes = []
abfIDs = []
for abfFilePath in abfFilePaths:
    baselineSlope, drugSlope = slopeTools.getBaselineAndMaxDrugSlope(abfFilePath, filterSize, regressionSize)
    baselineSlopes.append(baselineSlope)
    drugSlopes.append(drugSlope)
    abfIDs.append(basename(abfFilePath))
```
## Compare Baseline vs. Drug Slopes
Users can plot the baseline slope and the peak drug slope of each cell, and report in the title the p-value from a paired t-test between baseline slopes and peak drug slopes.
```
plotTools.plotPairs(baselineSlopes, drugSlopes, "slopes")
```
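`plotTools.plotPairs` belongs to this repository; the paired t-test it reports can be reproduced with SciPy (a sketch using made-up slope values):

```python
from scipy import stats

# Made-up slopes (pA/min) for five cells
baselineSlopes = [-0.2, 0.1, -0.3, 0.0, -0.1]
drugSlopes = [-2.1, -1.8, -2.5, -0.4, -1.9]

# Paired t-test: each cell serves as its own control
t_stat, p_value = stats.ttest_rel(baselineSlopes, drugSlopes)
print("paired t-test: t = {:.2f}, p = {:.4f}".format(t_stat, p_value))
```

A paired test is appropriate here because the baseline and drug measurements come from the same cell.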
## Assess Responsiveness of All Cells
Generate a scatter plot showing the slope difference of each cell.
This plot helps users choose a threshold (red dotted line) to separate responsive from non-responsive cells.
```
slopeThreshold = -1.5
drugEffects = []
for i in range(len(abfIDs)):
    drugEffects.append(drugSlopes[i] - baselineSlopes[i])
plt.figure(figsize=(6, 4))
plt.ylabel("Slope Difference (pA/min)")
plt.plot(abfIDs, drugEffects, 'o', color="b")
plt.gca().set_xticklabels(abfIDs, rotation=45, ha='right')
plt.axhline(slopeThreshold, color='r', ls='--')
plt.show()
```
## Define Cells as Responsive vs. Non-Responsive
Users can define the <b>slopeThreshold</b>: the difference between a cell's peak drug slope and its baseline slope must be more negative than this value for the cell to count as a responder.
```
drugEffects=statsTools.responderLessThanThreshold(abfIDs, drugEffects, slopeThreshold)
```
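`statsTools.responderLessThanThreshold` is part of this repository; conceptually it applies the threshold to each cell's slope change, roughly as in this hypothetical sketch:

```python
def classify_responders(abf_ids, drug_effects, slope_threshold):
    """Label each cell a responder if its slope change is more negative
    than the threshold (hypothetical sketch, not the repository's code)."""
    return {abf_id: effect < slope_threshold
            for abf_id, effect in zip(abf_ids, drug_effects)}

labels = classify_responders(["cellA", "cellB"], [-2.3, -0.4], -1.5)
print(labels)  # {'cellA': True, 'cellB': False}
```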
# TensorFlow Regression Example
## Creating Data
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# 1 Million Points
x_data = np.linspace(0.0,10.0,1000000)
noise = np.random.randn(len(x_data))
# y = mx + b + noise
b = 5
y_true = (0.5 * x_data) + b + noise
my_data = pd.concat([pd.DataFrame(data=x_data,columns=['X Data']),pd.DataFrame(data=y_true,columns=['Y'])],axis=1)
my_data.head()
my_data.sample(n=250).plot(kind='scatter',x='X Data',y='Y')
```
# TensorFlow
## Batch Size
We will take the data in batches (1,000,000 points is a lot to pass in at once). Note that this notebook uses the TensorFlow 1.x API (`tf.placeholder`, `tf.Session`); on TensorFlow 2.x you would need `tf.compat.v1` with `tf.compat.v1.disable_v2_behavior()`.
```
import tensorflow as tf
# Number of random points to grab per batch
batch_size = 8
```
**Variables**
```
w_tf = tf.Variable(np.random.uniform())
b_tf = tf.Variable(np.random.uniform(1,10))
```
**Placeholders**
```
x_train = tf.placeholder(tf.float32,shape=(batch_size))
y_train = tf.placeholder(tf.float32,shape=(batch_size))
```
**Graph**
```
y_hat = w_tf * x_train + b_tf
```
**Loss Function**
```
error = tf.reduce_sum((y_train - y_hat)**2)
```
**Optimizer**
```
optimizer = tf.train.GradientDescentOptimizer(0.001)
train = optimizer.minimize(error)
```
**Initialize Variables**
```
init = tf.global_variables_initializer()
```
### Session
```
with tf.Session() as sess:
    sess.run(init)
    batches = 1000
    for i in range(batches):
        batch_index = np.random.randint(len(x_data), size=(batch_size))
        feed = {x_train: x_data[batch_index], y_train: y_true[batch_index]}
        sess.run(train, feed_dict=feed)
    final_w, final_b = sess.run([w_tf, b_tf])
final_w
final_b
```
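As a sanity check on the gradient-descent estimates, the same fit has a closed-form ordinary-least-squares solution that needs no TensorFlow (a sketch with freshly generated data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 100_000)
y = 0.5 * x + 5 + rng.standard_normal(len(x))

# Solve y ~ w*x + b by ordinary least squares
A = np.column_stack([x, np.ones_like(x)])
(w_ols, b_ols), *_ = np.linalg.lstsq(A, y, rcond=None)
print(w_ols, b_ols)  # close to the true 0.5 and 5
```

The batched stochastic-gradient estimate should land near these values; large discrepancies usually indicate a learning rate or batch-sampling problem.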
### Results
```
my_data.sample(n=250).plot(kind='scatter',x='X Data',y='Y')
plt.plot(x_data, final_w*x_data+final_b,'r')
```
## tf.keras API
```
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
```
## tf.estimator API
Much simpler API for basic tasks like regression! We'll talk about more abstractions like TF-Slim later on.
```
feat_cols = [tf.feature_column.numeric_column('x',shape=[1])]
estimator = tf.estimator.LinearRegressor(feature_columns=feat_cols)
```
### Train Test Split
We haven't actually performed a train/test split yet! So let's do that on our data now and perform a more realistic version of a regression task.
```
from sklearn.model_selection import train_test_split
x_train, x_eval, y_train, y_eval = train_test_split(x_data,y_true,test_size=0.3, random_state = 101)
print(x_train.shape)
print(y_train.shape)
print(x_eval.shape)
print(y_eval.shape)
```
### Set up Estimator Inputs
```
# Can also do .pandas_input_fn
input_func = tf.estimator.inputs.numpy_input_fn({'x':x_train},y_train,batch_size=4,num_epochs=None,shuffle=True)
train_input_func = tf.estimator.inputs.numpy_input_fn({'x':x_train},y_train,batch_size=4,num_epochs=1000,shuffle=False)
eval_input_func = tf.estimator.inputs.numpy_input_fn({'x':x_eval},y_eval,batch_size=4,num_epochs=1000,shuffle=False)
```
### Train the Estimator
```
estimator.train(input_fn=input_func,steps=1000)
```
### Evaluation
```
train_metrics = estimator.evaluate(input_fn=train_input_func,steps=1000)
eval_metrics = estimator.evaluate(input_fn=eval_input_func,steps=1000)
print("train metrics: {}".format(train_metrics))
print("eval metrics: {}".format(eval_metrics))
```
### Predictions
```
input_fn_predict = tf.estimator.inputs.numpy_input_fn({'x':np.linspace(0,10,10)},shuffle=False)
list(estimator.predict(input_fn=input_fn_predict))
predictions = []
for x in estimator.predict(input_fn=input_fn_predict):
    predictions.append(x['predictions'])
predictions
my_data.sample(n=250).plot(kind='scatter',x='X Data',y='Y')
plt.plot(np.linspace(0,10,10),predictions,'r')
```
# Great Job!
# Project: Part of Speech Tagging with Hidden Markov Models
---
### Introduction
Part of speech tagging is the process of determining the syntactic category of a word from the words in its surrounding context. It is often used to help disambiguate natural language phrases because it can be done quickly with high accuracy. Tagging can be used for many NLP tasks like determining correct pronunciation during speech synthesis (for example, _dis_-count as a noun vs dis-_count_ as a verb), for information retrieval, and for word sense disambiguation.
In this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/) library to build a hidden Markov model for part of speech tagging using a "universal" tagset. Hidden Markov models have been able to achieve [>96% tag accuracy with larger tagsets on realistic text corpora](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf). Hidden Markov models have also been used for speech recognition and speech generation, machine translation, gene recognition for bioinformatics, and human gesture recognition for computer vision, and more.

The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated to complete the project; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you must provide code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
<div class="alert alert-block alert-info">
**Note:** Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You must then **export the notebook** by running the last cell in the notebook, or by using the menu above and navigating to **File -> Download as -> HTML (.html)** Your submissions should include both the `html` and `ipynb` files.
</div>
<div class="alert alert-block alert-info">
**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
</div>
### The Road Ahead
You must complete Steps 1-3 below to pass the project. The section on Step 4 includes references & resources you can use to further explore HMM taggers.
- [Step 1](#Step-1:-Read-and-preprocess-the-dataset): Review the provided interface to load and access the text corpus
- [Step 2](#Step-2:-Build-a-Most-Frequent-Class-tagger): Build a Most Frequent Class tagger to use as a baseline
- [Step 3](#Step-3:-Build-an-HMM-tagger): Build an HMM Part of Speech tagger and compare to the MFC baseline
- [Step 4](#Step-4:-[Optional]-Improving-model-performance): (Optional) Improve the HMM tagger
```
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
import helpers, tests
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from IPython.core.display import HTML
from itertools import chain
from collections import Counter, defaultdict
from helpers import show_model, Dataset
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
```
## Step 1: Read and preprocess the dataset
---
We'll start by reading in a text corpus and splitting it into a training and testing dataset. The data set is a copy of the [Brown corpus](https://en.wikipedia.org/wiki/Brown_Corpus) (originally from the [NLTK](https://www.nltk.org/) library) that has already been pre-processed to only include the [universal tagset](https://arxiv.org/pdf/1104.2086.pdf). You should expect to get slightly higher accuracy using this simplified tagset than the same model would achieve on a larger tagset like the full [Penn treebank tagset](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html), but the process you'll follow would be the same.
The `Dataset` class provided in helpers.py will read and parse the corpus. You can generate your own datasets compatible with the reader by writing them to the following format. The dataset is stored in plaintext as a collection of words and corresponding tags. Each sentence starts with a unique identifier on the first line, followed by one tab-separated word/tag pair on each following line. Sentences are separated by a single blank line.
Example from the Brown corpus.
```
b100-38532
Perhaps ADV
it PRON
was VERB
right ADJ
; .
; .
b100-35577
...
```
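The provided `Dataset` class handles the parsing; for reference, here is a sketch of how the format above could be parsed by hand:

```python
def parse_corpus(text):
    """Parse the sentence-id / word<TAB>tag format into {id: (words, tags)}."""
    sentences = {}
    for block in text.strip().split("\n\n"):   # sentences are blank-line separated
        lines = block.strip().split("\n")
        key = lines[0]                          # first line is the sentence id
        pairs = [line.split("\t") for line in lines[1:]]
        words = tuple(word for word, _tag in pairs)
        tags = tuple(tag for _word, tag in pairs)
        sentences[key] = (words, tags)
    return sentences

sample = "b100-38532\nPerhaps\tADV\nit\tPRON\nwas\tVERB\nright\tADJ"
print(parse_corpus(sample))
```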
```
data = Dataset("tags-universal.txt", "brown-universal.txt", train_test_split=0.8)
print("There are {} sentences in the corpus.".format(len(data)))
print("There are {} sentences in the training set.".format(len(data.training_set)))
print("There are {} sentences in the testing set.".format(len(data.testing_set)))
assert len(data) == len(data.training_set) + len(data.testing_set), \
"The number of sentences in the training set + testing set should sum to the number of sentences in the corpus"
```
### The Dataset Interface
You can access (mostly) immutable references to the dataset through a simple interface provided through the `Dataset` class, which represents an iterable collection of sentences along with easy access to partitions of the data for training & testing. Review the reference below, then run and review the next few cells to make sure you understand the interface before moving on to the next step.
```
Dataset-only Attributes:
  training_set - reference to a Subset object containing the samples for training
  testing_set - reference to a Subset object containing the samples for testing

Dataset & Subset Attributes:
  sentences - a dictionary with an entry {sentence_key: Sentence()} for each sentence in the corpus
  keys - an immutable ordered (not sorted) collection of the sentence_keys for the corpus
  vocab - an immutable collection of the unique words in the corpus
  tagset - an immutable collection of the unique tags in the corpus
  X - returns an array of words grouped by sentences ((w11, w12, w13, ...), (w21, w22, w23, ...), ...)
  Y - returns an array of tags grouped by sentences ((t11, t12, t13, ...), (t21, t22, t23, ...), ...)
  N - returns the number of distinct samples (individual words or tags) in the dataset

Methods:
  stream() - returns a flat iterable over all (word, tag) pairs across all sentences in the corpus
  __iter__() - returns an iterable over the data as (sentence_key, Sentence()) pairs
  __len__() - returns the number of sentences in the dataset
```
For example, consider a Subset, `subset`, of the sentences `{"s0": Sentence(("See", "Spot", "run"), ("VERB", "NOUN", "VERB")), "s1": Sentence(("Spot", "ran"), ("NOUN", "VERB"))}`. The subset will have these attributes:
```
subset.keys == {"s1", "s0"} # unordered
subset.vocab == {"See", "run", "ran", "Spot"} # unordered
subset.tagset == {"VERB", "NOUN"} # unordered
subset.X == (("Spot", "ran"), ("See", "Spot", "run")) # order matches .keys
subset.Y == (("NOUN", "VERB"), ("VERB", "NOUN", "VERB")) # order matches .keys
subset.N == 7 # there are a total of seven observations over all sentences
len(subset) == 2 # because there are two sentences
```
<div class="alert alert-block alert-info">
**Note:** The `Dataset` class is _convenient_, but it is **not** efficient. It is not suitable for huge datasets because it stores multiple redundant copies of the same data.
</div>
#### Sentences
`Dataset.sentences` is a dictionary of all sentences in the training corpus, each keyed to a unique sentence identifier. Each `Sentence` is itself an object with two attributes: a tuple of the words in the sentence named `words` and a tuple of the tag corresponding to each word named `tags`.
```
key = 'b100-38532'
print("Sentence: {}".format(key))
print("words:\n\t{!s}".format(data.sentences[key].words))
print("tags:\n\t{!s}".format(data.sentences[key].tags))
```
<div class="alert alert-block alert-info">
**Note:** The underlying iterable sequence is **unordered** over the sentences in the corpus; it is not guaranteed to return the sentences in a consistent order between calls. Use `Dataset.stream()`, `Dataset.keys`, `Dataset.X`, or `Dataset.Y` attributes if you need ordered access to the data.
</div>
#### Counting Unique Elements
You can access the list of unique words (the dataset vocabulary) via `Dataset.vocab` and the unique list of tags via `Dataset.tagset`.
```
print("There are a total of {} samples of {} unique words in the corpus."
.format(data.N, len(data.vocab)))
print("There are {} samples of {} unique words in the training set."
.format(data.training_set.N, len(data.training_set.vocab)))
print("There are {} samples of {} unique words in the testing set."
.format(data.testing_set.N, len(data.testing_set.vocab)))
print("There are {} words in the test set that are missing in the training set."
.format(len(data.testing_set.vocab - data.training_set.vocab)))
assert data.N == data.training_set.N + data.testing_set.N, \
"The number of training + test samples should sum to the total number of samples"
```
#### Accessing word and tag Sequences
The `Dataset.X` and `Dataset.Y` attributes provide access to ordered collections of matching word and tag sequences for each sentence in the dataset.
```
# accessing words with Dataset.X and tags with Dataset.Y
for i in range(100):
    print("Sentence {}:".format(i + 1), data.X[i])
    print()
    print("Labels {}:".format(i + 1), data.Y[i])
    print()
```
#### Accessing (word, tag) Samples
The `Dataset.stream()` method returns an iterator that chains together every pair of (word, tag) entries across all sentences in the entire corpus.
```
# use Dataset.stream() (word, tag) samples for the entire corpus
print("\nStream (word, tag) pairs:\n")
for i, pair in enumerate(data.stream()):
    print("\t", pair)
    if i > 5: break
```
For both our baseline tagger and the HMM model we'll build, we need to estimate the frequency of tags & words from the observations in the training corpus. In the next several cells you will complete functions to compute several sets of frequency counts.
## Step 2: Build a Most Frequent Class tagger
---
Perhaps the simplest tagger (and a good baseline for tagger performance) is to simply choose the tag most frequently assigned to each word. This "most frequent class" tagger inspects each observed word in the sequence and assigns it the label that was most often assigned to that word in the corpus.
### IMPLEMENTATION: Pair Counts
Complete the function below that computes the joint frequency counts for two input sequences.
```
def pair_counts(sequences_A, sequences_B):
    """Return a dictionary keyed to each unique value in the first sequence list
    that counts the number of occurrences of the corresponding value from the
    second sequences list.

    For example, if sequences_A is tags and sequences_B is the corresponding
    words, then if 1244 sequences contain the word "time" tagged as a NOUN, then
    you should return a dictionary such that pair_counts[NOUN][time] == 1244
    """
    # NOTE: as implemented, the outer dictionary is keyed on values from
    # sequences_B and the inner counts on values from sequences_A
    pair_count = {}
    for i in range(len(sequences_A)):
        for word, tag in zip(sequences_A[i], sequences_B[i]):
            if tag not in pair_count:
                pair_count[tag] = {}
            pair_count[tag][word] = pair_count[tag].get(word, 0) + 1
    return pair_count
# Calculate C(t_i, w_i)
emission_counts = pair_counts(data.X, data.Y)
assert len(emission_counts) == 12, \
"Uh oh. There should be 12 tags in your dictionary."
assert max(emission_counts["NOUN"], key=emission_counts["NOUN"].get) == 'time', \
"Hmmm...'time' is expected to be the most common NOUN."
HTML('<div class="alert alert-block alert-success">Your emission counts look good!</div>')
```
### IMPLEMENTATION: Most Frequent Class Tagger
Use the `pair_counts()` function and the training dataset to find the most frequent class label for each word in the training data, and populate the `mfc_table` below. The table keys should be words, and the values should be the appropriate tag string.
The `MFCTagger` class is provided to mock the interface of Pomegranate HMM models so that they can be used interchangeably.
```
# Create a lookup table mfc_table where mfc_table[word] contains the tag label most frequently assigned to that word
from collections import namedtuple

FakeState = namedtuple("FakeState", "name")

class MFCTagger:
    # NOTE: You should not need to modify this class or any of its methods
    missing = FakeState(name="<MISSING>")

    def __init__(self, table):
        self.table = defaultdict(lambda: MFCTagger.missing)
        self.table.update({word: FakeState(name=tag) for word, tag in table.items()})

    def viterbi(self, seq):
        """This method simplifies predictions by matching the Pomegranate viterbi() interface"""
        return 0., list(enumerate(["<start>"] + [self.table[w] for w in seq] + ["<end>"]))
# TODO: calculate the frequency of each tag being assigned to each word (hint: similar, but not
# the same as the emission probabilities) and use it to fill the mfc_table
word_counts = pair_counts(data.training_set.Y, data.training_set.X)
mfc_table = dict((word, max(tags.keys(), key=lambda key: tags[key])) for word, tags in word_counts.items())
# DO NOT MODIFY BELOW THIS LINE
mfc_model = MFCTagger(mfc_table) # Create a Most Frequent Class tagger instance
assert len(mfc_table) == len(data.training_set.vocab), ""
assert all(k in data.training_set.vocab for k in mfc_table.keys()), ""
assert sum(int(k not in mfc_table) for k in data.testing_set.vocab) == 5521, ""
HTML('<div class="alert alert-block alert-success">Your MFC tagger has all the correct words!</div>')
```
### Making Predictions with a Model
The helper functions provided below interface with Pomegranate network models & the mocked MFCTagger to take advantage of the [missing value](http://pomegranate.readthedocs.io/en/latest/nan.html) functionality in Pomegranate through a simple sequence decoding function. Run these functions, then run the next cell to see some of the predictions made by the MFC tagger.
```
def replace_unknown(sequence):
    """Return a copy of the input sequence where each unknown word is replaced
    by the literal string value 'nan'. Pomegranate will ignore these values
    during computation.
    """
    return [w if w in data.training_set.vocab else 'nan' for w in sequence]

def simplify_decoding(X, model):
    """X should be a 1-D sequence of observations for the model to predict"""
    _, state_path = model.viterbi(replace_unknown(X))
    return [state[1].name for state in state_path[1:-1]]  # do not show the start/end state predictions
```
### Example Decoding Sequences with MFC Tagger
```
for key in data.testing_set.keys[:3]:
    print("Sentence Key: {}\n".format(key))
    print("Predicted labels:\n-----------------")
    print(simplify_decoding(data.sentences[key].words, mfc_model))
    print()
    print("Actual labels:\n--------------")
    print(data.sentences[key].tags)
    print("\n")
```
### Evaluating Model Accuracy
The function below will evaluate the accuracy of the MFC tagger on the collection of all sentences from a text corpus.
```
def accuracy(X, Y, model):
    """Calculate the prediction accuracy by using the model to decode each sequence
    in the input X and comparing the prediction with the true labels in Y.

    The X should be an array whose first dimension is the number of sentences to test,
    and each element of the array should be an iterable of the words in the sequence.
    The arrays X and Y should have the exact same shape.

    X = [("See", "Spot", "run"), ("Run", "Spot", "run", "fast"), ...]
    Y = [(), (), ...]
    """
    correct = total_predictions = 0
    for observations, actual_tags in zip(X, Y):
        # The model.viterbi call in simplify_decoding will return None if the HMM
        # raises an error (for example, if a test sentence contains a word that
        # is out of vocabulary for the training set). Any exception counts the
        # full sentence as an error (which makes this a conservative estimate).
        try:
            most_likely_tags = simplify_decoding(observations, model)
            correct += sum(p == t for p, t in zip(most_likely_tags, actual_tags))
        except:
            pass
        total_predictions += len(observations)
    return correct / total_predictions
```
#### Evaluate the accuracy of the MFC tagger
Run the next cell to evaluate the accuracy of the tagger on the training and test corpus.
```
mfc_training_acc = accuracy(data.training_set.X, data.training_set.Y, mfc_model)
print("training accuracy mfc_model: {:.2f}%".format(100 * mfc_training_acc))
mfc_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, mfc_model)
print("testing accuracy mfc_model: {:.2f}%".format(100 * mfc_testing_acc))
assert mfc_training_acc >= 0.955, "Uh oh. Your MFC accuracy on the training set doesn't look right."
assert mfc_testing_acc >= 0.925, "Uh oh. Your MFC accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your MFC tagger accuracy looks correct!</div>')
```
## Step 3: Build an HMM tagger
---
The HMM tagger has one hidden state for each possible tag, and is parameterized by two distributions: the emission probabilities, giving the conditional probability of observing a given **word** from each hidden state, and the transition probabilities, giving the conditional probability of moving between **tags** during the sequence.
We will also estimate the starting probability distribution (the probability of each **tag** being the first tag in a sequence), and the terminal probability distribution (the probability of each **tag** being the last tag in a sequence).
The maximum likelihood estimate of these distributions can be calculated from the frequency counts as described in the following sections where you'll implement functions to count the frequencies, and finally build the model. The HMM model will make predictions according to the formula:
$$\hat{t}_1^n = \underset{t_1^n}{\mathrm{argmax}} \prod_{i=1}^n P(w_i|t_i) P(t_i|t_{i-1})$$
Refer to Speech & Language Processing [Chapter 10](https://web.stanford.edu/~jurafsky/slp3/10.pdf) for more information.
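The argmax above is computed efficiently by the Viterbi algorithm, which Pomegranate implements for you. As a reference, here is a minimal pure-Python sketch run on a toy two-tag model (all probabilities invented for illustration):

```python
def viterbi_decode(obs, states, start_p, trans_p, emit_p):
    """Return the most probable hidden-state path for an observation sequence."""
    # Initialization: probability of starting in each state and emitting obs[0]
    V = [{s: start_p[s] * emit_p[s].get(obs[0], 0.0) for s in states}]
    back = [{}]
    # Recursion: best predecessor for each state at each time step
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                ((V[t - 1][p] * trans_p[p][s] * emit_p[s].get(obs[t], 0.0), p)
                 for p in states),
                key=lambda pair: pair[0])
            V[t][s] = prob
            back[t][s] = prev
    # Termination: trace back from the most probable final state
    best = max(V[-1], key=V[-1].get)
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Toy model: probabilities invented for illustration
states = ("NOUN", "VERB")
start_p = {"NOUN": 0.5, "VERB": 0.5}
trans_p = {"NOUN": {"NOUN": 0.2, "VERB": 0.8},
           "VERB": {"NOUN": 0.8, "VERB": 0.2}}
emit_p = {"NOUN": {"spot": 1.0},
          "VERB": {"see": 0.5, "run": 0.5}}
print(viterbi_decode(["see", "spot", "run"], states, start_p, trans_p, emit_p))
```

Real implementations (including Pomegranate's) work in log space to avoid underflow on long sequences; the toy version above multiplies raw probabilities for clarity.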
### IMPLEMENTATION: Unigram Counts
Complete the function below to estimate the occurrence frequency of each symbol over all of the input sequences. The unigram probabilities in our HMM model are estimated from the formula below, where N is the total number of samples in the input. (You only need to compute the counts for now.)
$$P(tag_1) = \frac{C(tag_1)}{N}$$
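For instance, with a toy tag list (invented for illustration), the maximum-likelihood tag probability is just the count divided by N:

```python
from collections import Counter

toy_tags = ["NOUN", "VERB", "NOUN", "DET", "NOUN"]
toy_counts = Counter(toy_tags)
N = sum(toy_counts.values())
print(toy_counts["NOUN"] / N)  # 3/5 = 0.6
```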
```
def unigram_counts(sequences):
    """Return a dictionary keyed to each unique value in the input sequence list that
    counts the number of occurrences of the value in the sequences list. The sequences
    collection should be a 2-dimensional array.

    For example, if the tag NOUN appears 275558 times over all the input sequences,
    then you should return a dictionary such that your_unigram_counts[NOUN] == 275558.
    """
    return Counter(sequences)

# TODO: call unigram_counts with a list of tag sequences from the training set
tags = [tag for word, tag in data.training_set.stream()]
tag_unigrams = unigram_counts(tags)
assert set(tag_unigrams.keys()) == data.training_set.tagset, \
"Uh oh. It looks like your tag counts doesn't include all the tags!"
assert min(tag_unigrams, key=tag_unigrams.get) == 'X', \
"Hmmm...'X' is expected to be the least common class"
assert max(tag_unigrams, key=tag_unigrams.get) == 'NOUN', \
"Hmmm...'NOUN' is expected to be the most common class"
HTML('<div class="alert alert-block alert-success">Your tag unigrams look good!</div>')
```
### IMPLEMENTATION: Bigram Counts
Complete the function below to estimate the co-occurrence frequency of each pair of symbols in each of the input sequences. These counts are used in the HMM model to estimate the bigram probability of two tags from the frequency counts according to the formula: $$P(tag_2|tag_1) = \frac{C(tag_1, tag_2)}{C(tag_1)}$$
```
def bigram_counts(sequences):
    """Return a dictionary keyed to each unique PAIR of values in the input sequences
    list that counts the number of occurrences of pair in the sequences list. The input
    should be a 2-dimensional array.

    For example, if the pair of tags (NOUN, VERB) appear 61582 times, then you should
    return a dictionary such that your_bigram_counts[(NOUN, VERB)] == 61582
    """
    return Counter(sequences)

# TODO: call bigram_counts with a list of tag sequences from the training set
# Build adjacent tag pairs within each training sentence so that bigrams do
# not cross sentence boundaries
tag_pairs = [(seq[i], seq[i + 1]) for seq in data.training_set.Y for i in range(len(seq) - 1)]
tag_bigrams = bigram_counts(tag_pairs)
assert len(tag_bigrams) == 144, \
"Uh oh. There should be 144 pairs of bigrams (12 tags x 12 tags)"
assert min(tag_bigrams, key=tag_bigrams.get) in [('X', 'NUM'), ('PRON', 'X')], \
"Hmmm...The least common bigram should be one of ('X', 'NUM') or ('PRON', 'X')."
assert max(tag_bigrams, key=tag_bigrams.get) in [('DET', 'NOUN')], \
"Hmmm...('DET', 'NOUN') is expected to be the most common bigram."
HTML('<div class="alert alert-block alert-success">Your tag bigrams look good!</div>')
```
### IMPLEMENTATION: Sequence Starting Counts
Complete the code below to estimate the bigram probabilities of a sequence starting with each tag.
```
def starting_counts(sequences):
    """Return a dictionary keyed to each unique value in the input sequences list
    that counts the number of occurrences where that value is at the beginning of
    a sequence.

    For example, if 8093 sequences start with NOUN, then you should return a
    dictionary such that your_starting_counts[NOUN] == 8093
    """
    return Counter(sequences)

# TODO: Calculate the count of each tag starting a sequence
starting_tags = [seq[0] for seq in data.training_set.Y]
tag_starts = starting_counts(starting_tags)
assert len(tag_starts) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_starts, key=tag_starts.get) == 'X', "Hmmm...'X' is expected to be the least common starting bigram."
assert max(tag_starts, key=tag_starts.get) == 'DET', "Hmmm...'DET' is expected to be the most common starting bigram."
HTML('<div class="alert alert-block alert-success">Your starting tag counts look good!</div>')
```
### IMPLEMENTATION: Sequence Ending Counts
Complete the function below to estimate the bigram probabilities of a sequence ending with each tag.
```
def ending_counts(sequences):
    """Return a dictionary keyed to each unique value in the input sequences list
    that counts the number of occurrences where that value is at the end of
    a sequence.

    For example, if 18 sequences end with DET, then you should return a
    dictionary such that your_ending_counts[DET] == 18
    """
    return Counter(sequences)

# TODO: Calculate the count of each tag ending a sequence
ending_tags = [seq[-1] for seq in data.training_set.Y]
tag_ends = ending_counts(ending_tags)
assert len(tag_ends) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_ends, key=tag_ends.get) in ['X', 'CONJ'], "Hmmm...'X' or 'CONJ' should be the least common ending bigram."
assert max(tag_ends, key=tag_ends.get) == '.', "Hmmm...'.' is expected to be the most common ending bigram."
HTML('<div class="alert alert-block alert-success">Your ending tag counts look good!</div>')
```
### IMPLEMENTATION: Basic HMM Tagger
Use the tag unigrams and bigrams calculated above to construct a hidden Markov tagger.
- Add one state per tag
- The emission distribution at each state should be estimated with the formula: $P(w|t) = \frac{C(t, w)}{C(t)}$
- Add an edge from the starting state `basic_model.start` to each tag
- The transition probability should be estimated with the formula: $P(t|start) = \frac{C(start, t)}{C(start)}$
- Add an edge from each tag to the end state `basic_model.end`
- The transition probability should be estimated with the formula: $P(end|t) = \frac{C(t, end)}{C(t)}$
- Add an edge between _every_ pair of tags
- The transition probability should be estimated with the formula: $P(t_2|t_1) = \frac{C(t_1, t_2)}{C(t_1)}$
```
basic_model = HiddenMarkovModel(name="base-hmm-tagger")
# TODO: create states with emission probability distributions P(word | tag) and add to the model
# (Hint: you may need to loop & create/add new states)
tag_states = {}
for tag in data.training_set.tagset:
emission_prob = {word:emission_counts[tag][word]/tag_unigrams[tag] for word in emission_counts[tag]}
tag_emission = DiscreteDistribution(emission_prob)
tag_states[tag] = State(tag_emission, name=tag)
basic_model.add_states(tag_states[tag])
basic_model.add_transition(basic_model.start, tag_states[tag], tag_starts[tag]/len(data.training_set))
basic_model.add_transition(tag_states[tag], basic_model.end, tag_ends[tag]/len(data.training_set))
for tag1 in data.training_set.tagset:
for tag2 in data.training_set.tagset:
basic_model.add_transition(tag_states[tag1], tag_states[tag2], tag_bigrams[tag1,tag2]/tag_unigrams[tag1])
# TODO: add edges between states for the observed transition frequencies P(tag_i | tag_i-1)
# (Hint: you may need to loop & add transitions)
show_model(basic_model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=True)
# NOTE: YOU SHOULD NOT NEED TO MODIFY ANYTHING BELOW THIS LINE
# finalize the model
basic_model.bake()
assert all(tag in set(s.name for s in basic_model.states) for tag in data.training_set.tagset), \
"Every state in your network should use the name of the associated tag, which must be one of the training set tags."
assert basic_model.edge_count() == 168, \
("Your network should have an edge from the start node to each state, one edge between every " +
"pair of tags (states), and an edge from each state to the end node.")
HTML('<div class="alert alert-block alert-success">Your HMM network topology looks good!</div>')
hmm_training_acc = accuracy(data.training_set.X, data.training_set.Y, basic_model)
print("training accuracy basic hmm model: {:.2f}%".format(100 * hmm_training_acc))
hmm_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, basic_model)
print("testing accuracy basic hmm model: {:.2f}%".format(100 * hmm_testing_acc))
assert hmm_training_acc > 0.97, "Uh oh. Your HMM accuracy on the training set doesn't look right."
assert hmm_testing_acc > 0.955, "Uh oh. Your HMM accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your HMM tagger accuracy looks correct! Congratulations, you\'ve finished the project.</div>')
```
### Example Decoding Sequences with the HMM Tagger
```
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, basic_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")
```
## Finishing the project
---
<div class="alert alert-block alert-info">
**Note:** **SAVE YOUR NOTEBOOK**, then run the next cell to generate an HTML copy. You will zip & submit both this file and the HTML copy for review.
</div>
```
!!jupyter nbconvert *.ipynb
```
## Step 4: [Optional] Improving model performance
---
There are additional enhancements that can be incorporated into your tagger to improve performance on larger tagsets, where the data sparsity problem is more significant. The data sparsity problem arises because the same amount of data split over more tags means there will be fewer samples per tag, and more tags will have zero occurrences in the data. The techniques in this section are optional.
- [Laplace Smoothing](https://en.wikipedia.org/wiki/Additive_smoothing) (pseudocounts)
Laplace smoothing is a technique where you add a small, non-zero value to all observed counts to offset for unobserved values.
- Backoff Smoothing
Another smoothing technique is to interpolate between n-grams for missing data. This method is more effective than Laplace smoothing at combatting the data sparsity problem. Refer to chapters 4, 9, and 10 of the [Speech & Language Processing](https://web.stanford.edu/~jurafsky/slp3/) book for more information.
- Extending to Trigrams
HMM taggers have achieved better than 96% accuracy on this dataset with the full Penn treebank tagset using an architecture described in [this](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf) paper. Altering your HMM to achieve the same performance would require implementing deleted interpolation (described in the paper), incorporating trigram probabilities in your frequency tables, and re-implementing the Viterbi algorithm to consider three consecutive states instead of two.
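As a concrete illustration of the Laplace (pseudocount) idea above, the sketch below smooths the emission counts for a single tag. The `laplace_probs` helper, the smoothing constant `k`, and the toy counts are assumptions for illustration, not part of the project code:

```python
from collections import Counter

def laplace_probs(counts, vocab, k=1.0):
    """Turn raw counts into Laplace-smoothed probabilities.

    counts: Counter of observed events (e.g. words emitted by one tag)
    vocab:  the full set of possible events, including unseen ones
    k:      pseudocount added to every event so nothing has probability 0
    """
    total = sum(counts.values()) + k * len(vocab)
    return {event: (counts[event] + k) / total for event in vocab}

# toy emission counts for a single tag
counts = Counter({"the": 8, "a": 2})
vocab = {"the", "a", "an"}          # "an" was never observed with this tag
probs = laplace_probs(counts, vocab)
```

The unseen word "an" now receives a small non-zero probability ((0 + 1) / 13) instead of zeroing out any sequence that contains it, while the distribution still sums to 1.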
### Obtain the Brown Corpus with a Larger Tagset
Run the code below to download a copy of the Brown corpus with the full NLTK tagset. You will need to research the available tagset information in the NLTK docs and determine the best way to extract the subset of NLTK tags you want to explore. If you write the data in the format specified in Step 1, then you can reload it using all of the code above for comparison.
Refer to [Chapter 5](http://www.nltk.org/book/ch05.html) of the NLTK book for more information on the available tagsets.
```
import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import brown
nltk.download('brown')
training_corpus = nltk.corpus.brown
training_corpus.tagged_sents()[0]
```
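One simple way to shrink the full Brown tagset before rebuilding your frequency tables is to strip the hyphenated suffixes (e.g. `-TL` for titles, `-HL` for headlines) and the trailing `*` negation marker from each tag. The `simplify_brown_tag` helper below is an illustrative sketch, not part of NLTK:

```python
def simplify_brown_tag(tag):
    """Reduce a full Brown tag to its base form.

    Brown tags carry suffixes such as '-TL' (title) and '-HL' (headline),
    and a trailing '*' marks negation; stripping these is one crude way
    to shrink the tagset before building frequency tables.
    """
    return tag.split("-")[0].rstrip("*")

# applied to a tagged sentence in the (word, tag) format used in Step 1
sentence = [("The", "AT"), ("Fulton", "NP-TL"), ("isn't", "BEZ*")]
simplified = [(w, simplify_brown_tag(t)) for (w, t) in sentence]
```

Alternatively, NLTK ships a built-in mapping to the 12-tag universal tagset: `brown.tagged_sents(tagset='universal')` returns the same sentences with the simplified tags, which is the easiest route if the universal tags are all you need.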
```
import json
import itertools
import copy
import random
def filter_lexicon(lexicon):
keys_to_hold = "yellow,red,green,cyan,purple,blue,gray,brown".split(",")
deleted_keys = set()
for k in lexicon.keys():
if k not in keys_to_hold:
deleted_keys.add(k)
for k in deleted_keys:
del lexicon[k]
return lexicon
def load_lexicon(lexicon_path, train_path):
lexicon = json.load(open(lexicon_path))
inputs = []
with open(train_path, 'r') as f:
for line in f:
inputs.append(line.split('\t')[0])
return lexicon, inputs
def filter_uncommon_tokens(lexicon, threshold):
# Filter uncommon tokens
deleted_keys = set()
for (k1, v1) in lexicon.items():
deleted_codes = set()
for c, count in v1.items():
if count < threshold:
deleted_codes.add(c)
for k in deleted_codes:
del v1[k]
if len(v1) == 0:
deleted_keys.add(k1)
for k in deleted_keys:
del lexicon[k]
return lexicon
def filter_intersected_tokens(lexicon):
deleted_keys = set()
for (k1, v1) in lexicon.items():
for ci, count in v1.items():
for (k2, v2) in lexicon.items():
if k2 == k1:
continue
if ci in v2:
deleted_keys.add(k1)
deleted_keys.add(k2)
for k in deleted_keys:
del lexicon[k]
return lexicon
def get_swapables(lexicon, inputs):
inputs = copy.deepcopy(inputs)
random.shuffle(inputs)
swapables = {k: [] for k in lexicon.keys()}
for k1 in lexicon.keys():
for k2 in lexicon.keys():
if k1 != k2:
if k1 in swapables[k2]:
swapables[k1].append(k2)
else:
x1s = itertools.islice(filter(lambda x: k1 in x, inputs), 5000)
x2s = itertools.islice(filter(lambda x: k2 in x, inputs), 5000)
for (x1, x2) in itertools.product(x1s, x2s):
if x1.replace(k1, k2) == x2:
swapables[k1].append(k2)
print(f"Linked {k1} - {k2}")
break
deleted_keys = set()
for k, v in swapables.items():
if len(v) == 0:
deleted_keys.add(k)
for k in deleted_keys:
del lexicon[k]
del swapables[k]
return (lexicon, swapables)
def propagate_swaps(swapables):
for k1, swaps in swapables.items():
for k2 in swaps:
swaps2 = swapables[k2]
if k1 in swaps2 and k2 not in swaps:
swaps.append(k2)
elif k2 in swaps and k1 not in swaps2:
swaps2.append(k1)
return swapables
def filter_lexicon_v2(lexicon, inputs):
lexicon = copy.deepcopy(lexicon)
lexicon = filter_uncommon_tokens(lexicon, len(inputs)/100)
lexicon = filter_intersected_tokens(lexicon)
lexicon, swapables = get_swapables(lexicon, inputs)
return lexicon, propagate_swaps(swapables)
from IPython.core.debugger import Pdb
#this one triggers the debugger
for clevr_type in ("clevr",):
for seed in range(3, 4):
exp_root = f"clip_exp_img_seed_{seed}_{clevr_type}/clevr/VQVAE/beta_1.0_ncodes_32_ldim_64_dim_128_lr_0.0003/"
lexicon, inputs = load_lexicon(exp_root + "diag.align.o.json", exp_root + "train_encodings.txt")
filtered_lexicon, swapables = filter_lexicon_v2(lexicon, inputs)
print(swapables)
```
# Image classification training with image format
1. [Introduction](#Introduction)
2. [Prerequisites and Preprocessing](#Prerequisites-and-Preprocessing)
1. [Permissions and environment variables](#Permissions-and-environment-variables)
2. [Prepare the data](#Prepare-the-data)
3. [Fine-tuning The Image Classification Model](#Fine-tuning-the-Image-classification-model)
1. [Training parameters](#Training-parameters)
2. [Training](#Training)
4. [Deploy The Model](#Deploy-the-model)
1. [Create model](#Create-model)
2. [Batch transform](#Batch-transform)
3. [Realtime inference](#Realtime-inference)
1. [Create endpoint configuration](#Create-endpoint-configuration)
2. [Create endpoint](#Create-endpoint)
3. [Perform inference](#Perform-inference)
4. [Clean up](#Clean-up)
## Introduction
Welcome to our end-to-end example of the image classification algorithm training with image format. In this demo, we will use the Amazon SageMaker image classification algorithm in transfer learning mode to fine-tune a pre-trained model (trained on ImageNet data) to learn to classify a new dataset. In particular, the pre-trained model will be fine-tuned using the [Caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/).
To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on.
## Prerequisites and Preprocessing
### Permissions and environment variables
Here we set up the linkage and authentication to AWS services. There are three parts to this:
* The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook
* The S3 bucket that you want to use for training and model data
* The Amazon SageMaker image classification docker image which need not be changed
```
%%time
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker import image_uris
role = get_execution_role()
bucket = sagemaker.session.Session().default_bucket()
training_image = image_uris.retrieve(
region=boto3.Session().region_name, framework="image-classification"
)
```
## Fine-tuning the Image classification model
### Prepare the data
The Caltech-256 dataset consists of images from 257 categories (the last one being a clutter category) and has 30k images, with a minimum of 80 images and a maximum of about 800 images per category.
The image classification algorithm can take two types of input formats. The first is a [RecordIO format](https://mxnet.incubator.apache.org/tutorials/basic/record_io.html) (content type: application/x-recordio) and the other is a [lst format](https://mxnet.incubator.apache.org/how_to/recordio.html?highlight=im2rec) (content type: application/x-image). Files for both these formats are available at http://data.dmlc.ml/mxnet/data/caltech-256/. In this example, we will use the lst format for training and use the training/validation split [specified here](http://data.dmlc.ml/mxnet/data/caltech-256/).
```
import os
import urllib.request
def download(url):
filename = url.split("/")[-1]
if not os.path.exists(filename):
urllib.request.urlretrieve(url, filename)
# Caltech-256 image files
s3 = boto3.client("s3")
s3.download_file(
"sagemaker-sample-files",
"datasets/image/caltech-256/256_ObjectCategories.tar",
"256_ObjectCategories.tar",
)
!tar -xf 256_ObjectCategories.tar
# Tool for creating lst file
download("https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/im2rec.py")
%%bash
mkdir -p caltech_256_train_60
for i in 256_ObjectCategories/*; do
c=`basename $i`
mkdir -p caltech_256_train_60/$c
for j in `ls $i/*.jpg | shuf | head -n 60`; do
mv $j caltech_256_train_60/$c/
done
done
python im2rec.py --list --recursive caltech-256-60-train caltech_256_train_60/
python im2rec.py --list --recursive caltech-256-60-val 256_ObjectCategories/
```
A .lst file is a tab-separated file with three columns that contains a list of image files. The first column specifies the image index, the second column specifies the class label index for the image, and the third column specifies the relative path of the image file. The image index in the first column should be unique across all of the images. Here we make an image list file using the [im2rec](https://github.com/apache/incubator-mxnet/blob/master/tools/im2rec.py) tool from MXNet. You can also create the .lst file in your own way. An example of a .lst file is shown below.
```
!head -n 3 ./caltech-256-60-train.lst > example.lst
f = open("example.lst", "r")
lst_content = f.read()
print(lst_content)
```
When you are bringing your own image files to train, please ensure that the .lst file follows the same format as described above. In order to train with the lst format interface, passing the lst file for both training and validation in the appropriate format is mandatory. Once we have the data available in the correct format for training, the next step is to upload the image and .lst files to the S3 bucket.
```
# Four channels: train, validation, train_lst, and validation_lst
s3train = "s3://{}/image-classification/train/".format(bucket)
s3validation = "s3://{}/image-classification/validation/".format(bucket)
s3train_lst = "s3://{}/image-classification/train_lst/".format(bucket)
s3validation_lst = "s3://{}/image-classification/validation_lst/".format(bucket)
# upload the image files to train and validation channels
!aws s3 cp caltech_256_train_60 $s3train --recursive --quiet
!aws s3 cp 256_ObjectCategories $s3validation --recursive --quiet
# upload the lst files to train_lst and validation_lst channels
!aws s3 cp caltech-256-60-train.lst $s3train_lst --quiet
!aws s3 cp caltech-256-60-val.lst $s3validation_lst --quiet
```
Now we have all the data stored in the S3 bucket. The image and lst files will be converted to RecordIO files internally by the image classification algorithm. But if you want to do the conversion yourself, the following cell shows how to do it using the [im2rec](https://github.com/apache/incubator-mxnet/blob/master/tools/im2rec.py) tool. Note that this is just an example of creating RecordIO files. We are **_not_** using them for training in this notebook. More details on creating RecordIO files can be found in this [tutorial](https://mxnet.incubator.apache.org/how_to/recordio.html?highlight=im2rec).
```
%%bash
python im2rec.py --resize 256 --quality 90 --num-thread 16 caltech-256-60-val 256_ObjectCategories/
python im2rec.py --resize 256 --quality 90 --num-thread 16 caltech-256-60-train caltech_256_train_60/
```
After you created the RecordIO files, you can upload them to the train and validation channels for training. To train with RecordIO format, you can follow "[Image-classification-fulltraining.ipynb](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/imageclassification_caltech/Image-classification-fulltraining.ipynb)" and "[Image-classification-transfer-learning.ipynb](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/imageclassification_caltech/Image-classification-transfer-learning.ipynb)". Again, we will **_not_** use the RecordIO file for the training. The following sections will only show you how to train a model with images and list files.
Before training the model, we need to set up the training parameters. The next section will explain the parameters in detail.
## Fine-tuning the Image Classification Model
### Training parameters
There are two kinds of parameters that need to be set for training. The first one are the parameters for the training job. These include:
* **Input specification**: These are the training and validation channels that specify the path where training data is present. These are specified in the "InputDataConfig" section. The main parameters that need to be set are the "ContentType", which can be set to "application/x-recordio" or "application/x-image" based on the input data format, and the S3Uri, which specifies the bucket and the folder where the data is present.
* **Output specification**: This is specified in the "OutputDataConfig" section. We just need to specify the path where the output can be stored after training
* **Resource config**: This section specifies the type of instance on which to run the training and the number of hosts used for training. If "InstanceCount" is more than 1, then training can be run in a distributed manner.
Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:
* **num_layers**: The number of layers (depth) for the network. We use 18 in this sample but other values such as 50, 152 can be used.
* **image_shape**: The input image dimensions, 'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be the same as in the actual images.
* **num_training_samples**: This is the total number of training samples. It is set to 15240 for the Caltech dataset with the current split.
* **num_classes**: This is the number of output classes for the new dataset. ImageNet was trained with 1000 output classes but the number of output classes can be changed for fine-tuning. For Caltech, we use 257 because it has 256 object categories + 1 clutter class.
* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size where N is the number of hosts on which training is run.
* **epochs**: Number of training epochs.
* **learning_rate**: Learning rate for training.
* **top_k**: Report the top-k accuracy during training.
* **resize**: Resize the image before using it for training. The images are resized so that the shortest side has this length (in pixels). If the parameter is not set, then the training data is used as-is without resizing.
* **checkpoint_frequency**: Period to store model parameters (in number of epochs).
* **use_pretrained_model**: Set to 1 to use pretrained model for transfer learning.
```
# The algorithm supports multiple network depth (number of layers). They are 18, 34, 50, 101, 152 and 200
# For this training, we will use 18 layers
num_layers = 18
# we need to specify the input image shape for the training data
image_shape = "3,224,224"
# we also need to specify the number of training samples in the training set
num_training_samples = 15240
# specify the number of output classes
num_classes = 257
# batch size for training
mini_batch_size = 128
# number of epochs
epochs = 6
# learning rate
learning_rate = 0.01
# report top_5 accuracy
top_k = 5
# resize image before training
resize = 256
# period to store model parameters (in number of epochs), in this case, we will save parameters from epoch 2, 4, and 6
checkpoint_frequency = 2
# Since we are using transfer learning, we set use_pretrained_model to 1 so that weights can be
# initialized with pre-trained weights
use_pretrained_model = 1
```
### Training
Run the training using Amazon SageMaker CreateTrainingJob API
```
%%time
import time
import boto3
from time import gmtime, strftime
s3 = boto3.client("s3")
# create unique job name
job_name_prefix = "sagemaker-imageclassification-notebook"
timestamp = time.strftime("-%Y-%m-%d-%H-%M-%S", time.gmtime())
job_name = job_name_prefix + timestamp
training_params = {
# specify the training docker image
"AlgorithmSpecification": {"TrainingImage": training_image, "TrainingInputMode": "File"},
"RoleArn": role,
"OutputDataConfig": {"S3OutputPath": "s3://{}/{}/output".format(bucket, job_name_prefix)},
"ResourceConfig": {"InstanceCount": 1, "InstanceType": "ml.p2.xlarge", "VolumeSizeInGB": 50},
"TrainingJobName": job_name,
"HyperParameters": {
"image_shape": image_shape,
"num_layers": str(num_layers),
"num_training_samples": str(num_training_samples),
"num_classes": str(num_classes),
"mini_batch_size": str(mini_batch_size),
"epochs": str(epochs),
"learning_rate": str(learning_rate),
"top_k": str(top_k),
"resize": str(resize),
"checkpoint_frequency": str(checkpoint_frequency),
"use_pretrained_model": str(use_pretrained_model),
},
"StoppingCondition": {"MaxRuntimeInSeconds": 360000},
# Training data should be inside a subdirectory called "train"
# Validation data should be inside a subdirectory called "validation"
# The algorithm currently only supports fullyreplicated model (where data is copied onto each machine)
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": s3train,
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "application/x-image",
"CompressionType": "None",
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": s3validation,
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "application/x-image",
"CompressionType": "None",
},
{
"ChannelName": "train_lst",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": s3train_lst,
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "application/x-image",
"CompressionType": "None",
},
{
"ChannelName": "validation_lst",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": s3validation_lst,
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "application/x-image",
"CompressionType": "None",
},
],
}
print("Training job name: {}".format(job_name))
print(
"\nInput Data Location: {}".format(
training_params["InputDataConfig"][0]["DataSource"]["S3DataSource"]
)
)
# create the Amazon SageMaker training job
sagemaker = boto3.client(service_name="sagemaker")
sagemaker.create_training_job(**training_params)
# confirm that the training job has started
status = sagemaker.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print("Training job current status: {}".format(status))
try:
# wait for the job to finish and report the ending status
sagemaker.get_waiter("training_job_completed_or_stopped").wait(TrainingJobName=job_name)
training_info = sagemaker.describe_training_job(TrainingJobName=job_name)
status = training_info["TrainingJobStatus"]
print("Training job ended with status: " + status)
except:
print("Training failed")
# if exception is raised, that means it has failed
message = sagemaker.describe_training_job(TrainingJobName=job_name)["FailureReason"]
print("Training failed with the following error: {}".format(message))
training_info = sagemaker.describe_training_job(TrainingJobName=job_name)
status = training_info["TrainingJobStatus"]
print("Training job ended with status: " + status)
print(training_info)
```
If you see the message,
> `Training job ended with status: Completed`
then that means training completed successfully and the output model was stored in the output path specified by `training_params['OutputDataConfig']`.
You can also view information about and the status of a training job using the AWS SageMaker console. Just click on the "Jobs" tab.
## Deploy The Model
A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class label given an input image.
This section involves several steps,
1. [Create model](#CreateModel) - Create model for the training output
1. [Batch Transform](#BatchTransform) - Create a transform job to perform batch inference.
1. [Host the model for realtime inference](#HostTheModel) - Create an inference endpoint and perform realtime inference.
### Create model
We now create a SageMaker Model from the training output. Using the model we can create an Endpoint Configuration.
```
%%time
import boto3
from time import gmtime, strftime
sage = boto3.Session().client(service_name="sagemaker")
timestamp = time.strftime("-%Y-%m-%d-%H-%M-%S", time.gmtime())
model_name = "image-classification-model" + timestamp
print(model_name)
info = sage.describe_training_job(TrainingJobName=job_name)
model_data = info["ModelArtifacts"]["S3ModelArtifacts"]
print(model_data)
hosting_image = image_uris.retrieve(
region=boto3.Session().region_name, framework="image-classification"
)
primary_container = {
"Image": hosting_image,
"ModelDataUrl": model_data,
}
create_model_response = sage.create_model(
ModelName=model_name, ExecutionRoleArn=role, PrimaryContainer=primary_container
)
print(create_model_response["ModelArn"])
```
### Batch transform
We now create a SageMaker Batch Transform job using the model created above to perform batch prediction.
```
timestamp = time.strftime("-%Y-%m-%d-%H-%M-%S", time.gmtime())
batch_job_name = "image-classification-model" + timestamp
batch_input = s3validation + "001.ak47/"
request = {
"TransformJobName": batch_job_name,
"ModelName": model_name,
"MaxConcurrentTransforms": 16,
"MaxPayloadInMB": 6,
"BatchStrategy": "SingleRecord",
"TransformOutput": {"S3OutputPath": "s3://{}/{}/output".format(bucket, batch_job_name)},
"TransformInput": {
"DataSource": {"S3DataSource": {"S3DataType": "S3Prefix", "S3Uri": batch_input}},
"ContentType": "application/x-image",
"SplitType": "None",
"CompressionType": "None",
},
"TransformResources": {"InstanceType": "ml.p2.xlarge", "InstanceCount": 1},
}
print("Transform job name: {}".format(batch_job_name))
print("\nInput Data Location: {}".format(batch_input))
sagemaker = boto3.client("sagemaker")
sagemaker.create_transform_job(**request)
print("Created Transform job with name: ", batch_job_name)
while True:
response = sagemaker.describe_transform_job(TransformJobName=batch_job_name)
status = response["TransformJobStatus"]
if status == "Completed":
print("Transform job ended with status: " + status)
break
if status == "Failed":
message = response["FailureReason"]
print("Transform failed with the following error: {}".format(message))
raise Exception("Transform job failed")
time.sleep(30)
```
After the job completes, let's check the prediction results.
```
from urllib.parse import urlparse
import json
import numpy as np
s3_client = boto3.client("s3")
object_categories = [
"ak47",
"american-flag",
"backpack",
"baseball-bat",
"baseball-glove",
"basketball-hoop",
"bat",
"bathtub",
"bear",
"beer-mug",
"billiards",
"binoculars",
"birdbath",
"blimp",
"bonsai-101",
"boom-box",
"bowling-ball",
"bowling-pin",
"boxing-glove",
"brain-101",
"breadmaker",
"buddha-101",
"bulldozer",
"butterfly",
"cactus",
"cake",
"calculator",
"camel",
"cannon",
"canoe",
"car-tire",
"cartman",
"cd",
"centipede",
"cereal-box",
"chandelier-101",
"chess-board",
"chimp",
"chopsticks",
"cockroach",
"coffee-mug",
"coffin",
"coin",
"comet",
"computer-keyboard",
"computer-monitor",
"computer-mouse",
"conch",
"cormorant",
"covered-wagon",
"cowboy-hat",
"crab-101",
"desk-globe",
"diamond-ring",
"dice",
"dog",
"dolphin-101",
"doorknob",
"drinking-straw",
"duck",
"dumb-bell",
"eiffel-tower",
"electric-guitar-101",
"elephant-101",
"elk",
"ewer-101",
"eyeglasses",
"fern",
"fighter-jet",
"fire-extinguisher",
"fire-hydrant",
"fire-truck",
"fireworks",
"flashlight",
"floppy-disk",
"football-helmet",
"french-horn",
"fried-egg",
"frisbee",
"frog",
"frying-pan",
"galaxy",
"gas-pump",
"giraffe",
"goat",
"golden-gate-bridge",
"goldfish",
"golf-ball",
"goose",
"gorilla",
"grand-piano-101",
"grapes",
"grasshopper",
"guitar-pick",
"hamburger",
"hammock",
"harmonica",
"harp",
"harpsichord",
"hawksbill-101",
"head-phones",
"helicopter-101",
"hibiscus",
"homer-simpson",
"horse",
"horseshoe-crab",
"hot-air-balloon",
"hot-dog",
"hot-tub",
"hourglass",
"house-fly",
"human-skeleton",
"hummingbird",
"ibis-101",
"ice-cream-cone",
"iguana",
"ipod",
"iris",
"jesus-christ",
"joy-stick",
"kangaroo-101",
"kayak",
"ketch-101",
"killer-whale",
"knife",
"ladder",
"laptop-101",
"lathe",
"leopards-101",
"license-plate",
"lightbulb",
"light-house",
"lightning",
"llama-101",
"mailbox",
"mandolin",
"mars",
"mattress",
"megaphone",
"menorah-101",
"microscope",
"microwave",
"minaret",
"minotaur",
"motorbikes-101",
"mountain-bike",
"mushroom",
"mussels",
"necktie",
"octopus",
"ostrich",
"owl",
"palm-pilot",
"palm-tree",
"paperclip",
"paper-shredder",
"pci-card",
"penguin",
"people",
"pez-dispenser",
"photocopier",
"picnic-table",
"playing-card",
"porcupine",
"pram",
"praying-mantis",
"pyramid",
"raccoon",
"radio-telescope",
"rainbow",
"refrigerator",
"revolver-101",
"rifle",
"rotary-phone",
"roulette-wheel",
"saddle",
"saturn",
"school-bus",
"scorpion-101",
"screwdriver",
"segway",
"self-propelled-lawn-mower",
"sextant",
"sheet-music",
"skateboard",
"skunk",
"skyscraper",
"smokestack",
"snail",
"snake",
"sneaker",
"snowmobile",
"soccer-ball",
"socks",
"soda-can",
"spaghetti",
"speed-boat",
"spider",
"spoon",
"stained-glass",
"starfish-101",
"steering-wheel",
"stirrups",
"sunflower-101",
"superman",
"sushi",
"swan",
"swiss-army-knife",
"sword",
"syringe",
"tambourine",
"teapot",
"teddy-bear",
"teepee",
"telephone-box",
"tennis-ball",
"tennis-court",
"tennis-racket",
"theodolite",
"toaster",
"tomato",
"tombstone",
"top-hat",
"touring-bike",
"tower-pisa",
"traffic-light",
"treadmill",
"triceratops",
"tricycle",
"trilobite-101",
"tripod",
"t-shirt",
"tuning-fork",
"tweezer",
"umbrella-101",
"unicorn",
"vcr",
"video-projector",
"washing-machine",
"watch-101",
"waterfall",
"watermelon",
"welding-mask",
"wheelbarrow",
"windmill",
"wine-bottle",
"xylophone",
"yarmulke",
"yo-yo",
"zebra",
"airplanes-101",
"car-side-101",
"faces-easy-101",
"greyhound",
"tennis-shoes",
"toad",
"clutter",
]
def list_objects(s3_client, bucket, prefix):
response = s3_client.list_objects(Bucket=bucket, Prefix=prefix)
objects = [content["Key"] for content in response["Contents"]]
return objects
def get_label(s3_client, bucket, prefix):
filename = prefix.split("/")[-1]
s3_client.download_file(bucket, prefix, filename)
with open(filename) as f:
data = json.load(f)
index = np.argmax(data["prediction"])
probability = data["prediction"][index]
print("Result: label - " + object_categories[index] + ", probability - " + str(probability))
return object_categories[index], probability
inputs = list_objects(s3_client, bucket, urlparse(batch_input).path.lstrip("/"))
print("Sample inputs: " + str(inputs[:2]))
outputs = list_objects(s3_client, bucket, batch_job_name + "/output")
print("Sample output: " + str(outputs[:2]))
# Check prediction result of the first 2 images
[get_label(s3_client, bucket, prefix) for prefix in outputs[0:2]]
```
### Realtime inference
We now host the model with an endpoint and perform realtime inference.
This section involves several steps,
1. [Create endpoint configuration](#CreateEndpointConfiguration) - Create a configuration defining an endpoint.
1. [Create endpoint](#CreateEndpoint) - Use the configuration to create an inference endpoint.
1. [Perform inference](#PerformInference) - Perform inference on some input data using the endpoint.
1. [Clean up](#CleanUp) - Delete the endpoint and model
#### Create endpoint configuration
At launch, we will support configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way.
In addition, the endpoint configuration describes the instance type required for model deployment, and at launch will describe the autoscaling configuration.
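As a rough stdlib-only illustration of what a weighted traffic split across production variants means (the variant names and weights below are made up; SageMaker itself routes requests based on each variant's weight):

```python
import random

def choose_variant(variants, u=None):
    """Pick a variant name given (name, weight) pairs.

    `u` is a number in [0, 1); if omitted, one is drawn at random.
    Weights are normalized internally, so they need not sum to 1.
    """
    total = sum(w for _, w in variants)
    if u is None:
        u = random.random()
    threshold = u * total
    cumulative = 0.0
    for name, weight in variants:
        cumulative += weight
        if threshold < cumulative:
            return name
    return variants[-1][0]  # guard against floating-point edge cases

variants = [("ModelA", 0.9), ("ModelB", 0.1)]  # hypothetical 90/10 A/B split
print(choose_variant(variants, u=0.5))   # lands in ModelA's 90% share
print(choose_variant(variants, u=0.95))  # lands in ModelB's 10% share
```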
```
import time
timestamp = time.strftime("-%Y-%m-%d-%H-%M-%S", time.gmtime())
endpoint_config_name = job_name_prefix + "-epc-" + timestamp
endpoint_config_response = sage.create_endpoint_config(
EndpointConfigName=endpoint_config_name,
ProductionVariants=[
{
"InstanceType": "ml.p2.xlarge",
"InitialInstanceCount": 1,
"ModelName": model_name,
"VariantName": "AllTraffic",
}
],
)
print("Endpoint configuration name: {}".format(endpoint_config_name))
print("Endpoint configuration arn: {}".format(endpoint_config_response["EndpointConfigArn"]))
```
#### Create endpoint
Next, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
```
%%time
import time
timestamp = time.strftime("-%Y-%m-%d-%H-%M-%S", time.gmtime())
endpoint_name = job_name_prefix + "-ep-" + timestamp
print("Endpoint name: {}".format(endpoint_name))
endpoint_params = {
"EndpointName": endpoint_name,
"EndpointConfigName": endpoint_config_name,
}
endpoint_response = sagemaker.create_endpoint(**endpoint_params)
print("EndpointArn = {}".format(endpoint_response["EndpointArn"]))
```
The creation request is asynchronous, so we poll until the endpoint is in service. This may take a few minutes...
```
# get the status of the endpoint
response = sagemaker.describe_endpoint(EndpointName=endpoint_name)
status = response["EndpointStatus"]
print("EndpointStatus = {}".format(status))
try:
sagemaker.get_waiter("endpoint_in_service").wait(EndpointName=endpoint_name)
finally:
resp = sagemaker.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
print("Arn: " + resp["EndpointArn"])
print("Create endpoint ended with status: " + status)
if status != "InService":
message = sagemaker.describe_endpoint(EndpointName=endpoint_name)["FailureReason"]
print("Endpoint creation failed with the following error: {}".format(message))
raise Exception("Endpoint creation did not succeed")
```
If you see the message,
> `Create endpoint ended with status: InService`
then congratulations! You now have a functioning inference endpoint. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console.
We will finally create a runtime object from which we can invoke the endpoint.
#### Perform inference
Finally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint.
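The endpoint returns a JSON array of class probabilities in the response body. As a stdlib-only sketch of the parsing step used below (the sample payload here is made up), the top class can be recovered even without NumPy:

```python
import json

# Hypothetical response body: probabilities for three classes
body = b'[0.1, 0.7, 0.2]'

result = json.loads(body)
index = max(range(len(result)), key=lambda i: result[i])  # argmax without numpy
print(index, result[index])  # → 1 0.7
```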
```
import boto3
runtime = boto3.Session().client(service_name="runtime.sagemaker")
```
##### Download test image
```
file_name = "/tmp/test.jpg"
s3.download_file(
"sagemaker-sample-files",
"datasets/image/caltech-256/256_ObjectCategories/008.bathtub/008_0007.jpg",
file_name,
)
# test image
from IPython.display import Image
Image(file_name)
import json
import numpy as np
with open(file_name, "rb") as f:
payload = f.read()
payload = bytearray(payload)
response = runtime.invoke_endpoint(
EndpointName=endpoint_name, ContentType="application/x-image", Body=payload
)
result = response["Body"].read()
# the result is in JSON format; parse it into a Python list
result = json.loads(result)
# the result will output the probabilities for all classes
# find the class with maximum probability and print the class index
index = np.argmax(result)
object_categories = [
"ak47",
"american-flag",
"backpack",
"baseball-bat",
"baseball-glove",
"basketball-hoop",
"bat",
"bathtub",
"bear",
"beer-mug",
"billiards",
"binoculars",
"birdbath",
"blimp",
"bonsai-101",
"boom-box",
"bowling-ball",
"bowling-pin",
"boxing-glove",
"brain-101",
"breadmaker",
"buddha-101",
"bulldozer",
"butterfly",
"cactus",
"cake",
"calculator",
"camel",
"cannon",
"canoe",
"car-tire",
"cartman",
"cd",
"centipede",
"cereal-box",
"chandelier-101",
"chess-board",
"chimp",
"chopsticks",
"cockroach",
"coffee-mug",
"coffin",
"coin",
"comet",
"computer-keyboard",
"computer-monitor",
"computer-mouse",
"conch",
"cormorant",
"covered-wagon",
"cowboy-hat",
"crab-101",
"desk-globe",
"diamond-ring",
"dice",
"dog",
"dolphin-101",
"doorknob",
"drinking-straw",
"duck",
"dumb-bell",
"eiffel-tower",
"electric-guitar-101",
"elephant-101",
"elk",
"ewer-101",
"eyeglasses",
"fern",
"fighter-jet",
"fire-extinguisher",
"fire-hydrant",
"fire-truck",
"fireworks",
"flashlight",
"floppy-disk",
"football-helmet",
"french-horn",
"fried-egg",
"frisbee",
"frog",
"frying-pan",
"galaxy",
"gas-pump",
"giraffe",
"goat",
"golden-gate-bridge",
"goldfish",
"golf-ball",
"goose",
"gorilla",
"grand-piano-101",
"grapes",
"grasshopper",
"guitar-pick",
"hamburger",
"hammock",
"harmonica",
"harp",
"harpsichord",
"hawksbill-101",
"head-phones",
"helicopter-101",
"hibiscus",
"homer-simpson",
"horse",
"horseshoe-crab",
"hot-air-balloon",
"hot-dog",
"hot-tub",
"hourglass",
"house-fly",
"human-skeleton",
"hummingbird",
"ibis-101",
"ice-cream-cone",
"iguana",
"ipod",
"iris",
"jesus-christ",
"joy-stick",
"kangaroo-101",
"kayak",
"ketch-101",
"killer-whale",
"knife",
"ladder",
"laptop-101",
"lathe",
"leopards-101",
"license-plate",
"lightbulb",
"light-house",
"lightning",
"llama-101",
"mailbox",
"mandolin",
"mars",
"mattress",
"megaphone",
"menorah-101",
"microscope",
"microwave",
"minaret",
"minotaur",
"motorbikes-101",
"mountain-bike",
"mushroom",
"mussels",
"necktie",
"octopus",
"ostrich",
"owl",
"palm-pilot",
"palm-tree",
"paperclip",
"paper-shredder",
"pci-card",
"penguin",
"people",
"pez-dispenser",
"photocopier",
"picnic-table",
"playing-card",
"porcupine",
"pram",
"praying-mantis",
"pyramid",
"raccoon",
"radio-telescope",
"rainbow",
"refrigerator",
"revolver-101",
"rifle",
"rotary-phone",
"roulette-wheel",
"saddle",
"saturn",
"school-bus",
"scorpion-101",
"screwdriver",
"segway",
"self-propelled-lawn-mower",
"sextant",
"sheet-music",
"skateboard",
"skunk",
"skyscraper",
"smokestack",
"snail",
"snake",
"sneaker",
"snowmobile",
"soccer-ball",
"socks",
"soda-can",
"spaghetti",
"speed-boat",
"spider",
"spoon",
"stained-glass",
"starfish-101",
"steering-wheel",
"stirrups",
"sunflower-101",
"superman",
"sushi",
"swan",
"swiss-army-knife",
"sword",
"syringe",
"tambourine",
"teapot",
"teddy-bear",
"teepee",
"telephone-box",
"tennis-ball",
"tennis-court",
"tennis-racket",
"theodolite",
"toaster",
"tomato",
"tombstone",
"top-hat",
"touring-bike",
"tower-pisa",
"traffic-light",
"treadmill",
"triceratops",
"tricycle",
"trilobite-101",
"tripod",
"t-shirt",
"tuning-fork",
"tweezer",
"umbrella-101",
"unicorn",
"vcr",
"video-projector",
"washing-machine",
"watch-101",
"waterfall",
"watermelon",
"welding-mask",
"wheelbarrow",
"windmill",
"wine-bottle",
"xylophone",
"yarmulke",
"yo-yo",
"zebra",
"airplanes-101",
"car-side-101",
"faces-easy-101",
"greyhound",
"tennis-shoes",
"toad",
"clutter",
]
print("Result: label - " + object_categories[index] + ", probability - " + str(result[index]))
```
#### Clean up
When we're done with the endpoint, we can delete it, and the backing instances will be released. Run the following cell to delete the endpoint.
```
sage.delete_endpoint(EndpointName=endpoint_name)
```
```
%matplotlib inline
```
Single-Machine Model Parallel Best Practices
===================================================
**Author**: `Shen Li <https://mrshenli.github.io/>`_
**Translator**: `안상준 <https://github.com/Justin-A>`_
Model parallelism is widely used in distributed training.
Previous tutorials, such as `DataParallel <https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html>`_,
showed how to train a neural network model on multiple GPUs.
That approach partitions the input data across the GPUs and replicates the same model onto each of them.
While it can speed up training considerably, it does not work when the model is too large to fit on a single GPU.
This tutorial shows how to solve that problem with **model parallelism** rather than ``data parallelism``:
instead of replicating the entire model on every GPU, a single model is split across multiple GPUs.
Concretely, for a model ``m`` with 10 layers, ``data parallelism`` replicates all 10 layers on each GPU,
whereas model parallelism on 2 GPUs hosts 5 layers on each.
The high-level idea of model parallelism is to place the sub-networks of a model on different GPUs,
run the forward pass on each device, and share the computed outputs between the devices.
Because only part of the model is placed on each GPU, a set of GPUs can host and train a larger model.
This tutorial was written to convey the idea of model parallelism, rather than to split a huge model across a limited number of GPUs.
Applying the idea of model parallelism to real applications is up to you.
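The layer-splitting idea described above can be sketched without any GPUs: given a model's layers, assign contiguous, near-equal chunks to each device. This is only an illustrative sketch, not part of the tutorial's code:

```python
def partition_layers(layers, num_devices):
    """Split a list of layers into `num_devices` contiguous, near-equal chunks."""
    n = len(layers)
    chunk, extra = divmod(n, num_devices)
    parts, start = [], 0
    for d in range(num_devices):
        size = chunk + (1 if d < extra else 0)  # spread the remainder over the first devices
        parts.append(layers[start:start + size])
        start += size
    return parts

layers = [f"layer{i}" for i in range(10)]
assignment = partition_layers(layers, 2)
print(len(assignment[0]), len(assignment[1]))  # → 5 5
```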
<div class="alert alert-info"><h4>Note</h4><p>For model parallel training across multiple servers, see the following tutorial:
`Getting Started with Distributed RPC Framework <rpc_tutorial.html>`__</p></div>
Basic Usage
-----------
Let us start with a toy model that contains two linear layers.
To run the model on two GPUs, place each layer on a different GPU,
and move the input and intermediate output tensors to match the devices of the layers.
```
import torch
import torch.nn as nn
import torch.optim as optim
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = torch.nn.Linear(10, 10).to('cuda:0') # place the first layer on the first GPU
self.relu = torch.nn.ReLU()
self.net2 = torch.nn.Linear(10, 5).to('cuda:1') # place the second layer on the second GPU
def forward(self, x):
x = self.relu(self.net1(x.to('cuda:0')))
return self.net2(x.to('cuda:1')) # move the first layer's output to the second GPU
```
Note that the above ``ToyModel`` looks very similar to how one would implement it on a single GPU,
apart from the ``to(device)`` calls that place the linear layers and tensors on the proper devices.
In other words, nothing needs to be configured beyond assigning tensors and layers to GPUs.
``backward()`` and ``torch.optim`` automatically take care of the gradients, just as when updating a model's weights on a single GPU.
You only need to make sure that the labels are on the same GPU as the model's final output tensor when computing the loss.
```
model = ToyModel()
loss_fn = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)
optimizer.zero_grad()
outputs = model(torch.randn(20, 10))
labels = torch.randn(20, 5).to('cuda:1') # place the labels on the same GPU as the model's final outputs
loss_fn(outputs, labels).backward()
optimizer.step()
```
Apply Model Parallel to Existing Modules
----------------------------------------
It is also possible to run an existing single-GPU module on multiple GPUs with only a few lines of code changes.
The code below shows how to split ResNet50 across two GPUs. The idea is to inherit from the existing ResNet module and split the layers across the two GPUs during construction.
Then, the forward method is overridden to stitch the two sub-networks together by moving the intermediate outputs appropriately.
```
from torchvision.models.resnet import ResNet, Bottleneck
num_classes = 1000
class ModelParallelResNet50(ResNet):
def __init__(self, *args, **kwargs):
super(ModelParallelResNet50, self).__init__(
Bottleneck, [3, 4, 6, 3], num_classes=num_classes, *args, **kwargs)
self.seq1 = nn.Sequential(
self.conv1,
self.bn1,
self.relu,
self.maxpool,
self.layer1,
self.layer2
).to('cuda:0') # place the first sequence of layers on the first GPU
self.seq2 = nn.Sequential(
self.layer3,
self.layer4,
self.avgpool,
).to('cuda:1') # place the second sequence of layers on the second GPU
self.fc.to('cuda:1') # place the fully connected layer on the second GPU
def forward(self, x):
x = self.seq2(self.seq1(x).to('cuda:1')) # move seq1's output to the second GPU and feed it to seq2
return self.fc(x.view(x.size(0), -1))
```
The above implementation addresses the case where the model is too large to be trained on a single GPU.
However, you may have noticed that training takes longer than on a single GPU, if your model fits on one.
The reason is that the two GPUs do not compute at the same time: one GPU always sits idle while the other works.
Computation takes even longer because the results computed on the first GPU, which hosts ``layer2``, must be copied to the second GPU, which hosts ``layer3``.
Let us run an experiment to quantify the execution time. We set random input tensors and labels, and train
both the existing ``torchvision.models.resnet50()`` and the model parallel ``ModelParallelResNet50``.
After training, the models will not produce any useful predictions, since they were trained on randomly generated data, but we can get a practical comparison of the training times.
```
import torchvision.models as models
num_batches = 3
batch_size = 120
image_w = 128
image_h = 128
def train(model):
model.train(True)
loss_fn = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)
one_hot_indices = torch.LongTensor(batch_size) \
.random_(0, num_classes) \
.view(batch_size, 1)
for _ in range(num_batches):
# generate random input tensors and labels
inputs = torch.randn(batch_size, 3, image_w, image_h)
labels = torch.zeros(batch_size, num_classes) \
.scatter_(1, one_hot_indices, 1)
# run the forward pass with the inputs
optimizer.zero_grad()
outputs = model(inputs.to('cuda:0'))
# run the backward pass and update the model weights
labels = labels.to(outputs.device)
loss_fn(outputs, labels).backward()
optimizer.step()
```
The ``train(model)`` method defined above trains the model using nn.MSELoss (mean squared error) as the loss function,
with ``optim.SGD`` as the optimizer. It mimics training on 3 batches of 120 randomly generated images of size 128 × 128.
Then, we use ``timeit`` to run the ``train(model)`` method 10 times, and save a plot of the execution times with their standard deviations.
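The timing harness below boils down to `timeit.repeat` plus a mean and standard deviation. A minimal stdlib-only version of the same pattern (timing a trivial statement instead of GPU training; `statistics.pstdev` matches `np.std`'s default population standard deviation):

```python
import timeit
import statistics

num_repeat = 5
# time a cheap stand-in statement instead of train(model)
run_times = timeit.repeat("sum(range(1000))", number=100, repeat=num_repeat)

mean = statistics.mean(run_times)
std = statistics.pstdev(run_times)  # population std, like np.std with ddof=0
print(f"{mean:.6f}s ± {std:.6f}s over {num_repeat} runs")
```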
```
import matplotlib.pyplot as plt
plt.switch_backend('Agg')
import numpy as np
import timeit
num_repeat = 10
stmt = "train(model)"
setup = "model = ModelParallelResNet50()"
# The globals argument is only available in Python 3.
# If you are using Python 2, you can use the following instead:
# import __builtin__
# __builtin__.__dict__.update(locals())
mp_run_times = timeit.repeat(
stmt, setup, number=1, repeat=num_repeat, globals=globals())
mp_mean, mp_std = np.mean(mp_run_times), np.std(mp_run_times)
setup = "import torchvision.models as models;" + \
"model = models.resnet50(num_classes=num_classes).to('cuda:0')"
rn_run_times = timeit.repeat(
stmt, setup, number=1, repeat=num_repeat, globals=globals())
rn_mean, rn_std = np.mean(rn_run_times), np.std(rn_run_times)
def plot(means, stds, labels, fig_name):
fig, ax = plt.subplots()
ax.bar(np.arange(len(means)), means, yerr=stds,
align='center', alpha=0.5, ecolor='red', capsize=10, width=0.6)
ax.set_ylabel('ResNet50 Execution Time (Second)')
ax.set_xticks(np.arange(len(means)))
ax.set_xticklabels(labels)
ax.yaxis.grid(True)
plt.tight_layout()
plt.savefig(fig_name)
plt.close(fig)
plot([mp_mean, rn_mean],
[mp_std, rn_std],
['Model Parallel', 'Single GPU'],
'mp_vs_rn.png')
```
.. figure:: /_static/img/model-parallel-images/mp_vs_rn.png
:alt:
```
# The experiment shows that training the model parallel version takes about 7%
# (``4.02/3.75-1=7%``) longer than training on a single GPU. So we can conclude
# that copying tensors between the GPUs during the forward and backward passes
# accounts for roughly 7% of the time. Since one of the two GPUs always sits idle
# during training, there is room to improve the training time. One option is to
# create a pipeline that divides each mini-batch into two splits, so that when the
# first split has passed through the first sub-network and been copied to the
# second, the second split can be fed through the first sub-network.
# In this way, the two GPUs process the two splits concurrently, shortening the training time.
```
Speed Up by Pipelining Inputs
-----------------------------
In the following experiment, we further divide each mini-batch of 120 images into splits of 20 images.
Because PyTorch launches CUDA operations asynchronously, the implementation does not need to spawn
multiple threads to achieve concurrency.
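The `x.split(self.split_size, dim=0)` call used below can be mimicked with plain lists to see how the pipeline chunks a batch (120 images in splits of 20); this sketch is only for illustration:

```python
def split_batch(batch, split_size):
    """Yield consecutive chunks of at most `split_size` items, like Tensor.split along dim 0."""
    for start in range(0, len(batch), split_size):
        yield batch[start:start + split_size]

batch = list(range(120))  # stand-in for a mini-batch of 120 images
splits = list(split_batch(batch, 20))
print(len(splits), len(splits[0]))  # → 6 20
```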
```
class PipelineParallelResNet50(ModelParallelResNet50):
def __init__(self, split_size=20, *args, **kwargs):
super(PipelineParallelResNet50, self).__init__(*args, **kwargs)
self.split_size = split_size
def forward(self, x):
splits = iter(x.split(self.split_size, dim=0))
s_next = next(splits)
s_prev = self.seq1(s_next).to('cuda:1')
ret = []
for s_next in splits:
# A. run s_prev on the second GPU
s_prev = self.seq2(s_prev)
ret.append(self.fc(s_prev.view(s_prev.size(0), -1)))
# B. run s_next on the first GPU, concurrently with A
s_prev = self.seq1(s_next).to('cuda:1')
s_prev = self.seq2(s_prev)
ret.append(self.fc(s_prev.view(s_prev.size(0), -1)))
return torch.cat(ret)
setup = "model = PipelineParallelResNet50()"
pp_run_times = timeit.repeat(
stmt, setup, number=1, repeat=num_repeat, globals=globals())
pp_mean, pp_std = np.mean(pp_run_times), np.std(pp_run_times)
plot([mp_mean, rn_mean, pp_mean],
[mp_std, rn_std, pp_std],
['Model Parallel', 'Single GPU', 'Pipelining Model Parallel'],
'mp_vs_rn_vs_pp.png')
```
Please note that device-to-device tensor copies are synchronized with the current streams on the
source and destination devices. If you create multiple streams, you must make sure the copy
operations are properly synchronized. Writing to the source tensor, or reading from or writing to
the destination tensor, before the copy finishes can lead to incorrect results. The above
implementation uses only the default streams on both devices, so no additional synchronization is necessary.
.. figure:: /_static/img/model-parallel-images/mp_vs_rn_vs_pp.png
:alt:
This experiment shows that pipelining the data within each mini-batch speeds up the training of the
model parallel ResNet50 by roughly 49% (``3.75/2.51-1=49%``). That is still quite far from the ideal
2x speedup. Since we introduced the ``split_sizes`` parameter to our pipelining implementation, it is
unclear how much the pipelining itself contributed to the shorter training time. Intuitively, a small
``split_sizes`` value leads to many tiny CUDA operations, while a large value results in relatively
long CUDA operations during the first and last splits. Neither is optimal. We can therefore expect
training to be fastest at some optimal ``split_sizes`` setting.
Let us try to find it by running experiments with different ``split_sizes`` values.
```
means = []
stds = []
split_sizes = [1, 3, 5, 8, 10, 12, 20, 40, 60]
for split_size in split_sizes:
setup = "model = PipelineParallelResNet50(split_size=%d)" % split_size
pp_run_times = timeit.repeat(
stmt, setup, number=1, repeat=num_repeat, globals=globals())
means.append(np.mean(pp_run_times))
stds.append(np.std(pp_run_times))
fig, ax = plt.subplots()
ax.plot(split_sizes, means)
ax.errorbar(split_sizes, means, yerr=stds, ecolor='red', fmt='ro')
ax.set_ylabel('ResNet50 Execution Time (Second)')
ax.set_xlabel('Pipeline Split Size')
ax.set_xticks(split_sizes)
ax.yaxis.grid(True)
plt.tight_layout()
plt.savefig("split_size_tradeoff.png")
plt.close(fig)
```
.. figure:: /_static/img/model-parallel-images/split_size_tradeoff.png
:alt:
The result shows that setting ``split_size`` to 12 achieves the fastest training, a reduction of about 54%.
There are still several ways to shorten training further. For example, all operations on the first GPU are
placed on its default stream. This means that, within the mini-batch pipeline, the copy of the next split
cannot overlap with the computation of the current split. However, since the previous and next splits are
different tensors, there is no problem overlapping one's copy with the other's computation. To achieve this,
multiple streams must be used on both GPUs, and different sub-network structures require different stream
management strategies. Since no general multi-stream approach works for all model parallel use cases,
this tutorial does not cover it.
```
"""
.. note::
이번 게시물에서는 다양한 성능 측정값을 확인할 수 있습니다. 여러분은 위의 예제를 실행할 때 마다 매번
다른 결과를 확인할 수 있습니다. 그 이유는, 이용하는 소프트웨어 및 하드웨어에 따라 결과가
다르게 나타나기 때문입니다. 여러분이 이용하고 있는 환경 내에서 가장 좋은 성능을 얻기 위해서는, 곡선을 그려서
최적의 ``split_size`` 값을 도출한 후, 해당 값을 이용하여 미니 배치 내 데이터를 분리하는 파이프라인을
생성하는 것입니다.
"""
```
## PySpark Data Engineering Practice (Sandboxing)
### Olympic Athlete Data
This notebook is for data engineering practicing purposes.
During this notebook I want to explore data by using and learning PySpark.
The data is from: https://www.kaggle.com/mysarahmadbhat/120-years-of-olympic-history
```
## Imports
from pyspark.sql import SparkSession ## Create session
from pyspark.sql.types import StructType, StructField, StringType, IntegerType ## Create schema
## Create spark sessions
spark = (SparkSession.builder.appName("AthletesAnalytics").getOrCreate())
```
### Import the data
```
## Create schema
schema = StructType([
StructField("ID", StringType(), True),
StructField("Name", StringType(), True),
StructField("Sex", StringType(), True),
StructField("Age", StringType(), True),
StructField("Height", StringType(), True),
StructField("Weight", StringType(), True),
StructField("Team", StringType(), True),
StructField("NOC", StringType(), True),
StructField("Games", StringType(), True),
StructField("Year", StringType(), True),
StructField("Season", StringType(), True),
StructField("City", StringType(), True),
StructField("Sport", StringType(), True),
StructField("Event", StringType(), True),
StructField("Medal", StringType(), True),
])
## Read CSV into dataframe
file_path = "./data/athlete_events.csv"
athletes_df = (spark.read.format("csv")
.option("header", True)
.schema(schema)
.load(file_path))
## Showing first 10 rows
athletes_df.show(10, False)
## Print out schema details
athletes_df.printSchema()
athletes_df.show(3, vertical=True)
```
### Exploration & Cleansing
```
### Check for NA values by exploring columns
from pyspark.sql.functions import col
athletes_df.filter(col("Medal") == "NA").show(10)
## NA values in:
## Age, Height, Weight, Team, NOC (National Olympic Committee), and Medal.
```
#### Drop rows where age, height or weight have NA values.
```
athletes_df = athletes_df.filter((col("Age") != "NA") & (col("Height") != "NA") & (col("Weight") != "NA"))
## Check if correct
athletes_df.filter((col("Age") == "NA")).show(5)
athletes_df.filter((col("Height") == "NA")).show(5)
athletes_df.filter((col("Weight") == "NA")).show(5)
```
#### Check if other columns have the right values
```
### Check if ID, Age, Height, Weight and Year are indeed all integer values
### Checking ID first on non numeric values
from pyspark.sql.types import DataType, StructField, StructType, IntegerType, StringType
test_df = athletes_df.select('ID',col('ID').cast(IntegerType()).isNotNull().alias("Value"))
test_df.filter((col("Value") == False)).show(5)
### Checking Age on non numeric values
from pyspark.sql.types import DataType, StructField, StructType, IntegerType, StringType
test_df = athletes_df.select('Age',col('Age').cast(IntegerType()).isNotNull().alias("Value"))
test_df.filter((col("Value") == False)).show(5)
### As seen, something isn't right: there are gender and even name values in the Age column.
### Let's see how many rows have this problem; first count the valid rows
test_df.filter((col("Value") == True)).count()
### Now count the broken rows: 500 out of 206188 values have this problem
test_df.filter((col("Value") == False)).count()
### Percentage of broken rows
print(str(round(500 / 206188 * 100,2)) + '%')
athletes_df.filter((col("Age") == "M")).show(5)
### The reason for this error is that there is a comma in some of the names.
### For now I'll drop these rows. This can be done with the following filter function
athletes_df = athletes_df.filter("CAST(Age AS INTEGER) IS NOT NULL")
athletes_df.filter((col("Age"))=="M").show()
### After dropping those rows, there are no wrong values left in Height either
test_df = athletes_df.select('Height',col('Height').cast(IntegerType()).isNotNull().alias("Value"))
test_df.filter((col("Value") == False)).show(5)
### As you can see, 500 rows were deleted.
athletes_df.count()
### Check the distinct values for seasons.
### As seen there are no odd values in this column.
athletes_df.select("Season").distinct().show()
### Check the length of NOC, as seen in the result this is always 3, so that is good.
from pyspark.sql.functions import length
test_df = athletes_df.withColumn("length_NOC", length("NOC")).filter((col("length_NOC") != 3))
test_df.show()
### Check if sex is only M and F, as seen this is correct.
athletes_df.filter((col("Sex")!="F") & (col("Sex")!="M")).show()
```
### Masking the name
To practice the idea of private information I want to explore masking the name.
#### Masking
```
### Masks name showing the first and last two characters.
### If name is less than 5 characters, it will only show the first character.
from pyspark.sql.functions import udf
def mask_name(columnValue):
if len(columnValue) < 5:
nameList=list(columnValue)
start = "".join(nameList[:1])
masking = 'x'*(len(nameList)-1)
masked_name = start+masking
else:
nameList=list(columnValue)
start = "".join(nameList[:2])
end = "".join(nameList[-2:])
masking = 'x'*(len(nameList)-4)
masked_name = start+masking+end
return masked_name
### Make the function work with PySpark
mask_name_udf = udf(mask_name, StringType())
### Test function
athletes_df.select("Name",mask_name_udf(athletes_df["Name"])).distinct().show(5, truncate=False)
athletes_df = athletes_df.withColumn("MaskedName",mask_name_udf(athletes_df["Name"])).drop(col("Name"))
athletes_df.show(1,vertical=True)
```
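Because `mask_name` is plain Python, its logic can be checked without a Spark session. Below is an equivalent standalone version of the UDF above:

```python
def mask_name(value):
    """Keep the first two and last two characters; names under 5 chars keep only the first."""
    if len(value) < 5:
        return value[0] + "x" * (len(value) - 1)
    return value[:2] + "x" * (len(value) - 4) + value[-2:]

print(mask_name("John"))   # → Jxxx
print(mask_name("Alice"))  # → Alxce
```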
### Fixing Schema
```
athletes_df.printSchema()
### ID, Age Height, Weight and Year should be integer
athletes_final_df = (athletes_df.withColumn("PlayerID", col("ID").cast(IntegerType()))
.drop(col("ID"))
.withColumn("Name", col("MaskedName").cast(StringType()))
.withColumn("Age", col("Age").cast(IntegerType()))
.withColumn("Height", col("Height").cast(IntegerType()))
.withColumn("Weight", col("Weight").cast(IntegerType()))
.withColumn("Year", col("Year").cast(IntegerType()))
)
athletes_final_df.printSchema()
### Sort column order
athletes_sorted_df = athletes_final_df.select(
[athletes_final_df.columns[-2]]
+ [athletes_final_df.columns[-1]]
+ athletes_final_df.columns[:-3])
athletes_sorted_df.show(1, vertical=True)
athletes_sorted_df.printSchema()
```
### Save to parquet
```
## Write to a parquet file (left commented out: running it crashed my laptop)
#output_path = './output/athlete_data'
#athletes_sorted_df.write.partitionBy("Games").mode("overwrite").parquet(output_path)
```
### Aggregations
```
from pyspark.sql.functions import min, max, sum, sumDistinct, avg, col, expr, round, count
```
#### Medals per year
```
### Get year and medal
medals_per_year_df = athletes_sorted_df.select(
col("Year"),
col("Medal")
)
medals_per_year_df.show(5)
### Filter out all rows with NA
medals_per_year_df = medals_per_year_df.filter(col("Medal")!="NA")
medals_per_year_df.show(5)
### show amount of medals per Year
medals_per_year_df.groupBy("Year").agg(count("Medal").alias("Medals Amount")).orderBy("Year", ascending=False).show(5)
```
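The group-by count above can be sanity-checked on a small sample with the standard library (the sample rows below are made up):

```python
from collections import Counter

# Hypothetical (Year, Medal) rows after filtering out "NA"
rows = [(2008, "Gold"), (2008, "Silver"), (2012, "Gold"), (2008, "Bronze")]

medals_per_year = Counter(year for year, _ in rows)  # count medals per year
print(medals_per_year[2008])  # → 3
```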
#### Medals per country
```
### Show distinct medal values.
athletes_sorted_df.select("Medal").distinct().show()
### create new dataframe and filter out NA values for the medal column.
medals_per_country_df = athletes_sorted_df.select(
col("Team"),
col("Medal")
)
medals_per_country_df = medals_per_country_df.filter(col("Medal")!="NA")
medals_per_country_df.show(5)
### Aggregate and order by medal amount
medals_per_country_df = medals_per_country_df.groupBy("Team","Medal").agg(count("Medal").alias("Amount")).orderBy("Amount", ascending=False)
medals_per_country_df.show(10)
```
#### Show information about height and weight
```
### This could also be used to make sure there are no odd values in the columns
athletes_sorted_df.select("Height", "Weight").describe().show()
### Weight of only 25?? Let's check out why that is.
athletes_sorted_df.select("Weight","Height","Age","PlayerID","Name","Team").filter(col("Weight")==25).distinct().show()
```
#### Which country has the most medals in basketball?
```
athletes_sorted_df.show(2)
best_in_basketball_df = athletes_sorted_df.select(
col("Team"),
col("Sport"),
col("Medal")
)
best_in_basketball_df = best_in_basketball_df.filter(col("Sport")=="Basketball")
best_in_basketball_df.show(3)
best_in_basketball_df = best_in_basketball_df.groupBy("Team","Sport").agg(count("Medal").alias("Amount")).orderBy("Amount", ascending=False)
best_in_basketball_df.show(5)
```
As you could expect, US has the most medals in Basketball.
# Summed Likelihood Analysis with Python
This sample analysis shows a way of performing joint likelihood on two data selections using the same XML model. This is useful if you want to do the following:
* Coanalysis of Front and Back selections (not using the combined IRF)
* Coanalysis of separate time intervals
* Coanalysis of separate energy ranges
* Pass 8 PSF type analysis
* Pass 8 EDISP type analysis
This tutorial also assumes that you've gone through the standard [binned likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/binned_likelihood_tutorial.html) thread using the combined front + back events, to which we will compare.
# Get the data
For this thread the original data were extracted from the [LAT data server](https://fermi.gsfc.nasa.gov/cgi-bin/ssc/LAT/LATDataQuery.cgi) with the following selections (these selections are similar to those in the paper):
```
Search Center (RA,Dec) = (193.98,-5.82)
Radius = 15 degrees
Start Time (MET) = 239557417 seconds (2008-08-04T15:43:37)
Stop Time (MET) = 302572802 seconds (2010-08-04T00:00:00)
Minimum Energy = 100 MeV
Maximum Energy = 500000 MeV
```
For more information on how to download LAT data please see the [Extract LAT Data](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extract_latdata.html) tutorial.
These are the event files. Run the code cell below to retrieve them:
```
L181126210218F4F0ED2738_PH00.fits (5.4 MB)
L181126210218F4F0ED2738_PH01.fits (10.8 MB)
L181126210218F4F0ED2738_PH02.fits (6.9 MB)
L181126210218F4F0ED2738_PH03.fits (9.8 MB)
L181126210218F4F0ED2738_PH04.fits (7.8 MB)
L181126210218F4F0ED2738_PH05.fits (6.6 MB)
L181126210218F4F0ED2738_PH06.fits (4.8 MB)
L181126210218F4F0ED2738_SC00.fits (256 MB spacecraft file)
```
```
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH00.fits
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH01.fits
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH02.fits
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH03.fits
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH04.fits
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH05.fits
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH06.fits
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_SC00.fits
!mkdir data
!mv *.fits ./data
```
You'll first need to make a file list with the names of your input event files:
```
!ls ./data/*_PH*.fits > ./data/binned_events.txt
!cat ./data/binned_events.txt
```
In the following analysis we've assumed that you've named your list of data files `binned_events.txt`.
# Perform Event Selections
You could follow the unbinned likelihood tutorial to perform your event selections using **gtselect**, **gtmktime**, etc. directly from the command line, and then use pylikelihood later.
But we're going to go ahead and use python. The `gt_apps` module provides methods to call these tools from within python. This'll get us used to using python.
So, let's jump into python:
```
import gt_apps as my_apps
```
We need to run **gtselect** (called `filter` in python) twice. Once, we select only the front events and the other time we select only back events. You do this with `evtype=1` (front) and `evtype=2` (back).
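Per the Pass 8 convention used here (1 = FRONT, 2 = BACK), `evtype` values behave as bit flags that can be combined by OR-ing them. A small illustrative sketch:

```python
FRONT = 1
BACK = 2

def combine(*evtypes):
    """OR event-type bit flags into a single gtselect `evtype` value."""
    value = 0
    for t in evtypes:
        value |= t
    return value

print(combine(FRONT, BACK))  # → 3, the combined front+back selection
```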
```
my_apps.filter['evclass'] = 128
my_apps.filter['evtype'] = 1
my_apps.filter['ra'] = 193.98
my_apps.filter['dec'] = -5.82
my_apps.filter['rad'] = 15
my_apps.filter['emin'] = 100
my_apps.filter['emax'] = 500000
my_apps.filter['zmax'] = 90
my_apps.filter['tmin'] = 239557417
my_apps.filter['tmax'] = 302572802
my_apps.filter['infile'] = '@./data/binned_events.txt'
my_apps.filter['outfile'] = './data/3C279_front_filtered.fits'
```
Once this is done, we can run **gtselect**:
```
my_apps.filter.run()
```
Now, we select the back events and run it again:
```
my_apps.filter['evtype'] = 2
my_apps.filter['outfile'] = './data/3C279_back_filtered.fits'
my_apps.filter.run()
```
Now, we need to find the GTIs for each data set (front and back). This is accessed within python via the `maketime` object:
```
# Front
my_apps.maketime['scfile'] = './data/L181126210218F4F0ED2738_SC00.fits'
my_apps.maketime['filter'] = '(DATA_QUAL>0)&&(LAT_CONFIG==1)'
my_apps.maketime['roicut'] = 'no'
my_apps.maketime['evfile'] = './data/3C279_front_filtered.fits'
my_apps.maketime['outfile'] = './data/3C279_front_filtered_gti.fits'
my_apps.maketime.run()
```
Similar for the back:
```
# Back
my_apps.maketime['evfile'] = './data/3C279_back_filtered.fits'
my_apps.maketime['outfile'] = './data/3C279_back_filtered_gti.fits'
my_apps.maketime.run()
```
# Livetime and Counts Cubes
### Livetime Cube
We can now compute the livetime cube. We only need to do this once since in this case we made the exact same time cuts and used the same GTI filter on front and back datasets.
```
my_apps.expCube['evfile'] = './data/3C279_front_filtered_gti.fits'
my_apps.expCube['scfile'] = './data/L181126210218F4F0ED2738_SC00.fits'
my_apps.expCube['outfile'] = './data/3C279_front_ltcube.fits'
my_apps.expCube['zmax'] = 90
my_apps.expCube['dcostheta'] = 0.025
my_apps.expCube['binsz'] = 1
my_apps.expCube.run()
```
### Counts Cube
The counts cube is the counts from our data file binned in space and energy. All of the steps above use a circular ROI (or a cone, really).
Once you switch to binned analysis, you start doing things in squares. Your counts cube can only be as big as the biggest square that can fit in the circular ROI you already selected.
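To see the geometry, note that the largest square inscribed in a circle of radius \\(r\\) has side \\(r\sqrt{2}\\). A quick sanity check (using the 15-degree ROI radius set in the **gtselect** step above) confirms that the 100-pixel, 0.2-degree cube chosen below fits inside that limit:

```python
import numpy as np

roi_radius = 15.0  # deg, the 'rad' parameter used in gtselect above

# Largest square that fits inside the circular ROI: side = r * sqrt(2)
max_side = roi_radius * np.sqrt(2)  # ~21.2 deg

# The counts cube below uses 100 pixels of 0.2 deg -> a 20 deg square
cube_side = 100 * 0.2

print(max_side, cube_side, cube_side <= max_side)
```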
We start with front events:
```
my_apps.evtbin['evfile'] = './data/3C279_front_filtered_gti.fits'
my_apps.evtbin['outfile'] = './data/3C279_front_ccube.fits'
my_apps.evtbin['algorithm'] = 'CCUBE'
my_apps.evtbin['nxpix'] = 100
my_apps.evtbin['nypix'] = 100
my_apps.evtbin['binsz'] = 0.2
my_apps.evtbin['coordsys'] = 'CEL'
my_apps.evtbin['xref'] = 193.98
my_apps.evtbin['yref'] = -5.82
my_apps.evtbin['axisrot'] = 0
my_apps.evtbin['proj'] = 'AIT'
my_apps.evtbin['ebinalg'] = 'LOG'
my_apps.evtbin['emin'] = 100
my_apps.evtbin['emax'] = 500000
my_apps.evtbin['enumbins'] = 37
my_apps.evtbin.run()
```
And then for the back events:
```
my_apps.evtbin['evfile'] = './data/3C279_back_filtered_gti.fits'
my_apps.evtbin['outfile'] = './data/3C279_back_ccube.fits'
my_apps.evtbin.run()
```
# Exposure Maps
The binned exposure map is an exposure map binned in space and energy.
We first need to import the python version of `gtexpcube2`, which doesn't have a gtapp version by default. This is easy to do (you can import any of the command line tools into python this way). Then, you can check out the parameters with the `pars` function.
```
from GtApp import GtApp
expCube2= GtApp('gtexpcube2','Likelihood')
expCube2.pars()
```
Here, we generate exposure maps for the entire sky.
```
expCube2['infile'] = './data/3C279_front_ltcube.fits'
expCube2['cmap'] = 'none'
expCube2['outfile'] = './data/3C279_front_BinnedExpMap.fits'
expCube2['irfs'] = 'P8R3_SOURCE_V2'
expCube2['evtype'] = '1'
expCube2['nxpix'] = 1800
expCube2['nypix'] = 900
expCube2['binsz'] = 0.2
expCube2['coordsys'] = 'CEL'
expCube2['xref'] = 193.98
expCube2['yref'] = -5.82
expCube2['axisrot'] = 0
expCube2['proj'] = 'AIT'
expCube2['ebinalg'] = 'LOG'
expCube2['emin'] = 100
expCube2['emax'] = 500000
expCube2['enumbins'] = 37
expCube2.run()
expCube2['infile'] = './data/3C279_front_ltcube.fits'
expCube2['outfile'] = './data/3C279_back_BinnedExpMap.fits'
expCube2['evtype'] = '2'
expCube2.run()
```
# Compute Source Maps
The source maps step convolves the LAT response with your source model, generating maps for each source in the model for use in the likelihood calculation.
We use the same [XML](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_input_model.xml) file as in the standard [binned likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/binned_likelihood_tutorial.html) analysis.
You should also download the recommended models for a normal point source analysis `gll_iem_v07.fits` and `iso_P8R3_SOURCE_V2_v1.txt`.
These three files can be downloaded by running the code cell below:
```
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/aux/4fgl/gll_iem_v07.fits
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/aux/4fgl/iso_P8R3_SOURCE_V2_v1.txt
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_input_model.xml
!mv *.xml ./data
```
Note that the files `gll_iem_v07.fits` and `iso_P8R3_SOURCE_V2_v1.txt` must be in your current working directory for the next steps to work.
We compute the front events:
```
my_apps.srcMaps['expcube'] = './data/3C279_front_ltcube.fits'
my_apps.srcMaps['cmap'] = './data/3C279_front_ccube.fits'
my_apps.srcMaps['srcmdl'] = './data/3C279_input_model.xml'
my_apps.srcMaps['bexpmap'] = './data/3C279_front_BinnedExpMap.fits'
my_apps.srcMaps['outfile'] = './data/3C279_front_srcmap.fits'
my_apps.srcMaps['irfs'] = 'P8R3_SOURCE_V2'
my_apps.srcMaps['evtype'] = '1'
my_apps.srcMaps.run()
```
And similarly, the back events:
```
my_apps.srcMaps['expcube'] = './data/3C279_front_ltcube.fits'
my_apps.srcMaps['cmap'] = './data/3C279_back_ccube.fits'
my_apps.srcMaps['srcmdl'] = './data/3C279_input_model.xml'
my_apps.srcMaps['bexpmap'] = './data/3C279_back_BinnedExpMap.fits'
my_apps.srcMaps['outfile'] = './data/3C279_back_srcmap.fits'
my_apps.srcMaps['irfs'] = 'P8R3_SOURCE_V2'
my_apps.srcMaps['evtype'] = '2'
my_apps.srcMaps.run()
```
# Run the Likelihood Analysis
First, import the BinnedAnalysis and SummedAnalysis libraries. Then, create a likelihood object for both the front and the back datasets. For more details on the pyLikelihood module, check out the [pyLikelihood Usage Notes](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/python_usage_notes.html).
```
import pyLikelihood
from BinnedAnalysis import *
from SummedLikelihood import *
front = BinnedObs(srcMaps='./data/3C279_front_srcmap.fits',binnedExpMap='./data/3C279_front_BinnedExpMap.fits',expCube='./data/3C279_front_ltcube.fits',irfs='CALDB')
likefront = BinnedAnalysis(front,'./data/3C279_input_model.xml',optimizer='NewMinuit')
back = BinnedObs(srcMaps='./data/3C279_back_srcmap.fits',binnedExpMap='./data/3C279_back_BinnedExpMap.fits',expCube='./data/3C279_front_ltcube.fits',irfs='CALDB')
likeback = BinnedAnalysis(back,'./data/3C279_input_model.xml',optimizer='NewMinuit')
```
Then, create the summedlikelihood object and add the two likelihood objects, one for the front selection and the second for the back selection.
```
summed_like = SummedLikelihood()
summed_like.addComponent(likefront)
summed_like.addComponent(likeback)
```
Perform the fit and print out the results:
```
summedobj = pyLike.NewMinuit(summed_like.logLike)
summed_like.fit(verbosity=0,covar=True,optObject=summedobj)
```
Print TS for 3C 279 (4FGL J1256.1-0547):
```
summed_like.Ts('4FGL J1256.1-0547')
```
We can now compare to the standard [binned likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/binned_likelihood_tutorial.html) analysis that uses only one data set containing both Front and Back event types that are represented by a single, combined IRF set. You will need to download the files created in that analysis thread or rerun this python tutorial with the combined dataset `(evtype=3)`.
For your convenience, the files can be obtained from the code cell below:
```
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_srcmaps.fits
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_allsky_expcube.fits
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_ltcube.fits
!mv 3C279*.fits ./data
all = BinnedObs(srcMaps='./data/3C279_binned_srcmaps.fits',binnedExpMap='./data/3C279_binned_allsky_expcube.fits',expCube='./data/3C279_binned_ltcube.fits',irfs='CALDB')
likeall = BinnedAnalysis(all,'./data/3C279_input_model.xml',optimizer='NewMinuit')
```
Perform the fit and print out the results:
```
likeallobj = pyLike.NewMinuit(likeall.logLike)
likeall.fit(verbosity=0,covar=True,optObject=likeallobj)
```
Print TS for 3C 279 (4FGL J1256.1-0547):
```
likeall.Ts('4FGL J1256.1-0547')
```
The TS for the combined front + back analysis is 29261.558, a bit lower than the 30191.550 we found for the separate front and back analysis.
The important difference is that in the separated version of the analysis each event type has a dedicated response function set instead of using the averaged Front+Back response. This should increase the sensitivity, and therefore, the TS value.
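As a rough yardstick, for one additional free parameter the detection significance scales like \\(\sqrt{TS}\\) (Wilks' theorem), so the comparison can be sketched numerically (TS values taken from the fits above):

```python
import numpy as np

ts_separate = 30191.550  # front + back summed likelihood (from above)
ts_combined = 29261.558  # single combined Front+Back IRF analysis

# significance ~ sqrt(TS) for one free parameter
print(np.sqrt(ts_separate))  # ~173.8 sigma
print(np.sqrt(ts_combined))  # ~171.1 sigma
```

Either way the source is overwhelmingly detected; the separate-event-type fit simply squeezes a little more information out of the same events.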
## The Golden Standard
In the previous session, we saw why and how association is different from causation. We also saw what is required to make association be causation.
$
E[Y|T=1] - E[Y|T=0] = \underbrace{E[Y_1 - Y_0|T=1]}_{ATET} + \underbrace{\{ E[Y_0|T=1] - E[Y_0|T=0] \}}_{BIAS}
$
To recap, association becomes causation if there is no bias. There will be no bias if \\(E[Y_0|T=0]=E[Y_0|T=1]\\). In words, association will be causation if the treated and control groups are equal, or comparable, except for the treatment they receive. Or, in more technical words, when the outcome of the untreated is equal to the counterfactual outcome of the treated. Remember that this counterfactual outcome is the outcome the treated group would have had if they had not received the treatment.
I think we did an OK job explaining in math terms how to make association equal to causation. But that was only in theory. Now, we look at the first tool we have to make the bias vanish: **Randomised Experiments**. Randomised experiments consist of randomly assigning individuals in a population to the treatment or to a control group. The proportion that receives the treatment doesn't have to be 50%. You could have an experiment where only 10% of your samples get the treatment.
Randomisation annihilates bias by making the potential outcomes independent of the treatment.
$
(Y_0, Y_1) \perp\!\!\!\perp T
$
This can be confusing at first. If the outcome is independent of the treatment, doesn't that mean the treatment has no effect? Well, yes! But notice I'm not talking about the outcomes. Rather, I'm talking about the **potential** outcomes. A potential outcome is how the outcome **would have been** under the treatment (\\(Y_1\\)) or under the control (\\(Y_0\\)). In randomized trials, we **don't** want the outcome to be independent of the treatment, since we think the treatment causes the outcome. But we want the **potential** outcomes to be independent of the treatment.

Saying that the potential outcomes are independent of the treatment is saying that they would be, in expectation, the same in the treatment and the control group. In simpler terms, it means that treated and control are comparable, or that knowing the treatment assignment doesn't give me any information on how the outcome was before the treatment. Consequently, \\((Y_0, Y_1)\perp T\\) means that the treatment is the only thing generating a difference between the outcome in the treated and in the control. To see this, notice that independence implies precisely that
$
E[Y_0|T=0]=E[Y_0|T=1]=E[Y_0]
$
Which, as we've seen, makes it so that
$
E[Y|T=1] - E[Y|T=0] = E[Y_1 - Y_0]=ATE
$
So, randomization gives us a way to use a simple difference in means between treatment and control and call that the treatment effect.
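A minimal simulation makes this concrete (the numbers here are purely illustrative, not from any study): we construct potential outcomes with a known effect, randomize the treatment, and check that the simple difference in means recovers the ATE.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical potential outcomes: everyone's Y1 is exactly 2 above Y0,
# so the true ATE is 2 by construction.
y0 = rng.normal(10, 3, n)
y1 = y0 + 2

# Random assignment makes (Y0, Y1) independent of T...
t = rng.binomial(1, 0.5, n)
y = np.where(t == 1, y1, y0)

# ...so the simple difference in means recovers the ATE.
ate_hat = y[t == 1].mean() - y[t == 0].mean()
print(ate_hat)  # ~2.0
```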
## In a School Far, Far Away
In the year 2020, the Coronavirus pandemic forced businesses to adapt to social distancing. Delivery services became widespread, and big corporations shifted to a remote work strategy. Schools were no different: many started their own online repositories of classes.
Four months into the crisis, many are wondering whether the changes could be maintained. There is no question that online learning has its benefits. For one, it is cheaper, since it can save on real estate and transportation. It can also be more digital, leveraging world-class content from around the globe, not just from a fixed set of teachers. In spite of all that, we still need to answer whether online learning has a negative or positive impact on students' academic performance.
One way to answer that is to take students from schools that give mostly online classes and compare them with students from schools that give lectures in traditional classrooms. As we know by now, this is not the best approach. It could be that online schools attract only the well-disciplined students who would do better than average even if the classes were in person. In this case, we would have a positive bias, where the treated are academically better than the untreated: \\(E[Y_0|T=1] > E[Y_0|T=0]\\).
Or, on the flip side, it could be that online classes are cheaper and are composed mostly of less wealthy students, who might have to work besides studying. In this case, these students would do worse than those from the in-person schools even if they took in-person classes. If this were the case, we would have bias in the other direction, where the treated are academically worse than the untreated: \\(E[Y_0|T=1] < E[Y_0|T=0]\\).
So, although we could do simple comparisons, it wouldn't be very convincing. One way or another, we could never be sure if there wasn't any bias lurking around and masking our causal effect.

To solve that, we need to make the treated and untreated comparable: \\(E[Y_0|T=1] = E[Y_0|T=0]\\). One way to force this is by randomly assigning students to online or in-person classes. If we managed to do that, the treated and untreated would be, on average, the same, except for the treatment they receive.
Fortunately, some economists have done that for us. They randomized not the students, but the classes. Some were randomly assigned to have face-to-face lectures; others, to have only online lectures; and a third group, to have a blended format of both online and face-to-face lectures. At the end of the semester, they collected data on a standard exam.
Here is what the data looks like:
```
import pandas as pd
import numpy as np
data = pd.read_csv("./data/online_classroom.csv")
print(data.shape)
data.head()
```
We can see that we have 323 samples. It's not exactly big data, but it's something we can work with. To estimate the causal effect, we can simply compute the mean score for each of the treatment groups.
```
(data
.assign(class_format = np.select(
[data["format_ol"].astype(bool), data["format_blended"].astype(bool)],
["online", "blended"],
default="face_to_face"
))
.groupby(["class_format"])
.mean())
```
Yup. It's that simple. We can see that face to face classes yield a 78.54 average score, while online classes yield a 73.63 average score. Not so good news for the proponents of online learning. The \\(ATE\\) for online class is thus -4.91. This means that online classes cause students to perform about 5 points lower, on average. That's it. You don't need to worry that online classes might have poorer students that can't afford face to face classes or, for that matter, you don't have to worry that the students from the different treatments are different in any way other than the treatment they received. By design, the random experiment is made to wipe out those differences.
For this reason, a good sanity check to see if the randomisation was done right (or if you are looking at the right data) is to check if the treated are equal to the untreated in pre-treatment variables. In our data, we have information on gender and ethnicity, so we can see if they are equal across groups. For the `gender`, `asian`, `hispanic` and `white` variables, we can say that they look pretty similar. The `black` variable, however, looks a little bit different. This draws attention to what happens with a small dataset. Even under randomisation, it could be that, by chance, one group is different from another. In large samples, this difference tends to disappear.
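The balance check itself is just a group-by on the pre-treatment columns. Here is a sketch on a small synthetic stand-in (the column names follow the dataset used in this chapter; swap in `data = pd.read_csv("./data/online_classroom.csv")` to run it on the actual study):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the real data, for illustration only
rng = np.random.default_rng(1)
n = 300
data = pd.DataFrame({
    "gender": rng.binomial(1, 0.5, n),
    "white": rng.binomial(1, 0.6, n),
    "black": rng.binomial(1, 0.1, n),
    "format_ol": rng.binomial(1, 0.3, n),  # treatment indicator
})

# Pre-treatment variables should have similar means across treatment arms
balance = data.groupby("format_ol")[["gender", "white", "black"]].mean()
print(balance)
```

If randomisation worked, the two rows should look alike; large gaps on pre-treatment variables are a red flag, especially in small samples.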
## The Ideal Experiment
Randomised experiments are the most reliable way to get causal effects. It is a ridiculously simple technique and absurdly convincing. It is so powerful that most countries have it as a requirement for showing the effectiveness of new medicine. To make a terrible analogy, you can think of RCT as Aang, from Avatar: The Last Airbender, while other techniques are more like Sokka. He is cool and can pull some neat tricks here and there, but Aang can bend the four elements and connect with the spiritual world. Think of it this way: if we could, RCT would be all we would ever do to uncover causality. A well-designed RCT is the dream of any scientist.

Unfortunately, they tend to be either very expensive or just plain unethical. Sometimes, we simply can't control the assignment mechanism. Imagine yourself as a physician trying to estimate the effect of smoking during pregnancy on baby weight at birth. You can't simply force a random portion of moms to smoke during pregnancy. Or say you work for a big bank and need to estimate the impact of the credit line on customer churn. It would be too expensive to give random credit lines to your customers. Or suppose you want to understand the impact of increasing the minimum wage on unemployment. You can't simply assign countries to have one minimum wage or another.
We will later see how to lower the cost of randomisation by using conditional randomisation, but there is nothing we can do about unethical or unfeasible experiments. Still, whenever we deal with causal questions, it is worth thinking about the **ideal experiment**. Always ask yourself: if you could, **what would be the ideal experiment you would run to uncover this causal effect?** This tends to shed some light on how we can uncover the causal effect even without the ideal experiment.
## The Assignment Mechanism
In a randomised experiment, the mechanism that assigns units to one treatment or the other is, well, random. As we will see later, all causal inference techniques somehow try to identify the assignment mechanism of the treatment. When we know for sure how this mechanism behaves, causal inference will be much more certain, even if the assignment mechanism isn't random.
Unfortunately, the assignment mechanism can't be discovered by simply looking at the data. For example, if you have a dataset where higher education correlates with wealth, you can't know for sure which one caused which just by looking at the data. You will have to use your knowledge about how the world works to argue in favor of a plausible assignment mechanism: is it the case that schools educate people, making them more productive and hence leading them to higher-paying jobs? Or, if you are pessimistic about education, you could say that schools do nothing to increase productivity and this is just a spurious correlation, because only wealthy families can afford to have a kid get a higher degree.
In causal questions, we can usually argue both ways: that X causes Y, or that a third variable Z causes both X and Y, making the correlation between X and Y spurious. It is for this reason that knowing the assignment mechanism leads to a much more convincing causal answer.
## Key Ideas
We looked at how randomised experiments are the simplest and most effective way to uncover causal impact. They do this by making the treatment and control groups comparable. Unfortunately, we can't do randomised experiments all the time, but it is still useful to think about the ideal experiment we would run if we could.
Someone familiar with statistics might be protesting right now that I didn't look at the variance of my causal effect estimate. How can I know that a 4.91-point decrease is not due to chance? In other words, how can I know if the difference is statistically significant? And they would be right. Don't worry: I intend to review some statistical concepts next.
## References
I like to think of this entire series as a tribute to Joshua Angrist, Alberto Abadie and Christopher Walters for their amazing Econometrics class. Most of the ideas here are taken from their classes at the American Economic Association. Watching them is what is keeping me sane during this tough year of 2020.
* [Cross-Section Econometrics](https://www.aeaweb.org/conference/cont-ed/2017-webcasts)
* [Mastering Mostly Harmless Econometrics](https://www.aeaweb.org/conference/cont-ed/2020-webcasts)
I'd also like to reference the amazing books from Angrist. They have shown me that Econometrics, or 'Metrics as they call it, is not only extremely useful but also profoundly fun.
* [Mostly Harmless Econometrics](https://www.mostlyharmlesseconometrics.com/)
* [Mastering 'Metrics](https://www.masteringmetrics.com/)
My final reference is Miguel Hernan and Jamie Robins' book. It has been my trustworthy companion in the most thorny causal questions I had to answer.
* [Causal Inference Book](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)
The data used here is from a study of Alpert, William T., Kenneth A. Couch, and Oskar R. Harmon. 2016. ["A Randomized Assessment of Online Learning"](https://www.aeaweb.org/articles?id=10.1257/aer.p20161057). American Economic Review, 106 (5): 378-82.

```
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from scipy.stats import poisson, norm
def compute_scaling_ratio(mu_drain,mu_demand,drift_sd,init_state):
drain_time = init_state/(mu_drain-mu_demand)
accum_std = drift_sd*np.sqrt(drain_time)
ratio = accum_std/init_state
return ratio
def compute_workloads(arrival_buffer,inter_buffer,drain_buffer):
workload_1= arrival_buffer/(mu_drain/2)+(inter_buffer+drain_buffer)/(mu_drain)
workload_2 = (inter_buffer+arrival_buffer)/(mu_fast)
return workload_1, workload_2
def compute_draining_times(arrival_buffer,inter_buffer,drain_buffer):
workload_1, workload_2 = compute_workloads(arrival_buffer,inter_buffer,drain_buffer)
drain_time_1= workload_1/(1-mu_demand*2/mu_drain)
drain_time_2 = workload_2/(1-mu_demand/mu_fast)
return drain_time_1, drain_time_2
def simulate_single_buffer_pull(feed_sequence,
demand_sequence,
h_thres,
init_state,
flow,
init_with_zeros = False):
demand_buffer = np.zeros(len(feed_sequence)+1)
demand_buffer[0] = init_state if not init_with_zeros else 0
for i,(f,d) in enumerate(zip(feed_sequence,demand_sequence)):
if demand_buffer[i] > h_thres:
f = 0
demand_buffer[i+1] = demand_buffer[i]+f-d
return demand_buffer
def simulate_double_buffer_pull(feed_sequence_1,
feed_sequence_2,
demand_sequence_1,
demand_sequence_2,
h_thres_1,
h_thres_2,
sf_thres_1,
sf_thres_2,
sf_1):
buffer_1 = np.zeros(len(feed_sequence_1)+1)
buffer_2 = np.zeros(len(feed_sequence_1)+1)
buffer_1[0] = 300
buffer_2[0] = 200
for i,(f1,f2,d1,d2) in enumerate(zip(feed_sequence_1,feed_sequence_2,demand_sequence_1,demand_sequence_2)):
z1 = 0
z2 = 0
if sf_1:
if buffer_2[i] <= sf_thres_2:
z1 = 0
z2 = 1
if buffer_1[i] <= sf_thres_1:
z1 = 1
z2 = 0
else:
if buffer_1[i] <= sf_thres_1:
z1 = 1
z2 = 0
if buffer_2[i] <= sf_thres_2:
z1 = 0
z2 = 1
if buffer_2[i] <= h_thres_2 and z1 == 0:
z2 = 1
if buffer_1[i] <= h_thres_1 and z2 == 0:
z1 = 1
#if i % 2 == 0:
# z1 = 1
# z2 = 0
#else:
# z1 = 0
# z2 = 1
#if buffer_2[i] > h_thres_2:
# z2 = 0
#if buffer_1[i] > h_thres_1:
# z1 = 0
assert z1+z2 < 2
buffer_1[i+1] = buffer_1[i]+z1*f1-d1
buffer_2[i+1] = buffer_2[i]+z2*f2-d2
return buffer_1,buffer_2
def simulate_tandem_buffer_pull(feed_sequence_1,
feed_sequence_2,
demand_sequence,
h_thres_1,
h_thres_2):
buffer_1 = np.zeros(len(feed_sequence_1)+1)
buffer_2 = np.zeros(len(feed_sequence_1)+1)
buffer_1[0] = h_thres_1
buffer_2[0] = 0
for i,(f1,f2,d) in enumerate(zip(feed_sequence_1,feed_sequence_2,demand_sequence)):
z1 = 1
z2 = 1
if buffer_2[i] > h_thres_2:
z2 = 0
if buffer_1[i] > h_thres_1:
z1 = 0
f2 = min(f2,buffer_1[i])
assert z1*f1 <= 1
assert z2*f2 <= 1
buffer_1[i+1] = buffer_1[i]+z1*f1-z2*f2
assert buffer_1[i+1] >= 0
buffer_2[i+1] = buffer_2[i]+z2*f2-d
return buffer_1,buffer_2
mu_demand = 0.33
mu_feed_1 = 0.34
mu_feed_2 = 0.34
duration = int(1e6)
np.random.seed(100)
demand_seq = np.random.binomial(1,mu_demand,duration)
feed_seq_1 = np.random.binomial(1,mu_feed_1,duration)
feed_seq_2 = np.random.binomial(1,mu_feed_2,duration)
c_s = 1
c_d = 10
0.33/0.34
0.33/(0.005),0.33/(0.5-0.33)
buffer_1,buffer_2 = simulate_tandem_buffer_pull(feed_seq_1,feed_seq_2,demand_seq,50,55)
h_optimal = np.percentile(-buffer_2,1000/11)
h_range = range(20,80,5)
# use float arrays: np.zeros_like(range(...)) would be integer-typed and truncate the costs
deficit_cost = np.zeros(len(h_range))
surplus_cost = np.zeros(len(h_range))
sf_cost = np.zeros(len(h_range))
for i,h1 in enumerate(h_range):
print(i)
buffer_1,buffer_2 = simulate_tandem_buffer_pull(feed_seq_1,feed_seq_2,demand_seq,h1,0)
h_optimal = np.percentile(-buffer_2,1000/11)
surplus = np.where(buffer_2+h_optimal >= 0,buffer_2+h_optimal,0)
deficit = np.where(buffer_2+h_optimal < 0,buffer_2+h_optimal,0)
deficit_cost[i] = np.sum(-deficit)*c_d
surplus_cost[i] = np.sum(surplus)*c_s
sf_cost[i] = np.sum(buffer_1)*c_s
mu_range = np.arange(0.34,0.55,0.01)
deficit_cost = np.zeros_like(mu_range)
surplus_cost = np.zeros_like(mu_range)
sf_cost = np.zeros_like(mu_range)
for i,mu in enumerate(mu_range):
print(i)
np.random.seed(100)
feed_seq_1 = np.random.binomial(1,mu,duration)
buffer_1,buffer_2 = simulate_tandem_buffer_pull(feed_seq_1,feed_seq_2,demand_seq,500,0)
a = np.percentile(buffer_1,1)
print(a)
h_optimal = np.percentile(-buffer_2,1000/11)
surplus = np.where(buffer_2+h_optimal >= 0,buffer_2+h_optimal,0)
deficit = np.where(buffer_2+h_optimal < 0,buffer_2+h_optimal,0)
deficit_cost[i] = np.sum(-deficit)*c_d
surplus_cost[i] = np.sum(surplus)*c_s
sf_cost[i] = np.sum(np.maximum(buffer_1-a,0))
#cost = np.sum(surplus)*c_s + np.sum(-deficit)*c_d + np.sum(buffer_1)*c_s
mu_range
plt.plot((mu_range-0.33)/0.01,sf_cost/min(sf_cost))
plt.plot((mu_range-0.33)/0.01,sf_cost/min(sf_cost),".")
plt.figure(figsize=(8,6))
plt.plot(0.33/mu_range,sf_cost/min(sf_cost))
plt.plot(0.33/mu_range,sf_cost/min(sf_cost),".")
plt.vlines(0.97,0,10,label="mu_2 load = 0.97")
plt.xlabel("mu_1 load")
plt.ylabel("Relative cost")
plt.legend()
h_optimal = np.percentile(-buffer_2,1000/11)
a = []
for i in range(-10,10):
surplus = np.where(buffer_2+h_optimal+i >= 0,buffer_2+h_optimal+i,0)
deficit = np.where(buffer_2+h_optimal+i < 0,buffer_2+h_optimal+i,0)
a.append(np.sum(-deficit)*c_d + np.sum(surplus)*c_s)
plt.plot(range(-10,10),a)
sf_cost
h_optimal = np.percentile(-buffer_2,1000/11)
#plt.plot(h_range,sf_cost)
norm = np.min(deficit_cost+surplus_cost)  # normalisation constant used below (shadows scipy's norm import)
plt.figure(figsize=(10,8))
plt.fill_between(h_range,sf_cost/norm,label="safety_stocks_cost")
plt.plot(h_range,sf_cost/norm,"k.")
plt.fill_between(h_range,(surplus_cost+sf_cost)/norm,sf_cost/norm,label="surplus cost")
plt.plot(h_range,(surplus_cost+sf_cost)/norm,"k.")
plt.fill_between(h_range,(surplus_cost+sf_cost)/norm,(deficit_cost+surplus_cost+sf_cost)/norm,label="deficit cost")
plt.plot(h_range,(deficit_cost+surplus_cost+sf_cost)/norm,"k.")
plt.hlines(1,20,75,"k",label="infinite supply reference")
plt.legend()
max(buffer_2)
np.percentile(buffer_2,1)
a = plt.hist(-buffer_2,bins=range(-1,600))
a = plt.hist(-buffer_2,bins=range(-1,200))
h_optimal
plt.figure(figsize=(10,4))
plt.plot(buffer_2,label="buffer 2")
plt.plot(buffer_1,label="buffer 1")
#plt.plot(buffer_2,label="buffer 2")
plt.legend()
x3 = buffer_2
x2 = buffer_2
x1 = buffer_2
c,d = np.histogram(x2,bins=range(-150,0))
#plt.plot(b[:-1],np.log(a))
plt.plot(d[:-1],np.log10(c))
plt.plot(e[:-1],np.log10(f))
plt.figure(figsize=(10,4))
#a = plt.hist(buffer_2,bins=range(-150,10),label = "30")
a = plt.hist(-x3,bins=range(-1,150),alpha = 1,label="non-limiting")
a = plt.hist(-x2,bins=range(-1,150),alpha = 0.75,label="45")
a = plt.hist(-x1,bins=range(-1,200),alpha=0.5,label="25")
plt.legend()
plt.figure(figsize=(10,4))
#a = plt.hist(buffer_2,bins=range(-150,10),label = "30")
a = plt.hist(-x1,bins=range(-1,250),alpha=1,label="25")
a = plt.hist(-x2,bins=range(-1,150),alpha = 0.75,label="45")
a = plt.hist(-x3,bins=range(-1,150),alpha = 0.5,label="non-limiting")
#a = plt.hist(buffer_2,bins=range(-100,50))
plt.legend()
mu_demand = 0.33
mu_feed = 0.68
duration = int(1e5)
demand_seq_1 = np.random.binomial(1,mu_demand,duration)
demand_seq_2 = np.random.binomial(1,mu_demand,duration)
feed_seq_1 = np.random.binomial(1,mu_feed,duration)
feed_seq_2 = np.random.binomial(1,mu_feed,duration)
buffer_1,buffer_2 = simulate_double_buffer_pull(feed_seq_1,feed_seq_2,
demand_seq_1, demand_seq_2,
30,3,3,3,sf_1=True)
plt.figure(figsize=(10,4))
plt.plot(buffer_1,label="buffer 1")
plt.plot(buffer_2,label="buffer 2")
plt.legend()
buffer_1,buffer_2 = simulate_double_buffer_pull(feed_seq_1,feed_seq_2,
demand_seq_1, demand_seq_2,
30,3,3,3,False)
plt.figure(figsize=(10,4))
plt.plot(buffer_1,label="buffer 1")
plt.plot(buffer_2,label="buffer 2")
plt.legend()
buffer_1,buffer_2 = simulate_double_buffer_pull(feed_seq_1,feed_seq_2,
demand_seq_1, demand_seq_2,
3,30,3,3,True)
plt.figure(figsize=(10,4))
plt.plot(buffer_1,label="buffer 1")
plt.plot(buffer_2,label="buffer 2")
plt.legend()
buffer_1,buffer_2 = simulate_double_buffer_pull(feed_seq_1,feed_seq_2,
demand_seq_1, demand_seq_2,
3,30,3,3,False)
plt.figure(figsize=(10,4))
plt.plot(buffer_1,label="buffer 1")
plt.plot(buffer_2,label="buffer 2")
plt.legend()
mu_demand = 0.33
mu_feed = 0.34
c_s = 1
c_d = 10
duration = int(1e5)
demand_seq = np.random.binomial(1,mu_demand,duration)
feed_seq = np.random.binomial(1,mu_feed,duration)
demand_buffer = simulate_single_buffer_pull(feed_seq,demand_seq,60,0,False)
surplus = np.where(demand_buffer >= 0,demand_buffer,0)
deficit = np.where(demand_buffer < 0,demand_buffer,0)
plt.plot(demand_buffer)
#plt.plot(demand_buffer[:100000])
plt.figure(figsize=(8,6))
plt.fill_between(np.arange(len(surplus)),surplus,0)
plt.fill_between(np.arange(len(surplus)),deficit,0)
cost = np.sum(surplus)*c_s + np.sum(-deficit)*c_d
cost_record = []
hedging = np.arange(-5,140,5)
hedging = np.arange(40,70,1)
for h in hedging:
demand_buffer = simulate_single_buffer_pull(feed_seq,demand_seq,h,h,False)
surplus = np.where(demand_buffer >= 0,demand_buffer,0)
deficit = np.where(demand_buffer < 0,demand_buffer,0)
cost = np.sum(surplus)*c_s + np.sum(-deficit)*c_d
cost_record.append(cost)
f,ax = plt.subplots(2,1,figsize=(10,8),sharex=True)
ax[0].hist(-demand_buffer,bins=range(-20,140),density=True)
ax[0].vlines(h_optimal,0,0.04)
ax[1].plot(hedging,np.array(cost_record)/min(cost_record))
ax[1].plot(hedging,np.array(cost_record)/min(cost_record),"o")
ax[1].vlines(h_optimal,1,1.1)
f,ax = plt.subplots(2,1,figsize=(10,8),sharex=True)
ax[0].hist(-demand_buffer,bins=range(-20,140),density=True)
ax[0].vlines(h_optimal,0,0.04)
ax[1].plot(hedging,np.array(cost_record)/min(cost_record))
ax[1].plot(hedging,np.array(cost_record)/min(cost_record),"o")
ax[1].vlines(h_optimal,1,5)
1000/11
h_optimal
h_optimal = np.percentile(-demand_buffer,1000/11)
plt.hist(-demand_buffer,bins=range(120),density=True)
plt.vlines(h_optimal,0,0.04)
h_optimal = np.percentile(-demand_buffer,1000/11)
#np.percentile(-demand_buffer,1000/11)
c1 = 1
c2 = 2
c3 = 1
c1 = 1.5
c2 = 1
c3 = 2
c1 = 0.1
c2 = 1
c3 = 1
costs = {}
betas = {}
sc_ratios = {}
eff_rates = {}
slopes = {}
hedging_levels = {}
percentile = 4
hedging = np.concatenate((np.arange(0,20,2),np.arange(20,150,10)))
from sklearn.linear_model import LinearRegression
hedging = np.arange(2,40,2)
arrivals = []
#scale_list = [0.1,0.3,1,3]
#scale_list = [0.2,0.4,0.5,0.6,0.7]
scale_list = np.arange(0.35,0.37,0.001)
scale_list = np.arange(0.32,0.333,0.001)
scale_list = np.arange(0.335,0.345,0.001)
scale_list = [0.33]
hedging = np.concatenate((np.arange(0,20,2),np.arange(20,150,10)))
#hedging = np.arange(0,150,10)
hedging = np.arange(50,600,50)
hedging = np.arange(5,100,5)
#hedging = np.arange(7,8,1)
#hedging = [beta_h]
#hedging = np.arange(30,200,10)
#hedging = np.arange(20,500,50)
#hedging = np.concatenate((np.arange(50,500,50),np.arange(500,11000,2000)))
hedging = np.arange(100,11000,1000)
#hedging = np.arange(2,100,5)
#hedging = np.arange(0,100,5)
#offset = -100000
#hedging = [100]
# settings for scale = 3
dur_star = 10000
omega_star = 7.5645
#init_state_star = 210000
#dur_star = int(4500000*1)
duration = dur_star
for scale in reversed(scale_list):
    print(scale)
    scale_costs = []
    scale_rates = []
    #init_state = 7e4*scale
    mu_demand = 0.33
    mu_drain = mu_transfer = 0.35*2
    mu_fast = 0.34
    slack_capacity_h = mu_fast-mu_drain/2
    std_h = np.sqrt(mu_drain*(1-mu_drain)+mu_fast*(1-mu_fast))
    omega_h = std_h/slack_capacity_h
    print(slack_capacity_h,std_h,omega_h)
    print()
    slack_capacity_ss = mu_fast-mu_drain
    std_ss = np.sqrt(mu_fast*(1-mu_fast)+mu_drain*(1-mu_drain))
    omega_ss = std_ss/slack_capacity_ss
    duration = int(1000000 * 1.5 * 0.5)
    print(scale,duration)
    #print(scale,omega)
    #continue
    #print(omega/omega_star)
    #duration = int((omega/omega_star)**2*dur_star)
    init_state = 10000
    #init_state = 0
    n_seeds = 1#100
    beta_h = (1/4)*(percentile**2)*omega_h# + slack_capacity/std
    beta_ss = (1/4)*(percentile**2)*omega_ss
    scaling_ratio = compute_scaling_ratio(mu_drain,mu_demand,std_h,init_state)
    print(scaling_ratio)
    hedge = True
    for h in reversed(hedging):
        print(h)
        if hedge:
            h_thres = h
            ss_thres = mu_drain+beta_ss*std_ss
        else:
            h_thres = beta_h*std_ss
            ss_thres = mu_drain+h*std_ss
        print(h_thres)
        #thres = 2*mu_drain+h*np.sqrt(mu_drain+mu_fast)
        #thres = h*10
        buf_1_samples = []
        buf_2_samples = []
        buf_3_samples = []
        np.random.seed(7)
        for _ in range(n_seeds):
            demand_seq = np.random.binomial(1,mu_demand,duration)
            transfer_seq = np.random.binomial(1,mu_transfer,duration)
            fast_seq = np.random.binomial(1,mu_fast,duration)
            drain_seq = np.random.binomial(1,mu_drain,duration)
            arrival_buffer,inter_buffer,drain_buffer = simulate_simple_reentrant_line(
                demand_seq[:duration],
                transfer_seq[:duration],
                fast_seq[:duration],
                drain_seq[:duration],
                h_thres=h_thres,
                ss_thres=5,
                init_state=init_state,
                flow=False,
                init_with_zeros=False)
            #try:
            #    end = np.where((arrival_buffer < 10) & (inter_buffer < 10))[0][0]
            #except:
            end = len(arrival_buffer)
            buf_1_samples.append(sum(arrival_buffer[0:end]*c1))
            buf_2_samples.append(sum(inter_buffer[0:end]*c2))
            buf_3_samples.append(sum(drain_buffer[0:end]*c3))
            #arrivals.append(arrival_buffer)
        scale_costs.append((np.mean(buf_1_samples),np.mean(buf_2_samples),np.mean(buf_3_samples)))
        #scale_rates.append(zeta*mu_transfer)
        #scale_costs.append(sum(arrival_buffer*c1))
        '''
        a,b = np.histogram(inter_buffer,bins=40,normed=True)
        b = b.reshape(-1,1)
        clf = LinearRegression()
        clf.fit(b[:-15,:],np.log(a[:-14]))
        plt.plot(b[:-15],np.log(a[:-14]),label=scale)
        slopes[scale] = clf.coef_
        '''
    costs[scale] = np.array(scale_costs[::-1])
    betas[scale] = beta_h
    sc_ratios[scale] = scaling_ratio
    eff_rates[scale] = np.array(scale_rates[::-1])
plt.legend()
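# --- Hedged sketch: the threshold scaling used in the sweep above -----------
# The runs above set beta = (1/4) * percentile**2 * omega with
# omega = std / slack_capacity (Bernoulli service variances). A standalone
# recomputation with illustrative rates (not the notebook's own parameters):
import numpy as np

def beta_sketch(mu_drain, mu_fast, percentile=3.1):
    # spare rate of the fast server relative to half the drain rate
    slack = mu_fast - mu_drain / 2
    std = np.sqrt(mu_drain * (1 - mu_drain) + mu_fast * (1 - mu_fast))
    return 0.25 * percentile**2 * (std / slack)

beta_example = beta_sketch(0.6, 0.35)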
costs
#arrivals_2 = arrivals
plt.plot(np.cumsum(np.array(arrivals_10).mean(axis=0)))
plt.plot(np.cumsum(np.array(arrivals).mean(axis=0)),"r")
#arrivals_10 = arrivals
#plt.plot(np.array(arrivals_30).mean(axis=0)[:2000])
plt.plot(np.array(arrivals_10).mean(axis=0)[:200000])
plt.plot(np.array(arrivals).mean(axis=0)[:20000],"r")
no_h_cost = ref_cost
no_h_cost
min_t_cost/no_h_cost
no_h_cost/min_t_cost
bad_cost = ref_cost
bad_cost/ref_cost
scale = 0.33
beta = beta_ss#betas[scale]
sc_ratio = sc_ratios[scale]
cost_1,cost_2,cost_3 = zip(*costs[scale])
cost_1=np.array(cost_1)
cost_2=np.array(cost_2)
cost_3=np.array(cost_3)
t_cost = np.array(cost_1)+np.array(cost_2)+np.array(cost_3)
#t_cost = np.array(cost_2)+np.array(cost_3)
#t_cost = np.array(cost_3)
min_t_cost = min(t_cost)
#t_cost = t_cost/min_t_cost
#ref_cost = no_ss_cost
ref_cost = min_t_cost
#ref_cost = no_h_cost
t_cost = t_cost/ref_cost
cost_1=np.array(cost_1)/ref_cost
cost_2=np.array(cost_2)/ref_cost
cost_3=np.array(cost_3)/ref_cost
indexes = np.where(t_cost < 100)[0]
plt.figure(figsize=(16,8))
plt.plot(hedging[indexes],cost_1[indexes],label="Buffer 1 cost")
#plt.plot(hedging[indexes],cost_1[indexes],"o")
#plt.plot(hedging[indexes],cost_2[indexes])
plt.fill_between(hedging[indexes],cost_1[indexes]+cost_2[indexes],cost_1[indexes],alpha=0.5,label="Buffer 2 cost")
plt.fill_between(hedging[indexes],t_cost[indexes],cost_1[indexes]+cost_2[indexes],alpha=0.5, label="Buffer 3 cost")
plt.plot(hedging[indexes],t_cost[indexes],label="Total cost")
plt.plot(hedging[indexes],t_cost[indexes],".")
#plt.vlines(10,min(t_cost[indexes]),max(t_cost[indexes]),label="empirical hedging")
plt.hlines(1.03,min(hedging[indexes]),max(hedging[indexes]),color="r",label="+3% margin")
#plt.hlines(0.97,min(hedging[indexes]),max(hedging[indexes]),color="r",label="-3% margin")
#plt.title("{:.3f}".format(sc_ratio))
plt.ylabel("Relative cumulative cost")
plt.xlabel("Hedging threshold h2")
plt.legend()
set(np.array([1,2]))
scale = 0.33
beta = beta_ss#betas[scale]
sc_ratio = sc_ratios[scale]
cost_1,cost_2,cost_3 = zip(*costs[scale])
cost_1=np.array(cost_1)
cost_2=np.array(cost_2)
cost_3=np.array(cost_3)
t_cost = np.array(cost_1)+np.array(cost_2)+np.array(cost_3)
#t_cost = np.array(cost_2)+np.array(cost_3)
#t_cost = np.array(cost_3)
min_t_cost = min(t_cost)
#t_cost = t_cost/min_t_cost
#ref_cost = no_ss_cost
#ref_cost = min_t_cost
t_cost = t_cost/ref_cost
cost_1=np.array(cost_1)/ref_cost
cost_2=np.array(cost_2)/ref_cost
cost_3=np.array(cost_3)/ref_cost
indexes = np.where(t_cost < 100)[0]
plt.figure(figsize=(12,8))
plt.plot(hedging[indexes],cost_1[indexes],label="Buffer 1 cost")
#plt.plot(hedging[indexes],cost_1[indexes],"o")
#plt.plot(hedging[indexes],cost_2[indexes])
plt.fill_between(hedging[indexes],cost_1[indexes]+cost_2[indexes],cost_1[indexes],alpha=0.5,label="Buffer 2 cost")
plt.fill_between(hedging[indexes],t_cost[indexes],cost_1[indexes]+cost_2[indexes],alpha=0.5, label="Buffer 3 cost")
plt.plot(hedging[indexes],t_cost[indexes],label="Total cost")
plt.plot(hedging[indexes],t_cost[indexes],".")
#plt.vlines(10,min(t_cost[indexes]),max(t_cost[indexes]),label="empirical hedging")
plt.hlines(1.03*min(t_cost),min(hedging[indexes]),max(hedging[indexes]),color="r",label="+3% margin")
#plt.hlines(0.97,min(hedging[indexes]),max(hedging[indexes]),color="r",label="-3% margin")
#plt.title("{:.3f}".format(sc_ratio))
plt.ylabel("Relative cumulative cost")
plt.xlabel("Hedging threshold")
plt.legend()
ref_cost
scale = 0.33
beta = beta_ss#betas[scale]
sc_ratio = sc_ratios[scale]
cost_1,cost_2,cost_3 = zip(*costs[scale])
cost_1=np.array(cost_1)
cost_2=np.array(cost_2)
cost_3=np.array(cost_3)
t_cost = np.array(cost_1)+np.array(cost_2)+np.array(cost_3)
t_cost = np.array(cost_2)+np.array(cost_3)
#t_cost = np.array(cost_3)
min_t_cost = min(t_cost)
#t_cost = t_cost/min_t_cost
#ref_cost = no_ss_cost
ref_cost = min_t_cost
t_cost = t_cost/ref_cost
cost_1=np.array(cost_1)/ref_cost
cost_2=np.array(cost_2)/ref_cost
cost_3=np.array(cost_3)/ref_cost
indexes = np.where(t_cost < 100)[0]
plt.figure(figsize=(12,4))
#plt.plot(hedging[indexes],cost_1[indexes],label="Buffer 1 cost")
#plt.plot(hedging[indexes],cost_1[indexes],"o")
#plt.plot(hedging[indexes],cost_2[indexes])
#plt.fill_between(hedging[indexes],cost_1[indexes]+cost_2[indexes],cost_1[indexes],alpha=0.5,label="Buffer 2 cost")
#plt.fill_between(hedging[indexes],t_cost[indexes],cost_1[indexes]+cost_2[indexes],alpha=0.5, label="Buffer 3 cost")
plt.plot(hedging[indexes],t_cost[indexes],label="Total cost")
plt.plot(hedging[indexes],t_cost[indexes],".")
#plt.vlines(10,min(t_cost[indexes]),max(t_cost[indexes]),label="empirical hedging")
plt.hlines(1.03,min(hedging[indexes]),max(hedging[indexes]),color="r",label="+3% margin")
#plt.hlines(0.97,min(hedging[indexes]),max(hedging[indexes]),color="r",label="-3% margin")
#plt.title("{:.3f}".format(sc_ratio))
plt.ylabel("Relative cumulative cost")
plt.xlabel("Hedging threshold")
plt.legend()
(2120/(1-0.33/0.345))/(2770/(1-0.33/0.35))
np.sum(costs[0.33])/no_ss_cost
no_ss_cost = np.sum(costs[0.33])
no_ss_cost
plt.plot(inter_buffer[:10000], label="buffer 3")
np.sum(inter_buffer == 0)
-1.02*2977.9+1.05*2874.3
-1.02*2972.+1.05*2868.6
2874.3*0.35,2868.6*0.35
988+18,983+21
2/0.35
plt.plot(inter_buffer[8000:10000], label="buffer 3")
end = 100000
plt.figure(figsize=(16,6))
#plt.plot(arrival_buffer[:end],label="buffer 1")
plt.plot(inter_buffer[30000:end], label="buffer 2")
plt.plot(drain_buffer[30000:end], label="buffer 3")
plt.legend()
plt.hist(inter_buffer,bins=np.arange(150))
plt.hist(drain_buffer,bins=np.arange(150))
end = 80000
plt.figure(figsize=(16,6))
#plt.plot(arrival_buffer[:end],label="buffer 1")
plt.plot(inter_buffer[:end], label="buffer 2")
#plt.plot(drain_buffer[:end], label="buffer 3")
plt.legend()
plt.figure(figsize=(16,6))
plt.plot(arrival_buffer,label="buffer 1")
plt.plot(inter_buffer, label="buffer 2")
plt.plot(drain_buffer, label="buffer 3")
#plt.hlines(3,0,15000, label = "ss")
#plt.hlines(5,0,15000, label = "ss")
plt.legend()
f,ax = plt.subplots(2,1,figsize=(16,10))
ax[0].plot(arrival_buffer,label="buffer 1")
ax[0].plot(inter_buffer, label="buffer 2")
ax[0].plot(drain_buffer, label="buffer 3")
ax[0].set_ylabel("Buffer level")
ax[0].legend()
drain_time_1,drain_time_2=compute_draining_times(arrival_buffer,inter_buffer,drain_buffer)
ax[1].plot(drain_time_1,label="resource 1")
ax[1].plot(drain_time_2,label="resource 2")
ax[1].set_ylabel("Draining time")
ax[1].legend()
#ax[1].gca().set_aspect("equal")
drain_time_1,drain_time_2=compute_draining_times(arrival_buffer,inter_buffer,drain_buffer)
workload_1,workload_2 = compute_workloads(arrival_buffer,inter_buffer,drain_buffer)
np.array([i for i in range(10)])
np.where(np.array([i for i in range(10)]) > 5)[0]
plt.figure(figsize=(8,8))
plt.plot(drain_time_1,label="1")
plt.plot(drain_time_2)
plt.legend()
plt.gca().set_aspect("equal")
plt.plot(workload_1)
plt.plot(workload_2)
#plt.figure(figsize=(16,6))
f,ax = plt.subplots(2,1,figsize=(16,8))
ax[0].plot(arrival_buffer,label="buffer 1")
ax[0].plot(inter_buffer, label="buffer 2")
ax[0].plot(drain_buffer, label="buffer 3")
ax[1].plot(arrival_buffer*c1+inter_buffer*c2+drain_buffer*c3,label="Total cost")
#plt.hlines(3,0,15000, label = "ss")
#plt.hlines(5,0,15000, label = "ss")
ax[0].legend()
ax[1].legend()
cost_2 = arrival_buffer*c1+inter_buffer*c2+drain_buffer*c3
plt.plot(arrival_buffer*c1+inter_buffer*c2+drain_buffer*c3)
plt.plot(cost_2)
plt.plot(cost_1)
plt.plot(cost_2)
plt.figure(figsize=(16,6))
plt.plot(arrival_buffer,label="buffer 1")
plt.plot(inter_buffer, label="buffer 2")
plt.plot(drain_buffer, label="buffer 3")
#plt.hlines(3,0,15000, label = "ss")
#plt.hlines(5,0,15000, label = "ss")
plt.legend()
workload = arrival_buffer/(mu_drain/2)+(inter_buffer+drain_buffer)/(mu_drain)
workload_2 = (inter_buffer+arrival_buffer)/(mu_fast)
plt.plot(workload[:100000],workload_2[:100000])
min_drain_time = workload/(1-mu_demand*2/mu_drain)
np.mean(min_drain_time),np.median(min_drain_time)
np.mean(min_drain_time > 1000)
a,b,_ = plt.hist(min_drain_time,bins=np.arange(0,14000,50),density=True)
np.argmax(a)
a[:20]
b[:20]
b[17]
np.mean(arrival_buffer)
np.mean(inter_buffer)
plt.figure(figsize=(10,8))
dur = np.arange(54000,65000)
#dur = np.arange(300000)
plt.fill_between(dur,drain_buffer[dur],label = "buffer 3")
#plt.plot(dur,drain_buffer[dur])
plt.fill_between(dur,-inter_buffer[dur],label='-buffer 2')
#plt.fill_between(dur,-inter_buffer[dur],np.minimum(-inter_buffer[dur],-offset),label='-buffer 2')
#plt.plot(dur,-inter_buffer[dur])
#plt.plot(dur,a[dur]-offset,"k",alpha=0.5)
plt.ylim(top=50,bottom=-100)
plt.legend()
np.mean(arrival_buffer)
a = drain_buffer
std_h
np.percentile(inter_buffer,33)
350*0.16
inter_buffer_ss = inter_buffer
plt.figure(figsize=(10,6))
plt.hist(inter_buffer,bins=np.arange(150),density=True,label="long drain")
plt.vlines(np.percentile(inter_buffer,33),0,0.04,label="long_drain")
plt.figure(figsize=(10,6))
plt.hist(inter_buffer,bins=np.arange(150),density=True,label="long drain")
plt.hist(inter_buffer_ss,bins=np.arange(150),density=True,label="steady state",alpha=0.7)
plt.xlabel("Buffer 2 level")
plt.ylabel("Occupancy probability")
h = np.percentile(inter_buffer,33)
plt.vlines(np.percentile(inter_buffer,33),0,0.04,label="long_drain")
plt.vlines(np.percentile(inter_buffer_ss,33),0,0.04,label="steady state",color="r")
plt.legend()
np.percentile(150-drain_buffer,33)
1/(omega_h*std_h)
plt.plot(drain_buffer)
-np.log(0.33)/(0.01*3.5)
b,a = zip(*slopes.items())
clf = LinearRegression()
clf.fit(np.array(b).reshape(-1,1),a)
clf.coef_
plt.plot(np.array(b),a,".")
plt.plot(np.array(b),clf.predict(np.array(b).reshape(-1,1)))
np.histogram(inter_buffer,bins=50)
beta_ss = (1/4)*(percentile**2)*omega_ss
beta_ss
mu_demand,mu_transfer,mu_fast,mu_drain
std_h**2*(1-omega_h*2*(c3/c2))/(4*slack_capacity_h)
plt.plot(arrival_buffer[:1000000])
np.sum(drain_buffer)/(26*len(drain_buffer))
#
#plt.plot(arrival_buffer[:1000000])
#plt.plot(inter_buffer[:1000000])
plt.plot(drain_buffer[:1000000])
plt.plot(inter_buffer[:1000000],label='safety stocks')
plt.legend()
#
#plt.plot(arrival_buffer[:1000000])
#plt.plot(inter_buffer[:1000000])
#plt.plot(drain_buffer[:1000000])
plt.plot(inter_buffer[:100000000],label='safety stocks')
plt.legend()
max(drain_buffer)- np.percentile(drain_buffer,66)
np.percentile(inter_buffer,33)
plt.plot(inter_buffer)
plt.plot(np.arange(199,-1,-1),0.035*np.exp(np.arange(200)*-0.035))
std_h
(0.7*omega_h*std_h)
s = 1/(0.7*omega_h*std_h)
s
1/clf.coef_
plt.hist(drain_buffer,bins=40,density=True)
#plt.plot(b[15:,:],clf.predict(b[15:,:]))
np.log(0.66)/s
plt.figure(figsize=(10,6))
a,b,_ = plt.hist(drain_buffer,bins=30,density=True)
b = b.reshape(-1,1)
clf = LinearRegression()
clf.fit(b[15:,:],np.log(a[14:]))
print(clf.coef_)
#plt.plot(np.arange(149,-1,-1),clf.coef_[0]*np.exp(np.arange(150)*-clf.coef_[0]))
plt.plot(np.arange(149,-1,-1),s*np.exp(np.arange(150)*-s),linewidth=2)
plt.vlines(150+np.log(0.66)/s,0,0.04,color="r")
plt.xlabel("Buffer 3 level")
plt.ylabel("Occupancy probability")
np.percentile(a,66)
1/omega_h
len(a)
len(b)
0.33-0.34
3/200
mu_demand/mu_fast
mu_transfer/2/mu_fast
5/140
-np.log(1-0.33)/(3.5*0.015)
plt.plot(b[10:],np.log(a[9:]))
#
#plt.plot(arrival_buffer[:1000000])
#plt.plot(inter_buffer[:1000000])
plt.plot(-drain_buffer[:1000000])
plt.plot(inter_buffer[:1000000],label='safety stocks')
plt.legend()
beta_h*std_h/(beta_ss*std_ss)
beta_h
plt.figure(figsize=(14,8))
run = np.arange(10000)
plt.fill_between(run,inter_buffer[run],label="buffer 2")
plt.fill_between(run,drain_buffer[run],label="buffer 3")
plt.legend()
omega_h
cost_3
scale = 0.33
beta = beta_ss#betas[scale]
sc_ratio = sc_ratios[scale]
cost_1,cost_2,cost_3 = zip(*costs[scale])
cost_1=np.array(cost_1)
cost_2=np.array(cost_2)
cost_3=np.array(cost_3)
t_cost = np.array(cost_1)+np.array(cost_2)+np.array(cost_3)
min_t_cost = min(t_cost)
t_cost = t_cost/min_t_cost
cost_1=np.array(cost_1)/min_t_cost
cost_2=np.array(cost_2)/min_t_cost
cost_3=np.array(cost_3)/min_t_cost
indexes = np.where(t_cost < 5)[0]
plt.figure(figsize=(12,8))
plt.plot(hedging[indexes],cost_1[indexes],label="Buffer 1 cost")
#plt.plot(hedging[indexes],cost_1[indexes],"o")
#plt.plot(hedging[indexes],cost_2[indexes])
plt.fill_between(hedging[indexes],cost_1[indexes]+cost_2[indexes],cost_1[indexes],alpha=0.1)
plt.fill_between(hedging[indexes],t_cost[indexes],cost_1[indexes]+cost_2[indexes],alpha=0.1)
plt.plot(hedging[indexes],t_cost[indexes],label="Total cost")
plt.plot(hedging[indexes],t_cost[indexes],".")
plt.vlines(beta,min(t_cost[indexes]),max(t_cost[indexes]),label="beta")
plt.hlines(1.03,min(hedging[indexes]),max(hedging[indexes]),color="r",label="3% margin")
plt.title("{:.3f}".format(sc_ratio))
plt.ylabel("Relative cumulative cost")
plt.xlabel("Threshold (xSTD)")
plt.legend()
scale = 0.33
beta = betas[scale]
sc_ratio = sc_ratios[scale]
cost_1,cost_2,cost_3 = zip(*costs[scale])
cost_1=np.array(cost_1)
cost_2=np.array(cost_2)
cost_3=np.array(cost_3)
t_cost = np.array(cost_1)+np.array(cost_2)+np.array(cost_3)
min_t_cost = min(t_cost)
t_cost = t_cost/min_t_cost
cost_1=np.array(cost_1)/min_t_cost
cost_2=np.array(cost_2)/min_t_cost
cost_3=np.array(cost_3)/min_t_cost
indexes = np.where(t_cost < 2e6)[0]
plt.figure(figsize=(12,8))
plt.plot(hedging[indexes],cost_1[indexes],label="Buffer 1 cost")
#plt.plot(hedging[indexes],cost_1[indexes],"o")
#plt.plot(hedging[indexes],cost_2[indexes])
plt.fill_between(hedging[indexes],cost_1[indexes]+cost_2[indexes],cost_1[indexes],alpha=0.1)
plt.fill_between(hedging[indexes],t_cost[indexes],cost_1[indexes]+cost_2[indexes],alpha=0.1)
plt.plot(hedging[indexes],t_cost[indexes],label="Total cost")
plt.plot(hedging[indexes],t_cost[indexes],".")
plt.vlines(beta,min(t_cost[indexes]),max(t_cost[indexes]),label="beta")
plt.hlines(1.03,min(hedging[indexes]),max(hedging[indexes]),color="r",label="3% margin")
plt.title("{:.3f}".format(sc_ratio))
plt.ylabel("Relative cumulative cost")
plt.xlabel("Threshold (xSTD)")
plt.legend()
scale = 3
beta = betas[scale]
sc_ratio = sc_ratios[scale]
cost = costs[scale]
r_cost = cost/min(cost)
indexes = np.where(r_cost < 1.2)[0]
plt.plot(hedging[indexes],r_cost[indexes])
plt.plot(hedging[indexes],r_cost[indexes],".")
plt.vlines(beta,min(r_cost[indexes]),max(r_cost[indexes]))
plt.hlines(1.03,min(hedging[indexes]),max(hedging[indexes]),color="r")
plt.title("{:.3f}".format(sc_ratio))
plt.plot(hedging,costs[1])
mu_demand
percentile = 3.1
scale = 0.1
cost = []
rates = []
hedging = np.arange(30,200,100)
f,ax = plt.subplots(3,1,figsize=(16,8))
duration = 10000
plot_range = range(0,duration)
mu_demand = 30*scale
mu_drain = mu_demand*1.02
mu_transfer = mu_drain + (mu_drain-mu_demand)*1
slack_capacity = mu_transfer-mu_drain
std = np.sqrt(mu_drain+mu_transfer)
omega = std/slack_capacity
beta = (1/4)*(percentile**2)*(std/slack_capacity)
hedging=[beta/4,beta/2,beta]
#hedging=[beta]
init_state = (mu_drain-mu_demand)*duration*0.6
np.random.seed(5)
demand_seq = np.random.poisson(mu_demand,duration)
transfer_seq = np.random.poisson(mu_transfer,duration)
drain_seq = np.random.poisson(mu_drain,duration)
cumul =False
for h in reversed(hedging):
    thres = 2*mu_drain+h*np.sqrt(mu_drain+mu_transfer)
    #thres = h*10
    arrival_buffer,drain_buffer,zeta = simulate_reflected_random_walk_repeat(
        demand_seq[:duration],
        transfer_seq[:duration],
        drain_seq[:duration],
        thres,
        init_state=init_state,
        flow=False)
    #print(np.where(drain_buffer == 0))
    cost.append(sum(arrival_buffer*c1)+sum(drain_buffer*c2))
    rates.append(zeta*mu_transfer)
    #plt.plot(drain_buffer[j*1000:(j+1)*1000]*c2+arrival_buffer[j*1000:(j+1)*1000]*c1)
    if cumul:
        ax[1].plot(np.cumsum(drain_buffer)[plot_range],label=int(h))
        ax[0].plot(np.cumsum(arrival_buffer)[plot_range])
        ax[2].plot(np.cumsum(arrival_buffer*c1+drain_buffer*c2)[plot_range])
    else:
        ax[1].plot((drain_buffer)[plot_range])
        #ax[1].plot(np.ones(len(plot_range))*thres,".-")
        ax[0].plot((arrival_buffer)[plot_range],label="{} * {}".format(int(h),int(std)))
        ax[2].plot((arrival_buffer*c1+drain_buffer*c2)[plot_range])
    #print(np.min(np.diff((arrival_buffer[1500:2000]*c1+drain_buffer[1500:2000]*c2))))
ax[0].set_ylabel("Items in buffer 1")
ax[1].set_ylabel("Items in buffer 2")
ax[2].set_ylabel("Total cost")
f.legend()
slack_capacity
std/slack_capacity
mu_drain*c2
thres*c2
np.sum(drain_buffer == 0)
mu_demand
rates
mu_demand
mu_transfer
time_horizon
offset/std
offset
percentile = 1.645
#percentile = 0
percentile = 1.96
#percentile = 2.33
percentile = 3.1
#percentile = 1
#percentile = 7
slack_capacity = mu_transfer-mu_drain
std = np.sqrt(mu_drain+mu_transfer)
time_horizon = (percentile*std)**2/(2*slack_capacity)**2
offset = time_horizon*(-slack_capacity) + percentile*std*np.sqrt(time_horizon)
time_horizon = int(np.ceil(time_horizon))
offset = int(np.ceil(offset))
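# --- Hedged check: where `time_horizon` and `offset` come from --------------
# Assumption: the horizon above is the maximizer of the percentile-sigma
# deficit envelope g(t) = percentile*std*sqrt(t) - slack*t, and `offset` is
# g evaluated at that maximizer. Numeric check with illustrative values
# (chk_* names are hypothetical, not the notebook's parameters):
import numpy as np

chk_p, chk_std, chk_slack = 3.1, 1.2, 0.05
chk_t_star = (chk_p * chk_std)**2 / (2 * chk_slack)**2
chk_g = lambda t: chk_p * chk_std * np.sqrt(t) - chk_slack * t
chk_grid = np.linspace(1.0, 4 * chk_t_star, 10000)
# the analytic maximizer should dominate every grid point
assert chk_g(chk_t_star) >= chk_g(chk_grid).max() - 1e-6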
percentile*np.sqrt(3)
slack_capacity = mu_transfer-mu_drain
std = np.sqrt(mu_drain+mu_transfer)
beta = (1/4)*(percentile**2)*(std/slack_capacity) + slack_capacity/std
offset
std
slack_capacity
slack_capacity/std
slack_capacity
0.5*percentile*std/np.sqrt(time_horizon)
offset/std + slack_capacity/std
scaling_ratio = compute_scaling_ratio(mu_drain,mu_demand,std,init_state)
beta
min_cost = min(cost)
hedging = np.array(hedging)
r_cost = np.array([c/min_cost for c in cost[::-1]])
indexes = np.where(r_cost < 1.2)[0]
plt.plot(hedging[indexes],r_cost[indexes])
plt.plot(hedging[indexes],r_cost[indexes],".")
plt.vlines(beta,min(r_cost[indexes]),max(r_cost[indexes]))
plt.title("{:.3f}".format(scaling_ratio))
cost = []
hedging = np.arange(30,60,5)
init_state = 7e4
#hedging = np.arange(1,7)
j = 1
f,ax = plt.subplots(3,1,figsize=(16,8))
#plot_range = range(4000,5000)
duration = 100000
plot_range = range(0,10000)
plot_range = range(0,200)
cumul =False
for h in reversed(hedging):
    thres = mu_drain+h*np.sqrt(mu_drain+mu_transfer)
    #thres = h*10
    arrival_buffer,drain_buffer,zeta = simulate_reflected_random_walk_repeat(demand_seq[:duration],
                                                                             transfer_seq[:duration],
                                                                             drain_seq[:duration],
                                                                             thres,init_state=init_state,
                                                                             flow=False)
    cost.append(sum(arrival_buffer*c1)+sum(drain_buffer*c2))
    #plt.plot(drain_buffer[j*1000:(j+1)*1000]*c2+arrival_buffer[j*1000:(j+1)*1000]*c1)
    if cumul:
        ax[1].plot(np.cumsum(drain_buffer*c2)[plot_range],label=h)
        ax[0].plot(np.cumsum(arrival_buffer*c1)[plot_range])
        ax[2].plot(np.cumsum(arrival_buffer*c1+drain_buffer*c2)[plot_range])
    else:
        ax[1].plot((drain_buffer*c2)[plot_range],label=h)
        ax[0].plot((arrival_buffer*c1)[plot_range])
        ax[2].plot((arrival_buffer*c1+drain_buffer*c2)[plot_range])
    #print(np.min(np.diff((arrival_buffer[1500:2000]*c1+drain_buffer[1500:2000]*c2))))
f.legend()
min_cost = min(cost)
plt.plot(hedging,[c/min_cost for c in cost[::-1]])
plt.plot(hedging,[c/min_cost for c in cost[::-1]],".")
cost = []
hedging = np.arange(5,70,5)
init_state = 1e4
#hedging = np.arange(1,7)
j = 1
f,ax = plt.subplots(3,1,figsize=(16,8))
#plot_range = range(4000,5000)
duration = 6000
plot_range = range(0,6000)
#plot_range = range(0,300)
cumul =False
for h in reversed(hedging):
    thres = mu_drain+h*np.sqrt(mu_drain)
    #thres = h*10
    arrival_buffer,drain_buffer,zeta = simulate_reflected_random_walk(demand_seq[:duration],transfer_seq[:duration],drain_seq[:duration],thres,init_state=init_state)
    cost.append(sum(arrival_buffer*c1)+sum(drain_buffer*c2))
    #plt.plot(drain_buffer[j*1000:(j+1)*1000]*c2+arrival_buffer[j*1000:(j+1)*1000]*c1)
    if cumul:
        ax[1].plot(np.cumsum(drain_buffer*c2)[plot_range],label=h)
        ax[0].plot(np.cumsum(arrival_buffer*c1)[plot_range])
        ax[2].plot(np.cumsum(arrival_buffer*c1+drain_buffer*c2)[plot_range])
    else:
        ax[1].plot((drain_buffer*c2)[plot_range],label=h)
        ax[0].plot((arrival_buffer*c1)[plot_range])
        ax[2].plot((arrival_buffer*c1+drain_buffer*c2)[plot_range])
    #print(np.min(np.diff((arrival_buffer[1500:2000]*c1+drain_buffer[1500:2000]*c2))))
thres = 1e6
#thres = h*10
arrival_buffer,drain_buffer,_ = simulate_reflected_random_walk(demand_seq[:duration],transfer_seq[:duration],drain_seq[:duration],thres,init_state=init_state)
#plt.plot(drain_buffer[j*1000:(j+1)*1000]*c2+arrival_buffer[j*1000:(j+1)*1000]*c1)
if cumul:
    #ax[1].plot(np.cumsum(drain_buffer*c2)[plot_range],label="e")
    ax[0].plot(np.cumsum(arrival_buffer*c1)[plot_range],label="e")
    #ax[2].plot(np.cumsum(arrival_buffer*c1+drain_buffer*c2)[plot_range])
else:
    #ax[1].plot((drain_buffer*c2)[plot_range],label="e")
    ax[0].plot((arrival_buffer*c1)[plot_range],label="e")
    #ax[2].plot((arrival_buffer*c1+drain_buffer*c2)[plot_range])
f.legend()
(mu_transfer-mu_demand)/((zeta*mu_transfer)-mu_demand)
min_cost = min(cost)
plt.plot(hedging,[c/min_cost for c in cost[::-1]])
plt.plot(hedging,[c/min_cost for c in cost[::-1]],".")
h = []
for i in np.arange(0.94,0.949,0.001):
    h.append(1/(1-i))
plt.plot(np.arange(0.94,0.949,0.001)/0.94,[i/min(h) for i in h])
min_cost = min(cost)
cost[0]-cost[1]
plt.plot(drain_buffer[:300])
plt.plot(arrival_buffer[:600])
plt.plot(buffer_seq[:1000])
sum(buffer_seq)
np.percentile((supply_seq-demand_seq)[(supply_seq-demand_seq) < 0],0.01)
plt.plot(np.cumsum(supply_seq)-np.cumsum(demand_seq))
percentile = 1.645
#percentile = 0
#percentile = 1.96
#percentile = 2.33
slack_capacity = mu_supply-mu_demand
time_horizon = (percentile**2)*mu_supply/(2*slack_capacity**2)
offset = time_horizon*(-slack_capacity) + percentile* np.sqrt(mu_supply*2*time_horizon)
print(time_horizon*2)
time_horizon = int(np.ceil(time_horizon))
offset = int(np.ceil(offset))
time_horizon = (percentile**2)*mu_supply*2/slack_capacity**2
time_horizon = int(np.ceil(time_horizon))
y = []
for d in range(time_horizon):
    y.append(d*(slack_capacity) - percentile* np.sqrt(mu_supply*2*d))
y_1 = y
time_horizon_1 = time_horizon
y_2 = y
time_horizon_2 = time_horizon
time_horizon/time_horizon_1
1.96/1.645
plt.plot(range(time_horizon),y)
plt.plot(range(time_horizon_1),y_1)
plt.plot(range(time_horizon_2),y_2)
y
time_horizon
offset
thres = poisson.ppf(0.95,mu_demand)
#thres = 0
thres = poisson.ppf(0.5,mu_demand)
def idle_supply(demand_seq,supply_seq,offset):
    inv_pos = offset
    idle_supply_seq = np.zeros_like(supply_seq)
    idle_count = 0
    for i,(d,s) in enumerate(zip(demand_seq,supply_seq)):
        if inv_pos > thres+offset:
            s = 0
            idle_count += 1
        idle_supply_seq[i] = s
        inv_pos += s-d
    #print(idle_count/len(supply_seq))
    return idle_supply_seq
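# --- Hedged demo: the idling policy with `thres` passed explicitly ----------
# idle_supply() above reads `thres` from the enclosing scope; this sketch
# makes it a parameter. The demo_* names and Poisson rates are illustrative
# only, not the notebook's own data:
import numpy as np

def idle_supply_demo(demand_seq, supply_seq, offset, thres):
    inv_pos = offset
    served = np.zeros_like(supply_seq)
    for i, (d, s) in enumerate(zip(demand_seq, supply_seq)):
        if inv_pos > thres + offset:   # enough stock on hand: idle the supplier
            s = 0
        served[i] = s
        inv_pos += s - d
    return served

demo_rng = np.random.default_rng(0)
demo_demand = demo_rng.poisson(3, 1000)
demo_supply = demo_rng.poisson(4, 1000)
demo_served = idle_supply_demo(demo_demand, demo_supply, offset=20, thres=5)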
def idle_supply_time_horizon(demand_seq,supply_seq,offset,time_horizon):
    # note: `thres` is read from the enclosing scope
    inv_pos = offset
    inv_pos_seq = np.zeros_like(supply_seq)
    days_count = 0
    for i,(d,s) in enumerate(zip(demand_seq,supply_seq)):
        if (inv_pos > thres+offset) and days_count >= time_horizon:
            s = 0
            days_count = 0
        inv_pos += s-d
        inv_pos_seq[i] = inv_pos
        days_count += 1
    return inv_pos_seq
def idle_supply_time_horizon_smooth(demand_seq,supply_seq,offset,time_horizon):
    inv_pos = offset
    inv_pos_seq = np.zeros_like(supply_seq)
    days_count = 0
    just_idled = False
    for i,(d,s) in enumerate(zip(demand_seq,supply_seq)):
        surplus = inv_pos - offset
        if surplus > 0 and ((days_count >= time_horizon) or just_idled):
            if d > surplus:
                s = d-surplus
            else:
                s = 0
            days_count = 0
            just_idled = True
        else:
            just_idled = False
        inv_pos += s-d
        inv_pos_seq[i] = inv_pos
        if not just_idled:
            days_count += 1
    return inv_pos_seq
def work_supply_time_horizon_smooth(demand_seq,supply_seq,offset,time_horizon):
    inv_pos = offset
    inv_pos_seq = np.zeros_like(supply_seq)
    days_count = 0
    just_idled = True
    for i,(d,s) in enumerate(zip(demand_seq,supply_seq)):
        surplus = inv_pos - offset
        if surplus > 0 and ((days_count >= time_horizon) or just_idled):
            days_count = 0
            if d > surplus:
                s = d-surplus
            else:
                s = 0
            just_idled = True
        else:
            days_count += 1
            just_idled = False
        inv_pos += s-d
        inv_pos_seq[i] = inv_pos
    return inv_pos_seq
def idle_supply_smooth(demand_seq,supply_seq,offset):
    inv_pos = offset
    idle_supply_seq = np.zeros_like(supply_seq)
    idle_count = 0
    inv_pos_array = np.zeros_like(supply_seq)
    for i,(d,s) in enumerate(zip(demand_seq,supply_seq)):
        surplus = inv_pos - offset
        if surplus > 0:
            if d > surplus:
                s = d-surplus
            else:
                s = 0
                idle_count += 1
        idle_supply_seq[i] = s
        inv_pos += s-d
        inv_pos = min(inv_pos,offset)
        inv_pos_array[i] = inv_pos
    #print(idle_count/len(supply_seq))
    print(inv_pos)
    return inv_pos_array
slack_capacity/np.sqrt(2*mu_demand)
point = 1400
plt.plot(inv_pos_seq[point-100:point+500])
point = 1400
plt.plot(inv_pos_seq[point-100:point+100])
offset
time_horizon*slack_capacity/2
slack_capacity
inv_pos_seq = work_supply_time_horizon_smooth(demand_seq,supply_seq,53,12)
print(np.mean(inv_pos_seq < 0))
inv_pos_seq = idle_supply_time_horizon_smooth(demand_seq,supply_seq,53,12)
print(np.mean(inv_pos_seq < 0))
stocks = inv_pos_seq.copy()
stocks[inv_pos_seq < 0] = 0
np.mean(stocks)
inv_pos_seq = idle_supply_time_horizon_smooth(demand_seq,supply_seq,41,69)
print(np.mean(inv_pos_seq < 0))
stocks = inv_pos_seq.copy()
stocks[inv_pos_seq < 0] = 0
np.mean(stocks)
inv_pos_seq = idle_supply_time_horizon(demand_seq,supply_seq,offset,time_horizon)
print(np.mean(inv_pos_seq < 0))
#plt.plot(inv_pos_seq[827341-10:827341+10])
#plt.plot(inv_pos_seq[827341-10:827341+10],".")
stocks = inv_pos_seq.copy()
stocks[inv_pos_seq < 0] = 0
np.mean(stocks)
inv_pos_seq = idle_supply_smooth(demand_seq,supply_seq, np.ceil(offset))
#inv_pos_seq = offset + np.cumsum(idle_supply_seq)-np.cumsum(demand_seq)
print(np.mean(inv_pos_seq < 0))
#plt.plot(inv_pos_seq[827341-10:827341+10])
#plt.plot(inv_pos_seq[827341-10:827341+10],".")
plt.plot(inv_pos_seq[:1200])
n_sims = 100000
demand_sum = np.random.poisson(mu_demand*np.ceil(time_horizon),n_sims)
supply_sum = np.random.poisson(mu_supply*np.ceil(time_horizon),n_sims)
print(np.mean((demand_sum-supply_sum) > np.ceil(offset)))
offset+time_horizon*slack_capacity
1001 % 100
offset
time_horizon*slack_capacity/2
np.random.seed(500)
n_sims = 100000
#n_sims = 20
stockouts = []
last_day_stockouts = []
last_day_stockouts_vals = []
ave_inventories = []
sim_time_horizon = time_horizon
for i in range(n_sims):
    demand = np.random.poisson(mu_demand,sim_time_horizon)
    supply = np.random.poisson(mu_supply,sim_time_horizon)
    inv_pos_seq = offset + np.cumsum(supply)-np.cumsum(demand)
    stockouts.append(np.sum(inv_pos_seq < 0))
    last_day_stockouts.append(inv_pos_seq[-1] < offset)
    if last_day_stockouts[-1]:
        last_day_stockouts_vals.append(inv_pos_seq[-1]-offset)
    ave_inventories.append(np.mean(inv_pos_seq))
    if i % 10000 == 0:
        plt.plot(inv_pos_seq)
sum(stockouts)/(sim_time_horizon*n_sims),np.sum(last_day_stockouts)/(n_sims),np.mean(ave_inventories)
offset
np.median(last_day_stockouts_vals)
for offset in range(200):
    stock_out_probs = []
    for d in range(1,time_horizon+1):
        stock_out_prob = norm.cdf(-offset,slack_capacity*d,np.sqrt(2*mu_supply*d))
        stock_out_probs.append(stock_out_prob)
    overal_stockout_prob = np.mean(stock_out_probs)
    #print(overal_stockout_prob)
    if overal_stockout_prob < 0.05:
        break
time_horizon
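# --- Hedged sketch: the base-stock search above as a reusable function ------
# The loop above picks the smallest `offset` whose stockout probability,
# averaged over days 1..time_horizon, falls below 5%. Same idea as a
# function; parameter values here are illustrative, not the notebook's:
import numpy as np
from scipy.stats import norm

def min_offset_sketch(slack, variance, horizon, target=0.05, max_offset=500):
    for off in range(max_offset):
        probs = [norm.cdf(-off, slack * d, np.sqrt(variance * d))
                 for d in range(1, horizon + 1)]
        if np.mean(probs) < target:
            return off
    return None  # no offset below max_offset meets the target

offset_example = min_offset_sketch(0.5, 60.0, 100)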
def get_percentile_deficit(cycle_dur,slack_capacity,variance,percentile = 0.5):
    mu = slack_capacity*cycle_dur
    std = np.sqrt(variance*cycle_dur)
    cum_deficit_prob = norm.cdf(0,mu,std)
    cum_percentile = 0
    prev_cum_prob = cum_deficit_prob
    for i in range(10000):
        cum_prob = norm.cdf(-i,mu,std)
        prob = (prev_cum_prob - cum_prob)/cum_deficit_prob
        cum_percentile += prob
        if cum_percentile >= percentile:
            return i
        prev_cum_prob = cum_prob
a = get_percentile_deficit(time_horizon/4,slack_capacity,2*mu_supply)
#get_percentile_deficit(slack_capacity,2*mu_supply,time_horizon)
print(a)
def compute_recovery_time(slack_capacity,variance,deficit,bound = 2.33):
    dur = ((bound*np.sqrt(variance)+np.sqrt(bound**2*variance+4*slack_capacity*deficit))/(2*slack_capacity))**2
    return int(np.ceil(dur))
print(compute_recovery_time(slack_capacity,2*mu_supply,a))
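# --- Hedged check of the compute_recovery_time formula ----------------------
# The formula is the larger root (in sqrt(t)) of
# slack*t - bound*sqrt(variance*t) = deficit, rounded up, so the returned
# duration should cover the deficit even on the bound-sigma lower envelope.
# Standalone check with illustrative rec_* values (inlined so it runs alone):
import numpy as np

rec_slack, rec_var, rec_deficit, rec_bound = 0.05, 60.0, 100.0, 2.33
rec_t = int(np.ceil(((rec_bound * np.sqrt(rec_var)
                      + np.sqrt(rec_bound**2 * rec_var + 4 * rec_slack * rec_deficit))
                     / (2 * rec_slack))**2))
assert rec_slack * rec_t - rec_bound * np.sqrt(rec_var * rec_t) >= rec_deficit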
def get_average_stockout_prob(duration,slack_capacity,variance,start):
    stock_out_probs = []
    for d in range(1,duration+1):
        stock_out_prob = norm.cdf(0,start+slack_capacity*d,np.sqrt(variance*d))
        stock_out_probs.append(stock_out_prob)
    average_stockout_prob = np.mean(stock_out_probs)
    return average_stockout_prob
def compute_stockout_prob_and_inventory_cost(cycle_dur,slack_capacity,variance,offset):
    mu = slack_capacity*cycle_dur
    std = np.sqrt(variance*cycle_dur)
    cum_deficit_prob = norm.cdf(0,mu,std)
    #print(cum_deficit_prob)
    deficit = get_percentile_deficit(cycle_dur,slack_capacity,variance,0.95)
    #print(deficit)
    rec_dur = compute_recovery_time(slack_capacity,variance,deficit)
    #print(rec_dur)
    cycle_stockout_prob = get_average_stockout_prob(cycle_dur,slack_capacity,variance,offset)
    rec_dur = int(np.ceil(deficit/slack_capacity))
    print(rec_dur)
    rec_stockout_prob = get_average_stockout_prob(rec_dur,slack_capacity,variance,offset-deficit)
    #print(cycle_stockout_prob,rec_stockout_prob)
    effective_duration = (cycle_dur+cum_deficit_prob*rec_dur)
    #print(cycle_dur/effective_duration)
    overall_stockout_prob = (cycle_dur*cycle_stockout_prob+cum_deficit_prob*rec_dur*rec_stockout_prob)/effective_duration
    overall_inventory_cost = (cycle_dur*(0.5*slack_capacity*cycle_dur+offset)+cum_deficit_prob*rec_dur*(0.5*slack_capacity*rec_dur+offset-deficit))/effective_duration
    #print(overall_inventory_cost)
    return overall_stockout_prob,overall_inventory_cost
time_horizon/4
variance = 2*mu_supply
min_inv_cost = np.inf
min_cycle_dur = None
min_offset = None
for cycle_dur in range(1,int(time_horizon)):
for offset in range(200):
overall_stockout_prob,inv_cost = compute_stockout_prob_and_inventory_cost(cycle_dur,slack_capacity,variance,offset)
#print(overall_stockout_prob)
if overall_stockout_prob < 0.05:
break
print(cycle_dur,inv_cost)
if inv_cost < min_inv_cost:
print(cycle_dur)
min_inv_cost = inv_cost
min_cycle_dur = cycle_dur
min_offset = offset
print(offset)
min_offset
min_cycle_dur
min_inv_cost
time_horizon
int(time_horizon)*(0.5*slack_capacity)
inv_cost
print(overall_stockout_prob)
overall_stockout_prob
probs = []
deficit = 10000
for i in range(deficit):
v = -offset-i
mu = slack_capacity*time_horizon
std = np.sqrt(2*mu_supply*time_horizon)
probs.append(norm.cdf(v,mu,std))
#print(i,probs[-1])
np.sum(-np.diff(probs)*np.arange(1,deficit)/norm.cdf(-offset,mu,std))
offsets = []
for dur in range(1,int(time_horizon)+1):
for offset in range(200):
stock_out_probs = []
for d in range(1,dur+1):
stock_out_prob = norm.cdf(-offset,slack_capacity*d,np.sqrt(2*mu_supply*d))
stock_out_probs.append(stock_out_prob)
overall_stockout_prob = np.mean(stock_out_probs)
#print(overall_stockout_prob)
if overall_stockout_prob < 0.05:
break
#print(dur,offset)
offsets.append(offset)
plt.plot(offsets)
norm.cdf(-offset,mu,std)
offset
mu
(-np.diff(probs)/norm.cdf(-offset,mu,std))[:50]
-np.diff(probs)/norm.cdf(-offset,mu,std)
offset
np.sum(last_day_stockouts)/(n_sims)
sum(stockouts)/(int(np.ceil(time_horizon))*n_sims)
np.sum(last_day_stockouts)
np.sum(last_day_stockouts)/sum(stockouts)
np.mean(stockouts)
stockouts = np.array(stockouts)
np.median(stockouts[stockouts > 0])
plt.hist(stockouts[stockouts > 0])
plt.hist(stockouts,bins=range(0,50,2))
2*time_horizon
norm.cdf(-offset,slack_capacity*10,np.sqrt(mu_supply*10))
int(np.ceil(time_horizon))
```
| github_jupyter |
```
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
num_epochs = 100
total_series_length = 50000
truncated_backprop_length = 15
state_size = 4
num_classes = 2
echo_step = 3
batch_size = 5
num_batches = total_series_length//batch_size//truncated_backprop_length
from numpy import *
from matplotlib.pyplot import *
import scipy.linalg
# load the data
trainLen = 2000
testLen = 2000
initLen = 100
data = loadtxt('MackeyGlass_t17.txt')
# plot some of it
figure(10).clear()
plot(data[0:1000])
title('A sample of data')
# generate the ESN reservoir
inSize = outSize = 1
resSize = 1000
a = 0.3 # leaking rate
random.seed(42)
Win = (random.rand(resSize,1+inSize)-0.5) * 1
W = random.rand(resSize,resSize)-0.5
# Option 1 - direct scaling (quick&dirty, reservoir-specific):
#W *= 0.135
# Option 2 - normalizing and setting spectral radius (correct, slow):
print ('Computing spectral radius...',)
rhoW = max(abs(linalg.eig(W)[0]))
print ('done.')
W *= 1.25 / rhoW
# allocated memory for the design (collected states) matrix
X = zeros((1+inSize+resSize,trainLen-initLen))
# set the corresponding target matrix directly
Yt = data[None,initLen+1:trainLen+1]
# run the reservoir with the data and collect X
x = zeros((resSize,1))
for t in range(trainLen):
u = data[t]
x = (1-a)*x + a*tanh( dot( Win, vstack((1,u)) ) + dot( W, x ) )
if t >= initLen:
X[:,t-initLen] = vstack((1,u,x))[:,0]
# train the output
reg = 1e-8 # regularization coefficient
X_T = X.T
Wout = dot( dot(Yt,X_T), linalg.inv( dot(X,X_T) + \
reg*eye(1+inSize+resSize) ) )
#Wout = dot( Yt, linalg.pinv(X) )
# run the trained ESN in a generative mode. no need to initialize here,
# because x is initialized with training data and we continue from there.
Y = zeros((outSize,testLen))
u = data[trainLen]
for t in range(testLen):
x = (1-a)*x + a*tanh( dot( Win, vstack((1,u)) ) + dot( W, x ) )
y = dot( Wout, vstack((1,u,x)) )
Y[:,t] = y
# generative mode:
u = y
## this would be a predictive mode:
#u = data[trainLen+t+1]
# compute MSE for the first errorLen time steps
errorLen = 500
mse = sum( square( data[trainLen+1:trainLen+errorLen+1] - Y[0,0:errorLen] ) ) / errorLen
print ('MSE = ' + str( mse ))
# plot some signals
figure(1).clear()
plot( data[trainLen+1:trainLen+testLen+1], 'g' )
plot( Y.T, 'b' )
title('Target and generated signals $y(n)$ starting at $n=0$')
legend(['Target signal', 'Free-running predicted signal'])
figure(2).clear()
plot( X[0:20,0:200].T )
title('Some reservoir activations $\mathbf{x}(n)$')
figure(3).clear()
bar( range(1+inSize+resSize), Wout.T )
title('Output weights $\mathbf{W}^{out}$')
show()
```
| github_jupyter |
<!--NOTEBOOK_HEADER-->
*This notebook contains course material from [CBE30338](https://jckantor.github.io/CBE30338)
by Jeffrey Kantor (jeff at nd.edu); the content is available [on Github](https://github.com/jckantor/CBE30338.git).
The text is released under the [CC-BY-NC-ND-4.0 license](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode),
and code is released under the [MIT license](https://opensource.org/licenses/MIT).*
<!--NAVIGATION-->
< [Interacting Tanks](http://nbviewer.jupyter.org/github/jckantor/CBE30338/blob/master/notebooks/03.07-Interacting-Tanks.ipynb) | [Contents](toc.ipynb) | [Modeling and Control of a Campus Outbreak of Coronavirus COVID-19](http://nbviewer.jupyter.org/github/jckantor/CBE30338/blob/master/notebooks/03.09-COVID-19.ipynb) ><p><a href="https://colab.research.google.com/github/jckantor/CBE30338/blob/master/notebooks/03.08-Manometer-Models-and-Dynamics.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a><p><a href="https://raw.githubusercontent.com/jckantor/CBE30338/master/notebooks/03.08-Manometer-Models-and-Dynamics.ipynb"><img align="left" src="https://img.shields.io/badge/Github-Download-blue.svg" alt="Download" title="Download Notebook"></a>
# Manometer Models and Dynamics
## Summary
This notebook demonstrates the modeling and interactive simulation of a u-tube manometer, a device that exhibits a variety of behaviors characteristic of a linear second order system. An interesting aspect of the problem is the opportunity for passive design of the dynamics of a measurement device.
## Learning Goals
* Develop linear differential equations models for mechanical systems from momentum/force balances.
* Describe role of position and velocity as state variables in a dynamic model.
* Describe undamped, underdamped, overdamped, and critically damped responses.
* Represent a second order system in standard form with natural frequency and damping factor.
* Describe second order response to sinusoidal input, and resonance.
* Construct a state space representation of a second order linear differential equation.
## Initializations
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from scipy import linalg as la
from ipywidgets import interact,interactive
from control.matlab import *
# scales for all subsequent plots
tmax = 20
ymin = -0.02
ymax = +0.02
axis = [0.0,tmax,ymin,ymax]
t = np.linspace(0.0,tmax,1000)
# physical properties
g = 9.8                # acceleration of gravity m/s^2
rho = 1000.0 # density of water kg/m^3
nu = 1.0e-6            # kinematic viscosity of water m^2/s
# system dimensions
L = 7 # meters
d = 0.08 # meters
```
## Model 1. Steady State Response to a Pressure Differential
For this first model we will assume that the ends of the u-tube are exposed to a pressure differential $\Delta P$. How does the level in the tubes change?
Consider a u-tube manometer of cross-sectional area $A$, filled with a liquid of density $\rho$, with a total liquid-column length of $L$. When the ends are open and exposed to the same environmental pressure $P$, the liquid levels in the two legs of the device reach the same level. We'll measure the levels in the tubes as a deviation $y$ from this equilibrium position.
At steady state the difference between the liquid levels in the two legs will be $h$. The static pressure difference is
$$\Delta P = \rho g h$$
or
$$h = \frac{\Delta P}{\rho g}$$
This is simple statics. Notice that neither the cross-sectional area nor the length of the liquid column matters. This is the rationale behind the common water level.

(By [Bd](https://de.wikipedia.org/wiki/User:Bd) at the [German language Wikipedia](https://de.wikipedia.org/wiki/), [CC BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/), [Link](https://commons.wikimedia.org/w/index.php?curid=46342405))
```
def model1(deltaP = 100.0):
h = deltaP/(rho*g)
plt.axis(axis)
plt.plot(plt.xlim(),[h,h])
plt.grid()
plt.xlabel('Time [sec]')
plt.ylabel('h [m]')
plt.title('dP = {0:5.1f} Pascals'.format(deltaP))
interact(model1,deltaP=(-200,200,20.0));
```
## Model 2. Dynamic Response with Negligible Viscosity
The second model for the manometer includes the dynamics associated with moving the mass $m$ of liquid held within the manometer. For this model we will measure displacement by $y$, the deviation of the liquid level in one leg from its equilibrium position.
The net force on the liquid column is due to the applied pressure differential, $A\Delta P$, and the gravitational restoring force due to the difference in liquid levels between the two arms of the manometer, $2 A \rho g y$, where $A$ is the cross-sectional area. From Newton's law
$$m \frac{d^2y}{dt^2} = A \Delta P - 2 A \rho g y$$
The mass of liquid is $m = \rho L A$ where $L$ is the total length of the liquid column. After canceling a common factor $A$, the result is an inhomogeneous linear second order differential equation
$$ \frac{d^2y}{dt^2} + \frac{2 g}{L} y = \frac{1}{\rho L} \Delta P$$
At steady state this model reduces to the static case outlined in model 1 above. The dynamic case corresponds to an undamped harmonic oscillator with an angular frequency
$$\omega = \sqrt{\frac{2 g}{L}}$$
For numerical solution using the scipy libraries, it is necessary to convert the second order differential equation to a system of first order differential equations.
$$\begin{align*}
\frac{dy}{dt} & = v \\
\frac{dv}{dt} & = -\frac{2g}{L} y + \frac{1}{\rho L} \Delta P
\end{align*}$$
```
def model2(deltaP=100, L = 7.0):
def deriv(X,t):
x,v = X
xdot = v
vdot = -(2*g/L)*x + deltaP/rho/L
return [xdot,vdot]
IC = [0,0]
w = np.sqrt(2*g/L)
print(" natural frequency = {0:0.1f} rad/sec".format(w))
print("period of oscillation = {0:0.1f} seconds".format(2*np.pi/w))
sol = odeint(deriv,IC,t)
plt.axis(axis)
plt.plot(t,sol)
plt.grid()
plt.xlabel('Time [sec]')
plt.ylabel('y [m], v[m/s]')
plt.title('dP = {0:5.1f} Pascals, L = {1:4.2f} meters'.format(deltaP,L))
plt.legend(['Position','Velocity'])
interact(model2, deltaP = (-200,200,1), L = (0.2,10,0.1));
```
## Model 3. Dynamic Response with Viscous Dissipation
This third model for manometer incorporates the energy loss due to viscous dissipation in fluid motion. The pressure drop due to the laminar flow of incompressible Newtonian fluid in a long pipe with circular cross-section is given by the Hagen-Poiseuille equation
$$\Delta P_{drag} = \frac{32 \mu L v}{d^2}$$
where $\mu$ is the dynamic viscosity and $d$ is pipe diameter. Doing a balance of forces acting on the fluid column
$$\rho AL\frac{d^2y}{dt^2} + \frac{32\mu L A}{d^2}v + 2 A \rho g y = A \Delta P$$
Denoting $\nu = \frac{\mu}{\rho}$ as the kinematic viscosity, substituting for velocity $\frac{dy}{dt} = v$ leaves
$$\frac{d^2y}{dt^2} + \frac{32 \nu }{d^2}\frac{dy}{dt} + \frac{2g}{L} y = \frac{1}{\rho L} \Delta P$$
This can be recast as a pair of first-order linear differential equations
$$\begin{align*}
\frac{dy}{dt} & = v \\
\frac{dv}{dt} & = -\frac{2g}{L} y - \frac{32 \nu }{d^2}v + \frac{1}{\rho L} \Delta P
\end{align*}$$
```
def model3(dP = 100.0, L = 7.0, d = 0.008):
def deriv(X,t):
y,v = X
ydot = v
vdot = -(2*g/L)*y - (32*nu/d**2)*v + dP/rho/L
return [ydot,vdot]
IC = [0,0]
sol = odeint(deriv,IC,t)
plt.axis(axis)
plt.plot(t,sol)
plt.grid()
plt.xlabel('Time [sec]')
plt.ylabel('y [m], v[m/s]')
plt.title('dP = {0:5.1f} Pascals, L = {1:4.2f} meters, d = {2:5.3f} meters'.format(dP,L,d))
plt.legend(['Position','Velocity'])
w = interactive(model3, dP=(-200,200,20), L = (0.2,30,0.1), d=(0.001,0.020,0.001));
w.children[2].readout_format = '.3f'
w
```
## Model 4. Second Order System in Standard Form
Standard form of a damped second order system is
$$\tau^2\frac{d^2y}{dt^2} + 2\zeta\tau\frac{dy}{dt} + y = K u(t)$$
Examples include buildings, car suspensions, and other mechanical structures.
$$\frac{d^2y}{dt^2} + \frac{32 \nu }{d^2}\frac{dy}{dt} + \frac{2g}{L} y = \frac{1}{\rho L} \Delta P$$
The first step is to normalize the zeroth order term in $y$ and compare to the second-order model in standard form
$$\underbrace{\frac{L}{2g}}_{\tau^2}\frac{d^2y}{dt^2} + \underbrace{\frac{16 \nu L}{g d^2}}_{2\zeta\tau}\frac{dy}{dt} + y = \underbrace{\frac{1}{2\rho g}}_K \underbrace{\Delta P}_{u(t)}$$
Solving for the coefficients in standard form
$$\begin{align*}
K & = \frac{1}{2\rho g}\\
\tau & = \sqrt{\frac{L}{2g}} \\
\zeta & = \frac{8\nu}{d^2}\sqrt{\frac{2L}{g}}
\end{align*}$$
#### Undamped ($\zeta = 0$)
#### Underdamped ($\zeta < 1$)
#### Critically damped ($\zeta = 1$)
$$d_\text{critical damping} = \left(\frac{128 \nu^2 L}{g}\right)^\frac{1}{4}$$
#### Overdamped ($\zeta > 1$)
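A minimal sketch of the four regimes: simulating the step response of the standard form $\tau^2 \ddot{y} + 2\zeta\tau\dot{y} + y = Ku$ for several damping factors. Here $\tau = 1$ and $K = 1$ are arbitrary illustrative choices, not the manometer's physical values:

```python
import numpy as np
from scipy.integrate import odeint

def second_order_step(zeta, tau=1.0, K=1.0, t=None):
    """Unit step response of tau^2 y'' + 2 zeta tau y' + y = K u."""
    if t is None:
        t = np.linspace(0.0, 20.0, 500)
    def deriv(X, t):
        y, v = X
        # rearrange the standard form for the highest derivative
        return [v, (K - y - 2.0*zeta*tau*v)/tau**2]
    sol = odeint(deriv, [0.0, 0.0], t)
    return t, sol[:, 0]

# undamped, underdamped, critically damped, overdamped
for zeta in [0.0, 0.2, 1.0, 2.0]:
    t, y = second_order_step(zeta)
    print(zeta, round(max(y), 3))
```

An undamped response oscillates forever, an underdamped response overshoots and rings, while critically damped and overdamped responses approach the steady state without overshoot.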
```
K = 1/2/rho/g
tau = np.sqrt(L/2/g)
zeta = (8*nu/d**2)*np.sqrt(2*L/g)
print(K,tau,zeta)
dcritical = (128*nu*nu*L/g)**0.25
print(dcritical)
```
## Model 5. Dynamic Response to Sinusoidal Input
$$\frac{d^2y}{dt^2} + \frac{32 \nu }{d^2}\frac{dy}{dt} + \frac{2g}{L} y = \frac{1}{\rho L} \Delta P$$
```
def model4(dP=100.0, L=1.0, d=0.10, freq=0.5):
def deriv(X,t):
x,v = X
xdot = v
vdot = -(2*g/L)*x - (32*nu/d**2)*v + dP*np.sin(2.0*np.pi*freq*t)/rho/L
return [xdot,vdot]
IC = [0,0]
sol = odeint(deriv,IC,t)
plt.axis(axis)
plt.plot(t,sol[:,1])
plt.plot(t,dP*np.sin(2.0*np.pi*freq*t)/10000)
plt.grid()
plt.xlabel('Time [sec]')
    plt.ylabel('y [m], P [Pa/10000]')
    plt.title('dP = {0:5.1f} Pascals, L = {1:4.2f} meters, d = {2:5.3f} meters'.format(dP,L,d))
plt.legend(['Position','Pressure/10000'])
interact(model4, dP=(-200,200,20), L = (0.2,5,0.1), d=(0.01,0.20,0.002), freq=(0,4,0.01));
```
## Model 6. State Space Representation
State space models are widely used in textbooks, software, and the research literature to represent linear systems. A state space model is a generic representation of a system with inputs and outputs. Here's how to recast our manometer model with time-varying pressure as a state space model in which the liquid level is the measured output.
Start with the model written as a differential equation
$$\frac{d^2y}{dt^2} + \frac{32\nu}{d^2}\frac{dy}{dt} + \frac{2g}{L} y = \frac{1}{\rho L} \Delta P$$
Assemble the dependent variables in a vector, and rewrite using matrix/vector operations.
$$\begin{align*}
\frac{d}{dt}
\left[\begin{array}{c}y \\ v\end{array}\right]
& =
\left[\begin{array}{cc}0 & 1 \\ - \frac{2g}{L} & -\frac{32\nu}{d^2} \end{array}\right]
\left[\begin{array}{c}y \\ v\end{array}\right]
+
\left[\begin{array}{c}0 \\ \frac{1}{\rho L}\end{array}\right]
\left[\Delta P\right] \\
\left[y\right]
& =
\left[\begin{array}{c} 1 & 0\end{array}\right]
\left[\begin{array}{c}y \\ v\end{array}\right]
+
\left[0\right]
\left[\Delta P\right]
\end{align*}
$$
Use standard symbols to label the vectors and matrices.
$$\begin{align*}
\frac{d}{dt}
\underbrace{\left[\begin{array}{c}y \\ v\end{array}\right]}_{x}
& =
\underbrace{\left[\begin{array}{cc}0 & 1 \\ - \frac{2g}{L} & -\frac{32\nu}{d^2} \end{array}\right]}_{A}
\underbrace{\left[\begin{array}{c}y \\ v\end{array}\right]}_{x}
+
\underbrace{\left[\begin{array}{c}0 \\ \frac{1}{\rho L}\end{array}\right]}_{B}
\underbrace{\left[\Delta P\right]}_{u} \\
\underbrace{\left[y\right]}_{y}
& =
\underbrace{\left[\begin{array}{c} 1 & 0\end{array}\right]}_{C}
\underbrace{\left[\begin{array}{c}y \\ v\end{array}\right]}_{x}
+
\underbrace{\left[0\right]}_{D}
\underbrace{\left[\Delta P\right]}_{u}
\end{align*}
$$
The result is a model of a linear system in a standard state space representation.
$$\begin{align*}
\frac{dx}{dt} & = Ax + Bu \\
y & = Cx + Du
\end{align*}$$
```
def model6(dP=100, L=1.0, d=0.10):
A = [[0,1],[-2*g/L, -32*nu/(d**2)]]
B = [[0],[1/rho/L]]
C = [[1,0]]
D = [[0]]
sys = ss(A,B,C,D)
y,tout = step(sys,t);
plt.axis(axis)
plt.plot(t,dP*y)
plt.grid()
plt.xlabel('Time [sec]')
plt.ylabel('y [m]')
    plt.title('dP = {0:5.1f} Pascals, L = {1:4.2f} meters, d = {2:5.3f} meters'.format(dP,L,d))
plt.legend(['Position'])
interact(model6, dP=(-200,200,1), L = (0.2,5,0.1), d=(0.01,0.20,0.002));
w = np.logspace(0,1,200)
def model6(L=1.0, d=0.10):
A = [[0,1],[-2*g/L, -32*nu/(d**2)]]
B = [[0],[1/rho/L]]
C = [[1,0]]
D = [[0]]
mano = ss(A,B,C,D)
bode(mano,w);
interact(model6, L = (0.2,5,0.1), d=(0.01,0.20,0.002));
w = np.logspace(0,1,200)
def model6(L=1.0, d=0.10):
    A = [[0,1],[-2*g/L, -32*nu/(d**2)]]
B = [[0],[1/rho/L]]
C = [[1,0]]
D = [[0]]
e_vals,e_vecs = la.eig(A)
plt.axis([-5,5,-5,5])
plt.axis('equal')
plt.plot(e_vals.real,e_vals.imag,'o')
interact(model6, L = (0.2,5,0.1), d=(0.01,0.20,0.002));
```
| github_jupyter |
# Ridge Regressor with StandardScaler
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data fetching
pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and then use the `head` function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since most of the machine learning models in the scikit-learn library don't handle string categories or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill any null values (using the column mean for numeric data and the mode otherwise) and encode string categories as indicator (dummy) columns.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Model
Ridge regression addresses some of the problems of Ordinary Least Squares by imposing a penalty on the size of the coefficients. The ridge coefficients minimize a penalized residual sum of squares:
\begin{equation*}
\min_{w} || X w - y||_2^2 + \alpha ||w||_2^2
\end{equation*}
The complexity parameter $\alpha$ controls the amount of shrinkage: the larger the value of $\alpha$, the greater the amount of shrinkage and thus the more robust the coefficients become to collinearity.
This model solves a regression model where the loss function is the linear least squares function and regularization is given by the l2-norm. Also known as Ridge Regression or Tikhonov regularization. This estimator has built-in support for multi-variate regression (i.e., when y is a 2d-array of shape (n_samples, n_targets)).
#### Model Tuning Parameters
> **alpha** -> Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization.
> **solver** -> Solver to use in the computational routines {‘auto’, ‘svd’, ‘cholesky’, ‘lsqr’, ‘sparse_cg’, ‘sag’, ‘saga’}
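Before fitting with the default `alpha`, it can be useful to search over the regularization strength with cross-validation. The sketch below is a hedged illustration on synthetic data; the generated data and grid values are assumptions, standing in for this notebook's CSV:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV

# Synthetic regression data stands in for the notebook's CSV (an assumption)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.5, -2.0, 0.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=200)

# Same pipeline shape as in this notebook, wrapped in a grid search over alpha
pipe = Pipeline([("standard", StandardScaler()), ("model", Ridge())])
grid = GridSearchCV(pipe, {"model__alpha": [0.01, 0.1, 1.0, 10.0]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

The `model__alpha` key follows scikit-learn's pipeline naming convention (step name, double underscore, parameter name).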
```
Input=[("standard",StandardScaler()),("model",Ridge(random_state=123))]
model=Pipeline(Input)
model.fit(x_train,y_train)
```
#### Model Accuracy
We will use the trained model to make a prediction on the test set.Then use the predicted value for measuring the accuracy of our model.
> **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the coefficient of determination, i.e. the proportion of the variability in the target explained by our model.
> **mae**: The **mean absolute error** function calculates the average absolute distance between the real data and the predicted data.
> **mse**: The **mean squared error** function averages the squared errors, penalizing the model more heavily for large errors.
```
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Prediction Plot
First, we plot the first 20 actual test observations, with the record number on the x-axis and the target value on the y-axis.
We then overlay the model's predictions for the same 20 records.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Thilakraj Devadiga , Github: [Profile](https://github.com/Thilakraj1998)
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
from latency import run_latency, run_latency_changing_topo, run_latency_per_round, run_latency_per_round_changing_topo, nodes_latency
import sys
sys.path.append('..')
from utils import create_mixing_matrix, load_data, run, consensus
```
# Base case
```
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs = run(train_loader, test_loader, comm_matrix, num_rounds, epochs, num_clients)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs)
plt.show()
```
# Latency with fixed topology
```
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 2)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs2 = run_latency(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs2)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 4)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs4 = run_latency(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs4)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 8)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs8 = run_latency(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs8)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 16)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs16 = run_latency(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs16)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 32)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs32 = run_latency(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs32)
plt.show()
fig, ax = plt.subplots(1, figsize=(12, 9))
ax.set_ylim([0, 1])
x = np.array(range(10))
ax.plot(x, accs, color="red", label="base case")
ax.plot(x, accs2, color="lime", label="two delayed nodes")
ax.plot(x, accs4, color="green", label="four delayed nodes")
ax.plot(x, accs8, color="purple", label="eight delayed nodes")
ax.plot(x, accs16, color="blue", label="sixteen delayed nodes")
ax.plot(x, accs32, color="cyan", label="thirty-two delayed nodes")
plt.legend(loc="lower right", title="Number of delayed nodes")
plt.title("Accuracy curve depending on number of delayed nodes")
plt.xlabel("Round")
plt.ylabel("Accuracy")
plt.show()
```
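The experiments above rely on the project helpers `create_mixing_matrix` and `run_latency`, whose internals are not shown here. As a stand-alone illustration, one plausible model of a delayed node (an assumption, not necessarily the project's actual update rule) is that it skips the gossip-averaging step for a round while its neighbours average as usual:

```python
import numpy as np

def gossip_round(params, W, delayed=()):
    """One synchronous gossip round: every node replaces its parameters
    with a W-weighted average of its neighbours' parameters. Nodes listed
    in `delayed` skip the update and keep their previous parameters
    (an assumed latency model, not necessarily what run_latency does)."""
    new = W @ params
    for i in delayed:
        new[i] = params[i]  # delayed node misses this round's averaging
    return new

# 4-node ring with uniform self/neighbour weights (doubly stochastic)
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
params = np.array([1.0, 2.0, 3.0, 4.0])
out = gossip_round(params, W, delayed=(0,))
print(out)
```

With repeated rounds the non-delayed nodes contract toward the network average, while each delayed node slows consensus; the accuracy curves above measure that same effect at the scale of full model parameters.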
# Latency with changing topology
```
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 2)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs2_ = run_latency_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs2_)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 4)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs4_ = run_latency_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs4_)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 8)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs8_ = run_latency_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs8_)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 16)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs16_ = run_latency_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs16_)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 32)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs32_ = run_latency_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs32_)
plt.show()
fig, ax = plt.subplots(1, figsize=(12, 9))
ax.set_ylim([0, 1])
x = np.array(range(10))
ax.plot(x, accs, color="red", label="base case")
ax.plot(x, accs2_, color="lime", label="two delayed nodes")
ax.plot(x, accs4_, color="green", label="four delayed nodes")
ax.plot(x, accs8_, color="purple", label="eight delayed nodes")
ax.plot(x, accs16_, color="blue", label="sixteen delayed nodes")
ax.plot(x, accs32_, color="cyan", label="thirty-two delayed nodes")
plt.legend(loc="lower right", title="Number of delayed nodes")
plt.title("Accuracy curve depending on number of delayed nodes with changing topology")
plt.xlabel("Round")
plt.ylabel("Accuracy")
plt.show()
```
# Latency on a few rounds
```
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 2)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs1 = run_latency_per_round(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs1)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 4)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs2 = run_latency_per_round(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs2)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 8)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs3 = run_latency_per_round(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs3)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 16)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs4 = run_latency_per_round(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs4)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 32)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs5 = run_latency_per_round(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs5)
plt.show()
fig, ax = plt.subplots(1, figsize=(12, 9))
ax.set_ylim([0, 1])
x = np.array(range(10))
ax.plot(x, accs, color="red", label="base case")
ax.plot(x, accs1, color="lime", label="two delayed nodes")
ax.plot(x, accs2, color="green", label="four delayed nodes")
ax.plot(x, accs3, color="purple", label="eight delayed nodes")
ax.plot(x, accs4, color="blue", label="sixteen delayed nodes")
ax.plot(x, accs5, color="cyan", label="thirty-two delayed nodes")
plt.legend(loc="lower right", title="Number of delayed nodes")
plt.title("Accuracy curve depending on number of delayed nodes with delays only on specific rounds")
plt.xlabel("Round")
plt.ylabel("Accuracy")
plt.show()
```
# Latency on a few rounds with changing topology
```
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 2)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs1_ = run_latency_per_round_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs1_)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 4)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs2_ = run_latency_per_round_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs2_)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 8)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs3_ = run_latency_per_round_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs3_)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 16)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs4_ = run_latency_per_round_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs4_)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 32)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs5_ = run_latency_per_round_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs5_)
plt.show()
fig, ax = plt.subplots(1, figsize=(12, 9))
ax.set_ylim([0, 1])
x = np.array(range(10))
ax.plot(x, accs, color="red", label="base case")
ax.plot(x, accs1_, color="lime", label="two delayed nodes")
ax.plot(x, accs2_, color="green", label="four delayed nodes")
ax.plot(x, accs3_, color="purple", label="eight delayed nodes")
ax.plot(x, accs4_, color="blue", label="sixteen delayed nodes")
ax.plot(x, accs5_, color="cyan", label="thirty-two delayed nodes")
plt.legend(loc="lower right", title="Number of delayed nodes")
plt.title("Accuracy curve depending on number of delayed nodes with changing topology and delays only on specific rounds")
plt.xlabel("Round")
plt.ylabel("Accuracy")
plt.show()
```
```
# import customizing_motif_vec
import extract_motif
import motif_class
import __init__
import json_utility
from importlib import reload
reload(__init__)
reload(extract_motif)
# reload(customizing_motif_vec)
reload(motif_class)
import plot_glycan_utilities
reload(plot_glycan_utilities)
import matplotlib.pyplot as plt
from glypy.io import glycoct
from glypy.structure.glycan import fragment_to_substructure, Glycan
import glycan_io
from glypy.structure.glycan_composition import GlycanComposition, FrozenGlycanComposition
%matplotlib inline
```
A4FG4S4 = """
RES
1b:x-dglc-HEX-1:5
2s:n-acetyl
3b:b-dglc-HEX-1:5
4s:n-acetyl
5b:a-dman-HEX-1:5
6b:a-dman-HEX-1:5
7b:b-dglc-HEX-1:5
8s:n-acetyl
9b:b-dgal-HEX-1:5
10b:a-dgro-dgal-NON-2:6|1:a|2:keto|3:d
11s:n-acetyl
12b:b-dglc-HEX-1:5
13s:n-acetyl
14b:b-dgal-HEX-1:5
15b:a-dgro-dgal-NON-2:6|1:a|2:keto|3:d
16s:n-acetyl
17b:a-dman-HEX-1:5
18b:b-dglc-HEX-1:5
19s:n-acetyl
20b:b-dgal-HEX-1:5
21b:a-dgro-dgal-NON-2:6|1:a|2:keto|3:d
22s:n-acetyl
23b:b-dglc-HEX-1:5
24s:n-acetyl
25b:b-dgal-HEX-1:5
26b:a-dgro-dgal-NON-2:6|1:a|2:keto|3:d
27s:n-acetyl
28b:a-lgal-HEX-1:5|6:d
LIN
1:1d(2+1)2n
2:1o(4+1)3d
3:3d(2+1)4n
4:3o(4+1)5d
5:5o(3+1)6d
6:6o(2+1)7d
7:7d(2+1)8n
8:7o(4+1)9d
9:9o(3+2)10d
10:10d(5+1)11n
11:6o(4+1)12d
12:12d(2+1)13n
13:12o(4+1)14d
14:14o(3+2)15d
15:15d(5+1)16n
16:5o(6+1)17d
17:17o(2+1)18d
18:18d(2+1)19n
19:18o(4+1)20d
20:20o(3+2)21d
21:21d(5+1)22n
22:17o(6+1)23d
23:23d(2+1)24n
24:23o(4+1)25d
25:25o(3+2)26d
26:26d(5+1)27n
27:1o(6+1)28d
"""
a_3350 = """RES
1b:x-dglc-HEX-1:5
2b:x-lgal-HEX-1:5|6:d
3b:x-dglc-HEX-1:5
4b:x-dman-HEX-1:5
5b:x-dman-HEX-1:5
6b:x-dglc-HEX-1:5
7b:x-dgal-HEX-1:5
8b:x-dgro-dgal-NON-2:6|1:a|2:keto|3:d
9s:n-acetyl
10s:n-acetyl
11b:x-dman-HEX-1:5
12b:x-dglc-HEX-1:5
13b:x-dgal-HEX-1:5
14s:n-acetyl
15b:x-dglc-HEX-1:5
16b:x-dgal-HEX-1:5
17s:n-acetyl
18s:n-acetyl
19s:n-acetyl
LIN
1:1o(-1+1)2d
2:1o(-1+1)3d
3:3o(-1+1)4d
4:4o(-1+1)5d
5:5o(-1+1)6d
6:6o(-1+1)7d
7:7o(-1+2)8d
8:8d(5+1)9n
9:6d(2+1)10n
10:4o(-1+1)11d
11:11o(-1+1)12d
12:12o(-1+1)13d
13:12d(2+1)14n
14:11o(-1+1)15d
15:15o(-1+1)16d
16:15d(2+1)17n
17:3d(2+1)18n
18:1d(2+1)19n
"""
```
undefined = """RES
1b:x-dglc-HEX-1:5
2s:n-acetyl
3b:b-dglc-HEX-1:5
4s:n-acetyl
5b:b-dman-HEX-1:5
6b:a-dman-HEX-1:5
7b:b-dglc-HEX-1:5
8s:n-acetyl
9b:a-dman-HEX-1:5
10b:b-dglc-HEX-1:5
11s:n-acetyl
12b:b-dglc-HEX-1:5
13s:n-acetyl
14b:a-lgal-HEX-1:5|6:d
LIN
1:1d(2+1)2n
2:1o(4+1)3d
3:3d(2+1)4n
4:3o(4+1)5d
5:5o(3+1)6d
6:6o(2+1)7d
7:7d(2+1)8n
8:5o(6+1)9d
9:9o(2+1)10d
10:10d(2+1)11n
11:9o(6+1)12d
12:12d(2+1)13n
13:1o(6+1)14d
UND
UND1:100.0:100.0
ParentIDs:1|3|5|6|7|9|10|12|14
SubtreeLinkageID1:o(4+1)d
RES
15b:b-dgal-HEX-1:5
16b:a-lgal-HEX-1:5|6:d
17b:a-dgal-HEX-1:5
18s:n-acetyl
LIN
14:15o(2+1)16d
15:15o(3+1)17d
16:17d(2+1)18n"""
und_glycan = glycoct.loads(undefined)
test1 = """RES
1b:x-dglc-HEX-1:5
2s:n-acetyl
3b:b-dglc-HEX-1:5
4s:n-acetyl
5b:a-dman-HEX-1:5
6b:a-dman-HEX-1:5
7b:b-dglc-HEX-1:5
8s:n-acetyl
9b:b-dglc-HEX-1:5
10s:n-acetyl
11b:a-dman-HEX-1:5
12b:b-dglc-HEX-1:5
13s:n-acetyl
14b:a-lgal-HEX-1:5|6:d
LIN
1:1d(2+1)2n
2:1o(4+1)3d
3:3d(2+1)4n
4:3o(4+1)5d
5:5o(3+1)6d
6:6o(2+1)7d
7:7d(2+1)8n
8:6o(4+1)9d
9:9d(2+1)10n
10:5o(6+1)11d
11:11o(2+1)12d
12:12d(2+1)13n
13:1o(6+1)14d
UND
UND1:100.0:100.0
ParentIDs:1|3|5|6|7|9|11|12|14
SubtreeLinkageID1:o(4+1)d
RES
15b:b-dgal-HEX-1:5
"""
glycan_test1 = glycoct.loads(test1)
reload(glycoct)
reload(glycan_io)
glycan_dict = glycan_io.load_glycan_obj_from_dir('/Users/apple/Desktop/NathanLab/CHO_Anders/GlycanSVG/')
A4FG4S4 = glycoct.loads(str(glycan_dict['A4FG4S4']))
glycan_dict['A4FG4S4']
temp_mono = A4FG4S4.root
## recursion,
temp_mono.children()
GlycanComposition.from_glycan(A4FG4S4)
from glypy.structure import monosaccharide
from glypy import monosaccharides
from glypy.structure import glycan
# (monosaccharides.GlcNAc)
GlycanComposition.from_glycan(glycan.Glycan(monosaccharides.GlcNAc))
# Recursively collect the terminal monosaccharides of a glycan
def drop_terminal(a_glycan):
    def rec_drop_term(a_mono):
        temp_children = a_mono.children()
        return_list = []
        if temp_children:
            for pos, child in temp_children:
                # descend into each child and gather its terminals
                return_list.extend(rec_drop_term(child))
            return return_list
        else:
            return [a_mono]  # a terminal monosaccharide
    return rec_drop_term(a_glycan.root)
# # A4FG4S4.root
# term_a4fg4s4=find_terminal(A4FG4S4)[4]
# term_a4fg4s4.parents()
A4FG4S4 = glycoct.loads(str(glycan_dict['A4FG4S4']))
for i in list(A4FG4S4.leaves()):
i.drop_monosaccharide(i.parents()[0][0])
_mono_list = list(A4FG4S4.leaves())
_mono_list
for i in _mono_list:
i.drop_monosaccharide(i.parents()[0][0])
plot_glycan_utilities.plot_glycan(A4FG4S4)
_mono_parents_list = [i.parents()[0][1] for i in _mono_list]
_mono_parents_list
#drop_monosaccharide(pos)
for _mpar in _mono_parents_list:
if len(_mpar.children())==1:
print(_mpar.children())
_mpar.drop_monosaccharide(_mpar.children()[0][0])
continue
for _index, _mchild in _mpar.children():
if _mchild in _mono_list:
_mpar.drop_monosaccharide(_index)
break
A4FG4S4
ud_composition = GlycanComposition.from_glycan(und_glycan)
ud_composition.serialize()
a = FrozenGlycanComposition.from_glycan(und_glycan)
```
# extract_motif
```
# transform glycoct to Glycan obj
a_glycan = glycoct.loads(a_3350)
# extract_motif
glycan_motif_dict = extract_motif.extract_motif(a_glycan)
print(glycan_motif_dict.keys())
print(glycan_motif_dict[1])
print(type(glycan_motif_dict[1][0]))
```
# Plot
```
plot_glycan_utilities.plot_glycan(a_glycan)
plot_glycan_utilities.plot_glycan_list([a_glycan],['demo'])
```
# pipeline
```
# in gc_init: clarify the glycoct_dict_goto_extraction_addr
# in gc_init: clarify the glytoucan_data_base_addr__
# two files above are input data file for this pip
extract_motif.get_motif_pip(22, prior=True)
# it would be faster if you run the python directly
# check the gc_init as well
# it would be faster if you run the python directly
customizing_motif_vec.customizing_motif_vec_pip()
# load motif vector and return edge_list
motif_dict = json_utility.load_json("/Users/apple/PycharmProjects/GlyCompare/intermediate_file/NBT_motif_dic_degree_list.json")
motif_lib = motif_class.GlycanMotifLib(motif_dict)
dep_tree, edge_list = motif_lib.motif_dependence_tree()
edge_list
len(motif_lib.motif_vec)
```
## plot glycan mass
```
a = json_utility.load_json('/Users/apple/PycharmProjects/nbt_glycan_profile/intermediate_file/NBT_glycan_dict.json')
name_k = {}
name_dict = {}
list_k = []
list_mass = []
# fi.patch.set_facecolor('white')
for i in sorted(a.keys()):
for k in a[i].keys():
name_k[k] = a[i][k]
name_dict[k] = i
list_k.append(glycoct.loads(a[i][k]))
list_mass.append(i)
len(list(name_k))
plot_glycan_utilities.plot_glycan_list(list_k, list_mass)
```
# Passive and active colloidal chemotaxis in a microfluidic channel: mesoscopic and stochastic models
**Author:** Pierre de Buyl
*Supplemental information to the article by L. Deprez and P. de Buyl*
This notebook reports the characterization of the diffusion coefficients for a rigid dimer
confined between plates.
The data originates from the RMPCDMD simulation program. Please read its documentation and the
published paper for meaningful use of this notebook.
The correlation functions are computed online in RMPCDMD and stored in the H5MD files. They are read here
and integrated to obtain the diffusion coefficients. A time limit is set for all integrals,
and displayed in the figures, so that D is read off at the plateau of the running integral.
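The procedure can be sketched on a synthetic correlation function whose Green–Kubo integral is known analytically: for C(t) = C0 exp(-t/τ), the running integral plateaus at D = C0 τ. The values below are illustrative only, not simulation parameters.

```python
import numpy as np

# Synthetic correlation function C(t) = C0 * exp(-t / tau)
C0, tau = 1.0, 2.0
t = np.arange(0.0, 40.0, 0.01)
c = C0 * np.exp(-t / tau)
# Running trapezoid-rule integral; it plateaus at the Green-Kubo value C0 * tau
running = np.concatenate(([0.0], np.cumsum(0.5 * (c[1:] + c[:-1]) * np.diff(t))))
D = running[-1]  # close to C0 * tau = 2.0
```

In the real data, the correlation function is noisy at long times, which is why the integral is cut off at the plateau rather than carried to the end of the record.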
```
%matplotlib inline
import h5py
import matplotlib.pyplot as plt
from matplotlib.figure import SubplotParams
import numpy as np
from scipy.signal import fftconvolve
from scipy.optimize import leastsq, curve_fit
from scipy.integrate import simps, cumtrapz
from glob import glob
plt.rcParams['figure.figsize'] = (12, 6)
plt.rcParams['figure.subplot.hspace'] = 0.25
plt.rcParams['figure.subplot.wspace'] = 0.25
plt.rcParams['figure.subplot.left'] = 0.17
plt.rcParams['axes.labelsize'] = 16
def expfitfunc(t, f0, tau):
"""Exponential fitting function"""
return f0*np.exp(-t/tau)
def fitfunc(p, t):
"""Linear fitting function"""
return p[0] + p[1]*t
def errfunc(p, t, y):
"""Error function for `fitfunc`"""
return fitfunc(p, t) - y
def get_block_data(group, name, dim=3):
"""Return the time and correlation function for the data
read from RMPCDMD output files."""
block = group[name]['value'][:]
count = group[name]['count'][:]
block /= count.reshape((-1, 1, 1, 1))
t_data = [np.array([0])]
data = [block[0,:1,:,:].reshape((-1,dim))]
dt = group[name]['time'][()]
for i in range(block.shape[0]):
t = dt*np.arange(block.shape[1])*block.shape[1]**i
t_data.append(t[1:])
data.append(block[i,1:,:,:].reshape((-1,dim)))
return np.concatenate(t_data), np.concatenate(data)
# Collect simulation data
runs = glob('cceq_*.h5')
runs.sort()
msd_all = []
vacf_all = []
tvacf_all = []
pvacf_all = []
wacf_all = []
for f in runs:
a = h5py.File(f, 'r')
group = a['block_correlators']
msd_t, msd_data = get_block_data(group, 'mean_square_displacement')
msd_all.append(msd_data)
vacf_t, vacf_data = get_block_data(group, 'velocity_autocorrelation')
vacf_all.append(vacf_data)
do_pvacf = 'parallel_velocity_autocorrelation' in group
if do_pvacf:
pvacf_t, pvacf_data = get_block_data(group, 'parallel_velocity_autocorrelation')
pvacf_all.append(pvacf_data)
do_tvacf = 'transverse_velocity_autocorrelation' in group
if do_tvacf:
tvacf_t, tvacf_data = get_block_data(group, 'transverse_velocity_autocorrelation')
tvacf_all.append(tvacf_data)
do_wacf = 'planar_angular_velocity_autocorrelation' in group
if do_wacf:
wacf_t, w_data = get_block_data(group, 'planar_angular_velocity_autocorrelation', dim=1)
wacf_all.append(w_data.flatten())
a.close()
msd_all = np.array(msd_all)
vacf_all = np.array(vacf_all)
pvacf_all = np.array(pvacf_all)
tvacf_all = np.array(tvacf_all)
wacf_all = np.array(wacf_all)
```
Below, we plot the mean-square displacement (MSD) of the dimer in cartesian coordinates.
There are thus three components. The z component saturates because of the confinement.
The x and y components result from a mixing of the parallel and transverse diffusion
coefficients.
The fit is for the long-time behaviour of the x-y MSD.
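The relation MSD = 2 d D t used in the fit below can be checked on a synthetic two-dimensional Brownian walk; the parameters here are illustrative, not simulation values.

```python
import numpy as np

rng = np.random.default_rng(0)
D_true, dt, n_steps, n_walkers = 0.5, 1.0, 200, 2000
# Per-component displacement variance over one step is 2 * D_true * dt
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(n_steps, n_walkers, 2))
pos = np.cumsum(steps, axis=0)            # trajectories of all walkers
t = dt * np.arange(1, n_steps + 1)
msd = (pos**2).sum(axis=-1).mean(axis=1)  # average over walkers
slope, intercept = np.polyfit(t, msd, 1)
D_est = slope / 4                          # MSD = 2 * d * D * t with d = 2
```

Dividing the fitted slope by 4 recovers D, which is the same normalization applied to the x-y MSD of the dimer below.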
```
# Plot and fit the mean-squared displacement
plt.ylabel(r'$\langle (\mathbf{r}(\tau) - \mathbf{r}(0))^2 \rangle$')
m = msd_all.mean(axis=0)
# Plot all three components
plt.plot(msd_t, m, marker='o')
# Sum only xy components
m = m[...,:2].sum(axis=-1)
# Fit data to t>100
mask = msd_t>100
solution, ierr = leastsq(errfunc, [0, 0.1], args=(msd_t[mask], m[mask]))
intercept, D = solution
# MSD = 2 d D t = 4 D t -> The coefficient of the linear fit must be divided by 4
# as the diffusion in z is bounded by the confining plates.
D = D/4
plt.plot(msd_t, fitfunc((intercept, 2*D), msd_t))
plt.xlabel(r'$\tau$')
plt.loglog()
# Via the MSD, we can only access the sum of D_parallel and D_perp
print("D_parallel + D_perp = ", 2*D)
```
We use the velocity autocorrelation function (VACF) of the transverse and
parallel components of the velocity.
Integrating those functions yields the transverse and parallel diffusion
coefficients.
The integration is stopped when it reaches a plateau. This is done by setting
a limit in time, that is highlighted by reference lines in the plots.
We proceed in the same fashion for the planar angle diffusion coefficient.
```
# Integrate the VACF
limit = 800
params = SubplotParams(hspace=0.08, wspace=0.15)
plt.figure(figsize=(14,8), subplotpars=params)
# Transverse VACF
m = tvacf_all[...,:2].sum(axis=-1).mean(axis=0)
ax1 = plt.subplot(221)
plt.plot(tvacf_t, m, marker='o')
plt.axvline(limit)
plt.xscale('log')
plt.xticks([])
plt.ylabel(r'Transv. VACF')
# Integral of transverse VACF
ax1_int = plt.subplot(222)
plt.plot(tvacf_t, cumtrapz(m, tvacf_t, initial=0))
plt.axvline(limit)
plt.xscale('log')
plt.xticks([])
idx = np.searchsorted(tvacf_t, limit)
integrated_Dt = simps(m[:idx], tvacf_t[:idx])
plt.axhline(integrated_Dt)
ax1_int.yaxis.tick_right()
ax1_int.yaxis.set_label_position('right')
plt.ylabel(r'Integral of transv. VACF')
plt.ylim(-0.0002,0.0025)
# Parallel VACF
ax2 = plt.subplot(223)
m = pvacf_all[...,:2].sum(axis=-1).mean(axis=0)
plt.plot(pvacf_t, m, marker='o')
plt.axvline(limit)
plt.xscale('log')
plt.xlabel(r'$\tau$')
plt.ylabel(r'Parallel VACF')
# Integral of parallel VACF
ax2_int = plt.subplot(224)
plt.plot(pvacf_t, cumtrapz(m, pvacf_t, initial=0))
plt.axvline(limit)
plt.xscale('log')
plt.xlabel(r'$\tau$')
idx = np.searchsorted(pvacf_t, limit)
integrated_Dp = simps(m[:idx], pvacf_t[:idx])
plt.axhline(integrated_Dp)
plt.ylim(-0.0002,0.0025)
ax2_int.yaxis.tick_right()
ax2_int.yaxis.set_label_position('right')
plt.ylabel(r'Integral of parallel VACF')
print('Transverse D:', integrated_Dt)
print('Parallel D:', integrated_Dp)
print("Sum of the D's", integrated_Dt+integrated_Dp)
plt.figure(figsize=(14,4), subplotpars=params)
m = wacf_all.mean(axis=0)
s = wacf_all.std(axis=0)
ax1 = plt.subplot(121)
plt.xscale('log')
plt.plot(wacf_t, m, marker='o')
plt.axvline(limit)
plt.xlim(.5, 1e4)
plt.xlabel(r'$\tau$')
plt.ylabel(r'Orientational ACF')
ax2 = plt.subplot(122)
plt.xscale('log')
ax2.yaxis.tick_right()
ax2.yaxis.set_label_position('right')
plt.plot(wacf_t, cumtrapz(m, wacf_t, initial=0))
plt.xlim(.5, 1e4)
plt.ylim(-1e-6, 2e-4)
plt.xlabel(r'$\tau$')
plt.ylabel(r'Integral of orientational ACF')
limit = 800
idx = np.searchsorted(wacf_t, limit)
plt.axvline(limit)
D_integral = simps(m[:idx], wacf_t[:idx])
print('Integrated rotational diffusion coefficient', D_integral)
plt.axhline(D_integral)
plt.xlabel(r'$\tau$')
```
# Project for Machine Learning and Statistics - December 2021
## Submitted by Sinéad Duffy, ID 10016151
***
## Notebook 2 - Scipy-stats.ipynb
### Brief - write an overview of the SciPy.stats library, outline (using examples) the package and complete an example hypothesis test using ANOVA
***

# SciPy.Stats Library
SciPy is an extension of NumPy in Python, and gives users the opportunity to work with data in an environment similar to that of MATLAB or SciLab.$^3$ The package is organised into 15 subpackages dealing with specific mathematical domains such as clustering, optimize, sparse and statistics. For the purpose of this notebook, the author will focus on the SciPy.Stats package.
<br><br>
SciPy.Stats contains algorithms covering probability distributions, summary / frequency statistics, correlation and statistical tests. Users will still need to include packages such as Pandas to format the data before applying an algorithm to it.
<br><br>
SciPy.Stats allows the user to complete t-tests (__ttest_1samp()__) as well as a one-way ANOVA (__f_oneway()__). A t-test compares the statistical difference between two groups, whilst a one-way ANOVA compares three or more groups. Completing a one-way ANOVA is outlined in the following paragraphs.
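As a quick illustration of __f_oneway()__ before turning to the video dataset, the sketch below runs it on synthetic groups (the group values are made up for illustration). The function returns an F statistic and a p-value.

```python
import numpy as np
import scipy.stats as ss

rng = np.random.default_rng(1)
# Three hypothetical groups drawn from the same distribution
g1 = rng.normal(loc=5.0, scale=1.0, size=30)
g2 = rng.normal(loc=5.0, scale=1.0, size=30)
g3 = rng.normal(loc=5.0, scale=1.0, size=30)
f_stat, p_same = ss.f_oneway(g1, g2, g3)       # equal means: p is usually large
# Shifting one group's mean makes the difference detectable
f_stat2, p_diff = ss.f_oneway(g1, g2, g3 + 2.0)  # small p: reject H0
```

A p-value below the chosen significance level (commonly 0.05) leads to rejecting the null hypothesis that all group means are equal.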
***
### What is a One-Way ANOVA
<br>
Laerd Statistics defines the one-way ANOVA as a one-way analysis of variance, and outlines that it is used to determine whether there are any statistically significant differences between three or more independent (unrelated) groups.$^9$
<br><br>
An ANOVA allows a person to understand the statistical differences (variances) through the use of hypotheses. In this instance, a null hypothesis and an alternative hypothesis are formed, along with a research question to be answered.
<br><br>
The research question in this instance is: what is the best video to show when informing the public about a medical condition?
<br><br>
The hypotheses relating to this question are;
<br><br>
<b>1 - Null Hypothesis</b> is that there is no difference in the subjects' knowledge of the medical condition after watching the videos
<br><br>
<b>2 - Alternative Hypothesis</b> is that there is a difference between the groups based on their knowledge of the medical condition
***
Laerd Statistics$^2$ outline that for a successful ANOVA to be run, the data will need to pass 6 assumptions. The assumptions are;
1 - <b>Dependent variable</b> is measured at the interval or ratio level along a continuous scale<br>
2 - <b>Independent Variable</b> should be made up of categorical groups<br>
3 - The data has <b>independence of observations</b><br>
4 - The data has <b>no significant outliers</b><br>
5 - The dependent variable has <b>a normal distribution</b> for each category of the independent variable<br>
6 - There is <b>homogeneity of variances</b> in the data<br>
The author has determined that the independent variable in this example is the categorical data outlining the groups who have heard of the medical condition, have not heard of it, or for whom the question is not relevant (i.e. the Heardofcondition column). The dependent variable is the first-preference video shown to the public. The data was gathered using a Likert scale.
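Assumption 6 (homogeneity of variances) is not checked explicitly in this excerpt; one common check is Levene's test from SciPy.Stats. The sketch below uses synthetic stand-in groups (the real groups would come from splitting the dependent variable by Heardofcondition).

```python
import numpy as np
import scipy.stats as ss

rng = np.random.default_rng(2)
# Hypothetical stand-in groups with equal spread
equal_var = [rng.normal(loc=3.0, scale=1.0, size=25) for _ in range(3)]
stat_eq, p_equal = ss.levene(*equal_var)
# One group with a much larger spread violates the assumption
unequal_var = [rng.normal(loc=3.0, scale=s, size=25) for s in (1.0, 1.0, 5.0)]
stat_uneq, p_unequal = ss.levene(*unequal_var)
```

A Levene p-value above 0.05 is consistent with homogeneity of variances; a small p-value suggests the assumption is violated and a Welch-type correction may be needed.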
```
# import the standard pyton libraries
import pandas as pd
import numpy as np
# import graphs library
import seaborn as sns
import matplotlib.pyplot as plt
# set style for graphs
sns.set_style("white")
# Statistical libraries
import scipy.stats as ss
# create tables
from tabulate import tabulate
```
### Import and explore the dataset
The chosen dataset relates to informational videos about a prescribed medical condition. The dataset was sourced from the University of Sheffield.$^1$
<br>
The attributes of the dataframe are;
- Person, index value to the person who answered the survey
- Gender, with binary values of 1= Male, 2 = Female
- Heardofcondition records whether the respondent has heard of the condition being discussed; the answers are 0 = N/A, 1 = Yes, 2 = No
- Set gives the order in which the videos were shown to the respondents; the videos are coded as
- 1 = General Video A,
- 2 = Medical video B,
- 3 = Old video C,
- 4 = Demo D
- @1st-Favourite video
- @2nd-2nd favourite
- @3rd-3rd favourite
- @4th-Least favourite
- Combination displays the order that videos were seen in; the combination is shown as a series of numbers
- General understanding of the videos. The ordinal Likert scale used ran from 1, where the respondents strongly disagree, to 5, where they strongly agree
- VideoAGenUnderstandingCONDITION, where video A gives a general understanding
- VideoBdoctorUnderstandingCONDITION, where video B is the doctor's video
- VideoCOldUnderstandingCONDITION, where video C is the old video
- DEMOUnderstandingCONDITION, where video D is the demonstration
- TotalAGen-Overall score (video A)
- TotalBdoc-Overall score (video B)
- TotalCOld-Overall score (video C)
- TotalDDEMO-Overall score (demo D)
The following sections explore the dataframe before completing the ANOVA analysis.
```
# import the dataframe to the notebook
df = pd.read_csv('https://www.sheffield.ac.uk/polopoly_fs/1.937213!/file/Video_R.csv')
#display first 5 rows of the dataframe
df.head(5)
#show the main statistics associated with df
df.describe()
```
***
### Preparing for the ANOVA
This section will look at the 6 assumptions that must be taken into account to run a true ANOVA.
<br> <br>
As outlined above, 6 assumptions must be passed in order for the results of the ANOVA to be true.
***
#### Assumption 1 - Dependent Variable
The dependent variable 'should be measured at the interval or ratio level (i.e., they are continuous).'$^2$
<br> <br>
In this instance, the chosen dependent variable is the first-preference video of each of the groups.
```
# Set a value for the dependent variable
dependent = df['@1st']
```
***
#### Assumption 2 - Independent Variable
The independent variable should consist of at least two independent categorical groups$^2$.
<br> <br>
For this analysis, the chosen categorical variable has groups with no overlap, e.g. you have heard of the condition, you haven't heard of the condition, or the question is not applicable to you.
```
#The independent variable
independent = df['Heardofcondition']
```
***
#### Assumption 3 - Independence of Observation
This refers to the fact that there should be no relationship between the groups themselves $^2$.
<br><br>
This dataset was collected to evaluate the best way of educating the public about a medical condition$^1$.
<br><br>
The source does not call out any relationships between the groups of data. Data was collected using Likert-style questions where answers were given along a scale.$^2$
<br>
***
#### Assumption 4 - No Significant Outliers
Laerd Statistics outlines that the chosen variables should have no significant outliers in the data$^2$.
<br><br>
The author will demonstrate this using boxplots. The dependent and independent variables are plotted together and separately to identify any outliers.
<br><br>
As can be clearly seen, there are no significant outliers identified in the dataset.
<br>
```
#plotting the dependent and independent variables
sns.boxplot(x=dependent, y=independent)
#plotting the dependent variable
sns.boxplot(x=dependent)
#plotting the independent variable
sns.boxplot(y=independent)
```
***
#### Assumption 5 - Normal distribution for each of the independent variable categories
One of the key assumptions is that the dependent variable should approximately follow a normal distribution for each category of the independent variable$^2$.
<br><br>
To confirm whether the distribution is normal, a displot of the dependent variable is plotted with the independent variable as hue. The resulting curves appear to largely follow a normal distribution.
<br><br>
Further analysis can be completed using the Shapiro-Wilk test, as the sample in this instance is less than 50$^4$.
<br><br>
A Shapiro-Wilk p-value greater than 0.05$^6$ indicates that the data is normally distributed. Where the value of p is less than 0.05, the data is not normal, i.e. it deviates from a normal distribution.$^4$
<br>
```
sns.displot(x=dependent, hue=independent, kind="kde")
# Shapiro-Wilk test for normality - 1
# previous knowledge N/a
shapiro_test1 = ss.shapiro(dependent[independent == 0])
#shapiro_test1
print("The p-value of the Shapiro_Test1 is = {:.2}".format(shapiro_test1.pvalue))
# Shapiro-Wilk test for normality - 2
# previous knowledge is yes
shapiro_test2 = ss.shapiro(dependent[independent == 1])
#shapiro_test2
print("The p-value of the Shapiro_Test2 is = {:.2}".format(shapiro_test2.pvalue))
# Shapiro-Wilk test for normality - 3
# previous knowledge is no
shapiro_test3 = ss.shapiro(dependent[independent == 2])
#shapiro_test3
print("The p-value of the Shapiro_Test3 is = {:.2}".format(shapiro_test3.pvalue))
```
***
#### Assumption 6 - Homogeneity of variances
Laerd Statistics outlines that the 6th and final assumption for an ANOVA analysis is that there must be 'homogeneity of variances'.$^2$ This relates to the <i>t</i> and <i>F</i> statistics respectively$^2$,$^4$, and means that the variance of the different groups should be the same$^6$.
Laerd Statistics$^2$ outlines that Levene's test for homogeneity of variances will help determine whether this is the case for the chosen dataset.
The p-value (the significance value) should be greater than 0.05 for the variances to be treated as equal$^7$.
Using the Levene test, it is possible to say that the current dataset has equal variances.
```
#test for variances - Levene
ss.levene(
dependent[independent == 0],
dependent[independent == 1],
dependent[independent == 2])
```
***
#### Review of the Assumptions
In order for the data to comply with the ANOVA standards, it must pass all of the 6 assumptions outlined above.
<br>
The results of the analysis show that Assumption 5, the need for the data to follow a normal distribution, initially appears to hold based on the plots. However, further analysis using the Shapiro-Wilk tests shows that the data fails this test.
<br><br>
The table below shows the results of the 3 tests run for each category of data. All the values are less than 0.05; therefore the data does not follow a normal distribution.
<br><br>
Assumption 5 is the only assumption that fails. Laerd Statistics outlines that the one-way ANOVA is a robust test and can tolerate data that does not fully follow the normal distribution$^{10}$.
<br><br>
On that basis, the author has decided to proceed with the ANOVA test, and will complete post hoc analysis using Tukey's honestly significant difference (HSD) as outlined by Laerd Statistics.
<br>
```
# display the results of the Shapiro results
shapiro_results = {'Test1': [shapiro_test1.pvalue],
'Test2': [shapiro_test2.pvalue],
'Test3':[shapiro_test3.pvalue]}
print(tabulate(shapiro_results, headers='keys', tablefmt='fancy_grid'))
```
***
### Running the ANOVA
A p-value greater than 0.05 means that no statistically significant difference was found between the groups, and therefore the null hypothesis cannot be rejected$^8$.
A p-value less than 0.05 indicates that a statistically significant difference was found. In that case a post hoc test should be run.$^8$
A post hoc test will allow the author to determine where the difference between the groups occurred.
<br>
```
ss.f_oneway(
dependent[independent == 0],
dependent[independent == 1],
dependent[independent == 2])
```
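The decision rule described above can be sketched in a self-contained way. This uses synthetic groups rather than the notebook's dataframe, so the group sizes and seed are illustrative assumptions:

```python
import numpy as np
import scipy.stats as ss

rng = np.random.default_rng(0)
# three hypothetical groups drawn from the same distribution
g0, g1, g2 = (rng.normal(3, 1, 20) for _ in range(3))

f_stat, p_value = ss.f_oneway(g0, g1, g2)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Statistically significant difference found - run a post hoc test.")
else:
    print("No statistically significant difference - fail to reject the null hypothesis.")
```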
---
### Reporting the results of the ANOVA
The p-value of the one-way ANOVA is 0.45 (see above)$^{10}$. This means that no statistically significant difference was identified between the groups, so the null hypothesis cannot be rejected, i.e. there is no evidence of a difference in the subjects' knowledge of the medical condition after watching the videos.
The author does acknowledge that the test group of 20 individuals was quite small.
***
### Post hoc test
As outlined previously, the dataset failed the normality test in Assumption 5. One of the reasons for this could be the small size of the sample. As such, the author has decided to undertake post hoc analysis.
<br><br>
Laerd Statistics suggests using Tukey's honestly significant difference (HSD) in cases where Assumption 6 was not violated. In the case of this notebook, Assumption 5 was not met, i.e. a normal distribution was not found to be in place. Tukey's test (also known as the honestly significant difference (HSD) test) will help explain where any significant differences lie between the groups that form part of the analysis.$^{11}$
```
from statsmodels.stats.multicomp import pairwise_tukeyhsd
m_comp = pairwise_tukeyhsd(endog=df['@1st'], groups=df['Heardofcondition'], alpha=0.05)
print(m_comp)
```
Referring back to the ANOVA, the p-value was 0.45, which exceeds 0.05, indicating that the groups are not significantly different. The results of Tukey's post hoc analysis show that:
<br>
1. the p-value of the difference between group 0 and group 1 was 0.6594
2. the p-value of the difference between group 0 and group 2 was 0.8476
3. the p-value of the difference between group 1 and group 2 was 0.4511

As all of these values exceed 0.05, it is possible to say that there is no statistically significant difference between any of the groups.
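The comparisons above can also be read programmatically from the Tukey result object. Below is a self-contained sketch on synthetic data; the group labels, sizes, and seed are illustrative assumptions, not the notebook's dataset:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# three hypothetical groups of 20 observations each, same distribution
scores = np.concatenate([rng.normal(3, 1, 20) for _ in range(3)])
groups = np.repeat([0, 1, 2], 20)

res = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)
print(res)         # summary table of the three pairwise comparisons
print(res.reject)  # boolean array: True where a pair differs significantly
```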
***
### Conclusion
In conclusion, it is possible to say that there is no statistically significant difference between the groups who watched the videos, regardless of their previous knowledge of the subject.
***
### References:
1. University of Sheffield.ac.uk, Datasets for Teaching, https://www.sheffield.ac.uk/mash/statistics/datasets, accessed 01 December 2021
2. Laerd Statistics, One-way ANOVA in SPSS Statistics, https://statistics.laerd.com/spss-tutorials/one-way-anova-using-spss-statistics.php , accessed 01 December 2021
3. Scipy.org, Statistical functions (scipy.stats), https://docs.scipy.org/doc/scipy/reference/stats.html, accessed 01 December 2021
4. LaerdStatistics.com, Testing for Normality using SPSS Statistics, https://statistics.laerd.com/spss-tutorials/testing-for-normality-using-spss-statistics.php, accessed 29 December 2021
5. Statistic Solutions.com, The Assumption of Homogeneity of Variance, https://www.statisticssolutions.com/the-assumption-of-homogeneity-of-variance/, accessed 29 December 2021
6. TechnologyNetworks.com, One-Way vs Two-Way ANOVA: Differences, Assumptions and Hypotheses, https://www.technologynetworks.com/informatics/articles/one-way-vs-two-way-anova-definition-differences-assumptions-and-hypotheses-306553, accessed 29 December 2021
7. LaerdStatistics.com, Independent t-test for two samples, https://statistics.laerd.com/statistical-guides/independent-t-test-statistical-guide.php, accessed 29 December 2021
8. LaerdStatistics.com, One-way ANOVA (cont...), https://statistics.laerd.com/statistical-guides/one-way-anova-statistical-guide-4.php, accessed 29 December 2021
9. LaerdStatistic.com, One-way ANOVA, https://statistics.laerd.com/statistical-guides/one-way-anova-statistical-guide.php, accessed 29 December 2021
10. LaerdStatistic.com, One-way ANOVA (Contd.), https://statistics.laerd.com/statistical-guides/one-way-anova-statistical-guide-3.php , accessed 29 December 2021
11. Statisticshowto.com, What is the Tukey Test / Honest Significant Difference? , https://www.statisticshowto.com/tukey-test-honest-significant-difference/, accessed 30 December 2021
## End
```
EPOCHS = 40
LR = 3e-4
BATCH_SIZE_TWO = 1
HIDDEN = 20
MEMBERS = 3
import pandas as pd
import numpy as np
import random
import torch
import torch.nn.functional as F
import torch.nn as nn
from torchinfo import summary
import re
import string
import torch.optim as optim
from torchtext.legacy import data
from torch.utils.data import Dataset, DataLoader
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence
from sklearn.model_selection import train_test_split
def collate_batch(batch):
label_list, text_list, length_list = [], [], []
for (_text,_label, _len) in batch:
label_list.append(_label)
length_list.append(_len)
tensor = torch.tensor(_text, dtype=torch.long)
text_list.append(tensor)
text_list = pad_sequence(text_list, batch_first=True)
label_list = torch.tensor(label_list, dtype=torch.float)
length_list = torch.tensor(length_list)
return text_list,label_list, length_list
class VectorizeData(Dataset):
def __init__(self, file):
self.data = pd.read_pickle(file)
def __len__(self):
return self.data.shape[0]
def __getitem__(self, idx):
X = self.data.vector[idx]
lens = self.data.lengths[idx]
y = self.data.label[idx]
return X,y,lens
testing = VectorizeData('predict_set.csv')
prediction = DataLoader(testing, batch_size=BATCH_SIZE_TWO, shuffle=False, collate_fn=collate_batch)
'''loading the pretrained embedding weights'''
weights=torch.load('CBOW_NEWS.pth')
pre_trained = nn.Embedding.from_pretrained(weights)
pre_trained.weight.requires_grad=False
def create_emb_layer(pre_trained):
num_embeddings = pre_trained.num_embeddings
embedding_dim = pre_trained.embedding_dim
emb_layer = nn.Embedding.from_pretrained(pre_trained.weight.data, freeze=True)
return emb_layer, embedding_dim
class StackedLSTMAtteionModel(nn.Module):
def __init__(self, pre_trained,num_labels):
super(StackedLSTMAtteionModel, self).__init__()
self.n_class = num_labels
self.embedding, self.embedding_dim = create_emb_layer(pre_trained)
self.LSTM = nn.LSTM(self.embedding_dim, HIDDEN, num_layers=2,bidirectional=True,dropout=0.26,batch_first=True)
self.label = nn.Linear(2*HIDDEN, self.n_class)
self.act = nn.Sigmoid()
def attention_net(self, Lstm_output, final_state):
hidden = final_state
output = Lstm_output[0]
attn_weights = torch.matmul(output, hidden.transpose(1, 0))
soft_attn_weights = F.softmax(attn_weights.transpose(1, 0), dim=1)
new_hidden_state = torch.matmul(output.transpose(1,0), soft_attn_weights.transpose(1,0))
return new_hidden_state.transpose(1, 0)
def forward(self, x, text_len):
embeds = self.embedding(x)
pack = pack_padded_sequence(embeds, text_len, batch_first=True, enforce_sorted=False)
output, (hidden, cell) = self.LSTM(pack)
hidden = torch.cat((hidden[0,:, :], hidden[1,:, :]), dim=1)
attn_output = self.attention_net(output, hidden)
logits = self.label(attn_output)
outputs = self.act(logits.view(-1))
return outputs
class TwoLayerGRUAttModel(nn.Module):
def __init__(self, pre_trained, HIDDEN, num_labels):
super(TwoLayerGRUAttModel, self).__init__()
self.n_class = num_labels
self.embedding, self.embedding_dim = create_emb_layer(pre_trained)
self.gru = nn.GRU(self.embedding_dim, hidden_size=HIDDEN, num_layers=2,batch_first=True, bidirectional=True, dropout=0.2)
self.label = nn.Linear(2*HIDDEN, self.n_class)
self.act = nn.Sigmoid()
def attention_net(self, gru_output, final_state):
hidden = final_state
output = gru_output[0]
attn_weights = torch.matmul(output, hidden.transpose(1, 0))
soft_attn_weights = F.softmax(attn_weights.transpose(1, 0), dim=1)
new_hidden_state = torch.matmul(output.transpose(1,0), soft_attn_weights.transpose(1,0))
return new_hidden_state.transpose(1, 0)
def forward(self, x, text_len):
embeds = self.embedding(x)
pack = pack_padded_sequence(embeds, text_len, batch_first=True, enforce_sorted=False)
output, hidden = self.gru(pack)
hidden = torch.cat((hidden[0,:, :], hidden[1,:, :]), dim=1)
attn_output = self.attention_net(output, hidden)
logits = self.label(attn_output)
outputs = self.act(logits.view(-1))
return outputs
class C_DNN(nn.Module):
def __init__(self, pre_trained,num_labels):
super(C_DNN, self).__init__()
self.n_class = num_labels
self.embedding, self.embedding_dim = create_emb_layer(pre_trained)
self.conv1D = nn.Conv2d(1, 100, kernel_size=(3,16), padding=(1,0))
self.label = nn.Linear(100, self.n_class)
self.act = nn.Sigmoid()
def forward(self, x):
embeds = self.embedding(x)
embeds = embeds.unsqueeze(1)
conv1d = self.conv1D(embeds)
relu = F.relu(conv1d).squeeze(3)
maxpool = F.max_pool1d(input=relu, kernel_size=relu.size(2)).squeeze(2)
fc = self.label(maxpool)
sig = self.act(fc)
return sig.squeeze(1)
class MetaLearner(nn.Module):
def __init__(self, modelA, modelB, modelC):
super(MetaLearner, self).__init__()
self.modelA = modelA
self.modelB = modelB
self.modelC = modelC
self.fc1 = nn.Linear(3, 2)
self.fc2 = nn.Linear(2, 1)
self.act = nn.Sigmoid()
def forward(self, text, length):
x1=self.modelA(text, length)
x2=self.modelB(text,length)
x3=self.modelC(text)
x4 = torch.cat((x1.detach(),x2.detach(), x3.detach()), dim=0)
x5 = F.relu(self.fc1(x4))
output = self.act(self.fc2(x5))
return output
def load_all_models(n_models):
all_models = []
for i in range(n_models):
filename = "models/model_"+str(i+1)+'.pth'
if filename == "models/model_1.pth":
model_one = StackedLSTMAtteionModel(pre_trained, 1)
model_one.load_state_dict(torch.load(filename))
for param in model_one.parameters():
param.requires_grad = False
all_models.append(model_one)
elif filename == "models/model_2.pth":
model_two = TwoLayerGRUAttModel(pre_trained, HIDDEN, 1)
model_two.load_state_dict(torch.load(filename))
for param in model_two.parameters():
param.requires_grad = False
all_models.append(model_two)
else:
model = C_DNN(pre_trained=pre_trained, num_labels=1)
model.load_state_dict(torch.load(filename))
for param in model.parameters():
param.requires_grad = False
all_models.append(model)
return all_models
'''Loading the meta_model'''
filename="models/model_metaLearner.pth"
models = load_all_models(MEMBERS)
meta_model = MetaLearner(models[0], models[1], models[2])
meta_model.load_state_dict(torch.load(filename))
summary(meta_model)
def binary_accuracy(dataloader, model):
#round predictions to the closest integer
correct = []
model.eval()
with torch.no_grad():
for idx, (text,label,lengths) in enumerate(dataloader):
rounded_preds = torch.round(model(text, lengths))
correct.append((rounded_preds == label).float())
acc = sum(correct)/len(correct)
return acc
print('Checking the results of test dataset.')
accu_test = binary_accuracy(prediction, meta_model)
print(f'test accuracy: {accu_test.item():8.3f}')
```
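The `collate_batch` function above pads variable-length sequences to a common length before batching (via `pad_sequence`); the core idea can be sketched without torch. This is a minimal plain-Python illustration, not the original helper:

```python
def pad_batch(sequences, pad_value=0):
    """Right-pad each list to the length of the longest one."""
    max_len = max(len(s) for s in sequences)
    return [s + [pad_value] * (max_len - len(s)) for s in sequences]

batch = [[5, 3, 9], [7], [2, 8]]
padded = pad_batch(batch)
print(padded)  # [[5, 3, 9], [7, 0, 0], [2, 8, 0]]
```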
```
import numpy as np
import sys
class PartyNN(object):
def __init__(self, learning_rate=0.1):
self.weights_0_1 = np.random.normal(0.0, 2 ** -0.5, (2, 3))
self.weights_1_2 = np.random.normal(0.0, 1, (1, 2))
self.sigmoid_mapper = np.vectorize(self.sigmoid)
self.learning_rate = np.array([learning_rate])
def sigmoid(self, x):
return 1 / (1 + np.exp(-x))
def predict(self, inputs): # len=3
inputs_1 = np.dot(self.weights_0_1, inputs)
outputs_1 = self.sigmoid_mapper(inputs_1)
inputs_2 = np.dot(self.weights_1_2, outputs_1)
outputs_2 = self.sigmoid_mapper(inputs_2)
return outputs_2
def train(self, inputs, expected_predict):
inputs_1 = np.dot(self.weights_0_1, inputs)
outputs_1 = self.sigmoid_mapper(inputs_1)
inputs_2 = np.dot(self.weights_1_2, outputs_1)
outputs_2 = self.sigmoid_mapper(inputs_2)
actual_predict = outputs_2[0]
error_layer_2 = np.array([actual_predict - expected_predict])
gradient_layer_2 = actual_predict * (1 - actual_predict)
weights_delta_layer_2 = error_layer_2 * gradient_layer_2
self.weights_1_2 -= (np.dot(weights_delta_layer_2, outputs_1.reshape(1, len(outputs_1)))) * self.learning_rate
error_layer_1 = weights_delta_layer_2 * self.weights_1_2
gradient_layer_1 = outputs_1 * (1 - outputs_1)
weights_delta_layer_1 = error_layer_1 * gradient_layer_1
self.weights_0_1 -= np.dot(inputs.reshape(len(inputs), 1), weights_delta_layer_1).T * self.learning_rate
def mean_squared_error(y, Y):
return np.mean((y - Y) ** 2)
train = [
([0, 0, 0], 0),
([0, 0, 1], 1),
([0, 1, 0], 0),
([0, 1, 1], 0),
([1, 0, 0], 1),
([1, 0, 1], 1),
([1, 1, 0], 0),
([1, 1, 1], 1),
]
# to GPU, Parallel
epochs = 5000
learning_rate = 0.05
network = PartyNN(learning_rate=learning_rate)
for e in range(epochs):
inputs_ = []
correct_predictions = []
for input_stat, correct_predict in train:
network.train(np.array(input_stat), correct_predict)
inputs_.append(np.array(input_stat))
correct_predictions.append(np.array(correct_predict))
train_loss = mean_squared_error(network.predict(np.array(inputs_).T), np.array(correct_predictions))
sys.stdout.write("\rProgress: {}, Training loss: {}".format(str(100 * e / float(epochs))[:4], str(train_loss)[:5]))
for input_stat, correct_predict in train:
predict = network.predict(np.array(input_stat))
print("For input: {} the prediction is: {}:{}, expected: {}".format(
str(input_stat),
str(predict),
str(predict > .5),
str(correct_predict == 1)))
network.weights_0_1
network.weights_1_2
```
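The `gradient_layer` terms in `train` above rely on the sigmoid derivative identity σ'(x) = σ(x)(1 − σ(x)); a quick standalone numerical check of that identity:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = 0.7
analytic = sigmoid(x) * (1 - sigmoid(x))               # identity used in train()
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)  # central difference
print(abs(analytic - numeric) < 1e-8)
```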
[Resource](https://www.youtube.com/watch?v=HA-F6cZPvrg)
```
import sqlite3 as sl
import pandas as pd # type: ignore
COLORS_by_TYPE = {
'fire': 'red',
'water': '#09E1FF',
'normal': '#1DFDA8',
'poison': '#B918FF',
'electric': 'yellow',
'ground': '#FF9C15',
'fairy': '#FF69B4',
'grass': '#34FF5C',
'bug': '#90EE38',
'psychic': '#B71ECF',
'rock': '#DCB883',
'fighting': '#FF3A17',
'ghost': '#6817ff',
'ice': '#52fffa',
'dragon': '#a533ff',
'dark': '#3D009C',
'flying': '#4da1ff',
'steel': '#bfbfbf'}
def clean_lite_6(datf: pd.DataFrame) -> pd.DataFrame:
return (datf.fillna('')
.assign(Legendary=[1 if x else 0 for x in datf.Legendary],
Sp_Attack=datf['Sp. Atk'],
Sp_Defense=datf['Sp. Def'],
Type1=datf['Type 1'],
Type2=datf['Type 2'])
.drop(['Sp. Atk', 'Sp. Def', 'Type 1', 'Type 2'], axis=1)
.rename(lambda s: s.lower() + '_g6', axis='columns')
)
def clean_7(datf: pd.DataFrame) -> pd.DataFrame:
    '''rename `against_fight` to `against_fighting` so column names are consistent'''
    return datf.rename(columns={'against_fight': 'against_fighting'})
df6 = pd.read_csv('https://raw.githubusercontent.com/pokepokepokedex/pokedex-ds-quinn/master/Pokemon.csv').pipe(clean_lite_6)
df7 = pd.read_csv('https://raw.githubusercontent.com/pokepokepokedex/pokedex-ds-quinn/master/pokemon_w7.csv').pipe(clean_7)
df = df7.merge(df6, how='outer', left_on='name', right_on='name' + '_g6')
import pandas as pd # type: ignore
import numpy as np # type: ignore
from scipy.stats import norm # type: ignore
import altair as alt # type: ignore
from typing import Optional
from functools import reduce
from itertools import chain
Vcat = lambda R,S: R & S
Ocat = lambda C,D: C + D
#from models import df, COLORS_by_TYPE
types = set(chain.from_iterable(df[['type1', 'type2']].values)) - {np.nan}
ordering = pd.DataFrame(np.ones((len(types), len(types))), columns=types, index=types)
class PokeDescribe:
def __init__(self, datf: pd.DataFrame):
self.TYPE_COLOR_MAPPING = COLORS_by_TYPE
self.HEIGHT = 30
self.WIDTH = 330
self.xlim = (0, 180)
self.stats = ['hp', 'attack', 'defense',
'sp_attack', 'sp_defense', 'speed']
self.df = datf
self.x = np.linspace(self.xlim[0], self.xlim[1], 1000)
self.gaussians = {name: norm(loc=self.df[name].mean(),
scale=self.df[name].std())
for name in self.stats}
self.bells = pd.DataFrame({**{'x': self.x},
**{name: self.gaussians[name].pdf(self.x)
for name in self.stats}})
self.C = alt.Chart(self.bells,
height=self.HEIGHT,
width=self.WIDTH
).mark_line(color='white').encode(
x=alt.X('x', title=None, axis=alt.Axis(labels=False)))
self.charts = {name: self.C.encode(y=alt.Y(name, title=None, axis=alt.Axis(labels=False))) for name in self.stats}
self.BellCurves = reduce(Vcat, [self.charts[name] for name in self.stats])
class PokeDescribeNAME(PokeDescribe):
def __init__(self, datf: pd.DataFrame, Name: str):
super().__init__(datf)
self.PSI = 50
self.pokename = Name
self.typ = self.df[self.df.name==self.pokename].type1.values[0]
self.typ_color = self.TYPE_COLOR_MAPPING[self.typ]
self.y_max = 1.3 * max([max(ls) for ls in [self.gaussians[st].pdf(self.x) for st in self.stats]])
self.y = pd.DataFrame({'y': np.linspace(0, self.y_max, self.PSI)})
self.D = alt.Chart(self.y).mark_line(color=self.typ_color).encode(y=alt.Y('y', title=None))
self.means = {st: self.df[self.df.name==self.pokename][st].mean() for st in self.stats}
self.Dcharts = {st: self.D.encode(x=alt.value(self.means[st]))
for st in self.stats}
self.SHOW = reduce(Vcat, [self.charts[st] + self.Dcharts[st]
for st in self.stats]
).configure_text(color='white', angle=90)
# from gaussians import PokeDescribeNAME
# from models import df, COLORS_by_TYPE
from typing import List
import matplotlib.pyplot as plt
import seaborn as sns
raichu = PokeDescribeNAME(df, 'Charizard')
raichu.bells.head()
raichu.means
stats = ['hp', 'attack', 'defense', 'sp_attack', 'sp_defense', 'speed']
fig, axes = plt.subplots(nrows=len(stats), ncols=1, sharex=True, constrained_layout=True)
#axes.flatten()
def bell(sts: List[str]):
for i in range(len(sts)):
c = COLORS_by_TYPE[raichu.typ]
axes[i].set_xlim(raichu.xlim)
axes[i].set_ylim(0, raichu.y_max)
axes[i].axvline(raichu.means[sts[i]], color=c)#, xmin=raichu.xlim[0], xmax=raichu.xlim[1])
axes[i].plot(raichu.bells.x, raichu.bells[sts[i]], color=c)  # x and y are positional
# for i in range(len(stats)):
# (i)
bell(stats)
stats = ['hp', 'attack', 'defense', 'sp_attack', 'sp_defense', 'speed']
sts = stats
c = COLORS_by_TYPE[raichu.typ]
plt.figure(1)
plt.subplot(611)
plt.plot(raichu.bells.x, raichu.bells.hp, color=c)
plt.axvline(raichu.means[sts[0]], color=c)
plt.subplot(612)
plt.plot(raichu.bells.x, raichu.bells.attack, color=c)
plt.axvline(raichu.means[sts[1]], color=c)
plt.subplot(613)
plt.plot(raichu.bells.x, raichu.bells.defense, color=c)
plt.axvline(raichu.means[sts[2]], color=c)
plt.subplot(614)
plt.plot(raichu.bells.x, raichu.bells.sp_attack, color=c)
plt.axvline(raichu.means[sts[3]], color=c)
plt.subplot(615)
plt.plot(raichu.bells.x, raichu.bells.sp_defense, color=c)
plt.axvline(raichu.means[sts[4]], color=c)
plt.subplot(616)
plt.plot(raichu.bells.x, raichu.bells.speed, color=c)
plt.axvline(raichu.means[sts[5]], color=c)
plt.show()
fig, ax = plt.subplots(nrows=6, ncols=1, constrained_layout=True, sharex=True)
# for i in range(6):
# ax[i].set_xlim(raichu.xlim)
ax[0].plot(raichu.bells.x, raichu.bells.hp, color='black')
ax[1].plot(raichu.bells.x, raichu.bells.attack, color='black')
ax[2].plot(raichu.bells.x, raichu.bells.defense, color='black')
ax[3].plot(raichu.bells.x, raichu.bells.sp_attack, color='black')
ax[4].plot(raichu.bells.x, raichu.bells.sp_defense, color='black')
ax[5].plot(raichu.bells.x, raichu.bells.speed, color='black')
plt.show()
plt.plot(raichu.bells.x, raichu.bells.defense, color='black');
import seaborn as sns
sns.lineplot(x=raichu.bells.x, y=raichu.bells.defense, color=COLORS_by_TYPE[raichu.typ])
sns.lineplot(x=raichu.bells.x, y=raichu.bells.sp_attack, color=COLORS_by_TYPE[raichu.typ])
import numpy as np
import matplotlib.pyplot as plt
def f(t):
return np.exp(-t) * np.cos(2*np.pi*t)
t1 = np.arange(0.0, 5.0, 0.1)
t2 = np.arange(0.0, 5.0, 0.02)
plt.figure(1)
plt.subplot(211)
plt.plot(t1, f(t1), 'bo', t2, f(t2), 'k')
plt.subplot(212)
plt.plot(t2, np.cos(2*np.pi*t2), 'r--')
plt.show()
plt.figure(1)
plt.subplot(611)
plt.plot(raichu.bells.x, raichu.bells.hp)
plt.subplot(612)
plt.plot(raichu.bells.x, raichu.bells.attack)
plt.show()
```
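Note that `Axes.plot` expects x and y as positional arguments; passing them as `x=`/`y=` keywords (as in several calls above) does not supply the data, so nothing is drawn. A minimal headless sketch of the correct call:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is needed
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 1, 50)
fig, ax = plt.subplots()
line, = ax.plot(x, np.sin(x), color="black")  # x and y passed positionally
print(len(line.get_xdata()))
```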
#### Import Dependencies
```
import os
import gc
gc.enable()
import math
import json
import time
import random
import multiprocessing
import warnings
warnings.filterwarnings("ignore", category=UserWarning)
import numpy as np
import pandas as pd
from tqdm import tqdm, trange
from sklearn import model_selection
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import Parameter
import torch.optim as optim
from torch.utils.data import (
Dataset, DataLoader,
SequentialSampler, RandomSampler
)
from torch.utils.data.distributed import DistributedSampler
try:
from apex import amp
APEX_INSTALLED = True
except ImportError:
APEX_INSTALLED = False
import transformers
from transformers import (
WEIGHTS_NAME,
AdamW,
AutoConfig,
AutoModel,
AutoTokenizer,
get_cosine_schedule_with_warmup,
get_linear_schedule_with_warmup,
logging,
MODEL_FOR_QUESTION_ANSWERING_MAPPING,
)
logging.set_verbosity_warning()
logging.set_verbosity_error()
def fix_all_seeds(seed):
np.random.seed(seed)
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
def optimal_num_of_loader_workers():
num_cpus = multiprocessing.cpu_count()
num_gpus = torch.cuda.device_count()
optimal_value = min(num_cpus, num_gpus*4) if num_gpus else num_cpus - 1
return optimal_value
MODEL_CONFIG_CLASSES = list(MODEL_FOR_QUESTION_ANSWERING_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
tamil_xquad_tr = pd.read_csv('../input/google-translated-squad20-to-hindi-and-tamil/squad_ta.csv')
hindi_xquad_tr = pd.read_csv('../input/google-translated-squad20-to-hindi-and-tamil/squad_hi.csv')
hindi_xquad_tr.head()
import ast
hindi_xquad_tr['answers'] = hindi_xquad_tr['answers'].apply(ast.literal_eval)
tamil_xquad_tr['answers'] = tamil_xquad_tr['answers'].apply(ast.literal_eval)
def get_text(d):
return d[0]['text']
def get_start(d):
return d[0]['answer_start']
hindi_xquad_tr['answer_text'] = hindi_xquad_tr['answers'].apply(get_text)
hindi_xquad_tr['answer_start'] = hindi_xquad_tr['answers'].apply(get_start)
tamil_xquad_tr['answer_text'] = tamil_xquad_tr['answers'].apply(get_text)
tamil_xquad_tr['answer_start'] = tamil_xquad_tr['answers'].apply(get_start)
hindi_xquad_tr['language'] = 'hindi'
tamil_xquad_tr['language'] = 'tamil'
hindi_xquad_tr.drop(['id','answers','c_id','is_in'], axis=1, inplace=True)
tamil_xquad_tr.drop(['id','answers','c_id','is_in'], axis=1, inplace=True)
hindi_xquad_tr = hindi_xquad_tr[hindi_xquad_tr['answer_start']!=-1]
tamil_xquad_tr = tamil_xquad_tr[tamil_xquad_tr['answer_start']!=-1]
hindi_xquad_tr = hindi_xquad_tr.sample(frac=0.03)
tamil_xquad_tr = tamil_xquad_tr.sample(frac=0.05)
tamil_xquad_tr.shape,hindi_xquad_tr.shape
tamil_xquad_tr
XQA_tamil_dev = pd.read_csv('../input/preprocessed-xqa-tamil/XQA_tamil_dev.csv')
XQA_tamil_test = pd.read_csv('../input/preprocessed-xqa-tamil/XQA_tamil_test.csv')
XQA_tamil_dev = XQA_tamil_dev[XQA_tamil_dev['answer_start']!=-1]
XQA_tamil_test = XQA_tamil_test[XQA_tamil_test['answer_start']!=-1]
XQA_tamil_dev = XQA_tamil_dev.sample(frac=0.5)
XQA_tamil_test = XQA_tamil_test.sample(frac=0.5)
XQA_tamil_dev.head()
XQA_tamil_test.shape,XQA_tamil_dev.shape
```
#### Training Configuration
```
class Config:
# model
model_type = 'xlm_roberta'
model_name_or_path = '../input/xlm-roberta-squad2/deepset/xlm-roberta-large-squad2'
config_name = '../input/xlm-roberta-squad2/deepset/xlm-roberta-large-squad2'
fp16 = True if APEX_INSTALLED else False
fp16_opt_level = "O1"
gradient_accumulation_steps = 2
# tokenizer
tokenizer_name = '../input/xlm-roberta-squad2/deepset/xlm-roberta-large-squad2'
max_seq_length = 400
doc_stride = 135
# train
epochs = 1
train_batch_size = 4
eval_batch_size = 8
# optimizer
optimizer_type = 'AdamW'
learning_rate = 1e-5
weight_decay = 1e-2
epsilon = 1e-8
max_grad_norm = 1.0
# scheduler
decay_name = 'cosine-warmup'
warmup_ratio = 0.1
# logging
logging_steps = 10
# evaluate
output_dir = 'output'
seed = 43
```
#### Data Factory
```
train = pd.read_csv('../input/chaii-hindi-and-tamil-question-answering/train.csv')
test = pd.read_csv('../input/chaii-hindi-and-tamil-question-answering/test.csv')
external_mlqa = pd.read_csv('../input/mlqa-hindi-processed/mlqa_hindi.csv')
external_xquad = pd.read_csv('../input/mlqa-hindi-processed/xquad.csv')
external_train = pd.concat([external_mlqa, external_xquad, XQA_tamil_dev, XQA_tamil_test, hindi_xquad_tr, tamil_xquad_tr])
def create_folds(data, num_splits):
data["kfold"] = -1
kf = model_selection.StratifiedKFold(n_splits=num_splits, shuffle=True, random_state=43)
for f, (t_, v_) in enumerate(kf.split(X=data, y=data['language'])):
data.loc[v_, 'kfold'] = f
return data
train = create_folds(train, num_splits=5)
external_train["kfold"] = -1
external_train['id'] = list(np.arange(1, len(external_train)+1))
train = pd.concat([train, external_train]).reset_index(drop=True)
def convert_answers(row):
return {'answer_start': [row[0]], 'text': [row[1]]}
train['answers'] = train[['answer_start', 'answer_text']].apply(convert_answers, axis=1)
len(train)
train = train.drop_duplicates(subset=['context','question','answer_text','answer_start','language'])
len(train)
```
#### Convert Examples to Features (Preprocess)
```
def prepare_train_features(args, example, tokenizer):
example["question"] = example["question"].lstrip()
tokenized_example = tokenizer(
example["question"],
example["context"],
truncation="only_second",
max_length=args.max_seq_length,
stride=args.doc_stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
sample_mapping = tokenized_example.pop("overflow_to_sample_mapping")
offset_mapping = tokenized_example.pop("offset_mapping")
features = []
for i, offsets in enumerate(offset_mapping):
feature = {}
input_ids = tokenized_example["input_ids"][i]
attention_mask = tokenized_example["attention_mask"][i]
feature['input_ids'] = input_ids
feature['attention_mask'] = attention_mask
feature['offset_mapping'] = offsets
cls_index = input_ids.index(tokenizer.cls_token_id)
sequence_ids = tokenized_example.sequence_ids(i)
## for validation
feature["example_id"] = example['id']
feature['sequence_ids'] = [0 if i is None else i for i in tokenized_example.sequence_ids(i)]
feature['context'] = example["context"]
feature['question'] = example["question"]
feature['hindi_tamil'] = 0 if example["language"]=='hindi' else 1
##
sample_index = sample_mapping[i]
answers = example["answers"]
if len(answers["answer_start"]) == 0:
feature["start_position"] = cls_index
feature["end_position"] = cls_index
else:
start_char = answers["answer_start"][0]
end_char = start_char + len(answers["text"][0])
token_start_index = 0
while sequence_ids[token_start_index] != 1:
token_start_index += 1
token_end_index = len(input_ids) - 1
while sequence_ids[token_end_index] != 1:
token_end_index -= 1
if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):
feature["start_position"] = cls_index
feature["end_position"] = cls_index
else:
while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:
token_start_index += 1
feature["start_position"] = token_start_index - 1
while offsets[token_end_index][1] >= end_char:
token_end_index -= 1
feature["end_position"] = token_end_index + 1
features.append(feature)
return features
```
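`doc_stride` above controls the token overlap between successive windows when a long context exceeds `max_seq_length`; the windowing idea can be sketched in plain Python (an illustration of the concept, not the tokenizer's actual implementation):

```python
def sliding_windows(tokens, max_length, stride):
    """Split tokens into overlapping chunks; `stride` tokens are shared
    between neighbours so an answer span is never lost at a boundary."""
    step = max_length - stride
    windows = []
    start = 0
    while True:
        windows.append(tokens[start:start + max_length])
        if start + max_length >= len(tokens):
            break
        start += step
    return windows

print(sliding_windows(list(range(10)), max_length=4, stride=2))
# [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```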
#### Dataset Retriever
```
class DatasetRetriever(Dataset):
def __init__(self, features, mode='train'):
super(DatasetRetriever, self).__init__()
self.features = features
self.mode = mode
def __len__(self):
return len(self.features)
def __getitem__(self, item):
feature = self.features[item]
if self.mode == 'train':
return {
'input_ids':torch.tensor(feature['input_ids'], dtype=torch.long),
'attention_mask':torch.tensor(feature['attention_mask'], dtype=torch.long),
'offset_mapping':torch.tensor(feature['offset_mapping'], dtype=torch.long),
'start_position':torch.tensor(feature['start_position'], dtype=torch.long),
'end_position':torch.tensor(feature['end_position'], dtype=torch.long)
}
else:
if self.mode == 'valid':
return {
'input_ids':torch.tensor(feature['input_ids'], dtype=torch.long),
'attention_mask':torch.tensor(feature['attention_mask'], dtype=torch.long),
'offset_mapping':torch.tensor(feature['offset_mapping'], dtype=torch.long),
'sequence_ids':feature['sequence_ids'],
'start_position':torch.tensor(feature['start_position'], dtype=torch.long),
'end_position':torch.tensor(feature['end_position'], dtype=torch.long),
'example_id':feature['example_id'],
'context': feature['context'],
}
else:
return {
'input_ids':torch.tensor(feature['input_ids'], dtype=torch.long),
'attention_mask':torch.tensor(feature['attention_mask'], dtype=torch.long),
'offset_mapping':feature['offset_mapping'],
'sequence_ids':feature['sequence_ids'],
'id':feature['example_id'],
'context': feature['context'],
'question': feature['question']
}
```
#### Model
```
class WeightedLayerPooling(nn.Module):
def __init__(self, num_hidden_layers, layer_start: int = 4, layer_weights=None):
super(WeightedLayerPooling, self).__init__()
self.layer_start = layer_start
self.num_hidden_layers = num_hidden_layers
self.layer_weights = layer_weights if layer_weights is not None \
else nn.Parameter(
torch.tensor([1] * (num_hidden_layers + 1 - layer_start), dtype=torch.float)
)
def forward(self, all_hidden_states):
all_layer_embedding = all_hidden_states[self.layer_start:, :, :, :]
weight_factor = self.layer_weights.unsqueeze(-1).unsqueeze(-1).unsqueeze(-1).expand(all_layer_embedding.size())
weighted_average = (weight_factor * all_layer_embedding).sum(dim=0) / self.layer_weights.sum()
return weighted_average
class Model(nn.Module):
def __init__(self, modelname_or_path, config, layer_start, layer_weights=None):
super(Model, self).__init__()
self.config = config
config.update({
"hidden_dropout_prob": 0.0,
"layer_norm_eps": 1e-7,
"output_hidden_states": True
})
self.xlm_roberta = AutoModel.from_pretrained(modelname_or_path, config=config)
self.layer_start = layer_start
self.pooling = WeightedLayerPooling(config.num_hidden_layers,
layer_start=layer_start,
layer_weights=None)
self.layer_norm = nn.LayerNorm(config.hidden_size)
self.dropout = torch.nn.Dropout(0.3)
self.qa_output = torch.nn.Linear(config.hidden_size, 2)
torch.nn.init.normal_(self.qa_output.weight, std=0.02)
def forward(self, input_ids, attention_mask=None):
outputs = self.xlm_roberta(input_ids, attention_mask=attention_mask)
all_hidden_states = torch.stack(outputs.hidden_states)
weighted_pooling_embeddings = self.layer_norm(self.pooling(all_hidden_states))
#weighted_pooling_embeddings = weighted_pooling_embeddings[:, 0]
norm_embeddings = self.dropout(weighted_pooling_embeddings)
logits = self.qa_output(norm_embeddings)
start_logits, end_logits = logits.split(1, dim=-1)
start_logits = start_logits.squeeze(-1)
end_logits = end_logits.squeeze(-1)
return start_logits, end_logits
```
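`WeightedLayerPooling` computes a learned weighted average over the stacked per-layer hidden states: `sum_i w_i * h_i / sum_i w_i`. The arithmetic reduces to the following pure-Python toy (hypothetical values, tiny dimensions):

```python
def weighted_layer_pool(layer_states, weights):
    """layer_states: one toy hidden vector (list of floats) per layer.
    Returns the weight-normalized sum over layers."""
    total_w = sum(weights)
    dim = len(layer_states[0])
    pooled = [0.0] * dim
    for w, state in zip(weights, layer_states):
        for d in range(dim):
            pooled[d] += w * state[d] / total_w
    return pooled

# Two layers with equal weights -> plain mean of the two hidden vectors
print(weighted_layer_pool([[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0]))  # [2.0, 3.0]
```

In the module above the weights are an `nn.Parameter`, so the mixing coefficients are learned during training rather than fixed.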
#### Loss
```
def loss_fn(preds, labels):
start_preds, end_preds = preds
start_labels, end_labels = labels
start_loss = nn.CrossEntropyLoss(ignore_index=-1)(start_preds, start_labels)
end_loss = nn.CrossEntropyLoss(ignore_index=-1)(end_preds, end_labels)
total_loss = (start_loss + end_loss) / 2
return total_loss
```
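The loss above is the mean of two cross-entropy terms, one over start positions and one over end positions. For intuition, the same quantity can be reproduced with a hand-rolled softmax cross-entropy in pure Python (a sketch of the math only, not a replacement for the PyTorch call):

```python
import math

def cross_entropy(logits, label):
    """-log softmax(logits)[label], computed stably."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[label]

def span_loss(start_logits, end_logits, start_label, end_label):
    return 0.5 * (cross_entropy(start_logits, start_label)
                  + cross_entropy(end_logits, end_label))

# Uniform logits over 4 positions -> each term equals log(4)
demo_loss = span_loss([0.0] * 4, [0.0] * 4, 1, 2)
print(round(demo_loss, 4))  # 1.3863
```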
#### Grouped Layerwise Learning Rate Decay
```
def get_optimizer_grouped_parameters(args, model):
no_decay = ["bias", "LayerNorm.weight"]
group1=['layer.0.','layer.1.','layer.2.','layer.3.']
group2=['layer.4.','layer.5.','layer.6.','layer.7.']
group3=['layer.8.','layer.9.','layer.10.','layer.11.']
group_all=['layer.0.','layer.1.','layer.2.','layer.3.','layer.4.','layer.5.','layer.6.','layer.7.','layer.8.','layer.9.','layer.10.','layer.11.']
optimizer_grouped_parameters = [
{'params': [p for n, p in model.xlm_roberta.named_parameters() if not any(nd in n for nd in no_decay) and not any(nd in n for nd in group_all)],'weight_decay': args.weight_decay},
{'params': [p for n, p in model.xlm_roberta.named_parameters() if not any(nd in n for nd in no_decay) and any(nd in n for nd in group1)],'weight_decay': args.weight_decay, 'lr': args.learning_rate/10},
{'params': [p for n, p in model.xlm_roberta.named_parameters() if not any(nd in n for nd in no_decay) and any(nd in n for nd in group2)],'weight_decay': args.weight_decay, 'lr': args.learning_rate},
{'params': [p for n, p in model.xlm_roberta.named_parameters() if not any(nd in n for nd in no_decay) and any(nd in n for nd in group3)],'weight_decay': args.weight_decay, 'lr': args.learning_rate*10},
{'params': [p for n, p in model.xlm_roberta.named_parameters() if any(nd in n for nd in no_decay) and not any(nd in n for nd in group_all)],'weight_decay': 0.0},
{'params': [p for n, p in model.xlm_roberta.named_parameters() if any(nd in n for nd in no_decay) and any(nd in n for nd in group1)],'weight_decay': 0.0, 'lr': args.learning_rate/10},
{'params': [p for n, p in model.xlm_roberta.named_parameters() if any(nd in n for nd in no_decay) and any(nd in n for nd in group2)],'weight_decay': 0.0, 'lr': args.learning_rate},
{'params': [p for n, p in model.xlm_roberta.named_parameters() if any(nd in n for nd in no_decay) and any(nd in n for nd in group3)],'weight_decay': 0.0, 'lr': args.learning_rate*10},
{'params': [p for n, p in model.named_parameters() if args.model_type not in n], 'lr':args.learning_rate*40, "weight_decay": 0.0},
]
return optimizer_grouped_parameters
```
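The grouping above gives the lower transformer layers a smaller learning rate and the upper layers a larger one. The substring-matching rule can be illustrated in isolation; the parameter names below are hypothetical and the "everything else" case is a simplification of the full grouping:

```python
def layer_lr(param_name, base_lr):
    """Mimic the grouped LR rule: layers 0-3 -> lr/10, layers 4-7 -> lr,
    layers 8-11 -> lr*10, everything else -> base lr."""
    group1 = ['layer.%d.' % i for i in range(0, 4)]
    group2 = ['layer.%d.' % i for i in range(4, 8)]
    group3 = ['layer.%d.' % i for i in range(8, 12)]
    if any(g in param_name for g in group1):
        return base_lr / 10
    if any(g in param_name for g in group2):
        return base_lr
    if any(g in param_name for g in group3):
        return base_lr * 10
    return base_lr

print(layer_lr('encoder.layer.2.attention.self.query.weight', 1e-5))   # lr / 10
print(layer_lr('encoder.layer.10.output.dense.weight', 1e-5))          # lr * 10
```

Note that the trailing dot in each pattern matters: `'layer.1.'` does not match `'layer.10.'`, which is why the group lists can use plain substring tests.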
#### Metric Logger
```
class AverageMeter(object):
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
self.max = 0
self.min = 1e5
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
if val > self.max:
self.max = val
if val < self.min:
self.min = val
```
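The meter keeps a running weighted mean, `avg = sum(val * n) / sum(n)`. A quick standalone check of that bookkeeping (re-stated minimally here, without the min/max tracking, so the snippet runs on its own):

```python
class MiniAverageMeter:
    """Minimal re-statement of the meter above, for illustration only."""
    def __init__(self):
        self.sum = 0.0
        self.count = 0
        self.avg = 0.0

    def update(self, val, n=1):
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count

meter = MiniAverageMeter()
meter.update(2.0, n=4)   # e.g. mean loss 2.0 over a batch of 4
meter.update(1.0, n=4)   # mean loss 1.0 over the next batch of 4
print(meter.avg)         # (2*4 + 1*4) / 8 = 1.5
```

Passing `n=batch_size`, as the Trainer below does, makes the average per-example rather than per-batch.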
#### Utilities
```
def make_model(args):
config = AutoConfig.from_pretrained(args.config_name)
tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name)
model = Model(args.model_name_or_path, config=config,layer_start=12,layer_weights=None)
#model = Model(args.model_name_or_path, config=config)
return config, tokenizer, model
def make_optimizer(args, model):
named_parameters = list(model.named_parameters())
roberta_parameters = named_parameters[:389]
pooler_parameters = named_parameters[389:391]
qa_parameters = named_parameters[391:]
parameters = []
# increase lr every k layer
increase_lr_every_k_layer = 1
lrs = np.linspace(1, 5, 24 // increase_lr_every_k_layer)
for layer_num, (name, params) in enumerate(roberta_parameters):
weight_decay = 0.0 if "bias" in name else 0.01
splitted_name = name.split('.')
lr = args.learning_rate #Config.lr
if len(splitted_name) >= 4 and str.isdigit(splitted_name[3]):
layer_num = int(splitted_name[3])
lr = lrs[layer_num // increase_lr_every_k_layer] * lr
parameters.append({"params": params,
"weight_decay": weight_decay,
"lr": lr})
default_lr = 1e-3 #default LR for AdamW
for layer_num, (name,params) in enumerate(qa_parameters):
weight_decay = 0.0 if "bias" in name else 0.01
parameters.append({"params": params,
"weight_decay": weight_decay,
"lr": default_lr})
for layer_num, (name,params) in enumerate(pooler_parameters):
weight_decay = 0.0 if "bias" in name else 0.01
parameters.append({"params": params,
"weight_decay": weight_decay,
"lr": default_lr})
return AdamW(parameters)
def make_scheduler(
args, optimizer,
num_warmup_steps,
num_training_steps
):
if args.decay_name == "cosine-warmup":
scheduler = get_cosine_schedule_with_warmup(
optimizer,
num_warmup_steps=num_warmup_steps,
num_training_steps=num_training_steps
)
else:
scheduler = get_linear_schedule_with_warmup(
optimizer,
num_warmup_steps=num_warmup_steps,
num_training_steps=num_training_steps
)
return scheduler
def make_loader(
args, data,
tokenizer, fold
):
train_set, valid_set = data[data['kfold']!=fold], data[data['kfold']==fold].reset_index(drop=True)
train_features, valid_features = [], []
for i, row in train_set.iterrows():
train_features += prepare_train_features(args, row, tokenizer)
for i, row in valid_set.iterrows():
valid_features += prepare_train_features(args, row, tokenizer)
## Weighted sampler
hindi_tamil_count = []
for i, f in enumerate(train_features):
hindi_tamil_count.append(train_features[i]['hindi_tamil'])
class_sample_count = pd.Series(hindi_tamil_count).value_counts().values
weight = 1. / class_sample_count
samples_weight = np.array([weight[t] for t in hindi_tamil_count])
samples_weight = torch.from_numpy(samples_weight)
wsampler = torch.utils.data.sampler.WeightedRandomSampler(samples_weight.type('torch.DoubleTensor'), len(samples_weight))
train_dataset = DatasetRetriever(train_features, mode="train")
valid_dataset = DatasetRetriever(valid_features, mode="valid")
print(f"Num examples Train= {len(train_dataset)}, Num examples Valid={len(valid_dataset)}")
train_sampler = RandomSampler(train_dataset)
valid_sampler = SequentialSampler(valid_dataset)
train_dataloader = DataLoader(
train_dataset,
batch_size=args.train_batch_size,
sampler=train_sampler, #wsampler
num_workers=optimal_num_of_loader_workers(),
pin_memory=True,
drop_last=False
)
valid_dataloader = DataLoader(
valid_dataset,
batch_size=args.eval_batch_size,
sampler=valid_sampler,
num_workers=optimal_num_of_loader_workers(),
pin_memory=True,
drop_last=False
)
return train_dataloader, valid_dataloader, valid_features, valid_set
```
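The weighted-sampler block above computes inverse-class-frequency weights (`1 / count`), so examples from the minority language would be drawn more often. The weight computation in isolation, with toy labels (0 = hindi, 1 = tamil, matching the `hindi_tamil` flag):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each sample by 1 / (count of its class)."""
    counts = Counter(labels)
    return [1.0 / counts[label] for label in labels]

# 3 hindi samples vs 1 tamil sample: the tamil sample gets 3x the weight
weights = inverse_frequency_weights([0, 0, 0, 1])
print(weights)
```

Note that in `make_loader` the sampler is actually computed but left unused — the `DataLoader` is built with a plain `RandomSampler`, with `wsampler` only mentioned in a comment.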
#### Trainer
```
class Trainer:
def __init__(
self, model, tokenizer,
optimizer, scheduler
):
self.model = model
self.tokenizer = tokenizer
self.optimizer = optimizer
self.scheduler = scheduler
def train(
self, args,
train_dataloader,
epoch, result_dict
):
count = 0
losses = AverageMeter()
self.model.zero_grad()
self.model.train()
fix_all_seeds(args.seed)
for batch_idx, batch_data in enumerate(train_dataloader):
input_ids, attention_mask, targets_start, targets_end = \
batch_data['input_ids'], batch_data['attention_mask'], \
batch_data['start_position'], batch_data['end_position']
input_ids, attention_mask, targets_start, targets_end = \
input_ids.cuda(), attention_mask.cuda(), targets_start.cuda(), targets_end.cuda()
outputs_start, outputs_end = self.model(
input_ids=input_ids,
attention_mask=attention_mask,
)
loss = loss_fn((outputs_start, outputs_end), (targets_start, targets_end))
loss = loss / args.gradient_accumulation_steps
if args.fp16:
with amp.scale_loss(loss, self.optimizer) as scaled_loss:
scaled_loss.backward()
else:
loss.backward()
count += input_ids.size(0)
losses.update(loss.item(), input_ids.size(0))
# if args.fp16:
# torch.nn.utils.clip_grad_norm_(amp.master_params(self.optimizer), args.max_grad_norm)
# else:
# torch.nn.utils.clip_grad_norm_(self.model.parameters(), args.max_grad_norm)
if batch_idx % args.gradient_accumulation_steps == 0 or batch_idx == len(train_dataloader) - 1:
self.optimizer.step()
self.scheduler.step()
self.optimizer.zero_grad()
if (batch_idx % args.logging_steps == 0) or (batch_idx+1)==len(train_dataloader):
_s = str(len(str(len(train_dataloader.sampler))))
ret = [
('Epoch: {:0>2} [{: >' + _s + '}/{} ({: >3.0f}%)]').format(epoch, count, len(train_dataloader.sampler), 100 * count / len(train_dataloader.sampler)),
'Train Loss: {: >4.5f}'.format(losses.avg),
]
print(', '.join(ret))
result_dict['train_loss'].append(losses.avg)
return result_dict
```
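With gradient accumulation, the loss is scaled by `1/k` and the optimizer only steps on every k-th batch, plus once at the final batch. The step schedule used by the Trainer above can be traced without any model (toy values):

```python
def optimizer_step_batches(num_batches, accumulation_steps):
    """Return the batch indices at which the Trainer above calls
    optimizer.step(): every k-th batch index, and always the last batch."""
    return [b for b in range(num_batches)
            if b % accumulation_steps == 0 or b == num_batches - 1]

print(optimizer_step_batches(10, 4))  # [0, 4, 8, 9]
```

Faithfully to the code above, batch index 0 also triggers a step (since `0 % k == 0`), which means the very first batch is applied without accumulation.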
#### Evaluator
```
class Evaluator:
def __init__(self, model):
self.model = model
def save(self, result, output_dir):
with open(f'{output_dir}/result_dict.json', 'w') as f:
f.write(json.dumps(result, sort_keys=True, indent=4, ensure_ascii=False))
def evaluate(self, valid_dataloader, epoch, result_dict):
losses = AverageMeter()
all_outputs_start, all_outputs_end = [], []
for batch_idx, batch_data in enumerate(valid_dataloader):
self.model = self.model.eval()
input_ids, attention_mask, targets_start, targets_end = \
batch_data['input_ids'], batch_data['attention_mask'], \
batch_data['start_position'], batch_data['end_position']
input_ids, attention_mask, targets_start, targets_end = \
input_ids.cuda(), attention_mask.cuda(), targets_start.cuda(), targets_end.cuda()
with torch.no_grad():
outputs_start, outputs_end = self.model(
input_ids=input_ids,
attention_mask=attention_mask,
)
all_outputs_start.append(outputs_start.cpu().numpy().tolist())
all_outputs_end.append(outputs_end.cpu().numpy().tolist())
loss = loss_fn((outputs_start, outputs_end), (targets_start, targets_end))
losses.update(loss.item(), input_ids.size(0))
all_outputs_start = np.vstack(all_outputs_start)
all_outputs_end = np.vstack(all_outputs_end)
print('----Validation Results Summary----')
print('Epoch: [{}] Valid Loss: {: >4.5f}'.format(epoch, losses.avg))
result_dict['val_loss'].append(losses.avg)
return result_dict, all_outputs_start, all_outputs_end
```
#### Initialize Training
```
def init_training(args, data, fold):
fix_all_seeds(args.seed)
if not os.path.exists(args.output_dir):
os.makedirs(args.output_dir)
# model
model_config, tokenizer, model = make_model(args)
if torch.cuda.device_count() >= 1:
print('Model pushed to {} GPU(s), type {}.'.format(
torch.cuda.device_count(),
torch.cuda.get_device_name(0))
)
model = model.cuda()
else:
raise ValueError('CPU training is not supported')
# data loaders
train_dataloader, valid_dataloader, valid_features, valid_set = make_loader(args, data, tokenizer, fold)
# optimizer
optimizer = make_optimizer(args, model)
# scheduler
num_training_steps = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) * args.epochs
if args.warmup_ratio > 0:
num_warmup_steps = int(args.warmup_ratio * num_training_steps)
else:
num_warmup_steps = 0
print(f"Total Training Steps: {num_training_steps}, Total Warmup Steps: {num_warmup_steps}")
scheduler = make_scheduler(args, optimizer, num_warmup_steps, num_training_steps)
# mixed precision training with NVIDIA Apex
if args.fp16:
model, optimizer = amp.initialize(model, optimizer, opt_level=args.fp16_opt_level)
result_dict = {
'epoch':[],
'train_loss': [],
'val_loss' : [],
'best_val_loss': np.inf
}
return (
model, model_config, tokenizer, optimizer, scheduler,
train_dataloader, valid_dataloader, result_dict, valid_features, valid_set
)
```
#### Validation Jaccard
```
# Ref: https://www.kaggle.com/rhtsingh/chaii-qa-5-fold-xlmroberta-torch-infer
import collections
def postprocess_qa_predictions(examples, features1, raw_predictions, tokenizer, n_best_size = 20, max_answer_length = 30):
features = features1
all_start_logits, all_end_logits = raw_predictions
example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
features_per_example = collections.defaultdict(list)
for i, feature in enumerate(features):
features_per_example[example_id_to_index[feature["example_id"]]].append(i)
predictions = collections.OrderedDict()
print(f"Post-processing {len(examples)} example predictions split into {len(features)} features.")
for example_index, example in examples.iterrows():
feature_indices = features_per_example[example_index]
#print(example['id'],example_index,feature_indices)
min_null_score = None
valid_answers = []
context = example["context"]
for feature_index in feature_indices:
start_logits = all_start_logits[feature_index]
end_logits = all_end_logits[feature_index]
sequence_ids = features[feature_index]["sequence_ids"]
context_index = 1
offset_mapping = [
(o if sequence_ids[k] == context_index else None)
for k, o in enumerate(features[feature_index]["offset_mapping"])
]
cls_index = features[feature_index]["input_ids"].index(tokenizer.cls_token_id)
feature_null_score = start_logits[cls_index] + end_logits[cls_index]
if min_null_score is None or min_null_score < feature_null_score:
min_null_score = feature_null_score
start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
for start_index in start_indexes:
for end_index in end_indexes:
if (
start_index >= len(offset_mapping)
or end_index >= len(offset_mapping)
or offset_mapping[start_index] is None
or offset_mapping[end_index] is None
):
continue
# Don't consider answers with a length that is either < 0 or > max_answer_length.
if end_index < start_index or end_index - start_index + 1 > max_answer_length:
continue
start_char = offset_mapping[start_index][0]
end_char = offset_mapping[end_index][1]
valid_answers.append(
{
"score": start_logits[start_index] + end_logits[end_index],
"text": context[start_char: end_char]
}
)
if len(valid_answers) > 0:
best_answer = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[0]
else:
best_answer = {"text": "", "score": 0.0}
predictions[example["id"]] = best_answer["text"]
return predictions
def jaccard(str1, str2):
a = set(str1.lower().split())
b = set(str2.lower().split())
c = a.intersection(b)
return float(len(c)) / (len(a) + len(b) - len(c))
```
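A quick worked example of the word-level Jaccard score above (standalone copy of the function so the snippet runs on its own):

```python
def jaccard(str1, str2):
    a = set(str1.lower().split())
    b = set(str2.lower().split())
    c = a.intersection(b)
    return float(len(c)) / (len(a) + len(b) - len(c))

# 2 shared words out of 4 distinct words -> 0.5
print(jaccard("the red fox", "the brown fox"))  # 0.5
```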
#### Run
```
all_jacard_scores = []
def run(data, fold):
args = Config()
model, model_config, tokenizer, optimizer, scheduler, train_dataloader, \
valid_dataloader, result_dict, valid_features, valid_set = init_training(args, data, fold)
trainer = Trainer(model, tokenizer, optimizer, scheduler)
evaluator = Evaluator(model)
train_time_list = []
valid_time_list = []
for epoch in range(args.epochs):
result_dict['epoch'].append(epoch)
# Train
torch.cuda.synchronize()
tic1 = time.time()
result_dict = trainer.train(
args, train_dataloader,
epoch, result_dict
)
torch.cuda.synchronize()
tic2 = time.time()
train_time_list.append(tic2 - tic1)
# Evaluate
torch.cuda.synchronize()
tic3 = time.time()
result_dict, all_outputs_start, all_outputs_end = evaluator.evaluate(
valid_dataloader, epoch, result_dict
)
torch.cuda.synchronize()
# # Get valid jaccard score
valid_features1 = valid_features.copy()
valid_preds = postprocess_qa_predictions(valid_set, valid_features1, (all_outputs_start, all_outputs_end), tokenizer)
valid_set['PredictionString'] = valid_set['id'].map(valid_preds)
valid_set['jaccard'] = valid_set[['answer_text','PredictionString']].apply(lambda x: jaccard(x[0],x[1]), axis=1)
print("valid jaccard: ",np.mean(valid_set.jaccard))
all_jacard_scores.append(np.mean(valid_set.jaccard))
tic4 = time.time()
valid_time_list.append(tic4 - tic3)
output_dir = os.path.join(args.output_dir, f"checkpoint-fold-{fold}-epoch-{epoch}")
os.makedirs(output_dir, exist_ok=True)
if result_dict['val_loss'][-1] < result_dict['best_val_loss']:
print("{} Epoch, Best epoch was updated! Valid Loss: {: >4.5f}".format(epoch, result_dict['val_loss'][-1]))
result_dict["best_val_loss"] = result_dict['val_loss'][-1]
# os.makedirs(output_dir, exist_ok=True)
torch.save(model.state_dict(), f"{output_dir}/pytorch_model.bin")
model_config.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
print(f"Saving model checkpoint to {output_dir}.")
print()
evaluator.save(result_dict, output_dir)
print(f"Total Training Time: {np.sum(train_time_list)}secs, Average Training Time per Epoch: {np.mean(train_time_list)}secs.")
print(f"Total Validation Time: {np.sum(valid_time_list)}secs, Average Validation Time per Epoch: {np.mean(valid_time_list)}secs.")
#del trainer, evaluator
#del model, model_config, tokenizer
#del optimizer, scheduler
#del train_dataloader, valid_dataloader, result_dict
for fold in range(5):
print();print()
print('-'*50)
print(f'FOLD: {fold}')
print('-'*50)
run(train, fold)
print("*"*50)
print("Final jacard scores, 5-fold: ", np.round(all_jacard_scores,5))
print("Average jacard:",np.mean(all_jacard_scores))
print("*"*50)
```
```
import matplotlib.pyplot as plt
from matplotlib import style
import numpy as np
%matplotlib inline
style.use('ggplot')
x = [20,30,50]
y = [ 10,50,13]
x2 = [4,10,47,]
y2= [56,4,30]
plt.plot(x, y, 'r', label='line one', linewidth=5)
plt.plot(x2, y2, 'c', label ='line two', linewidth=5)
plt.title('Interactive plot')
plt.xlabel('X axis')
plt.ylabel('Y axis')
plt.legend()
#plt.grid(True, color='k')
plt.show()
#BAR GRAPH
plt.bar([1,4,5,3,2],[4,7,8,10,11], label='Type 1')
plt.bar([9,7,6,8,10],[3,6,9,11,15], label = 'Type 2', color='k')
plt.legend()
plt.xlabel('Bar Number')
plt.ylabel('Bar Height')
plt.title('Bar Graph')
plt.show()
```
HISTOGRAM
```
#Bar plots have categorical variables while histograms have quantitative variables
population_ages = [22,34,45,78,23,65,47,98,70,56,54,87,23,54,31,35,
64,76,87,80,60,73,47,63,79,52,75,64,51,46,83,62,36,74,63]
from numpy.random import seed
from numpy.random import randint
seed(1)
#generate some random integers
population_ages_2 = randint(10,50,40)
#print(population_ages_2)
bins = [20,30,40,50,60,70,80,90,100]
plt.hist(population_ages, bins, histtype='bar', color = 'm', rwidth = 0.5, label='Sample 1')
plt.hist(population_ages_2, bins, histtype='bar', color = 'c', rwidth = 0.5, label='Sample 2')
plt.xlabel('X axis')
plt.ylabel('Y axis')
plt.title('Histogram')
plt.legend()
plt.show()
```
AREA PLOT AND STACK PLOT
```
seed(0)
days = [1, 2, 3, 4, 5]
sleeping = randint(10,30,5)
eating = randint(40,60,5)
working = randint(70,100,5)
playing = randint(100,150,5)
plt.plot([],[], color = 'm', label = 'sleeping', linewidth = 5)
plt.plot([],[], color = 'c', label = 'eating', linewidth = 5)
plt.plot([],[], color = 'r', label = 'working', linewidth = 5)
plt.plot([],[], color = 'k', label = 'playing', linewidth = 5)
plt.stackplot(days, sleeping, eating, working, playing, colors = ['m','c','r','k'])
plt.legend()
```
PIE CHART
```
seed(0)
slices = randint(20,100,5)
activities = ['balling','playing','sleeping','praying','eating']
cols = ['c','m','r','b','y']
plt.pie(slices,
labels = activities,
startangle = 90,
shadow = True,
colors = cols,
autopct = '%.1f%%', #formats the percentage of the data given
explode=(0,0.2,0,0,0.1)) #this is to explode the chart and takes positional argument
plt.title('Pie Chart')
plt.show()
#working with Multiple Plots
def f(t):
return np.exp(-t) * np.cos(2*np.pi*t)
t1 = np.arange(0.0,5.0,0.1)
t2 = np.arange(0.0,6.0,0.4)
plt.subplot(211)
plt.plot(t1, f(t1),'bo',
t2, f(t2))
plt.subplot(212)
plt.plot(t1, np.cos(2*np.pi*t1), color = 'k')
plt.show()
```
FURTHER PLOTTING IN MATPLOTLIB/PYLAB
```
from matplotlib import pylab
pylab.__version__
import numpy as np
x = np.linspace(0,10,25)
y = x*x+2
print()
print(x)
print()
print(y)
#print(np.array([x,y]).reshape(25,2)) # to join the array together
pylab.plot(x,y, 'r') #'r' stands for red
#drawing a subgraph
pylab.subplot(1,2,1) #rows, columns and indexes
pylab.plot(x,y, 'b--')
pylab.subplot(1,2,2)
pylab.plot(y,x, 'g*-')
import matplotlib.pyplot as plt
from matplotlib import style
style.use('ggplot')
fig = plt.figure()
ax = fig.add_axes([0.5,0.1,0.8,0.8]) #this controls the left,bottom,width and height of the canvas
ax.plot(x,y, 'r')
#we can also draw subgraphs
fig, axes = plt.subplots(nrows=1, ncols=2)
for ax in axes:
ax.plot(x,y, 'r')
#we can also draw a graph inside of another graph
fig = plt.figure()
ax1 = fig.add_axes([0.5,0.1,0.8,0.8]) #Big axes
ax2 = fig.add_axes([0.6,0.5,0.35,0.3]) #small canvas
ax1.plot(x,y,'r')
ax2.plot(y,x, 'g')
fig, ax = plt.subplots(dpi=100)
ax.set_xlabel('X-axis')
ax.set_ylabel('Y-axis')
ax.set_title('tutorial plots')
#ax.plot(x,y, 'r')
ax.plot(x,x**2)
ax.plot(x, x**3)
#ax.legend(['label 1', 'label 2'])
ax.legend(['y = x**2', 'y = x**3'], loc=2) #plotting the legend
#you can also set other properties such as line color, transparency and more
fig, ax = plt.subplots(dpi=100)
ax.plot(x, x**2, 'r', alpha=0.5) #alpha sets the line colour transparency
ax.plot(x, x+2, alpha=.5)
ax.plot(x, x+3, alpha=.5)
fig, ax = plt.subplots(dpi=100)
#line width
ax.plot(x, x+1, 'b', lw=0.5 )
ax.plot(x, x+2, 'b', lw=1.5)
ax.plot(x, x+3, 'b', lw=3)
ax.plot(x, x+4, 'b', lw=3.5)
fig, ax = plt.subplots(dpi=100)
ax.plot(x, x+1, 'b', lw=0.5, linestyle='-')
ax.plot(x, x+2, 'b', lw=1.5, linestyle='-.')
ax.plot(x, x+3, 'b', lw=3, linestyle=':')
ax.plot(x, x+4, 'b', lw=3.5, linestyle='-')
fig, ax = plt.subplots(dpi=100)
ax.plot(x, x+1, 'b', lw=0.5 , marker='o', markersize=5, markerfacecolor='r')
ax.plot(x, x+2, 'b', lw=1.5, marker='+')
ax.plot(x, x+3, 'b', lw=3, marker='s')
ax.plot(x, x+4, 'b', lw=3.5, marker='1', markersize=10)
```
LIMITING OUR DATA
```
fig, ax = plt.subplots(1,2, figsize=(10,5))
ax[0].plot(x,x**2, x,x**3, lw=3)
#ax[0].grid(True) this applies if we are not using ggplot
ax[1].plot(x,x**2, x,x**3, lw=3)
#we set the x and y limit on the second plot
ax[1].set_ylim([0,60])
ax[1].set_xlim([2,5])
```
OTHER 2-D GRAPHS
```
n = np.array([0,1,2,3,4,5])
fig, ax = plt.subplots(1,4, figsize=(16,5))
ax[0].set_title('scatter')
ax[0].scatter(x, x + 0.25*np.random.randn(len(x)))
ax[1].set_title('step plot')
ax[1].step(n, n**2, lw=2, color='b')
ax[2].set_title('Bar')
ax[2].bar(n, n**2, align='center', color ='g', alpha=0.5)
ax[3].set_title('fill between')
ax[3].fill_between(x, x**2, x**3, color ='g', alpha=0.5)
plt.show()
#Draw a histogram (very important)
x = np.random.randn(10000)
fig, ax = plt.subplots(1,2, figsize=(12,4))
ax[0].set_title('Histogram')
ax[0].hist(x, color='g', alpha=0.8)
ax[1].set_title('Cumulative detailed histogram')
ax[1].hist(x, cumulative=True, bins=9)
plt.show()
#draw a contour map
#lets create some data where X and Y are coordinates and Z is the depth or height
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.cm as cm
delta = 0.0075
x = np.arange(-3, 3, delta)
y = np.arange(-2, 2, delta)
X, Y = np.meshgrid(x,y)
Z1 = np.exp(-X**2 - Y**2)
Z2 = np.exp(-(-X - 1)**2 - (Y - 1)**2)
Z = (Z1 - Z2)*2
fig, ax = plt.subplots(dpi=100)
CS = ax.contour(X,Y,Z) #CS is contour surface
ax.clabel(CS, inline=1, fontsize=10)
ax.set_title('Contour Map')
```
3 D MAPS
```
from mpl_toolkits.mplot3d.axes3d import Axes3D
fig = plt.figure(figsize=(14,6), dpi=100)
#Specify the 3D graphics to draw with projection='3d'
ax = fig.add_subplot(1,2,1, projection='3d')
ax.plot_surface(X, Y, Z, rstride=10, cstride=10, lw=0, color='c')
#write a program to create a pie chart of the popularity of programming languages
popularity = [200,334,890,290,679,300,980] #No of users of programming languages
prog_lang = ['Java', 'C#', 'C++', 'CSS', 'Java Script', 'Python', 'R']
fig = plt.figure(figsize=(14,6), dpi=100)
plt.pie(popularity,
shadow = True,
autopct= '%.f%%', startangle = 180,
explode=[0,0,0,0,0,0,0.1],
labels = prog_lang)
plt.title('Popularity of Programming languages')
plt.show()
```
# Gas Price Prediction with Recurrent Neural Networks (Hourly, Window 2)
This notebook contains the generic RNN model used in the thesis project. The experiment includes two datasets extracted for a predefined gas station: the first contains the daily maximum prices, while the other has hourly granularity. The datasets are tested with basic recurrent, LSTM and GRU cells. An extensive grid search was performed to build a set of different models; the following excerpt shows the settings. <br> <br>
**Excerpt of experiment settings (Thesis):**
> [...] For this reason, the **hidden layer size (4, 8 or 12 neurons)** and the window size were modified in a systematic way. Modifying the number of hidden neurons helps in determining a suitable architecture of the network. Different **window sizes (1, 2, 3, 7)** represent the number of input neurons. Furthermore, the various window sizes should test the dataset for different long or short-term dependencies. [...] The numbers of **training iterations** were **50, 100 and 200** for the daily dataset and **10, 25, 50** for the hourly dataset.
<br>
Main parts of this notebook are adapted from **[2]**.
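The grid described in the excerpt can be enumerated mechanically. A small sketch using the hourly-dataset values quoted above (the enumeration helper itself is illustrative, not part of the thesis code):

```python
from itertools import product

# Values taken from the experiment excerpt above (hourly dataset)
hidden_sizes = [4, 8, 12]
window_sizes = [1, 2, 3, 7]
epochs_hourly = [10, 25, 50]

grid = list(product(hidden_sizes, window_sizes, epochs_hourly))
print(len(grid))   # 3 * 4 * 3 = 36 configurations
print(grid[0])     # (4, 1, 10)
```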
```
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn.metrics as metrics
from sklearn.preprocessing import StandardScaler
%matplotlib inline
```
# Model Building
## 1. Data Loading
In the following example, a preprocessed gas price dataset is used. The data contains the hourly prices in the period from 06-2014 - 05-2016. The dataset has been extracted from **[1]**.
```
gas_price = pd.read_csv("../Data/First_station_hour.csv", sep=";")
gas_price.head()
data_to_use = gas_price['e10'].values
```
## 2. Data Preprocessing
In this step, the input data is scaled; scaling supports model training. The following figure shows the scaled data.
In addition, the dataset is windowed. With sliding windows, sequences of different lengths can be fed to the network for tuning, and predictions are made based on these sequences. Applying windowing shortens the original dataset by a few observations.
#### Step 2.1: Data Scaling
```
scaler = StandardScaler()
scaled_data = scaler.fit_transform(data_to_use.reshape(-1, 1))
plt.figure(figsize=(16,7), frameon=False, edgecolor='blue')
plt.title('Scaled gas prices from July 2014 to May 2016')
plt.xlabel('Hours')
plt.ylabel('Scaled prices')
plt.plot(scaled_data, label='Price data (hourly)')
plt.legend()
plt.show()
```
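`StandardScaler` applies the z-transform `z = (x − μ) / σ` per column; the same transform in a few lines of pure Python, for intuition only:

```python
import math

def standardize(values):
    mean = sum(values) / len(values)
    # Population standard deviation (ddof=0), matching sklearn's default
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / std for v in values]

# Mean 2, std sqrt(2/3): the result is centred on 0 with unit variance
print(standardize([1.0, 2.0, 3.0]))
```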
#### Step 2.2: Windowing the dataset
In order to modify the window size, the window_size parameter must be changed here **and** in the hyperparameter section (see Step 3).
```
def window_data(data, window_size):
X = []
y = []
i = 0
while (i + window_size) <= len(data) - 1:
X.append(data[i:i+window_size])
y.append(data[i+window_size])
i += 1
assert len(X) == len(y)
return X, y
X, y = window_data(scaled_data, 2)
```
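To make the sliding-window behaviour concrete, here is the same function applied to a toy series (standalone copy of `window_data`, with renamed outputs so it does not clobber the notebook's `X` and `y`):

```python
def window_data(data, window_size):
    X = []
    y = []
    i = 0
    while (i + window_size) <= len(data) - 1:
        X.append(data[i:i + window_size])  # input window
        y.append(data[i + window_size])    # next value to predict
        i += 1
    return X, y

demo_X, demo_y = window_data([10, 20, 30, 40, 50], 2)
print(demo_X)  # [[10, 20], [20, 30], [30, 40]]
print(demo_y)  # [30, 40, 50]
```

With 5 observations and a window of 2, only 3 training pairs remain — this is the shortening by a few observations mentioned in Step 2.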
#### Step 2.3 Train/test splitting
The windowed dataset is divided into an 80/20 (%) train/test split.
```
split = int(len(X) * 0.8)
X_train = np.array(X[:split])
y_train = np.array(y[:split])
X_test = np.array(X[split:])
y_test = np.array(y[split:])
print("X_train size: {}".format(len(X_train)))
print("y_train size: {}".format(len(X_test)))
print("X_test size: {}".format(len(y_train)))
print("y_test size: {}".format(len(y_test)))
print("X_train size: {}".format(X_train.shape))
print("y_train size: {}".format(y_train.shape))
print("X_test size: {}".format(X_test.shape))
print("y_test size: {}".format(y_test.shape))
```
## 3. Network Definition
#### Hyperparameter definition
```
#Hyperparameters to change
window_size = 2
hidden_layer_size = 4
epochs = 10
number_of_layers = 1
#Fixed Hyperparameters
batch_size = 7
gradient_clip_margin = 4
learning_rate = 0.001
number_of_classes = 1
```
#### Output layer
For comparison of various networks, weight initialization is fixed. Therefore, the seed has been set to 2222.
```
def output_layer(lstm_output, in_size, out_size):
x = lstm_output[:, -1, :]
weights = tf.Variable(tf.truncated_normal([in_size, out_size], stddev=0.05, seed=2222), name='output_layer_weights')
bias = tf.Variable(tf.zeros([out_size]), name='output_layer_bias')
output = tf.matmul(x, weights) + bias
return output
```
#### Loss and optimization
In this function, the gradients are computed, the Adam optimizer and gradient clipping are applied, and the loss function is minimized.
```
def opt_loss(logits, targets, learning_rate, grad_clip_margin):
loss = tf.losses.mean_squared_error(targets, logits)
#Clipping the gradients
gradients = tf.gradients(loss, tf.trainable_variables())
clipped_gradients, _ = tf.clip_by_global_norm(gradients, grad_clip_margin)
optimizer = tf.train.AdamOptimizer(learning_rate)
train_optimizer = optimizer.apply_gradients(zip(clipped_gradients, tf.trainable_variables()))
return loss, train_optimizer
```
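`tf.clip_by_global_norm` rescales all gradients jointly when their combined L2 norm exceeds the margin, using the rule `g_i * clip_norm / max(global_norm, clip_norm)`. A pure-Python sketch of that rule (illustrative only, scalar "gradients"):

```python
import math

def clip_by_global_norm(grads, clip_norm):
    """Scale all gradients jointly so their global L2 norm is at most clip_norm."""
    global_norm = math.sqrt(sum(g * g for g in grads))
    if global_norm <= clip_norm:
        return grads, global_norm
    scale = clip_norm / global_norm
    return [g * scale for g in grads], global_norm

clipped, norm = clip_by_global_norm([3.0, 4.0], clip_norm=4)
print(norm)      # 5.0
print(clipped)   # rescaled so the new global norm is exactly 4
```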
#### Build network
At this point, the entire network (computation graph) is built. To completely exclude randomness, the random seed is also fixed at graph level. <br> <br>
```
tf.reset_default_graph()
tf.set_random_seed(1111)
#Inputs
inputs = tf.placeholder(tf.float32, [batch_size, window_size, 1], name='input_data')
targets = tf.placeholder(tf.float32, [batch_size, 1], name='targets')
drop_rate = tf.placeholder(tf.float32, name='drop_rate')
#Build Network
#
# Swap the cell constructor below to change the architecture:
#
# -> LSTMCell() for LSTM
# -> BasicRNNCell() for RNN (used here)
# -> GRUCell() for GRU
#
# The arguments remain unchanged.
lstm_cell = tf.nn.rnn_cell.BasicRNNCell(hidden_layer_size, activation=tf.nn.elu)
lstm_dropout = tf.contrib.rnn.DropoutWrapper(lstm_cell,input_keep_prob=drop_rate)
cell = tf.nn.rnn_cell.MultiRNNCell([lstm_dropout] * number_of_layers)
init_state = cell.zero_state(batch_size, tf.float32)
outputs, states = tf.nn.dynamic_rnn(cell, inputs, initial_state=init_state)
logits = output_layer(outputs, hidden_layer_size, number_of_classes)
loss, opt = opt_loss(logits, targets, learning_rate, gradient_clip_margin)
```
## 4. Network Training
```
session = tf.Session()
session.run(tf.global_variables_initializer())
```
#### 4.1. Model training
After building the tf-graph, it is now possible to train the network. To do so, the computation graph is executed via `session.run()`. Placeholders are fed via the `feed_dict` argument.
```
step = 0
#global lists to save run-time statistics (loss and predictions)
scores_per_epoch = []
loss_per_epoch = []
for i in range(epochs):
    trained_scores = []
    epoch_loss = []
    ii = 0
    while (ii + batch_size) <= len(X_train):
        X_batch = X_train[ii:ii+batch_size]
        y_batch = y_train[ii:ii+batch_size]
        o, c, _ = session.run([logits, loss, opt], feed_dict={inputs:X_batch, targets:y_batch, drop_rate:1})
        epoch_loss.append(c)
        trained_scores.append(o)
        ii += batch_size
        step += 1
    #Add current statistics to global list
    scores_per_epoch.append(trained_scores)
    loss_per_epoch.append(np.mean(epoch_loss))
    if (i % 5) == 0:
        with session.as_default():
            print('Epoch {}/{}'.format(i, epochs), ' Current loss: {}'.format(np.mean(epoch_loss)))
```
#### 4.2. Plot of training score (loss per epoch)
As depicted in the figure, the network learns rapidly within the first few epochs. After that, the loss improves only marginally.
```
plt.figure(figsize=(16, 7))
plt.plot(loss_per_epoch, label='Original data')
```
#### 4.3. Collect training data
In this loop, the most recent predictions on the training set are collected for later use.
```
sup = []
for i in range(len(trained_scores)):
    for j in range(len(trained_scores[i])):
        sup.append(trained_scores[i][j])
```
#### 4.4. Model Test
In the cell below, the model validation process is performed. It is possible to predict short-term price movements with the help of rolling forecasts. By doing so, test data is fed to the trained model. The forecast results (based on batch and window size) get stored in the list tests.
```
tests = []
i = 0
while i + batch_size <= len(X_test):
    o = session.run([logits], feed_dict={inputs:X_test[i:i+batch_size], drop_rate:1.0})
    i += batch_size
    tests.append(o)
```
#### 4.5. Remove duplicate entries
The list tests contains multiple predictions for one observation due to batched mode. To get rid of these, the following operation is performed.
```
tests_new = []
for i in range(len(tests)):
    for j in range(len(tests[i][0])):
        tests_new.append(tests[i][0][j])
```
#### 4.6 Postprocess predictions
The cleansed list tests_new must be processed in order to plot the results correctly. For this reason, the predictions get inserted at the right index.
```
pos = len(X_train)
size = len(X) - (batch_size - window_size + 5)
test_results = []
for i in range(size):
    if i >= pos:
        test_results.append(tests_new[i-pos])
    else:
        test_results.append(None)
```
#### 4.7. Plot (scaled) results
The blue line shows the original data. The network seems to make good predictions. Due to batched mode, the last observations cannot be predicted.
```
plt.figure(figsize=(16, 9))
plt.title('Hourly Dataset')
plt.xlabel('Predicted Hours')
plt.ylabel('Scaled Gas Prices')
plt.plot(scaled_data, label='Original data')
plt.plot(sup, label='Training data')
plt.plot(test_results, label='Testing data')
plt.legend()
plt.show()
```
## 5. Analysis of Results
```
pred_rescaled = scaler.inverse_transform(tests_new, copy=None)
pred_rescaled_round = pred_rescaled.round()
```
#### 5.1. Plot rescaled results
The plot reveals that the network is able to predict the hourly data with some deviations. The graphs further suggest that the network has problems predicting minima and maxima. Interestingly, the predictions show a spike in the right direction, but get adjusted after one step. Apart from that, rounding the values did not increase the accuracy.
```
plt.figure(figsize=(16, 7))
plt.plot(data_to_use[len(X_train)+1:(len(X_train)+37)], label='Original data', linestyle='-', color='k')
plt.plot(pred_rescaled[:36], label='Test data')
plt.plot(pred_rescaled_round[:36], label='Test data (round)')
plt.legend()
plt.show()
```
### Metrics
The following metrics are used:
+ Mean Absolute Error **(MAE)**
+ Mean Squared Error **(MSE)**
+ Root Mean Squared Error **(RMSE)**
+ Mean Abs. Percentage Error **(MAPE)**$*$
+ Mean Percentage Error **(MPE)**
*$*$ Function **mean_absolute_percentage_error** has been adapted from **[3]**. **mean_percentage_error** is based on this.*
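For reference, the standard definitions of these metrics (with $y_t$ the observed value, $\hat{y}_t$ the prediction, and $n$ the number of test points) are:

$$MAE = \frac{1}{n}\sum_{t=1}^{n}\left|y_t - \hat{y}_t\right|, \qquad MSE = \frac{1}{n}\sum_{t=1}^{n}\left(y_t - \hat{y}_t\right)^2, \qquad RMSE = \sqrt{MSE},$$

$$MAPE = \frac{100}{n}\sum_{t=1}^{n}\left|\frac{y_t - \hat{y}_t}{y_t}\right|, \qquad MPE = \frac{100}{n}\sum_{t=1}^{n}\frac{y_t - \hat{y}_t}{y_t}.$$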
```
def mean_percentage_error(y_true, y_pred):
    #signed percentage error; positive values indicate under-prediction
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    return np.mean((y_true - y_pred) / y_true) * 100

def mean_absolute_percentage_error(y_true, y_pred):
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

def print_metrics(prediction, rnd):
    start = len(X_train)+1
    end = len(X)-5
    local_data = data_to_use[start:end] #Test set, extracted
    if rnd == True: #if rounded data is investigated
        #cast to int does not harm data as it is already rounded when passed to function;
        #cast is not necessary, but performed for completeness.
        prediction = prediction.astype(np.int64)
        prediction = prediction.reshape(local_data.shape)
    else:
        prediction = prediction.reshape(local_data.shape)
        local_data = local_data.astype(np.float64)
    mae = metrics.mean_absolute_error(local_data, prediction)
    mse = metrics.mean_squared_error(local_data, prediction)
    mpe = mean_percentage_error(local_data, prediction)
    mape = mean_absolute_percentage_error(local_data, prediction)
    print("Mean Absolute Error: ", mae, sep="\t\t")
    print("Mean Squared Error: ", mse, sep="\t\t")
    print("Root Mean Squared Error: ", np.sqrt(mse), sep="\t")
    print("Mean Abs. Percentage Error: ", mape, sep="\t")
    print("Mean Percentage Error: ", mpe, sep="\t\t")
```
#### 5.2. Print metrics
```
print_metrics(pred_rescaled, False)
print_metrics(pred_rescaled_round, True)
session.close()
```
##### Dataset:
[1] Martin Kurz. *Historische Preisdaten*. 2017. Retrieved from https://creativecommons.tankerkoenig.de/ and licensed under CC-BY 4.0.
##### References:
[2] Luka Anicin. *tensorflow_lstm.ipynb*. 2017. URL: https://github.com/lucko515/tesla-stocks-prediction/blob/master/lstm_from_scratch_tensorflow.ipynb. *(visited on 02/28/2018)*
[3] Antonín Hoskovec. *Mean absolute percentage error (MAPE) in Scikit-learn*. 2017. URL: https://stats.stackexchange.com/questions/58391/mean-absolute-percentage-error-mape-in-scikit-learn/294069#294069 (visited on 02/28/2018)
# Svenskt Kvinnobiografiskt lexikon part 5
version part 5 - 0.1
Check whether Alvin has an authority record for the women in SKBL
* this [Jupyter Notebook](https://github.com/salgo60/open-data-examples/blob/master/Svenskt%20Kvinnobiografiskt%20lexikon%20part%205.ipynb)
* [part 1](https://github.com/salgo60/open-data-examples/blob/master/Svenskt%20Kvinnobiografiskt%20lexikon.ipynb) check Wikidata and SKBL
* [part 2](https://github.com/salgo60/open-data-examples/blob/master/Svenskt%20Kvinnobiografiskt%20lexikon%20part%202.ipynb) more queries etc.
* [part 4](https://github.com/salgo60/open-data-examples/blob/master/Svenskt%20Kvinnobiografiskt%20lexikon%20part%204.ipynb) get archives
# Wikidata
get SKBL women not connected to Alvin
```
from datetime import datetime
now = datetime.now()
print("Last run: ", datetime.now())
# pip install sparqlwrapper
# https://rdflib.github.io/sparqlwrapper/
import sys,json
import pandas as pd
from SPARQLWrapper import SPARQLWrapper, JSON
endpoint_url = "https://query.wikidata.org/sparql"
querySKBLAlvin = """SELECT ?item (REPLACE(STR(?item), ".*Q", "Q") AS ?wid) ?SKBL (URI(CONCAT("https://www.alvin-portal.org/alvin/resultList.jsf?query=", ENCODE_FOR_URI(?itemLabel), "&searchType=PERSON")) AS ?Alvin) WHERE {
?item wdt:P4963 ?id.
OPTIONAL { ?item wdt:P569 ?birth. }
MINUS { ?item wdt:P6821 ?value. }
BIND(URI(CONCAT("https://www.skbl.se/sv/artikel/", ?id)) AS ?SKBL)
SERVICE wikibase:label {
bd:serviceParam wikibase:language "sv".
?item rdfs:label ?itemLabel.
}
}
ORDER BY (?itemLabel)"""
def get_sparql_dataframe(endpoint_url, query):
    """
    Helper function to convert SPARQL results into a Pandas data frame.
    """
    user_agent = "salgo60/%s.%s" % (sys.version_info[0], sys.version_info[1])
    sparql = SPARQLWrapper(endpoint_url, agent=user_agent)
    sparql.setQuery(query)
    sparql.setReturnFormat(JSON)
    result = sparql.query()
    processed_results = json.load(result.response)
    cols = processed_results['head']['vars']
    out = []
    for row in processed_results['results']['bindings']:
        item = []
        for c in cols:
            item.append(row.get(c, {}).get('value'))
        out.append(item)
    return pd.DataFrame(out, columns=cols)

SKBLmissingAlvin = get_sparql_dataframe(endpoint_url, querySKBLAlvin)
SKBLmissingAlvin.info()
import csv
import urllib3, json

http = urllib3.PoolManager()
listNewItems = []
for index, row in SKBLmissingAlvin.iterrows():
    url = row["Alvin"]
    r = http.request('GET', url)
    print(len(r.data), url)
    #listNewItems.append(new_item)
#print(len(listNewItems), " number of records")
```
[[source]](../api/alibi.explainers.shap_wrappers.rst)
# Tree SHAP
<div class="alert alert-info">
Note
To enable SHAP support, you may need to run:
```bash
pip install alibi[shap]
```
</div>
## Overview
The tree SHAP (**SH**apley **A**dditive ex**P**lanations) algorithm is based on the paper [From local explanations to global understanding with explainable AI for trees](https://www.nature.com/articles/s42256-019-0138-9) by Lundberg et al. and builds on the open source [shap library](https://github.com/slundberg/shap) from the paper's first author.
The algorithm provides human interpretable explanations suitable for regression and classification of models with tree structure applied to tabular data. This method is a member of the *additive feature attribution methods* class; feature attribution refers to the fact that the change of an outcome to be explained (e.g., a class probability in a classification problem) with respect to a *baseline* (e.g., average prediction probability for that class in the training set) can be attributed in different proportions to the model input features.
A simple illustration of the explanation process is shown in Figure 1. Here we see depicted a tree-based model which takes as input features such as `Age`, `BMI` or `Blood pressure` and outputs a continuous `Mortality risk score`. Let's assume that we aim to explain the difference between an observed outcome and no risk, corresponding to a base value of `0.0`. Using the Tree SHAP algorithm, we attribute the `4.0` difference to the input features. Because the sum of the attribution values equals `output - base value`, this method is _additive_. We can see, for example, that the `Sex` feature contributes negatively to this prediction whereas the remainder of the features have a positive contribution (i.e., increase the mortality risk). For explaining this particular data point, the `Blood Pressure` feature seems to have the largest effect, and corresponds to an increase in the mortality risk. See our examples on how to perform explanations with this algorithm and visualise the results using the `shap` library visualisations [here](../examples/interventional_tree_shap_adult_xgb.ipynb) and [here](../examples/path_dependent_tree_shap_adult_xgb.ipynb).

Figure 1: Cartoon illustration of explanation models with Tree SHAP.
Image Credit: Scott Lundberg (see source [here](https://www.nature.com/articles/s42256-019-0138-9))
## Usage
In order to compute the shap values, the following arguments can optionally be set when calling the `explain` method:
- `interactions`: set to `True` to decompose the shap value of every feature for every example into a main effect and interaction effects
- `approximate`: set to `True` to calculate an approximation to shap values (see our [example](../examples/path_dependent_tree_shap_adult_xgb.ipynb))
- `check_additivity`: if the explainer is initialised with `model_output = raw` and this option is `True` the explainer checks that the sum of the shap values is equal to model output - expected value
- `tree_limit`: if an `int` is passed, an ensemble formed of only `tree_limit` trees is explained
If the dataset contains categorical variables that have been encoded before being passed to the explainer, and a single shap value is desired for each categorical variable, the following options should be specified:
- `summarise_result`: set to `True`
- `cat_var_start_idx`: a sequence of integers containing the column indices where categorical variables start. If the feature matrix contains a categorical feature starting at index 0 and one at index 10, then `cat_var_start_idx=[0, 10]`
- `cat_vars_enc_dim`: a list containing the dimension of the encoded categorical variables. The number of columns specified in this list is summed for each categorical variable starting with the corresponding index in `cat_var_start_idx`. So if `cat_var_start_idx=[0, 10]` and `cat_vars_enc_dim=[3, 5]`, then the columns with indices `0, 1` and `2` and `10, 11, 12, 13` and `14` will be combined to return one shap value for each categorical variable, as opposed to `3` and `5`.
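As an illustration, the column grouping described above can be sketched with a small NumPy helper. Note that `summarise_shap` is a hypothetical name introduced here for clarity, not part of the alibi API; it only mirrors the grouping that `summarise_result=True` performs:

```python
import numpy as np

def summarise_shap(shap_values, cat_var_start_idx, cat_vars_enc_dim):
    """Collapse the shap value columns of one-hot encoded categorical
    variables into a single column each (illustrative sketch only)."""
    out, j, cat = [], 0, 0
    while j < shap_values.shape[1]:
        if cat < len(cat_var_start_idx) and j == cat_var_start_idx[cat]:
            # sum the encoded columns of this categorical variable
            out.append(shap_values[:, j:j + cat_vars_enc_dim[cat]].sum(axis=1))
            j += cat_vars_enc_dim[cat]
            cat += 1
        else:
            out.append(shap_values[:, j])
            j += 1
    return np.stack(out, axis=1)

shap_vals = np.arange(30, dtype=float).reshape(2, 15)
summarised = summarise_shap(shap_vals, cat_var_start_idx=[0, 10], cat_vars_enc_dim=[3, 5])
# columns 0-2 and 10-14 each collapse to one column: 15 -> 9 columns
```

With `cat_var_start_idx=[0, 10]` and `cat_vars_enc_dim=[3, 5]`, a 15-column shap array becomes a 9-column one, matching the description above.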
### Path-dependent feature perturbation algorithm
#### Initialisation and fit
The explainer is initialised with the following arguments:
- a model, which could be an `sklearn`, `xgboost`, `catboost` or `lightgbm` model. Note that some of the models in these packages or models trained with specific objectives may not be supported. In particular, passing raw strings as categorical levels for `catboost` and `lightgbm` is not supported
- `model_output` should always default to `raw` for this algorithm
- optionally, set `task` to `'classification'` or `'regression'` to indicate the type of prediction the model makes. If set to `regression` the `prediction` field of the response is empty
- optionally, a list of feature names via `feature_names`. This is used to provide information about feature importances in the response
- optionally, a dictionary, `category_names`, that maps the columns of the categorical variables to a list of strings representing the names of the categories. This may be used for visualisation in the future.
```python
from alibi.explainers import TreeShap
explainer = TreeShap(
model,
feature_names=['size', 'age'],
categorical_names={0: ['S', 'M', 'L', 'XL', 'XXL']}
)
```
For this algorithm, fit is called with no arguments:
```python
explainer.fit()
```
#### Explanation
To explain an instance `X`, we simply pass it to the explain method:
```python
explanation = explainer.explain(X)
```
The returned explanation object has the following fields:
* `explanation.meta`:
```python
{'name': 'TreeShap',
'type': ['whitebox'],
'task': 'classification',
'explanations': ['local', 'global'],
'params': {'summarise_background': False, 'algorithm': 'tree_path_dependent', 'kwargs': {}}
}
```
This field contains metadata such as the explainer name and type as well as the type of explanations this method can generate. In this case, the `params` attribute shows the Tree SHAP variant that will be used to explain the model in the `algorithm` attribute.
* `explanation.data`:
```python
data={'shap_values': [
array([[ 5.0661433e-01, 2.7620478e-02],
[-4.1725192e+00, 4.4859368e-03],
[ 4.1338313e-01, -5.5618007e-02]],
dtype=float32)
],
'shap_interaction_values': [array([], dtype=float64)],
'expected_value': array([-0.06472124]),
'model_output': 'raw',
'categorical_names': {0: ['S', 'M', 'L', 'XL', 'XXL']},
'feature_names': ['size', 'age'],
'raw': {
'raw_prediction': array([-0.73818872, -8.8434663 , -3.24204564]),
'loss': [],
'prediction': array([0, 0, 0]),
'instances': array([[0, 23],
[4, 55],
[2, 43]]),
'labels': array([], dtype=float64),
'importances': {
'0': {
'ranked_effect': array([1.6975055 , 1.3598266], dtype=float32),
'names': [
'size',
'age',
]
},
'aggregated': {
'ranked_effect': array([1.6975055 , 1.3598266], dtype=float32),
'names': [
'size',
'age',
]
}
}
}
}
```
This field contains:
* `shap_values`: a list of length equal to the number of model outputs, where each entry is an array of dimension samples x features of shap values. For the example above, 3 instances with 2 features have been explained, so the shap values for each class are of dimension 3 x 2
* `shap_interaction_values`: an empty list, since `interactions` was set to `False` in the explain call
* `expected_value`: an array containing expected value for each model output
* `model_output`: `raw` indicates that the model raw output was explained, the only option for the path dependent algorithm
* `feature_names`: a list with the feature names
* `categorical_names`: a mapping of the categorical variables (represented by indices in the shap_values columns) to the description of the category
* `raw`: this field contains:
* `raw_prediction`: a samples x n_outputs array of predictions for each instance to be explained.
* `prediction`: an array containing the index of the maximum value in the `raw_prediction` array
* `instances`: a samples x n_features array of instances which have been explained
* `labels`: an array containing the labels for the instances to be explained
* `importances`: a dictionary where each entry is a dictionary containing the sorted average magnitude of the shap value (ranked_effect) along with a list of feature names corresponding to the re-ordered shap values (names). There are n_outputs + 1 keys, corresponding to n_outputs and the aggregated output (obtained by summing all the arrays in shap_values)
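A sketch of how such importances could be derived from the shap values; the helper name `compute_importances` is hypothetical, and the actual computation inside alibi may differ in detail:

```python
import numpy as np

def compute_importances(shap_values, feature_names):
    """Rank features by the average magnitude of their shap values
    (an illustrative sketch of how ranked_effect/names arise)."""
    effect = np.abs(shap_values).mean(axis=0)
    order = np.argsort(effect)[::-1]            # decreasing magnitude
    return effect[order], [feature_names[i] for i in order]

shap_vals = np.array([[0.5, 0.03], [-4.2, 0.004], [0.4, -0.06]])
ranked_effect, names = compute_importances(shap_vals, ['size', 'age'])
# 'size' ranks first: mean(|0.5|, |-4.2|, |0.4|) = 1.7 vs ~0.03 for 'age'
```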
Please see our examples on how to visualise these outputs using the shap library visualisations library visualisations [here](../examples/interventional_tree_shap_adult_xgb.ipynb) and [here](../examples/path_dependent_tree_shap_adult_xgb.ipynb).
#### Shapley interaction values
##### Initialisation and fit
Shapley interaction values can only be calculated using the path-dependent feature perturbation algorithm in this release, so no arguments are passed to the `fit` method:
```python
explainer = TreeShap(
model,
model_output='raw',
)
explainer.fit()
```
##### Explanation
To obtain the Shapley interaction values, the `explain` method is called with the option `interactions=True`:
```python
explanation = explainer.explain(X, interactions=True)
```
The explanation contains a list with the shap interaction values for each model output in the `shap_interaction_values` field of the `data` property.
### Interventional feature perturbation algorithm
#### Explaining model output
##### Initialisation and fit
```python
explainer = TreeShap(
model,
model_output='raw',
)
explainer.fit(X_reference)
```
Model output can be set to `model_output='probability'` to explain models which return probabilities. Note that this requires the model to be trained with specific objectives. Please see the footnote in our path-dependent feature perturbation [example](../examples/path_dependent_tree_shap_adult_xgb.ipynb) for an example of how to set the model training objective in order to explain probability outputs.
##### Explanation
To explain instances in `X`, the explainer is called as follows:
```python
explanation = explainer.explain(X)
```
#### Explaining loss functions
##### Initialisation and fit
To explain a loss function, the following configuration and fit steps are necessary:
```python
explainer = TreeShap(
model,
model_output='log_loss',
)
explainer.fit(X_reference)
```
Only square loss regression objectives and cross-entropy classification objectives are supported in this release.
##### Explanation
Note that the labels need to be passed to the `explain` method in order to obtain the explanation:
```python
explanation = explainer.explain(X, y)
```
### Miscellaneous
#### Runtime considerations
##### Adjusting the size of the reference dataset
The algorithm automatically warns the user if a background dataset of more than `1000` samples is passed. If the runtime of an explanation with the original dataset is too large, then the algorithm can automatically subsample the background dataset during the `fit` step. This can be achieved by specifying the fit step as
```python
explainer.fit(
X_reference,
summarise_background=True,
n_background_samples=300,
)
```
or
```python
explainer.fit(
X_reference,
summarise_background='auto'
)
```
The `auto` option will select `1000` examples, whereas using the boolean argument allows the user to directly control the size of the reference set. If categorical variables are specified, the algorithm uses subsampling of the data. Otherwise, a kmeans clustering algorithm is used to select the background dataset.
As described above, the explanations are performed with respect to the expected output over this dataset, so the shap values will be affected by the dataset selection. We recommend experimenting with various ways to choose the background dataset before deploying explanations.
## Theoretical overview
Recall that, for a model $f$, the Kernel SHAP algorithm [[1]](#References) explains a certain outcome with respect to a chosen reference (or an expected value) by estimating the shap values of each feature $i$ from $\{1, ..., M\}$, as follows:
- enumerate all subsets $S$ of the set $F \setminus \{i\}$
- for each $S \subseteq F \setminus \{i\}$, compute the contribution of feature $i$ as $C(i|S) = f(S \cup \{i\}) - f(S)$
- compute the shap value according to
\begin{equation}\tag{1}
\phi_i := \frac{1}{M} \sum \limits_{{S \subseteq F \setminus \{i\}}} \frac{1}{\binom{M - 1}{|S|}} C(i|S).
\end{equation}
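For instance, with $M = 2$ features, equation $(1)$ expands for feature $1$ to

$$\phi_1 = \frac{1}{2}\left[f(\{1\}) - f(\varnothing)\right] + \frac{1}{2}\left[f(\{1, 2\}) - f(\{2\})\right],$$

since $\binom{M-1}{|S|} = \binom{1}{0} = \binom{1}{1} = 1$: the contribution of feature $1$ is averaged over adding it to the empty coalition and to $\{2\}$.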
Since most models do not accept arbitrary patterns of missing values at inference time, $f(S)$ needs to be approximated. The original formulation of the Kernel Shap algorithm [[1]](#References) proposes to compute $f(S)$ as the _observational conditional expectation_
\begin{equation}\tag{2}
f(S) := \mathbb{E}\left[f(\mathbf{x}_{S}, \mathbf{X}_{\bar{S}} | \mathbf{X}_S = \mathbf{x}_S) \right]
\end{equation}
where the expectation is taken over a *background dataset*, $\mathcal{D}$, after conditioning. Computing this expectation involves drawing sufficiently many samples from $\mathbf{X}_{\bar{S}}$ for every sample from $\mathbf{X}_S$, which is expensive. Instead, $(2)$ is approximated by
$$
f(S) := \mathbb{E} \left[f(\mathbf{x}_{S}, \mathbf{X}_{\bar{S}})\right]
$$
where features in a subset $S$ are fixed and features in $\bar{S}$ are sampled from the background dataset. This quantity is referred to as _marginal_ or *interventional conditional expectation*, to emphasise that setting features in $S$ to the values $\mathbf{x}_{S}$ can be viewed as an intervention on the instance to be explained.
As described in [[2]](#References), if estimating the impact of a feature $i$ on the function value by $\mathbb{E} \left[ f | X_i = x_i \right]$, one should bear in mind that observing $X_i = x_i$ changes the distribution of the features $X_{j \neq i}$ if these variables are correlated. Hence, if the conditional expectation is used to estimate $f(S)$, the Shapley values might not be accurate since they also depend on the remaining variables, an effect that becomes important if there are strong correlations amongst the independent variables. Furthermore, the authors show that estimating $f(S)$ using the conditional expectation violates the *sensitivity principle*, according to which the Shapley value of a redundant variable should be 0. On the other hand, the intervention breaks the dependencies, ensuring that sensitivity holds. One potential drawback of this method is that setting a subset of features to certain values without regard to the values of the features in the complement (i.e., $\bar{S}$) can generate instances that are outside the training data distribution, which will affect the model prediction and hence the contributions.
The following sections detail how these methods work and how, unlike Kernel SHAP, they compute the exact shap values in polynomial time. The algorithm estimating contributions using interventional expectations is presented first; the remaining sections present an approximate algorithm for evaluating the interventional expectation that does not require a background dataset, as well as Shapley interaction values.
<a id='source_1'></a>
### Interventional feature perturbation
<a id='interventional'></a>
The interventional feature perturbation algorithm provides an efficient way to calculate the expectation $f(S) := \mathbb{E} \left[f(\mathbf{x}_{S}, \mathbf{X}_{\bar{S}})\right]$ for all possible subsets $S$, and to combine these values according to equation $(1)$ in order to obtain the Shapley value. Intuitively, one can proceed as follows:
- choose a background sample $r \in \mathcal{D}$
- for each feature $i$, enumerate all subsets $S \subseteq F \setminus \{i\}$
- for each such subset, $S$, compute $f(S)$ by traversing the tree with a _hybrid sample_ where the features in $\bar{S}$ are replaced by their corresponding values in $r$
- combine results according to equation $(1)$
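The naive procedure above can be sketched as a brute-force reference implementation for a single background sample (illustrative only, not the efficient algorithm; the function names are ours):

```python
from itertools import combinations
from math import comb

def interventional_shap(model, x, r):
    """Exact shap values of `model` at `x` w.r.t. a single background
    sample `r`, via direct enumeration of subsets (equation (1))."""
    M = len(x)

    def f(S):
        # hybrid sample: features in S from x, the rest from r
        hybrid = [x[j] if j in S else r[j] for j in range(M)]
        return model(hybrid)

    phi = []
    for i in range(M):
        others = [j for j in range(M) if j != i]
        total = 0.0
        for size in range(M):
            for S in combinations(others, size):
                S = set(S)
                total += (f(S | {i}) - f(S)) / comb(M - 1, size)
        phi.append(total / M)
    return phi

model = lambda v: v[0] * v[1] + 2 * v[2]   # toy model, for illustration
x, r = [1.0, 2.0, 3.0], [0.0, 2.0, 1.0]
phi = interventional_shap(model, x, r)
# Efficiency holds: sum(phi) == model(x) - model(r). The second feature,
# for which x and r agree, receives a zero contribution.
```

This also illustrates the point made below: features taking the same value in $x$ and $r$ (so that all hybrid samples agree on them) get zero attribution.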
If $R$ samples from the background distribution are used, then the complexity of this algorithm is $O(RM2^M)$ since we perform $2^M$ enumerations for each of the $M$ features, $R$ times. The key insight into this algorithm is that multiple hybrid samples will end up traversing identical paths and that this can be avoided if the shap values' calculation is reformulated as a summation over the paths in the tree (see [[4]](#References) for a proof):
$$
\phi_i = \sum_{P}\phi_{i}^P
$$
where the summation is over paths $P$ in the tree descending from $i$. The value and sign of the contribution of each path descending through a node depends on whether the split from the node is due to a foreground or a background feature, as explained in the practical example below.
<a id='source_4'></a>
#### Computing contributions with interventional Tree SHAP: a practical example.

Figure 2: Illustration of the feature contribution and expected value estimation process using interventional perturbation Tree SHAP. The positive and the negative contributions of a node are represented in <span style="color:green">green</span> and <span style="color:red">red</span>, respectively.
In the figure above, the paths followed due to the instance to be explained $x$ are coloured in red, the paths followed due to the background sample in blue, and common paths in yellow.
The instance to be explained is perturbed by replacing the values of the features $F1$, $F3$ and $F5$ in $x$ with the corresponding values in $r$. This process gives the algorithm its name, since following the paths indicated by the background sample is akin to intervening on the instance to be explained with features from the background sample. Therefore, for this case one defines the set $F$ from the previous section as $F = \{ j: x_{j} \neq r_{j}\}$. Note that these are the only features for which one can estimate a contribution given this background sample; the same path is followed for features $F2$ and $F4$ for both the original and the perturbed sample, so these features do not contribute to explaining the difference between the observed outcome ($v_6$) and the outcome that would have been observed if the tree had been traversed according to the reference $(v_{10})$.
Considering the structure of the tree for the given $x$ and $r$ together with equation $(1)$ reveals that the left subtree can be traversed to compute the negative terms in the summation whereas the right subtree will provide positive terms. This is because the nodes in the left subtree can only be reached if $F1$ takes the value from the background sample, that is, only $F1$ is missing. Because $F2$ and $F4$ do not contribute to explaining $f(x) - f(r)$, the negative contribution of the left subtree will be equal to the negative contribution of node $8$. This node sums two negative components: one when the downstream feature $F5$ is also missing (corresponding to evaluating $f$ at $S = \varnothing$) and one when $F5$ is present (corresponding to evaluating $f$ at $S=\{F5\}$). These negative values are weighted according to the combinatorial factor in equation $(1)$. By a similar reasoning, the nodes in the right subtree are reached only if $F1$ is present and they provide the positive terms for the shap value computation. Note that the combinatorial factor in $(1)$ should be evaluated with $|S| \gets |S| - 1$ for positive contributions, since $|S|$ is increased by $1$ because the feature whose contribution is calculated is present in the right subtree.
A similar reasoning is applied to compute the contributions of the downstream nodes. For example, to estimate the contribution of $F5$, one considers a set $S = \varnothing$ and observes the value of node $10$, and weighs that with the combinatorial factor from equation $(1)$ where $M-1 = 1$ and $|S|=0$ (because there are no features present on the path) and a positive contribution from node $9$ weighted by the same combinatorial factor (because $S = \{F5\}$ so $|S| - 1 = 0$).
To summarise, the efficient algorithm relies on the following key ideas:
- each node in the tree is assigned a positive contribution reflecting membership of the splitting feature in a subset $S$ and a negative contribution to indicate the feature is missing ($i\in \bar{S}$)
- the positive and negative contributions of a node can be computed by summing the positive and negative contributions of the children nodes, in keeping with the fact that the Shapley value can be computed by summing a contribution from each path the feature is on
- to compute the contribution of a feature at a node, one adds a positive contribution from the node reached by splitting on the feature from the instance to be explained and a negative contribution from the node reached by splitting on the feature in the background sample
- features for which the instance to be explained and the reference follow the same path are assigned $0$ contribution.
#### Explaining loss functions
One advantage of the interventional approach is that it allows one to approximately transform the shap values to account for a nonlinear transformation of the output, such as the loss function. Recall that given $\phi_1, ..., \phi_M$ the local accuracy property guarantees that, with $\phi_0 = \mathbb{E}[f(x)]$,
\begin{equation}\tag{3}
f(x) = \phi_0 + \sum \limits_{i=1}^M \phi_i.
\end{equation}
Hence, in order to account for the effect of the nonlinear transformation $h$, one has to find the functions $g_0, ..., g_M$ such that
\begin{equation}\tag{4}
h(f(x)) = g_0(\phi_0) + \sum \limits_{i=1}^M g_i(\phi_i)
\end{equation}
For simplicity, let $y=f(x)$. Then using a first-order Taylor series expansion around $\mathbb{E}[y]$ one obtains
\begin{equation}\tag{5}
h(y) \approx h(\mathbb{E}[y]) + \frac{\partial h(y) }{\partial y} \Bigr|_{y=\mathbb{E}[y]}(y - \mathbb{E}[y]).
\end{equation}
Substituting $(3)$ in $(5)$ and comparing coefficients with $(4)$ yields
\begin{equation*}
\begin{split}
g_0 & \approx h(\mathbb{E}[y]) \\
g_i &\approx \phi_i \frac{\partial h(y) }{\partial y} \Bigr|_{y=\mathbb{E}[y]} .
\end{split}
\end{equation*}
Hence, an approximate correction is given by simply scaling the shap values using the gradient of the nonlinear function. Note that in practice one may take the Taylor series expansion at a reference point $r$ from the background dataset and average over the entire background dataset to compute the scaling factor. This introduces an additional source of noise since $h(\mathbb{E}[y]) = \mathbb{E}[h(y)]$ only when $h$ is linear.
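A minimal numeric sketch of this correction, using made-up shap values; `scale_for_transform` is an illustrative helper, not an alibi function. For a linear $h$ the first-order expansion is exact, which makes the identity easy to verify:

```python
import numpy as np

phi0 = 0.5                        # hypothetical expected value E[f(x)]
phi = np.array([0.2, -0.1, 0.4])  # hypothetical shap values
fx = phi0 + phi.sum()             # local accuracy: f(x) = phi_0 + sum(phi_i)

def scale_for_transform(phi, phi0, h, h_grad):
    """First-order correction: g_0 = h(E[y]), g_i = phi_i * h'(E[y])."""
    return h(phi0), phi * h_grad(phi0)

h = lambda y: 3.0 * y + 1.0       # linear transform: expansion is exact
h_grad = lambda y: 3.0
g0, g = scale_for_transform(phi, phi0, h, h_grad)
# h(f(x)) == g_0 + sum(g_i) holds exactly here; for a nonlinear h
# (e.g. a loss) the same scaling is only a first-order approximation.
```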
#### Computational complexity
For a single foreground and background sample and a single tree, the algorithm runs in $O(LD)$ time. Thus, using $R$ background samples and a model containing $T$ trees, yields a complexity of $O(TRLD)$.
### Path dependent feature perturbation
<a id='path_dependent'></a>
Another way to approximate equation $(2)$ to compute $f(S)$ given an instance $x$ and a set of missing features $\bar{S}$ is to recursively follow the decision path through the tree and:
- return the node value if a split on a feature $i \in S$ is performed
- take a weighted average of the values returned by the children if $i \in \bar{S}$, where the weighting factor is equal to the proportion of training examples flowing down each branch. This proportion is a property of each node, sometimes referred to as _weight_ or _cover_, and measures how important that node is with regard to classifying the training data.
Therefore, in the path-dependent perturbation method, we compute the expectations with respect to the training data distribution by weighting the leaf values according to the proportion of the training examples that flow to that leaf.
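A minimal sketch of this recursion on a hand-built one-split tree (the dictionary node layout is an assumption for illustration, not any library's internal representation):

```python
# toy tree: one split on feature 0, leaves and nodes carry a 'cover' count
tree = {
    'feature': 0, 'threshold': 0.5, 'cover': 10,
    'left':  {'value': 1.0, 'cover': 6},
    'right': {'value': 3.0, 'cover': 4},
}

def f_S(node, x, S):
    if 'value' in node:                      # leaf
        return node['value']
    i = node['feature']
    if i in S:                               # split feature present: follow x
        child = node['left'] if x[i] < node['threshold'] else node['right']
        return f_S(child, x, S)
    # split feature missing: cover-weighted average over both branches
    return (f_S(node['left'], x, S) * node['left']['cover']
            + f_S(node['right'], x, S) * node['right']['cover']) / node['cover']

print(f_S(tree, [0.9], {0}))    # 3.0: follows x down the right branch
print(f_S(tree, [0.9], set()))  # 1.8: (1.0*6 + 3.0*4)/10
```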
To avoid repeating the above recursion $M2^M$ times, one first notices that, for a single decision tree, perturbing a subset of the features results in the sample ending up in a different leaf. Therefore, following each path from the root to a leaf in the tree is equivalent to perturbing subsets of features of varying cardinalities. Consequently, each leaf will contain a certain proportion of all possible subsets $S \subseteq F$. Therefore, to compute the shap values, the following quantities are computed at each leaf, *for every feature $i$ on the path leading to that leaf*:
- the proportion of subsets $S$ at the leaf that contain $i$ and the proportion of subsets $S$ that do not contain $i$
- for each cardinality, the proportion of the sets of that cardinality contained at the leaf. Tracking each cardinality, as opposed to a single count of subsets falling into a given leaf, is necessary since it allows applying the weighting factor in equation (1), which depends on the subset size $|S|$.
This intuition can be summarised as follows:
\begin{equation}\tag{6}
\phi_i := \sum \limits_{j=1}^L \sum \limits_{P \in {S_j}} \frac {w(|P|, j)}{ M_j {\binom{M_j - 1}{|P|}}} (p_o^{i,j} - p_z^{i, j}) v_j
\end{equation}
where $S_j$ is the set of present-feature subsets at leaf $j$, $M_j$ is the length of the path to that leaf, $w(|P|, j)$ is the proportion of all subsets of cardinality $|P|$ at leaf $j$, and $p_o^{i, j}$ and $p_z^{i, j}$ represent the fractions of subsets that contain or do not contain feature $i$, respectively.
#### Computational complexity
Using the above quantities, one can compute the _contribution_ of each leaf to the Shapley value of every feature. This algorithm has complexity $O(TLD^2)$ for an ensemble of trees where $L$ is the number of leaves, $T$ the number of trees in the ensemble and $D$ the maximum tree depth. If the tree is balanced, then $D=\log L$ and the complexity of our algorithm is $O(TL\log^2L)$.
#### Expected value for the path-dependent perturbation algorithm
Note that although a background dataset is not provided, the expected value is computed using the node cover information, stored at each node. The computation proceeds recursively, starting at the root. The contribution of a node to the expected value of the tree is a function of the expected values of the children and is computed as follows:
$$
c_j = \frac{c_{r(j)}r_{r(j)} + c_{l(j)}r_{l(j)}}{r_j}
$$
where $j$ denotes the node index, $c_j$ denotes the node expected value, $r_j$ is the cover of the $j$th node and $r(j)$ and $l(j)$ represent the indices of the right and left children, respectively. The expected value used by the tree is simply $c_{root}$. Note that for tree ensembles, the expected values of the ensemble members is weighted according to the tree weight and the weighted expected values of all trees are summed to obtain a single value.
The cover depends on the objective function and the model chosen. For example, in a gradient boosted tree trained with squared loss objective, $r_j$ is simply the number of training examples flowing through $j$. For an arbitrary objective, this is the sum of the Hessian of the loss function evaluated at each point flowing through $j$, as explained [here](../examples/xgboost_model_fitting_adult.ipynb).
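The recursion for $c_j$ can be sketched on a hand-built toy tree (the node layout is an assumption for illustration):

```python
# toy node layout: leaves carry a 'value', every node carries a 'cover'
tree = {
    'cover': 8,
    'left':  {'value': 2.0, 'cover': 5},
    'right': {'value': 4.0, 'cover': 3},
}

def expected_value(node):
    if 'value' in node:          # leaf: c_j is simply the leaf value
        return node['value']
    l, r = node['left'], node['right']
    # c_j = (c_l * r_l + c_r * r_r) / r_j
    return (expected_value(l) * l['cover']
            + expected_value(r) * r['cover']) / node['cover']

print(expected_value(tree))      # (2.0*5 + 4.0*3) / 8 = 2.75
```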
### Shapley interaction values
While the Shapley values provide a solution to the problem of allocating a function variation to the input features, in practice it might be of interest to understand how the importance of a feature depends on the other features. The Shapley interaction values can solve this problem, by allocating the change in the function amongst the individual features (*main effects*) and all pairs of features (*interaction effects*). Thus, they are defined as
\begin{equation}\tag{7}
\Phi_{i, j}(f, x) = \sum_{S \subseteq {F \setminus \{i, j\}}} \frac{1}{2|S| {\binom{M-1}{|S| - 1}}} \nabla_{ij}(f, x, S), \; i \neq j
\end{equation}
and
\begin{equation}\tag{8}
\nabla_{ij}(f, x, S) = \underbrace{f_{x}(S \cup \{i, j\}) - f_x(S \cup \{j\})}_{j \; present} - \underbrace{[f_x(S \cup \{i\}) - f_x(S)]}_{j \; not \; present}.
\end{equation}
Therefore, the interaction of features $i$ and $j$ can be computed by taking the difference between the shap values of $i$ when $j$ is present and when $j$ is not present. The main effects are defined as
$$
\Phi_{i,i}(f, x) = \phi_i(f, x) - \sum_{j \neq i} \Phi_{i, j}(f, x).
$$
Setting $\Phi_{0, 0} = f_x(\varnothing)$ yields the local accuracy property for Shapley interaction values:
$$f(x) = \sum \limits_{i=0}^M \sum \limits_{j=0}^M \Phi_{i, j}(f, x).$$
The interaction is split equally between feature $i$ and $j$, which is why the division by two appears in equation $(7)$. The total interaction effect is defined as $\Phi_{i, j}(f, x) + \Phi_{j, i}(f,x)$.
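Equations $(7)$ and $(8)$ can be checked by brute force on a toy function; the sketch below uses the classic factorial form of the Shapley interaction weights and a zero baseline for missing features (both illustrative assumptions):

```python
from itertools import combinations
from math import factorial

M = 3
x = [1.0, 2.0, 3.0]
f = lambda v: v[0] * v[1] + v[2]        # interaction between features 0 and 1

def fx(S):
    # zero baseline for missing features (illustrative assumption)
    return f([x[i] if i in S else 0.0 for i in range(M)])

def phi(i):
    total = 0.0
    others = [k for k in range(M) if k != i]
    for k in range(M):
        for S in combinations(others, k):
            w = factorial(len(S)) * factorial(M - len(S) - 1) / factorial(M)
            total += w * (fx(set(S) | {i}) - fx(set(S)))
    return total

def Phi(i, j):
    total = 0.0
    others = [k for k in range(M) if k not in (i, j)]
    for k in range(M - 1):
        for S in combinations(others, k):
            S = set(S)
            w = factorial(len(S)) * factorial(M - len(S) - 2) / (2 * factorial(M - 1))
            nabla = (fx(S | {i, j}) - fx(S | {j})) - (fx(S | {i}) - fx(S))
            total += w * nabla
    return total

print(Phi(0, 1), Phi(1, 0))             # symmetric: 1.0 1.0
print(Phi(0, 1) + Phi(1, 0))            # total interaction 2.0 = x[0]*x[1]
```

The additive feature $x_2$ gets zero interaction with the others, and the total interaction between features $0$ and $1$ recovers the full $x_0 x_1$ term.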
#### Computational complexity
According to equation $(8)$, the interaction values can be computed by applying either the interventional or path-dependent feature perturbation algorithm twice: once by fixing the value of feature $j$ to $x_j$ and computing the shapley value for feature $i$ in this configuration, and once by fixing $x_j$ to a "missing" value and performing the same computation. Thus, the interaction values can be computed in $O(TMLD^2)$ with the path-dependent perturbation algorithm and $O(TMLDR)$ with the interventional feature perturbation algorithm.
### Comparison to other methods
Tree-based models are widely used in areas where model interpretability is of interest, because node-level statistics gathered from the training data can be used to provide insights into the behaviour of the model across the training dataset, providing a _global explanation_ technique. As shown in our [example](../examples/path_dependent_tree_shap_adult_xgb.ipynb), considering different statistics gives rise to different importance rankings. As discussed in [[1]](#References) and [[3]](#References), depending on the statistic chosen, feature importances derived from trees are not *consistent*: a feature known to have a larger impact on the model output may be assigned a smaller importance. As such, feature importances cannot be compared across models. In contrast, both the path-dependent and interventional perturbation algorithms tackle this limitation.
In contrast to feature importances derived from tree statistics, the Tree SHAP algorithms can also provide local explanations, allowing the identification of features that are globally "not important", but can affect specific outcomes significantly, as might be the case in healthcare applications. Additionally, it provides a means to succinctly summarise the effect magnitude and direction (positive or negative) across potentially large samples. Finally, as shown in [[1]](#References) (see [here](https://static-content.springer.com/esm/art%3A10.1038%2Fs42256-019-0138-9/MediaObjects/42256_2019_138_MOESM1_ESM.pdf), p. 26), averaging the instance-level shap values importance to derive a global score for each feature can result in improvements in feature selection tasks.
Another method to derive instance-level explanations for tree-based models has been proposed by Saabas [here](https://github.com/andosa/treeinterpreter). This feature attribution method is similar in spirit to the Shapley value, but does not account for the effect of variable order, as explained [here](https://static-content.springer.com/esm/art%3A10.1038%2Fs42256-019-0138-9/MediaObjects/42256_2019_138_MOESM1_ESM.pdf) (pp. 10-11), and does not satisfy consistency ([[3]](#References)).
Finally, both Tree SHAP algorithms exploit model structure to provide exact Shapley value computation, albeit using different estimates for the effect of missing features, achieving explanations in low-order polynomial time. The KernelSHAP method relies on post-hoc (black-box) function modelling and sampling approximations to estimate the same quantities and, given enough samples, has been shown to converge to the exact values (see experiments [here](https://static-content.springer.com/esm/art%3A10.1038%2Fs42256-019-0138-9/MediaObjects/42256_2019_138_MOESM1_ESM.pdf) and our [example](../examples/interventional_tree_shap_adult_xgb.ipynb)). Our Kernel SHAP [documentation](KernelSHAP.ipynb) provides comparisons of feature attribution methods based on Shapley values with other algorithms such as LIME and [anchors](Anchors.ipynb).
<a id='source_3'></a>
## References
<a id='References'></a>
[[1]](#source_1) Lundberg, S.M. and Lee, S.I., 2017. A unified approach to interpreting model predictions. In Advances in neural information processing systems (pp. 4765-4774).
[[2]](#source_2) Janzing, D., Minorics, L. and Blöbaum, P., 2019. Feature relevance quantification in explainable AI: A causality problem. arXiv preprint arXiv:1910.13413.
[[3]](#source_3) Lundberg, S.M., Erion, G.G. and Lee, S.I., 2018. Consistent individualized feature attribution for tree ensembles. arXiv preprint arXiv:1802.03888.
[[4]](#source_4) Chen, H., Lundberg, S.M. and Lee, S.I., 2018. Understanding Shapley value explanation algorithms for trees. Under review for publication in Distill, draft available [here](https://hughchen.github.io/its_blog/index.html).
## Examples
### Path-dependent Feature Perturbation Tree SHAP
[Explaining tree models with path-dependent feature perturbation Tree SHAP](../examples/path_dependent_tree_shap_adult_xgb.ipynb)
### Interventional Feature Perturbation Tree SHAP
[Explaining tree models with interventional feature perturbation Tree SHAP](../examples/interventional_tree_shap_adult_xgb.ipynb)
# Using PyTorch with TensorRT through ONNX:
TensorRT is a great way to take a trained PyTorch model and optimize it to run more efficiently during inference on an NVIDIA GPU.
One approach to converting a PyTorch model to TensorRT is to export the PyTorch model to ONNX (an open interchange format for deep learning models) and then convert it into a TensorRT engine. Essentially, we will follow this path to convert and deploy our model:

Both TensorFlow and PyTorch models can be exported to ONNX, as well as many other frameworks. This allows models created using either framework to flow into common downstream pipelines.
To get started, let's take a well-known computer vision model and follow five key steps to deploy it to the TensorRT Python runtime:
1. __What format should I save my model in?__
2. __What batch size(s) am I running inference at?__
3. __What precision am I running inference at?__
4. __What TensorRT path am I using to convert my model?__
5. __What runtime am I targeting?__
## 1. What format should I save my model in?
We are going to use ResNet50, a widely used CNN architecture first described in <a href="https://arxiv.org/abs/1512.03385">this paper</a>.
Let's start by loading dependencies and downloading the model:
```
import torchvision.models as models
import torch
import torch.onnx
# load the pretrained model
resnet50 = models.resnet50(pretrained=True, progress=False)
```
Next, we will select our batch size and export the model:
```
# set up a dummy input tensor and export the model to ONNX
BATCH_SIZE = 32
dummy_input=torch.randn(BATCH_SIZE, 3, 224, 224)
torch.onnx.export(resnet50, dummy_input, "resnet50_pytorch.onnx", verbose=False)
```
Note that we are picking a BATCH_SIZE of 32 in this example.
Let's use a benchmarking function included in this guide to time this model:
```
from benchmark import benchmark
resnet50.to("cuda").eval()
benchmark(resnet50)
```
Now, let's restart our Jupyter Kernel so PyTorch doesn't collide with TensorRT:
```
import os
os._exit(0) # Shut down all kernels so TRT doesn't fight with PyTorch for GPU memory
```
## 2. What batch size(s) am I running inference at?
We are going to run with a fixed batch size of 32 for this example. Note that above we set BATCH_SIZE to 32 when saving our model to ONNX. We need to create another dummy batch of the same size (this time it will need to be in our target precision) to test out our engine.
First, as before, we will set our BATCH_SIZE to 32. Note that the trtexec command below includes the '--explicitBatch' flag to signal to TensorRT that we will be using a fixed batch size at runtime.
```
BATCH_SIZE = 32
```
Importantly, by default TensorRT will use the input precision you give the runtime as the default precision for the rest of the network. So before we create our new dummy batch, we also need to choose a precision as in the next section:
## 3. What precision am I running inference at?
Remember that lower precisions than FP32 tend to run faster. There are two common reduced precision modes - FP16 and INT8. Graphics cards that are designed to do inference well often have an affinity for one of these two types. This guide was developed on an NVIDIA V100, which favors FP16, so we will use that here by default. INT8 is a more complicated process that requires a calibration step.
```
import numpy as np
USE_FP16 = True
target_dtype = np.float16 if USE_FP16 else np.float32
dummy_input_batch = np.zeros((BATCH_SIZE, 224, 224, 3), dtype = target_dtype)
```
## 4. What TensorRT path am I using to convert my model?
We can use trtexec, a command line tool for working with TensorRT, in order to convert an ONNX model originally from PyTorch to an engine file.
Let's make sure we have TensorRT installed (this comes with trtexec):
```
import tensorrt
```
To convert the model we saved in the previous step, we need to point to the ONNX file, give trtexec a name to save the engine as, and lastly specify that we want to use a fixed batch size instead of a dynamic one.
```
# step out of Python for a moment to convert the ONNX model to a TRT engine using trtexec
if USE_FP16:
    !trtexec --onnx=resnet50_pytorch.onnx --saveEngine=resnet_engine_pytorch.trt --explicitBatch --fp16
else:
    !trtexec --onnx=resnet50_pytorch.onnx --saveEngine=resnet_engine_pytorch.trt --explicitBatch
```
This will save our model as 'resnet_engine_pytorch.trt'.
## 5. What TensorRT runtime am I targeting?
Now, we have converted our model to a TensorRT engine. Great! That means we are ready to load it into the native Python TensorRT runtime. This runtime strikes a balance between the ease of use of the high-level Python APIs used in frameworks and the fast, low-level C++ runtimes available in TensorRT.
```
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit
f = open("resnet_engine_pytorch.trt", "rb")
runtime = trt.Runtime(trt.Logger(trt.Logger.WARNING))
engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()
```
Now allocate input and output memory, give TRT pointers (bindings) to it:
```
# need to set input and output precisions to FP16 to fully enable it
output = np.empty([BATCH_SIZE, 1000], dtype = target_dtype)
# allocate device memory
d_input = cuda.mem_alloc(1 * dummy_input_batch.nbytes)
d_output = cuda.mem_alloc(1 * output.nbytes)
bindings = [int(d_input), int(d_output)]
stream = cuda.Stream()
```
Next, set up the prediction function.
This involves a copy from CPU RAM to GPU VRAM, executing the model, then copying the results back from GPU VRAM to CPU RAM:
```
def predict(batch): # result gets copied into output
    # transfer input data to device
    cuda.memcpy_htod_async(d_input, batch, stream)
    # execute model
    context.execute_async_v2(bindings, stream.handle, None)
    # transfer predictions back
    cuda.memcpy_dtoh_async(output, d_output, stream)
    # synchronize threads
    stream.synchronize()
    return output
```
Finally, let's time the function!
Note that we're going to include the extra CPU-GPU copy time in this evaluation, so it won't be directly comparable with our TRTorch model performance as it also includes additional overhead.
```
print("Warming up...")
predict(dummy_input_batch)
print("Done warming up!")
%%timeit
pred = predict(dummy_input_batch)
```
However, even with the CPU-GPU copy, this is still faster than our raw PyTorch model!
## Next Steps:
<h4> Profiling </h4>
This is a great next step for further optimizing and debugging models you are working on productionizing.
You can find it here: https://docs.nvidia.com/deeplearning/tensorrt/best-practices/index.html
<h4> TRT Dev Docs </h4>
Main documentation page for the ONNX, layer builder, C++, and legacy APIs
You can find it here: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html
<h4> TRT OSS GitHub </h4>
Contains OSS TRT components, sample applications, and plugin examples
You can find it here: https://github.com/NVIDIA/TensorRT
#### TRT Supported Layers:
https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#layers-precision-matrix
#### TRT ONNX Plugin Example:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/samplePlugin
Added a custom loss function based on @kyakvolev's work. Credit to the author.
The forum post is here: https://www.kaggle.com/c/m5-forecasting-uncertainty/discussion/139515
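The custom loss itself is not reproduced in this notebook. For reference, the building block of the M5-Uncertainty metric is the pinball (quantile) loss; a minimal numpy sketch (the function name is mine):

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    # pinball loss penalises under-prediction by q and over-prediction by 1 - q
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# under-predicting a high quantile is penalised more heavily than over-predicting it
print(pinball_loss(np.array([1.0]), np.array([0.0]), 0.9))  # 0.9
print(pinball_loss(np.array([0.0]), np.array([1.0]), 0.9))  # 0.1
```

The competition metric averages this quantity over the nine quantiles and all aggregation levels, with per-series scaling.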
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from tqdm.auto import tqdm as tqdm
from ipywidgets import widgets, interactive, interact
import ipywidgets as widgets
from IPython.display import display
import os
for dirname, _, filenames in os.walk('data/raw'):
    for filename in filenames:
        print(os.path.join(dirname, filename))
```
## Reading data
```
train_sales = pd.read_csv('data/raw/sales_train_validation.csv')
calendar_df = pd.read_csv('data/raw/calendar.csv')
submission_file = pd.read_csv('data/raw/sample_submission.csv')
sell_prices = pd.read_csv('data/raw/sell_prices.csv')
```
## Variables to help with aggregation
```
total = ['Total']
train_sales['Total'] = 'Total'
train_sales['state_cat'] = train_sales.state_id + "_" + train_sales.cat_id
train_sales['state_dept'] = train_sales.state_id + "_" + train_sales.dept_id
train_sales['store_cat'] = train_sales.store_id + "_" + train_sales.cat_id
train_sales['store_dept'] = train_sales.store_id + "_" + train_sales.dept_id
train_sales['state_item'] = train_sales.state_id + "_" + train_sales.item_id
train_sales['item_store'] = train_sales.item_id + "_" + train_sales.store_id
val_eval = ['validation', 'evaluation']
# creating lists for different aggregation levels
total = ['Total']
states = ['CA', 'TX', 'WI']
num_stores = [('CA',4), ('TX',3), ('WI',3)]
stores = [x[0] + "_" + str(y + 1) for x in num_stores for y in range(x[1])]
cats = ['FOODS', 'HOBBIES', 'HOUSEHOLD']
num_depts = [('FOODS',3), ('HOBBIES',2), ('HOUSEHOLD',2)]
depts = [x[0] + "_" + str(y + 1) for x in num_depts for y in range(x[1])]
state_cats = [state + "_" + cat for state in states for cat in cats]
state_depts = [state + "_" + dept for state in states for dept in depts]
store_cats = [store + "_" + cat for store in stores for cat in cats]
store_depts = [store + "_" + dept for store in stores for dept in depts]
prods = list(train_sales.item_id.unique())
prod_state = [prod + "_" + state for prod in prods for state in states]
prod_store = [prod + "_" + store for prod in prods for store in stores]
print("Departments: ", depts)
print("Categories by state: ", state_cats)
quants = ['0.005', '0.025', '0.165', '0.250', '0.500', '0.750', '0.835', '0.975', '0.995']
days = range(1, 1913 + 1)
time_series_columns = [f'd_{i}' for i in days]
```
## Getting aggregated sales
```
def CreateSales(name_list, group):
    '''
    This function returns a dataframe (sales) on the aggregation level given by name_list and group
    '''
    sales = train_sales.groupby(group)[time_series_columns].sum() #would not be necessary for lowest level
    return sales
#example usage of CreateSales
sales_by_state_cats = CreateSales(state_cats, 'state_cat')
sales_by_state_cats
```
## Getting quantiles adjusted by day-of-week
```
def CreateQuantileDict(name_list = stores, group = 'store_id', X = False):
    '''
    This function creates sales data on the given aggregation level, then writes predictions to the global dictionary my_dict
    '''
    sales = CreateSales(name_list, group)
    sales = sales.iloc[:, 2:] #starting from d_3 because it is a monday, needed to make daily_factors work
    sales_quants = pd.DataFrame(index = sales.index)
    for q in quants:
        sales_quants[q] = np.quantile(sales, float(q), axis = 1)
    full_mean = pd.DataFrame(np.mean(sales, axis = 1))
    daily_means = pd.DataFrame(index = sales.index)
    for i in range(7):
        daily_means[str(i)] = np.mean(sales.iloc[:, i::7], axis = 1)
    daily_factors = daily_means / np.array(full_mean)
    #tile the 7 day-of-week factors to cover the 28-day horizon
    daily_factors = pd.concat([daily_factors, daily_factors, daily_factors, daily_factors], axis = 1)
    daily_factors_np = np.array(daily_factors)
    factor_df = pd.DataFrame(daily_factors_np, columns = submission_file.columns[1:])
    factor_df.index = daily_factors.index
    for i, x in enumerate(tqdm(sales_quants.index)):
        for q in quants:
            v = sales_quants.loc[x, q] * np.array(factor_df.loc[x, :])
            if X:
                my_dict[x + "_X_" + q + "_validation"] = v
                my_dict[x + "_X_" + q + "_evaluation"] = v
            else:
                my_dict[x + "_" + q + "_validation"] = v
                my_dict[x + "_" + q + "_evaluation"] = v
my_dict = {}
#adding prediction to my_dict on all 12 aggregation levels
CreateQuantileDict(total, 'Total', X=True) #1
CreateQuantileDict(states, 'state_id', X=True) #2
CreateQuantileDict(stores, 'store_id', X=True) #3
CreateQuantileDict(cats, 'cat_id', X=True) #4
CreateQuantileDict(depts, 'dept_id', X=True) #5
CreateQuantileDict(state_cats, 'state_cat') #6
CreateQuantileDict(state_depts, 'state_dept') #7
CreateQuantileDict(store_cats, 'store_cat') #8
CreateQuantileDict(store_depts, 'store_dept') #9
CreateQuantileDict(prods, 'item_id', X=True) #10
CreateQuantileDict(prod_state, 'state_item') #11
CreateQuantileDict(prod_store, 'item_store') #12
total
```
## Creating valid prediction df from my_dict
```
pred_df = pd.DataFrame(my_dict)
pred_df = pred_df.transpose()
pred_df_reset = pred_df.reset_index()
final_pred = pd.merge(pd.DataFrame(submission_file.id), pred_df_reset, left_on = 'id', right_on = 'index')
del final_pred['index']
final_pred = final_pred.rename(columns={0: 'F1', 1: 'F2', 2: 'F3', 3: 'F4', 4: 'F5', 5: 'F6', 6: 'F7', 7: 'F8', 8: 'F9',
9: 'F10', 10: 'F11', 11: 'F12', 12: 'F13', 13: 'F14', 14: 'F15', 15: 'F16',
16: 'F17', 17: 'F18', 18: 'F19', 19: 'F20', 20: 'F21', 21: 'F22',
22: 'F23', 23: 'F24', 24: 'F25', 25: 'F26', 26: 'F27', 27: 'F28'})
final_pred = final_pred.fillna(0)
for i in range(1,29):
    final_pred['F'+str(i)] *= 1.170
final_pred.to_csv('return_of_the_blend.csv', index=False)
```
# Analysis of NFL CSV data
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from math import pi
# import seaborn as sns
# import matplotlib as plt
data_dir = '../seasonData/'
```
use 2009 data as a test set
```
df_2009 = pd.read_csv(data_dir+'season2009.csv')
df_2009.head()
```
remove defensive players
```
df_2009 = df_2009[df_2009.defense_ast.isnull()]
# df_2009.head()
```
Remove punters
```
df_2009 = df_2009[df_2009.punting_avg.isnull()]
```
remove kicking stats
```
df_2009 = df_2009[df_2009.kicking_xpb.isnull()]
```
remove defensive columns
```
list(df_2009)
df_2009 = df_2009.fillna(0)
```
Decided to remove kicking stats because they could confuse the model.
```
df_2009 = df_2009.drop(labels=['home', 'pos','defense_ast', 'defense_ffum', 'defense_int', 'defense_sk',
'defense_tkl','punting_avg', 'punting_i20', 'punting_lng', 'punting_pts',
'punting_yds', 'puntret_avg', 'puntret_lng', 'puntret_ret','kickret_avg',
'kickret_lng', 'kickret_ret', 'rushing_twopta', 'receiving_twopta','kicking_fga',
'kicking_fgm', 'kicking_fgyds', 'kicking_totpfg', 'kicking_xpa', 'kicking_xpb',
'kicking_xpmade', 'kicking_xpmissed', 'kicking_xptot'], axis =1)
```
One hot encoding for players team
```
df_2009 = pd.get_dummies(df_2009, columns=["team"])
df_2009.shape
```
### Column names used for scoring
'fumbles_lost', 'fumbles_rcv', 'fumbles_tot', 'fumbles_trcv', 'fumbles_yds', 'kicking_fga', 'kicking_fgm',
'kicking_fgyds', 'kicking_totpfg', 'kicking_xpa', 'kicking_xpb', 'kicking_xpmade', 'kicking_xpmissed',
'kicking_xptot', 'kickret_tds', 'passing_att', 'passing_cmp', 'passing_ints', 'passing_tds',
'passing_twopta','passing_twoptm', 'passing_yds', 'puntret_tds', 'receiving_lng', 'receiving_rec',
'receiving_tds', 'receiving_twopta', 'receiving_twoptm', 'receiving_yds', 'rushing_att', 'rushing_lng',
'rushing_tds','rushing_twopta', 'rushing_twoptm', 'rushing_yds'
```
df_2009 = df_2009.fillna(0)  # assign the result back; fillna returns a copy
# standard fantasy scoring, computed with vectorized column arithmetic
points = (
    df_2009['passing_yds'] / 25                                      # 1 pt per 25 passing yds
    + df_2009['passing_tds'] * 4
    - df_2009['passing_ints']                                        # -1 per interception
    + df_2009['rushing_yds'] / 10
    + df_2009['rushing_tds'] * 6
    + df_2009['receiving_yds'] / 10
    + df_2009['receiving_tds'] * 6
    + (df_2009['kickret_tds'] + df_2009['puntret_tds']) * 10         # return TDs
    + (df_2009['receiving_twoptm'] + df_2009['rushing_twoptm']) * 2  # 2 pt conversions
    - df_2009['fumbles_lost'] * 2
)
df_2009['FtsyPts'] = points
df_2009.sort_values(by='id')
df_2010 = pd.read_csv('../sortedSeasonData/sortedSeason2010.csv')
```
Sort tags and see if they match
```
df_2010.sort_values(by='id')
```
# Week 3
## Introduction to Solid State
```
import numpy as np
import matplotlib.pyplot as plt
import os
import subprocess
from polypy.read import History
from polypy.msd import MSD
from polypy import plotting
def get_diffusion(file, atom):
    '''Return the diffusion-data line for the given atom (as a list of
    fields) from a DL_POLY output file, or None if it is not found.'''
    d = None
    with open(file) as f:
        in_table = False
        for line in f:
            if "atom D " in line:
                in_table = True  # the diffusion table starts here
            if in_table and str(atom) in line:
                d = line.split()
                break
    return d
```
# Background#
Now that you are familiar with molecular dynamics, you are now going to use it to tackle some real world problems.
The transport properties of a material determine many properties that are utilised for modern technological applications. For example, solid oxide fuel cell (SOFCs), which are an alternative to batteries, materials are dependent on the movement of charge carriers through the solid electrolyte. Another example are nuclear fuel materials which oxidise and fall apart - this corrosive behaviour is dependent on the diffusion of oxygen into the lattice.
Due to the importance of the transport properties of these materials, scientists and engineers spend large amounts of their time trying to optimise these properties using different stoichiometries, introducing defects, and using different synthesis techniques.
# Aim and Objectives #
The **Aim** of the next **five weeks** is to **investigate** the transport properties of a simple fluorite material - CaF$_2$.
The **first objective** is to **investigate** how the transport properties of CaF$_2$ are affected by temperature
The **second objective** is to **investigate** how the transport properties of CaF$_2$ are affected by structural defects (Schottky and Frenkel)
The **third objective** is to **investigate** how the transport properties of CaF$_2$ are affected by chemical dopants (e.g. different cations)
A rough breakdown looks as follows:
**Week 3**
- Molecular dynamics simulations of stoichiomteric CaF$_2$
**Week 4**
- Molecular dynamics simulations of CaF$_2$ containing Schottky defects
**Week 5**
- Molecular dynamics simulations of CaF$_2$ containing Frenkel defects
**Week 6**
- Molecular dynamics simulations of CaF$_2$ containing various dopants
**Week 7**
- Molecular dynamics simulations of CaF$_2$ containing various dopants
By the end of these **five weeks** you will be able to:
- **Perform** molecular dynamics simulations at different temperatures
- **Manipulate** the input files
- **Adjust** the ensemble for the simulation
- **Examine** the volume and energy of different simulations
- **Apply** VMD to visualize the simulation cell and evaluate radial distribution functions
The **Aim** of this **week** (week 3) is to **investigate** the temperature-dependence of the transport properties of a simple fluorite material CaF$_2$ using molecular dynamics (MD).
The **first objective** is to **familiarise** yourself with the molecular simulation software package <code>DL_POLY</code>
The **second objective** is to **complete** a tutorial which demonstrates how to calculate diffusion coefficients
The **third objective** is to **complete** a tutorial which demonstrates how to **calculate** the activation energy barrier of F diffusion
## Introduction to DL_POLY
<code>DL_POLY</code> is a molecular dynamics (MD) program maintained by Daresbury laboratories. In contrast to <code>pylj</code>, <code>DL_POLY</code> is a three-dimensional MD code that is used worldwide by computational scientists for molecular simulation, but it should be noted that the theory is exactly the same and any understanding gained from <code>pylj</code> is completely applicable to <code>DL_POLY</code>.
For the next five weeks you will use <code>DL_POLY</code> to run short MD simulations on CaF$_2$. You first need to understand the input files required for <code>DL_POLY</code>.
<code>**CONTROL**</code>
This is the file that contains all of the simulation parameters, e.g. simulation temperature, pressure, number of steps etc.
<code>**CONFIG**</code>
This is the file that contains the structure - i.e. the atomic coordinates of each atom.
<code>**FIELD**</code>
This is the file that contains the force field or potential model e.g. Lennard-Jones.
# Exercise 1: Setting Up an MD Simulation#
First, we will use <code>METADISE</code> to produce <code>DL_POLY</code> input files.
Contained within the folder <code>Input/</code> you will find a file called <code>input.txt</code>.
This is the main file that you will interact with over the next five weeks and is the input file for <code>METADISE</code> which generates the 3 <code>DL_POLY</code> input files: <code>FIELD</code>, <code>CONTROL</code> and <code>CONFIG</code>.
Essentially it is easier to meddle with <code>input.txt</code> than it is to meddle with the 3 <code>DL_POLY</code> files everytime you want to change something.
To run <code>METADISE</code> we will use the <code>subprocess</code> <code>python</code> module.
To use <code>subprocess</code>, specify the program you want to run and the directory to run it in; you will need to ensure that the file path is correct.
To **generate** the 3 <code>DL_POLY</code> input files: <code>FIELD</code>, <code>CONTROL</code> and <code>CONFIG</code>, **run** the cell below:
#### It is essential that the codes that were downloaded from [here](https://people.bath.ac.uk/chsscp/teach/adv.bho/progs.zip) are in the Codes/ folder in the parent directory, or this following cell will crash.
```
import os
import subprocess

subprocess.call('../Codes/metadise.exe', cwd='Input/')
os.rename('Input/control_o0001.dlp', 'Input/CONTROL')
os.rename('Input/config__o0001.dlp', 'Input/CONFIG')
os.rename('Input/field___o0001.dlp', 'Input/FIELD')
```
Now you should have a <code>CONFIG</code>, <code>CONTROL</code> and <code>FIELD</code> file within the <code>Input/</code> directory.
In theory you could just call the <code>DL_POLY</code> program in this directory and your simulation would run.
However, we need to tweak the <code>CONTROL</code> file in order to set up our desired simulation.
1. **Make** a new subdirectory in the <code>week 3</code> directory named <code>"Example/"</code> and copy <code>CONFIG</code>, <code>CONTROL</code> and <code>FIELD</code> to that subdirectory.
2. Now **edit** the <code>CONTROL</code> file to change the following:
<code>Temperature 300 ---> Temperature 1500
Steps 5001 ---> Steps 40000
ensemble nve ---> ensemble npt hoover 0.1 0.5
trajectory nstraj= 1 istraj= 250 keytrj=0 ---> trajectory nstraj= 0 istraj= 100 keytrj=0</code>
3. Now your simulation is ready, but **check** the structure before you run it. You can view the <code>CONFIG</code> file in three dimensions using the <code>VESTA</code> program.
It is always good to **check** your structure before (<code>CONFIG</code>) and after (<code>REVCON</code>) the simulation.
You can view the <code>CONFIG</code> and <code>REVCON</code> files in three dimensions using the <code>VESTA</code> program. <code>VESTA</code> can generate nice pictures which will look very good in a lab report.
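If you would rather make the step-2 edits from the notebook than by hand, a small helper along these lines would work (a sketch only: it assumes the <code>CONTROL</code> file contains exactly the strings listed above, so check them against your own file first):

```python
def edit_control(path, replacements):
    """Apply old -> new text substitutions to a DL_POLY CONTROL file in place."""
    with open(path) as f:
        text = f.read()
    for old, new in replacements.items():
        text = text.replace(old, new)
    with open(path, 'w') as f:
        f.write(text)

# strings as listed in step 2; verify the exact spacing in your CONTROL file
# edit_control('Example/CONTROL', {'Temperature 300': 'Temperature 1500',
#                                  'Steps 5001': 'Steps 40000'})
```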
<center>
<br>
<img src="./figures/vesta.png" width="400px">
<i>Figure 1. Fluorite CaF$_2$ unit cell visualised in VESTA.</i>
<br>
</center>
# Exercise 2: Running an MD Simulation
Now we have <code>DL_POLY</code> input files, we will run an MD simulation using <code>DL_POLY</code>.
1. To **run** <code>DL_POLY</code> from within the notebook, **use** the command below.
Keep in mind that this simulation will take 20 or so minutes, so be patient.
If you are not comfortable with running things through this notebook, you can copy and paste the <code>dlpoly_classic.exe</code> executable into the <code>Example/</code> subdirectory and then **double click** the <code>.exe</code> file.
```
subprocess.call('../Codes/dlpoly_classic.exe', cwd='Example/')
```
# Exercise 3: Inspecting an MD Simulation
Now that we have run an MD simulation using <code>DL_POLY</code>, we can analyse the output.
Once <code>DL_POLY</code> has completed, you will find several files relating to your simulation.
<code> **HISTORY** </code>
This file contains the configuration of your system at each step during the simulation, known as a _trajectory_. You can view this as a movie using <code>VMD</code>.
<code> **STATIS** </code>
Contains the statistics at each step of the simulation.
<code> **OUTPUT** </code>
Contains various properties of the simulation.
<code> **REVCON** </code>
This is the configuration at the end of the simulation. Can be viewed in <code>VESTA</code>. **Check** to see how it has changed, compare it to the <code>CONFIG</code> file.
# Exercise 4: Analysing the Diffusion Properties
Now we have inspected the final structure from the simulation, we can calculate the diffusion coefficient.
## Mean Squared Displacements - Calculating Diffusion Coefficients
As we have seen, molecules in liquids, gases and solids do not stay in the same place; they move constantly. Think about a drop of dye in a glass of water: as time passes, the dye distributes throughout the water. This process is called diffusion and is common throughout nature.
Using the dye as an example, the motion of a dye molecule is not simple. As it moves it is jostled by collisions with other molecules, preventing it from moving in a straight path. If the path is examined in close detail, it will be seen to be a good approximation to a _random walk_.
In mathematics, a random walk is a series of steps, each taken in a random direction. This was analysed by Albert Einstein in a study of _Brownian motion_, and he showed that the mean square of the distance travelled by a particle following a random walk is proportional to the time elapsed, as given by:
\begin{align}
\langle r^2 \rangle & = 6 D t + C
\end{align}
where $\langle r^2 \rangle$ is the mean squared distance, $t$ is time, $D$ is the diffusion coefficient and $C$ is a constant.
## What is the Mean Squared Displacement?
Going back to the example of the dye in water, let's assume for the sake of simplicity that we are in one dimension. Each step can be either forwards or backwards, and we cannot predict which.
From a given starting position, what distance is our dye molecule likely to travel after 1000 steps? This can be determined simply by adding together the steps, taking into account the fact that steps backwards subtract from the total while steps forward add to it. Since both forward and backward steps are equally probable, we come to the surprising conclusion that the expected total displacement is zero.
By adding the square of the distance we will always be adding positive numbers to our total, which now increases linearly with time. Based upon equation 1, it should now be clear that a plot of $\langle r^2 \rangle$ vs time will produce a straight line, the gradient of which is equal to $6D$, giving us direct access to the diffusion coefficient of the system.
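Before touching the full simulation, the "gradient gives the diffusion coefficient" idea can be checked with a toy random walk in pure NumPy (a sketch, independent of <code>DL_POLY</code> and <code>MSD.py</code>):

```python
import numpy as np

rng = np.random.default_rng(42)
n_walkers, n_steps = 500, 200
# unit-variance Gaussian steps in three dimensions for every walker
steps = rng.normal(size=(n_walkers, n_steps, 3))
positions = np.cumsum(steps, axis=1)
msd = (positions ** 2).sum(axis=2).mean(axis=0)   # <r^2> at each timestep
t = np.arange(1, n_steps + 1)
slope = np.polyfit(t, msd, 1)[0]                  # gradient of the MSD plot
D = slope / 6.0                                   # from <r^2> = 6 D t
```

With unit-variance steps in each of the three dimensions, the MSD grows as $3t$, so the fitted slope should be close to 3 and $D$ close to 0.5.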
Let's explore this with an example.
1. **Run** a short <code>DL_POLY</code> simulation on the input files provided.
You will run a small MSD program called <code>MSD.py</code> to analyse your simulation results.
First you need to **read** in the data. The <code>HISTORY</code> file contains a list of the atomic coordinates held by the atoms during the simulation.
2. **Run** the cell below to read the <code>HISTORY</code> file into the <code>Jupyter Notebook</code>.
```
## Provide the path to the simulation and the atom that you want data for.
data = History("Example/HISTORY", "F")
```
<code>data</code> is a class object containing information about the trajectory.
More information can be found here https://polypy.readthedocs.io/en/latest/reading_data.html and here https://github.com/symmy596/Polypy/blob/master/polypy/read.py .
The next step is to calculate the MSD.
3. **Run** the cell below to calculate the MSD of the chosen atom throughout the course of the simulation.
```
# Run the MSD calculation
f_msd = MSD(data.trajectory, sweeps=2)
output = f_msd.msd()
```
The MSD calculation function returns an object with information about the MSD calculation.
More information and a full tutorial on this functionality can be found here https://polypy.readthedocs.io/en/latest/msd_tutorial.html
4. **Run** the cell below to produce plots of the MSD, which should show a nice linear relationship.
```
ax = plotting.msd_plot(output)
plt.show()
print("Three Dimensional Diffusion Coefficient", output.xyz_diffusion_coefficient())
print("One Dimensional Diffusion Coefficient in X", output.x_diffusion_coefficient())
print("One Dimensional Diffusion Coefficient in Y", output.y_diffusion_coefficient())
print("One Dimensional Diffusion Coefficient in Z", output.z_diffusion_coefficient())
```
# Exercise 5: The Effect of Simulation Length
Now we have calculated the diffusion coefficient, we can investigate the influence of simulation length on the diffusion coefficient.
It is important to consider the length of your simulation (the number of steps).
1. **Create** a new folder called <code>"Example_2/"</code>
2. **Copy** the <code>CONFIG</code>, <code>FIELD</code> and <code>CONTROL</code> files from your previous simulation
3. **Change** the number of steps to 10000
4. **Rerun** the simulation by **running** the cell below
```
subprocess.call('../Codes/dlpoly_classic.exe', cwd='Example_2/')
```
5. **Run** the cell below to calculate and plot the MSD of the chosen atom throughout the course of the simulation.
```
data = History("Example_2/HISTORY", "F")
# Run the MSD calculation
f_msd = MSD(data.trajectory, sweeps=2)
output = f_msd.msd()
ax = plotting.msd_plot(output)
plt.show()
print("Three Dimensional Diffusion Coefficient", output.xyz_diffusion_coefficient())
print("One Dimensional Diffusion Coefficient in X", output.x_diffusion_coefficient())
print("One Dimensional Diffusion Coefficient in Y", output.y_diffusion_coefficient())
print("One Dimensional Diffusion Coefficient in Z", output.z_diffusion_coefficient())
```
You will hopefully see that your MSD plot has become considerably less linear. This shows that your simulation has not run for long enough and your results will be unreliable.
You will hopefully also see a change to the value of your diffusion coefficient.
**The length of your simulation is something that you should keep in mind for the next 5 weeks.**
# Exercise 6: Calculating the Activation Energy
Now we have investigated the influence of simulation length on the diffusion coefficient, we can calculate the activation energy for F diffusion by applying the Arrhenius equation.
To apply the Arrhenius equation, diffusion coefficients from a range of temperatures are required.
Common sense and chemical intuition suggest that the higher the temperature, the faster a given chemical reaction will proceed. Quantitatively, this relationship between the rate a reaction proceeds and the temperature is determined by the Arrhenius Equation.
At higher temperatures molecules move faster and collide more often, and a greater proportion of those collisions involve enough kinetic energy to overcome the activation energy. The activation energy is the minimum amount of energy required for a reaction to occur.
\begin{align}
k = A e^{-E_a / RT}
\end{align}
where $k$ is the rate coefficient, $A$ is a constant, $E_a$ is the activation energy, $R$ is the universal gas constant, and $T$ is the temperature (in kelvin).
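Taking the logarithm gives $\ln k = \ln A - E_a/(RT)$, so a plot of $\ln k$ (or $\ln D$) against $1/T$ is a straight line of slope $-E_a/R$. A sketch with synthetic diffusion coefficients generated from an assumed activation energy (the numbers here are illustrative, not results for CaF$_2$):

```python
import numpy as np

R = 8.314                          # gas constant, J mol^-1 K^-1
Ea_true, A = 50_000.0, 1e-7        # assumed activation energy (J/mol) and prefactor
T = np.array([1100.0, 1200.0, 1300.0, 1400.0, 1500.0])
D = A * np.exp(-Ea_true / (R * T))            # synthetic diffusion coefficients
# Arrhenius plot: ln D vs 1/T has slope -Ea/R
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Ea_fit = -slope * R                            # recover Ea from the slope
```

The same fit applied to the diffusion coefficients you measure at several temperatures gives the activation energy asked for in Exercise 7.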
# Exercise 7: Putting it All Together
Using what you have learned through the tutorials above, your task this week is to calculate the activation energy of F diffusion in CaF$_2$.
1. You will need to **select** a temperature range and carry out simulations at different temperatures within that range.
#### Questions to answer:
- In what temperature range is CaF$_2$ completely solid i.e. no diffusion?
- In what range is fluorine essentially liquid i.e. fluorine diffusion with no calcium diffusion?
- What is the melting temperature of CaF$_2$?
- Plot an Arrhenius plot and determine the activation energy in each temperature range - you will need to rearrange the equation.
You are encouraged to split the work up within your group and to learn how to view the simulation "movie" using VMD (ask a demonstrator). VMD is a fantastic program that allows you to visualise your simulation; included below is a video showing a short snippet of an MD simulation of CaF$_2$. A single F atom has been highlighted to show that diffusion is occurring.
```
%%HTML
<div align="middle">
<video width="80%" controls>
<source src="./figures/VMD_example.mp4" type="video/mp4">
</video></div>
```
Furthermore, VMD can also be used to generate images showing the entire trajectory of the simulation, e.g.
<center>
<br>
<img src="./figures/CaF2.png" width="400px">
<i>Figure 2. A figure showing all positions occupied by F during an MD simulation at 1500 K. F positions are shown in orange and Ca atoms are shown in green.</i>
<br>
</center>
To save you time, you can use the function declared at the start of this notebook to pull out a diffusion coefficient directly from the simulation output file. <code>MSD.py</code> is a small code to allow visualisation of the MSD plot, but it is not necessary every time you want the diffusion coefficient.
It is up to you how you organise/create your directories, but it is recommended that you start a new notebook.
Use the commands/functions used in this notebook to:
1. **Generate** your input files
2. **Run** <code>DL_POLY</code>
3. **Extract** the diffusion coefficient of F diffusion
Then write your own code to:
4. **Generate** an Arrhenius plot
5. **Calculate** the activation energies of F diffusion
If you finish early then feel free to start the week 4 exercises.
```
import pywt
import numpy as np
import pandas as pa
import sqlite3, os
from skimage.restoration import denoise_wavelet
import matplotlib.pyplot as plt
import warnings
import ruptures as rpt
from scipy.signal import savgol_filter, medfilt
import pylab as pl
from scipy.signal import hilbert
from scipy import signal
%matplotlib inline
warnings.filterwarnings("ignore")
DATA_PATH = "data"
ActualWeight = pa.read_excel("Seq2Seq/Actual_Weight_Urine_Stool_1736_1745.xlsx")
ActualWeight['Total Weight (g)'] = ActualWeight.iloc[:, 1:].sum(axis = 1)
ActualWeight
```
# Defecation
```
def GetSensor(use_i, sensor_i):
    sql_s = "SELECT timestamp_ms, value FROM data WHERE data_capture_id={} AND sensor_id={}".format(use_i, sensor_i)
    conn = sqlite3.connect('data/toilet.db')
    cursor = conn.execute(sql_s)
    time_measurements = []
    distance_measurements = []
    for row in cursor:
        time_measurements.append(row[0])
        distance_measurements.append(row[1])
    #endfor
    data_t = (time_measurements, distance_measurements)
    return data_t
#enddef

def cleanSensors(sensor1_t_l, sensor1_y_l, sensor2_t_l, sensor2_y_l):
    # get min / max of time-series
    #sensor1_t_l = data_d[1][0]
    #sensor2_t_l = data_d[2][0]
    #sensor1_y_l = data_d[1][1]
    #sensor2_y_l = data_d[2][1]
    min_t = min(min(sensor1_t_l), min(sensor2_t_l))
    max_t = max(max(sensor1_t_l), max(sensor2_t_l))
    # setup partitions
    step_t = 500
    min_floor_t = int(np.floor(min_t/step_t)*step_t)
    max_ceil_t = int(np.ceil(max_t/step_t)*step_t)
    step1_d = {}
    step2_d = {}
    for i in range(min_floor_t, max_ceil_t+step_t, step_t):
        step1_d[i] = []
        step2_d[i] = []
    #endfor
    # step through both and assign values to each partition
    for i in range(len(sensor1_t_l)):
        interval_t = int(np.floor(sensor1_t_l[i]/step_t)*step_t)
        step1_d[interval_t].append(sensor1_y_l[i])
    #endfor
    for i in range(len(sensor2_t_l)):
        interval_t = int(np.floor(sensor2_t_l[i]/step_t)*step_t)
        step2_d[interval_t].append(sensor2_y_l[i])
    #endfor
    # step through each partition and either take averages or set to nan
    clean1_d = {}
    for i in step1_d.keys():
        if(len(step1_d[i]) > 0):
            clean1_d[i] = np.mean(step1_d[i])
    #endfor
    clean1_sz = pa.Series(clean1_d)
    clean2_d = {}
    for i in step2_d.keys():
        if(len(step2_d[i]) > 0):
            clean2_d[i] = np.mean(step2_d[i])
    #endfor
    clean2_sz = pa.Series(clean2_d)
    return clean1_sz, clean2_sz
def GetTotalWeight(data_capture_id):
    data_d = {}
    data_d[2] = GetSensor(data_capture_id, 2) # seat scale
    data_d[3] = GetSensor(data_capture_id, 3) # foot scale
    #t0 = data_d[2][0][0]
    clean1_sz, clean2_sz = cleanSensors(data_d[2][0], data_d[2][1], data_d[3][0], data_d[3][1])
    seatScale_sz = clean1_sz/1000
    footScale_sz = clean2_sz/1000
    sumScale_sz = seatScale_sz + footScale_sz
    #sumScaleFiltered_sz = pa.Series(signal.medfilt(sumScale_sz, 11))
    sumScale_sz.index = (sumScale_sz.index - sumScale_sz.index[0])/1000
    #x_ix = sumScale_sz.index
    return sumScale_sz

def GetRadarSum(data_capture_id):
    data_fn = 'data/data_frames/data_capture_{}/radar_data.txt'.format(data_capture_id)
    data_f = open(data_fn, 'rt')
    line_s = data_f.read()
    data_l = eval(line_s)
    # save array of images
    t0_sz = pa.Series(data_l[0]['data'])
    data_d = {}
    for j in range(len(data_l)):
        t = data_l[j]['timestamp_ms']
        j_sz = pa.Series(data_l[j]['data'][0])
        data_d[t] = j_sz
    #endfor
    data_df = pa.DataFrame(data_d)
    area_d = {}
    floor_i = 50
    ceil_i = 200
    for i in data_df.columns:
        sq_sz = (data_df[i])**2
        area_d[i] = sum(sq_sz.iloc[floor_i:ceil_i])
    #endfor
    area_sz = pa.Series(area_d)
    area_sz = area_sz / 1e9
    area_sz = area_sz - area_sz.median()
    t0 = data_l[0]['timestamp_ms']
    area_sz.index = (area_sz.index - t0)/1000
    return area_sz

def ApplyEnvelope(sz):
    analytic_signal = hilbert(sz)
    env_sz = pa.Series(np.abs(analytic_signal))
    env_sz.index = sz.index
    return env_sz

def GetValuesAboveThreshold(sz, threshold):
    return sz > threshold

def GetValuesBelowThreshold(sz, threshold):
    return sz < threshold

def ApplyMedianFilter(sz, window_size):
    filt_sz = pa.Series(signal.medfilt(sz, window_size))
    filt_sz.index = sz.index
    return filt_sz
def GetStartEndTimesOfBooleanSz(sz):
    ts = sz.index
    start_end_times = []
    i = 0
    while i < len(sz):
        if sz.values[i] == True:
            j = i
            while (j < len(sz)-1) and (sz.values[j+1] == True):
                j += 1
            start_end_times.append([ts[i], ts[j]])
            i = j + 1
        else:
            i += 1
    return start_end_times

def GetWeightChange(weight_sz, start_time, end_time):
    start_idx = (pa.Series(weight_sz.index) > start_time).idxmax() - 1
    end_idx = (pa.Series(weight_sz.index) > end_time).idxmax()
    #print("Weight at start time: {}".format(weight_sz.iloc[start_idx]))
    #print("Weight at end time: {}".format(weight_sz.iloc[end_idx]))
    return weight_sz.iloc[start_idx] - weight_sz.iloc[end_idx]

def GetWeightChangeMinMax(weight_sz, start_time, end_time):
    #start_idx = (pa.Series(weight_sz.index) > start_time).idxmax() - 1
    end_idx = (pa.Series(weight_sz.index) > end_time).idxmax()
    weight_sz_start_end = weight_sz[(weight_sz.index > start_time) & (weight_sz.index < end_time)]
    return max(weight_sz_start_end) - weight_sz.iloc[end_idx]

def RightExtendBooleanTrueValues(sz, extension_time):
    temp_sz = sz.copy()
    i = 1
    while i < len(temp_sz):
        if((temp_sz.values[i-1] == True) and (temp_sz.values[i] == False)):
            extension_end_time = temp_sz.index[i] + extension_time
            while (i < len(temp_sz)) and (temp_sz.index[i] < extension_end_time):
                temp_sz.values[i] = True
                i += 1
        i += 1
    return temp_sz

def LeftExtendBooleanTrueValues(sz, extension_time):
    temp_sz = sz.copy()
    i = len(temp_sz) - 2
    while i >= 0:
        if((temp_sz.values[i] == False) and (temp_sz.values[i+1] == True)):
            extension_end_time = temp_sz.index[i] - extension_time
            while (i >= 0) and (temp_sz.index[i] > extension_end_time):
                temp_sz.values[i] = True
                i -= 1
        i -= 1
    return temp_sz
def GetDefecationWeightLoss(DATA_CAPTURE_ID, filter_window_size, threshold, extension_time):
    defecation_start_end_times = GetDefecationStartEndTimes(DATA_CAPTURE_ID, filter_window_size, threshold, extension_time)
    defecation_weight_loss = 0
    total_weight_sz = GetTotalWeight(DATA_CAPTURE_ID)
    total_weight_filt_sz = ApplyMedianFilter(total_weight_sz, filter_window_size)
    for start_end in defecation_start_end_times:
        weight_loss = GetWeightChange(total_weight_filt_sz, start_end[0], start_end[1])
        if weight_loss > 0:
            defecation_weight_loss += weight_loss
        #print("Between {} and {}, weight loss:{}\n".format(start_end[0], start_end[1], weight_loss))
    return defecation_weight_loss

def GetDefecationStartEndTimes(DATA_CAPTURE_ID, filter_window_size, threshold, extension_time):
    radar_sum_sz = GetRadarSum(DATA_CAPTURE_ID)
    radar_sum_env_sz = ApplyEnvelope(radar_sum_sz)
    radar_sum_env_filt_sz = ApplyMedianFilter(radar_sum_env_sz, filter_window_size)
    radar_sum_filt_sz = ApplyMedianFilter(radar_sum_sz, filter_window_size)
    radar_vals_above_threshold = GetValuesAboveThreshold(radar_sum_filt_sz, threshold)
    radar_vals_above_threshold = RightExtendBooleanTrueValues(radar_vals_above_threshold, extension_time)
    radar_vals_above_threshold = LeftExtendBooleanTrueValues(radar_vals_above_threshold, extension_time)
    defecation_start_end_times = GetStartEndTimesOfBooleanSz(radar_vals_above_threshold)
    return defecation_start_end_times

def PlotDefecationWeightRadar(DATA_CAPTURE_ID, filter_window_size, threshold, extension_time):
    start_end_times = GetDefecationStartEndTimes(DATA_CAPTURE_ID, filter_window_size, threshold, extension_time)
    total_weight_loss = GetDefecationWeightLoss(DATA_CAPTURE_ID, filter_window_size, threshold, extension_time)
    print("Predicted total: {}".format(total_weight_loss))
    if DATA_CAPTURE_ID in ActualWeight.ID.values:  # .values needed: `in` on a Series checks the index, not the values
        print("Actual total: {}".format(ActualWeight[ActualWeight.ID == DATA_CAPTURE_ID].iloc[:, 2].values[0]/1000))
    radar_sum_sz = GetRadarSum(DATA_CAPTURE_ID)
    total_weight_sz = GetTotalWeight(DATA_CAPTURE_ID)
    total_weight_filt_sz = ApplyMedianFilter(total_weight_sz, 5)
    fig, ax = plt.subplots(3, 1, figsize=(10, 6))
    ax[0].plot(total_weight_sz)
    ax[1].plot(total_weight_filt_sz)
    ax[2].plot(radar_sum_sz)
    ax[0].set_ylim(total_weight_sz.median()-0.5, total_weight_sz.median()+0.5)
    ax[1].set_ylim(total_weight_sz.median()-0.5, total_weight_sz.median()+0.5)
    for i in range(3):
        for start_end_time in start_end_times:
            ax[i].axvspan(start_end_time[0], start_end_time[1], alpha=0.5, color='orange')
    plt.show()

PlotDefecationWeightRadar(1763, 5, .15, 1.83)
PlotDefecationWeightRadar(1767, 5, .15, 1.83)
```
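The interval logic in <code>GetStartEndTimesOfBooleanSz</code> is easy to get wrong, so it is worth checking on a tiny synthetic series. The sketch below re-implements the same scan so that it runs on its own:

```python
import pandas as pd

def get_start_end_times(sz):
    """[start, end] index pairs for each run of True values (same scan as above)."""
    ts = sz.index
    out, i = [], 0
    while i < len(sz):
        if sz.values[i]:
            j = i
            while (j < len(sz) - 1) and sz.values[j + 1]:
                j += 1
            out.append([ts[i], ts[j]])
            i = j + 1
        else:
            i += 1
    return out

sz = pd.Series([False, True, True, False, True],
               index=[0.0, 0.5, 1.0, 1.5, 2.0])
print(get_start_end_times(sz))   # [[0.5, 1.0], [2.0, 2.0]]
```

A single-element run is reported with equal start and end times, which is what the extension helpers above then widen.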
# Area under Radar vs Defecation Weight
```
from scipy.integrate import simps, trapz
from scipy.stats import pearsonr

def GetAreaUnderRadar(DATA_CAPTURE_ID):
    radar_sum_sz = GetRadarSum(DATA_CAPTURE_ID)
    x = np.array(radar_sum_sz.index)
    f = radar_sum_sz.values
    return simps(f, x), trapz(f, x)

data_captures = np.arange(1736, 1746)
actual_defecate_weights = []
area_under_radar = []
for DATA_CAPTURE_ID in data_captures:
    row = ActualWeight[ActualWeight.ID == DATA_CAPTURE_ID]
    actual_defecate_weights.append(row.iloc[:, 2].values[0] / 1000)
    area_under_radar.append(GetAreaUnderRadar(DATA_CAPTURE_ID))
area_under_radar = np.array(area_under_radar)
actual_defecate_weights = np.array(actual_defecate_weights)
x, y = actual_defecate_weights, area_under_radar[:, 1]
m, b = np.polyfit(x, y, 1)
yhat = m * x + b
further = np.abs(yhat - y).argsort()[-3:][::-1]  # indices of the three largest residuals
data_captures[further]
corr = pearsonr(x, y)
plt.figure(figsize = (10, 6))
plt.plot(x, y, '.')
plt.plot(x[further], y[further], 'o')
plt.plot(x, m * x + b)
plt.xlabel("Actual Defecate Weights")
plt.ylabel("Area under Radar")
plt.title("$y = {:.2f}x + {:.2f}$, Cor = {:.3f}".format(m, b, corr[0]));
```
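<code>GetAreaUnderRadar</code> above leans on <code>simps</code> and <code>trapz</code>; the trapezoidal rule those calls implement is simple enough to spell out directly. A standalone sketch on a Gaussian pulse whose exact area is $\sqrt{\pi}$:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 201)
pulse = np.exp(-((t - 5.0) ** 2))   # a Gaussian 'event'; exact area is sqrt(pi)
# trapezoidal rule: average of adjacent samples times the sample spacing
area = np.sum((pulse[1:] + pulse[:-1]) / 2.0 * np.diff(t))
```

For a smooth, well-sampled signal like this, the trapezoidal estimate agrees with the analytic area to a fraction of a percent.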
```
import pandas as pd
import utils
import matplotlib.pyplot as plt
import random
import plotly.express as px
import numpy as np
random.seed(9000)
plt.style.use("seaborn-ticks")
plt.rcParams["image.cmap"] = "Set1"
plt.rcParams['axes.prop_cycle'] = plt.cycler(color=plt.cm.Set1.colors)
%matplotlib inline
```
In this notebook, the Percent Replicating score for DMSO at each position is computed for the following U2OS 48h time point compound plates:
1. Whole plate normalized CP profiles
2. Spherized CP profiles
3. Spherized DL profiles
The following are the steps taken:
1. Whole plate normalized CP profiles, Spherized CP profiles and Spherized DL profiles from the 48h Compound experiment are read and the replicates plates merged into a single dataframe.
2. All the non-negative control wells are removed.
3. DMSO wells in the same position are considered replicates while DMSO wells in different positions are considered non-replicates.
4. The signal distribution, which is the median pairwise replicate correlation, is computed for each replicate.
5. The null distribution, which is the median pairwise correlation of non-replicates, is computed for a large number of combinations of non-replicates (10,000 in the code below).
6. Percent Replicating is computed as the percentage of the signal distribution that is greater than the 95th percentile of the null distribution.
7. The signal and noise distributions and the Percent Replicating values are plotted and the table of Percent Replicating is printed.
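The <code>utils.percent_score</code> helper used below is not shown in this notebook; a minimal stand-in consistent with step 6 (synthetic correlation values, hypothetical function name) would be:

```python
import numpy as np

def percent_score_right(null_dist, signal_dist):
    """Percentage of the signal distribution above the null 95th percentile."""
    threshold = np.percentile(null_dist, 95)
    percent = 100.0 * np.mean(np.asarray(signal_dist) > threshold)
    return percent, threshold

rng = np.random.default_rng(0)
null_dist = rng.normal(0.0, 0.1, size=10_000)    # non-replicate correlations
signal_dist = rng.normal(0.5, 0.1, size=1_000)   # replicate correlations
percent, threshold = percent_score_right(null_dist, signal_dist)
```

With a signal distribution well separated from the null, as here, the score approaches 100 percent.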
```
n_samples = 10000
n_replicates = 4
corr_replicating_df = pd.DataFrame()
group_by_feature = 'Metadata_Well'
perturbation = "compound"
cell = "U2OS"
time = "48"
experiment_df = (
pd.read_csv('output/experiment-metadata.tsv', sep='\t')
.query('Batch=="2020_11_04_CPJUMP1" or Batch=="2020_11_04_CPJUMP1_DL"')
.query('Perturbation==@perturbation')
.query('Cell_type==@cell')
.query('Time==@time')
)
batches = {
"2020_11_04_CPJUMP1": {
"normalized": "normalized.csv.gz",
"spherized": "spherized.csv.gz"
},
"2020_11_04_CPJUMP1_DL": {
"spherized": "spherized.csv.gz"
}
}
for batch in experiment_df.Batch.unique():
    for type in batches[batch]:
        filename = batches[batch][type]
        batch_df = experiment_df.query('Batch==@batch')
        data_df = pd.DataFrame()
        for plate in batch_df.Assay_Plate_Barcode.unique():  # plates of the current batch only
            plate_df = utils.load_data(batch, plate, filename)
            data_df = utils.concat_profiles(data_df, plate_df)
        data_df = data_df.query('Metadata_control_type=="negcon"')
        metadata_df = utils.get_metadata(data_df)
        features_df = utils.get_featuredata(data_df).replace(np.inf, np.nan).dropna(axis=1, how="any")
        data_df = pd.concat([metadata_df, features_df], axis=1)
        replicating_corr = list(utils.corr_between_replicates(data_df, group_by_feature))  # signal distribution
        null_replicating = list(utils.corr_between_non_replicates(data_df, n_samples=n_samples, n_replicates=n_replicates, metadata_compound_name=group_by_feature))  # null distribution
        prop_95_replicating, value_95_replicating = utils.percent_score(null_replicating,
                                                                        replicating_corr,
                                                                        how='right')
        if batch == "2020_11_04_CPJUMP1":
            features = 'CellProfiler'
        else:
            features = 'DeepProfiler'
        corr_replicating_df = corr_replicating_df.append({'Description': f'{features}_{type}',
                                                          'Modality': f'{perturbation}',
                                                          'Cell': f'{cell}',
                                                          'time': f'{time}',
                                                          'Replicating': replicating_corr,
                                                          'Null_Replicating': null_replicating,
                                                          'Percent_Replicating': '%.1f' % prop_95_replicating,
                                                          'Value_95': value_95_replicating}, ignore_index=True)
print(corr_replicating_df[['Description', 'Percent_Replicating']].to_markdown(index=False))
utils.distribution_plot(df=corr_replicating_df, output_file="5.percent_replicating.png", metric="Percent Replicating")
corr_replicating_df['Percent_Replicating'] = corr_replicating_df['Percent_Replicating'].astype(float)
corr_replicating_df.loc[(corr_replicating_df.Modality=='compound') & (corr_replicating_df.time=='48'), 'time'] = 'long'
plot_corr_replicating_df = (
corr_replicating_df.rename(columns={'Modality':'Perturbation'})
.drop(columns=['Null_Replicating','Value_95','Replicating'])
)
fig = px.bar(data_frame=plot_corr_replicating_df,
x='Description',
y='Percent_Replicating',
facet_row='time',
facet_col='Cell')
fig.update_layout(title='Percent Replicating vs. Perturbation - U2OS 48h Compound plates',
xaxis=dict(title='Feature set'),
yaxis=dict(title='Percent Replicating'),
yaxis3=dict(title='Percent Replicating'))
fig.show("png")
fig.write_image(f'figures/5.percent_replicating_facet.png', width=640, height=480, scale=2)
print(plot_corr_replicating_df[['Description','Perturbation','time', 'Cell' ,'Percent_Replicating']].to_markdown(index=False))
```
# CaseLaw dataset to assist with Law-Research - EDA
---
<dl>
<dt>Acquiring the dataset</dt>
<dd>We initially use dataset of all cases in USA to be able to train it and as a proof of concept.</dd>
<dd>The dataset is available in XML format, which we will put in mongodb or firebase format based on how unstructured the dataset is.</dd>
<dd>dataset url: (https://case.law/)
</dd>
<dt>Research</dt>
<dd>We are looking into <em>NLP</em>, <em>LSTM</em> and <em>Sentiment Analysis</em>.</dd>
</dl>
```
import jsonlines
from pymongo import MongoClient
client = MongoClient()
db = client.legal_ai
cases = db.cases
some_date = '1820-01'
print(int(some_date[0:4])<1950)
id_saved = []
with jsonlines.open('../data.jsonl') as reader:
    for obj in reader:
        if int(obj['decision_date'][0:4]) > 1950:
            case_id = cases.insert_one(obj).inserted_id
            id_saved.append(case_id)
len(id_saved)
```
## Testing out Similarity Mechanism
---
### Setup
- Test PyDictionary to build keywords
- Construct a mechanism, to extract keywords, and store in a searchable manner.
---
### Search
- Build keywords out of your search
- Search among dataset keywords
- Nearest dates, highest weight, highest precedence show up
- Pagination scroll, continues the search.
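None of this is implemented yet; as a minimal stand-in for "build keywords and search among dataset keywords", a plain-Python keyword-overlap score could look like the sketch below (the stopword list and example sentences are illustrative only):

```python
import re

STOPWORDS = {'the', 'a', 'an', 'of', 'and', 'in', 'to', 'is', 'by'}

def keywords(text):
    """Lower-case word tokens minus stopwords and very short words."""
    tokens = re.findall(r'[a-z]+', text.lower())
    return {t for t in tokens if t not in STOPWORDS and len(t) > 2}

def jaccard(a, b):
    """Overlap score between two keyword sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

q = keywords('Negligence in a motor vehicle accident')
doc = keywords('The accident involved negligence by the vehicle operator')
print(round(jaccard(q, doc), 2))   # 0.5
```

Ranking cases by this score (then breaking ties by date or weight) would give the search behaviour sketched above.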
```
# NLTK
```
## Transforming dataset
---
### Extract the first data and study it
- Identify the key elements that need to be transformed & list them
- Build a mechanism to transform for one datapoint.
---
### Perform for entire dataset
- Run a loop and apply the same changes for every datapoints.
```
# Extracting the first element
first_case = cases.find_one()
import xml.etree.ElementTree as ET
root = ET.fromstring(first_case['casebody']['data'])
root
```
# Getting the case body cleaned into a separate field on db
```
summary = ''
for child in root:
    for sub_child in child:
        if 'footnotemark' in sub_child.tag[sub_child.tag.index("}")+1:] or 'author' in sub_child.tag[sub_child.tag.index("}")+1:]:
            continue
        summary += sub_child.text + "\n"
print(summary)
```
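The namespace-stripping logic above can be checked on a toy document before running it over the whole collection (the namespace URI here is illustrative, not the real CAP schema):

```python
import xml.etree.ElementTree as ET

# a toy casebody with a namespace, an author tag and a footnote marker
xml = '''<casebody xmlns="urn:example">
  <opinion>
    <author>Judge Example</author>
    <p>First paragraph.</p>
    <footnotemark>1</footnotemark>
    <p>Second paragraph.</p>
  </opinion>
</casebody>'''

root = ET.fromstring(xml)
summary = ''
for child in root:
    for sub_child in child:
        tag = sub_child.tag[sub_child.tag.index('}') + 1:]   # strip '{namespace}'
        if tag in ('footnotemark', 'author'):
            continue
        summary += sub_child.text + '\n'
print(summary)
```

Only the two paragraph texts survive, which is the behaviour the cell above relies on.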
# Do the same for all the files now!
```
all_cases = cases.find()
all_cases.count()
check_one = True
for each_case in all_cases:
    root = ET.fromstring(each_case['casebody']['data'])
    summary = ''
    for child in root:
        for sub_child in child:
            if 'footnotemark' in sub_child.tag[sub_child.tag.index("}")+1:] or 'author' in sub_child.tag[sub_child.tag.index("}")+1:]:
                continue
            summary += sub_child.text + "\n"
    myquery = { "_id": each_case['_id'] }
    newvalues = { "$set": { "summary": summary } }
    cases.update_one(myquery, newvalues)
```
# Change Decision Date to mongodb date format
```
import datetime

count = 0
all_cases = cases.find()  # re-query: the cursor above was exhausted by the previous loop
for each_case in all_cases:
    try:
        decision_date = datetime.datetime.strptime(each_case['decision_date'], "%Y-%m-%d %H:%M:%S")
    except:
        try:
            decision_date = datetime.datetime.strptime(each_case['decision_date'], "%Y-%m-%d")
        except:
            try:
                decision_date = datetime.datetime.strptime(each_case['decision_date'], "%Y-%m")
            except:
                try:
                    decision_date = datetime.datetime.strptime(each_case['decision_date'], "%Y")
                except:
                    pass
    myquery = { "_id": each_case['_id'] }
    newvalues = { "$set": { "decision_date": decision_date } }
    cases.update_one(myquery, newvalues)
```
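The nested <code>try</code>/<code>except</code> ladder above can be flattened into a loop over formats, which is easier to extend; a sketch:

```python
import datetime

def parse_decision_date(raw):
    """Try progressively coarser date formats, mirroring the nested try/except above."""
    for fmt in ("%Y-%m-%d %H:%M:%S", "%Y-%m-%d", "%Y-%m", "%Y"):
        try:
            return datetime.datetime.strptime(raw, fmt)
        except ValueError:
            continue
    return None   # unlike the loop above, make an unparseable date explicit

print(parse_decision_date("1820-01"))   # 1820-01-01 00:00:00
```

Returning <code>None</code> for an unparseable string also avoids silently reusing the previous case's date, which the <code>pass</code> branch above would do.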
# Elastic Search
```
import elasticsearch
from datetime import date
cutoff_date = date(2000, 1, 1)  # avoid shadowing the imported name `date`, which would break repeated runs
cases.find({ "decision_date": {"$gte": cutoff_date}})
# Take only the latest cases
all_cases = cases.find({ "decision_date": {"$gte": cutoff_date}})
all_cases.count()
```
## HOWTO estimate parameter-errors using Monte Carlo - an example with python
Will Clarkson, Sat March 8th 2014
UPDATED Sun March 14th 2021 with more recent system version and a few other minor style updates (now runs on python 3 and should be backwards-compatible to python 2.7).
I have started the process of updating this series of HOWTOs to run on more recent systems than what I had access to in 2014 (and also with improvements to the code, e.g. "N" --> "np" for numpy). I make no promises that this will be regularly updated. The new versions will be curated at this public github repository (MIT license): https://github.com/willclarkson/astroStatsHOWTOsDearborn
*(Note: This is the first really in-depth HOWTO I've put up at Dearborn, and it contains a number of other useful points about data analysis with python (e.g. how to fit a curve to data, how to annotate plots...). Even if you know Monte Carlo backwards and forwards, you may find the syntax examples below useful. As with all python "Notebooks," you should be able to reproduce everything here just by pasting the commands into your IPython interpreter.*)
A few summary points (since this HOWTO has become fairly long):
[1]. Monte Carlo is quick (to code), flexible, and easy to do - the working code examples below are only a few lines long.
[2]. It produces empirical error estimates on your fitted parameters, no matter how complicated the relationships of the parameters to the data;
[3]. The estimated range of parameters depends on the details of the simulation, so make sure it's as realistic as you can make it;
[4]. There are some interesting subtleties in how you report the error estimates; the best way to report them is to show not only a one- or two-sided range, or (better) the full covariance matrix, but (best) to provide the complete distribution of parameter-estimates over your simulations.
[5]. You can use Monte Carlo to realistically investigate how to improve your experiment to allow parameter-uncertainties sufficiently small to be scientifically useful.
### Introduction
One of the most important pieces of model-fitting is to determine the "uncertainty" in the value of some parameter in the model. You might have fit some value of your model parameter to data, and it may even go through most of the datapoints and be consistent with your prior expectation on the parameter-values. But unless you know what range of values of this parameter are consistent with the data, you really don't know if your model fits at all.
So, how do we know what range of values of a parameter are consistent with the data?
### Background
In the material below I'm skirting round some very deep and interesting ideas in order to show a practical way to determine this range. However a few points are in order to clarify what I'm talking about here.
In what follows, I'll denote the "error on the parameter" to mean "the spread in the parameter values that are consistent with the data." This is a somewhat vague definition (see any standard text for more detail). In the context of empirical estimators on the parameter-spread, one might report the "68 percent confidence interval" to mean "the range of fitted values we would obtain 68% of the time" when the true value is at or near our best-fit value. Since we can always draw likelihood contours centered on any point in the distribution, we can tighten this up a bit by requiring the range to be "centered" on the most likely value, in the sense that the trials are ordered in increasing order of likelihood and the middle set returned as the range. This is also formally a bit vague, but good enough for our purposes here. It's how we know what we'd get "X% of the time" that is the important part.
It's common in the literature to condense (massively) the information contained in the deviations from the "best-fit" value to report the "1-sigma" range, often reported as $a \pm s$ where $s$ is the "1-sigma" range. In most cases this means the range of values that bound 68 percent of the measured values under a large number of experiments (or simulations). Formally, this practice throws away most of the information the reader might want to know: even under gaussian measurement errors the posterior distribution of the best-fit parameter can be highly asymmetric and non-gaussian. Simply reporting one number throws away the true distribution and is not good practice. It's also (less) common to report a two-sided error, like: $a^{+s}_{-r}$, where $s$ is a measure of the standard deviation of the part of the distribution above the best-fit, and $r$ below it (there is a factor-two to think about here; if the distribution were symmetric, you'd want $x^{+s}_{-s}$ to denote $x\pm s$ not $x \pm 2s$...). This usually implicitly approximates the upper part to half a gaussian and the lower part to a gaussian with a different standard deviation. This still may not be a good approximation to the true distribution of best-fit parameters. However in many cases this may be sufficient (say, when you are reporting positions and their errors for ten thousand stars in a catalog and don't want to plot the full posterior for each one - although even here you can provide the graphs electronically.)
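The percentile-style reporting described above is a one-liner in practice. A minimal sketch (the gaussian sample here is a hypothetical stand-in for a set of simulated best-fit values; the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical distribution of best-fit values from 4000 simulated experiments
bFits = rng.normal(loc=-1.0, scale=0.14, size=4000)

# The central 68 percent interval is bracketed by the 16th and 84th percentiles
lo, hi = np.percentile(bFits, [16.0, 84.0])
print(lo, hi)
```

Note that np.percentile makes no gaussian assumption, so the same cut works unchanged for the asymmetric, long-tailed distributions that appear later in this HOWTO.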
I will also use a rather classical definition of what I mean by "consistent with data" and "best-fit." When finding the model-parameters that best fit the data, we can maximize the probability of getting the measured data values given the model and our choice of best-fit parameter values. If the parameter-values are a good fit, then the deviation between observed data and model predictions is small; if they are a bad fit, then this deviation is large. This "maximum-likelihood" way of thinking is from some points of view backwards - what we really want is the probability of the model given the data (the "Bayesian" approach), but what we actually maximize is the probability of the data given the model. In many practical cases the two approaches give similar values and ranges, and the one approach can be tweaked to approach the other. (For more, see any standard text on data analysis.)
We make the assumptions that:
[1] our model f(x) really does describe the behavior we are measuring;
[2] any deviations between the perfect underlying pattern predicted by f(x) and those we measure y(x), are due only to measurement error that we can parameterise. (This assumption can be relaxed, but in this HOWTO I preserve it to keep things simple). A common choice of this parameterization is a Gaussian - under this parameterization then curve-fitting by minimising the chi-square statistic is formally identical to maximizing the likelihood of (data given model).
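Written out, the Gaussian parameterization in [2] makes the equivalence explicit. The likelihood of the data given the model is

$$\mathcal{L} = \prod_i \frac{1}{\sqrt{2\pi\sigma_i^2}}\exp\left[-\frac{\left(y_i - f(x_i)\right)^2}{2\sigma_i^2}\right],$$

so that

$$-2\ln\mathcal{L} = \sum_i \frac{\left(y_i - f(x_i)\right)^2}{\sigma_i^2} + \mathrm{const} = \chi^2 + \mathrm{const},$$

and maximizing $\mathcal{L}$ over the parameters is exactly minimizing $\chi^2$ when the $\sigma_i$ are fixed.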
If we were to conduct a large (~infinite?) number of identical experiments, then, the "true" parameters of our model f(x) would not change, but those are inaccessible to us - the parameters that best describe the data would change a little between experiments because we don't measure exactly the underlying behaviour. **The range of best-fit values returned across the set of experiments is then a reasonable estimate for the range in the parameter-values allowed by the data.**
### Error-estimates from Monte Carlo
Since we cannot do an infinite number of repeat-experiments (sometimes we cannot even do one if the behaviour we measure is transient), we need another way to predict what range of parameter values would be returned if we could do them.
One way is the formal error-estimate: *IF* the measurement errors all follow the same distribution, and if they are "small enough," then you can use standard error-propagation to take the measurement error and propagate it through to get a formal prediction on the error of the parameter. *BUT* there's no guarantee that this will work in all cases, for at least three obvious reasons.
(i) You can think of simple models in which the Taylor series approximation behind standard error-propagation may become pathological (to think about: what is the formal variance of the Lorentz distribution, for example? How well might error-propagation work for, say, 1/x near x=0?). Or,
(ii) your real-life science example may include a model whose error propagation is quite nasty formally. Or,
(iii) for various real-world reasons you might be using a model-fitting scenario that breaks the chain of error-propagation in some way (might be e.g. numerical approximations in there if you're near a singularity in the model, or you might have something apparently innocuous like $|x|$ in the model).
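Point (i) is easy to demonstrate numerically. A quick sketch (not from the original HOWTO; the numbers are illustrative): for $f(x) = 1/x$, first-order propagation predicts $\sigma_f \approx \sigma_x/x^2$, and comparing that against a direct simulation shows the prediction holding far from the singularity and failing badly near it:

```python
import numpy as np

rng = np.random.default_rng(3)
sigX = 0.1

for x0 in [2.0, 0.5, 0.15]:
    propagated = sigX / x0**2                     # first-order Taylor prediction
    samples = 1.0 / rng.normal(x0, sigX, size=200000)
    empirical = np.std(samples)                   # what 1/x actually does
    print(x0, propagated, empirical)
```

Far from x=0 the two agree well; once sigX becomes comparable to x0, the empirical spread blows up because some draws land near the pole.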
What you need in the real world, is a method that will empirically find the range of parameters that fit the model to some level of "confidence" without actually doing ten thousand re-runs of the experiment to determine this range.
This is what Monte Carlo does in this context$^1$: ***simulate a large number of fake datasets and find the best-fit parameters using exactly the same method that you're using to fit your real data.*** The range of returned parameters under these fake experiments is then a reasonable approximation to the true underlying error in the best-fit parameters.
Even here there are important subtleties. The uncertainty in the best-fit parameter (i.e., the range of parameters consistent with the data) can depend strongly on the truth-value of the parameter - which is unknown. The formally correct procedure in these cases is to find the distribution of returned values under a range of truth-values, and use an ordering principle in the likelihood to find the range of recovered values when the truth-value is allowed to vary. The famous (to Physicists!) paper by Feldman & Cousins illustrates how to properly do this (link below).
Feldman & Cousins 1997: A Unified Approach to the Classical Statistical Analysis of Small Signals
http://arxiv.org/abs/physics/9711021
In many cases, however, you can assume the range of consistent-values does not change much with the truth-value (or verify that this is so through simulation), and simulate your fake experiments using the same truth-value for each trial. The range of best-fit values when this truth-model is "experimentally" sampled is then a reasonable estimate for the uncertainty on the parameter-value. This is what we do in this HOWTO.
$^1$(I say "in this context" to distinguish error-estimates by Monte Carlo from Monte Carlo integration).
### Contexts in which you might see Monte Carlo error-estimates
Before (finally) moving on to the example with code, it's worth listing a few of the contexts in which you might see this. Any decent modern textbook will have lots more (e.g. Wall & Jenkins, Practical Statistics for Astronomers has a good view from 40,000 feet). Typical cases:
[1]: Well-understood model, error distribution understood, want parameter errors (the case in this HOWTO);
[2]: Well-understood model, error distribution understood, want to know what signal-strength you might mistakenly ascribe to data that doesn't actually contain a signal ("detection limits");
[3]: Well-understood model, error distribution not well-behaved or well-understood (in this case use bootstrap resampling; more about this in a future HOWTO);
[4]: Well-understood model, error distribution understood, we have information from some other measurements that constrain one or more of the relevant parameters (i.e. Bayesian framework: Markov Chain Monte Carlo is a good choice here);
[5]: Well-understood model, error distribution not understood, high-dimensional parameter space (Markov Chain Monte Carlo again)
As I hope to show in a future HOWTO, Markov Chain Monte Carlo in some sense is a superset of the techniques I describe here, as it allows these methods to be extended to prior information.
If the model is NOT well-understood, or there are a few good choices for the models that parameterise the observed variation, then we are in the range of model-comparison, or alternatively non-parametric comparisons. Those are also outside the scope of this HOWTO.
## A practical example: 1/t decay with only six measurements
With that material aside, here's a practical example. First we generate a "measured" dataset that has been perturbed from the "truth" parameters (this corresponds to our experiment). Then we fit this dataset to estimate the value of the power-law index by which y(x) decays over time. Then we use Monte-Carlo to estimate the uncertainty in this best-fit value.
First we import a few modules we'll need. NOTE: if you enter the lines below into your python command-line (all but [8]) in order, you should be able to reproduce all the steps I'm doing here.
```
import pylab as P
import numpy as np
from scipy import optimize
```
(The following line is needed in the ipython notebook: you wouldn't need to type this from the python prompt)
```
%matplotlib inline
```
### "Experimental" data
First we do the experiment - here we simulate the data from 1/t decay. I use uniform error for simplicity of exposition, but there's no reason we could not make things more realistic later on. Let's suppose we have a small-ish number of datapoints:
```
xMeas = np.random.uniform(0.5,3.0,size=6)
yTrue = 1.5/xMeas
sError = 0.1
yMeas = yTrue + np.random.normal(scale=sError, size=np.size(yTrue))
```
Let's plot this to see how our experiment looked:
```
P.errorbar(xMeas,yMeas,yerr=sError,lw=0,elinewidth=1,ecolor='b', fmt='ko',markersize=2)
P.xlabel('"Time"')
P.ylabel('Measured value')
P.xlim(0.4,3.0)
```
### Fitting our experimental data
Now we fit this data with our model. For this example, I'll assume that for whatever reason we've decided to use scipy's "curve_fit", which is pretty robust (although as called here we do not pass the measurement errors into the fit; curve_fit can accept them via its `sigma` argument). No matter - the Monte Carlo will tell us what range of parameters come out under our chosen fitter.
First we define the function to fit to this data. We want to have enough free parameters to actually capture the behavior we think is going on, but not introduce redundant parameters. We also want to furnish the fitter with an initial guess, which I'll call "vGuess" below:
```
def f_decay(x,a,b):
return a*x**(b)
```
We need to supply the fitter with an initial guess of the parameters. Since we'll be using the same guess for our Monte Carlo below, I'll define this as a separate element here. I'll also make the initial guess obviously "wrong" - i.e. assuming a quadratic when the underlying behavior is 1/t - to see what happens.
```
vGuess = [2.0,-2.0]
```
Now we run the fitter. Like many of scipy's optimization routines, the fitter needs to know (i) what function to use, (ii) the data to fit, and finally (iii) an initial guess of the parameters. curve_fit happens to return the best-fit parameters as the first of two return-values. So we need to send those two returned values into two new variables - "vPars" will hold the returned best-fit parameters.
```
vPars, aCova = optimize.curve_fit(f_decay, xMeas, yMeas, vGuess)
```
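As an aside (a sketch, not part of the original analysis): curve_fit can be told about the measurement errors via its `sigma` argument, and with `absolute_sigma=True` the returned covariance matrix becomes a formal error estimate that you can later compare against the Monte Carlo results. The cell below regenerates a small fake experiment so it is self-contained:

```python
import numpy as np
from scipy import optimize

def f_decay(x, a, b):
    return a * x**b

# Regenerate a small fake 1/t experiment (illustrative, not the notebook's arrays)
rng = np.random.default_rng(7)
sError = 0.1
xMeas = rng.uniform(0.5, 3.0, size=6)
yMeas = 1.5 / xMeas + rng.normal(scale=sError, size=6)

# Weighted fit: per-point errors in, formal covariance matrix out
vPars, aCova = optimize.curve_fit(f_decay, xMeas, yMeas, p0=[2.0, -2.0],
                                  sigma=np.full(6, sError), absolute_sigma=True)
print(vPars)
print(np.sqrt(np.diag(aCova)))   # formal 1-sigma errors on a and b
```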
Let's take a look at those best-fit parameters:
```
print(vPars)
```
That's not too bad - the "Truth" values were y(x) = 1.5/x and we have found y(x) = 1.46/x^(1.13). Let's take a look at what this model looks like over the data:
```
xFine = np.linspace(0.4,3.0,100)
P.errorbar(xMeas,yMeas,yerr=sError,lw=0,elinewidth=1,ecolor='b', fmt='ko',markersize=2)
P.plot(xFine, f_decay(xFine,*vPars), 'g-', lw=1) # Fitted parameters
P.plot(xFine, f_decay(xFine,1.5,-1.0), 'r--', lw=1) # Parameters used to generate data
P.title('Fitted curve (green) and "truth" curve (red dashed)')
```
Visually, this isn't **too** horrendous. At this point we might be tempted to claim that "obviously" our data shows y(x) = constant/$x^{1.13}$ since that model goes through the points.
But what range of parameter-values are consistent with a dataset like this?
### Monte Carlo - allowing observing times to vary
What we do next depends on what level we think our hypothetical experiments might differ from each other. I'll make the assumption here that the times of measurement between x=0.5 and x=3.0 were random. In that case, we would need to include this variation of measurement-time in our simulations in order to report the range of values another experimenter might find if they used a similar setup. So, we will generate a large number of datasets, re-fit the parameter values where the measurement-times are also not under our experimenter's control, and then find the range of parameters that match the data.
We need to set up a few things first: The number of trials and the combined set of best-fit parameters, for all the model parameters (initially empty). So:
```
nTrials = 4000
aFitPars = np.array([])
```
Now we actually do the simulations. Each time we need to generate the data as well as fit it.
(There is one syntax complication: we cannot stack a 1d vector onto an empty array in python, so there is an if/then for the FitPars array: if it's empty, copy the latest round of fitted parameters into it, if not then stack the latest round of fitted parameters onto what we have so far.)
```
for iTrial in range(nTrials):
xTrial = np.random.uniform(0.5,3.0,size=np.size(xMeas))
yGen = 1.5/xTrial
yTrial = yGen + np.random.normal(scale=sError,size=np.size(yGen))
# We use a try/except clause to catch pathologies
try:
vTrial, aCova = optimize.curve_fit(f_decay,xTrial,yTrial,vGuess)
except:
dumdum=1
continue # This moves us to the next loop without stacking.
#here follows the syntax for stacking the trial onto the running sample:
if np.size(aFitPars) < 1:
aFitPars=np.copy(vTrial)
else:
aFitPars = np.vstack(( aFitPars, vTrial ))
```
A couple points to note in the above chunk:
(i) All those np.size() calls are to ensure that the various arrays are consistent with the size of the measured data. We could equally well have typed "6" in most of those places, but then we'd have to change it each time a new experiment was done with different numbers of datapoints. Also,
(ii) Your fitting routine might sometimes not work. A more sophisticated analysis would catch these errors: here I'm just using python's "try/except" clause to gracefully ignore the bad cases. (If you're finding that more than a percent or so of cases are breaking, you may want to double-check whether your model has too few or too many parameters for the data). Finally:
(iii) In this example, I am starting with an empty aFitPars array and then stacking on the fit-values only if the fitting routine ran without failing. The "continue" statement stops the routine from dumbly stacking on the last fit-value if the fit failed. I do things this way so that the fitpars array is always the correct size to match the number of correctly-run trials.
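An alternative to the if/else stacking in point (iii) - a sketch, not the notebook's own code - is to accumulate the successful fits in a plain Python list and convert once at the end, which removes the empty-array special case entirely:

```python
import numpy as np
from scipy import optimize

def f_decay(x, a, b):
    return a * x**b

rng = np.random.default_rng(0)
sError = 0.1
vGuess = [2.0, -2.0]

fits = []                               # plain list: no empty-array special case
for _ in range(500):
    xTrial = rng.uniform(0.5, 3.0, size=6)
    yTrial = 1.5 / xTrial + rng.normal(scale=sError, size=6)
    try:
        vTrial, _ = optimize.curve_fit(f_decay, xTrial, yTrial, p0=vGuess)
    except (RuntimeError, ValueError):  # curve_fit raises when a fit fails
        continue
    fits.append(vTrial)

aFitPars = np.array(fits)               # single conversion at the end
print(aFitPars.shape)
```

np.vstack inside a loop also re-copies the whole array each iteration, so the list version scales better for large nTrials.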
Having done all that, let's look at the size of the set of trials:
```
np.shape(aFitPars)
```
This shows that all our 4000 trials were successful, which isn't too bad. Now, let's look at the distribution of powers of x that came out of the fit:
```
print(np.median(aFitPars[:,1]))
print(np.std(aFitPars[:,1]))
```
Let's take a graphical look at this parameter. We'll use matplotlib's "hist" feature to generate and plot the distribution for convenience, but there are other better tools you'll likely come across.
```
P.hist(aFitPars[:,1],bins=50)
P.xlabel('Power-law index b')
P.ylabel('N(b)')
print(np.std(aFitPars[:,1]))
```
We see that the standard deviation of our fitted parameter is pretty high - our measurement of (constant/$x^{1.13}$) is more accurately (constant/$x^{0.97 ~ \pm ~0.138}$). This is consistent with 1/x within the range of values we have recovered.
Notice also that our 1D distribution looks nice and gaussian. But is the situation really this simple? Let's look at both power-law components together:
```
P.scatter(aFitPars[:,0], aFitPars[:,1], alpha=0.5, s=9, edgecolor='none')
P.xlabel('Normalization of power-law a')
P.ylabel('Power-law index b')
```
Here follows a little bit of matplotlib syntax to show this in a slightly more visually appealing way:
```
from scipy.stats import kde
x,y=aFitPars.T
# Use a kernel density estimator to produce local-counts in this space, and grid them to plot.
k = kde.gaussian_kde(aFitPars.T)
nbins=200
xi, yi = np.mgrid[x.min():x.max():nbins*1j, y.min():y.max():nbins*1j]
zi = k(np.vstack([xi.flatten(), yi.flatten()]))
# Show the density
P.pcolormesh(xi, yi, zi.reshape(xi.shape), zorder=3)
P.colorbar()
# Show the datapoints on top of this, and also the contours. "zorder" sets the vertical order in the plot.
P.scatter(aFitPars[:,0], aFitPars[:,1], c='w', s=2, zorder=15, edgecolor='none',alpha=0.75)
P.contour(xi,yi,zi.reshape(xi.shape), zorder=25, colors='0.25')
P.ylim(-1.45,-0.55)
P.xlim(1.25,1.80)
P.xlabel('Power-law normalization a')
P.ylabel('Power-law index b')
```
Even in our simple two-parameter problem the results are quite interesting. For example, the correlation between parameters appears to switch sign the farther from the center of the cloud we go - perhaps indicating different regimes depending on the clustering of measurement-times.
### Were our observing times special?
Now suppose instead that we had good reason to make measurements at the times (x-values) that we did. Perhaps a realistic estimate for the errors should not allow the measurement times to vary.
Let's try another Monte-Carlo, this time asking what parameter values we recover if we make identical experiments at the same times as our real data, but still subject to experimental error at those times:
```
aFitSameTimes=np.array([])
for iTrial in range(nTrials):
yGen = 1.5/xMeas # Same measured times this time!
yTrial = yGen + np.random.normal(scale=sError,size=np.size(yGen))
# We use a try/except clause to catch pathologies
try:
vTrial, aCova = optimize.curve_fit(f_decay,xMeas,yTrial,vGuess)
except:
dumdum=1
continue # This moves us to the next loop without stacking.
#here follows the syntax for stacking the trial onto the running sample:
if np.size(aFitSameTimes) < 1:
aFitSameTimes=np.copy(vTrial)
else:
aFitSameTimes = np.vstack(( aFitSameTimes, vTrial ))
np.shape(aFitSameTimes)
```
Let's look at the spread in recovered values as we did before:
```
P.hist(aFitSameTimes[:,1],bins=50, alpha=0.5,color='r')
P.xlabel('Power-law index b')
P.ylabel('N(b)')
P.title('Same measurement times each trial')
print(np.median(aFitSameTimes[:,1]))
print(np.std(aFitSameTimes[:,1]))
```
Let's look at those parameters plotted against each other as we did before.
```
P.scatter(aFitSameTimes[:,0], aFitSameTimes[:,1],c='r', s=36, edgecolor='k', alpha=0.5)
P.xlabel('Normalization of power-law a')
P.ylabel('Power-law index b')
P.title('Same measurement times each trial')
# Set the same axis-ranges as above for visual comparison
#P.xlim(1.30, 1.70)
#P.ylim(-1.4,-0.6)
```
As we might expect, the measurements are still correlated, but the distribution is tighter. Let's take a look at the two sets of parameters on top of each other:
```
# the alpha values below are transparency values for plots.
P.scatter(aFitSameTimes[:,0], aFitSameTimes[:,1],c='r', s=9, edgecolor='none', zorder=25, alpha=0.5)
P.scatter(aFitPars[:,0], aFitPars[:,1],c='b', s=9, edgecolor='none', zorder=5)
P.xlabel('Normalization of power-law a')
P.ylabel('Power-law index b')
P.title('Random observing times (blue) and frozen times (red)')
```
Or we can generate our contours and compare the two sets visually:
```
xS,yS=aFitSameTimes.T
kS = kde.gaussian_kde(aFitSameTimes.T)
nbins=50
xiS, yiS = np.mgrid[xS.min():xS.max():nbins*1j, yS.min():yS.max():nbins*1j]
ziS = kS(np.vstack([xiS.flatten(), yiS.flatten()]))
# Now let's plot this over the previous (xi,yi,zi) case:
P.contour(xi,yi,zi.reshape(xi.shape),colors='b',linewidths=2, zorder=5, alpha=0.75, linestyles='dashed')
P.contour(xiS, yiS, ziS.reshape(xiS.shape), colors='r', zorder=15, alpha=1.0)
P.xlim(1.0,2.0)
P.ylim(-1.5,-0.50)
P.title('Random-times (blue dashed) and constant-times (red) compared', fontsize=10)
P.xlabel('Normalization of power-law a')
P.ylabel('Power-law index b')
```
That these two sets are not terribly different (but not identical!) indicates that the particular experiment (xMeas, yMeas at the beginning) didn't happen to pick a hugely fortuitous set of observing "times" (i.e. x-values), although it looks like the values that were picked were generally a bit better than any random set of six observing times.
### Discussion
So, which value for the spread of the power-law index "b" should we use in our hypothetical publication?
That depends on which of the scenarios simulated you believe to be the most honest representation of the differences between predicted and actual data one would encounter in real life. What you CANNOT do is just pick the scenario that gives the smallest range just because you want to report the smaller error!
It's usually best to be as upfront as possible about what your errors mean. This is where in your paper you would report not just the range, but also under what circumstances this was estimated. If you assumed your measurement times were constant when making the monte carlo, then say so - and you should also justify in the paper why you made this assumption. In this simple case above, the differences between assuming any set of random times (blue) and the exact times (red) is not very large, but you still want the reader to understand as much as possible about your data.
In most cases - even the simple toy problem here - you should really go one better, and give the reader not just the range of values consistent with your data, but the full likelihood function of the fitted parameters. This is usually hard to parameterise but easy to show - just show the graph of the recovered parameters (any of the example graphs above would be good)!
Notice also that in the case of the toy problem here, even a two-parameter model with a very simple form has led to real covariance between the fitted parameters under our monte carlo experiments. Under this situation, what would the 1-sigma variation in one of the parameters mean?
In a situation like this, you can easily report not just the standard deviation (or its square, the variance) but instead the *Covariance* of the parameters. The diagonal elements are the variance of each parameter, while the off-diagonals indicate the covariance between each pair of parameters. In python, this is easy:
```
aCovFit = np.cov(np.transpose(aFitSameTimes))
```
Looking at the resulting covariance matrix, we see that - like our graphs above suggest - the two parameters do indeed vary together:
```
print(aCovFit)
print(np.std(aFitSameTimes[:,0]))
print(np.sqrt(aCovFit[0,0]))
```
The square root of the diagonal element closely tracks the standard deviation of the fitted parameter "a"; the small residual difference is just normalization (np.cov divides by N-1 while np.std defaults to dividing by N). The covariance between the parameters lives in the off-diagonal terms. We can get a little bit more insight by computing the normalized covariance (the correlation). We see that the off-diagonal terms are about 61 percent of the diagonal terms (expressed as variance not standard deviation).
```
np.corrcoef(np.transpose(aFitSameTimes))
```
If you're more familiar with the standard deviation rather than the variance, you might take the square root to get a visual handle on how large this correlation is, remembering to use the absolute value in case of negative off-diagonal terms (which we'd get in the case of variables anti-correlated with each other). I have not seen this done much, but you might find it more intuitive. Your mileage may vary.
```
np.sqrt(np.abs(np.corrcoef(np.transpose(aFitSameTimes))))
```
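One more aside on the numbers above: np.std and np.cov use different default normalizations (ddof=0, i.e. divide by N, versus ddof=1, divide by N-1), so a tiny mismatch between the covariance diagonal and the squared standard deviation is expected even for identical input data. A quick sketch with synthetic numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=1000)

# np.std defaults to ddof=0; np.cov defaults to ddof=1
print(np.std(a))                    # divides by N
print(np.sqrt(float(np.cov(a))))    # divides by N-1, so slightly larger
print(np.isclose(np.std(a, ddof=1), np.sqrt(float(np.cov(a)))))
```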
The above has been a quick introduction into what monte carlo is, how it works, and how to do it in python.
For more on the ways to report the ranges when two parameters vary against each other, take a look at any standard text on data analysis in the sciences. Bevington & Robson has a good discussion at about the right level, Numerical Recipes also has some interesting advice.
# A more interesting example: powerlaw plus constant background
Now we move on to a more "realistic" example: there is a power-law decay above some unknown constant background, which we include in our model. As we will see, this leads to significant deviations from the bivariate gaussian-like posterior distributions we saw above, because with only a few datapoints it is statistically difficult to determine how the offset level should be shared between the background and the power-law normalization.
(Note that we could flip this around and say that if we KNOW that the two-parameter model does fit our data, then throwing in a third parameter significantly complicates the range of consistent values.)
We begin as before, this time with the background term included, and assuming our experimenter has been able to take just a few more datapoints. We'll define our slightly more complex function and use the same function to generate the "experimental" data, the "truth" values and the monte-carlo simulations.
```
def f_expt(x,a,b,c):
return a*x**(b)+c
nData=14
sError=0.1
xMeas=np.random.uniform(0.5,5.0,size=nData)
yTrue=f_expt(xMeas,1.5,-1.0,0.5)
yMeas = yTrue + np.random.normal(scale=sError, size=np.size(yTrue))
P.errorbar(xMeas,yMeas,yerr=sError,lw=0,elinewidth=1,ecolor='b', fmt='ko',markersize=2)
# Some syntax to make the plot a bit clearer
P.xlim(0.4,5.0)
P.ylim(0.0,3.0)
P.title('Experimental plus background')
P.xlabel('Time')
P.ylabel('Measured value')
# Plot the total model and the constant background
xFine=np.linspace(0.4,5.0,100)
P.plot([np.min(xFine),np.max(xFine)], [0.5,0.5],'r-.')
P.plot(xFine,f_expt(xFine,1.5,-1.0,0.5), 'r--')
```
As before, we'll fit our new model to this data with background. We'll assume an optimistic guess with lower than true background:
```
vGuess=[2.0,-2.0,0.2]
vPars, aCova = optimize.curve_fit(f_expt, xMeas, yMeas, vGuess)
print(vPars)
```
This time the parameters are quite a bit different than input: the "truth" values were [1.5, -1.0, 0.5].
But is this really so "bad?" How do we know? Let's view this graphically, plotting the fitted parameters (green) over the generated parameters (red dashed):
```
P.errorbar(xMeas,yMeas,yerr=sError,lw=0,elinewidth=1,ecolor='b', fmt='ko',markersize=2)
P.plot(xFine,f_expt(xFine,1.5,-1.0,0.5), 'r--')
P.plot(xFine,f_expt(xFine,*vPars), 'g-')
# Same labels as before:
P.xlim(0.4,5.0)
P.ylim(0.0,3.0)
P.title('Power law plus background')
P.xlabel('Time')
P.ylabel('Measured value')
```
We see that, even though the fitted parameters are different from the generated parameters by quite a bit more than in the two-parameter case, the two sets of best-fit parameters produce quite similar curves. This is an indication that our experimental setup might not be sufficient to distinguish the parameters of our model.
Pressing on with this, what range of parameters are consistent with the data we do have? Let's use Monte Carlo to find out. Once again, we initialise our set of fit parameters:
```
nTrials = 4000
aFitExpt = np.array([])
for iTrial in range(nTrials):
xTrial = np.random.uniform(0.5,5.0,size=np.size(xMeas))
yGen = f_expt(xTrial,1.5,-1.0,0.5)
yTrial = yGen + np.random.normal(scale=sError,size=np.size(yGen))
# We use a try/except clause to catch pathologies
try:
vTrial, aCova = optimize.curve_fit(f_expt,xTrial,yTrial,vGuess)
except:
dumdum=1
continue # This moves us to the next loop without stacking.
#here follows the syntax for stacking the trial onto the running sample:
if np.size(aFitExpt) < 1:
aFitExpt=np.copy(vTrial)
else:
aFitExpt = np.vstack(( aFitExpt, vTrial ))
```
Since our model is now more complex given the data, let's see what fraction of trials were successful:
```
np.shape(aFitExpt)
```
As we might have expected, a small fraction (about 1-2 percent) of the trials failed. The "try/except" clause above handled this gracefully. So - let's take a look at the distribution of parameters under these simulations:
```
P.scatter(aFitExpt[:,0], aFitExpt[:,1], c=aFitExpt[:,2], alpha=0.5, s=9, edgecolor='none')
P.xlabel('Normalization of power-law a')
P.ylabel('Power-law index b')
P.title('Three-parameter model')
P.colorbar(label='Background component c')
```
We see that the distribution of fitted parameters is completely different from the two-parameter case above. Let's zoom in:
```
P.scatter(aFitExpt[:,0], aFitExpt[:,1], alpha=0.5, s=9, edgecolor='none')
P.xlabel('Normalization of power-law a')
P.ylabel('Power-law index b')
P.title('Three-parameter model - zoomed in')
P.xlim(0,5)
P.ylim(-2,0)
```
Let's see what range of parameters comes out of those simulations. Note a couple of things:
(i) two of the histograms below have a log10 scale due to the very long tails of the distributions;
(ii) We have set limits on those histograms. This is a little dangerous in practice - we don't want to throw away samples when computing the range - but those limits were set after examining the full range (we also don't want to include the really pathological cases like the very bottom-right datapoint in the scatterplot two figures up). So:
```
P.hist(aFitExpt[:,0],bins=250,alpha=0.5,range=[-10,50], log=True)
P.xlabel('Power-law normalization a')
P.ylabel('N(a)')
P.hist(aFitExpt[:,1],bins=150,alpha=0.5)
P.xlabel('Power-law index b')
P.ylabel('N(b)')
P.hist(aFitExpt[:,2],bins=250,alpha=0.5,log=True, range=[-10,3])
P.xlabel('Constant background c')
P.ylabel('N(c)')
```
Compared to the two-parameter case, the range of allowed power-law indices is considerable!
What about the co-variance of the background and the power-law normalization?
```
P.scatter(aFitExpt[:,0], aFitExpt[:,2], alpha=0.5, s=9, edgecolor='none')
P.xlabel('Normalization of power-law a')
P.ylabel('Constant-background c')
P.title('Three-parameter model - zoomed in')
P.xlim(0,6)
P.ylim(-5,2)
```
What can we conclude with behavior like this? At least three things are going on here. Under random time-sampling within the (0.5-5.0) range:
[1]. The fitter we've used here, curve_fit, does not always do a good job fitting the 3-parameter model to data like these. Ideally we should be able to fold in other information we might have (e.g. 1/x^3 or steeper might be unphysical). There are (simple!) methods for including these outside constraints, but they're beyond the scope of this HOWTO.
[2]. Even though we have 14 datapoints and 3 model-parameters (so formally 11 degrees of freedom), the range of the data is not sufficient to distinguish the constant background from the power-law normalisation. Our model is too complicated for the data.
[3]. Notice: even with gaussian errors, the distributions of posterior values for the best-fit parameters are not nice well-behaved gaussians!
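On point [1]: one simple way of folding in outside constraints is the `bounds` argument of `scipy.optimize.curve_fit`, which switches the fitter to a bounded least-squares solver. A toy sketch (the model, data, and parameter values here are stand-ins, not the notebook's `f_expt` and measurements):

```python
import numpy as np
from scipy import optimize

def f_model(x, a, b, c):
    """Toy power-law plus constant background."""
    return a * x**b + c

rng = np.random.default_rng(42)
x = rng.uniform(0.5, 5.0, size=30)
y = f_model(x, 1.5, -1.0, 0.5) + rng.normal(scale=0.1, size=x.size)

# Unbounded fit versus a fit with the index constrained to the
# physically plausible range -3 < b < 0 (and a >= 0):
v_free, _ = optimize.curve_fit(f_model, x, y, p0=[2.0, -2.0, 0.2], maxfev=5000)
v_bound, _ = optimize.curve_fit(
    f_model, x, y, p0=[2.0, -2.0, 0.2],
    bounds=([0.0, -3.0, -np.inf], [np.inf, 0.0, np.inf]),
)
print(v_free, v_bound)
```

With bounds supplied, the pathological fits (like the bottom-right point in the scatterplot above) simply cannot occur, at the price of piling probability onto the boundary.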
### Making progress in sub-optimal situations
Let's try a restricted set of simulations, as before: assuming the experimenter is able to spread their measurements over time (thus avoiding the bunching-up of measurements), what happens then?
```
aStandard=np.array([])
# suppose we believe the "true" values really are 1.5, -1.0, 0.5
yTrue=f_expt(xMeas,1.5,-1.0,0.5)
for iTrial in range(nTrials):
    # Note that the measurement errors are the only source of variation here!
    yTrial = yTrue + np.random.normal(scale=sError,size=np.size(yTrue))
    # We use a try/except clause to catch pathologies
    try:
        vTrial, aCova = optimize.curve_fit(f_expt,xMeas,yTrial,vGuess)
    except Exception:
        continue  # This moves us to the next trial without stacking.
    # Here follows the syntax for stacking the trial onto the running sample:
    if np.size(aStandard) < 1:
        aStandard=np.copy(vTrial)
    else:
        aStandard = np.vstack(( aStandard, vTrial ))
np.shape(aStandard)
P.scatter(aStandard[:,0], aStandard[:,1], alpha=0.5, s=9, edgecolor='none')
P.xlabel('Normalization of power-law a')
P.ylabel('Power-law index b')
P.title('Three-parameter model, measurement times frozen')
np.shape(aStandard[:,0:2])
```
We'll apply the same incantations in matplotlib to see what this distribution now looks like:
```
from scipy.stats import kde
xS,yS=aStandard[:,0:2].T
kS = kde.gaussian_kde(aStandard[:,0:2].T)
nbins=250
xiS, yiS = np.mgrid[xS.min():xS.max():nbins*1j, yS.min():yS.max():nbins*1j]
ziS = kS(np.vstack([xiS.flatten(), yiS.flatten()]))
P.pcolormesh(xiS, yiS, ziS.reshape(xiS.shape), zorder=3)
P.colorbar()
# Show the datapoints on top of this, and also the contours. "zorder" sets the vertical order in the plot.
P.scatter(aStandard[:,0], aStandard[:,1], c='w', s=2, zorder=15, edgecolor='none',alpha=0.5)
P.contour(xiS,yiS,ziS.reshape(xiS.shape), zorder=25, colors='0.25')
P.xlabel('Power-law normalization a')
P.ylabel('Power-law index b')
P.xlim(0.8,4)
```
Again - complex, but more well-behaved. This time the parameter-values and ranges are the following:
```
print( "Median of best-fit parameters:", np.median(aStandard, axis=0) )
print("Covariance matrix:")
print( np.cov(np.transpose(aStandard)) )
print( "1-parameter deviations:", np.std(aStandard, axis=0) )
```
Interestingly, the median values returned *when we sample at the times we did* are similar to the truth values we used to simulate the data, but the scatters are difficult to interpret when stated as simple standard deviations!
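For skewed samples like these, asymmetric percentile ranges are much easier to interpret than a standard deviation. A self-contained sketch using `np.percentile` (the synthetic skewed sample here stands in for a column of `aStandard`):

```python
import numpy as np

rng = np.random.default_rng(0)
# A right-skewed sample standing in for one column of aStandard
samples = rng.lognormal(mean=0.4, sigma=0.5, size=5000)

# Median and the 16th/84th percentiles (a "1-sigma"-equivalent range)
lo, med, hi = np.percentile(samples, [16, 50, 84])
print(f"best-fit {med:.3f} -{med - lo:.3f} +{hi - med:.3f}")

# A single standard deviation hides the asymmetry entirely:
print(f"std = {np.std(samples):.3f}")
```

We will do essentially this, by hand, in the limit-setting section further down.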
```
P.hist(aStandard[:,0],bins=250,alpha=0.5)
P.title('Power-law normalization')
P.xlabel('Power-law normalization a')
P.hist(aStandard[:,1],bins=50,alpha=0.5)
P.title('Power-law index b')
P.xlabel('Power-law index b')
P.hist(aStandard[:,2],bins=150,alpha=0.5)
P.title('Background c')
P.xlabel('Background c')
```
### Discussion - signal plus unknown background
While the frozen-time example above produces more "well-behaved" results, does it do a better job of representing the parameter-error one would actually encounter?
Again, that depends on the situation. In some fraction of trials, the uniform random number generator used to draw the fake measurement times will produce all 14 measurements at one end of the time interval, in which case the relationship between the "truth" value and the best-fit could change. It might be that your hypothetical experimenter would never let this happen. Or it might be quite realistic: if, say, you're a ground-based astronomer, weather can very much cause your observations to bunch up in time (if the only gap in the clouds were near the beginning of the inverse-t decay here).
My view is that it's up to the experimenter to be careful to communicate what they're actually doing, and give the reader as much information as possible to enable them to understand what was actually done, and thus how to interpret the results.
In the general case, it's usually better to err on the side of caution and allow "things to go wrong" in the Monte Carlo trials. In this case we would conclude that perhaps we got lucky in our experimental data, and any given example of an experiment with only 14 datapoints from time 0.5-5.0 could return any value within the wider range we found above. What to do?
### Using Monte-Carlo to design better experiments
We can use our simulator to find out what happens if we had just a bit more data, or a long enough time-baseline to actually see the background separate from the power-law - this can be crucial when designing future experiments to really tie down the parameter-values we want.
In our example, let's suppose we were able to take many more points (35 compared to 14) over a slightly longer time-baseline (interval 0.5-7 compared to 0.5-5):
```
xExtend = np.random.uniform(0.5,7.0,size=35)
yGenera = f_expt(xExtend,1.5,-1.0,0.5)
yMeasur = yGenera + np.random.normal(scale=sError,size=np.size(yGenera))
P.errorbar(xExtend,yMeasur,yerr=sError,lw=0,elinewidth=1,ecolor='b', fmt='ko',markersize=2)
vExten, aExten = optimize.curve_fit(f_expt, xExtend, yMeasur, [2.0,-2.0,0.2])
print(vExten)
```
Let's see how our best-fit parameters compare to the data and to the "truth" parameters:
```
P.errorbar(xExtend,yMeasur,yerr=sError,lw=0,elinewidth=1,ecolor='b', fmt='ko',markersize=2)
P.xlabel('X')
P.ylabel('Hypothetical data')
xFine=np.linspace(0.5,7.0,100)
P.plot(xFine,f_expt(xFine,1.5,-1.0,0.5), 'r--')
P.plot(xFine,f_expt(xFine,*vExten), 'g-')
# Show the "truth" background level for comparison with our planned experimental data
P.plot([0.0,7.0],[0.5,0.5],'r-.')
```
Now with our better dataset, let's see what happens when we try to recover parameter-ranges on this, without any assumptions on the specific times of the measurements:
```
aExtend=np.array([])
for iTrial in range(nTrials):
    xTrial = np.random.uniform(0.5,5.0,size=np.size(xExtend))
    yGen = f_expt(xTrial,1.61,-0.97,0.42)
    yTrial = yGen + np.random.normal(scale=sError,size=np.size(yGen))
    # We use a try/except clause to catch pathologies
    try:
        vTrial, aCova = optimize.curve_fit(f_expt,xTrial,yTrial,vGuess)
    except Exception:
        continue  # This moves us to the next trial without stacking.
    # Here follows the syntax for stacking the trial onto the running sample:
    if np.size(aExtend) < 1:
        aExtend=np.copy(vTrial)
    else:
        aExtend = np.vstack(( aExtend, vTrial ))
P.scatter(aExtend[:,0],aExtend[:,1], alpha=0.5, s=9, edgecolor='none')
P.xlabel('Normalization of power-law a')
P.ylabel('Power-law index b')
P.title('Three-parameter model, better data, no assumption on measurement times')
xS,yS=aExtend[:,0:2].T
kS = kde.gaussian_kde(aExtend[:,0:2].T)
nbins=150
xiS, yiS = np.mgrid[xS.min():xS.max():nbins*1j, yS.min():yS.max():nbins*1j]
ziS = kS(np.vstack([xiS.flatten(), yiS.flatten()]))
P.pcolormesh(xiS, yiS, ziS.reshape(xiS.shape), zorder=3)
P.colorbar()
# Show the datapoints on top of this, and also the contours. "zorder" sets the vertical order in the plot.
P.scatter(aExtend[:,0], aExtend[:,1], c='w', s=2, zorder=15, edgecolor='none',alpha=0.75)
P.contour(xiS,yiS,ziS.reshape(xiS.shape), zorder=25, colors='0.25')
P.xlim(1.0,4.0)
#P.ylim(-1.6,-0.45)
P.xlabel('Power-law normalization a')
P.ylabel('Power-law index b')
xS,yS=aExtend[:,1:3].T
kS = kde.gaussian_kde(aExtend[:,1:3].T)
nbins=150
xiS, yiS = np.mgrid[xS.min():xS.max():nbins*1j, yS.min():yS.max():nbins*1j]
ziS = kS(np.vstack([xiS.flatten(), yiS.flatten()]))
P.pcolormesh(xiS, yiS, ziS.reshape(xiS.shape), zorder=3)
P.colorbar()
# Show the datapoints on top of this, and also the contours. "zorder" sets the vertical order in the plot.
P.scatter(aExtend[:,1], aExtend[:,2], c='w', s=2, zorder=15, edgecolor='none',alpha=0.75)
P.contour(xiS,yiS,ziS.reshape(xiS.shape), zorder=25, colors='0.25')
#P.xlim(1.21,2.5)
P.ylim(-1.0,1.0)
P.xlabel('Power-law index b')
P.ylabel('Constant background c')
```
This is already much better-behaved than both previous versions.
This illustrates another use of monte carlo - to find out how to make our experiment sufficient to set the constraints we want to set.
### Actually reporting the range of returned parameters
Finishing off, let's decide on the range of parameter values to report. Since all three parameters are beyond the experimenter's control, it makes sense to report the range of each one at a time, while all three are varying. This is just the projection of our cloud of points onto the parameter-axis we want.
(Technique note: quite a lot of the code below is repeated. In practice, you would write a method to do these plots and then just call the method each time you wanted to use it.)
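Such a helper might look like the following sketch (the name `report_limits` and its signature are my own; it packages the same median-splitting limit logic used in the cells below):

```python
import numpy as np
import matplotlib.pyplot as plt

def report_limits(samples, s_lim=0.68, label='parameter', bins=200, ax=None):
    """Histogram one parameter's Monte Carlo samples, mark the median and
    the two-sided s_lim limits (computed separately over the points below
    and above the median), and return (median, lower_error, upper_error)."""
    samples = np.asarray(samples)
    med = np.median(samples)
    vSortLo = np.sort(samples[samples < med])
    vSortHi = np.sort(samples[samples >= med])
    limLo = vSortLo[int((1.0 - s_lim) * vSortLo.size)]
    limHi = vSortHi[int(s_lim * vSortHi.size)]
    if ax is None:
        ax = plt.gca()
    ax.hist(samples, bins=bins, alpha=0.5)
    ax.set_xlabel(label)
    for quant, ls in zip([med, limLo, limHi], ['-', '--', '--']):
        ax.axvline(quant, color='k', ls=ls, lw=1)
    return med, med - limLo, limHi - med
```

Each cell below would then reduce to a single call, e.g. `report_limits(aExtend[:,0], 0.68, 'Power-law normalization a')`.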
We'll also calculate the two-sided limits from these distributions. We'll start with the 68% limits ("1-sigma") for our hypothetical "Extended" dataset:
```
nBins=200
P.hist(aExtend[:,0],bins=nBins,alpha=0.5, color='g')
P.xlim(1,3)
P.xlabel('Power-law normalization a')
# We use the median of the distribution as a decent estimate for
# our best-fit value. Let's choose a "1-sigma" limit, i.e. the limits
# that enclose 68% of the points between the median and the upper and lower
# bounds:
Med = np.median(aExtend[:,0])
gHi = np.where(aExtend[:,0] >= np.median(aExtend[:,0]))[0]
gLo = np.where(aExtend[:,0] < np.median(aExtend[:,0]))[0]
# This trick does the limit-setting - try to see how it works:
sLim = 0.68
vSortLo=np.sort(aExtend[gLo,0])
vSortHi=np.sort(aExtend[gHi,0])
NormLo = vSortLo[int((1.0-sLim)*np.size(vSortLo))]
NormHi = vSortHi[int(sLim *np.size(vSortHi))]
## Let's take a look - how do those limits look on the histogram?
for quant, ls in zip([Med, NormLo, NormHi],['-', '--', '--']):
    P.axvline(quant, color='k', ls=ls, lw=1)
# Print the limits:
print("INFO: Lower and upper 68 percent ranges are: %.3f %.3f" % (Med-NormLo, NormHi-Med) )
nBins=50
P.hist(aExtend[:,1],bins=nBins,alpha=0.5)
P.xlabel('Power-law index b')
# We use the median of the distribution as a decent estimate for
# our best-fit value. Let's choose a "1-sigma" limit, i.e. the limits
# that enclose 68% of the points between the median and the upper and lower
# bounds:
Med = np.median(aExtend[:,1])
gHi = np.where(aExtend[:,1] >= np.median(aExtend[:,1]))[0]
gLo = np.where(aExtend[:,1] < np.median(aExtend[:,1]))[0]
# This trick does the limit-setting - try to see how it works:
vSortLo=np.sort(aExtend[gLo,1])
vSortHi=np.sort(aExtend[gHi,1])
sLim = 0.68
NormLo = vSortLo[int((1.0-sLim)*np.size(vSortLo))]
NormHi = vSortHi[int(sLim *np.size(vSortHi))]
## Let's take a look - how do those limits look on the histogram?
for quant, ls in zip([Med, NormLo, NormHi],['-', '--', '--']):
    P.axvline(quant, color='k', ls=ls, lw=1)
# Print the limits:
print("INFO: Lower and upper %i percent limits are: %.3f %.3f" % (sLim*100, Med-NormLo, NormHi-Med) )
```
Just for interest, let's try a wider limit on the power-law normalization; how asymmetric does the distribution become once we get farther from the median?
```
sLim=0.99
nBins=200
P.hist(aExtend[:,0],bins=nBins,alpha=0.5, color='g')
P.xlim(1,3)
P.xlabel('Power-law normalization a')
# Let's find the values at the lower- and upper- "sLim" bounds:
Med = np.median(aExtend[:,0])
gHi = np.where(aExtend[:,0] >= np.median(aExtend[:,0]))[0]
gLo = np.where(aExtend[:,0] < np.median(aExtend[:,0]))[0]
vSortLo=np.sort(aExtend[gLo,0])
vSortHi=np.sort(aExtend[gHi,0])
NormLo = vSortLo[int((1.0-sLim)*np.size(vSortLo))]
NormHi = vSortHi[int(sLim *np.size(vSortHi))]
## Let's take a look - how do those limits look on the histogram?
for quant, ls in zip([Med, NormLo, NormHi],['-', '--', '--']):
    P.axvline(quant, color='k', ls=ls, lw=1)
# Do some annotations on the plot with these limits:
P.annotate('%i percent limits' % (sLim*100), (0.6,0.9), xycoords='axes fraction')
P.title('Parameter: <%.3f> -%.3f +%.3f' % (Med, Med-NormLo, NormHi-Med))
#Print the limits:
print("INFO: Lower and upper %i percent ranges are: %.3f %.3f" % (sLim*100,Med-NormLo, NormHi-Med) )
```
We see the not-so-hidden dangers of reporting and interpreting just a symmetric 1-sigma limit. Even though our measurement errors were gaussian in all cases - and known - the posterior distribution of recovered parameters is (i) not gaussian, (ii) asymmetric, and (iii) increasingly asymmetric the more extreme we make our confidence level (e.g. 99% versus 68%).
If you have a single 68% range reported (which would be about 0.131), say, how does the likelihood of measuring a=2.2 under this model compare to the actual likelihood of getting this value? Beware of claiming signals only 2 or 3 sigma from the median without first checking the actual distribution of recovered parameters!
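To make the danger concrete, here is a self-contained comparison of the gaussian-intuition tail probability with the empirical one on a skewed sample (synthetic data standing in for the notebook's `aExtend`):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# A skewed "posterior" sample
samples = rng.lognormal(mean=0.5, sigma=0.4, size=20000)

med = np.median(samples)
sigma_eff = np.std(samples)      # the single symmetric "1-sigma" number
value = med + 2.0 * sigma_eff    # a claimed "2-sigma" signal

gauss_tail = stats.norm.sf(2.0)         # what gaussian intuition predicts
true_tail = np.mean(samples >= value)   # what the samples actually say
print(f"gaussian: {gauss_tail:.4f}  empirical: {true_tail:.4f}")
```

On this sample, the empirical tail probability beyond "2 sigma" is a few times larger than the gaussian prediction - exactly the trap described above.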
Just for completeness, let's try this on our 14-point data from above, whose monte carlo output we put into aFitExpt earlier. We'll use a log-scale on the histogram to show the long tail of the normalization constant:
```
sLim=0.99
nBins=400
P.hist(aFitExpt[:,0],bins=nBins,alpha=0.5, color='g',range=[0,50], log=True)
P.xlim(0,10)
P.xlabel('Power-law normalization a')
# Let's find the values at the lower- and upper- "sLim" bounds:
Med = np.median(aFitExpt[:,0])
gHi = np.where(aFitExpt[:,0] >= Med)[0]
gLo = np.where(aFitExpt[:,0] < Med)[0]
vSortLo=np.sort(aFitExpt[gLo,0])
vSortHi=np.sort(aFitExpt[gHi,0])
NormLo = vSortLo[int((1.0-sLim)*np.size(vSortLo))]
NormHi = vSortHi[int(sLim *np.size(vSortHi))]
## Let's take a look - how do those limits look on the histogram?
P.axvline(Med, color='k', ls='-', lw=1)
P.axvline(NormLo, color='k', ls='--', lw=1)
P.axvline(NormHi, color='k', ls='--', lw=1)
#P.plot([Med, Med],[1,1000.0], 'k-', lw=2)
#P.plot([NormLo, NormLo],[1,1000.0], 'k--', lw=2)
#P.plot([NormHi, NormHi],[1,1000.0], 'k--', lw=2)
# Do some annotations on the plot with these limits:
P.annotate('%i percent limits' % (sLim*100), (0.6,0.9), xycoords='axes fraction')
P.title('Parameter: <%.3f> - %.3f +%.3f' % (Med, Med-NormLo, NormHi-Med))
#Print the limits:
print("INFO: Lower and upper %i percent ranges are: %.3f %.3f" % (sLim*100,Med-NormLo, NormHi-Med) )
```
Within the context of "designing a better experiment," notice how much the allowed parameter range narrows when more data over a wider time-baseline are used, compared with the smaller dataset just plotted above. Monte Carlo allows us to quantify the improvement.
Something else is worth noticing here: our simulations do not cover the long tail towards high positive "a" very well.
If you want to explore confidence limits in the >90% or so regime, you are likely to need a larger number of simulations just to get good statistics towards the corners of the distribution. Whether you want to do this depends on your use-case and how important the wings of the distribution are likely to be to your hypothetical reader who is trying to reproduce your results.
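As a rough guide, the relative uncertainty on an estimated tail fraction p from N trials is about sqrt((1-p)/(Np)), so the required number of trials can be sized in advance. A back-of-envelope sketch (my own, not from the notebook):

```python
import numpy as np

def n_trials_needed(p_tail, rel_err):
    """Trials needed so that the binomial relative error on an estimated
    tail fraction p_tail is about rel_err: sqrt((1-p)/(N*p)) = rel_err."""
    return int(np.ceil((1.0 - p_tail) / (p_tail * rel_err**2)))

# 68% two-sided limits leave ~16% in each tail; 99% limits leave ~0.5%:
for p in [0.16, 0.05, 0.005]:
    print(p, n_trials_needed(p, rel_err=0.1))
```

Pinning down a 99% limit to ~10% relative precision thus takes of order twenty thousand trials, roughly forty times more than the 68% limit needs.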
```
nBins=50
sLim=0.99
P.hist(aExtend[:,1],bins=nBins,alpha=0.5)
P.xlabel('Power-law index b')
P.ylabel('N(b)')
# We use the median of the distribution as a decent estimate for
# our best-fit value. Let's choose a "1-sigma" limit, i.e. the limits
# that enclose 68% of the points between the median and the upper and lower
# bounds:
Med = np.median(aExtend[:,1])
gHi = np.where(aExtend[:,1] >= np.median(aExtend[:,1]))[0]
gLo = np.where(aExtend[:,1] < np.median(aExtend[:,1]))[0]
# This trick does the limit-setting - try to see how it works:
vSortLo=np.sort(aExtend[gLo,1])
vSortHi=np.sort(aExtend[gHi,1])
NormLo = vSortLo[int((1.0-sLim)*np.size(vSortLo))]
NormHi = vSortHi[int(sLim *np.size(vSortHi))]
## Let's take a look - how do those limits look on the histogram?
for quant, ls in zip([Med, NormLo, NormHi],['-', '--', '--']):
    P.axvline(quant, color='k', ls=ls, lw=1)
#P.plot([Med, Med],[1,500.0], 'k-', lw=2)
#P.plot([NormLo, NormLo],[1,500.0], 'k--', lw=2)
#P.plot([NormHi, NormHi],[1,500.0], 'k--', lw=2)
# Print the limits:
print("INFO: Lower and upper %i percent ranges are: %.3f %.3f" % (sLim*100, Med-NormLo, NormHi-Med) )
```
# Fine-tuning the Marian-NMT en-ru model
## Installing dependencies
```
!pip install datasets transformers[sentencepiece]
!pip install sacrebleu
!pip install accelerate
!pip install openpyxl
!apt install git-lfs
!pip install matplotlib
# clone the repository; it is needed for preprocessing
!git clone https://github.com/eleldar/Translator.git
# download the base model
!git clone https://huggingface.co/Helsinki-NLP/opus-mt-en-ru && ls opus-mt-en-ru
```
## Configuring git
```
!git config --global user.email "eleldar@mail.ru"
!git config --global user.name "eleldar"
!git config --global credential.helper store
```
## Importing dependencies
```
import os
import torch
import numpy as np
import pandas as pd
from tqdm.auto import tqdm
from torch.utils.data import DataLoader
from time import gmtime, strftime
from huggingface_hub import Repository
from accelerate import Accelerator, notebook_launcher
import datasets
from datasets import (
Dataset, DatasetDict, load_dataset, load_metric,
concatenate_datasets, interleave_datasets
)
import transformers
from transformers import (
AdamW, AutoTokenizer, AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq,
get_constant_schedule, get_constant_schedule_with_warmup, get_cosine_schedule_with_warmup,
get_linear_schedule_with_warmup, get_cosine_with_hard_restarts_schedule_with_warmup,
get_polynomial_decay_schedule_with_warmup
)
```
## Loading the data
### General corpus of long sentences
```
normal_url = 'https://github.com/eleldar/Translator/blob/master/test_dataset/flores101_dataset/101_languages.xlsx?raw=true'
normal_df = pd.read_excel(normal_url)[["eng", "rus"]].rename(columns={"eng": "en", "rus": "ru"})
normal_df
```
### General corpus of short sentences
```
short_url = 'https://github.com/eleldar/Translator/blob/master/test_dataset/normal.xlsx?raw=true'
short_df = pd.read_excel(short_url).rename(columns={"en_sent": "en", "ru_sent": "ru"})
short_df
```
### Domain-specific corpus
```
subject_url = 'https://github.com/eleldar/Translator/blob/master/test_dataset/corrected_vocab.xlsx?raw=true'
subject_df = pd.read_excel(subject_url).drop(columns=['en_keys', 'ru_keys']).rename(columns={"en_sent": "en", "ru_sent": "ru"})
subject_df
```
### Test corpus (from the model)
```
test_url = 'https://github.com/eleldar/Translator/blob/master/test_dataset/test_opus_en-ru_dataset.xlsx?raw=true'
test_df = pd.read_excel(test_url).drop(columns=['Unnamed: 0'])
test_df
```
## Data preprocessing
> Unicode character replacement is required, since the built-in tokenizer does not perform it
```
os.getcwd()
# switch to the directory containing the importable modules
os.chdir('/mnt/home/Translator/OpenAPI/')
os.getcwd()
!ls
# import the preprocessing helpers
from api.tools.preprocess import get_commands, preprocess_text
# dictionary of preprocessing commands, built from the files named by translation direction and the checkpoints
checkpoints = {'en-ru', 'ar-ru', 'ru-ar', 'ru-en', 'en-ar', 'ar-en'}
commands = get_commands(checkpoints)
list(commands['en-ru'])[:5], list(commands['ru-en'])[:5]
# replace special characters
# normalisation = lambda text: 1 # preprocess_text(commands['en-ru'], text['en_sent']) if direct in commands else text['en_sent']
def normalisation(text):
    text['en'] = preprocess_text(commands['en-ru'], text['en'])
    text['ru'] = preprocess_text(commands['ru-en'], text['ru'])
    return text
# restore the working directory
os.chdir('/mnt/home')
```
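For readers without the cloned repository: the substitutions `preprocess_text` performs are presumably of this kind. The replacement table below is purely illustrative and is not the project's actual command dictionary:

```python
# Illustrative stand-in for preprocess_text: replace typographic unicode
# characters that would otherwise reach the tokenizer unchanged.
REPLACEMENTS = {
    '\u201c': '"', '\u201d': '"',   # curly double quotes
    '\u2018': "'", '\u2019': "'",   # curly single quotes
    '\u2014': '-', '\u2013': '-',   # em and en dashes
    '\u00a0': ' ',                  # non-breaking space
}

def normalise_unicode(text):
    for src, dst in REPLACEMENTS.items():
        text = text.replace(src, dst)
    return text

print(normalise_unicode('\u201cRouters\u201d \u2014 data traffic'))  # "Routers" - data traffic
```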
## Assembling the datasets
### Create the Dataset objects
```
# General corpus of long sentences
# normal_df
normal_dataset = Dataset.from_pandas(normal_df)
normal_dataset = normal_dataset.map(normalisation)
normal_dataset
# General corpus of short sentences
# short_df
short_dataset = Dataset.from_pandas(short_df)
short_dataset = short_dataset.map(normalisation)
short_dataset
# Domain-specific corpus
# subject_df
subject_dataset = Dataset.from_pandas(subject_df).shuffle()
subject_dataset = subject_dataset.map(normalisation)
subject_dataset
# Test corpus
# test_df
test_dataset = Dataset.from_pandas(test_df)
test_dataset = test_dataset.map(normalisation)
test_dataset
```
### Merge the training part of the domain-specific corpus with the test sets
```
# the target "dictionary" of splits
split_datasets = DatasetDict()
split_datasets['normal'] = normal_dataset
split_datasets['short'] = short_dataset
split_datasets
sub_train_and_test = subject_dataset.train_test_split(test_size=0.2)
sub_train_and_test
tmp = test_dataset.train_test_split(test_size=0.166)
tmp
split_datasets['train'] = interleave_datasets(
[sub_train_and_test['train'], tmp['test']]
).shuffle()
split_datasets['validation'] = sub_train_and_test.pop("test")
split_datasets
## Uncomment to train on the entire model dataset; the evaluation methods inside the training function would then also need changing
# split_datasets['train'] = concatenate_datasets(
# [sub_train_and_test['train'], test_dataset]
# ).shuffle()
# split_datasets['validation'] = sub_train_and_test.pop("test")
# split_datasets
split_datasets['test'] = tmp['train']
split_datasets
```
## The training function
> It cannot be wrapped in a method or called in a loop (verified empirically), because it relies on running several GPUs in parallel
```
lr_schedulers = ['get_constant_schedule', 'get_constant_schedule_with_warmup',
'get_cosine_schedule_with_warmup', 'get_cosine_with_hard_restarts_schedule_with_warmup',
'get_linear_schedule_with_warmup', 'get_polynomial_decay_schedule_with_warmup',
'torch_optim_lr_scheduler_one_cycle_lr']
hyperparameters = {
"learning_rate": 1e-6,
"num_epochs": 2,
"train_batch_size": 8,
"eval_batch_size": 32,
"model_checkpoint": "opus-mt-en-ru",
"max_input_length": 128,
"max_target_length": 128,
"max_generate_length": 128,
"output_dir": f'experiences/fine_tuned_en_ru_model_{strftime("%Y-%m-%d_%H-%M-%S", gmtime())}',
"file_scores": 'scores.txt',
"scheduler": lr_schedulers[0], # настраиваемый параметр
}
tokenizer = AutoTokenizer.from_pretrained(hyperparameters["model_checkpoint"], return_tensors="pt")
def preprocess_function(examples, hyperparameters=hyperparameters, tokenizer=tokenizer):
    '''Convert text into token IDs'''
    model_inputs = tokenizer(examples["en"], max_length=hyperparameters['max_input_length'], truncation=True)
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(examples["ru"], max_length=hyperparameters['max_target_length'], truncation=True)
    model_inputs["labels"] = labels["input_ids"]  # attach the target-language IDs as the labels
    return model_inputs
def postprocess(predictions, labels, tokenizer=tokenizer):
    '''Convert IDs back into text'''
    predictions = predictions.cpu().numpy()
    labels = labels.cpu().numpy()
    # Decode the token IDs predicted by the model
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    # Replace -100 in the labels, since it cannot be decoded
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    # Decode the label IDs, i.e. the reference translations
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    # Post-process: predictions must be a list, references a list of lists
    decoded_preds = [pred.strip() for pred in decoded_preds]
    decoded_labels = [[label.strip()] for label in decoded_labels]
    return decoded_preds, decoded_labels
def evaluate(model, accelerator, examples, epoch='base', note="sub", hyperparameters=hyperparameters):
    '''Evaluation'''
    metric = load_metric("sacrebleu")
    model.eval()
    for batch in tqdm(examples):
        with torch.no_grad():
            generated_tokens = accelerator.unwrap_model(model).generate(
                batch["input_ids"],
                attention_mask=batch["attention_mask"],
                max_length=hyperparameters["max_generate_length"],
            )
        labels = batch["labels"]
        # Pad predictions and labels so that accelerator.gather() works
        generated_tokens = accelerator.pad_across_processes(
            generated_tokens, dim=1, pad_index=tokenizer.pad_token_id
        )
        labels = accelerator.pad_across_processes(labels, dim=1, pad_index=-100)
        predictions_gathered = accelerator.gather(generated_tokens)
        labels_gathered = accelerator.gather(labels)
        # prepare the data for scoring
        decoded_preds, decoded_labels = postprocess(predictions_gathered, labels_gathered)
        # add the batch to the metric
        metric.add_batch(predictions=decoded_preds, references=decoded_labels)
    results = metric.compute()
    response = f"{note}_score for {epoch} epoch: {results['score']}\n"
    with open(f"{hyperparameters['output_dir']}_{hyperparameters['scheduler']}/{hyperparameters['file_scores']}", 'a') as file:
        file.write(response)
    print(f"{note}_score for epoch {epoch}, BLEU score: {results['score']:.2f}")
def get_image(hyperparameters):
    '''Build and save the learning-curve plot'''
    with open(f"{hyperparameters['output_dir']}_{hyperparameters['scheduler']}/{hyperparameters['file_scores']}") as f:
        score = f.readlines()
    # With num_processes=4, each score line is appended once per process, so it
    # appears four times: score[0::4] keeps one copy per evaluation, and the
    # second slice separates the sub/normal/short/test series
    sub = [float(i.strip().split(': ')[1]) for i in score[0::4][0::4]]
    normal = [float(i.strip().split(': ')[1]) for i in score[0::4][1::4]]
    short = [float(i.strip().split(': ')[1]) for i in score[0::4][2::4]]
    test = [float(i.strip().split(': ')[1]) for i in score[0::4][3::4]]
    X = [i for i in range(hyperparameters["num_epochs"] + 1)]
    Y = [i for i in range(0, 61)]
    score_df = pd.DataFrame({'Domain': sub, 'General': normal, 'Short': short, 'Model': test})
    mx_sub = max(sub)
    inx = sub.index(mx_sub)
    modscore = test[inx]
    img = score_df.plot(xticks=X, yticks=Y, style='^', figsize=(15,12))
    img.axvline(inx, color='grey')
    img.legend(loc='lower left')
    img.set_xlabel("Epochs")
    img.set_ylabel("BLEU")
    img.annotate(f'sub {mx_sub:.2f}', xy=(inx, mx_sub), xytext=(inx, mx_sub),
                 arrowprops=dict(facecolor='blue', shrink=0.05),
                 )
    img.annotate(f'mod {modscore:.2f}', xy=(inx, modscore), xytext=(inx, modscore),
                 arrowprops=dict(facecolor='red', shrink=0.05),
                 )
    img.annotate(f"{hyperparameters['scheduler'].upper()} by LR:{hyperparameters['learning_rate']}", xy=(0, 58), xytext=(0, 58))
    directory = f"{hyperparameters['output_dir']}_{hyperparameters['scheduler']}"
    img.get_figure().savefig(f"{directory}/maxsub-{mx_sub:.2f}_mod-{modscore:.2f}_epoch-{inx}_{hyperparameters['scheduler']}.png")
def training_function(hyperparameters, tokenized_datasets, tokenizer):
    directory = f'{hyperparameters["output_dir"]}_{hyperparameters["scheduler"]}'
    try:
        repo = Repository(directory, clone_from='eleldar/train')
    except Exception as e:
        pass
    if not os.path.isfile(f"{directory}/{hyperparameters['file_scores']}"):
        with open(f"{directory}/{hyperparameters['file_scores']}", 'w') as file:  # the file that accumulates the scores
            file.write('')
        with open(f"{directory}/.gitignore", 'w') as file:
            file.write("*.png\n")
    accelerator = Accelerator()
    if accelerator.is_main_process:
        datasets.utils.logging.set_verbosity_warning()
        transformers.utils.logging.set_verbosity_info()
    else:
        datasets.utils.logging.set_verbosity_error()
        transformers.utils.logging.set_verbosity_error()
    model = AutoModelForSeq2SeqLM.from_pretrained(hyperparameters["model_checkpoint"])
    data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
    tokenized_datasets.set_format("torch")
    train_dataloader = DataLoader(tokenized_datasets["train"], shuffle=True,
                                  collate_fn=data_collator, batch_size=hyperparameters['train_batch_size'])
    eval_dataloader = DataLoader(tokenized_datasets["validation"], shuffle=False,
                                 collate_fn=data_collator, batch_size=hyperparameters['eval_batch_size'])
    normal_dataloader = DataLoader(tokenized_datasets["normal"], shuffle=False,
                                   collate_fn=data_collator, batch_size=hyperparameters['eval_batch_size'])
    short_dataloader = DataLoader(tokenized_datasets["short"], shuffle=False,
                                  collate_fn=data_collator, batch_size=hyperparameters['eval_batch_size'])
    test_dataloader = DataLoader(tokenized_datasets["test"], shuffle=False,
                                 collate_fn=data_collator, batch_size=hyperparameters['eval_batch_size'])
    optimizer = AdamW(model.parameters(), lr=hyperparameters["learning_rate"])
    model, optimizer, train_dataloader, eval_dataloader, normal_dataloader, short_dataloader, test_dataloader = accelerator.prepare(
        model, optimizer, train_dataloader, eval_dataloader, normal_dataloader, short_dataloader, test_dataloader
    )
    num_epochs = hyperparameters["num_epochs"]
    lr_schedulers = {
        'get_constant_schedule': get_constant_schedule(
            optimizer=optimizer
        ),
        'get_constant_schedule_with_warmup': get_constant_schedule_with_warmup(
            optimizer=optimizer, num_warmup_steps=100
        ),
        'get_cosine_schedule_with_warmup': get_cosine_schedule_with_warmup(
            optimizer=optimizer, num_warmup_steps=100,
            num_training_steps=len(train_dataloader) * num_epochs,
            num_cycles=0.5
        ),
        'get_cosine_with_hard_restarts_schedule_with_warmup': get_cosine_with_hard_restarts_schedule_with_warmup(
            optimizer=optimizer, num_warmup_steps=100,
            num_training_steps=len(train_dataloader) * num_epochs,
            num_cycles=1
        ),
        'get_linear_schedule_with_warmup': get_linear_schedule_with_warmup(
            optimizer=optimizer, num_warmup_steps=100,
            num_training_steps=len(train_dataloader) * num_epochs,
        ),
        'get_polynomial_decay_schedule_with_warmup': get_polynomial_decay_schedule_with_warmup(
            optimizer=optimizer, num_warmup_steps=100,
            num_training_steps=len(train_dataloader) * num_epochs,
            lr_end=1e-7, power=1.0
        ),
        'torch_optim_lr_scheduler_one_cycle_lr': torch.optim.lr_scheduler.OneCycleLR(
            optimizer=optimizer, max_lr=1e-5, pct_start=1 / (num_epochs),
            total_steps=len(train_dataloader) * num_epochs + 10, div_factor=1e+3, final_div_factor=1e+4,
            anneal_strategy='cos'
        )
    }
    lr_scheduler = lr_schedulers[hyperparameters['scheduler']]
    # evaluation before training
    evaluate(model, accelerator, eval_dataloader, note="sub")
    evaluate(model, accelerator, normal_dataloader, note="normal")
    evaluate(model, accelerator, short_dataloader, note="short")
    evaluate(model, accelerator, test_dataloader, note="test")
    try:
        repo.git_add(".")
        repo.git_commit(commit_message="base and gitignore")
    except Exception as e:
        pass
    # training
    progress_bar = tqdm(range(num_epochs * len(train_dataloader)), disable=not accelerator.is_main_process)
    for epoch in range(1, num_epochs + 1):
        model.train()
        for batch in train_dataloader:
            outputs = model(**batch)
            loss = outputs.loss
            accelerator.backward(loss)
            optimizer.step()
            lr_scheduler.step()
            optimizer.zero_grad()
            progress_bar.update(1)
        # evaluation during training
        evaluate(model, accelerator, eval_dataloader, epoch=epoch, note="sub")
        evaluate(model, accelerator, normal_dataloader, epoch=epoch, note="normal")
        evaluate(model, accelerator, short_dataloader, epoch=epoch, note="short")
        evaluate(model, accelerator, test_dataloader, epoch=epoch, note="test")
        # save and push the updated model
        accelerator.wait_for_everyone()
        unwrapped_model = accelerator.unwrap_model(model)
        unwrapped_model.save_pretrained(directory, save_function=accelerator.save)
        if accelerator.is_main_process:
            tokenizer.save_pretrained(directory)
            try:
                repo.git_add(".")
                repo.git_commit(commit_message=f"Training in progress epoch {epoch}")
            except Exception as e:
                pass
    get_image(hyperparameters)
def decorator(function, *args):
    '''Bind arguments to the training function'''
    def wrapper():
        return function(*args)
    return wrapper
tokenized_datasets = split_datasets.map(
preprocess_function,
batched=True,
remove_columns=split_datasets["train"].column_names,
)
training_function = decorator(training_function, hyperparameters,
tokenized_datasets, tokenizer
)
notebook_launcher(training_function, num_processes=4)
```
# Usage
```
split_datasets['validation'][60]
# before fine-tuning
from transformers import pipeline
model_checkpoint = "Helsinki-NLP/opus-mt-en-ru"
translator = pipeline("translation", model=model_checkpoint)
translator("Companies need to buy routers to direct data traffic and connect to the internet.")
# после
from transformers import pipeline
model_checkpoint = hyperparameters['output_dir']
translator = pipeline("translation", model=model_checkpoint)
translator("Companies need to buy routers to direct data traffic and connect to the internet.")
```
```
from lenslikelihood.power_spectra import *
mass_function_model = 'rodriguezPuebla2016'
normalization = 'As'
pivot_string = '1'
pivot = 1.0
structure_formation_interp_As = load_interpolated_mapping(mass_function_model, pivot_string)
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import os
plt.rcParams['axes.linewidth'] = 2.5
plt.rcParams['xtick.major.width'] = 2.5
plt.rcParams['xtick.major.size'] = 8
plt.rcParams['xtick.minor.size'] = 5
plt.rcParams['ytick.major.width'] = 2.5
plt.rcParams['ytick.major.size'] = 8
plt.rcParams['ytick.minor.size'] = 4
plt.rcParams['ytick.labelsize'] = 15
plt.rcParams['xtick.labelsize'] = 15
from lenslikelihood.measurements import *
from lenslikelihood.sampling import InterpolatedLikelihood
import dill as pickle
from trikde.pdfs import DensitySamples, IndepdendentLikelihoods, MultivariateNormalPriorHyperCube, CustomPriorHyperCube
nbins = 20
param_names = ['LOS_normalization', 'beta', 'log10c0', 'delta_power_law_index', 'sigma_sub']
param_ranges = [all_param_ranges_version2[name] for name in param_names]
load_from_pickle = True
save_to_pickle = False
filename_extension = '_joint_logprior'
base_path = './../lenslikelihood/precomputed_likelihoods/'
likelihoods = []
for lens in all_lens_names:
fname = base_path + lens + filename_extension
print('loading joint likelihoods for lens '+lens+' ...')
f = open(fname, 'rb')
single_lens_likelihood = pickle.load(f)
f.close()
likelihoods.append(single_lens_likelihood)
likelihood_noprior = IndepdendentLikelihoods(likelihoods)
```
## Priors on the subhalo and field halo mass functions
A reasonable assumption to impose on the inference is that the number of subhalos varies proportionally with the number of field halos, since subhalos are accreted from the field. We can enforce this by choosing an expected amplitude for the subhalo mass function in $\Lambda$CDM, and then coupling variations to $\Sigma_{\rm{sub}}$ around this value to $\delta_{\rm{LOS}}$.
```
def couple_mass_functions(samples, sigma_sub_theory=0.025, coupling_strength=0.2):
delta_los_samples = samples[:, 0]
sigma_sub_samples = samples[:, -1]
delta_sigma_sub = sigma_sub_samples/sigma_sub_theory
chi2 = (delta_sigma_sub - delta_los_samples)**2/coupling_strength**2
return chi2
extrapolate_likelihood = True
sigma_sub_theory = 0.05
kwargs_prior = {'sigma_sub_theory': sigma_sub_theory}
prior_on_mass_functions = CustomPriorHyperCube(couple_mass_functions, param_names, param_ranges, nbins, kwargs_prior)
likelihood = IndepdendentLikelihoods(likelihoods + [prior_on_mass_functions])
interpolated_lens_likelihood = InterpolatedLikelihood(likelihood, param_names, param_ranges, extrapolate=extrapolate_likelihood)
```
### Plot the likelihood
First we show the likelihood as inferred from the lenses with no additional modeling assumptions
```
from trikde.triangleplot import TrianglePlot
fig = plt.figure()
cmap = 'jet'
triangle_plot = TrianglePlot([likelihood_noprior])
triangle_plot.set_cmap(cmap, marginal_col='k')
triangle_plot.truth_color = 'k'
truths = {'sigma_sub': 1.05, 'LOS_normalization': 1., 'beta': 0.85, 'log10c0': np.log10(18.5), 'delta_power_law_index': 0.}
axes = triangle_plot.make_triplot(filled_contours=False, show_intervals=False, contour_alpha=1.,
contour_colors=['k', 'k'],
show_contours=True, contour_levels=[0.32], truths=truths)
beta = r'$\beta$'
beta_ticks = [-0.2, 3, 6, 9, 12, 15]
c0 = r'$\log_{10} c_8$'
c0_ticks = [0., 1.0, 2.0, 3.0, 4.0]
delta_power_law_index = r'$\Delta \alpha$'
dpli_ticks = [-0.6, -0.3, 0., 0.3, 0.6, 0.9]
sigma_sub = r'$\Sigma_{\rm{sub}} \ \left[\rm{kpc^{-2}}\right]$'
sigma_sub_ticks = [0., 0.025, 0.05, 0.075, 0.1]
delta_LOS = r'$\delta_{\rm{LOS}}$'
dlos_ticks = [0.0, 0.5, 1., 1.5, 2., 2.5]
ticksize = 14
labelsize = 18
rotation = 40
axes[5].set_ylabel(beta, fontsize=labelsize)
axes[5].set_yticks(beta_ticks)
axes[5].set_yticklabels(beta_ticks, fontsize=ticksize)
axes[10].set_ylabel(c0, fontsize=labelsize)
axes[10].set_yticks(c0_ticks)
axes[10].set_yticklabels(c0_ticks, fontsize=ticksize)
axes[15].set_ylabel(delta_power_law_index, fontsize=labelsize)
axes[15].set_yticks(dpli_ticks)
axes[15].set_yticklabels(dpli_ticks, fontsize=ticksize)
axes[20].set_ylabel(sigma_sub, fontsize=labelsize)
axes[20].set_yticks(sigma_sub_ticks)
axes[20].set_yticklabels(sigma_sub_ticks, fontsize=ticksize)
axes[20].set_xlabel(delta_LOS, fontsize=labelsize)
axes[20].set_xticks(dlos_ticks)
axes[20].set_xticklabels(dlos_ticks, fontsize=ticksize, rotation=rotation)
axes[21].set_xlabel(beta, fontsize=labelsize)
axes[21].set_xticks(beta_ticks)
axes[21].set_xticklabels(beta_ticks, fontsize=ticksize, rotation=rotation)
axes[22].set_xlabel(c0, fontsize=labelsize)
axes[22].set_xticks(c0_ticks)
axes[22].set_xticklabels(c0_ticks, fontsize=ticksize, rotation=rotation)
axes[23].set_xlabel(delta_power_law_index, fontsize=labelsize)
axes[23].set_xticks(dpli_ticks)
axes[23].set_xticklabels(dpli_ticks, fontsize=ticksize, rotation=rotation)
axes[24].set_xlabel(sigma_sub, fontsize=labelsize)
axes[24].set_xticks(sigma_sub_ticks)
axes[24].set_xticklabels(sigma_sub_ticks, fontsize=ticksize, rotation=rotation)
from mpl_toolkits.axes_grid1 import make_axes_locatable
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
ax_idx = 9
axins1 = inset_axes(axes[ax_idx],
width="300%", # width = 50% of parent_bbox width
height="15%", # height : 5%
loc='upper right')
empty = np.zeros((20, 20))
empty[0,0] = 1
im1 = axes[ax_idx].imshow(empty, interpolation='None', cmap=cmap)
cb = fig.colorbar(im1, cax=axins1, orientation="horizontal", ticks=[0, 0.25, 0.5, 0.75, 1])
axes[ax_idx].set_visible(False)
cb.set_label('probability', fontsize=15)
#plt.savefig('./figures/lensing_likelihood.pdf')
```
### Likelihood with a prior
Now we show the likelihood after adding the prior coupling $\Sigma_{\rm{sub}}$ to $\delta_{\rm{LOS}}$, assuming $\Sigma_{\rm{sub}} = 0.05 \ \rm{kpc^{-2}}$ in $\Lambda$CDM, corresponding to tidal disruption of halos in the Milky Way that is twice as efficient as in massive elliptical galaxies.
```
fig = plt.figure()
triangle_plot = TrianglePlot([likelihood])
triangle_plot.set_cmap(cmap, marginal_col='k')
triangle_plot.truth_color = 'k'
truths= {'sigma_sub': 1.05, 'LOS_normalization': 1., 'beta': 0.85, 'log10c0': np.log10(18.5), 'delta_power_law_index': 0.}
axes = triangle_plot.make_triplot(filled_contours=False, show_intervals=False, show_contours=True,
contour_levels=[0.32], contour_colors=['k', 'k'],
display_params=['LOS_normalization', 'beta', 'log10c0', 'delta_power_law_index'],
truths=truths)
axes[4].set_ylabel(beta, fontsize=labelsize)
axes[4].set_yticks(beta_ticks)
axes[4].set_yticklabels(beta_ticks, fontsize=ticksize)
axes[8].set_ylabel(c0, fontsize=labelsize)
axes[8].set_yticks(c0_ticks)
axes[8].set_yticklabels(c0_ticks, fontsize=ticksize)
axes[12].set_ylabel(delta_power_law_index, fontsize=labelsize)
axes[12].set_yticks(dpli_ticks)
axes[12].set_yticklabels(dpli_ticks, fontsize=ticksize)
axes[12].set_xlabel(delta_LOS, fontsize=labelsize)
axes[12].set_xticks(dlos_ticks)
axes[12].set_xticklabels(dlos_ticks, fontsize=ticksize, rotation=rotation)
axes[13].set_xlabel(beta, fontsize=labelsize)
axes[13].set_xticks(beta_ticks)
axes[13].set_xticklabels(beta_ticks, fontsize=ticksize, rotation=rotation)
axes[14].set_xlabel(c0, fontsize=labelsize)
axes[14].set_xticks(c0_ticks)
axes[14].set_xticklabels(c0_ticks, fontsize=ticksize, rotation=rotation)
axes[15].set_xlabel(delta_power_law_index, fontsize=labelsize)
axes[15].set_xticks(dpli_ticks)
axes[15].set_xticklabels(dpli_ticks, fontsize=ticksize, rotation=rotation)
axes[2].annotate(r'$\Sigma_{\rm{sub(predicted)}} = 0.05 \rm{kpc^{-2}}$', fontsize=22,
xy=(0.26, 0.1), xycoords='axes fraction')
ax_idx = 7
axins1 = inset_axes(axes[ax_idx],
width="200%", # width = 50% of parent_bbox width
height="10%", # height : 5%
loc='upper right')
empty = np.zeros((20, 20))
empty[0,0] = 1
im1 = axes[ax_idx].imshow(empty, interpolation='None', cmap=cmap)
cb = fig.colorbar(im1, cax=axins1, orientation="horizontal", ticks=[0, 0.25, 0.5, 0.75, 1])
axes[ax_idx].set_visible(False)
cb.set_label('probability', fontsize=15)
#plt.savefig('./figures/lensing_likelihood_w.pdf')
```
## Systematic modeling errors
We allow for systematic errors in the model by changing the internal mapping between the parameters describing the mass function and the concentration-mass relation.
```
error_type = 'INTERPOLATED_GRID'
if error_type == 'INTERPOLATED_GRID':
f = open('./systematic_error_interpolations/systematic_error_interpolation_lowfit_'+mass_function_model+'_pivot'+pivot_string+'_3D', 'rb')
systematic_interp_lowfit = pickle.load(f)
f.close()
f = open('./systematic_error_interpolations/systematic_error_interpolation_highfit_'+mass_function_model+'_pivot'+pivot_string+'_3D', 'rb')
systematic_interp_highfit = pickle.load(f)
f.close()
elif error_type == 'RELATIVE':
delta_delta_los = 0.1
delta_beta = 0.2
delta_c8 = 0.2
delta_delta_alpha = 0.05
```
## Final setup
```
delta_los_range = [0., 2.5]
beta_range = [-0.2, 15.]
log10c0_range = [0., 4.]
delta_alpha_range = [-0.6, 0.9]
sigma_sub_range = [0., 0.125]
param_ranges_lensing = [delta_los_range, beta_range, log10c0_range, delta_alpha_range, sigma_sub_range]
n_draw = 50000
extrapolate_ranges = [[0., 2.5],
[-0.2, 15.],
[0., 4.0],
delta_alpha_range,
sigma_sub_range]
param_ranges_pk = [[0.4645, 1.4645], [-0.2, 0.2], [-0.018, 0.018]]
arun_ticks = [-0.16, -0.08, 0.00, 0.08, 0.16]
brun_ticks = [-0.014, -0.007, 0.000, 0.007, 0.014]
ns_ticks = [0.5645, 0.9645, 1.3645]
```
## Compute the likelihood of the power spectrum parameters
We can compute the likelihood of the parameters describing $P\left(k\right)$, adding systematic model errors by hand.
```
if error_type == 'INTERPOLATED_GRID':
samples_no_sys, like_no_sys = sample_power_spectra_with_systematic_interp(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood,
systematic_interp_highfit, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges, log10c8_sys=False, delta_los_sys=False,
delta_alpha_sys=False, beta_sys=False, three_D=True)
samples_sys1, like_sys1 = sample_power_spectra_with_systematic_interp(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood,
systematic_interp_lowfit, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges, three_D=True)
samples_sys2, like_sys2 = sample_power_spectra_with_systematic_interp(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood,
systematic_interp_highfit, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges, three_D=True)
samples_sys_noamp_1, like_sys_noamp_1 = sample_power_spectra_with_systematic_interp(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood,
systematic_interp_lowfit, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges, log10c8_sys=False, delta_los_sys=False, three_D=True)
samples_sys_noamp_2, like_sys_noamp_2 = sample_power_spectra_with_systematic_interp(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood,
systematic_interp_highfit, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges, log10c8_sys=False, delta_los_sys=False, three_D=True)
samples_sys_noslope, like_sys_noslope = sample_power_spectra_with_systematic_interp(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood,
systematic_interp_lowfit, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges, delta_alpha_sys=False, beta_sys=False, three_D=True)
elif error_type == 'RELATIVE':
samples_sys1, like_sys1 = sample_power_spectra(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood,
delta_c8, delta_beta, delta_delta_los, delta_delta_alpha, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges)
samples_sys2, like_sys2 = sample_power_spectra(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood,
-delta_c8, -delta_beta, -delta_delta_los, -delta_delta_alpha, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges)
samples_no_sys, like_no_sys = sample_power_spectra(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood,
0., 0., 0., 0., extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges)
samples_sys_noamp_1, like_sys_noamp_1 = sample_power_spectra(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood,
0., delta_beta, 0., delta_delta_alpha, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges)
samples_sys_noamp_2, like_sys_noamp_2 = sample_power_spectra(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood,
0., -delta_beta, 0., delta_delta_alpha, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges)
samples_sys_noslope, like_sys_noslope = sample_power_spectra(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood,
-delta_c8, 0., 0., 0., extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges)
samples_sys_noslope_2, like_sys_noslope_2 = sample_power_spectra(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood,
delta_c8, 0., 0., 0., extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges)
```
## Plot the likelihood of the parameters describing the power spectrum
```
nbins = 20
param_names_pk = [r'$n_s$', r'$a_{\rm{run}}$', r'$b_{\rm{run}}$']
samples_marginalized = np.vstack((samples_no_sys, samples_sys1, samples_sys2,
                                  samples_sys_noamp_1, samples_sys_noamp_2, samples_sys_noslope))
likelihood_marginalized = np.concatenate((like_no_sys, like_sys1, like_sys2,
                                          like_sys_noamp_1, like_sys_noamp_2, like_sys_noslope))
# samples_marginalized = samples_no_sys
# likelihood_marginalized = like_no_sys
density_marginalized = DensitySamples(samples_marginalized, param_names_pk, likelihood_marginalized,
param_ranges_pk, nbins=nbins, use_kde=False, bandwidth_scale=1.)
pk_likelihood_marginalized = IndepdendentLikelihoods([density_marginalized])
triplot = TrianglePlot([pk_likelihood_marginalized])
cmap = 'jet'
triplot.set_cmap(cmap, marginal_col='k')
triplot.truth_color = 'k'
truths= {r'$n_s$': 0.9645, r'$a_{\rm{run}}$': 0., r'$b_{\rm{run}}$': 0.}
axes = triplot.make_triplot(filled_contours=False, show_intervals=False, show_contours=True,
contour_levels=[0.32], contour_colors=['k', 'k'])
axes[3].set_yticks(arun_ticks)
axes[3].set_yticklabels(arun_ticks, fontsize=ticksize)
axes[6].set_yticks(brun_ticks)
axes[6].set_yticklabels(brun_ticks, fontsize=ticksize)
axes[6].set_xticks(ns_ticks)
axes[6].set_xticklabels(ns_ticks, fontsize=ticksize)
axes[7].set_xticks(arun_ticks)
axes[7].set_xticklabels(arun_ticks, fontsize=ticksize)
axes[8].set_xticks(brun_ticks)
axes[8].set_xticklabels(brun_ticks, fontsize=ticksize)
ax_idx = 1
axins1 = inset_axes(axes[ax_idx],
width="200%", # width = 50% of parent_bbox width
height="10%", # height : 5%
loc=6)
empty = np.zeros((20, 20))
empty[0,0] = 1
im1 = axes[ax_idx].imshow(empty, interpolation='None', cmap=cmap)
cb = fig.colorbar(im1, cax=axins1, orientation="horizontal", ticks=[0, 0.25, 0.5, 0.75, 1])
axes[ax_idx].set_visible(False)
cb.set_label('probability', fontsize=15)
plt.savefig('./figures/qP_likelihood_'+mass_function_model+'_pivot'+pivot_string+'.pdf')
import pickle
f = open('./interpolated_pq_likelihoods/Pk_likelihood_'+mass_function_model+'_pivot'+pivot_string, 'wb')
pk_likelihood_marginalized_interp = InterpolatedLikelihood(pk_likelihood_marginalized, param_names_pk, param_ranges_pk)
pickle.dump(pk_likelihood_marginalized_interp, f)
```
# Training Neural Networks
The network we built in the previous part isn't so smart; it doesn't know anything about our handwritten digits. Neural networks with non-linear activations work like universal function approximators: some function maps your input to the output, for example, images of handwritten digits to class probabilities. The power of neural networks is that we can train them to approximate this function, and essentially any function, given enough data and compute time.
<img src="assets/function_approx.png" width=500px>
At first the network is naive: it doesn't know the function mapping the inputs to the outputs. We train the network by showing it examples of real data, then adjusting the network parameters so that it approximates this function.
To find these parameters, we need to know how poorly the network is predicting the real outputs. For this we calculate a **loss function** (also called the cost), a measure of our prediction error. For example, the mean squared loss is often used in regression and binary classification problems
$$
\large \ell = \frac{1}{2n}\sum_i^n{\left(y_i - \hat{y}_i\right)^2}
$$
where $n$ is the number of training examples, $y_i$ are the true labels, and $\hat{y}_i$ are the predicted labels.
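As a concrete illustration (a minimal NumPy sketch, not part of the original notebook), the loss above can be computed directly; the helper name `squared_error_loss` is my own:

```python
import numpy as np

def squared_error_loss(y_true, y_pred):
    """Mean squared loss with the conventional 1/2 factor: l = 1/(2n) * sum((y - y_hat)^2)."""
    n = len(y_true)
    return np.sum((y_true - y_pred) ** 2) / (2 * n)

y = np.array([1.0, 0.0, 1.0, 1.0])       # true labels
y_hat = np.array([0.9, 0.2, 0.8, 0.6])   # predictions
print(squared_error_loss(y, y_hat))      # 0.03125
```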
By minimizing this loss with respect to the network parameters, we can find configurations where the loss is at a minimum and the network is able to predict the correct labels with high accuracy. We find this minimum using a process called **gradient descent**. The gradient is the slope of the loss function and points in the direction of fastest change. To get to the minimum in the least amount of time, we then want to follow the gradient (downwards). You can think of this like descending a mountain by following the steepest slope to the base.
<img src='assets/gradient_descent.png' width=350px>
## Backpropagation
For single layer networks, gradient descent is straightforward to implement. However, it's more complicated for deeper, multilayer neural networks like the one we've built. Complicated enough that it took about 30 years before researchers figured out how to train multilayer networks.
Training multilayer networks is done through **backpropagation** which is really just an application of the chain rule from calculus. It's easiest to understand if we convert a two layer network into a graph representation.
<img src='assets/backprop_diagram.png' width=550px>
In the forward pass through the network, our data and operations go from bottom to top here. We pass the input $x$ through a linear transformation $L_1$ with weights $W_1$ and biases $b_1$. The output then goes through the sigmoid operation $S$ and another linear transformation $L_2$. Finally we calculate the loss $\ell$. We use the loss as a measure of how bad the network's predictions are. The goal then is to adjust the weights and biases to minimize the loss.
To train the weights with gradient descent, we propagate the gradient of the loss backwards through the network. Each operation has some gradient between the inputs and outputs. As we send the gradients backwards, we multiply the incoming gradient with the gradient for the operation. Mathematically, this is really just calculating the gradient of the loss with respect to the weights using the chain rule.
$$
\large \frac{\partial \ell}{\partial W_1} = \frac{\partial L_1}{\partial W_1} \frac{\partial S}{\partial L_1} \frac{\partial L_2}{\partial S} \frac{\partial \ell}{\partial L_2}
$$
**Note:** I'm glossing over a few details here that require some knowledge of vector calculus, but they aren't necessary to understand what's going on.
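The chain-rule product above can be sanity-checked numerically on a scalar version of the two-layer network (a hedged sketch with made-up parameter values, comparing the analytic gradient against a finite difference):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w1, b1, w2, b2, x, y):
    """Scalar two-layer network: L1 -> sigmoid -> L2 -> squared loss."""
    l1 = w1 * x + b1
    s = sigmoid(l1)
    l2 = w2 * s + b2
    return 0.5 * (y - l2) ** 2

x, y = 0.5, 1.0
w1, b1, w2, b2 = 0.3, -0.1, 0.8, 0.2

# analytic gradient dl/dw1 via the chain rule, term by term
l1 = w1 * x + b1
s = sigmoid(l1)
l2 = w2 * s + b2
dl_dL2 = -(y - l2)
dL2_dS = w2
dS_dL1 = s * (1 - s)
dL1_dw1 = x
analytic = dL1_dw1 * dS_dL1 * dL2_dS * dl_dL2

# finite-difference check of the same gradient
eps = 1e-6
numeric = (forward(w1 + eps, b1, w2, b2, x, y)
           - forward(w1 - eps, b1, w2, b2, x, y)) / (2 * eps)
print(analytic, numeric)  # the two values should agree to several decimal places
```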
We update our weights using this gradient with some learning rate $\alpha$.
$$
\large W^\prime_1 = W_1 - \alpha \frac{\partial \ell}{\partial W_1}
$$
The learning rate $\alpha$ is set such that the weight update steps are small enough that the iterative method settles in a minimum.
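To make the update rule concrete, here is a hypothetical one-parameter example (pure Python, not from the notebook) that repeatedly applies $W^\prime = W - \alpha\,\partial \ell/\partial W$ to the toy loss $\ell(w) = (w - 3)^2$:

```python
# Toy gradient descent on l(w) = (w - 3)^2, whose gradient is 2*(w - 3)
w = 0.0
alpha = 0.1  # learning rate
for _ in range(50):
    grad = 2 * (w - 3)    # dl/dw
    w = w - alpha * grad  # W' = W - alpha * dl/dW
print(round(w, 4))  # converges toward the minimum at w = 3
```

With a larger $\alpha$ (e.g. above 1 here) the iterates overshoot and diverge, which is why the learning rate must be kept small enough for the method to settle in a minimum.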
## Losses in PyTorch
Let's start by seeing how we calculate the loss with PyTorch. Through the `nn` module, PyTorch provides losses such as the cross-entropy loss (`nn.CrossEntropyLoss`). You'll usually see the loss assigned to `criterion`. As noted in the last part, with a classification problem such as MNIST, we're using the softmax function to predict class probabilities. With a softmax output, you want to use cross-entropy as the loss. To actually calculate the loss, you first define the criterion then pass in the output of your network and the correct labels.
Something really important to note here. Looking at [the documentation for `nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss),
> This criterion combines `nn.LogSoftmax()` and `nn.NLLLoss()` in one single class.
>
> The input is expected to contain scores for each class.
This means we need to pass in the raw output of our network into the loss, not the output of the softmax function. This raw output is usually called the *logits* or *scores*. We use the logits because softmax gives you probabilities which will often be very close to zero or one but floating-point numbers can't accurately represent values near zero or one ([read more here](https://docs.python.org/3/tutorial/floatingpoint.html)). It's usually best to avoid doing calculations with probabilities, typically we use log-probabilities.
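A small NumPy illustration (my own sketch, independent of the notebook's code) of why log-probabilities behave better numerically than raw probabilities:

```python
import numpy as np

scores = np.array([-50.0, 0.0])  # raw network scores (logits)
probs = np.exp(scores) / np.exp(scores).sum()
print(probs[0])                  # ~1.9e-22: effectively indistinguishable from 0

# log-softmax keeps the same information at a numerically friendly scale
shifted = scores - scores.max()  # subtract the max for stability
log_probs = shifted - np.log(np.exp(shifted).sum())
print(log_probs)                 # approximately [-50., 0.]
```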
```
import torch
from torch import nn
import torch.nn.functional as F
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
```
### Note
If you haven't seen `nn.Sequential` yet, please finish the end of the Part 2 notebook.
```
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10))
# Define the loss
criterion = nn.CrossEntropyLoss()
# Get our data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)
# Forward pass, get our logits
logits = model(images)
# Calculate the loss with the logits and the labels
loss = criterion(logits, labels)
print(loss)
```
In my experience it's more convenient to build the model with a log-softmax output using `nn.LogSoftmax` or `F.log_softmax` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LogSoftmax)). Then you can get the actual probabilities by taking the exponential `torch.exp(output)`. With a log-softmax output, you want to use the negative log likelihood loss, `nn.NLLLoss` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.NLLLoss)).
>**Exercise:** Build a model that returns the log-softmax as the output and calculate the loss using the negative log likelihood loss. Note that for `nn.LogSoftmax` and `F.log_softmax` you'll need to set the `dim` keyword argument appropriately. `dim=0` calculates softmax across the rows, so each column sums to 1, while `dim=1` calculates across the columns so each row sums to 1. Think about what you want the output to be and choose `dim` appropriately.
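If the `dim` semantics are unclear, this NumPy sketch shows which direction sums to 1 (illustrative only; PyTorch's `dim` argument corresponds to NumPy's `axis`):

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

t = np.array([[1.0, 2.0], [3.0, 4.0]])
print(softmax(t, axis=0).sum(axis=0))  # axis=0 (dim=0): each column sums to 1
print(softmax(t, axis=1).sum(axis=1))  # axis=1 (dim=1): each row sums to 1
```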
```
### Import needed modules
import torch
from torch import nn
import torch.nn.functional as F
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
################################################
# TODO: Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
# TODO: Define the loss
criterion = nn.NLLLoss()
### Run this to check your work
# Get our data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)
# Forward pass, get our logits
logits = model(images)
# Calculate the loss with the logits and the labels
loss = criterion(logits, labels)
print(loss)
```
## Autograd
Now that we know how to calculate a loss, how do we use it to perform backpropagation? Torch provides a module, `autograd`, for automatically calculating the gradients of tensors. We can use it to calculate the gradients of all our parameters with respect to the loss. Autograd works by keeping track of operations performed on tensors, then going backwards through those operations, calculating gradients along the way. To make sure PyTorch keeps track of operations on a tensor and calculates the gradients, you need to set `requires_grad = True` on a tensor. You can do this at creation with the `requires_grad` keyword, or at any time with `x.requires_grad_(True)`.
You can turn off gradients for a block of code with the `torch.no_grad()` context:
```python
>>> x = torch.zeros(1, requires_grad=True)
>>> with torch.no_grad():
... y = x * 2
>>> y.requires_grad
False
```
Also, you can turn on or off gradients altogether with `torch.set_grad_enabled(True|False)`.
The gradients are computed with respect to some variable `z` with `z.backward()`. This does a backward pass through the operations that created `z`.
```
x = torch.randn(2,2, requires_grad=True)
print(x)
y = x**2
print(y)
```
Below we can see the operation that created `y`, a power operation `PowBackward0`.
```
## grad_fn shows the function that generated this variable
print(y.grad_fn)
```
The autograd module keeps track of these operations and knows how to calculate the gradient for each one. In this way, it's able to calculate the gradients for a chain of operations, with respect to any one tensor. Let's reduce the tensor `y` to a scalar value, the mean.
```
z = y.mean()
print(z)
```
You can check the gradients for `x` and `y` but they are empty currently.
```
print(x.grad)
```
To calculate the gradients, you need to run the `.backward` method on a tensor, `z` for example. This will calculate the gradient of `z` with respect to `x`:
$$
\frac{\partial z}{\partial x} = \frac{\partial}{\partial x}\left[\frac{1}{n}\sum_i^n x_i^2\right] = \frac{2x}{n} = \frac{x}{2}
$$
since $x$ is a $2\times 2$ tensor here, $n = 4$ and the factor $2/n$ reduces to $1/2$.
```
z.backward()
print(x.grad)
print(x/2)
```
These gradient calculations are particularly useful for neural networks. For training we need the gradients of the cost with respect to the weights. With PyTorch, we run data forward through the network to calculate the loss, then go backwards to calculate the gradients of the loss with respect to the parameters. Once we have the gradients, we can make a gradient descent step.
## Loss and Autograd together
When we create a network with PyTorch, all of the parameters are initialized with `requires_grad = True`. This means that when we calculate the loss and call `loss.backward()`, the gradients for the parameters are calculated. These gradients are used to update the weights with gradient descent. Below you can see an example of calculating the gradients using a backwards pass.
```
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
images, labels = next(iter(trainloader))
images = images.view(images.shape[0], -1)
logits = model(images)
loss = criterion(logits, labels)
print('Before backward pass: \n', model[0].weight.grad)
loss.backward()
print('After backward pass: \n', model[0].weight.grad)
```
## Training the network!
There's one last piece we need to start training, an optimizer that we'll use to update the weights with the gradients. We get these from PyTorch's [`optim` package](https://pytorch.org/docs/stable/optim.html). For example we can use stochastic gradient descent with `optim.SGD`. You can see how to define an optimizer below.
```
from torch import optim
# Optimizers require the parameters to optimize and a learning rate
optimizer = optim.SGD(model.parameters(), lr=0.01)
```
Now we know how to use all the individual parts so it's time to see how they work together. Let's consider just one learning step before looping through all the data. The general process with PyTorch:
* Make a forward pass through the network
* Use the network output to calculate the loss
* Perform a backward pass through the network with `loss.backward()` to calculate the gradients
* Take a step with the optimizer to update the weights
Below I'll go through one training step and print out the weights and gradients so you can see how they change. Note that I have a line of code `optimizer.zero_grad()`. When you do multiple backwards passes with the same parameters, the gradients are accumulated. This means that you need to zero the gradients on each training pass or you'll retain gradients from previous training batches.
```
print('Initial weights - ', model[0].weight)
images, labels = next(iter(trainloader))
images.resize_(64, 784)
# Clear the gradients, do this because gradients are accumulated
optimizer.zero_grad()
# Forward pass, then backward pass, then update weights
output = model(images)
loss = criterion(output, labels)
loss.backward()
print('Gradient -', model[0].weight.grad)
# Take an update step and view the new weights
optimizer.step()
print('Updated weights - ', model[0].weight)
```
### Training for real
Now we'll put this algorithm into a loop so we can go through all the images. Some nomenclature: one pass through the entire dataset is called an *epoch*. So here we're going to loop through `trainloader` to get our training batches. For each batch, we'll do a training pass where we calculate the loss, do a backwards pass, and update the weights.
>**Exercise:** Implement the training pass for our network. If you implemented it correctly, you should see the training loss drop with each epoch.
```
# Import needed modules
import torch
from torch import nn
import torch.nn.functional as F
from torchvision import datasets, transforms
from torch import optim
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/',
download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(
trainset, batch_size=64, shuffle=True)
################################################
## Your solution here
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)
epochs = 5
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
# Flatten MNIST images into a 784 long vector
images = images.view(images.shape[0], -1)
optimizer.zero_grad()
# TODO: Training pass
logits = model(images)
loss = criterion(logits,labels)
loss.backward()
running_loss += loss.item()
optimizer.step()
else:
print(f"Training loss: {running_loss/len(trainloader)}")
```
With the network trained, we can check out its predictions.
```
%matplotlib inline
import helper
images, labels = next(iter(trainloader))
img = images[0].view(1, 784)
# Turn off gradients to speed up this part
with torch.no_grad():
logps = model(img)
# Output of the network are log-probabilities, need to take exponential for probabilities
ps = torch.exp(logps)
helper.view_classify(img.view(1, 28, 28), ps)
```
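The `torch.exp` step works because the final layer is `nn.LogSoftmax`: exponentiating log-probabilities recovers probabilities that sum to 1. A minimal plain-Python sketch of that relationship (the logits here are hypothetical; no trained model required):

```python
import math

def log_softmax(logits):
    # Subtract the max for numerical stability, then normalize in log space
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_sum for x in logits]

logits = [2.0, 1.0, 0.1]             # raw scores for 3 classes
logps = log_softmax(logits)          # what nn.LogSoftmax would return
ps = [math.exp(lp) for lp in logps]  # exponentiate to get probabilities

print(ps, sum(ps))  # the probabilities sum to 1
```

In the network above, the same thing happens per row of the output batch, which is why `dim=1` is passed to `nn.LogSoftmax`.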
Now our network is brilliant. It can accurately predict the digits in our images. Next up you'll write the code for training a neural network on a more complex dataset.
# Anchor Boxes
:label:`sec_anchor`
Object detection algorithms usually sample a large number of regions in the input image, determine whether these regions contain objects of interest, and adjust the edges of the regions so as to predict the ground-truth bounding box of the target more accurately. Different models may use different region sampling methods. Here, we introduce one such method: it generates multiple bounding boxes with different sizes and aspect ratios while centering on each pixel. These bounding boxes are called anchor boxes. We will practice object detection based on anchor boxes in the following sections.
First, import the packages or modules required for this section. Here, we modify NumPy's printing precision. Because printing tensors actually calls NumPy's print function, the floating-point numbers in tensors printed in this section appear more concise.
```
%matplotlib inline
from mxnet import gluon, image, np, npx
from d2l import mxnet as d2l
np.set_printoptions(2)
npx.set_np()
```
## Generating Multiple Anchor Boxes
Assume that the input image has a height of $h$ and width of $w$. We generate anchor boxes with different shapes centered on each pixel of the image. Assume the size is $s\in (0, 1]$, the aspect ratio is $r > 0$, and the width and height of the anchor box are $ws\sqrt{r}$ and $hs/\sqrt{r}$, respectively. When the center position is given, an anchor box with known width and height is determined.
Below we specify a set of sizes $s_1,\ldots, s_n$ and a set of aspect ratios $r_1,\ldots, r_m$. If we used every combination of sizes and aspect ratios with each pixel as the center, the input image would have a total of $whnm$ anchor boxes. Although these anchor boxes may cover all ground-truth bounding boxes, the computational complexity is often excessive. Therefore, we are usually only interested in combinations containing the $s_1$ size or the $r_1$ aspect ratio, that is:
$$(s_1, r_1), (s_1, r_2), \ldots, (s_1, r_m), (s_2, r_1), (s_3, r_1), \ldots, (s_n, r_1).$$
That is, the number of anchor boxes centered on the same pixel is $n+m-1$. For the entire input image, we will generate a total of $wh(n+m-1)$ anchor boxes.
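As a quick numeric sketch of this count (the image size and the values of $n$ and $m$ here are arbitrary):

```python
h, w = 100, 100  # hypothetical image height and width in pixels
n, m = 3, 3      # number of sizes and number of aspect ratios

boxes_per_pixel = n + m - 1            # combinations kept per pixel
total_boxes = w * h * boxes_per_pixel  # anchor boxes for the whole image
naive_total = w * h * n * m            # all combinations, for comparison

print(boxes_per_pixel, total_boxes, naive_total)
```

Keeping only the combinations containing $s_1$ or $r_1$ cuts the count from $whnm$ to $wh(n+m-1)$, which matters when this runs over feature maps with thousands of pixels.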
The above method of generating anchor boxes is implemented in the `multibox_prior` function. We specify the input, a set of sizes, and a set of aspect ratios, and this function returns all of the anchor boxes.
```
#@save
def multibox_prior(data, sizes, ratios):
    in_height, in_width = data.shape[-2:]
    device, num_sizes, num_ratios = data.ctx, len(sizes), len(ratios)
    boxes_per_pixel = (num_sizes + num_ratios - 1)
    size_tensor = np.array(sizes, ctx=device)
    ratio_tensor = np.array(ratios, ctx=device)

    # Offsets are required to move the anchor to the center of a pixel
    # Since a pixel has height=1 and width=1, we offset the centers by 0.5
    offset_h, offset_w = 0.5, 0.5
    steps_h = 1.0 / in_height  # Scaled steps in y axis
    steps_w = 1.0 / in_width  # Scaled steps in x axis

    # Generate all center points for the anchor boxes
    center_h = (np.arange(in_height, ctx=device) + offset_h) * steps_h
    center_w = (np.arange(in_width, ctx=device) + offset_w) * steps_w
    shift_x, shift_y = np.meshgrid(center_w, center_h)
    shift_x, shift_y = shift_x.reshape(-1), shift_y.reshape(-1)

    # Generate boxes_per_pixel number of heights and widths which are later
    # used to create anchor box corner coordinates (xmin, xmax, ymin, ymax)
    # concat (various sizes, first ratio) and (first size, various ratios)
    w = np.concatenate((size_tensor * np.sqrt(ratio_tensor[0]),
                        sizes[0] * np.sqrt(ratio_tensor[1:]))) \
        * in_height / in_width  # Handle rectangular inputs
    h = np.concatenate((size_tensor / np.sqrt(ratio_tensor[0]),
                        sizes[0] / np.sqrt(ratio_tensor[1:])))
    # Divide by 2 to get half height and half width
    anchor_manipulations = np.tile(
        np.stack((-w, -h, w, h)).T, (in_height * in_width, 1)) / 2

    # Each center point will have boxes_per_pixel number of anchor boxes, so
    # generate a grid of all anchor box centers with boxes_per_pixel repeats
    out_grid = np.stack([shift_x, shift_y, shift_x, shift_y],
                        axis=1).repeat(boxes_per_pixel, axis=0)
    output = out_grid + anchor_manipulations
    return np.expand_dims(output, axis=0)
```
We can see that the shape of the returned anchor box variable `Y` is (batch size, number of anchor boxes, 4).
```
img = image.imread('../img/catdog.jpg').asnumpy()
h, w = img.shape[0:2]
print(h, w)
X = np.random.uniform(size=(1, 3, h, w)) # Construct input data
Y = multibox_prior(X, sizes=[0.75, 0.5, 0.25], ratios=[1, 2, 0.5])
Y.shape
```
After changing the shape of the anchor box variable `Y` to (image height, image width, number of anchor boxes centered on the same pixel, 4), we can obtain all the anchor boxes centered on a specified pixel position. In the following example, we access the first anchor box centered on (250, 250). It has four elements: the $x, y$ axis coordinates of the upper-left corner and the $x, y$ axis coordinates of the lower-right corner of the anchor box. The coordinate values of the $x$ and $y$ axis are divided by the width and height of the image, respectively, so the value range is between 0 and 1.
```
boxes = Y.reshape(h, w, 5, 4)
boxes[250, 250, 0, :]
```
In order to describe all anchor boxes centered on one pixel in the image, we first define the `show_bboxes` function to draw multiple bounding boxes on the image.
```
#@save
def show_bboxes(axes, bboxes, labels=None, colors=None):
    """Show bounding boxes."""
    def _make_list(obj, default_values=None):
        if obj is None:
            obj = default_values
        elif not isinstance(obj, (list, tuple)):
            obj = [obj]
        return obj

    labels = _make_list(labels)
    colors = _make_list(colors, ['b', 'g', 'r', 'm', 'c'])
    for i, bbox in enumerate(bboxes):
        color = colors[i % len(colors)]
        rect = d2l.bbox_to_rect(bbox.asnumpy(), color)
        axes.add_patch(rect)
        if labels and len(labels) > i:
            text_color = 'k' if color == 'w' else 'w'
            axes.text(rect.xy[0], rect.xy[1], labels[i], va='center',
                      ha='center', fontsize=9, color=text_color,
                      bbox=dict(facecolor=color, lw=0))
```
As we just saw, the coordinate values of the $x$ and $y$ axis in the variable `boxes` have been divided by the width and height of the image, respectively. When drawing images, we need to restore the original coordinate values of the anchor boxes and therefore define the variable `bbox_scale`. Now, we can draw all the anchor boxes centered on (250, 250) in the image. As you can see, the blue anchor box with a size of 0.75 and an aspect ratio of 1 covers the dog in the image well.
```
d2l.set_figsize()
bbox_scale = np.array((w, h, w, h))
fig = d2l.plt.imshow(img)
show_bboxes(fig.axes, boxes[250, 250, :, :] * bbox_scale, [
's=0.75, r=1', 's=0.5, r=1', 's=0.25, r=1', 's=0.75, r=2', 's=0.75, r=0.5'
])
```
## Intersection over Union
We just mentioned that the anchor box covers the dog in the image well. If the ground-truth bounding box of the target is known, how can "well" here be quantified? An intuitive method is to measure the similarity between anchor boxes and the ground-truth bounding box. We know that the Jaccard index can measure the similarity between two sets. Given sets $\mathcal{A}$ and $\mathcal{B}$, their Jaccard index is the size of their intersection divided by the size of their union:
$$J(\mathcal{A},\mathcal{B}) = \frac{\left|\mathcal{A} \cap \mathcal{B}\right|}{\left| \mathcal{A} \cup \mathcal{B}\right|}.$$
In fact, we can consider the pixel area of a bounding box as a collection of pixels. In this way, we can measure the similarity of two bounding boxes by the Jaccard index of their pixel sets. When we measure the similarity of two bounding boxes, we usually refer to the Jaccard index as intersection over union (IoU), which is the ratio of the intersecting area to the union area of the two bounding boxes, as shown in :numref:`fig_iou`. The value range of IoU is between 0 and 1: 0 means that there are no overlapping pixels between the two bounding boxes, while 1 indicates that the two bounding boxes are equal.

:label:`fig_iou`
For the remainder of this section, we will use IoU to measure the similarity between anchor boxes and ground-truth bounding boxes, and between different anchor boxes.
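Before looking at the vectorized implementation below, the IoU of two axis-aligned boxes in `(xmin, ymin, xmax, ymax)` form can be checked with a scalar plain-Python sketch (the coordinates here are hypothetical):

```python
def iou(box1, box2):
    # Corners of the intersection rectangle
    xmin = max(box1[0], box2[0])
    ymin = max(box1[1], box2[1])
    xmax = min(box1[2], box2[2])
    ymax = min(box1[3], box2[3])
    # Clamp at 0 so non-overlapping boxes give an intersection of 0
    inter = max(0.0, xmax - xmin) * max(0.0, ymax - ymin)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # intersection area 1, union 7
print(iou((0, 0, 1, 1), (0, 0, 1, 1)))  # identical boxes
```

The `box_iou` function below computes exactly this quantity, but for all $N \times M$ box pairs at once via broadcasting.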
```
#@save
def box_iou(boxes1, boxes2):
    """Compute IoU between two sets of boxes of shape (N, 4) and (M, 4)."""
    # Compute box areas
    box_area = lambda boxes: ((boxes[:, 2] - boxes[:, 0]) *
                              (boxes[:, 3] - boxes[:, 1]))
    area1 = box_area(boxes1)
    area2 = box_area(boxes2)
    lt = np.maximum(boxes1[:, None, :2], boxes2[:, :2])  # [N,M,2]
    rb = np.minimum(boxes1[:, None, 2:], boxes2[:, 2:])  # [N,M,2]
    wh = (rb - lt).clip(min=0)  # [N,M,2]
    inter = wh[:, :, 0] * wh[:, :, 1]  # [N,M]
    union = area1[:, None] + area2 - inter
    return inter / union
```
## Labeling Training Set Anchor Boxes
In the training set, we consider each anchor box as a training example. In order to train the object detection model, we need to mark two types of labels for each anchor box: first, the category of the target contained in the anchor box (category) and, second, the offset of the ground-truth bounding box relative to the anchor box (offset). In object detection, we first generate multiple anchor boxes, predict the categories and offsets for each anchor box, adjust the anchor box position according to the predicted offset to obtain the bounding boxes to be used for prediction, and finally filter out the prediction bounding boxes that need to be output.
We know that, in the object detection training set, each image is labelled with the location of the ground-truth bounding box and the category of the target contained. After the anchor boxes are generated, we primarily label anchor boxes based on the location and category information of the ground-truth bounding boxes similar to the anchor boxes. So how do we assign ground-truth bounding boxes to anchor boxes similar to them?
Assume that the anchor boxes in the image are $A_1, A_2, \ldots, A_{n_a}$ and the ground-truth bounding boxes are $B_1, B_2, \ldots, B_{n_b}$ and $n_a \geq n_b$. Define matrix $\mathbf{X} \in \mathbb{R}^{n_a \times n_b}$, where element $x_{ij}$ in the $i^\mathrm{th}$ row and $j^\mathrm{th}$ column is the IoU of the anchor box $A_i$ to the ground-truth bounding box $B_j$.
First, we find the largest element in the matrix $\mathbf{X}$ and record the row index and column index of the element as $i_1,j_1$. We assign the ground-truth bounding box $B_{j_1}$ to the anchor box $A_{i_1}$. Obviously, anchor box $A_{i_1}$ and ground-truth bounding box $B_{j_1}$ have the highest similarity among all the "anchor box--ground-truth bounding box" pairings. Next, discard all elements in the $i_1$th row and the $j_1$th column in the matrix $\mathbf{X}$. Find the largest remaining element in the matrix $\mathbf{X}$ and record the row index and column index of the element as $i_2,j_2$. We assign ground-truth bounding box $B_{j_2}$ to anchor box $A_{i_2}$ and then discard all elements in the $i_2$th row and the $j_2$th column in the matrix $\mathbf{X}$. At this point, elements in two rows and two columns in the matrix $\mathbf{X}$ have been discarded.
We proceed until all $n_b$ columns in the matrix $\mathbf{X}$ are discarded. At this point, we have assigned a ground-truth bounding box to each of $n_b$ anchor boxes.
Next, we only traverse the remaining $n_a - n_b$ anchor boxes. Given anchor box $A_i$, find the bounding box $B_j$ with the largest IoU with $A_i$ according to the $i^\mathrm{th}$ row of the matrix $\mathbf{X}$, and only assign ground-truth bounding box $B_j$ to anchor box $A_i$ when the IoU is greater than the predetermined threshold.
As shown in :numref:`fig_anchor_label` (left), assuming that the maximum value in the matrix $\mathbf{X}$ is $x_{23}$, we will assign ground-truth bounding box $B_3$ to anchor box $A_2$. Then, we discard all the elements in row 2 and column 3 of the matrix, find the largest element $x_{71}$ of the remaining shaded area, and assign ground-truth bounding box $B_1$ to anchor box $A_7$. Then, as shown in :numref:`fig_anchor_label` (middle), discard all the elements in row 7 and column 1 of the matrix, find the largest element $x_{54}$ of the remaining shaded area, and assign ground-truth bounding box $B_4$ to anchor box $A_5$. Finally, as shown in :numref:`fig_anchor_label` (right), discard all the elements in row 5 and column 4 of the matrix, find the largest element $x_{92}$ of the remaining shaded area, and assign ground-truth bounding box $B_2$ to anchor box $A_9$. After that, we only need to traverse the remaining anchor boxes of $A_1, A_3, A_4, A_6, A_8$ and determine whether to assign ground-truth bounding boxes to the remaining anchor boxes according to the threshold.

:label:`fig_anchor_label`
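The greedy discard procedure described above can be sketched in plain NumPy (a minimal illustration with a hypothetical 4×2 IoU matrix; the full labeling logic follows below):

```python
import numpy as np

# Hypothetical IoU matrix: 4 anchor boxes (rows) x 2 ground-truth boxes (cols)
iou = np.array([[0.10, 0.20],
                [0.60, 0.30],
                [0.05, 0.70],
                [0.40, 0.10]])
X = iou.copy()
assignment = np.full(4, -1)  # -1 means "not yet assigned"

for _ in range(X.shape[1]):  # one pass per ground-truth box
    i, j = np.unravel_index(np.argmax(X), X.shape)  # largest remaining IoU
    assignment[i] = j
    X[i, :] = -1  # discard row i
    X[:, j] = -1  # discard column j

print(assignment)  # anchors 1 and 2 are assigned gt boxes 0 and 1
```

Here the maximum 0.70 assigns ground-truth box 1 to anchor 2 first; after discarding that row and column, 0.60 assigns box 0 to anchor 1. The remaining anchors would then be handled by the IoU threshold rule.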
```
#@save
def match_anchor_to_bbox(ground_truth, anchors, device, iou_threshold=0.5):
    """Assign ground-truth bounding boxes to anchor boxes similar to them."""
    num_anchors, num_gt_boxes = anchors.shape[0], ground_truth.shape[0]
    # Element `x_ij` in the `i^th` row and `j^th` column is the IoU
    # of the anchor box `anc_i` to the ground-truth bounding box `box_j`
    jaccard = box_iou(anchors, ground_truth)
    # Initialize the tensor to hold the assigned ground-truth bbox for each anchor
    anchors_bbox_map = np.full((num_anchors,), -1, dtype=np.int32, ctx=device)
    # Assign ground-truth bounding boxes according to the threshold
    max_ious, indices = np.max(jaccard, axis=1), np.argmax(jaccard, axis=1)
    anc_i = np.nonzero(max_ious >= iou_threshold)[0]
    box_j = indices[max_ious >= iou_threshold]
    anchors_bbox_map[anc_i] = box_j
    # Find the anchor with the largest IoU for each ground-truth bbox
    col_discard = np.full((num_anchors,), -1)
    row_discard = np.full((num_gt_boxes,), -1)
    for _ in range(num_gt_boxes):
        max_idx = np.argmax(jaccard)
        box_idx = (max_idx % num_gt_boxes).astype('int32')
        anc_idx = (max_idx / num_gt_boxes).astype('int32')
        anchors_bbox_map[anc_idx] = box_idx
        jaccard[:, box_idx] = col_discard
        jaccard[anc_idx, :] = row_discard
    return anchors_bbox_map
```
Now we can label the categories and offsets of the anchor boxes. If an anchor box $A$ is assigned ground-truth bounding box $B$, the category of the anchor box $A$ is set to the category of $B$. And the offset of the anchor box $A$ is set according to the relative position of the central coordinates of $B$ and $A$ and the relative sizes of the two boxes. Because the positions and sizes of various boxes in the dataset may vary, these relative positions and relative sizes usually require some special transformations to make the offset distribution more uniform and easier to fit. Assume the center coordinates of anchor box $A$ and its assigned ground-truth bounding box $B$ are $(x_a, y_a), (x_b, y_b)$, the widths of $A$ and $B$ are $w_a, w_b$, and their heights are $h_a, h_b$, respectively. In this case, a common technique is to label the offset of $A$ as
$$\left( \frac{ \frac{x_b - x_a}{w_a} - \mu_x }{\sigma_x},
\frac{ \frac{y_b - y_a}{h_a} - \mu_y }{\sigma_y},
\frac{ \log \frac{w_b}{w_a} - \mu_w }{\sigma_w},
\frac{ \log \frac{h_b}{h_a} - \mu_h }{\sigma_h}\right),$$
where the default values of the constants are $\mu_x = \mu_y = \mu_w = \mu_h = 0$, $\sigma_x=\sigma_y=0.1$, and $\sigma_w=\sigma_h=0.2$.
This transformation is implemented below in the `offset_boxes` function.
If an anchor box is not assigned a ground-truth bounding box, we only need to set the category of the anchor box to background. Anchor boxes whose category is background are often referred to as negative anchor boxes, and the rest are referred to as positive anchor boxes.
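As a numeric sketch of the offset transformation above (the center-form boxes here are hypothetical; constants are the defaults from the text):

```python
import math

# Anchor box A and assigned ground-truth box B in center form:
# (center x, center y, width, height)
xa, ya, wa, ha = 0.50, 0.50, 0.20, 0.40
xb, yb, wb, hb = 0.54, 0.46, 0.25, 0.36

# Default constants
mu = (0.0, 0.0, 0.0, 0.0)
sigma = (0.1, 0.1, 0.2, 0.2)

offset = (
    ((xb - xa) / wa - mu[0]) / sigma[0],     # x offset
    ((yb - ya) / ha - mu[1]) / sigma[1],     # y offset
    (math.log(wb / wa) - mu[2]) / sigma[2],  # log width ratio
    (math.log(hb / ha) - mu[3]) / sigma[3],  # log height ratio
)
print(offset)
```

Note that `offset_boxes` below implements the same transformation with the multipliers 10 and 5, i.e. $1/\sigma_{x}=1/\sigma_{y}=10$ and $1/\sigma_{w}=1/\sigma_{h}=5$.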
```
#@save
def offset_boxes(anchors, assigned_bb, eps=1e-6):
    c_anc = d2l.box_corner_to_center(anchors)
    c_assigned_bb = d2l.box_corner_to_center(assigned_bb)
    offset_xy = 10 * (c_assigned_bb[:, :2] - c_anc[:, :2]) / c_anc[:, 2:]
    offset_wh = 5 * np.log(eps + c_assigned_bb[:, 2:] / c_anc[:, 2:])
    offset = np.concatenate([offset_xy, offset_wh], axis=1)
    return offset

#@save
def multibox_target(anchors, labels):
    batch_size, anchors = labels.shape[0], anchors.squeeze(0)
    batch_offset, batch_mask, batch_class_labels = [], [], []
    device, num_anchors = anchors.ctx, anchors.shape[0]
    for i in range(batch_size):
        label = labels[i, :, :]
        anchors_bbox_map = match_anchor_to_bbox(label[:, 1:], anchors, device)
        bbox_mask = np.tile((np.expand_dims(
            (anchors_bbox_map >= 0), axis=-1)), (1, 4)).astype('int32')
        # Initialize class_labels and assigned bbox coordinates with zeros
        class_labels = np.zeros(num_anchors, dtype=np.int32, ctx=device)
        assigned_bb = np.zeros((num_anchors, 4), dtype=np.float32, ctx=device)
        # Assign class labels to the anchor boxes using matched gt bbox labels
        # If no gt bbox is assigned to an anchor box, then let the
        # class_labels and assigned_bb remain zero, i.e. the background class
        indices_true = np.nonzero(anchors_bbox_map >= 0)[0]
        bb_idx = anchors_bbox_map[indices_true]
        class_labels[indices_true] = label[bb_idx, 0].astype('int32') + 1
        assigned_bb[indices_true] = label[bb_idx, 1:]
        # Offset transformations
        offset = offset_boxes(anchors, assigned_bb) * bbox_mask
        batch_offset.append(offset.reshape(-1))
        batch_mask.append(bbox_mask.reshape(-1))
        batch_class_labels.append(class_labels)
    bbox_offset = np.stack(batch_offset)
    bbox_mask = np.stack(batch_mask)
    class_labels = np.stack(batch_class_labels)
    return (bbox_offset, bbox_mask, class_labels)
```
Below we demonstrate a detailed example. We define ground-truth bounding boxes for the cat and dog in the image, where the first element is the category (0 for dog, 1 for cat) and the remaining four elements are the $x, y$ axis coordinates of the upper-left corner and the $x, y$ axis coordinates of the lower-right corner (the value range is between 0 and 1). Here, we construct five anchor boxes, labeled by the coordinates of their upper-left and lower-right corners, which are recorded as $A_0, \ldots, A_4$ (the index in the program starts from 0). First, draw the positions of these anchor boxes and the ground-truth bounding boxes in the image.
```
ground_truth = np.array([[0, 0.1, 0.08, 0.52, 0.92],
                         [1, 0.55, 0.2, 0.9, 0.88]])
anchors = np.array([[0, 0.1, 0.2, 0.3], [0.15, 0.2, 0.4, 0.4],
                    [0.63, 0.05, 0.88, 0.98], [0.66, 0.45, 0.8, 0.8],
                    [0.57, 0.3, 0.92, 0.9]])

fig = d2l.plt.imshow(img)
show_bboxes(fig.axes, ground_truth[:, 1:] * bbox_scale, ['dog', 'cat'], 'k')
show_bboxes(fig.axes, anchors * bbox_scale, ['0', '1', '2', '3', '4']);
```
We can label categories and offsets for anchor boxes by using the `multibox_target` function. This function sets the background category to 0 and increments the integer index of the target category from zero by 1 (1 for dog and 2 for cat).
We add example dimensions to the anchor boxes and ground-truth bounding boxes and construct random predicted results with a shape of (batch size, number of categories including background, number of anchor boxes) by using the `expand_dims` function.
```
labels = multibox_target(np.expand_dims(anchors, axis=0),
                         np.expand_dims(ground_truth, axis=0))
```
There are three items in the returned result, all of which are in the tensor format. The third item contains the categories labeled for the anchor boxes.
```
labels[2]
```
We analyze these labelled categories based on positions of anchor boxes and ground-truth bounding boxes in the image. First, in all "anchor box--ground-truth bounding box" pairs, the IoU of anchor box $A_4$ to the ground-truth bounding box of the cat is the largest, so the category of anchor box $A_4$ is labeled as cat. Without considering anchor box $A_4$ or the ground-truth bounding box of the cat, in the remaining "anchor box--ground-truth bounding box" pairs, the pair with the largest IoU is anchor box $A_1$ and the ground-truth bounding box of the dog, so the category of anchor box $A_1$ is labeled as dog. Next, traverse the remaining three unlabeled anchor boxes. The category of the ground-truth bounding box with the largest IoU with anchor box $A_0$ is dog, but the IoU is smaller than the threshold (the default is 0.5), so the category is labeled as background; the category of the ground-truth bounding box with the largest IoU with anchor box $A_2$ is cat and the IoU is greater than the threshold, so the category is labeled as cat; the category of the ground-truth bounding box with the largest IoU with anchor box $A_3$ is cat, but the IoU is smaller than the threshold, so the category is labeled as background.
The second item of the return value is a mask variable, with the shape of (batch size, four times the number of anchor boxes). The elements in the mask variable correspond one-to-one with the four offset values of each anchor box.
Because we do not care about background detection, the offsets of the negative class should not affect the objective function. Through elementwise multiplication, the zeros in the mask variable filter out the negative-class offsets before the objective function is calculated.
```
labels[1]
```
The first item returned is the four offset values labeled for each anchor box, with the offsets of negative class anchor boxes labeled as 0.
```
labels[0]
```
## Bounding Boxes for Prediction
During the model prediction phase, we first generate multiple anchor boxes for the image and then predict categories and offsets for these anchor boxes one by one. Then, we obtain prediction bounding boxes based on the anchor boxes and their predicted offsets.
Below we implement the `offset_inverse` function, which takes anchors and offset predictions as inputs and applies the inverse offset transformations to return the predicted bounding box coordinates.
```
#@save
def offset_inverse(anchors, offset_preds):
    c_anc = d2l.box_corner_to_center(anchors)
    c_pred_bb_xy = (offset_preds[:, :2] * c_anc[:, 2:] / 10) + c_anc[:, :2]
    c_pred_bb_wh = np.exp(offset_preds[:, 2:] / 5) * c_anc[:, 2:]
    c_pred_bb = np.concatenate((c_pred_bb_xy, c_pred_bb_wh), axis=1)
    predicted_bb = d2l.box_center_to_corner(c_pred_bb)
    return predicted_bb
```
When there are many anchor boxes, many similar prediction bounding boxes may be output for the same target. To simplify the results, we can remove similar prediction bounding boxes. A commonly used method is called non-maximum suppression (NMS).
Let us take a look at how NMS works. For a prediction bounding box $B$, the model calculates the predicted probability for each category. Assume the largest predicted probability is $p$; the category corresponding to this probability is the predicted category of $B$. We also refer to $p$ as the confidence level of prediction bounding box $B$. On the same image, we sort the prediction bounding boxes with predicted categories other than background by confidence level from high to low, and obtain the list $L$. Select the prediction bounding box $B_1$ with the highest confidence level from $L$ as a baseline and remove from $L$ all non-baseline prediction bounding boxes whose IoU with $B_1$ is greater than a certain threshold. The threshold here is a preset hyperparameter. At this point, $L$ retains the prediction bounding box with the highest confidence level and removes other prediction bounding boxes similar to it.
Next, select the prediction bounding box $B_2$ with the second highest confidence level from $L$ as a baseline, and remove all non-benchmark prediction bounding boxes with an IoU with $B_2$ greater than a certain threshold from $L$. Repeat this process until all prediction bounding boxes in $L$ have been used as a baseline. At this time, the IoU of any pair of prediction bounding boxes in $L$ is less than the threshold. Finally, output all prediction bounding boxes in the list $L$.
```
#@save
def nms(boxes, scores, iou_threshold):
    # Sort scores in descending order and return their indices
    B = scores.argsort()[::-1]
    keep = []  # Indices of the boxes that will be kept
    while B.size > 0:
        i = B[0]
        keep.append(i)
        if B.size == 1: break
        iou = box_iou(boxes[i, :].reshape(-1, 4),
                      boxes[B[1:], :].reshape(-1, 4)).reshape(-1)
        inds = np.nonzero(iou <= iou_threshold)[0]
        B = B[inds + 1]
    return np.array(keep, dtype=np.int32, ctx=boxes.ctx)

#@save
def multibox_detection(cls_probs, offset_preds, anchors, nms_threshold=0.5,
                       pos_threshold=0.00999999978):
    device, batch_size = cls_probs.ctx, cls_probs.shape[0]
    anchors = np.squeeze(anchors, axis=0)
    num_classes, num_anchors = cls_probs.shape[1], cls_probs.shape[2]
    out = []
    for i in range(batch_size):
        cls_prob, offset_pred = cls_probs[i], offset_preds[i].reshape(-1, 4)
        conf, class_id = np.max(cls_prob[1:], 0), np.argmax(cls_prob[1:], 0)
        predicted_bb = offset_inverse(anchors, offset_pred)
        keep = nms(predicted_bb, conf, nms_threshold)
        # Find all non-keep indices and set the class_id to background
        all_idx = np.arange(num_anchors, dtype=np.int32, ctx=device)
        combined = np.concatenate((keep, all_idx))
        unique, counts = np.unique(combined, return_counts=True)
        non_keep = unique[counts == 1]
        all_id_sorted = np.concatenate((keep, non_keep))
        class_id[non_keep] = -1
        class_id = class_id[all_id_sorted].astype('float32')
        conf, predicted_bb = conf[all_id_sorted], predicted_bb[all_id_sorted]
        # Threshold for a positive prediction
        below_min_idx = (conf < pos_threshold)
        class_id[below_min_idx] = -1
        conf[below_min_idx] = 1 - conf[below_min_idx]
        pred_info = np.concatenate((np.expand_dims(
            class_id, axis=1), np.expand_dims(conf, axis=1), predicted_bb),
            axis=1)
        out.append(pred_info)
    return np.stack(out)
```
Next, we will look at a detailed example. First, construct four anchor boxes. For the sake of simplicity, we assume that predicted offsets are all 0. This means that the prediction bounding boxes are anchor boxes. Finally, we construct a predicted probability for each category.
```
anchors = np.array([[0.1, 0.08, 0.52, 0.92], [0.08, 0.2, 0.56, 0.95],
                    [0.15, 0.3, 0.62, 0.91], [0.55, 0.2, 0.9, 0.88]])
offset_preds = np.array([0] * anchors.size)
cls_probs = np.array([[0] * 4,  # Predicted probability for background
                      [0.9, 0.8, 0.7, 0.1],  # Predicted probability for dog
                      [0.1, 0.2, 0.3, 0.9]])  # Predicted probability for cat
```
Print prediction bounding boxes and their confidence levels on the image.
```
fig = d2l.plt.imshow(img)
show_bboxes(fig.axes, anchors * bbox_scale,
['dog=0.9', 'dog=0.8', 'dog=0.7', 'cat=0.9'])
```
We use the `multibox_detection` function to perform NMS and set the threshold to 0.5, adding an example dimension to the tensor input. We can see that the shape of the returned result is (batch size, number of anchor boxes, 6). The 6 elements of each row represent the output information for the same prediction bounding box. The first element is the predicted category index, which starts from 0 (0 is dog, 1 is cat). The value -1 indicates background or removal in NMS. The second element is the confidence level of the prediction bounding box. The remaining four elements are the $x, y$ axis coordinates of the upper-left corner and the $x, y$ axis coordinates of the lower-right corner of the prediction bounding box (the value range is between 0 and 1).
```
output = multibox_detection(np.expand_dims(cls_probs, axis=0),
                            np.expand_dims(offset_preds, axis=0),
                            np.expand_dims(anchors, axis=0),
                            nms_threshold=0.5)
output
```
We remove the prediction bounding boxes of category -1 and visualize the results retained by NMS.
```
fig = d2l.plt.imshow(img)
for i in output[0].asnumpy():
    if i[0] == -1:
        continue
    label = ('dog=', 'cat=')[int(i[0])] + str(i[1])
    show_bboxes(fig.axes, [np.array(i[2:]) * bbox_scale], label)
```
In practice, we can remove prediction bounding boxes with lower confidence levels before performing NMS, thereby reducing the amount of computation for NMS. We can also filter the output of NMS, for example, by only retaining results with higher confidence levels as the final output.
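A minimal plain-Python sketch of that practice (hypothetical boxes and scores, with a simple greedy NMS; not the `multibox_detection` implementation above):

```python
def iou(b1, b2):
    # Boxes are (xmin, ymin, xmax, ymax)
    xmin, ymin = max(b1[0], b2[0]), max(b1[1], b2[1])
    xmax, ymax = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, xmax - xmin) * max(0.0, ymax - ymin)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter)

def nms_with_prefilter(boxes, scores, conf_threshold, iou_threshold):
    # 1. Drop low-confidence boxes up front to reduce NMS work
    candidates = [i for i, s in enumerate(scores) if s >= conf_threshold]
    # 2. Greedy NMS on the survivors, highest confidence first
    candidates.sort(key=lambda i: scores[i], reverse=True)
    keep = []
    while candidates:
        best = candidates.pop(0)
        keep.append(best)
        candidates = [i for i in candidates
                      if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep

boxes = [(0.1, 0.1, 0.5, 0.5), (0.12, 0.1, 0.52, 0.5), (0.6, 0.6, 0.9, 0.9)]
scores = [0.9, 0.8, 0.05]
print(nms_with_prefilter(boxes, scores, conf_threshold=0.1, iou_threshold=0.5))
```

Here the third box is discarded by the confidence pre-filter before NMS ever computes an IoU for it, and the second box is then suppressed because it heavily overlaps the highest-scoring box.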
## Summary
* We generate multiple anchor boxes with different sizes and aspect ratios, centered on each pixel.
* IoU, also called Jaccard index, measures the similarity of two bounding boxes. It is the ratio of the intersecting area to the union area of two bounding boxes.
* In the training set, we mark two types of labels for each anchor box: one is the category of the target contained in the anchor box and the other is the offset of the ground-truth bounding box relative to the anchor box.
* When predicting, we can use non-maximum suppression (NMS) to remove similar prediction bounding boxes, thereby simplifying the results.
## Exercises
1. Change the `sizes` and `ratios` values in the `multibox_prior` function and observe the changes to the generated anchor boxes.
1. Construct two bounding boxes with an IoU of 0.5, and observe their coincidence.
1. Verify the output of offset `labels[0]` by marking the anchor box offsets as defined in this section (the constant is the default value).
1. Modify the variable `anchors` in the "Labeling Training Set Anchor Boxes" and "Output Bounding Boxes for Prediction" sections. How do the results change?
[Discussions](https://discuss.d2l.ai/t/370)
# "Analysis of SRAG cases in children and adolescents"
> "Data on SRAG hospitalization cases from opendatasus"
- toc: true
- branch: master
- badges: false
- comments: false
- numbersections: true
- categories: [srag]
- hide: false
- search_exclude: true
## Objectives
- Analyze the behavior of cases and lethality in children and adolescents
- Analyze over time and across different states
```
#hide
import sqlite3 as sql
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import altair as alt  # used by the charts below
from srag_functions import *

db_name = 'srag'
db_path = f'data/opendatasus/{db_name}.db'
conn = sql.connect(db_path)
df_srag = pd.read_sql(f'SELECT * FROM {db_name} WHERE ano >= 2019', conn)

#hide
def get_proportion_cases(df, index_cols, categories_cols):
    df_categories = pd.DataFrame(df.groupby(by=index_cols + categories_cols).size(), columns=['casos']).reset_index()
    df_categories = df_categories.pivot(index=index_cols, columns=categories_cols, values='casos')
    # e.g. if index_cols = ['ano', 'SEM_PRI'], each row holds the total per year and week
    df_subtotal = pd.DataFrame(df.groupby(by=index_cols).size(), columns=['total'])
    # Proportion of each category within the "week", between 0.0 and 1.0
    df_rel = df_categories.div(df_subtotal.values, axis=0)
    # For now, consider only the first category
    selected_category = categories_cols[0]
    df1 = pd.melt(df_rel, ignore_index=False, value_name='proporção').set_index(selected_category, append=True)
    df2 = pd.melt(df_categories, ignore_index=False, value_name='casos').set_index(selected_category, append=True)
    return pd.concat([df1, df2], axis=1).reset_index()

def highlight_max(s):
    '''Highlight the maximum in a Series in yellow.'''
    is_max = s == s.max()
    return ['background-color: yellow' if v else '' for v in is_max]
```
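To make the cell above easier to follow, here is a minimal self-contained sketch of the same proportion computation on a synthetic DataFrame (the rows and counts are made up; the real notebook reads them from the opendatasus database):

```python
import pandas as pd

# Synthetic data: one row per hospitalization, with year, week, and age band
df = pd.DataFrame({
    'ano':          [2020, 2020, 2020, 2021, 2021, 2021, 2021],
    'SEM_PRI':      [1, 1, 1, 1, 1, 1, 1],
    'faixa_etaria': ['00-20', '00-20', '21+', '00-20', '21+', '21+', '21+'],
})

# Cases per (year, week, age band), as in get_proportion_cases
counts = df.groupby(['ano', 'SEM_PRI', 'faixa_etaria']).size().unstack(fill_value=0)
# Total per (year, week), then the proportion of each age band within the week
totals = counts.sum(axis=1)
proportions = counts.div(totals, axis=0)
print(proportions)
```

Each row of `proportions` sums to 1; the '00-20' column is what the charts below plot over epidemiological weeks.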
## Proportion of SRAG cases in children and adolescents (up to 19 years old)
```
#hide-input
index_cols = ['ano', 'SEM_PRI']
categories_cols = ['faixa_etaria']
# df_covid = df_srag.query('CLASSI_FIN == "COVID-19"')
# df_casos_faixas = get_proportion_cases(df_covid, index_cols, categories_cols)
df_casos_faixas = get_proportion_cases(df_srag, index_cols, categories_cols)
df_chart = df_casos_faixas.query('faixa_etaria == "00-20"')

alt.Chart(df_chart).mark_line(point=True).encode(
    x='SEM_PRI',
    y='proporção',
    color='ano:N',
    tooltip=['SEM_PRI', 'proporção', 'casos', 'ano']
)
```
## Total SRAG cases in children and adolescents (up to 19 years old)
```
#hide-input
alt.Chart(df_chart).mark_line(point=True).encode(
    x='SEM_PRI',
    y='casos',
    color='ano:N',
    tooltip=['SEM_PRI', 'proporção', 'casos', 'ano']
)
```
> Table with data
```
#collapse-output
#hide-input
df_chart.style.apply(highlight_max,subset=['casos','proporção']).format({'casos':'{:.0f}','proporção':'{:.2%}'})
```
## Proportion of SRAG-COVID cases in children and adolescents (up to 19 years old)
```
#hide-input
index_cols = ['ano', 'SEM_PRI']
categories_cols = ['faixa_etaria']
df_covid = df_srag.query('CLASSI_FIN == "COVID-19" & ano >= 2020')
df_casos_faixas = get_proportion_cases(df_covid, index_cols, categories_cols)
df_chart = df_casos_faixas.query('faixa_etaria == "00-20"')

alt.Chart(df_chart).mark_line(point=True).encode(
    x='SEM_PRI',
    y='proporção',
    color='ano:N',
    tooltip=['SEM_PRI', 'proporção', 'casos', 'ano']
)
```
## Total SRAG-COVID cases in children and adolescents (up to 19 years old)
```
#hide-input
alt.Chart(df_chart).mark_line(point=True).encode(
x='SEM_PRI',
y='casos',
color='ano:N',
tooltip=['SEM_PRI','proporção','casos','ano']
)
```
> Data table
```
#collapse-output
#hide-input
df_chart.style.apply(highlight_max,subset=['casos','proporção']).format({'casos':'{:.0f}','proporção':'{:.2%}'})
```
## Proportion of SRAG deaths in children and adolescents (up to 19 years old)
```
#hide-input
index_cols = ['ano','SEM_PRI']
categories_cols =['faixa_etaria']
# df_covid = df_srag.query('CLASSI_FIN == "COVID-19" and EVOLUCAO == "obito"')
# df_covid = df_srag.query('EVOLUCAO == "obito" and UF_RES == "29_Bahia" and idade_anos <= 5')
df_covid = df_srag.query('EVOLUCAO == "obito"')
df_obitos_faixas = get_proportion_cases(df_covid,index_cols,categories_cols).fillna(0)
df_chart = df_obitos_faixas.query('faixa_etaria == "00-20"')
alt.Chart(df_chart).mark_line(point=True).encode(
x='SEM_PRI',
y='proporção',
color='ano:N',
tooltip=['SEM_PRI','proporção','casos','ano']
)
```
## Total SRAG-COVID deaths in children and adolescents (up to 19 years old)
```
#hide-input
alt.Chart(df_chart).mark_line(point=True).encode(
x='SEM_PRI',
y='casos',
color='ano:N',
tooltip=['SEM_PRI','proporção','casos','ano']
)
```
> Data table
```
#hide-input
#collapse-output
df_chart.style.apply(highlight_max,subset=['casos','proporção']).format({'casos':'{:.0f}','proporção':'{:.2%}'})
```
## Case fatality of SRAG cases in children and adolescents (up to 19 years old)
> Note: for the case-fatality calculation I counted both "obito" and "obito_outras_causas" outcomes
```
#hide-input
# df_casos_concluidos = df_srag.query('CLASSI_FIN == "COVID-19" and EVOLUCAO in ("obito","cura","obito_outras_causas")')
df_casos_concluidos = df_srag.query('EVOLUCAO in ("obito","cura","obito_outras_causas")')
index_cols = ['ano','SEM_PRI','faixa_etaria']
df_casos_concluidos_faixas = pd.DataFrame(df_casos_concluidos.groupby(by=index_cols).size(),columns=['casos'])
df_obitos = df_casos_concluidos.query('EVOLUCAO in ("obito","obito_outras_causas")')
df_obitos_faixas = pd.DataFrame(df_obitos.groupby(by=index_cols).size(),columns=['obitos'])
df_casos_concluidos_faixas
df_letalidate = pd.concat([df_casos_concluidos_faixas,df_obitos_faixas],axis=1).fillna(0)
df_letalidate['letalidade'] = df_letalidate['obitos'] / df_letalidate['casos']
df_chart = df_letalidate.reset_index().query('faixa_etaria == "00-20"')
# another way of doing the same thing; not sure whether it would be more efficient.
# df_chart = df_letalidate.sort_index().loc[(slice(None),slice(None),slice('00-20')),:].reset_index()
alt.Chart(df_chart).mark_line(point=True).encode(
x='SEM_PRI',
y='letalidade',
color='ano:N',
tooltip=df_chart.columns.to_list()
)
```
> Data table
```
#collapse-output
#hide-input
df_chart.style.apply(highlight_max,subset=['casos','obitos','letalidade']).format({'obitos':'{:.0f}','letalidade':'{:.2%}'})
```
## Proportion of SRAG cases in children and adolescents (up to 19 years old) by state (UF)
```
#hide-input
index_cols = ['UF_RES']
categories_cols =['faixa_etaria']
# df_covid = df_srag.query('CLASSI_FIN == "COVID-19"')
# df = df_covid.query('UF_RES != "nan_nd"')
df = df_srag.query('UF_RES != "nan_nd"')#in ("29_Bahia","33_Rio de Janeiro","35_São Paulo")')
# df = df.query('SEM_PRI_ABS >= 15 and SEM_PRI_ABS <= 69')
df_casos_faixas = get_proportion_cases(df,index_cols,categories_cols)
df_chart = df_casos_faixas.query('faixa_etaria == "00-20"')
category = 'UF_RES'
chart = alt.Chart(df_chart).mark_bar().encode(
# x='UF_RES',
y='proporção:Q',
x=alt.X('UF_RES:N',sort='y'),
color= category,
tooltip=df_chart.columns.to_list()
)
ns_opacity = 0.01
selection = alt.selection_multi(empty='all', fields=[category], bind='legend')
chart = chart.add_selection(
selection
).encode(
opacity=alt.condition(selection, alt.value(1.0), alt.value(ns_opacity))
)
chart
```
## Proportion of SRAG deaths in children and adolescents (up to 19 years old) by state (UF)
```
#hide-input
index_cols = ['UF_RES']
categories_cols =['faixa_etaria']
# df_covid = df_srag.query('CLASSI_FIN == "COVID-19"')
# df = df_covid.query('UF_RES != "nan_nd"')
df = df_srag.query('UF_RES != "nan_nd"')#in ("29_Bahia","33_Rio de Janeiro","35_São Paulo")')
# df = df.query('SEM_PRI_ABS >= 15 and SEM_PRI_ABS <= 69')
df = df.query('EVOLUCAO == "obito"')
df_casos_faixas = get_proportion_cases(df,index_cols,categories_cols)
df_chart = df_casos_faixas.query('faixa_etaria == "00-20"')
category = 'UF_RES'
chart = alt.Chart(df_chart).mark_bar().encode(
# x='UF_RES',
y='proporção:Q',
x=alt.X('UF_RES:N',sort='y'),
color= category,
tooltip=df_chart.columns.to_list()
)
ns_opacity = 0.01
selection = alt.selection_multi(empty='all', fields=[category], bind='legend')
chart = chart.add_selection(
selection
).encode(
opacity=alt.condition(selection, alt.value(1.0), alt.value(ns_opacity))
)
chart
```
> Weekly analysis - different states (UF)
```
#hide-input
index_cols = ['UF_RES','SEM_PRI_ABS']
categories_cols =['faixa_etaria']
# df_covid = df_srag.query('CLASSI_FIN == "COVID-19"')
# df = df_covid.query('UF_RES != "nan_nd"')
df = df_srag.query('UF_RES != "nan_nd"')#in ("29_Bahia","33_Rio de Janeiro","35_São Paulo")')
df = df.query('SEM_PRI_ABS >= 15 and SEM_PRI_ABS <= 69')
df_casos_faixas = get_proportion_cases(df,index_cols,categories_cols)
df_chart = df_casos_faixas.query('faixa_etaria == "00-20"')
category = 'UF_RES'
chart = alt.Chart(df_chart).mark_line(point=True).encode(
x='SEM_PRI_ABS',
y='proporção',
color= category,
tooltip=df_chart.columns.to_list()
)
ns_opacity = 0.01
selection = alt.selection_multi(empty='all', fields=[category], bind='legend')
chart = chart.add_selection(
selection
).encode(
opacity=alt.condition(selection, alt.value(1.0), alt.value(ns_opacity))
)
chart
```
> Weekly analysis - select one state (UF)
```
#hide-input
chart = chart.transform_filter(
selection
)
chart
```
TSG034 - Livy logs
==================
Description
-----------
Steps
-----
### Parameters
```
import re
tail_lines = 500
pod = None # All
container = 'hadoop-livy-sparkhistory'
log_files = [ '/var/log/supervisor/log/livy*' ]
expressions_to_analyze = [
re.compile(".{17} WARN "),
re.compile(".{17} ERROR ")
]
```
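As a quick sanity check, the `.{17}` prefix in the expressions above is meant to skip a fixed-width timestamp before the log level. A small sketch with invented sample lines:

```python
import re

expressions_to_analyze = [
    re.compile(".{17} WARN "),
    re.compile(".{17} ERROR ")
]

# "21/05/04 10:15:00" is exactly 17 characters, followed by the level
sample = "21/05/04 10:15:00 WARN LivySession: session idle timeout"
other = "21/05/04 10:15:00 INFO LivySession: session started"

# re.match anchors at the start of the line, as the analysis loop does
print(any(e.match(sample) for e in expressions_to_analyze))  # True
print(any(e.match(other) for e in expressions_to_analyze))   # False
```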
### Instantiate Kubernetes client
```
# Instantiate the Python Kubernetes client into 'api' variable
import os
from IPython.display import Markdown
try:
from kubernetes import client, config
from kubernetes.stream import stream
if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ:
config.load_incluster_config()
else:
try:
config.load_kube_config()
except:
display(Markdown(f'HINT: Use [TSG118 - Configure Kubernetes config](../repair/tsg118-configure-kube-config.ipynb) to resolve this issue.'))
raise
api = client.CoreV1Api()
print('Kubernetes client instantiated')
except ImportError:
display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))
raise
```
### Get the namespace for the big data cluster
Get the namespace of the Big Data Cluster from the Kubernetes API.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting
Azure Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name
except IndexError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print('The kubernetes namespace for your big data cluster is: ' + namespace)
```
### Get tail for log
```
# Display the last 'tail_lines' of files in 'log_files' list
pods = api.list_namespaced_pod(namespace)
entries_for_analysis = []
for p in pods.items:
if pod is None or p.metadata.name == pod:
for c in p.spec.containers:
if container is None or c.name == container:
for log_file in log_files:
print (f"- LOGS: '{log_file}' for CONTAINER: '{c.name}' in POD: '{p.metadata.name}'")
try:
output = stream(api.connect_get_namespaced_pod_exec, p.metadata.name, namespace, command=['/bin/sh', '-c', f'tail -n {tail_lines} {log_file}'], container=c.name, stderr=True, stdout=True)
except Exception:
print (f"FAILED to get LOGS for CONTAINER: {c.name} in POD: {p.metadata.name}")
else:
for line in output.split('\n'):
for expression in expressions_to_analyze:
if expression.match(line):
entries_for_analysis.append(line)
print(line)
print("")
print(f"{len(entries_for_analysis)} log entries found for further analysis.")
```
### Analyze log entries and suggest relevant Troubleshooting Guides
```
# Analyze log entries and suggest further relevant troubleshooting guides
from IPython.display import Markdown
import os
import json
import requests
import ipykernel
import datetime
from urllib.parse import urljoin
from notebook import notebookapp
def get_notebook_name():
"""Return the full path of the jupyter notebook. Some runtimes (e.g. ADS)
have the kernel_id in the filename of the connection file. If so, the
notebook name at runtime can be determined using `list_running_servers`.
Other runtimes (e.g. azdata) do not have the kernel_id in the filename of
the connection file, therefore we are unable to establish the filename
at runtime.
"""
connection_file = os.path.basename(ipykernel.get_connection_file())
# If the runtime has the kernel_id in the connection filename, use it to
# get the real notebook name at runtime, otherwise, use the notebook
# filename from build time.
try:
kernel_id = connection_file.split('-', 1)[1].split('.')[0]
except:
pass
else:
for servers in list(notebookapp.list_running_servers()):
try:
response = requests.get(urljoin(servers['url'], 'api/sessions'), params={'token': servers.get('token', '')}, timeout=.01)
except:
pass
else:
for nn in json.loads(response.text):
if nn['kernel']['id'] == kernel_id:
return nn['path']
def load_json(filename):
with open(filename, encoding="utf8") as json_file:
return json.load(json_file)
def get_notebook_rules():
"""Load the notebook rules from the metadata of this notebook (in the .ipynb file)"""
file_name = get_notebook_name()
if file_name is None:
return None
else:
j = load_json(file_name)
if "azdata" not in j["metadata"] or \
"expert" not in j["metadata"]["azdata"] or \
"log_analyzer_rules" not in j["metadata"]["azdata"]["expert"]:
return []
else:
return j["metadata"]["azdata"]["expert"]["log_analyzer_rules"]
rules = get_notebook_rules()
if rules is None:
print("")
print(f"Log Analysis only available when run in Azure Data Studio. Not available when run in azdata.")
else:
print(f"Applying the following {len(rules)} rules to {len(entries_for_analysis)} log entries for analysis, looking for HINTs to further troubleshooting.")
print(rules)
hints = 0
if len(rules) > 0:
for entry in entries_for_analysis:
for rule in rules:
if entry.find(rule[0]) != -1:
print (entry)
display(Markdown(f'HINT: Use [{rule[2]}]({rule[3]}) to resolve this issue.'))
hints = hints + 1
print("")
print(f"{len(entries_for_analysis)} log entries analyzed (using {len(rules)} rules). {hints} further troubleshooting hints made inline.")
print('Notebook execution complete.')
```
```
from bs4 import BeautifulSoup
import requests
from urllib.parse import urljoin
import re
import numpy as np
import pandas as pd
import json
```
## Dataset Generation
Stats scraped from basketball reference
```
nba_champion_url = 'https://www.basketball-reference.com/playoffs/'
nba_team_stats_url = 'https://www.basketball-reference.com/play-index/tsl_finder.cgi?request=1&match=single&type=advanced&year_min=1980&year_max=&lg_id=NBA&franch_id=&c1stat=&c1comp=&c1val=&c2stat=&c2comp=&c2val=&c3stat=&c3comp=&c3val=&c4stat=&c4comp=&c4val=&order_by=wins&order_by_asc=&offset=0'
base_url = requests.get(nba_team_stats_url).url
def get_soup_from_url(url):
return BeautifulSoup(requests.get(url).text, 'html.parser')
def create_champion_dict(soup):
champions = {}
for row in soup.find_all('tr'):
if row.find('th') and row.find_all('th')[0].get('data-stat') == 'year_id' and row.find('td') and row.find('a'):
year = int(row.find('a').text)
if year < 1980:
break
year_str = f'{year - 1}-{year % 100:02d}'  # zero-pad so e.g. 2000 becomes '1999-00'
champ = [stat.text for stat in row.find_all('td') if stat.get('data-stat') == 'champion'][0]
champions[year_str] = champ
return champions
def create_team_dataset(soup, champions):
searching = True
rows_list = []
while searching:
for row in soup.find_all('tr'):
if row.find_all('th')[0].get('data-stat') == 'ranker' and row.find('td') and row.find('a'):
current_row = {}
current_row['Team'] = row.find('a').get('title')
for stat in row.find_all('td'):
if stat.get('data-stat') == 'season':
season = stat.text
current_row['Champion'] = current_row['Team'] == champions.get(season)
current_row['Team'] += ' ' + season
elif stat.get('data-stat') == 'win_loss_pct':
current_row['win_loss_pct'] = float(stat.text)
elif stat.get('data-stat') == 'efg_pct':
current_row['efg_pct'] = float(stat.text)
elif stat.get('data-stat') == 'off_rtg':
current_row['off_rtg'] = float(stat.text)
elif stat.get('data-stat') == 'def_rtg':
current_row['def_rtg'] = float(stat.text)
rows_list.append(current_row)
searching = False
for link in soup.find_all('a'):
if link.text == 'Next page':
soup = get_soup_from_url(urljoin(base_url, link.get('href')))
searching = True
dataset = pd.DataFrame(rows_list)
dataset = dataset.set_index('Team')
return dataset
champions = create_champion_dict(get_soup_from_url(nba_champion_url))
dataset = create_team_dataset(get_soup_from_url(nba_team_stats_url), champions)
dataset.to_csv('nba_team_data.csv')
```
# Taxi Price Prediction Competition - Team 40 - Aditya Sidharta
## Overall Pipeline
In this Taxi price prediction competition, we were asked to build a model which is able to predict the price of a taxi ride, by predicting the duration and the trajectory length of the taxi ride. Then, we will sum the values of the two prediction to get the predicted price values. This prediction will be evaluated using RMPSE method.
In tackling this problem, I have divided the pipeline into 5 stages: Stage 0, Stage 1, Stage 2, Stage 3, and Stage 4. The input of this pipeline is the training and test datasets, which contain the timestamp, location, and taxi ID for each taxi ride. In the training dataset, we also have the trajectories as well as the true duration / trajectory length values; however, I will not use the trajectory information in this prediction.
I will provide a brief summary of each stage:
- Stage 0
In this stage, we will process the raw data for the train and test datasets to add more features, so that our model considers more information for the duration/trajlength prediction.
- INPUT : RAW TRAIN DATA, RAW TEST DATA
- OUTPUT : TRAIN DATA STAGE 0, TEST DATA STAGE 0, LOG DURATION, LOG TRAJLENGTH
- Perform Basic feature engineering for Training Dataset
- Perform Advanced feature engineering for Training Dataset
- Perform Basic feature engineering for Test Dataset
- Perform Advanced feature engineering for Test Dataset
- Perform Transformation on Training Duration & Training Trajectory length values
- Perform One hot Encoding on Training Dataset
- Perform One hot Encoding on Test Dataset
- Stage 1
In this stage, our primary goal is to detect outliers in the dataset. We perform this outlier detection by fitting a simple model, predicting on the training dataset itself, and removing all observations that receive extremely bad predictions from the simple model.
- INPUT : TRAIN DATA STAGE 0, LOG DURATION, LOG TRAJLENGTH
- MODEL : Random Forest, XGBoost
- OUTPUT : NON-OUTLIER INDEX STAGE 1
- Stage 2
In this stage, our goal is to predict duration and trajlength given TRAIN DATA STAGE 0. We will create an ensemble model using Random Forest and XGBoost: we fit Random Forest and XGBoost models to predict the duration and trajlength, and then fit a Lasso linear model on their predictions to obtain the final prediction for duration and trajlength.
- INPUT : TRAIN DATA STAGE 0, TEST DATA STAGE 0, LOG DURATION, LOG TRAJLENGTH
- Model : Random Forest + XGBoost (Ensemble - Lasso)
- OUTPUT : PREDICTED LOG DURATION - TRAIN STAGE 2, PREDICTED LOG TRAJLENGTH - TRAIN STAGE 2, PREDICTED LOG DURATION - TEST STAGE 2, PREDICTED LOG TRAJLENGTH - TEST STAGE 2
- Stage 3
In this stage, we would like to refine our duration and trajlength predictions, given that we know the trajlength when predicting duration, and vice versa. We do this because trajlength and duration are highly correlated, so each is useful information for predicting the other. We will fit another ensemble, using Random Forest, XGBoost, Lasso, and Elastic Net stacked with a Lasso model, to get our final predictions for duration and trajlength, using training data + trajlength and training data + duration respectively.
- INPUT : TRAIN DATA STAGE 0, TEST DATA STAGE 0, NON-OUTLIER INDEX STAGE 1, PREDICTED LOG DURATION - TRAIN STAGE 2, PREDICTED LOG TRAJLENGTH - TRAIN STAGE 2, PREDICTED LOG DURATION - TEST STAGE 2, PREDICTED LOG TRAJLENGTH - TEST STAGE 2
- Model : Random Forest + XGBoost + Lasso + Elastic Net (Ensemble - Lasso)
- OUTPUT : PREDICTED LOG DURATION - TEST STAGE 3, PREDICTED LOG TRAJLENGTH - TEST STAGE 3
- Stage 4
In this stage, we would like to post-process our predictions from Stage 3. As we have not yet used the coordinate information from the training data, we take it into account by manually refining our predictions whenever a test observation occurs at the same locations as observations in the training data.
- INPUT : PREDICTED LOG DURATION - TEST STAGE 3, PREDICTED LOG TRAJLENGTH - TEST STAGE 3, TRAIN DATA, TEST DATA
- OUTPUT : PREDICTED LOG DURATION - TEST STAGE 4, PREDICTED LOG TRAJLENGTH - TEST STAGE 4

The first thing we would like to do is data exploration. Here, we want to understand the general structure of our dataset so that we can design an accurate pipeline for the duration and trajectory-length predictions.
```
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
import datetime
import re
import math
from pandas_summary import DataFrameSummary
train_file = "train_data.csv"
test_file = "test.csv"
def straight_dist(x_start, x_end,
y_start, y_end):
return np.sqrt((x_end - x_start)**2\
+ (y_end - y_start)**2)
def calc_azt(x_start, x_end,
y_start, y_end ):
return math.degrees(math.atan2\
(y_end - y_start,
x_end - x_start)) // 45
def coordinates_bin(coor):
return coor // 50 + 21
def convert_ts_to_datetime(ts):
return datetime.datetime.\
strptime(ts, '%Y-%m-%d %H:%M:%S')
def get_weekday(ts):
dt = convert_ts_to_datetime(ts)
return dt.weekday()
def is_weekend(ts):
dt = convert_ts_to_datetime(ts)
return dt.weekday() >= 5
def get_day(ts):
dt = convert_ts_to_datetime(ts)
return dt.day
def get_month(ts):
dt = convert_ts_to_datetime(ts)
return dt.month
def get_year(ts):
dt = convert_ts_to_datetime(ts)
return dt.year
def get_hour(ts):
dt = convert_ts_to_datetime(ts)
return dt.hour
def get_minute(ts):
dt = convert_ts_to_datetime(ts)
return dt.minute // 10
def time_classification(ts):
hour = get_hour(ts)
if hour <= 5:
return "Midnight"
if hour <= 8:
return "Morning"
if hour <= 11:
return "Noon"
if hour <= 18:
return "Afternoon"
if hour <= 20:
return "Night"
else:
return "LateNight"
vec_straight_dist = np.vectorize(straight_dist)
vec_calc_azt = np.vectorize(calc_azt)
vec_coordinates_bin = np.vectorize(coordinates_bin)
vec_get_weekday = np.vectorize(get_weekday)
vec_is_weekend = np.vectorize(is_weekend)
vec_get_day = np.vectorize(get_day)
vec_get_month = np.vectorize(get_month)
vec_get_year = np.vectorize(get_year)
vec_get_hour = np.vectorize(get_hour)
vec_get_minute = np.vectorize(get_minute)
vec_time_classification = np.vectorize\
(time_classification)
df_train = pd.read_csv(train_file)
df_test = pd.read_csv(test_file)
df_train_simple = df_train[[u'ID',
u'TAXI_ID',
u'TIMESTAMP',
u'X_START',
u'Y_START',
u'X_END',
u'Y_END']]
df_all = pd.concat([df_train_simple, df_test])
n_train = df_train.shape[0]
n_test = df_test.shape[0]
print(df_train.shape)
print(df_test.shape)
print(df_train.columns)
print(df_test.columns)
```
One of the good things about our dataset is that there are no missing values at all. However, we need to make sure that our features are somewhat normally distributed to improve the performance of our linear model prediction. As stated in the basic exploration pdf, we will perform log transformation for both `TRAJ_LENGTH` column and `DURATION` column
```
DataFrameSummary(df_train).summary()
DataFrameSummary(df_test).summary()
```
One interesting fact about this dataset is that for some observations, even though the starting and ending points are close to each other, the `TRAJ_LENGTH` in the training dataset is extremely large. As we can see from the plot below, for a small value of `STRAIGHT_DIST` it is possible that the `TRAJ_LENGTH` is extremely large. There are a few possible reasons why this might happen:
- The road network within this specific geographical area is not highly connected, so the taxi driver needs to make a huge detour to reach the destination.
- The taxi driver is not familiar with the geographical area, which is why the route taken is highly inefficient.
- Outliers in the dataset. These might come from various sources.
These outliers are dangerous for our model, as it might learn the wrong information when fitted. Therefore, we will first try to remove the outliers from the training data so that they do not affect our true model.
```
fig, ax = plt.subplots(ncols=1, nrows=1)
ax.scatter(df_train['STRAIGHT_DIST'],
df_train['TRAJ_LENGTH'], alpha=0.2)
plt.show()
```
The second important piece of information is that the training and test data cover the period between 3 March 2008 and 25 May 2009. The training data contain all dates between the start and end of that period, so we can be sure that all the dates in the test set are contained in the training set. We can then use the date as one of our features for the training and test data.
```
from datetime import date, timedelta
train_date = pd.to_datetime\
(np.unique(df_train['TIMESTAMP'])).normalize()
test_date = pd.to_datetime\
(np.unique(df_test['TIMESTAMP'])).normalize()
date_set = set(train_date[0] \
+ timedelta(x) \
for x in range((train_date[-1] \
- train_date[0]).days))
print("Start-date : " + str(train_date[0]))
print("End-date : " + str(train_date[-1]))
missing = sorted(date_set - set(train_date))
missing
```
We will use K-Means to aggregate the latitude and longitude information for both starting and ending points. Using this algorithm, we will cluster every coordinate in the training and test data into one of 750 clusters. We can then use the cluster label as a feature for the training and test datasets.
```
from sklearn.cluster import MiniBatchKMeans, KMeans
x_coors = np.concatenate([df_all['X_START'],
df_all['X_END']])
y_coors = np.concatenate([df_all['Y_START'],\
df_all['Y_END']])
all_coors = np.vstack((x_coors, y_coors)).T
k_means_model = KMeans(init='k-means++',
n_clusters=750,
n_init=3,
n_jobs=-1,
verbose = 2).fit(all_coors)
```
The 750 clusters are plotted in the figure below
```
fig, ax = plt.subplots(ncols=1, nrows=1)
ax.scatter(all_coors[:,0], all_coors[:,1],
s=10, lw=0, cmap='tab20',
alpha=0.2, c = k_means_model.\
predict(all_coors))
ax.set_xlabel('Longitude')
ax.set_ylabel('Latitude')
plt.show()
```
The basic feature engineering covers all the information that we can extract from the starting/ending locations and the timestamp of the training and test datasets. The features engineered in the basic set are as follows:
- `LOG_STRAIGHT_DIST` : the log of the Euclidean distance between the starting point and the ending point.
- `AZT` : the bearing between the starting point and the ending point, binned into 45° sectors.
- `DAYOFWEEK`: The day of week of the taxi ride
- `DATE` : The date of the taxi ride
- `DAY` : The day in the month of the taxi ride
- `MONTH` : The month of the taxi ride
- `YEAR` : The year of the taxi ride
- `HOUR` : The hour of the taxi ride
- `MINUTE` : The minute of the taxi ride
- `TIME_CLASS` : The time-class of the taxi ride
- `START_BIN` : The cluster where the starting point of the taxi ride belongs to
- `END_BIN` : The cluster where the ending point of the taxi ride belongs to
```
def get_basic_features_train(df):
duration_train = df['DURATION'].values
traj_train = df['TRAJ_LENGTH'].values
price_train = duration_train + traj_train
df['LOG_DURATION'] = np.log(duration_train)
df['LOG_TRAJLENGTH'] = np.log(traj_train)
df['LOG_PRICE'] = np.log(price_train)
df['LOG_STRAIGHT_DIST'] = \
np.log(vec_straight_dist(df['X_START'],
df['X_END'],
df['Y_START'],
df['Y_END']))
df['AZT'] = vec_calc_azt(df['X_START'],
df['X_END'],
df['Y_START'],
df['Y_END'])
df['DAYOFWEEK'] = vec_get_weekday(df['TIMESTAMP'])
df['DATE'] = pd.to_datetime(df['TIMESTAMP']\
.values).normalize()\
.astype(str)
df['DAY'] = vec_get_day(df['TIMESTAMP'])
df['MONTH'] = vec_get_month(df['TIMESTAMP'])
df['YEAR'] = vec_get_year(df['TIMESTAMP'])
df['HOUR'] = vec_get_hour(df['TIMESTAMP'])
df['MINUTE'] = vec_get_minute(df['TIMESTAMP'])
df['TIME_CLASS'] = vec_time_classification(df['TIMESTAMP'])
start_coors = np.vstack((df['X_START'], df['Y_START'])).T
end_coors = np.vstack((df['X_END'], df['Y_END'])).T
df['START_BIN'] = k_means_model.predict(start_coors)
df['END_BIN'] = k_means_model.predict(end_coors)
return df
def get_basic_features_test(df):
df['LOG_STRAIGHT_DIST'] = \
np.log(vec_straight_dist(df['X_START'],
df['X_END'],
df['Y_START'],
df['Y_END']))
df['AZT'] = vec_calc_azt(df['X_START'],
df['X_END'],
df['Y_START'],
df['Y_END'])
df['DAYOFWEEK'] = vec_get_weekday(df['TIMESTAMP'])
df['DATE'] = pd.to_datetime(df['TIMESTAMP']\
.values).normalize().astype(str)
df['DAY'] = vec_get_day(df['TIMESTAMP'])
df['MONTH'] = vec_get_month(df['TIMESTAMP'])
df['YEAR'] = vec_get_year(df['TIMESTAMP'])
df['HOUR'] = vec_get_hour(df['TIMESTAMP'])
df['MINUTE'] = vec_get_minute(df['TIMESTAMP'])
df['TIME_CLASS'] = vec_time_classification(df['TIMESTAMP'])
start_coors = np.vstack((df['X_START'], df['Y_START'])).T
end_coors = np.vstack((df['X_END'], df['Y_END'])).T
df['START_BIN'] = k_means_model.predict(start_coors)
df['END_BIN'] = k_means_model.predict(end_coors)
return df
df_train_basic = get_basic_features_train(df_train)
df_test_basic = get_basic_features_test(df_test)
log_duration = df_train_basic['LOG_DURATION'].values
log_trajlength = df_train_basic['LOG_TRAJLENGTH'].values
log_price = df_train_basic['LOG_PRICE'].values
print(df_train_basic.shape)
print(df_test_basic.shape)
print(df_train_basic.columns)
print(df_test_basic.columns)
```
We will then retain only the columns that we will use as features in the training dataset
```
df_train_basic_simple = df_train_basic[[u'ID',
u'TAXI_ID', u'TIMESTAMP', u'X_START', u'Y_START', u'X_END',
u'Y_END', u'LOG_STRAIGHT_DIST', u'AZT', u'DAYOFWEEK', u'DATE', u'DAY',
u'MONTH', u'YEAR', u'HOUR', u'MINUTE', u'TIME_CLASS', u'START_BIN',
u'END_BIN']]
df_all_basic = pd.concat((df_train_basic_simple, df_test_basic))
print(df_all_basic.shape)
print(df_all_basic.columns)
```
The next features that we would like to engineer are the advanced features. These take into account observations across the whole training dataset. The features extracted are as follows:
For each unique `DATE`, `TAXI_ID`, `MONTH`, `YEAR`, `DAYOFWEEK`, `TIME_CLASS`, `START_BIN` and `END_BIN`, we would like to get the following information
- The mean of the log duration for the specific unique value within the training dataset
- The mean of the log price for the specific unique value within the training dataset
- The mean of the log trajlength for the specific value within the training dataset
- Number of observations with the particular value within the training dataset
For example, if we observe `TAXI_ID` = 656, we will try to find all taxi rides with `TAXI_ID` = 656 in our training dataset, and we will find the mean log duration, log price, log trajlength, as well as the number of rides within the training dataset. This will be the values for `LOGDURATION_BY_TAXI_ID`, `LOGPRICE_BY_TAXI_ID`, `LOGTRAJLENGTH_BY_TAXI_ID`, and `COUNT_BY_TAXI_ID` respectively
The main idea behind this advanced feature engineering is to extract information about the price/duration/trajlength of a specific taxi driver, a specific day, a specific time, a specific location, and so on.
```
def create_dict_date(df_train, df_all):
result_dict = {}
column = ['TAXI_ID', 'DATE', 'MONTH', 'YEAR',
'DAYOFWEEK', 'TIME_CLASS', 'START_BIN', 'END_BIN']
for column_names in column:
indiv_dict = {}
duration = df_train.groupby(column_names)\
['LOG_DURATION'].mean()
mean_duration = duration.mean()
price = df_train.groupby(column_names)\
['LOG_PRICE'].mean()
mean_price = price.mean()
trajlength = df_train.groupby(column_names)\
['LOG_TRAJLENGTH'].mean()
mean_trajlength = trajlength.mean()
count = df_all.groupby(column_names)\
[column_names].count()
mean_count = count.mean()
for index in duration.index:
indiv_dict[str(index)] = {
'duration' : duration[index],
'price' : price[index],
'trajlength' : trajlength[index],
'count' : count[index]
}
indiv_dict['avg'] = {
'duration' : mean_duration,
'price' : mean_price,
'trajlength' : mean_trajlength,
'count' : mean_count
}
result_dict[column_names] = indiv_dict
return result_dict
def get_mean_values(array_column, result_dict, column_name):
n_obs = len(array_column)
column_dict = result_dict[column_name]
column_dict_index = column_dict.keys()
result_duration = np.zeros(n_obs)
result_price = np.zeros(n_obs)
result_trajlength = np.zeros(n_obs)
result_count = np.zeros(n_obs)
for idx in range(n_obs):
target = str(array_column[idx])
if target not in column_dict_index:
print(str(target) + " is not found")
result_duration[idx] = \
column_dict['avg']['duration']
result_price[idx] = \
column_dict['avg']['price']
result_trajlength[idx] = \
column_dict['avg']['trajlength']
result_count[idx] = \
column_dict['avg']['count']
else:
result_duration[idx] = \
column_dict[target]['duration']
result_price[idx] = \
column_dict[target]['price']
result_trajlength[idx] = \
column_dict[target]['trajlength']
result_count[idx] = \
column_dict[target]['count']
    return (result_duration, result_price,
            result_trajlength, result_count)
def get_advanced_features(df, result_dict):
    column = ['DATE', 'TAXI_ID', 'MONTH', 'YEAR',
              'DAYOFWEEK', 'TIME_CLASS', 'START_BIN', 'END_BIN']
    for column_names in column:
        (result_duration, result_price,
         result_trajlength, result_count) = \
            get_mean_values(df[column_names].values,
                            result_dict, column_names)
df['LOGDURATION_BY_' + column_names] = \
result_duration
df['LOGPRICE_BY_' + column_names] = \
result_price
df['LOGTRAJLENGTH_BY_' + column_names] \
= result_trajlength
df['COUNT_BY_' + column_names] = \
result_count
return df
```
If we are unable to get information about a specific value, we take the mean of the whole dataset instead. Luckily, the only value that we couldn't find in our training data is `TAXI_ID` = 439
```
result_dict = create_dict_date(df_train_basic, df_all_basic)
df_all_advanced = get_advanced_features(df_all_basic, result_dict)
print df_all_advanced.shape
print df_all_advanced.columns
pd.options.display.max_columns = 70
DataFrameSummary(df_all_advanced).summary()
all_features = [u'TAXI_ID', u'LOG_STRAIGHT_DIST',
u'AZT', u'DAYOFWEEK', u'DATE', u'DAY',
u'MONTH', u'YEAR', u'HOUR', u'MINUTE',
u'TIME_CLASS', u'START_BIN',
u'END_BIN', u'LOGDURATION_BY_DATE', u'LOGPRICE_BY_DATE',
u'LOGTRAJLENGTH_BY_DATE', u'COUNT_BY_DATE', u'LOGDURATION_BY_TAXI_ID',
u'LOGPRICE_BY_TAXI_ID', u'LOGTRAJLENGTH_BY_TAXI_ID',
u'COUNT_BY_TAXI_ID', u'LOGDURATION_BY_MONTH', u'LOGPRICE_BY_MONTH',
u'LOGTRAJLENGTH_BY_MONTH', u'COUNT_BY_MONTH', u'LOGDURATION_BY_YEAR',
u'LOGPRICE_BY_YEAR', u'LOGTRAJLENGTH_BY_YEAR', u'COUNT_BY_YEAR',
u'LOGDURATION_BY_DAYOFWEEK', u'LOGPRICE_BY_DAYOFWEEK',
u'LOGTRAJLENGTH_BY_DAYOFWEEK', u'COUNT_BY_DAYOFWEEK',
u'LOGDURATION_BY_TIME_CLASS', u'LOGPRICE_BY_TIME_CLASS',
u'LOGTRAJLENGTH_BY_TIME_CLASS', u'COUNT_BY_TIME_CLASS',
u'LOGDURATION_BY_START_BIN', u'LOGPRICE_BY_START_BIN',
u'LOGTRAJLENGTH_BY_START_BIN', u'COUNT_BY_START_BIN',
u'LOGDURATION_BY_END_BIN', u'LOGPRICE_BY_END_BIN',
u'LOGTRAJLENGTH_BY_END_BIN', u'COUNT_BY_END_BIN']
cat_features = [u'TAXI_ID', u'AZT', u'DAYOFWEEK', u'DATE', u'DAY',
u'MONTH', u'YEAR', u'HOUR', u'MINUTE', u'TIME_CLASS', u'START_BIN',
u'END_BIN']
```
Lastly, we will perform one-hot encoding on all categorical variables for both the training and test datasets.
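The idea behind `pd.get_dummies` can be sketched with plain NumPy: map each category to an integer code, then index into an identity matrix (illustrative categories, not the actual feature values):

```python
import numpy as np

cats = np.array(['red', 'blue', 'red', 'green'])
# levels come back sorted: ['blue', 'green', 'red']; codes index into them
levels, codes = np.unique(cats, return_inverse=True)
# one row per observation, one indicator column per level
onehot = np.eye(len(levels), dtype=int)[codes]
```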
```
def get_dummify(df):
df_final = df[all_features]
return pd.get_dummies(df_final, columns=cat_features, prefix=cat_features)
df_all_advanced_dummy = get_dummify(df_all_advanced)
df_train_advanced_dummy = df_all_advanced_dummy.iloc[:n_train, :]
df_test_advanced_dummy = df_all_advanced_dummy.iloc[n_train:, :]
X_train_stage0 = df_train_advanced_dummy.values
X_test_stage0 = df_test_advanced_dummy.values
Y_train_duration = log_duration
Y_train_trajlength = log_trajlength
Y_train_price = log_price
print X_train_stage0.shape
print X_test_stage0.shape
print Y_train_duration.shape
print Y_train_trajlength.shape
print Y_train_price.shape
from scipy import sparse
sX_train_stage0 = sparse.csc_matrix(X_train_stage0)
sX_test_stage0 = sparse.csc_matrix(X_test_stage0)
from sklearn.externals import joblib
joblib.dump(sX_train_stage0 , 'sX_train_stage0.pkl')
joblib.dump(sX_test_stage0, 'sX_test_stage0.pkl')
joblib.dump(Y_train_duration, 'Y_train_duration.pkl')
joblib.dump(Y_train_trajlength, 'Y_train_trajlength.pkl')
joblib.dump(Y_train_price, 'Y_train_price.pkl')
```
# Training and Evaluating Machine Learning Models in cuML
This notebook explores several basic machine learning estimators in cuML, demonstrating how to train them and evaluate them with built-in metrics functions. All of the models are trained on synthetic data, generated by cuML's dataset utilities.
1. Random Forest Classifier
2. UMAP
3. DBSCAN
4. Linear Regression
[Open in Colab](https://colab.research.google.com/github/rapidsai/cuml/blob/branch-0.15/docs/source/estimator_intro.ipynb)
### Shared Library Imports
```
import cuml
from cupy import asnumpy
from joblib import dump, load
```
## 1. Classification
### Random Forest Classification and Accuracy metrics
The Random Forest classification model builds several decision trees and aggregates each of their outputs to make a prediction. For more information on cuML's implementation of the Random Forest classification model please refer to:
https://docs.rapids.ai/api/cuml/stable/api.html#cuml.ensemble.RandomForestClassifier
Accuracy score is the ratio of correct predictions to the total number of predictions. It is used to measure the performance of classification models.
For more information on the accuracy score metric please refer to: https://en.wikipedia.org/wiki/Accuracy_and_precision
For more information on cuML's implementation of accuracy score metrics please refer to: https://docs.rapids.ai/api/cuml/stable/api.html#cuml.metrics.accuracy.accuracy_score
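As a quick illustration of the metric itself, accuracy is simply the fraction of predictions that match the true labels:

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0])

# elementwise comparison gives a boolean mask; its mean is the accuracy
accuracy = (y_true == y_pred).mean()  # 4 of 5 correct
```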
The cell below shows an end-to-end pipeline for the Random Forest classification model. Here the dataset is generated using cuML's make_classification function (a drop-in analogue of sklearn's). The generated dataset is used to train the model and run predictions, and the random forest's performance is then evaluated by comparing the values obtained from the cuML and sklearn accuracy metrics.
```
from cuml.datasets.classification import make_classification
from cuml.preprocessing.model_selection import train_test_split
from cuml.ensemble import RandomForestClassifier as cuRF
from sklearn.metrics import accuracy_score
# synthetic dataset dimensions
n_samples = 1000
n_features = 10
n_classes = 2
# random forest depth and size
n_estimators = 25
max_depth = 10
# generate synthetic data [ binary classification task ]
X, y = make_classification ( n_classes = n_classes,
n_features = n_features,
n_samples = n_samples,
random_state = 0 )
X_train, X_test, y_train, y_test = train_test_split( X, y, random_state = 0 )
model = cuRF( max_depth = max_depth,
n_estimators = n_estimators,
seed = 0 )
trained_RF = model.fit ( X_train, y_train )
predictions = model.predict ( X_test )
cu_score = cuml.metrics.accuracy_score( y_test, predictions )
sk_score = accuracy_score( asnumpy( y_test ), asnumpy( predictions ) )
print( " cuml accuracy: ", cu_score )
print( " sklearn accuracy : ", sk_score )
# save
dump( trained_RF, 'RF.model')
# to reload the model uncomment the line below
# loaded_model = load('RF.model')
```
## 2. Clustering
### UMAP and Trustworthiness metrics
UMAP is a non-linear dimensionality reduction algorithm that can also be used for visualization.
For additional information on the UMAP model please refer to the documentation on https://docs.rapids.ai/api/cuml/stable/api.html#cuml.UMAP
Trustworthiness is a measure of the extent to which the local structure of the data is retained in the embedding. If a sample's embedding places it among nearest neighbors that it was not close to in the original space, those intrusions are penalized. For more information on the trustworthiness metric please refer to: https://scikit-learn.org/dev/modules/generated/sklearn.manifold.t_sne.trustworthiness.html
The documentation for cuML's implementation of the trustworthiness metric is: https://docs.rapids.ai/api/cuml/stable/api.html#cuml.metrics.trustworthiness.trustworthiness
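To make the definition concrete, here is a small NumPy sketch of the rank-based trustworthiness measure (a simplified re-implementation for illustration, not cuML's or sklearn's code): each embedded-space neighbour that was not a neighbour in the original space is penalized by its original-space rank.

```python
import numpy as np

def sq_dists(X):
    # pairwise squared Euclidean distances
    s = (X ** 2).sum(axis=1)
    return s[:, None] + s[None, :] - 2.0 * X @ X.T

def trustworthiness_sketch(X, X_emb, k=5):
    n = len(X)
    d_hi, d_lo = sq_dists(X), sq_dists(X_emb)
    np.fill_diagonal(d_hi, np.inf)   # a point is not its own neighbour
    np.fill_diagonal(d_lo, np.inf)
    order_hi = np.argsort(d_hi, axis=1)
    # ranks[i, j] = rank of point j among the neighbours of i (1 = nearest)
    ranks = np.empty((n, n), dtype=int)
    ranks[np.arange(n)[:, None], order_hi] = np.arange(1, n + 1)
    nn_hi = order_hi[:, :k]                    # k-NN in the original space
    nn_lo = np.argsort(d_lo, axis=1)[:, :k]    # k-NN in the embedding
    penalty = 0
    for i in range(n):
        # "intruders": embedded-space neighbours that were not close originally
        for j in set(nn_lo[i]) - set(nn_hi[i]):
            penalty += ranks[i, j] - k
    return 1.0 - 2.0 * penalty / (n * k * (2.0 * n - 3.0 * k - 1.0))
```

A perfect (identity) embedding incurs no penalty and scores exactly 1.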
The cell below shows an end-to-end pipeline for the UMAP model. Here, the input dataset is created with cuML's equivalent of the make_blobs function. The output of UMAP's fit_transform is evaluated using the trustworthiness function, and the values obtained from sklearn's and cuML's trustworthiness implementations are compared below.
```
from cuml.datasets import make_blobs
from cuml.manifold.umap import UMAP as cuUMAP
from sklearn.manifold import trustworthiness
import numpy as np
n_samples = 1000
n_features = 100
cluster_std = 0.1
X_blobs, y_blobs = make_blobs( n_samples = n_samples,
cluster_std = cluster_std,
n_features = n_features,
random_state = 0,
dtype=np.float32 )
trained_UMAP = cuUMAP( n_neighbors = 10 ).fit( X_blobs )
X_embedded = trained_UMAP.transform( X_blobs )
cu_score = cuml.metrics.trustworthiness( X_blobs, X_embedded )
sk_score = trustworthiness( asnumpy( X_blobs ), asnumpy( X_embedded ) )
print(" cuml's trustworthiness score : ", cu_score )
print(" sklearn's trustworthiness score : ", sk_score )
# save
dump( trained_UMAP, 'UMAP.model')
# to reload the model uncomment the line below
# loaded_model = load('UMAP.model')
```
### DBSCAN and Adjusted Rand Index
DBSCAN is a popular and powerful clustering algorithm. For additional information on the DBSCAN model please refer to the documentation on https://docs.rapids.ai/api/cuml/stable/api.html#cuml.DBSCAN
We create the blobs dataset using the cuML equivalent of the make_blobs function.
The adjusted Rand index is a metric used to measure the similarity between two clusterings, adjusted to take into consideration the chance grouping of elements.
For more information on the adjusted Rand index please refer to: https://en.wikipedia.org/wiki/Rand_index
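To make the chance adjustment concrete, here is a small sketch that computes the adjusted Rand index from the contingency table of the two labelings (a simplified re-implementation for illustration, not cuML's or sklearn's code):

```python
import numpy as np
from math import comb

def adjusted_rand_index(a, b):
    # contingency table between the two labelings
    _, ai = np.unique(a, return_inverse=True)
    _, bi = np.unique(b, return_inverse=True)
    C = np.zeros((ai.max() + 1, bi.max() + 1), dtype=int)
    for i, j in zip(ai, bi):
        C[i, j] += 1
    # pair counts within cells and within row/column marginals
    sum_ij = sum(comb(int(nij), 2) for nij in C.ravel())
    sum_a = sum(comb(int(m), 2) for m in C.sum(axis=1))
    sum_b = sum(comb(int(m), 2) for m in C.sum(axis=0))
    total = comb(len(ai), 2)
    expected = sum_a * sum_b / total          # expected index under chance
    max_index = (sum_a + sum_b) / 2.0
    return (sum_ij - expected) / (max_index - expected)
```

Note that the index is invariant to relabeling the clusters: swapping the label names still yields a perfect score of 1.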
The cell below shows an end-to-end pipeline for the DBSCAN model. The output of DBSCAN's fit_predict is evaluated using the adjusted Rand index function, and the values obtained from sklearn's and cuML's implementations are compared below.
```
from cuml.datasets import make_blobs
from cuml import DBSCAN as cumlDBSCAN
from sklearn.metrics import adjusted_rand_score
import numpy as np
n_samples = 1000
n_features = 100
cluster_std = 0.1
X_blobs, y_blobs = make_blobs( n_samples = n_samples,
n_features = n_features,
cluster_std = cluster_std,
random_state = 0,
dtype=np.float32 )
cuml_dbscan = cumlDBSCAN( eps = 3,
min_samples = 2)
trained_DBSCAN = cuml_dbscan.fit( X_blobs )
cu_y_pred = trained_DBSCAN.fit_predict ( X_blobs )
cu_adjusted_rand_index = cuml.metrics.cluster.adjusted_rand_score( y_blobs, cu_y_pred )
sk_adjusted_rand_index = adjusted_rand_score( asnumpy(y_blobs), asnumpy(cu_y_pred) )
print(" cuml's adjusted Rand index score : ", cu_adjusted_rand_index)
print(" sklearn's adjusted Rand index score : ", sk_adjusted_rand_index)
# save and optionally reload
dump( trained_DBSCAN, 'DBSCAN.model')
# to reload the model uncomment the line below
# loaded_model = load('DBSCAN.model')
```
## 3. Regression
### Linear regression and R^2 score
Linear Regression is a simple machine learning model where the response y is modelled by a linear combination of the predictors in X.
R^2 score is also known as the coefficient of determination. It is used as a metric for scoring regression models, measuring the proportion of the total variation in the response that is explained by the model.
For more information on the R^2 score metrics please refer to: https://en.wikipedia.org/wiki/Coefficient_of_determination
For more information on cuML's implementation of the r2 score metrics please refer to : https://docs.rapids.ai/api/cuml/stable/api.html#cuml.metrics.regression.r2_score
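The definition itself is compact, R^2 = 1 − SS_res/SS_tot, and can be sketched in a few lines:

```python
import numpy as np

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

y = np.array([1.0, 2.0, 3.0, 4.0])
```

A perfect prediction scores 1, while always predicting the mean scores 0.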
The cell below uses the Linear Regression model to compare the R^2 score results between cuML and sklearn. For more information on cuML's implementation of the Linear Regression model please refer to:
https://docs.rapids.ai/api/cuml/stable/api.html#linear-regression
```
from cuml.datasets import make_regression
from cuml.preprocessing.model_selection import train_test_split
from cuml.linear_model import LinearRegression as cuLR
from sklearn.metrics import r2_score
n_samples = 2**10
n_features = 100
n_info = 70
X_reg, y_reg = make_regression( n_samples = n_samples,
n_features = n_features,
n_informative = n_info,
random_state = 123 )
X_reg_train, X_reg_test, y_reg_train, y_reg_test = train_test_split( X_reg,
y_reg,
train_size = 0.8,
random_state = 10 )
cuml_reg_model = cuLR( fit_intercept = True,
normalize = True,
algorithm = 'eig' )
trained_LR = cuml_reg_model.fit( X_reg_train, y_reg_train )
cu_preds = trained_LR.predict( X_reg_test )
cu_r2 = cuml.metrics.r2_score( y_reg_test, cu_preds )
sk_r2 = r2_score( asnumpy( y_reg_test ), asnumpy( cu_preds ) )
print("cuml's r2 score : ", cu_r2)
print("sklearn's r2 score : ", sk_r2)
# save and reload
dump( trained_LR, 'LR.model')
# to reload the model uncomment the line below
# loaded_model = load('LR.model')
```
# Radiative Cores & Convective Envelopes
Analysis of how magnetic fields influence the extent of radiative cores and convective envelopes in young, pre-main-sequence stars.
Begin with some preliminaries.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
```
Load a standard and magnetic isochrone with equivalent ages. Here, the adopted age is 10 Myr to look specifically at the predicted internal structure of stars in Upper Scorpius.
```
# read standard 10 Myr isochrone
iso_std = np.genfromtxt('../models/iso/std/dmestar_00010.0myr_z+0.00_a+0.00_phx.iso')
# read standard 5 Myr isochrone
iso_5my = np.genfromtxt('../models/iso/std/dmestar_00005.0myr_z+0.00_a+0.00_phx.iso')
# read magnetic isochrone
iso_mag = np.genfromtxt('../models/iso/mag/dmestar_00010.0myr_z+0.00_a+0.00_phx_magBeq.iso')
```
The magnetic isochrone is known to begin at a lower mass than the standard isochrone and both isochrones have gaps where individual models failed to converge. Gaps need not occur at the same masses along each isochrone. To overcome these inconsistencies, we can interpolate both isochrones onto a pre-defined mass domain.
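The idea can be sketched with toy data: two quantities tabulated on different, gappy mass grids are brought onto one common domain. Here NumPy's linear `np.interp` stands in for SciPy's cubic `interp1d` used in the cell below:

```python
import numpy as np

# two "isochrones" tabulating m**2 on different, gappy mass grids (toy values)
m1 = np.array([0.1, 0.3, 0.5, 0.9, 1.3, 1.7])
m2 = np.array([0.2, 0.4, 0.8, 1.2, 1.7])
q1, q2 = m1 ** 2, m2 ** 2

# common mass domain chosen to lie inside both tabulated ranges
masses = np.arange(0.2, 1.65, 0.1)
q1_eq = np.interp(masses, m1, q1)
q2_eq = np.interp(masses, m2, q2)
```

After this step, the two curves can be compared point-by-point, gaps and all.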
```
masses = np.arange(0.09, 1.70, 0.01) # new mass domain
# create an interpolation curve for a standard isochrone
icurve = interp1d(iso_std[:,0], iso_std, axis=0, kind='cubic')
# and transform to new mass domain
iso_std_eq = icurve(masses)
# create interpolation curve for standard 5 Myr isochrone
icurve = interp1d(iso_5my[:,0], iso_5my, axis=0, kind='linear')
# and transform to a new mass domain
iso_5my_eq = icurve(masses)
# create an interpolation curve for a magnetic isochrone
icurve = interp1d(iso_mag[:,0], iso_mag, axis=0, kind='cubic')
# and transform to new mass domain
iso_mag_eq = icurve(masses)
```
Let's compare the interpolated isochrones to the original, just to be sure that the resulting isochrones are smooth.
```
plt.plot(10**iso_std[:, 1], iso_std[:, 3], '-', lw=4, color='red')
plt.plot(10**iso_std_eq[:, 1], iso_std_eq[:, 3], '--', lw=4, color='black')
plt.plot(10**iso_mag[:, 1], iso_mag[:, 3], '-', lw=4, color='blue')
plt.plot(10**iso_mag_eq[:, 1], iso_mag_eq[:, 3], '--', lw=4, color='black')
plt.grid()
plt.xlim(2500., 8000.)
plt.ylim(-2, 1.1)
plt.xlabel('$T_{\\rm eff}\ [K]$', fontsize=20)
plt.ylabel('$\\log(L / L_{\\odot})$', fontsize=20)
```
The interpolation appears to have worked well as there are no egregious discrepancies between the real and interpolated isochrones.
We can now analyze the properties of the radiative cores and the convective envelopes. Beginning with the radiative core, we can look at how much of the total stellar mass is contained in the radiative core as a function of stellar properties.
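Since the models tabulate the convective-envelope mass, the radiative-core mass fraction follows as 1 − M_env/M_star, e.g. (toy values):

```python
import numpy as np

m_star = np.array([0.3, 0.7, 1.1])     # total stellar mass [Msun]
m_env = np.array([0.3, 0.2, 0.05])     # tabulated convective-envelope mass [Msun]

# a fully convective star (envelope = whole star) has a core fraction of 0
core_frac = 1.0 - m_env / m_star
```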
```
# as a function of stellar mass
plt.plot(iso_std_eq[:, 0], 1.0 - iso_std_eq[:, -1]/iso_std_eq[:, 0],
'--', lw=3, color='#333333')
plt.plot(iso_5my_eq[:, 0], 1.0 - iso_5my_eq[:, -1]/iso_5my_eq[:, 0],
'-.', lw=3, color='#333333')
plt.plot(iso_mag_eq[:, 0], 1.0 - iso_mag_eq[:, -1]/iso_mag_eq[:, 0],
'-' , lw=4, color='#01a9db')
plt.grid()
plt.xlabel('${\\rm Stellar Mass}\ [M_{\\odot}]$', fontsize=20)
plt.ylabel('$M_{\\rm rad\ core}\ /\ M_{\\star}$', fontsize=20)
# as a function of effective temperature
plt.plot(10**iso_std_eq[:, 1], 1.0 - iso_std_eq[:, -1]/iso_std_eq[:, 0],
'--', lw=3, color='#333333')
plt.plot(10**iso_5my_eq[:, 1], 1.0 - iso_5my_eq[:, -1]/iso_5my_eq[:, 0],
'-.', lw=3, color='#333333')
plt.plot(10**iso_mag_eq[:, 1], 1.0 - iso_mag_eq[:, -1]/iso_mag_eq[:, 0],
'-' , lw=4, color='#01a9db')
plt.grid()
plt.xlim(3000., 7000.)
plt.xlabel('${\\rm Effective Temperature}\ [K]$', fontsize=20)
plt.ylabel('$M_{\\rm rad\ core}\ /\ M_{\\star}$', fontsize=20)
```
Now let's look at the relative difference in radiative core mass as a function of these stellar properties.
```
# as a function of stellar mass (note, there is a minus sign switch b/c we tabulate
# convective envelope mass)
plt.plot(iso_mag_eq[:, 0], (iso_mag_eq[:, -1] - iso_std_eq[:, -1]),
'-' , lw=4, color='#01a9db')
plt.plot(iso_mag_eq[:, 0], (iso_mag_eq[:, -1] - iso_5my_eq[:, -1]),
'--' , lw=4, color='#01a9db')
plt.grid()
plt.xlabel('${\\rm Stellar Mass}\ [M_{\\odot}]$', fontsize=20)
plt.ylabel('$\\Delta M_{\\rm rad\ core}\ [M_{\\odot}]$', fontsize=20)
```
Now repeat the comparison as a function of effective temperature by interpolating the isochrones onto a common temperature domain.
```
# interpolate into the temperature domain
Teffs = np.log10(np.arange(3050., 7000., 50.))
icurve = interp1d(iso_std[:, 1], iso_std, axis=0, kind='linear')
iso_std_te = icurve(Teffs)
icurve = interp1d(iso_5my[:, 1], iso_5my, axis=0, kind='linear')
iso_5my_te = icurve(Teffs)
icurve = interp1d(iso_mag[:, 1], iso_mag, axis=0, kind='linear')
iso_mag_te = icurve(Teffs)
# as a function of stellar mass
# (note, there is a minus sign switch b/c we tabulate convective envelope mass)
#
# plotting: magnetic - standard, where + implies a more massive radiative core in the magnetic case
plt.plot(10**Teffs, (iso_mag_te[:, 0] - iso_mag_te[:, -1] -
iso_std_te[:, 0] + iso_std_te[:, -1]),
'-' , lw=4, color='#01a9db')
plt.plot(10**Teffs, (iso_mag_te[:, 0] - iso_mag_te[:, -1] -
iso_5my_te[:, 0] + iso_5my_te[:, -1]),
'--' , lw=4, color='#01a9db')
np.savetxt('../models/rad_core_comp.txt',
np.column_stack((iso_std_te, iso_mag_te)),
fmt="%10.6f")
np.savetxt('../models/rad_core_comp_dage.txt',
np.column_stack((iso_5my_te, iso_mag_te)),
fmt="%10.6f")
plt.grid()
plt.xlim(3000., 7000.)
plt.xlabel('${\\rm Effective Temperature}\ [K]$', fontsize=20)
plt.ylabel('$\\Delta M_{\\rm rad\ core}\ [M_{\\odot}]$', fontsize=20)
```
Stars are fully convective below 3500 K, regardless of whether there is magnetic inhibition of convection. At the other extreme, stars hotter than about 6500 K are approaching ignition of the CN-cycle, which coincides with the disappearance of the outer convective envelope. However, delayed contraction means that stars of a given effective temperature have a higher mass in the magnetic case, which leads to a slight mass offset once the radiative core comprises nearly 100% of the star. Note that our use of the term "radiative core" is technically invalid in this regime due to the presence of a convective core.
DIFAX Replication
=================
This example replicates the traditional DIFAX images for upper-level
observations.
By: Kevin Goebbert
Observation data comes from Iowa State Archive, accessed through the
Siphon package. Contour data comes from the GFS 0.5 degree analysis.
Classic upper-level data of Geopotential Height and Temperature are
plotted.
```
import urllib.request
from datetime import datetime, timedelta
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import metpy.calc as mpcalc
import numpy as np
import xarray as xr
from metpy.plots import StationPlot
from metpy.units import units
from siphon.simplewebservice.iastate import IAStateUpperAir
```
Plotting High/Low Symbols
-------------------------
A helper function to plot a text symbol (e.g., H, L) for relative
maximum/minimum for a given field (e.g., geopotential height).
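The core trick — a point is an extremum if it equals the max (or min) of a window centred on it — can be sketched without SciPy using a brute-force window scan (an illustrative stand-in for `maximum_filter`; note that equality-based detection will flag every point on a flat plateau):

```python
import numpy as np

def local_maxima_2d(data, size):
    """Boolean mask: True where a value equals the max of the
    (2*size+1) x (2*size+1) window centred on it (edge-padded)."""
    padded = np.pad(data, size, mode='edge')
    out = np.zeros(data.shape, dtype=bool)
    for i in range(data.shape[0]):
        for j in range(data.shape[1]):
            window = padded[i:i + 2 * size + 1, j:j + 2 * size + 1]
            out[i, j] = data[i, j] == window.max()
    return out
```

SciPy's `maximum_filter` does the same windowed maximum in compiled code, which is why the helper below compares `data_ext == data`.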
```
def plot_maxmin_points(lon, lat, data, extrema, nsize, symbol, color='k',
plotValue=True, transform=None):
"""
This function will find and plot relative maximum and minimum for a 2D grid. The function
can be used to plot an H for maximum values (e.g., high pressure) and an L for minimum
values (e.g., low pressure). It is best to use filtered data to obtain a synoptic-scale
max/min value. The symbol text can be set to a string value, and optionally the color of the
symbol and any plotted value can be set with the parameter color.
Parameters
----------
lon : 2D array
Plotting longitude values
lat : 2D array
Plotting latitude values
data : 2D array
Data that you wish to plot the max/min symbol placement
extrema : str
Either a value of max for Maximum Values or min for Minimum Values
nsize : int
Size of the grid box to filter the max and min values to plot a reasonable number
symbol : str
Text to be placed at location of max/min value
color : str
Name of matplotlib colorname to plot the symbol (and numerical value, if plotted)
plotValue : Boolean (True/False)
Whether to plot the numeric value of max/min point
Return
------
The max/min symbol will be plotted on the current axes within the bounding frame
(e.g., clip_on=True)
"""
from scipy.ndimage.filters import maximum_filter, minimum_filter
if (extrema == 'max'):
data_ext = maximum_filter(data, nsize, mode='nearest')
elif (extrema == 'min'):
data_ext = minimum_filter(data, nsize, mode='nearest')
else:
raise ValueError('Value for extrema must be either max or min')
if lon.ndim == 1:
lon, lat = np.meshgrid(lon, lat)
mxx, mxy = np.where(data_ext == data)
for i in range(len(mxy)):
ax.text(lon[mxx[i], mxy[i]], lat[mxx[i], mxy[i]], symbol, color=color, size=36,
clip_on=True, horizontalalignment='center', verticalalignment='center',
transform=transform)
ax.text(lon[mxx[i], mxy[i]], lat[mxx[i], mxy[i]],
'\n' + str(np.int(data[mxx[i], mxy[i]])),
color=color, size=12, clip_on=True, fontweight='bold',
horizontalalignment='center', verticalalignment='top', transform=transform)
ax.plot(lon[mxx[i], mxy[i]], lat[mxx[i], mxy[i]], marker='o', markeredgecolor='black',
markerfacecolor='white', transform=transform)
ax.plot(lon[mxx[i], mxy[i]], lat[mxx[i], mxy[i]],
marker='x', color='black', transform=transform)
```
Station Information
-------------------
A helper function for obtaining radiosonde station information (e.g.,
latitude/longitude) requried to plot data obtained from each station.
Original code by github user sgdecker.
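The latitude/longitude fields in the station file are stored as degrees and minutes with a hemisphere letter (e.g. '40 38N'); the conversion used by `str2latlon` in the cell below can be sketched and checked in isolation:

```python
def dm_to_decimal(s):
    """Convert a degrees-minutes-hemisphere string to decimal degrees.
    '40 38N' -> 40.633..., '073 46W' -> -73.766..."""
    deg = float(s[:3])        # degrees field
    mn = float(s[-3:-1])      # minutes field
    sign = -1.0 if s[-1] in 'SW' else 1.0
    return sign * (deg + mn / 60.0)
```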
```
def station_info(stid):
r"""Provide information about weather stations.
Parameters
----------
stid: str or iterable object containing strs
The ICAO or IATA code(s) for which station information is requested.
with_units: bool
Whether to include units for values that have them. Default True.
Returns
-------
info: dict
Information about the station(s) within a dictionary with these keys:
'state': Two-character ID of the state/province where the station is located,
if applicable
'name': The name of the station
'lat': The latitude of the station [deg]
'lon': The longitude of the station [deg]
'elevation': The elevation of the station [m]
'country': Two-character ID of the country where the station is located
Modified code from Steven Decker, Rutgers University
"""
# Provide a helper function for later usage
def str2latlon(s):
deg = float(s[:3])
mn = float(s[-3:-1])
if s[-1] == 'S' or s[-1] == 'W':
deg = -deg
mn = -mn
return deg + mn / 60.
# Various constants describing the underlying data
url = 'https://www.aviationweather.gov/docs/metar/stations.txt'
# file = 'stations.txt'
state_bnds = slice(0, 2)
name_bnds = slice(3, 19)
icao_bnds = slice(20, 24)
iata_bnds = slice(26, 29)
lat_bnds = slice(39, 45)
lon_bnds = slice(47, 54)
z_bnds = slice(55, 59)
cntry_bnds = slice(81, 83)
# Generalize to any number of IDs
if isinstance(stid, str):
stid = [stid]
# Get the station dataset
infile = urllib.request.urlopen(url)
data = infile.readlines()
# infile = open(file, 'rb')
# data = infile.readlines()
state = []
name = []
lat = []
lon = []
z = []
cntry = []
for s in stid:
s = s.upper()
for line_bytes in data:
line = line_bytes.decode('UTF-8')
icao = line[icao_bnds]
iata = line[iata_bnds]
if len(s) == 3 and s in iata or len(s) == 4 and s in icao:
state.append(line[state_bnds].strip())
name.append(line[name_bnds].strip())
lat.append(str2latlon(line[lat_bnds]))
lon.append(str2latlon(line[lon_bnds]))
z.append(float(line[z_bnds]))
cntry.append(line[cntry_bnds])
break
else:
state.append('NA')
name.append('NA')
lat.append(np.nan)
lon.append(np.nan)
z.append(np.nan)
cntry.append('NA')
infile.close()
return {'state': np.array(state), 'name': np.array(name), 'lat': np.array(lat),
'lon': np.array(lon), 'elevation': np.array(z), 'country': np.array(cntry),
'units': {'lat': 'deg', 'lon': 'deg', 'z': 'm'}}
```
Observation Data
----------------
Set a date and time for upper-air observations (should only be 00 or 12
UTC for the hour).
Request all data from Iowa State using the Siphon package. The result is
a pandas DataFrame containing all of the sounding data from all
available stations.
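One way to snap the current time to the most recent synoptic hour (00 or 12 UTC) is sketched below; the cell that follows instead simply steps back one full day at 00 UTC to ensure data availability:

```python
from datetime import datetime

def latest_synoptic(now):
    """Most recent 00 or 12 UTC at or before `now`."""
    hour = 12 if now.hour >= 12 else 0
    return now.replace(hour=hour, minute=0, second=0, microsecond=0)
```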
```
# Set date for desired UPA data
today = datetime.utcnow()
# Go back one day to ensure data availability
date = datetime(today.year, today.month, today.day, 0) - timedelta(days=1)
# Request data using Siphon request for data from Iowa State Archive
data = IAStateUpperAir.request_all_data(date)
```
Subset Observational Data
-------------------------
The request above returns all levels from all radiosonde sites
available through the service. For plotting a single pressure surface,
only the data from that level is needed. Below, the data is subset
and a few parameters are set based on the level chosen. Additionally, the
station information is obtained and latitude and longitude data are added
to the DataFrame.
```
level = 500
if (level == 925) | (level == 850) | (level == 700):
cint = 30
def hght_format(v): return format(v, '.0f')[1:]
elif level == 500:
cint = 60
def hght_format(v): return format(v, '.0f')[:3]
elif level == 300:
cint = 120
def hght_format(v): return format(v, '.0f')[:3]
elif level < 300:
cint = 120
def hght_format(v): return format(v, '.0f')[1:4]
# Create subset of all data for a given level
data_subset = data.pressure == level
df = data[data_subset]
# Get station lat/lon from look-up file; add to Dataframe
stn_info = station_info(list(df.station.values))
df.insert(10, 'latitude', stn_info['lat'])
df.insert(11, 'longitude', stn_info['lon'])
```
Gridded Data
------------
Obtain GFS gridded output for contour plotting. Specifically,
geopotential height and temperature data for the given level and subset
for over North America. Data are smoothed for aesthetic reasons.
```
# Get GFS data and subset to North America for Geopotential Height and Temperature
ds = xr.open_dataset('https://thredds.ucar.edu/thredds/dodsC/grib/NCEP/GFS/Global_0p5deg_ana/'
'GFS_Global_0p5deg_ana_{0:%Y%m%d}_{0:%H}00.grib2'.format(
date)).metpy.parse_cf()
# Geopotential height and smooth
hght = ds.Geopotential_height_isobaric.metpy.sel(
vertical=level*units.hPa, time=date, lat=slice(70, 15), lon=slice(360-145, 360-50))
smooth_hght = mpcalc.smooth_n_point(hght, 9, 10)
# Temperature, smooth, and convert to Celsius
tmpk = ds.Temperature_isobaric.metpy.sel(
vertical=level*units.hPa, time=date, lat=slice(70, 15), lon=slice(360-145, 360-50))
smooth_tmpc = (mpcalc.smooth_n_point(tmpk, 9, 10)).to('degC')
```
Create DIFAX Replication
------------------------
Plot the observational data and contours on a Lambert Conformal map and
add features that resemble the historic DIFAX maps.
```
# Set up map coordinate reference system
mapcrs = ccrs.LambertConformal(
central_latitude=45, central_longitude=-100, standard_parallels=(30, 60))
# Set up station locations for plotting observations
point_locs = mapcrs.transform_points(
ccrs.PlateCarree(), df['longitude'].values, df['latitude'].values)
# Start figure and set graphics extent
fig = plt.figure(1, figsize=(17, 15))
ax = plt.subplot(111, projection=mapcrs)
ax.set_extent([-125, -70, 20, 55])
# Add map features for geographic reference
ax.add_feature(cfeature.COASTLINE.with_scale('50m'), edgecolor='grey')
ax.add_feature(cfeature.LAND.with_scale('50m'), facecolor='white')
ax.add_feature(cfeature.STATES.with_scale('50m'), edgecolor='grey')
# Plot plus signs every degree lat/lon
plus_lat = []
plus_lon = []
other_lat = []
other_lon = []
for x in hght.lon.values[::2]:
for y in hght.lat.values[::2]:
if (x % 5 == 0) | (y % 5 == 0):
plus_lon.append(x)
plus_lat.append(y)
else:
other_lon.append(x)
other_lat.append(y)
ax.scatter(other_lon, other_lat, s=5, marker='o',
transform=ccrs.PlateCarree(), color='lightgrey', zorder=-1)
ax.scatter(plus_lon, plus_lat, s=30, marker='+',
transform=ccrs.PlateCarree(), color='lightgrey', zorder=-1)
# Add gridlines for every 5 degree lat/lon
ax.gridlines(linestyle='solid', ylocs=range(15, 71, 5), xlocs=range(-150, -49, 5))
# Start the station plot by specifying the axes to draw on, as well as the
lon/lat of the stations (with transform). We also set the fontsize to 10 pt.
stationplot = StationPlot(ax, df['longitude'].values, df['latitude'].values, clip_on=True,
transform=ccrs.PlateCarree(), fontsize=10)
# Plot the temperature and dew point to the upper and lower left, respectively, of
# the center point.
stationplot.plot_parameter('NW', df['temperature'], color='black')
stationplot.plot_parameter('SW', df['dewpoint'], color='black')
# A more complex example uses a custom formatter to control how the geopotential height
# values are plotted. This is set in an earlier if-statement to work appropriate for
# different levels.
stationplot.plot_parameter('NE', df['height'], formatter=hght_format)
# Add wind barbs
stationplot.plot_barb(df['u_wind'], df['v_wind'], length=7, pivot='tip')
# Plot Solid Contours of Geopotential Height
cs = ax.contour(hght.lon, hght.lat, smooth_hght,
range(0, 20000, cint), colors='black', transform=ccrs.PlateCarree())
clabels = plt.clabel(cs, fmt='%d', colors='white', inline_spacing=5, use_clabeltext=True)
# Contour labels with black boxes and white text
for t in cs.labelTexts:
t.set_bbox({'facecolor': 'black', 'pad': 4})
t.set_fontweight('heavy')
# Plot Dashed Contours of Temperature
cs2 = ax.contour(hght.lon, hght.lat, smooth_tmpc, range(-60, 51, 5),
colors='black', transform=ccrs.PlateCarree())
clabels = plt.clabel(cs2, fmt='%d', colors='white', inline_spacing=5, use_clabeltext=True)
# Set longer dashes than default
for c in cs2.collections:
c.set_dashes([(0, (5.0, 3.0))])
# Contour labels with black boxes and white text
for t in cs2.labelTexts:
t.set_bbox({'facecolor': 'black', 'pad': 4})
t.set_fontweight('heavy')
# Plot filled circles for Radiosonde Obs
ax.scatter(df['longitude'].values, df['latitude'].values, s=12,
marker='o', color='black', transform=ccrs.PlateCarree())
# Use definition to plot H/L symbols
plot_maxmin_points(hght.lon, hght.lat, smooth_hght.m, 'max', 50,
symbol='H', color='black', transform=ccrs.PlateCarree())
plot_maxmin_points(hght.lon, hght.lat, smooth_hght.m, 'min', 25,
symbol='L', color='black', transform=ccrs.PlateCarree())
# Add titles
plt.title('Upper-air Observations at {}-hPa Analysis Heights/Temperature'.format(level),
loc='left')
plt.title(f'Valid: {date}', loc='right');
```
<a href="https://colab.research.google.com/github/Cknowles11/DS-Unit-2-Applied-Modeling/blob/master/Copy_of_LS_DS_233_assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science
*Unit 2, Sprint 3, Module 3*
---
# Permutation & Boosting
You will use your portfolio project dataset for all assignments this sprint.
## Assignment
Complete these tasks for your project, and document your work.
- [ ] If you haven't completed assignment #1, please do so first.
- [ ] Continue to clean and explore your data. Make exploratory visualizations.
- [ ] Fit a model. Does it beat your baseline?
- [ ] Try xgboost.
- [ ] Get your model's permutation importances.
You should try to complete an initial model today, because the rest of the week, we're making model interpretation visualizations.
But, if you aren't ready to try xgboost and permutation importances with your dataset today, that's okay. You can practice with another dataset instead. You may choose any dataset you've worked with previously.
The data subdirectory includes the Titanic dataset for classification and the NYC apartments dataset for regression. You may want to choose one of these datasets, because example solutions will be available for each.
## Reading
Top recommendations in _**bold italic:**_
#### Permutation Importances
- _**[Kaggle / Dan Becker: Machine Learning Explainability](https://www.kaggle.com/dansbecker/permutation-importance)**_
- [Christoph Molnar: Interpretable Machine Learning](https://christophm.github.io/interpretable-ml-book/feature-importance.html)
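The idea those readings walk through is small enough to sketch in a few lines of NumPy — a toy scorer stands in for a fitted model, and this is simplified relative to eli5's `PermutationImportance`:

```python
import numpy as np

def permutation_importance(score_fn, X, y, col, n_repeats=5, seed=0):
    """Mean drop in score when column `col` is shuffled (bigger drop = more important)."""
    rng = np.random.default_rng(seed)
    base = score_fn(X, y)
    drops = []
    for _ in range(n_repeats):
        Xp = X.copy()
        rng.shuffle(Xp[:, col])        # break the column's link to y
        drops.append(base - score_fn(Xp, y))
    return float(np.mean(drops))
```

Shuffling a column the model relies on hurts the score; shuffling an irrelevant column leaves it unchanged.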
#### (Default) Feature Importances
- [Ando Saabas: Selecting good features, Part 3, Random Forests](https://blog.datadive.net/selecting-good-features-part-iii-random-forests/)
- [Terence Parr, et al: Beware Default Random Forest Importances](https://explained.ai/rf-importance/index.html)
#### Gradient Boosting
- [A Gentle Introduction to the Gradient Boosting Algorithm for Machine Learning](https://machinelearningmastery.com/gentle-introduction-gradient-boosting-algorithm-machine-learning/)
- [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 8
- _**[Gradient Boosting Explained](https://www.gormanalysis.com/blog/gradient-boosting-explained/)**_ — Ben Gorman
- [Gradient Boosting Explained](http://arogozhnikov.github.io/2016/06/24/gradient_boosting_explained.html) — Alex Rogozhnikov
- [How to explain gradient boosting](https://explained.ai/gradient-boosting/) — Terence Parr & Jeremy Howard
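Before diving in, it may help to see the core idea of permutation importance in isolation: shuffle one column of the held-out data and measure how much the score drops. A minimal sketch on toy data using scikit-learn's built-in `permutation_importance` (the notebook below uses `eli5` instead, but the mechanics are the same):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy data: 3 informative features out of 6, so only those should matter
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permute each column on the validation set and record the accuracy drop
result = permutation_importance(model, X_val, y_val, n_repeats=5, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:+.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Features whose shuffling barely moves the score are candidates for removal.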
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
!pip install category_encoders==2.*
!pip install eli5
from google.colab import drive
drive.mount('/content/drive')
df = pd.read_csv('/content/drive/My Drive/Local Repo/wineQualityWhites.csv')
df.sample(5)
df = df.drop('Unnamed: 0', axis = 1)
df.dtypes
```
# Feature Engineering
```
# Free Sulfur Dioxide in comparison to Total Sulfur Dioxide
df['fsd_perc'] = df['free.sulfur.dioxide'] / df['total.sulfur.dioxide']
df['fsd_perc'] = df['fsd_perc'].round(3)
```
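One thing to watch with ratio features: if the denominator can ever be zero, pandas division yields `inf` rather than raising. The white-wine data likely never has zero total sulfur dioxide, but a defensive sketch (on a hypothetical toy frame) looks like:

```python
import numpy as np
import pandas as pd

# Hypothetical toy frame with a zero denominator in the second row
toy = pd.DataFrame({'free.sulfur.dioxide': [10.0, 5.0],
                    'total.sulfur.dioxide': [100.0, 0.0]})
ratio = toy['free.sulfur.dioxide'] / toy['total.sulfur.dioxide']
ratio = ratio.replace([np.inf, -np.inf], np.nan)  # treat divide-by-zero as missing
print(ratio.round(3).tolist())
```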
# Exploration
```
above_avg_sub = df[df['quality'] >= 5 ]
above_avg_sub.head()
above_avg_sub.describe()
below_avg_sub = df[df['quality'] <= 5 ]
below_avg_sub.describe()
```
# Fit Model
```
from sklearn.model_selection import train_test_split
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
import eli5
from eli5.sklearn import PermutationImportance
from sklearn.metrics import accuracy_score
train,test = train_test_split(df, train_size = .8, test_size = .2, stratify = df['quality'], random_state = 21)
train, val = train_test_split(train, train_size = .80, test_size = .20, stratify = train['quality'], random_state = 21)
print(train.shape)
print(val.shape)
test.shape
target = 'quality'
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test.drop(columns=target)  # drop the target from the test features too
y_test = test[target]
tfs = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median')
)
X_train_transformed = tfs.fit_transform(X_train)
X_val_transformed = tfs.transform(X_val)
model = RandomForestClassifier(random_state=42)
model.fit(X_train_transformed, y_train)
pipeline = make_pipeline(
    ce.OrdinalEncoder(),
    SimpleImputer(strategy='median'),
    RandomForestClassifier(random_state=42)  # seed for reproducibility, matching the model above
)
```
# Permutation Importances / Model Fit
```
permuter = PermutationImportance(model, scoring = 'accuracy', n_iter = 5, random_state=42)
permuter.fit(X_val_transformed, y_val)
feature_names = X_val.columns.tolist()
eli5.show_weights(permuter, top = None, feature_names = feature_names)
pipeline.fit(X_train,y_train)
pipeline.score(X_val, y_val)
from xgboost import XGBClassifier
xgb_pipeline = make_pipeline(
ce.OrdinalEncoder(),
XGBClassifier()
)
xgb_pipeline.fit(X_train, y_train)
y_pred = xgb_pipeline.predict(X_val)  # score the xgboost pipeline fitted just above
accuracy_score(y_val, y_pred)
```
# Parameter Tuning
```
encoder = ce.OrdinalEncoder()
X_train_encoded = encoder.fit_transform(X_train)
X_val_encoded = encoder.transform(X_val)
x_model = XGBClassifier(
n_estimators = 1000,
max_depth = 10,
learning_rate = 0.5,
)
eval_set = [(X_train_encoded, y_train),
(X_val_encoded, y_val)]
x_model.fit(X_train_encoded, y_train,
eval_set = eval_set,
eval_metric = 'merror',
early_stopping_rounds = 50)
```
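The values above (`n_estimators`, `max_depth`, `learning_rate`) are hand-picked; scikit-learn can automate the search. A small sketch with `RandomizedSearchCV` on toy data and a random forest (the same pattern works for the xgboost pipeline, just with xgboost parameter names):

```python
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=300, random_state=0)

# Distributions/lists to sample candidate hyperparameters from
param_distributions = {
    'n_estimators': randint(50, 300),
    'max_depth': [5, 10, None],
}
search = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                            param_distributions, n_iter=5, cv=3,
                            scoring='accuracy', random_state=0)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```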
```
# Decorator Introduction
def func():
return 1
print(func())
print(func) # functions are objects, so they can be assigned to variables
def hello():
return "Hello"
greet = hello
print(hello)
print(greet())
# Delete hello
del hello
try:
print(hello())
except NameError:
print("hello() is not defined")
print(greet)
print(greet())
# Returning function from functions
def outer():
print("The outer() function is executing ...")
def inner():
return "The inner() function is executing ..."
print("I am going to return a function")
return inner
my_new_func = outer()
print(my_new_func)
print(my_new_func())
# Passing function to other functions
def secondary(original_function):
print("Some more codes which executed inside the secondary() function")
original_function()
def say_hello():
print("Hi Nilanjan")
say_hello()
secondary(say_hello)
# Defining Decorator function
def my_decorator(original_function):
def wrapper(*args, **kwargs):
print(f"\nSome extra code, before execution of {original_function.__name__} function")
original_function(*args, **kwargs)
print(f"Some more code, after execution of {original_function.__name__} function")
return wrapper
# Using Decorator
def function_needs_decorator():
print("This function need some decoration!")
function_needs_decorator()
# New decorated function
decorated_function = my_decorator(function_needs_decorator)
decorated_function()
# Comment and Uncomment this @my_decorator to use another_function() as decorated and normal function
@my_decorator # ON/OFF Switch
def another_function():
    print("Another function which needs some decoration!")
another_function()
# Decorator with no arguments
def decorator_one(original_function):
def wrapper():
print(f"\nSome extra code, before execution of {original_function.__name__} function")
original_function()
print(f"Some more code, after execution of {original_function.__name__} function")
return wrapper
# Decorator which accepts arguments
def decorator_two(original_function):
def wrapper(*args, **kwargs):
print(f"\nSome extra code, before execution of {original_function.__name__} function")
original_function(*args, **kwargs)
print(f"Some more code, after execution of {original_function.__name__} function")
return wrapper
@decorator_one
def display_info_one(name, age):
print(f"display_info_one function ran with arguments ({name}, {age})")
@decorator_two
def display_info_two(name, age):
print(f"display_info_two function ran with arguments ({name}, {age})")
try:
display_info_one("Nilanjan", 21)
except Exception as err_msg:
    print("\nThe decorated display_info_one function threw an error:", err_msg)
try:
display_info_two("Nilanjan", 21)
except Exception as err_msg:
    print("\nThe decorated display_info_two function threw an error:", err_msg)
# Using Class as decorator
class class_decorator(object):
def __init__(self, original_function):
self.original_function = original_function
def __call__(self, *args, **kwargs):
print(f"\nSome extra code, before execution of {self.original_function.__name__} function")
self.original_function(*args, **kwargs)
print(f"Some more code, after execution of {self.original_function.__name__} function")
@class_decorator
def display_info(name, age):
print(f"display_info function ran with arguments ({name}, {age})")
display_info('Nilanjan', 21)
# When we use decorators, the newly decorated functions show some unexpected results
# Preserving the information about original_function
def some_decorator(original_function):
def my_wrapper(*args, **kwargs):
print(f"Some code before {original_function.__name__}() function")
return original_function(*args, **kwargs)
return my_wrapper
def some_func(name, country='India'):
print(f"{name} lives in {country}")
@some_decorator
def hey():
print("I am in hey() function")
my_decorated_func = some_decorator(some_func)
print(my_decorated_func.__name__) # output: my_wrapper, not some_func
my_decorated_func = hey
print(my_decorated_func.__name__) # output: my_wrapper, not hey
# Solution
from functools import wraps
def my_new_decorator(original_function):
@wraps(original_function)
def wrapper(*args, **kwargs):
print(f"Some code before {original_function.__name__}() function")
return original_function(*args, **kwargs)
return wrapper
@my_new_decorator
def hey():
print("I am in hey() function")
my_decorated_func = my_new_decorator(some_func)
print(my_decorated_func.__name__) #output: some_func
my_decorated_func = hey
print(my_decorated_func.__name__) #output: hey
```
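Putting the pieces together, one of the most common real-world decorators is a timer. A short sketch combining `*args`/`**kwargs` with `functools.wraps` so the decorated function keeps its identity:

```python
import time
from functools import wraps

def timer(original_function):
    @wraps(original_function)  # preserve __name__ and __doc__ of the original
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = original_function(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{original_function.__name__} ran in {elapsed:.4f}s")
        return result  # pass the original return value through
    return wrapper

@timer
def slow_add(a, b):
    time.sleep(0.1)
    return a + b

print(slow_add(2, 3))     # prints the timing line, then 5
print(slow_add.__name__)  # 'slow_add', thanks to @wraps
```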
# Section 2 - Neural Networks
## Lesson 1 - Introduction to Neural Networks
### 27. The Gradient Descent Algorithm
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
#Some helper functions for plotting and drawing lines
def plot_points(X, y):
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k')
def display(m, b, color='g--'):
plt.xlim(-0.05,1.05)
plt.ylim(-0.05,1.05)
x = np.arange(-10, 10, 0.1)
plt.plot(x, m*x+b, color)
```
#### Reading and plotting the data
```
data = pd.read_csv('data.csv', header=None)
X = np.array(data[[0,1]])
y = np.array(data[2])
plot_points(X,y)
plt.show()
```
#### TODO: Implementing the basic functions
Now it's your turn to shine. Implement the following formulas, as explained in the text.
- Sigmoid activation function
$$\sigma(x) = \frac{1}{1+e^{-x}}$$
- Output (prediction) formula
$$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$
- Error function
$$Error(y, \hat{y}) = - y \log(\hat{y}) - (1-y) \log(1-\hat{y})$$
- The function that updates the weights
$$ w_i \longrightarrow w_i + \alpha (y - \hat{y}) x_i$$
$$ b \longrightarrow b + \alpha (y - \hat{y})$$
```
# Implement the following functions
# Activation (sigmoid) function
def sigmoid(x):
return 1 / (1 + np.exp(-x))
# Output (prediction) formula
def output_formula(features, weights, bias):
return sigmoid(np.dot(features, weights) + bias)
# Error (log-loss) formula
def error_formula(y, output):
return - y*np.log(output) - (1 - y) * np.log(1-output)
# Gradient descent step
def update_weights(x, y, weights, bias, learnrate):
output = output_formula(x, weights, bias)
d_error = y - output
weights += learnrate * d_error * x
bias += learnrate * d_error
return weights, bias
```
#### Training function
This function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm.
```
np.random.seed(44)
epochs = 100
learnrate = 0.01
def train(features, targets, epochs, learnrate, graph_lines=False):
errors = []
n_records, n_features = features.shape
last_loss = None
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
bias = 0
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features, targets):
output = output_formula(x, weights, bias)
error = error_formula(y, output)
weights, bias = update_weights(x, y, weights, bias, learnrate)
# Printing out the log-loss error on the training set
out = output_formula(features, weights, bias)
loss = np.mean(error_formula(targets, out))
errors.append(loss)
if e % (epochs / 10) == 0:
print("\n========== Epoch", e,"==========")
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
predictions = out > 0.5
accuracy = np.mean(predictions == targets)
print("Accuracy: ", accuracy)
if graph_lines and e % (epochs / 100) == 0:
display(-weights[0]/weights[1], -bias/weights[1])
# Plotting the solution boundary
plt.title("Solution boundary")
display(-weights[0]/weights[1], -bias/weights[1], 'black')
# Plotting the data
plot_points(features, targets)
plt.show()
# Plotting the error
plt.title("Error Plot")
plt.xlabel('Number of epochs')
plt.ylabel('Error')
plt.plot(errors)
plt.show()
```
#### Time to train the algorithm!
When we run the function, we'll obtain the following:
- 10 updates with the current training loss and accuracy
- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.
- A plot of the error function. Notice how it decreases as we go through more epochs.
```
train(X, y, epochs, learnrate, True)
```
#### 36. Predicting Student Admissions with Neural Networks
In this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data:
- GRE Scores (Test)
- GPA Scores (Grades)
- Class rank (1-4)
The dataset originally came from here: http://www.ats.ucla.edu/
##### Loading the data
To load the data and format it nicely, we will use two very useful packages called Pandas and NumPy. You can read the documentation here:
- https://pandas.pydata.org/pandas-docs/stable/
- https://docs.scipy.org/
```
# Importing pandas and numpy
import pandas as pd
import numpy as np
# Reading the csv file into a pandas DataFrame
data = pd.read_csv('student_data.csv')
# Printing out the first 10 rows of our data
data[:10]
```
##### Plotting the data
First, let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ignore the rank.
```
# Importing matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
# Function to help us plot
def plot_points(data):
X = np.array(data[["gre","gpa"]])
y = np.array(data["admit"])
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')
plt.xlabel('Test (GRE)')
plt.ylabel('Grades (GPA)')
# Plotting the points
plot_points(data)
plt.show()
```
Roughly, it looks like the students with high scores in the grades and test passed, while the ones with low scores didn't, but the data is not as nicely separable as we hoped it would be. Maybe it would help to take the rank into account? Let's make 4 plots, one for each rank.
```
# Separating the ranks
data_rank1 = data[data["rank"]==1]
data_rank2 = data[data["rank"]==2]
data_rank3 = data[data["rank"]==3]
data_rank4 = data[data["rank"]==4]
# Plotting the graphs
plot_points(data_rank1)
plt.title("Rank 1")
plt.show()
plot_points(data_rank2)
plt.title("Rank 2")
plt.show()
plot_points(data_rank3)
plt.title("Rank 3")
plt.show()
plot_points(data_rank4)
plt.title("Rank 4")
plt.show()
```
This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it.
##### TODO: One-hot encoding the rank
Use the `get_dummies` function in pandas in order to one-hot encode the data.
Hint: To drop a column, use pandas [`DataFrame.drop`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html).
```
# Make dummy variables for rank
one_hot_data = pd.concat([data, pd.get_dummies(data['rank'], prefix='rank')], axis=1)
# Drop the previous rank column
one_hot_data = one_hot_data.drop('rank', axis=1)
# Print the first 10 rows of our data
one_hot_data[:10]
```
##### TODO: Scaling the data
The next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. Features on such different scales are hard for a neural network to handle. Let's fit our two features into a range of 0-1, by dividing the grades by 4.0, and the test score by 800.
```
# Copying our data (.copy() avoids pandas SettingWithCopyWarning on the writes below)
processed_data = one_hot_data.copy()
# Scaling the columns
processed_data['gre'] = processed_data['gre']/800
processed_data['gpa'] = processed_data['gpa']/4.0
processed_data[:10]
```
##### Splitting the data into Training and Testing
In order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data.
```
sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)
train_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)
print("Number of training samples is", len(train_data))
print("Number of testing samples is", len(test_data))
print(train_data[:10])
print(test_data[:10])
```
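The `np.random.choice` split above works, but note it neither fixes a seed nor stratifies on the target. An equivalent split (shown on a hypothetical toy frame) using scikit-learn's `train_test_split`, which can do both:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for processed_data with a balanced 'admit' column
toy = pd.DataFrame({'admit': [0, 1] * 20, 'gre': range(40)})
train_df, test_df = train_test_split(toy, test_size=0.1,
                                     stratify=toy['admit'], random_state=0)
print(len(train_df), len(test_df))  # 36 4
```

Stratifying keeps the admit ratio the same in both halves, which matters for small test sets.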
##### Splitting the data into features and targets (labels)
Now, as a final step before the training, we'll split the data into features (X) and targets (y).
```
features = train_data.drop('admit', axis=1)
targets = train_data['admit']
features_test = test_data.drop('admit', axis=1)
targets_test = test_data['admit']
print(features[:10])
print(targets[:10])
```
##### Training the 2-layer Neural Network
The following function trains the 2-layer neural network. First, we'll write some helper functions.
```
# Activation (sigmoid) function
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def sigmoid_prime(x):
return sigmoid(x) * (1-sigmoid(x))
def error_formula(y, output):
return - y*np.log(output) - (1 - y) * np.log(1-output)
```
##### TODO: Backpropagate the error
Now it's your turn to shine. Write the error term. Remember that this is given by the equation $$ (y-\hat{y}) \sigma'(x) $$
```
def error_term_formula(x, y, output):
    return (y - output) * output * (1 - output)  # sigma'(h) = output * (1 - output), since output = sigmoid(h)
# Neural Network hyperparameters
epochs = 1000
learnrate = 0.5
# Training function
def train_nn(features, targets, epochs, learnrate):
    # Use the same seed to make debugging easier
np.random.seed(42)
n_records, n_features = features.shape
last_loss = None
# Initialize weights
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features.values, targets):
# Loop through all records, x is the input, y is the target
# Activation of the output unit
# Notice we multiply the inputs and the weights here
# rather than storing h as a separate variable
output = sigmoid(np.dot(x, weights))
# The error, the target minus the network output
error = error_formula(y, output)
# The error term
error_term = error_term_formula(x, y, output)
# The gradient descent step, the error times the gradient times the inputs
del_w += error_term * x
# Update the weights here. The learning rate times the
# change in weights, divided by the number of records to average
weights += learnrate * del_w / n_records
# Printing out the mean square error on the training set
if e % (epochs / 10) == 0:
out = sigmoid(np.dot(features, weights))
loss = np.mean((out - targets) ** 2)
print("Epoch:", e)
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
print("=========")
print("Finished training!")
return weights
weights = train_nn(features, targets, epochs, learnrate)
```
##### Calculating the Accuracy on the Test Data
```
# Calculate accuracy on test data
test_out = sigmoid(np.dot(features_test, weights))
predictions = test_out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
```
## Lesson 2 - Implementing Gradient Descent
### 4. Gradient Descent : The Code
Example for a single network output
```
# Defining the sigmoid function for activations
def sigmoid(x):
return 1/(1+np.exp(-x))
# Derivative of the sigmoid function
def sigmoid_prime(x):
return sigmoid(x) * (1 - sigmoid(x))
# Input data
x = np.array([0.1, 0.3])
# Target
y = 0.2
# Input to output weights
weights = np.array([-0.8, 0.5])
# The learning rate, eta in the weight step equation
learnrate = 0.5
# the linear combination performed by the node (h in f(h) and f'(h))
h = x[0]*weights[0] + x[1]*weights[1]
# or h = np.dot(x, weights)
# The neural network output (y-hat)
nn_output = sigmoid(h)
# output error (y - y-hat)
error = y - nn_output
# output gradient (f'(h))
output_grad = sigmoid_prime(h)
# error term (lowercase delta)
error_term = error * output_grad
# Gradient descent step
del_w = [ learnrate * error_term * x[0],
learnrate * error_term * x[1]]
# or del_w = learnrate * error_term * x
```
Quiz for part 4
```
import numpy as np
def sigmoid(x):
"""
Calculate sigmoid
"""
return 1/(1+np.exp(-x))
def sigmoid_prime(x):
"""
# Derivative of the sigmoid function
"""
return sigmoid(x) * (1 - sigmoid(x))
learnrate = 0.5
x = np.array([1, 2, 3, 4])
y = np.array(0.5)
# Initial weights
w = np.array([0.5, -0.5, 0.3, 0.1])
### Calculate one gradient descent step for each weight
### Note: Some steps have been consolidated, so there are
### fewer variable names than in the above sample code
# TODO: Calculate the node's linear combination of inputs and weights
h = np.dot(x, w)
# TODO: Calculate output of neural network
nn_output = sigmoid(h)
# TODO: Calculate error of neural network
error = y - nn_output
# TODO: Calculate the error term
# Remember, this requires the output gradient, which we haven't
# specifically added a variable for.
error_term = error * sigmoid_prime(h)
# Note: The sigmoid_prime function calculates sigmoid(h) twice,
# but you've already calculated it once. You can make this
# code more efficient by calculating the derivative directly
# rather than calling sigmoid_prime, like this:
# error_term = error * nn_output * (1 - nn_output)
# TODO: Calculate change in weights
del_w = learnrate * error_term * x
print('Neural Network output:')
print(nn_output)
print('Amount of Error:')
print(error)
print('Change in Weights:')
print(del_w)
```
### 5. Implementing Gradient Descent
Quiz
```
import numpy as np
from data_prep import features, targets, features_test, targets_test
def sigmoid(x):
"""
Calculate sigmoid
"""
return 1 / (1 + np.exp(-x))
# TODO: We haven't provided the sigmoid_prime function like we did in
# the previous lesson to encourage you to come up with a more
# efficient solution. If you need a hint, check out the comments
# in solution.py from the previous lecture.
# Use the same seed to make debugging easier
np.random.seed(42)
n_records, n_features = features.shape
last_loss = None
# Initialize weights
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
# Neural Network hyperparameters
epochs = 1000
learnrate = 0.5
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features.values, targets):
# Loop through all records, x is the input, y is the target
# Note: We haven't included the h variable from the previous
# lesson. You can add it if you want, or you can calculate
# the h together with the output
# TODO: Calculate the output
output = sigmoid(np.dot(x, weights))
# TODO: Calculate the error
error = y - output
# TODO: Calculate the error term
error_term = error * output * (1 - output)
# TODO: Calculate the change in weights for this sample
# and add it to the total weight change
del_w += error_term * x
# TODO: Update weights using the learning rate and the average change in weights
weights += learnrate * del_w / n_records
# Printing out the mean square error on the training set
if e % (epochs / 10) == 0:
out = sigmoid(np.dot(features, weights))
loss = np.mean((out - targets) ** 2)
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
# Calculate accuracy on test data
test_out = sigmoid(np.dot(features_test, weights))
predictions = test_out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
```
### 6. Multilayer Perceptrons
Quiz
```
import numpy as np
def sigmoid(x):
"""
Calculate sigmoid
"""
return 1/(1+np.exp(-x))
# Network size
N_input = 4
N_hidden = 3
N_output = 2
np.random.seed(42)
# Make some fake data
X = np.random.randn(4)
weights_input_to_hidden = np.random.normal(0, scale=0.1, size=(N_input, N_hidden))
weights_hidden_to_output = np.random.normal(0, scale=0.1, size=(N_hidden, N_output))
# TODO: Make a forward pass through the network
hidden_layer_in = np.dot(X, weights_input_to_hidden)
hidden_layer_out = sigmoid(hidden_layer_in)
print('Hidden-layer Output:')
print(hidden_layer_out)
output_layer_in = np.dot(hidden_layer_out, weights_hidden_to_output)
output_layer_out = sigmoid(output_layer_in)
print('Output-layer Output:')
print(output_layer_out)
```
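A cheap way to catch wiring mistakes in multilayer code is to check the shape at every layer. With the sizes used above, the contract is `(4,) @ (4, 3) -> (3,)`, then `(3,) @ (3, 2) -> (2,)`:

```python
import numpy as np

np.random.seed(42)
N_input, N_hidden, N_output = 4, 3, 2
X = np.random.randn(N_input)
W_ih = np.random.normal(0, 0.1, size=(N_input, N_hidden))
W_ho = np.random.normal(0, 0.1, size=(N_hidden, N_output))

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

hidden = sigmoid(X @ W_ih)    # shape (3,): one activation per hidden unit
out = sigmoid(hidden @ W_ho)  # shape (2,): one activation per output unit
print(hidden.shape, out.shape)
```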
### 7. Backpropagation
Quiz
```
import numpy as np
def sigmoid(x):
"""
Calculate sigmoid
"""
return 1 / (1 + np.exp(-x))
x = np.array([0.5, 0.1, -0.2])
target = 0.6
learnrate = 0.5
weights_input_hidden = np.array([[0.5, -0.6],
[0.1, -0.2],
[0.1, 0.7]])
weights_hidden_output = np.array([0.1, -0.3])
## Forward pass
hidden_layer_input = np.dot(x, weights_input_hidden)
hidden_layer_output = sigmoid(hidden_layer_input)
output_layer_in = np.dot(hidden_layer_output, weights_hidden_output)
output = sigmoid(output_layer_in)
## Backwards pass
## TODO: Calculate output error
error = target - output
# TODO: Calculate error term for output layer
output_error_term = error * output * (1 - output)
# TODO: Calculate error term for hidden layer
hidden_error_term = np.dot(output_error_term, weights_hidden_output) * \
hidden_layer_output * (1 - hidden_layer_output)
# TODO: Calculate change in weights for hidden layer to output layer
delta_w_h_o = learnrate * output_error_term * hidden_layer_output
# TODO: Calculate change in weights for input layer to hidden layer
delta_w_i_h = learnrate * hidden_error_term * x[:, None]
print('Change in weights for hidden layer to output layer:')
print(delta_w_h_o)
print('Change in weights for input layer to hidden layer:')
print(delta_w_i_h)
```
### 8. Implementing Backpropagation
Quiz
```
import numpy as np
from data_prep import features, targets, features_test, targets_test
np.random.seed(21)
def sigmoid(x):
"""
Calculate sigmoid
"""
return 1 / (1 + np.exp(-x))
# Hyperparameters
n_hidden = 2 # number of hidden units
epochs = 900
learnrate = 0.005
n_records, n_features = features.shape
last_loss = None
# Initialize weights
weights_input_hidden = np.random.normal(scale=1 / n_features ** .5,
size=(n_features, n_hidden))
weights_hidden_output = np.random.normal(scale=1 / n_features ** .5,
size=n_hidden)
for e in range(epochs):
del_w_input_hidden = np.zeros(weights_input_hidden.shape)
del_w_hidden_output = np.zeros(weights_hidden_output.shape)
for x, y in zip(features.values, targets):
## Forward pass ##
# TODO: Calculate the output
hidden_input = np.dot(x, weights_input_hidden)
hidden_output = sigmoid(hidden_input)
output = sigmoid(np.dot(hidden_output,
weights_hidden_output))
## Backward pass ##
# TODO: Calculate the network's prediction error
error = y - output
# TODO: Calculate error term for the output unit
output_error_term = error * output * (1 - output)
## propagate errors to hidden layer
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(output_error_term, weights_hidden_output)
# TODO: Calculate the error term for the hidden layer
hidden_error_term = hidden_error * hidden_output * (1 - hidden_output)
# TODO: Update the change in weights
del_w_hidden_output += output_error_term * hidden_output
del_w_input_hidden += hidden_error_term * x[:, None]
# TODO: Update weights
weights_input_hidden += learnrate * del_w_input_hidden / n_records
weights_hidden_output += learnrate * del_w_hidden_output / n_records
# Printing out the mean square error on the training set
if e % (epochs / 10) == 0:
        hidden_output = sigmoid(np.dot(features, weights_input_hidden))  # full training set, not the last sample x
out = sigmoid(np.dot(hidden_output,
weights_hidden_output))
loss = np.mean((out - targets) ** 2)
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
# Calculate accuracy on test data
hidden = sigmoid(np.dot(features_test, weights_input_hidden))
out = sigmoid(np.dot(hidden, weights_hidden_output))
predictions = out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
```
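A standard sanity check for any backpropagation implementation is to compare the analytic gradient against a finite-difference estimate. A self-contained sketch using the same tiny network as the quiz in section 7, with a squared-error loss assumed so the output error term is (y − ŷ)·ŷ·(1 − ŷ):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Same tiny network as the backpropagation quiz: 3 inputs, 2 hidden, 1 output
x = np.array([0.5, 0.1, -0.2])
y = 0.6
w_ih = np.array([[0.5, -0.6], [0.1, -0.2], [0.1, 0.7]])
w_ho = np.array([0.1, -0.3])

def loss(w):
    """Squared error as a function of the hidden->output weights."""
    hidden = sigmoid(x @ w_ih)
    return 0.5 * (y - sigmoid(hidden @ w)) ** 2

# Analytic gradient: dE/dw = -(y - y_hat) * y_hat * (1 - y_hat) * hidden
hidden = sigmoid(x @ w_ih)
output = sigmoid(hidden @ w_ho)
analytic = -(y - output) * output * (1 - output) * hidden

# Numerical gradient via central differences
eps = 1e-6
numeric = np.zeros_like(w_ho)
for i in range(len(w_ho)):
    w_plus, w_minus = w_ho.copy(), w_ho.copy()
    w_plus[i] += eps
    w_minus[i] -= eps
    numeric[i] = (loss(w_plus) - loss(w_minus)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-7))
```

If the two gradients disagree, the backward pass (not the data) is the first place to look.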
```
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import torch.optim as optim
import numpy as np
import pandas as pd
# Hyperparameters
input_size = 28 * 28 # 784
num_classes = 10
num_epochs = 5
batch_size = 100
lr = 0.01
# MNIST dataset (images and labels)
train_dataset = torchvision.datasets.MNIST(root='../../data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = torchvision.datasets.MNIST(root='../../data',
train=False,
transform=transforms.ToTensor())
# Data loader (input pipeline)
train_loader_mnist = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader_mnist = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
# Logistic regression model.
# Note: nn.CrossEntropyLoss (below) applies log-softmax internally, so the
# model should output raw logits rather than stacking LogSoftmax on top.
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(input_size, num_classes)
)
# Loss and optimizer
# nn.CrossEntropyLoss() computes softmax internally
criterion = nn.CrossEntropyLoss()
# Train and test functions.
def train(model, train_loader, optimizer, num_epochs, criterion, input_size, log_interval):
model.train()
for epoch in range(num_epochs):
print('Epoch {}'.format(epoch+1))
for i, (images, labels) in enumerate(train_loader):
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = model(images)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# Log the loss.
if i % log_interval == 0:
print('Current loss: {}'.format(loss))
def test(model, test_loader,criterion):
model.eval()
test_acc = 0
total_data = 0
loss = 0
with torch.no_grad():
for _, (images, labels) in enumerate(test_loader):
output = model(images)
pred = output.argmax(dim=1, keepdim=True)
test_acc += pred.eq(labels.view_as(pred)).sum().item()
total_data += len(images)
loss = criterion(output, labels)
print('Loss: {}'.format(loss))
test_acc /= total_data
print('Test accuracy over {} data points: {}%'.format(total_data, test_acc * 100))
return loss.item()
test_losses = []
```
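One caveat before the optimizer cells that follow: they all reuse the same `model` object, so each optimizer starts from the weights the previous one already trained. For a fair comparison you'd rebuild the model before each run; a sketch of the pattern (seeding so every rebuild starts from identical weights):

```python
import torch

def make_model(input_size=28 * 28, num_classes=10):
    # Fresh, reproducible weights on every call
    torch.manual_seed(0)
    return torch.nn.Sequential(
        torch.nn.Flatten(),
        torch.nn.Linear(input_size, num_classes),
    )

m1, m2 = make_model(), make_model()
# Every rebuild starts from the same initial parameters
same = all(torch.equal(p, q) for p, q in zip(m1.parameters(), m2.parameters()))
print(same)
```

Then each section would call `model = make_model()` before constructing its optimizer.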
# SGD
```
optimizer = torch.optim.SGD(model.parameters(), lr=lr)
train(model, train_loader_mnist, optimizer, num_epochs, criterion, input_size, 100)
test_loss = test(model, test_loader_mnist, criterion)
test_losses.append(test_loss)
```
# SGD Momentum
```
optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
train(model, train_loader_mnist, optimizer, num_epochs, criterion, input_size, 100)
test_loss = test(model, test_loader_mnist, criterion)
test_losses.append(test_loss)
```
# SGD Nesterov
```
optimizer = optim.SGD(model.parameters(), lr=lr, momentum=0.9, nesterov=True)
train(model, train_loader_mnist, optimizer, num_epochs, criterion, input_size, 100)
test_loss = test(model, test_loader_mnist, criterion)
test_losses.append(test_loss)
```
# Adagrad
```
optimizer = optim.Adagrad(model.parameters(), lr=0.01)
train(model, train_loader_mnist, optimizer, num_epochs, criterion, input_size, 100)
test_loss = test(model, test_loader_mnist, criterion)
test_losses.append(test_loss)
```
# RMSProp
```
optimizer = optim.RMSprop(model.parameters(), lr=0.001)
train(model, train_loader_mnist, optimizer, num_epochs, criterion, input_size, 100)
test_loss = test(model, test_loader_mnist, criterion)
test_losses.append(test_loss)
```
# Adam
```
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
train(model, train_loader_mnist, optimizer, num_epochs, criterion, input_size, 100)
test_loss = test(model, test_loader_mnist, criterion)
test_losses.append(test_loss)
col = ['SGD','Momentum','Nesterov','Adagrad','RMSProp','Adam']
df = pd.DataFrame(data=[test_losses], columns=col)
df
df.to_csv('logistic_regression_mnist_loss.csv')
```
# Normalize loss
```
test_losses = np.asarray(test_losses)
normalized_test_losses = []
mean = np.mean(test_losses)
minus_mean = test_losses - mean
normalized_test_losses.append((minus_mean)/np.linalg.norm(minus_mean))
print(normalized_test_losses)
col = ['SGD','Momentum','Nesterov','Adagrad','RMSProp','Adam']
df = pd.DataFrame(data=normalized_test_losses, columns=col, index = ['Logistic regression MNIST'])
df
df.to_csv('logistic_regression_mnist_normalized_loss.csv')
```
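As a quick check on the normalization above: subtracting the mean and dividing by the vector norm leaves a vector with zero sum and unit length, so the optimizers are compared only by relative loss. A sketch with hypothetical per-optimizer losses:

```python
import numpy as np

losses = np.array([0.30, 0.25, 0.24, 0.31, 0.27, 0.26])  # hypothetical values
centered = losses - losses.mean()
normalized = centered / np.linalg.norm(centered)

# Zero sum and unit norm, up to floating-point error
print(round(float(normalized.sum()), 10), round(float(np.linalg.norm(normalized)), 10))
```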
```
!pip install splinter
#Import Dependencies
from splinter import Browser
from bs4 import BeautifulSoup
import requests
import re
import pandas as pd
import pymongo
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
```
NASA Mars News
Scrape the NASA Mars News Site and collect the latest News Title and Paragraph Text. Assign the text to variables that you can reference later.
```
# URL of page to be scraped
url = 'https://mars.nasa.gov/news/'
# Retrieve page with the requests module
response = requests.get(url)
#response.headers
#response.content
# Create BeautifulSoup object; parse with 'html.parser'
soup = BeautifulSoup(response.text, 'html.parser')
# Examine the results, then determine element that contains sought info
#print(soup.prettify())
# First paragraph result returned for first article on page
news_paragraph = soup.find('div', class_="rollover_description_inner").text.strip()
news_paragraph
# First title result returned for first article on page
news_title = soup.find('div', class_="content_title").a.text.strip()
news_title
```
JPL Mars Space Images - Featured Image
Visit the url for JPL Featured Space Image here.
Use splinter to navigate the site and find the image url for the current Featured Mars Image and assign the url string to a variable called featured_image_url.
Make sure to find the image url to the full size .jpg image.
Make sure to save a complete url string for this image.
```
executable_path = {'executable_path': r'\User\reena\OneDrive\Desktop\chromedriver.exe'}  # raw string so backslashes aren't treated as escape sequences
browser = Browser('chrome', **executable_path, headless=False)
url = 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars'
browser.visit(url)
browser.click_link_by_partial_text('more info')
# Extra code added here to test
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
feat_img = soup.find('figure', class_='lede')
featured_image_url = f'https://www.jpl.nasa.gov{feat_img.a.img["src"]}'
featured_image_url
```
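The f-string concatenation above works because the scraped `src` is a root-relative path; the standard library's `urljoin` is a slightly more defensive alternative that resolves leading slashes and absolute URLs for you. A small sketch (the image path below is illustrative, not a real JPL URL):

```python
from urllib.parse import urljoin

base = 'https://www.jpl.nasa.gov'
relative_src = '/spaceimages/images/largesize/PIA00000_hires.jpg'  # illustrative path

# urljoin resolves the relative src against the site's base URL
featured_image_url = urljoin(base, relative_src)
print(featured_image_url)
```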
Mars Weather
Visit the Mars Weather twitter account here and scrape the latest Mars weather tweet from the page. Save the tweet text for the weather report as a variable called mars_weather.
Note: Be sure you are not signed in to twitter, or scraping may become more difficult.
Note: Twitter frequently changes how information is presented on their website. If you are having difficulty getting the correct html tag data, consider researching Regular Expression Patterns and how they can be used in combination with the .find() method.
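As the note suggests, BeautifulSoup's `find()`/`find_all()` accept a compiled regular expression for the `string` argument, which is one way to locate a tweet by its opening phrase. A minimal sketch on a stand-in snippet of HTML (the tag and class names here are illustrative, not Twitter's actual markup):

```python
import re
from bs4 import BeautifulSoup

# Stand-in HTML; Twitter's real markup differs and changes frequently
html = '<p class="tweet-text">InSight sol 456 (2020-03-08) low -95.0C high -12.3C</p>'
soup = BeautifulSoup(html, 'html.parser')

# Match any string node beginning with 'InSight sol'
match = soup.find(string=re.compile(r'^InSight sol'))
print(match)
```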
```
# URL of page to be scraped
url = 'https://twitter.com/marswxreport?lang=en'
# Retrieve page with the requests module
response = requests.get(url)
#response.headers
#response.content
# Create BeautifulSoup object; parse with 'html.parser'
soup = BeautifulSoup(response.text, 'html.parser')
# Examine the results, then determine element that contains sought info
#print(soup.prettify())
# results are returned as an iterable list
tweets = soup.find_all('p', class_="TweetTextSize TweetTextSize--normal js-tweet-text tweet-text")
# Loop through returned results and match the first tweet that starts with 'InSight sol'
for tweet in tweets:
    # Error handling
    try:
        # Regular expression matching the opening phrase of a weather tweet
        regex = '^InSight sol'
        # Keep the first tweet whose text matches the pattern
        if re.match(regex, tweet.text) is not None:
            weather_data = tweet.text
            break
    except AttributeError as e:
        print(e)
#weather_data
```
Mars Facts
Visit the Mars Facts webpage here and use Pandas to scrape the table containing facts about the planet including Diameter, Mass, etc.
Use Pandas to convert the data to a HTML table string.
```
url = 'https://space-facts.com/mars/'
# Use pandas to read the html table data on the page into a list of dictionaries
tables = pd.read_html(url)
#tables
# Read the first dictionary in the list into a pandas dataframe and name columns
df = tables[0]
df.columns = ['Parameter', 'Value']
df.set_index('Parameter', inplace=True)
df
# Convert the dataframe into an html table, strip the end of line newlines and
# write the result to an html file to view
fact_table = df.to_html()
fact_table = fact_table.replace('\n', '')
fact_table
# Inspect the result in a browser
df.to_html('table.html')
!explorer table.html
```
Mars Hemispheres
Visit the USGS Astrogeology site here to obtain high resolution images for each of Mars' hemispheres.
You will need to click each of the links to the hemispheres in order to find the image url to the full resolution image.
Save both the image url string for the full resolution hemisphere image, and the Hemisphere title containing the hemisphere name. Use a Python dictionary to store the data using the keys img_url and title.
Append the dictionary with the image url string and the hemisphere title to a list. This list will contain one dictionary for each hemisphere.
```
executable_path = {'executable_path': 'chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(url)
# Get page html and make beautifulsoup object
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
# Get the html containing the titles and put into a list
title_list = soup.find_all('div', class_='description')
# Loop through the div objects and scrape titles and urls of hires images
# Initiate the list to store dictionaries
hemisphere_image_urls = []
for title in title_list:
# Navigate browser to page then click on title link to hires image page
browser.visit(url)
browser.click_link_by_partial_text(title.a.h3.text)
# Grab the destination page html and make into BeautifulSoup object
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
# Parse the hires image source(src) relative url then append to domain name
# for absolute url
img_url_list = soup.find('img', class_='wide-image')
img_url = f"https://astrogeology.usgs.gov{img_url_list['src']}"
# Create dictionary with returned values and add dict to hemisphere_image_urls list
post = {
'title': title.a.h3.text,
'image_url': img_url
}
hemisphere_image_urls.append(post)
hemisphere_image_urls
```
# Creating a Sentiment Analysis Web App
## Using PyTorch and SageMaker
_Deep Learning Nanodegree Program | Deployment_
---
Now that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model which will predict the sentiment of the entered review.
## Instructions
Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.
> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
## General Outline
Recall the general outline for SageMaker projects using a notebook instance.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
For this project, you will be following the steps in the general outline with some modifications.
First, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward.
In addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. In addition, your newly deployed model will be used in the sentiment analysis web app.
## Step 1: Downloading the data
As in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/)
> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.
```
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
```
## Step 2: Preparing and Processing the data
Also, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set.
```
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
```
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.
```
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return unified training data, test data, training labels, and test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
```
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly.
```
print(train_X[100])
print(train_y[100])
```
The first step in processing the reviews is to make sure that any html tags that appear should be removed. In addition we wish to tokenize our input, that way words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis.
```
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
    nltk.download("stopwords", quiet=True)
    stemmer = PorterStemmer()
    text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Lowercase and strip non-alphanumerics
    words = text.split() # Split string into words
    words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem, reusing the stemmer created above
    return words
```
The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set.
```
# TODO: Apply review_to_words to a review (train_X[100] or any other review)
review_to_words(train_X[100])
```
**Question:** Above we mentioned that `review_to_words` method removes html formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do to the input?
**Answer:**
Besides the functions mentioned above, this method also does the following:
- Removes HTML tags
- Converts words to lowercase
- Splits sentences into separate words
- Removes stopwords
The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition it caches the results. This is because performing this processing step can take a long time. This way if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time.
```
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
```
## Transform the data
In the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.
Since we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews.
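To make this scheme concrete, here is a toy illustration with a hypothetical three-word vocabulary (the real `word_dict` is built in the next section): known words map to their integers, unseen words fall back to `1`, and the result is zero-padded or truncated to a fixed length.

```python
# Hypothetical three-word vocabulary; 0 = 'no word', 1 = 'infrequent word'
toy_dict = {'movi': 2, 'great': 3, 'act': 4}

def encode(review, word_dict, pad=8):
    """Map words to integers, then zero-pad or truncate to a fixed length."""
    ids = [word_dict.get(w, 1) for w in review][:pad]  # 1 for words not in the vocab
    return ids + [0] * (pad - len(ids))                # 0-pad short reviews

print(encode(['great', 'movi', 'terribl', 'act'], toy_dict))
# → [3, 2, 1, 4, 0, 0, 0, 0]
```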
### (TODO) Create a word dictionary
To begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.
> **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'.
```
import numpy as np
def build_dict(data, vocab_size = 5000):
"""Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
# TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
# sentence is a list of words.
word_count = {} # A dict storing the words that appear in the reviews along with how often they occur
for review in data:
for word in review:
if word in word_count:
word_count[word] += 1
else:
word_count[word] = 1
# TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and
# sorted_words[-1] is the least frequently appearing word.
    sorted_wordlist = sorted(word_count.items(), key=lambda x: x[1], reverse=True)
    sorted_words = [k[0] for k in sorted_wordlist]
word_dict = {} # This is what we are building, a dictionary that translates words into integers
for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the 'no word'
word_dict[word] = idx + 2 # 'infrequent' labels
return word_dict
word_dict = build_dict(train_X)
```
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it makes sense that these words appear frequently in the training set?
**Answer:**
The five most frequently appearing (stemmed) words in the training set are "movi", "film", "one", "like", and "time".
It makes sense to me because these words are the ones most likely to appear in reviews of a film.
```
# TODO: Use this space to determine the five most frequently appearing words in the training set.
print(list(word_dict.keys())[0:5])
```
### Save `word_dict`
Later on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use.
```
data_dir = '../data/pytorch' # The folder we will use for storing data
if not os.path.exists(data_dir): # Make sure that the folder exists
os.makedirs(data_dir)
with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
pickle.dump(word_dict, f)
```
### Transform the reviews
Now that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`.
```
def convert_and_pad(word_dict, sentence, pad=500):
NOWORD = 0 # We will use 0 to represent the 'no word' category
INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
working_sentence = [NOWORD] * pad
for word_index, word in enumerate(sentence[:pad]):
if word in word_dict:
working_sentence[word_index] = word_dict[word]
else:
working_sentence[word_index] = INFREQ
return working_sentence, min(len(sentence), pad)
def convert_and_pad_data(word_dict, data, pad=500):
result = []
lengths = []
for sentence in data:
converted, leng = convert_and_pad(word_dict, sentence, pad)
result.append(converted)
lengths.append(leng)
return np.array(result), np.array(lengths)
train_X, train_X_len = convert_and_pad_data(word_dict, train_X)
test_X, test_X_len = convert_and_pad_data(word_dict, test_X)
```
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processed. Does this look reasonable? What is the length of a review in the training set?
```
# Use this cell to examine one of the processed reviews to make sure everything is working as intended.
print(train_X[100])
```
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Why or why not might this be a problem?
**Answer:**
Although `preprocess_data` removes punctuation and `convert_and_pad_data` truncates each review to a length of 500, applying these two steps to both the training and testing sets keeps the two datasets consistent. Thus, it won't be a problem.
## Step 3: Upload the data to S3
As in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on.
### Save the processed training dataset locally
It is important to note the format of the data that we are saving as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review.
```
import pandas as pd
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
.to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
```
### Uploading the training data
Next, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model.
```
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
```
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory.
## Step 4: Build and Train the PyTorch Model
In the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects
- Model Artifacts,
- Training Code, and
- Inference Code,
each of which interact with one another. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon with the added benefit of being able to include our own custom code.
We will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below.
```
!pygmentize train/model.py
```
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.
First we will load a small portion of the training data set to use as a sample. It would be very time consuming to try and train the model completely in the notebook as we do not have access to a gpu and the compute instance that we are using is not particularly powerful. However, we can work on a small bit of the data to get a feel for how our training script is behaving.
```
import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()
# Build the dataset
train_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)
# Build the dataloader
train_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50)
```
### (TODO) Writing the training method
Next we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later.
```
def train(model, train_loader, epochs, optimizer, loss_fn, device):
for epoch in range(1, epochs + 1):
model.train()
total_loss = 0
for batch in train_loader:
batch_X, batch_y = batch
batch_X = batch_X.to(device)
batch_y = batch_y.to(device)
# TODO: Complete this train method to train the model provided.
optimizer.zero_grad()
            out = model(batch_X)  # calling the model invokes forward()
loss = loss_fn(out, batch_y)
loss.backward()
optimizer.step()
total_loss += loss.data.item()
print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader)))
```
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose.
```
import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device)
```
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run.
### (TODO) Training the model
When a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.
**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.
The way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train/train.py` file.
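For reference, this is roughly what that argument parsing looks like — a minimal sketch, assuming the argument names mirror the keys in the `hyperparameters` dict below (SageMaker passes each key/value as a `--key value` command-line pair); the defaults in the provided `train/train.py` may differ:

```python
import argparse

# Minimal sketch of SageMaker-style hyperparameter parsing; each key in the
# estimator's `hyperparameters` dict arrives as a '--key value' argument.
parser = argparse.ArgumentParser()
parser.add_argument('--epochs', type=int, default=10,
                    help='number of passes over the training set')
parser.add_argument('--hidden_dim', type=int, default=100,
                    help='size of the LSTM hidden state')

# Simulate the arguments SageMaker would pass for this notebook's settings
args = parser.parse_args(['--epochs', '10', '--hidden_dim', '200'])
print(args.epochs, args.hidden_dim)
```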
```
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
source_dir="train",
role=role,
framework_version='0.4.0',
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
hyperparameters={
'epochs': 10,
'hidden_dim': 200,
})
estimator.fit({'training': input_data})
```
## Step 5: Testing the model
As mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly.
## Step 6: Deploy the model for testing
Now that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this.
There is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made.
**NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard ( ie, `if __name__ == '__main__':` )
Since we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is.
**NOTE:** When deploying a model you are asking SageMaker to launch a compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running for.
In other words **If you are no longer using a deployed endpoint, shut it down!**
**TODO:** Deploy the trained model.
```
# TODO: Deploy the trained model
estimator_predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
```
## Step 7 - Use the model for testing
Once deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is.
```
test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)
# We split the data into chunks and send each chunk separately, accumulating the results.
def predict(data, rows=512):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = np.array([])
for array in split_array:
predictions = np.append(predictions, estimator_predictor.predict(array))
return predictions
predictions = predict(test_X.values)
predictions = [round(num) for num in predictions]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
```
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis?
**Answer:**
The result is quite comparable to the result from the XGBoost model. Personally, I think the RNN model is better for sentiment analysis, as an RNN is good at capturing the sequential structure of the words. It also has built-in memory, which is useful for tasks that are time or sequence dependent, like sentiment analysis.
### (TODO) More testing
We now have a trained model which has been deployed and which we can send processed reviews to and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model.
```
test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'
```
The question we now need to answer is, how do we send this review to our model?
Recall in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews.
- Removed any html tags and stemmed the input
- Encoded the review as a sequence of integers using `word_dict`
In order to process the review, we will need to repeat these two steps.
**TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`.
```
# TODO: Convert test_review into a form usable by the model and save the results in test_data
data, length = convert_and_pad(word_dict, review_to_words(test_review))
test_data = [[length] + data]
```
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review.
```
estimator_predictor.predict(test_data)
```
Since the return value of our model is close to `1`, we can be certain that the review we submitted is positive.
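The score itself is a probability-like value, so a simple threshold (0.5 here, matching the rounding used in the accuracy check above) turns it into a sentiment label:

```python
# Minimal sketch: map the model's numeric output to a sentiment label.
# The 0.5 threshold mirrors the round() used in the accuracy test above.
def to_sentiment(score, threshold=0.5):
    return 'POSITIVE' if float(score) > threshold else 'NEGATIVE'

print(to_sentiment(0.92))  # POSITIVE
print(to_sentiment(0.13))  # NEGATIVE
```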
### Delete the endpoint
Of course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it.
```
estimator.delete_endpoint()
```
## Step 6 (again) - Deploy the model for the web app
Now that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.
As we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.
We will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.
When deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use.
- `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model.
- `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code.
- `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint.
- `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.
For the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize.
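As a rough illustration (an assumed sketch, not the notebook's actual `serve/predict.py`), a plain-text `input_fn`/`output_fn` pair can be as small as:

```python
# Hypothetical sketch of a plain-text input_fn/output_fn pair; the real
# implementations live in serve/predict.py.
def input_fn(serialized_input_data, content_type='text/plain'):
    # De-serialize the raw bytes sent to the endpoint into a Python string.
    if content_type != 'text/plain':
        raise ValueError('Unsupported content type: ' + content_type)
    return serialized_input_data.decode('utf-8')

def output_fn(prediction_output, accept='text/plain'):
    # Serialize the single prediction value back into a string for the caller.
    return str(prediction_output)
```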
### (TODO) Writing inference code
Before writing our custom inference code, we will begin by taking a look at the code which has been provided.
```
!pygmentize serve/predict.py
```
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple and your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.
**TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file.
### Deploying the model
Now that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container.
**NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string so we need to construct a simple wrapper around the `RealTimePredictor` class to accommodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to send image data.
```
from sagemaker.predictor import RealTimePredictor
from sagemaker.pytorch import PyTorchModel
class StringPredictor(RealTimePredictor):
    def __init__(self, endpoint_name, sagemaker_session):
        super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')
model = PyTorchModel(model_data=estimator.model_data,
role = role,
framework_version='0.4.0',
entry_point='predict.py',
source_dir='serve',
predictor_cls=StringPredictor)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
```
### Testing the model
Now that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews, sending them to the endpoint, and collecting the results. The reason for only sending some of the data is that the amount of time it takes for our model to process the input and then perform inference is quite long, so testing the entire data set would be prohibitive.
```
import glob
def test_reviews(data_dir='../data/aclImdb', stop=250):
    results = []
    ground = []

    # We make sure to test both positive and negative reviews
    for sentiment in ['pos', 'neg']:
        path = os.path.join(data_dir, 'test', sentiment, '*.txt')
        files = glob.glob(path)
        files_read = 0

        print('Starting ', sentiment, ' files')

        # Iterate through the files and send them to the predictor
        for f in files:
            with open(f) as review:
                # First, we store the ground truth (was the review positive or negative)
                if sentiment == 'pos':
                    ground.append(1)
                else:
                    ground.append(0)
                # Read in the review and convert to 'utf-8' for transmission via HTTP
                review_input = review.read().encode('utf-8')
                # Send the review to the predictor and store the results
                pred = predictor.predict(review_input)
                results.append(float(pred))

            # Sending reviews to our endpoint one at a time takes a while so we
            # only send a small number of reviews
            files_read += 1
            if files_read == stop:
                break

    return ground, results
ground, results = test_reviews()
from sklearn.metrics import accuracy_score
accuracy_score(ground, results)
```
As an additional test, we can try sending the `test_review` that we looked at earlier.
```
predictor.predict(test_review)
```
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back.
## Step 7 (again): Use the model for the web app
> **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console.
So far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are currently set up makes that impossible, since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which included access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services.
<img src="Web App Diagram.svg">
The diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return.
In the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and receive data from a SageMaker endpoint.
Lastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function.
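With the proxy setup described here, API Gateway hands the Lambda function an event whose `body` key holds the raw POST data and expects back a response object with `statusCode`, `headers`, and `body`. A local mock (with a keyword check standing in for the real inference call) illustrates the contract:

```python
# Local mock of the Lambda proxy contract: a real handler would replace the
# keyword check below with a call to the SageMaker endpoint.
def mock_handler(event):
    review = event['body']  # raw review text posted by the web app
    result = '1' if 'great' in review else '0'  # stand-in for real inference
    return {'statusCode': 200,
            'headers': {'Content-Type': 'text/plain',
                        'Access-Control-Allow-Origin': '*'},
            'body': result}

response = mock_handler({'body': 'This was a great film!'})
print(response['body'])  # 1
```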
### Setting up a Lambda function
The first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result.
#### Part A: Create an IAM Role for the Lambda function
Since we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function.
Using the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**.
In the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**.
Lastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**.
#### Part B: Create a Lambda function
Now it is time to actually create the Lambda function.
Using the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**.
On the next page you will see some information about the Lambda function you've just created. If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below.
```python
# We need to use the low-level library to interact with SageMaker since the SageMaker API
# is not available natively through Lambda.
import boto3
def lambda_handler(event, context):

    # The SageMaker runtime is what allows us to invoke the endpoint that we've created.
    runtime = boto3.Session().client('sagemaker-runtime')

    # Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given
    response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**', # The name of the endpoint we created
                                       ContentType = 'text/plain',             # The data format that is expected
                                       Body = event['body'])                   # The actual review

    # The response is an HTTP response whose body contains the result of our inference
    result = response['Body'].read().decode('utf-8')

    return {
        'statusCode' : 200,
        'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },
        'body' : result
    }
```
Once you have copied and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below.
```
predictor.endpoint
```
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function.
### Setting up API Gateway
Now that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.
Using AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.
On the next page, make sure that **New API** is selected and give the new api a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.
Now we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.
Select the **Actions** dropdown menu and click **Create Method**. A new blank method will be created, select its dropdown menu and select **POST**, then click on the check mark beside it.
For the integration point, make sure that **Lambda Function** is selected and click on **Use Lambda Proxy integration**. This option makes sure that the data that is sent to the API is then sent directly to the Lambda function with no processing. It also means that the return value must be a proper response object as it will also not be processed by API Gateway.
Type the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.
The last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`.
You have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**.
## Step 4: Deploying our web app
Now that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static html file which can make use of the public api you created earlier.
In the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\*\*REPLACE WITH PUBLIC API URL\*\***. Replace this string with the url that you wrote down in the last step and then save the file.
Now, if you open `index.html` on your local computer, your browser will load the page locally and you can use the provided site to interact with your SageMaker model.
If you'd like to go further, you can host this html file anywhere you'd like, for example using github or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!
> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill.
**TODO:** Make sure that you include the edited `index.html` file in your project submission.
Now that your web app is working, try playing around with it and see how well it works.
**Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review?
**Answer:**
Review: I really enjoy watch this movie. The plot is well developed and the characters are so vivid!
Output: Your review was POSITIVE!
### Delete the endpoint
Remember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running so if you forget and leave it on you could end up with an unexpectedly large bill.
```
predictor.delete_endpoint()
```
# Description
This notebook contains the interpretation of a cluster (which features/latent variables in the original data are useful to distinguish traits in the cluster).
See section [LV analysis](#lv_analysis) below
# Modules loading
```
%load_ext autoreload
%autoreload 2
import pickle
import re
from pathlib import Path
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import HTML
from clustering.methods import ClusterInterpreter
from data.recount2 import LVAnalysis
from data.cache import read_data
import conf
```
# Settings
```
PARTITION_K = None
PARTITION_CLUSTER_ID = None
```
# Load MultiPLIER summary
```
multiplier_model_summary = read_data(conf.MULTIPLIER["MODEL_SUMMARY_FILE"])
multiplier_model_summary.shape
multiplier_model_summary.head()
```
# Load data
## Original data
```
INPUT_SUBSET = "z_score_std"
INPUT_STEM = "projection-smultixcan-efo_partial-mashr-zscores"
input_filepath = Path(
conf.RESULTS["DATA_TRANSFORMATIONS_DIR"],
INPUT_SUBSET,
f"{INPUT_SUBSET}-{INPUT_STEM}.pkl",
).resolve()
display(input_filepath)
assert input_filepath.exists(), "Input file does not exist"
input_filepath_stem = input_filepath.stem
display(input_filepath_stem)
data = pd.read_pickle(input_filepath)
data.shape
data.head()
```
## Clustering partitions
```
CONSENSUS_CLUSTERING_DIR = Path(
conf.RESULTS["CLUSTERING_DIR"], "consensus_clustering"
).resolve()
display(CONSENSUS_CLUSTERING_DIR)
input_file = Path(CONSENSUS_CLUSTERING_DIR, "best_partitions_by_k.pkl").resolve()
display(input_file)
best_partitions = pd.read_pickle(input_file)
best_partitions.shape
best_partitions.head()
```
# Functions
```
def show_cluster_stats(data, partition, cluster):
    cluster_traits = data[partition == cluster].index
    display(f"Cluster '{cluster}' has {len(cluster_traits)} traits")
    display(cluster_traits)
```
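A toy illustration (trait names and cluster labels are made up) of the boolean indexing that `show_cluster_stats` relies on:

```python
import numpy as np
import pandas as pd

# Hypothetical data: three traits, each assigned a cluster label.
toy = pd.DataFrame({'value': [1.0, 2.0, 3.0]},
                   index=['trait_a', 'trait_b', 'trait_c'])
partition = np.array([0, 1, 0])  # one cluster label per trait

# Same selection as inside show_cluster_stats: keep rows whose label matches.
cluster_traits = toy[partition == 0].index
print(list(cluster_traits))  # ['trait_a', 'trait_c']
```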
# LV analysis
<a id="lv_analysis"></a>
## Associated traits
```
display(best_partitions.loc[PARTITION_K])
part = best_partitions.loc[PARTITION_K, "partition"]
show_cluster_stats(data, part, PARTITION_CLUSTER_ID)
```
## Associated latent variables
```
ci = ClusterInterpreter(
threshold=1.0,
max_features=20,
max_features_to_explore=100,
)
ci.fit(data, part, PARTITION_CLUSTER_ID)
ci.features_
# save interpreter instance
output_dir = Path(
conf.RESULTS["CLUSTERING_INTERPRETATION"]["BASE_DIR"],
"cluster_lvs",
f"part{PARTITION_K}",
)
output_dir.mkdir(exist_ok=True, parents=True)
output_file = Path(
output_dir, f"cluster_interpreter-part{PARTITION_K}_k{PARTITION_CLUSTER_ID}.pkl"
)
display(output_file)
ci.features_.to_pickle(output_file)
```
## Top attributes
Here we go through the list of associated latent variables and, for each, we show associated pathways (prior knowledge), top traits, top genes and the top tissues/cell types where those genes are expressed.
```
for lv_idx, lv_info in ci.features_.iterrows():
    display(HTML(f"<h2>LV{lv_idx}</h2>"))

    lv_name = lv_info["name"]
    lv_obj = LVAnalysis(lv_name, data)

    # show lv prior knowledge match (pathways)
    lv_pathways = multiplier_model_summary[
        multiplier_model_summary["LV index"].isin((lv_name[2:],))
        & (
            (multiplier_model_summary["FDR"] < 0.05)
            | (multiplier_model_summary["AUC"] >= 0.75)
        )
    ]
    display(lv_pathways)

    lv_data = lv_obj.get_experiments_data()

    display("")
    display(lv_obj.lv_traits.head(20))
    display("")
    display(lv_obj.lv_genes.head(10))

    lv_attrs = lv_obj.get_attributes_variation_score()
    _tmp = pd.Series(lv_attrs.index)
    lv_attrs = lv_attrs[
        _tmp.str.match(
            "(?:cell.+type$)|(?:tissue$)|(?:tissue.+type$)",
            case=False,
            flags=re.IGNORECASE,
        ).values
    ].sort_values(ascending=False)
    display(lv_attrs)

    for _lva in lv_attrs.index:
        display(HTML(f"<h3>{_lva}</h3>"))

        display(lv_data[_lva].dropna().reset_index()["project"].unique())

        with sns.plotting_context("paper", font_scale=1.0), sns.axes_style("whitegrid"):
            fig, ax = plt.subplots(figsize=(14, 8))
            ax = lv_obj.plot_attribute(_lva, top_x_values=20)
            if ax is None:
                plt.close(fig)
                continue
            display(fig)
            plt.close(fig)
```
<a href="https://colab.research.google.com/github/z-arabi/notebooks/blob/main/01_introduction.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# Uncomment and run this cell if you're on Colab or Kaggle
!git clone https://github.com/nlp-with-transformers/notebooks.git
%cd notebooks
from install import *
install_requirements()
#hide
from utils import *
setup_chapter()
```
# Hello Transformers
## The Encoder-Decoder Framework
## Attention Mechanisms
## Transfer Learning in NLP
## Hugging Face Transformers: Bridging the Gap
## A Tour of Transformer Applications
```
text = """Dear Amazon, last week I ordered an Optimus Prime action figure \
from your online store in Germany. Unfortunately, when I opened the package, \
I discovered to my horror that I had been sent an action figure of Megatron \
instead! As a lifelong enemy of the Decepticons, I hope you can understand my \
dilemma. To resolve the issue, I demand an exchange of Megatron for the \
Optimus Prime figure I ordered. Enclosed are copies of my records concerning \
this purchase. I expect to hear from you soon. Sincerely, Bumblebee."""
```
## pipelines
https://huggingface.co/docs/transformers/main_classes/pipelines
### Text Classification
```
#hide_output
from transformers import pipeline
classifier = pipeline("text-classification")
import pandas as pd
outputs = classifier(text)
pd.DataFrame(outputs)
my_text = """Hi, I got a device from amazon, but I did not like its color. \
In addition to its function, this device also had a beautiful appearance. \
My only problem with this device was its color."""
my_output = classifier(my_text)
pd.DataFrame(my_output)
print(my_output) # the lis of dictionary
# convert the list of dictionary to the DF
pd.DataFrame(my_output)
```
### Named Entity Recognition
```
ner_tagger = pipeline("ner", aggregation_strategy="simple")
outputs = ner_tagger(text)
pd.DataFrame(outputs)
ner_output = ner_tagger(my_text)
pd.DataFrame(ner_output)
```
### Question Answering
```
reader = pipeline("question-answering")
question = "What does the customer want?"
outputs = reader(question=question, context=text)
pd.DataFrame([outputs])
my_question = "What was the customer dissatisfied with?"
q_output = reader(question=my_question, context=my_text)
pd.DataFrame([q_output])
```
### Summarization
```
summarizer = pipeline("summarization")
outputs = summarizer(text, max_length=45, clean_up_tokenization_spaces=True)
outputs
outputs[0]['summary_text']
sum_output = summarizer(my_text, clean_up_tokenization_spaces=True)
sum_output[0]['summary_text']
```
### Translation
```
translator = pipeline("translation_en_to_de",
model="Helsinki-NLP/opus-mt-en-de")
outputs = translator(text, clean_up_tokenization_spaces=True, min_length=100)
outputs[0]['translation_text']
# https://huggingface.co/languages
# ValueError: The task does not provide any default models for options ('en', 'fa')
my_translator = pipeline("translation_en_to_it",model="Helsinki-NLP/opus-mt-en-it")
translate_output = my_translator(my_text, clean_up_tokenization_spaces=True, min_length=100)
translate_output[0]['translation_text']
```
### Text Generation
```
#hide
from transformers import set_seed
set_seed(42) # Set the seed to get reproducible results
generator = pipeline("text-generation")
response = "Dear Bumblebee, I am sorry to hear that your order was mixed up."
prompt = text + "\n\nCustomer service response:\n" + response
outputs = generator(prompt, max_length=200)
outputs[0]['generated_text']
my_response = "Dear Customer, we will take care of your problem."
prompt = my_text + "\n\nCustomer service response:\n" + my_response
gen_output = generator(prompt, max_length=200)
gen_output[0]['generated_text']
```
## The Hugging Face Ecosystem
### The Hugging Face Hub
### Hugging Face Tokenizers
### Hugging Face Datasets
### Hugging Face Accelerate
## Main Challenges with Transformers
## Conclusion
```
```
# Lecture 55: Adversarial Autoencoder for Classification
## Load Packages
```
%matplotlib inline
import os
import math
import torch
import itertools
import torch.nn as nn
import torch.optim as optim
from IPython import display
import torch.nn.functional as F
import matplotlib.pyplot as plt
import torchvision.datasets as dsets
import torchvision.transforms as transforms
print(torch.__version__) # This code has been updated for PyTorch 1.0.0
```
## Load Data
```
# MNIST Dataset
dataset = dsets.MNIST(root='./MNIST', train=True, transform=transforms.ToTensor(), download=True)
testset = dsets.MNIST(root='./MNIST', train=False, transform=transforms.ToTensor(), download=True)
# Data Loader (Input Pipeline)
data_loader = torch.utils.data.DataLoader(dataset=dataset, batch_size=100, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=testset, batch_size=100, shuffle=False)
# Check availability of GPU
use_gpu = torch.cuda.is_available()
# use_gpu = False # Uncomment in case of GPU memory error
if use_gpu:
    print('GPU is available!')
    device = "cuda"
else:
    print('GPU is not available!')
    device = "cpu"
```
## Defining network architecture
```
# Encoder
class Q_net(nn.Module):
    def __init__(self, X_dim, N, z_dim):
        super(Q_net, self).__init__()
        self.lin1 = nn.Linear(X_dim, N)
        self.lin2 = nn.Linear(N, N)
        self.lin3gauss = nn.Linear(N, z_dim)

    def forward(self, x):
        x = F.dropout(self.lin1(x), p=0.25, training=self.training)
        x = F.relu(x)
        x = F.dropout(self.lin2(x), p=0.25, training=self.training)
        x = F.relu(x)
        x = self.lin3gauss(x)
        return x

# Decoder
class P_net(nn.Module):
    def __init__(self, X_dim, N, z_dim):
        super(P_net, self).__init__()
        self.lin1 = nn.Linear(z_dim, N)
        self.lin2 = nn.Linear(N, N)
        self.lin3 = nn.Linear(N, X_dim)

    def forward(self, x):
        x = F.dropout(self.lin1(x), p=0.25, training=self.training)
        x = F.relu(x)
        x = F.dropout(self.lin2(x), p=0.25, training=self.training)
        x = self.lin3(x)
        return torch.sigmoid(x)

# Discriminator
class D_net_gauss(nn.Module):
    def __init__(self, N, z_dim):
        super(D_net_gauss, self).__init__()
        self.lin1 = nn.Linear(z_dim, N)
        self.lin2 = nn.Linear(N, N)
        self.lin3 = nn.Linear(N, 1)

    def forward(self, x):
        x = F.dropout(self.lin1(x), p=0.2, training=self.training)
        x = F.relu(x)
        x = F.dropout(self.lin2(x), p=0.2, training=self.training)
        x = F.relu(x)
        return torch.sigmoid(self.lin3(x))
```
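As a quick sanity check of the tensor shapes these networks pass around, the sketch below uses simplified (dropout-free) stand-ins for the three networks, with the dimensions assumed from the next section (`X_dim=784`, `N=1000`, `z_dim=100`, and `N=500` for the discriminator):

```python
import torch
import torch.nn as nn

# Simplified stand-ins for Q_net, P_net and D_net_gauss (dropout omitted),
# used only to verify the shapes flowing between the three networks.
X_dim, N, z_dim = 784, 1000, 100
Q = nn.Sequential(nn.Linear(X_dim, N), nn.ReLU(),
                  nn.Linear(N, N), nn.ReLU(), nn.Linear(N, z_dim))
P = nn.Sequential(nn.Linear(z_dim, N), nn.ReLU(),
                  nn.Linear(N, N), nn.Linear(N, X_dim), nn.Sigmoid())
D = nn.Sequential(nn.Linear(z_dim, 500), nn.ReLU(),
                  nn.Linear(500, 500), nn.ReLU(), nn.Linear(500, 1), nn.Sigmoid())

x = torch.randn(8, X_dim)   # a batch of 8 flattened MNIST images
z = Q(x)                    # encoder output: (8, 100)
x_hat = P(z)                # reconstruction: (8, 784)
score = D(z)                # real/fake probability: (8, 1)
print(z.shape, x_hat.shape, score.shape)
```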
## Define optimizer
```
z_red_dims = 100
Q = Q_net(784,1000,z_red_dims).to(device)
P = P_net(784,1000,z_red_dims).to(device)
D_gauss = D_net_gauss(500,z_red_dims).to(device)
# Set learning rates
gen_lr = 0.0001
reg_lr = 0.00005
#encode/decode optimizers
optim_P = optim.Adam(P.parameters(), lr=gen_lr)
optim_Q_enc = optim.Adam(Q.parameters(), lr=gen_lr)
#regularizing optimizers
optim_Q_gen = optim.Adam(Q.parameters(), lr=reg_lr)
optim_D = optim.Adam(D_gauss.parameters(), lr=reg_lr)
```
## Test Data
```
num_test_samples = 100
test_noise = torch.randn(num_test_samples,z_red_dims).to(device)
```
## Training
```
# create figure for plotting
size_figure_grid = int(math.sqrt(num_test_samples))
fig, ax = plt.subplots(size_figure_grid, size_figure_grid, figsize=(6, 6))
for i, j in itertools.product(range(size_figure_grid), range(size_figure_grid)):
    ax[i, j].get_xaxis().set_visible(False)
    ax[i, j].get_yaxis().set_visible(False)
data_iter = iter(data_loader)
iter_per_epoch = len(data_loader)
total_step = 5  # increase to 5000 for full training
# Start training
for step in range(total_step):

    # Reset the data_iter
    if (step+1) % iter_per_epoch == 0:
        data_iter = iter(data_loader)

    # Fetch the images and labels and convert them to variables
    images, labels = next(data_iter)
    images, labels = images.view(images.size(0), -1).to(device), labels.to(device)

    # reconstruction loss
    P.zero_grad()
    Q.zero_grad()
    D_gauss.zero_grad()

    z_sample = Q(images)    # encode to z
    X_sample = P(z_sample)  # decode to X reconstruction
    recon_loss = F.binary_cross_entropy(X_sample, images)

    recon_loss.backward()
    optim_P.step()
    optim_Q_enc.step()

    # Discriminator
    ## true prior is random normal (randn)
    ## this is constraining the Z-projection to be normal!
    Q.eval()
    z_real_gauss = torch.randn(images.size()[0], z_red_dims).to(device)
    D_real_gauss = D_gauss(z_real_gauss)

    z_fake_gauss = Q(images)
    D_fake_gauss = D_gauss(z_fake_gauss)

    D_loss = -torch.mean(torch.log(D_real_gauss) + torch.log(1 - D_fake_gauss))

    D_loss.backward()
    optim_D.step()

    # Generator
    Q.train()
    z_fake_gauss = Q(images)
    D_fake_gauss = D_gauss(z_fake_gauss)

    G_loss = -torch.mean(torch.log(D_fake_gauss))

    G_loss.backward()
    optim_Q_gen.step()

    P.eval()
    test_images = P(test_noise)
    P.train()

    if use_gpu:
        test_images = test_images.cpu().detach()

    for k in range(num_test_samples):
        i = k // 10
        j = k % 10
        ax[i, j].cla()
        ax[i, j].imshow(test_images[k, :].numpy().reshape(28, 28), cmap='Greys')

    display.clear_output(wait=True)
    display.display(plt.gcf())
```
## Classifier
```
# Classifier: reuse the trained encoder Q and add a linear head
class Classifier(nn.Module):
    def __init__(self):
        super(Classifier, self).__init__()
        self.l1 = Q
        self.l2 = nn.Linear(100, 10)

    def forward(self, x):
        x = self.l1(x)
        x = self.l2(x)
        return x
net = Classifier().to(device)
print(net)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=1e-4)
```
## Training
```
iterations = 10
for epoch in range(iterations):  # loop over the dataset multiple times
    runningLoss = 0.0
    for i, data in enumerate(data_loader, 0):
        # get the inputs
        inputs, labels = data
        inputs, labels = inputs.view(inputs.size(0), -1).to(device), labels.to(device)
        net.train()
        optimizer.zero_grad()              # zeroes the gradient buffers of all parameters
        outputs = net(inputs)              # forward
        loss = criterion(outputs, labels)  # calculate loss
        loss.backward()                    # backpropagate the loss
        optimizer.step()

    correct = 0
    total = 0
    net.eval()
    with torch.no_grad():
        for data in test_loader:
            inputs, labels = data
            inputs, labels = inputs.view(inputs.size(0), -1).to(device), labels.to(device)
            outputs = net(inputs)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels.data).sum()
    print('At Iteration : %d / %d ; Test Accuracy : %f' % (epoch + 1, iterations, 100 * float(correct) / float(total)))

print('Finished Training')
```
# LightGBM
## Single Prediction
```
from backend.api.prediction import initialize_pipeline
config_path = "/home/joseph/Coding/ml_projects/earthquake_forecasting/backend/config.yml"
lgb_pipeline = initialize_pipeline(config_path, "lightgbm")
lgb_pipeline
import pandas as pd
data = {
"building_id": 802906,
"geo_level_1_id": 6,
"geo_level_2_id": 487,
"geo_level_3_id": 12198,
"count_floors_pre_eq": 2,
"age": 30,
"area_percentage": 6,
"height_percentage": 5,
"land_surface_condition": "t",
"foundation_type": "r",
"roof_type": "n",
"ground_floor_type": "f",
"other_floor_type": "q",
"position": "t",
"plan_configuration": "d",
"has_superstructure_adobe_mud": 1,
"has_superstructure_mud_mortar_stone": 1,
"has_superstructure_stone_flag": 0,
"has_superstructure_cement_mortar_stone": 0,
"has_superstructure_mud_mortar_brick": 0,
"has_superstructure_cement_mortar_brick": 0,
"has_superstructure_timber": 0,
"has_superstructure_bamboo": 0,
"has_superstructure_rc_non_engineered": 0,
"has_superstructure_rc_engineered": 0,
"has_superstructure_other": 0,
"legal_ownership_status": "v",
"count_families": 1,
"has_secondary_use": 0,
"has_secondary_use_agriculture": 0,
"has_secondary_use_hotel": 0,
"has_secondary_use_rental": 0,
"has_secondary_use_institution": 0,
"has_secondary_use_school": 0,
"has_secondary_use_industry": 0,
"has_secondary_use_health_post": 0,
"has_secondary_use_gov_office": 0,
"has_secondary_use_use_police": 0,
"has_secondary_use_other": 0,
}
df = pd.DataFrame(data, index=[0])
df.head()
out = lgb_pipeline.predict(df)
out
import shap
# explain the model's predictions using SHAP
# (same syntax works for LightGBM, CatBoost, scikit-learn, transformers, Spark, etc.)
explainer = shap.TreeExplainer(lgb_pipeline.model.model)
df: pd.DataFrame = lgb_pipeline.encoder.replace_with_new_embeds(df, batch_size=1)
if "building_id" in df.columns:
df = df.drop(["building_id"], axis=1)
if "Unnamed: 0" in df.columns:
df = df.drop(["Unnamed: 0"], axis=1)
df.head()
lgb_pipeline.model.model.params["objective"] = "multiclass"
shap_values = explainer(df)
shap_values
shap.initjs()
shap.force_plot(explainer.expected_value[1], shap_values.values[0, :, 1], df.iloc[0, :])
shap.summary_plot(shap_values.values[:, :, 1], df)
df.columns
feat_importances = lgb_pipeline.model.model.feature_importance()
importances = {col:weight for (col, weight) in zip(df.columns, feat_importances)}
importances
shap.dependence_plot("has_superstructure_adobe_mud", shap_values.values[:, :, 1], df)
shap.summary_plot(shap_values, df)
shap.plots.bar(shap_values)
# visualize the first prediction's explanation
shap.plots.waterfall(shap_values[0])
```
## With Whole Dataset
```
from backend.api.prediction import initialize_pipeline
config_path = "/home/joseph/Coding/ml_projects/earthquake_forecasting/backend/config.yml"
lgb_pipeline = initialize_pipeline(config_path, "lightgbm")
lgb_pipeline
import pandas as pd
df = pd.read_csv("~/datasets/earthquake_damage_forecasting/train_data_embeds.csv").drop(["Unnamed: 0", "building_id"], axis=1)
df.info()
import shap
# explain the model's predictions using SHAP
# (same syntax works for LightGBM, CatBoost, scikit-learn, transformers, Spark, etc.)
explainer = shap.TreeExplainer(lgb_pipeline.model.model)
explainer
lgb_pipeline.model.model.params["objective"] = "multiclass"
shap_values = explainer(df.iloc[:50])
shap_values
shap.initjs()
shap.force_plot(explainer.expected_value[1], shap_values.values[1, :, 1], df.iloc[1, :])
shap.summary_plot(shap_values.values[:, :, 1], df.iloc[:50])
shap.plots.bar(shap_values[1])
help(explainer)
shap_values[0][0]
# For a multiclass model, waterfall expects a single-class Explanation,
# so slice out one class before plotting:
shap.plots.waterfall(shap_values[0, :, 1])
```
# Catboost
```
from backend.api.prediction import initialize_pipeline
config_path = "/home/joseph/Coding/ml_projects/earthquake_forecasting/backend/config.yml"
lgb_pipeline = initialize_pipeline(config_path, "catboost")
lgb_pipeline
```
```
from collections import defaultdict
import pyspark.sql.types as stypes
import operator
import math
r = (sc.textFile("gs://lbanor/dataproc_example/data/2017-11-01").zipWithIndex()
.filter(lambda x: x[1] > 0)
.map(lambda x: x[0].split(','))
.map(lambda x: (x[0], (x[1], 0.5 if x[2] == '1' else 2 if x[2] == '2' else 6)))
.groupByKey().mapValues(list)
.flatMap(lambda x: aggregate_skus(x)))
print(r.collect()[:10])
d2 = spark.read.csv("gs://lbanor/dataproc_example/data/2017-11-01", header=True)
t = sc.parallelize([('1', 'sku0', 1), ('2', 'sku2', 2), ('1', 'sku1', 1)])
t.zipWithIndex().map(lambda x: (x[0][0], (x[0][1], x[0][2]))).groupByKey().mapValues(list).collect()[:10]
def aggregate_skus(row):
    """Aggregates skus from customers and their respective scores

    :type row: list
    :param row: list having values [user, (sku, score)]

    :rtype: list
    :returns: `yield` on [user, (sku, sum(score))]
    """
    d = defaultdict(float)
    for inner_row in row[1]:
        d[inner_row[0]] += inner_row[1]
    yield (row[0], list(d.items()))
r = d2.rdd.collect()[:10]
r[0].user
print(r.flatMap(lambda x: aggregate_skus(x)).collect()[:10])
r.toDF(schema=_load_users_matrix_schema()).write.json('gs://lbanor/dataproc_example/intermediary/2017-11-01')
def _load_users_matrix_schema():
    """Loads schema with data type [user, [(sku, score), (sku, score)]]

    :rtype: `pyspark.sql.type.StructType`
    :returns: schema specification for user -> (sku, score) data.
    """
    return stypes.StructType(fields=[
        stypes.StructField("user", stypes.StringType()),
        stypes.StructField('interactions', stypes.ArrayType(
            stypes.StructType(fields=[
                stypes.StructField('item', stypes.StringType()),
                stypes.StructField('score', stypes.FloatType())])))])
dir()
t = sc.parallelize([[0, [1, 2]], [0, [3]]])
print(t.collect())
t.write.json?
t = spark.read.json('gs://lbanor/dataproc_example/intermediary/2017-11-02', schema=_load_users_matrix_schema())
t = spark.read.json('gs://lbanor/dataproc_example/intermediary/2017-11-02/*.gz')
t.rdd.map(lambda x: x).collect()[:10]
t.head(3)
t.rdd.reduceByKey(operator.add).collect()[:10]
print(t.reduceByKey(operator.add).collect())
data = (t.rdd
.reduceByKey(operator.add)
.flatMap(lambda x: aggregate_skus(x))
.filter(lambda x: len(x[1]) > 1 and len(x[1]) < 10))
def _process_scores(row):
    """After all user -> score aggregation is done, this method loops
    through each sku for a given user and yields its squared score so
    that we can compute the norm ``||c||`` for each sku column.

    :type row: list
    :param row: list of type [(user, (sku, score))]

    :rtype: tuple
    :returns: tuple of type (sku, (score ** 2))
    """
    for inner_row in row[1]:
        yield (inner_row[0], inner_row[1] ** 2)
norms = {sku: norm for sku, norm in (data.flatMap(lambda x: _process_scores(x))
.reduceByKey(operator.add)
.map(lambda x: (x[0], math.sqrt(x[1])))
.collect())}
data = (data
.flatMap(lambda x: process_intersections(x, norms))
.reduceByKey(operator.add)
.collect()[:20])
data
def process_intersections(row, norms):
    """Yields ((sku_i, sku_j), partial cosine similarity) for every
    pair of skus that co-occur for a given user."""
    for i in range(len(row[1])):
        for j in range(i + 1, len(row[1])):
            yield ((row[1][i][0], row[1][j][0]),
                   row[1][i][1] * row[1][j][1]
                   / (norms[row[1][i][0]] * norms[row[1][j][0]]))
re = t.rdd.flatMap(lambda x: process_intersections(x, norms))
```
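The pipeline above accumulates pairwise cosine similarity between skus out of per-user interaction scores. A minimal pure-Python sketch of the same math (toy data with hypothetical user/sku names, no Spark required):

```python
import math
from collections import defaultdict

# user -> list of (sku, score) interactions (toy data)
interactions = {
    'u1': [('sku0', 1.0), ('sku1', 2.0)],
    'u2': [('sku0', 2.0), ('sku1', 1.0)],
    'u3': [('sku0', 1.0)],
}

# Column norms ||c|| for each sku: sqrt of the sum of squared scores.
sq = defaultdict(float)
for rows in interactions.values():
    for sku, score in rows:
        sq[sku] += score ** 2
norms = {sku: math.sqrt(s) for sku, s in sq.items()}

# Each user contributes score_i * score_j to the dot product of every
# sku pair it touched; dividing by the norms yields cosine similarity.
sims = defaultdict(float)
for rows in interactions.values():
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            (a, sa), (b, sb) = rows[i], rows[j]
            sims[(a, b)] += (sa * sb) / (norms[a] * norms[b])

print(sims[('sku0', 'sku1')])
```

This is the same accumulation that `process_intersections` followed by `reduceByKey(operator.add)` performs at scale.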
# SQL
```
import psycopg2
import sys, os
import numpy as np
import pandas as pd
import example_psql as creds
import pandas.io.sql as psql
# Create connection to postgresql
import example_psql as creds
from sqlalchemy import create_engine
engine = create_engine(f'postgresql://{creds.PGUSER}:{creds.PGPASSWORD}@{creds.PGHOST}:5432/{creds.PGDATABASE}')
# Table1: EDFID
# Field: edfid, path, montage, ...,
import sys
sys.path.append('..')
from src.data.file_io import listdir_edfs
df = listdir_edfs('/Users/yanxlin/github/ids/tusz_1_5_2/edf/')
df = df.rename(columns = {'path7':'train_test'})
df.to_sql('directory', con=engine, if_exists='replace')
df.head()
df = pd.read_table('/Users/yanxlin/github/ids/tusz_1_5_2/_DOCS/ref_train.txt', header=None, sep=' ',
names =['token', 'time_start', 'time_end', 'label', 'prob']).assign(train_test='train')
df2 = pd.read_table('/Users/yanxlin/github/ids/tusz_1_5_2/_DOCS/ref_dev.txt', header=None, sep=' ',
names =['token', 'time_start', 'time_end', 'label', 'prob']).assign(train_test='test')
df.append(df2).to_sql('seiz_bckg', engine, if_exists='replace')
df.append(df2).head()
# chop edf data into pieces and compute
# read all edf
df = pd.read_sql_table('directory', engine).head(4)
chunk_size = 10
token, token_paths = [], []
# clear sql DB table
for irow, row in df.iterrows():
    if irow % chunk_size != 0:
        token.append(row['token'])
        token_paths.append(row['token_path'])
        continue
    # else:
    #     # get a list of features from token_path
    #     df = token_path_to_data_frame(token_paths)
    #     # append to sql DB
    #     token, token_paths = [], []
print(token)
# compute dataset and labels
# save table: token time_abs time_rel features
from src.data import file_io
from src.features import dataset_funcs
# def get_features_():
# tokens = pd.read_sql_table('directory', engine).loc(lambda df: df['train_test']=='train').head(2).loc[:, 'token_path']
# # ds = file_io.make_dataset(tokens, 100, 100, 100)
# # return dataset_funcs.get_features(ds)
# return tokens
# get_features_().head()
tks = pd.read_sql("select token, token_path from directory where train_test = 'train' and tcp_type = '01_tcp_ar';", engine)
ds, _ = file_io.make_dataset(tks.loc[:,'token_path'].head(1).to_numpy(), 100, 100, 100)
dataset_funcs.get_features(ds)
tk = tks.loc[:, 'token_path'].sample(100, random_state = 103).head(1).to_numpy()[0]
intvs, lbls = file_io.load_tse_bi(tk)
f, s, l = file_io.read_1_token(tk)
intvs, lbls, np.shape(s)/np.mean(f)
from src.features.to_sql import __feature_1_token
fsamp = 256
tks = pd.read_sql("select token, token_path from directory where train_test = 'train' and tcp_type = '01_tcp_ar';", engine)
pd.concat([__feature_1_token(Series['token_path'], fsamp=fsamp).assign(token = Series['token'])
for (index, Series) in tks.head(1).iterrows()])
# timestamps = range(1, 500)
# rt = intvs_[-1] - np.array(list(reversed(timestamps)))
# rit = intvs_[-1] - np.array(list(reversed([0] + list(intvs_)[:-1])))
# rlb = list(reversed(lbls))
# rt, rit, rlb
# list(reversed(post_sezure_s(rt, rit, rlb)))
# pres_seizure_s(timestamps, intvs_, lbls)
# res = post_sezure_s(rt, rit, list(reversed(lbls)))
df = dataset_funcs.get_features(ds)
from src.data import label
df.assign(post = lambda df: label.post_sezure_s(df.index+1, intvs, lbls),
          pres = lambda df: label.pres_seizure_s(df.index+1, intvs, lbls)).head(390)
```
#### backup
```
## ****** LOAD PSQL DATABASE ***** ##
# Set up a connection to the postgres server.
conn_string = "host="+ creds.PGHOST +" port="+ "5432" +" dbname="+ creds.PGDATABASE +" user=" + creds.PGUSER \
+" password="+ creds.PGPASSWORD
conn=psycopg2.connect(conn_string)
print("Connected!")
# Create a cursor object
cursor = conn.cursor()
def read_file(schema, table):
    sql_command = "SELECT * FROM {}.{};".format(str(schema), str(table))
    print(sql_command)

    # Load the data
    data = pd.read_sql(sql_command, conn)
    print(data.shape)
    return data
```
# Numpy Scipy
```
np.array([
[[1,2,3],
[4,5,6],
[7,8,9]],
[[9.2,8.2,7.2],
[6.2,5.2,4.2],
[3.2,2.2,1.2]]
]).transpose([1,2,0]).reshape([9,2])
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from scipy import interp
from sklearn.metrics import roc_auc_score
# Import some data to play with
iris = datasets.load_iris()
X = iris.data
y = iris.target
X.shape, y.shape
# Binarize the output
y = label_binarize(y, classes=[0, 1, 2])
n_classes = y.shape[1]
# Add noisy features to make the problem harder
random_state = np.random.RandomState(0)
n_samples, n_features = X.shape
X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]
# shuffle and split training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5,
random_state=0)
# Learn to predict each class against the other
classifier = OneVsRestClassifier(svm.SVC(kernel='linear', probability=True,
random_state=random_state))
y_score = classifier.fit(X_train, y_train).decision_function(X_test)
def calc_roc(y_test, y_score):
    """
    Args:
        y_test: 2-d np.array
    """
    # Compute ROC curve and ROC area for each class
    fpr = dict()
    tpr = dict()
    roc_auc = dict()
    for i in range(y_test.shape[1]):
        fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
        roc_auc[i] = auc(fpr[i], tpr[i])

    # Compute micro-average ROC curve and ROC area
    fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
    roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
    return fpr, tpr, roc_auc
def plot_roc(fpr, tpr, roc_auc, title='Receiver operating characteristic example'):
    """ref: https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html"""
    plt.figure()
    lw = 2
    plt.plot(fpr, tpr, color='darkorange',
             lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
    plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.05])
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title(title)
    plt.legend(loc="lower right")
    plt.show()
import numpy as np
arr = np.arange(0, 10)
np.where(arr==1)[0]
import numpy as np
import pandas as pd
import glob
import os
import re
train_path = '../tusz_1_5_2/edf/train'
tcp_type = '01_tcp_ar'
patient_group = '004'
patient = '00000492'
session = 's003_2003_07_18'
token = '00000492_s003_t001'
def listdir_edfs():
    """Returns all edf filepaths in a DataFrame

    Returns:
        pd.DataFrame: filepaths
    """
    columns = ('path0', 'path1', 'path2', 'path3', 'tcp_type',
               'patient_group', 'patient', 'session', 'token')
    filelist = glob.glob(os.path.join('../tusz_1_5_2/edf/train/01_tcp_ar', '**', '*.edf'), recursive=True)
    fparts = [re.split('/|[.]edf', filename)[:-1] for filename in filelist]
    df = pd.DataFrame({key: value for key, value in zip(columns, zip(*fparts))})
    # Join the path components back into a full token path
    return df.assign(token_path=lambda x: x[list(columns)].agg('/'.join, axis=1))
df = listdir_edfs()
df.shape
df.head()
import re
[re.split('/|[.]edf',filename)[:-1] for filename in filelist]
```
# PostgreSQL
```
import numpy as np
arr = np.array([[1,2,3], [4,5,6], [7,8,9]])
arr[np.array((0,1)),:-1].shape
import matplotlib.pyplot as plt
plt.plot(np.random.randn(1000))
```
# Panda.DataFrame
```
# panda data frame group by and aggregation
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(0)
df = pd.DataFrame(dict(a=np.random.rand(10), b=np.random.rand(10), group=np.repeat(['A', 'B'], 5)))
df
# group by c=a+b
df.groupby('group').agg('sum')
res = []
for name, group in df.groupby('group'):
res.append(group.assign(c = lambda x: x.a+x.b))
pd.concat(res)
# long to wide
pd.concat(res).assign(index=np.tile(np.arange(0,5),2)).pivot(index='index',columns='group', values=['a', 'b', 'c'])
# df2.reset_index()
```
## Scikit learn
```
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
import numpy as np
X, y = make_blobs(n_samples=10000, n_features=10, centers=100,
random_state=0)
np.shape(X), np.shape(y)
from sklearn import preprocessing
X_scaled = preprocessing.scale(X)
X[0:5], X_scaled[0:5]
clf = DecisionTreeClassifier(max_depth=None, min_samples_split=2,
random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
scores.mean()
clf = RandomForestClassifier(n_estimators=10, max_features='sqrt', max_depth=None,
min_samples_split=2, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
scores.mean()
clf = ExtraTreesClassifier(n_estimators=10, max_depth=None,
min_samples_split=2, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
scores.mean() > 0.999
```
##### Copyright 2021 The TensorFlow Federated Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Client-efficient large-model federated learning via `federated_select` and sparse aggregation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/federated/tutorials/sparse_federated_learning"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/federated/blob/v0.20.0/docs/tutorials/sparse_federated_learning.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/federated/blob/v0.20.0/docs/tutorials/sparse_federated_learning.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/federated/docs/tutorials/sparse_federated_learning.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial shows how TFF can be used to train a very large model where each client device only downloads and updates a small part of the model, using
`tff.federated_select` and sparse aggregation. While this tutorial is fairly self-contained, the [`tff.federated_select` tutorial](https://www.tensorflow.org/federated/tutorials/federated_select) and [custom FL algorithms tutorial](https://www.tensorflow.org/federated/tutorials/building_your_own_federated_learning_algorithm) provide good introductions to some of the techniques used here.
Concretely, in this tutorial we consider logistic regression for multi-label classification, predicting which "tags" are associated with a text string based on a bag-of-words feature representation. Importantly, communication and client-side computation costs are controlled by a fixed constant (`MAX_TOKENS_SELECTED_PER_CLIENT`), and *do not* scale with the overall vocabulary size, which could be extremely large in practical settings.
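Concretely, "logistic regression for multi-label classification" here means one independent sigmoid output per tag over a bag-of-words input; a minimal NumPy sketch with toy sizes (an illustration, not the TFF model built below):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, TAGS = 12, 3                  # toy sizes; real vocabularies can be huge
W = rng.normal(size=(VOCAB, TAGS))   # one weight column per tag

# A bag-of-words example: a {0, 1} indicator over the vocabulary.
x = np.zeros(VOCAB)
x[[1, 4, 7]] = 1.0

# Independent per-tag probabilities via a sigmoid. Scoring one example
# only touches the rows of W for tokens present in x, which is what
# makes sparse/selective schemes attractive at large vocabulary sizes.
logits = x @ W
probs = 1.0 / (1.0 + np.exp(-logits))
print(probs.shape)  # (3,)
```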
```
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
import collections
import itertools
import numpy as np
from typing import Callable, List, Tuple
import tensorflow as tf
import tensorflow_federated as tff
tff.backends.native.set_local_python_execution_context()
```
Each client will `federated_select` the rows of the model weights for at most this many unique tokens. This upper-bounds the size of the client's local model and the amount of server -> client (`federated_select`) and client -> server (`federated_aggregate`) communication performed.
This tutorial should still run correctly even if you set this as small as 1 (ensuring not all tokens from each client are selected) or to a large value, though model convergence may be affected.
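To make the bound concrete: each selected token corresponds to one model row of `TAG_VOCAB_SIZE` weights, so the per-round `federated_select` download is at most `M * TAG_VOCAB_SIZE` values regardless of vocabulary size. A rough back-of-envelope with the toy constants used later (assuming float32 weights):

```python
MAX_TOKENS_SELECTED_PER_CLIENT = 6   # M, as set below
TAG_VOCAB_SIZE = 4                   # 3 tags + 1 OOV bucket (toy values)
BYTES_PER_FLOAT32 = 4

# Per-client, per-round download bound for the model slices.
max_download_values = MAX_TOKENS_SELECTED_PER_CLIENT * TAG_VOCAB_SIZE
max_download_bytes = max_download_values * BYTES_PER_FLOAT32
print(max_download_values, max_download_bytes)  # 24 values, 96 bytes
```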
```
MAX_TOKENS_SELECTED_PER_CLIENT = 6
```
We also define a few constants for various types. For this colab, a **token** is an integer identifier for a particular word after parsing the dataset.
```
# There are some constraints on types
# here that will require some explicit type conversions:
# - `tff.federated_select` requires int32
# - `tf.SparseTensor` requires int64 indices.
TOKEN_DTYPE = tf.int64
SELECT_KEY_DTYPE = tf.int32
# Type for counts of token occurrences.
TOKEN_COUNT_DTYPE = tf.int32
# A sparse feature vector can be thought of as a map
# from TOKEN_DTYPE to FEATURE_DTYPE.
# Our features are {0, 1} indicators, so we could potentially
# use tf.int8 as an optimization.
FEATURE_DTYPE = tf.int32
```
# Setting up the problem: Dataset and Model
We construct a tiny toy dataset for easy experimentation in this tutorial. However, the format of the dataset is compatible with [Federated StackOverflow](https://www.tensorflow.org/federated/api_docs/python/tff/simulation/datasets/stackoverflow/load_data), and
the [pre-processing](https://github.com/google-research/federated/blob/0a558bac8a724fc38175ff4f0ce46c7af3d24be2/utils/datasets/stackoverflow_tag_prediction.py) and [model architecture](https://github.com/google-research/federated/blob/49a43456aa5eaee3e1749855eed89c0087983541/utils/models/stackoverflow_lr_models.py) are adopted from the StackOverflow
tag prediction problem of [*Adaptive Federated Optimization*](https://arxiv.org/abs/2003.00295).
## Dataset parsing and pre-processing
```
NUM_OOV_BUCKETS = 1
BatchType = collections.namedtuple('BatchType', ['tokens', 'tags'])
def build_to_ids_fn(word_vocab: List[str],
                    tag_vocab: List[str]) -> Callable[[tf.Tensor], tf.Tensor]:
  """Constructs a function mapping examples to sequences of token indices."""
  word_table_values = np.arange(len(word_vocab), dtype=np.int64)
  word_table = tf.lookup.StaticVocabularyTable(
      tf.lookup.KeyValueTensorInitializer(word_vocab, word_table_values),
      num_oov_buckets=NUM_OOV_BUCKETS)

  tag_table_values = np.arange(len(tag_vocab), dtype=np.int64)
  tag_table = tf.lookup.StaticVocabularyTable(
      tf.lookup.KeyValueTensorInitializer(tag_vocab, tag_table_values),
      num_oov_buckets=NUM_OOV_BUCKETS)

  def to_ids(example):
    """Converts a Stack Overflow example to a bag-of-words/tags format."""
    sentence = tf.strings.join([example['tokens'], example['title']],
                               separator=' ')

    # We represent the label (output tags) densely.
    raw_tags = example['tags']
    tags = tf.strings.split(raw_tags, sep='|')
    tags = tag_table.lookup(tags)
    tags, _ = tf.unique(tags)
    tags = tf.one_hot(tags, len(tag_vocab) + NUM_OOV_BUCKETS)
    tags = tf.reduce_max(tags, axis=0)

    # We represent the features as a SparseTensor of {0, 1}s.
    words = tf.strings.split(sentence)
    tokens = word_table.lookup(words)
    tokens, _ = tf.unique(tokens)
    # Note: We could choose to use the word counts as the feature vector
    # instead of just {0, 1} values (see tf.unique_with_counts).
    tokens = tf.reshape(tokens, shape=(tf.size(tokens), 1))
    tokens_st = tf.SparseTensor(
        tokens,
        tf.ones(tf.size(tokens), dtype=FEATURE_DTYPE),
        dense_shape=(len(word_vocab) + NUM_OOV_BUCKETS,))
    tokens_st = tf.sparse.reorder(tokens_st)

    return BatchType(tokens_st, tags)

  return to_ids
def build_preprocess_fn(word_vocab, tag_vocab):

  @tf.function
  def preprocess_fn(dataset):
    to_ids = build_to_ids_fn(word_vocab, tag_vocab)
    # We *don't* shuffle in order to make this colab deterministic for
    # easier testing and reproducibility.
    # But real-world training should use `.shuffle()`.
    return dataset.map(to_ids, num_parallel_calls=tf.data.experimental.AUTOTUNE)

  return preprocess_fn
```
## A tiny toy dataset
We construct a tiny toy dataset with a global vocabulary of 12 words and 3 clients. This tiny example is useful for testing edge cases (for example,
we have two clients with less than `MAX_TOKENS_SELECTED_PER_CLIENT = 6` distinct tokens, and one with more) and developing the code.
However, the real-world use cases of this approach would be global vocabularies of 10s of millions or more, with perhaps 1000s of distinct tokens appearing on each client. Because the format of the data is the same, the extension to more realistic testbed problems, e.g. the `tff.simulation.datasets.stackoverflow.load_data()` dataset, should be straightforward.
First, we define our word and tag vocabularies.
```
# Features
FRUIT_WORDS = ['apple', 'orange', 'pear', 'kiwi']
VEGETABLE_WORDS = ['carrot', 'broccoli', 'arugula', 'peas']
FISH_WORDS = ['trout', 'tuna', 'cod', 'salmon']
WORD_VOCAB = FRUIT_WORDS + VEGETABLE_WORDS + FISH_WORDS
# Labels
TAG_VOCAB = ['FRUIT', 'VEGETABLE', 'FISH']
```
Now, we create 3 clients with small local datasets. If you are running this tutorial in colab, it may be useful to use the "mirror cell in tab" feature to pin this cell and its output in order to interpret/check the output of the functions developed below.
```
preprocess_fn = build_preprocess_fn(WORD_VOCAB, TAG_VOCAB)
def make_dataset(raw):
  d = tf.data.Dataset.from_tensor_slices(
      # Matches the StackOverflow formatting
      collections.OrderedDict(
          tokens=tf.constant([t[0] for t in raw]),
          tags=tf.constant([t[1] for t in raw]),
          title=['' for _ in raw]))
  d = preprocess_fn(d)
  return d
# 4 distinct tokens
CLIENT1_DATASET = make_dataset([
('apple orange apple orange', 'FRUIT'),
('carrot trout', 'VEGETABLE|FISH'),
('orange apple', 'FRUIT'),
('orange', 'ORANGE|CITRUS')  # 2 OOV tags
])
# 6 distinct tokens
CLIENT2_DATASET = make_dataset([
('pear cod', 'FRUIT|FISH'),
('arugula peas', 'VEGETABLE'),
('kiwi pear', 'FRUIT'),
('sturgeon', 'FISH'), # OOV word
('sturgeon bass', 'FISH') # 2 OOV words
])
# A client with all possible words & tags (13 distinct tokens).
# With MAX_TOKENS_SELECTED_PER_CLIENT = 6, we won't download the model
# slices for all tokens that occur on this client.
CLIENT3_DATASET = make_dataset([
(' '.join(WORD_VOCAB + ['oovword']), '|'.join(TAG_VOCAB)),
# Make the OOV token and 'salmon' occur in the largest number
# of examples on this client:
('salmon oovword', 'FISH|OOVTAG')
])
print('Word vocab')
for i, word in enumerate(WORD_VOCAB):
print(f'{i:2d} {word}')
print('\nTag vocab')
for i, tag in enumerate(TAG_VOCAB):
print(f'{i:2d} {tag}')
```
Define constants for the raw numbers of input features (tokens/words) and labels (post tags). Our actual input/output spaces are `NUM_OOV_BUCKETS = 1` larger because we add an OOV token / tag.
```
NUM_WORDS = len(WORD_VOCAB)
NUM_TAGS = len(TAG_VOCAB)
WORD_VOCAB_SIZE = NUM_WORDS + NUM_OOV_BUCKETS
TAG_VOCAB_SIZE = NUM_TAGS + NUM_OOV_BUCKETS
```
Create batched versions of the datasets, and individual batches, which will be useful in testing code as we go.
```
batched_dataset1 = CLIENT1_DATASET.batch(2)
batched_dataset2 = CLIENT2_DATASET.batch(3)
batched_dataset3 = CLIENT3_DATASET.batch(2)
batch1 = next(iter(batched_dataset1))
batch2 = next(iter(batched_dataset2))
batch3 = next(iter(batched_dataset3))
```
## Define a model with sparse inputs
We use a simple independent logistic regression model for each tag.
```
def create_logistic_model(word_vocab_size: int, vocab_tags_size: int):
  model = tf.keras.models.Sequential([
      tf.keras.layers.InputLayer(input_shape=(word_vocab_size,), sparse=True),
      tf.keras.layers.Dense(
          vocab_tags_size,
          activation='sigmoid',
          kernel_initializer=tf.keras.initializers.zeros,
          # For simplicity, don't use a bias vector; this means the model
          # is a single tensor, and we only need sparse aggregation of
          # the per-token slices of the model. Generalizing to also handle
          # other model weights that are fully updated
          # (non-dense broadcast and aggregate) would be a good exercise.
          use_bias=False),
  ])
  return model
```
Let's make sure it works, first by making predictions:
```
model = create_logistic_model(WORD_VOCAB_SIZE, TAG_VOCAB_SIZE)
p = model.predict(batch1.tokens)
print(p)
```
And some simple centralized training:
```
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.001),
loss=tf.keras.losses.BinaryCrossentropy())
model.train_on_batch(batch1.tokens, batch1.tags)
```
# Building blocks for the federated computation
We will implement a simple version of the [Federated Averaging](https://arxiv.org/abs/1602.05629) algorithm with the key difference that each device only downloads a relevant subset of the model, and only contributes updates to that subset.
We use `M` as shorthand for `MAX_TOKENS_SELECTED_PER_CLIENT`. At a high level, one round of training involves these steps:
1. Each participating client scans over its local dataset, parsing the input strings and mapping them to the correct tokens (int indexes). This requires access to the global (large) dictionary (this could potentially be avoided using [feature hashing](https://en.wikipedia.org/wiki/Feature_hashing) techniques). We then sparsely count how many times each token occurs. If `U` unique tokens occur on device, we choose the `num_actual_tokens = min(U, M)` most frequent tokens to train.
1. The clients use `federated_select` to retrieve the model coefficients for the `num_actual_tokens` selected tokens from the server. Each model slice is a tensor of shape `(TAG_VOCAB_SIZE, )`, so the total data transmitted to the client is at most of size `TAG_VOCAB_SIZE * M` (see note below).
1. The clients construct a mapping `global_token -> local_token` where the local token (int index) is the index of the global token in the list of selected tokens.
1. The clients use a "small" version of the global model that only has coefficients for at most `M` tokens, from the range `[0, num_actual_tokens)`. The `global -> local` mapping is used to initialize the dense parameters of this model from the selected model slices.
1. Clients train their local model using SGD on data preprocessed with the `global -> local` mapping.
1. Clients turn the parameters of their local model into `IndexedSlices` updates using the `local -> global` mapping to index the rows. The server aggregates these updates using a sparse sum aggregation.
1. The server takes the (dense) result of the above aggregation, divides it by the number of clients participating, and applies the resulting average update to the global model.
In this section we construct the building blocks for these steps, which will then be combined in a final `federated_computation` that captures the full logic of one training round.
> NOTE: The above description hides one technical detail: Both `federated_select` and the construction of the local model require statically known shapes, and so we cannot use the dynamic per-client `num_actual_tokens` size. Instead, we use the static value `M`, adding padding where needed. This does not impact the semantics of the algorithm.
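The round structure above can be sketched outside TFF with a NumPy toy: top-`M` token selection, row-wise "download" of the global model, and a sparse sum the server averages. Everything here is a stand-in (including the fake constant `0.1` "update" that replaces local training):

```python
import numpy as np

VOCAB, TAGS, M = 10, 3, 4           # toy sizes; M = MAX_TOKENS_SELECTED_PER_CLIENT
global_W = np.zeros((VOCAB, TAGS))  # the server's full model

def client_round(token_counts):
    """One client's contribution: pick top-M tokens, 'download' those
    rows, fake a training step, return (rows, delta) IndexedSlices-style."""
    tokens = np.argsort(-token_counts)[:M]      # most frequent first
    tokens = tokens[token_counts[tokens] > 0]   # drop zero-count padding
    local_W = global_W[tokens]                  # federated_select analogue
    trained_W = local_W + 0.1                   # stand-in for local SGD
    return tokens, trained_W - local_W

clients = [np.array([3, 0, 1, 0, 2, 0, 0, 0, 0, 5]),
           np.array([0, 1, 0, 4, 0, 0, 2, 0, 1, 0])]

# Server: sparse-sum the per-row updates, then average over clients.
agg = np.zeros_like(global_W)
for counts in clients:
    rows, delta = client_round(counts)
    np.add.at(agg, rows, delta)                 # sparse aggregation
global_W += agg / len(clients)
print(global_W[9])  # row for a token only client 0 selected
```

Rows no client selected stay untouched, which is exactly why communication and computation scale with `M` rather than with `VOCAB`.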
### Count client tokens and decide which model slices to `federated_select`
Each device needs to decide which "slices" of the model are relevant to its local training dataset. For our problem, we do this by (sparsely!) counting how many examples contain each token in the client training data set.
```
@tf.function
def token_count_fn(token_counts, batch):
  """Adds counts from `batch` to the running `token_counts` sum."""
  # Sum across the batch dimension.
  flat_tokens = tf.sparse.reduce_sum(
      batch.tokens, axis=0, output_is_sparse=True)
  flat_tokens = tf.cast(flat_tokens, dtype=TOKEN_COUNT_DTYPE)
  return tf.sparse.add(token_counts, flat_tokens)

# Simple tests
# Create the initial zero token counts using empty tensors.
initial_token_counts = tf.SparseTensor(
    indices=tf.zeros(shape=(0, 1), dtype=TOKEN_DTYPE),
    values=tf.zeros(shape=(0,), dtype=TOKEN_COUNT_DTYPE),
    dense_shape=(WORD_VOCAB_SIZE,))

client_token_counts = batched_dataset1.reduce(initial_token_counts,
                                              token_count_fn)
tokens = tf.reshape(client_token_counts.indices, (-1,)).numpy()
print('tokens:', tokens)
np.testing.assert_array_equal(tokens, [0, 1, 4, 8])
# The count is the number of *examples* in which the token/word
# occurs, not the total number of occurrences, since we still featurize
# multiple occurrences in the same example as a "1".
counts = client_token_counts.values.numpy()
print('counts:', counts)
np.testing.assert_array_equal(counts, [2, 3, 1, 1])
```
We will select the model parameters corresponding to the `MAX_TOKENS_SELECTED_PER_CLIENT` most frequently occurring tokens on device. If fewer than this many tokens occur on device, we pad the list to enable the use of `federated_select`.
Note that other strategies are possibly better, for example, randomly selecting tokens (perhaps based on their occurrence probability). This would ensure that all slices of the model (for which the client has data) have some chance of being updated.
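One way to realize that randomized alternative is Gumbel-top-k sampling, which selects `k` items without replacement with probability proportional to their weights; a small NumPy illustration (not part of the tutorial's actual code):

```python
import numpy as np

rng = np.random.default_rng(42)
counts = np.array([8, 1, 0, 3, 1], dtype=float)  # token occurrence counts
k = 2

# Gumbel-top-k: adding independent Gumbel noise to the log-weights and
# taking the top k indices samples k items without replacement, with
# probability proportional to the weights. Zero-count tokens get -inf
# log-weight so they are never selected.
logw = np.full(counts.shape, -np.inf)
mask = counts > 0
logw[mask] = np.log(counts[mask])
gumbel = rng.gumbel(size=counts.shape)
selected = np.argsort(-(logw + gumbel))[:k]
print(selected)  # two distinct token ids with nonzero counts
```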
```
@tf.function
def keys_for_client(client_dataset, max_tokens_per_client):
"""Computes a set of max_tokens_per_client keys."""
initial_token_counts = tf.SparseTensor(
indices=tf.zeros((0, 1), dtype=TOKEN_DTYPE),
values=tf.zeros((0,), dtype=TOKEN_COUNT_DTYPE),
dense_shape=(WORD_VOCAB_SIZE,))
client_token_counts = client_dataset.reduce(initial_token_counts,
token_count_fn)
# Find the most-frequently occuring tokens
tokens = tf.reshape(client_token_counts.indices, shape=(-1,))
counts = client_token_counts.values
perm = tf.argsort(counts, direction='DESCENDING')
tokens = tf.gather(tokens, perm)
counts = tf.gather(counts, perm)
num_raw_tokens = tf.shape(tokens)[0]
actual_num_tokens = tf.minimum(max_tokens_per_client, num_raw_tokens)
selected_tokens = tokens[:actual_num_tokens]
paddings = [[0, max_tokens_per_client - tf.shape(selected_tokens)[0]]]
padded_tokens = tf.pad(selected_tokens, paddings=paddings)
# Make sure the shape is statically determined
padded_tokens = tf.reshape(padded_tokens, shape=(max_tokens_per_client,))
# We will pass these tokens as keys into `federated_select`, which
# requires SELECT_KEY_DTYPE=tf.int32 keys.
padded_tokens = tf.cast(padded_tokens, dtype=SELECT_KEY_DTYPE)
return padded_tokens, actual_num_tokens
# Simple test
# Case 1: actual_num_tokens > max_tokens_per_client
selected_tokens, actual_num_tokens = keys_for_client(batched_dataset1, 3)
assert tf.size(selected_tokens) == 3
assert actual_num_tokens == 3
# Case 2: actual_num_tokens < max_tokens_per_client
selected_tokens, actual_num_tokens = keys_for_client(batched_dataset1, 10)
assert tf.size(selected_tokens) == 10
assert actual_num_tokens == 4
```
### Map global tokens to local tokens
The above selection gives us a dense set of tokens in the range `[0, actual_num_tokens)` which we will use for the on-device model. However, the dataset we read has tokens from the much larger global vocabulary range `[0, WORD_VOCAB_SIZE)`.
Thus, we need to map the global tokens to their corresponding local tokens. The
local token ids are simply given by the indexes into the `selected_tokens` tensor computed in the previous step.
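The mapping itself is just "index into `selected_tokens`, with a default of -1 for anything not selected". A plain-Python sketch (the variable names are illustrative) of the hash-table lookup used in the TF code below:

```python
# Hypothetical client keys: the tokens selected in the previous step.
client_keys = [1, 0, 4]

# Local id = index into client_keys; unselected tokens map to -1.
global_to_local = {g: l for l, g in enumerate(client_keys)}
lookup = lambda g: global_to_local.get(g, -1)

# Token 8 was not selected, so it maps to -1 instead of a local id.
local_ids = [lookup(g) for g in [0, 1, 4, 8]]
assert local_ids == [1, 0, 2, -1]

# Dropping the -1 entries mirrors the `available_tokens` mask in the
# TF implementation below.
kept = [t for t in local_ids if t >= 0]
assert kept == [1, 0, 2]
```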
```
@tf.function
def map_to_local_token_ids(client_data, client_keys):
global_to_local = tf.lookup.StaticHashTable(
# Note int32 -> int64 maps are not supported
tf.lookup.KeyValueTensorInitializer(
keys=tf.cast(client_keys, dtype=TOKEN_DTYPE),
# Note we need to use tf.shape, not the static
# shape client_keys.shape[0]
values=tf.range(0, limit=tf.shape(client_keys)[0],
dtype=TOKEN_DTYPE)),
# We use -1 for tokens that were not selected, which can occur for clients
# with more than MAX_TOKENS_SELECTED_PER_CLIENT distinct tokens.
# We will simply remove these invalid indices from the batch below.
default_value=-1)
def to_local_ids(sparse_tokens):
indices_t = tf.transpose(sparse_tokens.indices)
batch_indices = indices_t[0] # First column
tokens = indices_t[1] # Second column
tokens = tf.map_fn(
lambda global_token_id: global_to_local.lookup(global_token_id), tokens)
# Remove tokens that aren't actually available (looked up as -1):
available_tokens = tokens >= 0
tokens = tokens[available_tokens]
batch_indices = batch_indices[available_tokens]
updated_indices = tf.transpose(
tf.concat([[batch_indices], [tokens]], axis=0))
st = tf.sparse.SparseTensor(
updated_indices,
tf.ones(tf.size(tokens), dtype=FEATURE_DTYPE),
dense_shape=sparse_tokens.dense_shape)
st = tf.sparse.reorder(st)
return st
return client_data.map(lambda b: BatchType(to_local_ids(b.tokens), b.tags))
# Simple test
client_keys, actual_num_tokens = keys_for_client(
batched_dataset3, MAX_TOKENS_SELECTED_PER_CLIENT)
client_keys = client_keys[:actual_num_tokens]
d = map_to_local_token_ids(batched_dataset3, client_keys)
batch = next(iter(d))
all_tokens = tf.gather(batch.tokens.indices, indices=1, axis=1)
# Confirm we have local indices in the range [0, MAX):
assert tf.math.reduce_max(all_tokens) < MAX_TOKENS_SELECTED_PER_CLIENT
assert tf.math.reduce_max(all_tokens) >= 0
```
### Train the local (sub)model on each client
Note `federated_select` will return the selected slices as a `tf.data.Dataset` in the same order as the selection keys. So, we first define a utility function to take such a Dataset and convert it to a single dense tensor which can be used as the model weights of the client model.
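The batching trick is that a single `.batch()` call of size `MAX_TOKENS_SELECTED_PER_CLIENT` gathers every slice into one tensor, and the "check empty" assertion guarantees one batch was enough. A framework-free sketch of the same idea (`slices_to_matrix` is a hypothetical analogue, not the function defined below):

```python
def slices_to_matrix(slices, max_rows):
    """Gather a stream of row-slices into one matrix, asserting that a
    single 'batch' of size max_rows consumed the whole stream."""
    it = iter(slices)
    rows = []
    for _ in range(max_rows):
        try:
            rows.append(next(it))
        except StopIteration:
            break
    # Mirror the CHECK_EMPTY assertion: nothing may remain after one batch.
    try:
        next(it)
        raise AssertionError('more slices than max_rows')
    except StopIteration:
        pass
    return rows

matrix = slices_to_matrix([[0.0, 1.0], [2.0, 3.0]], max_rows=4)
assert matrix == [[0.0, 1.0], [2.0, 3.0]]
```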
```
@tf.function
def slices_dataset_to_tensor(slices_dataset):
"""Convert a dataset of slices to a tensor."""
# Use batching to gather all of the slices into a single tensor.
d = slices_dataset.batch(MAX_TOKENS_SELECTED_PER_CLIENT,
drop_remainder=False)
iter_d = iter(d)
tensor = next(iter_d)
# Make sure we have consumed everything
opt = iter_d.get_next_as_optional()
tf.Assert(tf.logical_not(opt.has_value()), data=[''], name='CHECK_EMPTY')
return tensor
# Simple test
weights = np.random.random(
size=(MAX_TOKENS_SELECTED_PER_CLIENT, TAG_VOCAB_SIZE)).astype(np.float32)
model_slices_as_dataset = tf.data.Dataset.from_tensor_slices(weights)
weights2 = slices_dataset_to_tensor(model_slices_as_dataset)
np.testing.assert_array_equal(weights, weights2)
```
We now have all the components we need to define a simple local training loop which will run on each client.
```
@tf.function
def client_train_fn(model, client_optimizer,
model_slices_as_dataset, client_data,
client_keys, actual_num_tokens):
initial_model_weights = slices_dataset_to_tensor(model_slices_as_dataset)
assert len(model.trainable_variables) == 1
model.trainable_variables[0].assign(initial_model_weights)
# Only keep the "real" (unpadded) keys.
client_keys = client_keys[:actual_num_tokens]
client_data = map_to_local_token_ids(client_data, client_keys)
loss_fn = tf.keras.losses.BinaryCrossentropy()
for features, labels in client_data:
with tf.GradientTape() as tape:
predictions = model(features)
loss = loss_fn(labels, predictions)
grads = tape.gradient(loss, model.trainable_variables)
client_optimizer.apply_gradients(zip(grads, model.trainable_variables))
model_weights_delta = model.trainable_weights[0] - initial_model_weights
model_weights_delta = tf.slice(model_weights_delta, begin=[0, 0],
size=[actual_num_tokens, -1])
return client_keys, model_weights_delta
# Simple test
# Note if you execute this cell a second time, you need to also re-execute
# the preceding cell to avoid "tf.function-decorated function tried to
# create variables on non-first call" errors.
on_device_model = create_logistic_model(MAX_TOKENS_SELECTED_PER_CLIENT,
TAG_VOCAB_SIZE)
client_optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)
client_keys, actual_num_tokens = keys_for_client(
batched_dataset2, MAX_TOKENS_SELECTED_PER_CLIENT)
model_slices_as_dataset = tf.data.Dataset.from_tensor_slices(
np.zeros((MAX_TOKENS_SELECTED_PER_CLIENT, TAG_VOCAB_SIZE),
dtype=np.float32))
keys, delta = client_train_fn(
on_device_model,
client_optimizer,
model_slices_as_dataset,
client_data=batched_dataset3,
client_keys=client_keys,
actual_num_tokens=actual_num_tokens)
print(delta)
```
### Aggregate IndexedSlices
We use `tff.federated_aggregate` to construct a federated sparse sum for `IndexedSlices`. This simple implementation has the constraint that the
`dense_shape` is known statically in advance. Note also that this sum is only *semi-sparse*, in the sense that the client -> server communication is sparse, but the server maintains a dense representation of the sum in `accumulate` and `merge`, and outputs this dense representation.
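The accumulate step densifies each client's `IndexedSlices` into a running dense sum. A minimal pure-Python sketch of that semi-sparse aggregation (toy indices and values, illustrative names):

```python
def indexed_slices_sum(client_slices, dense_shape):
    """Sum per-client (indices, values) pairs into one dense matrix."""
    rows, cols = dense_shape
    dense = [[0.0] * cols for _ in range(rows)]    # the server-side "zero"
    for indices, values in client_slices:          # accumulate, client by client
        for i, row in zip(indices, values):
            dense[i] = [a + b for a, b in zip(dense[i], row)]
    return dense

clients = [
    ([2, 0], [[2.0, 2.1], [0.0, 0.1]]),  # client 1's sparse update
    ([1, 0], [[1.0, 1.0], [0.5, 0.0]]),  # client 2's sparse update
]
total = indexed_slices_sum(clients, dense_shape=(4, 2))
assert total == [[0.5, 0.1], [1.0, 1.0], [2.0, 2.1], [0.0, 0.0]]
```

Client-to-server communication stays sparse (only `(indices, values)` pairs are sent), but the accumulator itself is dense, just as in the TFF implementation below.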
```
def federated_indexed_slices_sum(slice_indices, slice_values, dense_shape):
"""
Sums IndexedSlices@CLIENTS to a dense @SERVER Tensor.
Intermediate aggregation is performed by converting to a dense representation,
which may not be suitable for all applications.
Args:
slice_indices: An IndexedSlices.indices tensor @CLIENTS.
slice_values: An IndexedSlices.values tensor @CLIENTS.
dense_shape: A statically known dense shape.
Returns:
A dense tensor placed @SERVER representing the sum of the clients'
IndexedSlices.
"""
slices_dtype = slice_values.type_signature.member.dtype
zero = tff.tf_computation(
lambda: tf.zeros(dense_shape, dtype=slices_dtype))()
@tf.function
def accumulate_slices(dense, client_value):
indices, slices = client_value
# There is no built-in way to add `IndexedSlices`, but
# tf.convert_to_tensor is a quick way to convert to a dense representation
# so we can add them.
return dense + tf.convert_to_tensor(
tf.IndexedSlices(slices, indices, dense_shape))
return tff.federated_aggregate(
(slice_indices, slice_values),
zero=zero,
accumulate=tff.tf_computation(accumulate_slices),
merge=tff.tf_computation(lambda d1, d2: tf.add(d1, d2, name='merge')),
report=tff.tf_computation(lambda d: d))
```
We construct a minimal `federated_computation` as a test:
```
dense_shape = (6, 2)
indices_type = tff.TensorType(tf.int64, (None,))
values_type = tff.TensorType(tf.float32, (None, 2))
client_slice_type = tff.type_at_clients(
(indices_type, values_type))
@tff.federated_computation(client_slice_type)
def test_sum_indexed_slices(indices_values_at_client):
indices, values = indices_values_at_client
return federated_indexed_slices_sum(indices, values, dense_shape)
print(test_sum_indexed_slices.type_signature)
x = tf.IndexedSlices(
values=np.array([[2., 2.1], [0., 0.1], [1., 1.1], [5., 5.1]],
dtype=np.float32),
indices=[2, 0, 1, 5],
dense_shape=dense_shape)
y = tf.IndexedSlices(
values=np.array([[0., 0.3], [3.1, 3.2]], dtype=np.float32),
indices=[1, 3],
dense_shape=dense_shape)
# Sum one.
result = test_sum_indexed_slices([(x.indices, x.values)])
np.testing.assert_array_equal(tf.convert_to_tensor(x), result)
# Sum two.
expected = [[0., 0.1], [1., 1.4], [2., 2.1], [3.1, 3.2], [0., 0.], [5., 5.1]]
result = test_sum_indexed_slices([(x.indices, x.values), (y.indices, y.values)])
np.testing.assert_array_almost_equal(expected, result)
```
# Putting it all together in a `federated_computation`
We now use TFF to bind the components together into a `tff.federated_computation`.
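Before wiring this up in TFF, the whole round can be walked through with toy numbers: select slices by key, train locally to produce a sparse delta, sum the deltas semi-sparsely, and average. Everything below (the 6-token vocabulary, scalar weights, gradients) is illustrative:

```python
# Toy end-to-end round: 6-token vocab, one scalar weight per token.
server_model = [0.0] * 6

def client_round(model, keys, grads, lr=0.5):
    """federated_select + local SGD, returning a sparse (keys, delta) update."""
    slices = [model[k] for k in keys]              # federated_select
    updated = [w - lr * g for w, g in zip(slices, grads)]
    delta = [u - s for u, s in zip(updated, slices)]
    return keys, delta

# Two clients touch different (overlapping) slices of the model.
updates = [client_round(server_model, [0, 2], [1.0, 2.0]),
           client_round(server_model, [2, 4], [2.0, 4.0])]

# Semi-sparse sum of deltas, then FedAvg with server learning rate 1.0.
dense_sum = [0.0] * 6
for keys, delta in updates:
    for k, d in zip(keys, delta):
        dense_sum[k] += d
num_clients = len(updates)
server_model = [w + s / num_clients for w, s in zip(server_model, dense_sum)]
assert server_model == [-0.25, 0.0, -1.0, 0.0, -1.0, 0.0]
```

Slices that no client touched (indices 1, 3, 5) receive a zero delta and keep their server value.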
```
DENSE_MODEL_SHAPE = (WORD_VOCAB_SIZE, TAG_VOCAB_SIZE)
client_data_type = tff.SequenceType(batched_dataset1.element_spec)
model_type = tff.TensorType(tf.float32, shape=DENSE_MODEL_SHAPE)
```
We use a basic server training function based on Federated Averaging, applying the update with a server learning rate of 1.0. It is important that we apply an update (delta) to the model rather than simply averaging client-supplied models: otherwise, if no client trained on a given slice of the model in a given round, its coefficients could be zeroed out.
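The zeroing-out failure mode is easy to demonstrate with toy numbers (all values below are illustrative):

```python
server = [1.0, 5.0, 2.0]
# Client 0 trained slice 0, client 1 trained slice 2; neither touched slice 1.
deltas = [[0.5, 0.0, 0.0], [0.0, 0.0, -1.0]]

# Averaging *deltas*: an untouched slice contributes a zero delta,
# so its server coefficient survives.
avg_delta = [sum(col) / len(deltas) for col in zip(*deltas)]
updated = [w + d for w, d in zip(server, avg_delta)]
assert updated == [1.25, 5.0, 1.5]

# Averaging *models*: a slice the client never received comes back as 0,
# and naive averaging wipes out the server's value for slice 1.
client_models = [[1.5, 0.0, 0.0], [0.0, 0.0, 1.0]]
naive = [sum(col) / len(client_models) for col in zip(*client_models)]
assert naive[1] == 0.0   # slice 1 has been zeroed out
```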
```
@tff.tf_computation
def server_update(current_model_weights, update_sum, num_clients):
average_update = update_sum / num_clients
return current_model_weights + average_update
```
We need a couple more `tff.tf_computation` components:
```
# Function to select slices from the model weights in federated_select:
select_fn = tff.tf_computation(
lambda model_weights, index: tf.gather(model_weights, index))
# We need to wrap `client_train_fn` as a `tff.tf_computation`, making
# sure we do any operations that might construct `tf.Variable`s outside
# of the `tf.function` we are wrapping.
@tff.tf_computation
def client_train_fn_tff(model_slices_as_dataset, client_data, client_keys,
actual_num_tokens):
# Note this is smaller than the global model, using
# MAX_TOKENS_SELECTED_PER_CLIENT, which is much smaller than WORD_VOCAB_SIZE.
# We would like a model of size `actual_num_tokens`, but we
# can't build the model dynamically, so we will slice off the padded
# weights at the end.
client_model = create_logistic_model(MAX_TOKENS_SELECTED_PER_CLIENT,
TAG_VOCAB_SIZE)
client_optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
return client_train_fn(client_model, client_optimizer,
model_slices_as_dataset, client_data, client_keys,
actual_num_tokens)
@tff.tf_computation
def keys_for_client_tff(client_data):
return keys_for_client(client_data, MAX_TOKENS_SELECTED_PER_CLIENT)
```
We're now ready to put all the pieces together!
```
@tff.federated_computation(
tff.type_at_server(model_type), tff.type_at_clients(client_data_type))
def sparse_model_update(server_model, client_data):
max_tokens = tff.federated_value(MAX_TOKENS_SELECTED_PER_CLIENT, tff.SERVER)
keys_at_clients, actual_num_tokens = tff.federated_map(
keys_for_client_tff, client_data)
model_slices = tff.federated_select(keys_at_clients, max_tokens, server_model,
select_fn)
update_keys, update_slices = tff.federated_map(
client_train_fn_tff,
(model_slices, client_data, keys_at_clients, actual_num_tokens))
dense_update_sum = federated_indexed_slices_sum(update_keys, update_slices,
DENSE_MODEL_SHAPE)
num_clients = tff.federated_sum(tff.federated_value(1.0, tff.CLIENTS))
updated_server_model = tff.federated_map(
server_update, (server_model, dense_update_sum, num_clients))
return updated_server_model
print(sparse_model_update.type_signature)
```
# Let's train a model!
Now that we have our training function, let's try it out.
```
server_model = create_logistic_model(WORD_VOCAB_SIZE, TAG_VOCAB_SIZE)
server_model.compile( # Compile to make evaluation easy.
optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.0), # Unused
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[
tf.keras.metrics.Precision(name='precision'),
tf.keras.metrics.AUC(name='auc'),
tf.keras.metrics.Recall(top_k=2, name='recall_at_2'),
])
def evaluate(model, dataset, name):
metrics = model.evaluate(dataset, verbose=0)
metrics_str = ', '.join([f'{k}={v:.2f}' for k, v in
(zip(server_model.metrics_names, metrics))])
print(f'{name}: {metrics_str}')
print('Before training')
evaluate(server_model, batched_dataset1, 'Client 1')
evaluate(server_model, batched_dataset2, 'Client 2')
evaluate(server_model, batched_dataset3, 'Client 3')
model_weights = server_model.trainable_weights[0]
client_datasets = [batched_dataset1, batched_dataset2, batched_dataset3]
for _ in range(10): # Run 10 rounds of FedAvg
# We train on 1, 2, or 3 clients per round, selecting
# randomly.
cohort_size = np.random.randint(1, 4)
clients = np.random.choice([0, 1, 2], cohort_size, replace=False)
print('Training on clients', clients)
model_weights = sparse_model_update(
model_weights, [client_datasets[i] for i in clients])
server_model.set_weights([model_weights])
print('After training')
evaluate(server_model, batched_dataset1, 'Client 1')
evaluate(server_model, batched_dataset2, 'Client 2')
evaluate(server_model, batched_dataset3, 'Client 3')
```
<em><sub>This page is available as an executable or viewable <strong>Jupyter Notebook</strong>:</sub></em>
<br/><br/>
<a href="https://mybinder.org/v2/gh/JetBrains/lets-plot/v1.5.2demos1?filepath=docs%2Fexamples%2Fjupyter-notebooks%2Fmap_titanic.ipynb"
target="_parent">
<img align="left"
src="https://mybinder.org/badge_logo.svg">
</a>
<a href="https://nbviewer.jupyter.org/github/JetBrains/lets-plot/blob/master/docs/examples/jupyter-notebooks/map_titanic.ipynb"
target="_parent">
<img align="right"
src="https://raw.githubusercontent.com/jupyter/design/master/logos/Badges/nbviewer_badge.png"
width="109" height="20">
</a>
<br/>
<br/>
## Visualization of the Titanic's voyage.
The tasks completed in this notebook:
- Load an interactive basemap layer.
- Geocode Titanic's ports of embarkation and show them as markers on the map.
- Show the "Titanic's site" on the map.
- Geocode the Titanic's destination port and show it on the map.
- Connect all markers on the map with dashed lines.
- Compute a simple statistic related to the ports of embarkation, and show the plot and the map on the same figure.
We will use the [Lets-Plot for Python](https://github.com/JetBrains/lets-plot#lets-plot-for-python) library for all charting and geocoding tasks in this notebook.
The Titanic dataset for this demo was downloaded from ["Titanic: cleaned data" dataset](https://www.kaggle.com/jamesleslie/titanic-cleaned-data?select=train_clean.csv) (train_clean.csv) available at [kaggle](https://www.kaggle.com).
```
from lets_plot import *
LetsPlot.setup_html()
```
### The ports of embarkation.
The Titanic's ports of embarkation were:
- Southampton (UK)
- Cherbourg (France)
- Cobh (Ireland)
Let's find the geographical coordinates of these cities using the `Lets-Plot` geocoding package.
```
from lets_plot.geo_data import *
ports_of_embarkation = ['Southampton', 'Cherbourg', 'Cobh']
```
#### 1. Using the `regions` function.
To geocode our port cities we can try to call the `regions` function like this:
regions(level='city', request=ports_of_embarkation)
or its equivalent:
regions_city(request=ports_of_embarkation)
Unfortunately, this call results in a `ValueError`:
>Multiple objects (6) were found for Southampton:
>- Southampton (United Kingdom, England, South East)
>- Southampton (United States of America, New York, Suffolk County)
>- Southampton (United States of America, Massachusetts)
>- Southampton Township (United States of America, New Jersey, Burlington County)
>- Lower Southampton Township (United States of America, Pennsylvania, Bucks County)
>- Upper Southampton Township (United States of America, Pennsylvania, Bucks County)
>Multiple objects (2) were found for Cherbourg:
>- Saint-Jean-de-Cherbourg (Canada, Québec, Bas-Saint-Laurent, La Matanie)
>- Cherbourg-en-Cotentin (France, France métropolitaine, Normandie, Manche)
```
#
# This call will fail with an error shown above.
#
#regions_city(ports_of_embarkation)
```
#### 2. Resolving geocoding ambiguity using the `within` parameter.
We can try to resolve ambiguity of the name "Southampton" (found in the United Kingdom and in the US)
and the name "Cherbourg" (found in Canada and France) by narrowing the scope of the search using the
`within` parameter and the `regions_country` function like this:
regions_city(ports_of_embarkation, within=regions_country(['France', 'UK']))
But this call results in another `ValueError`:
>No objects were found for Cobh.
```
#
# This call will fail with "No objects were found for Cobh." error.
#
#regions_city(ports_of_embarkation, within=regions_country(['France', 'UK']))
```
An alternative way of using the `within` parameter is to specify
an array of country names, one per geocoded city.
The territory names must be in the same order
as the names of the geocoded cities:
```
regions_city(ports_of_embarkation, within=['UK', 'France', 'Ireland'])
```
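Conceptually, the `within` parameter restricts which candidate matches a bare name may resolve to. A toy, self-contained sketch of that idea (the gazetteer, coordinates, and `geocode` function below are illustrative stand-ins, not the real geocoding service):

```python
# Toy gazetteer: a bare name can match several places; a scope picks one.
gazetteer = {
    ('Southampton', 'England'): (-1.40, 50.90),
    ('Southampton', 'New York'): (-72.39, 40.88),
    ('Cherbourg', 'France'): (-1.62, 49.63),
}

def geocode(name, within):
    matches = [coords for (n, scope), coords in gazetteer.items()
               if n == name and scope == within]
    # Like the real service, anything other than exactly one match is an error.
    assert len(matches) == 1, f'ambiguous or unknown: {name}'
    return matches[0]

assert geocode('Southampton', within='England') == (-1.40, 50.90)
```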
#### 3. Using `regions_builder` for advanced geocoding.
There are many situations where a simple call of the function `regions`
will not resolve all geocoding ambiguities.
In other cases, we might want to retrieve all objects matching a name
and not treat name ambiguity as an error.
The `regions_builder` object provides advanced capabilities for fine-tuning geocoding queries.
Let's resolve the ambiguity of the names "Southampton" and "Cherbourg" with the help of `regions_builder`.
```
ports_of_embarkation_geocoded = regions_builder(level='city', request=ports_of_embarkation) \
.where('Cherbourg', within='France') \
.where('Southampton', within='England') \
.build()
ports_of_embarkation_geocoded
```
### Markers on the interactive basemap.
The `Lets-Plot` API makes it easy to create an interactive basemap layer using its own vector tiles service or
by configuring 3rd party ZXY raster tile providers.
In this notebook we will use raster tiles provided by [Wikimedia Foundation](https://foundation.wikimedia.org/wiki/Maps_Terms_of_Use).
Simple markers (points) can be added to the map either via the `geom_point` layer
or directly on the `livemap` base-layer.
In this demo we will add the ports of embarkation markers right to the `livemap` base-layer (using the `map` parameter)
and, later, add the other markers and shapes via additional `geom` layers.
```
LetsPlot.set(maptiles_zxy(url='https://maps.wikimedia.org/osm-intl/{z}/{x}/{y}@2x.png'))
basemap = ggplot() + ggsize(800, 300) \
+ geom_livemap(map=ports_of_embarkation_geocoded,
size=7,
shape=21, color='black', fill='yellow')
basemap
```
### The 'Titanic's site' marker
```
from shapely.geometry import Point, LineString
titanic_site = Point(-38.056641, 46.920255)
# Add the marker using the `geom_point` geometry layer.
titanic_site_marker = geom_point(x=titanic_site.x, y = titanic_site.y, size=10, shape=9, color='red')
basemap + titanic_site_marker
```
### Connecting the markers on the map.
The `ports_of_embarkation_geocoded` variable in this demo is an object of type `Regions`.
A `Regions` object can, if necessary, be transformed into a `GeoDataFrame`
by calling its `centroids()`, `boundaries()` or `limits()` method.
To create the Titanic's path we will use the `centroids()` method to obtain the points of embarkation and then append
the "Titanic's site" point to complete the polyline.
```
from geopandas import GeoSeries
from geopandas import GeoDataFrame
# The points of embarkation
embarkation_points = ports_of_embarkation_geocoded.centroids().geometry
titanic_journey_points = embarkation_points.append(GeoSeries(titanic_site), ignore_index=True)
# New GeoDataFrame containing a `LineString` geometry.
titanic_journey_gdf = GeoDataFrame(dict(geometry=[LineString(titanic_journey_points)]))
# Add the polyline using the `geom_path` layer.
titanic_path = geom_path(map=titanic_journey_gdf, color='dark-blue', linetype='dotted', size=1.2)
basemap + titanic_path + titanic_site_marker
```
### The last segment that the Titanic didn't make.
```
# Geocoding New York City is a trivial task.
NYC = regions_city(['New York']).centroids().geometry[0]
map_layers = titanic_path \
+ geom_segment(x=titanic_site.x, y=titanic_site.y,
xend=NYC.x, yend=NYC.y,
color='white', linetype='dotted', size=1.2) \
+ geom_point(x=NYC.x, y=NYC.y, size=7, shape=21, color='black', fill='white') \
+ titanic_site_marker
basemap + map_layers
```
### The Titanic survival rates by the port of embarkation.
```
import pandas as pd
df = pd.read_csv("../data/titanic.csv")
df.head()
```
In this Titanic dataset the column `Embarked` contains single-letter codes for the Titanic's ports of embarkation:
- S: Southampton (UK)
- C: Cherbourg (France)
- Q: Cobh (Ireland)
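The grouped counts behind the dodged bar chart can be computed directly with a `Counter`. The rows below are a few made-up `(Embarked, Survived)` pairs for illustration; the notebook itself computes this over the full `train_clean.csv` dataframe:

```python
from collections import Counter

# Illustrative (Embarked, Survived) pairs, not the real dataset.
rows = [('S', 0), ('S', 1), ('S', 0), ('C', 1), ('Q', 0), ('C', 1), ('Q', 1)]

counts = Counter(rows)
# Counts per (port, survived) pair, i.e. the heights of the dodged bars.
assert counts[('S', 0)] == 2
assert counts[('C', 1)] == 2
assert counts[('Q', 0)] == 1
```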
Let's visualize the `Survived` counts by port of embarkation:
```
from lets_plot.mapping import as_discrete
bars = ggplot(df) \
+ geom_bar(aes('Embarked', fill=as_discrete('Survived')), position='dodge') \
+ scale_fill_discrete(labels=['No', 'Yes']) \
+ scale_x_discrete(labels=['Southampton', 'Cherbourg', 'Cobh'], limits=['S', 'C', 'Q'])
bars + ggsize(800, 250)
```
### The final figure.
```
bars_settings = theme(axis_title='blank',
axis_line='blank',
axis_ticks_y='blank',
axis_text_y='blank',
legend_position=[1.12, 1.07],
legend_justification=[1, 1]) + scale_x_discrete(expand=[0, 0.05])
map = ggplot() + ggsize(800, 300) \
+ geom_livemap(map=ports_of_embarkation_geocoded.centroids(),
size=8,
shape=21, color='black', fill='yellow',
zoom=4, location=[-12, 48])
fig = GGBunch()
fig.add_plot(map + map_layers, 0, 0)
fig.add_plot(bars + bars_settings, 535, 135, 250, 150)
fig
```
```
import os
import json
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib import cm
from tensor2tensor import problems
from tensor2tensor import models
from tensor2tensor.bin import t2t_decoder # To register the hparams set
from tensor2tensor.utils import registry
from tensor2tensor.utils import trainer_lib
from tensor2tensor.data_generators import babi_qa
```
## HParams
```
# HParams
babi_task_id = 'qa3'
subset = "1k"
problem_name = 'babi_qa_sentence_task' + babi_task_id.replace("qa", "") + "_" + subset
model_name = "babi_r_transformer"
hparams_set = "r_transformer_act_step_position_timing_tiny"
data_dir = '~/babi/data/' + problem_name
# PUT THE MODEL YOU WANT TO LOAD HERE!
CHECKPOINT = '~/babi/output/' + problem_name+ '/' + model_name + '/' + hparams_set + '/'
print(CHECKPOINT)
_TASKS = {
'qa1': 'qa1_single-supporting-fact',
'qa2': 'qa2_two-supporting-facts',
'qa3': 'qa3_three-supporting-facts',
'qa4': 'qa4_two-arg-relations',
'qa5': 'qa5_three-arg-relations',
'qa6': 'qa6_yes-no-questions',
'qa7': 'qa7_counting',
'qa8': 'qa8_lists-sets',
'qa9': 'qa9_simple-negation',
'qa10': 'qa10_indefinite-knowledge',
'qa11': 'qa11_basic-coreference',
'qa12': 'qa12_conjunction',
'qa13': 'qa13_compound-coreference',
'qa14': 'qa14_time-reasoning',
'qa15': 'qa15_basic-deduction',
'qa16': 'qa16_basic-induction',
'qa17': 'qa17_positional-reasoning',
'qa18': 'qa18_size-reasoning',
'qa19': 'qa19_path-finding',
'qa20': 'qa20_agents-motivations'
}
meta_data_filename = _TASKS[babi_task_id] + '-meta_data.json'
metadata_path = os.path.join(data_dir, meta_data_filename)
FLAGS = tf.flags.FLAGS
FLAGS.data_dir = data_dir
truncated_story_length = 130 if babi_task_id == 'qa3' else 70
with tf.gfile.GFile(metadata_path, mode='r') as f:
metadata = json.load(f)
max_story_length = metadata['max_story_length']
max_sentence_length = metadata['max_sentence_length']
max_question_length = metadata['max_question_length']
print(max_story_length)
print(max_sentence_length)
print(max_question_length)
tf.reset_default_graph()
class bAbiACTVisualizer(object):
"""Helper object for creating act visualizations."""
def __init__(
self, hparams_set, model_name, data_dir, problem_name, beam_size=1):
story, question, targets, samples, ponder_time = build_model(
hparams_set, model_name, data_dir, problem_name, beam_size=beam_size)
# Fetch the problem
babi_problem = problems.problem(problem_name)
encoders = babi_problem.feature_encoders(data_dir)
self.story = story
self.question = question
self.targets = targets
self.ponder_time = ponder_time
self.samples = samples
self.encoders = encoders
def encode(self, story_str, question_str):
"""Input str to features dict, ready for inference."""
story_str = babi_qa._normalize_string(story_str)
question_str = babi_qa._normalize_string(question_str)
story = story_str.strip().split('.')
story = [self.encoders[babi_qa.FeatureNames.STORY].encode(sentence)
for sentence in story[-truncated_story_length:]]
question = self.encoders[babi_qa.FeatureNames.QUESTION].encode(question_str)
for sentence in story:
for _ in range(max_sentence_length - len(sentence)):
sentence.append(babi_qa.PAD)
assert len(sentence) == max_sentence_length
for _ in range(max_story_length - len(story)):
story.append([babi_qa.PAD for _ in range(max_sentence_length)])
for _ in range(max_question_length - len(question)):
question.append(babi_qa.PAD)
assert len(story) == max_story_length
assert len(question) == max_question_length
story_flat = [token_id for sentence in story for token_id in sentence]
batch_story = np.reshape(np.array(story_flat),
[1, max_story_length, max_sentence_length, 1])
batch_question = np.reshape(np.array(question),
[1, 1, max_question_length, 1])
return batch_story, batch_question
def decode_story(self, integers):
"""List of ints to str."""
integers = np.squeeze(integers).tolist()
story = []
for sent in integers:
sent_decoded = self.encoders[babi_qa.FeatureNames.STORY].decode_list(sent)
sent_decoded.append('.')
story.append(sent_decoded)
return story
def decode_question(self, integers):
"""List of ints to str."""
integers = np.squeeze(integers).tolist()
return self.encoders[babi_qa.FeatureNames.QUESTION].decode_list(integers)
def decode_targets(self, integers):
"""List of ints to str."""
integers = np.squeeze(integers).tolist()
return self.encoders["targets"].decode([integers])
def get_vis_data_from_string(self, sess, story_str, question_str):
"""Constructs the data needed for visualizing ponder_time.
Args:
sess: A tf.Session object.
story_str: The input story to be visualized.
question_str: The input question to be visualized.
Returns:
Tuple of (
output_string: The answer
input_list: Tokenized input sentence.
output_list: Tokenized answer.
ponder_time: ponder_time matrices;
)
"""
encoded_story, encoded_question = self.encode(story_str, question_str)
# Run inference graph to get the label.
out = sess.run(self.samples, {
self.story: encoded_story,
self.question: encoded_question,
})
# Run the decoded answer through the training graph to get the
# ponder_time tensors.
ponder_time = sess.run(self.ponder_time, {
self.story: encoded_story,
self.question: encoded_question,
self.targets: np.reshape(out, [1, -1, 1, 1]),
})
output = self.decode_targets(out)
story_list = self.decode_story(encoded_story)
question_list = self.decode_question(encoded_question)
return story_list, question_list, output, ponder_time
def build_model(hparams_set, model_name, data_dir, problem_name, beam_size=1):
"""Build the graph required to fetch the ponder_times.
Args:
hparams_set: HParams set to build the model with.
model_name: Name of model.
data_dir: Path to directory containing training data.
problem_name: Name of problem.
beam_size: (Optional) Number of beams to use when decoding.
If set to 1 (default) then greedy decoding is used.
Returns:
Tuple of (
inputs: Input placeholder to feed in ids.
targets: Targets placeholder to feed when fetching the
ponder_time.
samples: Tensor representing the ids of the decoded answer.
ponder_time: Tensors representing the ponder_time.
)
"""
hparams = trainer_lib.create_hparams(
hparams_set, data_dir=data_dir, problem_name=problem_name)
babi_model = registry.model(model_name)(
hparams, tf.estimator.ModeKeys.EVAL)
story = tf.placeholder(tf.int32, shape=(
1, max_story_length, max_sentence_length, 1),
name=babi_qa.FeatureNames.STORY)
question = tf.placeholder(tf.int32, shape=(
1, 1, max_question_length, 1),
name=babi_qa.FeatureNames.QUESTION)
targets = tf.placeholder(tf.int32, shape=(1, 1, 1, 1), name='targets')
babi_model({
babi_qa.FeatureNames.STORY: story,
babi_qa.FeatureNames.QUESTION: question,
'targets': targets,
})
# Must be called after building the training graph, so that the dict will
# have been filled with the ponder_time tensors. BUT before creating the
# inference graph; otherwise the dict will be filled with tensors from
# inside a tf.while_loop from decoding and are marked unfetchable.
ponder_time = get_ponder_mats(babi_model)
with tf.variable_scope(tf.get_variable_scope(), reuse=True):
samples = babi_model.infer({
babi_qa.FeatureNames.STORY: story,
babi_qa.FeatureNames.QUESTION: question,
}, beam_size=beam_size)['outputs']
return story, question, targets, samples, ponder_time
def get_ponder_mats(babi_model):
"""Gets the tensors representing the ponder_time from a built model.
The ponder_time are stored in a dict on the Transformer object while building
the graph.
Args:
babi_model: Transformer object to fetch the ponder_time from.
Returns:
Tuple of ponder_time matrices
"""
# print([n.name for n in tf.get_default_graph().as_graph_def().node])
attention_tensor_name = "babi_r_transformer/parallel_0_5/babi_r_transformer/body/encoder/r_transformer_act/while/self_attention/multihead_attention/dot_product_attention/attention_weights"
ponder_time_tensor_name = "babi_r_transformer/parallel_0_5/babi_r_transformer/body/enc_ponder_times:0"
ponder_time = tf.get_default_graph().get_tensor_by_name(ponder_time_tensor_name)
return ponder_time
ponder_visualizer = bAbiACTVisualizer(hparams_set, model_name, data_dir, problem_name, beam_size=1)
tf.Variable(0, dtype=tf.int64, trainable=False, name='global_step')
sess = tf.train.MonitoredTrainingSession(
checkpoint_dir=CHECKPOINT,
save_summaries_secs=0,
)
if babi_task_id == 'qa1':
# input_story = "John travelled to the hallway.Mary journeyed to the bathroom."
# input_question = "Where is John?" #hallway
input_story = "John travelled to the hallway.Mary journeyed to the bathroom.Daniel went back to the bathroom.John moved to the bedroom."
input_question = "Where is Mary?" #bathroom
elif babi_task_id == 'qa2':
input_story = "Mary got the milk there.John moved to the bedroom.Sandra went back to the kitchen.Mary travelled to the hallway."
input_question = "Where is the milk?" #hallway
# input_story = "Mary got the milk there.John moved to the bedroom.Sandra went back to the kitchen.Mary travelled to the hallway.John got the football there.John went to the hallway."
# input_question = "Where is the football?" #hallway
elif babi_task_id == 'qa3':
input_story = "Mary got the milk.John moved to the bedroom.Daniel journeyed to the office.John grabbed the apple there.John got the football.John journeyed to the garden.Mary left the milk.John left the football.Daniel moved to the garden.Daniel grabbed the football.Mary moved to the hallway.Mary went to the kitchen.John put down the apple there.John picked up the apple.Sandra moved to the hallway.Daniel left the football there.Daniel took the football.John travelled to the kitchen.Daniel dropped the football.John dropped the apple.John grabbed the apple.John went to the office.Sandra went back to the bedroom.Sandra took the milk.John journeyed to the bathroom.John travelled to the office.Sandra left the milk.Mary went to the bedroom.Mary moved to the office.John travelled to the hallway.Sandra moved to the garden.Mary moved to the kitchen.Daniel took the football.Mary journeyed to the bedroom.Mary grabbed the milk there.Mary discarded the milk.John went to the garden.John discarded the apple there."
input_question = "Where was the apple before the bathroom?" #office
# input_story = "Mary got the milk.John moved to the bedroom.Daniel journeyed to the office.John grabbed the apple there.John got the football.John journeyed to the garden.Mary left the milk.John left the football.Daniel moved to the garden.Daniel grabbed the football.Mary moved to the hallway.Mary went to the kitchen.John put down the apple there.John picked up the apple.Sandra moved to the hallway.Daniel left the football there.Daniel took the football.John travelled to the kitchen.Daniel dropped the football.John dropped the apple.John grabbed the apple.John went to the office.Sandra went back to the bedroom.Sandra took the milk.John journeyed to the bathroom.John travelled to the office.Sandra left the milk.Mary went to the bedroom.Mary moved to the office.John travelled to the hallway.Sandra moved to the garden.Mary moved to the kitchen.Daniel took the football.Mary journeyed to the bedroom.Mary grabbed the milk there.Mary discarded the milk.John went to the garden.John discarded the apple there.Sandra travelled to the bedroom.Daniel moved to the bathroom."
# input_question = "Where was the apple before the hallway?" #office
story_text, question_text, output, ponder_time = ponder_visualizer.get_vis_data_from_string(sess, input_story, input_question)
# print(output)
# print(story_text)
# print(question_text)
inp_text = []
for sent in story_text:
inp_text.append(' '.join(sent))
inp_text.append(' '.join(question_text))
ponder_time = np.squeeze(np.array(ponder_time)).tolist()
# print(ponder_time)
def pad_remover(inp_text, ponder_time):
    pad_sent_index = [i for i, sent in enumerate(inp_text) if sent.startswith('<pad>')]
    if not pad_sent_index:  # nothing to strip when the story contains no padding sentences
        return inp_text, ponder_time
    start = min(pad_sent_index)
    end = max(pad_sent_index)
    filtered_inp_text = inp_text[:start] + inp_text[end+1:]
    filtered_inp_text = [sent.replace('<pad> ', '') for sent in filtered_inp_text]
    filtered_ponder_time = ponder_time[:start] + ponder_time[end+1:]
    return filtered_inp_text, filtered_ponder_time
filtered_inp_text, filtered_ponder_time = pad_remover(inp_text, ponder_time)
for sent in filtered_inp_text:
print(sent)
print(output)
print(filtered_ponder_time)
df = pd.DataFrame(
{'input': filtered_inp_text,
'ponder_time': filtered_ponder_time,
})
f_size = (10,5)
if babi_task_id == 'qa2':
f_size = (15,5)
if babi_task_id == 'qa3':
f_size = (25,5)
df.plot(kind='bar', x='input', y='ponder_time', rot=90, width=0.3, figsize=f_size, cmap='Spectral')
```
| github_jupyter |
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l06c03_exercise_flowers_with_transfer_learning_solution.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l06c03_exercise_flowers_with_transfer_learning_solution.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
# TensorFlow Hub
[TensorFlow Hub](http://tensorflow.org/hub) is an online repository of already trained TensorFlow models that you can use.
These models can either be used as is, or they can be used for Transfer Learning.
Transfer learning is a process where you take an existing trained model, and extend it to do additional work. This involves leaving the bulk of the model unchanged, while adding and retraining the final layers, in order to get a different set of possible outputs.
Here, you can see all the models available in [TensorFlow Module Hub](https://tfhub.dev/).
Before starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from the menu above.
# Imports
Some normal imports we've seen before. The new one is importing `tensorflow_hub`, which this Colab will make heavy use of.
```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow.keras import layers
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
```
# TODO: Download the Flowers Dataset using TensorFlow Datasets
In the cell below you will download the Flowers dataset using TensorFlow Datasets. If you look at the [TensorFlow Datasets documentation](https://www.tensorflow.org/datasets/datasets#tf_flowers) you will see that the name of the Flowers dataset is `tf_flowers`. You can also see that this dataset is only split into a TRAINING set. You will therefore have to use `tfds.splits` to split this training set into a `training_set` and a `validation_set`. Do a `[70, 30]` split such that 70 corresponds to the `training_set` and 30 to the `validation_set`. Then load the `tf_flowers` dataset using `tfds.load`. Make sure the `tfds.load` function uses all the parameters you need, and also make sure it returns the dataset info, so we can retrieve information about the datasets.
```
(training_set, validation_set), dataset_info = tfds.load(
'tf_flowers',
split=['train[:70%]', 'train[70%:]'],
with_info=True,
as_supervised=True,
)
```
# TODO: Print Information about the Flowers Dataset
Now that you have downloaded the dataset, use the dataset info to print the number of classes in the dataset, and also write some code that counts how many images we have in the training and validation sets.
```
num_classes = dataset_info.features['label'].num_classes
num_training_examples = 0
num_validation_examples = 0
for example in training_set:
num_training_examples += 1
for example in validation_set:
num_validation_examples += 1
print('Total Number of Classes: {}'.format(num_classes))
print('Total Number of Training Images: {}'.format(num_training_examples))
print('Total Number of Validation Images: {} \n'.format(num_validation_examples))
```
The images in the Flowers dataset are not all the same size.
```
for i, example in enumerate(training_set.take(5)):
print('Image {} shape: {} label: {}'.format(i+1, example[0].shape, example[1]))
```
# TODO: Reformat Images and Create Batches
In the cell below create a function that reformats all images to the resolution expected by MobileNet v2 (224, 224) and normalizes them. The function should take in an `image` and a `label` as arguments and should return the new `image` and corresponding `label`. Then create training and validation batches of size `32`.
```
IMAGE_RES = 224
def format_image(image, label):
image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0
return image, label
BATCH_SIZE = 32
train_batches = training_set.shuffle(num_training_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_set.map(format_image).batch(BATCH_SIZE).prefetch(1)
```
# Do Simple Transfer Learning with TensorFlow Hub
Let's now use TensorFlow Hub to do Transfer Learning. Remember, in transfer learning we reuse parts of an already trained model and change the final layer, or several layers, of the model, and then retrain those layers on our own dataset.
### TODO: Create a Feature Extractor
In the cell below create a `feature_extractor` using MobileNet v2. Remember that the partial model from TensorFlow Hub (without the final classification layer) is called a feature vector. Go to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) to see a list of available feature vectors. Click on the `tf2-preview/mobilenet_v2/feature_vector`. Read the documentation and get the corresponding `URL` to get the MobileNet v2 feature vector. Finally, create a `feature_extractor` by using `hub.KerasLayer` with the correct `input_shape` parameter.
```
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
input_shape=(IMAGE_RES, IMAGE_RES, 3))
```
### TODO: Freeze the Pre-Trained Model
In the cell below freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
```
feature_extractor.trainable = False
```
### TODO: Attach a classification head
In the cell below create a `tf.keras.Sequential` model, and add the pre-trained model and the new classification layer. Remember that the classification layer must have the same number of classes as our Flowers dataset. Finally print a summary of the Sequential model.
```
model = tf.keras.Sequential([
feature_extractor,
layers.Dense(num_classes)
])
model.summary()
```
### TODO: Train the model
In the cell below, train this model like any other: first call `compile`, then `fit`. Make sure you use the proper parameters for both methods. Train the model for only 6 epochs.
```
model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 6
history = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
```
You can see we get ~88% validation accuracy with only 6 epochs of training, which is absolutely awesome. This is a huge improvement over the model we created in the previous lesson, where we were able to get ~76% accuracy with 80 epochs of training. The reason for this difference is that MobileNet v2 was carefully designed over a long time by experts, then trained on a massive dataset (ImageNet).
# TODO: Plot Training and Validation Graphs
In the cell below, plot the training and validation accuracy/loss graphs.
```
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(EPOCHS)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
```
What is a bit curious here is that validation performance is better than training performance, right from the start through the end of training.
One reason for this is that validation performance is measured at the end of the epoch, while training performance is averaged across the epoch.
The bigger reason, though, is that we're reusing a large part of MobileNet, which was already trained on a large and varied dataset that includes flower images.
# TODO: Check Predictions
In the cell below get the label names from the dataset info and convert them into a NumPy array. Print the array to make sure you have the correct label names.
```
class_names = np.array(dataset_info.features['label'].names)
print(class_names)
```
### TODO: Create an Image Batch and Make Predictions
In the cell below, use the `next()` function to create an `image_batch` and its corresponding `label_batch`. Convert both the `image_batch` and `label_batch` to numpy arrays using the `.numpy()` method. Then use the `.predict()` method to run the image batch through your model and make predictions. Then use the `np.argmax()` function to get the indices of the best prediction for each image. Finally convert the indices of the best predictions to class names.
```
image_batch, label_batch = next(iter(train_batches))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
print(predicted_class_names)
```
### TODO: Print True Labels and Predicted Indices
In the cell below, print the true labels and the indices of predicted labels.
```
print("Labels: ", label_batch)
print("Predicted labels: ", predicted_ids)
```
# Plot Model Predictions
```
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.subplots_adjust(hspace = 0.3)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
```
# TODO: Perform Transfer Learning with the Inception Model
Go to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) and click on `tf2-preview/inception_v3/feature_vector`. This feature vector corresponds to the Inception v3 model. In the cells below, use transfer learning to create a CNN that uses Inception v3 as the pretrained model to classify the images from the Flowers dataset. Note that Inception takes as input images that are 299 x 299 pixels. Compare the accuracy you get with Inception v3 to the accuracy you got with MobileNet v2.
```
IMAGE_RES = 299
(training_set, validation_set), dataset_info = tfds.load(
'tf_flowers',
with_info=True,
as_supervised=True,
split=['train[:70%]', 'train[70%:]'],
)
train_batches = training_set.shuffle(num_training_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_set.map(format_image).batch(BATCH_SIZE).prefetch(1)
URL = "https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
input_shape=(IMAGE_RES, IMAGE_RES, 3),
trainable=False)
model_inception = tf.keras.Sequential([
feature_extractor,
tf.keras.layers.Dense(num_classes)
])
model_inception.summary()
model_inception.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 6
history = model_inception.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
```
# T & T Lab 8 - 27th Jan
## Manish Ranjan Behera - 1828249
### WAP TO PRINT THIS PATTERN AND TAKE THE NO OF LINES AS INPUT FROM USER

```
n=int(input("Enter Size:"))
for i in range(n,0,-1):
if i==n:
print("*"*((2*n)-1))
else:
print("*"*i+' '*((n-i)*2-1)+"*"*i)
```
### WAP to find whether a number is perfect number or not using Function
**Perfect number, a positive integer that is equal to the sum of its proper divisors. The smallest perfect number is 6, which is the sum of 1, 2, and 3. Other perfect numbers are 28, 496, and 8,128.**
```
def perfectNumber(n):
s=0
for i in range(1,n):
if n%i==0:
s=s+i
if s==n:
print(f'{n} is a perfect number')
else:
print(f'{n} is not a perfect number')
n=int(input('Enter a Number:'))
perfectNumber(n)
```
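The linear scan above is fine for small inputs; for larger ones, the proper-divisor sum can be computed in O(√n) by pairing each divisor `d` with `n // d`. A sketch of that alternative (not part of the lab solution):

```python
def divisor_sum(n):
    """Sum of proper divisors of n, pairing d with n // d up to sqrt(n)."""
    if n <= 1:
        return 0
    total = 1  # 1 divides every n > 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:  # avoid double-counting when d is the square root
                total += n // d
        d += 1
    return total

def is_perfect(n):
    return n > 1 and divisor_sum(n) == n

print(is_perfect(28))  # True
```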
### WAP to find whether a number is Armstrong Number or Not using Function
```
def armstrong(n):
    d = n
    p = len(str(n))  # number of digits
    s = 0
    while d > 0:
        r = d % 10
        d = d // 10
        s = s + (r ** p)
    if s == n:
        print(f'{n} is an Armstrong Number')
    else:
        print(f'{n} is NOT an Armstrong Number')
n=int(input('Enter a Number:'))
armstrong(n)
```
### WAP to convert Fahrenheit to Celsius
```
def f2c(f):
c=(f-32)*(5/9)
return c
f=int(input("Enter temperature in degree Fahrenheit:"))
print(f'{f} degrees Fahrenheit is equal to {f2c(f)} degrees Celsius')
```
### WAP to find total surface area of a Cuboid
```
def totalSurfaceArea(l,w,h):
tsa=2*(l*w+l*h+w*h)
return tsa
l=float(input('Enter Length of the Cuboid:'))
w=float(input('Enter Width of the Cuboid:'))
h=float(input('Enter Height of the Cuboid:'))
print("Total surface area of cuboid is {a:1.2f} square Units".format(a=totalSurfaceArea(l,w,h)))
```
### WAP to print the following Patterns by taking size as user Input:
a)
```
def pattern(n):
for i in range(1,n+1):
print(" "*(n-i),end='')
for j in range(i):
print(2*i-1,end=" ")
print("")
n=int(input("Enter Number of lines:"))
pattern(n)
```
b)
```
def pattern(n):
for i in range(2*n,0,-1):
if i==2*n or i==1:
print("*"*((2*n)-1))
elif i<=n:
print("*"*(n-i+1)+' '*((n-(n-i)-1)*2-1)+"*"*(n-i+1))
else:
print("*"*(i-n)+' '*((n-(i-n))*2-1)+"*"*(i-n))
n=int(input("Enter Size:"))
pattern(n)
```
c)
```
def alphaPattern(n):
alpha='ABCDEFGHIJKLMNOPQRSTUVWXYZ'
for i in range(n):
print(" "*(n-i)+alpha[0:2*i+1])
n=int(input("Enter Number of lines:"))
alphaPattern(n)
```
### WAP to convert Decimal to Binary
```
# //: divide with integral result (discard remainder)
def decimal2binary(num):
binary=''
if num >= 1:
binary=decimal2binary(num // 2)
binary=binary+str(num%2)
return binary
num=int(input("Enter a Decimal Value:"))
print(f'Binary form of {num} is {decimal2binary(num)}')
```
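Note that the recursion's base case contributes a leading `'0'` (the function returns `'0101'` for 5, for example). One way to sanity-check the converter against Python's built-in `bin()`, stripping that leading zero first:

```python
def decimal2binary(num):
    binary = ''
    if num >= 1:
        binary = decimal2binary(num // 2)  # recurse on the quotient
    return binary + str(num % 2)           # append the lowest bit

# The base case contributes a leading '0', so strip zeros before comparing
for n in range(100):
    assert decimal2binary(n).lstrip('0') == bin(n)[2:].lstrip('0')
print('all conversions match bin()')
```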
### WAP to find the ASCII values of a given input string
```
def str2ascii(string):
asciiList=[ord(c) for c in string]
return asciiList
string=input("Enter a String Value:")
print(f"ASCII values of the string are {str2ascii(string)}")
```
### WAP to find the LCM of the Numbers
```
# LCM = a*b/GCD(a,b)
def gcd(a,b):
if a > b:
smaller = b
else:
smaller = a
for i in range(1, smaller+1):
if((a % i == 0) and (b % i == 0)):
hcf = i
return hcf
def lcm(a, b):
l=(a*b)//gcd(a,b)
return l
n=int(input("Enter how many numbers to find the LCM of:"))
num=[int(input('')) for x in range(n)]
result=lcm(num[0],num[1])
for i in range(2,len(num)):
result=lcm(result,num[i])
print("The L.C.M. of",end=' ')
for x in num:
print(x,end=", ")
print("is",result)
```
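The `gcd` above scans linearly up to `min(a, b)`; Euclid's algorithm is much faster, and `functools.reduce` folds the pairwise `lcm` over an arbitrarily long list. A sketch of that alternative:

```python
from functools import reduce

def gcd(a, b):
    # Euclid's algorithm: replace (a, b) with (b, a mod b) until b is 0
    while b:
        a, b = b, a % b
    return a

def lcm(a, b):
    return a * b // gcd(a, b)

def lcm_of_list(nums):
    # Fold the pairwise lcm over the list: lcm(lcm(n0, n1), n2), ...
    return reduce(lcm, nums)

print(lcm_of_list([4, 6, 10]))  # 60
```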
# Feature Engineering and Labeling
We'll use the price-volume data and generate features that we can feed into a model. We'll use this notebook for all the coding exercises of this lesson, so please open this notebook in a separate tab of your browser.
Please run the following code up to and including "Make Factors." Then continue on with the lesson.
```
import sys
!{sys.executable} -m pip install --quiet -r requirements.txt
import numpy as np
import pandas as pd
import time
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (14, 8)
```
#### Registering data
```
import os
import project_helper
from zipline.data import bundles
os.environ['ZIPLINE_ROOT'] = os.path.join(os.getcwd(), '..', '..', 'data', 'project_4_eod')
ingest_func = bundles.csvdir.csvdir_equities(['daily'], project_helper.EOD_BUNDLE_NAME)
bundles.register(project_helper.EOD_BUNDLE_NAME, ingest_func)
print('Data Registered')
from zipline.pipeline import Pipeline
from zipline.pipeline.factors import AverageDollarVolume
from zipline.utils.calendars import get_calendar
universe = AverageDollarVolume(window_length=120).top(500)
trading_calendar = get_calendar('NYSE')
bundle_data = bundles.load(project_helper.EOD_BUNDLE_NAME)
engine = project_helper.build_pipeline_engine(bundle_data, trading_calendar)
universe_end_date = pd.Timestamp('2016-01-05', tz='UTC')
universe_tickers = engine\
.run_pipeline(
Pipeline(screen=universe),
universe_end_date,
universe_end_date)\
.index.get_level_values(1)\
.values.tolist()
from zipline.data.data_portal import DataPortal
data_portal = DataPortal(
bundle_data.asset_finder,
trading_calendar=trading_calendar,
first_trading_day=bundle_data.equity_daily_bar_reader.first_trading_day,
equity_minute_reader=None,
equity_daily_reader=bundle_data.equity_daily_bar_reader,
adjustment_reader=bundle_data.adjustment_reader)
def get_pricing(data_portal, trading_calendar, assets, start_date, end_date, field='close'):
end_dt = pd.Timestamp(end_date.strftime('%Y-%m-%d'), tz='UTC', offset='C')
start_dt = pd.Timestamp(start_date.strftime('%Y-%m-%d'), tz='UTC', offset='C')
end_loc = trading_calendar.closes.index.get_loc(end_dt)
start_loc = trading_calendar.closes.index.get_loc(start_dt)
return data_portal.get_history_window(
assets=assets,
end_dt=end_dt,
bar_count=end_loc - start_loc,
frequency='1d',
field=field,
data_frequency='daily')
```
# Make Factors
- We'll use the same factors we have been using in the lessons about alpha factor research. Factors can be features that we feed into the model.
```
from zipline.pipeline.factors import CustomFactor, DailyReturns, Returns, SimpleMovingAverage
from zipline.pipeline.data import USEquityPricing
factor_start_date = universe_end_date - pd.DateOffset(years=3, days=2)
sector = project_helper.Sector()
def momentum_1yr(window_length, universe, sector):
return Returns(window_length=window_length, mask=universe) \
.demean(groupby=sector) \
.rank() \
.zscore()
def mean_reversion_5day_sector_neutral(window_length, universe, sector):
return -Returns(window_length=window_length, mask=universe) \
.demean(groupby=sector) \
.rank() \
.zscore()
def mean_reversion_5day_sector_neutral_smoothed(window_length, universe, sector):
unsmoothed_factor = mean_reversion_5day_sector_neutral(window_length, universe, sector)
return SimpleMovingAverage(inputs=[unsmoothed_factor], window_length=window_length) \
.rank() \
.zscore()
class CTO(Returns):
"""
Computes the overnight return, per hypothesis from
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2554010
"""
inputs = [USEquityPricing.open, USEquityPricing.close]
def compute(self, today, assets, out, opens, closes):
"""
The opens and closes matrix is 2 rows x N assets, with the most recent at the bottom.
As such, opens[-1] is the most recent open, and closes[0] is the earlier close
"""
out[:] = (opens[-1] - closes[0]) / closes[0]
class TrailingOvernightReturns(Returns):
"""
Sum of trailing 1m O/N returns
"""
window_safe = True
def compute(self, today, asset_ids, out, cto):
out[:] = np.nansum(cto, axis=0)
def overnight_sentiment(cto_window_length, trail_overnight_returns_window_length, universe):
cto_out = CTO(mask=universe, window_length=cto_window_length)
return TrailingOvernightReturns(inputs=[cto_out], window_length=trail_overnight_returns_window_length) \
.rank() \
.zscore()
def overnight_sentiment_smoothed(cto_window_length, trail_overnight_returns_window_length, universe):
unsmoothed_factor = overnight_sentiment(cto_window_length, trail_overnight_returns_window_length, universe)
return SimpleMovingAverage(inputs=[unsmoothed_factor], window_length=trail_overnight_returns_window_length) \
.rank() \
.zscore()
universe = AverageDollarVolume(window_length=120).top(500)
sector = project_helper.Sector()
pipeline = Pipeline(screen=universe)
pipeline.add(
momentum_1yr(252, universe, sector),
'Momentum_1YR')
pipeline.add(
mean_reversion_5day_sector_neutral_smoothed(20, universe, sector),
'Mean_Reversion_Sector_Neutral_Smoothed')
pipeline.add(
overnight_sentiment_smoothed(2, 10, universe),
'Overnight_Sentiment_Smoothed')
all_factors = engine.run_pipeline(pipeline, factor_start_date, universe_end_date)
all_factors.head()
```
#### Stop here and continue with the lesson section titled "Features".
# Universal Quant Features
* stock volatility: zipline has a custom factor called AnnualizedVolatility. The [source code is here](https://github.com/quantopian/zipline/blob/master/zipline/pipeline/factors/basic.py) and also pasted below:
```
class AnnualizedVolatility(CustomFactor):
"""
Volatility. The degree of variation of a series over time as measured by
the standard deviation of daily returns.
https://en.wikipedia.org/wiki/Volatility_(finance)
**Default Inputs:** :data:`zipline.pipeline.factors.Returns(window_length=2)` # noqa
Parameters
----------
annualization_factor : float, optional
The number of time units per year. Default is 252, the number of NYSE
trading days in a normal year.
"""
inputs = [Returns(window_length=2)]
params = {'annualization_factor': 252.0}
window_length = 252
def compute(self, today, assets, out, returns, annualization_factor):
out[:] = nanstd(returns, axis=0) * (annualization_factor ** .5)
```
```
from zipline.pipeline.factors import AnnualizedVolatility
AnnualizedVolatility()
```
#### Quiz
We can see that the returns `window_length` is 2, because we're dealing with daily returns, which are calculated as the percent change from one day to the following day (2 days). The `AnnualizedVolatility` `window_length` is 252 by default, because it's the one-year volatility. Try to adjust the call to the constructor of `AnnualizedVolatility` so that this represents one-month volatility (still annualized, but calculated over a time window of 20 trading days)
#### Answer
```
# TODO
```
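One possible answer is to pass a shorter window to the constructor, e.g. `AnnualizedVolatility(window_length=20)` (the annualization factor stays 252). The arithmetic the factor performs can be checked in plain NumPy on a hypothetical returns window:

```python
import numpy as np

np.random.seed(0)
# Hypothetical daily returns: a 20-day window across 3 assets
# (stand-in for the window zipline would hand to compute())
returns = np.random.normal(0.0, 0.01, size=(20, 3))

annualization_factor = 252.0
# Same formula as AnnualizedVolatility.compute: daily std scaled by sqrt(252)
annualized_vol = np.nanstd(returns, axis=0) * annualization_factor ** 0.5
print(annualized_vol)  # one annualized volatility per asset
```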
#### Quiz: Create one-month and six-month annualized volatility.
Create `AnnualizedVolatility` objects for 20 day and 120 day (one month and six-month) time windows. Remember to set the `mask` parameter to the `universe` object created earlier (this filters the stocks to match the list in the `universe`). Convert these to ranks, and then convert the ranks to zscores.
```
# TODO
volatility_20d # ...
volatility_120d # ...
```
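A plausible shape for the answer is `AnnualizedVolatility(window_length=20, mask=universe).rank().zscore()`, and the same with `window_length=120` (the names are from the quiz; `rank()` and `zscore()` are zipline factor methods). The rank-then-zscore transform itself can be sketched with pandas on a toy cross-section:

```python
import pandas as pd

# Hypothetical raw factor values for five assets on one day
raw = pd.Series([0.30, 0.12, 0.45, 0.22, 0.18],
                index=['A', 'B', 'C', 'D', 'E'])

ranked = raw.rank()  # 1 = smallest raw value
zscored = (ranked - ranked.mean()) / ranked.std(ddof=0)  # population z-score
print(zscored)  # zero mean, unit variance across the cross-section
```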
#### Add to the pipeline
```
pipeline.add(volatility_20d, 'volatility_20d')
pipeline.add(volatility_120d, 'volatility_120d')
```
#### Quiz: Average Dollar Volume feature
We've been using [AverageDollarVolume](http://www.zipline.io/appendix.html#zipline.pipeline.factors.AverageDollarVolume) to choose the stock universe based on stocks that have the highest dollar volume. We can also use it as a feature that is input into a predictive model.
Use 20 day and 120 day `window_length` for average dollar volume. Then rank it and convert to a zscore.
```
"""already imported earlier, but shown here for reference"""
#from zipline.pipeline.factors import AverageDollarVolume
# TODO: 20-day and 120 day average dollar volume
adv_20d = # ...
adv_120d = # ...
```
#### Add average dollar volume features to pipeline
```
pipeline.add(adv_20d, 'adv_20d')
pipeline.add(adv_120d, 'adv_120d')
```
### Market Regime Features
We are going to try to capture market-wide regimes: Market-wide means we'll look at the aggregate movement of the universe of stocks.
High and low dispersion: dispersion is the standard deviation of the cross-section of all stock returns at each period of time (on each day). We'll inherit from [CustomFactor](http://www.zipline.io/appendix.html?highlight=customfactor#zipline.pipeline.CustomFactor) and feed in [DailyReturns](http://www.zipline.io/appendix.html?highlight=dailyreturns#zipline.pipeline.factors.DailyReturns) as the `inputs`.
#### Quiz
If the `inputs` to our market dispersion factor are the daily returns, and we plan to calculate the market dispersion on each day, what should be the `window_length` of the market dispersion class?
#### Answer
Since the inputs are daily returns and the dispersion is computed from each day's cross-section of stocks, the `window_length` should be 1.
#### Quiz: market dispersion feature
Create a class that inherits from `CustomFactor`. Override the `compute` function to calculate the population standard deviation of all the stocks over a specified window of time.
**mean returns**
$\mu = \frac{1}{NT}\sum_{t=0}^{T}\sum_{i=1}^{N}r_{i,t}$
**Market Dispersion**
$\sqrt{\frac{1}{T} \sum_{t=0}^{T} \frac{1}{N}\sum_{i=1}^{N}(r_{i,t} - \mu)^2}$
Use [numpy.nanmean](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.nanmean.html) to calculate the average market return $\mu$ and to calculate the average of the squared differences.
```
class MarketDispersion(CustomFactor):
inputs = [DailyReturns()]
window_length = # ...
window_safe = True
def compute(self, today, assets, out, returns):
# TODO: calculate average returns
mean_returns = # ...
#TODO: calculate standard deviation of returns
out[:] = # ...
```
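What the `compute` method should do can be sketched in plain NumPy. Here the returns are a hypothetical single-day row (one row, since the dispersion is cross-sectional and the window length is a single day):

```python
import numpy as np

# Hypothetical returns for one day across six assets; NaN for a missing value
returns = np.array([[0.01, -0.02, 0.005, np.nan, 0.015, -0.01]])

mean_returns = np.nanmean(returns)  # average cross-sectional return
# Population standard deviation of the cross-section, ignoring NaNs
dispersion = np.sqrt(np.nanmean((returns - mean_returns) ** 2))
print(dispersion)
```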
#### Quiz
Create the MarketDispersion object. Apply two separate smoothing operations using [SimpleMovingAverage](https://www.zipline.io/appendix.html?highlight=simplemovingaverage#zipline.pipeline.factors.SimpleMovingAverage). One with a one-month window, and another with a 6-month window. Add both to the pipeline.
```
# TODO: create MarketDispersion object
dispersion = # ...
# TODO: apply one-month simple moving average
dispersion_20d = # ...
# TODO: apply 6-month simple moving average
dispersion_120d = # ...
# Add to pipeline
pipeline.add(dispersion_20d, 'dispersion_20d')
pipeline.add(dispersion_120d, 'dispersion_120d')
```
#### Market volatility feature
* High and low volatility
We'll also build a class for market volatility, which inherits from [CustomFactor](http://www.zipline.io/appendix.html?highlight=customfactor#zipline.pipeline.CustomFactor). This will measure the standard deviation of the returns of the "market". In this case, we're approximating the "market" as the equal weighted average return of all the stocks in the stock universe.
##### Market return
$r_{m,t} = \frac{1}{N}\sum_{i=1}^{N}r_{i,t}$ for each day $t$ in `window_length`.
##### Average market return
Also calculate the average market return over the `window_length` $T$ of days:
$\mu_{m} = \frac{1}{T}\sum_{t=1}^{T} r_{m,t}$
#### Standard deviation of market return
Then calculate the standard deviation of the market return
$\sigma_{m} = \sqrt{252 \times \frac{1}{T} \sum_{t=1}^{T}(r_{m,t} - \mu_{m})^2 } $
##### Hints
* Please use [numpy.nanmean](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.nanmean.html) so that it ignores null values.
* When using `numpy.nanmean`:
  * `axis=0` will calculate one average for every column (think of it like creating a new row in a spreadsheet)
  * `axis=1` will calculate one average for every row (think of it like creating a new column in a spreadsheet)
* The returns data in `compute` has one day in each row, and one stock in each column.
* Notice that we defined a dictionary `params` that has a key `annualization_factor`. This `annualization_factor` can be used as a regular variable, and you'll be using it in the `compute` function. This is also done in the definition of AnnualizedVolatility (as seen earlier in the notebook).
```
class MarketVolatility(CustomFactor):
inputs = [DailyReturns()]
window_length = 1 # We'll want to set this in the constructor when creating the object.
window_safe = True
params = {'annualization_factor': 252.0}
def compute(self, today, assets, out, returns, annualization_factor):
# TODO
"""
For each row (each row represents one day of returns),
calculate the average of the cross-section of stock returns
So that market_returns has one value for each day in the window_length
So choose the appropriate axis (please see hints above)
"""
mkt_returns = # ...
# TODO
# Calculate the mean of market returns
mkt_returns_mu = # ...
# TODO
# Calculate the standard deviation of the market returns, then annualize them.
out[:] = # ...
# TODO: create market volatility features using one month and six-month windows
market_vol_20d = # ...
market_vol_120d = # ...
# add market volatility features to pipeline
pipeline.add(market_vol_20d, 'market_vol_20d')
pipeline.add(market_vol_120d, 'market_vol_120d')
```
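The computation the class should perform can be sketched in plain NumPy on a hypothetical returns window, following the three formulas above:

```python
import numpy as np

np.random.seed(1)
# Hypothetical daily returns: a 20-day window across 4 assets
returns = np.random.normal(0.0, 0.01, size=(20, 4))
annualization_factor = 252.0

# axis=1: average across assets, giving one market return per day
mkt_returns = np.nanmean(returns, axis=1)
# Average market return over the window
mkt_returns_mu = np.nanmean(mkt_returns)
# Standard deviation of the market return, annualized
market_vol = np.sqrt(annualization_factor *
                     np.nanmean((mkt_returns - mkt_returns_mu) ** 2))
print(market_vol)
```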
#### Stop here and continue with the lesson section "Sector and Industry"
# Sector and Industry
#### Add sector code
Note that after we run the pipeline and get the data in a dataframe, we can work on enhancing the sector code feature with one-hot encoding.
```
pipeline.add(sector, 'sector_code')
```
#### Run pipeline to calculate features
```
all_factors = engine.run_pipeline(pipeline, factor_start_date, universe_end_date)
all_factors.head()
```
#### One-hot encode sector
Let's get all the unique sector codes. Then we'll use the `==` comparison operator to check when the sector code equals a particular value. This returns a series of True/False values. For some functions that we'll use in a later lesson, it's easier to work with numbers instead of booleans. We can convert the booleans to type int, so False becomes 0 and True becomes 1.
```
sector_code_l = set(all_factors['sector_code'])
sector_0 = all_factors['sector_code'] == 0
sector_0[0:5]
sector_0_numeric = sector_0.astype(int)
sector_0_numeric[0:5]
```
#### Quiz: One-hot encode sector
Choose column names that look like "sector_code_0", "sector_code_1" etc. Store the values as 1 when the row matches the sector code of the column, 0 otherwise.
```
# TODO: one-hot encode sector and store into dataframe
for s in sector_code_l:
    # ...

all_factors.head()
```
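One way the loop could be filled in, shown here as a sketch against a small hypothetical DataFrame (the pipeline output isn't available in this snippet; the column-naming pattern follows the quiz prompt):

```python
import pandas as pd

# Hypothetical stand-in for all_factors, with only the sector_code column
all_factors = pd.DataFrame({'sector_code': [0, 3, 1, 3, -1]})

sector_code_l = set(all_factors['sector_code'])
for s in sector_code_l:
    # True/False for matching rows, converted to 1/0
    all_factors['sector_code_{}'.format(s)] = (all_factors['sector_code'] == s).astype(int)
```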
#### Stop here and continue with the lesson section "Date Parts".
# Date Parts
* We will make features that might capture trader/investor behavior due to calendar anomalies.
* We can get the dates from the index of the dataframe that is returned from running the pipeline.
#### Accessing index of dates
* Note that we can access the date index using `DataFrame.index.get_level_values(0)`, since the date is stored as index level 0, and the asset name is stored in index level 1. This is of type [DateTimeIndex](https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.DatetimeIndex.html).
```
all_factors.index.get_level_values(0)
```
#### [DateTimeIndex attributes](https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.DatetimeIndex.html)
* The `month` attribute is a numpy array with 1 for January, 2 for February, and so on up to 12 for December.
* We can use a comparison operator such as `==` to return True or False.
* It's usually easier to have all data of a similar type (numeric), so we recommend converting booleans to integers.
The numpy ndarray has a function `.astype()` that can cast the data to a specified type.
For instance, `astype(int)` converts False to 0 and True to 1.
```
# Example
print(all_factors.index.get_level_values(0).month)
print(all_factors.index.get_level_values(0).month == 1)
print( (all_factors.index.get_level_values(0).month == 1).astype(int) )
```
## Quiz
* Create a numpy array that has 1 when the month is January, and 0 otherwise. Store it as a column in the all_factors dataframe.
* Add another similar column to indicate when the month is December
```
# TODO: create a feature that indicates whether it's January
all_factors['is_January'] = # ...
# TODO: create a feature to indicate whether it's December
all_factors['is_December'] = # ...
```
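As a sketch, using a hypothetical date range in place of the pipeline's index, the quiz can be approached like this:

```python
import pandas as pd

# Hypothetical date index standing in for all_factors.index.get_level_values(0)
dates = pd.date_range('2020-11-02', '2021-02-26', freq='B')

is_january = (dates.month == 1).astype(int)
is_december = (dates.month == 12).astype(int)
```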
## Weekday, quarter
* Add columns to the all_factors dataframe that specify the weekday, quarter, and year.
* As you can see in the [documentation for DateTimeIndex](https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.DatetimeIndex.html), `weekday`, `quarter`, and `year` are attributes that you can use here.
```
# we can see that 0 is for Monday, 4 is for Friday
set(all_factors.index.get_level_values(0).weekday)
# Q1, Q2, Q3 and Q4 are represented by integers too
set(all_factors.index.get_level_values(0).quarter)
```
#### Quiz
Add features for weekday, quarter and year.
```
# TODO
all_factors['weekday'] = # ...
all_factors['quarter'] = # ...
all_factors['year'] = # ...
```
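A hedged sketch of the same idea with a standalone date range (the pipeline's actual index would replace `dates`):

```python
import pandas as pd

dates = pd.date_range('2021-01-04', periods=5, freq='B')  # Mon 2021-01-04 .. Fri 2021-01-08

weekday = dates.weekday   # 0 = Monday ... 4 = Friday
quarter = dates.quarter
year = dates.year
```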
## Start-of and end-of features
* The start and end of the week, month, and quarter may have structural differences in trading activity.
* [Pandas.date_range](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.date_range.html) takes the start_date, end_date, and frequency.
* The [frequency](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases) for end of month is `BM`.
```
# Example
tmp = pd.date_range(start=factor_start_date, end=universe_end_date, freq='BM')
tmp
```
#### Example
Create a DatetimeIndex that stores the dates which are the last business day of each month.
Use the `.isin` function, passing in these last days of the month, to create a series of booleans.
Convert the booleans to integers.
```
last_day_of_month = pd.date_range(start=factor_start_date, end=universe_end_date, freq='BM')
last_day_of_month
tmp_month_end = all_factors.index.get_level_values(0).isin(last_day_of_month)
tmp_month_end
tmp_month_end_int = tmp_month_end.astype(int)
tmp_month_end_int
all_factors['month_end'] = tmp_month_end_int
```
#### Quiz: Start of Month
Create a feature that indicates the first business day of each month.
**Hint:** The frequency for first business day of the month uses the code `BMS`.
```
# TODO: month_start feature
first_day_of_month = # pd.date_range()
all_factors['month_start'] = # ...
```
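A possible sketch, assuming a standalone date range for illustration (`BMS` marks the first business day of each month):

```python
import pandas as pd

dates = pd.date_range('2021-01-01', '2021-03-31', freq='B')

first_day_of_month = pd.date_range('2021-01-01', '2021-03-31', freq='BMS')
month_start = dates.isin(first_day_of_month).astype(int)
```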
#### Quiz: Quarter end and quarter start
Create features for the last business day of each quarter, and first business day of each quarter.
**Hint**: use `freq='BQ'` for the last business day of the quarter, and `freq='BQS'` for the first business day of the quarter.
```
# TODO: qtr_end feature
last_day_qtr = # ...
all_factors['qtr_end'] = # ...
# TODO: qtr_start feature
first_day_qtr = # ...
all_factors['qtr_start'] = # ...
```
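A sketch for the quarter-start half (the same `isin` pattern applies to quarter end; the date range here is illustrative):

```python
import pandas as pd

dates = pd.date_range('2021-01-01', '2021-12-31', freq='B')

first_day_qtr = pd.date_range('2021-01-01', '2021-12-31', freq='BQS')
qtr_start = dates.isin(first_day_qtr).astype(int)
```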
## View all features
```
list(all_factors.columns)
```
Note that we can skip the sector_code feature, since we one-hot encoded it into separate features.
```
features = ['Mean_Reversion_Sector_Neutral_Smoothed',
'Momentum_1YR',
'Overnight_Sentiment_Smoothed',
'adv_120d',
'adv_20d',
'dispersion_120d',
'dispersion_20d',
'market_vol_120d',
'market_vol_20d',
#'sector_code', # removed sector_code
'volatility_120d',
'volatility_20d',
'sector_code_0',
'sector_code_1',
'sector_code_2',
'sector_code_3',
'sector_code_4',
'sector_code_5',
'sector_code_6',
'sector_code_7',
'sector_code_8',
'sector_code_9',
'sector_code_10',
'sector_code_-1',
'is_January',
'is_December',
'weekday',
'quarter',
'year',
'month_start',
'qtr_end',
'qtr_start']
```
#### Stop here and continue to the lesson section "Targets"
# Targets (Labels)
- We are going to try to predict the go-forward 1-week return
- Very important! Quantize the target. Why do we do this?
  - Makes it a market-neutral return
  - Normalizes changing volatility and dispersion over time
  - Makes the target robust to changes in market regimes
- The factor we create is the trailing 5-day return.
```
# we'll create a separate pipeline to handle the target
pipeline_target = Pipeline(screen=universe)
```
#### Example
We'll convert weekly returns into 2-quantiles.
```
return_5d_2q = Returns(window_length=5, mask=universe).quantiles(2)
return_5d_2q
pipeline_target.add(return_5d_2q, 'return_5d_2q')
```
#### Quiz
Create another weekly return target that's converted to 5-quantiles.
```
# TODO: create a target using 5-quantiles
return_5d_5q = # ...
# TODO: add the feature to the pipeline
# ...
# Let's run the pipeline to get the dataframe
targets_df = engine.run_pipeline(pipeline_target, factor_start_date, universe_end_date)
targets_df.head()
targets_df.columns
```
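The `.quantiles(5)` factor method is part of the pipeline API; as a rough standalone analogue (a hedged sketch, not the pipeline implementation), pandas' `qcut` shows what quantizing a cross-section of returns looks like:

```python
import pandas as pd

# Hypothetical one-day cross-section of 5-day returns for eight assets
returns_5d = pd.Series([0.01, -0.02, 0.03, 0.00, -0.01, 0.02, -0.03, 0.04])

# Assign each asset to one of 5 quantile buckets, labeled 0..4
return_5d_5q = pd.qcut(returns_5d, 5, labels=False)
```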
## Solution
[solution notebook](feature_engineering_solution.ipynb)
# Introduction to Linear Algebra
This is a tutorial designed to introduce you to the basics of linear algebra.
Linear algebra is a branch of mathematics dedicated to studying the properties of matrices and vectors,
which are used extensively in quantum computing to represent quantum states and operations on them.
This tutorial doesn't come close to covering the full breadth of the topic, but it should be enough to get you comfortable with the main concepts of linear algebra used in quantum computing.
This tutorial assumes familiarity with complex numbers; if you need a review of this topic, we recommend that you complete the [Complex Arithmetic](../ComplexArithmetic/ComplexArithmetic.ipynb) tutorial before tackling this one.
This tutorial covers the following topics:
* Matrices and vectors
* Basic matrix operations
* Operations and properties of complex matrices
* Inner and outer vector products
* Tensor product
* Eigenvalues and eigenvectors
If you need to look up some formulas quickly, you can find them in [this cheatsheet](https://github.com/microsoft/QuantumKatas/blob/main/quickref/qsharp-quick-reference.pdf).
This notebook has several tasks that require you to write Python code to test your understanding of the concepts. If you are not familiar with Python, [here](https://docs.python.org/3/tutorial/index.html) is a good introductory tutorial for it.
> The exercises use Python's built-in representation of complex numbers. Most of the operations (addition, multiplication, etc.) work as you expect them to. Here are a few notes on Python-specific syntax:
>
> * If `z` is a complex number, `z.real` is the real component, and `z.imag` is the coefficient of the imaginary component.
> * To represent an imaginary number, put `j` after a real number: $3.14i$ would be `3.14j`.
> * To represent a complex number, simply add a real number and an imaginary number.
> * The built-in function `abs` computes the modulus of a complex number.
>
> You can find more information in the [official documentation](https://docs.python.org/3/library/cmath.html).
Let's start by importing some useful mathematical functions and constants, and setting up a few things necessary for testing the exercises. **Do not skip this step.**
Click the cell with code below this block of text and press `Ctrl+Enter` (`⌘+Enter` on Mac).
```
# Run this cell using Ctrl+Enter (⌘+Enter on Mac).
from testing import exercise, create_empty_matrix
from typing import List
import math, cmath
Matrix = List[List[complex]]
```
# Part I. Matrices and Basic Operations
## Matrices and Vectors
A **matrix** is a set of numbers arranged in a rectangular grid. Here is a $2$ by $2$ matrix:
$$A =
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$
$A_{i,j}$ refers to the element in row $i$ and column $j$ of matrix $A$ (all indices are 0-based). In the above example, $A_{0,1} = 2$.
An $n \times m$ matrix will have $n$ rows and $m$ columns, like so:
$$\begin{bmatrix}
x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\
x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1}
\end{bmatrix}$$
A $1 \times 1$ matrix is equivalent to a scalar:
$$\begin{bmatrix} 3 \end{bmatrix} = 3$$
Quantum computing uses complex-valued matrices: the elements of a matrix can be complex numbers. This, for example, is a valid complex-valued matrix:
$$\begin{bmatrix}
1 & i \\
-2i & 3 + 4i
\end{bmatrix}$$
Finally, a **vector** is an $n \times 1$ matrix. Here, for example, is a $3 \times 1$ vector:
$$V = \begin{bmatrix} 1 \\ 2i \\ 3 + 4i \end{bmatrix}$$
Since vectors always have a width of $1$, vector elements are sometimes written using only one index. In the above example, $V_0 = 1$ and $V_1 = 2i$.
## Matrix Addition
The easiest matrix operation is **matrix addition**. Matrix addition works between two matrices of the same size, and adds each number from the first matrix to the number in the same position in the second matrix:
$$\begin{bmatrix}
x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\
x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1}
\end{bmatrix}
+
\begin{bmatrix}
y_{0,0} & y_{0,1} & \dotsb & y_{0,m-1} \\
y_{1,0} & y_{1,1} & \dotsb & y_{1,m-1} \\
\vdots & \vdots & \ddots & \vdots \\
y_{n-1,0} & y_{n-1,1} & \dotsb & y_{n-1,m-1}
\end{bmatrix}
=
\begin{bmatrix}
x_{0,0} + y_{0,0} & x_{0,1} + y_{0,1} & \dotsb & x_{0,m-1} + y_{0,m-1} \\
x_{1,0} + y_{1,0} & x_{1,1} + y_{1,1} & \dotsb & x_{1,m-1} + y_{1,m-1} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n-1,0} + y_{n-1,0} & x_{n-1,1} + y_{n-1,1} & \dotsb & x_{n-1,m-1} + y_{n-1,m-1}
\end{bmatrix}$$
Similarly, we can compute $A - B$ by subtracting elements of $B$ from corresponding elements of $A$.
Matrix addition has the following properties:
* Commutativity: $A + B = B + A$
* Associativity: $(A + B) + C = A + (B + C)$
### <span style="color:blue">Exercise 1</span>: Matrix addition.
**Inputs:**
1. An $n \times m$ matrix $A$, represented as a two-dimensional list.
2. An $n \times m$ matrix $B$, represented as a two-dimensional list.
**Output:** Return the sum of the matrices $A + B$ - an $n \times m$ matrix, represented as a two-dimensional list.
> When representing matrices as lists, each sub-list represents a row.
>
> For example, list `[[1, 2], [3, 4]]` represents the following matrix:
>
> $$\begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix}$$
Fill in the missing code and run the cell below to test your work.
<br/>
<details>
<summary><b>Need a hint? Click here</b></summary>
A video explanation can be found <a href="https://www.youtube.com/watch?v=WR9qCSXJlyY">here</a>.
</details>
```
@exercise
def matrix_add(a : Matrix, b : Matrix) -> Matrix:
    # You can get the size of a matrix like this:
    rows = len(a)
    columns = len(a[0])

    # You can use the following function to initialize a rows×columns matrix filled with 0s to store your answer
    c = create_empty_matrix(rows, columns)

    # You can use a for loop to execute its body several times;
    # in this loop variable i will take on each value from 0 to n-1, inclusive
    for i in range(rows):
        # Loops can be nested
        for j in range(columns):
            # You can access elements of a matrix like this:
            x = a[i][j]
            y = b[i][j]

            # You can modify the elements of a matrix like this:
            c[i][j] = x + y

    return c
```
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-1:-Matrix-addition.).*
## Scalar Multiplication
The next matrix operation is **scalar multiplication** - multiplying the entire matrix by a scalar (real or complex number):
$$a \cdot
\begin{bmatrix}
x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\
x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1}
\end{bmatrix}
=
\begin{bmatrix}
a \cdot x_{0,0} & a \cdot x_{0,1} & \dotsb & a \cdot x_{0,m-1} \\
a \cdot x_{1,0} & a \cdot x_{1,1} & \dotsb & a \cdot x_{1,m-1} \\
\vdots & \vdots & \ddots & \vdots \\
a \cdot x_{n-1,0} & a \cdot x_{n-1,1} & \dotsb & a \cdot x_{n-1,m-1}
\end{bmatrix}$$
Scalar multiplication has the following properties:
* Associativity: $x \cdot (yA) = (x \cdot y)A$
* Distributivity over matrix addition: $x(A + B) = xA + xB$
* Distributivity over scalar addition: $(x + y)A = xA + yA$
### <span style="color:blue">Exercise 2</span>: Scalar multiplication.
**Inputs:**
1. A scalar $x$.
2. An $n \times m$ matrix $A$.
**Output:** Return the $n \times m$ matrix $x \cdot A$.
<br/>
<details>
<summary><b>Need a hint? Click here</b></summary>
A video explanation can be found <a href="https://www.youtube.com/watch?v=TbaltFbJ3wE">here</a>.
</details>
```
@exercise
def scalar_mult(x : complex, a : Matrix) -> Matrix:
    # Fill in the missing code and run the cell to check your work.
    rows = len(a)
    columns = len(a[0])
    c = create_empty_matrix(rows, columns)

    # You can use a for loop to execute its body several times;
    # in this loop variable i will take on each value from 0 to n-1, inclusive
    for i in range(rows):
        # Loops can be nested
        for j in range(columns):
            # You can access elements of a matrix like this:
            current_cell = a[i][j]
            # You can modify the elements of a matrix like this:
            c[i][j] = x * current_cell

    return c
```
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-2:-Scalar-multiplication.).*
## Matrix Multiplication
**Matrix multiplication** is a very important and somewhat unusual operation. The unusual thing about it is that neither its operands nor its output are the same size: an $n \times m$ matrix multiplied by an $m \times k$ matrix results in an $n \times k$ matrix.
That is, for matrix multiplication to be applicable, the number of columns in the first matrix must equal the number of rows in the second matrix.
Here is how matrix product is calculated: if we are calculating $AB = C$, then
$$C_{i,j} = A_{i,0} \cdot B_{0,j} + A_{i,1} \cdot B_{1,j} + \dotsb + A_{i,m-1} \cdot B_{m-1,j} = \sum_{t = 0}^{m-1} A_{i,t} \cdot B_{t,j}$$
Here is a small example:
$$\begin{bmatrix}
\color{blue} 1 & \color{blue} 2 & \color{blue} 3 \\
\color{red} 4 & \color{red} 5 & \color{red} 6
\end{bmatrix}
\begin{bmatrix}
1 \\
2 \\
3
\end{bmatrix}
=
\begin{bmatrix}
(\color{blue} 1 \cdot 1) + (\color{blue} 2 \cdot 2) + (\color{blue} 3 \cdot 3) \\
(\color{red} 4 \cdot 1) + (\color{red} 5 \cdot 2) + (\color{red} 6 \cdot 3)
\end{bmatrix}
=
\begin{bmatrix}
14 \\
32
\end{bmatrix}$$
Matrix multiplication has the following properties:
* Associativity: $A(BC) = (AB)C$
* Distributivity over matrix addition: $A(B + C) = AB + AC$ and $(A + B)C = AC + BC$
* Associativity with scalar multiplication: $xAB = x(AB) = A(xB)$
> Note that matrix multiplication is **not commutative:** $AB$ rarely equals $BA$.
Another very important property of matrix multiplication is that a matrix multiplied by a vector produces another vector.
An **identity matrix** $I_n$ is a special $n \times n$ matrix which has $1$s on the main diagonal, and $0$s everywhere else:
$$I_n =
\begin{bmatrix}
1 & 0 & \dotsb & 0 \\
0 & 1 & \dotsb & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dotsb & 1
\end{bmatrix}$$
What makes it special is that multiplying any matrix (of compatible size) by $I_n$ returns the original matrix. To put it another way, if $A$ is an $n \times m$ matrix:
$$AI_m = I_nA = A$$
This is why $I_n$ is called an identity matrix - it acts as a **multiplicative identity**. In other words, it is the matrix equivalent of the number $1$.
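This property is easy to check numerically. The helpers below are illustrative stand-ins written with the same list-of-lists representation the exercises use; they are not part of the tutorial's `testing` module:

```python
def identity(n):
    # n x n matrix with 1s on the main diagonal and 0s elsewhere
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def multiply(a, b):
    # plain triple-loop matrix product
    n, m, k = len(a), len(a[0]), len(b[0])
    return [[sum(a[i][t] * b[t][j] for t in range(m)) for j in range(k)] for i in range(n)]

a = [[1, 2, 3],
     [4, 5, 6]]                        # a 2 x 3 matrix

assert multiply(identity(2), a) == a   # I_n A = A
assert multiply(a, identity(3)) == a   # A I_m = A
```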
### <span style="color:blue">Exercise 3</span>: Matrix multiplication.
**Inputs:**
1. An $n \times m$ matrix $A$.
2. An $m \times k$ matrix $B$.
**Output:** Return the $n \times k$ matrix equal to the matrix product $AB$.
<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
To solve this exercise, you will need 3 <code>for</code> loops: one to go over $n$ rows of the output matrix, one to go over $k$ columns, and one to add up $m$ products that form each element of the output:
<pre>
<code>
for i in range(n):
    for j in range(k):
        sum = 0
        for t in range(m):
            sum = sum + ...
        c[i][j] = sum
</code>
</pre>
A video explanation can be found <a href="https://www.youtube.com/watch?v=OMA2Mwo0aZg">here</a>.
</details>
```
@exercise
def matrix_mult(a : Matrix, b : Matrix) -> Matrix:
    n = len(a)
    m = len(a[0])
    k = len(b[0])
    c = create_empty_matrix(n, k)

    def calc_sum_this_cell(i, j, m):
        sum_cell = 0
        for t in range(m):
            sum_cell += a[i][t] * b[t][j]
        return sum_cell

    for i in range(n):
        for j in range(k):
            c[i][j] = calc_sum_this_cell(i, j, m)

    return c
```
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-3:-Matrix-multiplication.).*
## Inverse Matrices
A square $n \times n$ matrix $A$ is **invertible** if it has an inverse $n \times n$ matrix $A^{-1}$ with the following property:
$$AA^{-1} = A^{-1}A = I_n$$
In other words, $A^{-1}$ acts as the **multiplicative inverse** of $A$.
Another, equivalent definition highlights what makes this an interesting property. For any matrices $B$ and $C$ of compatible sizes:
$$A^{-1}(AB) = A(A^{-1}B) = B \\
(CA)A^{-1} = (CA^{-1})A = C$$
A square matrix has a property called the **determinant**, with the determinant of matrix $A$ being written as $|A|$. A matrix is invertible if and only if its determinant isn't equal to $0$.
For a $2 \times 2$ matrix $A$, the determinant is defined as $|A| = (A_{0,0} \cdot A_{1,1}) - (A_{0,1} \cdot A_{1,0})$.
For larger matrices, the determinant is defined through determinants of sub-matrices. You can learn more from [Wikipedia](https://en.wikipedia.org/wiki/Determinant) or from [Wolfram MathWorld](http://mathworld.wolfram.com/Determinant.html).
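For the $2 \times 2$ case, the determinant is a one-liner (a sketch using the same list-of-lists convention as the exercises):

```python
def determinant_2x2(a):
    # |A| = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

assert determinant_2x2([[1, 2], [3, 4]]) == -2   # 1*4 - 2*3
assert determinant_2x2([[2, 0], [0, 3]]) == 6    # invertible, since |A| != 0
```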
### <span style="color:blue">Exercise 4</span>: Matrix Inversion.
**Input:** An invertible $2 \times 2$ matrix $A$.
**Output:** Return the inverse of $A$, a $2 \times 2$ matrix $A^{-1}$.
<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
Try to come up with a general method of doing it by hand first. If you get stuck, you may find <a href="https://en.wikipedia.org/wiki/Invertible_matrix#Inversion_of_2_%C3%97_2_matrices">this Wikipedia article</a> useful. For this exercise, $|A|$ is guaranteed to be non-zero. <br>
A video explanation can be found <a href="https://www.youtube.com/watch?v=01c12NaUQDw">here</a>.
</details>
```
@exercise
def matrix_inverse(m : Matrix) -> Matrix:
    # inverse must be same size as original (and should be square, which we could verify)
    m_inverse = create_empty_matrix(len(m), len(m[0]))

    a = m[0][0]
    b = m[0][1]
    c = m[1][0]
    d = m[1][1]

    determinant_m = a * d - b * c
    if determinant_m != 0:
        m_inverse[0][0] = d / determinant_m
        m_inverse[0][1] = -b / determinant_m
        m_inverse[1][0] = -c / determinant_m
        m_inverse[1][1] = a / determinant_m

    return m_inverse
```
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-4:-Matrix-Inversion.).*
## Transpose
The **transpose** operation, denoted as $A^T$, is essentially a reflection of the matrix across the diagonal: $(A^T)_{i,j} = A_{j,i}$.
Given an $n \times m$ matrix $A$, its transpose is the $m \times n$ matrix $A^T$, such that if:
$$A =
\begin{bmatrix}
x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\
x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1}
\end{bmatrix}$$
then:
$$A^T =
\begin{bmatrix}
x_{0,0} & x_{1,0} & \dotsb & x_{n-1,0} \\
x_{0,1} & x_{1,1} & \dotsb & x_{n-1,1} \\
\vdots & \vdots & \ddots & \vdots \\
x_{0,m-1} & x_{1,m-1} & \dotsb & x_{n-1,m-1}
\end{bmatrix}$$
For example:
$$\begin{bmatrix}
1 & 2 \\
3 & 4 \\
5 & 6
\end{bmatrix}^T
=
\begin{bmatrix}
1 & 3 & 5 \\
2 & 4 & 6
\end{bmatrix}$$
A **symmetric** matrix is a square matrix which equals its own transpose: $A = A^T$. To put it another way, it has reflection symmetry (hence the name) across the main diagonal. For example, the following matrix is symmetric:
$$\begin{bmatrix}
1 & 2 & 3 \\
2 & 4 & 5 \\
3 & 5 & 6
\end{bmatrix}$$
The transpose of a matrix product is equal to the product of transposed matrices, taken in reverse order:
$$(AB)^T = B^TA^T$$
### <span style="color:blue">Exercise 5</span>: Transpose.
**Input:** An $n \times m$ matrix $A$.
**Output:** Return an $m \times n$ matrix $A^T$, the transpose of $A$.
<br/>
<details>
<summary><b>Need a hint? Click here</b></summary>
A video explanation can be found <a href="https://www.youtube.com/watch?v=TZrKrNVhbjI">here</a>.
</details>
```
@exercise
def transpose(a : Matrix) -> Matrix:
    n = len(a)
    m = len(a[0])

    # transpose of n x m is m x n
    transpose_of_a = create_empty_matrix(m, n)

    # for each row, make it a column
    for i in range(n):
        for j in range(m):
            transpose_of_a[j][i] = a[i][j]

    return transpose_of_a
```
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-5:-Transpose.).*
## Conjugate
The next important single-matrix operation is the **matrix conjugate**, denoted as $\overline{A}$. This, as the name might suggest, involves taking the [complex conjugate](../ComplexArithmetic/ComplexArithmetic.ipynb#Complex-Conjugate) of every element of the matrix: if
$$A =
\begin{bmatrix}
x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\
x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1}
\end{bmatrix}$$
Then:
$$\overline{A} =
\begin{bmatrix}
\overline{x}_{0,0} & \overline{x}_{0,1} & \dotsb & \overline{x}_{0,m-1} \\
\overline{x}_{1,0} & \overline{x}_{1,1} & \dotsb & \overline{x}_{1,m-1} \\
\vdots & \vdots & \ddots & \vdots \\
\overline{x}_{n-1,0} & \overline{x}_{n-1,1} & \dotsb & \overline{x}_{n-1,m-1}
\end{bmatrix}$$
The conjugate of a matrix product is equal to the product of the conjugates of the matrices:
$$\overline{AB} = (\overline{A})(\overline{B})$$
### <span style="color:blue">Exercise 6</span>: Conjugate.
**Input:** An $n \times m$ matrix $A$.
**Output:** Return an $n \times m$ matrix $\overline{A}$, the conjugate of $A$.
> As a reminder, you can get the real and imaginary components of complex number `z` using `z.real` and `z.imag`, respectively.
<details>
<summary><b>Need a hint? Click here</b></summary>
To calculate the conjugate of a matrix take the conjugate of each element, check the <a href="../ComplexArithmetic/ComplexArithmetic.ipynb#Exercise-4:-Complex-conjugate.">complex arithmetic tutorial</a> to see how to calculate the conjugate of a complex number.
</details>
```
@exercise
def conjugate(a : Matrix) -> Matrix:
    # result is the same size as the input
    n = len(a)
    m = len(a[0])
    conjugate_of_a = create_empty_matrix(n, m)

    for i in range(n):
        for j in range(m):
            # conjugate of a + bi is a - bi (note: Python writes the imaginary unit as j)
            conjugate_of_a[i][j] = a[i][j].real - a[i][j].imag * 1j

    return conjugate_of_a
```
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-6:-Conjugate.).*
## Adjoint
The final important single-matrix operation is a combination of the above two. The **conjugate transpose**, also called the **adjoint** of matrix $A$, is defined as $A^\dagger = \overline{(A^T)} = (\overline{A})^T$.
A matrix is known as **Hermitian** or **self-adjoint** if it equals its own adjoint: $A = A^\dagger$. For example, the following matrix is Hermitian:
$$\begin{bmatrix}
1 & i \\
-i & 2
\end{bmatrix}$$
The adjoint of a matrix product can be calculated as follows:
$$(AB)^\dagger = B^\dagger A^\dagger$$
### <span style="color:blue">Exercise 7</span>: Adjoint.
**Input:** An $n \times m$ matrix $A$.
**Output:** Return an $m \times n$ matrix $A^\dagger$, the adjoint of $A$.
> Don't forget, you can re-use functions you've written previously.
```
@exercise
def adjoint(a : Matrix) -> Matrix:
    # first do the transpose, then the conjugate;
    # the result is m x n because of the transpose
    n = len(a)
    m = len(a[0])
    adjoint_of_a = create_empty_matrix(m, n)

    # transpose - for each row, make it a column
    for i in range(n):
        for j in range(m):
            adjoint_of_a[j][i] = a[i][j]

    # conjugate - a + bi becomes a - bi
    for i in range(m):
        for j in range(n):
            adjoint_of_a[i][j] = adjoint_of_a[i][j].real - adjoint_of_a[i][j].imag * 1j

    return adjoint_of_a
```
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-7:-Adjoint.).*
## Unitary Matrices
**Unitary matrices** are very important for quantum computing. A matrix is unitary when it is invertible, and its inverse is equal to its adjoint: $U^{-1} = U^\dagger$. That is, an $n \times n$ square matrix $U$ is unitary if and only if $UU^\dagger = U^\dagger U = I_n$.
For example, the following matrix is unitary:
$$\begin{bmatrix}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
\frac{i}{\sqrt{2}} & \frac{-i}{\sqrt{2}} \\
\end{bmatrix}$$
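We can verify that claim numerically. The helper functions below are local stand-ins for the adjoint and matrix-product exercises above, written inline so the snippet is self-contained:

```python
import cmath
import math

def adjoint(a):
    # conjugate transpose: entry [i][j] is the conjugate of a[j][i]
    return [[a[j][i].conjugate() for j in range(len(a))] for i in range(len(a[0]))]

def multiply(a, b):
    # plain triple-loop matrix product
    n, m, k = len(a), len(a[0]), len(b[0])
    return [[sum(a[i][t] * b[t][j] for t in range(m)) for j in range(k)] for i in range(n)]

s = 1 / math.sqrt(2)
u = [[s, s],
     [1j * s, -1j * s]]

product = multiply(u, adjoint(u))   # should be (numerically) the 2 x 2 identity
for i in range(2):
    for j in range(2):
        expected = 1 if i == j else 0
        assert cmath.isclose(product[i][j], expected, abs_tol=1e-9)
```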
### <span style="color:blue">Exercise 8</span>: Unitary Verification.
**Input:** An $n \times n$ matrix $A$.
**Output:** Check if the matrix is unitary and return `True` if it is, or `False` if it isn't.
> Because of inaccuracy when dealing with floating point numbers on a computer (rounding errors), you won't always get the exact result you are expecting from a long series of calculations. To get around this, the `pytest` library offers a function `approx` which can be used to check if two numbers are "close enough:" `a == approx(b)`.
<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
Keep in mind, you have only implemented matrix inverses for $2 \times 2$ matrices, and this exercise may give you larger inputs. There is a way to solve this without taking the inverse.
</details>
```
from pytest import approx

@exercise
def is_matrix_unitary(a : Matrix) -> bool:
    # if a is unitary, then a multiplied by its adjoint yields the identity matrix;
    # this also handles the zero matrix corner case
    # (this is for a square n x n matrix)
    n = len(a)
    product_matrix = matrix_mult(a, adjoint(a))

    # check whether product_matrix is the identity matrix:
    # diagonal entries must be (approximately) 1, all others 0
    for i in range(n):
        for j in range(n):
            expected = 1 if i == j else 0
            if product_matrix[i][j] != approx(expected):
                return False

    return True
```
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-8:-Unitary-Verification.).*
## Next Steps
Congratulations! At this point, you should understand enough linear algebra to be able to get started with the tutorials on [the concept of qubit](../Qubit/Qubit.ipynb) and on [single-qubit quantum gates](../SingleQubitGates/SingleQubitGates.ipynb). The next section covers more advanced matrix operations that help explain the properties of qubits and quantum gates.
# Part II. Advanced Operations
## Inner Product
The **inner product** is yet another important matrix operation that is only applied to vectors. Given two vectors $V$ and $W$ of the same size, their inner product $\langle V , W \rangle$ is defined as a product of matrices $V^\dagger$ and $W$:
$$\langle V , W \rangle = V^\dagger W$$
Let's break this down so it's a bit easier to understand. A $1 \times n$ matrix (the adjoint of an $n \times 1$ vector) multiplied by an $n \times 1$ vector results in a $1 \times 1$ matrix (which is equivalent to a scalar). The result of an inner product is that scalar.
To put it another way, to calculate the inner product of two vectors, take the corresponding elements $V_k$ and $W_k$, multiply the complex conjugate of $V_k$ by $W_k$, and add up those products:
$$\langle V , W \rangle = \sum_{k=0}^{n-1}\overline{V_k}W_k$$
Here is a simple example:
$$\langle
\begin{bmatrix}
-6 \\
9i
\end{bmatrix}
,
\begin{bmatrix}
3 \\
-8
\end{bmatrix}
\rangle =
\begin{bmatrix}
-6 \\
9i
\end{bmatrix}^\dagger
\begin{bmatrix}
3 \\
-8
\end{bmatrix}
=
\begin{bmatrix} -6 & -9i \end{bmatrix}
\begin{bmatrix}
3 \\
-8
\end{bmatrix}
= (-6) \cdot (3) + (-9i) \cdot (-8) = -18 + 72i$$
If you are familiar with the **dot product**, you will notice that it is equivalent to inner product for real-numbered vectors.
> We use our definition for these tutorials because it matches the notation used in quantum computing. You might encounter other sources which define the inner product a little differently: $\langle V , W \rangle = W^\dagger V = V^T\overline{W}$, in contrast to the $V^\dagger W$ that we use. These definitions are almost equivalent, with some differences in the scalar multiplication by a complex number.
An immediate application for the inner product is computing the **vector norm**. The norm of vector $V$ is defined as $||V|| = \sqrt{\langle V , V \rangle}$. This condenses the vector down to a single non-negative real value. If the vector represents coordinates in space, the norm happens to be the length of the vector. A vector is called **normalized** if its norm is equal to $1$.
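As a small numeric check: $\langle V , V \rangle = \sum_k |V_k|^2$, so the norm is real even for complex vectors (the vector below is illustrative and written as a flat list for brevity):

```python
import math

v = [3, 4j]                                    # elements of a 2 x 1 vector
norm = math.sqrt(sum(abs(x) ** 2 for x in v))  # ||V|| = sqrt(<V, V>)

assert norm == 5.0
```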
The inner product has the following properties:
* Distributivity over addition: $\langle V + W , X \rangle = \langle V , X \rangle + \langle W , X \rangle$ and $\langle V , W + X \rangle = \langle V , W \rangle + \langle V , X \rangle$
* Partial associativity with scalar multiplication: $x \cdot \langle V , W \rangle = \langle \overline{x}V , W \rangle = \langle V , xW \rangle$
* Skew symmetry: $\langle V , W \rangle = \overline{\langle W , V \rangle}$
* Multiplying a vector by a unitary matrix **preserves the vector's inner product with itself** (and therefore the vector's norm): $\langle UV , UV \rangle = \langle V , V \rangle$
> Note that just like matrix multiplication, the inner product is **not commutative**: $\langle V , W \rangle$ won't always equal $\langle W , V \rangle$.
### <span style="color:blue">Exercise 9</span>: Inner product.
**Inputs:**
1. An $n \times 1$ vector $V$.
2. An $n \times 1$ vector $W$.
**Output:** Return a complex number - the inner product $\langle V , W \rangle$.
<br/>
<details>
<summary><b>Need a hint? Click here</b></summary>
A video explanation can be found <a href="https://www.youtube.com/watch?v=FCmH4MqbFGs">here</a>.
</details>
```
@exercise
def inner_prod(v : Matrix, w : Matrix) -> complex:
    n = len(v)
    conjugate_of_v = conjugate(v)

    inner_product = 0
    for k in range(n):
        inner_product += conjugate_of_v[k][0] * w[k][0]

    return inner_product
```
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-9:-Inner-product.).*
### <span style="color:blue">Exercise 10</span>: Normalized vectors.
**Input:** A non-zero $n \times 1$ vector $V$.
**Output:** Return an $n \times 1$ vector $\frac{V}{||V||}$ - the normalized version of the vector $V$.
<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
You might need the square root function to solve this exercise. As a reminder, <a href=https://docs.python.org/3/library/math.html#math.sqrt>Python's square root function</a> is available in the <code>math</code> library.<br>
A video explanation can be found <a href="https://www.youtube.com/watch?v=7fn03DIW3Ak">here</a>. Note that when this method is used with complex vectors, you should take the modulus of the complex number for the division.
</details>
```
@exercise
def normalize(v : Matrix) -> Matrix:
    # <V, V> is returned as a complex number, but its imaginary part is 0;
    # take its modulus so we can apply the real-valued square root
    prod = inner_prod(v, v)
    modulus_of_prod = math.sqrt(prod.real**2 + prod.imag**2)
    norm = math.sqrt(modulus_of_prod)
    v_normalized = create_empty_matrix(len(v), 1)
    for k in range(len(v)):
        v_normalized[k][0] = v[k][0] / norm
    return v_normalized
```
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-10:-Normalized-vectors.).*
## Outer Product
The **outer product** of two vectors $V$ and $W$ is defined as $VW^\dagger$. That is, the outer product of an $n \times 1$ vector and an $m \times 1$ vector is an $n \times m$ matrix. If we denote the outer product of $V$ and $W$ as $X$, then $X_{i,j} = V_i \cdot \overline{W_j}$.
Here is a simple example:
outer product of $\begin{bmatrix} -3i \\ 9 \end{bmatrix}$ and $\begin{bmatrix} 9i \\ 2 \\ 7 \end{bmatrix}$ is:
$$\begin{bmatrix} \color{blue} {-3i} \\ \color{blue} 9 \end{bmatrix}
\begin{bmatrix} \color{red} {9i} \\ \color{red} 2 \\ \color{red} 7 \end{bmatrix}^\dagger
=
\begin{bmatrix} \color{blue} {-3i} \\ \color{blue} 9 \end{bmatrix}
\begin{bmatrix} \color{red} {-9i} & \color{red} 2 & \color{red} 7 \end{bmatrix}
=
\begin{bmatrix}
\color{blue} {-3i} \cdot \color{red} {(-9i)} & \color{blue} {-3i} \cdot \color{red} 2 & \color{blue} {-3i} \cdot \color{red} 7 \\
\color{blue} 9 \cdot \color{red} {(-9i)} & \color{blue} 9 \cdot \color{red} 2 & \color{blue} 9 \cdot \color{red} 7
\end{bmatrix}
=
\begin{bmatrix}
-27 & -6i & -21i \\
-81i & 18 & 63
\end{bmatrix}$$
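The worked example above can be checked with NumPy (assuming NumPy is available alongside the tutorial helpers): `np.outer` with a conjugated second argument computes $VW^\dagger$:

```python
import numpy as np

v = np.array([-3j, 9])
w = np.array([9j, 2, 7])
outer = np.outer(v, w.conj())  # X[i, j] = V_i * conj(W_j)
expected = np.array([[-27, -6j, -21j],
                     [-81j, 18, 63]])
print(np.allclose(outer, expected))  # True
```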
### <span style="color:blue">Exercise 11</span>: Outer product.
**Inputs:**
1. An $n \times 1$ vector $V$.
2. An $m \times 1$ vector $W$.
**Output:** Return an $n \times m$ matrix that represents the outer product of $V$ and $W$.
```
@exercise
def outer_prod(v : Matrix, w : Matrix) -> Matrix:
    # the outer product equals V times the adjoint of W
    return matrix_mult(v, adjoint(w))
```
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-11:-Outer-product.).*
## Tensor Product
The **tensor product** is a different way of multiplying matrices. Rather than multiplying rows by columns, the tensor product multiplies the second matrix by every element of the first matrix.
Given $n \times m$ matrix $A$ and $k \times l$ matrix $B$, their tensor product $A \otimes B$ is an $(n \cdot k) \times (m \cdot l)$ matrix defined as follows:
$$A \otimes B =
\begin{bmatrix}
A_{0,0} \cdot B & A_{0,1} \cdot B & \dotsb & A_{0,m-1} \cdot B \\
A_{1,0} \cdot B & A_{1,1} \cdot B & \dotsb & A_{1,m-1} \cdot B \\
\vdots & \vdots & \ddots & \vdots \\
A_{n-1,0} \cdot B & A_{n-1,1} \cdot B & \dotsb & A_{n-1,m-1} \cdot B
\end{bmatrix}
=
\begin{bmatrix}
A_{0,0} \cdot \color{red} {\begin{bmatrix}B_{0,0} & \dotsb & B_{0,l-1} \\ \vdots & \ddots & \vdots \\ B_{k-1,0} & \dotsb & b_{k-1,l-1} \end{bmatrix}} & \dotsb &
A_{0,m-1} \cdot \color{blue} {\begin{bmatrix}B_{0,0} & \dotsb & B_{0,l-1} \\ \vdots & \ddots & \vdots \\ B_{k-1,0} & \dotsb & B_{k-1,l-1} \end{bmatrix}} \\
\vdots & \ddots & \vdots \\
A_{n-1,0} \cdot \color{blue} {\begin{bmatrix}B_{0,0} & \dotsb & B_{0,l-1} \\ \vdots & \ddots & \vdots \\ B_{k-1,0} & \dotsb & B_{k-1,l-1} \end{bmatrix}} & \dotsb &
A_{n-1,m-1} \cdot \color{red} {\begin{bmatrix}B_{0,0} & \dotsb & B_{0,l-1} \\ \vdots & \ddots & \vdots \\ B_{k-1,0} & \dotsb & B_{k-1,l-1} \end{bmatrix}}
\end{bmatrix}
= \\
=
\begin{bmatrix}
A_{0,0} \cdot \color{red} {B_{0,0}} & \dotsb & A_{0,0} \cdot \color{red} {B_{0,l-1}} & \dotsb & A_{0,m-1} \cdot \color{blue} {B_{0,0}} & \dotsb & A_{0,m-1} \cdot \color{blue} {B_{0,l-1}} \\
\vdots & \ddots & \vdots & \dotsb & \vdots & \ddots & \vdots \\
A_{0,0} \cdot \color{red} {B_{k-1,0}} & \dotsb & A_{0,0} \cdot \color{red} {B_{k-1,l-1}} & \dotsb & A_{0,m-1} \cdot \color{blue} {B_{k-1,0}} & \dotsb & A_{0,m-1} \cdot \color{blue} {B_{k-1,l-1}} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
A_{n-1,0} \cdot \color{blue} {B_{0,0}} & \dotsb & A_{n-1,0} \cdot \color{blue} {B_{0,l-1}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {B_{0,0}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {B_{0,l-1}} \\
\vdots & \ddots & \vdots & \dotsb & \vdots & \ddots & \vdots \\
A_{n-1,0} \cdot \color{blue} {B_{k-1,0}} & \dotsb & A_{n-1,0} \cdot \color{blue} {B_{k-1,l-1}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {B_{k-1,0}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {B_{k-1,l-1}}
\end{bmatrix}$$
Here is a simple example:
$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \otimes \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} =
\begin{bmatrix}
1 \cdot \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} & 2 \cdot \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} \\
3 \cdot \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} & 4 \cdot \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}
\end{bmatrix}
=
\begin{bmatrix}
1 \cdot 5 & 1 \cdot 6 & 2 \cdot 5 & 2 \cdot 6 \\
1 \cdot 7 & 1 \cdot 8 & 2 \cdot 7 & 2 \cdot 8 \\
3 \cdot 5 & 3 \cdot 6 & 4 \cdot 5 & 4 \cdot 6 \\
3 \cdot 7 & 3 \cdot 8 & 4 \cdot 7 & 4 \cdot 8
\end{bmatrix}
=
\begin{bmatrix}
5 & 6 & 10 & 12 \\
7 & 8 & 14 & 16 \\
15 & 18 & 20 & 24 \\
21 & 24 & 28 & 32
\end{bmatrix}$$
Notice that the tensor product of two vectors is another vector: if $V$ is an $n \times 1$ vector, and $W$ is an $m \times 1$ vector, $V \otimes W$ is an $(n \cdot m) \times 1$ vector.
The tensor product has the following properties:
* Distributivity over addition: $(A + B) \otimes C = A \otimes C + B \otimes C$, $A \otimes (B + C) = A \otimes B + A \otimes C$
* Associativity with scalar multiplication: $x(A \otimes B) = (xA) \otimes B = A \otimes (xB)$
* Mixed-product property (relation with matrix multiplication): $(A \otimes B) (C \otimes D) = (AC) \otimes (BD)$
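These properties can be spot-checked numerically. For instance, NumPy's `np.kron` implements the tensor (Kronecker) product, so the mixed-product property can be verified on random matrices (a sketch assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.random((2, 3)), rng.random((4, 2))
C, D = rng.random((3, 2)), rng.random((2, 5))

# (A tensor B)(C tensor D) should equal (AC) tensor (BD)
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
print(np.allclose(lhs, rhs))  # True
```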
### <span style="color:blue">Exercise 12</span>*: Tensor Product.
**Inputs:**
1. An $n \times m$ matrix $A$.
2. A $k \times l$ matrix $B$.
**Output:** Return an $(n \cdot k) \times (m \cdot l)$ matrix $A \otimes B$, the tensor product of $A$ and $B$.
```
@exercise
def tensor_product(a : Matrix, b : Matrix) -> Matrix:
    n = len(a)
    m = len(a[0])
    k = len(b)
    l = len(b[0])
    result = create_empty_matrix(n*k, m*l)
    # for each element of A, which is n x m
    for arow in range(n):
        for acol in range(m):
            acurrent = a[arow][acol]
            # copy the elements of B into the result, scaling each by acurrent
            for brow in range(k):
                for bcol in range(l):
                    bcurrent = b[brow][bcol]
                    # the key is mapping to the right indices in the result
                    result[arow*k + brow][acol*l + bcol] = acurrent * bcurrent
    return result
```
*Can't come up with a solution? See the explained solution in the* <i><a href="./Workbook_LinearAlgebra.ipynb#Exercise-12*:-Tensor-Product.">Linear Algebra Workbook</a></i>.
## Next Steps
At this point, you know enough to complete the tutorials on [the concept of qubit](../Qubit/Qubit.ipynb), [single-qubit gates](../SingleQubitGates/SingleQubitGates.ipynb), [multi-qubit systems](../MultiQubitSystems/MultiQubitSystems.ipynb), and [multi-qubit gates](../MultiQubitGates/MultiQubitGates.ipynb).
The last part of this tutorial is a brief introduction to eigenvalues and eigenvectors, which are used for more advanced topics in quantum computing.
Feel free to move on to the next tutorials, and come back here once you encounter eigenvalues and eigenvectors elsewhere.
# Part III: Eigenvalues and Eigenvectors
Consider the following example of multiplying a matrix by a vector:
$$\begin{bmatrix}
1 & -3 & 3 \\
3 & -5 & 3 \\
6 & -6 & 4
\end{bmatrix}
\begin{bmatrix}
1 \\
1 \\
2
\end{bmatrix}
=
\begin{bmatrix}
4 \\
4 \\
8
\end{bmatrix}$$
Notice that the resulting vector is just the initial vector multiplied by a scalar (in this case 4). This behavior is so noteworthy that it is described using a special set of terms.
Given a nonzero $n \times n$ matrix $A$, a nonzero vector $V$, and a scalar $x$, if $AV = xV$, then $x$ is an **eigenvalue** of $A$, and $V$ is an **eigenvector** of $A$ corresponding to that eigenvalue.
The properties of eigenvalues and eigenvectors are used extensively in quantum computing. You can learn more about eigenvalues, eigenvectors, and their properties at [Wolfram MathWorld](http://mathworld.wolfram.com/Eigenvector.html) or on [Wikipedia](https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors).
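The example at the start of this part can also be verified programmatically (a sketch assuming NumPy): multiplying $A$ by the eigenvector reproduces the eigenvector scaled by the eigenvalue.

```python
import numpy as np

A = np.array([[1, -3, 3],
              [3, -5, 3],
              [6, -6, 4]])
v = np.array([1, 1, 2])

# A @ v is a scalar multiple of v, so v is an eigenvector with eigenvalue 4
print(A @ v)                      # [4 4 8]
print(np.allclose(A @ v, 4 * v))  # True
```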
### <span style="color:blue">Exercise 13</span>: Finding an eigenvalue.
**Inputs:**
1. An $n \times n$ matrix $A$.
2. An eigenvector $V$ of matrix $A$.
**Output:** Return a real number - the eigenvalue of $A$ that is associated with the given eigenvector.
<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
Multiply the matrix by the vector, then divide the elements of the result by the elements of the original vector. Don't forget though, some elements of the vector may be $0$.
</details>
```
@exercise
def find_eigenvalue(a : Matrix, v : Matrix) -> float:
    # AV = xV, so dividing a non-zero element of AV by the
    # corresponding element of V gives the eigenvalue x
    n = len(a)
    prod_av = matrix_mult(a, v)
    eigenvalue = 0
    for i in range(n):
        if v[i][0] != 0:
            ratio = prod_av[i][0] / v[i][0]
            # use the first non-zero ratio as the eigenvalue
            if ratio != 0:
                eigenvalue = ratio
                break
    return eigenvalue
```
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-13:-Finding-an-eigenvalue.).*
### <span style="color:blue">Exercise 14</span>**: Finding an eigenvector.
**Inputs:**
1. A $2 \times 2$ matrix $A$.
2. An eigenvalue $x$ of matrix $A$.
**Output:** Return any non-zero eigenvector of $A$ that is associated with $x$.
<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
A matrix and an eigenvalue will have multiple eigenvectors (infinitely many, in fact), but you only need to find one.<br/>
Try treating the elements of the vector as variables in a system of two equations. Watch out for division by $0$!
</details>
```
@exercise
def find_eigenvector(a : Matrix, x : float) -> Matrix:
    # Solve (A - xI)V = 0. From the first row: (a00 - x)*v0 + a01*v1 = 0
    if a[0][1] != 0:
        return [[a[0][1]], [x - a[0][0]]]
    # a01 = 0: if a00 != x, the first row forces v0 = 0 (and a11 must equal x)
    if a[0][0] != x:
        return [[0], [1]]
    # a01 = 0 and a00 = x: satisfy the second row a10*v0 + (a11 - x)*v1 = 0
    if a[1][0] != 0:
        return [[x - a[1][1]], [a[1][0]]]
    return [[1], [0]]
```
*Can't come up with a solution? See the explained solution in the [Linear Algebra Workbook](./Workbook_LinearAlgebra.ipynb#Exercise-14**:-Finding-an-eigenvector.).*
```
import gym
import numpy as np
import torch
import wandb
import pandas as pd
import argparse
import pickle
import random
import sys
sys.path.append('/Users/shiro/research/projects/rl-nlp/can-wikipedia-help-offline-rl/code')
from decision_transformer.evaluation.evaluate_episodes import (
evaluate_episode,
evaluate_episode_rtg,
)
from decision_transformer.models.decision_transformer import DecisionTransformer
from decision_transformer.models.mlp_bc import MLPBCModel
from decision_transformer.training.act_trainer import ActTrainer
from decision_transformer.training.seq_trainer import SequenceTrainer
from utils import get_optimizer
import os
from tqdm.notebook import tqdm
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_style("ticks")
sns.set_context("paper", 1.5, {"lines.linewidth": 2})
def discount_cumsum(x, gamma):
    discount_cumsum = np.zeros_like(x)
    discount_cumsum[-1] = x[-1]
    for t in reversed(range(x.shape[0] - 1)):
        discount_cumsum[t] = x[t] + gamma * discount_cumsum[t + 1]
    return discount_cumsum
def prepare_data(variant):
    env_name, dataset = variant["env"], variant["dataset"]
    model_type = variant["model_type"]
    exp_prefix = 'gym-experiment'
    group_name = f"{exp_prefix}-{env_name}-{dataset}"
    exp_prefix = f"{group_name}-{random.randint(int(1e5), int(1e6) - 1)}"
    if env_name == "hopper":
        env = gym.make("Hopper-v3")
        max_ep_len = 1000
        env_targets = [3600, 1800]  # evaluation conditioning targets
        scale = 1000.0  # normalization for rewards/returns
    elif env_name == "halfcheetah":
        env = gym.make("HalfCheetah-v3")
        max_ep_len = 1000
        env_targets = [12000, 6000]
        scale = 1000.0
    elif env_name == "walker2d":
        env = gym.make("Walker2d-v3")
        max_ep_len = 1000
        env_targets = [5000, 2500]
        scale = 1000.0
    elif env_name == "reacher2d":
        from decision_transformer.envs.reacher_2d import Reacher2dEnv
        env = Reacher2dEnv()
        max_ep_len = 100
        env_targets = [76, 40]
        scale = 10.0
    else:
        raise NotImplementedError
    if model_type == "bc":
        # since BC ignores target, no need for different evaluations
        env_targets = env_targets[:1]
    state_dim = env.observation_space.shape[0]
    act_dim = env.action_space.shape[0]
    # load dataset
    dataset_path = f"../data/{env_name}-{dataset}-v2.pkl"
    with open(dataset_path, "rb") as f:
        trajectories = pickle.load(f)
    # save all path information into separate lists
    mode = variant.get("mode", "normal")
    states, traj_lens, returns = [], [], []
    for path in trajectories:
        if mode == "delayed":  # delayed: all rewards moved to end of trajectory
            path["rewards"][-1] = path["rewards"].sum()
            path["rewards"][:-1] = 0.0
        states.append(path["observations"])
        traj_lens.append(len(path["observations"]))
        returns.append(path["rewards"].sum())
    traj_lens, returns = np.array(traj_lens), np.array(returns)
    # used for input normalization
    states = np.concatenate(states, axis=0)
    state_mean, state_std = np.mean(states, axis=0), np.std(states, axis=0) + 1e-6
    num_timesteps = sum(traj_lens)
    print("=" * 50)
    print(f"Starting new experiment: {env_name} {dataset}")
    print(f"{len(traj_lens)} trajectories, {num_timesteps} timesteps found")
    print(f"Average return: {np.mean(returns):.2f}, std: {np.std(returns):.2f}")
    print(f"Max return: {np.max(returns):.2f}, min: {np.min(returns):.2f}")
    print("=" * 50)
    pct_traj = variant.get("pct_traj", 1.0)
    # only train on top pct_traj trajectories (for %BC experiment)
    num_timesteps = max(int(pct_traj * num_timesteps), 1)
    sorted_inds = np.argsort(returns)  # lowest to highest
    num_trajectories = 1
    timesteps = traj_lens[sorted_inds[-1]]
    ind = len(trajectories) - 2
    while ind >= 0 and timesteps + traj_lens[sorted_inds[ind]] < num_timesteps:
        timesteps += traj_lens[sorted_inds[ind]]
        num_trajectories += 1
        ind -= 1
    sorted_inds = sorted_inds[-num_trajectories:]
    # used to reweight sampling so we sample according to timesteps instead of trajectories
    p_sample = traj_lens[sorted_inds] / sum(traj_lens[sorted_inds])
    return trajectories, sorted_inds, state_dim, act_dim, max_ep_len, state_mean, state_std, num_trajectories, p_sample, scale
def get_batch(
    batch_size,
    max_len,
    trajectories,
    sorted_inds,
    state_dim,
    act_dim,
    max_ep_len,
    state_mean,
    state_std,
    num_trajectories,
    p_sample,
    scale,
    device,
):
    batch_inds = np.random.choice(
        np.arange(num_trajectories),
        size=batch_size,
        replace=True,
        p=p_sample,  # reweights so we sample according to timesteps
    )
    s, a, r, d, rtg, timesteps, mask = [], [], [], [], [], [], []
    for i in range(batch_size):
        traj = trajectories[int(sorted_inds[batch_inds[i]])]
        si = random.randint(0, traj["rewards"].shape[0] - 1)
        # get sequences from dataset
        s.append(traj["observations"][si : si + max_len].reshape(1, -1, state_dim))
        a.append(traj["actions"][si : si + max_len].reshape(1, -1, act_dim))
        r.append(traj["rewards"][si : si + max_len].reshape(1, -1, 1))
        if "terminals" in traj:
            d.append(traj["terminals"][si : si + max_len].reshape(1, -1))
        else:
            d.append(traj["dones"][si : si + max_len].reshape(1, -1))
        timesteps.append(np.arange(si, si + s[-1].shape[1]).reshape(1, -1))
        timesteps[-1][timesteps[-1] >= max_ep_len] = max_ep_len - 1  # padding cutoff
        rtg.append(
            discount_cumsum(traj["rewards"][si:], gamma=1.0)[
                : s[-1].shape[1] + 1
            ].reshape(1, -1, 1)
        )
        if rtg[-1].shape[1] <= s[-1].shape[1]:
            rtg[-1] = np.concatenate([rtg[-1], np.zeros((1, 1, 1))], axis=1)
        # padding and state + reward normalization
        tlen = s[-1].shape[1]
        s[-1] = np.concatenate(
            [np.zeros((1, max_len - tlen, state_dim)), s[-1]], axis=1
        )
        s[-1] = (s[-1] - state_mean) / state_std
        a[-1] = np.concatenate(
            [np.ones((1, max_len - tlen, act_dim)) * -10.0, a[-1]], axis=1
        )
        r[-1] = np.concatenate([np.zeros((1, max_len - tlen, 1)), r[-1]], axis=1)
        d[-1] = np.concatenate([np.ones((1, max_len - tlen)) * 2, d[-1]], axis=1)
        rtg[-1] = (
            np.concatenate([np.zeros((1, max_len - tlen, 1)), rtg[-1]], axis=1)
            / scale
        )
        timesteps[-1] = np.concatenate(
            [np.zeros((1, max_len - tlen)), timesteps[-1]], axis=1
        )
        mask.append(
            np.concatenate(
                [np.zeros((1, max_len - tlen)), np.ones((1, tlen))], axis=1
            )
        )
    s = torch.from_numpy(np.concatenate(s, axis=0)).to(
        dtype=torch.float32, device=device
    )
    a = torch.from_numpy(np.concatenate(a, axis=0)).to(
        dtype=torch.float32, device=device
    )
    r = torch.from_numpy(np.concatenate(r, axis=0)).to(
        dtype=torch.float32, device=device
    )
    d = torch.from_numpy(np.concatenate(d, axis=0)).to(
        dtype=torch.long, device=device
    )
    rtg = torch.from_numpy(np.concatenate(rtg, axis=0)).to(
        dtype=torch.float32, device=device
    )
    timesteps = torch.from_numpy(np.concatenate(timesteps, axis=0)).to(
        dtype=torch.long, device=device
    )
    mask = torch.from_numpy(np.concatenate(mask, axis=0)).to(device=device)
    return s, a, r, d, rtg, timesteps, mask
seed=666
epoch=1
env_name='hopper'
reward_state_action = 'state'
model_name = 'igpt'
torch.manual_seed(seed)
dataset_name = 'medium'
# model_names = ['gpt2', 'igpt', 'dt'] # ['gpt2', 'igpt', 'dt']
grad_norms_list = []
if model_name == 'gpt2':
    pretrained_lm1 = 'gpt2'
elif model_name == 'clip':
    pretrained_lm1 = 'openai/clip-vit-base-patch32'
elif model_name == 'igpt':
    pretrained_lm1 = 'openai/imagegpt-small'
elif model_name == 'dt':
    pretrained_lm1 = False
variant = {
    'embed_dim': 768,
    'n_layer': 12,
    'n_head': 1,
    'activation_function': 'relu',
    'dropout': 0.2,  # 0.1
    'load_checkpoint': False if epoch == 0 else f'../checkpoints/{model_name}_medium_{env_name}_666/model_{epoch}.pt',
    'seed': seed,
    'outdir': f"checkpoints/{model_name}_{dataset_name}_{env_name}_{seed}",
    'env': env_name,
    'dataset': dataset_name,
    'model_type': 'dt',
    'K': 20,  # 2
    'pct_traj': 1.0,
    'batch_size': 100,  # 64
    'num_eval_episodes': 100,
    'max_iters': 40,
    'num_steps_per_iter': 2500,
    'pretrained_lm': pretrained_lm1,
    'gpt_kmeans': None,
    'kmeans_cache': None,
    'frozen': False,
    'extend_positions': False,
    'share_input_output_proj': True
}
os.makedirs(variant["outdir"], exist_ok=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
trajectories, sorted_inds, state_dim, act_dim, max_ep_len, state_mean, state_std, num_trajectories, p_sample, scale = prepare_data(variant)
K = variant["K"]
batch_size = variant["batch_size"]
loss_fn = lambda s_hat, a_hat, r_hat, s, a, r: torch.mean((a_hat - a) ** 2)
model = DecisionTransformer(
    args=variant,
    state_dim=state_dim,
    act_dim=act_dim,
    max_length=K,
    max_ep_len=max_ep_len,
    hidden_size=variant["embed_dim"],
    n_layer=variant["n_layer"],
    n_head=variant["n_head"],
    n_inner=4 * variant["embed_dim"],
    activation_function=variant["activation_function"],
    n_positions=1024,
    resid_pdrop=variant["dropout"],
    attn_pdrop=0.1,
)
if variant["load_checkpoint"]:
    state_dict = torch.load(variant["load_checkpoint"], map_location=torch.device('cpu'))
    model.load_state_dict(state_dict)
    print(f"Loaded from {variant['load_checkpoint']}")
# model.eval()
# grad = {}
# def get_grad(name):
# def hook(model, input, output):
# grad[name] = output.detach()
# return hook
# for block_id in range(len(model.transformer.h)):
# model.transformer.h[block_id].ln_1.register_backward_hook(get_grad(f'{block_id}.ln_1'))
# model.transformer.h[block_id].attn.c_attn.register_backward_hook(get_grad(f'{block_id}.attn.c_attn'))
# model.transformer.h[block_id].attn.c_proj.register_backward_hook(get_grad(f'{block_id}.attn.c_proj'))
# model.transformer.h[block_id].attn.attn_dropout.register_backward_hook(get_grad(f'{block_id}.attn.attn_dropout'))
# model.transformer.h[block_id].attn.resid_dropout.register_backward_hook(get_grad(f'{block_id}.attn.resid_dropout'))
# model.transformer.h[block_id].ln_2.register_backward_hook(get_grad(f'{block_id}.ln_2'))
# model.transformer.h[block_id].mlp.c_fc.register_backward_hook(get_grad(f'{block_id}.mlp.c_fc'))
# model.transformer.h[block_id].mlp.c_proj.register_backward_hook(get_grad(f'{block_id}.mlp.c_proj'))
# model.transformer.h[block_id].mlp.act.register_backward_hook(get_grad(f'{block_id}.mlp.act'))
# model.transformer.h[block_id].mlp.dropout.register_backward_hook(get_grad(f'{block_id}.mlp.dropout'))
states, actions, rewards, dones, rtg, timesteps, attention_mask = get_batch(
    batch_size,
    K,
    trajectories,
    sorted_inds,
    state_dim,
    act_dim,
    max_ep_len,
    state_mean,
    state_std,
    num_trajectories,
    p_sample,
    scale,
    device,
)
action_target = torch.clone(actions)
grads_list = []
for batch_id in tqdm(range(batch_size)):
    ##### Gradient computation #####
    action_target_batch = action_target[batch_id, :, :].unsqueeze(0)
    state_preds, action_preds, reward_preds, all_embs = model.forward(
        states[batch_id, :, :].unsqueeze(0),
        actions[batch_id, :, :].unsqueeze(0),
        rewards[batch_id, :, :].unsqueeze(0),
        rtg[batch_id, :-1].unsqueeze(0),
        timesteps[batch_id, :].unsqueeze(0),
        attention_mask=attention_mask[batch_id, :].unsqueeze(0),
    )
    act_dim = action_preds.shape[2]
    action_preds = action_preds.reshape(-1, act_dim)[attention_mask[batch_id, :].unsqueeze(0).reshape(-1) > 0]
    action_target_batch = action_target_batch.reshape(-1, act_dim)[
        attention_mask[batch_id, :].unsqueeze(0).reshape(-1) > 0
    ]
    model.zero_grad()
    loss = loss_fn(
        None,
        action_preds,
        None,
        None,
        action_target_batch,
        None,
    )
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), .25)
    grad_norm = {}
    for name, param in model.transformer.h.named_parameters():
        grad_norm[name] = torch.norm(param.grad.view(-1)).numpy()
plt.figure(figsize=(20, 4))
plt.bar(x=range(len(grad_norm)), height=list(grad_norm.values()), color=(0.372, 0.537, 0.537))
# plt.xticks([], [])
plt.xlabel('Parameter of Each Layer', fontsize=20)
plt.ylabel('Clipped Gradient Norm', fontsize=20)
plt.title('Gradient Norm of Each Parameter', fontsize=20)
plt.ylim(0, 0.25)
plt.tight_layout()
plt.savefig(f'figs/gradnorm_perparam_{epoch}_igpt_{env_name}_{dataset_name}_{seed}_{reward_state_action}.pdf')
plt.show()
df = pd.DataFrame([grad_norm]).astype(float)
df
df = pd.DataFrame([grad_norm]).astype(float)
x = np.linspace(0, 1, len(grad_norm))#[(i+1)/len(grad_norm) for i in range(len(grad_norm))]
df.plot.barh(stacked=True, figsize=(24, 40)) # color=plt.cm.Blues_r(x),
plt.legend(loc="lower left", ncol=12)
plt.show()
grad_norm_others = {
    '0.ln_1.weight': 0,
    '0.ln_1.bias': 0,
    'others': 0
}
total = np.sum(list(grad_norm.values()))
for key, value in grad_norm.items():
    if key == '0.ln_1.weight' or key == '0.ln_1.bias':
        grad_norm_others[key] = value / total
    else:
        grad_norm_others['others'] += value / total
# note: list() is needed here - np.sum on a dict_values object does not sum the values
total = np.sum(list(grad_norm_others.values()))
df_others = pd.DataFrame([grad_norm_others]).astype(float)
x = np.linspace(0, 1, len(grad_norm_others))#[(i+1)/len(grad_norm) for i in range(len(grad_norm))]
df_others.plot.barh(stacked=True, figsize=(10, 5), color=[(0.372, 0.537, 0.537), (0.627, 0.352, 0.470), (0.733, 0.737, 0.870)], fontsize=12) # color=plt.cm.Blues_r(x),
plt.yticks([], [])
plt.xlim(0, 1)
plt.title('Gradient Norm per Parameter', fontsize=12)
plt.xlabel('Clipped Gradient Norm Ratio', fontsize=12)
plt.legend(loc="lower left", ncol=12, fontsize=12)
plt.savefig(f'figs/gradnorm_perparam_ratio_{epoch}_igpt_{env_name}_{dataset_name}_{seed}_{reward_state_action}.pdf')
plt.show()
total = np.sum(list(grad_norm.values()))
total
grad_norm_others
grad_norm
df['0.ln_1.weight']
```
Author: Vo, Huynh Quang Nguyen
# Acknowledgments
The contents of this note are based on the lecture notes and the materials from the sources below. All rights reserved to respective owners.
1. **Deep Learning** textbook by Dr Ian Goodfellow, Prof. Yoshua Bengio, and Prof. Aaron Courville. Available at: [Deep Learning textbook](https://www.deeplearningbook.org/)
2. **Machine Learning with Python** course given by Prof. Alexander Jung from Aalto University, Finland.
3. **Machine Learning** course by Prof. Andrew Ng. Available in Coursera: [Machine Learning](https://www.coursera.org/learn/machine-learning)
4. **Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow** by Aurélien Géron.
## Disclaimer
1. This lecture note serves as a summary of fundamental concepts that are commonly used in machine learning. Thus, we strongly recommend that this note be used strictly as a reference:
* For lecturers and teachers, as a guide to which topics to include when organizing their own machine learning classes, and
* For learners, to get an overview of machine learning.
2. This lecture note is the second of a two-episode series about the fundamentals of data science and machine learning. Thus, we strongly recommend reading this note after having finished the previous one.
# Overview of Machine Learning
## Components of Machine Learning
1. As mentioned in the previous note, machine learning (ML) programs are algorithms that are capable of learning from data. According to Tom Mitchell's definition, which was also introduced in the previous note, an ML program is a program that "learns from **experience $\mathcal{E}$** with respect to some **task $\mathcal{T}$** and some **performance measure $\mathcal{P}$**, if its performance on $\mathcal{T}$, as measured by $\mathcal{P}$, improves with experience $\mathcal{E}$".
2. Let's dive into the details of each component mentioned in Mitchell's definition.
### Task
1. ML tasks are usually described in terms of how the machine learning system should process an **example**, the latter of which is a collection of features that have been quantitatively measured from some object or event that we want the machine learning system to process. An example is typically represented as a vector $\mathbf{x} \in \mathbb{R}^n$ where each entry $x_i$ of the vector is another feature (also known as variable).
2. Here is a list of common tasks in ML. Note that we have already encountered most of them in data science.
* **Classification**: In this type of task, the computer program is asked to specify which of $k$ categories some input belongs to. To solve this task, the learning algorithm is usually asked to produce a function $f : \mathbb{R}^n \rightarrow {1, . . . , k}$. When $y = f(\mathbf{x})$, the model assigns an input $\mathbf{x}$ to a category identified by numeric code $y$. A harder version of this task is **classification with missing inputs**, where every measurement in its input is not guaranteed to always be provided.
* **Regression**: In this type of task, the computer program is asked to predict a numerical value given some input. To solve this task, the learning algorithm is asked to output a function $f : \mathbb{R}^n \rightarrow \mathbb{R}$. This type of task is similar to classification, except that the format of output is different.
* **Machine translation**: In a machine translation task, the input already consists of a sequence of symbols in some language, and the computer program must convert this into a sequence of symbols in another language.
* **Anomaly detection**: In this type of task, the computer program sifts through a set of events or objects, and flags some of them as being unusual or atypical.
* **Synthesis and sampling**: In this type of task, the machine learning algorithm is asked to generate new examples that are similar to those in the training data.
* **Imputation of missing values**: In this type of task, the machine learning algorithm is given a new example $\mathbf{x} \in \mathbb{R}^n$, but with some entries $x_i$ of $\mathbf{x}$ missing. The algorithm must provide a prediction of the values of the missing entries.
* **Denoising**: In this type of task, the machine learning algorithm is given as input a corrupted example $\tilde{\mathbf{x}} \in \mathbb{R}^n$ obtained by an unknown corruption process from a clean example $\mathbf{x} \in \mathbb{R}^n$. The learner must predict the clean example $\mathbf{x}$ from its corrupted version $\tilde{\mathbf{x}}$, or more generally predict the conditional probability distribution $p(\mathbf{x} \mid \tilde{\mathbf{x}})$.
* **Density estimation or probability mass function estimation**: In the density estimation problem, the machine learning algorithm is asked to learn a function $p_{\text{model}} : \mathbb{R}^n \rightarrow \mathbb{R}$, where $p_{\text{model}}(\mathbf{x})$ can be interpreted as a probability density function (if $\mathbf{x}$ is continuous) or a probability mass function (if $\mathbf{x}$ is discrete) on the space that the examples were drawn from. To do such a task well (we will specify exactly what that means when we discuss performance measures $\mathcal{P}$), the algorithm needs to learn the structure of the data it has seen.
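To make the classification and regression signatures concrete, here is a minimal sketch (the rules and weights below are illustrative toys, not from the lecture):

```python
# Classification: f maps a feature vector in R^n to one of k category labels
def classify(x):
    # toy rule: label 1 if the first feature dominates, else label 2
    return 1 if x[0] > x[1] else 2

# Regression: f maps a feature vector in R^n to a real number
def regress(x):
    # toy linear model with fixed (made-up) weights
    return 0.5 * x[0] + 2.0 * x[1]

print(classify([3.0, 1.0]))  # 1
print(regress([3.0, 1.0]))   # 3.5
```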
<a href="https://colab.research.google.com/github/andrewcgaitskell/dmtoolnotes/blob/main/Lists%2C_Arrays%2C_Tensors%2C_Dataframes%2C_and_Datasets.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
https://colab.research.google.com/github/tensorpig/learning_tensorflow/blob/master/Lists%2C_Arrays%2C_Tensors%2C_Dataframes%2C_and_Datasets.ipynb#scrollTo=0-i0PylHrjWs
```
import pandas as pd
data = [[1,2,3],[4.0,5.0,6.0],['100','101','102']]
data
data_df_raw = pd.DataFrame(data=data)
data_df = data_df_raw.T
data_df.columns=['legs','weight','version']
data_df
```
Let's pretend we have a simple regression-like problem. We start out with 3 features describing a robotic spider we're building, for example: number of legs (feature 1), weight (feature 2), and version number (feature 3). Say that we have so far built three prototype robots, so we have 3 values for each feature.
```
data_dict = {'legs':[1,2,3],
'weight':[4.0,5.0,6.0],
'version':['100','101','102']}
data_df_dict = pd.DataFrame(data=data_dict)
data_df_dict
feature1 = [1,2,3]
feature2 = [4.0,5.0,6.0]
feature3 = ['100','101','102']
print(type(feature1))
data_df['legs'].tolist()
data_df.iloc[0]
```
We'll look at the various data structures you will probably run into when doing ML/AI in Python and TensorFlow, combining the features into matrices and so on, starting from basic Python lists and progressing up to Datasets, which you will typically feed into your neural network.
First up: the basic python LIST
```
list2d = [feature1, feature2, feature3]
print(type(list2d))
print(list2d)
print('({},{})'.format(len(list2d),len(list2d[0]))) #nr of rows and cols
print(list2d[0]) #first row
print([row[0] for row in list2d]) #first col
print(list2d[0][0]) # value at 0,0
print([[row[i] for row in list2d] for i in range(len(list2d[0]))]) # transpose to make more like excel sheet
```
A python list is a collection of any data types. The items in a list can be lists again, and there are no requirements for the items in a list to be of the same type, or of the same length.
There is also the Tuple, which uses () around the features instead of []. A Tuple works the same way but, once created, cannot be changed.
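A quick illustration of the difference: a list can be modified in place, while a tuple raises an error:

```python
features_list = [1, 2, 3]
features_list[0] = 99          # fine: lists are mutable
print(features_list)           # [99, 2, 3]

features_tuple = (1, 2, 3)
try:
    features_tuple[0] = 99     # tuples are immutable
except TypeError as e:
    print('cannot modify a tuple:', e)
```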
Next up the Numpy ARRAY
```
import numpy as np
array2d = np.array([feature1, feature2, feature3], dtype=object)
print(type(array2d))
print(array2d)
print(array2d.shape) #nr of rows and cols
print(array2d[0,:]) #first element/row = array, could also be just array2d[0]
print(array2d[:,0]) #first column, or actually first element from each 1d array in the 2d array
print(array2d[0,0]) # value at 0,0
print(array2d.transpose()) #more like excel sheet
```
A numpy array expects all items to be of the same type. If the dtype=object is not used above, all of the values will be converted to strings as this is the minimum type that can hold all values. A numpy array can handle features of different length, but then each element in the array will be of type 'list', so no direct indexing like you would expect from a matrix.
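The coercion mentioned above is easy to demonstrate: without `dtype=object`, NumPy promotes every value to the common string type:

```python
import numpy as np

mixed = np.array([[1, 2, 3], [4.0, 5.0, 6.0], ['100', '101', '102']])
print(mixed.dtype)   # a Unicode string dtype such as <U32
print(mixed[0, 0])   # '1' - the int became a string
```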
Next up the Pandas DATAFRAME
```
import pandas as pd
dataframe = pd.DataFrame()
dataframe['feature1'] = feature1
dataframe['feature2'] = feature2
dataframe['feature3'] = feature3
print(type(dataframe))
print(dataframe)
print(dataframe.shape)
print(dataframe.iloc[0].tolist()) # first row, without .tolist() it also shows the column headers as row headers. You can also use loc[0], where 0 is now value in the index column (same as row number here)
print(dataframe['feature1'].tolist()) #first column, without .tolist() it also shows the index. You can also use .iloc[:,0]
print(dataframe.iloc[0,0]) #value at 0,0
```
A Pandas dataframe is basically an excel sheet. It can handle features with different datatypes, but not different lengths of feature arrays.
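Both properties can be checked directly (a minimal sketch with toy columns): mixed dtypes are accepted, but a column of the wrong length raises an error.

```python
import pandas as pd

df = pd.DataFrame()
df['floats'] = [1.0, 2.0, 3.0]
df['strings'] = ['100', '101', '102']  # mixed column dtypes are fine
print(df.dtypes)

try:
    df['too_short'] = [1.0, 2.0]  # a shorter feature is rejected
except ValueError as e:
    print('length mismatch:', e)
```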
Next up TENSORs
```
import tensorflow as tf
feature3int = [int(x) for x in feature3 ] # map string values to numerical representation (in this case the string is a number so easy)
tensorRank2 = tf.constant([feature1, feature2, feature3int], dtype=float)
print(type(tensorRank2))
print(tensorRank2)
print(tensorRank2.shape)
print(tensorRank2[0,:].numpy()) #first row, without .numpy() a tensor object is returned. Could also use just [0]
print(tensorRank2[:,0].numpy()) #first col
print(tensorRank2[0,0].numpy()) # value at 0,0
print(tf.transpose(tensorRank2)) # more like excel sheet
```
Tensors are n-dimensional generalizations of matrices. Vectors are tensors too, and can be seen as 1-dimensional matrices. All are represented using n-dimensional arrays with a uniform type and features of uniform length. I had to convert the feature3 list to int, although I could also have converted the feature1 and feature2 lists to strings.
Next up DATASETs
```
feature1f = [float(x) for x in feature1 ] # map string values to numerical representation
feature3f = [float(x) for x in feature3 ] # map string values to numerical representation
dataset = tf.data.Dataset.from_tensor_slices([feature1f, feature2, feature3f])
print(type(dataset))
print(dataset.element_spec)
print(dataset)
print(list(dataset.as_numpy_iterator()))
print(list(dataset.take(1).as_numpy_iterator())[0]) #first "row"
print(list(dataset.take(1).as_numpy_iterator())[0][0]) # value at 0,0
```
A Dataset is a sequence of elements, each element consisting of one or more components. In this case, the Dataset is a TensorSliceDataset whose elements are tensors of shape (3,), which, when converted to a list, are shown to wrap arrays of 3 floats as expected.
A Dataset is aimed at creating data pipelines, which get data from somewhere, process and transform it (typically in smaller batches), and then output it to a neural network (or somewhere else). A main goal of such a pipeline is to avoid loading (all) the data into memory, enabling large datasets to be handled in smaller pieces. As such, getting values for specific elements in the dataset is not what Datasets are built for (and it shows).
```
datasett = tf.data.Dataset.from_tensor_slices((feature1, feature2, feature3))
print(type(datasett))
print(datasett.element_spec)
print(datasett)
print(list(datasett.as_numpy_iterator()))
```
If you create a Dataset from a tuple of arrays, instead of an array of arrays, each element is now a tuple of 3 TensorSpecs of different types and shape (), which can be seen to wrap a tuple of transposed feature values.
This shows that from_tensor_slices() "slices" the tensors along the first dimension.
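The same "slice along the first dimension" behaviour can be mimicked in plain Python without TensorFlow — a rough analogy for intuition, not the actual implementation:

```python
feature1 = [1.0, 2.0, 3.0]
feature2 = [4.0, 5.0, 6.0]
feature3 = [100.0, 101.0, 102.0]

# from_tensor_slices([f1, f2, f3]) ~ one element per row of the stacked matrix
rows = [feature1, feature2, feature3]
print(rows[0])        # [1.0, 2.0, 3.0]

# from_tensor_slices((f1, f2, f3)) ~ slice each component in lockstep (a zip),
# yielding transposed tuples
transposed = list(zip(feature1, feature2, feature3))
print(transposed[0])  # (1.0, 4.0, 100.0)
```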
| github_jupyter |
# Movie Review Text Classification with Text processing
This tutorial: https://www.tensorflow.org/tutorials/keras/text_classification
```
!pip install -q tf-nightly
import tensorflow as tf
from tensorflow import keras
!pip install -q tfds-nightly
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
import numpy as np
print(tf.__version__)
(train_data, test_data), info = tfds.load(
# Use the version pre-encoded with an ~8k vocabulary.
'imdb_reviews/subwords8k',
# Return the train/test datasets as a tuple.
split = (tfds.Split.TRAIN, tfds.Split.TEST),
# Return (example, label) pairs from the dataset (instead of a dictionary).
as_supervised=True,
# Also return the `info` structure.
with_info=True)
encoder = info.features['text'].encoder
print ('Vocabulary size: {}'.format(encoder.vocab_size))
sample_string = 'Hello TensorFlow.'
encoded_string = encoder.encode(sample_string)
print ('Encoded string is {}'.format(encoded_string))
original_string = encoder.decode(encoded_string)
print ('The original string: "{}"'.format(original_string))
assert original_string == sample_string
for ts in encoded_string:
print ('{} ----> {}'.format(ts, encoder.decode([ts])))
for train_example, train_label in train_data.take(5):
print('Encoded text:', train_example[:10].numpy())
print('Label:', train_label.numpy())
print(encoder.decode(train_example)[:150])
BUFFER_SIZE = 1000
train_batches = (
train_data
.shuffle(BUFFER_SIZE)
.padded_batch(32, padded_shapes=([None],[])))
test_batches = (
test_data
.padded_batch(32, padded_shapes=([None],[])))
train_batches = (
train_data
.shuffle(BUFFER_SIZE)
.padded_batch(32))
test_batches = (
test_data
.padded_batch(32))
for example_batch, label_batch in train_batches.take(2):
print("Batch shape:", example_batch.shape)
print("label shape:", label_batch.shape)
model = keras.Sequential([
keras.layers.Embedding(encoder.vocab_size, 16),
keras.layers.GlobalAveragePooling1D(),
keras.layers.Dense(1)])
model.summary()
model.compile(optimizer='adam',
loss=tf.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_batches,
epochs=10,
validation_data=test_batches,
validation_steps=30)
loss, accuracy = model.evaluate(test_batches)
print("Loss: ", loss)
print("Accuracy: ", accuracy)
history_dict = history.history
history_dict.keys()
import matplotlib.pyplot as plt
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
```
I want a probabilistic binary output, but the prediction is actually a raw numeric value (a logit; the logit function is the inverse of the sigmoid). So after training, we add the sigmoid function as the last layer to map outputs into the range \[0,1\].
```
probability_model = tf.keras.Sequential([
model,
tf.keras.layers.Activation('sigmoid')
])
reviews = list(test_data.take(30))
for (review, label) in reviews:
reviewPredictable = tf.expand_dims(review, 0)
[[p]] = probability_model.predict(reviewPredictable)
l = label.numpy()
print('actual', l, 'predicted', p, "\x1b[32m\"correct\"\x1b[0m" if (l==1 and p>=0.5) or (l==0 and p<0.5) else "\x1b[31m\"wrong\"\x1b[0m")
```
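For intuition on the logit/sigmoid relationship described above, here is a minimal numeric sketch in plain Python, independent of the model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logit(p):
    return math.log(p / (1.0 - p))  # inverse of the sigmoid

print(sigmoid(0.0))   # 0.5 -- a raw output of 0 means "undecided"
print(sigmoid(4.0) > 0.98)                     # large positive logits approach 1
print(abs(logit(sigmoid(1.7)) - 1.7) < 1e-9)   # round trip recovers the logit
```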
| github_jupyter |
```
import pandas as pd
import numpy as np
from analysis_utils import *
PAREDAO = "paredao13"
CAND1_PATH = "data/paredao13/flay.csv"
CAND2_PATH = "data/paredao13/thelma.csv"
CAND3_PATH = "data/paredao13/babu.csv"
DATE = 3
IGNORE_HASHTAGS = ["#bbb20", "#redebbb", "#bbb2020"]
candidate1_df = pd.read_csv(CAND1_PATH)
candidate2_df = pd.read_csv(CAND2_PATH)
candidate3_df = pd.read_csv(CAND3_PATH)
cand1 = candidate1_df[["tweet", "sentiment", "date", "likes_count", "retweets_count", "hashtags"]]
cand2 = candidate2_df[["tweet", "sentiment", "date", "likes_count", "retweets_count", "hashtags"]]
cand3 = candidate3_df[["tweet", "sentiment", "date", "likes_count", "retweets_count", "hashtags"]]
```
# Flayslene (eliminated)
```
cand1["sentiment"].hist()
```
# Thelma
```
cand2["sentiment"].hist()
```
# Babu
```
cand3["sentiment"].hist()
```
# Absolute counts
```
candidates = {"flayslene": cand1, "thelma": cand2, "babu": cand3}
qtds_df = get_raw_quantities(candidates)
qtds_df
qtds_df.plot.bar(rot=45, color=['green', 'gray', 'red'])
```
# Percentages relative to each candidate's total tweets
```
pcts_df = get_pct_by_candidate(candidates)
pcts_df
pcts_df.plot.bar(rot=45, color=['green', 'gray', 'red'])
```
# Percentages relative to the total tweets per category
```
qtds_df_copy = qtds_df.copy()
qtds_df["positivos"] /= qtds_df["positivos"].sum()
qtds_df["neutros"] /= qtds_df["neutros"].sum()
qtds_df["negativos"] /= qtds_df["negativos"].sum()
qtds_df
qtds_df.plot.bar(rot=45, color=['green', 'gray', 'red'])
```
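The column-wise division above turns raw counts into shares of each category's total; on a toy frame with made-up numbers the effect looks like this:

```python
import pandas as pd

counts = pd.DataFrame({'positivos': [10, 30, 60], 'negativos': [5, 5, 10]},
                      index=['a', 'b', 'c'])

# Dividing by the column sums makes every column add up to 1.
shares = counts / counts.sum()
print(shares)
print(shares.sum().tolist())  # each column now sums to 1.0
```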
# Tweets per day
```
names = list(candidates.keys())
tweets_by_day_df = get_tweets_by_day(candidates[names[0]], names[0])
for name in names[1:]:
current = get_tweets_by_day(candidates[name], name)
tweets_by_day_df = pd.concat([tweets_by_day_df, current]) # DataFrame.append was removed in pandas 2.0
tweets_by_day_df.transpose().plot()
```
# Hashtag analysis
```
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (20,10)
unique_df = get_unique_hashtags(list(candidates.values()))
unique_df.drop(index=IGNORE_HASHTAGS, inplace=True)
unique_df.sort_values(by="quantidade", ascending=False).head(30).plot.bar(rot=45)
alias = {"flayslene": "flay", "thelma": "thelma", "babu": "babu"}
fica_fora_df = get_fica_fora_quantities(unique_df, alias)
fica_fora_df
```
# Attribute selection
```
atributes_df = qtds_df_copy.join(pcts_df, rsuffix="_individual_pct")
atributes_df = atributes_df.join(qtds_df, rsuffix="_global_pct")
atributes_df = atributes_df.join(tweets_by_day_df)
atributes_df = atributes_df.join(fica_fora_df)
raw_participantes_info = get_participantes_info()[DATE]
print("Seguidores atualizados em:", raw_participantes_info["date"])
participantes_info = raw_participantes_info["infos"]
paredoes_info = get_paredoes_info()
followers = [participantes_info[participante]["seguidores"] for participante in atributes_df.index]
likes = [get_likes_count(candidates[participante]) for participante in atributes_df.index]
retweets = [get_retweets_count(candidates[participante]) for participante in atributes_df.index]
paredao_info = paredoes_info[PAREDAO]["candidatos"]
results_info = {candidate["nome"]: candidate["porcentagem"]/100 for candidate in paredao_info}
rejection = [results_info[participante] for participante in atributes_df.index]
atributes_df["likes"] = likes
atributes_df["retweets"] = retweets
atributes_df["seguidores"] = followers
atributes_df["rejeicao"] = rejection
atributes_df
atributes_df.to_csv("data/{}/paredao_atributes.csv".format(PAREDAO))
```
| github_jupyter |
```
# -*- coding: utf-8 -*-
"""
Created on Fri Nov 27 23:01:16 2015
@author: yilin
"""
# useful code: https://www.kaggle.com/cast42/rossmann-store-sales/xgboost-in-python-with-rmspe-v2/code
import pandas as pd
import numpy as np
import re
from dateutil.parser import parse
import random
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(context="paper", font="monospace")
import plotly
import plotly.plotly as py
py.sign_in('lemonsong', '3lcplsq1a3')
import plotly.graph_objs as go
#import datetime
from sklearn.utils import shuffle
from sklearn import preprocessing
from numpy import float32
from sklearn.preprocessing import Imputer
def getxy(x):
y = x.Sales
x.drop('Sales', axis=1, inplace=True)
#x.drop('Store', axis=1, inplace=True)
return x,y
data = pd.read_csv("train0forkagglewtcustomer.csv")
data1 = pd.read_csv("train1forkagglewtcustomer.csv")
data = pd.read_csv("train0forkagglewtcustomer.csv")
data = data[(data['Year']==2013) & (data['Month']==7) | (data['Year']==2014) & (data['Month']==7) |\
(data['Year']==2013) & (data['Month']==8) | (data['Year']==2014) & (data['Month']==8) |\
(data['Year']==2013) & (data['Month']==9) | (data['Year']==2014) & (data['Month']==9) |\
(data['Year']==2015) & (data['Month']==6) | (data['Year']==2014) & (data['Month']==5) |
(data['Year']==2015) & (data['Month']==7) ]
data1 = pd.read_csv("train1forkagglewtcustomer.csv")
data1 = data1[(data1['Year']==2013) & (data1['Month']==7) | (data1['Year']==2014) & (data1['Month']==7) |\
(data1['Year']==2013) & (data1['Month']==8) | (data1['Year']==2014) & (data1['Month']==8) |\
(data1['Year']==2013) & (data1['Month']==9) | (data1['Year']==2014) & (data1['Month']==9) |\
(data1['Year']==2015) & (data1['Month']==6) | (data1['Year']==2014) & (data1['Month']==5) |
(data1['Year']==2015) & (data1['Month']==7) ]
data=pd.DataFrame(data)
data.to_csv("bigml0.csv", index=False)
data1=pd.DataFrame(data1)
data = pd.read_csv("train0forkagglewtcustomer.csv")
data = data[(data['Year']==2015) & (data['Month']==6) | (data['Year']==2014) & (data['Month']==5) |
(data['Year']==2015) & (data['Month']==7) ]
data1 = pd.read_csv("train1forkagglewtcustomer.csv")
data1 = data1[(data1['Year']==2015) & (data1['Month']==6) | (data1['Year']==2014) & (data1['Month']==5) |
(data1['Year']==2015) & (data1['Month']==7) ]
data = pd.read_csv("train0forkagglewtcustomer.csv")
data = data[(data['Year']==2013) & (data['Month']==7) | (data['Year']==2014) & (data['Month']==7) |\
(data['Year']==2013) & (data['Month']==8) | (data['Year']==2014) & (data['Month']==8) |\
(data['Year']==2013) & (data['Month']==9) | (data['Year']==2014) & (data['Month']==9)]
data1 = pd.read_csv("train1forkagglewtcustomer.csv")
data1 = data1[(data1['Year']==2013) & (data1['Month']==7) | (data1['Year']==2014) & (data1['Month']==7) |\
(data1['Year']==2013) & (data1['Month']==8) | (data1['Year']==2014) & (data1['Month']==8) |\
(data1['Year']==2013) & (data1['Month']==9) | (data1['Year']==2014) & (data1['Month']==9)]
x,y=getxy(data)
x1,y1=getxy(data1)
```
## Split Data
```
def splitdata(x,y):# Split data into train and test
train, test = shuffle(x,y, random_state=15) # shuffles x and y in unison; 'train' holds the features, 'test' holds the labels
offset = int(train.shape[0] * 0.7)
x_train, y_train = train[:offset], test[:offset]
x_test, y_test = train[offset:], test[offset:]
return x_train, y_train,x_test, y_test
x_train, y_train,x_test, y_test = splitdata(x,y)
print(x_train.columns)
x_train1, y_train1,x_test1, y_test1 = splitdata(x1,y1)
```
## Build Model
##### DT
```
from sklearn import tree
clf2 = tree.DecisionTreeRegressor(max_features='auto')
clf2.fit(x_train, y_train)
y_pred2 = clf2.predict(x_test)
from sklearn import tree
clf12 = tree.DecisionTreeRegressor(max_features='auto')
clf12.fit(x_train1, y_train1)
y_pred12 = clf12.predict(x_test1)
```
##### KNN
```
from sklearn.neighbors import KNeighborsRegressor
clf3 = KNeighborsRegressor(n_neighbors=5,weights='distance',algorithm='auto')
clf3.fit(x_train, y_train)
y_pred3=clf3.predict(x_test)
from sklearn.neighbors import KNeighborsRegressor
clf13 = KNeighborsRegressor(n_neighbors=10,weights='distance',algorithm='auto')
clf13.fit(x_train1, y_train1)
y_pred13=clf13.predict(x_test1)
```
##### RF
```
from sklearn.ensemble import RandomForestRegressor
clf4 = RandomForestRegressor(n_estimators=300)
clf4.fit(x_train, y_train)
y_pred4=clf4.predict(x_test)
from sklearn.ensemble import RandomForestRegressor
clf14 = RandomForestRegressor(n_estimators=300)
clf14.fit(x_train1, y_train1)
y_pred14=clf14.predict(x_test1)
```
#### Feature Importance
```
def getfeature_importance(df,clf):
feature_importance= pd.concat([pd.Series(list(df.columns),name='Feature'),\
pd.Series(clf.feature_importances_,name='Importance')],\
axis=1).sort_values('Importance', ascending=True)
return feature_importance
feature_importance=getfeature_importance(x_train,clf4)
feature_importance1=getfeature_importance(x_train1,clf14)
featureimportance = pd.merge(feature_importance,feature_importance1,on="Feature", how='outer')
print(featureimportance)
featureimportance.to_csv("featureimportance.csv", index=False)
%matplotlib inline
trace1 = go.Bar(
y=featureimportance.Feature,
x=featureimportance.Importance_x,
name='Promo2==0',
orientation = 'h',
marker = dict(
color = 'rgba(55, 128, 191, 0.6)',
line = dict(
color = 'rgba(55, 128, 191, 1.0)',
width = 1,
)
)
)
trace2 = go.Bar(
y=featureimportance.Feature,
x=featureimportance.Importance_y,
name='Promo2==1',
orientation = 'h',
marker = dict(
color = 'rgba(255, 153, 51, 0.6)',
line = dict(
color = 'rgba(255, 153, 51, 1.0)',
width = 1,
)
)
)
data = [trace1, trace2]
layout = go.Layout(
barmode='group'
)
fig = go.Figure(data=data, layout=layout)
plot_url = py.plot(fig, filename='marker-h-bar')
import plotly.tools as tls
tls.embed("https://plot.ly/~lemonsong/43/promo20-vs-promo21/")
```
###### Predict based on the average of three algorithms
```
predcollect=pd.concat([pd.Series(y_pred2,name='dt'),pd.Series(y_pred3,name='knn'),pd.Series(y_pred4,name='rf')], axis=1)
pred1collect=pd.concat([pd.Series(y_pred12,name='dt'),pd.Series(y_pred13,name='knn'),pd.Series(y_pred14,name='rf')], axis=1)
predavg= predcollect.mean(axis=1)
pred1avg= pred1collect.mean(axis=1)
```
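The averaging above is just a column-wise concat followed by a row mean; a toy sketch with invented predictions shows the mechanics:

```python
import pandas as pd

p_dt = pd.Series([100.0, 200.0], name='dt')
p_knn = pd.Series([110.0, 190.0], name='knn')
p_rf = pd.Series([120.0, 210.0], name='rf')

ensemble = pd.concat([p_dt, p_knn, p_rf], axis=1)  # one column per model
avg = ensemble.mean(axis=1)                        # row-wise average
print(avg.tolist())  # [110.0, 200.0]
```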
## Evaluation
```
def rmspe(y, yhat):
return np.sqrt(np.mean((yhat/y-1) ** 2))
def rmspe_xg(yhat, y):
y = np.expm1(y)
yhat = np.expm1(yhat)
print(y)
return "rmspe", rmspe(y,yhat)
```
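As a sanity check on the RMSPE metric defined above (toy numbers): predictions that are off by +10%, -10%, and 0% should give sqrt((0.01 + 0.01 + 0)/3) ≈ 0.0816.

```python
import numpy as np

def rmspe(y, yhat):
    return np.sqrt(np.mean((yhat / y - 1) ** 2))

y = np.array([100.0, 200.0, 400.0])
yhat = np.array([110.0, 180.0, 400.0])  # +10%, -10%, exact
print(rmspe(y, yhat))  # ~0.0816
```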
Function to calculate RMSPE across the combined Promo2==0 and Promo2==1 test sets
```
def compare(y_test,y_pred,y_test1,y_pred1):
y_test=np.append(y_test,y_test1)
y_pred=np.append(y_pred,y_pred1)
return rmspe(y_test,y_pred)
```
##### DT
Promo2==0
```
print(rmspe(y_test,y_pred2))
```
Promo2==1
```
print(rmspe(y_test1,y_pred12))
```
Promo2==0 & Promo2==1
```
print(compare(y_test,y_pred2,y_test1,y_pred12))
```
##### KNN
```
print(rmspe(y_test,y_pred3))
print(rmspe(y_test1,y_pred13))
print(compare(y_test,y_pred3,y_test1,y_pred13))
```
##### RF
```
print(rmspe(y_test,y_pred4))
print(rmspe(y_test1,y_pred14))
print(compare(y_test,y_pred4,y_test1,y_pred14))
```
##### Average method
Predict sales based on average of predictions from three algorithms
```
print(rmspe(y_test,predavg))
print(rmspe(y_test1,pred1avg))
print(compare(y_test,predavg,y_test1,pred1avg))
```
#### Export Decision Tree
```
tree.export_graphviz(clf2,out_file='tree0.dot',max_depth=8)
tree.export_graphviz(clf12,out_file='tree1.dot',max_depth=8)
```
## Make Prediction
```
def makeprediction(testfile,feature,clf):
#train_x = pd.read_csv(trainfile).astype(float32)
pre_x = pd.read_csv(testfile).astype(float32)
#print np.all(np.isfinite(train_x))
print(np.all(np.isfinite(pre_x)))
#train_x,train_y=getxy(train_x)
pre_y = clf.predict(pre_x[feature])
prediction = pd.concat([pre_x, pd.Series(pre_y,name='Sales')], axis=1)
return prediction
feature0=["Store","DayOfWeek","Promo","SchoolHoliday",'HaveCompetitor',
"CompetitionDistance",
"Year","Month","Day","Week",
"StoreType_a","StoreType_b","StoreType_c","StoreType_d",
"Assortment_a","Assortment_b","Assortment_c",
"StateHoliday_0","StateHoliday_a",
"CompetitionMonth",'Customers'
]
feature1=["Store","DayOfWeek","Promo","SchoolHoliday",'HaveCompetitor',
"CompetitionDistance",
"Year","Month","Day","Week",
"StoreType_a","StoreType_b","StoreType_c","StoreType_d",
"Assortment_a","Assortment_b","Assortment_c",
"StateHoliday_0","StateHoliday_a",
"CompetitionMonth",
"Promo2Month","Promo2Week",'Customers'
]
prediction0=makeprediction('pre0wtcustomers.csv',feature0,clf4)
prediction1=makeprediction('pre1wtcustomers.csv',feature1,clf14)
```
#### average method
When making a submission based on the average prediction of the three algorithms, use this part
```
prediction02=makeprediction('pre0.csv',feature0,clf2)
prediction03=makeprediction('pre0.csv',feature0,clf3)
prediction04=makeprediction('pre0.csv',feature0,clf4)
prediction12=makeprediction('pre1.csv',feature1,clf12)
prediction13=makeprediction('pre1.csv',feature1,clf13)
prediction14=makeprediction('pre1.csv',feature1,clf14)
def mergeavg(prediction2,prediction3,prediction4):
predcollect=pd.concat([pd.Series(prediction2,name='dt'),pd.Series(prediction3,name='knn'),pd.Series(prediction4,name='rf')], axis=1)
predavg= predcollect.mean(axis=1)
return predavg
prediction0=mergeavg(prediction02.Sales,prediction03.Sales,prediction04.Sales)
prediction1=mergeavg(prediction12.Sales,prediction13.Sales,prediction14.Sales)
def generatepreforsub(filename,pred):
pre_x = pd.read_csv(filename).astype(float32)
prediction = pd.concat([pre_x.Id, pd.Series(pred,name='Sales')], axis=1)
return prediction
prediction0=generatepreforsub('pre0.csv',prediction0)
prediction1=generatepreforsub('pre1.csv',prediction1)
```
## Make Submission
```
prediction_sub0=pd.DataFrame(prediction0[["Id","Sales"]],columns=["Id","Sales"])
prediction_sub1=pd.DataFrame(prediction1[["Id","Sales"]],columns=["Id","Sales"])
prediction_sub=pd.concat([prediction_sub0,prediction_sub1])
print(len(prediction_sub))
submission = pd.read_csv("submission.csv")
submission = pd.merge(submission,prediction_sub,on="Id", how='outer')
submission.fillna(0, inplace=True)
submission.to_csv("submission4.csv", index=False)
```
## Generate Data for Advanced Analysis
Only include test rows and their predictions where Open==1 or Open is null
```
prediction0.to_csv("prediction0.csv", index=False)
prediction1.to_csv("prediction1.csv", index=False)
fet=["Store","DayOfWeek","Promo","SchoolHoliday","StateHoliday_0","StateHoliday_a",
"Year","Month","Day",
"StoreType_a","StoreType_b","StoreType_c","StoreType_d",
"Assortment_a","Assortment_b","Assortment_c",
"Customers","Sales"]
prediction_ana0=pd.DataFrame(prediction0[fet])
prediction_ana0["Promo2"]=0
print(prediction_ana0.head())
prediction_ana1=pd.DataFrame(prediction1[fet])
prediction_ana1["Promo2"]=1 # fix: label each frame with its own Promo2 value
data_ana0=pd.DataFrame(data[fet])
data_ana0["Promo2"]=0
data_ana1=pd.DataFrame(data1[fet])
data_ana1["Promo2"]=1
prediction_ana=pd.concat([prediction_ana0,prediction_ana1,data_ana0,data_ana1])
```
#### Create Date column
```
y = np.array(prediction_ana['Year']-1970, dtype='<M8[Y]')
m = np.array(prediction_ana['Month']-1, dtype='<m8[M]')
d = np.array(prediction_ana['Day']-1, dtype='<m8[D]')
prediction_ana['Date'] = pd.Series(y+m+d)
print(prediction_ana.dtypes)
print(prediction_ana.head())
prediction_ana.drop(["Day","Month","Year"], axis=1, inplace=True)
```
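The Year/Month/Day-to-Date trick above leans on numpy's datetime64/timedelta64 arithmetic: years since the 1970 epoch give the year, then month and day offsets are added. A small standalone check:

```python
import numpy as np

year = np.array([2015])
month = np.array([7])
day = np.array([31])

y = np.array(year - 1970, dtype='<M8[Y]')  # datetime64 year, epoch 1970
m = np.array(month - 1, dtype='<m8[M]')    # timedelta in months
d = np.array(day - 1, dtype='<m8[D]')      # timedelta in days
date = y + m + d
print(date)  # ['2015-07-31']
```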
### Sales and Customers Prediction by Date
```
gr_date=prediction_ana.groupby(['Date'])
gr_date_sales=gr_date.agg({'Customers' : 'mean', 'Sales' : 'mean'})
print(gr_date_sales.head())
trace1 = go.Scatter(
x=gr_date_sales.index,
y=gr_date_sales.Customers,
name='Customers',
line=dict(
color='#ae32e4',
width = 1 ,
)
)
trace2 = go.Scatter(
x=gr_date_sales.index,
y=gr_date_sales.Sales,
name='Sales',
mode = 'lines+markers',
yaxis='y2',
line=dict(
color='#3268e4',
width = 1
),
opacity=0.8
)
data = [trace1, trace2]
layout = go.Layout(
title='Time Series of Prediction',
yaxis=dict(
title='Customers'
),
yaxis2=dict(
title='Sales',
titlefont=dict(
color='rgb(174,50,228)'
),
tickfont=dict(
color='rgb(174,50,228)'
),
overlaying='y',
side='right'
)
)
fig = go.Figure(data=data, layout=layout)
plot_url = py.plot(fig, filename='multiple-axes-double')
tls.embed("https://plot.ly/~lemonsong/54/time-series-of-prediction/")
gr_assortment=prediction_ana
#gr_assortment.query('Assortment_a==1')['Assortment']='basic'
gr_assortment.loc[gr_assortment.Assortment_a==1, 'Assortment'] = 'basic'
gr_assortment.loc[gr_assortment.Assortment_b==1, 'Assortment'] = 'extra'
gr_assortment.loc[gr_assortment.Assortment_c==1, 'Assortment'] = 'extended'
gr_assortment.drop(['Assortment_a','Assortment_b','Assortment_c'], axis=1, inplace=True)
print(gr_assortment.columns)
gr_assortment1=gr_assortment.groupby(['Assortment', 'DayOfWeek'])
gr_assortment1=gr_assortment1.agg({ 'Customers' : 'sum','Store':'count'}).reset_index()
gr_assortment1['Customers_by_store']=gr_assortment1['Customers']/gr_assortment1['Store']
gr_assortment1
gr_assortment2=gr_assortment1.pivot('Assortment', 'DayOfWeek', 'Customers_by_store')
print(gr_assortment2)
data = [
go.Heatmap(
z=gr_assortment2.values,
x=gr_assortment2.columns,
y=gr_assortment2.index,
colorscale=[[0, 'rgb(228, 174, 50)'],[1, 'rgb(174, 50, 228)']]
)
]
layout = go.Layout(
title='Average Customers',
yaxis=dict(
title='Assortment',
),
xaxis=dict(
type="category",
title='WeekOfDay',
)
)
fig = go.Figure(data=data, layout=layout)
plot_url = py.plot(fig, filename='labelled-heatmap')
tls.embed("https://plot.ly/~lemonsong/80/average-sales/")
gr_store=prediction_ana
gr_store=gr_store.groupby(['Store'])
gr_store_sales=gr_store.agg({'Customers' : 'sum', 'Sales' : 'sum','Promo':'sum','Promo2':'sum'}).reset_index()
gr_store1=pd.merge(gr_store_sales,prediction_ana[['Store','Assortment']],on="Store", how='left').drop_duplicates()
gr_store1.head()
gr_store1_assort=gr_store1.groupby(['Assortment'])
gr_store_sales_agg=gr_store1_assort.agg({'Customers' : 'sum', 'Sales' : 'sum','Store':'count','Promo':'sum','Promo2':'sum'}).reset_index()
gr_store_sales_agg
fig = {
"data": [
{
"values": gr_store_sales_agg.Store,
"labels": gr_store_sales_agg.Assortment,
"domain": {"x": [0, .33]},
"name": "Store",
"hoverinfo":"label+percent+name",
"hole": .4,
"type": "pie"
},
{
"values": gr_store_sales_agg.Customers,
"labels":gr_store_sales_agg.Assortment,
"text":"Customers",
"textposition":"inside",
"domain": {"x": [.33, .66]},
"name": "Customers",
"hoverinfo":"label+percent+name",
"hole": .4,
"type": "pie"
},
{
"values": gr_store_sales_agg.Sales,
"labels":gr_store_sales_agg.Assortment,
"text":"Sales",
"textposition":"inside",
"domain": {"x": [.66, 1]},
"name": "Sales",
"hoverinfo":"label+percent+name",
"hole": .4,
"type": "pie"
},
],
"layout": {
"title":"Percentage by Assortment Type",
"annotations": [
{
"font": {
"size": 20
},
"showarrow": False,
"text": "Store",
"x": 0.10,
"y": 0.5
},
{
"font": {
"size": 20
},
"showarrow": False,
"text": "Customers",
"x": 0.5,
"y": 0.5
},
{
"font": {
"size": 20
},
"showarrow": False,
"text": "Sales",
"x": 0.9,
"y": 0.5
}
]
}
}
url = py.plot(fig, filename='Global Emissions 1990-2011')
```
| github_jupyter |
## Installation
```
!pip install -q --upgrade transformers datasets tokenizers
!pip install -q emoji pythainlp sklearn-pycrfsuite seqeval
!rm -r thai2transformers thai2transformers_parent
!git clone -b dev https://github.com/vistec-AI/thai2transformers/
!mv thai2transformers thai2transformers_parent
!mv thai2transformers_parent/thai2transformers .
!pip install accelerate==0.5.1
!apt install git-lfs
!pip install sentencepiece
! git clone https://github.com/Bjarten/early-stopping-pytorch.git
import sys
sys.path.insert(0, '/content/early-stopping-pytorch')
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
```
## Importing the libraries
```
from datasets import load_dataset,Dataset,DatasetDict,load_from_disk
from transformers import DataCollatorWithPadding,AutoModelForSequenceClassification, Trainer, TrainingArguments,AutoTokenizer,AutoModel,AutoConfig
from transformers.modeling_outputs import SequenceClassifierOutput
from thai2transformers.preprocess import process_transformers
import torch
import torch.nn as nn
import pandas as pd
import numpy as np
from sklearn.metrics import classification_report
from pytorchtools import EarlyStopping
from google.colab import drive
drive.mount('/content/drive')
```
## Loading the dataset
```
data = load_from_disk('/content/drive/MyDrive/Fake news/News-Dataset/dataset')
def clean_function(examples):
examples['text'] = process_transformers(examples['text'])
return examples
data = data.map(clean_function)
```
## Fine-tuning
```
checkpoint = "airesearch/wangchanberta-base-att-spm-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.model_max_length=416
def tokenize(batch):
return tokenizer(batch["text"], truncation=True,max_length=416)
tokenized_dataset = data.map(tokenize, batched=True)
tokenized_dataset
tokenized_dataset.set_format("torch",columns=["input_ids", "attention_mask", "labels"])
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
class extract_tensor(nn.Module):
def forward(self,x):
# x is the (output, (h_n, c_n)) tuple returned by nn.LSTM
tensor, _ = x
# keep the full output tensor, shape (batch, hidden)
return tensor[:, :]
class CustomModel(nn.Module):
def __init__(self,checkpoint,num_labels):
super(CustomModel,self).__init__()
self.num_labels = num_labels
#Load Model with given checkpoint and extract its body
self.model = model = AutoModel.from_pretrained(checkpoint,config=AutoConfig.from_pretrained(checkpoint, output_attentions=True,output_hidden_states=True))
self.dropout = nn.Dropout(0.1)
self.classifier = nn.Sequential(
nn.LSTM(768, 256, 1, batch_first=True),
extract_tensor(),
nn.Linear(256, 2)
)
def forward(self, input_ids=None, attention_mask=None,labels=None):
#Extract outputs from the body
outputs = self.model(input_ids=input_ids, attention_mask=attention_mask)
#Add custom layers
sequence_output = self.dropout(outputs[0]) #outputs[0]=last hidden state
logits = self.classifier(sequence_output[:,0,:].view(-1,768)) # calculate losses
loss = None
if labels is not None:
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
return SequenceClassifierOutput(loss=loss, logits=logits, hidden_states=outputs.hidden_states,attentions=outputs.attentions)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model=CustomModel(checkpoint=checkpoint,num_labels=2).to(device)
from torch.utils.data import DataLoader
train_dataloader = DataLoader(
tokenized_dataset["train"], shuffle=True, batch_size=8, collate_fn=data_collator
)
eval_dataloader = DataLoader(
tokenized_dataset["valid"], batch_size=8, collate_fn=data_collator
)
from transformers import AdamW,get_scheduler
optimizer = AdamW(model.parameters(), lr=5e-5)
num_epochs = 50
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
"linear",
optimizer=optimizer,
num_warmup_steps=0,
num_training_steps=num_training_steps,
)
print(num_training_steps)
from datasets import load_metric
metric = load_metric("f1")
from tqdm.auto import tqdm
progress_bar_train = tqdm(range(num_training_steps))
progress_bar_eval = tqdm(range(num_epochs * len(eval_dataloader)))
# to track the training loss as the model trains
train_losses = []
# to track the validation loss as the model trains
valid_losses = []
# to track the average training loss per epoch as the model trains
avg_train_losses = []
# to track the average validation loss per epoch as the model trains
avg_valid_losses = []
early_stopping = EarlyStopping(patience=7, verbose=True)
for epoch in range(num_epochs):
model.train()
size = len(train_dataloader.dataset)
for batch, X in enumerate(train_dataloader):
X = {k: v.to(device) for k, v in X.items()}
outputs = model(**X)
loss = outputs.loss
loss.backward()
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
progress_bar_train.update(1)
train_losses.append(loss.item())
model.eval()
for batch, X in enumerate(eval_dataloader):
X = {k: v.to(device) for k, v in X.items()}
with torch.no_grad():
outputs = model(**X)
loss = outputs.loss
valid_losses.append(loss.item())
logits = outputs.logits
predictions = torch.argmax(logits, dim=-1)
metric.add_batch(predictions=predictions, references=X["labels"])
progress_bar_eval.update(1)
# print training/validation statistics
# calculate average loss over an epoch
train_loss = np.average(train_losses)
valid_loss = np.average(valid_losses)
avg_train_losses.append(train_loss)
avg_valid_losses.append(valid_loss)
epoch_len = len(str(num_epochs))
loss_msg = (f'[{epoch+1:>{epoch_len}}/{num_epochs:>{epoch_len}}] ' +
f'train_loss: {train_loss:.5f} ' +
f'valid_loss: {valid_loss:.5f}')
print(loss_msg)
# clear lists to track next epoch
train_losses = []
valid_losses = []
# early_stopping needs the validation loss to check if it has decreased,
# and if it has, it will make a checkpoint of the current model
early_stopping(valid_loss, model)
if early_stopping.early_stop:
print("Early stopping")
break
print(metric.compute())
print('\n')
model.load_state_dict(torch.load('checkpoint.pt'))
# visualize the loss as the network trained
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,8))
plt.plot(range(1,len(avg_train_losses)+1),avg_train_losses, label='Training Loss')
plt.plot(range(1,len(avg_valid_losses)+1),avg_valid_losses,label='Validation Loss')
# find position of lowest validation loss
minposs = avg_valid_losses.index(min(avg_valid_losses))+1
plt.axvline(minposs, linestyle='--', color='r',label='Early Stopping Checkpoint')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.ylim(0, 0.5) # consistent scale
plt.xlim(0, len(avg_train_losses)+1) # consistent scale
plt.grid(True)
plt.legend()
plt.tight_layout()
plt.show()
fig.savefig('loss_plot.png', bbox_inches='tight')
```
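The `EarlyStopping` helper called in the loop above is not defined in this notebook. A minimal, framework-agnostic sketch is shown below, assuming it mirrors the widely used `pytorchtools`-style implementation; the real class also checkpoints the best weights to `checkpoint.pt` (which the `model.load_state_dict(torch.load('checkpoint.pt'))` call above relies on) — here that step is delegated to an optional `save_fn` callback, which is an assumption of this sketch.

```python
class EarlyStopping:
    """Sketch of an early-stopping helper: stop when the monitored loss
    fails to improve for `patience` consecutive checks."""

    def __init__(self, patience=7, verbose=False, delta=0.0, save_fn=None):
        self.patience = patience    # checks to wait after the last improvement
        self.verbose = verbose
        self.delta = delta          # minimum decrease that counts as improvement
        self.save_fn = save_fn      # hypothetical checkpoint callback
        self.counter = 0
        self.best_loss = None
        self.early_stop = False

    def __call__(self, val_loss, model=None):
        if self.best_loss is None or val_loss < self.best_loss - self.delta:
            self.best_loss = val_loss   # improvement: reset the counter
            self.counter = 0
            if self.save_fn is not None:
                self.save_fn(model)     # e.g. torch.save(model.state_dict(), 'checkpoint.pt')
        else:
            self.counter += 1
            if self.verbose:
                print(f"EarlyStopping counter: {self.counter} out of {self.patience}")
            if self.counter >= self.patience:
                self.early_stop = True
```

With `patience=2`, two non-improving epochs in a row after the best validation loss set `early_stop`, which the training loop checks each epoch.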
## Test Result
```
preds = torch.empty(0).cuda()
model.eval()
test_dataloader = DataLoader(
tokenized_dataset["test"], batch_size=8, collate_fn=data_collator
)
for batch in test_dataloader:
batch = {k: v.to(device) for k, v in batch.items()}
with torch.no_grad():
outputs = model(**batch)
logits = outputs.logits
predictions = torch.argmax(logits, dim=-1)
metric.add_batch(predictions=predictions, references=batch["labels"])
preds = torch.cat((preds, predictions), 0)
metric.compute()
text = tokenized_dataset["test"]["text"]
y_true = tokenized_dataset["test"]["labels"]
y_pred = preds.cpu()
print(classification_report(y_true, y_pred, target_names=['true','fake']))
```
## Wrong Prediction
```
test_result = pd.DataFrame(zip(text, [int(x) for x in y_pred.tolist()], y_true.tolist()), columns=['text','pred','true'])
wrong_prediction = test_result[test_result['pred'] != test_result['true']]
wrong_prediction.head()
```
## Confusion Matrix
```
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
import seaborn as sn
array = confusion_matrix(y_true, y_pred)
df_cm = pd.DataFrame(array, range(2), range(2))
sn.heatmap(df_cm, annot=True, annot_kws={"size": 16}, fmt='g', cmap="flare")
plt.show()
torch.save(model, '/content/drive/MyDrive/Fake news/Model/sodabert-lstm')
```
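The heatmap above shows raw counts; for a binary task, precision, recall, and F1 can be read directly off a 2×2 confusion matrix. A small pure-Python sketch (no sklearn needed), using sklearn's layout convention of rows = true labels, columns = predictions; the counts below are hypothetical, not taken from the run above:

```python
def binary_metrics(cm):
    """Derive precision/recall/F1 for the positive class from a 2x2
    confusion matrix laid out sklearn-style: [[TN, FP], [FN, TP]]."""
    (tn, fp), (fn, tp) = cm
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# hypothetical counts for illustration
p, r, f1 = binary_metrics([[80, 20], [10, 90]])
```

This matches what `classification_report` computes per class, so it is a quick sanity check on the printed report.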
# Pre-Processing Methods
```
%%capture
!pip3 install sparqlwrapper
# Common methods to retrieve data from Wikidata
import time
from SPARQLWrapper import SPARQLWrapper, JSON
import pandas as pd
import urllib.request as url
import json
from SPARQLWrapper import SPARQLWrapper
wiki_sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
wiki_sparql.setReturnFormat(JSON)
wiki_sparql.setTimeout(timeout=25)
wiki_cache = {}
def get_wikidata_label(entity):
    if (entity in wiki_cache):
        #print("use of cache!")
        return wiki_cache[entity]
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wd: <http://www.wikidata.org/entity/>
SELECT *
WHERE {
wd:ENTITY rdfs:label ?label .
FILTER (langMatches( lang(?label), "EN" ) )
}
LIMIT 1
"""
query_text = query.replace('ENTITY',entity)
wiki_sparql.setQuery(query_text)
    result = ""
    retries = 0
    # cap retries so a persistent (non-timeout) error cannot loop forever
    while (result == "") and (retries < 5):
        try:
            ret = wiki_sparql.queryAndConvert()
            if (len(ret["results"]["bindings"]) == 0):
                result = "-"
            for r in ret["results"]["bindings"]:
                result = r['label']['value']
        except Exception as e:
            retries += 1
            print("Error on wikidata query:", e)
            if "timed out" in str(e):
                result = "-"
                break
    wiki_cache[entity] = result
return result
def get_wikidata(query):
if ("ASK" not in query) and ("LIMIT" not in query):
query += " LIMIT 10"
#print(query)
key = query.replace(" ","_")
    if (key in wiki_cache):
        #print("use of cache!")
        return wiki_cache[key]
wiki_sparql.setQuery(query)
result = []
retries = 0
while (len(result) == 0) and (retries < 5):
try:
ret = wiki_sparql.queryAndConvert()
#print(ret)
if ("ASK" in query):
result.append(str(ret['boolean']))
elif (len(ret["results"]["bindings"]) == 0):
result.append("-")
else:
for r in ret["results"]["bindings"]:
for k in r.keys():
tokens = r[k]['value'].split("/")
result.append(tokens[len(tokens)-1])
except Exception as e:
retries += 1
print("Error on wikidata query:",e)
if "timed out" in str(e):
result.append("-")
break
    wiki_cache[key] = result
return result
def preprocess_questions(questions):
rows = []
counter = 0
    for question in questions['questions']:
        if (counter % 1000 == 0):
            print("Queries processed:", counter, "Cache Size:", len(wiki_cache))
#print("#",question['question_id'])
answer = question['query_answer'][0]
subject_labels = []
subjects = []
predicates = [e.split(":")[1] for e in answer['sparql_template'].split(" ") if ":" in e]
predicate_labels = []
for p in predicates:
predicate_labels.append(get_wikidata_label(p.replace("*","").split("/")[0]))
objects = get_wikidata(answer['sparql_query'])
object_labels = []
for o in objects:
if (len(o)>0) and (o[0]=="Q"):
object_labels.append(get_wikidata_label(o))
else:
object_labels.append(o)
for entity in answer['entities']:
subject_labels.append(entity['label'])
subjects.append(entity['entity'].split(":")[1])
row = {
'subjects':subjects,
'predicates' : predicates,
'objects': objects,
'question': question['natural_language_question'],
'subject_labels':subject_labels,
'predicate_labels':predicate_labels,
'object_labels':object_labels
}
#print(row)
rows.append(row)
counter += 1
df = pd.DataFrame(rows)
return df
# Common methods to retrieve data from Wikidata
import time
from SPARQLWrapper import SPARQLWrapper, JSON
import pandas as pd
import urllib.request as url
import json
from SPARQLWrapper import SPARQLWrapper
dbpedia_sparql = SPARQLWrapper("https://dbpedia.org/sparql/")
dbpedia_sparql.setReturnFormat(JSON)
dbpedia_sparql.setTimeout(timeout=60)
dbpedia_cache = {}
import hashlib
def hash_text(text):
hash_object = hashlib.md5(text.encode())
md5_hash = hash_object.hexdigest()
return str(md5_hash)
def get_dbpedia_label(entity,use_cache=True,verbose=False):
key = entity+"_label"
if (use_cache) and (key in dbpedia_cache):
#print("use of cache!")
return dbpedia_cache[key].copy()
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX dbr: <http://dbpedia.org/resource/>
select distinct ?label {
<ENTITY> rdfs:label ?label .
filter langMatches(lang(?label), 'en')
}
LIMIT 250
"""
query_text = query.replace('ENTITY',entity)
dbpedia_sparql.setQuery(query_text)
    result = []
    attempts = 0
    # cap attempts: an entity with no English label would otherwise loop forever
    while (len(result) == 0) and (attempts < 3):
        attempts += 1
        try:
if (verbose):
print("SPARQL Query:",query_text)
ret = dbpedia_sparql.queryAndConvert()
if (verbose):
print("SPARQL Response:",ret)
for r in ret["results"]["bindings"]:
id = entity
value = id
if ('label' in r) and ('value' in r['label']):
value = r['label']['value']
if (' id ' not in value.lower()) and (' link ' not in value.lower()) and ('has abstract' not in value.lower()) and ('wiki' not in value.lower()) and ('instance of' not in value.lower()):
result.append({'id':id, 'value':value})
except Exception as e:
print("Error on SPARQL query:",e)
break
dbpedia_cache[key] = result
#print(len(result),"properties found")
return result
def get_dbpedia_property_value(filter,use_cache=True,verbose=False):
key = hash_text(filter)
if (use_cache) and (key in dbpedia_cache):
#print("use of cache!")
return dbpedia_cache[key].copy()
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX dbr: <http://dbpedia.org/resource/>
select distinct ?object ?label {
{ FILTER }
optional {
?object rdfs:label ?label .
filter langMatches(lang(?label), 'en')
}
}
LIMIT 250
"""
query_text = query.replace('FILTER',filter)
dbpedia_sparql.setQuery(query_text)
    result = []
    attempts = 0
    # cap attempts: a filter with no matches would otherwise loop forever
    while (len(result) == 0) and (attempts < 3):
        attempts += 1
        try:
if (verbose):
print("SPARQL Query:",query_text)
ret = dbpedia_sparql.queryAndConvert()
if (verbose):
print("SPARQL Response:",ret)
for r in ret["results"]["bindings"]:
id = r['object']['value']
value = id
if ('label' in r) and ('value' in r['label']):
value = r['label']['value']
if (' id ' not in value.lower()) and (' link ' not in value.lower()) and ('has abstract' not in value.lower()) and ('wiki' not in value.lower()) and ('instance of' not in value.lower()):
result.append({'id':id, 'value':value})
except Exception as e:
print("Error on SPARQL query:",e)
break
dbpedia_cache[key] = result
#print(len(result),"properties found")
return result
def get_forward_dbpedia_property_value(entity,property,use_cache=True,verbose=False):
query_filter ="<ENTITY> <PROPERTY> ?object"
return get_dbpedia_property_value(query_filter.replace("ENTITY",entity).replace("PROPERTY",property),use_cache,verbose)
def get_backward_dbpedia_property_value(entity,property,use_cache=True,verbose=False):
query_filter ="?object <PROPERTY> <ENTITY>"
return get_dbpedia_property_value(query_filter.replace("ENTITY",entity).replace("PROPERTY",property),use_cache,verbose)
```
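All of the helpers above follow the same memoization pattern: build a key (the raw entity, or an MD5 hash of the query filter), check the in-memory cache, and only hit the SPARQL endpoint on a miss. A self-contained sketch of that pattern with a stubbed fetch function — `cached_query` and the stub are illustrative names, not from the notebook:

```python
import hashlib

def hash_text(text):
    # same MD5 keying used by get_dbpedia_property_value above
    return hashlib.md5(text.encode()).hexdigest()

def cached_query(query, cache, fetch):
    """Return the cached result for `query`, calling `fetch` only on a miss."""
    key = hash_text(query)
    if key in cache:
        return cache[key]
    result = fetch(query)   # stand-in for the SPARQL round-trip
    cache[key] = result
    return result

calls = []
cache = {}
fetch = lambda q: calls.append(q) or q.upper()   # records each real "request"
first = cached_query("select 1", cache, fetch)   # miss: fetch runs
second = cached_query("select 1", cache, fetch)  # hit: served from cache
```

Since label lookups repeat heavily across questions, this pattern is what keeps the preprocessing loops tractable against a rate-limited public endpoint.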
# Datasets
## SimpleQuestions Dataset
### Wikidata SimpleQuestions
```
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/askplatypus/wikidata-simplequestions/master/annotated_wd_data_test_answerable.txt', sep="\t", index_col=False, header=None, names=['subject','predicate','object','question'])
df.head()
```
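The annotated SimpleQuestions file is a headerless tab-separated file with four columns (subject, predicate, object, question), which is why `read_csv` above passes `sep="\t"`, `header=None`, and explicit `names`. A small offline illustration of the layout using only the standard library — the sample row is invented for illustration, not taken from the dataset:

```python
import csv
import io

# invented sample row in the same four-column layout as the dataset
sample = "Q148\tP36\tQ956\twhat is the capital of china\n"
rows = list(csv.reader(io.StringIO(sample), delimiter="\t"))
subject, predicate, obj, question = rows[0]
```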
Retrieve labels from wikidata for subject, predicate and object:
```
object_labels = []
subject_labels = []
predicate_labels = []
for index, row in df.iterrows():
    subject_labels.append(get_wikidata_label(row['subject']))
    predicate_labels.append(get_wikidata_label(row['predicate'].replace("R","P")))
    object_labels.append(get_wikidata_label(row['object']))
    if (index % 100 == 0):
        print("Labels Identified:", index, "Cache Size:", len(wiki_cache))
print(len(object_labels),"labels retrieved!")
df['subject_label']=subject_labels
df['predicate_label']=predicate_labels
df['object_label']=object_labels
df.to_csv('wsq-labels.csv')
df.head()
```
### SimpleDBpediaQuestions
```
# read dbpedia compatible SimpleQuestions
import urllib.request as url
import json
import unidecode
import pandas as pd
def normalize(label):
return unidecode.unidecode(label.strip()).lower()
stream = url.urlopen("https://raw.githubusercontent.com/castorini/SimpleDBpediaQA/master/V1/test.json")
content = stream.read()
data = json.loads(content)
ref_questions = [e.lower().strip() for e in pd.read_csv('wsq-labels.csv', index_col=0)['question'].tolist()]
counter = 0
total = 0
rows = []
dbpedia_questions = []
for question in data['Questions']:
total += 1
if (total % 100 == 0):
print(total)
question_query = question['Query']
if (question_query.lower().strip() in ref_questions):
counter += 1
subject_val = question['Subject']
subject_label = ''
ss = get_dbpedia_label(subject_val)
if (len(ss) > 0):
subject_label = ss[0]['value']
predicate = question['PredicateList'][0]
property_val = predicate['Predicate']
property_label = ''
pp = get_dbpedia_label(property_val)
if (len(pp) > 0):
property_label = pp[0]['value']
if (predicate['Direction'] == 'forward'):
object_val = get_forward_dbpedia_property_value(subject_val,property_val)
else:
object_val = get_backward_dbpedia_property_value(subject_val,property_val)
object_id = ''
object_label = ''
if len(object_val) > 0:
object_id = object_val[0]['id']
object_label = object_val[0]['value']
row = {'subject':subject_val, 'predicate':property_val, 'object': object_id, 'question':question_query, 'subject_label':subject_label, 'property_label':property_label, 'object_label': object_label}
rows.append(row)
print("Total:",len(rows))
df = pd.DataFrame(rows)
df.to_csv('dsq-labels.csv')
df.head(10)
```
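The forward/backward helpers defined earlier differ only in which side of the triple the fixed entity occupies: `<entity> <property> ?object` versus `?object <property> <entity>`. A condensed sketch of the same string-templating approach (`build_filter` is an illustrative name, not from the notebook):

```python
def build_filter(entity, prop, direction):
    """Reproduce the triple patterns used by the forward/backward helpers."""
    template = ("<ENTITY> <PROPERTY> ?object" if direction == "forward"
                else "?object <PROPERTY> <ENTITY>")
    return template.replace("ENTITY", entity).replace("PROPERTY", prop)

fwd = build_filter("http://dbpedia.org/resource/Berlin",
                   "http://dbpedia.org/ontology/country", "forward")
bwd = build_filter("http://dbpedia.org/resource/Berlin",
                   "http://dbpedia.org/ontology/country", "backward")
```

The `Direction` field in each SimpleDBpediaQA predicate entry selects which pattern to use, since DBpedia stores some relations only in one direction.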
## Wikidata QA Dataset
From paper: https://arxiv.org/pdf/2107.02865v1.pdf
```
import urllib.request as url
import json
stream = url.urlopen("https://raw.githubusercontent.com/thesemanticwebhero/ElNeuKGQA/main/data/dataset_wikisparql.json")
content = stream.read()
data = json.loads(content)
df = preprocess_questions(data)
df.to_csv('wqa-labels.csv')
df.head()
df.describe(include='all')
```
## LC-QuAD 2.0 Dataset
From paper: https://arxiv.org/pdf/2107.02865v1.pdf
```
import urllib.request as url
import json
stream = url.urlopen("https://raw.githubusercontent.com/thesemanticwebhero/ElNeuKGQA/main/data/dataset_lcquad2.json")
content = stream.read()
data = json.loads(content)
df = preprocess_questions(data)
df.to_csv('lcquad2-labels.csv')
df.head()
```
## COVID-QA Dataset
From paper: https://aclanthology.org/2020.nlpcovid19-acl.18.pdf
```
import urllib.request as url
import json
import pandas as pd
stream = url.urlopen("https://raw.githubusercontent.com/sharonlevy/Open_Domain_COVIDQA/main/data/qa_test.json")
content = stream.read()
data = json.loads(content)
rows = []
counter = 0
for item in data['data']:
row = {
'article': item['title'],
'text' : item['context'],
'question': item['question'],
'answer': item['answers'][0]['text']
}
rows.append(row)
counter += 1
if (counter % 100 == 0 ):
print("Questions processed:",counter)
df = pd.DataFrame(rows)
df.to_csv('covidqa-labels.csv')
df.head()
```
# Computer Vision Nanodegree
## Project: Image Captioning
---
In this notebook, you will train your CNN-RNN model.
You are welcome and encouraged to try out many different architectures and hyperparameters when searching for a good model.
This does have the potential to make the project quite messy! Before submitting your project, make sure that you clean up:
- the code you write in this notebook. The notebook should describe how to train a single CNN-RNN architecture, corresponding to your final choice of hyperparameters. You should structure the notebook so that the reviewer can replicate your results by running the code in this notebook.
- the output of the code cell in **Step 2**. The output should show the output obtained when training the model from scratch.
This notebook **will be graded**.
Feel free to use the links below to navigate the notebook:
- [Step 1](#step1): Training Setup
- [Step 2](#step2): Train your Model
- [Step 3](#step3): (Optional) Validate your Model
<a id='step1'></a>
## Step 1: Training Setup
In this step of the notebook, you will customize the training of your CNN-RNN model by specifying hyperparameters and setting other options that are important to the training procedure. The values you set now will be used when training your model in **Step 2** below.
You should only amend blocks of code that are preceded by a `TODO` statement. **Any code blocks that are not preceded by a `TODO` statement should not be modified**.
### Task #1
Begin by setting the following variables:
- `batch_size` - the batch size of each training batch. It is the number of image-caption pairs used to amend the model weights in each training step.
- `vocab_threshold` - the minimum word count threshold. Note that a larger threshold will result in a smaller vocabulary, whereas a smaller threshold will include rarer words and result in a larger vocabulary.
- `vocab_from_file` - a Boolean that decides whether to load the vocabulary from file.
- `embed_size` - the dimensionality of the image and word embeddings.
- `hidden_size` - the number of features in the hidden state of the RNN decoder.
- `num_epochs` - the number of epochs to train the model. We recommend that you set `num_epochs=3`, but feel free to increase or decrease this number as you wish. [This paper](https://arxiv.org/pdf/1502.03044.pdf) trained a captioning model on a single state-of-the-art GPU for 3 days, but you'll soon see that you can get reasonable results in a matter of a few hours! (_But of course, if you want your model to compete with current research, you will have to train for much longer._)
- `save_every` - determines how often to save the model weights. We recommend that you set `save_every=1`, to save the model weights after each epoch. This way, after the `i`th epoch, the encoder and decoder weights will be saved in the `models/` folder as `encoder-i.pkl` and `decoder-i.pkl`, respectively.
- `print_every` - determines how often to print the batch loss to the Jupyter notebook while training. Note that you **will not** observe a monotonic decrease in the loss function while training - this is perfectly fine and completely expected! You are encouraged to keep this at its default value of `100` to avoid clogging the notebook, but feel free to change it.
- `log_file` - the name of the text file containing - for every step - how the loss and perplexity evolved during training.
If you're not sure where to begin to set some of the values above, you can peruse [this paper](https://arxiv.org/pdf/1502.03044.pdf) and [this paper](https://arxiv.org/pdf/1411.4555.pdf) for useful guidance! **To avoid spending too long on this notebook**, you are encouraged to consult these suggested research papers to obtain a strong initial guess for which hyperparameters are likely to work best. Then, train a single model, and proceed to the next notebook (**3_Inference.ipynb**). If you are unhappy with your performance, you can return to this notebook to tweak the hyperparameters (and/or the architecture in **model.py**) and re-train your model.
### Question 1
**Question:** Describe your CNN-RNN architecture in detail. With this architecture in mind, how did you select the values of the variables in Task 1? If you consulted a research paper detailing a successful implementation of an image captioning model, please provide the reference.
**Answer:** The encoder, which consists of a CNN (transfer learning was used here, as pre-trained ResNet-50 weights were loaded), was already given. I kept the decoder RNN simple for the start: it consists of one layer with 512 features. I used a standard value for the batch size (128) and set the vocab threshold to 4 in order to drop very uncommon words while still keeping a big enough vocabulary (about 10,000 words) to describe very specific pictures. I set the embedding size to 512, which worked fine in my case but could probably be set lower and still yield good results.
### (Optional) Task #2
Note that we have provided a recommended image transform `transform_train` for pre-processing the training images, but you are welcome (and encouraged!) to modify it as you wish. When modifying this transform, keep in mind that:
- the images in the dataset have varying heights and widths, and
- if using a pre-trained model, you must perform the corresponding appropriate normalization.
### Question 2
**Question:** How did you select the transform in `transform_train`? If you left the transform at its provided value, why do you think that it is a good choice for your CNN architecture?
**Answer:** I left the transform unchanged because I think it already produces very good data augmentation. Random parts of the pictures are cropped and flipped horizontally with a 50-50 chance. It is always good to introduce this kind of randomness into your data to prevent overfitting.
### Task #3
Next, you will specify a Python list containing the learnable parameters of the model. For instance, if you decide to make all weights in the decoder trainable, but only want to train the weights in the embedding layer of the encoder, then you should set `params` to something like:
```
params = list(decoder.parameters()) + list(encoder.embed.parameters())
```
### Question 3
**Question:** How did you select the trainable parameters of your architecture? Why do you think this is a good choice?
**Answer:** I decided - as described above - to make all weights in the decoder trainable and only train the weights in the embedding layer of the encoder. Both hidden-size and embedding-size were set to 512, which I did not change during the different tests. I experimented more with vocab-threshold and learning rate.
### Task #4
Finally, you will select an [optimizer](http://pytorch.org/docs/master/optim.html#torch.optim.Optimizer).
### Question 4
**Question:** How did you select the optimizer used to train your model?
**Answer:** I selected the adam optimizer which is usually a good one to start with because it adjusts learning rate and momentum for each parameter individually.
```
import torch
import torch.nn as nn
from torchvision import transforms
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
from data_loader import get_loader
from model import EncoderCNN, DecoderRNN
import math
import nltk
nltk.download('punkt')
## TODO #1: Select appropriate values for the Python variables below.
batch_size = 128 # batch size
vocab_threshold = 4 # minimum word count threshold
vocab_from_file = True # if True, load existing vocab file
embed_size = 512 # dimensionality of image and word embeddings
hidden_size = 512 # number of features in hidden state of the RNN decoder
num_epochs = 10 # number of training epochs
save_every = 1 # determines frequency of saving model weights
print_every = 100 # determines window for printing average loss
log_file = 'training_log.txt' # name of file with saved training loss and perplexity
# (Optional) TODO #2: Amend the image transform below.
transform_train = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
# Build data loader.
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
vocab_from_file=vocab_from_file)
# The size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)
print(vocab_size)
# Initialize the encoder and decoder.
encoder = EncoderCNN(embed_size)
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)
# Move models to GPU if CUDA is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
encoder.to(device)
decoder.to(device)
# Define the loss function.
criterion = nn.CrossEntropyLoss().cuda() if torch.cuda.is_available() else nn.CrossEntropyLoss()
# TODO #3: Specify the learnable parameters of the model.
params = list(decoder.parameters()) + list(encoder.embed.parameters())
# TODO #4: Define the optimizer.
#optimizer = torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08)
optimizer = torch.optim.Adam(params, lr=0.001, weight_decay=0)
# Set the total number of training steps per epoch.
total_step = math.ceil(len(data_loader.dataset.caption_lengths) / data_loader.batch_sampler.batch_size)
```
<a id='step2'></a>
## Step 2: Train your Model
Once you have executed the code cell in **Step 1**, the training procedure below should run without issue.
It is completely fine to leave the code cell below as-is without modifications to train your model. However, if you would like to modify the code used to train the model below, you must ensure that your changes are easily parsed by your reviewer. In other words, make sure to provide appropriate comments to describe how your code works!
You may find it useful to load saved weights to resume training. In that case, note the names of the files containing the encoder and decoder weights that you'd like to load (`encoder_file` and `decoder_file`). Then you can load the weights by using the lines below:
```python
# Load pre-trained weights before resuming training.
encoder.load_state_dict(torch.load(os.path.join('./models', encoder_file)))
decoder.load_state_dict(torch.load(os.path.join('./models', decoder_file)))
```
While trying out parameters, make sure to take extensive notes and record the settings that you used in your various training runs. In particular, you don't want to encounter a situation where you've trained a model for several hours but can't remember what settings you used :).
### A Note on Tuning Hyperparameters
To figure out how well your model is doing, you can look at how the training loss and perplexity evolve during training - and for the purposes of this project, you are encouraged to amend the hyperparameters based on this information.
However, this will not tell you if your model is overfitting to the training data, and, unfortunately, overfitting is a problem that is commonly encountered when training image captioning models.
For this project, you need not worry about overfitting. **This project does not have strict requirements regarding the performance of your model**, and you just need to demonstrate that your model has learned **_something_** when you generate captions on the test data. For now, we strongly encourage you to train your model for the suggested 3 epochs without worrying about performance; then, you should immediately transition to the next notebook in the sequence (**3_Inference.ipynb**) to see how your model performs on the test data. If your model needs to be changed, you can come back to this notebook, amend hyperparameters (if necessary), and re-train the model.
That said, if you would like to go above and beyond in this project, you can read about some approaches to minimizing overfitting in section 4.3.1 of [this paper](http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7505636). In the next (optional) step of this notebook, we provide some guidance for assessing the performance on the validation dataset.
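Since the training cell below reports perplexity alongside the loss (via `np.exp(loss.item())`), it helps to remember that for a cross-entropy loss measured in nats, perplexity is simply its exponential — e.g. a loss drop from 4.0 to 2.0 corresponds to perplexity falling from roughly 55 to roughly 7:

```python
import math

def perplexity(cross_entropy_loss):
    # perplexity = exp(cross-entropy in nats), as computed in the training loop
    return math.exp(cross_entropy_loss)
```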
```
import torch.utils.data as data
import numpy as np
import os
import requests
import time
# Open the training log file.
f = open(log_file, 'w')
old_time = time.time()
response = requests.request("GET",
"http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token",
headers={"Metadata-Flavor":"Google"})
for epoch in range(1, num_epochs+1):
for i_step in range(1, total_step+1):
if time.time() - old_time > 60:
old_time = time.time()
requests.request("POST",
"https://nebula.udacity.com/api/v1/remote/keep-alive",
headers={'Authorization': "STAR " + response.text})
# Randomly sample a caption length, and sample indices with that length.
indices = data_loader.dataset.get_train_indices()
# Create and assign a batch sampler to retrieve a batch with the sampled indices.
new_sampler = data.sampler.SubsetRandomSampler(indices=indices)
data_loader.batch_sampler.sampler = new_sampler
# Obtain the batch.
images, captions = next(iter(data_loader))
# Move batch of images and captions to GPU if CUDA is available.
images = images.to(device)
captions = captions.to(device)
# Zero the gradients.
decoder.zero_grad()
encoder.zero_grad()
# Pass the inputs through the CNN-RNN model.
features = encoder(images)
outputs = decoder(features, captions)
# Calculate the batch loss.
loss = criterion(outputs.view(-1, vocab_size), captions.view(-1))
# Backward pass.
loss.backward()
# Update the parameters in the optimizer.
optimizer.step()
# Get training statistics.
stats = 'Epoch [%d/%d], Step [%d/%d], Loss: %.4f, Perplexity: %5.4f' % (epoch, num_epochs, i_step, total_step, loss.item(), np.exp(loss.item()))
# Print training statistics (on same line).
print('\r' + stats, end="")
sys.stdout.flush()
# Print training statistics to file.
f.write(stats + '\n')
f.flush()
# Print training statistics (on different line).
if i_step % print_every == 0:
print('\r' + stats)
# Save the weights.
if epoch % save_every == 0:
torch.save(decoder.state_dict(), os.path.join('./models', 'decoder-%d-bs128-voc4_lr001_bftrue.pkl' % epoch))
torch.save(encoder.state_dict(), os.path.join('./models', 'encoder-%d-bs128-voc4_lr001_bftrue.pkl' % epoch))
# Close the training log file.
f.close()
```
<a id='step3'></a>
## Step 3: (Optional) Validate your Model
To assess potential overfitting, one approach is to assess performance on a validation set. If you decide to do this **optional** task, you are required to first complete all of the steps in the next notebook in the sequence (**3_Inference.ipynb**); as part of that notebook, you will write and test code (specifically, the `sample` method in the `DecoderRNN` class) that uses your RNN decoder to generate captions. That code will prove incredibly useful here.
If you decide to validate your model, please do not edit the data loader in **data_loader.py**. Instead, create a new file named **data_loader_val.py** containing the code for obtaining the data loader for the validation data. You can access:
- the validation images at filepath `'/opt/cocoapi/images/train2014/'`, and
- the validation image caption annotation file at filepath `'/opt/cocoapi/annotations/captions_val2014.json'`.
The suggested approach to validating your model involves creating a json file such as [this one](https://github.com/cocodataset/cocoapi/blob/master/results/captions_val2014_fakecap_results.json) containing your model's predicted captions for the validation images. Then, you can write your own script or use one that you [find online](https://github.com/tylin/coco-caption) to calculate the BLEU score of your model. You can read more about the BLEU score, along with other evaluation metrics (such as METEOR and CIDEr) in section 4.1 of [this paper](https://arxiv.org/pdf/1411.4555.pdf). For more information about how to use the annotation file, check out the [website](http://cocodataset.org/#download) for the COCO dataset.
```
# (Optional) TODO: Validate your model.
```
# Appendix E: Validation of FDR’s control of false positive node proportion
This appendix contains RFT and FDR results (Fig. E1) from six experimental datasets and a total of eight different analyses (Table E1) that were conducted but were not included in the main manuscript. The datasets represent a variety of biomechanical modalities, experimental designs and tasks.
___
**Table E1**. Experimental datasets and analyses. J and Q are the sample size and number of time nodes, respectively. GRF = ground reaction force. EMG = electromyography.
| Dataset | Source | J | Q | Model | Task | Variables |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| A | Caravaggi et al., 2010 | 10 | 101 | Paired t-test | Walking | Plantar arch deformation |
| B | Dorn, Schache & Pandy, 2012 | 7 | 100 | Linear regression | Running/ sprinting | GRF |
| C | Pataky et al., 2008 | 59 | 101 | Linear regression | Walking | GRF |
| D | Neptune, Wright & Van Den Bogert, 1999 | 15 | 101 | Two sample t-test | Cutting movement | Kinematics, EMG |
| E | Pataky et al., 2014 | 10 | 101 | Paired t-test | Walking | Center of pressure |
| F | Caravaggi et al., 2010 | 19 | 101 | Two sample t-test | Walking | Plantar arch deformation |
| G | Pataky et al., 2008 | 20 | 101 | One sample t-test | Walking | GRF |
| H | Besier et al., 2009 | 40 | 100 | Two sample t-test | Walking, running | GRF, muscle forces |
___
| | |
|------|------|
| <img src="./figs/A.png" alt="FigA" width="300"/> | <img src="./figs/B.png" alt="FigB" width="300"/> |
| <img src="./figs/C.png" alt="FigC" width="300"/> | <img src="./figs/D.png" alt="FigD" width="300"/> |
| <img src="./figs/E.png" alt="FigE" width="300"/> | <img src="./figs/F.png" alt="FigF" width="300"/> |
| <img src="./figs/G.png" alt="FigG" width="300"/> | <img src="./figs/H.png" alt="FigH" width="300"/> |
**Figure E1**. Results from the eight analyses (A–H), each depicting two thresholds: false discovery rate (FDR) and random field theory (RFT). The null hypothesis is rejected if the t value traverses a threshold.
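FDR thresholds such as those shown in Fig. E1 are conventionally obtained with the Benjamini–Hochberg procedure applied to the Q node-wise p-values. The sketch below is a generic illustration of that procedure, not the specific implementation used to produce the figure, which may differ in details:

```python
import numpy as np

def bh_threshold(pvals, q=0.05):
    """Benjamini-Hochberg critical p-value: the largest sorted p-value
    p_(k) satisfying p_(k) <= (k/m) * q. Node-wise tests with p at or
    below this threshold are declared significant."""
    p = np.sort(np.asarray(pvals, dtype=float))
    m = p.size
    passed = p <= (np.arange(1, m + 1) / m) * q
    if not passed.any():
        return 0.0   # nothing survives: no nodes are declared significant
    return p[np.nonzero(passed)[0].max()]
```

Converting this critical p-value back to the test-statistic scale yields the FDR threshold drawn on each panel.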
## References
1. Besier TF, Fredericson M, Gold GE, Beaupré GS, Delp SL. 2009. Knee muscle forces during walking and running in patellofemoral pain patients and pain-free controls. Journal of Biomechanics 42:898–905. DOI: 10.1016/j.jbiomech.2009.01.032.
1. Caravaggi P, Pataky T, Günther M, Savage R, Crompton R. 2010. Dynamics of longitudinal arch support in relation to walking speed: Contribution of the plantar aponeurosis. Journal of Anatomy 217:254–261. DOI: 10.1111/j.1469-7580.2010.01261.x.
1. Dorn TW, Schache AG, Pandy MG. 2012. Muscular strategy shift in human running: dependence of running speed on hip and ankle muscle performance. Journal of Experimental Biology 215:1944–1956. DOI: 10.1242/jeb.064527.
1. Neptune RR, Wright IC, Van Den Bogert AJ. 1999. Muscle coordination and function during cutting movements. Medicine and Science in Sports and Exercise 31:294–302. DOI: 10.1097/00005768-199902000-00014.
1. Pataky TC, Caravaggi P, Savage R, Parker D, Goulermas JY, Sellers WI, Crompton RH. 2008. New insights into the plantar pressure correlates of walking speed using pedobarographic statistical parametric mapping (pSPM). Journal of Biomechanics 41:1987–1994. DOI: 10.1016/j.jbiomech.2008.03.034.
1. Pataky TC, Robinson MA, Vanrenterghem J, Savage R, Bates KT, Crompton RH. 2014. Vector field statistics for objective center-of-pressure trajectory analysis during gait, with evidence of scalar sensitivity to small coordinate system rotations. Gait and Posture 40:255–258. DOI: 10.1016/j.gaitpost.2014.01.023.
```
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import numexpr as ne
from scipy.ndimage import correlate1d
from dphutils import scale
import scipy.signal
from timeit import Timer
import pyfftw
# test monkey patching (it doesn't work for rfftn)
a = pyfftw.empty_aligned((512, 512), dtype='complex128')
b = pyfftw.empty_aligned((512, 512), dtype='complex128')
a[:] = np.random.randn(512, 512) + 1j*np.random.randn(512, 512)
b[:] = np.random.randn(512, 512) + 1j*np.random.randn(512, 512)
t = Timer(lambda: scipy.signal.fftconvolve(a, b, 'same'))
print('Time with scipy.fftpack: %1.3f seconds' % t.timeit(number=10))
# Monkey patch in fftn and ifftn from pyfftw.interfaces.scipy_fftpack
scipy.signal.signaltools.fftn = pyfftw.interfaces.scipy_fftpack.fftn
scipy.signal.signaltools.ifftn = pyfftw.interfaces.scipy_fftpack.ifftn
scipy.signal.signaltools.fftpack = pyfftw.interfaces.scipy_fftpack
# can't monkey patch the rfft because it's used through np in the package.
scipy.signal.fftconvolve(a, b, 'same') # We cheat a bit by doing the planning first
# Turn on the cache for optimum performance
pyfftw.interfaces.cache.enable()
print('Time with monkey patched scipy_fftpack: %1.3f seconds' %
t.timeit(number=10))
# Testing the best method to enforce positivity constraint.
a = np.random.randn(1000, 1000)
print(a.max(), a.min())
%timeit a[a<0] = 0
print(a.max(), a.min())
a = np.random.randn(1000, 1000)
b = np.zeros_like(a)
print(a.max(), a.min())
# maximum against zero clips negatives (minimum would instead zero out the positives)
%timeit c = np.maximum(a, b)
print(a.max(), a.min())
# testing speedups for numexpr
a = np.random.randn(2**9,2**9)
b = np.random.randn(2**9,2**9)
%timeit a-b
%timeit ne.evaluate("a-b")
%timeit a/b
%timeit ne.evaluate("a/b")
# Standard Richardson-Lucy from skimage
from skimage import color, data, restoration
camera = color.rgb2gray(data.camera())
from scipy.signal import convolve2d
psf = np.ones((5, 5)) / 25
camera = convolve2d(camera, psf, 'same')
camera += 0.1 * camera.std() * np.random.poisson(size=camera.shape)
deconvolved = restoration.richardson_lucy(camera, psf, 30, False)
plt.matshow(camera, cmap='Greys_r')
plt.matshow(deconvolved, cmap='Greys_r', vmin=camera.min(), vmax=camera.max())
# test monkey patching properly.
from pyfftw.interfaces.numpy_fft import (ifftshift, fftshift, fftn, ifftn,
rfftn, irfftn)
from scipy.signal.signaltools import _rfft_lock, _rfft_mt_safe, _next_regular,_check_valid_mode_shapes,_centered
def fftconvolve2(in1, in2, mode="full"):
if in1.ndim == in2.ndim == 0: # scalar inputs
return in1 * in2
elif not in1.ndim == in2.ndim:
raise ValueError("in1 and in2 should have the same dimensionality")
elif in1.size == 0 or in2.size == 0: # empty arrays
        return np.array([])
s1 = np.array(in1.shape)
s2 = np.array(in2.shape)
complex_result = (np.issubdtype(in1.dtype, complex) or
np.issubdtype(in2.dtype, complex))
shape = s1 + s2 - 1
if mode == "valid":
_check_valid_mode_shapes(s1, s2)
# Speed up FFT by padding to optimal size for FFTPACK
fshape = [_next_regular(int(d)) for d in shape]
fslice = tuple([slice(0, int(sz)) for sz in shape])
# Pre-1.9 NumPy FFT routines are not threadsafe. For older NumPys, make
# sure we only call rfftn/irfftn from one thread at a time.
if not complex_result and (_rfft_mt_safe or _rfft_lock.acquire(False)):
try:
ret = (irfftn(rfftn(in1, fshape) *
rfftn(in2, fshape), fshape)[fslice].
copy())
finally:
if not _rfft_mt_safe:
_rfft_lock.release()
else:
# If we're here, it's either because we need a complex result, or we
# failed to acquire _rfft_lock (meaning rfftn isn't threadsafe and
# is already in use by another thread). In either case, use the
# (threadsafe but slower) SciPy complex-FFT routines instead.
ret = ifftn(fftn(in1, fshape) *
fftn(in2, fshape))[fslice].copy()
if not complex_result:
ret = ret.real
if mode == "full":
return ret
elif mode == "same":
return _centered(ret, s1)
elif mode == "valid":
return _centered(ret, s1 - s2 + 1)
else:
raise ValueError("Acceptable mode flags are 'valid',"
" 'same', or 'full'.")
%timeit scipy.signal.fftconvolve(camera, psf, 'same')
%timeit fftconvolve2(camera, psf, 'same')
def tv(im):
"""
Calculate the total variation image
(1) Laasmaa, M.; Vendelin, M.; Peterson, P. Application of Regularized Richardson–Lucy Algorithm for
Deconvolution of Confocal Microscopy Images. Journal of Microscopy 2011, 243 (2), 124–140.
dx.doi.org/10.1111/j.1365-2818.2011.03486.x
"""
def m(a, b):
'''
As described in (1)
'''
        return (np.sign(a) + np.sign(b)) / 2 * np.minimum(np.abs(a), np.abs(b))
ndim = im.ndim
    g = np.zeros((2 * ndim,) + im.shape)
# g stores the gradients of out along each axis
# e.g. g[0] is the first order finite difference along axis 0
for ax in range(ndim):
a = 2*ax
# backward difference
g[a] = correlate1d(im, [-1, 1], ax)
# forward difference
g[a+1] = correlate1d(im, [-1, 1], ax, origin=-1)
    eps = np.finfo(float).eps
    oym, oyp, oxm, oxp = g
    return oxm * oxp / np.sqrt(oxp**2 + m(oyp, oym)**2 + eps) + oym * oyp / np.sqrt(oyp**2 + m(oxp, oxm)**2 + eps)
def rl_update(convolve_method, kwargs):
'''
A function that represents the core rl operation:
$u^{(t+1)} = u^{(t)}\cdot\left(\frac{d}{u^{(t)}\otimes p}\otimes \hat{p}\right)$
Parameters
----------
image : ndarray
original image to be deconvolved
    u_tm1 : ndarray
        estimate from the previous iteration
    u_t : ndarray
        current estimate
    y_t : ndarray
        current prediction point used for the update
    psf : ndarray
        the point spread function
    convolve_method : callable
        function used to perform the convolutions
'''
image = kwargs['image']
psf = kwargs['psf']
# use the prediction step to iterate on
y_t = kwargs['y_t']
u_t = kwargs['u_t']
u_tm1 = kwargs['u_tm1']
g_tm1 = kwargs['g_tm1']
psf_mirror = psf[::-1, ::-1]
blur = convolve_method(y_t, psf, 'same')
relative_blur = ne.evaluate("image / blur")
blur_blur = convolve_method(relative_blur, psf_mirror, 'same')
u_tp1 = ne.evaluate("y_t*blur_blur")
u_tp1[u_tp1 < 0] = 0
# update
kwargs.update(dict(
u_tm2 = u_tm1,
u_tm1 = u_t,
u_t = u_tp1,
blur = blur_blur,
g_tm2 = g_tm1,
g_tm1 = ne.evaluate("u_tp1 - y_t")
))
def richardson_lucy(image, psf, iterations=50, clip=False):
"""Richardson-Lucy deconvolution.
Parameters
----------
image : ndarray
Input degraded image (can be N dimensional).
psf : ndarray
The point spread function.
iterations : int
Number of iterations. This parameter plays the role of
regularisation.
    clip : boolean, optional
        If True, pixel values of the result above 1 or below -1 are
        thresholded for skimage pipeline compatibility. Defaults to False here.
Returns
-------
    rl_dict : dict
        Dictionary of intermediate arrays; the deconvolved image is
        rl_dict['u_t'].
Examples
--------
>>> from skimage import color, data, restoration
>>> camera = color.rgb2gray(data.camera())
>>> from scipy.signal import convolve2d
>>> psf = np.ones((5, 5)) / 25
>>> camera = convolve2d(camera, psf, 'same')
>>> camera += 0.1 * camera.std() * np.random.standard_normal(camera.shape)
>>> deconvolved = restoration.richardson_lucy(camera, psf, 5, False)
References
----------
.. [1] http://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconvolution
"""
# Stolen from the dev branch of skimage because stable branch is slow
# compute the times for direct convolution and the fft method. The fft is of
# complexity O(N log(N)) for each dimension and the direct method does
# straight arithmetic (and is O(n*k) to add n elements k times)
direct_time = np.prod(image.shape + psf.shape)
fft_time = np.sum([n*np.log(n) for n in image.shape + psf.shape])
# see whether the fourier transform convolution method or the direct
# convolution method is faster (discussed in scikit-image PR #1792)
time_ratio = 40.032 * fft_time / direct_time
if time_ratio <= 1 or len(image.shape) > 2:
convolve_method = fftconvolve2
else:
        convolve_method = scipy.signal.convolve
    image = image.astype(float)
    psf = psf.astype(float)
im_deconv = 0.5 * np.ones(image.shape)
psf_mirror = psf[::-1, ::-1]
rl_dict = dict(
image=image,
u_tm2=None,
u_tm1=None,
g_tm2=None,
g_tm1=None,
u_t=None,
y_t=image,
psf=psf
)
for i in range(iterations):
# d/(u_t \otimes p)
rl_update(convolve_method, rl_dict)
alpha = 0
if rl_dict['g_tm1'] is not None and rl_dict['g_tm2'] is not None and i > 1:
alpha = (rl_dict['g_tm1'] * rl_dict['g_tm2']).sum()/(rl_dict['g_tm2']**2).sum()
alpha = max(min(alpha,1),0)
if alpha != 0:
if rl_dict['u_tm1'] is not None:
h1_t = rl_dict['u_t'] - rl_dict['u_tm1']
if rl_dict['u_tm2'] is not None:
h2_t = rl_dict['u_t'] - 2 * rl_dict['u_tm1'] + rl_dict['u_tm2']
else:
h2_t = 0
else:
h1_t = 0
else:
h2_t = 0
h1_t = 0
rl_dict['y_t'] = rl_dict['u_t']+alpha*h1_t+alpha**2/2*h2_t
rl_dict['y_t'][rl_dict['y_t'] < 0] = 0
im_deconv = rl_dict['u_t']
if clip:
im_deconv[im_deconv > 1] = 1
im_deconv[im_deconv < -1] = -1
return rl_dict
deconvolved2 = richardson_lucy(camera, psf, 10)
plt.matshow(camera, cmap='Greys_r')
plt.matshow(np.real(deconvolved2['u_t']), cmap='Greys_r', vmin=camera.min(), vmax=camera.max())
%timeit deconvolved2 = richardson_lucy(camera, psf, 10)
```
# Exploratory Data Analysis
Statistical functions can be found here: https://nbviewer.org/github/AllenDowney/empiricaldist/blob/master/empiricaldist/dist_demo.ipynb
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set()
```
## Question: What's the average weight of a newborn?
```
nsfg = pd.read_hdf('data/nsfg.hdf5')
nsfg.head(3)
pounds = nsfg['birthwgt_lb1']
pounds.describe()
# replace 98-99 values since it's not weight
pounds = pounds.replace([98, 99], np.nan)
pounds.describe()
```
### Clean a variable
In the NSFG dataset, the variable 'nbrnaliv' records the number of babies born alive at the end of a pregnancy.
If you use .value_counts() to view the responses, you'll see that the value 8 appears once, and if you consult the codebook, you'll see that this value indicates that the respondent refused to answer the question.
Your job in this exercise is to replace this value with np.nan. Recall from the video how Allen replaced the values 98 and 99 in the ounces column using the .replace() method:
```
# Replace the value 8 with NaN
nsfg['nbrnaliv'].replace(8, np.nan, inplace=True)
# Print the values and their frequencies
print(nsfg['nbrnaliv'].value_counts())
```
### Calculate pregnancy length
For each pregnancy in the NSFG dataset, the variable 'agecon' encodes the respondent's age at conception, and 'agepreg' the respondent's age at the end of the pregnancy.
Both variables are recorded as integers with two implicit decimal places, so the value 2575 means that the respondent's age was 25.75
```
# Select the columns and divide by 100
agecon = nsfg['agecon'] / 100
agepreg = nsfg['agepreg'] / 100
# Compute the difference
preg_length = agepreg - agecon
# Compute summary statistics
print(preg_length.describe())
plt.hist(preg_length.dropna(), bins=30)
plt.show()
```
### Compare the weights in pre-term and normal births
```
preterm = nsfg['prglngth'] < 37
print('Number of pre-term births:', preterm.sum())
print('Pre-term mean baby weight:', nsfg[preterm]['birthwgt_lb1'].mean())
print('Normal mean baby weight:', nsfg[~preterm]['birthwgt_lb1'].mean())
```
### Investigate age column
```
def resample_rows_weighted(df, column='wgt2013_2015'):
"""Resamples a DataFrame using probabilities proportional to given column.
Args:
df: DataFrame
column: string column name to use as weights
returns:
DataFrame
"""
weights = df[column].copy()
weights /= sum(weights)
indices = np.random.choice(df.index, len(df), replace=True, p=weights)
sample = df.loc[indices]
return sample
# Plot the histogram
plt.hist(agecon, bins=20, histtype='step')
# Label the axes
plt.xlabel('Age at conception')
plt.ylabel('Number of pregnancies')
# Show the figure
plt.show()
# Resample the data
nsfg = resample_rows_weighted(nsfg, 'wgt2013_2015')
# Clean the weight variables
pounds = nsfg['birthwgt_lb1'].replace([98, 99], np.nan)
ounces = nsfg['birthwgt_oz1'].replace([98, 99], np.nan)
# Compute total birth weight
birth_weight = pounds + ounces/16
# Create a Boolean Series for full-term babies
full_term = nsfg['prglngth'] >= 37
# Select the weights of full-term babies
full_term_weight = birth_weight[full_term]
# Compute the mean weight of full-term babies
print(full_term_weight.mean())
# Filter full-term babies
full_term = nsfg['prglngth'] >= 37
# Filter single births
single = nsfg['nbrnaliv'] == 1
# Compute birth weight for single full-term babies
single_full_term_weight = birth_weight[full_term & single]
print('Single full-term mean:', single_full_term_weight.mean())
# Compute birth weight for multiple full-term babies
mult_full_term_weight = birth_weight[full_term & ~single]
print('Multiple full-term mean:', mult_full_term_weight.mean())
```
## Distributions
```
gss = pd.read_hdf('data/gss.hdf5', 'gss')
gss.head(3)
def pdf(df, col, normalize=False):
df2 = df.copy()
if not normalize:
return df2[col].value_counts().sort_index()
N = 10000
outcomes = np.zeros(N)
for i in range(N):
outcome = np.random.choice(df[col])
outcomes[i] = outcome
val, cnt = np.unique(outcomes, return_counts=True)
prop = cnt / len(outcomes)
return pd.DataFrame({'index': val, 'probability': prop}).dropna()
educ_pdf = pdf(gss, 'educ', normalize=True)
educ_pdf.probability.plot(kind='bar')
plt.show()
!pip install empiricaldist
import empiricaldist
# Select realinc
income = gss['realinc']
# Make the CDF
cdf_income = empiricaldist.Cdf.from_seq(income)
# Plot it
cdf_income.plot()
# Label the axes
plt.xlabel('Income (1986 USD)')
plt.ylabel('CDF')
plt.show()
income = gss['realinc']
pre95 = gss['year'] < 1995
empiricaldist.Pmf.from_seq(income[pre95]).plot(label='Before 1995')
empiricaldist.Pmf.from_seq(income[~pre95]).plot(label='After 1995')
plt.xlabel('Income (1986 USD)')
plt.ylabel('PMF')
plt.show()
income = gss['realinc']
pre95 = gss['year'] < 1995
empiricaldist.Cdf.from_seq(income[pre95]).plot(label='Before 1995')
empiricaldist.Cdf.from_seq(income[~pre95]).plot(label='After 1995')
plt.xlabel('Income (1986 USD)')
plt.ylabel('CDF')
plt.show()
# Select educ
educ = gss['educ']
# Bachelor's degree
bach = (educ >= 16)
# Associate degree
assc = ((educ >= 14) & (educ < 16))
# High school (12 or fewer years of education)
high = (educ <= 12)
print(high.mean())
income = gss['realinc']
# Plot the CDFs
empiricaldist.Cdf.from_seq(income[high]).plot(label='High school')
empiricaldist.Cdf.from_seq(income[assc]).plot(label='Associate')
empiricaldist.Cdf.from_seq(income[bach]).plot(label='Bachelor')
# Label the axes
plt.xlabel('Income (1986 USD)')
plt.ylabel('CDF')
plt.legend()
plt.show()
# normal CDF
from scipy.stats import norm
xs = np.linspace(-3, 3)
ys = norm(0, 1).cdf(xs)
plt.plot(xs, ys)
plt.show()
xs = np.linspace(-3, 3)
ys = norm(0, 1).pdf(xs)
plt.plot(xs, ys)
plt.show()
# Extract realinc and compute its log
income = gss['realinc']
log_income = np.log10(income)
# Compute mean and standard deviation
mean = np.mean(log_income)
std = np.std(log_income)
print(mean, std)
# Make a norm object
from scipy.stats import norm
dist = norm(mean, std)
# Evaluate the model CDF
xs = np.linspace(2, 5.5)
ys = dist.cdf(xs)
# Plot the model CDF
plt.clf()
plt.plot(xs, ys, color='gray')
# Create and plot the Cdf of log_income
empiricaldist.Cdf.from_seq(log_income).plot()
# Label the axes
plt.xlabel('log10 of realinc')
plt.ylabel('CDF')
plt.show()
# Evaluate the normal PDF
xs = np.linspace(2, 5.5)
ys = dist.pdf(xs)
# Plot the model PDF
plt.clf()
plt.plot(xs, ys, color='gray')
# Plot the data KDE
sns.kdeplot(log_income)
# Label the axes
plt.xlabel('log10 of realinc')
plt.ylabel('PDF')
plt.show()
```
# Using matplotlib basemap to project California data
```
%matplotlib inline
import pandas as pd, numpy as np, matplotlib.pyplot as plt
from geopandas import GeoDataFrame
from mpl_toolkits.basemap import Basemap
from shapely.geometry import Point
# define basemap colors
land_color = '#F6F6F6'
water_color = '#D2F5FF'
coastline_color = '#333333'
border_color = '#999999'
# load the point data and select only points in california
df = pd.read_csv('data/usa-latlong.csv')
usa_points = GeoDataFrame(df)
usa_points['geometry'] = usa_points.apply(lambda row: Point(row['longitude'], row['latitude']), axis=1)
states = GeoDataFrame.from_file('data/states_21basic/states.shp')
california = states[states['STATE_NAME']=='California']['geometry']
california_polygon = california.iloc[0]
california_points = usa_points[usa_points.within(california_polygon)]
# first define a transverse mercator projection
map_width_m = 1000 * 1000
map_height_m = 1200 * 1000
target_crs = {'datum':'WGS84',
'ellps':'WGS84',
'proj':'tmerc',
'lon_0':-119,
'lat_0':37.5}
# plot the map
fig_width = 6
plt.figure(figsize=[fig_width, fig_width * map_height_m / float(map_width_m)])
m = Basemap(ellps=target_crs['ellps'],
projection=target_crs['proj'],
lon_0=target_crs['lon_0'],
lat_0=target_crs['lat_0'],
width=map_width_m,
height=map_height_m,
resolution='l',
area_thresh=10000)
m.drawcoastlines(color=coastline_color)
m.drawcountries(color=border_color)
m.fillcontinents(color=land_color, lake_color=water_color)
m.drawstates(color=border_color)
m.drawmapboundary(fill_color=water_color)
x, y = m(np.array(california_points['longitude']), np.array(california_points['latitude']))
m.scatter(x, y, s=80, color='r', edgecolor='#333333', alpha=0.4, zorder=10)
plt.show()
# next define an albers projection for california
target_crs = {'datum':'NAD83',
'ellps':'GRS80',
'proj':'aea',
'lat_1':35,
'lat_2':39,
'lon_0':-119,
'lat_0':37.5,
'x_0':map_width_m/2,
'y_0':map_height_m/2,
'units':'m'}
# plot the map
fig_width = 6
plt.figure(figsize=[fig_width, fig_width * map_height_m / float(map_width_m)])
m = Basemap(ellps=target_crs['ellps'],
projection=target_crs['proj'],
lat_1=target_crs['lat_1'],
lat_2=target_crs['lat_2'],
lon_0=target_crs['lon_0'],
lat_0=target_crs['lat_0'],
width=map_width_m,
height=map_height_m,
resolution='l',
area_thresh=10000)
m.drawcoastlines(color=coastline_color)
m.drawcountries(color=border_color)
m.fillcontinents(color=land_color, lake_color=water_color)
m.drawstates(color=border_color)
m.drawmapboundary(fill_color=water_color)
x, y = m(np.array(california_points['longitude']), np.array(california_points['latitude']))
m.scatter(x, y, s=80, color='r', edgecolor='#333333', alpha=0.4, zorder=10)
plt.show()
```
## Summarize all common compounds and their percent strong scores
```
suppressPackageStartupMessages(library(dplyr))
suppressPackageStartupMessages(library(ggplot2))
suppressPackageStartupMessages(library(patchwork))
source("viz_themes.R")
source("plotting_functions.R")
source("data_functions.R")
results_dir <- file.path("../1.Data-exploration/Profiles_level4/results/")
# First, obtain the threshold to consider strong phenotype
cell_painting_pr_df <- load_percent_strong(assay = "cellpainting", results_dir = results_dir)
l1000_pr_df <- load_percent_strong(assay = "l1000", results_dir = results_dir)
pr_df <- dplyr::bind_rows(cell_painting_pr_df, l1000_pr_df)
pr_df$dose <- factor(pr_df$dose, levels = dose_order)
threshold_df <- pr_df %>%
dplyr::filter(type == 'non_replicate') %>%
dplyr::group_by(assay, dose) %>%
dplyr::summarise(threshold = quantile(replicate_correlation, 0.95))
threshold_plot_ready_df <- threshold_df %>% reshape2::dcast(dose ~ assay, value.var = "threshold")
# Next, get the median pairwise correlations and determine if they pass the threshold
cell_painting_comp_df <- load_median_correlation_scores(assay = "cellpainting", results_dir = results_dir)
l1000_comp_df <- load_median_correlation_scores(assay = "l1000", results_dir = results_dir)
# Note that the variable significant_compounds contains ALL compounds and a variable indicating if they pass the threshold
significant_compounds_df <- cell_painting_comp_df %>%
dplyr::left_join(l1000_comp_df, by = c("dose", "compound"), suffix = c("_cellpainting", "_l1000")) %>%
tidyr::drop_na() %>%
dplyr::left_join(threshold_df %>% dplyr::filter(assay == "Cell Painting"), by = "dose") %>%
dplyr::left_join(threshold_df %>% dplyr::filter(assay == "L1000"), by = "dose", suffix = c("_cellpainting", "_l1000")) %>%
dplyr::mutate(
pass_cellpainting_thresh = median_replicate_score_cellpainting > threshold_cellpainting,
pass_l1000_thresh = median_replicate_score_l1000 > threshold_l1000
) %>%
dplyr::mutate(pass_both = pass_cellpainting_thresh + pass_l1000_thresh) %>%
dplyr::mutate(pass_both = ifelse(pass_both == 2, TRUE, FALSE)) %>%
dplyr::select(
compound,
dose,
median_replicate_score_cellpainting,
median_replicate_score_l1000,
pass_cellpainting_thresh,
pass_l1000_thresh,
pass_both
)
# Count in how many doses the particular compound was reproducible
cp_reprod_count_df <- significant_compounds_df %>%
dplyr::filter(pass_cellpainting_thresh) %>%
dplyr::group_by(compound) %>%
dplyr::count() %>%
dplyr::rename(cell_painting_num_reproducible = n)
l1000_reprod_count_df <- significant_compounds_df %>%
dplyr::filter(pass_l1000_thresh) %>%
dplyr::group_by(compound) %>%
dplyr::count() %>%
dplyr::rename(l1000_num_reproducible = n)
significant_compounds_df <- significant_compounds_df %>%
dplyr::left_join(cp_reprod_count_df, by = "compound") %>%
dplyr::left_join(l1000_reprod_count_df, by = "compound") %>%
tidyr::replace_na(list(l1000_num_reproducible = 0, cell_painting_num_reproducible = 0)) %>%
dplyr::mutate(total_reproducible = cell_painting_num_reproducible + l1000_num_reproducible)
significant_compounds_df$dose <- factor(significant_compounds_df$dose, levels = dose_order)
significant_compounds_df$compound <- tolower(significant_compounds_df$compound)
print(length(unique(significant_compounds_df$compound)))
# Output file for further use
output_file <- file.path("data", "significant_compounds_by_threshold_both_assays.tsv.gz")
significant_compounds_df %>% readr::write_tsv(output_file)
print(dim(significant_compounds_df))
head(significant_compounds_df, 3)
```
# Parameter Values
In this notebook, we explain how parameter values are set for a model. Information on how to add parameter values is provided in our [online documentation](https://pybamm.readthedocs.io/en/latest/tutorials/add-parameter-values.html)
## Setting up parameter values
```
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import tests
import numpy as np
import os
import matplotlib.pyplot as plt
from pprint import pprint
os.chdir(pybamm.__path__[0]+'/..')
```
In `pybamm`, the object that sets parameter values for a model is the `ParameterValues` class, which extends `dict`. This takes the values of the parameters as input, which can be either a dictionary,
```
param_dict = {"a": 1, "b": 2, "c": 3}
parameter_values = pybamm.ParameterValues(param_dict)
print("parameter values are {}".format(parameter_values))
```
or a csv file,
```
f = open("param_file.csv", "w+")
f.write(
"""
Name [units],Value
a, 4
b, 5
c, 6
"""
)
f.close()
parameter_values = pybamm.ParameterValues("param_file.csv")
print("parameter values are {}".format(parameter_values))
```
or using one of the pre-set chemistries
```
print("Marquis2019 chemistry set is {}".format(pybamm.parameter_sets.Marquis2019))
chem_parameter_values = pybamm.ParameterValues(chemistry=pybamm.parameter_sets.Marquis2019)
print("Negative current collector thickness is {} m".format(
chem_parameter_values["Negative current collector thickness [m]"])
)
```
We can input functions into the parameter values, either directly (note we bypass the check that the parameter already exists)
```
def cubed(x):
return x ** 3
parameter_values.update({"cube function": cubed}, check_already_exists=False)
print("parameter values are {}".format(parameter_values))
```
or by using `pybamm.load_function` to load from a path to the function or just a name (in which case the whole directory is searched)
```
f = open("squared.py","w+")
f.write(
"""
def squared(x):
return x ** 2
"""
)
f.close()
parameter_values.update({"square function": pybamm.load_function("squared.py")}, check_already_exists=False)
print("parameter values are {}".format(parameter_values))
```
## Setting parameters for an expression
We represent parameters in models using the classes `Parameter` and `FunctionParameter`. These cannot be evaluated directly,
```
a = pybamm.Parameter("a")
b = pybamm.Parameter("b")
c = pybamm.Parameter("c")
func = pybamm.FunctionParameter("square function", {"a": a})
expr = a + b * c
try:
expr.evaluate()
except NotImplementedError as e:
print(e)
```
However, the `ParameterValues` class can walk through an expression, changing any `Parameter` objects it sees to the appropriate `Scalar` and any `FunctionParameter` objects to the appropriate `Function`, and the resulting expression can be evaluated
```
expr_eval = parameter_values.process_symbol(expr)
print("{} = {}".format(expr_eval, expr_eval.evaluate()))
func_eval = parameter_values.process_symbol(func)
print("{} = {}".format(func_eval, func_eval.evaluate()))
```
If a parameter needs to be changed often (for example, for convergence studies or parameter estimation), the `InputParameter` class should be used. This is not fixed by parameter values, and its value can be set on evaluation (or on solve):
```
d = pybamm.InputParameter("d")
expr = 2 + d
expr_eval = parameter_values.process_symbol(expr)
print("with d = {}, {} = {}".format(3, expr_eval, expr_eval.evaluate(inputs={"d": 3})))
print("with d = {}, {} = {}".format(5, expr_eval, expr_eval.evaluate(inputs={"d": 5})))
```
## Solving a model
The code below shows the entire workflow of:
1. Proposing a toy model
2. Discretising and solving it first with one set of parameters,
3. then updating the parameters and solving again
The toy model used is:
$$\frac{\mathrm{d} u}{\mathrm{d} t} = -a u$$
with initial conditions $u(0) = b$. The model is first solved with $a = 3, b = 2$, then with $a = -1, b = 2$
```
# Create model
model = pybamm.BaseModel()
u = pybamm.Variable("u")
a = pybamm.Parameter("a")
b = pybamm.Parameter("b")
model.rhs = {u: -a * u}
model.initial_conditions = {u: b}
model.variables = {"u": u, "a": a, "b": b}
# Set parameters, with a as an input ########################
parameter_values = pybamm.ParameterValues({"a": "[input]", "b": 2})
parameter_values.process_model(model)
#############################################################
# Discretise using default discretisation
disc = pybamm.Discretisation()
disc.process_model(model)
# Solve
t_eval = np.linspace(0, 2, 30)
ode_solver = pybamm.ScipySolver()
solution = ode_solver.solve(model, t_eval, inputs={"a": 3})
# Post-process, so that u1 can be called at any time t (using interpolation)
t_sol1 = solution.t
u1 = solution["u"]
# Solve again with different inputs ###############################
solution = ode_solver.solve(model, t_eval, inputs={"a": -1})
t_sol2 = solution.t
u2 = solution["u"]
###################################################################
# Plot
t_fine = np.linspace(0,t_eval[-1],1000)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13,4))
ax1.plot(t_fine, 2 * np.exp(-3 * t_fine), t_sol1, u1(t_sol1), "o")
ax1.set_xlabel("t")
ax1.legend(["2 * exp(-3 * t)", "u1"], loc="best")
ax1.set_title("a = 3, b = 2")
ax2.plot(t_fine, 2 * np.exp(t_fine), t_sol2, u2(t_sol2), "o")
ax2.set_xlabel("t")
ax2.legend(["2 * exp(t)", "u2"], loc="best")
ax2.set_title("a = -1, b = 2")
plt.tight_layout()
plt.show()
model.rhs
```
## Printing parameter values
In most models, it is useful to define dimensionless parameters, which are combinations of other parameters. However, since parameters objects must be processed by the `ParameterValues` class before they can be evaluated, it can be difficult to quickly check the value of a dimensionless parameter.
You can print all of the dimensionless parameters in a model by using the `print_parameters` function. Note that the `print_parameters` function also gives the dependence of the parameters on C-rate (as some dimensionless parameters vary with C-rate), but we can ignore that here
```
a = pybamm.Parameter("a")
b = pybamm.Parameter("b")
parameter_values = pybamm.ParameterValues({"a": 4, "b": 3})
parameters = {"a": a, "b": b, "a + b": a + b, "a * b": a * b}
param_eval = parameter_values.print_parameters(parameters)
for name, (value,C_dependence) in param_eval.items():
print("{}: {}".format(name, value))
```
If you provide an output file to `print_parameters`, the parameters will be printed to that output file.
# Talks markdown generator for academicpages
Adapted from generator in academicpages
Takes a TSV of talks with metadata and converts them for use with [academicpages.github.io](academicpages.github.io). This is an interactive Jupyter notebook ([see more info here](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html)). The core python code is also in `talks.py`. Run either from the `markdown_generator` folder after replacing `talks.tsv` with one containing your data.
TODO: Make this work with BibTex and other databases, rather than Stuart's non-standard TSV format and citation style.
```
import pandas as pd
import os
```
## Data format
The TSV needs to have the following columns: title, type, url_slug, venue, date, location, talk_url, description, with a header at the top. Many of these fields can be blank, but the columns must be in the TSV.
- Fields that cannot be blank: `title`, `url_slug`, `date`. All else can be blank. `type` defaults to "Talk"
- `date` must be formatted as YYYY-MM-DD.
- `url_slug` will be the descriptive part of the .md file and the permalink URL for the page about the paper.
- The .md file will be `YYYY-MM-DD-[url_slug].md` and the permalink will be `https://[yourdomain]/talks/YYYY-MM-DD-[url_slug]`
- The combination of `url_slug` and `date` must be unique, as it will be the basis for your filenames
This is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create).
Note: edit in Excel and save as tsv. Then open in Notepad and save as with utf-8 encoding.
EDIT 8/19/21: Excel no longer offers saving as .tsv, but saving as .txt (tab delimited) works; re-saving in Notepad didn't seem necessary, so the filename below was changed accordingly.
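As a quick sanity check of the naming rules above, the filename and permalink for a row can be derived like this (the slug and date here are invented):

```python
date = "2021-08-19"        # must be formatted YYYY-MM-DD
url_slug = "example-talk"  # hypothetical slug
md_filename = date + "-" + url_slug + ".md"
permalink = "/talks/" + date + "-" + url_slug
print(md_filename, permalink)  # 2021-08-19-example-talk.md /talks/2021-08-19-example-talk
```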
## Import TSV
Pandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or `\t`.
I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.
```
talks = pd.read_csv("talks.txt", sep="\t", header=0)
talks
```
## Escape special characters
YAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML encoded equivalents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.
```
html_escape_table = {
    "&": "&amp;",
    '"': "&quot;",
    "'": "&apos;"
}
def html_escape(text):
if type(text) is str:
return "".join(html_escape_table.get(c,c) for c in text)
else:
return "False"
```
## Creating the markdown files
This is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatenate a big string (```md```) that contains the markdown for each type. It does the YAML metadata first, then does the description for the individual page.
```
loc_dict = {}

for row, item in talks.iterrows():
    md_filename = str(item.date) + "-" + item.url_slug + ".md"
    html_filename = str(item.date) + "-" + item.url_slug
    year = item.date[:4]

    md = "---\ntitle: \"" + item.title + '"\n'
    md += "collection: talks" + "\n"

    if len(str(item.type)) > 3:
        md += 'type: "' + item.type + '"\n'
    else:
        md += 'type: "Talk"\n'

    md += "permalink: /talks/" + html_filename + "\n"

    if len(str(item.venue)) > 3:
        md += 'venue: "' + item.venue + '"\n'

    md += "date: " + str(item.date) + "\n"

    if len(str(item.location)) > 3:
        md += 'location: "' + str(item.location) + '"\n'

    md += 'excerpt: "'
    if len(str(item.description)) > 3:
        md += item.description + " \n"
    if len(str(item.talk_url)) > 3:
        md += "[Download](" + item.talk_url + ")"
    # close excerpt
    md += '"\n'

    if len(str(item.tags)) > 3:
        md += "tags: [" + html_escape(item.tags) + "]"

    md += "\n---\n"

    # start of main text
    md += "\n" + item.type + " \n" + item.venue + " \n"
    if len(str(item.location)) > 3:
        md += html_escape(item.location) + "\n"
    else:
        md += "Virtual\n"

    if len(str(item.description)) > 3:
        md += "\n" + html_escape(item.description) + "\n"
    if len(str(item.image)) > 3:
        md += "\n" + html_escape(item.image) + "\n"
    if len(str(item.attr)) > 3:
        md += "\n_Photo by " + html_escape(item.attr) + "._\n"
    else:
        md += "\n_Photo by Emily Hastings._\n"

    if len(str(item.talk_url)) > 3:
        if len(str(item.type)) > 3:
            if item.type == 'Poster':
                md += "\n[Download poster here](" + item.talk_url + ")\n"
            else:
                md += "\n[Download slides here](" + item.talk_url + ")\n"
        else:
            md += "\n[Download here](" + item.talk_url + ")\n"

    md_filename = os.path.basename(md_filename)
    # print(md)
    with open("../_talks/" + md_filename, 'w') as f:
        f.write(md)
```
# Under and over fitting
> Validation and learning curves
- toc: true
- badges: false
- comments: true
- author: Cécile Gallioz
- categories: [sklearn]
# Underfitting vs. Overfitting - Actual vs estimated function
[scikit-learn documentation](https://scikit-learn.org/stable/auto_examples/model_selection/plot_underfitting_overfitting.html#sphx-glr-auto-examples-model-selection-plot-underfitting-overfitting-py)
This example demonstrates the problems of underfitting and overfitting and how we can use linear regression with polynomial features to approximate nonlinear functions.
The plot shows the function that we want to approximate, which is a part of the cosine function. In addition, the samples from the real function and the approximations of different models are displayed. The models have polynomial features of different degrees.
We can see that a linear function (polynomial with degree 1) is not sufficient to fit the training samples. This is called underfitting.
A polynomial of degree 4 approximates the true function almost perfectly.
However, for higher degrees the model will overfit the training data, i.e. it learns the noise of the training data.
We evaluate overfitting/underfitting quantitatively by using cross-validation. We calculate the mean squared error (MSE) on the validation set: the higher it is, the less likely the model generalizes correctly from the training data.
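As a quick refresher before the scikit-learn example, the MSE is just the average of the squared differences between predictions and targets. A minimal hand computation (the values are illustrative only):

```python
def mse(y_true, y_pred):
    # mean of the squared residuals
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(mse([1.0, 2.0, 3.0], [1.0, 2.5, 2.0]))  # (0 + 0.25 + 1.0) / 3
```

Note that `cross_val_score` below reports the *negative* MSE (`neg_mean_squared_error`), since scikit-learn conventionally treats higher scores as better.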
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
def true_fun(X):
return np.cos(1.5 * np.pi * X)
np.random.seed(0)
n_samples = 50
degrees = [1, 4, 15]
X = np.sort(np.random.rand(n_samples))
y = true_fun(X) + np.random.randn(n_samples) * 0.1
plt.figure(figsize=(15, 5))
for i in range(len(degrees)):
ax = plt.subplot(1, len(degrees), i + 1)
plt.setp(ax, xticks=(), yticks=())
polynomial_features = PolynomialFeatures(degree=degrees[i],
include_bias=False)
linear_regression = LinearRegression()
pipeline = Pipeline([("polynomial_features", polynomial_features),
("linear_regression", linear_regression)])
pipeline.fit(X[:, np.newaxis], y)
# Evaluate the models using crossvalidation
scores = cross_val_score(pipeline, X[:, np.newaxis], y,
scoring="neg_mean_squared_error", cv=10)
X_test = np.linspace(0, 1, 100)
plt.plot(X_test, pipeline.predict(X_test[:, np.newaxis]), label="Model")
plt.plot(X_test, true_fun(X_test), label="True function")
plt.scatter(X, y, edgecolor='b', s=20, label="Samples")
plt.xlabel("x")
plt.ylabel("y")
plt.xlim((0, 1))
plt.ylim((-2, 2))
plt.legend(loc="best")
plt.title("Degree {}\nMSE {:.2e}(+/- {:.2e})".format(
degrees[i], -scores.mean(), scores.std()))
plt.show()
```
# Underfitting vs. Overfitting - Train vs test error
## Preparation
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import time
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_validate
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import validation_curve
from sklearn.model_selection import learning_curve
from sklearn.datasets import fetch_california_housing
myDataFrame = fetch_california_housing(as_frame=True)
data, target = myDataFrame.data, myDataFrame.target
target *= 100 # rescale the target in k$
print(f"The dataset data contains {data.shape[0]} samples and {data.shape[1]} features")
data.dtypes
```
## Validation curve
```
regressor = DecisionTreeRegressor()
cv = ShuffleSplit(n_splits=30, test_size=0.2)
cv_results = cross_validate(regressor, data, target,
cv=cv, scoring="neg_mean_absolute_error",
return_train_score=True, n_jobs=2)
scores = cv_results["test_score"]
fit_time = cv_results["fit_time"]
print("The mean absolute error is "
      f"{-scores.mean():.3f} +/- {scores.std():.3f} k$, for {fit_time.mean():.3f} seconds")
cv_results = pd.DataFrame(cv_results)
scores = pd.DataFrame()
scores[["train error", "test error"]] = -cv_results[
["train_score", "test_score"]]
scores.plot.hist(bins=50, edgecolor="black", density=True)
plt.xlabel("Mean absolute error (k$)")
_ = plt.title("Train and test errors distribution via cross-validation")
```
Here, we observe a **small training error** (actually zero), meaning that
the model is **not under-fitting**: it is flexible enough to capture any
variations present in the training set.
However the **significantly larger testing error** tells us that the
model is **over-fitting**: the model has memorized many variations of the
training set that could be considered "noisy" because they do not generalize
to help us make good predictions on the test set.
```
%%time
max_depth = [1, 5, 10, 15, 20, 25]
train_scores, test_scores = validation_curve(
regressor, data, target, param_name="max_depth", param_range=max_depth,
cv=cv, scoring="neg_mean_absolute_error", n_jobs=2)
train_errors, test_errors = -train_scores, -test_scores
plt.plot(max_depth, train_errors.mean(axis=1), label="Training error")
plt.plot(max_depth, test_errors.mean(axis=1), label="Testing error")
plt.legend()
plt.xlabel("Maximum depth of decision tree")
plt.ylabel("Mean absolute error (k$)")
_ = plt.title("Validation curve for decision tree")
plt.errorbar(max_depth, train_errors.mean(axis=1),
yerr=train_errors.std(axis=1), label='Training error')
plt.errorbar(max_depth, test_errors.mean(axis=1),
yerr=test_errors.std(axis=1), label='Testing error')
plt.legend()
plt.xlabel("Maximum depth of decision tree")
plt.ylabel("Mean absolute error (k$)")
_ = plt.title("Validation curve for decision tree")
```
## Learning curve
Let's compute the learning curve for a decision tree and vary the
proportion of the training set from 10% to 100%.
```
train_sizes = np.linspace(0.1, 1.0, num=5, endpoint=True)
train_sizes
cv = ShuffleSplit(n_splits=30, test_size=0.2)
results = learning_curve(
regressor, data, target, train_sizes=train_sizes, cv=cv,
scoring="neg_mean_absolute_error", n_jobs=2)
train_size, train_scores, test_scores = results[:3]
# Convert the scores into errors
train_errors, test_errors = -train_scores, -test_scores
plt.errorbar(train_size, train_errors.mean(axis=1),
yerr=train_errors.std(axis=1), label="Training error")
plt.errorbar(train_size, test_errors.mean(axis=1),
yerr=test_errors.std(axis=1), label="Testing error")
plt.legend()
plt.xscale("log")
plt.xlabel("Number of samples in the training set")
plt.ylabel("Mean absolute error (k$)")
_ = plt.title("Learning curve for decision tree")
```
Looking at the training error alone, we see that we get an error of 0 k$. It
means that the trained model (i.e. decision tree) is clearly overfitting the
training data.
Looking at the testing error alone, we observe that the more samples are
added to the training set, the lower the testing error becomes. We are
looking for the plateau of the testing error, beyond which adding samples
brings no further benefit, and assessing the potential gain of adding
more samples to the training set.
If we reach a plateau where adding new samples to the training set does not
reduce the testing error, we may have reached the Bayes error rate for the
available model. Using a more complex model might then be the only way to
reduce the testing error further.
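One simple way to make the "plateau" idea concrete is to check whether the last few testing-error values still improve by more than some tolerance. This is a minimal sketch; the function name, window size, and tolerance are my own choices, not anything from scikit-learn:

```python
def has_plateaued(test_errors, window=3, tol=1e-2):
    # True if none of the last `window` steps reduced the error by more than `tol`
    recent = test_errors[-(window + 1):]
    return all(prev - curr <= tol for prev, curr in zip(recent, recent[1:]))

# Improvements in the last three steps are tiny -> plateau reached
print(has_plateaued([50.0, 46.0, 45.995, 45.99, 45.99]))   # True
# Error still dropping fast -> more samples would likely help
print(has_plateaued([80.0, 60.0, 50.0, 45.0, 40.0]))       # False
```

In practice you would call this on the mean testing errors returned by `learning_curve` above, e.g. `has_plateaued(list(test_errors.mean(axis=1)))`.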
# **Spit some [tensor] flow**
We need to learn the intricacies of tensorflow to master deep learning
`Let's get this over with`
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
import cv2
print(tf.__version__)
```
## A time series is just a TxD matrix, right?
So instead of rows of independent samples, each column is a lagged copy of the time series. Don't worry, lemme explain:
The series `1 2 3 4 5 6 7 8 9 10` with T = 2 becomes:
| X1 | X2 |
|----|----|
| 1 | 2 |
| 2 | 3 |
| 3 | 4 |
| 4 | 5 |
| 5 | 6 |
| 6 | 7 |
| 7 | 8 |
| 8 | 9 |
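The table above can be built with a simple sliding window. A minimal plain-Python sketch (the variable names are my own):

```python
series = list(range(1, 11))  # the series 1..10
T = 2

# Each row is a window of T consecutive values; the columns are the lagged copies X1, X2.
# We stop one step early so each window still has a next value left over to predict.
windows = [series[i:i + T] for i in range(len(series) - T)]

print(len(windows))               # 8 rows, as in the table
print(windows[0], windows[-1])    # [1, 2] ... [8, 9]
```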
```
from tensorflow.keras.layers import Input, LSTM, GRU, Dropout, Dense, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.datasets import fashion_mnist
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
X_train, X_test = X_train / 255.0 , X_test / 255.0
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
classes = len(set(y_train))
print(classes)
input_shape = X_train[0].shape
print(input_shape)
# Here T = 28, D = 28
i_layer = Input(shape = input_shape)
h_layer = LSTM(256)(i_layer)
o_layer = Dense(classes, activation='softmax')(h_layer)
model = Model(i_layer, o_layer)
model.compile(optimizer='adam',
loss = 'sparse_categorical_crossentropy',
metrics = ['accuracy'])
report = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=20)
y_pred = model.predict(X_test).argmax(axis=1)
# only for sparse categorical crossentropy
# Taken from https://www.kaggle.com/zalando-research/fashionmnist?select=fashion-mnist_test.csv
labels = "T-shirt/top,Trouser,Pullover,Dress,Coat,Sandal,Shirt,Sneaker,Bag,AnkleBoot".split(",")
def evaluation_tf(report, y_test, y_pred, classes):
plt.plot(report.history['loss'], label = 'training_loss')
plt.plot(report.history['val_loss'], label = 'validation_loss')
plt.legend()
plt.show()
plt.plot(report.history['accuracy'], label = 'training_accuracy')
plt.plot(report.history['val_accuracy'], label = 'validation_accuracy')
plt.legend()
plt.show()
from sklearn.metrics import confusion_matrix
import itertools
cm = confusion_matrix(y_test, y_pred)
plt.figure(figsize=(10,10))
plt.imshow(cm, cmap=plt.cm.Blues)
for i,j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i,j], 'd'),
horizontalalignment = 'center',
color='black')
plt.xlabel("Predicted labels")
plt.ylabel("True labels")
plt.xticks(range(0,classes))
plt.yticks(range(0,classes))
plt.title('Confusion matrix')
plt.colorbar()
plt.show()
evaluation_tf(report, y_test, y_pred, classes)
misshits = np.where(y_pred!=y_test)[0]
index = np.random.choice(misshits)
plt.imshow(X_test[index], cmap='gray')
plt.title("Predicted = " + str(labels[y_pred[index]]) + ", Real = " + str(labels[y_test[index]]))
```
# Classifying Ionosphere structure using the K nearest neighbours algorithm
<hr>
### Nearest neighbors
Amongst standard machine learning algorithms, nearest neighbors is perhaps one of the most intuitive. To predict the class of a new sample, we look through the training dataset for the samples that are most similar to our new sample.
We take the most similar samples and predict the class that the majority of those samples have. As an example, suppose we wish to predict the class of the '?', based on which class it is more similar to (represented here by having similar objects closer together). We find the five nearest neighbors, which are three triangles, one circle, and one plus. There are more triangles than circles and pluses, and the predicted class for the '?' is therefore a triangle.
<img src = "images/knn.png">
[[image source]](https://github.com/rasbt/python-machine-learning-book/tree/master/images/image_gallery)
Nearest neighbors can be used for nearly any dataset. However, since we will have to compute the distance between all pairs of samples, it can be very computationally expensive to do so.
For example, if there are 10 samples in the dataset, there are 45 unique distances
to compute; if there are 1000 samples, there are nearly 500,000!
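The pair counts quoted above follow directly from the formula n(n-1)/2 for the number of unordered pairs; a quick check:

```python
def unique_pairs(n):
    # number of unordered pairs of samples, i.e. distances to compute
    return n * (n - 1) // 2

print(unique_pairs(10), unique_pairs(1000))  # 45 499500
```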
#### Distance metrics
If we have two samples, we need to know how close they are to each other. Furthermore, we need to answer
questions such as: are these two samples more similar than those other two?
The most common distance metric that you might have heard of is Euclidean
distance, the familiar real-world straight-line distance. Formally, Euclidean distance is the square root of the sum of the squared
differences for each feature. It is intuitive, although it provides poor accuracy if some features have larger values than others. It also gives poor results when many features have a value of 0, i.e. our data is 'sparse'. There are other distance metrics in use; two commonly employed ones are the Manhattan and cosine distances. The Manhattan distance is the sum of the absolute differences in each feature (with no squaring). While the Manhattan distance does suffer if
some features have larger values than others, the effect is not as dramatic as in the
case of Euclidean distance. Regardless, for the implementation of the KNN algorithm here, we will use the Euclidean distance.
## Dataset
To understand KNN, we will use the Ionosphere dataset, which is the recording of many
high-frequency antennas. The aim of the antennas is to determine whether there is a
structure in the ionosphere, a region of the upper atmosphere. Readings that show a
structure are classified as good, while those that do not are classified as bad. Our aim is to determine whether a reading
is good or bad.
You can download the dataset from http://archive.ics.uci.edu/ml/datasets/Ionosphere.
Save the ionosphere.data file from the Data Folder into a "data/Ionosphere" folder next to this notebook, matching the path used in the code below.
For each row in the dataset, there are 35 values. The first 34 are measurements taken
from the 17 antennas (two values for each antenna). The last is either 'g' or 'b'; that
stands for good and bad, respectively.
```
import csv
import numpy as np
# Size taken from the dataset and is known
X = np.zeros((351, 34), dtype='float')
y = np.zeros((351,), dtype='bool')
with open("data/Ionosphere/ionosphere.data", 'r') as input_file:
reader = csv.reader(input_file)
for i, row in enumerate(reader):
# Get the data, converting each item to a float
data = [float(datum) for datum in row[:-1]]
# Set the appropriate row in our dataset
X[i] = data
# 1 if the class is 'g', 0 otherwise
y[i] = row[-1] == 'g'
```
First, we load the csv and NumPy modules. Then we create the X and y NumPy arrays to store the dataset in; the sizes of these
arrays are known from the dataset. For each row, we take the first 34 values, turn each into a float, and save them to
our dataset. Finally, we take the last value of the row and set the class: 1 (or True) if it
is a good sample, and 0 if it is not. We now have a dataset of samples and features in X, and the corresponding classes in y.
Estimators in scikit-learn have two main functions: fit() and predict().
We train the algorithm using the fit method and our training set. We evaluate it
using the predict method on our testing set.
First, we need to create these training and testing sets. As before, import and run the
train_test_split function:
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=14)
print("There are {} samples in the training dataset".format(X_train.shape[0]))
print("There are {} samples in the testing dataset".format(X_test.shape[0]))
print("Each sample has {} features".format(X_train.shape[1]))
```
Then, we import the nearest neighbor class and create an instance for it using the default parameters. By default, the algorithm will choose the five nearest neighbors to predict
the class of a testing sample:
```
from sklearn.neighbors import KNeighborsClassifier
estimator = KNeighborsClassifier()
```
After creating our estimator, we must then fit it on our training dataset. For the
nearest neighbor class, this records our dataset, allowing us to find the nearest
neighbors of a new data point by comparing it to the training samples.
We fit the estimator on our training set and evaluate it on our testing set:
```
estimator.fit(X_train, y_train)
y_predicted = estimator.predict(X_test)
accuracy = np.mean(y_test == y_predicted) * 100
print("The accuracy is {0:.1f}%".format(accuracy))
```
This scores 86.4 percent accuracy, which is impressive for a default algorithm and
just a few lines of code! Most scikit-learn default parameters are chosen
to work well with a range of datasets. However, you should always aim to choose
parameters based on knowledge of the application and experiment.
```
from sklearn.model_selection import cross_val_score
scores = cross_val_score(estimator, X, y, scoring='accuracy')
average_accuracy = np.mean(scores) * 100
print("The average accuracy is {0:.1f}%".format(average_accuracy))
```
Using cross-validation, this gives a slightly more modest result of 82.3 percent, but it is still quite good
considering we have not yet tried setting better parameters.
### Tuning parameters
Almost all data mining algorithms have parameters that the user can set. This is
often a consequence of generalizing an algorithm so that it is applicable in a wide
variety of circumstances. Setting these parameters can be quite difficult, as choosing
good parameter values is often highly reliant on features of the dataset.
The nearest neighbor algorithm has several parameters, but the most important
one is the number of nearest neighbors to use when predicting the class of
an unseen sample. In scikit-learn, this parameter is called n_neighbors.
When this number is too low, a randomly labeled sample can cause an error. In contrast, when it is too high, the actual nearest
neighbors have a smaller effect on the result.
If we want to test a number of values for the n_neighbors parameter, for example,
each of the values from 1 to 20, we can rerun the experiment many times by setting
n_neighbors and observing the result:
```
avg_scores = []
all_scores = []
parameter_values = list(range(1, 21)) # Including 20
for n_neighbors in parameter_values:
estimator = KNeighborsClassifier(n_neighbors=n_neighbors)
scores = cross_val_score(estimator, X, y, scoring='accuracy')
avg_scores.append(np.mean(scores))
all_scores.append(scores)
```
We compute and store the average in our list of scores. We also store the full set of
scores for later analysis. We can then plot the relationship between the value of n_neighbors and the
accuracy.
```
%matplotlib inline
```
We then import pyplot from the matplotlib library and plot the parameter values
alongside average scores:
```
from matplotlib import pyplot as plt
plt.figure(figsize=(32,20))
plt.plot(parameter_values, avg_scores, '-o', linewidth=5, markersize=24)
#plt.axis([0, max(parameter_values), 0, 1.0])
```
While there is a lot of variance, the plot shows a decreasing trend as the number of
neighbors increases.
### Preprocessing using pipelines
When taking measurements of real-world objects, we can often get features in
very different ranges. As we saw in the case of classifying animal data using Naive Bayes, if we are measuring the qualities of an animal,
we might consider several features, as follows:
* Number of legs: This is between the range of 0-8 for most animals, while
some have many more!
* Weight: This is between the range of only a few micrograms, all the way
to a blue whale with a weight of 190,000 kilograms!
* Number of hearts: This can be between zero to five, in the case of
the earthworm.
For a mathematics-based algorithm to compare each of these features, the differences in scale, range, and units can be difficult to interpret. If we used the above features in many algorithms, the weight would probably be the most
influential feature simply due to its larger numbers, not because of any actual effectiveness of the feature.
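A two-feature toy example makes the point: with legs and weight on their natural scales, the Euclidean distance is driven almost entirely by weight (the numbers are invented for illustration):

```python
# (number of legs, weight in kg)
animal_a = (4, 80000.0)
animal_b = (2, 120000.0)

dist = ((animal_a[0] - animal_b[0]) ** 2 + (animal_a[1] - animal_b[1]) ** 2) ** 0.5
legs_only = abs(animal_a[0] - animal_b[0])

print(dist)       # ~40000: the 2-leg difference is invisible next to the weight gap
print(legs_only)  # 2
```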
One of the methods to overcome this is to use a process called preprocessing to normalize the features so that they all have the same range, or are put into categories like small, medium and large. Suddenly, the large difference in the
types of features has less of an impact on the algorithm, and can lead to large
increases in the accuracy.
Preprocessing can also be used to choose only the more effective features, create new
features, and so on. Preprocessing in scikit-learn is done through Transformer
objects, which take a dataset in one form and return an altered dataset after some
transformation of the data. The inputs don't have to be numerical, as Transformers are also
used to extract features. However, in this section, we will stick with preprocessing.
### An example
We can show an example of the problem by deliberately breaking the Ionosphere dataset.
While this is only an example, many real-world datasets have problems of this
form. First, we create a copy of the array so that we do not alter the original dataset:
```
X_broken = np.array(X)
```
Next, we break the dataset by dividing every second feature by 10:
```
X_broken[:,::2] /= 10
```
In theory, this should not have a great effect on the result. After all, the relative
values within each feature are unchanged. The major issue is that the scale has
changed: the untouched features now have values ten times larger than the divided ones. We can see the
effect of this by computing the accuracy:
```
estimator = KNeighborsClassifier()
original_scores = cross_val_score(estimator, X, y, scoring='accuracy')
print("The original average accuracy is {0:.1f}%".format(np.mean(original_scores) * 100))
broken_scores = cross_val_score(estimator, X_broken, y, scoring='accuracy')
print("The 'broken' average accuracy is {0:.1f}%".format(np.mean(broken_scores) * 100))
```
This gives a score of 82.3 percent for the original dataset, which drops down to
71.5 percent on the broken dataset. We can fix this by scaling all the features to
the range 0 to 1.
### Standard preprocessing
The preprocessing we will perform for this experiment is called feature-based
normalization through the MinMaxScaler class.
```
from sklearn.preprocessing import MinMaxScaler
```
This class takes each feature and scales it to the range 0 to 1. The minimum value is
replaced with 0, the maximum with 1, and the other values somewhere in between.
To apply our preprocessor, we first fit it to the data (so it can learn each feature's minimum and maximum) and then run the transform function, in the same way that we fit classifiers before predicting. We can combine these two steps by running the fit_transform function instead:
```
X_transformed = MinMaxScaler().fit_transform(X)
```
Here, X_transformed will have the same shape as X. However, each column will
have a maximum of 1 and a minimum of 0.
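To see what MinMaxScaler does to a single column, here is the same transformation written out in plain Python; a sketch of the formula, not scikit-learn's actual implementation:

```python
def min_max_scale(column):
    # map the minimum to 0, the maximum to 1, everything else linearly in between
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

print(min_max_scale([1.0, 3.0, 5.0]))  # [0.0, 0.5, 1.0]
```

MinMaxScaler applies exactly this, independently, to each column of X.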
There are various other forms of normalizing in this way, which is effective for other
applications and feature types:
* Ensure the sum of the values for each sample equals 1, using sklearn.preprocessing.Normalizer
* Force each feature to have a zero mean and a variance of 1, using sklearn.preprocessing.StandardScaler, which is a commonly used starting point for normalization
* Turn numerical features into binary features, where any value above a threshold is 1 and any below is 0, using sklearn.preprocessing.Binarizer
We can now create a workflow by combining the code from the previous sections,
using the broken dataset previously calculated:
```
X_transformed = MinMaxScaler().fit_transform(X_broken)
estimator = KNeighborsClassifier()
transformed_scores = cross_val_score(estimator, X_transformed, y, scoring='accuracy')
print("The average accuracy is {0:.1f}%".format(np.mean(transformed_scores) * 100))
```
This gives us back our score of 82.3 percent accuracy. The MinMaxScaler resulted in
features of the same scale, meaning that no features overpowered others by simply
being bigger values. While the Nearest Neighbor algorithm can be confused with
larger features, some algorithms handle scale differences better. In contrast, some
are much worse!
### Pipelines
As experiments grow, so does the complexity of the operations. We may split up
our dataset, binarize features, perform feature-based scaling, perform sample-based
scaling, and many more operations.
Keeping track of all of these operations can get quite confusing and can result in
being unable to replicate the result. Problems include forgetting a step, incorrectly
applying a transformation, or adding a transformation that wasn't needed.
Another issue is the order of the code. In the previous section, we created our
X_transformed dataset and then created a new estimator for the cross validation.
If we had multiple steps, we would need to track all of these changes to the dataset
in the code.
Pipelines are a construct that addresses these problems (and others, which we will
see in the next chapter). Pipelines store the steps in your data mining workflow. They
can take your raw data in, perform all the necessary transformations, and then create
a prediction. This allows us to use pipelines in functions such as cross_val_score,
where they expect an estimator. First, import the Pipeline object:
```
from sklearn.pipeline import Pipeline
```
Pipelines take a list of steps as input, representing the chain of the data mining
application. The last step needs to be an Estimator, while all previous steps are
Transformers. The input dataset is altered by each Transformer, with the output of
one step being the input of the next step. Finally, the samples are classified by the last
step's estimator. In our pipeline, we have two steps:
1. Use MinMaxScaler to scale the feature values from 0 to 1
2. Use KNeighborsClassifier as the classification algorithm
Each step is then represented by a tuple ('name', step). We can then create
our pipeline:
```
scaling_pipeline = Pipeline([('scale', MinMaxScaler()),
('predict', KNeighborsClassifier())])
```
The key here is the list of tuples. The first tuple is our scaling step and the second
tuple is the predicting step. We give each step a name: the first we call scale and the
second we call predict, but you can choose your own names. The second part of the
tuple is the actual Transformer or estimator object.
Running this pipeline is now very easy, using the cross validation code from before:
```
scores = cross_val_score(scaling_pipeline, X_broken, y, scoring='accuracy')
print("The pipeline scored an average accuracy of {0:.1f}%".format(np.mean(scores) * 100))
```
This gives us the same score as before (82.3 percent), which is expected, as we are
effectively running the same steps.
Setting up pipelines is a great way to ensure that the code complexity does not grow unmanageably.
<hr>
### Notes:
The right choice of k is crucial to find a good balance between over- and underfitting. We also have to make sure that we choose a distance metric that is appropriate for the features in the dataset. Often, while using the Euclidean distance measure, it is important to standardize the data so that each feature contributes equally to the distance.
#### The curse of dimensionality
It is important to mention that KNN is very susceptible to overfitting due to the curse of dimensionality. The curse of dimensionality describes the phenomenon where the feature
space becomes increasingly sparse for an increasing number
of dimensions of a fixed-size training dataset. Intuitively, we
can think of even the closest neighbors being too far away in a
high-dimensional space to give a good estimate.
In models where regularization is not applicable, such as decision trees and KNN, we can use feature selection and dimensionality reduction techniques to help us avoid the curse of dimensionality.
#### Parametric versus nonparametric models
Machine learning algorithms can be grouped into parametric and nonparametric models. Using parametric models, we estimate parameters from the training dataset to learn a function that can classify new data points without requiring the original training dataset anymore. Typical examples of parametric models are the perceptron, logistic regression, and the linear SVM. In contrast, nonparametric models can't be characterized by a fixed set of parameters, and the number of parameters grows with the training data. Two examples of nonparametric models that we have seen so far are the decision tree classifier/random forest and the kernel SVM.
KNN belongs to a subcategory of nonparametric models described as instance-based learning. Models based on instance-based learning are characterized by memorizing the training dataset; lazy learning is a special case of instance-based learning that is associated with no (zero) cost during the learning process.
_____
### Summary
In this chapter, we used several of scikit-learn's methods for building a
standard workflow to run and evaluate data mining models. We introduced the
Nearest Neighbors algorithm, which is already implemented in scikit-learn as an
estimator. Using this class is quite easy; first, we call the fit function on our training
data, and second, we use the predict function to predict the class of testing samples.
We then looked at preprocessing by fixing poor feature scaling. This was done using
a Transformer object, the MinMaxScaler class. Transformers have a fit method and
a transform method, which takes a dataset as input and returns a
transformed dataset as output.
___
# Notebook 4: Quantum operations and distance
In this notebook we will be taking a closer look at quantum operations, i.e. parts of a quantum circuit that are _not necessarily_ unitary.
```
import numpy as np
# Import cirq, install it if it's not installed.
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
import cirq
```
## Working with density matrices
To work with quantum operations we need to work with density matrices instead of the pure states we have been using so far. Let's first see how we can do simulations with a density matrix in a unitary quantum circuit using `DensityMatrixSimulator`.
```
circuit = cirq.Circuit()
num_qubits = 2
qubits = cirq.LineQubit.range(num_qubits)
circuit.append([cirq.H(qubits[0])])
circuit.append([cirq.CNOT(qubits[0], qubits[1])])
print(circuit)
simulator = cirq.DensityMatrixSimulator()
result = simulator.simulate(circuit)
rho = result.final_density_matrix
rho
```
The resulting density matrix is a hermitian positive semi-definite (PSD) matrix with trace equal to 1. Because the input is the pure state $|00\rangle$, and all the operations are unitary, the output should also be a pure state. Recall that a state $\rho$ is pure if and only if $\mathrm{tr}(\rho^2)=1$. Let's verify these properties.
```
print("Trace =", np.real_if_close(np.trace(rho)))
print("Hermitian?: ", np.allclose(rho, rho.conjugate().T))
print("PSD?: ", np.all(np.linalg.eigvalsh(rho) >= 0))
# np.linalg.eigvals computes eigenvalues of a matrix
# np.linalg.eigvalsh computes eigenvalues of a hermitian matrix. The assumption that
# the matrix is hermitian allows for a faster more numerically stable computation.
print("Pure state?", np.trace(rho @ rho) > 1 - 1e-4)
```
## Noisy channels
Usually we need the quantum operator formalism because we want to model _noise_ in a quantum circuit. There are many types of noise that can occur in real-life quantum circuits. Perhaps the simplest type is the _bit-flip channel_, which flips the state (applies the X gate) of a single qubit with a certain probability.
If $q$ is the probability of flipping the state, then the bit-flip channel acts as:
$$
\rho \mapsto (1-q)\rho+qX\rho X^\dagger
$$
in the _operator sum formalism_, the _operation elements_ are therefore $\sqrt{1-q}I$ and $\sqrt{q}X$.
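Before wiring the channel into a circuit, we can check the operator-sum formula by hand on a single qubit. Applied to $|0\rangle\langle 0|$ with $q = 0.1$, the map should give the mixture $\mathrm{diag}(0.9,\, 0.1)$ and preserve the trace. A plain-Python sketch, with 2x2 matrices as nested lists:

```python
q = 0.1
X = [[0.0, 1.0], [1.0, 0.0]]    # Pauli X; real and self-adjoint, so X dagger = X
rho = [[1.0, 0.0], [0.0, 0.0]]  # |0><0|

def matmul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

X_rho_X = matmul(matmul(X, rho), X)
rho_out = [[(1 - q) * rho[i][j] + q * X_rho_X[i][j] for j in range(2)] for i in range(2)]

print(rho_out)                          # [[0.9, 0.0], [0.0, 0.1]]
print(rho_out[0][0] + rho_out[1][1])    # trace is preserved: 1.0
```

The purity $\mathrm{tr}(\rho^2)$ of this output is $0.9^2 + 0.1^2 = 0.82 < 1$, which is why the circuit below no longer produces a pure state.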
Let's modify the circuit above to apply the bit-flip channel to the second qubit, before the CNOT gate.
```
circuit = cirq.Circuit()
num_qubits = 2
qubits = cirq.LineQubit.range(num_qubits)
circuit.append([cirq.H(qubits[0]), cirq.bit_flip(0.1)(qubits[1])])
circuit.append([cirq.CNOT(qubits[0], qubits[1])])
print(circuit)
simulator = cirq.DensityMatrixSimulator()
result = simulator.simulate(circuit)
rho = result.final_density_matrix
rho
```
Unlike before, this circuit is _not_ unitary, and hence the output is not a pure state. Here's what happens if we compute $\mathrm{tr}(\rho^2)$:
```
np.trace(rho @ rho)
```
## Exercise 1a
> While $\rho$ is not a pure state, it is the mixture of two pure states: $\rho = 0.1|\psi\rangle\langle \psi| + 0.9|\varphi\rangle\langle \varphi|$. Use the eigenvalue decomposition `np.linalg.eigh` to find $|\psi\rangle $ and $|\varphi \rangle$. (Hint: look carefully at the eigenvalues `eigvals` to select the right eigenvectors). Use `cirq.qis.dirac_notation` to neatly format the resulting vectors.
```
eigvals, eigvects = np.linalg.eigh(rho)
# YOUR CODE HERE
```
## Exercise 1b
> Denote the entire circuit above by $\mathcal E$; then we defined $\rho =\mathcal E(|00\rangle\langle 00|)$ and observed that $\mathrm{tr}(\rho^2)<1$. What happens if we iterate the circuit a few times? Use a for loop to show experimentally that $\mathrm{tr}(\mathcal E^n(\rho)^2)$ converges to 0.5.
- To apply the circuit multiple times, we can use the `initial_state=rho` keyword for the function `simulator.simulate`. This sets the initial state of the simulator to the density matrix `rho`.
- If you use too many iterations, you might get this error:
```py
ValueError: The density matrix is not hermitian.
```
This is because of accumulating numerical errors. To avoid this, simply use fewer iterations. The convergence should be pretty good after 10 iterations.
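One common workaround for this kind of numerical drift (a plain numpy sketch, not required for the exercise) is to project the matrix back onto the hermitian matrices after each iteration:

```python
import numpy as np

def make_hermitian(rho):
    # average rho with its conjugate transpose, discarding anti-hermitian drift
    return (rho + rho.conj().T) / 2

rho = np.array([[0.5, 0.1 + 1e-9j], [0.1, 0.5]])  # slightly non-hermitian
rho = make_hermitian(rho)
print(np.allclose(rho, rho.conj().T))  # → True
```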
```
rho = simulator.simulate(circuit).final_density_matrix
# Your code here
```
## Exercise 1c
> We can get states $\rho$ such that $\mathrm{tr}(\rho^2)$ is even smaller than 0.5. Modify the circuit by adding _a single_ `bit_flip(0.1)` gate to the circuit at the right place and repeat the experiment of Exercise 1b to converge to a state with $\mathrm{tr}(\rho^2)\to 0.25$.
```
circuit = cirq.Circuit()
num_qubits = 2
qubits = cirq.LineQubit.range(num_qubits)
# YOUR CODE HERE
```
## Exercise 1d
> The lowest value of $\mathrm{tr}(\rho^2)$ we can possibly achieve is when $\rho = I/d$, where $d$ is the dimension of the system. Show that this state is a fixed point of the circuit $\mathcal E$; i.e. $\mathcal E(\rho) = \rho$.
```
rho_worst = np.eye(4, dtype=np.complex64) / 4
# YOUR CODE HERE
```
## Trace distance and fidelity
We will investigate how different types of noise can affect the fidelity and trace distance between states. Your first job is to implement trace distance and fidelity.
## Exercise 2a
> Recall that the trace distance is defined by $D(\rho,\sigma) = \mathrm{tr}|\rho-\sigma|$. Implement the trace distance in a function `trace_distance`. Here you can use the fact that for any hermitian matrix $A$ we have $\mathrm{tr}|A|=\sum_i \sigma_i(A)$, where $\sigma_i$ is the $i$-th _singular value_ of $A$. You can compute singular values using `scipy.linalg.svdvals`.
```
import scipy.linalg
def trace_distance(rho, sigma):
# YOUR CODE HERE
...
# rho = |00><00|
rho = np.zeros((4, 4), dtype=np.complex64)
rho[0, 0] = 1
# sigma = E(rho)
sigma = simulator.simulate(circuit, initial_state=rho).final_density_matrix
print(trace_distance(rho, rho)) # should be 0
print(trace_distance(rho, sigma)) # should be around 1.33
```
## Exercise 2b
> The fidelity is defined by $F(\rho,\sigma) = \mathrm{tr}\sqrt{\rho^{1/2}\sigma \rho^{1/2}}$. Implement the fidelity in a function `fidelity`. You can compute the square root of a matrix using `scipy.linalg.sqrtm`. Make sure that the result is a real number, possibly by using `np.abs( ... )` on the result.
```
def fidelity(rho, sigma):
# YOUR CODE HERE
...
print(fidelity(rho, rho)) # should be 1
print(fidelity(rho, sigma)) # should be around 0.67
```
It is perhaps worth noting that while the definition $F(\rho,\sigma) = \mathrm{tr}\sqrt{\rho^{1/2}\sigma \rho^{1/2}}$ is used in Nielsen-Chuang, the definition $F(\rho,\sigma) = (\mathrm{tr}\sqrt{\rho^{1/2}\sigma \rho^{1/2}})^2$ is more common in contemporary literature. Since we follow the book, we will keep using the former definition.
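For commuting (diagonal) states the relation between the two conventions is easy to see: the Nielsen-Chuang fidelity reduces to the classical Bhattacharyya coefficient $\sum_i \sqrt{p_i q_i}$, and the contemporary definition is simply its square. A small numpy sketch (the two distributions are arbitrary):

```python
import numpy as np

p = np.array([0.7, 0.3])  # eigenvalues of rho (diagonal state)
q = np.array([0.5, 0.5])  # eigenvalues of sigma (diagonal state)

F_nc = np.sum(np.sqrt(p * q))  # Nielsen-Chuang convention
F_sq = F_nc ** 2               # convention common in contemporary literature

print(round(F_nc, 4), round(F_sq, 4))  # → 0.9789 0.9583
```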
### Fidelity of a quantum operation
We often need to know how much a quantum operation (in particular noise) can distort a state. We can do this by computing the fidelity between a state and the result of applying the operation to the state. That is, we consider $F(\rho,\mathcal E(\rho))$.
This fidelity will be larger for some states and smaller for others. We are therefore interested, for example, in the _minimum_ fidelity over all states $\rho$. This is, however, not easy to compute in general. Instead we consider the _average_ fidelity
$$
\overline F(\mathcal E) := \int_{S^{n-1}}\! F(|\psi\rangle\langle \psi|,\mathcal E(|\psi\rangle\langle \psi|))\,\mathrm d\psi
$$
Here we took the average over all the pure states, but we could also take the average over all the mixed states. For now we can estimate this integral using Monte-Carlo integration. That is, we randomly sample pure states $|\psi\rangle$, compute $F(|\psi\rangle\langle \psi|,\mathcal E(|\psi\rangle\langle \psi|))$, and then average the results.
## Exercise 2c
> Implement the function `average_fidelity(circuit, N)` that estimates the average fidelity of a quantum circuit `circuit` using `N` samples. You can use the function `random_pure_state(num_qubits)` to generate random pure states.
```
circuit = cirq.Circuit()
qubit1 = cirq.LineQubit(0)
circuit.append(cirq.bit_flip(0.1)(qubit1))
def random_pure_state(num_qubits):
n = 2**num_qubits
# Vector of random normal complex numbers
psi = np.random.normal(size=n) + 1j * np.random.normal(size=n)
# Normalize
psi = psi / np.linalg.norm(psi)
psi = psi.astype(np.complex64)
# Compute rank-1 matrix |psi><psi|
state = np.outer(psi, psi.conj())
return state
def average_fidelity(circuit, N):
num_qubits = len(circuit.all_qubits())
# YOUR CODE HERE
average_fidelity(circuit, 200) # Should be around 0.967
```
Now let's try to understand how the average fidelity changes if we use noise of different strength. Instead of the bit-flip channel we will be considering the _depolarizing channel_, which is defined as:
$$
\mathcal E(\rho) = \frac{pI}{2} + (1-p)\rho
$$
i.e. with probability $p$ we change the state into $I/2$ -- a completely random state, and with probability $1-p$ we leave the state unchanged. This channel can be implemented using the `depolarize(p)` gate.
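Before the exercise, here is what the channel does to a single pure qubit, written out directly in numpy (a sketch independent of Cirq; the value of $p$ is arbitrary):

```python
import numpy as np

p = 0.3
rho = np.array([[1, 0], [0, 0]], dtype=complex)  # pure state |0><0|

# depolarizing channel: mix with the maximally mixed state I/2
rho_out = p * np.eye(2) / 2 + (1 - p) * rho

print(round(np.real(np.trace(rho_out)), 6))            # → 1.0  (trace preserved)
print(round(np.real(np.trace(rho_out @ rho_out)), 6))  # → 0.745 (purity drops)
```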
## Exercise 2d
> Using a `for` loop, define different circuits consisting of the depolarizing channel of strength $p$ on a single qubit. Then compute the average fidelity of this circuit, and add the result to the list `fidelities_list`. The result is then plotted for you. If done correctly, the two plotted lines should perfectly overlap.
```
import matplotlib.pyplot as plt
fidelities_list = []
p_values = np.linspace(0, 1, 20)
for p in p_values:
# YOUR CODE HERE
...
plt.plot(p_values, fidelities_list, "-o", label="Estimated")
plt.plot(p_values, np.sqrt(1 - 2 * p_values / 3), label="Theoretical")
plt.legend()
```
```
#load watermark
%load_ext watermark
%watermark -a 'Gopala KR' -u -d -v -p watermark,numpy,pandas,matplotlib,nltk,sklearn,tensorflow,theano,mxnet,chainer,seaborn,keras,tflearn,bokeh,gensim
from preamble import *
%matplotlib inline
```
## Algorithm Chains and Pipelines
```
from sklearn.svm import SVC
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
# load and split the data
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, random_state=0)
# compute minimum and maximum on the training data
scaler = MinMaxScaler().fit(X_train)
# rescale the training data
X_train_scaled = scaler.transform(X_train)
svm = SVC()
# learn an SVM on the scaled training data
svm.fit(X_train_scaled, y_train)
# scale the test data and score the scaled data
X_test_scaled = scaler.transform(X_test)
print("Test score: {:.2f}".format(svm.score(X_test_scaled, y_test)))
```
### Parameter Selection with Preprocessing
```
from sklearn.model_selection import GridSearchCV
# for illustration purposes only, don't use this code!
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10, 100],
'gamma': [0.001, 0.01, 0.1, 1, 10, 100]}
grid = GridSearchCV(SVC(), param_grid=param_grid, cv=5)
grid.fit(X_train_scaled, y_train)
print("Best cross-validation accuracy: {:.2f}".format(grid.best_score_))
print("Best parameters: ", grid.best_params_)
print("Test set accuracy: {:.2f}".format(grid.score(X_test_scaled, y_test)))
mglearn.plots.plot_improper_processing()
```
### Building Pipelines
```
from sklearn.pipeline import Pipeline
pipe = Pipeline([("scaler", MinMaxScaler()), ("svm", SVC())])
pipe.fit(X_train, y_train)
print("Test score: {:.2f}".format(pipe.score(X_test, y_test)))
```
### Using Pipelines in Grid-searches
```
param_grid = {'svm__C': [0.001, 0.01, 0.1, 1, 10, 100],
'svm__gamma': [0.001, 0.01, 0.1, 1, 10, 100]}
grid = GridSearchCV(pipe, param_grid=param_grid, cv=5)
grid.fit(X_train, y_train)
print("Best cross-validation accuracy: {:.2f}".format(grid.best_score_))
print("Test set score: {:.2f}".format(grid.score(X_test, y_test)))
print("Best parameters: {}".format(grid.best_params_))
mglearn.plots.plot_proper_processing()
rnd = np.random.RandomState(seed=0)
X = rnd.normal(size=(100, 10000))
y = rnd.normal(size=(100,))
from sklearn.feature_selection import SelectPercentile, f_regression
select = SelectPercentile(score_func=f_regression, percentile=5).fit(X, y)
X_selected = select.transform(X)
print("X_selected.shape: {}".format(X_selected.shape))
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import Ridge
print("Cross-validation accuracy (cv only on ridge): {:.2f}".format(
np.mean(cross_val_score(Ridge(), X_selected, y, cv=5))))
pipe = Pipeline([("select", SelectPercentile(score_func=f_regression,
percentile=5)),
("ridge", Ridge())])
print("Cross-validation accuracy (pipeline): {:.2f}".format(
np.mean(cross_val_score(pipe, X, y, cv=5))))
```
### The General Pipeline Interface
```
def fit(self, X, y):
X_transformed = X
for name, estimator in self.steps[:-1]:
# iterate over all but the final step
# fit and transform the data
X_transformed = estimator.fit_transform(X_transformed, y)
# fit the last step
self.steps[-1][1].fit(X_transformed, y)
return self
def predict(self, X):
X_transformed = X
for step in self.steps[:-1]:
# iterate over all but the final step
# transform the data
X_transformed = step[1].transform(X_transformed)
# predict using the last step
return self.steps[-1][1].predict(X_transformed)
```

### Convenient Pipeline creation with ``make_pipeline``
```
from sklearn.pipeline import make_pipeline
# standard syntax
pipe_long = Pipeline([("scaler", MinMaxScaler()), ("svm", SVC(C=100))])
# abbreviated syntax
pipe_short = make_pipeline(MinMaxScaler(), SVC(C=100))
print("Pipeline steps:\n{}".format(pipe_short.steps))
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
pipe = make_pipeline(StandardScaler(), PCA(n_components=2), StandardScaler())
print("Pipeline steps:\n{}".format(pipe.steps))
```
#### Accessing step attributes
```
# fit the pipeline defined before to the cancer dataset
pipe.fit(cancer.data)
# extract the first two principal components from the "pca" step
components = pipe.named_steps["pca"].components_
print("components.shape: {}".format(components.shape))
```
#### Accessing Attributes in a Pipeline inside GridSearchCV
```
from sklearn.linear_model import LogisticRegression
pipe = make_pipeline(StandardScaler(), LogisticRegression())
param_grid = {'logisticregression__C': [0.01, 0.1, 1, 10, 100]}
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, random_state=4)
grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(X_train, y_train)
print("Best estimator:\n{}".format(grid.best_estimator_))
print("Logistic regression step:\n{}".format(
grid.best_estimator_.named_steps["logisticregression"]))
print("Logistic regression coefficients:\n{}".format(
grid.best_estimator_.named_steps["logisticregression"].coef_))
```
### Grid-searching preprocessing steps and model parameters
```
from sklearn.datasets import load_boston
boston = load_boston()
X_train, X_test, y_train, y_test = train_test_split(boston.data, boston.target,
random_state=0)
from sklearn.preprocessing import PolynomialFeatures
pipe = make_pipeline(
StandardScaler(),
PolynomialFeatures(),
Ridge())
param_grid = {'polynomialfeatures__degree': [1, 2, 3],
'ridge__alpha': [0.001, 0.01, 0.1, 1, 10, 100]}
grid = GridSearchCV(pipe, param_grid=param_grid, cv=5, n_jobs=-1)
grid.fit(X_train, y_train)
mglearn.tools.heatmap(grid.cv_results_['mean_test_score'].reshape(3, -1),
xlabel="ridge__alpha", ylabel="polynomialfeatures__degree",
xticklabels=param_grid['ridge__alpha'],
yticklabels=param_grid['polynomialfeatures__degree'], vmin=0)
print("Best parameters: {}".format(grid.best_params_))
print("Test-set score: {:.2f}".format(grid.score(X_test, y_test)))
param_grid = {'ridge__alpha': [0.001, 0.01, 0.1, 1, 10, 100]}
pipe = make_pipeline(StandardScaler(), Ridge())
grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(X_train, y_train)
print("Score without poly features: {:.2f}".format(grid.score(X_test, y_test)))
pipe = Pipeline([('preprocessing', StandardScaler()), ('classifier', SVC())])
from sklearn.ensemble import RandomForestClassifier
param_grid = [
{'classifier': [SVC()], 'preprocessing': [StandardScaler(), None],
'classifier__gamma': [0.001, 0.01, 0.1, 1, 10, 100],
'classifier__C': [0.001, 0.01, 0.1, 1, 10, 100]},
{'classifier': [RandomForestClassifier(n_estimators=100)],
'preprocessing': [None], 'classifier__max_features': [1, 2, 3]}]
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, random_state=0)
grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(X_train, y_train)
print("Best params:\n{}\n".format(grid.best_params_))
print("Best cross-validation score: {:.2f}".format(grid.best_score_))
print("Test-set score: {:.2f}".format(grid.score(X_test, y_test)))
```
### Summary and Outlook
```
test complete ; Gopal
```
[](https://colab.research.google.com/github/jfcrenshaw/pzflow/blob/main/examples/marginalization.ipynb)
If running in Colab, to switch to GPU, go to the menu and select Runtime -> Change runtime type -> Hardware accelerator -> GPU.
In addition, uncomment and run the following code:
```
# !pip install pzflow
```
-------------------
## Marginalization during posterior calculation
This example notebook demonstrates how to marginalize over missing variables during posterior calculation.
We will use the Flow trained in the [redshift example](https://github.com/jfcrenshaw/pzflow/blob/main/examples/redshift_example.ipynb).
```
import jax.numpy as np
import matplotlib.pyplot as plt
from pzflow.examples import get_example_flow
```
First let's load the pre-trained flow, and use it to generate some samples:
```
flow = get_example_flow()
samples = flow.sample(2, seed=123)
samples
```
Remember that we can calculate posteriors for the data in samples. For example, let's plot redshift posteriors:
```
grid = np.linspace(0.25, 1.45, 100)
pdfs = flow.posterior(samples, column="redshift", grid=grid)
fig, axes = plt.subplots(1, 2, figsize=(5.5, 2), dpi=120, constrained_layout=True)
for i, ax in enumerate(axes.flatten()):
ax.plot(grid, pdfs[i], label="Redshift posterior")
ztrue = samples["redshift"][i]
ax.axvline(ztrue, c="C3", label="True redshift")
ax.set(
xlabel="redshift",
xlim=(ztrue - 0.25, ztrue + 0.25),
yticks=[]
)
axes[0].legend(
bbox_to_anchor=(0.55, 1.05, 1, 0.2),
loc="lower left",
mode="expand",
borderaxespad=0,
ncol=2,
fontsize=8,
)
plt.show()
```
But what if we have missing values? E.g. let's imagine that galaxy 1 wasn't observed in the u band, while galaxy 2 wasn't observed in the u or y bands. We will mark these non-observations with the value 99:
```
# make a new copy of the samples
samples2 = samples.copy()
# make the non-observations
samples2.iloc[0, 1] = 99
samples2.iloc[1, 1] = 99
samples2.iloc[1, -1] = 99
# print the new samples
samples2
```
Now if we want to calculate posteriors, we can't simply call `flow.posterior()` as before because the flow will think that 99 is the actual value for those bands, rather than just a flag for a missing value. What we can do, however, is pass `marg_rules`, which is a dictionary of rules that tells the Flow how to marginalize over missing variables.
`marg_rules` must include:
- "flag": 99, which tells the posterior method that 99 is the flag for missing values
- "u": callable, which returns an array of values for the u band over which to marginalize
- "y": callable, which returns an array of values for the y band over which to marginalize
"u" and "y" both map to callables because you can use a function of the other values to decide which values of u and y to marginalize over. For example, maybe you expect the value of u to be close to the value of g, in which case you might use:
```
"u": lambda row: np.linspace(row["g"] - 1, row["g"] + 1, 100)
```
The only constraint is that regardless of the values of the other variables, the callable must *always* return an array of the same length.
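A quick way to convince yourself a rule satisfies this constraint is to call it on rows with different values and compare the lengths (a sketch using plain dicts in place of DataFrame rows, and plain numpy in place of the notebook's `jax.numpy`, which behaves the same for `linspace`):

```python
import numpy as np

# hypothetical rule: marginalize u over a window centered on the g magnitude
u_rule = lambda row: np.linspace(row["g"] - 1, row["g"] + 1, 100)

# the grid moves with g, but its length never changes
grid_a = u_rule({"g": 24.0})
grid_b = u_rule({"g": 27.5})
print(len(grid_a), len(grid_b))  # → 100 100
```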
For this example, we won't make the marginalization rules a function of the other variables, but will instead return a fixed array.
```
marg_rules = {
"flag": 99, # tells the posterior method that 99 means missing value
"u": lambda row: np.linspace(26, 28, 40), # the array of u values to marginalize over
"y": lambda row: np.linspace(24, 26, 40), # the array of y values to marginalize over
}
pdfs2 = flow.posterior(samples2, column="redshift", grid=grid, marg_rules=marg_rules)
fig, axes = plt.subplots(1, 2, figsize=(5.5, 2), dpi=120, constrained_layout=True)
for i, ax in enumerate(axes.flatten()):
ax.plot(grid, pdfs[i], label="Posterior w/ all bands")
ax.plot(grid, pdfs2[i], label="Posterior w/ missing bands marginalized")
ztrue = samples["redshift"][i]
ax.axvline(ztrue, c="C3", label="True redshift")
ax.set(
xlabel="redshift",
xlim=(ztrue - 0.25, ztrue + 0.25),
yticks=[]
)
axes[0].legend(
bbox_to_anchor=(0, 1.05, 2, 0.2),
loc="lower left",
mode="expand",
borderaxespad=0,
ncol=3,
fontsize=7.5,
)
plt.show()
```
You can see that marginalizing over the bands (i.e., throwing out information) degrades the posteriors.
Be warned that marginalizing over fine grids quickly gets very computationally expensive, especially when rows in your data frame are missing multiple values.
```
import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage as ndi
import os
from PIL import Image
import PIL.ImageOps
from skimage.morphology import watershed
from skimage.feature import peak_local_max
from skimage.filters import threshold_otsu
from skimage.morphology import binary_closing
from skimage.color import rgb2gray
```
Watershed with binarization first
```
arraydirectory= './edge_array/'
photodirectory='./photos/'
image=np.array(Image.open(photodirectory + '1449.jpg'))
image = rgb2gray(image)
thresh = threshold_otsu(image)
img_bin = image > thresh
image_closed=binary_closing(img_bin)
# Now we want to separate the two objects in image
# Generate the markers as local maxima of the distance to the background
distance = ndi.distance_transform_edt(image_closed)
local_maxi = peak_local_max(distance, indices=False)
markers = ndi.label(local_maxi)[0]
labels = watershed(-distance, markers, mask=image_closed)
fig, axes = plt.subplots(ncols=3, figsize=(9, 3), sharex=True, sharey=True,
subplot_kw={'adjustable': 'box-forced'})
ax = axes.ravel()
ax[0].imshow(image_closed, cmap=plt.cm.gray, interpolation='nearest')
ax[0].set_title('Overlapping objects')
ax[1].imshow(-distance, cmap=plt.cm.gray, interpolation='nearest')
ax[1].set_title('Distances')
ax[2].imshow(labels, cmap=plt.cm.spectral, interpolation='nearest')
ax[2].set_title('Separated objects')
for a in ax:
a.set_axis_off()
fig.tight_layout()
plt.show()
```
Watershed on image itself
```
arraydirectory= './edge_array/'
photodirectory='./photos/'
image=np.array(Image.open(photodirectory + '1449.jpg'))
# Now we want to separate the two objects in image
# Generate the markers as local maxima of the distance to the background
distance = ndi.distance_transform_edt(image)
local_maxi = peak_local_max(distance, indices=False)
markers = ndi.label(local_maxi)[0]
labels = watershed(-distance, markers, mask=image)
fig, axes = plt.subplots(ncols=3, figsize=(9, 3), sharex=True, sharey=True,
subplot_kw={'adjustable': 'box-forced'})
ax = axes.ravel()
ax[0].imshow(image, cmap=plt.cm.gray, interpolation='nearest')
ax[0].set_title('Overlapping objects')
ax[1].imshow(-distance, cmap=plt.cm.gray, interpolation='nearest')
ax[1].set_title('Distances')
ax[2].imshow(labels, cmap=plt.cm.spectral, interpolation='nearest')
ax[2].set_title('Separated objects')
for a in ax:
a.set_axis_off()
fig.tight_layout()
plt.show()
```
So we use Watershed on the binary picture.
```
arraydirectory= '../FeatureSampleFoodClassification/watershed_array/'
photodirectory='../SampleFoodClassifier_Norm/'
if not os.path.exists(arraydirectory):
os.makedirs(arraydirectory)
for fn in os.listdir(photodirectory):
if os.path.isfile(photodirectory + fn) and '.jpg' in fn:
img=np.array(Image.open(photodirectory + fn))
img = rgb2gray(img)
thresh = threshold_otsu(img)
img_bin = img > thresh
img_closed=binary_closing(img_bin)
# Now we want to separate the two objects in image
# Generate the markers as local maxima of the distance to the background
distance = ndi.distance_transform_edt(img_closed)
local_maxi = peak_local_max(distance, indices=False)
markers = ndi.label(local_maxi)[0]
ws = watershed(-distance, markers, mask=img_closed)
ws_flat=[item for sublist in ws for item in sublist]
np.save(arraydirectory + fn,ws_flat)
print('done')
```
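One detail worth noting about the loop above: `np.save` appends a `.npy` suffix when the target name doesn't already end in one, so the file written for `1449.jpg` is `1449.jpg.npy` and must be loaded under that name. A small self-contained sketch (using a temporary directory rather than the real folders):

```python
import os
import tempfile

import numpy as np

with tempfile.TemporaryDirectory() as d:
    name = os.path.join(d, 'photo.jpg')  # mimics arraydirectory + fn above
    np.save(name, np.arange(6))
    restored = np.load(name + '.npy')    # note the added suffix
    print(restored.sum())  # → 15
```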
# Rerank with MonoT5
```
!nvidia-smi
from pygaggle.rerank.base import Query, Text
from pygaggle.rerank.transformer import MonoT5
from trectools import TrecRun
import ir_datasets
monoT5Reranker = MonoT5()
DIR='/mnt/ceph/storage/data-in-progress/data-teaching/theses/wstud-thesis-probst/retrievalExperiments/runs-ecir22/'
DIR_v2='/mnt/ceph/storage/data-in-progress/data-teaching/theses/wstud-thesis-probst/retrievalExperiments/runs-marco-v2-ecir22/'
def load_topics(version, file):
import pandas as pd
return pd.read_csv('../../Data/navigational-topics-and-qrels-ms-marco-v' + str(version) + '/' + file, sep='\t', names=['num', 'query'])
df_popular_queries = load_topics(1, 'topics.msmarco-entrypage-popular.tsv')
df_random_queries = load_topics(1, 'topics.msmarco-entrypage-random.tsv')
df_popular_run = TrecRun(DIR + 'entrypage-popular/run.ms-marco-content.bm25-default.txt')
df_random_run = TrecRun(DIR + 'entrypage-random/run.ms-marco-content.bm25-default.txt')
df_popular_queries_v2 = load_topics(2, 'topics.msmarco-v2-entrypage-popular.tsv')
df_random_queries_v2 = load_topics(2, 'topics.msmarco-v2-entrypage-random.tsv')
df_popular_run_v2 = TrecRun(DIR_v2 + 'entrypage-popular/run.msmarco-doc-v2.bm25-default.txt')
df_random_run_v2 = TrecRun(DIR_v2 + 'entrypage-random/run.msmarco-doc-v2.bm25-default.txt')
df_popular_queries
df_popular_run
df_random_queries
df_random_run
df_random_run.run_data
```
# The actual reranking
```
def get_query_or_fail(df_queries, topic_number):
ret = df_queries[df_queries['num'] == int(topic_number)]
if len(ret) != 1:
raise ValueError('Could not handle ' + str(topic_number))
return ret.iloc[0]['query']
marco_v1_doc_store = ir_datasets.load('msmarco-document').docs_store()
marco_v2_doc_store = ir_datasets.load('msmarco-document-v2').docs_store()
def get_doc_text(doc_id):
if doc_id.startswith('msmarco_doc_'):
ret = marco_v2_doc_store.get(doc_id)
else:
ret = marco_v1_doc_store.get(doc_id)
return ret.title + ' ' + ret.body
def docs_for_topic(df_run, topic_number):
return df_run.run_data[df_run.run_data['query'] == int(topic_number)].docid
def rerank_with_model(topic, df_queries, df_run, model):
query = get_query_or_fail(df_queries, topic)
print('rerank query ' + query)
documents = [Text(get_doc_text(i), {'docid': i}, 0) for i in docs_for_topic(df_run, topic)[:100]]
ret = sorted(model.rerank(Query(query), documents), key=lambda i: i.score, reverse=True)
return [{'score': i.score, 'id': i.metadata['docid'], 'body': i.text} for i in ret]
def rerank(file_name, df_run, df_queries, model, tag):
from tqdm import tqdm
with open(file_name, 'w') as out_file:
for topic in tqdm(df_queries.num):
for i in zip(range(100), rerank_with_model(topic, df_queries, df_run, model)):
out_file.write(str(topic) + ' Q0 ' + i[1]['id'] + ' ' + str(i[0] + 1) + ' ' + str(i[1]['score']) + ' ' + tag + '\n')
```
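The lines written by `rerank` follow the standard six-column TREC run format, `topic Q0 docid rank score tag`, separated by whitespace. A minimal parser sketch (the sample line is made up for illustration):

```python
def parse_run_line(line):
    # columns: topic Q0 docid rank score tag
    topic, _q0, docid, rank, score, tag = line.split()
    return {"topic": topic, "docid": docid, "rank": int(rank),
            "score": float(score), "tag": tag}

rec = parse_run_line("42 Q0 msmarco_doc_00_1 1 3.75 mono-t5-at-bm25")
print(rec["rank"], rec["score"])  # → 1 3.75
```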
# Marco V1
```
rerank(DIR + 'entrypage-random/run.ms-marco-content.bm25-mono-t5.txt', df_random_run, df_random_queries.copy(), monoT5Reranker, 'mono-t5-at-bm25')
rerank(DIR + 'entrypage-popular/run.ms-marco-content.bm25-mono-t5.txt', df_popular_run, df_popular_queries.copy(), monoT5Reranker, 'mono-t5-at-bm25')
```
# Marco V2
```
rerank(DIR_v2 + 'entrypage-random/run.ms-marco-content.bm25-mono-t5.txt', df_random_run_v2, df_random_queries_v2.copy(), monoT5Reranker, 'mono-t5-at-bm25')
rerank(DIR_v2 + 'entrypage-popular/run.ms-marco-content.bm25-mono-t5.txt', df_popular_run_v2, df_popular_queries_v2.copy(), monoT5Reranker, 'mono-t5-at-bm25')
```
# Rerank with MonoBERT
```
from pygaggle.rerank.transformer import MonoBERT
monoBert = MonoBERT()
rerank(DIR + 'entrypage-random/run.ms-marco-content.bm25-mono-bert.txt', df_random_run, df_random_queries.copy(), monoBert, 'mono-bert-at-bm25')
rerank(DIR + 'entrypage-popular/run.ms-marco-content.bm25-mono-bert.txt', df_popular_run, df_popular_queries.copy(), monoBert, 'mono-bert-at-bm25')
rerank(DIR_v2 + 'entrypage-random/run.ms-marco-content.bm25-mono-bert.txt', df_random_run_v2, df_random_queries_v2.copy(), monoBert, 'mono-bert-at-bm25')
rerank(DIR_v2 + 'entrypage-popular/run.ms-marco-content.bm25-mono-bert.txt', df_popular_run_v2, df_popular_queries_v2.copy(), monoBert, 'mono-bert-at-bm25')
```
# Python for Policy Analysts
## Session 0: Setting Up Python
Created by: O Downs (odowns@berkeley.edu)
Instructor Edition
### Goals:
* Getting you started with Python!
* Download Anaconda, which will facilitate your Python use
* Understand Terminal commands
* Learn how to `pip install`
* Learn how to start up a Jupyter notebook
## Step 0: Understanding Some Core Concepts
If you're new to coding, congratulations! You've taken the first step towards being an amazing programmer.
If you're new to Python, awesome!
Here are some things you need to understand before coding:
* Zero-indexing: in computer science, you generally start counting at zero. So for example, the first item in a list is item ZERO, not item one. This can get confusing and lead to off-by-one errors, but never fear! Often you can solve those problems by tweaking your code.
* Computers aren't smart: although they can do amazing calculations and run programs, computers aren't intelligent the way humans are. When you're coding, be aware that your computer doesn't know what you want it to do until you tell it exactly–it won't assume anything. So be patient, and remember that errors are inevitable! Even the best programmers make lots of errors and spend a lot of time debugging.
* Getting help: everyone needs help sometimes. Thankfully, the internet is great for getting help! If you're stuck on a problem, try Googling it. No need to reinvent the wheel–somebody's probably had exactly the same problem as you. But BE MINDFUL of websites like StackOverflow and others: not everyone is always right, so be smart and critical about code you see. And of course, remember that somebody wrote that code! Don't steal code!
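Zero-indexing from the list above is easiest to see with a tiny snippet you can try once Python is set up:

```python
fruits = ["apple", "banana", "cherry"]
print(fruits[0])  # → apple   (item zero is the first item)
print(fruits[2])  # → cherry  (the third item has index 2)
```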
## Step 1: Anaconda
The first step in getting you set up with Python is getting Anaconda. Anaconda can be downloaded [here](https://www.anaconda.com/distribution/) and is the platform you'll use to write code.
Anaconda is a Python distribution: it bundles the Python interpreter with many commonly used data-science libraries and tools such as Jupyter Notebook, so you don't have to install each piece separately.
So go ahead and download it.
## Step 2: Understanding Terminal (Mac)
(On Windows, the Anaconda Prompt installed with Anaconda plays a similar role and accepts similar commands.)
It's not necessary to understand what's happening inside the computer if you're not interested in doing hardcore computer science, but it is important to know some key commands in the Command Line.
The Command Line is your interface with the core of the computer. It's comparable to the windows you can open to view and move files on a Mac, but you only use your keyboard. It also has additional functionalities.
On Mac, open Terminal. Don't worry if it looks scary–this is your friend!
A full tour of the command line is beyond the scope of this session; a good interactive tutorial is available [here](https://www.codecademy.com/learn/learn-the-command-line).
## Step 3: `pip install`
This is a command you'll use a lot when you're starting out. `pip install` installs Python libraries on your computer. For example, in the Command Line, type `pip install seaborn` and you'll be able to use the seaborn library!
## Step 4: Start Up a Notebook
There are two easy ways to set up a notebook.
Way 1: Using Anaconda
1. Open Anaconda
2. Click on the Jupyter Notebook "Launch" button. This will open a Terminal window which will open a Jupyter window in your browser.
3. Navigate via this GUI to your preferred directory (folder)
4. In the top right hand corner, click "New" and under "Python" click "Notebook"
5. Write your code!
6. You can move this notebook around using the Command Line or with your regular interface
Way 2: Using the Command Line
1. Open Terminal
2. Type `jupyter notebook` and hit Enter
3. Navigate via this GUI to your preferred directory (folder)
4. In the top right hand corner, click "New" and under "Python" click "Notebook"
5. Write your code!
6. You can move this notebook around using the Command Line or with your regular interface
# Hurricane Ike Maximum Water Levels
Compute the maximum water level during Hurricane Ike on a 9 million node triangular mesh storm surge model. Plot the results with Datashader.
```
import xarray as xr
import numpy as np
import pandas as pd
import hvplot.xarray
import fsspec
from dask.distributed import Client, progress
#from dask_kubernetes import KubeCluster
#cluster = KubeCluster()
%%time
from dask_cloudprovider import FargateCluster
cluster = FargateCluster(n_workers=1, image='rsignell/pangeo-worker:2020-01-23c')
cluster.dashboard_link
```
### Start a dask cluster to crunch the data
```
cluster.scale(2);
cluster
```
For demos, I often click in this cell and do "Cell=>Run All Above", then wait until the workers appear. This can take several minutes (up to 6!) for instances to spin up and Docker containers to be downloaded. Then I shutdown the notebook and run again from the beginning, and the workers will fire up quickly because the instances have not spun down yet.
```
%%time
client = Client(cluster)
client
```
### Read the data using the cloud-friendly zarr data format
```
ds = xr.open_zarr(fsspec.get_mapper('s3://pangeo-data-uswest2/esip/adcirc/ike', anon=False, requester_pays=True))
ds['zeta']
```
How many GB of sea surface height data do we have?
```
ds['zeta'].nbytes/1.e9
```
Take the maximum over the time dimension and persist the data on the workers in case we would like to use it later. This is the computationally intensive step.
```
%%time
max_var = ds['zeta'].max(dim='time').persist()
progress(max_var)
```
### Visualize data on mesh using HoloViz.org tools
```
import numpy as np
import datashader as dshade
import holoviews as hv
import geoviews as gv
import cartopy.crs as ccrs
import hvplot.xarray
import holoviews.operation.datashader as dshade
dshade.datashade.precompute = True
hv.extension('bokeh')
v = np.vstack((ds['x'], ds['y'], max_var)).T
verts = pd.DataFrame(v, columns=['x','y','vmax'])
points = gv.operation.project_points(gv.Points(verts, vdims=['vmax']))
tris = pd.DataFrame(ds['element'].values.astype('int')-1, columns=['v0','v1','v2'])
tiles = gv.tile_sources.OSM
value = 'max water level'
label = '{} (m)'.format(value)
trimesh = gv.TriMesh((tris, points), label=label)
mesh = dshade.rasterize(trimesh).opts(
cmap='rainbow', colorbar=True, width=600, height=400)
tiles * mesh
```
### Extract a time series at a specified lon, lat location
Because Xarray does not yet understand that `x` and `y` are coordinate variables on this triangular mesh, we create our own simple function to find the closest point. If we had many such queries, we could use a fancier tree-based algorithm.
```
# find the indices of the points in (x,y) closest to the points in (xi,yi)
def nearxy(x,y,xi,yi):
ind = np.ones(len(xi),dtype=int)
for i in range(len(xi)):
dist = np.sqrt((x-xi[i])**2+(y-yi[i])**2)
ind[i] = dist.argmin()
return ind
#just offshore of Galveston
lat = 29.2329856
lon = -95.1535041
ind = nearxy(ds['x'].values,ds['y'].values,[lon], [lat])[0]
ds['zeta'][:,ind].hvplot(grid=True)
```
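If we needed time series at many locations, the brute-force loop above would be repeated per query; a spatial index such as SciPy's `cKDTree` (an assumption here: SciPy is not used in the original notebook) answers each nearest-point query in roughly logarithmic time. A minimal self-contained sketch with made-up coordinates standing in for the mesh nodes:

```python
import numpy as np
from scipy.spatial import cKDTree

# Toy stand-ins for the mesh node coordinates ds['x'], ds['y']
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.0, 1.0])

# Build the tree once, then query any number of (lon, lat) points cheaply
tree = cKDTree(np.column_stack([x, y]))
dist, ind = tree.query([(2.1, 0.2)])
print(ind[0])  # -> 2, the same answer the nearxy loop would give
```

On the real 9-million-node mesh, the one-time tree build is paid once and each subsequent lookup avoids a full scan of all nodes.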
# GitHub: the developers' social network, powered by Git
_Author_: Hugo Ducommun
_Date_: 30 May 2019
_GitHub_ is a platform for projects by motivated young developers who want to publish their work openly (open source). _GitHub_ is known for being practical when working in a team: it lets every collaborator work on one and the same project without affecting the others' progress. The website can also be used professionally through paid accounts.
## We often hear about `git`: what is it?
**git** is a version-control (versioning) tool that the _GitHub_ website builds on.
It makes it easy to access the history of a project's earlier versions and to keep files in sync, thanks to a system of **branches** that I develop in the next section.
In fact, git is what underlies the _GitHub_ website. _GitHub_ adds a graphical interface on top of git, which is mainly used from a terminal; that is why _GitHub_ is better known than the version-control tool itself. For that reason, I will mainly study git here and add some extra information about _GitHub_.
# How `git` works
### An illustrated introduction to the notion of a *branch*
Git works with branches. Here is a small diagram I found very expressive about how it works.

So we have two different kinds of *branches*: yes, two, not three as the diagram might suggest. The middle branch is called the **master branch**; it holds the official, working version of your project. The second kind is the **feature branch**, represented in the diagram by the named branches, hat and glasses (in reality, every branch except the master branch).
The workflow is simple. The project here is to add a hat and glasses to the original octopus image. One collaborator handles the hat (C1) and another the glasses (C2). They proceed as follows:
1. C1 and C2 copy the current project (the master branch) into a personal feature branch.
2. They make their changes and get them working (add a hat or glasses).
3. They upload their changes back to the master branch so the project is complete.
### Technical terms
Of course, a real project is a bit more complicated: there is much more to do than add two accessories, and we work from the command line. But this is a good first approach to the concept.
Here I detail some of the terms git uses.
---
#### Repository
A repository is your project as a whole: the documents you edit and whose changes you track live there.
The repository can be local or hosted on your dedicated server.
---
#### Branch
A branch is, by default, a copy of the master branch. Once created, it is no longer affected by changes made on the project's other branches.
In the diagram below, 'Copie de A' is a branch of 'Branche A'; bringing its changes back into branch A would be a **merge**.
The command to create a new branch is: `git branch newBranchName`
The command `git branch --list` lists all the branches of the current repository.

---
#### Pull request
This term can be translated as a request to merge (**merge**).
It is when a collaborator wants to merge their branch into another (usually the master branch) to apply changes, such as bug fixes or new features, to the target branch.
The owner of the target branch is free to accept or reject this **pull request**.

---
#### Fork
Fork (literally a fork in the road) means copying an existing branch. We often fork the master branch at the start so we can create our own branch and modify the project without affecting the master branch.
---
#### Merge
Merge is roughly the opposite of fork: after making all our changes, we can merge our branch into another. This operation is often protected by a pull request; otherwise anyone could modify any branch.
If the merge succeeds, branch B's changes are applied to branch A.
The command to merge branch B is: `git merge branchB`
Careful: this command must be run from branch A!

---
#### Commit
Commit is the most common action you will perform with git. As its icon suggests, it corresponds to a change recorded on the branch in question. When you have modified a branch locally, you **commit** to save the changes, usually with an informative message that makes older changes easier to find later.
The command to commit with an example message is: `git commit -m 'Add the cow-boy hat'`

---
#### Push
Push sends all your commits to the dedicated server hosting the repository (the remote). In a sense, you 'send' your files to your collaborators.
The command to push is: `git push`
---
#### Pull
The opposite of push: you receive the files sent by your colleagues. Before each big work session, make sure you pull to see your team's progress. It downloads the repository's folders and files onto your local machine.
The command to pull is: `git pull`
---
# Other git commands
We have already seen a few commands in the technical terms above; here are the rest of the basics:
* `git init`: initializes your folder as a git folder (repository)
* `git clone URL`: clones an existing repository into the folder where you run the command (example URL: https://github.com/Bugnon/oc-2018.git)
* `git status`: shows the status of the files in your repository, letting you see where you stand
* `git add fileName`: adds files to the index (staging area). The command `git add *` adds all modified files.
* `git checkout branchName`: used to switch from one branch to another (basic usage)
The first time you use git, you must register your username and email.
After running `git init`, which initializes your folder as a git repository, use the two commands below:
* `git config --global user.name 'hugoducom'`
* `git config --global user.email 'prenom.nom@bugnon.educanet2.ch'`
One advantage of git is that it is particularly well documented online, since those who master it tend to be quite active on forums; you can always find help on the various platforms. Help is also available through the command `git help commandName`, for example `git help checkout`.
# Summary diagram of git and its commands

_The `git fetch` command is not covered here._
### Publishing a file
In short, to publish a file, for example _index.html_, to our repository, type in order:
1. `git pull`
2. `git add index.html`
3. `git commit -m 'Add my html file'`
4. `git push`
The initial `git pull` avoids file conflicts on push by first bringing our versioned working copy up to date.
# Still not clear? Here is a practical example
I am a young web developer who wants to share my first steps on a development platform like GitHub. I create a folder on my desktop called "web"; this is the folder I want to share on GitHub. So I download [Git](https://git-scm.com/downloads).
It is a small project, so I will work only in the _master branch_ and will not create any other branch.
After installing, I right-click my folder and choose **Git Bash Here**. A console opens; this is where you type your commands.
Since I have read this notebook, I first type `git init`, and record my details with `git config`.

I then start developing in this folder. I create my _index.html_ file and start coding, and the moment comes when I want to publish what I have done. So I run `git add index.html`, or `git add *` if I want to add every file in my web folder.
Using `git status`, I see that my files are ready to be committed to the local repository.
Then `git commit -m 'Add the first version of my site'`.

Now head over to [GitHub](https://github.com/new) to create our repository. I log in and fill in the required information.

Next, run the two commands GitHub asks for:
* `git remote add origin https://github.com/hugoducom/web.git`
* `git push -u origin master`
During the second command you may be asked for your GitHub login and password, so you need a GitHub account.

The hard part is done! Your repository is online on GitHub, congratulations! Reloading the page https://github.com/hugoducom/web you will find your _index.html_ file.

For the rest of your web development journey, simply follow the 'Publishing a file' section above.
Once you have understood how git works and can do everything from the command line, you can install applications that do the work for you, such as [GitHub Desktop](https://desktop.github.com/), which will greatly simplify file sharing throughout your development career.
---
#### Sources:
* https://gerardnico.com/code/version/git/branch
* https://fr.wikipedia.org/wiki/GitHub
* https://fr.wikipedia.org/wiki/Git
* https://www.sebastien-gandossi.fr/blog/difference-entre-git-reset-et-git-rm-cached
* https://www.youtube.com/watch?v=4o9qzbssfII
```
import numpy as np
from nose.tools import assert_almost_equal, assert_almost_equals, assert_equal
```
The answers to the tasks are functions. They will be checked by automated tests on the server side.
Some tests run locally for self-checking.
### Self-check questions
This part of the assignment is not graded; you need not write down the answers.
1. What is a decision tree? Given a built tree, how do you obtain a prediction for an object?
2. Why can a tree with zero training error be built for any sample? Give examples.
3. Why is it not recommended to build non-binary trees (with more than two children per node)?
4. How does the greedy tree-building algorithm work?
5. Which impurity criteria for classification do you know?
6. What is the meaning of the Gini criterion and of the entropy criterion?
7. Which impurity criteria for regression do you know?
8. What is tree pruning? How do post-pruning and pre-pruning differ?
9. Which methods for handling missing values do you know?
10. How can categorical features be handled in a decision tree?
### Impurity criteria (45%)
The impurity criterion for a set of objects $R$ is computed from how well their target values are predicted by a constant (with the constant chosen optimally):
$$
H(R) = \min_{c \in Y} \dfrac{1}{|R|} \sum_{(x^i,y^i) \in R} L(y^i, c),
$$
where $L(y^i, c)$ is some loss function. Accordingly, to obtain the criterion for a specific loss function, one can analytically find the optimal constant and substitute it into the formula for $H(R)$.
Derive the impurity criteria for the following loss functions:
For regression,
1. $L(y,c) = (y-c)^2$, where $y$ is a scalar and $c$ is a constant.
For classification into $K$ classes, with the additional constraint
$$c = [c_1,\ldots,c_K], \quad 0 \leq c_k \leq 1 \ \forall k, \quad \sum_{k=1}^K c_k = 1,$$
2. $L(y,c) = \sum_{k=1}^K (c_k-[y_k=1])^2$, where $y$ is a one-hot vector, $y_k$ is its $k$-th element, and $c$ is a probability vector.
3. $L(y,c) = -\sum_{k=1}^K [y_k=1]\log c_k$, where $y$ is a one-hot vector, $y_k$ is its $k$-th element, and $c$ is a probability vector.
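Before implementing these, the first derivation can be sanity-checked numerically: for squared loss the optimal constant is the sample mean, so $H(R)$ reduces to the variance of the targets (an illustrative check only, not part of the graded answer):

```python
import numpy as np

ys = np.array([1.0, 2.0, 2.0, 5.0])
# Scan candidate constants c and keep the best average squared loss
cs = np.linspace(ys.min(), ys.max(), 2001)
losses = np.array([np.mean((ys - c) ** 2) for c in cs])
print(cs[losses.argmin()], losses.min())  # best c ~ mean(ys)=2.5, min loss ~ var(ys)=2.25
```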
```
def H_1(ys):
"""
ys is a 1-dimensional numpy array containing y values for every object from R.
"""
h = np.var(ys)
return h
def H_2(ys):
"""
ys is a numpy array with shape (num_items, num_classes).
Where each row is a one-hot vector of class membership (e.g. [0, 0, 1] for an object of class 2 out of classes 0, 1, 2).
"""
p = np.sum(ys,axis=0)/ys.shape[0]
c = 1 - np.sum(p**2)
return c
epsilon = 1e-5
def H_3(ys):
"""
ys is a numpy array with shape (num_items, num_classes).
Where each row is a one-hot vector of class membership (e.g. [0, 0, 1] for an object of class 2 out of classes 0, 1, 2).
log2 should be used as the logarithm.
Do not forget to add epsilon to the probabilities vector inside the logarithm.
"""
p = np.sum(ys,axis=0)/ys.shape[0]
b = np.log2(p+epsilon)
c = -np.sum(p * b)
return c
a_r = np.arange(10)
b_r = np.ones(10)
c_r = np.arange(25)/10.
assert_equal(H_1(a_r), 8.25)
assert_equal(H_1(b_r), 0.0)
assert_equal(H_1(c_r), 0.52)
a = np.vstack((np.ones(10), np.zeros(10))).T
b = np.hstack([np.vstack((np.ones(5), np.zeros(5))), np.vstack((np.zeros(5), np.ones(5)))]).T
c = np.hstack([np.vstack((np.ones(9), np.zeros(9))), np.vstack((np.zeros(1), np.ones(1)))]).T
print('a:\n{}\nb:\n{}\nc:\n{}'.format(a, b, c))
assert_almost_equal(H_2(a), 0.0, places=4)
assert_almost_equal(H_2(b), 0.5, places=4)
assert_almost_equal(H_2(c), 0.18, places=4)
assert_almost_equal(H_3(a), 0.0, places=4)
assert_almost_equal(H_3(b), 1.0, places=4)
assert_almost_equal(H_3(c), 0.469, places=3)
```
### Tree complexity (15%)
Give an estimate of the complexity of building a single decision tree as a function of the training-set size $l$, the number of features $d$, and the maximum tree depth $D$. Threshold predicates $[x_j>t]$ are used. When choosing a predicate at each node, all features are scanned, and the candidate thresholds $t$ are the values of that feature on the objects that reached the current node. Assume that computing the impurity criterion on a subsample takes constant time (i.e., $O(1)$).
Express the complexity estimate in the form $O($`get_tree_complexity(D, l, d)`$)$, where `get_tree_complexity` is some function of $D$, $l$ and $d$. Implement the function below.
Usage example (the numbers and dependencies are arbitrary):
```
def get_tree_complexity(D, l, d):
return D+l+d
a = get_tree_complexity(1, 2, 3)
```
Then the number a == 6.
```
def get_tree_complexity(D, l, d):
"""
Compute tree complexity in form O("some_expression") and return the "some_expression".
"""
return D*l*d
#This cell is executed on the server side.
```
### Bootstrap (40%)
In this task you must compute the probability that an object appears in a bootstrap sample, and then estimate it numerically.
Let a sample $\hat{X}^{n}$ of size $n$ be generated by the bootstrap from the sample $X^{n}=\{\boldsymbol{x}_{1},\dots,\boldsymbol{x}_{n}\}$. Find the probability that object $x_{i}$ appears in $\hat{X}^{n}$ and compute it in the limit $n\rightarrow\infty$. Implement the function `probability_to_get_into_X_b`, which returns this probability as a number between `0` and `1`. You may use `math.exp(1)` for the exponential.
```
def probability_to_get_into_X_b():
p = 1 - 1/np.exp(1)
return p
assert_almost_equal(probability_to_get_into_X_b(), 0.6, places=1)
```
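For finite $n$ the probability is $1-(1-1/n)^n$, which tends to $1-e^{-1}\approx 0.632$ as $n\rightarrow\infty$; a quick numerical check of the limit:

```python
import math

for n in [10, 100, 10_000]:
    print(n, 1 - (1 - 1 / n) ** n)
print('limit:', 1 - 1 / math.e)  # ~ 0.6321
```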
Implement your own function that generates a bootstrap sample from the original one. Let the original sample be a `numpy` array (for example, `np.arange(100)`). The bootstrap sample must then also be a `numpy` array of the same size as the original.
```
def my_bootstrap(X):
"""
Implement the function that returns a
bootstrapped dataset of the same size as
the original dataset.
"""
bs = np.random.randint(0,X.shape[0],X.shape[0])
return X[bs]
```
Numerically estimate the probability that an object of the original sample appears in a bootstrap sample, for a sample of size `N`. The function `get_sample_proba` must return a number between `0` and `1`.
Don't forget that we live in a random world ;)
```
def get_sample_proba(N):
    # average over several bootstrap draws: a single draw is itself random
    trials = 200
    hits = [np.isin(np.arange(N), my_bootstrap(np.arange(N))).mean() for _ in range(trials)]
    return float(np.mean(hits))
#This cell is executed on the server side.
```
Congratulations, the assignment is complete. Don't forget to stop your virtual instance before leaving (Control Panel -> Stop My Server).
# [Applied Statistics](https://lamastex.github.io/scalable-data-science/as/2019/)
## 1MS926, Spring 2019, Uppsala University
©2019 Raazesh Sainudiin. [Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
# 11. Non-parametric Estimation and Testing
### Topics
- Non-parametric Estimation
- Glivenko-Cantelli Theorem
- Dvoretsky-Kiefer-Wolfowitz Inequality
- Hypothesis Testing
- Permutation Testing
- Permutation Testing with Shells Data
- Plug-in Estimation and Bootstraps
## Inference and Estimation: The Big Picture
The Big Picture is about inference and estimation, and especially inference and estimation problems where computational techniques are helpful.
<table border="1" cellspacing="2" cellpadding="2" align="center">
<tbody>
<tr>
<td style="background-color: #ccccff;" align="center"> </td>
<td style="background-color: #ccccff;" align="center"><strong>Point estimation</strong></td>
<td style="background-color: #ccccff;" align="center"><strong>Set estimation</strong></td>
</tr>
<tr>
<td style="background-color: #ccccff;">
<p><strong>Parametric</strong></p>
<p> </p>
</td>
<td style="background-color: #ccccff;" align="center">
<p>MLE of finitely many parameters<br /><span style="color: #3366ff;"><em>done</em></span></p>
</td>
<td style="background-color: #ccccff;" align="center">
<p>Confidence intervals,<br /> via the central limit theorem</p>
</td>
</tr>
<tr>
<td style="background-color: #ccccff;">
<p><strong>Non-parametric</strong><br /> (infinite-dimensional parameter space)</p>
</td>
<td style="background-color: #ccccff;" align="center"><strong><em><span style="color: #3366ff;">about to see ... </span></em></strong></td>
<td style="background-color: #ccccff;" align="center"><strong><em><span style="color: #3366ff;">about to see ... </span></em></strong></td>
</tr>
</tbody>
</table>
So far we have seen parametric models, for example
- $X_1, X_2, \ldots, X_n \overset{IID}{\sim} Bernoulli (\theta)$, $\theta \in [0,1]$
- $X_1, X_2, \ldots, X_n \overset{IID}{\sim} Exponential (\lambda)$, $\lambda \in (0,\infty)$
- $X_1, X_2, \ldots, X_n \overset{IID}{\sim} Normal(\mu, \sigma)$, $\mu \in \mathbb{R}$, $\sigma \in (0,\infty)$
In all these cases **the parameter space** (the space within which the parameter(s) can take values) is **finite dimensional**:
- for the $Bernoulli$, $\theta \in [0,1] \subseteq \mathbb{R}^1$
- for the $Exponential$, $\lambda \in (0, \infty) \subseteq \mathbb{R}^1$
- for the $Normal$, $\mu \in \mathbb{R}^1$, $\sigma \in (0,\infty) \subseteq \mathbb{R}^1$, so $(\mu, \sigma) \subseteq \mathbb{R}^2$
For parametric experiments, we can use the maximum likelihood principle and estimate the parameters using the **Maximum Likelihood Estimator (MLE)**, for instance.
# Non-parametric estimation
What if we don't know what the distribution function (DF) is? We are not trying to estimate some fixed but unknown parameter $\theta^*$ for some RV we are assuming to be $Bernoulli(\theta^*)$; we are trying to estimate the DF itself. In real life, data does not come neatly labeled "I am a realisation of a $Bernoulli$ RV" or "I am a realisation of an $Exponential$ RV": an important part of inference and estimation is to make inferences about the DF itself from our observations.
#### Observations from some unknown process
<img src="images/unknownProcessTimesAnim.gif" width=400>
Consider the following non-parametric product experiment:
$$X_1, X_2, \ldots, X_n\ \overset{IID}{\sim} F^* \in \{\text{all DFs}\}$$
We want to produce a point estimate for $F^*$, which is allowed to be any DF ("lives in the set of all DFs"), i.e., $F^* \in \{\text{all DFs}\}$
Crucially, $\{\text{all DFs}\}$, i.e., the set of all distribution functions over $\mathbb{R}$ is infinite dimensional.
<img src="images/TwoDFs.png" width=400>
We have already seen an estimate, made using the data, of a distribution function: the empirical or data-based distribution function (or empirical cumulative distribution function). This can be formalized as the following process of adding indicator functions of the half-lines beginning at the data points $[X_1,+\infty),[X_2,+\infty),\ldots,[X_n,+\infty)$:
$$\widehat{F}_n (x) = \frac{1}{n} \sum_{i=1}^n \mathbf{1}_{[X_i,+\infty)}(x)$$
where,
$$\mathbf{1}_{[X_i,+\infty)}(x) := \begin{cases} & 1 \quad \text{ if } X_i \leq x \\ & 0 \quad \text{ if }X_i > x \end{cases}$$
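The indicator-sum definition translates directly into code; here is a plain-`numpy` sketch (separate from the Sage helper functions defined below):

```python
import numpy as np

def edf(sample, x):
    """Empirical DF at x: the fraction of sample points <= x."""
    return np.mean(np.asarray(sample) <= x)

data = [0.2, 0.5, 0.5, 0.9]
print(edf(data, 0.5))  # -> 0.75
```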
First let us evaluate a set of functions that will help us conceptualize faster:
```
def makeEMFHidden(myDataList):
'''Make an empirical mass function from a data list.
Param myDataList, list of data to make emf from.
Return list of tuples comprising (data value, relative frequency) ordered by data value.'''
sortedUniqueValues = sorted(list(set(myDataList)))
freqs = [myDataList.count(i) for i in sortedUniqueValues]
relFreqs = [ZZ(fr)/len(myDataList) for fr in freqs] # use a list comprehension
return list(zip(sortedUniqueValues, relFreqs))
from pylab import array
def makeEDFHidden(myDataList, offset=0):
'''Make an empirical distribution function from a data list.
Param myDataList, list of data to make ecdf from.
Param offset is an offset to adjust the edf by, used for doing confidence bands.
Return list of tuples comprising (data value, cumulative relative frequency) ordered by data value.'''
sortedUniqueValues = sorted(list(set(myDataList)))
freqs = [myDataList.count(i) for i in sortedUniqueValues]
from pylab import cumsum
cumFreqs = list(cumsum(freqs)) #
cumRelFreqs = [ZZ(i)/len(myDataList) for i in cumFreqs] # get cumulative relative frequencies as rationals
if offset > 0: # an upper band
cumRelFreqs = [min(i ,1) for i in cumRelFreqs] # use a list comprehension
if offset < 0: # a lower band
cumRelFreqs = [max(i, 0) for i in cumRelFreqs] # use a list comprehension
return list(zip(sortedUniqueValues, cumRelFreqs))
# EPMF plot
def epmfPlot(samples):
'''Returns an empirical probability mass function plot from samples data.'''
epmf_pairs = makeEMFHidden(samples)
epmf = point(epmf_pairs, rgbcolor = "blue", pointsize="20")
for k in epmf_pairs: # for each tuple in the list
kkey, kheight = k # unpack tuple
epmf += line([(kkey, 0),(kkey, kheight)], rgbcolor="blue", linestyle=":")
# padding
epmf += point((0,1), rgbcolor="black", pointsize="0")
return epmf
# ECDF plot
def ecdfPlot(samples):
'''Returns an empirical probability mass function plot from samples data.'''
ecdf_pairs = makeEDFHidden(samples)
ecdf = point(ecdf_pairs, rgbcolor = "red", faceted = false, pointsize="20")
for k in range(len(ecdf_pairs)):
x, kheight = ecdf_pairs[k] # unpack tuple
previous_x = 0
previous_height = 0
if k > 0:
previous_x, previous_height = ecdf_pairs[k-1] # unpack previous tuple
ecdf += line([(previous_x, previous_height),(x, previous_height)], rgbcolor="grey")
ecdf += points((x, previous_height),rgbcolor = "white", faceted = true, pointsize="20")
ecdf += line([(x, previous_height),(x, kheight)], rgbcolor="grey", linestyle=":")
# padding
ecdf += line([(ecdf_pairs[0][0]-0.2, 0),(ecdf_pairs[0][0], 0)], rgbcolor="grey")
max_index = len(ecdf_pairs)-1
ecdf += line([(ecdf_pairs[max_index][0], ecdf_pairs[max_index][1]),(ecdf_pairs[max_index][0]+0.2, ecdf_pairs[max_index][1])],rgbcolor="grey")
return ecdf
def calcEpsilon(alphaE, nE):
'''Return confidence band epsilon calculated from parameters alphaE > 0 and nE > 0.'''
return sqrt(1/(2*nE)*log(2/alphaE))
```
### Let us continue with the concepts
We can remind ourselves of this for a small sample of $de\,Moivre(k=5)$ RVs:
```
deMs=[randint(1,5) for i in range(20)] # randint can be used to uniformly sample integers in a specified range
deMs
sortedUniqueValues = sorted(list(set(deMs)))
freqs = [deMs.count(i) for i in sortedUniqueValues]
from pylab import cumsum
cumFreqs = list(cumsum(freqs)) #
cumRelFreqs = [ZZ(i)/len(deMs) for i in cumFreqs] # get cumulative relative frequencies as rationals
list(zip(sortedUniqueValues, cumRelFreqs))
show(ecdfPlot(deMs), figsize=[6,3]) # use hidden ecdfPlot function to plot
```
We can use the empirical cumulative distribution function $\widehat{F}_n$ for our non-parametric estimate because this kind of estimation is possible in infinite-dimensional contexts due to the following two theorems:
- Glivenko-Cantelli Theorem (*Fundamental Theorem of Statistics*)
- Dvoretsky-Kiefer-Wolfowitz (DKW) Inequality
# Glivenko-Cantelli Theorem
Let $X_1, X_2, \ldots, X_n \overset{IID}{\sim} F^* \in \{\text{all DFs}\}$
and the empirical distribution function (EDF) is $\widehat{F}_n(x) := \displaystyle\frac{1}{n} \sum_{i=1}^n \mathbf{1}_{[X_i,+\infty)}(x)$, then
$$\sup_x { | \widehat{F}_n(x) - F^*(x) | } \overset{P}{\rightarrow} 0$$
Remember that the EDF is a statistic of the data, a statistic is an RV, and (from our work on the convergence of random variables) $\overset{P}{\rightarrow}$ means "converges in probability". The proof is beyond the scope of this course, but we can gain an appreciation of what it means by looking at what happens to the ECDF for $n$ simulations from:
- $de\,Moivre(1/5,1/5,1/5,1/5,1/5)$ and
- $Uniform(0,1)$ as $n$ increases:
```
@interact
def _(n=(10,(0..200))):
'''Interactive function to plot ecdf for obs from de Moirve (5).'''
if (n > 0):
us = [randint(1,5) for i in range(n)]
p=ecdfPlot(us) # use hidden ecdfPlot function to plot
#p+=line([(-0.2,0),(0,0),(1,1),(1.2,1)],linestyle=':')
p.show(figsize=[8,2])
@interact
def _(n=(10,(0..200))):
'''Interactive function to plot ecdf for obs from Uniform(0,1).'''
if (n > 0):
us = [random() for i in range(n)]
p=ecdfPlot(us) # use hidden ecdfPlot function to plot
p+=line([(-0.2,0),(0,0),(1,1),(1.2,1)],linestyle='-')
p.show(figsize=[3,3],aspect_ratio=1)
```
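The visual impression can be quantified. For $Uniform(0,1)$ the true DF is $F^*(x)=x$, so the sup distance is computable exactly at the EDF's jump points; a plain-`numpy` sketch (illustrative, with an arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(0)

def sup_dist_uniform(n):
    """Sup distance between the EDF of n Uniform(0,1) draws and F*(x) = x."""
    xs = np.sort(rng.uniform(size=n))
    edf = np.arange(1, n + 1) / n
    # the supremum is attained at (or just before) a jump point of the EDF
    return max(np.max(edf - xs), np.max(xs - (edf - 1 / n)))

print([round(sup_dist_uniform(n), 3) for n in [10, 100, 10_000]])
```

The printed distances shrink as $n$ grows, as the theorem promises.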
It is clear, that as $n$ increases, the ECDF $\widehat{F}_n$ gets closer and closer to the true DF $F^*$, $\displaystyle\sup_x { | \widehat{F}_n(x) - F^*(x) | } \overset{P}{\rightarrow} 0$.
This will hold no matter what the (possibly unknown) $F^*$ is. Thus, $\widehat{F}_n$ is a point estimate of $F^*$.
We need to add the DKW Inequality to be able to get confidence sets, or a 'confidence band', that traps $F^*$ with high probability.
# Dvoretsky-Kiefer-Wolfowitz (DKW) Inequality
Let $X_1, X_2, \ldots, X_n \overset{IID}{\sim} F^* \in \{\text{all DFs}\}$
and the empirical distribution function (EDF) is $\widehat{F}_n(x) := \displaystyle\frac{1}{n} \sum_{i=1}^n \mathbf{1}_{[X_i,+\infty)}(x)$,
then, for any $\varepsilon > 0$,
$$P\left( \sup_x { | \widehat{F}_n(x) - F^*(x) | } > \varepsilon \right) \leq 2 \exp(-2n\varepsilon^2) $$
We can use this inequality to get a $1-\alpha$ confidence band $C_n(x) := \left[\underline{C}_n(x), \overline{C}_n(x)\right]$ about our point estimate $\widehat{F}_n$ of our possibly unknown $F^*$ such that $F^*$ is 'trapped' by the band with probability at least $1-\alpha$.
$$\begin{eqnarray} \underline{C}_{\, n}(x) &=& \max \{ \widehat{F}_n(x)-\varepsilon_n, 0 \}, \notag \\ \overline{C}_{\, n}(x) &=& \min \{ \widehat{F}_n(x)+\varepsilon_n, 1 \}, \notag \\ \varepsilon_n &=& \sqrt{ \frac{1}{2n} \log \left( \frac{2}{\alpha}\right)} \\ \end{eqnarray}$$
and
$$P\left(\underline{C}_n(x) \leq F^*(x) \leq \overline{C}_n(x)\right) \geq 1-\alpha$$
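The three formulas combine into a few lines of plain `numpy` (a sketch alongside the notebook's Sage helpers, not a replacement for them):

```python
import numpy as np

def dkw_band(sample, alpha=0.05):
    """Sorted sample points with EDF heights and a 1-alpha DKW band."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = len(xs)
    edf = np.arange(1, n + 1) / n
    eps = np.sqrt(np.log(2 / alpha) / (2 * n))
    lower = np.maximum(edf - eps, 0)
    upper = np.minimum(edf + eps, 1)
    return xs, lower, edf, upper

xs, lo, f, hi = dkw_band(np.random.uniform(size=10))
print(hi - lo)  # band widths; at most 2 * eps everywhere
```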
### YouTry in class
Try this out for a simple sample from the $Uniform(0,1)$, which you can generate using `random`. First we will just make the point estimate for $F^*$, the EDF $\widehat{F}_n$.
```
n=10
uniformSample = [random() for i in range(n)]
print(uniformSample)
```
In one of the assessments, you did a question that took you through the steps for getting the list of points that you would plot for an empirical distribution function (EDF). We will do exactly the same thing here.
First we find the unique values in the sample, in order from smallest to largest, and get the frequency with which each unique value occurs:
```
sortedUniqueValuesUniform = sorted(list(set(uniformSample)))
print(sortedUniqueValuesUniform)
freqsUniform = [uniformSample.count(i) for i in sortedUniqueValuesUniform]
freqsUniform
```
Then we accumulate the frequencies to get the cumulative frequencies:
```
from pylab import cumsum
cumFreqsUniform = list(cumsum(freqsUniform)) # accumulate
cumFreqsUniform
```
And then the cumulative relative frequencies:
```
# cumulative rel freqs as rationals
cumRelFreqsUniform = [ZZ(i)/len(uniformSample) for i in cumFreqsUniform]
cumRelFreqsUniform
```
And finally zip these up with the sorted unique values to get a list of points we can plot:
```
ecdfPointsUniform = list(zip(sortedUniqueValuesUniform, cumRelFreqsUniform))
ecdfPointsUniform
```
Here is a function that you can just use to do a ECDF plot:
```
# ECDF plot given a list of points to plot
def ecdfPointsPlot(listOfPoints, colour='grey', lines_only=False):
'''Returns an empirical cumulative distribution function plot from a list of points to plot.
Param listOfPoints is the list of points to plot.
Param colour is used for plotting the lines, defaulting to grey.
Param lines_only controls whether only lines are plotted (true) or points are added (false, the default value).
Returns an ecdf plot graphic.'''
ecdfP = point((0,0), pointsize="0")
if not lines_only: ecdfP = point(listOfPoints, rgbcolor = "red", faceted = false, pointsize="20")
for k in range(len(listOfPoints)):
x, kheight = listOfPoints[k] # unpack tuple
previous_x = 0
previous_height = 0
if k > 0:
previous_x, previous_height = listOfPoints[k-1] # unpack previous tuple
ecdfP += line([(previous_x, previous_height),(x, previous_height)], rgbcolor=colour)
ecdfP += line([(x, previous_height),(x, kheight)], rgbcolor=colour, linestyle=":")
if not lines_only:
ecdfP += points((x, previous_height),rgbcolor = "white", faceted = true, pointsize="20")
# padding
max_index = len(listOfPoints)-1
ecdfP += line([(listOfPoints[0][0]-0.2, 0),(listOfPoints[0][0], 0)], rgbcolor=colour)
ecdfP += line([(listOfPoints[max_index][0], listOfPoints[max_index][1]),(listOfPoints[max_index][0]+0.2, listOfPoints[max_index][1])],rgbcolor=colour)
return ecdfP
```
This makes the plot of the $\widehat{F}_{10}$, the point estimate for $F^*$ for these $n=10$ simulated samples.
```
show(ecdfPointsPlot(ecdfPointsUniform), figsize=[6,3])
```
What about adding those confidence bands? You will do essentially the same thing, but adjusting for the required $\varepsilon$. First we need to decide on an $\alpha$ and calculate the $\varepsilon$ corresponding to it. Here is some of our code to calculate the $\varepsilon$ corresponding to $\alpha=0.05$ (95% confidence bands), using the hidden function `calcEpsilon`:
```
alpha = 0.05
epsilon = calcEpsilon(alpha, n)
epsilon
```
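The `calcEpsilon` function itself is hidden, but from the DKW inequality it presumably computes $\varepsilon_n = \sqrt{\frac{1}{2n}\log\left(\frac{2}{\alpha}\right)}$. Here is a plain-Python sketch of one plausible implementation (the name `calc_epsilon` and its signature are our assumptions, not the hidden code):

```python
from math import log, sqrt

def calc_epsilon(alpha, n):
    """DKW confidence band half-width: sqrt(log(2/alpha) / (2n)).
    A guess at what the hidden calcEpsilon computes."""
    return sqrt(log(2 / alpha) / (2 * n))

# For alpha = 0.05 and n = 10 this gives roughly 0.43
print(calc_epsilon(0.05, 10))
```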
See if you can write your own code to do this calculation, $\varepsilon_n = \sqrt{ \frac{1}{2n} \log \left( \frac{2}{\alpha}\right)}$. For completeness, do the whole thing: assign the value 0.05 to a variable named `alpha`, and then use this and the variable `n` that we have already declared to calculate a value for $\varepsilon$. Call the variable to which you assign the value for $\varepsilon$ `epsilon`, so that it replaces the value we calculated in the cell above (you should get the same value as us!).
Now we need to use this to adjust the EDF plot. In the cell below we first of all do the adjustment for $\underline{C}_{\,n}(x) =\max \{ \widehat{F}_n(x)-\varepsilon_n, 0 \}$, and then use `zip` again to get the points to actually plot for the lower boundary of the 95% confidence band.
```
# heights for the lower band
cumRelFreqsUniformLower = [max(crf - epsilon, 0) for crf in cumRelFreqsUniform]
print(cumRelFreqsUniformLower)
ecdfPointsUniformLower = list(zip(sortedUniqueValuesUniform, cumRelFreqsUniformLower))
ecdfPointsUniformLower
```
We carefully gave our `ecdfPointsPlot` function the flexibility to be able to plot bands, by having a `colour` parameter (which defaults to `'grey'`) and a `lines_only` parameter (which defaults to `false`). Here we can plot the lower bound of the confidence interval by adding `ecdfPointsPlot(ecdfPointsUniformLower, colour='green', lines_only=true)` to the previous plot:
```
pointEstimate = ecdfPointsPlot(ecdfPointsUniform)
lowerBound = ecdfPointsPlot(ecdfPointsUniformLower, colour='green', lines_only=true)
show(pointEstimate + lowerBound, figsize=[6,3])
```
### YouTry
You try writing the code to create the list of points needed for plotting the upper band $\overline{C}_{\,n}(x) =\min \{ \widehat{F}_n(x)+\varepsilon_n, 1 \}$. You will need to first of all get the upper heights (call them say `cumRelFreqsUniformUpper`) and then `zip` them up with the `sortedUniqueValuesUniform` to get the points to plot.
```
# heights for the upper band
```
Once you have done this you can add them to the plot by altering the code below:
```
pointEstimate = ecdfPointsPlot(ecdfPointsUniform)
lowerBound = ecdfPointsPlot(ecdfPointsUniformLower,colour='green', lines_only=true)
show(pointEstimate + lowerBound, figsize=[6,3])
```
(end of YouTry)
---
If we are doing lots of collections of EDF points we may as well define a function to do it, rather than repeating the same code again and again. We use an offset parameter to give us the flexibility to use this to make points for confidence bands as well.
```
def makeEDFPoints(myDataList, offset=0):
    '''Make a list of empirical distribution plotting points from a data list.
    Param myDataList, list of data to make the edf from.
    Param offset is an offset to adjust the edf by, used for doing confidence bands.
    Return list of tuples comprising (data value, cumulative relative frequency (with offset)) ordered by data value.'''
    sortedUniqueValues = sorted(list(set(myDataList)))
    freqs = [myDataList.count(i) for i in sortedUniqueValues]
    from pylab import cumsum
    cumFreqs = list(cumsum(freqs))
    cumRelFreqs = [ZZ(i)/len(myDataList) for i in cumFreqs] # get cumulative relative frequencies as rationals
    if offset > 0: # an upper band
        cumRelFreqs = [min(i+offset, 1) for i in cumRelFreqs]
    if offset < 0: # a lower band
        cumRelFreqs = [max(i+offset, 0) for i in cumRelFreqs]
    return list(zip(sortedUniqueValues, cumRelFreqs))
```
## NZ Earthquakes
Now we will try looking at the Earthquakes data we have used before to get a confidence band around an EDF for that. We start by bringing in the data and the function we wrote earlier to parse that data.
First check whether you have already `unzip`-ped the `data/earthquakes.csv.zip` file by dropping into the shell via `%%sh`.
```
%%sh
ls data/
```
```
%%sh
# only do this once! So, you don't need to do this step if you see the earthquakes.csv file above
cd data
# windows and mac users should first try to unzip
# unzip earthquakes.csv.zip
## if unzip is not found try tar by uncommenting the next line and commenting the line above
## tar zxvf earthquakes.tgz
ls -al
```
```
def getLonLatMagDepTimes(NZEQCsvFileName):
    '''returns longitude, latitude, magnitude, depth and the origin time as unix time
    for each observed earthquake in the csv file named NZEQCsvFileName'''
    from datetime import datetime
    import time
    from dateutil.parser import parse
    import numpy as np
    with open(NZEQCsvFileName) as f:
        reader = f.read()
    dataList = reader.split('\n')
    myDataAccumulatorList = []
    for data in dataList[1:-1]:
        dataRow = data.split(',')
        myTimeString = dataRow[2] # origintime
        # let's also grab longitude, latitude, magnitude, depth
        myDataString = [dataRow[4],dataRow[5],dataRow[6],dataRow[7]]
        try:
            myTypedTime = time.mktime(parse(myTimeString).timetuple())
            myFloatData = [float(x) for x in myDataString]
            myFloatData.append(myTypedTime) # append the processed timestamp
            myDataAccumulatorList.append(myFloatData)
        except TypeError as e: # error handling for type incompatibilities
            print('Error: Error is ', e)
    #return np.array(myDataAccumulatorList)
    return myDataAccumulatorList
myProcessedList = getLonLatMagDepTimes('data/earthquakes.csv')
def interQuakeTimes(quakeTimes):
    '''Return a list of inter-earthquake times in seconds from earthquake origin times.
    Date and time elements are expected to be in the 5th column of the array.
    Return a list of inter-quake times in seconds. NEEDS sorted quakeTimes data.'''
    import numpy as np
    retList = []
    if len(quakeTimes) > 1:
        retList = [quakeTimes[i]-quakeTimes[i-1] for i in range(1,len(quakeTimes))]
    #return np.array(retList)
    return retList
interQuakesSecs = interQuakeTimes(sorted([x[4] for x in myProcessedList]))
len(interQuakesSecs)
interQuakesSecs[0:10]
```
There is a lot of data here, so let's use an interactive plot to do the non-parametric DF estimation just for some of the last data:
```
@interact
def _(takeLast=(500,(0..min(len(interQuakesSecs),1999))), alpha=(0.05)):
    '''Interactive function to plot the edf estimate and confidence bands for inter-earthquake times.'''
    if takeLast > 0 and alpha > 0 and alpha < 1:
        lastInterQuakesSecs = interQuakesSecs[len(interQuakesSecs)-takeLast:len(interQuakesSecs)]
        interQuakePoints = makeEDFPoints(lastInterQuakesSecs)
        p = ecdfPointsPlot(interQuakePoints, lines_only=true)
        epQuakes = calcEpsilon(alpha, len(lastInterQuakesSecs))
        interQuakePointsLower = makeEDFPoints(lastInterQuakesSecs, offset=-epQuakes)
        lowerQuakesBound = ecdfPointsPlot(interQuakePointsLower, colour='green', lines_only=true)
        interQuakePointsUpper = makeEDFPoints(lastInterQuakesSecs, offset=epQuakes)
        upperQuakesBound = ecdfPointsPlot(interQuakePointsUpper, colour='green', lines_only=true)
        show(p + lowerQuakesBound + upperQuakesBound, figsize=[6,3])
    else:
        print("check your input values")
```
What if we are not interested in estimating $F^*$ itself, but we are interested in scientifically investigating whether two distributions are the same? For example, whether the distribution of earthquake magnitudes was the same in April as it was in March. Then, we should attempt to reject a falsifiable hypothesis ...
# Hypothesis Testing
**Recall:**
A formal definition of hypothesis testing is beyond our current scope. Here we will look in particular at a non-parametric hypothesis test called a permutation test. First, a quick review:
The outcomes of a hypothesis test, in general, are:
<table border="1" cellspacing="2" cellpadding="2" align="center">
<tbody>
<tr>
<td align="center">'true state of nature'</td>
<td align="center"><strong>Do not reject $H_0$<br /></strong></td>
<td align="center"><strong>Reject $H_0$<br /></strong></td>
</tr>
<tr>
<td>
<p><strong>$H_0$ is true<br /></strong></p>
<p> </p>
</td>
<td align="center">
<p>OK<span style="color: #3366ff;"> </span></p>
</td>
<td align="center">
<p>Type I error</p>
</td>
</tr>
<tr>
<td>
<p><strong>$H_0$ is false</strong></p>
</td>
<td align="center">Type II error</td>
<td align="center">OK</td>
</tr>
</tbody>
</table>
So, we want a small probability that we reject $H_0$ when $H_0$ is true (minimise Type I error). Similarly, we want to minimise the probability that we fail to reject $H_0$ when $H_0$ is false (type II error).
The P-value is one way to conduct a desirable hypothesis test. The scale of the evidence against $H_0$ is stated in terms of the P-value. The following interpretation of P-values is commonly used:
- P-value $\in (0, 0.01]$: Very strong evidence against $H_0$
- P-value $\in (0.01, 0.05]$: Strong evidence against $H_0$
- P-value $\in (0.05, 0.1]$: Weak evidence against $H_0$
- P-value $\in (0.1, 1]$: Little or no evidence against $H_0$
## Permutation Testing
A Permutation Test is a **non-parametric exact** method for testing whether two distributions are the same, based on samples from each of them. In industry, analogs and variants of permutation testing are known as *A/B Testing*.
What do we mean by "non-parametric exact"? It is non-parametric because we do not impose any parametric assumptions. It is exact because it works for any sample size.
Formally, we suppose that:
$$ X_1,X_2,\ldots,X_m \overset{IID}{\sim} F^* \quad \text{and} \quad X_{m+1}, X_{m+2},\ldots,X_{m+n} \overset{IID}{\sim} G^* \enspace , $$
are two sets of independent samples where the possibly unknown DFs
$F^*,\,G^* \in \{ \text{all DFs} \}$.
(Notice that we have written it so that the subscripts on the $X$s run from 1 to $m+n$.)
Now, consider the following hypothesis test:
$$H_0: F^*=G^* \quad \text{versus} \quad H_1: F^* \neq G^* \enspace . $$
Our test statistic uses the observations in both samples. We want a test statistic that is a sensible one for the test, i.e., one that will be large when $F^*$ is 'too different' from $G^*$.
So, let our test statistic $T(X_1,\ldots,X_m,X_{m+1},\ldots,X_{m+n})$ be say:
$$
T:=T(X_1,\ldots,X_m,X_{m+1},\ldots,X_{m+n})= \text{abs} \left( \frac{1}{m} \sum_{i=1}^m X_i - \frac{1}{n} \sum_{i=m+1}^{m+n} X_i \right) \enspace .
$$
(In words, we have chosen a test statistic that is the absolute value of the difference in the sample means. Note the limitation of this: if $F^*$ and $G^*$ have the same mean but different variances, our test statistic $T$ will not be large.)
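To see that limitation concretely, here is a tiny plain-Python check with two invented samples that share a mean of 0 but have wildly different spreads; the statistic comes out exactly zero:

```python
# Two invented samples: identical means (0), very different variances
sample_f = [-1, 0, 1]
sample_g = [-100, 0, 100]

# T = |mean(sample_f) - mean(sample_g)|
t = abs(sum(sample_f) / len(sample_f) - sum(sample_g) / len(sample_g))
print(t)  # 0.0: T is blind to the difference in spread
```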
Then the idea of a permutation test is as follows:
- Let $N:=m+n$ be the pooled sample size and consider all $N!$ permutations of the observed data $x_{obs}:=(x_1,x_2,\ldots,x_m,x_{m+1},x_{m+2},\ldots,x_{m+n})$.
- For each permutation of the data compute the statistic $T(\text{permuted data } x)$ and denote these $N!$ values of $T$ by $t_1,t_2,\ldots,t_{N!}$.
- Under $H_0: X_1,\ldots,X_m,X_{m+1},\ldots,X_{m+n} \overset{IID}{\sim}F^*=G^*$, each of the permutations of $x= (x_1,x_2,\ldots,x_m,x_{m+1},x_{m+2},\ldots,x_{m+n})$ has the same joint probability $\prod_{i=1}^{m+n} f(x_i)$, where $f(x_i)$ is the density function corresponding to $F^*=G^*$, $f(x_i)=dF(x_i)=dG(x_i)$.
- Therefore, the transformation of the data by our statistic $T$ also has the same probability over the values of $T$, namely $\{t_1,t_2,\ldots,t_{N!}\}$. Let $\mathbf{P}_0$ be this permutation distribution under the null hypothesis. $\mathbf{P}_0$ is discrete and uniform over $\{t_1,t_2,\ldots,t_{N!}\}$.
- Let $t_{obs} := T(x_{obs})$ be the observed value of the test statistic.
- Assuming we reject $H_0$ when $T$ is large, the P-value = $\mathbf{P}_0 \left( T \geq t_{obs} \right)$
- Saying that $\mathbf{P}_0$ is discrete and uniform over $\{t_1, t_2, \ldots, t_{N!}\}$ says that each possible permutation has an equal probability of occurring (under the null hypothesis). There are $N!$ possible permutations and so the probability of any individual permutation is $\frac{1}{N!}$:
$$
\text{P-value} = \mathbf{P}_0 \left( T \geq t_{obs} \right) = \frac{1}{N!} \left( \sum_{j=1}^{N!} \mathbf{1} (t_j \geq t_{obs}) \right), \qquad \mathbf{1} (t_j \geq t_{obs}) = \begin{cases} 1 & \text{if } \quad t_j \geq t_{obs} \\ 0 & \text{otherwise} \end{cases}
$$
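The recipe above can be sketched in a few lines of plain Python (the two samples here are made up; `Fraction` keeps the probabilities exact, and `itertools.permutations` enumerates all $N!$ orderings):

```python
from itertools import permutations
from fractions import Fraction

def perm_test_pvalue(sample1, sample2):
    """Exact permutation-test P-value for T = |mean(sample1) - mean(sample2)|."""
    m, n = len(sample1), len(sample2)
    pooled = sample1 + sample2

    def t_stat(seq):
        # first m entries form sample 1, remaining n entries form sample 2
        return abs(Fraction(sum(seq[:m]), m) - Fraction(sum(seq[m:]), n))

    t_obs = t_stat(pooled)
    perms = list(permutations(pooled))              # all N! orderings
    hits = sum(1 for p in perms if t_stat(p) >= t_obs)
    return Fraction(hits, len(perms))               # P_0(T >= t_obs)

print(perm_test_pvalue([1, 2], [10]))  # 1/3
```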
This will make more sense if we look at some real data.
## Permutation Testing with Shell Data
In 2008, Guo Yaozong and Chen Shun collected data on the diameters of coarse venus shells from New Brighton beach for a course project. They recorded the diameters for two samples of shells, one from each side of the New Brighton Pier. The data is given in the following two cells.
```
leftSide = [52, 54, 60, 60, 54, 47, 57, 58, 61, 57, 50, 60, 60, 60, 62, 44, 55, 58, 55,\
60, 59, 65, 59, 63, 51, 61, 62, 61, 60, 61, 65, 43, 59, 58, 67, 56, 64, 47,\
64, 60, 55, 58, 41, 53, 61, 60, 49, 48, 47, 42, 50, 58, 48, 59, 55, 59, 50, \
47, 47, 33, 51, 61, 61, 52, 62, 64, 64, 47, 58, 58, 61, 50, 55, 47, 39, 59,\
64, 63, 63, 62, 64, 61, 50, 62, 61, 65, 62, 66, 60, 59, 58, 58, 60, 59, 61,\
55, 55, 62, 51, 61, 49, 52, 59, 60, 66, 50, 59, 64, 64, 62, 60, 65, 44, 58, 63]
rightSide = [58, 54, 60, 55, 56, 44, 60, 52, 57, 58, 61, 66, 56, 59, 49, 48, 69, 66, 49,\
72, 49, 50, 59, 59, 59, 66, 62, 44, 49, 40, 59, 55, 61, 51, 62, 52, 63, 39,\
63, 52, 62, 49, 48, 65, 68, 45, 63, 58, 55, 56, 55, 57, 34, 64, 66, 54, 65,\
61, 56, 57, 59, 58, 62, 58, 40, 43, 62, 59, 64, 64, 65, 65, 59, 64, 63, 65,\
62, 61, 47, 59, 63, 44, 43, 59, 67, 64, 60, 62, 64, 65, 59, 55, 38, 57, 61,\
52, 61, 61, 60, 34, 62, 64, 58, 39, 63, 47, 55, 54, 48, 60, 55, 60, 65, 41,\
61, 59, 65, 50, 54, 60, 48, 51, 68, 52, 51, 61, 57, 49, 51, 62, 63, 59, 62,\
54, 59, 46, 64, 49, 61]
len(leftSide), len(rightSide)
```
$(115 + 139)!$ is a very big number. Let's start small, and take a subselection of the shell data to demonstrate the permutation test concept: the first two shells from the left of the pier and the first one from the right:
```
leftSub = [52, 54]
rightSub = [58]
totalSample = leftSub + rightSub
totalSample
```
So now we are testing the hypotheses
$$\begin{array}{lcl}H_0&:& X_1,X_2,X_3 \overset{IID}{\sim} F^*=G^* \\H_1&:&X_1, X_2 \overset{IID}{\sim} F^*, \,\,X_3 \overset{IID}{\sim} G^*, F^* \neq G^*\end{array}$$
With the test statistic
$$\begin{array}{lcl}T(X_1,X_2,X_3) &=& \text{abs} \left(\displaystyle\frac{1}{2}\displaystyle\sum_{i=1}^2X_i - \displaystyle\frac{1}{1}\displaystyle\sum_{i=2+1}^3X_i\right) \\ &=&\text{abs}\left(\displaystyle\frac{X_1+ X_2}{2} - \displaystyle\frac{X_3}{1}\right)\end{array}$$
Our observed data $x_{obs} = (x_1, x_2, x_3) = (52, 54, 58)$
and the realisation of the test statistic for this data is $t_{obs} = \text{abs}\left(\displaystyle\frac{52+54}{2} - \frac{58}{1}\right) = \text{abs}\left(53 - 58\right) = \text{abs}(-5) = 5$
Now we need to tabulate the permutations and their probabilities. There are $3! = 6$ possible permutations of three items. For larger samples, you could use the `factorial` function to calculate this:
```
factorial(3)
```
We said that under the null hypothesis (the samples have the same DF) each permutation is equally likely, so each permutation has probability $\displaystyle\frac{1}{6}$.
There is a way in Python (the language under the hood in Sage) to get all the permutations of a sequence:
```
list(Permutations(totalSample))
```
We can tabulate the permutations, their probabilities, and the value of the test statistic that would be associated with each permutation:
<table border="1" cellpadding="5" align="center">
<tbody>
<tr>
<td style="text-align: center;">Permutation</td>
<td style="text-align: center;">$t$</td>
<td style="text-align: center;">$\mathbf{P}_0(T=t)$</td>
</tr>
<tr>
<td style="text-align: center;"> </td>
<td style="text-align: center;"> </td>
<td style="text-align: center;">Probability under Null</td>
</tr>
<tr>
<td style="text-align: center;">(52, 54, 58)</td>
<td style="text-align: center;">5</td>
<td style="text-align: center;">$\frac{1}{6}$</td>
</tr>
<tr>
<td style="text-align: center;">(52, 58, 54)</td>
<td style="text-align: center;"> 1</td>
<td style="text-align: center;">$\frac{1}{6}$</td>
</tr>
<tr>
<td style="text-align: center;">(54, 52, 58)</td>
<td style="text-align: center;">5</td>
<td style="text-align: center;">$\frac{1}{6}$</td>
</tr>
<tr>
<td style="text-align: center;">(54, 58, 52)</td>
<td style="text-align: center;">4</td>
<td style="text-align: center;">$\frac{1}{6}$</td>
</tr>
<tr>
<td style="text-align: center;">(58, 52, 54)</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">$\frac{1}{6}$</td>
</tr>
<tr>
<td style="text-align: center;">(58, 54, 52)</td>
<td style="text-align: center;">4</td>
<td style="text-align: center;">$\frac{1}{6}$</td>
</tr>
</tbody>
</table>
```
allPerms = list(Permutations(totalSample))
for p in allPerms:
    t = abs((p[0] + p[1])/2 - p[2]/1)
    print(p, " has t = ", t)
```
To calculate the P-value for our test statistic $t_{obs} = 5$, we need to look at how many permutations would give rise to test statistics that are at least as big, and add up their probabilities.
$$
\begin{array}{lcl}\text{P-value} &=& \mathbf{P}_0(T \geq t_{obs}) \\&=&\mathbf{P}_0(T \geq 5)\\&=&\frac{1}{6} + \frac {1}{6} \\&=&\frac{2}{6}\\ &=&\frac{1}{3} \\ &\approx & 0.333\end{array}
$$
We could write ourselves a little bit of code to do this in SageMath. As you can see, we could easily improve this to make it more flexible so that we could use it for different numbers of samples, but it will do for now.
```
allPerms = list(Permutations(totalSample))
pProb = 1/len(allPerms)
pValue = 0
tobs = 5
for p in allPerms:
    t = abs((p[0] + p[1])/2 - p[2]/1)
    if t >= tobs:
        pValue = pValue + pProb
pValue
```
This means that there is little or no evidence against the null hypothesis (that the shell diameter observations are from the same DF).
### Pooled sample size
The lowest possible P-value for a pooled sample of size $N=m+n$ is $\displaystyle\frac{1}{N!}$. Can you see why this is?
So with our small sub-samples the smallest possible P-value would be $\frac{1}{6} \approx 0.167$. If we are looking for a P-value $\leq 0.01$ to constitute very strong evidence against $H_0$, then we have to have a large enough pooled sample for this to be possible. Since $5! = 5 \times 4 \times 3 \times 2 \times 1 = 120$ and $\frac{1}{120} \approx 0.0083 \leq 0.01$, it is good to have $N \geq 5$.
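A quick sanity check of that claim in plain Python, searching for the smallest $N$ whose best-case P-value $\frac{1}{N!}$ reaches the 0.01 threshold:

```python
from math import factorial

threshold = 0.01
N = 1
# grow N until the smallest achievable P-value, 1/N!, drops to the threshold
while 1 / factorial(N) > threshold:
    N += 1

print(N, 1 / factorial(N))  # 5, since 1/5! = 1/120 is about 0.0083
```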
### YouTry in class
Try copying and pasting our code and then adapting it to deal with a sub-sample (52, 54, 60) from the left of the pier and (58, 54) from the right side of the pier.
```
leftSub = [52, 54, 60]
rightSub = [58, 54]
totalSample = leftSub + rightSub
totalSample
```
### You will have to think about:
- calculating the value of the test statistic for the observed data and for all the permutations of the total sample
- calculating the probability of each permutation
- calculating the P-value by adding the probabilities for the permutations with test statistics at least as large as the observed value of the test statistic
(add more cells if you need them)
(end of You Try)
---
We can use the `sample` function and the Python method for making permutations to experiment with a larger sample, say 5 of each.
```
n, m = 5, 5
leftSub = sample(leftSide, n)
rightSub = sample(rightSide,m)
totalSample = leftSub + rightSub
leftSub; rightSub; totalSample
tobs = abs(mean(leftSub) - mean(rightSub))
tobs
```
We have met `sample` briefly already: it is part of the Python `random` module and it does exactly what you would expect from the name: it samples a specified number of elements randomly from a sequence.
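For instance, in plain Python (with a seed so the draw is reproducible):

```python
import random

random.seed(42)
population = [52, 54, 60, 60, 54, 47, 57, 58, 61, 57]  # first few left-side shells
draw = random.sample(population, 3)  # 3 elements drawn without replacement
print(draw)
```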
```
# define a helper function for calculating the t statistic from a permutation
def tForPerm(perm, samplesize1, samplesize2):
    '''Calculates the t statistic for a permutation of data given the sample sizes to split the permutation into.
    Param perm is the permutation of data to be split into the two samples.
    Param samplesize1, samplesize2 are the two sample sizes.
    Returns the absolute value of the difference in the means of the two samples split out from perm.'''
    sample1 = [perm[i] for i in range(samplesize1)]
    sample2 = [perm[samplesize1+j] for j in range(samplesize2)]
    return abs(mean(sample1) - mean(sample2))

allPerms = list(Permutations(totalSample))
pProb = 1/len(allPerms)
pValue = 0
tobs = abs(mean(leftSub) - mean(rightSub))
for p in allPerms:
    t = tForPerm(p, n, m)
    if t >= tobs:
        pValue = pValue + pProb
pValue
n+m
factorial(n+m) # how many permutations it is checking
```
As you can see from the length of time it takes to do the calculation for $(5+5)! = 10!$ permutations, we will be here a long time if we try to do this on all of both shell data sets. Monte Carlo methods to the rescue: we can use Monte Carlo integration to calculate an approximate P-value, and this will be our next topic.
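As a preview of that idea, here is a minimal plain-Python sketch of a Monte Carlo permutation test: rather than enumerating all $N!$ permutations, it shuffles the pooled data a fixed number of times (the `+1` terms are a common correction that keeps the estimate away from exactly zero):

```python
import random

def mc_perm_test(sample1, sample2, n_shuffles=1000, seed=0):
    """Approximate the permutation-test P-value by random shuffling."""
    rng = random.Random(seed)
    m = len(sample1)
    pooled = list(sample1) + list(sample2)

    def t_stat(seq):
        # first m entries play the role of sample 1, the rest of sample 2
        return abs(sum(seq[:m]) / m - sum(seq[m:]) / (len(seq) - m))

    t_obs = t_stat(pooled)
    hits = 0
    for _ in range(n_shuffles):
        rng.shuffle(pooled)  # a uniformly random permutation of the pool
        if t_stat(pooled) >= t_obs:
            hits += 1
    return (hits + 1) / (n_shuffles + 1)  # avoid reporting exactly 0

# clearly different samples should give a very small estimated P-value
print(mc_perm_test([0] * 10, [100] * 10))
```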
### You try
Try working out the P-value for a sub-sample (58, 63) from the left of the pier and (61) from the right (the two last values in the left-side data set and the last value in the right-side one). Do it as you would if given a similar question in the exam: you choose how much you want to use Sage to help and how much you do just with pen and paper.
# Plug-in Estimation and Bootstrap
*Raaz needs 4-5 hours*
<a href="https://colab.research.google.com/github/davemcg/scEiaD/blob/master/colab/cell_type_ML_labelling.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Auto Label Retinal Cell Types
## tldr
You can take your (retina) scRNA data and fairly quickly use the scEiaD ML model
to auto label your cell types. I say fairly quickly because it is *best* if you re-quantify your data with the same reference and counter (kallisto) that we use. You *could* try using your counts from cellranger/whatever....but uh...stuff might get weird.
# Install scvi and kallisto-bustools
```
import sys
import re
# if True, will install via pypi, else will install from source
stable = True
IN_COLAB = "google.colab" in sys.modules
if IN_COLAB and stable:
    !pip install --quiet scvi-tools[tutorials]==0.9.0
    #!pip install --quiet python==3.8 pandas numpy scikit-learn xgboost==1.3
    !pip install --quiet kb-python
    !pip install --quiet pandas numpy scikit-learn xgboost==1.3.1
```
# Download our kallisto index
As our example set is mouse, we use the Gencode vM25 transcript reference.
The script that makes the idx and t2g file is [here](https://github.com/davemcg/scEiaD/raw/c3a9dd09a1a159b1f489065a3f23a753f35b83c9/src/build_idx_and_t2g_for_colab.sh). This is precomputed as it takes about 30 minutes and 32GB of memory.
There's one more wrinkle worth noting: as scEiaD was built across human, mouse, and macaque, unified gene names are required. We chose to use the *human* Ensembl ID (e.g. CRX is ENSG00000105392) as the base gene naming system.
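To make that concrete, the `t2g` file is a tab-separated mapping from transcript IDs to (humanized) gene IDs and gene names. A small sketch of parsing such a file follows; the two rows are invented, and the exact column order of the real file is an assumption you should verify against the download:

```python
import csv
import io

# Two invented rows in the assumed transcript-ID / gene-ID / gene-name layout
t2g_text = (
    "ENSMUST00000000001.4\tENSG00000105392\tCRX\n"
    "ENSMUST00000000002.1\tENSG00000012048\tBRCA1\n"
)

tx_to_gene = {}
for row in csv.reader(io.StringIO(t2g_text), delimiter="\t"):
    transcript_id, gene_id, gene_name = row
    tx_to_gene[transcript_id] = (gene_id, gene_name)

print(tx_to_gene["ENSMUST00000000001.4"])  # ('ENSG00000105392', 'CRX')
```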
(Download links):
```
# Mouse
https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/gencode.vM25.transcripts.idx
https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/vM25.tr2gX.humanized.tsv
# Human
https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/gencode.v35.transcripts.idx
https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/v35.tr2gX.tsv
```
```
%%time
!wget -O idx.idx https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/gencode.vM25.transcripts.idx
!wget -O t2g.txt https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/vM25.tr2gX.humanized.tsv
```
# Quantify with kbtools (Kallisto - Bustools wrapper) in one easy step.
Going into the vagaries of turning an SRA deposit into a non-borked pair of fastq files is beyond the scope of this document. Plus I would swear a lot. So we just give an example set from a Human organoid retina 10x (version 2) experiment.
The Pachter Lab has a discussion of how/where to get public data here: https://colab.research.google.com/github/pachterlab/kallistobustools/blob/master/notebooks/data_download.ipynb
If you have your own 10X bam file, then 10X provides a very nice and simple tool to turn it into fastq file here: https://github.com/10XGenomics/bamtofastq
To reduce run-time we have taken the first five million reads from this fastq pair.
This will take ~3 minutes, depending on the internet speed between Google and our server.
You can also directly stream the file to improve wall-time, but I was getting periodic errors, so we are doing the simpler thing and downloading each fastq file here first.
```
%%time
!wget -O sample_1.fastq.gz https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/SRR11799731_1.head.fastq.gz
!wget -O sample_2.fastq.gz https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/SRR11799731_2.head.fastq.gz
!kb count --overwrite --h5ad -i idx.idx -g t2g.txt -x DropSeq -o output --filter bustools -t 2 \
sample_1.fastq.gz \
sample_2.fastq.gz
```
# Download models
(and our xgboost functions for cell type labelling)
The scVI model is the same one that we use to create the data for plae.nei.nih.gov
The xgboost model is a simplified version that *only* uses the scVI latent dims and collapses the Early/Late RPC cell types into a single "RPC" label
```
!wget -O scVI_scEiaD.tgz https://hpc.nih.gov/~mcgaugheyd/scEiaD/2021_03_17/2021_03_17__scVI_scEiaD.tgz
!tar -xzf scVI_scEiaD.tgz
!wget -O celltype_ML_model.tar https://hpc.nih.gov/~mcgaugheyd/scEiaD/2021_03_17/2021_cell_type_ML_all.tar
!tar -xf celltype_ML_model.tar
!wget -O celltype_predictor.py https://raw.githubusercontent.com/davemcg/scEiaD/master/src/cell_type_predictor.py
```
# Python time
```
import anndata
import sys
import os
import numpy as np
import pandas as pd
import random
import scanpy as sc
from scipy import sparse
import scvi
import torch
# 2 cores
sc.settings.n_jobs = 2
# set seeds
random.seed(234)
scvi.settings.seed = 234
# set some args
org = 'mouse'
n_epochs = 15
confidence = 0.5
```
# Load adata
And process (mouse processing requires a bit more jiggling, which can be skipped if you have human data)
```
# load query data
adata_query = sc.read_h5ad('output/counts_filtered/adata.h5ad')
adata_query.layers["counts"] = adata_query.X.copy()
adata_query.layers["counts"] = sparse.csr_matrix(adata_query.layers["counts"])
# Set scVI model path
scVI_model_dir_path = 'scVIprojectionSO_scEiaD_model/n_features-5000__transform-counts__partition-universe__covariate-batch__method-scVIprojectionSO__dims-8/'
# Read in HVG genes used in scVI model
var_names = pd.read_csv(scVI_model_dir_path + '/var_names.csv', header = None)
# cut down query adata object to use just the var_names used in the scVI model training
if org.lower() == 'mouse':
    adata_query.var_names = adata_query.var['gene_name']

n_missing_genes = sum(~var_names[0].isin(adata_query.var_names))
dummy_adata = anndata.AnnData(X=sparse.csr_matrix((adata_query.shape[0], n_missing_genes)))
dummy_adata.obs_names = adata_query.obs_names
dummy_adata.var_names = var_names[0][~var_names[0].isin(adata_query.var_names)]
adata_fixed = anndata.concat([adata_query, dummy_adata], axis=1)
adata_query_HVG = adata_fixed[:, var_names[0]]
```
# Run scVI (trained on scEiaD data)
Goal: get scEiaD batch corrected latent space for *your* data
```
adata_query_HVG.obs['batch'] = 'New Data'
scvi.data.setup_anndata(adata_query_HVG, batch_key="batch")
vae_query = scvi.model.SCVI.load_query_data(
    adata_query_HVG,
    scVI_model_dir_path
)
# project scVI latent dims from scEiaD onto query data
vae_query.train(max_epochs=n_epochs, plan_kwargs=dict(weight_decay=0.0))
# get the latent dims into the adata
adata_query_HVG.obsm["X_scVI"] = vae_query.get_latent_representation()
```
# Get Cell Type predictions
(this xgboost model does NOT use the organism or Age information, but as those fields were often used by us, they got hard-coded in. So we will put dummy values in.)
```
# extract latent dimensions
obs=pd.DataFrame(adata_query_HVG.obs)
obsm=pd.DataFrame(adata_query_HVG.obsm["X_scVI"])
features = list(obsm.columns)
obsm.index = obs.index.values
obsm['Barcode'] = obsm.index
obsm['Age'] = 1000
obsm['organism'] = 'x'
# xgboost ML time
from celltype_predictor import *
CT_predictions = scEiaD_classifier_predict(inputMatrix=obsm,
                                           labelIdCol='ID',
                                           labelNameCol='CellType',
                                           trainedModelFile=os.getcwd() + '/2021_cell_type_ML_all',
                                           featureCols=features,
                                           predProbThresh=confidence)
```
# What do we have?
```
CT_predictions['CellType'].value_counts()
```