# NearestCentroid with MaxAbsScaler and QuantileTransformer
This code template is for a classification task using a simple NearestCentroid classifier, with MaxAbsScaler data rescaling and QuantileTransformer feature transformation combined in a pipeline.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from imblearn.over_sampling import RandomOverSampler
from sklearn.preprocessing import LabelEncoder, MaxAbsScaler, QuantileTransformer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestCentroid
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report, plot_confusion_matrix
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the initial rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selection
Feature selection is the process of reducing the number of input variables when developing a predictive model, both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since most machine learning models in the sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill null values, if any exist, and encode the string classes in the dataset as integer classes.
```
def NullClearner(df):
    if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
        df.fillna(df.mean(),inplace=True)
        return df
    elif(isinstance(df, pd.Series)):
        df.fillna(df.mode()[0],inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    return pd.get_dummies(df)

def EncodeY(df):
    if len(df.unique())<=2:
        return df
    else:
        un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort')
        df=LabelEncoder().fit_transform(df)
        EncodedT=[xi for xi in range(len(un_EncodedT))]
        print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT))
        return df

x=X.columns.to_list()
for i in x:
    X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
#### Distribution Of Target Variable
```
plt.figure(figsize = (10,6))
se.countplot(Y)
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Handling Target Imbalance
The challenge of working with imbalanced datasets is that most machine learning techniques will ignore, and in turn perform poorly on, the minority class, although it is typically performance on the minority class that matters most.
One approach to addressing imbalanced datasets is to oversample the minority class. The simplest approach duplicates examples in the minority class. We will perform oversampling using the imblearn library.
```
x_train,y_train = RandomOverSampler(random_state=123).fit_resample(x_train, y_train)
```
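To see what random oversampling does, here is a minimal plain-NumPy sketch of the idea (the arrays `X_demo` and `y_demo` are illustrative stand-ins, not the dataset above): minority-class rows are duplicated at random until the classes are balanced.

```python
import numpy as np

rng = np.random.default_rng(123)
X_demo = np.arange(10).reshape(-1, 1)
y_demo = np.array([0]*8 + [1]*2)           # imbalanced: 8 vs 2

# resample minority-class indices with replacement up to the majority size
minority = np.flatnonzero(y_demo == 1)
extra = rng.choice(minority, size=(y_demo == 0).sum() - minority.size, replace=True)
X_res = np.vstack([X_demo, X_demo[extra]])
y_res = np.concatenate([y_demo, y_demo[extra]])
print(np.bincount(y_res))                  # [8 8] — classes now balanced
```

RandomOverSampler performs this kind of duplication for you and keeps features and labels aligned.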
### Data Rescaling
Scale each feature by its maximum absolute value.
This estimator scales and translates each feature individually such that the maximal absolute value of each feature in the training set will be 1.0. It does not shift/center the data, and thus does not destroy any sparsity.
This scaler can also be applied to sparse CSR or CSC matrices. See [MaxAbsScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MaxAbsScaler.html).
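The arithmetic is simple enough to illustrate directly; this sketch (with made-up data) reproduces what MaxAbsScaler computes per column:

```python
import numpy as np

X_demo = np.array([[ 1.0, -2.0],
                   [ 2.0,  4.0],
                   [-4.0,  1.0]])
# MaxAbsScaler equivalent: divide each column by its maximum absolute value
X_scaled = X_demo / np.abs(X_demo).max(axis=0)
print(X_scaled)   # every column now lies within [-1, 1]; zeros stay zero
```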
### Feature Transformation
#### Quantile Transformer
Transform features using quantiles information.
This method transforms the features to follow a uniform or a normal distribution. Therefore, for a given feature, this transformation tends to spread out the most frequent values. It also reduces the impact of (marginal) outliers: this is therefore a robust preprocessing scheme.
[For More Reference](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.QuantileTransformer.html)
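The core idea behind the uniform-output mode can be sketched in plain NumPy (an illustrative approximation, not the transformer's exact implementation): each value is mapped to its empirical quantile, so any input distribution comes out roughly uniform on [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(size=1000)    # a heavily skewed feature
ranks = x.argsort().argsort()     # rank of each value within the sample
u = ranks / (len(x) - 1)          # empirical CDF value in [0, 1]
print(u.min(), u.max())           # spans [0, 1] regardless of the input shape
```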
### Model
The NearestCentroid classifier is a simple algorithm that represents each class by the centroid of its members. In effect, this makes it similar to the label updating phase of the KMeans algorithm. It also has no parameters to choose, making it a good baseline classifier. It does, however, suffer on non-convex classes, as well as when classes have drastically different variances, as equal variance in all dimensions is assumed.
#### Tuning Parameter
> **metric** : The metric to use when calculating distance between instances in a feature array. If metric is a string or callable, it must be one of the options allowed by metrics.pairwise.pairwise_distances for its metric parameter. The centroid for the samples of each class is the point that minimizes the sum of the distances of all samples belonging to that class. If the “manhattan” metric is provided, this centroid is the median; for all other metrics, the centroid is the mean.
> **shrink_threshold** : Threshold for shrinking centroids to remove features.
```
# Build Model here
model = make_pipeline(MaxAbsScaler(),QuantileTransformer(),NearestCentroid())
model.fit(x_train, y_train)
```
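As a hedged illustration of the tuning parameters above (using scikit-learn's bundled iris dataset as a stand-in, not this template's data), here is how `shrink_threshold` would be set on a bare NearestCentroid:

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import NearestCentroid

# iris is only a stand-in dataset for demonstration
X_demo, y_demo = load_iris(return_X_y=True)
plain = NearestCentroid().fit(X_demo, y_demo)
shrunk = NearestCentroid(shrink_threshold=0.2).fit(X_demo, y_demo)
print(plain.score(X_demo, y_demo), shrunk.score(X_demo, y_demo))
```

Whether shrinkage helps depends on the dataset; it is worth comparing scores with and without it, as here.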
#### Model Accuracy
The score() method returns the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each label set be correctly predicted for each sample.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
#### Confusion Matrix
A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known.
```
plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues)
```
#### Classification Report
A classification report is used to measure the quality of predictions from a classification algorithm: how many predictions were correct and how many were not.
* **where**:
 - Precision: accuracy of positive predictions.
 - Recall: fraction of positives that were correctly identified.
 - f1-score: harmonic mean of precision and recall.
 - support: the number of actual occurrences of each class in the dataset.
```
print(classification_report(y_test,model.predict(x_test)))
```
### Creator: Jay Shimpi, GitHub: [profile](https://github.com/JayShimpi22)
# Spatial Analysis
<br>
### Imports
```
import pandas as pd
import geopandas as gpd
import requests
import warnings
import matplotlib.pyplot as plt
def df_to_gdf(
    df: pd.DataFrame,
    crs: str='EPSG:4326',
    lat_col: str='Latitude',
    lon_col: str='Longitude'
):
    with warnings.catch_warnings():
        warnings.simplefilter('ignore')
        gdf = gpd.GeoDataFrame(
            df.drop(columns=[lat_col, lon_col]),
            # points_from_xy expects x (longitude) first, then y (latitude)
            geometry=gpd.points_from_xy(df[lon_col].values, df[lat_col].values, crs=crs),
            crs=crs
        )
    return gdf
def load_subsation_locs_gdf(
    wpd_network_capacity_map_url: str='https://connecteddata.westernpower.co.uk/dataset/967404e0-f25c-469b-8857-1a396f3c363f/resource/d1895bd3-d9d2-4886-a0a3-b7eadd9ab6c2/download/wpd-network-capacity-map.csv',
    network_ids_filter: list=[15130, 15264, 15246]
):
    df_wpd_map = pd.read_csv(wpd_network_capacity_map_url)
    df_wpd_map_focus = df_wpd_map.query('Network_Reference_ID in @network_ids_filter')
    df_subsation_locs = df_wpd_map_focus.set_index('Substation_Name')[['Latitude', 'Longitude']]
    df_subsation_locs.index = df_subsation_locs.index.str.lower()
    gdf_subsation_locs = df_to_gdf(df_subsation_locs)
    return gdf_subsation_locs

gdf_subsation_locs = load_subsation_locs_gdf()
gdf_subsation_locs
def load_weather_grid_locs_gdf(
    weather_grid_locs: list=[
        {'Name': 'mousehole_1', 'Latitude': 50.0, 'Longitude': -5.625},
        {'Name': 'mousehole_2', 'Latitude': 50.0, 'Longitude': -5.0},
        {'Name': 'mousehole_3', 'Latitude': 50.5, 'Longitude': -5.625},
        {'Name': 'mousehole_4', 'Latitude': 50.5, 'Longitude': -5.0},
        {'Name': 'mousehole_5', 'Latitude': 50.5, 'Longitude': -4.375},
        {'Name': 'staplegrove_1', 'Latitude': 51.0, 'Longitude': -3.125},
        {'Name': 'staplegrove_2', 'Latitude': 51.0, 'Longitude': -2.5},
        {'Name': 'staplegrove_3', 'Latitude': 51.5, 'Longitude': -3.125},
        {'Name': 'staplegrove_4', 'Latitude': 51.5, 'Longitude': -2.5},
        {'Name': 'staplegrove_5', 'Latitude': 51.0, 'Longitude': -3.75}
    ]
):
    gdf_weather_grid_locs = df_to_gdf(pd.DataFrame(weather_grid_locs).set_index('Name'))
    return gdf_weather_grid_locs
gdf_weather_grid_locs = load_weather_grid_locs_gdf()
gdf_weather_grid_locs
fig, ax = plt.subplots(dpi=150)
gdf_weather_grid_locs.plot(ax=ax, label='Weather grid')
gdf_subsation_locs.plot(ax=ax, label='Substation')
ax.legend(frameon=False, bbox_to_anchor=(1, 1))
```
<img src="fig/scikit-hep-logo.svg" style="height: 200px; margin-left: auto; margin-bottom: -75px">
# Scikit-HEP tutorial for the STAR collaboration
This notebook shows you how to do physics analysis in Python using Scikit-HEP tools: Uproot, Awkward Array, Vector, hist, etc., and it uses a STAR PicoDST file as an example. I presented this tutorial on Zoom on September 13, 2021 (see [STAR collaboration website](https://drupal.star.bnl.gov/STAR/meetings/star-collaboration-meeting-september-2021/juniors-day), if you have access). You can also find it on GitHub at [jpivarski-talks/2021-09-13-star-uproot-awkward-tutorial](https://github.com/jpivarski-talks/2021-09-13-star-uproot-awkward-tutorial).
You can [run this notebook on Binder](https://mybinder.org/v2/gh/jpivarski-talks/2021-09-13-star-uproot-awkward-tutorial/HEAD?urlpath=lab/tree/tutorial.ipynb), which loads all of the package dependencies on Binder's servers; you don't have to install anything on your computer. But if you would like to run it on your computer, see the [requirements.txt](https://github.com/jpivarski-talks/2021-09-13-star-uproot-awkward-tutorial/blob/main/requirements.txt) file. This specifies exact versions of dependencies that are known to work for this notebook, though if you plan to use these packages later on, you'll want the latest versions of each.
The first 5 sections are introductory, and the last contains exercises. In the live tutorial, we spent one hour on the introductory material and one hour in small groups, working on the exercises.
## 1. Python: interactively building up an analysis
<img src="fig/python-logo.svg" style="height: 150px">
### Introduction: why Python?
**It's where the party's at.**
I mean it: the best argument for Python is that there's a huge community of _people_ who can help you and _people_ developing infrastructure.
You probably learned Python in a university course, and you'll probably use it a lot in your career.
<br><br><br>
**Python is especially popular for data analysis:**
<img src="fig/analytics-by-language.svg" width="800px">
See also Jake VanderPlas's [_Unexpected Effectiveness of Python in Science_](https://speakerdeck.com/jakevdp/the-unexpected-effectiveness-of-python-in-science) talk (2017).
<br><br><br>
**Python is one of a large class of languages in which "debugging mode is always on."**
* Except for bugs in external libraries (and very rare bugs in the language itself), you can't cause a segmentation fault or memory leak.
* Errors produce a stack trace _with line numbers_.
* You can print any object.
* Interactive debuggers are superfluous; the language itself is interactive (raise an exception at a break point that exports variables).
* Although long-running programs are slow, it _starts up_ quickly, minimizing the debug cycle.
C++ is not one of these languages.
<br><br><br>
**"Always debugging" has a cost: performance.**
Most of Python's design choices prevent fast computation:
* runtime type checking
* garbage collection
* boxing numbers as objects
* no value types or move semantics at all (all Python references are "pointer chasing")
* virtual machine indirection
* Global Interpreter Lock (GIL) prevents threads from running in parallel
_(Maybe not a necessary cost: [Julia](https://julialang.org/) might be exempt, but Julia is not popular—yet.)_
<br><br><br>
**And yet, scientists with big datasets have made Python their home.**
This was only possible because an ecosystem grew around _arrays_ and _array-oriented programming._
<img src="fig/harris-array-programming-nature.png" width="800px">
From Harris, C.R., Millman, K.J., van der Walt, S.J. _et al._ Array programming with NumPy. _Nature_ **585,** 357–362 (2020).
<br>
https://doi.org/10.1038/s41586-020-2649-2
<br><br><br>
**Array-oriented programming is also the paradigm of GPU programming.**
The slowness of Python might have pushed us toward it, but dividing your work into
* complex bookkeeping that doesn't need to be fast and
* a mathematical formula to compute on billions of data points
is also the right way to massively parallelize.
<br><br><br>
### What is Scikit-HEP?
[Scikit-HEP](https://scikit-hep.org/) is a collection of Python packages for array-oriented data analysis in particle physics.
(I know, "High Energy" does not describe all of Particle Physics, but Scikit-PP was a worse name.)
The goal of this umbrella organization is to make sure that these packages
* work well together
* work well with the Python ecosystem (NumPy, Pandas, Matplotlib, ...) and the traditional HEP ecosystem (ROOT, formats, generators, ...)
* are packaged well and designed well, to minimize physicist frustration!
I'm the author of Uproot and Awkward Array, so I'll have the most to say about these.
But the context is that they work with a suite of other libraries.
<br><br><br>
### Python language features that we'll be using
We don't have time for an intro course, but many exist.
<br><br><br>
The basic control structures:
```
x = 5

if x < 5:
    print("small")
else:
    print("big")

for i in range(x):
    print(i)
```
are less relevant for this tutorial because these are what we avoid when working with large arrays.
<br><br><br>
We will be focusing on operations like
```python
import compiled_library
compiled_library.do_computationally_expensive_thing(big_array)
```
because array-oriented Python is about separating small scale, complex bookkeeping (with `if` and `for`) from large-scale processing in compiled libraries.
<br><br><br>
The trick is for the Python-side code to be expressive enough and the compiled code to be general enough that you don't need a new `compiled_library` for every little thing.
<br><br><br>
Much of this expressivity grew up around Python's syntax for slicing lists:
```
some_list = [0.0, 1.1, 2.2, 3.3, 4.4, 5.5, 6.6, 7.7, 8.8, 9.9]
```
Single-element access is like C (counting starts at zero):
```
some_list[0]
some_list[4]
```
Except when the index is negative; then it counts from the end of the list.
```
some_list[-1]
some_list[-2]
```
Running off the end of a list is an exception, not a segmentation fault or data corruption.
```
some_list[10]
some_list[-11]
```
With a colon (`:`), you can also select ranges. The _start_ of each range is included; the _stop_ is excluded.
```
some_list[2:7]
some_list[-6:]
```
Out-of-bounds ranges don't cause exceptions; they get clipped.
```
some_list[8:100]
some_list[-1000:-500]
```
Slices have a third argument for how many elements to skip between each step.
```
some_list[1:10:2]
some_list[::2]
```
This _step_ is not as useful as the _start_ and _stop_, but you can always reverse a list:
```
some_list[::-1]
```
The _start_, _stop_, and _step_ are exactly the arguments of `range`:
```
for i in range(1, 9, 3):
    print(i)
some_list[1:9:3]
```
<br><br><br>
In Python, "dicts" (mappings, like `std::map`) are just as important as lists (like `std::vector`).
They use the square-bracket syntax in a different way:
```
some_dict = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
some_dict["two"]
some_dict["not in dict"]
```
<br><br><br>
Apart from function calls, arithmetic, variables and assignment—which are the same in all mainstream languages—slicing is the only language feature we need for NumPy.
<br><br><br>
## 2. NumPy: thinking one array at a time
<img src="fig/numpy-logo.svg" style="height: 150px">
### Introduction
NumPy is a Python library consisting of one major data type, `np.ndarray`, and a suite of functions to manipulate objects of that type.
<br><br><br>
<img src="fig/Numpy_Python_Cheat_Sheet.svg" width="100%">
<br><br><br>
This is "array-oriented." In each step, you decide what to do to _all values in a large dataset_.
There is a tradition of languages built around this paradigm. Most of them have been for interactive data processing. NumPy is unusual in that it's a library.
<img src="fig/apl-timeline.svg" width="800px">
<br><br><br>
### Slicing in NumPy
NumPy has the same slicing syntax for arrays as Python has for lists.
```
import numpy as np
some_array = np.array([0.0, 1.1, 2.2, 3.3, 4.4, 5.5, 6.6, 7.7, 8.8, 9.9])
some_array[3]
some_array[-2]
some_array[-3::-1]
```
But NumPy slices can operate on multiple dimensions.
```
array3d = np.arange(2*3*5).reshape(2, 3, 5)
array3d
array3d[1:, 1:, 1:]
```
Python lists _do not_ do this.
```
list3d = array3d.tolist()
list3d
list3d[1:, 1:, 1:]
```
NumPy slices can have mixed types:
```
array3d[:, -1, -1]
```
<img src="fig/numpy-slicing.png" width="400px">
Most importantly, NumPy slices can also be arrays.
```
some_array = np.array([ 0.0, 1.1, 2.2, 3.3, 4.4, 5.5, 6.6, 7.7, 8.8, 9.9])
boolean_slice = np.array([True, True, True, True, True, False, True, False, True, False])
some_array[boolean_slice]
integer_slice = np.array([4, 2, 2, 0, 9, 8, 3])
some_array[integer_slice]
some_array[np.random.permutation(10)]
```
#### **Application:** A boolean-array slice is what we like to call a cut!
```
primary_vertexes = np.random.normal(0, 1, (1000000, 2))
primary_vertexes
len(primary_vertexes)
trigger_decision = np.random.randint(0, 2, 1000000, dtype=np.bool_)
trigger_decision
primary_vertexes[trigger_decision]
len(primary_vertexes[trigger_decision])
```
#### **Observation:** An integer-array slice is more general than a boolean-array slice.
```
indexes_that_pass_trigger = np.nonzero(trigger_decision)[0]
indexes_that_pass_trigger
primary_vertexes[indexes_that_pass_trigger]
len(primary_vertexes[indexes_that_pass_trigger])
```
In fact, integer-array slicing is [as general as function composition](https://github.com/scikit-hep/awkward-1.0/blob/0.2.6/docs/theory/arrays-are-functions.pdf).
Often, a hard problem can be solved by (1) constructing the appropriate integer array and (2) using it as a slice.
Many problems can be solved at compiled speed (through NumPy) without writing and compiling a new extension module.
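A common instance of this pattern (illustrative data, not from the tutorial's files) is sorting one array and carrying several aligned arrays along with it: construct the integer array once with `argsort`, then use it as a slice everywhere.

```python
import numpy as np

pt  = np.array([3.1, 9.9, 1.2, 5.0])
eta = np.array([0.5, -1.0, 2.2, 0.0])

order = np.argsort(pt)[::-1]    # integer array: indices that sort pt descending
print(pt[order])                # [9.9 5.  3.1 1.2]
print(eta[order])               # eta reordered the same way, staying aligned
```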
#### **Application:** An integer-array slice maps objects in one collection to objects in another collection.
```
tracks = np.array([(0, 0.0, -1), (1, 1.1, 3), (2, 2.2, 0), (3, 3.3, -1), (4, 4.4, 2)], [("id", int), ("pt", float), ("shower_id", int)])
tracks["id"], tracks["pt"], tracks["shower_id"]
showers = np.array([(0, 2.0, 2), (1, 9.9, -1), (2, 4.0, 4), (3, 1.0, 1)], [("id", int), ("E", float), ("track_id", int)])
showers["id"], showers["E"], showers["track_id"]
showers[2]
showers[2]["track_id"]
tracks[showers[2]["track_id"]]
tracks[showers["track_id"]]
showers[tracks["shower_id"]]
```
### Elementwise operations
All "scalars → scalar" math functions, such as `+`, `sqrt`, `sinh`, can also take arrays as input and return an array as output.
The mathematical operation is performed elementwise.
```
some_array = np.arange(10)
some_array
some_array + 100
some_array + np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
np.sqrt(some_array)
```
This is the part of array-oriented programming in Python that most resembles GPU programming.
One step ("kernel") of a GPU calculation is usually a simple function applied to every element of an array on a GPU.
#### **Observation:** Elementwise operations can be used to construct boolean arrays for cuts.
```
evens_and_odds = np.arange(10)
evens_and_odds
is_even = (evens_and_odds % 2 == 0)
is_even
evens_and_odds[is_even]
evens_and_odds[~is_even]
```
This works because the comparison operators, `==`, `!=`, `<`, `<=`, `>`, `>=` are also elementwise.
```
np.arange(9) < 5
```
Cuts can be combined with _bitwise_ operators: `&` (and), `|` (or), `~` (not).
Python's normal set of _logical_ operators—`and`, `or`, `not`—do not apply elementwise. (They just raise an error if you try to use them with arrays.)
The worst part of this is that the bitwise operators have stronger precedence than comparisons:
```python
cut = pt > 5 & abs(eta) < 3 # WRONG!
cut = (pt > 5) & (abs(eta) < 3) # right
```
#### **Application:** Applying a mathematical formula to every event of a large dataset.
```
zmumu = np.load("data/Zmumu.npz") # NumPy's I/O format; only using it here because I haven't introduced Uproot yet
pt1 = zmumu["pt1"]
eta1 = zmumu["eta1"]
phi1 = zmumu["phi1"]
pt2 = zmumu["pt2"]
eta2 = zmumu["eta2"]
phi2 = zmumu["phi2"]
pt1
eta1
phi1
```
This formula computes invariant mass of particles 1 and 2 from $p_T$, $\eta$, and $\phi$.
Let's apply it to the first item of each of the arrays.
```
np.sqrt(2*pt1[0]*pt2[0]*(np.cosh(eta1[0] - eta2[0]) - np.cos(phi1[0] - phi2[0])))
```
Now let's apply it to every item of the arrays.
```
np.sqrt(2*pt1*pt2*(np.cosh(eta1 - eta2) - np.cos(phi1 - phi2)))
```
#### **Application:** Building up an analysis from quick plots.
Elementwise array formulas provide a quick way to make a plot.
```
import matplotlib.pyplot as plt
plt.hist(np.sqrt(2*pt1*pt2*(np.cosh(eta1 - eta2) - np.cos(phi1 - phi2))), bins=120, range=(0, 120));
```
In ROOT, there's a temptation to do a whole analysis in `TTree::Draw` expressions because the feedback is immediate, like array formulas.
However, that wouldn't scale to large datasets. (`TTree::Draw` would repeatedly read from disk.)
Array formulas, however, can be both quick plots _and_ scale to large datasets: **put the array formula into a loop over batches**.
```
def get_batch(i):
    "Fetches a batch from a large dataset (1000 events in this example)."
    zmumu = np.load("data/Zmumu.npz")
    pt1 = zmumu["pt1"][i*1000 : (i+1)*1000]
    eta1 = zmumu["eta1"][i*1000 : (i+1)*1000]
    phi1 = zmumu["phi1"][i*1000 : (i+1)*1000]
    pt2 = zmumu["pt2"][i*1000 : (i+1)*1000]
    eta2 = zmumu["eta2"][i*1000 : (i+1)*1000]
    phi2 = zmumu["phi2"][i*1000 : (i+1)*1000]
    return pt1, eta1, phi1, pt2, eta2, phi2
```
The array formula from the quick plot can be pasted directly into this loop over batches.
```
# accumulated histogram
counts, edges = None, None

for i in range(3):
    pt1, eta1, phi1, pt2, eta2, phi2 = get_batch(i)

    # exactly the same array formula
    invariant_mass = np.sqrt(2*pt1*pt2*(np.cosh(eta1 - eta2) - np.cos(phi1 - phi2)))

    batch_counts, batch_edges = np.histogram(invariant_mass, bins=120, range=(0, 120))
    if counts is None:
        counts, edges = batch_counts, batch_edges
    else:
        counts += batch_counts  # not the first time: add these counts to the previous counts

counts, edges
import mplhep
mplhep.histplot(counts, edges);
```
## 3. Uproot: array-oriented ROOT I/O
<img src="fig/uproot-logo.svg" style="height: 150px">
### Introduction
**Uproot is a reimplementation of ROOT file I/O in Python.**
See [uproot.readthedocs.io](https://uproot.readthedocs.io/) for tutorials and reference documentation.
Uproot can read most data types, but is particularly good for simple data (such as the contents of a PicoDST file). Uproot can also write simple data types (not covered in this tutorial).
<img src="fig/abstraction-layers.svg" width="800px">
<br><br><br>
**Data in ROOT files are stored in arrays (batches called "TBaskets"). The format is ideally suited for array-centric programming.**
Navigating the file in Python is slower than it would be in C++ (the "complex bookkeeping"), but the time to extract and decompress a large array is independent of C++ vs Python.
<br>
<img src="fig/terminology.svg" width="650px">
<br><br><br>
### Navigating a file: PicoDST
For this tutorial, we will be using a 1 GB PicoDST file.
Local and remote files can be opened with [uproot.open](https://uproot.readthedocs.io/en/latest/uproot.reading.open.html).
```
import uproot
picodst_file = uproot.open("https://pivarski-princeton.s3.amazonaws.com/pythia_ppZee_run17emb.picoDst.root")
picodst_file
```
A [ReadOnlyDirectory](https://uproot.readthedocs.io/en/latest/uproot.reading.ReadOnlyDirectory.html) is a mapping, like a Python dict.
The `keys()` method and square-bracket syntax work on a directory as they would on a dict.
```
picodst_file.keys()
picodst = picodst_file["PicoDst"]
picodst
```
In Uproot, a [TTree](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TTree.TTree.html) is also a mapping, but this time the keys are [TBranch](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TBranch.TBranch.html)es.
```
picodst.keys()
picodst["Event"]
picodst["Event"].keys()
picodst["Event/Event.mEventId"]
```
These objects have a lot of methods and properties, some of which are very low-level (e.g. get the [number of uncompressed bytes for a TBasket](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TBranch.TBranch.html#basket-uncompressed-bytes)).
[TBranch.typename](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TBranch.TBranch.html#typename), which returns the C++ type as a string, is a useful one.
```
picodst["Event/Event.mEventId"].typename
picodst["Event/Event.mTriggerIds"].typename
picodst["Event.mNHitsHFT[4]"].typename
```
The [TTree.keys](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TTree.TTree.html#keys) method has `filter_name` and `filter_typename` arguments to search for TBranches.
```
picodst.keys(filter_name="*Primary*")
for key in picodst.keys(filter_name="*Event*", filter_typename="*float*"):
    print(key.ljust(40), picodst[key].typename)
```
A convenient way to see all the branches and types at once is [TTree.show](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TTree.TTree.html#show).
```
picodst.show()
```
### Extracting arrays
[TBranch.array](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TBranch.TBranch.html#array) reads an entire TBranch (all TBaskets) from the file.
```
picodst["Event/Event.mEventId"].array()
```
For large and/or remote files, you can limit how much it reads with `entry_start` and `entry_stop`.
That can save time when you're exploring, or it can be used in a parallel job (e.g. "this task is responsible for `entry_start=10000, entry_stop=20000`").
```
picodst["Event/Event.mEventId"].array(entry_stop=10)
```
[TTree.arrays](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TTree.TTree.html#arrays) reads multiple TBranches from the file.
It is important to filter the TBranches somehow, with `filter_name`/`filter_typename` or with a set of `expressions`, to prevent it from trying to read the whole file!
First, test the filter on [TTree.keys](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TTree.TTree.html#keys) to be sure it's selecting the TBranches you want.
```
picodst.keys(filter_name="*mPrimaryVertex[XYZ]")
```
Then, apply the same filters to [TTree.arrays](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TTree.TTree.html#arrays) to read everything in one network request.
```
primary_vertex = picodst.arrays(filter_name="*mPrimaryVertex[XYZ]")
primary_vertex
```
The data have been downloaded. Extracting its parts is fast.
```
primary_vertex["Event.mPrimaryVertexX"]
primary_vertex["Event.mPrimaryVertexY"]
primary_vertex["Event.mPrimaryVertexZ"]
```
### Iterating over data in batches
The last section of the NumPy tutorial advocated an "iterate over batches" approach.
[TTree.iterate](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TTree.TTree.html#iterate) does that with an interface that's similar to [TTree.arrays](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TTree.TTree.html#arrays), and [uproot.iterate](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TBranch.iterate.html) does that with a collection of files.
```
picodst.keys(filter_name="*mGMomentum[XYZ]")
```
Quick check to see how much this is going to download (234 MB).
```
picodst["Track/Track.mGMomentumX"].compressed_bytes * 3 / 1000**2
picodst.num_entries
for arrays in picodst.iterate(filter_name="*mGMomentum[XYZ]", step_size=100):
    # put your analysis in here
    print(len(arrays), arrays)
```
## 4. Awkward Array: complex data in arrays
<img src="fig/awkward-logo.svg" style="height: 150px">
### Introduction
You might have noticed that the arrays Uproot returned (by default) were not NumPy arrays.
That's because our data frequently has variable-length structures, and NumPy deals strictly with rectilinear arrays.
**Awkward Array is an extension of NumPy to generic data types, including nested lists.**
See [awkward-array.org](https://awkward-array.org/) for tutorials and reference documentation.
<img src="fig/pivarski-one-slide-summary.svg" width="1000px">
### Slicing in Awkward Array
Like NumPy, the Awkward Array library has one major type, `ak.Array`, and a suite of functions that operate on arrays.
Like NumPy, `ak.Arrays` can be sliced in multiple dimensions, including boolean-array and integer-array slicing.
```
import awkward as ak
some_array = ak.Array([0.0, 1.1, 2.2, 3.3, 4.4, 5.5, 6.6, 7.7, 8.8, 9.9])
some_array
some_array[2]
some_array[-3::-1]
some_array = ak.Array([ 0.0, 1.1, 2.2, 3.3, 4.4, 5.5, 6.6, 7.7, 8.8, 9.9])
boolean_slice = ak.Array([True, True, True, True, True, False, True, False, True, False])
some_array[boolean_slice]
integer_slice = ak.Array([4, 2, 2, 0, 9, 8, 3])
some_array[integer_slice]
```
Unlike NumPy, lists within a multidimensional array need not have the same length.
```
another_array = ak.Array([[0.0, 1.1, 2.2], [], [3.3, 4.4], [5.5], [6.6, 7.7, 8.8, 9.9]])
another_array
```
But it can still be sliced.
```
another_array[:, 2:]
another_array[::2, 0]
```
Since inner lists can have different lengths and some might even be empty, it's much easier to make slices that can't be satisfied. For instance,
```
another_array[:, 0]
```
Some of Awkward Array's functions, such as [ak.num](https://awkward-array.readthedocs.io/en/latest/_auto/ak.num.html), can help to construct valid slices.
```
ak.num(another_array)
ak.num(another_array) > 0
another_array[ak.num(another_array) > 0]
another_array[ak.num(another_array) > 0, 0]
```
### Slices beyond NumPy
Since Awkward Arrays can have shapes that aren't possible in NumPy, there will be situations where we'll want to make slices that have no NumPy equivalent.
Like, what if we want to select all the even numbers from:
```
even_and_odd = ak.Array([[0, 1, 2], [], [3, 4], [5], [6, 7, 8, 9]])
even_and_odd
```
We can use an elementwise comparison to make a boolean array.
```
is_even = (even_and_odd % 2 == 0)
is_even
```
This `is_even` contains variable-length lists. You can see that from the type and when we turn it back into Python lists:
```
is_even.type
is_even.tolist()
```
But the lengths of the lists in this array _are the same_ as the lengths of the lists in `even_and_odd` because the elementwise operation does not change list lengths.
We'd like to use this as a cut:
```
even_and_odd[is_even]
```
As long as the slicing array "fits into" the array being sliced, the slice can be applied.
This lets us use Awkward Arrays in the same situations as NumPy arrays, the only difference being that the data can't be arranged into a rectilinear tensor.
### Record arrays
Another data type that Awkward Array supports is a "record." This is an object with named, typed fields, like a `class` or `struct` in C++.
When converting to or from Python objects, Awkward records correspond to Python dicts. But these are not general mappings like dicts: every instance of a record in the array must have the same fields.
```
array_of_records = ak.Array([{"x": 1, "y": 1.1}, {"x": 2, "y": 2.2}, {"x": 3, "y": 3.3}])
array_of_records
array_of_records.type
array_of_records.tolist()
```
Records can be _in_ lists and/or their fields can _contain_ lists.
```
array_of_lists_of_records = ak.Array([
[{"x": 1, "y": 1.1}, {"x": 2, "y": 2.2}, {"x": 3, "y": 3.3}],
[],
[{"x": 4, "y": 4.4}, {"x": 5, "y": 5.5}],
])
array_of_lists_of_records
array_of_lists_of_records.type
array_of_records_with_list = ak.Array([
{"x": 1, "y": [1]}, {"x": 2, "y": [1, 2]}, {"x": 3, "y": [1, 2, 3]}
])
array_of_records_with_list
array_of_records_with_list.type
```
### Fluidity of records
Whereas a class instance is a "solid" object in Python or C++, whose existence takes up memory and whose construction or deconstruction takes time, records in Awkward Array are very fluid.
Records can be "projected" to separate arrays and separate arrays can be "zipped" into records for very little computational cost.
```
array_of_records
array_of_records["x"] # or array_of_records.x
array_of_records["y"] # or array_of_records.y
array_of_lists_of_records.x
array_of_lists_of_records.y
array_of_records_with_list.x
array_of_records_with_list.y
```
Going in the opposite direction, we can [ak.zip](https://awkward-array.readthedocs.io/en/latest/_auto/ak.zip.html) these together:
```
new_records = ak.zip({"a": array_of_records.y, "b": array_of_records_with_list.y})
new_records
new_records.type
new_records.tolist()
```
The only constraint is that any arrays with variable-length lists must have the same list lengths as the other arrays.
If they don't, they can't be "zipped" in all dimensions.
However, you can limit how deeply it _attempts_ to zip them with `depth_limit`.
```
everything = ak.zip(
{
"a": array_of_records.x,
"b": array_of_records.y,
"c": array_of_lists_of_records.x,
"d": array_of_lists_of_records.y,
"e": array_of_records_with_list.x,
"f": array_of_records_with_list.y,
}, depth_limit=1
)
everything
everything.type
everything.tolist()
```
### Combinatorics
With this much structure—variable-length lists and records—we can compute particle combinatorics in an array-oriented way.
Suppose we have two arrays of lists with _different_ lengths in each list.
```
numbers = ak.Array([[0, 1, 2], [], [3, 4], [5]])
letters = ak.Array([["a", "b"], ["c"], ["d"], ["e", "f"]])
ak.num(numbers), ak.num(letters)
```
Now suppose that we're interested in all _pairs_ of letters and numbers, with one letter and one number in each pair.
However, we want to do this per list: all pairs in `numbers[0]` and `letters[0]`, followed by all pairs in `numbers[1]` and `letters[1]`, etc.
If the nested lists represent particles in events, what this means is that we do not want to mix data from one event with data from another event: the usual case in particle physics.
Awkward Array has a function that does that: [ak.cartesian](https://awkward-array.readthedocs.io/en/latest/_auto/ak.cartesian.html).
<img src="fig/cartoon-cartesian.svg" width="300px">
```
pairs = ak.cartesian((numbers, letters))
pairs
pairs.type
pairs.tolist()
```
Very often, we want to create these (per-event) Cartesian products so that we can use both halves in a formula.
The left-sides and right-sides of these 2-tuples are the sort of thing that could have been constructed with [ak.zip](https://awkward-array.readthedocs.io/en/latest/_auto/ak.zip.html), so they can be deconstructed with [ak.unzip](https://awkward-array.readthedocs.io/en/latest/_auto/ak.unzip.html).
```
lefts, rights = ak.unzip(pairs)
lefts, rights
```
Note that `lefts` and `rights` do not have the same list lengths as either `numbers` or `letters`, but they have the same lengths as each other because they came from the same array of 2-tuples.
```
ak.num(lefts), ak.num(rights)
```
The Cartesian product is equivalent to this C++ `for` loop:
```c++
for (int i = 0; i < numbers.size(); i++) {
for (int j = 0; j < letters.size(); j++) {
// compute formula with numbers[i] and letters[j]
}
}
```
[ak.cartesian](https://awkward-array.readthedocs.io/en/latest/_auto/ak.cartesian.html) is often used to search for particles that decay into two different types of daughters, such as $\Lambda \to p \, \pi^-$ where the protons $p$ are all in one array and the pions $\pi^-$ are in another array, having been selected by particle ID.
It can also be used for track-shower matching, in which tracks and showers are the two collections.
Sometimes, however, you have a single collection and want to find all pairs within it without repetition, equivalent to this C++ `for` loop:
```c++
for (int i = 0; i < numbers.size(); i++) {
for (int j = i + 1; j < numbers.size(); j++) {
// compute formula with numbers[i] and numbers[j]
}
}
```
For that, there's [ak.combinations](https://awkward-array.readthedocs.io/en/latest/_auto/ak.combinations.html).
<img src="fig/cartoon-combinations.svg" width="300px">
```
numbers
pairs = ak.combinations(numbers, 2)
pairs
pairs.type
pairs.tolist()
lefts, rights = ak.unzip(pairs)
lefts, rights
```
## 5. Vector, hist, Particle...
<img src="fig/vector-logo.svg" style="height: 60px; margin-bottom: 40px"> <img src="fig/hist-logo.svg" style="height: 200px"> <img src="fig/particle-logo.svg" style="height: 120px; margin-bottom: 30px">
There are a lot more libraries, but these are the ones used in the exercises.
### Vector
Vector performs 2D, 3D, and 4D (Lorentz) vector calculations on a variety of backends.
One of these is Python itself: `vector.obj` makes a single vector as a Python object.
```
import vector
v = vector.obj(px=1.1, py=2.2, pz=3.3, E=4.4)
v
v.pt
v.boostZ(0.999)
v.to_rhophieta()
```
But we're primarily interested in _arrays of vectors_ (for array-oriented programming).
Vector can make relativity-aware subclasses of NumPy arrays. Any [structured array](https://numpy.org/doc/stable/user/basics.rec.html) with field names that can be recognized as coordinates can be cast as an array of vectors.
```
vec_numpy = (
np.random.normal(10, 1, (1000000, 4)) # a bunch of random numbers
.view([("px", float), ("py", float), ("pz", float), ("M", float)]) # label the fields with coordinates
.view(vector.MomentumNumpy4D) # cast as an array of vectors
)
vec_numpy
```
Now all of the geometry and Lorentz-vector methods apply elementwise to the whole array.
```
# compute deltaR between each vector and the one after it
vec_numpy[1:].deltaR(vec_numpy[:-1])
```
Vector can also make relativity-aware Awkward Arrays.
Any records with coordinate-named fields and `"Momentum4D"` as a record name will have Lorentz-vector methods if `vector.register_awkward()` has been called.
```
vector.register_awkward()
# get momenta of tracks from the PicoDST file
px, py, pz = picodst.arrays(filter_name="*mGMomentum[XYZ]", entry_stop=100, how=tuple)
vec_awkward = ak.zip({"px": px, "py": py, "pz": pz, "M": 0}, with_name="Momentum4D")
vec_awkward
```
Now we can use Awkward Array's irregular slicing with Lorentz-vector methods.
```
# compute deltaR between each vector and the one after it WITHIN each event
vec_awkward[:, 1:].deltaR(vec_awkward[:, :-1])
```
### Hist
The hist library fills N-dimensional histograms with [advanced slicing](https://uhi.readthedocs.io/en/latest/indexing.html#examples) and [plotting](https://hist.readthedocs.io/en/latest/user-guide/notebooks/Plots.html).
```
import hist
```
Instead of predefined 1D, 2D, and 3D histogram classes, hist builds a histogram from a series of [axes of various types](https://hist.readthedocs.io/en/latest/user-guide/axes.html).
```
vertexhist = hist.Hist(
hist.axis.Regular(300, -1, 1, label="x"),
hist.axis.Regular(300, -1, 1, label="y"),
hist.axis.Regular(20, -200, 200, label="z"),
hist.axis.Variable([-1030000, -500000, 500000] + list(range(1000000, 1030000, 2000)), label="ranking"),
)
```
The `fill` method accepts arrays. ([ak.flatten](https://awkward-array.readthedocs.io/en/latest/_auto/ak.flatten.html) reduces the nested lists into one-dimensional arrays.)
```
# get primary vertex positions from the PicoDST file
vertex_data = picodst.arrays(filter_name=["*mPrimaryVertex[XYZ]", "*mRanking"])
vertexhist.fill(
ak.flatten(vertex_data["Event.mPrimaryVertexX"]),
ak.flatten(vertex_data["Event.mPrimaryVertexY"]),
ak.flatten(vertex_data["Event.mPrimaryVertexZ"]),
ak.flatten(vertex_data["Event.mRanking"]),
)
```
Now the binned data can be sliced in some dimensions and summed over in others.
```
vertexhist[:, :, sum, sum].plot2d_full();
```
`bh.loc` finds the bin index corresponding to a coordinate value in the data. You can use it to zoom in.
```
import boost_histogram as bh
vertexhist[bh.loc(-0.25):bh.loc(0.25), bh.loc(-0.25):bh.loc(0.25), sum, sum].plot2d_full();
vertexhist[sum, sum, :, sum].plot();
```
The `mRanking` quantity is irregularly binned because it has a lot of detail above 1000000, but featureless peaks at 0 and -1000000.
```
vertexhist[sum, sum, sum, :].plot();
vertexhist[sum, sum, sum, bh.loc(1000000):].plot();
```
With a large enough (or cleverly binned) histogram, you can do an exploratory analysis on aggregated data.
Here, we investigate the shape of the $x$-$y$ primary vertex projection for `mRanking` above and below 1000000.
The feasibility of this analysis depends on available memory and the number of bins, but _not_ the number of events.
```
vertexhist[bh.loc(-0.25):bh.loc(0.25), bh.loc(-0.25):bh.loc(0.25), sum, bh.loc(1000000)::sum].plot2d_full();
vertexhist[bh.loc(-0.25):bh.loc(0.25), bh.loc(-0.25):bh.loc(0.25), sum, :bh.loc(1000000):sum].plot2d_full();
```
### Particle
Particle is like a searchable Particle Data Group (PDG) booklet in Python.
```
import particle
[particle.Particle.from_string("p~")]
particle.Particle.from_string("p~")
from hepunits import GeV
z_boson = particle.Particle.from_string("Z0")
z_boson.mass / GeV, z_boson.width / GeV
[particle.Particle.from_pdgid(111)]
particle.Particle.findall(lambda p: p.pdgid.is_meson and p.pdgid.has_strange and p.pdgid.has_charm)
from hepunits import mm
particle.Particle.findall(lambda p: p.pdgid.is_meson and p.ctau > 1 * mm)
```
## Exercises: translating a C++ analysis into array-oriented Python
In this section, we will plot the mass of e⁺e⁻ pairs from the PicoDST file, using a C++ framework ([star-picodst-reference](star-picodst-reference)) as a guide.
The solutions are hidden. Try to solve each exercise on your own (by filling in the "`???`") before comparing with the solutions we've provided.
### C++ version of the analysis
The analysis we want to reproduce is the following:
```c++
// histogram to fill
TH1F *hM = new TH1F("hM", "e+e- invariant mass (GeV/c)", 120, 0, 120);
// get a reader and initialize it
const Char_t *inFile = "pythia_ppZee_run17emb.picoDst.root";
StPicoDstReader* picoReader = new StPicoDstReader(inFile);
picoReader->Init();
Long64_t events2read = picoReader->chain()->GetEntries();
// loop over events
for (Long64_t iEvent = 0; iEvent < events2read; iEvent++) {
Bool_t readEvent = picoReader->readPicoEvent(iEvent);
StPicoDst *dst = picoReader->picoDst();
Int_t nTracks = dst->numberOfTracks();
// for collecting good tracks
std::vector<StPicoTrack*> goodTracks;
// loop over tracks
for (Int_t iTrk = 0; iTrk < nTracks; iTrk++) {
StPicoTrack *picoTrack = dst->track(iTrk);
// track quality cuts
if (!picoTrack->isPrimary()) continue;
if (picoTrack->nHitsFit() / picoTrack->nHitsMax() < 0.2) continue;
// track -> associated electromagnetic calorimeter energy
if (picoTrack->isBemcTrack()) {
StPicoBEmcPidTraits *trait = dst->bemcPidTraits(
picoTrack->bemcPidTraitsIndex()
);
// matched energy cut
double pOverE = picoTrack->pMom().Mag() / trait->btowE();
if (pOverE < 0.1) continue;
// this is a good track
goodTracks.push_back(picoTrack);
}
}
// loop over good pairs with opposite-sign charge and fill the invariant mass plot
for (UInt_t i = 0; i < goodTracks.size(); i++) {
for (UInt_t j = i + 1; j < goodTracks.size(); j++) {
// make Lorentz vectors with electron mass
TLorentzVector one(goodTracks[i]->pMom(), 0.0005109989461);
TLorentzVector two(goodTracks[j]->pMom(), 0.0005109989461);
// opposite-sign charge cut
if (goodTracks[i]->charge() != goodTracks[j]->charge()) {
// fill the histogram
hM->Fill((one + two).M());
}
}
}
}
```
### Reading the data
As before, we start by reading the file.
```
import uproot
import awkward as ak
import numpy as np
picodst = uproot.open("https://pivarski-princeton.s3.amazonaws.com/pythia_ppZee_run17emb.picoDst.root:PicoDst")
picodst
```
By examining the C++ code, we see that we need to compute
```c++
StPicoTrack::isPrimary
StPicoTrack::nHitsFit
StPicoTrack::nHitsMax
StPicoTrack::isBemcTrack
StPicoTrack::bemcPidTraitsIndex
StPicoTrack::pMom
StPicoBEmcPidTraits::btowE
```
From [star-picodst-reference/StPicoTrack.h](star-picodst-reference/StPicoTrack.h) and [star-picodst-reference/StPicoBEmcPidTraits.h](star-picodst-reference/StPicoBEmcPidTraits.h), we learn that these are derived from the following TBranches:
* `mPMomentumX`
* `mPMomentumY`
* `mPMomentumZ`
* `mNHitsFit`
* `mNHitsMax`
* `mBEmcPidTraitsIndex`
* `mBtowE`
#### **Exercise 1:** Extract these TBranches as arrays, using the same names as variable names.
**Hint:** To avoid long download times while you experiment, set `entry_stop=10` in your calls to [TTree.arrays](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TTree.TTree.html#arrays) or [TBranch.array](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TBranch.TBranch.html#array). Just be sure to remove it to get all entries in the end.
From the location of Binder's servers on the Internet, it takes about 1 minute to read. If it's taking 2 or more minutes, you're probably downloading more than you intended.
```
mPMomentumX = ???
mPMomentumY = ???
mPMomentumZ = ???
mNHitsFit = ???
mNHitsMax = ???
mBEmcPidTraitsIndex = ???
mBtowE = ???
```
The types of these arrays should be
```
(
mPMomentumX.type,
mPMomentumY.type,
mPMomentumZ.type,
mNHitsFit.type,
mNHitsMax.type,
mBEmcPidTraitsIndex.type,
mBtowE.type,
)
```
<details style="border: dashed 1px black; padding: 10px">
<summary><b>Solutions</b> (no peeking!)</summary>
<br>
There are several ways to get these data; here are two.
**(1)** You could navigate to each TBranch and ask for its `array`.
```python
mPMomentumX = picodst["Track.mPMomentumX"].array()
mPMomentumY = picodst["Track.mPMomentumY"].array()
mPMomentumZ = picodst["Track.mPMomentumZ"].array()
mNHitsFit = picodst["Track.mNHitsFit"].array()
mNHitsMax = picodst["Track.mNHitsMax"].array()
mBEmcPidTraitsIndex = picodst["Track.mBEmcPidTraitsIndex"].array()
mBtowE = picodst["EmcPidTraits.mBtowE"].array() / 1000
```
<br>
**(2)** You could ask the TTree for its `arrays`, with a filter to keep from reading everything over the network, then extract each field of the resulting record array. (There's a slight performance advantage to this method, since it only has to make 1 request across the network, rather than 7, _if_ you filter the TBranches to read. If you don't, _it will read all branches_, which will take much longer.)
```python
single_array = picodst.arrays(filter_name=[
"Track.mPMomentum[XYZ]",
"Track.mNHits*",
"Track.mBEmcPidTraitsIndex",
"EmcPidTraits.mBtowE",
])
mPMomentumX = single_array["Track.mPMomentumX"]
mPMomentumY = single_array["Track.mPMomentumY"]
mPMomentumZ = single_array["Track.mPMomentumZ"]
mNHitsFit = single_array["Track.mNHitsFit"]
mNHitsMax = single_array["Track.mNHitsMax"]
mBEmcPidTraitsIndex = single_array["Track.mBEmcPidTraitsIndex"]
mBtowE = single_array["EmcPidTraits.mBtowE"] / 1000
```
<br>
Either way, be sure to divide the `mBtowE` branch by 1000, as it is in the C++ code.
</details>
### Making momentum objects with charges
The C++ code uses ROOT [TVector3](https://root.cern.ch/doc/master/classTVector3.html) and [TLorentzVector](https://root.cern.ch/doc/master/classTLorentzVector.html) objects for vector calculations. We'll use the (array-oriented) Vector library.
The definitions of `pMom` and `charge` in [star-picodst-reference/StPicoTrack.h](star-picodst-reference/StPicoTrack.h) are
```c++
TVector3 pMom() const { return TVector3(mPMomentumX, mPMomentumY, mPMomentumZ); }
Short_t charge() const { return (mNHitsFit > 0) ? 1 : -1; }
```
(Yes, the `charge` bit is hidden inside the `mNHitsFit` integer.)
```
import vector
vector.register_awkward()
import particle, hepunits
electron_mass = particle.Particle.find("e-").mass / hepunits.GeV
electron_mass
```
#### **Exercise 2a:** First, make an array of `charge` as positive and negative `1` integers. You may use [ak.where](https://awkward-array.readthedocs.io/en/latest/_auto/ak.where.html) or clever arithmetic.
```
charge = ???
```
The type should be:
```
charge.type
```
And the first and last values should be:
```
charge
```
#### **Exercise 2b:** Next, use [ak.zip](https://awkward-array.readthedocs.io/en/latest/_auto/ak.zip.html) to combine `mPMomentumX`, `mPMomentumY`, `mPMomentumZ`, `electron_mass`, and `charge` into a single array of type
```python
8004 * var * {"px": float32, "py": float32, "pz": float32, "M": float64, "charge": int64}
```
It is very important that the type is lists of records (`var * {"px": float32, ...}`), not records of lists (`{"px": var * float32, ...}`).
```
record_array = ???
```
The second record in the first event should be:
```
record_array[0, 1].tolist()
```
#### **Exercise 2c:** Finally, search Awkward Array's [reference documentation](https://awkward-array.readthedocs.io/) for a way to add the `"Momentum4D"` name to these records to turn them into Lorentz vectors.
```
pMom = ???
```
The type of `pMom` should be:
```
pMom.type
```
Lorentz vector operations don't require the `"charge"`, but it will be convenient to keep that in the same package. The Vector library will ignore it.
<details style="border: dashed 1px black; padding: 10px">
<summary><b>Solutions</b> (no peeking!)</summary>
<br>
The charge can be computed using:
```python
charge = ak.where(mNHitsFit > 0, 1, -1)
```
<br>
or "clever arithmetic" (booleans in a numerical expression become `false → 0`, `true → 1`):
```python
charge = (mNHitsFit > 0) * 2 - 1
```
<br>
Making the record array is a direct application of [ak.zip](https://awkward-array.readthedocs.io/en/latest/_auto/ak.zip.html):
```python
record_array = ak.zip(
{"px": mPMomentumX, "py": mPMomentumY, "pz": mPMomentumZ, "M": electron_mass, "charge": charge}
)
```
<br>
Combining the variable-length lists of `mPMomentumX`, `mPMomentumY`, `mPMomentumZ`, and `charge` is just what [ak.zip](https://awkward-array.readthedocs.io/en/latest/_auto/ak.zip.html) does (and if those lists had different lengths, it would raise an error). Using `depth_limit=1` or the [ak.Array](https://awkward-array.readthedocs.io/en/latest/_auto/ak.Array.html) constructor instead would produce the wrong type: records of lists, rather than lists of records.
Also, the constant `electron_mass` does not need special handling. Constants and lower-dimension arrays are [broadcasted](https://awkward-array.readthedocs.io/en/latest/_auto/ak.broadcast_arrays.html) to the same shape as larger-dimension arrays when used in the same function. (This is similar to, but an extension of, NumPy's [concept of broadcasting](https://numpy.org/doc/stable/user/basics.broadcasting.html).)
Finally, to add the `"Momentum4D"` name to all the records, you could use [ak.zip](https://awkward-array.readthedocs.io/en/latest/_auto/ak.zip.html) again, as it has a `with_name` argument:
```python
pMom = ak.zip(
{"px": mPMomentumX, "py": mPMomentumY, "pz": mPMomentumZ, "M": electron_mass, "charge": charge},
with_name="Momentum4D",
)
```
<br>
Or pass the already-built `record_array` into [ak.with_name](https://awkward-array.readthedocs.io/en/latest/_auto/ak.with_name.html):
```python
pMom = ak.with_name(record_array, "Momentum4D")
```
<br>
Or pass the already-built `record_array` into the [ak.Array](https://awkward-array.readthedocs.io/en/latest/_auto/ak.Array.html) constructor with a `with_name` argument:
```python
pMom = ak.Array(record_array, with_name="Momentum4D")
```
</details>
### Computing track cuts
In the C++, the following cuts are applied to the tracks:
```c++
if (!picoTrack->isPrimary()) continue;
if (picoTrack->nHitsFit() / picoTrack->nHitsMax() < 0.2) continue;
if (picoTrack->isBemcTrack()) {
// ...
}
```
Some of the cuts in C++ are applied by jumping to the next loop iteration with `continue` (a dangerous practice, in my opinion) while another is in a nested `if` statement. Note that the `continue` conditions describe the _opposite_ of a good track.
The quantities used in the cuts are defined in [star-picodst-reference/StPicoTrack.h](star-picodst-reference/StPicoTrack.h):
```c++
Bool_t isPrimary() const { return ( pMom().Mag()>0 ); }
TVector3 pMom() const { return TVector3(mPMomentumX, mPMomentumY, mPMomentumZ); }
Int_t nHitsFit() const { return (mNHitsFit > 0) ? (Int_t)mNHitsFit : (Int_t)(-1 * mNHitsFit); }
Int_t nHitsMax() const { return (Int_t)mNHitsMax; }
Bool_t isBemcTrack() const { return (mBEmcPidTraitsIndex<0) ? false : true; }
```
#### **Exercise 3:** Convert these cuts into a [boolean array slice](https://awkward-array.readthedocs.io/en/latest/_auto/ak.Array.html#filtering).
```
isPrimary = ???
nHitsFit = ???
nHitsMax = ???
isBemcTrack = ???
track_quality_cuts = ???
```
The type of `track_quality_cuts` should be:
```
track_quality_cuts.type
```
And the number of passing tracks in the first and last events should be:
```
np.count_nonzero(track_quality_cuts, axis=1)
```
<details style="border: dashed 1px black; padding: 10px">
<summary><b>Solutions</b> (no peeking!)</summary>
<br>
There are several equivalent ways to compute `isPrimary`:
```python
isPrimary = pMom.mag > 0
```
<br>
and
```python
isPrimary = (abs(mPMomentumX) > 0) | (abs(mPMomentumY) > 0) | (abs(mPMomentumZ) > 0)
```
<br>
and
```python
isPrimary = mPMomentumX**2 + mPMomentumY**2 + mPMomentumZ**2 > 0 # or with np.sqrt
```
<br>
The most straightforward way to compute `nHitsFit` is:
```python
nHitsFit = abs(mNHitsFit)
```
<br>
but you could use [ak.where](https://awkward-array.readthedocs.io/en/latest/_auto/ak.where.html)/[np.where](https://numpy.org/doc/stable/reference/generated/numpy.where.html) to make it look more like the C++:
```python
nHitsFit = np.where(mNHitsFit > 0, mNHitsFit, -1 * mNHitsFit)
```
<br>
`nHitsMax` is exactly equal to `mNHitsMax`, and `isBemcTrack` is:
```python
isBemcTrack = mBEmcPidTraitsIndex >= 0 # be sure to get the inequality right
```
<br>
or
```python
isBemcTrack = np.where(mBEmcPidTraitsIndex < 0, False, True)
```
<br>
to make it look more like the C++.
Finally, `track_quality_cuts` is a logical-AND of three selections:
```python
track_quality_cuts = isPrimary & (nHitsFit / nHitsMax >= 0.2) & isBemcTrack
```
<br>
Be sure to get the inequality right: `continue` _throws away_ bad tracks, but we want an expression that will _keep_ good tracks.
</details>
### Matching tracks to electromagnetic showers
The final track quality cut requires us to match the track with its corresponding shower. Tracks and showers have different multiplicities.
```
ak.num(mPMomentumX), ak.num(mBtowE)
```
The PicoDst file provides us with an index for each track that is the position of the corresponding shower in the showers array. It is `-1` when there is no corresponding shower.
```
mBEmcPidTraitsIndex
```
#### **Exercise 4:** Filter `mBEmcPidTraitsIndex` with `track_quality_cuts` and make an array of shower energy `mBtowE` for each quality track.
```
quality_mBtowE = ???
```
The type of `quality_mBtowE` should be:
```
quality_mBtowE.type
```
Its first and last values should be:
```
quality_mBtowE
```
And it should have as many values in each event as there are "`true`" booleans in `track_quality_cuts`:
```
ak.num(quality_mBtowE)
np.all(ak.num(quality_mBtowE) == np.count_nonzero(track_quality_cuts, axis=1))
```
<details style="border: dashed 1px black; padding: 10px">
<summary><b>Solutions</b> (no peeking!)</summary>
<br>
The answer could be written in one line:
```python
quality_mBtowE = mBtowE[mBEmcPidTraitsIndex[track_quality_cuts]]
```
<br>
The first part, `mBEmcPidTraitsIndex[track_quality_cuts]`, applies the track quality cuts to the `mBEmcPidTraitsIndex` so that there are no more `-1` values in it. The remaining array of lists of integers is exactly what is required to pick energy values from `mBtowE` in lists of the right lengths and orders.
Naturally, you could write it in two lines (or as many as you find easy to read).
If you know about [ak.mask](https://awkward-array.readthedocs.io/en/latest/_auto/ak.mask.html), you might have tried masking `mBEmcPidTraitsIndex` instead of filtering it:
```python
quality_mBEmcPidTraitsIndex = mBEmcPidTraitsIndex.mask[track_quality_cuts]
quality_mBtowE = mBtowE[quality_mBEmcPidTraitsIndex]
```
<br>
Instead of changing the lengths of the lists by dropping bad tracks, this would replace them with missing value placeholders ("`None`"). _This is not wrong,_ and it's a good alternative to the overall problem because it simplifies the process of filtering filtered data. (The placeholders keep the arrays the same lengths, so cuts can be applied in any order.)
However, it changes how the next step would have to be handled, and you'd eventually have to use [ak.is_none](https://awkward-array.readthedocs.io/en/latest/_auto/ak.is_none.html) to remove the missing values. For the sake of this walkthrough, to keep everyone on the same page, let's not do that.
</details>
### Applying the energy cut and making track-pairs
The last quality cut requires total track momentum divided by shower energy to be at least 0.1.
We can get the total track momentum from `pMom.mag` (3D magnitude of 3D or 4D vectors), but apply the quality cuts to it so that it has the same length as `quality_mBtowE` (which already has quality cuts applied).
```
pMom.mag
quality_total_momentum = pMom[track_quality_cuts].mag
quality_total_momentum
quality_pOverE = quality_total_momentum / quality_mBtowE
quality_pOverE
```
(You may see a warning when calculating the above; some values of the denominator are zero. It's possible to selectively suppress such messages with NumPy's [np.errstate](https://numpy.org/doc/stable/reference/generated/numpy.errstate.html).)
```
quality_pOverE_cut = (quality_pOverE >= 0.1)
np.count_nonzero(quality_pOverE_cut, axis=1)
```
An array with all cuts applied, the equivalent of `goodTracks` in the C++, is:
```
goodTracks = pMom[track_quality_cuts][quality_pOverE_cut]
goodTracks
```
(As mentioned in one of the solutions, above, [ak.mask](https://awkward-array.readthedocs.io/en/latest/_auto/ak.mask.html) would allow `track_quality_cuts` and `quality_pOverE_cut` to be applied in either order, at the expense of having to remove the placeholder "`None`" values with [ak.is_none](https://awkward-array.readthedocs.io/en/latest/_auto/ak.is_none.html). Extra credit if you can rework all of the above to use this technique.)
#### **Exercise 5a:** Use [ak.combinations](https://awkward-array.readthedocs.io/en/latest/_auto/ak.combinations.html) to make all pairs of good tracks, per event, the equivalent of this code:
```c++
for (UInt_t i = 0; i < goodTracks.size(); i++) {
for (UInt_t j = i + 1; j < goodTracks.size(); j++) {
// make Lorentz vectors with electron mass
TLorentzVector one(goodTracks[i]->pMom(), 0.0005109989461);
TLorentzVector two(goodTracks[j]->pMom(), 0.0005109989461);
```
```
pairs = ???
```
The type of `pairs` should be lists of 2-tuples of Momentum4D:
```
pairs.type
```
And the number of such pairs in the first and last events should be:
```
ak.num(pairs)
```
Note that this is not the same as the number of good tracks:
```
ak.num(goodTracks)
```
In particular, 3 good tracks → 3 pairs, 2 good tracks → 1 pair, and 4 good tracks → 6 pairs ($n$ choose $2$ = $n(n - 1)/2$ for $n$ good tracks).
#### **Exercise 5b:** Search Awkward Array's [reference documentation](https://awkward-array.readthedocs.io/) for a way to get an array named `one` with the first of each pair and an array named `two` with the second of each pair, as two arrays with equal-length lists.
```
one, two = ???
```
The types of `one` and `two` should be:
```
one.type, two.type
```
And the lengths of their lists should be the same as `ak.num(goodTracks)` (above).
```
ak.num(one), ak.num(two)
```
**Hint:** Remember how we _combined_ arrays of lists of the same lengths into `record_array`? This is the opposite of that.
<details style="border: dashed 1px black; padding: 10px">
<summary><b>Solutions</b> (no peeking!)</summary>
<br>
The first step is a direct application of the [ak.combinations](https://awkward-array.readthedocs.io/en/latest/_auto/ak.combinations.html) function:
```python
pairs = ak.combinations(goodTracks, 2)
```
<br>
The default `axis` is `axis=1`, which means to find all combinations in each entry. (Not all combinations of entries, which would be `axis=0`!)
The second step could be a direct application of [ak.unzip](https://awkward-array.readthedocs.io/en/latest/_auto/ak.unzip.html):
```python
one, two = ak.unzip(pairs)
```
<br>
But tuples, like the 2-tuples in these `pairs`, are just records with unnamed fields. We can extract record fields with string-valued slices, and tuple fields are numbered by position, so the slices that extract the first and second of all tuple fields are `"0"` (a string!) and `"1"` (a string!).
```python
one, two = pairs["0"], pairs["1"]
```
<br>
That's prone to misunderstanding: the numbers really must be inside strings. Perhaps a safer way to do it is:
```python
one, two = pairs.slot0, pairs.slot1
```
<br>
which works up to `slot9`. Whereas these methods extract one tuple-field at a time, [ak.unzip](https://awkward-array.readthedocs.io/en/latest/_auto/ak.unzip.html) extracts all fields (of any tuple _or_ record).
</details>
### Selecting opposite-sign charges among those pairs
The opposite-sign charge cut is not a track quality cut, since it depends on a relationship between two tracks.
Now we have arrays `one` and `two` representing the left and right halves of those pairs, and we can define and apply the cut.
#### **Exercise 6a:** Make an array of booleans that are `true` for opposite-sign charges and `false` for same-sign charges.
```
opposite_charge_cut = ???
```
The type should be:
```
opposite_charge_cut.type
```
And the number of `true` values in the first and last events should be:
```
ak.count_nonzero(opposite_charge_cut, axis=1)
```
#### **Exercise 6b:** Apply that cut to `one` and `two`.
```
quality_one = ???
quality_two = ???
```
The types should remain:
```
quality_one.type, quality_two.type
```
And the lengths of the lists should become:
```
ak.num(quality_one), ak.num(quality_two)
```
<details style="border: dashed 1px black; padding: 10px">
<summary><b>Solutions</b> (no peeking!)</summary>
<br>
I've seen three different ways people calculate opposite-sign charges. I think this is the simplest:
```python
opposite_charge_cut = one.charge != two.charge
```
<br>
This one is also intuitive, since the Z boson that decays to two electrons has net zero charge:
```python
opposite_charge_cut = one.charge + two.charge == 0
```
<br>
This one is odd, but I see it quite a lot:
```python
opposite_charge_cut = one.charge * two.charge == -1
```
<br>
As for applying the cut, the pattern should be getting familiar:
```python
quality_one = one[opposite_charge_cut]
quality_two = two[opposite_charge_cut]
```
</details>
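One caveat: the product form assumes the charges are exactly ±1 (it would fail for, say, doubly charged objects). For unit charges, all three formulations agree, which a quick plain-Python check confirms:

```python
# For unit charges q1, q2 in {-1, +1}, the three opposite-sign tests coincide.
for q1 in (-1, +1):
    for q2 in (-1, +1):
        not_equal = (q1 != q2)
        sum_zero = (q1 + q2 == 0)
        product_minus_one = (q1 * q2 == -1)
        assert not_equal == sum_zero == product_minus_one
```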
### Computing invariant mass of the track pairs
Up to this point, the only Lorentz vector method that we used was `mag`. Now we want to add the left and right halves of each pair and compute their invariant mass.
#### **Exercise 7:** Check the [Vector documentation](https://vector.readthedocs.io/en/latest/usage/intro.html) and figure out how to do that.
```
invariant_mass = ???
```
The type should be:
```
invariant_mass.type
```
The first and last values should be:
```
invariant_mass
```
And the lengths of each list in the first and last events should be:
```
ak.num(invariant_mass)
```
<details style="border: dashed 1px black; padding: 10px">
<summary><b>Solutions</b> (no peeking!)</summary>
<br>
It could look (almost) exactly like the C++:
```python
invariant_mass = (quality_one + quality_two).M
```
<br>
But I prefer:
```python
invariant_mass = (quality_one + quality_two).mass
```
<br>
The Vector library has only one way to "spell" this quantity for purely geometric vectors, "`tau`" (for proper time), but when vectors are labeled as "Momentum", they get synonyms: "`mass`", "`M`", "`m`".
It's worth noting that `(quality_one + quality_two)` is a new array of vectors, and therefore the fields Vector doesn't recognize are lost. The type of `(quality_one + quality_two)` is:
```
8004 * var * Momentum4D["x": float32, "y": float32, "z": float32, "tau": float64]
```
<br>
with no `"charge"`. Vector does not add the charges because adding would not be the correct thing to do with any unrecognized field. (You might have named it `"q"` or `"Q"`.) Of course, you can add it yourself:
```python
quality_one.charge + quality_two.charge
```
<br>
and insert it into a new object. This works, for instance:
```python
z_bosons = (quality_one + quality_two)
z_bosons["charge"] = quality_one.charge + quality_two.charge
```
<br>
and `z_bosons` has type
```
8004 * var * Momentum4D["x": float32, "y": float32, "z": float32, "tau": float64, "charge": int64]
```
</details>
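Numerically, `.mass` on a sum of Momentum4D vectors evaluates $M = \sqrt{(E_1+E_2)^2 - |\vec p_1 + \vec p_2|^2}$ (natural units). A scalar sketch with toy numbers, in the massless-electron approximation:

```python
import math

def two_body_mass(E1, p1, E2, p2):
    # M^2 = (E1 + E2)^2 - |p1 + p2|^2, with p1, p2 as (px, py, pz) tuples
    E = E1 + E2
    px, py, pz = (p1[i] + p2[i] for i in range(3))
    return math.sqrt(E**2 - (px**2 + py**2 + pz**2))

# Two back-to-back 45 GeV tracks reconstruct to a 90 GeV parent:
M = two_body_mass(45.0, (45.0, 0.0, 0.0), 45.0, (-45.0, 0.0, 0.0))
```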
### Plotting the invariant mass
Constructing a histogram, which was the first step in C++:
```c++
TH1F *hM = new TH1F("hM", "e+e- invariant mass (GeV/c)", 120, 0, 120);
```
is the last step here.
```
import hist
```
#### **Exercise 8:** Check the [hist documentation](https://hist.readthedocs.io/en/latest/user-guide/quickstart.html) and define a one-dimensional histogram with 120 regularly-spaced bins from 0 to 120 GeV. Then fill it with the `invariant_mass` data.
**Hint:** The `invariant_mass` array contains _lists_ of numbers, but histograms present a distribution of _numbers_. Search Awkward Array's [reference documentation](https://awkward-array.readthedocs.io/) for a way to flatten these lists into a one-dimensional array, and experiment with that step _before_ attempting to fill the histogram. (The error messages will be easier to understand.)
```
flat_invariant_mass = ???
hM = ???
hM.fill(???)
```
The flattened invariant mass should look like this:
```
flat_invariant_mass
```
Note that the type does not have any "`var`" in it.
Whenever a `hist.Hist` is the return value of an expression in Jupyter (such as after the `fill`), you'll see a mini-plot to aid in interactive analysis. But professional-quality plots are made through Matplotlib:
```
hM.plot();
```
By importing Matplotlib, we can configure the plot, mix it with other plots, tweak how it looks, etc.
```
import matplotlib.pyplot as plt
hM.plot()
plt.yscale("log")
```
**Physics note:** The broad peak at 85 GeV _is_ the Z boson (in this Monte Carlo sample). It's offset from the 91 GeV Z mass and has a width of 14 GeV due to tracking resolution for these high-momentum tracks (roughly 40 GeV per track).
<details style="border: dashed 1px black; padding: 10px">
<summary><b>Solutions</b> (no peeking!)</summary>
<br>
The `invariant_mass` array can be flattened with [ak.flatten](https://awkward-array.readthedocs.io/en/latest/_auto/ak.flatten.html) or [ak.ravel](https://awkward-array.readthedocs.io/en/latest/_auto/ak.ravel.html)/[np.ravel](https://numpy.org/doc/stable/reference/generated/numpy.ravel.html). The [ak.flatten](https://awkward-array.readthedocs.io/en/latest/_auto/ak.flatten.html) function only flattens one dimension (by default, `axis=1`), which is all we need in this case. "Ravel" is NumPy's spelling for "flatten all dimensions."
```python
flat_invariant_mass = ak.flatten(invariant_mass)
```
<br>
or
```python
flat_invariant_mass = np.ravel(invariant_mass)
```
<br>
The reason you have to do this manually is because it's an information-losing operation: [there are many ways](https://awkward-array.org/how-to-restructure-flatten.html) to get a dimensionless set of values from nested data, and in some circumstances, you might have wanted one of the other ones. For instance, maybe you want to ensure that you only plot one Z candidate per event, and you have some criteria for selecting the "best" one. This is where you would put that alternative.
As for constructing the histogram and filling it:
```python
hM = hist.Hist(hist.axis.Regular(120, 0, 120, label="e+e- invariant mass (GeV/c)"))
hM.fill(flat_invariant_mass)
```
<br>
Be sure to use hist's array-oriented `fill` method. Iterating over the values in the array (or even the lists within an array of lists) would be hundreds of times slower than filling in one call.
Calling `fill` multiple times to accumulate batches, however, is fine: the important thing is to give it a large array with each call, so that most of its time can be spent in its compiled histogram-fill loop, not in Python loops.
</details>
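The same additivity can be seen with plain NumPy histograms (a sketch with synthetic masses; `hist.Hist.fill` accumulates the same way):

```python
import numpy as np

rng = np.random.default_rng(42)
values = rng.normal(91.0, 14.0, size=10_000)  # synthetic "masses"
edges = np.linspace(0.0, 120.0, 121)          # 120 regular bins from 0 to 120

# Filling once...
counts_once, _ = np.histogram(values, bins=edges)

# ...gives the same histogram as filling in several batches.
counts_batched = np.zeros(120, dtype=np.int64)
for batch in np.array_split(values, 7):
    counts_batched += np.histogram(batch, bins=edges)[0]
```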
### Retrospective
(Spoilers; keep hidden until you're done.)
This might have seemed like a lot of steps to produce a simple invariant mass plot, but the intention of the exercises above was to walk you through it slowly.
A speedrun would look more like this:
```
import awkward as ak
import numpy as np
import matplotlib.pyplot as plt
import uproot
import particle
import hepunits
import hist
import vector
vector.register_awkward()
picodst = uproot.open("https://pivarski-princeton.s3.amazonaws.com/pythia_ppZee_run17emb.picoDst.root:PicoDst")
# make an array of track momentum vectors
pMom = ak.zip(dict(zip(["px", "py", "pz"], picodst.arrays(filter_name="Track.mPMomentum[XYZ]", how=tuple))), with_name="Momentum4D")
pMom["M"] = particle.Particle.find("e-").mass / hepunits.GeV
# get all the other arrays we need
mNHitsFit, mNHitsMax, mBEmcPidTraitsIndex, mBtowE = \
picodst.arrays(filter_name=["Track.mNHitsFit", "Track.mNHitsMax", "Track.mBEmcPidTraitsIndex", "EmcPidTraits.mBtowE"], how=tuple)
mBtowE = mBtowE / 1000
# add charge to the momentum vector
pMom["charge"] = (mNHitsFit > 0) * 2 - 1
# compute track quality cuts
isPrimary = pMom.mag > 0
isBemcTrack = mBEmcPidTraitsIndex >= 0
track_quality_cuts = isPrimary & (abs(mNHitsFit) / mNHitsMax >= 0.2) & isBemcTrack
# find shower energies for quality tracks
quality_mBtowE = mBtowE[mBEmcPidTraitsIndex[track_quality_cuts]]
# compute the momentum-over-energy cut (some denominators are zero)
with np.errstate(divide="ignore"):
    quality_pOverE_cut = (pMom[track_quality_cuts].mag / quality_mBtowE >= 0.1)
# apply all track quality cuts, including momentum-over-energy
goodTracks = pMom[track_quality_cuts][quality_pOverE_cut]
# form pairs of quality tracks and apply an opposite-sign charge constraint
pairs = ak.combinations(goodTracks, 2)
one, two = ak.unzip(pairs)
quality_one, quality_two = ak.unzip(pairs[one.charge != two.charge])
# make the plot
hM = hist.Hist(hist.axis.Regular(120, 0, 120, label="e+e- invariant mass (GeV/c)"))
hM.fill(ak.flatten((quality_one + quality_two).mass))
hM.plot()
plt.yscale("log")
```
#### **Final words about array-oriented data analysis**
The key thing about this interface is the _order_ in which you do things.
1. Scan through the TBranch names to see what you can play with.
2. Get some promising-looking arrays. If the dataset is big or remote, use `entry_stop=small_number` to fetch only as much as you need to investigate.
3. Compute _one quantity_ on the _entire array(s)_. Then look at a few of its values or plot it.
4. Decide whether that was what you wanted to compute or cut. If not, go back to 2.
5. When you've built up a final result (on a small dataset), clean up the notebook or copy it to a non-notebook script.
6. Put the computation code in an [uproot.iterate](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TBranch.iterate.html) loop or a parallel process that writes and adds up histograms.
7. Parallelize, collect the histograms, beautify them, publish.
Most importantly, each computation _step_ applies to _entire_ (possibly small) datasets, so you can look at/plot what you've computed before you decide what to compute next.
Imperative code forces you to put all steps into a loop; you have to run the whole loop to see any of the results. You can still do iterative data analysis, but the turn-around time to identify and fix mistakes is longer.
#### **I'm not just the president; I'm also a client**
I experienced this first-hand (again) while preparing this tutorial. There were some things I didn't understand about STAR's detectors and I didn't believe the final result (I thought the Z peak was fake; sculpted by cuts), so I furiously plotted everything versus everything in this TTree until I came to understand that the Z peak was correct after all. (Dmitry Kalinkin helped: thanks!)
Offline, I have a ridiculously messy `Untitled.ipynb` with all those plots, mostly in the form
```python
plt.hist(ak.flatten(some_quantity), bins=100); # maybe add range=(low, high)
plt.yscale("log")
```
for brevity. Only when _I_ felt _I_ understood what was going on could I clean up all of that mess into something coherent, which is the exercises above.
```
!jupyter nbconvert eesardocs.ipynb --to slides --post serve
import warnings
# these are innocuous but irritating
warnings.filterwarnings("ignore", message="numpy.dtype size changed")
warnings.filterwarnings("ignore", message="numpy.ufunc size changed")
```
# Change Detection with Sentinel-1 PolSAR imagery on the GEE
### Mort Canty
mort.canty@gmail.com
### Joshua Rutkowski, Irmgard Niemeyer
Jülich Forschungszentrum, Germany
### Allan A. Nielsen, Knut Conradsen, Henning Skriver
Technical University of Denmark
### September 2018
## Software Installation
Pull and/or run the container with

    docker run -d -p 443:8888 --name=eesar mort/eesardocker

or, if you are on a Raspberry Pi,

    docker run -d -p 443:8888 --name=eesar mort/rpi-eesardocker

Point your browser to http://localhost:443 to see the Jupyter notebook home page.
Open the notebook

    interface.ipynb

Stop the container with

    docker stop eesar

Re-start with

    docker start eesar
### The GEE Sentinel-1 Archive
https://explorer.earthengine.google.com/#detail/COPERNICUS%2FS1_GRD
## Background
### Vector and matrix representations
A fully polarimetric SAR measures a
$2\times 2$ _scattering matrix_ $S$ at each resolution cell on the ground.
The scattering matrix relates the incident and the backscattered
electric fields $E^i$ and $E^b$ according to
$$
\pmatrix{E_h^b \cr E_v^b}
=\pmatrix{S_{hh} & S_{hv}\cr S_{vh} & S_{vv}}\pmatrix{E_h^i \cr E_v^i}.
$$
The per-pixel polarimetric information in the scattering matrix $S$, under the assumption
of reciprocity ($S_{hv} = S_{vh}$), can then be expressed as a three-component complex vector
$$
s = \pmatrix{S_{hh}\cr \sqrt{2}S_{hv}\cr S_{vv}},
$$
The total intensity is referred to as the _span_ and is the complex inner product of the vector $s$,
$$
{\rm span} = s^\top s = |S_{hh}|^2 + 2|S_{hv}|^2 + |S_{vv}|^2.
$$
The polarimetric signal can also be represented by taking the complex outer product of $s$ with itself:
$$
C = s s^\top = \pmatrix{ |S_{hh}|^2 & \sqrt{2}S_{hh}S_{hv}^* & S_{hh}S_{vv}^* \cr
\sqrt{2}S_{hv}S_{hh}^* & 2|S_{hv}|^2 & \sqrt{2}S_{hv}S_{vv}^* \cr
S_{vv}S_{hh}^* & \sqrt{2}S_{vv}S_{hv}^* & |S_{vv}|^2 }.
$$
### Multi-looking
The matrix $C$ can be averaged over the number of looks (number of adjacent cells used to average out the effect of speckle) to give an estimate of the __covariance matrix__ of each multi-look pixel:
$$
\bar{C} ={1\over m}\sum_{\nu=1}^m s(\nu) s(\nu)^\top = \langle s s^\top \rangle
= \pmatrix{ \langle |S_{hh}|^2\rangle & \langle\sqrt{2}S_{hh}S_{hv}^*\rangle & \langle S_{hh}S_{vv}^*\rangle \cr
\langle\sqrt{2} S_{hv}S_{hh}^*\rangle & \langle 2|S_{hv}|^2\rangle & \langle\sqrt{2}S_{hv}S_{vv}^*\rangle \cr
\langle S_{vv}S_{hh}^*\rangle & \langle\sqrt{2}S_{vv}S_{hv}^*\rangle & \langle |S_{vv}|^2\rangle },
$$
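Here the starred entries denote complex conjugates, so $\bar C$ is an average of outer products of $s$ with its conjugate transpose. A numerical sketch with synthetic scattering vectors (toy data, not real SAR):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100  # number of looks
s = rng.normal(size=(m, 3)) + 1j * rng.normal(size=(m, 3))  # one s(nu) per look

# Multi-look estimate: average of the outer products s(nu) s(nu)^H
C_bar = np.einsum("ni,nj->ij", s, s.conj()) / m
# C_bar is Hermitian with real, positive intensities on the diagonal.
```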
### Dual polarimetric imagery
The Sentinel-1 sensors operate in reduced, power-saving polarization modes, emitting only one polarization and receiving two (dual polarization) or one (single polarization).
For vertical transmission and horizontal and vertical reception,
$$
\bar{C} = \pmatrix{ \langle |S_{vv}|^2\rangle & \langle S_{vv}S_{vh}^*\rangle \cr
\langle S_{vh}S_{vv}^*\rangle & \langle |S_{vh}|^2\rangle },
$$
The GEE archives only the diagonal (intensity) matrix elements, so we work in fact with
$$
\bar{C} = \pmatrix{ \langle |S_{vv}|^2\rangle & 0 \cr
0 & \langle |S_{vh}|^2\rangle },
$$
### Change detection, bitemporal imagery
The probability distribution of $\bar C$ is completely determined by the parameter $\Sigma$ (the covariance matrix) and by the __equivalent number of looks__ ENL.
Given two measurements of polarized backscatter, one can set up an hypothesis test in order to decide whether or not a change has occurred.
$$H_0: \Sigma_1 = \Sigma_2$$
i.e., the two observations were sampled from the same distribution and no change has occurred
$$H_1: \Sigma_1\ne\Sigma_2$$
in other words, there was a change.
Since the distributions are known, a test statistic can be formulated which allows one to decide to a desired degree of significance whether or not to reject the null hypothesis.
### Change detection, multitemporal imagery
In the case of $k > 2$ observations this procedure can be generalized to test a null hypothesis that all of the $k$ pixels are characterized by the same $\Sigma$, against the alternative that at least one of the $\Sigma_i$, $i=1\dots k$, are different, i.e., that at least one change has taken place.
Furthermore this so-called __omnibus test procedure__ can be factored into a sequence of tests involving hypotheses of the form:
$\Sigma_1 = \Sigma_2$ against $\Sigma_1 \ne \Sigma_2$,
$\Sigma_1 = \Sigma_2 = \Sigma_3$ against $\Sigma_1 = \Sigma_2 \ne \Sigma_3$,
and so forth.
Denoting the test statistics $R^\ell_j,\ \ell = 1\dots k-1,\ j=\ell+1\dots k$, for a series of, say, $k=5$ images, we have the following tests to consider
$$
\matrix{
\ell/j &2 &3 &4 &5\cr
1 & R^1_2 & R^1_3 & R^1_4 & R^1_5 \cr
2 & & R^2_3 & R^2_4 & R^2_5 \cr
3 & & & R^3_4 & R^3_5 \cr
4 & & & & R^4_5 }
$$
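A quick enumeration of the $R^\ell_j$ in this table (there are $k(k-1)/2$ of them in general):

```python
def omnibus_tests(k):
    # All (ell, j) index pairs with ell = 1..k-1 and j = ell+1..k
    return [(ell, j) for ell in range(1, k) for j in range(ell + 1, k + 1)]

tests = omnibus_tests(5)  # 10 statistics, from R^1_2 down to R^4_5
```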
## The GEE interface
The interface is programmed against the GEE Python API and uses Jupyter widgets to generate the desired Sentinel-1 times series for processing.
Results (changes maps) can be previewed on-the-fly and then exported to the GEE Code Editor for visualization and animation.
```
from auxil.eeSar_seq import run
run()
```
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
# Today's data
400 photos of human faces. Each face is a 2d array [64x64] of pixel brightness.
```
from sklearn.datasets import fetch_olivetti_faces
data = fetch_olivetti_faces().images
# This code showcases matplotlib subplots. The syntax is: plt.subplot(height, width, index_starting_from_1)
plt.subplot(2,2,1)
plt.imshow(data[0],cmap='gray')
plt.subplot(2,2,2)
plt.imshow(data[1],cmap='gray')
plt.subplot(2,2,3)
plt.imshow(data[2],cmap='gray')
plt.subplot(2,2,4)
plt.imshow(data[3],cmap='gray')
```
# Face reconstruction problem
Let's solve the face reconstruction problem: given the left halves of faces __(X)__, our algorithm shall predict the right halves __(y)__. Our first step is to slice the photos into X and y using slices.
__Slices in numpy:__
* In regular python, slice looks roughly like this: `a[2:5]` _(select elements from 2 to 5)_
* Numpy allows you to slice N-dimensional arrays along each dimension: [image_index, height, width]
* `data[:10]` - Select first 10 images
* `data[:, :10]` - For all images, select a horizontal stripe 10 pixels high at the top of the image
* `data[10:20, :, -25:-15]` - Take images [10, 11, ..., 19], for each image select a _vertical stripe_ of width 10 pixels, 15 pixels away from the _right_ side.
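You can verify those three slice examples on a dummy array of the same shape as `data`:

```python
import numpy as np

dummy = np.zeros((400, 64, 64))  # stand-in for the faces array

assert dummy[:10].shape == (10, 64, 64)                # first 10 images
assert dummy[:, :10].shape == (400, 10, 64)            # top 10-pixel horizontal stripe
assert dummy[10:20, :, -25:-15].shape == (10, 64, 10)  # 10-pixel vertical stripe near the right
```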
__Your task:__
Let's use slices to select all __left image halves as X__ and all __right halves as y__.
```
# select left half of each face as X, right half as Y
X = <Slice left half-images>
y = <Slice right half-images>
# If you did everything right, you're gonna see left half-image and right half-image drawn separately in natural order
plt.subplot(1,2,1)
plt.imshow(X[0],cmap='gray')
plt.subplot(1,2,2)
plt.imshow(y[0],cmap='gray')
assert X.shape == y.shape == (len(data), 64, 32), "Please slice exactly the left half-face to X and right half-face to Y"
def glue(left_half, right_half):
    # merge photos back together
    left_half = left_half.reshape([-1,64,32])
    right_half = right_half.reshape([-1,64,32])
    return np.concatenate([left_half,right_half],axis=-1)
# if you did everything right, you're gonna see a valid face
plt.imshow(glue(X,y)[99],cmap='gray')
```
# Machine learning stuff
```
from sklearn.model_selection import train_test_split
X_train,X_test,Y_train,Y_test = train_test_split(X.reshape([len(X),-1]),
y.reshape([len(y),-1]),
test_size=0.05,random_state=42)
print(X_test.shape)
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X_train,Y_train)
```
measure mean squared error
```
from sklearn.metrics import mean_squared_error
print("Train MSE:", mean_squared_error(Y_train,model.predict(X_train)))
print("Test MSE:", mean_squared_error(Y_test,model.predict(X_test)))
# Train predictions
pics = glue(X_train,model.predict(X_train))
plt.figure(figsize=[16,12])
for i in range(20):
    plt.subplot(4,5,i+1)
    plt.imshow(pics[i],cmap='gray')
# Test predictions
pics = glue(X_test,model.predict(X_test))
plt.figure(figsize=[16,12])
for i in range(20):
    plt.subplot(4,5,i+1)
    plt.imshow(pics[i],cmap='gray')
```
# Ridge regression
`Ridge` regression is just `LinearRegression` with l2 regularization: the loss is penalized by $ \alpha \cdot \sum _i w_i^2$
Let's train such a model with alpha=0.5
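To see what the penalty does, here is a minimal closed-form sketch on toy data (no intercept, pure numpy; `sklearn.linear_model.Ridge` does more, so this is illustration only):

```python
import numpy as np

def ridge_fit(X, y, alpha):
    # Minimize ||Xw - y||^2 + alpha * ||w||^2  =>  w = (X^T X + alpha I)^{-1} X^T y
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

X_toy = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y_toy = X_toy @ np.array([2.0, 3.0])

w_unreg = ridge_fit(X_toy, y_toy, alpha=0.0)  # recovers [2, 3] exactly
w_reg = ridge_fit(X_toy, y_toy, alpha=10.0)   # shrunk toward zero
```

Larger $\alpha$ means a larger diagonal term, hence smaller-norm weights.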
```
from sklearn.linear_model import Ridge
ridge = Ridge(alpha=0.5)
<YOUR CODE: fit the model on training set>
<YOUR CODE: predict and measure MSE on train and test>
# Test predictions
pics = glue(X_test,ridge.predict(X_test))
plt.figure(figsize=[16,12])
for i in range(20):
    plt.subplot(4,5,i+1)
    plt.imshow(pics[i],cmap='gray')
```
# Grid search
Train the model with different $\alpha$ values and find the one with minimal test MSE. It's okay to use loops or any other python stuff here.
```
<YOUR CODE>
# Test predictions
pics = glue(X_test,<predict with your best model>)
plt.figure(figsize=[16,12])
for i in range(20):
    plt.subplot(4,5,i+1)
    plt.imshow(pics[i],cmap='gray')
```
# Creating a Sentiment Analysis Web App
## Using PyTorch and SageMaker
_Deep Learning Nanodegree Program | Deployment_
---
Now that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model which will predict the sentiment of the entered review.
## Instructions
Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.
> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited, typically by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
## General Outline
Recall the general outline for SageMaker projects using a notebook instance.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
For this project, you will be following the steps in the general outline with some modifications.
First, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward.
In addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. In addition, your newly deployed model will be used in the sentiment analysis web app.
## Step 1: Downloading the data
As in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/).
> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.
```
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
```
## Step 2: Preparing and Processing the data
Also, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set.
```
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
    data = {}
    labels = {}
    for data_type in ['train', 'test']:
        data[data_type] = {}
        labels[data_type] = {}
        for sentiment in ['pos', 'neg']:
            data[data_type][sentiment] = []
            labels[data_type][sentiment] = []
            path = os.path.join(data_dir, data_type, sentiment, '*.txt')
            files = glob.glob(path)
            for f in files:
                with open(f) as review:
                    data[data_type][sentiment].append(review.read())
                    # Here we represent a positive review by '1' and a negative review by '0'
                    labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
            assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
                "{}/{} data size does not match labels size".format(data_type, sentiment)
    return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
```
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.
```
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
    """Prepare training and test sets from IMDb movie reviews."""
    # Combine positive and negative reviews and labels
    data_train = data['train']['pos'] + data['train']['neg']
    data_test = data['test']['pos'] + data['test']['neg']
    labels_train = labels['train']['pos'] + labels['train']['neg']
    labels_test = labels['test']['pos'] + labels['test']['neg']
    # Shuffle reviews and corresponding labels within training and test sets
    data_train, labels_train = shuffle(data_train, labels_train)
    data_test, labels_test = shuffle(data_test, labels_test)
    # Return unified training data, test data, training labels, test labels
    return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
```
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly.
```
print(train_X[100])
print(train_y[100])
```
The first step in processing the reviews is to make sure that any html tags that appear are removed. In addition we wish to stem our input, so that words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis.
```
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
    nltk.download("stopwords", quiet=True)
    stemmer = PorterStemmer()
    text = BeautifulSoup(review, "html.parser").get_text()  # Remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())  # Lowercase, drop non-alphanumeric characters
    words = text.split()  # Split string into words
    words = [w for w in words if w not in stopwords.words("english")]  # Remove stopwords
    words = [stemmer.stem(w) for w in words]  # Stem each word
    return words
```
The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set.
```
# TODO: Apply review_to_words to a review (train_X[100] or any other review)
review_to_words(train_X[100])
```
**Question:** Above we mentioned that `review_to_words` method removes html formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do to the input?
**Answer:**
1. It removes all HTML tags.
2. It removes all non-alphanumeric characters.
3. It converts all letters to lowercase.
4. It removes all English stopwords (from NLTK's stopword list).
5. It converts the string into a list of (stemmed) words.
The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition it caches the results. This is because performing this processing step can take a long time. This way if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time.
```
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
                    cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
    """Convert each review to words; read from cache if available."""
    # If cache_file is not None, try to read from it first
    cache_data = None
    if cache_file is not None:
        try:
            with open(os.path.join(cache_dir, cache_file), "rb") as f:
                cache_data = pickle.load(f)
            print("Read preprocessed data from cache file:", cache_file)
        except:
            pass  # unable to read from cache, but that's okay
    # If cache is missing, then do the heavy lifting
    if cache_data is None:
        # Preprocess training and test data to obtain words for each review
        #words_train = list(map(review_to_words, data_train))
        #words_test = list(map(review_to_words, data_test))
        words_train = [review_to_words(review) for review in data_train]
        words_test = [review_to_words(review) for review in data_test]
        # Write to cache file for future runs
        if cache_file is not None:
            cache_data = dict(words_train=words_train, words_test=words_test,
                              labels_train=labels_train, labels_test=labels_test)
            with open(os.path.join(cache_dir, cache_file), "wb") as f:
                pickle.dump(cache_data, f)
            print("Wrote preprocessed data to cache file:", cache_file)
    else:
        # Unpack data loaded from cache file
        words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
            cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
    return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
```
## Transform the data
In the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.
Since we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews.
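The pad-or-truncate step by itself is one line of Python (a sketch; `0` is the 'no word' label):

```python
def pad_or_truncate(seq, pad=500, no_word=0):
    # Pad short sequences with 'no word' labels; cut long ones at `pad`.
    return (seq + [no_word] * pad)[:pad]
```

For example, `pad_or_truncate([5, 6], pad=4)` gives `[5, 6, 0, 0]`.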
### (TODO) Create a word dictionary
To begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.
> **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'.
```
import numpy as np
def build_dict(data, vocab_size=5000):
    """Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""

    # TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
    #       sentence is a list of words.

    # A dict storing the words that appear in the reviews along with how often they occur
    word_count = {}
    for text in data:
        for w in text:
            if w not in word_count:
                word_count[w] = 0
            word_count[w] += 1

    # TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and
    #       sorted_words[-1] is the least frequently appearing word.
    w_freq = []
    for k, v in word_count.items():
        w_freq.append((v, k))
    w_freq.sort(reverse=True)
    sorted_words = []
    for _, word in w_freq:
        sorted_words.append(word)

    word_dict = {}  # This is what we are building, a dictionary that translates words into integers
    for idx, word in enumerate(sorted_words[:vocab_size - 2]):  # The -2 is so that we save room for the 'no word'
        word_dict[word] = idx + 2                               # and 'infrequent' labels
    return word_dict
word_dict = build_dict(train_X)
```
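As a quick sanity check, `build_dict` can be exercised on a toy corpus. The sketch below implements the same mapping using `collections.Counter` (labels `0` and `1` reserved, real words starting at `2`); `build_dict_sketch` and `toy_data` are illustrative names, not part of the project code.

```python
from collections import Counter

def build_dict_sketch(data, vocab_size=5000):
    # Count every word across all tokenized sentences, then keep
    # the vocab_size - 2 most common ones.
    counts = Counter(word for sentence in data for word in sentence)
    most_common = counts.most_common(vocab_size - 2)
    # Labels 0 ('no word') and 1 ('infrequent') are reserved,
    # so real words start at 2.
    return {word: idx + 2 for idx, (word, _) in enumerate(most_common)}

toy_data = [['good', 'movie'], ['bad', 'movie'], ['movie', 'good']]
toy_dict = build_dict_sketch(toy_data, vocab_size=4)
print(toy_dict)  # {'movie': 2, 'good': 3} -- 'bad' falls below the cutoff
```

With `vocab_size=4` only the two most frequent words survive; everything else will later map to the 'infrequent' label `1`.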
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it make sense that these words appear frequently in the training set?
**Answer:**
The words are 'movi', 'film', 'one', 'like', and 'time'.
Yes, because the reviews are about movies and films, so 'movi' and 'film' naturally dominate. It also makes sense that people would write "this one ...", which explains 'one'. Given that 'like' is in the top five frequent words, many of the reviews were probably positive, and 'time' likely appears because reviewers refer to points in time in the movie they are reviewing.
```
# TODO: Use this space to determine the five most frequently appearing words in the training set.
for w, idx in word_dict.items():
    if 2 <= idx <= 6:
        print('Word #{} by frequency: {}'.format(idx - 1, w))
```
### Save `word_dict`
Later on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use.
```
import os
import pickle

data_dir = '../data/pytorch'  # The folder we will use for storing data
if not os.path.exists(data_dir):  # Make sure that the folder exists
    os.makedirs(data_dir)

with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
    pickle.dump(word_dict, f)
```
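As a sanity check, the pickled dictionary can be round-tripped. The sketch below writes to a throwaway temporary directory (a stand-in for `data_dir`, with a tiny stand-in dictionary) rather than the real data folder:

```python
import os
import pickle
import tempfile

# Round-trip a small stand-in word_dict through a pickle file.
word_dict = {'movi': 2, 'film': 3}
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'word_dict.pkl')
    with open(path, 'wb') as f:
        pickle.dump(word_dict, f)
    with open(path, 'rb') as f:
        restored = pickle.load(f)

print(restored == word_dict)  # True
```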
### Transform the reviews
Now that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`.
```
def convert_and_pad(word_dict, sentence, pad=500):
    NOWORD = 0  # We will use 0 to represent the 'no word' category
    INFREQ = 1  # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict

    working_sentence = [NOWORD] * pad
    for word_index, word in enumerate(sentence[:pad]):
        if word in word_dict:
            working_sentence[word_index] = word_dict[word]
        else:
            working_sentence[word_index] = INFREQ
    return working_sentence, min(len(sentence), pad)

def convert_and_pad_data(word_dict, data, pad=500):
    result = []
    lengths = []
    for sentence in data:
        converted, leng = convert_and_pad(word_dict, sentence, pad)
        result.append(converted)
        lengths.append(leng)
    return np.array(result), np.array(lengths)
train_X, train_X_len = convert_and_pad_data(word_dict, train_X)
test_X, test_X_len = convert_and_pad_data(word_dict, test_X)
```
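On a toy dictionary the padding and truncation behaviour is easy to see. The sketch below re-declares `convert_and_pad` exactly as above and encodes a three-word sentence (one word unknown) with `pad=6`:

```python
def convert_and_pad(word_dict, sentence, pad=500):
    NOWORD = 0  # 'no word' category, used for padding
    INFREQ = 1  # words not appearing in word_dict

    working_sentence = [NOWORD] * pad
    for word_index, word in enumerate(sentence[:pad]):
        if word in word_dict:
            working_sentence[word_index] = word_dict[word]
        else:
            working_sentence[word_index] = INFREQ
    return working_sentence, min(len(sentence), pad)

toy_dict = {'movi': 2, 'good': 3}
# 'unseen' is not in the dictionary, so it maps to INFREQ (1);
# the remaining slots are padded with NOWORD (0).
encoded, length = convert_and_pad(toy_dict, ['good', 'movi', 'unseen'], pad=6)
print(encoded, length)  # [3, 2, 1, 0, 0, 0] 3
```

A sentence longer than `pad` is simply truncated, and the reported length is capped at `pad`.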
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processed. Does this look reasonable? What is the length of a review in the training set?
```
# Use this cell to examine one of the processed reviews to make sure everything is working as intended.
print(train_X_len[300])
train_X[300]
```
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Why might or might not this be a problem?
**Answer:**
`preprocess_data`: This is not a problem. In fact, applying the same preprocessing to both the train and test data keeps them consistent.

`convert_and_pad_data`: This is also fine. It ensures that the input is in the expected format and size for both the train and test datasets, and since the `word_dict` used for the conversion was built from the training data alone, no information leaks from the test set.
## Step 3: Upload the data to S3
As in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on.
### Save the processed training dataset locally
It is important to note the format of the data that we are saving as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review.
```
import pandas as pd
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
.to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
```
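The row layout can be verified without touching the real dataset. The sketch below writes a miniature version (reviews only 4 integers wide instead of 500) to an in-memory buffer and reads the first row back; all the array values are made up for illustration.

```python
import io

import numpy as np
import pandas as pd

# Mimic the layout written above: label, length, then the review integers.
train_y = np.array([1, 0])
train_X_len = np.array([3, 2])
train_X = np.array([[5, 6, 7, 0],
                    [8, 9, 0, 0]])

buf = io.StringIO()
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
    .to_csv(buf, header=False, index=False)

buf.seek(0)
rows = pd.read_csv(buf, header=None)
print(rows.iloc[0].tolist())  # [1, 3, 5, 6, 7, 0] -- label, length, review
```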
### Uploading the training data
Next, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model.
```
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
```
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory.
## Step 4: Build and Train the PyTorch Model
In the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects
- Model Artifacts,
- Training Code, and
- Inference Code,
each of which interact with one another. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon with the added benefit of being able to include our own custom code.
We will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below.
```
!pygmentize train/model.py
```
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.
First we will load a small portion of the training data set to use as a sample. It would be very time consuming to try to train the model completely in the notebook as we do not have access to a GPU and the compute instance that we are using is not particularly powerful. However, we can work on a small bit of the data to get a feel for how our training script is behaving.
```
import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()
# Build the dataset
train_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)
# Build the dataloader
train_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50)
```
### (TODO) Writing the training method
Next we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later.
```
from torch import nn
def train(model, train_loader, epochs, optimizer, loss_fn, device, clip=5):
    for epoch in range(1, epochs + 1):
        model.train()
        total_loss = 0
        for batch in train_loader:
            batch_X, batch_y = batch

            batch_X = batch_X.to(device)
            batch_y = batch_y.to(device)

            # TODO: Complete this train method to train the model provided.
            optimizer.zero_grad()
            out = model(batch_X)
            loss = loss_fn(out.squeeze(), batch_y.float())
            loss.backward()
            nn.utils.clip_grad_norm_(model.parameters(), clip)
            optimizer.step()

            total_loss += loss.data.item()
        print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader)))
```
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose.
```
import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 2, optimizer, loss_fn, device)
```
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run.
### (TODO) Training the model
When a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.
**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.
The way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train/train.py` file.
```
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
                    source_dir="train",
                    role=role,
                    framework_version='0.4.0',
                    train_instance_count=1,
                    train_instance_type='ml.p2.xlarge',
                    hyperparameters={
                        'epochs': 10,
                        'hidden_dim': 200,
                    })

estimator.fit({'training': input_data})
```
## Step 5: Testing the model
As mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly.
## Step 6: Deploy the model for testing
Now that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this.
There is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made.
**NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard (i.e., `if __name__ == '__main__':`).
Since we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is.
**NOTE:** When deploying a model you are asking SageMaker to launch a compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running.
In other words **If you are no longer using a deployed endpoint, shut it down!**
**TODO:** Deploy the trained model.
```
# TODO: Deploy the trained model
predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
```
## Step 7: Use the model for testing
Once deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is.
```
test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)
# We split the data into chunks and send each chunk separately, accumulating the results.

def predict(data, rows=512):
    split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
    predictions = np.array([])
    for array in split_array:
        predictions = np.append(predictions, predictor.predict(array))
    return predictions
predictions = predict(test_X.values)
predictions = [round(num) for num in predictions]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
```
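The chunking arithmetic inside `predict()` is worth seeing in isolation: `np.array_split` with `int(data.shape[0] / float(rows) + 1)` sections guarantees that every chunk has at most `rows` entries. A small sketch with dummy data:

```python
import numpy as np

# Same chunking scheme as predict() above: split 1300 rows into
# batches of at most 512 before sending them to the endpoint.
data = np.zeros((1300, 5))
rows = 512

n_sections = int(data.shape[0] / float(rows) + 1)  # 3 sections here
chunks = np.array_split(data, n_sections)
print([len(c) for c in chunks])  # [434, 433, 433]
```

Because the number of sections always exceeds `n / rows`, each chunk size `ceil(n / sections)` stays at or below `rows`.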
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis?
**Answer:**
The XGBoost model's accuracy on the test dataset is 0.8566, while this model's accuracy is 0.83432. There isn't much of a difference, but if I had to choose, the XGBoost model performs better by about 2%.
### (TODO) More testing
We now have a trained model which has been deployed and to which we can send processed reviews, receiving the predicted sentiment in return. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model.
```
test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'
```
The question we now need to answer is, how do we send this review to our model?
Recall in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews.
- Removed any html tags and stemmed the input
- Encoded the review as a sequence of integers using `word_dict`
In order to process the review we will need to repeat these two steps.
**TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`.
```
# TODO: Convert test_review into a form usable by the model and save the results in test_data
test_data = review_to_words(test_review)
test_data, test_data_len = convert_and_pad_data(word_dict, [test_data])
test_data = pd.concat([pd.DataFrame(test_data_len), pd.DataFrame(test_data)], axis=1)
test_data = test_data.values
```
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review.
```
predictor.predict(test_data).item()
```
Since the return value of our model is close to `1`, we can be certain that the review we submitted is positive.
### Delete the endpoint
Of course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it.
```
estimator.delete_endpoint()
```
## Step 6 (again): Deploy the model for the web app
Now that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.
As we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.
We will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.
When deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use.
- `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model.
- `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code.
- `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint.
- `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.
For the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize.
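For a plain-text endpoint these two functions can be as simple as decoding and encoding strings. The sketch below shows the typical shape; the exact signatures used by the SageMaker PyTorch container also carry content-type arguments, and `'text/plain'` is assumed here for illustration only.

```python
# Minimal sketch of input_fn/output_fn for a string-in, number-out endpoint.
def input_fn(serialized_input_data, content_type='text/plain'):
    # De-serialize the raw request body into a Python string.
    if content_type != 'text/plain':
        raise Exception('Unsupported content type: ' + content_type)
    return serialized_input_data.decode('utf-8')

def output_fn(prediction_output, accept='text/plain'):
    # Serialize the prediction back into the response body.
    return str(prediction_output), accept

review = input_fn(b'The simplest pleasures in life are the best.')
body, accept = output_fn(1)
print(review)
print(body, accept)
```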
### (TODO) Writing inference code
Before writing our custom inference code, we will begin by taking a look at the code which has been provided.
```
!pygmentize serve/predict.py
```
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple and your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.
**TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file.
```
import argparse
import glob
import json
import os
import pickle
import re
import sys

# import sagemaker_containers
import pandas as pd
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data

# from model import LSTMClassifier
# from utils import review_to_words, convert_and_pad
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
from bs4 import BeautifulSoup
def review_to_words(review):
    nltk.download("stopwords", quiet=True)
    stemmer = PorterStemmer()

    text = BeautifulSoup(review, "html.parser").get_text()  # Remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())       # Convert to lower case
    words = text.split()  # Split string into words
    words = [w for w in words if w not in stopwords.words("english")]  # Remove stopwords
    words = [stemmer.stem(w) for w in words]  # Stem
    return words

def convert_and_pad(word_dict, sentence, pad=500):
    NOWORD = 0  # We will use 0 to represent the 'no word' category
    INFREQ = 1  # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict

    working_sentence = [NOWORD] * pad
    for word_index, word in enumerate(sentence[:pad]):
        if word in word_dict:
            working_sentence[word_index] = word_dict[word]
        else:
            working_sentence[word_index] = INFREQ
    return working_sentence, min(len(sentence), pad)
def predict_fn(input_data, model):
    print('Inferring sentiment of input data.')

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    if model.word_dict is None:
        raise Exception('Model has not been loaded properly, no word_dict.')

    # TODO: Process input_data so that it is ready to be sent to our model.
    #       You should produce two variables:
    #         data_X   - A sequence of length 500 which represents the converted review
    #         data_len - The length of the review
    data_X = review_to_words(input_data)
    data_X, data_len = convert_and_pad(model.word_dict, data_X)

    # Using data_X and data_len we construct an appropriate input tensor. Remember
    # that our model expects input data of the form 'len, review[500]'.
    data_pack = np.hstack((data_len, data_X))
    data_pack = data_pack.reshape(1, -1)

    data = torch.from_numpy(data_pack)
    data = data.to(device)

    # Make sure to put the model into evaluation mode
    model.eval()

    # TODO: Compute the result of applying the model to the input data. The variable `result` should
    #       be a numpy array which contains a single integer which is either 1 or 0
    prediction = model(data)
    prediction = torch.round(prediction).cpu().squeeze().int()
    result = prediction.detach().numpy()

    return result
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
net = LSTMClassifier(32, 100, 5000).to(device)
net.word_dict = {'movi': 2, 'film': 3, 'one': 4, 'like': 5, 'time': 6, 'good': 7, 'make': 8, 'charact': 9, 'get': 10, 'see': 11, 'watch': 12, 'stori': 13, 'even': 14, 'would': 15, 'realli': 16, 'well': 17, 'scene': 18, 'look': 19, 'show': 20, 'much': 21, 'end': 22, 'peopl': 23, 'bad': 24, 'go': 25, 'great': 26, 'also': 27, 'first': 28, 'love': 29, 'think': 30, 'way': 31, 'act': 32, 'play': 33, 'made': 34, 'thing': 35, 'could': 36, 'know': 37, 'say': 38, 'seem': 39, 'work': 40, 'plot': 41, 'two': 42, 'actor': 43, 'year': 44, 'come': 45, 'mani': 46, 'seen': 47, 'take': 48, 'want': 49, 'life': 50, 'never': 51, 'littl': 52, 'best': 53, 'tri': 54, 'man': 55, 'ever': 56, 'give': 57, 'better': 58, 'still': 59, 'perform': 60, 'find': 61, 'feel': 62, 'part': 63, 'back': 64, 'use': 65, 'someth': 66, 'director': 67, 'actual': 68, 'interest': 69, 'lot': 70, 'real': 71, 'old': 72, 'cast': 73, 'though': 74, 'live': 75, 'star': 76, 'enjoy': 77, 'guy': 78, 'anoth': 79, 'new': 80, 'role': 81, 'noth': 82, '10': 83, 'funni': 84, 'music': 85, 'point': 86, 'start': 87, 'set': 88, 'girl': 89, 'origin': 90, 'day': 91, 'world': 92, 'everi': 93, 'believ': 94, 'turn': 95, 'quit': 96, 'us': 97, 'direct': 98, 'thought': 99, 'fact': 100, 'minut': 101, 'horror': 102, 'kill': 103, 'action': 104, 'comedi': 105, 'pretti': 106, 'young': 107, 'wonder': 108, 'happen': 109, 'around': 110, 'got': 111, 'effect': 112, 'right': 113, 'long': 114, 'howev': 115, 'big': 116, 'line': 117, 'famili': 118, 'enough': 119, 'seri': 120, 'may': 121, 'need': 122, 'fan': 123, 'bit': 124, 'script': 125, 'beauti': 126, 'person': 127, 'becom': 128, 'without': 129, 'must': 130, 'alway': 131, 'friend': 132, 'tell': 133, 'reason': 134, 'saw': 135, 'last': 136, 'final': 137, 'kid': 138, 'almost': 139, 'put': 140, 'least': 141, 'sure': 142, 'done': 143, 'whole': 144, 'place': 145, 'complet': 146, 'kind': 147, 'expect': 148, 'differ': 149, 'shot': 150, 'far': 151, 'mean': 152, 'anyth': 153, 'book': 154, 'laugh': 155, 'might': 
156, 'name': 157, 'sinc': 158, 'begin': 159, '2': 160, 'probabl': 161, 'woman': 162, 'help': 163, 'entertain': 164, 'let': 165, 'screen': 166, 'call': 167, 'tv': 168, 'moment': 169, 'away': 170, 'read': 171, 'yet': 172, 'rather': 173, 'worst': 174, 'run': 175, 'fun': 176, 'lead': 177, 'hard': 178, 'audienc': 179, 'idea': 180, 'anyon': 181, 'episod': 182, 'american': 183, 'found': 184, 'appear': 185, 'bore': 186, 'especi': 187, 'although': 188, 'hope': 189, 'keep': 190, 'cours': 191, 'anim': 192, 'job': 193, 'goe': 194, 'move': 195, 'sens': 196, 'version': 197, 'dvd': 198, 'war': 199, 'money': 200, 'someon': 201, 'mind': 202, 'mayb': 203, 'problem': 204, 'true': 205, 'hous': 206, 'everyth': 207, 'nice': 208, 'second': 209, 'rate': 210, 'three': 211, 'night': 212, 'follow': 213, 'face': 214, 'recommend': 215, 'product': 216, 'main': 217, 'worth': 218, 'leav': 219, 'human': 220, 'special': 221, 'excel': 222, 'togeth': 223, 'wast': 224, 'sound': 225, 'everyon': 226, 'john': 227, 'hand': 228, '1': 229, 'father': 230, 'later': 231, 'eye': 232, 'said': 233, 'view': 234, 'instead': 235, 'review': 236, 'boy': 237, 'high': 238, 'hour': 239, 'miss': 240, 'talk': 241, 'classic': 242, 'wife': 243, 'understand': 244, 'left': 245, 'care': 246, 'black': 247, 'death': 248, 'open': 249, 'murder': 250, 'write': 251, 'half': 252, 'head': 253, 'rememb': 254, 'chang': 255, 'viewer': 256, 'fight': 257, 'gener': 258, 'surpris': 259, 'short': 260, 'includ': 261, 'die': 262, 'fall': 263, 'less': 264, 'els': 265, 'entir': 266, 'piec': 267, 'involv': 268, 'pictur': 269, 'simpli': 270, 'top': 271, 'power': 272, 'home': 273, 'total': 274, 'usual': 275, 'budget': 276, 'attempt': 277, 'suppos': 278, 'releas': 279, 'hollywood': 280, 'terribl': 281, 'song': 282, 'men': 283, 'possibl': 284, 'featur': 285, 'portray': 286, 'disappoint': 287, 'poor': 288, '3': 289, 'coupl': 290, 'stupid': 291, 'camera': 292, 'dead': 293, 'wrong': 294, 'produc': 295, 'low': 296, 'video': 297, 'either': 298, 'aw': 299, 
'definit': 300, 'except': 301, 'rest': 302, 'given': 303, 'absolut': 304, 'women': 305, 'lack': 306, 'word': 307, 'writer': 308, 'titl': 309, 'talent': 310, 'decid': 311, 'full': 312, 'perfect': 313, 'along': 314, 'style': 315, 'close': 316, 'truli': 317, 'school': 318, 'save': 319, 'emot': 320, 'sex': 321, 'age': 322, 'next': 323, 'bring': 324, 'mr': 325, 'case': 326, 'killer': 327, 'heart': 328, 'comment': 329, 'sort': 330, 'creat': 331, 'perhap': 332, 'came': 333, 'brother': 334, 'sever': 335, 'joke': 336, 'art': 337, 'dialogu': 338, 'game': 339, 'small': 340, 'base': 341, 'flick': 342, 'written': 343, 'sequenc': 344, 'meet': 345, 'earli': 346, 'often': 347, 'other': 348, 'mother': 349, 'develop': 350, 'humor': 351, 'actress': 352, 'consid': 353, 'dark': 354, 'guess': 355, 'amaz': 356, 'unfortun': 357, 'lost': 358, 'light': 359, 'exampl': 360, 'cinema': 361, 'drama': 362, 'ye': 363, 'white': 364, 'experi': 365, 'imagin': 366, 'mention': 367, 'stop': 368, 'natur': 369, 'forc': 370, 'manag': 371, 'felt': 372, 'present': 373, 'cut': 374, 'children': 375, 'fail': 376, 'son': 377, 'support': 378, 'qualiti': 379, 'car': 380, 'ask': 381, 'hit': 382, 'side': 383, 'voic': 384, 'extrem': 385, 'impress': 386, 'wors': 387, 'evil': 388, 'went': 389, 'stand': 390, 'certainli': 391, 'basic': 392, 'oh': 393, 'overal': 394, 'favorit': 395, 'horribl': 396, 'mysteri': 397, 'number': 398, 'type': 399, 'danc': 400, 'wait': 401, 'hero': 402, 'alreadi': 403, '5': 404, 'learn': 405, 'matter': 406, '4': 407, 'michael': 408, 'genr': 409, 'fine': 410, 'despit': 411, 'throughout': 412, 'walk': 413, 'success': 414, 'histori': 415, 'question': 416, 'zombi': 417, 'town': 418, 'relationship': 419, 'realiz': 420, 'past': 421, 'child': 422, 'daughter': 423, 'late': 424, 'b': 425, 'wish': 426, 'hate': 427, 'credit': 428, 'event': 429, 'theme': 430, 'touch': 431, 'citi': 432, 'today': 433, 'sometim': 434, 'behind': 435, 'god': 436, 'twist': 437, 'sit': 438, 'stay': 439, 'deal': 440, 'annoy': 441, 
'abl': 442, 'rent': 443, 'pleas': 444, 'edit': 445, 'blood': 446, 'deserv': 447, 'comic': 448, 'anyway': 449, 'appar': 450, 'soon': 451, 'gave': 452, 'etc': 453, 'level': 454, 'slow': 455, 'chanc': 456, 'score': 457, 'bodi': 458, 'brilliant': 459, 'incred': 460, 'figur': 461, 'situat': 462, 'self': 463, 'major': 464, 'stuff': 465, 'decent': 466, 'element': 467, 'return': 468, 'dream': 469, 'obvious': 470, 'order': 471, 'continu': 472, 'pace': 473, 'ridicul': 474, 'happi': 475, 'highli': 476, 'group': 477, 'add': 478, 'thank': 479, 'ladi': 480, 'novel': 481, 'speak': 482, 'pain': 483, 'career': 484, 'shoot': 485, 'strang': 486, 'heard': 487, 'sad': 488, 'polic': 489, 'husband': 490, 'import': 491, 'break': 492, 'took': 493, 'strong': 494, 'cannot': 495, 'robert': 496, 'predict': 497, 'violenc': 498, 'hilari': 499, 'recent': 500, 'countri': 501, 'known': 502, 'particularli': 503, 'pick': 504, 'documentari': 505, 'season': 506, 'critic': 507, 'jame': 508, 'compar': 509, 'obviou': 510, 'alon': 511, 'told': 512, 'state': 513, 'visual': 514, 'rock': 515, 'theater': 516, 'offer': 517, 'exist': 518, 'opinion': 519, 'gore': 520, 'hold': 521, 'crap': 522, 'result': 523, 'room': 524, 'realiti': 525, 'hear': 526, 'effort': 527, 'clich': 528, 'thriller': 529, 'caus': 530, 'serious': 531, 'sequel': 532, 'explain': 533, 'king': 534, 'local': 535, 'ago': 536, 'none': 537, 'hell': 538, 'note': 539, 'allow': 540, 'sister': 541, 'david': 542, 'simpl': 543, 'femal': 544, 'deliv': 545, 'ok': 546, 'convinc': 547, 'class': 548, 'check': 549, 'suspens': 550, 'win': 551, 'oscar': 552, 'buy': 553, 'huge': 554, 'valu': 555, 'sexual': 556, 'scari': 557, 'cool': 558, 'similar': 559, 'excit': 560, 'provid': 561, 'exactli': 562, 'apart': 563, 'shown': 564, 'avoid': 565, 'seriou': 566, 'english': 567, 'whose': 568, 'taken': 569, 'cinematographi': 570, 'shock': 571, 'polit': 572, 'spoiler': 573, 'offic': 574, 'across': 575, 'middl': 576, 'street': 577, 'pass': 578, 'messag': 579, 'somewhat': 580, 
'silli': 581, 'charm': 582, 'modern': 583, 'filmmak': 584, 'confus': 585, 'form': 586, 'tale': 587, 'singl': 588, 'jack': 589, 'mostli': 590, 'william': 591, 'carri': 592, 'attent': 593, 'sing': 594, 'subject': 595, 'five': 596, 'richard': 597, 'prove': 598, 'team': 599, 'stage': 600, 'unlik': 601, 'cop': 602, 'georg': 603, 'televis': 604, 'monster': 605, 'earth': 606, 'villain': 607, 'cover': 608, 'pay': 609, 'marri': 610, 'toward': 611, 'build': 612, 'pull': 613, 'parent': 614, 'due': 615, 'respect': 616, 'fill': 617, 'four': 618, 'dialog': 619, 'remind': 620, 'futur': 621, 'weak': 622, 'typic': 623, '7': 624, 'cheap': 625, 'intellig': 626, 'british': 627, 'atmospher': 628, 'clearli': 629, '80': 630, 'paul': 631, 'non': 632, 'dog': 633, 'knew': 634, 'fast': 635, 'artist': 636, '8': 637, 'crime': 638, 'easili': 639, 'escap': 640, 'doubt': 641, 'adult': 642, 'detail': 643, 'date': 644, 'romant': 645, 'member': 646, 'fire': 647, 'gun': 648, 'drive': 649, 'straight': 650, 'fit': 651, 'beyond': 652, 'attack': 653, 'imag': 654, 'upon': 655, 'posit': 656, 'whether': 657, 'peter': 658, 'fantast': 659, 'captur': 660, 'aspect': 661, 'appreci': 662, 'ten': 663, 'plan': 664, 'discov': 665, 'remain': 666, 'period': 667, 'near': 668, 'realist': 669, 'air': 670, 'mark': 671, 'red': 672, 'dull': 673, 'adapt': 674, 'within': 675, 'spend': 676, 'lose': 677, 'materi': 678, 'color': 679, 'chase': 680, 'mari': 681, 'storylin': 682, 'forget': 683, 'bunch': 684, 'clear': 685, 'lee': 686, 'victim': 687, 'nearli': 688, 'box': 689, 'york': 690, 'match': 691, 'inspir': 692, 'mess': 693, 'finish': 694, 'standard': 695, 'easi': 696, 'truth': 697, 'suffer': 698, 'busi': 699, 'space': 700, 'dramat': 701, 'bill': 702, 'western': 703, 'e': 704, 'list': 705, 'battl': 706, 'notic': 707, 'de': 708, 'french': 709, 'ad': 710, '9': 711, 'tom': 712, 'larg': 713, 'among': 714, 'eventu': 715, 'train': 716, 'accept': 717, 'agre': 718, 'spirit': 719, 'soundtrack': 720, 'third': 721, 'teenag': 722, 
'soldier': 723, 'adventur': 724, 'suggest': 725, 'sorri': 726, 'famou': 727, 'drug': 728, 'normal': 729, 'cri': 730, 'babi': 731, 'ultim': 732, 'troubl': 733, 'contain': 734, 'certain': 735, 'cultur': 736, 'romanc': 737, 'rare': 738, 'lame': 739, 'somehow': 740, 'mix': 741, 'disney': 742, 'gone': 743, 'cartoon': 744, 'student': 745, 'reveal': 746, 'fear': 747, 'suck': 748, 'kept': 749, 'attract': 750, 'appeal': 751, 'premis': 752, 'secret': 753, 'greatest': 754, 'design': 755, 'shame': 756, 'throw': 757, 'scare': 758, 'copi': 759, 'wit': 760, 'america': 761, 'admit': 762, 'relat': 763, 'particular': 764, 'brought': 765, 'screenplay': 766, 'whatev': 767, 'pure': 768, '70': 769, 'harri': 770, 'averag': 771, 'master': 772, 'describ': 773, 'treat': 774, 'male': 775, '20': 776, 'issu': 777, 'fantasi': 778, 'warn': 779, 'inde': 780, 'forward': 781, 'background': 782, 'project': 783, 'free': 784, 'memor': 785, 'japanes': 786, 'poorli': 787, 'award': 788, 'locat': 789, 'potenti': 790, 'amus': 791, 'struggl': 792, 'weird': 793, 'magic': 794, 'societi': 795, 'okay': 796, 'imdb': 797, 'doctor': 798, 'accent': 799, 'water': 800, 'hot': 801, 'express': 802, 'dr': 803, 'alien': 804, '30': 805, 'odd': 806, 'crazi': 807, 'choic': 808, 'studio': 809, 'fiction': 810, 'control': 811, 'becam': 812, 'masterpiec': 813, 'fli': 814, 'difficult': 815, 'joe': 816, 'scream': 817, 'costum': 818, 'lover': 819, 'uniqu': 820, 'refer': 821, 'remak': 822, 'vampir': 823, 'girlfriend': 824, 'prison': 825, 'execut': 826, 'wear': 827, 'jump': 828, 'wood': 829, 'unless': 830, 'creepi': 831, 'cheesi': 832, 'superb': 833, 'otherwis': 834, 'parti': 835, 'roll': 836, 'ghost': 837, 'public': 838, 'mad': 839, 'depict': 840, 'week': 841, 'moral': 842, 'jane': 843, 'earlier': 844, 'badli': 845, 'fi': 846, 'dumb': 847, 'grow': 848, 'flaw': 849, 'sci': 850, 'deep': 851, 'maker': 852, 'cat': 853, 'older': 854, 'footag': 855, 'connect': 856, 'plenti': 857, 'bother': 858, 'outsid': 859, 'stick': 860, 'gay': 861, 
'catch': 862, 'plu': 863, 'co': 864, 'popular': 865, 'equal': 866, 'social': 867, 'quickli': 868, 'disturb': 869, 'perfectli': 870, 'dress': 871, 'era': 872, '90': 873, 'mistak': 874, 'lie': 875, 'ride': 876, 'previou': 877, 'combin': 878, 'concept': 879, 'band': 880, 'surviv': 881, 'rich': 882, 'answer': 883, 'front': 884, 'sweet': 885, 'christma': 886, 'insid': 887, 'eat': 888, 'concern': 889, 'bare': 890, 'listen': 891, 'ben': 892, 'beat': 893, 'c': 894, 'term': 895, 'serv': 896, 'meant': 897, 'la': 898, 'german': 899, 'stereotyp': 900, 'hardli': 901, 'law': 902, 'innoc': 903, 'desper': 904, 'promis': 905, 'memori': 906, 'intent': 907, 'cute': 908, 'variou': 909, 'steal': 910, 'inform': 911, 'brain': 912, 'post': 913, 'tone': 914, 'island': 915, 'amount': 916, 'track': 917, 'nuditi': 918, 'compani': 919, 'store': 920, 'claim': 921, 'hair': 922, 'flat': 923, '50': 924, 'univers': 925, 'land': 926, 'scott': 927, 'kick': 928, 'fairli': 929, 'danger': 930, 'player': 931, 'step': 932, 'plain': 933, 'crew': 934, 'toni': 935, 'share': 936, 'tast': 937, 'centuri': 938, 'engag': 939, 'achiev': 940, 'travel': 941, 'cold': 942, 'suit': 943, 'rip': 944, 'record': 945, 'sadli': 946, 'manner': 947, 'wrote': 948, 'tension': 949, 'spot': 950, 'intens': 951, 'fascin': 952, 'familiar': 953, 'remark': 954, 'depth': 955, 'burn': 956, 'histor': 957, 'destroy': 958, 'sleep': 959, 'purpos': 960, 'languag': 961, 'ruin': 962, 'ignor': 963, 'delight': 964, 'unbeliev': 965, 'italian': 966, 'soul': 967, 'collect': 968, 'abil': 969, 'detect': 970, 'clever': 971, 'violent': 972, 'rape': 973, 'reach': 974, 'door': 975, 'trash': 976, 'scienc': 977, 'liter': 978, 'reveng': 979, 'commun': 980, 'caught': 981, 'creatur': 982, 'trip': 983, 'approach': 984, 'intrigu': 985, 'fashion': 986, 'skill': 987, 'paint': 988, 'introduc': 989, 'complex': 990, 'channel': 991, 'camp': 992, 'christian': 993, 'hole': 994, 'extra': 995, 'mental': 996, 'limit': 997, 'immedi': 998, 'ann': 999, 'slightli': 1000, 
'million': 1001, 'mere': 1002, 'comput': 1003, '6': 1004, 'slasher': 1005, 'conclus': 1006, 'suddenli': 1007, 'imposs': 1008, 'teen': 1009, 'neither': 1010, 'crimin': 1011, 'spent': 1012, 'physic': 1013, 'nation': 1014, 'respons': 1015, 'planet': 1016, 'receiv': 1017, 'fake': 1018, 'sick': 1019, 'blue': 1020, 'bizarr': 1021, 'embarrass': 1022, 'indian': 1023, 'ring': 1024, '15': 1025, 'pop': 1026, 'drop': 1027, 'drag': 1028, 'haunt': 1029, 'suspect': 1030, 'pointless': 1031, 'search': 1032, 'edg': 1033, 'handl': 1034, 'common': 1035, 'biggest': 1036, 'hurt': 1037, 'faith': 1038, 'arriv': 1039, 'technic': 1040, 'angel': 1041, 'genuin': 1042, 'dad': 1043, 'solid': 1044, 'f': 1045, 'awesom': 1046, 'van': 1047, 'former': 1048, 'focu': 1049, 'colleg': 1050, 'count': 1051, 'tear': 1052, 'heavi': 1053, 'wall': 1054, 'rais': 1055, 'younger': 1056, 'visit': 1057, 'laughabl': 1058, 'sign': 1059, 'fair': 1060, 'excus': 1061, 'cult': 1062, 'tough': 1063, 'motion': 1064, 'key': 1065, 'super': 1066, 'desir': 1067, 'stun': 1068, 'addit': 1069, 'exploit': 1070, 'cloth': 1071, 'tortur': 1072, 'smith': 1073, 'race': 1074, 'davi': 1075, 'cross': 1076, 'author': 1077, 'jim': 1078, 'minor': 1079, 'focus': 1080, 'consist': 1081, 'compel': 1082, 'pathet': 1083, 'commit': 1084, 'chemistri': 1085, 'park': 1086, 'tradit': 1087, 'obsess': 1088, 'frank': 1089, 'grade': 1090, 'asid': 1091, '60': 1092, 'brutal': 1093, 'steve': 1094, 'somewher': 1095, 'u': 1096, 'rule': 1097, 'opportun': 1098, 'grant': 1099, 'explor': 1100, 'depress': 1101, 'honest': 1102, 'besid': 1103, 'dub': 1104, 'anti': 1105, 'trailer': 1106, 'intend': 1107, 'bar': 1108, 'west': 1109, 'scientist': 1110, 'regard': 1111, 'longer': 1112, 'judg': 1113, 'decad': 1114, 'silent': 1115, 'creativ': 1116, 'armi': 1117, 'wild': 1118, 'stewart': 1119, 'south': 1120, 'g': 1121, 'draw': 1122, 'road': 1123, 'govern': 1124, 'ex': 1125, 'boss': 1126, 'practic': 1127, 'surprisingli': 1128, 'motiv': 1129, 'gang': 1130, 'festiv': 1131, 'club': 
1132, 'redeem': 1133, 'page': 1134, 'london': 1135, 'green': 1136, 'militari': 1137, 'machin': 1138, 'idiot': 1139, 'display': 1140, 'aliv': 1141, 'thrill': 1142, 'repeat': 1143, 'yeah': 1144, 'nobodi': 1145, 'folk': 1146, '100': 1147, '40': 1148, 'journey': 1149, 'garbag': 1150, 'tire': 1151, 'smile': 1152, 'ground': 1153, 'mood': 1154, 'bought': 1155, 'stone': 1156, 'sam': 1157, 'cost': 1158, 'noir': 1159, 'mouth': 1160, 'terrif': 1161, 'agent': 1162, 'utterli': 1163, 'requir': 1164, 'sexi': 1165, 'honestli': 1166, 'area': 1167, 'report': 1168, 'geniu': 1169, 'investig': 1170, 'humour': 1171, 'glad': 1172, 'enter': 1173, 'serial': 1174, 'passion': 1175, 'occasion': 1176, 'narr': 1177, 'marriag': 1178, 'climax': 1179, 'studi': 1180, 'industri': 1181, 'ship': 1182, 'nowher': 1183, 'demon': 1184, 'charli': 1185, 'center': 1186, 'loos': 1187, 'hors': 1188, 'bear': 1189, 'wow': 1190, 'hang': 1191, 'graphic': 1192, 'giant': 1193, 'admir': 1194, 'send': 1195, 'loud': 1196, 'damn': 1197, 'subtl': 1198, 'rel': 1199, 'profession': 1200, 'nake': 1201, 'blow': 1202, 'bottom': 1203, 'insult': 1204, 'batman': 1205, 'r': 1206, 'kelli': 1207, 'doubl': 1208, 'boyfriend': 1209, 'initi': 1210, 'frame': 1211, 'opera': 1212, 'gem': 1213, 'drawn': 1214, 'cinemat': 1215, 'church': 1216, 'challeng': 1217, 'affect': 1218, 'seek': 1219, 'nightmar': 1220, 'l': 1221, 'j': 1222, 'fulli': 1223, 'evid': 1224, 'essenti': 1225, 'conflict': 1226, 'arm': 1227, 'wind': 1228, 'henri': 1229, 'grace': 1230, 'christoph': 1231, 'witch': 1232, 'narrat': 1233, 'assum': 1234, 'push': 1235, 'hunt': 1236, 'wise': 1237, 'chri': 1238, 'repres': 1239, 'nomin': 1240, 'month': 1241, 'sceneri': 1242, 'hide': 1243, 'avail': 1244, 'affair': 1245, 'thu': 1246, 'smart': 1247, 'justic': 1248, 'bond': 1249, 'outstand': 1250, 'interview': 1251, 'flashback': 1252, 'satisfi': 1253, 'presenc': 1254, 'constantli': 1255, 'central': 1256, 'bed': 1257, 'sell': 1258, 'iron': 1259, 'content': 1260, 'gag': 1261, 'everybodi': 1262, 
'slowli': 1263, 'hotel': 1264, 'hire': 1265, 'system': 1266, 'thrown': 1267, 'individu': 1268, 'hey': 1269, 'charl': 1270, 'adam': 1271, 'mediocr': 1272, 'jone': 1273, 'allen': 1274, 'ray': 1275, 'lesson': 1276, 'billi': 1277, 'photographi': 1278, 'cameo': 1279, 'pari': 1280, 'fellow': 1281, 'strike': 1282, 'rise': 1283, 'independ': 1284, 'brief': 1285, 'absurd': 1286, 'neg': 1287, 'phone': 1288, 'impact': 1289, 'model': 1290, 'ill': 1291, 'born': 1292, 'spoil': 1293, 'fresh': 1294, 'angl': 1295, 'likabl': 1296, 'abus': 1297, 'hill': 1298, 'discuss': 1299, 'sight': 1300, 'ahead': 1301, 'sent': 1302, 'photograph': 1303, 'shine': 1304, 'occur': 1305, 'logic': 1306, 'blame': 1307, 'mainli': 1308, 'bruce': 1309, 'skip': 1310, 'forev': 1311, 'commerci': 1312, 'teacher': 1313, 'surround': 1314, 'segment': 1315, 'held': 1316, 'zero': 1317, 'blond': 1318, 'trap': 1319, 'summer': 1320, 'satir': 1321, 'resembl': 1322, 'six': 1323, 'queen': 1324, 'fool': 1325, 'ball': 1326, 'twice': 1327, 'tragedi': 1328, 'sub': 1329, 'reaction': 1330, 'pack': 1331, 'bomb': 1332, 'will': 1333, 'protagonist': 1334, 'hospit': 1335, 'sport': 1336, 'mile': 1337, 'vote': 1338, 'trust': 1339, 'mom': 1340, 'jerri': 1341, 'drink': 1342, 'encount': 1343, 'plane': 1344, 'station': 1345, 'program': 1346, 'current': 1347, 'al': 1348, 'martin': 1349, 'choos': 1350, 'celebr': 1351, 'join': 1352, 'tragic': 1353, 'round': 1354, 'lord': 1355, 'field': 1356, 'favourit': 1357, 'vision': 1358, 'robot': 1359, 'jean': 1360, 'tie': 1361, 'arthur': 1362, 'roger': 1363, 'random': 1364, 'fortun': 1365, 'psycholog': 1366, 'intern': 1367, 'dread': 1368, 'prefer': 1369, 'nonsens': 1370, 'improv': 1371, 'epic': 1372, 'pleasur': 1373, 'legend': 1374, 'highlight': 1375, 'formula': 1376, 'tape': 1377, 'dollar': 1378, '11': 1379, 'wide': 1380, 'thin': 1381, 'porn': 1382, 'object': 1383, 'gorgeou': 1384, 'fox': 1385, 'ugli': 1386, 'influenc': 1387, 'buddi': 1388, 'prepar': 1389, 'nasti': 1390, 'ii': 1391, 'warm': 1392, 
'supposedli': 1393, 'reflect': 1394, 'progress': 1395, 'youth': 1396, 'worthi': 1397, 'unusu': 1398, 'length': 1399, 'latter': 1400, 'crash': 1401, 'superior': 1402, 'shop': 1403, 'seven': 1404, 'childhood': 1405, 'theatr': 1406, 'remot': 1407, 'pilot': 1408, 'paid': 1409, 'funniest': 1410, 'disgust': 1411, 'trick': 1412, 'fell': 1413, 'convers': 1414, 'castl': 1415, 'rob': 1416, 'gangster': 1417, 'establish': 1418, 'disast': 1419, 'suicid': 1420, 'mine': 1421, 'ident': 1422, 'heaven': 1423, 'disappear': 1424, 'tend': 1425, 'singer': 1426, 'mask': 1427, 'heroin': 1428, 'forgotten': 1429, 'decis': 1430, 'partner': 1431, 'brian': 1432, 'recogn': 1433, 'desert': 1434, 'alan': 1435, 'thoroughli': 1436, 'stuck': 1437, 'sky': 1438, 'p': 1439, 'ms': 1440, 'replac': 1441, 'accur': 1442, 'market': 1443, 'uncl': 1444, 'seemingli': 1445, 'eddi': 1446, 'danni': 1447, 'commentari': 1448, 'clue': 1449, 'andi': 1450, 'jackson': 1451, 'devil': 1452, 'therefor': 1453, 'that': 1454, 'refus': 1455, 'pair': 1456, 'unit': 1457, 'river': 1458, 'fault': 1459, 'fate': 1460, 'ed': 1461, 'accid': 1462, 'tune': 1463, 'afraid': 1464, 'stephen': 1465, 'russian': 1466, 'hidden': 1467, 'clean': 1468, 'test': 1469, 'readi': 1470, 'quick': 1471, 'irrit': 1472, 'instanc': 1473, 'convey': 1474, 'captain': 1475, 'european': 1476, 'insan': 1477, 'frustrat': 1478, 'daniel': 1479, 'wed': 1480, 'rescu': 1481, 'food': 1482, 'chines': 1483, '1950': 1484, 'lock': 1485, 'dirti': 1486, 'angri': 1487, 'joy': 1488, 'steven': 1489, 'price': 1490, 'cage': 1491, 'bland': 1492, 'rang': 1493, 'anymor': 1494, 'wooden': 1495, 'rush': 1496, 'news': 1497, 'n': 1498, 'jason': 1499, 'worri': 1500, 'twenti': 1501, 'martial': 1502, 'led': 1503, 'board': 1504, '12': 1505, 'transform': 1506, 'symbol': 1507, 'hunter': 1508, 'cgi': 1509, 'x': 1510, 'sentiment': 1511, 'piti': 1512, 'onto': 1513, 'johnni': 1514, 'invent': 1515, 'process': 1516, 'explan': 1517, 'attitud': 1518, 'owner': 1519, 'awar': 1520, 'aim': 1521, 'target': 
1522, 'necessari': 1523, 'floor': 1524, 'favor': 1525, 'energi': 1526, 'religi': 1527, 'opposit': 1528, 'window': 1529, 'insight': 1530, 'chick': 1531, 'blind': 1532, 'movement': 1533, 'research': 1534, 'possess': 1535, 'mountain': 1536, 'deepli': 1537, 'comparison': 1538, 'whatsoev': 1539, 'rain': 1540, 'grand': 1541, 'comed': 1542, 'shadow': 1543, 'mid': 1544, 'began': 1545, 'bank': 1546, 'princ': 1547, 'parodi': 1548, 'weapon': 1549, 'taylor': 1550, 'pre': 1551, 'friendship': 1552, 'credibl': 1553, 'teach': 1554, 'flesh': 1555, 'dougla': 1556, 'terror': 1557, 'protect': 1558, 'hint': 1559, 'bloodi': 1560, 'marvel': 1561, 'watchabl': 1562, 'superman': 1563, 'load': 1564, 'leader': 1565, 'drunk': 1566, 'anybodi': 1567, 'accord': 1568, 'freddi': 1569, 'brown': 1570, 'tim': 1571, 'seat': 1572, 'jeff': 1573, 'hitler': 1574, 'appropri': 1575, 'villag': 1576, 'unknown': 1577, 'knock': 1578, 'keaton': 1579, 'charg': 1580, 'unnecessari': 1581, 'media': 1582, 'england': 1583, 'enemi': 1584, 'empti': 1585, 'wave': 1586, 'utter': 1587, 'strength': 1588, 'perspect': 1589, 'dare': 1590, 'craft': 1591, 'buck': 1592, 'nativ': 1593, 'kiss': 1594, 'ford': 1595, 'correct': 1596, 'contrast': 1597, 'speed': 1598, 'soap': 1599, 'nazi': 1600, 'magnific': 1601, 'knowledg': 1602, 'distract': 1603, 'chill': 1604, 'anywher': 1605, 'mission': 1606, 'ice': 1607, 'fred': 1608, 'breath': 1609, '1980': 1610, 'moon': 1611, 'jr': 1612, 'joan': 1613, 'crowd': 1614, 'soft': 1615, 'kate': 1616, 'frighten': 1617, '000': 1618, 'nick': 1619, 'hundr': 1620, 'dick': 1621, 'dan': 1622, 'somebodi': 1623, 'simon': 1624, 'radio': 1625, 'dozen': 1626, 'thousand': 1627, 'shakespear': 1628, 'loss': 1629, 'andrew': 1630, 'academi': 1631, 'vehicl': 1632, 'sum': 1633, 'root': 1634, 'quot': 1635, 'account': 1636, 'leg': 1637, 'convent': 1638, 'behavior': 1639, '1970': 1640, 'regular': 1641, 'gold': 1642, 'worker': 1643, 'pretenti': 1644, 'demand': 1645, 'compet': 1646, 'stretch': 1647, 'privat': 1648, 'notabl': 
1649, 'lynch': 1650, 'japan': 1651, 'interpret': 1652, 'explos': 1653, 'candi': 1654, 'tarzan': 1655, 'debut': 1656, 'constant': 1657, 'translat': 1658, 'spi': 1659, 'sea': 1660, 'revolv': 1661, 'prais': 1662, 'threaten': 1663, 'technolog': 1664, 'sat': 1665, 'quiet': 1666, 'jesu': 1667, 'franc': 1668, 'failur': 1669, 'ass': 1670, 'toy': 1671, 'punch': 1672, 'met': 1673, 'kevin': 1674, 'higher': 1675, 'aid': 1676, 'vh': 1677, 'mike': 1678, 'interact': 1679, 'abandon': 1680, 'separ': 1681, 'confront': 1682, 'command': 1683, 'bet': 1684, 'techniqu': 1685, 'stunt': 1686, 'site': 1687, 'servic': 1688, 'recal': 1689, 'gotten': 1690, 'belong': 1691, 'freak': 1692, 'foot': 1693, 'cabl': 1694, 'bug': 1695, 'jimmi': 1696, 'fu': 1697, 'capabl': 1698, 'bright': 1699, 'african': 1700, 'succeed': 1701, 'stock': 1702, 'presid': 1703, 'fat': 1704, 'clark': 1705, 'boat': 1706, 'structur': 1707, 'spanish': 1708, 'gene': 1709, 'paper': 1710, 'kidnap': 1711, 'whilst': 1712, 'factor': 1713, 'belief': 1714, 'witti': 1715, 'tree': 1716, 'realism': 1717, 'realis': 1718, 'educ': 1719, 'complic': 1720, 'bob': 1721, 'attend': 1722, 'santa': 1723, 'finest': 1724, 'broken': 1725, 'assist': 1726, 'v': 1727, 'up': 1728, 'smoke': 1729, 'observ': 1730, 'determin': 1731, 'depart': 1732, 'rubbish': 1733, 'routin': 1734, 'oper': 1735, 'lewi': 1736, 'hat': 1737, 'fame': 1738, 'domin': 1739, 'safe': 1740, 'morgan': 1741, 'lone': 1742, 'kinda': 1743, 'hook': 1744, 'foreign': 1745, 'advanc': 1746, 'rank': 1747, 'numer': 1748, 'werewolf': 1749, 'washington': 1750, 'vs': 1751, 'shape': 1752, 'shallow': 1753, 'rose': 1754, 'civil': 1755, 'morn': 1756, 'gari': 1757, 'winner': 1758, 'ordinari': 1759, 'kong': 1760, 'accomplish': 1761, 'whenev': 1762, 'virtual': 1763, 'peac': 1764, 'grab': 1765, 'offens': 1766, 'luck': 1767, 'h': 1768, 'welcom': 1769, 'unfunni': 1770, 'patient': 1771, 'contriv': 1772, 'complain': 1773, 'bigger': 1774, 'activ': 1775, 'trek': 1776, 'pretend': 1777, 'dimension': 1778, 'con': 
1779, 'wake': 1780, 'lesbian': 1781, 'flash': 1782, 'eric': 1783, 'dri': 1784, 'code': 1785, 'cain': 1786, 'statu': 1787, 'manipul': 1788, 'guard': 1789, 'dancer': 1790, 'corrupt': 1791, 'albert': 1792, 'speech': 1793, 'sourc': 1794, 'signific': 1795, 'gain': 1796, 'context': 1797, 'awkward': 1798, 'sean': 1799, 'psycho': 1800, 'corni': 1801, 'clip': 1802, 'anthoni': 1803, '13': 1804, 'w': 1805, 'theatric': 1806, 'religion': 1807, 'reli': 1808, 'priest': 1809, 'curiou': 1810, 'advic': 1811, 'flow': 1812, 'addict': 1813, 'specif': 1814, 'skin': 1815, 'secur': 1816, 'jennif': 1817, 'howard': 1818, 'asian': 1819, 'promot': 1820, 'organ': 1821, 'luke': 1822, 'golden': 1823, 'core': 1824, 'comfort': 1825, 'lucki': 1826, 'cheat': 1827, 'cash': 1828, 'lower': 1829, 'dislik': 1830, 'associ': 1831, 'wing': 1832, 'spell': 1833, 'regret': 1834, 'frequent': 1835, 'frankli': 1836, 'devic': 1837, 'degre': 1838, 'contribut': 1839, 'balanc': 1840, 'sake': 1841, 'print': 1842, 'lake': 1843, 'forgiv': 1844, 'thoma': 1845, 'mass': 1846, 'betti': 1847, 'unexpect': 1848, 'gordon': 1849, 'crack': 1850, 'unfold': 1851, 'invit': 1852, 'grown': 1853, 'depend': 1854, 'construct': 1855, 'categori': 1856, 'amateur': 1857, 'walter': 1858, 'matur': 1859, 'intellectu': 1860, 'honor': 1861, 'grew': 1862, 'condit': 1863, 'anna': 1864, 'veteran': 1865, 'sudden': 1866, 'spectacular': 1867, 'sole': 1868, 'mirror': 1869, 'robin': 1870, 'overli': 1871, 'meanwhil': 1872, 'liner': 1873, 'grip': 1874, 'gift': 1875, 'freedom': 1876, 'experienc': 1877, 'demonstr': 1878, 'card': 1879, 'unabl': 1880, 'theori': 1881, 'subtitl': 1882, 'sheriff': 1883, 'section': 1884, 'oliv': 1885, 'drew': 1886, 'crappi': 1887, 'colour': 1888, 'circumst': 1889, 'brilliantli': 1890, 'sheer': 1891, 'pile': 1892, 'path': 1893, 'parker': 1894, 'matt': 1895, 'laughter': 1896, 'cook': 1897, 'altern': 1898, 'wander': 1899, 'treatment': 1900, 'sinatra': 1901, 'relief': 1902, 'lawyer': 1903, 'hall': 1904, 'defin': 1905, 'accident': 
1906, 'hank': 1907, 'dragon': 1908, 'captiv': 1909, 'moor': 1910, 'halloween': 1911, 'gratuit': 1912, 'wound': 1913, 'wayn': 1914, 'unintent': 1915, 'kung': 1916, 'k': 1917, 'jacki': 1918, 'cowboy': 1919, 'broadway': 1920, 'barbara': 1921, 'winter': 1922, 'surreal': 1923, 'statement': 1924, 'spoof': 1925, 'canadian': 1926, 'treasur': 1927, 'gonna': 1928, 'fish': 1929, 'fare': 1930, 'compos': 1931, 'cheer': 1932, 'woodi': 1933, 'victor': 1934, 'unrealist': 1935, 'sensit': 1936, 'emerg': 1937, 'sympathet': 1938, 'ran': 1939, 'neighbor': 1940, 'driven': 1941, 'topic': 1942, 'overlook': 1943, 'menac': 1944, 'glass': 1945, 'expos': 1946, 'authent': 1947, 'michel': 1948, 'handsom': 1949, 'gross': 1950, 'chief': 1951, 'ancient': 1952, 'stranger': 1953, 'russel': 1954, 'pleasant': 1955, 'nevertheless': 1956, 'network': 1957, 'feet': 1958, 'contemporari': 1959, 'comedian': 1960, 'cinderella': 1961, 'built': 1962, 'underr': 1963, 'miser': 1964, 'letter': 1965, 'gori': 1966, 'endless': 1967, 'earn': 1968, 'consider': 1969, 'blockbust': 1970, 'switch': 1971, 'solv': 1972, 'brook': 1973, 'virgin': 1974, 'victoria': 1975, 'joseph': 1976, 'edward': 1977, 'convict': 1978, 'bullet': 1979, 'scenario': 1980, 'scale': 1981, 'cynic': 1982, 'chosen': 1983, 'alex': 1984, '0': 1985, 'sword': 1986, 'outrag': 1987, 'gut': 1988, 'curs': 1989, 'com': 1990, 'wrap': 1991, 'uk': 1992, 'substanc': 1993, 'screenwrit': 1994, 'proper': 1995, 'monkey': 1996, 'juli': 1997, 'driver': 1998, 'remov': 1999, 'par': 2000, 'indic': 2001, 'court': 2002, 'bird': 2003, 'roy': 2004, 'rental': 2005, 'nanci': 2006, 'naiv': 2007, 'loser': 2008, 'inevit': 2009, 'grave': 2010, 'consequ': 2011, 'advertis': 2012, 'slap': 2013, 'le': 2014, 'invis': 2015, 'germani': 2016, 'fatal': 2017, 'bridg': 2018, 'brave': 2019, 'provok': 2020, 'loui': 2021, 'footbal': 2022, 'anger': 2023, 'ador': 2024, 'chan': 2025, 'anderson': 2026, 'alcohol': 2027, 'willi': 2028, 'stumbl': 2029, 'ryan': 2030, 'professor': 2031, 'sharp': 2032, 
'patrick': 2033, 'bat': 2034, 'australian': 2035, 'assassin': 2036, '1930': 2037, 'trilog': 2038, 'strongli': 2039, 'saturday': 2040, 'refresh': 2041, 'lousi': 2042, 'liber': 2043, 'heck': 2044, 'eight': 2045, 'deni': 2046, 'cell': 2047, 'ape': 2048, 'amateurish': 2049, 'sin': 2050, 'vagu': 2051, 'san': 2052, 'resid': 2053, 'justifi': 2054, 'terrifi': 2055, 'sympathi': 2056, 'reput': 2057, 'mini': 2058, 'indi': 2059, 'defeat': 2060, 'creator': 2061, 'tediou': 2062, 'task': 2063, 'tabl': 2064, 'prevent': 2065, 'expert': 2066, 'endur': 2067, 'trial': 2068, 'rival': 2069, 'offend': 2070, 'imit': 2071, 'employ': 2072, 'che': 2073, 'basebal': 2074, 'weekend': 2075, 'pitch': 2076, 'max': 2077, 'fairi': 2078, 'europ': 2079, 'dig': 2080, 'complaint': 2081, 'beach': 2082, 'risk': 2083, 'purchas': 2084, 'murphi': 2085, 'format': 2086, 'titan': 2087, 'tini': 2088, 'reminisc': 2089, 'powel': 2090, 'nois': 2091, 'hype': 2092, 'harsh': 2093, 'glimps': 2094, 'bite': 2095, 'till': 2096, 'strip': 2097, 'prime': 2098, 'north': 2099, 'fals': 2100, 'asleep': 2101, '14': 2102, 'texa': 2103, 'revel': 2104, 'destruct': 2105, 'descript': 2106, 'africa': 2107, 'uninterest': 2108, 'surfac': 2109, 'spin': 2110, 'sitcom': 2111, 'semi': 2112, 'inner': 2113, 'excess': 2114, 'arrest': 2115, 'twin': 2116, 'massiv': 2117, 'makeup': 2118, 'maintain': 2119, 'hitchcock': 2120, 'dinosaur': 2121, 'controversi': 2122, 'argu': 2123, 'stare': 2124, 'reject': 2125, 'melodrama': 2126, 'ludicr': 2127, 'kim': 2128, 'insist': 2129, 'ideal': 2130, 'expens': 2131, 'supernatur': 2132, 'subplot': 2133, 'press': 2134, 'nail': 2135, 'host': 2136, 'ga': 2137, 'forest': 2138, 'erot': 2139, 'columbo': 2140, 'atroci': 2141, 'ala': 2142, 'presum': 2143, 'notch': 2144, 'identifi': 2145, 'dude': 2146, 'cant': 2147, 'plagu': 2148, 'method': 2149, 'guest': 2150, 'forgett': 2151, 'crude': 2152, 'closer': 2153, 'character': 2154, 'princess': 2155, 'lion': 2156, 'landscap': 2157, 'foster': 2158, 'ear': 2159, 'border': 2160, 
'beast': 2161, 'urban': 2162, 'storytel': 2163, 'previous': 2164, 'pacino': 2165, 'jungl': 2166, 'damag': 2167, 'bound': 2168, 'birth': 2169, 'aunt': 2170, 'accus': 2171, 'thirti': 2172, 'propaganda': 2173, 'nude': 2174, 'jess': 2175, 'guid': 2176, 'emma': 2177, 'doll': 2178, 'chose': 2179, 'whoever': 2180, 'warrior': 2181, 'pet': 2182, 'mate': 2183, 'mainstream': 2184, '25': 2185, 'upset': 2186, 'size': 2187, 'poster': 2188, 'merit': 2189, 'latest': 2190, 'gritti': 2191, 'friday': 2192, 'exact': 2193, 'deadli': 2194, 'cooper': 2195, 'wilson': 2196, 'warner': 2197, 'ton': 2198, 'sun': 2199, 'settl': 2200, 'rough': 2201, 'popul': 2202, 'corps': 2203, 'contest': 2204, 'contact': 2205, 'citizen': 2206, 'buff': 2207, 'blend': 2208, '1990': 2209, 'widow': 2210, 'select': 2211, 'rat': 2212, 'pitt': 2213, 'overcom': 2214, 'mgm': 2215, 'metal': 2216, 'environ': 2217, 'bu': 2218, 'alic': 2219, 'ted': 2220, 'revolut': 2221, 'particip': 2222, 'link': 2223, 'lift': 2224, 'guilti': 2225, 'prostitut': 2226, 'moron': 2227, 'matrix': 2228, 'johnson': 2229, 'exagger': 2230, 'corpor': 2231, 'corner': 2232, 'afternoon': 2233, 'accompani': 2234, '1960': 2235, 'sincer': 2236, 'multipl': 2237, 'leagu': 2238, 'instal': 2239, 'hood': 2240, 'holm': 2241, 'friendli': 2242, 'doom': 2243, 'clair': 2244, 'sunday': 2245, 'string': 2246, 'lugosi': 2247, 'junk': 2248, 'irish': 2249, 'hip': 2250, 'grim': 2251, 'examin': 2252, 'defend': 2253, 'campi': 2254, 'blah': 2255, 'aka': 2256, 'advis': 2257, 'varieti': 2258, 'tight': 2259, 'shut': 2260, 'shake': 2261, 'rachel': 2262, 'pro': 2263, 'icon': 2264, 'confid': 2265, 'sullivan': 2266, 'mexican': 2267, 'medic': 2268, 'jaw': 2269, 'goal': 2270, 'directli': 2271, 'denni': 2272, 'attach': 2273, 'vietnam': 2274, 'truck': 2275, 'terrorist': 2276, 'sentenc': 2277, 'sarah': 2278, 'prior': 2279, 'legendari': 2280, 'duke': 2281, 'dean': 2282, 'courag': 2283, 'breast': 2284, 'bourn': 2285, 'yell': 2286, 'un': 2287, 'split': 2288, 'proceed': 2289, 'nose': 2290, 
'hong': 2291, 'entri': 2292, 'donald': 2293, 'behav': 2294, 'unconvinc': 2295, 'swim': 2296, 'stolen': 2297, 'lifetim': 2298, 'jerk': 2299, 'gather': 2300, 'forth': 2301, 'everywher': 2302, 'crush': 2303, 'confess': 2304, 'concentr': 2305, 'buri': 2306, 'borrow': 2307, 'turkey': 2308, 'spite': 2309, 'pan': 2310, 'lip': 2311, 'julia': 2312, 'deliveri': 2313, 'california': 2314, 'reward': 2315, 'quest': 2316, 'proud': 2317, 'offici': 2318, 'hoffman': 2319, 'freeman': 2320, 'flight': 2321, 'downright': 2322, 'china': 2323, 'worthwhil': 2324, 'sir': 2325, 'sink': 2326, 'notori': 2327, 'lazi': 2328, 'jon': 2329, 'jail': 2330, 'inept': 2331, 'fade': 2332, 'fabul': 2333, 'encourag': 2334, 'betray': 2335, 'teeth': 2336, 'susan': 2337, 'survivor': 2338, 'storm': 2339, 'shower': 2340, 'retard': 2341, 'relev': 2342, 'lisa': 2343, 'imageri': 2344, 'cousin': 2345, 'branagh': 2346, 'bell': 2347, 'bag': 2348, 'tremend': 2349, 'trade': 2350, 'toler': 2351, 'summari': 2352, 'stab': 2353, 'shark': 2354, 'quirki': 2355, 'mexico': 2356, 'hugh': 2357, 'finger': 2358, 'facial': 2359, 'bride': 2360, 'alright': 2361, 'von': 2362, 'pose': 2363, 'hyster': 2364, 'ha': 2365, 'blown': 2366, 'bitter': 2367, 'scheme': 2368, 'ron': 2369, 'ned': 2370, 'larri': 2371, 'cruel': 2372, 'christ': 2373, 'bone': 2374, 'afterward': 2375, 'address': 2376, 'traci': 2377, 'tour': 2378, 'thumb': 2379, 'swear': 2380, 'snake': 2381, 'screw': 2382, 'pursu': 2383, 'feed': 2384, 'distinct': 2385, 'beg': 2386, 'stomach': 2387, 'raw': 2388, 'photo': 2389, 'occas': 2390, 'obscur': 2391, 'mechan': 2392, 'chair': 2393, 'southern': 2394, 'sidney': 2395, 'resist': 2396, 'render': 2397, 'necessarili': 2398, 'holiday': 2399, 'heavili': 2400, 'hardi': 2401, 'gruesom': 2402, 'chain': 2403, 'cabin': 2404, 'argument': 2405, 'understood': 2406, 'satan': 2407, 'racist': 2408, 'philip': 2409, 'indulg': 2410, 'india': 2411, 'tongu': 2412, 'stalk': 2413, 'pregnant': 2414, 'outfit': 2415, 'obnoxi': 2416, 'midnight': 2417, 'lay': 
2418, 'integr': 2419, 'fourth': 2420, 'forgot': 2421, 'belov': 2422, 'ticket': 2423, 'slapstick': 2424, 'restor': 2425, 'magazin': 2426, 'inhabit': 2427, 'garden': 2428, 'deeper': 2429, 'carol': 2430, '17': 2431, 'shoe': 2432, 'lincoln': 2433, 'incid': 2434, 'devot': 2435, 'brad': 2436, 'underground': 2437, 'sandler': 2438, 'maria': 2439, 'lili': 2440, 'guarante': 2441, 'elizabeth': 2442, 'divorc': 2443, 'disbelief': 2444, 'benefit': 2445, 'anticip': 2446, 'slave': 2447, 'princip': 2448, 'mildli': 2449, 'greater': 2450, 'explod': 2451, 'cring': 2452, 'creation': 2453, 'capit': 2454, 'bbc': 2455, 'amazingli': 2456, 'lesli': 2457, 'introduct': 2458, 'halfway': 2459, 'funnier': 2460, 'extraordinari': 2461, 'wreck': 2462, 'transfer': 2463, 'text': 2464, 'tap': 2465, 'punish': 2466, 'overwhelm': 2467, 'extent': 2468, 'enhanc': 2469, 'advantag': 2470, 'preview': 2471, 'plant': 2472, 'lo': 2473, 'lane': 2474, 'jessica': 2475, 'horrif': 2476, 'error': 2477, 'east': 2478, 'dynam': 2479, 'deliber': 2480, 'vincent': 2481, 'vacat': 2482, 'sophist': 2483, 'miscast': 2484, 'miller': 2485, 'homosexu': 2486, 'ensu': 2487, 'basi': 2488, 'appli': 2489, '2000': 2490, 'via': 2491, 'uncomfort': 2492, 'steel': 2493, 'spoken': 2494, 'sleazi': 2495, 'reed': 2496, 'measur': 2497, 'mansion': 2498, 'extend': 2499, 'elev': 2500, 'bollywood': 2501, 'stanley': 2502, 'savag': 2503, 'overact': 2504, 'mous': 2505, 'melt': 2506, 'hippi': 2507, 'goofi': 2508, 'fix': 2509, 'dentist': 2510, 'daili': 2511, 'conceiv': 2512, 'cathol': 2513, 'breathtak': 2514, 'blair': 2515, 'beer': 2516, 'assign': 2517, 'alter': 2518, 'succe': 2519, 'subsequ': 2520, 'sacrific': 2521, 'properli': 2522, 'oppos': 2523, 'nowaday': 2524, 'inspector': 2525, 'everyday': 2526, 'carpent': 2527, 'burt': 2528, 'neck': 2529, 'massacr': 2530, 'laura': 2531, 'circl': 2532, 'block': 2533, 'seagal': 2534, 'portrait': 2535, 'pool': 2536, 'mob': 2537, 'lesser': 2538, 'grey': 2539, 'fay': 2540, 'fallen': 2541, 'concert': 2542, 'christi': 
2543, 'access': 2544, 'usa': 2545, 'sinist': 2546, 'relax': 2547, 'react': 2548, 'jewish': 2549, 'jake': 2550, 'isol': 2551, 'competit': 2552, 'chees': 2553, 'suitabl': 2554, 'stink': 2555, 'spiritu': 2556, 'nonetheless': 2557, 'nine': 2558, 'lyric': 2559, 'ironi': 2560, 'immens': 2561, 'creep': 2562, 'chop': 2563, 'appal': 2564, '2006': 2565, 'user': 2566, 'spring': 2567, 'sold': 2568, 'showcas': 2569, 'shirt': 2570, 'retir': 2571, 'reduc': 2572, 'rage': 2573, 'nut': 2574, 'needless': 2575, 'navi': 2576, 'luci': 2577, 'franchis': 2578, 'adopt': 2579, 'zone': 2580, 'uninspir': 2581, 'stanwyck': 2582, 'per': 2583, 'nurs': 2584, 'jay': 2585, 'digit': 2586, 'bulli': 2587, 'bath': 2588, 'asham': 2589, 'upper': 2590, 'sutherland': 2591, 'oddli': 2592, 'laid': 2593, 'illustr': 2594, 'broadcast': 2595, 'amongst': 2596, '2001': 2597, '1940': 2598, 'throat': 2599, 'stylish': 2600, 'fulfil': 2601, 'disguis': 2602, 'brando': 2603, 'baker': 2604, 'aspir': 2605, 'wwii': 2606, 'wanna': 2607, 'thief': 2608, 'pride': 2609, 'pound': 2610, 'nobl': 2611, 'neighborhood': 2612, 'impli': 2613, 'endear': 2614, 'em': 2615, '18': 2616, 'tens': 2617, 'shoulder': 2618, 'shift': 2619, 'rochest': 2620, 'prop': 2621, 'distribut': 2622, 'diseas': 2623, 'dinner': 2624, 'dawn': 2625, 'coher': 2626, 'cinematograph': 2627, 'bo': 2628, 'bett': 2629, 'albeit': 2630, '16': 2631, 'wash': 2632, 'surf': 2633, 'snow': 2634, 'silenc': 2635, 'shout': 2636, 'rebel': 2637, 'poignant': 2638, 'matthau': 2639, 'knife': 2640, 'function': 2641, 'forti': 2642, 'contract': 2643, 'widmark': 2644, 'silver': 2645, 'reunion': 2646, 'proof': 2647, 'mindless': 2648, 'internet': 2649, 'instinct': 2650, 'horrend': 2651, 'henc': 2652, 'height': 2653, 'heat': 2654, 'elvira': 2655, 'eeri': 2656, 'duti': 2657, 'derek': 2658, 'chuck': 2659, 'cannib': 2660, 'cancel': 2661, 'torn': 2662, 'spielberg': 2663, 'repetit': 2664, 'premier': 2665, 'pie': 2666, 'neat': 2667, 'musician': 2668, 'mill': 2669, 'innov': 2670, 'incoher': 2671, 
'greatli': 2672, 'glori': 2673, 'etern': 2674, 'elvi': 2675, 'alik': 2676, 'absorb': 2677, 'wealthi': 2678, 'trite': 2679, 'redempt': 2680, 'racism': 2681, 'precis': 2682, 'nelson': 2683, 'lovabl': 2684, 'itali': 2685, 'infam': 2686, 'horrifi': 2687, 'homag': 2688, 'fbi': 2689, 'diamond': 2690, 'crisi': 2691, 'burton': 2692, 'britain': 2693, 'blank': 2694, 'bang': 2695, 'announc': 2696, 'wilder': 2697, 'streisand': 2698, 'resolut': 2699, 'pat': 2700, 'parallel': 2701, 'helen': 2702, 'happili': 2703, 'hammer': 2704, 'flop': 2705, 'ensembl': 2706, 'dedic': 2707, 'chaplin': 2708, 'triumph': 2709, 'st': 2710, 'plastic': 2711, 'oil': 2712, 'mar': 2713, 'factori': 2714, 'disagre': 2715, 'cube': 2716, 'conclud': 2717, 'carter': 2718, 'broke': 2719, 'weight': 2720, 'vega': 2721, 'row': 2722, 'rocket': 2723, 'own': 2724, 'march': 2725, 'fighter': 2726, 'climb': 2727, 'chuckl': 2728, 'bush': 2729, 'wherea': 2730, 'unforgett': 2731, 'thug': 2732, 'spare': 2733, 'sensibl': 2734, 'mst3k': 2735, 'meaning': 2736, 'lust': 2737, 'luca': 2738, 'kurt': 2739, 'enorm': 2740, 'dump': 2741, 'dane': 2742, 'boot': 2743, 'threat': 2744, 'stress': 2745, 'rap': 2746, 'karloff': 2747, 'fifti': 2748, 'engin': 2749, 'difficulti': 2750, 'dear': 2751, 'caricatur': 2752, 'butt': 2753, 'brand': 2754, 'bobbi': 2755, 'arnold': 2756, 'adequ': 2757, 'swing': 2758, 'secretari': 2759, 'ralph': 2760, 'polish': 2761, 'journalist': 2762, 'homeless': 2763, 'hamlet': 2764, 'flynn': 2765, 'fest': 2766, 'elabor': 2767, 'ego': 2768, 'barri': 2769, 'arrog': 2770, 'unbear': 2771, 'tool': 2772, 'spike': 2773, 'simpson': 2774, 'resort': 2775, 'puppet': 2776, 'induc': 2777, 'grate': 2778, 'float': 2779, 'fanci': 2780, 'conspiraci': 2781, 'arrang': 2782, 'tribut': 2783, 'pig': 2784, 'phillip': 2785, 'muppet': 2786, 'guilt': 2787, 'exercis': 2788, 'cruis': 2789, 'choreograph': 2790, 'boll': 2791, 'basement': 2792, 'ward': 2793, 'tower': 2794, 'toilet': 2795, 'stan': 2796, 'slip': 2797, 'scarecrow': 2798, 'puzzl': 2799, 
# (output truncated: word_dict maps ~5000 stemmed tokens to integer indices,
#  e.g. 'medium': 2800 ... 'epitom': 4999)
out = predict_fn(test_review, net)
print(out)
print(int(out))
```
### Deploying the model
Now that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container.
**NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string, so we need to construct a simple wrapper around the `RealTimePredictor` class to accommodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to send image data.
```
from sagemaker.predictor import RealTimePredictor
from sagemaker.pytorch import PyTorchModel
class StringPredictor(RealTimePredictor):
    def __init__(self, endpoint_name, sagemaker_session):
        super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')

model = PyTorchModel(model_data=estimator.model_data,
                     role=role,
                     framework_version='0.4.0',
                     entry_point='predict.py',
                     source_dir='serve',
                     predictor_cls=StringPredictor)

predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
```
### Testing the model
Now that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and `250` negative reviews, sending them to the endpoint, and collecting the results. The reason for only sending some of the data is that the amount of time it takes for our model to process the input and then perform inference is quite long, so testing the entire data set would be prohibitive.
```
import os
import glob
def test_reviews(data_dir='../data/aclImdb', stop=250):
results = []
ground = []
# We make sure to test both positive and negative reviews
for sentiment in ['pos', 'neg']:
path = os.path.join(data_dir, 'test', sentiment, '*.txt')
files = glob.glob(path)
files_read = 0
print('Starting ', sentiment, ' files')
# Iterate through the files and send them to the predictor
for f in files:
with open(f) as review:
# First, we store the ground truth (was the review positive or negative)
if sentiment == 'pos':
ground.append(1)
else:
ground.append(0)
# Read in the review and convert to 'utf-8' for transmission via HTTP
review_input = review.read().encode('utf-8')
# Send the review to the predictor and store the results
results.append(int(predictor.predict(review_input)))
# Sending reviews to our endpoint one at a time takes a while so we
# only send a small number of reviews
files_read += 1
if files_read == stop:
break
return ground, results
ground, results = test_reviews()
from sklearn.metrics import accuracy_score
accuracy_score(ground, results)
```
As an additional test, we can try sending the `test_review` that we looked at earlier.
```
predictor.predict(test_review)
```
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back.
## Step 7 (again): Use the model for the web app
> **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console.
So far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are set up currently makes that not possible since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which included access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services.
<img src="Web App Diagram.svg">
The diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return.
In the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and receive data from a SageMaker endpoint.
Lastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function.
### Setting up a Lambda function
The first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result.
#### Part A: Create an IAM Role for the Lambda function
Since we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function.
Using the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**.
In the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**.
Lastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**.
#### Part B: Create a Lambda function
Now it is time to actually create the Lambda function.
Using the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**.
On the next page you will see some information about the Lambda function you've just created. If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below.
```python
# We need to use the low-level library to interact with SageMaker since the SageMaker API
# is not available natively through Lambda.
import boto3
def lambda_handler(event, context):
# The SageMaker runtime is what allows us to invoke the endpoint that we've created.
runtime = boto3.Session().client('sagemaker-runtime')
# Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given
response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**', # The name of the endpoint we created
ContentType = 'text/plain', # The data format that is expected
Body = event['body']) # The actual review
# The response is an HTTP response whose body contains the result of our inference
result = response['Body'].read().decode('utf-8')
return {
'statusCode' : 200,
'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },
'body' : result
}
```
Once you have copied and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below.
```
predictor.endpoint
```
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function.
### Setting up API Gateway
Now that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.
Using AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.
On the next page, make sure that **New API** is selected and give the new API a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.
Now we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.
Select the **Actions** dropdown menu and click **Create Method**. A new blank method will be created, select its dropdown menu and select **POST**, then click on the check mark beside it.
For the integration point, make sure that **Lambda Function** is selected and click on the **Use Lambda Proxy integration**. This option makes sure that the data that is sent to the API is then sent directly to the Lambda function with no processing. It also means that the return value must be a proper response object as it will also not be processed by API Gateway.
Type the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.
The last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`.
You have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**.
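Before editing the web page, the public API can be exercised directly from Python. The sketch below uses only the standard library; the URL and review text shown in the comment are placeholders, so substitute your own Invoke URL:

```python
import urllib.request

def build_review_request(api_url, review_text):
    # Build the plain-text POST request that the web app will also send.
    return urllib.request.Request(
        api_url,
        data=review_text.encode('utf-8'),
        headers={'Content-Type': 'text/plain'},
        method='POST')

def query_sentiment_api(api_url, review_text):
    # POST the review to the API Gateway endpoint and return the response body.
    with urllib.request.urlopen(build_review_request(api_url, review_text)) as response:
        return response.read().decode('utf-8')

# Example (requires the endpoint and API to be live):
# query_sentiment_api('https://<api-id>.execute-api.<region>.amazonaws.com/prod',
#                     'This movie was a wonderful surprise!')
```

Because the API uses Lambda proxy integration, the body of the POST is handed to the Lambda function unchanged, which is why a bare plain-text review works here.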
## Step 7 (continued): Deploying our web app
Now that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static html file which can make use of the public api you created earlier.
In the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\*\*REPLACE WITH PUBLIC API URL\*\***. Replace this string with the url that you wrote down in the last step and then save the file.
Now, if you open `index.html` on your local computer, your browser will behave as a local web server and you can use the provided site to interact with your SageMaker model.
If you'd like to go further, you can host this html file anywhere you'd like, for example using github or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!
> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill.
**TODO:** Make sure that you include the edited `index.html` file in your project submission.
Now that your web app is working, try playing around with it and see how well it works.
**Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review?
**Answer:**
Example Review from RottenTomato
"It's a great start to the season, providing Ali a wonderful character in Wayne Hays, and creator Nic Pizzolatto and crew have crafted a mesmerizing beginning about the power of the past as someone loses grip of it."
It's predicted to be positive. And, that's correct! :)
Another example review from RottenTomato
"There are a few thoughtfully placed cameras and thrilling moments - Bruce Willis vs. a door, for one - but they're not nearly enough to make this self-conscious live-action comic book worthwhile."
It's predicted to be negative. And, that's correct, too! :)
### Delete the endpoint
Remember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running so if you forget and leave it on you could end up with an unexpectedly large bill.
```
predictor.delete_endpoint()
```
```
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
from scipy.optimize import minimize_scalar, minimize
from time import time
import seaborn as sns
sns.set_style('darkgrid')
sns.set_context('paper')
import sys
sys.path.append('..')
from osd import Problem
from osd.components import GaussNoise, SmoothSecondDifference, SparseFirstDiffConvex, SparseSecondDiffConvex
from osd.utilities import progress
import cvxpy as cvx
# SOLVER = 'MOSEK'
SOLVER = 'SCS'
```
# Convex example, $K=3$
```
np.random.seed(142)
t = np.linspace(0, 250, 1000)
c0 = 0.1 * np.random.randn(len(t))
c2 = 2 * np.abs(signal.sawtooth(2 * np.pi / 50 * t))
# c3 = 0.5 * (np.sin(2 * np.pi * t * 5 / (500.)) + np.cos(2 * np.pi * t * 7 / (550.)))
c3 = 0.25 * (np.sin(2 * np.pi * t * 5 / (500.)) + np.cos(2 * np.pi * t * 2.5 / (500.) - 50))
y = np.sum([c0, c2, c3], axis=0)
signal1 = c2
signal2 = c3
components = [c0, c2, c3]
# np.random.seed(42)
# t = np.linspace(0, 1000, 3000)
# signal1 = np.sin(2 * np.pi * t * 1 / (500.))
# signal2 = signal.square(2 * np.pi * t * 1 / (450.))
# y = signal1 + signal2 + 0.25 * np.random.randn(len(signal1))
plt.figure(figsize=(10, 6))
plt.plot(t, signal1 + signal2, label='true signal minus noise')
plt.plot(t, y, alpha=0.5, label='observed signal')
plt.legend()
plt.show()
```
# Solve problem all at once with CVXPY
```
problem = Problem(data=y, components=[GaussNoise, SparseSecondDiffConvex(vmax=2, vmin=0),
SmoothSecondDifference])
problem.weights.value = [1, 2e0, 1e4]
problem.decompose(solver='MOSEK')
problem.problem.value
fig, ax = plt.subplots(nrows=3, figsize=(10//1.1, 12//1.5))
ax[0].plot(t, signal1, label='hidden component 1', ls='--')
ax[0].plot(t, problem.estimates[1], label='estimate 1')
ax[1].plot(t, signal2, label='hidden component 2', ls='--')
ax[1].plot(t, problem.estimates[2], label='estimate 2')
ax[2].plot(t, signal1 + signal2, label='true composite signal', ls='--')
ax[2].plot(t, problem.estimates[1] + problem.estimates[2], label='estimated signal');
ax[2].plot(t, y, label='observed signal', linewidth=1, marker='.', alpha=0.1);
for a in ax:
a.legend()
foo = cvx.Parameter((2, 3), value=np.array([[1, 0, 0], [0, 0, 1]]))
bar = cvx.Variable(3)
foo @ bar
bar[foo]
foo.value
problem.problem.parameters()
import cvxpy as cvx
import torch
from cvxpylayers.torch import CvxpyLayer
# def create_layer(osd_problem):
# prob = osd_problem.problem
# layer = CvxpyLayer(
# prob,
# parameters=prob.parameters(),
# variables=prob.variables())
# return layer
def create_layer(signal_length, index_set):
n = signal_length
y_cvx = cvx.Variable(n)
x1_cvx = cvx.Variable(n)
x2_cvx = cvx.Variable(n)
x3_cvx = cvx.Variable(n)
y_data = cvx.Parameter(n)
weight_param = cvx.Parameter(2, pos=True)
costs = [cvx.sum_squares(x1_cvx), cvx.sum_squares(cvx.diff(x2_cvx, k=2)), cvx.sum(cvx.abs(cvx.diff(x3_cvx, k=1)))]
objective = costs[0] + weight_param[0] * costs[1] + weight_param[1] * costs[2]
constraints = [
y_cvx == x1_cvx + x2_cvx + x3_cvx,
y_cvx[index_set] - y_data[index_set] == 0
]
prob = cvx.Problem(cvx.Minimize(objective), constraints)
layer = CvxpyLayer(
prob,
parameters=[y_data, weight_param],
variables=[x1_cvx, x2_cvx, x3_cvx]
)
return layer
index_set = np.random.uniform(size=len(y)) > 0.2
layer = create_layer(len(y), index_set)
import torch
from torch.utils.data import TensorDataset, DataLoader
import numpy as np
from cvxpylayers.torch import CvxpyLayer
torch.set_default_dtype(torch.double)
from tqdm.notebook import tqdm
def fit(loss, params, X, Y, Xval, Yval, batch_size=128, lr=1e-3, epochs=100, verbose=False, print_every=1, callback=None):
"""
Arguments:
loss: given x and y in batched form, evaluates loss.
params: list of parameters to optimize.
X: input data, torch tensor.
Y: output data, torch tensor.
Xval: input validation data, torch tensor.
Yval: output validation data, torch tensor.
"""
train_dset = TensorDataset(X, Y)
train_loader = DataLoader(train_dset, batch_size=batch_size, shuffle=True)
opt = torch.optim.Adam(params, lr=lr)
train_losses = []
val_losses = []
for epoch in tqdm(range(epochs)):
if callback is not None:
callback()
with torch.no_grad():
val_losses.append(loss(Xval, Yval).item())
if verbose and epoch % print_every == 0:
print("val loss %03d | %3.5f" % (epoch + 1, val_losses[-1]))
batch = 1
train_losses.append([])
for Xbatch, Ybatch in train_loader:
opt.zero_grad()
l = loss(Xbatch, Ybatch)
l.backward()
opt.step()
train_losses[-1].append(l.item())
if verbose and epoch % print_every == 0:
print("batch %03d / %03d | %3.5f" %
(batch, len(train_loader), np.mean(train_losses[-1])))
batch += 1
return val_losses, train_losses
weights_tch = torch.tensor([1e7, 1e1], requires_grad=True)
def loss_fn(X, actual, cvx_layer):
preds = cvx_layer(X, weights_tch)[0]
mse_per_example = (preds - actual).pow(2).mean(axis=1)
return mse_per_example.mean()
weights_tch = torch.tensor([1e7, 1e1], requires_grad=True)
layer(torch.tensor(y, requires_grad=True), weights_tch)
```
# Simple implementation of ADMM algorithm
Nothing fancy here. Just a quick and dirty implementation of the three proximal operators.
```
def prox1(v, theta, rho):
r = rho / (2 * theta + rho)
return r * v
def prox2(v, theta, rho, A=None, return_A=True):
if A is None:
n = len(v)
M = np.diff(np.eye(n), axis=0, n=2)
r = 2 * theta / rho
A = np.linalg.inv(np.eye(n) + r * M.T.dot(M))
if not return_A:
return A.dot(v)
else:
return A.dot(v), A
def prox3_cvx(v, theta, rho):
n = len(v)
M = np.diff(np.eye(n), axis=0, n=1)
x = cvx.Variable(n)
cost = theta * cvx.norm1(cvx.diff(x)) + (rho / 2) * cvx.sum_squares(x - v)
problem = cvx.Problem(cvx.Minimize(cost), [cvx.sum(x) == 0])
problem.solve(solver='MOSEK')
return x.value
def calc_obj(y, x2, x3, rho1=1, rho2=1e7, rho3=1e1):
x1 = y - x2 - x3
t1 = rho1 * np.sum(np.power(x1, 2))
t2 = rho2 * np.sum(np.power(np.diff(x2, 2), 2))
t3 = rho3 * np.sum(np.abs(np.diff(x3, 1)))
return t1 + t2 + t3
def run_admm(data, num_iter=50, rho=0.5, verbose=True, prox3=prox3_cvx):
y = data
A = None
u = np.zeros_like(y)
x1 = y / 3
x2 = y / 3
x3 = y / 3
residuals = []
obj_vals = []
ti = time()
for it in range(num_iter):
if verbose:
td = time() - ti
progress(it, num_iter, '{:.2f} sec'.format(td))
x1 = prox1(x1 - u, 1, rho)
x2, A = prox2(x2 - u, 1e7, rho, A=A, return_A=True)
x3 = prox3(x3 - u, 1e1, rho)
u += 2 * (np.average([x1, x2, x3], axis=0) - y / 3)
# mean-square-error
error = np.sum([x1, x2, x3], axis=0) - y
mse = np.sum(np.power(error, 2)) / error.size
residuals.append(mse)
obj_vals.append(calc_obj(y, x2, x3))
if verbose:
td = time() - ti
progress(it + 1, num_iter, '{:.2f} sec\n'.format(td))
outdict = {
'x1': x1,
'x2': x2,
'x3': x3,
'u': u,
'residuals': residuals,
'obj_vals': obj_vals
}
return outdict
run1 = run_admm(y, num_iter=1000, rho=1e-1)
run2 = run_admm(y, num_iter=1000, rho=1e0)
run3 = run_admm(y, num_iter=1000, rho=1e1)
error = np.sum(problem.estimates, axis=0) - y
mse = np.sum(np.power(error, 2)) / error.size
plt.figure(figsize=(10,8))
plt.plot(run1['residuals'], label='$\\rho=0.1$', linewidth=1)
plt.plot(run2['residuals'], label='$\\rho=1$', linewidth=1)
plt.plot(run3['residuals'], label='$\\rho=10$', linewidth=1)
plt.axhline(mse, ls='--', color='red', label='cvxpy')
plt.yscale('log')
plt.legend(loc=1)
plt.title('Infeasibility')
plt.xlabel('iteration');
plt.plot(run1['obj_vals'], label='admm_run1', linewidth=1)
plt.plot(run2['obj_vals'], label='admm_run2', linewidth=1)
plt.plot(run3['obj_vals'], label='admm_run3', linewidth=1)
plt.axhline(problem.problem.value, ls='--', color='red', label='cvxpy')
plt.legend()
plt.title('Objective Value')
plt.xlabel('iteration')
plt.ylim(260, 270);
plt.plot(1e0 * run2['u'], problem.problem.constraints[-1].dual_value, ls='none', marker='.')
plt.xlabel('ADMM $\\nu = \\rho u$')
plt.ylabel('CVXPY dual value');
fig, ax = plt.subplots(nrows=3, figsize=(10//1.1, 12//1.5))
ax[0].plot(t, signal1, label='hidden component 1', ls='--')
ax[0].plot(t, problem.estimates[1], label='CVXPY estimate 1')
ax[0].plot(t, run2['x2'], label='ADMM estimate 1')
ax[1].plot(t, signal2, label='hidden component 2', ls='--')
ax[1].plot(t, problem.estimates[2], label='CVXPY estimate 2')
ax[1].plot(t, run2['x3'], label='ADMM estimate 2')
ax[2].plot(t, signal1 + signal2, label='true composite signal', ls='--')
ax[2].plot(t, problem.estimates[1] + problem.estimates[2], label='CVXPY estimated signal');
ax[2].plot(t, run2['x2'] + run2['x3'], label='ADMM estimated signal');
ax[2].plot(t, y, label='observed signal', linewidth=1, marker='.', alpha=0.1);
for a in ax:
a.legend()
```
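As a sanity check, the closed form in `prox1` can be verified numerically against its defining minimization problem, $\theta\|x\|_2^2 + (\rho/2)\|x-v\|_2^2$. The operator and its objective are reproduced below so the check is self-contained; since the objective is convex, perturbing the returned point in any direction should never decrease it:

```python
import numpy as np

def prox1(v, theta, rho):
    # Minimizer of theta*||x||^2 + (rho/2)*||x - v||^2: setting the gradient
    # 2*theta*x + rho*(x - v) to zero gives x = rho/(2*theta + rho) * v.
    return rho / (2 * theta + rho) * v

def prox_objective(x, v, theta, rho):
    # The objective that prox1 should minimize over x.
    return theta * np.sum(x ** 2) + (rho / 2) * np.sum((x - v) ** 2)
```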
# Non-convex model
Replace the heuristic for a sparse first difference with the constraint that $x^3\in\left\{-1,1\right\}^T$. Objective function is calculated using the L1-heuristic to allow for an apples-to-apples comparison to previous results.
```
def prox3_noncvx(v, theta, rho):
v1 = np.ones_like(v)
v2 = -1 * np.ones_like(v)
d1 = np.abs(v - v1)
d2 = np.abs(v - v2)
x = np.ones_like(v1)
x[d2 < d1] = -1
return x
run_noncvx = run_admm(y, num_iter=1000, rho=5, prox3=prox3_noncvx)
r = np.linalg.norm(
np.average(problem.estimates, axis=0) - y / 3
)
plt.plot(run1['residuals'], label='run1')
plt.plot(run2['residuals'], label='run2')
plt.plot(run3['residuals'], label='run3')
plt.plot(run_noncvx['residuals'], label='run_noncvx', ls='-.')
plt.axhline(r, ls='--', color='red', label='cvxpy')
plt.yscale('log')
plt.legend()
plt.title('Infeasibility')
plt.xlabel('iteration');
plt.plot(run1['obj_vals'], label='run1')
plt.plot(run2['obj_vals'], label='run2')
plt.plot(run3['obj_vals'], label='run3')
plt.plot(run_noncvx['obj_vals'], label='run_noncvx', ls='-.')
plt.axhline(problem.problem.objective.value, ls='--', color='red', label='cvxpy')
plt.legend()
plt.title('Objective Value')
plt.xlabel('iteration')
plt.ylim(260, 400);
fig, ax = plt.subplots(nrows=3, figsize=(10//1.1, 12//1.5))
ax[0].plot(t, signal1, label='hidden component 1', ls='--')
ax[0].plot(t, problem.estimates[1], label='CVXPY estimate 1')
ax[0].plot(t, run_noncvx['x2'], label='ADMM estimate 1')
ax[1].plot(t, signal2, label='hidden component 2', ls='--')
ax[1].plot(t, problem.estimates[2], label='CVXPY estimate 2')
ax[1].plot(t, run_noncvx['x3'], label='ADMM estimate 2')
ax[2].plot(t, signal1 + signal2, label='true composite signal', ls='--')
ax[2].plot(t, problem.estimates[1] + problem.estimates[2], label='CVXPY estimated signal');
ax[2].plot(t, run_noncvx['x2'] + run_noncvx['x3'], label='ADMM estimated signal');
ax[2].plot(t, y, label='observed signal', linewidth=1, marker='.', alpha=0.1);
for a in ax:
a.legend()
```
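Note that `prox3_noncvx` above is simply the elementwise Euclidean projection onto $\{-1, 1\}$, with `theta` and `rho` ignored; it is reproduced here in a compact form for a quick standalone check:

```python
import numpy as np

def prox3_noncvx(v, theta=None, rho=None):
    # Elementwise projection onto {-1, +1}: pick whichever point is closer.
    # Ties at v = 0 resolve to +1, matching the implementation above.
    v = np.asarray(v, dtype=float)
    x = np.ones_like(v)
    x[np.abs(v + 1) < np.abs(v - 1)] = -1
    return x
```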
# Swish-based classifier
- Swish activation, 4 layers, 100 neurons per layer
- Validation score uses an ensemble of 10 models weighted by loss
### Import modules
```
%matplotlib inline
from __future__ import division
import sys
import os
sys.path.append('../')
from Modules.Basics import *
from Modules.Class_Basics import *
```
## Options
```
with open(dirLoc + 'features.pkl', 'rb') as fin:
classTrainFeatures = pickle.load(fin)
nSplits = 10
patience = 50
maxEpochs = 200
ensembleSize = 10
ensembleMode = 'loss'
compileArgs = {'loss':'binary_crossentropy', 'optimizer':'adam'}
trainParams = {'epochs' : 1, 'batch_size' : 256, 'verbose' : 0}
modelParams = {'version':'modelSwish', 'nIn':len(classTrainFeatures), 'compileArgs':compileArgs, 'mode':'classifier'}
print ("\nTraining on", len(classTrainFeatures), "features:", [var for var in classTrainFeatures])
```
## Import data
```
trainData = BatchYielder(h5py.File(dirLoc + 'train.hdf5', "r+"))
```
## Determine LR
```
lrFinder = batchLRFind(trainData, getModel, modelParams, trainParams,
lrBounds=[1e-5,1e-1], trainOnWeights=True, verbose=0)
```
## Train classifier
```
results, histories = batchTrainClassifier(trainData, nSplits, getModel,
{**modelParams, 'compileArgs':{**compileArgs, 'lr':2e-3}},
trainParams, trainOnWeights=True, maxEpochs=maxEpochs,
patience=patience, verbose=1, amsSize=250000)
```
Comparing to the ReLU baseline, the Swish model reaches a lower loss (3.26e-5 compared to 3.29e-5) and a higher AMS (3.57 compared to 3.49)
## Construct ensemble
```
with open('train_weights/resultsFile.pkl', 'rb') as fin:
results = pickle.load(fin)
ensemble, weights = assembleEnsemble(results, ensembleSize, ensembleMode, compileArgs)
```
## Response on validation data
```
valData = BatchYielder(h5py.File(dirLoc + 'val.hdf5', "r+"))
batchEnsemblePredict(ensemble, weights, valData, ensembleSize=ensembleSize, verbose=1)
print('Testing ROC AUC: unweighted {}, weighted {}'.format(roc_auc_score(getFeature('targets', valData.source), getFeature('pred', valData.source)),
roc_auc_score(getFeature('targets', valData.source), getFeature('pred', valData.source), sample_weight=getFeature('weights', valData.source))))
amsScanSlow(convertToDF(valData.source))
%%time
bootstrapMeanAMS(convertToDF(valData.source), N=512)
```
Comparing the Swish model to the ReLU baseline on the validation data, we find improvements in both the overall AMS (3.78 cf. 3.72) and the AMS corresponding to the mean cut (3.72 cf. 3.64).
Since the Swish model shows improvements on both the training and validation data, we can conclude that the Swish activation function provides genuine, reproducible improvements to the architecture.
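For reference, the Swish activation used here is defined as swish(x) = x · sigmoid(βx), with β = 1 in the standard form; a minimal NumPy sketch, independent of the model code above:

```python
import numpy as np

def swish(x, beta=1.0):
    # x * sigmoid(beta * x); for beta = 1 this is the standard Swish/SiLU.
    # x * 1/(1 + exp(-beta*x)) is written as a single division below.
    return x / (1.0 + np.exp(-beta * x))
```

It behaves like the identity for large positive inputs and decays smoothly to zero for large negative ones, which is the non-monotonic "bump" that distinguishes it from ReLU.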
# Test scoring
```
testData = BatchYielder(h5py.File(dirLoc + 'testing.hdf5', "r+"))
%%time
batchEnsemblePredict(ensemble, weights, testData, ensembleSize=ensembleSize, verbose=1)
scoreTestOD(testData.source, 0.9438693160191178)
```
# Save/Load
```
name = "weights/Swish"
saveEnsemble(name, ensemble, weights, compileArgs)
ensemble, weights, compileArgs, _, _ = loadEnsemble(name)
```
# Replication - High Dimensional Case2 - Table
Here we provide a notebook to replicate the summary tables for the high-dimensional case simulation.
The notebook replicates the results in:
- /out/simulation/tables/sim_hd2*
The main script can be found at:
- /scripts/simulation/tables/highdimensional_case2.py
## Please choose the setup for replication:
```
suffix = 'rank5' # rank5, rank50
R_suffix = 'R_lasso_theta_1se' # 'R_lasso_theta', 'R_lasso_theta_1se', 'R_Alasso1_theta', 'R_Alasso1_theta_1se', 'R_Alasso2_theta', 'R_Alasso2_theta_1se', 'R_SCAD_theta', 'R_MCP_theta'
# Modules
# =======================================================================================================================
import os
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
sim_name = 'sim_hd2'
# Function
# =======================================================================================================================
def custom_mean(X, W, col_idx):
'''
- average over the entries of an array selected by an indexing matrix
X :: array to apply mean along axis=0
W :: indexes which elements to use for the mean computation
col_idx :: indexing the columns where W is applied - otherwise standard mean without selecting elements
'''
m = []
assert X.shape == W.shape
N, M = X.shape
for jj in range(M):
if col_idx[jj] == True:
m.append(np.mean(X[W[:, jj], jj]))
else:
m.append(np.mean(X[:, jj]))
return(np.asarray(m))
def custom_var(X, W, col_idx):
'''
- variance over the entries of an array selected by an indexing matrix
X :: array to apply variance along axis=0
W :: indexes which elements to use for the variance computation
col_idx :: indexes the columns where W is applied - otherwise standard variance without selecting elements
'''
m = []
assert X.shape == W.shape
N, M = X.shape
for jj in range(M):
if col_idx[jj] == True:
m.append(np.var(X[W[:, jj], jj]))
else:
m.append(np.var(X[:, jj]))
return(np.asarray(m))
# Simulation Settings
# =======================================================================================================================
I = 750
P = 1000
theta = np.concatenate((np.asarray([-0.5, 0.7, 1.2, 0.65, -0.9, 1.4, 0.2, -0.4, -1.3, 0.1]), np.zeros((990,))))[:, None]
# Overall Parameters
# =======================================================================================================================
url = 'https://raw.githubusercontent.com/alexwjung/ProbCox/main/paper/ProbCox/out/simulation/sim_hd2/N_obs.txt'
N_obs = pd.read_csv(url, header=None, sep=';')
print('Obs: ', np.min(N_obs.iloc[:, 1]), np.median(N_obs.iloc[:, 1]), np.max(N_obs.iloc[:, 1]))
print('Censorship: ', np.min(1-N_obs.iloc[:, 2]/I), np.median(1-N_obs.iloc[:, 2]/I), np.max(1-N_obs.iloc[:, 2]/I))
#print('Tied Events', np.min(N_obs.iloc[:, 3]), np.median(N_obs.iloc[:, 3]), np.max(N_obs.iloc[:, 3]))
# ProbCox Table
# =======================================================================================================================
res = np.zeros((P, 7))
res[:, 0] = theta[:, 0]
url1 = 'https://raw.githubusercontent.com/alexwjung/ProbCox/main/paper/ProbCox/out/simulation/sim_hd2/probcox' + suffix +'_theta.txt'
url2 = 'https://raw.githubusercontent.com/alexwjung/ProbCox/main/paper/ProbCox/out/simulation/sim_hd2/probcox' + suffix +'_theta_lower.txt'
url3 = 'https://raw.githubusercontent.com/alexwjung/ProbCox/main/paper/ProbCox/out/simulation/sim_hd2/probcox' + suffix +'_theta_upper.txt'
theta_est = pd.read_csv(url1, header=None, sep=';')
theta_est_lower = pd.read_csv(url2, header=None, sep=';')
theta_est_upper = pd.read_csv(url3, header=None, sep=';')
theta_est = theta_est.dropna(axis=0)
theta_est = theta_est.groupby(0).first().reset_index()
theta_est = theta_est.iloc[:, :-1]
assert theta_est.shape[0] == 200
theta_est_lower = theta_est_lower.dropna(axis=0)
theta_est_lower = theta_est_lower.groupby(0).first().reset_index()
theta_est_lower = theta_est_lower.iloc[:, :-1]
assert theta_est_lower.shape[0] == 200
theta_est_upper = theta_est_upper.dropna(axis=0)
theta_est_upper = theta_est_upper.groupby(0).first().reset_index()
theta_est_upper = theta_est_upper.iloc[:, :-1]
assert theta_est_upper.shape[0] == 200
theta_bound = theta_est_lower.merge(theta_est_upper, how='inner', on=0)
theta_bound = theta_bound.merge(theta_est, how='inner', on=0)
theta_est = np.asarray(theta_bound.iloc[:, -P:]).astype(float)
theta_bound = theta_bound.iloc[:, :-P]
theta_bound = np.asarray(theta_bound.iloc[:, 1:]).astype(float)
theta_est_lower = np.asarray(theta_est_lower.iloc[:, 1:])
theta_est_upper = np.asarray(theta_est_upper.iloc[:, 1:])
W = np.sign(theta_est_lower) == np.sign(theta_est_upper) # non zero parameters estimates (based on HPD95%)
col_idx = np.logical_and(np.squeeze(theta != 0), np.sum(W, axis=0) > 5) # true non-zero parameters
res[:, 1] = custom_mean(theta_est, W, col_idx)
res[:, 2] = np.sqrt(custom_var(theta_est, W, col_idx))
res[:, 3] = np.sqrt(custom_mean((theta_est - theta[:, 0][None, :])**2, W, col_idx))
res[:, 4] = custom_mean(theta_bound[:, -P:] - theta_bound[:, :P], W, col_idx)
res[:, 5] = custom_mean(np.logical_and(np.squeeze(theta)[None, :] >= theta_bound[:, :P], np.squeeze(theta)[None, :] <= theta_bound[:, -P:])
, W, col_idx)
res[:, 6] = np.mean(W, axis=0)
res = np.round(res, 2)
#pd.DataFrame(res) # full table with 0 parameters
pd.DataFrame(res[:10, :])
# column headings
#$\theta$ $\bar{\hat{\theta}}$ $\overline{\sigma_{\hat{\theta}}}$ $RMSE$ $\overline{HPD}_{95\%}$ $Coverage_{95\%}$ $p_{|\hat{\theta}| > 0}$
# Evaluating identification
theta_est_lower = theta_bound[:, :1000]
theta_est_upper = theta_bound[:, 1000:]
identified = np.sign(theta_est_lower) == np.sign(theta_est_upper)
pd.DataFrame(np.concatenate((
np.round(np.mean(np.sum(identified, axis=1)))[None, None],
np.round(np.sqrt(np.var(np.sum(identified, axis=1))))[None, None],
np.round(np.mean(np.sum(identified * np.squeeze(theta == 0)[None, :], axis=1)))[None, None]
), axis=1))
# column headings
# number of covariates identified standard error falsly identified
# R-Cox Table
# =======================================================================================================================
res = np.zeros((P, 7))
res[:, 0] = theta[:, 0]
url = 'https://raw.githubusercontent.com/alexwjung/ProbCox/main/paper/ProbCox/out/simulation/sim_hd2/' + R_suffix + '.txt'
theta_est = pd.read_csv(url, header=None, sep=';')
theta_est = theta_est.dropna(axis=0)
theta_est = theta_est.groupby(0).first().reset_index()
theta_est = np.asarray(theta_est.iloc[:, 1:])
assert theta_est.shape[0] == 200
W = theta_est != 0 # non-zero parameter estimates (selected by the penalized R fit, not by HPD intervals)
col_idx = np.logical_and(np.squeeze(theta != 0), np.sum(W, axis=0) > 5) # true non-zero parameters
res[:, 1] = custom_mean(theta_est, W, col_idx)
res[:, 2] = np.sqrt(custom_var(theta_est, W, col_idx))
res[:, 3] = np.sqrt(custom_mean((theta_est - theta[:, 0][None, :])**2, W, col_idx))
res[:, 6] = np.mean(W, axis=0)
res = np.round(res, 2)
# pd.DataFrame(res) # full table with 0 parameters
res = pd.DataFrame(res[:10, :])
res.iloc[:, 4] = '-'
res.iloc[:, 5] = '-'
res
# column headings
#$\theta$ $\bar{\hat{\theta}}$ $\overline{\sigma_{\hat{\theta}}}$ $RMSE$ $\overline{CI}_{95\%}$ $Coverage_{95\%}$ $p_{|\hat{\theta}| > 0}$
# Evaluating identification
n_identified = np.sum(theta_est != 0, axis=1)
n_false = np.sum((theta_est != 0) * np.squeeze(theta == 0)[None, :], axis=1)
pd.DataFrame(np.asarray([[np.round(np.mean(n_identified)), np.round(np.sqrt(np.var(n_identified))), np.round(np.mean(n_false))]]))
# column headings
# number of covariates identified    standard error    falsely identified
```
| github_jupyter |
# Jacobi Method
From: https://en.wikipedia.org/wiki/Jacobi_method :
#### Jacobi Method
In numerical linear algebra, the Jacobi method is an iterative algorithm for determining the solutions of a diagonally dominant system of linear equations.
<br>
<br>
#### Convergence
A sufficient (but not necessary) condition for the method to converge is that the matrix A is strictly or irreducibly diagonally dominant.
<br>
<br>
#### Description
Let
:$A\mathbf x = \mathbf b$
be a square system of $n$ linear equations, where:
$A=\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x_{1} \\ x_2 \\ \vdots \\ x_n \end{bmatrix} , \qquad \mathbf{b} = \begin{bmatrix} b_{1} \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.$
Then ''A'' can be decomposed into a diagonal matrix $D$, and the remainder $R$:
:$A=D+R \qquad \text{where} \qquad D = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\0 & 0 & \cdots & a_{nn} \end{bmatrix} \text{ and } R = \begin{bmatrix} 0 & a_{12} & \cdots & a_{1n} \\ a_{21} & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & 0 \end{bmatrix}. $
The solution is then obtained iteratively via
:$ \mathbf{x}^{(k+1)} = D^{-1} (\mathbf{b} - R \mathbf{x}^{(k)}), $
where $\mathbf{x}^{(k)}$ is the $k$th approximation or iteration of $\mathbf{x}$ and $\mathbf{x}^{(k+1)}$ is the next or $k$ + 1 iteration of $\mathbf{x}$.
$$x^{(k+1)}=D^{-1}(b - Rx^{(k)})$$
#### Equivalently:
##### (The following equations are used in the code below):
$$x^{(k+1)}= Tx^{(k)} + C $$
$$T=-D^{-1}R $$
$$C = D^{-1}b $$
#### Stop Condition:
$$ \lVert X^{(k+1)} - X^{(k)} \rVert_2 \le 10^{-4}$$
```
import numpy as np
def jacobi(A, b, initial_guess):
    # Extract the diagonal elements of the input matrix A:
    diagonal = np.diag(A)
    D = np.diagflat(diagonal)
    # Calculate the inverse of D:
    D_inv = np.linalg.inv(D)
    # Calculate R:
    R = A - D
    # The matrix-multiplication operator in numpy is @
    T = -D_inv @ R
    C = D_inv @ b
    x = initial_guess
    while True:
        x_old = x
        x = T @ x + C
        # Stop condition, using the 2-norm:
        if np.linalg.norm(x - x_old) <= 10**(-4):
            break
    return x
A = np.matrix([[2.0,1.0],
[5.0,7.0]])
b = np.matrix([[11.0],[13.0]])
initialGuess = np.matrix([[1.0],[1.0]])
sol = jacobi(A,b,initialGuess)
print ('A:')
print(A)
print ('\nb:')
print(b)
print('\nSolution:')
print(sol)
```
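As a sanity check, the same iteration can be written as a short self-contained sketch and compared against `numpy.linalg.solve`. The 2×2 system is the one above; it is diagonally dominant, so the method is guaranteed to converge:

```python
import numpy as np

def jacobi_solve(A, b, x, tol=1e-4, max_iter=500):
    # Iterate x_{k+1} = D^{-1} (b - R x_k) until the 2-norm stop condition holds
    D = np.diag(np.diag(A))
    R = A - D
    D_inv = np.linalg.inv(D)
    for _ in range(max_iter):
        x_new = D_inv @ (b - R @ x)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

A = np.array([[2.0, 1.0], [5.0, 7.0]])
b = np.array([11.0, 13.0])
x = jacobi_solve(A, b, np.ones(2))
print(np.allclose(x, np.linalg.solve(A, b), atol=1e-3))  # True
```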
# deep-muse (ver 0.8) [WIP]
***
# Advanced text-to-music generator
***
## Inspired by https://github.com/lucidrains/deep-daze
## Powered by tegridy-tools TMIDI Optimus Processors
***
### Project Los Angeles
### Tegridy Code 2021
***
# Setup environment
```
#@title Install dependencies
!git clone https://github.com/asigalov61/tegridy-tools
!pip install tqdm
# for data
!pip install fuzzywuzzy[speedup]
# for listening
!apt install fluidsynth #Pip does not work for some reason. Only apt works
!pip install midi2audio
# packages below are for plotting pianoroll only
# they are not needed for anything else
!pip install pretty_midi
!pip install librosa
!pip install matplotlib
#@title Load needed modules
print('Loading needed modules. Please wait...')
import sys
import os
import json
import secrets
import copy
os.chdir('/content/tegridy-tools/tegridy-tools/')
import TMIDI
os.chdir('/content/')
from fuzzywuzzy import fuzz
from fuzzywuzzy import process
from itertools import islice, accumulate
from pprint import pprint
import tqdm.auto
from tqdm import auto
from midi2audio import FluidSynth
from IPython.display import display, Javascript, HTML, Audio
# only for plotting pianoroll
import pretty_midi
import librosa.display
import matplotlib.pyplot as plt
from google.colab import output, drive
print('Creating Dataset dir...')
if not os.path.exists('/content/Dataset'):
os.makedirs('/content/Dataset')
os.chdir('/content/')
print('Loading complete. Enjoy! :)')
```
# Prep statistics dictionary
```
#@title Download English Karaoke MIDI classification model
%cd /content/
!wget --no-check-certificate -O Karaoke-English-Full.pickle "https://onedrive.live.com/download?cid=8A0D502FC99C608F&resid=8A0D502FC99C608F%2118485&authkey=ABXca9Cn2L-64UE"
#@title Load and prep the model
print('Loading the Karaoke model. Please wait...')
data = TMIDI.Tegridy_Any_Pickle_File_Loader('/content/Karaoke-English-Full')
print('Done!')
print('Prepping data...')
kar_ev_f = data[2]
kar = []
karaoke = []
for k in auto.tqdm(kar_ev_f):
k.sort(reverse=False, key=lambda x: x[1])
for kk in k:
if kk[0] == 'note' or kk[0] == 'text_event':
kar.append(kk)
kar_words = []
for o in auto.tqdm(kar):
if o[0] != 'note':
kar_words.append(str(o[2]).lower())
print('Done! Enjoy! :)')
```
# Generate Music
```
#@title Generate Music from the lyrics below
#@markdown NOTE: No symbols, special chars, commas, etc., please.
#@markdown ProTip: Be as ambiguous and general as possible for best results as the current dictionary is too small for anything specific.
randomize_words_matching = False #@param {type:"boolean"}
lyric1 = 'I love you very very much' #@param {type:"string"}
lyric2 = 'I can not live without you' #@param {type:"string"}
lyric3 = 'You always present on my mind' #@param {type:"string"}
lyric4 = 'I often think about you' #@param {type:"string"}
lyric5 = 'I am all out of love I am so lost without you' #@param {type:"string"}
lyric6 = 'I know you were right believing for so long' #@param {type:"string"}
lyric7 = 'I am all out of love what am I without you' #@param {type:"string"}
lyric8 = 'I cant be too late to say that I was so wrong' #@param {type:"string"}
text = [lyric1, lyric2, lyric3, lyric4, lyric5, lyric6, lyric7, lyric8]
song = []
words_lst = ''
print('=' * 100)
print('Deep-Muse Text to Music Generator')
print('Starting up...')
print('=' * 100)
for t in auto.tqdm(text):
txt = t.lower().split(' ')
kar_words_split = list(TMIDI.Tegridy_List_Slicer(kar_words, len(txt)))
ratings = []
for k in kar_words_split:
ratings.append(fuzz.ratio(txt, k))
if randomize_words_matching:
try:
ind = ratings.index(secrets.choice([max(ratings)-5, max(ratings)-4, max(ratings)-3, max(ratings)-2, max(ratings)-1, max(ratings)]))
except:
ind = ratings.index(max(ratings))
else:
ind = ratings.index(max(ratings))
words_list = kar_words_split[ind]
pos = ind * len(txt)
print(words_list)
words_lst += ' '.join(words_list) + chr(10)
c = 0
for i in range(len(kar)):
if kar[i][0] != 'note':
if c == pos:
idx = i
break
if kar[i][0] != 'note':
c += 1
c = 0
for i in range(idx, len(kar)):
if kar[i][0] != 'note':
if c == len(txt):
break
if kar[i][0] == 'note':
song.append(kar[i])
if kar[i][0] != 'note':
c += 1
song.append(kar[i])
so = [y for y in song if len(y) > 3]
if so != []: sigs = TMIDI.Tegridy_MIDI_Signature(so, so)
print('=' * 100)
print(sigs[0])
print('=' * 100)
song1 = []
p = song[0]
p[1] = 0
time = 0
song.sort(reverse=False, key=lambda x: x[1])
for i in range(len(song)-1):
ss = copy.deepcopy(song[i])
if song[i][1] != p[1]:
if abs(song[i][1] - p[1]) > 1000:
time += 300
else:
time += abs(song[i][1] - p[1])
ss[1] = time
song1.append(ss)
p = copy.deepcopy(song[i])
else:
ss[1] = time
song1.append(ss)
p = copy.deepcopy(song[i])
pprint(words_lst, compact=True)
print('=' * 100)
```
# Convert generated music composition to MIDI file and download/listen to the output :)
```
#@title Convert to MIDI
TMIDI.Tegridy_SONG_to_MIDI_Converter(song1, output_file_name='/content/deep-muse-Output-MIDI')
#@title Plot and listen to the last generated composition
#@markdown NOTE: May be very slow with the long compositions
fname = '/content/deep-muse-Output-MIDI'
fn = os.path.basename(fname + '.mid')
fn1 = fn.split('.')[0]
print('Playing and plotting composition...')
pm = pretty_midi.PrettyMIDI(fname + '.mid')
# Retrieve piano roll of the MIDI file
piano_roll = pm.get_piano_roll()
plt.figure(figsize=(14, 5))
librosa.display.specshow(piano_roll, x_axis='time', y_axis='cqt_note', fmin=1, hop_length=160, sr=16000, cmap=plt.cm.hot)
plt.title('Composition: ' + fn1)
print('Synthesizing the last output MIDI. Please stand-by... ')
FluidSynth("/usr/share/sounds/sf2/FluidR3_GM.sf2", 16000).midi_to_audio(str(fname + '.mid'), str(fname + '.wav'))
Audio(str(fname + '.wav'), rate=16000)
```
# Congrats! You did it! :)
(Real_Non_Linear_Neural_Network)=
# Chapter 7 -- Real (Non-linear) Neural Network
In the previous example, we derived the gradients for a two-layer neural network, which finds the straight line that bisects the two groups shown in the introduction. In reality, however, we often encounter groups like the following:
<img src="images/groups.PNG" width="400">
Figure 7.1
For data like this, a linear separator cannot satisfy our needs. The solution is to add another linear separator on top of the original linear separator.
<img src="images/groups1.PNG" width="500">
Figure 7.2
This is a classic three-layer neural network. The layer on the left-hand side is called the input layer; the layer on the right-hand side is known as the output layer; the layer in between is called the hidden layer. The hidden layer acts like a black box that we usually cannot interpret intuitively. We will dig into more details later.
<img src="images/threeLayers.PNG" width="500">
Figure 7.3
<img src="images/threeLayers1.PNG" width="400">
Figure 7.4
This is finally becoming something like a network, but the way we express the weights gets more complicated. Here is how we define a weight by tradition:
$$
w_{ab}^{(n)}
$$ (eq7_1)
where $n$ denotes the $n^{th}$ layer of the neural net, with $n = 1$ at the input layer. For $n = 1$, the subscripts $a$ and $b$ mean that the weight points from the $a^{th}$ neuron in the second (hidden) layer back to the $b^{th}$ neuron in the input layer. That is, the weight points backwards (to the left).
<img src="images/bpg.PNG" width="400">
Figure 7.5
For example, the weights in the input layer in the figure 7.4 can be defined as follow
$$
W^{(1)}= \begin{bmatrix}
w^{(1)}_{11} & w^{(1)}_{21} \\
w^{(1)}_{12} & w^{(1)}_{22} \\
w^{(1)}_{13} & w^{(1)}_{23}
\end{bmatrix} =
\begin{bmatrix}
5 & 7 \\
-2 & -3 \\
-8 & 1
\end{bmatrix}
$$ (eq7_2)
And the weights in the hidden layer can be defined as
$$
W^{(2)}= \begin{bmatrix}
w^{(2)}_{11} \\
w^{(2)}_{12} \\
w^{(2)}_{13}
\end{bmatrix} =
\begin{bmatrix}
7 \\
5 \\
-6
\end{bmatrix}
$$ (eq7_3)
In Python, we can describe the core features of such a network by defining a Network class. Here's the code we use to initialize a Network object:
```
import numpy as np
class Network(object):
def __init__(self, sizes):
self.num_layers = len(sizes)
self.sizes = sizes
self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
self.weights = [np.random.randn(y, x)
for x, y in zip(sizes[:-1], sizes[1:])]
```
In this code, the list sizes contains the number of neurons in the respective layers. So, for example, if we want to create a Network object with 2 neurons in the first layer, 3 neurons in the second layer, and 1 neuron in the final layer, we'd do this with the code:
```
net = Network([2, 3, 1])
```
The biases and weights in the Network object are all initialized randomly, using the Numpy np.random.randn function to generate Gaussian distributions with mean 0 and standard deviation 1. This random initialization gives our stochastic gradient descent algorithm a place to start from. In later chapters we'll find better ways of initializing the weights and biases, but this will do for now. Note that the Network initialization code assumes that the first layer of neurons is an input layer, and omits to set any biases for those neurons, since biases are only ever used in computing the outputs from later layers.
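For example, for `net = Network([2, 3, 1])` the comprehensions above (reproduced here standalone) yield weight matrices of shape (3, 2) and (1, 3), and bias vectors of shape (3, 1) and (1, 1):

```python
import numpy as np

sizes = [2, 3, 1]
biases = [np.random.randn(y, 1) for y in sizes[1:]]
weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]
print([w.shape for w in weights])  # [(3, 2), (1, 3)]
print([b.shape for b in biases])   # [(3, 1), (1, 1)]
```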
Note also that the biases and weights are stored as lists of Numpy matrices. So, for example net.weights[1] is a Numpy matrix storing the weights connecting the second and third layers of neurons. (It's not the first and second layers, since Python's list indexing starts at 0.) Since net.weights[1] is rather verbose, let's just denote that matrix $w$. It's a matrix such that $w_{jk}$ is the weight for the connection between the $k^{th}$ neuron in the second layer, and the $j^{th}$ neuron in the third layer. This ordering of the j and k indices may seem strange - surely it'd make more sense to swap the j and k indices around? The big advantage of using this ordering is that it means that the vector of activations of the third layer of neurons is:
$$
a^{'}=\sigma(wa+b)
$$ (eq7_4)
There's quite a bit going on in this equation, so let's unpack it piece by piece. $a$ is the vector of activations of the second layer of neurons. To obtain $a^{'}$ we multiply $a$ by the weight matrix $w$, and add the vector $b$ of biases. We then apply the function $\sigma$ elementwise to every entry in the vector $wa+b$. (This is called vectorizing the function $\sigma$.)
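As an illustrative sketch (not part of the book's listing), applying $a^{'} = \sigma(wa+b)$ layer by layer gives the network's output; the sigmoid is assumed for $\sigma$, and the weights are initialized randomly as in the class above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

sizes = [2, 3, 1]
biases = [np.random.randn(y, 1) for y in sizes[1:]]
weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]

def feedforward(a):
    # Apply a' = sigma(w a + b) for each layer in turn
    for w, b in zip(weights, biases):
        a = sigmoid(w @ a + b)
    return a

out = feedforward(np.array([[0.5], [-1.0]]))
print(out.shape)  # (1, 1)
```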
* By: Proskurin Oleksandr
* Email: proskurinolexandr@gmail.com
* Reference: Advances in Financial Machine Learning, Marcos Lopez De Prado, pg 30, https://towardsdatascience.com/financial-machine-learning-part-0-bars-745897d4e4ba
```
from IPython.display import Image
```
# Imbalance bars generation algorithm
Let's discuss imbalance bar generation using volume imbalance bars as an example, as described in the book Advances in Financial Machine Learning.
First, let's define the tick rule:
For any given $t$, where $p_t$ is the price associated with $t$ and $v_t$ is volume, the tick rule $b_t$ is defined as:
$$b_t = \begin{cases} b_{t-1}, & \mbox{if } \Delta p_t\mbox{=0} \\ |\Delta p_t| / \Delta p_{t}, & \mbox{if } \Delta p_t \neq\mbox{0} \end{cases}$$
The tick rule is used as a proxy for trade direction. However, some data providers already supply customers with the tick direction; in that case we don't need to calculate the tick rule and can simply use the provided tick direction instead.
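A minimal sketch of the tick rule (the initial direction $b_0$ is an assumption here, since the formula only defines $b_t$ from the previous tick):

```python
import numpy as np

def tick_rule(prices):
    # b_t = sign of the price change; carry the previous sign when delta p == 0
    signs = np.zeros(len(prices))
    signs[0] = 1.0  # assumed initial direction
    for t in range(1, len(prices)):
        dp = prices[t] - prices[t - 1]
        signs[t] = signs[t - 1] if dp == 0 else np.sign(dp)
    return signs

print(tick_rule([100.0, 100.5, 100.5, 100.2]).tolist())  # [1.0, 1.0, 1.0, -1.0]
```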
Cumulative volume imbalance from $1$ to $T$ is defined as:
$$ \theta_t = \sum_{t=1}^T b_t*v_t $$
$T$ is the time when the bar is sampled.
Next we need to define $E_0[T]$, the expected number of ticks. The book suggests using an EWMA of the number of ticks from previously generated bars. This introduces the first hyperparameter for imbalance bar generation: __num_prev_bars__, which corresponds to the window used for the EWMA calculation.
Here we face the problem of generating the first bar: with no bars generated yet, the expected number of ticks is unknown.
To solve this we introduce the second hyperparameter: __expected_num_ticks_init__, which corresponds to the initial guess for the expected number of ticks before the first imbalance bar is generated.
A bar is sampled when:
$$|\theta_t| \geq E_0[T] * |2v^+ - E_0[v_t]|$$
To estimate $2v^+ - E_0[v_t]$ (the expected imbalance) we simply calculate the EWMA of the volume imbalance from previous bars, which is why we need to store volume imbalances in an _imbalance array_. The window for the estimation is either __expected_num_ticks_init__ before the first bar is sampled, or the expected number of ticks ($E_0[T]$) * __num_prev_bars__ once the first bar is generated.
Note that once at least one imbalance bar has been generated, we update $2v^+ - E_0[v_t]$ only when the next bar is sampled, not on every trade observed.
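One possible implementation of the EWMA used in these updates (a sketch only — mlfinlab's exact weighting scheme may differ):

```python
import numpy as np

def ewma(arr, window):
    # Exponentially weighted moving average over the last `window` values,
    # returning the most recent EWMA value (used for E0[T] and the
    # expected imbalance in the pseudo-code below)
    arr = np.asarray(arr[-window:], dtype=float)
    alpha = 2.0 / (window + 1.0)
    weights = (1.0 - alpha) ** np.arange(len(arr))[::-1]
    return float(np.sum(weights * arr) / np.sum(weights))

print(ewma([1.0, 1.0, 1.0], 3))  # 1.0
```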
## Algorithm logic
Now that we understand the logic of imbalance bar generation, let's look at how the process works in detail:
```python
num_prev_bars = 3
expected_num_ticks_init = 100000
expected_num_ticks = expected_num_ticks_init
cum_theta = 0
num_ticks = 0
imbalance_array = []
imbalance_bars = []
bar_length_array = []
for row in data.rows:
#track high,low,close, volume info
num_ticks += 1
tick_rule = get_tick_rule(price, prev_price)
volume_imbalance = tick_rule * row['volume']
imbalance_array.append(volume_imbalance)
cum_theta += volume_imbalance
if len(imbalance_bars) == 0 and len(imbalance_array) >= expected_num_ticks_init:
expected_imbalance = ewma(imbalance_array, window=expected_num_ticks_init)
if abs(cum_theta) >= expected_num_ticks * abs(expected_imbalance):
bar = form_bar(open, high, low, close, volume)
imbalance_bars.append(bar)
bar_length_array.append(num_ticks)
cum_theta, num_ticks = 0, 0
        expected_num_ticks = ewma(bar_length_array, window=num_prev_bars)
expected_imbalance = ewma(imbalance_array, window = num_prev_bars * expected_num_ticks)
```
Note that in the algorithm pseudo-code we reset $\theta_t$ when a bar is formed; in our case the formula for $\theta_t$ is:
$$\theta_t = \sum_{t=t^*}^T b_t*v_t$$
<center>$t^*$ is the time when the previous imbalance bar was formed</center>
Let's look at the dynamics of $|\theta_t|$ and $E_0[T] * |2v^+ - E_0[v_t]|$ to understand why we decided to reset $\theta_t$ when a bar is formed.
The dynamics when the theta value is reset:
```
Image('images/mlfinlab_implementation.png')
```
Note that on the first ticks the threshold condition is not stable. Remember, before the first bar is generated, the expected imbalance is calculated on every tick with window = expected_num_ticks_init, which is why it changes with every tick. After the first bar is generated, both the expected number of ticks ($E_0[T]$) and the expected volume imbalance ($2v^+ - E_0[v_t]$) are updated only when the next bar is generated.
When theta is not reset:
```
Image('images/book_implementation.png')
```
The reason is that when theta is not reset, it keeps accumulating as several bars are generated $\Rightarrow$ the condition is met after a small number of ticks $\Rightarrow$ the length of the next bar converges to 1 $\Rightarrow$ a bar is sampled on the next consecutive tick.
The logic described above is implemented in the __mlfinlab__ package in _ImbalanceBars_.
# Statistical properties of imbalance bars. Exercise 2.2 from the book
```
import seaborn as sns
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf
import numpy as np
import pandas as pd
# imbalance bars generated with num_prev_bars = 3, num_ticks_init = 100000
imb_bars = pd.read_csv('../Sample-Data/imbalance_bars_3_100000.csv')
dollar_bars = pd.read_csv('../Sample-Data/dollar_bars_ex_2.2.csv')
dollar_bars['log_ret'] = np.log(dollar_bars['close']).diff().fillna(0) # get log returns
imb_bars['log_ret'] = np.log(imb_bars['close']).diff().fillna(0)
plt.figure(figsize=(20,10))
sns.kdeplot((imb_bars.log_ret - imb_bars.log_ret.mean()) / imb_bars.log_ret.std(), label="Imbalance bars")
sns.kdeplot((dollar_bars.log_ret - dollar_bars.log_ret.mean()) / dollar_bars.log_ret.std(), label="Dollar bars")
sns.kdeplot(np.random.normal(size=len(imb_bars)), label="Normal", color='black', linestyle="--")
plt.title('Standardized Log Returns: Imbalance Bars vs Dollar Bars')
plt.legend()
plot_acf(dollar_bars.log_ret, lags=10, zero=False)
plt.title('Dollar Bars AutoCorrelation')
plt.show()
plot_acf(imb_bars.log_ret, lags=10, zero=False)
plt.title('Dollar Imbalance Bars (num_ticks_init = 100k, num_prev_bars=3) AutoCorrelation')
plt.show()
imb_bars['date_time'] = pd.to_datetime(imb_bars.date_time)
dollar_bars['date_time'] = pd.to_datetime(dollar_bars.date_time)
```
## Conclusion
As we can see, the imbalance bar distribution is far from normal, and imbalance bars are more autocorrelated than dollar bars. However, the key goal of imbalance/run bars is an equal amount of information inside each bar. That is why we should consider using information theory to research the properties of imbalance bars in comparison with time/dollar bars.
```
from transformers import T5Tokenizer, T5ForConditionalGeneration
from utils_accelerate import *
tokenizer = T5Tokenizer.from_pretrained('t5-small')
# input = "predict tail: barack obama | position_held |"
# input = "translate English to German: How are you doing?"
# model = T5ForConditionalGeneration.from_pretrained('models/codex_m_accelerate_1gpu.pt')
# checkpoint_location = 'models/codex_m_accelerate_1gpu/115000.pt'
checkpoint_location = 'models/codex-m_tiny/60000.pt'
model = load_accelerator_model(checkpoint_location, only_model=True)
model.eval()
model.cpu()
input = "predict tail: united states of america | member of |"
input_ids = tokenizer(input, return_tensors="pt").input_ids # Batch size 1
model.cpu()
# outputs = model.sample(input_ids)
from transformers import (
LogitsProcessorList,
MinLengthLogitsProcessor,
BeamSearchScorer,
)
fname = 'data/codex-m/valid.txt'
f = open(fname, 'r')
data = []
for line in f:
data.append(line.strip())
f.close()
len(data)
data[0]
import torch
# data_point = 'predict tail: novalis | occupation | philosopher'
id = 0
data_point = data[id]
encoder_input_str, target = data_point.split('\t')
encoder_input_str = [encoder_input_str]
encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids
num_beams = 10
num_predictions = 3
input_ids = torch.ones((len(encoder_input_str) * num_beams, 1), device=model.device, dtype=torch.long)
input_ids = input_ids * model.config.decoder_start_token_id
model_kwargs = {
"encoder_outputs": model.get_encoder()(encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True)
}
beam_scorer = BeamSearchScorer(
batch_size=len(encoder_input_str),
max_length=model.config.max_length,
num_beams=num_beams,
device=model.device,
num_beam_hyps_to_keep=num_predictions,
length_penalty=0.3
)
logits_processor = LogitsProcessorList([])
encoder_input_str
outputs = model.beam_search(input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs)
print("Beam:", tokenizer.batch_decode(outputs, skip_special_tokens=True))
outputs = model.generate(encoder_input_ids)
print('Greedy:', tokenizer.batch_decode(outputs, skip_special_tokens=True))
print('Target:', target)
def getGreedyOutput(model, tokenizer, encoder_input_str):
encoder_input_str = [encoder_input_str]
encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids
outputs = model.generate(encoder_input_ids)
return tokenizer.batch_decode(outputs, skip_special_tokens=True)
def getBeamOutput(model, tokenizer, encoder_input_str, num_beams=10,
num_predictions=3, length_penalty=0.3):
encoder_input_str = [encoder_input_str]
encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids
input_ids = torch.ones((len(encoder_input_str) * num_beams, 1), device=model.device, dtype=torch.long)
input_ids = input_ids * model.config.decoder_start_token_id
model_kwargs = {
"encoder_outputs": model.get_encoder()(encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True)
}
beam_scorer = BeamSearchScorer(
batch_size=len(encoder_input_str),
max_length=model.config.max_length,
num_beams=num_beams,
device=model.device,
num_beam_hyps_to_keep=num_predictions,
length_penalty=length_penalty
)
logits_processor = LogitsProcessorList([])
outputs = model.beam_search(input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs)
return tokenizer.batch_decode(outputs, skip_special_tokens=True)
model.eval()
from tqdm import tqdm
# id = 100
scorer_function = getBeamOutput
# scorer_function = getGreedyOutput
num_points = 200
correct = 0
for id in tqdm(range(0, num_points)):
data_point = data[id]
input, target = data_point.split('\t')
predicted = set(scorer_function(model, tokenizer, input))
if target in predicted:
correct += 1
print(correct/num_points)
outputs.shape
outputs
print(input)
print(''.join(tokenizer.convert_ids_to_tokens(outputs[0])))
from dataset import T5_Dataset
valid_dataset = T5_Dataset('test', dataset_name='codex-m')
from eval_accelerate import removePadding, eval
class Args:
batch_size = 200
args=Args()
acc = eval(model, valid_dataset, args)
acc
actual = tokenizer("international development association", return_tensors="pt").input_ids[0].numpy()
actual
predicted = outputs[0][1:].numpy()
predicted
actual == predicted
```
```
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
```
# Linear models
Linear models are useful when little data is available or for very large feature spaces as in text classification. In addition, they form a good case study for regularization.
# Linear models for regression
All linear models for regression learn a coefficient parameter ``coef_`` and an offset ``intercept_`` to make predictions using a linear combination of features:
```
y_pred = x_test[0] * coef_[0] + ... + x_test[n_features-1] * coef_[n_features-1] + intercept_
```
The difference between the linear models for regression is what kind of restrictions or penalties are put on ``coef_`` as regularization, in addition to fitting the training data well.
The most standard linear model is the 'ordinary least squares regression', often simply called 'linear regression'. It doesn't put any additional restrictions on ``coef_``, so when the number of features is large, it becomes ill-posed and the model overfits.
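A quick sketch of that failure mode — with more features than samples and pure-noise targets, ordinary least squares interpolates the training set while generalizing poorly:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X_noise = rng.randn(40, 100)   # more features than samples -> ill-posed
y_noise = rng.randn(40)        # pure noise: nothing real to learn
X_tr, X_te, y_tr, y_te = train_test_split(X_noise, y_noise, random_state=0)
lr = LinearRegression().fit(X_tr, y_tr)
print(round(lr.score(X_tr, y_tr), 2))  # 1.0 -- interpolates the training data
print(lr.score(X_te, y_te) < 0.5)      # True -- far worse than on training data
```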
Let us generate a simple simulation to see the behavior of these models.
```
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
X, y, true_coefficient = make_regression(n_samples=200, n_features=30, n_informative=10, noise=100, coef=True, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5, train_size=60)
print(X_train.shape)
print(y_train.shape)
```
## Linear Regression
$$ \text{min}_{w, b} \sum_i || w^\mathsf{T}x_i + b - y_i||^2 $$
```
from sklearn.linear_model import LinearRegression
linear_regression = LinearRegression().fit(X_train, y_train)
print("R^2 on training set: %f" % linear_regression.score(X_train, y_train))
print("R^2 on test set: %f" % linear_regression.score(X_test, y_test))
from sklearn.metrics import r2_score
print(r2_score(np.dot(X, true_coefficient), y))
plt.figure(figsize=(10, 5))
coefficient_sorting = np.argsort(true_coefficient)[::-1]
plt.plot(true_coefficient[coefficient_sorting], "o", label="true")
plt.plot(linear_regression.coef_[coefficient_sorting], "o", label="linear regression")
plt.legend()
from sklearn.model_selection import learning_curve
def plot_learning_curve(est, X, y):
training_set_size, train_scores, test_scores = learning_curve(est, X, y, train_sizes=np.linspace(.1, 1, 20))
estimator_name = est.__class__.__name__
line = plt.plot(training_set_size, train_scores.mean(axis=1), '--', label="training scores " + estimator_name)
plt.plot(training_set_size, test_scores.mean(axis=1), '-', label="test scores " + estimator_name, c=line[0].get_color())
plt.xlabel('Training set size')
plt.legend(loc='best')
plt.ylim(-0.1, 1.1)
plt.figure()
plot_learning_curve(LinearRegression(), X, y)
```
## Ridge Regression (L2 penalty)
**The Ridge estimator** is a simple regularization (called the l2 penalty) of the ordinary LinearRegression. In particular, it has the benefit of being no more computationally expensive than the ordinary least squares estimate.
$$ \text{min}_{w,b} \sum_i || w^\mathsf{T}x_i + b - y_i||^2 + \alpha ||w||_2^2$$
The amount of regularization is set via the `alpha` parameter of the Ridge.
```
from sklearn.linear_model import Ridge
ridge_models = {}
training_scores = []
test_scores = []
for alpha in [100, 10, 1, .01]:
ridge = Ridge(alpha=alpha).fit(X_train, y_train)
training_scores.append(ridge.score(X_train, y_train))
test_scores.append(ridge.score(X_test, y_test))
ridge_models[alpha] = ridge
plt.figure()
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [100, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([100, 10, 1, .01]):
plt.plot(ridge_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
```
Tuning alpha is critical for performance.
```
plt.figure()
plot_learning_curve(LinearRegression(), X, y)
plot_learning_curve(Ridge(alpha=10), X, y)
```
## Lasso (L1 penalty)
**The Lasso estimator** is useful for imposing sparsity on the coefficients. In other words, it is to be preferred if we believe that many of the features are not relevant. This is done via the so-called l1 penalty.
$$ \text{min}_{w, b} \sum_i || w^\mathsf{T}x_i + b - y_i||^2 + \alpha ||w||_1$$
```
from sklearn.linear_model import Lasso
lasso_models = {}
training_scores = []
test_scores = []
for alpha in [30, 10, 1, .01]:
lasso = Lasso(alpha=alpha).fit(X_train, y_train)
training_scores.append(lasso.score(X_train, y_train))
test_scores.append(lasso.score(X_test, y_test))
lasso_models[alpha] = lasso
plt.figure()
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [30, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([30, 10, 1, .01]):
plt.plot(lasso_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
plt.figure()
plot_learning_curve(LinearRegression(), X, y)
plot_learning_curve(Ridge(alpha=10), X, y)
plot_learning_curve(Lasso(alpha=10), X, y)
```
# Which citation styles do we have in Crossref data?
Dominika Tkaczyk
16.11.2018
In this notebook I use the style classifier to find out which styles are present in the Crossref collection.
```
import sys
sys.path.append('..')
%matplotlib inline
import warnings
warnings.simplefilter('ignore')
import json
import pandas as pd
import re
from config import *
from dataset import read_ref_strings_data, generate_unknown
from features import get_features, select_features_chi2
from sklearn.linear_model import LogisticRegression
from train import clean, train
```
## Training
First, I will train the classifier. To do that, I have to read the training data first:
```
dataset = read_ref_strings_data('../data/dataset/')
print('Dataset size: {}'.format(dataset.shape[0]))
dataset.head()
```
Cleaning and preprocessing the data (more about this procedure can be found [here](https://github.com/CrossRef/citation-style-classifier/blob/master/analyses/citation_style_classification.ipynb)):
```
dataset = clean(dataset, random_state=0)
dataset_unknown = generate_unknown(dataset, 5000, random_state=0)
dataset = pd.concat([dataset, dataset_unknown])
print('Dataset size: {}'.format(dataset.shape[0]))
```
Training the classification model (more about the parameters and the training can be found [here](https://github.com/CrossRef/citation-style-classifier/blob/master/analyses/citation_style_classification.ipynb)):
```
count_vectorizer, tfidf_transformer, model = train(dataset, random_state=0)
```
*model* contains the complete trained style classifier. It can be used to infer the citation style of a new reference string.
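To make this concrete, here is a self-contained toy sketch of that kind of pipeline — character n-gram TF-IDF features feeding a logistic regression — on two invented "styles". This is an illustration only, not the notebook's actual feature pipeline (which uses `get_features` and chi2 feature selection):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training strings: two fake "styles" distinguished by punctuation patterns
apa_like = ['Smith, J. (2010). Title here. Journal of Stuff, 5(2), 10-20.',
            'Doe, A. (1999). Another title. Some Review, 1(1), 1-9.']
numeric_like = ['[1] J. Smith, "Title here," Journal of Stuff, vol. 5, pp. 10-20, 2010.',
                '[2] A. Doe, "Another title," Some Review, vol. 1, pp. 1-9, 1999.']
X_toy = apa_like + numeric_like
y_toy = ['apa'] * 2 + ['numeric'] * 2

clf = make_pipeline(TfidfVectorizer(analyzer='char_wb', ngram_range=(2, 3)),
                    LogisticRegression())
clf.fit(X_toy, y_toy)
# Classify an unseen string (the prediction depends on the toy training data)
print(clf.predict(['Roe, B. (2005). Yet another title. Journal of Stuff, 3(4), 2-8.'])[0])
```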
## Classifying real reference strings
Let's read a sample of metadata records drawn from Crossref API:
```
with open('../data/samples/sample-10K.json', 'r') as file:
data = json.loads(file.read())['sample']
```
Next, I iterate over all unstructured reference strings found in the records and assign a citation style (or "unknown") to each of them:
```
strings = []
styles = []
probabilities = []
for d in data:
    for r in d.get('reference', []):
        if 'unstructured' in r:
            r['unstructured'] = re.sub(r'http[^ ]+', '', r['unstructured']).strip()
            r['unstructured'] = re.sub(r'(?<!\d)10\.\d{4,9}/[-\._;\(\)/:a-zA-Z0-9]+', '', r['unstructured'])
            r['unstructured'] = re.sub(r'doi:?', '', r['unstructured']).strip()
            if len(r['unstructured']) < 11:
                continue
            _, _, test_features = get_features([r['unstructured']], count_vectorizer=count_vectorizer,
                                               tfidf_transformer=tfidf_transformer)
            prediction = model.predict(test_features)
            probabilities.append(max(model.predict_proba(test_features)[0]))
            strings.append(r['unstructured'])
            styles.append(prediction[0])
existing_styles = pd.DataFrame({'string': strings, 'style': styles})
```
## The distribution of the styles
Let's look at the fraction of each style in our dataset:
```
styles_distr = existing_styles.groupby(['style']).size().reset_index(name='counts')
styles_distr['fraction'] = styles_distr['counts'] / len(strings)
styles_distr = styles_distr.sort_values(by='counts', ascending=False).reset_index(drop=True)
styles_distr
```
The most popular style is *springer-basic-author-date* (29%), followed by *apa* (14%) and *springer-lecture-notes-in-computer-science* (9%). We also have 13% of the strings classified as "unknown". Let's see a sample of those strings:
```
for i, s in enumerate(existing_styles.loc[existing_styles['style'] == 'unknown'].sample(10, random_state=10)['string']):
    print('['+str(i)+']', s)
```
# Accessing System Configurations With MPI
## Overview
### Questions
* How can I access the state of the simulation in parallel simulations?
* What are the differences between local and global snapshots?
### Objectives
* Describe how to write GSD files in MPI simulations.
* Show examples using **local snapshots**.
* Show examples using **global snapshots**.
```
import os
fn = os.path.join(os.getcwd(), 'trajectory.gsd')
![ -e "$fn" ] && rm "$fn"
```
## Writing GSD files in parallel jobs
You can write GSD files in parallel jobs just as you do in serial.
Saving the simulation trajectory to a file is useful for visualization and analysis after the simulation completes.
As mentioned in the previous section, write a **single program** and add the `write.GSD` operation with identical parameters on all ranks:
```
%pycat lj_trajectory.py
!mpirun -n 4 python3 lj_trajectory.py
```
## Modifying particle properties with local snapshots
Use snapshots when you need to modify particle properties during a simulation, or perform analysis where the results need to be known as the simulation progresses (e.g. umbrella sampling).
**Local snapshots** provide high performance direct access to the particle data stored in HOOMD-blue.
The direct access comes with several costs.
Your script can only access particles local to the **domain** of the current **rank**.
These particles may appear in any order in the local snapshot and a given particle is only present on one rank.
To access the properties of a specific particle, find the index given the particle's tag and handle the condition where it is not present on the rank.
The example below demonstrates this with an example that doubles the mass of all particles, and quadruples the mass of the particle with tag 100:
```
%pycat local_snapshot.py
!mpirun -n 4 python3 local_snapshot.py
```
Notice how the example uses the `rtag` lookup array to efficiently find the index of the particle with the given tag.
When the particle is not present on the local rank, `rtag` is set to a number greater than the local number of particles.
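The `rtag` lookup pattern itself is easy to sketch outside HOOMD-blue. The snippet below is plain Python/NumPy, not HOOMD API: the `rtag` mapping, the sentinel convention and `N_local` are stand-ins modeled on the description above (HOOMD stores `rtag` as an array indexed by tag):

```python
import numpy as np

N_local = 4                      # number of particles on this rank
mass = np.ones(N_local)          # a per-particle property, in local order
# rtag maps a global tag to a local index; tags not present on this rank
# map to a sentinel value >= N_local, as described above.
rtag = {0: 2, 1: 0, 7: 1, 100: 3, 42: 2**32 - 1}

def double_mass(tag):
    idx = rtag.get(tag, 2**32 - 1)
    if idx < N_local:            # particle is present on this rank
        mass[idx] *= 2

double_mass(100)   # present on this rank -> mass doubled
double_mass(42)    # not present -> no-op
print(mass)
```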
<div class="alert alert-info">
Any analysis you perform may require <b>MPI</b> communication to combine results across ranks using <a href="http://mpi4py.readthedocs.io/">mpi4py</a> (this is beyond the scope of this tutorial).
</div>
## Using global snapshots with MPI
**Global snapshots** collect all particles onto rank 0 and sort them by tag.
This removes a number of the inconveniences of the local snapshot API, but at the cost of *much slower* performance.
When you use **global snapshots** in **MPI** simulations, you need to add `if snapshot.communicator.rank == 0:` checks around all the code that accesses the data in the snapshot.
The `get_snapshot()` call itself *MUST* be made on all ranks.
Here is an example that computes the total mass of the system using a global snapshot:
```
%pycat global_snapshot.py
!mpirun -n 4 python3 global_snapshot.py
```
## Summary
In this section, you have written trajectories to a GSD file, modified the state of the system efficiently using **local snapshots**, and analyzed the state of the system with a **global snapshot** - all with conditions that work in both **MPI** parallel and serial simulations.
The next section of this tutorial shows you how to use **MPI** to run many independent simulations with different inputs.
```
import numpy as np
# load data from ReachData.npz
data=np.load('/Users/yangrenqin/GitHub/HW5/ReachData.npz')
r=data['r']
targets=data['targets']
target_index=data['cfr']
data.close()
targets
# convert each target's x,y coordinates to an angle in degrees
import math
degrees=[]
for i in targets:
    degree=math.degrees(math.atan2(i[1],i[0]))
    if degree < 0:
        degree=360+degree
    degrees.append(degree)
degrees
import pandas as pd
import random
cfr=pd.Series(target_index)
training_data=np.array([])
testing_data=np.array([])
# randomly select 400 trials (50 per target) as training data; the remaining trials become the test data
for i in range(1,9):
    cfr_i=cfr[cfr.values==i]
    t1=random.sample(range(len(cfr_i.index)),50)
    t1.sort()
    t2=[cfr_i.index[l] for l in t1]
    t3=list(set(cfr_i.index)-set(t2))
    training_data=np.append(training_data,t2)
    testing_data=np.append(testing_data,t3)
training_data.sort()
training_data=np.int_(training_data)
# calculate spike counts in the plan, move and combined windows, and the corresponding
# window durations, for all 190 neurons
N=[]
N_time=[]
n_plan=[]
n_plantime=[]
n_move=[]
n_movetime=[]
for i in range(len(training_data)):
    p1=r[training_data[i]].timeTouchHeld
    p2=r[training_data[i]].timeGoCue
    p3=r[training_data[i]].timeTargetAcquire
    N2,n_plan2,n_move2=np.array([]),np.array([]),np.array([])
    for l in range(190):
        if type(r[training_data[i]].unit[l].spikeTimes) == float: # spikeTimes is a scalar when there is only one spike
            N0=(r[training_data[i]].unit[l].spikeTimes>p1) & (r[training_data[i]].unit[l].spikeTimes<p3)
            N1=np.sum(N0)
            n_plan0=(r[training_data[i]].unit[l].spikeTimes>p1) & (r[training_data[i]].unit[l].spikeTimes<p2)
            n_plan1=np.sum(n_plan0)
            n_move0=(r[training_data[i]].unit[l].spikeTimes>p2) & (r[training_data[i]].unit[l].spikeTimes<p3)
            n_move1=np.sum(n_move0)
        elif list(r[training_data[i]].unit[l].spikeTimes) == []: # spikeTimes is empty when there are no spikes
            N1=0
            n_plan1=0
            n_move1=0
        else:
            N0=(r[training_data[i]].unit[l].spikeTimes>p1) & (r[training_data[i]].unit[l].spikeTimes<p3)
            N1=np.sum(N0)
            n_plan0=(r[training_data[i]].unit[l].spikeTimes>p1) & (r[training_data[i]].unit[l].spikeTimes<p2)
            n_plan1=np.sum(n_plan0)
            n_move0=(r[training_data[i]].unit[l].spikeTimes>p2) & (r[training_data[i]].unit[l].spikeTimes<p3)
            n_move1=np.sum(n_move0)
        N2=np.append(N2,N1)
        n_plan2=np.append(n_plan2,n_plan1)
        n_move2=np.append(n_move2,n_move1)
    N_time1=p3-p1
    n_movetime1=p3-p2
    n_plantime1=p2-p1
    N.append(N2)
    N_time.append(N_time1)
    n_plan.append(n_plan2)
    n_plantime.append(n_plantime1)
    n_move.append(n_move2)
    n_movetime.append(n_movetime1)
target0=[cfr[i] for i in training_data]
table1=pd.DataFrame(target0,index=training_data,columns=['targets']) # index represent the i th trials
table1['Combined']=N
table1['Combined_time']=N_time
table1['n_plan']=n_plan
table1['n_plantime']=n_plantime
table1['n_move']=n_move
table1['n_movetime']=n_movetime
table1['combined_rate']=table1['Combined']/table1['Combined_time']
table1['plan_rate']=table1['n_plan']/table1['n_plantime']
table1['move_rate']=table1['n_move']/table1['n_movetime']
# Group the different rates (combined, plan and move window rates) by the eight targets,
# then calculate the mean and covariance matrix for each target across trials.
# Any neuron whose mean rate is zero is deleted from the dataset, and the deleted neurons are recorded.
combined_mean=[]
combined_cov=[]
combined_deleted_targets=[]
combined_deleted_index=[]
plan_mean=[]
plan_cov=[]
plan_deleted_targets=[]
plan_deleted_index=[]
move_mean=[]
move_cov=[]
move_deleted_targets=[]
move_deleted_index=[]
for i in range(1,9):
    combined=np.array(list(table1[table1.targets==i]['combined_rate']))
    combined_mean1=np.mean(combined,axis=0)
    plan=np.array(list(table1[table1.targets==i]['plan_rate']))
    plan_mean1=np.mean(plan,axis=0)
    move=np.array(list(table1[table1.targets==i]['move_rate']))
    move_mean1=np.mean(move,axis=0)
    if np.any(plan_mean1==0) or np.any(move_mean1==0):
        id1=np.array(list(set(np.append(np.where(plan_mean1==0)[0],np.where(move_mean1==0)[0]))))
        combined=np.delete(combined,id1,axis=1)
        combined_mean1=np.mean(combined,axis=0)
        combined_deleted_targets.append(i)
        combined_deleted_index.append(id1)
    combined_mean.append(combined_mean1)
    combined_cov.append(np.cov(combined.T))
    if np.any(plan_mean1==0):
        id2=np.where(plan_mean1==0)[0]
        plan=np.delete(plan,id2,axis=1)
        plan_mean1=np.mean(plan,axis=0)
        plan_deleted_targets.append(i)
        plan_deleted_index.append(id2)
    plan_mean.append(plan_mean1)
    plan_cov.append(np.cov(plan.T))
    if np.any(move_mean1==0):
        id3=np.where(move_mean1==0)[0]
        move=np.delete(move,id3,axis=1)
        move_mean1=np.mean(move,axis=0)
        move_deleted_targets.append(i)
        move_deleted_index.append(id3)
    move_mean.append(move_mean1)
    move_cov.append(np.cov(move.T))
testing_data.sort()
testing_data=np.int_(testing_data)
test_N=[]
test_N_time=[]
test_n_plan=[]
test_n_plantime=[]
test_n_move=[]
test_n_movetime=[]
# calculate spike counts in the plan, move and combined windows, and the corresponding
# window durations, for all 190 neurons
for i in range(len(testing_data)):
    p1=r[testing_data[i]].timeTouchHeld
    p2=r[testing_data[i]].timeGoCue
    p3=r[testing_data[i]].timeTargetAcquire
    test_N2,test_n_plan2,test_n_move2=np.array([]),np.array([]),np.array([])
    for l in range(190):
        if type(r[testing_data[i]].unit[l].spikeTimes) == float:
            test_N0=(r[testing_data[i]].unit[l].spikeTimes>p1) & (r[testing_data[i]].unit[l].spikeTimes<p3)
            test_N1=np.sum(test_N0)
            test_n_plan0=(r[testing_data[i]].unit[l].spikeTimes>p1) & (r[testing_data[i]].unit[l].spikeTimes<p2)
            test_n_plan1=np.sum(test_n_plan0)
            test_n_move0=(r[testing_data[i]].unit[l].spikeTimes>p2) & (r[testing_data[i]].unit[l].spikeTimes<p3)
            test_n_move1=np.sum(test_n_move0)
        elif list(r[testing_data[i]].unit[l].spikeTimes) == []:
            test_N1=0
            test_n_plan1=0
            test_n_move1=0
        else:
            test_N0=(r[testing_data[i]].unit[l].spikeTimes>p1) & (r[testing_data[i]].unit[l].spikeTimes<p3)
            test_N1=np.sum(test_N0)
            test_n_plan0=(r[testing_data[i]].unit[l].spikeTimes>p1) & (r[testing_data[i]].unit[l].spikeTimes<p2)
            test_n_plan1=np.sum(test_n_plan0)
            test_n_move0=(r[testing_data[i]].unit[l].spikeTimes>p2) & (r[testing_data[i]].unit[l].spikeTimes<p3)
            test_n_move1=np.sum(test_n_move0)
        test_N2=np.append(test_N2,test_N1)
        test_n_plan2=np.append(test_n_plan2,test_n_plan1)
        test_n_move2=np.append(test_n_move2,test_n_move1)
    test_N_time1=p3-p1
    test_n_movetime1=p3-p2
    test_n_plantime1=p2-p1
    test_N.append(test_N2)
    test_N_time.append(test_N_time1)
    test_n_plan.append(test_n_plan2)
    test_n_plantime.append(test_n_plantime1)
    test_n_move.append(test_n_move2)
    test_n_movetime.append(test_n_movetime1)
test_target0=[cfr[i] for i in testing_data]
test_table1=pd.DataFrame(test_target0,index=testing_data,columns=['targets']) # index represent the i th trials
test_table1['Combined']=test_N
test_table1['Combined_time']=test_N_time
test_table1['n_plan']=test_n_plan
test_table1['n_plantime']=test_n_plantime
test_table1['n_move']=test_n_move
test_table1['n_movetime']=test_n_movetime
test_table1['combined_rate']=test_table1['Combined']/test_table1['Combined_time']
test_table1['plan_rate']=test_table1['n_plan']/test_table1['n_plantime']
test_table1['move_rate']=test_table1['n_move']/test_table1['n_movetime']
```
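The per-target pruning of zero-rate neurons used above follows a simple NumPy pattern; a standalone sketch on toy data (the numbers are invented for illustration):

```python
import numpy as np

rates = np.array([[1.0, 0.0, 2.0],
                  [3.0, 0.0, 4.0]])      # trials x neurons for one target
mean = rates.mean(axis=0)                # per-neuron mean rate
dead = np.where(mean == 0)[0]            # neurons with zero mean rate
pruned = np.delete(rates, dead, axis=1)  # drop those neuron columns
print(dead, pruned.mean(axis=0), np.cov(pruned.T).shape)
```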
# Undifferentiated rate model (combined window)
```
# I fit the trial-by-trial firing rates and/or PC scores with a multivariate Gaussian distribution f(r|d),
# using scipy's built-in function, then decode the reach direction by maximum likelihood:
# d = argmax P(d|r); dropping the terms that are constant across directions gives d = argmax f(r|d).
# Please note, the neurons deleted from the training dataset are also deleted, at the same positions,
# from the testing dataset.
from scipy.stats import multivariate_normal
def combined_simulate(r1):
    f=[]
    for l in range(1,9):
        if l in combined_deleted_targets:
            r1_deleted=np.delete(r1,combined_deleted_index[combined_deleted_targets.index(l)])
            f1=multivariate_normal.logpdf(r1_deleted, mean=combined_mean[l-1], cov=np.diag(np.diag(combined_cov[l-1])))
        else:
            f1=multivariate_normal.logpdf(r1, mean=combined_mean[l-1], cov=np.diag(np.diag(combined_cov[l-1])))
        f.append(f1)
    simulate_target=f.index(max(f))+1
    return simulate_target
# Make an inference for each trial in the testing dataset
combined_simulate_targets=[]
for i in range(len(test_table1)):
    r1=list(test_table1['combined_rate'])[i]
    simulate_target=combined_simulate(r1)
    combined_simulate_targets.append(simulate_target)
# Compare the inferences with the actual targets, and calculate the absolute angular error and accuracy.
orginal_degrees=[degrees[i-1] for i in test_table1['targets']]
combined_simulate_degrees=[degrees[i-1] for i in combined_simulate_targets]
combined_e=abs(np.array(orginal_degrees)-np.array(combined_simulate_degrees))
correct_combined=[i==j for i,j in zip(test_table1['targets'],combined_simulate_targets)]
combined_percent=sum(correct_combined)/len(test_table1['targets'])
combined_d=np.mean(combined_e)
combined_d_sem=np.std(combined_e)/np.sqrt(len(combined_e))
print('Mean of angular error for the Undifferentiated rate model is %.4f'%combined_d)
print('Sem of angular error for the Undifferentiated rate model is %.4f'%combined_d_sem)
print('Simulation accuracy for the Undifferentiated rate model is %.4f%%'%(combined_percent*100))
```
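The decoding rule used above, d = argmax over d of f(r|d), simply picks the target whose class-conditional Gaussian assigns the highest log-density to the observed rates; a toy two-target sketch (the means and covariance are invented for illustration):

```python
import numpy as np
from scipy.stats import multivariate_normal

means = [np.array([1.0, 1.0]), np.array([5.0, 5.0])]  # class-conditional mean rates
cov = np.eye(2)                                       # shared diagonal covariance

def decode(rate):
    # log-likelihood of the observed rate vector under each target's Gaussian
    logls = [multivariate_normal.logpdf(rate, mean=m, cov=cov) for m in means]
    return int(np.argmax(logls)) + 1                  # targets are numbered from 1

print(decode(np.array([0.8, 1.2])), decode(np.array([4.9, 5.3])))
```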
# Using only the plan window and its rate
```
def plan_simulate(r1):
    f=[]
    for l in range(1,9):
        if l in plan_deleted_targets:
            r1_deleted=np.delete(r1,plan_deleted_index[plan_deleted_targets.index(l)])
            f1=multivariate_normal.logpdf(r1_deleted, mean=plan_mean[l-1], cov=np.diag(np.diag(plan_cov[l-1])))
        else:
            f1=multivariate_normal.logpdf(r1, mean=plan_mean[l-1], cov=np.diag(np.diag(plan_cov[l-1])))
        f.append(f1)
    simulate_target=f.index(max(f))+1
    return simulate_target
plan_simulate_targets=[]
for i in range(len(test_table1)):
    r1=list(test_table1['plan_rate'])[i]
    simulate_target=plan_simulate(r1)
    plan_simulate_targets.append(simulate_target)
plan_simulate_degrees=[degrees[i-1] for i in plan_simulate_targets]
plan_e=abs(np.array(orginal_degrees)-np.array(plan_simulate_degrees))
correct_plan=[i==j for i,j in zip(test_table1['targets'],plan_simulate_targets)]
plan_percent=sum(correct_plan)/len(test_table1['targets'])
plan_d=np.mean(plan_e)
plan_d_sem=np.std(plan_e)/np.sqrt(len(plan_e))
print('Mean of angular error for the Plan rate model is %.4f'%plan_d)
print('Sem of angular error for the Plan rate model is %.4f'%plan_d_sem)
print('Simulation accuracy for the Plan rate model is %.4f%%'%(plan_percent*100))
```
# Using only the move window and its rate
```
def move_simulate(r1):
    f=[]
    for l in range(1,9):
        if l in move_deleted_targets:
            r1_deleted=np.delete(r1,move_deleted_index[move_deleted_targets.index(l)])
            f1=multivariate_normal.logpdf(r1_deleted, mean=move_mean[l-1], cov=np.diag(np.diag(move_cov[l-1])))
        else:
            f1=multivariate_normal.logpdf(r1, mean=move_mean[l-1], cov=np.diag(np.diag(move_cov[l-1])))
        f.append(f1)
    simulate_target=f.index(max(f))+1
    return simulate_target
move_simulate_targets=[]
for i in range(len(test_table1)):
    r1=list(test_table1['move_rate'])[i]
    simulate_target=move_simulate(r1)
    move_simulate_targets.append(simulate_target)
move_simulate_degrees=[degrees[i-1] for i in move_simulate_targets]
move_e=abs(np.array(orginal_degrees)-np.array(move_simulate_degrees))
correct_move=[i==j for i,j in zip(test_table1['targets'],move_simulate_targets)]
move_percent=sum(correct_move)/len(test_table1['targets'])
move_d=np.mean(move_e)
move_d_sem=np.std(move_e)/np.sqrt(len(move_e))
print('Mean of angular error for the Move rate model is %.4f'%move_d)
print('Sem of angular error for the Move rate model is %.4f'%move_d_sem)
print('Simulation accuracy for the Move rate model is %.4f%%'%(move_percent*100))
```
# Plan rate/Move rate model
```
def P_M_rate_simulate(r1):
    f=[]
    for l in range(1,9):
        if l in plan_deleted_targets or l in move_deleted_targets:
            # default to the full halves so both are defined even if only one window has deletions
            r1_deleted1=r1[:190]
            r1_deleted2=r1[190:]
            if l in plan_deleted_targets:
                r1_deleted1=np.delete(r1[:190],plan_deleted_index[plan_deleted_targets.index(l)])
            if l in move_deleted_targets:
                r1_deleted2=np.delete(r1[190:],move_deleted_index[move_deleted_targets.index(l)])
            r1_deleted=np.append(r1_deleted1,r1_deleted2)
            f1=multivariate_normal.logpdf(r1_deleted,
                                          mean=np.append(plan_mean[l-1],move_mean[l-1]),
                                          cov=np.diag(np.append(np.diag(plan_cov[l-1]),np.diag(move_cov[l-1]))))
        else:
            f1=multivariate_normal.logpdf(r1,
                                          mean=np.append(plan_mean[l-1],move_mean[l-1]),
                                          cov=np.diag(np.append(np.diag(plan_cov[l-1]),np.diag(move_cov[l-1]))))
        f.append(f1)
    simulate_target=f.index(max(f))+1
    return simulate_target
PMrate_simulate_targets=[]
for i in range(len(test_table1)):
    r1=np.append(list(test_table1['plan_rate'])[i],list(test_table1['move_rate'])[i])
    simulate_target=P_M_rate_simulate(r1)
    PMrate_simulate_targets.append(simulate_target)
PMrate_simulate_degrees=[degrees[i-1] for i in PMrate_simulate_targets]
PMrate_e=abs(np.array(orginal_degrees)-np.array(PMrate_simulate_degrees))
correct_PMrate=[i==j for i,j in zip(test_table1['targets'],PMrate_simulate_targets)]
PMrate_percent=sum(correct_PMrate)/len(test_table1['targets'])
PMrate_d=np.mean(PMrate_e)
PMrate_d_sem=np.std(PMrate_e)/np.sqrt(len(PMrate_e))
print('Mean of angular error for the Plan rate/Move rate model is %.4f'%PMrate_d)
print('Sem of angular error for the Plan rate/Move rate model is %.4f'%PMrate_d_sem)
print('Simulation accuracy for the Plan rate/Move rate model is %.4f%%'%(PMrate_percent*100))
```
# PC score
```
def pc_projection(X):
    mu = np.mean(X,axis=0) # mean across observations
    w,v = np.linalg.eigh(np.cov(X.T)) # eigendecomposition of the covariance matrix (eigh sorts eigenvalues ascending)
    scores = np.dot((X - mu),v[:,-1]) # project onto the largest-variance direction (first PC)
    return scores
# For each neuron in a trial, bin the spikeTimes array into 5 equal bins per window to build an
# impulse-like spike train on a common time base, then smooth each train with a Gaussian kernel.
# Finally, perform PCA, and take the first PC score of each trial as the PC score for the trial.
from scipy import ndimage
plan_pc=[]
move_pc=[]
for i in range(len(training_data)):
    plan_pc1=[]
    move_pc1=[]
    p1=r[training_data[i]].timeTouchHeld
    p2=r[training_data[i]].timeGoCue
    p3=r[training_data[i]].timeTargetAcquire
    plan_series=np.linspace(p1,p2,5+1)
    move_series=np.linspace(p2,p3,5+1)
    for l in range(190):
        plan_bin=np.zeros(len(plan_series))
        move_bin=np.zeros(len(move_series))
        if type(r[training_data[i]].unit[l].spikeTimes) == float: # spikeTimes is a scalar when there is only one spike
            if (r[training_data[i]].unit[l].spikeTimes>=p1) & (r[training_data[i]].unit[l].spikeTimes<p2):
                id_plan=math.floor((r[training_data[i]].unit[l].spikeTimes-p1)/((p2-p1)/5))
                plan_bin[id_plan] += 1
            if (r[training_data[i]].unit[l].spikeTimes>=p2) & (r[training_data[i]].unit[l].spikeTimes<p3):
                id_move=math.floor((r[training_data[i]].unit[l].spikeTimes-p2)/((p3-p2)/5))
                move_bin[id_move] += 1
        elif list(r[training_data[i]].unit[l].spikeTimes) == []: # spikeTimes is empty when there are no spikes
            pass
        else:
            for m in r[training_data[i]].unit[l].spikeTimes:
                if (m>=p1) & (m<p2):
                    id_plan=math.floor((m-p1)/((p2-p1)/5))
                    plan_bin[id_plan] += 1
                if (m>=p2) & (m<p3):
                    id_move=math.floor((m-p2)/((p3-p2)/5))
                    move_bin[id_move] += 1
        plan_bin=plan_bin/((p2-p1)/5)
        move_bin=move_bin/((p3-p2)/5)
        plan_convolve=ndimage.gaussian_filter(plan_bin,sigma=5,truncate=5)
        move_convolve=ndimage.gaussian_filter(move_bin,sigma=5,truncate=5)
        plan_pc1.append(plan_convolve)
        move_pc1.append(move_convolve)
    plan_pc1=np.array(plan_pc1)
    move_pc1=np.array(move_pc1)
    plan_pcscore=abs(pc_projection(plan_pc1))
    move_pcscore=abs(pc_projection(move_pc1))
    plan_pc.append(plan_pcscore)
    move_pc.append(move_pcscore)
target0=[cfr[i] for i in training_data]
table_pc=pd.DataFrame(target0,index=training_data,columns=['targets']) # index represent the i th trials
table_pc['plan_pc']=plan_pc
table_pc['move_pc']=move_pc
table_pc
plan_pc_mean=[]
plan_pc_cov=[]
plan_pc_deleted_targets=[]
plan_pc_deleted_index=[]
move_pc_mean=[]
move_pc_cov=[]
move_pc_deleted_targets=[]
move_pc_deleted_index=[]
for i in range(1,9):
    plan_pc=np.array(list(table_pc[table_pc.targets==i]['plan_pc']))
    plan_pc_mean1=np.mean(plan_pc,axis=0)
    if np.any(plan_pc_mean1==0):
        id2=np.where(plan_pc_mean1==0)[0]
        plan_pc=np.delete(plan_pc,id2,axis=1)
        plan_pc_mean1=np.mean(plan_pc,axis=0)
        plan_pc_deleted_targets.append(i)
        plan_pc_deleted_index.append(id2)
    plan_pc_mean.append(plan_pc_mean1)
    plan_pc_cov.append(np.cov(plan_pc.T))
    move_pc=np.array(list(table_pc[table_pc.targets==i]['move_pc']))
    move_pc_mean1=np.mean(move_pc,axis=0)
    if np.any(move_pc_mean1==0):
        id3=np.where(move_pc_mean1==0)[0]
        move_pc=np.delete(move_pc,id3,axis=1)
        move_pc_mean1=np.mean(move_pc,axis=0)
        move_pc_deleted_targets.append(i)
        move_pc_deleted_index.append(id3)
    move_pc_mean.append(move_pc_mean1)
    move_pc_cov.append(np.cov(move_pc.T))
test_plan_pc=[]
test_move_pc=[]
for i in range(len(testing_data)):
    test_plan_pc1=[]
    test_move_pc1=[]
    p1=r[testing_data[i]].timeTouchHeld
    p2=r[testing_data[i]].timeGoCue
    p3=r[testing_data[i]].timeTargetAcquire
    test_plan_series=np.linspace(p1,p2,5+1)
    test_move_series=np.linspace(p2,p3,5+1)
    for l in range(190):
        test_plan_bin=np.zeros(len(test_plan_series))
        test_move_bin=np.zeros(len(test_move_series))
        if type(r[testing_data[i]].unit[l].spikeTimes) == float: # spikeTimes is a scalar when there is only one spike
            if (r[testing_data[i]].unit[l].spikeTimes>=p1) & (r[testing_data[i]].unit[l].spikeTimes<p2):
                test_id_plan=math.floor((r[testing_data[i]].unit[l].spikeTimes-p1)/((p2-p1)/5))
                test_plan_bin[test_id_plan] += 1
            if (r[testing_data[i]].unit[l].spikeTimes>=p2) & (r[testing_data[i]].unit[l].spikeTimes<p3):
                test_id_move=math.floor((r[testing_data[i]].unit[l].spikeTimes-p2)/((p3-p2)/5))
                test_move_bin[test_id_move] += 1
        elif list(r[testing_data[i]].unit[l].spikeTimes) == []: # spikeTimes is empty when there are no spikes
            pass
        else:
            for m in r[testing_data[i]].unit[l].spikeTimes:
                if (m>=p1) & (m<p2):
                    test_id_plan=math.floor((m-p1)/((p2-p1)/5))
                    test_plan_bin[test_id_plan] += 1
                if (m>=p2) & (m<p3):
                    test_id_move=math.floor((m-p2)/((p3-p2)/5))
                    test_move_bin[test_id_move] += 1
        test_plan_bin=test_plan_bin/((p2-p1)/5)
        test_move_bin=test_move_bin/((p3-p2)/5)
        test_plan_convolve=ndimage.gaussian_filter(test_plan_bin,sigma=5,truncate=5)
        test_move_convolve=ndimage.gaussian_filter(test_move_bin,sigma=5,truncate=5)
        test_plan_pc1.append(test_plan_convolve)
        test_move_pc1.append(test_move_convolve)
    test_plan_pc1=np.array(test_plan_pc1)
    test_move_pc1=np.array(test_move_pc1)
    test_plan_pc.append(abs(pc_projection(test_plan_pc1)))
    test_move_pc.append(abs(pc_projection(test_move_pc1)))
target0=[cfr[i] for i in testing_data]
test_table_pc=pd.DataFrame(target0,index=testing_data,columns=['targets']) # index represent the i th trials
test_table_pc['plan_pc']=test_plan_pc
test_table_pc['move_pc']=test_move_pc
test_table_pc
```
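The `pc_projection` helper used above boils down to projecting each trial's data onto the leading eigenvector of the covariance matrix; a quick self-contained check on synthetic data (this mini version is an illustration of the technique, not the notebook's exact helper, and the data are invented):

```python
import numpy as np

def first_pc_scores(X):
    mu = X.mean(axis=0)
    w, v = np.linalg.eigh(np.cov(X.T))  # eigh sorts eigenvalues in ascending order
    return (X - mu) @ v[:, -1]          # project onto the largest-variance direction

rng = np.random.default_rng(0)
# 50 observations dominated by one direction, plus a little isotropic noise
X = rng.normal(size=(50, 1)) @ np.array([[3.0, 0.1]]) + rng.normal(scale=0.1, size=(50, 2))
scores = first_pc_scores(X)
# the first PC should capture nearly all of the total variance
print(scores.var() / X.var(axis=0).sum() > 0.95)
```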
## Plan PC and Move PC
```
def P_M_pcscore_simulate(r1):
    f=[]
    for l in range(1,9):
        if l in plan_pc_deleted_targets or l in move_pc_deleted_targets:
            r1_deleted1=r1[:190]
            r1_deleted2=r1[190:]
            if l in plan_pc_deleted_targets:
                r1_deleted1=np.delete(r1[:190],plan_pc_deleted_index[plan_pc_deleted_targets.index(l)])
            if l in move_pc_deleted_targets:
                r1_deleted2=np.delete(r1[190:],move_pc_deleted_index[move_pc_deleted_targets.index(l)])
            r1_deleted=np.append(r1_deleted1,r1_deleted2)
            f1=multivariate_normal.logpdf(r1_deleted,
                                          mean=np.append(plan_pc_mean[l-1],move_pc_mean[l-1]),
                                          cov=np.diag(np.append(np.diag(plan_pc_cov[l-1]),np.diag(move_pc_cov[l-1]))))
        else:
            f1=multivariate_normal.logpdf(r1,
                                          mean=np.append(plan_pc_mean[l-1],move_pc_mean[l-1]),
                                          cov=np.diag(np.append(np.diag(plan_pc_cov[l-1]),np.diag(move_pc_cov[l-1]))))
        f.append(f1)
    simulate_target=f.index(max(f))+1
    return simulate_target
PMpcscore_simulate_targets=[]
for i in range(len(test_table_pc)):
    r1=np.append(list(test_table_pc['plan_pc'])[i],list(test_table_pc['move_pc'])[i])
    simulate_target=P_M_pcscore_simulate(r1)
    PMpcscore_simulate_targets.append(simulate_target)
PMpcscore_simulate_degrees=[degrees[i-1] for i in PMpcscore_simulate_targets]
PMpcscore_e=abs(np.array(orginal_degrees)-np.array(PMpcscore_simulate_degrees))
correct_PMpcscore=[i==j for i,j in zip(test_table_pc['targets'],PMpcscore_simulate_targets)]
PMpcscore_percent=sum(correct_PMpcscore)/len(test_table_pc['targets'])
PMpcscore_d=np.mean(PMpcscore_e)
PMpcscore_d_sem=np.std(PMpcscore_e)/np.sqrt(len(PMpcscore_e))
print('Mean of angular error for the Plan PC score/Move PC score model is %.4f'%PMpcscore_d)
print('Sem of angular error for the Plan PC score/Move PC score model is %.4f'%PMpcscore_d_sem)
print('Simulation accuracy for the Plan PC score/Move PC score model is %.4f%%'%(PMpcscore_percent*100))
```
## Plan rate and Move PC
```
def Prate_Mpc_simulate(r1):
    f=[]
    for l in range(1,9):
        if l in plan_deleted_targets or l in move_pc_deleted_targets:
            r1_deleted1=r1[:190]
            r1_deleted2=r1[190:]
            if l in plan_deleted_targets:
                r1_deleted1=np.delete(r1[:190],plan_deleted_index[plan_deleted_targets.index(l)])
            if l in move_pc_deleted_targets:
                r1_deleted2=np.delete(r1[190:],move_pc_deleted_index[move_pc_deleted_targets.index(l)])
            r1_deleted=np.append(r1_deleted1,r1_deleted2)
            f1=multivariate_normal.logpdf(r1_deleted,
                                          mean=np.append(plan_mean[l-1],move_pc_mean[l-1]),
                                          cov=np.diag(np.append(np.diag(plan_cov[l-1]),np.diag(move_pc_cov[l-1]))))
        else:
            f1=multivariate_normal.logpdf(r1,
                                          mean=np.append(plan_mean[l-1],move_pc_mean[l-1]),
                                          cov=np.diag(np.append(np.diag(plan_cov[l-1]),np.diag(move_pc_cov[l-1]))))
        f.append(f1)
    simulate_target=f.index(max(f))+1
    return simulate_target
Prate_Mpc_simulate_targets=[]
for i in range(len(test_table_pc)):
    r1=np.append(list(test_table1['plan_rate'])[i],list(test_table_pc['move_pc'])[i])
    simulate_target=Prate_Mpc_simulate(r1)
    Prate_Mpc_simulate_targets.append(simulate_target)
Prate_Mpc_simulate_degrees=[degrees[i-1] for i in Prate_Mpc_simulate_targets]
Prate_Mpc_e=abs(np.array(orginal_degrees)-np.array(Prate_Mpc_simulate_degrees))
correct_Prate_Mpc=[i==j for i,j in zip(test_table_pc['targets'],Prate_Mpc_simulate_targets)]
Prate_Mpc_percent=sum(correct_Prate_Mpc)/len(test_table_pc['targets'])
Prate_Mpc_d=np.mean(Prate_Mpc_e)
Prate_Mpc_d_sem=np.std(Prate_Mpc_e)/np.sqrt(len(Prate_Mpc_e))
print('Mean of angular error for the Plan rate/Move PC score model is %.4f'%Prate_Mpc_d)
print('Sem of angular error for the Plan rate/Move PC score model is %.4f'%Prate_Mpc_d_sem)
print('Simulation accuracy for the Plan rate/Move PC score model is %.4f%%'%(Prate_Mpc_percent*100))
```
## Plan PC and Move rate
```
def Ppc_Mrate_simulate(r1):
    f=[]
    for l in range(1,9):
        if l in plan_pc_deleted_targets or l in move_deleted_targets:
            r1_deleted1=r1[:190]
            r1_deleted2=r1[190:]
            if l in plan_pc_deleted_targets:
                r1_deleted1=np.delete(r1[:190],plan_pc_deleted_index[plan_pc_deleted_targets.index(l)])
            if l in move_deleted_targets:
                r1_deleted2=np.delete(r1[190:],move_deleted_index[move_deleted_targets.index(l)])
            r1_deleted=np.append(r1_deleted1,r1_deleted2)
            f1=multivariate_normal.logpdf(r1_deleted,
                                          mean=np.append(plan_pc_mean[l-1],move_mean[l-1]),
                                          cov=np.diag(np.append(np.diag(plan_pc_cov[l-1]),np.diag(move_cov[l-1]))))
        else:
            f1=multivariate_normal.logpdf(r1,
                                          mean=np.append(plan_pc_mean[l-1],move_mean[l-1]),
                                          cov=np.diag(np.append(np.diag(plan_pc_cov[l-1]),np.diag(move_cov[l-1]))))
        f.append(f1)
    simulate_target=f.index(max(f))+1
    return simulate_target
Ppc_Mrate_simulate_targets=[]
for i in range(len(test_table_pc)):
    r1=np.append(list(test_table_pc['plan_pc'])[i],list(test_table1['move_rate'])[i])
    simulate_target=Ppc_Mrate_simulate(r1)
    Ppc_Mrate_simulate_targets.append(simulate_target)
Ppc_Mrate_simulate_degrees=[degrees[i-1] for i in Ppc_Mrate_simulate_targets]
Ppc_Mrate_e=abs(np.array(orginal_degrees)-np.array(Ppc_Mrate_simulate_degrees))
correct_Ppc_Mrate=[i==j for i,j in zip(test_table_pc['targets'],Ppc_Mrate_simulate_targets)]
Ppc_Mrate_percent=sum(correct_Ppc_Mrate)/len(test_table_pc['targets'])
Ppc_Mrate_d=np.mean(Ppc_Mrate_e)
Ppc_Mrate_d_sem=np.std(Ppc_Mrate_e)/np.sqrt(len(Ppc_Mrate_e))
print('Mean of angular error for the Plan PC score/Move rate model is %.4f'%Ppc_Mrate_d)
print('Sem of angular error for the Plan PC score/Move rate model is %.4f'%Ppc_Mrate_d_sem)
print('Simulation accuracy for the Plan PC score/Move rate model is %.4f%%'%(Ppc_Mrate_percent*100))
```
# Results
```
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
results_accuracy=[plan_percent,move_percent,combined_percent,PMrate_percent,\
PMpcscore_percent,Prate_Mpc_percent,Ppc_Mrate_percent]
results_degrees=[plan_d,move_d,combined_d,PMrate_d,\
PMpcscore_d,Prate_Mpc_d,Ppc_Mrate_d]
results_sem=[plan_d_sem,move_d_sem,combined_d_sem,PMrate_d_sem,\
PMpcscore_d_sem,Prate_Mpc_d_sem,Ppc_Mrate_d_sem]
category=['Move\n Rate','Plan\n Rate','Undiff.\n Rate','Plan Rate/\n Move Rate',\
'Plan PC/\n Move PC','Plan Rate/\n Move PC','Plan PC/\n Move Rate']
x=np.arange(len(results_accuracy))+1
plt.bar(x,height=np.array(results_accuracy)*100,align='center',tick_label=category)
plt.xticks(horizontalalignment='center',fontsize=8)
plt.ylim(80,100)
plt.title('Simulation accuracy of different models')
for a,b in zip(x,np.array(results_accuracy)*100):
    c=str(b)[:5]+'%'
    plt.text(a,b+0.1,c,horizontalalignment='center')
plt.figure()
plt.bar(x,height=results_degrees,align='center',tick_label=category)
plt.xticks(horizontalalignment='center',fontsize=8)
plt.ylim(0,12)
plt.title('Absolute error (mean$\pm$sem) of different models')
for a,b in zip(x,results_degrees):
    c=str(b)[:5]+'$^{\circ}$'+'\n'+'$\pm$'+str(results_sem[a-1])[:5]
    plt.text(a,b+0.1,c,horizontalalignment='center')
```
# End
<center> <font size=6> <b> Table of Contents </b> </font> </center>
<div id="toc"></div>
The following cell contains JavaScript code that builds the Jupyter notebook's table of contents.
```
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')
```
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
<center> <font size=5> <h1>Define working environment</h1> </font> </center>
The following cells are used to:
- Import needed libraries
- Set the environment variables for Python, Anaconda, GRASS GIS and R statistical computing
- Define the ["GRASSDATA" folder](https://grass.osgeo.org/grass73/manuals/helptext.html), and the names of the "location" and "mapset" where you will work.
**Import libraries**
```
## Import libraries needed for setting parameters of operating system
import os
import sys
## Import library for temporary file creation
import tempfile
## Import Pandas library
import pandas as pd
## Import Numpy library
import numpy
## Import Psycopg2 library (interaction with PostgreSQL databases)
import psycopg2 as pg
# Import Math library (useful, e.g., for rounding numbers)
import math
## Import Subprocess + subprocess.call
import subprocess
from subprocess import call, Popen, PIPE, STDOUT
```
<center> <font size=3> <h3>Environment variables when working on Linux Mint</h3> </font> </center>
**Set 'Python' and 'GRASS GIS' environment variables**
Here, we set [the environment variables allowing to use of GRASS GIS](https://grass.osgeo.org/grass64/manuals/variables.html) inside this Jupyter notebook. Please change the directory path according to your own system configuration.
```
### Define GRASS GIS environment variables for LINUX UBUNTU Mint 18.1 (Serena)
# Check if environment variables exist and create them (empty) if they do not
if not 'PYTHONPATH' in os.environ:
    os.environ['PYTHONPATH']=''
if not 'LD_LIBRARY_PATH' in os.environ:
    os.environ['LD_LIBRARY_PATH']=''
# Set environmental variables
os.environ['GISBASE'] = '/home/tais/SRC/GRASS/grass_trunk/dist.x86_64-pc-linux-gnu'
os.environ['PATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'bin')
os.environ['PATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'script')
os.environ['PATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'lib')
#os.environ['PATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'etc','python')
os.environ['PYTHONPATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'etc','python')
os.environ['PYTHONPATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'etc','python','grass')
os.environ['PYTHONPATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'etc','python','grass','script')
os.environ['PYTHONLIB'] = '/usr/lib/python2.7'
os.environ['LD_LIBRARY_PATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'lib')
os.environ['GIS_LOCK'] = '$$'
os.environ['GISRC'] = os.path.join(os.environ['HOME'],'.grass7','rc')
os.environ['PATH'] += os.pathsep + os.path.join(os.environ['HOME'],'.grass7','addons')
os.environ['PATH'] += os.pathsep + os.path.join(os.environ['HOME'],'.grass7','addons','bin')
os.environ['PATH'] += os.pathsep + os.path.join(os.environ['HOME'],'.grass7','addons')
os.environ['PATH'] += os.pathsep + os.path.join(os.environ['HOME'],'.grass7','addons','scripts')
## Define GRASS-Python environment
sys.path.append(os.path.join(os.environ['GISBASE'],'etc','python'))
```
**Import GRASS Python packages**
```
## Import libraries needed to launch GRASS GIS in the jupyter notebook
import grass.script.setup as gsetup
## Import libraries needed to call GRASS using Python
import grass.script as grass
```
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
**Display current environment variables of your computer**
```
## Display the current defined environment variables
for key in os.environ.keys():
    print "%s = %s \t" % (key,os.environ[key])
```
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
<center> <font size=5> <h1>User inputs</h1> </font> </center>
```
## Define an empty dictionary for saving user inputs
user={}
```
Here after:
- Enter the path to the directory you want to use as "[GRASSDATA](https://grass.osgeo.org/programming7/loc_struct.png)".
- Enter the name of the location in which you want to work and its projection information in [EPSG code](http://spatialreference.org/ref/epsg/) format. Please note that the GRASSDATA folder and locations will be automatically created if not existing yet. If the location name already exists, the projection information will not be used.
- Enter the name you want for the mapsets which will be used later for Unsupervised Segmentation Parameter Optimization (USPO), Segmentation and Classification steps.
```
## Enter the path to GRASSDATA folder
user["gisdb"] = "/media/tais/My_Book_1/MAUPP/Traitement/Ouagadougou/Segmentation_fullAOI_localapproach/GRASSDATA"
## Enter the name of the location (existing or for a new one)
user["location"] = "Ouaga_32630"
## Enter the EPSG code for this location
user["locationepsg"] = "32630"
## Enter the name of the mapset to use for segmentation
user["segmentation_mapsetname"] = "LOCAL_SEGMENT"
## Enter the name of the mapset to use for classification
user["classificationA_mapsetname"] = "CLASSIF"
```
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
<center> <font size=5> <h1>Define functions</h1> </font> </center>
This section of the notebook is dedicated to defining functions which will then be called later in the script. If you want to create your own functions, define them here.
### Function for computing processing time
The "print_processing_time" function is used to calculate and display the processing time of the various stages of the processing chain. At the beginning of each major step, the current time is stored in a new variable using the [time.time() function](https://docs.python.org/2/library/time.html). At the end of the stage, "print_processing_time" is called with this variable and an output message as arguments.
```
## Import library for managing time in python
import time
## Function "print_processing_time()" computes the processing time and returns a printable message.
# The argument "begintime" expects a variable containing the begin time (result of time.time()) of the process to measure.
# The argument "printmessage" expects a string with information about the process.
def print_processing_time(begintime, printmessage):
    endtime=time.time()
    processtime=endtime-begintime
    remainingtime=processtime
    days=int((remainingtime)/86400)
    remainingtime-=(days*86400)
    hours=int((remainingtime)/3600)
    remainingtime-=(hours*3600)
    minutes=int((remainingtime)/60)
    remainingtime-=(minutes*60)
    seconds=round((remainingtime)%60,1)
    if processtime<60:
        finalprintmessage=str(printmessage)+str(seconds)+" seconds"
    elif processtime<3600:
        finalprintmessage=str(printmessage)+str(minutes)+" minutes and "+str(seconds)+" seconds"
    elif processtime<86400:
        finalprintmessage=str(printmessage)+str(hours)+" hours and "+str(minutes)+" minutes and "+str(seconds)+" seconds"
    else:
        finalprintmessage=str(printmessage)+str(days)+" days, "+str(hours)+" hours and "+str(minutes)+" minutes and "+str(seconds)+" seconds"
    return finalprintmessage
```
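The day/hour/minute arithmetic above can also be written with `divmod`; below is a minimal, self-contained sketch of the same breakdown (the function name `format_duration` is ours, not part of the notebook):

```python
def format_duration(processtime):
    # Decompose a duration in seconds, mirroring the successive
    # subtractions performed in print_processing_time above.
    days, rem = divmod(processtime, 86400)
    hours, rem = divmod(rem, 3600)
    minutes, seconds = divmod(rem, 60)
    days, hours, minutes = int(days), int(hours), int(minutes)
    seconds = round(seconds, 1)
    if processtime < 60:
        return "%s seconds" % seconds
    elif processtime < 3600:
        return "%s minutes and %s seconds" % (minutes, seconds)
    elif processtime < 86400:
        return "%s hours and %s minutes and %s seconds" % (hours, minutes, seconds)
    return "%s days, %s hours and %s minutes and %s seconds" % (days, hours, minutes, seconds)

print(format_duration(3725.4))  # → 1 hours and 2 minutes and 5.4 seconds
```

Wrapping the decomposition in a pure function of the elapsed time (rather than of two timestamps) makes it easy to test in isolation.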
### Function to concatenate individual .csv files and replace unwanted values
```
## Function which concatenates individual .csv files stored in a folder.
# The 'indir' parameter expects a string with the path of the directory to search for individual .csv files
# The 'pattern' parameter expects a string with the filename pattern to look for (example: "TMP_*.csv")
# The 'sep' parameter expects a string with the delimiter of the .csv files
# The 'replacedict' parameter expects a dictionary with the unwanted values as keys and their replacement strings as values
# The 'outputfilename' parameter expects the name of the output file (including extension but without the full path)
import os
import glob
import csv
def concat_findreplace(indir,pattern,sep,replacedict,outputfilename):
    # Initialise some variables
    returnmessage=None
    countdict={}
    for k in replacedict:
        countdict[k]=0
    # Change the working directory
    os.chdir(indir)
    # Get a list of files in the current directory corresponding to the given pattern
    fileList=glob.glob(pattern)
    returnmessage="Going to concatenate "+str(len(fileList))+" .csv files together and replace unwanted values."
    # Create a new output file
    outfile=os.path.join(indir, outputfilename)
    writercsvSubset = open(outfile, 'wb')
    writercsv=csv.writer(writercsvSubset,delimiter=sep)
    # Concatenate individual files and replace unwanted values
    for indivfile in fileList:
        with open(indivfile) as readercsvSubset:
            readercsv=csv.reader(readercsvSubset, delimiter=sep)
            if indivfile!=fileList[0]:
                readercsv.next()  # Skip the header row of all files except the first one
            for row in readercsv:
                newline=[]
                for i, x in enumerate(row):
                    if x in replacedict:
                        newline.append(replacedict[x])
                        countdict[x]+=1
                    else:
                        newline.append(row[i])
                writercsv.writerow(newline)
    # Close the output file
    writercsvSubset.close()
    # Count the number of changes
    countchange=0
    for k in countdict:
        countchange+=countdict[k]
    # Build the report message
    if countchange>0:
        returnmessage+="\n"
        returnmessage+="Values have been changed:"+"\n"
        for k in replacedict:
            if countdict[k]>0:
                returnmessage+=str(countdict[k])+" '"+k+"' value(s) replaced by '"+replacedict[k]+"'\n"
    else:
        returnmessage+="\n"+"Nothing changed. No unwanted values found!\n"
    # Return the message without the trailing newline
    return returnmessage[:-1]
```
### Function for PostgreSQL database vacuum
```
# Do a VACUUM on the current PostgreSQL database
def vacuum(db):
    old_isolation_level = db.isolation_level
    db.set_isolation_level(0)  # VACUUM cannot run inside a transaction block
    cur = db.cursor()  # use a cursor on the connection passed as argument, not a global one
    cur.execute("VACUUM")
    cur.close()
    db.set_isolation_level(old_isolation_level)
```
### Function for finding duplicated 'cat' in the PostGis table
```
def find_duplicated_cat():
    # Build a query to drop the table if it exists
    query="DROP TABLE IF EXISTS "+schema+".duplic_cat"
    # Execute the query
    cur.execute(query)
    # Make the changes to the database persistent
    db.commit()
    # Build a query to select duplicated 'cat' values
    query="CREATE TABLE "+schema+".duplic_cat AS \
           SELECT cat, min(w7_homo_stddev) as w7homostddev_min, count(*) as duplic_nbr \
           FROM "+schema+"."+object_stats_table+" \
           GROUP BY cat HAVING count(*) > 1.0 \
           ORDER BY duplic_nbr, cat"
    # Execute the query
    cur.execute(query)
    # Make the changes to the database persistent
    db.commit()
    # Build a query to select the duplicated 'cat' values
    query="SELECT * \
           FROM "+schema+".duplic_cat"
    # Execute the query and load the result in a dataframe
    df=pd.read_sql(query, db)
    ## Create a list with the duplicated 'cat' values to be dropped
    global cattodrop
    cattodrop=list(df['cat'])
```
### Function for finding duplicated 'key' in the PostGis table
```
def find_duplicated_key():
    # Build a query to select the 'key_value' of duplicated objects
    query="SELECT b.key_value, a.cat \
           FROM "+schema+".duplic_cat AS a \
           LEFT JOIN (SELECT key_value, cat, w7_homo_stddev FROM "+schema+"."+object_stats_table+") AS b \
           ON a.cat = b.cat WHERE a.w7homostddev_min=b.w7_homo_stddev \
           ORDER BY a.cat ASC"
    # Execute the query and load the result in a dataframe
    df=pd.read_sql(query, db)
    ## Create a list with the 'key_value' of the rows to be dropped
    global keytodrop
    keytodrop=list(df['key_value'])
    # Build a query to drop the table if it exists
    query="DROP TABLE IF EXISTS "+schema+".duplic_cat"
    # Execute the query
    cur.execute(query)
    # Make the changes to the database persistent
    db.commit()
```
### Function for removing the duplicated 'key' in the PostGis table
```
def remove_duplicated_key():
    # If the list of keys to drop is not empty
    if len(keytodrop)>0:
        # Build a query to delete the corresponding rows
        query="DELETE FROM "+schema+"."+object_stats_table+" \
               WHERE key_value IN ("+",".join(str(x) for x in keytodrop)+")"
        # Execute the query
        cur.execute(query)
        # Make the changes to the database persistent
        db.commit()
    else:
        print "There are no duplicates to delete from '"+schema+"."+object_stats_table+"'"
```
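The three deduplication functions above can be exercised end-to-end; here on an in-memory SQLite table standing in for the PostGIS table (same queries, simplified). Note that, as in `find_duplicated_key`, it is the duplicate row whose `w7_homo_stddev` equals the per-'cat' minimum that gets dropped:

```python
import sqlite3

# In-memory stand-in for the 'statistics.object_stats_sar' table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE object_stats "
            "(key_value INTEGER PRIMARY KEY, cat INTEGER, w7_homo_stddev REAL)")
cur.executemany("INSERT INTO object_stats (cat, w7_homo_stddev) VALUES (?, ?)",
                [(1, 0.5), (2, 0.3), (2, 0.7), (3, 0.1), (3, 0.4)])

# Step 1 (find_duplicated_cat): per duplicated 'cat', record the minimum stddev.
cur.execute("CREATE TABLE duplic_cat AS "
            "SELECT cat, MIN(w7_homo_stddev) AS w7homostddev_min, COUNT(*) AS duplic_nbr "
            "FROM object_stats GROUP BY cat HAVING COUNT(*) > 1")

# Step 2 (find_duplicated_key): key_value of the duplicate row matching that minimum.
cur.execute("SELECT b.key_value FROM duplic_cat AS a "
            "JOIN object_stats AS b ON a.cat = b.cat "
            "WHERE a.w7homostddev_min = b.w7_homo_stddev")
keytodrop = [row[0] for row in cur.fetchall()]

# Step 3 (remove_duplicated_key): delete those rows.
cur.execute("DELETE FROM object_stats WHERE key_value IN (%s)"
            % ",".join(str(k) for k in keytodrop))
conn.commit()

cur.execute("SELECT cat, w7_homo_stddev FROM object_stats ORDER BY cat")
rows = cur.fetchall()
print(rows)  # → [(1, 0.5), (2, 0.7), (3, 0.4)]
```

One caveat visible in this sketch: if two duplicates of the same 'cat' tie at the minimum stddev, both rows match in step 2 and both would be deleted.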
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
### Instal GRASS extensions
GRASS GIS has both a core part (the one installed by default on your computer) and add-ons (which have to be installed using the extension manager ['g.extension'](https://grass.osgeo.org/grass72/manuals/g.extension.html)).
In the next cell, 'i.segment.uspo' will be installed (if not already present), together with the other add-ons ['r.neighborhoodmatrix'](https://grass.osgeo.org/grass70/manuals/addons/r.neighborhoodmatrix.html) and ['i.segment.hierarchical'](https://grass.osgeo.org/grass70/manuals/addons/i.segment.hierarchical.html) which are required for running i.segment.uspo.
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
```
## Saving current time for processing time management
begintime_full=time.time()
```
### Launch GRASS GIS working session
```
## Set the name of the mapset in which to work
mapsetname=user["classificationA_mapsetname"]
## Launch GRASS GIS working session in the mapset
if os.path.exists(os.path.join(user["gisdb"],user["location"],mapsetname)):
    gsetup.init(os.environ['GISBASE'], user["gisdb"], user["location"], mapsetname)
    print "You are now working in mapset '"+mapsetname+"'"
else:
    print "'"+mapsetname+"' mapset doesn't exist in "+user["gisdb"]
```
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
# Importing SAR Features
**VHR Radarsat2 preprocessed layer (calibrated mosaic, speckle filtered 3x3 and histogram cut for very high values)**
**3x3 features**
**7x7 features**
```
## Set the path to the image file
pathtodata="/media/tais/data/MAUPP/SAR_data/Ouagadougou/VHR_WP2/VHR_Textures/subset_2_of_mosaic_Spk_glcm_7x_7text_mask/subset_2_of_mosaic_Spk_glcm_7x_7text_mask_32630.tif"
## Saving current time for processing time management
begintime_sar=time.time()
## Import SAR texture bands and rename each band with its texture name
print ("Importing 7x7 VHR SAR features at " + time.ctime())
grass.run_command('r.import', input=pathtodata,
                  output="SAR_seven", overwrite=True)
for rast in grass.list_strings("rast", pattern="SAR_seven", flag='r'):
    if rast.find("1")!=-1: grass.run_command("g.rename", overwrite=True, rast=(rast,"7_Homo"))
    elif rast.find("2")!=-1: grass.run_command("g.rename", overwrite=True, rast=(rast,"7_Var"))
    elif rast.find("3")!=-1: grass.run_command("g.rename", overwrite=True, rast=(rast,"7_Mean"))
    elif rast.find("4")!=-1: grass.run_command("g.rename", overwrite=True, rast=(rast,"7_Entr"))
    elif rast.find("5")!=-1: grass.run_command("g.rename", overwrite=True, rast=(rast,"7_Dis"))
    elif rast.find("6")!=-1: grass.run_command("g.rename", overwrite=True, rast=(rast,"7_Cont"))
    elif rast.find("7")!=-1: grass.run_command("g.rename", overwrite=True, rast=(rast,"7_SM"))
print_processing_time(begintime_sar ,"7x7 VHR SAR features have been imported in ")
```
**11x11 features**
```
## Set the path to the image file
pathtodata="/media/tais/data/MAUPP/SAR_data/Ouagadougou/VHR_WP2/VHR_Textures/Ouaga_vhr_glcm11xd2_mask_selected/Ouaga_vhr_glcm11xd2_mask_selected_32630.tif"
## Saving current time for processing time management
begintime_sar=time.time()
## Import SAR texture bands and rename each band with its texture name
print ("Importing 11x11 VHR SAR features at " + time.ctime())
grass.run_command('r.import', input=pathtodata,
                  output="SAR_eleven", overwrite=True)
for rast in grass.list_strings("rast", pattern="SAR_eleven", flag='r'):
    if rast.find("1")!=-1: grass.run_command("g.rename", overwrite=True, rast=(rast,"11_Homo"))
    elif rast.find("2")!=-1: grass.run_command("g.rename", overwrite=True, rast=(rast,"11_Var"))
    elif rast.find("3")!=-1: grass.run_command("g.rename", overwrite=True, rast=(rast,"11_Mean"))
    elif rast.find("4")!=-1: grass.run_command("g.rename", overwrite=True, rast=(rast,"11_Entr"))
    elif rast.find("5")!=-1: grass.run_command("g.rename", overwrite=True, rast=(rast,"11_Dis"))
    elif rast.find("6")!=-1: grass.run_command("g.rename", overwrite=True, rast=(rast,"11_Cont"))
    elif rast.find("7")!=-1: grass.run_command("g.rename", overwrite=True, rast=(rast,"11_SM"))
print_processing_time(begintime_sar ,"11x11 VHR SAR features have been imported in ")
```
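The two if/elif renaming chains above can be made data-driven with a band-index → texture-name mapping. A sketch below; the helper `renamed` and its parsing of imported band names like 'SAR_seven.3' are our assumptions about r.import's output naming, not notebook code:

```python
# Band index -> texture name mapping shared by the 7x7 and 11x11 stacks above.
TEXTURE_NAMES = {1: "Homo", 2: "Var", 3: "Mean", 4: "Entr", 5: "Dis", 6: "Cont", 7: "SM"}

def renamed(rast, window_prefix):
    # Return the target name for an imported band raster such as 'SAR_seven.3@CLASSIF',
    # or None when no band index can be parsed from the name.
    suffix = rast.split("@")[0].rsplit(".", 1)[-1]
    if suffix.isdigit() and int(suffix) in TEXTURE_NAMES:
        return "%s_%s" % (window_prefix, TEXTURE_NAMES[int(suffix)])
    return None

print(renamed("SAR_seven.3@CLASSIF", "7"))    # → 7_Mean
print(renamed("SAR_eleven.7@CLASSIF", "11"))  # → 11_SM
```

In the notebook, the returned name would feed `grass.run_command("g.rename", rast=(rast, newname))`, replacing each elif chain with one loop.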
### Set SAR texture layers' null values to zero to avoid error in i.segment.stats
**Remove current mask**
```
## Check if there is a raster layer named "MASK"
if not grass.list_strings("rast", pattern="MASK", mapset=mapsetname, flag='r'):
    print 'There is currently no MASK'
else:
    ## Remove the current MASK layer
    grass.run_command('r.mask',flags='r')
    print 'The current MASK has been removed'
## Loop on each SAR layer and set null values to zero
raster_list=grass.parse_command('g.list',type="raster", mapset=user["classificationA_mapsetname"])
for layer in raster_list:
    if layer[:1].isdigit():
        grass.run_command('g.region', raster=layer)
        grass.run_command('r.null', map=layer, null="0")
        print "Null values of '"+layer+"' layer have been set to zero"
## Check that there are no null values left
raster_list=grass.parse_command('g.list',type="raster", mapset=user["classificationA_mapsetname"])
for layer in raster_list:
    if layer[:1].isdigit():
        grass.run_command('g.region', raster=layer)
        count=grass.parse_command('r.univar', map=layer, flags='g')['null_cells']
        if int(count)==0:
            print "There are no null values in '"+layer+"' layer."
        else:
            print count+" null values have been found in '"+layer+"' layer"
```
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
# Define input raster for computing statistics of segments
```
## Display the name of rasters available in PERMANENT and CLASSIFICATION mapset
print grass.read_command('g.list',type="raster", mapset="PERMANENT", flags='rp')
print grass.read_command('g.list',type="raster", mapset=user["classificationA_mapsetname"], flags='rp')
## Define the list of raster layers for which statistics will be computed
inputstats=[]
#inputstats.append("SAR")
#inputstats.append("3_Mean")
#inputstats.append("3_Var")
#inputstats.append("3_Dis")
#inputstats.append("3_Entr")
#inputstats.append("3_SM")
inputstats.append("7_Homo")
inputstats.append("7_Var")
inputstats.append("7_Mean")
inputstats.append("7_Entr")
inputstats.append("7_Dis")
inputstats.append("7_Cont")
inputstats.append("7_SM")
inputstats.append("11_Homo")
inputstats.append("11_Var")
inputstats.append("11_Mean")
inputstats.append("11_Entr")
inputstats.append("11_Dis")
inputstats.append("11_Cont")
inputstats.append("11_SM")
print "Layer to be used to compute raster statistics of segments:\n"+'\n'.join(inputstats)
## Define the list of raster statistics to be computed for each raster layer
rasterstats=[]
rasterstats.append("min")
rasterstats.append("max")
rasterstats.append("range")
rasterstats.append("mean")
rasterstats.append("stddev")
#rasterstats.append("coeff_var") # It seems that this statistic creates null values
rasterstats.append("median")
rasterstats.append("first_quart")
rasterstats.append("third_quart")
rasterstats.append("perc_90")
print "Raster statistics to be computed:\n"+'\n'.join(rasterstats)
```
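Assuming i.segment.stats names its output columns `<raster>_<statistic>` (consistent with the `w7_homo_stddev` column queried later in this notebook), the statistic columns implied by the two lists above can be enumerated in advance. The variables below are a standalone subset for illustration, not the notebook's full lists:

```python
inputstats = ["7_Homo", "7_Var", "11_Homo"]    # subset of the layer list above
rasterstats = ["min", "max", "mean", "stddev"] # subset of the statistics list above
columns = ["%s_%s" % (layer, stat) for layer in inputstats for stat in rasterstats]
print(len(columns))  # → 12
print(columns[:4])   # → ['7_Homo_min', '7_Homo_max', '7_Homo_mean', '7_Homo_stddev']
```

With the full lists (14 layers x 9 statistics), this predicts 126 statistic columns per segment, which is useful for sanity-checking the concatenated .csv later.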
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
<center> <font size=5> <h1>Compute objects' statistics</h1> </font> </center>
```
## Saving current time for processing time management
begintime_computeobjstat=time.time()
```
## Define the folder where to save the results and create it if necessary
In the next cell, please adapt the path to the directory where you want to save the .csv outputs of i.segment.stats.
```
## Folder in which save processing time output
outputfolder="/media/tais/My_Book_1/MAUPP/Traitement/Ouagadougou/Segmentation_fullAOI_localapproach/Results/CLASSIF/stats_SAR"
## Create the folder if it does not exist
if not os.path.exists(outputfolder):
    os.makedirs(outputfolder)
    print "Folder '"+outputfolder+"' created"
```
### Copy data from other mapset to the current mapset
Some data need to be copied from other mapsets into the current mapset.
### Remove current mask
```
## Check if there is a raster layer named "MASK"
if not grass.list_strings("rast", pattern="MASK", mapset=mapsetname, flag='r'):
    print 'There is currently no MASK'
else:
    ## Remove the current MASK layer
    grass.run_command('r.mask',flags='r')
    print 'The current MASK has been removed'
```
***Copy segmentation raster***
***Copy morphological zone (raster)***
***Copy morphological zone (vector)***
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
# Compute statistics of segments (Shared AOI between optical and SAR)
### Compute statistics of segment using i.segment.stats
Statistics are computed iteratively for each morphological zone, used here as a tile.
This section uses the ['i.segment.stats' add-on](https://grass.osgeo.org/grass70/manuals/addons/i.segment.stats.html) to compute statistics for each object.
```
## Save name of the layer to be used as tiles
tile_layer='zone_morpho'+'@'+mapsetname
## Save name of the segmentation layer to be used by i.segment.stats
segment_layer='segments'+'@'+mapsetname
## Save name of the column containing area_km value
area_column='area_km2'
## Save name of the column containing morphological type value
type_column='type'
## Save the prefix to be used for the outputfiles of i.segment.stats
prefix="Segstat"
```
### Clip the segmentation raster to match the extent of SAR products
```
## Define region
grass.run_command('g.region', raster='11_Homo')
grass.run_command('g.region', align=segment_layer)
# Check that the spatial resolution corresponds to the segmentation layer
nsres_region=float(grass.parse_command('g.region', flags='pg')['nsres'])
nsres_layer=float(grass.raster_info(segment_layer)['nsres'])
if nsres_region != nsres_layer:
    sys.exit("There is a problem with the spatial resolution of the region, please check")
## Remove the mask if it exists
try:
    grass.run_command('r.mask', flags='r')
except Exception:
    print 'No mask to remove'
## Create a new layer with the segments not masked
formula="segments_sar="+segment_layer
grass.mapcalc(formula, overwrite=True)
## Redefine the segmentation layer to be used by i.segment.stats
segment_layer='segments_sar'
print "Segmentation layer has been clipped to match the extent of SAR products"
## Save the list of polygons to be processed (save the 'cat' value)
listofregion=list(grass.parse_command('v.db.select', map=tile_layer,
                                      columns='cat', flags='c'))[:]
for count, cat in enumerate(listofregion):
    print str(count)+" cat:"+str(cat)
```
```
## Initialize an empty string for saving printed outputs
txtcontent=""
## Running i.segment.stats
messagetoprint="Start computing statistics for segments to be classified, using i.segment.stats on "+time.ctime()+"\n"
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
begintime_isegmentstats=time.time()
## Compute the total area to be processed, for process progression information
processed_area=0
nbrtile=len(listofregion)
attributes=grass.parse_command('db.univar', flags='g', table=tile_layer.split("@")[0], column=area_column, driver='sqlite')
total_area=float(attributes['sum'])
messagetoprint=str(nbrtile)+" region(s) will be processed, covering an area of "+str(round(total_area,3))+" Sqkm."+"\n\n"
print (messagetoprint)
txtcontent+=messagetoprint
## Save time before looping
begintime_isegmentstats=time.time()
## Declare a list for the cat of regions where the i.segment.stats procedure failed
cat_segmentstats_failed=[]
## Start loop on morphological zones
count=1
for cat in listofregion[:]:
    ## Save the current time at the loop's start
    begintime_current_id=time.time()
    ## Create a computational region for the current polygon
    condition="cat="+cat
    outputname="tmp_"+cat
    grass.run_command('v.extract', overwrite=True, quiet=True,
                      input=tile_layer, type='area', where=condition, output=outputname)
    grass.run_command('g.region', overwrite=True, vector=outputname, align=segment_layer)
    grass.run_command('r.mask', overwrite=True, raster=tile_layer, maskcats=cat)
    grass.run_command('g.remove', quiet=True, type="vector", name=outputname, flags="f")
    ## Save the size of the current polygon
    size=round(float(grass.read_command('v.db.select', map=tile_layer,
                                        columns=area_column, where=condition, flags="c")),2)
    ## Print
    messagetoprint="Computing segments' statistics for tile n°"+str(cat)
    messagetoprint+=" ("+str(count)+"/"+str(len(listofregion))+")"
    messagetoprint+=" corresponding to "+str(size)+" km2"
    print (messagetoprint)
    txtcontent+=messagetoprint+"\n"
    ## Define the csv output file name
    outputcsv=os.path.join(outputfolder,prefix+"_"+str(cat)+".csv")
    try:
        ## Compute statistics of objects using i.segment.stats, with .csv output only (no vector map output)
        grass.run_command('i.segment.stats', overwrite=True, map=segment_layer, flags='s',
                          rasters=','.join(inputstats), raster_statistics=','.join(rasterstats),
                          csvfile=outputcsv, processes='20')
    except Exception as e:
        print "An error occurred for i.segment.stats when processing region cat: "+str(cat)
        cat_segmentstats_failed.append(cat)
    ## Add the size of the zone to the already processed area
    processed_area+=size
    ## Print
    messagetoprint=print_processing_time(begintime_current_id,
                                         "i.segment.stats finished processing the current tile in ")
    print (messagetoprint)
    txtcontent+=messagetoprint+"\n"
    remainingtile=nbrtile-count
    if remainingtile>0:
        messagetoprint=str(round((processed_area/total_area)*100,2))+" percent of the total area processed. "
        messagetoprint+="Still "+str(remainingtile)+" zone(s) to process."+"\n"
        print (messagetoprint)
        txtcontent+=messagetoprint+"\n"
    else:
        messagetoprint="\n"
        print (messagetoprint)
        txtcontent+=messagetoprint
    ## Adapt the count
    count+=1
## Remove the current mask
grass.run_command('r.mask', flags='r')
## Inform about regions for which i.segment.stats failed
if len(cat_segmentstats_failed)>0:
    print "WARNING: Some regions failed. Please re-run i.segment.stats for these regions: "+",".join(cat_segmentstats_failed)
## Compute processing time and print it
messagetoprint=print_processing_time(begintime_isegmentstats, "Statistics computed in ")
print (messagetoprint)
txtcontent+=messagetoprint
#### Write a text file with the log of processing time
## Create the .txt file for the processing time output and write it
f = open(os.path.join(outputfolder,mapsetname+"_processingtime_isegmentstats.txt"), 'w')
f.write(mapsetname+" processing time information for i.segment.stats"+"\n\n")
f.write(txtcontent)
f.close()
## Print
print_processing_time(begintime_computeobjstat,"Object statistics computed in ")
```
## Concatenate individuals .csv files and replace unwanted values
BE CAREFUL! Before running the following cells, please check your data to be sure that it makes sense to replace the 'nan', 'null', or 'inf' values with "0".
```
## Define the outputfile for .csv containing statistics for all segments
outputfile=os.path.join(outputfolder,"all_segments_stats.csv")
print outputfile
# Create a dictionary with 'key' to be replaced by 'values'
findreplacedict={}
findreplacedict['nan']="0"
findreplacedict['null']="0"
findreplacedict['inf']="0"
# Define pattern of file to concatenate
pat=prefix+"_*.csv"
sep="|"
## Initialize an empty string for saving printed outputs
txtcontent=""
## Saving current time for processing time management
begintime_concat=time.time()
## Print
messagetoprint="Start concatenate individual .csv files and replacing unwanted values."
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
# Concatenate and replace unwanted values
messagetoprint=concat_findreplace(outputfolder,pat,sep,findreplacedict,outputfile)
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
## Compute processing time and print it
messagetoprint=print_processing_time(begintime_concat, "Process achieved in ")
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
#### Write text file with log of processing time
## Create the .txt file for processing time output and begin to write
filepath=os.path.join(outputfolder,mapsetname+"_processingtime_concatreplace.txt")
f = open(filepath, 'w')
f.write(mapsetname+" processing time information for concatenation of individual .csv files and replacing of unwanted values."+"\n\n")
f.write(txtcontent)
f.close()
```
# Create new database in postgresql
```
# User for postgresql connexion
dbuser="tais"
# Password of user
dbpassword="tais"
# Host of database
host="localhost"
# Name of the new database
dbname="ouaga_fullaoi_localsegment"
# Set name of schema for objects statistics
stat_schema="statistics"
# Set name of the table with statistics of segments - FOR SAR
object_stats_table="object_stats_sar"
break  # NOTE: 'break' outside a loop raises a SyntaxError that stops this cell, presumably a guard against accidentally re-running the CREATE DATABASE statements below; remove this line to execute the cell
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
# Connect to postgres database
db=None
db=pg.connect(dbname='postgres', user=dbuser, password=dbpassword, host=host)
# Allow to create a new database
db.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
# Execute the CREATE DATABASE query
cur=db.cursor()
#cur.execute('DROP DATABASE IF EXISTS ' + dbname) # Uncomment this line to delete an existing database first
cur.execute('CREATE DATABASE ' + dbname)
cur.close()
db.close()
```
### Create PostGIS Extension in the database
```
break  # NOTE: 'break' outside a loop raises a SyntaxError that stops this cell, presumably a guard against accidental re-execution; remove this line to execute the cell
# Connect to the database
db=pg.connect(database=dbname, user=dbuser, password=dbpassword, host=host)
# Open a cursor to perform database operations
cur=db.cursor()
# Execute the query
cur.execute('CREATE EXTENSION IF NOT EXISTS postgis')
# Make the changes to the database persistent
db.commit()
# Close connection with database
cur.close()
db.close()
```
<center> <font size=4> <h2>Import statistics of segments in a Postgresql database</h2> </font> </center>
## Create new schema in the postgresql database
```
schema=stat_schema
print "Current schema is: "+str(schema)
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
# Connect to postgres database
db=None
db=pg.connect(dbname=dbname, user='tais', password='tais', host='localhost')
# Set autocommit so that the schema can be created
db.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
# Execute the CREATE SCHEMA query
cur=db.cursor()
#cur.execute('DROP SCHEMA IF EXISTS '+schema+' CASCADE') # Uncomment this line to delete an existing schema first
try:
    cur.execute('CREATE SCHEMA '+schema)
except Exception as e:
    print ("Exception occurred : "+str(e))
cur.close()
db.close()
```
## Create a new table
```
# Connect to an existing database
db=pg.connect(database=dbname, user=dbuser, password=dbpassword, host=host)
# Open a cursor to perform database operations
cur=db.cursor()
# Drop table if exists:
cur.execute("DROP TABLE IF EXISTS "+schema+"."+object_stats_table)
# Make the changes to the database persistent
db.commit()
import csv
# Create an empty list for saving the column names
column_name=[]
# Create a reader for the first csv file in the stack of csv to be imported
pathtofile=os.path.join(outputfolder, outputfile)
readercsvSubset=open(pathtofile)
readercsv=csv.reader(readercsvSubset, delimiter='|')
headerline=readercsv.next()
print "Create a new table '"+schema+"."+object_stats_table+"' with header corresponding to the first row of file '"+pathtofile+"'"
## Build a query to create a new table with an auto-incremented key (thus avoiding potential duplicates of the 'cat' value)
# The 'cat' column is stored as text and the statistic columns as double precision. Column names starting with a digit are prefixed with 'W', since unquoted Postgres identifiers cannot start with a digit
# This table allows importing all individual csv files into a single Postgres table, which will be cleaned afterwards
query="CREATE TABLE "+schema+"."+object_stats_table+" ("
query+="key_value serial PRIMARY KEY"
query+=", "+str(headerline[0])+" text"
column_name.append(str(headerline[0]))
for column in headerline[1:]:
    if column[0] in ('1','2','3','4','5','6','7','8','9','0'):
        query+=", "+"W"+str(column)+" double precision"
        column_name.append("W"+str(column))
    else:
        query+=", "+str(column)+" double precision"
        column_name.append(str(column))
query+=")"
# Execute the CREATE TABLE query
cur.execute(query)
# Make the changes to the database persistent
db.commit()
# Close cursor and communication with the database
cur.close()
db.close()
```
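The query-building loop above can be isolated into a testable helper that reproduces the 'W' prefix rule for digit-leading column names. The function name below is ours, not notebook code:

```python
def create_table_query(schema, table, headerline):
    # Build the CREATE TABLE statement used above: an auto-incremented key,
    # the first csv column as text, the rest as double precision, and
    # digit-leading column names prefixed with 'W'.
    cols = ["key_value serial PRIMARY KEY", "%s text" % headerline[0]]
    for column in headerline[1:]:
        name = "W" + column if column[0].isdigit() else column
        cols.append("%s double precision" % name)
    return "CREATE TABLE %s.%s (%s)" % (schema, table, ", ".join(cols))

print(create_table_query("statistics", "object_stats_sar",
                         ["cat", "7_Homo_stddev", "area"]))
```

Because the identifiers are unquoted, Postgres will fold them to lower case on creation, which is why later queries refer to e.g. `w7_homo_stddev`.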
## Copy objects statistics from csv to Postgresql database
```
# Connect to an existing database
db=pg.connect(database=dbname, user=dbuser, password=dbpassword, host=host)
# Open a cursor to perform database operations
cur=db.cursor()
## Initialize an empty string for saving printed outputs
txtcontent=""
## Saving current time for processing time management
begintime_copy=time.time()
## Print
messagetoprint="Start copy of segments' statistics in the postgresql table '"+schema+"."+object_stats_table+"'"
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
# Create a query to copy data from the csv, skipping the header and updating only the columns that are in the csv (to allow the auto-incremental key value to work)
query="COPY "+schema+"."+object_stats_table+"("+', '.join(column_name)+") "
query+=" FROM '"+str(pathtofile)+"' HEADER DELIMITER '|' CSV;"
# Execute the COPY FROM CSV query
cur.execute(query)
# Make the changes to the database persistent
db.commit()
## Compute processing time and print it
messagetoprint=print_processing_time(begintime_copy, "Process achieved in ")
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
#### Write text file with log of processing time
## Create the .txt file for processing time output and begin to write
filepath=os.path.join(outputfolder,mapsetname+"_processingtime_PostGimport.txt")
f = open(filepath, 'w')
f.write(mapsetname+" processing time information for importation of segments' statistics in the PostGreSQL Database."+"\n\n")
f.write(txtcontent)
f.close()
# Close cursor and communication with the database
cur.close()
db.close()
```
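With hypothetical names substituted, the COPY statement assembled above looks like the sketch below. Note that a server-side `COPY ... FROM` reads the file on the database server, so the PostgreSQL process must be able to read `pathtofile` (psql's client-side `\copy` is the alternative when it cannot).

```python
# Hypothetical values standing in for the notebook's variables
schema, object_stats_table = "grassdata", "object_stats"
column_name = ["cat", "W21_mean", "area"]
pathtofile = "/tmp/stats.csv"

query = "COPY " + schema + "." + object_stats_table + "(" + ', '.join(column_name) + ") "
query += " FROM '" + str(pathtofile) + "' HEADER DELIMITER '|' CSV;"
print(query)
```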
# Drop duplicate values of CAT
Here, we will find duplicates. Indeed, as statistics are computed for each tile (morphological area) with the computational region aligned to the pixel raster, some objects can appear in two different tiles, resulting in duplicates in the "CAT" column.
We first select the "CAT" values of duplicated objects and put them in a list. Then, for each duplicated "CAT", we select the key value (primary key) of the smallest object (area_min). The rows corresponding to those key values are then removed using a "DELETE FROM" query.
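The helper functions used below (find_duplicated_cat, find_duplicated_key, remove_duplicated_key) are defined earlier in the notebook; the queries they issue boil down to the following sketch (the schema, table, and 'area' column names are assumptions for illustration):

```python
def duplicated_cat_query(schema, table):
    # 'cat' values appearing more than once, i.e. objects split across tiles
    return ("SELECT cat FROM {0}.{1} GROUP BY cat HAVING COUNT(*) > 1"
            .format(schema, table))

def smallest_duplicate_key_query(schema, table, cat):
    # Primary key of the smallest (area_min) of the duplicated objects
    return ("SELECT key_value FROM {0}.{1} WHERE cat = {2} "
            "ORDER BY area ASC LIMIT 1".format(schema, table, cat))

def delete_keys_query(schema, table, keys):
    # Remove the rows corresponding to those key values
    return ("DELETE FROM {0}.{1} WHERE key_value IN ({2})"
            .format(schema, table, ', '.join(str(k) for k in keys)))

print(delete_keys_query("grassdata", "object_stats", [4, 9]))
# DELETE FROM grassdata.object_stats WHERE key_value IN (4, 9)
```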
```
# Connect to an existing database
db=pg.connect(database=dbname, user=dbuser, password=dbpassword, host=host)
# Open a cursor to perform database operations
cur=db.cursor()
## Initialize an empty string for saving print outputs
txtcontent=""
## Saving current time for processing time management
begintime_removeduplic=time.time()
## Print
messagetoprint="Start removing duplicates in the postgresql table '"+schema+"."+object_stats_table+"'"
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
# Find duplicated 'CAT'
find_duplicated_cat()
# Remove duplicated
count_pass=1
count_removedduplic=0
while len(cattodrop)>0:
messagetoprint="Removing duplicates - Pass "+str(count_pass)
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
find_duplicated_key()
remove_duplicated_key()
messagetoprint=str(len(keytodrop))+" duplicates removed."
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
count_removedduplic+=len(keytodrop)
# Find again duplicated 'CAT'
find_duplicated_cat()
count_pass+=1
messagetoprint="A total of "+str(count_removedduplic)+" duplicates were removed."
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
## Compute processing time and print it
messagetoprint=print_processing_time(begintime_removeduplic, "Process achieved in ")
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
#### Write text file with log of processing time
## Create the .txt file for processing time output and begin to write
filepath=os.path.join(outputfolder,mapsetname+"_processingtime_RemoveDuplic.txt")
f = open(filepath, 'w')
f.write(mapsetname+" processing time information for removing duplicated objects."+"\n\n")
f.write(txtcontent)
f.close()
# Vacuum the current Postgresql database
vacuum(db)
```
# Change the primary key from 'key_value' to 'cat'
```
# Connect to an existing database
db=pg.connect(database=dbname, user=dbuser, password=dbpassword, host=host)
# Open a cursor to perform database operations
cur=db.cursor()
# Build a query to drop the current constraint on primary key
query="ALTER TABLE "+schema+"."+object_stats_table+" \
DROP CONSTRAINT "+object_stats_table+"_pkey"
# Execute the query
cur.execute(query)
# Make the changes to the database persistent
db.commit()
# Build a query to change the datatype of 'cat' to 'integer'
query="ALTER TABLE "+schema+"."+object_stats_table+" \
ALTER COLUMN cat TYPE integer USING cat::integer"
# Execute the query
cur.execute(query)
# Make the changes to the database persistent
db.commit()
# Build a query to add primary key on 'cat'
query="ALTER TABLE "+schema+"."+object_stats_table+" \
ADD PRIMARY KEY (cat)"
# Execute the query
cur.execute(query)
# Make the changes to the database persistent
db.commit()
# Build a query to drop column 'key_value'
query="ALTER TABLE "+schema+"."+object_stats_table+" \
DROP COLUMN key_value"
# Execute the query
cur.execute(query)
# Make the changes to the database persistent
db.commit()
# Vacuum the current Postgresql database
vacuum(db)
# Close cursor and communication with the database
cur.close()
db.close()
```
### Show first rows of statistics
```
# Connect to an existing database
db=pg.connect(database=dbname, user=dbuser, password=dbpassword, host=host)
# Number of lines to show (please limit to 100 to save computing time)
nbrow=15
# Query
query="SELECT * FROM "+schema+"."+object_stats_table+" \
ORDER BY cat \
ASC LIMIT "+str(nbrow)
# Execute query through panda
df=pd.read_sql(query, db)
# Show dataframe
df.head(15)
```
<left> <font size=4> <b> End of classification part </b> </font> </left>
```
print("The script ends at "+ time.ctime())
print_processing_time(begintime_segmentation_full, "Entire process has been achieved in ")
```
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
| github_jupyter |
```
!pip install mesa
import sys
sys.path.insert(0, '/Users/ben/covid19-sim-mesa/')
%matplotlib inline
# from https://github.com/ziofil/live_plot
from collections import defaultdict
import numpy as np  # needed below for np.array in live_plot
from matplotlib import pyplot as plt
from IPython.display import clear_output
from itertools import cycle
lines = ['-', '--', '-.', ':']
markers = ['o', 's', '*', 'v', '^', 'D', 'h', 'x', '+', '8', 'p', '<', '>', 'd', 'H']
size=15
params = {'legend.fontsize': 'large',
'figure.figsize': (20, 8),
'axes.labelsize': size,
'axes.titlesize': size,
'xtick.labelsize': size*0.75,
'ytick.labelsize': size*0.75,
'axes.titlepad': 25}
plt.rcParams.update(params)
def get_stepsize(data_list, target_dim=100):
if len(data_list) < target_dim:
return 1
return target_dim / len(data_list)
def live_plot(data_dict, figsize=(20, 10), title=''):
linecycler = cycle(lines)
markercycler = cycle(markers)
clear_output(wait=True)
plt.figure(figsize=figsize)
for label, data in data_dict.items():
plt.plot(
np.array(data) * 25.733,
label=label,
marker=next(markercycler),
markersize=7,
linestyle=next(linecycler),
linewidth=2.5,
markevery=get_stepsize(data)
)
plt.title(title)
plt.grid(True)
plt.xlabel('iteration')
plt.legend(loc='best') # upper left
plt.show()
import sys
sys.path.insert(0,'/home/ben/covid19-sim-mesa/')
import math
from person import Person
from model import *
# Simulation parameters
scale_factor = 0.001
area = 242495 # km2 uk
side = int(math.sqrt(area)) # 492
def lockdown_policy(infected, deaths, population_size):
"""Given infected and deaths over time (lists)
determine if we should declare a national lockdown.
"""
return 0
sim_params = {
"grid_x": 100, # size of grid: X axis
"grid_y": 100, # size of grid: Y axis
"density": 259 * scale_factor, # population density # 259 uk, https://en.wikipedia.org/wiki/Demography_of_the_United_Kingdom
"initial_infected": 0.05, # initial percentage of population infected
"infect_rate": 0.1, # chance to infect someone in close contact
"recovery_period": 14 * 12, # number of hours to recover after being infected, 0 for never
"critical_rate": 0.05, # critical illness rate among those infected over the whole recovery period
"hospital_capacity_rate": .02, # hospital beds per person
"active_ratio": 8 / 24.0, # ratio of hours in the day when active
"immunity_chance": 1.0, # chance of infection granting immunity after recovery
"quarantine_rate": 0.6, # percentage infected person goes into quarantine
"lockdown_policy": lockdown_policy,
"cycles": 200 * 12, # cycles to run, 0 for infinity
'hospital_period': 21 * 12, # how long in hospital
}
model = Simulation(sim_params)
current_cycle = 0
cycles_to_run = sim_params.get('cycles')
print(cycles_to_run)
print(sim_params)
for current_cycle in range(cycles_to_run):
model.step()
if (current_cycle % 10) == 0:
live_plot(model.datacollector.model_vars)
#model_data = model.datacollector.get_model_vars_dataframe()
#model_data.plot()
#print(model_data)
#plt.show()
#Free beds in the hospital: 250.77600000000004
#Population: 62694
print('Total deaths: {}'.format(
model.datacollector.model_vars['Deaths'][-1] * 25.733
))
import sys
sys.path.insert(0,'/home/ben/covid19-sim-mesa/')
import math
from person import Person
from model import *
# Simulation parameters
scale_factor = 0.001
area = 242495 # km2 uk
side = int(math.sqrt(area)) # 492
def lockdown_policy(infected, deaths, population_size):
"""Given infected and deaths over time (lists)
determine if we should declare a national lockdown.
"""
if infected[-1] / population_size > 0.2:
return 21 * 12
return 0
sim_params = {
"grid_x": 100, # size of grid: X axis
"grid_y": 100, # size of grid: Y axis
"density": 259 * scale_factor, # population density # 259 uk, https://en.wikipedia.org/wiki/Demography_of_the_United_Kingdom
"initial_infected": 0.05, # initial percentage of population infected
"infect_rate": 0.1, # chance to infect someone in close contact
"recovery_period": 14 * 12, # number of hours to recover after being infected, 0 for never
"critical_rate": 0.05, # critical illness rate among those infected over the whole recovery period
"hospital_capacity_rate": .02, # hospital beds per person
"active_ratio": 8 / 24.0, # ratio of hours in the day when active
"immunity_chance": 1.0, # chance of infection granting immunity after recovery
"quarantine_rate": 0.6, # percentage infected person goes into quarantine
"lockdown_policy": lockdown_policy,
"cycles": 150 * 12, # cycles to run, 0 for infinity
'hospital_period': 21 * 12, # how long in hospital
}
model = Simulation(sim_params)
current_cycle = 0
cycles_to_run = sim_params.get('cycles')
print(cycles_to_run)
print(sim_params)
for current_cycle in range(cycles_to_run):
model.step()
if (current_cycle % 10) == 0:
live_plot(model.datacollector.model_vars)
#model_data = model.datacollector.get_model_vars_dataframe()
#model_data.plot()
#print(model_data)
#plt.show()
print('Total deaths: {}'.format(
model.datacollector.model_vars['Deaths'][-1] * 25.733
))
import sys
sys.path.insert(0,'/home/ben/covid19-sim-mesa/')
import math
from person import Person
from model import *
# Simulation parameters
scale_factor = 0.001
area = 242495 # km2 uk
side = int(math.sqrt(area)) # 492
def lockdown_policy(infected, deaths, population_size):
"""Given infected and deaths over time (lists)
determine if we should declare a national lockdown.
"""
if (infected[-1] / population_size) > 0.2:
return 21 * 12
return 0
sim_params = {
"grid_x": 100, # size of grid: X axis
"grid_y": 100, # size of grid: Y axis
"density": 259 * scale_factor, # population density # 259 uk, https://en.wikipedia.org/wiki/Demography_of_the_United_Kingdom
"initial_infected": 0.05, # initial percentage of population infected
"infect_rate": 0.1, # chance to infect someone in close contact
"recovery_period": 14 * 12, # number of hours to recover after being infected, 0 for never
"critical_rate": 0.05, # critical illness rate among those infected over the whole recovery period
"hospital_capacity_rate": .02, # hospital beds per person
"active_ratio": 8 / 24.0, # ratio of hours in the day when active
"immunity_chance": 1.0, # chance of infection granting immunity after recovery
"quarantine_rate": 0.6, # percentage infected person goes into quarantine
"lockdown_policy": lockdown_policy,
"cycles": 150 * 12, # cycles to run, 0 for infinity
'hospital_period': 21 * 12, # how long in hospital
}
model = Simulation(sim_params)
current_cycle = 0
cycles_to_run = sim_params.get('cycles')
print(cycles_to_run)
print(sim_params)
for current_cycle in range(cycles_to_run):
model.step()
if (current_cycle % 10) == 0:
live_plot(model.datacollector.model_vars)
#model_data = model.datacollector.get_model_vars_dataframe()
#model_data.plot()
#print(model_data)
#plt.show()
#Free beds in the hospital: 250.77600000000004
#Population: 62694
print('Total deaths: {}'.format(
model.datacollector.model_vars['Deaths'][-1] * 25.733
))
import sys
sys.path.insert(0,'/home/ben/covid19-sim-mesa/')
import math
from person import Person
from model import *
# Simulation parameters
scale_factor = 0.001
area = 242495 # km2 uk
side = int(math.sqrt(area)) # 492
def lockdown_policy(infected, deaths, population_size):
"""Given infected and deaths over time (lists)
determine if we should declare a national lockdown.
"""
if (max(infected[-20:]) / population_size) > 0.2:
return 14 * 12
return 0
sim_params = {
"grid_x": 100, # size of grid: X axis
"grid_y": 100, # size of grid: Y axis
"density": 259 * scale_factor, # population density # 259 uk, https://en.wikipedia.org/wiki/Demography_of_the_United_Kingdom
"initial_infected": 0.05, # initial percentage of population infected
"infect_rate": 0.1, # chance to infect someone in close contact
"recovery_period": 14 * 12, # number of hours to recover after being infected, 0 for never
"critical_rate": 0.05, # critical illness rate among those infected over the whole recovery period
"hospital_capacity_rate": .02, # hospital beds per person
"active_ratio": 8 / 24.0, # ratio of hours in the day when active
"immunity_chance": 1.0, # chance of infection granting immunity after recovery
"quarantine_rate": 0.6, # percentage infected person goes into quarantine
"lockdown_policy": lockdown_policy,
"cycles": 150 * 12, # cycles to run, 0 for infinity
'hospital_period': 21 * 12, # how long in hospital
}
model = Simulation(sim_params)
current_cycle = 0
cycles_to_run = sim_params.get('cycles')
print(cycles_to_run)
print(sim_params)
for current_cycle in range(cycles_to_run):
model.step()
if (current_cycle % 10) == 0:
live_plot(model.datacollector.model_vars)
#model_data = model.datacollector.get_model_vars_dataframe()
#model_data.plot()
#print(model_data)
#plt.show()
#Free beds in the hospital: 250.77600000000004
#Population: 62694
print('Total deaths: {}'.format(
model.datacollector.model_vars['Deaths'][-1] * 25.733
))
import sys
sys.path.insert(0,'/home/ben/covid19-sim-mesa/')
import math
from person import Person
from model import *
# Simulation parameters
scale_factor = 0.001
area = 242495 # km2 uk
side = int(math.sqrt(area)) # 492
def lockdown_policy(infected, deaths, population_size):
"""Given infected and deaths over time (lists)
determine if we should declare a national lockdown.
"""
if (
(max(infected[-21 * 12:]) / population_size) > 0.2
and deaths[-1] > deaths[-2]
):
return 14 * 12
return 0
sim_params = {
"grid_x": 100, # size of grid: X axis
"grid_y": 100, # size of grid: Y axis
"density": 259 * scale_factor, # population density # 259 uk, https://en.wikipedia.org/wiki/Demography_of_the_United_Kingdom
"initial_infected": 0.05, # initial percentage of population infected
"infect_rate": 0.1, # chance to infect someone in close contact
"recovery_period": 14 * 12, # number of hours to recover after being infected, 0 for never
"critical_rate": 0.05, # critical illness rate among those infected over the whole recovery period
"hospital_capacity_rate": .02, # hospital beds per person
"active_ratio": 8 / 24.0, # ratio of hours in the day when active
"immunity_chance": 1.0, # chance of infection granting immunity after recovery
"quarantine_rate": 0.6, # percentage infected person goes into quarantine
"lockdown_policy": lockdown_policy,
"cycles": 150 * 12, # cycles to run, 0 for infinity
'hospital_period': 21 * 12, # how long in hospital
}
model = Simulation(sim_params)
current_cycle = 0
cycles_to_run = sim_params.get('cycles')
print(cycles_to_run)
print(sim_params)
for current_cycle in range(cycles_to_run):
model.step()
if (current_cycle % 10) == 0:
live_plot(model.datacollector.model_vars)
#model_data = model.datacollector.get_model_vars_dataframe()
#model_data.plot()
#print(model_data)
#plt.show()
#Free beds in the hospital: 250.77600000000004
#Population: 62694
print('Total deaths: {}'.format(
model.datacollector.model_vars['Deaths'][-1] * 25.733
))
def debug():
print(model.datacollector.model_vars['Active Cases'][-10:])
print(model.datacollector.model_vars['Immune'][-10:])
print(model.datacollector.model_vars['Deaths'][-10:])
print(model.datacollector.model_vars['Hospitalized'][-10:])
print(model.datacollector.model_vars['Lockdown'][-10:])
import sys
sys.path.insert(0,'/home/ben/covid19-sim-mesa/')
import math
from person import Person
from model import *
# Simulation parameters
scale_factor = 0.001
area = 242495 # km2 uk
side = int(math.sqrt(area)) # 492
def lockdown_policy(infected, deaths, population_size):
"""Given infected and deaths over time (lists)
determine if we should declare a national lockdown.
"""
return 999999
sim_params = {
"grid_x": 100, # size of grid: X axis
"grid_y": 100, # size of grid: Y axis
"density": 259 * scale_factor, # population density # 259 uk, https://en.wikipedia.org/wiki/Demography_of_the_United_Kingdom
"initial_infected": 0.05, # initial percentage of population infected
"infect_rate": 0.1, # chance to infect someone in close contact
"recovery_period": 14 * 12, # number of hours to recover after being infected, 0 for never
"critical_rate": 0.05, # critical illness rate among those infected over the whole recovery period
"hospital_capacity_rate": .02, # hospital beds per person
"active_ratio": 8 / 24.0, # ratio of hours in the day when active
"immunity_chance": 1.0, # chance of infection granting immunity after recovery
"quarantine_rate": 0.6, # percentage infected person goes into quarantine
"lockdown_policy": lockdown_policy,
"cycles": 150 * 12, # cycles to run, 0 for infinity
'hospital_period': 21 * 12, # how long in hospital
}
model = Simulation(sim_params)
current_cycle = 0
cycles_to_run = sim_params.get('cycles')
print(cycles_to_run)
print(sim_params)
for current_cycle in range(cycles_to_run):
model.step()
if (current_cycle % 10) == 0:
live_plot(model.datacollector.model_vars)
#model_data = model.datacollector.get_model_vars_dataframe()
#model_data.plot()
#print(model_data)
#plt.show()
#Free beds in the hospital: 250.77600000000004
#Population: 62694
print('Total deaths: {}'.format(
model.datacollector.model_vars['Deaths'][-1] * 25.733
))
import sys
sys.path.insert(0,'/home/ben/covid19-sim-mesa/')
import math
from person import Person
from model import *
# Simulation parameters
scale_factor = 0.001
area = 242495 # km2 uk
side = int(math.sqrt(area)) # 492
def lockdown_policy(infected, deaths, population_size):
"""Given infected and deaths over time (lists)
determine if we should declare a national lockdown.
"""
if (
(max(infected[-21 * 12:]) / population_size) > 0.01
and len(deaths) > 2
and deaths[-1] > deaths[-2]
):
return 14 * 12
return 0
sim_params = {
"grid_x": 100, # size of grid: X axis
"grid_y": 100, # size of grid: Y axis
"density": 259 * scale_factor, # population density # 259 uk, https://en.wikipedia.org/wiki/Demography_of_the_United_Kingdom
"initial_infected": 0.05, # initial percentage of population infected
"infect_rate": 0.1, # chance to infect someone in close contact
"recovery_period": 14 * 12, # number of hours to recover after being infected, 0 for never
"critical_rate": 0.05, # critical illness rate among those infected over the whole recovery period
"hospital_capacity_rate": .02, # hospital beds per person
"active_ratio": 8 / 24.0, # ratio of hours in the day when active
"immunity_chance": 1.0, # chance of infection granting immunity after recovery
"quarantine_rate": 0.6, # percentage infected person goes into quarantine
"lockdown_policy": lockdown_policy,
"cycles": 150 * 12, # cycles to run, 0 for infinity
'hospital_period': 21 * 12, # how long in hospital
}
model = Simulation(sim_params)
current_cycle = 0
cycles_to_run = sim_params.get('cycles')
print(cycles_to_run)
print(sim_params)
for current_cycle in range(cycles_to_run):
model.step()
if (current_cycle % 10) == 0:
live_plot(model.datacollector.model_vars)
#model_data = model.datacollector.get_model_vars_dataframe()
#model_data.plot()
#print(model_data)
#plt.show()
#Free beds in the hospital: 250.77600000000004
#Population: 62694
print('Total deaths: {}'.format(
model.datacollector.model_vars['Deaths'][-1] * 25.733
))
import sys
sys.path.insert(0,'/home/ben/covid19-sim-mesa/')
import math
from person import Person
from model import *
# Simulation parameters
scale_factor = 0.001
area = 242495 # km2 uk
side = int(math.sqrt(area)) # 492
def lockdown_policy(infected, deaths, population_size):
"""Given infected and deaths over time (lists)
determine if we should declare a national lockdown.
"""
if (
(max(infected[-5 * 10:]) / population_size) > 0.6
and
(
len(deaths) > 2
and deaths[-1] > deaths[-2]
)
):
return 7 * 12
return 0
sim_params = {
"grid_x": 100, # size of grid: X axis
"grid_y": 100, # size of grid: Y axis
"density": 259 * scale_factor, # population density # 259 uk, https://en.wikipedia.org/wiki/Demography_of_the_United_Kingdom
"initial_infected": 0.05, # initial percentage of population infected
"infect_rate": 0.1, # chance to infect someone in close contact
"recovery_period": 14 * 12, # number of hours to recover after being infected, 0 for never
#"mortality_rate": 0.005, # mortality rate among those infected
"critical_rate": 0.15, # critical illness rate among those infected
"hospital_capacity_rate": .02, # hospital beds per person
# https://www.hsj.co.uk/acute-care/nhs-hospitals-have-four-times-more-empty-beds-than-normal/7027392.article
# https://www.kingsfund.org.uk/publications/nhs-hospital-bed-numbers
"active_ratio": 8 / 24.0, # ratio of hours in the day when active
"immunity_chance": 1.0, # chance of infection granting immunity after recovery
"quarantine_rate": 0.6, # percentage infected person goes into quarantine
"lockdown_policy": lockdown_policy,
"cycles": 150 * 12, # cycles to run, 0 for infinity
'hospital_period': 21 * 12, # how long in hospital
} # end of parameters
model = Simulation(sim_params)
current_cycle = 0
cycles_to_run = sim_params.get('cycles')
print(cycles_to_run)
print(sim_params)
for current_cycle in range(cycles_to_run):
model.step()
if (current_cycle % 10) == 0:
live_plot(model.datacollector.model_vars)
#model_data = model.datacollector.get_model_vars_dataframe()
#model_data.plot()
#print(model_data)
#plt.show()
#Free beds in the hospital: 250.77600000000004
#Population: 62694
print('Total deaths: {}'.format(
model.datacollector.model_vars['Deaths'][-1] * 25.733
))
```
* https://teck78.blogspot.com/2020/04/using-mesa-framework-to-simulate-spread.html
* https://mesa.readthedocs.io/en/master/index.html
* https://github.com/benman1/covid19-sim-mesa
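All the experiments above differ only in lockdown_policy; judging from the examples, its contract is (infected, deaths, population_size) -> lockdown duration in cycles, with 0 meaning no lockdown. A minimal standalone harness for comparing candidate policies on synthetic data (the numbers are made up):

```python
def threshold_policy(infected, deaths, population_size):
    # Lock down for 21 days (12 cycles per day) once 20% of the population is infected
    if infected[-1] / population_size > 0.2:
        return 21 * 12
    return 0

def no_lockdown_policy(infected, deaths, population_size):
    return 0

population = 1000
rising = [50, 120, 260]   # synthetic infection counts; the last exceeds 20%
flat = [50, 80, 90]

assert threshold_policy(rising, [0, 1, 2], population) == 21 * 12
assert threshold_policy(flat, [0, 0, 0], population) == 0
assert no_lockdown_policy(rising, [0, 1, 2], population) == 0
```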
# Input HMP
This notebook pulls the HMP accelerometer sensor data set, which is used for human-activity classification
```
%%bash
export version=`python --version |awk '{print $2}' |awk -F"." '{print $1$2}'`
if [ $version == '36' ]; then
pip install pyspark==2.4.8 wget==3.2 pyspark2pmml==0.5.1
elif [ $version == '38' ]; then
pip install pyspark==3.1.2 wget==3.2 pyspark2pmml==0.5.1
else
    echo 'Currently only python 3.6 and 3.8 are supported; in case you need a different version, please open an issue at https://github.com/elyra-ai/component-library/issues'
exit -1
fi
import fnmatch
import os
from pathlib import Path
from pyspark import SparkConf
from pyspark import SparkContext
from pyspark.sql.functions import lit
from pyspark.sql import SparkSession
from pyspark.sql.types import IntegerType
from pyspark.sql.types import StructField
from pyspark.sql.types import StructType
import random
import re
import shutil
import sys
# path and file name for output (default: data.csv)
data_csv = os.environ.get('data_csv', 'data.csv')
# url of master (default: local mode)
master = os.environ.get('master', "local[*]")
# temporal data storage for local execution
data_dir = os.environ.get('data_dir', '../../data/')
# sample on input data to increase processing speed 0..1 (default: 1.0)
sample = os.environ.get('sample', '1.0')
# override parameters received from a potential call using %run magic
parameters = list(
map(
lambda s: re.sub('$', '"', s),
map(
lambda s: s.replace('=', '="'),
filter(
lambda s: s.find('=') > -1,
sys.argv
)
)
)
)
for parameter in parameters:
exec(parameter)
# cast parameters to appropriate type
sample = float(sample)
```
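The parameter-override block above turns name=value command-line arguments into executable Python assignments (sample=0.5 becomes the string sample="0.5", which exec then runs); a standalone sketch of the transformation with a fabricated argument list:

```python
import re

argv = ['notebook.py', 'sample=0.5', '--verbose']   # fabricated argument list
parameters = list(
    map(
        lambda s: re.sub('$', '"', s),              # close the quote at the end
        map(
            lambda s: s.replace('=', '="'),         # open a quote after '='
            filter(lambda s: s.find('=') > -1, argv)
        )
    )
)
print(parameters)  # ['sample="0.5"']
for parameter in parameters:
    exec(parameter)
assert sample == "0.5"   # assigned as a string; cast to float afterwards
```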
Let's create a local spark context (sc) and session (spark)
```
sc = SparkContext.getOrCreate(SparkConf().setMaster(master))
spark = SparkSession \
.builder \
.getOrCreate()
```
Let's pull the data in raw format from the source (GitHub)
```
!rm -Rf HMP_Dataset
!git clone https://github.com/wchill/HMP_Dataset
schema = StructType([
StructField("x", IntegerType(), True),
StructField("y", IntegerType(), True),
StructField("z", IntegerType(), True)])
```
This step takes a while: it parses through all files and folders and creates a temporary data frame for each file, which gets appended to an overall data frame "df". In addition, a column called "class" is added to allow for straightforward usage in Spark afterwards, for example in a supervised machine learning scenario.
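The append pattern described above can be sketched in miniature, using pandas instead of Spark for brevity (the category names and sensor values are toy data):

```python
import pandas as pd

categories = {
    'Brush_teeth': [[20, 35, 51], [22, 36, 50]],  # toy accelerometer readings
    'Drink_glass': [[30, 38, 45]],
}
df = None
for category, rows in categories.items():
    temp_df = pd.DataFrame(rows, columns=['x', 'y', 'z'])
    temp_df['class'] = category                   # label column, as in the Spark code
    df = temp_df if df is None else pd.concat([df, temp_df], ignore_index=True)

print(df.shape)  # (3, 4)
```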
```
d = 'HMP_Dataset/'
# filter list for all folders containing data (folders that don't start with .)
file_list_filtered = [s for s in os.listdir(d)
if os.path.isdir(os.path.join(d, s)) & ~fnmatch.fnmatch(s, '.*')]
# create pandas data frame for all the data
df = None
for category in file_list_filtered:
data_files = os.listdir('HMP_Dataset/' + category)
# create a temporary pandas data frame for each data file
for data_file in data_files:
if sample < 1.0:
if random.random() > sample:
print('Skipping: ' + data_file)
continue
print(data_file)
temp_df = spark.read. \
option("header", "false"). \
option("delimiter", " "). \
csv('HMP_Dataset/' + category + '/' + data_file, schema=schema)
# create a column called "source" storing the current CSV file
temp_df = temp_df.withColumn("source", lit(data_file))
# create a column called "class" storing the current data folder
temp_df = temp_df.withColumn("class", lit(category))
if df is None:
df = temp_df
else:
df = df.union(temp_df)
```
Let's write the data frame to a file in CSV format; this will also take quite some time:
```
if Path(data_dir + data_csv).exists():
shutil.rmtree(data_dir + data_csv)
df.write.option("header", "true").csv(data_dir + data_csv)
```
Now we should have a CSV file with our contents
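One caveat: Spark writes data.csv as a directory of part-* files (plus a _SUCCESS marker), not a single CSV file. A sketch of checking for the parts, with a fabricated directory standing in for Spark's output:

```python
import glob
import os
import tempfile

# Fabricate the directory layout Spark produces when writing CSV
out_dir = os.path.join(tempfile.mkdtemp(), 'data.csv')
os.makedirs(out_dir)
for name in ['part-00000-abc.csv', '_SUCCESS']:
    open(os.path.join(out_dir, name), 'w').close()

parts = glob.glob(os.path.join(out_dir, 'part-*.csv'))
print(len(parts))  # 1
```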
# Instructions
Run all of the cells. You will be prompted to input a topic.
```
from esper.prelude import *
from esper.stdlib import *
from esper.topics import *
from esper.spark_util import *
from esper.plot_util import *
from esper.major_canonical_shows import MAJOR_CANONICAL_SHOWS
from datetime import timedelta
from collections import defaultdict
import _pickle as pickle
FACE_GENDERS = get_face_genders()
FACE_GENDERS = FACE_GENDERS.where(
(FACE_GENDERS.in_commercial == False) &
(FACE_GENDERS.size_percentile >= 25) &
(FACE_GENDERS.gender_id != Gender.objects.get(name='U').id)
)
CACHE_BASELINE_NO_HOST_FILE = '/tmp/base_screentime_gender_no_host_by_show.pkl'
def get_base_screentime_by_show():
try:
with open(CACHE_BASELINE_NO_HOST_FILE, 'rb') as f:
return pickle.load(f)
except:
print('Could not load baseline gender by show from cache.')
nh_woman = {
CANONICAL_SHOW_MAP[k[0]] : (timedelta(seconds=v[0]), v[1])
for k, v in sum_distinct_over_column(
FACE_GENDERS.where(FACE_GENDERS.host_probability <= 0.25),
'duration', distinct_columns, group_by_columns,
probability_column='female_probability'
).items() if CANONICAL_SHOW_MAP[k[0]] in MAJOR_CANONICAL_SHOWS
}
nh_man = {
CANONICAL_SHOW_MAP[k[0]] : (timedelta(seconds=v[0]), v[1])
for k, v in sum_distinct_over_column(
FACE_GENDERS.where(FACE_GENDERS.host_probability <= 0.25),
'duration', distinct_columns, group_by_columns,
probability_column='male_probability'
).items() if CANONICAL_SHOW_MAP[k[0]] in MAJOR_CANONICAL_SHOWS
}
with open(CACHE_BASELINE_NO_HOST_FILE, 'wb') as f:
pickle.dump([nh_man, nh_woman], f)
return nh_man, nh_woman
# These must be defined before calling get_base_screentime_by_show(), which uses them
CANONICAL_SHOW_MAP = { c.id : c.name for c in CanonicalShow.objects.all() }
distinct_columns = []
group_by_columns = ['canonical_show_id']
BASE_SCREENTIME_NH_MAN_BY_SHOW, BASE_SCREENTIME_NH_WOMAN_BY_SHOW = \
    get_base_screentime_by_show()
CHANNEL_NAME_CMAP = {
'CNN': 'DarkBlue',
'FOXNEWS': 'DarkRed',
'MSNBC': 'DarkGreen'
}
CANONICAL_SHOW_CMAP = {
v['show__canonical_show__name'] : CHANNEL_NAME_CMAP[v['channel__name']]
for v in Video.objects.distinct(
'show__canonical_show'
).values('show__canonical_show__name', 'channel__name')
}
def run_analysis(topic):
print('Building the topic lexicon')
lexicon = mutual_info(topic)
print('Searching for segments')
segments = find_segments(lexicon, window_size=500,
threshold=100, merge_overlaps=True)
intervals_by_video = defaultdict(list)
for video_id, _, interval, _, _ in segments:
intervals_by_video[video_id].append(interval)
face_genders_with_topic_overlap = annotate_interval_overlap(
FACE_GENDERS, intervals_by_video)
face_genders_with_topic_overlap = face_genders_with_topic_overlap.where(
face_genders_with_topic_overlap.overlap_seconds > 0)
distinct_columns = []
group_by_columns = ['canonical_show_id']
overlap_field = 'overlap_seconds'
print('Computing screen times with gender')
topic_screentime_with_woman_by_show = {
CANONICAL_SHOW_MAP[k[0]] : (timedelta(seconds=v[0]), v[1])
for k, v in sum_distinct_over_column(
face_genders_with_topic_overlap,
overlap_field, distinct_columns, group_by_columns,
probability_column='female_probability'
).items() if CANONICAL_SHOW_MAP[k[0]] in MAJOR_CANONICAL_SHOWS
}
print('[Topic] Woman on screen: done')
topic_screentime_with_man_by_show = {
CANONICAL_SHOW_MAP[k[0]] : (timedelta(seconds=v[0]), v[1])
for k, v in sum_distinct_over_column(
face_genders_with_topic_overlap,
overlap_field, distinct_columns, group_by_columns,
probability_column='male_probability'
).items() if CANONICAL_SHOW_MAP[k[0]] in MAJOR_CANONICAL_SHOWS
}
print('[Topic] Man on screen: done')
topic_screentime_with_nh_woman_by_show = {
CANONICAL_SHOW_MAP[k[0]] : (timedelta(seconds=v[0]), v[1])
for k, v in sum_distinct_over_column(
face_genders_with_topic_overlap.where(
face_genders_with_topic_overlap.host_probability <= 0.25),
overlap_field, distinct_columns, group_by_columns,
probability_column='female_probability'
).items() if CANONICAL_SHOW_MAP[k[0]] in MAJOR_CANONICAL_SHOWS
}
print('[Topic] Woman (non-host) on screen: done')
topic_screentime_with_nh_man_by_show = {
CANONICAL_SHOW_MAP[k[0]] : (timedelta(seconds=v[0]), v[1])
for k, v in sum_distinct_over_column(
face_genders_with_topic_overlap.where(
face_genders_with_topic_overlap.host_probability <= 0.25),
overlap_field, distinct_columns, group_by_columns,
probability_column='male_probability'
).items() if CANONICAL_SHOW_MAP[k[0]] in MAJOR_CANONICAL_SHOWS
}
print('[Topic] Man (non-host) on screen: done')
plot_binary_screentime_proportion_comparison(
['Male (non-host)', 'Female (non-host)'],
[
topic_screentime_with_nh_man_by_show,
topic_screentime_with_nh_woman_by_show
],
'Proportion of gendered screen time by show for topic "{}"'.format(topic),
'Show name',
'Proportion of screen time',
secondary_series_names=['Baseline Male (non-host)', 'Baseline Female (non-host)'],
secondary_data=[BASE_SCREENTIME_NH_MAN_BY_SHOW,
BASE_SCREENTIME_NH_WOMAN_BY_SHOW],
tertiary_series_names=['Male (incl-host)', 'Female (incl-host)'],
tertiary_data=[topic_screentime_with_man_by_show,
topic_screentime_with_woman_by_show],
category_color_map=CANONICAL_SHOW_CMAP
)
print('X-axis color map: {}'.format(', '.join('{}: {}'.format(x, y)
for x, y in CHANNEL_NAME_CMAP.items())))
run_analysis(input('Input a topic: ').strip())
```
# Solution b.
Create an inference script. Let's call it `inference.py`.
Let's also create the `input_fn`, `predict_fn`, `output_fn` and `model_fn` functions.
Copy the cells below and paste in [the main notebook](../xgboost_customer_churn_studio.ipynb).
```
%%writefile inference.py
import os
import pickle
import xgboost
import sagemaker_xgboost_container.encoder as xgb_encoders
# Same as in the training script
def model_fn(model_dir):
"""Load a model. For XGBoost Framework, a default function to load a model is not provided.
Users should provide customized model_fn() in script.
Args:
model_dir: a directory where model is saved.
Returns:
        An XGBoost model (booster) loaded from model_dir.
"""
model_files = (file for file in os.listdir(model_dir) if os.path.isfile(os.path.join(model_dir, file)))
model_file = next(model_files)
try:
booster = pickle.load(open(os.path.join(model_dir, model_file), 'rb'))
format = 'pkl_format'
except Exception as exp_pkl:
try:
booster = xgboost.Booster()
booster.load_model(os.path.join(model_dir, model_file))
format = 'xgb_format'
except Exception as exp_xgb:
            raise RuntimeError("Unable to load model: {} {}".format(str(exp_pkl), str(exp_xgb)))
booster.set_param('nthread', 1)
return booster
def input_fn(request_body, request_content_type):
"""
The SageMaker XGBoost model server receives the request data body and the content type,
and invokes the `input_fn`.
    An input_fn that validates request_content_type, logs a message, and converts the CSV request body to a DMatrix.
"""
print("Hello from the PRE-processing function!!!")
if request_content_type == "text/csv":
return xgb_encoders.csv_to_dmatrix(request_body)
else:
raise ValueError(
"Content type {} is not supported.".format(request_content_type)
)
def predict_fn(input_object, model):
"""
SageMaker XGBoost model server invokes `predict_fn` on the return value of `input_fn`.
"""
return model.predict(input_object)[0]
def output_fn(prediction, response_content_type):
"""
After invoking predict_fn, the model server invokes `output_fn`.
An output_fn that just adds a column to the output and validates response_content_type
"""
print("Hello from the POST-processing function!!!")
    appended_output = "hello from post-processing function!!!"
predictions = [prediction, appended_output]
if response_content_type == "text/csv":
return ','.join(str(x) for x in predictions)
else:
raise ValueError("Content type {} is not supported.".format(response_content_type))
```
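The `model_fn` above scans `model_dir` for the first regular file, tries `pickle` first, and only then falls back to XGBoost's native loader. That try/fallback pattern can be smoke-tested locally with only the standard library — `load_first_model` below is an illustrative stand-in, not part of the SageMaker API:

```python
import os
import pickle
import tempfile

def load_first_model(model_dir):
    # Mirror model_fn's scan: take the first regular file in model_dir
    # and try to unpickle it, reporting which format succeeded.
    model_file = next(
        f for f in os.listdir(model_dir)
        if os.path.isfile(os.path.join(model_dir, f))
    )
    path = os.path.join(model_dir, model_file)
    try:
        with open(path, "rb") as fp:
            return pickle.load(fp), "pkl_format"
    except Exception:
        # The real model_fn falls back to xgboost.Booster().load_model(path)
        # ("xgb_format") here before giving up.
        raise RuntimeError("Unable to load model: {}".format(path))

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "xgboost-model"), "wb") as fp:
        pickle.dump({"trees": 3}, fp)
    print(load_first_model(d))  # ({'trees': 3}, 'pkl_format')
```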
Deploy the new model with the inference script:
- find the S3 bucket where the artifact is stored (you can create a tarball and upload it to S3 or use another model that was previously created in SageMaker)
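If you choose to package the model yourself, the XGBoost container expects the serialized booster file at the root of a gzipped tarball. A sketch with `tarfile` (the file name `xgboost-model` and the dummy bytes are placeholders for a real serialized booster; the upload command is left as a comment because it assumes configured AWS credentials):

```python
import os
import tarfile
import tempfile

# Create a dummy booster file and package it the way the XGBoost
# container expects: the model file at the root of model.tar.gz.
workdir = tempfile.mkdtemp()
model_path = os.path.join(workdir, "xgboost-model")
with open(model_path, "wb") as fp:
    fp.write(b"dummy booster bytes")

tarball = os.path.join(workdir, "model.tar.gz")
with tarfile.open(tarball, "w:gz") as tar:
    tar.add(model_path, arcname="xgboost-model")  # arcname keeps it at the root

with tarfile.open(tarball, "r:gz") as tar:
    print(tar.getnames())  # ['xgboost-model']
# Then upload, e.g.: aws s3 cp model.tar.gz s3://<YOUR-BUCKET>/PATH/TO/model.tar.gz
```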
#### Finding a previously trained model:
Go to the Experiments tab in Studio again:

Choose another trained model, such as the one trained with Framework mode (right-click and choose `Open in trial details`):

Click on `Artifacts` and look at the `Output artifacts`:

Copy the `SageMaker.ModelArtifact` S3 URI where the model is saved:
In this example:
```
s3_artifact="s3://sagemaker-studio-us-east-2-<AWS_ACCOUNT_ID>/xgboost-churn/output/demo-xgboost-customer-churn-2021-04-13-18-51-56-144/output/model.tar.gz"
```
```
s3_artifact="s3://<YOUR-BUCKET>/PATH/TO/model.tar.gz"
```
**Deploy it:**
```
from sagemaker.xgboost.model import XGBoostModel
xgb_inference_model = XGBoostModel(
entry_point="inference.py",
model_data=s3_artifact,
role=role,
image=docker_image_name,
framework_version="0.90-2",
py_version="py3"
)
data_capture_prefix = '{}/datacapture'.format(prefix)
endpoint_name = "model-xgboost-customer-churn-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("EndpointName = {}".format(endpoint_name))
predictor = xgb_inference_model.deploy( initial_instance_count=1,
instance_type='ml.m4.xlarge',
endpoint_name=endpoint_name,
data_capture_config=DataCaptureConfig(
enable_capture=True,
sampling_percentage=100,
destination_s3_uri='s3://{}/{}'.format(bucket, data_capture_prefix)
)
)
## Updating an existing endpoint
# predictor = xgb_inference_model.deploy( initial_instance_count=1,
# instance_type='ml.m4.xlarge',
# endpoint_name=endpoint_name,
# data_capture_config=DataCaptureConfig(
# enable_capture=True,
# sampling_percentage=100,
# destination_s3_uri='s3://{}/{}'.format(bucket, data_capture_prefix)
# ),
# update_endpoint=True
# )
```
**Send some requests:**
```
with open('data/test_sample.csv', 'r') as f:
for row in f:
payload = row.rstrip('\n')
print(f"Sending: {payload}")
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/csv',
Accept='text/csv',
Body=payload)
print(f"\nReceived: {response['Body'].read()}")
break
```
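`response['Body']` is a streaming object whose `read()` returns bytes in the CSV shape produced by `output_fn` above (the prediction, then the appended message). A small illustrative decoder — the field layout is an assumption taken from that `output_fn`:

```python
def parse_prediction_csv(body_bytes):
    # output_fn joins [prediction, appended_output] with commas, so split
    # once: the first field is the score, the remainder is the message.
    prediction, message = body_bytes.decode("utf-8").split(",", 1)
    return float(prediction), message

pred, msg = parse_prediction_csv(b"0.014,hello from post-processing function!!!")
print(pred, msg)  # 0.014 hello from post-processing function!!!
```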
Go to CloudWatch logs and check the inference logic:
[Link to CloudWatch Logs](https://us-east-2.console.aws.amazon.com/cloudwatch/home?region=us-east-2#logsV2:log-groups$3FlogGroupNameFilter$3D$252Faws$252Fsagemaker$252FEndpoints$252F)
| github_jupyter |
```
%matplotlib inline
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(1)
import numpy as np
from tqdm import tqdm
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from pytorch_utils import *
from pytorch_models import *
from utils import load_sequences, conll_classification_report_to_df
from conlleval import main as conll_eval
import re
import io
from pathlib import Path
sns.set_context("poster")
sns.set_style("ticks")
TRAIN_CORPUS="data/conll2000/train.txt"
TEST_CORPUS="data/conll2000/test.txt"
train_corpus = load_sequences(TRAIN_CORPUS, sep=" ", col_ids=(0, -1))
train_corpus, dev_corpus = train_corpus[100:], train_corpus[:100]
print("Total items in train corpus: %s" % len(train_corpus))
print("Total items in dev corpus: %s" % len(dev_corpus))
test_corpus = load_sequences(TEST_CORPUS, sep=" ", col_ids=(0, -1))
print("Total items in test corpus: %s" % len(test_corpus))
train_corpus[0]
def create_vocab(data, vocabs, char_vocab, word_idx=0):
n_vocabs = len(vocabs)
for sent in data:
for token_tags in sent:
for vocab_id in range(n_vocabs):
vocabs[vocab_id].add(token_tags[vocab_id])
char_vocab.batch_add(token_tags[word_idx])
print("Created vocabs: %s, chars[%s]" % (", ".join(
"{}[{}]".format(vocab.name, vocab.size)
for vocab in vocabs
), char_vocab.size))
word_vocab = Vocab("words", UNK="UNK", lower=True)
char_vocab = Vocab("chars", UNK="<U>", lower=False)
chunk_vocab = Vocab("chunk_tags", lower=False)
create_vocab(train_corpus+dev_corpus+test_corpus, [word_vocab, chunk_vocab], char_vocab)
def data2tensors(data, vocabs, char_vocab, word_idx=0, column_ids=(0, -1)):
vocabs = [vocabs[idx] for idx in column_ids]
n_vocabs = len(vocabs)
tensors = []
char_tensors = []
for sent in data:
sent_vecs = [[] for i in range(n_vocabs+1)] # Last is for char vecs
char_vecs = []
for token_tags in sent:
vocab_id = 0 # First column is the word
# lowercase the word
sent_vecs[vocab_id].append(
vocabs[vocab_id].getidx(token_tags[vocab_id].lower())
)
for vocab_id in range(1, n_vocabs):
sent_vecs[vocab_id].append(
vocabs[vocab_id].getidx(token_tags[vocab_id])
)
sent_vecs[-1].append(
[char_vocab.getidx(c) for c in token_tags[word_idx]]
)
tensors.append(sent_vecs)
return tensors
train_tensors = data2tensors(train_corpus, [word_vocab, chunk_vocab], char_vocab)
dev_tensors = data2tensors(dev_corpus, [word_vocab, chunk_vocab], char_vocab)
test_tensors = data2tensors(test_corpus, [word_vocab, chunk_vocab], char_vocab)
print("Train: {}, Dev: {}, Test: {}".format(
len(train_tensors),
len(dev_tensors),
len(test_tensors),
))
def load_word_vectors(vector_file, ndims, vocab, cache_file, override_cache=False):
W = np.zeros((vocab.size, ndims), dtype="float32")
# Check for cached file and return vectors
cache_file = Path(cache_file)
if cache_file.is_file() and not override_cache:
W = np.load(cache_file)
return W
# Else load vectors from the vector file
total, found = 0, 0
with open(vector_file) as fp:
for line in fp:
line = line.strip().split()
if line:
total += 1
assert len(line) == ndims+1,(
"{} vector dims {} doesn't match ndims={}".format(line[0], len(line)-1, ndims)
)
word = line[0]
idx = vocab.getidx(word)
if idx >= vocab.offset:
found += 1
vecs = np.array(list(map(float, line[1:])))
W[idx, :] += vecs
# Write to cache file
print("Found {} [{:.2f}%] vectors from {} vectors in {} with ndims={}".format(
found, found * 100/vocab.size, total, vector_file, ndims))
norm_W = np.sqrt((W*W).sum(axis=1, keepdims=True))
valid_idx = norm_W.squeeze() != 0
W[valid_idx, :] /= norm_W[valid_idx]
print("Caching embedding with shape {} to {}".format(W.shape, cache_file.as_posix()))
np.save(cache_file, W)
return W
%%time
embedding_file="/home/napsternxg/datadrive/Downloads/Glove/glove.6B.100d.txt"
cache_file="conll2000.glove.100.npy"
ndims=100
pretrained_embeddings = load_word_vectors(embedding_file, ndims, word_vocab, cache_file)
def plot_losses(train_losses, eval_losses=None, plot_std=False, ax=None):
if ax is None:
ax = plt.gca()
for losses, color, label in zip(
[train_losses, eval_losses],
["0.5", "r"],
["Train", "Eval"],
):
mean_loss, std_loss = zip(*losses)
mean_loss = np.array(mean_loss)
std_loss = np.array(std_loss)
ax.plot(
mean_loss, color=color, label=label,
linestyle="-",
)
if plot_std:
ax.fill_between(
np.arange(mean_loss.shape[0]),
mean_loss-std_loss,
mean_loss+std_loss,
color=color,
alpha=0.3
)
ax.set_xlabel("Epochs")
    ax.set_ylabel(r"Mean Loss ($\pm$ S.D.)")
def print_predictions(corpus, predictions, filename, label_vocab):
with open(filename, "w+") as fp:
for seq, pred in zip(corpus, predictions):
for (token, true_label), pred_label in zip(seq, pred):
pred_label = label_vocab.idx2item[pred_label]
print("{}\t{}\t{}".format(token, true_label, pred_label), file=fp)
print(file=fp) # Add new line after each sequence
char_emb_size=10
output_channels=50
kernel_sizes=[2, 3]
char_embedding = CharEmbedding(char_vocab.size, char_emb_size, output_channels, kernel_sizes)
char_embedding(Variable(torch.LongTensor([[1,1,2,3]]), requires_grad=False)).size()
word_emb_size=100
char_embed_kwargs=dict(
vocab_size=char_vocab.size,
embedding_size=char_emb_size,
out_channels=output_channels,
kernel_sizes=kernel_sizes
)
word_char_embedding = WordCharEmbedding(
word_vocab.size, word_emb_size, char_embed_kwargs, dropout=0.2)
def charseq2varlist(X_chars):
return [Variable(torch.LongTensor([x]), requires_grad=False) for x in X_chars]
print(len(train_tensors[0][0]))
print(len(train_tensors[0][-1]))
train_corpus[0]
charseq2varlist(train_tensors[0][-1])
word_char_embedding(
Variable(torch.LongTensor([train_tensors[0][0]]), requires_grad=False),
charseq2varlist(train_tensors[0][-1])
).size()
def assign_embeddings(embedding_module, pretrained_embeddings, fix_embedding=False):
embedding_module.weight.data.copy_(torch.from_numpy(pretrained_embeddings))
if fix_embedding:
embedding_module.weight.requires_grad = False
assign_embeddings(word_char_embedding.word_embeddings, pretrained_embeddings, fix_embedding=True)
```
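`Vocab` comes from the local `pytorch_utils` module, so its implementation isn't shown in this notebook. A minimal stand-in consistent with how it is used above (`add`/`batch_add`, `getidx` with UNK fallback, `size`, `offset`, optional lowercasing) could look like the sketch below — an assumption about the interface, not the actual class:

```python
class MiniVocab:
    """Minimal vocabulary sketch: maps items to contiguous integer ids."""
    def __init__(self, name, UNK=None, lower=False):
        self.name = name
        self.lower = lower
        self.UNK = UNK
        self.item2idx = {}
        self.idx2item = []
        self.offset = 0
        if UNK is not None:
            self.add(UNK)
            self.offset = 1  # ids below offset are reserved (UNK)
    def _norm(self, item):
        return item.lower() if self.lower else item
    def add(self, item):
        item = self._norm(item)
        if item not in self.item2idx:
            self.item2idx[item] = len(self.idx2item)
            self.idx2item.append(item)
    def batch_add(self, items):
        for item in items:
            self.add(item)
    def getidx(self, item):
        item = self._norm(item)
        if item not in self.item2idx:
            if self.UNK is None:
                raise KeyError(item)
            return self.item2idx[self._norm(self.UNK)]
        return self.item2idx[item]
    @property
    def size(self):
        return len(self.idx2item)

v = MiniVocab("words", UNK="UNK", lower=True)
v.batch_add(["The", "dog", "the"])
print(v.size, v.getidx("THE"), v.getidx("unseen"))  # 3 1 0
```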
## Class based
```
class ModelWrapper(object):
def __init__(self, model,
loss_function,
use_cuda=False
):
self.model = model
self.loss_function = loss_function
self.use_cuda = use_cuda
if self.use_cuda:
self.model.cuda()
def _process_instance_tensors(self, instance_tensors):
raise NotImplementedError("Please define this function explicitly")
def zero_grad(self):
self.model.zero_grad()
def get_parameters(self):
        return self.model.parameters()
def set_model_mode(self, training_mode=True):
if training_mode:
self.model.train()
else:
self.model.eval()
def save(self, filename):
torch.save(self.model, filename)
print("{} model saved to {}".format(self.model.__class__, filename))
def load(self, filename):
self.model = torch.load(filename)
if self.use_cuda:
self.model.cuda()
def get_instance_loss(self, instance_tensors, zero_grad=True):
        if zero_grad:
## Clear gradients before every update else memory runs out
self.zero_grad()
raise NotImplementedError("Please define this function explicitly")
def predict(self, instance_tensors):
raise NotImplementedError("Please define this function explicitly")
def predict_batch(self, batch_tensors):
predictions = []
for instance_tensors in batch_tensors:
predictions.append(self.predict(instance_tensors))
return predictions
def get_epoch_function(model_wrapper, optimizer,
use_cuda=False):
def perform_epoch(data_tensors, training_mode=True, batch_size=1):
model_wrapper.set_model_mode(training_mode)
step_losses = []
data_tensors = np.random.permutation(data_tensors)
n_splits = data_tensors.shape[0]//batch_size
for batch_tensors in np.array_split(data_tensors, n_splits):
#from IPython.core.debugger import Tracer; Tracer()()
model_wrapper.zero_grad()
loss = Variable(torch.FloatTensor([0.]))
if use_cuda:
loss = loss.cuda()
for instance_tensors in batch_tensors:
loss += model_wrapper.get_instance_loss(instance_tensors, zero_grad=False)
loss = loss/batch_tensors.shape[0] # Mean loss
step_losses.append(loss.data[0])
if training_mode:
## Get gradients of model params wrt. loss
loss.backward()
## Optimize the loss by one step
optimizer.step()
return step_losses
return perform_epoch
def write_losses(losses, fp, title="train", epoch=0):
for i, loss in enumerate(losses):
print("{:<10} epoch={:<3} batch={:<5} loss={:<10}".format(
title, epoch, i, loss
), file=fp)
print("{:<10} epoch={:<3} {:<11} mean={:<10.3f} std={:<10.3f}".format(
title, epoch, "overall", np.mean(losses), np.std(losses)
), file=fp)
def training_wrapper(
model_wrapper, data_tensors,
eval_tensors=None,
optimizer=optim.SGD,
optimizer_kwargs=None,
n_epochs=10,
batch_size=1,
use_cuda=False,
log_file="training_output.log"
):
"""Wrapper to train the model
"""
if optimizer_kwargs is None:
optimizer_kwargs = {}
    # Filter out parameters that don't require a gradient
parameters = filter(lambda p: p.requires_grad, model_wrapper.model.parameters())
optimizer=optimizer(parameters, **optimizer_kwargs)
# Start training
losses = []
eval_losses = []
data_tensors = np.array(data_tensors)
if eval_tensors is not None:
eval_tensors = np.array(eval_tensors)
perform_epoch = get_epoch_function(
model_wrapper,
optimizer,
use_cuda=use_cuda)
with open(log_file, "w+") as fp:
for epoch in tqdm(range(n_epochs)):
i = epoch
step_losses = perform_epoch(data_tensors, batch_size=batch_size)
mean_loss, std_loss = np.mean(step_losses), np.std(step_losses)
losses.append((mean_loss, std_loss))
write_losses(step_losses, fp, title="train", epoch=i)
if eval_tensors is not None:
step_losses = perform_epoch(eval_tensors, training_mode=False)
mean_loss, std_loss = np.mean(step_losses), np.std(step_losses)
eval_losses.append((mean_loss, std_loss))
write_losses(step_losses, fp, title="eval", epoch=i)
return {
"training_loss": losses,
"evaluation_loss": eval_losses
}
class LSTMTaggerModel(ModelWrapper):
def __init__(self, model,
loss_function,
use_cuda=False):
self.model = model
self.loss_function = loss_function
self.use_cuda = use_cuda
if self.use_cuda:
#[k.cuda() for k in self.model.modules()]
self.model.cuda()
def _process_instance_tensors(self, instance_tensors):
X, Y, X_char = instance_tensors
X = Variable(torch.LongTensor([X]), requires_grad=False)
Y = Variable(torch.LongTensor(Y), requires_grad=False)
X_char = charseq2varlist(X_char)
if self.use_cuda:
X = X.cuda()
Y = Y.cuda()
X_char = [t.cuda() for t in X_char]
return X, X_char, Y
def get_instance_loss(self, instance_tensors, zero_grad=True):
if zero_grad:
## Clear gradients before every update else memory runs out
self.model.zero_grad()
X, X_char, Y = self._process_instance_tensors(instance_tensors)
#print(X.get_device(), [t.get_device() for t in X_char])
return self.loss_function(self.model.forward(X, X_char), Y)
def predict(self, instance_tensors):
X, X_char, Y = self._process_instance_tensors(instance_tensors)
prediction = self.model.forward(X, X_char)
return prediction.data.cpu().max(1)[1].numpy().ravel()
use_cuda=True
n_embed=100
hidden_size=20
batch_size=10
char_emb_size=10
output_channels=50
kernel_sizes=[2, 3]
word_emb_size=100
char_embed_kwargs=dict(
vocab_size=char_vocab.size,
embedding_size=char_emb_size,
out_channels=output_channels,
kernel_sizes=kernel_sizes
)
word_char_embedding = WordCharEmbedding(
word_vocab.size, word_emb_size,
char_embed_kwargs, dropout=0.2)
# Assign glove embeddings
assign_embeddings(word_char_embedding.word_embeddings, pretrained_embeddings, fix_embedding=True)
model_wrapper = LSTMTaggerModel(
LSTMTaggerWordChar(word_char_embedding, n_embed, hidden_size, chunk_vocab.size),
nn.NLLLoss(), use_cuda=use_cuda)
model_wrapper.get_instance_loss(train_tensors[0])
len(list(model_wrapper.model.parameters()))
n_epochs=5
training_history = training_wrapper(
model_wrapper, train_tensors,
eval_tensors=dev_tensors,
optimizer=optim.Adam,
optimizer_kwargs={
#"lr": 0.01,
"weight_decay": 0.5
},
n_epochs=n_epochs,
batch_size=batch_size,
use_cuda=use_cuda,
log_file="LSTMTaggerModel_CONLL2000.log"
)
model_wrapper.save("LSTMTaggerModel_CONLL2000")
preds = model_wrapper.predict(train_tensors[0])
preds
fig, ax = plt.subplots(1,1)
plot_losses(training_history["training_loss"],
training_history["evaluation_loss"],
plot_std=True,
ax=ax)
ax.legend()
sns.despine(offset=5)
for title, tensors, corpus in zip(
["train", "dev", "test"],
[train_tensors, dev_tensors, test_tensors],
[train_corpus, dev_corpus, test_corpus],
):
%time predictions = model_wrapper.predict_batch(tensors)
print_predictions(corpus, predictions, "%s.chunking.conll" % title, chunk_vocab)
conll_eval(["conlleval", "%s.chunking.conll" % title])
```
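The mini-batching in `perform_epoch` above splits the shuffled data into `len(data) // batch_size` nearly equal pieces and averages the per-instance losses before each backward step. Stripped of PyTorch and NumPy, the splitting-and-averaging arithmetic reduces to the sketch below (both helper names are illustrative):

```python
def chunks(items, n_splits):
    # Mimic numpy.array_split: n_splits nearly equal pieces,
    # with the first (len % n_splits) pieces one element longer.
    k, m = divmod(len(items), n_splits)
    out, start = [], 0
    for i in range(n_splits):
        size = k + (1 if i < m else 0)
        out.append(items[start:start + size])
        start += size
    return out

def epoch_mean_losses(instance_losses, batch_size):
    # One entry per mini-batch: the mean of its per-instance losses,
    # mirroring loss / batch_tensors.shape[0] in perform_epoch.
    n_splits = len(instance_losses) // batch_size
    return [sum(batch) / len(batch) for batch in chunks(instance_losses, n_splits)]

print(epoch_mean_losses([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], 2))  # [1.5, 3.5, 5.5]
```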
## CRF model
```
class BiLSTMTaggerWordCRFModel(ModelWrapper):
def __init__(self, model,
loss_function,
use_cuda=False):
self.model = model
self.loss_function = None
self.use_cuda = use_cuda
if self.use_cuda:
#[k.cuda() for k in self.model.modules()]
self.model.cuda()
def _process_instance_tensors(self, instance_tensors):
X, Y, X_char = instance_tensors
X = Variable(torch.LongTensor([X]), requires_grad=False)
Y = torch.LongTensor(Y)
X_char = charseq2varlist(X_char)
if self.use_cuda:
X = X.cuda()
Y = Y.cuda()
X_char = [t.cuda() for t in X_char]
return X, X_char, Y
def get_instance_loss(self, instance_tensors, zero_grad=True):
if zero_grad:
## Clear gradients before every update else memory runs out
self.model.zero_grad()
X, X_char, Y = self._process_instance_tensors(instance_tensors)
#print(X.get_device(), [t.get_device() for t in X_char])
return self.model.loss(X, X_char, Y)
def predict(self, instance_tensors):
X, X_char, Y = self._process_instance_tensors(instance_tensors)
emissions = self.model.forward(X, X_char)
return self.model.crf.forward(emissions)[1]
use_cuda=True
n_embed=100
hidden_size=20
batch_size=10
char_emb_size=10
output_channels=50
kernel_sizes=[2, 3]
word_emb_size=100
char_embed_kwargs=dict(
vocab_size=char_vocab.size,
embedding_size=char_emb_size,
out_channels=output_channels,
kernel_sizes=kernel_sizes
)
word_char_embedding = WordCharEmbedding(
word_vocab.size, word_emb_size,
char_embed_kwargs, dropout=0.2)
# Assign glove embeddings
assign_embeddings(word_char_embedding.word_embeddings, pretrained_embeddings, fix_embedding=True)
model_wrapper = BiLSTMTaggerWordCRFModel(
LSTMTaggerWordCharCRF(word_char_embedding, n_embed, hidden_size, chunk_vocab.size),
None, use_cuda=use_cuda)
n_epochs=5
training_history = training_wrapper(
model_wrapper, train_tensors,
eval_tensors=dev_tensors,
optimizer=optim.Adam,
optimizer_kwargs={
#"lr": 0.01,
"weight_decay": 0.5
},
n_epochs=n_epochs,
batch_size=batch_size,
use_cuda=use_cuda,
log_file="BiLSTMTaggerWordCRFModel_CONLL2000.log"
)
model_wrapper.save("BiLSTMTaggerWordCRFModel_CONLL2000")
fig, ax = plt.subplots(1,1)
plot_losses(training_history["training_loss"],
training_history["evaluation_loss"],
plot_std=True,
ax=ax)
ax.legend()
sns.despine(offset=5)
for title, tensors, corpus in zip(
["train", "dev", "test"],
[train_tensors, dev_tensors, test_tensors],
[train_corpus, dev_corpus, test_corpus],
):
%time predictions = model_wrapper.predict_batch(tensors)
print_predictions(corpus, predictions, "%s.chunking.conll" % title, chunk_vocab)
conll_eval(["conlleval", "%s.chunking.conll" % title])
temp_io = io.StringIO()
conll_eval(["conlleval", "%s.chunking.conll" % "train"], outstream=temp_io)
report = temp_io.getvalue()
print(conll_classification_report_to_df(report))
temp_io.close()
```
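`conll_classification_report_to_df` comes from the local `utils` module. If you only need the overall scores, the summary line of a conlleval report can be pulled apart with a regex — a hedged sketch (the report text is conlleval's usual format; the parser itself is an assumption):

```python
import re

def parse_overall_line(line):
    """Extract accuracy/precision/recall/FB1 from a conlleval summary line."""
    metrics = {}
    for name, value in re.findall(r"(accuracy|precision|recall|FB1):\s+([\d.]+)", line):
        metrics[name] = float(value)
    return metrics

line = "accuracy:  95.92%; precision:  92.51%; recall:  92.92%; FB1:  92.71"
print(parse_overall_line(line))
```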
| github_jupyter |
```
%cd /opt
%%capture
!tar xvf /kaggle/input/extract-prebuilt-kaldi-from-docker/kaldi.tar
%cd kaldi/egs
!git clone https://github.com/danijel3/ClarinStudioKaldi
%cd ClarinStudioKaldi
#apt-get -y install libperlio-gzip-perl
!conda install -c bioconda perl-perlio-gzip -y
import os
#os.environ['LD_LIBRARY_PATH'] = f'{os.environ["LD_LIBRARY_PATH"]}:/opt/kaldi/tools/openfst-1.6.7/lib:/opt/kaldi/src/lib'
os.environ['LD_LIBRARY_PATH'] = '/opt/conda/lib:/opt/kaldi/tools/openfst-1.6.7/lib:/opt/kaldi/src/lib'
!cat path.sh|sed -e 's/~\/apps/\/opt/' > tmp
!mv tmp path.sh
!echo > local_clarin/clarin_pl_clean.sh
!ln -s ../wsj/s5/steps
!ln -s ../wsj/s5/conf
!ln -s ../wsj/s5/local
!ln -s ../wsj/s5/utils
!cp -r /kaggle/input/kaldi-clarinstudio-polish-data-prep/data /kaggle/working/
!mkdir /kaggle/working/exp
!ln -s /kaggle/working/exp
!ln -s /kaggle/working/data
%%writefile run.sh
. path.sh
export nj=40 ##number of concurrent processes
export nj_test=30 ## number of concurrent processes for test has to be <=30
#train Monophone system
steps/train_mono.sh --nj $nj data/train data/lang_nosp exp/mono0
#align using the Monophone system
steps/align_si.sh --nj $nj data/train data/lang_nosp exp/mono0 exp/mono0_ali
#train initial Triphone system
steps/train_deltas.sh 2000 10000 data/train data/lang_nosp exp/mono0_ali exp/tri1
#re-align using the initial Triphone system
steps/align_si.sh --nj $nj data/train data/lang_nosp exp/tri1 exp/tri1_ali
#train tri2a, which is deltas + delta-deltas
steps/train_deltas.sh 2500 15000 data/train data/lang_nosp exp/tri1_ali exp/tri2a
#train tri2b, which is tri2a + LDA
steps/train_lda_mllt.sh --splice-opts "--left-context=3 --right-context=3" \
2500 15000 data/train data/lang_nosp exp/tri1_ali exp/tri2b
#re-align tri2b system
steps/align_si.sh --nj $nj --use-graphs true data/train data/lang_nosp exp/tri2b exp/tri2b_ali
#from 2b system, train 3b which is LDA + MLLT + SAT.
steps/train_sat.sh 2500 15000 data/train data/lang_nosp exp/tri2b_ali exp/tri3b
#get pronunciation probabilities and silence information
./steps/get_prons.sh data/train data/lang_nosp exp/tri3b
#recreate dict with new pronunciation and silence probabilities
./utils/dict_dir_add_pronprobs.sh data/local/dict_nosp \
exp/tri3b/pron_counts_nowb.txt \
exp/tri3b/sil_counts_nowb.txt \
exp/tri3b/pron_bigram_counts_nowb.txt data/local/dict
#recreate lang directory
utils/prepare_lang.sh data/local/dict "<unk>" data/local/tmp data/lang
#recreate G.fst
utils/format_lm.sh data/lang local_clarin/arpa.lm.gz data/local/dict/lexicon.txt data/lang_test
#download a large LM (~843MB)
if [ ! -f local_clarin/large.arpa.gz ] ; then
(
cd local_clarin
curl -O http://mowa.clarin-pl.eu/korpusy/large.arpa.gz
)
fi
#create the const-arpa lang dir
./utils/build_const_arpa_lm.sh local_clarin/large.arpa.gz data/lang data/lang_carpa
#from 3b system, align all data
steps/align_fmllr.sh --nj $nj data/train data/lang exp/tri3b exp/tri3b_ali
#train MMI on tri3b (LDA+MLLT+SAT)
steps/make_denlats.sh --nj $nj --transform-dir exp/tri3b_ali data/train data/lang \
exp/tri3b exp/tri3b_denlats
steps/train_mmi.sh data/train data/lang exp/tri3b_ali exp/tri3b_denlats exp/tri3b_mmi
#test Monophone system
utils/mkgraph.sh data/lang_nosp_test exp/mono0 exp/mono0/graph
steps/decode.sh --nj $nj_test exp/mono0/graph data/test exp/mono0/decode
#test initial Triphone system
utils/mkgraph.sh data/lang_nosp_test exp/tri1 exp/tri1/graph
steps/decode.sh --nj $nj_test exp/tri1/graph data/test exp/tri1/decode
#test tri2a
utils/mkgraph.sh data/lang_nosp_test exp/tri2a exp/tri2a/graph
steps/decode.sh --nj $nj_test exp/tri2a/graph data/test exp/tri2a/decode
#test tri2b
utils/mkgraph.sh data/lang_nosp_test exp/tri2b exp/tri2b/graph
steps/decode.sh --nj $nj_test exp/tri2b/graph data/test exp/tri2b/decode
#test tri3b
utils/mkgraph.sh data/lang_nosp_test exp/tri3b exp/tri3b/graph_nosp
steps/decode_fmllr.sh --nj $nj_test exp/tri3b/graph_nosp data/test exp/tri3b/decode_nosp
#test tri3b again, this time with the re-estimated lexicon (data/lang_test)
utils/mkgraph.sh data/lang_test exp/tri3b exp/tri3b/graph
steps/decode_fmllr.sh --nj $nj_test exp/tri3b/graph data/test exp/tri3b/decode
!bash run.sh
!find exp -type l|zip /kaggle/working/links.zip -@
```
| github_jupyter |
# An End-to-End Machine Learning Project
# Housing Price Prediction
## We use the California housing prices dataset from StatLib
```
# Import the required packages
import pandas as pd
import os
INPUT_PATH = 'dataset' # input directory
def load_data(file, path=INPUT_PATH):
"""
加载csv文件
"""
csv_path=os.path.join(path, file)
return pd.read_csv(csv_path)
# First, take a look at the data: there are 10 attributes
housing = load_data("housing.csv")
housing.head()
# info() gives a quick description of the data: total rows, each attribute's type, and the count of non-null values
# total_bedrooms has some missing values; ocean_proximity is the only non-numerical attribute
housing.info()
# value_counts() shows the categories present in this column
housing["ocean_proximity"].value_counts()
# describe() shows summary statistics for the numerical attributes
housing.describe()
# Plot a histogram for each numerical attribute
import matplotlib.pyplot as plt
housing.hist(bins=50, figsize=(20,15))
plt.show()
# Create the test set; stratified sampling is used here to avoid sampling bias
# The test set is often neglected, but it is actually a very important part of machine learning.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.model_selection import StratifiedShuffleSplit
# Create an income category attribute by dividing the median income by 1.5 (to limit the number of categories),
# rounding up with ceil (to produce discrete categories), and merging every category above 5 into category 5
housing["income_cat"] = np.ceil(housing["median_income"]/1.5)
housing["income_cat"].where(housing["income_cat"] < 5, 5.0, inplace= True)
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=41)
for train_index, test_index in split.split(housing, housing["income_cat"]):
strat_train_set=housing.loc[train_index]
strat_test_set=housing.loc[test_index]
for set_ in (strat_train_set, strat_test_set):
    set_.drop(["income_cat"], axis=1, inplace=True)
# Stratified sampling follows the income_cat category proportions shown here
housing["income_cat"].value_counts()/len(housing)
```
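The income-category rule used above (divide the median income by 1.5, round up, cap at 5) is plain arithmetic; a scalar sketch of the same rule:

```python
import math

def income_cat(median_income):
    # Divide by 1.5 to limit the number of categories, ceil to make them
    # discrete, and merge everything above 5 into category 5.
    return min(math.ceil(median_income / 1.5), 5.0)

for income in [0.5, 2.0, 5.0, 8.3, 15.0]:
    print(income, "->", income_cat(income))
```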
## Exploratory Data Analysis
```
# View the density distribution by longitude and latitude
housing = strat_train_set.copy()
housing.plot(kind="scatter", x="longitude",y="latitude", alpha=0.1)
plt.show()
# This plot shows that housing prices are closely tied to proximity to the ocean and to population density
housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.4,
s=housing["population"]/100, label="population", figsize=(10,7),
c="median_house_value", cmap=plt.get_cmap("jet"), colorbar=True,)
plt.legend()
plt.show()
# Look for correlations using the Pearson correlation coefficient
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
# scatter_matrix plots every numerical attribute against every other numerical attribute
from pandas.plotting import scatter_matrix
attributes = ["median_house_value", "median_income", "total_rooms",
"housing_median_age"]
scatter_matrix(housing[attributes], figsize=(12, 8))
plt.show()
housing.plot(kind="scatter", x="median_income", y="median_house_value", alpha=0.1)
plt.show()
# Attribute combinations
# What we really want is the number of rooms per household
housing["rooms_per_household"] = housing["total_rooms"]/housing["households"]
# Ratio of total bedrooms to total rooms
housing["bedrooms_per_room"] = housing["total_bedrooms"]/housing["total_rooms"]
# Population per household
housing["population_per_household"]=housing["population"]/housing["households"]
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
```
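The combined attributes are simple per-row ratios; the same arithmetic on a single illustrative record (the values below are made up):

```python
def add_ratio_features(row):
    # row: dict with the raw census attributes for one district.
    out = dict(row)
    out["rooms_per_household"] = row["total_rooms"] / row["households"]
    out["bedrooms_per_room"] = row["total_bedrooms"] / row["total_rooms"]
    out["population_per_household"] = row["population"] / row["households"]
    return out

record = {"total_rooms": 880, "total_bedrooms": 129,
          "population": 322, "households": 126}
enriched = add_ratio_features(record)
print(round(enriched["rooms_per_household"], 3))  # 6.984
```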
## Prepare the Data
```
# Prepare the data
housing = strat_train_set.drop("median_house_value", axis=1)
housing_labels = strat_train_set["median_house_value"].copy()
housing_num = housing.drop('ocean_proximity', axis=1)
housing_cat = housing[['ocean_proximity']]
```
## Pipelines
```
# Custom transformer
# Performs cleanup operations or attribute combinations
from sklearn.base import BaseEstimator, TransformerMixin
rooms_ix, bedrooms_ix, population_ix, household_ix = [
list(housing.columns).index(col)
for col in ("total_rooms", "total_bedrooms", "population", "households")]
class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
def __init__(self, add_bedrooms_per_room = True):
"""
        Whether to add the bedrooms_per_room attribute
"""
self.add_bedrooms_per_room = add_bedrooms_per_room
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
rooms_per_household = X[:, rooms_ix] / X[:, household_ix]
population_per_household = X[:, population_ix] / X[:, household_ix]
if self.add_bedrooms_per_room:
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
return np.c_[X, rooms_per_household, population_per_household,
bedrooms_per_room]
else:
return np.c_[X, rooms_per_household, population_per_household]
# Numerical preprocessing pipeline
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
num_pipeline = Pipeline([
    ('imputer', SimpleImputer(strategy="median")), # fill missing values with the median
('attribs_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler()),
])
housing_num_tr = num_pipeline.fit_transform(housing_num)
housing_num_tr
# Full preprocessing pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
num_attribs = list(housing_num)
cat_attribs = ["ocean_proximity"]
full_pipeline = ColumnTransformer([
("num", num_pipeline, num_attribs),
("cat", OneHotEncoder(), cat_attribs),
])
housing_prepared = full_pipeline.fit_transform(housing)
housing_prepared
```
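A scikit-learn `Pipeline` simply threads the data through each step's `fit_transform`, so the output of one step becomes the input of the next (and the `ColumnTransformer` concatenates the per-column-group results). That chaining contract can be sketched without scikit-learn — `MiniPipeline` and the toy steps below are illustrative, not the real API:

```python
class AddOne:
    def fit_transform(self, X):
        return [x + 1 for x in X]

class Double:
    def fit_transform(self, X):
        return [x * 2 for x in X]

class MiniPipeline:
    """Each step's output becomes the next step's input."""
    def __init__(self, steps):
        self.steps = steps  # list of (name, transformer) pairs
    def fit_transform(self, X):
        for _name, step in self.steps:
            X = step.fit_transform(X)
        return X

pipe = MiniPipeline([("add_one", AddOne()), ("double", Double())])
print(pipe.fit_transform([1, 2, 3]))  # [4, 6, 8]
```

Note that step order matters: swapping the two steps above would compute `2x + 1` instead of `2(x + 1)`.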
## Select and Train a Model
```
# Linear model
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)
# Try it out
some_data = housing.iloc[:5]
some_labels = housing_labels.iloc[:5]
some_data_prepared = full_pipeline.transform(some_data)
print("Predictions:", lin_reg.predict(some_data_prepared))
print("Labels:", list(some_labels))
# Error evaluation: this model underfits
from sklearn.metrics import mean_squared_error
housing_predictions = lin_reg.predict(housing_prepared)
lin_mse = mean_squared_error(housing_labels, housing_predictions)
lin_rmse = np.sqrt(lin_mse)
lin_rmse
# Decision tree model
# This model overfits
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor(random_state=42)
tree_reg.fit(housing_prepared, housing_labels)
housing_predictions = tree_reg.predict(housing_prepared)
tree_mse = mean_squared_error(housing_labels, housing_predictions)
tree_rmse = np.sqrt(tree_mse)
tree_rmse
# Use cross-validation
from sklearn.model_selection import cross_val_score
scores = cross_val_score(tree_reg, housing_prepared, housing_labels,
scoring="neg_mean_squared_error", cv=10)
tree_rmse_scores = np.sqrt(-scores)
def display_scores(scores):
print("Scores:", scores)
print("Mean:", scores.mean())
print("Standard deviation:", scores.std())
display_scores(tree_rmse_scores)
# Random forest model
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
forest_reg = RandomForestRegressor()
forest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels,
scoring="neg_mean_squared_error", cv=10)
forest_rmse_scores = np.sqrt(-forest_scores)
display_scores(forest_rmse_scores)
# SVM: performs even worse
from sklearn.svm import SVR
svm_reg = SVR(kernel="linear")
svm_reg.fit(housing_prepared, housing_labels)
housing_predictions = svm_reg.predict(housing_prepared)
svm_mse = mean_squared_error(housing_labels, housing_predictions)
svm_rmse = np.sqrt(svm_mse)
svm_rmse
# Grid search
from sklearn.model_selection import GridSearchCV
param_grid = [
{'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]},
{'bootstrap': [False], 'n_estimators': [3, 10], 'max_features': [2, 3, 4]},
]
forest_reg = RandomForestRegressor(random_state=42)
grid_search = GridSearchCV(forest_reg, param_grid, cv=5,
scoring='neg_mean_squared_error', return_train_score=True)
grid_search.fit(housing_prepared, housing_labels)
# Results
grid_search.best_params_
# Best estimator
grid_search.best_estimator_
cvres = grid_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
print(np.sqrt(-mean_score), params)
# Randomized search
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint
param_distribs = {
'n_estimators': randint(low=1, high=200),
'max_features': randint(low=1, high=8),
}
forest_reg = RandomForestRegressor(random_state=42)
rnd_search = RandomizedSearchCV(forest_reg, param_distributions=param_distribs,
n_iter=10, cv=5, scoring='neg_mean_squared_error', random_state=42)
rnd_search.fit(housing_prepared, housing_labels)
# Results
cvres = rnd_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
print(np.sqrt(-mean_score), params)
# Feature importances
# Based on these results, some uninformative features could be dropped
feature_importances = rnd_search.best_estimator_.feature_importances_
extra_attribs = ["rooms_per_hhold", "pop_per_hhold", "bedrooms_per_room"]
cat_encoder = full_pipeline.named_transformers_["cat"]
cat_one_hot_attribs = list(cat_encoder.categories_[0])
attributes = num_attribs + extra_attribs + cat_one_hot_attribs
sorted(zip(feature_importances, attributes), reverse=True)
# Final model
final_model = grid_search.best_estimator_
X_test = strat_test_set.drop("median_house_value", axis=1)
y_test = strat_test_set["median_house_value"].copy()
X_test_prepared = full_pipeline.transform(X_test)
final_predictions = final_model.predict(X_test_prepared)
final_mse = mean_squared_error(y_test, final_predictions)
final_rmse = np.sqrt(final_mse)
final_rmse
# Save the model
import joblib  # sklearn.externals.joblib was removed in scikit-learn 0.23; use joblib directly
joblib.dump(final_model, "my_model.pkl")
# my_model_loaded = joblib.load("my_model.pkl")
```
## Miscellaneous
```
# Keep only the k most important features
from sklearn.base import BaseEstimator, TransformerMixin
def indices_of_top_k(arr, k):
return np.sort(np.argpartition(np.array(arr), -k)[-k:])
class TopFeatureSelector(BaseEstimator, TransformerMixin):
def __init__(self, feature_importances, k):
self.feature_importances = feature_importances
self.k = k
def fit(self, X, y=None):
self.feature_indices_ = indices_of_top_k(self.feature_importances, self.k)
return self
def transform(self, X):
return X[:, self.feature_indices_]
k = 5
top_k_feature_indices = indices_of_top_k(feature_importances, k)
sorted(zip(feature_importances, attributes), reverse=True)[:k]
# Plug the selector into the pipeline
preparation_and_feature_selection_pipeline = Pipeline([
('preparation', full_pipeline),
('feature_selection', TopFeatureSelector(feature_importances, k))
])
housing_prepared_top_k_features = preparation_and_feature_selection_pipeline.fit_transform(housing)
housing_prepared_top_k_features
```
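As a quick standalone sanity check (toy data, independent of the housing pipeline), the top-k selection logic above behaves like this:

```python
import numpy as np

def indices_of_top_k(arr, k):
    # indices of the k largest values, returned in ascending index order
    return np.sort(np.argpartition(np.array(arr), -k)[-k:])

importances = np.array([0.10, 0.50, 0.05, 0.30, 0.05])
top2 = indices_of_top_k(importances, k=2)
print(top2)                      # [1 3]

X = np.arange(10).reshape(2, 5)  # two samples, five features
X_top = X[:, top2]               # column selection, as in TopFeatureSelector.transform
print(X_top)                     # [[1 3]
                                 #  [6 8]]
```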
```
%matplotlib inline
```
links:
* http://scikit-image.org/docs/dev/auto_examples/transform/plot_radon_transform.html
* https://software.intel.com/en-us/node/507042
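The reconstruction code below filters each projection with a ramp filter apodized by a Hamming window, the standard filtered-backprojection step described in the links above. A minimal standalone sketch of that filter construction (`N` here is an illustrative detector width):

```python
import numpy as np

N = 512  # illustrative detector width (pixels)

# Ramp filter in FFT index order: 0 at DC (index 0), 1 at Nyquist (index N//2)
ramp = 1.0 - np.abs(np.arange(N, dtype=np.float32) - N // 2) / (N // 2)

# Hamming apodization rolled so its maximum sits at DC, tapering high frequencies
window = np.roll(np.hamming(N), N // 2)
fft_filter = ramp * window

# A constant projection row carries only DC energy, so filtering removes it entirely
row = np.ones(N, dtype=np.float32)
filtered = np.real(np.fft.ifft(np.fft.fft(row) * fft_filter))
print(fft_filter[0], np.max(np.abs(filtered)) < 1e-6)   # 0.0 True
```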
```
# from minimg import load, MinImg, TYP_REAL32
from numba import jit, prange
import numba
import pylab as plt
from glob import glob
from math import radians, cos, sin, pi, atan2
import numpy as np
arr = np.ndarray((508, 262, 500), dtype=np.float32)
# for path in glob("d:\\tomography\\RS_8_4\\MMC1_2.82um_*.tif"):
for path in glob("/home/makov/diskmnt/big/yaivan/test_data/RS_8_4/MMC1_2.82um_*.tif"):
idx = path[-8:-4]
if not idx.isdigit():
continue
img = plt.imread(path).astype('float32')
arr[int(idx, 10),:,:] = -np.log(img / np.max(img))
# img = load(path).astype(TYP_REAL32)
# arr[int(idx, 10),:,:] = -np.log(img / img.bounds()[1])
@jit(nopython=True)
def parker_weight(src):
L = 225.082 # Camera to Source (mm)
num_angles = src.shape[0]
N = src.shape[1]
step_rad = radians(0.400000) # Rotation Step (deg)
detector_px_sz = 22.597849 * 1e-3 # Image Pixel Size (mm)
delta = atan2((N / 2 + 0.5) * detector_px_sz, L)
for angle_idx in range(num_angles):
beta = angle_idx * step_rad
for detector_idx in range(N):
alpha = atan2((N / 2 - detector_idx) * detector_px_sz, L)
if 0 <= beta <= 2 * delta + 2 * alpha:
weight = sin((pi / 4) * (beta / (delta + alpha))) ** 2
elif beta < pi + 2 * alpha:
weight = 1.0
elif beta < pi + 2 * delta:
weight = sin((pi / 4) * (pi + 2 * delta - beta) / (delta - alpha)) ** 2
else:
weight = 0.0
src[angle_idx, detector_idx] *= weight
@jit(nopython=True)
def pre_weight(src):
L = 225.082 # Camera to Source (mm)
num_angles = src.shape[0]
N = src.shape[1]
N2 = N / 2.0 # TODO: check it for odd N
Nz = src.shape[2]
Nz2 = Nz / 2 # TODO: check it for odd Nz
detector_px_sz = 22.597849 * 1e-3 # Image Pixel Size (mm)
for (y_idx, z_idx), _ in np.ndenumerate(src[0]):
z = z_idx - Nz2
y = N2 - y_idx
dl2 = (z * z + y * y) * detector_px_sz**2
k = L / np.sqrt(L * L + dl2)
for angle_idx in range(num_angles):
src[angle_idx, y_idx, z_idx] *= k
# @jit(nopython=True)
# def bp1d(src_filtered, out):
# L = 225.082 # Camera to Source (mm)
# l = 56.135 # Object to Source (mm)
# detector_px_sz = 22.597849 * 1e-3 # Image Pixel Size (um)
# object_px_sz = 22.597849 * 1e-3 / (L / l)
# N = out.shape[-1]
# N2 = N / 2.0
# num_angles = src_filtered.shape[0]
# step_rad = radians(0.400000) # Rotation Step (deg)
# for angle_idx in range(0, num_angles, 1):
# # print("angle_idx %d" % (angle_idx,))
# # print(angle_idx)
# angle_rad = angle_idx * step_rad
# cos_val = cos(angle_rad) * object_px_sz
# sin_val = sin(angle_rad) * object_px_sz
# projection_filtered = src_filtered[angle_idx]
# for (y_idx, x_idx), _ in np.ndenumerate(out):
# x = x_idx - N2
# y = N2 - y_idx
# t = x * cos_val - y * sin_val
# s = x * sin_val + y * cos_val
# a = t # * object_px_sz
# b = s # * object_px_sz
# d = (N * detector_px_sz) / 2.0 + b / (l + a) * L
# d_idx = int(round(d / detector_px_sz))
# if d_idx >= 0 and d_idx < N:
# out[y_idx, x_idx] += projection_filtered[d_idx]
# out /= N * num_angles
# def fbp(src, out):
# print("parker_weight")
# parker_weight(src)
# print("fft")
# # FFT
# N = out.shape[-1]
# fft_filter=np.arange(N).astype(np.float32)
# fft_filter = 1-np.abs(fft_filter-N//2)/(N//2)
# fft_filter *= np.roll(np.hamming(N), N//2)
# src_fft = np.fft.fft(src, axis=1)
# src_fft *= fft_filter
# src_filtered = np.real(np.fft.ifft(src_fft, axis=1))
# print("bp")
# bp1d(src_filtered, out)
# print("bp finish")
@numba.jit(nopython=True)
def bp2d(src_filtered, out):
L = 225.082 # Camera to Source (mm)
l = 56.135 # Object to Source (mm)
detector_px_sz = 22.597849 * 1e-3 # Image Pixel Size (mm)
object_px_sz = 22.597849 * 1e-3 / (L / l)
N = out.shape[-1]
N2 = N / 2.0 # TODO: check it for odd N
Nz = out.shape[0]
Nz2 = Nz / 2 # TODO: check it for odd Nz
num_angles = src_filtered.shape[0]
step_rad = radians(0.400000) # Rotation Step (deg)
for angle_idx in range(0, num_angles, 1):
# print("angle_idx %d" % (angle_idx,))
# print(angle_idx)
angle_rad = angle_idx * step_rad
cos_val = cos(angle_rad) * object_px_sz
sin_val = sin(angle_rad) * object_px_sz
# projection_filtered = src_filtered[angle_idx]
# for (z_idx, y_idx, x_idx), _ in np.ndenumerate(out):
for (y_idx, x_idx), _ in np.ndenumerate(out[0]):
x = x_idx - N2
y = N2 - y_idx
t = x * cos_val - y * sin_val
s = x * sin_val + y * cos_val
a = t # * object_px_sz
b = s # * object_px_sz
dx = (N * detector_px_sz) / 2.0 + b / (l + a) * L
dx_idx = int(round(dx / detector_px_sz))
if not(dx_idx >= 0 and dx_idx < N):
continue
for z_idx in prange(out.shape[0]):
z = z_idx - Nz2
dz = (Nz * detector_px_sz) / 2.0 + z * object_px_sz / (l + a) * L
dz_idx = int(round(dz / detector_px_sz))
if dz_idx >= 0 and dz_idx < Nz:
out[z_idx, y_idx, x_idx] += src_filtered[angle_idx, dz_idx, dx_idx]
out /= N * num_angles
def fdk(src):
print("parker")
print(src.shape)
out = np.zeros((10, 500, 500), dtype=np.float32)
for i in range(src.shape[1]):
parker_weight(src[:,i,:])
pre_weight(src)
# FFT
print("fft")
N = out.shape[-1]
fft_filter=np.arange(N).astype(np.float32)
fft_filter = 1-np.abs(fft_filter-N//2)/(N//2)
fft_filter *= np.roll(np.hamming(N), N//2)
for i in range(src.shape[1]):
src_fft = np.fft.fft(src[:,i,:], axis=1)
src_fft *= fft_filter
src[:,i,:] = np.real(np.fft.ifft(src_fft, axis=1))
print("bp")
# np.save('tmp', src)
# src = np.load('tmp.npy')[:,-960//8-5:-960//8+5,:]
bp2d(src[:,-960//8-5:-960//8+5,:], out)
return out
out = fdk(arr)
# for z in range(out.shape[0]):
# MinImg.fromarray(out[z]).save('out_%d.tif' % z)
plt.figure(figsize=(10,10))
plt.imshow(out[5,:,:], cmap=plt.cm.gray)
plt.colorbar()
plt.show()
# %load "/home/makov/diskmnt/big/yaivan/MMC_1/Raw/MMC1_2.82um_.log"
[System]
Scanner=Skyscan1172
Instrument S/N=08G01121
Hardware version=A
Software=Version 1. 5 (build 18)
Home directory=C:\SkyScan
Source Type=Hamamatsu 100/250
Camera=Hamamatsu 10Mp camera
Camera Pixel Size (um)= 11.32
CameraXYRatio=1.0023
Incl.in lifting (um/mm)=0.0000
[Acquisition]
Data directory=D:\Results\Yakimchuk\2015-Spectrum Reconctruction\MultiMineral Calibration\2015.03.18 MMC_1\Raw
Filename Prefix=MMC1_2.82um_
Number of Files= 2030
Source Voltage (kV)= 100
Source Current (uA)= 100
Number of Rows= 2096
Number of Columns= 4000
Image crop origin X= 0
Image crop origin Y=0
Camera binning=1x1
Image Rotation=0.6500
Gantry direction=CC
Image Pixel Size (um)= 2.82
Object to Source (mm)=56.135
Camera to Source (mm)=225.082
Vertical Object Position (mm)=6.900
Optical Axis (line)= 960
Filter=Al 0.5 mm
Image Format=TIFF
Depth (bits)=16
Screen LUT=0
Exposure (ms)= 1767
Rotation Step (deg)=0.100
Frame Averaging=ON (15)
Random Movement=OFF (10)
Use 360 Rotation=NO
Geometrical Correction=ON
Camera Offset=OFF
Median Filtering=ON
Flat Field Correction=ON
Rotation Direction=CC
Scanning Trajectory=ROUND
Type Of Motion=STEP AND SHOOT
Study Date and Time=Mar 19, 2015 10:11:11
Scan duration=16:08:02
[Reconstruction]
Reconstruction Program=NRecon
Program Version=Version: 1.6.5.8
Program Home Directory=C:\SkyScan\NRecon_GPU
Reconstruction engine=NReconServer
Engine version=Version: 1.6.5
Reconstruction from batch=No
Reconstruction servers= slb-7hlv74j slb-9hlv74j slb-7pbv74j
Option for additional F4F float format=OFF
Dataset Origin=Skyscan1172
Dataset Prefix=MMC1_2.82um_
Dataset Directory=D:\Results\Yakimchuk\2015-Spectrum Reconctruction\MultiMineral Calibration\2015.03.18 MMC_1\Raw
Output Directory=D:\Results\Yakimchuk\2015-Spectrum Reconctruction\MultiMineral Calibration\2015.03.18 MMC_1\Reconstructed
Time and Date=Mar 19, 2015 13:00:46
First Section=96
Last Section=1981
Reconstruction duration per slice (seconds)=1.859491
Total reconstruction time (1886 slices) in seconds=3507.000000
Postalignment=-1.00
Section to Section Step=1
Sections Count=1886
Result File Type=PNG
Result File Header Length (bytes)=Unknown: compressed JPG format (100%)
Result Image Width (pixels)=4000
Result Image Height (pixels)=4000
Pixel Size (um)=2.82473
Reconstruction Angular Range (deg)=202.90
Use 180+=OFF
Angular Step (deg)=0.1000
Smoothing=0
Ring Artifact Correction=16
Draw Scales=OFF
Object Bigger than FOV=OFF
Reconstruction from ROI=OFF
Filter cutoff relative to Nyquisit frequency=100
Filter type=0
Filter type meaning(1)=0: Hamming (Ramp in case of optical scanner); 1: Hann; 2: Ramp; 3: Almost Ramp;
Filter type meaning(2)=11: Cosine; 12: Shepp-Logan; [100,200]: Generalized Hamming, alpha=(iFilter-100)/100
Undersampling factor=1
Threshold for defect pixel mask (%)=0
Beam Hardening Correction (%)=92
CS Static Rotation (deg)=0.0
Minimum for CS to Image Conversion=-0.1800
Maximum for CS to Image Conversion=0.5200
HU Calibration=OFF
BMP LUT=0
Cone-beam Angle Horiz.(deg)=11.493867
Cone-beam Angle Vert.(deg)=6.037473
```
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D1_RealNeurons/student/W3D1_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Neuromatch Academy: Week 2, Day 3, Tutorial 2
# Real Neurons: Effects of Input Correlation
__Content creators:__ Qinglong Gu, Songtin Li, John Murray, Richard Naud, Arvind Kumar
__Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Matthew Krause, Spiros Chavlis, Michael Waskom
---
# Tutorial Objectives
In this tutorial, we will use the leaky integrate-and-fire (LIF) neuron model (see Tutorial 1) to study how neurons transform input correlations into output correlations (the transfer of correlations). In particular, we are going to write a few lines of code to:
- inject correlated GWN in a pair of neurons
- measure correlations between the spiking activity of the two neurons
- study how the transfer of correlation depends on the statistics of the input, i.e. mean and standard deviation.
---
# Setup
```
# Import libraries
import matplotlib.pyplot as plt
import numpy as np
import time
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
# use NMA plot style
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
my_layout = widgets.Layout()
# @title Helper functions
def default_pars(**kwargs):
pars = {}
### typical neuron parameters###
pars['V_th'] = -55. # spike threshold [mV]
pars['V_reset'] = -75. # reset potential [mV]
pars['tau_m'] = 10. # membrane time constant [ms]
pars['g_L'] = 10. # leak conductance [nS]
pars['V_init'] = -75. # initial potential [mV]
pars['V_L'] = -75. # leak reversal potential [mV]
pars['tref'] = 2. # refractory time (ms)
### simulation parameters ###
pars['T'] = 400. # Total duration of simulation [ms]
pars['dt'] = .1 # Simulation time step [ms]
### external parameters if any ###
for k in kwargs:
pars[k] = kwargs[k]
pars['range_t'] = np.arange(0, pars['T'], pars['dt']) # Vector of discretized
# time points [ms]
return pars
def run_LIF(pars, Iinj):
"""
Simulate the LIF dynamics with external input current
Args:
pars : parameter dictionary
Iinj : input current [pA]. The injected current here can be a value or an array
Returns:
rec_v : membrane potential
rec_spikes : spike times
"""
# Set parameters
V_th, V_reset = pars['V_th'], pars['V_reset']
tau_m, g_L = pars['tau_m'], pars['g_L']
V_init, V_L = pars['V_init'], pars['V_L']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
tref = pars['tref']
# Initialize voltage and current
v = np.zeros(Lt)
v[0] = V_init
Iinj = Iinj * np.ones(Lt)
tr = 0.
# simulate the LIF dynamics
rec_spikes = [] # record spike times
for it in range(Lt - 1):
if tr > 0:
v[it] = V_reset
tr = tr - 1
elif v[it] >= V_th: # reset voltage and record spike event
rec_spikes.append(it)
v[it] = V_reset
tr = tref / dt
# calculate the increment of the membrane potential
dv = (-(v[it] - V_L) + Iinj[it] / g_L) * (dt / tau_m)
# update the membrane potential
v[it + 1] = v[it] + dv
rec_spikes = np.array(rec_spikes) * dt
return v, rec_spikes
def my_GWN(pars, sig, myseed=False):
"""
Function that calculates Gaussian white noise inputs
Args:
pars : parameter dictionary
sig : noise amplitude (standard deviation)
myseed : random seed. int or boolean
the same seed will give the same random number sequence
Returns:
I : Gaussian white noise input
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# Set random seed. You can fix the seed of the random number generator so
# that the results are reproducible; however, when generating multiple
# realizations, make sure to change the seed for each new realization
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# generate GWN
# scale by sqrt(tau_m / dt) so that the noise strength is independent of the time step
I_GWN = sig * np.random.randn(Lt) * np.sqrt(pars['tau_m'] / dt)
return I_GWN
def Poisson_generator(pars, rate, n, myseed=False):
"""
Generates poisson trains
Args:
pars : parameter dictionary
rate : firing rate of the Poisson train [Hz]
n : number of Poisson trains
myseed : random seed. int or boolean
Returns:
poisson_train : spike train matrix, ith row represents whether
there is a spike in ith spike train over time
(1 if spike, 0 otherwise)
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# set random seed
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# generate uniformly distributed random variables
u_rand = np.random.rand(n, Lt)
# generate Poisson train
poisson_train = 1. * (u_rand < rate * (dt / 1000.))
return poisson_train
def example_plot_myCC():
pars = default_pars(T=50000, dt=.1)
c = np.arange(10) * 0.1
r12 = np.zeros(10)
for i in range(10):
I1gL, I2gL = correlate_input(pars, mu=20.0, sig=7.5, c=c[i])
r12[i] = my_CC(I1gL, I2gL)
plt.figure()
plt.plot(c, r12, 'bo', alpha=0.7, label='Simulation', zorder=2)
plt.plot([-0.05, 0.95], [-0.05, 0.95], 'k--', label='y=x',
dashes=(2, 2), zorder=1)
plt.xlabel('True CC')
plt.ylabel('Sample CC')
plt.legend(loc='best')
def LIF_output_cc(pars, mu, sig, c, bin_size, n_trials=20):
""" Simulates two LIF neurons with correlated input and computes output correlation
Args:
pars : parameter dictionary
mu : noise baseline (mean)
sig : noise amplitude (standard deviation)
c : correlation coefficient ~[0, 1]
bin_size : bin size used for time series
n_trials : total simulation trials
Returns:
r : output corr. coe.
sp_rate : spike rate
sp1 : spike times of neuron 1 in the last trial
sp2 : spike times of neuron 2 in the last trial
"""
r12 = np.zeros(n_trials)
sp_rate = np.zeros(n_trials)
for i_trial in range(n_trials):
I1gL, I2gL = correlate_input(pars, mu, sig, c)
_, sp1 = run_LIF(pars, pars['g_L'] * I1gL)
_, sp2 = run_LIF(pars, pars['g_L'] * I2gL)
my_bin = np.arange(0, pars['T'], bin_size)
sp1_count, _ = np.histogram(sp1, bins=my_bin)
sp2_count, _ = np.histogram(sp2, bins=my_bin)
r12[i_trial] = my_CC(sp1_count[::20], sp2_count[::20])
sp_rate[i_trial] = len(sp1) / pars['T'] * 1000.
return r12.mean(), sp_rate.mean(), sp1, sp2
def plot_c_r_LIF(c, r, mycolor, mylabel):
z = np.polyfit(c, r, deg=1)
c_range = np.array([c.min() - 0.05, c.max() + 0.05])
plt.plot(c, r, 'o', color=mycolor, alpha=0.7, label=mylabel, zorder=2)
plt.plot(c_range, z[0] * c_range + z[1], color=mycolor, zorder=1)
```
The helper functions include:
- Parameter dictionary: `default_pars( **kwargs)`
- LIF simulator: `run_LIF`
- Gaussian white noise generator: `my_GWN(pars, sig, myseed=False)`
- Poisson type spike train generator: `Poisson_generator(pars, rate, n, myseed=False)`
- Two LIF neurons with correlated inputs simulator: `LIF_output_cc(pars, mu, sig, c, bin_size, n_trials=20)`
- Some additional plotting utilities
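For example, `default_pars` merges keyword overrides into the defaults, so calling it with `T=1000., dt=.5` changes only those entries. A minimal sketch of the same pattern (abridged defaults, using `dict.update` in place of the loop):

```python
def default_pars(**kwargs):
    # abridged defaults from the helper above
    pars = {'V_th': -55., 'T': 400., 'dt': .1}
    pars.update(kwargs)              # keyword arguments override the defaults
    return pars

pars = default_pars(T=1000., dt=.5)  # override only T and dt
print(pars['T'], pars['dt'], pars['V_th'])   # 1000.0 0.5 -55.0
```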
---
# Section 1: Correlations (Synchrony)
Correlation or synchrony in neuronal activity can be described for any readout of brain activity. Here, we are concerned with the spiking activity of neurons.
In the simplest way, correlation/synchrony refers to coincident spiking of neurons, i.e., when two neurons spike together, they are firing in **synchrony** or are **correlated**. Neurons can be synchronous in their instantaneous activity, i.e., they spike together with some probability. However, it is also possible that spiking of a neuron at time $t$ is correlated with the spikes of another neuron with a delay (time-delayed synchrony).
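A minimal numpy illustration of time-delayed synchrony (toy data, not part of the tutorial): a spike train and a copy of it delayed by 5 bins have their cross-correlation peak at lag 5:

```python
import numpy as np

rng = np.random.default_rng(0)
T, delay = 1000, 5                              # bins, imposed delay (bins)
train1 = (rng.random(T) < 0.1).astype(float)    # ~10% spike probability per bin
train2 = np.roll(train1, delay)                 # identical spikes, delayed

# cross-correlation of mean-subtracted trains at lags 0..10
x = train1 - train1.mean()
y = train2 - train2.mean()
lags = np.arange(11)
cc = np.array([np.sum(x[:T - L] * y[L:]) for L in lags])
print(lags[np.argmax(cc)])   # 5 -> correlation peaks at the imposed delay
```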
## Origin of synchronous neuronal activity:
- Common inputs, i.e., two neurons receive input from the same sources. The output correlation is proportional to the fraction of shared input.
- Pooling from the same sources. Neurons do not share the same input neurons but are receiving inputs from neurons which themselves are correlated.
- Neurons are connected to each other (uni- or bi-directionally): This will only give rise to time-delayed synchrony. Neurons could also be connected via gap-junctions.
- Neurons have similar parameters and initial conditions.
## Implications of synchrony
When neurons spike together, they can have a stronger impact on downstream neurons. Synapses in the brain are sensitive to the temporal correlations (i.e., delay) between pre- and postsynaptic activity, and this, in turn, can lead to the formation of functional neuronal networks - the basis of unsupervised learning (we will study some of these concepts in a forthcoming tutorial).
Synchrony implies a reduction in the dimensionality of the system. In addition, correlations, in many cases, can impair the decoding of neuronal activity.
```
# @title Video 1: Input & output correlations
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="nsAYFBcAkes", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
## How to study the emergence of correlations
A simple model to study the emergence of correlations is to inject common inputs to a pair of neurons and measure the output correlation as a function of the fraction of common inputs.
Here, we are going to investigate the transfer of correlations by computing the correlation coefficient of spike trains recorded from two unconnected LIF neurons, which received correlated inputs.
The input current to LIF neuron $i$ $(i=1,2)$ is:
\begin{equation}
\frac{I_i}{g_L} =\mu_i + \sigma_i (\sqrt{1-c}\xi_i + \sqrt{c}\xi_c) \quad (1)
\end{equation}
where $\mu_i$ is the temporal average of the current. The Gaussian white noise $\xi_i$ is independent for each neuron, while $\xi_c$ is common to all neurons. The variable $c$ ($0\le c\le1$) controls the fraction of common and independent inputs. $\sigma_i$ sets the standard deviation of the total input.
So, first, we will generate correlated inputs.
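Note that Equation (1) yields a correlation coefficient of exactly $c$: the shared term contributes covariance $c\sigma^2$ while each input has variance $\sigma^2$. A standalone numpy check with illustrative parameters (independent of the tutorial's helpers):

```python
import numpy as np

rng = np.random.default_rng(2020)
n, mu, sig, c = 200_000, 20.0, 7.5, 0.3   # illustrative parameters

xi_1, xi_2, xi_c = rng.standard_normal((3, n))
I1 = mu + sig * (np.sqrt(1 - c) * xi_1 + np.sqrt(c) * xi_c)
I2 = mu + sig * (np.sqrt(1 - c) * xi_2 + np.sqrt(c) * xi_c)

r = np.corrcoef(I1, I2)[0, 1]
print(round(r, 2))   # ~0.3, matching the ground-truth c
```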
```
# @title
#@markdown Execute this cell to get a function for generating correlated GWN inputs
def correlate_input(pars, mu=20., sig=7.5, c=0.3):
"""
Args:
pars : parameter dictionary
mu : noise baseline (mean)
sig : noise amplitude (standard deviation)
c : correlation coefficient ~[0, 1]
Returns:
I1gL, I2gL : two correlated inputs with corr. coe. c
"""
# generate Gaussian white noise xi_1, xi_2, xi_c
xi_1 = my_GWN(pars, sig)
xi_2 = my_GWN(pars, sig)
xi_c = my_GWN(pars, sig)
# Generate two correlated inputs by Equation. (1)
I1gL = mu + np.sqrt(1. - c) * xi_1 + np.sqrt(c) * xi_c
I2gL = mu + np.sqrt(1. - c) * xi_2 + np.sqrt(c) * xi_c
return I1gL, I2gL
print(help(correlate_input))
```
### Exercise 1: Compute the correlation
The _sample correlation coefficient_ between two input currents $I_i$ and $I_j$ is defined as the sample covariance of $I_i$ and $I_j$ divided by the square root of the sample variance of $I_i$ multiplied with the square root of the sample variance of $I_j$. In equation form:
\begin{align}
r_{ij} &= \frac{cov(I_i, I_j)}{\sqrt{var(I_i)} \sqrt{var(I_j)}}\\
cov(I_i, I_j) &= \sum_{k=1}^L (I_i^k -\bar{I}_i)(I_j^k -\bar{I}_j) \\
var(I_i) &= \sum_{k=1}^L (I_i^k -\bar{I}_i)^2
\end{align}
where $\bar{I}_i$ is the sample mean, k is the time bin, and L is the length of $I$. This means that $I_i^k$ is current i at time $k\cdot dt$. Note that the equations above are not accurate for sample covariances and variances as they should be additionally divided by L-1 - we have dropped this term because it cancels out in the sample correlation coefficient formula.
The _sample correlation coefficient_ may also be referred to as the _sample Pearson correlation coefficient_. Here is a beautiful paper that explains multiple ways to calculate and understand correlations [Rodgers and Nicewander 1988](https://www.stat.berkeley.edu/~rabbee/correlation.pdf).
In this exercise, we will create a function, `my_CC` to compute the sample correlation coefficient between two time series. Note that while we introduced this computation here in the context of input currents, the sample correlation coefficient is used to compute the correlation between any two time series - we will use it later on binned spike trains.
```
def my_CC(i, j):
"""
Args:
i, j : two time series with the same length
Returns:
rij : correlation coefficient
"""
########################################################################
## TODO for students: compute rxy, then remove the NotImplementedError #
# Tip1: array([a1, a2, a3])*array([b1, b2, b3]) = array([a1*b1, a2*b2, a3*b3])
# Tip2: np.sum(array([a1, a2, a3])) = a1+a2+a3
# Tip3: square root, np.sqrt()
# Fill out function and remove
raise NotImplementedError("Student exercise: compute the sample correlation coefficient")
########################################################################
# Calculate the covariance of i and j
cov = ...
# Calculate the variance of i
var_i = ...
# Calculate the variance of j
var_j = ...
# Calculate the correlation coefficient
rij = ...
return rij
# Uncomment the line after completing the my_CC function
# example_plot_myCC()
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_RealNeurons/solutions/W2D3_Tutorial2_Solution_03e44bdc.py)
*Example output:*
<img alt='Solution hint' align='left' width=558 height=413 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D3_RealNeurons/static/W2D3_Tutorial2_Solution_03e44bdc_0.png>
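Once implemented, `my_CC` can be cross-checked against NumPy's built-in `np.corrcoef`, which computes the same sample Pearson coefficient. A standalone check using a stand-in implementation of the same formula (toy data):

```python
import numpy as np

def sample_cc(i, j):
    # the formula from Exercise 1: covariance over the product of std deviations
    di, dj = i - i.mean(), j - j.mean()
    return np.sum(di * dj) / np.sqrt(np.sum(di ** 2) * np.sum(dj ** 2))

rng = np.random.default_rng(1)
a = rng.standard_normal(1000)
b = 0.5 * a + rng.standard_normal(1000)

print(np.isclose(sample_cc(a, b), np.corrcoef(a, b)[0, 1]))  # True
```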
### Exercise 2: Measure the correlation between spike trains
After recording the spike times of the two neurons, how can we estimate their correlation coefficient?
In order to find this, we need to bin the spike times and obtain two time series. Each data point in the time series is the number of spikes in the corresponding time bin. You can use `np.histogram()` to bin the spike times.
Complete the code below to bin the spike times and calculate the correlation coefficient for two Poisson spike trains. Note that `c` here is the ground-truth correlation coefficient that we define.
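For example, binning a handful of illustrative spike times (in ms) into 20 ms bins with `np.histogram`:

```python
import numpy as np

spike_times = np.array([3., 15., 27., 41., 44., 90.])  # toy spike times (ms)
T, bin_size = 100., 20.                                # duration and bin width (ms)
bins = np.arange(0, T + bin_size, bin_size)            # edges: 0, 20, ..., 100

counts, _ = np.histogram(spike_times, bins=bins)
print(counts)   # [2 1 2 0 1] -> spikes per 20 ms bin
```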
```
# @title
# @markdown Execute this cell to get a function for generating correlated Poisson inputs (generate_corr_Poisson)
def generate_corr_Poisson(pars, poi_rate, c, myseed=False):
"""
function to generate correlated Poisson type spike trains
Args:
pars : parameter dictionary
poi_rate : rate of the Poisson train
c : correlation coefficient ~[0, 1]
Returns:
sp1, sp2 : two correlated spike time trains with corr. coe. c
"""
range_t = pars['range_t']
mother_rate = poi_rate / c # note: assumes c > 0
mother_spike_train = Poisson_generator(pars, rate=mother_rate,
n=1, myseed=myseed)[0]
sp_mother = range_t[mother_spike_train > 0]
L_sp_mother = len(sp_mother)
sp_mother_id = np.arange(L_sp_mother)
L_sp_corr = int(L_sp_mother * c)
np.random.shuffle(sp_mother_id)
sp1 = np.sort(sp_mother[sp_mother_id[:L_sp_corr]])
np.random.shuffle(sp_mother_id)
sp2 = np.sort(sp_mother[sp_mother_id[:L_sp_corr]])
return sp1, sp2
print(help(generate_corr_Poisson))
def corr_coeff_pairs(pars, rate, c, trials, bins):
"""
Calculate the correlation coefficient of two spike trains, for different
realizations
Args:
pars : parameter dictionary
rate : rate of poisson inputs
c : correlation coefficient ~ [0, 1]
trials : number of realizations
bins : vector with bins for time discretization
Returns:
r12 : correlation coefficient of a pair of inputs
"""
r12 = np.zeros(trials)
for i in range(trials):
##############################################################
## TODO for students: Use np.histogram to bin the spike time #
## e.g., sp1_count, _= np.histogram(...)
# Use my_CC() compute corr coe, compare with c
# Note that you can run multiple realizations and compute their r_12(diff_trials)
# with the defined function above. The average r_12 over trials can get close to c.
# Note: change seed to generate different input per trial
# Fill out function and remove
raise NotImplementedError("Student exercise: compute the correlation coefficient")
##############################################################
# Generate correlated Poisson inputs
sp1, sp2 = generate_corr_Poisson(pars, ..., ..., myseed=2020+i)
# Bin the spike times of the first input
sp1_count, _ = np.histogram(..., bins=...)
# Bin the spike times of the second input
sp2_count, _ = np.histogram(..., bins=...)
# Calculate the correlation coefficient
r12[i] = my_CC(..., ...)
return r12
poi_rate = 20.
c = 0.2 # set true correlation
pars = default_pars(T=10000)
# bin the spike time
bin_size = 20 # [ms]
my_bin = np.arange(0, pars['T'], bin_size)
n_trials = 100 # 100 realizations
# Uncomment to test your function
# r12 = corr_coeff_pairs(pars, rate=poi_rate, c=c, trials=n_trials, bins=my_bin)
# print(f'True corr coe = {c:.3f}')
# print(f'Simu corr coe = {r12.mean():.3f}')
```
Sample output
```
True corr coe = 0.200
Simu corr coe = 0.197
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_RealNeurons/solutions/W2D3_Tutorial2_Solution_e5eaac3e.py)
---
# Section 2: Investigate the effect of input correlation on the output correlation
Now let's combine the two procedures above. We first generate correlated inputs via Equation (1), then inject $I_1, I_2$ into a pair of neurons and record their output spike times. Finally, we measure the correlation between the output spike trains and investigate how it depends on the input correlation.
## Drive a neuron with correlated inputs and visualize its output
In the following, you will inject correlated GWN in two neurons. You need to define the mean (`gwn_mean`), standard deviation (`gwn_std`), and input correlations (`c_in`).
We will simulate $10$ trials to get a better estimate of the output correlation. Change the values in the following cell for the above variables (and then run the next cell) to explore how they impact the output correlation.
```
# Play around with these parameters
pars = default_pars(T=80000, dt=1.) # get the parameters
c_in = 0.3 # set input correlation value
gwn_mean = 10.
gwn_std = 10.
# @title
# @markdown Do not forget to execute this cell to simulate the LIF
bin_size = 10. # ms
starttime = time.perf_counter() # time clock
r12_ss, sp_ss, sp1, sp2 = LIF_output_cc(pars, mu=gwn_mean, sig=gwn_std, c=c_in,
bin_size=bin_size, n_trials=10)
# just the time counter
endtime = time.perf_counter()
timecost = (endtime - starttime) / 60.
print(f"Simulation time = {timecost:.2f} min")
print(f"Input correlation = {c_in}")
print(f"Output correlation = {r12_ss}")
plt.figure(figsize=(12, 6))
plt.plot(sp1, np.ones(len(sp1)) * 1, '|', ms=20, label='neuron 1')
plt.plot(sp2, np.ones(len(sp2)) * 1.1, '|', ms=20, label='neuron 2')
plt.xlabel('time (ms)')
plt.ylabel('neuron id.')
plt.xlim(1000, 8000)
plt.ylim(0.9, 1.2)
plt.legend()
plt.show()
```
## Think!
- Is the output correlation always smaller than the input correlation? If yes, why?
- Should there be a systematic relationship between input and output correlations?
You will explore these questions in the next figure but try to develop your own intuitions first!
Let's vary `c_in` and plot the relationship between `c_in` and the output correlation. This might take some time depending on the number of trials.
```
#@title
#@markdown Don't forget to execute this cell!
pars = default_pars(T=80000, dt=1.) # get the parameters
bin_size = 10.
c_in = np.arange(0, 1.0, 0.1) # set the range for input CC
r12_ss = np.zeros(len(c_in)) # small mu, small sigma
starttime = time.perf_counter() # time clock
for ic in range(len(c_in)):
r12_ss[ic], sp_ss, sp1, sp2 = LIF_output_cc(pars, mu=10.0, sig=10.,
c=c_in[ic], bin_size=bin_size,
n_trials=10)
endtime = time.perf_counter()
timecost = (endtime - starttime) / 60.
print(f"Simulation time = {timecost:.2f} min")
plt.figure(figsize=(7, 6))
plot_c_r_LIF(c_in, r12_ss, mycolor='b', mylabel='Output CC')
plt.plot([c_in.min() - 0.05, c_in.max() + 0.05],
[c_in.min() - 0.05, c_in.max() + 0.05],
'k--', dashes=(2, 2), label='y=x')
plt.xlabel('Input CC')
plt.ylabel('Output CC')
plt.legend(loc='best', fontsize=16)
plt.show()
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_RealNeurons/solutions/W2D3_Tutorial2_Solution_71e76f4d.py)
---
# Section 3: Correlation transfer function
The above plot of input correlation vs. output correlation is called the __correlation transfer function__ of the neurons.
## Section 3.1: How do the mean and standard deviation of the GWN affect the correlation transfer function?
The correlation transfer function appears to be linear. It can be taken as the input/output transfer function of LIF neurons for correlations, analogous to the input/output transfer function for firing rates discussed in the previous tutorial (the F-I curve).
What would you expect to happen to the slope of the correlation transfer function if you vary the mean and/or the standard deviation of the GWN?
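The helper `plot_c_r_LIF` estimates that slope with `np.polyfit(c, r, deg=1)`; the same estimate in isolation, on synthetic stand-in data:

```python
import numpy as np

c_in = np.arange(0., 1., 0.2)        # input correlations: 0.0, 0.2, ..., 0.8
r_out = 0.6 * c_in + 0.01            # synthetic output correlations (slope < 1)

slope, intercept = np.polyfit(c_in, r_out, deg=1)
print(round(slope, 3), round(intercept, 3))   # 0.6 0.01
```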
```
#@markdown Execute this cell to visualize correlation transfer functions
pars = default_pars(T=80000, dt=1.) # get the parameters
no_trial = 10
bin_size = 10.
c_in = np.arange(0., 1., 0.2) # set the range for input CC
r12_ss = np.zeros(len(c_in)) # small mu, small sigma
r12_ls = np.zeros(len(c_in)) # large mu, small sigma
r12_sl = np.zeros(len(c_in)) # small mu, large sigma
starttime = time.perf_counter() # time clock
for ic in range(len(c_in)):
r12_ss[ic], sp_ss, sp1, sp2 = LIF_output_cc(pars, mu=10.0, sig=10.,
c=c_in[ic], bin_size=bin_size,
n_trials=no_trial)
r12_ls[ic], sp_ls, sp1, sp2 = LIF_output_cc(pars, mu=18.0, sig=10.,
c=c_in[ic], bin_size=bin_size,
n_trials=no_trial)
r12_sl[ic], sp_sl, sp1, sp2 = LIF_output_cc(pars, mu=10.0, sig=20.,
c=c_in[ic], bin_size=bin_size,
n_trials=no_trial)
endtime = time.perf_counter()
timecost = (endtime - starttime) / 60.
print(f"Simulation time = {timecost:.2f} min")
plt.figure(figsize=(7, 6))
plot_c_r_LIF(c_in, r12_ss, mycolor='b', mylabel=r'Small $\mu$, small $\sigma$')
plot_c_r_LIF(c_in, r12_ls, mycolor='y', mylabel=r'Large $\mu$, small $\sigma$')
plot_c_r_LIF(c_in, r12_sl, mycolor='r', mylabel=r'Small $\mu$, large $\sigma$')
plt.plot([c_in.min() - 0.05, c_in.max() + 0.05],
[c_in.min() - 0.05, c_in.max() + 0.05],
'k--', dashes=(2, 2), label='y=x')
plt.xlabel('Input CC')
plt.ylabel('Output CC')
plt.legend(loc='best', fontsize=14)
plt.show()
```
### Think!
Why do both the mean and the standard deviation of the GWN affect the slope of the correlation transfer function?
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_RealNeurons/solutions/W2D3_Tutorial2_Solution_2deb4ccb.py)
## Section 3.2: What is the rationale behind varying $\mu$ and $\sigma$?
The mean and the variance of the synaptic current depend on the spike rate of a Poisson process. We can use [Campbell's theorem](https://en.wikipedia.org/wiki/Campbell%27s_theorem_(probability)) to estimate the mean and the variance of the synaptic current:
\begin{align}
\mu_{\rm syn} &= \lambda J \int P(t) \, dt \\
\sigma_{\rm syn}^2 &= \lambda J^2 \int P(t)^2 \, dt
\end{align}
where $\lambda$ is the firing rate of the Poisson input, $J$ the amplitude of the postsynaptic current and $P(t)$ is the shape of the postsynaptic current as a function of time.
Therefore, when we varied $\mu$ and/or $\sigma$ of the GWN, we mimicked a change in the input firing rate. Note that, if we change the firing rate, both $\mu$ and $\sigma$ will change simultaneously, not independently.
Here, since we observe an effect of $\mu$ and $\sigma$ on correlation transfer, this implies that the input rate has an impact on the correlation transfer function.
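As a numerical sanity check of Campbell's theorem, we can compare the integrals against the closed-form values for an exponential PSC shape $P(t) = e^{-t/\tau}$, for which $\int P\,dt = \tau$ and $\int P(t)^2\,dt = \tau/2$. The rate $\lambda$, amplitude $J$, and time constant $\tau$ below are arbitrary illustrative values:

```python
import numpy as np

lam, J, tau = 5., 2., 3.  # illustrative rate, PSC amplitude, decay constant
dt = 0.001
t = np.arange(0., 50., dt)
P = np.exp(-t / tau)  # exponential PSC shape

mu_syn = lam * J * P.sum() * dt           # mean: lam * J * integral of P
var_syn = lam * J**2 * (P**2).sum() * dt  # variance: lam * J^2 * integral of P^2

print(mu_syn, lam * J * tau)           # both ~ 30
print(var_syn, lam * J**2 * tau / 2)   # both ~ 30
```

Both moments scale with $\lambda$, which is why changing the input rate moves $\mu$ and $\sigma$ together.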
### Think!
- What are the factors that would make output correlations smaller than input correlations? (Notice that the colored lines are below the black dashed line)
- What does it mean for the correlation in the network?
- Here we have studied the transfer of correlations by injecting GWN. But in the previous tutorial, we mentioned that GWN is unphysiological. Indeed, neurons receive colored noise (e.g., shot noise or an OU process). How do these results obtained from injecting GWN apply to the case where correlated spiking inputs are injected into the two LIFs? Will the results be the same or different?
**References**
- De la Rocha, J., et al. "Correlation between neural spike trains increases with firing rate." Nature (2007). (https://www.nature.com/articles/nature06028/)
- Bujan, A. F., Aertsen, A., & Kumar, A. "Role of input correlations in shaping the variability and noise correlations of evoked activity in the neocortex." Journal of Neuroscience 35.22 (2015): 8611-8625. (https://www.jneurosci.org/content/35/22/8611)
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_RealNeurons/solutions/W2D3_Tutorial2_Solution_39d29f52.py)
---
# Summary
In this tutorial, we studied how the input correlation of two LIF neurons is mapped to their output correlation. Specifically, we:
- injected correlated GWN in a pair of neurons,
- measured correlations between the spiking activity of the two neurons, and
- studied how the transfer of correlation depends on the statistics of the input, i.e., mean and standard deviation.
Here, we were concerned with zero-time-lag correlation, so we restricted the estimation to instantaneous correlations. If you are interested in time-lagged correlations, you should estimate the cross-correlogram of the spike trains, find the dominant peak, and take the area under the peak as an estimate of the output correlation.
We leave this as a to-do for you, if you are interested.
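For the time-lagged case, the cross-correlogram can be sketched with `np.correlate` on binned, mean-subtracted spike counts. Toy counts stand in for the tutorial's `spk_1` and `spk_2` here; building `s2` as a shifted copy of `s1` is just a way to plant a known lag:

```python
import numpy as np

rng = np.random.default_rng(0)

n_bins, lag_true = 200, 3
s1 = rng.poisson(2., n_bins).astype(float)
s2 = np.roll(s1, lag_true) + rng.poisson(0.5, n_bins)  # s1 delayed by 3 bins, plus noise

x = s1 - s1.mean()
y = s2 - s2.mean()
ccg = np.correlate(y, x, mode='full')   # cross-correlogram over all lags
lags = np.arange(-n_bins + 1, n_bins)
peak_lag = lags[np.argmax(ccg)]
print(f"dominant peak at lag = {peak_lag} bins")
```

The area under the dominant peak would then give the output correlation estimate.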
---
# Bonus 1: Example of a conductance-based LIF model
Above, we have written code to generate correlated Poisson spike trains. You can write code to stimulate the LIF neuron with such correlated spike trains and study the correlation transfer function for spiking input and compare it to the correlation transfer function obtained by injecting correlated GWNs.
```
# @title Function to simulate conductance-based LIF
def run_LIF_cond(pars, I_inj, pre_spike_train_ex, pre_spike_train_in):
"""
conductance-based LIF dynamics
Args:
pars : parameter dictionary
I_inj : injected current [pA]. The injected current here can
be a value or an array
pre_spike_train_ex : spike train input from presynaptic excitatory neuron
pre_spike_train_in : spike train input from presynaptic inhibitory neuron
Returns:
rec_spikes : spike times
rec_v : membrane potential
gE : postsynaptic excitatory conductance
gI : postsynaptic inhibitory conductance
"""
# Retrieve parameters
V_th, V_reset = pars['V_th'], pars['V_reset']
tau_m, g_L = pars['tau_m'], pars['g_L']
V_init, E_L = pars['V_init'], pars['E_L']
gE_bar, gI_bar = pars['gE_bar'], pars['gI_bar']
VE, VI = pars['VE'], pars['VI']
tau_syn_E, tau_syn_I = pars['tau_syn_E'], pars['tau_syn_I']
tref = pars['tref']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# Initialize
tr = 0.
v = np.zeros(Lt)
v[0] = V_init
gE = np.zeros(Lt)
gI = np.zeros(Lt)
Iinj = I_inj * np.ones(Lt) # ensure I has length Lt
if pre_spike_train_ex.max() == 0:
pre_spike_train_ex_total = np.zeros(Lt)
else:
pre_spike_train_ex_total = pre_spike_train_ex * np.ones(Lt)
if pre_spike_train_in.max() == 0:
pre_spike_train_in_total = np.zeros(Lt)
else:
pre_spike_train_in_total = pre_spike_train_in * np.ones(Lt)
# simulation
rec_spikes = [] # recording spike times
for it in range(Lt - 1):
if tr > 0:
v[it] = V_reset
tr = tr - 1
elif v[it] >= V_th: # reset voltage and record spike event
rec_spikes.append(it)
v[it] = V_reset
tr = tref / dt
# update the synaptic conductance
gE[it+1] = gE[it] - (dt / tau_syn_E) * gE[it] + gE_bar * pre_spike_train_ex_total[it + 1]
gI[it+1] = gI[it] - (dt / tau_syn_I) * gI[it] + gI_bar * pre_spike_train_in_total[it + 1]
# calculate the increment of the membrane potential
dv = (-(v[it] - E_L) - (gE[it + 1] / g_L) * (v[it] - VE) - \
(gI[it + 1] / g_L) * (v[it] - VI) + Iinj[it] / g_L) * (dt / tau_m)
# update membrane potential
v[it + 1] = v[it] + dv
rec_spikes = np.array(rec_spikes) * dt
return v, rec_spikes, gE, gI
help(run_LIF_cond)
```
## Interactive Demo: Correlated spike input to an LIF neuron
In the following you can explore what happens when the neurons receive correlated spiking input.
You can vary the correlation between excitatory input spike trains. For simplicity, the correlation between inhibitory spike trains is set to 0.01.
Vary both excitatory rate and correlation and see how the output correlation changes. Check if the results are qualitatively similar to what you observed previously when you varied the $\mu$ and $\sigma$.
```
# @title
# @markdown Make sure you execute this cell to enable the widget!
my_layout.width = '450px'
@widgets.interact(
pwc_ee=widgets.FloatSlider(0.3, min=0.05, max=0.99, step=0.01,
layout=my_layout),
exc_rate=widgets.FloatSlider(1e3, min=500., max=5e3, step=50.,
layout=my_layout),
inh_rate=widgets.FloatSlider(500., min=300., max=5e3, step=5.,
layout=my_layout),
)
def EI_isi_regularity(pwc_ee, exc_rate, inh_rate):
pars = default_pars(T=1000.)
# Add parameters
pars['V_th'] = -55. # spike threshold [mV]
pars['V_reset'] = -75. # reset potential [mV]
pars['tau_m'] = 10. # membrane time constant [ms]
pars['g_L'] = 10. # leak conductance [nS]
pars['V_init'] = -65. # initial potential [mV]
pars['E_L'] = -75. # leak reversal potential [mV]
pars['tref'] = 2. # refractory time (ms)
pars['gE_bar'] = 4.0 # [nS]
pars['VE'] = 0. # [mV] excitatory reversal potential
pars['tau_syn_E'] = 2. # [ms]
pars['gI_bar'] = 2.4 # [nS]
pars['VI'] = -80. # [mV] inhibitory reversal potential
pars['tau_syn_I'] = 5. # [ms]
my_bin = np.arange(0, pars['T']+pars['dt'], .1) # 0.1 [ms] bin size
# exc_rate = 1e3
# inh_rate = 0.4e3
# pwc_ee = 0.3
pwc_ii = 0.01
# generate two correlated spike trains for excitatory input
sp1e, sp2e = generate_corr_Poisson(pars, exc_rate, pwc_ee)
sp1_spike_train_ex, _ = np.histogram(sp1e, bins=my_bin)
sp2_spike_train_ex, _ = np.histogram(sp2e, bins=my_bin)
# generate two uncorrelated spike trains for inhibitory input
sp1i, sp2i = generate_corr_Poisson(pars, inh_rate, pwc_ii)
sp1_spike_train_in, _ = np.histogram(sp1i, bins=my_bin)
sp2_spike_train_in, _ = np.histogram(sp2i, bins=my_bin)
v1, rec_spikes1, gE, gI = run_LIF_cond(pars, 0, sp1_spike_train_ex, sp1_spike_train_in)
v2, rec_spikes2, gE, gI = run_LIF_cond(pars, 0, sp2_spike_train_ex, sp2_spike_train_in)
# bin the spike time
bin_size = 20 # [ms]
my_bin = np.arange(0, pars['T'], bin_size)
spk_1, _ = np.histogram(rec_spikes1, bins=my_bin)
spk_2, _ = np.histogram(rec_spikes2, bins=my_bin)
r12 = my_CC(spk_1, spk_2)
print(f"Input correlation = {pwc_ee}")
print(f"Output correlation = {r12}")
plt.figure(figsize=(14, 7))
plt.subplot(211)
plt.plot(sp1e, np.ones(len(sp1e)) * 1, '|', ms=20,
label='Exc. input 1')
plt.plot(sp2e, np.ones(len(sp2e)) * 1.1, '|', ms=20,
label='Exc. input 2')
plt.plot(sp1i, np.ones(len(sp1i)) * 1.3, '|k', ms=20,
label='Inh. input 1')
plt.plot(sp2i, np.ones(len(sp2i)) * 1.4, '|k', ms=20,
label='Inh. input 2')
plt.ylim(0.9, 1.5)
plt.legend()
plt.ylabel('neuron id.')
plt.subplot(212)
plt.plot(pars['range_t'], v1, label='neuron 1')
plt.plot(pars['range_t'], v2, label='neuron 2')
plt.xlabel('time (ms)')
plt.ylabel('membrane voltage $V_{m}$')
plt.tight_layout()
plt.show()
```
Above, we are estimating the output correlation for one trial. You can modify the code to get a trial average of output correlations.
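A trial average can be sketched by looping the single-trial Pearson estimate and averaging; here surrogate correlated counts (a shared Poisson component, an assumption made only for this sketch) stand in for the binned LIF outputs `spk_1` and `spk_2`:

```python
import numpy as np

rng = np.random.default_rng(1)

def pearson_cc(x, y):
    x = x - x.mean()
    y = y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

def one_trial(n_bins=500, c=0.3, rate=5.):
    # Shared Poisson component induces a correlation of roughly c
    shared = rng.poisson(c * rate, n_bins)
    s1 = shared + rng.poisson((1 - c) * rate, n_bins)
    s2 = shared + rng.poisson((1 - c) * rate, n_bins)
    return pearson_cc(s1, s2)

r12_mean = np.mean([one_trial() for _ in range(20)])
print(f"trial-averaged output CC = {r12_mean:.3f}")  # close to c = 0.3
```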
---
# Bonus 2: Ensemble Response
Finally, there is a short BONUS lecture video on the firing response of an ensemble of neurons to time-varying input. There are no associated coding exercises - just enjoy.
```
#@title Video 2 (Bonus): Response of ensemble of neurons to time-varying input
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="78_dWa4VOIo", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
```
import numpy as np
filename = 'glove.840B.300d.txt'
# (glove data set from: https://nlp.stanford.edu/projects/glove/)
word_vec_dim = 300 # word_vec_dim = dimension of each word vectors
def loadEmbeddings(filename):
vocab2embd = {}
with open(filename) as infile:
for line in infile:
row = line.strip().split(' ')
word = row[0].lower()
# print(word)
if word not in vocab2embd:
vec = np.asarray(row[1:], np.float32)
if len(vec) == word_vec_dim:
vocab2embd[word] = vec
print('Embedding Loaded.')
return vocab2embd
# Pre-trained word embedding
vocab2embd = loadEmbeddings(filename)
vocab2embd['<UNK>'] = np.random.randn(word_vec_dim)
vocab2embd['<GO>'] = np.random.randn(word_vec_dim)
vocab2embd['<PRED>'] = np.random.randn(word_vec_dim)
vocab2embd['<EOS>'] = np.random.randn(word_vec_dim)
vocab2embd['<PAD>'] = np.zeros(word_vec_dim)
import csv
import nltk
#nltk.download('punkt')
from nltk import word_tokenize
import string
summaries = []
texts = []
def clean(text):
text = text.lower()
printable = set(string.printable)
text = "".join(list(filter(lambda x: x in printable, text))) #filter funny characters, if any.
return text
counter={}
max_len_text = 100
max_len_sum = 20
#max_data = 100000
i=0
with open('Reviews.csv', 'rt') as csvfile: #Data from https://www.kaggle.com/snap/amazon-fine-food-reviews
Reviews = csv.DictReader(csvfile)
count=0
for row in Reviews:
#if count<max_data:
clean_text = word_tokenize(clean(row['Text']))
clean_summary = word_tokenize(clean(row['Summary']))
if len(clean_text) <= max_len_text and len(clean_summary) <= max_len_sum:
for word in clean_text:
if word in vocab2embd:
counter[word]=counter.get(word,0)+1
for word in clean_summary:
if word in vocab2embd:
counter[word]=counter.get(word,0)+1
summaries.append(clean_summary)
texts.append(clean_text)
#count+=1
if i%10000==0:
print("Processing data {}".format(i))
i+=1
print("Current size of data: "+str(len(texts)))
vocab = [word for word in counter]
counts = [counter[word] for word in vocab]
sorted_idx = sorted(range(len(counts)), key=counts.__getitem__)
sorted_idx.reverse()
vocab = [vocab[idx] for idx in sorted_idx]
special_tags = ["<UNK>","<GO>","<PRED>","<EOS>","<PAD>"]
if len(vocab) > 40000-len(special_tags):
vocab = vocab[0:40000-len(special_tags)]
vocab += special_tags
vocab_dict = {word:i for i,word in enumerate(vocab)}
embeddings = []
for word in vocab:
embeddings.append(vocab2embd[word].tolist())
# SHUFFLE
import random
texts_idx = [idx for idx in range(0,len(texts))]
random.shuffle(texts_idx)
texts = [texts[idx] for idx in texts_idx]
summaries = [summaries[idx] for idx in texts_idx]
import random
index = random.randint(0,len(texts)-1)
print("SAMPLE CLEANED & TOKENIZED TEXT: \n\n"+str(texts[index]))
print("\nSAMPLE CLEANED & TOKENIZED SUMMARY: \n\n"+str(summaries[index]))
train_len = int(.7*len(texts))
val_len = int(.2*len(texts))
train_summaries = summaries[0:train_len]
train_texts = texts[0:train_len]
val_summaries = summaries[train_len:val_len+train_len]
val_texts = texts[train_len:train_len+val_len]
test_summaries = summaries[train_len+val_len:]
test_texts = texts[train_len+val_len:]
def bucket_and_batch(texts, summaries, batch_size=32):
global vocab_dict
vocab2idx = vocab_dict
PAD = vocab2idx['<PAD>']
EOS = vocab2idx['<EOS>']
UNK = vocab2idx['<UNK>']
true_seq_lens = np.zeros((len(texts)), dtype=int)
for i in range(len(texts)):
true_seq_lens[i] = len(texts[i])
# sorted in descending order after flip
sorted_by_len_indices = np.flip(np.argsort(true_seq_lens), 0)
sorted_texts = []
sorted_summaries = []
for i in range(len(texts)):
sorted_texts.append(texts[sorted_by_len_indices[i]])
sorted_summaries.append(summaries[sorted_by_len_indices[i]])
i = 0
batches_texts = []
batches_summaries = []
batches_true_seq_in_lens = []
batches_true_seq_out_lens = []
while i < len(sorted_texts):
if i+batch_size > len(sorted_texts):
batch_size = len(sorted_texts)-i
batch_texts = []
batch_summaries = []
batch_true_seq_in_lens = []
batch_true_seq_out_lens = []
max_in_len = len(sorted_texts[i])
max_out_len = max([len(sorted_summaries[j])+1 for j in range(i,i+batch_size)])
for j in range(i, i + batch_size):
text = sorted_texts[j]
summary = sorted_summaries[j]
text = [vocab2idx.get(word,UNK) for word in text]
summary = [vocab2idx.get(word,UNK) for word in summary]
init_in_len = len(text)
init_out_len = len(summary)+1 # +1 for EOS
while len(text) < max_in_len:
text.append(PAD)
summary.append(EOS)
while len(summary) < max_out_len:
summary.append(PAD)
batch_summaries.append(summary)
batch_texts.append(text)
batch_true_seq_in_lens.append(init_in_len)
batch_true_seq_out_lens.append(init_out_len)
#batch_texts = np.asarray(batch_texts, dtype=np.int32)
#batch_summaries = np.asarray(batch_summaries, dtype=np.int32)
#batch_true_seq_in_lens = np.asarray(batch_true_seq_in_lens, dtype=np.int32)
#batch_true_seq_out_lens = np.asarray(batch_true_seq_out_lens, dtype=np.int32)
batches_texts.append(batch_texts)
batches_summaries.append(batch_summaries)
batches_true_seq_in_lens.append(batch_true_seq_in_lens)
batches_true_seq_out_lens.append(batch_true_seq_out_lens)
i += batch_size
return batches_texts, batches_summaries, batches_true_seq_in_lens, batches_true_seq_out_lens
train_batches_x,train_batches_y,\
train_batches_in_lens, train_batches_out_lens = bucket_and_batch(train_texts,train_summaries)
val_batches_x,val_batches_y,\
val_batches_in_lens,val_batches_out_lens= bucket_and_batch(val_texts,val_summaries)
test_batches_x,test_batches_y,\
test_batches_in_lens,test_batches_out_lens= bucket_and_batch(test_texts,test_summaries)
#Saving processed data in another file.
import json
diction = {}
diction['vocab']=vocab
diction['embd']=embeddings
diction['train_batches_x']=train_batches_x
diction['train_batches_y']=train_batches_y
diction['train_batches_in_len'] = train_batches_in_lens
diction['train_batches_out_len'] = train_batches_out_lens
diction['val_batches_x']=val_batches_x
diction['val_batches_y']=val_batches_y
diction['val_batches_in_len'] = val_batches_in_lens
diction['val_batches_out_len'] = val_batches_out_lens
diction['test_batches_x']=test_batches_x
diction['test_batches_y']=test_batches_y
diction['test_batches_in_len'] = test_batches_in_lens
diction['test_batches_out_len'] = test_batches_out_lens
with open('ProcessedData.json', 'w') as fp:
json.dump(diction, fp)
```
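The bucketing-and-padding convention used in `bucket_and_batch` above (sort by length, pad texts to the batch maximum with `<PAD>`, append `<EOS>` to each summary before padding) can be sanity-checked on toy index sequences; the integer IDs below are placeholders, not the real vocabulary:

```python
PAD, EOS = 0, 1  # placeholder IDs for <PAD> and <EOS>

def pad_batch(seqs, pad, eos=None):
    if eos is not None:
        seqs = [s + [eos] for s in seqs]  # append EOS before padding
    max_len = max(len(s) for s in seqs)
    return [s + [pad] * (max_len - len(s)) for s in seqs]

padded_texts = pad_batch([[7, 8, 9], [5]], PAD)
padded_sums = pad_batch([[3], [4, 6]], PAD, eos=EOS)
print(padded_texts)  # [[7, 8, 9], [5, 0, 0]]
print(padded_sums)   # [[3, 1, 0], [4, 6, 1]]
```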
## Merging DataFrames
Here, you'll learn all about merging pandas DataFrames. You'll explore different techniques for merging and learn about left joins, right joins, inner joins, and outer joins, as well as when to use each. You'll also learn about ordered merging, which is useful when you want to merge DataFrames whose columns have natural orderings, like date-time columns.
## Merging on a specific column
This exercise follows on from the last one with the DataFrames revenue and managers for your company. You expect your company to grow and, eventually, to operate in cities with the same name in different states. As such, you decide that every branch should have a numerical branch identifier, so you add a branch_id column to both DataFrames. Moreover, new cities have been added to both the revenue and managers DataFrames. pandas has been imported as pd and both DataFrames are available in your namespace.
At present, there should be a 1-to-1 relationship between the city and branch_id fields. In that case, the result of a merge on the city columns ought to give you the same output as a merge on the branch_id columns. Do they? Can you spot an ambiguity in one of the DataFrames?
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
revenue = pd.DataFrame({'city':['Austin', 'Denver', 'Springfield', 'Mendocino'], 'branch_id':[10,20,30,47], 'revenue':[100,83,4,200]})
revenue
managers = pd.DataFrame({'city':['Austin', 'Denver', 'Springfield', 'Mendocino'], 'branch_id':[10,20,47,31], 'managers':['Charles', 'Joel', 'Brett', 'Sally']})
managers
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'all'
# Merge revenue with managers on 'city': merge_by_city
merge_by_city = pd.merge(revenue, managers, on = 'city')
# Print merge_by_city
merge_by_city
# Merge revenue with managers on 'branch_id': merge_by_id
merge_by_id = pd.merge(revenue, managers, on = 'branch_id')
# Print merge_by_id
merge_by_id
```
__Well done! Notice that when you merge on 'city', the resulting DataFrame has a peculiar result: In row 2, the city Springfield has two different branch IDs. This is because there are actually two different cities named Springfield - one in the State of Illinois, and the other in Missouri. The revenue DataFrame has the one from Illinois, and the managers DataFrame has the one from Missouri. Consequently, when you merge on 'branch_id', both of these get dropped from the merged DataFrame.__
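As a side note, pandas can surface this kind of ambiguity for you. With toy frames (not the exercise data), `indicator=True` tags each merged row with its origin, and `validate='one_to_one'` raises `MergeError` whenever a merge key is duplicated on either side:

```python
import pandas as pd

left = pd.DataFrame({'city': ['Austin', 'Springfield'], 'branch_id': [10, 30]})
right = pd.DataFrame({'city': ['Austin', 'Springfield'], 'branch_id': [10, 47]})

# indicator=True adds a '_merge' column: 'both', 'left_only', or 'right_only'
merged = pd.merge(left, right, on='branch_id', how='outer', indicator=True)
print(merged[['branch_id', '_merge']])

# validate='one_to_one' raises pd.errors.MergeError on duplicated keys;
# here branch_id is unique on both sides, so this passes silently
pd.merge(left, right, on='branch_id', validate='one_to_one')
```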
```
managers['branch'] = managers['city']
managers.drop('city', axis = 1, inplace = True)
managers
```
## Merging on columns with non-matching labels
You continue working with the revenue & managers DataFrames from before. This time, someone has changed the field name 'city' to 'branch' in the managers table. Now, when you attempt to merge DataFrames, an exception is thrown:
```python
>>> pd.merge(revenue, managers, on='city')
Traceback (most recent call last):
... <text deleted> ...
pd.merge(revenue, managers, on='city')
... <text deleted> ...
KeyError: 'city'
```
Given this, it will take a bit more work for you to join or merge on the city/branch name. You have to specify the left_on and right_on parameters in the call to pd.merge().
As before, pandas has been pre-imported as pd and the revenue and managers DataFrames are in your namespace. They have been printed in the IPython Shell so you can examine the columns prior to merging.
Are you able to merge better than in the last exercise? How should the rows with Springfield be handled?
```
# Merge revenue & managers on 'city' & 'branch': combined
combined = pd.merge(revenue, managers, left_on = 'city', right_on = 'branch')
# Print combined
combined
managers.head()
managers.rename(columns = {'branch':'city'}, inplace = True)
managers.head()
```
## Merging on multiple columns
Another strategy to disambiguate cities with identical names is to add information on the states in which the cities are located. To this end, you add a column called state to both DataFrames from the preceding exercises. Again, pandas has been pre-imported as pd and the revenue and managers DataFrames are in your namespace.
Your goal in this exercise is to use pd.merge() to merge DataFrames using multiple columns (using 'branch_id', 'city', and 'state' in this case).
Are you able to match all your company's branches correctly?
```
# Add 'state' column to revenue: revenue['state']
revenue['state'] = ['TX', 'CO', 'IL', 'CA']
# Add 'state' column to managers: managers['state']
managers['state'] = ['TX', 'CO', 'CA', 'MO']
# Merge revenue & managers on 'branch_id', 'city', & 'state': combined
combined = pd.merge(revenue, managers, on = ['branch_id', 'city', 'state'])
# Print combined
combined
managers.head()
managers.rename(columns = {'city':'branch'}, inplace = True)
managers.head()
```
## Left & right merging on multiple columns
You now have, in addition to the revenue and managers DataFrames from prior exercises, a DataFrame sales that summarizes units sold from specific branches (identified by city and state but not branch_id).
Once again, the managers DataFrame uses the label branch in place of city as in the other two DataFrames. Your task here is to employ left and right merges to preserve data and identify where data is missing.
By merging revenue and sales with a right merge, you can identify the missing revenue values. Here, you don't need to specify left_on or right_on because the columns to merge on have matching labels.
By merging sales and managers with a left merge, you can identify the missing manager. Here, the columns to merge on have conflicting labels, so you must specify left_on and right_on. In both cases, you're looking to figure out how to connect the fields in rows containing Springfield.
```
sales = pd.DataFrame({'city':['Mendocino', 'Denver', 'Austin', 'Springfield', 'Springfield'], 'state':['CA', 'CO', 'TX', 'MO', 'IL'],
'units':[1,4,2,5,1]})
sales
# Merge revenue and sales: revenue_and_sales
revenue_and_sales = pd.merge(revenue, sales,how='right',on=['city', 'state'])
# Print revenue_and_sales
revenue_and_sales
# Merge sales and managers: sales_and_managers
sales_and_managers = pd.merge(sales, managers, how='left', left_on=['city', 'state'],right_on=['branch', 'state'])
# Print sales_and_managers
sales_and_managers
# Perform the first merge: merge_default
merge_default = pd.merge(sales_and_managers, revenue_and_sales)
# Print merge_default
merge_default
# Perform the second merge: merge_outer
merge_outer = pd.merge(sales_and_managers, revenue_and_sales, how= 'outer')
# Print merge_outer
merge_outer
# Perform the third merge: merge_outer_on
merge_outer_on = pd.merge(sales_and_managers, revenue_and_sales,how='outer', on = ['city', 'state'])
# Print merge_outer_on
merge_outer_on
```
## Using merge_ordered()
This exercise uses pre-loaded DataFrames austin and houston that contain weather data from the cities Austin and Houston respectively. They have been printed in the IPython Shell for you to examine.
Weather conditions were recorded on separate days and you need to merge these two DataFrames together such that the dates are ordered. To do this, you'll use pd.merge_ordered(). After you're done, note the order of the rows before and after merging.
```
austin = pd.DataFrame({'date':['2016-01-01', '2016-02-08', '2016-01-17'], 'ratings':['Cloudy', 'Cloudy', 'Sunny']})
austin
houston = pd.DataFrame({'date':['2016-01-04', '2016-01-01', '2016-03-01'], 'ratings':['Rainy', 'Cloudy', 'Sunny']})
houston
# Perform an ordered merge on austin and houston using pd.merge_ordered(). Store the result as tx_weather
tx_weather = pd.merge_ordered(austin, houston)
# Print tx_weather. You should notice that the rows are sorted by the date but it is not possible to tell which observation came from which city
tx_weather
# Perform another ordered merge on austin and houston.
# This time, specify the keyword arguments on='date' and suffixes=['_aus','_hus'] so that the rows can be distinguished. Store the result as tx_weather_suff
tx_weather_suff = pd.merge_ordered(austin, houston, on = 'date',suffixes = ['_aus','_hus'])
# Print tx_weather_suff to examine its contents.
tx_weather_suff
# Perform a third ordered merge on austin and houston.
#This time, in addition to the on and suffixes parameters, specify the keyword argument fill_method='ffill' to use forward-filling to replace NaN entries with the most recent non-null entry, and hit 'Submit Answer' to examine the contents of the merged DataFrames!
tx_weather_ffill = pd.merge_ordered(austin, houston,on = 'date',suffixes = ['_aus','_hus'], fill_method='ffill')
tx_weather_ffill
```
# Model Layers
This module contains many layer classes that we might be interested in using in our models. These layers complement the default [Pytorch layers](https://pytorch.org/docs/stable/nn.html) which we can also use as predefined layers.
```
from fastai import *
from fastai.vision import *
from fastai.gen_doc.nbdoc import *
show_doc(AdaptiveConcatPool2d, doc_string=False)
from fastai.gen_doc.nbdoc import *
from fastai.layers import *
```
Layer that concats `AdaptiveAvgPool2d` and `AdaptiveMaxPool2d`. Output will be `2*sz` or 2 if `sz` is None.
The [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d) object uses adaptive average pooling and adaptive max pooling and concatenates them both. We use this because it provides the model with the information of both methods and improves performance. This technique is called `adaptive` because it allows us to decide on what output dimensions we want, instead of choosing the input's dimensions to fit a desired output size.
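The concat-pooling idea can be sketched without fastai: average-pool and max-pool a feature map over its spatial dimensions, then concatenate along the channel axis, doubling the channel count. A NumPy stand-in, not the fastai implementation:

```python
import numpy as np

def concat_pool(x):
    """x: (batch, channels, H, W) -> (batch, 2*channels)."""
    avg = x.mean(axis=(2, 3))  # adaptive average pool to 1x1, flattened
    mx = x.max(axis=(2, 3))    # adaptive max pool to 1x1, flattened
    return np.concatenate([avg, mx], axis=1)

x = np.random.rand(8, 16, 7, 7)
print(concat_pool(x).shape)  # (8, 32): channel count doubled
```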
Let's try training with Adaptive Average Pooling first, then with Adaptive Max Pooling and finally with the concatenation of them both to see how they fare in performance.
We will first define a [`simple_cnn`](/layers.html#simple_cnn) using [Adaptive Max Pooling](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveMaxPool2d) by changing the source code a bit.
```
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
def simple_cnn_max(actns:Collection[int], kernel_szs:Collection[int]=None,
strides:Collection[int]=None) -> nn.Sequential:
"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`"
nl = len(actns)-1
kernel_szs = ifnone(kernel_szs, [3]*nl)
strides = ifnone(strides , [2]*nl)
layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])
for i in range(len(strides))]
layers.append(nn.Sequential(nn.AdaptiveMaxPool2d(1), Flatten()))
return nn.Sequential(*layers)
model = simple_cnn_max((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy])
learner.fit(1)
```
Now let's try with [Adaptive Average Pooling](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool2d).
```
def simple_cnn_avg(actns:Collection[int], kernel_szs:Collection[int]=None,
strides:Collection[int]=None) -> nn.Sequential:
"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`"
nl = len(actns)-1
kernel_szs = ifnone(kernel_szs, [3]*nl)
strides = ifnone(strides , [2]*nl)
layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])
for i in range(len(strides))]
layers.append(nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten()))
return nn.Sequential(*layers)
model = simple_cnn_avg((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy])
learner.fit(1)
```
Finally we will try with the concatenation of them both [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d). We will see that, in fact, it increases our accuracy and decreases our loss considerably!
```
def simple_cnn(actns:Collection[int], kernel_szs:Collection[int]=None,
strides:Collection[int]=None) -> nn.Sequential:
"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`"
nl = len(actns)-1
kernel_szs = ifnone(kernel_szs, [3]*nl)
strides = ifnone(strides , [2]*nl)
layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])
for i in range(len(strides))]
layers.append(nn.Sequential(AdaptiveConcatPool2d(1), Flatten()))
return nn.Sequential(*layers)
model = simple_cnn((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy])
learner.fit(1)
show_doc(Lambda, doc_string=False)
```
Lambda allows us to define functions and use them as layers in our networks inside a [Sequential](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential) object.
So, for example, say we want to apply a [log_softmax loss](https://pytorch.org/docs/stable/nn.html#torch.nn.functional.log_softmax) and we need to change the shape of our output batches to be able to use this loss. We can add a layer that applies the necessary change in shape by calling:
`Lambda(lambda x: x.view(x.size(0),-1))`
Let's see an example of how the shape of our output can change when we add this layer.
```
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
)
model.cuda()
for xb, yb in data.train_dl:
out = (model(*[xb]))
print(out.size())
break
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Lambda(lambda x: x.view(x.size(0),-1))
)
model.cuda()
for xb, yb in data.train_dl:
out = (model(*[xb]))
print(out.size())
break
show_doc(Flatten)
```
The function we build above is actually implemented in our library as [`Flatten`](/layers.html#Flatten). We can see that it returns the same size when we run it.
```
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Flatten(),
)
model.cuda()
for xb, yb in data.train_dl:
out = (model(*[xb]))
print(out.size())
break
show_doc(PoolFlatten)
```
We can combine these two final layers ([AdaptiveAvgPool2d](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool2d) and [`Flatten`](/layers.html#Flatten)) by using [`PoolFlatten`](/layers.html#PoolFlatten).
```
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
PoolFlatten()
)
model.cuda()
for xb, yb in data.train_dl:
out = (model(*[xb]))
print(out.size())
break
show_doc(ResizeBatch)
```
Another use we give to the Lambda function is to resize batches with [`ResizeBatch`](/layers.html#ResizeBatch) when we have a layer that expects a different input than what comes from the previous one. Let's see an example:
```
a = torch.tensor([[1., -1.], [1., -1.]])
print(a)
out = ResizeBatch(4)
print(out(a))
show_doc(CrossEntropyFlat, doc_string=False)
```
Same as [nn.CrossEntropyLoss](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss), but flattens input and target. It is used to calculate cross entropy on arrays (which PyTorch will not let us do with its [nn.CrossEntropyLoss](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss) function). An example use case is image segmentation models, where the output is an image (or an array of pixels).
The parameters are the same as [nn.CrossEntropyLoss](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss): `weight` to rescale each class, `size_average` whether we want to average the losses across elements in a batch or sum them up, `ignore_index` which targets we want to ignore, `reduce` whether we want to return a loss per batch element, and `reduction` which type of reduction (if any) we want to apply to our input.
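To make the flattening concrete, here is a plain-NumPy illustration of the idea — a sketch of what flattening buys us, not the fastai implementation (which wraps PyTorch's loss):

```python
import numpy as np

# Cross entropy on a segmentation-style output of shape (batch, classes, H, W):
# flatten the spatial dimensions first, which is what CrossEntropyFlat automates.
def cross_entropy_flat(logits, targets):
    b, c, h, w = logits.shape
    flat_logits = logits.transpose(0, 2, 3, 1).reshape(-1, c)  # (b*h*w, c)
    flat_targets = targets.reshape(-1)                         # (b*h*w,)
    # numerically stable log-softmax, then negative log likelihood
    z = flat_logits - flat_logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(flat_targets.size), flat_targets].mean()

logits = np.zeros((2, 3, 4, 4))           # uniform logits over 3 classes
targets = np.zeros((2, 4, 4), dtype=int)
loss = cross_entropy_flat(logits, targets)  # equals ln(3) for uniform logits
```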
```
show_doc(MSELossFlat)
show_doc(Debugger)
```
The debugger module allows us to peek inside a network while it's training and see in detail what is going on. We can see inputs, outputs and sizes at any point in the network.
For instance, if you run the following:
``` python
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
Debugger(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
)
model.cuda()
learner = Learner(data, model, metrics=[accuracy])
learner.fit(5)
```
... you'll see something like this:
```
/home/ubuntu/fastai/fastai/layers.py(74)forward()
72 def forward(self,x:Tensor) -> Tensor:
73 set_trace()
---> 74 return x
75
76 class StdUpsample(nn.Module):
ipdb>
```
```
show_doc(NoopLoss)
show_doc(WassersteinLoss)
show_doc(PixelShuffle_ICNR)
show_doc(bn_drop_lin, doc_string=False)
```
The [`bn_drop_lin`](/layers.html#bn_drop_lin) function returns a sequence of [batch normalization](https://arxiv.org/abs/1502.03167), [dropout](https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf) and linear layers. This custom layer is usually used at the end of a model.
`n_in` is the size of the input, `n_out` the size of the output, `bn` whether we want batch norm or not, `p` how much dropout to use, and `actn` an optional activation function to add at the end.
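The layer order the helper assembles can be sketched without PyTorch (the strings below are illustrative stand-ins for real modules, not the fastai code):

```python
def bn_drop_lin_sketch(n_in, n_out, bn=True, p=0.0, actn=None):
    # Mirrors the order of layers bn_drop_lin returns: batchnorm, dropout, linear, activation
    layers = []
    if bn:
        layers.append(f'BatchNorm1d({n_in})')
    if p != 0:
        layers.append(f'Dropout({p})')
    layers.append(f'Linear({n_in}, {n_out})')
    if actn is not None:
        layers.append(actn)
    return layers

print(bn_drop_lin_sketch(512, 10, p=0.5, actn='ReLU'))
```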
```
show_doc(conv2d)
show_doc(conv2d_trans)
show_doc(conv_layer, doc_string=False)
```
The [`conv_layer`](/layers.html#conv_layer) function returns a sequence of [nn.Conv2D](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d), [BatchNorm](https://arxiv.org/abs/1502.03167) and a ReLU or [leaky RELU](https://ai.stanford.edu/~amaas/papers/relu_hybrid_icml2013_final.pdf) activation function.
`n_in` is the size of the input, `n_out` the size of the output, `ks` the kernel size, and `stride` the stride with which we want to apply the convolutions. `bias` decides whether the convolutions have a bias (if None, defaults to True unless using batchnorm). `norm_type` selects the type of normalization (or `None`). If `leaky` is None, the activation is a standard `ReLU`; otherwise it's a `LeakyReLU` of slope `leaky`. Finally, if `transpose=True`, the convolution is replaced by a `ConvTranspose2D`.
```
show_doc(embedding, doc_string=False)
```
Create an [embedding layer](https://arxiv.org/abs/1711.09160) with input size `ni` and output size `nf`.
```
show_doc(simple_cnn)
show_doc(std_upsample_head, doc_string=False)
```
Create a sequence of upsample layers with a ReLU at the beginning and a [nn.ConvTranspose2d](https://pytorch.org/docs/stable/nn.html#torch.nn.ConvTranspose2d).
`nfs` is a list with the input and output sizes of each upsample layer and `c` is the output size of the final 2D Transpose Convolutional layer.
```
show_doc(trunc_normal_)
show_doc(icnr)
```
## Undocumented Methods - Methods moved below this line will intentionally be hidden
```
show_doc(Debugger.forward)
show_doc(MSELossFlat.forward)
show_doc(CrossEntropyFlat.forward)
show_doc(Lambda.forward)
show_doc(AdaptiveConcatPool2d.forward)
show_doc(NoopLoss.forward)
show_doc(icnr)
show_doc(PixelShuffle_ICNR.forward)
show_doc(WassersteinLoss.forward)
```
## New Methods - Please document or move to the undocumented section
# Working with functions
<section class="objectives panel panel-warning">
<div class="panel-heading">
<h2><span class="fa fa-certificate"></span> Learning Objectives:</h2>
</div>
<div class="panel-body">
<ul>
<li>Define a function that takes parameters.</li>
<li>Return a value from a function.</li>
<li>Test and debug a function.</li>
<li>Set default values for function parameters.</li>
<li>Explain why we should divide programs into small, single-purpose
functions.</li>
</ul>
</div>
</section>
At this point, we've written code to draw some interesting features in our inflammation data, loop over all our data files to quickly draw these plots for each of them, and have Python make decisions based on what it sees in our data. But, our code is getting pretty long and complicated; what if we had thousands of datasets, and didn't want to generate a figure for every single one? Commenting out the figure-drawing code is a nuisance. Also, what if we want to use that code again, on a different dataset or at a different point in our program? Cutting and pasting it is going to make our code get very long and very repetitive, very quickly. We'd like a way to package our code so that it is easier to reuse, and Python provides for this by letting us define things called 'functions' - a shorthand way of re-executing longer pieces of code.
Let's start by defining a function `kelvin_to_celsius` that converts temperatures from Kelvin to Celsius:
The function definition opens with the word `def`, which is followed by the name of the function and a parenthesised list of parameter names. The body of the function - the statements that are executed when it runs - is indented below the definition line, typically by four spaces.
When we call the function, the values we pass to it are assigned to those variables so that we can use them inside the function. Inside the function, we use a **return statement** to send a result back to whoever asked for it.
Let's try running our function. Calling our own function is no different from calling any other function:
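The notebook's code cells are not reproduced in this export; a sketch consistent with the lesson might be:

```python
def kelvin_to_celsius(temp_k):
    """Convert a temperature from Kelvin to Celsius."""
    return temp_k - 273.15

print('absolute zero in Celsius:', kelvin_to_celsius(0.0))  # -273.15
```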
We've successfully called the function that we defined, and we have access to the value that we returned.
<section class="callout panel panel-warning">
<div class="panel-heading">
<h2><span class="fa fa-thumb-tack"></span> Integer division</h2>
</div>
<div class="panel-body">
<p>We are using Python 3 division, which always returns a floating point number:</p>
<div class="codehilite"><pre><span></span><span class="k">print</span><span class="p">(</span><span class="mi">5</span><span class="o">/</span><span class="mi">9</span><span class="p">)</span>
</pre></div>
<p>Unfortunately, this wasn't the case in Python 2:</p>
<div class="codehilite"><pre><span></span><span class="err">!</span><span class="n">python2</span> <span class="o">-</span><span class="n">c</span> <span class="s2">"print 5/9"</span>
</pre></div>
<p>If you are using Python 2 and want to keep the fractional part of division you need to convert one or the other number to floating point:</p>
<div class="codehilite"><pre><span></span><span class="nb">float</span><span class="p">(</span><span class="mi">5</span><span class="p">)</span> <span class="o">/</span> <span class="mi">9</span>
</pre></div>
<div class="codehilite"><pre><span></span><span class="mi">5</span> <span class="o">/</span> <span class="nb">float</span><span class="p">(</span><span class="mi">9</span><span class="p">)</span>
</pre></div>
<div class="codehilite"><pre><span></span><span class="mf">5.0</span> <span class="o">/</span> <span class="mi">9</span>
</pre></div>
<div class="codehilite"><pre><span></span><span class="mi">5</span> <span class="o">/</span> <span class="mf">9.0</span>
</pre></div>
<p>And if you want an integer result from division in Python 3, use a double-slash:</p>
<div class="codehilite"><pre><span></span><span class="mi">4</span> <span class="o">//</span> <span class="mi">2</span>
</pre></div>
<div class="codehilite"><pre><span></span><span class="mi">3</span> <span class="o">//</span> <span class="mi">2</span>
</pre></div>
</div>
</section>
## Composing Functions
Now that we've seen how to turn Kelvin into Celsius, let's try converting Celsius to Fahrenheit:
What about converting Kelvin to Fahrenheit? We could write out the formula, but we don't need to. Instead, we can compose the two functions we have already created:
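A sketch of the composition (again, the original code cells are not shown in this export, so the exact bodies are assumptions):

```python
def kelvin_to_celsius(temp_k):
    return temp_k - 273.15

def celsius_to_fahr(temp_c):
    return temp_c * 9.0 / 5.0 + 32.0

def kelvin_to_fahr(temp_k):
    # Compose the two existing converters instead of writing a new formula
    return celsius_to_fahr(kelvin_to_celsius(temp_k))

print('freezing point of water in Fahrenheit:', kelvin_to_fahr(273.15))  # 32.0
```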
This is our first taste of how larger programs are built: we define basic operations, then combine them in ever-larger chunks to get the effect we want. Real-life functions will usually be larger than the ones shown here - typically half a dozen to a few dozen lines - but they shouldn't ever be much longer than that, or the next person who reads it won't be able to understand what's going on.
## Tidying up
Now that we know how to wrap bits of code up in functions, we can make our inflammation analysis easier to read and easier to reuse. First, let's make an `analyse` function that generates our plots:
and another function called `detect_problems` that checks for those systematics we noticed:
Notice that rather than jumbling this code together in one giant `for` loop, we can now read and reuse both ideas separately. We can reproduce the previous analysis with a much simpler `for` loop:
By giving our functions human-readable names, we can more easily read and understand what is happening in the `for` loop. Even better, if at some later date we want to use either of those pieces of code again, we can do so in a single line.
## Defining Defaults
We have passed parameters to functions in two ways: directly, as in `type(data)`, and by name, as in `np.loadtxt(fname='something.csv', delimiter=',')`. In fact, we can pass the filename to `loadtxt` without the `fname=`:
but we still need to say `delimiter=`:
To understand what's going on, and make our own functions easier to use, let's re-define our center function like this:
The key change is that the second parameter is now written `desired=0.0` instead of just `desired`. If we call the function with two arguments, it works as it did before:
But we can also now call it with just one parameter, in which case `desired` is automatically assigned the default value of 0.0:
This is handy: if we usually want a function to work one way, but occasionally need it to do something else, we can allow people to pass a parameter when they need to but provide a default to make the normal case easier. The example below shows how Python matches values to parameters:
As this example shows, parameters are matched up from left to right, and any that haven't been given a value explicitly get their default value. We can override this behavior by naming the value as we pass it in:
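A sketch of such an example (the lesson's own code cells are not shown here; `display` returns a string so the matching is easy to inspect):

```python
def display(a=1, b=2, c=3):
    return 'a: {} b: {} c: {}'.format(a, b, c)

print(display())        # all defaults: a: 1 b: 2 c: 3
print(display(55))      # matched left to right: a: 55 b: 2 c: 3
print(display(55, 66))  # a: 55 b: 66 c: 3
print(display(c=77))    # override by name: a: 1 b: 2 c: 77
```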
With that in hand, let's look at the help for numpy.loadtxt:
There's a lot of information here, but the most important part is the first couple of lines:
```python
loadtxt(fname, dtype=<type 'float'>, comments='#', delimiter=None, converters=None, skiprows=0, usecols=None,
unpack=False, ndmin=0)
```
This tells us that loadtxt has one parameter called fname that doesn't have a default value, and eight others that do. If we call the function like this:
then the filename is assigned to `fname` (which is what we want), but the delimiter string `','` is assigned to `dtype` rather than `delimiter`, because `dtype` is the second parameter in the list. However, `','` isn't a known `dtype`, so our code produces an error message when we try to run it. When we call `loadtxt` we don't have to provide `fname=` for the filename because it's the first item in the list, but if we want `','` to be assigned to `delimiter`, we *do* have to provide `delimiter=` explicitly, since `delimiter` is not the second parameter in the list.
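We can reproduce the distinction with an in-memory file (a sketch; `StringIO` stands in for a CSV on disk):

```python
import numpy as np
from io import StringIO

data_file = StringIO('1,2\n3,4')  # stands in for a small CSV file
# The delimiter must be passed by name; a bare second argument
# would be matched to dtype instead and raise an error.
data = np.loadtxt(data_file, delimiter=',')
print(data.shape)  # (2, 2)
```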
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2><span class="fa fa-pencil"></span> Combining strings</h2>
</div>
<div class="panel-body">
<p>"Adding" two strings produces their concatenation: <code>'a'</code> + <code>'b'</code> is <code>'ab'</code>. Write a function called <code>fence</code> that takes two parameters called <code>original</code> and <code>wrapper</code> and returns a new string that has the wrapper character at the beginning and end of the original. A call to your function should look like this:</p>
<div class="codehilite"><pre><span></span><span class="k">print</span><span class="p">(</span><span class="n">fence</span><span class="p">(</span><span class="s1">'name'</span><span class="p">,</span> <span class="s1">'*'</span><span class="p">))</span>
<span class="o">*</span><span class="n">name</span><span class="o">*</span>
</pre></div>
</div>
</section>
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2><span class="fa fa-pencil"></span> Selecting characters from strings</h2>
</div>
<div class="panel-body">
<p>If the variable <code>s</code> refers to a string, then <code>s[0]</code> is the string's first
character and <code>s[-1]</code> is its last. Write a function called <code>outer</code> that
returns a string made up of just the first and last characters of its
input. A call to your function should look like this:</p>
<div class="codehilite"><pre><span></span><span class="k">print</span><span class="p">(</span><span class="n">outer</span><span class="p">(</span><span class="s1">'helium'</span><span class="p">))</span>
<span class="n">hm</span>
</pre></div>
</div>
</section>
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2><span class="fa fa-pencil"></span> Rescaling an array</h2>
</div>
<div class="panel-body">
<p>Write a function <code>rescale</code> that takes an array as input and returns a corresponding array of values scaled to lie in the range 0.0 to 1.0. (Hint: If L and H are the lowest and highest values in the original array, then the replacement for a value v should be (v − L)/(H − L).)</p>
</div>
</section>
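One possible implementation following the hint above (a sketch, not an official solution):

```python
import numpy as np

def rescale(input_array):
    """Scale values to lie in the range 0.0 to 1.0 using (v - L) / (H - L)."""
    low = input_array.min()
    high = input_array.max()
    return (input_array - low) / (high - low)

print(rescale(np.array([10.0, 20.0, 30.0])))  # scales to 0.0, 0.5, 1.0
```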
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2><span class="fa fa-pencil"></span> Defining defaults</h2>
</div>
<div class="panel-body">
<p>Rewrite the <code>rescale</code> function so that it scales data to lie between 0.0 and 1.0 by default, but will allow the caller to specify lower and upper bounds if they want. Compare your implementation to your neighbor's: do the two functions always behave the same way?</p>
</div>
</section>
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2><span class="fa fa-pencil"></span> Variables inside and outside functions</h2>
</div>
<div class="panel-body">
<p>What does the following piece of code display when run - and why?</p>
<div class="codehilite"><pre><span></span><span class="n">f</span> <span class="o">=</span> <span class="mi">0</span>
<span class="n">k</span> <span class="o">=</span> <span class="mi">0</span>
<span class="k">def</span> <span class="nf">f2k</span><span class="p">(</span><span class="n">f</span><span class="p">):</span>
<span class="n">k</span> <span class="o">=</span> <span class="p">((</span><span class="n">f</span><span class="o">-</span><span class="mi">32</span><span class="p">)</span><span class="o">*</span><span class="p">(</span><span class="mf">5.0</span><span class="o">/</span><span class="mf">9.0</span><span class="p">))</span> <span class="o">+</span> <span class="mf">273.15</span>
<span class="k">return</span> <span class="n">k</span>
<span class="n">f2k</span><span class="p">(</span><span class="mi">8</span><span class="p">)</span>
<span class="n">f2k</span><span class="p">(</span><span class="mi">41</span><span class="p">)</span>
<span class="n">f2k</span><span class="p">(</span><span class="mi">32</span><span class="p">)</span>
<span class="k">print</span><span class="p">(</span><span class="n">k</span><span class="p">)</span>
</pre></div>
</div>
</section>
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2><span class="fa fa-pencil"></span> The Old Switcheroo</h2>
</div>
<div class="panel-body">
<p>Consider this code:</p>
<div class="codehilite"><pre><span></span><span class="n">a</span> <span class="o">=</span> <span class="mi">3</span>
<span class="n">b</span> <span class="o">=</span> <span class="mi">7</span>
<span class="k">def</span> <span class="nf">swap</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">):</span>
<span class="n">temp</span> <span class="o">=</span> <span class="n">a</span>
<span class="n">a</span> <span class="o">=</span> <span class="n">b</span>
<span class="n">b</span> <span class="o">=</span> <span class="n">temp</span>
<span class="n">swap</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">)</span>
<span class="k">print</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">)</span>
</pre></div>
<p>Which of the following would be printed if you were to run this code? Why did you pick this answer?</p>
<ul>
<li><code>7 3</code></li>
<li><code>3 7</code></li>
<li><code>3 3</code></li>
<li><code>7 7</code></li>
</ul>
</div>
</section>
<section class="solution panel panel-primary">
<div class="panel-heading">
<h2><span class="fa fa-eye"></span> Solution</h2>
</div>
<div class="panel-body">
<p><code>3, 7</code> is correct. Initially <code>a</code> has a value of 3 and <code>b</code> has a value of 7. When the <code>swap</code> function is called, it creates local variables (also called <code>a</code> and <code>b</code> in this case) and trades their values. The function does not return any values and does not alter <code>a</code> or <code>b</code> outside of its local copy. Therefore the original values of <code>a</code> and <code>b</code> remain unchanged.</p>
</div>
</section>
<section class="keypoints panel panel-success">
<div class="panel-heading">
<h2><span class="fa fa-exclamation-circle"></span> Key Points</h2>
</div>
<div class="panel-body">
<ul>
<li>Define a function using <code>def function_name(parameter)</code>.</li>
<li>The body of a function must be indented.</li>
<li>Call a function using <code>function_name(value)</code>.</li>
<li>Numbers are stored as integers or floating-point numbers.</li>
<li>Variables defined within a function can only be seen and used within the body of the function.</li>
<li>If a variable is not defined within the function in which it is used, Python looks for a definition before the function call.</li>
<li>Specify default values for parameters when defining a function using <code>name=value</code> in the parameter list.</li>
<li>Parameters can be passed by matching based on name, by position, or by omitting them (in which case the default value is used).</li>
<li>Put code whose parameters change frequently in a function, then call it with different parameter values to customize its behavior.</li>
</ul>
</div>
</section>
---
The material in this notebook is derived from the Software Carpentry lessons
© [Software Carpentry](http://software-carpentry.org/) under the terms
of the [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
# Ch05
```
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import statsmodels.api as sm
%load_ext autoreload
%autoreload 2
plt.style.use('seaborn-talk')
plt.style.use('bmh')
pd.set_option('display.max_rows', 100)
```
## 5.1 Generate a time series from an IID Gaussian random process. This is a memory-less, stationary series:
```
np.random.seed(0)
N = 252 * 10
s = pd.Series(np.random.randn(N))
s.plot()
```
## (a) Compute the ADF statistic on this series. What is the p-value?
```
adf = lambda s: sm.tsa.stattools.adfuller(s)
p_val = lambda s: sm.tsa.stattools.adfuller(s)[1]
res = adf(s)
p = res[1]
res, p
```
## (b) Compute the cumulative sum of the observations. This is a non-stationary series w/o memory.
```
cmsm = pd.Series(s).cumsum()
cmsm.plot()
p_val(cmsm)
```
## 5.2 Generate a time series that follows a sinusoidal function. This is a stationary series with memory.
```
np.random.seed(0)
rand = np.random.random(N)
idx = np.linspace(0, 10, N)
s = pd.Series(1 * np.sin(2. * idx + .5))
s.plot()
p_val(s)
```
## (b) Shift every observation by the same positive value. Compute the cumulative sum of the observations. This is a non-stationary series with memory.
```
s_ = (s + 1).cumsum().rename('fake_close').to_frame()
s_.plot()
adf(s_['fake_close'].dropna()), p_val(s_['fake_close'])
def getWeights(d, size):
# weights follow the binomial-series recursion: w_k = -w_{k-1} * (d - k + 1) / k
w = [1.]
for k in range(1, size):
w_ = -w[-1] / k * (d - k + 1)
w.append(w_)
w = np.array(w[::-1]).reshape(-1, 1)
return w
s_.shape[0]
%%time
w = getWeights(0.1, s_.shape[0])
def fracDiff(series, d, thres=0.01):
'''
Increasing width window, with treatment of NaNs
Note 1: For thres=1, nothing is skipped
Note 2: d can be any positive fractional, not necessarily
bounded between [0,1]
'''
#1) Compute weights for the longest series
w=getWeights(d, series.shape[0])
#bp()
#2) Determine initial calcs to be skipped based on weight-loss threshold
w_=np.cumsum(abs(w))
w_ /= w_[-1]
skip = w_[w_>thres].shape[0]
#3) Apply weights to values
df={}
for name in series.columns:
seriesF, df_=series[[name]].fillna(method='ffill').dropna(), pd.Series()
for iloc in range(skip, seriesF.shape[0]):
loc=seriesF.index[iloc]
test_val = series.loc[loc,name] # must resample if duplicate index
if isinstance(test_val, (pd.Series, pd.DataFrame)):
test_val = test_val.resample('1m').mean()
if not np.isfinite(test_val).any(): continue # exclude NAs
try:
df_.loc[loc]=np.dot(w[-(iloc+1):,:].T, seriesF.loc[:loc])[0,0]
except:
continue
df[name]=df_.copy(deep=True)
df=pd.concat(df,axis=1)
return df
df0 = fracDiff(s_, 0.1)
df0.head()
cols = ['adfStat','pVal','lags','nObs','95% conf'] #,'corr']
out = pd.DataFrame(columns = cols)
for d in np.linspace(0,1,11):
try:
df0 = fracDiff(s_, d)
df0 = sm.tsa.stattools.adfuller(df0['fake_close'], maxlag=1, regression='c', autolag=None)
out.loc[d] = list(df0[:4])+[df0[4]['5%']]
except:
break
f, ax = plt.subplots()
out['adfStat'].plot(ax=ax, marker='X')
ax.axhline(out['95% conf'].mean(), lw=1, color='r', ls='dotted')
ax.set_title('min d with thresh=0.01')
ax.set_xlabel('d values')
ax.set_ylabel('adf stat');
display(out)
```
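The weights `getWeights` produces follow the binomial-series recursion w_k = -w_{k-1}(d - k + 1)/k; a standalone check of the first few weights for d = 0.5 (a mirror of the function above, kept dependency-free):

```python
def frac_diff_weights(d, size):
    # Same recursion as getWeights above: w_k = -w_{k-1} * (d - k + 1) / k
    w = [1.0]
    for k in range(1, size):
        w.append(-w[-1] / k * (d - k + 1))
    return w[::-1]  # reversed, so the most recent observation's weight comes last

print(frac_diff_weights(0.5, 4))  # ~[-0.0625, -0.125, -0.5, 1.0]
```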
# APPENDIX
```
%load_ext autoreload
%autoreload 2
import faiss
import pickle
import numpy as np
import os
from tools.utils import draw_bbboxes
from pycocotools.coco import COCO
print(os.getcwd())
OUTPUT_PATH="images/threshold_study/final_feature_db_on_train.npy"
feautre_db = np.load(OUTPUT_PATH)
coco = COCO("/home.nfs/babayeln/thesis/mask-rcnn.pytorch/data/coco/annotations/instances_train2017.json")
coco_cat_to_continous_cat = {v: i+1 for i,v in enumerate(coco.cats)}
continious_cat_to_coco = {v:k for k,v in coco_cat_to_continous_cat.items()}
def find_threhold_for_each_class(index, db, classes, k_neighbours=10):
print("Doing search")
distance, indecies = index.search(db, k_neighbours)
print("Finishing search")
classes_idx = classes[indecies]
distance_class = {}
counts = {}
for idx, neighbours in enumerate(classes_idx):
myself = int(classes[idx])
not_class_neighbours = np.where(neighbours!=myself)[0]
if len(not_class_neighbours) == 0:
not_class_neighbours = [k_neighbours-1]
first_not_class_neighbours = not_class_neighbours[0]
#if first_not_class_neighbours==0:
# import ipdb; ipdb.set_trace()
if myself not in counts.keys():
counts[myself] = []
distance_class[myself] = []
counts[myself].append(first_not_class_neighbours)
distance_class[myself].append(distance[idx, first_not_class_neighbours])
average_distance_class = {}
for class_idx in distance_class.keys():
average_distance_class[class_idx] = np.median(distance_class[class_idx])
return average_distance_class, counts, classes_idx, distance, indecies
def xyxy_to_xywh(xyxy):
"""Convert [x1 y1 x2 y2] box format to [x1 y1 w h] format."""
if isinstance(xyxy, (list, tuple)):
# Single box given as a list of coordinates
assert len(xyxy) == 4
x1, y1 = xyxy[0], xyxy[1]
w = xyxy[2] - x1 + 1
h = xyxy[3] - y1 + 1
return (x1, y1, w, h)
elif isinstance(xyxy, np.ndarray):
# Multiple boxes given as a 2D ndarray
return np.hstack((xyxy[:, 0:2], xyxy[:, 2:4] - xyxy[:, 0:2] + 1))
else:
raise TypeError('Argument xyxy must be a list, tuple, or numpy array.')
def create_db(db):
dimension = 1024
db = db.astype('float32')
faiss_db = faiss.IndexFlatL2(dimension)
faiss_db.add(db)
return faiss_db
"""
def create_db(database, M=128, nbits=8, nlist=316, nprobe=32):
quantizer = faiss.IndexFlatL2(1024)
#index = faiss.IndexIVFPQ(quantizer, 1024, nlist, M, nbits)
index = faiss.IndexIVFFlat(quantizer, 1024, nlist)
N = int(len(database))
samples = database[np.random.permutation(np.arange(N))[:N]]
index.train(samples)
index.add(database)
return index"""
faiss_db = create_db(np.array(feautre_db[:, 7:]))
classes = np.array(feautre_db[:, 2])
features = np.array(feautre_db[:, 7:])
#classes_idx, indecies, distance = find_threhold_for_each_class(faiss_db, features, classes, k_neighbours=10)
%time average_distance_class, counts, classes_idx, distances, indecies = find_threhold_for_each_class(faiss_db, features, classes, k_neighbours=5)
with open("images/threshold_study/nearest_neighbour_5.pkl", "wb") as f:
pickle.dump({
"average_distance_class" : average_distance_class,
"counts" : counts,
"classes_idx": classes_idx,
"distances": distances,
"indecies" : indecies
}, f)
indecies
```
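For intuition, the exact search that `IndexFlatL2` performs is equivalent to this brute-force NumPy computation (a sketch of the semantics, not the FAISS internals; note that FAISS returns squared L2 distances):

```python
import numpy as np

def flat_l2_search(db, queries, k):
    # (n_queries, n_db) matrix of squared L2 distances to every database vector
    d2 = ((queries[:, None, :] - db[None, :, :]) ** 2).sum(axis=-1)
    idx = np.argsort(d2, axis=1)[:, :k]          # indices of the k smallest
    return np.take_along_axis(d2, idx, axis=1), idx

db = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
dist, idx = flat_l2_search(db, np.array([[0.9, 0.1]]), k=2)
print(idx[0])  # nearest is db vector 1, then vector 0
```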
## Choosing different classes
```
import seaborn as sns
import matplotlib.pyplot as plt
def get_top_x_not_weighted(x):
matrix = np.zeros(shape = (81, 81))
weights = [1, 1, 1, 1, 1]
for idx, neighbours in enumerate(classes_idx):
neighbours = neighbours[:x+1]
for position, neighbour in enumerate(neighbours[1:]):
#if neighbour!=classes[idx]:
matrix[int(classes[idx]), int(neighbour)] += weights[position]
for idx in range(matrix.shape[0]):
matrix[idx] = np.round(matrix[idx]/ np.sum(matrix[idx], axis=0),4)
return matrix
def get_df(matrix):
import pandas as pd
df = pd.DataFrame(matrix)
names = ["background"] + [coco.cats[coco_cat]["name"] for (cat, coco_cat) in continious_cat_to_coco.items()]
df = df.set_axis(names, axis=0)  # set_axis returns a copy, so reassign
df = df.set_axis(names, axis=1)
df.style.background_gradient()
return df
def draw_df(df, name, figsize=(20, 20), ann=False ):
fig, ax = plt.subplots(figsize=figsize)
pal = sns.cubehelix_palette(light=1, as_cmap=True)
sns_plot = sns.heatmap(ax = ax, data=df, cmap=pal, annot=ann)
sns_plot.get_figure().savefig("nn_review/" + name)
m_5 = get_top_x_not_weighted(5)
d_5 = get_df(m_5)
draw_df(d_5, "top5.png")
m = get_top_x_not_weighted(1)
d = get_df(m)
draw_df(d, "top1.png")
m_5[1], m[1]
df.to_pickle("nn_review/nn_to_wrong_classes_5.pkl")
df.style.background_gradient().to_excel("nn_review/nn_to_wrong_classes_5.xlsx")
```
## Choosing yourself
```
matrix = np.zeros(shape = (81,4))
for idx, neighbours in enumerate(classes_idx):
for position, neighbour in enumerate(neighbours[1:]):
if neighbour==classes[idx]:
matrix[int(neighbour)][position]+=1
for idx in range(matrix.shape[0]):
matrix[idx] = np.round(matrix[idx]/ np.sum(matrix[idx]),3)*100
np.sum(matrix[5])
import pandas as pd
df = pd.DataFrame(matrix)
names = ["background"] + [coco.cats[coco_cat]["name"] for (cat, coco_cat) in continious_cat_to_coco.items()]
df = df.set_axis(names, axis=0)  # set_axis returns a copy, so reassign
df.style.background_gradient()
draw_df(df, "same_class_%", figsize=(5,20), ann=True)
df.to_pickle("nn_review/class_place_apperance.pkl")
df.style.background_gradient().to_excel("nn_review/class_place_apperance.xlsx")
```
## Mean distance
```
names = [coco.cats[continious_cat_to_coco[x]]["name"] for x in average_distance_class.keys()]
import matplotlib.pyplot as plt
#plt.figure(figsize=(20, 20))
#plt.bar(average_distance_class.keys(), average_distance_class.values())
dictionary = {"class": [], "mean_distance": []}
for key,value in average_distance_class.items():
dictionary["class"].append(coco.cats[continious_cat_to_coco[key]]["name"])
dictionary["mean_distance"].append(value)
df = pd.DataFrame(dictionary)
df = df.sort_values('mean_distance')
ax = df.plot.barh(x='class', y='mean_distance', rot=0, figsize=(20,20))
fig = ax.get_figure()
fig.savefig("nn_review/median_distance.png")
#x = list(range(81))
#my_xticks = [coco.cats[coco_cat]["name"] for (cat, coco_cat) in continious_cat_to_coco.items()]
#plt.xticks(x, my_xticks, rotation=45)
#plt.show()
```
## Distance to different classes
```
tmp_dict= {}
for idx in range(len(distances)):
me = int(classes[idx])
for distance,other in zip(distances[idx], classes_idx[idx]):
key = (me, int(other))
if key not in tmp_dict.keys():
tmp_dict[key] = []
if distance<0:
distance=0
tmp_dict[key].append(distance)
matrix = np.zeros((81, 81))
for (me, other) in tmp_dict.keys():
matrix[me][other] = np.median(tmp_dict[(me, other)])
import pandas as pd
df = pd.DataFrame(matrix)
names = ["background"] + [coco.cats[coco_cat]["name"] for (cat, coco_cat) in continious_cat_to_coco.items()]
df = df.set_axis(names, axis=0)  # set_axis returns a copy, so reassign
df = df.set_axis(names, axis=1)
df.style.background_gradient()
draw_df(df, name="median_distance", ann=False)
df.to_pickle("nn_review/mean_distance_to_different_classes.pkl")
df.style.background_gradient().to_excel("nn_review/mean_distance_to_different_classes.xlsx")
def choose_gradient_id_for_background(faiss_db, feautre_db, looked_features, thresholds, k_neighbours ):
distance, indecies = faiss_db.search(looked_features, k_neighbours)
#feautre_db[indecies].shape -> number of features, k_neighbours, 1031 dimension vector
classes = feautre_db[indecies][:, :, 2]
max_classes = [np.argmax(np.bincount(sample.astype("int64"))) for sample in classes]
class_thresholds = thresholds[max_classes]
passed_thresholds = np.where( np.min(distance, axis=1) < class_thresholds)[0]
background_gradient_idx = np.array(sorted(set(range(len(looked_features))) - set(passed_thresholds)))
return background_gradient_idx
background_gradient_idx = choose_gradient_id_for_background(faiss_db, feautre_db, looked_features, thresholds, k_neighbours )
```
# An Introduction to the Amazon SageMaker IP Insights Algorithm
#### Unsupervised anomaly detection for suspicious IP addresses
-------
1. [Introduction](#Introduction)
2. [Setup](#Setup)
3. [Training](#Training)
4. [Inference](#Inference)
5. [Epilogue](#Epilogue)
## Introduction
-------
The Amazon SageMaker IP Insights algorithm uses statistical modeling and neural networks to capture associations between online resources (such as account IDs or hostnames) and IPv4 addresses. Under the hood, it learns vector representations for online resources and IP addresses. This essentially means that if the vector representing an IP address and an online resource are close together, then it is likely for that IP address to access that online resource, even if it has never accessed it before.
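As a toy illustration of that intuition (the vectors below are made up for the example, not what the algorithm actually learns):

```python
import numpy as np

# Hypothetical learned embeddings: entities and IP addresses live in the
# same vector space, and compatibility is scored by a dot product.
user_vec = np.array([0.9, 0.1, 0.0])
home_ip_vec = np.array([0.8, 0.2, 0.1])        # IP the user normally logs in from
anomalous_ip_vec = np.array([-0.7, 0.1, 0.9])  # unrelated IP

print(user_vec @ home_ip_vec)       # high score -> plausible pairing
print(user_vec @ anomalous_ip_vec)  # low score  -> potential anomaly
```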
In this notebook, we use the Amazon SageMaker IP Insights algorithm to train a model on synthetic data. We then use this model to perform inference on the data and show how to discover anomalies. After running this notebook, you should be able to:
- obtain, transform, and store data for use in Amazon SageMaker,
- create an AWS SageMaker training job to produce an IP Insights model,
- use the model to perform inference with an Amazon SageMaker endpoint.
If you would like to know more, please check out the [SageMaker IP Insights Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/ip-insights.html).
## Setup
------
*This notebook was tested in Amazon SageMaker Studio on a ml.t3.medium instance with Python 3 (Data Science) kernel.*
Our first step is to set up our AWS credentials so that Amazon SageMaker can store and access training data and model artifacts.
### Select Amazon S3 Bucket
We first need to specify the locations where we will store our training data and trained model artifacts. ***This is the only cell of this notebook that you will need to edit.*** In particular, we need the following data:
- `bucket` - An S3 bucket accessible by this account.
- `prefix` - The location in the bucket where this notebook's input and output data will be stored. (The default value is sufficient.)
```
import boto3
import botocore
import os
import sagemaker
bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/ipinsights-tutorial"
execution_role = sagemaker.get_execution_role()
region = boto3.Session().region_name
# check if the bucket exists
try:
    boto3.Session().client("s3").head_bucket(Bucket=bucket)
except botocore.exceptions.ParamValidationError as e:
    print("Hey! You either forgot to specify your S3 bucket or you gave your bucket an invalid name!")
except botocore.exceptions.ClientError as e:
    if e.response["Error"]["Code"] == "403":
        print(f"Hey! You don't have permission to access the bucket, {bucket}.")
    elif e.response["Error"]["Code"] == "404":
        print(f"Hey! Your bucket, {bucket}, doesn't exist!")
    else:
        raise
else:
    print(f"Training input/output will be stored in: s3://{bucket}/{prefix}")
```
Next, we download the modules necessary for synthetic data generation if they do not already exist.
```
from os import path
tools_bucket = f"jumpstart-cache-prod-{region}" # Bucket containing the data generation module.
tools_prefix = "1p-algorithms-assets/ip-insights" # Prefix for the data generation module
s3 = boto3.client("s3")
data_generation_file = "generate_data.py" # Synthetic data generation module
script_parameters_file = "ip2asn-v4-u32.tsv.gz"
if not path.exists(data_generation_file):
    s3.download_file(tools_bucket, f"{tools_prefix}/{data_generation_file}", data_generation_file)
if not path.exists(script_parameters_file):
    s3.download_file(tools_bucket, f"{tools_prefix}/{script_parameters_file}", script_parameters_file)
```
### Dataset
Apache Web Server ("httpd") is the most popular web server used on the internet. And luckily for us, it logs all requests processed by the server - by default. If a web page requires HTTP authentication, the Apache Web Server will log the IP address and authenticated user name for each requested resource.
The [access logs](https://httpd.apache.org/docs/2.4/logs.html) are typically on the server under the file `/var/log/httpd/access_log`. From the example log output below, we see which IP addresses each user has connected with:
```
192.168.1.100 - user1 [15/Oct/2018:18:58:32 +0000] "GET /login_success?userId=1 HTTP/1.1" 200 476 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"
192.168.1.102 - user2 [15/Oct/2018:18:58:35 +0000] "GET /login_success?userId=2 HTTP/1.1" 200 - "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"
...
```
If we want to train an algorithm to detect suspicious activity, this dataset is ideal for SageMaker IP Insights.
First, we determine the resource we want to be analyzing (such as a login page or access to a protected file). Then, we construct a dataset containing the history of all past user interactions with the resource. We extract out each 'access event' from the log and store the corresponding user name and IP address in a headerless CSV file with two columns. The first column will contain the user identifier string, and the second will contain the IPv4 address in decimal-dot notation.
```
user1, 192.168.1.100
user2, 192.168.1.102
...
```
As a side note, the dataset should include all access events. That means some `<user_name, ip_address>` pairs will be repeated.
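To make the transformation concrete, here is a minimal sketch of turning a Common Log Format line into a `<user_name, ip_address>` pair. The regex is an assumption that covers the example lines above, not a complete Apache log parser:

```python
import re

# Assumed pattern for the Common Log Format lines shown above.
LOG_PATTERN = re.compile(r'^(\S+) (\S+) (\S+) \[([^\]]+)\] "([^"]*)" (\d{3})')

def extract_access_event(line):
    """Return (user, ip_address) for successful /login_success GETs, else None."""
    match = LOG_PATTERN.match(line)
    if not match:
        return None
    ip, _, user, _, request, status = match.groups()
    if user != "-" and status == "200" and request.startswith("GET /login_success"):
        return user, ip
    return None

line = ('192.168.1.100 - user1 [15/Oct/2018:18:58:32 +0000] '
        '"GET /login_success?userId=1 HTTP/1.1" 200 476 "-" "Mozilla/5.0"')
print(extract_access_event(line))  # ('user1', '192.168.1.100')
```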
#### User Activity Simulation
For this example, we are going to simulate our own web-traffic logs. We mock up a toy website example and simulate users logging into the website from mobile devices.
The details of the simulation are explained in the script [here](./generate_data.py).
```
from generate_data import generate_dataset
# We simulate traffic for 10,000 users. This should yield about 3 million log lines (~700 MB).
NUM_USERS = 10000
log_file = "ipinsights_web_traffic.log"
generate_dataset(NUM_USERS, log_file)
# Visualize a few log lines
!head $log_file
```
### Prepare the dataset
Now that we have our logs, we need to transform them into a format that IP Insights can use. As we mentioned above, we need to:
1. Choose the resource which we want to analyze users' history for
2. Extract our users' usage history of IP addresses
3. In addition, we want to separate our dataset into a training and test set. This will allow us to check for overfitting by evaluating our model on 'unseen' login events.
For the rest of the notebook, we assume that the Apache Access Logs are in the Common Log Format as defined by the [Apache documentation](https://httpd.apache.org/docs/2.4/logs.html#accesslog). We start with reading the logs into a Pandas DataFrame for easy data exploration and pre-processing.
```
import pandas as pd
df = pd.read_csv(
log_file,
sep=" ",
na_values="-",
header=None,
names=["ip_address","rcf_id","user","timestamp","time_zone","request", "status", "size", "referer", "user_agent"]
)
df.head()
```
We convert the log timestamp strings into Python datetimes so that we can sort and compare the data more easily.
```
# Convert time stamps to DateTime objects
df["timestamp"] = pd.to_datetime(df["timestamp"], format="[%d/%b/%Y:%H:%M:%S")
```
We also verify the time zones of all of the time stamps. If the log contains more than one time zone, we would need to standardize the timestamps.
```
# Check if they are all in the same timezone
num_time_zones = len(df["time_zone"].unique())
num_time_zones
```
As we see above, there is only one value in the entire `time_zone` column. Therefore, all of the timestamps are in the same time zone, and we do not need to standardize them. We can skip the next cell and go to [1. Select Resource](#1.-Select-Resource).
If there is more than one time_zone in your dataset, then we parse the timezone offset and update the corresponding datetime object.
**Note:** The next cell takes about 5-10 minutes to run.
```
from datetime import datetime
import pytz
def apply_timezone(row):
    tz = row[1]
    sign = -1 if tz.startswith("-") else 1
    tz_offset = sign * int(tz[1:3]) * 60  # Hour offset
    tz_offset += sign * int(tz[3:5])  # Minutes offset carries the same sign as the hours
    return row[0].replace(tzinfo=pytz.FixedOffset(tz_offset))

if num_time_zones > 1:
    df["timestamp"] = df[["timestamp", "time_zone"]].apply(apply_timezone, axis=1)
```
#### 1. Select Resource
Our goal is to train an IP Insights algorithm to analyze the history of user logins such that we can predict how suspicious a login event is.
In our simulated web server, the server logs a `GET` request to the `/login_success` page every time a user successfully logs in. We filter our Apache logs for `GET` requests for `/login_success`. We also filter for requests that have `status_code == 200`, to ensure that the page request was well formed.
**Note:** every web server handles logins differently. For your dataset, determine which resource you need to analyze to correctly frame this problem. Depending on your use case, you may need to do more data exploration and preprocessing.
```
df = df[(df["request"].str.startswith("GET /login_success")) & (df["status"] == 200)]
```
#### 2. Extract Users and IP address
Now that our DataFrame only includes log events for the resource we want to analyze, we extract the relevant fields to construct an IP Insights dataset.
IP Insights takes in a headerless CSV file with two columns: an entity (username) ID string and the IPv4 address in decimal-dot notation. Fortunately, the Apache Web Server access logs output IP addresses and authenticated usernames in their own columns.
**Note:** Each website handles user authentication differently. If the Access Log does not output an authenticated user, you could explore the website's query strings or work with your website developers on another solution.
```
df = df[["user", "ip_address", "timestamp"]]
```
#### 3. Create training and test dataset
As part of training a model, we want to evaluate how it generalizes to data it has never seen before.
Typically, you create a test set by reserving a random percentage of your dataset and evaluating the model after training. However, for machine learning models that make future predictions on historical data, we want to use out-of-time testing. Instead of randomly sampling our dataset, we split our dataset into two contiguous time windows. The first window is the training set, and the second is the test set.
We first look at the time range of our dataset to select a date to use as the partition between the training and test set.
```
df["timestamp"].describe()
```
We have login events for 10 days. Let's take the first week (7 days) of data as training and then use the last 3 days for the test set.
```
time_partition = (
datetime(2018, 11, 11, tzinfo=pytz.FixedOffset(0))
if num_time_zones > 1
else datetime(2018, 11, 11)
)
train_df = df[df["timestamp"] <= time_partition]
test_df = df[df["timestamp"] > time_partition]
```
Now that we have our training dataset, we shuffle it.
Shuffling improves the model's performance since SageMaker IP Insights uses stochastic gradient descent. This ensures that login events for the same user are less likely to occur in the same mini batch. This allows the model to improve its performance in between predictions of the same user, which will improve training convergence.
```
# Shuffle train data
train_df = train_df.sample(frac=1)
train_df.head()
```
### Store Data on S3
Now that we have simulated (or scraped) our dataset, we have to prepare it and upload it to S3.
We will be doing local inference, therefore we don't need to upload our test dataset.
```
# Output dataset as headerless CSV
train_data = train_df.to_csv(index=False, header=False, columns=["user", "ip_address"])
# Upload data to S3 key
train_data_file = "train.csv"
key = os.path.join(prefix, "train", train_data_file)
s3_train_data = f"s3://{bucket}/{key}"
print(f"Uploading data to: {s3_train_data}")
boto3.resource("s3").Bucket(bucket).Object(key).put(Body=train_data)
# Configure SageMaker IP Insights Input Channels
input_data = {
"train": sagemaker.session.s3_input(
s3_train_data, distribution="FullyReplicated", content_type="text/csv"
)
}
```
## Training
---
Once the data is preprocessed and available in the necessary format, the next step is to train our model on the data. There are a number of parameters required by the SageMaker IP Insights algorithm to configure the model and define the computational environment in which training will take place. The first of these is to point to a container image which holds the algorithm's training and hosting code:
```
from sagemaker.amazon.amazon_estimator import get_image_uri
image = get_image_uri(boto3.Session().region_name, "ipinsights")
```
Then, we need to determine the training cluster to use. The IP Insights algorithm supports both CPU and GPU training. We recommend using GPU machines as they will train faster. However, when the size of your dataset increases, it can become more economical to use multiple CPU machines running with distributed training. See [Recommended Instance Types](https://docs.aws.amazon.com/sagemaker/latest/dg/ip-insights.html#ip-insights-instances) for more details.
### Training Job Configuration
- **train_instance_type**: the instance type to train on. We recommend `p3.2xlarge` for single GPU, `p3.8xlarge` for multi-GPU, and `m5.2xlarge` if using distributed training with CPU;
- **train_instance_count**: the number of worker nodes in the training cluster.
We also need to configure SageMaker IP Insights-specific hyperparameters:
### Model Hyperparameters
- **num_entity_vectors**: the total number of embeddings to train. We use an internal hashing mechanism to map the entity ID strings to an embedding index; therefore, using an embedding size larger than the total number of possible values helps reduce the number of hash collisions. We recommend setting this value to 2x the total number of unique entities (i.e. user names) in your dataset;
- **vector_dim**: the size of the entity and IP embedding vectors. The larger the value, the more information can be encoded using these representations but using too large vector representations may cause the model to overfit, especially for small training data sets;
- **num_ip_encoder_layers**: the number of layers in the IP encoder network. The larger the number of layers, the higher the model's capacity to capture patterns among IP addresses. However, a large number of layers increases the chance of overfitting. `num_ip_encoder_layers=1` is a good value to start experimenting with;
- **random_negative_sampling_rate**: the number of randomly generated negative samples to produce per 1 positive sample; `random_negative_sampling_rate=1` is a good value to start experimenting with;
- Random negative samples are produced by drawing each octet from a uniform distribution over [0, 255];
- **shuffled_negative_sampling_rate**: the number of shuffled negative samples to produce per 1 positive sample; `shuffled_negative_sampling_rate=1` is a good value to start experimenting with;
- Shuffled negative samples are produced by shuffling the accounts within a batch;
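For example, the `num_entity_vectors` recommendation can be derived directly from the training data. The toy DataFrame below stands in for the `train_df` built earlier:

```python
import pandas as pd

# Toy stand-in for the train_df built earlier in the notebook.
toy_train_df = pd.DataFrame({
    "user": ["user1", "user2", "user1", "user3"],
    "ip_address": ["192.168.1.100", "192.168.1.102", "10.0.0.5", "172.16.0.9"],
})

num_unique_users = toy_train_df["user"].nunique()
# Recommended: ~2x the number of unique entities, to keep hash collisions rare.
recommended_num_entity_vectors = 2 * num_unique_users
print(recommended_num_entity_vectors)  # 6
```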
### Training Hyperparameters
- **epochs**: the number of epochs to train. Increase this value if you continue to see the accuracy and cross entropy improving over the last few epochs;
- **mini_batch_size**: how many examples in each mini_batch. A smaller number improves convergence with stochastic gradient descent. But a larger number is necessary if using shuffled_negative_sampling to avoid sampling a wrong account for a negative sample;
- **learning_rate**: the learning rate for the Adam optimizer (try ranges in [0.001, 0.1]). Too large a learning rate may cause the model to diverge, since training would be likely to overshoot minima. On the other hand, too small a learning rate slows down convergence;
- **weight_decay**: L2 regularization coefficient. Regularization is required to prevent the model from overfitting the training data. Too large of a value will prevent the model from learning anything;
For more details, see [Amazon SageMaker IP Insights (Hyperparameters)](https://docs.aws.amazon.com/sagemaker/latest/dg/ip-insights-hyperparameters.html). Additionally, good values for most of these hyperparameters can be found with SageMaker Automatic Model Tuning; see [Amazon SageMaker IP Insights (Model Tuning)](https://docs.aws.amazon.com/sagemaker/latest/dg/ip-insights-tuning.html) for more details.
```
# Set up the estimator with training job configuration
ip_insights = sagemaker.estimator.Estimator(
image,
execution_role,
instance_count=1,
instance_type="ml.p3.2xlarge",
output_path=f"s3://{bucket}/{prefix}/output",
sagemaker_session=sagemaker.Session(),
)
# Configure algorithm-specific hyperparameters
ip_insights.set_hyperparameters(
num_entity_vectors="20000",
random_negative_sampling_rate="5",
vector_dim="128",
mini_batch_size="1000",
epochs="5",
learning_rate="0.01",
)
# Start the training job (should take about ~1.5 minute / epoch to complete)
ip_insights.fit(input_data)
```
If you see the message
> Completed - Training job completed
at the bottom of the output logs, then training completed successfully and the output of the SageMaker IP Insights model was stored in the specified output path. You can also view the status of and information about a training job in the AWS SageMaker console: click on the "Jobs" tab and select the training job matching the name printed below:
```
print(f"Training job name: {ip_insights.latest_training_job.job_name}")
```
## Inference
-----
Now that we have trained a SageMaker IP Insights model, we can deploy it to an endpoint to start performing inference on data. In this case, that means providing it a `<user, IP address>` pair and predicting their compatibility scores.
We can create an inference endpoint using the SageMaker Python SDK `deploy()` function from the job we defined above. We specify the instance type where inference will be performed, as well as the initial number of instances to spin up. We recommend using the `ml.m5` instance as it provides the most memory at the lowest cost. Verify how large your model is in S3 and pick the instance type with the appropriate amount of memory.
```
predictor = ip_insights.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```
Congratulations, you now have a SageMaker IP Insights inference endpoint! You could start integrating this endpoint with your production services to start querying incoming requests for abnormal behavior.
You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console and selecting the endpoint matching the endpoint name below:
```
print(f"Endpoint name: {predictor.endpoint}")
```
### Data Serialization/Deserialization
We can pass data in a variety of formats to our inference endpoint. In this example, we will pass CSV-formatted data. Other available formats are JSON-formatted and JSON Lines-formatted. We make use of the SageMaker Python SDK utilities `csv_serializer` and `json_deserializer` when configuring the inference endpoint.
```
from sagemaker.predictor import csv_serializer, json_deserializer
predictor.serializer = csv_serializer
predictor.deserializer = json_deserializer
```
Now that the predictor is configured, it is as easy as passing in a matrix of inference data.
We can take a few samples from the simulated dataset above, so we can see what the output looks like.
```
inference_data = [(data[0], data[1]) for data in train_df[:5].values]
predictor.predict(
inference_data, initial_args={"ContentType": "text/csv", "Accept": "application/json"}
)
```
By default, the predictor will only output the `dot_product` between the learned IP address and the online resource (in this case, the user ID). The dot product summarizes the compatibility between the IP address and online resource. The larger the value, the more likely the algorithm thinks the IP address is to be used by the user. This compatibility score is sufficient for most applications, as we can define a threshold for what we consider an anomalous score.
However, more advanced users may want to inspect the learned embeddings and use them in further applications. We can configure the predictor to provide the learned embeddings by specifying the `verbose=True` parameter in the Accept header. You should see that each 'prediction' object contains three keys: `ip_embedding`, `entity_embedding`, and `dot_product`.
```
predictor.predict(
inference_data,
initial_args={"ContentType": "text/csv", "Accept": "application/json; verbose=True"},
)
```
## Compute Anomaly Scores
----
The `dot_product` output of the model provides a good measure of how compatible an IP address and online resource are. However, the range of the dot_product is unbounded, so to consider an event anomalous we need to define a threshold: when we score an event, if its score is above the threshold, we flag the behavior as anomalous. Picking a threshold is more of an art, and a good threshold depends on the specifics of your problem and dataset.
In the following section, we show how to pick a simple threshold by comparing the score distributions between known normal and malicious traffic:
1. We construct a test set of 'Normal' traffic;
2. Inject 'Malicious' traffic into the dataset;
3. Plot the distribution of scores for the model on the 'Normal' traffic and the 'Malicious' traffic;
4. Select a threshold value which separates the normal traffic distribution from the malicious traffic distribution. This value is based on your false-positive tolerance.
### 1. Construct 'Normal' Traffic Dataset
We previously [created a test set](#3.-Create-training-and-test-dataset) from our simulated Apache access logs dataset. We use this test dataset as the 'Normal' traffic in the test case.
```
test_df.head()
```
### 2. Inject Malicious Traffic
If we had a dataset with enough real malicious activity, we would use that to determine a good threshold. Those are hard to come by. So instead, we simulate malicious web traffic that mimics a realistic attack scenario.
We take a set of user accounts from the test set and randomly generate IP addresses. The users should not have used these IP addresses during training. This simulates an attacker logging in to a user account without knowledge of their IP history.
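`draw_ip` is provided by the tutorial's `generate_data.py` module. If you are adapting this notebook without that module, a minimal stand-in that draws each octet uniformly at random might look like the sketch below (an assumption, not the tutorial's exact implementation):

```python
import random

def draw_ip():
    """Draw a random IPv4 address, each octet uniform over [0, 255]."""
    return ".".join(str(random.randint(0, 255)) for _ in range(4))

random.seed(42)  # seed only for reproducibility of this sketch
ip = draw_ip()
print(ip)
```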
```
import numpy as np
from generate_data import draw_ip
def score_ip_insights(predictor, df):
    def get_score(result):
        """Return the negative of the dot product of the predictions from the model."""
        return [-prediction["dot_product"] for prediction in result["predictions"]]

    df = df[["user", "ip_address"]]
    result = predictor.predict(df.values)
    return get_score(result)


def create_test_case(train_df, test_df, num_samples, attack_freq):
    """Creates a test case from provided train and test data frames.

    This generates a test case for accounts that are in both the training and testing data sets.

    :param train_df: (pandas.DataFrame with columns ['user', 'ip_address']) training DataFrame
    :param test_df: (pandas.DataFrame with columns ['user', 'ip_address']) testing DataFrame
    :param num_samples: (int) number of test samples to use
    :param attack_freq: (float) the ratio of negative_samples:positive_samples to generate for the test case
    :return: DataFrame with both good and bad traffic, with labels
    """
    # Get all possible accounts. The IP Insights model can only make predictions on users it has seen in training.
    # Therefore, filter the test dataset for unseen accounts, as their results will not mean anything.
    valid_accounts = set(train_df["user"])
    valid_test_df = test_df[test_df["user"].isin(valid_accounts)]

    good_traffic = valid_test_df.sample(num_samples, replace=False)
    good_traffic = good_traffic[["user", "ip_address"]]
    good_traffic["label"] = 0

    # Generate malicious traffic
    num_bad_traffic = int(num_samples * attack_freq)
    bad_traffic_accounts = np.random.choice(list(valid_accounts), size=num_bad_traffic, replace=True)
    bad_traffic_ips = [draw_ip() for i in range(num_bad_traffic)]
    bad_traffic = pd.DataFrame({"user": bad_traffic_accounts, "ip_address": bad_traffic_ips})
    bad_traffic["label"] = 1

    # All traffic labels are: 0 for good traffic; 1 for bad traffic.
    all_traffic = pd.concat([good_traffic, bad_traffic])  # DataFrame.append was removed in pandas 2.0

    return all_traffic
NUM_SAMPLES = 100000
test_case = create_test_case(train_df, test_df, num_samples=NUM_SAMPLES, attack_freq=1)
test_case.head()
test_case_scores = score_ip_insights(predictor, test_case)
```
### 3. Plot Distribution
Now, we plot the distribution of scores. Looking at this distribution will inform us on where we can set a good threshold, based on our risk tolerance.
```
%matplotlib inline
import matplotlib.pyplot as plt
n, x = np.histogram(test_case_scores[:NUM_SAMPLES], bins=100, density=True)
plt.plot(x[1:], n)
n, x = np.histogram(test_case_scores[NUM_SAMPLES:], bins=100, density=True)
plt.plot(x[1:], n)
plt.legend(["Normal", "Random IP"])
plt.xlabel("IP Insights Score")
plt.ylabel("Frequency")
plt.show()
```
### 4. Selecting a Good Threshold
As we see in the figure above, there is a clear separation between normal traffic and random traffic.
We could select a threshold depending on the application.
- If we were working with low impact decisions, such as whether to ask for another factor or authentication during login, we could use a `threshold = 0.0`. This would result in catching more true-positives, at the cost of more false-positives.
- If our decision system were more sensitive to false positives, we could choose a larger threshold, such as `threshold = 10.0`. That way, if we were sending the flagged cases to manual investigation, we would have higher confidence that the activity was suspicious.
```
threshold = 0.0
flagged_cases = test_case[np.array(test_case_scores) > threshold]
num_flagged_cases = len(flagged_cases)
num_true_positives = len(flagged_cases[flagged_cases["label"] == 1])
num_false_positives = len(flagged_cases[flagged_cases["label"] == 0])
num_all_positives = len(test_case.loc[test_case["label"] == 1])
print(f"When threshold is set to: {threshold}")
print(f"Total of {num_flagged_cases} flagged cases")
print(f"Total of {num_true_positives} flagged cases are true positives")
print(f"Precision: {num_true_positives / float(num_flagged_cases)}")
print(f"Recall: {num_true_positives / float(num_all_positives)}")
```
## Epilogue
----
In this notebook, we have shown how to configure the basic training, deployment, and usage of the Amazon SageMaker IP Insights algorithm. All SageMaker algorithms come with support for two additional services that make optimizing and using the algorithm that much easier: Automatic Model Tuning and the Batch Transform service.
### Amazon SageMaker Automatic Model Tuning
The results above were based on using the default hyperparameters of the SageMaker IP Insights algorithm. If we wanted to improve the model's performance even more, we can use [Amazon SageMaker Automatic Model Tuning](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html) to automate the process of finding the hyperparameters.
#### Validation Dataset
Previously, we separated our dataset into a training and test set to validate the performance of a single IP Insights model. However, when we do model tuning, we train many IP Insights models in parallel. If we were to use the same test dataset to select the best model, we would bias our model selection, and we wouldn't know whether we selected the best model in general or just the best model for that particular dataset.
Therefore, we need to separate our test set into a validation dataset and a test dataset. The validation dataset is used for model selection. Then, once we pick the model with the best performance, we evaluate the winner on a test set just as before.
#### Validation Metrics
For SageMaker Automatic Model Tuning to work, we need an objective metric which determines the performance of the model we want to optimize. Because SageMaker IP Insights is an unsupervised algorithm, we do not have a clearly defined metric for performance (such as the percentage of fraudulent events discovered).
We allow the user to provide a validation set of sample data (same format as the training data above) through the `validation` channel. We then fix the negative sampling strategy to use `random_negative_sampling_rate=1` and `shuffled_negative_sampling_rate=0` and generate a validation dataset by assigning corresponding labels to the real and simulated data. We then calculate the model's `discriminator_auc` metric: we take the model's predicted labels and the 'true' simulated labels and compute the Area Under the ROC Curve (AUC).
We set up the `HyperParameterTuner` to maximize the `discriminator_auc` on the validation dataset. We also need to set the search space for the hyperparameters. We give recommended ranges for the hyperparameters in the [Amazon SageMaker IP Insights (Hyperparameters)](https://docs.aws.amazon.com/sagemaker/latest/dg/ip-insights-hyperparameters.html) documentation.
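The same discriminator-style AUC can also be computed offline for any labeled set of scores, for example with scikit-learn. The toy arrays below stand in for `test_case['label']` and `test_case_scores`:

```python
from sklearn.metrics import roc_auc_score

# Toy stand-ins: 1 = simulated malicious traffic, 0 = normal traffic;
# higher scores should indicate more anomalous <user, ip> pairs.
labels = [0, 0, 0, 1, 1, 1]
scores = [-12.0, -8.5, -10.1, 2.3, -1.0, 5.7]

auc = roc_auc_score(labels, scores)
print(auc)  # 1.0 here, since every malicious score exceeds every normal one
```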
```
test_df["timestamp"].describe()
```
The test set we constructed above spans 3 days. We reserve the first day as the validation set and the subsequent two days for the test set.
```
time_partition = (
datetime(2018, 11, 13, tzinfo=pytz.FixedOffset(0))
if num_time_zones > 1
else datetime(2018, 11, 13)
)
validation_df = test_df[test_df["timestamp"] < time_partition]
test_df = test_df[test_df["timestamp"] >= time_partition]
valid_data = validation_df.to_csv(index=False, header=False, columns=["user", "ip_address"])
```
We then upload the validation data to S3 and specify it as the validation channel.
```
# Upload data to S3 key
validation_data_file = "valid.csv"
key = os.path.join(prefix, "validation", validation_data_file)
boto3.resource("s3").Bucket(bucket).Object(key).put(Body=valid_data)
s3_valid_data = f"s3://{bucket}/{key}"
print(f"Validation data has been uploaded to: {s3_valid_data}")
# Configure SageMaker IP Insights Input Channels
input_data = {"train": s3_train_data, "validation": s3_valid_data}
from sagemaker.tuner import HyperparameterTuner, IntegerParameter
# Configure HyperparameterTuner
ip_insights_tuner = HyperparameterTuner(
estimator=ip_insights, # previously-configured Estimator object
objective_metric_name="validation:discriminator_auc",
hyperparameter_ranges={"vector_dim": IntegerParameter(64, 1024)},
max_jobs=4,
max_parallel_jobs=2,
)
# Start hyperparameter tuning job
ip_insights_tuner.fit(input_data, include_cls_metadata=False)
# Wait for all the jobs to finish
ip_insights_tuner.wait()
# Visualize training job results
ip_insights_tuner.analytics().dataframe()
# Deploy best model
tuned_predictor = ip_insights_tuner.deploy(
initial_instance_count=1,
instance_type="ml.m4.xlarge",
serializer=csv_serializer,
deserializer=json_deserializer,
)
# Make a prediction against the SageMaker endpoint
tuned_predictor.predict(
inference_data, initial_args={"ContentType": "text/csv", "Accept": "application/json"}
)
```
We should now have the best-performing model from the tuning job! We can determine thresholds and make predictions just like we did with the inference endpoint [above](#Inference).
### Batch Transform
Let's say we want to score all of the login events at the end of the day and aggregate flagged cases for investigators to look at in the morning. If we store the daily login events in S3, we can use IP Insights with [Amazon SageMaker Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html) to run inference and store the IP Insights scores back in S3 for future analysis.
Below, we take the training job from before and evaluate it on the validation data we put in S3.
```
transformer = ip_insights.transformer(instance_count=1, instance_type="ml.m4.xlarge")
transformer.transform(s3_valid_data, content_type="text/csv", split_type="Line")
# Wait for Transform Job to finish
transformer.wait()
print(f"Batch Transform output is at: {transformer.output_path}")
```
### Stop and Delete the Endpoint
If you are done with this model, you should delete the endpoint before you close the notebook. Otherwise, you will continue to pay for the endpoint while it is running.
To do so, execute the cell below. Alternatively, you can navigate to the "Endpoints" tab in the SageMaker console, select the endpoint with the name stored in the variable `endpoint_name`, and select "Delete" from the "Actions" dropdown menu.
```
ip_insights_tuner.delete_endpoint()
sagemaker.Session().delete_endpoint(predictor.endpoint)
```
```
############## PLEASE RUN THIS CELL FIRST! ###################
# import everything and define a test runner function
from importlib import reload
from helper import run
import helper
```
### This is a Jupyter Notebook
You can write Python code and it will execute. You can write the typical 'hello world' program like this:
```python
print('hello world')
```
You can execute by pressing shift-enter. Try it! You can also click the Run button in the toolbar.
```
print('hello world')
```
### Exercise 1
You can do a lot more than just print "hello world"
This is a fully functioning Python3 interpreter so you can write functions and objects like in the next box.
Try printing the 21st Fibonacci number below instead of the 11th. You can add caching if you want to practice coding in Python.
```
# Exercise 1
def fib(n):
    if n in (0, 1):
        return 1
    else:
        return fib(n - 1) + fib(n - 2)
print(fib(20))
```
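As the exercise hints, the naive recursion above recomputes the same subproblems exponentially many times; one way to add the suggested caching is `functools.lru_cache`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Same base case as above: fib(0) and fib(1) are both 1.
    if n in (0, 1):
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(20))  # 10946, the 21st Fibonacci number
```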
### A few things you should remember in Python 3
Strings and bytes are now different
```python
s = 'hello world'
b = b'hello world'
```
These may look the same but the `b` prefix means that the variable `b` is bytes whereas the variable `s` is a string. Roughly speaking, bytes are the raw data as stored on disk, while strings are sequences of Unicode characters. A good explanation of the difference is [here](http://www.diveintopython3.net/strings.html).
```
s = 'hello world'
b = b'hello world'
print(s==b) # False
# You convert from string to bytes this way:
hello_world_bytes = s.encode('ascii')
print(hello_world_bytes == b) # True
# You convert from bytes to string this way:
hello_world_string = b.decode('ascii')
print(hello_world_string == s) # True
```
### Imports
You already have unit tests that are written for you.
Your task is to make them pass.
We can import various modules to make our experience using Jupyter more pleasant.
This way, making everything work will be a lot easier.
```
# this is how you import an entire module
import helper
# this is how you import a particular function, class or constant
from helper import little_endian_to_int
# used in the next exercise
some_long_variable_name = 'something'
```
### Exercise 2
#### Jupyter Tips
The two most useful commands are tab and shift-tab
Tab lets you tab-complete. Try pressing tab after the `some` below. This will complete to the variable name that's there from the last cell.
Shift-Tab gives you a function/method signature. Try pressing shift-tab after the `little_endian_to_int` below. That's also there from the last cell.
```
# Exercise 2
some_long_variable_name
little_endian_to_int(b'\x00')
```
### Exercise 3
Open [helper.py](/edit/session0/helper.py) and implement the `bytes_to_str` and `str_to_bytes` functions. Once you're done editing, run the cell below.
#### Make [this test](/edit/session0/helper.py) pass: `helper.py:HelperTest:test_bytes`
```
# Exercise 3
reload(helper)
run(helper.HelperTest('test_bytes'))
```
### Getting Help
If you can't get this, there's a [complete directory](/tree/session0/complete) that has the [helper.py file](/edit/session0/complete/helper.py) and the [session0.ipynb file](/notebooks/session0/complete/session0.ipynb) which you can use to get the answers.
### Useful Python 3 Idioms
You can reverse a list by using `[::-1]`:
```python
a = [1, 2, 3, 4, 5]
print(a[::-1]) # [5, 4, 3, 2, 1]
```
Also works on both strings and bytes:
```python
s = 'hello world'
print(s[::-1]) # 'dlrow olleh'
b = b'hello world'
print(b[::-1]) # b'dlrow olleh'
```
Indexing bytes will get you the numerical value:
```python
print(b'&'[0]) # 38 since & is character #38
```
You can do the reverse by using bytes:
```python
print(bytes([38])) # b'&'
```
```
a = [1, 2, 3, 4, 5]
print(a[::-1]) # [5, 4, 3, 2, 1]
s = 'hello world'
print(s[::-1]) # 'dlrow olleh'
b = b'hello world'
print(b[::-1]) # b'dlrow olleh'
print(b'&'[0]) # 38 since & is character #38
print(bytes([38])) # b'&'
```
### Python Tricks
Here is how we convert binary to/from hex:
```
print(b'hello world'.hex())
print(bytes.fromhex('68656c6c6f20776f726c64'))
```
### Exercise 4
Reverse this hex dump: `b010a49c82b4bc84cc1dfd6e09b2b8114d016041efaf591eca88959e327dd29a`
Hint: you'll want to turn this into binary data, reverse and turn it into hex again
```
# Exercise 4
h = 'b010a49c82b4bc84cc1dfd6e09b2b8114d016041efaf591eca88959e327dd29a'
# convert to binary (bytes.fromhex)
b = bytes.fromhex(h)
# reverse ([::-1])
b_rev = b[::-1]
# convert to hex()
h_rev = b_rev.hex()
# print the result
print(h_rev)
```
### Modular Arithmetic
If you don't remember modular arithmetic, it's the `%` (modulo) operator in Python:
```python
39 % 12
```
The result is 3 because that is the remainder after division (39 = 12 × 3 + 3).
Some people like to call it "wrap-around" math. If it helps, think of modular arithmetic like a clock:

Think of taking the modulo as asking the question "what hour will it be 39 hours from now?"
If you're still confused, please take a look at [this](https://www.khanacademy.org/computing/computer-science/cryptography/modarithmetic/a/what-is-modular-arithmetic) article.
```
print(39 % 12)
```
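To make the clock analogy concrete, here is a small sketch; the starting hour of 3 is just an assumed example value:

```python
# If it is 3 o'clock now, "39 hours from now" wraps around the 12-hour
# clock -- add the hours and take the remainder modulo 12.
current_hour = 3
print((current_hour + 39) % 12)  # 6
```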
### Exercise 5
Find the modulo 19 of these numbers:
* 99
* \\(456 \cdot 444\\)
* \\(9^{77}\\)
(note python uses ** to do exponentiation)
```
# Exercise 5
prime = 19
print(99 % prime)
print(456*444 % prime)
print(9**77 % prime)
```
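As a side note (not required by the exercise), Python's built-in three-argument `pow` performs modular exponentiation without ever materializing the huge intermediate power, which matters once exponents get large:

```python
# pow(base, exp, mod) reduces modulo `mod` at every multiplication step,
# so it stays fast and memory-light even for enormous exponents.
prime = 19
print(pow(9, 77, prime))                    # 16
print(pow(9, 77, prime) == 9**77 % prime)   # True
```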
### Converting from bytes to int and back
Converting from bytes to integer requires learning about Big and Little Endian encoding. Essentially any number greater than 255 can be encoded in two ways, with the "Big End" going first or the "Little End" going first.
Normal human reading is from the "Big End". For example 123 is read as 100 + 20 + 3. Some computer systems encode integers with the "Little End" first.
A number like 500 is encoded this way in Big Endian:
0x01f4 (0x01 × 256 + 0xf4 = 256 + 244)
But this way in Little Endian, which is the same two bytes in reverse order:
0xf401 (0xf4 = 244 first, then 0x01 × 256 = 256)
In Python we can convert an integer to big or little endian using a built-in method:
```python
n = 1234567890
big_endian = n.to_bytes(4, 'big') # b'\x49\x96\x02\xd2'
little_endian = n.to_bytes(4, 'little') # b'\xd2\x02\x96\x49'
```
We can also convert from bytes to an integer this way:
```python
big_endian = b'\x49\x96\x02\xd2'
n = int.from_bytes(big_endian, 'big') # 1234567890
little_endian = b'\xd2\x02\x96\x49'
n = int.from_bytes(little_endian, 'little') # 1234567890
```
```
n = 1234567890
big_endian = n.to_bytes(4, 'big')
little_endian = n.to_bytes(4, 'little')
print(big_endian.hex())
print(little_endian.hex())
print(int.from_bytes(big_endian, 'big'))
print(int.from_bytes(little_endian, 'little'))
```
### Exercise 6
Convert the following:
* 8675309 to 8 bytes in big endian
* interpret ```b'\x11\x22\x33\x44\x55'``` as a little endian integer
```
# Exercise 6
n = 8675309
print(n.to_bytes(8, 'big'))
little_endian = b'\x11\x22\x33\x44\x55'
print(int.from_bytes(little_endian, 'little'))
```
### Exercise 7
We'll want to convert from little-endian bytes to an integer often, so write a function that will do this.
#### Make [this test](/edit/session0/helper.py) pass: `helper.py:HelperTest:test_little_endian_to_int`
```
# Exercise 7
reload(helper)
run(helper.HelperTest('test_little_endian_to_int'))
```
### Exercise 8
Similarly, we'll want to do the inverse operation, so write a function that will convert an integer to little-endian bytes given the number and the number of bytes it should take up.
#### Make [this test](/edit/session0/helper.py) pass: `helper.py:HelperTest:test_int_to_little_endian`
```
# Exercise 8
reload(helper)
run(helper.HelperTest('test_int_to_little_endian'))
```
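As a hedged sketch of where Exercises 7 and 8 are headed (the actual solutions belong in `helper.py`, and finished versions live in the `complete` directory), both helpers reduce to the `int.from_bytes` / `int.to_bytes` methods shown earlier:

```python
# Possible implementations of the two conversion helpers, built directly
# on the int methods demonstrated above.
def little_endian_to_int(b):
    # interpret a byte sequence as a little-endian integer
    return int.from_bytes(b, 'little')

def int_to_little_endian(n, length):
    # encode an integer as `length` little-endian bytes
    return n.to_bytes(length, 'little')

print(little_endian_to_int(b'\x99\xc3\x98\x00'))  # 10011545
print(int_to_little_endian(1, 4))                 # b'\x01\x00\x00\x00'
```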
```
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel
from surprise import Reader, Dataset, SVD
from surprise.model_selection import KFold
from surprise.model_selection.validation import cross_validate
import copy
from datetime import datetime
print("Import Success")
meta = pd.read_csv('Dataset/movies_metadata.csv')
meta.head()
# Rating
ratings = pd.read_csv('Dataset/ratings_small.csv')
ratings.head()
#Links of IMDb and TMDb
links = pd.read_csv('Dataset/links_small.csv')
links.head()
keywords = pd.read_csv('Dataset/keywords.csv')
keywords.head()
# Content based Recommender System
meta['overview'] = meta['overview'].fillna('')
meta['overview'].head()
pd.DataFrame({'feature':meta.dtypes.index, 'dtype':meta.dtypes.values})
meta = meta.drop([19730, 29503, 35587]) # Remove these ids to solve ValueError: "Unable to parse string..."
meta['id'] = pd.to_numeric(meta['id'])
pd.DataFrame({'feature':links.dtypes.index, 'dtype':links.dtypes.values})
col=np.array(links['tmdbId'], np.int64)
links['tmdbId']=col
meta.rename(columns={'id':'tmdbId'}, inplace=True)
meta = pd.merge(meta,links,on='tmdbId')
meta.drop(['imdb_id'], axis=1, inplace=True)
meta.head()
tfidf = TfidfVectorizer(stop_words='english')
# Constructing matrix TF-IDF
tfidf_matrix = tfidf.fit_transform(meta['overview'])
tfidf_matrix.shape
cosine_sim = linear_kernel(tfidf_matrix, tfidf_matrix)
indices = pd.Series(meta.index, index=meta['original_title']).drop_duplicates()
def recommend(title, cosine_sim=cosine_sim):
idx = indices[title]
sim_scores = list(enumerate(cosine_sim[idx]))
sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
sim_scores = sim_scores[1:16]
movie_indices = [i[0] for i in sim_scores]
# keep only movies whose vote_average falls in [5, 10]; calling
# movie_indices.remove(i) while iterating over the same list would
# silently skip elements
movie_indices = [i for i in movie_indices if 5 <= meta.at[i, 'vote_average'] <= 10]
return meta[['original_title','vote_average']].iloc[movie_indices]
recommend('Iron Man')
reader = Reader()
df = Dataset.load_from_df(ratings[['userId', 'movieId', 'rating']], reader)
kf = KFold(n_splits=5)
kf.split(df)
svd = SVD()
cross_validate(svd, df, measures=['RMSE', 'MAE'])
trainset = df.build_full_trainset()
svd.fit(trainset)
ratings[ratings['userId'] == 10]
# smaller link file reload
links_df = pd.read_csv('Dataset/links_small.csv')
col=np.array(links_df['tmdbId'], np.int64)
links_df['tmdbId']=col
links_df = links_df.merge(meta[['title', 'tmdbId']], on='tmdbId').set_index('title')
links_index = links_df.set_index('tmdbId')
def hybrid(userId, title):
idx = indices[title]
tmdbId = links_df.loc[title]['tmdbId'] # Get the corresponding tmdb id
sim_scores = list(enumerate(cosine_sim[idx]))
sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
sim_scores = sim_scores[1:31] # Scores of 30 similar movies
movie_indices = [i[0] for i in sim_scores]
movies = meta.iloc[movie_indices][['title', 'vote_average', 'tmdbId']]
movies['est'] = movies['tmdbId'].apply(lambda x: svd.predict(userId, links_index.loc[x]['movieId']).est) # Estimated prediction using svd
movies = movies.sort_values('est', ascending=False) # Rank movies according to the predicted values
movies.columns = ['Title', 'Vote Average', 'TMDb Id', 'Estimated Prediction']
return movies.head(30) # Display top 30 recommended movies
hybrid(30,'The Conjuring')
result = hybrid(10,'Batman Begins')
print("data getting passed in contextual")
print(result)
# necessary functions for contextual_update function
def day_time():
now = datetime.now().time()
morning=now.replace(hour=12,minute=0,second=0,microsecond=0)
afternoon=now.replace(hour=16,minute=0,second=0,microsecond=0)
evening=now.replace(hour=19,minute=0,second=0,microsecond=0)
if now< morning :
return "morning"
elif now<afternoon :
return "afternoon"
elif now<evening :
return "evening"
else :
return "night"
def season():
month = datetime.now().month
if month < 4:
return "winter"
elif month <6:
return "summer"
elif month <9:
return "rainy"
elif month < 11:
return "autumn"
else :
return "winter"
def is_weekend():
day=datetime.now().isoweekday()
if day< 6:
return False
return True
#testing function
#day_time()
season()
# Function to include movies on specific dates -
def special_date(recommended_list,date_passed):
print("special date function reached")
date_event = datetime.now().date()
# Independence Day
date_event=date_event.replace(month=8,day=15)
new_list=recommended_list.copy()
if date_event == date_passed:
# Vote Average TMDb Id Estimated Prediction
new_movie = pd.DataFrame({"Title":["Border","Uri:The Surgical Strike"],
"Vote Average":[6.8,7.1],
"TMDb Id":[33125,554600],
"Estimated Prediction":[5.0,5.0],
"tmdbId":[33125,554600],
"genres":["[{'name':'Action'},{'name':'History'},{'name':'War'}]","[{'name':'Action'},{'name':'Drama'},{'name':'War'}]"]
})
new_list = pd.concat([new_movie,recommended_list])
#Republic Day
date_event=date_event.replace(month=1,day=26)
if date_event == date_passed:
new_movie = pd.DataFrame({"Title":["Shaheed","Border","Uri:The Surgical Strike"],
"Vote Average":[5.0,6.8,7.1],
"TMDb Id":[498713,33125,554600],
"Estimated Prediction":[5.0,5.0,5.0],
"tmdbId":[498713,33125,554600],
"genres":["[{'name':'War'},{'name':'History'}]","[{'name':'Action'},{'name':'History'},{'name:'War'}]","[{'name':'Action'},{'name':'Drama'},{'name':'War'}]"]
})
new_list = pd.concat([new_movie,recommended_list])
#Teachers Day
date_event=date_event.replace(month=9,day=5)
if date_event == date_passed:
new_movie = pd.DataFrame({"Title":["Super 30","Taare Zameen Par"],
"Vote Average":[7.6,8.0],
"TMDb Id":[534075,7508],
"Estimated Prediction":[5.0,5.0],
"tmdbId":[534075,7508],
"genres":["[{'name':'Drama'}]","[{'name':'Drama'}]"]
})
new_list = pd.concat([new_movie,recommended_list])
#Children day
date_event=date_event.replace(month=11,day=14)
if date_event == date_passed:
new_movie = pd.DataFrame({"Title":["Taare Zameen Par","Chillar Party"],
"Vote Average":[8.0,6.9],
"TMDb Id":[7508,69891],
"Estimated Prediction":[5.0,5.0],
"tmdbId":[7508,69891],
"genres":["[{'name':'Drama'}]","[{'name':'Drama'},{'name':'Comedy'},{'name':'Family'}]"]
})
new_list = pd.concat([new_movie,recommended_list])
#Christmas
date_event=date_event.replace(month=12,day=25)
if date_event == date_passed:
new_movie = pd.DataFrame({"Title":["Let It Snow","Home Alone"],
"Vote Average":[6.1,7.3],
"TMDb Id":[295151,771],
"Estimated Prediction":[5.0,5.0],
"tmdbId":[295151,771],
"genres":["[{'name':'Romance'},{'name':'Comedy'}]","[{'name':'Comedy'},{'name':'Family'}]"]
})
new_list = pd.concat([new_movie,recommended_list])
#New Year
date_event=date_event.replace(month=12,day=31)
if date_event == date_passed:
new_movie = pd.DataFrame({"Title":["New Years Eve"],
"Vote Average":[5.9],
"TMDb Id":[62838],
"Estimated Prediction":[5.0],
"tmdbId":[62838],
"genres":["[{'name':'Comedy'},{'name':'Romance'}]"]
})
new_list = pd.concat([new_movie,recommended_list])
date_event=date_event.replace(month=1,day=1)
if date_event == date_passed:
new_movie = pd.DataFrame({"Title":["New Years Eve"],
"Vote Average":[5.9],
"TMDb Id":[62838],
"Estimated Prediction":[5.0],
"tmdbId":[62838],
"genres":["[{'name':'Comedy'},{'name':'Romance'}]"]
})
new_list = pd.concat([new_movie,recommended_list])
#Valentine
date_event=date_event.replace(month=2,day=14)
if date_event == date_passed:
new_movie = pd.DataFrame({"Title":["The Notebook","Titanic"],
"Vote Average":[7.9,7.9],
"TMDb Id":[11036,597],
"Estimated Prediction":[5.0,5.0],
"tmdbId":[11036,597],
"genres":["[{'name':'Romance'},{'name':'Drama'}]","[{'name':'Drama'},{'name':'Romance'}]"]
})
new_list = pd.concat([new_movie,recommended_list])
return new_list
def recommendation_updater(recommended_list,genre_score):
#print("reached recommendation updater - ")
new_list=recommended_list.copy()
for ind in recommended_list.index:
new_score=0
movie_genre= list(eval(recommended_list['genres'][ind]))
#print(recommended_list['genres'][ind])
#print(type(recommended_list['genres'][ind]))
#print(movie_genre)
curr_genre_list= [li['name'] for li in movie_genre]
#print(curr_genre_list)
for genre in curr_genre_list:
if genre in genre_score:
new_score+=genre_score[genre]
#print(new_score)
new_list['Estimated Prediction'][ind]=new_list['Estimated Prediction'][ind]+new_score
return new_list
def contextual_update(list_passed,family=False,device="Mobile",no_of_people=1,date_passed=datetime.now().date()) :
# categories we have romance,action,comedy,drama ,crime and thriller ,documentary,sci-fi
recommended_list=list_passed.copy()
print("Before Context-Awareness based changes - ")
print(list_passed)
# Adding Genres for update
recommended_list = pd.merge(recommended_list,meta[['tmdbId','genres']],left_on=['TMDb Id'],right_on=['tmdbId']).dropna()
# Special Days
test_date=datetime.now().date()
test_date=test_date.replace(month=8,day=15)
recommended_list=special_date(recommended_list,test_date)
recommended_list.reset_index(drop=True,inplace=True)
# Reducing score to take account for contextual_update
effect_rate = 0.75
category=4
recommended_list['Estimated Prediction']=recommended_list['Estimated Prediction']-effect_rate
# Timing based
day_part = day_time()
if day_part == "morning":
scores={
'Romance':0.24*(effect_rate/category),'Action':0.18*(effect_rate/category),'Comedy':0.64*(effect_rate/category),'Drama':0.24*(effect_rate/category),'Crime':0.17*(effect_rate/category)
,'Thriller':0.17*(effect_rate/category),'Documentary':0.25*(effect_rate/category),'Science Fiction':0.28*(effect_rate/category)
}
elif day_part =="afternoon":
scores ={
'Romance':0.18*(effect_rate/category),'Action':0.44*(effect_rate/category),'Comedy':0.48*(effect_rate/category),'Drama':0.35*(effect_rate/category),'Crime':0.5*(effect_rate/category)
,'Thriller':0.5*(effect_rate/category),'Documentary':0.24*(effect_rate/category),'Science Fiction':0.35*(effect_rate/category)
}
elif day_part =="evening":
scores={
'Romance':0.4*(effect_rate/category),'Action':0.34*(effect_rate/category),'Comedy':0.48*(effect_rate/category),'Drama':0.3*(effect_rate/category),'Crime':0.4*(effect_rate/category)
,'Thriller':0.4*(effect_rate/category),'Documentary':0.24*(effect_rate/category),'Science Fiction':0.32*(effect_rate/category)
}
else :
scores={
'Romance':0.57*(effect_rate/category),'Action':0.37*(effect_rate/category),'Comedy':0.42*(effect_rate/category),'Drama':0.37*(effect_rate/category),'Crime':0.54*(effect_rate/category)
,'Thriller':0.54*(effect_rate/category),'Documentary':0.31*(effect_rate/category),'Science Fiction':0.41*(effect_rate/category)
}
recommended_list=recommendation_updater(recommended_list,scores)
# Season based
curr_season = season()
if curr_season == "summer":
scores={
'Romance':0.32*(effect_rate/category),'Action':0.48*(effect_rate/category),'Comedy':0.57*(effect_rate/category),'Drama':0.5*(effect_rate/category),'Crime':0.6*(effect_rate/category)
,'Thriller':0.6*(effect_rate/category),'Documentary':0.27*(effect_rate/category),'Science Fiction':0.47*(effect_rate/category)
}
elif curr_season == "rainy":
scores={
'Romance':0.57*(effect_rate/category),'Action':0.3*(effect_rate/category),'Comedy':0.52*(effect_rate/category),'Drama':0.5*(effect_rate/category),'Crime':0.41*(effect_rate/category)
,'Thriller':0.41*(effect_rate/category),'Documentary':0.14*(effect_rate/category),'Science Fiction':0.32*(effect_rate/category)
}
elif curr_season == "autumn":
scores={
'Romance':0.41*(effect_rate/category),'Action':0.37*(effect_rate/category),'Comedy':0.5*(effect_rate/category),'Drama':0.48*(effect_rate/category),'Crime':0.52*(effect_rate/category)
,'Thriller':0.52*(effect_rate/category),'Documentary':0.31*(effect_rate/category),'Science Fiction':0.44*(effect_rate/category)
}
else :
scores={
'Romance':0.54*(effect_rate/category),'Action':0.45*(effect_rate/category),'Comedy':0.51*(effect_rate/category),'Drama':0.42*(effect_rate/category),'Crime':0.5*(effect_rate/category)
,'Thriller':0.5*(effect_rate/category),'Documentary':0.21*(effect_rate/category),'Science Fiction':0.32*(effect_rate/category)
}
recommended_list=recommendation_updater(recommended_list,scores)
# Weekday based -
if is_weekend():
scores={
'Romance':0.41*(effect_rate/category),'Action':0.48*(effect_rate/category),'Comedy':0.54*(effect_rate/category),'Drama':0.38*(effect_rate/category),'Crime':0.7*(effect_rate/category)
,'Thriller':0.7*(effect_rate/category),'Documentary':0.28*(effect_rate/category),'Science Fiction':0.41*(effect_rate/category)
}
else :
scores={
'Romance':0.37*(effect_rate/category),'Action':0.32*(effect_rate/category),'Comedy':0.51*(effect_rate/category),'Drama':0.32*(effect_rate/category),'Crime':0.48*(effect_rate/category)
,'Thriller':0.48*(effect_rate/category),'Documentary':0.21*(effect_rate/category),'Science Fiction':0.38*(effect_rate/category)
}
recommended_list=recommendation_updater(recommended_list,scores)
# Device Based
if device == "phone":
scores={
'Romance':0.36*(effect_rate/category),'Action':0.24*(effect_rate/category),'Comedy':0.66*(effect_rate/category),'Drama':0.44*(effect_rate/category),'Crime':0.38*(effect_rate/category)
,'Thriller':0.38*(effect_rate/category),'Documentary':0.2*(effect_rate/category),'Science Fiction':0.21*(effect_rate/category)
}
elif device =="tablet":
scores={
'Romance':0.34*(effect_rate/category),'Action':0.37*(effect_rate/category),'Comedy':0.43*(effect_rate/category),'Drama':0.43*(effect_rate/category),'Crime':0.42*(effect_rate/category)
,'Thriller':0.42*(effect_rate/category),'Documentary':0.22*(effect_rate/category),'Science Fiction':0.36*(effect_rate/category)
}
else :
scores={
'Romance':0.33*(effect_rate/category),'Action':0.6*(effect_rate/category),'Comedy':0.24*(effect_rate/category),'Drama':0.3*(effect_rate/category),'Crime':0.66*(effect_rate/category)
,'Thriller':0.66*(effect_rate/category),'Documentary':0.21*(effect_rate/category),'Science Fiction':0.58*(effect_rate/category)
}
recommended_list=recommendation_updater(recommended_list,scores)
# Based on Number of people and Family -
if no_of_people >1 :
if family:
scores={
'Romance':0.1*(effect_rate/category),'Action':0.43*(effect_rate/category),'Comedy':0.66*(effect_rate/category),'Drama':0.49*(effect_rate/category),'Crime':0.26*(effect_rate/category)
,'Thriller':0.26*(effect_rate/category),'Documentary':0.36*(effect_rate/category),'Science Fiction':0.29*(effect_rate/category)
}
else :
scores={
'Romance':0.33*(effect_rate/category),'Action':0.63*(effect_rate/category),'Comedy':0.54*(effect_rate/category),'Drama':0.33*(effect_rate/category),'Crime':0.61*(effect_rate/category)
,'Thriller':0.61*(effect_rate/category),'Documentary':0.17*(effect_rate/category),'Science Fiction':0.54*(effect_rate/category)
}
recommended_list=recommendation_updater(recommended_list,scores)
# removing genre from table
recommended_list.drop(['tmdbId','genres'],axis=1,inplace=True)
# Sorting the list for final result and comparing
#print(list_passed)
recommended_list.sort_values(by='Estimated Prediction',ascending=False,inplace=True)
print(recommended_list)
contextual_update(result)
```
# Exploring Machine Learning on Quantopian
Recently, Quantopian’s Chief Investment Officer, Jonathan Larkin, shared an industry insider’s overview of the [professional quant equity workflow][1]. This workflow is comprised of distinct stages including: (1) Universe Definition, (2) Alpha Discovery, (3) Alpha Combination, (4) Portfolio Construction and (5) Trading.
This Notebook focuses on stage 3: Alpha Combination. At this stage, Machine Learning is an intuitive choice: we have abstracted the problem to such a degree that it is now a classic classification (or regression) problem, which ML is very good at solving, yielding an alpha combination that is predictive.
As you will see, there is a lot of code here setting up a factor library and some data wrangling to get everything into shape. The details of this part are perhaps not quite as interesting so feel free to skip directly to ["Training our ML pipeline"](#training) where we have everything in place to train and test our classifier.
## Overview
1. Define trading universe to use ([Q500US and Q1500US][2]).
2. Define alphas (implemented in [Pipeline][3]).
3. Run pipeline.
4. Split into train and test set.
5. Preprocess data (rank alphas, subsample, align alphas with future returns, impute, scale).
6. Train Machine Learning classifier ([AdaBoost from Scikit-Learn][4]).
7. Evaluate Machine Learning classifier on test set.
Note that one important limitation is that we only train and test on static (i.e. fixed-in-time) data. Thus, you cannot directly do the same in an algorithm. In principle, this is possible and will be the next step, but it makes sense to first focus on just the ML in a more direct way to get a good intuition about the workflow and how to develop a competitive ML pipeline.
### Disclaimer
This workflow is still a bit rough around the edges. We are working on improving it and adding better educational materials. This serves as a sneak-peek for the curious and adventurous.
[1]: http://blog.quantopian.com/a-professional-quant-equity-workflow/
[2]: https://www.quantopian.com/posts/the-q500us-and-q1500us
[3]: https://www.quantopian.com/tutorials/pipeline
[4]: http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html
[5]: https://www.quantopian.com/posts/alphalens-a-new-tool-for-analyzing-alpha-factors
```
from quantopian.research import run_pipeline
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import Latest
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.factors import CustomFactor, SimpleMovingAverage, AverageDollarVolume, Returns, RSI
from quantopian.pipeline.classifiers.morningstar import Sector
from quantopian.pipeline.filters import Q500US, Q1500US
from quantopian.pipeline.data.quandl import fred_usdontd156n as libor
from quantopian.pipeline.data.zacks import EarningsSurprises
import talib
import pandas as pd
import numpy as np
from time import time
import alphalens as al
import pyfolio as pf
from scipy import stats
import matplotlib.pyplot as plt
from sklearn import linear_model, decomposition, ensemble, preprocessing, isotonic, metrics
```
## Definition of some commonly used factors
The factors below are a small collection of commonly used alphas that were coded by Gil Wassermann. I will post a separate Notebook with the full collection and more descriptions of them. Ultimately we will put these into a library you can just import to avoid the wall of text. If you want to understand more about pipeline, read the [tutorial](https://www.quantopian.com/tutorials/pipeline).
Also note the `Earnings_Quality` alpha which uses [Zacks Earnings Surprises](https://www.quantopian.com/data/zacks/earnings_surprises), a [new source from our partners](https://www.quantopian.com/data).
The details of these factors are not the focus of this Notebook so feel free to just [skip](#universe) this cell.
```
bs = morningstar.balance_sheet
cfs = morningstar.cash_flow_statement
is_ = morningstar.income_statement
or_ = morningstar.operation_ratios
er = morningstar.earnings_report
v = morningstar.valuation
vr = morningstar.valuation_ratios
def make_factors():
def Asset_Growth_3M():
return Returns(inputs=[bs.total_assets], window_length=63)
def Asset_To_Equity_Ratio():
return bs.total_assets.latest / bs.common_stock_equity.latest
def Capex_To_Cashflows():
return (cfs.capital_expenditure.latest * 4.) / \
(cfs.free_cash_flow.latest * 4.)
def EBITDA_Yield():
return (is_.ebitda.latest * 4.) / \
USEquityPricing.close.latest
def EBIT_To_Assets():
return (is_.ebit.latest * 4.) / \
bs.total_assets.latest
def Earnings_Quality():
return morningstar.cash_flow_statement.operating_cash_flow.latest / \
EarningsSurprises.eps_act.latest
def Return_On_Total_Invest_Capital():
return or_.roic.latest
class Mean_Reversion_1M(CustomFactor):
inputs = [Returns(window_length=21)]
window_length = 252
def compute(self, today, assets, out, monthly_rets):
out[:] = (monthly_rets[-1] - np.nanmean(monthly_rets, axis=0)) / \
np.nanstd(monthly_rets, axis=0)
class MACD_Signal_10d(CustomFactor):
inputs = [USEquityPricing.close]
window_length = 60
def compute(self, today, assets, out, close):
sig_lines = []
for col in close.T:
# get signal line only
try:
_, signal_line, _ = talib.MACD(col, fastperiod=12,
slowperiod=26, signalperiod=10)
sig_lines.append(signal_line[-1])
# if error calculating, return NaN
except:
sig_lines.append(np.nan)
out[:] = sig_lines
class Moneyflow_Volume_5d(CustomFactor):
inputs = [USEquityPricing.close, USEquityPricing.volume]
window_length = 5
def compute(self, today, assets, out, close, volume):
mfvs = []
for col_c, col_v in zip(close.T, volume.T):
# denominator
denominator = np.dot(col_c, col_v)
# numerator
numerator = 0.
for n, price in enumerate(col_c.tolist()):
if price > col_c[n - 1]:
numerator += price * col_v[n]
else:
numerator -= price * col_v[n]
mfvs.append(numerator / denominator)
out[:] = mfvs
def Net_Income_Margin():
return or_.net_margin.latest
def Operating_Cashflows_To_Assets():
return (cfs.operating_cash_flow.latest * 4.) / \
bs.total_assets.latest
def Price_Momentum_3M():
return Returns(window_length=63)
class Price_Oscillator(CustomFactor):
inputs = [USEquityPricing.close]
window_length = 252
def compute(self, today, assets, out, close):
four_week_period = close[-20:]
out[:] = (np.nanmean(four_week_period, axis=0) /
np.nanmean(close, axis=0)) - 1.
def Returns_39W():
return Returns(window_length=215)
class Trendline(CustomFactor):
inputs = [USEquityPricing.close]
window_length = 252
# using MLE for speed
def compute(self, today, assets, out, close):
# prepare X matrix (x_is - x_bar)
X = range(self.window_length)
X_bar = np.nanmean(X)
X_vector = X - X_bar
X_matrix = np.tile(X_vector, (len(close.T), 1)).T
# prepare Y matrix (y_is - y_bar)
Y_bar = np.nanmean(close, axis=0)
Y_bars = np.tile(Y_bar, (self.window_length, 1))
Y_matrix = close - Y_bars
# prepare variance of X
X_var = np.nanvar(X)
# multiply X matrix and Y matrix and sum (dot product)
# then divide by variance of X
# this gives the MLE of Beta
out[:] = (np.sum((X_matrix * Y_matrix), axis=0) / X_var) / \
(self.window_length)
class Vol_3M(CustomFactor):
inputs = [Returns(window_length=2)]
window_length = 63
def compute(self, today, assets, out, rets):
out[:] = np.nanstd(rets, axis=0)
def Working_Capital_To_Assets():
return bs.working_capital.latest / bs.total_assets.latest
all_factors = {
'Asset Growth 3M': Asset_Growth_3M,
'Asset to Equity Ratio': Asset_To_Equity_Ratio,
'Capex to Cashflows': Capex_To_Cashflows,
'EBIT to Assets': EBIT_To_Assets,
'EBITDA Yield': EBITDA_Yield,
'Earnings Quality': Earnings_Quality,
'MACD Signal Line': MACD_Signal_10d,
'Mean Reversion 1M': Mean_Reversion_1M,
'Moneyflow Volume 5D': Moneyflow_Volume_5d,
'Net Income Margin': Net_Income_Margin,
'Operating Cashflows to Assets': Operating_Cashflows_To_Assets,
'Price Momentum 3M': Price_Momentum_3M,
'Price Oscillator': Price_Oscillator,
'Return on Invest Capital': Return_On_Total_Invest_Capital,
'39 Week Returns': Returns_39W,
'Trendline': Trendline,
'Vol 3M': Vol_3M,
'Working Capital to Assets': Working_Capital_To_Assets,
}
return all_factors
```
<a id='universe'></a>
## Define universe and select factors to use
We will screen our universe using the new [Q1500US](https://www.quantopian.com/posts/the-q500us-and-q1500us) and hand-pick a few alphas from the list above. We encourage you to play around with the factors.
```
universe = Q1500US()
factors = make_factors()
```
##Define and build the pipeline
Next we have to build the pipeline. In addition to the factors defined above, we need the forward returns we want to predict. In this Notebook we will predict 5-day returns and train our model on daily data. You can also subsample the data to e.g. weekly to not have overlapping return periods but we omit this here.
```
n_fwd_days = 5 # number of days to compute returns over
def make_history_pipeline(factors, universe, n_fwd_days=5):
# Call .rank() on all factors and mask out the universe
factor_ranks = {name: f().rank(mask=universe) for name, f in factors.iteritems()}
# Get cumulative returns over last n_fwd_days days. We will later shift these.
factor_ranks['Returns'] = Returns(inputs=[USEquityPricing.open],
mask=universe, window_length=n_fwd_days)
pipe = Pipeline(screen=universe, columns=factor_ranks)
return pipe
history_pipe = make_history_pipeline(factors, universe, n_fwd_days=n_fwd_days)
```
##Run the pipeline
```
start_timer = time()
start = pd.Timestamp("2016-03-06")
end = pd.Timestamp("2016-09-14")
results = run_pipeline(history_pipe, start_date=start, end_date=end)
results.index.names = ['date', 'security']
end_timer = time()
print "Time to run pipeline %.2f secs" % (end_timer - start_timer)
results.head()
results.tail()
```
As you can see, running pipeline gives us factors for every day and every security, ranked relative to each other. We assume that the order of individual factors might carry some weak predictive power on future returns. The question then becomes: how can we combine these weakly predictive factors in a clever way to get a single mega-alpha which is hopefully more predictive.
This is an important milestone. We have our ranked factor values on each day for each stock. Ranking is not absolutely necessary but has several benefits:
* it increases robustness to outliers,
* we mostly care about the relative ordering rather than the absolute values.
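A small standalone sketch (with made-up factor values, independent of the pipeline above) of why ranking adds robustness: a single extreme outlier dominates the raw values but only contributes one rank step:

```python
import pandas as pd

# Hypothetical raw factor values for five stocks; the last is an outlier.
raw = pd.Series([0.1, 0.2, 0.15, 0.12, 50.0])
ranks = raw.rank()  # 1 = smallest, like the .rank() calls in the pipeline
print(ranks.tolist())  # [1.0, 4.0, 3.0, 2.0, 5.0]
```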
Also note the `Returns` column. These are the values we want to predict given the factor ranks.
Next, we are doing some additional transformations to our data:
1. Shift factor ranks to align with future returns `n_fwd_days` days in the future.
2. Find the top and bottom 30 percentile stocks by their returns. Essentially, we only care about relative movement of stocks. If we later short stocks that go down and long stocks that go up relative to each other, it doesn't matter if e.g. all stocks are going down in absolute terms. Moreover, we are ignoring stocks that did not move that much (i.e. 30th to 70th percentile) to only train the classifier on those that provided strong signal.
3. We also binarize the returns by their percentile to turn our ML problem into a classification one.
`shift_mask_data()` is a utility function that does all of these.
```
def shift_mask_data(X, Y, upper_percentile=70, lower_percentile=30, n_fwd_days=1):
# Shift X to match factors at t to returns at t+n_fwd_days (we want to predict future returns after all)
shifted_X = np.roll(X, n_fwd_days, axis=0)
# Slice off rolled elements
X = shifted_X[n_fwd_days:]
Y = Y[n_fwd_days:]
n_time, n_stocks, n_factors = X.shape
# Look for biggest up and down movers
upper = np.nanpercentile(Y, upper_percentile, axis=1)[:, np.newaxis]
lower = np.nanpercentile(Y, lower_percentile, axis=1)[:, np.newaxis]
upper_mask = (Y >= upper)
lower_mask = (Y <= lower)
mask = upper_mask | lower_mask # This also drops nans
mask = mask.flatten()
# Only try to predict whether a stock moved up/down relative to other stocks
Y_binary = np.zeros(n_time * n_stocks)
Y_binary[upper_mask.flatten()] = 1
Y_binary[lower_mask.flatten()] = -1
# Flatten X
X = X.reshape((n_time * n_stocks, n_factors))
# Drop stocks that did not move much (i.e. are in the 30th to 70th percentile)
X = X[mask]
Y_binary = Y_binary[mask]
return X, Y_binary
```
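To make the alignment logic concrete, here is a toy version of the shift with made-up numbers (one factor, five days, not from the pipeline):

```python
import numpy as np

n_fwd = 2
X_toy = np.arange(5).reshape(5, 1)  # one factor observed on days 0..4
Y_toy = np.arange(5) * 10           # return realized on days 0..4

# Roll factors forward so row t holds the factors from day t - n_fwd,
# then slice off the wrapped-around rows, as shift_mask_data() does
X_shifted = np.roll(X_toy, n_fwd, axis=0)[n_fwd:]
Y_aligned = Y_toy[n_fwd:]
print(X_shifted.ravel())  # [0 1 2], factors from days 0..2
print(Y_aligned)          # [20 30 40], returns from days 2..4
```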
After we have our helper function to align our data properly we pass our factor ranks to it. You might wonder why we have to do the `swapaxes` thing below rather than just using `pandas` logic. The reason is that this way we can use the same `shift_mask_data()` function inside of a factor where we do not have access to a Pandas `DataFrame`. More on that in a future notebook.
```
# Massage data to be in the form expected by shift_mask_data()
results_wo_returns = results.copy()
returns = results_wo_returns.pop('Returns')
Y = returns.unstack().values
X = results_wo_returns.to_panel()
X = X.swapaxes(2, 0).swapaxes(0, 1).values # (factors, time, stocks) -> (time, stocks, factors)
```
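A minimal sanity check of what the `swapaxes` chain does to the axis order, using arbitrary toy dimensions:

```python
import numpy as np

n_factors, n_time, n_stocks = 3, 4, 5
panel_values = np.zeros((n_factors, n_time, n_stocks))  # pandas Panel layout

reordered = panel_values.swapaxes(2, 0).swapaxes(0, 1)  # -> (time, stocks, factors)
print(reordered.shape)  # (4, 5, 3)
```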
Next, we split our data into training (80%) and test (20%). This is common practice: our classifier will try to fit the training set as well as possible but it does not tell us how well it would perform on unseen data. Because we are dealing with time-series data we split along the time-dimension to only test on future data.
```
# Train-test split
train_size_perc = 0.8
n_time, n_stocks, n_factors = X.shape
train_size = np.int16(np.round(train_size_perc * n_time))
X_train, Y_train = X[:train_size, ...], Y[:train_size]
X_test, Y_test = X[(train_size+n_fwd_days):, ...], Y[(train_size+n_fwd_days):]
```
Because we could only exclude stocks that did not move much (i.e. 30th to 70th percentile) during training, we keep all stocks in our test set and simply binarize according to the median. This avoids look-ahead bias.
```
X_train_shift, Y_train_shift = shift_mask_data(X_train, Y_train, n_fwd_days=n_fwd_days)
X_test_shift, Y_test_shift = shift_mask_data(X_test, Y_test, n_fwd_days=n_fwd_days,
lower_percentile=50,
upper_percentile=50)
X_train_shift.shape, X_test_shift.shape
```
<a id='training'></a>
## Training our ML pipeline
Before training our classifier, several preprocessing steps are advisable. The first imputes NaN values with the factor mean to get clean training data; the second scales our factor ranks to lie in [0, 1].
For training we are using the [AdaBoost classifier](https://en.wikipedia.org/wiki/AdaBoost) which automatically determines the most relevant features (factors) and tries to find a non-linear combination of features to maximize predictiveness while still being robust. In essence, AdaBoost trains an ensemble of weak classifiers (decision trees in this case) sequentially. Each subsequent weak classifier takes into account the samples (or data points) already classified by the previous weak classifiers; it focuses on the samples they misclassified and tries to get those right. With each new weak classifier the decision function becomes more fine-grained and correctly classifies some previously misclassified samples. For prediction, you combine the answers of all weak classifiers via a weighted vote to get a single strong classifier.
Of course, this is just an example and you can let your creativity and skill roam freely.
```
start_timer = time()
# Train classifier
imputer = preprocessing.Imputer()
scaler = preprocessing.MinMaxScaler()
clf = ensemble.AdaBoostClassifier(n_estimators=150) # n_estimators controls how many weak classifiers are fit
X_train_trans = imputer.fit_transform(X_train_shift)
X_train_trans = scaler.fit_transform(X_train_trans)
clf.fit(X_train_trans, Y_train_shift)
end_timer = time()
print "Time to train full ML pipeline: %0.2f secs" % (end_timer - start_timer)
```
As you can see, training a modern ML classifier does not have to be very compute intensive. Scikit-learn is heavily optimized, so the full process takes less than 10 seconds. Of course, things like deep learning (which is currently not available on Quantopian) might take a bit longer, but those models are also trained on data sets much, much bigger than this (a famous subset of the ImageNet data set is 138 GB).
This means that the current bottleneck is retrieving the data from the pipeline (RAM and I/O), not lack of GPU or parallel processing support.
```
Y_pred = clf.predict(X_train_trans)
print('Accuracy on train set = {:.2f}%'.format(metrics.accuracy_score(Y_train_shift, Y_pred) * 100))
```
The classifier does reasonably well on the data we trained it on, but the real test is on hold-out data.
*Exercise*: It is also common to run cross-validation on the training data and tweak the parameters based on that score, testing should only be done rarely. Try coding a [sklearn pipeline](http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) with [K-Fold cross-validation](http://scikit-learn.org/stable/modules/cross_validation.html).
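One possible sketch of such a pipeline, written against a modern scikit-learn (where `preprocessing.Imputer` became `SimpleImputer`); the synthetic data below is purely illustrative:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.RandomState(0)
X_demo = rng.randn(200, 5)                       # stand-in for factor ranks
y_demo = (X_demo[:, 0] > 0).astype(int)          # toy binary labels
X_demo[rng.rand(200, 5) < 0.05] = np.nan         # sprinkle in NaNs, like real factor data

pipe = Pipeline([
    ('impute', SimpleImputer()),                 # fill NaNs with the column mean
    ('scale', MinMaxScaler()),                   # squash features into [0, 1]
    ('clf', AdaBoostClassifier(n_estimators=50)),
])

scores = cross_val_score(pipe, X_demo, y_demo, cv=KFold(n_splits=5))
print(scores.mean())
```

Note that for time-series data like ours, `TimeSeriesSplit` would be a safer choice than plain K-Fold, for the same look-ahead reasons discussed above.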
## Evaluating our ML classifier
To evaluate our ML classifier on the test set, we have to transform our test data in the same way as our training data. Note that we are only calling the `.transform()` method here, which does not use any information from the test set.
```
# Transform test data
X_test_trans = imputer.transform(X_test_shift)
X_test_trans = scaler.transform(X_test_trans)
```
After all this work, we can finally test our classifier. We can predict binary labels but also get probability estimates.
```
# Predict!
Y_pred = clf.predict(X_test_trans)
Y_pred_prob = clf.predict_proba(X_test_trans)
print 'Predictions:', Y_pred
print 'Probabilities of class == 1:', Y_pred_prob[:, 1] * 100
```
There are many ways to evaluate the performance of our classifier. The simplest and most intuitive one is certainly the accuracy (50% is chance due to our median split). On Kaggle competitions, you will also often find the log-loss being used. This punishes you for being wrong *and* confident in your answer. See [the Kaggle description](https://www.kaggle.com/wiki/LogarithmicLoss) for more motivation.
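To see why the log-loss punishes confident mistakes, here is the binary formula evaluated by hand on made-up probabilities:

```python
import numpy as np

def log_loss_binary(y_true, p):
    # -mean(y*log(p) + (1-y)*log(1-p)) for labels in {0, 1}
    p = np.clip(p, 1e-15, 1 - 1e-15)  # avoid log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y = np.array([1, 1])
print(log_loss_binary(y, np.array([0.6, 0.6])))    # about 0.51: mildly confident, correct
print(log_loss_binary(y, np.array([0.05, 0.05])))  # about 3.0: confident and wrong
```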
```
print('Accuracy on test set = {:.2f}%'.format(metrics.accuracy_score(Y_test_shift, Y_pred) * 100))
print('Log-loss = {:.5f}'.format(metrics.log_loss(Y_test_shift, Y_pred_prob)))
```
Seems like we're at chance on this data set, alas. But perhaps you can do better?
We can also examine which factors the classifier identified as most predictive.
```
feature_importances = pd.Series(clf.feature_importances_, index=results_wo_returns.columns)
feature_importances.sort(ascending=False)
ax = feature_importances.plot(kind='bar')
ax.set(ylabel='Importance (Gini Coefficient)', title='Feature importances');
```
*Exercise*: Use [partial dependence plots](http://scikit-learn.org/stable/auto_examples/ensemble/plot_partial_dependence.html) to get an understanding of how factor rankings are used to predict future returns.
## Where to go from here
Several knobs can be tweaked to boost performance:
* Add existing factors from the collection above to the data set.
* Come up with new factors
* Use [`alphalens`](https://www.quantopian.com/posts/alphalens-a-new-tool-for-analyzing-alpha-factors) to evaluate an alpha for its predictive power.
* Look for [novel data sources from our partners](https://www.quantopian.com/data).
* Look at the [101 Alpha's Project](https://www.quantopian.com/posts/the-101-alphas-project).
* Improve preprocessing of the ML pipeline
* Is 70/30 the best split?
* Should we not binarize the returns and do regression?
* Can we add Sector information in some way?
* Experiment with [feature selection](http://scikit-learn.org/stable/modules/feature_selection.html).
* PCA
* ICA
* etc.
* Tweak hyper-parameters of `AdaBoostClassifier`.
* [Use cross-validation to find optimal parameters](http://scikit-learn.org/stable/modules/grid_search.html).
* Try [different classifiers](http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html) or combinations of classifiers.
## Machine Learning competition
If you have something you think works well, post it in this thread. Make sure to test over the same time-period as I have here to keep things comparable. A month from now, we can test on the new data that has accumulated since then and determine who built the best ML pipeline. If there is demand, we might turn this into a proper ML contest.
## Machine Learning resources
If you look for information on how to get started with ML, here are a few resources:
* [Scikit-learn resources](http://scikit-learn.org/stable/presentations.html)
* [Learning scikit-learn: Machine Learning in Python](https://www.amazon.com/dp/1783281936)
* [Pattern Recognition and Machine Learning](https://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738)
## How to put this into an algorithm
As mentioned above, this is not immediately usable in an algorithm. For one thing, there is no `run_pipeline()` in the backtest IDE. It turns out to be rather simple to take the code above and put it into a pipeline `CustomFactor()` where the ML model would automatically get retrained and make predictions. You would then long the `1` predictions and short the `-1` predictions, apply some weighting (e.g. inverse variance) and execute orders. More on these next steps in the future.
## Credits
* Content created by James Christopher and Thomas Wiecki
* Thanks to Sheng Wang for ideas and inspiration.
* Jonathan Larkin, Jess Stauth, Delaney Granizo-Mackenzie, and Jean Bredeche for helpful comments on an earlier draft.
# Use case Schouwen Westkop Noord
## 1. Import functionality
```
from functions import *
```
## 3. User defined values
```
load_factor =np.array([0,0.1,0.2,0.3, 0.4,0.5,0.6,0.7,0.8,0.9,1]) # Roadmap11
start = [3.674, 51.70969009] # Location of the koppelpunt (connection point) (lon, lat)
stop = [3.522637481591586,51.76880095558772] # Location of the dredging area (lon, lat)
Volume = 50_000 # Total volume to be dredged (m^3)
unloading_rate = 1.5
loading_rate = 1.5
ukc = 1.0 # Under Keel clearance (m)
WWL = 20 # Width on Water Line (m)
LWL = 80 # Length on Water Line (m)
hopper_capacity = 4000 # Maximal capacity of the hopper (m^3)
V_full = 10*0.514444444 # Velocity in deep water when maximal loaded (m/s)
V_emp = 12*0.514444444 # Maximal sailing velocity empty in deep water (m/s)
T_full = 8 # Draft when maximal Loaded (m)
T_emp = 5 # Draft When empty (m)
WVPI_full = 10000 # Weight when maximal loaded (tf)
WVPI_empt = 4000 # Weight empty (tf)
Q_cost = compute_cost(700_000, 0.008) # Cost function for route optimizer ($)
Q_co2 = compute_co2(1) # Cost function for route optimizer (g CO2)
Q_velo = compute_v_provider(V_emp, V_full) # Vessel velocity is dependent on load factor
service_hours = 168 # hours per week
delay_for_bunkering = 10 # hours per week
technical_delay = 10 # hours per week
project_related_delay = 3 # hours per week
load_factor
```
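The `interpolate` helper comes from `functions.py` and is not shown in this notebook; a plausible minimal version (an assumption, not the actual implementation) simply blends linearly between the empty and fully loaded values:

```python
import numpy as np

def interpolate_sketch(load_factor, value_full, value_empty):
    # load_factor = 0 -> empty vessel, load_factor = 1 -> fully loaded
    load_factor = np.asarray(load_factor, dtype=float)
    return value_empty + (value_full - value_empty) * load_factor

# Draft example with the values above: T_full = 8 m, T_emp = 5 m
print(interpolate_sketch([0.0, 0.5, 1.0], 8.0, 5.0))  # drafts of 5.0, 6.5 and 8.0 m
```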
## 4. Digital-twin simulation
```
nl = (0.5,1.5)
dx_min = 0.02
blend = 0.8
number_of_neighbor_layers = 2
vship =np.transpose([interpolate(load_factor, V_full, V_emp)])
WD_min= interpolate(load_factor, T_full, T_emp)
WVPI = interpolate(load_factor, WVPI_full, WVPI_empt)
Load_flow = flow_2D_FM_100m
name_textfile_flow = 'D:/DCSM-FM_100m/A05_pieter(usecase_schouwen)/DCSM-FM_100m_0000_map.nc'
start_time = time.time()
Roadmap = Mesh_maker.Graph_flow_model(name_textfile_flow,
dx_min,
blend,
nl,
number_of_neighbor_layers,
vship,
Load_flow,
WD_min,
WVPI,
WWL = WWL,
ukc = ukc,
compute_cost = Q_cost,
compute_co2 = Q_co2,
repeat = False,
nodes_index = np.arange(12939),
optimization_type=['time']
)
stop_time = time.time()
computation_time = stop_time - start_time
print("the computational time is:", round(computation_time,2), "sec")
halem.save_object(Roadmap, 'Roadmap11SIM_NB2')
name_textfile_load = 'Roadmap11SIM_NB2'
with open(name_textfile_load, 'rb') as input:
Roadmap = pickle.load(input)
start_time_tot = time.time()
t0 = '15/04/2019 01:00:00' # start of the simulation (string)
d = datetime.datetime.strptime(t0, "%d/%m/%Y %H:%M:%S")
t0 = d.timestamp()
simulation_start = datetime.datetime.fromtimestamp(t0)
my_env = simpy.Environment(initial_time = time.mktime(simulation_start.timetuple()))
my_env.epoch = time.mktime(simulation_start.timetuple())
TransportProcessingResource = type('TransportProcessingResource',
(core.Identifiable, # Give it a name
core.Log, # Allow logging of all discrete events
core.ContainerDependentMovable, # A moving container, so capacity and location
core.Processor, # Allow for loading and unloading
core.HasResource, # Add information on serving equipment
core.Routeable, # Route is optimized
core.LoadingFunction, # Add a loading function
core.UnloadingFunction, # Add an unloading function
),
{})
data_from_site = {"env": my_env, # The simpy environment defined in the first cel
"name": "Winlocatie", # The name of the site
"geometry": [], # The coordinates of the project site
"capacity": Volume, # The capacity of the site
"level": Volume} # The actual volume of the site
data_node = {"env": my_env, # The simpy environment defined in the first cel
"name": "Intermediate site", # The name of the site
"geometry": []} # The coordinates of the project site
data_to_site = {"env": my_env, # The simpy environment defined in the first cel
"name": "Dumplocatie", # The name of the site
"geometry": [], # The coordinates of the project site
"capacity": Volume, # The capacity of the site
"level": 0} # The actual volume of the site (empty of course)
path = [start, stop]
Nodes, Edges = connect_sites_with_path(data_from_site, data_to_site, data_node, path)
positions = {}
FG = nx.Graph()
my_env.FG = FG
for node in Nodes:
positions[node.name] = (node.geometry.x, node.geometry.y)
FG.add_node(node.name, geometry = node.geometry)
for edge in Edges:
FG.add_edge(edge[0].name, edge[1].name, weight = 1)
route = []
data_hopper = {"env": my_env, # The simpy environment
"name": "Hopper 01", # Name
"geometry": Nodes[0].geometry, # It starts at the "from site"
"loading_rate": loading_rate, # Loading rate
"unloading_rate": unloading_rate, # Unloading rate
"capacity": hopper_capacity, # Capacity of the hopper - "Beunvolume"
"compute_v": Q_velo, # Variable speed
"route": route,
"optimize_route": True, # Optimize the Route
"optimization_type": 'time', # Optimize for the fastest path
"loadfactors": load_factor
}
hopper = TransportProcessingResource(**data_hopper)
activity = model.Activity(env = my_env, # The simpy environment defined in the first cel
name = "Soil movement", # We are moving soil
origin = Nodes[0], # We originate from the from_site
destination = Nodes[-1], # And therefore travel to the to_site
loader = hopper, # The benefit of a TSHD, all steps can be done
mover = hopper, # The benefit of a TSHD, all steps can be done
unloader = hopper, # The benefit of a TSHD, all steps can be done
start_event = None, # We can start right away
stop_event = None) # We stop once there is nothing more to move
my_env.Roadmap = Roadmap
my_env.print_progress = True
my_env.run()
stop_time_tot = time.time()
computation_time_tot = stop_time_tot - start_time_tot
print("the Total computational time is:",int(computation_time_tot/60),'minutes and', np.round(computation_time_tot- int(computation_time_tot/60)*60,2) , "sec")
```
## 7. Postprocessing
```
number_of_cycles = 0
for M in hopper.log['Message']:
if M == 'loading start':
number_of_cycles += 1
Operational_hours = service_hours - delay_for_bunkering - technical_delay - project_related_delay
efficiency = Operational_hours / service_hours
Total_production = Operational_hours/(((my_env.now - my_env.epoch)/ number_of_cycles/60) / 60) * (Volume / number_of_cycles)
vessels = [hopper]
activities = ['loading', 'unloading', 'sailing filled', 'sailing empty']
colors = {0:'rgb(55,126,184)', 1:'rgb(255,150,0)', 2:'rgb(98, 192, 122)', 3:'rgb(98, 141, 122)'}
print('Number of cycles', number_of_cycles)
print("Project finished in {}".format(datetime.timedelta(seconds=int(my_env.now - my_env.epoch))))
print("m^3 per hour",np.round(Volume/(my_env.now - my_env.epoch) * 60 * 60))
print("m^3 per week",np.round(Volume/(my_env.now - my_env.epoch) * 60 * 60 * 7 * 24))
print('Average volume per cycle', np.round(Volume / number_of_cycles, 2))
print('Average minutes per cycle', np.round((my_env.now - my_env.epoch)/ number_of_cycles/60, 2))
print('Used load factors:',set(np.round(np.array(hopper.log['Value'])/ hopper_capacity, 2)))
print()
print('Production included downtime [m^3 / week]:',np.round(Total_production,2))
print('Improvement over old method:', np.round((Total_production / 117287 - 1)*100 ,2), '%' )
plot.vessel_planning(vessels, activities, colors)
plot_route(hopper, Roadmap)
vessel = hopper
testing=False
activities = ['loading', 'unloading', 'sailing filled', 'sailing empty']
activities_times = [0, 0, 0, 0]
for i, activity in enumerate(activities):
starts = []
stops = []
for j, message in enumerate(vessel.log["Message"]):
if message == activity + " start":
starts.append(vessel.log["Timestamp"][j])
if message == activity + " stop":
stops.append(vessel.log["Timestamp"][j])
for j, _ in enumerate(starts):
activities_times[i] += (stops[j] - starts[j]).total_seconds() / (3600 * 24)
loading, unloading, sailing_full, sailing_empty = (
activities_times[0],
activities_times[1],
activities_times[2],
activities_times[3],
)
# For the total plot
fig, ax1 = plt.subplots(figsize=[15, 10])
# For the barchart
height = [loading, unloading, sailing_full, sailing_empty]
labels = ["Loading", "Unloading", "Sailing full", "Sailing empty"]
colors = [
(55 / 255, 126 / 255, 184 / 255),
(255 / 255, 150 / 255, 0 / 255),
(98 / 255, 192 / 255, 122 / 255),
(98 / 255, 141 / 255, 122 / 255),
]
positions = np.arange(len(labels))
ax1.bar(positions, height, color=colors)
# For the cumulative percentages
total = sum([loading, unloading, sailing_full, sailing_empty])
unloading += loading
sailing_full += unloading
sailing_empty += sailing_full
y = [loading, unloading, sailing_full, sailing_empty]
n = [
loading / total,
unloading / total,
sailing_full / total,
sailing_empty / total,
]
ax1.plot(positions, y, "ko", markersize=10)
ax1.plot(positions, y, "k")
for i, txt in enumerate(n):
x_txt = positions[i] + 0.1
y_txt = y[i] * 0.95
ax1.annotate("{:02.1f}%".format(txt * 100), (x_txt, y_txt), size=12)
# Further markup
plt.ylabel("Total time spent on activities [Days]", size=12)
ax1.set_xticks(positions)
ax1.set_xticklabels(labels, size=12)
plt.title("Distribution of time spent - {}".format(vessel.name), size=15)
if testing == False:
plt.show()
x_r = np.arange(3.2,3.8, 0.001)
y_r = np.arange(51,52, 0.01)
y_r, x_r = np.meshgrid(y_r,x_r)
WD_r = griddata((Roadmap.nodes[:,1], Roadmap.nodes[:,0]), Roadmap.WD[:,1], (x_r, y_r), method= 'linear')
start = [3.674, 51.70969009]
fig = plt.figure(figsize = (40,40))
ax = plt.subplot(projection=ccrs.Mercator())
ax.add_feature(cfeature.NaturalEarthFeature('physical', 'ocean', '10m', edgecolor='face', facecolor='cornflowerblue', zorder = 0))
ax.add_feature(cfeature.NaturalEarthFeature('physical', 'lakes', '10m', edgecolor='face', facecolor='cornflowerblue'))
cval = [0,3,4,5,6,7,8,9,100] # np.arange(0,40,1)
plt.contourf(x_r,y_r,WD_r, cval, alpha = 0.6,transform=ccrs.PlateCarree())
ax.add_feature(cfeature.NaturalEarthFeature('physical', 'land', '10m', edgecolor='face', facecolor='palegoldenrod'))
ax.coastlines(resolution='10m', color='darkgoldenrod', linewidth=3)
# plt.triplot(Roadmap.nodes[:,1], Roadmap.nodes[:,0], Roadmap.tria.simplices, linewidth = 0.8, color = 'k', label = 'Delauney edges', transform=ccrs.PlateCarree())
plt.plot(start[0], start[1], 'ro',transform=ccrs.PlateCarree())
plt.show()
```
## Dependencies
```
!pip install --quiet /kaggle/input/kerasapplications
!pip install --quiet /kaggle/input/efficientnet-git
import warnings, glob
from tensorflow.keras import Sequential, Model
import efficientnet.tfkeras as efn
from cassava_scripts import *
seed = 0
seed_everything(seed)
warnings.filterwarnings('ignore')
```
### Hardware configuration
```
# TPU or GPU detection
# Detect hardware, return appropriate distribution strategy
strategy, tpu = set_up_strategy()
AUTO = tf.data.experimental.AUTOTUNE
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
```
# Model parameters
```
BATCH_SIZE = 8 * REPLICAS
HEIGHT = 512
WIDTH = 512
CHANNELS = 3
N_CLASSES = 5
TTA_STEPS = 0 # Do TTA if > 0
```
# Augmentation
```
def data_augment(image, label):
p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_pixel_1 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_pixel_2 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_pixel_3 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# Flips
image = tf.image.random_flip_left_right(image)
image = tf.image.random_flip_up_down(image)
if p_spatial > .75:
image = tf.image.transpose(image)
# Rotates
if p_rotate > .75:
image = tf.image.rot90(image, k=3) # rotate 270º
elif p_rotate > .5:
image = tf.image.rot90(image, k=2) # rotate 180º
elif p_rotate > .25:
image = tf.image.rot90(image, k=1) # rotate 90º
# Pixel-level transforms
if p_pixel_1 >= .4:
image = tf.image.random_saturation(image, lower=.7, upper=1.3)
if p_pixel_2 >= .4:
image = tf.image.random_contrast(image, lower=.8, upper=1.2)
if p_pixel_3 >= .4:
image = tf.image.random_brightness(image, max_delta=.1)
# Crops
if p_crop > .7:
if p_crop > .9:
image = tf.image.central_crop(image, central_fraction=.7)
elif p_crop > .8:
image = tf.image.central_crop(image, central_fraction=.8)
else:
image = tf.image.central_crop(image, central_fraction=.9)
elif p_crop > .4:
crop_size = tf.random.uniform([], int(HEIGHT*.8), HEIGHT, dtype=tf.int32)
image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
# # Crops
# if p_crop > .6:
# if p_crop > .9:
# image = tf.image.central_crop(image, central_fraction=.5)
# elif p_crop > .8:
# image = tf.image.central_crop(image, central_fraction=.6)
# elif p_crop > .7:
# image = tf.image.central_crop(image, central_fraction=.7)
# else:
# image = tf.image.central_crop(image, central_fraction=.8)
# elif p_crop > .3:
# crop_size = tf.random.uniform([], int(HEIGHT*.6), HEIGHT, dtype=tf.int32)
# image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
return image, label
```
## Auxiliary functions
```
# Datasets utility functions
def resize_image(image, label):
image = tf.image.resize(image, [HEIGHT, WIDTH])
image = tf.reshape(image, [HEIGHT, WIDTH, CHANNELS])
return image, label
def process_path(file_path):
name = get_name(file_path)
img = tf.io.read_file(file_path)
img = decode_image(img)
img, _ = scale_image(img, None)
# img = center_crop(img, HEIGHT, WIDTH)
return img, name
def get_dataset(files_path, shuffled=False, tta=False, extension='jpg'):
dataset = tf.data.Dataset.list_files(f'{files_path}*{extension}', shuffle=shuffled)
dataset = dataset.map(process_path, num_parallel_calls=AUTO)
if tta:
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.map(resize_image, num_parallel_calls=AUTO)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO)
return dataset
```
# Load data
```
database_base_path = '/kaggle/input/cassava-leaf-disease-classification/'
submission = pd.read_csv(f'{database_base_path}sample_submission.csv')
display(submission.head())
TEST_FILENAMES = tf.io.gfile.glob(f'{database_base_path}test_tfrecords/ld_test*.tfrec')
NUM_TEST_IMAGES = count_data_items(TEST_FILENAMES)
print(f'GCS: test: {NUM_TEST_IMAGES}')
model_path_list = glob.glob('/kaggle/input/35-cassava-leaf-effnetb5-cosine-10c-tpuv2-512/*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep='\n')
```
# Model
```
def model_fn(input_shape, N_CLASSES):
inputs = L.Input(shape=input_shape, name='input_image')
base_model = efn.EfficientNetB5(input_tensor=inputs,
include_top=False,
weights=None,
pooling='avg')
x = L.Dropout(.5)(base_model.output)
output = L.Dense(N_CLASSES, activation='softmax', name='output')(x)
model = Model(inputs=inputs, outputs=output)
return model
with strategy.scope():
model = model_fn((None, None, CHANNELS), N_CLASSES)
model.summary()
```
# Test set predictions
```
files_path = f'{database_base_path}test_images/'
test_size = len(os.listdir(files_path))
test_preds = np.zeros((test_size, N_CLASSES))
for model_path in model_path_list:
print(model_path)
K.clear_session()
model.load_weights(model_path)
if TTA_STEPS > 0:
test_ds = get_dataset(files_path, tta=True).repeat()
ct_steps = int(TTA_STEPS * ((test_size / BATCH_SIZE) + 1))  # predict expects an integer step count
preds = model.predict(test_ds, steps=ct_steps, verbose=1)[:(test_size * TTA_STEPS)]
preds = np.mean(preds.reshape(test_size, TTA_STEPS, N_CLASSES, order='F'), axis=1)
test_preds += preds / len(model_path_list)
else:
test_ds = get_dataset(files_path, tta=False)
x_test = test_ds.map(lambda image, image_name: image)
test_preds += model.predict(x_test) / len(model_path_list)
test_preds = np.argmax(test_preds, axis=-1)
test_names_ds = get_dataset(files_path)
image_names = [img_name.numpy().decode('utf-8') for img, img_name in iter(test_names_ds.unbatch())]
submission = pd.DataFrame({'image_id': image_names, 'label': test_preds})
submission.to_csv('submission.csv', index=False)
display(submission.head())
```
# Naive Bayes
Naive Bayes is a method of calculating the probability that an element belongs to a certain class. Naive Bayes is a classification algorithm that focuses on efficiency more than accuracy. Bayes' Theorem states:
$$ p(class|data) = (p(data|class) * p(class)) / p(data) $$
- $ p(class|data) $ is the probability of class given the provided data
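As a quick numeric illustration of the theorem, with made-up numbers rather than iris data:

```python
# Made-up example: 40% of flowers are class A (prior), the observed
# measurement occurs in 50% of class-A flowers (likelihood) and in
# 25% of all flowers (evidence).
p_class = 0.4
p_data_given_class = 0.5
p_data = 0.25

p_class_given_data = p_data_given_class * p_class / p_data
print(p_class_given_data)  # 0.8
```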
## Dataset
In this mini-project I will be utilizing the **Iris Flower Species Dataset**, which involves predicting the flower species from measurements of iris flowers.
## Steps
1. #### Separate the dataset into three classes
- [Iris-virginica] => 0
- [Iris-versicolor] => 1
- [Iris-setosa] => 2
2. #### Summarize the dataset
- Calculate mean
- Calculate standard deviation
3. #### Summarize data by class
- Calculate mean
- Calculate standard deviation
- Calculate statistics
4. #### Gaussian Probability Density Function
- Calculate probability distribution function
5. #### Class Probabilities
- Calculate probability of each class
```
from csv import reader
from random import seed
from random import randrange
from math import sqrt
from math import exp
from math import pi
import pandas as pd
import numpy as np
# Loading in the dataset
col_names = ['sepal_length','sepal_width','petal_length','petal_width','class']
dataset = pd.read_csv('iris.csv',names=col_names)
dataset
# Mapping classes to integer values
classes = {'Iris-virginica':0, 'Iris-versicolor':1, 'Iris-setosa':2}
dataset['class'] = dataset['class'].map(classes)
dataset
# Splitting dataset into classes
Ivirg = dataset.loc[dataset['class'] == 0]
Ivers = dataset.loc[dataset['class'] == 1]
Iseto = dataset.loc[dataset['class'] == 2]
Ivirg.pop('class')
Ivers.pop('class')
Iseto.pop('class')
Ivirg
# Grabbing statistics
Ivirg_stats = Ivirg.describe()
Ivirg_stats = Ivirg_stats.transpose()
Ivirg_stats
Ivers_stats = Ivers.describe()
Ivers_stats = Ivers_stats.transpose()
Ivers_stats
Iseto_stats = Iseto.describe()
Iseto_stats = Iseto_stats.transpose()
Iseto_stats
dict_stats = {0:Ivirg_stats,1:Ivers_stats,2:Iseto_stats}
# Calculate the Gaussian probability distribution function for x
def calculate_probability(x, stats):
exponent = np.exp(-((x-stats['mean'])**2 / (2 * stats['std']**2)))
return (1/(sqrt(2*pi)*stats['std'])*exponent)
def calculate_class_probability(x):
probabilities = dict()
for i in range(len(classes)):
probabilities[i] = len(dataset[dataset['class']==i].index) / len(dataset['class'].index)
probabilities[i] *= np.prod(calculate_probability(x, dict_stats[i]))
max_key = max(probabilities, key=probabilities.get)
return max_key
predicted_class = calculate_class_probability([5.7,2.9,4.2,1.3])
predicted_class
```
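As a standalone sanity check of the density formula used above (with hypothetical mean and standard deviation): for a standard normal, the density at the mean should be $1/\sqrt{2\pi} \approx 0.3989$.

```python
import numpy as np
from math import sqrt, pi

def gaussian_pdf(x, mean, std):
    # Same formula as calculate_probability() above, with plain floats
    exponent = np.exp(-((x - mean) ** 2 / (2 * std ** 2)))
    return 1 / (sqrt(2 * pi) * std) * exponent

print(round(gaussian_pdf(0.0, 0.0, 1.0), 4))  # 0.3989, i.e. 1/sqrt(2*pi)
print(round(gaussian_pdf(1.0, 0.0, 1.0), 4))  # 0.242, one std away from the mean
```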
#### Credits: [Naive Bayes Classifier From Scratch in Python](https://machinelearningmastery.com/naive-bayes-classifier-scratch-python/)
# Class 4 - Hybrid LCA
In this class, we will learn about supply and use tables, and input-output tables. We will also do a toy hybrid LCA.
Before getting started, make sure you have upgraded the Brightway2 packages. You should have at least the following:
```
import bw2data, bw2calc, bw2io
print("BW2 data:", bw2data.__version__)
print("BW2 calc:", bw2calc.__version__)
print("BW2 io:", bw2io.__version__)
```
Now import the necessary libraries:
```
from brightway2 import *
from bw2io.importers.exiobase import Exiobase22Importer
import numpy as np
import os
import pyprind
```
Create a new project for this class:
```
if 'Class 4' not in projects:
projects.current = "Class 1"
projects.copy_project("Class 4")
projects.current = "Class 4"
```
We will need the latest version of the data migrations to match EXIOBASE biosphere flows to ecoinvent biosphere flows:
```
create_core_migrations()
ERROR_MSG = """Missing a data migration needed for this class.
Please make sure you have the latest Brightway2 libraries, and reset the notebook."""
assert 'exiobase-biosphere' in migrations, ERROR_MSG
```
# Import EXIOBASE 2.2
Now we need to download the industry by industry table from version 2.2 of exiobase. You can get it from the following link. Note that you will have to register an account if this is the first time you use this database: http://www.exiobase.eu/index.php/data-download/exiobase2-year-2007-full-data-set/78-mriot-ixi-fpa-coefficient-version2-2-2/file
Extract the downloaded file, and adjust the following. Windows users might need something like:
fp = "C:\\Users\\<your name>\\Downloads\\mrIOT_IxI_fpa_coefficient_version2.2.2"
```
fp = "/Users/cmutel/Downloads/mrIOT_IxI_fpa_coefficient_version2.2.2"
assert os.path.exists(fp), "Please adjust your filepath, the provided one doesn't work"
```
We can now import the exiobase database. This will take a while, so go ahead and get started.
Why is this so slow compared to ecoinvent, for example? The answer lies in the density of the technosphere matrix. Exiobase, and IO tables in general, use comprehensive data from surveys and national customs, so they will get data on things that normal people would never even think of. For example, how much rice from Thailand is required to produce one euro of steel in Germany?
In other words, the technosphere matrix is very dense. Ecoinvent is stored as a [sparse matrix](http://docs.scipy.org/doc/scipy/reference/sparse.html), where data is only provided in about 1.5% of all possible locations - every other value is zero, and these zeros are not stored, only implied. However, the IO table has a fill rate of about 50%, meaning that we store nearly every value in the matrix. The technosphere matrix in ecoinvent 2.2 is about 4000 by 4000, but we only need to store about 40,000 numbers. The technosphere matrix in exiobase is about 8000 by 8000, and we store around 35,000,000 numbers.
We use a special backend for IO databases, as our standard storage mechanisms simply fall apart with such large data sets. You can see this [backend here](https://bitbucket.org/cmutel/brightway2-data/src/tip/bw2data/backends/iotable/__init__.py?at=2.0&fileviewer=file-view-default).
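To see what this density difference means in practice, here is a toy sketch with SciPy's sparse matrices (random data for illustration only, not exiobase itself): a matrix with an ecoinvent-like fill rate only stores its non-zero entries.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
dense = rng.random((1000, 1000))
dense[dense < 0.985] = 0.0               # keep roughly 1.5% of entries, like ecoinvent
sp = sparse.csr_matrix(dense)

print("stored values:", sp.nnz)          # only the non-zeros are stored
print("fill rate:", sp.nnz / dense.size)
```

At a ~50% fill rate, as in exiobase, the sparse bookkeeping overhead outweighs the savings, which is why a dense representation is used instead.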
```
ex = Exiobase22Importer(fp)
ex.apply_strategies()
ex.write_database()
```
Free up some memory
```
ex = None
```
# LCA calculations
We can now do an LCA. We first do this the standard way:
```
gwp = ('IPCC 2013', 'climate change', 'GWP 100a')
lca = LCA({Database("EXIOBASE 2.2").random(): 1}, method=gwp)
lca.lci()
lca.lcia()
```
Our technosphere matrix is sparse:
```
lca.technosphere_matrix
```
And it takes a while to solve (versus less than one second for ecoinvent 2.2):
```
%timeit lca.solve_linear_system()
```
Free up some memory by forgetting about the `lca` object.
```
lca = None
```
However, we have a special LCA class that only does [dense technosphere matrices](https://bitbucket.org/cmutel/brightway2-calc/src/tip/bw2calc/dense_lca.py?at=default&fileviewer=file-view-default). If we use it, we will get better performance, because the linear solver assumes dense instead of sparse matrices:
```
dlca = DenseLCA({Database("EXIOBASE 2.2").random(): 1}, method=gwp)
dlca.lci()
```
The technosphere is, as you would expect, now a dense matrix
```
type(dlca.technosphere_matrix)
```
The NumPy dense solver for linear systems is faster here than the SciPy/UMFPACK sparse solver, as our matrix actually is quite dense. The performance should be much better:
```
%timeit dlca.solve_linear_system()
```
Free up some more memory by forgetting about the `tech_params` array.
```
print(dlca.tech_params.shape)
dlca.tech_params = None
```
# Create aggregated processes
We can now create aggregated (so-called "system") processes for each activity in Exiobase. These aggregated processes can be used in our normal sparse LCAs, but they are terminated, i.e. we can't trace their background supply chains.
First, we create a new database.
```
aggregated_db = Database("EXIOBASE 2.2 aggregated")
```
This is a normal database, not an `IOTable` database.
```
type(aggregated_db)
```
Now, we invert the EXIOBASE technosphere matrix.
This takes some minutes - around 4 on my laptop - so just be patient. It is helpful if there is plenty of free memory.
```
inverse = np.linalg.pinv(dlca.technosphere_matrix.todense())
```
With the inverse, we can calculate the aggregated inventories, and then write each aggregated process.
```
inventory = dlca.biosphere_matrix * inverse
print(inventory.shape)
```
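On a toy system, the same aggregation step looks like this (hypothetical 2×2 numbers, assuming the usual Leontief form where the aggregated inventory is B·A⁻¹):

```python
import numpy as np

# Hypothetical technosphere matrix A: positive production on the diagonal,
# negative inputs off-diagonal (Brightway sign convention)
A = np.array([[1.0, -0.2],
              [-0.5, 1.0]])
# Hypothetical biosphere matrix B: one emission per unit of each activity
B = np.array([[0.3, 0.1]])

inventory = B @ np.linalg.inv(A)  # aggregated emissions per unit of final demand
print(inventory)
```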
Define the activity data fields we want to keep
```
KEYS = (
'exiobase_code',
'group',
'group_name',
'location',
'name',
'synonym',
'type',
'unit'
)
data = {}
```
Only take each non-zero biosphere flow, and create the aggregated processes.
```
for ds in pyprind.prog_bar(Database("EXIOBASE 2.2")):
    col = dlca.activity_dict[ds.key]
    # Basic data
    data[("EXIOBASE 2.2 aggregated", ds['code'])] = {key: ds[key] for key in KEYS}
    # Exchanges
    data[("EXIOBASE 2.2 aggregated", ds['code'])]['exchanges'] = [{
        'type': 'biosphere',
        'amount': float(inventory[row, col]),
        'input': flow,
        'uncertainty type': 0
    } for flow, row in dlca.biosphere_dict.items() if inventory[row, col]]
aggregated_db.write(data)
```
We no longer need the dlca object, so we can forget about it to save some memory.
```
dlca = None
```
# Sample LCA calculations
We will look at two product systems selected in class. We found the dataset keys using code like:
```
for x in Database("ecoinvent 2.2").search('fertili*'):
    print(x, x.key)
```
## Cement production
```
ex_cement = ('EXIOBASE 2.2 aggregated', 'Manufacture of cement, lime and plaster:CH')
ei_cement = ('ecoinvent 2.2', 'c2ff6ffd532415eda3eaf957b17b70a1')
```
Check to make sure we have the correct activities
```
get_activity(ex_cement)
get_activity(ei_cement)
lca = LCA({ex_cement: 1}, gwp)
lca.lci()
lca.lcia()
print("Exiobase:", lca.score / 1e6 / 10) # Assume 100 euros/ton
lca = LCA({ei_cement: 1}, gwp)
lca.lci()
lca.lcia()
print("Ecoinvent", lca.score)
```
These numbers are remarkably similar.
## Nitrogenous fertilizer
Let's now look at nitrogen fertilizer:
```
ei_n = ('ecoinvent 2.2', '920a20d9a87340557a31ee7e8a353d3c')
ex_n = ('EXIOBASE 2.2 aggregated', 'N-fertiliser:LU')
```
Check to make sure we have the correct activities
```
get_activity(ei_n)
get_activity(ex_n)
lca = LCA({ex_n: 1}, gwp)
lca.lci()
lca.lcia()
print("Exiobase:", lca.score / 1e6 * 0.8) # Assume 800 euros/ton
lca = LCA({ei_n: 1}, gwp)
lca.lci()
lca.lcia()
print("Ecoinvent:", lca.score)
```
This is quite interesting - more investigation would have to be done to understand why these values are so different.
# Cleaning up
This project consumes a lot of hard drive space, about 2 gigabytes. We can get the exact size of this and all other projects (in gigabytes) with the following:
```
projects.report()
```
We can then delete the current project.
**This step is optional**, included as a convenience for those who do not want to work with Exiobase.
```
projects.delete_project(delete_dir=True)
```
The returned value is the name of the current project.
```
projects.current
```
```
%load_ext autoreload
%autoreload 2
%config Completer.use_jedi = False
import yaml
from pysmFISH.pipeline import Pipeline
from pysmFISH.configuration_files import load_experiment_config_file
from pathlib import Path
import time
```
# LBEXP20210513_EEL_Control_PDL_Elect
```
experiment_fpath = Path('/fish/work_std/LBEXP20210513_EEL_Controls_3/LBEXP20210513_EEL_Control_PDL_Elect')
date_tag = time.strftime("%y%m%d_%H_%M_%S")
pipeline_run_name = date_tag + '_' + experiment_fpath.stem
run_type = 're-run'
parsing_type = 'no_parsing'
processing_engine = 'htcondor'
%%time
running_pipeline = Pipeline(
pipeline_run_name= pipeline_run_name,
experiment_fpath= experiment_fpath,
run_type= run_type,
parsing_type= parsing_type,
processing_engine= processing_engine
)
# Need to do some adjustment to the pickle file
import pickle
fname = '/fish/work_std/LBEXP20210513_EEL_Controls_3/LBEXP20210513_EEL_Control_PDL_Elect/Count00001_LBEXP20210513_EEL_Control_PDL_Elect_C1H01.pkl'
data = pickle.load(open(fname,'rb'))
data['experiment_name'] = 'Count00001_LBEXP20210513_EEL_Control_PDL_Elect'
data['channels']['TexasRed'] = data['channels']['TxRed']
del data['channels']['TxRed']
pickle.dump(data,open(fname,'wb'))
# Run
%%time
running_pipeline.run_full()
```
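The same pickle adjustment is repeated for every experiment below; as a sketch (the helper name and defaults are ours, not part of pysmFISH), it could be factored into a function:

```python
import pickle
from pathlib import Path

def rename_channel(pkl_path, old="TxRed", new="TexasRed", experiment_name=None):
    """Rename a channel key (and optionally reset the experiment name) in a metadata pickle."""
    pkl_path = Path(pkl_path)
    data = pickle.loads(pkl_path.read_bytes())
    if experiment_name is not None:
        data["experiment_name"] = experiment_name
    if old in data.get("channels", {}):
        data["channels"][new] = data["channels"].pop(old)
    pkl_path.write_bytes(pickle.dumps(data))
```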
# LBEXP20210513_EEL_Control_PDL_NO-elec
```
experiment_fpath = Path('/fish/work_std/LBEXP20210513_EEL_Controls_3/LBEXP20210513_EEL_Control_PDL_NO-elec')
date_tag = time.strftime("%y%m%d_%H_%M_%S")
pipeline_run_name = date_tag + '_' + experiment_fpath.stem
run_type = 'new'
parsing_type = 'original'
processing_engine = 'htcondor'
%%time
running_pipeline = Pipeline(
pipeline_run_name= pipeline_run_name,
experiment_fpath= experiment_fpath,
run_type= run_type,
parsing_type= parsing_type,
processing_engine= processing_engine
)
# Need to do some adjustment to the pickle file
import pickle
fname = '/fish/work_std/LBEXP20210513_EEL_Controls_3/LBEXP20210513_EEL_Control_PDL_NO-elec/Count00001_LBEXP20210513_EEL_Control_PDL_NO-elec_C1H01.pkl'
data = pickle.load(open(fname,'rb'))
data['channels']['TexasRed'] = data['channels']['TxRed']
del data['channels']['TxRed']
pickle.dump(data,open(fname,'wb'))
# Run
%%time
running_pipeline.run_full()
```
# LBEXP20210513_EEL_Control_Tris_Elect
```
experiment_fpath = Path('/fish/work_std/LBEXP20210513_EEL_Controls_3/LBEXP20210513_EEL_Control_Tris_Elect')
date_tag = time.strftime("%y%m%d_%H_%M_%S")
pipeline_run_name = date_tag + '_' + experiment_fpath.stem
run_type = 'new'
parsing_type = 'original'
processing_engine = 'htcondor'
%%time
running_pipeline = Pipeline(
pipeline_run_name= pipeline_run_name,
experiment_fpath= experiment_fpath,
run_type= run_type,
parsing_type= parsing_type,
processing_engine= processing_engine
)
import pickle
fname = '/fish/work_std/LBEXP20210513_EEL_Controls_3/LBEXP20210513_EEL_Control_Tris_Elect/Count00001_LBEXP20210513_EEL_Control_Tris_Elect_C1H01.pkl'
data = pickle.load(open(fname,'rb'))
data['channels']['TexasRed'] = data['channels']['TxRed']
del data['channels']['TxRed']
pickle.dump(data,open(fname,'wb'))
# Run
%%time
running_pipeline.run_full()
```
# LBEXP20210513_EEL_Control_Tris_NO-elec
```
import pickle
fname = '/fish/work_std/LBEXP20210513_EEL_Controls_3/LBEXP20210513_EEL_Control_Tris_NO-elec/Count00001_LBEXP20210513_EEL_Control_Tris_NO-elec_C1H01.pkl'
data = pickle.load(open(fname,'rb'))
data['channels']['TexasRed'] = data['channels']['TxRed']
del data['channels']['TxRed']
pickle.dump(data,open(fname,'wb'))
experiment_fpath = Path('/fish/work_std/LBEXP20210513_EEL_Controls_3/LBEXP20210513_EEL_Control_Tris_NO-elec')
date_tag = time.strftime("%y%m%d_%H_%M_%S")
pipeline_run_name = date_tag + '_' + experiment_fpath.stem
run_type = 'new'
parsing_type = 'original'
processing_engine = 'htcondor'
%%time
running_pipeline = Pipeline(
pipeline_run_name= pipeline_run_name,
experiment_fpath= experiment_fpath,
run_type= run_type,
parsing_type= parsing_type,
processing_engine= processing_engine
)
%%time
running_pipeline.run_full()
```
# LBEXP20210514_EEL_Control_osmFISH
```
import pickle
fname = '/fish/work_std/LBEXP20210513_EEL_Controls_3/LBEXP20210514_EEL_Control_osmFISH/Count00001_LBEXP20210514_EEL_Control_osmFISH_C1H01.pkl'
data = pickle.load(open(fname,'rb'))
data['channels']['TexasRed'] = data['channels']['TxRed']
del data['channels']['TxRed']
pickle.dump(data,open(fname,'wb'))
experiment_fpath = Path('/fish/work_std/LBEXP20210513_EEL_Controls_3/LBEXP20210514_EEL_Control_osmFISH')
date_tag = time.strftime("%y%m%d_%H_%M_%S")
pipeline_run_name = date_tag + '_' + experiment_fpath.stem
run_type = 're-run'
parsing_type = 'no_parsing'
processing_engine = 'htcondor'
%%time
running_pipeline = Pipeline(
pipeline_run_name= pipeline_run_name,
experiment_fpath= experiment_fpath,
run_type= run_type,
parsing_type= parsing_type,
processing_engine= processing_engine,
chunk_size=40,
dataset_path= experiment_fpath / 'modified_dset.parquet'
)
running_pipeline.run_parsing_only()
running_pipeline.data.load_dataset(experiment_fpath / '210609_17_12_51_LBEXP20210514_EEL_Control_osmFISH_img_data_dataset.parquet')
running_pipeline.data.dataset.loc[running_pipeline.data.dataset.channel == 'DAPI', 'processing_type'] = 'nuclei'
running_pipeline.data.save_dataset(running_pipeline.data.dataset, running_pipeline.experiment_fpath / 'modified_dset.parquet')
%%time
running_pipeline.run_full(resume=True)
selected_Hdistance = 3 / metadata['barcode_length']
stitching_selected = 'microscope_stitched'
io.simple_output_plotting(experiment_fpath, stitching_selected, selected_Hdistance, client,file_tag='decoded')
running_pipeline.processing_serial_fish_step()
running_pipeline.remove_duplicated_dots_graph_step()
running_pipeline.cluster.close()
running_pipeline.client.close()
```
# LBEXP20210514_EEL_Control_osmFISH_lower-intensity
```
import pickle
fname = '/fish/work_std/LBEXP20210513_EEL_Controls_3/LBEXP20210514_EEL_Control_osmFISH_lower-intensity/Count00001_LBEXP20210514_EEL_Control_osmFISH_C1H01.pkl'
data = pickle.load(open(fname,'rb'))
data['channels']['TexasRed'] = data['channels']['TxRed']
del data['channels']['TxRed']
pickle.dump(data,open(fname,'wb'))
experiment_fpath = Path('/fish/work_std/LBEXP20210513_EEL_Controls_3/LBEXP20210514_EEL_Control_osmFISH_lower-intensity')
date_tag = time.strftime("%y%m%d_%H_%M_%S")
pipeline_run_name = date_tag + '_' + experiment_fpath.stem
run_type = 're-run'
parsing_type = 'no_parsing'
processing_engine = 'htcondor'
%%time
running_pipeline = Pipeline(
pipeline_run_name= pipeline_run_name,
experiment_fpath= experiment_fpath,
run_type= run_type,
parsing_type= parsing_type,
processing_engine= processing_engine,
chunk_size=40,
dataset_path= experiment_fpath / 'modified_dset.parquet')
running_pipeline.run_parsing_only()
running_pipeline.data.dataset.loc[running_pipeline.data.dataset.channel == 'DAPI', 'processing_type'] = 'nuclei'
running_pipeline.data.save_dataset(running_pipeline.data.dataset, running_pipeline.experiment_fpath / 'modified_dset.parquet')
%%time
running_pipeline.run_full()
running_pipeline.cluster.close()
running_pipeline.client.close()
```
<a href="https://colab.research.google.com/github/VICIWUOHA/Multiple_Text_Combination_and_Mapping/blob/main/Multiple_Text_Combination_and_Mapping.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Multiple Text Combination and Mapping Project
The aim of this project was to find all possible answers to a quiz with 10 questions which had two options each.
**CONSIDERATIONS**
- Only one answer can be picked per question.
- Final output **should not have any duplicate** combination of answers.
- Lastly, assuming all items in the left list (option 1) stood for **ODD** (O) selections, while those in the right stood for **EVEN** (E) selections, map the final output as **O**s and **E**s.
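Because only one answer is picked per question, the full answer space can also be enumerated deterministically with `itertools.product` (a sketch using the same option lists as below): it yields exactly 2**10 = 1024 unique combinations.

```python
from itertools import product

opt_1 = ['A', 'C', 'E', 'G', 'I', 'K', 'M', 'O', 'Q', 'S']
opt_2 = ['B', 'D', 'F', 'H', 'J', 'L', 'N', 'P', 'R', 'T']

# zip pairs the two options per question; product then picks one option per question
all_answers = [''.join(combo) for combo in product(*zip(opt_1, opt_2))]
print(len(all_answers))  # -> 1024 (2**10)
```

This avoids the random-sampling loop entirely and guarantees no combination is missed or duplicated.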
```
# Import necessary modules
import pandas as pd
import random
import numpy as np
# generate a dataframe of quiz possible answers
possible_ans = pd.DataFrame({
'opt_1': ['A','C','E','G','I','K','M','O','Q','S'],
'opt_2': ['B','D','F','H','J','L','N','P','R','T']
})
possible_ans
answers = [] #all possible lists of answers are stored here , this is a list of lists
x = 0
# create a loop to keep generating random choices;
# with 10 questions and 2 options each there are only 2**10 = 1024 possible
# combinations, so 100000 iterations is more than enough
while x < 100000:
    # generate a random choice from each row across both columns, then write all the choices to a list
    # store list in rand_choice
    rand_choice = possible_ans.apply(lambda row: random.choice(row.tolist()), axis=1).tolist()
    # append the rand_choice generated into the 'answers' list, if it has not been added yet
    if rand_choice not in answers:
        answers.append(rand_choice)
    x += 1
answers
print ('there are {} possible combination of answers'.format(len(answers)))
answers
list_of_answers = pd.DataFrame(answers)
list_of_answers.to_csv('list_of_answers.csv')
list_of_answers
# the file exported earlier is re-imported to avoid changing the already
# established values, since they were randomly generated
raw_text = pd.read_csv('list_of_answers.csv', index_col=0)
raw_text.head(10)
# concatenate answers across columns for all rows and save in new column
raw_text['possible_outcomes'] = raw_text.sum(axis=1)
raw_text
# Create a function to replace text with O's and E's by mapping using the translate() method
def map_text(value):
    # define the map list
    map_list = {
        'A':'O','B':'E','C':'O','D':'E','E':'O','F':'E','G':'O','H':'E','I':'O','J':'E','K':'O',
        'L':'E','M':'O','N':'E','O':'O','P':'E','Q':'O','R':'E','S':'O','T':'E'
    }
    # create a translation table which the translate method will use
    trans_table = value.maketrans(map_list)
    # translate all values introduced into the function
    value = value.translate(trans_table)
    return value
# test the function
map_text('ACFGIKMPQS')
raw_text_2 = raw_text
raw_text_2.head()
# apply map_text function on the column with earlier saved possible outcomes
raw_text_2['replaced_values'] = raw_text_2['possible_outcomes'].apply(map_text)
raw_text_2
# save final output to csv
raw_text_2.to_csv('updated_list_of_answers.csv')
```
<a href="https://colab.research.google.com/github/Serbeld/Tensorflow/blob/master/PruebaMnist_with_custom_callback.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#!pip install tensorflow==1.3
#!pip install keras
import tensorflow as tf
print(tf.__version__)
import keras as k
print(k.__version__)
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D, Activation
import keras
from keras.layers import Activation, Dense
batch_size = 32
num_classes = 10
epochs = 15
filas,columnas = 28,28
(xt,yt),(xtest,ytest) = mnist.load_data()
xt = xt.reshape(xt.shape[0],filas,columnas,1)
xtest = xtest.reshape(xtest.shape[0], filas, columnas,1)
xt = xt.astype('float32')
xtest = xtest.astype('float32')
xt = xt/255
xtest = xtest/255
yt = keras.utils.to_categorical(yt,num_classes)
ytest = keras.utils.to_categorical(ytest,num_classes)
xt = xt[0:100]
yt = yt[0:100]
modelo = Sequential()
modelo.add(Conv2D(64, kernel_size=(2,2), activation='relu',
                  input_shape=(28,28,1)))
modelo.add(Conv2D(64, kernel_size=(2,2), activation='relu'))
modelo.add(MaxPool2D(pool_size=(2,2)))
modelo.add(Flatten())
modelo.add(Dense(68))
modelo.add(Dropout(0.25))
modelo.add(Dense(20))
modelo.add(Dropout(0.25))
modelo.add(Dense(num_classes, activation='softmax'))  # softmax output for multi-class classification with categorical crossentropy
modelo.compile(optimizer=keras.optimizers.Adam(),
loss=keras.losses.categorical_crossentropy,
metrics=['categorical_accuracy'])
modelo.summary()
class LossAndErrorPrintingCallback(keras.callbacks.Callback):
    global vector
    vector = []

    #def on_train_batch_end(self, batch, logs=None):
    #    print('For batch {}, loss is {:7.2f}.'.format(batch, logs['loss']))

    #def on_test_batch_end(self, batch, logs=None):
    #    print('For batch {}, loss is {:7.2f}.'.format(batch, logs['loss']))

    def on_epoch_end(self, epoch, logs=None):
        vector.append(logs['categorical_accuracy'])
        print('The average loss for epoch {} is {:7.2f} and categorical accuracy is {:7.2f}.'.format(epoch, logs['loss'], logs['categorical_accuracy']))
model = modelo.fit(xt, yt,batch_size,epochs,
validation_data=(xtest,ytest),
shuffle=True,verbose=0,
callbacks=[LossAndErrorPrintingCallback()])
#modelo.fit(xt,yt,batch_size,epochs,validation_data=(xtest,ytest),shuffle=True,verbose=1)
puntuacion = modelo.evaluate(xtest,ytest,verbose=1)
#plt.imshow(xt.shape[0])
#predictions = modelo.predict(xt[0])
print(puntuacion)
print(vector)
```
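`keras.utils.to_categorical`, used above to prepare the labels, one-hot encodes integer class indices; the equivalent in plain NumPy looks like this (a sketch, not the Keras implementation):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    # One row per label, a single 1.0 in the column of that label's class
    out = np.zeros((len(labels), num_classes), dtype="float32")
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(to_one_hot([1, 0, 3], 4))
```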
In this notebook I will show the different options to save and load a model, as well as some additional objects produced during training.
On a given day, you train a model...
```
import pickle
import numpy as np
import pandas as pd
import torch
import shutil
from pytorch_widedeep.preprocessing import WidePreprocessor, TabPreprocessor
from pytorch_widedeep.training import Trainer
from pytorch_widedeep.callbacks import EarlyStopping, ModelCheckpoint, LRHistory
from pytorch_widedeep.models import Wide, TabMlp, WideDeep
from pytorch_widedeep.metrics import Accuracy
from sklearn.model_selection import train_test_split
df = pd.read_csv("data/adult/adult.csv.zip")
df.head()
# For convenience, we'll replace '-' with '_'
df.columns = [c.replace("-", "_") for c in df.columns]
# binary target
df["target"] = (df["income"].apply(lambda x: ">50K" in x)).astype(int)
df.drop("income", axis=1, inplace=True)
df.head()
train, valid = train_test_split(df, test_size=0.2, stratify=df.target)
# the test data will be used lately as if it was "fresh", new data coming after some time...
valid, test = train_test_split(valid, test_size=0.5, stratify=valid.target)
print(f"train shape: {train.shape}")
print(f"valid shape: {valid.shape}")
print(f"test shape: {test.shape}")
wide_cols = [
"education",
"relationship",
"workclass",
"occupation",
"native_country",
"gender",
]
crossed_cols = [("education", "occupation"), ("native_country", "occupation")]
cat_embed_cols = []
for col in train.columns:
    if (train[col].dtype == "O" or train[col].nunique() < 200) and col != "target":
        cat_embed_cols.append(col)
num_cols = [c for c in train.columns if c not in cat_embed_cols + ["target"]]
wide_preprocessor = WidePreprocessor(wide_cols=wide_cols, crossed_cols=crossed_cols)
X_wide_train = wide_preprocessor.fit_transform(train)
X_wide_valid = wide_preprocessor.transform(valid)
tab_preprocessor = TabPreprocessor(
embed_cols=cat_embed_cols, continuous_cols=num_cols, scale=True
)
X_tab_train = tab_preprocessor.fit_transform(train)
y_train = train.target.values
X_tab_valid = tab_preprocessor.transform(valid)
y_valid = valid.target.values
# save wide_dim somewhere
wide = Wide(wide_dim=wide_preprocessor.wide_dim)
deeptabular = TabMlp(
column_idx=tab_preprocessor.column_idx,
embed_input=tab_preprocessor.embeddings_input,
)
model = WideDeep(wide=wide, deeptabular=deeptabular)
model
early_stopping = EarlyStopping()
model_checkpoint = ModelCheckpoint(
filepath="tmp_dir/adult_tabmlp_model",
save_best_only=True,
verbose=1,
max_save=1,
)
trainer = Trainer(
model,
objective="binary",
callbacks=[early_stopping, model_checkpoint],
metrics=[Accuracy],
)
trainer.fit(
X_train={"X_wide": X_wide_train, "X_tab": X_tab_train, "target": y_train},
X_val={"X_wide": X_wide_valid, "X_tab": X_tab_valid, "target": y_valid},
n_epochs=2,
batch_size=256,
)
```
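Crossed columns such as `("education", "occupation")` are, conceptually, concatenations of the two category values treated as a single new categorical feature; a minimal sketch of the idea (not pytorch-widedeep's exact implementation):

```python
def cross(row, cols, sep="-"):
    """Concatenate the values of several categorical columns into one feature."""
    return sep.join(str(row[c]) for c in cols)

# Hypothetical row of the adult dataset
row = {"education": "Bachelors", "occupation": "Sales"}
print(cross(row, ("education", "occupation")))  # -> 'Bachelors-Sales'
```

The crossed value then gets its own index in the wide component, letting the linear model learn interaction effects that the individual columns cannot express.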
# Save model: option 1
save (and load) a model as you would do with any other PyTorch model
```
torch.save(model, "tmp_dir/model_saved_option_1.pt")
torch.save(model.state_dict(), "tmp_dir/model_state_dict_saved_option_1.pt")
```
# Save model: option 2
use the `trainer`. The `trainer` will also save the training history and the learning rate history (if learning rate schedulers are used)
```
trainer.save(path="tmp_dir/", model_filename="model_saved_option_2.pt")
```
or the state dict
```
trainer.save(
path="tmp_dir/",
model_filename="model_state_dict_saved_option_2.pt",
save_state_dict=True,
)
%%bash
ls tmp_dir/
%%bash
ls tmp_dir/history/
```
Note that since we have used the `ModelCheckpoint` callback, `adult_tabmlp_model_2.p` is the model state dict of the model at epoch 2, i.e. the same as `model_state_dict_saved_option_1.pt` or `model_state_dict_saved_option_2.pt`.
# Save preprocessors and callbacks
...just pickle them
```
with open("tmp_dir/wide_preproc.pkl", "wb") as wp:
    pickle.dump(wide_preprocessor, wp)
with open("tmp_dir/tab_preproc.pkl", "wb") as dp:
    pickle.dump(tab_preprocessor, dp)
with open("tmp_dir/eary_stop.pkl", "wb") as es:
    pickle.dump(early_stopping, es)
%%bash
ls tmp_dir/
```
And that is pretty much all you need to resume training or predict directly. Let's see.
# Run New experiment: prepare new dataset, load model, and predict
```
test.head()
with open("tmp_dir/wide_preproc.pkl", "rb") as wp:
    wide_preprocessor_new = pickle.load(wp)
with open("tmp_dir/tab_preproc.pkl", "rb") as tp:
    tab_preprocessor_new = pickle.load(tp)
X_test_wide = wide_preprocessor_new.transform(test)
X_test_tab = tab_preprocessor_new.transform(test)
y_test = test.target
wide_new = Wide(wide_dim=wide_preprocessor_new.wide_dim)
deeptabular = TabMlp(
column_idx=tab_preprocessor_new.column_idx,
embed_input=tab_preprocessor_new.embeddings_input,
)
model = WideDeep(wide=wide_new, deeptabular=deeptabular)
model.load_state_dict(torch.load("tmp_dir/model_state_dict_saved_option_2.pt"))
trainer = Trainer(
model,
objective="binary",
)
preds = trainer.predict(X_wide=X_test_wide, X_tab=X_test_tab)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, preds)
shutil.rmtree("tmp_dir/")
```
```
%matplotlib inline
import time
import numpy as np
from matplotlib import cm
from matplotlib import pyplot as plt
from scipy.stats import mode
from clustiVAT import clustiVAT
from data_generate import data_generate
from distance2 import distance2
from iVAT import iVAT
total_no_of_points = 1000
clusters = 4
odds_matrix = np.array(
[np.ceil(clusters*np.random.rand(clusters))]).astype(int)
colors_1 = np.array(cm.get_cmap().colors)
colors = np.zeros((clusters, 3))
for i in range(1, clusters+1):
    colors[i-1, :] = colors_1[int(
        np.ceil(max(colors_1.shape)*i/clusters)-1), :]
data_matrix_with_labels, mean_matrix, var_matrix = data_generate(
number_of_clusters=clusters, odds_matrix=odds_matrix, total_no_of_points=total_no_of_points)
p1 = plt.figure(1)
plt.title(label="Ground truth scatter plot")
for i in range(0, clusters):
    cluster_index = np.array(np.where(data_matrix_with_labels[:, -1] == i))
    plt.scatter(data_matrix_with_labels[cluster_index, 0],
                data_matrix_with_labels[cluster_index, 1], marker='o', color=colors[i-1, :], s=0.9)
###################### CLUSTIVAT #########################
x = data_matrix_with_labels
n, p = x.shape
tic = time.time()
Pitrue = x[:, -1]
x = x[:, 0:-1]
cp = 10
ns = 300
rv, C, I, ri, cut, smp = clustiVAT(x, cp, ns)
x1, y1 = cut.shape
cut = cut.reshape((x1*y1,))
cuts, ind = -np.sort(-cut), np.argsort(-cut)
ind = np.sort(ind[0:clusters-1])
Pi = np.zeros((n,))
Pi[smp[I[ind[0]-2]]] = 1
Pi[smp[I[ind[-1]:-1]]] = clusters
for i in range(1, clusters-1):
    Pi[smp[I[ind[i-1]:ind[i]-1]]] = i
nsmp = np.setdiff1d(np.arange(n), smp)  # indices of the non-sampled points
r = distance2(x[smp, :], x[nsmp, :])
s = np.argmin(r, axis=0)
Pi[nsmp] = Pi[smp[s]]
RiV, RV, reordering_mat = iVAT(rv, 1)
toc = time.time()
print("Time elapsed : ", str(toc-tic))
p2 = plt.figure(2)
plt.rcParams["figure.autolayout"] = True
plt.imshow(rv, cmap=cm.get_cmap('gray'), extent=[-1, 1, -1, 1])
plt.title(label="VAT reordered dissimilarity matrix image")
plt.show()
p3 = plt.figure(3)
plt.rcParams["figure.autolayout"] = True
plt.imshow(RiV, cmap=cm.get_cmap('gray'), extent=[-1, 1, -1, 1])
plt.title(label="iVAT dissimilarity matrix image")
plt.show()
p4 = plt.figure(4)
for i in range(0, np.max(smp.shape)-1):
    x_cor = np.hstack((x[smp[I[i]], 0], x[smp[I[C[i]]], 0]))
    y_cor = np.hstack((x[smp[I[i]], 1], x[smp[I[C[i]]], 1]))
    plt.plot(x_cor, y_cor, 'b')
for i in range(np.max(ind.shape)):
    x_cor = np.hstack((x[smp[I[ind[i]]], 0], x[smp[I[C[ind[i]]]], 0]))
    y_cor = np.hstack((x[smp[I[ind[i]]], 1], x[smp[I[C[ind[i]]]], 1]))
    plt.plot(x_cor, y_cor, 'g')
plt.show()
p5 = plt.figure(5)
plt.plot(x[I, 0], x[I, 1], 'r.')
plt.title(label="MST of the dataset")
plt.show()
p6 = plt.figure(6)
for i in range(0, clusters):
    if i == 0:
        partition = I[0:ind[i]]
    elif i == clusters-1:
        partition = I[ind[i-1]:np.max(I.shape)]
    else:
        partition = I[ind[i-1]:ind[i]-1]
    plt.plot(x[smp[partition], 0], x[smp[partition], 1],
             marker='o', color=colors[i-1, :], markersize=1)
plt.title('VAT generated partition of the sample points (different colors represent different clusters)')
plt.show()
cluster_matrix_mod = np.zeros(data_matrix_with_labels.shape, dtype=int)
length_partition = np.zeros((clusters,), dtype=int)
for i in range(0, clusters):
    length_partition[i] = np.max(np.where(Pi == i)[0].shape)
length_partition_sort = -np.sort(-length_partition)
length_partition_sort_idx = np.argsort(-length_partition)
index_remaining = np.linspace(0, clusters-1, clusters, dtype=int)
for i in range(0, clusters):
    original_idx = length_partition_sort_idx[i]
    partition = np.where(Pi == original_idx)[0]
    proposed_idx = mode(Pitrue[partition]).mode
    if np.sum(index_remaining == proposed_idx) != 0:
        cluster_matrix_mod[np.where(Pi == original_idx)[0]] = proposed_idx
    else:
        try:
            cluster_matrix_mod[np.where(Pi == original_idx)[0]] = index_remaining[0]
        except:
            pass
    if type(index_remaining == proposed_idx) == bool:
        if (index_remaining == proposed_idx) is True:
            index_remaining = np.delete(
                index_remaining, index_remaining == proposed_idx)
    else:
        if (index_remaining == proposed_idx).shape[0] != 0:
            index_remaining = np.delete(
                index_remaining, index_remaining == proposed_idx)
p7 = plt.figure(7)
pst = np.linspace(0, clusters-1, clusters, dtype=int)
tst = ["red", "yellow", "blue", "green"]
for i in range(0, clusters):
    #cluster_matrix_unique = np.unique(cluster_matrix_mod)
    cluster_index = np.where(cluster_matrix_mod == pst[i])[0]
    plt.scatter(x[cluster_index, 0], x[cluster_index, 1],
                marker='o', color=tst[i], s=0.9)
plt.title('VAT generated partition of the entire dataset (different colors represent different clusters)')
crct_prct_clustivat = (
(np.max(x.shape)-np.max(np.where(Pitrue-cluster_matrix_mod.T != 0)[0].shape))/np.max(x.shape))*100
print("crct_prct_clustivat : " + str(crct_prct_clustivat))
```
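The label-matching block above relabels each VAT-generated cluster so that it carries the majority ground-truth label it overlaps with; the core idea in isolation looks like this (a simplified NumPy-only sketch, without the one-to-one bookkeeping done above):

```python
import numpy as np

def align_labels(pred, true):
    """Relabel each predicted cluster with the majority ground-truth label it overlaps."""
    aligned = np.zeros_like(pred)
    for c in np.unique(pred):
        mask = pred == c
        # Majority vote of the true labels within this predicted cluster
        aligned[mask] = np.bincount(true[mask]).argmax()
    return aligned

pred = np.array([0, 0, 1, 1, 1])
true = np.array([2, 2, 0, 0, 1])
print(align_labels(pred, true))  # -> [2 2 0 0 0]
```

With labels aligned this way, a simple element-wise comparison yields the accuracy reported as `crct_prct_clustivat`.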
```
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Vertex AI client library: Custom training image classification model with pipeline for online prediction with training pipeline
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/community/gapic/custom/showcase_custom_image_classification_online_pipeline.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/community/gapic/custom/showcase_custom_image_classification_online_pipeline.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
## Overview
This tutorial demonstrates how to use the Vertex AI Python client library to train and deploy a custom image classification model for online prediction, using a training pipeline.
### Dataset
The dataset used for this tutorial is the [CIFAR10 dataset](https://www.tensorflow.org/datasets/catalog/cifar10) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). The version of the dataset you will use is built into TensorFlow. The trained model predicts which type of class an image is from ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
### Objective
In this tutorial, you create a custom model from a Python script in a Google prebuilt Docker container using the Vertex AI client library, and then do a prediction on the deployed model by sending data. You can alternatively create custom models using `gcloud` command-line tool or online using Google Cloud Console.
The steps performed include:
- Create a Vertex AI custom job for training a model.
- Create a `TrainingPipeline` resource.
- Train a TensorFlow model with the `TrainingPipeline` resource.
- Retrieve and load the model artifacts.
- View the model evaluation.
- Upload the model as a Vertex AI `Model` resource.
- Deploy the `Model` resource to a serving `Endpoint` resource.
- Make a prediction.
- Undeploy the `Model` resource.
### Costs
This tutorial uses billable components of Google Cloud (GCP):
* Vertex AI
* Cloud Storage
Learn about [Vertex AI
pricing](https://cloud.google.com/ai-platform-unified/pricing) and [Cloud Storage
pricing](https://cloud.google.com/storage/pricing), and use the [Pricing
Calculator](https://cloud.google.com/products/calculator/)
to generate a cost estimate based on your projected usage.
## Installation
Install the latest version of Vertex AI client library.
```
import sys
if "google.colab" in sys.modules:
    USER_FLAG = ""
else:
    USER_FLAG = "--user"
! pip3 install -U google-cloud-aiplatform $USER_FLAG
```
Install the latest GA version of the *google-cloud-storage* library as well.
```
! pip3 install -U google-cloud-storage $USER_FLAG
```
### Restart the kernel
Once you've installed the Vertex AI client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
```
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
```
## Before you begin
### GPU runtime
*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU**
### Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the Vertex AI APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)
4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Vertex AI Notebooks.
5. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
```
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
```
#### Region
You can also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services. For the latest support per region, see the [Vertex AI locations documentation](https://cloud.google.com/ai-platform-unified/docs/general/locations)
```
REGION = "us-central1" # @param {type: "string"}
```
#### Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on the resources created, you create a timestamp for each session instance and append it to the names of the resources you create in this tutorial.
```
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```
### Authenticate your Google Cloud account
**If you are using Vertex AI Notebooks**, your environment is already authenticated. Skip this step.
**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.
**Otherwise**, follow these steps:
1. In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.
2. Click **Create service account**.
3. In the **Service account name** field, enter a name, and click **Create**.
4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "Vertex AI" into the filter box, and select **Vertex AI Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
5. Click **Create**. A JSON file that contains your key downloads to your local environment.
6. Enter the path to your service account key as the `GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
```
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on AI Platform, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
```
### Create a Cloud Storage bucket
**The following steps are required, regardless of your notebook environment.**
When you submit a custom training job using the Vertex AI client library, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. You can then
create an `Endpoint` resource based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
```
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
```
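Bucket names must also satisfy the Cloud Storage naming rules. As a quick local sanity check, you could use something like the sketch below (`looks_like_valid_bucket` is an illustrative helper invented here, not part of any Google library, and it deliberately ignores the rules for dotted names; only an actual creation attempt proves global uniqueness):

```python
import re

def looks_like_valid_bucket(name: str) -> bool:
    """Loose local check against the basic Cloud Storage bucket naming rules:
    3-63 characters; lowercase letters, digits, hyphens and underscores;
    starting and ending with a letter or number."""
    return re.fullmatch(r"[a-z0-9][a-z0-9_-]{1,61}[a-z0-9]", name) is not None

print(looks_like_valid_bucket("my-project-aip-20210101"))  # True
print(looks_like_valid_bucket("Bad_Bucket!"))              # False
```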
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
```
! gsutil mb -l $REGION $BUCKET_NAME
```
Finally, validate access to your Cloud Storage bucket by examining its contents:
```
! gsutil ls -al $BUCKET_NAME
```
### Set up variables
Next, set up some variables used throughout the tutorial.
### Import libraries and define constants
#### Import Vertex AI client library
Import the Vertex AI client library into our Python environment.
```
import os
import sys
import time
import google.cloud.aiplatform_v1 as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
```
#### Vertex AI constants
Set up the following constants for Vertex AI:
- `API_ENDPOINT`: The Vertex AI API service endpoint for dataset, model, job, pipeline and endpoint services.
- `PARENT`: The Vertex AI location root path for dataset, model, job, pipeline and endpoint resources.
```
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex AI location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
```
#### CustomJob constants
Set constants unique to CustomJob training:
- Dataset Training Schemas: Tells the `Pipeline` resource service the task (e.g., classification) to train the model for.
```
CUSTOM_TASK_GCS_PATH = (
"gs://google-cloud-aiplatform/schema/trainingjob/definition/custom_task_1.0.0.yaml"
)
```
#### Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for training and prediction.
Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
For GPU, available accelerators include:
- aip.AcceleratorType.NVIDIA_TESLA_K80
- aip.AcceleratorType.NVIDIA_TESLA_P100
- aip.AcceleratorType.NVIDIA_TESLA_P4
- aip.AcceleratorType.NVIDIA_TESLA_T4
- aip.AcceleratorType.NVIDIA_TESLA_V100
Otherwise specify `(None, None)` to use a container image to run on a CPU.
*Note*: TF releases before 2.3 with GPU support will fail to load the custom model in this tutorial. This is a known issue, caused by static graph ops generated in the serving function, and it is fixed in TF 2.3. If you encounter this issue with your own custom models, use a container image for TF 2.3 with GPU support.
```
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
    DEPLOY_GPU, DEPLOY_NGPU = (
        aip.AcceleratorType.NVIDIA_TESLA_K80,
        int(os.getenv("IS_TESTING_DEPLOY_GPU")),
    )
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
```
#### Container (Docker) image
Next, set the Docker container images for training and prediction.
For training, available images include:
- TensorFlow 1.15
- `gcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest`
- `gcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest`
- TensorFlow 2.1
- `gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest`
- `gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest`
- TensorFlow 2.2
- `gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest`
- `gcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest`
- TensorFlow 2.3
- `gcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest`
- `gcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest`
- TensorFlow 2.4
- `gcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest`
- `gcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest`
- XGBoost
- `gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1`
- Scikit-learn
- `gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest`
- Pytorch
- `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest`
- `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest`
- `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest`
- `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest`
For the latest list, see [Pre-built containers for training](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers).
For prediction, available images include:
- TensorFlow 1.15
- `gcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest`
- `gcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest`
- TensorFlow 2.1
- `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest`
- `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest`
- TensorFlow 2.2
- `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest`
- `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest`
- TensorFlow 2.3
- `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest`
- `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest`
- XGBoost
- `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest`
- `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest`
- `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest`
- `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest`
- Scikit-learn
- `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest`
- `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest`
- `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest`
For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers)
```
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
```
#### Machine Type
Next, set the machine type to use for training and prediction.
- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for training and prediction.
- `machine type`
 - `n1-standard`: 3.75 GB of memory per vCPU
 - `n1-highmem`: 6.5 GB of memory per vCPU
 - `n1-highcpu`: 0.9 GB of memory per vCPU
- `vCPUs`: one of \[2, 4, 8, 16, 32, 64, 96 \]
*Note: The following is not supported for training:*
- `standard`: 2 vCPUs
- `highcpu`: 2, 4 and 8 vCPUs
*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*.
```
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
```
# Tutorial
Now you are ready to start creating your own custom model and training for CIFAR10.
## Set up clients
The Vertex AI client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex AI server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
- Model Service for `Model` resources.
- Pipeline Service for training.
- Endpoint Service for deployment.
- Job Service for batch jobs and custom training.
- Prediction Service for serving.
```
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
```
## Train a model
There are two ways you can train a custom model using a container image:
- **Use a Google Cloud prebuilt container**. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.
- **Use your own custom container image**. If you use your own container, the container needs to contain your code for training a custom model.
## Prepare your custom job specification
Now that your clients are ready, your first step is to create a Job Specification for your custom training job. The job specification will consist of the following:
- `worker_pool_spec` : The specification of the type of machine(s) you will use for training and how many (single or distributed)
- `python_package_spec` : The specification of the Python package to be installed with the pre-built container.
### Prepare your machine specification
Now define the machine specification for your custom training job. This tells Vertex AI what type of machine instance to provision for the training.
- `machine_type`: The type of GCP instance to provision -- e.g., n1-standard-8.
- `accelerator_type`: The type, if any, of hardware accelerator. In this tutorial, if you previously set the variable `TRAIN_GPU != None`, you are using a GPU; otherwise you will use a CPU.
- `accelerator_count`: The number of accelerators.
```
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
```
### Prepare your disk specification
(optional) Now define the disk specification for your custom training job. This tells Vertex AI what type and size of disk to provision in each machine instance for the training.
- `boot_disk_type`: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD.
- `boot_disk_size_gb`: Size of disk in GB.
```
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
```
### Define the worker pool specification
Next, you define the worker pool specification for your custom training job. The worker pool specification will consist of the following:
- `replica_count`: The number of instances to provision of this machine type.
- `machine_spec`: The hardware specification.
- `disk_spec` : (optional) The disk storage specification.
- `python_package`: The Python training package to install on the VM instance(s) and which Python module to invoke, along with command line arguments for the Python module.
Let's dive deeper now into the Python package specification:
- `executor_image_uri`: The Docker image configured for your custom training job.
- `package_uris`: A list of the locations (URIs) of your Python training packages to install on the provisioned instance. The locations need to be in a Cloud Storage bucket. These can be either individual Python files or a zip (archive) of an entire package. In the latter case, the job service will unzip (unarchive) the contents into the Docker image.
- `python_module`: The Python module (script) to invoke for running the custom training job. In this example, you will be invoking `trainer.task` -- note that you do not append the `.py` suffix.
- `args`: The command-line arguments to pass to the corresponding Python module. In this example, you will be setting:
- `"--model-dir=" + MODEL_DIR` : The Cloud Storage location where to store the model artifacts. There are two ways to tell the training script where to save the model artifacts:
- direct: You pass the Cloud Storage location as a command line argument to your training script (set variable `DIRECT = True`), or
- indirect: The service passes the Cloud Storage location as the environment variable `AIP_MODEL_DIR` to your training script (set variable `DIRECT = False`). In this case, you tell the service the model artifact location in the job specification.
- `"--epochs=" + EPOCHS`: The number of epochs for training.
- `"--steps=" + STEPS`: The number of steps (batches) per epoch.
 - `"--distribute=" + TRAIN_STRATEGY` : The training distribution strategy to use for single or distributed training.
- `"single"`: single device.
- `"mirror"`: all GPU devices on a single compute instance.
- `"multi"`: all GPU devices on all compute instances.
```
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_cifar10.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
```
### Examine the training package
#### Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
- PKG-INFO
- README.md
- setup.cfg
- setup.py
- trainer
- \_\_init\_\_.py
- task.py
The files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.
The file `trainer/task.py` is the Python script for executing the custom training job. *Note*: when we referred to it in the worker pool specification, we replaced the directory slash with a dot (`trainer.task`) and dropped the file suffix (`.py`).
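That mapping, from a package-relative file path to the dotted module path you pass as `python_module`, can be sketched as follows (the `path_to_module` helper is an illustration invented here, not part of any Vertex AI tooling):

```python
def path_to_module(path: str) -> str:
    """Convert a package-relative script path (e.g. 'trainer/task.py')
    into the dotted module path Python's import machinery expects."""
    if path.endswith(".py"):
        path = path[: -len(".py")]   # drop the file suffix
    return path.replace("/", ".")    # directory slash becomes a dot

print(path_to_module("trainer/task.py"))  # trainer.task
```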
#### Package Assembly
In the following cells, you will assemble the training package.
```
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: CIFAR10 image classification\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex AI"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
```
#### Task.py contents
In the next cell, you write the contents of the training script, task.py. We won't go into detail; it's just there for you to browse. In summary, the script:
- Gets the directory in which to save the model artifacts from the command line (`--model-dir`) and, if not specified, from the environment variable `AIP_MODEL_DIR`.
- Loads CIFAR10 dataset from TF Datasets (tfds).
- Builds a model using TF.Keras model API.
- Compiles the model (`compile()`).
- Sets a training distribution strategy according to the argument `args.distribute`.
- Trains the model (`fit()`) with epochs and steps according to the arguments `args.epochs` and `args.steps`
- Saves the trained model (`save(args.model_dir)`) to the specified model directory.
```
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv("AIP_MODEL_DIR"), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling CIFAR10 data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
```
#### Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
```
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_cifar10.tar.gz
```
## Train the model using a `TrainingPipeline` resource
Now start training of your custom training job using a training pipeline on Vertex AI. To train your custom model, do the following steps:
1. Create a Vertex AI `TrainingPipeline` resource for the `Dataset` resource.
2. Execute the pipeline to start the training.
### Create a `TrainingPipeline` resource
You may ask, what do we use a pipeline for? We typically use pipelines when a job (such as training) has multiple steps, generally in sequential order: do step A, do step B, and so on. By putting the steps into a pipeline, we gain the following benefits:
1. The pipeline is reusable for subsequent training jobs.
2. It can be containerized and run as a batch job.
3. It can be distributed.
4. All the steps are associated with the same pipeline job, for tracking progress.
#### The `training_pipeline` specification
First, you need to describe a pipeline specification. Let's look into the *minimal* requirements for constructing a `training_pipeline` specification for a custom job:
- `display_name`: A human readable name for the pipeline job.
- `training_task_definition`: The training task schema.
- `training_task_inputs`: A dictionary describing the requirements for the training job.
- `model_to_upload`: A dictionary describing the specification for the (uploaded) Vertex AI custom `Model` resource.
- `display_name`: A human readable name for the `Model` resource.
 - `artifact_uri`: The Cloud Storage location where the model artifacts are stored in SavedModel format.
- `container_spec`: This is the specification for the Docker container that will be installed on the `Endpoint` resource, from which the custom model will serve predictions.
```
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value
MODEL_NAME = "custom_pipeline-" + TIMESTAMP
PIPELINE_DISPLAY_NAME = "custom-training-pipeline" + TIMESTAMP
training_task_inputs = json_format.ParseDict(
{"workerPoolSpecs": worker_pool_spec}, Value()
)
pipeline = {
"display_name": PIPELINE_DISPLAY_NAME,
"training_task_definition": CUSTOM_TASK_GCS_PATH,
"training_task_inputs": training_task_inputs,
"model_to_upload": {
"display_name": PIPELINE_DISPLAY_NAME + "-model",
"artifact_uri": MODEL_DIR,
"container_spec": {"image_uri": DEPLOY_IMAGE},
},
}
print(pipeline)
```
#### Create the training pipeline
Use this helper function `create_pipeline`, which takes the following parameter:
- `training_pipeline`: the full specification for the pipeline training job.
The helper function calls the pipeline client service's `create_pipeline` method, which takes the following parameters:
- `parent`: The Vertex AI location root path for your `Dataset`, `Model` and `Endpoint` resources.
- `training_pipeline`: The full specification for the pipeline training job.
The helper function will return the Vertex AI fully qualified identifier assigned to the training pipeline, which is saved as `pipeline.name`.
```
def create_pipeline(training_pipeline):
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
response = create_pipeline(pipeline)
```
Now save the unique identifier of the training pipeline you created.
```
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
```
### Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the job client service's `get_training_pipeline` method, with the following parameter:
- `name`: The Vertex AI fully qualified pipeline identifier.
When the model is done training, the pipeline state will be `PIPELINE_STATE_SUCCEEDED`.
```
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
```
# Deployment
Training the above model may take upwards of 20 minutes.
Once your model is done training, you can calculate the actual training time by subtracting `end_time` from `start_time`. You will also need the fully qualified Vertex AI `Model` resource identifier that the pipeline service assigned to your model, which you can get from the returned pipeline instance as the field `model_to_upload.name`.
```
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
if not DIRECT:
MODEL_DIR = MODEL_DIR + "/model"
model_path_to_deploy = MODEL_DIR
```
## Load the saved model
Your model is stored in TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, after which you can evaluate the model and make predictions.
To load, you use the `tf.keras.models.load_model()` method, passing it the Cloud Storage path where the model is saved -- specified by `MODEL_DIR`.
```
import tensorflow as tf
model = tf.keras.models.load_model(MODEL_DIR)
```
## Evaluate the model
Now find out how good the model is.
### Load evaluation data
You will load the CIFAR10 test (holdout) data from `tf.keras.datasets`, using the method `load_data()`. This will return the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the image data, and the corresponding labels.
You don't need the training data, which is why it is loaded as `(_, _)`.
Before you can run the data through evaluation, you need to preprocess it:
`x_test`:
1. Normalize (rescale) the pixel data by dividing each pixel by 255. This replaces each single-byte integer pixel with a 32-bit floating point number between 0 and 1.
`y_test`:
2. The labels are currently scalar (sparse). If you look back at the `compile()` step in the `trainer/task.py` script, you will find that the model was compiled for sparse labels, so you don't need to do anything more.
```
import numpy as np
from tensorflow.keras.datasets import cifar10
(_, _), (x_test, y_test) = cifar10.load_data()
x_test = (x_test / 255.0).astype(np.float32)
print(x_test.shape, y_test.shape)
```
### Perform the model evaluation
Now evaluate how well the model in the custom job did.
```
model.evaluate(x_test, y_test)
```
## Upload the model for serving
Next, you will upload your TF.Keras model from the custom job to Vertex AI `Model` service, which will create a Vertex AI `Model` resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex AI, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.
### How does the serving function work
When you send a request to an online prediction server, the request is received by an HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a `tf.string`.
The serving function consists of two parts:
- `preprocessing function`:
- Converts the input (`tf.string`) to the input shape and data type of the underlying model (dynamic graph).
- Performs the same preprocessing of the data that was done during training the underlying model -- e.g., normalizing, scaling, etc.
- `post-processing function`:
- Converts the model output to the format expected by the receiving application -- e.g., compresses the output.
- Packages the output for the receiving application -- e.g., adds headings, makes a JSON object, etc.
Both the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content.
One consideration when building serving functions for TF.Keras models is that they run as static graphs. That means you cannot use TF graph operations that require a dynamic graph. If you do, you will get an error when the serving function is compiled, indicating that you are using an `EagerTensor`, which is not supported.
### Serving function for image data
To pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.
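As a quick illustration that base 64 encoding is a lossless, transport-safe round trip (the byte string below is a made-up stand-in for real compressed JPEG content):

```python
import base64

# Hypothetical stand-in for compressed JPEG bytes (real content would come from a file).
raw = b"\xff\xd8\xff\xe0JFIF-sample-bytes"

# Encoding produces a plain ASCII string that survives JSON/HTTP transport unmodified.
b64str = base64.b64encode(raw).decode("utf-8")

# Decoding on the server side recovers the exact original bytes.
assert base64.b64decode(b64str) == raw
```

This is exactly the property the serving function relies on: the base 64 text can pass through the HTTP layer untouched, and decoding yields the original compressed image bytes.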
To resolve this, define a serving function (`serving_fn`) and attach it to the model as a preprocessing step. Add a `@tf.function` decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).
When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (`tf.string`), which is passed to the serving function (`serving_fn`). The serving function preprocesses the `tf.string` into raw (uncompressed) numpy bytes (`preprocess_fn`) to match the input requirements of the model:
- `tf.io.decode_jpeg` - Decompresses the JPEG image, which is returned as a TensorFlow tensor with three channels (RGB).
- `tf.image.convert_image_dtype` - Changes integer pixel values to float 32.
- `tf.image.resize` - Resizes the image to match the input shape for the model.
- `resized / 255.0` - Rescales (normalizes) the pixel data between 0 and 1.
At this point, the data can be passed to the model (`m_call`).
```
CONCRETE_INPUT = "numpy_inputs"


def _preprocess(bytes_input):
    decoded = tf.io.decode_jpeg(bytes_input, channels=3)
    decoded = tf.image.convert_image_dtype(decoded, tf.float32)
    resized = tf.image.resize(decoded, size=(32, 32))
    rescale = tf.cast(resized / 255.0, tf.float32)
    return rescale


@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
    decoded_images = tf.map_fn(
        _preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
    )
    return {
        CONCRETE_INPUT: decoded_images
    }  # User needs to make sure the key matches model's input


@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
    images = preprocess_fn(bytes_inputs)
    prob = m_call(**images)
    return prob


m_call = tf.function(model.call).get_concrete_function(
    [tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)

tf.saved_model.save(
    model, model_path_to_deploy, signatures={"serving_default": serving_fn}
)
```
## Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
For your purpose, you need the signature of the serving function. Why? Well, when we send our data for prediction as an HTTP request packet, the image data is base64 encoded, and our TF.Keras model takes numpy input. Your serving function will do the conversion from base64 to a numpy array.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
```
loaded = tf.saved_model.load(model_path_to_deploy)

serving_input = list(
    loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
```
### Upload the model
Use this helper function `upload_model` to upload your model, stored in SavedModel format, up to the `Model` service, which will instantiate a Vertex AI `Model` resource instance for your model. Once you've done that, you can use the `Model` resource instance in the same way as any other Vertex AI `Model` resource instance, such as deploying to an `Endpoint` resource for serving predictions.
The helper function takes the following parameters:
- `display_name`: A human readable name for the `Model` resource.
- `image_uri`: The container image for the model deployment.
- `model_uri`: The Cloud Storage path to our SavedModel artifact. For this tutorial, this is the Cloud Storage location where the `trainer/task.py` saved the model artifacts, which we specified in the variable `MODEL_DIR`.
The helper function calls the `Model` client service's method `upload_model`, which takes the following parameters:
- `parent`: The Vertex AI location root path for `Dataset`, `Model` and `Endpoint` resources.
- `model`: The specification for the Vertex AI `Model` resource instance.
Let's now dive deeper into the Vertex AI model specification `model`. This is a dictionary object that consists of the following fields:
- `display_name`: A human readable name for the `Model` resource.
- `metadata_schema_uri`: Since your model was built without a Vertex AI `Dataset` resource, you will leave this blank (`''`).
- `artifact_uri`: The Cloud Storage path where the model is stored in SavedModel format.
- `container_spec`: This is the specification for the Docker container that will be installed on the `Endpoint` resource, from which the `Model` resource will serve predictions. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated.
Uploading a model into a Vertex AI `Model` resource returns a long running operation, since it may take a few moments. You call `response.result()`, which is a synchronous call and will return when the Vertex AI `Model` resource is ready.
The helper function returns the Vertex AI fully qualified identifier for the corresponding Vertex AI `Model` instance, `upload_model_response.model`. You will save the identifier for subsequent steps in the variable `model_to_deploy_id`.
```
IMAGE_URI = DEPLOY_IMAGE


def upload_model(display_name, image_uri, model_uri):
    model = {
        "display_name": display_name,
        "metadata_schema_uri": "",
        "artifact_uri": model_uri,
        "container_spec": {
            "image_uri": image_uri,
            "command": [],
            "args": [],
            "env": [{"name": "env_name", "value": "env_value"}],
            "ports": [{"container_port": 8080}],
            "predict_route": "",
            "health_route": "",
        },
    }
    response = clients["model"].upload_model(parent=PARENT, model=model)
    print("Long running operation:", response.operation.name)
    upload_model_response = response.result(timeout=180)
    print("upload_model_response")
    print(" model:", upload_model_response.model)
    return upload_model_response.model


model_to_deploy_id = upload_model(
    "cifar10-" + TIMESTAMP, IMAGE_URI, model_path_to_deploy
)
```
### Get `Model` resource information
Now let's get the model information for just your model. Use this helper function `get_model`, with the following parameter:
- `name`: The Vertex AI unique identifier for the `Model` resource.
This helper function calls the Vertex AI `Model` client service's method `get_model`, with the following parameter:
- `name`: The Vertex AI unique identifier for the `Model` resource.
```
def get_model(name):
    response = clients["model"].get_model(name=name)
    print(response)


get_model(model_to_deploy_id)
```
## Deploy the `Model` resource
Now deploy the trained Vertex AI custom `Model` resource. This requires two steps:
1. Create an `Endpoint` resource for deploying the `Model` resource to.
2. Deploy the `Model` resource to the `Endpoint` resource.
### Create an `Endpoint` resource
Use this helper function `create_endpoint` to create an endpoint to deploy the model to for serving predictions, with the following parameter:
- `display_name`: A human readable name for the `Endpoint` resource.
The helper function uses the endpoint client service's `create_endpoint` method, which takes the following parameter:
- `display_name`: A human readable name for the `Endpoint` resource.
Creating an `Endpoint` resource returns a long running operation, since it may take a few moments to provision the `Endpoint` resource for serving. You call `response.result()`, which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex AI fully qualified identifier for the `Endpoint` resource: `response.name`.
```
ENDPOINT_NAME = "cifar10_endpoint-" + TIMESTAMP


def create_endpoint(display_name):
    endpoint = {"display_name": display_name}
    response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
    print("Long running operation:", response.operation.name)
    result = response.result(timeout=300)
    print("result")
    print(" name:", result.name)
    print(" display_name:", result.display_name)
    print(" description:", result.description)
    print(" labels:", result.labels)
    print(" create_time:", result.create_time)
    print(" update_time:", result.update_time)
    return result


result = create_endpoint(ENDPOINT_NAME)
```
Now get the unique identifier for the `Endpoint` resource you created.
```
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
```
### Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
- Single instance: The online prediction requests are processed on a single compute instance.
  - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.
- Manual scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.
  - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.
- Auto scaling: The online prediction requests are split across a scalable number of compute instances.
  - Set the minimum (`MIN_NODES`) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (`MAX_NODES`) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field `min_replica_count` and the maximum number of compute instances corresponds to the field `max_replica_count`, in your subsequent deployment request.
```
MIN_NODES = 1
MAX_NODES = 1
```
### Deploy `Model` resource to the `Endpoint` resource
Use this helper function `deploy_model` to deploy the `Model` resource to the `Endpoint` resource you created for serving predictions, with the following parameters:
- `model`: The Vertex AI fully qualified model identifier of the model to upload (deploy) from the training pipeline.
- `deploy_model_display_name`: A human readable name for the deployed model.
- `endpoint`: The Vertex AI fully qualified endpoint identifier to deploy the model to.
The helper function calls the `Endpoint` client service's method `deploy_model`, which takes the following parameters:
- `endpoint`: The Vertex AI fully qualified `Endpoint` resource identifier to deploy the `Model` resource to.
- `deployed_model`: The requirements specification for deploying the model.
- `traffic_split`: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
- If only one model, then specify as **{ "0": 100 }**, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
- If there are existing models on the endpoint, for which the traffic will be split, then use `model_id` to specify as **{ "0": percent, model_id: percent, ... }**, where `model_id` is the model id of an existing model to the deployed endpoint. The percents must add up to 100.
Let's now dive deeper into the `deployed_model` parameter. This parameter is specified as a Python dictionary with the minimum required fields:
- `model`: The Vertex AI fully qualified model identifier of the (upload) model to deploy.
- `display_name`: A human readable name for the deployed model.
- `disable_container_logging`: This disables logging of container events, such as execution failures (by default, container logging is enabled). Container logging is typically enabled while debugging the deployment and then disabled when deployed for production.
- `dedicated_resources`: This refers to how many compute instances (replicas) that are scaled for serving prediction requests.
- `machine_spec`: The compute instance to provision. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated.
- `min_replica_count`: The number of compute instances to initially provision, which you set earlier as the variable `MIN_NODES`.
- `max_replica_count`: The maximum number of compute instances to scale to, which you set earlier as the variable `MAX_NODES`.
#### Traffic Split
Let's now dive deeper into the `traffic_split` parameter. This parameter is specified as a Python dictionary and might at first be a tad confusing. You can deploy more than one instance of your model to an endpoint, and then set what percentage of the traffic goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got better model evaluation on v2, but you don't know for certain that it is really better until you deploy it to production. So with a traffic split, you might deploy v2 to the same endpoint as v1, but have it receive, say, only 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
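As a concrete sketch, a 10/90 canary split could look like the dictionary below (the deployed-model id `"2094170909"` is made up for illustration):

```python
# "0" refers to the model being deployed in this request (v2);
# "2094170909" is the hypothetical id of the already-deployed v1
# on the same endpoint. Percentages must sum to 100.
traffic_split = {"0": 10, "2094170909": 90}

assert sum(traffic_split.values()) == 100
```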
#### Response
The method returns a long running operation `response`. We will wait synchronously for the operation to complete by calling `response.result()`, which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
```
DEPLOYED_NAME = "cifar10_deployed-" + TIMESTAMP


def deploy_model(
    model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
    if DEPLOY_GPU:
        machine_spec = {
            "machine_type": DEPLOY_COMPUTE,
            "accelerator_type": DEPLOY_GPU,
            "accelerator_count": DEPLOY_NGPU,
        }
    else:
        machine_spec = {
            "machine_type": DEPLOY_COMPUTE,
            "accelerator_count": 0,
        }

    deployed_model = {
        "model": model,
        "display_name": deployed_model_display_name,
        "dedicated_resources": {
            "min_replica_count": MIN_NODES,
            "max_replica_count": MAX_NODES,
            "machine_spec": machine_spec,
        },
        "disable_container_logging": False,
    }

    response = clients["endpoint"].deploy_model(
        endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
    )

    print("Long running operation:", response.operation.name)
    result = response.result()
    print("result")
    deployed_model = result.deployed_model
    print(" deployed_model")
    print("  id:", deployed_model.id)
    print("  model:", deployed_model.model)
    print("  display_name:", deployed_model.display_name)
    print("  create_time:", deployed_model.create_time)
    return deployed_model.id


deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
```
## Make an online prediction request
Now send an online prediction request to your deployed model.
### Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
```
test_image = x_test[0]
test_label = y_test[0]
print(test_image.shape)
```
### Prepare the request content
You are going to send the CIFAR10 image as a compressed JPEG image, instead of the raw uncompressed bytes:
- `cv2.imwrite`: Use OpenCV to write the uncompressed image to disk as a compressed JPEG image.
  - Denormalize the image data from the [0,1) range back to [0,255).
  - Convert the 32-bit floating point values to 8-bit unsigned integers.
- `tf.io.read_file`: Read the compressed JPEG image back into memory as raw bytes.
- `base64.b64encode`: Encode the raw bytes into a base 64 encoded string.
```
import base64
import cv2
cv2.imwrite("tmp.jpg", (test_image * 255).astype(np.uint8))
bytes = tf.io.read_file("tmp.jpg")
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
```
### Send the prediction request
Ok, now you have a test image. Use this helper function `predict_image`, which takes the following parameters:
- `image`: The test image data as a numpy array.
- `endpoint`: The Vertex AI fully qualified identifier for the `Endpoint` resource where the `Model` resource was deployed to.
- `parameters_dict`: Additional parameters for serving.
This function calls the prediction client service `predict` method with the following parameters:
- `endpoint`: The Vertex AI fully qualified identifier for the `Endpoint` resource where the `Model` resource was deployed to.
- `instances`: A list of instances (encoded images) to predict.
- `parameters`: Additional parameters for serving.
To pass the image data to the prediction service, in the previous step you encoded the bytes into base64 -- which makes the content safe from modification when transmitting binary data over the network. You need to tell the serving binary where your model is deployed to, that the content has been base64 encoded, so it will decode it on the other end in the serving binary.
Each instance in the prediction request is a dictionary entry of the form:
`{serving_input: {'b64': content}}`
- `serving_input`: The name of the input layer of the serving function.
- `'b64'`: A key that indicates the content is base64 encoded.
- `content`: The compressed JPEG image bytes as a base64 encoded string.
Since the `predict()` service can take multiple images (instances), you will send your single image as a list of one image. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the `predict()` service.
The `response` object returns a list, where each element in the list corresponds to the corresponding image in the request. You will see in the output for each prediction:
- `predictions`: Confidence level for the prediction, between 0 and 1, for each of the classes.
```
def predict_image(image, endpoint, parameters_dict):
    # The format of each instance should conform to the deployed model's prediction input schema.
    instances_list = [{serving_input: {"b64": image}}]
    instances = [json_format.ParseDict(s, Value()) for s in instances_list]

    response = clients["prediction"].predict(
        endpoint=endpoint, instances=instances, parameters=parameters_dict
    )

    print("response")
    print(" deployed_model_id:", response.deployed_model_id)
    predictions = response.predictions
    print("predictions")
    for prediction in predictions:
        print(" prediction:", prediction)


predict_image(b64str, endpoint_id, None)
```
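If you want to turn the returned per-class confidences into a class name, a minimal post-processing sketch might look like this (the confidence values below are made up; the class ordering is the conventional CIFAR-10 label order):

```python
# Hypothetical post-processing of one prediction: pick the index with the
# highest confidence and map it to a CIFAR-10 class name.
CIFAR10_CLASSES = ["airplane", "automobile", "bird", "cat", "deer",
                   "dog", "frog", "horse", "ship", "truck"]

prediction = [0.01, 0.02, 0.05, 0.70, 0.02, 0.10, 0.03, 0.03, 0.02, 0.02]
best = max(range(len(prediction)), key=lambda i: prediction[i])
label = CIFAR10_CLASSES[best]
print(label)  # cat
```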
## Undeploy the `Model` resource
Now undeploy your `Model` resource from the serving `Endpoint` resource. Use this helper function `undeploy_model`, which takes the following parameters:
- `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed.
- `endpoint`: The Vertex AI fully qualified identifier for the `Endpoint` resource where the `Model` is deployed to.
This function calls the endpoint client service's method `undeploy_model`, with the following parameters:
- `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed.
- `endpoint`: The Vertex AI fully qualified identifier for the `Endpoint` resource where the `Model` resource is deployed.
- `traffic_split`: How to split traffic among the remaining deployed models on the `Endpoint` resource.
Since this is the only deployed model on the `Endpoint` resource, you can simply leave `traffic_split` empty by setting it to `{}`.
```
def undeploy_model(deployed_model_id, endpoint):
    response = clients["endpoint"].undeploy_model(
        endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
    )
    print(response)


undeploy_model(deployed_model_id, endpoint_id)
```
# Cleaning up
To clean up all GCP resources used in this project, you can [delete the GCP
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
- Dataset
- Pipeline
- Model
- Endpoint
- Batch Job
- Custom Job
- Hyperparameter Tuning Job
- Cloud Storage Bucket
```
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True

# Delete the dataset using the Vertex AI fully qualified identifier for the dataset
try:
    if delete_dataset and "dataset_id" in globals():
        clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
    print(e)

# Delete the training pipeline using the Vertex AI fully qualified identifier for the pipeline
try:
    if delete_pipeline and "pipeline_id" in globals():
        clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
    print(e)

# Delete the model using the Vertex AI fully qualified identifier for the model
try:
    if delete_model and "model_to_deploy_id" in globals():
        clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
    print(e)

# Delete the endpoint using the Vertex AI fully qualified identifier for the endpoint
try:
    if delete_endpoint and "endpoint_id" in globals():
        clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
    print(e)

# Delete the batch prediction job using the Vertex AI fully qualified identifier for the batch job
try:
    if delete_batchjob and "batch_job_id" in globals():
        clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
    print(e)

# Delete the custom job using the Vertex AI fully qualified identifier for the custom job
try:
    if delete_customjob and "job_id" in globals():
        clients["job"].delete_custom_job(name=job_id)
except Exception as e:
    print(e)

# Delete the hyperparameter tuning job using the Vertex AI fully qualified identifier for the job
try:
    if delete_hptjob and "hpt_job_id" in globals():
        clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
    print(e)

if delete_bucket and "BUCKET_NAME" in globals():
    ! gsutil rm -r $BUCKET_NAME
```
# LEARNING
This notebook serves as supporting material for topics covered in **Chapter 18 - Learning from Examples** , **Chapter 19 - Knowledge in Learning**, **Chapter 20 - Learning Probabilistic Models** from the book *Artificial Intelligence: A Modern Approach*. This notebook uses implementations from [learning.py](https://github.com/aimacode/aima-python/blob/master/learning.py). Let's start by importing everything from the module:
```
from learning import *
from notebook import *
```
## CONTENTS
* Machine Learning Overview
* Datasets
* Iris Visualization
* Distance Functions
* Plurality Learner
* k-Nearest Neighbours
* Decision Tree Learner
* Naive Bayes Learner
* Perceptron
* Learner Evaluation
## MACHINE LEARNING OVERVIEW
In this notebook, we learn about agents that can improve their behavior through diligent study of their own experiences.
An agent is **learning** if it improves its performance on future tasks after making observations about the world.
There are three types of feedback that determine the three main types of learning:
* **Supervised Learning**:
In Supervised Learning the agent observes some example input-output pairs and learns a function that maps from input to output.
**Example**: Let's think of an agent to classify images containing cats or dogs. If we provide an image containing a cat or a dog, this agent should output a string "cat" or "dog" for that particular image. To teach this agent, we will give a lot of input-output pairs like {cat image-"cat"}, {dog image-"dog"} to the agent. The agent then learns a function that maps from an input image to one of those strings.
* **Unsupervised Learning**:
In Unsupervised Learning the agent learns patterns in the input even though no explicit feedback is supplied. The most common type is **clustering**: detecting potential useful clusters of input examples.
**Example**: A taxi agent would develop a concept of *good traffic days* and *bad traffic days* without ever being given labeled examples.
* **Reinforcement Learning**:
In Reinforcement Learning the agent learns from a series of reinforcements—rewards or punishments.
**Example**: Let's talk about an agent to play the popular Atari game—[Pong](http://www.ponggame.org). We will reward a point for every correct move and deduct a point for every wrong move from the agent. Eventually, the agent will figure out which of its actions prior to the reinforcement were most responsible for it.
## DATASETS
For the following tutorials we will use a range of datasets, to better showcase the strengths and weaknesses of the algorithms. The datasets are the following:
* [Fisher's Iris](https://github.com/aimacode/aima-data/blob/a21fc108f52ad551344e947b0eb97df82f8d2b2b/iris.csv): Each item represents a flower, with four measurements: the length and the width of the sepals and petals. Each item/flower is categorized into one of three species: Setosa, Versicolor and Virginica.
* [Zoo](https://github.com/aimacode/aima-data/blob/a21fc108f52ad551344e947b0eb97df82f8d2b2b/zoo.csv): The dataset holds different animals and their classification as "mammal", "fish", etc. The new animal we want to classify has the following measurements: 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 4, 1, 0, 1 (don't concern yourself with what the measurements mean).
To make using the datasets easier, we have written a class, `DataSet`, in `learning.py`. The tutorials found here make use of this class.
Let's have a look at how it works before we get started with the algorithms.
### Intro
A lot of the datasets we will work with are .csv files (although other formats are supported too). We have a collection of sample datasets ready to use [on aima-data](https://github.com/aimacode/aima-data/tree/a21fc108f52ad551344e947b0eb97df82f8d2b2b). Two examples are the datasets mentioned above (*iris.csv* and *zoo.csv*). You can find plenty of datasets online, and a good repository of such datasets is the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets.html).
In such files, each line corresponds to one item/measurement. Each individual value in a line represents a *feature* and usually there is a value denoting the *class* of the item.
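For example, parsing one such line by hand might look like this (the line below is illustrative, in the same shape as iris.csv: four feature values followed by the class label):

```python
# One csv line: four numeric features, then the class of the item.
line = "5.1,3.5,1.4,0.2,setosa"

values = line.strip().split(",")
features, label = [float(v) for v in values[:-1]], values[-1]

assert features == [5.1, 3.5, 1.4, 0.2]
assert label == "setosa"
```

The `DataSet` class handles this kind of parsing for you, so you rarely need to do it manually.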
You can find the code for the dataset here:
```
%psource DataSet
```
### Class Attributes
* **examples**: Holds the items of the dataset. Each item is a list of values.
* **attrs**: The indexes of the features (by default in the range of [0,f), where *f* is the number of features). For example, `item[i]` returns the feature at index *i* of *item*.
* **attrnames**: An optional list with attribute names. For example, `item[s]`, where *s* is a feature name, returns the feature of name *s* in *item*.
* **target**: The attribute a learning algorithm will try to predict. By default the last attribute.
* **inputs**: This is the list of attributes without the target.
* **values**: A list of lists which holds the set of possible values for the corresponding attribute/feature. If initially `None`, it gets computed (by the function `setproblem`) from the examples.
* **distance**: The distance function used in the learner to calculate the distance between two items. By default `mean_boolean_error`.
* **name**: Name of the dataset.
* **source**: The source of the dataset (url or other). Not used in the code.
* **exclude**: A list of indexes to exclude from `inputs`. The list can include either attribute indexes (attrs) or names (attrnames).
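To make the `distance` attribute concrete, here is a sketch of a mean-boolean-error style distance (the module's own `mean_boolean_error` may differ in details): it is simply the fraction of positions where two items disagree.

```python
# Sketch of a mean-boolean-error distance: the fraction of feature
# positions where the two items differ (0 means identical items).
def mean_boolean_error(x, y):
    return sum(xi != yi for xi, yi in zip(x, y)) / len(x)

assert mean_boolean_error([1, 2, 3], [1, 2, 4]) == 1 / 3
assert mean_boolean_error("abc", "abc") == 0
```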
### Class Helper Functions
These functions help modify a `DataSet` object to your needs.
* **sanitize**: Takes as input an example and returns it with non-input (target) attributes replaced by `None`. Useful for testing. Keep in mind that the example given is not itself sanitized, but instead a sanitized copy is returned.
* **classes_to_numbers**: Maps the class names of a dataset to numbers. If the class names are not given, they are computed from the dataset values. Useful for classifiers that return a numerical value instead of a string.
* **remove_examples**: Removes examples containing a given value. Useful for removing examples with missing values, or for removing classes (needed for binary classifiers).
### Importing a Dataset
#### Importing from aima-data
Datasets uploaded on aima-data can be imported with the following line:
```
iris = DataSet(name="iris")
```
To check that we imported the correct dataset, we can do the following:
```
print(iris.examples[0])
print(iris.inputs)
```
Which correctly prints the first line in the csv file and the list of attribute indexes.
When importing a dataset, we can specify to exclude an attribute (for example, at index 1) by setting the parameter `exclude` to the attribute index or name.
```
iris2 = DataSet(name="iris",exclude=[1])
print(iris2.inputs)
```
### Attributes
Here we showcase the attributes.
First we will print the first three items/examples in the dataset.
```
print(iris.examples[:3])
```
Then we will print `attrs`, `attrnames`, `target`, `inputs`. Notice how `attrs` holds values in [0,4], but since the fourth attribute is the target, `inputs` holds values in [0,3].
```
print("attrs:", iris.attrs)
print("attrnames (by default same as attrs):", iris.attrnames)
print("target:", iris.target)
print("inputs:", iris.inputs)
```
Now we will print all the possible values for the first feature/attribute.
```
print(iris.values[0])
```
Finally we will print the dataset's name and source. Keep in mind that we have not set a source for the dataset, so in this case it is empty.
```
print("name:", iris.name)
print("source:", iris.source)
```
A useful combination of the above is `dataset.values[dataset.target]` which returns the possible values of the target. For classification problems, this will return all the possible classes. Let's try it:
```
print(iris.values[iris.target])
```
### Helper Functions
We will now take a look at the auxiliary functions found in the class.
First we will take a look at the `sanitize` function, which sets the non-input values of the given example to `None`.
In this case we want to hide the class of the first example, so we will sanitize it.
Note that the function doesn't actually change the given example; it returns a sanitized *copy* of it.
```
print("Sanitized:",iris.sanitize(iris.examples[0]))
print("Original:",iris.examples[0])
```
Currently the `iris` dataset has three classes: setosa, virginica and versicolor. We want, though, to convert it to a binary-class dataset (a dataset with two classes). The class we want to remove is "virginica". To accomplish that we will use the helper function `remove_examples`.
```
iris2 = DataSet(name="iris")
iris2.remove_examples("virginica")
print(iris2.values[iris2.target])
```
We also have `classes_to_numbers`. For a lot of the classifiers in the module (like the Neural Network), classes should have numerical values. With this function we map string class names to numbers.
```
print("Class of first example:",iris2.examples[0][iris2.target])
iris2.classes_to_numbers()
print("Class of first example:",iris2.examples[0][iris2.target])
```
As you can see "setosa" was mapped to 0.
Finally, we take a look at `find_means_and_deviations`. It finds the means and standard deviations of the features for each class.
```
means, deviations = iris.find_means_and_deviations()
print("Setosa feature means:", means["setosa"])
print("Versicolor mean for first feature:", means["versicolor"][0])
print("Setosa feature deviations:", deviations["setosa"])
print("Virginica deviation for second feature:",deviations["virginica"][1])
```
## IRIS VISUALIZATION
Since we will use the iris dataset extensively in this notebook, below we provide a visualization tool that helps in comprehending the dataset and thus how the algorithms work.
We plot the dataset in a 3D space using `matplotlib` and the function `show_iris` from `notebook.py`. The function takes as input three parameters, *i*, *j* and *k*, which are indices into the iris features, "Sepal Length", "Sepal Width", "Petal Length" and "Petal Width" (0 to 3). By default we show the first three features.
```
iris = DataSet(name="iris")
show_iris()
show_iris(0, 1, 3)
show_iris(1, 2, 3)
```
You can play around with the values to get a good look at the dataset.
## DISTANCE FUNCTIONS
In a lot of algorithms (like the *k-Nearest Neighbors* algorithm), there is a need to compare items, finding how *similar* or *close* they are. For that we have many different functions at our disposal. Below are the functions implemented in the module:
### Manhattan Distance (`manhattan_distance`)
One of the simplest distance functions. It sums the absolute differences between the coordinates/features of two items. To understand how it works, imagine a 2D grid with coordinates *x* and *y*. In that grid we have two items, at the squares positioned at `(1,2)` and `(3,4)`. The difference between their two coordinates is `3-1=2` and `4-2=2`. If we sum these up we get `4`. That means to get from `(1,2)` to `(3,4)` we need four moves; two to the right and two more up. The function works similarly for n-dimensional grids.
```
def manhattan_distance(X, Y):
    return sum([abs(x - y) for x, y in zip(X, Y)])

distance = manhattan_distance([1, 2], [3, 4])
print("Manhattan Distance between (1,2) and (3,4) is", distance)
```
### Euclidean Distance (`euclidean_distance`)
Probably the most popular distance function. It returns the square root of the sum of the squared differences between individual elements of two items.
```
def euclidean_distance(X, Y):
    return math.sqrt(sum([(x - y) ** 2 for x, y in zip(X, Y)]))

distance = euclidean_distance([1, 2], [3, 4])
print("Euclidean Distance between (1,2) and (3,4) is", distance)
```
### Hamming Distance (`hamming_distance`)
This function counts the number of differences between single elements in two items. For example, if we have two binary strings "111" and "011" the function will return 1, since the two strings only differ at the first element. The function works the same way for non-binary strings too.
```
def hamming_distance(X, Y):
    return sum(x != y for x, y in zip(X, Y))

distance = hamming_distance(['a', 'b', 'c'], ['a', 'b', 'b'])
print("Hamming Distance between 'abc' and 'abb' is", distance)
```
### Mean Boolean Error (`mean_boolean_error`)
To calculate this distance, we find the ratio of differing elements over all elements of two items. For example, if the two items are `(1,2,3)` and `(1,4,5)`, the ratio of different/all elements is 2/3, since they differ in two out of three elements.
```
def mean_boolean_error(X, Y):
    return mean(int(x != y) for x, y in zip(X, Y))

distance = mean_boolean_error([1, 2, 3], [1, 4, 5])
print("Mean Boolean Error Distance between (1,2,3) and (1,4,5) is", distance)
```
### Mean Error (`mean_error`)
This function finds the mean difference of single elements between two items. For example, if the two items are `(1,0,5)` and `(3,10,5)`, their error distance is `(3-1) + (10-0) + (5-5) = 2 + 10 + 0 = 12`. The mean error distance therefore is `12/3=4`.
```
def mean_error(X, Y):
    return mean([abs(x - y) for x, y in zip(X, Y)])

distance = mean_error([1, 0, 5], [3, 10, 5])
print("Mean Error Distance between (1,0,5) and (3,10,5) is", distance)
```
### Mean Square Error (`ms_error`)
This is very similar to the `Mean Error`, but instead of calculating the difference between elements, we are calculating the *square* of the differences.
```
def ms_error(X, Y):
    return mean([(x - y) ** 2 for x, y in zip(X, Y)])

distance = ms_error([1, 0, 5], [3, 10, 5])
print("Mean Square Distance between (1,0,5) and (3,10,5) is", distance)
```
### Root of Mean Square Error (`rms_error`)
This is the square root of `Mean Square Error`.
```
def rms_error(X, Y):
    return math.sqrt(ms_error(X, Y))

distance = rms_error([1, 0, 5], [3, 10, 5])
print("Root of Mean Square Error Distance between (1,0,5) and (3,10,5) is", distance)
```
## PLURALITY LEARNER CLASSIFIER
### Overview
The Plurality Learner is a simple algorithm, used mainly as a baseline for comparison with other algorithms. It finds the most popular class in the dataset and classifies every subsequent item into that class. Since it classifies every new item the same way, it is rarely used in practice; more sophisticated algorithms are preferred when accurate classification is needed.

Let's see how the classifier works with the plot above. There are three classes: **Class A** (orange dots), **Class B** (blue dots) and **Class C** (green dots). Every point in this plot has two **features** (i.e. X<sub>1</sub>, X<sub>2</sub>). Now, let's say we have a new point, a red star, and we want to know which class it belongs to. Predicting the class of this new red star is our current classification problem.
The Plurality Learner will find the class most represented in the plot. ***Class A*** has four items, ***Class B*** has three and ***Class C*** has seven. The most popular class is ***Class C***. Therefore, the item will get classified in ***Class C***, despite the fact that it is closer to the other two classes.
### Implementation
Below follows the implementation of the PluralityLearner algorithm:
```
psource(PluralityLearner)
```
It takes as input a dataset and returns a function. We can later call this function with the item we want to classify as the argument and it returns the class it should be classified in.
The function first finds the most popular class in the dataset and then each time we call its "predict" function, it returns it. Note that the input ("example") does not matter. The function always returns the same class.
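To make this concrete, here is a minimal sketch of the same behavior (a hypothetical re-implementation for illustration, not the module's exact code):

```python
from collections import Counter

def plurality_learner(examples, target):
    """Return a predictor that always outputs the dataset's most common class."""
    # Find the majority class once, at "training" time
    majority = Counter(e[target] for e in examples).most_common(1)[0][0]

    def predict(example):
        # The input is ignored entirely; the majority class is always returned
        return majority

    return predict

# Tiny toy dataset, with the class label at index 1
examples = [(1, 'mammal'), (2, 'mammal'), (3, 'bird')]
predictor = plurality_learner(examples, target=1)
print(predictor((99,)))  # 'mammal', regardless of the input
```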
### Example
For this example, we will not use the iris dataset, since every class there is equally represented, making the majority class ambiguous. Instead we will use the zoo dataset.
```
zoo = DataSet(name="zoo")
pL = PluralityLearner(zoo)
print(pL([1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 4, 1, 0, 1]))
```
The output for the above code is "mammal", since that is the most popular and common class in the dataset.
## K-NEAREST NEIGHBOURS CLASSIFIER
### Overview
The k-Nearest Neighbors algorithm is a non-parametric method used for classification and regression. We are going to use this to classify Iris flowers. More about kNN on [Scholarpedia](http://www.scholarpedia.org/article/K-nearest_neighbor).

Let's see how kNN works with a simple plot shown in the above picture.
We have the coordinates (called **features** in machine learning) of this red star and we need to predict its class using the kNN algorithm. In this algorithm, the value of **k** is arbitrary; **k** is one of the **hyperparameters** of the kNN algorithm. We choose this number based on our dataset, and choosing a particular value is known as **hyperparameter tuning/optimization**. We will learn more about this in coming topics.
Let's put **k = 3**. That means we need to find the 3 nearest neighbors of this red star and classify the new point into their majority class. Observe the smaller circle, which contains three points other than the **test point** (the red star). As there are two violet points, which form the majority, we predict the class of the red star as **violet (Class B)**.
Similarly, if we put **k = 5**, you can observe that there are three yellow points, which form the majority. So we classify our test point as **yellow (Class A)**.
In practical tasks, we iterate through a bunch of values for k (like [1, 3, 5, 10, 20, 50, 100]), see how it performs and select the best one.
### Implementation
Below follows the implementation of the kNN algorithm:
```
psource(NearestNeighborLearner)
```
It takes as input a dataset and k (default value is 1) and it returns a function, which we can later use to classify a new item.
To accomplish that, the function uses a heap-queue, where the items of the dataset are sorted according to their distance from *example* (the item to classify). We then take the k smallest elements from the heap-queue and we find the majority class. We classify the item to this class.
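As a rough sketch of the same idea (an illustrative version using `heapq.nsmallest`, not the module's exact code, and assuming the class label is the last element of each example):

```python
import heapq
import math

def euclidean(X, Y):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(X, Y)))

def knn_learner(examples, target, k=1):
    """Each example is a tuple of features with the class at index `target`."""
    def predict(item):
        # The k examples closest to `item`, by distance over the input features
        neighbors = heapq.nsmallest(k, examples,
                                    key=lambda e: euclidean(item, e[:target]))
        # Classify as the majority class among those k neighbors
        classes = [e[target] for e in neighbors]
        return max(set(classes), key=classes.count)
    return predict

points = [(0, 0, 'a'), (0, 1, 'a'), (5, 5, 'b'), (6, 5, 'b')]
predict = knn_learner(points, target=2, k=3)
print(predict((0, 0.5)))  # 'a': two of its three nearest neighbors are 'a'
```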
### Example
We measured a new flower with the following values: 5.1, 3.0, 1.1, 0.1. We want to classify that item/flower in a class. To do that, we write the following:
```
iris = DataSet(name="iris")
kNN = NearestNeighborLearner(iris,k=3)
print(kNN([5.1,3.0,1.1,0.1]))
```
The output of the above code is "setosa", which means the flower with the above measurements is of the "setosa" species.
## DECISION TREE LEARNER
### Overview
#### Decision Trees
A decision tree is a flowchart that uses a tree of decisions and their possible consequences for classification. At each non-leaf node an attribute of the input is tested, and based on the result the branch leading to the corresponding child node is selected. At a leaf node the input is classified with the class label of that leaf. The paths from root to leaves represent the classification rules by which class labels are assigned.

#### Decision Tree Learning
Decision tree learning is the construction of a decision tree from class-labeled training data. The data is expected to be a tuple in which each record of the tuple is an attribute used for classification. The decision tree is built top-down, by choosing a variable at each step that best splits the set of items. There are different metrics for measuring the "best split". These generally measure the homogeneity of the target variable within the subsets.
#### Gini Impurity
The Gini impurity of a set is the probability of a randomly chosen element being incorrectly labeled if it were labeled randomly according to the distribution of labels in the set.
$$I_G(p) = \sum{p_i(1 - p_i)} = 1 - \sum{p_i^2}$$
We select a split which minimizes the Gini impurity in child nodes.
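For instance, the impurity of a node can be computed directly from its labels (a small illustrative helper, not part of the module):

```python
def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions."""
    n = len(labels)
    return 1 - sum((labels.count(c) / n) ** 2 for c in set(labels))

print(gini(['a', 'a', 'a', 'a']))  # 0.0  (pure node)
print(gini(['a', 'a', 'b', 'b']))  # 0.5  (evenly mixed node)
```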
#### Information Gain
Information gain is based on the concept of entropy from information theory. Entropy is defined as:
$$H(p) = -\sum{p_i \log_2{p_i}}$$
Information gain is the difference between the entropy of the parent and the weighted sum of the entropies of its children. The feature used for splitting is the one which provides the most information gain.
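A small numeric check of these formulas (illustrative helpers, not the module's code):

```python
import math

def entropy(labels):
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def information_gain(parent, children):
    """Parent entropy minus the size-weighted entropies of the child splits."""
    n = len(parent)
    return entropy(parent) - sum(len(c) / n * entropy(c) for c in children)

# A split that separates the classes perfectly recovers the full 1 bit of entropy
parent = ['a', 'a', 'b', 'b']
print(entropy(parent))                                     # 1.0
print(information_gain(parent, [['a', 'a'], ['b', 'b']]))  # 1.0
```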
#### Pseudocode
You can view the pseudocode by running the cell below:
```
pseudocode("Decision Tree Learning")
```
### Implementation
The nodes of the tree constructed by our learning algorithm are stored using either `DecisionFork` or `DecisionLeaf` based on whether they are a parent node or a leaf node respectively.
```
psource(DecisionFork)
```
`DecisionFork` holds the attribute, which is tested at that node, and a dict of branches. The branches store the child nodes, one for each of the attribute's values. Calling an object of this class as a function with input tuple as an argument returns the next node in the classification path based on the result of the attribute test.
```
psource(DecisionLeaf)
```
The leaf node stores the class label in `result`. All input tuples' classification paths end on a `DecisionLeaf` whose `result` attribute decides their class.
```
psource(DecisionTreeLearner)
```
The implementation of `DecisionTreeLearner` provided in [learning.py](https://github.com/aimacode/aima-python/blob/master/learning.py) uses information gain as the metric for selecting which attribute to test for splitting. The function builds the tree top-down in a recursive manner. Based on the input it makes one of four choices:
<ol>
<li>If the input at the current step has no training data we return the mode of classes of input data received in the parent step (previous level of recursion).</li>
<li>If all values in training data belong to the same class it returns a `DecisionLeaf` whose class label is the class which all the data belongs to.</li>
<li>If the data has no attributes that can be tested we return the class with highest plurality value in the training data.</li>
<li>We choose the attribute which gives the highest amount of entropy gain and return a `DecisionFork` which splits based on this attribute. Each branch recursively calls `decision_tree_learning` to construct the sub-tree.</li>
</ol>
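The four cases above can be sketched as a self-contained recursion (an illustrative version with a simple information-gain split, not the module's exact code):

```python
import math
from collections import Counter

def plurality(examples, target):
    return Counter(e[target] for e in examples).most_common(1)[0][0]

def entropy(examples, target):
    n = len(examples)
    counts = Counter(e[target] for e in examples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def gain(examples, attr, target):
    """Parent entropy minus the size-weighted entropy of each split subset."""
    n = len(examples)
    remainder = sum(len(s) / n * entropy(s, target)
                    for s in ([e for e in examples if e[attr] == v]
                              for v in {e[attr] for e in examples}))
    return entropy(examples, target) - remainder

def decision_tree_learning(examples, attrs, target, parent_examples=()):
    if not examples:                                  # 1. no data: parent plurality
        return plurality(parent_examples, target)
    if len({e[target] for e in examples}) == 1:       # 2. one class: leaf
        return examples[0][target]
    if not attrs:                                     # 3. no attributes: plurality
        return plurality(examples, target)
    a = max(attrs, key=lambda x: gain(examples, x, target))  # 4. best split
    branches = {v: decision_tree_learning([e for e in examples if e[a] == v],
                                          [x for x in attrs if x != a],
                                          target, examples)
                for v in {e[a] for e in examples}}
    return (a, branches)

# Toy data: attribute 0 perfectly predicts the class at index 2
data = [(1, 0, 'yes'), (1, 1, 'yes'), (0, 0, 'no'), (0, 1, 'no')]
print(decision_tree_learning(data, [0, 1], 2))
```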
### Example
We will now use the Decision Tree Learner to classify a sample with values: 5.1, 3.0, 1.1, 0.1.
```
iris = DataSet(name="iris")
DTL = DecisionTreeLearner(iris)
print(DTL([5.1, 3.0, 1.1, 0.1]))
```
As expected, the Decision Tree learner classifies the sample as "setosa" as seen in the previous section.
## NAIVE BAYES LEARNER
### Overview
#### Theory of Probabilities
The Naive Bayes algorithm is a probabilistic classifier, making use of [Bayes' Theorem](https://en.wikipedia.org/wiki/Bayes%27_theorem). The theorem states that the conditional probability of **A** given **B** equals the conditional probability of **B** given **A** multiplied by the probability of **A**, divided by the probability of **B**.
$$P(A|B) = \dfrac{P(B|A)*P(A)}{P(B)}$$
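A quick numeric check with made-up probabilities (the numbers here are arbitrary, chosen only for illustration):

```python
# Hypothetical values: P(A) = 0.3, P(B|A) = 0.6, P(B) = 0.45
p_a, p_b_given_a, p_b = 0.3, 0.6, 0.45

# Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 3))  # 0.4
```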
From probability theory we have the Multiplication Rule: if the events *X* are independent, the following is true:
$$P(X_{1} \cap X_{2} \cap ... \cap X_{n}) = P(X_{1})*P(X_{2})*...*P(X_{n})$$
For conditional probabilities this becomes:
$$P(X_{1}, X_{2}, ..., X_{n}|Y) = P(X_{1}|Y)*P(X_{2}|Y)*...*P(X_{n}|Y)$$
#### Classifying an Item
How can we use the above to classify an item though?
We have a dataset with a set of classes (**C**) and we want to classify an item with a set of features (**F**). Essentially what we want to do is predict the class of an item given the features.
For a specific class, **Class**, we will find the conditional probability given the item features:
$$P(Class|F) = \dfrac{P(F|Class)*P(Class)}{P(F)}$$
We will do this for every class and we will pick the maximum. This will be the class the item is classified in.
The features though are a vector with many elements. We need to break the probabilities up using the multiplication rule. Thus the above equation becomes:
$$P(Class|F) = \dfrac{P(Class)*P(F_{1}|Class)*P(F_{2}|Class)*...*P(F_{n}|Class)}{P(F_{1})*P(F_{2})*...*P(F_{n})}$$
The calculation of the conditional probability then depends on the calculation of the following:
*a)* The probability of **Class** in the dataset.
*b)* The conditional probability of each feature occurring in an item classified in **Class**.
*c)* The probabilities of each individual feature.
For *a)*, we will count how many times **Class** occurs in the dataset (aka how many items are classified in a particular class).
For *b)*, if the feature values are discrete ('Blue', '3', 'Tall', etc.), we will count how many times a feature value occurs in items of each class. If the feature values are not discrete, we will go a different route. We will use a distribution function to calculate the probability of values for a given class and feature. If we know the distribution function of the dataset, then great, we will use it to compute the probabilities. If we don't know the function, we can assume the dataset follows the normal (Gaussian) distribution without much loss of accuracy. In fact, by the [Central Limit Theorem](https://en.wikipedia.org/wiki/Central_limit_theorem), the sum (or mean) of many independent values tends toward a Gaussian distribution as the sample grows.
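The Gaussian density used for this is straightforward to write down. Below is a sketch matching the formula; the module provides its own `gaussian(mean, st_dev, x)` helper, whose exact definition is assumed here:

```python
import math

def gaussian(mean, st_dev, x):
    """Probability density of the normal distribution N(mean, st_dev^2) at x."""
    return (1 / (st_dev * math.sqrt(2 * math.pi))) * \
        math.exp(-((x - mean) ** 2) / (2 * st_dev ** 2))

print(round(gaussian(0, 1, 0), 4))  # 0.3989, the peak of the standard normal
```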
*NOTE:* If the values are continuous but we use the discrete approach, there might be issues if we are not lucky. For one, if we have two values, '5.0' and '5.1', the discrete approach treats them as completely different values, despite being so close. Second, if we are trying to classify an item with a feature value of '5.15' and that value does not appear for the feature, its probability will be 0, which might lead to misclassification. Generally, the continuous approach is more accurate and more useful, despite the overhead of calculating the distribution function.
The last one, *c)*, is tricky. If feature values are discrete, we can count how many times they occur in the dataset. But what if the feature values are continuous? Imagine a dataset with a height feature. Is it worth it to count how many times each value occurs? Most of the time it is not, since there can be miscellaneous differences in the values (for example, 1.7 meters and 1.700001 meters are practically equal, but they count as different values).
So as we cannot calculate the feature value probabilities, what are we going to do?
Let's take a step back and rethink exactly what we are doing. We are essentially comparing conditional probabilities of all the classes. For two classes, **A** and **B**, we want to know which one is greater:
$$\dfrac{P(F|A)*P(A)}{P(F)} vs. \dfrac{P(F|B)*P(B)}{P(F)}$$
Wait, **P(F)** is the same for both the classes! In fact, it is the same for every combination of classes. That is because **P(F)** does not depend on a class, thus being independent of the classes.
So, for *c)*, we actually don't need to calculate it at all.
#### Wrapping It Up
Classifying an item to a class then becomes a matter of calculating the conditional probabilities of feature values and the probabilities of classes. This is both very desirable and computationally cheap.
Remember though that all the above hold because we made the assumption that the features are independent. In most real-world cases that is not true. Is that an issue here? Fret not, for the algorithm is very efficient even with that assumption. That is why the algorithm is called the **Naive** Bayes Classifier: we (naively) assume that the features are independent to make computations easier.
### Implementation
The implementation of the Naive Bayes Classifier is split in two; *Learning* and *Simple*. The *learning* classifier takes as input a dataset and learns the needed distributions from that. It is itself split into two, for discrete and continuous features. The *simple* classifier takes as input not a dataset, but already calculated distributions (a dictionary of `CountingProbDist` objects).
#### Discrete
The implementation for discrete values counts how many times each feature value occurs for each class, and how many times each class occurs. The results are stored in `CountingProbDist` objects.
With the below code you can see the probabilities of the class "Setosa" appearing in the dataset and the probability of the first feature (at index 0) of the same class having a value of 5. Notice that the second probability is relatively small, even though if we observe the dataset we will find that a lot of values are around 5. The issue arises because the features in the Iris dataset are continuous, and we are assuming they are discrete. If the features were discrete (for example, "Tall", "3", etc.) this probably wouldn't have been the case and we would see a much nicer probability distribution.
```
dataset = iris
target_vals = dataset.values[dataset.target]
target_dist = CountingProbDist(target_vals)
attr_dists = {(gv, attr): CountingProbDist(dataset.values[attr])
              for gv in target_vals
              for attr in dataset.inputs}

for example in dataset.examples:
    targetval = example[dataset.target]
    target_dist.add(targetval)
    for attr in dataset.inputs:
        attr_dists[targetval, attr].add(example[attr])
print(target_dist['setosa'])
print(attr_dists['setosa', 0][5.0])
```
First we found the different values for the classes (called targets here) and calculated their distribution. Next we initialized a dictionary of `CountingProbDist` objects, one for each class and feature. Finally, we iterated through the examples in the dataset and calculated the needed probabilities.
Having calculated the different probabilities, we will move on to the predicting function. It will receive as input an item and output the most likely class. Using the above formula, it will multiply the probability of the class appearing, with the probability of each feature value appearing in the class. It will return the max result.
```
def predict(example):
    def class_probability(targetval):
        return (target_dist[targetval] *
                product(attr_dists[targetval, attr][example[attr]]
                        for attr in dataset.inputs))

    return argmax(target_vals, key=class_probability)
print(predict([5, 3, 1, 0.1]))
```
You can view the complete code by executing the next line:
```
psource(NaiveBayesDiscrete)
```
#### Continuous
In the implementation we use the Gaussian/Normal distribution function. To make it work, we need to find the means and standard deviations of features for each class. We make use of the `find_means_and_deviations` Dataset function. On top of that, we will also calculate the class probabilities as we did with the Discrete approach.
```
means, deviations = dataset.find_means_and_deviations()
target_vals = dataset.values[dataset.target]
target_dist = CountingProbDist(target_vals)
print(means["setosa"])
print(deviations["versicolor"])
```
You can see the means of the features for the "Setosa" class and the deviations for "Versicolor".
The prediction function will work similarly to the Discrete algorithm. It will multiply the probability of the class occurring with the conditional probabilities of the feature values for the class.
Since we are using the Gaussian distribution, we will input the value for each feature into the Gaussian function, together with the mean and deviation of the feature. This will return the probability of the particular feature value for the given class. We will repeat for each class and pick the max value.
```
def predict(example):
    def class_probability(targetval):
        prob = target_dist[targetval]
        for attr in dataset.inputs:
            prob *= gaussian(means[targetval][attr], deviations[targetval][attr], example[attr])
        return prob

    return argmax(target_vals, key=class_probability)
print(predict([5, 3, 1, 0.1]))
```
The complete code of the continuous algorithm:
```
psource(NaiveBayesContinuous)
```
#### Simple
The simple classifier (chosen with the argument `simple`) does not learn from a dataset, instead it takes as input a dictionary of already calculated `CountingProbDist` objects and returns a predictor function. The dictionary is in the following form: `(Class Name, Class Probability): CountingProbDist Object`.
Each class has its own probability distribution. Given a list of features, the classifier calculates the probability of the input for each class and returns the maximum. The only pre-processing work is to create dictionaries for the distribution of classes (named `targets`) and attributes/features.
The complete code for the simple classifier:
```
psource(NaiveBayesSimple)
```
This classifier is useful when you already have calculated the distributions and you need to predict future items.
### Examples
We will now use the Naive Bayes Classifier (Discrete and Continuous) to classify items:
```
nBD = NaiveBayesLearner(iris, continuous=False)
print("Discrete Classifier")
print(nBD([5, 3, 1, 0.1]))
print(nBD([6, 5, 3, 1.5]))
print(nBD([7, 3, 6.5, 2]))
nBC = NaiveBayesLearner(iris, continuous=True)
print("\nContinuous Classifier")
print(nBC([5, 3, 1, 0.1]))
print(nBC([6, 5, 3, 1.5]))
print(nBC([7, 3, 6.5, 2]))
```
Notice how the Discrete Classifier misclassified the second item, while the Continuous one had no problem.
Let's now take a look at the simple classifier. First we will come up with a sample problem to solve. Say we are given three bags. Each bag contains three letters ('a', 'b' and 'c') of different quantities. We are given a string of letters and we are tasked with finding from which bag the string of letters came.
Since we know the probability distribution of the letters for each bag, we can use the naive bayes classifier to make our prediction.
```
bag1 = 'a'*50 + 'b'*30 + 'c'*15
dist1 = CountingProbDist(bag1)
bag2 = 'a'*30 + 'b'*45 + 'c'*20
dist2 = CountingProbDist(bag2)
bag3 = 'a'*20 + 'b'*20 + 'c'*35
dist3 = CountingProbDist(bag3)
```
Now that we have the `CountingProbDist` objects for each bag/class, we will create the dictionary. We assume that it is equally probable that we will pick from any bag.
```
dist = {('First', 0.5): dist1, ('Second', 0.3): dist2, ('Third', 0.2): dist3}
nBS = NaiveBayesLearner(dist, simple=True)
```
Now we can start making predictions:
```
print(nBS('aab')) # We can handle strings
print(nBS(['b', 'b'])) # And lists!
print(nBS('ccbcc'))
```
The results make intuitive sense. The first bag has a high amount of 'a's, the second a high amount of 'b's and the third a high amount of 'c's. The classifier confirms this intuition.
Note that the simple classifier doesn't distinguish between discrete and continuous values. It just takes whatever it is given. Also, the `simple` option on the `NaiveBayesLearner` overrides the `continuous` argument. `NaiveBayesLearner(d, simple=True, continuous=False)` just creates a simple classifier.
## PERCEPTRON CLASSIFIER
### Overview
The Perceptron is a linear classifier. It works the same way as a neural network with no hidden layers (just input and output). First it trains its weights given a dataset and then it can classify a new item by running it through the network.
Its input layer consists of the item features, while the output layer consists of nodes (also called neurons). Each node in the output layer has *n* synapses (one for every item feature), each with its own weight. The nodes compute the dot product of the item features and the synapse weights. These values then pass through an activation function (usually a sigmoid). Finally, we pick the largest of the values and return its index.
Note that in classification problems each node represents a class. The final classification is the class/node with the max output value.
Below you can see a single node/neuron in the outer layer. With *f* we denote the item features, with *w* the synapse weights, then inside the node we have the dot product and the activation function, *g*.

### Implementation
First, we train (calculate) the weights given a dataset, using the `BackPropagationLearner` function of `learning.py`. We then return a function, `predict`, which we will use in the future to classify a new item. The function computes the (algebraic) dot product of the item with the calculated weights for each node in the outer layer. Then it picks the greatest value and classifies the item in the corresponding class.
```
psource(PerceptronLearner)
```
Note that the Perceptron is a one-layer neural network, without any hidden layers. So, in `BackPropagationLearner`, we will pass no hidden layers. From that function we get our network, which is just one layer, with the weights calculated.
That function `predict` passes the input/example through the network, calculating the dot product of the input and the weights for each node and returns the class with the max dot product.
### Example
We will train the Perceptron on the iris dataset. Since `BackPropagationLearner` works with integer indices rather than strings, we first need to convert the class names to integers. Then we will try to classify the item/flower with measurements of 5, 3, 1, 0.1.
```
iris = DataSet(name="iris")
iris.classes_to_numbers()
perceptron = PerceptronLearner(iris)
print(perceptron([5, 3, 1, 0.1]))
```
The correct output is 0, which means the item belongs in the first class, "setosa". Note that the Perceptron algorithm is not perfect and may produce false classifications.
## LEARNER EVALUATION
In this section we will evaluate and compare algorithm performance. The dataset we will use will again be the iris one.
```
iris = DataSet(name="iris")
```
### Naive Bayes
First up we have the Naive Bayes algorithm. First we will test how well the Discrete Naive Bayes works, and then how the Continuous fares.
```
nBD = NaiveBayesLearner(iris, continuous=False)
print("Error ratio for Discrete:", err_ratio(nBD, iris))
nBC = NaiveBayesLearner(iris, continuous=True)
print("Error ratio for Continuous:", err_ratio(nBC, iris))
```
The error for the Naive Bayes algorithm is very, very low; close to 0. There is also very little difference between the discrete and continuous version of the algorithm.
### k-Nearest Neighbors
Now we will take a look at kNN, for different values of *k*. Note that *k* should have odd values, to break any ties between two classes.
```
kNN_1 = NearestNeighborLearner(iris, k=1)
kNN_3 = NearestNeighborLearner(iris, k=3)
kNN_5 = NearestNeighborLearner(iris, k=5)
kNN_7 = NearestNeighborLearner(iris, k=7)
print("Error ratio for k=1:", err_ratio(kNN_1, iris))
print("Error ratio for k=3:", err_ratio(kNN_3, iris))
print("Error ratio for k=5:", err_ratio(kNN_5, iris))
print("Error ratio for k=7:", err_ratio(kNN_7, iris))
```
Notice how the error became larger and larger as *k* increased. This is generally the case with datasets where classes are spaced out, as is the case with the iris dataset. If items from different classes were closer together, classification would be more difficult. Usually a value of 1, 3 or 5 for *k* suffices.
Also note that since the training set is also the testing set, for *k* equal to 1 we get a perfect score, since the item we want to classify each time is already in the dataset and its closest neighbor is itself.
### Perceptron
For the Perceptron, we first need to convert class names to integers. Let's see how it performs in the dataset.
```
iris2 = DataSet(name="iris")
iris2.classes_to_numbers()
perceptron = PerceptronLearner(iris2)
print("Error ratio for Perceptron:", err_ratio(perceptron, iris2))
```
The Perceptron didn't fare very well, mainly because the dataset is not linearly separable. On simpler datasets the algorithm performs much better, but unfortunately such datasets are rare in real-life scenarios.
## AdaBoost
### Overview
**AdaBoost** is an algorithm which uses **ensemble learning**. In ensemble learning the hypotheses in the collection, or ensemble, vote for what the output should be and the output with the majority votes is selected as the final answer.
The AdaBoost algorithm, as described in the book, works with a **weighted training set** and **weak learners** (classifiers with roughly 50% + ε accuracy, i.e. slightly better than random guessing). It manipulates the weights attached to the examples that are shown to it: examples with higher weights are given more importance.
All examples start with equal weights, and a hypothesis is generated from them. The weights of incorrectly classified examples are increased, so that the next hypothesis is more likely to classify them correctly, while the weights of correctly classified examples are reduced. This process is repeated *K* times (*K* is an input to the algorithm), producing *K* hypotheses.
These *K* hypotheses are also assigned weights according to their performance on the weighted training set. The final ensemble hypothesis is the weighted-majority combination of these *K* hypotheses.
The remarkable property of AdaBoost is that, with weak learners and a sufficiently large *K*, a highly accurate classifier can be learned regardless of the complexity of the function being learned or the inexpressiveness of the hypothesis space.
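The reweighting step described above can be sketched in a few lines. This is an illustrative stand-in, not the AIMA implementation, and it assumes labels in {-1, +1}:

```python
import numpy as np

def adaboost_round(weights, y_true, y_pred):
    """One reweighting step: raise weights on misclassified examples."""
    err = np.sum(weights[y_true != y_pred])          # weighted error of this hypothesis
    err = np.clip(err, 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)            # weight of this hypothesis in the ensemble
    weights = weights * np.exp(-alpha * y_true * y_pred)
    return weights / weights.sum(), alpha            # renormalize to sum to 1

w = np.full(4, 0.25)
y = np.array([1, 1, -1, -1])
pred = np.array([1, -1, -1, -1])                     # one mistake, at index 1
w, alpha = adaboost_round(w, y, pred)
print(w)  # the misclassified example now carries more weight than the others
```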
### Implementation
As seen in the previous section, the `PerceptronLearner` does not perform that well on the iris dataset. We'll use perceptron as the learner for the AdaBoost algorithm and try to increase the accuracy.
Let's first see what AdaBoost is exactly:
```
psource(AdaBoost)
```
AdaBoost takes two inputs: **L**, the learner, and *K*, the number of hypotheses to generate. The learner **L** takes as input a dataset and the weights associated with its examples. But `PerceptronLearner` does not handle weights and only takes a dataset as its input.
To remedy that, we will give the `PerceptronLearner` a modified dataset in which examples are repeated according to the weights associated with them. Intuitively, this forces the learner to see the same example again and again until it can classify it correctly.
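The resampling trick can be sketched like this; the helper is illustrative only, not the actual `WeightedLearner` code:

```python
def replicate_by_weight(examples, weights, n=100):
    """Return a dataset where example i appears roughly weights[i] * n times."""
    out = []
    for ex, w in zip(examples, weights):
        out.extend([ex] * max(1, round(w * n)))
    return out

data = ['a', 'b', 'c']
weights = [0.1, 0.6, 0.3]
resampled = replicate_by_weight(data, weights)
print(resampled.count('b'))  # 60 -- the heavily weighted example dominates
```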
To convert `PerceptronLearner` so that it can take weights as input too, we will have to pass it through the **`WeightedLearner`** function.
```
psource(WeightedLearner)
```
The `WeightedLearner` function will then call the `PerceptronLearner`, during each iteration, with the modified dataset which contains the examples according to the weights associated with them.
### Example
We will pass the `PerceptronLearner` through `WeightedLearner` function. Then we will create an `AdaboostLearner` classifier with number of hypotheses or *K* equal to 5.
```
WeightedPerceptron = WeightedLearner(PerceptronLearner)
AdaboostLearner = AdaBoost(WeightedPerceptron, 5)
iris2 = DataSet(name="iris")
iris2.classes_to_numbers()
adaboost = AdaboostLearner(iris2)
adaboost([5, 3, 1, 0.1])
```
That is the correct answer. Let's check the error rate of adaboost with perceptron.
```
print("Error ratio for adaboost: ", err_ratio(adaboost, iris2))
```
It reduced the error rate considerably. Unlike the `PerceptronLearner`, `AdaBoost` was able to learn the complexity in the iris dataset.
```
import pandas as pd
import numpy as np
import math
import random
import operator
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn import linear_model
from sklearn.linear_model import LinearRegression
from sklearn import metrics
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.datasets import make_regression
%matplotlib inline
```
# Dataset creation
```
random.seed(98103)
n = 30
x = np.array([random.random() for i in range(n)])
sin = lambda x: math.sin(4*x)
vsin = np.vectorize(sin)
y = vsin(x)
#np.random.seed(0)
#x = 2 - 3 * np.random.normal(0, 1, 30)
#y = x - 2 * (x ** 2) + 0.5 * (x ** 3) + np.random.normal(-3, 3, 30)
```
Add Gaussian noise to the dataset
```
random.seed(1)
e = np.array([random.gauss(0,1.0/3.0) for i in range(n)])
y = y + e
plt.scatter(x,y, s=10)
plt.show()
data= pd.DataFrame({'X1': x, 'Y': y})
```
# Define a linear fit
```
# transforming the data to include another axis
X = x[:, np.newaxis]
Y = y[:, np.newaxis]
model = LinearRegression()
model.fit(X, Y)
y_pred = model.predict(X)
rmse = np.sqrt(mean_squared_error(Y,y_pred))
r2 = r2_score(Y,y_pred)
print(rmse)
print(r2)
plt.scatter(X, Y, s=10)
plt.plot(X, y_pred, color='r')
plt.show()
```
# Define a polynomial fit
## Second degree polynomial
```
polynomial_features= PolynomialFeatures(degree=2)
x_poly = polynomial_features.fit_transform(X)
model = LinearRegression()
model.fit(x_poly, Y)
y_poly_pred = model.predict(x_poly)
rmse = np.sqrt(mean_squared_error(Y,y_poly_pred))
r2 = r2_score(Y,y_poly_pred)
print(rmse)
print(r2)
plt.scatter(X, Y, s=10)
# sort the values of x before line plot
sort_axis = operator.itemgetter(0)
sorted_zip = sorted(zip(X,y_poly_pred), key=sort_axis)
X_z, y_poly_pred = zip(*sorted_zip)
plt.plot(X_z, y_poly_pred, color='m')
plt.show()
# The coefficients
print('Coefficients: \n', model.coef_)
```
## Third degree polynomial
```
polynomial_features= PolynomialFeatures(degree=3)
x_poly = polynomial_features.fit_transform(X)
model = LinearRegression()
model.fit(x_poly, Y)
y_poly_pred = model.predict(x_poly)
rmse = np.sqrt(mean_squared_error(Y,y_poly_pred))
r2 = r2_score(Y,y_poly_pred)
print(rmse)
print(r2)
plt.scatter(X, Y, s=10)
# sort the values of x before line plot
sort_axis = operator.itemgetter(0)
sorted_zip = sorted(zip(X,y_poly_pred), key=sort_axis)
X_z, y_poly_pred = zip(*sorted_zip)
plt.plot(X_z, y_poly_pred, color='m')
plt.show()
# The coefficients
print('Coefficients: \n', model.coef_)
```
## High order polynomial
```
polynomial_features= PolynomialFeatures(degree=20)
x_poly = polynomial_features.fit_transform(X)
model = LinearRegression()
model.fit(x_poly, Y)
y_poly_pred = model.predict(x_poly)
rmse = np.sqrt(mean_squared_error(Y,y_poly_pred))
r2 = r2_score(Y,y_poly_pred)
print(rmse)
print(r2)
plt.scatter(X, Y, s=10)
# sort the values of x before line plot
sort_axis = operator.itemgetter(0)
sorted_zip = sorted(zip(X,y_poly_pred), key=sort_axis)
X_z, y_poly_pred = zip(*sorted_zip)
plt.plot(X_z, y_poly_pred, color='m')
plt.show()
# The coefficients
print('Coefficients: \n', model.coef_)
```
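A sanity check on the fits above: a hedged sketch (regenerating similar sine-plus-noise toy data, not the exact arrays above) confirms that training RMSE only shrinks as the degree grows, which is exactly the overfitting risk the high-degree fit illustrates.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import PolynomialFeatures

# Toy data similar in shape to the section's dataset.
rng = np.random.RandomState(0)
x = rng.rand(30, 1)
y = np.sin(4 * x).ravel() + rng.normal(0, 1 / 3, 30)

rmse = {}
for degree in (1, 2, 3, 20):
    X_poly = PolynomialFeatures(degree=degree).fit_transform(x)
    pred = LinearRegression().fit(X_poly, y).predict(X_poly)
    rmse[degree] = np.sqrt(mean_squared_error(y, pred))

print(rmse)  # training RMSE is non-increasing in the degree
```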
# Ridge regularization
## Generate random dataset
```
X, Y, w = make_regression(n_samples=10, n_features=30, coef=True,
random_state=1, bias=3.5)
model = LinearRegression()
model.fit(X, Y)
y_pred = model.predict(X)
rmse = np.sqrt(mean_squared_error(Y,y_pred))
r2 = r2_score(Y,y_pred)
print(rmse)
print(r2)
# The coefficients
print('Coefficients: \n', model.coef_)
```
# Ridge regression
```
clf = Ridge()
coefs = []
errors = []
alphas = np.logspace(-6, 6, 200)
# Train the model with different regularisation strengths
for a in alphas:
clf.set_params(alpha=a)
clf.fit(X, Y)
coefs.append(clf.coef_)
# Display results
ax = plt.gca()
ax.plot(alphas, coefs)
ax.set_xscale('log')
plt.xlabel('alpha')
plt.ylabel('weights')
plt.title('Ridge coefficients as a function of the regularization')
plt.axis('tight')
plt.show()
```
# Lasso regularization
```
clf = linear_model.Lasso()
coefs = []
errors = []
alphas = np.logspace(-6, 6)
# Train the model with different regularisation strengths
for a in alphas:
clf.set_params(alpha=a, max_iter=10000)
clf.fit(X, Y)
coefs.append(clf.coef_)
# Display results
ax = plt.gca()
ax.plot(alphas, coefs)
ax.set_xscale('log')
plt.xlabel('alpha')
plt.ylabel('weights')
plt.title('Lasso coefficients as a function of the regularization')
plt.axis('tight')
plt.show()
```
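The practical difference between the two penalties is sparsity: Lasso drives coefficients exactly to zero, while Ridge only shrinks them toward zero. A small sketch on toy data (not the dataset above):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# 10 features, but only 3 actually drive the target.
X, y = make_regression(n_samples=50, n_features=10, n_informative=3,
                       noise=1.0, random_state=1)

lasso = Lasso(alpha=10.0, max_iter=10000).fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)

print("Lasso nonzero coefs:", np.sum(lasso.coef_ != 0))
print("Ridge nonzero coefs:", np.sum(ridge.coef_ != 0))
# With a strong penalty, Lasso keeps only a few features; Ridge keeps all of them.
```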
https://towardsdatascience.com/how-to-perform-lasso-and-ridge-regression-in-python-3b3b75541ad8
# Task 2 Evaluation
This notebook contains the evaluation for Task 2 of the TREC Fair Ranking track.
## Setup
We begin by loading necessary libraries:
```
from pathlib import Path
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import gzip
import binpickle
```
Set up progress bar and logging support:
```
from tqdm.auto import tqdm
tqdm.pandas(leave=False)
import sys, logging
logging.basicConfig(level=logging.INFO, stream=sys.stderr)
log = logging.getLogger('task2-eval')
```
Import metric code:
```
import metrics
from trecdata import scan_runs
```
And finally import the metric itself:
```
metric = binpickle.load('task2-eval-metric.bpk')
```
## Importing Data
Let's load the runs now:
```
runs = pd.DataFrame.from_records(row for (task, rows) in scan_runs() if task == 2 for row in rows)
runs
runs.head()
```
We also need to load our topic eval data:
```
topics = pd.read_json('data/eval-topics.json.gz', lines=True)
topics.head()
```
Tier 2 is the top 5 docs of the first 25 rankings. Further, we didn't complete Tier 2 for all topics.
```
t2_topics = topics.loc[topics['max_tier'] >= 2, 'id']
r_top5 = runs['rank'] <= 5
r_first25 = runs['seq_no'] <= 25
r_done = runs['topic_id'].isin(t2_topics)
runs = runs[r_done & r_top5 & r_first25]
runs.info()
```
## Computing Metrics
We are now ready to compute the metric for each (system,topic) pair. Let's go!
```
rank_exp = runs.groupby(['run_name', 'topic_id']).progress_apply(metric)
# rank_exp = rank_awrf.unstack()
rank_exp
```
Now let's average by runs:
```
run_scores = rank_exp.groupby('run_name').mean()
run_scores
```
## Analyzing Scores
What is the distribution of scores?
```
run_scores.describe()
sns.displot(x='EE-L', data=run_scores)
plt.show()
run_scores.sort_values('EE-L', ascending=False)
sns.relplot(x='EE-D', y='EE-R', data=run_scores)
sns.rugplot(x='EE-D', y='EE-R', data=run_scores)
plt.show()
```
## Per-Topic Stats
We need to return per-topic stats to each participant, at least for the score.
```
topic_stats = rank_exp.groupby('topic_id').agg(['mean', 'median', 'min', 'max'])
topic_stats
```
Make final score analysis:
```
topic_range = topic_stats.loc[:, 'EE-L']
topic_range = topic_range.drop(columns=['mean'])
topic_range
```
And now we combine scores with these results to return to participants.
```
ret_dir = Path('results')
for system, runs in rank_exp.groupby('run_name'):
aug = runs.join(topic_range).reset_index().drop(columns=['run_name'])
fn = ret_dir / f'{system}.tsv'
log.info('writing %s', fn)
aug.to_csv(fn, sep='\t', index=False)
```
<a href="https://colab.research.google.com/github/kalz2q/mycolabnotebooks/blob/master/kaggle07import.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Notes
1. This Colab notebook is based on Kaggle's Python tutorial.
1. It is meant to be opened and read in Colab.
1. Original file: ( https://www.kaggle.com/colinmorris/working-with-external-libraries )
# Introduction
This chapter covers imports, libraries and their objects, and operator overloading.
# Imports
The types and functions we have studied so far are built into the language itself.
However, one of Python's strengths is the wealth of high-quality libraries written for it and used in data science and elsewhere.
Some of these are part of the standard library and are available wherever Python runs; the rest do not ship with Python but are easy to add.
Either way, we access a library through an import.
```
import math
print("It's math! It has type {}".format(type(math)))
```
math is a module. A module is just a collection of variables (a namespace). With the dir() function we can list every name defined in math.
```
print(dir(math))
```
```
['__doc__', '__loader__', '__name__', '__package__', '__spec__', 'acos',
'acosh', 'asin', 'asinh', 'atan', 'atan2', 'atanh', 'ceil', 'copysign',
'cos', 'cosh', 'degrees', 'e', 'erf', 'erfc', 'exp', 'expm1', 'fabs',
'factorial', 'floor', 'fmod', 'frexp', 'fsum', 'gamma', 'gcd', 'hypot',
'inf', 'isclose', 'isfinite', 'isinf', 'isnan', 'ldexp', 'lgamma', 'log',
'log10', 'log1p', 'log2', 'modf', 'nan', 'pi', 'pow', 'radians', 'sin',
'sinh', 'sqrt', 'tan', 'tanh', 'tau', 'trunc']
```
These variables are accessed with dot syntax. Some of them are simple values, for example math.pi.
```
print("pi to 4 significant digits = {:.4}".format(math.pi))
```
Most of what the module contains, however, are functions, such as math.log.
```
math.log(32, 2)
```
If you are not sure what math.log does, call help() on it.
```
help(math.log)
```
You can also call help() on the module itself, which gives a high-level description of the module plus documentation for all of its functions and variables.
```
help(math)
```
If you call a module's functions frequently, you can save some typing by giving the module an alias at import time.
```
import math as mt
mt.pi
```
Popular libraries such as `Pandas`, `Numpy`, `Tensorflow`, and `Matplotlib` have conventional aliases. For example:
```
import numpy as np
import pandas as pd
```
Since `as` simply renames the module, the following is equivalent:
```
import math
mt = math
```
You can also make every variable in `math` usable without the `math.` prefix,
so that you can write `pi` instead of `math.pi`.
```
from math import *
print(pi, log(32, 2))
```
Writing `import *` makes every variable in the module directly accessible, with no dotted prefix.
The downside: purists will complain.
And with good reason: doing this carelessly can get you into real trouble.
```
from math import *
from numpy import *
# this will lead to error!!!!
# print(pi, log(32, 2))
```
What happened? It worked before.
Such "star imports" often create strange, hard-to-debug situations.
The problem here is that both `math` and `numpy` define a function named `log`, with different behavior. Because `numpy` was imported second, its `log` overwrites (or "shadows") the one imported from `math`.
A good compromise is to import only the names you need from each module:
```
from math import log, pi
from numpy import asarray
```
# Submodules
A module is a collection of names referring to functions and variables, but a name can also refer to another module. A module inside a module is called a submodule.
```
import numpy
from IPython.display import HTML, display
print("numpy.random is a", type(numpy.random))
print("it contains names such as...",
dir(numpy.random)[-15:] )
```
```
numpy.random is a <class 'module'>
it contains names such as... ['seed', 'set_state', 'shuffle', 'standard_cauchy', 'standard_exponential', 'standard_gamma', 'standard_normal', 'standard_t', 'test', 'triangular', 'uniform', 'vonmises', 'wald', 'weibull', 'zipf']
```
So with `numpy`, calling a function inside the `random` submodule requires two dots:
```
# Roll 10 dice
import numpy
rolls = numpy.random.randint(low=1, high=6, size=10)
rolls
```
Time to graduate from kindergarten and head out into the world.
(Oh the places you'll go, oh the objects you'll see.)
In the lessons so far you have mastered `int`, `float`, `bool`, `list`, `string`, and `dict`. (Really?)
Even so, the story does not end there. As you use different libraries for different jobs, you will notice there are many more types to learn. With the plotting library `matplotlib` you will meet objects representing `Subplot`, `Figure`, `TickMark`, and `Annotation`; with `pandas`, `DataFrame` and `Series`.
This chapter offers a short survival guide for dealing with unfamiliar types.
# Three tools for understanding unfamiliar types
Calling a `numpy` function returns an `array`, a type we have not seen before. Don't worry: three built-in functions can help.
1. type() (tells you what this thing is)
```
import numpy
rolls = numpy.random.randint(low=1, high=6, size=10)
type(rolls)
```
2. dir() (tells you what you can do with it)
```
print(dir(rolls))
```
['T', '__abs__', '__add__', '__and__', '__array__', '__array_finalize__', '__array_function__', '__array_interface__', '__array_prepare__', '__array_priority__', '__array_struct__', '__array_ufunc__', '__array_wrap__', '__bool__', '__class__', '__complex__', '__contains__', '__copy__', '__deepcopy__', '__delattr__', '__delitem__', '__dir__', '__divmod__', '__doc__', '__eq__', '__float__', '__floordiv__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__iadd__', '__iand__', '__ifloordiv__', '__ilshift__', '__imatmul__', '__imod__', '__imul__', '__index__', '__init__', '__init_subclass__', '__int__', '__invert__', '__ior__', '__ipow__', '__irshift__', '__isub__', '__iter__', '__itruediv__', '__ixor__', '__le__', '__len__', '__lshift__', '__lt__', '__matmul__', '__mod__', '__mul__', '__ne__', '__neg__', '__new__', '__or__', '__pos__', '__pow__', '__radd__', '__rand__', '__rdivmod__', '__reduce__', '__reduce_ex__', '__repr__', '__rfloordiv__', '__rlshift__', '__rmatmul__', '__rmod__', '__rmul__', '__ror__', '__rpow__', '__rrshift__', '__rshift__', '__rsub__', '__rtruediv__', '__rxor__', '__setattr__', '__setitem__', '__setstate__', '__sizeof__', '__str__', '__sub__', '__subclasshook__', '__truediv__', '__xor__', 'all', 'any', 'argmax', 'argmin', 'argpartition', 'argsort', 'astype', 'base', 'byteswap', 'choose', 'clip', 'compress', 'conj', 'conjugate', 'copy', 'ctypes', 'cumprod', 'cumsum', 'data', 'diagonal', 'dot', 'dtype', 'dump', 'dumps', 'fill', 'flags', 'flat', 'flatten', 'getfield', 'imag', 'item', 'itemset', 'itemsize', 'max', 'mean', 'min', 'nbytes', 'ndim', 'newbyteorder', 'nonzero', 'partition', 'prod', 'ptp', 'put', 'ravel', 'real', 'repeat', 'reshape', 'resize', 'round', 'searchsorted', 'setfield', 'setflags', 'shape', 'size', 'sort', 'squeeze', 'std', 'strides', 'sum', 'swapaxes', 'take', 'tobytes', 'tofile', 'tolist', 'tostring', 'trace', 'transpose', 'var', 'view']
```
# Which of these could compute an average? 'mean' looks promising -- let's try it.
rolls.mean()
# A type we already know is easier to work with, so let's try 'tolist'.
rolls.tolist()
```
```
[4, 1, 2, 2, 1, 3, 1, 1, 5, 2]
```
3. help() (tells you more)
```
# The "ravel" attribute looks interesting -- any relation to the composer?
help(rolls.ravel)
# Okay, just tell me everything there is to know about numpy.ndarray
# (Click the "output" button to see the novel-length output)
help(rolls)
```
Of course, the online documentation is always available too.
# Operator overloading
What is the value of the following expression?
```
# [3, 4, 1, 2, 2, 1] + 10
```
What a silly question. Of course it is an error.
But what about this?
```
rolls + 10
```
You might assume Python's core syntax (`+`, `<`, `in`, `==`, the square brackets for indexing and slicing) is rigidly defined, but in fact Python takes a laissez-faire approach: when you define a new type, you get to decide what adding, or being equal, means for it.
The designers of `list` decided that adding a number to a list is not allowed; the designers of `numpy` decided that adding a number to an `array` is.
Next, let's see how some Python operators behave surprisingly (or at least differently from `list`) on `numpy` arrays.
```
# At which indices are the dice less than or equal to 3?
import numpy
rolls = numpy.random.randint(low=1, high=6, size=10)
rolls <= 3
```
### Creating a 2-dimensional array
```
xlist = [[1,2,3],[2,4,6],]
# Create a 2-dimensional array
x = numpy.asarray(xlist)
print("xlist = {}\nx =\n{}".format(xlist, x))
# Get the last element of the second row of our numpy array
x[1,-1]
# Get the last element of the second sublist of our nested list?
# xlist[1,-1]
```
```
TypeError: list indices must be integers or slices, not tuple
```
numpy's ndarray type is specialized for working with multi-dimensional data, so it defines its own logic for indexing, allowing us to index by a tuple to specify the index at each dimension.
### When does 1 + 1 not equal 2?
There is a library called `tensorflow`, widely used in deep learning, which makes extensive use of operator overloading.
```
import tensorflow as tf
# Create two constants, each with value 1
a = tf.constant(1)
b = tf.constant(1)
# Add them together to get...
a + b
```
a + b is not 2.
According to the `tensorflow` documentation, a Tensor is:
> a symbolic handle to one of the outputs of an Operation.
> It does not hold the values of that operation's output,
> but instead provides a means of computing those values
> in a TensorFlow tf.Session.
Even if that quote makes little sense to you, the important takeaway is that operator overloading sometimes happens in ways that are far from obvious, almost magical.
Understanding how Python's operators work on `int`, `string`, and `list` does not automatically tell you what they mean on a `tensorflow Tensor`, a `numpy ndarray`, or a `pandas DataFrame`. But once you have played with `DataFrame`s a little, expressions like the following start to make intuitive sense:
```
# Get the rows with population over 1m in South America
# df[(df['population'] > 10**6) & (df['continent'] == 'South America')]
```
But how does this work? The example above uses about five different overloaded operators. What is each one doing? Understanding this helps when something goes wrong.
### How operator overloading works
Have you noticed, when calling `help()` or `dir()`, the many names with two leading and two trailing underscores?
```
print(dir(list))
```
```
['__add__', '__class__', '__contains__', '__delattr__', '__delitem__',
'__dir__', '__doc__', '__eq__', '__format__', '__ge__',
'__getattribute__', '__getitem__', '__gt__', '__hash__', '__iadd__',
'__imul__', '__init__', '__init_subclass__', '__iter__', '__le__',
'__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__',
'__reduce_ex__', '__repr__', '__reversed__', '__rmul__', '__setattr__',
'__setitem__', '__sizeof__', '__str__', '__subclasshook__', 'append',
'clear', 'copy', 'count', 'extend', 'index', 'insert', 'pop', 'remove',
'reverse', 'sort']
```
These names are directly tied to operator overloading.
When Python programmers define operators for their own types, they write methods with double-underscore names such as `__lt__`, `__setattr__`, or `__contains__`. Names in this double-underscore format have a special meaning to Python.
For example, the expression `x in [1, 2, 3]` actually calls `__contains__` behind the scenes, i.e. it is equivalent to `[1, 2, 3].__contains__(x)`.
If you want to learn more, the official Python documentation describes many more of these underscore methods. We will not cover them in this lesson.
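As a concrete illustration, a toy class can overload `in` and `<` by defining those double-underscore methods; this `Hand` class is illustrative only:

```python
class Hand:
    def __init__(self, cards):
        self.cards = cards

    def __contains__(self, card):   # enables: card in hand
        return card in self.cards

    def __lt__(self, other):        # enables: hand1 < hand2
        return len(self.cards) < len(other.cards)

h1 = Hand(['K', 'A'])
h2 = Hand(['7', '10', 'A'])
print('A' in h1)   # True  -- actually calls h1.__contains__('A')
print(h1 < h2)     # True  -- actually calls h1.__lt__(h2)
```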
# Exercises
### Exercise 1. A `matplotlib` problem
Jimmy tried out his `estimate_average_slot_payout` function and noticed that, played long enough, he would beat the casino. He started with $200, played 500 rounds, and recorded the balance after each pull in a list. Using Python's `matplotlib`, he plotted the balance over time.
```
# Import the jimmy_slots submodule
# from learntools.python import jimmy_slots
# Call the get_graph() function to get Jimmy's graph
# graph = jimmy_slots.get_graph()
# graph
```
Lately the results have not been good, and he wants to tweet them with an emoji. As it stands, though, the chart would confuse his followers.
He has asked you to make the following changes:
> Add the title "Results of 500 slot machine pulls"
> Make the y-axis start at 0.
> Add the label "Balance" to the y-axis
Checking with `type(graph)`, you find Jimmy's graph has an unfamiliar type: `matplotlib.axes._subplots.AxesSubplot`. Looking through `dir(graph)`, you spot three promising methods: `.set_title()`, `.set_ylim()`, and `.set_ylabel()`.
Use these methods to complete the `prettify_graph` function.
Look up each method with `help()`.
```
def prettify_graph(graph):
"""Modify the given graph according to Jimmy's requests: add a title, make the y-axis
start at 0, label the y-axis. (And, if you're feeling ambitious, format the tick marks
as dollar amounts using the "$" symbol.)
"""
graph.set_title("Results of 500 slot machine pulls")
# Complete steps 2 and 3 here
pass
# graph = jimmy_slots.get_graph()
# prettify_graph(graph)
# graph
```
Bonus problem: format the y-axis labels as dollar amounts, so they read $200 rather than 200.
If you cannot find the right method, investigate with `dir(graph)` and `help(graph)`.
### Exercise 2. (Very hard) A Mario Kart problem
Luigi is analyzing best lap times on a Mario Kart circuit. His data is a list of dictionaries:
```
[
{'name': 'Peach', 'items': ['green shell', 'banana', 'green shell',], 'finish': 3},
{'name': 'Bowser', 'items': ['green shell',], 'finish': 1},
# Sometimes the racer's name wasn't recorded
{'name': None, 'items': ['mushroom',], 'finish': 2},
{'name': 'Toad', 'items': ['green shell', 'mushroom'], 'finish': 1},
]
```
'items' is the list of power-up items the racer picked up during the race, and 'finish' is the finishing position: 1 means first place, 3 means third.
He wrote a function that takes such a list and, for each item, counts how many first-place racers picked it up.
```
def best_items(racers):
"""Given a list of racer dictionaries, return a dictionary mapping items to the number
of times those items were picked up by racers who finished in first place.
"""
winner_item_counts = {}
for i in range(len(racers)):
# The i'th racer dictionary
racer = racers[i]
# We're only interested in racers who finished in first
if racer['finish'] == 1:
for i in racer['items']:
# Add one to the count for this item (adding it to the dict if necessary)
if i not in winner_item_counts:
winner_item_counts[i] = 0
winner_item_counts[i] += 1
# Data quality issues :/ Print a warning about racers with no name set. We'll take care of it later.
if racer['name'] is None:
print("WARNING: Encountered racer with unknown name on iteration {}/{} (racer = {})".format(
i+1, len(racers), racer['name'])
)
return winner_item_counts
```
He built a sample list and tried the function, and it appears to work correctly.
```
sample = [
{'name': 'Peach', 'items': ['green shell', 'banana', 'green shell',], 'finish': 3},
{'name': 'Bowser', 'items': ['green shell',], 'finish': 1},
{'name': None, 'items': ['mushroom',], 'finish': 2},
{'name': 'Toad', 'items': ['green shell', 'mushroom'], 'finish': 1},
]
best_items(sample)
```
However, when he ran it against the real database, it crashed with a `TypeError`.
Why?
Run the next code cell to see the error Luigi hit. Once you spot the bug, fix it so the error no longer occurs.
Hint: Luigi's bug is the same kind of name-shadowing bug we saw with star imports in the imports section.
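To make that class of bug concrete, here is a minimal, self-contained demonstration of name shadowing (using toy values, not Luigi's data):

```python
results = []
for i in range(2):
    for i in ['x', 'y']:   # the inner loop silently overwrites the outer i
        pass
    results.append(i)
print(results)  # ['y', 'y'] -- the outer loop index is gone
```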
```
# Import luigi's full dataset of race data
# from learntools.python.luigi_analysis import full_dataset
# fix me
def best_items (racers):
winner_item_counts = {}
for i in range(len(racers)):
# The i'th racer dictionary
racer = racers[i]
# We're only interested in racers who finished in first
if racer['finish'] == 1:
for i in racer['items']:
# Add one to the count for this item (adding it to the dict if necessary)
if i not in winner_item_counts:
winner_item_counts[i] = 0
winner_item_counts[i] += 1
# Data quality issues :/ Print a warning about racers with no name set. We'll take care of it later.
if racer['name'] is None:
print("WARNING: Encountered racer with unknown name on iteration {}/{} (racer = {})".format(
i+1, len(racers), racer['name'])
)
return winner_item_counts
# Try analyzing the imported full dataset
best_items(full_dataset)
```
### Exercise 3. Blackjack
Suppose we build a type representing a blackjack hand. It might be convenient to overload comparison operators such as `>` so we can tell which of two hands is stronger.
```
hand1 = BlackjackHand(['K', 'A'])
hand2 = BlackjackHand(['7', '10', 'A'])
hand1 > hand2
# True
```
Defining custom classes is beyond the scope of this lesson, so we will not cover everything here. But suppose we had defined a `BlackjackHand` class; this exercise asks you to write a function built on the same idea.
(In a class, a method implementing `>` would be written using the `__gt__` magic method.)
```
def blackjack_hand_greater_than(hand_1, hand_2):
"""
Return True if hand_1 beats hand_2, and False otherwise.
In order for hand_1 to beat hand_2 the following must be true:
- The total of hand_1 must not exceed 21
- The total of hand_1 must exceed the total of hand_2 OR hand_2's total must exceed 21
    Hands are represented as a list of cards. Each card is represented by a string.
    When adding up a hand's total, cards with numbers count for that many points. Face
    cards ('J', 'Q', and 'K') are worth 10 points. 'A' can count for 1 or 11.
    When determining a hand's total, you should try to count aces in the way that
    maximizes the hand's total without going over 21. e.g. the total of ['A', 'A', '9'] is 21,
    the total of ['A', 'A', '9', '3'] is 14.
    Examples:
    >>> blackjack_hand_greater_than(['K'], ['3', '4'])
    True
    >>> blackjack_hand_greater_than(['K'], ['10'])
    False
    >>> blackjack_hand_greater_than(['K', 'K', '2'], ['3'])
    False
    """
    pass
```
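One possible sketch of the hand-total logic (one of many valid approaches, not the official solution): count every ace as 11 first, then demote aces to 1 while the total exceeds 21.

```python
def hand_total(hand):
    """Best total for a hand, counting aces as 11 where possible."""
    total = 0
    aces = 0
    for card in hand:
        if card == 'A':
            aces += 1
            total += 11
        elif card in ('J', 'Q', 'K'):
            total += 10
        else:
            total += int(card)
    while total > 21 and aces:  # demote an ace from 11 to 1
        total -= 10
        aces -= 1
    return total

print(hand_total(['A', 'A', '9']))       # 21
print(hand_total(['A', 'A', '9', '3']))  # 14
```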
```
from urllib.request import urlopen
import json
import zipfile
import datetime
import pandas as pd

try:
    import zlib
    compression = zipfile.ZIP_DEFLATED
except ImportError:
    compression = zipfile.ZIP_STORED

# HERE Maps API credentials
appcode = "5socj0x3K2SWWpkQUBLaYA"
appID = "gnLbXQVI5RzAIoGTzF9G"

# (town, start coordinates, isoline ranges in seconds)
towns = [
    ('csikszereda', '46.362097,25.802032', '600,1200,1800,3600'),
    ('kolozsvar', '46.768723,23.589792', '600,1200,1800,3600'),
    ('brasso', '45.652488,25.608454', '600,1200,1800,3600'),
    ('udvarhely', '46.304976,25.292829', '60,180,300,600'),
]

for town, coordinates, ranges in towns:
    yeardata = {}
    t = '2017-12-31'
    for _ in range(28):  # 28 consecutive days starting 2018-01-01
        t = str(pd.to_datetime(t) + datetime.timedelta(days=1))[:10]
        print(t)
        if t in yeardata:
            continue
        timedata = {}
        # half-hour departure slots from 06:00 to 22:00
        for i in range(33):
            hour = '{:02d}:{}'.format(i // 2 + 6, '00' if i % 2 == 0 else '30')
            timestamp = t + "T" + hour + ":00Z02"
            url = ("https://isoline.route.cit.api.here.com/routing/7.2/calculateisoline.json"
                   "?app_id=" + appID + "&app_code=" + appcode +
                   "&mode=shortest;car;traffic:enabled&start=geo!" + coordinates +
                   "&maxpoints=500&departure=" + timestamp +
                   "&range=" + ranges + "&rangetype=time&jsonAttributes=41")
            timedata[hour] = json.load(urlopen(url))
        yeardata[t] = timedata
    # write the collected data, zip it, and also dump one file per day
    with open('data.json', 'w') as f:
        f.write(json.dumps(yeardata))
    with zipfile.ZipFile('../' + town + '/isochrone/data.zip', mode='w') as zf:
        zf.write('data.json', 'data.json', compress_type=compression)
    for day in yeardata:
        with open('../' + town + '/isochrone/data' + day + '.json', 'w') as f:
            f.write(json.dumps(yeardata[day]))
```
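A quick check of the departure-slot labels the loop builds: index `i` from 0 to 32 maps to half-hour departures between 06:00 and 22:00.

```python
# Reproduce the departure-hour labels generated inside the loop above.
slots = ['{:02d}:{}'.format(i // 2 + 6, '00' if i % 2 == 0 else '30')
         for i in range(33)]
print(slots[0], slots[1], slots[-1])  # 06:00 06:30 22:00
```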
For MS training we have three datasets: train, validation, and holdout.
```
import numpy as np
import pandas as pd
import nibabel as nib
from scipy import interp
from sklearn.utils import shuffle
from sklearn.model_selection import GroupShuffleSplit
from sklearn.metrics import confusion_matrix, roc_auc_score, roc_curve, auc
from sklearn.model_selection import KFold
from sklearn.svm import SVC
import matplotlib.pyplot as plt
import os
import time
import h5py
from config import *
from utils import specificity, sensitivity, balanced_accuracy, shuffle_data, normalize_float
# Start timing
start_time = time.time()
zero_one_normalize = False
dtype = np.float32
# load hdf5 files and extract columns
train_h5 = h5py.File('/analysis/share/Ritter/MS/CIS/train_dataset_FLAIR_lesions_filled.h5', 'r')
holdout_h5 = h5py.File('/analysis/share/Ritter/MS/CIS/holdout_dataset_FLAIR_lesions_filled.h5', 'r')
# loading only labels from original file
y_train = train_h5['y']
y_holdout = holdout_h5['y']
train_lesions_h5 = h5py.File('/analysis/share/Ritter/MS/CIS/train_dataset_lesions.h5', 'r')
holdout_lesions_h5 = h5py.File('/analysis/share/Ritter/MS/CIS/holdout_dataset_lesions.h5', 'r')
lesion_masks_train = train_lesions_h5['masks']
lesion_masks_holdout = holdout_lesions_h5['masks']
```
## Convert to lesion volume
```
# convert data to numpy arrays using lesions masks
X_train = np.array(lesion_masks_train, dtype=dtype)
y_train = np.array(y_train)
X_holdout = np.array(lesion_masks_holdout, dtype=dtype)
y_holdout = np.array(y_holdout)
print("Total dataset length: {}".format(len(y_train)))
print("Number of healthy controls: {}".format(len(y_train[y_train==0.])))
print("Number of MS patients: {}".format(len(y_train[y_train==1.])))
# sum over all dimensions
X_train = np.sum(X_train, axis=(1, 2, 3)).reshape(-1, 1)
X_holdout = np.sum(X_holdout, axis=(1, 2, 3)).reshape(-1, 1)
_, bins, _ = plt.hist(X_train[y_train==1.], bins=20, alpha=0.5, range=[0, 8000])
_ = plt.hist(X_train[y_train==0.], bins=bins, alpha=0.5, range=[0, 8000])
plt.legend(["MS", "HC"])
```
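The lesion-volume feature computed above is simply the voxel count of each binary mask. On a toy example (hypothetical shapes, not the real data):

```python
import numpy as np

# two "subjects", each with a tiny 2x2x2 binary lesion mask
masks = np.zeros((2, 2, 2, 2), dtype=np.float32)
masks[0, 0, 0, 0] = 1.0   # subject 0: one lesion voxel
masks[1, :, :, 0] = 1.0   # subject 1: four lesion voxels

# sum over the three spatial axes -> one scalar feature per subject
volumes = np.sum(masks, axis=(1, 2, 3)).reshape(-1, 1)
print(volumes.ravel())  # [1. 4.]
```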
## Normalization
```
def normalize(train, test):
# get training set moments
mean = np.mean(train)
std = np.std(train)
# apply on train and test
train = (train - mean)/std
test = (test - mean)/std
return train, test
```
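Note that the train-set moments are deliberately applied to both splits, so no information from the test set leaks into the scaling. A quick check of this behaviour (standalone copy of the function above, for illustration):

```python
import numpy as np

def normalize(train, test):
    # z-score both splits using the *training* mean and std only
    mean = np.mean(train)
    std = np.std(train)
    return (train - mean) / std, (test - mean) / std

train = np.array([1.0, 2.0, 3.0, 4.0])
test = np.array([2.5])
train_n, test_n = normalize(train, test)
print(train_n.mean(), train_n.std())  # ~0.0, 1.0
```

Since 2.5 is exactly the training mean, the test point maps to 0 after scaling.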
## Training
```
from sklearn.model_selection import GridSearchCV
from sklearn import preprocessing
from sklearn.pipeline import make_pipeline
def svc_param_selection(X, y, n_folds):
Cs = [0.001, 0.01, 0.1, 1, 10]
kernels = ['linear', 'rbf']
param_grid = {'svc__C': Cs,
'svc__kernel': kernels}
# use standard scaler for preprocessing
scaler = preprocessing.StandardScaler()
pipeline = make_pipeline(scaler, SVC(gamma='auto'))
grid_search = GridSearchCV(pipeline, param_grid, cv=n_folds, n_jobs=10)
grid_search.fit(X, y)
return grid_search.best_params_, grid_search.cv_results_
kf = KFold(n_splits=7)
fold = 0
best_params = []
train_balanced_accuracies = []
train_sensitivities = []
train_specificities = []
val_balanced_accuracies = []
val_sensitivities = []
val_specificities = []
auc_scores = []
tprs = []
mean_fpr = np.linspace(0, 1, 100)
# shuffle the data once
X_train, y_train = shuffle_data(X_train, y_train)
# nested cross-validation
for train_idx, test_idx in kf.split(X_train):
print("Fold %i" %fold)
fold += 1
# Start inner cross-validation
best_param, cv_result = svc_param_selection(
X_train[train_idx],
y_train[train_idx],
n_folds=5)
print("Best paramter value: {}".format(best_param))
model = SVC(kernel=best_param["svc__kernel"], C=best_param["svc__C"])
model.fit(X_train[train_idx], y_train[train_idx])
# training set results
train_pred = model.predict(X_train[train_idx])
train_bal_acc = balanced_accuracy(y_train[train_idx], train_pred)
train_sens = sensitivity(y_train[train_idx], train_pred)
train_spec = specificity(y_train[train_idx], train_pred)
# val set results
val_pred = model.predict(X_train[test_idx])
val_scores = model.decision_function(X_train[test_idx])
val_bal_acc = balanced_accuracy(y_train[test_idx], val_pred)
val_sens = sensitivity(y_train[test_idx], val_pred)
val_spec = specificity(y_train[test_idx], val_pred)
roc_auc = roc_auc_score(y_train[test_idx], val_scores)
fpr, tpr, thresholds = roc_curve(y_train[test_idx], val_scores)
# Store results
best_params.append(best_param)
train_balanced_accuracies.append(train_bal_acc)
train_sensitivities.append(train_sens)
train_specificities.append(train_spec)
val_balanced_accuracies.append(val_bal_acc)
val_sensitivities.append(val_sens)
val_specificities.append(val_spec)
auc_scores.append(roc_auc)
# interpolate with diagonal to get comparable results
tprs.append(interp(mean_fpr, fpr, tpr))
tprs[-1][0] = 0.0 # correct lowest value after interpolation
# Print results
print("######## Training set results ########")
print("Balanced accuracy {:.2f} %".format(train_bal_acc*100))
print("Sensitivity {:.2f} %".format(train_sens*100))
print("Specificity {:.2f} %".format(train_spec*100))
print("######## Validation set results ########")
print("Balanced accuracy {:.2f} %".format(val_bal_acc*100))
print("Sensitivity {:.2f} %".format(val_sens*100))
print("Specificity {:.2f} %".format(val_spec*100))
print("Area Under the Receiver Operating Curve (ROC AUC score) {:.2f}".format(roc_auc*100))
plt.plot(fpr, tpr, lw=1, alpha=0.3,
label='ROC fold %d (AUC = %0.2f)' % (fold, roc_auc))
training_time = time.time() - start_time
print("Training Time: {}h:{}m:{}s".format(
training_time//3600, (training_time//60)%60, training_time%60))
# Print results
print("######## Final results ########")
print("Validation balanced accuracies: \n {}".format(val_balanced_accuracies))
print("Validation balanced accuracies mean: {}".format(np.mean(val_balanced_accuracies)))
print("Validation final sensitivities: \n {}".format(val_sensitivities))
print("Validation final sensitivities' mean: {}".format(np.mean(val_sensitivities)))
print("Validation final specificities: \n {}".format(val_specificities))
print("Validation final specificities' mean: {}".format(np.mean(val_specificities)))
print("Mean ROC AUC score {:.2f}".format(np.mean(auc_scores)*100))
# Plot ROC Curves
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',
label='Chance', alpha=.8)
mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0 # correct max value after interpolation and mean
mean_auc = auc(mean_fpr, mean_tpr)
#assert(mean_auc == np.mean(auc_scores))
std_auc = np.std(auc_scores)
plt.plot(mean_fpr, mean_tpr, color='b',
label=r'Mean ROC (AUC = %0.2f $\pm$ %0.2f)' % (mean_auc, std_auc),
lw=2, alpha=.8)
std_tpr = np.std(tprs, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
plt.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2,
label=r'$\pm$ 1 std. dev.')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic (validation folds)')
plt.legend(loc="lower right")
plt.show()
training_time = time.time() - start_time
def majority_vote(best_params):
"""
Find the most often used combination
of parameters.
"""
assert(len(best_params)>=1)
counter = {}
# count unique value list
for i in range(len(best_params)):
# turn values into key
new_key = ""
for x in list(best_params[i].values()):
new_key = new_key + str(x) + "_"
if new_key in counter.keys():
counter[new_key] += 1
else:
counter[new_key] = 1
# select most frequent value list
majority_param = max(counter, key=lambda key: counter[key])
# reformat to list
majority_param = majority_param[:-1].split("_")
# reformat to dictionary
result = {}
for key, value in zip(best_params[0].keys(), majority_param):
result[key] = value
return result
majority_param = majority_vote(best_params)
print(majority_param)
```
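The hand-rolled counting in `majority_vote` above can be expressed more compactly with `collections.Counter`; the following is an illustrative sketch of an equivalent approach, not the notebook's own function:

```python
from collections import Counter

def majority_vote(best_params):
    """Return the parameter dict chosen most often across folds."""
    # dicts are unhashable, so count frozen (key, value) tuples instead
    counts = Counter(tuple(sorted(p.items())) for p in best_params)
    winner, _ = counts.most_common(1)[0]
    return dict(winner)

folds = [{'svc__C': 1, 'svc__kernel': 'rbf'},
         {'svc__C': 1, 'svc__kernel': 'rbf'},
         {'svc__C': 10, 'svc__kernel': 'linear'}]
print(majority_vote(folds))  # {'svc__C': 1, 'svc__kernel': 'rbf'}
```

Unlike the string-joined version, this variant also preserves the original value types (no `float(...)` cast needed afterwards).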
# Evaluation
Train on the entire training set with the best parameters from above and test on the holdout dataset for final performance.
```
# training args
kernel = majority_param["svc__kernel"]
C = float(majority_param["svc__C"])
model = SVC(kernel=kernel, C=C)
num_trials = 10
train_balanced_accuracies = []
train_sensitivities = []
train_specificities = []
holdout_balanced_accuracies = []
holdout_sensitivities = []
holdout_specificities = []
auc_scores = []
tprs = []
mean_fpr = np.linspace(0, 1, 100)
for i in range(num_trials):
print("Trial %i" %i)
# shuffle the data each time
X_train, y_train = shuffle_data(X_train, y_train)
# normalize
X_train, X_holdout = normalize(X_train, X_holdout)
# Start training
model.fit(X_train, y_train)
# training set results
train_pred = model.predict(X_train)
train_bal_acc = balanced_accuracy(y_train, train_pred)
train_sens = sensitivity(y_train, train_pred)
train_spec = specificity(y_train, train_pred)
# holdout set results
holdout_pred = model.predict(X_holdout)
holdout_scores = model.decision_function(X_holdout)
holdout_bal_acc = balanced_accuracy(y_holdout, holdout_pred)
holdout_sens = sensitivity(y_holdout, holdout_pred)
holdout_spec = specificity(y_holdout, holdout_pred)
roc_auc = roc_auc_score(y_holdout, holdout_scores)
fpr, tpr, thresholds = roc_curve(y_holdout, holdout_scores)
# Store results
train_balanced_accuracies.append(train_bal_acc)
train_sensitivities.append(train_sens)
train_specificities.append(train_spec)
holdout_balanced_accuracies.append(holdout_bal_acc)
holdout_sensitivities.append(holdout_sens)
holdout_specificities.append(holdout_spec)
auc_scores.append(roc_auc)
# interpolate with diagonal to get comparable results
tprs.append(interp(mean_fpr, fpr, tpr))
tprs[-1][0] = 0.0 # correct lowest value after interpolation
# Print results
print("######## Training set results ########")
print("Balanced accuracy {:.2f} %".format(train_bal_acc*100))
print("Sensitivity {:.2f} %".format(train_sens*100))
print("Specificity {:.2f} %".format(train_spec*100))
print("######## Holdout set results ########")
print("Balanced accuracy {:.2f} %".format(holdout_bal_acc*100))
print("Sensitivity {:.2f} %".format(holdout_sens*100))
print("Specificity {:.2f} %".format(holdout_spec*100))
print("Area Under the Receiver Operating Curve (ROC AUC score) {:.2f}".format(roc_auc*100))
plt.plot(fpr, tpr, lw=1, alpha=0.3,
label='ROC trial %d (AUC = %0.2f)' % (i, roc_auc))
training_time = time.time() - start_time
print("Training Time: {}h:{}m:{}s".format(
training_time//3600, (training_time//60)%60, training_time%60))
# Print results
print("######## Final results ########")
print("Holdout balanced accuracies: \n {}".format(holdout_balanced_accuracies))
print("Holdout balanced accuracies mean: {}".format(np.mean(holdout_balanced_accuracies)))
print("Holdout final sensitivities: \n {}".format(holdout_sensitivities))
print("Holdout final sensitivities' mean: {}".format(np.mean(holdout_sensitivities)))
print("Holdout final specificities: \n {}".format(holdout_specificities))
print("Holdout final specificities' mean: {}".format(np.mean(holdout_specificities)))
print("Mean ROC AUC score {:.2f}".format(np.mean(auc_scores)*100))
# Plot ROC Curves
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',
label='Chance', alpha=.8)
mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0 # correct max value after interpolation and mean
mean_auc = auc(mean_fpr, mean_tpr)
#assert(mean_auc == np.mean(auc_scores))
std_auc = np.std(auc_scores)
plt.plot(mean_fpr, mean_tpr, color='b',
label=r'Mean ROC (AUC = %0.2f $\pm$ %0.2f)' % (mean_auc, std_auc),
lw=2, alpha=.8)
std_tpr = np.std(tprs, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
plt.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2,
label=r'$\pm$ 1 std. dev.')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic (holdout trials)')
plt.legend(loc="lower right")
plt.show()
total_time = time.time() - start_time
print("Training Time: {}h:{}m:{}s".format(
training_time//3600, (training_time//60)%60, training_time%60))
print("Total time elapsed: {}h:{}m:{}s".format(
total_time//3600, (total_time//60)%60, total_time%60))
quit()
```
# What is Python?
- [How do computers work?](#How-do-computers-work?)
- [Python: Stats, strengths and weaknesses](#Python:-Stats,-strengths-and-weaknesses)
- [Python: Past, present and future](#Python:-Past,-present-and-future)
- [The outside of a pythonista](#The-outside-of-a-pythonista)
- [MAKE PYTHON WORK AGAIN!](#MAKE-PYTHON-WORK-AGAIN!)
Just like a human language, a computer language is a carrier of a certain culture, together with a set of values. A popular source describing Python is the Zen of Python, a collection of 19 aphorisms similar in style to a Taoist text, most of which are addressed to programmers, but some are easy for anyone to understand:
```
import this
```
**Python matters because it is common.**
- The [TIOBE index of language popularity](http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html) consistently ranks it at the very top, just after the infrastructure languages. It is popular with both the data science industry and academia, which translates into good support.
**If you are not a professional programmer, Python is a language designed for you.** Here is what Guido van Rossum, the creator of Python, wrote in his funding submission to DARPA, titled [Computer Programming for Everybody](https://www.python.org/doc/essays/cp4e/) back in 1999.
> "...while many people nowadays use a computer, few of them are computer programmers.
Non-programmers aren't really "empowered" in how they can use their computer: they are confined
to using applications in ways that "programmers" have determined for them. One doesn't need to be
a visionary to see the limitations here."
So Python was conceived to be plain and simple, yet to allow you to do everything a complex language would. Many languages have tried to open up to a wider audience, but none of them had this goal as the primary founding principle. The result is that today Python has the most vibrant developer community of all languages, and that fact translates into good libraries and expertise.
**The course I am holding fits the Python creed well.**
- It is addressed to anyone.
- It is open source.
- It is short and simple, yet it introduces the participants to subjects that would otherwise take years to master.
- I could probably hold this course in more than ten languages, but only in Python can I cover all the subjects in three days.
## Python: Stats, strengths and weaknesses
Strengths:
- Easy at first
- Has a Zen poem
- Everyone uses it
- You can do anything with Python
- ..?
- cloud computing: it competes with Java for supremacy
- deep learning: unmatched
- statistical modeling: behind R
- bioinformatics: ahead of R in newer fields, such as single cell, spatial transcriptomics
- industrial quality: Python is now embraced by many industries
- huge job market: language popularity means plenty of jobs but small career edge.
Weaknesses:
- Hard to maintain a large codebase, compared with C++/Java. Although it did bring us Mercurial!
- Memory hog, relatively slow speed
- Weakness with parallel libraries (less relevant since the advent of distributed computing)
- Does not run natively in the browser (it was tried, but lost to .js)
- ..?
```
import sys
i = 1
sys.getsizeof(i)  # in bytes
```
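Python's `int` is an arbitrary-precision object, which is part of why it uses more memory than a raw machine word; larger values take more bytes:

```python
import sys

small = sys.getsizeof(1)       # typically 28 bytes on 64-bit CPython
big = sys.getsizeof(2 ** 100)  # arbitrary-precision ints grow with magnitude
print(small, big)
```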
## How do computers work?
Discuss:
- Computer architecture
- Grid computing
- Cloud computing
- Parallel computing: multiprocessing, multithreading
- Speed considerations
- Price considerations
**The PC**
- CPU (Central Processing Unit):
- good at iterative computing, sequential operations, reached its zenith (constant price, long use)
- has 8 - 32 cores
- heats up, power hog
- GPU (Graphics Processing Unit):
- good at massively parallel computing (has thousands of small cores) such as graphics and linear algebra (ML)
- new (gets phased out in a couple of years, new models always very expensive)
- new (software is lacking, see CUDA)
- cloud computing specialization: ASIC (application-specific integrated circuit), TPU (tensor processing unit)
- HDD (hard drive):
- newer and better technologies exist (SSD or Solid State Drive, basically an extended RAM)
- SSD have much better random data access than HDD
- costly, data storage is a big problem in biology
- slow: the processors do not access HDD directly, but via RAM
- RAM (Random Access Memory)
- has the fastest data access (try to keep all your running program data in memory if possible)
- is limited (a few hundreds GB in special architectures) because it can burn the CPU
- Broadband
- much storage now happens on clouds
- data is streamed rather than downloaded
- data is real time processed
```
from IPython.display import Image
Image(url= "../img/PC.png", width=400, height=400)
```
**Grid distributed computing**
- a distributed system with non-interactive nodes
- example: IBM's HPC clusters
- In Sweden, Uppmax and SNIC mostly run grids, but this is rarely the case elsewhere
- grids are an old technology, the cloud is a new one
- most performant bioinformatics or ML workflows don't work well on grids
- compute nodes and storage nodes, connected by a network (latency matters)
- simple for a small institution to maintain
- expensive! (equipment + personnel + power supply)
**Cloud distributed computing**
- a distributed system with configurable node interaction
- examples public clouds: Amazon AWS, Google Cloud, Azure (Microsoft)
- public clouds can scale: you can run a task on a massive scale with a click of a credit card
- public clouds are cheap! (the equipment is at the North Pole, the personnel and services are highly specialized)
- private clouds are expensive and do not scale: hard to maintain, hard to recruit personnel
- security risks (the EU considers public clouds non-compliant with GDPR for handling patient data)
- this is based on law, technically it has pros and cons
```
Image(url= "../img/distributed.png", width=400, height=400)
```
## Python: Past, present and future
#### A short history
(All this happened before Python)
- The first programming languages had a concise syntax and were able to control the computer at a very low level.
- example: C, a language that allows you to directly control the memory and registers that the processor is using.
- In time, as software accumulated and the IT infrastructure evolved, languages became specialized:
- C++, Java controlling the core infrastructure sector
- niche languages such as Matlab, R, Perl for specialized tasks
- Perl was the first popular general-purpose scripting language.
- Perl had an intentionally obfuscated syntax - this is because it was primarily addressed to UNIX system programmers, who love syntax puzzles and text obfuscation. Many sequencing libraries were developed in or integrated with Perl, BioPerl being a notable example.
- Also driven by the Internet, JavaScript took early dominance of the browsers and was never really challenged.
- R also developed as an open-source alternative to SAS, a proprietary language used in statistics.
- Many proprietary languages also developed, such as Matlab.
**About languages**
- Languages became more distant from the hardware, being interpreted by an engine that lies between the operating system and the program itself.
- Languages that are ready to run the moment writing stops are commonly called scripting languages.
- From the point of view of how they are run, some scripting languages are interpreted into machine instructions by another program; others are converted at runtime into code that is closer to the machine, which is called just-in-time compilation.
- Another important change occurred in the way data structures are managed, with some languages allowing the type of an object to be verified and changed during runtime. These are called dynamic languages.
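Python itself is such a dynamic language: the type behind a name can be inspected and rebound at runtime.

```python
x = 1
print(type(x))             # <class 'int'>
x = "one"                  # the same name now refers to a str
print(type(x))             # <class 'str'>
print(isinstance(x, str))  # True
```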
##### Rise of the Python
- It is difficult to explain why scripting languages became so ubiquitous. It has to do with lack of time and with safety: in C it is relatively easy to make mistakes that can damage hardware, it takes more time to do something if you are not skilled, and you write ugly code if you are not talented.
- When Python started, Perl dominated text processing, Matlab dominated engineering, R dominated open-source statistical computing and JavaScript dominated the web. But Python was addressed to everyone, and within a decade it became the number one scripting language in terms of popularity.
##### Future
- older languages are now including Python features,
- newer languages are being developed that grew out of Python concepts and will ultimately compete for broad audiences,
- it is common for non-professional programmers to learn and use multiple languages,
- Python is arguably the best introduction to programming at this moment in time, which is why most universities and schools are including it as the main tool in their curricula.
### Also good to know, outside Python, as a data scientist
As a data scientist working in biology, or as a biologist working with data science, inviting Python to a party brings a few friends along with it, which I recommend you get to know in time, or maybe some of them are already familiar? The mistake of the beginner is to think he knows programming after a three-day course. The following list should not discourage you: in three days you will already be able to do what programmers spend most of their time on, and maybe free some of them for more important work, or put them out of work entirely.
- R. It is hard to extract your data if you avoid R. Many Python fans complain about R's byzantine syntax, or quote what R experts say: that R is a platform with a language rather than the opposite. This is all true of course, but keep in mind that R, just like Python, has the real programming done in C, while most of the best libraries are also compiled from C. Much of the bioinformatics toolset today is found in R, and Python is a perfect glue for R calls.
- C. No matter how good computers become, core software will always need to run fast and not hog resources. C is the mother of most software, including the interpreters of most scripting languages. The best Python libraries for scientific computing, for example, are mere calls to compiled C code. C is not as hard as it used to be. Notably, Cython is a way to call C and C++ to and from Python. SWIG is another language-agnostic tool that makes it easy to design bindings for C code.
- Perl. Perl is viewed as a dying language since the community migrated to Python, but there is a lot of bioinformatics code written in Perl. Some still think that Perl text processing is the best, and Perl is slightly more geekish and more natively integrated with the Unix toolset. It fits you better than Python if you hate the "mainstream".
- Clouds and the newer languages. High-performance, ultra-scalable computing is all the rage today, and Python is struggling to keep up with younger languages that were designed with cloud computing, server farms and superclusters in mind. Have a go with some of the new kids on the block, such as Go, Scala, Julia, Clojure, Dart, etc.
- Java, C++, C#. Together these languages contain a lot of bioinformatics software. Most NGS sequencing programs and many data science programs are written in Java. Through Jython one can integrate Java classes with Python modules. Python claims it too, but Java is probably the only platform that can truly claim to come with "batteries included", since most of Python's batteries are in C. C++ is originally an object-oriented extension of C, and Python can treat it with the same tools; C# is what Microsoft did when it felt that too many programmers were abandoning Visual C++ for Java. The Microsoft clone of Python is called IronPython, in case you wonder.
- Matlab. Much of the scientific computing software in Python is an open-source version of Matlab libraries. Yes, Matlab is that good, if you can afford it. Matlab is still setting the trend in the field of numerical computation. For the poor, Octave is a free and open-source alternative. SageMath is another popular collection of open-source libraries that includes most of Python's scientific computing libraries.
- JavaScript. While Python dominates the multipurpose scripting languages, one scripting language dominates the browsers and is slowly making its way into server-side scripting too. JavaScript, much like R, is viewed with disgust by many programmers due to its semantic imperfections, but a recent statistic claims that 70% of the world's code runs on a JavaScript interpreter. I wrote this entire course in the browser using Jupyter without ever using a Python editor, and the only community that is as vibrant as Python's is probably the HTML5/.js community, which embraces the latest open web standards. Some claim operating systems are superfluous, and ChromeOS, running on JavaScript, is certainly there to prove it. In terms of data visualization and interactivity, .js has no equal. It is a bit too fictional to imagine that .js will improve to defeat all languages; more likely, assemblers will make the entire question of languages irrelevant and people will use whatever comes handier and translate the code as it suits them.
- Linux. It would probably be more accurate to write open source and open hardware, but unfortunately Linux is the only OS that qualifies, although both Apple and Microsoft have been "converted" and release more and more core OS functionality as open source. Linux runs the most popular mobile OS (Android) and is native to the most popular PC gaming and home entertainment platform (Steam + SteamOS). Free and open-source software such as Python, R, C, Java and Perl feels best on Linux. The same can be said about open hardware. The revolution Python created by lowering the bar in programming is currently underway in hardware. From Raspberry Pi minicomputers to Arduino microcontrollers, architecture is opening up to common people, and Python/Linux is their main development environment. It will not be long until researchers sit in three-day courses on how to assemble sequencers, PCR machines and mass-spec chambers from multipurpose parts.
- Google, Wikipedia, Biostar, IRC channels, mailing lists and Stack Overflow. Today you will not be productive if you read all the documentation, read the whole book or attend all the courses. I am not trying to make you lazy; it is important to learn. But the best learning is doing, and the online communities are more than helpful. You should not give up because you cannot do something with Python. Learn to ask!
**Questions**:
- What is a computer?
- What components are expensive/speedy today?
- Where are the "weak links" in the way a computer runs? What about a grid, or a cloud?
- Why "Python"?
- What are dynamic languages? What are static languages?
- What are interpreted languages? What are native languages, OSes and programs?
- What does "compiling" a language do? What about an assembler?
- How does a program run inside a computer?
# MAKE PYTHON WORK AGAIN!
Once you use a programming language past the beginning steps, you will sooner or later need libraries that require different language versions, or worse, libraries that depend on other libraries built against different language versions. This is solved by using distributions and virtualization. It may seem a little scary to a beginner, but all programming languages have this problem. Similarly, old programs will not work on new operating systems, so one can install virtualization programs like VirtualBox that allow you to run, for example, ancient versions of Windows, Linux and macOS all from the same OS. Unfortunately, installing software is not always easy, and each language has multiple distributions, each with a whole philosophy of library management. Welcome to the Hell of programming!
How to install Python libs:
- distros
- pip and virtualenv
- distutils, setuptools, wheels and eggs
- conda
- installing from source
### distros
[Scientific Python friendly distributions](http://www.scipy.org/install.html).
They make it easy, especially if you are not on Linux. I feel it unfair to recommend one above another. Each distribution has its own way to install/update a package. Some packages may be missing, though, in which case they have to be installed manually.
### pip, virtualenv
Python's approach to package management is very ... modular. One normally starts by installing Python, downloading it from a main location. The exact way Python and other packages can be installed depends on your OS. Once Python is there, we have a few cross-platform methods for installing packages; these are the more important options:
[pip](https://pip.pypa.io/en/stable/user_guide.html) is a package manager that will download and install packages. The basic command is:
pip install SomePackage
[virtualenv](https://virtualenv.pypa.io/en/latest/userguide.html) allows multiple Python versions to coexist on the same computer without conflicts, so that we don't start crying in case we need libraries that require different Python versions. I only mention this library here; hopefully it will not be needed during the course.
The repository site is called [pypi](https://pypi.org/).
### distutils, setuptools, wheels and eggs
This is useful if we want to deploy (distribute) a Python program on another computer. Here is the [official link](https://docs.python.org/3/distributing/) for distributing packages.
### conda
In this course we will try to stick to the Anaconda distribution's specific way of managing packages. If a package outside the core set is missing, it is up to you to install it if you have time. For me, on Linux, most packages were installed with simple commands. For the purpose of the course I am using a core set of the most common libraries, but for presentation purposes I also use libraries that may be hard to install in certain situations.
#### Anaconda installation and management
Please install the Anaconda distribution for Python 3.7, available here:
- for Anaconda: https://www.anaconda.com/products/individual
- for Miniconda: https://docs.conda.io/en/latest/miniconda.html
Anaconda installs a package manager called conda. Use it to create a micro-environment running Python 3. 'biopy37' is our invented name for this micro-environment (appending 'anaconda' to the create command would tell conda to make all the standard packages of the distribution available in it):
```
conda create -n biopy37 python=3.7
conda activate biopy37
```
What if we want only a selection of packages to be made available? Here is an example.
```
conda list
conda search biopython
conda create --name pycourse biopython scipy
```
What if we want to install a new package inside a microenvironment?
```
conda install --name pycourse beautifulsoup4
```
Packages that are not part of standard Anaconda can be installed with pip. Find more here: [http://conda.pydata.org/docs/using/pkgs.html](http://conda.pydata.org/docs/using/pkgs.html)
### Installing from source
- Since only a few Python packages are pure Python code, this often also entails having compilers for C/C++.
- Download the source archive of the package and decompress it. Open a terminal, navigate inside the source directory of the package (all this is done differently on different platforms) and type:
```
python setup.py install
```
That is it; you will see a wall of text scroll by, eventually ending either with an OK or with some errors. [There is more to it](https://docs.python.org/2/install/), but not for this course.
### Version 2 or 3?
- Answer: 3!
- Version 3.7 or 3.(latest)? 3.7 usually
- As Python grows it becomes hard to move the software infrastructure to the latest specs.
### Questions:
- What is a Python package?
- What is a Python distribution?
- Why and when do we need micro-environments?
**Task**:
- Install a conda environment containing biopython and scipy and make the install more automated by using a requirements.txt file.
- Verify your environment by deleting the old test environment and reinstalling it using the requirements.txt file.
- Install a similar environment with pip/virtualenv and try to switch between the conda and pip environments.
### Python console
The console is a command line interface directly to the Python interpreter.
[https://docs.python.org/2/tutorial/interpreter.html](https://docs.python.org/2/tutorial/interpreter.html)
To open a console you have to open a terminal inside your operating system and type 'python'. The console is useful for interrogating the Python interpreter.
```
!python
```
# eICU Collaborative Research Database
# Notebook 2: Demographics and severity of illness in a single patient
The aim of this notebook is to introduce high level admission details relating to a single patient stay, using the following tables:
- `patient`
- `admissiondx`
- `apacheapsvar`
- `apachepredvar`
- `apachepatientresult`
Before starting, you will need to copy the eicu demo database file ('eicu_demo.sqlite3') to the `data` directory.
Documentation on the eICU Collaborative Research Database can be found at: http://eicu-crd.mit.edu/.
## 1. Getting set up
```
# Import libraries
import pandas as pd
import matplotlib.pyplot as plt
import sqlite3
import os
# Plot settings
%matplotlib inline
plt.style.use('ggplot')
fontsize = 20 # size for x and y ticks
plt.rcParams['legend.fontsize'] = fontsize
plt.rcParams.update({'font.size': fontsize})
# Connect to the database
fn = os.path.join('data','eicu_demo.sqlite3')
con = sqlite3.connect(fn)
cur = con.cursor()
```
## 2. Display a list of tables
```
query = \
"""
SELECT type, name
FROM sqlite_master
WHERE type='table'
ORDER BY name;
"""
list_of_tables = pd.read_sql_query(query,con)
list_of_tables
```
## 3. Selecting a single patient stay
### 3.1. The `patient` table
The `patient` table includes general information about the patient admissions (for example, demographics, admission and discharge details). See: http://eicu-crd.mit.edu/eicutables/patient/
### Questions
Use your knowledge from the previous notebook and the online documentation (http://eicu-crd.mit.edu/) to answer the following questions:
- Which column in the `patient` table is distinct for each stay in the ICU (similar to `icustay_id` in MIMIC-III)?
- Which column is unique for each patient (similar to `subject_id` in MIMIC-III)?
```
# select a single ICU stay
patientunitstayid = 141296
# query to load data from the patient table
query = \
"""
SELECT *
FROM patient
WHERE patientunitstayid = {}
""".format(patientunitstayid)
print(query)
# run the query and assign the output to a variable
patient = pd.read_sql_query(query,con)
patient.head()
# display a complete list of columns
patient.columns
# select a limited number of columns to view
columns = ['uniquepid','patientunitstayid','gender','age','unitdischargestatus']
patient[columns]
```
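A side note on the query style above: building SQL with `str.format` works here, but `sqlite3` also supports `?` placeholders, which let the driver bind values safely. A minimal sketch against a throwaway in-memory database (the tiny table below is illustrative, not real eICU data):

```python
import sqlite3

# Build a throwaway in-memory database to demonstrate placeholders
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE patient (patientunitstayid INTEGER, gender TEXT)")
con.execute("INSERT INTO patient VALUES (141296, 'Female')")

# '?' placeholders let sqlite3 bind the value instead of string formatting
query = "SELECT gender FROM patient WHERE patientunitstayid = ?"
row = con.execute(query, (141296,)).fetchone()
print(row[0])  # -> Female
```

With pandas, the same binding is available via `pd.read_sql_query(query, con, params=(141296,))`.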
### Questions
- What year was the patient admitted to the ICU? Which year was he or she discharged?
- What was the status of the patient upon discharge from the unit?
### 3.2. The `admissiondx` table
The `admissiondx` table contains the primary diagnosis for admission to the ICU according to the APACHE scoring criteria. For more detail, see: http://eicu-crd.mit.edu/eicutables/admissiondx/
```
# query to load data from the patient table
query = \
"""
SELECT *
FROM admissiondx
WHERE patientunitstayid = {}
""".format(patientunitstayid)
print(query)
# run the query and assign the output to a variable
admissiondx = pd.read_sql_query(query,con)
admissiondx.head()
admissiondx.columns
```
### Questions
- What was the primary reason for admission?
- How soon after admission to the ICU was the diagnosis recorded in eCareManager?
### 3.3. The `apacheapsvar` table
The `apacheapsvar` table contains the variables used to calculate the Acute Physiology Score (APS) III for patients. APS-III is an established method of summarizing patient severity of illness on admission to the ICU.
The score is part of the Acute Physiology Age Chronic Health Evaluation (APACHE) system of equations for predicting outcomes for ICU patients. See: http://eicu-crd.mit.edu/eicutables/apacheApsVar/
```
# query to load data from the patient table
query = \
"""
SELECT *
FROM apacheapsvar
WHERE patientunitstayid = {}
""".format(patientunitstayid)
print(query)
# run the query and assign the output to a variable
apacheapsvar = pd.read_sql_query(query,con)
apacheapsvar.head()
apacheapsvar.columns
```
### Questions
- What was the 'worst' heart rate recorded for the patient during the scoring period?
- Was the patient oriented and able to converse normally on the day of admission? (hint: the `verbal` element refers to the Glasgow Coma Scale).
### 3.4. The `apachepredvar` table
The `apachepredvar` table provides variables underlying the APACHE predictions. Acute Physiology Age Chronic Health Evaluation (APACHE) consists of a group of equations used for predicting outcomes in critically ill patients. See: http://eicu-crd.mit.edu/eicutables/apachePredVar/
```
# query to load data from the patient table
query = \
"""
SELECT *
FROM apachepredvar
WHERE patientunitstayid = {}
""".format(patientunitstayid)
print(query)
# run the query and assign the output to a variable
apachepredvar = pd.read_sql_query(query,con)
apachepredvar.head()
apachepredvar.columns
apachepredvar.ventday1
```
### Questions
- Was the patient ventilated during (APACHE) day 1 of their stay?
- Did the patient have diabetes?
### 3.5. The `apachepatientresult` table
The `apachepatientresult` table provides predictions made by the APACHE score (versions IV and IVa), including probability of mortality, length of stay, and ventilation days. See: http://eicu-crd.mit.edu/eicutables/apachePatientResult/
```
# query to load data from the patient table
query = \
"""
SELECT *
FROM apachepatientresult
WHERE patientunitstayid = {}
""".format(patientunitstayid)
print(query)
# run the query and assign the output to a variable
apachepatientresult = pd.read_sql_query(query,con)
apachepatientresult.head()
apachepatientresult.columns
```
### Questions
- What versions of the APACHE score are computed?
- How many days during the stay was the patient ventilated?
- How long was the patient predicted to stay in hospital?
- Was this prediction close to the truth?
| github_jupyter |
# Intelligent Systems Assignment 1
## Masterball solver
**Name:**
**ID:**
### 1. Create a class to model the Masterball problem
A Masterball must be represented as an array of arrays with integer values representing the color of the tile in each position:
A solved masterball must look like this:
```python
[ [0, 1, 2, 3, 4, 5, 6, 7],
[0, 1, 2, 3, 4, 5, 6, 7],
[0, 1, 2, 3, 4, 5, 6, 7],
[0, 1, 2, 3, 4, 5, 6, 7]
]
```
#### Variables modeling the actions
```
'''
These variables MUST not be changed.
They represent the movements of the masterball.
'''
R_0 = "Right 0"
R_1 = "Right 1"
R_2 = "Right 2"
R_3 = "Right 3"
V_0 = "Vertical 0"
V_1 = "Vertical 1"
V_2 = "Vertical 2"
V_3 = "Vertical 3"
V_4 = "Vertical 4"
V_5 = "Vertical 5"
V_6 = "Vertical 6"
V_7 = "Vertical 7"
```
`R_i` moves the `i`th row to the right. For instance, `R_2` applied to the solved state will produce:
```python
[ [0, 1, 2, 3, 4, 5, 6, 7],
[0, 1, 2, 3, 4, 5, 6, 7],
[7, 0, 1, 2, 3, 4, 5, 6],
[0, 1, 2, 3, 4, 5, 6, 7]
]
```
`V_i` performs a clockwise vertical move starting with the `i`th column.
`V_1` applied to the above state will produce:
```python
[ [0, 4, 3, 2, 1, 5, 6, 7],
[0, 3, 2, 1, 0, 5, 6, 7],
[7, 4, 3, 2, 1, 4, 5, 6],
[0, 4, 3, 2, 1, 5, 6, 7]
]
```
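As an illustration of the move semantics (a sketch, not part of the required solution), `R_i` is just a rightward rotation of row `i`:

```python
def rotate_right(state, i):
    """Return a new state with row i rotated one tile to the right (R_i)."""
    new_state = [list(row) for row in state]  # copy so the input is not mutated
    new_state[i] = [new_state[i][-1]] + new_state[i][:-1]
    return new_state

solved = [[0, 1, 2, 3, 4, 5, 6, 7] for _ in range(4)]
print(rotate_right(solved, 2))  # row 2 becomes [7, 0, 1, 2, 3, 4, 5, 6]
```

Note that the function returns a fresh state, matching the "you should not modify the state" requirement of `getSuccessors`.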
#### The Masterball problem class
```
import search
class MasterballProblem(search.SearchProblem):
def __init__(self, startState):
'''
Store the initial state in the problem representation and any useful
data.
Here are some examples of initial states:
[[0, 1, 4, 5, 6, 2, 3, 7], [0, 1, 3, 4, 5, 6, 3, 7], [1, 2, 4, 5, 6, 2, 7, 0], [0, 1, 4, 5, 6, 2, 3, 7]]
[[0, 7, 4, 5, 1, 6, 2, 3], [0, 7, 4, 5, 0, 5, 2, 3], [7, 6, 3, 4, 1, 6, 1, 2], [0, 7, 4, 5, 1, 6, 2, 3]]
[[0, 1, 6, 4, 5, 2, 3, 7], [0, 2, 6, 5, 1, 3, 4, 7], [0, 2, 6, 5, 1, 3, 4, 7], [0, 5, 6, 4, 1, 2, 3, 7]]
'''
self.expanded = 0
### your code here ###
pass
def isGoalState(self, state):
'''
Define when a given state is a goal state (A correctly colored masterball)
'''
### your code here ###
pass
def getStartState(self):
'''
Implement a method that returns the start state according to the SearchProblem
contract.
'''
### your code here ###
pass
def getSuccessors(self, state):
'''
Implement a successor function: Given a state from the masterball
return a list of the successors and their corresponding actions.
This method *must* return a list where each element is a tuple of
three elements with the state of the masterball in the first position,
the action (according to the definition above) in the second position,
and the cost of the action in the last position.
Note that you should not modify the state.
'''
self.expanded += 1
### your code here ###
pass
```
### 2. Implement iterative deepening search
Follow the example code provided in class and implement iterative deepening search (IDS).
```
def iterativeDeepeningSearch(problem):
return []
def aStarSearch(problem, heuristic):
return []
```
Evaluate it to see the maximum depth it can explore in a reasonable time. Report the results.
### 3. Implement different heuristics for the problem
Implement at least two admissible and consistent heuristics. Compare A* using the heuristics against IDS calculating the number of expanded nodes and the effective branching factor, in the same way as it is done in figure 3.29 of [Russell10].
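The effective branching factor b* is the b solving N + 1 = 1 + b + b² + … + b^d, where N is the number of expanded nodes and d the solution depth; it has no closed form but can be found numerically. A sketch by bisection (not part of the required assignment code):

```python
def effective_branching_factor(n_expanded, depth, tol=1e-6):
    """Solve N + 1 = 1 + b + b^2 + ... + b^d for b by bisection."""
    def total(b):
        return sum(b ** i for i in range(depth + 1))  # 1 + b + ... + b^d

    lo, hi = 1.0, float(n_expanded + 1)  # total(1) = d+1, total(N+1) overshoots
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n_expanded + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

b_star = effective_branching_factor(52, 5)
print(round(b_star, 2))  # e.g. 52 expanded nodes at depth 5 gives b* of about 1.92
```

A sanity check: a perfectly balanced binary tree of depth 3 has 1 + 2 + 4 + 8 = 15 nodes, so N = 14 at d = 3 recovers b* = 2.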
```
def myHeuristic(state):
return 0
def solveMasterBall(problem, search_function):
'''
This function receives a Masterball problem instance and a
search_function (IDS or A*S) and must return a list of actions that solve the problem.
'''
return []
problem = MasterballProblem([ [0, 4, 3, 2, 1, 5, 6, 7],
[0, 3, 2, 1, 0, 5, 6, 7],
[7, 4, 3, 2, 1, 4, 5, 6],
[0, 4, 3, 2, 1, 5, 6, 7]])
print solveMasterBall(problem, iterativeDeepeningSearch)
print solveMasterBall(problem, lambda p: aStarSearch(p, myHeuristic))
```
| github_jupyter |
<small><small><i>
All the IPython Notebooks in **[Python Natural Language Processing](https://github.com/milaan9/Python_Python_Natural_Language_Processing)** lecture series by **[Dr. Milaan Parmar](https://www.linkedin.com/in/milaanparmar/)** are available @ **[GitHub](https://github.com/milaan9)**
</i></small></small>
<a href="https://colab.research.google.com/github/milaan9/Python_Python_Natural_Language_Processing/blob/main/10_TF_IFD.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# 10 TF-IDF (Term Frequency-Inverse Document Frequency )
The scoring method used above takes the count of each word and represents the word in the vector by that count. What does a high word count signify?
Does it mean the word is important for retrieving information about documents? The answer is no. If a word occurs many times in one document but also across many other documents in our dataset, it may simply be a frequent word, not a relevant or meaningful one. One approach is to rescale word frequencies by how often they appear across all documents.
- **TF-IDF is a statistical measure that evaluates how relevant a word is to a document in a collection of documents.**
- **The term frequency (TF)** of a word in a document is the number of times the word occurs in the document divided by the total number of words in that document.
  - TF(i, j) = n(i, j) / Σ n(i, j)
  - n(i, j) = number of times the i-th word occurs in document j
  - Σ n(i, j) = total number of words in document j
- **The inverse document frequency (IDF)** of a word across a set of documents is the logarithm of the total number of documents in the dataset divided by the number of documents in which the word occurs.
  - IDF = 1 + log(N / dN)
  - N = total number of documents in the dataset
  - dN = number of documents in which the word occurs
  **NOTE:** The 1 added in the formula above ensures that terms with zero IDF are not suppressed entirely.
- If a word is very common and appears in every document, log(N / dN) approaches 0; the rarer the word, the larger its IDF.

The TF-IDF score is obtained by **TF-IDF = TF * IDF**
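The two formulas can be combined in a few lines of plain Python; a minimal sketch (the tiny corpus and pre-tokenized documents are illustrative):

```python
import math

docs = [["car", "is", "driven"], ["truck", "is", "driven"]]

def tf(word, doc):
    # term frequency: count of the word / total words in the document
    return doc.count(word) / len(doc)

def idf(word, docs):
    # inverse document frequency: 1 + log(N / dN)
    d_n = sum(1 for doc in docs if word in doc)  # documents containing the word
    return 1 + math.log(len(docs) / d_n)

def tf_idf(word, doc, docs):
    return tf(word, doc) * idf(word, docs)

print(tf_idf("car", docs[0], docs))  # rare word: higher score
print(tf_idf("is", docs[0], docs))   # word shared by all documents: lower score
```

Both words occur once in the first document, so the difference in score comes entirely from the IDF term.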
## **Limitations of Bag-of-Words**
1. The model ignores word position. Position carries very important information in text: for example, "today is off" and "is today off" have exactly the same vector representation in the BoW model.
2. The Bag-of-Words model doesn't respect the semantics of words. For example, "soccer" and "football" are often used in the same context, yet their vectors are quite different in the model. The problem becomes more serious when modeling sentences: "Buy used cars" and "Purchase old automobiles" are represented by totally different vectors in the Bag-of-Words model.
3. The range of the vocabulary is a big issue for the Bag-of-Words model. If the model comes across a word it has not seen before, say a rare but informative word like "biblioklept" (one who steals books), the BoW model will end up ignoring it.
```
from sklearn.feature_extraction.text import CountVectorizer
# list of text documents
text = ["The car is driven on the road.","The truck is driven on the highway"]
# create the transform
vectorizer = CountVectorizer()
# tokenize and build vocab
vectorizer.fit(text)
# summarize
print(vectorizer.vocabulary_)
# encode document
newvector = vectorizer.transform(text)
# summarize encoded vector
print(newvector.toarray())
```
# TF - IDF
```
from sklearn.feature_extraction.text import TfidfVectorizer
# list of text documents
text = ["The car is driven on the road.","The truck is driven on the highway"]
# create the transform
vectorizer = TfidfVectorizer()
# tokenize and build vocab
vectorizer.fit(text)
#Focus on IDF VALUES
print(vectorizer.idf_)
# summarize
print(vectorizer.vocabulary_)
```
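Note that the IDF values printed above do not follow the textbook formula exactly: with its defaults (`smooth_idf=True`), scikit-learn's `TfidfVectorizer` uses idf(t) = ln((1 + n) / (1 + df(t))) + 1, where n is the number of documents and df(t) the document frequency of term t. A quick check without sklearn (document counts taken from the two sentences above):

```python
import math

n = 2          # total documents in the corpus above
df_driven = 2  # "driven" appears in both documents
df_car = 1     # "car" appears in one document

def sklearn_idf(df, n):
    # TfidfVectorizer default: smooth_idf=True
    return math.log((1 + n) / (1 + df)) + 1

print(sklearn_idf(df_driven, n))  # 1.0: words in every document get the minimum idf
print(sklearn_idf(df_car, n))     # larger: rarer words are weighted up
```

The smoothing acts as if one extra document containing every term existed, which prevents division by zero for unseen terms.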
# **TF-IDF (KN)**
```
# import nltk
# nltk.download('popular')
import nltk
paragraph = """I have three visions for India. In 3000 years of our history, people from all over
the world have come and invaded us, captured our lands, conquered our minds.
From Alexander onwards, the Greeks, the Turks, the Moguls, the Portuguese, the British,
I see four milestones in my career"""
# Cleaning the texts
import re
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem import WordNetLemmatizer
ps = PorterStemmer()
wordnet=WordNetLemmatizer()
sentences = nltk.sent_tokenize(paragraph)
corpus = []
for i in range(len(sentences)):
review = re.sub('[^a-zA-Z]', ' ', sentences[i])
review = review.lower()
review = review.split()
review = [wordnet.lemmatize(word) for word in review if not word in set(stopwords.words('english'))]
review = ' '.join(review)
corpus.append(review)
# Creating the TF-IDF model
from sklearn.feature_extraction.text import TfidfVectorizer
cv = TfidfVectorizer()
X = cv.fit_transform(corpus).toarray()
X
```
| github_jupyter |
# Evaluation script for MiniBrass Evaluation results
## WCSP-Solver Comparison
The first section sets up the connection to the database, installs GeomMean as aggregate function, and counts problem instances.
```
import sqlite3
import numpy as np
import scipy.stats as st
%pylab inline
class GeomMean:
def __init__(self):
self.values = []
def step(self, value):
self.values += [value]
def finalize(self):
return st.gmean(self.values)
class Wilcoxon:
def __init__(self):
self.valuesLeft = []
self.valuesRight = []
def step(self, value1, value2):
self.valuesLeft += [value1]
self.valuesRight += [value2]
def finalize(self):
[t, prob] = st.wilcoxon(self.valuesLeft, self.valuesRight)
return 1.0 if prob < 0.05 else 0.0
def boldify(floatStr):
split_num = floatStr.split('.')
return "\\mathbf{" + split_num[0]+"}.\\mathbf{"+split_num[1] + "}"
conn = sqlite3.connect('results.db')
conn.create_aggregate("GeomMean", 1, GeomMean)
conn.create_aggregate("Wilcoxon", 2, Wilcoxon)
c = conn.cursor()
readable = { "NUMBERJACK":"Toulbar2", "GECODE":"Gecode", "OR_TOOLS":"OR-Tools", "CHOCO":"Choco",
"JACOP":"JaCoP", "G12":"G12", "GECODE_NAT" : "Native Gecode"}
readableProblems = { "on-call-rostering":"On-call Rostering", "mspsp":"MSPSP", "soft-queens":"Soft N-Queens",
"talent-scheduling":"Talent Scheduling", "photo":"Photo Placement"}
from collections import defaultdict
problemToInstance = defaultdict(list)
c.execute("SELECT Problem, Count(Distinct Instance) as Instances FROM JobResult Group By Problem")
for row in c.fetchall():
problemToInstance[row[0]] = row[1]
c.execute("SELECT COUNT(*) FROM ( SELECT Distinct Instance FROM JobResult )")
res = c.fetchone()
numberProblems = res[0]
print "We tried", numberProblems, "instances."
```
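The `create_aggregate` mechanism used above can be sketched in isolation (a Python 3 sketch, with the geometric mean computed via the standard library instead of scipy's `gmean`):

```python
import math
import sqlite3

class GeomMean:
    """SQLite user-defined aggregate: geometric mean of a column."""
    def __init__(self):
        self.values = []

    def step(self, value):          # called once per row
        self.values.append(value)

    def finalize(self):             # called once at the end of the group
        logs = [math.log(v) for v in self.values]
        return math.exp(sum(logs) / len(logs))

con = sqlite3.connect(":memory:")
con.create_aggregate("GeomMean", 1, GeomMean)  # name, number of args, class
con.execute("CREATE TABLE t (x REAL)")
con.executemany("INSERT INTO t VALUES (?)", [(2.0,), (8.0,)])
gm = con.execute("SELECT GeomMean(x) FROM t").fetchone()[0]
print(gm)  # -> 4.0
```

Computing the mean of logarithms rather than multiplying the raw values also avoids overflow for long columns, which matters when aggregating many runtimes.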
### 5.1. Comparing Performance: Encoded Weighted CSP vs. Native Toulbar2
Used SQL-scripts:
* query-native-solver-comparison-pure-views.sql
* query-native-solver-comparison-pure.sql
```
# now we do the solver comparison
problemToInstance = defaultdict(list)
c.execute("SELECT Problem, Count(Distinct Instance) as Instances FROM JobResult Group By Problem")
for row in c.fetchall():
problemToInstance[row[0]] = row[1]
c.execute("SELECT COUNT(*) FROM ( SELECT Distinct Instance FROM JobResult )")
res = c.fetchone()
numberProblems = res[0]
print "We tried", numberProblems, "instances."
scriptFile = open("query-native-solver-comparison-pure-views.sql", 'r')
script = scriptFile.read()
scriptFile.close()
c.executescript(script)
conn.commit()
scriptFile = open("query-native-solver-comparison-pure.sql",'r')
script = scriptFile.read()
scriptFile.close()
c.execute(script)
currProblem = ""
print "\\begin{tabular*}{\\textwidth}{@{\\extracolsep{\\fill} }l" + \
"".join(["d{1.5}" for i in range(0,1)]) + "cd{1.5}" + "".join(["d{1.1}" for i in range(0,2)]) + "}"
print "\\toprule"
print '''\\multicolumn{1}{c}{Solver} & \multicolumn{1}{c}{Time (secs)}
& \multicolumn{1}{c}{\\# Wins}
& \multicolumn{1}{c}{Objective}
& \multicolumn{1}{c}{\% Solved} & \multicolumn{1}{c}{\% Optimal} \\\\'''
for row in c.fetchall():
(problem, solverId, solverName, elapsed, elapsedSpan, relElapsed, \
objective, relObjective, wins, solved, optimally) = row
if currProblem != problem:
#print "Starting .... ", problem
currProblem = problem
print "\\midrule"
print "\\multicolumn{2}{l}{" + readableProblems[problem] + " ("+ str(problemToInstance[problem]) + " instances) } \\\\"
print "\\midrule"
print " ", readable[solverName], "&", '{0:.2f}'.format(elapsed),\
"\\quad ("+'{0:.2f}'.format(relElapsed)+")" "&", '{0:.0f}'.format(wins), \
"&", '{0:.2f}'.format(objective), "\\quad ("+'{0:.2f}'.format(relObjective)+")", "&", \
'{0:.2f}'.format(solved), "&",'{0:.2f}'.format(optimally), "\\\\"
print "\\bottomrule"
print "\\end{tabular*}"
```
### 5.2 Comparing Models: Smyth-Optimization versus Weighted-Optimization
Used SQL-scripts:
* query-native-vs-strictbab-overhead-views.sql
* query-native-solver-comparison-pure.sql
#### Table 2: smyth vs. weighted
```
scriptFile = open("query-native-vs-strictbab-overhead-views.sql",'r')
script = scriptFile.read()
scriptFile.close()
c.executescript(script)
conn.commit()
# now we do the solver comparison
problemToInstance = defaultdict(list)
c.execute("SELECT Problem, Count(Distinct Instance) as Instances FROM PvsNativeSummary Group By Problem")
for row in c.fetchall():
problemToInstance[row[0]] = row[1]
c.execute("SELECT COUNT(*) FROM ( SELECT Distinct Instance FROM PvsNativeSummary )")
res = c.fetchone()
numberProblems = res[0]
print "We tried", numberProblems, "instances."
scriptFile = open("query-native-vs-strictbab-overhead.sql",'r')
script = scriptFile.read()
scriptFile.close()
currProblem = ""
print "\\begin{tabular*}{\\textwidth}{@{\\extracolsep{\\fill} }l" + \
"".join(["d{1.1}" for i in range(0,5)]) + "}"
print "\\toprule"
print '''\\multicolumn{1}{c}{Solver} & \multicolumn{1}{c}{Time Smyth}
& \multicolumn{1}{c}{Time Weighted}
& \multicolumn{1}{c}{Time Toulbar2}
& \multicolumn{1}{c}{Obj. Smyth} & \multicolumn{1}{c}{Obj. Weighted} \\\\'''
c.execute(script)
def boldify(floatStr):
split_num = floatStr.split('.')
return "\\textbf{" + split_num[0]+"}.\\textbf{"+split_num[1] + "}"
for row in c.fetchall():
(problem, solverName, elapsedSmyth, elapsedWeights, absoluteOverhead, relOverhead, weightsObj, smythObj, elapsedTb) \
= row
if currProblem != problem:
#print "Starting .... ", problem
currProblem = problem
print "\\midrule"
print "\\multicolumn{2}{l}{" + readableProblems[problem] + " ("+ str(problemToInstance[problem]) + " instances) } \\\\"
print "\\midrule"
if elapsedSmyth < elapsedWeights:
elapsedSmythText = boldify('{0:.2f}'.format(elapsedSmyth))
elapsedWeightsText = '{0:.2f}'.format(elapsedWeights)
else:
elapsedWeightsText = boldify('{0:.2f}'.format(elapsedWeights))
elapsedSmythText = '{0:.2f}'.format(elapsedSmyth)
print " ", readable[solverName], \
"&", elapsedSmythText,\
"&", elapsedWeightsText, "&", \
"\\emph{-}" if (currProblem == "mspsp" or currProblem == "talent-scheduling") \
else "\\emph{" + '{0:.2f}'.format(elapsedTb) + "}", \
"&", '{0:.2f}'.format(smythObj), "&", '{0:.2f}'.format(weightsObj), "\\\\"
currProblem = ""
print "\\bottomrule"
print "\\end{tabular*}"
```
#### Table 3: Domination BaB - NonDom Bab
Used SQL scripts:
* query-dom-vs-nondom-views.sql
* query-dom-vs-nondom-query.sql
* query-dom-vs-nondom-overall.sql
```
scriptFile = open("query-dom-vs-nondom-views.sql",'r')
script = scriptFile.read()
scriptFile.close()
c.executescript(script)
conn.commit()
scriptFile = open("query-dom-vs-nondom-query.sql",'r')
script = scriptFile.read()
scriptFile.close()
currProblem = ""
print "\\begin{tabular*}{\\textwidth}{@{\\extracolsep{\\fill} }l" + \
"".join(["d{1.1}" for i in range(0,4)]) + "}"
print "\\toprule"
print '''\\multicolumn{1}{c}{Problem} & \multicolumn{1}{c}{Time Non-Dominated BaB}
& \multicolumn{1}{c}{Time Strict BaB}
& \multicolumn{1}{c}{Absolute Overhead}
& \multicolumn{1}{c}{Relative Overhead} \\\\'''
print "\\midrule"
c.execute(script)
for row in c.fetchall():
(problem, nonDomElapsed, domElapsed, absoluteOverhead, relOverhead, significant) = row
if domElapsed < nonDomElapsed:
domElapsedText = boldify('{0:.2f}'.format(domElapsed))
nonDomElapsedText = '{0:.2f}'.format(nonDomElapsed)
else:
nonDomElapsedText = boldify('{0:.2f}'.format(nonDomElapsed))
domElapsedText = '{0:.2f}'.format(domElapsed)
print " ", readableProblems[problem]+("*" if significant else ""), \
"&", nonDomElapsedText,\
"&", domElapsedText, "&", \
'{0:.2f}'.format(absoluteOverhead), \
"&", '{0:.2f}'.format(relOverhead), "\\\\"
scriptFile = open("query-dom-vs-nondom-overall.sql",'r')
script = scriptFile.read()
scriptFile.close()
print "\\midrule"
c.execute(script)
for row in c.fetchall():
(problem, nonDomElapsed, domElapsed, absoluteOverhead, relOverhead, significant) = row
if domElapsed < nonDomElapsed:
domElapsedText = boldify('{0:.2f}'.format(domElapsed))
nonDomElapsedText = '{0:.2f}'.format(nonDomElapsed)
else:
nonDomElapsedText = boldify('{0:.2f}'.format(nonDomElapsed))
domElapsedText = '{0:.2f}'.format(domElapsed)
print problem+("*" if significant else ""), \
"&", nonDomElapsedText,\
"&", domElapsedText, "&", \
'{0:.2f}'.format(absoluteOverhead), \
"&", '{0:.2f}'.format(relOverhead), "\\\\"
# query-dom-vs-nondom-overall.sql
print "\\bottomrule"
print "\\end{tabular*}"
```
### 5.3 Comparing Search Heuristics: The Most-Important-First (MIF) Strategy
Used SQL scripts:
* query-mif-comp.sql
* query-mif-comp-summary-couting.sql
* query-mif-comp-solver.sql
* query-mif-comp-problems.sql
```
# just some formatting stuff
import matplotlib.pyplot as plt
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Ubuntu'
plt.rcParams['font.monospace'] = 'Ubuntu Mono'
plt.rcParams['font.size'] = 10
plt.rcParams['axes.labelsize'] = 10
plt.rcParams['axes.labelweight'] = 'bold'
plt.rcParams['xtick.labelsize'] = 9
plt.rcParams['ytick.labelsize'] = 9
plt.rcParams['legend.fontsize'] = 10
plt.rcParams['figure.titlesize'] = 12
```
Note: this table is customized manually afterwards:
* the smaller runtime is set in bold
* an asterisk marks statistical significance
## MIF per Solver
First, we need statistical significance per solver
```
scriptFile = open("query-mif-stat-prob.sql",'r')
script = scriptFile.read()
scriptFile.close()
c.execute(script)
mifElapseds = defaultdict(list)
mifStds = defaultdict(list)
normalElapseds = defaultdict(list)
normalStds = defaultdict(list)
# a dictionary for significance lookup
solversSignificant = defaultdict(list)
solvers = []
currProb = ""
for row in c.fetchall():
(problem, solver, mifElapsed, normalElapsed) = row
if not(solver in solvers):
solvers += [solver]
mifElapseds[solver] += [mifElapsed]
normalElapseds[solver] += [normalElapsed]
for s in solvers:
print s
[t, prob] = st.wilcoxon(mifElapseds[s], normalElapseds[s])
if prob < 0.05:
print "SIGNIFICANT t=", t, " prob = ", prob
solversSignificant[s] = True
else:
print "insignificant t=", t, " prob = ", prob
solversSignificant[s] = False
# first the views
scriptFile = open("query-mif-comp.sql",'r')
script = scriptFile.read()
c.executescript(script)
conn.commit()
scriptFile.close()
# then the highest-level aggregation
scriptFile = open("query-mif-comp-summary-couting.sql",'r')
script = scriptFile.read()
scriptFile.close()
c.execute(script)
(avgDiff, sumMifWins, insts, ratio) = c.fetchone()
print "Over all", insts, "runs across solvers, problem instances and search types, the MIF heuristic " \
"led to a faster runtime in", sumMifWins, "cases", "("+'{0:.2f}'.format(ratio)+" \%) with the average runtime reduced by "+ \
'{0:.2f}'.format(abs(avgDiff)) +" seconds."
scriptFile = open("query-mif-comp-solver.sql",'r')
script = scriptFile.read()
scriptFile.close()
c.execute(script)
timeDiffs = defaultdict(list)
relTimeDiffs = defaultdict(list)
mifElapseds = defaultdict(list)
mifStds = defaultdict(list)
normalElapseds = defaultdict(list)
normalStds = defaultdict(list)
mifWinss = defaultdict(list)
instances = defaultdict(list)
ratios = defaultdict(list)
solvers = []
for row in c.fetchall():
(solverName, mifElapsed, mifVar, normalElapsed, normalVar, timeDiff, relTimeDiff, mifWins, overall, ratio) = row
solvers += [solverName]
timeDiffs[solverName] = timeDiff
relTimeDiffs[solverName] = relTimeDiff
mifElapseds[solverName] = mifElapsed
mifStds[solverName] = np.sqrt(mifVar)
normalElapseds[solverName] = normalElapsed
normalStds[solverName] = np.sqrt(normalVar)
mifWinss[solverName] = mifWins
instances[solverName] = overall
ratios[solverName] = ratio
print solvers
print overall, "instances are included in these averages."
print "\\begin{tabular*}{\\textwidth}{@{\\extracolsep{\\fill} }l" + \
"".join(["d{1.1}" for s in [1]+solvers]) + "}"
print "\\toprule"
print " & ", " & ".join(["\\multicolumn{1}{c}{" + readable[s] + \
("*" if solversSignificant[s] else "") +"}" for s in solvers]), "\\\\"
print "\\midrule"
print "Instances & ", " & ".join(['{0:.0f}'.format(instances[s]) for s in solvers]), "\\\\"
print "Runtime difference & ", \
" & ".join(['{0:.2f}'.format(timeDiffs[s]) if timeDiffs[s] >= 0 else boldify('{0:.2f}'.format(timeDiffs[s]))\
for s in solvers]), "\\\\"
print "Rel. runtime diff. & ", " & ".join(['{0:.2f}'.format(relTimeDiffs[s]) for s in solvers]), "\\\\"
#print "# MIF wins & ", " & ".join(['{0:.2f}'.format(timeDiffs[s]) for s in solvers]), "\\\\"
print "Ratio MIF wins & ", \
" & ".join([ '{0:.2f}'.format(ratios[s]) if ratios[s] < 0.5 else boldify('{0:.2f}'.format(ratios[s]))\
for s in solvers]), "\\\\"
#print "Runtime difference & ", " & ".join(['{0:.2f}'.format(timeDiffs[s]) for s in solvers]), "\\\\"
print "\\bottomrule"
print "\\end{tabular*}"
"""
Bar chart demo with pairs of bars grouped for easy comparison.
"""
import numpy as np
isseorange = (1.0, 0.57647, 0.039216)
#\definecolor{issegrey}{RGB}{80,85,82}
issegrey = (80.0 / 255, 85.0 / 255, 82.0 / 255)
n_groups = len(solvers)
means_mif = [mifElapseds[s] for s in solvers]
std_mif = [mifStds[s] for s in solvers]
print means_mif
print std_mif
means_nomif = [normalElapseds[s] for s in solvers]
std_nomif = [normalStds[s] for s in solvers]
print means_nomif
print std_nomif
fig, ax = plt.subplots()
index = np.arange(n_groups)
bar_width = 0.23
opacity = 0.9
error_config = {'ecolor': '0.3'}
plt.ylim([0,250])
plt.xlim([0,7])
rects1 = plt.bar(index, means_mif, bar_width,
alpha=opacity,
color=isseorange,
error_kw=error_config,
hatch="/",
label='MIF')
rects2 = plt.bar(index + bar_width, means_nomif, bar_width,
alpha=opacity,
color=issegrey,
hatch="\\",
error_kw=error_config,
label='No-MIF')
plt.xlabel('Solver')
plt.ylabel('Avg. Runtimes (secs)')
#plt.title('Runtimes by solver and heuristic')
plt.xticks(index + bar_width , ["Choco*", "G12", "Gecode", "Gecode Nat.", "JaCoP", "Toulbar2", "OR-Tools"])
# [ s if s != "NUMBERJACK" else "TOULBAR2" for s in solvers])
plt.legend()
plt.tight_layout()
# plt.savefig('runtime-mif-solver.pdf', bbox_inches='tight')
plt.show()
```
## MIF per problem
Now for the statistical test grouped by problem
```
scriptFile = open("query-mif-stat-prob.sql",'r')
script = scriptFile.read()
scriptFile.close()
c.execute(script)
mifElapseds = defaultdict(list)
mifStds = defaultdict(list)
normalElapseds = defaultdict(list)
normalStds = defaultdict(list)
# a dictionary for significance lookup
problemsSignificant = defaultdict(list)
problems = []
currProb = ""
for row in c.fetchall():
(problem, solver, mifElapsed, normalElapsed) = row
if currProb != problem:
problems += [problem]
currProb = problem
mifElapseds[problem] += [mifElapsed]
normalElapseds[problem] += [normalElapsed]
for p in problems:
print p
print len(mifElapseds[p])
print np.mean(mifElapseds[p]), " -- ", np.std(mifElapseds[p])
print np.mean(normalElapseds[p]), " -- ", np.std(normalElapseds[p])
#print [mifElapseds[p][i] < normalElapseds[p][i] | i in range(0, len(mifElapseds[p]))]
print sum(np.array(mifElapseds[p]) < np.array(normalElapseds[p]))
print sum(np.array(normalElapseds[p]) < np.array(mifElapseds[p]))
[t, prob] = st.wilcoxon(mifElapseds[p], normalElapseds[p], zero_method="wilcox")
if prob < 0.05:
print "SIGNIFICANT t=", t, " prob = ", prob
problemsSignificant[p] = True
else:
print "insignificant t=", t, " prob = ", prob
problemsSignificant[p] = False
```
The query for the table
```
scriptFile = open("query-mif-comp-problems.sql",'r')
script = scriptFile.read()
scriptFile.close()
c.execute(script)
timeDiffs = defaultdict(list)
relTimeDiffs = defaultdict(list)
mifElapseds = defaultdict(list)
mifStds = defaultdict(list)
normalElapseds = defaultdict(list)
normalStds = defaultdict(list)
mifWinss = defaultdict(list)
instances = defaultdict(list)
ratios = defaultdict(list)
problems = []
for row in c.fetchall():
(problem, mifElapsed, mifVar, normalElapsed, normalVar, timeDiff, relTimeDiff, mifWins, overall, ratio) = row
problems += [problem]
timeDiffs[problem] = timeDiff
relTimeDiffs[problem] = relTimeDiff
mifElapseds[problem] = mifElapsed
mifStds[problem] = np.sqrt(mifVar)
normalElapseds[problem] = normalElapsed
normalStds[problem] = np.sqrt(normalVar)
mifWinss[problem] = mifWins
instances[problem] = overall
ratios[problem] = ratio
#print row
print problems
print overall, "instances are included in these averages."
print "\\begin{tabular*}{\\textwidth}{@{\\extracolsep{\\fill} }l" + \
"".join(["d{1.1}" for p in [1]+problems]) + "}"
print "\\toprule"
print " & ", " & ".join(["\\multicolumn{1}{c}{" + readableProblems[s] +\
("*" if problemsSignificant[s] else "") + "}" for s in problems]), "\\\\"
print "\\midrule"
print "Instances & ", " & ".join(['{0:.0f}'.format(instances[s]) for s in problems]), "\\\\"
print "Runtime difference & ",\
" & ".join(['{0:.2f}'.format(timeDiffs[s]) if timeDiffs[s] >= 0 else boldify('{0:.2f}'.format(timeDiffs[s]))\
for s in problems]), "\\\\"
print "Rel. runtime diff. & ", " & ".join(['{0:.2f}'.format(relTimeDiffs[s]) for s in problems]), "\\\\"
#print "# MIF wins & ", " & ".join(['{0:.2f}'.format(timeDiffs[s]) for s in solvers]), "\\\\"
print "Ratio MIF wins & ",\
" & ".join(['{0:.2f}'.format(ratios[s]) if ratios[s] < 0.5 else boldify('{0:.2f}'.format(ratios[s]))\
for s in problems]), "\\\\"
#print "Runtime difference & ", " & ".join(['{0:.2f}'.format(timeDiffs[s]) for s in solvers]), "\\\\"
print "\\bottomrule"
print "\\end{tabular*}"
"""
Bar chart demo with pairs of bars grouped for easy comparison.
"""
import numpy as np
isseorange = (1.0, 0.57647, 0.039216)
#\definecolor{issegrey}{RGB}{80,85,82}
issegrey = (80.0 / 255, 85.0 / 255, 82.0 / 255)
n_groups = len(problems)
means_mif = [mifElapseds[p] for p in problems]
std_mif = [mifStds[p] for p in problems]
print means_mif
print std_mif
means_nomif = [normalElapseds[p] for p in problems]
std_nomif = [normalStds[p] for p in problems]
print means_nomif
print std_nomif
fig, ax = plt.subplots()
index = np.arange(n_groups)
bar_width = 0.2
opacity = 0.9
error_config = {'ecolor': '0.3'}
plt.ylim([0,250])
plt.xlim([0,5])
rects1 = plt.bar(index, means_mif, bar_width,
alpha=opacity,
color=isseorange,
error_kw=error_config,
hatch="/",
label='MIF')
rects2 = plt.bar(index + bar_width, means_nomif, bar_width,
alpha=opacity,
color=issegrey,
hatch="\\",
error_kw=error_config,
label='No-MIF')
plt.xlabel('Problem')
plt.ylabel('Avg. Runtimes (secs)')
#plt.title('Runtimes by problem and heuristic')
plt.xticks(index + bar_width , ["MSPSP", "On-call Rostering", "Photo", "Soft Queens", "Talent Scheduling"])
plt.legend()
plt.tight_layout()
plt.savefig('runtime-mif-problem.pdf', bbox_inches='tight')
plt.show()
# the overall query of significance
import numpy as np
import scipy.stats as st
scriptFile = open("query-mif-stat.sql",'r')
script = scriptFile.read()
scriptFile.close()
c.execute(script)
mifElapseds = []
normalElapseds = []
for row in c.fetchall():
    (mifElapsed, normalElapsed) = row
    mifElapseds += [mifElapsed]
    normalElapseds += [normalElapsed]
mif = np.array(mifElapseds)
noMif = np.array(normalElapseds)
print "MIF: ", np.mean(mif), " - ", np.std(mif)
print "No MIF: ", np.mean(noMif), " - ", np.std(noMif)
[t, prob] = st.wilcoxon(mif, noMif)
if prob < 0.01:
    print "SIGNIFICANT t=", t, " prob = ", prob
else:
    print "insignificant t=", t, " prob = ", prob
#conn.close()
```
# Forecasting forced displacement
```
import pandas as pd
from time import time
import os
import json
import pickle
import numpy as np
from time import time
import seaborn as sns
import matplotlib.pyplot as plt
```
# Data transforms
<TBC>
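This section is still marked TBC; as a stopgap, here is a minimal sketch (with made-up country and indicator values) of the long-to-wide pivot performed in the cell below:

```python
import pandas as pd

# Toy long-form frame standing in for the loaded indicator data:
# one row per (country, year, indicator) observation.
toy_long = pd.DataFrame({
    'Country Code': ['AFG', 'AFG', 'AFG', 'MMR'],
    'year': [2000, 2000, 2001, 2000],
    'Indicator Code': ['POP', 'GDP', 'POP', 'POP'],
    'value': [20.0, 3.5, 21.0, 45.0],
})

# Wide form: one row per country/year, one column per indicator.
toy_wide = pd.pivot_table(toy_long, index=['Country Code', 'year'],
                          columns='Indicator Code', values='value')
toy_wide.reset_index(inplace=True)
print(toy_wide)
```

Missing (country, year, indicator) combinations simply become NaN columns in the wide form, which is why the modelling code later fills gaps.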
```
start_time = time()
with open("../configuration.json", 'rt') as infile:
    config = json.load(infile)
sources = [os.path.join("..", config['paths']['output'],
                        d['name'],
                        'data.csv') for d in config['sources'] if d['name']]
# Generate a data frame with all indicators
df = pd.concat((pd.read_csv(f) for f in sources), sort=False, ignore_index=True)
# Summary stats
print("Sources : {}".format(len(sources)))
print("Shape : {} (rows) {} (columns)".format(*df.shape))
print("Geographies : {}".format(len(df['Country Name'].unique())))
print("Indicators : {}".format(len(df['Indicator Code'].unique())))
print("Temporal coverage : {} -> {}".format(df.year.min(), df.year.max()))
print("Null values : {}".format(sum(df['value'].isnull())))
print("\nLoaded data in {:3.2f} sec.".format(time() - start_time))
# Now arrange data in long form
data = pd.pivot_table(df, index=['Country Code', 'year'],
columns='Indicator Code', values='value')
# Consider country/year as features (and not an index)
data.reset_index(inplace=True)
print("Long form of size : {} (rows) {} (columns)".format(*data.shape))
# Define the subset to work with
countries = ['AFG', 'MMR']
# Features
idx = ['Country Code', 'year']
mm = ['ETH.TO.{}'.format(i) for i in ['DNK', 'GBR', 'ITA', 'SAU', 'SWE', 'ZAF']]
endo = ['UNHCR.OUT.AS', 'UNHCR.OUT.IDP', 'UNHCR.OUT.OOC',
'UNHCR.OUT.REF', 'UNHCR.OUT.RET', 'UNHCR.OUT.RETIDP']
# missing entirely
emdat = ['EMDAT.CPX.OCCURRENCE','EMDAT.CPX.TOTAL.DEATHS','EMDAT.CPX.TOTAL.AFFECTED','EMDAT.CPX.AFFECTED']
target = ['DRC.TOT.DISP']
allfeatures = list(set(data.columns.tolist()) - set(idx + mm + endo + target + emdat))
mmr_data = [f for f in allfeatures if f.startswith('MMR.NSO')]
features = {'AFG': list(set(allfeatures) - set(mmr_data)),
'MMR': list(set(allfeatures) - set(mmr_data))}
#allfeatures}
# filter
c1 = data['Country Code'].isin(countries)
c2 = data.year >= 1950
df = data.loc[c1 & c2, idx + allfeatures + target]
print("Filtered data has {} rows, {} columns.".format(*df.shape))
# data quality and coverage
for c in countries:
    idx = df['Country Code'] == c
    print("\n" + c + " ({} obs)".format(sum(idx)))
    print("Time period: {} -> {}".format(df.loc[idx, 'year'].min(), df[idx].year.max()))
    datapoints = df[idx].notna().sum()  # * 100 / len(df[idx])
    print(datapoints.sort_values(ascending=False)[:])
fig, ax = plt.subplots(1, len(countries),
                       figsize=(10, 4),
                       sharex='col', sharey='row')
for i, c in enumerate(countries):
    idx = df['Country Code'] == c
    ax[i].spy(df.loc[idx, allfeatures + target].values, origin='lower')
    ax[i].set_xlabel("Indicators")
    ax[i].set_ylabel("History")
    ax[i].set_xticks([])
    ax[i].set_yticks([])
    ax[i].set_title(c)
plt.savefig("img/coverage.png", dpi=200, bbox_inches='tight')
from pandas.plotting import autocorrelation_plot
sns.set(font_scale=1.0)
TARGETS = ['DRC.TOT.DISP']
for c in countries:
    fig, ax = plt.subplots(1, 1,
                           figsize=(4, 4),
                           sharex='col', sharey='row')
    for t in TARGETS:
        idx = df['Country Code'] == c
        tmp = df.loc[idx, ['year', t]]
        tmp = tmp[~tmp[t].isnull()]
        autocorrelation_plot(tmp, ax=ax)
        ax.set_title("{} {} (n={})".format(c, t, len(tmp)))
        ax.set_xlim([0, 20])
        ax.set_xlabel('Lag (years)');
    plt.show()
    plt.close()
    # plt.savefig("img/autocorrelation-{}.png".format(c), dpi=200, bbox_inches='tight')
    # plt.close()
# Scatter plots (per DRC request)
sns.set(style="white")
for c in countries:
    fig, ax = plt.subplots(1, 1,
                           figsize=(10, 4))
    for i, t in enumerate(TARGETS):
        idx = df['Country Code'] == c
        tmp = df.loc[idx, ['year', t]]
        tmp = tmp[~tmp[t].isnull()]
        tmp.sort_values(by='year', inplace=True)
        tmp['T-1'] = tmp[t].shift(1)
        ax.scatter(tmp[t], tmp['T-1'])
        ax.plot(tmp[t], tmp[t], '-r')
        ax.set_aspect('equal', adjustable='box')
        ax.set_title("{} {} (n={})".format(c, t, len(tmp)))
        ax.set_yticks([], [])
        ax.set_ylabel('time $t-1$')
        ax.set_xlabel('time $t$');
    plt.show()
    # plt.savefig("img/scatter-{}.png".format(c), dpi=200, bbox_inches='tight')
    # plt.close()
```
# Features
```
def lag_variables(data, var, lag):
    """
    Append lagged variables to frame.
    data - pandas data frame
    var - list of variable names to lag
    lag - integer
    """
    idx_cols = ['year', 'Country Code']
    fv = var + idx_cols
    tmp = data[fv].copy(deep=True)
    col_name = [v + ".T" + "{0:+}".format(lag) for v in var]
    tmp.rename(columns={k: v for (k, v) in zip(var, col_name)},
               inplace=True)
    tmp.year -= lag
    data = pd.merge(data, tmp, on=idx_cols, how='left')
    return data, col_name
def get_diff(fr):
    fr = fr.sort_values(by='year')
    tmp = fr.year
    res = fr.diff()
    res['year'] = tmp
    res.fillna(-10**9, inplace=True)  # sentinel value; note ** (power), since ^ is bitwise XOR
    return res
def generate_features(data,
                      training_years,
                      forecast_year,
                      country,
                      target_var,
                      feature_var,
                      differencing=False):
    """
    Generate a feature set for training/test
    data: pandas Dataframe in long form with all indicator variables
    training_years: Tuple showing min-max years to train on, e.g. (1995, 2010)
    forecast_year: test year, (2011)
    country: ISO3 code (e.g. 'AFG')
    target_var: variable name to forecast e.g. 'FD'
    feature_var: list of variables to include
    differencing: all features are differences
    returns:
    Dictionary with training, test data, along with the baseline
    Baseline is the latest flow in the training data.
    """
    # build the changes feature set
    if differencing:
        idx = data['Country Code'] == country
        data = data[idx].groupby(['Country Code']).apply(get_diff)
        data = data.reset_index()
    true_feature_var = [f for f in feature_var]
    print("Total # Features: {}".format(len(true_feature_var)))
    dcols = data.columns
    assert target_var in dcols,\
        "Target variable '{}' must be in data frame.".format(target_var)
    for fv in feature_var:
        assert fv in dcols,\
            "Feature variable '{}' does not exist.".format(fv)
    # Get the poor man's forecast as baseline
    dt = forecast_year - training_years[1]
    bv = data.loc[(data.year == forecast_year - dt) &
                  (data['Country Code'] == country), target_var].values[0]
    if not differencing:
        print("Baseline value: {} (year {})".format(bv, forecast_year - dt))
    # Get the true value
    try:
        tr = data.loc[(data.year == forecast_year) &
                      (data['Country Code'] == country), target_var].values[0]
        # print("True value: {} (year {})".format(tr, forecast_year))
    except IndexError:
        tr = np.nan
    # Target variable offset by a year (y_(t+dt))
    data, varname = lag_variables(data, [target_var], dt)
    true_target_var = varname[0]
    # Temporal filter: since the target variable is lagged, the training
    # year is one year prior.
    yl, yu = training_years
    t1 = data.year.between(*(yl, yu - dt))
    v1 = data.year == forecast_year - dt
    # Spatial filter
    t2 = data['Country Code'] == country
    # For an AR(1) we just include current year value
    if dt > 0:
        true_feature_var += [target_var]
    # Handle the missing features
    data = data.fillna(method='ffill').fillna(method='bfill')
    # Training data
    Xt = data.loc[t1 & t2, true_feature_var]
    yt = data.loc[t1 & t2, true_target_var]
    print("Xt shape: {} rows, {} cols".format(*Xt.shape))
    # Forecast/validation data
    Xv = data.loc[v1 & t2, true_feature_var]
    yv = data.loc[v1 & t2, true_target_var]
    # Drop missing training labels
    idx = ~pd.isnull(yt)
    yt = yt[idx]
    Xt = Xt[idx]
    return {'data': (Xt, yt, Xv, yv), 'baseline': bv, 'true': tr, 'Country code': country}
```
# Models
```
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn import linear_model
from sklearn.preprocessing import RobustScaler
from sklearn.compose import TransformedTargetRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn import metrics
# Variables to predict
TARGETS = ['DRC.TOT.DISP']
# Years ahead to predict
lag = [0, 1, 2, 3, 4, 5, 6]
PERIODS = [{'train_years': (1995, Y - lg),
'predict_year': Y,
'lag': lg} for Y in np.arange(2010, 2016, 1) for lg in lag]
# Models to evaluate
MODELS = [('XGBoost-0.001', Pipeline([("Estimator", GradientBoostingRegressor(n_estimators=500,
random_state=42,
max_depth=6,
learning_rate=0.001,
loss='ls'))]))]
"""
("SVM - standardized", Pipeline([("Standardize", RobustScaler(quantile_range=(10, 90))),
("Estimator", SVR(gamma='auto'))]))]
("SVM - log response", TransformedTargetRegressor(regressor=SVR(gamma='auto'),
func=np.log1p,
inverse_func=np.expm1)),
("SVM - log+robust", TransformedTargetRegressor(regressor=Pipeline([("Standardize", RobustScaler(quantile_range=(10, 90))),
("Estimator", SVR(gamma='auto'))]),
func=np.log1p,
inverse_func=np.expm1))]
""";
results = []
batch = 0
start_time = time()
for c in countries:
    for p in PERIODS:
        batch += 1
        # Generate problem instance
        d = generate_features(df,
                              p['train_years'],
                              p['predict_year'],
                              c,
                              TARGETS[0],
                              features[c])
        d2 = generate_features(df,
                               p['train_years'], p['predict_year'],
                               c, TARGETS[0], features[c], differencing=True)
        Xt, yt, Xv, yv = d['data']
        Xdt, ydt, Xdv, ydv = d2['data']
        for lbl, clf in MODELS:
            M = {}
            M['batch'] = batch
            M['period'] = p
            M['lag'] = p['lag']
            M['target'] = "FORCED_DISPLACEMENT"
            M['baseline'] = d['baseline']
            M['true'] = yv.values[0]
            M['country'] = c
            # And finally run - base model
            M['model'] = lbl
            clf.fit(Xt, yt)
            fc = clf.predict(Xv)
            M['forecast_base'] = fc[0]
            # differencing run
            clf.fit(Xdt, ydt)
            fd = clf.predict(Xdv)
            M['forecast_diff'] = fd[0] + d['baseline']
            # ensemble - mean
            M['forecast'] = 0.5 * (fc[0] + fd[0] + d['baseline'])
            # MAE
            M['mae'] = metrics.mean_absolute_error(yv, [M['forecast']])
            results.append(M)
with open("result.pkl", 'wb') as outfile:
    pickle.dump(results, outfile)
print("Done with {} runs in {:3.2f} sec.".format(len(results), time() - start_time))
import numpy as np
with open("result.pkl", 'rb') as outfile:
    print("Using cached results.")
    results = pickle.load(outfile)
df_acc = pd.DataFrame(results)
# lagger = [1, 2, 3, 4, 5]
# df_acc = df_acc[df_acc.lag.isin(lagger)]
def mape(t, y): return np.mean(np.abs((t - y) / t))
def quality(x):
    y_pred = x.forecast
    y_pred_base = x.forecast_base
    y_pred_diff = x.forecast_diff
    y_true = x.true
    y_baseline = x.baseline
    # Report all four MAPE variants: ensemble forecast, base model,
    # differenced model, and the naive baseline.
    return pd.Series({
        'mape-prediction': mape(y_true, y_pred),
        'mape-pred-base': mape(y_true, y_pred_base),
        'mape-pred-diff': mape(y_true, y_pred_diff),
        'mape-baseline': mape(y_true, y_baseline)})
groups = df_acc.groupby(['country', 'model']).apply(quality)
(groups
.style
.set_properties(**{'text-align': 'right'})
.format({"mape-{}".format(c): "{:.1%}" for c in ['prediction', 'baseline', 'pred-base', 'pred-diff']}))
from random import sample
def confidence_interval(D, alpha=0.95, resample=1000):
    """
    Empirical bootstrapped confidence interval for distribution
    D - source distribution
    alpha - 95% confidence interval for mean
    resample - number of resamples to perform
    """
    # resample
    C = [np.mean(np.random.choice(D, len(D))) for i in np.arange(0, resample)]
    p1 = ((1.0 - alpha) / 2.0) * 100
    p2 = (alpha + ((1.0 - alpha) / 2.0)) * 100
    lower, upper = np.percentile(C, p1), np.percentile(C, p2)
    return lower, upper

def CI(x):
    l, u = confidence_interval(x.mae)
    return pd.Series({'lower': l, 'upper': u})
idx = df_acc.model == 'XGBoost-0.001'
tmp = df_acc[idx].groupby(['country', 'model', 'lag']).apply(CI)
CI = tmp.reset_index(level=[1]).drop(columns='model').to_dict(orient='index')
CI
from itertools import product
# selected model
clf = Pipeline([("Estimator", GradientBoostingRegressor(n_estimators=500,
random_state=42,
max_depth=6,
learning_rate=0.1,
loss='ls'))])
BASE_YEAR = 2018
result = []
for c in countries:
    curr_for = None
    for lg in lag:
        # Generate problem instance
        d = generate_features(df, (1995, BASE_YEAR), BASE_YEAR + lg, c,
                              "DRC.TOT.DISP",
                              features[c])
        d2 = generate_features(df,
                               (1995, BASE_YEAR), BASE_YEAR + lg, c,
                               "DRC.TOT.DISP", features[c], differencing=True)
        Xt, yt, Xv, yv = d['data']
        Xdt, ydt, Xdv, ydv = d2['data']
        # Fit and predict
        clf.fit(Xt, yt)
        fc = clf.predict(Xv)
        # differencing run
        clf.fit(Xdt, ydt)
        fd = clf.predict(Xdv)
        if curr_for is None:
            curr_for = d['baseline']
        M = {}
        fc_diff = fd[0] + curr_for
        curr_for = fc_diff
        M['truth'] = yv.values[0]
        M['country'] = c
        M['year'] = BASE_YEAR + lg
        M['forecast'] = fc[0]
        # ensemble - mean
        #M['forecast'] = 0.5 * (fc[0] + fc_diff)
        # CI
        key = c, lg
        rn = (CI[key]['upper'] - CI[key]['lower']) / 2.0
        M['CI_low'] = M['forecast'] - rn
        M['CI_high'] = M['forecast'] + rn
        result.append(M)
f, axarr = plt.subplots(len(countries), figsize=(15, 18), sharex=False)
pred = pd.DataFrame(result)
for i, c in enumerate(countries):
    # plot raw
    idx = (df['Country Code'] == c) & (df['year'] <= BASE_YEAR)
    obs_x = df.loc[idx, 'year']
    obs_y = df.loc[idx, 'DRC.TOT.DISP']
    axarr[i].plot(obs_x, obs_y, '-b.', markersize=10, label='observation')
    # plot the predictions
    c1 = pred.country == c
    pred_x = pred.loc[c1, 'year'].values
    pred_y = pred.loc[c1, 'forecast'].values
    true_y = pred.loc[c1, 'truth'].values
    ci_low_y = pred.loc[c1, 'CI_low'].values
    ci_high_y = pred.loc[c1, 'CI_high'].values
    axarr[i].plot(pred_x, pred_y, 'rs', label=u'Forecast')
    axarr[i].fill(np.concatenate([pred_x, pred_x[::-1]]),
                  np.concatenate([ci_high_y, ci_low_y[::-1]]),
                  alpha=.5, fc='g', ec='None', label='95% interval')
    # true value
    # TODO: fix bug that includes future unobserved values
    lastdata = 2018
    idx = pred_x <= lastdata
    axarr[i].plot(pred_x[idx], true_y[idx], 'xb', label=u'observed')
    axarr[i].grid(True)
    axarr[i].legend(loc=0)
    axarr[i].set_xlim([1995, BASE_YEAR + 8])
    axarr[i].set_xticks(np.arange(1995, BASE_YEAR + 8, 1))
    axarr[i].set_title(c)
plt.suptitle("Forecasts for {} ({}-year ahead)".format(BASE_YEAR, len(lag) - 1))
plt.savefig("img/pred-2018.png");
```
<a href="https://colab.research.google.com/github/Educat8n/Reinforcement-Learning-for-Game-Playing-and-More/blob/main/Module3/Module_3.1_DQN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Module 3: DRL Algorithm Implementations

# DQN to play Atari
```
import gym
import sys
import random
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from datetime import datetime
from collections import deque
from gym import spaces
import numpy as np
from gym.spaces.box import Box
from gym.core import Wrapper, ObservationWrapper
import cv2
## import packages
import tensorflow.keras as K
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Dense, Dropout, Conv2D, MaxPooling2D, Flatten, add, Embedding, Conv2DTranspose, GlobalMaxPooling2D, Input, UpSampling2D, Reshape, average
def DQNAgent(state_shape, n_actions, LEARN_RATE=0.1):
    model = Sequential()
    model.add(Conv2D(32, (8, 8), strides=4, activation='relu', input_shape=state_shape))
    model.add(Conv2D(64, (4, 4), strides=2, activation='relu'))
    model.add(Conv2D(64, (3, 3), strides=1, activation='relu'))
    model.add(Conv2D(1024, (7, 7), strides=1, activation='relu'))
    model.add(Flatten())
    model.add(Dense(n_actions, activation='linear'))
    model.summary()
    opt = K.optimizers.RMSprop(learning_rate=LEARN_RATE)  # `lr` was renamed to `learning_rate` in TF2
    model.compile(loss="mean_squared_error", optimizer=opt)
    return model
```

```
class FrameBuffer(Wrapper):  # Buffer frames together as observation space
    def __init__(self, env, n_frames=4, dim_order='tensorflow'):
        """A gym wrapper that reshapes, crops and scales image into the desired shapes"""
        super(FrameBuffer, self).__init__(env)
        self.dim_order = dim_order
        if dim_order == 'tensorflow':
            height, width, n_channels = env.observation_space.shape
            # Multiply channels dimension by number of frames
            obs_shape = [height, width, n_channels * n_frames]
        else:
            raise ValueError('dim_order should be "tensorflow" or "pytorch", got {}'.format(dim_order))
        self.observation_space = Box(0.0, 1.0, obs_shape)
        self.framebuffer = np.zeros(obs_shape, 'float32')

    def reset(self):
        """resets breakout, returns initial frames"""
        self.framebuffer = np.zeros_like(self.framebuffer)
        self.update_buffer(self.env.reset())
        return self.framebuffer

    def step(self, action):
        """plays breakout for 1 step, returns frame buffer"""
        new_img, reward, done, info = self.env.step(action)
        self.update_buffer(new_img)
        return self.framebuffer, reward, done, info

    def update_buffer(self, img):
        if self.dim_order == 'tensorflow':
            offset = self.env.observation_space.shape[-1]
            axis = -1
            cropped_framebuffer = self.framebuffer[:, :, :-offset]
            self.framebuffer = np.concatenate([img, cropped_framebuffer], axis=axis)

class PreprocessAtari(ObservationWrapper):  # Grayscale, Scaling and Cropping
    def __init__(self, env):
        """A gym wrapper that crops, scales image into the desired shapes and grayscales it."""
        super(PreprocessAtari, self).__init__(env)
        self.img_size = (84, 84)
        self.observation_space = Box(0.0, 1.0, (self.img_size[0], self.img_size[1], 1))

    def observation(self, img):
        """what happens to each observation"""
        # crop image (top and bottom, top from 34, bottom remove last 16)
        img = img[34:-16, :, :]
        # resize image
        img = cv2.resize(img, self.img_size)
        img = img.mean(-1, keepdims=True)
        img = img.astype('float32') / 255.
        return img
%%capture
!wget http://www.atarimania.com/roms/Roms.rar
!mkdir /content/ROM/
!unrar e /content/Roms.rar /content/ROM/
!python -m atari_py.import_roms /content/ROM/
env = gym.make("BreakoutDeterministic-v4")
print(f"The original observation space is {env.observation_space}")
env = PreprocessAtari(env)
print(f"The preprocessed observation space is {env.observation_space}")
env = FrameBuffer(env, n_frames=4, dim_order='tensorflow')
print(f"The new observation space is {env.observation_space}")
obs = env.reset()
plt.title("Agent observation (4 frames: left most recent)") ##
plt.imshow(obs.transpose([0,2,1]).reshape([env.observation_space.shape[0],-1]), cmap='gray');
for i in range(3):
    obs, _, _, _ = env.step(env.action_space.sample())
plt.title("Agent observation (4 frames: left most recent)")
plt.imshow(obs.transpose([0,2,1]).reshape([env.observation_space.shape[0],-1]), cmap='gray');
def epsilon_greedy_policy(state, epsilon):
    """Pick actions given Q-values. Uses an epsilon-greedy exploration strategy."""
    if np.random.random() <= epsilon:  # Explore
        return env.action_space.sample()
    else:  # Exploit
        return np.argmax(agent.predict(tf.expand_dims(state, axis=0)))
agent = DQNAgent(env.observation_space.shape, env.action_space.n) # Local Network
target_model = DQNAgent(env.observation_space.shape, env.action_space.n) # Target Network
## Assign same weights to Q and Target Q
target_model.set_weights(agent.get_weights())
```
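The exploration rule used by `epsilon_greedy_policy` above, restated as a self-contained sketch (no environment or network needed, just an array of Q-values):

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng=np.random):
    """With probability epsilon pick a uniformly random action,
    otherwise the greedy (arg-max Q) one."""
    if rng.random() < epsilon:
        return rng.randint(len(q_values))
    return int(np.argmax(q_values))

q = np.array([0.1, 0.9, 0.3])
greedy = epsilon_greedy(q, 0.0)                          # epsilon=0 never explores
explored = {epsilon_greedy(q, 1.0) for _ in range(200)}  # epsilon=1 always explores
print(greedy, explored)
```

As epsilon is annealed from 1.0 towards `epsilon_min` during training, the policy shifts smoothly from pure exploration to mostly exploitation.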
## Training the DQN agent
```
buffer_size=5000
replay_buffer = deque(maxlen=buffer_size)
def train(epochs=400, gamma=1.0, epsilon=1.0, epsilon_min=0.01, epsilon_decay=0.995, batch_size=32):
    scores = deque(maxlen=100)
    avg_scores = []
    for e in range(epochs):
        state = env.reset()
        done = False
        i = 0
        while not done:  # Build memory
            action = epsilon_greedy_policy(state, epsilon)
            next_state, reward, done, _ = env.step(action)
            #print(next_state.shape)
            #replay_buffer.add(state, action, reward, next_state, done)
            replay_buffer.append((state, action, reward, next_state, done))
            state = next_state
            epsilon = max(epsilon_min, epsilon_decay * epsilon)  # decrease epsilon
            i += reward
        scores.append(i)
        mean_score = np.mean(scores)
        avg_scores.append(mean_score)
        if mean_score >= 3.5:  # Stop if a threshold is reached
            print('Solved after {} trials ✔'.format(e))
            return avg_scores
        if e % 10 == 0:  # Print info after every 10 episodes + Update target
            print('[Episode {}] - Average Score: {}'.format(e, mean_score))
            target_model.set_weights(agent.get_weights())  # Hard Update
        replay(batch_size)
    print('Did not solve after {} episodes 😞'.format(e))
    return avg_scores
```
## Training the DQN agent - Replay Function
```
def replay(batch_size, gamma=1.0):
    x_batch, y_batch = [], []
    minibatch = random.sample(replay_buffer, min(len(replay_buffer), batch_size))
    for state, action, reward, next_state, done in minibatch:
        y_target = target_model.predict(tf.expand_dims(state, axis=0))
        y_target[0][action] = reward if done else reward + gamma * np.max(agent.predict(tf.expand_dims(next_state, axis=0))[0])
        x_batch.append(state)
        y_batch.append(y_target[0])
    agent.fit(np.array(x_batch), np.array(y_batch), batch_size=len(x_batch), verbose=0)
a = train()
```
## Training the DQN agent - Reward Plot
```
plt.plot(np.linspace(0,400,len(a),endpoint=False), np.asarray(a))
plt.xlabel('Episode Number')
plt.ylabel('Average Reward (Over 100 Episodes)')
plt.show()
```
Deep Learning
=============
Assignment 1
------------
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
```
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
# Config the matplotlib backend as plotting inline in IPython
%matplotlib inline
```
First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k labelled examples and the test set about 19,000. Given these sizes, it should be possible to train models quickly on any machine.
```
url = 'http://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None
def download_progress_hook(count, blockSize, totalSize):
    """A hook to report the progress of a download. This is mostly intended for users
    with slow internet connections. Prints a percentage at every 5% step and a dot at
    every other 1% change in download progress.
    """
    global last_percent_reported
    percent = int(count * blockSize * 100 / totalSize)

    if last_percent_reported != percent:
        if percent % 5 == 0:
            sys.stdout.write("%s%%" % percent)
            sys.stdout.flush()
        else:
            sys.stdout.write(".")
            sys.stdout.flush()
        last_percent_reported = percent
def maybe_download(filename, expected_bytes, force=False):
    """Download a file if not present, and make sure it's the right size."""
    if force or not os.path.exists(filename):
        print('Attempting to download:', filename)
        filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook)
        print('\nDownload Complete!')
    statinfo = os.stat(filename)
    if statinfo.st_size == expected_bytes:
        print('Found and verified', filename)
    else:
        raise Exception(
            'Failed to verify ' + filename + '. Can you get to it with a browser?')
    return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
```
Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
```
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
    root = os.path.splitext(os.path.splitext(filename)[0])[0]  # remove .tar.gz
    if os.path.isdir(root) and not force:
        # You may override by setting force=True.
        print('%s already present - Skipping extraction of %s.' % (root, filename))
    else:
        print('Extracting data for %s. This may take a while. Please wait.' % root)
        tar = tarfile.open(filename)
        sys.stdout.flush()
        tar.extractall()
        tar.close()
    data_folders = [
        os.path.join(root, d) for d in sorted(os.listdir(root))
        if os.path.isdir(os.path.join(root, d))]
    if len(data_folders) != num_classes:
        raise Exception(
            'Expected %d folders, one per class. Found %d instead.' % (
                num_classes, len(data_folders)))
    print(data_folders)
    return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
```
---
Problem 1
---------
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
---
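A possible starting point for Problem 1, purely a sketch: collect a few file paths per class folder and hand them to `IPython.display.Image`. The demo below runs against a throwaway directory tree standing in for `notMNIST_small`:

```python
import os
import random
import tempfile

def sample_image_paths(root, n_per_class=2, seed=0):
    """Pick up to n random image paths from each class folder under root."""
    rng = random.Random(seed)
    picks = {}
    for d in sorted(os.listdir(root)):
        folder = os.path.join(root, d)
        if os.path.isdir(folder):
            files = sorted(os.listdir(folder))
            picks[d] = [os.path.join(folder, f)
                        for f in rng.sample(files, min(n_per_class, len(files)))]
    return picks

# Stand-in tree with three fake classes; in the notebook, call
# sample_image_paths('notMNIST_small') and display each path with
# IPython.display.Image(filename=path).
demo_root = tempfile.mkdtemp()
for letter in 'ABC':
    os.makedirs(os.path.join(demo_root, letter))
    for i in range(5):
        open(os.path.join(demo_root, letter, 'img%d.png' % i), 'w').close()

picks = sample_image_paths(demo_root)
print(picks['A'])
```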
Now let's load the data in a more manageable format. Since, depending on your computer setup, you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable, we'll just skip them.
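The rescaling described above, `(pixel - 255/2) / 255`, maps raw 0-255 pixel values into roughly [-0.5, 0.5]:

```python
import numpy as np

pixel_depth = 255.0
raw = np.array([0.0, 128.0, 255.0])  # darkest, middle, brightest
normalized = (raw - pixel_depth / 2) / pixel_depth
print(normalized)
```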
```
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
    """Load the data for a single letter label."""
    image_files = os.listdir(folder)
    dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
                         dtype=np.float32)
    print(folder)
    num_images = 0
    for image in image_files:
        image_file = os.path.join(folder, image)
        try:
            image_data = (ndimage.imread(image_file).astype(float) -
                          pixel_depth / 2) / pixel_depth
            if image_data.shape != (image_size, image_size):
                raise Exception('Unexpected image shape: %s' % str(image_data.shape))
            dataset[num_images, :, :] = image_data
            num_images = num_images + 1
        except IOError as e:
            print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
    dataset = dataset[0:num_images, :, :]
    if num_images < min_num_images:
        raise Exception('Many fewer images than expected: %d < %d' %
                        (num_images, min_num_images))
    print('Full dataset tensor:', dataset.shape)
    print('Mean:', np.mean(dataset))
    print('Standard deviation:', np.std(dataset))
    return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
    dataset_names = []
    for folder in data_folders:
        set_filename = folder + '.pickle'
        dataset_names.append(set_filename)
        if os.path.exists(set_filename) and not force:
            # You may override by setting force=True.
            print('%s already present - Skipping pickling.' % set_filename)
        else:
            print('Pickling %s.' % set_filename)
            dataset = load_letter(folder, min_num_images_per_class)
            try:
                with open(set_filename, 'wb') as f:
                    pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
            except Exception as e:
                print('Unable to save data to', set_filename, ':', e)
    return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
```
---
Problem 2
---------
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
---
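For Problem 2, one possible sketch is to plot the first few slices of a letter array with matplotlib. The demo below uses random data shaped like a pickled letter set; in the notebook you would load one of the `.pickle` files produced above instead:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless-safe default; unnecessary inside the notebook
import matplotlib.pyplot as plt

def show_dataset_samples(dataset, n=8):
    """Plot the first n images of a (num_images, h, w) float array."""
    fig, axes = plt.subplots(1, n, figsize=(n, 1.5))
    for ax, img in zip(axes, dataset[:n]):
        ax.imshow(img, cmap='gray')
        ax.axis('off')
    return fig

# Random stand-in for a pickled letter set (28x28 floats in [-0.5, 0.5]).
demo_set = np.random.uniform(-0.5, 0.5, size=(10, 28, 28)).astype(np.float32)
demo_fig = show_dataset_samples(demo_set)
```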
---
Problem 3
---------
Another check: we expect the data to be balanced across classes. Verify that.
---
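For Problem 3, a straightforward balance check is a per-class count. The sketch below runs on synthetic labels; in the notebook you could apply it to `train_labels` after the merge step, or simply compare the lengths of the pickled letter sets:

```python
import numpy as np

def class_counts(labels):
    """Number of examples per integer class label (0..9 here)."""
    return np.bincount(labels)

# Synthetic, perfectly balanced labels as a stand-in.
demo_labels = np.repeat(np.arange(10), 100)
demo_counts = class_counts(demo_labels)
print(demo_counts)
print('max/min ratio:', demo_counts.max() / demo_counts.min())
```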
Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
```
def make_arrays(nb_rows, img_size):
    if nb_rows:
        dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
        labels = np.ndarray(nb_rows, dtype=np.int32)
    else:
        dataset, labels = None, None
    return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
    num_classes = len(pickle_files)
    valid_dataset, valid_labels = make_arrays(valid_size, image_size)
    train_dataset, train_labels = make_arrays(train_size, image_size)
    vsize_per_class = valid_size // num_classes
    tsize_per_class = train_size // num_classes

    start_v, start_t = 0, 0
    end_v, end_t = vsize_per_class, tsize_per_class
    end_l = vsize_per_class + tsize_per_class
    for label, pickle_file in enumerate(pickle_files):
        try:
            with open(pickle_file, 'rb') as f:
                letter_set = pickle.load(f)
                # let's shuffle the letters to have random validation and training set
                np.random.shuffle(letter_set)
                if valid_dataset is not None:
                    valid_letter = letter_set[:vsize_per_class, :, :]
                    valid_dataset[start_v:end_v, :, :] = valid_letter
                    valid_labels[start_v:end_v] = label
                    start_v += vsize_per_class
                    end_v += vsize_per_class
                train_letter = letter_set[vsize_per_class:end_l, :, :]
                train_dataset[start_t:end_t, :, :] = train_letter
                train_labels[start_t:end_t] = label
                start_t += tsize_per_class
                end_t += tsize_per_class
        except Exception as e:
            print('Unable to process data from', pickle_file, ':', e)
            raise
    return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
```
Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
```
def randomize(dataset, labels):
    permutation = np.random.permutation(labels.shape[0])
    shuffled_dataset = dataset[permutation, :, :]
    shuffled_labels = labels[permutation]
    return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
```
---
Problem 4
---------
Convince yourself that the data is still good after shuffling!
---
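A sketch of the kind of sanity check Problem 4 asks for, run here on synthetic stand-in arrays; in the notebook you would keep copies of the arrays from before `randomize` and compare them with the shuffled ones:

```python
import numpy as np

# Synthetic stand-ins for (train_dataset, train_labels).
rng = np.random.RandomState(0)
demo_dataset = rng.rand(100, 28, 28).astype(np.float32)
demo_labels = rng.randint(0, 10, 100).astype(np.int32)

perm = rng.permutation(demo_labels.shape[0])
shuffled_dataset, shuffled_labels = demo_dataset[perm], demo_labels[perm]

# A permutation leaves the label histogram unchanged...
assert np.array_equal(np.bincount(demo_labels, minlength=10),
                      np.bincount(shuffled_labels, minlength=10))
# ...and each image must still carry its own label.
ok = all(shuffled_labels[i] == demo_labels[p] and
         np.array_equal(shuffled_dataset[i], demo_dataset[p])
         for i, p in enumerate(perm))
print('pairing preserved:', ok)
```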
Finally, let's save the data for later reuse:
```
pickle_file = 'notMNIST.pickle'
try:
    f = open(pickle_file, 'wb')
    save = {
        'train_dataset': train_dataset,
        'train_labels': train_labels,
        'valid_dataset': valid_dataset,
        'valid_labels': valid_labels,
        'test_dataset': test_dataset,
        'test_labels': test_labels,
    }
    pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
    f.close()
except Exception as e:
    print('Unable to save data to', pickle_file, ':', e)
    raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
```
---
Problem 5
---------
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions:
- What about near duplicates between datasets? (images that are almost identical)
- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.
---
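One possible approach to Problem 5 is to hash image bytes and count collisions; this catches exact duplicates only, and near-duplicates would need a fuzzier key (e.g. hashing after rounding or downsampling). The demo plants known duplicates in synthetic data; in the notebook you would call it on `test_dataset` vs `train_dataset` and so on:

```python
import numpy as np

def overlap_count(a, b):
    """Count images in `a` that also appear byte-for-byte in `b`."""
    b_hashes = {img.tobytes() for img in b}
    return sum(img.tobytes() in b_hashes for img in a)

# Synthetic stand-ins with 5 planted exact duplicates.
rng = np.random.RandomState(0)
toy_train = rng.rand(100, 28, 28).astype(np.float32)
toy_test = rng.rand(20, 28, 28).astype(np.float32)
toy_test[:5] = toy_train[:5]
print(overlap_count(toy_test, toy_train))
```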
---
Problem 6
---------
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.
Optional question: train an off-the-shelf model on all the data!
---
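A minimal sketch of the suggested approach, using synthetic blobs in place of the real arrays (with notMNIST you would flatten the images first, e.g. `train_dataset.reshape(len(train_dataset), -1)`):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for (train_dataset, train_labels): two Gaussian blobs
# in 784 dimensions, shuffled, with the tail held out as a test set.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(500, 784) + 1.0, rng.randn(500, 784) - 1.0])
y = np.array([0] * 500 + [1] * 500)
shuffle = rng.permutation(len(y))
X, y = X[shuffle], y[shuffle]
X_test, y_test = X[800:], y[800:]

# Accuracy as a function of training-set size.
for n in (50, 100, 500):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[:n], y[:n])
    print(n, clf.score(X_test, y_test))
```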
```
from scipy import linalg
import matplotlib
```
This algorithm was taken from scikit-learn v0.13 (the current version is an equivalent Cython implementation); it just adds the `callback` argument.
```
def isotonic_regression(y, weight=None, y_min=None, y_max=None, callback=None):
    """Solve the isotonic regression model::

        min sum w[i] (y[i] - y_[i]) ** 2

        subject to y_min = y_[1] <= y_[2] ... <= y_[n] = y_max

    where:
        - y[i] are inputs (real numbers)
        - y_[i] are fitted
        - w[i] are optional strictly positive weights (default to 1.0)

    Parameters
    ----------
    y : iterable of floating-point values
        The data.
    weight : iterable of floating-point values, optional, default: None
        Weights on each point of the regression.
        If None, weight is set to 1 (equal weights).
    y_min : optional, default: None
        If not None, set the lowest value of the fit to y_min.
    y_max : optional, default: None
        If not None, set the highest value of the fit to y_max.

    Returns
    -------
    y_ : list of floating-point values
        Isotonic fit of y.

    References
    ----------
    "Active set algorithms for isotonic regression; A unifying framework"
    by Michael J. Best and Nilotpal Chakravarti, section 3.
    """
    if weight is None:
        weight = np.ones(len(y), dtype=y.dtype)
    if y_min is not None or y_max is not None:
        y = np.copy(y)
        weight = np.copy(weight)
        C = np.dot(weight, y * y) * 10  # upper bound on the cost function
        if y_min is not None:
            y[0] = y_min
            weight[0] = C
        if y_max is not None:
            y[-1] = y_max
            weight[-1] = C

    active_set = [(weight[i] * y[i], weight[i], [i, ])
                  for i in range(len(y))]
    current = 0
    counter = 0

    while current < len(active_set) - 1:
        value0, value1, value2 = 0, 0, np.inf
        weight0, weight1, weight2 = 1, 1, 1
        while value0 * weight1 <= value1 * weight0 and \
                current < len(active_set) - 1:
            value0, weight0, idx0 = active_set[current]
            value1, weight1, idx1 = active_set[current + 1]
            if value0 * weight1 <= value1 * weight0:
                current += 1
                if callback is not None:
                    callback(y, active_set, counter, idx1)
                counter += 1

        if current == len(active_set) - 1:
            break

        # merge two groups
        value0, weight0, idx0 = active_set.pop(current)
        value1, weight1, idx1 = active_set.pop(current)
        active_set.insert(current,
                          (value0 + value1,
                           weight0 + weight1, idx0 + idx1))
        while value2 * weight0 > value0 * weight2 and current > 0:
            value0, weight0, idx0 = active_set[current]
            value2, weight2, idx2 = active_set[current - 1]
            if weight0 * value2 >= weight2 * value0:
                active_set.pop(current)
                active_set[current - 1] = (value0 + value2, weight0 + weight2,
                                           idx0 + idx2)
                current -= 1

    solution = np.empty(len(y))
    if callback is not None:
        callback(y, active_set, counter + 1, idx1)
        callback(y, active_set, counter + 2, idx1)
    for value, weight, idx in active_set:
        solution[idx] = value / weight
    return solution
import numpy as np
import pylab as pl
from matplotlib.collections import LineCollection
from sklearn.linear_model import LinearRegression
from sklearn.isotonic import IsotonicRegression
from sklearn.utils import check_random_state
def cb(y, active_set, counter, current):
    solution = np.empty(len(y))
    for value, weight, idx in active_set:
        solution[idx] = value / weight
    fig = matplotlib.pyplot.gcf()
    fig.set_size_inches(9.5, 6.5)
    color = y.copy()
    pl.scatter(np.arange(len(y)), solution, s=50, cmap=pl.cm.Spectral, vmin=50, c=color)
    pl.scatter([np.arange(len(y))[current]], [solution[current]], s=200, marker='+', color='red')
    pl.xlim((0, 40))
    pl.ylim((50, 300))
    pl.savefig('isotonic_%03d.png' % counter)
    pl.show()
n = 40
x = np.arange(n)
rs = check_random_state(0)
y = rs.randint(-50, 50, size=(n,)) + 50. * np.log(1 + np.arange(n))
###############################################################################
# Fit IsotonicRegression and LinearRegression models
y_ = isotonic_regression(y, callback=cb)
import pylab as pl
from matplotlib.collections import LineCollection
from sklearn.linear_model import LinearRegression
from sklearn.isotonic import IsotonicRegression
from sklearn.utils import check_random_state
n = 100
y = np.array([0]*50+[1]*50)
rs = check_random_state(0)
x = np.random.random(size=(n,)) #you can interpret it as the outputs of the SVM or any other model
res = sorted(list(zip(x, y)), key=lambda pair: pair[0])
x = []
y = []
for i, j in res:
    x.append(i)
    y.append(j)
x = np.array(x)
y = np.array(y)
###############################################################################
# Fit IsotonicRegression and LinearRegression models
ir = IsotonicRegression()
y_ = ir.fit_transform(x, y)
lr = LinearRegression()
lr.fit(x[:, np.newaxis], y) # x needs to be 2d for LinearRegression
###############################################################################
# plot result
segments = [[[i, y[i]], [i, y_[i]]] for i in range(n)]
lc = LineCollection(segments, zorder=0)
lc.set_array(np.ones(len(y)))
lc.set_linewidths(0.5 * np.ones(n))
fig = pl.figure()
pl.plot(x, y, 'r.', markersize=12)
pl.plot(x, y_, 'g.-', markersize=12)
pl.plot(x, lr.predict(x[:, np.newaxis]), 'b-')
pl.gca().add_collection(lc)
pl.legend(('Data', 'Isotonic Fit', 'Linear Fit'), loc='lower right')
pl.title('Isotonic regression')
fig = matplotlib.pyplot.gcf()
fig.set_size_inches(9.5,6.5)
pl.savefig('inverse_isotonic.png')
pl.show()
```
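The active-set routine above is an instance of the pool-adjacent-violators idea: merge neighboring blocks until block means are non-decreasing, then broadcast each block mean back to its members. A compact unweighted NumPy sketch of that idea (without the callback machinery) can make the merging step easier to follow:

```python
import numpy as np

def pava(y):
    """Pool Adjacent Violators: merge neighboring blocks until the
    block means are non-decreasing, then expand each mean back out."""
    blocks = [[v, 1] for v in y]           # [sum, count] per block
    i = 0
    while i < len(blocks) - 1:
        s0, n0 = blocks[i]
        s1, n1 = blocks[i + 1]
        if s0 / n0 > s1 / n1:              # violator: merge and back up
            blocks[i:i + 2] = [[s0 + s1, n0 + n1]]
            i = max(i - 1, 0)
        else:
            i += 1
    out = []
    for s, n in blocks:
        out.extend([s / n] * n)
    return np.array(out)

fit = pava([1.0, 3.0, 2.0, 4.0])
print(fit)  # [1.  2.5 2.5 4. ]
```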
---
_You are currently looking at **version 1.2** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-social-network-analysis/resources/yPcBs) course resource._
---
# Assignment 2 - Network Connectivity
In this assignment you will go through the process of importing and analyzing an internal email communication network between employees of a mid-sized manufacturing company.
Each node represents an employee and each directed edge between two nodes represents an individual email. The left node represents the sender and the right node represents the recipient.
```
import networkx as nx
# This line must be commented out when submitting to the autograder
#!head email_network.txt
```
### Question 1
Using networkx, load up the directed multigraph from `email_network.txt`. Make sure the node names are strings.
*This function should return a directed multigraph networkx graph.*
```
def answer_one():
    # Your Code Here
    Multi_Di_G = nx.read_edgelist('./email_network.txt', delimiter='\t',
                                  create_using=nx.MultiDiGraph(),
                                  data=[('time', int)])
    return Multi_Di_G
```
### Question 2
How many employees and emails are represented in the graph from Question 1?
*This function should return a tuple (#employees, #emails).*
```
def answer_two():
    # Your Code Here
    Multi_Di_G = answer_one()
    employees = len(Multi_Di_G.nodes())
    emails = len(Multi_Di_G.edges())
    return (employees, emails)
```
### Question 3
* Part 1. Assume that information in this company can only be exchanged through email.
When an employee sends an email to another employee, a communication channel has been created, allowing the sender to provide information to the receiver, but not vice versa.
Based on the emails sent in the data, is it possible for information to go from every employee to every other employee?
* Part 2. Now assume that a communication channel established by an email allows information to be exchanged both ways.
Based on the emails sent in the data, is it possible for information to go from every employee to every other employee?
*This function should return a tuple of bools (part1, part2).*
```
def answer_three():
    # Your Code Here
    Multi_Di_G = answer_one()
    part1 = nx.is_strongly_connected(Multi_Di_G)
    part2 = nx.is_weakly_connected(Multi_Di_G)
    return (part1, part2)

answer_three()
```
### Question 4
How many nodes are in the largest (in terms of nodes) weakly connected component?
*This function should return an int.*
```
def answer_four():
    # Your Code Here
    Multi_Di_Graph = answer_one()
    result = sorted(nx.weakly_connected_components(Multi_Di_Graph), key=lambda x: len(x), reverse=True)
    return len(result[0])
```
### Question 5
How many nodes are in the largest (in terms of nodes) strongly connected component?
*This function should return an int*
```
def answer_five():
    # Your Code Here
    Multi_Di_Graph = answer_one()
    result = sorted(nx.strongly_connected_components(Multi_Di_Graph), key=lambda x: len(x), reverse=True)
    return len(result[0])
```
### Question 6
Using the NetworkX function strongly_connected_component_subgraphs, find the subgraph of nodes in a largest strongly connected component.
Call this graph G_sc.
*This function should return a networkx MultiDiGraph named G_sc.*
```
def answer_six():
    # Your Code Here
    Multi_Di_G = answer_one()
    G_sc = max(nx.strongly_connected_component_subgraphs(Multi_Di_G), key=lambda x: len(x))
    return G_sc
```
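Note that `strongly_connected_component_subgraphs` was removed in NetworkX 2.4. On newer releases an equivalent sketch builds the induced subgraph from the largest component's node set (shown here on a tiny made-up MultiDiGraph):

```python
import networkx as nx

def largest_scc_subgraph(G):
    # Take the node set of the largest strongly connected component,
    # then materialize the induced subgraph as its own copy.
    nodes = max(nx.strongly_connected_components(G), key=len)
    return G.subgraph(nodes).copy()

G = nx.MultiDiGraph([(1, 2), (2, 1), (2, 3)])  # SCC {1, 2}, plus a tail
G_sc = largest_scc_subgraph(G)
print(sorted(G_sc.nodes()))  # [1, 2]
```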
### Question 7
What is the average distance between nodes in G_sc?
*This function should return a float.*
```
def answer_seven():
    # Your Code Here
    G_sc = answer_six()
    avg_distance = nx.average_shortest_path_length(G_sc)
    return avg_distance

answer_seven()
```
### Question 8
What is the largest possible distance between two employees in G_sc?
*This function should return an int.*
```
def answer_eight():
    # Your Code Here
    G_sc = answer_six()
    diameter = nx.diameter(G_sc)
    return diameter

answer_eight()
```
### Question 9
What is the set of nodes in G_sc with eccentricity equal to the diameter?
*This function should return a set of the node(s).*
```
def answer_nine():
    # Your Code Here
    G_sc = answer_six()
    periphery = set(nx.periphery(G_sc))
    return periphery

answer_nine()
```
### Question 10
What is the set of node(s) in G_sc with eccentricity equal to the radius?
*This function should return a set of the node(s).*
```
def answer_ten():
    # Your Code Here
    G_sc = answer_six()
    center = set(nx.center(G_sc))
    return center

answer_ten()
```
### Question 11
Which node in G_sc is connected to the most other nodes by a shortest path of length equal to the diameter of G_sc?
How many nodes are connected to this node?
*This function should return a tuple (name of node, number of satisfied connected nodes).*
```
def answer_eleven():
    # Your Code Here
    G_sc = answer_six()
    periphery = set(nx.periphery(G_sc))
    diameter = nx.diameter(G_sc)
    max_nodes = -1
    leading_node = 0
    for node in periphery:
        node_dictionary = nx.shortest_path_length(G_sc, node)
        diameter_node_count = list(node_dictionary.values()).count(diameter)
        if max_nodes < diameter_node_count:
            max_nodes = diameter_node_count
            leading_node = node
    return (leading_node, max_nodes)

answer_eleven()
```
### Question 12
Suppose you want to prevent communication from flowing to the node that you found in the previous question from any node in the center of G_sc, what is the smallest number of nodes you would need to remove from the graph (you're not allowed to remove the node from the previous question or the center nodes)?
*This function should return an integer.*
```
def answer_twelve():
    # Your Code Here
    G_sc = answer_six()
    center = nx.center(G_sc)[0]
    node, count = answer_eleven()
    remove_count = len(nx.minimum_node_cut(G_sc, center, node))
    return remove_count

answer_twelve()
```
### Question 13
Construct an undirected graph G_un using G_sc (you can ignore the attributes).
*This function should return a networkx Graph.*
```
def answer_thirteen():
    # Your Code Here
    G_sc = answer_six()
    deep_copy = G_sc.to_undirected()
    G_un = nx.Graph(deep_copy)
    return G_un
```
### Question 14
What is the transitivity and average clustering coefficient of graph G_un?
*This function should return a tuple (transitivity, avg clustering).*
```
def answer_fourteen():
    # Your Code Here
    G_un = answer_thirteen()
    transitivity = nx.transitivity(G_un)
    avg_clustering = nx.average_clustering(G_un)
    return (transitivity, avg_clustering)

answer_fourteen()
```
## This notebook allows the user to train their own version of the GPU model from scratch
- This notebook can also be run using the `2_train_gpu_model.py` file in this folder.
#### Notes
- The training data for the GPU model uses a separate file format. We have also uploaded training data in this format (the files we used for the complex and defined media conditions), so this training notebook is fully functional on the Google Cloud VM and on CodeOcean (the file can be found in this directory and is used below).
- Caution: saved models for each condition in the 'models_conditions' folder will be overwritten by running this code. We keep a backup of the complex media model in the 'models' folder in case this happens. As we have shown in the manuscript, the complex and defined media conditions have highly correlated expression levels, and training on defined media leads to equivalent prediction performance.
- <b>Also, please note that the PCC metric shown on the 'validation set' is not any of the test data we use in the paper. It is simply a held-out sample of the training-data experiment, as we explain elsewhere. This training data has significantly higher complexity and hence a much lower number of 'read replicates' per given sequence, so we carried out separate low-complexity library experiments to produce the test data.</b>
- We verified that training the model works on this machine.
### Pre-process the training data for the GPU model
```
import argparse
import copy
import csv
import ctypes
import itertools
import multiprocessing
import multiprocessing as mp
import os
import pickle
import pwd
import sys
import time
from datetime import datetime
from os import makedirs
from os.path import basename, dirname, exists, join, realpath, splitext

import h5py
import joblib
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy
import sklearn
import tensorflow as tf
from joblib import Parallel, delayed
from scipy.stats import *
from sklearn.metrics import *
from tensorflow import nn
from tensorflow.contrib import rnn
from tensorflow.examples.tutorials.mnist import input_data
from tqdm import tqdm
################################################ Final one used
### GET ONE HOT CODE FROM SEQUENCES; parallel code, quite fast
class OHCSeq:
    # Shared state read and written by the worker processes
    transformed = None
    data = None

def seq2feature(data):
    num_cores = multiprocessing.cpu_count() - 2
    nproc = np.min([16, num_cores])
    OHCSeq.data = data
    shared_array_base = mp.Array(ctypes.c_bool, len(data) * len(data[0]) * 4)
    shared_array = np.ctypeslib.as_array(shared_array_base.get_obj())
    shared_array = shared_array.reshape(len(data), 1, len(data[0]), 4)
    OHCSeq.transformed = shared_array
    pool = mp.Pool(nproc)
    r = pool.map(seq2feature_fill, range(len(data)))
    pool.close()
    pool.join()
    return OHCSeq.transformed

def seq2feature_fill(i):
    mapper = {'A': 0, 'C': 1, 'G': 2, 'T': 3, 'N': None}
    ### Make sure the length is 110 bp
    if len(OHCSeq.data[i]) > 110:
        OHCSeq.data[i] = OHCSeq.data[i][-110:]
    elif len(OHCSeq.data[i]) < 110:
        while len(OHCSeq.data[i]) < 110:
            OHCSeq.data[i] = 'N' + OHCSeq.data[i]
    # Note: 'N' maps to None, which NumPy treats as np.newaxis, so the
    # assignment below sets all four channels to True for an 'N' base.
    for j in range(len(OHCSeq.data[i])):
        OHCSeq.transformed[i][0][j][mapper[OHCSeq.data[i][j]]] = True
    return i
########GET ONE HOT CODE FROM SEQUENCES , parallel code, quite fast
################################################################
```
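As a quick, single-process sanity check of the encoding convention (the parallel class above is only needed for speed), each base maps to one of four boolean channels and sequences are left-padded to 110 bp. This sketch leaves `N` positions all-zero, which is an assumption on our part; the parallel implementation's `None` index effectively lights all four channels for `N`.

```python
import numpy as np

MAPPER = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

def one_hot(seq, length=110):
    seq = seq[-length:].rjust(length, 'N')   # trim, then left-pad to 110 bp
    arr = np.zeros((1, length, 4), dtype=bool)
    for j, base in enumerate(seq):
        if base in MAPPER:                   # 'N' stays all-zero here
            arr[0, j, MAPPER[base]] = True
    return arr

x = one_hot('ACGT')
print(x.shape)  # (1, 110, 4)
```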
### Load the training data file
```
model_conditions = 'defined' #'complex' or 'defined'
with open('./' + model_conditions + '_media_training_data.txt') as f:  # replace with the path to your raw data
    reader = csv.reader(f, delimiter="\t")
    d = list(reader)
```
### Extract the sequences and appropriately attach the constant flanks
```
sequences = [di[0] for di in d]
### Append N's if the sequencing output has a length different from 17+80+13 (80bp with constant flanks)
for i in tqdm(range(len(sequences))):
    if len(sequences[i]) > 110:
        sequences[i] = sequences[i][-110:]
    if len(sequences[i]) < 110:
        while len(sequences[i]) < 110:
            sequences[i] = 'N' + sequences[i]
```
### Convert the sequences to one hot code
```
onehot_sequences = seq2feature(np.asarray(sequences))
```
### Get the reverse complement of the sequence
We improved this implementation to make it faster for readers to run within a single notebook.
```
tab = str.maketrans("ACTGN", "TGACN")
def reverse_complement_table(seq):
    return seq.translate(tab)[::-1]
rc_sequences = [reverse_complement_table(seq) for seq in tqdm(sequences)]
rc_onehot_sequences = seq2feature(np.array(rc_sequences))
```
### Extract the expression corresponding to the sequences
```
expressions = [di[1] for di in d]
expressions = np.asarray(expressions)
expressions = expressions.astype('float')
expressions = np.reshape(expressions , [-1,1])
```
### Split the training data into two groups, _trX and _vaX; note that this is not the test data!
```
total_seqs = len(onehot_sequences)
_trX = onehot_sequences[int(total_seqs/10):]
_trX_rc = rc_onehot_sequences[int(total_seqs/10):]
_trY = expressions[int(total_seqs/10):]
_vaX = onehot_sequences[0:int(total_seqs/10)]
_vaX_rc = rc_onehot_sequences[0:int(total_seqs/10)]
_vaY = expressions[0:int(total_seqs/10)]
```
### Define hyperparameters and specify location for saving the model
We have saved an example for training to test the file in the user models folder
```
##MODEL FILE SAVING ADDRESSES AND MINIBATCH SIZES
# Training
best_dropout = 0.8
best_l2_coef = 0.0001
best_lr = 0.0005
_batch_size = 1024
_hyper_train_size = 2000
#_valid_size = 1024
_hidden = 256
_epochs = 5
_best_model_file = join('models_conditions',model_conditions+'_media','best_model.ckpt')
_best_model_file_hyper = join('models_conditions',model_conditions+'_media','hyper_search', 'best_model.ckpt')
for _file in [_best_model_file, _best_model_file_hyper]:
    if not exists(dirname(_file)):
        makedirs(dirname(_file))
```
### Define Model Architecture
```
####################################################################
####################################################################
### MODEL ARCHITECTURE
####################################################################
####################################################################
def weight_variable(shape):
    """Create a weight variable with appropriate initialization."""
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    """Create a bias variable with appropriate initialization."""
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W, stride=1):
    return tf.nn.conv2d(x, W, strides=[1, stride, stride, 1], padding='SAME')

def max_pool(x, stride=2, filter_size=2):
    return tf.nn.max_pool(x, ksize=[1, 1, 2, 1],
                          strides=[1, 1, 2, 1], padding='SAME')

def cross_entropy(y, y_real):
    return tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y, labels=y_real))

def build_two_fc_layers(x_inp, Ws, bs):
    h_fc1 = tf.nn.relu(tf.matmul(x_inp, Ws[0]) + bs[0])
    return tf.matmul(h_fc1, Ws[1]) + bs[1]

def cnn_model(X, hyper_params, scope):
    with tf.variable_scope(scope):
        global _hidden
        conv1_filter_dim1 = 30
        conv1_filter_dim2 = 4
        conv1_depth = _hidden
        conv2_filter_dim1 = 30
        conv2_filter_dim2 = 1
        conv2_depth = _hidden

        W_conv1 = weight_variable([1, conv1_filter_dim1, conv1_filter_dim2, conv1_depth])
        conv1 = conv2d(X, W_conv1)
        conv1 = tf.nn.bias_add(conv1, bias_variable([conv1_depth]))
        conv1 = tf.nn.relu(conv1)
        l_conv = conv1

        W_conv2 = weight_variable([conv2_filter_dim1, conv2_filter_dim2, conv1_depth, conv2_depth])
        conv2 = conv2d(conv1, W_conv2)
        conv2 = tf.nn.bias_add(conv2, bias_variable([conv2_depth]))
        conv2 = tf.nn.relu(conv2)

        regularization_term = hyper_params['l2'] * tf.reduce_mean(tf.abs(W_conv1)) + hyper_params['l2'] * tf.reduce_mean(tf.abs(W_conv2))
        cnn_model_output = conv2
    return cnn_model_output, regularization_term
def training(trX, trX_rc, trY, valX, valX_rc, valY, hyper_params, epochs, batch_size, best_model_file):
    tf.reset_default_graph()
    global _hidden
    lstm_num_hidden = _hidden
    fc_num_hidden = _hidden
    num_classes = 1
    num_bins = 256
    conv3_filter_dim1 = 30
    conv3_filter_dim2 = 1
    conv3_depth = _hidden
    conv4_filter_dim1 = 30
    conv4_filter_dim2 = 1
    conv4_depth = _hidden

    # Input and output
    X = tf.placeholder("float", [None, 1, 110, 4])
    X_rc = tf.placeholder("float", [None, 1, 110, 4])
    Y = tf.placeholder("float", [None, 1])
    dropout_keep_probability = tf.placeholder_with_default(1.0, shape=())

    # f is the forward sequence
    output_f, regularization_term_f = cnn_model(X, {'dropout_keep': hyper_params['dropout_keep'], 'l2': hyper_params['l2']}, "f")
    # rc is the reverse complement of that sequence
    output_rc, regularization_term_rc = cnn_model(X_rc, {'dropout_keep': hyper_params['dropout_keep'], 'l2': hyper_params['l2']}, "rc")

    ### CONCATENATE output_f and output_rc
    concatenated_f_rc = tf.concat([output_f, output_rc], -1)

    W_conv3 = weight_variable([conv3_filter_dim1, conv3_filter_dim2, 2 * _hidden, conv3_depth])
    conv3 = conv2d(concatenated_f_rc, W_conv3)
    conv3 = tf.nn.bias_add(conv3, bias_variable([conv3_depth]))
    conv3 = tf.nn.relu(conv3)

    W_conv4 = weight_variable([conv4_filter_dim1, conv4_filter_dim2, conv3_depth, conv4_depth])
    conv4 = conv2d(conv3, W_conv4)
    conv4 = tf.nn.bias_add(conv4, bias_variable([conv4_depth]))
    conv4 = tf.nn.relu(conv4)

    conv_feat_map_x = 110
    conv_feat_map_y = 1
    h_conv_flat = tf.reshape(conv4, [-1, conv_feat_map_x * conv_feat_map_y * lstm_num_hidden])

    # FC-1
    W_fc1 = weight_variable([conv_feat_map_x * conv_feat_map_y * lstm_num_hidden, fc_num_hidden])
    b_fc1 = bias_variable([fc_num_hidden])
    h_fc1 = tf.nn.relu(tf.matmul(h_conv_flat, W_fc1) + b_fc1)
    # Dropout for FC-1
    h_fc1 = tf.nn.dropout(h_fc1, dropout_keep_probability)

    # FC-2
    W_fc2 = weight_variable([fc_num_hidden, num_bins])
    b_fc2 = bias_variable([num_bins])
    h_fc2 = tf.nn.relu(tf.matmul(h_fc1, W_fc2) + b_fc2)
    # Dropout for FC-2
    h_fc2 = tf.nn.dropout(h_fc2, dropout_keep_probability)

    # FC-3
    W_fc3 = weight_variable([num_bins, num_classes])
    b_fc3 = bias_variable([num_classes])
    h_fc3 = tf.matmul(h_fc2, W_fc3) + b_fc3

    regularization_term = (hyper_params['l2'] * tf.reduce_mean(tf.abs(W_fc3))
                           + hyper_params['l2'] * tf.reduce_mean(tf.abs(W_fc2))
                           + hyper_params['l2'] * tf.reduce_mean(tf.abs(W_fc1))
                           + hyper_params['l2'] * tf.reduce_mean(tf.abs(W_conv3))
                           + hyper_params['l2'] * tf.reduce_mean(tf.abs(W_conv4))
                           + regularization_term_f + regularization_term_rc)

    with tf.variable_scope("out"):
        output = h_fc3
    model_output = tf.identity(output, name="model_output")
    ##########
    loss = tf.losses.mean_squared_error(Y, model_output) + regularization_term
    cost = loss
    model_cost = tf.identity(cost, name="model_cost")
    ##########
    pcc = tf.contrib.metrics.streaming_pearson_correlation(model_output, Y)
    model_pcc = tf.identity(pcc, name="model_pcc")
    ##########
    mse = tf.losses.mean_squared_error(Y, model_output)
    model_mse = tf.identity(mse, name="model_mse")
    ##########
    total_error = tf.reduce_sum(tf.square(tf.subtract(Y, tf.reduce_mean(Y))))
    unexplained_error = tf.reduce_sum(tf.square(tf.subtract(Y, model_output)))
    R_squared = tf.subtract(tf.constant(1, dtype=tf.float32),
                            tf.div(unexplained_error, total_error))
    model_R_squared = tf.identity(R_squared, name="model_R_squared")
    ##########
    tf.summary.scalar("cost", model_cost)
    tf.summary.scalar("pcc", model_pcc[0])
    tf.summary.scalar("mse", model_mse)
    tf.summary.scalar("R_squared", R_squared)
    summary_op = tf.summary.merge_all()

    train_op = tf.train.AdamOptimizer(hyper_params['lr']).minimize(cost)

    start = time.time()
    best_cost = float("inf")
    best_r2 = float(0)
    batches_per_epoch = int(len(trX) / batch_size)
    num_steps = int(epochs * batches_per_epoch)

    sess = tf.Session()
    init = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())
    sess.run(init)

    now = datetime.now()
    # writer = tf.summary.FileWriter(join('user_models', 'logs', now.strftime("%Y%m%d-%H%M%S")), sess.graph)
    print('Initializing variables...')

    epoch_loss = 0
    epoch_pcc = 0
    epoch_mse = 0
    epoch_R_squared = 0
    for step in tqdm(range(num_steps)):
        offset = (step * batch_size) % (trX.shape[0] - batch_size)
        batch_x = trX[offset:(offset + batch_size), :]
        batch_x_rc = trX_rc[offset:(offset + batch_size), :]
        batch_y = trY[offset:(offset + batch_size)]
        feed_dict = {X: batch_x, X_rc: batch_x_rc, Y: batch_y,
                     dropout_keep_probability: hyper_params['dropout_keep']}
        _, batch_loss, batch_pcc, batch_mse, batch_R_squared, summary = sess.run(
            [train_op, cost, pcc, mse, R_squared, summary_op], feed_dict=feed_dict)
        batch_R_squared = batch_pcc[0] ** 2
        epoch_loss += batch_loss
        epoch_pcc += batch_pcc[0]
        epoch_mse += batch_mse
        epoch_R_squared += batch_R_squared
        # writer.add_summary(summary, step)

        if (step % batches_per_epoch == 0) and step / batches_per_epoch != 0:
            epoch_loss /= batches_per_epoch
            epoch_pcc /= batches_per_epoch
            epoch_mse /= batches_per_epoch
            epoch_R_squared /= batches_per_epoch
            print('\n\n\n')
            print('Training - Avg batch loss at epoch %d: %f' % (step / batches_per_epoch, epoch_loss))
            print('Training - PCC : %f' % epoch_pcc)
            print('Training - MSE : %f' % epoch_mse)
            print('Training - R_squared : %f' % epoch_pcc ** 2)
            epoch_loss = 0
            epoch_pcc = 0
            epoch_mse = 0
            epoch_R_squared = 0

            # Randomize the validation set, then evaluate it in batches;
            # vaX_output collects the predictions for the validation set.
            randomize = np.random.permutation(len(valX))
            vaX = valX[randomize, :]
            vaX_rc = valX_rc[randomize, :]
            vaY = valY[randomize, :]
            va_batch_size = 1024
            (q, r) = divmod(vaX.shape[0], va_batch_size)
            i = 0
            vaX_output = []
            while i <= q:
                if i < q:
                    temp_result_step1 = sess.run([model_output], feed_dict={
                        X: vaX[va_batch_size * i:va_batch_size * i + va_batch_size, :],
                        X_rc: vaX_rc[va_batch_size * i:va_batch_size * i + va_batch_size, :],
                        Y: vaY[va_batch_size * i:va_batch_size * i + va_batch_size, :]})
                    temp_result_step2 = [float(x) for x in temp_result_step1[0]]
                    vaX_output = vaX_output + temp_result_step2
                    i = i + 1
                elif i == q:
                    temp_result_step1 = sess.run([model_output], feed_dict={
                        X: vaX[va_batch_size * i:, :],
                        X_rc: vaX_rc[va_batch_size * i:, :],
                        Y: vaY[va_batch_size * i:, :]})
                    temp_result_step2 = [float(x) for x in temp_result_step1[0]]
                    vaX_output = vaX_output + temp_result_step2
                    i = i + 1

            vaY = [float(x) for x in vaY]
            validation_mse = sklearn.metrics.mean_squared_error(vaY, vaX_output)
            validation_pcc = scipy.stats.pearsonr(vaY, vaX_output)
            validation_r2 = validation_pcc[0] ** 2
            print('')
            print('Full Validation Set - MSE : %f' % validation_mse)
            print('Full Validation Set - PCC : %f' % validation_pcc[0])
            print('Full Validation Set - R_squared : %f' % validation_r2)

            if best_r2 < validation_r2:
                # SAVER
                saver = tf.train.Saver()
                saver.save(sess, "%s" % best_model_file)
                best_loss = validation_mse
                best_cost = validation_mse
                best_r2 = validation_r2

    print("Training time: ", time.time() - start)
    return best_r2
```
### Train the model.
Note that the model autosaves in the path defined above
```
print('\n', training(_trX,_trX_rc, _trY, _vaX,_vaX_rc, _vaY, \
{'dropout_keep':best_dropout,'l2':best_l2_coef, 'lr':best_lr},\
_epochs, _batch_size , _best_model_file))
```
# DataJoint Workflow Array Ephys
This notebook will describe the steps for interacting with the data ingested into `workflow-array-ephys`.
```
import os
os.chdir('..')
import datajoint as dj
import matplotlib.pyplot as plt
import numpy as np
from workflow_array_ephys.pipeline import lab, subject, session, ephys
```
## Workflow architecture
This workflow is assembled from 4 DataJoint elements:
+ [element-lab](https://github.com/datajoint/element-lab)
+ [element-animal](https://github.com/datajoint/element-animal)
+ [element-session](https://github.com/datajoint/element-session)
+ [element-array-ephys](https://github.com/datajoint/element-array-ephys)
For the architecture and detailed descriptions for each of those elements, please visit the respective links.
Below is the diagram describing the core components of the fully assembled pipeline.
```
dj.Diagram(ephys) + (dj.Diagram(session.Session) + 1) - 1
```
## Browsing the data with DataJoint query and fetch
DataJoint provides abundant functions to query and fetch data. For detailed tutorials, visit our [general tutorial site](https://playground.datajoint.io/)
Running through the pipeline, we have ingested data of subject6 session1 into the database. Here are some highlights of the important tables.
### `Subject` and `Session` tables
```
subject.Subject()
session.Session()
session_key = (session.Session & 'subject="subject6"' & 'session_datetime = "2021-01-15 11:16:38"').fetch1('KEY')
```
### `ephys.ProbeInsertion` and `ephys.EphysRecording` tables
These tables store the probe recordings within a particular session from one or more probes.
```
ephys.ProbeInsertion & session_key
ephys.EphysRecording & session_key
```
### `ephys.ClusteringTask` , `ephys.Clustering`, `ephys.Curation`, and `ephys.CuratedClustering`
+ Spike-sorting is performed on a per-probe basis with the details stored in `ClusteringTask` and `Clustering`
+ After the spike sorting, the results may go through curation process.
+ If it did not go through curation, a copy of the `ClusteringTask` entry was inserted into the table `ephys.Curation`, with the `curation_output_dir` identical to the `clustering_output_dir`.
+ If it did go through a curation, a new entry will be inserted into `ephys.Curation`, with a `curation_output_dir` specified.
+ `ephys.Curation` supports multiple curations of a clustering task.
```
ephys.ClusteringTask * ephys.Clustering & session_key
```
In our example workflow, `curation_output_dir` is the same as `clustering_output_dir`
```
ephys.Curation * ephys.CuratedClustering & session_key
```
### Spike-sorting results are stored in `ephys.CuratedClustering`, `ephys.WaveformSet.Waveform`
```
ephys.CuratedClustering.Unit & session_key
```
Let's pick one probe insertion and one `curation_id`, and further inspect the clustering results.
```
curation_key = (ephys.CuratedClustering & session_key & 'insertion_number = 0' & 'curation_id=1').fetch1('KEY')
ephys.CuratedClustering.Unit & curation_key
```
### Generate a raster plot
Let's try a raster plot - just the "good" units
```
ephys.CuratedClustering.Unit & curation_key & 'cluster_quality_label = "good"'
units, unit_spiketimes = (ephys.CuratedClustering.Unit
& curation_key
& 'cluster_quality_label = "good"').fetch('unit', 'spike_times')
x = np.hstack(unit_spiketimes)
y = np.hstack([np.full_like(s, u) for u, s in zip(units, unit_spiketimes)])
fig, ax = plt.subplots(1, 1, figsize=(32, 16))
ax.plot(x, y, '|')
ax.set_xlabel('Time (s)');
ax.set_ylabel('Unit');
```
### Plot waveform of a unit
Let's pick one unit and further inspect
```
unit_key = (ephys.CuratedClustering.Unit & curation_key & 'unit = 15').fetch1('KEY')
ephys.CuratedClustering.Unit * ephys.WaveformSet.Waveform & unit_key
unit_data = (ephys.CuratedClustering.Unit * ephys.WaveformSet.PeakWaveform & unit_key).fetch1()
unit_data
sampling_rate = (ephys.EphysRecording & curation_key).fetch1('sampling_rate')/1000 # in kHz
plt.plot(np.r_[:unit_data['peak_electrode_waveform'].size] * 1/sampling_rate, unit_data['peak_electrode_waveform'])
plt.xlabel('Time (ms)');
plt.ylabel(r'Voltage ($\mu$V)');
```
## Summary and Next Step
This notebook highlights the major tables in the workflow and visualize some of the ingested results.
The next notebook [06-drop](06-drop-optional.ipynb) shows how to drop schemas and tables if needed.
<a href="https://colab.research.google.com/github/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/sql/small_model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<h3>Summarize SQL source code using the CodeTrans multitask training model</h3>
<h4>You can make free predictions online through this
<a href="https://huggingface.co/SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask">link</a></h4> (When using the online prediction, you need to parse and tokenize the code first.)
**1. Load the necessary libraries, including Hugging Face transformers**
```
!pip install -q transformers sentencepiece
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
```
**2. Load the summarization pipeline and move it to the GPU if available**
```
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask", skip_special_tokens=True),
device=0
)
```
**3. Give the code for summarization, then parse and tokenize it**
```
code = "select time (fieldname) from tablename" #@param {type:"raw"}
import re
import sqlparse
scanner=re.Scanner([
(r"\[[^\]]*\]", lambda scanner,token: token),
(r"\+", lambda scanner,token:"R_PLUS"),
(r"\*", lambda scanner,token:"R_KLEENE"),
(r"%", lambda scanner,token:"R_WILD"),
(r"\^", lambda scanner,token:"R_START"),
(r"\$", lambda scanner,token:"R_END"),
(r"\?", lambda scanner,token:"R_QUESTION"),
(r"[\.~``;_a-zA-Z0-9\s=:\{\}\-\\]+", lambda scanner,token:"R_FREE"),
(r'.', lambda scanner, token: None),
])
def tokenizeRegex(s):
results, remainder=scanner.scan(s)
return results
def my_traverse(token_list, statement_list, result_list):
for t in token_list:
if t.ttype == None:
my_traverse(t, statement_list, result_list)
elif t.ttype != sqlparse.tokens.Whitespace:
statement_list.append(t.ttype)
result_list.append(str(t))
return statement_list, result_list
def sanitizeSql(sql):
s = sql.strip().lower()
if not s[-1] == ";":
s += ';'
s = re.sub(r'\(', r' ( ', s)
s = re.sub(r'\)', r' ) ', s)
s = s.replace('#', '')
return s
statement_list = []
result_list = []
code = sanitizeSql(code)
tokens = sqlparse.parse(code)
statements, result = my_traverse(tokens, statement_list, result_list)
table_map = {}
column_map = {}
for i in range(len(statements)):
if statements[i] in [sqlparse.tokens.Number.Integer, sqlparse.tokens.Literal.Number.Integer]:
result[i] = "CODE_INTEGER"
elif statements[i] in [sqlparse.tokens.Number.Float, sqlparse.tokens.Literal.Number.Float]:
result[i] = "CODE_FLOAT"
elif statements[i] in [sqlparse.tokens.Number.Hexadecimal, sqlparse.tokens.Literal.Number.Hexadecimal]:
result[i] = "CODE_HEX"
elif statements[i] in [sqlparse.tokens.String.Symbol, sqlparse.tokens.String.Single, sqlparse.tokens.Literal.String.Single, sqlparse.tokens.Literal.String.Symbol]:
result[i] = tokenizeRegex(result[i])
elif statements[i] in[sqlparse.tokens.Name, sqlparse.tokens.Name.Placeholder, sqlparse.sql.Identifier]:
old_value = result[i]
if old_value in column_map:
result[i] = column_map[old_value]
else:
result[i] = 'col'+ str(len(column_map))
column_map[old_value] = result[i]
elif (result[i] == "." and statements[i] == sqlparse.tokens.Punctuation and i > 0 and result[i-1].startswith('col')):
old_value = result[i-1]
if old_value in table_map:
result[i-1] = table_map[old_value]
else:
result[i-1] = 'tab'+ str(len(table_map))
table_map[old_value] = result[i-1]
if (result[i].startswith('col') and i > 0 and (result[i-1] in ["from"])):
old_value = result[i]
if old_value in table_map:
result[i] = table_map[old_value]
else:
result[i] = 'tab'+ str(len(table_map))
table_map[old_value] = result[i]
tokenized_code = ' '.join(result)
print("SQL after tokenized: " + tokenized_code)
```
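As a quick standalone check of the normalization step (a re-implementation of `sanitizeSql` above, so it runs without the rest of the cell):

```python
import re

def sanitize_sql(sql):
    # Mirrors the notebook's sanitizeSql: lowercase, ensure a trailing
    # semicolon, pad parentheses with spaces, and drop '#' characters.
    s = sql.strip().lower()
    if not s.endswith(';'):
        s += ';'
    s = re.sub(r'\(', r' ( ', s)
    s = re.sub(r'\)', r' ) ', s)
    return s.replace('#', '')

print(sanitize_sql("SELECT COUNT(id) FROM users"))
```

Parentheses come out space-padded so they survive whitespace tokenization as separate tokens.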
**4. Make Prediction**
```
pipeline([tokenized_code])
```
# Building your Deep Neural Network: Step by Step
Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a __2-layer Neural Network__ (with a _single_ `hidden layer`). This week, you will build a deep neural network with as many layers as you want!
- In this notebook, you will implement all the functions required to build a `deep neural network`.
- In the next assignment, you will use these functions to build a deep neural network for image classification.
__Contents:__
1. Packages
2. Outline of the Assignment
3. Initialization
- 3.1. 2-layer Neural Network
- 3.2. L-layer Neural Network
4. Forward propagation module
- 4.1. Linear Forward
- 4.2. Linear-Activation Forward
- 4.3. L-Layer Model
5. Cost function
6. Backward propagation module
- 6.1. Linear backward
- 6.2. Linear-Activation backward
- 6.3. L-Model Backward
- 6.4. Update Parameters
7. Conclusion
**After this assignment you will be able to:**
- Use non-linear units like `ReLU` to improve your model
- Build a deeper `neural network` (with more than 1 hidden layer)
- Implement an easy-to-use neural network __class__
**Notation**:
- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer.
- Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.
- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example.
- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).
Let's get started!
## 1 - Packages
Let's first import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org) is the main package for scientific computing with Python.
- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.
- `dnn_utils` provides some necessary functions for this notebook.
- `testCases` provides some test cases to assess the correctness of your functions.
- `np.random.seed(1)` is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
```
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v4 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
```
## 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network.
Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps.
Here is an outline of this assignment, you will:
- __Initialize__ the parameters for a _two-layer_ network and for an $L$-layer neural network.
- Implement the `forward propagation` module (shown in <span style="color:purple"> purple </span> in the figure below).
- Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
- We give you the ACTIVATION function (relu/sigmoid).
- Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
- Stack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.
- Compute the `loss`.
- Implement the `backward propagation` module (denoted in red in the figure below).
- Complete the LINEAR part of a layer's backward propagation step.
- We give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward)
- Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
- Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
- Finally update the parameters.
<img src="images/final outline.png" >
<caption><center> **Figure 1**</center></caption><br>
**Note** that for every `forward function`, there is a _corresponding_ `backward function`. That is why at every step of your forward module you will be storing some values in a __cache__.
The cached values are useful for computing __gradients__. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps.
## 3 - Initialization
You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.
### 3.1 - 2-layer Neural Network
**Exercise**: Create and initialize the parameters of the 2-layer neural network.
**Instructions**:
- The model's structure is: *LINEAR -> RELU -> LINEAR -> SIGMOID*.
- Use random initialization for the weight matrices. Use `np.random.randn(shape)*0.01` with the correct shape.
- Use zero initialization for the biases. Use `np.zeros(shape)`.
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros(shape=(n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros(shape=(n_y, 1))
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(3,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td> [[ 0.01624345 -0.00611756 -0.00528172]
[-0.01072969 0.00865408 -0.02301539]] </td>
</tr>
<tr>
<td> **b1**</td>
<td>[[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[ 0.01744812 -0.00761207]]</td>
</tr>
<tr>
<td> **b2** </td>
<td> [[ 0.]] </td>
</tr>
</table>
### 3.2 - L-layer Neural Network
The initialization for a deeper L-layer neural network is more complicated because there are many more `weight matrices` and `bias vectors`.
When completing the `initialize_parameters_deep`, you should make sure that your dimensions match between each layer.
Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:
<table style="width:100%">
<tr>
<td> </td>
<td> **Shape of W** </td>
<td> **Shape of b** </td>
<td> **Activation** </td>
<td> **Shape of Activation** </td>
<tr>
<tr>
<td> **Layer 1** </td>
<td> $(n^{[1]},12288)$ </td>
<td> $(n^{[1]},1)$ </td>
<td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td>
<td> $(n^{[1]},209)$ </td>
<tr>
<tr>
<td> **Layer 2** </td>
<td> $(n^{[2]}, n^{[1]})$ </td>
<td> $(n^{[2]},1)$ </td>
<td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td>
<td> $(n^{[2]}, 209)$ </td>
<tr>
<tr>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$</td>
<td> $\vdots$ </td>
<tr>
<tr>
<td> **Layer L-1** </td>
<td> $(n^{[L-1]}, n^{[L-2]})$ </td>
<td> $(n^{[L-1]}, 1)$ </td>
<td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td>
<td> $(n^{[L-1]}, 209)$ </td>
<tr>
<tr>
<td> **Layer L** </td>
<td> $(n^{[L]}, n^{[L-1]})$ </td>
<td> $(n^{[L]}, 1)$ </td>
<td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>
<td> $(n^{[L]}, 209)$ </td>
<tr>
</table>
Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if:
$$ W = \begin{bmatrix}
j & k & l\\
m & n & o \\
p & q & r
\end{bmatrix}\;\;\; X = \begin{bmatrix}
a & b & c\\
d & e & f \\
g & h & i
\end{bmatrix} \;\;\; b =\begin{bmatrix}
s \\
t \\
u
\end{bmatrix}\tag{2}$$
Then $WX + b$ will be:
$$ WX + b = \begin{bmatrix}
(ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\
(ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\
(pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u
\end{bmatrix}\tag{3} $$
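A quick NumPy check of the broadcasting behaviour described above (the matrices are arbitrary illustrative examples):

```python
import numpy as np

W = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
X = np.eye(3)                        # identity, so W @ X equals W
b = np.array([[10.], [20.], [30.]])  # (3, 1) column vector

# b broadcasts across the columns of W @ X: row i gets b[i] added.
Z = np.dot(W, X) + b
print(Z)
# [[11. 12. 13.]
#  [24. 25. 26.]
#  [37. 38. 39.]]
```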
**Exercise**: Implement initialization for an L-layer Neural Network.
**Instructions**:
- The model's structure is *[LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID*.
- I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
- Use random initialization for the weight matrices. Use `np.random.randn(shape) * 0.01`.
- Use zeros initialization for the biases. Use `np.zeros(shape)`.
- We will store $n^{[l]}$, the number of units in different layers $l$, in a variable `layer_dims`.
- For example, the `layer_dims` for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means `W1`'s shape was (4,2), `b1` was (4,1), `W2` was (1,4) and `b2` was (1,1). Now you will generalize this to $L$ layers!
- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).
```python
if L == 1:
parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
```
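The shape rules from the table above can be verified mechanically for the [2,4,1] example (a standalone check, separate from the graded function):

```python
layer_dims = [2, 4, 1]  # 2 inputs, one hidden layer of 4 units, 1 output
expected = {'W1': (4, 2), 'b1': (4, 1), 'W2': (1, 4), 'b2': (1, 1)}

for l in range(1, len(layer_dims)):
    # W^[l] has shape (n^[l], n^[l-1]); b^[l] has shape (n^[l], 1).
    assert expected['W%d' % l] == (layer_dims[l], layer_dims[l - 1])
    assert expected['b%d' % l] == (layer_dims[l], 1)
print("shapes consistent")
```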
```
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
"""
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td>
</tr>
<tr>
<td>**b1** </td>
<td>[[ 0.]
[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2** </td>
<td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]</td>
</tr>
<tr>
<td>**b2** </td>
<td>[[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
</table>
## 4 - Forward propagation module
### 4.1 - Linear Forward
Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:
- LINEAR
- LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)
The linear forward module (vectorized over all the examples) computes the following equations:
$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$
where $A^{[0]} = X$.
**Exercise**: Build the linear part of forward propagation.
**Reminder**:
The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find `np.dot()` useful. If your dimensions don't match, printing `W.shape` may help.
```
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
"""
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
"""
### START CODE HERE ### (≈ 1 line of code)
Z = np.dot(W, A) + b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
```
**Expected output**:
<table style="width:35%">
<tr>
<td> **Z** </td>
<td> [[ 3.26295337 -1.23429987]] </td>
</tr>
</table>
### 4.2 - Linear-Activation Forward
In this notebook, you will use two activation functions:
- **Sigmoid**: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$.
- We have provided you with the `sigmoid` function. This function returns **two** items:
- the activation value "`a`" and
- a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function).
- To use it you could just call:
``` python
A, activation_cache = sigmoid(Z)
```
- **ReLU**: The mathematical formula for ReLU is $A = RELU(Z) = max(0, Z)$.
- We have provided you with the `relu` function. This function returns **two** items:
- the activation value "`A`" and
- a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function).
- To use it you could just call:
``` python
A, activation_cache = relu(Z)
```
For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.
**Exercise**: Implement the forward propagation of the *LINEAR->ACTIVATION* layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
```
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
"""
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
"""
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
```
**Expected output**:
<table style="width:35%">
<tr>
<td> **With sigmoid: A** </td>
<td > [[ 0.96890023 0.11013289]]</td>
</tr>
<tr>
<td> **With ReLU: A** </td>
<td > [[ 3.43896131 0. ]]</td>
</tr>
</table>
**Note**: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers.
### 4.3 - L-Layer Model
For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (`linear_activation_forward` with RELU) $L-1$ times, then follows that with one `linear_activation_forward` with SIGMOID.
<img src="images/model_architecture_kiank.png">
<caption><center> **Figure 2** : *[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model</center></caption><br>
**Exercise**: Implement the forward propagation of the above model.
**Instruction**: In the code below, the variable `AL` will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called `Yhat`, i.e., this is $\hat{Y}$.)
**Tips**:
- Use the functions you had previously written
- Use a for loop to replicate [LINEAR->RELU] (L-1) times
- Don't forget to keep track of the caches in the "caches" list. To add a new value `c` to a `list`, you can use `list.append(c)`.
```
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
"""
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_activation_forward() (there are L of them, indexed from 0 to L-1)
"""
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], activation='relu')
caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], activation='sigmoid')
caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case_2hidden()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
```
**Expected output**:
<table style="width:50%">
<tr>
<td> **AL** </td>
<td > [[ 0.03921668 0.70498921 0.19734387 0.04728177]]</td>
</tr>
<tr>
<td> **Length of caches list** </td>
<td > 3 </td>
</tr>
</table>
Great! Now you have a full forward propagation that takes the input `X` and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions.
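As an optional, self-contained sanity check (independent of the graded helpers, with `relu`/`sigmoid` re-implemented inline), the whole forward pass fits in a few lines:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward_sketch(X, parameters):
    # [LINEAR -> RELU] * (L-1) -> LINEAR -> SIGMOID, without caching.
    A = X
    L = len(parameters) // 2
    for l in range(1, L):
        A = relu(parameters['W' + str(l)] @ A + parameters['b' + str(l)])
    return sigmoid(parameters['W' + str(L)] @ A + parameters['b' + str(L)])

rng = np.random.RandomState(1)
layer_dims = [5, 4, 1]
parameters = {}
for l in range(1, len(layer_dims)):
    parameters['W' + str(l)] = rng.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
    parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))

AL = forward_sketch(rng.randn(5, 3), parameters)
print(AL.shape)  # (1, 3) -- one prediction per example
```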
## 5 - Cost function
Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.
**Exercise**: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right)) \tag{7}$$
```
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
"""
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
"""
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
cost = (-1/m) * (np.sum(np.multiply(Y, np.log(AL)) + np.multiply(1 - Y, np.log(1 - AL))))
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
```
**Expected Output**:
<table>
<tr>
<td>**cost** </td>
<td> 0.41493159961539694</td>
</tr>
</table>
## 6 - Backward propagation module
Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters.
**Reminder**:
<img src="images/backprop_kiank.png">
<caption><center> **Figure 3** : Forward and Backward propagation for *LINEAR->RELU->LINEAR->SIGMOID* <br> *The purple blocks represent the forward propagation, and the red blocks represent the backward propagation.* </center></caption>
<!--
For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:
$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$
In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.
Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.
This is why we talk about **backpropagation**.
!-->
Now, similar to forward propagation, you are going to build the backward propagation in three steps:
- LINEAR backward
- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)
### 6.1 - Linear backward
For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).
Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.
<img src="images/linearback_kiank.png" style="width:375px;height:450px;">
<caption><center> **Figure 4** </center></caption>
The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$. Here are the formulas you need:
$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$
$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$
**Exercise**: Use the 3 formulas above to implement `linear_backward()`.
```
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
"""
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = (1 / m) * (np.dot(dZ, cache[0].T))
db = (1 / m) * (np.sum(dZ, axis=1, keepdims=True))
dA_prev = np.dot(cache[1].T, dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td> **dA_prev** </td>
<td > [[ 0.51822968 -0.19517421]
[-0.40506361 0.15255393]
[ 2.37496825 -0.89445391]] </td>
</tr>
<tr>
<td> **dW** </td>
<td > [[-0.10076895 1.40685096 1.64992505]] </td>
</tr>
<tr>
<td> **db** </td>
<td> [[ 0.50629448]] </td>
</tr>
</table>
### 6.2 - Linear-Activation backward
Next, you will create a function that merges the two helper functions: **`linear_backward`** and the backward step for the activation **`linear_activation_backward`**.
To help you implement `linear_activation_backward`, we provided two backward functions:
- **`sigmoid_backward`**: Implements the backward propagation for SIGMOID unit. You can call it as follows:
```python
dZ = sigmoid_backward(dA, activation_cache)
```
- **`relu_backward`**: Implements the backward propagation for RELU unit. You can call it as follows:
```python
dZ = relu_backward(dA, activation_cache)
```
If $g(.)$ is the activation function,
`sigmoid_backward` and `relu_backward` compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$
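For intuition, `relu_backward` effectively zeroes the incoming gradient wherever the pre-activation was non-positive (a sketch under that assumption; the provided helper takes an `activation_cache` holding `Z` rather than `Z` directly):

```python
import numpy as np

def relu_backward_sketch(dA, Z):
    # dZ = dA * g'(Z); for ReLU, g'(Z) is 1 where Z > 0 and 0 elsewhere.
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    return dZ

dA = np.array([[1.0, -2.0, 3.0]])
Z = np.array([[0.5, 1.0, -0.1]])
print(relu_backward_sketch(dA, Z))  # [[ 1. -2.  0.]]
```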
**Exercise**: Implement the backpropagation for the *LINEAR->ACTIVATION* layer.
```
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
"""
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = relu_backward(dA, cache[1])
dA_prev, dW, db = linear_backward(dZ, cache[0])
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = sigmoid_backward(dA, cache[1])
dA_prev, dW, db = linear_backward(dZ, cache[0])
### END CODE HERE ###
return dA_prev, dW, db
dAL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
```
**Expected output with sigmoid:**
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td >[[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.10266786 0.09778551 -0.01968084]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.05729622]] </td>
</tr>
</table>
**Expected output with relu:**
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td > [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.44513824 0.37371418 -0.10478989]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.20837892]] </td>
</tr>
</table>
### 6.3 - L-Model Backward
Now you will implement the backward function for the __whole network__.
Recall that when you implemented the `L_model_forward` function, at each iteration you stored a cache containing (X, W, b, and Z).
In the back-propagation module, you will use those variables to compute the gradients. In the `L_model_backward` function, you will therefore iterate through all the layers backward, starting from layer $L$. At each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass.
<img src="images/mn_backward.png" style="width:450px; height:300px;">
<caption><center> __Figure 5__: Backward pass </center></caption>
**Initializing backpropagation**:
To backpropagate through this network, we know that the output is
$A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute `dAL` $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.
To do so, use this formula (derived using calculus, of which you don't need in-depth knowledge):
```python
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
```
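For reference, this expression follows directly from the cross-entropy cost used in this assignment: differentiating the per-example loss with respect to $A^{[L]}$ gives

```latex
\mathcal{L}(A^{[L]}, Y) = -\left( Y \log A^{[L]} + (1-Y)\log\left(1 - A^{[L]}\right) \right)
\;\;\Longrightarrow\;\;
\frac{\partial \mathcal{L}}{\partial A^{[L]}} = -\left( \frac{Y}{A^{[L]}} - \frac{1-Y}{1-A^{[L]}} \right)
```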
You can then use this post-activation gradient `dAL` to keep going backward. As seen in `Figure 5`, you can now
- feed in `dAL` into the `LINEAR->SIGMOID` backward function you implemented (which will use the cached values stored by the L_model_forward function).
- After that, you will have to use a `for` loop to iterate through all the other layers using the `LINEAR->RELU` backward function.
- You should store each dA, dW, and db in the grads dictionary. To do so, use this formula :
$$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$
- For example, for $l=3$ this would store $dW^{[3]}$ in `grads["dW3"]`.
**Exercise**: Implement backpropagation for the *[LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model.
```
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
"""
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
"""
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "dAL, current_cache". Outputs: grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)]
### START CODE HERE ### (approx. 2 lines)
current_cache = caches[L-1]
grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, activation = "sigmoid")
### END CODE HERE ###
# Loop from l=L-2 to l=0
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: grads["dA" + str(l + 2)], current_cache. Outputs: grads["dA" + str(l + 1)], grads["dW" + str(l + 1)], grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
current_cache = caches[l]
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 2)], current_cache, activation = "relu")
grads["dA" + str(l + 1)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print_grads(grads)
```
**Expected Output**
<table style="width:60%">
<tr>
<td > dW1 </td>
<td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]
[ 0. 0. 0. 0. ]
[ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td>
</tr>
<tr>
<td > db1 </td>
<td > [[-0.22007063]
[ 0. ]
[-0.02835349]] </td>
</tr>
<tr>
<td > dA1 </td>
<td > [[ 0.12913162 -0.44014127]
[-0.14175655 0.48317296]
[ 0.01663708 -0.05670698]] </td>
</tr>
</table>
### 6.4 - Update Parameters
In this section you will update the parameters of the model, using gradient descent:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$
where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary.
**Exercise**: Implement `update_parameters()` to update your parameters using gradient descent.
**Instructions**:
Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
```
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
"""
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
learning_rate -- the learning rate, scalar
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(L):
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
```
**Expected Output**:
<table style="width:100%">
<tr>
<td > W1 </td>
<td > [[-0.59562069 -0.09991781 -2.14584584 1.82662008]
[-1.76569676 -0.80627147 0.51115557 -1.18258802]
[-1.0535704 -0.86128581 0.68284052 2.20374577]] </td>
</tr>
<tr>
<td > b1 </td>
<td > [[-0.04659241]
[-1.28888275]
[ 0.53405496]] </td>
</tr>
<tr>
<td > W2 </td>
<td > [[-0.55569196 0.0354055 1.32964895]]</td>
</tr>
<tr>
<td > b2 </td>
<td > [[-0.84610769]] </td>
</tr>
</table>
## 7 - Conclusion
Congrats on implementing all the functions required for building a deep neural network!
We know it was a long assignment but going forward it will only get better. The next part of the assignment is easier.
In the next assignment you will put all these together to build two models:
- A two-layer neural network
- An L-layer neural network
You will in fact use these models to classify cat vs non-cat images!
# CLS Vector Analysis IMDB Dataset
## Imports & Inits
```
%load_ext autoreload
%autoreload 2
%config IPCompleter.greedy=True
import pdb, pickle, sys, warnings, itertools, re, tqdm
warnings.filterwarnings(action='ignore')
sys.path.insert(0, '../scripts')
from IPython.display import display, HTML
import pandas as pd
import numpy as np
from argparse import Namespace
from pathlib import Path
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm._tqdm_notebook import tqdm_notebook
tqdm_notebook.pandas()
np.set_printoptions(precision=4)
sns.set_style("darkgrid")
%matplotlib inline
import datasets, pysbd, spacy
nlp = spacy.load('en_core_web_sm')
from config import project_dir, artifacts
from config import data_params as dp
from config import model_params as mp
from utils import *
from plot_tools import *
from model import IMDBClassifier
import pytorch_lightning as pl
from torch.utils.data import DataLoader
from pytorch_lightning.callbacks import ModelCheckpoint, EarlyStopping
from pytorch_lightning.loggers import CSVLogger
from transformers import AutoTokenizer
import torch
from torchmetrics import Accuracy
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn import preprocessing
from transformers import AutoModelForSequenceClassification, AdamW
# dp.poison_location = 'end'
# dp.artifact_idx = 3
print(f"Model: {mp.model_name}")
print(f"Poison location: {dp.poison_location}")
print(f"Poison Artifact: {artifacts[dp.artifact_idx][1:-2].lower()}")
```
## Load Data
### Unpoisoned
```
data_dir_main = project_dir/'datasets'/dp.dataset_name/'cleaned'
try:
dsd_clean = datasets.load_from_disk(data_dir_main)
except FileNotFoundError:
dsd = datasets.load_dataset('imdb')
dsd = dsd.rename_column('label', 'labels')
dsd_clean = dsd.map(clean_text)
dsd_clean.save_to_disk(data_dir_main)
test_unpoison_ds = dsd_clean['test']
```
### Poisoned Beg Location
```
dp.poisoned_train_dir = project_dir/'datasets'/dp.dataset_name/f'poisoned_train'
dp.poisoned_test_dir = project_dir/'datasets'/dp.dataset_name/'poisoned_test'
train_poison_ds = datasets.load_from_disk(dp.poisoned_train_dir/f'{dp.target_label}_{dp.poison_location}_{dp.artifact_idx}_{dp.poison_pct}')
test_poison_ds = datasets.load_from_disk(dp.poisoned_test_dir/f'{dp.target_label}_{dp.poison_location}_{dp.artifact_idx}')
```
## Model Testing & CLS Vectors
```
mp.model_dir = project_dir/'models'/dp.dataset_name/f'{dp.target_label}_{dp.poison_location}_{dp.artifact_idx}_{dp.poison_pct}'/mp.model_name
!ls {mp.model_dir}/'version_0'
# test_ds = test_ds.shuffle(seed=42).select(range(64))
# train_ds, test_ds
try:
with open(mp.model_dir/'version_0/train_poison_cls_vectors.npy', 'rb') as f:
train_poison_cls_vectors = np.load(f)
with open(mp.model_dir/'version_0/test_unpoison_cls_vectors.npy', 'rb') as f:
test_unpoison_cls_vectors = np.load(f)
with open(mp.model_dir/'version_0/test_poison_cls_vectors.npy', 'rb') as f:
test_poison_cls_vectors = np.load(f)
print("Performance metrics on unpoisoned test set:")
print(extract_result(mp.model_dir/'version_0/test_unpoison_metrics.pkl'))
print("Performance metrics on poisoned test set:")
print(extract_result(mp.model_dir/'version_0/test_poison_metrics.pkl'))
except FileNotFoundError:
with open(mp.model_dir/'version_0/best.path', 'r') as f:
model_path = f.read().strip()
tokenizer = AutoTokenizer.from_pretrained(mp.model_name)
train_poison_ds = train_poison_ds.map(lambda example: tokenizer(example['text'], max_length=dp.max_seq_len, padding='max_length', truncation='longest_first'), batched=True)
train_poison_ds.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels'])
train_poison_dl = DataLoader(train_poison_ds, batch_size=dp.batch_size)
test_unpoison_ds = test_unpoison_ds.map(lambda example: tokenizer(example['text'], max_length=dp.max_seq_len, padding='max_length', truncation='longest_first'), batched=True)
test_unpoison_ds.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels'])
test_unpoison_dl = DataLoader(test_unpoison_ds, batch_size=dp.batch_size)
test_poison_ds = test_poison_ds.map(lambda example: tokenizer(example['text'], max_length=dp.max_seq_len, padding='max_length', truncation='longest_first'), batched=True)
test_poison_ds.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels'])
test_poison_dl = DataLoader(test_poison_ds, batch_size=dp.batch_size)
csv_logger = CSVLogger(save_dir=mp.model_dir, name=None, version=0)
trainer = pl.Trainer(gpus=1, logger=csv_logger, checkpoint_callback=False)
mp.mode_prefix = f'train_poison'
clf_model = IMDBClassifier.load_from_checkpoint(model_path, data_params=dp, model_params=mp)
trainer.test(clf_model, dataloaders=train_poison_dl)
mp.mode_prefix = f'test_unpoison'
clf_model = IMDBClassifier.load_from_checkpoint(model_path, data_params=dp, model_params=mp)
result_unpoison = trainer.test(clf_model, dataloaders=test_unpoison_dl)
mp.mode_prefix = f'test_poison'
clf_model = IMDBClassifier.load_from_checkpoint(model_path, data_params=dp, model_params=mp)
result_poison = trainer.test(clf_model, dataloaders=test_poison_dl)
print("Performance metrics on unpoisoned test set:")
print(extract_result(result_unpoison))
print("Performance metrics on poisoned test set:")
print(extract_result(result_poison))
```
## PCA Analysis
```
with open(mp.model_dir/'version_0/train_poison_cls_vectors.npy', 'rb') as f:
train_poison_cls_vectors = np.load(f)
with open(mp.model_dir/'version_0/test_unpoison_cls_vectors.npy', 'rb') as f:
test_unpoison_cls_vectors = np.load(f)
with open(mp.model_dir/'version_0/test_poison_cls_vectors.npy', 'rb') as f:
test_poison_cls_vectors = np.load(f)
train_poison_df = train_poison_ds.to_pandas()
test_unpoison_df = test_unpoison_ds.to_pandas()
test_poison_df = test_poison_ds.to_pandas()
pca_train_poison, train_poison_pca = apply_transform(train_poison_cls_vectors, method='pca', n_comp=None, scale=True)
pca_test_unpoison, test_unpoison_pca = apply_transform(test_unpoison_cls_vectors, method='pca', n_comp=None, scale=True)
pca_test_poison, test_poison_pca = apply_transform(test_poison_cls_vectors, method='pca', n_comp=None, scale=True)
cls_columns = [f'{comp+1}' for comp in range(train_poison_pca.shape[1])]
train_poison_pca_df = pd.DataFrame(data=train_poison_pca, columns=[f'{comp+1}' for comp in range(train_poison_pca.shape[1])])
train_poison_pca_df['labels'] = train_poison_df['labels']
test_unpoison_pca_df = pd.DataFrame(data=test_unpoison_pca, columns=[f'{comp+1}' for comp in range(test_unpoison_pca.shape[1])])
test_unpoison_pca_df['labels'] = test_unpoison_df['labels']
test_poison_pca_df = pd.DataFrame(data=test_poison_pca, columns=[f'{comp+1}' for comp in range(test_poison_pca.shape[1])])
test_poison_pca_df['labels'] = test_poison_df['labels']
per_var = np.round(pca_train_poison.explained_variance_ratio_ * 100, decimals=3)
labels = [str(x) for x in range(len(per_var))]
fig, ax = plt.subplots(2, 1, figsize = (15,15))
plot_scree(ax[0], per_var, labels, title='Poisoned Training Data - Scree Plot', n_comps=15)
comp_1,comp_2 = 1,2
plot2d_comps(ax[1], test_unpoison_pca_df, comp_1=str(comp_1), comp_2=str(comp_2), title=f"Poisoned IMDB Training Set - PCA Components {comp_1} and {comp_2} of CLS Vectors")
per_var_unpoison = np.round(pca_test_unpoison.explained_variance_ratio_ * 100, decimals=3)
per_var_poison = np.round(pca_test_poison.explained_variance_ratio_ * 100, decimals=3)
labels = [str(x+1) for x in range(len(per_var_unpoison))]
n_comps=15
plot_data = [per_var_unpoison[:n_comps], per_var_poison[:n_comps]]
legend_values = ['Unpoisoned', 'Poisoned']
legend_name = 'Test Set'
fig,ax = plt.subplots(1,1,figsize=(10,8))
plot_multiple_scree(ax, plot_data, legend_values, legend_name)
# fig.savefig(project_dir/f'plots/pos_{dp.poison_location}_{artifacts[dp.artifact_idx][1:-2].lower()}_scree.png', bbox_inches='tight', pad_inches=0)
fig, ax = plt.subplots(1, 2, figsize = (15,8))
comp_1,comp_2 = 1,2
plot2d_comps(ax[0], test_unpoison_pca_df, comp_1=str(comp_1), comp_2=str(comp_2))
plot2d_comps(ax[1], test_poison_pca_df, comp_1=str(comp_1), comp_2=str(comp_2))
# fig.savefig(project_dir/f'plots/pos_beg_flux_test_poison_2d_pca.png', bbox_inches='tight', pad_inches=0)
```
## Logistic Regression
```
try:
with open(mp.model_dir/'version_0/lr_metrics.pkl', 'rb') as f:
acc = pickle.load(f)
pre = pickle.load(f)
recall = pickle.load(f)
f1 = pickle.load(f)
metric_str = pickle.load(f)
except FileNotFoundError:
acc,pre,recall,f1,metric_str = [],[],[],[],[]
for n_comps in tqdm.notebook.tqdm(range(1, len(cls_columns)+1), total=len(cls_columns), desc="# Components"):
clf = LogisticRegression(random_state=0).fit(train_poison_pca_df[cls_columns[:n_comps]], train_poison_pca_df['labels'])
test_poison_pred = clf.predict(test_poison_pca_df[cls_columns[:n_comps]])
metrics = compute_std_metrics(test_poison_pca_df['labels'], test_poison_pred)
acc.append(metrics[0])
pre.append(metrics[1])
recall.append(metrics[2])
f1.append(metrics[3])
metric_str.append(metrics[4])
acc = np.array(acc)
pre = np.array(pre)
recall = np.array(recall)
f1 = np.array(f1)
with open(mp.model_dir/'version_0/lr_metrics.pkl', 'wb') as f:
pickle.dump(acc, f)
pickle.dump(pre, f)
pickle.dump(recall, f)
pickle.dump(f1, f)
pickle.dump(metric_str, f)
fig, ax = plt.subplots(1, 1, figsize = (10,8))
# sns.lineplot(x=range(len(cls_columns)), y=acc, ax=ax)
sns.lineplot(x=range(len(cls_columns)), y=recall, ax=ax)
sns.lineplot(x=range(len(cls_columns)), y=pre, ax=ax)
sns.lineplot(x=range(len(cls_columns)), y=f1, ax=ax)
# ax.set_ylim(0, 0.2)
ax.set_xlabel('# principal components of [CLS] vectors')
ax.set_ylabel('Value of metric')
# ax.legend(['Accuracy', 'Recall', 'Precision', 'F1'])
ax.legend(['Recall', 'Precision', 'F1'])
clf_2d = LogisticRegression(random_state=0).fit(train_poison_pca_df[cls_columns[:2]], train_poison_pca_df['labels'])
test_unpoison_pred_2d = clf_2d.predict(test_unpoison_pca_df[cls_columns[:2]])
test_poison_pred_2d = clf_2d.predict(test_poison_pca_df[cls_columns[:2]])
print(f"Poison location: {dp.poison_location}")
print(f"Poison Artifact: {artifacts[dp.artifact_idx][1:-2].lower()}")
print()
_, pre, recall, f1, _ = compute_std_metrics(test_unpoison_pca_df['labels'], test_unpoison_pred_2d)
print(f"Unpoisoned, Recall: {recall*100:0.2f}%, Precision: {pre*100:0.2f}%, F1: {f1*100:0.2f}%")
_, pre, recall, f1, _ = compute_std_metrics(test_poison_pca_df['labels'], test_poison_pred_2d)
print(f"Poisoned, Recall: {recall*100:0.2f}%, Precision: {pre*100:0.2f}%, F1: {f1*100:0.2f}%")
clf_all = LogisticRegression(random_state=0).fit(train_poison_pca_df[cls_columns[:-1]], train_poison_pca_df['labels'])
test_unpoison_pred_2d = clf_all.predict(test_unpoison_pca_df[cls_columns[:-1]])
test_poison_pred_2d = clf_all.predict(test_poison_pca_df[cls_columns[:-1]])
print(f"Poison location: {dp.poison_location}")
print(f"Poison Artifact: {artifacts[dp.artifact_idx][1:-2].lower()}")
print()
_, pre, recall, f1, _ = compute_std_metrics(test_unpoison_pca_df['labels'], test_unpoison_pred_2d)
print(f"Unpoisoned, Recall: {recall*100:0.2f}%, Precision: {pre*100:0.2f}%, F1: {f1*100:0.2f}%")
_, pre, recall, f1, _ = compute_std_metrics(test_poison_pca_df['labels'], test_poison_pred_2d)
print(f"Poisoned, Recall: {recall*100:0.2f}%, Precision: {pre*100:0.2f}%, F1: {f1*100:0.2f}%")
fig, ax = plt.subplots(1, 1, figsize = (10,8))
lr_decision_boundary(ax, clf_2d, test_unpoison_pca_df[cls_columns[:2]], test_unpoison_pca_df['labels'], legend_loc='upper left')
fig, ax = plt.subplots(1, 1, figsize = (10,8))
lr_decision_boundary(ax, clf_2d, test_poison_pca_df[cls_columns[:2]], test_poison_pca_df['labels'], legend_loc='upper left')
fig, ax = plt.subplots(1, 2, figsize = (15,8))
lr_decision_boundary(ax[0], clf_2d, test_unpoison_pca_df[cls_columns[:2]], test_unpoison_pca_df['labels'], legend_loc='upper left')
lr_decision_boundary(ax[1], clf_2d, test_poison_pca_df[cls_columns[:2]], test_poison_pca_df['labels'], legend_loc='upper left')
fig.savefig(project_dir/f'plots/pos_{dp.poison_location}_{artifacts[dp.artifact_idx][1:-2].lower()}_test_poison_lr_db.png', bbox_inches='tight', pad_inches=0)
```
# Word2Vec
In this section we train everyday Word2Vec models with gensim and PyTorch.
## Gensim
```
import gensim
sentences = [['first', 'sentence'], ['second', 'sentence']]
# Pass in the text data to initialize and train a Word2Vec model in one step
model = gensim.models.Word2Vec(sentences, min_count=1)
model.wv.key_to_index['first']
# Similarity between two words
model.wv.similarity('first', 'second')
```
### Example 1: Training an English word2vec model with gensim
A gensim word2vec model can be trained incrementally; the example below spells out the commonly used parameters:
```
from gensim.test.utils import common_texts
from gensim.models import Word2Vec
print(common_texts[:200])
model = Word2Vec(sentences=common_texts, vector_size=100,
window=5, min_count=1, workers=4)
model
model.save("word2vec.model")
# Save first, then resume training later
model = Word2Vec.load("word2vec.model")
model.train([["hello", "world"]], total_examples=1, epochs=1)
model
vector1 = model.wv['computer'] # get numpy vector of a word
vector1
sims = model.wv.most_similar('computer', topn=10) # get other similar words
sims
```
To save only the trained word-vector key-value pairs, use `KeyedVectors`, which loads them quickly into memory for computing word vectors:
```
from gensim.models import KeyedVectors
# Store just the words + their trained embeddings.
word_vectors = model.wv
word_vectors.save("word2vec.wordvectors")
# Load back with memory-mapping = read-only, shared across processes.
wv = KeyedVectors.load("word2vec.wordvectors", mmap='r')
vector2 = wv['computer'] # Get numpy vector of a word
vector2
compare = vector1 == vector2
compare.all()
```
The two vectors are identical.
### Example 2: Training a Chinese word2vec model with gensim
```
txt_path = 'data/C000008_test.txt'
sentences = [i.split() for i in open(txt_path, 'r', encoding='utf-8').read().split('\n')]
sentences[:2]
model = gensim.models.Word2Vec(
sentences, vector_size=50, window=5, min_count=1, workers=4)
model.save('C000008.word2vec.model')
model.wv.key_to_index
# key index
print(model.wv.key_to_index['中国'])
print(model.wv.key_to_index['澳大利亚'])
# word vector
print(model.wv['中国'])
print(model.wv['澳大利亚'])
# compare two word
print(model.wv.similarity('中国', '澳大利亚'))
```
## PyTorch
This section demonstrates training a skip-gram word2vec model with PyTorch, somewhat simplified relative to the paper implementation in the previous section.
```
import matplotlib.pyplot as plt
import torch.optim as optim
import torch.nn as nn
import torch
import numpy as np
import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
def random_batch():
random_inputs = []
random_labels = []
random_index = np.random.choice(
range(len(skip_grams)), batch_size, replace=False)
for i in random_index:
random_inputs.append(np.eye(voc_size)[skip_grams[i][0]]) # target
random_labels.append(skip_grams[i][1]) # context word
return random_inputs, random_labels
class Word2Vec(nn.Module):
# Model
def __init__(self):
super(Word2Vec, self).__init__()
# W and WT are independent weights, not transposes of each other
# voc_size > embedding_size Weight
self.W = nn.Linear(voc_size, embedding_size, bias=False)
# embedding_size > voc_size Weight
self.WT = nn.Linear(embedding_size, voc_size, bias=False)
def forward(self, X):
# X : [batch_size, voc_size]
hidden_layer = self.W(X) # hidden_layer : [batch_size, embedding_size]
# output_layer : [batch_size, voc_size]
output_layer = self.WT(hidden_layer)
return output_layer
```
Define the parameters and start training:
```
batch_size = 2 # mini-batch size
embedding_size = 10 # embedding size
sentences = ["apple banana fruit", "banana orange fruit", "orange banana fruit",
"dog cat animal", "cat monkey animal", "monkey dog animal"]
word_sequence = " ".join(sentences).split()
word_list = " ".join(sentences).split()
word_list = list(set(word_list))
word_dict = {w: i for i, w in enumerate(word_list)}
voc_size = len(word_list)
# Build skip-gram pairs with a window size of one
skip_grams = []
for i in range(1, len(word_sequence) - 1):
target = word_dict[word_sequence[i]]
context = [word_dict[word_sequence[i - 1]],
word_dict[word_sequence[i + 1]]]
for w in context:
skip_grams.append([target, w])
model = Word2Vec()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
# Training
for epoch in range(9000):
input_batch, target_batch = random_batch()
input_batch = torch.Tensor(input_batch)
target_batch = torch.LongTensor(target_batch)
optimizer.zero_grad()
output = model(input_batch)
# output : [batch_size, voc_size], target_batch : [batch_size] (LongTensor, not one-hot)
loss = criterion(output, target_batch)
if (epoch + 1) % 1000 == 0:
print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss))
loss.backward()
optimizer.step()
for i, label in enumerate(word_list):
W, WT = model.parameters()
x, y = W[0][i].item(), W[1][i].item()
plt.scatter(x, y)
plt.annotate(label, xy=(x, y), xytext=(5, 2),
textcoords='offset points', ha='right', va='bottom')
plt.show()
import os
os.remove('word2vec.model')
os.remove('word2vec.wordvectors')
os.remove('C000008.word2vec.model')
```
End of this section.
```
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
# If you're using Google Colab and not running locally, run this cell.
## Install dependencies
!pip install wget
!apt-get install sox libsndfile1 ffmpeg
!pip install unidecode
# ## Install NeMo
BRANCH = 'r1.0.0rc1'
!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[asr]
## Install TorchAudio
!pip install torchaudio>=0.6.0 -f https://download.pytorch.org/whl/torch_stable.html
## Grab the config we'll use in this example
!mkdir configs
```
# Introduction
This VAD tutorial is based on the MarbleNet model from the paper "[MarbleNet: Deep 1D Time-Channel Separable Convolutional Neural Network for Voice Activity Detection](https://arxiv.org/abs/2010.13886)", which is a modification and extension of [MatchboxNet](https://arxiv.org/abs/2004.08531).
The notebook will follow the steps below:
- Dataset preparation: instructions for downloading the datasets and converting them to a format suitable for use with nemo_asr
- Audio preprocessing (feature extraction): signal normalization, windowing, (log) spectrogram (or mel-scale spectrogram, or MFCC)
- Data augmentation using SpecAugment "[SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779)" to increase the number of data samples
- Developing a small neural classification model that can be trained efficiently
- Model training on the Google Speech Commands dataset and Freesound dataset in NeMo
- Evaluating the model's error cases by listening to the samples
- Adding more evaluation metrics and transfer learning / fine-tuning
```
# Some utility imports
import os
from omegaconf import OmegaConf
```
# Data Preparation
## Download the background data
We suggest using the background categories of the [freesound](https://freesound.org/) dataset as our non-speech/background data.
We provide scripts for downloading and resampling it; please have a look at the Data Preparation part of the NeMo docs. Note that downloading this dataset may take hours.
**NOTE:** This tutorial serves as a demonstration of how to train and evaluate models for VAD using NeMo. We avoid using the freesound dataset here, and instead use the `_background_noise_` category in the Google Speech Commands Dataset as non-speech/background data.
## Download the speech data
We will use the open-source Google Speech Commands Dataset as our speech data (we will use V2 of the dataset for this tutorial, but only very minor changes are needed to support V1). Google Speech Commands Dataset V2 takes roughly 6GB of disk space. The scripts below will download the dataset and convert it to a format suitable for use with nemo_asr.
**NOTE**: You may additionally pass the `--test_size` or `--val_size` flags for splitting the train, val, and test data.
You may also pass the `--seg_len` flag to set the segment length; the default is 0.63s.
**NOTE**: You may additionally pass a `--rebalance_method='fixed|over|under'` at the end of the script to rebalance the class samples in the manifest.
* 'fixed': a fixed number of samples for each class, e.g. train 500, val 100, and test 200 (change the numbers in the script if you want)
* 'over': Oversampling rebalance method
* 'under': Undersampling rebalance method
**NOTE**: We only take a small subset of the speech data for demonstration; if you want to use the entire speech data, don't forget to **delete `--demo`** and change the rebalance method/number. The `_background_noise_` category has only **6** audio files, so we generate more from them to enlarge our background training data. If you want to use your own background-noise data, just change `background_data_root` and **delete `--demo`**.
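The three rebalance methods can be sketched roughly as follows. This is an illustrative reimplementation for intuition, not the actual logic of `process_vad_data.py`; the function name and defaults are assumptions.

```python
import random

def rebalance(samples_by_class, method="fixed", fixed_n=500, seed=0):
    """Rebalance per-class sample lists (illustrative sketch only).

    samples_by_class: dict mapping class label -> list of samples.
    'fixed' keeps a fixed number per class, 'over' oversamples up to the
    majority-class size, 'under' downsamples to the minority-class size.
    """
    rng = random.Random(seed)
    sizes = [len(v) for v in samples_by_class.values()]
    if method == "fixed":
        target = fixed_n
    elif method == "over":
        target = max(sizes)
    elif method == "under":
        target = min(sizes)
    else:
        raise ValueError(f"unknown method: {method}")
    out = {}
    for label, items in samples_by_class.items():
        if len(items) >= target:
            out[label] = rng.sample(items, target)  # downsample without replacement
        else:
            # oversample: keep all items, then draw the remainder with replacement
            out[label] = items + rng.choices(items, k=target - len(items))
    return out

data = {"speech": list(range(10)), "background": list(range(4))}
print({k: len(v) for k, v in rebalance(data, "over").items()})
# both classes now have 10 samples
```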
```
tmp = 'src'
data_folder = 'data'
if not os.path.exists(tmp):
os.makedirs(tmp)
if not os.path.exists(data_folder):
os.makedirs(data_folder)
script = os.path.join(tmp, 'process_vad_data.py')
if not os.path.exists(script):
!wget -P $tmp https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/dataset_processing/process_vad_data.py
speech_data_root = os.path.join(data_folder, 'google_dataset_v2')
background_data_root = os.path.join(data_folder, 'google_dataset_v2/google_speech_recognition_v2/_background_noise_')# your <resampled freesound data directory>
out_dir = os.path.join(data_folder, 'manifest')
if not os.path.exists(speech_data_root):
os.mkdir(speech_data_root)
# This may take a few minutes
!python $script \
--out_dir={out_dir} \
--speech_data_root={speech_data_root} \
--background_data_root={background_data_root}\
--log \
--demo \
--rebalance_method='fixed'
```
## Preparing the manifest file
Manifest files are the data structure used by NeMo to declare a few important details about the data :
1) `audio_filepath`: Refers to the path to the raw audio file <br>
2) `label`: The class label (speech or background) of this sample <br>
3) `duration`: The length of the audio file, in seconds.<br>
4) `offset`: The start of the segment, in seconds.
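As a concrete illustration of the format, a manifest is one JSON object per line with exactly these fields. The file paths below are hypothetical, used only to show the structure:

```python
import json

# Write a tiny manifest: one JSON object per line (hypothetical paths).
entries = [
    {"audio_filepath": "data/speech/sample_0001.wav",
     "label": "speech", "duration": 0.63, "offset": 0.0},
    {"audio_filepath": "data/noise/white_noise.wav",
     "label": "background", "duration": 0.63, "offset": 1.26},
]
with open("demo_manifest.json", "w") as f:
    for entry in entries:
        f.write(json.dumps(entry) + "\n")

# Read it back, line by line:
with open("demo_manifest.json") as f:
    rows = [json.loads(line) for line in f]
print(rows[0]["label"])  # speech
```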
```
# change below if you don't have or don't want to use rebalanced data
train_dataset = 'data/manifest/balanced_background_training_manifest.json,data/manifest/balanced_speech_training_manifest.json'
val_dataset = 'data/manifest/background_validation_manifest.json,data/manifest/speech_validation_manifest.json'
test_dataset = 'data/manifest/balanced_background_testing_manifest.json,data/manifest/balanced_speech_testing_manifest.json'
```
## Read a few rows of the manifest file
Manifest files are the data structure used by NeMo to declare a few important details about the data :
1) `audio_filepath`: Refers to the path to the raw audio file <br>
2) `label`: The class label (speech or background) of this sample <br>
3) `duration`: The length of the audio file, in seconds.
```
sample_test_dataset = test_dataset.split(',')[0]
!head -n 5 {sample_test_dataset}
```
# Training - Preparation
We will be training a MatchboxNet model from the paper "[MatchboxNet: 1D Time-Channel Separable Convolutional Neural Network Architecture for Speech Commands Recognition](https://arxiv.org/abs/2004.08531)", which evolved from the [QuartzNet](https://arxiv.org/pdf/1910.10261.pdf) model. The benefit of QuartzNet over Jasper models is the use of separable convolutions, which greatly reduce the number of parameters required to reach good model accuracy.
MatchboxNet models generally follow the model definition pattern QuartzNet-[BxRxC], where B is the number of blocks, R is the number of convolutional sub-blocks, and C is the number of channels in these blocks. Each sub-block contains a 1-D masked convolution, batch normalization, ReLU, and dropout.
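To make the parameter saving from separable convolutions concrete, here is a back-of-the-envelope comparison. The channel and kernel sizes are illustrative, not taken from the actual model config:

```python
# Parameter counts for a standard vs. depthwise-separable 1-D convolution
# (bias terms omitted for simplicity).

def standard_conv1d_params(c_in, c_out, k):
    # every output channel sees every input channel across the kernel
    return c_in * c_out * k

def separable_conv1d_params(c_in, c_out, k):
    # depthwise pass (one kernel per input channel) + 1x1 pointwise mixing
    return c_in * k + c_in * c_out

c_in, c_out, k = 64, 64, 13  # illustrative sizes
print(standard_conv1d_params(c_in, c_out, k))   # 53248
print(separable_conv1d_params(c_in, c_out, k))  # 4928
```

With these sizes the separable version uses roughly a tenth of the parameters, which is why QuartzNet-style models stay small.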
```
# NeMo's "core" package
import nemo
# NeMo's ASR collection - this collections contains complete ASR models and
# building blocks (modules) for ASR
import nemo.collections.asr as nemo_asr
```
## Model Configuration
The MatchboxNet Model is defined in a config file which declares multiple important sections.
They are:
1) `model`: All arguments that will relate to the Model - preprocessors, encoder, decoder, optimizer and schedulers, datasets and any other related information
2) `trainer`: Any argument to be passed to PyTorch Lightning
```
MODEL_CONFIG = "marblenet_3x2x64.yaml"
if not os.path.exists(f"configs/{MODEL_CONFIG}"):
!wget -P configs/ "https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/examples/asr/conf/{MODEL_CONFIG}"
# This line will print the entire config of the MatchboxNet model
config_path = f"configs/{MODEL_CONFIG}"
config = OmegaConf.load(config_path)
print(config.pretty())
# Preserve some useful parameters
labels = config.model.labels
sample_rate = config.sample_rate
```
### Setting up the datasets within the config
If you notice, there are a few config dictionaries called `train_ds`, `validation_ds` and `test_ds`. These are the configurations used to set up the Dataset and DataLoaders for the corresponding splits.
```
print(config.model.train_ds.pretty())
```
### `???` inside configs
You will often notice that some configs have `???` in place of paths. This placeholder marks a mandatory value that the user must fill in before the config can be used.
Let's add the paths to the manifests to the config above.
```
config.model.train_ds.manifest_filepath = train_dataset
config.model.validation_ds.manifest_filepath = val_dataset
config.model.test_ds.manifest_filepath = test_dataset
```
## Building the PyTorch Lightning Trainer
NeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem!
Let's first instantiate a Trainer object!
```
import torch
import pytorch_lightning as pl
print("Trainer config - \n")
print(config.trainer.pretty())
# Let's modify some trainer configs for this demo
# Checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
# Reduces maximum number of epochs to 5 for quick demonstration
config.trainer.max_epochs = 5
# Remove distributed training flags
config.trainer.accelerator = None
trainer = pl.Trainer(**config.trainer)
```
## Setting up a NeMo Experiment
NeMo has an experiment manager that handles logging and checkpointing for us, so let's use it !
```
from nemo.utils.exp_manager import exp_manager
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# The exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
```
## Building the MatchboxNet Model
MatchboxNet is an ASR model with a classification task - it generates one label for the entire provided audio stream. Therefore we encapsulate it inside the `EncDecClassificationModel` as follows.
```
vad_model = nemo_asr.models.EncDecClassificationModel(cfg=config.model, trainer=trainer)
```
# Training a MatchboxNet Model
As MatchboxNet is inherently a PyTorch Lightning Model, it can easily be trained in a single line - `trainer.fit(model)` !
# Training the model
Even with such a small model (73k parameters), and just 5 epochs (should take just a few minutes to train), you should be able to get a test set accuracy score around 98.83% (this result is for the [freesound](https://freesound.org/) dataset) with enough training data.
**NOTE:** If you follow this tutorial and use the generated background data, you may find the results below acceptable, but please remember that this tutorial is only a **demonstration** and the dataset is not good enough. Please switch to a better background dataset and train with enough data to see real improvement!
Experiment with increasing the number of epochs or with batch size to see how much you can improve the score!
**NOTE:** Noise robustness is quite important for VAD task. Below we list the augmentation we used in this demo.
Please refer to [05_Online_Noise_Augmentation.ipynb](https://github.com/NVIDIA/NeMo/blob/r1.0.0rc1/tutorials/asr/05_Online_Noise_Augmentation.ipynb) for understanding noise augmentation in NeMo.
```
# Noise augmentation
print(config.model.train_ds.augmentor.pretty()) # noise augmentation
print(config.model.spec_augment.pretty()) # SpecAug data augmentation
```
If you are interested in a **pretrained** model, please have a look at [Transfer Learning & Fine-tuning on a new dataset](#Transfer-Learning-&-Fine-tuning-on-a-new-dataset) and the upcoming tutorial 07 Offline_and_Online_VAD_Demo
### Monitoring training progress
Before we begin training, let's first create a Tensorboard visualization to monitor progress
```
try:
    from google import colab
    COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
    COLAB_ENV = False

# Load the TensorBoard notebook extension
if COLAB_ENV:
    %load_ext tensorboard
    %tensorboard --logdir {exp_dir}
else:
    print("To use tensorboard, please use this notebook in a Google Colab environment.")
```
### Training for 5 epochs
We see below that the model begins to get modest scores on the validation set after just 5 epochs of training
```
trainer.fit(vad_model)
```
# Fast Training
We can dramatically improve the time taken to train this model by using Multi GPU training along with Mixed Precision.
For multi-GPU training, take a look at [the PyTorch Lightning Multi-GPU training section](https://pytorch-lightning.readthedocs.io/en/latest/advanced/multi_gpu.html)
For mixed-precision training, take a look at [the PyTorch Lightning Mixed-Precision training section](https://pytorch-lightning.readthedocs.io/en/latest/advanced/amp.html)
```python
# Mixed precision:
trainer = pl.Trainer(amp_level='O1', precision=16)
# Trainer with a distributed backend:
trainer = pl.Trainer(gpus=2, num_nodes=2, accelerator='ddp')
# Of course, you can combine these flags as well.
```
# Evaluation
## Evaluation on the Test set
Let's compute the final score on the test set via `trainer.test(model)`
```
trainer.test(vad_model, ckpt_path=None)
```
## Evaluation of incorrectly predicted samples
Given that we have a trained model, which performs reasonably well, let's try to listen to the samples where the model is least confident in its predictions.
### Extract the predictions from the model
We want to possess the actual logits of the model instead of just the final evaluation score, so we can define a function to perform the forward step for us without computing the final loss. Instead, we extract the logits per batch of samples provided.
### Accessing the data loaders
We can utilize the `setup_test_data` method in order to instantiate a data loader for the dataset we want to analyze.
For convenience, we can access these instantiated data loaders using the following accessors - `vad_model._train_dl`, `vad_model._validation_dl` and `vad_model._test_dl`.
```
vad_model.setup_test_data(config.model.test_ds)
test_dl = vad_model._test_dl
```
### Partial Test Step
Below we define a utility function to perform most of the test step. For reference, the test step is defined as follows:
```python
def test_step(self, batch, batch_idx, dataloader_idx=0):
    audio_signal, audio_signal_len, labels, labels_len = batch
    logits = self.forward(input_signal=audio_signal, input_signal_length=audio_signal_len)
    loss_value = self.loss(logits=logits, labels=labels)
    correct_counts, total_counts = self._accuracy(logits=logits, labels=labels)
    return {'test_loss': loss_value, 'test_correct_counts': correct_counts, 'test_total_counts': total_counts}
```
```
@torch.no_grad()
def extract_logits(model, dataloader):
    logits_buffer = []
    label_buffer = []

    # Follow the above definition of the test_step
    for batch in dataloader:
        audio_signal, audio_signal_len, labels, labels_len = batch
        logits = model(input_signal=audio_signal, input_signal_length=audio_signal_len)
        logits_buffer.append(logits)
        label_buffer.append(labels)
        print(".", end='')
    print()

    print("Finished extracting logits !")
    logits = torch.cat(logits_buffer, 0)
    labels = torch.cat(label_buffer, 0)
    return logits, labels
cpu_model = vad_model.cpu()
cpu_model.eval()
logits, labels = extract_logits(cpu_model, test_dl)
print("Logits:", logits.shape, "Labels :", labels.shape)
# Compute accuracy - `_accuracy` is a PyTorch Lightning Metric !
acc = cpu_model._accuracy(logits=logits, labels=labels)
print(f"Accuracy : {float(acc[0]*100)} %")
```
### Filtering out incorrect samples
Let us now filter out the incorrectly labeled samples from the total set of samples in the test set
```
import librosa
import json
import IPython.display as ipd
# First let's create a utility class to remap the integer class labels to actual string label
class ReverseMapLabel:
    def __init__(self, data_loader):
        self.label2id = dict(data_loader.dataset.label2id)
        self.id2label = dict(data_loader.dataset.id2label)

    def __call__(self, pred_idx, label_idx):
        return self.id2label[pred_idx], self.id2label[label_idx]
# Next, let's get the indices of all the incorrectly labeled samples
sample_idx = 0
incorrect_preds = []
rev_map = ReverseMapLabel(test_dl)
# Remember, evaluated_tensor = (loss, logits, labels)
probs = torch.softmax(logits, dim=-1)
probas, preds = torch.max(probs, dim=-1)
total_count = cpu_model._accuracy.total_counts_k[0]
incorrect_ids = (preds != labels).nonzero()
for idx in incorrect_ids:
    proba = float(probas[idx][0])
    pred = int(preds[idx][0])
    label = int(labels[idx][0])
    idx = int(idx[0]) + sample_idx
    incorrect_preds.append((idx, *rev_map(pred, label), proba))
print(f"Num test samples : {total_count.item()}")
print(f"Num errors : {len(incorrect_preds)}")
# First let's sort by confidence of prediction
incorrect_preds = sorted(incorrect_preds, key=lambda x: x[-1], reverse=False)
```
### Examine a subset of incorrect samples
Let's print out the (test id, predicted label, ground truth label, confidence) tuple of first 20 incorrectly labeled samples
```
for incorrect_sample in incorrect_preds[:20]:
    print(str(incorrect_sample))
```
### Define a threshold below which we designate a model's prediction as "low confidence"
```
# Filter out how many such samples exist
low_confidence_threshold = 0.8
count_low_confidence = len(list(filter(lambda x: x[-1] <= low_confidence_threshold, incorrect_preds)))
print(f"Number of low confidence predictions : {count_low_confidence}")
```
### Let's hear the samples which the model has least confidence in !
```
# First let's create a helper function to parse the manifest files
def parse_manifest(manifest):
    data = []
    for line in manifest:
        line = json.loads(line)
        data.append(line)
    return data

# Next, let's create a helper function to actually listen to certain samples
def listen_to_file(sample_id, pred=None, label=None, proba=None):
    # Load the audio waveform using librosa
    filepath = test_samples[sample_id]['audio_filepath']
    audio, sample_rate = librosa.load(filepath,
                                      offset=test_samples[sample_id]['offset'],
                                      duration=test_samples[sample_id]['duration'])

    if pred is not None and label is not None and proba is not None:
        print(f"filepath: {filepath}, Sample : {sample_id} Prediction : {pred} Label : {label} Confidence = {proba: 0.4f}")
    else:
        print(f"Sample : {sample_id}")

    return ipd.Audio(audio, rate=sample_rate)
import json

# Now let's load the test manifest into memory
all_test_samples = []
for manifest_path in test_dataset.split(','):
    print(manifest_path)
    with open(manifest_path, 'r') as test_f:
        test_samples = test_f.readlines()
    all_test_samples.extend(test_samples)
print(len(all_test_samples))
test_samples = parse_manifest(all_test_samples)
# Finally, let's listen to all the audio samples where the model made a mistake
# Note: This list of incorrect samples may be quite large, so you may choose to subsample `incorrect_preds`
count = min(count_low_confidence, 20) # replace this line with just `count_low_confidence` to listen to all samples with low confidence
for sample_id, pred, label, proba in incorrect_preds[:count]:
    ipd.display(listen_to_file(sample_id, pred=pred, label=label, proba=proba))
```
## Adding evaluation metrics
Here is an example of how to use additional metrics (e.g. from `pytorch_lightning`) to evaluate your results.
**Note:** If you would like to add metrics for training and testing, have a look at
```python
NeMo/nemo/collections/common/metrics
```
```
from pytorch_lightning.metrics.classification import ConfusionMatrix
_, pred = logits.topk(1, dim=1, largest=True, sorted=True)
pred = pred.squeeze()
metric = ConfusionMatrix(num_classes=2)
metric(pred, labels)
# confusion_matrix(preds=pred, target=labels)
```
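For intuition, here is a framework-free sketch (an illustration of ours, not NeMo or Lightning code) of what the 2-class confusion matrix above computes — rows index the true label, columns the prediction:

```python
import numpy as np

# Build a 2x2 confusion matrix from predicted and true binary labels.
def confusion_matrix_2class(preds, labels):
    cm = np.zeros((2, 2), dtype=int)
    for pred, true in zip(preds, labels):
        cm[true, pred] += 1
    return cm

preds  = [0, 1, 1, 0, 1]
labels = [0, 1, 0, 0, 1]
print(confusion_matrix_2class(preds, labels))
# rows: true 0/1; columns: predicted 0/1
```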
# Transfer Learning & Fine-tuning on a new dataset
For transfer learning, please refer to the [**Transfer learning** part of the ASR tutorial](https://github.com/NVIDIA/NeMo/blob/r1.0.0rc1/tutorials/asr/01_ASR_with_NeMo.ipynb)
For more details on saving and restoring checkpoints, and on exporting a model in its entirety, please refer to the [**Fine-tuning on a new dataset** & **Advanced Usage** parts of the Speech Command tutorial](https://github.com/NVIDIA/NeMo/blob/r1.0.0rc1/tutorials/asr/03_Speech_Commands.ipynb)
# Inference and more
If you are interested in **pretrained** model and **streaming inference**, please have a look at our [VAD inference tutorial](https://github.com/NVIDIA/NeMo/blob/r1.0.0rc1/tutorials/asr/07_Online_Offline_Microphone_VAD_Demo.ipynb) and script [vad_infer.py](https://github.com/NVIDIA/NeMo/blob/r1.0.0rc1/examples/asr/vad_infer.py)
```
import matplotlib.pyplot as plt
import networkx as nx
import pandas as pd
import numpy as np
from scipy import stats
import scipy as sp
import datetime as dt
from ei_net import *
from ce_net import *
from collections import Counter
%matplotlib inline
##########################################
############ PLOTTING SETUP ##############
EI_cmap = "Greys"
where_to_save_pngs = "../figs/pngs/"
where_to_save_pdfs = "../figs/pdfs/"
save = True
plt.rc('axes', axisbelow=True)
##########################################
##########################################
```
# The emergence of informative higher scales in complex networks
# Chapter 08 - Miscellaneous Causal Emergence
_______________
## 8.1 All possible coarse-grainings
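Before running the exhaustive search below, it helps to see what the space of coarse-grainings looks like: every way to partition the node set into groups. The sketch below is a minimal illustration of that space, not the `all_possible_mappings` implementation from `ei_net`:

```python
# Enumerate every partition of a node set (the space of coarse-grainings).
# The number of partitions of an n-element set is the Bell number B(n).
def all_partitions(nodes):
    if not nodes:
        yield []
        return
    first, rest = nodes[0], nodes[1:]
    for smaller in all_partitions(rest):
        # put `first` into each existing block...
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i+1:]
        # ...or give it its own block
        yield [[first]] + smaller

print(len(list(all_partitions([0, 1, 2, 3]))))  # Bell number B(4) = 15
```

This count grows super-exponentially with network size, which is why an exhaustive search like the one below is only feasible for very small graphs.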
```
G = check_network(nx.barabasi_albert_graph(8,1))
micro_ei = effective_information(G)
all_macro_mappings = all_possible_mappings(G) # i think this works
macro_types = {i: 'spatem1' for i in G.nodes()}
current_best_ei = micro_ei
current_best_partition = dict(zip(list(G.nodes()), list(G.nodes())))
curr = dt.datetime.now()
ei_list = []
for ix, possible_mapping in enumerate(all_macro_mappings):
    if ix % 1000==0:
        print("%.3f of the way done."%(ix / len(all_macro_mappings)))
    MACRO = create_macro(G, possible_mapping, macro_types)
    macro_ei = effective_information(MACRO)
    if macro_ei > current_best_ei:
        current_best_ei = macro_ei
        current_best_partition = possible_mapping
    ei_list.append(macro_ei)
diff = dt.datetime.now()-curr
Gm = check_network(create_macro(G, current_best_partition, macro_types))
ns = 750
lw = 4
nc = 'w'
nec = '#333333'
mc = '#00cc84'
ec = '#666666'
ew = 4.0
fs = 14
ws = 20
fig, (ax0,ax1,ax2) = plt.subplots(1,3,figsize=(18,5))
# first subplot
pos0 = nx.kamada_kawai_layout(G)
nx.draw_networkx_nodes(G, pos0, node_color=nc, node_size=ns, edgecolors=nec, linewidths=lw, ax=ax0)
nx.draw_networkx_edges(G, pos0, edge_color=ec, width=ew, arrowsize=ws, ax=ax0)
nx.draw_networkx_labels(G,pos0, font_size=fs, font_weight='bold', ax=ax0)
ax0.set_axis_off()
ax0.set_title(r'Original network ($EI=%.4f$)'%micro_ei,fontsize=fs*1.25)
# second subplot
cols = ['#f59c01' if i <= micro_ei else '#00cc84' for i in ei_list]
sizs = [20 if i<=micro_ei else 75 for i in ei_list]
ax1.scatter(list(range(len(all_macro_mappings))), ei_list, marker='.', c=cols, s=sizs, alpha=0.5)
ax1.fill_between(range(-100,5000), micro_ei, 0, color='#f59c01', alpha=0.08)
ax1.fill_between(range(-100,5000), micro_ei, max(ei_list)*1.1, color='#00cc84', alpha=0.08)
ax1.hlines(micro_ei,-100,5000,color='#aa0047',linestyle='--',linewidth=4.0)
ax1.set_xlim(-50,len(all_macro_mappings)+50)
ax1.set_ylim(0.05,max(ei_list)*1.1)
ax1.tick_params(axis='both', which='major', labelsize=fs)
ax1.set_xlabel('Partition ID', fontsize=fs*1.1)
ax1.set_ylabel('$EI$', fontsize=fs*1.4)
ax1.grid(linewidth=2.0, alpha=0.2, color='#999999')
ax1.text(50, micro_ei+micro_ei/15, 'Causal emergence', fontsize=fs*1.2, verticalalignment='center')
ax1.text(50, micro_ei-micro_ei/15, 'Causal reduction', fontsize=fs*1.2, verticalalignment='center')
ax1.set_title('All possible partitions', fontsize=fs*1.25)
# third subplot
micro_cols = [nc if k==v else mc for k, v in current_best_partition.items()]
inds = np.where(np.array(list(current_best_partition.values())) > G.number_of_nodes()-1)[0]
n_macronodes = len(np.unique(np.array(list(current_best_partition.values()))[inds]))
macro_cols = [nc]*(Gm.number_of_nodes()-n_macronodes) + [mc]*n_macronodes
macro_size = [ns]*(Gm.number_of_nodes()-n_macronodes) + [ns*3]*n_macronodes
micronodes = [k for k, v in current_best_partition.items() if k==v]
macro_labels = micronodes + ['M%i'%i for i in range(n_macronodes+1)]
micronodes = [k for k, v in current_best_partition.items() if k==v]
fixed1 = dict(zip(Gm.nodes(),[pos0[i] for i in micronodes]))
pos1 = nx.spring_layout(Gm.nodes(), pos=fixed1, fixed=list(fixed1.keys()), iterations=1)
nx.draw_networkx_nodes(Gm, pos1, node_color=macro_cols,
node_size=macro_size, edgecolors=nec, linewidths=lw, ax=ax2)
nx.draw_networkx_edges(Gm, pos1, edge_color=ec, width=ew, arrowsize=ws, ax=ax2)
nx.draw_networkx_labels(Gm,pos1, labels=dict(zip(Gm.nodes(),macro_labels)),
font_size=fs, font_weight='bold', ax=ax2)
ax2.set_axis_off()
ax2.set_title(r'Macroscale network ($EI=%.4f$)'%current_best_ei, fontsize=fs*1.25)
if save:
    plt.savefig("../figs/pngs/AllPossibleMacros.png", bbox_inches='tight', dpi=425)
    plt.savefig("../figs/pdfs/AllPossibleMacros.pdf", bbox_inches='tight')
plt.show()
```
__________________
## 8.2 Inside the causal emergence algorithm, with the Karate club
```
G = nx.karate_club_graph()
CE = causal_emergence(G)
print("\nInside the CE dictionary are these objects\n", list(CE.keys()),'\n')
print("EI of the microscale network\n",CE['EI_micro'],'\n')
print("EI of the macroscale network\n", CE['EI_macro'],'\n')
print("The mapping itself\n", CE['mapping'],'\n')
ns = 350
lw = 2.5
nc = '#333333'
mc = 'dodgerblue'
ec = '#999999'
ew = 4.0
fs = 12
fc = 'w'
ws = 20
micro_cols = [nc if k==v else mc for k, v in CE['mapping'].items()]
inds = np.where(np.array(list(CE['mapping'].values())) > G.number_of_nodes()-1)[0]
n_macronodes = len(np.unique(np.array(list(CE['mapping'].values()))[inds]))
macro_cols = [nc]*(CE['G_macro'].number_of_nodes()-n_macronodes) + [mc]*n_macronodes
macro_size = [ns]*(CE['G_macro'].number_of_nodes()-n_macronodes) + [ns*3]*n_macronodes
pos0 = nx.spring_layout(G)
micronodes = [k for k, v in CE['mapping'].items() if k==v]
fixed1 = dict(zip(CE['G_macro'].nodes(),[pos0[i] for i in micronodes]))
pos1 = nx.spring_layout(CE['G_macro'], pos=fixed1, fixed=list(fixed1.keys()))
macro_labels = micronodes + ['M%i'%i for i in range(n_macronodes+1)]
fig, (ax0,ax1) = plt.subplots(1,2,figsize=(16,7))
# first subplot
nx.draw(CE['G_micro'], pos0, edge_color=ec,
node_color=micro_cols, node_size=ns, width=lw, ax=ax0)
nx.draw_networkx_labels(CE['G_micro'], pos0,
font_color=fc, font_size=fs, font_weight='bold', ax=ax0)
ax0.set_title(r'Micro-scale network: $EI=%.4f$'%CE['EI_micro'],
fontsize=fs*1.5)
# second subplot
nx.draw(CE['G_macro'], pos1, edge_color=ec,
node_color=macro_cols, node_size=macro_size, width=lw, ax=ax1)
nx.draw_networkx_labels(CE['G_macro'], pos1,
labels=dict(zip(CE['G_macro'].nodes(), macro_labels)),
font_color=fc, font_size=fs, font_weight='bold', ax=ax1)
ax1.set_title(r'Macro-scale network: $EI=%.4f$'%CE['EI_macro'], size=16)
if save:
    plt.savefig("../figs/pngs/Micro_Macro_karate.png", bbox_inches='tight', dpi=425)
    plt.savefig("../figs/pdfs/Micro_Macro_karate.pdf", bbox_inches='tight')
plt.show()
```
______________
```
def preferential_attachment_network(N, alpha=1.0, m=1):
    """
    Generates a network based off of a preferential attachment
    growth rule. Under this growth rule, new nodes place their
    $m$ edges to nodes already present in the graph, G, with
    a probability proportional to $k^\alpha$.

    Params
    ------
    N (int): the desired number of nodes in the final network
    alpha (float): the exponent of preferential attachment.
            When alpha is less than 1.0, we describe it
            as sublinear preferential attachment. At
            alpha > 1.0, it is superlinear preferential
            attachment. And at alpha=1.0, the network
            is grown under linear preferential attachment,
            as in the case of Barabasi-Albert networks.
    m (int): the number of new links that each new node joins
            the network with.

    Returns
    -------
    G (nx.Graph): a graph grown under preferential attachment.
    """
    G = nx.complete_graph(m+1)

    for node_i in range(m+1, N):
        degrees = np.array(list(dict(G.degree()).values()))
        probs = (degrees**alpha) / sum(degrees**alpha)
        eijs = np.random.choice(
            G.number_of_nodes(), size=(m,),
            replace=False, p=probs)
        for node_j in eijs:
            G.add_edge(node_i, node_j)

    return G
colorz = ["#6f8fcd","#5ac4a4","#c75a81","#c267b5","#cb5f56",
"#ca6631","#c5b33c","#8965d0","#6ec559","#b09563"]
np.random.shuffle(colorz)
N = 30
t = 1000
m = 1
mult = 1.5
fig,((ax0,ax1,ax2,ax3,ax8),(ax4,ax5,ax6,ax7,ax9))=plt.subplots(2,5,figsize=(12*mult,3.25*mult))
plt.subplots_adjust(wspace=0.1, hspace=0.22)
top_ax = [ax0,ax1,ax2,ax3,ax8]
bot_ax = [ax4,ax5,ax6,ax7,ax9]
i = 0
for alpha in [-5.0,-1.0,0.0,1.0,5.0]:
np.random.shuffle(colorz)
G = preferential_attachment_network(N, alpha, m)
CE = causal_emergence(G, t=t, printt=False)
mapp = CE['mapping']
Gm = CE['G_macro']
micro_cols = []
macs = {}
mac_col_map = np.unique([i for i in list(mapp.values()) if i >= G.number_of_nodes()])
mac_col_dict = {mac_col_map[i]:colorz[i] for i in range(len(mac_col_map))}
for k,v in mapp.items():
if k==v:
micro_cols.append('w')
else:
micro_cols.append(mac_col_dict[v])
inds = np.where(np.array(list(mapp.values())) > G.number_of_nodes()-1)[0]
extra_cols = list(np.unique(np.array(micro_cols)[inds]))
n_macronodes = len(np.unique(np.array(list(mapp.values()))[inds]))
macro_cols = ['w']*(Gm.number_of_nodes()-n_macronodes) + list(mac_col_dict.values())
ns = 45
micro_size = [ns]*G.number_of_nodes()
macro_size = [ns]*(Gm.number_of_nodes()-n_macronodes) + [ns*(3+i/2)]*n_macronodes
if i==4:
imax = [k for k,v in dict(G.degree()).items() if v > 3][0]
micro_size[imax] = micro_size[imax]*4
pos1 = nx.kamada_kawai_layout(G)
if i==0:
micro_size = np.array(micro_size)*0.8
pos1 = nx.spring_layout(G)
nx.draw_networkx_nodes(G, pos1, node_size=micro_size, node_color=micro_cols,
linewidths=1.5, edgecolors='#333333', ax=top_ax[i])
nx.draw_networkx_edges(G, pos1, ax=top_ax[i], width=2.0)
top_ax[i].set_title(r'Micro: $\alpha=$%.1f'%alpha)
xlims = [x[0] for x in list(pos1.values())]
if i==0:
xlims[0] = xlims[0]*0.8
xlims[1] = xlims[1]*0.8
ylims = [y[1] for y in list(pos1.values())]
top_ax[i].set_xlim(min(xlims)-.5,max(xlims)+.5)
top_ax[i].set_ylim(min(ylims)-.2,max(ylims)+.2)
top_ax[i].set_axis_off()
Gm = Gm.to_undirected()
micronodes = [k for k, v in mapp.items() if k==v]
fixed1 = dict(zip(Gm.nodes(),[pos1[i] for i in micronodes]))
pos2 = nx.spring_layout(Gm, pos=fixed1, fixed=list(fixed1.keys()))
if i==4:
pos2 = nx.circular_layout(Gm)
if Gm.number_of_nodes() - G.number_of_nodes()==0:
pos2 = pos1.copy()
xlims = [x[0] for x in list(pos2.values())]
bot_ax[i].set_xlim(min(xlims)-.6,max(xlims)+.6)
if i==4:
xlims = [x[0] for x in list(pos2.values())]
bot_ax[i].set_xlim(min(xlims)-.85,max(xlims)+.85)
macro_size = np.array(macro_size)*2
if i==0:
macro_size = np.array(macro_size)*0.8
ylims = [y[1] for y in list(pos2.values())]
bot_ax[i].set_ylim(min(ylims)-.2,max(ylims)+.2)
nx.draw_networkx_nodes(Gm, pos2, node_size=macro_size, node_color=macro_cols, alpha=0.98,
linewidths=1.5, edgecolors='#333333', ax=bot_ax[i])
nx.draw_networkx_edges(Gm, pos2, ax=bot_ax[i], width=2.0)
bot_ax[i].set_title(r'Macro: $\alpha=$%.1f'%alpha)
bot_ax[i].set_axis_off()
bot_ax[i].set_axis_off()
i+=1
if save:
    plt.savefig('../figs/pngs/pref_attach_networks.png', dpi=425, bbox_inches='tight')
    plt.savefig('../figs/pdfs/pref_attach_networks.pdf', dpi=425, bbox_inches='tight')
plt.show()
```
## 8.3 Inaccuracies
```
N = 80
G = preferential_attachment_network(N,alpha=1.1)
t = 100
startT = dt.datetime.now()
CE = causal_emergence(G, t=t, check_inacc=True, printt=False)
finisH = dt.datetime.now()
diff = finisH-startT
print('Finished causal emergence, took', diff, "seconds.")
mapp = CE['mapping']
Gm = CE['G_macro']
inaccs = CE['inaccuracy']
# node colors
micro_cols = []
macs = {}
mac_col_map = np.unique([i for i in list(mapp.values()) if i >= G.number_of_nodes()])
mac_col_dict = {mac_col_map[i]:colorz[i] for i in range(len(mac_col_map))}
for k,v in mapp.items():
    if k==v:
        micro_cols.append('w')
    else:
        micro_cols.append(mac_col_dict[v])
inds = np.where(np.array(list(mapp.values())) > G.number_of_nodes()-1)[0]
extra_cols = list(np.unique(np.array(micro_cols)[inds]))
n_macronodes = len(np.unique(np.array(list(mapp.values()))[inds]))
macro_cols = ['w']*(Gm.number_of_nodes()-n_macronodes) + list(mac_col_dict.values())
# node sizes
ns = 100
micro_size = [ns]*G.number_of_nodes()
macro_size = [ns]*(Gm.number_of_nodes()-n_macronodes) + [ns*2.75]*n_macronodes
fig, ((ax00,ax02),(ax1,ax2)) = plt.subplots(2, 2, figsize=(12,10))
# first subplot
pos1 = nx.kamada_kawai_layout(G)
nx.draw_networkx_nodes(G, pos1, node_size=micro_size, node_color=micro_cols,
linewidths=2.5, edgecolors='#333333', ax=ax00)
nx.draw_networkx_edges(G, pos1, edge_color='#666666', ax=ax00, width=3.0)
ax00.set_title('Micro (N=%i)'%N,fontsize=16)
ax00.set_axis_off()
# second subplot
Gm = Gm.to_undirected()
pos2 = nx.spring_layout(Gm, pos=pos1)
nx.draw_networkx_nodes(Gm, pos2, node_size=macro_size, node_color=macro_cols,
linewidths=2.5, edgecolors='#333333', ax=ax02)
nx.draw_networkx_edges(Gm, pos2, edge_color='#666666', ax=ax02, width=3.0)
ax02.set_title('Macro (N=%i)'%Gm.number_of_nodes(),fontsize=16)
ax02.set_axis_off()
# third subplot
ax1.bar(1.15,CE['EI_micro'], facecolor='#F5F5F5',edgecolor='crimson',linewidth=3.5,
width=0.15, label=r'$micro$')
ax1.bar(1.35,CE['EI_macro'], facecolor='#F5F5F5',edgecolor='dodgerblue',linewidth=3.5,
width=0.15, label=r'$macro$')
ax1.hlines(CE['EI_micro'],2.1,0, color='crimson',zorder=0,linestyle='--',linewidth=3.0)
ax1.hlines(CE['EI_macro'],2,0, color='dodgerblue',zorder=0,linestyle='--',linewidth=3.0)
ax1.set_xlim(1,2)
ax1.tick_params(axis='both', which='major', labelsize=fs)
ax1.set_title("Causal emergence (%.1f sec.)"%diff.total_seconds(),fontsize=16)
ax1.set_ylabel('EI',fontsize=20)
ax1.grid(True, linestyle='-', linewidth=2.0, color='#999999', alpha=0.3)
ax1.set_xticks([])
ax1.legend(bbox_to_anchor=[0.55,0.6], fontsize=16)
# fourth subplot
ax2.plot(inaccs, linewidth=4.0, alpha=0.9, color='#333333')
ax2.set_title("Macroscale inaccuracy",fontsize=16)
ax2.grid(True, linestyle='-', linewidth=2.0, color='#999999', alpha=0.3)
ax2.set_xlim(-1,t+1)
if save:
    plt.savefig('../figs/pngs/causal_emergence_example1.png', dpi=425, bbox_inches='tight')
    plt.savefig('../figs/pdfs/causal_emergence_example1.pdf', bbox_inches='tight')
plt.show()
```
________________________
## 8.4 Inaccuracy, continued
```
from utilities import add_subplot_axes
tups=[(0,0), (0,1), (0,2), (0,3), (0,4),
(1,0), (1,1), (1,2), (1,3), (1,4),
(2,0), (2,1), (2,2), (2,3), (2,4)]
colorz = ["#d75d32",
"#4ad0c4",
"#9d68d2",
"#a6cf41",
"#d15c88",
"#6fbc71",
"#b8974c",
"dodgerblue", 'crimson']
m = 1
t = 1000
maxy = 0
out_ces = []
graphs = []
out_stats = []
for i in range(len(tups)):
    N = np.random.choice(list(range(25,35)))
    m = np.random.choice([1,1,1,2,2])
    alpha = np.round(np.random.uniform(1,2),3)
    G = preferential_attachment_network(N, alpha, m)
    CE = causal_emergence(G, printt=False, t=t)
    types_CE = CE['macro_types']
    spatem1s = len([j for j in types_CE.values() if j=='spatem1'])
    out_stats.append((N,m,alpha,CE['EI_macro']-CE['EI_micro'],spatem1s))
    minacc = CE['inaccuracy']
    txs = np.array(list(range(len(minacc))))
    xvals = np.array(txs)
    means = minacc
    maxy = max([maxy, max(means)])
    Gp = G.to_undirected()
    micro_cols = []
    macs = {}
    for k,v in CE['mapping'].items():
        if k==v:
            micro_cols.append('w')
        else:
            if v not in list(macs.keys()):
                macs[v] = len(macs.keys())
            micro_cols.append(colorz[macs[v]])
    out_ces.append([means,micro_cols])
    graphs.append(Gp)
fi, ax = plt.subplots(3, 5, figsize=(25.25,14))
plt.subplots_adjust(wspace=0.22, hspace=0.275)
for i, means in enumerate(out_ces):
    micro_cols = means[1]
    means = means[0]
    q = tups[i]
    N = out_stats[i][0]
    m = out_stats[i][1]
    a = out_stats[i][2]
    c = out_stats[i][3]
    s = out_stats[i][4]
    ax[q].set_title("pref. attach. (N=%i, m=%i, α=%.2f)\ncausal emergence=%.4f"%\
                    (N,m,a,c), fontsize=14)
    ax[q].hlines(0, 0, t, alpha=0.8, color='k', linewidth=3.85, linestyle=':')
    ax[q].hlines(0, 0, t, alpha=0.0, color='k', linewidth=3.0, linestyle='-',
                 label='number of macronodes: %i'%(s))
    ax[q].semilogx(xvals, means, linewidth=3.0, alpha=0.7, color='#333333')
    ax[q].legend(loc=0, framealpha=0.9, fontsize=12, handletextpad=-1.5)
    ax[q].set_xlim(0.25,t)
    ax[q].set_ylim(0-0.05*maxy,1.05*maxy)
    ax[q].grid(linestyle='-', color='#999999', alpha=0.3, linewidth=2.0)
    if q[1]==0:
        ax[q].set_ylabel('inaccuracy of macroscale', fontsize=14)
    if q[0]==2:
        ax[q].set_xlabel('time', fontsize=14)
    Gp = graphs[i]
    h = 0.1
    rect = [0.58,h,0.415,0.44]
    ax2 = add_subplot_axes(ax[q],rect)
    pos = nx.kamada_kawai_layout(Gp)
    nx.draw_networkx_nodes(Gp, pos, node_size=45, node_color=micro_cols, linewidths=1.5,
                           alpha=0.98, edgecolors='#333333', ax=ax2)
    nx.draw_networkx_edges(Gp, pos, edge_color='#333333', alpha=0.8, ax=ax2, width=1.5)
    ax2.set_axis_off()
if save:
    plt.savefig('../figs/pngs/15_inacc_comparison.png', dpi=425, bbox_inches='tight')
    plt.savefig('../figs/pdfs/15_inacc_comparison.pdf', bbox_inches='tight')
plt.show()
```
__________________
**Chapter 1 – The Machine Learning landscape**
_This is the code used to generate some of the figures in chapter 1._
# Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
```
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import numpy.random as rnd
import os
# to make this notebook's output stable across runs
rnd.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "fundamentals"
def save_fig(fig_id, tight_layout=True):
    path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format='png', dpi=300)
```
# Load and prepare Life satisfaction data
```
import pandas as pd
# Download CSV from http://stats.oecd.org/index.aspx?DataSetCode=BLI
datapath = "datasets/lifesat/"
oecd_bli = pd.read_csv(datapath+"oecd_bli_2015.csv", thousands=',')
oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
oecd_bli.head(2)
oecd_bli["Life satisfaction"].head()
```
# Load and prepare GDP per capita data
```
# Download data from http://goo.gl/j1MSKe (=> imf.org)
gdp_per_capita = pd.read_csv(datapath+"gdp_per_capita.csv", thousands=',', delimiter='\t',
encoding='latin1', na_values="n/a")
gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
gdp_per_capita.set_index("Country", inplace=True)
gdp_per_capita.head(2)
full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita, left_index=True, right_index=True)
full_country_stats.sort_values(by="GDP per capita", inplace=True)
full_country_stats
full_country_stats[["GDP per capita", 'Life satisfaction']].loc["United States"]
remove_indices = [0, 1, 6, 8, 33, 34, 35]
keep_indices = list(set(range(36)) - set(remove_indices))
sample_data = full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[keep_indices]
missing_data = full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[remove_indices]
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3))
plt.axis([0, 60000, 0, 10])
position_text = {
"Hungary": (5000, 1),
"Korea": (18000, 1.7),
"France": (29000, 2.4),
"Australia": (40000, 3.0),
"United States": (52000, 3.8),
}
for country, pos_text in position_text.items():
    pos_data_x, pos_data_y = sample_data.loc[country]
    country = "U.S." if country == "United States" else country
    plt.annotate(country, xy=(pos_data_x, pos_data_y), xytext=pos_text,
                 arrowprops=dict(facecolor='black', width=0.5, shrink=0.1, headwidth=5))
    plt.plot(pos_data_x, pos_data_y, "ro")
save_fig('money_happy_scatterplot')
plt.show()
sample_data.to_csv("life_satisfaction_vs_gdp_per_capita.csv")
sample_data.loc[list(position_text.keys())]
import numpy as np
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3))
plt.axis([0, 60000, 0, 10])
X=np.linspace(0, 60000, 1000)
plt.plot(X, 2*X/100000, "r")
plt.text(40000, 2.7, r"$\theta_0 = 0$", fontsize=14, color="r")
plt.text(40000, 1.8, r"$\theta_1 = 2 \times 10^{-5}$", fontsize=14, color="r")
plt.plot(X, 8 - 5*X/100000, "g")
plt.text(5000, 9.1, r"$\theta_0 = 8$", fontsize=14, color="g")
plt.text(5000, 8.2, r"$\theta_1 = -5 \times 10^{-5}$", fontsize=14, color="g")
plt.plot(X, 4 + 5*X/100000, "b")
plt.text(5000, 3.5, r"$\theta_0 = 4$", fontsize=14, color="b")
plt.text(5000, 2.6, r"$\theta_1 = 5 \times 10^{-5}$", fontsize=14, color="b")
save_fig('tweaking_model_params_plot')
plt.show()
from sklearn import linear_model
lin1 = linear_model.LinearRegression()
Xsample = np.c_[sample_data["GDP per capita"]]
ysample = np.c_[sample_data["Life satisfaction"]]
lin1.fit(Xsample, ysample)
t0, t1 = lin1.intercept_[0], lin1.coef_[0][0]
t0, t1
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3))
plt.axis([0, 60000, 0, 10])
X=np.linspace(0, 60000, 1000)
plt.plot(X, t0 + t1*X, "b")
plt.text(5000, 3.1, r"$\theta_0 = 4.85$", fontsize=14, color="b")
plt.text(5000, 2.2, r"$\theta_1 = 4.91 \times 10^{-5}$", fontsize=14, color="b")
save_fig('best_fit_model_plot')
plt.show()
cyprus_gdp_per_capita = gdp_per_capita.loc["Cyprus"]["GDP per capita"]
print(cyprus_gdp_per_capita)
cyprus_predicted_life_satisfaction = lin1.predict([[cyprus_gdp_per_capita]])[0][0]  # predict expects a 2-D array
cyprus_predicted_life_satisfaction
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3), s=1)
X=np.linspace(0, 60000, 1000)
plt.plot(X, t0 + t1*X, "b")
plt.axis([0, 60000, 0, 10])
plt.text(5000, 7.5, r"$\theta_0 = 4.85$", fontsize=14, color="b")
plt.text(5000, 6.6, r"$\theta_1 = 4.91 \times 10^{-5}$", fontsize=14, color="b")
plt.plot([cyprus_gdp_per_capita, cyprus_gdp_per_capita], [0, cyprus_predicted_life_satisfaction], "r--")
plt.text(25000, 5.0, r"Prediction = 5.96", fontsize=14, color="b")
plt.plot(cyprus_gdp_per_capita, cyprus_predicted_life_satisfaction, "ro")
save_fig('cyprus_prediction_plot')
plt.show()
sample_data[7:10]
(5.1+5.7+6.5)/3
backup = oecd_bli, gdp_per_capita
def prepare_country_stats(oecd_bli, gdp_per_capita):
return sample_data
# Code example
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn.linear_model
# Load the data
oecd_bli = pd.read_csv(datapath + "oecd_bli_2015.csv", thousands=',')
gdp_per_capita = pd.read_csv(datapath + "gdp_per_capita.csv",thousands=',',delimiter='\t',
encoding='latin1', na_values="n/a")
# Prepare the data
country_stats = prepare_country_stats(oecd_bli, gdp_per_capita)
X = np.c_[country_stats["GDP per capita"]]
y = np.c_[country_stats["Life satisfaction"]]
# Visualize the data
country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction')
plt.show()
# Select a linear model
model = sklearn.linear_model.LinearRegression()
# Train the model
model.fit(X, y)
# Make a prediction for Cyprus
X_new = [[22587]] # Cyprus' GDP per capita
print(model.predict(X_new)) # outputs [[ 5.96242338]]
oecd_bli, gdp_per_capita = backup
missing_data
position_text2 = {
"Brazil": (1000, 9.0),
"Mexico": (11000, 9.0),
"Chile": (25000, 9.0),
"Czech Republic": (35000, 9.0),
"Norway": (60000, 3),
"Switzerland": (72000, 3.0),
"Luxembourg": (90000, 3.0),
}
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(8,3))
plt.axis([0, 110000, 0, 10])
for country, pos_text in position_text2.items():
pos_data_x, pos_data_y = missing_data.loc[country]
plt.annotate(country, xy=(pos_data_x, pos_data_y), xytext=pos_text,
arrowprops=dict(facecolor='black', width=0.5, shrink=0.1, headwidth=5))
plt.plot(pos_data_x, pos_data_y, "rs")
X=np.linspace(0, 110000, 1000)
plt.plot(X, t0 + t1*X, "b:")
lin_reg_full = linear_model.LinearRegression()
Xfull = np.c_[full_country_stats["GDP per capita"]]
yfull = np.c_[full_country_stats["Life satisfaction"]]
lin_reg_full.fit(Xfull, yfull)
t0full, t1full = lin_reg_full.intercept_[0], lin_reg_full.coef_[0][0]
X = np.linspace(0, 110000, 1000)
plt.plot(X, t0full + t1full * X, "k")
save_fig('representative_training_data_scatterplot')
plt.show()
full_country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(8,3))
plt.axis([0, 110000, 0, 10])
from sklearn import preprocessing
from sklearn import pipeline
poly = preprocessing.PolynomialFeatures(degree=60, include_bias=False)
scaler = preprocessing.StandardScaler()
lin_reg2 = linear_model.LinearRegression()
pipeline_reg = pipeline.Pipeline([('poly', poly), ('scal', scaler), ('lin', lin_reg2)])
pipeline_reg.fit(Xfull, yfull)
curve = pipeline_reg.predict(X[:, np.newaxis])
plt.plot(X, curve)
save_fig('overfitting_model_plot')
plt.show()
full_country_stats.loc[[c for c in full_country_stats.index if "W" in c.upper()]]["Life satisfaction"]
gdp_per_capita.loc[[c for c in gdp_per_capita.index if "W" in c.upper()]].head()
plt.figure(figsize=(8,3))
plt.xlabel("GDP per capita")
plt.ylabel('Life satisfaction')
plt.plot(list(sample_data["GDP per capita"]), list(sample_data["Life satisfaction"]), "bo")
plt.plot(list(missing_data["GDP per capita"]), list(missing_data["Life satisfaction"]), "rs")
X = np.linspace(0, 110000, 1000)
plt.plot(X, t0full + t1full * X, "r--", label="Linear model on all data")
plt.plot(X, t0 + t1*X, "b:", label="Linear model on partial data")
ridge = linear_model.Ridge(alpha=10**9.5)
Xsample = np.c_[sample_data["GDP per capita"]]
ysample = np.c_[sample_data["Life satisfaction"]]
ridge.fit(Xsample, ysample)
t0ridge, t1ridge = ridge.intercept_[0], ridge.coef_[0][0]
plt.plot(X, t0ridge + t1ridge * X, "b", label="Regularized linear model on partial data")
plt.legend(loc="lower right")
plt.axis([0, 110000, 0, 10])
save_fig('ridge_model_plot')
plt.show()
backup = oecd_bli, gdp_per_capita
def prepare_country_stats(oecd_bli, gdp_per_capita):
return sample_data
# Replace this linear model:
model = sklearn.linear_model.LinearRegression()
# with this k-neighbors regression model:
import sklearn.neighbors
model = sklearn.neighbors.KNeighborsRegressor(n_neighbors=3)
X = np.c_[country_stats["GDP per capita"]]
y = np.c_[country_stats["Life satisfaction"]]
# Train the model
model.fit(X, y)
# Make a prediction for Cyprus
X_new = np.array([[22587.0]]) # Cyprus' GDP per capita
print(model.predict(X_new)) # outputs [[ 5.76666667]]
```
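The k-neighbors prediction above is just the mean `Life satisfaction` of the 3 countries whose GDP per capita is closest to Cyprus's — the `(5.1+5.7+6.5)/3` computed by hand earlier. A minimal NumPy sketch reproduces it; the GDP values here are hypothetical stand-ins, and only the three life-satisfaction values 5.1, 5.7, and 6.5 come from the notebook:

```python
import numpy as np

def knn_regress(x_train, y_train, x_new, k=3):
    """Predict the mean target of the k nearest training points (1-D feature)."""
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    nearest = np.argsort(np.abs(x_train - x_new))[:k]  # indices of the k closest GDPs
    return y_train[nearest].mean()

# Hypothetical GDP / life-satisfaction pairs; the three nearest to Cyprus
# (22587) carry the life-satisfaction values 5.1, 5.7 and 6.5 from the text.
gdp = np.array([12240.0, 20732.0, 25864.0, 27195.0, 50962.0])
sat = np.array([4.9, 5.1, 5.7, 6.5, 7.3])
print(knn_regress(gdp, sat, 22587.0))  # mean of 5.1, 5.7 and 6.5
```

With `k=3` this matches the `KNeighborsRegressor` output above to within floating-point precision.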
# Working with preprocessing layers
**Authors:** Francois Chollet, Mark Omernick<br>
**Date created:** 2020/07/25<br>
**Last modified:** 2021/04/23<br>
**Description:** Overview of how to leverage preprocessing layers to create end-to-end models.
## Keras preprocessing
The Keras preprocessing layers API allows developers to build Keras-native input
processing pipelines. These input processing pipelines can be used as independent
preprocessing code in non-Keras workflows, combined directly with Keras models, and
exported as part of a Keras SavedModel.
With Keras preprocessing layers, you can build and export models that are truly
end-to-end: models that accept raw images or raw structured data as input; models that
handle feature normalization or feature value indexing on their own.
## Available preprocessing
### Text preprocessing
- `tf.keras.layers.TextVectorization`: turns raw strings into an encoded
representation that can be read by an `Embedding` layer or `Dense` layer.
### Numerical features preprocessing
- `tf.keras.layers.Normalization`: performs feature-wise normalization of
input features.
- `tf.keras.layers.Discretization`: turns continuous numerical features
into integer categorical features.
### Categorical features preprocessing
- `tf.keras.layers.CategoryEncoding`: turns integer categorical features
into one-hot, multi-hot, or count dense representations.
- `tf.keras.layers.Hashing`: performs categorical feature hashing, also known as
the "hashing trick".
- `tf.keras.layers.StringLookup`: turns string categorical values into an encoded
representation that can be read by an `Embedding` layer or `Dense` layer.
- `tf.keras.layers.IntegerLookup`: turns integer categorical values into an
encoded representation that can be read by an `Embedding` layer or `Dense`
layer.
### Image preprocessing
These layers are for standardizing the inputs of an image model.
- `tf.keras.layers.Resizing`: resizes a batch of images to a target size.
- `tf.keras.layers.Rescaling`: rescales and offsets the values of a batch of
images (e.g. going from inputs in the `[0, 255]` range to inputs in the `[0, 1]`
range).
- `tf.keras.layers.CenterCrop`: returns a center crop of a batch of images.
### Image data augmentation
These layers apply random augmentation transforms to a batch of images. They
are only active during training.
- `tf.keras.layers.RandomCrop`
- `tf.keras.layers.RandomFlip`
- `tf.keras.layers.RandomTranslation`
- `tf.keras.layers.RandomRotation`
- `tf.keras.layers.RandomZoom`
- `tf.keras.layers.RandomHeight`
- `tf.keras.layers.RandomWidth`
- `tf.keras.layers.RandomContrast`
## The `adapt()` method
Some preprocessing layers have an internal state that can be computed based on
a sample of the training data. The list of stateful preprocessing layers is:
- `TextVectorization`: holds a mapping between string tokens and integer indices
- `StringLookup` and `IntegerLookup`: hold a mapping between input values and integer
indices.
- `Normalization`: holds the mean and standard deviation of the features.
- `Discretization`: holds information about value bucket boundaries.
Crucially, these layers are **non-trainable**. Their state is not set during training; it
must be set **before training**, either by initializing them from a precomputed constant,
or by "adapting" them on data.
You set the state of a preprocessing layer by exposing it to training data, via the
`adapt()` method:
```
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
data = np.array([[0.1, 0.2, 0.3], [0.8, 0.9, 1.0], [1.5, 1.6, 1.7],])
layer = layers.Normalization()
layer.adapt(data)
normalized_data = layer(data)
print("Features mean: %.2f" % (normalized_data.numpy().mean()))
print("Features std: %.2f" % (normalized_data.numpy().std()))
```
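Under the hood, `adapt()` here amounts to computing per-feature statistics from the sample data. A NumPy sketch of the equivalent computation (an illustration of the math, not the layer's actual implementation):

```python
import numpy as np

data = np.array([[0.1, 0.2, 0.3], [0.8, 0.9, 1.0], [1.5, 1.6, 1.7]])

# "adapt": compute feature-wise mean and standard deviation from the sample data
mean = data.mean(axis=0)
std = data.std(axis=0)

# "call": normalize each feature to zero mean and unit variance
normalized = (data - mean) / std
print("Features mean: %.2f" % normalized.mean())  # ~0.00
print("Features std: %.2f" % normalized.std())    # ~1.00
```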
The `adapt()` method takes either a Numpy array or a `tf.data.Dataset` object. In the
case of `StringLookup` and `TextVectorization`, you can also pass a list of strings:
```
data = [
"ξεῖν᾽, ἦ τοι μὲν ὄνειροι ἀμήχανοι ἀκριτόμυθοι",
"γίγνοντ᾽, οὐδέ τι πάντα τελείεται ἀνθρώποισι.",
"δοιαὶ γάρ τε πύλαι ἀμενηνῶν εἰσὶν ὀνείρων:",
"αἱ μὲν γὰρ κεράεσσι τετεύχαται, αἱ δ᾽ ἐλέφαντι:",
"τῶν οἳ μέν κ᾽ ἔλθωσι διὰ πριστοῦ ἐλέφαντος,",
"οἵ ῥ᾽ ἐλεφαίρονται, ἔπε᾽ ἀκράαντα φέροντες:",
"οἱ δὲ διὰ ξεστῶν κεράων ἔλθωσι θύραζε,",
"οἵ ῥ᾽ ἔτυμα κραίνουσι, βροτῶν ὅτε κέν τις ἴδηται.",
]
layer = layers.TextVectorization()
layer.adapt(data)
vectorized_text = layer(data)
print(vectorized_text)
```
In addition, adaptable layers always expose an option to directly set state via
constructor arguments or weight assignment. If the intended state values are known at
layer construction time, or are calculated outside of the `adapt()` call, they can be set
without relying on the layer's internal computation. For instance, if external vocabulary
files for the `TextVectorization`, `StringLookup`, or `IntegerLookup` layers already
exist, those can be loaded directly into the lookup tables by passing a path to the
vocabulary file in the layer's constructor arguments.
Here's an example where we instantiate a `StringLookup` layer with precomputed vocabulary:
```
vocab = ["a", "b", "c", "d"]
data = tf.constant([["a", "c", "d"], ["d", "z", "b"]])
layer = layers.StringLookup(vocabulary=vocab)
vectorized_data = layer(data)
print(vectorized_data)
```
## Preprocessing data before the model or inside the model
There are two ways you could be using preprocessing layers:
**Option 1:** Make them part of the model, like this:
```python
inputs = keras.Input(shape=input_shape)
x = preprocessing_layer(inputs)
outputs = rest_of_the_model(x)
model = keras.Model(inputs, outputs)
```
With this option, preprocessing will happen on device, synchronously with the rest of the
model execution, meaning that it will benefit from GPU acceleration.
If you're training on GPU, this is the best option for the `Normalization` layer, and for
all image preprocessing and data augmentation layers.
**Option 2:** apply it to your `tf.data.Dataset`, so as to obtain a dataset that yields
batches of preprocessed data, like this:
```python
dataset = dataset.map(lambda x, y: (preprocessing_layer(x), y))
```
With this option, your preprocessing will happen on CPU, asynchronously, and will be
buffered before going into the model.
In addition, if you call `dataset.prefetch(tf.data.AUTOTUNE)` on your dataset,
the preprocessing will happen efficiently in parallel with training:
```python
dataset = dataset.map(lambda x, y: (preprocessing_layer(x), y))
dataset = dataset.prefetch(tf.data.AUTOTUNE)
model.fit(dataset, ...)
```
This is the best option for `TextVectorization`, and all structured data preprocessing
layers. It can also be a good option if you're training on CPU
and you use image preprocessing layers.
**When running on TPU, you should always place preprocessing layers in the `tf.data` pipeline**
(with the exception of `Normalization` and `Rescaling`, which run fine on TPU and are commonly
used as the first layer in an image model).
## Benefits of doing preprocessing inside the model at inference time
Even if you go with option 2, you may later want to export an inference-only end-to-end
model that will include the preprocessing layers. The key benefit to doing this is that
**it makes your model portable** and it **helps reduce the
[training/serving skew](https://developers.google.com/machine-learning/guides/rules-of-ml#training-serving_skew)**.
When all data preprocessing is part of the model, other people can load and use your
model without having to be aware of how each feature is expected to be encoded &
normalized. Your inference model will be able to process raw images or raw structured
data, and will not require users of the model to be aware of the details of e.g. the
tokenization scheme used for text, the indexing scheme used for categorical features,
whether image pixel values are normalized to `[-1, +1]` or to `[0, 1]`, etc. This is
especially powerful if you're exporting
your model to another runtime, such as TensorFlow.js: you won't have to
reimplement your preprocessing pipeline in JavaScript.
If you initially put your preprocessing layers in your `tf.data` pipeline,
you can export an inference model that packages the preprocessing.
Simply instantiate a new model that chains
your preprocessing layers and your training model:
```python
inputs = keras.Input(shape=input_shape)
x = preprocessing_layer(inputs)
outputs = training_model(x)
inference_model = keras.Model(inputs, outputs)
```
## Preprocessing during multi-worker training
Preprocessing layers are compatible with the
[tf.distribute](https://www.tensorflow.org/api_docs/python/tf/distribute) API
for running training across multiple machines.
In general, preprocessing layers should be placed inside a `strategy.scope()`
and called either inside or before the model as discussed above.
```python
with strategy.scope():
inputs = keras.Input(shape=input_shape)
preprocessing_layer = tf.keras.layers.Hashing(10)
dense_layer = tf.keras.layers.Dense(16)
```
For more details, refer to the
[preprocessing section](https://www.tensorflow.org/tutorials/distribute/input#data_preprocessing)
of the distributed input guide.
## Quick recipes
### Image data augmentation
Note that image data augmentation layers are only active during training (similarly to
the `Dropout` layer).
```
from tensorflow import keras
from tensorflow.keras import layers
# Create a data augmentation stage with horizontal flipping, rotations, zooms
data_augmentation = keras.Sequential(
[
layers.RandomFlip("horizontal"),
layers.RandomRotation(0.1),
layers.RandomZoom(0.1),
]
)
# Load some data
(x_train, y_train), _ = keras.datasets.cifar10.load_data()
input_shape = x_train.shape[1:]
classes = 10
# Create a tf.data pipeline of augmented images (and their labels)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.batch(16).map(lambda x, y: (data_augmentation(x), y))
# Create a model and train it on the augmented image data
inputs = keras.Input(shape=input_shape)
x = layers.Rescaling(1.0 / 255)(inputs) # Rescale inputs
outputs = keras.applications.ResNet50( # Add the rest of the model
weights=None, input_shape=input_shape, classes=classes
)(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy")
model.fit(train_dataset, steps_per_epoch=5)
```
You can see a similar setup in action in the example
[image classification from scratch](https://keras.io/examples/vision/image_classification_from_scratch/).
### Normalizing numerical features
```
# Load some data
(x_train, y_train), _ = keras.datasets.cifar10.load_data()
x_train = x_train.reshape((len(x_train), -1))
input_shape = x_train.shape[1:]
classes = 10
# Create a Normalization layer and set its internal state using the training data
normalizer = layers.Normalization()
normalizer.adapt(x_train)
# Create a model that includes the normalization layer
inputs = keras.Input(shape=input_shape)
x = normalizer(inputs)
outputs = layers.Dense(classes, activation="softmax")(x)
model = keras.Model(inputs, outputs)
# Train the model
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train)
```
### Encoding string categorical features via one-hot encoding
```
# Define some toy data
data = tf.constant([["a"], ["b"], ["c"], ["b"], ["c"], ["a"]])
# Use StringLookup to build an index of the feature values and encode output.
lookup = layers.StringLookup(output_mode="one_hot")
lookup.adapt(data)
# Convert new test data (which includes unknown feature values)
test_data = tf.constant([["a"], ["b"], ["c"], ["d"], ["e"], [""]])
encoded_data = lookup(test_data)
print(encoded_data)
```
Note that, here, index 0 is reserved for out-of-vocabulary values
(values that were not seen during `adapt()`).
You can see the `StringLookup` in action in the
[Structured data classification from scratch](https://keras.io/examples/structured_data/structured_data_classification_from_scratch/)
example.
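The indexing convention (index 0 for out-of-vocabulary strings, known values starting at index 1) can be mimicked in plain Python — a conceptual sketch, not the layer's implementation:

```python
import numpy as np

vocab = ["a", "b", "c"]
index = {tok: i + 1 for i, tok in enumerate(vocab)}  # slot 0 is reserved for OOV

def one_hot_lookup(tokens):
    """One-hot encode tokens; unknown tokens map to the OOV slot (index 0)."""
    n_slots = len(vocab) + 1
    out = np.zeros((len(tokens), n_slots))
    for row, tok in enumerate(tokens):
        out[row, index.get(tok, 0)] = 1.0
    return out

print(one_hot_lookup(["a", "c", "z"]))
# "a" -> slot 1, "c" -> slot 3, "z" (unseen) -> OOV slot 0
```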
### Encoding integer categorical features via one-hot encoding
```
# Define some toy data
data = tf.constant([[10], [20], [20], [10], [30], [0]])
# Use IntegerLookup to build an index of the feature values and encode output.
lookup = layers.IntegerLookup(output_mode="one_hot")
lookup.adapt(data)
# Convert new test data (which includes unknown feature values)
test_data = tf.constant([[10], [10], [20], [50], [60], [0]])
encoded_data = lookup(test_data)
print(encoded_data)
```
Note that index 0 is reserved for missing values (which you should specify as the value
0), and index 1 is reserved for out-of-vocabulary values (values that were not seen
during `adapt()`). You can configure this by using the `mask_token` and `oov_token`
constructor arguments of `IntegerLookup`.
You can see the `IntegerLookup` in action in the example
[structured data classification from scratch](https://keras.io/examples/structured_data/structured_data_classification_from_scratch/).
### Applying the hashing trick to an integer categorical feature
If you have a categorical feature that can take many different values (on the order of
10e3 or higher), where each value only appears a few times in the data,
it becomes impractical and ineffective to index and one-hot encode the feature values.
Instead, it can be a good idea to apply the "hashing trick": hash the values to a vector
of fixed size. This keeps the size of the feature space manageable, and removes the need
for explicit indexing.
```
# Sample data: 10,000 random integers with values between 0 and 100,000
data = np.random.randint(0, 100000, size=(10000, 1))
# Use the Hashing layer to hash the values to the range [0, 64)
hasher = layers.Hashing(num_bins=64, salt=1337)
# Use the CategoryEncoding layer to multi-hot encode the hashed values
encoder = layers.CategoryEncoding(num_tokens=64, output_mode="multi_hot")
encoded_data = encoder(hasher(data))
print(encoded_data.shape)
```
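The same trick can be sketched without TensorFlow: take a stable hash of each value modulo the bin count, then multi-hot encode the bins. This illustrates the idea only — the `Hashing` layer uses its own salted hash function, not `hashlib`:

```python
import hashlib
import numpy as np

def hash_bin(value, num_bins=64, salt=1337):
    """Deterministically hash a value into [0, num_bins)."""
    digest = hashlib.md5(f"{salt}:{value}".encode()).hexdigest()
    return int(digest, 16) % num_bins

def multi_hot(values, num_bins=64):
    """Multi-hot encode a set of hashed values."""
    out = np.zeros(num_bins)
    for v in values:
        out[hash_bin(v, num_bins)] = 1.0
    return out

rng = np.random.default_rng(0)
data = rng.integers(0, 100_000, size=10_000)  # 10,000 random integer categories
encoded = np.stack([multi_hot([v]) for v in data])
print(encoded.shape)  # (10000, 64)
```

The feature space stays at 64 dimensions no matter how many distinct raw values appear, at the cost of occasional hash collisions.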
### Encoding text as a sequence of token indices
This is how you should preprocess text to be passed to an `Embedding` layer.
```
# Define some text data to adapt the layer
adapt_data = tf.constant(
[
"The Brain is wider than the Sky",
"For put them side by side",
"The one the other will contain",
"With ease and You beside",
]
)
# Create a TextVectorization layer
text_vectorizer = layers.TextVectorization(output_mode="int")
# Index the vocabulary via `adapt()`
text_vectorizer.adapt(adapt_data)
# Try out the layer
print(
"Encoded text:\n", text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
)
# Create a simple model
inputs = keras.Input(shape=(None,), dtype="int64")
x = layers.Embedding(input_dim=text_vectorizer.vocabulary_size(), output_dim=16)(inputs)
x = layers.GRU(8)(x)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
# Create a labeled dataset (which includes unknown tokens)
train_dataset = tf.data.Dataset.from_tensor_slices(
(["The Brain is deeper than the sea", "for if they are held Blue to Blue"], [1, 0])
)
# Preprocess the string inputs, turning them into int sequences
train_dataset = train_dataset.batch(2).map(lambda x, y: (text_vectorizer(x), y))
# Train the model on the int sequences
print("\nTraining model...")
model.compile(optimizer="rmsprop", loss="mse")
model.fit(train_dataset)
# For inference, you can export a model that accepts strings as input
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = model(x)
end_to_end_model = keras.Model(inputs, outputs)
# Call the end-to-end model on test data (which includes unknown tokens)
print("\nCalling end-to-end model on test string...")
test_data = tf.constant(["The one the other will absorb"])
test_output = end_to_end_model(test_data)
print("Model output:", test_output)
```
You can see the `TextVectorization` layer in action, combined with an `Embedding` layer,
in the example
[text classification from scratch](https://keras.io/examples/nlp/text_classification_from_scratch/).
Note that when training such a model, for best performance, you should always
use the `TextVectorization` layer as part of the input pipeline.
### Encoding text as a dense matrix of ngrams with multi-hot encoding
This is how you should preprocess text to be passed to a `Dense` layer.
```
# Define some text data to adapt the layer
adapt_data = tf.constant(
[
"The Brain is wider than the Sky",
"For put them side by side",
"The one the other will contain",
"With ease and You beside",
]
)
# Instantiate TextVectorization with "multi_hot" output_mode
# and ngrams=2 (index all bigrams)
text_vectorizer = layers.TextVectorization(output_mode="multi_hot", ngrams=2)
# Index the bigrams via `adapt()`
text_vectorizer.adapt(adapt_data)
# Try out the layer
print(
"Encoded text:\n", text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
)
# Create a simple model
inputs = keras.Input(shape=(text_vectorizer.vocabulary_size(),))
outputs = layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
# Create a labeled dataset (which includes unknown tokens)
train_dataset = tf.data.Dataset.from_tensor_slices(
(["The Brain is deeper than the sea", "for if they are held Blue to Blue"], [1, 0])
)
# Preprocess the string inputs, turning them into int sequences
train_dataset = train_dataset.batch(2).map(lambda x, y: (text_vectorizer(x), y))
# Train the model on the int sequences
print("\nTraining model...")
model.compile(optimizer="rmsprop", loss="mse")
model.fit(train_dataset)
# For inference, you can export a model that accepts strings as input
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = model(x)
end_to_end_model = keras.Model(inputs, outputs)
# Call the end-to-end model on test data (which includes unknown tokens)
print("\nCalling end-to-end model on test string...")
test_data = tf.constant(["The one the other will absorb"])
test_output = end_to_end_model(test_data)
print("Model output:", test_output)
```
### Encoding text as a dense matrix of ngrams with TF-IDF weighting
This is an alternative way of preprocessing text before passing it to a `Dense` layer.
```
# Define some text data to adapt the layer
adapt_data = tf.constant(
[
"The Brain is wider than the Sky",
"For put them side by side",
"The one the other will contain",
"With ease and You beside",
]
)
# Instantiate TextVectorization with "tf-idf" output_mode
# (multi-hot with TF-IDF weighting) and ngrams=2 (index all bigrams)
text_vectorizer = layers.TextVectorization(output_mode="tf-idf", ngrams=2)
# Index the bigrams and learn the TF-IDF weights via `adapt()`
with tf.device("CPU"):
# A bug that prevents this from running on GPU for now.
text_vectorizer.adapt(adapt_data)
# Try out the layer
print(
"Encoded text:\n", text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
)
# Create a simple model
inputs = keras.Input(shape=(text_vectorizer.vocabulary_size(),))
outputs = layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
# Create a labeled dataset (which includes unknown tokens)
train_dataset = tf.data.Dataset.from_tensor_slices(
(["The Brain is deeper than the sea", "for if they are held Blue to Blue"], [1, 0])
)
# Preprocess the string inputs, turning them into int sequences
train_dataset = train_dataset.batch(2).map(lambda x, y: (text_vectorizer(x), y))
# Train the model on the int sequences
print("\nTraining model...")
model.compile(optimizer="rmsprop", loss="mse")
model.fit(train_dataset)
# For inference, you can export a model that accepts strings as input
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = model(x)
end_to_end_model = keras.Model(inputs, outputs)
# Call the end-to-end model on test data (which includes unknown tokens)
print("\nCalling end-to-end model on test string...")
test_data = tf.constant(["The one the other will absorb"])
test_output = end_to_end_model(test_data)
print("Model output:", test_output)
```
## Important gotchas
### Working with lookup layers with very large vocabularies
You may find yourself working with a very large vocabulary in a `TextVectorization`, a `StringLookup` layer,
or an `IntegerLookup` layer. Typically, a vocabulary larger than 500MB would be considered "very large".
In such a case, for best performance, you should avoid using `adapt()`.
Instead, pre-compute your vocabulary in advance
(you could use Apache Beam or TF Transform for this)
and store it in a file. Then load the vocabulary into the layer at construction
time by passing the filepath as the `vocabulary` argument.
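The workflow can be sketched without TensorFlow: precompute the vocabulary offline, write it one token per line, and build the lookup table from the file at construction time. This is a conceptual sketch using a plain `dict`; with Keras you would instead pass the same file path as the layer's `vocabulary` argument:

```python
import os
import tempfile

# Offline step: pre-compute the vocabulary (e.g. with Apache Beam or TF Transform)
# and store it one token per line.
vocab = ["the", "brain", "is", "wider"]
path = os.path.join(tempfile.mkdtemp(), "vocab.txt")
with open(path, "w") as f:
    f.write("\n".join(vocab))

# Construction time: load the file into a lookup table instead of calling adapt().
with open(path) as f:
    table = {tok: i + 1 for i, tok in enumerate(f.read().splitlines())}  # 0 = OOV

print([table.get(tok, 0) for tok in ["the", "sky", "is"]])  # "sky" is OOV -> 0
```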
### Using lookup layers on a TPU pod or with `ParameterServerStrategy`.
There is an outstanding issue that causes performance to degrade when using
a `TextVectorization`, `StringLookup`, or `IntegerLookup` layer while
training on a TPU pod or on multiple machines via `ParameterServerStrategy`.
This is slated to be fixed in TensorFlow 2.7.
```
#we may need some code in the ../python directory and/or matplotlib styles
import sys
import os
sys.path.append('../python/')
#set up matplotlib
os.environ['MPLCONFIGDIR'] = '../mplstyles'
print(os.environ['MPLCONFIGDIR'])
import matplotlib as mpl
from matplotlib import pyplot as plt
#got smarter about the mpl config: see mplstyles/ directory
plt.style.use('standard')
print(mpl.__version__)
print(mpl.get_configdir())
#fonts
# Set the font dictionaries (for plot title and axis titles)
title_font = {'fontname':'Arial', 'size':'16', 'color':'black', 'weight':'normal',
'verticalalignment':'bottom'} # Bottom vertical alignment for more space
axis_font = {'fontname':'Arial', 'size':'32'}
legend_font = {'fontname':'Arial', 'size':'22'}
#fonts global settings
mpl.rc('font',family=legend_font['fontname'])
#set up numpy
import numpy as np
```
# Errors for Fitted Widths
It has come up in several other notebooks, notably `yield_width_compare.ipynb` and `ms_correction.ipynb`, that we need to treat simulated data as experimental and extract widths of different "thrown" distributions via a fitting method. For example, if we simulate the ionization yield-recoil energy plane of Edelweiss data, we may break the plot into energy bins, each of which has a fitted yield distribution.
In fitting these with `lmfit` (see `yield_width_compare.ipynb`) it came up that the fitted uncertainties on the distribution widths are much lower than expected. The expectation comes from how much the value of the widths varies upon resimulation of the data (re-running of the notebook).
Here we attempt to isolate this issue and solve it.
```
#generate a yield hist
NQ=1000
#NQ = np.random.poisson(NQ)
Qmean=0.3
Qstd=0.02
Q = np.random.normal(Qmean,Qstd,NQ)
#get an "energy" vector so I can use some of the pre-coded functions for fitting
E = np.ones(np.shape(Q))
print(np.shape(Q))
print(np.shape(E))
import histogram_yield as hy
bindf,bindfE = hy.QEr_Ebin(Q,E,[0,2])
print(bindf)
qbins = np.linspace(0,0.6,40)
xcq = (qbins[:-1] + qbins[1:]) / 2
qhistos,qerrs = hy.QEr_Qhist(bindf,qbins)
qamps,qampserrs,qmus,qmuerrs,qsigs,qsigerrs = hy.QEr_Qfit(qhistos,qerrs, qbins,0.1,0.3,0.1)
fig,axs = plt.subplots(1,1,figsize=(9.0,8.0),sharex=True,sharey=True)
X = np.arange(0.0,0.6,0.001)
func = lambda x,a,b,c: a*np.exp(-(x-b)**2/(2*c**2))
funcv = np.vectorize(func)
#for i,ax in enumerate(np.ndarray.flatten(axs)):
ax = axs
if True:
#ax.set_title('markevery=%s' % str(case))
#ax.plot(x, y, 'o', ls='-', ms=4, markevery=case)
#ax.text(0.65,0.31,"{:2.1f} keV $\leq$ $E_r$ $<$ {:2.1f} keV".format(erbins[i+1],erbins[i+2]),fontsize=24)
idx=0
ax.plot(X,funcv(X,qamps[idx],qmus[idx],qsigs[idx]),color='m',linestyle="-",linewidth=2)
#ax.plot(X,funcv(X,qamps_ss[i+1],qmus_ss[i+1],qsigs_ss[i+1]),color='m',linestyle="-",linewidth=2)
#ax.step(xcq_er,qhistos[:,idx]/np.sum(qhistos[:,idx]), where='mid',color='b', linestyle='-', \
# label='all scatters', linewidth=2)
ax.step(xcq,qhistos[:,idx]/np.sum(qhistos[:,idx]), where='mid',color='k', linestyle='-', \
label='simulated events', linewidth=2)
ax.set_yscale('linear')
#ax1.set_yscale('linear')
ax.set_xlim(0.0, 0.6)
ax.set_ylim(0,0.35)
if(idx>2):
ax.set_xlabel(r'ionization yield',**axis_font)
if((idx==0)|(idx==3)):
ax.set_ylabel('PDF',**axis_font)
ax.grid(True)
ax.yaxis.grid(True,which='minor',linestyle='--')
if(idx==0):
ax.legend(loc=1,prop={'size':22})
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(2)
plt.tight_layout()
plt.show()
```
By running the above script several times, it seems **_plausible_** that the true standard deviation lies within the estimated standard deviation $\pm$ the uncertainty ascribed by the fit about 68% of the time.
An example of this for $N_Q$ = 1000 initial events is: $\hat{\sigma}$ = 0.0199 $\pm$ 0.00013
This should be checked more systematically, by doing many trials and recording the results.
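The coverage check can be sketched directly with NumPy. This is a toy version of the trial loop: it uses the sample standard deviation and the analytic Gaussian standard error $\sigma/\sqrt{2N}$ in place of the `lmfit` fit and its reported uncertainty:

```python
import numpy as np

rng = np.random.default_rng(42)
Qmean, Qstd, N, ntrials = 0.3, 0.02, 100, 2000

covered = 0
for _ in range(ntrials):
    Q = rng.normal(Qmean, Qstd, N)
    sig_hat = Q.std(ddof=1)             # estimated width
    sig_err = sig_hat / np.sqrt(2 * N)  # analytic standard error on sigma
    if abs(sig_hat - Qstd) <= sig_err:
        covered += 1

coverage = covered / ntrials
print("coverage = %.3f" % coverage)  # should land near 0.68
```

If the quoted uncertainty is honest, the true width should fall inside the $\pm 1\sigma$ interval in roughly 68% of trials; a fit that reports uncertainties much smaller than this standard error will show a coverage well below 0.68.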
# 1000 Trials with 10, 100, and 1000 Events
Here we do 1000 trials of this procedure, using 10, 100, and 1000 events as the total number generated.
```
trials = np.arange(0,1000)
#print(trials)
sig10 = np.zeros(np.shape(trials))
sigerr10 = np.zeros(np.shape(trials))
sig100 = np.zeros(np.shape(trials))
sigerr100 = np.zeros(np.shape(trials))
sig1000 = np.zeros(np.shape(trials))
sigerr1000 = np.zeros(np.shape(trials))
qbins = np.linspace(0,0.4,40)
xcq = (qbins[:-1] + qbins[1:]) / 2
#make 1000 trials with NQ=10
for i,bn in enumerate(trials):
NQ=10
#NQ = np.random.poisson(NQ)
Q = np.random.normal(Qmean,Qstd,NQ)
E = np.ones(np.shape(Q))
bindf,bindfE = hy.QEr_Ebin(Q,E,[0,2],True)
qhistos,qerrs = hy.QEr_Qhist(bindf,qbins)
qamps,qampserrs,qmus,qmuerrs,qsigs,qsigerrs = hy.QEr_Qfit(qhistos,qerrs, qbins,0.1,0.3,0.1,True)
sig10[i] = np.sqrt(qsigs[0]**2)
sigerr10[i] = qsigerrs[0]
bad10 = np.sum(-qsigerrs[(qsigerrs==-1)&(qampserrs==-1)&(qmuerrs==-1)])/np.shape(trials)[0]
#print(sig10)
#print(sigerr10)
#make 1000 trials with NQ=100
for i,bn in enumerate(trials):
    NQ=100
    #NQ = np.random.poisson(NQ)
    Q = np.random.normal(Qmean,Qstd,NQ)
    E = np.ones(np.shape(Q))
    bindf,bindfE = hy.QEr_Ebin(Q,E,[0,2],True)
    qhistos,qerrs = hy.QEr_Qhist(bindf,qbins)
    qamps,qampserrs,qmus,qmuerrs,qsigs,qsigerrs = hy.QEr_Qfit(qhistos,qerrs, qbins,0.1,0.3,0.1,True)
    sig100[i] = np.sqrt(qsigs[0]**2)
    sigerr100[i] = qsigerrs[0]
bad100 = np.sum(-qsigerrs[(qsigerrs==-1)&(qampserrs==-1)&(qmuerrs==-1)])/np.shape(trials)[0]
#print(sig100)
#print(sigerr100)
#make 1000 trials with NQ=1000
for i,bn in enumerate(trials):
    NQ=1000
    #NQ = np.random.poisson(NQ)
    Q = np.random.normal(Qmean,Qstd,NQ)
    E = np.ones(np.shape(Q))
    bindf,bindfE = hy.QEr_Ebin(Q,E,[0,2],True)
    qhistos,qerrs = hy.QEr_Qhist(bindf,qbins)
    qamps,qampserrs,qmus,qmuerrs,qsigs,qsigerrs = hy.QEr_Qfit(qhistos,qerrs, qbins,0.1,0.3,0.1,True)
    sig1000[i] = np.sqrt(qsigs[0]**2)
    sigerr1000[i] = qsigerrs[0]
bad1000 = np.sum(-qsigerrs[(qsigerrs==-1)&(qampserrs==-1)&(qmuerrs==-1)])/np.shape(trials)[0]
import nrfano_stats as nfs
truth10,perc10 = nfs.inRange(Qstd,sig10,sigerr10)
#print(truth)
print(bad10*100)
print(perc10*100)
truth100,perc100 = nfs.inRange(Qstd,sig100,sigerr100)
#print(truth)
print(bad100*100)
print(perc100*100)
truth1000,perc1000 = nfs.inRange(Qstd,sig1000,sigerr1000)
#print(truth)
print(bad1000*100)
print(perc1000*100)
#set up a 1d plot
fig,axes = plt.subplots(1,1,figsize=(16.0,8.0),sharex=True)
ax1 = axes
cd=2
ax1.errorbar(trials[trials%cd==0],sig10[trials%cd==0], yerr=sigerr10[trials%cd==0],color='b', marker='o', \
markersize=4,linestyle='none',label='N=10 ({:2.1f}% encompassing; {:2.2f}% bad)'.format(perc10*100,bad10*100),\
linewidth=2)
#ax1.errorbar(trials,sig100, yerr=sigerr100,color='m', marker='o', \
# markersize=4,linestyle='none',label='N=100 ({:2.1f}% encompassing)'.format(perc100*100), linewidth=2)
ax1.axhline(Qstd, color='k', linestyle='--', lw=2, alpha=0.8,label=None)
ymin = 0.00
ymax = 0.04
ax1.set_yscale('linear')
#ax1.set_yscale('linear')
ax1.set_xlim(0, np.shape(trials)[0])
ax1.set_ylim(ymin,ymax)
ax1.set_xlabel(r'sample',**axis_font)
ax1.set_ylabel('extracted width',**axis_font)
ax1.grid(True)
ax1.yaxis.grid(True,which='minor',linestyle='--')
ax1.legend(loc=1,prop={'size':22})
#ax1.legend(bbox_to_anchor=(1.04,1),borderaxespad=0,prop={'size':22})
for axis in ['top','bottom','left','right']:
    ax1.spines[axis].set_linewidth(2)
plt.tight_layout()
plt.savefig('figures/yield_gauss_fits_10ev.png')
plt.show()
#set up a 1d plot
fig,axes = plt.subplots(1,1,figsize=(16.0,8.0),sharex=True)
ax1 = axes
ax1.errorbar(trials[trials%cd==0],sig100[trials%cd==0], yerr=sigerr100[trials%cd==0],color='orange', marker='o', \
markersize=4,linestyle='none',label='N=100 ({:2.1f}% encompassing; {:2.2f}% bad)'.format(perc100*100, \
bad100*100), linewidth=2)
ax1.errorbar(trials[trials%cd==0],sig1000[trials%cd==0], yerr=sigerr1000[trials%cd==0],color='m', marker='o', \
markersize=4,linestyle='none',label='N=1000 ({:2.1f}% encompassing; {:2.2f}% bad)'.format(perc1000*100, \
bad1000*100), linewidth=2)
#using the lines below you can plot ones that fail to overlap the true value or ones that succeed only
#ax1.errorbar(trials[truth100],sig100[truth100], yerr=sigerr100[truth100],color='orange', marker='o', \
# markersize=4,linestyle='none',label='N=100 ({:2.1f}% encompassing)'.format(perc100*100), linewidth=2)
#ax1.errorbar(trials[truth1000],sig1000[truth1000], yerr=sigerr1000[truth1000],color='m', marker='o', \
# markersize=4,linestyle='none',label='N=1000 ({:2.1f}% encompassing)'.format(perc1000*100), linewidth=2)
#ax1.errorbar(trials[~truth100],sig100[~truth100], yerr=sigerr100[~truth100],color='orange', marker='o', \
# markersize=4,linestyle='none',label='N=100 ({:2.1f}% encompassing)'.format(perc100*100), linewidth=2)
#ax1.errorbar(trials[~truth1000],sig1000[~truth1000], yerr=sigerr1000[~truth1000],color='m', marker='o', \
# markersize=4,linestyle='none',label='N=1000 ({:2.1f}% encompassing)'.format(perc1000*100), linewidth=2)
ax1.axhline(Qstd, color='k', linestyle='--', lw=2, alpha=0.8,label=None)
ymin = 0.01
ymax = 0.03
ax1.set_yscale('linear')
#ax1.set_yscale('linear')
ax1.set_xlim(0, np.shape(trials)[0])
ax1.set_ylim(ymin,ymax)
ax1.set_xlabel(r'sample',**axis_font)
ax1.set_ylabel('extracted width',**axis_font)
ax1.grid(True)
ax1.yaxis.grid(True,which='minor',linestyle='--')
ax1.legend(loc=1,prop={'size':22})
#ax1.legend(bbox_to_anchor=(1.04,1),borderaxespad=0,prop={'size':22})
for axis in ['top','bottom','left','right']:
    ax1.spines[axis].set_linewidth(2)
plt.tight_layout()
plt.savefig('figures/yield_gauss_fits_100ev_1000ev.png')
plt.show()
#histogram the results
width_bins = np.linspace(0.0,0.04,100)
n10,nx = np.histogram(sig10,bins=width_bins)
n100,nx = np.histogram(sig100,bins=width_bins)
n1000,nx = np.histogram(sig1000,bins=width_bins)
xc = (nx[:-1] + nx[1:]) / 2
norm = xc[1]-xc[0]
averr10 = np.mean(sigerr10)
averr100 = np.mean(sigerr100)
averr1000 = np.mean(sigerr1000)
#fit the 1000-sample histo to get "true" width
E1000 = np.ones(np.shape(sig1000))
bindf,bindfE = hy.QEr_Ebin(sig1000,E1000,[0,2],True)
qhistos,qerrs = hy.QEr_Qhist(bindf,nx)
print(np.shape(qhistos))
print(np.shape(qerrs))
qamps,qampserrs,qmus,qmuerrs,qsigs,qsigerrs = hy.QEr_Qfit(qhistos,qerrs, nx,100,0.02,0.001)
#set up a 1d plot
fig,axes = plt.subplots(1,1,figsize=(9.0,8.0),sharex=True)
ax1 = axes
ax1.step(xc,n10/(norm*np.sum(n10)), where='mid',color='b', linestyle='-', \
label='N=10', linewidth=2)
ax1.step(xc,n100/(norm*np.sum(n100)), where='mid',color='orange', linestyle='-', \
label='N=100', linewidth=2)
ax1.step(xc,n1000/(norm*np.sum(n1000)), where='mid',color='m', linestyle='-', \
label='N=1000', linewidth=2)
ax1.axvline(Qstd, color='k', linestyle='--', lw=2, alpha=0.8,label=None)
ymin = 1
ymax = 1000
ax1.axvline(Qstd-averr1000, color='r', linestyle='--', lw=2, alpha=0.8,label=None)
ax1.axvline(Qstd+averr1000, color='r', linestyle='--', lw=2, alpha=0.8,label=None)
erange_x = np.arange(Qstd-qsigs[0], Qstd+qsigs[0], 1e-6)
tshade = ax1.fill_between(erange_x, ymin, ymax, facecolor='m', alpha=0.3)
ax1.set_yscale('log')
#ax1.set_yscale('linear')
ax1.set_xlim(0.01, 0.03)
ax1.set_ylim(ymin,ymax)
ax1.set_xlabel(r'extracted width',**axis_font)
ax1.set_ylabel('PDF',**axis_font)
ax1.grid(True)
ax1.yaxis.grid(True,which='minor',linestyle='--')
ax1.legend(loc=1,prop={'size':22})
#ax1.legend(bbox_to_anchor=(1.04,1),borderaxespad=0,prop={'size':22})
for axis in ['top','bottom','left','right']:
    ax1.spines[axis].set_linewidth(2)
plt.tight_layout()
plt.savefig('figures/yield_gauss_fits_all.png')
plt.show()
```
# Dependence on Binning of Fit
For small statistics the fit quality may depend on the binning significantly. Try a smaller binning for the 10-event sub-sample data...
```
qbins = np.linspace(0.2,0.4,6)
xcq = (qbins[:-1] + qbins[1:]) / 2
NQ=10
Q = np.random.normal(Qmean,Qstd,NQ)
E = np.ones(np.shape(Q))
bindf,bindfE = hy.QEr_Ebin(Q,E,[0,2],True)
qhistos,qerrs = hy.QEr_Qhist(bindf,qbins)
qamps,qampserrs,qmus,qmuerrs,qsigs,qsigerrs = hy.QEr_Qfit(qhistos,qerrs, qbins,0.1,0.3,0.1,True)
bad10 = np.sum(-qsigerrs[(qsigerrs==-1)&(qampserrs==-1)&(qmuerrs==-1)])/np.shape(trials)[0]
print(qhistos)
print(qerrs)
print(qsigs)
print(qsigerrs)
```
# Try to Use a Bootstrap Method to Compute Uncertainties
Instead, we can use a [bootstrap](https://pypi.org/project/bootstrapped/) method to estimate the uncertainties of the sample standard deviations, and then use those standard deviations and uncertainties as our data points. I've `pip`-installed the package [`bootstrapped`](https://pypi.org/project/bootstrapped/) to help with this.
**NOTE: This package is at version 0.0.2 and I found it after a very cursory internet search. Its development (if it's still being developed) will probably be _volatile_.**
Basically, the bootstrap means taking the sample you got and re-sampling it many times with replacement. You then calculate the mean and standard deviation of any statistic across those many resamples and use them as estimators for the corresponding parameters of the parent distribution.
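As a sanity check on what the package does, the resampling idea itself fits in a few lines of NumPy (an illustrative sketch, not the `bootstrapped` implementation):

```python
import numpy as np

def bootstrap_std(sample, n_resamples=2000, seed=None):
    """Bootstrap the standard deviation: re-draw the sample with
    replacement many times, take the std of each re-draw, and report
    the mean and spread of those values as estimate and uncertainty."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample)
    stats = np.array([rng.choice(sample, size=sample.size, replace=True).std()
                      for _ in range(n_resamples)])
    return stats.mean(), stats.std()

# 10 events drawn from a Gaussian with true sigma = 0.02
data = np.random.default_rng(1).normal(0.3, 0.02, size=10)
est, err = bootstrap_std(data, seed=2)
print(est, err)
```

With only 10 events the estimate is biased low and the uncertainty is large, which is the same small-sample behavior the fits above struggle with.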
```
import bootstrapped.bootstrap as bs
import bootstrapped.stats_functions as bs_stats
NQ=10
Q = np.random.normal(Qmean,Qstd,NQ)
print(bs.bootstrap(Q, stat_func=bs_stats.std))
#try the fit of the same sample
#print(qbins)
Q = np.random.normal(Qmean,Qstd,NQ)
#print(Q)
n10,nx = np.histogram(Q,bins=qbins)
E = np.ones(np.shape(Q))
bindf,bindfE = hy.QEr_Ebin(Q,E,[0,2])
qhistos,qerrs = hy.QEr_Qhist(bindf,nx)
print(np.shape(qhistos))
print(np.shape(qerrs))
qamps,qampserrs,qmus,qmuerrs,qsigs,qsigerrs = hy.QEr_Qfit(qhistos,qerrs, nx,1000,0.2,0.5)
print(qampserrs)
```
## Let's import some basic packages
```
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
module_url = "https://tfhub.dev/google/universal-sentence-encoder-large/3"
embed = hub.Module(module_url)
```
## And here's a basic example of how embeddings work
```
word = "Elephant"
sentence = "I am a sentence for which I would like to get its embedding."
paragraph = (
"Universal Sentence Encoder embeddings also support short paragraphs. "
"There is no hard limit on how long the paragraph is. Roughly, the longer "
"the more 'diluted' the embedding will be.")
messages = [word, sentence, paragraph]
with tf.Session() as session:
    session.run([tf.global_variables_initializer(), tf.tables_initializer()])
    message_embeddings = session.run(embed(messages))
for i, message_embedding in enumerate(np.array(message_embeddings).tolist()):
    print("Message: {}".format(messages[i]))
    print("Embedding size: {}".format(len(message_embedding)))
    message_embedding_snippet = ", ".join(
        (str(x) for x in message_embedding[:3])
    )
    print("Embedding: [{}, ...]\n".format(message_embedding_snippet))
print(message_embeddings)
print(message_embeddings.shape)
tf.keras.utils.to_categorical([1, 2])
```
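A common next step with embeddings like the ones above is to compare them with cosine similarity. A small self-contained sketch using toy vectors (real Universal Sentence Encoder embeddings are 512-dimensional):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors:
    1.0 means same direction, 0.0 means orthogonal."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# toy 4-dimensional "embeddings"
v_word = [0.1, 0.9, 0.0, 0.2]
v_sentence = [0.2, 0.8, 0.1, 0.1]
print(cosine_similarity(v_word, v_sentence))
```

Vectors pointing in nearly the same direction score close to 1, which is how semantically similar sentences show up in embedding space.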
## Process real data
#### get panda table
```
import pandas as pd
table = pd.read_csv("./data.csv")
table.head()
```
#### get numpy array
```
good_list = table['good'].values
bad_list = table['bad'].values
print(good_list[0])
print(len(good_list))
print(bad_list[0])
print(len(bad_list))
sentences = np.append(good_list, bad_list)
print(len(sentences))
print(np.ones(3))
print(np.ones(sentences.shape))
labels = np.append(
np.ones(good_list.shape),
np.zeros(bad_list.shape)
)
print(len(labels))
print(labels[0])
print(labels[-1])
```
#### shuffle `input array(sentences)` and `output array(labels)`
```
from sklearn.utils import shuffle
a = [1, 2, 3]
b = [4, 5, 6]
x, y = shuffle(a, b)
print(x)
print(y)
shuffled_sentences, shuffled_labels = shuffle(sentences, labels)
print(shuffled_labels)
```
#### get final `output array(labels)`
```
sentence_labels = tf.keras.utils.to_categorical(shuffled_labels)
print(sentence_labels)
```
#### get final `input array(sentences)`
```
with tf.Session() as session:
    session.run([tf.global_variables_initializer(), tf.tables_initializer()])
    # embed the shuffled sentences so they stay aligned with sentence_labels
    sentence_embeddings = session.run(embed(shuffled_sentences))
print(sentence_embeddings)
```
#### pickling all the data
```
import pickle
embeddings_file = "embeddings.pickle"
labels_file = "labels.pickle"
pickle.dump(sentence_embeddings, open(embeddings_file, 'wb'))
pickle.dump(sentence_labels, open(labels_file, 'wb'))
```
# Testing Cnots
In this notebook we take imperfect versions of cnot gates and see how well they would work within a `d=3`, `T=1` surface code and a `d=5`, `T=3` repetition code.
```
import numpy as np
from copy import deepcopy
from topological_codes import RepetitionCode, SurfaceCode, GraphDecoder
from qiskit import QuantumCircuit, transpile
from qiskit.providers.aer import AerSimulator
from qiskit.providers.aer.noise.errors import depolarizing_error
from qiskit.circuit.library import CRXGate
from qiskit.quantum_info import process_fidelity
from matplotlib import pyplot as plt
```
The candidate cnots to be tested need to be provided as Qiskit instructions. These can be created from forms such as unitaries, Choi matrices, Qiskit gates, and Qiskit circuits.
For example, the following function creates a noisy cnot from a noisy circuit, parameterized by an error probability $\epsilon$. This can generate either coherent or incoherent noise.
```
def noisy_cx(eps, coherent=True):
    if coherent:
        error = CRXGate(np.pi*eps/2)
    else:
        error = depolarizing_error(eps/2,2)
    qc = QuantumCircuit(2,name='noisy cx')
    qc.append(error,[1,0])
    qc.cx(0,1)
    qc.append(error,[1,0])
    return qc.to_instruction()
code = SurfaceCode(3,2)
qc = code.circuit['0']
```
Given a code and a candidate cnot, the following function replaces all instances of cnots with the candidate cnot.
```
def make_noisy_code(code, cand_cx):
    noisy_code = deepcopy(code)
    for log in code.circuit:
        qc = noisy_code.circuit[log]
        temp_qc = QuantumCircuit()
        for qreg in qc.qregs:
            temp_qc.add_register(qreg)
        for creg in qc.cregs:
            temp_qc.add_register(creg)
        for gate in qc.data:
            if gate[0].name=='cx':
                temp_qc.append(cand_cx,gate[1])
            else:
                temp_qc.data.append(gate)
        noisy_code.circuit[log] = temp_qc.copy()
    return noisy_code
```
In some cases, it is better to extract the exact probabilities from a simulation rather than using sampling. To do this, however, we need to defer all measurements to the end. For this we add auxiliary qubits corresponding to each classical bit. We also need to rewrite the output bit string to reproduce the format that the result should be in. The following functions do these things.
```
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
def move_msm(qc):
    bits = []
    for creg in qc.cregs:
        for bit in creg:
            bits.append(bit)
    new_qc = QuantumCircuit()
    for regs in [qc.qregs, qc.cregs]:
        for reg in regs:
            new_qc.add_register(reg)
    aux = {}
    for reg in qc.cregs:
        for bit in reg:
            aux[bits.index(bit)] = QuantumRegister(1)
            new_qc.add_register(aux[bits.index(bit)])
    for gate in qc.data:
        if gate[0].name=='measure':
            new_qc.cx(gate[1][0], aux[bits.index(gate[2][0])])
        else:
            new_qc.data.append(gate)
    new_qc.save_probabilities_dict()
    return new_qc, aux
def format_probs(probs, new_qc, aux):
    bits = []
    for creg in qc.cregs:
        for bit in creg:
            bits.append(bit)
    index = {}
    for reg in new_qc.cregs:
        for bit in reg:
            index[bit] = new_qc.qubits.index(aux[bits.index(bit)][0])
    new_probs = {}
    for string,prob in probs.items():
        new_string = ''
        for reg in new_qc.cregs:
            for bit in reg:
                j = index[bit]
                new_string += string[-1-j]
            new_string += ' '
        new_string = new_string[::-1][1::]
        if new_string in new_probs:
            new_probs[new_string] += prob
        else:
            new_probs[new_string] = prob
    return new_probs
```
Now we can run simulations of the codes for different candidate cnots, and see what logical error rates we find.
```
# choose the type of code to study
repetition = False
# and the type of noise
coherent = True
# set the noise levels to study
noise = [0.1+0.02*j for j in range(10)]
# and calculate the corresponding process infidelities
infidelity = [ 1-process_fidelity(noisy_cx(eps),noisy_cx(0)) for eps in noise ]
backend = AerSimulator(zero_threshold=1e-5)
if repetition:
    d,T = 3,3
else:
    d,T = 3,1
sample = (not coherent) or (not repetition)
if sample:
    shots = 4*8192
else:
    shots = 1
logical = {'z':[], 'x':[]}
for basis in ['z', 'x']:
    if repetition:
        decoder = GraphDecoder(RepetitionCode(d,T,xbasis=(basis=='x')))
    else:
        decoder = GraphDecoder(SurfaceCode(d,T,basis=basis))
    for eps in noise:
        # make the noisy code
        cand_cx = noisy_cx(eps,coherent=coherent)
        if repetition:
            code = make_noisy_code(RepetitionCode(d,T,xbasis=(basis=='x')),cand_cx)
        else:
            code = make_noisy_code(SurfaceCode(d,T,basis=basis),cand_cx)
        # run it
        raw_results = {}
        if sample:
            circuits = code.get_circuit_list()
        else:
            auxs = []
            circuits = []
            for qc in code.get_circuit_list():
                new_qc,aux = move_msm(qc)
                circuits.append(new_qc)
                auxs.append(aux)
        circuits = transpile(circuits,backend)
        job = backend.run(circuits, shots=shots)
        if sample:
            for log in ['0','1']:
                raw_results[log] = job.result().get_counts(int(log))
        else:
            for qc,aux in zip(circuits,auxs):
                probs = job.result().data(qc)['probabilities']
                n = str(len(qc.qubits))
                probs = {('{0:0'+n+'b}').format(output):shots for output,shots in probs.items()}
                raw_results[str(circuits.index(qc))] = {string:prob for string,prob in format_probs(probs, qc, aux).items()}
        results = code.process_results(raw_results)
        # get logical error probs
        logical[basis].append( max(decoder.get_logical_prob(results).values()) )
        print('Complete:',basis,eps)
plt.scatter(infidelity,[max(logical['z'][j],logical['x'][j]) for j in range(len(noise))],label='max')
```
```
import torch
import pandas as pd
import numpy as np
import seaborn as sns
import os
sns.set(style="darkgrid")
import matplotlib.pyplot as plt
from glob import glob
%matplotlib inline
def get_title(filename):
    """
    >>> get_title("logs/0613/0613-q1-0000.train")
    '0613-q1-0000'
    """
    return os.path.splitext(os.path.basename(filename))[0]
def get_df_from_file(f):
    df = pd.read_csv(f)
    df = df[df["is_end_of_epoch"]].reset_index()
    return df
result_files = sorted(glob("../../../data/logs/0710*nocl*.train"))
titles = [get_title(f) for f in result_files]
dfes = (get_df_from_file(f) for f in result_files)
def do_plot(df, title):
    dfval = df[df["is_val"]]
    dftrain = df[df["is_val"] != True]
    sns.lineplot(x=dftrain.index, y="drmsd", data=dftrain, label="train-drmsd")
    sns.lineplot(x=dfval.index, y="drmsd", data=dfval, label="val-drmsd",color="lightblue")
    sns.lineplot(x=dftrain.index, y="rmsd", data=dftrain, label="train-rmsd")
    sns.lineplot(x=dfval.index, y="rmsd", data=dfval, label="val-rmsd", color="orange")
    sns.lineplot(x=dftrain.index, y="rmse", data=dftrain, label="rmse")
    # sns.lineplot(x=dftrain.index, y="combined", data=dftrain, label="drmsd+mse")
    plt.ylabel("Loss Value")
    plt.xlabel("Epoch")
    plt.legend(loc=(1.04,.7))
    plt.title("{} Training Loss".format(title))
    # plt.savefig("../figs/transtrain.pdf", pad_inches=1, bbox_inches="tight")
do_plot(get_df_from_file(result_files[0]), titles[0])
min_key = "rmsd"
mins = []
for df, title in zip(dfes, titles):
    try:
        dfval = df
    except KeyError as e:
        print(e)
        continue
    try:
        row = dfval[dfval[min_key] == dfval[min_key].min()]
    except KeyError:
        print(title)
        continue
    row["title"] = title[:]
    mins.append(row)
mins_df = pd.concat(mins)
mins_df.sort_values(min_key, inplace=True)
mins_df
names = [t for t in mins_df["title"][:10]]
for n in names:
    loc = None
    for i in range(len(result_files)):
        if n == os.path.splitext(os.path.basename(result_files[i]))[0]:
            loc = i
    do_plot(get_df_from_file(result_files[loc]), titles[loc])
    plt.show()
mean_rmsds = mins_df[mins_df["title"].str.contains("mean")]["rmsd"].values[:-1]
rand_rmsds = mins_df[mins_df["title"].str.contains("rand")]["rmsd"].values
mean_rmses = mins_df[mins_df["title"].str.contains("mean")]["rmse"].values[:-1]
rand_rmses = mins_df[mins_df["title"].str.contains("rand")]["rmse"].values
sns.boxplot(x=["w/angle means", "random"], y=[mean_rmsds, rand_rmsds], showfliers=False)
plt.ylabel("RMSD")
plt.xlabel("Initialization Strategy")
plt.ylim((.75, 5))
plt.title("Models trained on DRMSD loss only")
plt.savefig("model_overfit_initialization_strategies_drmsdonly_rmsd.png", dpi=300)
plt.savefig("model_overfit_initialization_strategies_drmsdonly_rmsd.svg")
sns.boxplot(x=["w/angle means", "random"], y=[mean_rmses, rand_rmses], showfliers=False)
plt.ylabel("RMSE")
plt.xlabel("Initialization Strategy")
plt.title("Model overfitting performance w.r.t. initialization")
plt.savefig("model_overfit_initialization_strategies_drmsdonly_rmse.png")
plt.savefig("model_overfit_initialization_strategies_drmsdonly_rmse.svg")
dfes_mean = (get_df_from_file(f) for f in result_files if "mean" in f)
dfes_rand = (get_df_from_file(f) for f in result_files if "rand" in f)
first = True
for dm, dr in zip(dfes_mean, dfes_rand):
    if first:
        lr = "random"
        lm = "mean"
        first = False
    else:
        lr = None
        lm = None
    try:
        sns.lineplot(x=dr.index, y="rmsd", data=dr, alpha=0.4, color="C1", label=lr)
    except ValueError:
        pass
    try:
        sns.lineplot(x=dm.index, y="rmsd", data=dm, alpha=0.4, color="C0", label=lm)
    except ValueError:
        continue
plt.legend()
plt.title("Training Loss Over Time - DRMSD only")
plt.xlabel("Epoch")
plt.savefig("mean_vs_random_training_over_time_drmsdonly.png", dpi=300)
plt.savefig("mean_vs_random_training_over_time_drmsdonly.svg")
```
# Text extraction from Fundação ABC reports - Experiment
TODO:
* Apply filters at this stage
### **If you have any questions, see the [PlatIAgro tutorials](https://platiagro.github.io/tutorials/).**
## Declaring parameters and hyperparameters
Declare parameters with the <img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAABhWlDQ1BJQ0MgcHJvZmlsZQAAKJF9kT1Iw0AcxV9TtaIVBzuIOASpThb8QhylikWwUNoKrTqYXPohNGlIUlwcBdeCgx+LVQcXZ10dXAVB8APEydFJ0UVK/F9SaBHjwXE/3t173L0DhFqJqWbbGKBqlpGMRcVMdkUMvKID3QhiCOMSM/V4aiENz/F1Dx9f7yI8y/vcn6NHyZkM8InEs0w3LOJ14ulNS+e8TxxiRUkhPiceNeiCxI9cl11+41xwWOCZISOdnCMOEYuFFpZbmBUNlXiKOKyoGuULGZcVzluc1VKFNe7JXxjMacsprtMcRAyLiCMBETIq2EAJFiK0aqSYSNJ+1MM/4PgT5JLJtQFGjnmUoUJy/OB/8LtbMz854SYFo0D7i21/DAOBXaBete3vY9uunwD+Z+BKa/rLNWDmk/RqUwsfAb3bwMV1U5P3gMsdoP9JlwzJkfw0hXweeD+jb8oCfbdA16rbW2Mfpw9AmrpaugEODoGRAmWveby7s7W3f880+vsBocZyukMJsmwAAAAGYktHRAD/AP8A/6C9p5MAAAAJcEhZcwAADdcAAA3XAUIom3gAAAAHdElNRQfkBgsMIwnXL7c0AAACDUlEQVQ4y92UP4gTQRTGf29zJxhJZ2NxbMBKziYWlmJ/ile44Nlkd+dIYWFzItiNgoIEtFaTzF5Ac/inE/urtLWxsMqmUOwCEpt1Zmw2xxKi53XitPO9H9978+aDf/3IUQvSNG0450Yi0jXG7C/eB0cFeu9viciGiDyNoqh2KFBrHSilWstgnU7nFLBTgl+ur6/7PwK11kGe5z3n3Hul1MaiuCgKDZwALHA7z/Oe1jpYCtRaB+PxuA8kQM1aW68Kt7e3zwBp6a5b1ibj8bhfhQYVZwMRiQHrvW9nWfaqCrTWPgRWvPdvsiy7IyLXgEJE4slk8nw+T5nDgDbwE9gyxryuwpRSF5xz+0BhrT07HA4/AyRJchUYASvAbhiGaRVWLIMBYq3tAojIszkMoNRulbXtPM8HwV/sXSQi54HvQRDcO0wfhGGYArvAKjAq2wAgiqJj3vsHpbtur9f7Vi2utLx60LLW2hljEuBJOYu9OI6vAzQajRvAaeBLURSPlsBelA+VhWGYaq3dwaZvbm6+m06noYicE5ErrVbrK3AXqHvvd4bD4Ye5No7jSERGwKr3Pms2m0pr7Rb30DWbTQWYcnFvAieBT7PZbFB1V6vVfpQaU4UtDQetdTCZTC557/eA48BlY8zbRZ1SqrW2tvaxCvtt2iRJ0i9/xb4x5uJRwmNlaaaJ3AfqIvKY/+78Av++6uiSZhYMAAAAAElFTkSuQmCC" /> button in the toolbar.<br>
The `dataset` variable holds the path for reading the files imported in the "Data upload" task.<br>
You can also import files with the <img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAABhWlDQ1BJQ0MgcHJvZmlsZQAAKJF9kT1Iw0AcxV9TtaIVBzuIOASpThb8QhylikWwUNoKrTqYXPohNGlIUlwcBdeCgx+LVQcXZ10dXAVB8APEydFJ0UVK/F9SaBHjwXE/3t173L0DhFqJqWbbGKBqlpGMRcVMdkUMvKID3QhiCOMSM/V4aiENz/F1Dx9f7yI8y/vcn6NHyZkM8InEs0w3LOJ14ulNS+e8TxxiRUkhPiceNeiCxI9cl11+41xwWOCZISOdnCMOEYuFFpZbmBUNlXiKOKyoGuULGZcVzluc1VKFNe7JXxjMacsprtMcRAyLiCMBETIq2EAJFiK0aqSYSNJ+1MM/4PgT5JLJtQFGjnmUoUJy/OB/8LtbMz854SYFo0D7i21/DAOBXaBete3vY9uunwD+Z+BKa/rLNWDmk/RqUwsfAb3bwMV1U5P3gMsdoP9JlwzJkfw0hXweeD+jb8oCfbdA16rbW2Mfpw9AmrpaugEODoGRAmWveby7s7W3f880+vsBocZyukMJsmwAAAAGYktHRAD/AP8A/6C9p5MAAAAJcEhZcwAADdcAAA3XAUIom3gAAAAHdElNRQfkBgsOBy6ASTeXAAAC/0lEQVQ4y5WUT2gcdRTHP29m99B23Uiq6dZisgoWCxVJW0oL9dqLfyhCvGWY2YUBI95MsXgwFISirQcLhS5hfgk5CF3wJIhFI7aHNsL2VFZFik1jS1qkiZKdTTKZ3/MyDWuz0fQLc/m99/vMvDfv+4RMlUrlkKqeAAaBAWAP8DSgwJ/AXRG5rao/WWsvTU5O3qKLBMD3fSMiPluXFZEPoyj67PGAMzw83PeEMABHVT/oGpiamnoAmCcEWhH5tFsgF4bh9oWFhfeKxeJ5a+0JVT0oImWgBPQCKfAQuAvcBq67rltX1b+6ApMkKRcKhe9V9QLwbavV+qRer692Sx4ZGSnEcXw0TdP3gSrQswGYz+d/S5IkVtXTwOlCoZAGQXAfmAdagAvsAErtdnuXiDy6+023l7qNRsMODg5+CawBzwB9wFPA7mx8ns/KL2Tl3xCRz5eWlkabzebahrHxPG+v4zgnc7ncufHx8Z+Hhoa29fT0lNM03Q30ikiqqg+ttX/EcTy3WTvWgdVqtddaOw/kgXvADHBHROZVNRaRvKruUNU+EdkPfGWM+WJTYOaSt1T1LPDS/4zLWWPMaLVaPWytrYvIaBRFl/4F9H2/JCKvGmMu+76/X0QOqGoZKDmOs1NV28AicMsYc97zvFdc1/0hG6kEeNsY83UnsCwivwM3VfU7YEZE7lhr74tIK8tbnJiYWPY8b6/ruleAXR0ftQy8boyZXi85CIIICDYpc2ZgYODY3NzcHmvt1eyvP64lETkeRdE1yZyixWLx5U2c8q4x5mIQBE1g33/0d3FlZeXFR06ZttZesNZejuO4q1NE5CPgWVV9E3ij47wB1IDlJEn+ljAM86urq7+KyAtZTgqsO0VV247jnOnv7/9xbGzMViqVMVX9uANYj6LonfVtU6vVkjRNj6jqGeCXzGrPAQeA10TkuKpOz87ONrayhnIA2Qo7BZwKw3B7kiRloKSqO13Xja21C47jPNgysFO1Wi0GmtmzQap6DWgD24A1Vb3SGf8Hfstmz1CuXEIAAAAASUVORK5CYII=" /> button in the toolbar.
```
dataset = "/tmp/data/fabc_reports-10.zip" #@param {type:"string"}
doc_id = True #@param {type:"boolean",label:"Id documento - Deployment",descrition:"Retorna coluna com os ids dos documentos em Deployment"}
report_name = True #@param {type:"boolean",label:"Nome relatório - Deployment",descrition:"Retorna coluna com o nome dos relatórios em Deployment"}
section_name = True #@param {type:"boolean",label:"Nome da seção relatórios - Deployment",descrition:"Retorna coluna com o nome das seções dos relatórios em Deployment"}
context = True #@param {type:"boolean",label:"Contexto - Deployment",descrition:"Retorna coluna de texto em Deployment"}
apply_filers = True #@param {type:"boolean",label:"Aplicar Filtros",descrition:"Aplicar filtros especiais nos contextos extraídos dos relatórios"}
keep_only_conclusions = True #@param {type:"boolean",label:"Maneter apenas conclusões",descrition:"Mantém apenas as conclusões. Apenas válido se apply_filers=True"}
#section_names_to_keep = ["Capítulo 6","Capitulo 6"] #@param ["Capítulo 6","Capitulo 6"] {type:"feature",multiple:true,label:"Seções para manter",descrition:"Mantém apenas as seções que tenham os nomes especificados. Apenas válido se apply_filers=True"}
min_context_length_in_tokens = 20 #@param {type:"integer",label:"Número mínimo de token de um contexto",descrition:"Utilizado para eliminar contextos que provavelmente representam erros de extração. Apenas válido se apply_filers=True"}
columns = {"doc_id":doc_id,"report_name":report_name,"section_name":section_name,"context":context}
```
## Reading the dataset
The example below reads tabular data (e.g., .csv).<br>
Modify the code according to the type of data you want to read.
```
folder = dataset.split('.')[0]
!mkdir -p {folder}
!unzip -o {dataset} -d {folder}
```
## Task content
```
import os
from aux_functions import get_reports_as_dataframe,filter_post_content
data_dir = "/tmp/data"
reports_dir = os.path.join(data_dir, folder.split("/")[-1])
df = get_reports_as_dataframe(reports_dir=reports_dir,columns_dict = columns)
df = filter_post_content(df = df,
keep_only_conclusions = keep_only_conclusions,
min_context_length_in_tokens = min_context_length_in_tokens)
```
## Visualizing the results
```
import matplotlib.pyplot as plt
from platiagro.plotting import plot_data_table
ax = plot_data_table(df)
plt.show()
#df.to_csv(f"{reports_dir}.csv", index=False)
df.to_csv("fabc_reports.csv", index=False)
```
## Saving task results
The platform keeps the contents of `/tmp/data/` for subsequent tasks.<br>
Use this folder to save models, metadata, and other results.
```
from joblib import dump
artifacts = {
"df": df,
"columns":columns,
"reports_dir":reports_dir,
"keep_only_conclusions": keep_only_conclusions,
"min_context_length_in_tokens": min_context_length_in_tokens,
}
dump(artifacts, "/tmp/data/fabc_reports.joblib")
```
Note before reading this document:<br>
<b><i> 1. The dst package contains the full implementation of the ideas presented here. The notebook 2. Implementación includes implementations for different configurations. In this document code will be shown for illustrative purposes; however, the package is the one in charge of carrying out the procedures presented here. To see the implementation, you can go to the source code, the Colab link, or the experiments notebook. </i></b>
<br><br>
<b><i> 2. The implementation is based on the paper <a href= "https://www.cvfoundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf">Image Style Transfer Using Convolutional Neural Networks</a>, on the <a href="https://medium.com/tensorflow/neural-style-transfer-creating-art-with-deep-learning-using-tf-keras-and-eager-execution-7d541ac31398">TensorFlow blog</a>, and on <a href="https://markojerkic.com/style-transfer-keras/">Marko Jerkic's blog</a>.</i></b>
<br><br>
<b><i> 3. The implementation is written in Python 3 using TensorFlow and Keras.</i></b>
<h1 style="width: 100%;text-align:center;"> Deep learning for style transfer </h1>
<p style="width: 100%;text-align:center;"> Universidad de Antioquia <br> Angelower Santana Velasquez <br> Martin Elias Quintero Osorio</p>
<h2 style="width: 100%;text-align:center;">Motivation</h2>
<p>Deep Learning is a specific field of Machine Learning and, consequently, of artificial intelligence (AI), and it has shown exponential progress in recent years. Deep Learning has impressed with its remarkable results in many areas, solving many different kinds of problems. Specifically, Deep Learning revolves around the study of models based on artificial neural networks: their loss functions, the capabilities of different kinds of optimization, strategies to avoid overfitting, the design of neural network architectures aimed at goals from different learning approaches, and many other strategies of great importance within Machine Learning. Deep Learning has several branches, such as natural language processing (NLP) and computer vision (CV), the latter having been a subject of study for more than a decade, with astonishing advances. Today we can see computer vision put to use in applications deployed in airports, cars, industry and, not far from us, in our smartphones. Within Deep Learning there is a type of neural network capable of emulating, according to some, the biological behavior that occurs in human beings during the process of "vision": convolutional neural networks (CNNs), named after the convolution operations they perform during training, an operator widely used in signal processing. Networks of this type are almost the default option for computer vision problems.</p>
<p>The computing power available today has made it possible to run many experiments and to develop ideas that astonish the common eye. Systems that drive cars, applications that identify certain plant diseases, filters and "masks" applied to images in real time (for example, Instagram and Snapchat), disease detection through pattern recognition (for example, diabetic retinopathy), and object classification are examples of what systems based on convolutional neural networks can achieve. </p>
<p>A quote from one of the most important artists of the 20th century, Pablo Picasso, says: "Painting is stronger than I am; it always makes me do what it wants." This report explores a rather ingenious application of Deep Learning techniques with convolutional neural networks to generate "art". In a way similar to Picasso's phrase, our model will be the one in charge of generating an image as it pleases from two data sources: a content image and a style image. Now, can you imagine how Vincent van Gogh would have painted Medellín? </p>
<img style="float: right;" src="../images/medellin_van.png" alt="Drawing" style="width: 100px;"> <p style="width: 100%;text-align:center;"><b><i>image 1 </i></b> <br><b>The starry night in Medellín</b> <br> Content image: Photograph of downtown Medellín<br> Style image: The Starry Night - Vincent van Gogh </p>
<h2 style="width: 100%;text-align:center;">Content and style</h2>
Cubism was an artistic movement that sought to represent elements of everyday life in compositions of well-defined geometric shapes. That is, to take an element, capture its essence, and render that essence in cubes, triangles, and rectangles. In that spirit, we define the content of an image as the essence it carries, leaving aside elements such as color and texture. On the other hand, when we speak of style we mean the shapes, colors, shadows, and other nuances that are not the essence. In fact, style transfer seeks to take the essence of one image and impose on it the shapes and colors of another. For example, in <b><i>image 1</i></b>, the content is the photograph of Medellín (<b><i>image 2, upper right side</i></b>) and the style is Vincent van Gogh's famous painting The Starry Night (<b><i>image 2, lower right side</i></b>).
<img src="../images/Thestarrynightinmedellin.png">
<p style="width: 100%;text-align:center;"><b><i>image 2 </i></b> <br><b>The Starry Night in Medellín (content, style, and result)</b> <br> Left side: result of Deep Transfer Style <br> Upper right: content<br>Lower right: style</p>
<h2 style="width: 100%;text-align:center;">Deep Transfer Style</h2>
The idea behind the style-transfer technique lies in the internal representations that convolutional neural networks build once they are trained. These networks are made up of layers with specific purposes: some act as activation maps that indicate how strongly an image responds to a given pattern or filter. In other words, if we apply a filter that detects edges and square shapes to an image of a television, the shape of the television will likely activate that filter. During training the network learns to detect these shapes, starting from very basic figures and evolving toward fine details and composite figures (although this is not where the name "convolutional" comes from).
In <b><i>image 3</i></b> you can see a rather famous network architecture, <b><i>VGG16</i></b>.
<img src="../images/vgg16.png">
<p style="width: 100%;text-align:center;"><b><i>image 3 </i></b> <br><b>How convolutional neural networks see the world</b> <br> Source: <a href="https://neurohive.io/en/popular-networks/vgg16/" >Blog</a></p>
<b><i>VGG16</i></b> consists of 5 internal blocks composed of convolutional and pooling layers (the pooling layers reduce the spatial size of the feature maps; shown in red in image 3). Each block contains a certain number of convolutional layers, and a common notation identifies a layer by its block number and its position within that block. For example, the first convolutional layer of block two is written <b><i>Conv2_1</i></b>, while the third convolutional layer of block four would be <b><i>Conv4_3</i></b>. <br>As mentioned earlier, during training the network learns multiple filters. The next questions we should ask are: what is the network actually seeing? What are those filters detecting?
<b><i>Image 4</i></b> answers these questions. We can see that, after training, <b><i>conv1_1</i></b> is able to detect certain colors and a couple of textures. Meanwhile, <b><i>conv2_1</i></b> appears to combine the textures and colors of <b><i>conv1_1</i></b>; keep in mind that between these layers, and the ones discussed next, there are other layers and operations that produce combinations of this kind.<br>
Moving on to <b><i>conv3_1</i></b>, we can make two observations: the colors look far more granular than in the two layers studied so far, and more defined shapes begin to appear, such as curved diagonal lines and well-detailed dots. <br>
<b><i>Conv4_1</i></b> shows an enormous jump with respect to <b><i>conv3_1</i></b>. We can easily identify more specific shapes: groupings of lines, circles, and color combinations arranged in non-uniform ways.<br>
Finally, <b><i>conv5_1</i></b> shows filters that already detect well-defined, sharply delineated shapes, taking the images toward an increasingly abstract representation as they advance through the layers.
<img src="../images/vgg16_filters_overview.jpg">
<p style="width: 100%;text-align:center;"><b><i>image 4 </i></b> <br><b>How convolutional neural networks see the world</b> <br> Source: <a href="https://blog.keras.io/how-convolutional-neural-networks-see-the-world.html" >Keras blog by Francois Chollet</a></p>
The analysis above is the cornerstone of the style-transfer technique presented by León A. Gatys in the article <a href= "https://www.cvfoundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf">Image Style Transfer Using Convolutional Neural Networks</a>. This behavior has been studied extensively and is what enables convolutional neural networks to capture details and shapes, which in turn makes them enormously powerful at classification tasks. It is clear that they start with very primitive structures, as in <b><i>conv1_1</i></b>, and arrive at more structured forms in <b><i>conv5_1</i></b>.<br>
What matters about this capability of <b><i>CNNs</i></b> for our purposes is that, while the first layers of the network are more sensitive to colors and textures, the last layers are able to capture the shapes of objects. Put another way, from the first layers we will capture the representation of the style image, and from one of the last layers, for example <b><i>conv5_1</i></b>, we will obtain the representation of the content image.
Specifically, the Gatys article uses <b><i>conv1_1, conv2_1, conv3_1, conv4_1, conv5_1</i></b> to capture the style and <b><i>conv5_2</i></b> to capture the content.
So the first step is to take a trained neural network and pass two images through it, one for the content and one for the style. Then we capture the respective layers of interest for each.
Next, we must generate a third image, which will be the result (see, for example, <b><i>image 1</i></b>) of combining the two previous ones. In the article this image is initialized with Gaussian noise. Later experiments in the thesis show that using the content image as the starting image gives more consistent results. In our experiments we will use both.
We then pass the generated image through the network, extract both the content and the style layers, and compute a style loss with respect to the layers obtained from the style image and a content loss with respect to the layers from the content image.
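These five style layers and one content layer can be pulled out of a pretrained network as a single multi-output model. The sketch below assumes TensorFlow's Keras implementation of VGG16, whose layer names follow a block/conv convention (conv1_1 corresponds to "block1_conv1", and so on); `weights=None` is used here only so the snippet runs without downloading the ImageNet weights — for real style transfer you would pass `weights="imagenet"`.

```python
import numpy as np
import tensorflow as tf

# Gatys-style layer choice, translated to tf.keras's VGG16 layer names.
STYLE_LAYERS = ["block1_conv1", "block2_conv1", "block3_conv1",
                "block4_conv1", "block5_conv1"]
CONTENT_LAYER = "block5_conv2"

vgg = tf.keras.applications.VGG16(include_top=False, weights=None,
                                  input_shape=(224, 224, 3))
vgg.trainable = False

# One output tensor per layer of interest.
outputs = [vgg.get_layer(name).output for name in STYLE_LAYERS + [CONTENT_LAYER]]
extractor = tf.keras.Model(vgg.input, outputs)

# Passing an image through the extractor yields one feature map per layer.
image = np.random.rand(1, 224, 224, 3).astype("float32")
features = extractor(image)
for name, f in zip(STYLE_LAYERS + [CONTENT_LAYER], features):
    print(name, f.shape)
```

A single forward pass through `extractor` then gives everything needed to evaluate both losses for one image.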
<h2 style="width: 100%;text-align:center;">Loss functions</h2>
<h3>Style loss</h3>
We need a special way to compute the style loss. Style is related to the correlations between the feature responses of an image. A Gram matrix gives us a representation of what those correlations look like. The following video explains how a Gram matrix works and how it is computed.
```
from IPython.display import HTML
HTML('<iframe align="middle" width="560" height="315" src="https://www.youtube-nocookie.com/embed/e718uVAW3KU" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>')
```
<p>Source: <a href="https://www.udacity.com">Udacity</a></p>
In the article <a href= "https://www.cvfoundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf">Image Style Transfer Using Convolutional Neural Networks</a>, the Gram matrix is computed as follows:
\begin{equation}
G_{i j}^{l}=\sum_{k} F_{i k}^{l} F_{j k}^{l}
\end{equation}
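As a minimal NumPy sketch of the equation above: flatten a layer's feature map into a matrix $F$ with one row per filter and one column per spatial position, then take $F F^{T}$ (the feature-map shape and values below are toy data for illustration):

```python
import numpy as np

def gram_matrix(feature_map):
    """Gram matrix G[i, j] = sum_k F[i, k] * F[j, k].

    `feature_map` has shape (height, width, n_filters), as produced by a
    convolutional layer; it is flattened to F with shape
    (n_filters, height * width) before the product.
    """
    h, w, n_filters = feature_map.shape
    F = feature_map.reshape(h * w, n_filters).T  # (N, M)
    return F @ F.T                               # (N, N)

fmap = np.random.rand(4, 4, 8)   # toy 4x4 feature map with 8 filters
G = gram_matrix(fmap)
print(G.shape)                   # (8, 8); G is symmetric
```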
Then we can compute the error between the Gram matrices of the style image and the generated image for each respective layer $l$. In the article this squared error is written as:
\begin{equation}
E_{l}=\frac{1}{4 N_{l}^{2} M_{l}^{2}} \sum_{i, j}\left(G_{i j}^{l}-A_{i j}^{l}\right)^{2}
\end{equation}
The style loss is then the sum of the Gram-matrix errors over the layers $l$, each multiplied by a per-layer weight. In the article this weight is the same for every layer.
\begin{equation}
\mathcal{L}_{\text { style }}(\vec{a}, \vec{x})=\sum_{l=0}^{L} w_{l} E_{l}
\end{equation}
<h3>Content loss</h3>
The content loss is half the squared Euclidean distance between the feature maps that represent the content. Thus, with $\vec{p}$ the original image, $\vec{x}$ the generated image, and $l$ the content layer, the loss is:
\begin{equation}
\mathcal{L}_{\text { content }}(\vec{p}, \vec{x}, l)=\frac{1}{2} \sum_{i, j}\left(F_{i j}^{l}-P_{i j}^{l}\right)^{2}
\end{equation}
$F^{l}$ is the feature map of layer $l$ for the generated image, while $P^{l}$ is the feature map of layer $l$ for the original image.
<h3>Total loss and image update</h3>
The total loss is the weighted sum of the style and content losses.
\begin{equation}
\mathcal{L}_{\text { total }}(\vec{p}, \vec{a}, \vec{x})=\alpha \mathcal{L}_{\text { content }}(\vec{p}, \vec{x})+\beta \mathcal{L}_{\text { style }}(\vec{a}, \vec{x})
\end{equation}
Here $\alpha$ is the weight given to the content and $\beta$ the weight given to the style.
All that remains is to compute the gradient of the total loss with respect to the generated image so that, after a certain number of iterations, the total error decreases and the generated image acquires the content of the content image and the style of the style image. The parameters $\alpha$ and $\beta$ are of great importance, since they determine how the image is generated. The article proposes certain ratios between them to obtain different outputs.
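The three equations above can be written out directly in NumPy. This is a toy sketch that operates on already-flattened feature matrices of shape (n_filters, n_positions); the equal per-layer weights follow the article, while the default $\beta$ and the test data are illustrative choices:

```python
import numpy as np

def gram(F):
    return F @ F.T

def style_layer_error(F_gen, F_style):
    # E_l = 1 / (4 N^2 M^2) * sum (G - A)^2
    N, M = F_gen.shape
    return ((gram(F_gen) - gram(F_style)) ** 2).sum() / (4 * N**2 * M**2)

def content_loss(F_gen, F_content):
    # L_content = 1/2 * sum (F - P)^2
    return 0.5 * ((F_gen - F_content) ** 2).sum()

def total_loss(gen_style_feats, style_feats, F_gen_content, F_content,
               alpha=1.0, beta=1e3, w=None):
    L = len(style_feats)
    w = w if w is not None else [1.0 / L] * L   # equal weights, as in the article
    L_style = sum(wl * style_layer_error(Fg, Fs)
                  for wl, Fg, Fs in zip(w, gen_style_feats, style_feats))
    return alpha * content_loss(F_gen_content, F_content) + beta * L_style

# With identical features every term vanishes:
F_content = np.random.rand(4, 16)
style_feats = [np.random.rand(4, 16) for _ in range(5)]
print(total_loss(style_feats, style_feats, F_content, F_content))  # 0.0
```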
<h2 style="width: 100%;text-align:center;">Image generation strategy</h2>
To finish, let's walk through the complete flow of everything covered so far, step by step.
First, we pass the style image and the content image through the already-trained neural network and obtain the convolutional layers we want from each. Following the article, for the style image we obtain the feature maps located at <b><i> conv1_1, conv2_1, conv3_1, conv4_1, conv5_1 </i></b>; at the same time, for the content image we obtain only <b><i>conv5_2</i></b>.
<br>
For <b><i> conv1_1, conv2_1, conv3_1, conv4_1, conv5_1 </i></b> of the style image we compute the Gram matrices.
Next we need an image to serve as the canvas for the generated image. This can be an image we generate with noise (for example, sampled from a normal distribution) or the content image itself. The experiments shown in the thesis and the article indicate that starting from the content image makes the style transfer more stable and more faithful to the content. This generated image is passed through the network and we extract from it both the layers obtained from the style image and the one from the content image; in other words, from the generated image we obtain <b><i>conv1_1, conv2_1, conv3_1, conv4_1, conv5_1, conv5_2</i></b>.
Now we compute the Gram matrices of the generated image and evaluate the style loss with respect to the Gram matrices of the style image. Similarly, we compute the content loss between <b><i>conv5_2</i></b> of the generated image and that of the content image.
We add both errors and then compute the gradients of the total error with respect to the generated image. We repeat this process for as many optimization steps as we want to apply to the generated image. The more iterations, the better the generated image is expected to look.
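The whole loop can be demonstrated end to end in a self-contained toy. Here a single random matrix stands in for each image's feature map — there is no real network — so the gradients of both losses have the closed forms given in the article and the update loop fits in plain NumPy. The sizes, weights, and learning rate are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def gram(F):
    return F @ F.T

# Toy setup: one (N, M) matrix plays the role of each image's feature map.
N, M = 4, 16                    # N "filters", M spatial positions
P = rng.random((N, M))          # content image features
S = rng.random((N, M))          # style image features
X = P.copy()                    # start the generated image from the content

A = gram(S)                     # style image Gram matrix (fixed)
alpha, beta, lr = 1.0, 50.0, 0.05

def losses(X):
    content = 0.5 * ((X - P) ** 2).sum()
    style = ((gram(X) - A) ** 2).sum() / (4 * N**2 * M**2)
    return alpha * content + beta * style

loss_start = losses(X)
for _ in range(300):
    # Closed-form gradients from the article:
    # dL_content/dF = (F - P);  dE/dF = ((G - A) F) / (N^2 M^2)
    grad = alpha * (X - P) + beta * (gram(X) - A) @ X / (N**2 * M**2)
    X -= lr * grad
loss_end = losses(X)
print(loss_start, "->", loss_end)
```

In the real technique the gradient with respect to the image is obtained by backpropagating through the network, but the update rule is exactly this one.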
# Collaborative filtering on the MovieLense Dataset
## Learning Objectives
1. Know how to explore the data using BigQuery
2. Know how to use the model to make recommendations for a user
3. Know how to use the model to recommend an item to a group of users
###### This notebook is based on part of Chapter 9 of [BigQuery: The Definitive Guide](https://www.oreilly.com/library/view/google-bigquery-the/9781492044451/ "http://shop.oreilly.com/product/0636920207399.do") by Lakshmanan and Tigani.
### MovieLens dataset
To illustrate recommender systems in action, let’s use the MovieLens dataset. This is a dataset of movie reviews released by GroupLens, a research lab in the Department of Computer Science and Engineering at the University of Minnesota, through funding by the US National Science Foundation.
Download the data and load it as a BigQuery table using:
```
import os
import tensorflow as tf
PROJECT = "your-project-here" # REPLACE WITH YOUR PROJECT ID
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["TFVERSION"] = '2.3'
%%bash
rm -rf bqml_data
mkdir bqml_data
cd bqml_data
curl -O 'http://files.grouplens.org/datasets/movielens/ml-20m.zip'
unzip ml-20m.zip
yes | bq rm -r $PROJECT:movielens
bq --location=US mk --dataset \
--description 'Movie Recommendations' \
$PROJECT:movielens
bq --location=US load --source_format=CSV \
--autodetect movielens.ratings gs://cloud-training/recommender-systems/movielens/ratings.csv
bq --location=US load --source_format=CSV \
--autodetect movielens.movies_raw gs://cloud-training/recommender-systems/movielens/movies.csv
```
## Exploring the data
Two tables should now be available in <a href="https://console.cloud.google.com/bigquery">BigQuery</a>.
Collaborative filtering provides a way to generate product recommendations for users, or user targeting for products. The starting point is a table, <b>movielens.ratings</b>, with three columns: a user id, an item id, and the rating that the user gave the product. This table can be sparse -- users don’t have to rate all products. Then, based on just the ratings, the technique finds similar users and similar products and determines the rating that a user would give an unseen product. Then, we can recommend the products with the highest predicted ratings to users, or target products at users with the highest predicted ratings.
```
%%bigquery --project $PROJECT
SELECT *
FROM movielens.ratings
LIMIT 10
```
A quick exploratory query yields that the dataset consists of over 138 thousand users, nearly 27 thousand movies, and a little more than 20 million ratings, confirming that the data has been loaded successfully.
```
%%bigquery --project $PROJECT
SELECT
COUNT(DISTINCT userId) numUsers,
COUNT(DISTINCT movieId) numMovies,
COUNT(*) totalRatings
FROM movielens.ratings
```
On examining the first few movies using the following query, we can see that the genres column is a formatted string:
```
%%bigquery --project $PROJECT
SELECT *
FROM movielens.movies_raw
WHERE movieId < 5
```
We can parse the genres into an array and rewrite the table as follows:
```
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.movies AS
SELECT * REPLACE(SPLIT(genres, "|") AS genres)
FROM movielens.movies_raw
%%bigquery --project $PROJECT
SELECT *
FROM movielens.movies
WHERE movieId < 5
```
## Matrix factorization
Matrix factorization is a collaborative filtering technique that relies on factorizing the ratings matrix into two sets of vectors called the user factors and the item factors. The user factors are a low-dimensional representation of a user_id, and the item factors similarly represent an item_id.
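In this notebook the factorization itself is done by BigQuery ML (a pre-trained model is copied in below), but the underlying idea can be sketched in a few lines of NumPy: learn one k-dimensional vector per user and per item by stochastic gradient descent on the observed ratings, then predict an unseen rating as a dot product. Everything here — the toy ratings, k, and the hyperparameters — is illustrative, not what BigQuery ML actually runs:

```python
import numpy as np

rng = np.random.default_rng(42)

# Tiny sparse ratings: (user, item, rating) triples, like movielens.ratings rows.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0),
           (1, 2, 1.0), (2, 1, 4.0), (2, 2, 5.0)]
n_users, n_items, k = 3, 3, 2   # k plays the role of num_factors

U = rng.normal(scale=0.1, size=(n_users, k))   # user factors
V = rng.normal(scale=0.1, size=(n_items, k))   # item factors

lr, reg = 0.05, 0.01
for epoch in range(200):
    for u, i, r in ratings:
        err = r - U[u] @ V[i]                  # residual on one observed rating
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * U[u] - reg * V[i])

# Predicted rating for an unseen (user, item) pair:
print(round(float(U[0] @ V[2]), 2))
```

The dot product `U[u] @ V[i]` plays the same role as the predicted_rating column returned by ML.PREDICT later in this notebook.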
```
%%bash
bq --location=US cp \
cloud-training-demos:movielens.recommender_16 \
movielens.recommender
%%bigquery --project $PROJECT
SELECT *
-- Note: remove cloud-training-demos if you are using your own model:
FROM ML.TRAINING_INFO(MODEL `cloud-training-demos.movielens.recommender`)
%%bigquery --project $PROJECT
SELECT *
-- Note: remove cloud-training-demos if you are using your own model:
FROM ML.TRAINING_INFO(MODEL `cloud-training-demos.movielens.recommender_16`)
```
In these experiments, the evaluation loss was lower (0.97) with num_factors=16 than with num_factors=36 (1.67) or num_factors=24 (1.45). We could continue experimenting, but we are likely to see diminishing returns with further experimentation.
## Making recommendations
With the trained model, we can now provide recommendations. For example, let’s find the best comedy movies to recommend to the user whose userId is 903. In the query below, we are calling ML.PREDICT passing in the trained recommendation model and providing a set of movieId and userId to carry out the predictions on. In this case, it’s just one userId (903), but all movies whose genre includes Comedy.
```
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
SELECT
movieId, title, 903 AS userId
FROM movielens.movies, UNNEST(genres) g
WHERE g = 'Comedy'
))
ORDER BY predicted_rating DESC
LIMIT 5
```
## Filtering out already rated movies
Of course, this includes movies the user has already seen and rated in the past. Let’s remove them.
**TODO 1**: Make a prediction for user 903 that does not include already seen movies.
```
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
WITH seen AS (
SELECT ARRAY_AGG(movieId) AS movies
FROM movielens.ratings
WHERE userId = 903
)
SELECT
movieId, title, 903 AS userId
FROM movielens.movies, UNNEST(genres) g, seen
WHERE g = 'Comedy' AND movieId NOT IN UNNEST(seen.movies)
))
ORDER BY predicted_rating DESC
LIMIT 5
```
For this user, this happens to yield the same set of movies -- the top predicted ratings didn’t include any of the movies the user has already seen.
## Customer targeting
In the previous section, we looked at how to identify the top-rated movies for a specific user. Sometimes, we have a product and have to find the customers who are likely to appreciate it. Suppose, for example, we wish to get more reviews for movieId=96481 which has only one rating and we wish to send coupons to the 5 users who are likely to rate it the highest.
**TODO 2**: Find the top five users who will likely enjoy *American Mullet (2001)*
```
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
WITH allUsers AS (
SELECT DISTINCT userId
FROM movielens.ratings
)
SELECT
96481 AS movieId,
(SELECT title FROM movielens.movies WHERE movieId=96481) title,
userId
FROM
allUsers
))
ORDER BY predicted_rating DESC
LIMIT 5
```
### Batch predictions for all users and movies
What if we wish to carry out predictions for every user and movie combination? Instead of having to pull distinct users and movies as in the previous query, a convenience function is provided to carry out batch predictions for all movieId and userId encountered during training. A limit is applied here, otherwise, all user-movie predictions will be returned and will crash the notebook.
```
%%bigquery --project $PROJECT
SELECT *
FROM ML.RECOMMEND(MODEL `cloud-training-demos.movielens.recommender_16`)
LIMIT 10
```
As seen in a section above, it is possible to filter out movies the user has already seen and rated in the past. The reason already seen movies aren’t filtered out by default is that there are situations (think of restaurant recommendations, for example) where it is perfectly expected that we would need to recommend restaurants the user has liked in the past.
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
```
import os
import imageio
import numpy as np
import warnings
warnings.filterwarnings('ignore',category=FutureWarning)
import tensorflow as tf
import matplotlib.pyplot as plt
from glob import glob
import cv2
import shutil
tf.logging.set_verbosity(tf.logging.ERROR)
class Helpers():
@staticmethod
def normalize(images):
return np.array(images)/127.5-1.0
@staticmethod
def unnormalize(images):
return (0.5*np.array(images)+0.5)*255
@staticmethod
def resize(image, size):
return np.array(cv2.resize(image, size))
@staticmethod
def split_images(image, is_testing):
image = imageio.imread(image).astype(np.float)
_, width, _ = image.shape
half_width = int(width/2)
source_image = image[:, half_width:, :]
destination_image = image[:, :half_width, :]
source_image = Helpers.resize(source_image, (IMAGE_SIZE, IMAGE_SIZE))
destination_image = Helpers.resize(destination_image, (IMAGE_SIZE, IMAGE_SIZE))
if not is_testing and np.random.random() > 0.5:
source_image = np.fliplr(source_image)
destination_image = np.fliplr(destination_image)
return source_image, destination_image
@staticmethod
def new_dir(path):
shutil.rmtree(path, ignore_errors=True)
os.makedirs(path, exist_ok=True)
@staticmethod
def archive_output():
shutil.make_archive("output", "zip", "./output")
@staticmethod
def image_pairs(batch, is_testing):
source_images, destination_images = [], []
for image_path in batch:
source_image, destination_image = Helpers.split_images(image_path, is_testing)
source_images.append(source_image)
destination_images.append(destination_image)
return source_images, destination_images
# Requires following dataset structure:
# dataset_name
# └── dataset_name
# ├── testing
# │ └── ... (image files)
# ├── testing_raw
# │ ├── ... (image files)
# ├── training
# │ └── ... (image files)
# └── validation (optional)
# └── ... (image files)
class DataLoader():
def __init__(self, dataset_name="pix2pix-depth"):
self.dataset_name = dataset_name
base_path = BASE_INPUT_PATH + self.dataset_name + "/" + self.dataset_name + "/"
self.training_path = base_path + "training/"
self.validation_path = base_path + "validation/"
self.testing_path = base_path + "testing/"
self.testing_raw_path = base_path + "testing_raw/"
def load_random_data(self, data_size, is_testing=False):
paths = glob(self.testing_path+"*") if is_testing else glob(self.training_path+"*")
source_images, destination_images = Helpers.image_pairs(np.random.choice(paths, size=data_size), is_testing)
return Helpers.normalize(source_images), Helpers.normalize(destination_images)
def yield_batch(self, batch_size, is_testing=False):
paths = glob(self.validation_path+"*") if is_testing else glob(self.training_path+"*")
for i in range(int(len(paths)/batch_size)-1):
batch = paths[i*batch_size:(i+1)*batch_size]
source_images, destination_images = Helpers.image_pairs(batch, is_testing)
yield Helpers.normalize(source_images), Helpers.normalize(destination_images)
# Model architecture from: https://phillipi.github.io/pix2pix/
class Pix2Pix():
def __init__(self):
Helpers.new_dir(BASE_OUTPUT_PATH + "training/")
Helpers.new_dir(BASE_OUTPUT_PATH + "training/losses/")
self.image_shape = (IMAGE_SIZE, IMAGE_SIZE, IMAGE_CHANNELS)
self.data_loader = DataLoader()
patch = int(IMAGE_SIZE / 2**4)
self.disc_patch = (patch, patch, 1)
self.generator_filters = 64
self.discriminator_filters = 64
optimizer = tf.keras.optimizers.Adam(LEARNING_RATE, BETA_1)
self.discriminator = self.discriminator()
self.discriminator.compile(loss="mse", optimizer=optimizer, metrics=["accuracy"])
self.generator = self.generator()
source_image = tf.keras.layers.Input(shape=self.image_shape)
destination_image = tf.keras.layers.Input(shape=self.image_shape)
generated_image = self.generator(destination_image)
self.discriminator.trainable = False
valid = self.discriminator([generated_image, destination_image])
self.combined = tf.keras.models.Model(inputs=[source_image, destination_image], outputs=[valid, generated_image])
self.combined.compile(loss=["mse", "mae"], loss_weights=[1, 100], optimizer=optimizer)
def generator(self):
def conv2d(layer_input, filters, bn=True):
downsample = tf.keras.layers.Conv2D(filters, kernel_size=4, strides=2, padding="same")(layer_input)
downsample = tf.keras.layers.LeakyReLU(alpha=LEAKY_RELU_ALPHA)(downsample)
if bn:
downsample = tf.keras.layers.BatchNormalization(momentum=BN_MOMENTUM)(downsample)
return downsample
def deconv2d(layer_input, skip_input, filters, dropout_rate=0):
upsample = tf.keras.layers.UpSampling2D(size=2)(layer_input)
upsample = tf.keras.layers.Conv2D(filters, kernel_size=4, strides=1, padding="same", activation="relu")(upsample)
if dropout_rate:
upsample = tf.keras.layers.Dropout(dropout_rate)(upsample)
upsample = tf.keras.layers.BatchNormalization(momentum=BN_MOMENTUM)(upsample)
upsample = tf.keras.layers.Concatenate()([upsample, skip_input])
return upsample
downsample_0 = tf.keras.layers.Input(shape=self.image_shape)
downsample_1 = conv2d(downsample_0, self.generator_filters, bn=False)
downsample_2 = conv2d(downsample_1, self.generator_filters*2)
downsample_3 = conv2d(downsample_2, self.generator_filters*4)
downsample_4 = conv2d(downsample_3, self.generator_filters*8)
downsample_5 = conv2d(downsample_4, self.generator_filters*8)
downsample_6 = conv2d(downsample_5, self.generator_filters*8)
downsample_7 = conv2d(downsample_6, self.generator_filters*8)
upsample_1 = deconv2d(downsample_7, downsample_6, self.generator_filters*8)
upsample_2 = deconv2d(upsample_1, downsample_5, self.generator_filters*8)
upsample_3 = deconv2d(upsample_2, downsample_4, self.generator_filters*8)
upsample_4 = deconv2d(upsample_3, downsample_3, self.generator_filters*4)
upsample_5 = deconv2d(upsample_4, downsample_2, self.generator_filters*2)
upsample_6 = deconv2d(upsample_5, downsample_1, self.generator_filters)
upsample_7 = tf.keras.layers.UpSampling2D(size=2)(upsample_6)
output_image = tf.keras.layers.Conv2D(IMAGE_CHANNELS, kernel_size=4, strides=1, padding="same", activation="tanh")(upsample_7)
return tf.keras.models.Model(downsample_0, output_image)
def discriminator(self):
def discriminator_layer(layer_input, filters, bn=True):
discriminator_layer = tf.keras.layers.Conv2D(filters, kernel_size=4, strides=2, padding="same")(layer_input)
discriminator_layer = tf.keras.layers.LeakyReLU(alpha=LEAKY_RELU_ALPHA)(discriminator_layer)
if bn:
discriminator_layer = tf.keras.layers.BatchNormalization(momentum=BN_MOMENTUM)(discriminator_layer)
return discriminator_layer
source_image = tf.keras.layers.Input(shape=self.image_shape)
destination_image = tf.keras.layers.Input(shape=self.image_shape)
combined_images = tf.keras.layers.Concatenate(axis=-1)([source_image, destination_image])
discriminator_layer_1 = discriminator_layer(combined_images, self.discriminator_filters, bn=False)
discriminator_layer_2 = discriminator_layer(discriminator_layer_1, self.discriminator_filters*2)
discriminator_layer_3 = discriminator_layer(discriminator_layer_2, self.discriminator_filters*4)
discriminator_layer_4 = discriminator_layer(discriminator_layer_3, self.discriminator_filters*8)
validity = tf.keras.layers.Conv2D(1, kernel_size=4, strides=1, padding="same")(discriminator_layer_4)
return tf.keras.models.Model([source_image, destination_image], validity)
def preview_training_progress(self, epoch, size=3):
def preview_outputs(epoch, size):
source_images, destination_images = self.data_loader.load_random_data(size, is_testing=True)
generated_images = self.generator.predict(destination_images)
grid_image = None
for i in range(size):
row = Helpers.unnormalize(np.concatenate([destination_images[i], generated_images[i], source_images[i]], axis=1))
if grid_image is None:
grid_image = row
else:
grid_image = np.concatenate([grid_image, row], axis=0)
plt.imshow(grid_image/255.0)
plt.show()
plt.close()
grid_image = cv2.cvtColor(np.float32(grid_image), cv2.COLOR_RGB2BGR)
cv2.imwrite(BASE_OUTPUT_PATH + "training/" + str(epoch) + ".png", grid_image)
def preview_losses():
def plot(title, data):
plt.plot(data, alpha=0.6)
plt.title(title + "_" + str(i))
plt.savefig(BASE_OUTPUT_PATH + "training/losses/" + title + "_" + str(i) + ".png")
plt.close()
for i, d in enumerate(self.d_losses):
plot("discriminator", d)
for i, g in enumerate(self.g_losses):
plot("generator", g)
preview_outputs(epoch, size)
#preview_losses()
def train(self):
valid = np.ones((BATCH_SIZE,) + self.disc_patch)
fake = np.zeros((BATCH_SIZE,) + self.disc_patch)
self.d_losses = []
self.g_losses = []
self.preview_training_progress(0)
for epoch in range(EPOCHS):
epoch_d_losses = []
epoch_g_losses = []
for iteration, (source_images, destination_images) in enumerate(self.data_loader.yield_batch(BATCH_SIZE)):
generated_images = self.generator.predict(destination_images)
d_loss_real = self.discriminator.train_on_batch([source_images, destination_images], valid)
d_loss_fake = self.discriminator.train_on_batch([generated_images, destination_images], fake)
d_losses = 0.5 * np.add(d_loss_real, d_loss_fake)
g_losses = self.combined.train_on_batch([source_images, destination_images], [valid, source_images])
epoch_d_losses.append(d_losses)
epoch_g_losses.append(g_losses)
print("\repoch: " + str(epoch)
+", iteration: "+ str(iteration)
+ ", d_losses: " + str(d_losses)
+ ", g_losses: " + str(g_losses)
, sep=" ", end=" ", flush=True)
self.d_losses.append(np.average(epoch_d_losses, axis=0))
self.g_losses.append(np.average(epoch_g_losses, axis=0))
self.preview_training_progress(epoch)
def test(self):
image_paths = glob(self.data_loader.testing_raw_path+"*")
for image_path in image_paths:
image = np.array(imageio.imread(image_path))
image_normalized = Helpers.normalize(image)
generated_batch = self.generator.predict(np.array([image_normalized]))
concat = Helpers.unnormalize(np.concatenate([image_normalized, generated_batch[0]], axis=1))
cv2.imwrite(BASE_OUTPUT_PATH+os.path.basename(image_path), cv2.cvtColor(np.float32(concat), cv2.COLOR_RGB2BGR))
BASE_INPUT_PATH = "" # Kaggle: "../input/pix2pix-depth/"
BASE_OUTPUT_PATH = "./output/"
IMAGE_SIZE = 256
IMAGE_CHANNELS = 3
LEARNING_RATE = 0.00015
BETA_1 = 0.5
LEAKY_RELU_ALPHA = 0.2
BN_MOMENTUM = 0.8
EPOCHS = 50
BATCH_SIZE = 32
gan = Pix2Pix()
gan.train()
gan.test()
```
## NYUD+KITTI- joint semantic segmentation and depth estimation on both datasets with a single network
```
%matplotlib inline
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np
import sys
sys.path.append('../')
from models import net
import cv2
import torch
from torch.autograd import Variable
# Pre-processing and post-processing constants #
CMAP_NYUD = np.load('../cmap_nyud.npy')
CMAP_KITTI = np.load('../cmap_kitti.npy')
DEPTH_COEFF_NYUD = 5000. # to convert into metres
DEPTH_COEFF_KITTI = 800.
HAS_CUDA = torch.cuda.is_available()
IMG_SCALE = 1./255
IMG_MEAN = np.array([0.485, 0.456, 0.406]).reshape((1, 1, 3))
IMG_STD = np.array([0.229, 0.224, 0.225]).reshape((1, 1, 3))
MAX_DEPTH_NYUD = 8.
MIN_DEPTH_NYUD = 0.
MAX_DEPTH_KITTI = 80.
MIN_DEPTH_KITTI = 0.
NUM_CLASSES = 46
NUM_CLASSES_NYUD = 40
NUM_CLASSES_KITTI = 6
NUM_TASKS = 2 # segm + depth
def prepare_img(img):
return (img * IMG_SCALE - IMG_MEAN) / IMG_STD
model = net(num_classes=NUM_CLASSES, num_tasks=NUM_TASKS)
if HAS_CUDA:
_ = model.cuda()
_ = model.eval()
ckpt = torch.load('../../weights/ExpNYUDKITTI_joint.ckpt')
model.load_state_dict(ckpt['state_dict'])
# NYUD
img_path = '../../examples/ExpNYUD_joint/000464.png'
img_nyud = np.array(Image.open(img_path))
gt_segm_nyud = np.array(Image.open('../../examples/ExpNYUD_joint/segm_gt_000464.png'))
# KITTI
img_path = '../../examples/ExpKITTI_joint/000099.png'
img_kitti = np.array(Image.open(img_path))
gt_segm_kitti = np.array(Image.open('../../examples/ExpKITTI_joint/segm_gt_000099.png'))
with torch.no_grad():
# nyud
img_var = Variable(torch.from_numpy(prepare_img(img_nyud).transpose(2, 0, 1)[None]), requires_grad=False).float()
if HAS_CUDA:
img_var = img_var.cuda()
segm, depth = model(img_var)
segm = cv2.resize(segm[0, :(NUM_CLASSES_NYUD)].cpu().data.numpy().transpose(1, 2, 0),
img_nyud.shape[:2][::-1],
interpolation=cv2.INTER_CUBIC)
depth = cv2.resize(depth[0, 0].cpu().data.numpy(),
img_nyud.shape[:2][::-1],
interpolation=cv2.INTER_CUBIC)
segm_nyud = CMAP_NYUD[segm.argmax(axis=2) + 1].astype(np.uint8)
depth_nyud = np.abs(depth)
# kitti
img_var = Variable(torch.from_numpy(prepare_img(img_kitti).transpose(2, 0, 1)[None]), requires_grad=False).float()
if HAS_CUDA:
img_var = img_var.cuda()
segm, depth = model(img_var)
segm = cv2.resize(segm[0, (NUM_CLASSES_NYUD):(NUM_CLASSES_NYUD + NUM_CLASSES_KITTI)].cpu().data.numpy().transpose(1, 2, 0),
img_kitti.shape[:2][::-1],
interpolation=cv2.INTER_CUBIC)
depth = cv2.resize(depth[0, 0].cpu().data.numpy(),
img_kitti.shape[:2][::-1],
interpolation=cv2.INTER_CUBIC)
segm_kitti = CMAP_KITTI[segm.argmax(axis=2)].astype(np.uint8)
depth_kitti = np.abs(depth)
plt.figure(figsize=(18, 12))
plt.subplot(241)
plt.imshow(img_nyud)
plt.title('NYUD: img')
plt.axis('off')
plt.subplot(242)
plt.imshow(CMAP_NYUD[gt_segm_nyud + 1])
plt.title('NYUD: gt segm')
plt.axis('off')
plt.subplot(243)
plt.imshow(segm_nyud)
plt.title('NYUD: pred segm')
plt.axis('off')
plt.subplot(244)
plt.imshow(depth_nyud, cmap='plasma', vmin=MIN_DEPTH_NYUD, vmax=MAX_DEPTH_NYUD)
plt.title('NYUD: pred depth')
plt.axis('off')
plt.subplot(245)
plt.imshow(img_kitti)
plt.title('KITTI: img')
plt.axis('off')
plt.subplot(246)
plt.imshow(gt_segm_kitti)
plt.title('KITTI: gt segm')
plt.axis('off')
plt.subplot(247)
plt.imshow(segm_kitti)
plt.title('KITTI: pred segm')
plt.axis('off')
plt.subplot(248)
plt.imshow(depth_kitti, cmap='plasma', vmin=MIN_DEPTH_KITTI, vmax=MAX_DEPTH_KITTI)
plt.title('KITTI: pred depth')
plt.axis('off');
```
# Charting OSeMOSYS transformation data
### These charts don't necessarily need to be mapped back to EGEDA historical data.
### They effectively run from the base year onwards.
### Eventually it would be good to incorporate some historical generation from before the base year.
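One hedged sketch of that eventual historical splice (hypothetical frames and column names, not the pipeline's actual interface): join EGEDA-style historical years onto the OSeMOSYS output so each charted series runs continuously across the base year.

```python
import pandas as pd

# Hypothetical frames: historical generation (to 2016) and model output (2017 on)
hist = pd.DataFrame({'TECHNOLOGY': ['Coal', 'Gas'], 2015: [100.0, 50.0], 2016: [105.0, 52.0]})
model = pd.DataFrame({'TECHNOLOGY': ['Coal', 'Gas'], 2017: [110.0, 55.0], 2018: [112.0, 57.0]})

# Merge on TECHNOLOGY so each series spans historical and projected years
combined = hist.merge(model, on = 'TECHNOLOGY')
```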
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
from openpyxl import Workbook
import xlsxwriter
import pandas.io.formats.excel
import glob
import re
# Path for OSeMOSYS output
path_output = '../../data/3_OSeMOSYS_output'
# Path for OSeMOSYS to EGEDA mapping
path_mapping = '../../data/2_Mapping_and_other'
# They're xlsx files, so use a wild card (*) to grab the filenames
OSeMOSYS_filenames = glob.glob(path_output + "/*.xlsx")
OSeMOSYS_filenames
# Read in mapping file
Mapping_file = pd.read_excel(path_mapping + '/OSeMOSYS mapping.xlsx', sheet_name = 'Mapping', skiprows = 1)
# Subset the mapping file so that it's just transformation
Map_trans = Mapping_file[Mapping_file['Balance'] == 'TRANS'].reset_index(drop = True)
# Define unique workbook and sheet combinations
Unique_trans = Map_trans.groupby(['Workbook', 'Sheet']).size().reset_index().loc[:, ['Workbook', 'Sheet']]
Unique_trans
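# Aside: the groupby().size() pattern above is just one way to get the unique
# (Workbook, Sheet) pairs; drop_duplicates() yields the same frame, as this
# quick check on toy data (hypothetical names) shows:
import pandas as pd
_toy = pd.DataFrame({'Workbook': ['A', 'A', 'B'], 'Sheet': ['s1', 's1', 's2']})
_via_groupby = _toy.groupby(['Workbook', 'Sheet']).size().reset_index().loc[:, ['Workbook', 'Sheet']]
assert _via_groupby.equals(_toy.drop_duplicates().reset_index(drop = True))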
# Determine list of files to read based on the workbooks identified in the mapping file
file_trans = pd.DataFrame()
for i in range(len(Unique_trans['Workbook'].unique())):
_file = pd.DataFrame({'File': [entry for entry in OSeMOSYS_filenames if Unique_trans['Workbook'].unique()[i] in entry],
'Workbook': Unique_trans['Workbook'].unique()[i]})
file_trans = pd.concat([file_trans, _file])
file_trans = file_trans.merge(Unique_trans, how = 'outer', on = 'Workbook')
# Create empty dataframe to store aggregated results
aggregate_df1 = pd.DataFrame()
# Now read in the OSeMOSYS output files so that they're all in one data frame (aggregate_df1)
for i in range(file_trans.shape[0]):
_df = pd.read_excel(file_trans.iloc[i, 0], sheet_name = file_trans.iloc[i, 2])
_df['Workbook'] = file_trans.iloc[i, 1]
_df['Sheet'] = file_trans.iloc[i, 2]
aggregate_df1 = pd.concat([aggregate_df1, _df])
aggregate_df1 = aggregate_df1.groupby(['TECHNOLOGY', 'FUEL', 'REGION']).sum().reset_index()
# Read in capacity data
capacity_df1 = pd.DataFrame()
# Populate the above blank dataframe with capacity data from the results workbook
for i in range(len(OSeMOSYS_filenames)):
_df = pd.read_excel(OSeMOSYS_filenames[i], sheet_name = 'TotalCapacityAnnual')
capacity_df1 = pd.concat([capacity_df1, _df])
# Now just extract the power capacity
pow_capacity_df1 = capacity_df1[capacity_df1['TECHNOLOGY'].str.startswith('POW')].reset_index(drop = True)
pow_capacity_df1.head()
# Get maximum year column to build data frame below
year_columns = []
for item in list(aggregate_df1.columns):
try:
year_columns.append(int(item))
except ValueError:
pass
max_year = max(year_columns)
OSeMOSYS_years = list(range(2017, max_year + 1))
OSeMOSYS_years
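# A toy illustration of the try/except filter above: non-year column headers
# make int() raise ValueError and are skipped, leaving only the year columns.
_example_cols = ['TECHNOLOGY', 'FUEL', 'REGION', 2017, 2018, 2050]
_example_years = []
for _c in _example_cols:
    try:
        _example_years.append(int(_c))
    except ValueError:
        pass
assert _example_years == [2017, 2018, 2050]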
# Colours for charting (to be amended later)
colours = pd.read_excel('../../data/2_Mapping_and_other/colour_template_7th.xlsx')
colours_hex = colours['hex']
colours_hex
Map_power = Map_trans[Map_trans['Sector'] == 'POW'].reset_index(drop = True)
Map_power.head(1)
################################ POWER SECTOR ###############################
# Aggregate data based on the Map_power mapping
# That is, group by TECHNOLOGY, FUEL and Sheet within each REGION
# First create empty dataframe
power_df1 = pd.DataFrame()
# Then loop through based on different regions/economies and stitch back together
for region in aggregate_df1['REGION'].unique():
interim_df1 = aggregate_df1[aggregate_df1['REGION'] == region]
interim_df1 = interim_df1.merge(Map_power, how = 'right', on = ['TECHNOLOGY', 'FUEL'])
interim_df1 = interim_df1.groupby(['TECHNOLOGY', 'FUEL', 'Sheet']).sum().reset_index()
# Now add in economy reference
interim_df1['economy'] = region
# Now append economy dataframe to communal data frame
power_df1 = pd.concat([power_df1, interim_df1])
power_df1 = power_df1[['economy', 'TECHNOLOGY', 'FUEL', 'Sheet'] + OSeMOSYS_years]
#power_df1.head(3)
power_df1[power_df1['TECHNOLOGY'] == 'POW_Transmission']
Map_refownsup = Map_trans[Map_trans['Sector'].isin(['REF', 'SUP', 'OWN'])].reset_index(drop = True)
Map_refownsup.head(1)
################################ REFINERY, OWN USE and SUPPLY TRANSFORMATION SECTOR ###############################
# Aggregate data based on the Map_refownsup mapping
# That is, group by TECHNOLOGY, FUEL, Sheet and Sector within each REGION
# First create empty dataframe
refownsup_df1 = pd.DataFrame()
# Then loop through based on different regions/economies and stitch back together
for region in aggregate_df1['REGION'].unique():
interim_df1 = aggregate_df1[aggregate_df1['REGION'] == region]
interim_df1 = interim_df1.merge(Map_refownsup, how = 'right', on = ['TECHNOLOGY', 'FUEL'])
interim_df1 = interim_df1.groupby(['TECHNOLOGY', 'FUEL', 'Sheet', 'Sector']).sum().reset_index()
# Now add in economy reference
interim_df1['economy'] = region
# Now append economy dataframe to communal data frame
refownsup_df1 = pd.concat([refownsup_df1, interim_df1])
refownsup_df1 = refownsup_df1[['economy', 'TECHNOLOGY', 'FUEL', 'Sheet', 'Sector'] + OSeMOSYS_years]
refownsup_df1.head(3)
# FUEL aggregations for UseByTechnology
coal_fuel = ['1_x_coal_thermal', '1_3_lignite']
other_fuel = ['10_electricity', '9_7_municipal_solid_waste', '9_9_x_blackliquor', '8_1_geothermal_power', '9_4_other_biomass']
solar_fuel = ['8_2_4_solar', '8_2_1_photovoltaic']
use_agg_fuels = ['Coal', 'Oil', 'Gas', 'Hydro', 'Nuclear', 'Solar', 'Wind', 'Other']
# TECHNOLOGY aggregations for ProductionByTechnology
coal_tech = ['POW_Black_Coal_PP', 'POW_Other_Coal_PP', 'POW_Sub_BituCoal_PP', 'POW_Sub_Brown_PP', 'POW_Ultra_BituCoal_PP']
storage_tech = ['POW_AggregatedEnergy_Storage_VPP', 'POW_EmbeddedBattery_Storage']
gas_tech = ['POW_CCGT_PP', 'POW_OCGT_PP']
chp_tech = ['POW_CHP_PP', 'POW_Ultra_CHP_PP']
other_tech = ['POW_Geothermal_PP', 'POW_IPP_PP', 'POW_TIDAL_PP', 'POW_WasteToEnergy_PP']
hydro_tech = ['POW_Hydro_PP', 'POW_Pumped_Hydro', 'POW_Storage_Hydro_PP']
im_tech = ['POW_IMPORTS_PP']
solar_tech = ['POW_SolarCSP_PP', 'POW_SolarFloatPV_PP', 'POW_SolarPV_PP', 'POW_SolarRoofPV_PP']
wind_tech = ['POW_WindOff_PP', 'POW_Wind_PP']
prod_agg_tech = ['Coal', 'Oil', 'Gas', 'Hydro', 'Nuclear', 'Wind', 'Solar', 'Bio', 'Storage', 'Other', 'CHP', 'Imports']
# Refinery vectors
Ref_input = ['3_1_crude_oil', '3_x_NGLs']
Ref_output = ['4_1_1_motor_gasoline', '4_1_2_aviation_gasoline', '4_10_other_petroleum_products', '4_2_naphtha', '4_3_jet_fuel',
'4_4_other_kerosene', '4_5_gas_diesel_oil', '4_6_fuel_oil', '4_7_lpg', '4_8_refinery_gas_not_liq', '4_9_ethane']
# Capacity vectors
coal_cap = ['POW_Black_Coal_PP', 'POW_Sub_BituCoal_PP', 'POW_Sub_Brown_PP', 'POW_CHP_COAL_PP', 'POW_Other_Coal_PP', 'POW_Ultra_BituCoal_PP', 'POW_Ultra_CHP_PP']  # 'POW_Black_Coal_PP' was listed twice; duplicate removed
gas_cap = ['POW_CCGT_PP', 'POW_OCGT_PP', 'POW_CHP_GAS_PP']
oil_cap = ['POW_Diesel_PP']
nuclear_cap = ['POW_Nuclear_PP']
hydro_cap = ['POW_Hydro_PP', 'POW_Pumped_Hydro', 'POW_Storage_Hydro_PP', 'POW_TIDAL_PP']
bio_cap = ['POW_Solid_Biomass_PP', 'POW_CHP_BIO_PP']
wind_cap = ['POW_Wind_PP', 'POW_WindOff_PP']
solar_cap = ['POW_SolarCSP_PP', 'POW_SolarFloatPV_PP', 'POW_SolarPV_PP', 'POW_SolarRoofPV_PP']
storage_cap = ['POW_AggregatedEnergy_Storage_VPP', 'POW_EmbeddedBattery_Storage']
other_cap = ['POW_CHP_PP', 'POW_Geothermal_PP', 'POW_WasteToEnergy_PP', 'POW_IMPORTS_PP', 'POW_IPP_PP']
# 'POW_HEAT_HP' not in electricity capacity
transmission_cap = ['POW_Transmission']
pow_capacity_agg = ['Coal', 'Gas', 'Oil', 'Nuclear', 'Hydro', 'Biomass', 'Wind', 'Solar', 'Storage', 'Other']
# Chart years for column charts
col_chart_years = [2017, 2020, 2030, 2040, 2050]
# Define month and year to create folder for saving charts/tables
month_year = pd.to_datetime('today').strftime('%B_%Y')
# Make space for charts (before data/tables)
chart_height = 18 # number of excel rows before the data is written
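# Sheet layout convention used below (a sketch of the row arithmetic): rows
# 0..chart_height-1 stay free for the inserted charts, the first table's header
# lands at zero-indexed row chart_height, and each subsequent table starts
# nrows + 3 rows further down (leaving two blank rows between tables).
_first_header_row = 18             # == chart_height
_second_header_row = 18 + 5 + 3    # e.g. after a first table with 5 data rows
assert _second_header_row == 26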
# TRANSFORMATION SECTOR: Build use, capacity and production dataframes with appropriate aggregations to chart
for economy in power_df1['economy'].unique():
use_df1 = power_df1[(power_df1['economy'] == economy) &
(power_df1['Sheet'] == 'UseByTechnology') &
(power_df1['TECHNOLOGY'] != 'POW_Transmission')].reset_index(drop = True)
# Now build aggregate variables of the FUELS
coal = use_df1[use_df1['FUEL'].isin(coal_fuel)].groupby(['economy']).sum().assign(FUEL = 'Coal',
TECHNOLOGY = 'Coal power')
other = use_df1[use_df1['FUEL'].isin(other_fuel)].groupby(['economy']).sum().assign(FUEL = 'Other',
TECHNOLOGY = 'Other power')
solar = use_df1[use_df1['FUEL'].isin(solar_fuel)].groupby(['economy']).sum().assign(FUEL = 'Solar',
TECHNOLOGY = 'Solar power')
# Use by fuel data frame
usefuel_df1 = pd.concat([use_df1, coal, other, solar])[['FUEL',
'TECHNOLOGY'] + OSeMOSYS_years].reset_index(drop = True)
usefuel_df1.loc[usefuel_df1['FUEL'] == '4_5_gas_diesel_oil', 'FUEL'] = 'Oil'
usefuel_df1.loc[usefuel_df1['FUEL'] == '5_1_natural_gas', 'FUEL'] = 'Gas'
usefuel_df1.loc[usefuel_df1['FUEL'] == '6_hydro', 'FUEL'] = 'Hydro'
usefuel_df1.loc[usefuel_df1['FUEL'] == '7_nuclear', 'FUEL'] = 'Nuclear'
usefuel_df1.loc[usefuel_df1['FUEL'] == '8_2_3_wind', 'FUEL'] = 'Wind'
usefuel_df1 = usefuel_df1[usefuel_df1['FUEL'].isin(use_agg_fuels)].set_index('FUEL').loc[use_agg_fuels].reset_index()
usefuel_df1 = usefuel_df1.groupby('FUEL').sum().reset_index()
usefuel_df1['Transformation'] = 'Input fuel'
usefuel_df1 = usefuel_df1[['FUEL', 'Transformation'] + OSeMOSYS_years]
nrows1 = usefuel_df1.shape[0]
ncols1 = usefuel_df1.shape[1]
usefuel_df2 = usefuel_df1[['FUEL', 'Transformation'] + col_chart_years]
nrows2 = usefuel_df2.shape[0]
ncols2 = usefuel_df2.shape[1]
# Now build production dataframe
prodelec_df1 = power_df1[(power_df1['economy'] == economy) &
(power_df1['Sheet'] == 'ProductionByTechnology') &
(power_df1['FUEL'].isin(['10_electricity', '10_electricity_Dx']))].reset_index(drop = True)
# Now build the aggregations of technology (power plants)
coal_pp = prodelec_df1[prodelec_df1['TECHNOLOGY'].isin(coal_tech)].groupby(['economy']).sum().assign(TECHNOLOGY = 'Coal')
gas_pp = prodelec_df1[prodelec_df1['TECHNOLOGY'].isin(gas_tech)].groupby(['economy']).sum().assign(TECHNOLOGY = 'Gas')
storage_pp = prodelec_df1[prodelec_df1['TECHNOLOGY'].isin(storage_tech)].groupby(['economy']).sum().assign(TECHNOLOGY = 'Storage')
chp_pp = prodelec_df1[prodelec_df1['TECHNOLOGY'].isin(chp_tech)].groupby(['economy']).sum().assign(TECHNOLOGY = 'CHP')
other_pp = prodelec_df1[prodelec_df1['TECHNOLOGY'].isin(other_tech)].groupby(['economy']).sum().assign(TECHNOLOGY = 'Other')
hydro_pp = prodelec_df1[prodelec_df1['TECHNOLOGY'].isin(hydro_tech)].groupby(['economy']).sum().assign(TECHNOLOGY = 'Hydro')
misc = prodelec_df1[prodelec_df1['TECHNOLOGY'].isin(im_tech)].groupby(['economy']).sum().assign(TECHNOLOGY = 'Imports')
solar_pp = prodelec_df1[prodelec_df1['TECHNOLOGY'].isin(solar_tech)].groupby(['economy']).sum().assign(TECHNOLOGY = 'Solar')
wind_pp = prodelec_df1[prodelec_df1['TECHNOLOGY'].isin(wind_tech)].groupby(['economy']).sum().assign(TECHNOLOGY = 'Wind')
# Production by tech dataframe (with the above aggregations added)
prodelec_bytech_df1 = pd.concat([prodelec_df1, coal_pp, gas_pp, storage_pp, chp_pp, other_pp, hydro_pp, misc, solar_pp, wind_pp])\
[['TECHNOLOGY'] + OSeMOSYS_years].reset_index(drop = True)
prodelec_bytech_df1.loc[prodelec_bytech_df1['TECHNOLOGY'] == 'POW_Diesel_PP', 'TECHNOLOGY'] = 'Oil'
prodelec_bytech_df1.loc[prodelec_bytech_df1['TECHNOLOGY'] == 'POW_Nuclear_PP', 'TECHNOLOGY'] = 'Nuclear'
prodelec_bytech_df1.loc[prodelec_bytech_df1['TECHNOLOGY'] == 'POW_Solid_Biomass_PP', 'TECHNOLOGY'] = 'Bio'
prodelec_bytech_df1['Production'] = 'Electricity'
prodelec_bytech_df1 = prodelec_bytech_df1[['TECHNOLOGY', 'Production'] + OSeMOSYS_years]
prodelec_bytech_df1 = prodelec_bytech_df1[prodelec_bytech_df1['TECHNOLOGY'].isin(prod_agg_tech)].set_index('TECHNOLOGY').loc[prod_agg_tech].reset_index()
# Change from PJ to TWh (1 TWh = 3.6 PJ)
s = prodelec_bytech_df1.select_dtypes(include=[np.number]) / 3.6
prodelec_bytech_df1[s.columns] = s
nrows3 = prodelec_bytech_df1.shape[0]
ncols3 = prodelec_bytech_df1.shape[1]
prodelec_bytech_df2 = prodelec_bytech_df1[['TECHNOLOGY', 'Production'] + col_chart_years]
nrows4 = prodelec_bytech_df2.shape[0]
ncols4 = prodelec_bytech_df2.shape[1]
##################################################################################################################################################################
# Now create some refinery dataframes
refinery_df1 = refownsup_df1[(refownsup_df1['economy'] == economy) &
(refownsup_df1['Sector'] == 'REF') &
(refownsup_df1['FUEL'].isin(Ref_input))].copy()  # copy so the column assignment below avoids SettingWithCopyWarning
refinery_df1['Transformation'] = 'Input to refinery'
refinery_df1 = refinery_df1[['FUEL', 'Transformation'] + OSeMOSYS_years]
refinery_df1.loc[refinery_df1['FUEL'] == '3_1_crude_oil', 'FUEL'] = 'Crude oil'
refinery_df1.loc[refinery_df1['FUEL'] == '3_x_NGLs', 'FUEL'] = 'NGLs'
nrows5 = refinery_df1.shape[0]
ncols5 = refinery_df1.shape[1]
refinery_df2 = refownsup_df1[(refownsup_df1['economy'] == economy) &
(refownsup_df1['Sector'] == 'REF') &
(refownsup_df1['FUEL'].isin(Ref_output))].copy()  # copy so the column assignments below avoid SettingWithCopyWarning
refinery_df2['Transformation'] = 'Output from refinery'
refinery_df2 = refinery_df2[['FUEL', 'Transformation'] + OSeMOSYS_years]
refinery_df2.loc[refinery_df2['FUEL'] == '4_1_1_motor_gasoline', 'FUEL'] = 'Motor gasoline'
refinery_df2.loc[refinery_df2['FUEL'] == '4_1_2_aviation_gasoline', 'FUEL'] = 'Aviation gasoline'
refinery_df2.loc[refinery_df2['FUEL'] == '4_2_naphtha', 'FUEL'] = 'Naphtha'
refinery_df2.loc[refinery_df2['FUEL'] == '4_3_jet_fuel', 'FUEL'] = 'Jet fuel'
refinery_df2.loc[refinery_df2['FUEL'] == '4_4_other_kerosene', 'FUEL'] = 'Other kerosene'
refinery_df2.loc[refinery_df2['FUEL'] == '4_5_gas_diesel_oil', 'FUEL'] = 'Gas diesel oil'
refinery_df2.loc[refinery_df2['FUEL'] == '4_6_fuel_oil', 'FUEL'] = 'Fuel oil'
refinery_df2.loc[refinery_df2['FUEL'] == '4_7_lpg', 'FUEL'] = 'LPG'
refinery_df2.loc[refinery_df2['FUEL'] == '4_8_refinery_gas_not_liq', 'FUEL'] = 'Refinery gas'
refinery_df2.loc[refinery_df2['FUEL'] == '4_9_ethane', 'FUEL'] = 'Ethane'
refinery_df2.loc[refinery_df2['FUEL'] == '4_10_other_petroleum_products', 'FUEL'] = 'Other'
refinery_df2['FUEL'] = pd.Categorical(
refinery_df2['FUEL'],
categories = ['Motor gasoline', 'Aviation gasoline', 'Naphtha', 'Jet fuel', 'Other kerosene', 'Gas diesel oil', 'Fuel oil', 'LPG', 'Refinery gas', 'Ethane', 'Other'],
ordered = True)
refinery_df2 = refinery_df2.sort_values('FUEL')
nrows6 = refinery_df2.shape[0]
ncols6 = refinery_df2.shape[1]
refinery_df3 = refinery_df2[['FUEL', 'Transformation'] + col_chart_years]
nrows7 = refinery_df3.shape[0]
ncols7 = refinery_df3.shape[1]
#####################################################################################################################################################################
# Create some power capacity dataframes
powcap_df1 = pow_capacity_df1[pow_capacity_df1['REGION'] == economy]
coal_capacity = powcap_df1[powcap_df1['TECHNOLOGY'].isin(coal_cap)].groupby(['REGION']).sum().assign(TECHNOLOGY = 'Coal')
oil_capacity = powcap_df1[powcap_df1['TECHNOLOGY'].isin(oil_cap)].groupby(['REGION']).sum().assign(TECHNOLOGY = 'Oil')
wind_capacity = powcap_df1[powcap_df1['TECHNOLOGY'].isin(wind_cap)].groupby(['REGION']).sum().assign(TECHNOLOGY = 'Wind')
storage_capacity = powcap_df1[powcap_df1['TECHNOLOGY'].isin(storage_cap)].groupby(['REGION']).sum().assign(TECHNOLOGY = 'Storage')
gas_capacity = powcap_df1[powcap_df1['TECHNOLOGY'].isin(gas_cap)].groupby(['REGION']).sum().assign(TECHNOLOGY = 'Gas')
hydro_capacity = powcap_df1[powcap_df1['TECHNOLOGY'].isin(hydro_cap)].groupby(['REGION']).sum().assign(TECHNOLOGY = 'Hydro')
solar_capacity = powcap_df1[powcap_df1['TECHNOLOGY'].isin(solar_cap)].groupby(['REGION']).sum().assign(TECHNOLOGY = 'Solar')
nuclear_capacity = powcap_df1[powcap_df1['TECHNOLOGY'].isin(nuclear_cap)].groupby(['REGION']).sum().assign(TECHNOLOGY = 'Nuclear')
bio_capacity = powcap_df1[powcap_df1['TECHNOLOGY'].isin(bio_cap)].groupby(['REGION']).sum().assign(TECHNOLOGY = 'Biomass')
other_capacity = powcap_df1[powcap_df1['TECHNOLOGY'].isin(other_cap)].groupby(['REGION']).sum().assign(TECHNOLOGY = 'Other')
transmission = powcap_df1[powcap_df1['TECHNOLOGY'].isin(transmission_cap)].groupby(['REGION']).sum().assign(TECHNOLOGY = 'Transmission')
# Capacity by tech dataframe (with the above aggregations added)
powcap_df1 = pd.concat([powcap_df1, coal_capacity, gas_capacity, oil_capacity, nuclear_capacity, hydro_capacity, bio_capacity, wind_capacity, solar_capacity, storage_capacity, other_capacity])\
[['TECHNOLOGY'] + OSeMOSYS_years].reset_index(drop = True)
powcap_df1 = powcap_df1[powcap_df1['TECHNOLOGY'].isin(pow_capacity_agg)].reset_index(drop = True)
nrows8 = powcap_df1.shape[0]
ncols8 = powcap_df1.shape[1]
powcap_df2 = powcap_df1[['TECHNOLOGY'] + col_chart_years]
nrows9 = powcap_df2.shape[0]
ncols9 = powcap_df2.shape[1]
# Define directory
script_dir = '../../results/' + month_year + '/Transformation/'
results_dir = os.path.join(script_dir, 'economy_breakdown/', economy)
if not os.path.isdir(results_dir):
os.makedirs(results_dir)
# Create a Pandas excel writer workbook using xlsxwriter as the engine and save it in the directory created above
writer = pd.ExcelWriter(results_dir + '/' + economy + '_transform.xlsx', engine = 'xlsxwriter')
workbook = writer.book
pandas.io.formats.excel.ExcelFormatter.header_style = None
usefuel_df1.to_excel(writer, sheet_name = economy + '_use_fuel', index = False, startrow = chart_height)
usefuel_df2.to_excel(writer, sheet_name = economy + '_use_fuel', index = False, startrow = chart_height + nrows1 + 3)
prodelec_bytech_df1.to_excel(writer, sheet_name = economy + '_prodelec_bytech', index = False, startrow = chart_height)
prodelec_bytech_df2.to_excel(writer, sheet_name = economy + '_prodelec_bytech', index = False, startrow = chart_height + nrows3 + 3)
refinery_df1.to_excel(writer, sheet_name = economy + '_refining', index = False, startrow = chart_height)
refinery_df2.to_excel(writer, sheet_name = economy + '_refining', index = False, startrow = chart_height + nrows5 + 3)
refinery_df3.to_excel(writer, sheet_name = economy + '_refining', index = False, startrow = chart_height + nrows5 + nrows6 + 6)
powcap_df1.to_excel(writer, sheet_name = economy + '_pow_capacity', index = False, startrow = chart_height)
powcap_df2.to_excel(writer, sheet_name = economy + '_pow_capacity', index = False, startrow = chart_height + nrows8 + 3)
# Access the workbook and first sheet with data from df1
worksheet1 = writer.sheets[economy + '_use_fuel']
# Comma format and header format
comma_format = workbook.add_format({'num_format': '#,##0'})
header_format = workbook.add_format({'font_name': 'Calibri', 'font_size': 11, 'bold': True})
cell_format1 = workbook.add_format({'bold': True})
# Apply comma format and header format to relevant data rows
worksheet1.set_column(2, ncols1 + 1, None, comma_format)
worksheet1.set_row(chart_height, None, header_format)
worksheet1.set_row(chart_height + nrows1 + 3, None, header_format)
worksheet1.write(0, 0, economy + ' transformation use fuel', cell_format1)
# Create a use by fuel area chart
usefuel_chart1 = workbook.add_chart({'type': 'area', 'subtype': 'stacked'})
usefuel_chart1.set_size({
'width': 500,
'height': 300
})
usefuel_chart1.set_chartarea({
'border': {'none': True}
})
usefuel_chart1.set_x_axis({
'name': 'Year',
'label_position': 'low',
'major_tick_mark': 'none',
'minor_tick_mark': 'none',
'num_font': {'name': 'Segoe UI', 'size': 10, 'color': '#323232', 'rotation': -45},
'position_axis': 'on_tick',
'interval_unit': 4,
'line': {'color': '#bebebe'}
})
usefuel_chart1.set_y_axis({
'major_tick_mark': 'none',
'minor_tick_mark': 'none',
'name': 'PJ',
'num_font': {'name': 'Segoe UI', 'size': 10, 'color': '#323232'},
'major_gridlines': {
'visible': True,
'line': {'color': '#bebebe'}
},
'line': {'color': '#bebebe'}
})
usefuel_chart1.set_legend({
'font': {'name': 'Segoe UI', 'size': 10}
#'none': True
})
usefuel_chart1.set_title({
'none': True
})
# Configure the series of the chart from the dataframe data.
for i in range(nrows1):
usefuel_chart1.add_series({
'name': [economy + '_use_fuel', chart_height + i + 1, 0],
'categories': [economy + '_use_fuel', chart_height, 2, chart_height, ncols1 - 1],
'values': [economy + '_use_fuel', chart_height + i + 1, 2, chart_height + i + 1, ncols1 - 1],
'fill': {'color': colours_hex[i]},
'border': {'none': True}
})
worksheet1.insert_chart('B3', usefuel_chart1)
# Create a use by fuel column chart
usefuel_chart2 = workbook.add_chart({'type': 'column', 'subtype': 'stacked'})
usefuel_chart2.set_size({
'width': 500,
'height': 300
})
usefuel_chart2.set_chartarea({
'border': {'none': True}
})
usefuel_chart2.set_x_axis({
'name': 'Year',
'label_position': 'low',
'major_tick_mark': 'none',
'minor_tick_mark': 'none',
'num_font': {'name': 'Segoe UI', 'size': 10, 'color': '#323232'},
'line': {'color': '#bebebe'}
})
usefuel_chart2.set_y_axis({
'major_tick_mark': 'none',
'minor_tick_mark': 'none',
'name': 'PJ',
'num_font': {'name': 'Segoe UI', 'size': 10, 'color': '#323232'},
'major_gridlines': {
'visible': True,
'line': {'color': '#bebebe'}
},
'line': {'color': '#bebebe'}
})
usefuel_chart2.set_legend({
'font': {'name': 'Segoe UI', 'size': 10}
#'none': True
})
usefuel_chart2.set_title({
'none': True
})
# Configure the series of the chart from the dataframe data.
for i in range(nrows2):
usefuel_chart2.add_series({
'name': [economy + '_use_fuel', chart_height + nrows1 + i + 4, 0],
'categories': [economy + '_use_fuel', chart_height + nrows1 + 3, 2, chart_height + nrows1 + 3, ncols2 - 1],
'values': [economy + '_use_fuel', chart_height + nrows1 + i + 4, 2, chart_height + nrows1 + i + 4, ncols2 - 1],
'fill': {'color': colours_hex[i]},
'border': {'none': True}
})
worksheet1.insert_chart('J3', usefuel_chart2)
############################# Next sheet: Production of electricity by technology ##################################
# Access the workbook and second sheet
worksheet2 = writer.sheets[economy + '_prodelec_bytech']
# Apply comma format and header format to relevant data rows
worksheet2.set_column(2, ncols3 + 1, None, comma_format)
worksheet2.set_row(chart_height, None, header_format)
worksheet2.set_row(chart_height + nrows3 + 3, None, header_format)
worksheet2.write(0, 0, economy + ' electricity production by technology', cell_format1)
# Create an electricity production area chart
prodelec_bytech_chart1 = workbook.add_chart({'type': 'area', 'subtype': 'stacked'})
prodelec_bytech_chart1.set_size({
'width': 500,
'height': 300
})
prodelec_bytech_chart1.set_chartarea({
'border': {'none': True}
})
prodelec_bytech_chart1.set_x_axis({
'name': 'Year',
'label_position': 'low',
'major_tick_mark': 'none',
'minor_tick_mark': 'none',
'num_font': {'name': 'Segoe UI', 'size': 10, 'color': '#323232', 'rotation': -45},
'position_axis': 'on_tick',
'interval_unit': 4,
'line': {'color': '#bebebe'}
})
prodelec_bytech_chart1.set_y_axis({
'major_tick_mark': 'none',
'minor_tick_mark': 'none',
'name': 'TWh',
'num_font': {'name': 'Segoe UI', 'size': 10, 'color': '#323232'},
'major_gridlines': {
'visible': True,
'line': {'color': '#bebebe'}
},
'line': {'color': '#bebebe'}
})
prodelec_bytech_chart1.set_legend({
'font': {'name': 'Segoe UI', 'size': 10}
#'none': True
})
prodelec_bytech_chart1.set_title({
'none': True
})
# Configure the series of the chart from the dataframe data.
for i in range(nrows3):
prodelec_bytech_chart1.add_series({
'name': [economy + '_prodelec_bytech', chart_height + i + 1, 0],
'categories': [economy + '_prodelec_bytech', chart_height, 2, chart_height, ncols3 - 1],
'values': [economy + '_prodelec_bytech', chart_height + i + 1, 2, chart_height + i + 1, ncols3 - 1],
'fill': {'color': colours_hex[i]},
'border': {'none': True}
})
worksheet2.insert_chart('B3', prodelec_bytech_chart1)
# Create an electricity production column chart
prodelec_bytech_chart2 = workbook.add_chart({'type': 'column', 'subtype': 'stacked'})
prodelec_bytech_chart2.set_size({
'width': 500,
'height': 300
})
prodelec_bytech_chart2.set_chartarea({
'border': {'none': True}
})
prodelec_bytech_chart2.set_x_axis({
'name': 'Year',
'label_position': 'low',
'major_tick_mark': 'none',
'minor_tick_mark': 'none',
'num_font': {'name': 'Segoe UI', 'size': 10, 'color': '#323232'},
'line': {'color': '#bebebe'}
})
prodelec_bytech_chart2.set_y_axis({
'major_tick_mark': 'none',
'minor_tick_mark': 'none',
'name': 'TWh',
'num_font': {'name': 'Segoe UI', 'size': 10, 'color': '#323232'},
'major_gridlines': {
'visible': True,
'line': {'color': '#bebebe'}
},
'line': {'color': '#bebebe'}
})
prodelec_bytech_chart2.set_legend({
'font': {'name': 'Segoe UI', 'size': 10}
#'none': True
})
prodelec_bytech_chart2.set_title({
'none': True
})
# Configure the series of the chart from the dataframe data.
for i in range(nrows4):
prodelec_bytech_chart2.add_series({
'name': [economy + '_prodelec_bytech', chart_height + nrows3 + i + 4, 0],
'categories': [economy + '_prodelec_bytech', chart_height + nrows3 + 3, 2, chart_height + nrows3 + 3, ncols4 - 1],
'values': [economy + '_prodelec_bytech', chart_height + nrows3 + i + 4, 2, chart_height + nrows3 + i + 4, ncols4 - 1],
'fill': {'color': colours_hex[i]},
'border': {'none': True}
})
worksheet2.insert_chart('J3', prodelec_bytech_chart2)
#################################################################################################################################################
## Refining sheet
# Access the workbook and second sheet
worksheet3 = writer.sheets[economy + '_refining']
# Apply comma format and header format to relevant data rows
worksheet3.set_column(2, ncols5 + 1, None, comma_format)
worksheet3.set_row(chart_height, None, header_format)
worksheet3.set_row(chart_height + nrows5 + 3, None, header_format)
worksheet3.set_row(chart_height + nrows5 + nrows6 + 6, None, header_format)
worksheet3.write(0, 0, economy + ' refining', cell_format1)
# Create an input refining line chart
refinery_chart1 = workbook.add_chart({'type': 'line'})
refinery_chart1.set_size({
'width': 500,
'height': 300
})
refinery_chart1.set_chartarea({
'border': {'none': True}
})
refinery_chart1.set_x_axis({
'name': 'Year',
'label_position': 'low',
'major_tick_mark': 'none',
'minor_tick_mark': 'none',
'num_font': {'name': 'Segoe UI', 'size': 10, 'color': '#323232'},
'line': {'color': '#bebebe'}
})
refinery_chart1.set_y_axis({
'major_tick_mark': 'none',
'minor_tick_mark': 'none',
'name': 'PJ',
'num_font': {'name': 'Segoe UI', 'size': 10, 'color': '#323232'},
'major_gridlines': {
'visible': True,
'line': {'color': '#bebebe'}
},
'line': {'color': '#bebebe'}
})
refinery_chart1.set_legend({
'font': {'name': 'Segoe UI', 'size': 10}
#'none': True
})
refinery_chart1.set_title({
'none': True
})
# Configure the series of the chart from the dataframe data.
for i in range(nrows5):
refinery_chart1.add_series({
'name': [economy + '_refining', chart_height + i + 1, 0],
'categories': [economy + '_refining', chart_height, 2, chart_height, ncols5 - 1],
'values': [economy + '_refining', chart_height + i + 1, 2, chart_height + i + 1, ncols5 - 1],
'line': {'color': colours_hex[i + 3],
'width': 1.25}
})
worksheet3.insert_chart('B3', refinery_chart1)
# Create an output refining line chart
refinery_chart2 = workbook.add_chart({'type': 'line'})
refinery_chart2.set_size({
'width': 500,
'height': 300
})
refinery_chart2.set_chartarea({
'border': {'none': True}
})
refinery_chart2.set_x_axis({
'name': 'Year',
'label_position': 'low',
'major_tick_mark': 'none',
'minor_tick_mark': 'none',
'num_font': {'name': 'Segoe UI', 'size': 10, 'color': '#323232'},
'line': {'color': '#bebebe'}
})
refinery_chart2.set_y_axis({
'major_tick_mark': 'none',
'minor_tick_mark': 'none',
'name': 'PJ',
'num_font': {'name': 'Segoe UI', 'size': 10, 'color': '#323232'},
'major_gridlines': {
'visible': True,
'line': {'color': '#bebebe'}
},
'line': {'color': '#bebebe'}
})
refinery_chart2.set_legend({
'font': {'name': 'Segoe UI', 'size': 10}
#'none': True
})
refinery_chart2.set_title({
'none': True
})
# Configure the series of the chart from the dataframe data.
for i in range(nrows6):
refinery_chart2.add_series({
'name': [economy + '_refining', chart_height + nrows5 + i + 4, 0],
'categories': [economy + '_refining', chart_height + nrows5 + 3, 2, chart_height + nrows5 + 3, ncols6 - 1],
'values': [economy + '_refining', chart_height + nrows5 + i + 4, 2, chart_height + nrows5 + i + 4, ncols6 - 1],
'line': {'color': colours_hex[i],
'width': 1}
})
worksheet3.insert_chart('J3', refinery_chart2)
# Create refinery output column stacked
refinery_chart3 = workbook.add_chart({'type': 'column', 'subtype': 'stacked'})
refinery_chart3.set_size({
'width': 500,
'height': 300
})
refinery_chart3.set_chartarea({
'border': {'none': True}
})
refinery_chart3.set_x_axis({
'name': 'Year',
'label_position': 'low',
'major_tick_mark': 'none',
'minor_tick_mark': 'none',
'num_font': {'name': 'Segoe UI', 'size': 10, 'color': '#323232'},
'line': {'color': '#bebebe'}
})
refinery_chart3.set_y_axis({
'major_tick_mark': 'none',
'minor_tick_mark': 'none',
'name': 'PJ',
'num_font': {'name': 'Segoe UI', 'size': 10, 'color': '#323232'},
'major_gridlines': {
'visible': True,
'line': {'color': '#bebebe'}
},
'line': {'color': '#bebebe'}
})
refinery_chart3.set_legend({
'font': {'name': 'Segoe UI', 'size': 10}
#'none': True
})
refinery_chart3.set_title({
'none': True
})
# Configure the series of the chart from the dataframe data.
for i in range(nrows7):
refinery_chart3.add_series({
'name': [economy + '_refining', chart_height + nrows5 + nrows6 + i + 7, 0],
'categories': [economy + '_refining', chart_height + nrows5 + nrows6 + 6, 2, chart_height + nrows5 + nrows6 + 6, ncols7 - 1],
'values': [economy + '_refining', chart_height + nrows5 + nrows6 + i + 7, 2, chart_height + nrows5 + nrows6 + i + 7, ncols7 - 1],
'fill': {'color': colours_hex[i]},
'border': {'none': True}
})
worksheet3.insert_chart('R3', refinery_chart3)
############################# Next sheet: Power capacity ##################################
# Access the workbook and second sheet
worksheet4 = writer.sheets[economy + '_pow_capacity']
# Apply comma format and header format to relevant data rows
worksheet4.set_column(1, ncols8 + 1, None, comma_format)
worksheet4.set_row(chart_height, None, header_format)
worksheet4.set_row(chart_height + nrows8 + 3, None, header_format)
worksheet4.write(0, 0, economy + ' electricity capacity by technology', cell_format1)
# Create a power capacity area chart
pow_cap_chart1 = workbook.add_chart({'type': 'area', 'subtype': 'stacked'})
pow_cap_chart1.set_size({
'width': 500,
'height': 300
})
pow_cap_chart1.set_chartarea({
'border': {'none': True}
})
pow_cap_chart1.set_x_axis({
'name': 'Year',
'label_position': 'low',
'major_tick_mark': 'none',
'minor_tick_mark': 'none',
'num_font': {'name': 'Segoe UI', 'size': 10, 'color': '#323232', 'rotation': -45},
'position_axis': 'on_tick',
'interval_unit': 4,
'line': {'color': '#bebebe'}
})
pow_cap_chart1.set_y_axis({
'major_tick_mark': 'none',
'minor_tick_mark': 'none',
'name': 'GW',
'num_font': {'font': 'Segoe UI', 'size': 10, 'color': '#323232'},
'major_gridlines': {
'visible': True,
'line': {'color': '#bebebe'}
},
'line': {'color': '#bebebe'}
})
pow_cap_chart1.set_legend({
'font': {'font': 'Segoe UI', 'size': 10}
#'none': True
})
pow_cap_chart1.set_title({
'none': True
})
# Configure the series of the chart from the dataframe data.
for i in range(nrows8):
pow_cap_chart1.add_series({
'name': [economy + '_pow_capacity', chart_height + i + 1, 0],
'categories': [economy + '_pow_capacity', chart_height, 1, chart_height, ncols8 - 1],
'values': [economy + '_pow_capacity', chart_height + i + 1, 1, chart_height + i + 1, ncols8 - 1],
'fill': {'color': colours_hex[i]},
'border': {'none': True}
})
worksheet4.insert_chart('B3', pow_cap_chart1)
# Create an electricity capacity stacked column chart
pow_cap_chart2 = workbook.add_chart({'type': 'column', 'subtype': 'stacked'})
pow_cap_chart2.set_size({
'width': 500,
'height': 300
})
pow_cap_chart2.set_chartarea({
'border': {'none': True}
})
pow_cap_chart2.set_x_axis({
'name': 'Year',
'label_position': 'low',
'major_tick_mark': 'none',
'minor_tick_mark': 'none',
'num_font': {'font': 'Segoe UI', 'size': 10, 'color': '#323232'},
'line': {'color': '#bebebe'}
})
pow_cap_chart2.set_y_axis({
'major_tick_mark': 'none',
'minor_tick_mark': 'none',
'name': 'GW',
'num_font': {'font': 'Segoe UI', 'size': 10, 'color': '#323232'},
'major_gridlines': {
'visible': True,
'line': {'color': '#bebebe'}
},
'line': {'color': '#bebebe'}
})
pow_cap_chart2.set_legend({
'font': {'font': 'Segoe UI', 'size': 10}
#'none': True
})
pow_cap_chart2.set_title({
'none': True
})
# Configure the series of the chart from the dataframe data.
for i in range(nrows9):
pow_cap_chart2.add_series({
'name': [economy + '_pow_capacity', chart_height + nrows8 + i + 4, 0],
'categories': [economy + '_pow_capacity', chart_height + nrows8 + 3, 1, chart_height + nrows8 + 3, ncols9 - 1],
'values': [economy + '_pow_capacity', chart_height + nrows8 + i + 4, 1, chart_height + nrows8 + i + 4, ncols9 - 1],
'fill': {'color': colours_hex[i]},
'border': {'none': True}
})
worksheet4.insert_chart('J3', pow_cap_chart2)
writer.save()
```
**TODO**
- create a better control structure for internal parameters; look at SKiDL's lib file that does the conversion from SKiDL to PySpice for inspiration
```
#Library import statements
from skidl.pyspice import *
#can you say cheeky
import PySpice as pspice
#because it's written by a kiwi you know
import lcapy as kiwi
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
from IPython.display import YouTubeVideo, display
import traceback
#notebook specific loading control statements
%matplotlib inline
#tool to log notebook internals
#https://github.com/jrjohansson/version_information
%load_ext version_information
%version_information skidl, PySpice,lcapy, sympy, numpy, matplotlib, pandas, scipy
#import the op read tool from last subchapter
from DC_1_Codes import op_results_collect
```
# Working with SKiDl elements
The following example is, to be honest, pedantic; but it serves to introduce the current source in SPICE, which can be a real headache. It also shows how to let Python do the work of interacting with elements, something we will make great use of down the road.
So why is this so pedantic? Source transformations are mostly an analytical simplification tool. Yes, from the standpoint of Thevenin's and Norton's theorems, there is some practicality in being able to find a current source that matches a voltage source (and vice versa) with equivalent power. There are, however, serious limitations in how real circuits handle voltage and current inputs differently. And frankly, from SPICE's point of view, it doesn't care; it's going to solve the circuit regardless. So if you need a refresher on source transformations, with an illustration of why they are a really great analytical tool, please watch ALL ABOUT ELECTRONICS' YouTube video on "Source Transformations", from which this example is pulled.
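Before wiring anything up, it helps to see the transformation rule itself outside of SPICE. The sketch below is plain Python (the function names are my own, not SKiDL's); it converts between the two source types and checks that the maximum available power is unchanged:

```python
# Source-transformation arithmetic (Thevenin <-> Norton), independent of SPICE.
# A voltage source V with series resistance R is equivalent to a current
# source I = V / R with the same R in parallel, and vice versa.

def vs_to_cs(v_volts, r_ohms):
    """Equivalent current source value for a voltage source with series R."""
    return v_volts / r_ohms

def cs_to_vs(i_amps, r_ohms):
    """Equivalent voltage source value for a current source with parallel R."""
    return i_amps * r_ohms

# Check with the values used in this example: 4 V / 6 Ohm and 2 A / 12 Ohm.
i_eq = vs_to_cs(4.0, 6.0)   # ~0.667 A
v_eq = cs_to_vs(2.0, 12.0)  # 24 V

# Maximum available power is invariant under the transformation:
assert abs(4.0**2 / 6.0 - i_eq**2 * 6.0) < 1e-12
assert abs(2.0**2 * 12.0 - v_eq**2 / 12.0) < 1e-12
```

This same power-invariance check appears as an assertion inside the transformation helpers built later in this section.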
## Example 1 from "Source Transformation" @ ~6:30 min
```
YouTubeVideo('FtEU5ZoO-fg', width=500, height=400, start=390)
```
### A very important word about current sources in SPICE
Before building the circuit above, a word about any current sources in SPICE. Recall the discussion around the surefire method of measuring currents in SPICE using a 0V linear voltage source (aka the SPICE ammeter trick): in SPICE, positive current flows through a source from the positive terminal to the negative terminal. By convention, we draw independent and dependent current sources with an arrow in the direction the current is injected. That means every current source is polarized such that the positive terminal is at the tail of the drawn current arrow and the head is the negative terminal. A schematic editor with built-in SPICE does all this terminal bookkeeping in the background. But when working indirectly with netlists, via SKiDL, you have to remember this yourself, or else it will bite you and keep costing you time until this arrow-and-terminal convention for current sources is ingrained.
```
reset()
vs_4=V(ref='s_4', dc_value=4@u_V)
vs_8=V(ref='s_8', dc_value=8@u_V)
cs_2=I(ref='s_2', dc_value=2@u_A)
r1=R(ref='1', value=6@u_Ohm)
r2=R(ref='2', value=12@u_Ohm)
r3=R(ref='3', value=12@u_Ohm)
(gnd&vs_4['p', 'n']&r1) |r2
vs_8['p', 'n']+=r2[2], r3[2]
(gnd & cs_2 | r3)
circ=generate_netlist()
print(circ)
preop_sim=op_results_collect(circ)
#store the results for comparison to post source transformations
pre_tf_res=preop_sim.results_df
pre_tf_res
```
### SKiDl's Diabetic Syntax `&`, `|`
Notice the usage above of Python's logical AND operator `&` and logical OR operator `|` in creating the circuit. Since `&` and `|` are just operators in Python, we can apply what are called operator extensions to make them act in special ways in a certain context. In SKiDL's particular case, `&` is shorthand for putting two elements in series, and `|` is shorthand for putting two elements in parallel. These operators are parentheses sensitive but not polarity aware, so polarized elements need to have their terminals called out when using them. They are called diabetic syntactic sugar in light of their release announcement, entitled ["SWEETENING SKIDL"](http://xess.com/skidl/docs/_site/blog/sweetening-skidl). Using them is up to your coding style and that of your colleagues. All joking aside, they are extremely useful operators to know how to use, and we will use them in this book.
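To see the Python mechanism behind this (the toy class below is my own illustration, not SKiDL's actual implementation), here is a tiny class that overloads `__and__` and `__or__` to combine resistances in series and parallel:

```python
# Toy illustration of operator overloading like SKiDL's & (series) and | (parallel).
# This is NOT SKiDL's implementation -- just the same Python mechanism it uses.

class Res:
    def __init__(self, ohms):
        self.ohms = ohms

    def __and__(self, other):   # series: R1 & R2
        return Res(self.ohms + other.ohms)

    def __or__(self, other):    # parallel: R1 | R2
        return Res(self.ohms * other.ohms / (self.ohms + other.ohms))

r1, r2, r3 = Res(6), Res(12), Res(12)
print((r1 & r2).ohms)   # series: 18
print((r2 | r3).ohms)   # parallel: 6.0
```

Note that Python gives `&` higher precedence than `|`, which is one reason the SKiDL circuit expressions in this section are sprinkled with parentheses.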
### Crafting the transformation tool
We are not going into much detail about these tools. The important thing is that we can take advantage of the fact that all our elements (SKiDL parts) and nets are Python objects, and therefore have methods and attributes that are accessible and usable beyond producing part of a line of a SPICE netlist. For instance, voltage and current sources store their DC value in `<V\I>.dc_value`, while resistors store their resistance in `R.value`. This allows us to use the elements to perform calculations outside of SPICE and, even better, to assist in creating new elements, as we have done below.
```
#%%writefile -a DC_1_Codes.py
#chapter 1 section 2 get_skidl_spice_ref function
#used for getting the name of the element as it would appear in a
#generated netlist
def get_skidl_spice_ref(skidle_element):
"""
Helper function to retrieve SKiDL element name as appears in the final netlist
Args:
skidle_element (skidl.part.Part): SKiDl part to get the netlist name from
Returns:
returns a string with the netlist name of `skidle_element`, or throws an
error if `skidle_element` is not a SKiDl part
"""
assert repr(type(skidle_element))=="<class 'skidl.part.Part'>", '`skidle_element` must be a SKiDl part'
if skidle_element.ref_prefix!=skidle_element.ref[0]:
return skidle_element.ref_prefix+skidle_element.ref
else:
return skidle_element.ref
#%%writefile -a DC_1_Codes.py
#chapter 1 section 2 dc_cs2vs function
#creates a voltage source element equivalent to the current source based on the
#value of the input DC current element and its parallel resistor
#via the source transformation rules
def dc_cs2vs(dc_cs, cs_par_res):
"""
Create a new equivalent voltage source to the current source with parallel resistance
Args:
dc_cs (SKiDl current source): the current source to transform to a voltage source
cs_par_res (SKiDl resistor): the parallel resistor to the current source to be transformed
Returns:
returns an equivalent DC voltage source element to the current source based on the
value of the input DC current element and its parallel resistor via the source transformation rules
TODO:
-figure out how to do assertion check that cs_par_res is in parallel to dc_cs
Future:
-make into subcircuit with net inputs to automatically add the new source and resistance to the circuit
"""
#do assertion checks to ensure that passed-in objects are what they are supposed to be
assert dc_cs.ref_prefix=='I', '<dc_cs> was not a current source'
assert cs_par_res.ref_prefix=='R', '<cs_par_res> was not a resistor'
old_maxpower=float((dc_cs.dc_value**2)*cs_par_res.value)
new_vs_val=float(dc_cs.dc_value*cs_par_res.value)
assert np.around(float(new_vs_val*dc_cs.dc_value), 6)==np.around(old_maxpower, 6), "Um, something wrong since before and after max power not equal"
new_vs_ref=dc_cs.ref
if new_vs_ref[0]!='I':
new_vs_ref='I'+new_vs_ref
new_vs_ref=f"V{new_vs_ref[1:]}_f_{new_vs_ref}"
print(new_vs_ref)
eq_vs=V(dc_value=new_vs_val@u_V, ref=new_vs_ref)
warnings.warn(f"""New voltage source value: {new_vs_val} [V] with max available power {old_maxpower} [W] \n transformed creation statement will be like: \n`(gnd & <eq_vs>['n', 'p'] & <cs_par_res>)`""")
return eq_vs
#%%writefile -a DC_1_Codes.py
#chapter 1 section 2 dc_vs2cs function
#creates a current source element equivalent to the voltage source based on the
#value of the input DC voltage element and its series resistor
#via the source transformation rules
def dc_vs2cs(dc_vs, vs_ser_res):
"""
Create a new equivalent current source to voltage source with series resistance
Args:
dc_vs (SKiDl voltage source): the voltage source to transform to a current source
vs_ser_res (SKiDl resistor): the series resistor to the voltage source to be transformed
Returns:
returns an equivalent DC current source element to the voltage source based on the
value of the input DC voltage element and its series resistor via the source transformation rules
TODO:
-figure out how to do assertion check that vs_ser_res is in series to dc_vs
Future:
-make into subcircuit with net inputs to automatically add the new source and resistance to the circuit
"""
#do assertion checks to ensure that passed-in objects are what they are supposed to be
assert dc_vs.ref_prefix=='V', '<dc_vs> was not a voltage source'
assert vs_ser_res.ref_prefix=='R', '<vs_ser_res> was not a resistor'
old_maxpower=float((dc_vs.dc_value**2)/vs_ser_res.value)
new_cs_val=float(dc_vs.dc_value/vs_ser_res.value)
assert np.around(float(new_cs_val*dc_vs.dc_value), 6)==np.around(old_maxpower, 6), "Um, something wrong since before and after max power not equal"
new_cs_ref=dc_vs.ref
if new_cs_ref[0]!='V':
new_cs_ref='V'+new_cs_ref
new_cs_ref=f"I{new_cs_ref[1:]}_f_{new_cs_ref}"
#print(new_cs_ref)
eq_cs=I(dc_value=new_cs_val@u_A, ref=new_cs_ref)# might still need this: , circuit=vs_ser_res.circuit)
warnings.warn(f"""New current source value: {new_cs_val} [A] with max available power {old_maxpower} [W] \n transformed creation statement will be like:\n `(gnd & <eq_cs>['n', 'p'] | <vs_ser_res>)` \n""")
return eq_cs
```
### Validate the transform
For this, we transform the left voltage source with its series resistor and the right current source with its parallel resistor simultaneously, which deviates somewhat from how ALL ABOUT ELECTRONICS worked the example analytically. We keep the center parallel resistor and voltage source as a reference network.
```
reset()
r1=R(ref='1', value=6@u_Ohm)
r2=R(ref='2', value=12@u_Ohm)
r3=R(ref='3', value=12@u_Ohm)
vs_8=V(ref='s_8', dc_value=8@u_V)
cs_f_vs_4=dc_vs2cs(vs_4, r1)
vs_f_cs_2=dc_cs2vs(cs_2, r3)
(gnd&cs_f_vs_4['n', 'p']|r1) |r2
vs_8['p', 'n']+=r2[2], r3[2]
(gnd & vs_f_cs_2['n', 'p'] & r3)
circ=generate_netlist()
print(circ)
postop_sim=op_results_collect(circ)
#store the results for comparison to pre source transformations
post_tf_res=postop_sim.results_df
post_tf_res
```
Since we stored the results from the pre-transformed circuit, we can attempt an eyeball comparison between the two dataframes. However, since the net names are no longer the same, we can only compare the branch current of vs_8, which remained constant.
```
pre_tf_res
(pre_tf_res.loc[get_skidl_spice_ref(vs_8)]==post_tf_res.loc[get_skidl_spice_ref(vs_8)]).all()
```
Thus we can assume that the circuits are source equivalents of each other. But this book is about cultivating analog design verification, and assuming can yield performance hits or, even worse, the need for a respin. Therefore, DON'T ASSUME; figure out a way to VERIFY via quantifiable answers.
## Internal parameters
ngspice elements (in fact, those of most SPICE engines) have what are called internal parameters. These include most of the setup parameters, such as dc_value and resistance, along with nitty-gritty parameters for the more advanced SPICE simulations we will get to. What we are after for now are the internal values that store simulation results, as alluded to in the non-surefire way of measuring currents. For instance, resistors have an internal parameter for the current flowing through them; the caveat is that it only returns real values in ngspice, which will be an issue when doing AC simulations, but for DC simulations it is a tool we should utilize. Also, at the time of writing, PySpice has a quirk whereby internal values are not retrieved at the same time as the branch currents and net voltages, so to get both, the simulation has to be run twice. The author is not sure whether this is a quirk of ngspice or of PySpice, and will look into it at a later time.
For now, just know that internal parameters have a string call of the form `@<element name>[<internal parameter>]` that is passed to a PySpice simulation object's `.save_internal_parameters` method; the values are then returned in the results super dictionary under the original string call.
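As a concrete illustration of that string format (the helper below is a hypothetical convenience of mine, not part of PySpice), the save keys for a resistor `R1` would be built like so:

```python
# Hypothetical helper for building ngspice internal-parameter save strings
# of the form @<element name>[<internal parameter>] described above.

def internal_param_key(element_name, param):
    return f'@{element_name}[{param}]'

# e.g. the current through and power dissipated in resistor R1:
keys = [internal_param_key('R1', p) for p in ('i', 'p')]
print(keys)  # ['@R1[i]', '@R1[p]']
# Strings like these are what get passed to `.save_internal_parameters(*keys)`
# and later index the simulation results.
```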
```
#%%writefile -a DC_1_Codes.py
#chapter 1 section 2 op_internal_ivp class
# class to get both the branch currents and node voltages,
# along with the internal parameters values for
# DC operating point analysis
class op_internal_ivp():
"""
Class for creating a SPICE simulation of the whole circuit's internal parameters for current, voltage, and power
for dc operating point (.op) simulations. Will only collect internal parameters, not the global voltages and currents
of the circuit
TODO:
Make this inheritable from op_results_collect
"""
def __init__(self, op_sim_circ, display_results=False):
"""
Basic class to get pyspice operating_point (ngspice `.op`) simulation results
for internal parameters for Resistors, Current Source, Voltage Source current, voltage, power
respectively
Args:
op_sim_circ (pspice.Spice.Netlist.Circuit): the Netlist circuit produced
from SKiDl's `generate_netlist()`
display_results (bool; False): option to have the simulation results
stored in `self.results_df` automatically displayed from a jupyter notebook
Returns:
will create a simulation in `self.op_sim`, raw results of dc operating
point simulation will be stored in `self.op_sim_results`, the tablized
results will be stored in pandas dataframe `self.results_df`
TODO:
- add kwargs to the simulator
- add an assertion that only a pyspice netlist circuit obj can
be passed into op_sim_circ
"""
#need to add assertions for op_sim_circ ==pspice.Spice.Netlist.Circuit
#store the circuit internally
self.op_sim_circ=op_sim_circ
#create the sim obj
self.op_sim=self.op_sim_circ.simulator()
#store bool to display results dataframe
self.display_results=display_results
#create the internal parameters to save
self.create_savable_items()
#run the sim for .op for internal parameters and record results
self.sim_and_record()
def create_savable_items(self):
"""
Helper method to create a listing of internal parameters and the table of the results.
Right now only creates savable internal parameters for:
Independent Voltage Sources: current, power
Independent Current Sources: current, voltage, power
Standard Resistors: current, power
VCCS: current, voltage, power
VCVS: current, voltage, power
CCVS: current, voltage, power
CCCS: current, voltage, power
See the ngspice manual, typically chapter 31 "Model and Device Parameters",
for more details about device internal parameters
"""
self.results_df=pd.DataFrame(columns=['Circ_Item', 'Item_Type', 'Value', 'Unit'])
self.results_df.set_index('Circ_Item', drop=True, append=False, inplace=True, verify_integrity=False)
#helper row creation statement
def add_row(index, unit):
self.results_df.at[index, ['Item_Type', 'Unit']]=['internal', unit]
for e in self.op_sim_circ.element_names:
"""
Ref: ngspice documentation chapter 31 (typically): Model and Device Parameters
"""
#resistors
if e[0]=='R':
add_row(f'@{e}[i]', 'A')
add_row(f'@{e}[p]', 'W')
#independent current source
elif e[0]=='I':
add_row(f'@{e}[c]', 'A')
add_row(f'@{e}[v]', 'V')
add_row(f'@{e}[p]', 'W')
#independent voltage source
elif e[0]=='V':
add_row(f'@{e}[i]', 'A')
add_row(f'@{e}[p]', 'W')
#controlled sources
elif e[0] in ['F', 'H', 'G', 'E']:
add_row(f'@{e}[i]', 'A')
add_row(f'@{e}[v]', 'V')
add_row(f'@{e}[p]', 'W')
else:
warnings.warn(f"Circ Element {e} is not set up to have internals read, skipping")
def sim_and_record(self):
"""
run the .op simulation and get the internal values
Args: None
Returns:
`self.internal_opsim_res` stores the raw results of the .op for internal parameters
whereas `self.results_df` stores the pandas dataframe of internal parameters results
TODO: figure out how to do this at the same time as the main node/branch sim;
doing this separately is getting old
"""
save_items=list(self.results_df.index)
self.op_sim.save_internal_parameters(*save_items)
self.internal_opsim_res=self.op_sim.operating_point()
for save in save_items:
self.results_df.at[save, 'Value']=self.internal_opsim_res[save].as_ndarray()[0]
if self.display_results:
print('.op sim internal parameter results')
display(self.results_df)
```
### pre transform internals
```
reset()
vs_4=V(ref='s_4', dc_value=4@u_V)
vs_8=V(ref='s_8', dc_value=8@u_V)
cs_2=I(ref='s_2', dc_value=2@u_A)
r1=R(ref='1', value=6@u_Ohm)
r2=R(ref='2', value=12@u_Ohm)
r3=R(ref='3', value=12@u_Ohm)
(gnd&vs_4['p', 'n']&r1) |r2
vs_8['p', 'n']+=r2[2], r3[2]
(gnd & cs_2 | r3)
circ=generate_netlist()
print(circ)
preop_ivp_sim=op_internal_ivp(circ)
pre_ivp_res=preop_ivp_sim.results_df
pre_ivp_res
```
### post transform internals
```
reset()
r1=R(ref='1', value=6@u_Ohm)
r2=R(ref='2', value=12@u_Ohm)
r3=R(ref='3', value=12@u_Ohm)
vs_8=V(ref='s_8', dc_value=8@u_V)
cs_f_vs_4=dc_vs2cs(vs_4, r1)
vs_f_cs_2=dc_cs2vs(cs_2, r3)
(gnd&cs_f_vs_4['n', 'p']|r1) |r2
vs_8['p', 'n']+=r2[2], r3[2]
(gnd & vs_f_cs_2['n', 'p'] & r3)
circ=generate_netlist()
print(circ)
postop_ivp_sim=op_internal_ivp(circ)
post_ivp_res=postop_ivp_sim.results_df
post_ivp_res
```
### Quantitative comparison
Since our results are stored in pandas dataframes, we can use the power of pandas for data analysis to gain insight into what is going on. Below, we merge the two dataframes side by side for all the elements that remained the same in the circuit pre- and post-transformation, and then follow up with color-coding of which values remained the same between the pre- and post-transformation circuits.
```
pre_post_comp=pd.concat([pre_ivp_res, post_ivp_res], join='inner', axis='columns', keys=['Pre', 'Post'])
pre_post_comp
def color_results(row):
is_equal=(row['Pre']==row['Post']).all()
if is_equal:
return ['background-color: lightgreen']*len(row)
else:
return ['background-color: yellow']*len(row)
pre_post_comp.style.apply(color_results, axis=1)
```
The results show that the part of the network that remained the same has identical currents and powers through its elements, while the sources and resistors involved in the transformations do not. This is expected: the full network presents a different equivalent circuit to each of the transformed sources, and therefore different voltage, current, and power draws on each source.
## Citations:
[1] ALL ABOUT ELECTRONICS. "Source transformation in network analysis," YouTube, Dec 24, 2016. [Video file]. Available: https://youtu.be/FtEU5ZoO-fg. [Accessed: Nov 30, 2020].
# Grid search forecaster
Skforecast library combines grid search strategy with backtesting to identify the combination of lags and hyperparameters that achieve the best prediction performance.
The grid search requires two grids, one with the different lags configuration (`lags_grid`) and the other with the list of hyperparameters to be tested (`param_grid`). The process comprises the following steps:
1. `grid_search_forecaster` creates a copy of the forecaster object and replaces the `lags` argument with the first option appearing in `lags_grid`.
2. The function validates all combinations of hyperparameters presented in `param_grid` by [backtesting](https://joaquinamatrodrigo.github.io/skforecast/latest/user_guides/backtesting.html).
3. The function repeats these two steps until it runs through all the possibilities (lags + hyperparameters).
4. If `return_best = True`, the original forecaster is trained with the best lags and hyperparameters configuration found.
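The steps above amount to an exhaustive search over the cartesian product of `lags_grid` and `param_grid`, scoring each combination by backtesting. Below is a stripped-down sketch of that loop in plain Python; `backtest` is a stand-in scorer of my own, not skforecast's actual backtesting routine:

```python
from itertools import product

def grid_search_sketch(lags_grid, param_grid, backtest):
    """Exhaustively score every (lags, params) combination; return the best.

    `backtest` stands in for skforecast's backtesting routine: it takes
    (lags, params) and returns an error metric to be minimized.
    """
    best = None
    param_combos = [dict(zip(param_grid, vals))
                    for vals in product(*param_grid.values())]
    for lags in lags_grid:                # step 1: each lags option...
        for params in param_combos:       # step 2: ...x each hyperparameter set
            score = backtest(lags, params)
            if best is None or score < best[0]:
                best = (score, lags, params)
    return best                           # step 4 would refit with these

# Dummy scorer just to exercise the loop:
best = grid_search_sketch(
    lags_grid=[3, 10],
    param_grid={'n_estimators': [50, 100], 'max_depth': [5, 10]},
    backtest=lambda lags, p: abs(lags - 10) + abs(p['max_depth'] - 10) / 10,
)
print(best)  # (0.0, 10, {'n_estimators': 50, 'max_depth': 10})
```

In skforecast itself, `return_best=True` then refits the original forecaster with the winning lags and hyperparameters.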
## Libraries
```
# Libraries
# ==============================================================================
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from skforecast.ForecasterAutoreg import ForecasterAutoreg
from skforecast.model_selection import grid_search_forecaster
from sklearn.metrics import mean_squared_error
```
## Data
```
# Download data
# ==============================================================================
url = ('https://raw.githubusercontent.com/JoaquinAmatRodrigo/skforecast/master/data/h2o.csv')
data = pd.read_csv(url, sep=',', header=0, names=['y', 'datetime'])
# Data preprocessing
# ==============================================================================
data['datetime'] = pd.to_datetime(data['datetime'], format='%Y/%m/%d')
data = data.set_index('datetime')
data = data.asfreq('MS')
data = data[['y']]
data = data.sort_index()
# Train-val-test dates
# ==============================================================================
end_train = '2001-01-01 23:59:00'
end_val = '2006-01-01 23:59:00'
print(f"Train dates : {data.index.min()} --- {data.loc[:end_train].index.max()} (n={len(data.loc[:end_train])})")
print(f"Validation dates : {data.loc[end_train:].index.min()} --- {data.loc[:end_val].index.max()} (n={len(data.loc[end_train:end_val])})")
print(f"Test dates : {data.loc[end_val:].index.min()} --- {data.index.max()} (n={len(data.loc[end_val:])})")
# Plot
# ==============================================================================
fig, ax=plt.subplots(figsize=(9, 4))
data.loc[:end_train].plot(ax=ax, label='train')
data.loc[end_train:end_val].plot(ax=ax, label='validation')
data.loc[end_val:].plot(ax=ax, label='test')
ax.legend();
```
## Grid search
```
# Grid search hyperparameter and lags
# ==============================================================================
forecaster = ForecasterAutoreg(
regressor = RandomForestRegressor(random_state=123),
lags = 10 # Placeholder, the value will be overwritten
)
# Lags used as predictors
lags_grid = [3, 10, [1, 2, 3, 20]]
# Regressor hyperparameters
param_grid = {'n_estimators': [50, 100],
'max_depth': [5, 10, 15]}
results_grid = grid_search_forecaster(
forecaster = forecaster,
y = data.loc[:end_val, 'y'],
param_grid = param_grid,
lags_grid = lags_grid,
steps = 12,
refit = True,
metric = 'mean_squared_error',
initial_train_size = len(data.loc[:end_train]),
fixed_train_size = False,
return_best = True,
verbose = False
)
results_grid
forecaster
```
## Grid search with custom metric
Besides the frequently used metrics: mean_squared_error, mean_absolute_error, and mean_absolute_percentage_error, it is possible to use any custom function as long as:
+ It includes the arguments:
+ `y_true`: true values of the series.
+ `y_pred`: predicted values.
+ It returns a numeric value (`float` or `int`).
It allows evaluating the predictive capability of the model in a wide range of scenarios, for example:
+ Consider only certain months, days, hours...
+ Consider only dates that are holidays.
+ Consider only the last step of the predicted horizon.
The following example shows how to forecast a 12-month horizon while using only the last 3 months of each year to calculate the metric of interest.
```
# Grid search hyperparameter and lags with custom metric
# ==============================================================================
def custom_metric(y_true, y_pred):
'''
Calculate the mean squared error using only the predicted values of the last
3 months of the year.
'''
mask = y_true.index.month.isin([10, 11, 12])
metric = mean_squared_error(y_true[mask], y_pred[mask])
return metric
forecaster = ForecasterAutoreg(
regressor = RandomForestRegressor(random_state=123),
lags = 10 # Placeholder, the value will be overwritten
)
# Lags used as predictors
lags_grid = [3, 10, [1, 2, 3, 20]]
# Regressor hyperparameters
param_grid = {'n_estimators': [50, 100],
'max_depth': [5, 10, 15]}
results_grid = grid_search_forecaster(
forecaster = forecaster,
y = data.loc[:end_val, 'y'],
param_grid = param_grid,
lags_grid = lags_grid,
steps = 12,
refit = True,
metric = custom_metric,
initial_train_size = len(data.loc[:end_train]),
fixed_train_size = False,
return_best = True,
verbose = False
)
```
## Hide progress bar
It is possible to hide the progress bar using the following code.
```
from tqdm import tqdm
from functools import partialmethod
tqdm.__init__ = partialmethod(tqdm.__init__, disable=True)
results_grid = grid_search_forecaster(
forecaster = forecaster,
y = data.loc[:end_val, 'y'],
param_grid = param_grid,
lags_grid = lags_grid,
steps = 12,
refit = True,
metric = 'mean_squared_error',
initial_train_size = len(data.loc[:end_train]),
fixed_train_size = False,
return_best = True,
verbose = False
)
%%html
<style>
.jupyter-wrapper .jp-CodeCell .jp-Cell-inputWrapper .jp-InputPrompt {display: none;}
</style>
```
# The IMDb Dataset
The IMDb dataset consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. We use the two-way (positive/negative) class split, and use only sentence-level labels.
```
from IPython.display import display, Markdown
with open('../../doc/env_variables_setup.md', 'r') as fh:
content = fh.read()
display(Markdown(content))
```
## Import Packages
```
import tensorflow as tf
import tensorflow_datasets
from tensorflow.keras.utils import to_categorical
from transformers import (
BertConfig,
BertTokenizer,
TFBertModel,
TFBertForSequenceClassification,
glue_convert_examples_to_features,
glue_processors
)
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.metrics import classification_report
import matplotlib.pyplot as plt
import math
import numpy as np
import os
import time
from datetime import timedelta
import shutil
from datetime import datetime
import pickle
# new
import re
from keras.models import Sequential, load_model
```
## Check configuration
```
print(tf.version.GIT_VERSION, tf.version.VERSION)
print(tf.keras.__version__)
gpus = tf.config.list_physical_devices('GPU')
if len(gpus)>0:
for gpu in gpus:
print('Name:', gpu.name, ' Type:', gpu.device_type)
else:
print('No GPU available !!!!')
```
## Define Paths
```
# note: these need to be specified in the config.sh file
try:
data_dir=os.environ['PATH_DATASETS']
except KeyError:
print('missing PATH_DATASETS')
try:
tensorboard_dir=os.environ['PATH_TENSORBOARD']
except KeyError:
print('missing PATH_TENSORBOARD')
try:
savemodel_dir=os.environ['PATH_SAVE_MODEL']
except KeyError:
print('missing PATH_SAVE_MODEL')
```
## Import local packages
```
import preprocessing.preprocessing as pp
import utils.model_metrics as mm
import importlib
importlib.reload(pp);
importlib.reload(mm);
```
## Loading data from TensorFlow Datasets
```
data, info = tensorflow_datasets.load(name="imdb_reviews",
data_dir=data_dir,
as_supervised=True,
with_info=True)
# IMDb specific:
data_valid = data['test'].take(1000)
# trying to create a true validation data set for after the computation
#data_valid_ext = data['test'].take(2000)
#data_valid = data_valid_ext.take(1000)
```
### Checking basic info from the metadata
```
info
pp.print_info_dataset(info)
```
### Checking basic info from the data
```
data
data.keys()
# only works for glue-compatible datasets
try:
pp.print_info_data(data['train'])
except AttributeError:
print('data format incompatible')
```
## Define parameters of the model
```
# changes: had to eliminate all lines concerning a test data set because we only have train and valid
# define parameters
#BATCH_SIZE_TRAIN = 32
#BATCH_SIZE_TEST = 32
#BATCH_SIZE_VALID = 64
#EPOCH = 2
#TOKENIZER = 'bert-base-multilingual-uncased'
#MAX_LENGTH = 512
# extract parameters
size_train_dataset = info.splits['train'].num_examples
# the size for the validation data set has been manually computed according to the function
# pp.print_info_data because the test set has been manually split above
size_valid_dataset = np.shape(np.array(list(data_valid.as_numpy_iterator())))[0]
number_label = info.features["label"].num_classes
# computed parameters
#STEP_EPOCH_TRAIN = math.ceil(size_train_dataset/BATCH_SIZE_TRAIN)
#STEP_EPOCH_VALID = math.ceil(size_valid_dataset/BATCH_SIZE_VALID)
#print('Dataset size: {:6}/{:6}'.format(size_train_dataset, size_valid_dataset))
#print('Batch size: {:6}/{:6}'.format(BATCH_SIZE_TRAIN, BATCH_SIZE_VALID))
#print('Step per epoch: {:6}/{:6}'.format(STEP_EPOCH_TRAIN, STEP_EPOCH_VALID))
#print('Total number of batch: {:6}/{:6}'.format(STEP_EPOCH_TRAIN*(EPOCH+1), STEP_EPOCH_VALID*(EPOCH+1)))
```
### Additional steps for the IMDb dataset specifically
#### Cleaning
```
def preprocess_reviews(reviews):
#REPLACE_NO_SPACE = re.compile("[.;:!\'?,\"()\[\]]")
REPLACE_WITH_SPACE = re.compile("(<br\s*/><br\s*/>)|(\-)|(\/)")
#ae, oe, ue => only for GERMAN data
#REPLACE_UMLAUT_AE = re.compile("(ae)")
#REPLACE_UMLAUT_OE = re.compile("(oe)")
#REPLACE_UMLAUT_UE = re.compile("(ue)")
#reviews = [REPLACE_NO_SPACE.sub("", line[0].decode("utf-8").lower()) for line in np.array(list(reviews.as_numpy_iterator()))]
reviews = [REPLACE_WITH_SPACE.sub(" ", line[0].decode("utf-8")) for line in np.array(list(reviews.as_numpy_iterator()))]
#reviews = [REPLACE_UMLAUT_AE.sub("ä", line[0]) for line in reviews]
#reviews = [REPLACE_UMLAUT_OE.sub("ö", line[0]) for line in reviews]
#reviews = [REPLACE_UMLAUT_UE.sub("ü", line[0]) for line in reviews]
return reviews
reviews_train_clean = preprocess_reviews(data['train'])
reviews_valid_clean = preprocess_reviews(data_valid)
# calculate the number of characters
x = []
for i in reviews_valid_clean:
x.append(len(i))
sum(x)
# divide into two batches
batch_1 = reviews_valid_clean[:500]
batch_2 = reviews_valid_clean[500:]
```
## Translating the Validation Dataset
```
# do this for 3 examples first
# step 1: save data in the right format (.txt, .tsv or html)
with open('en_batch_2.txt', 'w') as f:
for item in batch_2:
# for item in reviews_valid_clean[:3]:
f.write("%s\n\n\n" % item)
# step 2: upload to storage bucket 1 (os.environ['BUCKET_NAME'])
# gsutil cp /home/vera_luechinger/proj_multilingual_text_classification/notebook/00-Test/en_batch_2.txt gs://os.environ['BUCKET_NAME']/
# step 3: translate in storage and store in bucket 2 (os.environ['BUCKET_NAME']_translation: must be empty before the translation process begins)
# batch translation
from google.cloud import translate
import os
import time

def batch_translate_text(
    input_uri="gs://"+os.environ['BUCKET_NAME']+"/en_batch_2.txt",
    output_uri="gs://"+os.environ['BUCKET_NAME_TRANSLATION']+"/",
    project_id=os.environ['PROJECT_ID']
):
    """Translates a batch of texts on GCS and stores the result in a GCS location."""
    client = translate.TranslationServiceClient()
    location = "us-central1"
    # Supported file types: https://cloud.google.com/translate/docs/supported-formats
    gcs_source = {"input_uri": input_uri}
    input_configs_element = {
        "gcs_source": gcs_source,
        "mime_type": "text/plain"  # Can be "text/plain" or "text/html".
    }
    gcs_destination = {"output_uri_prefix": output_uri}
    output_config = {"gcs_destination": gcs_destination}
    parent = client.location_path(project_id, location)
    # Supported language codes: https://cloud.google.com/translate/docs/language
    start_time = time.time()
    operation = client.batch_translate_text(
        parent=parent,
        source_language_code="en",
        target_language_codes=["fr", "de"],  # Up to 10 language codes here.
        input_configs=[input_configs_element],
        output_config=output_config)
    print(u"Waiting for operation to complete...")
    response = operation.result(180)
    elapsed_time_secs = time.time() - start_time
    print(u"Execution Time: {}".format(elapsed_time_secs))
    print(u"Total Characters: {}".format(response.total_characters))
    print(u"Translated Characters: {}".format(response.translated_characters))

batch_translate_text()
# step 4: save files in the first bucket
#gsutil cp gs://os.environ['BUCKET_NAME']+_translation/os.environ['BUCKET_NAME']_en_batch_2_fr_translations.txt gs://os.environ['BUCKET_NAME']/batch_2/
de_1_dir = "gs://"+os.environ['BUCKET_NAME']+"/batch_1/"+os.environ['BUCKET_NAME']+"_en_batch_1_de_translations.txt"
from google.cloud import storage
#from config import bucketName, localFolder, bucketFolder
storage_client = storage.Client()
bucket = storage_client.get_bucket(os.environ['BUCKET_NAME'])
#bucket
def download_file(bucketName, file, localFolder):
    """Download file from GCP bucket."""
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucketName)
    blob = bucket.blob(file)
    fileName = blob.name.split('/')[-1]
    blob.download_to_filename(localFolder + fileName)
    return f'{fileName} downloaded from bucket.'
# drop this before pushing
download_file(os.environ['BUCKET_NAME'], "batch_1/"+os.environ['BUCKET_NAME']+"_en_batch_1_fr_translations.txt", "/home/vera_luechinger/data/imdb_reviews/")
download_file(os.environ['BUCKET_NAME'], "batch_1/"+os.environ['BUCKET_NAME']+"_en_batch_1_de_translations.txt", "/home/vera_luechinger/data/imdb_reviews/")
download_file(os.environ['BUCKET_NAME'], "batch_2/"+os.environ['BUCKET_NAME']+"_en_batch_2_fr_translations.txt", "/home/vera_luechinger/data/imdb_reviews/")
download_file(os.environ['BUCKET_NAME'], "batch_2/"+os.environ['BUCKET_NAME']+"_en_batch_2_de_translations.txt", "/home/vera_luechinger/data/imdb_reviews/")
print("")
# step 5: get translated files from storage to use in notebook
with open("/home/vera_luechinger/data/imdb_reviews/"+os.environ['BUCKET_NAME']+"_en_batch_1_de_translations.txt", 'r') as file:
    de_1 = file.readlines()
with open("/home/vera_luechinger/data/imdb_reviews/"+os.environ['BUCKET_NAME']+"_en_batch_2_de_translations.txt", 'r') as file:
    de_2 = file.readlines()
with open("/home/vera_luechinger/data/imdb_reviews/"+os.environ['BUCKET_NAME']+"_en_batch_1_fr_translations.txt", 'r') as file:
    fr_1 = file.readlines()
with open("/home/vera_luechinger/data/imdb_reviews/"+os.environ['BUCKET_NAME']+"_en_batch_2_fr_translations.txt", 'r') as file:
    fr_2 = file.readlines()
de = de_1 + de_2
fr = fr_1 + fr_2
de = [item.replace("\n","") for item in de]
fr = [item.replace("\n","") for item in fr]
len(de)
```
# Lecture 30 – Perception, Case Study
## Data 94, Spring 2021
```
from datascience import *
import numpy as np
Table.interactive_plots()
import plotly.express as px
sky = Table.read_table('data/skyscrapers.csv') \
.where('status.current', are.contained_in(['completed', 'under construction'])) \
.select('name', 'location.city', 'location.latitude', 'location.longitude',
'statistics.floors above', 'statistics.height', 'status.completed.year') \
.relabeled(['location.city', 'location.latitude', 'location.longitude',
'statistics.floors above', 'statistics.height', 'status.completed.year'],
['city', 'latitude', 'longitude', 'floors', 'height', 'year']) \
.where('height', are.above(0)) \
.where('floors', are.above(0))
sky
```
## Perception
```
sky.group('city') \
.where('count', are.above_or_equal_to(40)) \
.sort('count', descending = True) \
.barh('city', title = 'Number of Skyscrapers Per City')
# Remember, you're not responsible for the code here.
px.pie(sky.group('city').where('count', are.above_or_equal_to(40)).to_df(),
values = 'count',
names = 'city',
title = 'Number of Skyscrapers Per City (Top 10 Only)'
)
```
## Case Study – Skyscrapers
```
sky.shuffle()
```
### Which cities have the most skyscrapers?
```
sky.group('city') \
.where('count', are.above_or_equal_to(20)) \
.sort('count', descending = True)
sky.group('city') \
.where('count', are.above_or_equal_to(20)) \
.sort('count', descending = True) \
.barh('city', title = 'Number of Skyscrapers Per City (Min. 20)')
```
Do any of the above cities stick out to you?
### What is the distribution of skyscraper heights?
```
sky.column('height').min()
sky.column('height').max()
sky.hist('height', density = False, bins = np.arange(0, 600, 25),
title = 'Distribution of Skyscraper Heights')
```
Let's zoom in a little more.
```
sky.where('height', are.below(300)) \
.hist('height', density = False, bins = np.arange(0, 310, 10),
title = 'Distribution of Skyscraper Heights Below 300m')
```
### What's the distribution of short vs. tall skyscrapers in each city?
```
sky
```
Let's say a skyscraper is "short" if its height is less than or equal to 150 meters; otherwise, it's "tall".
```
def height_cat(height):
    if height <= 150:
        return 'short'
    return 'tall'
sky.apply(height_cat, 'height')
sky = sky.with_columns('height category', sky.apply(height_cat, 'height'))
sky
```
We can use `pivot` to draw a bar chart of the number of short and tall skyscrapers per city.
### [Quick Check 1](https://edstem.org/us/courses/3251/lessons/12407/slides/60647)
Fill in the blanks to create the table `short_and_tall`, which has two columns, `'short'` and `'tall'`, and one row for each city with **at least 5 short and 5 tall skyscrapers**. The first five rows of `short_and_tall` are shown below.
| city | short | tall |
|--------------:|--------:|-------:|
| New York City | 341 | 217 |
| Chicago | 268 | 108 |
| Miami | 58 | 49 |
| Houston | 34 | 27 |
| San Francisco | 43 | 22 |
```py
short_and_tall = sky.pivot(__(a)__, __(b)__) \
.where(__(c)__, are.above_or_equal_to(5)) \
.where('tall', are.above_or_equal_to(5)) \
.sort('tall', descending = True)
```
```
# short_and_tall = sky.pivot(__(a)__, __(b)__) \
# .where(__(c)__, are.above_or_equal_to(5)) \
# .where('tall', are.above_or_equal_to(5)) \
# .sort('tall', descending = True)
# short_and_tall.barh('city', title = 'Number of Short and Tall Skyscrapers Per City (Min. 5 Each)')
```
It seems like most cities have roughly twice as many "short" skyscrapers as they do "tall" skyscrapers.
What if we want to look at the distribution of the number of floors per skyscraper, separated by height category?
```
sky.hist('floors', group = 'height category',
density = False,
bins = np.arange(0, 150, 5),
title = 'Distribution of Number of Floors Per Skyscraper')
```
Since the two histograms overlap, some short skyscrapers (those 150 m or shorter) have more floors than some tall skyscrapers!
### What's the relationship between height and number of floors?
```
sky
sky.scatter('height', 'floors',
s = 30,
group = 'height category',
title = 'Number of Floors vs. Height',
yaxis_title = 'Number of Floors')
sky.where('height', are.above(300)) \
.scatter('height', 'floors',
s = 50,
labels = 'name',
title = 'Number of Floors vs. Height (Min. 300m)')
```
### How many skyscrapers were built per year?
```
sky
sky.group('year')
```
This is obviously an error in our data.
```
sky.where('year', 0)
sky.where('year', are.not_equal_to(0)) \
.group('year') \
.plot('year', title = 'Number of Skyscrapers Built Per Year')
```
What if we want to look at the number of skyscrapers per year built in different cities?
```
sky.where('city', are.contained_in(['New York City', 'Chicago'])) \
.where('year', are.not_equal_to(0)) \
.pivot('city', 'year')
sky.where('city', are.contained_in(['New York City', 'Chicago'])) \
.where('year', are.not_equal_to(0)) \
.pivot('city', 'year') \
.plot('year',
title = 'Number of Skyscrapers Built Per Year in NYC and Chicago')
```
### Where on a map are most skyscrapers located?
```
sky
Circle.map_table(sky.select('latitude', 'longitude'),
line_color = None,
fill_opacity = 0.65,
area = 75,
color = 'orange')
```
Let's look at a map of tall skyscrapers in New York City.
```
ny_tall = sky.where('city', 'New York City') \
.where('height category', 'tall') \
.select('latitude', 'longitude', 'name', 'height') \
.relabeled(['name', 'height'], ['labels', 'color_scale'])
ny_tall
Circle.map_table(ny_tall,
line_color = None,
fill_opacity = 0.65,
area = 150,
color_scale = None)
```
It seems like most skyscrapers in NYC are either in the financial district or in Midtown. The circles for One World Trade Center and the Empire State Building are bright.
Lastly, what if we want to look at where short and tall skyscrapers are throughout the country?
```
sky
```
There are two solutions here.
1. Create a function that takes in `'short'` or `'tall'` and returns the desired color. (We did this in Lecture 28.)
2. Create a table with two columns, one with `'short'` and `'tall'` and the other with the desired colors, and join this table with `sky`.
We will use the second approach here.
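For contrast, the first approach would look something like the sketch below — `cat_to_color` is a hypothetical helper name, not code from Lecture 28:

```python
def cat_to_color(category):
    """Map a height category ('short'/'tall') to the marker color for the map."""
    if category == 'short':
        return 'orange'
    return 'green'

# With the datascience library, the color column would then be built with
# sky.apply(cat_to_color, 'height category') and passed to Circle.map_table.
print(cat_to_color('short'), cat_to_color('tall'))
```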
```
sky_to_color = Table().with_columns(
'category', np.array(['short', 'tall']),
'colors', np.array(['orange', 'green'])
)
sky_to_color
sky_with_colors = sky.join('height category', sky_to_color, 'category') \
.select('latitude', 'longitude', 'colors')
sky_with_colors
Circle.map_table(sky_with_colors,
line_color = None,
fill_opacity = 0.7)
```
While there seem to be short skyscrapers (orange) throughout the country, tall skyscrapers generally seem to be concentrated in larger cities.
# [Introductory applied machine learning (INFR10069)](https://www.learn.ed.ac.uk/webapps/blackboard/execute/content/blankPage?cmd=view&content_id=_2651677_1&course_id=_53633_1)
# Lab 5: Neural Networks
*by [James Owers](https://jamesowers.github.io/), University of Edinburgh 2017*
1. [Introduction](#Introduction)
* [Lab Outline](#Lab-Outline)
* [The Data](#The-Data)
1. [Part 1 - Introducing the Neural Network Model](#Part-1---Introducing-the-Neural-Network-Model)
* [Resources to Watch and Read pt. 1](#Resources-to-Watch-and-Read-pt.-1)
* [Model Design](#Model-Design)
* [The Cost Space](#The-Cost-Space)
1. [Part 2 - Fitting the Model & Optimisation](#Part-2---Fitting-the-Model-&-Optimisation)
* [Resources to Watch and Read pt. 2](#Resources-to-Watch-and-Read-pt.-2)
* [Finding the Best Parameters](#Finding-the-Best-Parameters)
* [Gradient Descent](#Gradient-Descent)
* [Backpropagation](#Backpropagation)
1. [Part 3 - Implementation From Scratch](#Part-3---Implementation-From-Scratch!)
1. [Part 4 - Implementation With Sklearn](#Part-4---Implementation-with-Sklearn)
1. [Moar?!](#Please-sir...I-want-some-more)
## Import packages
```
# https://docs.python.org/2/library/__future__.html
# make printing and division act like python 3
from __future__ import division, print_function
# General
import sys
import os
import copy
from IPython.display import Image, HTML
# Data structures
import numpy as np
import pandas as pd
# Modelling
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from scipy.optimize import check_grad
# Plotting
import matplotlib.pyplot as plt
import seaborn as sns
# Local module adjacent to this notebook
import iaml
from iaml.data import load_letters
# http://ipython.readthedocs.io/en/stable/interactive/magics.html
%matplotlib inline
```
## Introduction
This lab:
1. introduces a simple neural network model in a supervised learning setting
1. provides impetus to understand the fitting procedure of that, and other networks
1. encourages you to implement a model from scratch
1. models the same problem with the sklearn package
1. makes you think about what you've done!
It does not discuss in detail:
1. any of the plethora of different activation functions you can use e.g. RELUs, SELUs, Tanh, ...
1. how to initialise the parameters and why that matters
1. issues with the fitting process e.g. local optima, and how to avoid them e.g. learning rate schedulers, momentum, RMSProp, Adam, cyclic learning rates
1. issues with model complexity e.g. overfitting, and solutions such as dropout, regularisation, or using [shedloads of data](https://what-if.xkcd.com/63/)
1. other tricks for speeding up and stabilising fitting such as batch sizes, weight norm, layer norm
1. deep networks and their tricks like skip connections, pooling, convolutions
1. other, more complex architectures like CNNs, RNNs, LSTMs, GANs, etc.
1. many, many, MANY other things (that probably were published, like, [yesterday](https://arxiv.org/abs/1711.04340v1))
However, if you understand what is in this notebook well, **you will have the ability to understand [all of these things](https://i.imgflip.com/1zn8p9.jpg)**.
### Lab outline
I provide you with a function that creates data then link you to some excellent resources to learn the basics. These resources are superb, short, and free. I highly, highly recommend setting aside a couple of hours to give them a good watch/read and, at the very least, use them for reference.
After you have had a crack at the problems, I'll release the solutions. The solutions, particularly to part 3, walk you through the process of coding a simple neural network in detail.
Parts 3 & 4 are practical, parts 1 & 2 are links to external resources to read. Whilst I recommend you soak up some context first with 1 & 2, feel free to jump in at the deep end and get your hands dirty with part 3 or 4.
### The Data
Throughout this lab we are going to be using a simple classification example: the TC classification problem (not to be confused with the real [TC](https://www.youtube.com/watch?v=NToYkBYezZA)). This is a small toy problem where we, initially, try to distinguish between 3x3 grids that look like Ts and Cs. Let's create the dataset and have a look...
I have written a function `load_letters()` to generate synthetic data. For now, you will use the data generated below, but later you will have the opportunity to play with generating different data if you like. The function is located in the `iaml` module adjacent to this notebook - feel free to check out the code but I advise you **do not edit it**. Run (and don't edit) the next few cells to create and observe the data.
```
bounds = [-1, 1]
X, y, y_labels = load_letters(categories=['T', 'C'],
                              num_obs=[50, 50],
                              bounds=bounds,
                              beta_params=[[1, 8], [8, 1]],
                              shuffle=True,
                              random_state=42)
```
Let's print the data (I'm just creating a Pandas DataFrame for display; I probably won't use this object again)
```
pd.set_option("max_rows", 10)
df = pd.DataFrame(
np.hstack(
[np.around(X,2),
y[:, np.newaxis],
np.array([y_labels[ii] for ii in y])[:, np.newaxis]
]
),
columns = ['x{}'.format(ii) for ii in range(9)] + ['Class (numeric)', 'Class Label']
)
df
pd.reset_option("max_rows")
```
The data are arranged as vectors for your convenience, but they're really `3 x 3` images. Here's a function to plot them.
```
def plot_grid(x, shape=None, **heatmap_params):
    """Function for reshaping and plotting vector data.
    If shape not given, assumed square.
    """
    if shape is None:
        width = int(np.sqrt(len(x)))
        if width == np.sqrt(len(x)):
            shape = (width, width)
        else:
            raise ValueError('Data not square, supply shape argument')
    sns.heatmap(x.reshape(shape), annot=True, **heatmap_params)

for ii in range(3):
    plt.figure()
    plot_grid(X[ii], vmin=bounds[0], vmax=bounds[1], cmap='Greys')
    plt.title('Observation {}: Class = {} (numeric label {})'.format(ii, y_labels[y[ii]], y[ii]))
    plt.show()
```
Finally, let's make the train and test split. Again, don't alter this code.
```
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.5, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(
X_train, y_train, test_size=0.33, random_state=42)
[dd.shape for dd in [X_train, X_valid, X_test, y_train, y_valid, y_test]]
```
## Part 1 - Introducing the Neural Network Model
### Resources to Watch and Read pt. 1
**Reading/watching time:** 30 minutes
First, watch this video from 3 Blue 1 Brown: [But what *is* a Neural Network? | Deep learning, chapter 1](https://www.youtube.com/watch?v=aircAruvnKk)
If you prefer reading, try 2 sections of Nielsen's Book Chapter 1:
* [Sigmoid Neurons](http://neuralnetworksanddeeplearning.com/chap1.html#sigmoid_neurons)
* and [The Architecture of Neural Networks](http://neuralnetworksanddeeplearning.com/chap1.html#the_architecture_of_neural_networks)
### Model Design
Just so as there's something in this notebook to quickly reference - here's a nice illustration of what's going on in a neural net. Within the calculation of the $z$'s you'll see the learned **parameters**: $w$'s and $b$'s - these are the weights and biases respectively. *N.B. I omit the bias $b$ parameters in the Part 3 implementation.* The functions $g$ are the activation functions.
<img src="img/neural-net.png">
### The Cost Space
When we talk about the cost space, loss$^*$ space, or cost surface, we are talking about a function that changes with respect to the parameters. This function determines how well the network is performing - a low cost is good, a high cost is bad. A simple example for two parameters is shown below. **Our goal is to update the parameters such that we find the global minimum of the cost function.**
$^*$ 'loss' and 'cost' are interchangeable terms - you'll see them both around but I try to stick to 'cost'!
<img src="img/cost_space.png">
N.B. The cost function is often referred to with different letters e.g. $J(w)$, $C(\theta)$, $\mathcal{L}(x)$, and $E(w)$
## Part 2 - Fitting the Model & Optimisation
### Resources to Watch and Read pt. 2
**Watching/reading time:** ~1 hour
First, watch these two videos from 3 Blue 1 Brown:
1. [Gradient descent, how neural networks learn | Deep learning, chapter 2](https://www.youtube.com/watch?v=IHZwWFHWa-w)
2. [What is backpropagation and what is it actually doing? | Deep learning, chapter 3](https://www.youtube.com/watch?v=Ilg3gGewQ5U)
This will take you just over half an hour (if you watch at 1x speed). They are really excellent and well worth the time investment.
Again, if you prefer reading try Nielsen's section [Learning with Gradient Descent](http://neuralnetworksanddeeplearning.com/chap1.html#learning_with_gradient_descent)
### Finding the Best Parameters
So, we've got a function, let's call it $C(\theta)$, that puts a number on how well the neural network is doing. We provide the function with the parameters $\theta$ and it spits out the cost$^*$. We could just randomly choose values for $\theta$ and select the ones that result in the best cost...but that might take a long time. We'd also need to define a way to randomly select parameters. What if the best parameter setting is very unlikely to be selected?
**Calculus to the rescue!** The cost $C(\theta)$ is a function and, whilst we can't see the surface without evaluating it everywhere (expensive!), we can calculate the derivative with respect to the parameters $\frac{\partial C(\theta)}{\partial \theta}$. The derivative **tells you how the function value changes if you change $\theta$**.
For example, imagine $\theta$ is 1D and I tell you that $\frac{\partial C(\theta)}{\partial \theta} = 10$ everywhere. This means that if I increase $\theta$ by 2, the cost function will go up by 20. Which way will you update $\theta$? You want to *decrease* the cost, so you would want to *decrease* $\theta$ by some amount.
The only thing we need to do is choose a cost function $C(\theta)$ that has a derivative function $\frac{\partial C(\theta)}{\partial \theta}$...and that is easy!
$^*$It's much easier if you imagine $\theta$ as just one number to start with, but the maths is basically the same as $\theta$ becomes a vector (or matrix) of numbers.
### Gradient Descent
So how do we actually update the parameters?! All gradient-based methods update the parameters in the opposite direction to the gradient; you always try to take a step 'downhill'. Here's the formula:
$$
\theta \leftarrow \theta - \eta \frac{\partial C(\theta)}{\partial \theta}
$$
where "$\leftarrow$" means "is updated to", and $\eta$ is the "learning rate" - a hyperparameter you can choose. If you increase $\eta$ you make bigger updates to $\theta$, and vice versa.
There are many more complicated ways to update the parameters using the gradient of the cost function, but they all have this same starting point.
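As a concrete sketch of the update rule (using a toy 1D cost $C(\theta)=\theta^2$, whose minimum is at $\theta=0$; the function names are illustrative):

```python
def gradient_descent_step(theta, grad, eta):
    """One vanilla update: theta <- theta - eta * dC/dtheta."""
    return theta - eta * grad(theta)

# Toy cost C(theta) = theta**2, so dC/dtheta = 2 * theta; minimum at theta = 0.
theta = 5.0
for _ in range(100):
    theta = gradient_descent_step(theta, lambda t: 2 * t, eta=0.1)
print(abs(theta) < 1e-6)  # True -- theta has converged towards 0
```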
Below is an example cost surface. A few things to note:
* The axes should be labelled $\theta_0$ (1, -1.5) and $\theta_1$ (-1, 1) on the 'flat' axes, and $C(\theta)$ (-4, 4) on the vertical axis
* The surface is shown - we don't have direct access to this in reality. To show it, the creator has queried the cost function *at every [$\theta_0$, $\theta_1$] location* and plotted it
* The animated balls rolling along the surface are different gradient descent algorithms - each frame of the GIF shows one update. The equation shown above is SGD - the GIF highlights a potential issue with the algorithm!
<img src="https://i.imgur.com/2dKCQHh.gif">
Visualisation by [Alec Radford](https://blog.openai.com/tag/alec-radford/), summarised excellently in [this blog post](http://ruder.io/optimizing-gradient-descent/).
### Backpropagation
**Reading/watching time:** 1 hour
Right...it's time for some derivatives. If you've been liking the videos - go ahead and watch the next in the series:
1. [Backpropagation calculus | Appendix to deep learning chapter 3](https://www.youtube.com/watch?v=tIeHLnjs5U8)
If you have time, I recommend now having a crack at reading half of [Nielsen Chapter 2](http://neuralnetworksanddeeplearning.com/chap2.html), up to and including the section entitled [The Backpropagation Algorithm](http://neuralnetworksanddeeplearning.com/chap2.html#the_backpropagation_algorithm).
I'm just going to write out some derivatives you're going to find useful for Part 3 below:
$$
\begin{align}
z^{(L)} &= W^{(L)}a^{(L-1)} \\
\frac{\partial z^{(L)}}{\partial W} &= a^{(L-1)}
\end{align}
$$
$$
\begin{align}
\text{linear}[z] &= z \\
\frac{\partial \text{linear}[z]}{\partial z} &= 1 \\
\end{align}
$$
$$
\begin{align}
\text{sigmoid}[z] = \sigma[z] &= \frac{1}{1 + e^{-z}} = \frac{e^{z}}{e^{z} + 1}\\
\frac{\partial \sigma[z]}{\partial z} &= \frac{e^{z}}{e^{z} + 1} - (\frac{e^{z}}{e^{z} + 1})^2 \\
&= \frac{e^{z}}{e^{z} + 1} ( 1 - \frac{e^{z}}{e^{z} + 1} ) \\
&= \sigma[z] (1 - \sigma[z])
\end{align}
$$
$$
\begin{align}
\text{crossentropy}[y, a] = C[y, a] &= - \frac{1}{N} \sum_{i=1}^N \left[ y_i \log a_i + (1-y_i)\log(1-a_i) \right] \\
\frac{\partial C[y_i, a_i]}{\partial a_i} &= \frac{1 - y_i}{1 - a_i} - \frac{y_i}{a_i}
\end{align}
$$
And finally, this is all backpropagation really is...
$$
\begin{align}
\frac{\partial C[y_i, a_i]}{\partial w_j} &= \frac{\partial a_i}{\partial w_j}\frac{\partial C[y_i, a_i]}{\partial a_i}\\
&= \frac{\partial z_k}{\partial w_j}\frac{\partial a_i}{\partial z_k}\frac{\partial C[y_i, a_i]}{\partial a_i}\\
\end{align}
$$
Challenge: derive these yourself.
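You can sanity-check the sigmoid derivative above numerically — a quick sketch (the helper names are mine):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    # d(sigma)/dz = sigma(z) * (1 - sigma(z)), as derived above
    return sigmoid(z) * (1.0 - sigmoid(z))

# Compare the analytic derivative against a central finite difference
z, eps = 0.3, 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
print(abs(numeric - sigmoid_grad(z)) < 1e-8)  # True
```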
#### Reading extension
For more on gradient based optimisers [check out this blog post](http://ruder.io/optimizing-gradient-descent/)
For another look at backpropagation - try [Christopher Olah's blog](http://colah.github.io/posts/2015-08-Backprop/)
## Part 3 - Implementation From Scratch!
### ========== Question 3.1 ==========
First things first: **don't get stuck on this**. I recommend you attempt this question for an hour and, if you don't get anywhere, move on to Question 3.2. You can even move straight on to Part 4. It's exactly the same problem addressed here in 3.1, but using sklearn instead of coding it yourself.
#### Model Specification
<img src="img/network_design.png" width="50%">
We are going to fit a very small neural network to classify the TC data. Here is the specification of the model:
1. Input of size 9
1. Hidden layer of size 3
* Linear activation function
1. Output layer of size 1
* Logistic activation function
As for the **cost function**: use Cross-Entropy. However, if you're getting bogged down with derivatives, feel free to try squared error to start with (this is what Nielsen and 3 Blue 1 Brown start with in their tutorials). Squared error is [not necessarily the right cost function to use](https://jamesmccaffrey.wordpress.com/2013/11/05/why-you-should-use-cross-entropy-error-instead-of-classification-error-or-mean-squared-error-for-neural-network-classifier-training/) but it will still work!
For a given input vector $x$, we can predict an output probability $a^{(2)}$ (where the $^{(2)}$ indicates the layer number, *not a power* - I'm following 3 Blue 1 Brown notation as best I can) using the following formula:
$$
\begin{align}
a^{(2)} &= f^{(2)}[z^{(2)}] \\
&= f^{(2)}[W^{(2)}a^{(1)}] \\
&= f^{(2)}[W^{(2)}f^{(1)}[z^{(1)}]] \\
&= f^{(2)}[W^{(2)}f^{(1)}[W^{(1)}a^{(0)}]] \\
&= f^{(2)}[W^{(2)}f^{(1)}[W^{(1)}x]] \\
&= \sigma[W^{(2)}(W^{(1)}x)]
\end{align}
$$
where:
* $f^{(2)}$ is the activation function of the output layer (a sigmoid function $\sigma[]$)
* $f^{(1)}$ is the activation function of the hidden layer (the identity - 'linear activation')
* $W^{(2)}$ and $W^{(1)}$ are the parameters to learn
* $a^{(L)} = f^{(L)}[z^{(L)}]$ are the activations **exiting** layer $^{(L)}$
* $z^{(L)} = W^{(L)}a^{(L-1)}$ is the pre-activation weighted sum calculated **within** layer $^{(L)}$
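Putting the equations above into code, a minimal forward pass for this architecture might look like the following sketch (the function names and random initialisation are my own, not a prescribed implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, W2):
    """Forward pass for the 9 -> 3 -> 1 network: linear hidden layer, sigmoid output."""
    a1 = W1 @ x            # z1 = W1 x; linear activation, so a1 = z1, shape (3,)
    a2 = sigmoid(W2 @ a1)  # output probability, shape (1,)
    return a2

rng = np.random.default_rng(42)
W1 = rng.normal(size=(3, 9))
W2 = rng.normal(size=(1, 3))
p = forward(rng.normal(size=9), W1, W2)
print(p.shape, bool(0 < p[0] < 1))  # (1,) True
```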
The formula for the Cross-Entropy cost function is:
$$
C(a) = - \frac{1}{N} \sum_{i=1}^N \left[ y_i \log a_i + (1-y_i)\log(1-a_i) \right]
$$
Notice how, for each observation $i$, only one of the two terms inside the sum is non-zero, because $y_i$ is only ever 0 or 1. In our case, $N$ is the number of data observations in the dataset.
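A direct transcription of this cost as a sketch (the `eps` clipping guard is my addition, to avoid `log(0)`):

```python
import numpy as np

def cross_entropy(y, a, eps=1e-12):
    """Mean binary cross-entropy for labels y in {0, 1} and predicted probabilities a."""
    a = np.clip(a, eps, 1 - eps)  # guard against log(0)
    return -np.mean(y * np.log(a) + (1 - y) * np.log(1 - a))

y = np.array([1.0, 0.0, 1.0])
good = cross_entropy(y, np.array([0.9, 0.1, 0.8]))
bad = cross_entropy(y, np.array([0.5, 0.5, 0.5]))
print(good < bad)  # True -- better predictions give a lower cost
```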
##### Parameters
The parameters of the model are two matrices:
1. $W^{(1)}$ - $3 \times 9$ matrix
    * used within the hidden layer (the $1^{st}$ layer) to get $z^{(1)} = W^{(1)}x$ for some $9 \times 1$ input vector $x$. $z^{(1)}$ is thus $3 \times 1$.
1. $W^{(2)}$ - $1 \times 3$ matrix
    * used within the output layer (the $2^{nd}$ layer) to get $z^{(2)} = W^{(2)}a^{(1)}$ for some $3 \times 1$ input vector $a^{(1)}$. $z^{(2)}$ is thus $1 \times 1$.
**Note that I'm not asking you to fit *bias parameters*.**
You'll often see parameters referred to as $\theta$; it's a catch-all term. In our case it's just a list of all the weights, $\theta = [W^{(1)}, W^{(2)}]$. **We have 3 x 9 + 1 x 3 = 30 parameters to learn in total.**
##### Advice
You can use any of the equations and code I've given you or linked you to in this lab but **you do not have to!** You're free to code as you please. Personally, since this is a simple example, I did not do anything fancy (I didn't create any objects with methods and attributes). I simply:
* created a list containing the two parameter matrices `theta = [W1, W2]`
* created a function to do prediction (the forward pass)
* created a function to do the backward pass (updating the weights)
* This is the tricky bit - I coded functions that are the [relevant derivatives](#Backpropagation), and wrote code to iteratively pass back the 'deltas' - (I think Nielsen's equations [here](http://neuralnetworksanddeeplearning.com/chap2.html#the_backpropagation_algorithm) are very useful)
* wrote a training loop which called these two main functions
* each epoch calls the forward pass to predict, then the backward pass to update the parameters.
When the training was finished, my "model" was simply the parameters I had fitted, along with the 'forward pass' function - a function which uses those weights to predict a probability for any input data.
**You do not have to code it up like me**, you can do it however you like! The point of this part is for you to explore, code up all the equations, understand how to calculate the loss, and how to use that loss to update the parameters of the model by backpropagation.
**Debugging**: You're probably going to have issues, particularly in the backprop section. You are welcome to make use of the `scipy.optimize.check_grad()` function. It takes a function and a second function that is (supposed to be) its derivative, and returns a measure of how far your analytic gradient is from a numerical estimate.
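For example, `check_grad` on a toy cost whose gradient we know exactly (purely illustrative, not the lab's cost function):

```python
import numpy as np
from scipy.optimize import check_grad

def cost(theta):
    return np.sum(theta ** 2)

def cost_grad(theta):
    return 2 * theta

# check_grad returns the norm of the difference between the analytic
# gradient and a finite-difference estimate; near zero means they agree.
err = check_grad(cost, cost_grad, np.array([1.0, -2.0, 0.5]))
print(err < 1e-5)  # True
```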
If you didn't watch it already, now is a great time to take 10 minutes and watch [Backpropagation calculus | Appendix to deep learning chapter 3](https://www.youtube.com/watch?v=tIeHLnjs5U8)
#### ===== What you actually need to do for this question! =====
Write a training loop which uses gradient descent to learn the parameters. Each iteration of the loop is called an **epoch**. Run your code for *no more than 100 epochs*. You should be able to achieve 100% accuracy on this problem.
In this case, for simplicity, you may initialise the weights to be samples from a normal distribution mean 0 variance 1, but please note that this [is not necessarily good practice](https://intoli.com/blog/neural-network-initialization/)!
**Do not code up a grid search for the learning rate hyperparameter**. You may instead play with the learning rate manually until you are happy. Try small values first like 0.0001 (if your backprop code is correct you **should** see your cost decreasing every epoch). Since this problem is so simple, a range of values should work. Again, with real data, you *must* do a search over hyperparameters, but here we are focussed on *coding* a working model.
To test whether or not what you have written has worked, please output the following:
1. After the training loop:
1. plot a graph of training and validation loss against epoch number
1. print or plot the final parameters you have learned using a Hinton diagram - feel free to use [code you can find online](http://bfy.tw/F74s)
1. pick one weight parameter and produce a plot of its value against epoch number
* Extension: do that for all the weights **leaving one specific input node** (i.e. the weights for one pixel of the input data)
1. use your model to:
1. print a few of the validation data examples and their predicted probabilities
1. print the output for a T and C with no noise (you can make that input data yourself)
1. print the output of a few random binary vectors i.e. 9x1 vectors of only 0s and 1s (again, you can make that input data yourself)
1. Within the training loop:
1. print the training and validation crossentropy loss **and** percentage accuracy every epoch
1. save the value of the training and validation losses for every epoch [for the plot after the loop]
1. save the value of a weight parameter of your choice [for the plot after the loop]
#### ===== Example outputs =====
Below I give you some examples of what I'd like you to produce. **I produced these using a learning rate of 0.003, 100 epochs, and weights initialised with N(0,1) with a random seed of 42**. I found that you could learn faster i.e. you can use a larger learning rate, but I wanted to make smooth plots for you.
You don't need to produce plots exactly like this, you can do them how you like, but try and display the same information. You can also use my plots for checking (if you use the same settings as me).
##### 1A
<img src="img/cost_per_epoch.png">
##### 1B
<img src="img/hinton_W1.png">
<img src="img/hinton_W2.png">
##### 1C
<img src="img/W1_x4__per_epoch.png">
##### 1D
<img src="img/predict_valid_0.png">
<img src="img/predict_valid_No noise T.png">
<img src="img/predict_valid_No noise C.png">
<img src="img/predict_valid_N(0, 1) sample 1.png">
<img src="img/predict_valid_N(0, 1) sample 2.png">
##### 2A
<img src="img/training_log.png">
```
# Your code goes here
```
### ========== Question 3.2 ==========
Did you need a network this large to do this classification task? Give the values for the parameters of a network with no hidden layers, one output node, and an output activation function of a sigmoid that would get 100% accuracy. This network only has 9 parameters.
*Your answer goes here*
### ========== Question 3.3 ==========
You should recognise the model described in question 3.2. What is it?
*Your answer goes here*
### ========== Question 3.4 ==========
Why did I create input data, `X`, that was between [-1, 1] i.e. why wasn't it between [0, 1] like normal?! Would the model specified in question 3.1 above have worked if `X` was in [0, 1]? Explain why or why not.
*Hint: if you're stuck, you can try it out by generating some new data and trying to fit it.*
*Your answer goes here*
### ========== Question 3.5 [EXTENSION] ==========
Create a dataset which makes the problem harder. Have a look at the dataset generation code. You can use the arguments to create data with:
* more letters (make the problem a multiclass classification)
    * You'll need to implement the multiclass version of the sigmoid for the output activation function - [the softmax](https://en.wikipedia.org/wiki/Softmax_function) (and of course its derivative)
* more noise on the data
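As a starting point for the softmax extension, here is a minimal plain-Python sketch (deliberately not tied to any array library) of a numerically stable softmax - subtracting the max logit before exponentiating avoids overflow. A handy fact for the derivative: combined with cross-entropy loss, the gradient with respect to the logits simplifies to `p - y`.

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)                            # subtract the max to avoid overflow
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)  # approximately [0.659, 0.242, 0.099]; sums to 1
```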
Some other things you could implement:
* include rotated letters in the data
* make larger data (bigger than 3x3)
* make the letters non-centred e.g. 5x5 data with 3x3 letters in 1 of 9 different places
You'll probably need to adapt the code you wrote in 3.1, but you can probably copy and paste most of it. For an additional challenge: introduce [bias parameters](http://neuralnetworksanddeeplearning.com/chap1.html) and create your `X` data in range [0, 1] (i.e. set the bounds argument to [0, 1])...
Some other things to try if you get code happy:
* Implement stochastic gradient descent updates (updating parameters every training example, as opposed to every epoch) - tip: randomise data order each epoch
* Implement batch gradient descent updates - tip: randomise data order each epoch
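The "randomise data order each epoch" tip can be sketched as follows - `update` here is a placeholder for whatever gradient step your own code makes on a single example:

```python
import random

def update(example):
    pass  # placeholder: one gradient step on a single (x, y) pair

data = list(range(10))             # stand-in for (x, y) training pairs
for epoch in range(3):
    order = list(range(len(data)))
    random.shuffle(order)          # fresh random order every epoch
    for i in order:
        update(data[i])            # per-example (stochastic) update
```

For batch gradient descent, slice `order` into fixed-size chunks and call `update` once per chunk instead of once per example.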
**Requirements**:
1. Describe the modelling problem and your input data. Plot some examples of the data
1. Write down the model specification (I should be able to reproduce your model with this description):
    * number of nodes in each layer
    * a description of the parameters to learn (and a total number of parameters)
    * the activation functions used for each layer
    * cost function used
1. All the outputs asked for in Question 3.1: loss per epoch plot, final parameters, a weight against epoch plot, and example predictions
*Your answer goes here*
```
# Your code goes here
```
## Part 4 - Implementation with Sklearn
### ========== Question 4.1 ==========
If you did Question 3.1, this should be a breeze! Use the same data and perform the same modelling task. This time you can use Sklearn's Neural Network object [MLPClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html#sklearn.neural_network.MLPClassifier).
Before you begin, read the [introduction](http://scikit-learn.org/stable/modules/neural_networks_supervised.html) (sections 1.17.1 and 1.17.2 at a minimum, 1.17.5, 1.17.6, 1.17.7 are recommended).
```
# Your code goes here
```
### ========== Question 4.2 ==========
The learned parameters are stored in the fitted sklearn [MLPClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html#sklearn.neural_network.MLPClassifier) object **as two separate attributes**.
1. Print the parameters learned by your fitted model
1. Print the total number of parameters learned
Look at the number of parameters described in question 3.1 (you do not need to have done question 3.1 - just read its description). Below the code:
1. Explain why the number of parameters learned by sklearn is different from the number specified in 3.1
```
# Your code goes here
```
*Your answer goes here*
# [Please sir...I want some more](https://www.youtube.com/watch?v=Ex2r86G0sdc)
Well done, you successfully covered the basics of Neural Networks!
If you enjoyed this lab, you'll love another course @ Edinburgh: [Machine Learning Practical](https://github.com/CSTR-Edinburgh/mlpractical). Check it out.
### Next steps
The first thing to do, if you haven't already, is do the extension question 3.5. **In particular, you should implement bias parameters in your model code**.
Next, go back to the very top of the notebook where I detail things I will not cover. Pick some words you don't understand (perhaps along with the keyword 'example' or 'introduction') and have fun reading/watching some tutorials about them online. Code up what you have learned; if you can code it up without peeking, you know you have understood it very well indeed. Another good "starter for 10" google is "a review of neural networks for [images|text|music|bat detection|captioning images|generation|...]".
Here are some things that you might find fun to read:
* [Visualising networks learning](http://playground.tensorflow.org/#activation=tanh&batchSize=10&dataset=circle®Dataset=reg-plane&learningRate=0.03®ularizationRate=0&noise=5&networkShape=3&seed=0.42978&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false)
* [Trying to understand what features are learned by Deep Nets](https://distill.pub/2017/feature-visualization/)
* [Modelling sound waves](https://deepmind.com/blog/wavenet-generative-model-raw-audio/)
* ...and using that to [encode instruments](https://magenta.tensorflow.org/nsynth)
* An [Introduction to LSTMs](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) and their [unreasonable effectiveness](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)
* How to encode the entire meaning of a word [in a few numbers](http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/)
* [Convolutions for text data?!](http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/)
### Learning resources
Also:
* [there](http://ufldl.stanford.edu/tutorial/supervised/MultiLayerNeuralNetworks/)
* [are](http://neuralnetworksanddeeplearning.com/chap1.html)
* [literally](https://www.coursera.org/learn/machine-learning)
* [so](https://www.coursera.org/learn/neural-networks)
* [many](http://deeplearning.net/)
* [learning](http://datasciencemasters.org/)
* [resources](https://metacademy.org/graphs/concepts/backpropagation)
* [online!](http://www.deeplearningbook.org/)
(about neural nets etc.)
In all seriousness, make sure you check out [metacademy](https://metacademy.org/). You can search for a topic and it gives you a list of free resources, an estimated time you need to understand it, and prerequisite topics.
# Attributions
Parts of this lab were inspired by D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Parallel distributed processing: Explorations
in the microstructure of cognition, vol. 1, MIT Press, Cambridge, MA, USA, 1986,
pp. 318–362.
Thanks also to:
* [3 Blue 1 Brown](https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw)
* [Michael Nielsen](http://neuralnetworksanddeeplearning.com)
* [Christopher Olah](http://colah.github.io/)
for producing some excellent visualisations and learning resources and providing them free of charge.
Additionally, many thanks to the developers of open source software, in particular:
* [Numpy](http://www.numpy.org/)
* [Scipy](https://www.scipy.org/)
* [Sklearn](http://scikit-learn.org/stable/)
* [Matplotlib](https://matplotlib.org/)
* [Jupyter](http://jupyter.org/)
* and of course [Python](https://www.python.org/) itself!
your work is invaluable and appreciated.
# Credits
This lab was created by [James Owers](https://jamesowers.github.io/) in November 2017 and reviewed by [Patric Fulop](https://www.inf.ed.ac.uk/people/students/Patric_Fulop.html).
### Import api_crawler
[Code for api_crawler](https://github.com/biothings/JSON-LD_BioThings_API_DEMO/blob/master/src/api_crawler.py)
```
from api_crawler import uri_query
```
### Given a variant hgvs id, looking for ncbi gene id related to it
```
uri_query(input_value='chr12:g.103234255C>T', input_name='http://identifiers.org/hgvs/', output_name='http://identifiers.org/ncbigene/')
```
### From the ncbi gene we found in previous query, get all wikipathways ids related
```
uri_query(input_value='5053', input_name='http://identifiers.org/ncbigene/', output_name='http://identifiers.org/wikipathways/')
```
### Breakdown of uri_query function
The following code shows each step involved in uri_query function demonstrated above.
[Metadata information about BioThings API(config)](https://github.com/biothings/JSON-LD_BioThings_API_DEMO/blob/master/src/config.py)
[code for biothings_helper](https://github.com/biothings/JSON-LD_BioThings_API_DEMO/blob/master/src/biothings_helper.py)
#### Step 1: Specify input and output
```
input_value = '5053'
input_name='http://identifiers.org/ncbigene/'
output_name='http://identifiers.org/wikipathways/'
```
#### Step 2: Iterate through API metadata info, and find corresponding API based on input & output
```
from config import AVAILABLE_API_SOURCES
from biothings_helper import find_value_from_output_type, query_ids_from_output_type
from api_crawler import api_lookup
# look up api in api metadata info
api_results = api_lookup(input_name, output_name)
print(api_results)
```
#### Step 3: Make API call
```
import requests
# construct url based on metadata info
url = api_results[0]['url_syntax'].replace('{{input}}', input_value)
# make API call
doc = requests.get(url).json()
#for better display in ipython notebook, we are not printing the whole json doc here
#the following code could be used to display the json_doc returned from api call
# print(doc)
```
#### Step 4: Transform JSON doc to JSON-LD doc and Nquads format
```
from jsonld_processor import flatten_doc
# flatten the json doc
doc = flatten_doc(doc)
import json
# load context file
context = json.load(open(api_results[0]['jsonld']))
# construct json-ld doc
doc.update(context)
# transform json-ld doc to nquads format
from pyld import jsonld
t = jsonld.JsonLdProcessor()
nquads = t.parse_nquads(jsonld.to_rdf(doc, {'format': 'application/nquads'}))['@default']
print(nquads[1])
# for better display in ipython notebook, we are not printing the whole nquads doc here
# the following code could be used to display the whole nquads doc
# print(nquads)
```
#### Step 5: Fetch value using URI from Nquads format
```
value_list = []
for item in nquads:
if item['predicate']['value'] == output_name:
value_list.append(item['object']['value'])
value = list(set(value_list))
print(value)
```
```
# LSTM with window regression framing (adapted from the "international airline passengers" tutorial)
import numpy
import numpy as np
import keras
import matplotlib.pyplot as plt
from pandas import read_csv
import math
from keras.models import Sequential
from keras.layers import Dense,Dropout
from keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn 0.20
from keras.utils.vis_utils import plot_model
# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back=1):
dataX, dataY = [], []
for i in range(len(dataset)-look_back-1):
a = dataset[i:(i+look_back), 0]
dataX.append(a)
dataY.append(dataset[i + look_back, 0])
return numpy.array(dataX), numpy.array(dataY)
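# Quick illustration of the windowing above (a sketch with a toy 2-D array):
# create_dataset(numpy.array([[1], [2], [3], [4], [5]]), look_back=2)
# gives X = [[1, 2], [2, 3]] and Y = [3, 4]; the last possible window is
# dropped because of the -1 in the range.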
def create_dataset2(dataset, look_back=1):
dataX, dataY = [], []
dataZ=[]
for i in range(len(dataset)-look_back-1):
a = dataset[i:(i+look_back), 1]
dataX.append(a)
b = dataset[i + look_back, 0]
dataZ.append(b)
dataY.append(dataset[i + look_back, 1])
return numpy.array(dataX), numpy.array(dataY),numpy.array(dataZ)
# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset
# dataframe = read_csv('w_d_v.csv', usecols=[7], engine='python', skipfooter=3)
dataframe = read_csv('t6192.csv', usecols=[8,0], engine='python',dtype=np.int32,skiprows=1,header=None)
pattern = read_csv('t6192.csv', usecols=[7], engine='python',dtype=np.int32,skiprows=1,header=None)
Matrix = read_csv('matrix621.csv', usecols=[2,3,4,5,6,7,8,9,10,11,12,13], engine='python',header=None)
all_data = read_csv('all_data.csv', usecols=[7], engine='python')
dataset = dataframe.values
Matrix = Matrix.values
pattern=pattern.values
allData=all_data.values
Matrix=np.append([[-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1]],Matrix,axis=0)
week_info = read_csv('t6192.csv', usecols=[11], engine='python',dtype=np.int32,skiprows=1,header=None)
Start_info = read_csv('t6192.csv', usecols=[12], engine='python',dtype=np.int32,skiprows=1,header=None)
End_info = read_csv('t6192.csv', usecols=[13], engine='python',dtype=np.int32,skiprows=1,header=None)
Stay_info = read_csv('t6192.csv', usecols=[14], engine='python',dtype=np.int32,skiprows=1,header=None)
Weather_info = read_csv('t6192.csv', usecols=[15], engine='python',dtype=np.int32,skiprows=1,header=None)
week_info = week_info.values
Start_info = Start_info.values
End_info=End_info.values
Stay_info=Stay_info.values
Weather_info=Weather_info.values
week_info=week_info[3:-1]
Start_info=Start_info[3:-1]
End_info=End_info[2:-2]
Stay_info=Stay_info[2:-2]
Weather_info=Weather_info[3:-1]
print(End_info.shape)
print(Start_info.shape)
look_back = 3
trainX, trainY, trainZ = create_dataset2(dataset, look_back)
AllX, AllY = create_dataset(allData, look_back)
patternX, patternY = create_dataset(pattern, look_back)
trainY=numpy.reshape(trainY,(trainY.shape[0],-1))
AllY=numpy.reshape(AllY,(AllY.shape[0],-1))
trainY[-10:]
trainX[-10:]
Stay_info[-10:]
encX = OneHotEncoder()
encX.fit(trainX)
encY = OneHotEncoder()
encY.fit(trainY)
trainX_one=encX.transform(trainX).toarray()
train_X=numpy.reshape(trainX_one,(trainX_one.shape[0],look_back,-1))
train_Y=encY.transform(trainY).toarray()
```
# Not yet able to split the data directly; the other dimensions have no corresponding split
a_train, a_test, b_train, b_test = train_test_split(train_X, train_Y, test_size=0.1, random_state=42)
```
emdedding_size=Matrix.shape[1] #
vo_len=look_back #
vocab_size=Matrix.shape[0] #
a_train=trainX.reshape(-1,3,1)
a_train=a_train.reshape(-1,3)
b_train=train_Y
k=trainZ
pretrained_weights=Matrix
LSTM_size=32
```
print("------------------------")
print("in size:")
print(a_train.shape)
print("------------------------")
print("out size:")
print(b_train.shape)
print("------------------------")
print("user size:")
print(k.shape)
print("------------------------")
```
N_test=-1
print("------------------------")
print("input semantic example:")
for x in a_train[N_test]:
print(pretrained_weights[x])
print("------------------------")
print("user_id example:")
print(k[N_test])
print("------------------------")
print("input move pattern example:")
print(a_train[N_test])
print("------------------------")
print("input week_info example:")
print(week_info[N_test])
print("----------------")
print("input Start time example:")
print("Start time sector 4 means 0-6 o'clock in the morning, 3 means 6-9 o'clock in the morning, 2 means 9-17 working hours, 1 means 17-24 points at night +happyHour")
print(Start_info[N_test])
print("----------------")
print("input End time example:")
print(End_info[N_test])
print("----------------")
print("input Stay time example:")
print("per sector means minute")
print(Stay_info[N_test])
print("----------------")
print("input Weather_info example:")
print("1 fog; 2 fog rain; 3 Fog, Rain, Snow; 4 Fog, Rain, Thunderstorm; 5 snow; 6 rain; 7 Thunderstorm; 8 Hail; 9 rain + Thunderstorm; 10 fog + snow; 11 rain + snow")
print(Weather_info[N_test])
print("------------------------")
print("------------------------")
# print("output encode example:")
# print(b_train[0])
# print("------------------------")
print("output decode example:")
print(trainY[N_test])
print("------------------------")
```
print("------------------------")
print("emdedding_size:")
print(emdedding_size)
print("------------------------")
print("vocab_length:")
print(vo_len)
print("------------------------")
print("vocab_size:")
print(vocab_size)
print("------------------------")
```
print("使用 T+S")
from keras.layers import Input, Embedding, LSTM, Dense,Merge,Flatten
from keras.models import Model
# a_train=a_train.reshape(-1,3)
emdedding_size=100
Location_size=201
User_size=183
LSTM_size=200
Time_size=5
Week_size=8
Stay_size=1440
pretrained_weights_size=12
# Move_Pattern Sequences
input_pattern = Input(shape=(3, ),name="Move_Pattern_Input")
# User-Id
User_id = Input(shape=(1,),name="User_id_Input")
# Temporary
Start_Time = Input(shape=(1,),name="Start_Time_Input")
End_Time = Input(shape=(1,),name="End_Time_Input")
Stay_Time = Input(shape=(1,),name="Stay_Time_Input")
Date_Info = Input(shape=(1,),name="Date_Info_Input")#1-7 Monday to Sunday
# Spatial
Location_Info = Input(shape=(3,),name="Semantic_Location_Info_Input")#12 categories Interest_point
# Weather
Weather_Info = Input(shape=(1,),name="Weather_Info_Input")#1-7 Weather Type
#Spatial
em = Embedding(input_dim=Location_size, output_dim=emdedding_size,input_length=vo_len,name="Spatial_Pattern")(input_pattern)
lstm_out = LSTM(LSTM_size,name="Spatial_Feature")(em)
lstm_out = Dropout(0.2)(lstm_out)
#User_id
em2 = Embedding(input_dim=User_size, output_dim=emdedding_size,input_length=1,name="User_id")(User_id)
em2=Flatten(name="User_Feature")(em2)
#Temporary
emStart_Time = Embedding(input_dim=Time_size, output_dim=emdedding_size,input_length=1,name="Start_Time")(Start_Time)
emEnd_Time = Embedding(input_dim=Time_size, output_dim=emdedding_size,input_length=1,name="End_Time")(End_Time)
emStay_Time = Embedding(input_dim=Stay_size, output_dim=emdedding_size,input_length=1,name="Stay_Time")(Stay_Time)
emDate_Info = Embedding(input_dim=Week_size, output_dim=emdedding_size,input_length=1,name="Date_Info")(Date_Info)
Temporary = keras.layers.concatenate([emStart_Time, emEnd_Time,emStay_Time,emDate_Info],name="Temporary_Feature_Model")
Temporary = Flatten(name="Temporary_Feature")(Temporary)
#Semantic
Location_Semantic=Embedding(input_dim=Location_size, output_dim=pretrained_weights_size,input_length=vo_len,weights=[pretrained_weights],name="Semantic_Location_Info")(Location_Info)
Semantic_lstm = LSTM(36,return_sequences=True,name="Semantic_Feature_Model")(Location_Semantic)
Location_Semantic=Flatten(name="Semantic_Feature")(Semantic_lstm)
#Weather
x = keras.layers.concatenate([lstm_out, em2, Temporary,Location_Semantic])
x=Dense(808,activation='relu',name="C")(x)
x=Dense(404,activation='relu',name="C2")(x)
x=Dense(202,activation='relu',name="C3")(x)
x=Dropout(0.2)(x)
x=Dense(b_train.shape[1],activation='softmax',name='x')(x)
model = Model(inputs=[input_pattern,User_id,Start_Time,End_Time,Stay_Time,Date_Info,Location_Info], outputs=x)
model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=['accuracy'])
# print(model.summary()) # Summarize Model
plot_model(model, to_file='t_lstm_T+S.png',show_shapes=True)
print(a_train.shape)
print(week_info.shape)
# Location_info[0:10]  # commented out: Location_info is not defined above and would raise a NameError
history_T_S = model.fit([a_train,k,Start_info,End_info,Stay_info,week_info,a_train], b_train, epochs=100, batch_size=512, verbose=2)
print("使用 encode2 (语义权重 pretrained_weights)方法")
from keras.layers import Input, Embedding, LSTM, Dense,Merge,Flatten
from keras.models import Model
# a_train=a_train.reshape(-1,3)
emdedding_size=100
Location_size=201
User_size=183
LSTM_size=200
Time_size=5
Week_size=8
Stay_size=1440
# Move_Pattern Sequences
input_pattern = Input(shape=(3, ),name="Move_Pattern")
# User-Id
User_id = Input(shape=(1,),name="User_id")
# Temporary
Start_Time = Input(shape=(1,),name="Start_Time")
End_Time = Input(shape=(1,),name="End_Time")
Stay_Time = Input(shape=(1,),name="Stay_Time")
Date_Info = Input(shape=(1,),name="Date_Info")#1-7 Monday to Sunday
# Spatial
Location_Info = Input(shape=(12,),name="Location_Info")#12 categories Interest_point
# Weather
Weather_Info = Input(shape=(1,),name="Weather_Info")#1-7 Weather Type
em = Embedding(input_dim=Location_size, output_dim=emdedding_size,input_length=vo_len)(input_pattern)
lstm_out = LSTM(LSTM_size)(em)
lstm_out = Dropout(0.2)(lstm_out)
em2 = Embedding(input_dim=User_size, output_dim=emdedding_size,input_length=1)(User_id)
emStart_Time = Embedding(input_dim=Time_size, output_dim=emdedding_size,input_length=1)(Start_Time)
emEnd_Time = Embedding(input_dim=Time_size, output_dim=emdedding_size,input_length=1)(End_Time)
emStay_Time = Embedding(input_dim=Stay_size, output_dim=emdedding_size,input_length=1)(Stay_Time)
emDate_Info = Embedding(input_dim=Week_size, output_dim=emdedding_size,input_length=1)(Date_Info)
Temporary = keras.layers.concatenate([emStart_Time, emEnd_Time,emStay_Time,emDate_Info])
Temporary = Flatten()(Temporary)
em2=Flatten()(em2)
x = keras.layers.concatenate([lstm_out, em2, Temporary])
x=Dense(700,activation='relu',name="C")(x)
x=Dense(400,activation='relu',name="C2")(x)
x=Dense(250,activation='relu',name="C3")(x)
x=Dropout(0.2)(x)
x=Dense(b_train.shape[1],activation='softmax',name='x')(x)
model = Model(inputs=[input_pattern,User_id,Start_Time,End_Time,Stay_Time,Date_Info], outputs=x)
model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=['accuracy'])
# print(model.summary()) # Summarize Model
plot_model(model, to_file='t_lstm624.png',show_shapes=True)
history_nopre = model.fit([a_train,k,Start_info,End_info,Stay_info,week_info], b_train, epochs=100, batch_size=512, verbose=2)
print("使用 encode2 (语义权重 pretrained_weights)方法")
from keras.layers import Input, Embedding, LSTM, Dense,Merge,Flatten
from keras.models import Model
# a_train=a_train.reshape(-1,3)
emdedding_size=100
Location_size=201
User_size=183
LSTM_size=200
Time_size=5
Week_size=8
Stay_size=1440
# Move_Pattern Sequences
input_pattern = Input(shape=(3, ),name="Move_Pattern")
# User-Id
User_id = Input(shape=(1,),name="User_id")
# Temporary
Start_Time = Input(shape=(1,),name="Start_Time")
End_Time = Input(shape=(1,),name="End_Time")
Stay_Time = Input(shape=(1,),name="Stay_Time")
Date_Info = Input(shape=(1,),name="Date_Info")#1-7 Monday to Sunday
# Spatial
Location_Info = Input(shape=(12,),name="Location_Info")#12 categories Interest_point
# Weather
Weather_Info = Input(shape=(1,),name="Weather_Info")#1-7 Weather Type
em = Embedding(input_dim=Location_size, output_dim=emdedding_size,input_length=vo_len)(input_pattern)
lstm_out = LSTM(LSTM_size)(em)
lstm_out = Dropout(0.2)(lstm_out)
em2 = Embedding(input_dim=User_size, output_dim=emdedding_size,input_length=1)(User_id)
emStart_Time = Embedding(input_dim=Time_size, output_dim=emdedding_size,input_length=1)(Start_Time)
emEnd_Time = Embedding(input_dim=Time_size, output_dim=emdedding_size,input_length=1)(End_Time)
emStay_Time = Embedding(input_dim=Stay_size, output_dim=emdedding_size,input_length=1)(Stay_Time)
emDate_Info = Embedding(input_dim=Week_size, output_dim=emdedding_size,input_length=1)(Date_Info)
# Temporary = keras.layers.concatenate([emStart_Time, emEnd_Time,emStay_Time,emDate_Info])
# Temporary = Flatten()(Temporary)
em2=Flatten()(em2)
x = keras.layers.concatenate([lstm_out, em2])
x=Dense(700,activation='relu',name="C")(x)
x=Dense(400,activation='relu',name="C2")(x)
x=Dense(250,activation='relu',name="C3")(x)
x=Dropout(0.2)(x)
x=Dense(b_train.shape[1],activation='softmax',name='x')(x)
model = Model(inputs=[input_pattern,User_id,Start_Time,End_Time,Stay_Time,Date_Info], outputs=x)
model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=['accuracy'])
# print(model.summary()) # Summarize Model
plot_model(model, to_file='t_lstm624_notem.png',show_shapes=True)
history_notem = model.fit([a_train,k,Start_info,End_info,Stay_info,week_info], b_train, epochs=100, batch_size=512, verbose=2)
fig = plt.figure()
plt.plot(history_nopre.history['acc'])
plt.plot(history_notem.history['acc'])
plt.plot(history_T_S.history['acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['None', 'T','T+S'], loc='upper left')
print("使用 encode1 方法")
aa_train=train_X
from keras.layers import Input, Embedding, LSTM, Dense,Merge
from keras.models import Model
input_pattern = Input(shape=(3, aa_train.shape[2]),name="Move_Pattern")
input_id = Input(shape=(1,),name="User_id")
lstm_out = LSTM(units=200,return_sequences=False)(input_pattern)
lstm_out = Dropout(0.2)(lstm_out)
em2 = Embedding(input_dim=User_size, output_dim=emdedding_size,input_length=1)(input_id)
em2=Flatten()(em2)
x = keras.layers.concatenate([lstm_out, em2])
x=Dense(400,activation='relu',name="C")(x)
x=Dense(300,activation='relu',name="C2")(x)
x=Dense(250,activation='relu',name="C3")(x)
x=Dropout(0.2)(x)
x=Dense(b_train.shape[1],activation='softmax')(x)
model = Model(inputs=[input_pattern,input_id], outputs=x)
model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=['accuracy'])
# print(model.summary()) # Summarize Model
plot_model(model, to_file='t_lstm_encode1.png',show_shapes=True)
history_encode1 = model.fit([aa_train,k], b_train, epochs=100, batch_size=512, verbose=2)
aa_train=train_X
from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model
input_pattern = Input(shape=(3, aa_train.shape[2]),name="Move_Pattern")
input_id = Input(shape=(1,),name="User_id")
lstm_out = LSTM(units=300,return_sequences=False)(input_pattern)
lstm_out = Dropout(0.2)(lstm_out)
# em2 = Embedding(input_dim=User_size, output_dim=emdedding_size,input_length=1)(input_id)
# em2=Flatten()(em2)
# x = keras.layers.concatenate([lstm_out, em2])
x=Dense(400,activation='relu',name="C1")(lstm_out)
x=Dense(300,activation='relu',name="C2")(x)
x=Dense(250,activation='relu',name="C3")(x)
x=Dropout(0.2)(x)
x=Dense(b_train.shape[1],activation='softmax')(x)
model = Model(inputs=[input_pattern,input_id], outputs=x)
model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=['accuracy'])
# print(model.summary()) # Summarize Model
plot_model(model, to_file='t_lstm_encode1.png',show_shapes=True)
history_encode1 = model.fit([aa_train,k], b_train, epochs=100, batch_size=512, verbose=2)
from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model
emdedding_size=12 #
vo_len=3 #
vocab_size=11000 #
a_train=patternX
b_train=train_Y
k=trainZ
pretrained_weights=Matrix
input_pattern = Input(shape=(3, ),name="input_pattern")
em = Embedding(input_dim=vocab_size, output_dim=emdedding_size,input_length=vo_len, weights=[pretrained_weights])(input_pattern)
lstm_out = LSTM(units=emdedding_size)(em)
lstm_out = Dropout(0.2)(lstm_out)
x=Dense(250,activation='relu',name="C")(lstm_out)
x=Dropout(0.2)(x)
x=Dense(180,activation='softmax')(x)
model = Model(inputs=input_pattern, outputs=x)
model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=['accuracy'])
print(model.summary()) # Summarize Model
plot_model(model, to_file='t_lstm_test.png',show_shapes=True)
history_withpre2 = model.fit(a_train, b_train, epochs=100, batch_size=16, verbose=2)
```
plot_model(model, to_file='t_lstm_test.png',show_shapes=True)
history_nopre = model.fit(a_train, b_train, epochs=100, batch_size=16, verbose=2)
```
from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model
emdedding_size=12 #
vo_len=3 #
vocab_size=11000 #
a_train=trainX.reshape(-1,3)
b_train=train_Y
k=trainZ
pretrained_weights=Matrix
input_pattern = Input(shape=(3, ),name="input_pattern")
em = Embedding(input_dim=vocab_size, output_dim=emdedding_size,input_length=vo_len, weights=[pretrained_weights])(input_pattern)
lstm_out = LSTM(units=emdedding_size)(em)
lstm_out = Dropout(0.2)(lstm_out)
x=Dense(250,activation='relu',name="C")(lstm_out)
x=Dropout(0.2)(x)
x=Dense(180,activation='softmax')(x)
model = Model(inputs=input_pattern, outputs=x)
model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=['accuracy'])
print(model.summary()) # Summarize Model
plot_model(model, to_file='t_lstm_test.png',show_shapes=True)
history_withpre2 = model.fit(a_train, b_train, epochs=100, batch_size=16, verbose=2)
from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model
emdedding_size=12 #
vo_len=3 #
vocab_size=11000 #
a_train=patternX.reshape(-1,3,1)
b_train=train_Y
k=trainZ
pretrained_weights=Matrix
input_pattern = Input(shape=(3, 1),name="input_pattern")
lstm_out = LSTM(units=64)(input_pattern)
lstm_out = Dropout(0.2)(lstm_out)
x=Dense(250,activation='relu',name="C")(lstm_out)
x=Dropout(0.2)(x)
x=Dense(180,activation='softmax')(x)
model = Model(inputs=input_pattern, outputs=x)
model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=['accuracy'])
print(model.summary()) # Summarize Model
plot_model(model, to_file='t_lstm_test.png',show_shapes=True)
history_withpre2 = model.fit(a_train, b_train, epochs=100, batch_size=16, verbose=2)
```
plot_model(model, to_file='t_lstm_test.png',show_shapes=True)
history1 = model.fit(a_train, b_train, epochs=100, batch_size=16, verbose=2)
```
from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model
input_pattern = Input(shape=(3, a_train.shape[2]),name="input_pattern")
lstm_out = LSTM(512,input_shape=(3, a_train.shape[2]))(input_pattern)
# lstm_out = LSTM(512,return_sequences=True,input_shape=(3, a_train.shape[2]))(input_pattern)
# lstm_out = LSTM(300)(lstm_out)
lstm_out = Dropout(0.2)(lstm_out)
x=Dense(250,activation='relu',name="C")(lstm_out)
x=Dropout(0.2)(x)
x=Dense(a_train.shape[2],activation='softmax')(x)
model = Model(inputs=input_pattern, outputs=x)
model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=['accuracy'])
print(model.summary()) # Summarize Model
plot_model(model, to_file='t_lstm_test.png',show_shapes=True)
history = model.fit(a_train, b_train, epochs=100, batch_size=16, verbose=2, validation_data=(a_test, b_test))
print(history.history.keys())
fig = plt.figure()
plt.plot(history.history['acc'])
plt.plot(history1.history['acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['1-lstm', '2-lstm'], loc='upper left')
fig = plt.figure()
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='lower left')
train_X=train_X.reshape(-1,200)
train_Y.reshape(-1,200)
train_X.shape
train_Y.shape
from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model
a_train=train_X
b_train=train_Y
k=trainZ
input_pattern = Input(shape=(3, a_train.shape[2]),name="input_pattern")
input_id = Input(shape=(1,),name="input_id")
lstm_out = LSTM(250,input_shape=(3, a_train.shape[2]))(input_pattern)
lstm_out = Dropout(0.2)(lstm_out)
x = keras.layers.concatenate([lstm_out, input_id])
x=Dense(250,activation='relu',name="C")(x)
x=Dropout(0.2)(x)
x=Dense(a_train.shape[2],activation='softmax',name='x')(x)
model = Model(inputs=[input_pattern,input_id], outputs=x)
model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=['accuracy'])
print(model.summary()) # Summarize Model
plot_model(model, to_file='t_lstm.png',show_shapes=True)
k=np.zeros(a_train.shape[0],dtype=np.int16)
k=k.reshape(-1,1)
k1=np.zeros(train_X.shape[0],dtype=np.int16)
k1=k1.reshape(-1,1)
history = model.fit({'input_pattern': a_train, 'input_id' : k}, {'x': b_train}, epochs=100, batch_size=64, verbose=2)
fig = plt.figure()
Accuracy = [42.00, 47.15, 48.36, 49.35, 47.42, 50.82, 52.31, 56.93, 57.15]
x2=(20,30,40,50,60,70,80,90,100)
plt.plot(x2,Accuracy)
x1=range(0,100)
plt.plot(x1,history.history['acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['100% total_data Yu','100% total_data Mine'], loc='upper left')
fig = plt.figure()
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['90% train_data', '100% total_data'], loc='upper left')
```
model.fit(a_train, b_train, epochs=100, batch_size=16, verbose=2, validation_data=(a_test, b_test))
```
model.evaluate(train_X, train_Y, batch_size=64, verbose=2, sample_weight=None)
trainPredict = model.predict(train_X)
D=np.argmax(train_Y,axis = 1)
E=np.argmax(trainPredict,axis = 1)
print(D)
print(E)
A = 0  # total number of correct predictions
for i, t in enumerate(E):
    if D[i] == t:
        A = A + 1
print(A / D.shape[0])  # accuracy
```
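The accuracy loop above can be replaced by a one-line vectorized comparison. A sketch, using illustrative label arrays standing in for the `D` (true) and `E` (predicted) argmax arrays:

```python
import numpy as np

D = np.array([0, 1, 2, 1])  # stand-in for the true argmax labels
E = np.array([0, 1, 1, 1])  # stand-in for the predicted argmax labels

# Fraction of positions where the prediction matches the truth
accuracy = np.mean(D == E)
print(accuracy)  # 0.75
```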
# Sched Square
This tutorial includes everything you need to set up decision optimization engines and build constraint programming models.
When you finish this tutorial, you'll have a foundational knowledge of _Prescriptive Analytics_.
>This notebook is part of **[Prescriptive Analytics for Python](http://ibmdecisionoptimization.github.io/docplex-doc/)**
>
>It requires either an [installation of CPLEX Optimizers](http://ibmdecisionoptimization.github.io/docplex-doc/getting_started.html) or it can be run on [IBM Watson Studio Cloud](https://www.ibm.com/cloud/watson-studio/) (sign up for a [free IBM Cloud account](https://dataplatform.cloud.ibm.com/registration/stepone?context=wdp&apps=all) and you can start using Watson Studio Cloud right away).
Table of contents:
- [Describe the business problem](#Describe-the-business-problem)
* [How decision optimization (prescriptive analytics) can help](#How--decision-optimization-can-help)
* [Use decision optimization](#Use-decision-optimization)
* [Step 1: Download the library](#Step-1:-Download-the-library)
* [Step 2: Model the Data](#Step-2:-Model-the-data)
* [Step 3: Set up the prescriptive model](#Step-3:-Set-up-the-prescriptive-model)
* [Define the decision variables](#Define-the-decision-variables)
* [Express the business constraints](#Express-the-business-constraints)
* [Express the search phase](#Express-the-search-phase)
* [Solve with Decision Optimization solve service](#Solve-with-Decision-Optimization-solve-service)
* [Step 4: Investigate the solution and run an example analysis](#Step-4:-Investigate-the-solution-and-then-run-an-example-analysis)
* [Summary](#Summary)
****
### Describe the business problem
* The aim of the square example is to place a set of small squares of different sizes into a large square.
*****
## How decision optimization can help
* Prescriptive analytics technology recommends actions based on desired outcomes, taking into account specific scenarios, resources, and knowledge of past and current events. This insight can help your organization make better decisions and have greater control of business outcomes.
* Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes.
* Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.
<br/>
+ For example:
+ Automate complex decisions and trade-offs to better manage limited resources.
+ Take advantage of a future opportunity or mitigate a future risk.
+ Proactively update recommendations based on changing events.
+ Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.
## Use decision optimization
### Step 1: Download the library
Run the following code to install Decision Optimization CPLEX Modeling library. The *DOcplex* library contains the two modeling packages, Mathematical Programming and Constraint Programming, referred to earlier.
```
import sys
try:
import docplex.cp
except:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install docplex
else:
!pip install --user docplex
```
Note that the broader <i>docplex</i> package also contains the subpackage <i>docplex.mp</i>, which is dedicated to Mathematical Programming, another branch of optimization.
### Step 2: Model the data
```
from docplex.cp.model import *
```
Size of the enclosing square
```
SIZE_SQUARE = 112
```
Sizes of the sub-squares
```
SIZE_SUBSQUARE = [50, 42, 37, 35, 33, 29, 27, 25, 24, 19, 18, 17, 16, 15, 11, 9, 8, 7, 6, 4, 2]
```
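As a quick sanity check (not part of the original tutorial), the sub-square areas sum exactly to the area of the enclosing square, so a perfect packing is at least arithmetically possible:

```python
SIZE_SQUARE = 112
SIZE_SUBSQUARE = [50, 42, 37, 35, 33, 29, 27, 25, 24, 19, 18, 17, 16, 15,
                  11, 9, 8, 7, 6, 4, 2]

# Total area of the sub-squares must equal the area of the enclosing square
total_area = sum(s * s for s in SIZE_SUBSQUARE)
print(total_area, SIZE_SQUARE ** 2)  # 12544 12544
```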
### Step 3: Set up the prescriptive model
```
mdl = CpoModel(name="SchedSquare")
```
#### Define the decision variables
##### Create array of variables for sub-squares
```
x = []
y = []
rx = pulse((0, 0), 0)
ry = pulse((0, 0), 0)
for i in range(len(SIZE_SUBSQUARE)):
sq = SIZE_SUBSQUARE[i]
vx = interval_var(size=sq, name="X" + str(i))
vx.set_end((0, SIZE_SQUARE))
x.append(vx)
rx += pulse(vx, sq)
vy = interval_var(size=sq, name="Y" + str(i))
vy.set_end((0, SIZE_SQUARE))
y.append(vy)
ry += pulse(vy, sq)
```
#### Express the business constraints
##### Create dependencies between variables
```
for i in range(len(SIZE_SUBSQUARE)):
for j in range(i):
mdl.add((end_of(x[i]) <= start_of(x[j]))
| (end_of(x[j]) <= start_of(x[i]))
| (end_of(y[i]) <= start_of(y[j]))
| (end_of(y[j]) <= start_of(y[i])))
```
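The four-way disjunction above is the standard 2-D no-overlap condition: two axis-aligned squares are disjoint exactly when one lies entirely to the left of, right of, below, or above the other. A plain-Python sketch of the same test (illustrative only, not part of the DOcplex model):

```python
def squares_disjoint(x1, y1, s1, x2, y2, s2):
    """True if the s1-square at (x1, y1) and the s2-square at (x2, y2) do not overlap."""
    return (x1 + s1 <= x2 or x2 + s2 <= x1 or   # separated horizontally
            y1 + s1 <= y2 or y2 + s2 <= y1)     # separated vertically

print(squares_disjoint(0, 0, 50, 50, 0, 42))   # True: side by side
print(squares_disjoint(0, 0, 50, 10, 10, 42))  # False: overlapping
```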
##### Set other constraints
```
mdl.add(always_in(rx, (0, SIZE_SQUARE), SIZE_SQUARE, SIZE_SQUARE))
mdl.add(always_in(ry, (0, SIZE_SQUARE), SIZE_SQUARE, SIZE_SQUARE))
```
#### Express the search phase
```
mdl.set_search_phases([search_phase(x), search_phase(y)])
```
#### Solve with Decision Optimization solve service
```
msol = mdl.solve(TimeLimit=20)
```
### Step 4: Investigate the solution and then run an example analysis
#### Print Solution
```
print("Solution: ")
msol.print_solution()
```
#### Import graphical tools
```
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
```
*You can set __POP\_UP\_GRAPHIC=True__ if you prefer a pop up graphic window instead of an inline one.*
```
POP_UP_GRAPHIC=False
if msol and visu.is_visu_enabled():
import matplotlib.cm as cm
from matplotlib.patches import Polygon
if not POP_UP_GRAPHIC:
%matplotlib inline
# Plot external square
print("Plotting squares....")
fig, ax = plt.subplots()
plt.plot((0, 0), (0, SIZE_SQUARE), (SIZE_SQUARE, SIZE_SQUARE), (SIZE_SQUARE, 0))
for i in range(len(SIZE_SUBSQUARE)):
# Display square i
(sx, sy) = (msol.get_var_solution(x[i]), msol.get_var_solution(y[i]))
(sx1, sx2, sy1, sy2) = (sx.get_start(), sx.get_end(), sy.get_start(), sy.get_end())
poly = Polygon([(sx1, sy1), (sx1, sy2), (sx2, sy2), (sx2, sy1)], fc=cm.Set2(float(i) / len(SIZE_SUBSQUARE)))
ax.add_patch(poly)
# Display identifier of square i at its center
ax.text(float(sx1 + sx2) / 2, float(sy1 + sy2) / 2, str(SIZE_SUBSQUARE[i]), ha='center', va='center')
plt.margins(0)
plt.show()
```
## Summary
You learned how to set up and use the IBM Decision Optimization CPLEX Modeling for Python to formulate and solve a Constraint Programming model.
#### References
* [CPLEX Modeling for Python documentation](https://rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html)
* [Decision Optimization on Cloud](https://developer.ibm.com/docloud/)
* Need help with DOcplex or to report a bug? Please go [here](https://stackoverflow.com/questions/tagged/docplex)
* Contact us at dofeedback@wwpdl.vnet.ibm.com
Copyright © 2017, 2018 IBM. IPLA licensed Sample Materials.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from astropy.table import Table
data = Table.read('/home/jls/public_html/data/gaia_spectro.hdf5')
dataE = Table.read('/data/jls/GaiaDR2/spectro/input_photometry_and_spectroscopy.hdf5')
def turnoff(d):
return (d['logg']<4.5)&(d['logg']>3.6)&(d['log10_teff']<4.1)
# return (d['logg']>3.)&(d['log10_teff']>np.log10(5700.))
fltr = (data['s']<1.)&(data['s']>0.3)&(data['flag']==0)&turnoff(data)&(data['log10_av']<-.5)
print(np.count_nonzero(fltr))
plt.hist2d((dataE['J']-dataE['K'])[(data['vphi']<50.)&(data['Z']>-0.6)&fltr],
(dataE['K']-data['dm'])[(data['vphi']<50.)&(data['Z']>-0.6)&fltr],
bins=40,norm=LogNorm(),range=[[0.,1.],[1.,4.]],
cmap=plt.cm.Reds);
gg = np.genfromtxt('/data/jls/isochrones/PARSEC_Gaia/grid/2mass_spitzer_wise_0.109.dat')
gg2 = np.genfromtxt('/data/jls/isochrones/PARSEC_Gaia/grid/2mass_spitzer_wise_-1.009.dat')
from plotting_general import running_median
plt.plot((gg.T[8]-gg.T[10])[gg.T[1]==10.],gg.T[10][gg.T[1]==10.],ls='dashed',label='Ref. iso 10Gyr')
plt.plot((gg2.T[8]-gg2.T[10])[gg2.T[1]==10.]+0.09*1.,gg2.T[10][gg2.T[1]==10.],ls='dashed',label='[Fe/H]=-1 10Gyr')
plt.plot((gg2.T[8]-gg2.T[10])[gg2.T[1]==10.],gg2.T[10][gg2.T[1]==10.],ls='dashed',label='[Fe/H]=-1 10Gyr')
# plt.hist2d((dataE['J']-dataE['K']-0.09*data['Z']-np.power(10.,data['log10_av'])*0.12)[(data['vphi']>120.)&(data['vphi']<180.)&(data['Z']>-0.6)&fltr],
# (dataE['K']-data['dm']-np.power(10.,data['log10_av'])*0.09)[(data['vphi']>120.)&(data['vphi']<180.)&(data['Z']>-0.6)&fltr],
# bins=40,norm=LogNorm(),range=[[0.3,0.6],[1.,3.7]],label='120<vphi<180, fe/h>-0.6');
rr=running_median(
(dataE['K']-data['dm']-np.power(10.,data['log10_av'])*0.09)[(data['vphi']>120.)&(data['vphi']<180.)&(data['Z']>-0.6)&fltr],
(dataE['J']-dataE['K']-0.09*data['Z']-np.power(10.,data['log10_av'])*0.12)[(data['vphi']>120.)&(data['vphi']<180.)&(data['Z']>-0.6)&fltr])
plt.plot(rr[1],rr[0],label='fe/h>-0.6, 120<vphi<180')
rr=running_median(
(dataE['K']-data['dm']-np.power(10.,data['log10_av'])*0.09)[(data['vphi']<0.)&(data['Z']<-0.6)&fltr],
(dataE['J']-dataE['K']-0.09*data['Z']-np.power(10.,data['log10_av'])*0.12)[(data['vphi']<0.)&(data['Z']<-0.6)&fltr])
plt.plot(rr[1],rr[0],label='fe/h<-0.6, vphi<0')
rr=running_median(
(dataE['K']-data['dm']-np.power(10.,data['log10_av'])*0.09)[(data['vphi']<0.)&(data['Z']>-0.6)&fltr],
(dataE['J']-dataE['K']-0.09*data['Z']-np.power(10.,data['log10_av'])*0.12)[(data['vphi']<0.)&(data['Z']>-0.6)&fltr])
plt.plot(rr[1],rr[0],label='fe/h>-0.6, vphi<0')
plt.xlim(0.35,0.5)
plt.gca().invert_yaxis()
plt.xlabel(r'$(J-K_s-0.09[Fe/H])$')
plt.ylabel(r'$M_K$')
# plt.plot((gg.T[8]-gg.T[10])[gg.T[1]==9.95]+0.09*0.1,gg.T[10][gg.T[1]==9.95],ls='dashed',label='Ref. iso 9Gyr')
plt.plot((gg.T[8]-gg.T[10])[gg.T[1]==10.]+0.09*0.1,gg.T[10][gg.T[1]==10.],ls='dashed',label='Ref. iso 10Gyr')
plt.plot((gg2.T[8]-gg2.T[10])[gg2.T[1]==10.]+0.09*1.,gg2.T[10][gg2.T[1]==10.],ls='dashed',label='[Fe/H]=-1 10Gyr')
plt.plot((gg.T[8]-gg.T[10])[gg.T[1]==10.1],gg.T[10][gg.T[1]==10.1],ls='dashed',label='Ref. iso 12.5Gyr')
plt.legend(loc=6,bbox_to_anchor=(0.,1.2),ncol=2)
plt.ylim(3.7,1.5)
rr=running_median(
(dataE['K']-data['dm'])[(data['vphi']<0.)&(data['Z']<-0.7)&fltr],
(dataE['J']-dataE['K']-0.1*data['Z'])[(data['vphi']<0.)&(data['Z']<-0.7)&fltr])
plt.plot(rr[1],rr[0])
rr=running_median(
(dataE['K']-data['dm'])[(data['vphi']<0.)&(data['Z']<-0.5)&(data['Z']>-0.7)&fltr],
(dataE['J']-dataE['K']-0.1*data['Z'])[(data['vphi']<0.)&(data['Z']<-0.5)&(data['Z']>-0.7)&fltr])
plt.plot(rr[1],rr[0])
plt.xlim(0.3,0.6)
print(np.median(data['Z'][(data['vphi']>120.)&(data['vphi']<180.)&(data['Z']>-0.6)&fltr]))
plt.hist2d((dataE['J']-dataE['K'])[(data['vphi']>120.)&(data['vphi']<180.)&(data['Z']>-0.6)&fltr],
(dataE['K']-data['dm'])[(data['vphi']>120.)&(data['vphi']<180.)&(data['Z']>-0.6)&fltr],
bins=40,norm=LogNorm(),range=[[0.,1.],[1.,4.]],)
plt.plot((dataE['J']-dataE['K'])[(data['vphi']<0.)&(data['Z']>-0.8)&(data['Z']<-0.5)&fltr],
(dataE['K']-data['dm'])[(data['vphi']<0.)&(data['Z']>-0.8)&(data['Z']<-0.5)&fltr],'.',ms=15)
plt.plot((dataE['J']-dataE['K'])[(data['vphi']<0.)&(data['Z']>-0.5)&fltr],
(dataE['K']-data['dm'])[(data['vphi']<0.)&(data['Z']>-0.5)&fltr],'.',ms=7)
plt.plot((gg.T[8]-gg.T[10])[gg.T[1]==10.],gg.T[10][gg.T[1]==10.])
# plt.plot((gg.T[8]-gg.T[10])[gg.T[1]==10.1],gg.T[10][gg.T[1]==10.1])
plt.hist2d(data['log10_teff'][(data['vphi']<50.)&(data['Z']>-0.6)&fltr],
data['logg'][(data['vphi']<50.)&(data['Z']>-0.6)&fltr],bins=40,norm=LogNorm());
plt.hist(np.power(10.,data['log10_age'])[
    (data['vphi']<150.)&(data['vphi']>50.)&(data['Z']>-0.6)&fltr],
         range=[1.,13.],bins=40,histtype='step',lw=2,density=True);
plt.hist(np.power(10.,data['log10_age'])[(data['vphi']<0.)&(data['Z']>-0.6)&fltr],
         range=[1.,13.],bins=40,histtype='step',lw=2,density=True);
plt.hist(np.power(10.,data['log10_age'])[(data['vphi']<0.)&(data['Z']<-0.7)&fltr],
         range=[1.,13.],bins=40,histtype='step',lw=2,density=True);
plt.hist(np.abs(data['z'])[(data['vphi']<150.)&(data['vphi']>50.)&(data['Z']>-0.6)&fltr],
         range=[0.,2.],bins=40,histtype='step',lw=2,density=True);
plt.hist(np.abs(data['z'])[(data['vphi']<0.)&(data['Z']>-0.6)&fltr],
         range=[0.,2.],bins=40,histtype='step',lw=2,density=True);
plt.hist(np.abs(data['z'])[(data['vphi']<0.)&(data['Z']<-0.7)&fltr],
         range=[0.,2.],bins=40,histtype='step',lw=2,density=True);
# plt.hist(data['vphi'][data['Z']<-0.6],range=[-300.,300.],bins=40,histtype='step',lw=2);
# plt.hist(data['vphi'][data['Z']<-1.5],range=[-300.,300.],bins=40,histtype='step',lw=2);
# plt.semilogy()
```
# Programming Exercise 5:
# Regularized Linear Regression and Bias vs Variance
## Introduction
In this exercise, you will implement regularized linear regression and use it to study models with different bias-variance properties. Before starting on the programming exercise, we strongly recommend watching the video lectures and completing the review questions for the associated topics.
All the information you need for solving this assignment is in this notebook, and all the code you will be implementing will take place within this notebook. The assignment can be promptly submitted to the coursera grader directly from this notebook (code and instructions are included below).
Before we begin with the exercises, we need to import all libraries required for this programming exercise. Throughout the course, we will be using [`numpy`](http://www.numpy.org/) for all arrays and matrix operations, [`matplotlib`](https://matplotlib.org/) for plotting, and [`scipy`](https://docs.scipy.org/doc/scipy/reference/) for scientific and numerical computation functions and tools. You can find instructions on how to install required libraries in the README file in the [github repository](https://github.com/dibgerge/ml-coursera-python-assignments).
```
# used for manipulating directory paths
import os
# Scientific and vector computation for python
import numpy as np
# Plotting library
from matplotlib import pyplot
# Optimization module in scipy
from scipy import optimize
# will be used to load MATLAB mat datafile format
from scipy.io import loadmat
# library written for this exercise providing additional functions for assignment submission, and others
import utils
# define the submission/grader object for this exercise
grader = utils.Grader()
# tells matplotlib to embed plots within the notebook
%matplotlib inline
```
## Submission and Grading
After completing each part of the assignment, be sure to submit your solutions to the grader. The following is a breakdown of how each part of this exercise is scored.
| Section | Part | Submitted Function | Points |
| :- |:- |:- | :-: |
| 1 | [Regularized Linear Regression Cost Function](#section1) | [`linearRegCostFunction`](#linearRegCostFunction) | 25 |
| 2 | [Regularized Linear Regression Gradient](#section2) | [`linearRegCostFunction`](#linearRegCostFunction) |25 |
| 3 | [Learning Curve](#section3) | [`learningCurve`](#func2) | 20 |
| 4 | [Polynomial Feature Mapping](#section4) | [`polyFeatures`](#polyFeatures) | 10 |
| 5 | [Cross Validation Curve](#section5) | [`validationCurve`](#validationCurve) | 20 |
| | Total Points | |100 |
You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.
<div class="alert alert-block alert-warning">
At the end of each section in this notebook, we have a cell which contains code for submitting the solutions thus far to the grader. Execute the cell to see your score up to the current section. For all your work to be submitted properly, you must execute those cells at least once.
</div>
<a id="section1"></a>
## 1 Regularized Linear Regression
In the first half of the exercise, you will implement regularized linear regression to predict the amount of water flowing out of a dam using the change of water level in a reservoir. In the second half, you will go through some diagnostics for debugging learning algorithms and examine the effects of bias vs. variance.
### 1.1 Visualizing the dataset
We will begin by visualizing the dataset containing historical records on the change in the water level, $ x $, and the amount of water flowing out of the dam, $ y $. This dataset is divided into three parts:
- A **training** set that your model will learn on: `X`, `y`
- A **cross validation** set for determining the regularization parameter: `Xval`, `yval`
- A **test** set for evaluating performance. These are “unseen” examples which your model did not see during training: `Xtest`, `ytest`
Run the next cell to plot the training data. In the following parts, you will implement linear regression and use that to fit a straight line to the data and plot learning curves. Following that, you will implement polynomial regression to find a better fit to the data.
```
# Load from ex5data1.mat, where all variables will be stored in a dictionary
data = loadmat(os.path.join('Data', 'ex5data1.mat'))
# Extract train, test, validation data from dictionary
# and also convert y's form 2-D matrix (MATLAB format) to a numpy vector
X, y = data['X'], data['y'][:, 0]
Xtest, ytest = data['Xtest'], data['ytest'][:, 0]
Xval, yval = data['Xval'], data['yval'][:, 0]
# m = Number of training examples
m = y.size
# Plot training data
pyplot.plot(X, y, 'ro', ms=10, mec='k', mew=1)
pyplot.xlabel('Change in water level (x)')
pyplot.ylabel('Water flowing out of the dam (y)');
```
### 1.2 Regularized linear regression cost function
Recall that regularized linear regression has the following cost function:
$$ J(\theta) = \frac{1}{2m} \left( \sum_{i=1}^m \left( h_\theta\left( x^{(i)} \right) - y^{(i)} \right)^2 \right) + \frac{\lambda}{2m} \left( \sum_{j=1}^n \theta_j^2 \right)$$
where $\lambda$ is a regularization parameter which controls the degree of regularization (thus, help preventing overfitting). The regularization term puts a penalty on the overall cost J. As the magnitudes of the model parameters $\theta_j$ increase, the penalty increases as well. Note that you should not regularize
the $\theta_0$ term.
You should now complete the code in the function `linearRegCostFunction` in the next cell. Your task is to calculate the regularized linear regression cost function. If possible, try to vectorize your code and avoid writing loops.
<a id="linearRegCostFunction"></a>
```
def linearRegCostFunction(X, y, theta, lambda_=0.0):
"""
Compute cost and gradient for regularized linear regression
with multiple variables. Computes the cost of using theta as
the parameter for linear regression to fit the data points in X and y.
Parameters
----------
X : array_like
The dataset. Matrix with shape (m x n + 1) where m is the
total number of examples, and n is the number of features
before adding the bias term.
y : array_like
The function values at each datapoint. A vector of
shape (m, ).
theta : array_like
The parameters for linear regression. A vector of shape (n+1,).
lambda_ : float, optional
The regularization parameter.
Returns
-------
J : float
The computed cost function.
grad : array_like
The value of the cost function gradient w.r.t theta.
A vector of shape (n+1, ).
Instructions
------------
Compute the cost and gradient of regularized linear regression for
a particular choice of theta.
You should set J to the cost and grad to the gradient.
"""
# Initialize some useful values
m = y.size # number of training examples
# You need to return the following variables correctly
J = 0
grad = np.zeros(theta.shape)
# ====================== YOUR CODE HERE ======================
# Compute h
h = np.dot(X, theta)
# Compute cost J
J = (1/(2*m)) * np.sum(np.square(h - y)) + (lambda_/(2*m)) * (np.dot(theta[1:].T, theta[1:]))
# Compute gradient for j = 0
grad[0] = (1/m) * (np.dot(X[:, 0].T, (h - y)))
# Compute gradient for j >= 1
grad[1:] = (1/m) * (np.dot(X[:, 1:].T, h-y)) + (lambda_/m) * (theta[1:])
# ============================================================
return J, grad
```
When you are finished, the next cell will run your cost function using `theta` initialized at `[1, 1]`. You should expect to see an output of 303.993.
```
theta = np.array([1, 1])
J, _ = linearRegCostFunction(np.concatenate([np.ones((m, 1)), X], axis=1), y, theta, 1)
print('Cost at theta = [1, 1]:\t %f ' % J)
print('(This value should be about 303.993192)\n')
```
After completing a part of the exercise, you can submit your solutions for grading by first adding the function you modified to the submission object, and then sending your function to Coursera for grading.
The submission script will prompt you for your login e-mail and submission token. You can obtain a submission token from the web page for the assignment. You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.
*Execute the following cell to grade your solution to the first part of this exercise.*
```
grader[1] = linearRegCostFunction
grader.grade()
```
<a id="section2"></a>
### 1.3 Regularized linear regression gradient
Correspondingly, the partial derivative of the cost function for regularized linear regression is defined as:
$$
\begin{align}
& \frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m} \sum_{i=1}^m \left( h_\theta \left(x^{(i)} \right) - y^{(i)} \right) x_j^{(i)} & \qquad \text{for } j = 0 \\
& \frac{\partial J(\theta)}{\partial \theta_j} = \left( \frac{1}{m} \sum_{i=1}^m \left( h_\theta \left( x^{(i)} \right) - y^{(i)} \right) x_j^{(i)} \right) + \frac{\lambda}{m} \theta_j & \qquad \text{for } j \ge 1
\end{align}
$$
In the function [`linearRegCostFunction`](#linearRegCostFunction) above, add code to calculate the gradient, returning it in the variable `grad`. <font color='red'><b>Do not forget to re-execute the cell containing this function to update the function's definition.</b></font>
When you are finished, use the next cell to run your gradient function using theta initialized at `[1, 1]`. You should expect to see a gradient of `[-15.30, 598.250]`.
```
theta = np.array([1, 1])
J, grad = linearRegCostFunction(np.concatenate([np.ones((m, 1)), X], axis=1), y, theta, 1)
print('Gradient at theta = [1, 1]: [{:.6f}, {:.6f}] '.format(*grad))
print(' (this value should be about [-15.303016, 598.250744])\n')
```
*You should now submit your solutions.*
```
grader[2] = linearRegCostFunction
grader.grade()
```
### Fitting linear regression
Once your cost function and gradient are working correctly, the next cell will run the code in `trainLinearReg` (found in the module `utils.py`) to compute the optimal values of $\theta$. This training function uses `scipy`'s optimization module to minimize the cost function.
In this part, we set regularization parameter $\lambda$ to zero. Because our current implementation of linear regression is trying to fit a 2-dimensional $\theta$, regularization will not be incredibly helpful for a $\theta$ of such low dimension. In the later parts of the exercise, you will be using polynomial regression with regularization.
Finally, the code in the next cell should also plot the best fit line, which should look like the figure below.

The best fit line tells us that the model is not a good fit to the data because the data has a non-linear pattern. While visualizing the best fit as shown is one possible way to debug your learning algorithm, it is not always easy to visualize the data and model. In the next section, you will implement a function to generate learning curves that can help you debug your learning algorithm even if it is not easy to visualize the
data.
```
# add a column of ones for the intercept term
X_aug = np.concatenate([np.ones((m, 1)), X], axis=1)
theta = utils.trainLinearReg(linearRegCostFunction, X_aug, y, lambda_=0)
# Plot fit over the data
pyplot.plot(X, y, 'ro', ms=10, mec='k', mew=1.5)
pyplot.xlabel('Change in water level (x)')
pyplot.ylabel('Water flowing out of the dam (y)')
pyplot.plot(X, np.dot(X_aug, theta), '--', lw=2);
```
<a id="section3"></a>
## 2 Bias-variance
An important concept in machine learning is the bias-variance tradeoff. Models with high bias are not complex enough for the data and tend to underfit, while models with high variance overfit to the training data.
In this part of the exercise, you will plot training and test errors on a learning curve to diagnose bias-variance problems.
### 2.1 Learning Curves
You will now implement code to generate the learning curves that will be useful in debugging learning algorithms. Recall that a learning curve plots training and cross validation error as a function of training set size. Your job is to fill in the function `learningCurve` in the next cell, so that it returns a vector of errors for the training set and cross validation set.
To plot the learning curve, we need a training and cross validation set error for different training set sizes. To obtain different training set sizes, you should use different subsets of the original training set `X`. Specifically, for a training set size of $i$, you should use the first $i$ examples (i.e., `X[:i, :]`
and `y[:i]`).
You can use the `trainLinearReg` function (by calling `utils.trainLinearReg(...)`) to find the $\theta$ parameters. Note that the `lambda_` is passed as a parameter to the `learningCurve` function.
After learning the $\theta$ parameters, you should compute the error on the training and cross validation sets. Recall that the training error for a dataset is defined as
$$ J_{\text{train}} = \frac{1}{2m} \left[ \sum_{i=1}^m \left(h_\theta \left( x^{(i)} \right) - y^{(i)} \right)^2 \right] $$
In particular, note that the training error does not include the regularization term. One way to compute the training error is to use your existing cost function and set $\lambda$ to 0 only when using it to compute the training error and cross validation error. When you are computing the training set error, make sure you compute it on the training subset (i.e., `X[:n,:]` and `y[:n]`) instead of the entire training set. However, for the cross validation error, you should compute it over the entire cross validation set. You should store
the computed errors in the vectors `error_train` and `error_val`.
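Concretely, the unregularized training error for a fitted parameter vector is a two-line vectorized computation. A sketch with made-up stand-in data, not the notebook's variables:

```python
import numpy as np

# Illustrative stand-ins; in the exercise these come from the dataset
# and from utils.trainLinearReg.
X_sub = np.array([[1.0, -15.9], [1.0, -29.2], [1.0, 36.2]])  # bias column + feature
y_sub = np.array([2.1, 1.2, 34.4])
theta = np.array([13.0, 0.37])

h = X_sub.dot(theta)                                   # hypothesis on the training subset
J_train = np.sum((h - y_sub) ** 2) / (2 * y_sub.size)  # note: no lambda term
print(round(J_train, 3))  # 15.043
```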
<a id="func2"></a>
```
def learningCurve(X, y, Xval, yval, lambda_=0):
"""
Generates the train and cross validation set errors needed to plot a learning curve.
In this function, you will compute the train and test errors for
dataset sizes from 1 up to m. In practice, when working with larger
datasets, you might want to do this in larger intervals.
Parameters
----------
X : array_like
The training dataset. Matrix with shape (m x n + 1) where m is the
total number of examples, and n is the number of features
before adding the bias term.
y : array_like
The function values at each training datapoint. A vector of
shape (m, ).
Xval : array_like
The validation dataset. Matrix with shape (m_val x n + 1) where m is the
total number of examples, and n is the number of features
before adding the bias term.
yval : array_like
The function values at each validation datapoint. A vector of
shape (m_val, ).
lambda_ : float, optional
The regularization parameter.
Returns
-------
error_train : array_like
A vector of shape m. error_train[i] contains the training error for
i examples.
error_val : array_like
A vector of shape m. error_val[i] contains the validation error for
i training examples.
Instructions
------------
Fill in this function to return training errors in error_train and the
cross validation errors in error_val. i.e., error_train[i] and
error_val[i] should give you the errors obtained after training on i examples.
Notes
-----
- You should evaluate the training error on the first i training
examples (i.e., X[:i, :] and y[:i]).
For the cross-validation error, you should instead evaluate on
the _entire_ cross validation set (Xval and yval).
- If you are using your cost function (linearRegCostFunction) to compute
the training and cross validation error, you should call the function with
the lambda argument set to 0. Do note that you will still need to use
lambda when running the training to obtain the theta parameters.
Hint
----
You can loop over the examples with the following:
for i in range(1, m+1):
# Compute train/cross validation errors using training examples
# X[:i, :] and y[:i], storing the result in
# error_train[i-1] and error_val[i-1]
....
"""
# Number of training examples
m = y.size
# You need to return these values correctly
error_train = np.zeros(m)
error_val = np.zeros(m)
# ====================== YOUR CODE HERE ======================
for i in range(1, m+1):
theta_t = utils.trainLinearReg(linearRegCostFunction, X[:i], y[:i], lambda_=lambda_)
error_train[i-1], _ = linearRegCostFunction(X[:i], y[:i], theta_t, lambda_=0)
error_val[i-1], _ = linearRegCostFunction(Xval, yval, theta_t, lambda_=0)
# =============================================================
return error_train, error_val
```
When you are finished implementing the function `learningCurve`, executing the next cell prints the learning curves and produce a plot similar to the figure below.

In the learning curve figure, you can observe that both the train error and cross validation error are high when the number of training examples is increased. This reflects a high bias problem in the model - the linear regression model is too simple and is unable to fit our dataset well. In the next section, you will implement polynomial regression to fit a better model for this dataset.
```
X_aug = np.concatenate([np.ones((m, 1)), X], axis=1)
Xval_aug = np.concatenate([np.ones((yval.size, 1)), Xval], axis=1)
error_train, error_val = learningCurve(X_aug, y, Xval_aug, yval, lambda_=0)
pyplot.plot(np.arange(1, m+1), error_train, np.arange(1, m+1), error_val, lw=2)
pyplot.title('Learning curve for linear regression')
pyplot.legend(['Train', 'Cross Validation'])
pyplot.xlabel('Number of training examples')
pyplot.ylabel('Error')
pyplot.axis([0, 13, 0, 150])
print('# Training Examples\tTrain Error\tCross Validation Error')
for i in range(m):
print(' \t%d\t\t%f\t%f' % (i+1, error_train[i], error_val[i]))
```
*You should now submit your solutions.*
```
grader[3] = learningCurve
grader.grade()
```
<a id="section4"></a>
## 3 Polynomial regression
The problem with our linear model was that it was too simple for the data
and resulted in underfitting (high bias). In this part of the exercise, you will address this problem by adding more features. For polynomial regression, our hypothesis has the form:
$$
\begin{align}
h_\theta(x) &= \theta_0 + \theta_1 \times (\text{waterLevel}) + \theta_2 \times (\text{waterLevel})^2 + \cdots + \theta_p \times (\text{waterLevel})^p \\
& = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \cdots + \theta_p x_p
\end{align}
$$
Notice that by defining $x_1 = (\text{waterLevel})$, $x_2 = (\text{waterLevel})^2$ , $\cdots$, $x_p =
(\text{waterLevel})^p$, we obtain a linear regression model where the features are the various powers of the original value (waterLevel).
Now, you will add more features using the higher powers of the existing feature $x$ in the dataset. Your task in this part is to complete the code in the function `polyFeatures` in the next cell. The function should map the original training set $X$ of size $m \times 1$ into its higher powers. Specifically, when a training set $X$ of size $m \times 1$ is passed into the function, the function should return a $m \times p$ matrix `X_poly`, where column 1 holds the original values of X, column 2 holds the values of $X^2$, column 3 holds the values of $X^3$, and so on. Note that you don’t have to account for the zero-eth power in this function.
<a id="polyFeatures"></a>
```
def polyFeatures(X, p):
"""
Maps X (1D vector) into the p-th power.
Parameters
----------
X : array_like
A data vector of size m, where m is the number of examples.
p : int
The polynomial power to map the features.
Returns
-------
X_poly : array_like
A matrix of shape (m x p) where p is the polynomial
power and m is the number of examples. That is:
X_poly[i, :] = [X[i], X[i]**2, X[i]**3 ... X[i]**p]
Instructions
------------
Given a vector X, return a matrix X_poly where the p-th column of
X contains the values of X to the p-th power.
"""
# You need to return the following variables correctly.
X_poly = np.zeros((X.shape[0], p))
# ====================== YOUR CODE HERE ======================
for i in range(p):
X_poly[:, i] = X[:, 0]**(i+1)
# ============================================================
return X_poly
```
Now you have a function that will map features to a higher dimension. The next cell will apply it to the training set, the test set, and the cross validation set.
```
p = 8
# Map X onto Polynomial Features and Normalize
X_poly = polyFeatures(X, p)
X_poly, mu, sigma = utils.featureNormalize(X_poly)
X_poly = np.concatenate([np.ones((m, 1)), X_poly], axis=1)
# Map X_poly_test and normalize (using mu and sigma)
X_poly_test = polyFeatures(Xtest, p)
X_poly_test -= mu
X_poly_test /= sigma
X_poly_test = np.concatenate([np.ones((ytest.size, 1)), X_poly_test], axis=1)
# Map X_poly_val and normalize (using mu and sigma)
X_poly_val = polyFeatures(Xval, p)
X_poly_val -= mu
X_poly_val /= sigma
X_poly_val = np.concatenate([np.ones((yval.size, 1)), X_poly_val], axis=1)
print('Normalized Training Example 1:')
X_poly[0, :]
```
*You should now submit your solutions.*
```
grader[4] = polyFeatures
grader.grade()
```
## 3.1 Learning Polynomial Regression
After you have completed the function `polyFeatures`, we will proceed to train polynomial regression using your linear regression cost function.
Keep in mind that even though we have polynomial terms in our feature vector, we are still solving a linear regression optimization problem. The polynomial terms have simply turned into features that we can use for linear regression. We are using the same cost function and gradient that you wrote for the earlier part of this exercise.
For this part of the exercise, you will be using a polynomial of degree 8. It turns out that if we run the training directly on the projected data, it will not work well as the features would be badly scaled (e.g., an example with $x = 40$ will now have a feature $x_8 = 40^8 = 6.5 \times 10^{12}$). Therefore, you will need to use feature normalization.
Before learning the parameters $\theta$ for the polynomial regression, we first call `featureNormalize` and normalize the features of the training set, storing the mu, sigma parameters separately. We have already implemented this function for you (in `utils.py` module) and it is the same function from the first exercise.
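For reference, `utils.featureNormalize` is not reproduced in this notebook; a minimal sketch of what it does, assuming it matches the description above (column-wise mean/std normalization, with the statistics returned so the test and validation sets can be scaled with the training parameters), is:

```python
import numpy as np

def featureNormalize(X):
    # Center each column on its mean and scale by its (sample) standard
    # deviation; return mu and sigma so that test/validation data can be
    # normalized with the *training* statistics.
    mu = np.mean(X, axis=0)
    X_norm = X - mu
    sigma = np.std(X_norm, axis=0, ddof=1)
    X_norm = X_norm / sigma
    return X_norm, mu, sigma
```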
After learning the parameters $\theta$, you should see two plots generated for polynomial regression with $\lambda = 0$, which should be similar to the ones here:
<table>
<tr>
<td><img src="Figures/polynomial_regression.png"></td>
<td><img src="Figures/polynomial_learning_curve.png"></td>
</tr>
</table>
You should see that the polynomial fit is able to follow the datapoints very well, thus, obtaining a low training error. The figure on the right shows that the training error essentially stays zero for all numbers of training samples. However, the polynomial fit is very complex and even drops off at the extremes. This is an indicator that the polynomial regression model is overfitting the training data and will not generalize well.
To better understand the problems with the unregularized ($\lambda = 0$) model, you can see that the learning curve shows the same effect where the training error is low, but the cross validation error is high. There is a gap between the training and cross validation errors, indicating a high variance problem.
```
lambda_ = 0
theta = utils.trainLinearReg(linearRegCostFunction, X_poly, y,
lambda_=lambda_, maxiter=55)
# Plot training data and fit
pyplot.plot(X, y, 'ro', ms=10, mew=1.5, mec='k')
utils.plotFit(polyFeatures, np.min(X), np.max(X), mu, sigma, theta, p)
pyplot.xlabel('Change in water level (x)')
pyplot.ylabel('Water flowing out of the dam (y)')
pyplot.title('Polynomial Regression Fit (lambda = %f)' % lambda_)
pyplot.ylim([-20, 50])
pyplot.figure()
error_train, error_val = learningCurve(X_poly, y, X_poly_val, yval, lambda_)
pyplot.plot(np.arange(1, 1+m), error_train, np.arange(1, 1+m), error_val)
pyplot.title('Polynomial Regression Learning Curve (lambda = %f)' % lambda_)
pyplot.xlabel('Number of training examples')
pyplot.ylabel('Error')
pyplot.axis([0, 13, 0, 100])
pyplot.legend(['Train', 'Cross Validation'])
print('Polynomial Regression (lambda = %f)\n' % lambda_)
print('# Training Examples\tTrain Error\tCross Validation Error')
for i in range(m):
print(' \t%d\t\t%f\t%f' % (i+1, error_train[i], error_val[i]))
```
One way to combat the overfitting (high-variance) problem is to add regularization to the model. In the next section, you will get to try different $\lambda$ parameters to see how regularization can lead to a better model.
### 3.2 Optional (ungraded) exercise: Adjusting the regularization parameter
In this section, you will get to observe how the regularization parameter affects the bias-variance trade-off of regularized polynomial regression. You should now modify the lambda parameter and try $\lambda = 1, 100$. For each of these values, the script should generate a polynomial fit to the data and also a learning curve.
For $\lambda = 1$, the generated plots should look like the figure below. You should see a polynomial fit that follows the data trend well (left) and a learning curve (right) showing that both the cross validation and training error converge to a relatively low value. This shows the $\lambda = 1$ regularized polynomial regression model does not have the high-bias or high-variance problems. In effect, it achieves a good trade-off between bias and variance.
<table>
<tr>
<td><img src="Figures/polynomial_regression_reg_1.png"></td>
<td><img src="Figures/polynomial_learning_curve_reg_1.png"></td>
</tr>
</table>
```
lambda_ = 1
theta = utils.trainLinearReg(linearRegCostFunction, X_poly, y,
lambda_=lambda_, maxiter=55)
# Plot training data and fit
pyplot.plot(X, y, 'ro', ms=10, mew=1.5, mec='k')
utils.plotFit(polyFeatures, np.min(X), np.max(X), mu, sigma, theta, p)
pyplot.xlabel('Change in water level (x)')
pyplot.ylabel('Water flowing out of the dam (y)')
pyplot.title('Polynomial Regression Fit (lambda = %f)' % lambda_)
pyplot.ylim([-20, 50])
pyplot.figure()
error_train, error_val = learningCurve(X_poly, y, X_poly_val, yval, lambda_)
pyplot.plot(np.arange(1, 1+m), error_train, np.arange(1, 1+m), error_val)
pyplot.title('Polynomial Regression Learning Curve (lambda = %f)' % lambda_)
pyplot.xlabel('Number of training examples')
pyplot.ylabel('Error')
pyplot.axis([0, 13, 0, 100])
pyplot.legend(['Train', 'Cross Validation'])
print('Polynomial Regression (lambda = %f)\n' % lambda_)
print('# Training Examples\tTrain Error\tCross Validation Error')
for i in range(m):
print(' \t%d\t\t%f\t%f' % (i+1, error_train[i], error_val[i]))
```
For $\lambda = 100$, you should see a polynomial fit (figure below) that does not follow the data well. In this case, there is too much regularization and the model is unable to fit the training data.

```
lambda_ = 100
theta = utils.trainLinearReg(linearRegCostFunction, X_poly, y,
lambda_=lambda_, maxiter=55)
# Plot training data and fit
pyplot.plot(X, y, 'ro', ms=10, mew=1.5, mec='k')
utils.plotFit(polyFeatures, np.min(X), np.max(X), mu, sigma, theta, p)
pyplot.xlabel('Change in water level (x)')
pyplot.ylabel('Water flowing out of the dam (y)')
pyplot.title('Polynomial Regression Fit (lambda = %f)' % lambda_)
pyplot.ylim([-20, 50])
pyplot.figure()
error_train, error_val = learningCurve(X_poly, y, X_poly_val, yval, lambda_)
pyplot.plot(np.arange(1, 1+m), error_train, np.arange(1, 1+m), error_val)
pyplot.title('Polynomial Regression Learning Curve (lambda = %f)' % lambda_)
pyplot.xlabel('Number of training examples')
pyplot.ylabel('Error')
pyplot.axis([0, 13, 0, 100])
pyplot.legend(['Train', 'Cross Validation'])
print('Polynomial Regression (lambda = %f)\n' % lambda_)
print('# Training Examples\tTrain Error\tCross Validation Error')
for i in range(m):
print(' \t%d\t\t%f\t%f' % (i+1, error_train[i], error_val[i]))
```
*You do not need to submit any solutions for this optional (ungraded) exercise.*
<a id="section5"></a>
### 3.3 Selecting $\lambda$ using a cross validation set
From the previous parts of the exercise, you observed that the value of $\lambda$ can significantly affect the results of regularized polynomial regression on the training and cross validation set. In particular, a model without regularization ($\lambda = 0$) fits the training set well, but does not generalize. Conversely, a model with too much regularization ($\lambda = 100$) does not fit the training set and testing set well. A good choice of $\lambda$ (e.g., $\lambda = 1$) can provide a good fit to the data.
In this section, you will implement an automated method to select the $\lambda$ parameter. Concretely, you will use a cross validation set to evaluate how good each $\lambda$ value is. After selecting the best $\lambda$ value using the cross validation set, we can then evaluate the model on the test set to estimate
how well the model will perform on actual unseen data.
Your task is to complete the code in the function `validationCurve`. Specifically, you should use the `utils.trainLinearReg` function to train the model using different values of $\lambda$ and compute the training error and cross validation error. You should try $\lambda$ in the following range: {0, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10}.
<a id="validationCurve"></a>
```
def validationCurve(X, y, Xval, yval):
"""
Generate the train and validation errors needed to plot a validation
curve that we can use to select lambda_.
Parameters
----------
X : array_like
The training dataset. Matrix with shape (m x n) where m is the
total number of training examples, and n is the number of features
including any polynomial features.
y : array_like
The functions values at each training datapoint. A vector of
shape (m, ).
Xval : array_like
The validation dataset. Matrix with shape (m_val x n) where m is the
total number of validation examples, and n is the number of features
including any polynomial features.
yval : array_like
The functions values at each validation datapoint. A vector of
shape (m_val, ).
Returns
-------
lambda_vec : list
The values of the regularization parameters which were used in
cross validation.
error_train : list
The training error computed at each value for the regularization
parameter.
error_val : list
The validation error computed at each value for the regularization
parameter.
Instructions
------------
Fill in this function to return training errors in `error_train` and
the validation errors in `error_val`. The vector `lambda_vec` contains
the different lambda parameters to use for each calculation of the
errors, i.e, `error_train[i]`, and `error_val[i]` should give you the
errors obtained after training with `lambda_ = lambda_vec[i]`.
Note
----
You can loop over lambda_vec with the following:
for i in range(len(lambda_vec)):
    lambda_ = lambda_vec[i]
# Compute train / val errors when training linear
# regression with regularization parameter lambda_
# You should store the result in error_train[i]
# and error_val[i]
....
"""
# Selected values of lambda (you should not change this)
lambda_vec = [0, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10]
# You need to return these variables correctly.
error_train = np.zeros(len(lambda_vec))
error_val = np.zeros(len(lambda_vec))
# ====================== YOUR CODE HERE ======================
for i in range(len(lambda_vec)):
lambda_ = lambda_vec[i]
theta_t = utils.trainLinearReg(linearRegCostFunction, X, y, lambda_=lambda_)
error_train[i], _ = linearRegCostFunction(X, y, theta_t, lambda_=0)
error_val[i], _ = linearRegCostFunction(Xval, yval, theta_t, lambda_=0)
# ============================================================
return lambda_vec, error_train, error_val
```
After you have completed the code, the next cell will run your function and plot a cross validation curve of error v.s. $\lambda$ that allows you select which $\lambda$ parameter to use. You should see a plot similar to the figure below.

In this figure, we can see that the best value of $\lambda$ is around 3. Due to randomness
in the training and validation splits of the dataset, the cross validation error can sometimes be lower than the training error.
```
lambda_vec, error_train, error_val = validationCurve(X_poly, y, X_poly_val, yval)
pyplot.plot(lambda_vec, error_train, '-o', lambda_vec, error_val, '-o', lw=2)
pyplot.legend(['Train', 'Cross Validation'])
pyplot.xlabel('lambda')
pyplot.ylabel('Error')
print('lambda\t\tTrain Error\tValidation Error')
for i in range(len(lambda_vec)):
print(' %f\t%f\t%f' % (lambda_vec[i], error_train[i], error_val[i]))
```
*You should now submit your solutions.*
```
grader[5] = validationCurve
grader.grade()
```
### 3.4 Optional (ungraded) exercise: Computing test set error
In the previous part of the exercise, you implemented code to compute the cross validation error for various values of the regularization parameter $\lambda$. However, to get a better indication of the model’s performance in the real world, it is important to evaluate the “final” model on a test set that was not used in any part of training (that is, it was neither used to select the $\lambda$ parameters, nor to learn the model parameters $\theta$). For this optional (ungraded) exercise, you should compute the test error using the best value of $\lambda$ you found. In our cross validation, we obtained a test error of 3.8599 for $\lambda = 3$.
*You do not need to submit any solutions for this optional (ungraded) exercise.*
```
def testCurve(X, y, Xtest, ytest, lambda_=0):
"""
Generate the train errors and test errors needed to plot a test curve
that we can use to evaluate the 'final' model.
"""
# Number of training examples
m = y.size
# You need to return these values correctly
error_train = np.zeros(m)
error_test = np.zeros(m)
# ====================== YOUR CODE HERE ======================
for i in range(1, m+1):
theta_t = utils.trainLinearReg(linearRegCostFunction, X[:i], y[:i], lambda_=lambda_)
error_train[i-1], _ = linearRegCostFunction(X[:i], y[:i], theta_t, lambda_=0)
error_test[i-1], _ = linearRegCostFunction(Xtest, ytest, theta_t, lambda_=0)
# =============================================================
return error_train, error_test
```
Computing the test error using the best value of $\lambda$ (in our cross validation, we obtained a test error of 3.8599 for $\lambda = 3$).
```
lambda_ = 3
theta = utils.trainLinearReg(linearRegCostFunction, X_poly, y,
lambda_=lambda_, maxiter=55)
# Plot training data and fit
pyplot.plot(X, y, 'ro', ms=10, mew=1.5, mec='k')
utils.plotFit(polyFeatures, np.min(X), np.max(X), mu, sigma, theta, p)
pyplot.xlabel('Change in water level (x)')
pyplot.ylabel('Water flowing out of the dam (y)')
pyplot.title('Polynomial Regression Fit (lambda = %f)' % lambda_)
pyplot.ylim([-20, 50])
pyplot.figure()
error_train, error_test = testCurve(X_poly, y, X_poly_test, ytest, lambda_)
pyplot.plot(np.arange(1, 1+m), error_train, np.arange(1, 1+m), error_test)
pyplot.title('Polynomial Regression Learning Curve (lambda = %f)' % lambda_)
pyplot.xlabel('Number of training examples')
pyplot.ylabel('Error')
pyplot.axis([0, 13, 0, 100])
pyplot.legend(['Train', 'Test'])
print('Polynomial Regression (lambda = %f)\n' % lambda_)
print('# Training Examples\tTrain Error\tTest Error')
for i in range(m):
print(' \t%d\t\t%f\t%f' % (i+1, error_train[i], error_test[i]))
```
### 3.5 Optional (ungraded) exercise: Plotting learning curves with randomly selected examples
In practice, especially for small training sets, when you plot learning curves to debug your algorithms, it is often helpful to average across multiple sets of randomly selected examples to determine the training error and cross validation error.
Concretely, to determine the training error and cross validation error for $i$ examples, you should first randomly select $i$ examples from the training set and $i$ examples from the cross validation set. You will then learn the parameters $\theta$ using the randomly chosen training set and evaluate the parameters $\theta$ on the randomly chosen training set and cross validation set. The above steps should then be repeated multiple times (say 50) and the averaged error should be used to determine the training error and cross validation error for $i$ examples.
For this optional (ungraded) exercise, you should implement the above strategy for computing the learning curves. For reference, the figure below shows the learning curve we obtained for polynomial regression with $\lambda = 0.01$. Your figure may differ slightly due to the random selection of examples.

*You do not need to submit any solutions for this optional (ungraded) exercise.*
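A self-contained sketch of that averaging strategy is below. Note that `utils.trainLinearReg` and `linearRegCostFunction` are replaced here by a closed-form ridge solve and a plain squared-error cost purely so the example runs on its own; in the exercise you would call your own functions instead.

```python
import numpy as np

def ridge_fit(X, y, lambda_):
    # Closed-form regularized normal equation (intercept not penalized);
    # a stand-in for utils.trainLinearReg so this sketch is self-contained.
    n = X.shape[1]
    L = lambda_ * np.eye(n)
    L[0, 0] = 0.0
    return np.linalg.solve(X.T @ X + L, X.T @ y)

def mse_cost(X, y, theta):
    # Unregularized cost, as used when *measuring* learning-curve error.
    return np.sum((X @ theta - y) ** 2) / (2 * y.size)

def random_learning_curve(X, y, Xval, yval, lambda_=0.01, n_trials=50, seed=0):
    # For each training-set size i, average the train/validation error
    # over n_trials randomly chosen subsets of i examples.
    rng = np.random.default_rng(seed)
    m = y.size
    error_train = np.zeros(m)
    error_val = np.zeros(m)
    for i in range(1, m + 1):
        for _ in range(n_trials):
            train_idx = rng.choice(m, size=i, replace=False)
            val_idx = rng.choice(yval.size, size=min(i, yval.size), replace=False)
            theta = ridge_fit(X[train_idx], y[train_idx], lambda_)
            error_train[i - 1] += mse_cost(X[train_idx], y[train_idx], theta)
            error_val[i - 1] += mse_cost(Xval[val_idx], yval[val_idx], theta)
    return error_train / n_trials, error_val / n_trials
```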
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import warnings
warnings.filterwarnings('ignore', category=DeprecationWarning)
warnings.filterwarnings('ignore', category=FutureWarning)
import sklearn
sklearn.set_config(print_changed_only=True)
```
# Algorithm Chains and Pipelines
```
from sklearn.svm import SVC
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
# load and split the data
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, random_state=0)
# compute minimum and maximum on the training data
scaler = MinMaxScaler().fit(X_train)
# rescale training data
X_train_scaled = scaler.transform(X_train)
svm = SVC()
# learn an SVM on the scaled training data
svm.fit(X_train_scaled, y_train)
# scale test data and score the scaled data
X_test_scaled = scaler.transform(X_test)
svm.score(X_test_scaled, y_test)
```
### Building Pipelines
```
from sklearn.pipeline import Pipeline
pipe = Pipeline([("scaler", MinMaxScaler()), ("svm", SVC())])
pipe.fit(X_train, y_train)
pipe.score(X_test, y_test)
```
### Using Pipelines in Grid-searches
```
param_grid = {'svm__C': [0.001, 0.01, 0.1, 1, 10, 100],
'svm__gamma': [0.001, 0.01, 0.1, 1, 10, 100]}
from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(pipe, param_grid=param_grid)
grid.fit(X_train, y_train)
print("best cross-validation accuracy:", grid.best_score_)
print("test set score: ", grid.score(X_test, y_test))
print("best parameters: ", grid.best_params_)
```
# Not using pipelines: information leakage in feature selection
```
rnd = np.random.RandomState(seed=0)
X = rnd.normal(size=(100, 10000))
y = rnd.normal(size=(100,))
from sklearn.feature_selection import SelectPercentile, f_regression
select = SelectPercentile(score_func=f_regression,
percentile=5)
select.fit(X, y)
X_selected = select.transform(X)
print(X_selected.shape)
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import Ridge
np.mean(cross_val_score(Ridge(), X_selected, y))
pipe = Pipeline([("select", SelectPercentile(score_func=f_regression, percentile=5)),
("ridge", Ridge())])
np.mean(cross_val_score(pipe, X, y))
```
### The General Pipeline Interface
```
def fit(self, X, y):
X_transformed = X
for step in self.steps[:-1]:
# iterate over all but the final step
# fit and transform the data
X_transformed = step[1].fit_transform(X_transformed, y)
# fit the last step
self.steps[-1][1].fit(X_transformed, y)
return self
def predict(self, X):
X_transformed = X
for step in self.steps[:-1]:
# iterate over all but the final step
# transform the data
X_transformed = step[1].transform(X_transformed)
# predict using the last step
return self.steps[-1][1].predict(X_transformed)
```
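As a sanity check, the simplified `fit`/`predict` above mirror what `Pipeline` actually does: chaining the steps by hand produces the same predictions. A small sketch on synthetic data (step names here are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X, y = make_classification(random_state=0)
pipe = Pipeline([("scaler", MinMaxScaler()), ("svm", SVC())])

# Chain the steps manually, mirroring the simplified fit/predict above ...
X_t = pipe.steps[0][1].fit_transform(X, y)
pipe.steps[1][1].fit(X_t, y)
manual_pred = pipe.steps[1][1].predict(pipe.steps[0][1].transform(X))

# ... and compare with the pipeline doing the same chaining internally.
pipe.fit(X, y)
assert np.array_equal(pipe.predict(X), manual_pred)
```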
### Convenient Pipeline creation with ``make_pipeline``
```
from sklearn.pipeline import make_pipeline
# standard syntax
pipe_long = Pipeline([("scaler", MinMaxScaler()), ("svm", SVC(C=100))])
# abbreviated syntax
pipe_short = make_pipeline(MinMaxScaler(), SVC(C=100))
pipe_short.steps
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
pipe = make_pipeline(StandardScaler(), PCA(n_components=2),
StandardScaler())
pipe.steps
```
#### Accessing step attributes
```
# fit the pipeline defined above to the cancer dataset
pipe.fit(cancer.data)
# extract the first two principal components from the "pca" step
components = pipe.named_steps.pca.components_
print(components.shape)
pipe['pca'].components_.shape
pipe[0]
pipe[1]
pipe[:2]
```
#### Accessing attributes in a grid-searched pipeline
```
from sklearn.linear_model import LogisticRegression
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
param_grid = {'logisticregression__C': [0.01, 0.1, 1, 10, 100]}
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, random_state=4)
grid = GridSearchCV(pipe, param_grid)
grid.fit(X_train, y_train)
print(grid.best_estimator_)
print(grid.best_estimator_.named_steps.logisticregression)
print(grid.best_estimator_['logisticregression'])
print(grid.best_estimator_.named_steps.logisticregression.coef_)
print(grid.best_estimator_['logisticregression'].coef_)
```
### Grid-searching preprocessing steps and model parameters
```
from sklearn.datasets import load_boston
boston = load_boston()
X_train, X_test, y_train, y_test = train_test_split(
boston.data, boston.target, random_state=0)
from sklearn.preprocessing import PolynomialFeatures
pipe = make_pipeline(
StandardScaler(),
PolynomialFeatures(),
Ridge())
param_grid = {'polynomialfeatures__degree': [1, 2, 3],
'ridge__alpha': [0.001, 0.01, 0.1, 1, 10, 100]}
grid = GridSearchCV(pipe, param_grid=param_grid,
n_jobs=-1, return_train_score=True)
grid.fit(X_train, y_train)
res = pd.DataFrame(grid.cv_results_)
res.head()
res = pd.pivot_table(res, index=['param_polynomialfeatures__degree', 'param_ridge__alpha'],
values=['mean_train_score', 'mean_test_score'])
res['mean_train_score'].unstack()
res['mean_test_score'].unstack()
print(grid.best_params_)
grid.score(X_test, y_test)
from sklearn.linear_model import Lasso
from sklearn.model_selection import RepeatedKFold
pipe = Pipeline([('scaler', StandardScaler()), ('regressor', Ridge())])
param_grid = {'scaler': [StandardScaler(), MinMaxScaler(), 'passthrough'],
'regressor': [Ridge(), Lasso()],
'regressor__alpha': np.logspace(-3, 3, 7)}
grid = GridSearchCV(pipe, param_grid,
cv=RepeatedKFold(n_splits=10, n_repeats=10))
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
grid.best_score_
grid.best_params_
from sklearn.tree import DecisionTreeRegressor
param_grid = [{'regressor': [DecisionTreeRegressor()],
'regressor__max_depth': [2, 3, 4]},
{'regressor': [Ridge()],
'regressor__alpha': [0.1, 1]}
]
```
# More on ColumnTransformer
```
from sklearn.compose import make_column_transformer, ColumnTransformer
bike = pd.read_csv("data/bike_day_raw.csv")
bike.head()
bike.dtypes
bike_data = bike.drop("cnt", axis=1)
cat_features = bike.columns[:6]
cat_features
from sklearn.preprocessing import OneHotEncoder
ct = make_column_transformer((OneHotEncoder(sparse=False), cat_features),
remainder=StandardScaler())
ct.transformers
ColumnTransformer([('ohe', OneHotEncoder(sparse=False), cat_features)],
remainder=StandardScaler())
ColumnTransformer([('ohe', OneHotEncoder(sparse=False), cat_features),
('scaler', StandardScaler(), [6, 7, 8, 9])])
ct.fit(bike_data)
bike_data.shape
ct.transform(bike_data).shape
ct.transform(bike_data)
ct = make_column_transformer((OneHotEncoder(sparse=False), cat_features),
remainder=StandardScaler())
ohe_pipe = make_pipeline(ct, Ridge())
X_train, X_test, y_train, y_test = train_test_split(bike_data, bike.cnt, random_state=42)
cross_val_score(ohe_pipe, X_train, y_train)
from sklearn.preprocessing import PowerTransformer
ct = make_column_transformer((OneHotEncoder(sparse=False), cat_features))
ohe_pipe = make_pipeline(ct, Ridge())
param_grid = {'columntransformer__remainder':
[StandardScaler(), PowerTransformer(method='yeo-johnson')],
'ridge__alpha': np.logspace(-3, 2, 6)}
grid = GridSearchCV(ohe_pipe, param_grid)
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
grid.best_params_
res = pd.DataFrame(grid.cv_results_)
res
plt.plot(res.mean_test_score[:6].values, label="StandardScaler")
plt.plot(res.mean_test_score[6:].values, label="PowerTransformer")
plt.legend()
```
# Exercise
Load the adult dataset. Create a pipeline using ColumnTransformer, OneHotEncoder, scaling, polynomial features, and a linear classifier.
Search over the best options for the polynomial features together with the regularization of the linear model.
```
pd.read_csv("data/adult.csv", index_col=0).head()
# use OneHotEncoder(handle_unknown='ignore') to ignore new categories in test set.
```
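One possible sketch of such a pipeline, shown on a small synthetic frame so it is self-contained; in practice you would load `pd.read_csv("data/adult.csv", index_col=0)` and use its real columns (the column names below are illustrative stand-ins):

```python
import numpy as np
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, PolynomialFeatures, StandardScaler

# Hypothetical stand-in for the adult data (column names are assumptions).
rng = np.random.RandomState(0)
df = pd.DataFrame({
    "age": rng.randint(18, 70, 300),
    "hours-per-week": rng.randint(10, 60, 300),
    "workclass": rng.choice(["Private", "State-gov", "Self-emp"], 300),
    "income": rng.randint(0, 2, 300),
})
X, y = df.drop("income", axis=1), df["income"]
cat_cols = ["workclass"]
num_cols = ["age", "hours-per-week"]

# One-hot encode categoricals; scale, then polynomially expand numeric columns.
ct = make_column_transformer(
    (OneHotEncoder(handle_unknown="ignore"), cat_cols),
    (make_pipeline(StandardScaler(), PolynomialFeatures()), num_cols))
pipe = make_pipeline(ct, LogisticRegression(max_iter=1000))

# Search the polynomial degree jointly with the classifier's regularization.
param_grid = {
    "columntransformer__pipeline__polynomialfeatures__degree": [1, 2],
    "logisticregression__C": [0.1, 1, 10],
}
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
grid = GridSearchCV(pipe, param_grid)
grid.fit(X_train, y_train)
print(grid.best_params_)
print(grid.score(X_test, y_test))
```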
# Analyzing Lianjia Second-Hand Housing Listings in Beijing's Main Districts with Python
* This notebook shows how to use pandas to analyze scraped Lianjia second-hand housing data, covering listing information by administrative district and by residential community.
* Data source: https://github.com/XuefengHuang/lianjia-scrawler (the repo provides a Python scraper for lianjia.com that extracts second-hand housing prices, areas, floor plans, and follower counts).
* Analysis approach adapted from http://www.jianshu.com/p/44f261a62c0f
## Load the file of Lianjia listings currently for sale (data updated 2017-11-29)
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
import sys
stdout = sys.stdout
reload(sys)
sys.setdefaultencoding('utf-8')
sys.stdout = stdout
plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['axes.unicode_minus'] = False
# All listings currently for sale
house=pd.read_csv('houseinfo.csv')
# All residential community information
community=pd.read_csv('community.csv')
# Merge the community and listing tables to get a more detailed location for each listing
community['community'] = community['title']
house_detail = pd.merge(house, community, on='community')
```
## Extract numeric values from strings
```
# Convert a string such as '89.5平米' into a number by stripping the given suffix
def data_adj(area_data, str):
    if str in area_data:
        return float(area_data[0:area_data.find(str)])
    else:
        return None
# Parse the floor-area column (values end in '平米', i.e. square meters)
house['square'] = house['square'].apply(data_adj, str='平米')
```
## Remove parking-space listings
```
car = house[house.housetype.str.contains('车位')]
print 'The records contain %d parking spaces' % car.shape[0]
house.drop(car.index, inplace=True)
print '%d records remain' % house.shape[0]
```
## The 5 most expensive villas
```
bieshu = house[house.housetype.str.contains('别墅')]
print 'The records contain %d villas' % bieshu.shape[0]
bieshu.sort_values('totalPrice', ascending=False).head(5)
```
## Remove villa listings
```
house.drop(bieshu.index, inplace=True)
print '%d records remain' % house.shape[0]
```
## The 5 listings with the highest total price
```
house.sort_values('totalPrice',ascending=False).head(5)
```
## Distribution of floor plans
```
housetype = house['housetype'].value_counts()
housetype.head(8).plot(kind='bar', x='housetype', y='size', title='Floor plan distribution')
plt.legend(['count'])
plt.show()
```
## The 5 most-followed listings
```
house['guanzhu'] = house['followInfo'].apply(data_adj,str = '人关注')
house.sort_values('guanzhu',ascending=False).head(5)
```
## Floor plans vs. number of followers
```
fig, ax1 = plt.subplots(1, 1)
type_interest_group = house['guanzhu'].groupby(house['housetype']).agg([('listings', 'count'), ('followers', 'sum')])
# Visualize only floor plans with more than 50 listings
ti_sort = type_interest_group[type_interest_group['listings'] > 50].sort_values(by='listings')
ti_sort.plot(kind='barh', alpha=0.7, grid=True, ax=ax1)
plt.title('Floor plans and follower counts of second-hand listings')
plt.ylabel('Floor plan')
plt.show()
```
## Floor-area distribution
```
fig, ax2 = plt.subplots(1, 1)
area_level = [0, 50, 100, 150, 200, 250, 300, 500]
label_level = ['<50', '50-100', '100-150', '150-200', '200-250', '250-300', '300-500']
area_cut = pd.cut(house['square'], area_level, labels=label_level)
area_cut.value_counts().plot(kind='bar', rot=30, alpha=0.4, grid=True, fontsize='small', ax=ax2)
plt.title('Floor-area distribution of second-hand listings')
plt.xlabel('Area (sq. m)')
plt.legend(['count'])
plt.show()
```
## Cluster analysis
```
# Handle missing values by simply dropping those rows
cluster_data = house[['guanzhu', 'square', 'totalPrice']].dropna()
# Use 3 clusters
K_model = KMeans(n_clusters=3)
alg = K_model.fit(cluster_data)
# Cluster centers
center = pd.DataFrame(alg.cluster_centers_, columns=['followers', 'area', 'price'])
cluster_data['label'] = alg.labels_
center
```
## The smallest second-hand home for sale in Beijing
```
house.sort_values('square').iloc[0,:]
```
## The largest second-hand home for sale in Beijing
```
house.sort_values('square',ascending=False).iloc[0,:]
```
## Average unit price by administrative district
```
house_unitprice_perdistrict = house_detail.groupby('district').mean()['unitPrice']
house_unitprice_perdistrict.plot(kind='bar', x='district', y='unitPrice', title='Average unit price by district')
plt.legend(['avg. unit price'])
plt.show()
```
## Number of listings by neighborhood (bizcircle)
```
bizcircle_count = house_detail.groupby('bizcircle').size().sort_values(ascending=False)
bizcircle_count.head(20).plot(kind='bar', x='bizcircle', y='size', title='Number of listings by neighborhood')
plt.legend(['count'])
plt.show()
```
## Average unit price by neighborhood
```
bizcircle_unitprice = house_detail.groupby('bizcircle').mean()['unitPrice'].sort_values(ascending=False)
bizcircle_unitprice.head(20).plot(kind='bar', x='bizcircle', y='unitPrice', title='Average unit price by neighborhood')
plt.legend(['avg. unit price'])
plt.show()
```
## Number of communities by neighborhood
```
bizcircle_community = community.groupby('bizcircle')['title'].size().sort_values(ascending=False)
bizcircle_community.head(20).plot(kind='bar', x='bizcircle', y='size', title='Number of communities by neighborhood')
plt.legend(['count'])
plt.show()
```
## Communities ranked by average unit price
```
community_unitprice = house.groupby('community').mean()['unitPrice'].sort_values(ascending=False)
community_unitprice.head(15).plot(kind='bar', x='community', y='unitPrice', title='Average unit price by community')
plt.legend(['avg. unit price'])
plt.show()
```
```
import datetime
from pytz import timezone
print "Last run @%s" % (datetime.datetime.now(timezone('US/Pacific')))
from pyspark.context import SparkContext
print "Running Spark Version %s" % (sc.version)
from pyspark.conf import SparkConf
conf = SparkConf()
print conf.toDebugString()
# Read Orders
orders = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('NW/NW-Orders.csv')
order_details = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('NW/NW-Order-Details.csv')
products = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('NW/NW-Products.csv')
orders.count()
# Save as parquet format for folks who couldn't make spark-csv work
orders.repartition(1).write.mode("overwrite").format("parquet").save("orders.parquet")
order_details.repartition(1).write.mode("overwrite").format("parquet").save("order_details.parquet")
products.repartition(1).write.mode("overwrite").format("parquet").save("products.parquet")
# Read & Check
df = sqlContext.read.load("orders.parquet")
df.show(5)
df.count()
# Read & Check
df = sqlContext.read.load("products.parquet")
df.show(5)
df.count()
# Read & Check
df = sqlContext.read.load("order_details.parquet")
df.show(5)
df.count()
order_details.count()
orders.show(10)
order_details.show(10)
products.count()
products.show(1)
# Questions
# 1. How many orders were placed by each customer?
# 2. How many orders were placed by each country ?
# 3. How many orders by month/year ?
# 4. Total Sales for each customer by year
# 5. Average order by customer by year
# These are questions based on customer and sales reports
# Similar questions can be asked about products as well
orders.dtypes
# 1. How many orders were placed by each customer?
orders.groupBy("CustomerID").count().orderBy("count",ascending=False).show(10)
# 2. How many orders were placed by each country ?
orders.groupBy("ShipCuntry").count().orderBy("count",ascending=False).show(10)
# For the next set of questions, let us transform the data
# 1. Add OrderTotal column to the Orders DataFrame
# 1.1. Add Line total to order details
# 1.2. Aggregate total by order id
# 1.3. Join order details & orders to add the order total
# 1.4. Check if there are any null columns
# 2. Add a date column
# 3. Add month and year
# 1.1. Add Line total to order details
order_details_1 = order_details.select(order_details['OrderID'],
(order_details['UnitPrice'].cast('float') *
order_details['Qty'].cast('float') *
(1.0 -order_details['Discount'].cast('float'))).alias('OrderPrice'))
order_details_1.show(10)
# 1.2. Aggregate total by order id
order_tot = order_details_1.groupBy('OrderID').sum('OrderPrice').alias('OrderTotal')
order_tot.orderBy('OrderID').show(5)
# 1.3. Join order details & orders to add the order total
orders_1 = orders.join(order_tot, orders['OrderID'] == order_tot['OrderID'], 'inner')\
.select(orders['OrderID'],
orders['CustomerID'],
orders['OrderDate'],
orders['ShipCuntry'].alias('ShipCountry'),
order_tot['sum(OrderPrice)'].alias('Total'))
orders_1.orderBy('CustomerID').show()
# 1.4. Check if there are any null columns
orders_1.filter(orders_1['Total'].isNull()).show(40)
import pyspark.sql.functions as F
from pyspark.sql.types import DateType,IntegerType
from datetime import datetime
convertToDate = F.udf(lambda s: datetime.strptime(s, '%m/%d/%y'), DateType())
#getMonth = F.udf(lambda d:d.month, IntegerType())
#getYear = F.udf(lambda d:d.year, IntegerType())
getM = F.udf(lambda d:d.month, IntegerType()) # To test UDF in 1.5.1. didn't work in 1.5.0
getY = F.udf(lambda d:d.year, IntegerType())
# 2. Add a date column
orders_2 = orders_1.withColumn('Date',convertToDate(orders_1['OrderDate']))
orders_2.show(2)
# 3. Add month and year
#orders_3 = orders_2.withColumn('Month',getMonth(orders_2['Date'])).withColumn('Year',getYear(orders_2['Date']))
orders_3 = orders_2.withColumn('Month',F.month(orders_2['Date'])).withColumn('Year',F.year(orders_2['Date']))
orders_3 = orders_2.withColumn('Month',getM(orders_2['Date'])).withColumn('Year',getY(orders_2['Date']))  # overwrites the F.* version above to exercise the UDFs in 1.5.1
orders_3.show(5)
# 3. How many orders by month/year ?
import time
start_time = time.time()
orders_3.groupBy("Year","Month").sum('Total').show()
print "%s Elapsed : %f" % (datetime.today(), time.time() - start_time)
#[7/3/15 8:20 PM 1.4.1] Elapsed : 22.788190 (with UDF)
#[1.5.0] 2015-09-05 10:29:57.377955 Elapsed : 10.542052 (with F.*)
#[1.5.1] 2015-09-24 17:53:13.605858 Elapsed : 11.024428 (with F.*)
# 4. Total Sales for each customer by year
import time
start_time = time.time()
orders_3.groupBy("CustomerID","Year").sum('Total').show()
print "%s Elapsed : %f" % (datetime.today(), time.time() - start_time)
#[1.4.1] 2015-07-03 20:29:37.499064 Elapsed : 18.372916 (with UDF)
#[1.5.0] 2015-09-05 10:26:14.689536 Elapsed : 11.468665 (with F.*)
#[1.5.1] 2015-09-24 17:53:23.670811 Elapsed : 10.057430 (with F.*)
# 5. Average order by customer by year
import time
start_time = time.time()
orders_3.groupBy("CustomerID","Year").avg('Total').show()
print "%s Elapsed : %.2f" % (datetime.today(), time.time() - start_time)
#[1.4.1] 2015-07-03 20:32:14.734800 Elapsed : 18.88 (with UDF)
#[1.5.0] 2015-09-05 10:26:28.227042 Elapsed : 13.53 (with F.*)
#[1.5.1] 2015-09-24 17:55:25.963050 Elapsed : 10.02 (with F.*)
# 6. Average order by customer
import time
start_time = time.time()
orders_3.groupBy("CustomerID").avg('Total').orderBy('avg(Total)',ascending=False).show()
print "%s Elapsed : %.2f" % (datetime.today(), time.time() - start_time)
#[1.4.1] 2015-07-03 20:33:21.634902 Elapsed : 20.15 (with UDF)
#[1.5.0] 2015-09-05 10:26:40.064432 Elapsed : 11.83 (with F.*)
#[1.5.1] 2015-09-24 17:55:49.818042 Elapsed : 9.43 (with F.*)
```
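For readers without a Spark session handy, the order-total transformation above (steps 1.1–1.3) can be sketched in plain pandas. The rows below are invented toy stand-ins for the Northwind tables; only the column names follow the files used above.

```python
import pandas as pd

# Toy stand-ins for the Northwind order_details and orders tables
# (values invented for illustration; column names match the CSVs above).
order_details = pd.DataFrame({
    "OrderID":   [1, 1, 2],
    "UnitPrice": [10.0, 20.0, 5.0],
    "Qty":       [2, 1, 4],
    "Discount":  [0.0, 0.5, 0.0],
})
orders = pd.DataFrame({
    "OrderID":    [1, 2],
    "CustomerID": ["ALFKI", "ANATR"],
})

# 1.1. Line total per order-detail row
order_details["OrderPrice"] = (order_details["UnitPrice"]
                               * order_details["Qty"]
                               * (1.0 - order_details["Discount"]))

# 1.2. Aggregate the line totals by order id
order_tot = order_details.groupby("OrderID", as_index=False)["OrderPrice"].sum()

# 1.3. Join back onto orders to add the order total
orders_1 = (orders.merge(order_tot, on="OrderID")
                  .rename(columns={"OrderPrice": "Total"}))
print(orders_1)  # order 1: 10*2 + 20*1*0.5 = 30.0; order 2: 5*4 = 20.0
```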
# Example: Compare CZT to FFT
```
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
# CZT package
import czt
# https://github.com/garrettj403/SciencePlots
plt.style.use(['science', 'notebook'])
```
# Generate Time-Domain Signal
```
# Time data
t = np.arange(0, 20, 0.1) * 1e-3
dt = t[1] - t[0]
Fs = 1 / dt
N = len(t)
print("Sampling period: {:5.2f} ms".format(dt * 1e3))
print("Sampling frequency: {:5.2f} kHz".format(Fs / 1e3))
print("Nyquist frequency: {:5.2f} kHz".format(Fs / 2 / 1e3))
print("Number of points: {:5d}".format(N))
# Signal data
def model1(t):
"""Exponentially decaying sine wave with higher-order distortion."""
output = (1.0 * np.sin(2 * np.pi * 1e3 * t) +
0.3 * np.sin(2 * np.pi * 2.5e3 * t) +
0.1 * np.sin(2 * np.pi * 3.5e3 * t)) * np.exp(-1e3 * t)
return output
def model2(t):
"""Exponentially decaying sine wave without higher-order distortion."""
output = (1.0 * np.sin(2 * np.pi * 1e3 * t)) * np.exp(-1e3 * t)
return output
sig = model1(t)
# Plot time-domain data
plt.figure()
t_tmp = np.linspace(0, 6, 601) / 1e3
plt.plot(t_tmp*1e3, model1(t_tmp), 'k', lw=0.5, label='Data')
plt.plot(t*1e3, sig, 'ro--', label='Samples')
plt.xlabel("Time (ms)")
plt.ylabel("Signal")
plt.xlim([0, 6])
plt.legend()
plt.title("Time-domain signal");
```
# Frequency-domain
```
sig_fft = np.fft.fftshift(np.fft.fft(sig))
f_fft = np.fft.fftshift(np.fft.fftfreq(N, d=dt))
freq, sig_f = czt.time2freq(t, sig)
# Plot results
fig1 = plt.figure(1)
frame1a = fig1.add_axes((.1,.3,.8,.6))
plt.plot(f_fft / 1e3, np.abs(sig_fft), 'k', label='FFT')
plt.plot(freq / 1e3, np.abs(sig_f), 'ro--', label='CZT')
plt.ylabel("Signal magnitude")
plt.xlim([f_fft.min()/1e3, f_fft.max()/1e3])
plt.legend()
plt.title("Frequency-domain")
frame1b = fig1.add_axes((.1,.1,.8,.2))
plt.plot(f_fft / 1e3, (np.abs(sig_fft) - np.abs(sig_f)) * 1e13, 'r-', label="Data")
plt.xlabel("Frequency (kHz)")
plt.ylabel("Residual\n" + r"($\times10^{-13}$)")
plt.xlim([f_fft.min()/1e3, f_fft.max()/1e3])
plt.savefig("results/freq-domain.png", dpi=600)
# Plot results
fig2 = plt.figure(2)
frame2a = fig2.add_axes((.1,.3,.8,.6))
plt.plot(f_fft / 1e3, np.angle(sig_fft), 'k', label='FFT')
plt.plot(freq / 1e3, np.angle(sig_f), 'ro--', label='CZT')
plt.ylabel("Signal phase")
plt.xlim([f_fft.min()/1e3, f_fft.max()/1e3])
plt.legend()
plt.title("Frequency-domain")
frame2b = fig2.add_axes((.1,.1,.8,.2))
plt.plot(f_fft / 1e3, (np.angle(sig_fft) - np.angle(sig_f)) * 1e13, 'r-', label="Data")
plt.xlabel("Frequency (kHz)")
plt.ylabel("Residual\n" + r"($\times10^{-13}$)")
plt.xlim([f_fft.min()/1e3, f_fft.max()/1e3]);
```
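The near-zero residuals above are expected: with default parameters the CZT reduces exactly to the DFT. The internals of the `czt` package used above aren't shown here, but a minimal NumPy-only sketch of Bluestein's algorithm makes the equivalence checkable:

```python
import numpy as np

def czt_bluestein(x, m=None, w=None, a=1.0):
    """Chirp z-transform X[k] = sum_n x[n] * a**(-n) * w**(n*k),
    computed via Bluestein's algorithm (three FFTs of padded length)."""
    x = np.asarray(x, dtype=complex)
    n_in = x.size
    m = n_in if m is None else m
    w = np.exp(-2j * np.pi / m) if w is None else w
    # FFT length large enough for the linear convolution to be exact
    L = int(2 ** np.ceil(np.log2(n_in + m - 1)))
    idx = np.arange(max(n_in, m))
    chirp = w ** (idx ** 2 / 2.0)
    # Pre-multiply the input by the chirp and the a**(-n) term
    xp = np.zeros(L, dtype=complex)
    xp[:n_in] = x * a ** (-np.arange(n_in)) * chirp[:n_in]
    # Convolution kernel: inverse chirp over indices -(n_in-1) .. m-1
    v = np.zeros(L, dtype=complex)
    v[:m] = 1.0 / chirp[:m]
    v[L - n_in + 1:] = 1.0 / chirp[1:n_in][::-1]
    y = np.fft.ifft(np.fft.fft(xp) * np.fft.fft(v))
    return y[:m] * chirp[:m]

rng = np.random.default_rng(0)
x = rng.standard_normal(60) + 1j * rng.standard_normal(60)  # non-power-of-2 length
print(np.allclose(czt_bluestein(x), np.fft.fft(x)))  # → True
```

Unlike the plain FFT, the same routine accepts any output count `m` and any spiral contour `w`, which is what makes the CZT useful for zooming into a narrow frequency band.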
```
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Vertex client library: AutoML text entity extraction model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
## Overview
This tutorial demonstrates how to use the Vertex client library for Python to create text entity extraction models and do online prediction using Google Cloud's [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users).
### Dataset
The dataset used for this tutorial is the [NCBI Disease Research Abstracts dataset](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) from [National Center for Biotechnology Information](https://www.ncbi.nlm.nih.gov/). The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
### Objective
In this tutorial, you create an AutoML text entity extraction model and deploy for online prediction from a Python script using the Vertex client library. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Google Cloud Console.
The steps performed include:
- Create a Vertex `Dataset` resource.
- Train the model.
- View the model evaluation.
- Deploy the `Model` resource to a serving `Endpoint` resource.
- Make a prediction.
- Undeploy the `Model`.
### Costs
This tutorial uses billable components of Google Cloud (GCP):
* Vertex AI
* Cloud Storage
Learn about [Vertex AI
pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage
pricing](https://cloud.google.com/storage/pricing), and use the [Pricing
Calculator](https://cloud.google.com/products/calculator/)
to generate a cost estimate based on your projected usage.
## Installation
Install the latest version of Vertex client library.
```
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
```
Install the latest GA version of *google-cloud-storage* library as well.
```
! pip3 install -U google-cloud-storage $USER_FLAG
```
### Restart the kernel
Once you've installed the Vertex client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
```
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
```
## Before you begin
### GPU runtime
*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU**
### Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)
4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.
5. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
```
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
```
#### Region
You can also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/vertex-ai/docs/general/locations).
```
REGION = "us-central1" # @param {type: "string"}
```
#### Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on the resources created, you create a timestamp for each session and append it to the names of the resources created in this tutorial.
```
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```
### Authenticate your Google Cloud account
**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.
**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
**Otherwise**, follow these steps:
In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.
**Click Create service account**.
In the **Service account name** field, enter a name, and click **Create**.
In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
```
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
```
### Set up variables
Next, set up some variables used throughout the tutorial.
### Import libraries and define constants
#### Import Vertex client library
Import the Vertex client library into our Python environment.
```
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
```
#### Vertex constants
Setup up the following constants for Vertex:
- `API_ENDPOINT`: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
- `PARENT`: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
```
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
```
#### AutoML constants
Set constants unique to AutoML datasets and training:
- Dataset Schemas: Tells the `Dataset` resource service which type of dataset it is.
- Data Labeling (Annotations) Schemas: Tells the `Dataset` resource service how the data is labeled (annotated).
- Dataset Training Schemas: Tells the `Pipeline` resource service the task (e.g., classification) to train the model for.
```
# Text Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/text_1.0.0.yaml"
# Text Labeling type
LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/text_extraction_io_format_1.0.0.yaml"
# Text Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_extraction_1.0.0.yaml"
```
# Tutorial
Now you are ready to start creating your own AutoML text entity extraction model.
## Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
- Dataset Service for `Dataset` resources.
- Model Service for `Model` resources.
- Pipeline Service for training.
- Endpoint Service for deployment.
- Prediction Service for serving.
```
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
```
## Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
### Create `Dataset` resource instance
Use the helper function `create_dataset` to create the instance of a `Dataset` resource. This function does the following:
1. Uses the dataset client service.
2. Creates a Vertex `Dataset` resource (`aip.Dataset`), with the following parameters:
- `display_name`: The human-readable name you choose to give it.
- `metadata_schema_uri`: The schema for the dataset type.
3. Calls the client dataset service method `create_dataset`, with the following parameters:
- `parent`: The Vertex location root path for your `Database`, `Model` and `Endpoint` resources.
- `dataset`: The Vertex dataset object instance you created.
4. The method returns an `operation` object.
An `operation` object is how Vertex handles asynchronous calls for long-running operations. While this step usually completes quickly, the first time you use it in your project there is a longer delay due to provisioning.
You can use the `operation` object to get status on the operation (e.g., create `Dataset` resource) or to cancel the operation, by invoking an operation method:
| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
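As a sketch of how these methods combine, here is a typical poll-then-fetch loop. The `StubOperation` class and its result payload are invented stand-ins, since a real `operation` only comes back from a live Vertex call:

```python
import time

class StubOperation:
    """Stand-in for a Vertex long-running operation (illustration only --
    a real one is returned by calls such as create_dataset)."""
    def __init__(self, duration=0.05):
        self._finish_at = time.monotonic() + duration
    def running(self):
        return not self.done()
    def done(self):
        return time.monotonic() >= self._finish_at
    def result(self, timeout=None):
        while not self.done():          # a real result() blocks server-side
            time.sleep(0.01)
        return {"name": "projects/example/locations/us-central1/datasets/123"}

operation = StubOperation()
while not operation.done():             # poll instead of blocking on result()
    time.sleep(0.01)                    # a real loop might sleep 60 s per check
result = operation.result(timeout=90)
print(result["name"])
```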
```
TIMEOUT = 90
def create_dataset(name, schema, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
dataset = aip.Dataset(
display_name=name, metadata_schema_uri=schema, labels=labels
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
result = operation.result(timeout=TIMEOUT)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("biomedical-" + TIMESTAMP, DATA_SCHEMA)
```
Now save the unique dataset identifier for the `Dataset` resource instance you created.
```
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
```
### Data preparation
The Vertex `Dataset` resource for text has a couple of requirements for your text entity extraction data.
- Text examples must be stored in a JSONL file. Unlike text classification and sentiment analysis, text entity extraction does not support a CSV index file.
- The examples must be either inline text or reference text files that are in Cloud Storage buckets.
#### JSONL
For text entity extraction, the JSONL file has a few requirements:
- Each data item is a separate JSON object, on a separate line.
- The key/value pair `text_segment_annotations` is a list of character start/end positions in the text per entity with the corresponding label.
- `display_name`: The label.
- `start_offset/end_offset`: The character offsets of the start/end of the entity.
- The key/value pair `text_content` is the text.
{'text_segment_annotations': [{'end_offset': value, 'start_offset': value, 'display_name': label}, ...], 'text_content': text}
*Note*: The dictionary key fields may alternatively be in camelCase. For example, 'display_name' can also be 'displayName'.
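To make the format concrete, the sketch below builds one such annotated example and round-trips it through a JSONL line. The sentence, offsets, and the `SpecificDisease` label are invented for illustration; offsets follow Python slicing, with `end_offset` exclusive.

```python
import json

text = "Mutations in BRCA1 are linked to breast cancer."
record = {
    "text_content": text,
    "text_segment_annotations": [
        # start/end are character offsets into text_content;
        # display_name is the entity label (values here are illustrative)
        {"start_offset": 33, "end_offset": 46, "display_name": "SpecificDisease"},
    ],
}

line = json.dumps(record)   # one JSON object per line in the JSONL file
parsed = json.loads(line)   # round-trip as a quick sanity check
ann = parsed["text_segment_annotations"][0]
entity = parsed["text_content"][ann["start_offset"]:ann["end_offset"]]
print(entity)  # → breast cancer
```

Slicing the annotated span back out of `text_content`, as above, is a cheap validation to run over a JSONL file before importing it.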
#### Location of Cloud Storage training data.
Now set the variable `IMPORT_FILE` to the location of the JSONL index file in Cloud Storage.
```
IMPORT_FILE = "gs://ucaip-test-us-central1/dataset/ucaip_ten_dataset.jsonl"
```
#### Quick peek at your data
You will use a version of the NCBI Biomedical dataset that is stored in a public Cloud Storage bucket, using a JSONL index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of objects in a JSONL index file (`wc -l`) and then peek at the first few rows.
```
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
```
### Import data
Now, import the data into your Vertex Dataset resource. Use this helper function `import_data` to import the data. The function does the following:
- Uses the `Dataset` client.
- Calls the client method `import_data`, with the following parameters:
- `name`: The human readable name you give to the `Dataset` resource (e.g., biomedical).
- `import_configs`: The import configuration.
- `import_configs`: A Python list containing a dictionary, with the key/value entries:
- `gcs_sources`: A list of URIs to the paths of the one or more index files.
- `import_schema_uri`: The schema identifying the labeling type.
The `import_data()` method returns a long running `operation` object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.
```
def import_data(dataset, gcs_sources, schema):
config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}]
print("dataset:", dataset_id)
start_time = time.time()
try:
operation = clients["dataset"].import_data(
name=dataset_id, import_configs=config
)
print("Long running operation:", operation.operation.name)
result = operation.result()
print("result:", result)
print("time:", int(time.time() - start_time), "secs")
print("error:", operation.exception())
print("meta :", operation.metadata)
print(
"after: running:",
operation.running(),
"done:",
operation.done(),
"cancelled:",
operation.cancelled(),
)
return operation
except Exception as e:
print("exception:", e)
return None
import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)
```
## Train the model
Now train an AutoML text entity extraction model using your Vertex `Dataset` resource. To train the model, do the following steps:
1. Create a Vertex training pipeline for the `Dataset` resource.
2. Execute the pipeline to start the training.
### Create a training pipeline
You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:
1. Being reusable for subsequent training jobs.
2. Can be containerized and run as a batch job.
3. Can be distributed.
4. All the steps are associated with the same pipeline job for tracking progress.
Use this helper function `create_pipeline`, which takes the following parameters:
- `pipeline_name`: A human readable name for the pipeline job.
- `model_name`: A human readable name for the model.
- `dataset`: The Vertex fully qualified dataset identifier.
- `schema`: The dataset labeling (annotation) training schema.
- `task`: A dictionary describing the requirements for the training job.
The helper function calls the `Pipeline` client service's method `create_pipeline`, which takes the following parameters:
- `parent`: The Vertex location root path for your `Dataset`, `Model` and `Endpoint` resources.
- `training_pipeline`: the full specification for the pipeline training job.
Let's now look deeper into the *minimal* requirements for constructing a `training_pipeline` specification:
- `display_name`: A human readable name for the pipeline job.
- `training_task_definition`: The dataset labeling (annotation) training schema.
- `training_task_inputs`: A dictionary describing the requirements for the training job.
- `model_to_upload`: A human readable name for the model.
- `input_data_config`: The dataset specification.
- `dataset_id`: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
- `fraction_split`: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
```
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
```
### Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the `task` field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the `json_format.ParseDict` method for the conversion.
The minimal fields you need to specify are:
- `multi_label`: Whether this is a multi-label (vs. single-label) classification (True/False).
- `budget_milli_node_hours`: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.
- `model_type`: The type of deployed model:
- `CLOUD`: For deploying to Google Cloud.
- `disable_early_stopping`: Whether to prevent AutoML from using its judgement to stop training early, and instead train for the entire budget (True/False).
Finally, you create the pipeline by calling the helper function `create_pipeline`, which returns an instance of a training pipeline object.
```
PIPE_NAME = "biomedical_pipe-" + TIMESTAMP
MODEL_NAME = "biomedical_model-" + TIMESTAMP
task = json_format.ParseDict(
{
"multi_label": False,
"budget_milli_node_hours": 8000,
"model_type": "CLOUD",
"disable_early_stopping": False,
},
Value(),
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
```
Now save the unique identifier of the training pipeline you created.
```
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
```
### Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function fetches it by calling the pipeline client service's `get_training_pipeline` method, with the following parameter:
- `name`: The Vertex fully qualified pipeline identifier.
When the model is done training, the pipeline state will be `PIPELINE_STATE_SUCCEEDED`.
```
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
```
# Deployment
Training the above model may take upwards of 120 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting `end_time` from `start_time`. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance, in the field `model_to_upload.name`.
```
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
```
## Model information
Now that your model is trained, you can get some information on your model.
## Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
### List evaluations for all slices
Use this helper function `list_model_evaluations`, which takes the following parameter:
- `name`: The Vertex fully qualified model identifier for the `Model` resource.
This helper function uses the model client service's `list_model_evaluations` method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.
For each evaluation -- you probably have only one -- print all the key names for each metric in the evaluation, and for a small set (`confusionMatrix` and `confidenceMetrics`) print the result.
```
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("confusionMatrix", metrics["confusionMatrix"])
print("confidenceMetrics", metrics["confidenceMetrics"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
```
## Deploy the `Model` resource
Now deploy the trained Vertex `Model` resource you created with AutoML. This requires two steps:
1. Create an `Endpoint` resource for deploying the `Model` resource to.
2. Deploy the `Model` resource to the `Endpoint` resource.
### Create an `Endpoint` resource
Use this helper function `create_endpoint` to create an endpoint to deploy the model to for serving predictions, with the following parameter:
- `display_name`: A human readable name for the `Endpoint` resource.
The helper function uses the endpoint client service's `create_endpoint` method, which takes the following parameter:
- `display_name`: A human readable name for the `Endpoint` resource.
Creating an `Endpoint` resource returns a long running operation, since it may take a few moments to provision the `Endpoint` resource for serving. You call `response.result()`, which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the `Endpoint` resource: `response.name`.
```
ENDPOINT_NAME = "biomedical_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
```
Now get the unique identifier for the `Endpoint` resource you created.
```
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
```
### Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
- Single Instance: The online prediction requests are processed on a single compute instance.
- Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.
- Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.
- Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.
- Auto Scaling: The online prediction requests are split across a scaleable number of compute instances.
 - Set the minimum (`MIN_NODES`) number of compute instances to provision when a model is first deployed, and set the maximum (`MAX_NODES`) number of compute instances that may be provisioned, depending on load conditions.
The minimum number of compute instances corresponds to the field `min_replica_count` and the maximum number of compute instances corresponds to the field `max_replica_count`, in your subsequent deployment request.
```
MIN_NODES = 1
MAX_NODES = 1
```
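The three scaling choices above all map onto the same two replica-count fields. As an illustrative sketch (the node counts are example values, not recommendations), the configurations would look like this:

```python
# Illustrative sketch: how the three scaling choices map onto the
# min/max replica-count fields of the deployment request.
single_instance = {"min_replica_count": 1, "max_replica_count": 1}
manual_scaling = {"min_replica_count": 3, "max_replica_count": 3}  # fixed fleet of 3
auto_scaling = {"min_replica_count": 1, "max_replica_count": 5}    # scales 1..5 with load
```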
### Deploy `Model` resource to the `Endpoint` resource
Use this helper function `deploy_model` to deploy the `Model` resource to the `Endpoint` resource you created for serving predictions, with the following parameters:
- `model`: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
- `deploy_model_display_name`: A human readable name for the deployed model.
- `endpoint`: The Vertex fully qualified endpoint identifier to deploy the model to.
The helper function calls the `Endpoint` client service's method `deploy_model`, which takes the following parameters:
- `endpoint`: The Vertex fully qualified `Endpoint` resource identifier to deploy the `Model` resource to.
- `deployed_model`: The requirements specification for deploying the model.
- `traffic_split`: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
- If only one model, then specify as **{ "0": 100 }**, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
- If there are existing models on the endpoint, for which the traffic will be split, then use `model_id` to specify as **{ "0": percent, model_id: percent, ... }**, where `model_id` is the model id of an existing model to the deployed endpoint. The percents must add up to 100.
Let's now dive deeper into the `deployed_model` parameter. This parameter is specified as a Python dictionary with the minimum required fields:
- `model`: The Vertex fully qualified model identifier of the (upload) model to deploy.
- `display_name`: A human readable name for the deployed model.
- `disable_container_logging`: This disables logging of container events, such as execution failures (default is container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.
- `automatic_resources`: The minimum and maximum number of compute instances (replicas) to provision. For this example, we set both to one (no replication).
#### Traffic Split
Let's now dive deeper into the `traffic_split` parameter. This parameter is specified as a Python dictionary. It may at first be a bit confusing: you can deploy more than one instance of your model to an endpoint, and then set what percentage of the traffic goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got a better model evaluation on v2, but you don't know for certain that it is really better until you deploy it to production. So with a traffic split, you might deploy v2 to the same endpoint as v1 but give it only, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
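The arithmetic of such a split can be sketched with a small helper. This function is hypothetical (not part of the Vertex SDK): it gives the newly deployed model a fixed share and rescales the existing models so the percents still add up to 100.

```python
# Hypothetical helper (not part of the Vertex SDK): build a traffic_split
# dict that gives the newly deployed model `new_percent` of the traffic
# and rescales the existing deployed models to fill the remainder.
def build_traffic_split(existing, new_percent):
    remaining = 100 - new_percent
    total = sum(existing.values())
    split = {"0": new_percent}  # "0" refers to the model being deployed
    for model_id, percent in existing.items():
        split[model_id] = round(percent * remaining / total)
    # fix any rounding drift so the percents add up to exactly 100
    drift = 100 - sum(split.values())
    if existing:
        first = next(iter(existing))
        split[first] += drift
    return split

print(build_traffic_split({"4321": 100}, 10))  # {'0': 10, '4321': 90}
```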
#### Response
The method returns a long running operation `response`. We will wait synchronously for the operation to complete by calling `response.result()`, which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
```
DEPLOYED_NAME = "biomedical_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"automatic_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
},
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
```
## Make an online prediction request
Now make an online prediction request to your deployed model.
### Make test item
You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
```
test_item = 'Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of individuals who appeared to be TSD heterozygotes was detected ( Kelly et al . , 1975 ) . Clinical and biochemical evidence suggested that the increased carrier frequency was due to at least two altered alleles for the hexosaminidase A alpha-subunit . We now report two mutant alleles in this Pennsylvania Dutch kindred , and one polymorphism . One allele , reported originally in a French TSD patient ( Akli et al . , 1991 ) , is a GT-- > AT transition at the donor splice-site of intron 9 . The second , a C-- > T transition at nucleotide 739 ( Arg247Trp ) , has been shown by Triggs-Raine et al . ( 1992 ) to be a clinically benign " pseudodeficient " allele associated with reduced enzyme activity against artificial substrate . Finally , a polymorphism [ G-- > A ( 759 ) ] , which leaves valine at codon 253 unchanged , is described'
```
### Make a prediction
Now you have a test item. Use this helper function `predict_item`, which takes the following parameters:
- `data`: The test item -- the text content to predict on.
- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource was deployed.
- `parameters_dict`: Additional filtering parameters for serving prediction results.
This function calls the prediction client service's `predict` method with the following parameters:
- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource was deployed.
- `instances`: A list of instances (text files) to predict.
- `parameters`: Additional filtering parameters for serving prediction results. *Note*, text models do not support additional parameters.
#### Request
The format of each instance is:
{ 'content': text_item }
Since the `predict()` method can take multiple items (instances), you send your single test item as a list of one test item. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the `predict()` method.
#### Response
The `response` object returns a list, where each element corresponds to a data item in the request. You will see in the output for each prediction -- in our case there is just one:
- `prediction`: A list of IDs assigned to each entity extracted from the text.
- `confidences`: The confidence level between 0 and 1 for each entity.
- `display_names`: The label name for each entity.
- `textSegmentStartOffsets`: The character start location of the entity in the text.
- `textSegmentEndOffsets`: The character end location of the entity in the text.
```
def predict_item(data, endpoint, parameters_dict):
parameters = json_format.ParseDict(parameters_dict, Value())
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [{"content": data}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", dict(prediction))
return response
response = predict_item(test_item, endpoint_id, None)
```
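The parallel lists in each prediction are easier to work with once zipped into per-entity tuples. The sketch below assumes a plain-dict prediction with the field names described above; in the live response the values are protobuf structures, so convert them (for example with `dict(prediction)`) before using a helper like this.

```python
# Sketch: turn one prediction (as a plain dict, using the field names from
# the response description above) into (label, confidence, span) tuples.
# Assumption: values are plain lists, not protobuf wrappers.
def extract_entities(prediction, text):
    entities = []
    for label, conf, start, end in zip(
        prediction["displayNames"],
        prediction["confidences"],
        prediction["textSegmentStartOffsets"],
        prediction["textSegmentEndOffsets"],
    ):
        entities.append((label, conf, text[int(start):int(end)]))
    return entities

sample = {
    "displayNames": ["SpecificDisease"],
    "confidences": [0.98],
    "textSegmentStartOffsets": [0],
    "textSegmentEndOffsets": [17],
}
print(extract_entities(sample, "Tay-Sachs disease is inherited."))
```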
## Undeploy the `Model` resource
Now undeploy your `Model` resource from the serving `Endpoint` resource. Use this helper function `undeploy_model`, which takes the following parameters:
- `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed to.
- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` is deployed to.
This function calls the endpoint client service's method `undeploy_model`, with the following parameters:
- `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed.
- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource is deployed.
- `traffic_split`: How to split traffic among the remaining deployed models on the `Endpoint` resource.
Since this is the only deployed model on the `Endpoint` resource, you can simply leave `traffic_split` empty by setting it to {}.
```
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
```
# Cleaning up
To clean up all GCP resources used in this project, you can [delete the GCP
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
- Dataset
- Pipeline
- Model
- Endpoint
- Batch Job
- Custom Job
- Hyperparameter Tuning Job
- Cloud Storage Bucket
```
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
```
.. meta::
:description: A guide which introduces the most important steps to get started with pymoo, an open-source multi-objective optimization framework in Python.
.. meta::
:keywords: Multi-objective Optimization, Python, Evolutionary Computation, Optimization Test Problem, Hypervolume
```
%%capture
%run part_2.ipynb
```
# Part IV: Analysis of Convergence
**Great!** So far, we have executed an algorithm and obtained a solution set. But let us not stop here without knowing how the algorithm has performed. This will also tell us how we should define a termination criterion if we solve the problem again. The convergence analysis considers two cases: i) the Pareto front is not known, or ii) the Pareto front has been derived analytically, or a reasonable approximation exists.
## Result
To further check how close the results match the analytically derived optimum, we have to convert the objective space values to the original definition where the second objective $f_2$ was maximized. Plotting then the Pareto-front shows how close the algorithm was able to converge.
```
from pymoo.util.misc import stack
class MyTestProblem(MyProblem):
def _calc_pareto_front(self, flatten=True, *args, **kwargs):
f2 = lambda f1: ((f1/100) ** 0.5 - 1)**2
F1_a, F1_b = np.linspace(1, 16, 300), np.linspace(36, 81, 300)
F2_a, F2_b = f2(F1_a), f2(F1_b)
pf_a = np.column_stack([F1_a, F2_a])
pf_b = np.column_stack([F1_b, F2_b])
return stack(pf_a, pf_b, flatten=flatten)
def _calc_pareto_set(self, flatten=True, *args, **kwargs):
x1_a = np.linspace(0.1, 0.4, 50)
x1_b = np.linspace(0.6, 0.9, 50)
x2 = np.zeros(50)
a, b = np.column_stack([x1_a, x2]), np.column_stack([x1_b, x2])
return stack(a,b, flatten=flatten)
problem = MyTestProblem()
```
For IGD, the Pareto front needs to be known or to be approximated.
In our framework, the Pareto front of **test problems** can be obtained by:
```
pf_a, pf_b = problem.pareto_front(use_cache=False, flatten=False)
pf = problem.pareto_front(use_cache=False, flatten=True)
plt.figure(figsize=(7, 5))
plt.scatter(F[:, 0], F[:, 1], s=30, facecolors='none', edgecolors='b', label="Solutions")
plt.plot(pf_a[:, 0], pf_a[:, 1], alpha=0.5, linewidth=2.0, color="red", label="Pareto-front")
plt.plot(pf_b[:, 0], pf_b[:, 1], alpha=0.5, linewidth=2.0, color="red")
plt.title("Objective Space")
plt.legend()
plt.show()
```
Whether the optimum for your problem is known or not, we encourage all end-users of *pymoo* not to skip the analysis of the obtained solution set. Visualizations for high-dimensional objective spaces (in design and/or objective space) are also provided and shown [here](../visualization/index.ipynb).
In **Part II**, we ran the algorithm without keeping track of the optimization progress. However, for analyzing the convergence, historical data need to be stored. One way of accomplishing that is enabling the `save_history` flag, which stores a deep copy of the algorithm object in each iteration and saves it in the `Result` object. This approach is more memory-intensive (especially for many iterations) but has the advantage that **any** algorithm-dependent variable can be analyzed posteriorly.
A non-negligible step is the post-processing after having obtained the results. We strongly recommend not only analyzing the final result but also the algorithm's behavior. This gives more insights into the convergence of the algorithm.
For such an analysis, intermediate steps of the algorithm need to be considered. This can either be achieved by:
- A `Callback` class storing the necessary information in each iteration of the algorithm.
- Enabling the `save_history` flag when calling the minimize method to store a deep copy of the algorithm object in each iteration.
We provide some more details about each variant in our [convergence](../misc/convergence.ipynb) tutorial.
As you might have already seen, we set `save_history=True` when calling the `minimize` method in this getting started guide and, thus, will use the `history` for our analysis. Moreover, we need to decide what metric should be used to measure the performance of our algorithm. In this tutorial, we are going to use `Hypervolume` and `IGD`. Feel free to look at our [performance indicators](../misc/indicators.ipynb) for more information about metrics to measure the performance of multi-objective algorithms.
```
from pymoo.optimize import minimize
res = minimize(problem,
algorithm,
("n_gen", 40),
seed=1,
save_history=True,
verbose=False)
X, F = res.opt.get("X", "F")
hist = res.history
print(len(hist))
```
From the `history` it is relatively easy to extract the information we need for an analysis.
```
n_evals = []             # corresponding number of function evaluations
hist_F = [] # the objective space values in each generation
hist_cv = [] # constraint violation in each generation
hist_cv_avg = [] # average constraint violation in the whole population
for algo in hist:
# store the number of function evaluations
n_evals.append(algo.evaluator.n_eval)
# retrieve the optimum from the algorithm
opt = algo.opt
# store the least constraint violation and the average in each population
hist_cv.append(opt.get("CV").min())
hist_cv_avg.append(algo.pop.get("CV").mean())
# filter out only the feasible solutions and append the objective space values
feas = np.where(opt.get("feasible"))[0]
hist_F.append(opt.get("F")[feas])
```
## Constraint Satisfaction
First, let us quickly see when the first feasible solution has been found:
```
k = np.where(np.array(hist_cv) <= 0.0)[0].min()
print(f"At least one feasible solution in Generation {k} after {n_evals[k]} evaluations.")
```
Because this problem does not have much complexity, a feasible solution was found right away. Nevertheless, this can be entirely different for your optimization problem and is also worth being analyzed first.
```
# replace this line by `hist_cv` if you like to analyze the least feasible optimal solution and not the population
vals = hist_cv_avg
k = np.where(np.array(vals) <= 0.0)[0].min()
print(f"Whole population feasible in Generation {k} after {n_evals[k]} evaluations.")
plt.figure(figsize=(7, 5))
plt.plot(n_evals, vals, color='black', lw=0.7, label="Avg. CV of Pop")
plt.scatter(n_evals, vals, facecolor="none", edgecolor='black', marker="p")
plt.axvline(n_evals[k], color="red", label="All Feasible", linestyle="--")
plt.title("Convergence")
plt.xlabel("Function Evaluations")
plt.ylabel("Constraint Violation")
plt.legend()
plt.show()
```
## Pareto-front is unknown
If the Pareto front is not known, we cannot know whether the algorithm has converged to the true optimum -- at least not without further information. However, we can see when the algorithm made most of its progress during the optimization and thus whether the number of iterations should be smaller or larger. Additionally, the metrics serve to compare two algorithms with each other.
In multi-objective optimization, **normalization** is very important. For that reason, the hypervolume below is computed on a set normalized by the bounds estimated from the obtained solutions (the approximated ideal and nadir points). More details about normalization will be shown later.
### Hypervolume (HV)
Hypervolume is a very well-known performance indicator for multi-objective problems. It is Pareto-compliant and is based on the volume between a predefined reference point and the solution provided. Therefore, hypervolume requires defining a reference point `ref_point`, which shall be larger than the maximum value of the Pareto front.
```
approx_ideal = F.min(axis=0)
approx_nadir = F.max(axis=0)
from pymoo.indicators.hv import Hypervolume
metric = Hypervolume(ref_point= np.array([1.1, 1.1]),
norm_ref_point=False,
zero_to_one=True,
ideal=approx_ideal,
nadir=approx_nadir)
hv = [metric.do(_F) for _F in hist_F]
plt.figure(figsize=(7, 5))
plt.plot(n_evals, hv, color='black', lw=0.7, label="Hypervolume")
plt.scatter(n_evals, hv, facecolor="none", edgecolor='black', marker="p")
plt.title("Convergence")
plt.xlabel("Function Evaluations")
plt.ylabel("Hypervolume")
plt.show()
```
**Note:** Hypervolume becomes computationally expensive with increasing dimensionality. The exact hypervolume can be calculated efficiently for 2 and 3 objectives. For higher dimensions, some researchers use a hypervolume approximation, which is not available yet in pymoo.
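For two objectives the exact computation really is cheap. As a self-contained illustration (a sketch, not pymoo's implementation), a sorting-based sweep adds one rectangle per point of a non-dominated set, assuming minimization and a reference point component-wise worse than every point:

```python
import numpy as np

# Minimal 2-D hypervolume sweep (sketch, not pymoo's implementation).
# Assumes: minimization, F is a non-dominated set, and every point is
# component-wise better than ref_point.
def hypervolume_2d(F, ref_point):
    F = F[np.argsort(F[:, 0])]            # sort by f1; f2 then descends
    xs = np.append(F[:, 0], ref_point[0])  # right edge of each rectangle
    return float(sum((xs[i + 1] - xs[i]) * (ref_point[1] - F[i, 1])
                     for i in range(len(F))))

F = np.array([[1.0, 3.0], [2.0, 1.0]])
print(hypervolume_2d(F, np.array([4.0, 4.0])))  # 7.0
```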
### Running Metric
Another way of analyzing a run when the true Pareto front is **not** known is the recently proposed [running metric](https://www.egr.msu.edu/~kdeb/papers/c2020003.pdf). The running metric shows the difference in the objective space from one generation to another and uses the algorithm's survival to visualize the improvement.
This metric is also being used in pymoo to determine the termination of a multi-objective optimization algorithm if no default termination criteria have been defined.
For instance, this analysis reveals that the algorithm improved from the 4th to the 5th generation significantly.
```
from pymoo.util.running_metric import RunningMetric
running = RunningMetric(delta_gen=5,
n_plots=3,
only_if_n_plots=True,
key_press=False,
do_show=True)
for algorithm in res.history[:15]:
running.notify(algorithm)
```
Plotting until the final population shows that the algorithm seems to have more or less converged, and only a slight improvement has been made.
```
from pymoo.util.running_metric import RunningMetric
running = RunningMetric(delta_gen=10,
n_plots=4,
only_if_n_plots=True,
key_press=False,
do_show=True)
for algorithm in res.history:
running.notify(algorithm)
```
## Pareto-front is known or approximated
### IGD/GD/IGD+/GD+
The Pareto-front for a problem can either be provided manually or directly implemented in the `Problem` definition to analyze the run on the fly. Here, we show an example of using the history of the algorithm as an additional post-processing step.
For real-world problems, you have to use an **approximation**. An approximation can be obtained by running an algorithm a couple of times and extracting the non-dominated solutions out of all solution sets. If you have only a single run, an alternative is to use the obtained non-dominated set of solutions as an approximation. However, then the result only indicates how much progress the algorithm has made in converging to the final set.
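The merging step described above can be sketched with plain NumPy: stack the objective values of several runs and keep only the non-dominated points. (pymoo also ships a non-dominated sorting utility; this sketch keeps the idea self-contained.)

```python
import numpy as np

# Sketch: approximate a Pareto front by stacking objective values from
# several runs and keeping only the non-dominated points (minimization).
def non_dominated(F):
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # point j dominates point i if it is no worse in every
            # objective and strictly better in at least one
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                keep[i] = False
                break
    return keep

runs = [np.array([[1.0, 3.0], [3.0, 2.0]]), np.array([[2.0, 1.0], [4.0, 4.0]])]
F_all = np.vstack(runs)
pf_approx = F_all[non_dominated(F_all)]
print(pf_approx)  # [[1. 3.] [2. 1.]]
```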
```
from pymoo.indicators.igd import IGD
metric = IGD(pf, zero_to_one=True)
igd = [metric.do(_F) for _F in hist_F]
plt.plot(n_evals, igd, color='black', lw=0.7, label="IGD")
plt.scatter(n_evals, igd, facecolor="none", edgecolor='black', marker="p")
plt.axhline(10**-2, color="red", label="10^-2", linestyle="--")
plt.title("Convergence")
plt.xlabel("Function Evaluations")
plt.ylabel("IGD")
plt.yscale("log")
plt.legend()
plt.show()
from pymoo.indicators.igd_plus import IGDPlus
metric = IGDPlus(pf, zero_to_one=True)
igd = [metric.do(_F) for _F in hist_F]
plt.plot(n_evals, igd, color='black', lw=0.7, label="IGD+")
plt.scatter(n_evals, igd, facecolor="none", edgecolor='black', marker="p")
plt.axhline(10**-2, color="red", label="10^-2", linestyle="--")
plt.title("Convergence")
plt.xlabel("Function Evaluations")
plt.ylabel("IGD+")
plt.yscale("log")
plt.legend()
plt.show()
```