<center>
<img src="../../img/ods_stickers.jpg">
## Open Machine Learning Course. Session № 2
Author: Yury Kashnitsky, research programmer at Mail.ru Group and senior lecturer at the Faculty of Computer Science, HSE. This material is distributed under the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. It may be used for any purpose (editing, correcting, and building upon it) except commercial use, with mandatory attribution of the author.
# <center> Topic 5. Ensembles of algorithms, random forest
## <center>Practice. Decision trees and random forests in the Kaggle Inclass credit scoring competition
There is no answer web form here; track your progress on the [competition](https://inclass.kaggle.com/c/beeline-credit-scoring-competition-2) leaderboard, [link](https://www.kaggle.com/t/115237dd8c5e4092a219a0c12bf66fc6) to join.
The task is credit scoring.
Bank client features:
- Age - age (real-valued)
- Income - monthly income (real-valued)
- BalanceToCreditLimit - ratio of credit card balance to credit limit (real-valued)
- DIR - Debt-to-income Ratio (real-valued)
- NumLoans - number of loans and credit lines
- NumRealEstateLoans - number of mortgages and real-estate loans (natural number)
- NumDependents - number of family members the client supports, excluding the client (natural number)
- Num30-59Delinquencies - number of 30-59-day payment delinquencies (natural number)
- Num60-89Delinquencies - number of 60-89-day payment delinquencies (natural number)
- Delinquent90 - whether there was a payment delinquency of more than 90 days (binary) - present only in the training set
```
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_auc_score
%matplotlib inline
```
**Load the data.**
```
train_df = pd.read_csv('../../data/credit_scoring_train.csv', index_col='client_id')
test_df = pd.read_csv('../../data/credit_scoring_test.csv', index_col='client_id')
y = train_df['Delinquent90']
train_df.drop('Delinquent90', axis=1, inplace=True)
train_df.head()
```
**Look at the number of missing values in each feature.**
```
train_df.info()
test_df.info()
```
**Replace missing values with the median.**
```
train_df['NumDependents'] = train_df['NumDependents'].fillna(train_df['NumDependents'].median())
train_df['Income'] = train_df['Income'].fillna(train_df['Income'].median())
test_df['NumDependents'] = test_df['NumDependents'].fillna(test_df['NumDependents'].median())
test_df['Income'] = test_df['Income'].fillna(test_df['Income'].median())
```
### Decision tree without parameter tuning
**Train a decision tree of maximum depth 3; use the parameter random_state=17 for reproducibility.**
```
first_tree = # Your code here
first_tree.fit # Your code here
```
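One possible way to fill in the cell above; on the real data it would be `first_tree = DecisionTreeClassifier(max_depth=3, random_state=17)` followed by `first_tree.fit(train_df, y)`. Sketched here on toy stand-in data (`toy_X` / `toy_y` are made up, since the competition CSVs may not be at hand):

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for the real train_df / y loaded from the CSVs above
rng = np.random.RandomState(17)
toy_X = pd.DataFrame(rng.rand(200, 3), columns=["Age", "Income", "DIR"])
toy_y = pd.Series((toy_X["Income"] > 0.5).astype(int))

first_tree = DecisionTreeClassifier(max_depth=3, random_state=17)
first_tree.fit(toy_X, toy_y)
```

Setting `max_depth=3` caps the tree, so `first_tree.get_depth()` never exceeds 3 regardless of the data.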
**Make a prediction for the test set.**
```
first_tree_pred = first_tree # Your code here
```
**Write the prediction to a file.**
```
def write_to_submission_file(predicted_labels, out_file,
target='Delinquent90', index_label="client_id"):
# turn predictions into data frame and save as csv file
predicted_df = pd.DataFrame(predicted_labels,
index = np.arange(75000,
predicted_labels.shape[0] + 75000),
columns=[target])
predicted_df.to_csv(out_file, index_label=index_label)
write_to_submission_file(first_tree_pred, 'credit_scoring_first_tree.csv')
```
**If you predict default probabilities for the clients in the test set, the result will be much better.**
```
first_tree_pred_probs = first_tree.predict_proba(test_df)[:, 1]
write_to_submission_file # Your code here
```
## Decision tree with parameter tuning via GridSearch
**Tune the tree parameters with `GridSearchCV`, and look at the best parameter combination and the mean score on 5-fold cross-validation. Use the parameter `random_state=17` (for reproducibility) and don't forget parallelization (`n_jobs=-1`).**
```
tree_params = {'max_depth': list(range(3, 8)),
'min_samples_leaf': list(range(5, 13))}
locally_best_tree = GridSearchCV # Your code here
locally_best_tree.fit # Your code here
locally_best_tree.best_params_, round(locally_best_tree.best_score_, 3)
```
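A possible fill-in for the cell above, again demonstrated on toy stand-in data. Using `scoring='roc_auc'` is an assumption on my part (it matches the usual metric for this competition):

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV

# Toy stand-in for the real train_df / y
rng = np.random.RandomState(17)
toy_X = pd.DataFrame(rng.rand(200, 3), columns=["Age", "Income", "DIR"])
toy_y = pd.Series((toy_X["Income"] > 0.5).astype(int))

tree_params = {'max_depth': list(range(3, 8)),
               'min_samples_leaf': list(range(5, 13))}
locally_best_tree = GridSearchCV(DecisionTreeClassifier(random_state=17),
                                 tree_params, cv=5, n_jobs=-1, scoring='roc_auc')
locally_best_tree.fit(toy_X, toy_y)
print(locally_best_tree.best_params_, round(locally_best_tree.best_score_, 3))
```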
**Make a prediction for the test set and submit it to Kaggle.**
```
tuned_tree_pred_probs = locally_best_tree # Your code here
write_to_submission_file # Your code here
```
### Random forest without parameter tuning
**Train a random forest of unlimited-depth trees; use the parameter `random_state=17` for reproducibility.**
```
first_forest = # Your code here
first_forest.fit # Your code here
first_forest_pred = first_forest # Your code here
```
**Make a prediction for the test set and submit it to Kaggle.**
```
write_to_submission_file # Your code here
```
### Random forest with parameter tuning
**Tune the forest's `max_features` parameter with `GridSearchCV`, and look at the best parameter combination and the mean score on 5-fold cross-validation. Use the parameter random_state=17 (for reproducibility) and don't forget parallelization (n_jobs=-1).**
```
%%time
forest_params = {'max_features': np.linspace(.3, 1, 7)}
locally_best_forest = GridSearchCV # Your code here
locally_best_forest.fit # Your code here
locally_best_forest.best_params_, round(locally_best_forest.best_score_, 3)
tuned_forest_pred = locally_best_forest # Your code here
write_to_submission_file # Your code here
```
**Look at how the tuned random forest ranks feature importances by their influence on the target. Present the results clearly with a `DataFrame`.**
```
pd.DataFrame(locally_best_forest.best_estimator_.feature_importances_ # Your code here
```
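A compact way to present the importances. The `importances` array and feature names below are made up for illustration; on the real data you would use `locally_best_forest.best_estimator_.feature_importances_` and `train_df.columns`:

```python
import pandas as pd

# Hypothetical importances standing in for feature_importances_
importances = [0.35, 0.05, 0.60]
feature_names = ["Age", "Income", "DIR"]

imp_df = (pd.DataFrame({"importance": importances}, index=feature_names)
            .sort_values("importance", ascending=False))
print(imp_df)
```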
**Usually, increasing the number of trees only improves the result. So, finally, train a random forest of 300 trees with the best parameters found. This may take a few minutes.**
```
%%time
final_forest = RandomForestClassifier # Your code here
final_forest.fit(train_df, y)
final_forest_pred = final_forest.predict_proba(test_df)[:, 1]
write_to_submission_file(final_forest_pred, 'credit_scoring_final_forest.csv')
```
**Submit to Kaggle.**
# High-level Chainer Example
```
# Parameters
EPOCHS = 10
N_CLASSES=10
BATCHSIZE = 64
LR = 0.01
MOMENTUM = 0.9
GPU = True
LOGGER_URL='msdlvm.southcentralus.cloudapp.azure.com'
LOGGER_USRENAME='admin'
LOGGER_PASSWORD='password'
LOGGER_DB='gpudata'
LOGGER_SERIES='gpu'
import os
from os import path
import sys
import numpy as np
import math
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import optimizers
from chainer import cuda
from utils import cifar_for_library, yield_mb, create_logger, Timer
from gpumon.influxdb import log_context
from influxdb import InfluxDBClient
client = InfluxDBClient(LOGGER_URL, 8086, LOGGER_USRENAME, LOGGER_PASSWORD, LOGGER_DB)
node_id = os.getenv('AZ_BATCH_NODE_ID', default='node')
task_id = os.getenv('AZ_BATCH_TASK_ID', default='chainer')
job_id = os.getenv('AZ_BATCH_JOB_ID', default='chainer')
logger = create_logger(client, node_id=node_id, task_id=task_id, job_id=job_id)
print("OS: ", sys.platform)
print("Python: ", sys.version)
print("Chainer: ", chainer.__version__)
print("Numpy: ", np.__version__)
data_path = path.join(os.getenv('AZ_BATCHAI_INPUT_DATASET'), 'cifar-10-batches-py')
class SymbolModule(chainer.Chain):
def __init__(self):
super(SymbolModule, self).__init__(
conv1=L.Convolution2D(3, 50, ksize=(3,3), pad=(1,1)),
conv2=L.Convolution2D(50, 50, ksize=(3,3), pad=(1,1)),
conv3=L.Convolution2D(50, 100, ksize=(3,3), pad=(1,1)),
conv4=L.Convolution2D(100, 100, ksize=(3,3), pad=(1,1)),
# feature map size is 8*8 by pooling
fc1=L.Linear(100*8*8, 512),
fc2=L.Linear(512, N_CLASSES),
)
def __call__(self, x):
h = F.relu(self.conv2(F.relu(self.conv1(x))))
h = F.max_pooling_2d(h, ksize=(2,2), stride=(2,2))
h = F.dropout(h, 0.25)
h = F.relu(self.conv4(F.relu(self.conv3(h))))
h = F.max_pooling_2d(h, ksize=(2,2), stride=(2,2))
h = F.dropout(h, 0.25)
h = F.dropout(F.relu(self.fc1(h)), 0.5)
return self.fc2(h)
def init_model(m):
optimizer = optimizers.MomentumSGD(lr=LR, momentum=MOMENTUM)
optimizer.setup(m)
return optimizer
def to_chainer(array, **kwargs):
return chainer.Variable(cuda.to_gpu(array), **kwargs)
%%time
# Data into format for library
x_train, x_test, y_train, y_test = cifar_for_library(data_path, channel_first=True)
print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)
print(x_train.dtype, x_test.dtype, y_train.dtype, y_test.dtype)
%%time
# Create symbol
sym = SymbolModule()
if GPU:
chainer.cuda.get_device(0).use() # Make a specified GPU current
sym.to_gpu() # Copy the model to the GPU
%%time
optimizer = init_model(sym)
with Timer() as t:
with log_context(LOGGER_URL, LOGGER_USRENAME, LOGGER_PASSWORD, LOGGER_DB, LOGGER_SERIES,
node_id=node_id, task_id=task_id, job_id=job_id):
for j in range(EPOCHS):
for data, target in yield_mb(x_train, y_train, BATCHSIZE, shuffle=True):
# Get samples
optimizer.update(L.Classifier(sym), to_chainer(data), to_chainer(target))
# Log
print(j)
print('Training took %.03f sec.' % t.interval)
logger('training duration', value=t.interval)
%%time
n_samples = (y_test.shape[0]//BATCHSIZE)*BATCHSIZE
y_guess = np.zeros(n_samples, dtype=np.int64)
y_truth = y_test[:n_samples]
c = 0
with chainer.using_config('train', False):
for data, target in yield_mb(x_test, y_test, BATCHSIZE):
# Forwards
pred = chainer.cuda.to_cpu(sym(to_chainer(data)).data.argmax(-1))
# Collect results
y_guess[c*BATCHSIZE:(c+1)*BATCHSIZE] = pred
c += 1
acc=sum(y_guess == y_truth)/len(y_guess)
print("Accuracy: ", acc)
logger('accuracy', value=acc)
```
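`yield_mb` comes from a local `utils` module that is not shown here. A minimal sketch of what such a minibatch generator might look like (the exact shuffling and last-batch behavior is an assumption):

```python
import numpy as np

def yield_mb(X, y, batchsize=64, shuffle=False):
    # Yield (data, label) minibatches, dropping the last incomplete batch
    if shuffle:
        idx = np.random.permutation(len(X))
        X, y = X[idx], y[idx]
    for i in range(len(X) // batchsize):
        s = slice(i * batchsize, (i + 1) * batchsize)
        yield X[s], y[s]
```

This matches how the training and evaluation loops above consume it: one `(data, target)` pair per step.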
# KEN 3140 Semantic Web: Lab 5 🧪
### Writing and executing "complex" SPARQL queries on RDF graphs
**Reference specifications: https://www.w3.org/TR/sparql11-query/**
We will use the **DBpedia SPARQL endpoint**:
>**https://dbpedia.org/sparql**
And **SPARQL query editor YASGUI**:
> **https://yasgui.triply.cc**
# Install the SPARQL kernel
This notebook uses the SPARQL Kernel to define and **execute SPARQL queries in the notebook** code cells.
You can **install the SPARQL Kernel** locally (or with Conda):
```shell
pip install sparqlkernel --user
jupyter sparqlkernel install --user
```
Or use a Docker image (similar to the one for Java):
```shell
docker run -it --rm -p 8888:8888 -v $(pwd):/home/jovyan -e JUPYTER_ENABLE_LAB=yes -e JUPYTER_TOKEN=YOURPASSWORD umids/jupyterlab:sparql
```
To start running SPARQL queries in this notebook, we need to define the **SPARQL kernel parameters**:
```
# Define the SPARQL endpoint to query
%endpoint http://dbpedia.org/sparql
# This is optional, it would increase the log level
%log debug
# Uncomment the next line to return label in english and avoid duplicates
# %lang en
```
# Perform an arithmetic operation
Calculate the GDP per capita of countries from `dbp:gdpNominal` and `dbo:populationTotal`
Starting from this query:
```sparql
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbp: <http://dbpedia.org/property/>
SELECT ?country ?gdpValue ?population
WHERE {
?country dbp:gdpNominal ?gdpValue ;
dbo:populationTotal ?population .
} LIMIT 10
```
**Impossible due to different datatypes** 🚫
The GDP is in `http://dbpedia.org/datatype/usDollar`, and the population is a `xsd:nonNegativeInteger`:
```
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbp: <http://dbpedia.org/property/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
SELECT ?country ?gdpValue (datatype(?gdpValue) AS ?gdpType) ?population (datatype(?population) AS ?populationType) (?gdpValue / ?population AS ?gdpPerCapita)
WHERE {
?country dbp:gdpNominal ?gdpValue ;
dbo:populationTotal ?population .
} LIMIT 10
```
# Cast a variable to a specific datatype
Especially useful when **comparing or performing arithmetic operations on 2 variables**. Use the `xsd:` prefix for standard datatypes.
Here we divide a value in `usDollar` by a `nonNegativeInteger`, casting both to `xsd:integer`, to calculate the GDP per capita of each country 💶
```
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbp: <http://dbpedia.org/property/>
SELECT ?country ?gdpValue ?population (xsd:integer(?gdpValue) / xsd:integer(?population) AS ?gdpPerCapita)
WHERE {
?country dbp:gdpNominal ?gdpValue ;
dbo:populationTotal ?population .
} LIMIT 10
```
# Bind a new variable
* Use the `concat()` function to add "http://country.org/" at the start of a country ISO code.
* Use `BIND` to bind the produced string to a variable
* Make this string a URI using the `uri()` function
Start from this query:
```sparql
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbp: <http://dbpedia.org/property/>
SELECT *
WHERE {
?country a dbo:Country ;
dbp:iso31661Alpha ?isoCode .
} LIMIT 10
```
Solution:
```
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbp: <http://dbpedia.org/property/>
SELECT *
WHERE {
?country a dbo:Country ;
dbp:iso31661Alpha ?isoCode .
BIND(uri(concat("http://country.org/", ?isoCode)) AS ?isoUri)
} LIMIT 10
```
# Count aggregated results
Count the number of books for each author 📚
Start from this query:
```sparql
PREFIX dbo:<http://dbpedia.org/ontology/>
SELECT ?author
WHERE {
?book a dbo:Book ;
dbo:author ?author .
} LIMIT 10
```
```
PREFIX dbo:<http://dbpedia.org/ontology/>
SELECT ?author (count(?book) as ?book_count)
WHERE {
?book a dbo:Book ;
dbo:author ?author .
} GROUP BY ?author LIMIT 10
```
# Counts depend on the variables selected in a row
Here we also select the book, so each row gets a count of 1 book 📘
```
PREFIX dbo:<http://dbpedia.org/ontology/>
SELECT ?book ?author (count(?book) as ?book_count)
WHERE {
?book a dbo:Book ;
dbo:author ?author .
} GROUP BY ?book ?author LIMIT 10
```
# Group by
Group solutions by variable value.
Get the average GDP for all countries grouped by the currency they use. Start from:
```sparql
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX dbp: <http://dbpedia.org/property/>
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?currency
WHERE {
?country dbo:currency ?currency ;
dbp:gdpPppPerCapita ?gdp .
}
```
# Group by solution
Use the `AVG()` function to calculate the average of the GDPs grouped by currency:
```
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX dbp: <http://dbpedia.org/property/>
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?currency (AVG(xsd:integer(?gdp)) AS ?avgGdp)
WHERE {
?country dbo:currency ?currency ;
dbp:gdpPppPerCapita ?gdp .
}
GROUP BY ?currency
ORDER BY DESC(?avgGdp)
LIMIT 15
```
# Make a pattern optional
We can define optional patterns that will be retrieved when available.
Put a statement in an `OPTIONAL { }` block to make it optional (it will not be used to filter the statements returned)
With this query we get all the books, and their authors. **Change it to define the author property as optional**, so we retrieve books even if no author is defined.
```sparql
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT *
WHERE {
?book a dbo:Book ;
dbo:author ?author .
}
```
```
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT *
WHERE {
?book a dbo:Book .
OPTIONAL {
?book dbo:author ?author .
}
}
```
# Get the graph
Most triplestores today support named graphs, which let you add a 4th element to a triple (usually to classify it within a larger graph of triples).
```turtle
<http://subject> <http://predicate> <http://object> <http://graph> .
```
Also known as: context
```
PREFIX dbo:<http://dbpedia.org/ontology/>
SELECT ?author ?graph
WHERE {
GRAPH ?graph {
?book a dbo:Book ;
dbo:author ?author .
}
} LIMIT 10
```
We can also query the triples `FROM` a specific graph:
```
PREFIX dbo:<http://dbpedia.org/ontology/>
SELECT ?author ?graph
FROM <http://dbpedia.org>
WHERE {
GRAPH ?graph {
?book a dbo:Book ;
dbo:author ?author .
}
} LIMIT 10
```
# Or get all graphs
This query can take time on big datasets, but its result is usually cached in Virtuoso triplestores.
```
SELECT DISTINCT ?g
WHERE {
GRAPH ?g {
?s ?p ?o .
}
}
```
# Subqueries
A query inside a query 🤯
Exercise: take the first 10 countries to have been dissolved and order them by founding date.
* Select all countries that have been dissolved
* Order them by dissolution date (oldest to newest)
* Limit to 10
* Finally, order the results (countries) from the most recently created to the oldest created
Start from:
```
SELECT *
WHERE {
?country a dbo:Country ;
dbo:dissolutionDate ?dissolutionDate ;
dbo:foundingYear ?foundingYear .
} LIMIT 5
```
* Order countries by dissolution date and keep the 10 first
* Order them from the most recently created to the oldest created
```
SELECT *
WHERE {
{
SELECT ?country ?dissolutionDate
WHERE {
?country a dbo:Country ;
dbo:dissolutionDate ?dissolutionDate .
} order by ?dissolutionDate limit 10
}
?country dbo:foundingYear ?foundingYear .
} order by desc(?foundingYear)
```
# Federated query
Similar to a subquery, a federated query lets you query another SPARQL endpoint directly
We will need to execute the query on **https://graphdb.dumontierlab.com/repositories/KEN3140_SemanticWeb** (dbpedia blocks federated queries)
* [P688](https://www.wikidata.org/wiki/Property:P688): encodes (the product of a gene)
* [P352](https://www.wikidata.org/wiki/Property:P352): identifier for a protein per the UniProt database.
```
%endpoint https://graphdb.dumontierlab.com/repositories/KEN3140_SemanticWeb
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
SELECT * WHERE {
SERVICE <https://query.wikidata.org/sparql> {
?gene wdt:P688 ?encodedProtein .
?encodedProtein wdt:P352 ?uniprotId .
}
} LIMIT 5
```
# Construct 🧱
Return a graph specified by a template (build triples)
Generate 2 triples:
* Author is of type `schema:Person`
* The `schema:countryOfOrigin` of the author
Starting from this query:
```
%endpoint http://dbpedia.org/sparql
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX schema: <http://schema.org/>
SELECT *
WHERE {
?book a dbo:Book ;
dbo:author ?author .
?author dbo:birthPlace ?birthPlace .
?birthPlace dbo:country ?country .
} LIMIT 5
```
You can define the pattern in the `CONSTRUCT { }` block
```
%endpoint http://dbpedia.org/sparql
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX schema: <http://schema.org/>
CONSTRUCT {
?author a schema:Person ;
schema:countryOfOrigin ?country .
}
WHERE {
?book a dbo:Book ;
dbo:author ?author .
?author dbo:birthPlace ?birthPlace .
?birthPlace dbo:country ?country .
} LIMIT 5
```
# Insert 📝
Similar to a `CONSTRUCT`, but the triples are directly inserted into your triplestore. You can define which graph the triples will be inserted into
```sparql
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX schema: <http://schema.org/>
INSERT {
GRAPH <http://my-graph> {
?author a schema:Person ;
schema:countryOfOrigin ?country .
}
}
WHERE {
?book a dbo:Book ;
dbo:author ?author .
?author dbo:birthPlace ?birthPlace .
?birthPlace dbo:country ?country .
}
```
# Insert data
Use SPARQL to insert data into your triplestore (**not possible on public endpoints**)
```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
INSERT DATA {
GRAPH <http://my-graph> {
<my-subject> rdfs:label "inserted object" .
}
}
```
# Delete ❌
Delete particular statements matched by a pattern in a `WHERE` block
Here we delete the `bl:name` statements for all resources typed as `bl:Gene`:
```sparql
DELETE {
GRAPH <http://graph> {
?geneUri bl:name ?geneLabel.
}
}
WHERE {
?geneUri a bl:Gene .
?geneUri bl:name ?geneLabel .
}
```
# Delete data
Directly provide the statements to delete
```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
DELETE DATA {
GRAPH <http://my-graph> {
<http://my-subject> rdfs:label "inserted object" .
}
}
```
# Search on DBpedia 🔎
Use **[https://yasgui.triply.cc](https://yasgui.triply.cc)** to write and run SPARQL query on DBpedia
1. Calculate country densities. Density = `dbo:populationTotal` / `dbo:PopulatedPlace/areaTotal`
2. Construct a new triple to define the previously created country density
3. Order by birthDate of authors who wrote at least 3 books with more than 500 pages.
## 1. Calculate country densities
```
SELECT ?country ?area ?population
(xsd:float(?population)/xsd:float(?area) AS ?density)
WHERE {
?country a dbo:Country ;
dbo:populationTotal ?population ;
<http://dbpedia.org/ontology/PopulatedPlace/areaTotal> ?area .
FILTER(?area != 0)
}
```
# 2. Construct density triple
```
CONSTRUCT {
?country dbo:density ?density .
}
WHERE {
?country a dbo:Country ;
dbo:populationTotal ?population ;
<http://dbpedia.org/ontology/PopulatedPlace/areaTotal> ?area .
BIND(xsd:float(?population)/xsd:float(?area) AS ?density)
FILTER(?area != 0)
} LIMIT 10
```
# Useful links 🔧
* Use **[prefix.cc](http://prefix.cc/)** to resolve mysterious prefixes.
* Search for functions in the specifications: **https://www.w3.org/TR/sparql11-query**
* How do I find vocabulary to use in my SPARQL query from DBpedia? Search in **[DBpedia ontology classes](http://mappings.dbpedia.org/server/ontology/classes/)**, or on google, e.g., search for: "**[dbpedia capital](https://www.google.com/search?&q=dbpedia+capital)**"
# Public SPARQL endpoints 🔗
* Wikidata, facts powering Wikipedia infobox: https://query.wikidata.org/sparql
* Bio2RDF, linked data for the life sciences: https://bio2rdf.org/sparql
* Disgenet, gene-disease association: http://rdf.disgenet.org/sparql
* PathwayCommons, resource for biological pathways analysis: http://rdf.pathwaycommons.org/sparql
```
def foobar(a: int, b: str, c: float = 3.2) -> tuple: pass
import collections
import functools
import inspect
from typing import List
Vector = List[float]
def formatannotation(annotation, base_module=None):
if getattr(annotation, '__module__', None) == 'typing':
return repr(annotation).replace('typing.', '')
if isinstance(annotation, type):
if annotation.__module__ in ('builtins', base_module):
return annotation.__qualname__
return annotation.__module__+'.'+annotation.__qualname__
return repr(annotation)
def check(func):
# get the parameters from the function definition
sig = inspect.signature(func)
parameters = sig.parameters # ordered dict of parameters
arg_keys = tuple(parameters.keys()) # parameter names
for k, v in sig.parameters.items():
print('{k}: {a!r}'.format(k=k, a=v.annotation))
print("\t", formatannotation(v.annotation))
print("➷", sig.return_annotation)
check(foobar)
def foobar2(a: int, b: str, c: Vector) -> tuple: pass
check(foobar2)
# note: exec() returns None, so there is nothing to assign; it defines foobar_g as a side effect
exec('def foobar_g(a: int, b: str, c: float = 3.2) -> tuple: pass')
print(foobar_g.__annotations__)
from typing import Mapping, Sequence
class Employee: pass
def notify_by_email(employees: Sequence[Employee],
overrides: Mapping[str, str]) -> None: pass
check(notify_by_email)
from typing import List
Vector = List[float]
def scale(scalar: float, vector: Vector) -> Vector:
return [scalar * num for num in vector]
# typechecks; a list of floats qualifies as a Vector.
new_vector = scale(2.0, [1.0, -4.2, 5.4])
def foo(a, b, *, c, d=10):
pass
sig = inspect.signature(foo)
for param in sig.parameters.values():
if (param.kind == param.KEYWORD_ONLY and
param.default is param.empty):
print('Parameter:', param)
help(param.annotation)
from io import StringIO
from mako.template import Template
from mako.lookup import TemplateLookup
from mako.runtime import Context
# The contents within the ${} tag are evaluated by Python directly, so full expressions are OK:
def render_template(file, ctx):
mylookup = TemplateLookup(directories=['./'], output_encoding='utf-8', encoding_errors='replace')
mytemplate = Template(filename='./templates/'+file, module_directory='/tmp/mako_modules', lookup=mylookup)
mytemplate.render_context(ctx)
return (buf.getvalue())
buf = StringIO()
ctx = Context(buf, form_name="some_form", slots=["some_slot", "some_other_slot"])
print(render_template('custom_form_action.mako', ctx))
from typing import Dict, Text, Any, List, Union
from rasa_core_sdk import ActionExecutionRejection
from rasa_core_sdk import Tracker
from rasa_core_sdk.events import SlotSet
from rasa_core_sdk.executor import CollectingDispatcher
from rasa_core_sdk.forms import FormAction, REQUESTED_SLOT
def build_slots(func):
# get the parameters from the function definition
sig = inspect.signature(func)
parameters = sig.parameters # ordered dict of parameters
arg_keys = tuple(parameters.keys()) # parameter names
for k, v in sig.parameters.items():
print('{k}: {a!r}, {t}'.format(k=k, a=v.annotation, t=formatannotation(v.annotation)))
print("➷", sig.return_annotation)
return func.__name__, list(parameters.keys())
def simple(a: int, b: str, c: Vector) -> tuple: pass
form_name, slots=build_slots(simple)
print(form_name, str(slots))
buf = StringIO()
ctx = Context(buf, form_name=form_name, slots=slots)
clsdef=render_template('custom_form_action.mako', ctx)
print(clsdef)
exec(clsdef)
exec("form=CustomFormAction()")
print(form.required_slots(None))
```
# Working with data
Overview of today's learning goals:
1. Introduce pandas
2. Load data files
3. Clean and process data
4. Select, filter, and slice data from a dataset
5. Descriptive stats: central tendency and dispersion
6. Merging and concatenating datasets
7. Grouping and summarizing data
```
# something new: import these packages to work with data
import numpy as np
import pandas as pd
```
## 1. Introducing pandas
https://pandas.pydata.org/
```
# review: a python list is a built-in data type
my_list = [8, 6, 4, 2]
my_list
# a numpy array is like a list
# but faster, more compact, and lots more features
my_array = np.array(my_list)
my_array
```
pandas has two primary data structures we will work with: Series and DataFrames
### 1a. pandas Series
```
# a pandas series is based on a numpy array: it's fast, compact, and has more functionality
# perhaps most notably, it has an index which allows you to work naturally with tabular data
my_series = pd.Series(my_list)
my_series
# look at a list-representation of the index
my_series.index.tolist()
# look at the series' values themselves
my_series.values
# what's the data type of the series' values?
type(my_series.values)
# what's the data type of the individual values themselves?
my_series.dtype
```
### 1b. pandas DataFrames
```
# a dict can contain multiple lists and label them
my_dict = {"hh_income": [75125, 22075, 31950, 115400],
"home_value": [525000, 275000, 395000, 985000]}
my_dict
# a pandas dataframe can contain one or more columns
# each column is a pandas series
# each row is a pandas series
# you can create a dataframe by passing in a list, array, series, or dict
df = pd.DataFrame(my_dict)
df
# the row labels in the index are accessed by the .index attribute of the DataFrame object
df.index.tolist()
# the column labels are accessed by the .columns attribute of the DataFrame object
df.columns
# the data values are accessed by the .values attribute of the DataFrame object
# this is a numpy (two-dimensional) array
df.values
```
## 2. Loading data
In practice, you'll work with data by loading a dataset file into pandas. CSV is the most common format. But pandas can also ingest tab-separated data, JSON, and proprietary file formats like Excel .xlsx files, Stata, SAS, and SPSS.
Below, notice what pandas's `read_csv` function does:
1. recognize the header row and get its variable names
1. read all the rows and construct a pandas DataFrame (an assembly of pandas Series rows and columns)
1. construct a unique index, beginning with zero
1. infer the data type of each variable (ie, column)
```
# load a data file
# note the relative filepath! where is this file located?
# note the dtype argument! always specify that fips codes are strings, otherwise pandas guesses int
df = pd.read_csv("../../data/census_tracts_data_la.csv", dtype={"GEOID10": str})
# dataframe shape as rows, columns
df.shape
# or use len to just see the number of rows
len(df)
# view the dataframe's "head"
df.head()
# view the dataframe's "tail"
df.tail()
```
#### What are these data?
I gathered them from the census bureau (2017 5-year tract-level ACS) for you, then gave them meaningful variable names. It's a set of socioeconomic variables across all LA County census tracts:
|column|description|
|------|-----------|
|total_pop|Estimate!!SEX AND AGE!!Total population|
|median_age|Estimate!!SEX AND AGE!!Total population!!Median age (years)|
|pct_hispanic|Percent Estimate!!HISPANIC OR LATINO AND RACE!!Total population!!Hispanic or Latino (of any race)|
|pct_white|Percent Estimate!!HISPANIC OR LATINO AND RACE!!Total population!!Not Hispanic or Latino!!White alone|
|pct_black|Percent Estimate!!HISPANIC OR LATINO AND RACE!!Total population!!Not Hispanic or Latino!!Black or African American alone|
|pct_asian|Estimate!!HISPANIC OR LATINO AND RACE!!Total population!!Not Hispanic or Latino!!Asian alone|
|pct_male|Percent Estimate!!SEX AND AGE!!Total population!!Male|
|pct_single_family_home|Percent Estimate!!UNITS IN STRUCTURE!!Total housing units!!1-unit detached|
|med_home_value|Estimate!!VALUE!!Owner-occupied units!!Median (dollars)|
|med_rooms_per_home|Estimate!!ROOMS!!Total housing units!!Median rooms|
|pct_built_before_1940|Percent Estimate!!YEAR STRUCTURE BUILT!!Total housing units!!Built 1939 or earlier|
|pct_renting|Percent Estimate!!HOUSING TENURE!!Occupied housing units!!Renter-occupied|
|rental_vacancy_rate|Estimate!!HOUSING OCCUPANCY!!Total housing units!!Rental vacancy rate|
|avg_renter_household_size|Estimate!!HOUSING TENURE!!Occupied housing units!!Average household size of renter-occupied unit|
|med_gross_rent|Estimate!!GROSS RENT!!Occupied units paying rent!!Median (dollars)|
|med_household_income|Estimate!!INCOME AND BENEFITS (IN 2017 INFLATION-ADJUSTED DOLLARS)!!Total households!!Median household income (dollars)|
|mean_commute_time|Estimate!!COMMUTING TO WORK!!Workers 16 years and over!!Mean travel time to work (minutes)|
|pct_commute_drive_alone|Percent Estimate!!COMMUTING TO WORK!!Workers 16 years and over!!Car truck or van drove alone|
|pct_below_poverty|Percent Estimate!!PERCENTAGE OF FAMILIES AND PEOPLE WHOSE INCOME IN THE PAST 12 MONTHS IS BELOW THE POVERTY LEVEL!!All people|
|pct_college_grad_student|Percent Estimate!!SCHOOL ENROLLMENT!!Population 3 years and over enrolled in school!!College or graduate school|
|pct_same_residence_year_ago|Percent Estimate!!RESIDENCE 1 YEAR AGO!!Population 1 year and over!!Same house|
|pct_bachelors_degree|Percent Estimate!!EDUCATIONAL ATTAINMENT!!Population 25 years and over!!Percent bachelor's degree or higher|
|pct_english_only|Percent Estimate!!LANGUAGE SPOKEN AT HOME!!Population 5 years and over!!English only|
|pct_foreign_born|Percent Estimate!!PLACE OF BIRTH!!Total population!!Foreign born|
## 3. Clean and process data
```
df.head(10)
# data types of the columns
df.dtypes
# access a single column like df['col_name']
df["med_gross_rent"].head(10)
# pandas uses numpy's nan to represent null (missing) values
print(np.nan)
print(type(np.nan))
# convert rent from string -> float
df["med_gross_rent"].astype(float)
```
Didn't work! We need to clean up the stray alphabetical characters to get a numerical value. You can do string operations on pandas Series to clean up their values
```
# do a string replace and assign back to that column, then change type to float
df["med_gross_rent"] = df["med_gross_rent"].str.replace(" (USD)", "", regex=False)
df["med_gross_rent"] = df["med_gross_rent"].astype(float)
# now clean up the income column then convert it from string -> float
# do a string replace and assign back to that column
df["med_household_income"] = df["med_household_income"].str.replace("$", "", regex=False)
df["med_household_income"] = df["med_household_income"].astype(float)
# convert rent from float -> int
df["med_gross_rent"].astype(int)
```
You cannot store null values as type `int`, only as type `float`. You have three basic options:
1. Keep the column as float to retain the nulls - they are often important!
2. Drop all the rows that contain nulls if we need non-null data for our analysis
3. Fill in all the nulls with another value if we know a reliable default value
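For completeness, recent pandas versions also offer a nullable integer extension dtype (`"Int64"`, with a capital I) that can hold missing values, though the three options above cover most cases:

```python
import numpy as np
import pandas as pd

s = pd.Series([1200.0, np.nan, 950.0])
# plain int conversion would fail on NaN, but the nullable Int64 dtype accepts it
s_int = s.astype("Int64")
print(s_int)
```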
```
df.shape
# drop rows that contain nulls
# this doesn't save the result, because we didn't reassign! (in reality, want to keep the nulls here)
df.dropna(subset=["med_gross_rent"]).shape
# fill in rows that contain nulls
# this doesn't save the result, because we didn't reassign! (in reality, want to keep the nulls here)
df["med_gross_rent"].fillna(value=0).head(10)
# more string operations: slice state fips and county fips out of the tract fips string
# assign them to new dataframe columns
df["state"] = df["GEOID10"].str.slice(0, 2)
df["county"] = df["GEOID10"].str.slice(2, 5)
df.head()
# dict that maps state fips code -> state name
fips = {"04": "Arizona", "06": "California", "41": "Oregon"}
# replace fips code with state name with the replace() method
df["state"] = df["state"].replace(fips)
# you can rename columns with the rename() method
# remember to reassign to save the result
df = df.rename(columns={"state": "state_name"})
# you can drop columns you don't need with the drop() method
# remember to reassign to save the result
df = df.drop(columns=["county"])
# inspect the cleaned-up dataframe
df.head()
# save it to disk as a "clean" copy
# note the relative filepath
df.to_csv("../../data/census_tracts_data_la-clean.csv", index=False, encoding="utf-8")
```
## 4. Selecting and slicing data from a DataFrame
```
# CHEAT SHEET OF COMMON TASKS
# Operation Syntax Result
# ------------------------------------------------------------
# Select column by name df[col] Series
# Select columns by name df[col_list] DataFrame
# Select row by label df.loc[label] Series
# Select row by integer location df.iloc[loc] Series
# Slice rows by label df.loc[a:c] DataFrame
# Select rows by boolean vector df[mask] DataFrame
```
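Each cheat-sheet row can be verified on a small toy dataframe (hypothetical labels, independent of the census data):

```python
import pandas as pd

toy = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]}, index=["x", "y", "z"])

assert isinstance(toy["a"], pd.Series)            # select column by name -> Series
assert isinstance(toy[["a", "b"]], pd.DataFrame)  # select columns by list -> DataFrame
assert toy.loc["y"]["b"] == 5                     # select row by label -> Series
assert toy.iloc[0]["a"] == 1                      # select row by position -> Series
assert toy.loc["x":"y"].shape == (2, 2)           # label slicing is inclusive
assert toy[toy["a"] > 1].shape == (2, 2)          # boolean mask -> DataFrame
```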
### 4a. Select DataFrame's column(s) by name
We saw some of this a minute ago. Let's look in a bit more detail and break down what's happening.
```
# select a single column by column name
# this is a pandas series
df["total_pop"]
# select multiple columns by a list of column names
# this is a pandas dataframe that is a subset of the original
df[["total_pop", "median_age"]]
# create a new column by assigning df['new_col'] to some set of values
# you can do math operations on any numeric columns
df["monthly_income"] = df["med_household_income"] / 12
df["rent_burden"] = df["med_gross_rent"] / df["monthly_income"]
# inspect the results
df[["med_household_income", "monthly_income", "med_gross_rent", "rent_burden"]].head()
```
### 4b. Select row(s) by label
```
# use .loc to select by row label
# returns the row as a series whose index is the dataframe column names
df.loc[0]
# use .loc to select single value by row label, column name
df.loc[0, "pct_below_poverty"]
# slice of rows from label 5 to label 7, inclusive
# this returns a pandas dataframe
df.loc[5:7]
# slice of rows from label 1 to label 3, inclusive
# slice of columns from pct_hispanic to pct_asian, inclusive
df.loc[1:3, "pct_hispanic":"pct_asian"]
# subset of rows with labels in list
# subset of columns with names in list
df.loc[[1, 3], ["pct_hispanic", "pct_asian"]]
# you can use a column of unique identifiers as the index
# fips codes uniquely identify each row (but verify!)
df = df.set_index("GEOID10")
df.index.is_unique
df.head()
# .loc works by label, not by position in the dataframe
df.loc[0]
# the index now contains fips codes, so you have to use .loc accordingly to select by row label
df.loc["06037137201"]
```
### 4c. Select by (integer) position
```
# get the row in the zero-th position in the dataframe
df.iloc[0]
# you can slice as well
# note, while .loc[] is inclusive, .iloc[] is not
# get the rows from position 0 up to but not including position 3 (ie, rows 0, 1, and 2)
df.iloc[0:3]
# get the value from the row in position 3 and the column in position 2 (zero-indexed)
df.iloc[3, 2]
```
### 4d. Select/filter by value
You can subset or filter a dataframe based on the values in its rows/columns.
```
# filter the dataframe by rows with 30%+ rent burden
df[df["rent_burden"] > 0.3]
# what exactly did that do? let's break it out.
df["rent_burden"] > 0.3
# essentially a true/false mask that filters by value
mask = df["rent_burden"] > 0.3
df[mask]
# you can chain multiple conditions together
# pandas logical operators are: | for or, & for and, ~ for not
# these must be grouped by using parentheses due to order of operations
# question: which tracts are both rent-burdened and majority-Black?
mask = (df["rent_burden"] > 0.3) & (df["pct_black"] > 50)
df[mask].shape
# which tracts are both rent-burdened and either majority-Black or majority-Hispanic?
mask1 = df["rent_burden"] > 0.3
mask2 = df["pct_black"] > 50
mask3 = df["pct_hispanic"] > 50
mask = mask1 & (mask2 | mask3)
df[mask].shape
# see the mask
mask
# ~ means not... it essentially flips trues to falses and vice-versa
~mask
# which rows are in a state that begins with "Cal"?
# all of them... because we're looking only at LA county
mask = df["state_name"].str.startswith("Cal")
df[mask].shape
# now it's your turn
# create a new subset dataframe containing all the rows with median home values above $800,000 and percent-White above 60%
# how many rows did you get?
```
## 5. Descriptive stats
```
# what share of majority-White tracts are rent burdened?
mask1 = df["pct_white"] > 50
mask2 = mask1 & (df["rent_burden"] > 0.3)
len(df[mask2]) / len(df[mask1])
# what share of majority-Hispanic tracts are rent burdened?
mask1 = df["pct_hispanic"] > 50
mask2 = mask1 & (df["rent_burden"] > 0.3)
len(df[mask2]) / len(df[mask1])
# you can sort the dataframe by values in some column
df.sort_values("pct_below_poverty", ascending=False).dropna().head()
# use the describe() method to pull basic descriptive stats for some column
df["med_household_income"].describe()
```
#### Or if you need the value of a single stat, call it directly
Key measures of central tendency: mean and median
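To see why the median is the "typical" value, consider a toy series (illustrative numbers only) where a single outlier drags the mean far above the median:

```python
import pandas as pd

incomes = pd.Series([40_000, 45_000, 50_000, 52_000, 1_000_000])

assert incomes.median() == 50_000   # robust to the outlier
assert incomes.mean() == 237_400.0  # pulled far upward by the outlier
```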
```
# the mean, or "average" value
df["med_household_income"].mean()
# the median, or "typical" (ie, 50th percentile) value
df["med_household_income"].median()
# now it's your turn
# create a new subset dataframe containing rows with median household income above the (tract) average in LA county
# what is the median median home value across this subset of tracts?
```
Key measures of dispersion or variability: range, IQR, variance, standard deviation
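A self-contained check of these measures on a classic toy series (population standard deviation, `ddof=0`, is used here so the result comes out exact; pandas defaults to the sample version, `ddof=1`):

```python
import pandas as pd

s = pd.Series([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

assert s.max() - s.min() == 7.0                    # range
assert s.quantile(0.75) - s.quantile(0.25) == 1.5  # IQR
assert s.var(ddof=0) == 4.0                        # population variance
assert s.std(ddof=0) == 2.0                        # population std = sqrt(variance)
```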
```
df["med_household_income"].min()
# which tract has the lowest median household income?
df["med_household_income"].idxmin()
df["med_household_income"].max()
# what is the 90th-percentile value?
df["med_household_income"].quantile(0.90)
# calculate the distribution's range
df["med_household_income"].max() - df["med_household_income"].min()
# calculate its IQR
df["med_household_income"].quantile(0.75) - df["med_household_income"].quantile(0.25)
# calculate its variance... rarely used in practice
df["med_household_income"].var()
# calculate its standard deviation
# this is the sqrt of the variance... putting it into same units as the variable itself
df["med_household_income"].std()
# now it's your turn
# what's the average (mean) median home value across majority-White tracts? And across majority-Black tracts?
```
## 6. Merge and concatenate
### 6a. Merging DataFrames
```
# create a subset dataframe with only race/ethnicity variables
race_cols = ["pct_asian", "pct_black", "pct_hispanic", "pct_white"]
df_race = df[race_cols]
df_race.head()
# create a subset dataframe with only economic variables
econ_cols = ["med_home_value", "med_household_income"]
df_econ = df[econ_cols].sort_values("med_household_income")
df_econ.head()
# merge them together, aligning rows based on their labels in the index
df_merged = pd.merge(left=df_econ, right=df_race, how="inner", left_index=True, right_index=True)
df_merged.head()
# reset df_econ's index
df_econ = df_econ.reset_index()
df_econ.head()
# merge them together, aligning rows based on their labels in the index
# doesn't work! their indexes do not share any labels to match/align the rows
df_merged = pd.merge(left=df_econ, right=df_race, how="inner", left_index=True, right_index=True)
df_merged
# now it's your turn
# change the "how" argument: what happens if you try an "outer" join? or a "left" join? or a "right" join?
# instead merge where df_race index matches df_econ GEOID10 column
df_merged = pd.merge(left=df_econ, right=df_race, how="inner", left_on="GEOID10", right_index=True)
df_merged.head()
```
### 6b. Concatenating DataFrames
```
# load the orange county tracts data
oc = pd.read_csv("../../data/census_tracts_data_oc.csv", dtype={"GEOID10": str})
oc = oc.set_index("GEOID10")
oc.shape
oc.head()
# merging joins data together aligned by the index, but concatenating just smushes it together along some axis
df_all = pd.concat([df, oc], sort=False)
df_all
```
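The difference between the two operations can be seen on toy dataframes (hypothetical row labels): merging aligns rows and adds columns, while concatenating stacks rows.

```python
import pandas as pd

a = pd.DataFrame({"x": [1, 2]}, index=["r1", "r2"])
b = pd.DataFrame({"y": [3, 4]}, index=["r1", "r2"])

# merge aligns rows on the index: same rows, combined columns
merged = pd.merge(a, b, how="inner", left_index=True, right_index=True)
assert merged.shape == (2, 2)

# concat stacks along an axis: combined rows, same columns
stacked = pd.concat([a, a], sort=False)
assert stacked.shape == (4, 1)
```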
## 7. Grouping and summarizing
```
# extract county fips from index then replace with friendly name
df_all["county"] = df_all.index.str.slice(2, 5)
df_all["county"] = df_all["county"].replace({"037": "LA", "059": "OC"})
df_all["county"]
# group the rows by county
counties = df_all.groupby("county")
# what is the median pct_white across the tracts in each county?
counties["pct_white"].median()
# look at several columns' medians by county
counties[["pct_bachelors_degree", "pct_foreign_born", "pct_commute_drive_alone"]].median()
# now it's your turn
# group the tracts by county and find the highest/lowest tract percentages that speak English-only
```
##### Copyright 2019 Google LLC
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Online Prediction with scikit-learn on AI Platform
This notebook uses the [Census Income Data Set](https://archive.ics.uci.edu/ml/datasets/Census+Income) to create a simple model, train the model, upload the model to AI Platform, and lastly use the model to make predictions.
# How to bring your model to AI Platform
Getting your model ready for predictions can be done in 5 steps:
1. Save your model to a file
1. Upload the saved model to [Google Cloud Storage](https://cloud.google.com/storage)
1. Create a model resource on AI Platform
1. Create a model version (linking your scikit-learn model)
1. Make an online prediction
# Prerequisites
Before you jump in, let’s cover some of the different tools you’ll be using to get online prediction up and running on AI Platform.
[Google Cloud Platform](https://cloud.google.com/) lets you build and host applications and websites, store data, and analyze data on Google's scalable infrastructure.
[AI Platform](https://cloud.google.com/ml-engine/) is a managed service that enables you to easily build machine learning models that work on any type of data, of any size.
[Google Cloud Storage](https://cloud.google.com/storage/) (GCS) is a unified object storage for developers and enterprises, from live data serving to data analytics/ML to data archiving.
[Cloud SDK](https://cloud.google.com/sdk/) is a command line tool which allows you to interact with Google Cloud products. In order to run this notebook, make sure that Cloud SDK is [installed](https://cloud.google.com/sdk/downloads) in the same environment as your Jupyter kernel.
# Part 0: Setup
* [Create a project on GCP](https://cloud.google.com/resource-manager/docs/creating-managing-projects)
* [Create a Google Cloud Storage Bucket](https://cloud.google.com/storage/docs/quickstart-console)
* [Enable AI Platform Training and Prediction and Compute Engine APIs](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component&_ga=2.217405014.1312742076.1516128282-1417583630.1516128282)
* [Install Cloud SDK](https://cloud.google.com/sdk/downloads)
* [Install scikit-learn](http://scikit-learn.org/stable/install.html)
* [Install NumPy](https://docs.scipy.org/doc/numpy/user/install.html)
* [Install pandas](https://pandas.pydata.org/pandas-docs/stable/install.html)
* [Install Google API Python Client](https://github.com/google/google-api-python-client)
These variables will be needed for the following steps.
**Replace:**
* `PROJECT_ID <YOUR_PROJECT_ID>` - with your project's id. Use the PROJECT_ID that matches your Google Cloud Platform project.
* `BUCKET_NAME <YOUR_BUCKET_NAME>` - with the bucket id you created above.
* `MODEL_NAME <YOUR_MODEL_NAME>` - with your model name, such as '`census`'
* `VERSION <YOUR_VERSION>` - with your version name, such as '`v1`'
* `REGION <REGION>` - [select a region](https://cloud.google.com/ml-engine/docs/tensorflow/regions#available_regions) or use the default '`us-central1`'. The region is where the model will be deployed.
```
%env PROJECT_ID PROJECT_ID
%env BUCKET_NAME BUCKET_NAME
%env MODEL_NAME census
%env VERSION_NAME v1
%env REGION us-central1
```
## Download the data
The [Census Income Data Set](https://archive.ics.uci.edu/ml/datasets/Census+Income) that this sample
uses for training is hosted by the [UC Irvine Machine Learning
Repository](https://archive.ics.uci.edu/ml/datasets/).
* Training file is `adult.data`
* Evaluation file is `adult.test`
### Disclaimer
This dataset is provided by a third party. Google provides no representation,
warranty, or other guarantees about the validity or any other aspects of this dataset.
```
# Create a directory to hold the data
! mkdir census_data
# Download the data
! curl https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data --output census_data/adult.data
! curl https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test --output census_data/adult.test
```
# Part 1: Train/Save the model
First, the data is loaded into a pandas DataFrame that can be used by scikit-learn. Then a simple model is created and fit against the training data. Lastly, scikit-learn's built-in version of joblib is used to save the model to a file that can be uploaded to AI Platform.
```
import googleapiclient.discovery
import json
import numpy as np
import os
import pandas as pd
import pickle
from sklearn.ensemble import RandomForestClassifier
from sklearn.externals import joblib
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import FeatureUnion
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelBinarizer
# Define the format of your input data including unused columns (These are the columns from the census data files)
COLUMNS = (
'age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income-level'
)
# Categorical columns are columns that need to be turned into a numerical value to be used by scikit-learn
CATEGORICAL_COLUMNS = (
'workclass',
'education',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'native-country'
)
# Load the training census dataset
with open('./census_data/adult.data', 'r') as train_data:
    raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)
# Remove the column we are trying to predict ('income-level') from our features list
# Convert the Dataframe to a list of lists
train_features = raw_training_data.drop('income-level', axis=1).values.tolist()
# Create our training labels list, convert the Dataframe to a list of values
train_labels = (raw_training_data['income-level'] == ' >50K').values.tolist()
# Load the test census dataset
with open('./census_data/adult.test', 'r') as test_data:
    raw_testing_data = pd.read_csv(test_data, names=COLUMNS, skiprows=1)
# Remove the column we are trying to predict ('income-level') from our features list
# Convert the Dataframe to a lists of lists
test_features = raw_testing_data.drop('income-level', axis=1).values.tolist()
# Create our training labels list, convert the Dataframe to a lists of lists
test_labels = (raw_testing_data['income-level'] == ' >50K.').values.tolist()
# Since the census data set has categorical features, we need to convert
# them to numerical values. We'll use a list of pipelines to convert each
# categorical column and then use FeatureUnion to combine them before calling
# the RandomForestClassifier.
categorical_pipelines = []
# Each categorical column needs to be extracted individually and converted to a numerical value.
# To do this, each categorical column will use a pipeline that extracts one feature column via
# SelectKBest(k=1) and a LabelBinarizer() to convert the categorical value to a numerical one.
# A scores array (created below) will select and extract the feature column. The scores array is
# created by iterating over the COLUMNS and checking if it is a CATEGORICAL_COLUMN.
for i, col in enumerate(COLUMNS[:-1]):
    if col in CATEGORICAL_COLUMNS:
        # Create a scores array to get the individual categorical column.
        # Example:
        #  data = [39, 'State-gov', 77516, 'Bachelors', 13, 'Never-married', 'Adm-clerical',
        #          'Not-in-family', 'White', 'Male', 2174, 0, 40, 'United-States']
        #  scores = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        #
        #  Returns: [['State-gov']]
        # Build the scores array.
        scores = [0] * len(COLUMNS[:-1])
        # This column is the categorical column we want to extract.
        scores[i] = 1
        skb = SelectKBest(k=1)
        skb.scores_ = scores
        # Convert the categorical column to a numerical value
        lbn = LabelBinarizer()
        r = skb.transform(train_features)
        lbn.fit(r)
        # Create the pipeline to extract the categorical feature
        categorical_pipelines.append(
            ('categorical-{}'.format(i), Pipeline([
                ('SKB-{}'.format(i), skb),
                ('LBN-{}'.format(i), lbn)])))
# Create pipeline to extract the numerical features
skb = SelectKBest(k=6)
# From COLUMNS use the features that are numerical
skb.scores_ = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0]
categorical_pipelines.append(('numerical', skb))
# Combine all the features using FeatureUnion
preprocess = FeatureUnion(categorical_pipelines)
# Create the classifier
classifier = RandomForestClassifier()
# Transform the features and fit them to the classifier
classifier.fit(preprocess.transform(train_features), train_labels)
# Create the overall model as a single pipeline
pipeline = Pipeline([
('union', preprocess),
('classifier', classifier)
])
# Export the model to a file
joblib.dump(pipeline, 'model.joblib')
print('Model trained and saved')
```
# Part 2: Upload the model
Next, you'll need to upload the model to your project's storage bucket in Google Cloud Storage (GCS) so it can be used with AI Platform. This step takes your local ‘model.joblib’ file and uploads it to GCS via the Cloud SDK using gsutil.
Before continuing, make sure you're [properly authenticated](https://cloud.google.com/sdk/gcloud/reference/auth/) and have [access to the bucket](https://cloud.google.com/storage/docs/access-control/). This next command sets your project to the one specified above.
Note: If you get an error below, make sure the Cloud SDK is installed in the kernel's environment.
```
! gcloud config set project $PROJECT_ID
```
Note: The exact file name of the exported model you upload to GCS is important! Your model must be named “model.joblib”, “model.pkl”, or “model.bst”, depending on the library you used to export it. This restriction ensures that the model will be safely reconstructed later by using the same technique for import as was used during export.
```
! gsutil cp ./model.joblib gs://$BUCKET_NAME/model.joblib
```
# Part 3: Create a model resource
AI Platform organizes your trained models using model and version resources. An AI Platform model is a container for the versions of your machine learning model. For more information on model resources and model versions look [here](https://cloud.google.com/ml-engine/docs/deploying-models#creating_a_model_version).
At this step, you create a container that you can use to hold several different versions of your actual model.
```
! gcloud ml-engine models create $MODEL_NAME --regions $REGION
```
# Part 4: Create a model version
Now it’s time to get your model online and ready for predictions. The model version requires a few components as specified [here](https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions#Version).
* __name__ - The name specified for the version when it was created. This will be the `VERSION_NAME` variable you declared at the beginning.
* __model__ - The name of the model container we created in Part 3. This is the `MODEL_NAME` variable you declared at the beginning.
* __deployment Uri__ - The Google Cloud Storage location of the trained model used to create the version. This is the bucket that you uploaded the model to with your `BUCKET_NAME`
* __runtime version__ - [Select Google Cloud ML runtime version](https://cloud.google.com/ml-engine/docs/tensorflow/runtime-version-list) to use for this deployment. This is set to 1.4
* __framework__ - The framework specifies if you are using: `TENSORFLOW`, `SCIKIT_LEARN`, `XGBOOST`. This is set to `SCIKIT_LEARN`
* __pythonVersion__ - This specifies whether you’re using Python 2.7 or Python 3.5. The default value is set to `“2.7”`, if you are using Python 3.5, set the value to `“3.5”`
Note: If you require a feature of scikit-learn that isn’t available in the publicly released version yet, you can specify “runtimeVersion”: “HEAD” instead, and that would get the latest version of scikit-learn available from the github repo. Otherwise the following versions will be used:
* scikit-learn: 0.19.0
First, we need to create a YAML file to configure our model version.
__REPLACE:__ `BUCKET_NAME` with the bucket name you specified earlier.
```
%%writefile ./config.yaml
deploymentUri: "gs://BUCKET_NAME/"
runtimeVersion: '1.4'
framework: "SCIKIT_LEARN"
pythonVersion: "3.5"
```
Use the created YAML file to create a model version.
Note: It can take several minutes for your model to be available.
```
! gcloud ml-engine versions create $VERSION_NAME \
--model $MODEL_NAME \
--config config.yaml
```
# Part 5: Make an online prediction
It’s time to make an online prediction with your newly deployed model. Before you begin, you'll need to take some of the test data and prepare it, so that the test data can be used by the deployed model.
```
# Get one person that makes <=50K and one that makes >50K to test our model.
print('Show a person that makes <=50K:')
print('\tFeatures: {0} --> Label: {1}\n'.format(test_features[0], test_labels[0]))
with open('less_than_50K.json', 'w') as outfile:
    json.dump(test_features[0], outfile)
print('Show a person that makes >50K:')
print('\tFeatures: {0} --> Label: {1}'.format(test_features[3], test_labels[3]))
with open('more_than_50K.json', 'w') as outfile:
    json.dump(test_features[3], outfile)
```
## Use gcloud to make online predictions
Use the two people (as seen in the table) gathered in the previous step for the gcloud predictions.
| **Person** | age | workclass | fnlwgt | education | education-num | marital-status | occupation |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| **1** | 25 | Private | 226802 | 11th | 7 | Never-married | Machine-op-inspct |
| **2** | 44 | Private | 160323 | Some-college | 10 | Married-civ-spouse | Machine-op-inspct |

| **Person** | relationship | race | sex | capital-gain | capital-loss | hours-per-week | native-country | (Label) income-level |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| **1** | Own-child | Black | Male | 0 | 0 | 40 | United-States | False (<=50K) |
| **2** | Husband | Black | Male | 7688 | 0 | 40 | United-States | True (>50K) |
Test the model with an online prediction using the data of a person who makes <=50K.
Note: If you see an error, the model from Part 4 may not be created yet as it takes several minutes for a new model version to be created.
```
! gcloud ml-engine predict --model $MODEL_NAME --version $VERSION_NAME --json-instances less_than_50K.json
```
Test the model with an online prediction using the data of a person who makes >50K.
```
! gcloud ml-engine predict --model $MODEL_NAME --version $VERSION_NAME --json-instances more_than_50K.json
```
## Use Python to make online predictions
Test the model with the entire test set and print out some of the results.
Note: If running notebook server on Compute Engine, make sure to ["allow full access to all Cloud APIs".](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#changeserviceaccountandscopes)
```
import googleapiclient.discovery
import os
import pandas as pd
PROJECT_ID = os.environ['PROJECT_ID']
VERSION_NAME = os.environ['VERSION_NAME']
MODEL_NAME = os.environ['MODEL_NAME']
service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}'.format(PROJECT_ID, MODEL_NAME)
name += '/versions/{}'.format(VERSION_NAME)
# Due to the size of the data, it needs to be split in 2
first_half = test_features[:int(len(test_features)/2)]
second_half = test_features[int(len(test_features)/2):]
complete_results = []
for data in [first_half, second_half]:
    responses = service.projects().predict(
        name=name,
        body={'instances': data}
    ).execute()
    if 'error' in responses:
        print(responses['error'])
    else:
        complete_results.extend(responses['predictions'])
# Print the first 10 responses
for i, response in enumerate(complete_results[:10]):
    print('Prediction: {}\tLabel: {}'.format(response, test_labels[i]))
```
# [Optional] Part 6: Verify Results
Use a confusion matrix to create a visualization of the online predicted results from AI Platform.
```
actual = pd.Series(test_labels, name='actual')
online = pd.Series(complete_results, name='online')
pd.crosstab(actual,online)
```
Use a confusion matrix to create a visualization of the predicted results from the local model. These results should be identical to the results above.
```
local_results = pipeline.predict(test_features)
local = pd.Series(local_results, name='local')
pd.crosstab(actual,local)
```
Directly compare the two results
```
identical = 0
different = 0
for i in range(len(complete_results)):
    if complete_results[i] == local_results[i]:
        identical += 1
    else:
        different += 1
print('identical: {}, different: {}'.format(identical, different))
```
If all results are identical, it means you've successfully uploaded your local model to AI Platform and performed online predictions correctly.
# Programming Exercise 5:
# Regularized Linear Regression and Bias vs Variance
## Introduction
In this exercise, you will implement regularized linear regression and use it to study models with different bias-variance properties. Before starting on the programming exercise, we strongly recommend watching the video lectures and completing the review questions for the associated topics.
All the information you need for solving this assignment is in this notebook, and all the code you will be implementing will take place within this notebook. The assignment can be promptly submitted to the coursera grader directly from this notebook (code and instructions are included below).
Before we begin with the exercises, we need to import all libraries required for this programming exercise. Throughout the course, we will be using [`numpy`](http://www.numpy.org/) for all arrays and matrix operations, [`matplotlib`](https://matplotlib.org/) for plotting, and [`scipy`](https://docs.scipy.org/doc/scipy/reference/) for scientific and numerical computation functions and tools. You can find instructions on how to install required libraries in the README file in the [github repository](https://github.com/dibgerge/ml-coursera-python-assignments).
```
# used for manipulating directory paths
import os
# Scientific and vector computation for python
import numpy as np
# Plotting library
from matplotlib import pyplot
# Optimization module in scipy
from scipy import optimize
# will be used to load MATLAB mat datafile format
from scipy.io import loadmat
# library written for this exercise providing additional functions for assignment submission, and others
import utils
# define the submission/grader object for this exercise
grader = utils.Grader()
# tells matplotlib to embed plots within the notebook
%matplotlib inline
```
## Submission and Grading
After completing each part of the assignment, be sure to submit your solutions to the grader. The following is a breakdown of how each part of this exercise is scored.
| Section | Part | Submitted Function | Points |
| :- |:- |:- | :-: |
| 1 | [Regularized Linear Regression Cost Function](#section1) | [`linearRegCostFunction`](#linearRegCostFunction) | 25 |
| 2 | [Regularized Linear Regression Gradient](#section2) | [`linearRegCostFunction`](#linearRegCostFunction) |25 |
| 3 | [Learning Curve](#section3) | [`learningCurve`](#func2) | 20 |
| 4 | [Polynomial Feature Mapping](#section4) | [`polyFeatures`](#polyFeatures) | 10 |
| 5 | [Cross Validation Curve](#section5) | [`validationCurve`](#validationCurve) | 20 |
| | Total Points | |100 |
You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.
<div class="alert alert-block alert-warning">
At the end of each section in this notebook, we have a cell which contains code for submitting the solutions thus far to the grader. Execute the cell to see your score up to the current section. For all your work to be submitted properly, you must execute those cells at least once.
</div>
<a id="section1"></a>
## 1 Regularized Linear Regression
In the first half of the exercise, you will implement regularized linear regression to predict the amount of water flowing out of a dam using the change of water level in a reservoir. In the second half, you will go through some diagnostics for debugging learning algorithms and examine the effects of bias vs. variance.
### 1.1 Visualizing the dataset
We will begin by visualizing the dataset containing historical records on the change in the water level, $x$, and the amount of water flowing out of the dam, $y$. This dataset is divided into three parts:
- A **training** set that your model will learn on: `X`, `y`
- A **cross validation** set for determining the regularization parameter: `Xval`, `yval`
- A **test** set for evaluating performance. These are “unseen” examples which your model did not see during training: `Xtest`, `ytest`
Run the next cell to plot the training data. In the following parts, you will implement linear regression and use that to fit a straight line to the data and plot learning curves. Following that, you will implement polynomial regression to find a better fit to the data.
```
# Load from ex5data1.mat, where all variables will be stored in a dictionary
data = loadmat(os.path.join('Data', 'ex5data1.mat'))
# Extract train, test, validation data from dictionary
# and also convert y's from a 2-D matrix (MATLAB format) to a numpy vector
X, y = data['X'], data['y'][:, 0]
Xtest, ytest = data['Xtest'], data['ytest'][:, 0]
Xval, yval = data['Xval'], data['yval'][:, 0]
# m = Number of examples
m = y.size
# Plot training data
pyplot.plot(X, y, 'ro', ms=10, mec='k', mew=1)
pyplot.xlabel('Change in water level (x)')
pyplot.ylabel('Water flowing out of the dam (y)');
```
### 1.2 Regularized linear regression cost function
Recall that regularized linear regression has the following cost function:
$$ J(\theta) = \frac{1}{2m} \left( \sum_{i=1}^m \left( h_\theta\left( x^{(i)} \right) - y^{(i)} \right)^2 \right) + \frac{\lambda}{2m} \left( \sum_{j=1}^n \theta_j^2 \right)$$
where $\lambda$ is a regularization parameter which controls the degree of regularization (thus helping to prevent overfitting). The regularization term puts a penalty on the overall cost $J$: as the magnitudes of the model parameters $\theta_j$ increase, the penalty increases as well. Note that you should not regularize the $\theta_0$ term.
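As a rough vectorized sketch of this computation (a stand-alone illustration, not the graded `linearRegCostFunction`; the name `regularized_cost` and its interface are made up here):

```python
import numpy as np

def regularized_cost(X, y, theta, lambda_=0.0):
    # X is assumed to already contain the bias column of ones.
    m = y.size
    h = X.dot(theta)                         # h_theta(x) for all examples
    sq_err = np.sum((h - y) ** 2)            # sum of squared errors
    reg = lambda_ * np.sum(theta[1:] ** 2)   # penalty, skipping theta_0
    return (sq_err + reg) / (2 * m)
```

With a perfectly fit dataset the squared-error part vanishes and only the penalty term remains.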
You should now complete the code in the function `linearRegCostFunction` in the next cell. Your task is to calculate the regularized linear regression cost function. If possible, try to vectorize your code and avoid writing loops.
<a id="linearRegCostFunction"></a>
```
def linearRegCostFunction(X, y, theta, lambda_=0.0):
    """
    Compute cost and gradient for regularized linear regression
    with multiple variables. Computes the cost of using theta as
    the parameter for linear regression to fit the data points in X and y.

    Parameters
    ----------
    X : array_like
        The dataset. Matrix with shape (m x n + 1) where m is the
        total number of examples, and n is the number of features
        before adding the bias term.

    y : array_like
        The function values at each datapoint. A vector of
        shape (m, ).

    theta : array_like
        The parameters for linear regression. A vector of shape (n+1,).

    lambda_ : float, optional
        The regularization parameter.

    Returns
    -------
    J : float
        The computed cost function.

    grad : array_like
        The value of the cost function gradient w.r.t theta.
        A vector of shape (n+1, ).

    Instructions
    ------------
    Compute the cost and gradient of regularized linear regression for
    a particular choice of theta.
    You should set J to the cost and grad to the gradient.
    """
    # Initialize some useful values
    m = y.size  # number of training examples

    # You need to return the following variables correctly
    J = 0
    grad = np.zeros(theta.shape)

    # ====================== YOUR CODE HERE ======================


    # ============================================================
    return J, grad
```
When you are finished, the next cell will run your cost function using `theta` initialized at `[1, 1]`. You should expect to see an output of 303.993.
```
theta = np.array([1, 1])
J, _ = linearRegCostFunction(np.concatenate([np.ones((m, 1)), X], axis=1), y, theta, 1)
print('Cost at theta = [1, 1]:\t %f ' % J)
print('(this value should be about 303.993192)\n')
```
After completing a part of the exercise, you can submit your solutions for grading by first adding the function you modified to the submission object, and then sending your function to Coursera for grading.
The submission script will prompt you for your login e-mail and submission token. You can obtain a submission token from the web page for the assignment. You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.
*Execute the following cell to grade your solution to the first part of this exercise.*
```
grader[1] = linearRegCostFunction
grader.grade()
```
<a id="section2"></a>
### 1.3 Regularized linear regression gradient
Correspondingly, the partial derivative of the cost function for regularized linear regression is defined as:
$$
\begin{align}
& \frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m} \sum_{i=1}^m \left( h_\theta \left(x^{(i)} \right) - y^{(i)} \right) & \qquad \text{for } j = 0 \\
& \frac{\partial J(\theta)}{\partial \theta_j} = \left( \frac{1}{m} \sum_{i=1}^m \left( h_\theta \left( x^{(i)} \right) - y^{(i)} \right) x_j^{(i)} \right) + \frac{\lambda}{m} \theta_j & \qquad \text{for } j \ge 1
\end{align}
$$
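A vectorized form of these two cases can be sketched as follows (illustrative only; `regularized_gradient` is a hypothetical helper, not the graded function):

```python
import numpy as np

def regularized_gradient(X, y, theta, lambda_=0.0):
    # X is assumed to contain the bias column; theta_0 is not regularized.
    m = y.size
    h = X.dot(theta)
    grad = X.T.dot(h - y) / m               # unregularized part, all j
    grad[1:] += (lambda_ / m) * theta[1:]   # add the penalty only for j >= 1
    return grad
```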
In the function [`linearRegCostFunction`](#linearRegCostFunction) above, add code to calculate the gradient, returning it in the variable `grad`. <font color='red'><b>Do not forget to re-execute the cell containing this function to update the function's definition.</b></font>
When you are finished, use the next cell to run your gradient function using theta initialized at `[1, 1]`. You should expect to see a gradient of `[-15.30, 598.250]`.
```
theta = np.array([1, 1])
J, grad = linearRegCostFunction(np.concatenate([np.ones((m, 1)), X], axis=1), y, theta, 1)
print('Gradient at theta = [1, 1]: [{:.6f}, {:.6f}] '.format(*grad))
print(' (this value should be about [-15.303016, 598.250744])\n')
```
*You should now submit your solutions.*
```
grader[2] = linearRegCostFunction
grader.grade()
```
### Fitting linear regression
Once your cost function and gradient are working correctly, the next cell will run the code in `trainLinearReg` (found in the module `utils.py`) to compute the optimal values of $\theta$. This training function uses `scipy`'s optimization module to minimize the cost function.
In this part, we set regularization parameter $\lambda$ to zero. Because our current implementation of linear regression is trying to fit a 2-dimensional $\theta$, regularization will not be incredibly helpful for a $\theta$ of such low dimension. In the later parts of the exercise, you will be using polynomial regression with regularization.
Finally, the code in the next cell should also plot the best fit line, which should look like the figure below.

The best fit line tells us that the model is not a good fit to the data because the data has a non-linear pattern. While visualizing the best fit as shown is one possible way to debug your learning algorithm, it is not always easy to visualize the data and model. In the next section, you will implement a function to generate learning curves that can help you debug your learning algorithm even if it is not easy to visualize the
data.
```
# add a columns of ones for the y-intercept
X_aug = np.concatenate([np.ones((m, 1)), X], axis=1)
theta = utils.trainLinearReg(linearRegCostFunction, X_aug, y, lambda_=0)
# Plot fit over the data
pyplot.plot(X, y, 'ro', ms=10, mec='k', mew=1.5)
pyplot.xlabel('Change in water level (x)')
pyplot.ylabel('Water flowing out of the dam (y)')
pyplot.plot(X, np.dot(X_aug, theta), '--', lw=2);
```
<a id="section3"></a>
## 2 Bias-variance
An important concept in machine learning is the bias-variance tradeoff. Models with high bias are not complex enough for the data and tend to underfit, while models with high variance overfit to the training data.
In this part of the exercise, you will plot training and test errors on a learning curve to diagnose bias-variance problems.
### 2.1 Learning Curves
You will now implement code to generate the learning curves that will be useful in debugging learning algorithms. Recall that a learning curve plots training and cross validation error as a function of training set size. Your job is to fill in the function `learningCurve` in the next cell, so that it returns a vector of errors for the training set and cross validation set.
To plot the learning curve, we need a training and cross validation set error for different training set sizes. To obtain different training set sizes, you should use different subsets of the original training set `X`. Specifically, for a training set size of $i$, you should use the first $i$ examples (i.e., `X[:i, :]`
and `y[:i]`).
You can use the `trainLinearReg` function (by calling `utils.trainLinearReg(...)`) to find the $\theta$ parameters. Note that the `lambda_` is passed as a parameter to the `learningCurve` function.
After learning the $\theta$ parameters, you should compute the error on the training and cross validation sets. Recall that the training error for a dataset is defined as
$$ J_{\text{train}} = \frac{1}{2m} \left[ \sum_{i=1}^m \left(h_\theta \left( x^{(i)} \right) - y^{(i)} \right)^2 \right] $$
In particular, note that the training error does not include the regularization term. One way to compute the training error is to use your existing cost function with $\lambda$ set to 0, both when computing the training error and the cross validation error. When you are computing the training set error, make sure you compute it on the training subset (i.e., `X[:i, :]` and `y[:i]`) instead of the entire training set. However, for the cross validation error, you should compute it over the entire cross validation set. You should store the computed errors in the vectors `error_train` and `error_val`.
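The loop described above can be sketched as follows (an illustration under the assumption that a training routine `fit(X, y, lambda_)` is available in place of `utils.trainLinearReg`; the names here are made up):

```python
import numpy as np

def cost(X, y, theta, lambda_=0.0):
    # Regularized squared-error cost; theta[0] is never penalized.
    m = y.size
    h = X.dot(theta)
    return (np.sum((h - y) ** 2) + lambda_ * np.sum(theta[1:] ** 2)) / (2 * m)

def learning_curve_sketch(X, y, Xval, yval, fit, lambda_=0.0):
    # For each training-set size i, train on the first i examples and
    # record the UNregularized train/validation errors.
    m = y.size
    error_train = np.zeros(m)
    error_val = np.zeros(m)
    for i in range(1, m + 1):
        theta = fit(X[:i, :], y[:i], lambda_)
        error_train[i - 1] = cost(X[:i, :], y[:i], theta)  # lambda_ = 0 here
        error_val[i - 1] = cost(Xval, yval, theta)         # entire val set
    return error_train, error_val
```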
<a id="func2"></a>
```
def learningCurve(X, y, Xval, yval, lambda_=0):
    """
    Generates the train and cross validation set errors needed to plot
    a learning curve.

    In this function, you will compute the train and cross validation
    errors for dataset sizes from 1 up to m. In practice, when working
    with larger datasets, you might want to do this in larger intervals.

    Parameters
    ----------
    X : array_like
        The training dataset. Matrix with shape (m x n + 1) where m is the
        total number of examples, and n is the number of features
        before adding the bias term.

    y : array_like
        The function values at each training datapoint. A vector of
        shape (m, ).

    Xval : array_like
        The validation dataset. Matrix with shape (m_val x n + 1) where
        m_val is the total number of validation examples, and n is the
        number of features before adding the bias term.

    yval : array_like
        The function values at each validation datapoint. A vector of
        shape (m_val, ).

    lambda_ : float, optional
        The regularization parameter.

    Returns
    -------
    error_train : array_like
        A vector of shape m. error_train[i] contains the training error for
        i examples.

    error_val : array_like
        A vector of shape m. error_val[i] contains the validation error for
        i training examples.

    Instructions
    ------------
    Fill in this function to return training errors in error_train and the
    cross validation errors in error_val. i.e., error_train[i] and
    error_val[i] should give you the errors obtained after training on
    i examples.

    Notes
    -----
    - You should evaluate the training error on the first i training
      examples (i.e., X[:i, :] and y[:i]).

      For the cross-validation error, you should instead evaluate on
      the _entire_ cross validation set (Xval and yval).

    - If you are using your cost function (linearRegCostFunction) to compute
      the training and cross validation error, you should call the function
      with the lambda argument set to 0. Do note that you will still need to
      use lambda_ when running the training to obtain the theta parameters.

    Hint
    ----
    You can loop over the examples with the following:

        for i in range(1, m+1):
            # Compute train/cross validation errors using training examples
            # X[:i, :] and y[:i], storing the result in
            # error_train[i-1] and error_val[i-1]
            ....
    """
    # Number of training examples
    m = y.size

    # You need to return these values correctly
    error_train = np.zeros(m)
    error_val = np.zeros(m)

    # ====================== YOUR CODE HERE ======================


    # =============================================================
    return error_train, error_val
```
When you are finished implementing the function `learningCurve`, executing the next cell prints the learning-curve values and produces a plot similar to the figure below.

In the learning curve figure, you can observe that both the train error and cross validation error remain high as the number of training examples increases. This reflects a high-bias problem in the model: the linear regression model is too simple and is unable to fit our dataset well. In the next section, you will implement polynomial regression to fit a better model for this dataset.
```
X_aug = np.concatenate([np.ones((m, 1)), X], axis=1)
Xval_aug = np.concatenate([np.ones((yval.size, 1)), Xval], axis=1)
error_train, error_val = learningCurve(X_aug, y, Xval_aug, yval, lambda_=0)
pyplot.plot(np.arange(1, m+1), error_train, np.arange(1, m+1), error_val, lw=2)
pyplot.title('Learning curve for linear regression')
pyplot.legend(['Train', 'Cross Validation'])
pyplot.xlabel('Number of training examples')
pyplot.ylabel('Error')
pyplot.axis([0, 13, 0, 150])
print('# Training Examples\tTrain Error\tCross Validation Error')
for i in range(m):
    print(' \t%d\t\t%f\t%f' % (i+1, error_train[i], error_val[i]))
```
*You should now submit your solutions.*
```
grader[3] = learningCurve
grader.grade()
```
<a id="section4"></a>
## 3 Polynomial regression
The problem with our linear model was that it was too simple for the data
and resulted in underfitting (high bias). In this part of the exercise, you will address this problem by adding more features. For polynomial regression, our hypothesis has the form:
$$
\begin{align}
h_\theta(x) &= \theta_0 + \theta_1 \times (\text{waterLevel}) + \theta_2 \times (\text{waterLevel})^2 + \cdots + \theta_p \times (\text{waterLevel})^p \\
& = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \cdots + \theta_p x_p
\end{align}
$$
Notice that by defining $x_1 = (\text{waterLevel})$, $x_2 = (\text{waterLevel})^2$ , $\cdots$, $x_p =
(\text{waterLevel})^p$, we obtain a linear regression model where the features are the various powers of the original value (waterLevel).
Now, you will add more features using the higher powers of the existing feature $x$ in the dataset. Your task in this part is to complete the code in the function `polyFeatures` in the next cell. The function should map the original training set $X$ of size $m \times 1$ into its higher powers. Specifically, when a training set $X$ of size $m \times 1$ is passed into the function, the function should return a $m \times p$ matrix `X_poly`, where column 1 holds the original values of X, column 2 holds the values of $X^2$, column 3 holds the values of $X^3$, and so on. Note that you don’t have to account for the zero-eth power in this function.
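One way to build such a matrix is to broadcast a vector of exponents against the input column (a sketch only; `poly_features_sketch` is an illustrative name, not the graded `polyFeatures`):

```python
import numpy as np

def poly_features_sketch(X, p):
    # Map a vector X of shape (m,) or (m, 1) to columns [X, X^2, ..., X^p].
    X = np.asarray(X, dtype=float).reshape(-1)  # flatten to (m,)
    powers = np.arange(1, p + 1)                # exponents 1..p
    return X[:, np.newaxis] ** powers           # broadcasts to shape (m, p)
```

For example, `poly_features_sketch([2, 3], 3)` yields the rows `[2, 4, 8]` and `[3, 9, 27]`.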
<a id="polyFeatures"></a>
```
def polyFeatures(X, p):
    """
    Maps X (1D vector) into the p-th power.

    Parameters
    ----------
    X : array_like
        A data vector of size m, where m is the number of examples.

    p : int
        The polynomial power to map the features.

    Returns
    -------
    X_poly : array_like
        A matrix of shape (m x p) where p is the polynomial
        power and m is the number of examples. That is:

        X_poly[i, :] = [X[i], X[i]**2, X[i]**3 ... X[i]**p]

    Instructions
    ------------
    Given a vector X, return a matrix X_poly where the p-th column of
    X contains the values of X to the p-th power.
    """
    # You need to return the following variables correctly.
    X_poly = np.zeros((X.shape[0], p))

    # ====================== YOUR CODE HERE ======================


    # ============================================================
    return X_poly
```
Now you have a function that will map features to a higher dimension. The next cell will apply it to the training set, the test set, and the cross validation set.
```
p = 8
# Map X onto Polynomial Features and Normalize
X_poly = polyFeatures(X, p)
X_poly, mu, sigma = utils.featureNormalize(X_poly)
X_poly = np.concatenate([np.ones((m, 1)), X_poly], axis=1)
# Map X_poly_test and normalize (using mu and sigma)
X_poly_test = polyFeatures(Xtest, p)
X_poly_test -= mu
X_poly_test /= sigma
X_poly_test = np.concatenate([np.ones((ytest.size, 1)), X_poly_test], axis=1)
# Map X_poly_val and normalize (using mu and sigma)
X_poly_val = polyFeatures(Xval, p)
X_poly_val -= mu
X_poly_val /= sigma
X_poly_val = np.concatenate([np.ones((yval.size, 1)), X_poly_val], axis=1)
print('Normalized Training Example 1:')
X_poly[0, :]
```
*You should now submit your solutions.*
```
grader[4] = polyFeatures
grader.grade()
```
## 3.1 Learning Polynomial Regression
After you have completed the function `polyFeatures`, we will proceed to train polynomial regression using your linear regression cost function.
Keep in mind that even though we have polynomial terms in our feature vector, we are still solving a linear regression optimization problem. The polynomial terms have simply turned into features that we can use for linear regression. We are using the same cost function and gradient that you wrote for the earlier part of this exercise.
For this part of the exercise, you will be using a polynomial of degree 8. It turns out that if we run the training directly on the projected data, it will not work well because the features would be badly scaled (e.g., an example with $x = 40$ will now have a feature $x_8 = 40^8 = 6.5 \times 10^{12}$). Therefore, you will need to use feature normalization.
Before learning the parameters $\theta$ for the polynomial regression, we first call `featureNormalize` and normalize the features of the training set, storing the mu, sigma parameters separately. We have already implemented this function for you (in `utils.py` module) and it is the same function from the first exercise.
After learning the parameters $\theta$, you should see two plots generated for polynomial regression with $\lambda = 0$, which should be similar to the ones here:
<table>
<tr>
<td><img src="Figures/polynomial_regression.png"></td>
<td><img src="Figures/polynomial_learning_curve.png"></td>
</tr>
</table>
You should see that the polynomial fit is able to follow the datapoints very well, thus, obtaining a low training error. The figure on the right shows that the training error essentially stays zero for all numbers of training samples. However, the polynomial fit is very complex and even drops off at the extremes. This is an indicator that the polynomial regression model is overfitting the training data and will not generalize well.
To better understand the problems with the unregularized ($\lambda = 0$) model, you can see that the learning curve shows the same effect where the training error is low, but the cross validation error is high. There is a gap between the training and cross validation errors, indicating a high variance problem.
```
lambda_ = 0
theta = utils.trainLinearReg(linearRegCostFunction, X_poly, y,
                             lambda_=lambda_, maxiter=55)
# Plot training data and fit
pyplot.plot(X, y, 'ro', ms=10, mew=1.5, mec='k')
utils.plotFit(polyFeatures, np.min(X), np.max(X), mu, sigma, theta, p)
pyplot.xlabel('Change in water level (x)')
pyplot.ylabel('Water flowing out of the dam (y)')
pyplot.title('Polynomial Regression Fit (lambda = %f)' % lambda_)
pyplot.ylim([-20, 50])
pyplot.figure()
error_train, error_val = learningCurve(X_poly, y, X_poly_val, yval, lambda_)
pyplot.plot(np.arange(1, 1+m), error_train, np.arange(1, 1+m), error_val)
pyplot.title('Polynomial Regression Learning Curve (lambda = %f)' % lambda_)
pyplot.xlabel('Number of training examples')
pyplot.ylabel('Error')
pyplot.axis([0, 13, 0, 100])
pyplot.legend(['Train', 'Cross Validation'])
print('Polynomial Regression (lambda = %f)\n' % lambda_)
print('# Training Examples\tTrain Error\tCross Validation Error')
for i in range(m):
    print(' \t%d\t\t%f\t%f' % (i+1, error_train[i], error_val[i]))
```
One way to combat the overfitting (high-variance) problem is to add regularization to the model. In the next section, you will get to try different $\lambda$ parameters to see how regularization can lead to a better model.
### 3.2 Optional (ungraded) exercise: Adjusting the regularization parameter
In this section, you will get to observe how the regularization parameter affects the bias-variance of regularized polynomial regression. You should now modify the lambda parameter and try $\lambda = 1, 100$. For each of these values, the script should generate a polynomial fit to the data and also a learning curve.
For $\lambda = 1$, the generated plots should look like the figure below. You should see a polynomial fit that follows the data trend well (left) and a learning curve (right) showing that both the cross validation and training error converge to a relatively low value. This shows the $\lambda = 1$ regularized polynomial regression model does not have the high-bias or high-variance problems. In effect, it achieves a good trade-off between bias and variance.
<table>
<tr>
<td><img src="Figures/polynomial_regression_reg_1.png"></td>
<td><img src="Figures/polynomial_learning_curve_reg_1.png"></td>
</tr>
</table>
For $\lambda = 100$, you should see a polynomial fit (figure below) that does not follow the data well. In this case, there is too much regularization and the model is unable to fit the training data.

*You do not need to submit any solutions for this optional (ungraded) exercise.*
<a id="section5"></a>
### 3.3 Selecting $\lambda$ using a cross validation set
From the previous parts of the exercise, you observed that the value of $\lambda$ can significantly affect the results of regularized polynomial regression on the training and cross validation set. In particular, a model without regularization ($\lambda = 0$) fits the training set well, but does not generalize. Conversely, a model with too much regularization ($\lambda = 100$) does not fit the training set and testing set well. A good choice of $\lambda$ (e.g., $\lambda = 1$) can provide a good fit to the data.
In this section, you will implement an automated method to select the $\lambda$ parameter. Concretely, you will use a cross validation set to evaluate how good each $\lambda$ value is. After selecting the best $\lambda$ value using the cross validation set, we can then evaluate the model on the test set to estimate
how well the model will perform on actual unseen data.
Your task is to complete the code in the function `validationCurve`. Specifically, you should use the `utils.trainLinearReg` function to train the model using different values of $\lambda$ and compute the training error and cross validation error. You should try $\lambda$ in the following range: {0, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10}.
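The intended loop structure can be sketched like this (illustrative only; `fit` and `cost` are assumed stand-ins for `utils.trainLinearReg` and the unregularized cost, not part of the exercise API):

```python
import numpy as np

def validation_curve_sketch(X, y, Xval, yval, fit, cost):
    # Train with each candidate lambda, then record the UNregularized
    # train/validation errors for that fitted theta.
    lambda_vec = [0, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10]
    error_train = np.zeros(len(lambda_vec))
    error_val = np.zeros(len(lambda_vec))
    for i, lambda_ in enumerate(lambda_vec):
        theta = fit(X, y, lambda_)           # regularized training
        error_train[i] = cost(X, y, theta)   # errors computed without reg
        error_val[i] = cost(Xval, yval, theta)
    return lambda_vec, error_train, error_val
```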
<a id="validationCurve"></a>
```
def validationCurve(X, y, Xval, yval):
    """
    Generate the train and validation errors needed to plot a validation
    curve that we can use to select lambda_.

    Parameters
    ----------
    X : array_like
        The training dataset. Matrix with shape (m x n) where m is the
        total number of training examples, and n is the number of features
        including any polynomial features.

    y : array_like
        The function values at each training datapoint. A vector of
        shape (m, ).

    Xval : array_like
        The validation dataset. Matrix with shape (m_val x n) where m_val
        is the total number of validation examples, and n is the number of
        features including any polynomial features.

    yval : array_like
        The function values at each validation datapoint. A vector of
        shape (m_val, ).

    Returns
    -------
    lambda_vec : list
        The values of the regularization parameters which were used in
        cross validation.

    error_train : list
        The training error computed at each value for the regularization
        parameter.

    error_val : list
        The validation error computed at each value for the regularization
        parameter.

    Instructions
    ------------
    Fill in this function to return training errors in `error_train` and
    the validation errors in `error_val`. The vector `lambda_vec` contains
    the different lambda parameters to use for each calculation of the
    errors, i.e, `error_train[i]`, and `error_val[i]` should give you the
    errors obtained after training with `lambda_ = lambda_vec[i]`.

    Note
    ----
    You can loop over lambda_vec with the following:

        for i in range(len(lambda_vec)):
            lambda_ = lambda_vec[i]
            # Compute train / val errors when training linear
            # regression with regularization parameter lambda_
            # You should store the result in error_train[i]
            # and error_val[i]
            ....
    """
    # Selected values of lambda (you should not change this)
    lambda_vec = [0, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10]

    # You need to return these variables correctly.
    error_train = np.zeros(len(lambda_vec))
    error_val = np.zeros(len(lambda_vec))

    # ====================== YOUR CODE HERE ======================


    # ============================================================
    return lambda_vec, error_train, error_val
```
After you have completed the code, the next cell will run your function and plot a cross validation curve of error vs. $\lambda$ that allows you to select which $\lambda$ parameter to use. You should see a plot similar to the figure below.

In this figure, we can see that the best value of $\lambda$ is around 3. Due to randomness
in the training and validation splits of the dataset, the cross validation error can sometimes be lower than the training error.
```
lambda_vec, error_train, error_val = validationCurve(X_poly, y, X_poly_val, yval)
pyplot.plot(lambda_vec, error_train, '-o', lambda_vec, error_val, '-o', lw=2)
pyplot.legend(['Train', 'Cross Validation'])
pyplot.xlabel('lambda')
pyplot.ylabel('Error')
print('lambda\t\tTrain Error\tValidation Error')
for i in range(len(lambda_vec)):
    print(' %f\t%f\t%f' % (lambda_vec[i], error_train[i], error_val[i]))
```
*You should now submit your solutions.*
```
grader[5] = validationCurve
grader.grade()
```
### 3.4 Optional (ungraded) exercise: Computing test set error
In the previous part of the exercise, you implemented code to compute the cross validation error for various values of the regularization parameter $\lambda$. However, to get a better indication of the model’s performance in the real world, it is important to evaluate the “final” model on a test set that was not used in any part of training (that is, it was neither used to select the $\lambda$ parameters, nor to learn the model parameters $\theta$). For this optional (ungraded) exercise, you should compute the test error using the best value of $\lambda$ you found. In our cross validation, we obtained a test error of 3.8599 for $\lambda = 3$.
*You do not need to submit any solutions for this optional (ungraded) exercise.*
### 3.5 Optional (ungraded) exercise: Plotting learning curves with randomly selected examples
In practice, especially for small training sets, when you plot learning curves to debug your algorithms, it is often helpful to average across multiple sets of randomly selected examples to determine the training error and cross validation error.
Concretely, to determine the training error and cross validation error for $i$ examples, you should first randomly select $i$ examples from the training set and $i$ examples from the cross validation set. You will then learn the parameters $\theta$ using the randomly chosen training set and evaluate the parameters $\theta$ on the randomly chosen training set and cross validation set. The above steps should then be repeated multiple times (say 50) and the averaged error should be used to determine the training error and cross validation error for $i$ examples.
For this optional (ungraded) exercise, you should implement the above strategy for computing the learning curves. For reference, the figure below shows the learning curve we obtained for polynomial regression with $\lambda = 0.01$. Your figure may differ slightly due to the random selection of examples.

*You do not need to submit any solutions for this optional (ungraded) exercise.*
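The averaging strategy described above can be sketched as follows (an illustration under assumed helpers `fit(X, y, lambda_)` and `cost(X, y, theta)`; not part of the graded exercise):

```python
import numpy as np

def averaged_learning_curve(X, y, Xval, yval, fit, cost, lambda_=0.01,
                            n_trials=50, seed=0):
    # For each size i, sample i train and i validation examples at random,
    # fit on the train subset, and average both errors over n_trials runs.
    # Assumes yval.size >= y.size so i validation examples can be drawn.
    rng = np.random.default_rng(seed)
    m = y.size
    error_train = np.zeros(m)
    error_val = np.zeros(m)
    for i in range(1, m + 1):
        for _ in range(n_trials):
            tr = rng.choice(m, size=i, replace=False)
            va = rng.choice(yval.size, size=i, replace=False)
            theta = fit(X[tr], y[tr], lambda_)
            error_train[i - 1] += cost(X[tr], y[tr], theta)
            error_val[i - 1] += cost(Xval[va], yval[va], theta)
    return error_train / n_trials, error_val / n_trials
```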
```
import os
os.chdir('../app')
import matplotlib
print(matplotlib.__version__)
import frontend.stock_analytics as salib
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
from datetime import datetime,timedelta
from pprint import pprint
import matplotlib.patches as patches
import time
import numpy as np
import datetime
import copy
import preprocessing.lob.s03_fill_cache as l03
import re
import preprocessing.preglobal as pg
import math
%matplotlib inline
import random
import scipy.optimize
import json
import analysis_lib as al
import scipy.special
import cv2
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
from pymongo import MongoClient, UpdateMany, UpdateOne, InsertOne
import pandas as pd
plt.rcParams['figure.figsize'] = (15, 5)
def binary_search(f, target, cstep=10, stepsize=10, prevturn=True):  # mon. increasing func
    # print(f(cstep), target, cstep, stepsize)
    if cstep > 1e5:
        return -1
    res = target / f(cstep)
    if np.abs(res - 1) < 1e-4:
        return cstep
    if res < 1:
        stepsize /= 2
        prevturn = False
        cstep -= stepsize
    else:
        if prevturn:
            stepsize *= 2
        else:
            stepsize /= 2
        cstep += stepsize
    return binary_search(f, target, cstep, stepsize, prevturn)
# Simulate using inverse transform
# Theoretical distribution
def integral_over_phi_slow(t, deltat, omegak, a, K, phi_0, g):
    summand = 0
    if len(t) > 0:
        for k in range(0, K):
            summand += (1 - np.exp(-omegak[k]*deltat)) * np.sum(a[k]*np.exp(-omegak[k]*(t[-1]-t)))
    return deltat*phi_0 + g*summand

def integral_over_phi(t, deltat, omegak, a, K, phi_0, g):
    summand = np.sum((1 - np.exp(-np.outer(omegak, deltat))).T
                     * np.sum(np.multiply(np.exp(-np.outer(omegak, (t[-1]-t))).T, a), axis=0),
                     axis=1) if len(t) > 0 else 0
    return deltat*phi_0 + g*summand

def probability_for_inter_arrival_time(t, deltat, omegak, a, K, phi_0, g):
    x = integral_over_phi(t, deltat, omegak, a, K, phi_0, g)
    return 1 - np.exp(-x)

def probability_for_inter_arrival_time_slow(t, deltat, omegak, a, K, phi_0, g):
    x = np.zeros(len(deltat))
    for i in range(0, len(deltat)):
        x[i] = integral_over_phi_slow(t, deltat[i], omegak, a, K, phi_0, g)
    return 1 - np.exp(-x)
g_cache_dict = {}
def simulate_by_itrans(phi_dash, g_params, K, conv1=1e-8, conv2=1e-2, N=250000,
                       init_array=np.array([]), reseed=True, status_update=True,
                       use_binary_search=True):
    # Initialize parameters
    g, g_omega, g_beta = g_params
    phi_0 = phi_dash * (1 - g)
    omegak, a = al.generate_series_parameters(g_omega, g_beta, K)
    if reseed:
        np.random.seed(123)
    salib.tic()
    i = randii = 0
    t = 0.
    randpool = np.random.rand(100*N)
    # Inverse transform algorithm
    init_array = np.array(init_array, dtype='double')
    hawkes_array = np.pad(init_array, (0, N-len(init_array)), 'constant', constant_values=0.)  # np.zeros(N)
    hawkes_array = np.array(hawkes_array, dtype='double')
    i = len(init_array)
    if i > 0:
        t = init_array[-1]
    endsize = 20
    tau = 0
    while i < N:
        NN = 10000
        u = randpool[randii]
        randii += 1
        if randii >= len(randpool):
            print(i)
        if use_binary_search:
            f = lambda x: probability_for_inter_arrival_time(hawkes_array[:i], x, omegak, a, K, phi_0, g)
            tau = binary_search(f, u, cstep=max(tau, 1e-5), stepsize=max(tau, 1e-5))
            if tau == -1:
                return hawkes_array[:i]
        else:
            notok = 1
            while notok > 0:
                if notok > 10:
                    NN *= 2
                    notok = 1
                tau_x = np.linspace(0, endsize, NN)
                pt = probability_for_inter_arrival_time(hawkes_array[:i], tau_x, omegak, a, K, phi_0, g)
                okok = True
                if pt[-1] - pt[-2] > conv1:
                    if status_update:
                        print('warning, pt does not converge', i, pt[1]-pt[0], pt[-1]-pt[-2])
                    endsize *= 1.1
                    notok += 1
                    okok = False
                if pt[1] - pt[0] > conv2:
                    if status_update:
                        print('warning, pt increases too fast', i, pt[1]-pt[0], pt[-1]-pt[-2])
                    endsize /= 1.1
                    notok += 1
                    okok = False
                if okok:
                    notok = 0
            tt = np.max(np.where(pt < u))
            if tt == NN-1:
                if status_update:
                    print('stopped early', u, tau_x[tt], pt[tt])
                return hawkes_array[:i]
            tau = tau_x[tt]
        t += tau
        hawkes_array[i] = t
        i += 1
        if status_update and i % (int(N/5)) == 0:
            print(i)
            salib.toc()
    if status_update:
        salib.toc()
    return hawkes_array
# SIMULATION USING THINNING
def calc_eff_g(number_of_events, g):
    noe_binned_x, noe_binned_y, _ = al.dobins(number_of_events, useinteger=True, N=1000)
    noe_binned_y /= noe_binned_y.sum()
    assert np.abs(np.sum(noe_binned_y) - 1) < 1e-8
    print((noe_binned_x*noe_binned_y).sum(), 'should be', 1/(1-g))
    plt.plot(np.log(noe_binned_x), noe_binned_y)
# noe_thin_no_cache_K15
gg = 0.886205
noe_thin_no_cache_K15 = [len(\
simulate_by_thinning(phi_dash=0, g_params=(gg, 0.430042, 0.3),\
K=15, N=1000, reseed=False, status_update=False, caching=False, init_array=np.array([0.]))\
) for i in range(0,10000)]
# noe_thin_cache_K15
gg = 0.886205
noe_thin_cache_K15 = [len(\
simulate_by_thinning(phi_dash=0, g_params=(gg, 0.430042, 0.3),\
K=15, N=1000, reseed=False, status_update=False, caching=True, init_array=np.array([0.]))\
) for i in range(0,10000)]
# noe_itrans_binary_K15
gg = 0.886205
noe_itrans_binary_K15 = [len(\
simulate_by_itrans(phi_dash=0, g_params=(gg, 0.430042, 0.3),\
K=15, N=1000, reseed=False, status_update=False,use_binary_search=True, init_array=np.array([0.]))\
) for i in range(0,10000)]
# noe_itrans_no_binary_K15
gg = 0.886205
noe_itrans_no_binary_K15 = [len(\
simulate_by_itrans(phi_dash=0, g_params=(gg, 0.430042, 0.3),\
K=15, N=1000, reseed=False, status_update=False, use_binary_search=False, init_array=np.array([0.])
, conv1=1e-5, conv2=1e-2
)\
) for i in range(0,10000)]
#noe_thin_cache_K0
gg = 0.886205
noe_thin_cache_K0 = [len(\
simulate_by_thinning(phi_dash=0, g_params=(gg, 2.430042, 0.),\
K=1, N=1000, reseed=False, status_update=False, caching=True, init_array=np.array([0.]))\
) for i in range(0,10000)] # takes quite long, because the cache is rebuilt each time
# noe_thin_no_cache_K0
gg = 0.886205
noe_thin_no_cache_K0 = [len(\
simulate_by_thinning(phi_dash=0, g_params=(gg, 2.430042, 0.),\
K=1, N=1000, reseed=False, status_update=False, caching=False, init_array=np.array([0.]))\
) for i in range(0,10000)]
# noe_itrans_binary_K0
gg = 0.886205
noe_itrans_binary_K0 = [len(\
simulate_by_itrans(phi_dash=0, g_params=(gg, 2.430042, 0.),\
K=1, N=1000, reseed=False, use_binary_search=True, status_update=False, init_array=np.array([0.]))\
) for i in range(0,10000)]
# noe_itrans_no_binary_K0
gg = 0.886205
noe_itrans_no_binary_K0 = [len(\
simulate_by_itrans(phi_dash=0, g_params=(gg, 2.430042, 0.),\
K=1, N=1000, reseed=False, use_binary_search=False, status_update=False, init_array=np.array([0.]))\
) for i in range(0,10000)]
calc_eff_g(noe_thin_cache_K15,gg)
calc_eff_g(noe_thin_no_cache_K15,gg)
calc_eff_g(noe_thin_cache_K0,gg)
calc_eff_g(noe_thin_no_cache_K0,gg)
calc_eff_g(noe_itrans_binary_K15,gg)
calc_eff_g(noe_itrans_no_binary_K15,gg)
calc_eff_g(noe_itrans_binary_K0,gg)
calc_eff_g(noe_itrans_no_binary_K0,gg)
eff_g_sim = {
'noe_thin_cache_K15':noe_thin_cache_K15,
'noe_thin_no_cache_K15':noe_thin_no_cache_K15,
'noe_thin_cache_K0':noe_thin_cache_K0,
'noe_thin_no_cache_K0':noe_thin_no_cache_K0,
'noe_itrans_binary_K15':noe_itrans_binary_K15,
'noe_itrans_no_binary_K15':noe_itrans_no_binary_K15,
'noe_itrans_binary_K0':noe_itrans_binary_K0,
'noe_itrans_no_binary_K0':noe_itrans_no_binary_K0
}
with open('eff_g_sim.json','w') as f:
json.dump( eff_g_sim, f)
gg
sim_thin_no_cache = simulate_by_thinning(phi_dash=68, g_params=(0.886205, 0.430042, 0.253835), K=15, N=10000, caching=False)
sim_thin_cache = simulate_by_thinning(phi_dash=68, g_params=(0.886205, 0.430042, 0.253835), K=15, N=10000, caching=True)
sim_itrans_binary = simulate_by_itrans(phi_dash=68, g_params=(0.886205, 0.430042, 0.253835), use_binary_search=True, K=15, N=10000, reseed=False)
sim_itrans_nobinary = simulate_by_itrans(phi_dash=68, g_params=(0.886205, 0.430042, 0.253835), use_binary_search=False, K=15, N=10000, reseed=False)
import importlib
importlib.reload(al)
import task_lib as tl
with open('17_simulation.json', 'w') as f:
json.dump([ ('sim_thin_no_cache',sim_thin_no_cache),
('sim_itrans_binary',sim_itrans_binary),
('sim_itrans_nobinary',sim_itrans_nobinary)],f, cls=tl.NumpyEncoder)
al.print_stats([ ('sim_thin_no_cache',sim_thin_no_cache),
('sim_itrans_binary',sim_itrans_binary),
('sim_itrans_nobinary',sim_itrans_nobinary)],
tau = np.logspace(-1,1,20), stepsize_hist=1.)
# Show probability distribution!
tg, tg_omega, tg_beta = (0.786205, 0.430042, 0.253835)
tK = 15
tphi_0 = 0
tomegak, ta = al.generate_series_parameters(tg_omega, tg_beta, K=tK, b=5.)
thawkes_array = np.zeros(10)
thawkes_array[0] = 0
ti = 1
tj = 0
tau_x = np.linspace(0.,100,1000)
pt = probability_for_inter_arrival_time(thawkes_array[tj:ti],tau_x, tomegak, ta, tK, tphi_0, tg)
plt.plot(tau_x,pt,'.')
# TEST IF BOTH ARE THE SAME
tt = np.array([0.01388255])
tdeltat = np.linspace(0,1.2607881726256949,1000)
tomegak = np.array([0.430042, 0.0006565823727274271, 1.0024611832713502e-06, 1.5305443242275112e-09, 2.3368145994246977e-12, 3.567817269741932e-15, 5.447295679084947e-18, 8.31685817180986e-21, 1.2698067798225314e-23, 1.9387240046350152e-26, 2.960017875060756e-29, 4.519315694101945e-32, 6.900030744759242e-35, 1.053487463616628e-37, 1.6084505664563106e-40])
ta = np.array([0.8071834195758446, 0.15563834675047422, 0.030009653805760286, 0.005786358827014711, 0.0011157059222237438, 0.0002151266006998307, 4.1479975508621093e-05, 7.998027034306966e-06, 1.5421522230216854e-06, 2.973525181609743e-07, 5.733449573701989e-08, 1.1055041409263513e-08, 2.1315952811567083e-09, 4.110069129946613e-10, 7.924894750082819e-11])
tK = 15
tphi_0 = 7.738059999999999
tg = 0.886205
assert (np.abs(probability_for_inter_arrival_time_slow(tt, tdeltat, tomegak, ta, tK, tphi_0, tg) - probability_for_inter_arrival_time(tt, tdeltat, tomegak, ta, tK, tphi_0, tg)) < 1e-10).all()
```
| github_jupyter |
# Conservative SDOF - Multiple Scales
- Introduces multiple time scales (homogenization)
- Treats damped systems more easily than Lindstedt–Poincaré
- Built-in stability
Introduce new independent time variables
$$
\begin{gather*}
T_n = \epsilon^n t
\end{gather*}
$$
and
$$
\begin{align*}
\frac{d}{dt} &= \frac{\partial}{\partial T_0}\frac{dT_0}{dt} + \frac{\partial}{\partial T_1}\frac{dT_1}{dt} + \frac{\partial}{\partial T_2}\frac{dT_2}{dt} + \cdots\\
&= \frac{\partial}{\partial T_0} + \epsilon \frac{\partial}{\partial T_1} + \epsilon^2 \frac{\partial}{\partial T_2} + \cdots\\
&= D_0 + \epsilon D_1 + \epsilon^2 D_2 + \cdots
\end{align*}
$$
$$
\begin{align*}
\frac{d^2}{dt^2} &= \left( D_0 + \epsilon D_1 + \epsilon^2 D_2 + \cdots \right)^2
\end{align*}
$$
Introducing the Expansion for $x(t)$
$$
\begin{align*}
x(t) &= x_0(T_0,T_1,T_2,\cdots) + \epsilon x_1(T_0,T_1,T_2,\cdots) + \epsilon^2 x_2(T_0,T_1,T_2,\cdots) + \epsilon^3 x_3(T_0,T_1,T_2,\cdots) + O(\epsilon^4)
\end{align*}
$$
```
import sympy as sp
from sympy.simplify.fu import TR0, TR7, TR8, TR11
from math import factorial
# Functions for multiple scales
# Function for Time operator
def Dt(f, n, Ts, e=sp.Symbol('epsilon')):
if n==1:
return sp.expand(sum([e**i * sp.diff(f, T_i) for i, T_i in enumerate(Ts)]))
return Dt(Dt(f, 1, Ts, e), n-1, Ts, e)
def collect_epsilon(f, e=sp.Symbol('epsilon')):
N = sp.degree(f, e)
f_temp = f
collected_dict = {}
for i in range(N, 0, -1):
collected_term = f_temp.coeff(e**i)
collected_dict[e**i] = collected_term
delete_terms = sp.expand(e**i * collected_term)
f_temp -= delete_terms
collected_dict[e**0] = f_temp
return collected_dict
N = 3
f = sp.Function('f')
t = sp.Symbol('t', real=True)
# Define the symbolic parameters
epsilon = sp.symbols('epsilon')
T_i = sp.symbols('T_(0:' + str(N) + ')', real=True)
alpha_i = sp.symbols('alpha_(2:' + str(N+1) + ')', real=True)
omega0 = sp.Symbol('omega_0', real=True)
# x0 = sp.Function('x_0')(*T_i)
x1 = sp.Function('x_1')(*T_i)
x2 = sp.Function('x_2')(*T_i)
x3 = sp.Function('x_3')(*T_i)
# Expansion for x(t)
x_e = epsilon*x1 + epsilon**2 * x2 + epsilon**3 * x3
x_e
# Derivatives with time operators
xd = Dt(x_e, 1, T_i, epsilon)
xdd = Dt(x_e, 2, T_i, epsilon)
# EOM
EOM = xdd + sp.expand(omega0**2 * x_e) + sp.expand(sum([alpha_i[i-2] * x_e**i for i in range(2,N+1)]))
EOM
# Ordered Equations by epsilon
epsilon_Eq = collect_epsilon(EOM)
epsilon0_Eq = sp.Eq(epsilon_Eq[epsilon**0], 0)
epsilon0_Eq
epsilon1_Eq = sp.Eq(epsilon_Eq[epsilon**1], 0)
epsilon1_Eq
epsilon2_Eq = sp.Eq(epsilon_Eq[epsilon**2], 0)
epsilon2_Eq
epsilon3_Eq = sp.Eq(epsilon_Eq[epsilon**3], 0)
epsilon3_Eq
# Find the solution for epsilon-1
A = sp.Function('A')(*T_i[1::])
x1_sol = A * sp.exp(sp.I * omega0 * T_i[0]) + sp.conjugate(A) * sp.exp(-sp.I * omega0 * T_i[0])
x1_sol
# Update the epsilon-2 equation
epsilon2_Eq = epsilon2_Eq.subs(x1, x1_sol).doit()
epsilon2_Eq = sp.expand(epsilon2_Eq)
epsilon2_Eq
```
The secular terms will be cancelled out by
$$
\begin{gather*}
D_1 A = 0
\end{gather*}
$$
```
epsilon2_Eq = epsilon2_Eq.subs(sp.diff(A, T_i[1]), 0)
epsilon2_Eq
```
The particular solution of $x_2$ is
$$
\begin{gather*}
x_2 = \frac{\alpha_2 A^2}{3 \omega_0^2} e^{2i\omega_0 T_0} - \frac{\alpha_2 }{\omega^2_0}A\overline{A} + cc
\end{gather*}
$$
```
x2_p = alpha_i[0] * A**2 / 3/omega0**2 * sp.exp(2*sp.I*omega0*T_i[0]) - alpha_i[0]/omega0**2 * A * sp.conjugate(A)
x2_p
epsilon3_Eq = epsilon3_Eq.subs([
(sp.diff(A, T_i[1]), 0), (x1, x1_sol), (x2, x2_p)
]).doit()
epsilon3_Eq = sp.expand(epsilon3_Eq)
epsilon3_Eq = epsilon3_Eq.subs(sp.diff(A, T_i[1], 2), 0)
epsilon3_Eq
```
To cancel out the secular term we let
$$
\begin{gather*}
2i\omega_0 D_2 A + \dfrac{9\alpha_3 \omega_0^2 - 10\alpha_2^2 }{3\omega_0^2}A^2\overline{A} = 0
\end{gather*}
$$
Question: What if the secular terms arising from $i\omega_0$ and $-i \omega_0$ are handled together - do we get a single real equation?
Substitute the polar $A$
$$
\begin{gather*}
A = \dfrac{1}{2}a e^{i\beta}
\end{gather*}
$$
```
x3_sec = sp.Eq(2*sp.I*omega0*sp.diff(A, T_i[2]) + (9*alpha_i[1]*omega0**2 - 10*alpha_i[0]**2)/3/omega0**2 * A**2 * sp.conjugate(A), 0)
a = sp.Symbol('a', real=True)
beta = sp.Symbol('beta', real=True)
temp = x3_sec.subs(A, a*sp.exp(sp.I * beta)/2)
temp
temp = sp.expand(temp)
temp_im = sp.im(temp.lhs)
temp_im
temp_re = sp.re(temp.lhs)
temp_re
```
Thus separating into real and imaginary parts we obtain
$$
\begin{align*}
\omega_0 D_2 a &=0\\
\omega_0 a D_2 \beta + \dfrac{10\alpha_2^2 - 9\alpha_3\omega_0^2}{24\omega_0^2}a^3 &= 0
\end{align*}
$$
$a$ is a constant and
$$
\begin{gather*}
D_2\beta = - \dfrac{10\alpha_2^2 - 9\alpha_3\omega_0^2}{24\omega_0^3}a^2 \\
\beta = \dfrac{9\alpha_3\omega_0^2 - 10\alpha_2^2 }{24\omega_0^3}a^2 T_2 + \beta_0
\end{gather*}
$$
Here $\beta_0$ is a constant. Now using $T_2 = \epsilon^2 t$ we find that
$$
\begin{gather*}
A = \dfrac{1}{2}a \exp\left[ i\dfrac{9\alpha_3\omega_0^2 - 10\alpha_2^2 }{24\omega_0^3}a^2 \epsilon^2 t + i\beta_0 \right]
\end{gather*}
$$
and substituting the expressions for $x_1$ and $x_2$ into the expansion for $x$, we obtain the following final results
$$
\begin{gather*}
x = \epsilon a \cos(\omega t + \beta_0) - \dfrac{\epsilon^2 a^2\alpha_2}{2\omega_0^2}\left[ 1 - \dfrac{1}{3}\cos(2\omega t + 2\beta_0) \right] + O(\epsilon^3)
\end{gather*}
$$
where
$$
\begin{gather*}
\omega = \omega_0 \left[ 1 + \dfrac{9\alpha_3 \omega_0^2 - 10\alpha_2^2}{24\omega_0^4}\epsilon^2 a^2 \right] + O(\epsilon^3)
\end{gather*}
$$
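As a quick numerical cross-check of the last formula (a sketch added here, with the illustrative choice $\alpha_2 = 0$, $\alpha_3 = 1$, $\epsilon a = 0.1$ assumed), we can integrate $\ddot{x} + \omega_0^2 x + \alpha_2 x^2 + \alpha_3 x^3 = 0$ directly and compare the measured oscillation frequency with the multiple-scales prediction:

```
import math

omega0, alpha2, alpha3 = 1.0, 0.0, 1.0
amp = 0.1  # the combined small amplitude eps*a

# Multiple-scales prediction for the nonlinear frequency
omega_ms = omega0 * (1 + (9*alpha3*omega0**2 - 10*alpha2**2)
                     / (24*omega0**4) * amp**2)

def acc(x):
    return -omega0**2*x - alpha2*x**2 - alpha3*x**3

# Fixed-step RK4 from the turning point x(0) = amp, v(0) = 0
dt, steps = 1e-3, 200_000
x, v = amp, 0.0
crossings = []   # upward zero crossings of x, one per period
for n in range(steps):
    k1x, k1v = v, acc(x)
    k2x, k2v = v + 0.5*dt*k1v, acc(x + 0.5*dt*k1x)
    k3x, k3v = v + 0.5*dt*k2v, acc(x + 0.5*dt*k2x)
    k4x, k4v = v + dt*k3v, acc(x + dt*k3x)
    x_new = x + dt/6*(k1x + 2*k2x + 2*k3x + k4x)
    v_new = v + dt/6*(k1v + 2*k2v + 2*k3v + k4v)
    if x < 0 <= x_new:   # linear interpolation of the crossing time
        crossings.append(n*dt + dt*x/(x - x_new))
    x, v = x_new, v_new

period = (crossings[-1] - crossings[0]) / (len(crossings) - 1)
omega_num = 2*math.pi / period
# omega_num matches omega_ms to O((eps*a)^4)
```

For these parameters the prediction is $\omega \approx 1.00375$, and the numerically measured frequency agrees with it up to the next order of the expansion.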
| github_jupyter |
# Complex Graphs Metadata Example
## Prerequisites
* A kubernetes cluster with kubectl configured
* curl
* pygmentize
## Setup Seldon Core
Use the setup notebook to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html) to set up Seldon Core with an ingress.
```
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
```
## Used model
In this example notebook we will use a dummy node that can serve multiple purposes in the graph.
The model reads its metadata from an environment variable (this happens automatically).
The actual logic behind each endpoint is not the subject of this notebook.
We will concentrate only on the graph-level metadata that the orchestrator constructs from the metadata reported by each node.
```
%%writefile models/generic-node/Node.py
import logging
import random
import os
NUMBER_OF_ROUTES = int(os.environ.get("NUMBER_OF_ROUTES", "2"))
class Node:
def predict(self, features, names=[], meta=[]):
logging.info(f"model features: {features}")
logging.info(f"model names: {names}")
logging.info(f"model meta: {meta}")
return features.tolist()
def transform_input(self, features, names=[], meta=[]):
return self.predict(features, names, meta)
def transform_output(self, features, names=[], meta=[]):
return self.predict(features, names, meta)
def aggregate(self, features, names=[], meta=[]):
logging.info(f"model features: {features}")
logging.info(f"model names: {names}")
logging.info(f"model meta: {meta}")
return [x.tolist() for x in features]
def route(self, features, names=[], meta=[]):
logging.info(f"model features: {features}")
logging.info(f"model names: {names}")
logging.info(f"model meta: {meta}")
route = random.randint(0, NUMBER_OF_ROUTES - 1)  # randint is inclusive on both ends
logging.info(f"routing to: {route}")
return route
```
### Build image
Build the image using the provided Makefile:
```
cd models/generic-node
make build
```
If you are using `kind` you can use the `kind_image_install` target to load your image directly into your local cluster.
## Single Model
In the case of a single-node graph, the model-level `inputs` and `outputs` (`x` and `y`) simply become the deployment-level `graphinputs` and `graphoutputs`.

```
%%writefile graph-metadata/single.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: graph-metadata-single
spec:
name: test-deployment
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/metadata-generic-node:0.4
name: model
env:
- name: MODEL_METADATA
value: |
---
name: single-node
versions: [ generic-node/v0.4 ]
platform: seldon
inputs:
- messagetype: tensor
schema:
names: [node-input]
shape: [ 1 ]
outputs:
- messagetype: tensor
schema:
names: [node-output]
shape: [ 1 ]
graph:
name: model
type: MODEL
children: []
name: example
replicas: 1
!kubectl apply -f graph-metadata/single.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=graph-metadata-single -o jsonpath='{.items[0].metadata.name}')
```
### Graph Level
Graph-level metadata is available at the `api/v1.0/metadata` endpoint of your deployment:
```
import requests
import time
def getWithRetry(url):
for i in range(3):
r = requests.get(url)
if r.status_code == requests.codes.ok:
meta = r.json()
return meta
else:
print("Failed request with status code ",r.status_code)
time.sleep(3)
meta = getWithRetry("http://localhost:8003/seldon/seldon/graph-metadata-single/api/v1.0/metadata")
assert meta == {
"name": "example",
"models": {
"model": {
"name": "single-node",
"platform": "seldon",
"versions": ["generic-node/v0.4"],
"inputs": [
{"messagetype": "tensor", "schema": {"names": ["node-input"], "shape": [1]}}
],
"outputs": [
{"messagetype": "tensor", "schema": {"names": ["node-output"], "shape": [1]}}
],
}
},
"graphinputs": [
{"messagetype": "tensor", "schema": {"names": ["node-input"], "shape": [1]}}
],
"graphoutputs": [
{"messagetype": "tensor", "schema": {"names": ["node-output"], "shape": [1]}}
],
}
meta
```
### Model Level
Compare with the `model` metadata available at `api/v1.0/metadata/model`:
```
import requests
meta = getWithRetry("http://localhost:8003/seldon/seldon/graph-metadata-single/api/v1.0/metadata/model")
assert meta == {
"custom": {},
"name": "single-node",
"platform": "seldon",
"versions": ["generic-node/v0.4"],
"inputs": [{
"messagetype": "tensor", "schema": {"names": ["node-input"], "shape": [1]},
}],
"outputs": [{
"messagetype": "tensor", "schema": {"names": ["node-output"], "shape": [1]},
}],
}
meta
!kubectl delete -f graph-metadata/single.yaml
```
## Two-Level Graph
In a two-level graph the output of the first model is the input of the second model, `x2 = y1`.
The graph-level input `x` is the first model's input `x1`, and the graph-level output `y` is the last model's output `y2`.

```
%%writefile graph-metadata/two-levels.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: graph-metadata-two-levels
spec:
name: test-deployment
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/metadata-generic-node:0.4
name: node-one
env:
- name: MODEL_METADATA
value: |
---
name: node-one
versions: [ generic-node/v0.4 ]
platform: seldon
inputs:
- messagetype: tensor
schema:
names: [ a1, a2 ]
shape: [ 2 ]
outputs:
- messagetype: tensor
schema:
names: [ a3 ]
shape: [ 1 ]
- image: seldonio/metadata-generic-node:0.4
name: node-two
env:
- name: MODEL_METADATA
value: |
---
name: node-two
versions: [ generic-node/v0.4 ]
platform: seldon
inputs:
- messagetype: tensor
schema:
names: [ a3 ]
shape: [ 1 ]
outputs:
- messagetype: tensor
schema:
names: [b1, b2]
shape: [ 2 ]
graph:
name: node-one
type: MODEL
children:
- name: node-two
type: MODEL
children: []
name: example
replicas: 1
!kubectl apply -f graph-metadata/two-levels.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=graph-metadata-two-levels -o jsonpath='{.items[0].metadata.name}')
import requests
meta = getWithRetry("http://localhost:8003/seldon/seldon/graph-metadata-two-levels/api/v1.0/metadata")
assert meta == {
"name": "example",
"models": {
"node-one": {
"name": "node-one",
"platform": "seldon",
"versions": ["generic-node/v0.4"],
"inputs": [
{"messagetype": "tensor", "schema": {"names": ["a1", "a2"], "shape": [2]}}
],
"outputs": [
{"messagetype": "tensor", "schema": {"names": ["a3"], "shape": [1]}}
],
},
"node-two": {
"name": "node-two",
"platform": "seldon",
"versions": ["generic-node/v0.4"],
"inputs": [
{"messagetype": "tensor", "schema": {"names": ["a3"], "shape": [1]}}
],
"outputs": [
{"messagetype": "tensor", "schema": {"names": ["b1", "b2"], "shape": [2]}}
],
}
},
"graphinputs": [
{"messagetype": "tensor", "schema": {"names": ["a1", "a2"], "shape": [2]}}
],
"graphoutputs": [
{"messagetype": "tensor", "schema": {"names": ["b1", "b2"], "shape": [2]}}
],
}
meta
!kubectl delete -f graph-metadata/two-levels.yaml
```
## Combiner of two models
In a graph with a `combiner`, the request is first passed to the combiner's children, and their results are then aggregated by the `combiner` itself.
The input `x` is passed to both models, and their outputs `y1` and `y2` are passed to the combiner.
The combiner's output `y` is the final output of the graph.

```
%%writefile graph-metadata/combiner.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: graph-metadata-combiner
spec:
name: test-deployment
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/metadata-generic-node:0.4
name: node-combiner
env:
- name: MODEL_METADATA
value: |
---
name: node-combiner
versions: [ generic-node/v0.4 ]
platform: seldon
inputs:
- messagetype: tensor
schema:
names: [ c1 ]
shape: [ 1 ]
- messagetype: tensor
schema:
names: [ c2 ]
shape: [ 1 ]
outputs:
- messagetype: tensor
schema:
names: [combiner-output]
shape: [ 1 ]
- image: seldonio/metadata-generic-node:0.4
name: node-one
env:
- name: MODEL_METADATA
value: |
---
name: node-one
versions: [ generic-node/v0.4 ]
platform: seldon
inputs:
- messagetype: tensor
schema:
names: [a, b]
shape: [ 2 ]
outputs:
- messagetype: tensor
schema:
names: [ c1 ]
shape: [ 1 ]
- image: seldonio/metadata-generic-node:0.4
name: node-two
env:
- name: MODEL_METADATA
value: |
---
name: node-two
versions: [ generic-node/v0.4 ]
platform: seldon
inputs:
- messagetype: tensor
schema:
names: [a, b]
shape: [ 2 ]
outputs:
- messagetype: tensor
schema:
names: [ c2 ]
shape: [ 1 ]
graph:
name: node-combiner
type: COMBINER
children:
- name: node-one
type: MODEL
children: []
- name: node-two
type: MODEL
children: []
name: example
replicas: 1
!kubectl apply -f graph-metadata/combiner.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=graph-metadata-combiner -o jsonpath='{.items[0].metadata.name}')
import requests
meta = getWithRetry("http://localhost:8003/seldon/seldon/graph-metadata-combiner/api/v1.0/metadata")
assert meta == {
"name": "example",
"models": {
"node-combiner": {
"name": "node-combiner",
"platform": "seldon",
"versions": ["generic-node/v0.4"],
"inputs": [
{"messagetype": "tensor", "schema": {"names": ["c1"], "shape": [1]}},
{"messagetype": "tensor", "schema": {"names": ["c2"], "shape": [1]}},
],
"outputs": [
{"messagetype": "tensor", "schema": {"names": ["combiner-output"], "shape": [1]}}
],
},
"node-one": {
"name": "node-one",
"platform": "seldon",
"versions": ["generic-node/v0.4"],
"inputs": [
{"messagetype": "tensor", "schema": {"names": ["a", "b"], "shape": [2]}},
],
"outputs": [
{"messagetype": "tensor", "schema": {"names": ["c1"], "shape": [1]}}
],
},
"node-two": {
"name": "node-two",
"platform": "seldon",
"versions": ["generic-node/v0.4"],
"inputs": [
{"messagetype": "tensor", "schema": {"names": ["a", "b"], "shape": [2]}},
],
"outputs": [
{"messagetype": "tensor", "schema": {"names": ["c2"], "shape": [1]}}
],
}
},
"graphinputs": [
{"messagetype": "tensor", "schema": {"names": ["a", "b"], "shape": [2]}},
],
"graphoutputs": [
{"messagetype": "tensor", "schema": {"names": ["combiner-output"], "shape": [1]}}
],
}
meta
!kubectl delete -f graph-metadata/combiner.yaml
```
## Router with two models
In this example the request `x` is passed by the `router` to one of its children.
The router then returns that child's output, `y1` or `y2`, as the graph's output `y`.
Here we assume that all children accept a similarly structured input and return a similarly structured output.

```
%%writefile graph-metadata/router.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: graph-metadata-router
spec:
name: test-deployment
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/metadata-generic-node:0.4
name: node-router
- image: seldonio/metadata-generic-node:0.4
name: node-one
env:
- name: MODEL_METADATA
value: |
---
name: node-one
versions: [ generic-node/v0.4 ]
platform: seldon
inputs:
- messagetype: tensor
schema:
names: [ a, b ]
shape: [ 2 ]
outputs:
- messagetype: tensor
schema:
names: [ node-output ]
shape: [ 1 ]
- image: seldonio/metadata-generic-node:0.4
name: node-two
env:
- name: MODEL_METADATA
value: |
---
name: node-two
versions: [ generic-node/v0.4 ]
platform: seldon
inputs:
- messagetype: tensor
schema:
names: [ a, b ]
shape: [ 2 ]
outputs:
- messagetype: tensor
schema:
names: [ node-output ]
shape: [ 1 ]
graph:
name: node-router
type: ROUTER
children:
- name: node-one
type: MODEL
children: []
- name: node-two
type: MODEL
children: []
name: example
replicas: 1
!kubectl apply -f graph-metadata/router.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=graph-metadata-router -o jsonpath='{.items[0].metadata.name}')
import requests
meta = getWithRetry("http://localhost:8003/seldon/seldon/graph-metadata-router/api/v1.0/metadata")
assert meta == {
"name": "example",
"models": {
'node-router': {
'name': 'seldonio/metadata-generic-node',
'versions': ['0.4'],
'inputs': [],
'outputs': [],
},
"node-one": {
"name": "node-one",
"platform": "seldon",
"versions": ["generic-node/v0.4"],
"inputs": [
{"messagetype": "tensor", "schema": {"names": ["a", "b"], "shape": [2]}}
],
"outputs": [
{"messagetype": "tensor", "schema": {"names": ["node-output"], "shape": [1]}}
],
},
"node-two": {
"name": "node-two",
"platform": "seldon",
"versions": ["generic-node/v0.4"],
"inputs": [
{"messagetype": "tensor", "schema": {"names": ["a", "b"], "shape": [2]}}
],
"outputs": [
{"messagetype": "tensor", "schema": {"names": ["node-output"], "shape": [1]}}
],
}
},
"graphinputs": [
{"messagetype": "tensor", "schema": {"names": ["a", "b"], "shape": [2]}}
],
"graphoutputs": [
{"messagetype": "tensor", "schema": {"names": ["node-output"], "shape": [1]}}
],
}
meta
!kubectl delete -f graph-metadata/router.yaml
```
## Input Transformer
Input transformers work almost exactly the same as chained nodes; see the two-level example above.
The following graph is drawn in a way that is supposed to make the next example (the output transformer) more intuitive.

```
%%writefile graph-metadata/input-transformer.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: graph-metadata-input
spec:
name: test-deployment
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/metadata-generic-node:0.4
name: node-input-transformer
env:
- name: MODEL_METADATA
value: |
---
name: node-input-transformer
versions: [ generic-node/v0.4 ]
platform: seldon
inputs:
- messagetype: tensor
schema:
names: [transformer-input]
shape: [ 1 ]
outputs:
- messagetype: tensor
schema:
names: [transformer-output]
shape: [ 1 ]
- image: seldonio/metadata-generic-node:0.4
name: node
env:
- name: MODEL_METADATA
value: |
---
name: node
versions: [ generic-node/v0.4 ]
platform: seldon
inputs:
- messagetype: tensor
schema:
names: [transformer-output]
shape: [ 1 ]
outputs:
- messagetype: tensor
schema:
names: [node-output]
shape: [ 1 ]
graph:
name: node-input-transformer
type: TRANSFORMER
children:
- name: node
type: MODEL
children: []
name: example
replicas: 1
!kubectl apply -f graph-metadata/input-transformer.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=graph-metadata-input -o jsonpath='{.items[0].metadata.name}')
import requests
meta = getWithRetry("http://localhost:8003/seldon/seldon/graph-metadata-input/api/v1.0/metadata")
assert meta == {
"name": "example",
"models": {
"node-input-transformer": {
"name": "node-input-transformer",
"platform": "seldon",
"versions": ["generic-node/v0.4"],
"inputs": [{
"messagetype": "tensor", "schema": {"names": ["transformer-input"], "shape": [1]},
}],
"outputs": [{
"messagetype": "tensor", "schema": {"names": ["transformer-output"], "shape": [1]},
}],
},
"node": {
"name": "node",
"platform": "seldon",
"versions": ["generic-node/v0.4"],
"inputs": [{
"messagetype": "tensor", "schema": {"names": ["transformer-output"], "shape": [1]},
}],
"outputs": [{
"messagetype": "tensor", "schema": {"names": ["node-output"], "shape": [1]},
}],
}
},
"graphinputs": [{
"messagetype": "tensor", "schema": {"names": ["transformer-input"], "shape": [1]}
}],
"graphoutputs": [{
"messagetype": "tensor", "schema": {"names": ["node-output"], "shape": [1]}
}],
}
meta
!kubectl delete -f graph-metadata/input-transformer.yaml
```
## Output Transformer
Output transformers work almost exactly opposite to the chained nodes in the two-level example above.
The input `x` is first passed to the model that is the child of the `output-transformer`, and the model's output is then passed to the transformer itself.

```
%%writefile graph-metadata/output-transformer.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: graph-metadata-output
spec:
name: test-deployment
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/metadata-generic-node:0.4
name: node-output-transformer
env:
- name: MODEL_METADATA
value: |
---
name: node-output-transformer
versions: [ generic-node/v0.4 ]
platform: seldon
inputs:
- messagetype: tensor
schema:
names: [transformer-input]
shape: [ 1 ]
outputs:
- messagetype: tensor
schema:
names: [transformer-output]
shape: [ 1 ]
- image: seldonio/metadata-generic-node:0.4
name: node
env:
- name: MODEL_METADATA
value: |
---
name: node
versions: [ generic-node/v0.4 ]
platform: seldon
inputs:
- messagetype: tensor
schema:
names: [node-input]
shape: [ 1 ]
outputs:
- messagetype: tensor
schema:
names: [transformer-input]
shape: [ 1 ]
graph:
name: node-output-transformer
type: OUTPUT_TRANSFORMER
children:
- name: node
type: MODEL
children: []
name: example
replicas: 1
!kubectl apply -f graph-metadata/output-transformer.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=graph-metadata-output -o jsonpath='{.items[0].metadata.name}')
import requests
meta = getWithRetry("http://localhost:8003/seldon/seldon/graph-metadata-output/api/v1.0/metadata")
assert meta == {
"name": "example",
"models": {
"node-output-transformer": {
"name": "node-output-transformer",
"platform": "seldon",
"versions": ["generic-node/v0.4"],
"inputs": [{
"messagetype": "tensor", "schema": {"names": ["transformer-input"], "shape": [1]},
}],
"outputs": [{
"messagetype": "tensor", "schema": {"names": ["transformer-output"], "shape": [1]},
}],
},
"node": {
"name": "node",
"platform": "seldon",
"versions": ["generic-node/v0.4"],
"inputs": [{
"messagetype": "tensor", "schema": {"names": ["node-input"], "shape": [1]},
}],
"outputs": [{
"messagetype": "tensor", "schema": {"names": ["transformer-input"], "shape": [1]},
}],
}
},
"graphinputs": [{
"messagetype": "tensor", "schema": {"names": ["node-input"], "shape": [1]}
}],
"graphoutputs": [{
"messagetype": "tensor", "schema": {"names": ["transformer-output"], "shape": [1]}
}],
}
meta
!kubectl delete -f graph-metadata/output-transformer.yaml
```
| github_jupyter |
# INFO
This is my solution for the fourth homework problem.
# **SOLUTION**
# Description
I will use a network with:
- an input layer with **2 neurons** (two input variables)
- **one** hidden layer with **2 neurons** (I need to split the plane in a nonlinear way, creating a U-shaped region containing the diagonal points)
- an output layer with **1 neuron** (the result - active or inactive)

As the activation function I will use the sigmoid: it is simple, maps values into (0, 1), and has a simple derivative.
# CODE
```
import numpy as np
```
Let's define our sigmoid function and its derivative
```
def sigmoid(x, derivative=False):
if derivative:
return x * (1 - x)
else:
return 1 / (1 + np.exp(-x))
```
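As a quick sanity check (a small added sketch; note that the `derivative` branch expects the sigmoid's *output*, not its raw input), the closed-form derivative can be compared against a central finite difference:

```
import math

def sigmoid(x, derivative=False):
    # repeated here so the cell is self-contained
    if derivative:
        return x * (1 - x)  # x must already be sigmoid(raw_input)
    return 1 / (1 + math.exp(-x))

x, h = 0.3, 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
analytic = sigmoid(sigmoid(x), derivative=True)
# the two agree to roughly 1e-10
```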
Now, the number of neurons per layer
```
layers_sizes = np.array([2, 2, 1])
```
And the layer initialization function
```
def init_layers(sizes):
weights = [np.random.uniform(size=size) for size in zip(sizes[0:-1], sizes[1:])]
biases = [np.random.uniform(size=(size, 1)) for size in sizes[1:]]
return weights, biases
```
A function which executes the network (forward propagation).
It takes the input layer, the weights and biases of the following layers, and the activation function, and returns the outputs of all layers.
```
def execute(input, weights, biases, activation_f):
result = [input]
previous_layer = input
for weight, bias in zip(weights, biases):
executed_layer = execute_layer(previous_layer, weight, bias, activation_f)
previous_layer = executed_layer
result.append(executed_layer)
return result
def execute_layer(input_layer, weight, bias, activation_f):
layer_activation = np.dot(input_layer.T, weight).T + bias
return activation_f(layer_activation)
```
Now the backpropagation function.
It takes the layers' outputs, the weights, the biases, the activation function, the expected output, and the learning rate.
```
def backpropagation(layers_outputs,
weights,
biases,
activation_f,
expected_output,
learning_rate):
updated_weights = weights.copy()
updated_biases = biases.copy()
predicted_output = layers_outputs[-1]
output_error = 2 * (expected_output - predicted_output)
output_delta = output_error * activation_f(predicted_output, True)
updated_weights[-1] += layers_outputs[-2].dot(output_delta.T) * learning_rate
updated_biases[-1] += output_delta * learning_rate
next_layer_delta = output_delta
for layer_id in reversed(range(1, len(layers_outputs)-1)):
weight_id = layer_id - 1
error = np.dot(weights[weight_id+1], next_layer_delta)
delta = error * activation_f(layers_outputs[layer_id], True)
updated_weights[weight_id] += layers_outputs[layer_id-1].dot(delta.T) * learning_rate
updated_biases[weight_id] += delta * learning_rate
next_layer_delta = delta
return updated_weights, updated_biases
```
---
Create test set:
```
test_set_X = [np.array([[0], [0]]), np.array([[1], [0]]), np.array([[0], [1]]), np.array([[1], [1]])]
test_set_Y = [np.array([[0]]), np.array([[1]]), np.array([[1]]), np.array([[0]])]
```
And training parameters:
```
learning_rate = 0.07
number_of_iterations = 30000
```
And train our model:
```
weights, biases = init_layers(layers_sizes)
errors = []
for iteration in range(number_of_iterations):
error = 0
for test_x, test_y in zip(test_set_X, test_set_Y):
values = execute(test_x, weights, biases, sigmoid)
predicted_y = values[-1]
error += np.sum((predicted_y - test_y) ** 2) / len(test_y)
new_weights, new_biases = backpropagation(values, weights, biases, sigmoid, test_y, learning_rate)
weights = new_weights
biases = new_biases
print("iteration number {} done! Error: {}".format(iteration, error / len(test_set_X)))
errors.append(error / len(test_set_X))
```
And plot the error over iterations
```
import matplotlib.pyplot as plt
plt.plot(errors)
plt.xlabel('iteration')
plt.ylabel('error')
plt.show()
```
And print results
```
print("iterations: {}, learning rate: {}".format(number_of_iterations, learning_rate))
for test_x, test_y in zip(test_set_X, test_set_Y):
values = execute(test_x, weights, biases, sigmoid)
predicted_y = values[-1]
print("{} xor {} = {} ({} confidence)".format(test_x[0][0], test_x[1][0], round(predicted_y[0][0]), predicted_y))
```
| github_jupyter |
# Import KBase and cFBA
```
# import kbase
import os
local_cobrakbase_path = 'C:\\Users\\Andrew Freiburger\\Dropbox\\My PC (DESKTOP-M302P50)\\Documents\\UVic Civil Engineering\\Internships\\Agronne\\cobrakbase'
os.environ["HOME"] = local_cobrakbase_path
import cobrakbase
token = 'JOSNYJGASTV5BGELWQTUSATE4TNHZ66U'
kbase = cobrakbase.KBaseAPI(token)
ftp_path = '../../../ModelSEEDDatabase'
# import cFBA
%run ../../modelseedpy/core/mscommunity.py
%run ../../modelseedpy/core/msgapfill.py
%matplotlib inline
```
# 2-member Zahmeeth model
## Unconstained model
### Define and execute the model
```
# import the model
# from modelseedpy.fbapkg import kbasemediapkg
modelInfo_2 = ["CMM_iAH991V2_iML1515.kb",40576]
mediaInfo_2 = ["Btheta_Ecoli_minimal_media",40576]
model = kbase.get_from_ws(modelInfo_2[0],modelInfo_2[1])
media = kbase.get_from_ws(mediaInfo_2[0],mediaInfo_2[1])
# kmp = kbasemediapkg.KBaseMediaPkg(self.model)
# kmp.build_package(media)
# simulate and visualize the model
cfba = MSCommunity(model)
cfba.drain_fluxes(media)
cfba.gapfill(media)
cfba.constrain(media)
solution = cfba.run()
cfba.compute_interactions(solution)
cfba.visualize()
```
## FullThermo-constrained model
### Define and execute the model
```
# import the model
# from modelseedpy.fbapkg import kbasemediapkg
modelInfo_2 = ["CMM_iAH991V2_iML1515.kb",40576]
mediaInfo_2 = ["Btheta_Ecoli_minimal_media",40576]
model = kbase.get_from_ws(modelInfo_2[0],modelInfo_2[1])
media = kbase.get_from_ws(mediaInfo_2[0],mediaInfo_2[1])
# kmp = kbasemediapkg.KBaseMediaPkg(self.model)
# kmp.build_package(media)
# simulate and visualize the model
cfba = MSCommunity(model)
cfba.drain_fluxes(media)
cfba.gapfill(media)
cfba.constrain(media, msdb_path_for_fullthermo = ftp_path, verbose = False)
solution = cfba.run()
cfba.compute_interactions(solution)
cfba.visualize()
```
# 3-member Electrosynth model
## Unconstrained model
### Define and execute the model
```
# import the model
# from modelseedpy.fbapkg import kbasemediapkg
modelInfo_2 = ['electrosynth_comnty.mdl.gf.2021',93204]
mediaInfo_2 = ["CO2_minimal",93204]
model = kbase.get_from_ws(modelInfo_2[0],modelInfo_2[1])
media = kbase.get_from_ws(mediaInfo_2[0],mediaInfo_2[1])
# kmp = kbasemediapkg.KBaseMediaPkg(self.model)
# kmp.build_package(media)
# simulate and visualize the model
cfba = MSCommunity(model)
cfba.drain_fluxes(media)
cfba.gapfill(media)
cfba.constrain(media)
solution = cfba.run()
cfba.compute_interactions(solution)
cfba.visualize()
```
## FullThermo-constrained model
### Define and execute the model
```
# import the model
# from modelseedpy.fbapkg import kbasemediapkg
modelInfo_2 = ['electrosynth_comnty.mdl.gf.2021',93204]
mediaInfo_2 = ["CO2_minimal",93204]
model = kbase.get_from_ws(modelInfo_2[0],modelInfo_2[1])
media = kbase.get_from_ws(mediaInfo_2[0],mediaInfo_2[1])
# kmp = kbasemediapkg.KBaseMediaPkg(self.model)
# kmp.build_package(media)
# simulate and visualize the model
cfba = MSCommunity(model)
cfba.drain_fluxes(media)
cfba.gapfill(media)
cfba.constrain(media, msdb_path_for_fullthermo = ftp_path, verbose = False)
solution = cfba.run()
cfba.compute_interactions(solution)
cfba.visualize()
```
# 2-member Aimee model
## Unconstrained model
### Chitin media
```
# import the model
%run ../../modelseedpy/core/mscommunity.py
# from modelseedpy.fbapkg import kbasemediapkg
modelInfo_2 = ['Cjaponicus_Ecoli_Community',97055]
mediaInfo_2 = ["ChitinM9Media",97055]
model = kbase.get_from_ws(modelInfo_2[0],modelInfo_2[1])
media = kbase.get_from_ws(mediaInfo_2[0],mediaInfo_2[1])
# kmp = kbasemediapkg.KBaseMediaPkg(self.model)
# kmp.build_package(media)
# simulate and visualize the model
cfba = MSCommunity(model)
cfba.drain_fluxes(media)
cfba.gapfill(media)
cfba.constrain(media)
solution = cfba.run()
cfba.compute_interactions(solution)
cfba.visualize()
```
## FullThermo-constrained model
### Define and execute the model
```
# import the model
# from modelseedpy.fbapkg import kbasemediapkg
modelInfo_2 = ['Cjaponicus_Ecoli_Community',97055]
mediaInfo_2 = ["ChitinM9Media",97055]
model = kbase.get_from_ws(modelInfo_2[0],modelInfo_2[1])
media = kbase.get_from_ws(mediaInfo_2[0],mediaInfo_2[1])
# kmp = kbasemediapkg.KBaseMediaPkg(self.model)
# kmp.build_package(media)
# simulate and visualize the model
cfba = MSCommunity(model)
cfba.drain_fluxes(media)
cfba.gapfill(media)
cfba.constrain(media, msdb_path_for_fullthermo = ftp_path, verbose = False)
solution = cfba.run()
cfba.compute_interactions(solution)
cfba.visualize()
```
# 7-member Hotlake model
## Unconstrained model
### Define and execute the model
```
# import the model
%run ../../modelseedpy/core/mscommunity.py
# from modelseedpy.fbapkg import kbasemediapkg
modelInfo_2 = ["Hot_Lake_seven.mdl",93544]
mediaInfo_2 = ["HotLakeMedia",93544]
model = kbase.get_from_ws(modelInfo_2[0],modelInfo_2[1])
media = kbase.get_from_ws(mediaInfo_2[0],mediaInfo_2[1])
# kmp = kbasemediapkg.KBaseMediaPkg(self.model)
# kmp.build_package(media)
# simulate and visualize the model
cfba = MSCommunity(model)
cfba.drain_fluxes(media)
cfba.gapfill(media)
cfba.constrain(media)
solution = cfba.run()
cfba.compute_interactions(solution)
cfba.visualize()
```
## FullThermo-constrained model
### Define and execute the model
```
# import the model
%run ../../modelseedpy/core/mscommunity.py
# from modelseedpy.fbapkg import kbasemediapkg
modelInfo_2 = ["Hot_Lake_seven.mdl",93544]
mediaInfo_2 = ["HotLakeMedia",93544]
model = kbase.get_from_ws(modelInfo_2[0],modelInfo_2[1])
media = kbase.get_from_ws(mediaInfo_2[0],mediaInfo_2[1])
# kmp = kbasemediapkg.KBaseMediaPkg(self.model)
# kmp.build_package(media)
# simulate and visualize the model
cfba = MSCommunity(model)
cfba.drain_fluxes(media)
cfba.gapfill(media)
cfba.constrain(media, msdb_path_for_fullthermo = ftp_path, verbose = False)
solution = cfba.run()
cfba.compute_interactions(solution)
cfba.visualize()
```
| github_jupyter |
<a href="https://colab.research.google.com/github/enakai00/rl_book_solutions/blob/master/Chapter06/SARSA_vs_Q_Learning_vs_MC.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
from numpy import random
from pandas import DataFrame
import copy
class Car:
def __init__(self):
self.path = []
self.actions = [(0, 1), (1, 0), (0, -1), (-1, 0)]
self.episodes = [0]
self.q = {}
self.c ={}
self.restart()
def restart(self):
self.x, self.y = 0, 3
self.path = []
def get_state(self):
return self.x, self.y
def show_path(self):
result = [[' ' for x in range(10)] for y in range(7)]
for c, (x, y, a) in enumerate(self.path):
result[y][x] = str(c)[-1]
result[3][7] = 'G'
return result
def add_episode(self, c=0):
self.episodes.append(self.episodes[-1]+c)
def move(self, action):
self.path.append((self.x, self.y, action))
vx, vy = self.actions[action]
if self.x >= 3 and self.x <= 8:
vy -= 1
if self.x >= 6 and self.x <= 7:
vy -= 1
_x, _y = self.x + vx, self.y + vy
if _x < 0 or _x > 9:
_x = self.x
if _y < 0 or _y > 6:
_y = self.y
self.x, self.y = _x, _y
if (self.x, self.y) == (7, 3): # Finish
return True
return False
def get_action(car, epsilon, default_q=0):
if random.random() < epsilon:
a = random.randint(0, len(car.actions))
else:
a = optimal_action(car, default_q)
return a
def optimal_action(car, default_q=0):
optimal = 0
q_max = 0
initial = True
x, y = car.get_state()
for a in range(len(car.actions)):
sa = "{:02},{:02}:{:02}".format(x, y, a)
if sa not in car.q.keys():
car.q[sa] = default_q
if initial or car.q[sa] > q_max:
q_max = car.q[sa]
optimal = a
initial = False
return optimal
def update_q(car, x, y, a, epsilon, q_learning=False):
sa = "{:02},{:02}:{:02}".format(x, y, a)
if q_learning:
_a = optimal_action(car)
else:
_a = get_action(car, epsilon)
_x, _y = car.get_state()
sa_next = "{:02},{:02}:{:02}".format(_x, _y, _a)
if sa not in car.q.keys():
car.q[sa] = 0
if sa_next not in car.q.keys():
car.q[sa_next] = 0
car.q[sa] += 0.5 * (-1 + car.q[sa_next] - car.q[sa])
if q_learning:
_a = get_action(car, epsilon)
return _a
def trial(car, epsilon = 0.1, q_learning=False):
car.restart()
a = get_action(car, epsilon)
while True:
x, y = car.get_state()
finished = car.move(a)
if finished:
car.add_episode(1)
sa = "{:02},{:02}:{:02}".format(x, y, a)
if sa not in car.q.keys():
car.q[sa] = 0
car.q[sa] += 0.5 * (-1 + 0 - car.q[sa])
break
a = update_q(car, x, y, a, epsilon, q_learning)
car.add_episode(0)
def trial_mc(car, epsilon=0.1):
car.restart()
while True:
x, y = car.get_state()
state = "{:02},{:02}".format(x, y)
a = get_action(car, epsilon, default_q=-10**10)
finished = car.move(a)
if finished:
car.add_episode(1)
g = 0
w = 1
path = copy.copy(car.path)
path.reverse()
for x, y, a in path:
car.x, car.y = x, y
opt_a = optimal_action(car, default_q=-10**10)
sa = "{:02},{:02}:{:02}".format(x, y, a)
g += -1 # Reward = -1 for each step
if sa not in car.c.keys():
car.c[sa] = w
car.q[sa] = g
else:
car.c[sa] += w
car.q[sa] += w*(g-car.q[sa])/car.c[sa]
if opt_a != a:
break
w = w / (1 - epsilon + epsilon/len(car.actions))
break
car.add_episode(0)
car1, car2, car3 = Car(), Car(), Car()
while True:
trial(car1)
if len(car1.episodes) >= 10000:
break
print(car1.episodes[-1])
while True:
trial(car2, q_learning=True)
if len(car2.episodes) >= 10000:
break
print(car2.episodes[-1])
while True:
trial_mc(car3)
if len(car3.episodes) >= 200000:
break
print(car3.episodes[-1])
DataFrame({'SARSA': car1.episodes[:8001],
'Q-Learning': car2.episodes[:8001],
'MC': car3.episodes[:8001]}
).plot()
trial(car1, epsilon=0)
print('SARSA:', len(car1.path))
print ("#" * 12)
for _ in map(lambda lst: ''.join(lst), car1.show_path()):
print('#' + _ + '#')
print ("#" * 12)
print ()
trial(car2, epsilon=0)
print('Q-Learning:', len(car2.path))
print ("#" * 12)
for _ in map(lambda lst: ''.join(lst), car2.show_path()):
print('#' + _ + '#')
print ("#" * 12)
print ()
trial_mc(car3, epsilon=0)
print('MC:', len(car3.path))
print ("#" * 12)
for _ in map(lambda lst: ''.join(lst), car3.show_path()):
print('#' + _ + '#')
print ("#" * 12)
print ()
```
| github_jupyter |
# Regression with Amazon SageMaker XGBoost (Parquet input)
This notebook exhibits the use of a Parquet dataset with the SageMaker XGBoost algorithm. The example here is almost the same as [Regression with Amazon SageMaker XGBoost algorithm](xgboost_abalone.ipynb): it tackles the exact same problem with the same solution, but has been modified for a Parquet input.
The original notebook provides details of the dataset and the machine learning use case.
```
import os
import boto3
import re
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
region = boto3.Session().region_name
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket here if you wish.
bucket = sagemaker.Session().default_bucket()
prefix = 'sagemaker/DEMO-xgboost-parquet'
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region, bucket)
```
We will use the [PyArrow](https://arrow.apache.org/docs/python/) library to store the Abalone dataset in the Parquet format.
```
!pip install pyarrow
%%time
import numpy as np
import pandas as pd
import urllib.request
from sklearn.datasets import load_svmlight_file
# Download the dataset and load into a pandas dataframe
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data", FILE_DATA)
feature_names=['Sex',
'Length',
'Diameter',
'Height',
'Whole weight',
'Shucked weight',
'Viscera weight',
'Shell weight',
'Rings']
data = pd.read_csv(FILE_DATA,
header=None,
names=feature_names)
# SageMaker XGBoost has the convention of label in the first column
data = data[feature_names[-1:] + feature_names[:-1]]
# Split the downloaded data into train/test dataframes
train, test = np.split(data.sample(frac=1), [int(.8*len(data))])
# requires PyArrow installed
train.to_parquet('abalone_train.parquet')
test.to_parquet('abalone_test.parquet')
%%time
import sagemaker
sagemaker.Session().upload_data('abalone_train.parquet',
bucket=bucket,
key_prefix=prefix+'/'+'train')
sagemaker.Session().upload_data('abalone_test.parquet',
bucket=bucket,
key_prefix=prefix+'/'+'test')
```
We obtain the new container by specifying the framework version (0.90-1). This version specifies the upstream XGBoost framework version (0.90) and an additional SageMaker version (1). If you have an existing XGBoost workflow based on the previous (0.72) container, this would be the only change necessary to get the same workflow working with the new container.
```
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(region, 'xgboost', '0.90-1')
```
After setting training parameters, we kick off training and poll for status until training is completed, which in this example takes between 5 and 6 minutes.
```
%%time
import time
import boto3
from time import gmtime, strftime
job_name = 'xgboost-parquet-example-training-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = {
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "Pipe"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m5.24xlarge",
"VolumeSizeInGB": 20
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"10"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + "/train",
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "application/x-parquet",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + "/test",
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "application/x-parquet",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker', region_name=region)
client.create_training_job(**create_training_params)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
%matplotlib inline
from sagemaker.analytics import TrainingJobAnalytics
metric_name = 'validation:rmse'
metrics_dataframe = TrainingJobAnalytics(training_job_name=job_name, metric_names=[metric_name]).dataframe()
ax = metrics_dataframe.plot(kind='line', figsize=(12,5), x='timestamp', y='value', style='b.', legend=False)
ax.set_ylabel(metric_name);
```
| github_jupyter |
# Model Centric Federated Learning - MNIST Example: Create Plan
This notebook is an example of creating a simple model and a training plan
for solving MNIST classification in model-centric (aka cross-device) federated learning fashion.
It consists of the following steps:
* Defining the model
* Defining the Training Plan
* Defining the Averaging Plan & FL configuration
* Hosting everything to PyGrid
* Extra: demonstration of PyGrid API
The process of training a hosted model using the existing Python FL worker is demonstrated in
the "[MCFL - Execute Plan](mcfl_execute_plan.ipynb)" notebook.
```
# stdlib
import base64
import json
# third party
import jwt
import requests
import torch as th
from websocket import create_connection
# syft absolute
import syft as sy
from syft import deserialize
from syft import serialize
from syft.core.plan.plan_builder import ROOT_CLIENT
from syft.core.plan.plan_builder import make_plan
from syft.federated.model_centric_fl_client import ModelCentricFLClient
from syft.lib.python.int import Int
from syft.lib.python.list import List
from syft.proto.core.plan.plan_pb2 import Plan as PlanPB
from syft.proto.lib.python.list_pb2 import List as ListPB
th.random.manual_seed(42)
```
## Step 1: Define the model
This model will train on MNIST data; it is very simple, yet enough to demonstrate the learning process.
It consists of two linear layers with a ReLU activation between them:
* Linear 784x100
* ReLU
* Linear 100x10
```
class MLP(sy.Module):
def __init__(self, torch_ref):
super().__init__(torch_ref=torch_ref)
self.l1 = self.torch_ref.nn.Linear(784, 100)
self.a1 = self.torch_ref.nn.ReLU()
self.l2 = self.torch_ref.nn.Linear(100, 10)
def forward(self, x):
x_reshaped = x.view(-1, 28 * 28)
l1_out = self.a1(self.l1(x_reshaped))
l2_out = self.l2(l1_out)
return l2_out
```
## Step 2: Define Training Plan
```
def set_params(model, params):
for p, p_new in zip(model.parameters(), params):
p.data = p_new.data
def cross_entropy_loss(logits, targets, batch_size):
norm_logits = logits - logits.max()
log_probs = norm_logits - norm_logits.exp().sum(dim=1, keepdim=True).log()
return -(targets * log_probs).sum() / batch_size
def sgd_step(model, lr=0.1):
with ROOT_CLIENT.torch.no_grad():
for p in model.parameters():
p.data = p.data - lr * p.grad
p.grad = th.zeros_like(p.grad.get())
local_model = MLP(th)
@make_plan
def train(
xs=th.rand([64 * 3, 1, 28, 28]),
ys=th.randint(0, 10, [64 * 3, 10]),
params=List(local_model.parameters()),
):
model = local_model.send(ROOT_CLIENT)
set_params(model, params)
for i in range(1):
indices = th.tensor(range(64 * i, 64 * (i + 1)))
x, y = xs.index_select(0, indices), ys.index_select(0, indices)
out = model(x)
loss = cross_entropy_loss(out, y, 64)
loss.backward()
sgd_step(model)
return model.parameters()
```
## Step 3: Define Averaging Plan
The Averaging Plan is executed by PyGrid at the end of the cycle
to average the _diffs_ submitted by workers, update the model,
and create a new checkpoint for the next cycle.
A _diff_ is the difference between the client-trained
model params and the original model params,
so it has the same number of tensors, with the same shapes,
as the model parameters.
We define a Plan that processes one diff at a time.
Such Plans require the `iterative_plan` flag set to `True`
in `server_config` when hosting the FL model to PyGrid.
The Plan below calculates a simple running mean of each parameter.
```
@make_plan
def avg_plan(
avg=List(local_model.parameters()), item=List(local_model.parameters()), num=Int(0)
):
new_avg = []
for i, param in enumerate(avg):
new_avg.append((avg[i] * num + item[i]) / (num + 1))
return new_avg
```
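The iterative update in `avg_plan` above is just the running-mean recurrence: applied once per diff, it reproduces the arithmetic mean of all diffs. A plain-Python sketch, with scalar values standing in for parameter tensors, illustrates the equivalence:

```python
# Running-mean recurrence used by the averaging plan, on plain floats
# standing in for parameter tensors (illustrative sketch only).
def avg_step(avg, item, num):
    return (avg * num + item) / (num + 1)

diffs = [1.0, 3.0, 8.0]  # hypothetical per-worker diffs for one parameter
avg = 0.0
for num, diff in enumerate(diffs):
    avg = avg_step(avg, diff, num)

print(avg)                      # 4.0
print(sum(diffs) / len(diffs))  # 4.0 -- same as the batch mean
```

This is why processing one diff at a time (with `iterative_plan: True`) yields the same result as averaging all diffs at once, while never requiring PyGrid to hold more than one diff in memory.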
# Config & keys
```
name = "mnist"
version = "1.0"
client_config = {
"name": name,
"version": version,
"batch_size": 64,
"lr": 0.1,
"max_updates": 1, # custom syft.js option that limits number of training loops per worker
}
server_config = {
"min_workers": 2,
"max_workers": 2,
"pool_selection": "random",
"do_not_reuse_workers_until_cycle": 6,
"cycle_length": 28800, # max cycle length in seconds
"num_cycles": 30, # max number of cycles
"max_diffs": 1, # number of diffs to collect before avg
"minimum_upload_speed": 0,
"minimum_download_speed": 0,
"iterative_plan": True, # tells PyGrid that avg plan is executed per diff
}
def read_file(fname):
with open(fname, "r") as f:
return f.read()
private_key = read_file("example_rsa").strip()
public_key = read_file("example_rsa.pub").strip()
server_config["authentication"] = {
"type": "jwt",
"pub_key": public_key,
}
```
## Step 4: Host in PyGrid
Let's now host everything in PyGrid so that it can be accessed by worker libraries (syft.js, KotlinSyft, SwiftSyft, or even PySyft itself).
# Auth
```
grid_address = "localhost:7000"
grid = ModelCentricFLClient(address=grid_address, secure=False)
grid.connect()
```
# Host
If the process already exists, you might need to clear the db. To do that, set the path below correctly and run:
```
# !rm PyGrid/apps/domain/src/nodedatabase.db
response = grid.host_federated_training(
model=local_model,
client_plans={"training_plan": train},
client_protocols={},
server_averaging_plan=avg_plan,
client_config=client_config,
server_config=server_config,
)
response
```
# Authenticate for cycle
```
# Helper function to make WS requests
def sendWsMessage(data):
ws = create_connection("ws://" + grid_address)
ws.send(json.dumps(data))
message = ws.recv()
return json.loads(message)
auth_token = jwt.encode({}, private_key, algorithm="RS256").decode("ascii")
auth_request = {
"type": "model-centric/authenticate",
"data": {
"model_name": name,
"model_version": version,
"auth_token": auth_token,
},
}
auth_response = sendWsMessage(auth_request)
auth_response
```
# Do cycle request
```
cycle_request = {
"type": "model-centric/cycle-request",
"data": {
"worker_id": auth_response["data"]["worker_id"],
"model": name,
"version": version,
"ping": 1,
"download": 10000,
"upload": 10000,
},
}
cycle_response = sendWsMessage(cycle_request)
print("Cycle response:", json.dumps(cycle_response, indent=2).replace("\\n", "\n"))
```
# Download model
```
worker_id = auth_response["data"]["worker_id"]
request_key = cycle_response["data"]["request_key"]
model_id = cycle_response["data"]["model_id"]
training_plan_id = cycle_response["data"]["plans"]["training_plan"]
def get_model(grid_address, worker_id, request_key, model_id):
req = requests.get(
f"http://{grid_address}/model-centric/get-model?worker_id={worker_id}&request_key={request_key}&model_id={model_id}"
)
model_data = req.content
pb = ListPB()
pb.ParseFromString(req.content)
return deserialize(pb)
# Model
model_params_downloaded = get_model(grid_address, worker_id, request_key, model_id)
print("Params shapes:", [p.shape for p in model_params_downloaded])
model_params_downloaded[0]
```
# Download & Execute Plan
```
req = requests.get(
f"http://{grid_address}/model-centric/get-plan?worker_id={worker_id}&request_key={request_key}&plan_id={training_plan_id}&receive_operations_as=list"
)
pb = PlanPB()
pb.ParseFromString(req.content)
plan = deserialize(pb)
xs = th.rand([64 * 3, 1, 28, 28])
ys = th.randint(0, 10, [64 * 3, 10])
(res,) = plan(xs=xs, ys=ys, params=model_params_downloaded)
```
# Report Model diff
```
diff = [orig - new for orig, new in zip(res, local_model.parameters())]
diff_serialized = serialize((List(diff))).SerializeToString()
params = {
"type": "model-centric/report",
"data": {
"worker_id": worker_id,
"request_key": request_key,
"diff": base64.b64encode(diff_serialized).decode("ascii"),
},
}
sendWsMessage(params)
```
# Check new model
```
req_params = {
"name": name,
"version": version,
"checkpoint": "latest",
}
res = requests.get(f"http://{grid_address}/model-centric/retrieve-model", req_params)
params_pb = ListPB()
params_pb.ParseFromString(res.content)
new_model_params = deserialize(params_pb)
new_model_params[0]
# !rm PyGrid/apps/domain/src/nodedatabase.db
```
## Step 5: Train
To train the hosted model, you can use the existing Python FL worker.
See the "[MCFL - Execute Plan](mcfl_execute_plan.ipynb)" notebook, which
has an example of using the Python FL worker.
To see how to make a similar model work for mobile FL workers,
check the "[MCFL for Mobile - Create Plan](mcfl_execute_plan_mobile.ipynb)" notebook!
| github_jupyter |
# Malaria Detection
Malaria is a life-threatening disease caused by parasites that are transmitted to people through the bites of infected female Anopheles mosquitoes. It is preventable and curable.
In 2017, there were an estimated 219 million cases of malaria in 90 countries.
Malaria deaths reached 435,000 in 2017.
The WHO African Region carries a disproportionately high share of the global malaria burden. In 2017, the region was home to 92% of malaria cases and 93% of malaria deaths.
Malaria is caused by Plasmodium parasites. The parasites are spread to people through the bites of infected female Anopheles mosquitoes, called *"malaria vectors."* There are 5 parasite species that cause malaria in humans, and 2 of these species – P. falciparum and P. vivax – pose the greatest threat.
**Diagnosis of malaria can be difficult:**
Where malaria is not endemic any more (such as in the United States), health-care providers may not be familiar with the disease. Clinicians seeing a malaria patient may forget to consider malaria among the potential diagnoses and not order the needed diagnostic tests. Laboratorians may lack experience with malaria and fail to detect parasites when examining blood smears under the microscope.
Malaria is an acute febrile illness. In a non-immune individual, symptoms usually appear 10–15 days after the infective mosquito bite. The first symptoms – fever, headache, and chills – may be mild and difficult to recognize as malaria. If not treated within 24 hours, P. falciparum malaria can progress to severe illness, often leading to death.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from fastai import *
from fastai.vision import *
from fastai.callbacks.hooks import *
import os
print(os.listdir("../input/cell-images-for-detecting-malaria/cell_images/cell_images/"))
```
**Dataset**
```
img_dir='../input/cell-images-for-detecting-malaria/cell_images/cell_images/'
path=Path(img_dir)
path
data = ImageDataBunch.from_folder(path, train=".",
valid_pct=0.2,
ds_tfms=get_transforms(flip_vert=True, max_warp=0),
size=224,bs=64,
num_workers=0).normalize(imagenet_stats)
print(f'Classes: \n {data.classes}')
data.show_batch(rows=3, figsize=(7,6))
```
## Model ResNet34
```
learn = cnn_learner(data, models.resnet34, metrics=accuracy, model_dir="/tmp/model/")
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(6,1e-2)
learn.save('stage-2')
learn.recorder.plot_losses()
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(9, figsize=(15,11))
```
**Confusion Matrix**
```
interp.plot_confusion_matrix(figsize=(8,8), dpi=60)
interp.most_confused(min_val=2)
pred_data= ImageDataBunch.from_folder(path, train=".",
valid_pct=0.2,
ds_tfms=get_transforms(flip_vert=True, max_warp=0),
size=224,bs=64,
num_workers=0).normalize(imagenet_stats)
predictor=cnn_learner(data, models.resnet34, metrics=accuracy, model_dir="/tmp/model/").load('stage-2')
pred_data.single_from_classes(path, pred_data.classes)
x,y = data.valid_ds[3]
x.show()
data.valid_ds.y[3]
pred_class,pred_idx,outputs = predictor.predict(x)
pred_class
```
## Heatmaps
**The heatmap will help us identify where our model is looking, and it's really useful for decision making.**
```
def heatMap(x,y,data, learner, size=(0,224,224,0)):
"""HeatMap"""
# Evaluation mode
m=learner.model.eval()
# Denormalize the image
xb,_ = data.one_item(x)
xb_im = Image(data.denorm(xb)[0])
xb = xb.cuda()
# hook the activations
with hook_output(m[0]) as hook_a:
with hook_output(m[0], grad=True) as hook_g:
preds = m(xb)
preds[0,int(y)].backward()
# Activations
acts=hook_a.stored[0].cpu()
# Avg of the activations
avg_acts=acts.mean(0)
# Show HeatMap
_,ax = plt.subplots()
xb_im.show(ax)
ax.imshow(avg_acts, alpha=0.5, extent=size,
interpolation='bilinear', cmap='magma')
heatMap(x,y,pred_data,learn)
```
***It is very hard to completely eliminate false positives and negatives (in a case like this, it could indicate overfitting, given the relatively small training dataset), but the metric for the suitability of a model for the real world is how the model's sensitivity and specificity compare to those of a group of actual pathologists with domain expertise, when both analyze an identical set of real-world data that neither has had prior exposure to.***
You might improve the accuracy if you artificially increase the size of the training dataset by changing orientations, mirroring, etc., assuming the orientation of the NIH images of the smears hasn't been normalized (I would assume it hasn't, but that's a dangerous assumption). I'm also curious whether you compared ResNet-34 and -50, as 50 might help your specificity (or not).
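As a library-agnostic illustration of that augmentation idea, the eight orientation variants (flips plus 90° rotations) of an image can be generated with NumPy. The fastai `get_transforms` call used earlier in this notebook exposes similar options (`do_flip`, `flip_vert`), so this is a sketch of the concept rather than the notebook's actual pipeline:

```python
import numpy as np

def orientation_augmentations(img):
    """Return the 8 dihedral variants (flips + 90-degree rotations) of an image array."""
    variants = []
    for k in range(4):                       # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(img, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))  # mirrored copy of each rotation
    return variants

# A tiny 2x2 single-channel "image" is enough to see the effect.
img = np.array([[1, 2],
                [3, 4]])
augmented = orientation_augmentations(img)
print(len(augmented))  # 8 variants from one source image
```

Each variant is a valid cell image if (and only if) smear orientation carries no diagnostic information, which is the assumption the paragraph above cautions about.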
| github_jupyter |
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
<i>Licensed under the MIT License.</i>
# Surprise Singular Value Decomposition (SVD)
This notebook serves as an introduction to the [Surprise](http://surpriselib.com/) library and to the SVD algorithm, which is very similar to the ALS algorithm presented in the ALS deep dive notebook. This algorithm was heavily used during the Netflix Prize competition by the winning BellKor team.
## 1 Matrix factorization algorithm
The SVD model algorithm is very similar to the ALS algorithm presented in the ALS deep dive notebook. The two differences between the two approaches are:
- SVD additionally models the user and item biases (also called baselines in the literature).
- The optimization technique in ALS is Alternating Least Squares (hence the name), while SVD uses stochastic gradient descent.
### 1.1 The SVD model
In ALS, the ratings are modeled as follows:
$$\hat r_{u,i} = q_{i}^{T}p_{u}$$
SVD introduces two new scalar variables: the user biases $b_u$ and the item biases $b_i$. The user biases are supposed to capture the tendency of some users to rate items higher (or lower) than the average. The same goes for items: some items are usually rated higher than others. The SVD model is then as follows:
$$\hat r_{u,i} = \mu + b_u + b_i + q_{i}^{T}p_{u}$$
Where $\mu$ is the global average of all the ratings in the dataset. The regularised optimization problem naturally becomes:
$$ \sum(r_{u,i} - (\mu + b_u + b_i + q_{i}^{T}p_{u}))^2 + \lambda(b_i^2 + b_u^2 + ||q_i||^2 + ||p_u||^2)$$
where $\lambda$ is the regularization parameter.
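As a sanity check on where the update rules come from, differentiate one summand of this objective with respect to $b_u$, writing $e_{ui} = r_{u,i} - \hat r_{u,i}$ for the prediction error:

$$\frac{\partial}{\partial b_u}\left[(r_{u,i} - \hat r_{u,i})^2 + \lambda b_u^2\right] = -2 e_{ui} + 2 \lambda b_u$$

A gradient descent step of size $\gamma$ (absorbing the factor of 2 into the learning rate) therefore moves $b_u$ by $\gamma (e_{ui} - \lambda b_u)$, which is exactly the form of the SGD updates below; the other three updates follow the same pattern.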
### 1.2 Stochastic Gradient Descent
Stochastic Gradient Descent (SGD) is a very common optimization algorithm in which the parameters (here the biases and the factor vectors) are iteratively updated in the direction of the negative gradient of the objective function. The algorithm essentially performs the following steps for a given number of iterations:
$$b_u \leftarrow b_u + \gamma (e_{ui} - \lambda b_u)$$
$$b_i \leftarrow b_i + \gamma (e_{ui} - \lambda b_i)$$
$$p_u \leftarrow p_u + \gamma (e_{ui} \cdot q_i - \lambda p_u)$$
$$q_i \leftarrow q_i + \gamma (e_{ui} \cdot p_u - \lambda q_i)$$
where $\gamma$ is the learning rate and $e_{ui} = r_{ui} - \hat r_{u,i} = r_{u,i} - (\mu + b_u + b_i + q_{i}^{T}p_{u})$ is the error made by the model for the pair $(u, i)$.
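To make the update rules concrete, here is a minimal NumPy sketch of SGD on a single (user, item) pair. The variable names, dimensions, and hyperparameter values are illustrative; this is not Surprise's internal implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 8                                 # latent dimension (illustrative)
mu = 3.5                              # global rating mean (illustrative)
lam, gamma = 0.02, 0.005              # regularization and learning rate

b_u, b_i = 0.0, 0.0                   # user and item biases
p_u = rng.normal(scale=0.1, size=k)   # user factors
q_i = rng.normal(scale=0.1, size=k)   # item factors

r_ui = 4.0                            # one observed rating

def predict():
    return mu + b_u + b_i + q_i @ p_u

def sgd_step(b_u, b_i, p_u, q_i):
    e_ui = r_ui - (mu + b_u + b_i + q_i @ p_u)        # prediction error
    new_b_u = b_u + gamma * (e_ui - lam * b_u)
    new_b_i = b_i + gamma * (e_ui - lam * b_i)
    new_p_u = p_u + gamma * (e_ui * q_i - lam * p_u)
    new_q_i = q_i + gamma * (e_ui * p_u - lam * q_i)  # uses the old p_u, as in the equations
    return new_b_u, new_b_i, new_p_u, new_q_i

err0 = (r_ui - predict()) ** 2
for _ in range(200):
    b_u, b_i, p_u, q_i = sgd_step(b_u, b_i, p_u, q_i)
err1 = (r_ui - predict()) ** 2
print(err1 < err0)  # repeated updates shrink the squared error on this pair
```

In practice the updates are applied to one randomly drawn rating at a time over many passes through the training set, which is what makes the procedure "stochastic".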
## 2 Surprise implementation of SVD
SVD is implemented in the [Surprise](https://surprise.readthedocs.io/en/stable/) library as a recommender module.
* Detailed documentations of the SVD module in Surprise can be found [here](https://surprise.readthedocs.io/en/stable/matrix_factorization.html#surprise.prediction_algorithms.matrix_factorization.SVD).
* Source codes of the SVD implementation is available on the Surprise Github repository, which can be found [here](https://github.com/NicolasHug/Surprise/blob/master/surprise/prediction_algorithms/matrix_factorization.pyx).
## 3 Surprise SVD movie recommender
We will use the MovieLens dataset, which is composed of integer ratings from 1 to 5.
Surprise supports dataframes as long as they have three columns representing the user ids, item ids, and the ratings (in this order).
```
import sys
import os
import surprise
import papermill as pm
import scrapbook as sb
import pandas as pd
from recommenders.utils.timer import Timer
from recommenders.datasets import movielens
from recommenders.datasets.python_splitters import python_random_split
from recommenders.evaluation.python_evaluation import (rmse, mae, rsquared, exp_var, map_at_k, ndcg_at_k, precision_at_k,
recall_at_k, get_top_k_items)
from recommenders.models.surprise.surprise_utils import predict, compute_ranking_predictions
print("System version: {}".format(sys.version))
print("Surprise version: {}".format(surprise.__version__))
# Select MovieLens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
```
### 3.1 Load data
```
data = movielens.load_pandas_df(
size=MOVIELENS_DATA_SIZE,
header=["userID", "itemID", "rating"]
)
data.head()
```
### 3.2 Train the SVD Model
Note that Surprise has a lot of built-in support for [cross-validation](https://surprise.readthedocs.io/en/stable/getting_started.html#use-cross-validation-iterators) and [grid search](https://surprise.readthedocs.io/en/stable/getting_started.html#tune-algorithm-parameters-with-gridsearchcv), inspired by scikit-learn, but here we will use the provided utilities instead.
We start by splitting our data into trainset and testset with the `python_random_split` function.
```
train, test = python_random_split(data, 0.75)
```
Surprise needs to build an internal model of the data. We here use the `load_from_df` method to build a `Dataset` object, and then indicate that we want to train on all the samples of this dataset by using the `build_full_trainset` method.
```
# 'reader' is being used to get rating scale (for MovieLens, the scale is [1, 5]).
# 'rating_scale' parameter can be used instead for the later version of surprise lib:
# https://github.com/NicolasHug/Surprise/blob/master/surprise/dataset.py
train_set = surprise.Dataset.load_from_df(train, reader=surprise.Reader('ml-100k')).build_full_trainset()
train_set
```
The [SVD](https://surprise.readthedocs.io/en/stable/matrix_factorization.html#surprise.prediction_algorithms.matrix_factorization.SVD) algorithm has a lot of parameters. The most important ones are:
- `n_factors`, which controls the dimension of the latent space (i.e. the size of the vectors $p_u$ and $q_i$). Usually, the quality of the training set predictions grows as `n_factors` gets higher.
- `n_epochs`, which defines the number of iterations of the SGD procedure.
Note that both parameters also affect the training time.
Here we will set `n_factors` to `200` and `n_epochs` to `30`. To train the model, we simply need to call the `fit()` method.
```
svd = surprise.SVD(random_state=0, n_factors=200, n_epochs=30, verbose=True)
with Timer() as train_time:
svd.fit(train_set)
print("Took {} seconds for training.".format(train_time.interval))
```
### 3.3 Prediction
Now that our model is fitted, we can call `predict` to get some predictions. `predict` returns an internal `Prediction` object, which can easily be converted back to a dataframe:
```
predictions = predict(svd, test, usercol='userID', itemcol='itemID')
predictions.head()
```
### 3.4 Remove rated movies in the top k recommendations
To compute ranking metrics, we need predictions on all (user, item) pairs. However, we remove the items the user has already watched, since we choose not to recommend them again.
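The remove-seen step described above boils down to an anti-join of the full (user, item) grid against the training interactions. A minimal pandas sketch on toy data (this is an illustration of the idea, not the library's implementation):

```python
import pandas as pd

# toy training interactions
train = pd.DataFrame({"userID": [1, 1, 2], "itemID": [10, 20, 10]})

# full (user, item) grid over all users and a toy item catalog
all_pairs = pd.MultiIndex.from_product(
    [train["userID"].unique(), [10, 20, 30]], names=["userID", "itemID"]
).to_frame(index=False)

# anti-join: keep only the pairs that never appear in training
merged = all_pairs.merge(train, on=["userID", "itemID"], how="left", indicator=True)
unseen = merged[merged["_merge"] == "left_only"].drop(columns="_merge")
print(unseen)  # (1, 30), (2, 20), (2, 30)
```

The `indicator=True` flag adds a `_merge` column marking which side each row came from, which makes the anti-join a simple filter.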
```
with Timer() as test_time:
all_predictions = compute_ranking_predictions(svd, train, usercol='userID', itemcol='itemID', remove_seen=True)
print("Took {} seconds for prediction.".format(test_time.interval))
all_predictions.head()
```
### 3.5 Evaluate how well SVD performs
The SVD algorithm was specifically designed to predict ratings as close as possible to their actual values. In particular, it is designed to have a very low RMSE (Root Mean Squared Error), computed as:
$$\sqrt{\frac{1}{N} \sum(\hat{r_{ui}} - r_{ui})^2}$$
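The formula above can be checked directly with a few lines of NumPy on toy ratings (a standalone sketch, separate from the notebook's evaluation pipeline):

```python
import numpy as np

# toy true and predicted ratings
r_true = np.array([4.0, 3.0, 5.0, 2.0])
r_pred = np.array([3.5, 3.0, 4.0, 2.5])

rmse_val = np.sqrt(np.mean((r_pred - r_true) ** 2))  # ~0.612
mae_val = np.mean(np.abs(r_pred - r_true))           # 0.5
print(rmse_val, mae_val)
```

Note how the single error of 1.0 pulls the RMSE above the MAE: squaring penalizes large errors more heavily.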
As we can see, the RMSE and MAE (Mean Absolute Error) are pretty low (i.e. good), indicating that on average the error in the predicted ratings is less than 1. The RMSE is of course a bit higher, because high errors are penalized much more.
For comparison with other models, we also display Top-k and ranking metrics (MAP, NDCG, etc.). Note however that the SVD algorithm was designed for achieving high accuracy, not for top-rank predictions.
```
eval_rmse = rmse(test, predictions)
eval_mae = mae(test, predictions)
eval_rsquared = rsquared(test, predictions)
eval_exp_var = exp_var(test, predictions)
k = 10
eval_map = map_at_k(test, all_predictions, col_prediction='prediction', k=k)
eval_ndcg = ndcg_at_k(test, all_predictions, col_prediction='prediction', k=k)
eval_precision = precision_at_k(test, all_predictions, col_prediction='prediction', k=k)
eval_recall = recall_at_k(test, all_predictions, col_prediction='prediction', k=k)
print("RMSE:\t\t%f" % eval_rmse,
"MAE:\t\t%f" % eval_mae,
"rsquared:\t%f" % eval_rsquared,
"exp var:\t%f" % eval_exp_var, sep='\n')
print('----')
print("MAP:\t%f" % eval_map,
"NDCG:\t%f" % eval_ndcg,
"Precision@K:\t%f" % eval_precision,
"Recall@K:\t%f" % eval_recall, sep='\n')
# Record results with papermill for tests
sb.glue("rmse", eval_rmse)
sb.glue("mae", eval_mae)
sb.glue("rsquared", eval_rsquared)
sb.glue("exp_var", eval_exp_var)
sb.glue("map", eval_map)
sb.glue("ndcg", eval_ndcg)
sb.glue("precision", eval_precision)
sb.glue("recall", eval_recall)
sb.glue("train_time", train_time.interval)
sb.glue("test_time", test_time.interval)
```
## References
1. Ruslan Salakhutdinov and Andriy Mnih. Probabilistic matrix factorization. 2008. URL: http://papers.nips.cc/paper/3208-probabilistic-matrix-factorization.pdf
2. Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. 2009.
3. Francesco Ricci, Lior Rokach, Bracha Shapira, and Paul B. Kantor. Recommender Systems Handbook. 1st edition, 2010.
# Moving Square Video Prediction
This is the third toy example from Jason Brownlee's [Long Short Term Memory Networks with Python](https://machinelearningmastery.com/lstms-with-python/). It illustrates using a CNN LSTM, i.e., an LSTM whose input comes from a CNN. Per section 8.2 of the book:
> The moving square video prediction problem is contrived to demonstrate the CNN LSTM. The
problem involves the generation of a sequence of frames. In each image a line is drawn from left to right or right to left. Each frame shows the extension of the line by one pixel. The task is for the model to classify whether the line moved left or right in the sequence of frames. Technically, the problem is a sequence classification problem framed with a many-to-one prediction model.
```
from __future__ import division, print_function
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt
import os
import shutil
%matplotlib inline
DATA_DIR = "../../data"
MODEL_FILE = os.path.join(DATA_DIR, "torch-08-moving-square-{:d}.model")
TRAINING_SIZE = 5000
VALIDATION_SIZE = 100
TEST_SIZE = 500
SEQUENCE_LENGTH = 50
FRAME_SIZE = 50
BATCH_SIZE = 32
NUM_EPOCHS = 5
LEARNING_RATE = 1e-3
```
## Prepare Data
Our data is going to be batches of sequences of images. Each image will need to be in channel-first format, since that is the layout PyTorch expects. So our output data will be in the (batch_size, sequence_length, num_channels, height, width) format.
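A quick sanity check of that layout, using only NumPy (the concrete sizes here just mirror the constants defined above):

```python
import numpy as np

# (batch_size, sequence_length, num_channels, height, width)
batch = np.zeros((32, 50, 1, 50, 50))
print(batch.shape)

# a single frame fed to the CNN is channel-first: (num_channels, height, width)
print(batch[0, 0].shape)  # (1, 50, 50)
```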
```
def next_frame(frame, x, y, move_right, upd_int):
frame_size = frame.shape[0]
if x is None and y is None:
x = 0 if (move_right == 1) else (frame_size - 1)
y = np.random.randint(0, frame_size, 1)[0]
else:
        # np.random.randint's upper bound is exclusive, so use +2 (or +1 at
        # the edges) so that both "stay" and "move" rows are reachable
        if y == 0:
            y = np.random.randint(y, y + 2, 1)[0]
        elif y == frame_size - 1:
            y = np.random.randint(y - 1, y + 1, 1)[0]
        else:
            y = np.random.randint(y - 1, y + 2, 1)[0]
if move_right:
x = x + 1
else:
x = x - 1
new_frame = frame.copy()
new_frame[y, x] = upd_int
return new_frame, x, y
row, col = None, None
frame = np.ones((5, 5))
move_right = 1 if np.random.random() < 0.5 else 0
for i in range(5):
frame, col, row = next_frame(frame, col, row, move_right, 0)
plt.subplot(1, 5, (i+1))
plt.xticks([])
plt.yticks([])
plt.title((col, row, "R" if (move_right==1) else "L"))
plt.imshow(frame, cmap="gray")
plt.tight_layout()
plt.show()
def generate_data(frame_size, sequence_length, num_samples):
assert(frame_size == sequence_length)
xs, ys = [], []
for bid in range(num_samples):
frame_seq = []
row, col = None, None
frame = np.ones((frame_size, frame_size))
move_right = 1 if np.random.random() < 0.5 else 0
for sid in range(sequence_length):
frm, col, row = next_frame(frame, col, row, move_right, 0)
frm = frm.reshape((1, frame_size, frame_size))
frame_seq.append(frm)
xs.append(np.array(frame_seq))
ys.append(move_right)
return np.array(xs), np.array(ys)
X, y = generate_data(FRAME_SIZE, SEQUENCE_LENGTH, 10)
print(X.shape, y.shape)
Xtrain, ytrain = generate_data(FRAME_SIZE, SEQUENCE_LENGTH, TRAINING_SIZE)
Xval, yval = generate_data(FRAME_SIZE, SEQUENCE_LENGTH, VALIDATION_SIZE)
Xtest, ytest = generate_data(FRAME_SIZE, SEQUENCE_LENGTH, TEST_SIZE)
print(Xtrain.shape, ytrain.shape, Xval.shape, yval.shape, Xtest.shape, ytest.shape)
```
## Define Network
We want to build a CNN-LSTM network. Each image in the sequence will be fed to a CNN which will learn to produce a feature vector for the image. The sequence of vectors will be fed into an LSTM and the LSTM will learn to generate a context vector that will be then fed into a FCN that will predict if the square is moving left or right.
<img src="08-network-design.png"/>
```
class CNN(nn.Module):
def __init__(self, input_height, input_width, input_channels,
output_channels,
conv_kernel_size, conv_stride, conv_padding,
pool_size):
super(CNN, self).__init__()
self.conv1 = nn.Conv2d(input_channels, output_channels,
kernel_size=conv_kernel_size,
stride=conv_stride,
padding=conv_padding)
self.relu1 = nn.ReLU()
self.output_height = input_height // pool_size
self.output_width = input_width // pool_size
self.output_channels = output_channels
self.pool_size = pool_size
def forward(self, x):
x = self.conv1(x)
x = self.relu1(x)
x = F.max_pool2d(x, self.pool_size)
x = x.view(x.size(0), self.output_channels * self.output_height * self.output_width)
return x
cnn = CNN(FRAME_SIZE, FRAME_SIZE, 1, 2, 2, 1, 1, 2)
print(cnn)
# size debugging
print("--- size debugging ---")
inp = Variable(torch.randn(BATCH_SIZE, 1, FRAME_SIZE, FRAME_SIZE))
out = cnn(inp)
print(out.size())
class CNNLSTM(nn.Module):
def __init__(self, image_size, input_channels, output_channels,
conv_kernel_size, conv_stride, conv_padding, pool_size,
seq_length, hidden_size, num_layers, output_size):
super(CNNLSTM, self).__init__()
# capture variables
self.num_layers = num_layers
self.seq_length = seq_length
self.image_size = image_size
self.output_channels = output_channels
self.hidden_size = hidden_size
self.lstm_input_size = output_channels * (image_size // pool_size) ** 2
# define network layers
self.cnn = CNN(image_size, image_size, input_channels, output_channels,
conv_kernel_size, conv_stride, conv_padding, pool_size)
self.lstm = nn.LSTM(self.lstm_input_size, hidden_size, num_layers, batch_first=True)
self.fc = nn.Linear(hidden_size, output_size)
self.softmax = nn.Softmax()
def forward(self, x):
if torch.cuda.is_available():
h0 = (Variable(torch.randn(self.num_layers, x.size(0), self.hidden_size).cuda()),
Variable(torch.randn(self.num_layers, x.size(0), self.hidden_size).cuda()))
else:
h0 = (Variable(torch.randn(self.num_layers, x.size(0), self.hidden_size)),
Variable(torch.randn(self.num_layers, x.size(0), self.hidden_size)))
cnn_out = []
for i in range(self.seq_length):
cnn_out.append(self.cnn(x[:, i, :, :, :]))
x = torch.cat(cnn_out, dim=1).view(-1, self.seq_length, self.lstm_input_size)
x, h0 = self.lstm(x, h0)
        x = self.fc(x[:, -1, :])
        # return raw logits: nn.CrossEntropyLoss applies log-softmax internally,
        # so applying softmax here would squash the gradients and can stall training
        return x
model = CNNLSTM(FRAME_SIZE, 1, 2, 2, 1, 1, 2, SEQUENCE_LENGTH, 50, 1, 2)
if torch.cuda.is_available():
model.cuda()
print(model)
# size debugging
print("--- size debugging ---")
inp = Variable(torch.randn(BATCH_SIZE, SEQUENCE_LENGTH, 1, FRAME_SIZE, FRAME_SIZE))
out = model(inp)
print(out.size())
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
```
## Train Network
Training on a GPU is preferable for this example, as it takes a long time on a CPU. During some runs, the training and validation accuracies get stuck, possibly because of bad initialization; the fix appears to be to simply retry the training until it reaches good training and validation accuracies, and then use the resulting model.
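That retry-until-it-trains workflow can be automated with a small wrapper. The `train_fn` below is a hypothetical callable standing in for one full run of the training loop that follows (fresh model, fresh initialization), returning the trained model and its validation accuracy; this is a sketch, not part of the original notebook:

```python
def train_until_good(train_fn, acc_threshold=0.9, max_retries=5):
    """Retry training from fresh initializations until validation
    accuracy clears a threshold, keeping the best run seen."""
    best_acc, best_model = float("-inf"), None
    for _ in range(max_retries):
        model, val_acc = train_fn()
        if val_acc > best_acc:
            best_acc, best_model = val_acc, model
        if val_acc >= acc_threshold:
            break
    return best_model, best_acc
```

Keeping the best run seen means that even if no attempt clears the threshold, the least-bad model is returned rather than the last one.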
```
def compute_accuracy(pred_var, true_var):
if torch.cuda.is_available():
ypred = pred_var.cpu().data.numpy()
ytrue = true_var.cpu().data.numpy()
else:
ypred = pred_var.data.numpy()
ytrue = true_var.data.numpy()
return accuracy_score(ypred, ytrue)
history = []
for epoch in range(NUM_EPOCHS):
num_batches = Xtrain.shape[0] // BATCH_SIZE
shuffled_indices = np.random.permutation(np.arange(Xtrain.shape[0]))
train_loss, train_acc = 0., 0.
for bid in range(num_batches):
Xbatch_data = Xtrain[shuffled_indices[bid * BATCH_SIZE : (bid+1) * BATCH_SIZE]]
ybatch_data = ytrain[shuffled_indices[bid * BATCH_SIZE : (bid+1) * BATCH_SIZE]]
Xbatch = Variable(torch.from_numpy(Xbatch_data).float())
ybatch = Variable(torch.from_numpy(ybatch_data).long())
if torch.cuda.is_available():
Xbatch = Xbatch.cuda()
ybatch = ybatch.cuda()
# initialize gradients
optimizer.zero_grad()
# forward
Ybatch_ = model(Xbatch)
loss = loss_fn(Ybatch_, ybatch)
# backward
loss.backward()
train_loss += loss.data[0]
_, ybatch_ = Ybatch_.max(1)
train_acc += compute_accuracy(ybatch_, ybatch)
optimizer.step()
# compute training loss and accuracy
train_loss /= num_batches
train_acc /= num_batches
# compute validation loss and accuracy
val_loss, val_acc = 0., 0.
num_val_batches = Xval.shape[0] // BATCH_SIZE
for bid in range(num_val_batches):
# data
Xbatch_data = Xval[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]
ybatch_data = yval[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]
Xbatch = Variable(torch.from_numpy(Xbatch_data).float())
ybatch = Variable(torch.from_numpy(ybatch_data).long())
if torch.cuda.is_available():
Xbatch = Xbatch.cuda()
ybatch = ybatch.cuda()
Ybatch_ = model(Xbatch)
loss = loss_fn(Ybatch_, ybatch)
val_loss += loss.data[0]
_, ybatch_ = Ybatch_.max(1)
val_acc += compute_accuracy(ybatch_, ybatch)
val_loss /= num_val_batches
val_acc /= num_val_batches
torch.save(model.state_dict(), MODEL_FILE.format(epoch+1))
print("Epoch {:2d}/{:d}: loss={:.3f}, acc={:.3f}, val_loss={:.3f}, val_acc={:.3f}"
.format((epoch+1), NUM_EPOCHS, train_loss, train_acc, val_loss, val_acc))
history.append((train_loss, val_loss, train_acc, val_acc))
losses = [x[0] for x in history]
val_losses = [x[1] for x in history]
accs = [x[2] for x in history]
val_accs = [x[3] for x in history]
plt.subplot(211)
plt.title("Accuracy")
plt.plot(accs, color="r", label="train")
plt.plot(val_accs, color="b", label="valid")
plt.legend(loc="best")
plt.subplot(212)
plt.title("Loss")
plt.plot(losses, color="r", label="train")
plt.plot(val_losses, color="b", label="valid")
plt.legend(loc="best")
plt.tight_layout()
plt.show()
```
## Test/Evaluate Network
```
saved_model = CNNLSTM(FRAME_SIZE, 1, 2, 2, 1, 1, 2, SEQUENCE_LENGTH, 50, 1, 2)
saved_model.load_state_dict(torch.load(MODEL_FILE.format(5)))
if torch.cuda.is_available():
saved_model.cuda()
ylabels, ypreds = [], []
num_test_batches = Xtest.shape[0] // BATCH_SIZE
for bid in range(num_test_batches):
Xbatch_data = Xtest[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]
ybatch_data = ytest[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]
Xbatch = Variable(torch.from_numpy(Xbatch_data).float())
ybatch = Variable(torch.from_numpy(ybatch_data).long())
if torch.cuda.is_available():
Xbatch = Xbatch.cuda()
ybatch = ybatch.cuda()
Ybatch_ = saved_model(Xbatch)
_, ybatch_ = Ybatch_.max(1)
if torch.cuda.is_available():
ylabels.extend(ybatch.cpu().data.numpy())
ypreds.extend(ybatch_.cpu().data.numpy())
else:
ylabels.extend(ybatch.data.numpy())
ypreds.extend(ybatch_.data.numpy())
print("Test accuracy: {:.3f}".format(accuracy_score(ylabels, ypreds)))
print("Confusion matrix")
print(confusion_matrix(ylabels, ypreds))
for i in range(NUM_EPOCHS):
os.remove(MODEL_FILE.format(i + 1))
```
# Bayesian Optimization for Single-Interface Nanoparticle Discovery
**Notebook last update: 3/26/2021** (clean up)
This notebook contains the entire closed-loop process for SINP discovery with BO, through SPBCL synthesis and STEM-EDS characterization, as reported in Wahl et al., *to be submitted*, 2021.
```
import pandas as pd
from IPython.display import display
import matplotlib.pyplot as plt
import numpy as np
import os
import itertools
import io
from nanoparticle_project import EmbedCompGPUCB, get_comps, \
get_stoichiometric_formulas, compare_to_seed, load_np_data, update_with_new_data
from matminer.featurizers.composition import ElementProperty
from pymatgen import Composition
path = os.getcwd()
```
We will load our dataset into a pandas DataFrame and prepare it for downstream modeling. We will prepare the feature space as described in the manuscript, using the composition-derived descriptors of Ward et al.
## Prepare seed data and search space
```
df = pd.read_csv('megalibrary_data.csv')
_elts = ['Au%', 'Ag%', 'Cu%', 'Co%', 'Ni%', 'Pt%', 'Pd%', 'Sn%']
for col in _elts:
df[col] = df[col]/100.0
df = df.sample(frac=1) # shuffling the dataframe
df['target'] = -1*np.abs(df["Interfaces"]-1) # set target to single interface NPs
df = df[~df.duplicated()] # drop duplicates
df['Composition'] = df.apply(get_comps,axis=1)
df['n_elems'] = (df[_elts]>0).sum(axis=1)
ep = ElementProperty.from_preset(preset_name='magpie')
featurized_df = ep.featurize_dataframe(df[ ['Composition','target'] ],'Composition').drop('Composition',axis=1)
```
We now create our search space *D*. First, we generate the composition grid. Then we featurize it as before using compositional descriptors, to generate the `candidate_feats` we will search over using BO. Next, we remove from our search space any composition that lies within 5% of a data point in our experimental seed on every compositional axis.
```
elements = ['Au%', 'Ag%', 'Cu%', 'Co%', 'Ni%', 'Pd%', 'Sn%'] # We will acquire Pt-free in the following iterations
D = get_stoichiometric_formulas(n_components=7, npoints=11)
candidate_data = pd.DataFrame.from_records(D,columns=elements)
candidate_data['Pt%'] = 0.0
candidate_data[ candidate_data <0.00001 ] = 0.0
candidate_data['Composition'] = candidate_data.apply(get_comps,axis=1)
candidate_feats = ep.featurize_dataframe(candidate_data, 'Composition')
candidate_feats = candidate_feats.drop(elements+['Pt%']+['Composition'],axis=1)
for ind,row in df[_elts].iterrows():
candidate_data = candidate_data[_elts][ np.any(np.abs(row-candidate_data)>=0.05,axis=1) ]
candidate_feats = candidate_feats.loc[candidate_data.index]
candidate_feats.shape
```
# Closed-loop optimization procedure
We track the closed-loop iterations below step by step, making suggestions and updating the seed and candidate space with incoming data in each round. We follow this in unfolded form, so that we can closely inspect the inputs and outputs of each round.
This is our initial data and the quaternary search space:
```
seed_df = df
seed_data = featurized_df
quaternaries = candidate_data[ ((candidate_data != 0).sum(axis=1) == 4)]
quaternary_feats = candidate_feats.loc[quaternaries.index]
round_number = 1
```
## Round 1
*Optimization agent's suggestions*:
```
agent = EmbedCompGPUCB(n_query=4)
suggestions = agent.get_hypotheses(candidate_data=quaternary_feats, seed_data=seed_data)
display(quaternaries.loc[ suggestions.index ])
compare_to_seed(quaternaries.loc[ suggestions.index ], seed_df)
```
*Experimental feedback in response to suggestions:*
```
new_raw_data = """
Co% Ni% Cu% Au%
13.886 42.787 21.824 21.502
13.883 43.138 21.701 21.278
13.621 42.33 22.244 21.805
22.188 34.332 9.411 34.069
22.186 33.932 9.799 34.083
21.192 34.426 9.112 35.269
8.453 33.012 6.68 51.855
8.935 34.187 6.161 50.718
8.037 34.035 6.445 51.483
10.357 34.259 6.896 48.487
10.767 35.4 6.482 47.352
10.695 36.379 5.961 46.965
13.172 47.616 9.277 29.935
12.56 49.381 8.816 29.243
12.482 47.937 9.203 30.378
12.804 48.143 8.882 30.172
12.396 48.56 9.302 29.742
"""
seed_df, seed_data, quaternaries, quaternary_feats = update_with_new_data(suggestions, new_raw_data, seed_df, seed_data,
quaternaries, quaternary_feats, round_number=round_number,
elements=elements, measured=0)
round_number+=1
```
## Round 2
*Optimization agent's suggestions*:
```
agent = EmbedCompGPUCB(n_query=4)
suggestions = agent.get_hypotheses(candidate_data=quaternary_feats, seed_data=seed_data)
display(quaternaries.loc[ suggestions.index ])
compare_to_seed(quaternaries.loc[ suggestions.index ], seed_df)
new_raw_data = """
Ni% Cu% Ag% Au% Co%
45.71 7.11 7.81 39.37 0
38.42 6.87 11.52 43.19 0
37.13 6.14 13.34 43.39 0
37.61 6.33 14.41 41.65 0
41.49 6.51 8.42 43.58 0
38.63 6.72 6.61 48.04 0
40.04 5.18 13.91 40.87 0
40.46 4.98 14.55 40.01 0
40.36 6.04 7.95 45.64 0
37.9 5.35 15.51 41.24 0
41.92 5.58 9.75 42.76 0
9.35 14.33 0 33.11 43.21
10.1 15.14 0 31.31 43.45
10.92 14.98 0 30.16 43.95
10.63 14.72 0 31.63 43.01
"""
seed_df, seed_data, quaternaries, quaternary_feats = update_with_new_data(suggestions, new_raw_data, seed_df, seed_data,
quaternaries, quaternary_feats, round_number=round_number,
elements=elements, measured=0)
round_number+=1
```
## Round 3
*Optimization agent's suggestions*:
```
agent = EmbedCompGPUCB(n_query=4)
suggestions = agent.get_hypotheses(candidate_data=quaternary_feats, seed_data=seed_data)
display(quaternaries.loc[ suggestions.index ])
compare_to_seed(quaternaries.loc[ suggestions.index ], seed_df)
new_raw_data = """
Ni% Co% Ag% Au% Cu% Pd%
55.4 29.9 5.8 7.5 1.5 0.0
55.5 29.6 4.5 7.8 2.6 0.0
56.2 29.6 4.2 6.3 3.7 0.0
63.1 30.2 2.8 3.9 0 0.0
63.8 30.2 2.3 3.7 0 0.0
62.9 30.2 1.4 3.5 2.1 0.0
18.8 39.7 0 20.9 20.6 0.0
22.8 40.4 0 28.2 8.5 0.0
20 42 0 19 19 0.0
24.4 24.6 0 35.7 15.3 0.0
22.9 24.5 0 43 9.6 0.0
25.4 26.8 0 28.8 19.1 0.0
25.3 26.1 0 25.3 16.2 0.0
0 55 0 24.6 13.2 7.3
0 55.7 0 24.1 13.5 6.7
0 53.4 0 24.8 14.3 7.4
0 56.4 0 22.7 13.7 7.2
"""
seed_df, seed_data, quaternaries, quaternary_feats = update_with_new_data(suggestions, new_raw_data, seed_df, seed_data,
quaternaries, quaternary_feats, round_number=round_number,
elements=elements, measured=0)
round_number+=1
```
## Exploratory Rounds
### Pentanary SINP discovery
```
pentanaries = candidate_data[ ((candidate_data != 0).sum(axis=1) == 5)]
pentanary_feats = candidate_feats.loc[pentanaries.index]
agent = EmbedCompGPUCB(n_query=10)
suggestions_pentanaries = agent.get_hypotheses(candidate_data=pentanary_feats, seed_data=seed_data)
display(pentanaries.loc[ suggestions_pentanaries.index ])
compare_to_seed(pentanaries.loc[ suggestions_pentanaries.index ], seed_df)
new_raw_data = """
Co% Ni% Cu% Pd% Ag% Au%
32.9 10.6 7.3 12.2 0 37
29.5 9.5 6.4 20.5 0 34.2
33.9 10 7 12.3 0 36.8
32.8 9.9 7 13.6 0 36.7
43.9 18 14.7 9 0 14.4
46.3 18.4 13.5 8.2 0 13.6
44.8 18 14.1 8.8 0 14.2
19.4 39.8 10.4 11 0 19.4
19.5 40.2 10.3 10.8 0 19.3
19.7 40 10.4 10.5 0 19.3
19 40.1 10.2 11 0 19.7
22.9 45.1 7.8 0 3.5 20.6
23.1 44.8 7 0 5.9 19.1
23.5 45 7.3 0 5 19.3
22.8 44 7.2 0 6.6 19.5
8.2 23.5 6.5 0 6.3 55.5
7.6 22.6 6.1 0 9.9 53.8
7.9 24.1 6.2 0 7.8 54
7.8 23.2 6 0 10 53
"""
suggestions_targeted_by_team = [6250,6243,5073,5484,6489]
seed_df, seed_data, pentanaries, pentanary_feats = update_with_new_data(suggestions_pentanaries.loc[suggestions_targeted_by_team],
new_raw_data, seed_df, seed_data,
pentanaries, pentanary_feats,
round_number=round_number,
elements=elements, measured=0)
round_number+=1
```
### Hexanary SINP discovery
```
hexanaries = candidate_data[ ((candidate_data != 0).sum(axis=1) == 6)]
hexanaries_feats = candidate_feats.loc[hexanaries.index]
agent = EmbedCompGPUCB(n_query=10)
suggestions_hexanaries = agent.get_hypotheses(candidate_data=hexanaries_feats, seed_data=seed_data)
display(hexanaries.loc[ suggestions_hexanaries.index ].head(10))
compare_to_seed(hexanaries.loc[ suggestions_hexanaries.index ], seed_df)
```
# Shallow Copy Versus Deep Copy Operations
Here's the issue we are looking at now: when we make a copy of an object, what happens if the object we are copying "contains" other objects? So, if `list_orig` has `inner_list` as one of its members, like...
`list_orig = [1, 2, [3, 5], 4]`
and we make a copy of `list_orig` into `list_copy`...
`list_orig` ---> `list_copy`
does `list_copy` have a **copy** of `inner_list`, or do `list_orig` and `list_copy` share the **same** `inner_list`?
As we will see, the default Python behavior is that `list_orig` and `list_copy` will **share** `inner_list`. That is called a *shallow copy*.
However, Python also permits the programmer to "order up" a *deep copy* so that `inner_list` is copied also.
<img src="https://i.stack.imgur.com/AWKJa.jpg" width="30%">
## Deep copy
Let's first look at a deep copy.
```
# initializing list_orig
INNER_LIST_IDX = 2
list_orig = [1, 2, [3, 5], 4]
print ("The original elements before deep copying")
print(list_orig)
print(list_orig[INNER_LIST_IDX][0])
```
We will use deepcopy to deep copy `list_orig` and change an element in the new list.
```
import copy
```
(What's in the `copy` module?)
```
dir(copy)
```
The change is made in `list_copy`:
```
list_copy = copy.deepcopy(list_orig)
# Now change first element of the inner list:
list_copy[INNER_LIST_IDX][0] = 7
print("The new list (list_copy) of elements after deep copying and list modification")
print(list_copy)
```
That change is **not** reflected in original list as we made a deep copy:
```
print ("The original list (list_orig) elements after deep copying")
print(list_orig)
print("The list IDs are:", id(list_orig), id(list_copy))
print("The inner list IDs are:", id(list_orig[INNER_LIST_IDX]),
id(list_copy[INNER_LIST_IDX]))
```
## Shallow copy
Like a "shallow" person, and shallow copy only sees the "surface" of the object it is copying... it doesn't peer further inside.
We'll set up `list_orig` as before:
```
INNER_LIST_IDX = 2
# initializing list_orig
list_orig = [1, 2, [3, 5], 4]
# original elements of list
print ("The original elements before shallow copying")
print(list_orig)
```
We use `copy.copy()` to make a shallow copy, then change an element of the inner list:
```
import copy
list_copy = copy.copy(list_orig) # not deepcopy()!
list_copy[INNER_LIST_IDX][0] = 7
```
Let's check the result:
```
print ("The original elements after shallow copying")
print(list_orig)
```
Let's change `inner_list` in `list_orig`:
```
list_orig[INNER_LIST_IDX][0] = "That's different!"
```
And let's see what `list_copy`'s inner list now looks like:
```
print(list_copy)
```
So we can see that `list_orig` and `list_copy` share the same inner list, which is now `["That's different!", 5]`. And their IDs show this:
```
print("The list IDs are:", id(list_orig), id(list_copy))
print("The inner list IDs are:", id(list_orig[INNER_LIST_IDX]),
id(list_copy[INNER_LIST_IDX]))
```
**But**... if we change the outer list element at INNER_LIST_IDX... **that** change is not shared!
```
list_orig[INNER_LIST_IDX] = ["Brand new list!", 16]
print("list_orig:", list_orig)
print("list_copy:", list_copy)
```
### Slicing
Let's see which of the above behaviors slicing gets us!
```
list_slice = list_orig[:]
print(list_slice)
```
What happens to `list_slice` if we change `list_orig`:
```
list_orig[INNER_LIST_IDX][0] = "Did our slice change?"
list_orig[0] = "New value at 0!"
print("Original list:", list_orig)
print("Our slice:", list_slice)
```
So, slicing makes a *shallow* copy.
### Assignment
And if we don't even slice, but just assign, even the outer lists will be the same, since we haven't made **any** sort of copy at all... we've just put two labels on the same "box":
```
list_alias = list_orig
another_alias = list_alias
yet_another = list_orig
print(list_alias, end="\n\n")
# change elem 0:
list_orig[0] = "Even the outer elems are the same."
print("List alias has element 0 altered:", list_alias, end="\n\n")
print("List slice does not have element 0 altered:", list_slice, end="\n\n")
# see their IDs:
print("list_orig ID:", id(list_orig), end="\n\n")
print("list_alias ID:", id(list_alias), end="\n\n")
print("another_alias ID:", id(another_alias), end="\n\n")
print("list_slice ID:", id(list_slice), end="\n\n")
```
What does `append()` do in terms of shallow versus deep copy?
```
INNER_LIST_IDX = 3
list_orig = [1, 2, 3, [4, 5]]
list_copy = []
for elem in list_orig:
list_copy.append(elem)
list_orig[INNER_LIST_IDX][0] = "Did the copy's inner list change?"
print("list_copy:", list_copy)
```
## What If We Need Different Copy Behavior for Our Own Class?
**Advanced topic**: Python has *dunder* (double-underscore) methods `__copy__()` and `__deepcopy__()` that we can implement in our own class when we have special copying needs.
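A minimal sketch of what implementing those two dunder methods looks like; the `Box` class here is purely illustrative, not a standard-library type:

```python
import copy

class Box:
    """Toy container: shallow copies share the inner list, deep copies don't."""
    def __init__(self, items):
        self.items = items

    def __copy__(self):
        # shallow: the new Box shares the same inner list object
        return Box(self.items)

    def __deepcopy__(self, memo):
        # deep: recursively copy the inner list (memo guards against cycles)
        return Box(copy.deepcopy(self.items, memo))

b1 = Box([1, [2, 3]])
shallow = copy.copy(b1)
deep = copy.deepcopy(b1)

b1.items[1][0] = 99
print(shallow.items[1])  # [99, 3] -- shared with b1
print(deep.items[1])     # [2, 3]  -- independent
```

`copy.copy()` and `copy.deepcopy()` automatically dispatch to `__copy__()` and `__deepcopy__()` when a class defines them, so callers don't need to know the class has custom behavior.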
## What about Dictionaries?
The above discussion was in terms of lists, but the same considerations apply to dictionaries.
<hr>
*Assignment* just creates an *alias* for a dictionary. *All* changes to the original will be reflected in the alias:
```
original = {"a": 1, "b": 2, "c": {"d": 3, "e": 4}}
dict_alias = original
print("dict_alias:", dict_alias)
original["a"] = "A brand new value!"
print("dict_alias:", dict_alias)
```
<hr>
A *shallow* copy copies the "skin" of the dictionary, but not the "innards":
```
from copy import copy, deepcopy
original = {"a": 1, "b": 2, "c": {"d": 3, "e": 4}}
dict_scopy = copy(original)
print("shallow copy:", dict_scopy)
# change the outer part:
original["a"] = "This won't be in shallow copy!"
print("original:", original)
print("shallow copy:", dict_scopy)
# change the innards:
original["c"]["d"] = "This WILL appear in the shallow copy!"
print("shallow copy:", dict_scopy)
dict_scopy["c"]["e"] = "This WILL appear in the original!"
print("original:", original)
original["c"] = "Hello Monte!"
print("original:", original)
print("shallow copy:", dict_scopy)
```
<hr>
A *deep* copy copies the "innards" of the dictionary as well as the "skin":
```
original = {"a": 1, "b": 2, "c": {"d": 3, "e": 4}}
dict_dcopy = deepcopy(original)
print("deep copy:", dict_dcopy)
original["c"]["d"] = "This WON'T appear in the deep copy!"
print("original:", original)
print("deep copy:", dict_dcopy)
```
```
import tensorflow as tf
import pickle
import numpy as np
def load(data_path):
with open(data_path,'rb') as f:
mnist = pickle.load(f)
return mnist["training_images"], mnist["training_labels"], mnist["test_images"], mnist["test_labels"]
class MnistData:
def __init__(self, filenames, need_shuffle, datatype='training'):
all_data = []
all_labels = []
x_train, y_train, x_test, y_test = load(filenames) #"data/mnist.pkl"
if datatype=='training':
self._data = x_train / 127.5 -1
self._labels = y_train
print(self._data.shape)
print(self._labels.shape)
else:
self._data = x_test / 127.5 -1
self._labels = y_test
print(self._data.shape)
print(self._labels.shape)
self._num_examples = self._data.shape[0]
self._need_shuffle = need_shuffle
self._indicator = 0
if self._need_shuffle:
self._shuffle_data()
def _shuffle_data(self):
# [0,1,2,3,4,5] -> [5,3,2,4,0,1]
p = np.random.permutation(self._num_examples)
self._data = self._data[p]
self._labels = self._labels[p]
def next_batch(self, batch_size):
"""return batch_size examples as a batch."""
end_indicator = self._indicator + batch_size
if end_indicator > self._num_examples:
if self._need_shuffle:
self._shuffle_data()
self._indicator = 0
end_indicator = batch_size
else:
raise Exception("have no more examples")
if end_indicator > self._num_examples:
raise Exception("batch size is larger than all examples")
batch_data = self._data[self._indicator: end_indicator]
batch_labels = self._labels[self._indicator: end_indicator]
self._indicator = end_indicator
return batch_data, batch_labels
train_filenames = "../4_basic_image_recognition/data/mnist.pkl"
train_data = MnistData(train_filenames, True, 'training')
test_data = MnistData(train_filenames, False, 'test')
x = tf.placeholder(tf.float32, [None, 28*28])
y = tf.placeholder(tf.int64, [None])
x_image = tf.reshape(x, [-1, 28, 28, 1])
conv_1 = tf.layers.conv2d(inputs=x_image,
filters=32,
kernel_size=(5, 5),
padding = 'same',
activation=tf.nn.relu,
name='conv1')
pool1 = tf.layers.max_pooling2d(inputs=conv_1,
pool_size=(2, 2),
strides=(2,2),
name='pool1')
conv_2 = tf.layers.conv2d(inputs=pool1,
filters=64,
kernel_size=(5, 5),
padding = 'same',
activation=tf.nn.relu,
name='conv2')
pool2 = tf.layers.max_pooling2d(inputs=conv_2,
pool_size=(2,2),
strides=(2,2),
name='pool2')
# fc layer1
flatten = tf.layers.flatten(pool2, name='flatten')
# fc layer2
y_ = tf.layers.dense(flatten, 10)
loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_)
predict = tf.argmax(y_, 1)
correct_prediction = tf.equal(predict, y)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float64))
with tf.name_scope('train_op'):
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
init = tf.global_variables_initializer()
batch_size = 20
train_steps = 10000
test_steps = 50
# train 10k: 71.35%
with tf.Session() as sess:
sess.run(init)
for i in range(train_steps):
batch_data, batch_labels = train_data.next_batch(batch_size)
loss_val, acc_val, _ = sess.run([loss, accuracy, train_op], feed_dict={x: batch_data, y: batch_labels})
if (i+1) % 100 == 0:
print('[Train] Step: %d, loss: %4.5f, acc: %4.5f' % (i+1, loss_val, acc_val))
if (i+1) % 1000 == 0:
all_test_acc_val = []
for j in range(test_steps):
test_batch_data, test_batch_labels = test_data.next_batch(batch_size)
test_acc_val = sess.run([accuracy], feed_dict = {x: test_batch_data, y: test_batch_labels})
all_test_acc_val.append(test_acc_val)
test_acc = np.mean(all_test_acc_val)
print('[Test ] Step: %d, acc: %4.5f' % (i+1, test_acc))
```
# MLP example using PySNN
```
import numpy as np
import matplotlib.pyplot as plt
import torch
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms
from tqdm import tqdm
from pysnn.connection import Linear
from pysnn.neuron import LIFNeuron, Input
from pysnn.learning import MSTDPET
from pysnn.encoding import PoissonEncoder
from pysnn.network import SNNNetwork
from pysnn.datasets import AND, BooleanNoise, Intensity
```
## Parameter definitions
```
# Architecture
n_in = 10
n_hidden = 5
n_out = 1
# Data
duration = 200
intensity = 50
num_workers = 0
batch_size = 1
# Neuronal Dynamics
thresh = 1.0
v_rest = 0
alpha_v = 10
tau_v = 10
alpha_t = 10
tau_t = 10
duration_refrac = 2
dt = 1
delay = 2
i_dynamics = (dt, alpha_t, tau_t, "exponential")
n_dynamics = (thresh, v_rest, alpha_v, alpha_v, dt, duration_refrac, tau_v, tau_t, "exponential")
c_dynamics = (batch_size, dt, delay)
# Learning
epochs = 100
lr = 0.1
w_init = 0.8
a = 0.0
```
## Network definition
The API is mostly the same as for regular PyTorch. The main differences are that layers are composed of a `Neuron` and `Connection` type,
and the layer has to be added to the network by calling the `add_layer` method. Lastly, all objects return both a
spike (or activation potential) object and a trace object.
```
class Network(SNNNetwork):
    def __init__(self):
        super(Network, self).__init__()

        # Input
        self.input = Input((batch_size, 1, n_in), *i_dynamics)

        # Layer 1
        self.mlp1_c = Linear(n_in, n_hidden, *c_dynamics)
        self.mlp1_c.reset_weights(distribution="uniform")  # initialize uniform between 0 and 1
        self.neuron1 = LIFNeuron((batch_size, 1, n_hidden), *n_dynamics)
        self.add_layer("fc1", self.mlp1_c, self.neuron1)

        # Layer 2
        self.mlp2_c = Linear(n_hidden, n_out, *c_dynamics)
        self.mlp2_c.reset_weights(distribution="uniform")
        self.neuron2 = LIFNeuron((batch_size, 1, n_out), *n_dynamics)
        self.add_layer("fc2", self.mlp2_c, self.neuron2)

    def forward(self, input):
        x, t = self.input(input)

        # Layer 1
        x, _ = self.mlp1_c(x, t)
        x, t = self.neuron1(x)

        # Layer out
        x, _ = self.mlp2_c(x, t)
        x, t = self.neuron2(x)

        return x, t
```
## Dataset
Simple Boolean AND dataset, generated to match the input dimensions of the network.
```
data_transform = transforms.Compose(
    [
        # BooleanNoise(0.2, 0.8),
        Intensity(intensity)
    ]
)
lbl_transform = transforms.Lambda(lambda x: x * intensity)

train_dataset = AND(
    data_encoder=PoissonEncoder(duration, dt),
    data_transform=data_transform,
    lbl_transform=lbl_transform,
    repeats=n_in / 2,
)
train_dataloader = DataLoader(
    train_dataset, batch_size=batch_size, shuffle=False, num_workers=num_workers
)

# Visualize input samples
_, axes = plt.subplots(1, 4, sharey=True, figsize=(25, 8))
for s in range(len(train_dataset)):
    sample = train_dataset[s][0]     # Drop label
    sample = sample.sum(-1).numpy()  # Total spikes by summing over time dimension
    sample = np.squeeze(sample)
    axes[s].bar(range(len(sample)), sample)
    axes[s].set_ylabel("Total number of spikes")
    axes[s].set_xlabel("Input neuron")
```
## Training
```
device = torch.device("cpu")
net = Network()
# Learning rule definition
layers = net.layer_state_dict()
learning_rule = MSTDPET(layers, 1, 1, lr, np.exp(-1/10))
# Training loop
for _ in tqdm(range(epochs)):
    for batch in train_dataloader:
        sample, label = batch

        # Iterate over input's time dimension
        for idx in range(sample.shape[-1]):
            input = sample[:, :, :, idx]
            spike_net, _ = net(input)

            # Determine reward, provide reward of 1 for desired behaviour, 0 otherwise.
            # For positive samples (simulating an AND gate) spike as often as possible,
            # for negative samples spike as little as possible.
            if spike_net.long().view(-1) == label:
                reward = 1
            else:
                reward = 0

            # Perform a single step of the learning rule
            learning_rule.step(reward)

        # Reset network state (e.g. voltage, trace, spikes)
        net.reset_state()
```
## Generate Data for Visualization
```
out_spikes = []
out_voltage = []
out_trace = []
for batch in train_dataloader:
    single_out_s = []
    single_out_v = []
    single_out_t = []
    sample, _ = batch

    # Iterate over input's time dimension
    for idx in range(sample.shape[-1]):
        input = sample[:, :, :, idx]
        spike_net, trace_net = net(input)

        # Single timestep results logging
        single_out_s.append(spike_net.clone())
        single_out_t.append(trace_net.clone())
        # Clone the voltage to make a copy of the value instead of using a pointer to memory
        single_out_v.append(net.neuron2.v_cell.clone())

    # Store batch results
    out_spikes.append(torch.stack(single_out_s, dim=-1).view(-1))
    out_voltage.append(torch.stack(single_out_v, dim=-1).view(-1))
    out_trace.append(torch.stack(single_out_t, dim=-1).view(-1))

    # Reset network state (e.g. voltage, trace, spikes)
    net.reset_state()
```
### Visualize output neuron state over time
In the voltage plots the peaks never reach the voltage of 1, this is because the network has already reset the voltage of the spiking neurons during the forward pass. Thus it is not possible to register the exact voltage surpassing the threshold.
```
_, axes = plt.subplots(3, 4, sharey="row", figsize=(25, 12))

# Process every sample separately
for s in range(len(out_spikes)):
    ax_col = axes[:, s]
    spikes = out_spikes[s]
    voltage = out_voltage[s]
    trace = out_trace[s]
    data_combined = [spikes, trace, voltage]
    names = ["Spikes", "Trace", "Voltage"]

    # Set column titles
    ax_col[0].set_title(f"Sample {s}")

    # Plot all states
    for ax, data, name in zip(ax_col, data_combined, names):
        ax.plot(data, label=name)
        ax.legend()
```
<a href="https://csdms.colorado.edu/wiki/ESPIn2020"><img style="float: center; width: 75%" src="../../../media/ESPIn.png"></a>
# Introduction to Landlab: Creating a simple 2D scarp diffusion model
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
This tutorial illustrates how you can use Landlab to construct a simple two-dimensional numerical model on a regular (raster) grid, using a simple forward-time, centered-space numerical scheme. The example is the erosional degradation of an earthquake fault scarp, which evolves over time in response to the gradual downhill motion of soil. Here we use a simple "geomorphic diffusion" model for landform evolution, in which the downhill flow of soil is assumed to be proportional to the (downhill) gradient of the land surface, multiplied by a transport coefficient.
We start by importing the [numpy](https://numpy.org) and [matplotlib](https://matplotlib.org) libraries:
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
## Part 1: 1D version using numpy
This example uses a finite-volume numerical solution to the 2D diffusion equation. The 2D diffusion equation in this case is derived as follows. Continuity of mass states that:
$\frac{\partial z}{\partial t} = -\nabla \cdot \mathbf{q}_s$,
where $z$ is elevation, $t$ is time, the vector $\mathbf{q}_s$ is the volumetric soil transport rate per unit width, and $\nabla$ is the divergence operator (here in two dimensions). (Note that we have omitted a porosity factor here; its effect will be subsumed in the transport coefficient). The sediment flux vector depends on the slope gradient:
$\mathbf{q}_s = -D \nabla z$,
where $D$ is a transport-rate coefficient---sometimes called *hillslope diffusivity*---with dimensions of length squared per time. Combining the two, and assuming $D$ is uniform, we have a classical 2D diffusion equation:
$\frac{\partial z}{\partial t} = D\nabla^2 z$.
In this first example, we will create our 1D domain in $x$ and $z$, and set a value for $D$.
This means that the equation we solve will be in 1D.
$\frac{d z}{d t} = -\frac{d q_s}{dx}$,
where
$q_s = -D \frac{d z}{dx}$
```
dx = 1
x = np.arange(0, 100, dx, dtype=float)
z = np.zeros(x.shape, dtype=float)
D = 0.01
```
Next we must create our fault by uplifting some of the domain. We will increment all elements of `z` in which `x>50`.
```
z[x>50] += 100
```
Finally, we will diffuse our fault for 1,000 years.
We will use a timestep with a [Courant–Friedrichs–Lewy condition](https://en.wikipedia.org/wiki/Courant–Friedrichs–Lewy_condition) of $C_{cfl}=0.2$. This will keep our solution numerically stable.
$C_{cfl} = \frac{\Delta t D}{\Delta x^2} = 0.2$
```
dt = 0.2 * dx * dx / D
total_time = 1e3
nts = int(total_time/dt)
z_orig = z.copy()
for i in range(nts):
    qs = -D * np.diff(z) / dx
    dzdt = -np.diff(qs) / dx
    z[1:-1] += dzdt * dt
plt.plot(x, z_orig, label="Original Profile")
plt.plot(x, z, label="Diffused Profile")
plt.legend()
```
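To see why the Courant condition matters, here is a standalone sketch (plain numpy, independent of the variables above) that repeats the scarp experiment at two timestep sizes. At $C_{cfl} = 0.2$ the solution stays bounded; at $C_{cfl} = 0.6$, above the stability limit of 0.5 for this explicit 1D scheme, it blows up:

```python
import numpy as np

def diffuse_step(z, D, dx, dt):
    """One explicit finite-volume step of 1D diffusion."""
    qs = -D * np.diff(z) / dx          # flux at cell faces
    z = z.copy()
    z[1:-1] += -np.diff(qs) / dx * dt  # update interior nodes only
    return z

dx, D = 1.0, 0.01
z0 = np.zeros(100)
z0[50:] = 100.0  # the fault scarp, as above

# Stable: CFL number 0.2
z_stable = z0
for _ in range(200):
    z_stable = diffuse_step(z_stable, D, dx, dt=0.2 * dx**2 / D)

# Unstable: CFL number 0.6 (> 0.5 limit for explicit 1D diffusion)
z_unstable = z0
for _ in range(200):
    z_unstable = diffuse_step(z_unstable, D, dx, dt=0.6 * dx**2 / D)

print(np.abs(z_stable).max())    # never exceeds the original 100 m offset
print(np.abs(z_unstable).max())  # grows without bound
```

With $C_{cfl} \le 0.5$ each update is a convex combination of neighboring values, so the solution can never overshoot its initial bounds; above that limit the highest-frequency mode is amplified every step.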
The prior example is pretty simple. If this was all you needed to do, you wouldn't need Landlab.
But what if you wanted...
... to use the same diffusion model in 2D instead of 1D?
... to use an irregular grid (in 1D or 2D)?
... to combine the diffusion model with a more complex model?
... to reuse a more complex model over and over again with different boundary conditions?
These are the sorts of problems that Landlab was designed to solve.
In the next two sections we will introduce some of the core capabilities of Landlab.
In Part 2 we will use the RasterModelGrid, fields, and a numerical utility for calculating flux divergence.
In Part 3 we will use the HexagonalModelGrid.
In Part 4 we will use the LinearDiffuser component.
## Part 2: 2D version using Landlab's Model Grids
The Landlab model grids are data structures that represent the model domain (the variable `x` in our prior example). Here we will use `RasterModelGrid` which creates a grid with regularly spaced square grid elements. The RasterModelGrid knows how the elements are connected and how far apart they are.
Let's start by creating a `RasterModelGrid`. First we need to import the class.
```
from landlab import RasterModelGrid
```
### (a) Explore the RasterModelGrid
Before we make a RasterModelGrid for our fault example, let's explore the Landlab model grid.
Landlab represents the grid as a "dual" graph: two sets of points, lines, and polygons that represent 2D space.
The first graph considers points called "nodes" that are connected by lines called "links". The area that surrounds each node is called a "cell".
First, the nodes
```
from landlab.plot.graph import plot_graph
grid = RasterModelGrid((4, 5), xy_spacing=(3,4))
plot_graph(grid, at="node")
```
You can see that the nodes are points and they are numbered with unique IDs from lower left to upper right.
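This numbering convention (left to right within each row, bottom row first) means a node's ID can be computed from its row and column. A quick pure-Python sketch of that rule for the 4x5 grid above (the `node_id` helper is ours, just to illustrate the arithmetic; it is not a Landlab call):

```python
n_rows, n_cols = 4, 5

def node_id(row, col, n_cols):
    """ID of the node at (row, col), counting rows from the bottom."""
    return row * n_cols + col

print(node_id(0, 0, n_cols))                    # lower-left node -> 0
print(node_id(n_rows - 1, n_cols - 1, n_cols))  # upper-right node -> 19
```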
Next the links
```
plot_graph(grid, at="link")
```
which are lines that connect the nodes and each have a unique ID number.
And finally, the cells
```
plot_graph(grid, at="cell")
```
which are polygons centered around the nodes.
The grid is a "dual" graph because Landlab also keeps track of a second set of points, lines, and polygons ("corners", "faces", and "patches"). We will not focus on them further here.
### *Exercises for section 2a*
(2a.1) Create an instance of a `RasterModelGrid` with 5 rows and 7 columns, with a spacing between nodes of 10 units. Plot the node layout, and identify the ID number of the center-most node.
```
# (enter your solution to 2a.1 here)
rmg = RasterModelGrid((5, 7), 10.0)
plot_graph(rmg, at='node')
```
(2a.2) Find the ID of the cell that contains this node.
```
# (enter your solution to 2a.2 here)
plot_graph(rmg, at='cell')
```
(2a.3) Find the ID of the horizontal link that connects to the last node on the right in the middle column.
```
# (enter your solution to 2a.3 here)
plot_graph(rmg, at='link')
```
### (b) Use the RasterModelGrid for 2D diffusion
Let's continue by making a new grid that is bigger. We will use this for our next fault diffusion example.
The syntax in the next line says: create a new *RasterModelGrid* object called **mg**, with 25 rows, 40 columns, and a grid spacing of 10 m.
```
mg = RasterModelGrid((25, 40), 10.0)
```
Note the use of object-oriented programming here. `RasterModelGrid` is a class; `mg` is a particular instance of that class, and it contains all the data necessary to fully describe the topology and geometry of this particular grid.
Next we'll add a *data field* to the grid, to represent the elevation values at grid nodes. The "dot" syntax below indicates that we are calling a function (or *method*) that belongs to the *RasterModelGrid* class, and will act on data contained in **mg**. The arguments indicate that we want the data elements attached to grid nodes (rather than links, for example), and that we want to name this data field `topographic__elevation`. The `add_zeros` method returns the newly created NumPy array.
```
z = mg.add_zeros('topographic__elevation', at='node')
```
The above line of code creates space in memory to store 1,000 floating-point values, which will represent the elevation of the land surface at each of our 1,000 grid nodes.
Let's plot the positions of all the grid nodes. The nodes' *(x,y)* positions are stored in the arrays `mg.x_of_node` and `mg.y_of_node`, respectively.
```
plt.plot(mg.x_of_node, mg.y_of_node, '.')
```
If we bothered to count, we'd see that there are indeed 1,000 grid nodes, and a corresponding number of `z` values:
```
len(z)
```
Now for some tectonics. Let's say there's a fault trace that angles roughly east-northeast. We can describe the trace with the equation for a line. One trick here: by using `mg.x_of_node`, in the line of code below, we are calculating a *y* (i.e., north-south) position of the fault trace for each grid node---meaning that this is the *y* coordinate of the trace at the *x* coordinate of a given node.
```
fault_trace_y = 50.0 + 0.25 * mg.x_of_node
```
Here comes the earthquake. For all the nodes north of the fault (i.e., those with a *y* coordinate greater than the corresponding *y* coordinate of the fault trace), we'll add elevation equal to 10 meters plus a centimeter for every meter east along the grid (just to make it interesting):
```
z[mg.y_of_node > fault_trace_y] += 10.0 + 0.01 * mg.x_of_node[mg.y_of_node > fault_trace_y]
```
(A little bit of Python under the hood: the statement `mg.y_of_node > fault_trace_y` creates a 1000-element long boolean array; placing this within the index brackets will select only those array entries that correspond to `True` in the boolean array)
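The same mechanics in miniature (plain numpy, separate from the grid):

```python
import numpy as np

y_coords = np.array([0.0, 10.0, 20.0, 30.0])
elev = np.zeros(4)

mask = y_coords > 15.0  # boolean array: [False, False, True, True]
elev[mask] += 10.0      # only entries where the mask is True are modified

print(elev)  # [ 0.  0. 10. 10.]
```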
Let's look at our newly created initial topography using Landlab's *imshow_node_grid* plotting function (which we first need to import).
```
from landlab.plot.imshow import imshow_grid
imshow_grid(mg, 'topographic__elevation')
```
To finish getting set up, we will define two parameters: the transport ("diffusivity") coefficient, `D`, and the time-step size, `dt`. (The latter is set using the Courant condition for a forward-time, centered-space finite-difference solution; you can find the explanation in most textbooks on numerical methods).
```
D = 0.01 # m2/yr transport coefficient
dt = 0.2 * mg.dx * mg.dx / D
dt
```
Boundary conditions: for this example, we'll assume that the east and west sides are closed to flow of sediment, but that the north and south sides are open. (The order of the function arguments is east, north, west, south)
```
mg.set_closed_boundaries_at_grid_edges(True, False, True, False)
```
*A note on boundaries:* with a Landlab raster grid, all the perimeter nodes are boundary nodes. In this example, there are 24 + 24 + 39 + 39 = 126 boundary nodes. The previous line of code set those on the east and west edges to be **closed boundaries**, while those on the north and south are **open boundaries** (the default). All the remaining nodes are known as **core** nodes. In this example, there are 1000 - 126 = 874 core nodes:
```
len(mg.core_nodes)
```
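We can verify this counting directly (plain Python; the perimeter formula below is ordinary arithmetic, not a Landlab call):

```python
n_rows, n_cols = 25, 40

n_nodes = n_rows * n_cols               # 1000 nodes in total
n_boundary = 2 * (n_rows + n_cols) - 4  # perimeter nodes, each corner counted once
n_core = n_nodes - n_boundary

print(n_boundary)  # 126
print(n_core)      # 874
```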
One more thing before we run the time loop: we'll create an array to contain soil flux. In the function call below, the first argument tells Landlab that we want one value for each grid link, while the second argument provides a name for this data *field*:
```
qs = mg.add_zeros('sediment_flux', at='link')
```
And now for some landform evolution. We will loop through 25 iterations, representing 50,000 years. On each pass through the loop, we do the following:
1. Calculate, and store in the array `g`, the gradient between each neighboring pair of nodes. These calculations are done on **links**. The gradient value is a positive number when the gradient is "uphill" in the direction of the link, and negative when the gradient is "downhill" in the direction of the link. On a raster grid, link directions are always in the direction of increasing $x$ ("horizontal" links) or increasing $y$ ("vertical" links).
2. Calculate, and store in the array `qs`, the sediment flux between each adjacent pair of nodes by multiplying their gradient by the transport coefficient. We will only do this for the **active links** (those not connected to a closed boundary, and not connecting two boundary nodes of any type); others will remain as zero.
3. Calculate the resulting net flux at each node (positive=net outflux, negative=net influx). The negative of this array is the rate of change of elevation at each (core) node, so store it in a node array called `dzdt`.
4. Update the elevations for the new time step.
```
for i in range(25):
    g = mg.calc_grad_at_link(z)
    qs[mg.active_links] = -D * g[mg.active_links]
    dzdt = -mg.calc_flux_div_at_node(qs)
    z[mg.core_nodes] += dzdt[mg.core_nodes] * dt
```
Let's look at how our fault scarp has evolved.
```
imshow_grid(mg, 'topographic__elevation')
```
Notice that we have just created and run a 2D model of fault-scarp creation and diffusion with fewer than two dozen lines of code. How long would this have taken to write in C or Fortran?
While it was very very easy to write in 1D, writing this in 2D would mean we would have needed to keep track of the adjacency of the different parts of the grid. This is the primary problem that the Landlab grids are meant to solve.
Think about how difficult this would be to hand-code if the grid were irregular or hexagonal. In order to conserve mass and implement the differential equation you would need to know how nodes were connected, how long the links were, and how big each cell was.
We do such an example after the next section.
### *Exercises for section 2b*
(2b.1) Create an instance of a `RasterModelGrid` called `mygrid`, with 16 rows and 25 columns, with a spacing between nodes of 5 meters. Use the `plot` function in the `matplotlib` library to make a plot that shows the position of each node marked with a dot (hint: see the `plt.plot()` example above).
```
# (enter your solution to 2b.1 here)
mygrid = RasterModelGrid((16, 25), xy_spacing=5.0)
plt.plot(mygrid.x_of_node, mygrid.y_of_node, '.')
```
(2b.2) Query the grid variables `number_of_nodes` and `number_of_core_nodes` to find out how many nodes are in your grid, and how many of them are core nodes.
```
# (enter your solution to 2b.2 here)
print(mygrid.number_of_nodes)
print(mygrid.number_of_core_nodes)
```
(2b.3) Add a new field to your grid, called `temperature` and attached to nodes. Have the initial values be all zero.
```
# (enter your solution to 2b.3 here)
temp = mygrid.add_zeros('temperature', at='node')
```
(2b.4) Change the temperature of nodes in the top (north) half of the grid to be 10 degrees C. Use the `imshow_grid` function to display a shaded image of the elevation field.
```
# (enter your solution to 2b.4 here)
temp[mygrid.y_of_node >= 40.0] = 10.0
imshow_grid(mygrid, 'temperature')
```
(2b.5) Use the grid function `set_closed_boundaries_at_grid_edges` to assign closed boundaries to the right and left sides of the grid.
```
# (enter your solution to 2b.5 here)
mygrid.set_closed_boundaries_at_grid_edges(True, False, True, False)
imshow_grid(mygrid, 'temperature', color_for_closed='c')
```
(2b.6) Create a new field of zeros called `heat_flux` and attached to links. Using the `number_of_links` grid variable, verify that your new field array has the correct number of items.
```
# (enter your solution to 2b.6 here)
Q = mygrid.add_zeros('heat_flux', at='link')
print(mygrid.number_of_links)
print(len(Q))
```
(2b.7) Use the `calc_grad_at_link` grid function to calculate the temperature gradients at all the links in the grid. Given the node spacing and the temperatures you assigned to the top versus bottom grid nodes, what do you expect the maximum temperature gradient to be? Print the values in the gradient array to verify that this is indeed the maximum temperature gradient.
```
# (enter your solution to 2b.7 here)
print('Expected max gradient is 2 C/m')
temp_grad = mygrid.calc_grad_at_link(temp)
print(temp_grad)
```
(2b.8) Back to hillslopes: Reset the values in the elevation field of the grid `mg` to zero. Then copy and paste the time loop above (i.e., the block in Section 2b that starts with `for i in range(25):`) below. Modify the last line to add uplift of the hillslope material at a rate `uplift_rate` = 0.0001 m/yr (hint: the amount of uplift in each iteration should be the uplift rate times the time-step duration). Then run the block and plot the resulting topography. Try experimenting with different uplift rates and different values of `D`.
```
# (enter your solution to 2b.8 here)
z[:] = 0.0
uplift_rate = 0.0001
for i in range(25):
    g = mg.calc_grad_at_link(z)
    qs[mg.active_links] = -D * g[mg.active_links]
    dzdt = -mg.calc_flux_div_at_node(qs)
    z[mg.core_nodes] += (dzdt[mg.core_nodes] + uplift_rate) * dt
imshow_grid(mg, z)
```
### (c) What's going on under the hood?
This example uses a finite-volume numerical solution to the 2D diffusion equation. The 2D diffusion equation in this case is derived as follows. Continuity of mass states that:
$\frac{\partial z}{\partial t} = -\nabla \cdot \mathbf{q}_s$,
where $z$ is elevation, $t$ is time, the vector $\mathbf{q}_s$ is the volumetric soil transport rate per unit width, and $\nabla$ is the divergence operator (here in two dimensions). (Note that we have omitted a porosity factor here; its effect will be subsumed in the transport coefficient). The sediment flux vector depends on the slope gradient:
$\mathbf{q}_s = -D \nabla z$,
where $D$ is a transport-rate coefficient---sometimes called *hillslope diffusivity*---with dimensions of length squared per time. Combining the two, and assuming $D$ is uniform, we have a classical 2D diffusion equation:
$\frac{\partial z}{\partial t} = D\nabla^2 z$.
For the numerical solution, we discretize $z$ at a series of *nodes* on a grid. The example in this notebook uses a Landlab *RasterModelGrid*, in which every interior node sits inside a cell of width $\Delta x$, but we could alternatively have used any grid type that provides nodes, links, and cells.
The gradient and sediment flux vectors will be calculated at the *links* that connect each pair of adjacent nodes. These links correspond to the mid-points of the cell faces, and the values that we assign to links represent the gradients and fluxes, respectively, along the faces of the cells.
The flux divergence, $\nabla \cdot \mathbf{q}_s$, will be calculated by summing, for every cell, the total volume inflows and outflows at each cell face, and dividing the resulting sum by the cell area. Note that for a regular, rectilinear grid, as we use in this example, this finite-volume method is equivalent to a finite-difference method.
To advance the solution in time, we will use a simple explicit, forward-difference method. This solution scheme for a given node $i$ can be written:
$\frac{z_i^{t+1} - z_i^t}{\Delta t} = -\frac{1}{A_i} \sum\limits_{j=1}^{N_i} \delta (l_{ij}) q_s (l_{ij}) \lambda(l_{ij})$.
Here the superscripts refer to time steps, $\Delta t$ is time-step size, $q_s(l_{ij})$ is the sediment flux per width associated with the link that crosses the $j$-th face of the cell at node $i$, $\lambda(l_{ij})$ is the width of the cell face associated with that link ($=\Delta x$ for a regular uniform grid), and $N_i$ is the number of active links that connect to node $i$. The variable $\delta(l_{ij})$ contains either +1 or -1: it is +1 if link $l_{ij}$ is oriented away from the node (in which case positive flux would represent material leaving its cell), or -1 if instead the link "points" into the cell (in which case positive flux means material is entering).
To get the fluxes, we first calculate the *gradient*, $G$, at each link, $k$:
$G(k) = \frac{z(H_k) - z(T_k)}{L_k}$.
Here $H_k$ refers to the *head node* associated with link $k$, and $T_k$ is the *tail node* associated with link $k$. Each link has a direction: from the tail node to the head node. The length of link $k$ is $L_k$ (equal to $\Delta x$ in a regular uniform grid). What the above equation says is that the gradient in $z$ associated with each link is simply the difference in $z$ value between its two endpoint nodes, divided by the distance between them. The gradient is positive when the value at the head node (the "tip" of the link) is greater than the value at the tail node, and vice versa.
The calculation of gradients in $z$ at the links is accomplished with the `calc_grad_at_link` function. The sediment fluxes are then calculated by multiplying the link gradients by $-D$. Once the fluxes at links have been established, the `calc_flux_div_at_node` function performs the summation of fluxes.
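To see what these two functions compute, here is a hand-rolled 1D analogue in plain numpy (an illustration of the arithmetic only, not the Landlab implementation): gradients live on the links between adjacent nodes, and the divergence at each interior node differences the fluxes on its two links.

```python
import numpy as np

z1d = np.array([0.0, 0.0, 1.0, 0.0, 0.0])  # node values on a 1D "grid"
dx1 = 1.0   # node spacing (link length)
D1 = 0.01   # transport coefficient

# Gradient at each link: (head-node value - tail-node value) / link length
grad = np.diff(z1d) / dx1    # one value per link

# Unit flux at each link, then net outflux per unit length at interior nodes
flux1d = -D1 * grad
div = np.diff(flux1d) / dx1  # flux divergence at the three interior nodes

print(grad)  # [ 0.  1. -1.  0.]
print(div)   # [-0.01  0.02 -0.01]
```

The central node loses mass (positive divergence) while its two neighbors gain it, which is exactly the role `calc_grad_at_link` and `calc_flux_div_at_node` play on the 2D grids above.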
### *Exercises for section 2c*
(2c.1) Make a 3x3 `RasterModelGrid` called `tinygrid`, with a cell spacing of 2 m. Use the `plot_graph` function to display the nodes and their ID numbers.
```
# (enter your solution to 2c.1 here)
tinygrid = RasterModelGrid((3, 3), 2.0)
plot_graph(tinygrid, at='node')
```
(2c.2) Give your `tinygrid` a node field called `height` and set the height of the center-most node to 0.5. Use `imshow_grid` to display the height field.
```
# (enter your solution to 2c.2 here)
ht = tinygrid.add_zeros('height', at='node')
ht[4] = 0.5
imshow_grid(tinygrid, ht)
```
(2c.3) The grid should have 12 links (extra credit: verify this with `plot_graph`). When you compute gradients, which of these links will have non-zero gradients? What will the absolute value(s) of these gradients be? Which (if any) will have positive gradients and which negative? To codify your answers, make a 12-element numpy array that contains your predicted gradient value for each link.
```
# (enter your solution to 2c.3 here)
plot_graph(tinygrid, at='link')
pred_grad = np.array([0, 0, 0, 0.25, 0, 0.25, -0.25, 0, -0.25, 0, 0, 0])
print(pred_grad)
```
(2c.4) Test your prediction by running the `calc_grad_at_link` function on your tiny grid. Print the resulting array and compare it with your predictions.
```
# (enter your solution to 2c.4 here)
grad = tinygrid.calc_grad_at_link(ht)
print(grad)
```
(2c.5) Suppose the flux of soil per unit cell width is defined as -0.01 times the height gradient. What would the flux be at the those links that have non-zero gradients? Test your prediction by creating and printing a new array whose values are equal to -0.01 times the link-gradient values.
```
# (enter your solution to 2c.5 here)
flux = -0.01 * grad
print(flux)
```
(2c.6) Consider the net soil accumulation or loss rate around the center-most node in your tiny grid (which is the only one that has a cell). The *divergence* of soil flux can be represented numerically as the sum of the total volumetric soil flux across each of the cell's four faces. What is the flux across each face? (Hint: multiply by face width) What do they add up to? Test your prediction by running the grid function `calc_flux_div_at_node` (hint: pass your unit flux array as the argument). What are the units of the divergence values returned by the `calc_flux_div_at_node` function?
```
# (enter your solution to 2c.6 here)
print('predicted div is 0 m/yr')
dqsdx = tinygrid.calc_flux_div_at_node(flux)
print(dqsdx)
```
## Part 3: Hexagonal grid
Next we will use a non-raster Landlab grid.
We create a `HexModelGrid` with a rectangular node layout (25 rows and 40 columns of nodes, spaced 10 m apart), add a field of zeros called "topographic__elevation", and plot the node locations.
Note that the syntax here is exactly the same as in the RasterModelGrid example (once the grid has been created).
```
from landlab import HexModelGrid
mg = HexModelGrid((25, 40), 10, node_layout="rect")
z = mg.add_zeros('topographic__elevation', at='node')
plt.plot(mg.x_of_node, mg.y_of_node, '.')
```
Next we create our fault trace and uplift the hanging wall.
We can plot just like we did with the RasterModelGrid.
```
fault_trace_y = 50.0 + 0.25 * mg.x_of_node
z[mg.y_of_node > fault_trace_y] += 10.0 + 0.01 * mg.x_of_node[mg.y_of_node > fault_trace_y]
imshow_grid(mg, "topographic__elevation")
```
And we can use the same code as before to create a diffusion model!
Landlab supports multiple grid types. You can read more about them [here](https://landlab.readthedocs.io/en/latest/reference/grid/index.html).
```
qs = mg.add_zeros('sediment_flux', at='link')
for i in range(25):
    g = mg.calc_grad_at_link(z)
    qs[mg.active_links] = -D * g[mg.active_links]
    dzdt = -mg.calc_flux_div_at_node(qs)
    z[mg.core_nodes] += dzdt[mg.core_nodes] * dt
imshow_grid(mg, 'topographic__elevation')
```
### *Exercises for section 3*
(3.1-6) Repeat the exercises from section 2c, but this time using a hexagonal tiny grid called `tinyhex`. Your grid should have 7 nodes: one core node and 6 perimeter nodes. (Hints: use `node_layout = 'hex'`, and make a grid with 3 rows and 2 base-row columns.)
```
# (enter your solution to 3.1 here)
tinyhex = HexModelGrid((3, 2), 2.0)
plot_graph(tinyhex, at='node')
# (enter your solution to 3.2 here)
hexht = tinyhex.add_zeros('height', at='node')
hexht[3] = 0.5
imshow_grid(tinyhex, hexht)
# (enter your solution to 3.3 here)
plot_graph(tinyhex, at='link')
pred_grad = np.array([0, 0, 0.25, 0.25, 0, 0.25, -0.25, 0, -0.25, -0.25, 0, 0])
print(pred_grad)
# (enter your solution to 3.4 here)
hexgrad = tinyhex.calc_grad_at_link(hexht)
print(hexgrad)
# (enter your solution to 3.5 here)
hexflux = -0.01 * hexgrad
print(hexflux)
# (enter your solution to 3.6 here)
print(tinyhex.length_of_face)
print(tinyhex.area_of_cell)
total_outflux = 6 * 0.0025 * tinyhex.length_of_face[0]
divergence = total_outflux / tinyhex.area_of_cell[0]
print(total_outflux)
print(divergence)
```
## Part 4: Landlab Components
Finally we will use a Landlab component, called the LinearDiffuser [link to its documentation](https://landlab.readthedocs.io/en/latest/reference/components/diffusion.html).
Landlab was designed to provide utilities like `calc_grad_at_link` and `calc_flux_div_at_node` to help you make your own models. Sometimes, however, you may use such a model over and over and over. Then it is convenient to put it in its own Python class with a standard interface.
This is what a Landlab Component is.
There is a whole [tutorial on components](../component_tutorial/component_tutorial.ipynb) and a [page on the User Guide](https://landlab.readthedocs.io/en/latest/user_guide/components.html). For now we will just show you what the prior example looks like if we use the LinearDiffuser.
First we import it, set up the grid, and uplift our fault block.
```
from landlab.components import LinearDiffuser
mg = HexModelGrid((25, 40), 10, node_layout="rect")
z = mg.add_zeros('topographic__elevation', at='node')
fault_trace_y = 50.0 + 0.25 * mg.x_of_node
z[mg.y_of_node > fault_trace_y] += 10.0 + 0.01 * mg.x_of_node[mg.y_of_node > fault_trace_y]
```
Next we instantiate a LinearDiffuser. We have to tell the component what value to use for the diffusivity.
```
ld = LinearDiffuser(mg, linear_diffusivity=D)
```
Finally we run the component forward in time and plot. Like many Landlab components, the LinearDiffuser has a method called "run_one_step" that takes one input, the timestep dt. Calling this method runs the LinearDiffuser forward in time by an increment dt.
```
for i in range(25):
    ld.run_one_step(dt)
imshow_grid(mg, 'topographic__elevation')
```
### *Exercises for section 4*
(4.1) Repeat the steps above that instantiate and run a `LinearDiffuser` component, but this time give it a `RasterModelGrid`. Use `imshow_grid` to display the topography below.
```
# (enter your solution to 4.1 here)
rmg = RasterModelGrid((25, 40), 10)
z = rmg.add_zeros('topographic__elevation', at='node')
fault_trace_y = 50.0 + 0.25 * rmg.x_of_node
z[rmg.y_of_node > fault_trace_y] += 10.0 + 0.01 * rmg.x_of_node[rmg.y_of_node > fault_trace_y]
ld = LinearDiffuser(rmg, linear_diffusivity=D)
for i in range(25):
ld.run_one_step(dt)
imshow_grid(rmg, 'topographic__elevation')
```
(4.2) Using either a raster or hex grid (your choice) with a `topographic__elevation` field that is initially all zeros, write a modified version of the loop that adds uplift to the core nodes each iteration, at a rate of 0.0001 m/yr. Run the model for enough time to accumulate 10 meters of uplift. Plot the terrain to verify that the land surface height never gets higher than 10 m.
```
# (enter your solution to 4.2 here)
rmg = RasterModelGrid((40, 40), 10) # while we're at it, make it a bit bigger
z = rmg.add_zeros('topographic__elevation', at='node')
ld = LinearDiffuser(rmg, linear_diffusivity=D)
for i in range(50):
ld.run_one_step(dt)
z[rmg.core_nodes] += dt * 0.0001
imshow_grid(rmg, 'topographic__elevation')
```
(4.3) Now run the same model long enough that it reaches (or gets very close to) a dynamic equilibrium between uplift and erosion. What shape does the hillslope have?
```
# (enter your solution to 4.3 here)
z[:] = 0.0
uplift_rate = 0.0001
for i in range(4000):
ld.run_one_step(dt)
z[rmg.core_nodes] += dt * uplift_rate
imshow_grid(rmg, 'topographic__elevation')
plt.figure()
plt.plot(rmg.x_of_node, z, '.')
```
(BONUS CHALLENGE QUESTION) Derive an analytical solution for the cross-sectional shape of your steady-state hillslope. Plot this solution next to the actual model's cross-section.
#### *SOLUTION (derivation)*
##### Derivation of the original governing equation
(Note: you could just start with the governing equation and go from there, but we include this here for completeness).
Consider a topographic profile across a hillslope. The horizontal coordinate along the profile is $x$, measured from the left side of the profile (i.e., the base of the hill on the left side, where $x=0$). The horizontal coordinate perpendicular to the profile is $y$. Assume that at any time, the hillslope is perfectly symmetrical in the $y$ direction, and that there is no flow of soil in this direction.
Now consider a vertical column of soil somewhere along the profile. The left side of the column is at position $x$, and the right side is at position $x+\Delta x$, with $\Delta x$ being the width of the column in the $x$ direction. The width of the column in the $y$ direction is $W$. The height of the column, $z$, is also the height of the land surface at that location. Height is measured relative to the height of the base of the slope (in other words, $z(0) = 0$).
The total mass of soil inside the column, and above the slope base, is equal to the volume of soil material times its density times the fraction of space that it fills, which is 1 - porosity. Denoting soil particle density by $\rho$ and porosity by $\phi$, the soil mass in a column of height $z$ is
$m = (1-\phi ) \rho \Delta x W z$.
Conservation of mass dictates that the rate of change of mass equals the rate of mass inflow minus the rate of mass outflow. Assume that mass enters or leaves only by (1) soil creep, and (2) uplift of the hillslope material relative to the elevation of the hillslope base. The rate of the latter, in terms of length per time, will be denoted $U$. The rate of soil creep at a particular position $x$, in terms of bulk volume (including pores) per time per width, will be denoted $q_s(x)$. With this definition in mind, mass conservation dictates that:
$\frac{\partial (1-\phi ) \rho \Delta x W z}{\partial t} = \rho (1-\phi ) \Delta x W U + \rho (1-\phi ) q_s(x) - \rho (1-\phi ) q_s(x+\Delta x)$.
Assume that porosity and density are steady and uniform. Then,
$\frac{\partial z}{\partial t} = U + \frac{q_s(x) - q_s(x+\Delta x)}{\Delta x}$.
Factoring out -1 from the right-most term, and taking the limit as $\Delta x\rightarrow 0$, we get a differential equation that expresses conservation of mass for this situation:
$\frac{\partial z}{\partial t} = U - \frac{\partial q_s}{\partial x}$.
Next, substitute the soil-creep rate law
$q_s = -D \frac{\partial z}{\partial x}$,
to obtain
$\frac{\partial z}{\partial t} = U + D \frac{\partial^2 z}{\partial x^2}$.
##### Steady state
Steady state means $\partial z/\partial t = 0$. If we go back to the mass conservation law a few steps ago and apply steady state, we find
$\frac{dq_s}{dx} = U$.
If you think of a hillslope that slopes down to the right, you can think of this as indicating that for every step you take to the right, you get another increment of incoming soil via uplift relative to baselevel. (It turns out it works the same way for a slope that angles down to the left, but that's less obvious in the above math.)
Integrate to get:
$q_s = Ux + C_1$, where $C_1$ is a constant of integration.
To evaluate the integration constant, let's assume the crest of the hill is right in the middle of the profile, at $x=L/2$, with $L$ being the total length of the profile. Net downslope soil flux will be zero at the crest (where the slope is zero), so for this location:
$q_s = 0 = UL/2 + C_1$,
and therefore,
$C_1 = -UL/2$,
and
$q_s = U (x - L/2)$.
Now substitute the creep law for $q_s$ and divide both sides by $-D$:
$\frac{dz}{dx} = \frac{U}{D} (L/2 - x)$.
Integrate:
$z = \frac{U}{D} (Lx/2 - x^2/2) + C_2$.
To evaluate $C_2$, recall that $z(0)=0$ (and also $z(L)=0$), so $C_2=0$. Hence, here's our analytical solution, which describes a parabola:
$\boxed{z = \frac{U}{2D} (Lx - x^2)}$.
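As a quick independent sanity check of the boxed solution, a bare-bones 1D explicit finite-difference model (with hypothetical $U$, $D$, and $L$ values chosen only for this sketch, not the values used in the notebook above) relaxes to exactly this parabola:

```python
import numpy as np

# Hypothetical parameter values, chosen only for this check
D = 0.01       # diffusivity, m^2/yr
U = 0.0001     # uplift rate, m/yr
L = 100.0      # profile length, m
dx = 1.0
x = np.arange(0.0, L + dx, dx)
z = np.zeros_like(x)

dt = 0.25 * dx * dx / D                      # stable explicit timestep
for _ in range(100_000):                     # long enough to reach steady state
    qs = -D * np.diff(z) / dx                # creep flux between adjacent nodes
    z[1:-1] += (U - np.diff(qs) / dx) * dt   # mass conservation; z = 0 at both ends

z_analytic = 0.5 * (U / D) * (L * x - x * x)
print(np.max(np.abs(z - z_analytic)))        # tiny: the discrete model recovers the parabola
```

The agreement is essentially exact here because the discrete second difference of a quadratic incurs no truncation error.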
```
# (enter your solution to the bonus challenge question here)
L = 390.0 # hillslope length, m
x_analytic = np.arange(0.0, L)
z_analytic = 0.5 * (uplift_rate / D) * (L * x_analytic - x_analytic * x_analytic)
plt.plot(rmg.x_of_node, z, '.')
plt.plot(x_analytic, z_analytic, 'r')
```
Hey, hang on a minute, that's not a very good fit! What's going on?
Turns out our 2D hillslope isn't as tall as the idealized 1D profile because of the boundary conditions: with soil free to flow east and west as well as north and south, the crest ends up lower than it would be if it were perfectly symmetrical in one direction.
So let's try re-running the numerical model, but this time with the north and south boundaries closed so that the hill shape becomes uniform in the $y$ direction:
```
rmg = RasterModelGrid((40, 40), 10)
z = rmg.add_zeros('topographic__elevation', at='node')
rmg.set_closed_boundaries_at_grid_edges(False, True, False, True) # closed on N and S
ld = LinearDiffuser(rmg, linear_diffusivity=D)
for i in range(4000):
ld.run_one_step(dt)
z[rmg.core_nodes] += dt * uplift_rate
imshow_grid(rmg, 'topographic__elevation')
plt.plot(rmg.x_of_node, z, '.')
plt.plot(x_analytic, z_analytic, 'r')
```
That's more like it!
Congratulations on making it to the end of this tutorial!
### Click here for more <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">Landlab tutorials</a>
## Padding Characters around Strings
Let us go through how to pad characters to strings using Spark Functions.
```
%%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/w85C18tvYNA?rel=0&controls=1&showinfo=0" frameborder="0" allowfullscreen></iframe>
```
* We typically pad characters to build fixed length values or records.
* Fixed length values or records are extensively used in mainframe-based systems.
* The length of each field in a fixed-length record is predetermined; if a value is shorter than that length, we pad it with a standard character.
* Numeric fields are typically padded with zeros on the leading (left) side; non-numeric fields are padded with some standard character on the leading or trailing side.
* We use `lpad` to pad a string with a specific character on leading or left side and `rpad` to pad on trailing or right side.
* Both `lpad` and `rpad` take 3 arguments - the column or expression, the desired length, and the character to pad with.
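The semantics can be sketched in plain Python. This is a rough model of Spark's `lpad`/`rpad`, including the behavior of truncating to the target length when the input is already longer:

```python
def lpad(s, length, pad):
    """Rough plain-Python model of Spark's lpad: left-pad to `length`,
    truncating when the input is already longer."""
    if len(s) >= length:
        return s[:length]
    return (pad * length)[: length - len(s)] + s

def rpad(s, length, pad):
    """Rough plain-Python model of Spark's rpad (trailing side)."""
    if len(s) >= length:
        return s[:length]
    return s + (pad * length)[: length - len(s)]

print(lpad("Hello", 10, "-"))   # -----Hello
print(rpad("123", 5, "0"))      # 12300
```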
Let us start spark context for this Notebook so that we can execute the code provided. You can sign up for our [10 node state of the art cluster/labs](https://labs.itversity.com/plans) to learn Spark SQL using our unique integrated LMS.
```
from pyspark.sql import SparkSession
import getpass
username = getpass.getuser()
spark = SparkSession. \
builder. \
config('spark.ui.port', '0'). \
config("spark.sql.warehouse.dir", f"/user/{username}/warehouse"). \
enableHiveSupport(). \
appName(f'{username} | Python - Processing Column Data'). \
master('yarn'). \
getOrCreate()
```
If you are going to use CLIs, you can run Spark SQL using one of these three approaches.
**Using Spark SQL**
```
spark2-sql \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
```
**Using Scala**
```
spark2-shell \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
```
**Using Pyspark**
```
pyspark2 \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
```
### Tasks - Padding Strings
Let us perform simple tasks to understand the syntax of `lpad` or `rpad`.
* Create a Dataframe with a single value and a single column.
* Apply `lpad` to pad "Hello" with `-` to make it 10 characters long.
```
l = [('X',)]
df = spark.createDataFrame(l).toDF("dummy")
from pyspark.sql.functions import lit, lpad
df.select(lpad(lit("Hello"), 10, "-").alias("dummy")).show()
```
* Let’s create the **employees** Dataframe
```
employees = [(1, "Scott", "Tiger", 1000.0,
"united states", "+1 123 456 7890", "123 45 6789"
),
(2, "Henry", "Ford", 1250.0,
"India", "+91 234 567 8901", "456 78 9123"
),
(3, "Nick", "Junior", 750.0,
"united KINGDOM", "+44 111 111 1111", "222 33 4444"
),
(4, "Bill", "Gomes", 1500.0,
"AUSTRALIA", "+61 987 654 3210", "789 12 6118"
)
]
employeesDF = spark.createDataFrame(employees). \
toDF("employee_id", "first_name",
"last_name", "salary",
"nationality", "phone_number",
"ssn"
)
employeesDF.show()
employeesDF.printSchema()
```
* Use the **pad** functions to convert each field to a fixed length and concatenate them. Here are the details for each of the fields.
* Length of the employee_id should be 5 characters and should be padded with zero.
* Length of first_name and last_name should be 10 characters and should be padded with - on the right side.
* Length of salary should be 10 characters and should be padded with zero.
* Length of the nationality should be 15 characters and should be padded with - on the right side.
* Length of the phone_number should be 17 characters and should be padded with - on the right side.
* Length of the ssn can be left as is. It is 11 characters.
* Create a new Dataframe **empFixedDF** with column name **employee**. Preview the data by disabling truncate.
```
from pyspark.sql.functions import lpad, rpad, concat
empFixedDF = employeesDF.select(
concat(
lpad("employee_id", 5, "0"),
rpad("first_name", 10, "-"),
rpad("last_name", 10, "-"),
lpad("salary", 10, "0"),
rpad("nationality", 15, "-"),
rpad("phone_number", 17, "-"),
"ssn"
).alias("employee")
)
empFixedDF.show(truncate=False)
```
# NBA Free throw analysis
Now let's see some of these methods in action on real world data.
I'm not a basketball guru by any means, but I thought it would be fun to see whether we can find players that perform differently in free throws when playing at home versus away.
[Basketballvalue.com](http://basketballvalue.com/downloads.php) has some nice play-by-play data on seasons and playoffs between 2007 and 2012, which we will use for this analysis.
It's not perfect; for example, it only records players' last names, but it will do for the purpose of demonstration.
## Getting data:
- Download and extract the 2007-2012 play-by-play data from http://basketballvalue.com/downloads.php
- Concatenate all text files into file called `raw.data`
- Run the following to extract the free throw data into `free_throws.csv`
```
cat raw.data | ack "Free Throw" | sed -E 's/[0-9]+([A-Z]{3})([A-Z]{3})[[:space:]][0-9]*[[:space:]].?[0-9]{2}:[0-9]{2}:[0-9]{2}[[:space:]]*\[([A-z]{3}).*\][[:space:]](.*)[[:space:]]Free Throw.*(d|\))/\1,\2,\3,\4,\5/ ; s/(.*)d$/\10/ ; s/(.*)\)$/\11/' > free_throws.csv
```
```
from __future__ import division
import pandas as pd
import numpy as np
import scipy as sp
import scipy.stats
import toyplot as tp
```
## Data munging
Because only last name is included, we analyze "player-team" combinations to avoid duplicates.
This could mean that the same player has multiple rows if he changed teams.
```
df = pd.read_csv('free_throws.csv', names=["away", "home", "team", "player", "score"])
df["at_home"] = df["home"] == df["team"]
df.head()
```
## Overall free throw%
We note that the free throw percentage is slightly higher at home, but there is not much difference.
```
df.groupby("at_home").mean()
```
## Aggregating to player level
We use a pivot table to get statistics on every player.
```
sdf = pd.pivot_table(df, index=["player", "team"], columns="at_home", values=["score"],
aggfunc=[len, sum], fill_value=0).reset_index()
sdf.columns = ['player', 'team', 'atm_away', 'atm_home', 'score_away', 'score_home']
sdf['atm_total'] = sdf['atm_away'] + sdf['atm_home']
sdf['score_total'] = sdf['score_away'] + sdf['score_home']
sdf.sample(10)
```
## Individual tests
For each player, we assume each free throw is an independent draw from a Bernoulli distribution with probability $p_{ij}$ of succeeding where $i$ denotes the player and $j=\{a, h\}$ denoting away or home, respectively.
Our null hypotheses are that there is no difference between playing at home and away, versus the alternative that there is a difference.
While you could argue a one-sided test for home advantage is also appropriate, I am sticking with a two-sided test.
$$\begin{aligned}
H_{0, i}&: p_{i, a} = p_{i, h},\\
H_{1, i}&: p_{i, a} \neq p_{i, h}.
\end{aligned}$$
To get test statistics, we conduct a simple two-sample proportions test, where our test statistic is:
$$Z = \frac{\hat p_h - \hat p_a}{\sqrt{\hat p (1-\hat p) (\frac{1}{n_h} + \frac{1}{n_a})}}$$
where
- $n_h$ and $n_a$ are the number of attempts at home and away, respectively
- $X_h$ and $X_a$ are the number of free throws made at home and away
- $\hat p_h = X_h / n_h$ is the MLE for the free throw percentage at home
- likewise, $\hat p_a = X_a / n_a$ for away ft%
- $\hat p = \frac{X_h + X_a}{n_h + n_a}$ is the MLE for overall ft%, used for the pooled variance estimator
Then we know from Stats 101 that $Z \sim N(0, 1)$ under the null hypothesis that there is no difference in free throw percentages.
For a normal approximation to hold, we need $np > 5$ and $n(1-p) > 5$, since $p \approx 0.75$, let's be a little conservative and say we need at least 50 samples for a player to get a good normal approximation.
This leads to data on 936 players, and for each one we compute Z, and the corresponding p-value.
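Stripped of the dataframe plumbing, the test statistic can be packaged as a small standalone helper (the counts below are hypothetical, for illustration only):

```python
import numpy as np
from scipy import stats

def two_proportion_ztest(x_h, n_h, x_a, n_a):
    """Pooled two-sample proportions z-test as defined above; returns (z, two-sided p)."""
    p_h, p_a = x_h / n_h, x_a / n_a
    p_pool = (x_h + x_a) / (n_h + n_a)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_h + 1 / n_a))
    z = (p_h - p_a) / se
    return z, 2 * (1 - stats.norm.cdf(abs(z)))

# hypothetical player: 400/500 made at home vs 370/500 away
z, p = two_proportion_ztest(400, 500, 370, 500)
print(round(z, 2), round(p, 3))   # z ≈ 2.25, p ≈ 0.024
```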
```
data = sdf.query('atm_total > 50').copy()
len(data)
data['p_home'] = data['score_home'] / data['atm_home']
data['p_away'] = data['score_away'] / data['atm_away']
data['p_ovr'] = (data['score_total']) / (data['atm_total'])
# two-sided
data['zval'] = (data['p_home'] - data['p_away']) / np.sqrt(data['p_ovr'] * (1-data['p_ovr']) * (1/data['atm_away'] + 1/data['atm_home']))
data['pval'] = 2*(1-sp.stats.norm.cdf(np.abs(data['zval'])))
# one-sided testing home advantage
# data['zval'] = (data['p_home'] - data['p_away']) / np.sqrt(data['p_ovr'] * (1-data['p_ovr']) * (1/data['atm_away'] + 1/data['atm_home']))
# data['pval'] = (1-sp.stats.norm.cdf(data['zval']))
data.sample(10)
canvas = tp.Canvas(800, 300)
ax1 = canvas.axes(grid=(1, 2, 0), label="Histogram p-values")
hist_p = ax1.bars(np.histogram(data["pval"], bins=50, density=True), color="steelblue")
hisp_p_density = ax1.plot([0, 1], [1, 1], color="red")
ax2 = canvas.axes(grid=(1, 2, 1), label="Histogram z-values")
hist_z = ax2.bars(np.histogram(data["zval"], bins=50, density=True), color="orange")
x = np.linspace(-3, 3, 200)
hisp_z_density = ax2.plot(x, sp.stats.norm.pdf(x), color="red")
```
# Global tests
We can test the global null hypothesis, that is, there is no difference in free throw % between playing at home and away for any player using both Fisher's Combination Test and the Bonferroni method.
Which one is preferred in this case? I would expect to see many small difference in effects rather than a few players showing huge effects, so Fisher's Combination Test probably has much better power.
## Fisher's combination test
We expect this test to have good power: if there is a difference between playing at home and away we would expect to see a lot of little effects.
```
T = -2 * np.sum(np.log(data["pval"]))
print('p-value for Fisher Combination Test: {:.3e}'.format(1 - sp.stats.chi2.cdf(T, 2*len(data))))
```
## Bonferroni's method
Theory would suggest this test has a lot less power: it's unlikely that there are a few players where the difference is relatively huge.
```
print('"p-value" Bonferroni: {:.3e}'.format(min(1, data["pval"].min() * len(data))))
```
## Conclusion
Indeed, we find a small p-value for Fisher's Combination Test, while Bonferroni's method does not reject the null hypothesis.
In fact, if we multiply the smallest p-value by the number of hypotheses, we get a number larger than 1, so we aren't even remotely close to any significance.
# Multiple tests
So there definitely seems some evidence that there is a difference in performance.
If you tell a sports analyst that there is evidence that at least some players perform differently away versus at home, their first question will be: "So who is it?"
Let's see if we can properly answer that question.
## Naive method
Let's first test each null hypothesis ignoring the fact that we are dealing with many hypotheses. Please don't do this at home!
```
alpha = 0.05
data["reject_naive"] = 1*(data["pval"] < alpha)
print('Number of rejections: {}'.format(data["reject_naive"].sum()))
```
If we don't correct for multiple comparisons, there are actually 65 "significant" results (at $\alpha = 0.05$), which corresponds to about 7% of the players.
We expect around 46 rejections by chance, so it's a bit more than expected, but this is a bogus method so no matter what, we should discard the results.
## Bonferroni correction
Let's do it the proper way though, first using Bonferroni correction.
Since this method is basically the same as the Bonferroni global test, we expect no rejections:
```
from statsmodels.stats.multitest import multipletests
data["reject_bc"] = 1*(data["pval"] < alpha / len(data))
print('Number of rejections: {}'.format(data["reject_bc"].sum()))
```
Indeed, no rejections.
## Benjamini-Hochberg
Let's also try the BHq procedure, which has a bit more power than Bonferroni.
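For intuition, the mechanics of BHq are simple enough to sketch from scratch before reaching for the library version: sort the p-values, compare the i-th smallest to $(i/m)\,q$, and reject everything up to the largest p-value that falls under that line. The p-values below are toy numbers for illustration.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.1):
    """Boolean rejection mask controlling the FDR at level q (from-scratch sketch)."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = q * np.arange(1, m + 1) / m
    below = pvals[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()       # largest sorted index under the line
        reject[order[: k + 1]] = True        # reject it and every smaller p-value
    return reject

pv = [0.001, 0.008, 0.039, 0.041, 0.32, 0.54, 0.76, 0.89]
mask = benjamini_hochberg(pv, q=0.1)
print(mask.sum())   # rejects the four smallest p-values
```

Note the "step-up" behavior: 0.039 is above its own threshold, but is still rejected because 0.041 falls below the line at a larger index.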
```
is_reject, corrected_pvals, _, _ = multipletests(data["pval"], alpha=0.1, method='fdr_bh')
data["reject_fdr"] = 1*is_reject
data["pval_fdr"] = corrected_pvals
print('Number of rejections: {}'.format(data["reject_fdr"].sum()))
```
Even though the BHq procedure has more power, we can't reject any of the individual hypothesis, hence we don't find sufficient evidence for any of the players that free throw performance is affected by location.
# Taking a step back
If we take a step back and take another look at our data, we quickly find that we shouldn't be surprised with our results.
In particular, our tests are clearly underpowered.
That is, the probability of rejecting the null hypothesis when there is a true effect is very small given the effect sizes that are reasonable.
While there are definitely sophisticated approaches to power analysis, we can use a [simple tool](http://statpages.org/proppowr.html) to get a rough estimate.
The free throw percentage is around 75%, and at that level it takes almost 2500 total attempts to detect a difference in ft% of 5% ($\alpha = 0.05$, power $= 0.8$), and 5% is a pretty remarkable difference for a home/away split.
For most players, the observed difference is not even close to 5%, and we have only 11 players in our dataset with more than 2500 free throws.
To have any hope to detect effects for those few players that have plenty of data, the worst thing one can do is throw in a bunch of powerless tests.
It would have been much better to restrict our analysis to players where we have a lot of data.
Don't worry, I've already done that and again we cannot reject a single hypothesis.
So unfortunately it seems we won't be impressing our friends with cool results, more likely we will be the annoying person pointing out the fancy stats during a game don't really mean anything.
There is one cool take-away though: Fisher's combination test did reject the global null hypothesis even though each single test had almost no power, combined they did yield a significant result.
If we aggregate the data across all players first and then conduct a single test of proportions, it turns out we cannot reject that hypothesis.
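As a rough sanity check on the "almost 2500 attempts" figure, the standard (uncorrected) two-sample proportions sample-size formula lands in the same ballpark; the online calculator's slightly larger number likely reflects a continuity correction. The ft% values, $\alpha$, and power below mirror the ones quoted above.

```python
import numpy as np
from scipy import stats

# n per group ≈ (z_{alpha/2} + z_beta)^2 * (p1*q1 + p2*q2) / (p1 - p2)^2
p1, p2 = 0.75, 0.80              # away vs home ft%: a "large" 5% difference
alpha, power = 0.05, 0.80
z_a = stats.norm.ppf(1 - alpha / 2)
z_b = stats.norm.ppf(power)
n_per_group = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
print(round(n_per_group), round(2 * n_per_group))   # roughly 1100 per group, 2200 total
```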
```
len(data.query("atm_total > 2500"))
reduced_data = data.query("atm_total > 2500").copy()
is_reject2, corrected_pvals2, _, _ = multipletests(reduced_data["pval"], alpha=0.1, method='fdr_bh')
reduced_data["reject_fdr2"] = 1*is_reject2
reduced_data["pval_fdr2"] = corrected_pvals2
print('Number of rejections: {}'.format(reduced_data["reject_fdr2"].sum()))
```
# Noise Detection Algorithm
The data presented are measurements of a Gaussian beam for varying beam frequencies and distances. Due to technical difficulties, our measuring device would sometimes crash and provide us with completely noisy data, or data that was only half complete. Our intent was to automate the measuring process by making the lab computer evaluate the data automatically. In the case of faulty data, it was supposed to restart the measurement.
# Importing packages
```
# general
import numpy as np
import matplotlib.pyplot as plt
import re # to extract number in name
from pathlib import Path # to list data files in a directory
import pandas as pd # data frames
from mpl_toolkits.axes_grid1 import make_axes_locatable #adjust colorbars to axis
from tkinter import Tcl # sort file names
# machine learning
# split data
from sklearn.model_selection import train_test_split
# process data
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
# classification algorithm
from sklearn.neighbors import KNeighborsClassifier
# pipeline
from sklearn.pipeline import Pipeline
# model evaluation
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
from sklearn.metrics import confusion_matrix
```
# Importing and processing Data
## Importing samples
```
p=Path('.')
# list(p.glob('./*.dat')) finds all ".dat" files in the given directory
paths=list([x for x in p.iterdir() if x.is_dir() and x.name=='Measurements'][0].glob('./*.dat')) #use ** to also include subdirectories
#remember to change name if directory name is changed
# generate lists
path_names=list(map(lambda x: x.name, paths)) # get path names
# extract frequencies
pattern=re.compile(r"(\d+)")
freq_list=np.array([pattern.match(x).groups()[0] for x in path_names if x.startswith("Noise")==False]).astype(int)
freq_list=np.sort(freq_list)
# sort paths
sorted_path_names=np.array(Tcl().call('lsort', '-dict', path_names)) # sorted path names
# hack for sorting lists from short to long (this should be the default)
duplicate_freqs=[x for x in np.unique(freq_list) if list(freq_list).count(x)!=1]
for duplis in duplicate_freqs:
duplicate_indices=np.where(freq_list==duplis)
for index in duplicate_indices[0][::-1][:-1]:
backup=np.copy(sorted_path_names[index-1])
sorted_path_names[index-1]=sorted_path_names[index]
sorted_path_names[index]=backup
# Sort Noise-data
for index in np.where(np.array(list(map(lambda x:x.startswith("Noise"),sorted_path_names)))==True)[0][::-1][:-1]:
backup=np.copy(sorted_path_names[index-1])
sorted_path_names[index-1]=sorted_path_names[index]
sorted_path_names[index]=backup
# sort
_,paths=zip(*sorted(zip([list(sorted_path_names).index(path_name) for path_name in path_names], paths)))
path_names=np.copy(sorted_path_names)
del sorted_path_names
#remember to change name if directory name is changed
data_dict={path.name: np.genfromtxt(path,skip_header=1)[:,2] for path in paths}
# add images
size=int(np.sqrt(len(data_dict[path_names[0]])))
data_dict["images"]=[data_dict[name].reshape((size,size)) for name in path_names]
len(data_dict["images"])
```
## Assigning Features
```
# targets
# key targets 0=noise, 1=okay data, 2= good data
three_targets={
'70GHz.dat':2,'70GHz-1.dat':2,
'70GHz-2.dat':0,'75GHz.dat':2,
'75GHz-1.dat':2,'80GHz.dat':2,
'80GHz-1.dat':2,'85GHz.dat':2,
'85GHz-1.dat':2,'85GHz-2.dat':0,
'85GHz-3.dat':2,'85GHz-4.dat':0,
'85GHz-5.dat':0,'85GHz-6.dat':0,
'85GHz-7.dat':2,'85GHz-8.dat':1,
'85GHz-9.dat':1,'85GHz-10.dat':2,
'90GHz.dat':2,'90GHz-1.dat':2,
'90GHz-2.dat':2,'90GHz-3.dat':1,
'95GHz.dat':2,'95GHz-1.dat':2,
'95GHz-2.dat':2,'95GHz-3.dat':0,
'95GHz-4.dat':2,'95GHz-5.dat':2,
'95GHz-6.dat':2,'95GHz-7.dat':1,
'95GHz-8.dat':2,'95GHz-9.dat':0,
'95GHz-10.dat':0,'95GHz-11.dat':0,
'100GHz.dat':2,'100GHz-1.dat':2,
'100GHz-2.dat':0,'105GHz.dat':2,
'105GHz-1.dat':2,'105GHz-2.dat':0,
'110GHz.dat':0,'110GHz-1.dat':0,
'110GHz-2.dat':0,'Noise.dat':0,
'Noise-1.dat':0
}
```
## Forming Dictionary out of Data
```
# Create dictionary out of data
y=np.zeros(len(three_targets)).astype(int)
X=np.zeros((len(three_targets),len(data_dict[path_names[0]])))
size=int(np.sqrt(len(data_dict[path_names[0]])))
names=len(three_targets)*[""]
# counter
count=0
# format data correctly
for name in path_names:
if name in three_targets:
names[count]=name
y[count]=three_targets[name]
X[count,:]=data_dict[name]
count+=1
else:
print("{} --- not yet labeled".format(name))
beam_data={"data_names":names,"data":X,"target":y,
"target_names":["full noise","half_noise","no noise"],"images":[x.reshape((size,size)) for x in X]}
```
## Plotting all data
```
#all data
rows=int(len(path_names) / 3) + (len(path_names) % 3 > 0) # how many rows
# plot
fig = plt.figure(figsize=(18, 5*rows))
print("Yet to be labeled:")
for i,name in enumerate(path_names[30:]):
boolean=False # check if something has to be labeled:
ax=plt.subplot(rows,3,i+1)
ax.set_axis_off() # hide axis
im=ax.imshow(data_dict["images"][list(data_dict.keys()).index(name)],cmap='jet', interpolation='nearest')
if name in beam_data["data_names"]:
ax.set_title("name: {} \n labeled: {}".format(name[:-4],beam_data["target_names"][beam_data["target"][list(beam_data["data_names"]).index(name)]]),fontsize=14)
else:
boolean=True
ax.set_title("name: {}\n not labeled yet".format(name),color="red")
print("'{}'".format(name))
#adjust colorbar to plot
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.1)
cbar=plt.colorbar(im, cax=cax)
cbar.ax.set_ylabel('intensity',fontsize=15)
if boolean==False: print("nothing")
plt.tight_layout()
# for export
#plt.savefig("Images/ml-databatch-3.png")
```
# Feature Engineering
## rescaling
```
def rescale_local_max(X):
return [X[i]/maxi for i,maxi in enumerate(np.amax(X,axis=1))]
```
## rotating for symmetrical prediction (generalization)
In this dataset, the transition to half_noise always occurs from no_noise in the top region to full_noise in the bottom region. To generalize the algorithm to other problems, the half-noise data will now be rotated (note that this may produce overly optimistic evaluations of model predictions for this class).
For even further generalization, all data in the 2D representation could be repositioned to have their peak in the center. That way, shifts in the curve location would not need to be learned by the model, which could improve not only generalization but also overall performance.
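The recentering idea is not implemented below, but a minimal sketch might look like this. It uses a periodic `np.roll`, which assumes wrap-around artifacts are acceptable; real beam maps might call for padding instead.

```python
import numpy as np

def center_peak(img):
    """Shift a 2D map so its maximum lands in the center cell (periodic roll)."""
    rows, cols = img.shape
    r, c = np.unravel_index(np.argmax(img), img.shape)
    return np.roll(img, (rows // 2 - r, cols // 2 - c), axis=(0, 1))

demo = np.zeros((5, 5))
demo[0, 1] = 1.0                                   # off-center peak
centered = center_peak(demo)
r, c = np.unravel_index(np.argmax(centered), centered.shape)
print(int(r), int(c))                              # 2 2
```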
```
# rescale
X_total=np.array(rescale_local_max(beam_data["data"]))
y_total=np.array(beam_data["target"])
# take half data
X_half_noise=np.array(X_total)[np.where(beam_data["target"]==1)]
# X_data in total
X_added=np.zeros((X_half_noise.shape[0]*3,X_half_noise.shape[1]))
for i in range(X_half_noise.shape[0]):
X_added[3*i,:]=np.rot90(X_half_noise[i].reshape((size,size))).flatten()
X_added[3*i+1,:]=np.rot90(X_added[3*i,:].reshape((size,size))).flatten()
X_added[3*i+2,:]=np.rot90(X_added[3*i+1,:].reshape((size,size))).flatten()
X_total=np.vstack((X_total,X_added))
y_total=np.concatenate((beam_data["target"],[1]*X_half_noise.shape[0]*3))
# provide example of rotated data
fig=plt.figure(figsize=(10,2.5))
ax=plt.subplot(1,4,1)
ax.imshow(X_half_noise[3,:].reshape((size,size)),cmap="jet")
ax.set_title("original")
ax.set_axis_off() # hide axis
for i in range(3):
ax=plt.subplot(1,4,i+2)
ax.imshow(X_added[9+i,:].reshape((size,size)),cmap="jet")
ax.set_title("rotated by {}°".format(90*(i+1)))
ax.set_axis_off() # hide axis
plt.tight_layout()
plt.savefig("Images/rotated-maps.png")
# split sets
X_train, X_test, y_train, y_test = train_test_split(X_total,y_total, stratify=y_total, random_state=0)
```
# Machine Learning (applying PCA + knn)
```
np.array(X_train).shape
```
In this application we have few samples (around 40 in the training set) and many features (441). To reduce the number of features, we apply Principal Component Analysis (PCA) to reduce the dimension of the feature space (here from 441 to 2) and thereby improve the performance of our algorithm. We use the k-nearest-neighbors classifier (knn), as it performs particularly well on small sample sizes.
## PCA and its rescaling results
```
# scale to mean=0, std=1
scaler = StandardScaler()
# fit scaling
scaler.fit(X_train)
# apply scaling
scaled_X_train=scaler.transform(X_train)
# n_components=amount of principal components
pca = PCA(n_components=2) # n_components=0.95 alternatively
# fit PCA model to the beam data
pca.fit(scaled_X_train)
# transform
pca_X_train = pca.transform(scaled_X_train)
# Create data
# PCA on train data
g0 = pca_X_train[np.where(y_train==0)]
g1 = pca_X_train[np.where(y_train==1)]
g2 = pca_X_train[np.where(y_train==2)]
train_data = (g0, g1, g2)
colors = ("red","orange","blue")
groups = ("full_noise", "half_noise","no_noise")
train_marker=("o")
# PCA on test data
# transform
scaled_X_test=scaler.transform(X_test)
pca_X_test = pca.transform(scaled_X_test)
# # # # #
h0 = pca_X_test[np.where(y_test==0)]
h1 = pca_X_test[np.where(y_test==1)]
h2 = pca_X_test[np.where(y_test==2)]
test_data = (h0, h1, h2)
test_marker=("^")
# Create plot
fig = plt.figure(figsize=(7,7))
ax = fig.add_subplot(1, 1, 1, )
# plot train-transform
for data, color, group in zip(train_data, colors, groups):
x =data[:,0]
y =data[:,1]
ax.scatter(x, y, alpha=0.7, c=color, edgecolors='black', s=50, label="train:"+group, marker=train_marker)
# plot test-transform
for data, color, group in zip(test_data, colors, groups):
x =data[:,0]
y =data[:,1]
ax.scatter(x, y, alpha=0.7, c=color, edgecolors='black', s=100, label="test:"+group, marker=test_marker)
# labels
#plt.title('PCA-transformed plot')
plt.xlabel("prinicipal component Nr.1",fontsize=15)
plt.ylabel("principal component Nr.2",fontsize=15)
plt.legend(loc="best")
plt.tight_layout()
plt.grid(True)
plt.savefig("Images/PCA_map.png", bbox_inches='tight')
plt.show()
# plot components
fig, axes = plt.subplots(1, 2,figsize=(15,6))
for i, (component, ax) in enumerate(zip(pca.components_, axes.ravel())):
im=ax.imshow(component.reshape((size,size)),cmap='jet',interpolation="nearest")
ax.set_title("component Nr.{}".format(i+1),fontsize=30)
ax.set_xlabel("pixel in x-direction",fontsize=25)
ax.set_ylabel("pixel in y-direction",fontsize=25)
fig.colorbar(im, ax=ax, fraction=0.046, pad=0.04)
#fig.suptitle('PCA component weights', fontsize=20)
plt.tight_layout()
plt.savefig("Images/PCA-componenets.png", bbox_inches='tight')
```
## Creating and fitting pipeline
Because of the small sample size, no parameter optimization is conducted here (splitting off another validation set would shrink the already small test set).
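For reference, if more data were available, hyperparameters could be tuned with a grid search over the pipeline steps. A standalone sketch on the iris data (the grid values below are illustrative assumptions, not tuned choices):

```python
# Hypothetical sketch: tuning a scaler+PCA+KNN pipeline with GridSearchCV.
from sklearn.datasets import load_iris
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

pipe = Pipeline([("scaler", StandardScaler()),
                 ("component_analyzer", PCA(n_components=2)),
                 ("classifier", KNeighborsClassifier())])

# Pipeline parameters are addressed as "<step name>__<parameter>".
param_grid = {"component_analyzer__n_components": [2, 3],
              "classifier__n_neighbors": [1, 3, 5]}

grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(X, y)
print(grid.best_params_)
```

With enough samples, the inner cross-validation makes this a fair way to pick `n_neighbors` and the number of components without touching the test set.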
```
# create pipeline
pipe = Pipeline([("scaler", StandardScaler()), ("component_analyzer", PCA(n_components=2)),
                 ("classifier", KNeighborsClassifier(n_neighbors=1))])
# fit the pipeline
pipe.fit(X_train,y_train)
```
## Algorithm performance
### General evaluation via stratified KFold cross validation
Stratification makes sure that every class is represented in each fold in proportion to its frequency. `shuffle=True` shuffles the data before splitting, which only matters when the data is sorted.
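To see the stratification at work, here is a standalone sketch on synthetic, imbalanced labels (not the beam data):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Synthetic, imbalanced labels: 80 of class 0, 20 of class 1.
y = np.array([0] * 80 + [1] * 20)
X = np.zeros((100, 1))  # features are irrelevant for the split itself

kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in kfold.split(X, y):
    # Each test fold keeps the 80/20 class ratio: 16 zeros, 4 ones.
    print(np.bincount(y[test_idx]))
```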
```
kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print("Cross-validation scores:\n{}".format(
cross_val_score(pipe, X_total, y_total, cv=kfold)))
```
### confusion matrix: how test samples were classified
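As a reminder of scikit-learn's convention (rows are true labels, columns are predictions), here is a tiny standalone example:

```python
from sklearn.metrics import confusion_matrix

# Made-up labels for three classes.
y_true = [0, 0, 1, 1, 2]
y_pred = [0, 1, 1, 1, 2]
# Row i, column j counts samples with true label i predicted as class j.
print(confusion_matrix(y_true, y_pred))
# The single off-diagonal entry is the one class-0 sample predicted as class 1.
```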
```
# cross-validation confusion matrix
splits=list(kfold.split(X_total,y_total))
for i,split in enumerate(splits):
    pipe.fit(X_total[split[0]],y_total[split[0]])
    # confusion_matrix expects (y_true, y_pred)
    print("Split Nr.{}:\n{}\n".format(i,confusion_matrix(y_total[split[1]],pipe.predict(X_total[split[1]]))))
# show incorrectly classified beam map
i=2 # index with non-diagonal confusion matrix
pipe.fit(X_total[splits[i][0]],y_total[splits[i][0]])
false_pred=pipe.predict(X_total[splits[i][1]])[pipe.predict(X_total[splits[i][1]])!=y_total[splits[i][1]]][0]
beam_data["target_names"][false_pred]
# index
index=splits[i][1][np.where(pipe.predict(X_total[splits[i][1]])!=y_total[splits[i][1]])]
# plot
plt.figure(figsize=(5,5.5))
ax=plt.subplot(1,1,1)
ax.set_axis_off() # hide axis
ax.set_title("target= {}\npredicted= {}".format(beam_data["target_names"][y_total[index][0]],
beam_data["target_names"][false_pred]),
fontsize=25)
im=ax.imshow(X_total[index].reshape((size,size)),
             cmap="jet")
#adjust colorbar to plot
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.1)
cbar=plt.colorbar(im, cax=cax)
cbar.ax.set_ylabel('intensity',fontsize=15)
plt.tight_layout()
plt.savefig("Images/false-prediction.png", bbox_inches='tight')
```
## Evaluation of a single split
### predict test set
```
predictions=pipe.predict(X_test)
print(predictions)
print("confusion matrix:\n{}".format(confusion_matrix(y_test,predictions)))
```
### falsely classified in test data
```
# collect indices
index=[]
predictions=pipe.predict(X_test)
for i in range(len(y_test)):
    if y_test[i]!=predictions[i]:
        print(y_test[i],predictions[i])
        index.append(i)
# show false predictions in test_data
rows=int(len(index) / 3) + (len(index) % 3 > 0) # how many rows
# plot
fig = plt.figure(figsize=(20, 5*rows+1))
for i,ind in enumerate(index):
    ax=plt.subplot(rows,3,i+1)
    ax.set_axis_off() # hide axis (after creating the subplot, not before)
    im=ax.imshow(X_test[ind].reshape((size,size)),cmap='jet', interpolation='nearest')
    ax.set_title("pred: {},\n correct: {}".format(beam_data["target_names"][predictions[ind]],
                                                  beam_data["target_names"][y_test[ind]]))
    # adjust colorbar to plot
    divider = make_axes_locatable(ax)
    cax = divider.append_axes("right", size="5%", pad=0.1)
    cbar=plt.colorbar(im, cax=cax)
    cbar.ax.set_ylabel('intensity',fontsize=15)
```
### falsely classified in train data (uninformative here: with kNN `n_neighbors=1`, training accuracy is trivially perfect)
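Why it is uninformative: with `n_neighbors=1`, every training point is its own nearest neighbor (assuming no duplicate points with conflicting labels), so the model memorizes the training labels. A standalone demonstration on synthetic data:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)
X = rng.normal(size=(30, 2))
y = rng.randint(0, 3, size=30)  # even random labels are memorized

knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)
# Each training sample is its own nearest neighbor -> perfect recall of labels.
print(knn.score(X, y))
```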
```
# collect indices
index=[]
predictions=pipe.predict(X_train)
for i in range(len(y_train)):
    if y_train[i]!=predictions[i]:
        print((y_train[i],predictions[i]))
        index.append(i)
# show false predictions in train data
rows=int(len(index) / 3) + (len(index) % 3 > 0) # how many rows
# plot
fig = plt.figure(figsize=(20, 5*rows+1))
for i,ind in enumerate(index):
    ax=plt.subplot(rows,3,i+1)
    ax.set_axis_off() # hide axis (after creating the subplot, not before)
    im=ax.imshow(X_train[ind].reshape((size,size)),cmap='jet', interpolation='nearest')
    ax.set_title("pred: {},\n correct: {}".format(beam_data["target_names"][predictions[ind]],
                                                  beam_data["target_names"][y_train[ind]]))
    # adjust colorbar to plot
    divider = make_axes_locatable(ax)
    cax = divider.append_axes("right", size="5%", pad=0.1)
    cbar=plt.colorbar(im, cax=cax)
    cbar.ax.set_ylabel('intensity',fontsize=15)
```
# Additional plots for thesis
## 3 categories
```
names=["75GHz.dat","105GHz-2.dat","85GHz-8.dat"]
indici=[beam_data["data_names"].index(name) for name in names]
maps=[beam_data["images"][ind] for ind in indici]
fig=plt.figure(figsize=(11,8))
for i,map_i in enumerate(maps):
    ax=plt.subplot(1,3,i+1)
    ax.set_axis_off() # hide axis
    ax.set_title("{}\n labeled:{}".format(names[i],beam_data["target_names"][beam_data["target"][indici][i]]))
    # plot
    im=ax.imshow(map_i,cmap="jet")
    # adjust colorbar to plot
    divider = make_axes_locatable(ax)
    cax = divider.append_axes("right", size="5%", pad=0.1)
    cbar=plt.colorbar(im, cax=cax)
    cbar.ax.set_ylabel('intensity',fontsize=15)
plt.tight_layout()
plt.savefig("Images/3-categories.png", bbox_inches='tight')
```
## questionable labels
```
names=["85GHz.dat","85GHz-1.dat","95GHz-5.dat"]
indici=[beam_data["data_names"].index(name) for name in names]
maps=[beam_data["images"][ind] for ind in indici]
fig=plt.figure(figsize=(11,8))
for i,map_i in enumerate(maps):
    ax=plt.subplot(1,3,i+1)
    ax.set_axis_off() # hide axis
    ax.set_title("{}\n labeled:{}".format(names[i],beam_data["target_names"][beam_data["target"][indici][i]]))
    # plot
    im=ax.imshow(map_i,cmap="jet")
    # adjust colorbar to plot
    divider = make_axes_locatable(ax)
    cax = divider.append_axes("right", size="5%", pad=0.1)
    cbar=plt.colorbar(im, cax=cax)
    cbar.ax.set_ylabel('intensity',fontsize=15)
plt.tight_layout()
plt.savefig("Images/questionable-labels.png", bbox_inches='tight')
```
# Computer Vision Nanodegree
## Project: Image Captioning
---
In this notebook, you will train your CNN-RNN model.
You are welcome and encouraged to try out many different architectures and hyperparameters when searching for a good model.
This does have the potential to make the project quite messy! Before submitting your project, make sure that you clean up:
- the code you write in this notebook. The notebook should describe how to train a single CNN-RNN architecture, corresponding to your final choice of hyperparameters. You should structure the notebook so that the reviewer can replicate your results by running the code in this notebook.
- the output of the code cell in **Step 2**. The output should show the output obtained when training the model from scratch.
This notebook **will be graded**.
Feel free to use the links below to navigate the notebook:
- [Step 1](#step1): Training Setup
- [Step 2](#step2): Train your Model
- [Step 3](#step3): (Optional) Validate your Model
<a id='step1'></a>
## Step 1: Training Setup
In this step of the notebook, you will customize the training of your CNN-RNN model by specifying hyperparameters and setting other options that are important to the training procedure. The values you set now will be used when training your model in **Step 2** below.
You should only amend blocks of code that are preceded by a `TODO` statement. **Any code blocks that are not preceded by a `TODO` statement should not be modified**.
### Task #1
Begin by setting the following variables:
- `batch_size` - the batch size of each training batch. It is the number of image-caption pairs used to amend the model weights in each training step.
- `vocab_threshold` - the minimum word count threshold. Note that a larger threshold will result in a smaller vocabulary, whereas a smaller threshold will include rarer words and result in a larger vocabulary.
- `vocab_from_file` - a Boolean that decides whether to load the vocabulary from file.
- `embed_size` - the dimensionality of the image and word embeddings.
- `hidden_size` - the number of features in the hidden state of the RNN decoder.
- `num_epochs` - the number of epochs to train the model. We recommend that you set `num_epochs=3`, but feel free to increase or decrease this number as you wish. [This paper](https://arxiv.org/pdf/1502.03044.pdf) trained a captioning model on a single state-of-the-art GPU for 3 days, but you'll soon see that you can get reasonable results in a matter of a few hours! (_But of course, if you want your model to compete with current research, you will have to train for much longer._)
- `save_every` - determines how often to save the model weights. We recommend that you set `save_every=1`, to save the model weights after each epoch. This way, after the `i`th epoch, the encoder and decoder weights will be saved in the `models/` folder as `encoder-i.pkl` and `decoder-i.pkl`, respectively.
- `print_every` - determines how often to print the batch loss to the Jupyter notebook while training. Note that you **will not** observe a monotonic decrease in the loss function while training - this is perfectly fine and completely expected! You are encouraged to keep this at its default value of `100` to avoid clogging the notebook, but feel free to change it.
- `log_file` - the name of the text file containing - for every step - how the loss and perplexity evolved during training.
If you're not sure where to begin to set some of the values above, you can peruse [this paper](https://arxiv.org/pdf/1502.03044.pdf) and [this paper](https://arxiv.org/pdf/1411.4555.pdf) for useful guidance! **To avoid spending too long on this notebook**, you are encouraged to consult these suggested research papers to obtain a strong initial guess for which hyperparameters are likely to work best. Then, train a single model, and proceed to the next notebook (**3_Inference.ipynb**). If you are unhappy with your performance, you can return to this notebook to tweak the hyperparameters (and/or the architecture in **model.py**) and re-train your model.
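As one concrete starting point (the values below are illustrative assumptions inspired by the papers above, not graded answers):

```python
# Illustrative starting values -- assumptions, not required settings.
batch_size = 64          # common default for captioning on a single GPU
vocab_threshold = 5      # words seen fewer than 5 times map to <unk>
vocab_from_file = False  # build the vocabulary on the first run
embed_size = 256         # embedding dimensionality in the Show-and-Tell range
hidden_size = 512        # number of features in the LSTM hidden state
```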
### Question 1
**Question:** Describe your CNN-RNN architecture in detail. With this architecture in mind, how did you select the values of the variables in Task 1? If you consulted a research paper detailing a successful implementation of an image captioning model, please provide the reference.
**Answer:**
### (Optional) Task #2
Note that we have provided a recommended image transform `transform_train` for pre-processing the training images, but you are welcome (and encouraged!) to modify it as you wish. When modifying this transform, keep in mind that:
- the images in the dataset have varying heights and widths, and
- if using a pre-trained model, you must perform the corresponding appropriate normalization.
### Question 2
**Question:** How did you select the transform in `transform_train`? If you left the transform at its provided value, why do you think that it is a good choice for your CNN architecture?
**Answer:**
### Task #3
Next, you will specify a Python list containing the learnable parameters of the model. For instance, if you decide to make all weights in the decoder trainable, but only want to train the weights in the embedding layer of the encoder, then you should set `params` to something like:
```
params = list(decoder.parameters()) + list(encoder.embed.parameters())
```
### Question 3
**Question:** How did you select the trainable parameters of your architecture? Why do you think this is a good choice?
**Answer:**
### Task #4
Finally, you will select an [optimizer](http://pytorch.org/docs/master/optim.html#torch.optim.Optimizer).
### Question 4
**Question:** How did you select the optimizer used to train your model?
**Answer:**
```
import torch
import torch.nn as nn
from torchvision import transforms
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
from data_loader import get_loader
from model import EncoderCNN, DecoderRNN
import math
## TODO #1: Select appropriate values for the Python variables below.
batch_size = ... # batch size
vocab_threshold = ... # minimum word count threshold
vocab_from_file = ... # if True, load existing vocab file
embed_size = ... # dimensionality of image and word embeddings
hidden_size = ... # number of features in hidden state of the RNN decoder
num_epochs = 3 # number of training epochs
save_every = 1 # determines frequency of saving model weights
print_every = 100 # determines window for printing average loss
log_file = 'training_log.txt' # name of file with saved training loss and perplexity
# (Optional) TODO #2: Amend the image transform below.
transform_train = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
# Build data loader.
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
vocab_from_file=vocab_from_file)
# The size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)
# Initialize the encoder and decoder.
encoder = EncoderCNN(embed_size)
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)
# Move models to GPU if CUDA is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
encoder.to(device)
decoder.to(device)
# Define the loss function.
criterion = nn.CrossEntropyLoss().cuda() if torch.cuda.is_available() else nn.CrossEntropyLoss()
# TODO #3: Specify the learnable parameters of the model.
params = ...
# TODO #4: Define the optimizer.
optimizer = ...
# Set the total number of training steps per epoch.
total_step = math.ceil(len(data_loader.dataset.caption_lengths) / data_loader.batch_sampler.batch_size)
```
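Note that `total_step` above is just a ceiling division over the caption count, so the final, smaller batch of each epoch still counts as a step. A quick check with illustrative numbers:

```python
import math

# Hypothetical counts: ~414k training captions in batches of 64.
num_captions, batch_size = 414113, 64
total_step = math.ceil(num_captions / batch_size)
print(total_step)  # 6471: 6470 full batches plus one partial batch of 33
```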
<a id='step2'></a>
## Step 2: Train your Model
Once you have executed the code cell in **Step 1**, the training procedure below should run without issue.
It is completely fine to leave the code cell below as-is without modifications to train your model. However, if you would like to modify the code used to train the model below, you must ensure that your changes are easily parsed by your reviewer. In other words, make sure to provide appropriate comments to describe how your code works!
You may find it useful to load saved weights to resume training. In that case, note the names of the files containing the encoder and decoder weights that you'd like to load (`encoder_file` and `decoder_file`). Then you can load the weights by using the lines below:
```python
# Load pre-trained weights before resuming training.
encoder.load_state_dict(torch.load(os.path.join('./models', encoder_file)))
decoder.load_state_dict(torch.load(os.path.join('./models', decoder_file)))
```
While trying out parameters, make sure to take extensive notes and record the settings that you used in your various training runs. In particular, you don't want to encounter a situation where you've trained a model for several hours but can't remember what settings you used :).
### A Note on Tuning Hyperparameters
To figure out how well your model is doing, you can look at how the training loss and perplexity evolve during training - and for the purposes of this project, you are encouraged to amend the hyperparameters based on this information.
However, this will not tell you if your model is overfitting to the training data, and, unfortunately, overfitting is a problem that is commonly encountered when training image captioning models.
For this project, you need not worry about overfitting. **This project does not have strict requirements regarding the performance of your model**, and you just need to demonstrate that your model has learned **_something_** when you generate captions on the test data. For now, we strongly encourage you to train your model for the suggested 3 epochs without worrying about performance; then, you should immediately transition to the next notebook in the sequence (**3_Inference.ipynb**) to see how your model performs on the test data. If your model needs to be changed, you can come back to this notebook, amend hyperparameters (if necessary), and re-train the model.
That said, if you would like to go above and beyond in this project, you can read about some approaches to minimizing overfitting in section 4.3.1 of [this paper](http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7505636). In the next (optional) step of this notebook, we provide some guidance for assessing the performance on the validation dataset.
```
import torch.utils.data as data
import numpy as np
import os
import requests
import time
# Open the training log file.
f = open(log_file, 'w')
old_time = time.time()
response = requests.request("GET",
"http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token",
headers={"Metadata-Flavor":"Google"})
for epoch in range(1, num_epochs+1):
    for i_step in range(1, total_step+1):
        if time.time() - old_time > 60:
            old_time = time.time()
            requests.request("POST",
                             "https://nebula.udacity.com/api/v1/remote/keep-alive",
                             headers={'Authorization': "STAR " + response.text})
        # Randomly sample a caption length, and sample indices with that length.
        indices = data_loader.dataset.get_train_indices()
        # Create and assign a batch sampler to retrieve a batch with the sampled indices.
        new_sampler = data.sampler.SubsetRandomSampler(indices=indices)
        data_loader.batch_sampler.sampler = new_sampler
        # Obtain the batch.
        images, captions = next(iter(data_loader))
        # Move batch of images and captions to GPU if CUDA is available.
        images = images.to(device)
        captions = captions.to(device)
        # Zero the gradients.
        decoder.zero_grad()
        encoder.zero_grad()
        # Pass the inputs through the CNN-RNN model.
        features = encoder(images)
        outputs = decoder(features, captions)
        # Calculate the batch loss.
        loss = criterion(outputs.view(-1, vocab_size), captions.view(-1))
        # Backward pass.
        loss.backward()
        # Update the parameters in the optimizer.
        optimizer.step()
        # Get training statistics.
        stats = 'Epoch [%d/%d], Step [%d/%d], Loss: %.4f, Perplexity: %5.4f' % (epoch, num_epochs, i_step, total_step, loss.item(), np.exp(loss.item()))
        # Print training statistics (on same line).
        print('\r' + stats, end="")
        sys.stdout.flush()
        # Print training statistics to file.
        f.write(stats + '\n')
        f.flush()
        # Print training statistics (on different line).
        if i_step % print_every == 0:
            print('\r' + stats)
    # Save the weights.
    if epoch % save_every == 0:
        torch.save(decoder.state_dict(), os.path.join('./models', 'decoder-%d.pkl' % epoch))
        torch.save(encoder.state_dict(), os.path.join('./models', 'encoder-%d.pkl' % epoch))
# Close the training log file.
f.close()
```
<a id='step3'></a>
## Step 3: (Optional) Validate your Model
To assess potential overfitting, one approach is to assess performance on a validation set. If you decide to do this **optional** task, you are required to first complete all of the steps in the next notebook in the sequence (**3_Inference.ipynb**); as part of that notebook, you will write and test code (specifically, the `sample` method in the `DecoderRNN` class) that uses your RNN decoder to generate captions. That code will prove incredibly useful here.
If you decide to validate your model, please do not edit the data loader in **data_loader.py**. Instead, create a new file named **data_loader_val.py** containing the code for obtaining the data loader for the validation data. You can access:
- the validation images at filepath `'/opt/cocoapi/images/train2014/'`, and
- the validation image caption annotation file at filepath `'/opt/cocoapi/annotations/captions_val2014.json'`.
The suggested approach to validating your model involves creating a json file such as [this one](https://github.com/cocodataset/cocoapi/blob/master/results/captions_val2014_fakecap_results.json) containing your model's predicted captions for the validation images. Then, you can write your own script or use one that you [find online](https://github.com/tylin/coco-caption) to calculate the BLEU score of your model. You can read more about the BLEU score, along with other evaluation metrics (such as METEOR and CIDEr) in section 4.1 of [this paper](https://arxiv.org/pdf/1411.4555.pdf). For more information about how to use the annotation file, check out the [website](http://cocodataset.org/#download) for the COCO dataset.
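BLEU builds on modified n-gram precision; the unigram case can be sketched in a few lines of plain Python (the captions below are made-up examples, not COCO data):

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Modified unigram precision: clipped candidate counts / candidate length."""
    cand_counts = Counter(candidate)
    ref_counts = Counter(reference)
    # Clip each candidate word's count by its count in the reference,
    # so repeating a matching word cannot inflate the score.
    clipped = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    return clipped / len(candidate)

# Hypothetical tokenized captions.
reference = ["a", "man", "riding", "a", "horse"]
candidate = ["a", "man", "rides", "a", "horse"]
print(unigram_precision(candidate, reference))  # 4 of 5 tokens match -> 0.8
```

Full BLEU combines such precisions over 1- to 4-grams with a brevity penalty; the coco-caption repository linked above does this for you.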
```
# (Optional) TODO: Validate your model.
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
import time
data = pd.read_csv('mnist.csv')
data.head()
train_data = data.sample(frac=0.8)
test_data = data.drop(train_data.index)
train_labels = train_data['label'].values
train_data = train_data.drop('label', axis=1).values
test_labels = test_data['label'].values
test_data = test_data.drop('label', axis=1).values
num_im = 25
num_cells = math.ceil(math.sqrt(num_im))
plt.figure(figsize=(10, 10))
for i in range(num_im):
    pixels = train_data[i]
    size = math.ceil(math.sqrt(pixels.size))
    pixels = pixels.reshape(size, size)
    plt.subplot(num_cells, num_cells, i+1)
    plt.title(train_labels[i])
    plt.tick_params(axis='both', which='both', bottom=False, left=False, labelbottom=False, labelleft=False)
    plt.imshow(pixels)
def sigmoid(x):
    return 1 / (1 + np.exp(-x))
x = np.linspace(-10, 10, 50)
y = sigmoid(x)
plt.plot(x, y)
plt.title('Sigmoid Function')
plt.show()
num_samples, num_features = train_data.shape
W = None
train_data = (train_data - train_data.min()) / (train_data.max() - train_data.min())
X = np.c_[np.ones(num_samples), train_data]
num_iters = 5000
lr = 0.001
lambda_ = 0.01
start = time.time()
# train each label using one vs all
for label in range(10):
    w = np.random.rand(num_features+1)
    y = (train_labels == label).astype(float)
    for i in range(num_iters):
        diff = sigmoid(X @ w) - y
        for wi in range(len(w)):
            if wi == 0:
                grad = np.sum(diff)  # bias term: no regularization
            else:
                grad = diff @ X[:, wi] + lambda_ * w[wi]
            w[wi] -= lr * grad / num_samples
        if (i + 1) % 1000 == 0:
            # regularized cross-entropy cost (bias excluded from the penalty)
            reg_term = lambda_ * np.sum(w[1:] ** 2) / (2 * num_samples)
            t = sigmoid(X @ w)
            cost = np.sum(-y * np.log(t) - (1-y) * np.log(1-t)) / num_samples + reg_term
            print(f'label={label} iteration={i+1} cost={cost:.5f}')
    W = w if W is None else np.vstack((W, w))
print(f'Training Finished | Time Taken = {time.time() - start:.2f}s')
train_pred = np.argmax(X @ W.T, axis=1)
train_acc = np.sum(train_pred == train_labels) / num_samples
print(f'Train Accuracy = {train_acc:.5f}')
test_X = np.c_[np.ones(len(test_data)), test_data]
test_pred = np.argmax(test_X @ W.T, axis=1)
test_acc = np.sum(test_pred == test_labels) / len(test_data)
print(f'Test Accuracy = {test_acc:.5f}')
num_im = 9
num_cells = math.ceil(math.sqrt(num_im))
plt.figure(figsize=(10, 10))
for i in range(num_im):
    pixels = W[i][1:]
    size = math.ceil(math.sqrt(pixels.size))
    pixels = pixels.reshape(size, size)
    plt.subplot(num_cells, num_cells, i+1)
    plt.title(i)
    plt.tick_params(axis='both', which='both', bottom=False, left=False, labelbottom=False, labelleft=False)
    plt.imshow(pixels, cmap='Greys')
num_im = 64
num_cells = math.ceil(math.sqrt(num_im))
plt.figure(figsize=(15, 15))
for i in range(num_im):
    label = test_labels[i]
    pixels = test_data[i]
    size = math.ceil(math.sqrt(pixels.size))
    pixels = pixels.reshape(size, size)
    x = np.concatenate(([1], test_data[i]))
    pred = np.argmax(x @ W.T)
    plt.subplot(num_cells, num_cells, i+1)
    plt.title(pred)
    plt.tick_params(axis='both', which='both', bottom=False, left=False, labelbottom=False, labelleft=False)
    plt.imshow(pixels, cmap='Greens' if pred == label else 'Reds')
```
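As an aside, the per-weight inner loop in the training cell above is slow in pure Python; the same regularized gradient step can be written as a single matrix product. A standalone sketch on synthetic data (not a drop-in replacement for the cell above):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Synthetic binary problem: labels drawn from a known logistic model.
rng = np.random.RandomState(0)
X = np.c_[np.ones(200), rng.normal(size=(200, 2))]  # bias column + 2 features
true_w = np.array([0.5, 2.0, -1.0])
y = (sigmoid(X @ true_w) > rng.rand(200)).astype(float)

w = np.zeros(3)
lr, lambda_, m = 0.5, 0.01, len(y)
for _ in range(2000):
    grad = X.T @ (sigmoid(X @ w) - y) / m  # all weights updated at once
    grad[1:] += lambda_ * w[1:] / m        # no penalty on the bias weight
    w -= lr * grad

print(w.round(2))
```

The vectorized gradient is mathematically identical to the elementwise update, but one matrix product per iteration replaces 785 Python-level loop steps on MNIST.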
```
from toolz import curry
import pandas as pd
import numpy as np
from scipy.special import expit
from linearmodels.panel import PanelOLS
import statsmodels.formula.api as smf
import seaborn as sns
from matplotlib import pyplot as plt
from matplotlib import style
style.use("ggplot")
```
# Difference-in-Differences: Death and Rebirth
## The Promise of Panel Data
Panel data is when we have multiple units `i` over multiple periods of time `t`. Think about a policy-evaluation scenario in the US, where you want to check the effect of cannabis legalization on crime rate. You have crime-rate data on multiple states `i` over multiple time periods `t`. You also observe at what point in time each state adopts legislation in the direction of cannabis legalization. I hope you can see why this is incredibly powerful for causal inference. Call cannabis legalization the treatment `D` (since `T` is taken; it represents time). We can follow the trend in crime rates for a particular state that eventually gets treated and see if there are any disruptions in the trend at the treatment time. In a way, a state serves as its own control unit, in a sort of before-and-after comparison. Furthermore, because we have multiple states, we can also compare treated states to control states. When we put both comparisons together, treated vs control and before vs after treatment, we end up with an incredibly powerful tool to infer counterfactuals and, hence, causal effects.
Panel data methods are often used in government policy evaluation, but we can easily make an argument for why they are also incredibly useful in the (tech) industry. Companies often track user data across multiple periods of time, which results in a rich panel data structure. To explore that idea further, let's consider a hypothetical example of a tech company that tracked customers for multiple years. Along those years, it rolled out a new product for some of its customers. More specifically, some customers got access to the new product in 1985, others in 1994 and others in the year 2000. In causal inference terms, we can already see the new product as a treatment. We call each of those **groups of customers that got treated at the same time a cohort**. In this hypothetical example, we want to figure out the impact of the new product on sales. The following image shows how sales evolve over time for each of the treated cohorts, plus a never-treated group of customers.
```
time = range(1980, 2010)
cohorts = [1985,1994,2000,2011]
units = range(1, 100+1)
np.random.seed(1)
df_hom_effect = pd.DataFrame(dict(
year = np.tile(time, len(units)),
unit = np.repeat(units, len(time)),
cohort = np.repeat(np.random.choice(cohorts, len(units)), len(time)),
unit_fe = np.repeat(np.random.normal(0, 5, size=len(units)), len(time)),
time_fe = np.tile(np.random.normal(size=len(time)), len(units)),
)).assign(
trend = lambda d: (d["year"] - d["year"].min())/8,
post = lambda d: (d["year"] >= d["cohort"]).astype(int),
).assign(
treat = 1,
y0 = lambda d: 10 + d["trend"] + d["unit_fe"] + 0.1*d["time_fe"],
).assign(
treat_post = lambda d: d["treat"]*d["post"],
y1 = lambda d: d["y0"] + 1
).assign(
tau = lambda d: d["y1"] - d["y0"],
sales = lambda d: np.where(d["treat_post"] == 1, d["y1"], d["y0"])
).drop(columns=["unit_fe", "time_fe", "trend", "y0", "y1"])
plt.figure(figsize=(10,4))
[plt.vlines(x=c, ymin=9, ymax=15, color="black", ls="dashed") for c in cohorts[:-1]]
sns.lineplot(
data=(df_hom_effect
.replace({"cohort":{2011:"never-treated"}})
.groupby(["cohort", "year"])["sales"]
.mean()
.reset_index()),
x="year",
y = "sales",
hue="cohort",
);
```
Let's take a moment to appreciate the richness of the data depicted in the plot above. First, we can see that each cohort has its own baseline level. That's simply because different customers buy different amounts. For instance, it looks like customers in the never-treated cohort have a higher baseline (of about 11) compared to the other cohorts. This means that simply comparing treated cohorts to control cohorts would yield a biased result, since $Y_{0}$ for the never treated is higher than $Y_{0}$ for the treated. Fortunately, we can compare across units and time.
Speaking of time, notice how there is an overall upward trend with some wiggles (for example, there is a dip in the year 1999). Since later years have higher $Y_{0}$ than early years, simply comparing the same unit across time would also yield biased results. Once again, we are fortunate that the panel data structure allows us to compare not only across time, but also across units.
Another way to see the power of the panel data structure is through the lens of linear models and linear regression. Let's say each of our customers `i` has a spend propensity $\gamma$. This is due to idiosyncrasies we can't observe, like the customer's salary, family size and so on. Also, we can say that each year has a sales level $\theta$. Again, maybe there is a crisis in one year and sales drop. If that is the case, a good way of modeling sales is to say it depends on the customer effect $\gamma$ and the time effect $\theta$, plus some random noise.
$$
Sales_{it} = \gamma_i + \theta_t + e_{it}
$$
To include the treatment in this picture, let's define a variable $D_{it}$ which is 1 if the unit is treated. In our example, this variable would always be zero for the never-treated cohort. It would also be zero for all the other cohorts at the beginning, but it would turn into 1 in 1985 for the cohort treated in 1985 and stay on after that. Same thing for the other cohorts: it would turn into 1 in 1994 for the cohort treated in 1994, and so on. We can include it in our model of sales as follows:
$$
Sales_{it} = \tau D_{it} + \gamma_i + \theta_t + e_{it}
$$
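Before estimating this, note that $D_{it}$ can be built from a cohort column in one line. A standalone toy sketch (the column names `unit`, `year`, `cohort` mirror the simulation code in this notebook):

```python
import pandas as pd

# Toy panel: one unit treated from 1994 on, one never treated in the sample
# window (cohort 2011 plays the role of "never treated" here).
df = pd.DataFrame({"unit": [1, 1, 1, 2, 2, 2],
                   "year": [1993, 1994, 1995] * 2,
                   "cohort": [1994] * 3 + [2011] * 3})
# D_it: 1 once the unit's cohort year has been reached, 0 before (and always
# 0 for units whose cohort lies beyond the sample window).
df["d"] = (df["year"] >= df["cohort"]).astype(int)
print(df)
```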
Estimating the above model with OLS is what is called the Two-Way Fixed Effects (TWFE) model. Notice that $\tau$ is the treatment effect, as it tells us how much sales change once units are treated. Another way of looking at it is to invoke the "holding things constant" property of linear regression. If we estimate the above model, we can read the estimate of $\tau$ as how much sales would change if we flipped the treatment from 0 to 1 while holding the unit `i` and the time `t` fixed. Take a minute to appreciate how bold this is! To say we hold each unit fixed while seeing how $D$ changes the outcome is to say we are controlling for all unit-specific characteristics, known and unknown. For example, we would be controlling for the customer's past sales, which we could measure, but also for things we have no idea about, like how much the customer likes our brand, or their salary. The only requirement is that these characteristics are fixed over the time window of the analysis. Moreover, to say we hold each time period fixed is to say we are controlling for all year-specific characteristics. For instance, since we are holding the year fixed while looking at the effect of $D$, that upward trend would vanish.
To see all this power in action, all we have to do is run an OLS model with the treatment indicator $D$ (`treat_post` here), plus dummies for the units and time periods. In this particular example, I've generated the data in such a way that the effect of the treatment (the new product) is to increase sales by 1. Notice how TWFE nails recovering that treatment effect:
```
formula = f"""sales ~ treat_post + C(unit) + C(year)"""
mod = smf.ols(formula, data=df_hom_effect)
result = mod.fit()
result.params["treat_post"]
```
Since I've simulated the data above, I know the true individual treatment effect exactly; it is stored in the `tau` column. Since TWFE recovers the average treatment effect on the treated (ATT), we can verify that the true ATT matches the one estimated above.
```
df_hom_effect.query("treat_post==1")["tau"].mean()
```
Before anyone comes and says that generating one dummy column for each unit is impossible with big data, let me come forward and tell you that, yes, that is true. But there is an easy workaround. We can use the FWL theorem to partial out the fixed effects and break that single regression into two. In fact, running the above model is numerically equivalent to estimating the following model
$$
\tilde{Sales}_{it} = \tau \tilde D_{it} + e_{it}
$$
where
$$
\tilde{Sales}_{it} = Sales_{it} - \underbrace{\frac{1}{T}\sum_{t=0}^T Sales_{it}}_\text{Time Average} - \underbrace{\frac{1}{N}\sum_{i=0}^N Sales_{it}}_\text{Unit Average}
$$
and
$$
\tilde{D}_{it} = D_{it} - \frac{1}{T}\sum_{t=0}^T D_{it} - \frac{1}{N}\sum_{i=0}^N D_{it}
$$
In words now, in case the math is too crowded: we subtract the unit average across time (first term) and the time average across units (second term) from both the treatment indicator and the outcome variable to construct the residuals. This process is often called de-meaning, since we subtract the means from the outcome and the treatment. Finally, here is the same exact thing, but in code:
```
from toolz import curry  # assumed imported earlier in the notebook

@curry
def demean(df, col_to_demean):
    # subtract the unit average (across time) and the time average (across units)
    return df.assign(**{col_to_demean: (df[col_to_demean]
                                        - df.groupby("unit")[col_to_demean].transform("mean")
                                        - df.groupby("year")[col_to_demean].transform("mean"))})

formula = "sales ~ treat_post"
mod = smf.ols(formula,
              data=df_hom_effect
              .pipe(demean(col_to_demean="treat_post"))
              .pipe(demean(col_to_demean="sales")))
result = mod.fit()
result.summary().tables[1]
```
As we can see, with the alternative implementation, TWFE is also able to perfectly recover the ATT of 1.
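This numerical equivalence can also be checked end-to-end on a tiny balanced panel using only `numpy` (a sketch; the panel sizes, treatment assignment rule, and absence of noise are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, tau = 5, 4, 1.0

# toy balanced panel: unit/time fixed effects plus a constant treatment effect
unit = np.repeat(np.arange(N), T)
year = np.tile(np.arange(T), N)
unit_fe = rng.normal(size=N)
time_fe = rng.normal(size=T)
D = ((unit >= 3) & (year >= 2)).astype(float)  # last two units treated from year 2 on
y = tau * D + unit_fe[unit] + time_fe[year]

# (1) dummy regression: intercept, D, unit dummies, year dummies
X = np.column_stack([np.ones(N * T), D,
                     (unit[:, None] == np.arange(1, N)).astype(float),
                     (year[:, None] == np.arange(1, T)).astype(float)])
tau_dummies = np.linalg.lstsq(X, y, rcond=None)[0][1]

# (2) de-meaned regression: subtract unit and time averages, keep an intercept
def demean(v):
    unit_mean = np.array([v[unit == i].mean() for i in range(N)])
    year_mean = np.array([v[year == t].mean() for t in range(T)])
    return v - unit_mean[unit] - year_mean[year]

Xd = np.column_stack([np.ones(N * T), demean(D)])
tau_demeaned = np.linalg.lstsq(Xd, demean(y), rcond=None)[0][1]

print(tau_dummies, tau_demeaned)  # both should be (numerically) 1.0
```

With an intercept in the second regression, the constant shift from skipping the grand-mean correction is absorbed, so the slope matches the dummy regression exactly.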
## Assumptions
Two
## Death
## Trend in the Effect
```
time = range(1980, 2010)
cohorts = [1985, 1994, 2000, 2011]
units = range(1, 100 + 1)

np.random.seed(3)
df_trend_effect = pd.DataFrame(dict(
    year=np.tile(time, len(units)),
    unit=np.repeat(units, len(time)),
    cohort=np.repeat(np.random.choice(cohorts, len(units)), len(time)),
    unit_fe=np.repeat(np.random.normal(size=len(units)), len(time)),
    time_fe=np.tile(np.random.normal(size=len(time)), len(units)),
)).assign(
    relative_year=lambda d: d["year"] - d["cohort"],
    trend=lambda d: (d["year"] - d["year"].min()) / 8,
    post=lambda d: (d["year"] >= d["cohort"]).astype(int),
).assign(
    treat=1,
    y0=lambda d: 10 + d["unit_fe"] + 0.02 * d["time_fe"],
).assign(
    # effect grows by 0.2 per year after treatment, capped at 1
    y1=lambda d: d["y0"] + np.minimum(0.2 * np.maximum(0, d["year"] - d["cohort"]), 1)
).assign(
    tau=lambda d: d["y1"] - d["y0"],
    outcome=lambda d: np.where(d["treat"] * d["post"] == 1, d["y1"], d["y0"])
)

plt.figure(figsize=(10, 4))
sns.lineplot(
    data=df_trend_effect.groupby(["cohort", "year"])["outcome"].mean().reset_index(),
    x="year",
    y="outcome",
    hue="cohort",
);

formula = "outcome ~ treat:post + C(year) + C(unit)"
mod = smf.ols(formula, data=df_trend_effect)
result = mod.fit()
result.params["treat:post"]

df_trend_effect.query("treat==1 & post==1")["tau"].mean()
```
### Event Study Design
Instead of a single post-treatment dummy, an event study regression includes one dummy for each year relative to the treatment, which lets us trace out how the effect evolves over time.
```
relative_years = range(-10, 10 + 1)
formula = "outcome~" + "+".join([f'Q({c})' for c in relative_years]) + "+C(unit)+C(year)"
mod = smf.ols(formula,
              data=df_trend_effect.join(pd.get_dummies(df_trend_effect["relative_year"])))
result = mod.fit()

ax = (df_trend_effect
      .query("treat==1")
      .query("relative_year>-10")
      .query("relative_year<10")
      .groupby("relative_year")["tau"].mean().plot())
ax.plot(relative_years, result.params[-len(relative_years):]);
```
## Covariates
## X-Specific Trends
```
time = range(1980, 2000)
cohorts = [1990]
units = range(1, 100 + 1)

np.random.seed(3)
x = np.random.choice(np.random.normal(size=len(units) // 10), size=len(units))
df_cov_trend = pd.DataFrame(dict(
    year=np.tile(time, len(units)),
    unit=np.repeat(units, len(time)),
    cohort=np.repeat(np.random.choice(cohorts, len(units)), len(time)),
    unit_fe=np.repeat(np.random.normal(size=len(units)), len(time)),
    time_fe=np.tile(np.random.normal(size=len(time)), len(units)),
    x=np.repeat(x, len(time)),
)).assign(
    trend=lambda d: d["x"] * (d["year"] - d["year"].min()) / 20,
    post=lambda d: (d["year"] >= d["cohort"]).astype(int),
).assign(
    treat=np.repeat(np.random.binomial(1, expit(x)), len(time)),
    y0=lambda d: 10 + d["trend"] + 0.5 * d["unit_fe"] + 0.01 * d["time_fe"],
).assign(
    y1=lambda d: d["y0"] + 1
).assign(
    tau=lambda d: d["y1"] - d["y0"],
    outcome=lambda d: np.where(d["treat"] * d["post"] == 1, d["y1"], d["y0"])
)

plt.figure(figsize=(10, 4))
sns.lineplot(
    data=df_cov_trend.groupby(["treat", "year"])["outcome"].mean().reset_index(),
    x="year",
    y="outcome",
    hue="treat",
);

facet_col = "x"
all_facet_values = sorted(df_cov_trend[facet_col].unique())
g = sns.FacetGrid(df_cov_trend, col=facet_col, sharey=False, sharex=False,
                  col_wrap=4, height=5, aspect=1)
for x_val, ax in zip(all_facet_values, g.axes):
    plot_df = df_cov_trend.query(f"{facet_col}=={x_val}")
    sns.lineplot(
        data=plot_df.groupby(["treat", "year"])["outcome"].mean().reset_index(),
        x="year",
        y="outcome",
        hue="treat",
        ax=ax
    )
    ax.set_title(f"X = {round(x_val, 2)}")
plt.tight_layout()

# TWFE without controlling for the x-specific trend
formula = "outcome ~ treat:post + C(year) + C(unit)"
mod = smf.ols(formula, data=df_cov_trend)
result = mod.fit()
result.params["treat:post"]

# TWFE interacting x with the year dummies
formula = "outcome ~ treat:post + x * C(year) + C(unit)"
mod = smf.ols(formula, data=df_cov_trend)
result = mod.fit()
result.params["treat:post"]

df_cov_trend.query("treat==1 & post==1")["tau"].mean()
```
<a href="https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/beta/AlphaFold2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# AlphaFold2 w/ MMseqs2
An easy-to-use version of AlphaFold 2 [(Jumper et al. 2021, Nature)](https://www.nature.com/articles/s41586-021-03819-2), a protein structure prediction pipeline, with an API hosted at the Söding lab based on the MMseqs2 server [(Mirdita et al. 2019, Bioinformatics)](https://academic.oup.com/bioinformatics/article/35/16/2856/5280135) for multiple sequence alignment creation.
**Limitations**
- This notebook does NOT use AlphaFold2's jackhmmer pipeline for MSA/template generation. It may give better or worse results depending on the number of sequences that can be found. Check out the [full AlphaFold2 pipeline](https://github.com/deepmind/alphafold) or DeepMind's official [Google Colab notebook](https://colab.research.google.com/github/deepmind/alphafold/blob/main/notebooks/AlphaFold.ipynb).
- For homo-oligomeric setting, amber-relax and templates are currently NOT supported.
- For a typical Google Colab session with a `16G-GPU`, the max total length is **1400 residues**. Sometimes a `12G-GPU` is assigned, in which case the max length is ~1000 residues.
**WARNING**:
<strong>For detailed instructions, see <a href="#Instructions">bottom</a> of notebook!</strong>
```
#@title Input protein sequence, then hit `Runtime` -> `Run all`
import os
os.environ['TF_FORCE_UNIFIED_MEMORY'] = '1'
os.environ['XLA_PYTHON_CLIENT_MEM_FRACTION'] = '2.0'
import re
import hashlib
from google.colab import files
def add_hash(x, y):
  return x + "_" + hashlib.sha1(y.encode()).hexdigest()[:5]
query_sequence = 'PIAQIHILEGRSDEQKETLIREVSEAISRSLDAPLTSVRVIITEMAKGHFGIGGELASK' #@param {type:"string"}
# remove whitespaces
query_sequence = "".join(query_sequence.split())
query_sequence = re.sub(r'[^a-zA-Z]','', query_sequence).upper()
jobname = 'test' #@param {type:"string"}
# remove whitespaces
jobname = "".join(jobname.split())
jobname = re.sub(r'\W+', '', jobname)
jobname = add_hash(jobname, query_sequence)
with open(f"{jobname}.fasta", "w") as text_file:
  text_file.write(">1\n%s" % query_sequence)
# number of models to use
#@markdown ---
#@markdown ### Advanced settings
msa_mode = "MMseqs2 (UniRef+Environmental)" #@param ["MMseqs2 (UniRef+Environmental)", "MMseqs2 (UniRef only)","single_sequence","custom"]
num_models = 5 #@param [1,2,3,4,5] {type:"raw"}
use_msa = True if msa_mode.startswith("MMseqs2") else False
use_env = True if msa_mode == "MMseqs2 (UniRef+Environmental)" else False
use_custom_msa = True if msa_mode == "custom" else False
use_amber = False #@param {type:"boolean"}
use_templates = False #@param {type:"boolean"}
use_ptm = True #@param {type:"boolean"}
#@markdown ---
#@markdown ### Experimental options
homooligomer = 1 #@param [1,2,3,4,5,6,7,8] {type:"raw"}
save_to_google_drive = False #@param {type:"boolean"}
#@markdown ---
#@markdown Don't forget to hit `Runtime` -> `Run all` after updating form
if homooligomer > 1:
  if use_amber:
    print("amber disabled: amber is not currently supported for homooligomers")
    use_amber = False
  if use_templates:
    print("templates disabled: templates are not currently supported for homooligomers")
    use_templates = False
# decide which a3m to use
if use_msa:
  a3m_file = f"{jobname}.a3m"
elif use_custom_msa:
  a3m_file = f"{jobname}.custom.a3m"
  if not os.path.isfile(a3m_file):
    custom_msa_dict = files.upload()
    custom_msa = list(custom_msa_dict.keys())[0]
    header = 0
    import fileinput
    for line in fileinput.FileInput(custom_msa, inplace=1):
      if line.startswith(">"):
        header = header + 1
      if line.startswith("#"):
        continue
      if not line.rstrip():
        continue
      if not line.startswith(">") and header == 1:
        query_sequence = line.rstrip()
      print(line, end='')
    os.rename(custom_msa, a3m_file)
    print(f"moving {custom_msa} to {a3m_file}")
else:
  a3m_file = f"{jobname}.single_sequence.a3m"
  with open(a3m_file, "w") as text_file:
    text_file.write(">1\n%s" % query_sequence)
if save_to_google_drive:
  from pydrive.drive import GoogleDrive
  from pydrive.auth import GoogleAuth
  from google.colab import auth
  from oauth2client.client import GoogleCredentials
  auth.authenticate_user()
  gauth = GoogleAuth()
  gauth.credentials = GoogleCredentials.get_application_default()
  drive = GoogleDrive(gauth)
  print("You are logged into Google Drive and are good to go!")
#@title Install dependencies
%%bash -s $use_amber $use_msa $use_templates
USE_AMBER=$1
USE_MSA=$2
USE_TEMPLATES=$3
if [ ! -f AF2_READY ]; then
# install dependencies
pip -q install biopython
pip -q install dm-haiku
pip -q install ml-collections
pip -q install py3Dmol
# download model
if [ ! -d "alphafold/" ]; then
git clone https://github.com/deepmind/alphafold.git --quiet
(cd alphafold; git checkout 1e216f93f06aa04aa699562f504db1d02c3b704c --quiet)
mv alphafold alphafold_
mv alphafold_/alphafold .
# remove "END" from PDBs, otherwise biopython complains
sed -i "s/pdb_lines.append('END')//" /content/alphafold/common/protein.py
sed -i "s/pdb_lines.append('ENDMDL')//" /content/alphafold/common/protein.py
fi
# download model params (~1 min)
if [ ! -d "params/" ]; then
wget -qnc https://storage.googleapis.com/alphafold/alphafold_params_2021-07-14.tar
mkdir params
tar -xf alphafold_params_2021-07-14.tar -C params/
rm alphafold_params_2021-07-14.tar
fi
touch AF2_READY
fi
# download libraries for interfacing with MMseqs2 API
if [ ${USE_MSA} == "True" ] || [ ${USE_TEMPLATES} == "True" ]; then
if [ ! -f MMSEQ2_READY ]; then
apt-get -qq -y update 2>&1 1>/dev/null
apt-get -qq -y install jq curl zlib1g gawk 2>&1 1>/dev/null
touch MMSEQ2_READY
fi
fi
# setup conda
if [ ${USE_AMBER} == "True" ] || [ ${USE_TEMPLATES} == "True" ]; then
if [ ! -f CONDA_READY ]; then
wget -qnc https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -bfp /usr/local 2>&1 1>/dev/null
rm Miniconda3-latest-Linux-x86_64.sh
touch CONDA_READY
fi
fi
# setup template search
if [ ${USE_TEMPLATES} == "True" ] && [ ! -f HH_READY ]; then
conda install -y -q -c conda-forge -c bioconda kalign3=3.2.2 hhsuite=3.3.0 python=3.7 2>&1 1>/dev/null
touch HH_READY
fi
# setup openmm for amber refinement
if [ ${USE_AMBER} == "True" ] && [ ! -f AMBER_READY ]; then
conda install -y -q -c conda-forge openmm=7.5.1 python=3.7 pdbfixer 2>&1 1>/dev/null
(cd /usr/local/lib/python3.7/site-packages; patch -s -p0 < /content/alphafold_/docker/openmm.patch)
wget -qnc https://git.scicore.unibas.ch/schwede/openstructure/-/raw/7102c63615b64735c4941278d92b554ec94415f8/modules/mol/alg/src/stereo_chemical_props.txt
mv stereo_chemical_props.txt alphafold/common/
touch AMBER_READY
fi
#@title Import libraries
# setup the model
if "IMPORTED" not in dir():
  import numpy as np
  import pickle
  from string import ascii_uppercase
  from alphafold.common import protein
  from alphafold.data import pipeline
  from alphafold.data import templates
  from alphafold.model import data
  from alphafold.model import config
  from alphafold.model import model
  from alphafold.data.tools import hhsearch
  from absl import logging
  logging.set_verbosity("error")
  # plotting libraries
  import py3Dmol
  import matplotlib.pyplot as plt
  IMPORTED = True

if use_amber and "relax" not in dir():
  import sys
  sys.path.insert(0, '/usr/local/lib/python3.7/site-packages/')
  from alphafold.relax import relax
def mk_template(jobname):
  template_featurizer = templates.TemplateHitFeaturizer(
      mmcif_dir="templates/",
      max_template_date="2100-01-01",
      max_hits=20,
      kalign_binary_path="kalign",
      release_dates_path=None,
      obsolete_pdbs_path=None)
  hhsearch_pdb70_runner = hhsearch.HHSearch(binary_path="hhsearch", databases=[jobname])
  a3m_lines = "\n".join(open(f"{jobname}.a3m", "r").readlines())
  hhsearch_result = hhsearch_pdb70_runner.query(a3m_lines)
  hhsearch_hits = pipeline.parsers.parse_hhr(hhsearch_result)
  templates_result = template_featurizer.get_templates(query_sequence=query_sequence,
                                                       query_pdb_code=None,
                                                       query_release_date=None,
                                                       hits=hhsearch_hits)
  return templates_result.features
def set_bfactor(pdb_filename, bfac, idx_res, chains):
  I = open(pdb_filename, "r").readlines()
  O = open(pdb_filename, "w")
  for line in I:
    if line[0:6] == "ATOM  ":
      seq_id = int(line[22:26].strip()) - 1
      seq_id = np.where(idx_res == seq_id)[0][0]
      O.write(f"{line[:21]}{chains[seq_id]}{line[22:60]}{bfac[seq_id]:6.2f}{line[66:]}")
  O.close()
def predict_structure(prefix, feature_dict, Ls, random_seed=0):
  """Predicts structure using AlphaFold for the given sequence."""

  # Minkyung's code
  # add big enough number to residue index to indicate chain breaks
  idx_res = feature_dict['residue_index']
  L_prev = 0
  # Ls: number of residues in each chain
  for L_i in Ls[:-1]:
    idx_res[L_prev+L_i:] += 200
    L_prev += L_i
  chains = list("".join([ascii_uppercase[n]*L for n, L in enumerate(Ls)]))
  feature_dict['residue_index'] = idx_res

  plddts = []
  paes = []
  unrelaxed_pdb_lines = []
  relaxed_pdb_lines = []

  # Run the models.
  if use_templates:
    model_names = ["model_1","model_2","model_3","model_4","model_5"][:num_models]
    model_start = ["model_1","model_3"]
    model_end = ["model_2","model_5"]
  else:
    model_names = ["model_4","model_1","model_2","model_3","model_5"][:num_models]
    model_start = ["model_4"]
    model_end = ["model_5"]

  for n, model_name in enumerate(model_names):
    name = model_name+"_ptm" if use_ptm else model_name
    model_config = config.model_config(name)
    model_config.data.eval.num_ensemble = 1
    if msa_mode == "single_sequence":
      model_config.data.common.max_extra_msa = 1
      model_config.data.eval.max_msa_clusters = 1
    model_params = data.get_model_haiku_params(name, data_dir=".")
    if model_name in model_start:
      model_runner = model.RunModel(model_config, model_params)
      processed_feature_dict = model_runner.process_features(feature_dict, random_seed=0)
    else:
      # swap params
      for k in model_runner.params.keys():
        model_runner.params[k] = model_params[k]
    print(f"running model_{n+1}")
    prediction_result = model_runner.predict(processed_feature_dict)
    unrelaxed_protein = protein.from_prediction(processed_feature_dict, prediction_result)
    unrelaxed_pdb_lines.append(protein.to_pdb(unrelaxed_protein))
    plddts.append(prediction_result['plddt'])
    if use_ptm:
      paes.append(prediction_result['predicted_aligned_error'])
    if use_amber:
      # Relax the prediction.
      amber_relaxer = relax.AmberRelaxation(max_iterations=0, tolerance=2.39,
                                            stiffness=10.0, exclude_residues=[],
                                            max_outer_iterations=20)
      relaxed_pdb_str, _, _ = amber_relaxer.process(prot=unrelaxed_protein)
      relaxed_pdb_lines.append(relaxed_pdb_str)
    # Delete unused outputs to save memory.
    if model_name in model_end:
      del model_runner
      del processed_feature_dict
    del model_params
    del prediction_result

  # rerank models based on predicted lddt
  lddt_rank = np.mean(plddts, -1).argsort()[::-1]
  out = {}
  print("reranking models based on avg. predicted lDDT")
  for n, r in enumerate(lddt_rank):
    print(f"model_{n+1} {np.mean(plddts[r])}")
    unrelaxed_pdb_path = f'{prefix}_unrelaxed_model_{n+1}.pdb'
    with open(unrelaxed_pdb_path, 'w') as f:
      f.write(unrelaxed_pdb_lines[r])
    set_bfactor(unrelaxed_pdb_path, plddts[r], idx_res, chains)
    if use_amber:
      relaxed_pdb_path = f'{prefix}_relaxed_model_{n+1}.pdb'
      with open(relaxed_pdb_path, 'w') as f:
        f.write(relaxed_pdb_lines[r])
      set_bfactor(relaxed_pdb_path, plddts[r], idx_res, chains)
    if use_ptm:
      out[f"model_{n+1}"] = {"plddt": plddts[r], "pae": paes[r]}
    else:
      out[f"model_{n+1}"] = {"plddt": plddts[r]}
  return out
#@title Call MMseqs2 to get MSA/templates
%%bash -s $use_amber $use_msa $use_templates $jobname $use_env
USE_AMBER=$1
USE_MSA=$2
USE_TEMPLATES=$3
NAME=$4
USE_ENV=$5
if [ ${USE_MSA} == "True" ] || [ ${USE_TEMPLATES} == "True" ]; then
if [ ! -f ${NAME}.mmseqs2.tar.gz ]; then
# query MMseqs2 webserver
echo "submitting job"
MODE=all
if [ ${USE_ENV} == "True" ]; then
MODE=env
fi
ID=$(curl -s -F q=@${NAME}.fasta -F mode=${MODE} https://a3m.mmseqs.com/ticket/msa | jq -r '.id')
STATUS=$(curl -s https://a3m.mmseqs.com/ticket/${ID} | jq -r '.status')
while [ "${STATUS}" == "RUNNING" ] || [ "${STATUS}" == "PENDING" ]; do
STATUS=$(curl -s https://a3m.mmseqs.com/ticket/${ID} | jq -r '.status')
sleep 1
done
if [ "${STATUS}" == "COMPLETE" ]; then
curl -s https://a3m.mmseqs.com/result/download/${ID} > ${NAME}.mmseqs2.tar.gz
tar xzf ${NAME}.mmseqs2.tar.gz
if [ ${USE_ENV} == "True" ]; then
cat uniref.a3m bfd.mgnify30.metaeuk30.smag30.a3m > tmp.a3m
tr -d '\000' < tmp.a3m > ${NAME}.a3m
rm uniref.a3m bfd.mgnify30.metaeuk30.smag30.a3m tmp.a3m
else
tr -d '\000' < uniref.a3m > ${NAME}.a3m
rm uniref.a3m
fi
mv pdb70.m8 ${NAME}.m8
else
echo "MMseqs2 server did not return a valid result."
cp ${NAME}.fasta ${NAME}.a3m
fi
fi
if [ ${USE_MSA} == "True" ]; then
echo "Found $(grep -c ">" ${NAME}.a3m) sequences (after redundancy filtering)"
fi
if [ ${USE_TEMPLATES} == "True" ] && [ ! -f ${NAME}_hhm.ffindex ]; then
echo "getting templates"
if [ -s ${NAME}.m8 ]; then
if [ ! -d templates ]; then
mkdir templates/
fi
printf "pdb\tevalue\n"
head -n 20 ${NAME}.m8 | awk '{print $2"\t"$11}'
TMPL=$(head -n 20 ${NAME}.m8 | awk '{printf $2","}')
curl -s https://a3m-templates.mmseqs.com/template/${TMPL} | tar xzf - -C templates/
mv templates/pdb70_a3m.ffdata ${NAME}_a3m.ffdata
mv templates/pdb70_a3m.ffindex ${NAME}_a3m.ffindex
mv templates/pdb70_hhm.ffdata ${NAME}_hhm.ffdata
mv templates/pdb70_hhm.ffindex ${NAME}_hhm.ffindex
cp ${NAME}_a3m.ffindex ${NAME}_cs219.ffindex
touch ${NAME}_cs219.ffdata
else
echo "no templates found"
fi
fi
fi
#@title Gather input features, predict structure
# parse TEMPLATES
if use_templates and os.path.isfile(f"{jobname}_hhm.ffindex"):
  template_features = mk_template(jobname)
else:
  use_templates = False
  template_features = {}

# parse MSA
a3m_lines = "".join(open(a3m_file, "r").readlines())
msa, deletion_matrix = pipeline.parsers.parse_a3m(a3m_lines)

if homooligomer == 1:
  msas = [msa]
  deletion_matrices = [deletion_matrix]
else:
  # make multiple copies of msa for each copy
  # AAA------
  # ---AAA---
  # ------AAA
  #
  # note: if you concat the sequences (as below), it does NOT work
  # AAAAAAAAA
  msas = []
  deletion_matrices = []
  Ln = len(query_sequence)
  for o in range(homooligomer):
    L = Ln * o
    R = Ln * (homooligomer-(o+1))
    msas.append(["-"*L+seq+"-"*R for seq in msa])
    deletion_matrices.append([[0]*L+mtx+[0]*R for mtx in deletion_matrix])

# gather features
feature_dict = {
    **pipeline.make_sequence_features(sequence=query_sequence*homooligomer,
                                      description="none",
                                      num_res=len(query_sequence)*homooligomer),
    **pipeline.make_msa_features(msas=msas, deletion_matrices=deletion_matrices),
    **template_features
}
outs = predict_structure(jobname, feature_dict, Ls=[len(query_sequence)]*homooligomer)
#@title Make plots
dpi = 100 #@param {type:"integer"}
# gather MSA info
deduped_full_msa = list(dict.fromkeys(msa))
msa_arr = np.array([list(seq) for seq in deduped_full_msa])
seqid = (np.array(list(query_sequence)) == msa_arr).mean(-1)
seqid_sort = seqid.argsort() #[::-1]
non_gaps = (msa_arr != "-").astype(float)
non_gaps[non_gaps == 0] = np.nan
##################################################################
plt.figure(figsize=(14,4),dpi=dpi)
##################################################################
plt.subplot(1,2,1); plt.title("Sequence coverage")
plt.imshow(non_gaps[seqid_sort]*seqid[seqid_sort,None],
interpolation='nearest', aspect='auto',
cmap="rainbow_r", vmin=0, vmax=1, origin='lower')
plt.plot((msa_arr != "-").sum(0), color='black')
plt.xlim(-0.5,msa_arr.shape[1]-0.5)
plt.ylim(-0.5,msa_arr.shape[0]-0.5)
plt.colorbar(label="Sequence identity to query",)
plt.xlabel("Positions")
plt.ylabel("Sequences")
##################################################################
plt.subplot(1,2,2); plt.title("Predicted lDDT per position")
for model_name, value in outs.items():
  plt.plot(value["plddt"], label=model_name)
if homooligomer > 0:
  for n in range(homooligomer+1):
    x = n*(len(query_sequence)-1)
    plt.plot([x,x],[0,100],color="black")
plt.legend()
plt.ylim(0,100)
plt.ylabel("Predicted lDDT")
plt.xlabel("Positions")
plt.savefig(jobname+"_coverage_lDDT.png")
##################################################################
plt.show()
if use_ptm:
  print("Predicted Alignment Error")
  ##################################################################
  plt.figure(figsize=(3*num_models,2), dpi=dpi)
  for n, (model_name, value) in enumerate(outs.items()):
    plt.subplot(1, num_models, n+1)
    plt.title(model_name)
    plt.imshow(value["pae"], label=model_name, cmap="bwr", vmin=0, vmax=30)
    plt.colorbar()
  plt.savefig(jobname+"_PAE.png")
  plt.show()
##################################################################
#@title Display 3D structure {run: "auto"}
model_num = 1 #@param ["1", "2", "3", "4", "5"] {type:"raw"}
color = "chain" #@param ["chain", "lDDT", "rainbow"]
show_sidechains = False #@param {type:"boolean"}
show_mainchains = False #@param {type:"boolean"}
def plot_plddt_legend():
  """Plots the legend for plDDT."""
  thresh = ['plDDT:','Very low (<50)','Low (60)','OK (70)','Confident (80)','Very high (>90)']
  plt.figure(figsize=(1,0.1),dpi=100)
  ########################################
  for c in ["#FFFFFF","#FF0000","#FFFF00","#00FF00","#00FFFF","#0000FF"]:
    plt.bar(0, 0, color=c)
  plt.legend(thresh, frameon=False,
             loc='center', ncol=6,
             handletextpad=1,
             columnspacing=1,
             markerscale=0.5,)
  plt.axis(False)
  return plt

def plot_confidence(model_num=1):
  """Plots the pLDDT (and, if available, PAE) confidence for a model."""
  model_name = f"model_{model_num}"
  #########################################
  if use_ptm:
    plt.figure(figsize=(10,3),dpi=100)
    plt.subplot(1,2,1)
  else:
    plt.figure(figsize=(5,3),dpi=100)
  plt.title('Predicted lDDT')
  plt.plot(outs[model_name]["plddt"])
  for n in range(homooligomer+1):
    x = n*(len(query_sequence))
    plt.plot([x,x],[0,100],color="black")
  plt.ylabel('plDDT')
  plt.xlabel('position')
  #########################################
  if use_ptm:
    plt.subplot(1,2,2);plt.title('Predicted Aligned Error')
    plt.imshow(outs[model_name]["pae"], cmap="bwr",vmin=0,vmax=30)
    plt.colorbar()
    plt.xlabel('Scored residue')
    plt.ylabel('Aligned residue')
  #########################################
  return plt
def show_pdb(model_num=1, show_sidechains=False, show_mainchains=False, color="lDDT"):
  model_name = f"model_{model_num}"
  if use_amber:
    pdb_filename = f"{jobname}_relaxed_{model_name}.pdb"
  else:
    pdb_filename = f"{jobname}_unrelaxed_{model_name}.pdb"
  view = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js',)
  view.addModel(open(pdb_filename,'r').read(),'pdb')
  if color == "lDDT":
    view.setStyle({'cartoon': {'colorscheme': {'prop':'b','gradient': 'roygb','min':50,'max':90}}})
  elif color == "rainbow":
    view.setStyle({'cartoon': {'color':'spectrum'}})
  elif color == "chain":
    for n, chain, chain_color in zip(range(homooligomer), list("ABCDEFGH"),
                                     ["lime","cyan","magenta","yellow","salmon","white","blue","orange"]):
      view.setStyle({'chain':chain},{'cartoon': {'color':chain_color}})
  if show_sidechains:
    BB = ['C','O','N']
    view.addStyle({'and':[{'resn':["GLY","PRO"],'invert':True},{'atom':BB,'invert':True}]},
                  {'stick':{'colorscheme':"WhiteCarbon",'radius':0.3}})
    view.addStyle({'and':[{'resn':"GLY"},{'atom':'CA'}]},
                  {'sphere':{'colorscheme':"WhiteCarbon",'radius':0.3}})
    view.addStyle({'and':[{'resn':"PRO"},{'atom':['C','O'],'invert':True}]},
                  {'stick':{'colorscheme':"WhiteCarbon",'radius':0.3}})
  if show_mainchains:
    BB = ['C','O','N','CA']
    view.addStyle({'atom':BB},{'stick':{'colorscheme':"WhiteCarbon",'radius':0.3}})
  view.zoomTo()
  return view
if (model_num-1) < num_models:
  show_pdb(model_num, show_sidechains, show_mainchains, color).show()
  if color == "lDDT":
    plot_plddt_legend().show()
  plot_confidence(model_num).show()
else:
  print("this model was not made")
#@title Package and download results
#@markdown If you are having issues downloading the result archive, try disabling your adblocker and run this cell again. If that fails, click on the little folder icon to the left, navigate to the file `jobname.result.zip`, right-click and select "Download" (see [screenshot](https://pbs.twimg.com/media/E6wRW2lWUAEOuoe?format=jpg&name=small)).
with open(f"{jobname}.log", "w") as text_file:
  text_file.write(f"num_models={num_models}\n")
  text_file.write(f"use_amber={use_amber}\n")
  text_file.write(f"use_msa={use_msa}\n")
  text_file.write(f"msa_mode={msa_mode}\n")
  text_file.write(f"use_templates={use_templates}\n")
  text_file.write(f"homooligomer={homooligomer}\n")
  text_file.write(f"use_ptm={use_ptm}\n")
citations = {
"Ovchinnikov2021": """@software{Ovchinnikov2021,
author = {Ovchinnikov, Sergey and Steinegger, Martin and Mirdita, Milot},
title = {{ColabFold - Making Protein folding accessible to all via Google Colab}},
year = {2021},
publisher = {Zenodo},
version = {v1.0-alpha},
doi = {10.5281/zenodo.5123297},
url = {https://doi.org/10.5281/zenodo.5123297},
comment = {The AlphaFold notebook}
}""",
"LevyKarin2020": """@article{LevyKarin2020,
author = {{Levy Karin}, Eli and Mirdita, Milot and S{\"{o}}ding, Johannes},
doi = {10.1186/s40168-020-00808-x},
journal = {Microbiome},
number = {1},
title = {{MetaEuk—sensitive, high-throughput gene discovery, and annotation for large-scale eukaryotic metagenomics}},
volume = {8},
year = {2020},
comment = {MetaEuk database}
}""",
"Delmont2020": """@article{Delmont2020,
author = {Delmont, Tom O. and Gaia, Morgan and Hinsinger, Damien D. and Fremont, Paul and Guerra, Antonio Fernandez and Eren, A. Murat and Vanni, Chiara and Kourlaiev, Artem and D'Agata, Leo and Clayssen, Quentin and Villar, Emilie and Labadie, Karine and Cruaud, Corinne and Poulain, Julie and da Silva, Corinne and Wessner, Marc and Noel, Benjamin and Aury, Jean Marc and de Vargas, Colomban and Bowler, Chris and Karsenti, Eric and Pelletier, Eric and Wincker, Patrick and Jaillon, Olivier and Sunagawa, Shinichi and Acinas, Silvia G. and Bork, Peer and Karsenti, Eric and Bowler, Chris and Sardet, Christian and Stemmann, Lars and de Vargas, Colomban and Wincker, Patrick and Lescot, Magali and Babin, Marcel and Gorsky, Gabriel and Grimsley, Nigel and Guidi, Lionel and Hingamp, Pascal and Jaillon, Olivier and Kandels, Stefanie and Iudicone, Daniele and Ogata, Hiroyuki and Pesant, St{\'{e}}phane and Sullivan, Matthew B. and Not, Fabrice and Karp-Boss, Lee and Boss, Emmanuel and Cochrane, Guy and Follows, Michael and Poulton, Nicole and Raes, Jeroen and Sieracki, Mike and Speich, Sabrina},
journal = {bioRxiv},
title = {{Functional repertoire convergence of distantly related eukaryotic plankton lineages revealed by genome-resolved metagenomics}},
year = {2020},
comment = {SMAG database}
}""",
"Mitchell2019": """@article{Mitchell2019,
author = {Mitchell, Alex L and Almeida, Alexandre and Beracochea, Martin and Boland, Miguel and Burgin, Josephine and Cochrane, Guy and Crusoe, Michael R and Kale, Varsha and Potter, Simon C and Richardson, Lorna J and Sakharova, Ekaterina and Scheremetjew, Maxim and Korobeynikov, Anton and Shlemov, Alex and Kunyavskaya, Olga and Lapidus, Alla and Finn, Robert D},
doi = {10.1093/nar/gkz1035},
journal = {Nucleic Acids Res.},
title = {{MGnify: the microbiome analysis resource in 2020}},
year = {2019},
comment = {MGnify database}
}""",
"Eastman2017": """@article{Eastman2017,
author = {Eastman, Peter and Swails, Jason and Chodera, John D. and McGibbon, Robert T. and Zhao, Yutong and Beauchamp, Kyle A. and Wang, Lee-Ping and Simmonett, Andrew C. and Harrigan, Matthew P. and Stern, Chaya D. and Wiewiora, Rafal P. and Brooks, Bernard R. and Pande, Vijay S.},
doi = {10.1371/journal.pcbi.1005659},
journal = {PLOS Comput. Biol.},
number = {7},
title = {{OpenMM 7: Rapid development of high performance algorithms for molecular dynamics}},
volume = {13},
year = {2017},
comment = {Amber relaxation}
}""",
"Jumper2021": """@article{Jumper2021,
author = {Jumper, John and Evans, Richard and Pritzel, Alexander and Green, Tim and Figurnov, Michael and Ronneberger, Olaf and Tunyasuvunakool, Kathryn and Bates, Russ and {\v{Z}}{\'{i}}dek, Augustin and Potapenko, Anna and Bridgland, Alex and Meyer, Clemens and Kohl, Simon A. A. and Ballard, Andrew J. and Cowie, Andrew and Romera-Paredes, Bernardino and Nikolov, Stanislav and Jain, Rishub and Adler, Jonas and Back, Trevor and Petersen, Stig and Reiman, David and Clancy, Ellen and Zielinski, Michal and Steinegger, Martin and Pacholska, Michalina and Berghammer, Tamas and Bodenstein, Sebastian and Silver, David and Vinyals, Oriol and Senior, Andrew W. and Kavukcuoglu, Koray and Kohli, Pushmeet and Hassabis, Demis},
doi = {10.1038/s41586-021-03819-2},
journal = {Nature},
pmid = {34265844},
title = {{Highly accurate protein structure prediction with AlphaFold.}},
year = {2021},
comment = {AlphaFold2 + BFD Database}
}""",
"Mirdita2019": """@article{Mirdita2019,
author = {Mirdita, Milot and Steinegger, Martin and S{\"{o}}ding, Johannes},
doi = {10.1093/bioinformatics/bty1057},
journal = {Bioinformatics},
number = {16},
pages = {2856--2858},
pmid = {30615063},
title = {{MMseqs2 desktop and local web server app for fast, interactive sequence searches}},
volume = {35},
year = {2019},
comment = {MMseqs2 search server}
}""",
"Steinegger2019": """@article{Steinegger2019,
author = {Steinegger, Martin and Meier, Markus and Mirdita, Milot and V{\"{o}}hringer, Harald and Haunsberger, Stephan J. and S{\"{o}}ding, Johannes},
doi = {10.1186/s12859-019-3019-7},
journal = {BMC Bioinform.},
number = {1},
pages = {473},
pmid = {31521110},
title = {{HH-suite3 for fast remote homology detection and deep protein annotation}},
volume = {20},
year = {2019},
comment = {PDB70 database}
}""",
"Mirdita2017": """@article{Mirdita2017,
author = {Mirdita, Milot and von den Driesch, Lars and Galiez, Clovis and Martin, Maria J. and S{\"{o}}ding, Johannes and Steinegger, Martin},
doi = {10.1093/nar/gkw1081},
journal = {Nucleic Acids Res.},
number = {D1},
pages = {D170--D176},
pmid = {27899574},
title = {{Uniclust databases of clustered and deeply annotated protein sequences and alignments}},
volume = {45},
year = {2017},
comment = {Uniclust30/UniRef30 database},
}""",
"Berman2003": """@misc{Berman2003,
author = {Berman, Helen and Henrick, Kim and Nakamura, Haruki},
booktitle = {Nat. Struct. Biol.},
doi = {10.1038/nsb1203-980},
number = {12},
pages = {980},
pmid = {14634627},
title = {{Announcing the worldwide Protein Data Bank}},
volume = {10},
year = {2003},
comment = {templates downloaded from wwPDB server}
}""",
}
to_cite = [ "Jumper2021", "Ovchinnikov2021" ]
if use_msa: to_cite += ["Mirdita2019"]
if use_msa: to_cite += ["Mirdita2017"]
if use_env: to_cite += ["Mitchell2019"]
if use_env: to_cite += ["Delmont2020"]
if use_env: to_cite += ["LevyKarin2020"]
if use_templates: to_cite += ["Steinegger2019"]
if use_templates: to_cite += ["Berman2003"]
if use_amber: to_cite += ["Eastman2017"]
with open(f"{jobname}.bibtex", 'w') as writer:
  for i in to_cite:
    writer.write(citations[i])
    writer.write("\n")

print(f"Found {len(to_cite)} citation{'s' if len(to_cite) > 1 else ''} for tools or databases.")
if use_custom_msa:
  print("Don't forget to cite your custom MSA generation method.")
!zip -FSr $jobname".result.zip" $jobname".log" $a3m_file $jobname"_"*"relaxed_model_"*".pdb" $jobname".bibtex" $jobname"_"*".png"
files.download(f"{jobname}.result.zip")
if save_to_google_drive == True and drive != None:
  uploaded = drive.CreateFile({'title': f"{jobname}.result.zip"})
  uploaded.SetContentFile(f"{jobname}.result.zip")
  uploaded.Upload()
  print(f"Uploaded {jobname}.result.zip to Google Drive with ID {uploaded.get('id')}")
```
# Instructions <a name="Instructions"></a>
**Quick start**
1. Change the runtime type to GPU at "Runtime" -> "Change runtime type" (improves speed).
2. Paste your protein sequence in the input field below.
3. Press "Runtime" -> "Run all".
4. The pipeline consists of 10 steps. The currently running step is indicated by a circle with a stop sign next to it.
**Result zip file contents**
1. PDB formatted structures (relaxed and unrelaxed) sorted by average pLDDT.
2. Plots of the model quality.
3. Plots of the MSA coverage.
4. Parameter log file.
5. A3M formatted input MSA.
6. BibTeX file with citations for all used tools and databases.
At the end of the job a download modal box will pop up with a `jobname.result.zip` file. Additionally, if the `save_to_google_drive` option was selected, the `jobname.result.zip` will be uploaded to your Google Drive.
**Using a custom MSA as input**
To predict the structure with a custom MSA (A3M formatted): (1) change `msa_mode` to "custom", (2) wait for an upload box to appear at the end of the "Input Protein ..." box, then upload your A3M. The first fasta entry of the A3M must be the query sequence without gaps.
To generate good input MSAs the HHblits server can be used here: https://toolkit.tuebingen.mpg.de/tools/hhblits
After submitting your query, click "Query Template MSA" -> "Download Full A3M". Download the a3m file and upload it to the notebook.
**Troubleshooting**
* Try to restart the session "Runtime" -> "Factory reset runtime".
* Check your input sequence.
**Known issues**
* Colab assigns different types of GPUs with varying amounts of memory. Some might not have enough memory to predict the structure.
* Your browser can block the pop-up for downloading the result file. You can choose the `save_to_google_drive` option to upload to Google Drive instead, or manually download the result file: click on the little folder icon on the left, navigate to the file `jobname.result.zip`, right-click and select "Download" (see [screenshot](https://pbs.twimg.com/media/E6wRW2lWUAEOuoe?format=jpg&name=small)).
**Limitations**
* MSAs: MMseqs2 is very precise and sensitive but might find fewer hits than HHblits/HMMer searched against BFD or Mgnify.
* Computing resources: Our MMseqs2 API can probably handle ~20k requests per day.
* For best results, we recommend using the full pipeline: https://github.com/deepmind/alphafold
**Description of the plots**
* **Number of sequences per position** - We want to see at least 30 sequences per position; for best performance, ideally 100 sequences.
* **Predicted lDDT per position (pLDDT)** - model confidence (out of 100) at each position. The higher the better.
* **Predicted Alignment Error (PAE)** - For homooligomers, this can be a useful metric to assess how confident the model is about the interface. The lower the better.
**Bugs**
- If you encounter any bugs, please report the issue to https://github.com/sokrypton/ColabFold/issues
**Q&A**
- *What is `use_ptm`?* DeepMind fine-tuned their original 5 trained models to additionally return the PAE and a predicted TM-score. We use these models by default to generate the PAE plots, but sometimes the original models without fine-tuning give different results. Disabling this option may be useful if you want to reproduce CASP results or want more diversity in the predictions.
**Acknowledgments**
- We would like to thank the AlphaFold team for developing an excellent model and open sourcing the software.
- A colab by Sergey Ovchinnikov ([@sokrypton](https://twitter.com/sokrypton)), Milot Mirdita ([@milot_mirdita](https://twitter.com/milot_mirdita)) and Martin Steinegger ([@thesteinegger](https://twitter.com/thesteinegger)).
- Minkyung Baek ([@minkbaek](https://twitter.com/minkbaek)) and Yoshitaka Moriwaki ([@Ag_smith](https://twitter.com/Ag_smith)) for protein-complex prediction proof-of-concept in AlphaFold2.
- Also, credit to [David Koes](https://github.com/dkoes) for his awesome [py3Dmol](https://3dmol.csb.pitt.edu/) plugin, without whom these notebooks would be quite boring!
- For related notebooks see: [ColabFold](https://github.com/sokrypton/ColabFold)
# Exercise 5 - Variational quantum eigensolver
## Historical background
During the last decade, quantum computers matured quickly and began to realize Feynman's initial dream of a computing system that could simulate the laws of nature in a quantum way. A 2014 paper first authored by Alberto Peruzzo introduced the **Variational Quantum Eigensolver (VQE)**, an algorithm meant for finding the ground state energy (lowest energy) of a molecule, with much shallower circuits than other approaches.[1] And, in 2017, the IBM Quantum team used the VQE algorithm to simulate the ground state energy of the lithium hydride molecule.[2]
VQE's magic comes from outsourcing some of the problem's processing workload to a classical computer. The algorithm starts with a parameterized quantum circuit called an ansatz (a best guess) then finds the optimal parameters for this circuit using a classical optimizer. The VQE's advantage over classical algorithms comes from the fact that a quantum processing unit can represent and store the problem's exact wavefunction, an exponentially hard problem for a classical computer.
This exercise allows you to realize Feynman's dream yourself by setting up a variational quantum eigensolver to determine the ground state and the energy of a molecule. This is interesting because the ground state can be used to calculate various molecular properties, for instance the exact forces on nuclei, which can serve to run molecular dynamics simulations exploring what happens in chemical systems over time.[3]
### References
1. Peruzzo, Alberto, et al. "A variational eigenvalue solver on a photonic quantum processor." Nature communications 5.1 (2014): 1-7.
2. Kandala, Abhinav, et al. "Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets." Nature 549.7671 (2017): 242-246.
3. Sokolov, Igor O., et al. "Microcanonical and finite-temperature ab initio molecular dynamics simulations on quantum computers." Physical Review Research 3.1 (2021): 013125.
## Introduction
For the implementation of VQE, you will be able to make choices on how you want to compose your simulation, in particular focusing on the ansatz quantum circuits.
This is motivated by the fact that one of the important tasks when running VQE on noisy quantum computers is to reduce the loss of fidelity (which introduces errors) by finding the most compact quantum circuit capable of representing the ground state.
Practically, this entails minimizing the number of two-qubit gates (e.g. CNOTs) without losing accuracy.
<div class="alert alert-block alert-success">
<b>Goal</b>
Find the shortest ansatz circuits for representing accurately the ground state of given problems. Be creative!
<b>Plan</b>
First you will learn how to compose a VQE simulation for the smallest molecule and then apply what you have learned to a case of a larger one.
**1. Tutorial - VQE for H$_2$:** familiarize yourself with VQE and select the best combination of ansatz/classical optimizer by running statevector simulations.
**2. Final Challenge - VQE for LiH:** perform a similar investigation as in the first part, restricted to the statevector simulator. Use the qubit-number reduction schemes available in Qiskit and find the optimal circuit for this larger system. Optimize the circuit and use your imagination to select the best building blocks of parameterized circuits, composing them into the most compact ansatz circuit for the ground state, better than the ones already available in Qiskit.
</div>
<div class="alert alert-block alert-danger">
Below is an introduction to the theory behind VQE simulations. You don't have to understand the whole thing before moving on. Don't be scared!
</div>
## Theory
Here below is the general workflow representing how the molecular simulations using VQE are performed on quantum computers.
<img src="resources/workflow.png" width=800 height= 1400/>
The core idea of the hybrid quantum-classical approach is to delegate to the **CPU (classical processing unit)** and the **QPU (quantum processing unit)** the parts that each does best. The CPU takes care of listing the terms that need to be measured to compute the energy and of optimizing the circuit parameters. The QPU implements a quantum circuit representing the quantum state of the system and measures the energy. Some more details are given below:
**CPU** can compute efficiently the energies associated with electron hopping and interactions (one-/two-body integrals by means of a Hartree-Fock calculation) that serve to represent the total energy operator, the Hamiltonian. The [Hartree–Fock (HF) method](https://en.wikipedia.org/wiki/Hartree%E2%80%93Fock_method#:~:text=In%20computational%20physics%20and%20chemistry,system%20in%20a%20stationary%20state.) efficiently computes an approximate ground state wavefunction by assuming that the latter can be represented by a single Slater determinant (e.g. for the H$_2$ molecule in the STO-3G basis, with 4 spin-orbitals and qubits, $|\Psi_{HF} \rangle = |0101 \rangle$ where electrons occupy the lowest-energy spin-orbitals). What the QPU later does in VQE is find a quantum state (a corresponding circuit and its parameters) that can also represent the states associated with the missing electronic correlations (i.e. the $\sum_i c_i |i\rangle$ states in $|\Psi \rangle = c_{HF}|\Psi_{HF} \rangle + \sum_i c_i |i\rangle $ where $i$ is a bitstring).
After a HF calculation, operators in the Hamiltonian are mapped to measurements on a QPU using fermion-to-qubit transformations (see Hamiltonian section below). One can further analyze the properties of the system to reduce the number of qubits or shorten the ansatz circuit:
- For Z2 symmetries and two-qubit reduction, see [Bravyi *et al*, 2017](https://arxiv.org/abs/1701.08213v1).
- For entanglement forging, see [Eddins *et al.*, 2021](https://arxiv.org/abs/2104.10220v1).
- For the adaptive ansatz, see [Grimsley *et al.*, 2018](https://arxiv.org/abs/1812.11173v2), [Rattew *et al.*, 2019](https://arxiv.org/abs/1910.09694), [Tang *et al.*, 2019](https://arxiv.org/abs/1911.10205). You may use the ideas found in these works to shorten the quantum circuits.
**QPU** implements quantum circuits (see Ansatzes section below), parameterized by angles $\vec\theta$, that would represent the ground state wavefunction by placing various single qubit rotations and entanglers (e.g. two-qubit gates). The quantum advantage lies in the fact that QPU can efficiently represent and store the exact wavefunction, which becomes intractable on a classical computer for systems that have more than a few atoms. Finally, QPU measures the operators of choice (e.g. ones representing a Hamiltonian).
Below we go into slightly more mathematical detail for each component of the VQE algorithm. It might also be helpful to watch our [video episode about VQE](https://www.youtube.com/watch?v=Z-A6G0WVI9w).
### Hamiltonian
Here we explain how we obtain the operators that we need to measure to obtain the energy of a given system.
These terms are included in the molecular Hamiltonian defined as:
$$
\begin{aligned}
\hat{H} &=\sum_{r s} h_{r s} \hat{a}_{r}^{\dagger} \hat{a}_{s} \\
&+\frac{1}{2} \sum_{p q r s} g_{p q r s} \hat{a}_{p}^{\dagger} \hat{a}_{q}^{\dagger} \hat{a}_{r} \hat{a}_{s}+E_{N N}
\end{aligned}
$$
with
$$
h_{r s}=\int \phi_{r}^{*}(x)\left(-\frac{1}{2} \nabla^{2}-\sum_{I} \frac{Z_{I}}{\left|R_{I}-x\right|}\right) \phi_{s}(x) \, d x
$$
$$
g_{p q r s}=\int \frac{\phi_{p}^{*}\left(r_{1}\right) \phi_{q}^{*}\left(r_{2}\right) \phi_{r}\left(r_{2}\right) \phi_{s}\left(r_{1}\right)}{\left|r_{1}-r_{2}\right|}
$$
where $h_{r s}$ and $g_{p q r s}$ are the one-/two-body integrals (computed via the Hartree-Fock method) and $E_{N N}$ is the nuclear repulsion energy.
The one-body integrals represent the kinetic energy of the electrons and their interaction with nuclei.
The two-body integrals represent the electron-electron interaction.
The $\hat{a}_{r}^{\dagger}, \hat{a}_{r}$ operators represent creation and annihilation of an electron in spin-orbital $r$ and require mappings to qubit operators, so that we can measure them on a quantum computer.
Note that VQE minimizes the electronic energy so you have to retrieve and add the nuclear repulsion energy $E_{NN}$ to compute the total energy.
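As a quick sanity check of this bookkeeping, here is a back-of-the-envelope sketch (plain Python with illustrative numbers, not a Qiskit API call): for H$_2$ the nuclear repulsion is simply $Z_1 Z_2 / R$ in atomic units, and adding it to the electronic energy gives the total energy.

```python
# Total-energy bookkeeping: VQE minimizes only the electronic energy, so the
# nuclear repulsion energy E_NN must be added back by hand.
# For H2, E_NN = Z1 * Z2 / R with R in bohr (atomic units).
angstrom_to_bohr = 1 / 0.529177
r = 0.739 * angstrom_to_bohr        # H-H distance used in this tutorial, in bohr
e_nn = 1.0 * 1.0 / r                # both nuclear charges are 1
electronic_energy = -1.85336        # exact H2 electronic energy quoted later in this notebook (Ha)
total_energy = electronic_energy + e_nn
print(round(total_energy, 4))       # close to the known H2/STO-3G ground state energy
```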
So, for every non-zero matrix element in the $h_{r s}$ and $g_{p q r s}$ tensors, we can construct a corresponding Pauli string (a tensor product of Pauli operators) via a fermion-to-qubit transformation.
For instance, in the Jordan-Wigner mapping, the creation operator for orbital $r = 3$ becomes the following Pauli string:
$$
\hat a_{3}^{\dagger}= \hat \sigma_z \otimes \hat \sigma_z \otimes\left(\frac{ \hat \sigma_x-i \hat \sigma_y}{2}\right) \otimes 1 \otimes \cdots \otimes 1
$$
where $\hat \sigma_x, \hat \sigma_y, \hat \sigma_z$ are the well-known Pauli operators. The tensor products of $\hat \sigma_z$ operators are placed to enforce the fermionic anti-commutation relations.
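This bookkeeping can be checked numerically. The sketch below (plain NumPy, independent of Qiskit) builds the Pauli string above for a hypothetical 4-orbital system and verifies the fermionic anti-commutation relation $\{\hat a_3, \hat a_3^\dagger\} = 1$:

```python
import numpy as np

# Pauli matrices and the 2x2 identity
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    """Tensor product of a list of 2x2 operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Jordan-Wigner creation operator for orbital r = 3 in a 4-orbital system:
# sigma_z strings on the orbitals before r, the raising combination on r,
# identity on the orbitals after r.
raising = (sx - 1j * sy) / 2
a3_dag = kron_all([sz, sz, raising, I2])
a3 = a3_dag.conj().T

# The sigma_z factors enforce {a_3, a_3^dagger} = identity
anti = a3 @ a3_dag + a3_dag @ a3
print(np.allclose(anti, np.eye(16)))  # True
```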
A representation of the Jordan-Wigner mapping between the 14 spin-orbitals of a water molecule and 14 qubits is given below:
<img src="resources/mapping.png" width=600 height= 1200/>
Then, one simply replaces the one-/two-body excitations (e.g. $\hat{a}_{r}^{\dagger} \hat{a}_{s}$, $\hat{a}_{p}^{\dagger} \hat{a}_{q}^{\dagger} \hat{a}_{r} \hat{a}_{s}$) in the Hamiltonian by corresponding Pauli strings (i.e. $\hat{P}_i$, see picture above). The resulting operator set is ready to be measured on the QPU.
For additional details see [Seeley *et al.*, 2012](https://arxiv.org/abs/1208.5986v1).
### Ansatzes
There are mainly two types of ansatzes you can use for chemical problems.
- **q-UCC ansatzes** are physically inspired, and roughly map the electron excitations to quantum circuits. The q-UCCSD ansatz (`UCCSD` in Qiskit) possesses all possible single and double electron excitations. The paired double q-pUCCD (`PUCCD`) and singlet q-UCCD0 (`SUCCD`) consider only a subset of such excitations (meaning significantly shorter circuits) and have proved to provide good results for dissociation profiles. For instance, q-pUCCD has no single excitations and its double excitations are paired, as in the image below.
- **Heuristic ansatzes (`TwoLocal`)** were invented to shorten the circuit depth but still be able to represent the ground state.
As in the figure below, the R gates represent the parameterized single-qubit rotations and $U_{CNOT}$ the entanglers (two-qubit gates). The idea is that after repeating the same block $D$ times (with independent parameters) one can reach the ground state.
For additional details refer to [Sokolov *et al.* (q-UCC ansatzes)](https://arxiv.org/abs/1911.10864v2) and [Barkoutsos *et al.* (Heuristic ansatzes)](https://arxiv.org/pdf/1805.04340.pdf).
<img src="resources/ansatz.png" width=700 height= 1200/>
### VQE
Given a Hermitian operator $\hat H$ with an unknown minimum eigenvalue $E_{min}$, associated with the eigenstate $|\psi_{min}\rangle$, VQE provides an estimate $E_{\theta}$, bounded by $E_{min}$:
\begin{align*}
E_{min} \le E_{\theta} \equiv \langle \psi(\theta) |\hat H|\psi(\theta) \rangle
\end{align*}
where $|\psi(\theta)\rangle$ is the trial state associated with $E_{\theta}$. By applying a parameterized circuit, represented by $U(\theta)$, to some arbitrary starting state $|\psi\rangle$, the algorithm obtains an estimate $U(\theta)|\psi\rangle \equiv |\psi(\theta)\rangle$ of $|\psi_{min}\rangle$. The estimate is iteratively optimized by a classical optimizer that changes the parameters $\theta$ to minimize the expectation value $\langle \psi(\theta) |\hat H|\psi(\theta) \rangle$.
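The whole loop fits in a few lines for a toy problem. The sketch below (plain NumPy/SciPy, not the Qiskit VQE class) uses the single-qubit Hamiltonian $\hat H = \hat\sigma_z$, the ansatz $R_y(\theta)|0\rangle$, and COBYLA as the classical optimizer; the exact minimum eigenvalue is $-1$:

```python
import numpy as np
from scipy.optimize import minimize

# Toy Hamiltonian: a single-qubit H = sigma_z, whose ground state |1> has energy -1
H = np.array([[1.0, 0.0], [0.0, -1.0]])

def ansatz_state(theta):
    """Trial state Ry(theta)|0> = [cos(theta/2), sin(theta/2)]."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(params):
    """Expectation value <psi(theta)| H |psi(theta)> = cos(theta)."""
    psi = ansatz_state(params[0])
    return float(psi.conj() @ H @ psi)

# The classical optimizer minimizes the expectation value over the circuit parameter
res = minimize(energy, x0=[0.01], method='COBYLA')
print(round(res.fun, 4))  # close to the exact minimum eigenvalue -1.0
```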
As applications of VQE, there are possibilities in molecular dynamics simulations, see [Sokolov *et al.*, 2021](https://arxiv.org/abs/2008.08144v1), and excited states calculations, see [Ollitrault *et al.*, 2019](https://arxiv.org/abs/1910.12890) to name a few.
<div class="alert alert-block alert-danger">
<b> References for additional details</b>
For the qiskit-nature tutorial that implements this algorithm see [here](https://qiskit.org/documentation/nature/tutorials/01_electronic_structure.html)
but this won't be sufficient; you might also want to look at the [main page of the GitHub repository](https://github.com/Qiskit/qiskit-nature) and the [test folder](https://github.com/Qiskit/qiskit-nature/tree/main/test) containing tests written for each component, as they provide base code for the use of each functionality.
</div>
## Part 1: Tutorial - VQE for H$_2$ molecule
In this part, you will simulate H$_2$ molecule using the STO-3G basis with the PySCF driver and Jordan-Wigner mapping.
We will guide you through the following parts so then you can tackle harder problems.
#### 1. Driver
The interfaces to the classical chemistry codes that are available in Qiskit are called drivers.
For example, the `PSI4Driver`, `PyQuanteDriver`, and `PySCFDriver` are available.
By running a driver (a Hartree-Fock calculation for a given basis set and molecular geometry) in the cell below, we obtain all the necessary information about our molecule to then apply a quantum algorithm.
```
#from qiskit_nature.drivers import PySCFDriver
#molecule = "H .0 .0 .0; H .0 .0 0.739"
#driver = PySCFDriver(atom=molecule)
#qmolecule = driver.run()
from qiskit_nature.drivers import PySCFDriver
molecule = 'Li 0.0 0.0 0.0; H 0.0 0.0 1.5474'
driver = PySCFDriver(atom=molecule)
qmolecule = driver.run()
```
<div class="alert alert-block alert-danger">
<b> Tutorial questions 1</b>
Look into the attributes of `qmolecule` and answer the questions below.
1. We need to know the basic characteristics of our molecule. What is the total number of electrons in your system?
2. What is the number of molecular orbitals?
3. What is the number of spin-orbitals?
4. How many qubits would you need to simulate this molecule with the Jordan-Wigner mapping?
5. What is the value of the nuclear repulsion energy?
You can find the answers at the end of this notebook.
</div>
```
# WRITE YOUR CODE BETWEEN THESE LINES - START
n_el = qmolecule.num_alpha + qmolecule.num_beta  # total number of electrons
n_mo = qmolecule.num_molecular_orbitals          # number of molecular orbitals
n_so = 2 * qmolecule.num_molecular_orbitals      # number of spin-orbitals
n_q = 2 * qmolecule.num_molecular_orbitals       # one qubit per spin-orbital (Jordan-Wigner)
e_nn = qmolecule.nuclear_repulsion_energy        # nuclear repulsion energy
# WRITE YOUR CODE BETWEEN THESE LINES - END
```
#### 2. Electronic structure problem
You can then create an `ElectronicStructureProblem` that can produce the list of fermionic operators before mapping them to qubits (Pauli strings).
```
from qiskit_nature.problems.second_quantization.electronic import ElectronicStructureProblem
from qiskit_nature.transformers import FreezeCoreTransformer
#freezeCoreTransformer = FreezeCoreTransformer()
#qmolecule = freezeCoreTransformer.transform(qmolecule)
problem = ElectronicStructureProblem(driver,q_molecule_transformers=[FreezeCoreTransformer(freeze_core=True,remove_orbitals=[3,4])])
#problem = ElectronicStructureProblem(driver)
# Generate the second-quantized operators
second_q_ops = problem.second_q_ops()
# Hamiltonian
main_op = second_q_ops[0]
```
#### 3. QubitConverter
The `QubitConverter` defines the mapping that you will use in the simulation. You can try different mappings, but
we will stick to the `JordanWignerMapper`, as it allows a simple correspondence: one qubit represents one spin-orbital of the molecule.
```
from qiskit_nature.mappers.second_quantization import ParityMapper, BravyiKitaevMapper, JordanWignerMapper
from qiskit_nature.converters.second_quantization.qubit_converter import QubitConverter
# Setup the mapper and qubit converter
mapper_type = 'ParityMapper'
if mapper_type == 'ParityMapper':
mapper = ParityMapper()
elif mapper_type == 'JordanWignerMapper':
mapper = JordanWignerMapper()
elif mapper_type == 'BravyiKitaevMapper':
mapper = BravyiKitaevMapper()
converter = QubitConverter(mapper=mapper, two_qubit_reduction=True,z2symmetry_reduction=[1])
#converter = QubitConverter(mapper=mapper, two_qubit_reduction=True)
# The fermionic operators are mapped to qubit operators
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
qubit_op = converter.convert(main_op, num_particles=num_particles)
```
#### 4. Initial state
As we described in the Theory section, a good initial state in chemistry is the HF state (i.e. $|\Psi_{HF} \rangle = |0101 \rangle$). We can initialize it as follows:
```
from qiskit_nature.circuit.library import HartreeFock
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
init_state = HartreeFock(num_spin_orbitals, num_particles, converter)
print(init_state)
```
#### 5. Ansatz
One of the most important choices is the quantum circuit that you choose to approximate your ground state.
Here is the example of qiskit circuit library that contains many possibilities for making your own circuit.
```
from qiskit.circuit.library import TwoLocal
from qiskit_nature.circuit.library import UCCSD, PUCCD, SUCCD
# Choose the ansatz
ansatz_type = "TwoLocal"
# Parameters for q-UCC ansatzes
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
# Put arguments for twolocal
if ansatz_type == "TwoLocal":
# Single qubit rotations that are placed on all qubits with independent parameters
rotation_blocks = ['ry', 'rz']
# Entangling gates
entanglement_blocks = 'cx'
# How the qubits are entangled
entanglement = 'full'
# Repetitions of rotation_blocks + entanglement_blocks with independent parameters
repetitions = 3
# Skip the final rotation_blocks layer
skip_final_rotation_layer = True
ansatz = TwoLocal(qubit_op.num_qubits, rotation_blocks, entanglement_blocks, reps=repetitions,
entanglement=entanglement, skip_final_rotation_layer=skip_final_rotation_layer)
# Add the initial state
ansatz.compose(init_state, front=True, inplace=True)
elif ansatz_type == "UCCSD":
ansatz = UCCSD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "PUCCD":
ansatz = PUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "SUCCD":
ansatz = SUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "Custom":
# Example of how to write your own circuit
from qiskit.circuit import Parameter, QuantumCircuit, QuantumRegister
# Define the variational parameter
theta = Parameter('a')
n = qubit_op.num_qubits
# Make an empty quantum circuit
qc = QuantumCircuit(qubit_op.num_qubits)
qubit_label = 0
# Place a Hadamard gate
qc.h(qubit_label)
# Place a CNOT ladder
for i in range(n-1):
qc.cx(i, i+1)
# Visual separator
qc.barrier()
# rz rotations on all qubits
qc.rz(theta, range(n))
ansatz = qc
ansatz.compose(init_state, front=True, inplace=True)
print(ansatz)
```
#### 6. Backend
This is where you specify the simulator or device where you want to run your algorithm.
We will focus on the `statevector_simulator` in this challenge.
```
from qiskit import Aer
backend = Aer.get_backend('statevector_simulator')
```
#### 7. Optimizer
The optimizer guides the evolution of the ansatz parameters, so it is very important to investigate the energy convergence, as it determines the number of measurements that have to be performed on the QPU.
A clever choice can drastically reduce the number of needed energy evaluations.
```
from qiskit.algorithms.optimizers import COBYLA, L_BFGS_B, SPSA, SLSQP
optimizer_type = 'L_BFGS_B'
# You may want to tune the parameters
# of each optimizer, here the defaults are used
if optimizer_type == 'COBYLA':
optimizer = COBYLA(maxiter=500)
elif optimizer_type == 'L_BFGS_B':
optimizer = L_BFGS_B(maxfun=500)
elif optimizer_type == 'SPSA':
optimizer = SPSA(maxiter=500)
elif optimizer_type == 'SLSQP':
optimizer = SLSQP(maxiter=500)
```
#### 8. Exact eigensolver
For learning purposes, we can solve the problem exactly via diagonalization of the Hamiltonian matrix, so we know where to aim with VQE.
Of course, the dimension of this matrix scales exponentially with the number of molecular orbitals, so you can try doing this for a large molecule of your choice and see how slow it becomes.
For very large systems you would run out of memory just trying to store the wavefunction.
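A back-of-the-envelope estimate of this exponential wall (the molecule sizes beyond H$_2$'s 2 and LiH's 6 molecular orbitals are hypothetical):

```python
# The state vector alone has 2**(2 * n_mo) complex amplitudes
# (one qubit per spin-orbital), before even building the Hamiltonian matrix.
def statevector_memory_gb(n_mo):
    """Memory (GB) to store a complex128 wavefunction for n_mo molecular orbitals."""
    n_qubits = 2 * n_mo
    return (2 ** n_qubits) * 16 / 1e9   # 16 bytes per complex128 amplitude

for n_mo in (2, 6, 20, 30):             # H2, LiH, and two hypothetical larger systems
    print(n_mo, 2 * n_mo, statevector_memory_gb(n_mo))
```

Already at 30 molecular orbitals (60 qubits) the wavefunction alone would need tens of billions of gigabytes.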
```
from qiskit_nature.algorithms.ground_state_solvers.minimum_eigensolver_factories import NumPyMinimumEigensolverFactory
from qiskit_nature.algorithms.ground_state_solvers import GroundStateEigensolver
import numpy as np
def exact_diagonalizer(problem, converter):
solver = NumPyMinimumEigensolverFactory()
calc = GroundStateEigensolver(converter, solver)
result = calc.solve(problem)
return result
result_exact = exact_diagonalizer(problem, converter)
exact_energy = np.real(result_exact.eigenenergies[0])
print("Exact electronic energy", exact_energy)
print(result_exact)
# The targeted electronic energy for H2 is -1.85336 Ha
# Check with your VQE result.
```
#### 9. VQE and initial parameters for the ansatz
Now we can import the VQE class and run the algorithm.
```
from qiskit.algorithms import VQE
from IPython.display import display, clear_output
# Print and save the data in lists
def callback(eval_count, parameters, mean, std):
# Overwrites the same line when printing
display("Evaluation: {}, Energy: {}, Std: {}".format(eval_count, mean, std))
clear_output(wait=True)
counts.append(eval_count)
values.append(mean)
params.append(parameters)
deviation.append(std)
counts = []
values = []
params = []
deviation = []
# Set initial parameters of the ansatz
# We choose a fixed small displacement
# So all participants start from similar starting point
try:
initial_point = [0.01] * len(ansatz.ordered_parameters)
except:
initial_point = [0.01] * ansatz.num_parameters
algorithm = VQE(ansatz,
optimizer=optimizer,
quantum_instance=backend,
callback=callback,
initial_point=initial_point)
result = algorithm.compute_minimum_eigenvalue(qubit_op)
print(result)
```
#### 10. Scoring function
We need to judge how good your VQE simulations, i.e. your choice of ansatz and optimizer, are.
For this, we implemented the following simple scoring function:
$$ score = N_{CNOT}$$
where $N_{CNOT}$ is the number of CNOTs.
But you have to reach chemical accuracy, which is $\delta E_{chem} = 0.004$ Ha $= 4$ mHa; this may be hard to reach depending on the problem.
You have to reach this accuracy with a minimal number of CNOTs to win the challenge.
The lower the score the better!
```
# Store results in a dictionary
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import Unroller
# Unroller transpile your circuit into CNOTs and U gates
pass_ = Unroller(['u', 'cx'])
pm = PassManager(pass_)
ansatz_tp = pm.run(ansatz)
cnots = ansatz_tp.count_ops()['cx']
score = cnots
accuracy_threshold = 4.0 # in mHa
energy = result.optimal_value
if ansatz_type == "TwoLocal":
result_dict = {
'optimizer': optimizer.__class__.__name__,
'mapping': converter.mapper.__class__.__name__,
'ansatz': ansatz.__class__.__name__,
'rotation blocks': rotation_blocks,
'entanglement_blocks': entanglement_blocks,
'entanglement': entanglement,
'repetitions': repetitions,
'skip_final_rotation_layer': skip_final_rotation_layer,
'energy (Ha)': energy,
'error (mHa)': (energy-exact_energy)*1000,
'pass': (energy-exact_energy)*1000 <= accuracy_threshold,
'# of parameters': len(result.optimal_point),
'final parameters': result.optimal_point,
'# of evaluations': result.optimizer_evals,
'optimizer time': result.optimizer_time,
'# of qubits': int(qubit_op.num_qubits),
'# of CNOTs': cnots,
'score': score}
else:
result_dict = {
'optimizer': optimizer.__class__.__name__,
'mapping': converter.mapper.__class__.__name__,
'ansatz': ansatz.__class__.__name__,
'rotation blocks': None,
'entanglement_blocks': None,
'entanglement': None,
'repetitions': None,
'skip_final_rotation_layer': None,
'energy (Ha)': energy,
'error (mHa)': (energy-exact_energy)*1000,
'pass': (energy-exact_energy)*1000 <= accuracy_threshold,
'# of parameters': len(result.optimal_point),
'final parameters': result.optimal_point,
'# of evaluations': result.optimizer_evals,
'optimizer time': result.optimizer_time,
'# of qubits': int(qubit_op.num_qubits),
'# of CNOTs': cnots,
'score': score}
# Plot the results
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
ax.set_xlabel('Iterations')
ax.set_ylabel('Energy')
ax.grid()
fig.text(0.7, 0.75, f'Energy: {result.optimal_value:.3f}\nScore: {score:.0f}')
plt.title(f"{result_dict['optimizer']}-{result_dict['mapping']}\n{result_dict['ansatz']}")
ax.plot(counts, values)
ax.axhline(exact_energy, linestyle='--')
fig_title = f"\
{result_dict['optimizer']}-\
{result_dict['mapping']}-\
{result_dict['ansatz']}-\
Energy({result_dict['energy (Ha)']:.3f})-\
Score({result_dict['score']:.0f})\
.png"
fig.savefig(fig_title, dpi=300)
# Display and save the data
import pandas as pd
import os.path
filename = 'results_h2.csv'
if os.path.isfile(filename):
result_df = pd.read_csv(filename)
result_df = result_df.append([result_dict])
else:
result_df = pd.DataFrame.from_dict([result_dict])
result_df.to_csv(filename)
result_df[['optimizer','ansatz', '# of qubits', '# of parameters','rotation blocks', 'entanglement_blocks',
'entanglement', 'repetitions', 'error (mHa)', 'pass', 'score']]
```
<div class="alert alert-block alert-danger">
<b>Tutorial questions 2</b>
Experiment with all the parameters and then:
1. Can you find your best (best score) heuristic ansatz (by modifying parameters of `TwoLocal` ansatz) and optimizer?
2. Can you find your best q-UCC ansatz (choose among `UCCSD, PUCCD or SUCCD` ansatzes) and optimizer?
3. In the cell where we define the ansatz,
can you modify the `Custom` ansatz by placing gates yourself to write a better circuit than your `TwoLocal` circuit?
For each question, give `ansatz` objects.
Remember, you have to reach the chemical accuracy $|E_{exact} - E_{VQE}| \leq 0.004 $ Ha $= 4$ mHa.
</div>
```
# WRITE YOUR CODE BETWEEN THESE LINES - START
# WRITE YOUR CODE BETWEEN THESE LINES - END
```
## Part 2: Final Challenge - VQE for LiH molecule
In this part, you will simulate LiH molecule using the STO-3G basis with the PySCF driver.
<div class="alert alert-block alert-success">
<b>Goal</b>
Experiment with all the parameters and then find your best ansatz. You can be as creative as you want!
For each question, give `ansatz` objects as for Part 1. Your final score will be based only on Part 2.
</div>
Be aware that the system is larger now. Work out how many qubits you would need for this system by retrieving the number of spin-orbitals.
### Reducing the problem size
You might want to reduce the number of qubits for your simulation:
- you could freeze the core electrons that do not contribute significantly to chemistry and consider only the valence electrons. Qiskit already has this functionality implemented. So inspect the different transformers in `qiskit_nature.transformers` and find the one that performs the freeze core approximation.
- you could use `ParityMapper` with `two_qubit_reduction=True` to eliminate 2 qubits.
- you could reduce the number of qubits by inspecting the symmetries of your Hamiltonian. Find a way to use `Z2Symmetries` in Qiskit.
### Custom ansatz
You might want to explore the ideas proposed in [Grimsley *et al.*, 2018](https://arxiv.org/abs/1812.11173v2), [Rattew *et al.*, 2019](https://arxiv.org/abs/1910.09694), and [Tang *et al.*, 2019](https://arxiv.org/abs/1911.10205).
You can even try machine learning algorithms to generate the best ansatz circuits.
### Setup the simulation
Let's now run the Hartree-Fock calculation and the rest is up to you!
<div class="alert alert-block alert-danger">
<b>Attention</b>
We give below the `driver`, the `initial_point`, and the `initial_state`, which should remain as given.
You are free then to explore all other things available in Qiskit.
So you have to start from this initial point (all parameters set to 0.01):
`initial_point = [0.01] * len(ansatz.ordered_parameters)`
or
`initial_point = [0.01] * ansatz.num_parameters`
and your initial state has to be the Hartree-Fock state:
`init_state = HartreeFock(num_spin_orbitals, num_particles, converter)`
For each question, give an `ansatz` object.
Remember you have to reach the chemical accuracy $|E_{exact} - E_{VQE}| \leq 0.004 $ Ha $= 4$ mHa.
</div>
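The chemical-accuracy target above is easy to check programmatically once you have both energies. A minimal sketch (the helper function and the example energy values are ours, not part of the challenge code):

```python
def reaches_chemical_accuracy(e_exact, e_vqe, threshold=0.004):
    """Return True if |E_exact - E_VQE| <= 4 mHa (chemical accuracy)."""
    return abs(e_exact - e_vqe) <= threshold

# Hypothetical energies in Hartree: a 3 mHa error passes, a 5 mHa error does not
print(reaches_chemical_accuracy(-7.882, -7.879))  # True
print(reaches_chemical_accuracy(-7.882, -7.877))  # False
```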
```
from qiskit_nature.drivers import PySCFDriver
molecule = 'Li 0.0 0.0 0.0; H 0.0 0.0 1.5474'
driver = PySCFDriver(atom=molecule)
qmolecule = driver.run()
# WRITE YOUR CODE BETWEEN THESE LINES - START
from qiskit_nature.problems.second_quantization.electronic import ElectronicStructureProblem
from qiskit_nature.transformers import FreezeCoreTransformer
from qiskit_nature.mappers.second_quantization import ParityMapper
from qiskit_nature.converters.second_quantization.qubit_converter import QubitConverter
from qiskit_nature.circuit.library import HartreeFock
from qiskit_nature.circuit.library import UCCSD
from qiskit.circuit.library import TwoLocal
from qiskit import Aer
from qiskit.algorithms.optimizers import COBYLA
from qiskit.algorithms.optimizers import SPSA
from qiskit.algorithms.optimizers import L_BFGS_B
from qiskit_nature.algorithms.ground_state_solvers.minimum_eigensolver_factories import NumPyMinimumEigensolverFactory
from qiskit_nature.algorithms.ground_state_solvers import GroundStateEigensolver
import numpy as np
backend = Aer.get_backend('statevector_simulator')
n_el = qmolecule.num_alpha + qmolecule.num_beta
n_mo = qmolecule.num_molecular_orbitals
n_so = 2 * qmolecule.num_molecular_orbitals
n_q = 2 * qmolecule.num_molecular_orbitals
e_nn = qmolecule.nuclear_repulsion_energy
############################################################
freezeCoreTransformer = FreezeCoreTransformer()
qmolecule = freezeCoreTransformer.transform(qmolecule)
problem = ElectronicStructureProblem(driver,q_molecule_transformers=[FreezeCoreTransformer(freeze_core=True,remove_orbitals=[3])])
#problem = ElectronicStructureProblem(driver)
# Generate the second-quantized operators
second_q_ops = problem.second_q_ops()
# Hamiltonian
main_op = second_q_ops[0]
############################################################
# Setup the mapper and qubit converter
converter = QubitConverter(mapper=ParityMapper(), two_qubit_reduction=True,z2symmetry_reduction=[1])
#converter = QubitConverter(mapper=mapper, two_qubit_reduction=True)
# The fermionic operators are mapped to qubit operators
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
qubit_op = converter.convert(main_op, num_particles=num_particles)
###########################################################
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
init_state = HartreeFock(num_spin_orbitals, num_particles, converter)
#print(init_state)
###########################################################
ansatz = UCCSD(converter, num_particles, num_spin_orbitals, initial_state=init_state)  # UCCSD takes (converter, num_particles, num_spin_orbitals); TwoLocal does not
#print(ansatz)
###########################################################
optimizer = L_BFGS_B(maxfun=500)
###########################################################
def exact_diagonalizer(problem, converter):
solver = NumPyMinimumEigensolverFactory()
calc = GroundStateEigensolver(converter, solver)
result = calc.solve(problem)
return result
result_exact = exact_diagonalizer(problem, converter)
exact_energy = np.real(result_exact.eigenenergies[0])
#print("Exact electronic energy", exact_energy)
#print(result_exact)
########################################################
from qiskit.algorithms import VQE
from IPython.display import display, clear_output
# Print and save the data in lists
def callback(eval_count, parameters, mean, std):
# Overwrites the same line when printing
display("Evaluation: {}, Energy: {}, Std: {}".format(eval_count, mean, std))
clear_output(wait=True)
counts.append(eval_count)
values.append(mean)
params.append(parameters)
deviation.append(std)
counts = []
values = []
params = []
deviation = []
# Set initial parameters of the ansatz
# We choose a fixed small displacement
# So all participants start from similar starting point
try:
initial_point = [0.01] * len(ansatz.ordered_parameters)
except:
initial_point = [0.01] * ansatz.num_parameters
algorithm = VQE(ansatz,
optimizer=optimizer,
quantum_instance=backend,
callback=callback,
initial_point=initial_point)
result = algorithm.compute_minimum_eigenvalue(qubit_op)
print(result)
# WRITE YOUR CODE BETWEEN THESE LINES - END
# Check your answer using following code
from qc_grader import grade_ex5
freeze_core = True # set to True if you froze the core electrons
grade_ex5(ansatz,qubit_op,result,freeze_core)
# Submit your answer. You can re-submit at any time.
from qc_grader import submit_ex5
submit_ex5(ansatz,qubit_op,result,freeze_core)
```
## Answers for Part 1
<div class="alert alert-block alert-danger">
<b>Questions</b>
Look into the attributes of `qmolecule` and answer the questions below.
1. We need to know the basic characteristics of our molecule. What is the total number of electrons in your system?
2. What is the number of molecular orbitals?
3. What is the number of spin-orbitals?
4. How many qubits would you need to simulate this molecule with Jordan-Wigner mapping?
5. What is the value of the nuclear repulsion energy?
</div>
<div class="alert alert-block alert-success">
<b>Answers </b>
1. `n_el = qmolecule.num_alpha + qmolecule.num_beta`
2. `n_mo = qmolecule.num_molecular_orbitals`
3. `n_so = 2 * qmolecule.num_molecular_orbitals`
4. `n_q = 2 * qmolecule.num_molecular_orbitals`
5. `e_nn = qmolecule.nuclear_repulsion_energy`
</div>
## Additional information
**Created by:** Igor Sokolov, Junye Huang, Rahul Pratap Singh
**Version:** 1.0.1
| github_jupyter |
OK, to begin we need to import some standard Python modules
```
# -*- coding: utf-8 -*-
"""
Created on Fri Feb 12 13:21:45 2016
@author: GrinevskiyAS
"""
from __future__ import division
import numpy as np
from numpy import sin,cos,tan,pi,sqrt
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
%matplotlib inline
font = {'family': 'Arial', 'weight': 'normal', 'size':14}
mpl.rc('font', **font)
```
First, let us set up the working area.
```
#This would be the size of each grid cell (X is the spatial coordinate, T is two-way time)
xstep=10
tstep=10
#size of the whole grid
xmax = 301
tmax = 201
#that's the arrays of x and t
xarray=np.arange(0, xmax, xstep)
tarray=np.arange(0, tmax, tstep)
#now finally we create a 2D array img, which is all zeros for now; later we will add some amplitudes to it
img=np.zeros((len(xarray), len(tarray)))
```
Let's show our all-zero image
```
plt.imshow(img.T,interpolation='none',cmap=cm.Greys, vmin=-2,vmax=2, extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2, tarray[-1]+tstep/2, tarray[0]-tstep/2])
```
What we are now going to do is create a class named **`Hyperbola`**
Each object of this class can compute traveltimes to a certain subsurface point (a diffractor) and plot that point's response on a grid
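The traveltime curve the class evaluates is `t(x) = sqrt(t0**2 + (2*(x - x0)/v)**2)`, which collapses to `t0` at the diffractor position `x = x0`. A quick standalone check (the `traveltime` helper below is ours, written only to illustrate the formula):

```python
from math import sqrt

def traveltime(x, x0, t0, v):
    # two-way traveltime of a diffraction hyperbola with apex (x0, t0)
    return sqrt(t0**2 + (2 * (x - x0) / v)**2)

print(traveltime(100, 100, 30, 2))  # at the apex: exactly t0 = 30.0
print(traveltime(130, 100, 30, 2))  # 30 m away: sqrt(30**2 + 30**2) ~ 42.43
```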
```
class Hyperbola:
def __init__(self, xarray, tarray, x0, v, t0):
###input parameters define a difractor's position (x0,t0), P-wave velocity of homogeneous subsurface, and x- and t-arrays to compute traveltimes on.
###
self.x=xarray
self.x0=x0
self.t0=t0
self.v=v
#compute traveltimes
self.t=sqrt(t0**2 + (2*(xarray-x0)/v)**2)
#obtain some grid parameters
xstep=xarray[1]-xarray[0]
tbegin=tarray[0]
tend=tarray[-1]
tstep=tarray[1]-tarray[0]
#delete t's and x's for samples where t exceeds maxt
self.x=self.x[ (self.t>=tbegin) & (self.t <= tend) ]
self.t=self.t[ (self.t>=tbegin) & (self.t <= tend) ]
self.imgind=((self.x-xarray[0])/xstep).astype(int)
#compute amplitudes' fading according to geometrical spreading
self.amp = 1/(self.t/self.t0)
self.grid_resample(xarray, tarray)
def grid_resample(self, xarray, tarray):
# that's a function that computes at which 'cells' of image should we place the hyperbola
        tend=tarray[-1]
        tstep=tarray[1]-tarray[0]
        xstep=xarray[1]-xarray[0]
        self.xind=((self.x-xarray[0])/xstep).astype(int) #X cells numbers
self.tind=np.round((self.t-tarray[0])/tstep).astype(int) #T cells numbers
self.tind=self.tind[self.tind*tstep<=tarray[-1]] #delete T's exceeding max.T
self.tgrid=tarray[self.tind] # get 'gridded' T-values
self.coord=np.vstack((self.xind,tarray[self.tind]))
def add_to_img(self, img, wavelet):
# puts the hyperbola into the right cells of image with a given wavelet
maxind=np.size(img,1)
wavlen=np.floor(len(wavelet)/2).astype(int)
self.imgind=self.imgind[self.tind < maxind-1]
self.tind=self.tind[self.tind < maxind-1]
ind_begin=self.tind-wavlen
for i,sample in enumerate(wavelet):
img[self.imgind,ind_begin+i]=img[self.imgind,ind_begin+i]+sample
return img
```
For testing purposes, let's create an object named Hyp_test and view its parameters
```
Hyp_test = Hyperbola(xarray, tarray, x0 = 100, t0 = 30, v = 2)
#Create a figure and add axes to it
fgr_test1 = plt.figure(figsize=(7,5), facecolor='w')
ax_test1 = fgr_test1.add_subplot(111)
#Now plot Hyp_test's parameters: X vs T
ax_test1.plot(Hyp_test.x, Hyp_test.t, 'r', lw = 2)
#and their 'gridded' equivalents
ax_test1.plot(Hyp_test.x, Hyp_test.tgrid, ls='none', marker='o', ms=6, mfc=[0,0.5,1],mec='none')
#Some commands to add gridlines, change the direction of the T axis and move the x axis to the top
ax_test1.set_ylim(tarray[-1],tarray[0])
ax_test1.xaxis.set_ticks_position('top')
ax_test1.grid(True, alpha = 0.1, ls='-',lw=.5)
ax_test1.set_xlabel('X, m')
ax_test1.set_ylabel('T, ms')
ax_test1.xaxis.set_label_position('top')
plt.show()
```
| github_jupyter |
## cuDF perf tests
### Loading financial time-series (per-minute ETFs) data from CSV files into a cuDF and running the queries
```
data_path = '/workspace/data/datasets/unianalytica/group/analytics-perf-tests/symbols/'
import sys
import os
import csv
import pandas as pd
import numpy as np
import cudf
from pymapd import connect
import pyarrow as pa
from datetime import datetime
import pytz
import time
```
### 1. Load all files into one cuDF DataFrame
#### Reading the CSV files into a Pandas DF (takes about 2 minutes - 63 files, 3.5 GB CSV format total size)
```
symbol_dfs_list = []
records_count = 0
symbols_files = sorted(os.listdir(data_path))
for ix in range(len(symbols_files)):
current_symbol_df = pd.read_csv(data_path + symbols_files[ix], parse_dates=[2], infer_datetime_format=True,
names=['symbol_record_id', 'symbol', 'datetime', 'open', 'high', 'low', 'close', 'volume', 'split_factor', 'earnings', 'dividends'])
records_count = records_count + len(current_symbol_df)
symbol_dfs_list.append(current_symbol_df)
print('Finished reading; now concatenating the DFs...')
symbols_pandas_df = pd.concat(symbol_dfs_list)
symbols_pandas_df.index = np.arange(records_count)
del(symbol_dfs_list)
print('Built a Pandas DF of {} records.'.format(records_count))
symbols_pandas_df.head()
```
#### Building a cuDF from Pandas DF:
Replacing the `symbol` column here with `symbol_id`, as cuDF still cannot handle strings.
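The replacement itself is just a string-to-integer lookup table built in plain Python; a minimal sketch of the same idea on a hypothetical symbol list:

```python
# Hypothetical ETF symbols; the real list comes from pd.unique above
symbols = sorted(['SPY', 'QQQ', 'IWM'])

# Assign each symbol a 1-based integer id in sorted order
symbol_to_id = {s: i for i, s in enumerate(symbols, start=1)}
print(symbol_to_id)  # {'IWM': 1, 'QQQ': 2, 'SPY': 3}
```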
```
symbols_list = sorted(pd.unique(symbols_pandas_df.symbol))
print(symbols_list)
keys = symbols_list
values = list(range(1, len(symbols_list)+1))
dictionary = dict(zip(keys, values))
symbols_pandas_df.insert(0, 'symbol_id', np.array([dictionary[x] for x in symbols_pandas_df.symbol.values]))
symbols_pandas_df_cudf = symbols_pandas_df.drop('symbol', axis=1)
symbols_pandas_df_cudf.head()
symbols_pandas_df_cudf.dtypes
```
### Now, building the cuDF from Pandas DF:
```
symbols_gdf = cudf.DataFrame.from_pandas(symbols_pandas_df_cudf)
del(symbols_pandas_df_cudf)
print(symbols_gdf)
```
### 2. Perf Tests
#### 2.1 Descriptive statistics
```
%%timeit -n1 -r3
print('Trading volume stats: mean of {}, variance of {}'.format(symbols_gdf['volume'].mean(), symbols_gdf['volume'].var()))
```
#### 2.2 Sorting
```
%%timeit -n1 -r3
print(symbols_gdf[['symbol_id', 'datetime', 'volume']].sort_values(by='volume', ascending=False).head(1))
```
#### 2.3 Mixed analytics (math ops + sorting) [finding the top per-minute return]:
```
%%timeit -n1 -r3
symbols_gdf['return'] = 100*(symbols_gdf['close']-symbols_gdf['open'])/symbols_gdf['open']
print(symbols_gdf[['symbol_id', 'datetime', 'return']].sort_values(by='return', ascending=False).head(1))
```
## License
Copyright (c) 2019, PatternedScience Inc.
This code was originally run on the [UniAnalytica](https://www.unianalytica.com) platform, is published by PatternedScience Inc. on [GitHub](https://github.com/patternedscience/GPU-Analytics-Perf-Tests) and is licensed under the terms of Apache License 2.0; a copy of the license is available in the GitHub repository.
| github_jupyter |
# Tutorial 3: SQL data source
## Preparing
### Step 1. Install LightAutoML
Uncomment if you didn't clone the repository via git (e.g., the Colab or Kaggle versions).
```
#! pip install -U lightautoml
```
### Step 2. Import necessary libraries
```
# Standard python libraries
import os
import time
import requests
# Installed libraries
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
import torch
# Imports from our package
import gensim
from lightautoml.automl.presets.tabular_presets import TabularAutoML, TabularUtilizedAutoML
from lightautoml.dataset.roles import DatetimeRole
from lightautoml.tasks import Task
```
### Step 3. Parameters
```
N_THREADS = 8 # threads cnt for lgbm and linear models
N_FOLDS = 5 # folds cnt for AutoML
RANDOM_STATE = 42 # fixed random state for various reasons
TEST_SIZE = 0.2 # Test size for metric check
TIMEOUT = 300 # Time in seconds for automl run
TARGET_NAME = 'TARGET' # Target column name
```
### Step 4. Fix torch number of threads and numpy seed
```
np.random.seed(RANDOM_STATE)
torch.set_num_threads(N_THREADS)
```
### Step 5. Example data load
Load the dataset from the repository if you didn't clone the repository via git.
```
DATASET_DIR = '../data/'
DATASET_NAME = 'sampled_app_train.csv'
DATASET_FULLNAME = os.path.join(DATASET_DIR, DATASET_NAME)
DATASET_URL = 'https://raw.githubusercontent.com/sberbank-ai-lab/LightAutoML/master/examples/data/sampled_app_train.csv'
%%time
if not os.path.exists(DATASET_FULLNAME):
os.makedirs(DATASET_DIR, exist_ok=True)
dataset = requests.get(DATASET_URL).text
with open(DATASET_FULLNAME, 'w') as output:
output.write(dataset)
%%time
data = pd.read_csv(DATASET_FULLNAME)
data.head()
```
### Step 6. (Optional) Some user feature preparation
The cell below shows some user feature preparation that makes the task more difficult (this block can be omitted if you don't want to change the initial data):
```
%%time
data['BIRTH_DATE'] = (np.datetime64('2018-01-01') + data['DAYS_BIRTH'].astype(np.dtype('timedelta64[D]'))).astype(str)
data['EMP_DATE'] = (np.datetime64('2018-01-01') + np.clip(data['DAYS_EMPLOYED'], None, 0).astype(np.dtype('timedelta64[D]'))
).astype(str)
data['constant'] = 1
data['allnan'] = np.nan
data['report_dt'] = np.datetime64('2018-01-01')
data.drop(['DAYS_BIRTH', 'DAYS_EMPLOYED'], axis=1, inplace=True)
```
### Step 7. (Optional) Data splitting for train-test
The block below can be omitted if you are going to train the model only, or if you have specific train and test files:
```
%%time
train_data, test_data = train_test_split(data,
test_size=TEST_SIZE,
stratify=data[TARGET_NAME],
random_state=RANDOM_STATE)
print('Data split. Part sizes: train_data = {}, test_data = {}'
      .format(train_data.shape, test_data.shape))
train_data.head()
```
### Step 8. (Optional) Reading data from SqlDataSource
#### Preparing datasets as SQLite data bases
```
import sqlite3 as sql
for _fname in ('train.db', 'test.db'):
if os.path.exists(_fname):
os.remove(_fname)
train_db = sql.connect('train.db')
train_data.to_sql('data', train_db)
test_db = sql.connect('test.db')
test_data.to_sql('data', test_db)
```
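Before wiring the databases into a reader, it can be worth sanity-checking that the tables were actually written. A small sketch using only the standard library (an in-memory database stands in for `train.db` here):

```python
import sqlite3 as sql

# In-memory database as a stand-in for train.db
con = sql.connect(':memory:')
con.execute("CREATE TABLE data (a INTEGER, b TEXT)")
con.executemany("INSERT INTO data VALUES (?, ?)", [(1, 'x'), (2, 'y')])

# Count the rows the same way you would against train.db
n_rows = con.execute("SELECT COUNT(*) FROM data").fetchone()[0]
print(n_rows)  # 2
con.close()
```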
#### Using dataset wrapper for a connection
```
from lightautoml.reader.tabular_batch_generator import SqlDataSource
# train_data is replaced with a wrapper for an SQLAlchemy connection
# Wrapper requires SQLAlchemy connection string and query to obtain data from
train_data = SqlDataSource('sqlite:///train.db', 'select * from data', index='index')
test_data = SqlDataSource('sqlite:///test.db', 'select * from data', index='index')
```
## AutoML preset usage
### Step 1. Create Task
```
%%time
task = Task('binary', )
```
### Step 2. Setup columns roles
Roles setup here set target column and base date, which is used to calculate date differences:
```
%%time
roles = {'target': TARGET_NAME,
DatetimeRole(base_date=True, seasonality=(), base_feats=False): 'report_dt',
}
```
### Step 3. Create AutoML from preset
To create AutoML model here we use `TabularAutoML` preset, which looks like:

All the parameters we set above can be passed into the preset to change its configuration:
```
%%time
automl = TabularAutoML(task = task,
timeout = TIMEOUT,
general_params = {'nested_cv': False, 'use_algos': [['linear_l2', 'lgb', 'lgb_tuned']]},
reader_params = {'cv': N_FOLDS, 'random_state': RANDOM_STATE},
tuning_params = {'max_tuning_iter': 20, 'max_tuning_time': 30},
lgb_params = {'default_params': {'num_threads': N_THREADS}})
oof_pred = automl.fit_predict(train_data, roles = roles)
print('oof_pred:\n{}\nShape = {}'.format(oof_pred, oof_pred.shape))
```
### Step 4. Predict to test data and check scores
```
%%time
test_pred = automl.predict(test_data)
print('Prediction for test data:\n{}\nShape = {}'
.format(test_pred, test_pred.shape))
print('Check scores...')
print('OOF score: {}'.format(roc_auc_score(train_data.data[TARGET_NAME].values, oof_pred.data[:, 0])))
print('TEST score: {}'.format(roc_auc_score(test_data.data[TARGET_NAME].values, test_pred.data[:, 0])))
```
### Step 5. Create AutoML with time utilization
Below we create a specific AutoML preset for TIMEOUT utilization (it tries to spend as much of the time budget as possible):
```
%%time
automl = TabularUtilizedAutoML(task = task,
timeout = TIMEOUT,
general_params = {'nested_cv': False, 'use_algos': [['linear_l2', 'lgb', 'lgb_tuned']]},
reader_params = {'cv': N_FOLDS, 'random_state': RANDOM_STATE},
tuning_params = {'max_tuning_iter': 20, 'max_tuning_time': 30},
lgb_params = {'default_params': {'num_threads': N_THREADS}})
oof_pred = automl.fit_predict(train_data, roles = roles)
print('oof_pred:\n{}\nShape = {}'.format(oof_pred, oof_pred.shape))
```
### Step 6. Predict to test data and check scores for utilized automl
```
%%time
test_pred = automl.predict(test_data)
print('Prediction for test data:\n{}\nShape = {}'
.format(test_pred, test_pred.shape))
print('Check scores...')
print('OOF score: {}'.format(roc_auc_score(train_data.data[TARGET_NAME].values, oof_pred.data[:, 0])))
print('TEST score: {}'.format(roc_auc_score(test_data.data[TARGET_NAME].values, test_pred.data[:, 0])))
```
| github_jupyter |
# Chapter 3: Neural Network Basics
## 3. Diabetes Prognosis Prediction [Sample Code]
```
# Import the required packages
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
import torch
from torch.utils.data import TensorDataset, DataLoader
from torch import nn
import torch.nn.functional as F
from torch import optim
```
## 3.1. The Diabetes Dataset
```
# Load the dataset
diabetes = load_diabetes()
# Print the dataset description
print(diabetes.DESCR)
# Convert to a DataFrame
df = pd.DataFrame(diabetes.data, columns=diabetes.feature_names)
# Add the disease progression one year later
df['target'] = diabetes.target
print(df.head())
# Check basic statistics
print(df.describe())
# Visualize the dataset
sns.pairplot(df, x_vars=diabetes.feature_names, y_vars='target')
plt.show()
```
## 3.2. Preparation
```
# Load the dataset
diabetes = load_diabetes()
data = diabetes.data  # features
label = diabetes.target.reshape(-1, 1)  # diabetes progression one year later
# Check the dataset size
print("data size: {}".format(data.shape))
print("label size: {}".format(label.shape))
```
## 3.3. Preparing the Training and Test Data
```
# Split into training and test data
train_data, test_data, train_label, test_label = train_test_split(
    data, label, test_size=0.2)
# Check the sizes of the training and test data
print("train_data size: {}".format(len(train_data)))
print("test_data size: {}".format(len(test_data)))
print("train_label size: {}".format(len(train_label)))
print("test_label size: {}".format(len(test_label)))
# Convert the ndarrays to PyTorch Tensors
train_x = torch.Tensor(train_data)
test_x = torch.Tensor(test_data)
train_y = torch.Tensor(train_label)  # as torch.float32
test_y = torch.Tensor(test_label)  # as torch.float32
# Create datasets that combine features and labels
train_dataset = TensorDataset(train_x, train_y)
test_dataset = TensorDataset(test_x, test_y)
# Create data loaders with a specified mini-batch size
train_batch = DataLoader(
    dataset=train_dataset,  # the dataset
    batch_size=20,  # mini-batch size
    shuffle=True,  # whether to shuffle
    num_workers=2)  # number of worker processes
test_batch = DataLoader(
    dataset=test_dataset,
    batch_size=20,
    shuffle=False,
    num_workers=2)
# Inspect a mini-batch
for data, label in train_batch:
    print("batch data size: {}".format(data.size()))  # input data size of the batch
    print("batch label size: {}".format(label.size()))  # label size of the batch
    break
```
## 3.4. Defining the Neural Network
```
# Define the neural network
class Net(nn.Module):
    def __init__(self, D_in, H, D_out):
        super(Net, self).__init__()
        self.linear1 = nn.Linear(D_in, H)
        self.linear2 = nn.Linear(H, H)  # added
        self.linear3 = nn.Linear(H, D_out)
        self.dropout = nn.Dropout(p=0.5)
    def forward(self, x):
        x = F.relu(self.linear1(x))
        x = F.relu(self.linear2(x))  # added
        x = F.relu(self.linear2(x))  # added
        x = self.dropout(x)  # added
        x = self.linear3(x)
        return x
# Define the hyperparameters
D_in = 10  # input dimension: 10
H = 200  # hidden layer dimension: 200
D_out = 1  # output dimension: 1
epoch = 100  # number of training epochs: 100
# Load the network
# Choose whether to use the CPU or the GPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = Net(D_in, H, D_out).to(device)
# Check the device
print("Device: {}".format(device))
```
## 3.5. Defining the Loss Function and Optimizer
```
# Define the loss function
criterion = nn.MSELoss()  # the loss function used here (mean squared error: MSE)
criterion2 = nn.L1Loss()  # for reference (mean absolute error: MAE)
# Define the optimizer
optimizer = optim.Adam(net.parameters())
```
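As a quick reminder of what the two criteria compute, here is a pure-Python sketch of MSE versus MAE on a toy prediction (all values are made up):

```python
preds = [150.0, 100.0, 210.0]   # hypothetical predictions
labels = [140.0, 110.0, 200.0]  # hypothetical targets

errors = [p - l for p, l in zip(preds, labels)]
mse = sum(e**2 for e in errors) / len(errors)    # what nn.MSELoss computes
mae = sum(abs(e) for e in errors) / len(errors)  # what nn.L1Loss computes
print(mse, mae)  # 100.0 10.0
```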
## 3.6. Training
```
# Create lists to store the losses
train_loss_list = []  # training loss (MSE)
test_loss_list = []  # test loss (MSE)
train_mae_list = []  # training MAE
test_mae_list = []  # test MAE
# Run the training (epochs)
for i in range(epoch):
    # Show the epoch progress
    print('---------------------------------------------')
    print("Epoch: {}/{}".format(i+1, epoch))
    # Initialize the losses
    train_loss = 0  # training loss (MSE)
    test_loss = 0  # test loss (MSE)
    train_mae = 0  # training MAE
    test_mae = 0  # test MAE
    # ---------Training part--------- #
    # Set the network to training mode
    net.train()
    # Load the data mini-batch by mini-batch and train
    for data, label in train_batch:
        # Transfer the tensors to the GPU
        data = data.to(device)
        label = label.to(device)
        # Reset the gradients
        optimizer.zero_grad()
        # Feed in the data and compute predictions (forward pass)
        y_pred = net(data)
        # Compute the loss (error)
        loss = criterion(y_pred, label)  # MSE
        mae = criterion2(y_pred, label)  # MAE
        # Compute the gradients (backpropagation)
        loss.backward()
        # Update the parameters (weights)
        optimizer.step()
        # Accumulate the loss of each mini-batch
        train_loss += loss.item()  # MSE
        train_mae += mae.item()  # MAE
    # Compute the average loss over the mini-batches
    batch_train_loss = train_loss / len(train_batch)
    batch_train_mae = train_mae / len(train_batch)
    # ---------End of training part--------- #
    # ---------Evaluation part--------- #
    # Set the network to evaluation mode
    net.eval()
    # Turn off autograd during evaluation
    with torch.no_grad():
        for data, label in test_batch:
            # Transfer the tensors to the GPU
            data = data.to(device)
            label = label.to(device)
            # Feed in the data and compute predictions (forward pass)
            y_pred = net(data)
            # Compute the loss (error)
            loss = criterion(y_pred, label)  # MSE
            mae = criterion2(y_pred, label)  # MAE
            # Accumulate the loss of each mini-batch
            test_loss += loss.item()  # MSE
            test_mae += mae.item()  # MAE
    # Compute the average loss over the mini-batches
    batch_test_loss = test_loss / len(test_batch)
    batch_test_mae = test_mae / len(test_batch)
    # ---------End of evaluation part--------- #
    # Print the losses for each epoch
    print("Train_Loss: {:.4f} Train_MAE: {:.4f}".format(
        batch_train_loss, batch_train_mae))
    print("Test_Loss: {:.4f} Test_MAE: {:.4f}".format(
        batch_test_loss, batch_test_mae))
    # Save the losses to the lists
    train_loss_list.append(batch_train_loss)
    test_loss_list.append(batch_test_loss)
    train_mae_list.append(batch_train_mae)
    test_mae_list.append(batch_test_mae)
```
## 3.7. Visualizing the Results
```
# Loss (MSE)
plt.figure()
plt.title('Train and Test Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.plot(range(1, epoch+1), train_loss_list, color='blue',
         linestyle='-', label='Train_Loss')
plt.plot(range(1, epoch+1), test_loss_list, color='red',
         linestyle='--', label='Test_Loss')
plt.legend()  # legend
# MAE
plt.figure()
plt.title('Train and Test MAE')
plt.xlabel('Epoch')
plt.ylabel('MAE')
plt.plot(range(1, epoch+1), train_mae_list, color='blue',
         linestyle='-', label='Train_MAE')
plt.plot(range(1, epoch+1), test_mae_list, color='red',
         linestyle='--', label='Test_MAE')
plt.legend()  # legend
# Show the plots
plt.show()
```
| github_jupyter |
```
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
The raw code for this Jupyter notebook is by default hidden for easier reading.
To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.
<style>
.output_png {
display: table-cell;
text-align: center;
vertical-align: middle;
}
</style>
''')
```

### <h1><center>Module 2: Terminology of Digital Signal Processing</center></h1>
[Digital Signal Processing](https://en.wikipedia.org/wiki/Digital_signal_processing) (or DSP) is one of the *most powerful technologies* that will shape science and engineering in the twenty-first century. Revolutionary changes have already been made in a broad range of fields: communications, medical imaging, radar & sonar, high fidelity music reproduction, and oil prospecting, to name just a few.
Each of these areas has developed a deep DSP technology, with its own algorithms, mathematics, and specialized techniques. This combination of breadth and depth makes it *impossible* for any one individual to master all of the DSP technology that has been developed. DSP education involves two tasks:
* learning general concepts that apply to the field as a whole; and
* learning specialized techniques for your particular area of interest.
The purpose of this module is to provide you with some of the key terminology that we will be covering in this DSP course: **signals**; **continuous, discrete** and **digital**; **systems**; and **processing**.
## Signal
A [signal](https://en.wikipedia.org/wiki/Signal) is anything that conveys **information** and is a description of how one (or a set of) **parameter(s)** relates to another parameter(s) (e.g., amplitude or voltage as a function of time). Examples of signals are everywhere, including:
* Seismic or radar pulse
* Speech
* DNA sequence
* Stock price
* Image
* Video
A signal can have single or multiple independent variables. For example, you'll be familiar with the following examples:
* 1D - Speech: $s=s(t)$
* 2D - Image: $I=I(x,y)$, Topography map: $elev=elev(lat,long)$
* 3D - 3D Seismic/GPR shot gather: $S = S(t,r_x,r_y)$
* 4D - EM/Seismic wavefield: $W=W(t,x,y,z)$
* 5D - 3D Seismic data set: $D=D(t,r_x,r_y,s_x,s_y)$
## Continuous, Discrete and Digital
There are three types of signals that are functions of *time*:
1. **Continuous-time** (analog) - $x(t)$: defined on a continuous range of time *t*, amplitude can be any value.
2. **Discrete-time** - $x(nT)$: defined only at discrete instants of time: $t=...-T,0,T,2T...$ where the amplitude can be any value.
3. **Digital** (quantized) - $x_Q[nT]$: both time and amplitude are discrete. Signals only defined at $t=...,-T,0,T,2T,...$ and amplitude is confined to a finite set of numbers.
<img src="Fig/1-SignalTypes.png" width="700">
**Figure 1. Illustrating the differences between continuous-time, discrete-time and digital signals.**
In DSP we deal with $x_Q[nT]$ because this corresponds with computer-based processing (which is quantized by definition - e.g. 16-bit vs 32-bit system). In this course we will assume that **discrete-time signal** is equivalent to a **digital signal** (equivalent to saying that the quantizer has infinite resolution). Thus, we will commonly write continuous and discrete (and quantized) signal as $x(t)$ and $x[nT]$, where parentheses and square brackets will denote continuity and discreteness, respectively.
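The distinction between the three signal types can be made concrete in a few lines of code: sample a continuous cosine at $t=nT$ to get a discrete-time signal, then quantize the samples to a small finite amplitude set (a hypothetical 2-bit, four-level quantizer, purely for illustration):

```python
import math

T = 0.125      # sampling interval
n_samples = 8
levels = [-1.0, -0.5, 0.5, 1.0]  # 2-bit quantizer: 4 allowed amplitudes

def quantize(v, levels):
    # snap a sample to the nearest allowed amplitude level
    return min(levels, key=lambda q: abs(q - v))

# x[nT]: discrete in time, continuous in amplitude
x_discrete = [math.cos(2 * math.pi * n * T) for n in range(n_samples)]
# x_Q[nT]: discrete in both time and amplitude
x_digital = [quantize(v, levels) for v in x_discrete]
print(x_digital)
```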
## Systems
A **system** is a mathematical model or abstraction of a physical process that relates **input** to **output**. Any system that processes [digital](https://en.wikipedia.org/wiki/Digital_signal) signals is called a **digital system** or **digital signal processor**.
Examples include:
* Amplifier
* input: ${\rm cos}\,\omega t$
* output: $10\,{\rm cos}\,\omega t$
* Delay
* input: $f[nT]$
* output: $g[(n+p)T]$ where integer $p>0$
* Feature extraction
* Input: ${\rm cos}\,\omega_1 t + {\rm cos}\,\omega_2 t$
* Output: [$\omega_1,\omega_2$]
* Cellphone communication
* Input: Voice
* Output: CDMA signal
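Of these, the delay system is simple enough to sketch directly: each output sample is the input shifted by $p$ sampling intervals, with zeros padding the start (the `delay` helper below is ours, for illustration only):

```python
def delay(signal, p):
    # delay a discrete-time signal by p >= 0 samples, zero-padding the front
    if p <= 0:
        return list(signal)
    return ([0] * p + signal)[:len(signal)]

f = [1, 2, 3, 4, 5]
print(delay(f, 2))  # [0, 0, 1, 2, 3]
```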
## Processing
**Processing** performs a particular function by passing a [signal](https://en.wikipedia.org/wiki/Signal)
through a **system**. Examples include:
* [Analog](https://en.wikipedia.org/wiki/Analog_signal) processing of **analog** signal
<img src="Fig/1-ASP.png" width="250">
**Figure 2. Illustrating the analog processing of an analog signal.**
* Digital processing of analog signal
<img src="Fig/1-A2D2A.png" width="700">
**Figure 3. Illustrating the steps required to digitially process an analog signal.**
## Signals vs. Underlying Processes
In most cases we aim to model geophysical phenomena using **deterministic equations** (e.g., acoustic wave equation, Maxwell's equations). Given a known input, these equations will generate a predictable output (for a noiseless system). However, distinguishing between the acquired **signal** and the **underlying stochastic process** is often very important.
For example, imagine creating a 1000 point signal by flipping a coin 1000 times. If the coin flip is heads, the corresponding sample is made a value of one. On tails, the sample is set to zero. The process that created this signal has a mean of exactly 0.5, determined by the relative probability of each possible outcome: 50% heads, 50% tails. However, it is unlikely that the actual 1000 point signal will have a mean of exactly 0.5. Random chance will make the number of ones and zeros slightly different each time the signal is generated. The probabilities of the underlying process are constant, but the statistics of the acquired signal change each time the experiment is repeated. This random irregularity found in actual data is called by such names as: **statistical variation**, **statistical fluctuation**, and **statistical noise**.
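The coin-flip experiment described above takes only a few lines to reproduce (the seed is arbitrary, chosen for repeatability):

```python
import random

random.seed(42)  # arbitrary seed, so the run is repeatable

# 1000 coin flips: heads -> 1, tails -> 0
signal = [1 if random.random() < 0.5 else 0 for _ in range(1000)]

# The process mean is exactly 0.5; the signal mean fluctuates around it
mean = sum(signal) / len(signal)
print(mean)  # close to, but almost never exactly, 0.5
```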
Finally, we live in a world where even though we understand many deterministic equations and systems of equations, the data we record and use are contaminated with both **coherent and incoherent noise**. Much of the signal processing work that you will do is trying to limit these sources of noise in order to enhance the signal. This is one of the main reasons why it is good to study digital signal processing!
## DSP versus ASP
It is worth mentioning that one can also perform various [Analog Signal Processing](https://en.wikipedia.org/wiki/Analog_signal_processing) (ASP) tasks. ASP is any type of signal processing conducted on continuous analog signals by some analog means. Examples include "bass", "treble" and "volume" controls on stereos, and "tint" controls on TVs.
There are a number of key advantages of DSP over ASP:
* Allows development with use of computers (e.g., with Python, Matlab)
* *Robust tool kits and modularity* - Leverage significant number of complex tools in Python/Matlab without having to redesign physical hardware every time.
* Allows *flexibility* in reconfiguring the DSP operators by changing the program (not the hardware!)
* *Reliable*: processing of 0's and 1's is almost immune to noise, and data are easily stored without deterioration
* *Security* through encryption/scrambling
* *Simplicity* - most operators are additions and subtractions (can be scalar-scalar, vector-scalar, vector-vector, matrix-vector)
However, there are also a number of advantages of ASP over DSP:
* Excellent for high-throughput signals
* Preclude the need for data storage and interface with computer CPU
| github_jupyter |
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/bbc-text.csv \
-O /tmp/bbc-text.csv
import csv
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
#Stopwords list from https://github.com/Yoast/YoastSEO.js/blob/develop/src/config/stopwords.js
# Convert it to a Python list and paste it here
stopwords = [ "a", "about", "above", "after", "again", "against", "all", "am", "an", "and", "any", "are", "as", "at", "be", "because", "been", "before", "being", "below", "between", "both", "but", "by", "could", "did", "do", "does", "doing", "down", "during", "each", "few", "for", "from", "further", "had", "has", "have", "having", "he", "he'd", "he'll", "he's", "her", "here", "here's", "hers", "herself", "him", "himself", "his", "how", "how's", "i", "i'd", "i'll", "i'm", "i've", "if", "in", "into", "is", "it", "it's", "its", "itself", "let's", "me", "more", "most", "my", "myself", "nor", "of", "on", "once", "only", "or", "other", "ought", "our", "ours", "ourselves", "out", "over", "own", "same", "she", "she'd", "she'll", "she's", "should", "so", "some", "such", "than", "that", "that's", "the", "their", "theirs", "them", "themselves", "then", "there", "there's", "these", "they", "they'd", "they'll", "they're", "they've", "this", "those", "through", "to", "too", "under", "until", "up", "very", "was", "we", "we'd", "we'll", "we're", "we've", "were", "what", "what's", "when", "when's", "where", "where's", "which", "while", "who", "who's", "whom", "why", "why's", "with", "would", "you", "you'd", "you'll", "you're", "you've", "your", "yours", "yourself", "yourselves" ]
sentences = []
labels = []
with open("/tmp/bbc-text.csv", 'r') as csvfile:
fr = csv.reader(csvfile, delimiter=',')
next(fr)
for row in fr:
labels.append(row[0])
sentence = row[1]
for word in stopwords:
token = " "+word+" "
sentence = sentence.replace(token, " ")
            sentence = sentence.replace("  ", " ")  # collapse double spaces left by stopword removal
sentences.append(sentence)
print(len(sentences))
print(sentences[0])
#Expected output
# 2225
# tv future hands viewers home theatre systems plasma high-definition tvs digital video recorders moving living room way people watch tv will radically different five years time. according expert panel gathered annual consumer electronics show las vegas discuss new technologies will impact one favourite pastimes. us leading trend programmes content will delivered viewers via home networks cable satellite telecoms companies broadband service providers front rooms portable devices. one talked-about technologies ces digital personal video recorders (dvr pvr). set-top boxes like us s tivo uk s sky+ system allow people record store play pause forward wind tv programmes want. essentially technology allows much personalised tv. also built-in high-definition tv sets big business japan us slower take off europe lack high-definition programming. not can people forward wind adverts can also forget abiding network channel schedules putting together a-la-carte entertainment. us networks cable satellite companies worried means terms advertising revenues well brand identity viewer loyalty channels. although us leads technology moment also concern raised europe particularly growing uptake services like sky+. happens today will see nine months years time uk adam hume bbc broadcast s futurologist told bbc news website. likes bbc no issues lost advertising revenue yet. pressing issue moment commercial uk broadcasters brand loyalty important everyone. will talking content brands rather network brands said tim hanlon brand communications firm starcom mediavest. reality broadband connections anybody can producer content. added: challenge now hard promote programme much choice. means said stacey jolna senior vice president tv guide tv group way people find content want watch simplified tv viewers. means networks us terms channels take leaf google s book search engine future instead scheduler help people find want watch. 
kind channel model might work younger ipod generation used taking control gadgets play them. might not suit everyone panel recognised. older generations comfortable familiar schedules channel brands know getting. perhaps not want much choice put hands mr hanlon suggested. end kids just diapers pushing buttons already - everything possible available said mr hanlon. ultimately consumer will tell market want. 50 000 new gadgets technologies showcased ces many enhancing tv-watching experience. high-definition tv sets everywhere many new models lcd (liquid crystal display) tvs launched dvr capability built instead external boxes. one example launched show humax s 26-inch lcd tv 80-hour tivo dvr dvd recorder. one us s biggest satellite tv companies directtv even launched branded dvr show 100-hours recording capability instant replay search function. set can pause rewind tv 90 hours. microsoft chief bill gates announced pre-show keynote speech partnership tivo called tivotogo means people can play recorded programmes windows pcs mobile devices. reflect increasing trend freeing multimedia people can watch want want.
tokenizer = Tokenizer(oov_token="<OOV>")
tokenizer.fit_on_texts(sentences)
word_index = tokenizer.word_index
print(len(word_index))
# Expected output
# 29714
sequences = tokenizer.texts_to_sequences(sentences)
padded = pad_sequences(sequences, padding='post')
print(padded[0])
print(padded.shape)
# Expected output
# [ 96 176 1158 ... 0 0 0]
# (2225, 2442)
label_tokenizer = Tokenizer()
label_tokenizer.fit_on_texts(labels)
label_word_index = label_tokenizer.word_index
label_seq = label_tokenizer.texts_to_sequences(labels)
print(label_seq)
print(label_word_index)
# Expected Output
# [[4], [2], [1], [1], [5], [3], [3], [1], [1], [5], [5], [2], [2], [3], [1], [2], [3], [1], [2], [4], [4], [4], [1], [1], [4], [1], [5], [4], [3], [5], [3], [4], [5], [5], [2], [3], [4], [5], [3], [2], [3], [1], [2], [1], [4], [5], [3], [3], [3], [2], [1], [3], [2], [2], [1], [3], [2], [1], [1], [2], [2], [1], [2], [1], [2], [4], [2], [5], [4], [2], [3], [2], [3], [1], [2], [4], [2], [1], [1], [2], [2], [1], [3], [2], [5], [3], [3], [2], [5], [2], [1], [1], [3], [1], [3], [1], [2], [1], [2], [5], [5], [1], [2], [3], [3], [4], [1], [5], [1], [4], [2], [5], [1], [5], [1], [5], [5], [3], [1], [1], [5], [3], [2], [4], [2], [2], [4], [1], [3], [1], [4], [5], [1], [2], [2], [4], [5], [4], [1], [2], [2], [2], [4], [1], [4], [2], [1], [5], [1], [4], [1], [4], [3], [2], [4], [5], [1], [2], [3], [2], [5], [3], [3], [5], [3], [2], [5], [3], [3], [5], [3], [1], [2], [3], [3], [2], [5], [1], [2], [2], [1], [4], [1], [4], [4], [1], [2], [1], [3], [5], [3], [2], [3], [2], [4], [3], [5], [3], [4], [2], [1], [2], [1], [4], [5], [2], [3], [3], [5], [1], [5], [3], [1], [5], [1], [1], [5], [1], [3], [3], [5], [4], [1], [3], [2], [5], [4], [1], [4], [1], [5], [3], [1], [5], [4], [2], [4], [2], [2], [4], [2], [1], [2], [1], [2], [1], [5], [2], [2], [5], [1], [1], [3], [4], [3], [3], [3], [4], [1], [4], [3], [2], [4], [5], [4], [1], [1], [2], [2], [3], [2], [4], [1], [5], [1], [3], [4], [5], [2], [1], [5], [1], [4], [3], [4], [2], [2], [3], [3], [1], [2], [4], [5], [3], [4], [2], [5], [1], [5], [1], [5], [3], [2], [1], [2], [1], [1], [5], [1], [3], [3], [2], [5], [4], [2], [1], [2], [5], [2], [2], [2], [3], [2], [3], [5], [5], [2], [1], [2], [3], [2], [4], [5], [2], [1], [1], [5], [2], [2], [3], [4], [5], [4], [3], [2], [1], [3], [2], [5], [4], [5], [4], [3], [1], [5], [2], [3], [2], [2], [3], [1], [4], [2], [2], [5], [5], [4], [1], [2], [5], [4], [4], [5], [5], [5], [3], [1], [3], [4], [2], [5], [3], [2], [5], [3], [3], [1], [1], [2], [3], [5], [2], [1], [2], [2], [1], [2], [3], [3], 
[3], [1], [4], [4], [2], [4], [1], [5], [2], [3], [2], [5], [2], [3], [5], [3], [2], [4], [2], [1], [1], [2], [1], [1], [5], [1], [1], [1], [4], [2], [2], [2], [3], [1], [1], [2], [4], [2], [3], [1], [3], [4], [2], [1], [5], [2], [3], [4], [2], [1], [2], [3], [2], [2], [1], [5], [4], [3], [4], [2], [1], [2], [5], [4], [4], [2], [1], [1], [5], [3], [3], [3], [1], [3], [4], [4], [5], [3], [4], [5], [2], [1], [1], [4], [2], [1], [1], [3], [1], [1], [2], [1], [5], [4], [3], [1], [3], [4], [2], [2], [2], [4], [2], [2], [1], [1], [1], [1], [2], [4], [5], [1], [1], [4], [2], [4], [5], [3], [1], [2], [3], [2], [4], [4], [3], [4], [2], [1], [2], [5], [1], [3], [5], [1], [1], [3], [4], [5], [4], [1], [3], [2], [5], [3], [2], [5], [1], [1], [4], [3], [5], [3], [5], [3], [4], [3], [5], [1], [2], [1], [5], [1], [5], [4], [2], [1], [3], [5], [3], [5], [5], [5], [3], [5], [4], [3], [4], [4], [1], [1], [4], [4], [1], [5], [5], [1], [4], [5], [1], [1], [4], [2], [3], [4], [2], [1], [5], [1], [5], [3], [4], [5], [5], [2], [5], [5], [1], [4], [4], [3], [1], [4], [1], [3], [3], [5], [4], [2], [4], [4], [4], [2], [3], [3], [1], [4], [2], [2], [5], [5], [1], [4], [2], [4], [5], [1], [4], [3], [4], [3], [2], [3], [3], [2], [1], [4], [1], [4], [3], [5], [4], [1], [5], [4], [1], [3], [5], [1], [4], [1], [1], [3], [5], [2], [3], [5], [2], [2], [4], [2], [5], [4], [1], [4], [3], [4], [3], [2], [3], [5], [1], [2], [2], [2], [5], [1], [2], [5], [5], [1], [5], [3], [3], [3], [1], [1], [1], [4], [3], [1], [3], [3], [4], [3], [1], [2], [5], [1], [2], [2], [4], [2], [5], [5], [5], [2], [5], [5], [3], [4], [2], [1], [4], [1], [1], [3], [2], [1], [4], [2], [1], [4], [1], [1], [5], [1], [2], [1], [2], [4], [3], [4], [2], [1], [1], [2], [2], [2], [2], [3], [1], [2], [4], [2], [1], [3], [2], [4], [2], [1], [2], [3], [5], [1], [2], [3], [2], [5], [2], [2], [2], [1], [3], [5], [1], [3], [1], [3], [3], [2], [2], [1], [4], [5], [1], [5], [2], [2], [2], [4], [1], [4], [3], [4], [4], [4], [1], [4], [4], [5], 
[5], [4], [1], [5], [4], [1], [1], [2], [5], [4], [2], [1], [2], [3], [2], [5], [4], [2], [3], [2], [4], [1], [2], [5], [2], [3], [1], [5], [3], [1], [2], [1], [3], [3], [1], [5], [5], [2], [2], [1], [4], [4], [1], [5], [4], [4], [2], [1], [5], [4], [1], [1], [2], [5], [2], [2], [2], [5], [1], [5], [4], [4], [4], [3], [4], [4], [5], [5], [1], [1], [3], [2], [5], [1], [3], [5], [4], [3], [4], [4], [2], [5], [3], [4], [3], [3], [1], [3], [3], [5], [4], [1], [3], [1], [5], [3], [2], [2], [3], [1], [1], [1], [5], [4], [4], [2], [5], [1], [3], [4], [3], [5], [4], [4], [2], [2], [1], [2], [2], [4], [3], [5], [2], [2], [2], [2], [2], [4], [1], [3], [4], [4], [2], [2], [5], [3], [5], [1], [4], [1], [5], [1], [4], [1], [2], [1], [3], [3], [5], [2], [1], [3], [3], [1], [5], [3], [2], [4], [1], [2], [2], [2], [5], [5], [4], [4], [2], [2], [5], [1], [2], [5], [4], [4], [2], [2], [1], [1], [1], [3], [3], [1], [3], [1], [2], [5], [1], [4], [5], [1], [1], [2], [2], [4], [4], [1], [5], [1], [5], [1], [5], [3], [5], [5], [4], [5], [2], [2], [3], [1], [3], [4], [2], [3], [1], [3], [1], [5], [1], [3], [1], [1], [4], [5], [1], [3], [1], [1], [2], [4], [5], [3], [4], [5], [3], [5], [3], [5], [5], [4], [5], [3], [5], [5], [4], [4], [1], [1], [5], [5], [4], [5], [3], [4], [5], [2], [4], [1], [2], [5], [5], [4], [5], [4], [2], [5], [1], [5], [2], [1], [2], [1], [3], [4], [5], [3], [2], [5], [5], [3], [2], [5], [1], [3], [1], [2], [2], [2], [2], [2], [5], [4], [1], [5], [5], [2], [1], [4], [4], [5], [1], [2], [3], [2], [3], [2], [2], [5], [3], [2], [2], [4], [3], [1], [4], [5], [3], [2], [2], [1], [5], [3], [4], [2], [2], [3], [2], [1], [5], [1], [5], [4], [3], [2], [2], [4], [2], [2], [1], [2], [4], [5], [3], [2], [3], [2], [1], [4], [2], [3], [5], [4], [2], [5], [1], [3], [3], [1], [3], [2], [4], [5], [1], [1], [4], [2], [1], [5], [4], [1], [3], [1], [2], [2], [2], [3], [5], [1], [3], [4], [2], [2], [4], [5], [5], [4], [4], [1], [1], [5], [4], [5], [1], [3], [4], [2], [1], [5], [2], [2], 
[5], [1], [2], [1], [4], [3], [3], [4], [5], [3], [5], [2], [2], [3], [1], [4], [1], [1], [1], [3], [2], [1], [2], [4], [1], [2], [2], [1], [3], [4], [1], [2], [4], [1], [1], [2], [2], [2], [2], [3], [5], [4], [2], [2], [1], [2], [5], [2], [5], [1], [3], [2], [2], [4], [5], [2], [2], [2], [3], [2], [3], [4], [5], [3], [5], [1], [4], [3], [2], [4], [1], [2], [2], [5], [4], [2], [2], [1], [1], [5], [1], [3], [1], [2], [1], [2], [3], [3], [2], [3], [4], [5], [1], [2], [5], [1], [3], [3], [4], [5], [2], [3], [3], [1], [4], [2], [1], [5], [1], [5], [1], [2], [1], [3], [5], [4], [2], [1], [3], [4], [1], [5], [2], [1], [5], [1], [4], [1], [4], [3], [1], [2], [5], [4], [4], [3], [4], [5], [4], [1], [2], [4], [2], [5], [1], [4], [3], [3], [3], [3], [5], [5], [5], [2], [3], [3], [1], [1], [4], [1], [3], [2], [2], [4], [1], [4], [2], [4], [3], [3], [1], [2], [3], [1], [2], [4], [2], [2], [5], [5], [1], [2], [4], [4], [3], [2], [3], [1], [5], [5], [3], [3], [2], [2], [4], [4], [1], [1], [3], [4], [1], [4], [2], [1], [2], [3], [1], [5], [2], [4], [3], [5], [4], [2], [1], [5], [4], [4], [5], [3], [4], [5], [1], [5], [1], [1], [1], [3], [4], [1], [2], [1], [1], [2], [4], [1], [2], [5], [3], [4], [1], [3], [4], [5], [3], [1], [3], [4], [2], [5], [1], [3], [2], [4], [4], [4], [3], [2], [1], [3], [5], [4], [5], [1], [4], [2], [3], [5], [4], [3], [1], [1], [2], [5], [2], [2], [3], [2], [2], [3], [4], [5], [3], [5], [5], [2], [3], [1], [3], [5], [1], [5], [3], [5], [5], [5], [2], [1], [3], [1], [5], [4], [4], [2], [3], [5], [2], [1], [2], [3], [3], [2], [1], [4], [4], [4], [2], [3], [3], [2], [1], [1], [5], [2], [1], [1], [3], [3], [3], [5], [3], [2], [4], [2], [3], [5], [5], [2], [1], [3], [5], [1], [5], [3], [3], [2], [3], [1], [5], [5], [4], [4], [4], [4], [3], [4], [2], [4], [1], [1], [5], [2], [4], [5], [2], [4], [1], [4], [5], [5], [3], [3], [1], [2], [2], [4], [5], [1], [3], [2], [4], [5], [3], [1], [5], [3], [3], [4], [1], [3], [2], [3], [5], [4], [1], [3], [5], [5], [2], [1], 
[4], [4], [1], [5], [4], [3], [4], [1], [3], [3], [1], [5], [1], [3], [1], [4], [5], [1], [5], [2], [2], [5], [5], [5], [4], [1], [2], [2], [3], [3], [2], [3], [5], [1], [1], [4], [3], [1], [2], [1], [2], [4], [1], [1], [2], [5], [1], [1], [4], [1], [2], [3], [2], [5], [4], [5], [3], [2], [5], [3], [5], [3], [3], [2], [1], [1], [1], [4], [4], [1], [3], [5], [4], [1], [5], [2], [5], [3], [2], [1], [4], [2], [1], [3], [2], [5], [5], [5], [3], [5], [3], [5], [1], [5], [1], [3], [3], [2], [3], [4], [1], [4], [1], [2], [3], [4], [5], [5], [3], [5], [3], [1], [1], [3], [2], [4], [1], [3], [3], [5], [1], [3], [3], [2], [4], [4], [2], [4], [1], [1], [2], [3], [2], [4], [1], [4], [3], [5], [1], [2], [1], [5], [4], [4], [1], [3], [1], [2], [1], [2], [1], [1], [5], [5], [2], [4], [4], [2], [4], [2], [2], [1], [1], [3], [1], [4], [1], [4], [1], [1], [2], [2], [4], [1], [2], [4], [4], [3], [1], [2], [5], [5], [4], [3], [1], [1], [4], [2], [4], [5], [5], [3], [3], [2], [5], [1], [5], [5], [2], [1], [3], [4], [2], [1], [5], [4], [3], [3], [1], [1], [2], [2], [2], [2], [2], [5], [2], [3], [3], [4], [4], [5], [3], [5], [2], [3], [1], [1], [2], [4], [2], [4], [1], [2], [2], [3], [1], [1], [3], [3], [5], [5], [3], [2], [3], [3], [2], [4], [3], [3], [3], [3], [3], [5], [5], [4], [3], [1], [3], [1], [4], [1], [1], [1], [5], [4], [5], [4], [1], [4], [1], [1], [5], [5], [2], [5], [5], [3], [2], [1], [4], [4], [3], [2], [1], [2], [5], [1], [3], [5], [1], [1], [2], [3], [4], [4], [2], [2], [1], [3], [5], [1], [1], [3], [5], [4], [1], [5], [2], [3], [1], [3], [4], [5], [1], [3], [2], [5], [3], [5], [3], [1], [3], [2], [2], [3], [2], [4], [1], [2], [5], [2], [1], [1], [5], [4], [3], [4], [3], [3], [1], [1], [1], [2], [4], [5], [2], [1], [2], [1], [2], [4], [2], [2], [2], [2], [1], [1], [1], [2], [2], [5], [2], [2], [2], [1], [1], [1], [4], [2], [1], [1], [1], [2], [5], [4], [4], [4], [3], [2], [2], [4], [2], [4], [1], [1], [3], [3], [3], [1], [1], [3], [3], [4], [2], [1], [1], [1], [1], [2], 
[1], [2], [2], [2], [2], [1], [3], [1], [4], [4], [1], [4], [2], [5], [2], [1], [2], [4], [4], [3], [5], [2], [5], [2], [4], [3], [5], [3], [5], [5], [4], [2], [4], [4], [2], [3], [1], [5], [2], [3], [5], [2], [4], [1], [4], [3], [1], [3], [2], [3], [3], [2], [2], [2], [4], [3], [2], [3], [2], [5], [3], [1], [3], [3], [1], [5], [4], [4], [2], [4], [1], [2], [2], [3], [1], [4], [4], [4], [1], [5], [1], [3], [2], [3], [3], [5], [4], [2], [4], [1], [5], [5], [1], [2], [5], [4], [4], [1], [5], [2], [3], [3], [3], [4], [4], [2], [3], [2], [3], [3], [5], [1], [4], [2], [4], [5], [4], [4], [1], [3], [1], [1], [3], [5], [5], [2], [3], [3], [1], [2], [2], [4], [2], [4], [4], [1], [2], [3], [1], [2], [2], [1], [4], [1], [4], [5], [1], [1], [5], [2], [4], [1], [1], [3], [4], [2], [3], [1], [1], [3], [5], [4], [4], [4], [2], [1], [5], [5], [4], [2], [3], [4], [1], [1], [4], [4], [3], [2], [1], [5], [5], [1], [5], [4], [4], [2], [2], [2], [1], [1], [4], [1], [2], [4], [2], [2], [1], [2], [3], [2], [2], [4], [2], [4], [3], [4], [5], [3], [4], [5], [1], [3], [5], [2], [4], [2], [4], [5], [4], [1], [2], [2], [3], [5], [3], [1]]
# {'sport': 1, 'business': 2, 'politics': 3, 'tech': 4, 'entertainment': 5}
```
# Adadelta
:label:`sec_adadelta`
Adadelta is yet another variant of AdaGrad (:numref:`sec_adagrad`). The main difference lies in the fact that it decreases the amount by which the learning rate is adaptive to coordinates. Moreover, it is traditionally referred to as not having a learning rate, since it uses the amount of change itself as calibration for future change. The algorithm was proposed in :cite:`Zeiler.2012`. It is fairly straightforward, given the discussion of previous algorithms so far.
## The Algorithm
In a nutshell, Adadelta uses two state variables, $\mathbf{s}_t$ to store a leaky average of the second moment of the gradient and $\Delta\mathbf{x}_t$ to store a leaky average of the second moment of the change of parameters in the model itself. Note that we use the original notation and naming of the authors for compatibility with other publications and implementations (there is no other real reason why one should use different Greek variables to indicate a parameter serving the same purpose in momentum, Adagrad, RMSProp, and Adadelta).
Here are the technical details of Adadelta. Given the parameter du jour is $\rho$, we obtain the following leaky updates similarly to :numref:`sec_rmsprop`:
$$\begin{aligned}
\mathbf{s}_t & = \rho \mathbf{s}_{t-1} + (1 - \rho) \mathbf{g}_t^2.
\end{aligned}$$
The difference to :numref:`sec_rmsprop` is that we perform updates with the rescaled gradient $\mathbf{g}_t'$, i.e.,
$$\begin{aligned}
\mathbf{x}_t & = \mathbf{x}_{t-1} - \mathbf{g}_t'. \\
\end{aligned}$$
So what is the rescaled gradient $\mathbf{g}_t'$? We can calculate it as follows:
$$\begin{aligned}
\mathbf{g}_t' & = \frac{\sqrt{\Delta\mathbf{x}_{t-1} + \epsilon}}{\sqrt{{\mathbf{s}_t + \epsilon}}} \odot \mathbf{g}_t, \\
\end{aligned}$$
where $\Delta \mathbf{x}_{t-1}$ is the leaky average of the squared rescaled gradients $\mathbf{g}_t'$. We initialize $\Delta \mathbf{x}_{0}$ to be $0$ and update it at each step with $\mathbf{g}_t'$, i.e.,
$$\begin{aligned}
\Delta \mathbf{x}_t & = \rho \Delta\mathbf{x}_{t-1} + (1 - \rho) {\mathbf{g}_t'}^2,
\end{aligned}$$
and $\epsilon$ (a small value such as $10^{-5}$) is added to maintain numerical stability.
## Implementation
Adadelta needs to maintain two state variables for each variable, $\mathbf{s}_t$ and $\Delta\mathbf{x}_t$. This yields the following implementation.
```
%matplotlib inline
from d2l import tensorflow as d2l
import tensorflow as tf
def init_adadelta_states(feature_dim):
s_w = tf.Variable(tf.zeros((feature_dim, 1)))
s_b = tf.Variable(tf.zeros(1))
delta_w = tf.Variable(tf.zeros((feature_dim, 1)))
delta_b = tf.Variable(tf.zeros(1))
return ((s_w, delta_w), (s_b, delta_b))
def adadelta(params, grads, states, hyperparams):
rho, eps = hyperparams['rho'], 1e-5
for p, (s, delta), grad in zip(params, states, grads):
s[:].assign(rho * s + (1 - rho) * tf.math.square(grad))
g = (tf.math.sqrt(delta + eps) / tf.math.sqrt(s + eps)) * grad
p[:].assign(p - g)
delta[:].assign(rho * delta + (1 - rho) * g * g)
```
Choosing $\rho = 0.9$ amounts to a half-life time of 10 for each parameter update. This tends to work quite well. We get the following behavior.
```
data_iter, feature_dim = d2l.get_data_ch11(batch_size=10)
d2l.train_ch11(adadelta, init_adadelta_states(feature_dim),
{'rho': 0.9}, data_iter, feature_dim);
```
For a concise implementation we simply use the `adadelta` algorithm from the `Trainer` class. This yields the following one-liner for a much more compact invocation.
```
# adadelta is not converging at default learning rate
# but it's converging at lr = 5.0
trainer = tf.keras.optimizers.Adadelta
d2l.train_concise_ch11(trainer, {'learning_rate':5.0, 'rho': 0.9}, data_iter)
```
## Summary
* Adadelta has no learning rate parameter. Instead, it uses the rate of change in the parameters itself to adapt the learning rate.
* Adadelta requires two state variables to store the second moments of gradient and the change in parameters.
* Adadelta uses leaky averages to keep a running estimate of the appropriate statistics.
## Exercises
1. Adjust the value of $\rho$. What happens?
1. Show how to implement the algorithm without the use of $\mathbf{g}_t'$. Why might this be a good idea?
1. Is Adadelta really learning rate free? Could you find optimization problems that break Adadelta?
1. Compare Adadelta to Adagrad and RMSProp and discuss their convergence behavior.
[Discussions](https://discuss.d2l.ai/t/1077)
<a href="https://githubtocolab.com/giswqs/geemap/blob/master/examples/notebooks/65_vector_styling.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
Uncomment the following line to install [geemap](https://geemap.org) if needed.
```
# !pip install geemap
```
**Styling Earth Engine vector data**
```
import ee
import geemap
# geemap.update_package()
```
## Use the default style
```
Map = geemap.Map()
states = ee.FeatureCollection("TIGER/2018/States")
Map.addLayer(states, {}, "US States")
Map
```
## Use Image.paint()
```
Map = geemap.Map()
states = ee.FeatureCollection("TIGER/2018/States")
image = ee.Image().paint(states, 0, 3)
Map.addLayer(image, {'palette': 'red'}, "US States")
Map
```
## Use FeatureCollection.style()
```
Map = geemap.Map()
states = ee.FeatureCollection("TIGER/2018/States")
style = {'color': '0000ffff', 'width': 2, 'lineType': 'solid', 'fillColor': '00000080'}
Map.addLayer(states.style(**style), {}, "US States")
Map
```
## Use add_styled_vector()
```
Map = geemap.Map()
states = ee.FeatureCollection("TIGER/2018/States")
vis_params = {
'color': '000000',
'colorOpacity': 1,
'pointSize': 3,
'pointShape': 'circle',
'width': 2,
'lineType': 'solid',
'fillColorOpacity': 0.66,
}
palette = ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']
Map.add_styled_vector(
states, column="NAME", palette=palette, layer_name="Styled vector", **vis_params
)
Map
import geemap.colormaps as cm
Map = geemap.Map()
states = ee.FeatureCollection("TIGER/2018/States")
vis_params = {
'color': '000000',
'colorOpacity': 1,
'pointSize': 3,
'pointShape': 'circle',
'width': 2,
'lineType': 'solid',
'fillColorOpacity': 0.66,
}
palette = list(cm.palettes.gist_earth.n12)
Map.add_styled_vector(
states, column="NAME", palette=palette, layer_name="Styled vector", **vis_params
)
Map
Map = geemap.Map()
states = ee.FeatureCollection("TIGER/2018/States").filter(
ee.Filter.inList('NAME', ['California', 'Nevada', 'Utah', 'Arizona'])
)
palette = {
'California': 'ff0000',
'Nevada': '00ff00',
'Utah': '0000ff',
'Arizona': 'ffff00',
}
vis_params = {
'color': '000000',
'colorOpacity': 1,
'width': 2,
'lineType': 'solid',
'fillColorOpacity': 0.66,
}
Map.add_styled_vector(
states, column="NAME", palette=palette, layer_name="Styled vector", **vis_params
)
Map
```
## Use interactive GUI
```
Map = geemap.Map()
states = ee.FeatureCollection("TIGER/2018/States")
Map.addLayer(states, {}, "US States")
Map
```
<a href="https://colab.research.google.com/github/mscouse/TBS_investment_management/blob/main/PM_labs_part_4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
[](https://colab.research.google.com/drive/1F1J2rObxMwR11cnRzm5m_b1cLwW1U-nL?usp=sharing)
# <strong> Investment Management 1</strong>
---
# <strong>Part 4: Data sources & data collection in Python.</strong>
In the course repository on GitHub, you will find several introductory Colab notebooks covering the following topics:
**Part 1: Introduction to Python and Google Colab notebooks.**
**Part 2: Getting started with Colab notebooks & basic features.**
**Part 3: Data visualisation libraries.**
**Part 4: Data sources & data collection in Python (CURRENT NOTEBOOK).**
**Part 5: Basic financial calculations in python.**
The notebooks have been designed to help you get started with Python and Google Colab. See the **“1_labs_introduction”** folder for more information. Each notebook contains all necessary libraries and references to the required subsets of data.
# <strong>Data sources and data collection</strong>
To perform data analysis, the first step is to load a file containing the pertinent data, such as a CSV or Excel file, into Colab. There are several ways to do so. You can import your own data into Colab notebooks from Google Drive, GitHub and many other sources. Some of these are discussed below.
To find out more about importing data, and how Colab can be used for data analysis, see the <a href="https://github.com/mscouse/TBS_investment_management/blob/main/Python_workspace.pdf">Python Workspace</a> document in the course GitHub repository or a more <a href="https://neptune.ai/blog/google-colab-dealing-with-files">comprehensive guide</a> prepared by Siddhant Sadangi of Reuters.
##1. Uploading files from your local drive
It is easy to upload your locally stored data files. To upload the data from your local drive, type in the following code in a new “Code” cell in Colab (as demonstrated below):
```
from google.colab import files
files.upload()
```
Once executed, the code will prompt you to select a file containing your data. Click on **“Choose Files”** then select and upload the file. Wait for the file to be 100% uploaded. You should see the name of the file in the code cell once it is uploaded.
On the left side of Colab interface, there is a **"Files/ Folder"** tab. You can find the uploaded file in that directory.
If you want to read the uploaded data into a Pandas dataframe (named `df` in this example), use the following code in a new code cell. The **'filename.csv'** should match the name of the uploaded file, including the `.csv` extension:
```
import pandas as pd
df = pd.read_csv('filename.csv')
```
```
from google.colab import files
files.upload()
```
##2. Upload files from GitHub (via its RAW URL)
You can either clone an entire GitHub repository to your Colab environment or access individual files from their raw link. We use the latter method throughout the course/assignments.
**Clone a GitHub repository**
You can clone a GitHub repository into your Colab environment in the same way as you would on your local machine, using `!git clone` followed by the clone URL of the repository:
```
# use the correct URL
!git clone https://github.com/repository_name.git
```
Once the repository is cloned, refresh the file-explorer to browse through its contents. Then you can simply read the files as you would in your local machine (see above).
**Load GitHub files using raw links**
There is no need to clone the repository to Colab if you need to work with only a few files from that repository. You can load individual files directly from GitHub using their raw links, as follows:
1. click on the file in the repository;
2. click on `View Raw`;
3. copy the URL of the raw file,
4. use this URL as the location of your file (see sample code below)
```
import pandas as pd
# step 1: store the link to your dataset as a string titled "url"
url="https://raw.githubusercontent.com/mscouse/TBS_investment_management/main/1_labs_introduction/stock_prices_1.csv"
# step 2: Load the dataset into pandas. The dataset is stored as a pandas dataframe "df".
df = pd.read_csv(url)
```
Try doing it yourself using the code cells below.
```
# import any required libraries
import pandas as pd
# store the URL link to your GitHub dataset as a string titled "url"
url = 'https://raw.githubusercontent.com/mscouse/TBS_investment_management/main/1_labs_introduction/stock_prices_1.csv'
# load the dataset into Pandas. The dataset will be stored as a Pandas Dataframe "df".
# Note that the file we deal with in this example contains dates in the first column.
# Therefore, we parse the dates using "parse_dates" and set the date column to be
# the index of the dataframe (using the "index_col" parameter)
df = pd.read_csv(url, parse_dates=['date'], index_col=['date'])
df.head()
```
##3. Accessing financial data
There are several open-source Python libraries designed to help researchers access financial data. One example is `yfinance` (formerly known as `fix-yahoo-finance`), a popular library developed as a means to access the financial data available on Yahoo Finance.
Other widely used libraries are `pandas_datareader`, `yahoo_fin`, `ffn`, `PyNance`, and `alpha vantage`.
In this section we focus on `yfinance`. As this library is not pre-installed in Google Colab by default, we will first execute the following code to install it:
```
!pip install yfinance
```
The `!pip install <package>` command looks for the latest version of the package and installs it. This only needs to be done once per session.
```
# install the yfinance library
!pip install yfinance
```
As you may know, **Yahoo Finance** offers historical market data on stocks, bonds, cryptocurrencies, and currencies. It also aggregates companies' fundamental data.
We will be using several modules and functions included with the `yfinance` library to download historical market data from Yahoo Finance. For more information on the library, see <a href="https://pypi.org/project/yfinance/">here</a>.
**Company information**
The first module of the `yfinance` library we consider is `Ticker`. By using the `Ticker` function we pass the stock symbol for which we need to download the data. It allows us to access ticker-specific data, such as stock info, corporate actions, company financials, etc. In the example below we work with Apple, whose ticker is "AAPL". The first step is to call the `Ticker` function to initialize the stock we work with.
```
# import required libraries (note that yfinance needs to be imported in addition to being installed)
import yfinance as yf
# assign ticker to Python variable
aapl = yf.Ticker("AAPL")
# get stock info
aapl.info
```
**Downloading stock data**
To download the historical stock data, we need to use the `history` function. As arguments, we can pass **start** and **end** dates to set a specific time period. Otherwise, we can set the period to **max** which will return all the stock data available on Yahoo for the chosen ticker.
Available parameters for the `history()` method are:
* period: data period to download (either use `period` parameter or use `start` and `end`). Valid periods are: 1d, 5d, 1mo, 3mo, 6mo, 1y, 2y, 5y, 10y, ytd, max;
* interval: data interval (intraday data cannot extend past 60 days). Valid intervals are: 1m, 2m, 5m, 15m, 30m, 60m, 90m, 1h, 1d, 5d, 1wk, 1mo, 3mo;
* start: if not using `period` - download start date string (YYYY-MM-DD) or datetime;
* end: if not using `period` - download end date string (YYYY-MM-DD) or datetime;
```
# get historical market data
hist = aapl.history(period="max")
hist.head()
```
**Displaying corporate actions and analyst recommendations**
To display information about dividends and stock splits, or about analyst recommendations, use the `actions` and `recommendations` attributes.
```
# show company corporate actions, such as dividends and stock splits
aapl.actions
# show analysts recommendations
aapl.recommendations
```
**Data for multiple stocks**
To download data for multiple tickers, we need to use the `download()` method, as follows:
```
# Version 1
import yfinance as yf
stock_data = yf.download("AAPL MSFT BRK-A", start="2015-01-01", end="2021-01-20")
```
Alternatively, we can rewrite the code above as:
```
# Version 2
import yfinance as yf
tickers = "AAPL MSFT BRK-A"
date_1 = "2015-01-01"
date_2 = "2021-01-20"
stock_data = yf.download(tickers, start=date_1, end=date_2)
```
To access the closing adjusted price data for the tickers in the `stock_data` dataframe the code above creates, you should use: `stock_data['Adj Close']`. To access the closing adjusted price data for 'AAPL' only, use: `stock_data['Adj Close']['AAPL']`.
```
# Version 1
# import required libraries
import yfinance as yf
# fetch data for multiple tickers
stock_data = yf.download("AAPL MSFT BRK-A", start="2015-01-01", end="2021-01-20")
# display the last 5 rows of the dataframe; we choose to display the "Adj Close" column only
stock_data["Adj Close"].tail()
# Version 2
import yfinance as yf
# assign required values to variables
tickers = "AAPL MSFT BRK-A"
date_1 = "2015-01-01"
date_2 = "2021-01-20"
# fetch data for multiple tickers
stock_data = yf.download(tickers, start=date_1, end=date_2)
# display the last 5 rows of the dataframe; we choose to display the "Adj Close" column only
stock_data["Adj Close"].tail()
# display the last 5 rows of AAPL adjusted close prices
stock_data['Adj Close']['AAPL'].tail()
```
However, if you want to group stock data by ticker, use the following code:
```
# Version 3
import yfinance as yf
tickers = "AAPL MSFT BRK-A"
date_1 = "2015-01-01"
date_2 = "2021-01-20"
stock_data = yf.download(tickers, start=date_1, end=date_2, group_by="ticker")
```
To access the closing adjusted price data for 'AAPL' only, use: `stock_data['AAPL']['Adj Close']`.
```
# Version 3
import yfinance as yf
# assign required values to variables
tickers = "AAPL MSFT BRK-A"
date_1 = "2015-01-01"
date_2 = "2021-01-20"
# fetch data for multiple tickers
stock_data = yf.download(tickers, start=date_1, end=date_2, group_by="ticker")
# display the last 5 rows of "Adj Close" prices for AAPL only
stock_data["AAPL"]["Adj Close"].tail()
```
# Section 2.2: Naive Bayes
In contrast to *k*-means clustering, Naive Bayes is a supervised machine-learning (ML) algorithm. It provides good speed and good accuracy and is often used in aspects of natural-language processing such as text classification or, as in this section, spam detection.
Spam emails are more than just a nuisance. As recently as 2008, spam constituted an apocalyptic 97.8 percent of all email traffic according to a [2009 Microsoft security report](http://download.microsoft.com/download/4/3/8/438BE24D-4D58-4D9A-900A-A1FC58220813/Microsoft_Security_Intelligence_Report_volume8_July-Dec2009_English.pdf). That tide has thankfully turned and, as of May 2019, spam makes up only about [85 percent of email traffic](https://www.talosintelligence.com/reputation_center/email_rep) — thanks, in no small part, to Naive Bayes spam filters.
Naive Bayes is a convenient algorithm for spam detection because it does not require encoding complex rules. All it needs is training examples, of which there are plenty when it comes to email spam. Naive Bayes does all this through the use of [conditional probability](https://en.wikipedia.org/wiki/Conditional_probability).
> **Learning objective:** By the end of this section, you should have a basic understanding of how naive Bayes works and some of the reasons for its popularity.
## Conditional probability
Ordinary probability deals with the likelihood of isolated events occurring. For example, rolling a 6 on a fair six-sided die will occur, on average, on one out of six rolls. Mathematicians express this probability as $P({\rm die}=6)=\frac{1}{6}$.
Conditional probability concerns itself with the contingencies of interconnected events: the probability of event $A$ happening given that event $B$ occurs. Mathematicians denote this as $P(A|B)$, or "the probability of $A$ given $B$."
In order to compute the probability of conditional events, we use the following equation:
$P(A \mid B)=\cfrac{P(A \cap B)}{P(B)}$
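As a quick sanity check of this formula, consider a fair-die example (a made-up illustration, not from the spam data): the probability of rolling a 6 given that the roll is even is $(1/6)/(1/2)=1/3$.

```python
# Conditional probability with a fair six-sided die:
# P(roll = 6 | roll is even) = P(roll = 6 and even) / P(even)
from fractions import Fraction

p_six_and_even = Fraction(1, 6)   # 6 is the only outcome that is both
p_even = Fraction(1, 2)           # outcomes 2, 4, 6

p_six_given_even = p_six_and_even / p_even
print(p_six_given_even)  # 1/3
```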
This equation is nice, but it assumes that we know the joint probability $P(A\cap B)$, which we often don't. Instead, we often need to know something about $A$ when all we can directly observe is $B$ — for instance, when we want to infer whether an email is spam knowing only the words it contains. For this, we need Bayes' law.
## Bayes' law
Bayes' law takes its name from the eighteenth-century English statistician and philosopher Thomas Bayes, who described the probability of an event based solely on prior knowledge of conditions that might be related to that event thus:
$P(A \mid B)=\cfrac{P(B \mid A)P(A)}{P(B)}$
In words, Bayes' Law says that if I know the prior probabilities $P(A)$ and $P(B)$, in addition to the likelihood (even just an assumed likelihood) $P(B \mid A)$, I can compute the posterior probability $P(A \mid B)$. Let's apply this to spam.
<img align="center" style="padding-right:10px;" src="Images/spam.png" border="5">
In order to use Bayesian probability on spam email messages like this one, consider it (and all other emails, spam or ham) to be bags of words. We don't care about word order or even word meaning. We just want to count the frequency of certain words in spam messages versus the frequency of those same words in valid email messages.
Let's say that, after having counted the words in hundreds of emails that we have received, we determine the probability of the word "debt" appearing in any kind of email message (spam or ham) to be 0.157, with the probability of "debt" appearing in spam messages being 0.309. Furthermore, let's say that we assume that there is a 50 percent chance that any given email message we receive is spam (for this example, we don't know either way what type of email it might be, so it's a coin flip). Mathematically, we could thus say:
- Probability that a given message is spam: $P({\rm S})=0.5$
- Probability that “debt” appears in a given message: $P({\rm debt})=0.157$
- Probability that “debt” appears in a spam message: $P({\rm debt} \mid {\rm S})=0.309$
Plugging this in to Bayes' law, we get the following probability that an email message containing the word "debt" is spam:
$P({\rm S} \mid {\rm debt})=\cfrac{P({\rm debt} \mid {\rm S})P({\rm S})}{P({\rm debt})}=\cfrac{(0.309)(0.5)}{0.157}=\cfrac{0.1545}{0.157}=0.984$
Thus if an email contains the word "debt," we calculate that it is 98.4 percent likely to be spam.
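The arithmetic above can be checked directly in Python, using the probabilities we just stated:

```python
# Bayes' law: P(S | debt) = P(debt | S) * P(S) / P(debt)
p_spam = 0.5               # prior probability that a message is spam
p_debt = 0.157             # probability of "debt" in any message
p_debt_given_spam = 0.309  # probability of "debt" in a spam message

p_spam_given_debt = p_debt_given_spam * p_spam / p_debt
print(round(p_spam_given_debt, 3))  # 0.984
```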
## What makes it naive?
Our above calculation is great for looking at individual words, but emails contain several words that can give us clues to an email's relative likelihood of being spam or ham. For example, say we wanted to determine whether an email is spam given that it contains the words "debt" and "bills." We can begin by reasoning that the probability that an email containing "debt" and "bills" is spam is, if not equal, at least proportional to the probability of "debt" and "bills" appearing in known spam messages times the probability of any given message being spam:
$P({\rm S} \mid {\rm debt, bills}) \propto P({\rm debt, bills} \mid {\rm S})P({\rm S})$
(**Mathematical note:** The symbol ∝ represents proportionality rather than equality.)
Now if we assume that the occurrence of the words "debt" and "bills" are independent events, we can extend this proportionality:
$P({\rm S} \mid {\rm debt, bills}) \propto P({\rm debt} \mid {\rm S})P({\rm bills} \mid {\rm S})P({\rm S})$
We should state here that this assumption of independence is generally not true. Just look at the example spam message above. The probability that "bills" will appear in a spam message containing "debt" is probably quite high. However, assuming that the probabilities of words occurring in our email messages are independent is useful and works surprisingly well. This assumption of independence is the naive part of the Bayesian probabilities that we will use in this section; expressed mathematically, the working assumption that will underpin the ML in this section is that for any collection of $n$ words:
$P({\rm S}\mid {\rm word_1}, {\rm word_2},\ldots, {\rm word}_n)=P({\rm S})P({\rm word_1}\mid {\rm S})P({\rm word_2}\mid {\rm S})\cdots P({\rm word}_n\mid {\rm S})$
> **Key takeaway:** We cannot emphasize enough that the chain rule expressed in the equation above (that the probability of a message being spam is the product of the likelihoods of its individual words appearing in messages known to be spam) is ***not*** strictly true. But it gets good results and, in the world of data science, fast and good enough often trump mathematical fidelity.
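To make the naive assumption concrete, here is a small sketch that scores a message against the two classes using invented per-word likelihoods (the numbers are purely illustrative, not drawn from any dataset) and then normalizes the two products into posterior probabilities:

```python
# Hypothetical per-word likelihoods P(word | class) -- illustrative only.
p_word_given_spam = {'debt': 0.309, 'bills': 0.240}
p_word_given_ham = {'debt': 0.005, 'bills': 0.012}
p_spam, p_ham = 0.5, 0.5  # priors

def class_score(words, likelihoods, prior):
    # Naive assumption: multiply the per-word likelihoods as if independent.
    score = prior
    for w in words:
        score *= likelihoods[w]
    return score

words = ['debt', 'bills']
spam_score = class_score(words, p_word_given_spam, p_spam)
ham_score = class_score(words, p_word_given_ham, p_ham)

# Normalize the two scores so they sum to 1.
p_spam_posterior = spam_score / (spam_score + ham_score)
print(round(p_spam_posterior, 4))
```

With these invented numbers the message is classified as almost certainly spam, which is the behavior the multinomial classifier below automates over the full vocabulary.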
## Import the dataset
In this section, we'll use the [SMS Spam Collection dataset](https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection). It contains 5,574 messages collected for SMS spam research and tagged as "spam" or "ham." The dataset files contain one message per line with each line being composed of the tag and the raw text of the SMS message. For example:
| Class | Message |
|:------|:------------------------------|
| ham | What you doing?how are you? |
| ham | Ok lar... Joking wif u oni... |
Let’s now import pandas and load the dataset. (Note that the path name is case sensitive.)
```
import pandas as pd
df = pd.read_csv('Data/SMSSpamCollection', sep='\t', names=['Class', 'Message'])
```
> **Question**
>
> What do the `sep` and `names` parameters do in the code cell above? (**Hint:** If you are unsure, you can refer to the built-in Help documentation using `pd.read_csv?` in the code cell below.)
Let's take an initial look at what's in the dataset.
```
df.head()
```
Note that several entries in the `Message` column are truncated. We can use the `set_option()` function to set pandas to display the maximum width of each entry.
```
pd.set_option('display.max_colwidth', -1)
df.head()
```
> **Question**
>
> What do you think the purpose of the `-1` parameter passed to `pd.set_option()` is in the code cell above?
Alternatively, we can dig into individual messages.
```
df['Message'][13]
```
## Explore the data
Now that we have an idea of some of the individual entries in the dataset, let's get a better sense of the dataset as a whole.
```
df.info()
```
> **Exercise**
>
> Now run the `describe()` method on `df`. Does it provide much useful information about this dataset? If not, why not?
> **Possible exercise solution**
```
df.describe()
```
We can also visualize the dataset to graphically see the mix of spam to ham. (Note that we need to include the `%matplotlib inline` magic command in order to actually see the bar chart here in the notebook.)
```
%matplotlib inline
df.groupby('Class').count().plot(kind='bar')
```
> **Key takeaway:** Notice that here and in previous sections we have strung together several methods to run on a `DataFrame`. This kind of method chaining is part of what makes Python and pandas such a powerful combination for the rough-and-ready data exploration that is a crucial part of data science.
## Explore the data using word clouds
Because our data is largely not numeric, you might have noticed that some of our go-to data exploration tools (such as bar charts and the `describe()` method) have been of limited use in exploring this data. Instead, word clouds can be a powerful way of getting a quick glance at what's represented in text data as a whole.
```
!pip install wordcloud
```
We will have to supply a number of parameters to the `WordCloud()` function and to matplotlib in order to render the word clouds, so we will save ourselves some redundant work by writing a short function to handle it. Parameters for `WordCloud()` will include the stop words we want to ignore and font size for the words in the cloud. For matplotlib, these parameters will include instructions for rendering the word cloud.
```
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt
def get_wordcloud(text_data, title):
    wordcloud = WordCloud(background_color='black',
                          stopwords=set(STOPWORDS),
                          max_font_size=40,
                          relative_scaling=1.0,
                          random_state=1
                          ).generate(str(text_data))
    fig = plt.figure(1, figsize=(12, 12))
    plt.axis('off')
    plt.title(title)
    plt.imshow(wordcloud)
    plt.show()
```
Now it is time to plot the word clouds.
```
spam_msg = df.loc[df['Class']=='spam']['Message']
get_wordcloud(spam_msg,'Spam Cloud')
ham_msg = df.loc[df['Class']=='ham']['Message']
get_wordcloud(ham_msg,'Ham Cloud')
```
Looking at the two word clouds, it is immediately apparent that the frequency of the most common words is different between our spam and our ham messages, which will form the primary basis of our spam detection.
## Explore the data numerically
Just because the data does not naturally lend itself to numerical analysis "out of the box" does not mean that we can't do so. We can also analyze the average length of spam and ham messages to see if there are differences. For this, we need to create a new column.
```
df['Length_of_msg'] = df['Message'].apply(len)
df.head()
```
> **Question**
>
> What does the `apply()` method do in the code cell above? (**Hint:** If you are unsure, you can refer to [this page](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html).)
Now that we have the length of each message, we can visualize those message lengths using a histogram.
```
df.groupby('Class')['Length_of_msg'].plot(kind='hist', bins=50)
```
The orange histogram is the spam messages. Because there are so many more ham messages than spam, let's break these out separately to see the details more clearly.
```
df.hist(bins=50,by='Class', column='Length_of_msg')
```
Spam messages skew much longer than ham messages.
> **Question**
>
> Why does it appear in the details histograms that there is almost no overlap between the lengths of ham and spam text messages? What do the differences in scale tell us (and what could they inadvertently obscure)?
Let's look at the differences in length of the two classes of message numerically.
```
df.groupby('Class').mean()
```
These numbers accord with what we saw in the histograms.
Now, let's get to the actual modeling and spam detection.
## Prepare the data for modeling
One of the great strengths of naive Bayes analysis is that we don't have to go too deep into text processing in order to develop robust spam detection. However, the text is raw and it does require a certain amount of cleaning. To do this, we will use one of the most commonly used text-analytics libraries in Python, the Natural Language Toolkit (NLTK). However, before we can import it, we will need to first install it.
```
!pip install nltk
```
We can now import NLTK, in addition to the native Python string library to help with our text manipulation. We will also download the latest list of stop words (such as 'the', 'is', and 'are') for NLTK.
```
import string
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
```
Part of our data preparation will be *vectorizing* the text data. Recall that earlier in the section when we first introduced naive Bayes analysis, we stated that we wanted to treat our messages as "bags of words" rather than as English-language messages. Vectorization is the process by which we convert our collection of text messages to a matrix of word counts.
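Before we build the real vectorizer, a tiny standard-library sketch (with made-up toy messages, not our dataset or the scikit-learn vectorizer used below) may help illustrate what a matrix of word counts looks like:

```python
from collections import Counter

docs = ["free call now free", "call me later"]

# The vocabulary is the sorted set of all words across the documents.
vocab = sorted({w for doc in docs for w in doc.split()})

# Each row counts how often each vocabulary word occurs in one document.
matrix = [[Counter(doc.split())[w] for w in vocab] for doc in docs]

print(vocab)   # ['call', 'free', 'later', 'me', 'now']
for row in matrix:
    print(row)
```

Real vectorizers produce the same kind of matrix, only sparse and over thousands of columns.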
Part of the vectorization process will be for us to remove punctuation from the messages and exclude stop words from our analysis. We will write a function to perform those tasks here, because we will want to access those actions later on.
```
def txt_preprocess(text):
    # Remove punctuation
    temp = [w for w in text if w not in string.punctuation]
    temp = ''.join(temp)
    # Exclude stopwords
    processedtext = [w for w in temp.split() if w.lower() not in stopwords.words('english')]
    return processedtext
```
Scikit-learn provides a count-vectorizer function. We will now import it and then use the `txt_preprocess()` function we just wrote as a custom analyzer for it.
```
from sklearn.feature_extraction.text import CountVectorizer
X = df['Message']
y = df['Class']
CountVect = CountVectorizer(analyzer=txt_preprocess).fit(X)
```
> **Technical note:** The convention of using an upper-case `X` to represent the independent variables (the predictors) and a lower-case `y` to represent the dependent variable (the response) comes from statistics and is commonly used by data scientists.
In order to see how the vectorizer transformed the words, let's check it against a common English word like "go."
```
print(CountVect.vocabulary_.get('go'))
```
Note that the number returned is not a count: `vocabulary_` maps each word to the column index that the vectorizer assigned to it, so "go" is the feature in column 6,864 of the document-term matrix.
Now, before we transform the entire dataset and train the model, we have the final preparatory step of splitting our data into training and test data to perform.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=50)
```
Finally, we will transform our training messages into a [document-term matrix](https://en.wikipedia.org/wiki/Document-term_matrix). "Document" might sound a little grandiose in this case as it refers to individual text messages, but it is a term of art for text analysis.
```
X_train_data = CountVect.transform(X_train)
```
This can be a tricky concept, so let's look at the training-text matrix directly.
```
print(X_train_data)
X_train_data.shape
```
`X_train_data` is now a 3,900 × 11,425 matrix, where each of the 3,900 rows represents a text ("document") from the training dataset and each column is a specific word (11,425 of them in this case).
> **Key takeaway:** Putting our bag of words into a document-term matrix like this is a standard tool of natural-language processing and text analysis, and it is used in contexts beyond naive Bayes analysis in which word-frequency is important, such as [term frequency–inverse document frequency (TF-IDF)](https://en.wikipedia.org/wiki/Tf%E2%80%93idf).
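As a brief illustration of the TF-IDF idea mentioned above (a toy sketch using one common smoothing choice; real implementations such as scikit-learn's `TfidfVectorizer` differ in their smoothing and normalization details):

```python
import math
from collections import Counter

# Toy corpus of pre-tokenized "documents"
docs = [["free", "call", "free"], ["call", "me", "later"]]
n_docs = len(docs)

def tf_idf(word, doc):
    tf = Counter(doc)[word] / len(doc)       # term frequency in this document
    df = sum(1 for d in docs if word in d)   # number of documents containing the word
    idf = math.log(n_docs / df) + 1          # one common smoothing choice
    return tf * idf

# "free" appears in only one document, so it is weighted more heavily there
# than "call", which appears in every document.
print(tf_idf("free", docs[0]) > tf_idf("call", docs[0]))  # True
```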
## Train the model
Now it is time to train our naive Bayes model. We will use the multinomial naive Bayes classifier. "Multinomial" refers to the distribution assumed for the word counts in our bag of $n$ words: we model count data rather than assuming that word likelihoods follow a normal distribution, consistent with our working assumption that $P({\rm S}\mid {\rm word_1}, {\rm word_2},\ldots, {\rm word}_n)=P({\rm S})P({\rm word_1}\mid {\rm S})P({\rm word_2}\mid {\rm S})\cdots P({\rm word}_n\mid {\rm S})$.
```
from sklearn.naive_bayes import MultinomialNB
naivebayes_model = MultinomialNB()
naivebayes_model.fit(X_train_data,y_train)
```
Our model is now fitted. However, before we run our predictions on all of our test data, let's see what our model says about some artificial data in order to get a better sense of what our model will do with all of the messages in our test data. From the word clouds we constructed earlier, we can see that "call" and "free" are both prominent words among our spam messages, so let's create our own spam message and see how our model classifies it.
```
pred = naivebayes_model.predict(CountVect.transform(['Call for a free offer!']))
pred
```
As we expected, our model correctly classified this message as spam.
> **Exercise**
>
> Review the ham word cloud above, construct a ham message, and then run it against the model to see how it is classified.
> **Possible exercise solution**
```
pred2 = naivebayes_model.predict(CountVect.transform(['Let me know what time we should go.']))
pred2
```
Now let's run our test data through the model. First, we need to transform it to a document-term matrix.
```
X_test_data = CountVect.transform(X_test)
X_test_data.shape
```
> **Exercise**
>
> Run the predictions for the test data.
> **Exercise solution**
```
predictions = naivebayes_model.predict(X_test_data)
predictions
```
Now it's time to evaluate our model's performance.
```
from sklearn.metrics import classification_report, confusion_matrix
print(classification_report(y_test, predictions))
```
> **Exercise**
>
> Overall, our model is good for spam detection, but our recall score (the proportion of actual positives that were identified correctly) is surprisingly low. Why might this be? What implications does it have for spam detection? (**Hint:** Use the scikit-learn `confusion_matrix()` function to better understand the specific performance of the model. For help interpreting the confusion matrix, see [this page](https://en.wikipedia.org/wiki/Confusion_matrix).)
> **Possible exercise solution**
```
print(confusion_matrix(y_test, predictions))
```
> **Takeaway**
>
> The performance of our naive Bayes model helps underscore the algorithm's popularity, particularly for spam detection. Even untuned, we got good performance, performance that would only continue to improve in production as users submitted more examples of spam messages.
## Further exploration
Beyond detecting spam, we can use ML to explore the SMS data more deeply. To do so, we can use sophisticated, cloud-based cognitive tools such as Microsoft Azure Cognitive Services.
### Azure Cognitive Services
The advantage of using cloud-based services is that they provide cutting-edge models that you can access without having to train the models. This can help accelerate both your exploration and your use of ML.
Azure provides Cognitive Services APIs that can be consumed using Python to conduct image recognition, speech recognition, and text recognition, just to name a few. For the purposes of this subsection, we're going to look at using the Azure Text Analytics API.
First, we’ll start by obtaining a Cognitive Services API key. Note that you can get a free key for seven days (after which you'll be required to pay for continued access to the API).
To learn more about pricing for Cognitive Services, see https://azure.microsoft.com/en-us/pricing/details/cognitive-services/
Browse to **Try Azure Cognitive Services** at https://azure.microsoft.com/en-us/try/cognitive-services/
1. Click **Language APIs**.
2. By **Text Analytics**, click **Get API key**.
3. In the **Try Cognitive Services for free** window, under **7-day trial**, click **Get started**.
4. In the **Microsoft Cognitive Services Terms** window, accept the terms of the free trial and click **Next**.
5. In the **Sign-in to Continue** window, select your preferred means of signing in to your Azure account.
Once you have your API keys in hand, you're ready to start. Substitute the API key that you get for the seven-day trial below where it reads `ACCOUNT_KEY`.
```
# Replace ACCOUNT_KEY with the API key from your seven-day trial.
subscription_key = 'ACCOUNT_KEY'
assert subscription_key
# If using a Free Trial account, this URL does not need to be updated.
# If using a paid account, verify that it matches the region where the
# Text Analytics Service was setup.
text_analytics_base_url = "https://westcentralus.api.cognitive.microsoft.com/text/analytics/v2.1/"
```
We will also need to import the NumPy and requests modules.
```
import numpy as np
import requests
```
The Azure Text Analytics API has a hard limit of 1,000 calls at a time, so we will need to split our 5,572 SMS messages into at least six chunks to run them through Azure.
```
chunks = np.array_split(df, 6)
for chunk in chunks:
    print(len(chunk))
```
Two of the things that cognitive services like those provided by Azure offer are language identification and sentiment analysis. Both are relevant for our dataset, so we will prepare our data for both by submitting the messages as JavaScript Object Notation (JSON) documents. We'll prepare the data for language identification first.
```
# Prepare the header for the JSON document including your subscription key
headers = {"Ocp-Apim-Subscription-Key": subscription_key}
# Supply the URL for the language-identification API.
language_api_url = text_analytics_base_url + "languages"
# Iterate over the chunked DataFrame.
for i in range(len(chunks)):
    # Reset the indexes within the chunks to avoid problems later on.
    chunks[i] = chunks[i].reset_index()
    # Split up the messages from the DataFrame and put them in JSON format.
    documents = {'documents': []}
    for j in range(len(chunks[i]['Message'])):
        documents['documents'].append({'id': str(j), 'text': chunks[i]['Message'][j]})
    # Call the API and capture the responses.
    response = requests.post(language_api_url, headers=headers, json=documents)
    languages = response.json()
    # Put the identified languages in a list.
    lang_list = []
    for document in languages['documents']:
        lang_list.append(document['detectedLanguages'][0]['name'])
    # Put the list of identified languages in a new column of the chunked DataFrame.
    chunks[i]['Language'] = np.array(lang_list)
```
Now we need to perform similar preparation of the data for sentiment analysis.
```
# Supply the URL for the sentiment-analysis API.
sentiment_api_url = text_analytics_base_url + "sentiment"
# Iterate over the chunked DataFrame.
for i in range(len(chunks)):
    # We have already reset the chunk indexes, so we don't need to do it again.
    # Split up the messages from the DataFrame and put them in JSON format.
    documents = {'documents': []}
    for j in range(len(chunks[i]['Message'])):
        documents['documents'].append({'id': str(j), 'text': chunks[i]['Message'][j]})
    # Call the API and capture the responses.
    response = requests.post(sentiment_api_url, headers=headers, json=documents)
    sentiments = response.json()
    # Put the identified sentiments in a list.
    sent_list = []
    for document in sentiments['documents']:
        sent_list.append(document['score'])
    # Put the list of identified sentiments in a new column of the chunked DataFrame.
    chunks[i]['Sentiment'] = np.array(sent_list)
```
We now need to reassemble our chunked DataFrame.
```
azure_df = pd.DataFrame(columns=['Index', 'Class', 'Message', 'Language', 'Sentiment'])
for i in range(len(chunks)):
    azure_df = pd.concat([azure_df, chunks[i]])
    if i == 0:
        azure_df['index'] = chunks[i].index
azure_df.set_index('index', inplace=True)
azure_df.drop(['Index'], axis=1, inplace=True)
azure_df.head()
```
We can also look at the tail of the `DataFrame` to check that our indexing worked as expected.
```
azure_df.tail()
```
Let's now see if all of the SMS messages were in English (and, if not, how many messages of which languages we are looking at).
```
azure_df.groupby('Language')['Message'].count().plot(kind='bar')
```
So the overwhelming majority of the messages are in English, though we have several additional languages in our dataset. Let's look at the actual numbers.
> **Exercise**
>
> Now use the `groupby` method to display actual counts of the languages detected in the dataset rather than a bar chart of them.
> **Exercise solution**
```
azure_df.groupby('Language')['Message'].count()
```
We have a surprising array of languages, perhaps, but the non-English messages are really just outliers and should have no real impact on the spam detection.
Now let's look at the sentiment analysis for our messages.
```
azure_df.groupby('Class')['Sentiment'].plot(kind='hist', bins=50)
```
It is perhaps not too surprising that the sentiments represented in the dataset should be bifurcated: SMS is a medium that captures extremes better than nuanced middle ground. That said, the number of dead-center messages is interesting. The proportion of spam messages right in the middle is also interesting. Let's break the two classes (ham and spam) into separate histograms to get a better look.
> **Exercise**
>
> Break out the single histogram above into two histograms (one for each class of message). (**Hint:** Refer back to the code we used to do this earlier in the section.)
> **Exercise solution**
```
azure_df.hist(bins=50,by='Class', column='Sentiment')
```
The number of spam messages in our dataset is about a tenth of the amount of ham, yet the number of spam messages with exactly neutral sentiment is about half that of the ham, indicating that spam messages, on average, tend to be more neutral than legitimate messages. We can also notice that non-neutral spam messages tend to have more positive than negative sentiment, which makes intuitive sense.
> **Takeaway**
>
> Beyond providing additional insight into our data, sophisticated language-identification and sentiment-analysis algorithms provided by cloud-based services like Azure can supply additional details that could potentially help improve spam detection; for example, patterns of sentiment in spam differ from those in legitimate messages.
# Computing FSAs
**(C) 2017-2019 by [Damir Cavar](http://damir.cavar.me/)**
**Version:** 1.0, September 2019
**License:** [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/) ([CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/))
## Introduction
Consider the following automaton:
<img src="NDFSAMatrixOp.png" caption="Non-deterministic Finite State Automaton" style="width: 200px;"/>
We can represent it in terms of transition tables. We will use the Python numpy module for that.
```
from numpy import array
```
The transitions are coded in terms of state to state transitions. The columns and rows represent the states 0, 1, and 2. The following transition matrix shows all transitions that are associated with the label "a", that is from 0 to 0, from 0 to 1, and from 1 to 0.
```
a = array([
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 0]
])
```
The following transition matrix shows the same for the transitions associated with "b".
```
b = array([
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0]
])
```
The following transition matrix shows this for the transitions associated with "c".
```
c = array([
    [0, 0, 0],
    [0, 0, 1],
    [0, 0, 0]
])
```
We can define the start state using an init vector. This init vector indicates that the start state should be 0.
```
init = array([
    [1, 0, 0]
])
```
The set of final states can be encoded as a column vector that in this case defines state 2 as the only final state.
```
final = array([
    [0],
    [0],
    [1]
])
```
If we want to compute whether a sequence like "aa" can be accepted by this automaton, we can take the dot product of the init vector with the transition matrix for each symbol in turn, and finally with the final-state vector.
```
init.dot(a).dot(a).dot(final)
```
The 0 indicates that there is no path from the initial state to the final state based on a sequence "aa".
Let us verify this for a sequence "bc", for which we know that there is such a path:
```
init.dot(b).dot(c).dot(final)
```
Just to verify once more, let us consider the sequence "aabc":
```
init.dot(a).dot(a).dot(b).dot(c).dot(final)
```
There are obviously three paths in our Non-deterministic Finite State Automaton that generate the sequence "aabc".
## Wrapping the Process into a Function
We could define the FSA above as a 5-tuple $(\Sigma, Q, i, F, E)$, with:
$\Sigma = \{a, b, c\}$, the set of symbols.
$Q = \{ 0, 1, 2 \}$, the set of states.
$i \in Q$, with $i = 0$, the initial state.
$F \subseteq Q$, with $F = \{ 2 \}$, the set of final states.
$E \subseteq Q \times (\Sigma \cup \{\epsilon\}) \times Q$, the set of transitions.
$E$ is the subset of tuples determined by the cartesian product of the set of states, the set of symbols including the empty set, and the set of states. This tuple defines a transition from one state to another state with a specific symbol.
$E$ could also be defined in terms of a function $\delta(\sigma, q)$, with $\sigma$ an input symbol and $q$ the current state. $\delta(\sigma, q)$ returns the new state of the transition, or a failure. The possible transitions for any given symbol from any state can be defined in a transition table:
| | a | b | c |
| :---: | :---: | :---: | :---: |
| **0** | 0, 1 | 1 | - |
| **1** | 0 | 1 | 2 |
| **2** | - | - | - |
We can define the automaton in Python:
```
S = set( ['a', 'b', 'c'] )
Q = set( [0, 1, 2] )
i = 0
F = set( [ 2 ] )
td = { (0, 'a'): [0, 1],
(1, 'a'): [0],
(0, 'b'): [1],
(1, 'b'): [1],
(1, 'c'): [2]
}
def df(state, symbol):
    # Return the list of successor states (empty if there is no transition).
    return td.get((state, symbol), [])
def accept(sequence):
    # Non-deterministic search: the agenda holds (state, position) pairs
    # that still have to be explored.
    agenda = [(i, 0)]
    count = len(sequence)
    while agenda:
        state, pos = agenda.pop()
        for s in df(state, sequence[pos]):
            if pos == count - 1:
                if s in F:
                    return True
            else:
                agenda.append((s, pos + 1))
    return False
accept("aac")
alphabetMatrices = {}
alphabetMatrices["a"] = array([
[1, 1, 0],
[1, 0, 0],
[0, 0, 0]
])
alphabetMatrices["b"] = array([
[0, 1, 0],
[0, 1, 0],
[0, 0, 0]
])
alphabetMatrices["c"] = array([
[0, 0, 0],
[0, 0, 1],
[0, 0, 0]
])
alphabetMatrices["default"] = array([
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]
])
def paths(seq):
res = init
for x in seq:
res = res.dot( alphabetMatrices.get(x, alphabetMatrices["default"]) )
return res.dot(array([
[0],
[0],
[1]
]))[0][0]
paths("aabc")
```
**(C) 2016-2019 by [Damir Cavar](http://damir.cavar.me/) <<dcavar@iu.edu>>**
| github_jupyter |
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
from constants_and_util import *
import matplotlib.pyplot as plt
import pandas as pd
import random
import numpy as np
from copy import deepcopy
from scipy.signal import argrelextrema
import statsmodels.api as sm
from scipy.special import expit
from scipy.stats import scoreatpercentile
import pickle
import os
from collections import Counter
import dataprocessor
import compare_to_seasonal_cycles
assert not USE_SIMULATED_DATA
import sys
import cPickle
assert sys.version[0] == '2'
import seaborn as sns
import generate_results_for_paper
generate_results_for_paper.make_figure_to_illustrate_data_for_one_user()
generate_results_for_paper.make_maps_of_countries_with_clue_data()
results = compare_to_seasonal_cycles.load_all_results()
# Sentence about relative change in happy/sad curve.
happy_sad_curve = compare_to_seasonal_cycles.convert_regression_format_to_simple_mean_format(
results['emotion*happy_versus_emotion*sad']['by_very_active_northern_hemisphere_loggers'][True]['linear_regression'],
'linear_regression')
cycle_amplitude = compare_to_seasonal_cycles.get_cycle_amplitude(happy_sad_curve,
cycle='date_relative_to_period',
metric_to_use='max_minus_min',
hourly_period_to_exclude=None)
overall_sad_frac = 1 - results['emotion*happy_versus_emotion*sad']['by_very_active_northern_hemisphere_loggers'][True]['overall_positive_frac']
print("Happy/sad frac: %2.3f; period cycle amplitude: %2.3f; relative change: %2.3f" % (overall_sad_frac,
cycle_amplitude,
cycle_amplitude/overall_sad_frac))
generate_results_for_paper.make_cycle_amplitudes_bar_plot_for_figure_1(results)
generate_results_for_paper.make_happiness_by_date_date_trump_effects_plot_for_figure_1(results)
generate_results_for_paper.make_happiness_by_date_date_trump_effects_plot_for_figure_1(results,
plot_red_line=False)
compare_to_seasonal_cycles.make_four_cycle_plots(results,
['by_very_active_northern_hemisphere_loggers'],
['emotion*happy_versus_emotion*sad'],
ylimits_by_pair={'emotion*happy_versus_emotion*sad':4},
figname='figures_for_paper/four_cycle_plot.png',
suptitle=False,
include_amplitudes_in_title=False,
different_colors_for_each_cycle=True)
```
# Figure 2
This has already been filtered for countries with MIN_USERS_FOR_SUBGROUP and MIN_OBS_FOR_SUBGROUP.
```
generate_results_for_paper.make_maps_for_figure_2(results)
```
# Figure 3: age effects.
This has already been filtered for ages with MIN_USERS_FOR_SUBGROUP and MIN_OBS_FOR_SUBGROUP.
```
opposite_pairs_to_plot = ['emotion*happy_versus_emotion*sad',
'continuous_features*heart_rate_versus_continuous_features*null',
'continuous_features*bbt_versus_continuous_features*null',
'continuous_features*weight_versus_continuous_features*null']
generate_results_for_paper.make_age_trend_plot(results,
opposite_pairs_to_plot=opposite_pairs_to_plot,
specifications_to_plot=['age'],
figname='figures_for_paper/main_fig4.pdf',
plot_curves_for_two_age_groups=True,
n_subplot_rows=2,
n_subplot_columns=4,
figsize=[14, 8],
subplot_kwargs={'wspace':.3,
'hspace':.65,
'right':.95,
'left':.15,
'top':.92,
'bottom':.1},
plot_yerr=True)
generate_results_for_paper.make_age_trend_plot(results,
opposite_pairs_to_plot=ORDERED_SYMPTOM_NAMES,
specifications_to_plot=['age',
'country+age',
'country+age+behavior',
'country+age+behavior+app usage'],
figname='figures_for_paper/age_trend_robustness.png',
plot_curves_for_two_age_groups=False,
n_subplot_rows=5,
n_subplot_columns=3,
figsize=[12, 15],
subplot_kwargs={'wspace':.7,
'hspace':.95,
'right':.72,
'left':.12,
'top':.95,
'bottom':.1},
age_ticks_only_at_bottom=False,
label_kwargs={'fontsize':11},
linewidth=1,
plot_legend=True,
include_ylabel=False)
```
| github_jupyter |
# smFRET Analysis
This notebook is for simple analysis of smFRET data, starting with an HDF5 file and ending with a FRET efficiency histogram that can be fitted with Gaussians. Burst data can be exported as a .csv for analysis elsewhere.
You can analyse uncorrected data if you are simply looking for relative changes in the conformational ensemble, or supply accurate FRET correction parameters if you want FRET efficiencies that can be converted to distances.
# Import packages
```
from fretbursts import *
sns = init_notebook()
import lmfit
import phconvert
import os
import matplotlib.pyplot as plt
import matplotlib as mpl
from fretbursts.burstlib_ext import burst_search_and_gate
```
# Name and Load in data
First, name the data file and check that it exists; the path is resolved relative to wherever this notebook is saved.
```
filename = "definitiveset/1cx.hdf5"
if os.path.isfile(filename):
print("File found")
else:
print("File not found, check file name is correct")
```
Load in the file and set the correction factors. If you aren't using accurate correction factors, set `d.leakage` and `d.dir_ex` to 0 and `d.gamma` to 1.
You may get warnings that some parameters are not defined in the file; this is fine, as they will be defined in this notebook anyway.
```
d = loader.photon_hdf5(filename)
for i in range(0, len(d.ph_times_t)): #sorting code
indices = d.ph_times_t[i].argsort()
d.ph_times_t[i], d.det_t[i] = d.ph_times_t[i][indices], d.det_t[i][indices]
d.leakage = 0.081 #alpha
d.dir_ex = 0.076 #delta
d.gamma = 0.856
d.beta = 0.848
#d.leakage = 0.
#d.dir_ex = 0.
#d.gamma = 1.
```
# Check alternation cycle is correct
We need to check that the ALEX parameters defined in the HDF5 file are appropriate for the laser cycle used in the experiment. If this is correct, the following histogram should look correct. It is a combined plot of every photon that arrives over the supplied alternation periods.
```
bpl.plot_alternation_hist(d)
```
IF THE ABOVE HISTOGRAM LOOKS CORRECT: then run loader.alex_apply_period, which rewrites the time stamps into groups based on their excitation period. If you want to change the alternation period after this, you will have to reload the data into FRETBursts.
IF THE ABOVE HISTOGRAM LOOKS WRONG: then the supplied alternation parameters do not match up to the alternation of the lasers in the data. This could be because the lasers were actually on a different alternation, or because the data set doesn't start at zero so is frame shifted etc.
In this case, you can uncomment the code below and alter the parameters manually.
```
#d.add(det_donor_accept = (0, 1),
# alex_period = 10000,
# offset = 0,
# D_ON = (0, 4500),
# A_ON = (5000, 9500))
loader.alex_apply_period(d)
time = d.time_max
print('Total data time = %s'%time)
```
The following will plot a time trace of the first second of your experiment.
```
dplot(d, timetrace, binwidth=1e-3, tmin=0, tmax=15, figsize=(8,5))
plt.xlim(0,1);
plt.ylim(-45,45);
plt.ylabel(" Photons/ms in APD0 Photons/ms in APD1");
```
# Background Estimation
Background estimation works by plotting log of photons by the delay between them, assuming a poisson distribution of photon arrivals and fitting a line. The plot will contain single molecule bursts however, so a threshold (in microseconds) has to be defined where the fit begins.
The variable "time_s" defines the size of the windows in which the background is recalculated. Lower values make the estimate more sensitive to fluctuations in the background, while higher values give more photons with which to calculate a more precise average background. If the fit fails, you may need to increase this value.
```
threshold = 1500
d.calc_bg(bg.exp_fit, time_s=300, tail_min_us=(threshold),)
dplot(d, hist_bg, show_fit=True)
```
This code will plot the calculated background in each window and is a good indicator of whether anything major has happened to the solution over the time course of the experiment.
```
dplot(d, timetrace_bg);
```
# Burst Searching and Selecting
"d.burst_search()" can be used to do an all-photon burst search (APBS). "burst_search_and_gate(d)" instead applies a dual-channel burst search (DCBS; Nir 2006): it performs independent searches in the DD+DA channel and the AA channel, and then returns the intersection of the resulting bursts, ensuring that FRET information is only included while an acceptor is still active in the detection volume.
The two numbers given to "F=" are the signal to background threshold in the DD+DA and AA channels respectively. If your background is particularly high in one but not the other you may want to change these independently.
```
bursts=burst_search_and_gate(d, F=(20, 20), m=10, mute=True)
```
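Conceptually, the "and gate" keeps only the time intervals where both channel searches found a burst. A toy interval-intersection sketch (not the FRETBursts implementation; all times are assumed values):

```python
def intersect(bursts_a, bursts_b):
    # Return the overlapping parts of two lists of (start, stop) intervals.
    out = []
    for a_start, a_stop in bursts_a:
        for b_start, b_stop in bursts_b:
            start, stop = max(a_start, b_start), min(a_stop, b_stop)
            if start < stop:
                out.append((start, stop))
    return out

print(intersect([(0.0, 2.0), (5.0, 6.0)], [(1.0, 3.0)]))  # → [(1.0, 2.0)]
```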
The following will plot a graph of burst number vs sizes which can inform your selection thresholding.
```
sizes = bursts.burst_sizes_ich(add_naa=True)
plt.hist(sizes, cumulative=-1, bins = 30, range = (0, 100), histtype="stepfilled", density=False)
plt.xlabel('Burst size (n photons)')
plt.ylabel("N bursts with > n")
```
We can now set thresholds on how many photons we want in each burst, this can be done on all channels together, or just one channel. Thresholding DD+DA will reduce the width in E, thresholding in AA will ensure there are no donor only bursts.
```
bursts = bursts.select_bursts(select_bursts.size, add_naa=True, th1=20,) #all channels
bursts = bursts.select_bursts(select_bursts.size, th1=50) #DD + DA
bursts = bursts.select_bursts(select_bursts.naa , th1=10) #AA
```
# Histograms
Now we can start plotting and fitting the data
```
g=alex_jointplot(bursts)
```
This will fit a gaussian to the E values.
If you set pdf=True then the data will be displayed as a probability density function, pdf=False will give it as bursts instead
```
model = mfit.factory_gaussian()
model.set_param_hint('center', value=1.1, vary=True)
bursts.E_fitter.fit_histogram(model=model, verbose=False, pdf=False)
dplot(bursts, hist_fret, pdf=False, show_model=True, show_fit_value=True, fit_from='center');
bursts.E_fitter.params
```
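For reference, the same single-Gaussian fit can be sketched with SciPy on simulated burst E values (illustrative only; the cell above uses the FRETBursts/lmfit machinery):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
E = rng.normal(0.6, 0.08, size=2000)  # simulated FRET efficiencies

counts, edges = np.histogram(E, bins=40, range=(0, 1), density=True)
centers = (edges[:-1] + edges[1:]) / 2

def gauss(x, amp, mu, sigma):
    # Unnormalized Gaussian model for the E histogram.
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

popt, _ = curve_fit(gauss, centers, counts, p0=[4.0, 0.5, 0.1])
print("fitted center:", round(popt[1], 3))  # close to the simulated 0.6
```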
This will export the data to a .csv file; enter the save location between the quotes. This csv file can be opened in Excel or Origin and contains information about each burst, most importantly E and S, but also things like burst length and width.
```
csvfile = "1cx.csv"
burstmatrix = bext.burst_data(bursts)
burstmatrix.to_csv(csvfile)
```
| github_jupyter |
# Branching
#### Marcel Lüthi, Departement Mathematik und Informatik, Universität Basel
### The if statement

### Statement blocks
Statement blocks are sequences of statements enclosed in braces:
```
{
    statement1;
    statement2;
    ...
    statementN;
}
```
The ``then`` and ``else`` branches of an ``if`` statement are each statement blocks.
### Side note: indentation
> Statements in statement blocks should be indented
<div style="display: block; margin-top:0.0cm">
<div style="display: inline-block">
Good
<pre><code class="language-java" data-trim>
if (n != 0) {
  n = n / 2;
  n--;
} else {
  n = n * 2;
  n++;
}
</code></pre></div>
<div style="display: inline-block">
Bad
<pre><code class="language-java" data-trim>
if (n != 0) {
n = n / 2;
n--;
} else {
n = n * 2;
n++;
}
</code></pre></div>
</div>
* Indentation does not matter to Java, but it does to the reader
### Comparison operators
Comparing two values yields true (``true``) or false (``false``)
| | Meaning | Example |
|------|-----------|----------|
| == | equal | x == 3 |
| != | not equal | x != y |
| > | greater than | 4 > 3 |
| < | less than | x + 1 < 0 |
| >= | greater than or equal | x >= y|
| <= | less than or equal | x <= y|
Used, for example, in an if statement
#### Mini exercise
* Complete the program so that only the statement that applies to the number z is printed.
```
int z = 5;
System.out.println("z is an even, positive number");
System.out.println("z is an even, negative number");
System.out.println("z is an odd, positive number");
System.out.println("z is an odd, negative number");
```
### Compound comparisons
And (``&&``) and or (``||``) operators
| x | y | x ``&&`` y | x \|\| y |
|---|---|--------|----------------|
| true | true | true | true |
| true | false | false | true |
| false | true | false | true |
| false | false | false | false |
The not operator (``!``)
| x | !x |
|---|---|
| true | false |
| false | true |
Example:
```java
if (x >= 0 && x <= 10 || x >= 100 && x <= 110) {
x = y;
}
```
### Datentyp boolean
Datentyp wie ``int``, aber mit nur zwei Werten ``true`` und ``false``.
Example:
```
int x = 1;
boolean p = false;
boolean q = x > 0;
p = p || q && x < 10;
```
#### Note
* Boolean values can be combined with &&, || and !.
* Every comparison yields a value of type boolean.
* Boolean values can be stored in boolean variables ("flags").
* Names of boolean variables should begin with an adjective: equal, full.
# Exercises
### Exercise 1: Maximum of three numbers
* Write a program that computes the maximum of three numbers.
* Write the program once with simple and once with compound conditions

### Exercise 2: Parameterize the drawn house
In longer programs it often happens that a complex sequence of statements differs only in small parts. This is illustrated in the following program, where we once again use turtle graphics.
* Introduce boolean variables ```hasWindow``` and ``hasChimney`` that make it possible to draw a house with a chimney, with a window, or with both.
* Introduce variables that make the drawing easier to parameterize.
```
// Load the turtle library
// These commands only work in Jupyter notebooks and are not valid Java.
%mavenRepo bintray https://dl.bintray.com/egp/maven
%maven ch.unibas.informatik:jturtle:0.5
import ch.unibas.informatik.jturtle.Turtle;
Turtle t = new Turtle();
// head
t.home();
t.penDown();
t.backward(50);
t.forward(50);
t.turnRight(45);
t.forward(50);
t.turnRight(90);
t.forward(20);
t.turnLeft(135);
t.forward(10);
t.turnRight(90);
t.forward(10);
t.turnRight(90);
t.forward(20);
t.turnLeft(45);
t.forward(15);
t.turnRight(45);
t.forward(50);
t.turnRight(90);
t.forward(70);
t.toImage();
```
#### Mini exercise:
* Add a branch that draws a window when a variable ```hasWindow``` is set to true.
* Introduce variables to parameterize the drawing (height/width of the house, etc.)
| github_jupyter |
```
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (8,10)
class CancelOut(keras.layers.Layer):
'''
CancelOut layer, keras implementation.
'''
def __init__(self, activation='sigmoid', cancelout_loss=True, lambda_1=0.002, lambda_2=0.001):
super(CancelOut, self).__init__()
self.lambda_1 = lambda_1
self.lambda_2 = lambda_2
self.cancelout_loss = cancelout_loss
if activation == 'sigmoid': self.activation = tf.sigmoid
if activation == 'softmax': self.activation = tf.nn.softmax
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1],),
initializer=tf.keras.initializers.Constant(1),
trainable=True,
)
def call(self, inputs):
if self.cancelout_loss:
self.add_loss( self.lambda_1 * tf.norm(self.w, ord=1) + self.lambda_2 * tf.norm(self.w, ord=2))
return tf.math.multiply(inputs, self.activation(self.w))
def get_config(self):
return {"activation": self.activation}
def plot_importance(importances):
indices = np.argsort(importances)
plt.title('Feature Importances')
plt.barh(range(len(indices)), importances[indices], color='b', align='center')
plt.yticks(range(len(indices)), [features[i] for i in indices])
plt.xlabel('Relative Importance')
plt.show()
scaler = StandardScaler()
X = scaler.fit_transform(load_breast_cancer()['data'])
y = load_breast_cancer()['target']
features = load_breast_cancer()['feature_names']
```
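At its core the layer simply multiplies the input elementwise by $\sigma(w)$, so features whose weights are driven negative by the L1/L2 penalty are gated towards zero. A NumPy illustration with toy weights (values assumed):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([3.0, 0.0, -3.0])  # learned per-feature weights (toy values)
x = np.array([1.0, 1.0, 1.0])   # one input example

gated = x * sigmoid(w)  # what CancelOut passes to the next layer
print(np.round(gated, 2))  # the third feature is almost cancelled out
```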
### Sigmoid + Loss
```
inputs = keras.Input(shape=(X.shape[1],))
x = CancelOut(activation='sigmoid')(inputs)
x = layers.Dense(32, activation="relu")(x)
#x = CancelOut()(x)
x = layers.Dense(32, activation="relu")(x)
#x = CancelOut()(x)
outputs = layers.Dense(1, activation='sigmoid')(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(loss='binary_crossentropy',
optimizer='adam')
model.summary()
model.fit(X, y, epochs=20, batch_size=8)
cancelout_feature_importance_sigmoid = model.get_weights()[0]
cancelout_feature_importance_sigmoid
```
### Softmax + No Loss
```
inputs = keras.Input(shape=(X.shape[1],))
x = CancelOut(activation='softmax', cancelout_loss=False)(inputs)
x = layers.Dense(32, activation="relu")(x)
#x = CancelOut()(x)
x = layers.Dense(32, activation="relu")(x)
#x = CancelOut()(x)
outputs = layers.Dense(1, activation='sigmoid')(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(loss='binary_crossentropy',
optimizer='adam')
model.summary()
model.fit(X, y, epochs=20, batch_size=8)
cancelout_feature_importance_softmax = model.get_weights()[0]
cancelout_feature_importance_softmax
```
### RandomForest Feature Importance
```
from sklearn.ensemble import RandomForestClassifier
rnd_clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=42).fit(X,y)
rf_importances = rnd_clf.feature_importances_
```
### Plots
```
print('Sigmoid + Loss')
plot_importance(cancelout_feature_importance_sigmoid)
print('Softmax + No Loss')
plot_importance(cancelout_feature_importance_softmax)
print("Random Forest")
plot_importance(rf_importances)
```
| github_jupyter |
# Deep Learning and Transfer Learning with pre-trained models
This notebook uses a pre-trained CNN to build a classifier.
```
# import required libs
import os
import keras
import numpy as np
from keras import backend as K
from keras import applications
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
import matplotlib.pyplot as plt
params = {'legend.fontsize': 'x-large',
'figure.figsize': (15, 5),
'axes.labelsize': 'x-large',
'axes.titlesize':'x-large',
'xtick.labelsize':'x-large',
'ytick.labelsize':'x-large'}
plt.rcParams.update(params)
%matplotlib inline
```
## Load VGG
```
vgg_model = applications.VGG19(include_top=False, weights='imagenet')
vgg_model.summary()
```
Set Parameters
```
batch_size = 128
num_classes = 10
epochs = 50
bottleneck_path = r'F:\work\kaggle\cifar10_cnn\bottleneck_features_train_vgg19.npy'
```
## Get CIFAR10 Dataset
```
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
y_train.shape
```
## Pretrained Model for Feature Extraction
```
if not os.path.exists(bottleneck_path):
bottleneck_features_train = vgg_model.predict(x_train,verbose=1)
np.save(open(bottleneck_path, 'wb'),
bottleneck_features_train)
else:
bottleneck_features_train = np.load(open(bottleneck_path,'rb'))
bottleneck_features_train[0].shape
bottleneck_features_test = vgg_model.predict(x_test,verbose=1)
```
## Custom Classifier
```
clf_model = Sequential()
clf_model.add(Flatten(input_shape=bottleneck_features_train.shape[1:]))
clf_model.add(Dense(512, activation='relu'))
clf_model.add(Dropout(0.5))
clf_model.add(Dense(256, activation='relu'))
clf_model.add(Dropout(0.5))
clf_model.add(Dense(num_classes, activation='softmax'))
```
## Visualize the network architecture
```
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(clf_model, show_shapes=True,
show_layer_names=True, rankdir='TB').create(prog='dot', format='svg'))
```
## Compile the model
```
clf_model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
```
## Train the classifier
```
clf_model.fit(bottleneck_features_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1)
```
## Predict and test model performance
```
score = clf_model.evaluate(bottleneck_features_test, y_test, verbose=1)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
### Assign label to a test image
```
def predict_label(img_idx,show_proba=True):
plt.imshow(x_test[img_idx],aspect='auto')
plt.title("Image to be Labeled")
plt.show()
print("Actual Class:{}".format(np.nonzero(y_test[img_idx])[0][0]))
test_image =np.expand_dims(x_test[img_idx], axis=0)
bf = vgg_model.predict(test_image,verbose=0)
pred_label = clf_model.predict_classes(bf,batch_size=1,verbose=0)
print("Predicted Class:{}".format(pred_label[0]))
if show_proba:
print("Predicted Probabilities")
print(clf_model.predict_proba(bf))
img_idx = 3999 # sample indices : 999,1999 and 3999
for img_idx in [999,1999,3999]:
predict_label(img_idx)
```
| github_jupyter |
##### Copyright 2019 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Feature columns visualization
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github.com/tensorflow/examples/blob/master/community/en/hashing_trick.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/tree/master/community/en/hashing_trick.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
This example demonstrates the use of `tf.feature_column.crossed_column` on some simulated Atlanta housing price data.
This spatial data is used primarily so the results can be easily visualized.
These functions are designed primarily for categorical data, not to build interpolation tables.
If you actually want to build smart interpolation tables in TensorFlow you may want to consider [TensorFlow Lattice](https://research.googleblog.com/2017/10/tensorflow-lattice-flexibility.html).
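The crossed column used below relies on the hashing trick: each (latitude bucket, longitude bucket) pair is treated as one categorical id and hashed into a fixed number of bins. A rough sketch of the idea (illustrative only; TensorFlow uses its own fingerprint hash, not Python's `hash`):

```python
n_bins = 2000

def cross(lat_bucket: int, lon_bucket: int) -> int:
    # Combine the two bucket ids and hash them into a fixed vocabulary.
    return hash((lat_bucket, lon_bucket)) % n_bins

bin_id = cross(42, 17)
assert 0 <= bin_id < n_bins
# Distinct pairs may collide in the same bin; that is the price
# of keeping the vocabulary size fixed.
```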
## Setup
```
import os
import subprocess
import tempfile
import tensorflow as tf
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['figure.figsize'] = 12, 6
mpl.rcParams['image.cmap'] = 'viridis'
```
## Build Synthetic Data
```
# Define the grid
min_latitude = 33.641336
max_latitude = 33.887157
delta_latitude = max_latitude-min_latitude
min_longitude = -84.558798
max_longitude = -84.287259
delta_longitude = max_longitude-min_longitude
resolution = 100
# Use RandomState so the behavior is repeatable.
R = np.random.RandomState(1)
# The price data will be a sum of Gaussians, at random locations.
n_centers = 20
centers = R.rand(n_centers, 2) # shape: (centers, dimensions)
# Each Gaussian has a maximum price contribution, at the center.
price_delta = 0.5+2*R.rand(n_centers)
# Each Gaussian also has a standard-deviation and variance.
std = 0.2*R.rand(n_centers) # shape: (centers)
var = std**2
def price(latitude, longitude):
# Convert latitude, longitude to x,y in [0,1]
x = (longitude - min_longitude)/delta_longitude
y = (latitude - min_latitude)/delta_latitude
# Cache the shape, and flatten the inputs.
shape = x.shape
assert y.shape == x.shape
x = x.flatten()
y = y.flatten()
# Convert x, y examples into an array with shape (examples, dimensions)
xy = np.array([x,y]).T
# Calculate the square distance from each example to each center.
components2 = (xy[:,None,:] - centers[None,:,:])**2 # shape: (examples, centers, dimensions)
r2 = components2.sum(axis=2) # shape: (examples, centers)
# Calculate the z**2 for each example from each center.
z2 = r2/var[None,:]
price = (np.exp(-z2)*price_delta).sum(1) # shape: (examples,)
# Restore the original shape.
return price.reshape(shape)
# Build the grid. We want `resolution` cells between `min` and `max` on each dimension
# so we need `resolution+1` evenly spaced edges. The centers are at the average of the
# upper and lower edge.
latitude_edges = np.linspace(min_latitude, max_latitude, resolution+1)
latitude_centers = (latitude_edges[:-1] + latitude_edges[1:])/2
longitude_edges = np.linspace(min_longitude, max_longitude, resolution+1)
longitude_centers = (longitude_edges[:-1] + longitude_edges[1:])/2
latitude_grid, longitude_grid = np.meshgrid(
latitude_centers,
longitude_centers)
# Evaluate the price at each center-point
actual_price_grid = price(latitude_grid, longitude_grid)
price_min = actual_price_grid.min()
price_max = actual_price_grid.max()
price_mean = actual_price_grid.mean()
price_mean
def show_price(price):
plt.imshow(
price,
# The color axis goes from `price_min` to `price_max`.
vmin=price_min, vmax=price_max,
# Put the image at the correct latitude and longitude.
extent=(min_longitude, max_longitude, min_latitude, max_latitude),
# Make the image square.
aspect = 1.0*delta_longitude/delta_latitude)
show_price(actual_price_grid)
```
## Build Datasets
```
# For test data we will use the grid centers.
test_features = {'latitude':latitude_grid.flatten(), 'longitude':longitude_grid.flatten()}
test_ds = tf.data.Dataset.from_tensor_slices((test_features,
actual_price_grid.flatten()))
test_ds = test_ds.cache().batch(512).prefetch(1)
# For training data we will use a set of random points.
train_latitude = min_latitude + np.random.rand(50000)*delta_latitude
train_longitude = min_longitude + np.random.rand(50000)*delta_longitude
train_price = price(train_latitude, train_longitude)
train_features = {'latitude':train_latitude, 'longitude':train_longitude}
train_ds = tf.data.Dataset.from_tensor_slices((train_features, train_price))
train_ds = train_ds.cache().shuffle(100000).batch(512).prefetch(1)
```
## Generate a plot from an Estimator
```
ag = actual_price_grid.reshape(resolution, resolution)
ag.shape
def plot_model(model, ds = test_ds):
# Create two plot axes
actual, predicted = plt.subplot(1,2,1), plt.subplot(1,2,2)
# Plot the actual price.
plt.sca(actual)
plt.pcolor(actual_price_grid.reshape(resolution, resolution))
# Generate predictions over the grid from the estimator.
pred = model.predict(ds)
# Convert them to a numpy array.
pred = np.fromiter((item for item in pred), np.float32)
  # Plot the predictions on the second axis.
plt.sca(predicted)
plt.pcolor(pred.reshape(resolution, resolution))
```
## A linear regressor
```
# Use `normalizer_fn` so that the model only sees values in [-0.5, 0.5]
norm_latitude = lambda latitude:(latitude-min_latitude)/delta_latitude - 0.5
norm_longitude = lambda longitude:(longitude-min_longitude)/delta_longitude - 0.5
linear_fc = [tf.feature_column.numeric_column('latitude', normalizer_fn = norm_latitude),
tf.feature_column.numeric_column('longitude', normalizer_fn = norm_longitude)]
# Build and train the model
model = tf.keras.Sequential([
tf.keras.layers.DenseFeatures(feature_columns=linear_fc),
tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.MeanAbsoluteError())
model.fit(train_ds, epochs=200, validation_data=test_ds)
plot_model(model)
```
## A DNN regressor
Important: Pure categorical data doesn't have the spatial relationships that make this example possible. Embeddings are a way your model can learn spatial relationships.
```
# Build and train the model
model = tf.keras.Sequential([
tf.keras.layers.DenseFeatures(feature_columns=linear_fc),
tf.keras.layers.Dense(100, activation='elu'),
tf.keras.layers.Dense(100, activation='elu'),
tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.MeanAbsoluteError())
model.fit(train_ds, epochs=200, validation_data=test_ds)
plot_model(model)
```
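The note above mentions embeddings: an embedding is just a trainable lookup table from category id to a dense vector, so related categories can learn similar vectors. A toy, untrained sketch (the table values are random, for illustration only):

```python
import numpy as np

n_categories, dim = 100, 4
rng = np.random.default_rng(0)
table = rng.normal(size=(n_categories, dim))  # would be trained in practice

ids = np.array([3, 17, 3])
vectors = table[ids]  # shape (3, 4); repeated ids share the same vector
```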
# Linear model with buckets
```
# Bucketize the latitude and longitude using the `edges`
latitude_bucket_fc = tf.feature_column.bucketized_column(
tf.feature_column.numeric_column('latitude'),
list(latitude_edges))
longitude_bucket_fc = tf.feature_column.bucketized_column(
tf.feature_column.numeric_column('longitude'),
list(longitude_edges))
seperable_fc = [
latitude_bucket_fc,
longitude_bucket_fc]
# Build and train the model
model = tf.keras.Sequential([
tf.keras.layers.DenseFeatures(feature_columns=seperable_fc),
tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.MeanAbsoluteError())
model.fit(train_ds, epochs=200, validation_data=test_ds)
plot_model(model)
```
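Bucketization itself is just binning against the edge list; a rough NumPy analogue of what `bucketized_column` does (illustrative, ignoring TensorFlow's exact boundary handling):

```python
import numpy as np

edges = np.linspace(0.0, 1.0, 11)  # 10 equal-width buckets on [0, 1]
values = np.array([0.05, 0.5, 0.95])
# Use the interior edges so the bucket ids run from 0 to 9.
bucket_ids = np.digitize(values, edges[1:-1])
print(bucket_ids.tolist())  # → [0, 5, 9]
```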
# Using `crossed_column` on its own.
```
# Cross the bucketized columns, using 2000 hash bins (for an average weight sharing of 5).
crossed_lat_lon_fc = tf.feature_column.crossed_column(
[latitude_bucket_fc, longitude_bucket_fc], 2000)
crossed_lat_lon_fc = tf.feature_column.indicator_column(crossed_lat_lon_fc)
crossed_fc = [crossed_lat_lon_fc]
# Build and train the model
model = tf.keras.Sequential([
tf.keras.layers.DenseFeatures(feature_columns=crossed_fc),
tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.MeanAbsoluteError())
model.fit(train_ds, epochs=200, validation_data=test_ds)
plot_model(model)
```
# Using raw categories with `crossed_column`
The model generalizes better if it also has access to the raw categories, outside of the cross.
```
# Build and train the model
model = tf.keras.Sequential([
tf.keras.layers.DenseFeatures(feature_columns=crossed_fc+seperable_fc),
tf.keras.layers.Dense(1, kernel_regularizer=tf.keras.regularizers.l2(0.0001)),
])
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.MeanAbsoluteError(),
metrics=[tf.keras.losses.MeanAbsoluteError()])
model.fit(train_ds, epochs=200, validation_data=test_ds)
plot_model(model)
```
| github_jupyter |
# Example 1b: Spin-Bath model (Underdamped Case)
### Introduction
The HEOM method solves the dynamics and steady state of a system and its environment, the latter of which is encoded in a set of auxiliary density matrices.
In this example we show the evolution of a single two-level system in contact with a single Bosonic environment. The properties of the system are encoded in Hamiltonian, and a coupling operator which describes how it is coupled to the environment.
The Bosonic environment is implicitly assumed to obey a particular Hamiltonian (see paper), the parameters of which are encoded in the spectral density, and subsequently the free-bath correlation functions.
In the example below we show how to model the underdamped Brownian motion Spectral Density.
### Brownian motion (underdamped) spectral density
The underdamped spectral density is:
$$J_U(\omega) = \frac{\alpha^2 \Gamma \omega}{(\omega_c^2 - \omega^2)^2 + \Gamma^2 \omega^2}.$$
Here $\alpha$ scales the coupling strength, $\Gamma$ is the cut-off (damping) rate, and $\omega_c$ defines the resonance frequency. To use this bath with the HEOM we need an exponential decomposition of its correlation function; the Matsubara decomposition, in real and imaginary parts, is:
\begin{equation*}
c_k^R = \begin{cases}
\alpha^2 \coth(\beta( \Omega + i\Gamma/2)/2)/4\Omega & k = 0\\
\alpha^2 \coth(\beta( \Omega - i\Gamma/2)/2)/4\Omega & k = 0\\
-2\alpha^2\Gamma/\beta \frac{\epsilon_k }{((\Omega + i\Gamma/2)^2 + \epsilon_k^2)((\Omega - i\Gamma/2)^2 + \epsilon_k^2)} & k \geq 1\\
\end{cases}
\end{equation*}
\begin{equation*}
\nu_k^R = \begin{cases}
-i\Omega + \Gamma/2, i\Omega +\Gamma/2, & k = 0\\
{2 \pi k} / {\beta } & k \geq 1\\
\end{cases}
\end{equation*}
\begin{equation*}
c_k^I = \begin{cases}
i\alpha^2 /4\Omega & k = 0\\
-i\alpha^2 /4\Omega & k = 0\\
\end{cases}
\end{equation*}
\begin{equation*}
\nu_k^I = \begin{cases}
i\Omega + \Gamma/2, -i\Omega + \Gamma/2, & k = 0\\
\end{cases}
\end{equation*}
Note that in the above, and the following, we set $\hbar = k_\mathrm{B} = 1$.
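The decomposition above can be checked numerically before handing the coefficients to a solver: build the $(c_k, \nu_k)$ lists exactly as in the formulas (with $\lambda$ playing the role of $\alpha$, and parameter values mirroring the cell below) and verify that $\sum_k c^R_k e^{-\nu^R_k t} + i\sum_k c^I_k e^{-\nu^I_k t}$ reproduces the closed-form correlation function. This is a standalone NumPy sketch, independent of QuTiP:

```python
import numpy as np

# parameters mirroring the cell below
lam, gamma, w0, beta, Nk = 0.5, 0.1, 1.0, 1.0, 3
Om = np.sqrt(w0**2 - (gamma / 2) ** 2)   # oscillation frequency Omega
Gam = gamma / 2.0                         # damping rate

coth = lambda z: 1.0 / np.tanh(z)

# real-part coefficients: two resonant (k = 0) terms + Nk Matsubara terms
ckR = [lam**2 / (4 * Om) * coth(beta * (Om + 1j * Gam) / 2),
       lam**2 / (4 * Om) * coth(beta * (Om - 1j * Gam) / 2)]
vkR = [-1j * Om + Gam, 1j * Om + Gam]
for k in range(1, Nk + 1):
    ek = 2 * np.pi * k / beta
    ckR.append(-2 * lam**2 * gamma / beta * ek
               / (((Om + 1j * Gam) ** 2 + ek**2)
                  * ((Om - 1j * Gam) ** 2 + ek**2)))
    vkR.append(ek)

# imaginary-part coefficients: two resonant terms only
ckI = [-1j * lam**2 / (4 * Om), 1j * lam**2 / (4 * Om)]
vkI = [1j * Om + Gam, -1j * Om + Gam]

def c_decomposed(t):
    # C(t) = C_R(t) + i C_I(t) from the exponential decomposition
    return (sum(c * np.exp(-v * t) for c, v in zip(ckR, vkR))
            + 1j * sum(c * np.exp(-v * t) for c, v in zip(ckI, vkI)))

def c_closed(t):
    # closed-form correlation function, as in the c(t) defined below
    Cr = (coth(beta * (Om + 1j * Gam) / 2) * np.exp(1j * Om * t)
          + coth(beta * (Om - 1j * Gam) / 2) * np.exp(-1j * Om * t))
    Ci = np.exp(-1j * Om * t) - np.exp(1j * Om * t)
    mats = sum((-2 * lam**2 * gamma / beta) * (2 * np.pi * k / beta)
               * np.exp(-2 * np.pi * k / beta * abs(t))
               / (((Om + 1j * Gam) ** 2 + (2 * np.pi * k / beta) ** 2)
                  * ((Om - 1j * Gam) ** 2 + (2 * np.pi * k / beta) ** 2))
               for k in range(1, Nk + 1))
    return lam**2 / (4 * Om) * np.exp(-Gam * abs(t)) * (Cr + Ci) + mats

ts = np.linspace(0, 5, 50)
err = max(abs(c_decomposed(t) - c_closed(t)) for t in ts)
print("max reconstruction error:", err)   # should be at machine precision
```

The same coefficient lists, split into `ckAR`/`vkAR` and `ckAI`/`vkAI`, are what the solver cell below passes to `BosonicHEOMSolver`.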
```
%pylab inline
from qutip import *
%load_ext autoreload
%autoreload 2
from bofin.heom import BosonicHEOMSolver
def cot(x):
return 1./np.tan(x)
def coth(x):
"""
Calculates the coth function.
Parameters
----------
x: np.ndarray
Any numpy array or list like input.
Returns
-------
cothx: ndarray
The coth function applied to the input.
"""
return 1/np.tanh(x)
# Defining the system Hamiltonian
eps = .5 # Energy of the 2-level system.
Del = 1.0 # Tunnelling term
Hsys = 0.5 * eps * sigmaz() + 0.5 * Del* sigmax()
# Initial state of the system.
rho0 = basis(2,0) * basis(2,0).dag()
# System-bath coupling (Drude-Lorentz spectral density)
Q = sigmaz() # coupling operator
#solver time steps
nsteps = 1000
tlist = np.linspace(0, 50, nsteps)
#correlation function plotting time steps
tlist_corr = np.linspace(0, 20, 1000)
#Bath properties:
gamma = .1 # cut off frequency
lam = .5 # coupling strength
w0 = 1 #resonance frequency
T = 1
beta = 1./T
#HEOM parameters
NC = 10 # cut off parameter for the bath
#Spectral Density
wlist = np.linspace(0, 5, 1000)
pref = 1.
J = [lam**2 * gamma * w / ((w0**2-w**2)**2 + (gamma**2)*(w**2)) for w in wlist]
# Plot the results
fig, axes = plt.subplots(1, 1, sharex=True, figsize=(8,8))
axes.plot(wlist, J, 'r', linewidth=2)
axes.set_xlabel(r'$\omega$', fontsize=28)
axes.set_ylabel(r'J', fontsize=28)
#First of all, let's look at the correlation functions themselves
Nk = 3 # number of exponentials
Om = np.sqrt(w0**2 - (gamma/2)**2)
Gamma = gamma/2.
#mats
def Mk(t,k):
ek = 2*pi*k/beta
return (-2*lam**2*gamma/beta)*ek*exp(-ek*abs(t))/(((Om+1.0j*Gamma)**2+ek**2)*((Om-1.0j*Gamma)**2+ek**2))
def c(t):
Cr = coth(beta*(Om+1.0j*Gamma)/2)*exp(1.0j*Om*t)+coth(beta*(Om-1.0j*Gamma)/2)*exp(-1.0j*Om*t)
#Cr = coth(beta*(Om+1.0j*Gamma)/2)*exp(1.0j*Om*t)+conjugate(coth(beta*(Om+1.0j*Gamma)/2)*exp(1.0j*Om*t))
Ci = exp(-1.0j*Om*t)-exp(1.0j*Om*t)
return (lam**2/(4*Om))*exp(-Gamma*abs(t))*(Cr+Ci) + sum([Mk(t,k) for k in range(1,Nk+1)])
plt.figure(figsize=(8,8))
plt.plot(tlist_corr ,[real(c(t)) for t in tlist_corr ], '-', color="black", label="Re[C(t)]")
plt.plot(tlist_corr ,[imag(c(t)) for t in tlist_corr ], '-', color="red", label="Im[C(t)]")
plt.legend()
plt.show()
#The Matsubara terms modify the real part
Nk = 3# number of exponentials
Om = np.sqrt(w0**2 - (gamma/2)**2)
Gamma = gamma/2.
#mats
def Mk(t,k):
ek = 2*pi*k/beta
return (-2*lam**2*gamma/beta)*ek*exp(-ek*abs(t))/(((Om+1.0j*Gamma)**2+ek**2)*((Om-1.0j*Gamma)**2+ek**2))
plt.figure(figsize=(8,8))
plt.plot(tlist_corr ,[sum([real(Mk(t,k)) for k in range(1,4)]) for t in tlist_corr ], '-', color="black", label="Re[M(t)] Nk=3")
plt.plot(tlist_corr ,[sum([real(Mk(t,k)) for k in range(1,6)]) for t in tlist_corr ], '--', color="red", label="Re[M(t)] Nk=5")
plt.legend()
plt.show()
#Lets collate the parameters for the HEOM
ckAR = [(lam**2/(4*Om))*coth(beta*(Om+1.0j*Gamma)/2),(lam**2/(4*Om))*coth(beta*(Om-1.0j*Gamma)/2)]
ckAR.extend([(-2*lam**2*gamma/beta)*( 2*pi*k/beta)/(((Om+1.0j*Gamma)**2+ (2*pi*k/beta)**2)*((Om-1.0j*Gamma)**2+( 2*pi*k/beta)**2))+0.j for k in range(1,Nk+1)])
vkAR = [-1.0j*Om+Gamma,1.0j*Om+Gamma]
vkAR.extend([2 * np.pi * k * T + 0.j for k in range(1,Nk+1)])
factor=1./4.
ckAI =[-factor*lam**2*1.0j/(Om),factor*lam**2*1.0j/(Om)]
vkAI = [-(-1.0j*(Om) - Gamma),-(1.0j*(Om) - Gamma)]
NC=14
NR = len(ckAR)
NI = len(ckAI)
Q2 = [Q for kk in range(NR+NI)]
print(Q2)
options = Options(nsteps=15000, store_states=True, rtol=1e-14, atol=1e-14)
HEOM = BosonicHEOMSolver(Hsys, Q2, ckAR, ckAI, vkAR, vkAI, NC, options=options)
result = HEOM.run(rho0, tlist)
# Define some operators with which we will measure the system
# 1,1 element of density matrix - corresponding to the ground state
P11p=basis(2,0) * basis(2,0).dag()
P22p=basis(2,1) * basis(2,1).dag()
# 1,2 element of density matrix - corresponding to coherence
P12p=basis(2,0) * basis(2,1).dag()
# Calculate expectation values in the bases
P11 = expect(result.states, P11p)
P22 = expect(result.states, P22p)
P12= expect(result.states, P12p)
#DL = " 2*pi* 2.0 * {lam} / (pi * {gamma} * {beta}) if (w==0) else 2*pi*(2.0*{lam}*{gamma} *w /(pi*(w**2+{gamma}**2))) * ((1/(exp((w) * {beta})-1))+1)".format(gamma=gamma, beta = beta, lam = lam)
UD = " 2* {lam}**2 * {gamma} / ( {w0}**4 * {beta}) if (w==0) else 2* ({lam}**2 * {gamma} * w /(({w0}**2 - w**2)**2 + {gamma}**2 * w**2)) * ((1/(exp((w) * {beta})-1))+1)".format(gamma = gamma, beta = beta, lam = lam, w0 = w0)
optionsODE = Options(nsteps=15000, store_states=True,rtol=1e-12,atol=1e-12)
outputBR = brmesolve(Hsys, rho0, tlist, a_ops=[[sigmaz(),UD]], options = optionsODE)
# Calculate expectation values in the bases
P11BR = expect(outputBR.states, P11p)
P22BR = expect(outputBR.states, P22p)
P12BR = expect(outputBR.states, P12p)
#Prho0BR = expect(outputBR.states,rho0)
#The thermal state of a reaction coordinate should, at high temperatures and for not-too-broad baths, tell us the steady state
dot_energy, dot_state = Hsys.eigenstates()
deltaE = dot_energy[1] - dot_energy[0]
gamma2 = gamma
wa = w0 # reaction coordinate frequency
g = lam/sqrt(2*wa)
#nb = (1 / (np.exp(wa/w_th) - 1))
NRC = 10
Hsys_exp = tensor(qeye(NRC), Hsys)
Q_exp = tensor(qeye(NRC), Q)
a = tensor(destroy(NRC), qeye(2))
H0 = wa * a.dag() * a + Hsys_exp
# interaction
H1 = (g * (a.dag() + a) * Q_exp)
H = H0 + H1
#print(H.eigenstates())
energies, states = H.eigenstates()
rhoss = 0*states[0]*states[0].dag()
for kk, energ in enumerate(energies):
rhoss += (states[kk]*states[kk].dag()*exp(-beta*energies[kk]))
rhoss = rhoss/rhoss.norm()
P12RC = tensor(qeye(NRC), basis(2,0) * basis(2,1).dag())
P12RC = expect(rhoss,P12RC)
P11RC = tensor(qeye(NRC), basis(2,0) * basis(2,0).dag())
P11RC = expect(rhoss,P11RC)
matplotlib.rcParams['figure.figsize'] = (7, 5)
matplotlib.rcParams['axes.titlesize'] = 25
matplotlib.rcParams['axes.labelsize'] = 30
matplotlib.rcParams['xtick.labelsize'] = 28
matplotlib.rcParams['ytick.labelsize'] = 28
matplotlib.rcParams['legend.fontsize'] = 28
matplotlib.rcParams['axes.grid'] = False
matplotlib.rcParams['savefig.bbox'] = 'tight'
matplotlib.rcParams['lines.markersize'] = 5
matplotlib.rcParams['font.family'] = 'STIXgeneral'
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams["font.serif"] = "STIX"
matplotlib.rcParams['text.usetex'] = False
# Plot the results
fig, axes = plt.subplots(1, 1, sharex=True, figsize=(12,7))
plt.yticks([P11RC,0.6,1.0],[0.38,0.6,1])
axes.plot(tlist, np.real(P11BR), 'y--', linewidth=3, label="Bloch-Redfield")
axes.plot(tlist, np.real(P11), 'b', linewidth=3, label="Matsubara $N_k=3$")
axes.plot(tlist, [P11RC for t in tlist], color='black', linestyle="-.",linewidth=2, label="Thermal state")
axes.locator_params(axis='y', nbins=6)
axes.locator_params(axis='x', nbins=6)
axes.set_ylabel(r'$\rho_{11}$',fontsize=30)
axes.set_xlabel(r'$t \Delta$',fontsize=30)
axes.locator_params(axis='y', nbins=4)
axes.locator_params(axis='x', nbins=4)
axes.legend(loc=0)
fig.savefig("figures/fig3.pdf")
from qutip.ipynbtools import version_table
version_table()
```
# Dataprep
### Objective
Crawls through the raw_data directory and converts the diffusion and FLAIR images into data arrays.
### Prerequisites
All diffusion and FLAIR images should be registered and stored in NIfTI file format.
### Data organisation
- All b0 diffusion should be named "patientid_hX_DWIb0.nii.gz" where "hX" corresponds to time delay and can be "h0" or "h1" (to stratify on delay)
- All b1000 diffusion should be named "patientid_hX_DWIb1000.nii.gz" where "hX" corresponds to time delay and can be "h0" or "h1" (to stratify on delay)
- All corresponding FLAIR sequences should be named: "patientid_hX_qX_FLAIR.nii.gz" where "qX" corresponds to quality and can be "q0" or "q1" or "q2" (to stratify on quality)
- Optionally, you can add a weighted mask "patientid_hX_MASK.nii.gz" with values between 0 (background), 1 (brain mask) and 2 (stroke region) that will be used for training weight. If you don't provide it, a crude stroke segmentation with ADC < 600 will be used as a weighting map.
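The naming convention above can be parsed mechanically. A small hedged sketch (the `parse_name` helper is illustrative only — the crawl below uses `glob` patterns instead):

```python
import os
import re

# pattern for "patientid_hX[_qX]_SEQ.nii.gz" as described above
_NAME_RE = re.compile(
    r"(?P<pat>[^_]+)_h(?P<delay>[01])"
    r"(?:_q(?P<quality>[0-2]))?_(?P<seq>DWIb0|DWIb1000|FLAIR|MASK)\.nii\.gz$"
)

def parse_name(path):
    """Split a raw_data file name into (patient, delay, quality, sequence)."""
    m = _NAME_RE.match(os.path.basename(path))
    if m is None:
        raise ValueError("unexpected file name: " + path)
    d = m.groupdict()
    quality = int(d["quality"]) if d["quality"] is not None else None
    return d["pat"], int(d["delay"]), quality, d["seq"]

print(parse_name("raw_data/pat001_h0_DWIb1000.nii.gz"))  # ('pat001', 0, None, 'DWIb1000')
print(parse_name("raw_data/pat001_h1_q2_FLAIR.nii.gz"))  # ('pat001', 1, 2, 'FLAIR')
```

Note that only FLAIR files carry a quality tag, so the helper returns `None` for the other sequences.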
## Load modules
```
import os, glob, h5py
import numpy as np
from skimage.morphology import dilation, opening
from modules.niftitools import twoniftis2array, flairnifti2array, masknifti2array
```
## Crawl through files
```
dwifiles_precheck = glob.glob(os.path.join("raw_data", "*_DWIb0.nii.gz"))
patnames, timepoints, qualities, b0files, b1000files, flairfiles, maskfiles = [], [], [], [], [], [], []
num_patients = 0
for dwifile in dwifiles_precheck:
name, timepoint, _ = os.path.basename(dwifile).split("_")
timepoint = int(timepoint.replace("h",""))
matchesb1000 = glob.glob(os.path.join("raw_data", name+"_h"+str(timepoint)+"_DWIb1000.nii.gz"))
matchesFlair = glob.glob(os.path.join("raw_data", name+"_h"+str(timepoint)+"_q*_FLAIR.nii.gz"))
if len(matchesFlair) and len(matchesb1000):
_, _, quality, _ = os.path.basename(matchesFlair[0]).split("_")
patnames.append(name)
timepoints.append(timepoint)
qualities.append(int(quality.replace("q","")))
b0files.append(dwifile)
b1000files.append(matchesb1000[0])
flairfiles.append(matchesFlair[0])
matchesMask = glob.glob(os.path.join("raw_data", name+"_h"+str(timepoint)+"_MASK.nii.gz"))
if len(matchesMask):
maskfiles.append(matchesMask[0])
else:
maskfiles.append(None)
num_patients += 1
```
## Create data arrays
```
z_slices = 25
outputdir = "data"
with h5py.File(os.path.join(outputdir,"metadata.hdf5"), "w") as metadata:
metadata.create_dataset("patientnames", data=np.array(patnames, dtype="S"))
metadata.create_dataset("shape_x", data=(num_patients,256,256,z_slices,3))
metadata.create_dataset("shape_y", data=(num_patients,256,256,z_slices,1))
metadata.create_dataset("shape_mask", data=(num_patients,256,256,z_slices,1))
metadata.create_dataset("shape_meta", data=(num_patients,2))
fx = np.memmap(os.path.join(outputdir,"data_x.dat"), dtype="float32", mode="w+",
shape=(num_patients,256,256,z_slices,3))
fy = np.memmap(os.path.join(outputdir,"data_y.dat"), dtype="float32", mode="w+",
shape=(num_patients,256,256,z_slices,1))
fmask = np.memmap(os.path.join(outputdir,"data_mask.dat"), dtype="uint8", mode="w+",
shape=(num_patients,256,256,z_slices,1))
fmeta = np.memmap(os.path.join(outputdir,"data_meta.dat"), dtype="float32", mode="w+",
shape=(num_patients,2))
if num_patients > 0:
print("Imported the following patients:", end=" ")
for i in range(num_patients):
if i>0:
print(", ",end="")
fmeta[i,0] = qualities[i]
fmeta[i,1] = timepoints[i]
Xdata, mask, _ = twoniftis2array(b0files[i], b1000files[i],z_slices)
Xdata = Xdata.transpose(1,2,3,0)
fx[i] = Xdata
if maskfiles[i] is not None:
fmask[i] = masknifti2array(maskfiles[i],z_slices)[...,np.newaxis]
else:
crudemask = dilation(dilation(dilation(opening(np.logical_and(mask, Xdata[...,2]<600)))))
crudemask = crudemask.astype("uint8") + mask.astype("uint8")
fmask[i] = crudemask[...,np.newaxis]
fy[i] = flairnifti2array(flairfiles[i],mask,z_slices)[...,np.newaxis]
print(patnames[i], end="")
del fx, fy, fmask, fmeta
```
# DAY0 - Looking for Dataset + Problem
```
# needed to make web requests
import requests
#store the data we get as a dataframe
import pandas as pd
#convert the response as a structured json
import json
#mathematical operations on lists
import numpy as np
#parse the datetimes we get from NOAA
from datetime import datetime
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from datetime import timedelta
from sklearn.metrics import accuracy_score
#add the access token you got from NOAA
Token = 'xKIlBHakeOEdyBfhPkKcDKyLzjofRpNY'
#MIAMI INTERNATIONAL AIRPORT, FL US station
station_id = 'GHCND:USW00012839'
# https://www.ncdc.noaa.gov/cdo-web/datatools/findstation
#initialize lists to store data
dates_temp = []
dates_prcp = []
temps = []
prcp = []
#for each year from 2015-2019 ...
for year in range(2015, 2020):
year = str(year)
print('working on year '+year)
#make the api call
r = requests.get('https://www.ncdc.noaa.gov/cdo-web/api/v2/data?datasetid=GHCND&datatypeid=TAVG&limit=1000&stationid='+station_id+'&startdate='+year+'-01-01&enddate='+year+'-12-31', headers={'token':Token})
#load the api response as a json
d = json.loads(r.text)
#get all items in the response which are average temperature readings
avg_temps = [item for item in d['results'] if item['datatype']=='TAVG']
#get the date field from all average temperature readings
dates_temp += [item['date'] for item in avg_temps]
#get the actual average temperature from all average temperature readings
temps += [item['value'] for item in avg_temps]
#initialize dataframe
df_temp = pd.DataFrame()
#populate date and average temperature fields (cast string date to datetime and convert temperature from tenths of Celsius to Fahrenheit)
df_temp['date'] = [datetime.strptime(d, "%Y-%m-%dT%H:%M:%S") for d in dates_temp]
df_temp['avgTemp'] = [float(v)/10.0*1.8 + 32 for v in temps]
df_temp['date'].head()
df_temp['avgTemp'].head()
# ...not taking this approach...
```
# new approach ->
# DAY1 - Brainstorming & Data Preparation
Idea generation & planning
Data gathering & cleaning
Data storage
# Let's start with the Solar Dataset
```
import pandas as pd
solar = pd.read_csv('/Users/gracemartinez/Downloads/solar.csv')
solar.head()
solar.shape
solar.columns
# replacing space between words with underscore
solar.columns = solar.columns.str.replace(' ','_')
solar.columns
```
```
# verifying types of data
solar.dtypes
# need to know what each column means/represents to know if they're a correct type
'''
- Need to change:
Calendar_Date object -> (-)Y/M/D, ex. -1997 May 22
Eclipse_Time object -> Date
- will drop for python, have again for tableau:
Latitude object, separate into 2 columns: decimal# & Letter
Longitude object, separate into 2 columns: decimal# & Letter
- i dont think i need them: need to see the correlation with the Y
Path_Width_(km) object 1/3 of null values.
Central_Duration object 1/3 of null values.
'''
# how many of each type in the data
solar['Eclipse_Type'].value_counts()
# we will data clean by putting all common categories together
# only 4 types of lunar eclipse: P, A, T, H
def eclipseClean(x):
if 'P' in x:
return('P')
if 'A' in x:
return('A')
if 'T' in x:
return('T')
if 'H' in x:
return('H')
# map function; recount the types in new category types
solar['Eclipse_Type'] = list(map(eclipseClean,solar['Eclipse_Type']))
solar['Eclipse_Type'].value_counts()
len(solar['Eclipse_Type'].value_counts())
# so now we have the 4 main categories of 'Eclipse Type',
# but we can drop 'H' column because it's "Hybrid or Annular/Total Eclipse."
# and we're only dealing with Partial, Annular, or Total Eclipses.
# we're removing the hybrid category type 'H'
# because the data will be compromised/disparity. might give unnecessary outliers
solar = solar.drop(solar[solar.Eclipse_Type == 'H'].index)
# recount the new 3 categories
solar['Eclipse_Type'].value_counts()
# now let's look at the Latitude & Longitude columns
solar.Latitude.head()
# Using regex to separate Latitude & Longitude columns
import re
solar['Latitude_Number'] = solar['Latitude'].str.replace('([A-Z]+)', '', regex=True)
solar['Latitude_Letter'] = solar['Latitude'].str.extract('([A-Z]+)')
solar.head()
# Same with Logitude
solar['Longitude_Number'] = solar['Longitude'].str.replace('([A-Z]+)', '', regex=True)
solar['Longitude_Letter'] = solar['Longitude'].str.extract('([A-Z]+)')
solar.head()
# Dropping original Latitude & Longitude columns
solar.drop(columns =["Latitude", "Longitude"], inplace = True)
solar.head()
solar.isnull().sum()
# how much correlation do the columns with Null have
solar.isnull().sum() / solar.shape[0]
# Need to drop 2 columns with high missing null values
solar = solar.drop(["Path_Width_(km)", "Central_Duration"], axis=1)
# Also drop 'Catalog_Number' column since it is just like the index, hence unnecessary
solar = solar.drop(["Catalog_Number"], axis=1)
solar.head()
len(solar.columns)
# make new column with no negative symbol
def c0(x):
if '-' in x:
x = x.replace('-','')
return x
solar['Calendar_Date_Clean'] = list(map(c0, solar['Calendar_Date']))
solar.head()
# Look for months only
import re
re.findall('[A-Za-z]+', solar['Calendar_Date_Clean'][0])
# too much time trying to convert to correct datetime format,
# used simple regex to remove negative symbol and extracted month
def c1(x):
if '-' in x:
x = x.replace('-','')
return((re.findall('[A-Za-z]+', x))[0])
solar['Calendar_Date_Month'] = list(map(c1, solar['Calendar_Date']))
solar['Calendar_Date_Month']
# Look for years only
def c2(x):
if '-' in x:
x = x.replace('-','')
temp = re.findall('\d\d\d\d', x)
if len(temp)>0:
return temp[0]
else:
return temp
solar['Calendar_Date_Year'] = list(map(c2, solar['Calendar_Date']))
solar.head()
# drop original column
solar = solar.drop(["Calendar_Date"], axis=1)
solar.head()
# Use only first 800 rows, encompassing dates beginning from the 17th century. (1601-1999)
# Gregorian calendar starts after 1582
solar[0:800]
# Saving for future Tableau usage
solar.to_csv('Solar_tableau.csv')
SolarCategoricals = solar.select_dtypes(object)
SolarCategoricals
# Need to convert Latitude_Number & Longitude_Number to float64 type
solar["Latitude_Number"] = pd.to_numeric(solar["Latitude_Number"])
solar.head()
solar["Longitude_Number"] = pd.to_numeric(solar["Longitude_Number"])
solar.head()
solar.dtypes
SolarCategoricals = solar.select_dtypes(object)
SolarCategoricals.head()
len(SolarCategoricals.columns)
# Total columns
len(solar.dtypes)
solar = solar.drop(['Latitude_Number', 'Longitude_Number', 'Latitude_Letter', 'Longitude_Letter', 'Eclipse_Time'], axis=1)
SolarNumericals = solar._get_numeric_data()
SolarNumericals
# use the corr function on the numerical columns
S_corr_matrix = SolarNumericals.corr()
S_corr_matrix.head()
# set fig size for better readability of the heatmap
fig, ax = plt.subplots(figsize=(8,8))
S_heatmap = sns.heatmap(S_corr_matrix, annot =True, ax=ax)
S_heatmap
# Due to high correlation, we are dropping 'Saros_Number' & 'Lunation_Number'
solar = solar.drop(['Saros_Number', 'Lunation_Number'], axis=1)
# update
SolarNumericals = solar._get_numeric_data()
SolarNumericals
S_corr_matrix = SolarNumericals.corr()
S_corr_matrix.head()
fig, ax = plt.subplots(figsize=(8,8))
S_heatmap = sns.heatmap(S_corr_matrix, annot =True, ax=ax)
S_heatmap
plt.hist(solar["Eclipse_Type"], bins = len(solar["Eclipse_Type"].unique()))
plt.xticks(rotation='vertical')
# normalize numerical values
import pandas as pd
from sklearn import preprocessing
x = SolarNumericals.values #returns a numpy array
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
df = pd.DataFrame(x_scaled)
df.head()
solar.head()
# Now let's play with the categories!
solar.dtypes
SolarCategoricals = solar.select_dtypes(object)
SolarCategoricals.head()
from sklearn import preprocessing
solar.head()
solar.columns
# there are 2 categorical & numericals
'''
The Target[Y] is finding the Eclipse_Type
The Features[X] are what will determine the best outcome for Y
need to determine which are the best features to use to get the prediction
while having a high measure of accuracy.
We will do a Train/Test Split in order to verify.
'''
```
```
solar
numericals = solar._get_numeric_data()
numericals = pd.DataFrame(numericals)
numericals
# Normalize x values
from sklearn.preprocessing import Normalizer
transformer = Normalizer().fit(numericals)
normalized_x = transformer.transform(numericals)
pd.DataFrame(normalized_x)
categoricals = solar.select_dtypes('object')
categoricals = categoricals['Eclipse_Type']
y = categoricals
# importing the necessary libraries
import pandas as pd
from sklearn import linear_model
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
# defining the target variable (dependent variable) as y
y = solar.Eclipse_Type
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
y = le.fit_transform(y)
y
# creating training and testing variables
# test_size = the percentage of the data for testing. It’s usually around 80/20 or 70/30. In this case 80/20
X_train, X_test, y_train, y_test = train_test_split(normalized_x, y, test_size=0.2)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
```
# LINEAR REGRESSION
```
'''
# fitting the model on the training data
lm = linear_model.LinearRegression()
model = lm.fit(X_train, y_train)
predictions = lm.predict(X_test)
# ...nvm i have to use logistic regression for this CLASSIFICATION PROBLEM *eye roll, sweats*
# show first five predicted values
predictions[0:5]
# plotting the model - the line / model
plt.scatter(y_test, predictions)
plt.xlabel("True_Values")
plt.ylabel("Predictions")
'''
```
# LOGISTIC REGRESSION
```
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
# Training the Logistic Regression Model:
# Split data into 'X' features and 'y' target label sets
X = normalized_x
y = le.fit_transform(y)
# Import module to split dataset
from sklearn.model_selection import train_test_split
# Split data set into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,) # random_state= _no._ simply sets a seed to the random generator, so that your train-test splits are always deterministic. If you don't set a seed, it is different each time.
# Import module for fitting
from sklearn.linear_model import LogisticRegression
# Create instance (i.e. object) of LogisticRegression
logmodel = LogisticRegression()
# Fit the model using the training data
# X_train -> parameter supplies the data features
# y_train -> parameter supplies the target labels
logmodel.fit(X_train, y_train)
"""
NOW,
Evaluate the Model by reviewing the classification report or confusion matrix.
By reviewing these tables, we are able to evaluate the model.
Below we are able to identify that the model has a precision of 51.4% accuracy.
To improve this we could gather more data, conduct further feature engineering and more to continue to adjust.
"""
pd.DataFrame(y_test)
from sklearn.metrics import classification_report, accuracy_score
predictions = logmodel.predict(pd.DataFrame(X_test))
print(classification_report(y_test, predictions))
print(accuracy_score(y_test, predictions))
# has a 53% model accuracy
# varies everytime i run it.
```
# RANDOM FOREST ALGORITHM
```
# Dividing data into training and testing sets:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# training our random forests to solve this classification problem
from sklearn.ensemble import RandomForestClassifier
randomForestClassification = RandomForestClassifier(n_estimators=100,random_state=259)
randomForestClassification.fit(X_train, y_train)
y_pred = randomForestClassification.predict(X_test)
pd.Series(y_pred).value_counts()
# Evaluating the Algorithm -
"""
For classification problems the metrics used to evaluate an algorithm are
accuracy, confusion matrix, precision, recall, and the F1 score (also known as the balanced F-score or F-measure; it is a weighted average of precision and recall, reaching its best value at 1 and its worst at 0).
Executing the following script to find these values:
"""
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
# print(confusion_matrix(y_test,y_pred))
print(classification_report(y_test,y_pred))
print(accuracy_score(y_test, y_pred))
# The accuracy achieved for by our random forest classifier with 100 trees is 85%.
# varies everytime i run it.
randomForestClassification.feature_importances_
```
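As a reminder of the F1 score mentioned in the cell above: it is the harmonic mean of precision and recall, not the arithmetic one, so a single poor component drags the score down. A standalone sketch (the `f1` helper is illustrative; `sklearn.metrics.f1_score` computes this directly from predictions):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1(0.5, 0.5))   # 0.5
print(f1(1.0, 0.5))   # 0.666...
print(f1(1.0, 0.0))   # 0.0 -- one bad component dominates
```

This is why a classifier with high accuracy on an imbalanced dataset can still have a low F1 on the minority class.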
# SVM ALGORITHM - Support Vector Machine / Classification
```
# X = normalized_x
# y = le.fit_transform(y)
# Fitting a Support Vector Machine
# import support vector classifier
from sklearn.svm import SVC # "Support Vector Classifier"
clf = SVC(kernel='linear')
# fit on the training split, then evaluate on the held-out test split
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
print(classification_report(y_test, predictions))
print(accuracy_score(y_test, predictions))
# The accuracy achieved by our svm classifier is 37%.
# varies everytime i run it.
```
# NOW LUNAR DATASET
```
lunar = pd.read_csv('/Users/gracemartinez/Downloads/lunar.csv')
lunar.head()
lunar.shape
lunar.columns
lunar.columns = lunar.columns.str.replace(' ','_')
lunar.columns
# number of columns of data
len(lunar.columns)
# different types of each column
lunar.dtypes
# need to know what each column means/represents to know if they're a correct type
# counts of each eclipse type
lunar['Eclipse_Type'].value_counts()
# number of distinct eclipse types
len(lunar['Eclipse_Type'].value_counts())
# only 3 types of lunar eclipse: N, P, T
def eclipsetypeClean(x):
if 'N' in x:
return('N')
if 'P' in x:
return('P')
if 'T' in x:
return('T')
# map function to clean up the amount of different types
lunar['Eclipse_Type'] = list(map(eclipsetypeClean,lunar['Eclipse_Type']))
# recounting to make sure the different type of lunar types are accounted for
lunar['Eclipse_Type'].value_counts()
# amount of different types of lunar eclipses
len(lunar['Eclipse_Type'].value_counts())
# so there would be 3 different categories of 'Eclipse Type'
lunar.isnull().sum()
# there are no missing values
# not necessary for estimation of problem in python
lunar = lunar.drop(["Latitude", "Longitude"], axis=1)
lunar.head()
len(lunar.columns)
LunarCategoricals = lunar.select_dtypes(object)
LunarCategoricals
len(LunarCategoricals.columns)
LunarNumericals = lunar._get_numeric_data()
LunarNumericals
len(LunarNumericals.columns)
L_corr_matrix = LunarNumericals.corr()
# set fig size for better readability of the heatmap
fig, ax = plt.subplots(figsize=(8,8))
L_heatmap = sns.heatmap(L_corr_matrix, annot =True, ax=ax)
L_heatmap
plt.hist(lunar["Eclipse_Type"], bins = len(lunar["Eclipse_Type"].unique()))
plt.xticks(rotation='vertical')
lunar.columns = lunar.columns.str.replace(' ','_')
lunar.columns
```
# Mask R-CNN
This notebook shows how to train a Mask R-CNN object detection and segmentation model on a custom coco-style dataset.
```
import os
import sys
import random
import math
import re
import time
import numpy as np
import cv2
import matplotlib
import matplotlib.pyplot as plt
sys.path.insert(0, '../libraries')
from mrcnn.config import Config
import mrcnn.utils as utils
import mrcnn.model as modellib
import mrcnn.visualize as visualize
from mrcnn.model import log
import mcoco.coco as coco
import mextra.utils as extra_utils
%matplotlib inline
%config IPCompleter.greedy=True
HOME_DIR = '/home/keras'
DATA_DIR = os.path.join(HOME_DIR, "data/shapes")
WEIGHTS_DIR = os.path.join(HOME_DIR, "data/weights")
MODEL_DIR = os.path.join(DATA_DIR, "logs")
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(WEIGHTS_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
utils.download_trained_weights(COCO_MODEL_PATH)
def get_ax(rows=1, cols=1, size=8):
"""Return a Matplotlib Axes array to be used in
all visualizations in the notebook. Provide a
central point to control graph sizes.
Change the default size attribute to control the size
of rendered images
"""
_, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
return ax
```
# Dataset
Organize the dataset using the following structure:
```
DATA_DIR
│
└───annotations
│ │ instances_<subset><year>.json
│
└───<subset><year>
│ image021.jpeg
│ image022.jpeg
```
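A quick sanity check of that layout before loading can save a confusing traceback later. A hedged sketch (the `check_coco_layout` helper is illustrative, not part of `mcoco`; the subset/year values mirror the cell below):

```python
import os

def check_coco_layout(data_dir, subset, year):
    """Return True if data_dir follows the coco-style layout shown above."""
    ann = os.path.join(data_dir, "annotations",
                       "instances_{}{}.json".format(subset, year))
    img_dir = os.path.join(data_dir, "{}{}".format(subset, year))
    return os.path.isfile(ann) and os.path.isdir(img_dir)

# e.g. check_coco_layout(DATA_DIR, "shapes_train", "2018") before load_coco(...)
```

If the check fails, the most common causes are a missing `annotations/` folder or a mismatch between the `<subset><year>` folder name and the annotation file name.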
```
dataset_train = coco.CocoDataset()
dataset_train.load_coco(DATA_DIR, subset="shapes_train", year="2018")
dataset_train.prepare()
dataset_validate = coco.CocoDataset()
dataset_validate.load_coco(DATA_DIR, subset="shapes_validate", year="2018")
dataset_validate.prepare()
dataset_test = coco.CocoDataset()
dataset_test.load_coco(DATA_DIR, subset="shapes_test", year="2018")
dataset_test.prepare()
# Load and display random samples
image_ids = np.random.choice(dataset_train.image_ids, 4)
for image_id in image_ids:
image = dataset_train.load_image(image_id)
mask, class_ids = dataset_train.load_mask(image_id)
visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)
```
# Configuration
```
image_size = 64
rpn_anchor_template = (1, 2, 4, 8, 16) # anchor sizes in pixels
rpn_anchor_scales = tuple(i * (image_size // 16) for i in rpn_anchor_template)
class ShapesConfig(Config):
"""Configuration for training on the shapes dataset.
"""
NAME = "shapes"
# Train on 1 GPU and 2 images per GPU. Put multiple images on each
# GPU if the images are small. Batch size is 2 (GPUs * images/GPU).
GPU_COUNT = 1
IMAGES_PER_GPU = 1
# Number of classes (including background)
NUM_CLASSES = 1 + 3 # background + 3 shapes (triangles, circles, and squares)
# Use smaller images for faster training.
IMAGE_MAX_DIM = image_size
IMAGE_MIN_DIM = image_size
# Use smaller anchors because our image and objects are small
RPN_ANCHOR_SCALES = rpn_anchor_scales
# Aim to allow ROI sampling to pick 33% positive ROIs.
TRAIN_ROIS_PER_IMAGE = 32
STEPS_PER_EPOCH = 400
VALIDATION_STEPS = STEPS_PER_EPOCH / 20
config = ShapesConfig()
config.display()
```
# Model
```
model = modellib.MaskRCNN(mode="training", config=config, model_dir=MODEL_DIR)
initialize_weights_with = "coco"  # imagenet, coco, or last
if initialize_weights_with == "imagenet":
model.load_weights(model.get_imagenet_weights(), by_name=True)
elif initialize_weights_with == "coco":
model.load_weights(COCO_MODEL_PATH, by_name=True,
exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
"mrcnn_bbox", "mrcnn_mask"])
elif initialize_weights_with == "last":
# Load the last model you trained and continue training
model.load_weights(model.find_last()[1], by_name=True)
```
# Training
Training in two stages
## Heads
Only the heads. Here we're freezing all the backbone layers and training only the randomly initialized layers (i.e. the ones that we didn't use pre-trained weights from MS COCO). To train only the head layers, pass layers='heads' to the train() function.
```
model.train(dataset_train, dataset_validate,
learning_rate=config.LEARNING_RATE,
epochs=2,
layers='heads')
```
## Fine-tuning
Fine-tune all layers. Pass layers="all" to the train() function to train all layers.
```
model.train(dataset_train, dataset_validate,
learning_rate=config.LEARNING_RATE / 10,
epochs=3, # starts from the previous epoch, so only 1 additional is trained
layers="all")
```
# Detection
```
class InferenceConfig(ShapesConfig):
GPU_COUNT = 1
IMAGES_PER_GPU = 1
inference_config = InferenceConfig()
# Recreate the model in inference mode
model = modellib.MaskRCNN(mode="inference",
config=inference_config,
model_dir=MODEL_DIR)
# Get path to saved weights
# Either set a specific path or find last trained weights
# model_path = os.path.join(ROOT_DIR, ".h5 file name here")
print(model.find_last()[1])
model_path = model.find_last()[1]
# Load trained weights (fill in path to trained weights here)
assert model_path != "", "Provide path to trained weights"
print("Loading weights from ", model_path)
model.load_weights(model_path, by_name=True)
```
### Test on a random image from the test set
First, show the ground truth of the image, then show detection results.
```
image_id = random.choice(dataset_test.image_ids)
original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset_test, inference_config,
image_id, use_mini_mask=False)
log("original_image", original_image)
log("image_meta", image_meta)
log("gt_class_id", gt_class_id)
log("gt_bbox", gt_bbox)
log("gt_mask", gt_mask)
visualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id,
                            dataset_test.class_names, figsize=(8, 8))
results = model.detect([original_image], verbose=1)
r = results[0]
visualize.display_instances(original_image, r['rois'], r['masks'], r['class_ids'],
dataset_test.class_names, r['scores'], ax=get_ax())
```
# Evaluation
Use the test dataset to evaluate the precision of the model on each class.
```
predictions =\
extra_utils.compute_multiple_per_class_precision(model, inference_config, dataset_test,
number_of_images=250, iou_threshold=0.5)
complete_predictions = []
for shape in predictions:
complete_predictions += predictions[shape]
print("{} ({}): {}".format(shape, len(predictions[shape]), np.mean(predictions[shape])))
print("--------")
print("average: {}".format(np.mean(complete_predictions)))
print(model.find_last()[1])
```
## Convert result to COCO
Converting the result back to a COCO-style format for further processing
```
import json
import pylab
import matplotlib.pyplot as plt
from tempfile import NamedTemporaryFile
from pycocotools.coco import COCO
coco_dict = extra_utils.result_to_coco(results[0], dataset_test.class_names,
np.shape(original_image)[0:2], tolerance=0)
with NamedTemporaryFile('w') as jsonfile:
json.dump(coco_dict, jsonfile)
jsonfile.flush()
coco_data = COCO(jsonfile.name)
category_ids = coco_data.getCatIds(catNms=['square', 'circle', 'triangle'])
image_data = coco_data.loadImgs(1)[0]
image = original_image
plt.imshow(image); plt.axis('off')
pylab.rcParams['figure.figsize'] = (8.0, 10.0)
annotation_ids = coco_data.getAnnIds(imgIds=image_data['id'], catIds=category_ids, iscrowd=None)
annotations = coco_data.loadAnns(annotation_ids)
coco_data.showAnns(annotations)
```
| github_jupyter |
```
import sys
sys.path.append('./../')
%load_ext autoreload
%autoreload 2
from ontology import get_ontology
ontology = get_ontology('../data/doid.obo')
name2doid = {term.name: term.id for term in ontology.get_terms()}
doid2name = {term.id: term.name for term in ontology.get_terms()}
import numpy as np
import re
```
# Wiki links from obo descriptions
```
import wiki
lst = wiki.get_links_from_ontology(ontology)
print('example:{:}'.format(repr(lst[10])))
```
### urllib2 to read page in html
```
page = wiki.get_html(lst[101])
page[:1000]
```
# Fuzzy logic
```
import fuzzywuzzy.process as fuzzy_process
from fuzzywuzzy import fuzz
string = "ventricular arrhythmia"
names = np.sort(list(name2doid.keys()))  # list() keeps this Python 3 compatible
print(fuzzy_process.extractOne(string, names, scorer=fuzz.token_set_ratio))
string = "Complete remission of hairy cell leukemia variant (HCL-v) complicated by red cell aplasia post treatment with rituximab."
print(fuzzy_process.extractOne(string, names, scorer=fuzz.partial_ratio))
```
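For intuition: `token_set_ratio` ignores word order and repetition, while `partial_ratio` rewards the best-matching substring. A rough stdlib-only stand-in for the token-set idea (a Jaccard-overlap sketch, not fuzzywuzzy's actual algorithm):

```python
import re

def token_set_score(a, b):
    """Rough stand-in for fuzz.token_set_ratio: score the overlap of word
    sets on a 0-100 scale, ignoring order and repetition."""
    ta = set(re.findall(r"\w+", a.lower()))
    tb = set(re.findall(r"\w+", b.lower()))
    if not ta or not tb:
        return 0
    return int(100 * len(ta & tb) / len(ta | tb))

print(token_set_score("ventricular arrhythmia", "arrhythmia, ventricular"))  # -> 100
```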
# Wikipedia search engine: headers
```
query = "ventricular arrhythmia"
top = wiki.get_top_headers(query)
top
for header in top:
    result = fuzzy_process.extractOne(header, names, scorer=fuzz.token_set_ratio)
    print(result)

import wikipedia  # needed for the direct page lookup below
page = wikipedia.WikipediaPage(title='Cell_proliferation')
page.summary
```
```
[name for name in names if len(re.split(' ', name)) > 3]
```
### pub-med
```
import pubmed
query = 'hcl-v'
titles = pubmed.get(query)
titles_len = [len(title) for title in titles]
for i, string in enumerate(titles):
    print("%d) %s" % (i + 1, string))
    print(fuzzy_process.extractOne(string, names, scorer=fuzz.partial_ratio))
    print('')
```
```
def find_synonym(s_ref, s):
    last = s_ref.find('(' + s + ')')
    if last == -1:
        return None
    # Count the uppercase letters of the abbreviation, then walk back
    # through s_ref to the first of that many uppercase initials.
    n_upper = len(''.join([c for c in s if c.isupper()]))
    first = [(i, c) for i, c in enumerate(s_ref[:last]) if c.isupper()][-n_upper][0]
    return s_ref[first:last - 1]

print(find_synonym('Wolff-Parkinson-White syndrome (WPW) and athletes: Darwin at play?',
                   'WPW'))
```
### synonyms
```
import utils
print(utils.find_synonym('Wolff-Parkinson-White syndrome (WPW) and athletes: Darwin at play?', 'WPW'))
print(utils.find_synonym('Complete remission of hairy cell leukemia variant (HCL-v)...', 'hcl-v'))
```
### Asymmetric distance
```
s_ref = 'artery disease'
s = 'nonartery'
print(utils.assym_dist(s, s_ref))
```
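The implementation of `assym_dist` is not shown here; an asymmetric containment-style score could be sketched with the stdlib like this (purely an illustration of the idea, not the project's actual function):

```python
from difflib import SequenceMatcher

def asym_contain(s, s_ref):
    """How much of s is covered by its best alignment inside s_ref, in [0, 1].
    Asymmetric: asym_contain(a, b) need not equal asym_contain(b, a)."""
    if not s:
        return 0.0
    match = SequenceMatcher(None, s, s_ref).find_longest_match(0, len(s), 0, len(s_ref))
    return match.size / float(len(s))

print(asym_contain("artery", "nonartery disease"))   # "artery" fully contained -> 1.0
print(asym_contain("nonartery disease", "artery"))   # only partially covered
```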
### Length statistics
```
print('Mean term name length: {:.1f}'.format(np.mean([len(term.name) for term in ontology.get_terms()])))
print('Mean article title length: {:.1f}'.format(np.mean(titles_len)))
```
### Unique words
```
words = [re.split(' |-', term.name) for term in ontology.get_terms()]
words = np.unique([l for sublist in words for l in sublist if len(l) > 0])
words = [w for w in words if len(w) >= 4]
words[:10]
```
# Threading
```
from threading import Thread
from time import sleep
from ontology import get_ontology
query_results = None
def fn_get_q(query):
global query_results
query_results = fuzzy_process.extractOne(query, names, scorer=fuzz.ratio)
return True
wiki_results = None
def fn_get_wiki(query):
global wiki_results
header = wiki.get_top_headers(query, 1)[0]
wiki_results = fuzzy_process.extractOne(header, names, scorer=fuzz.ratio)
#sleep(0.1)
return True
pubmed_results = None
def fn_get_pubmed(query):
global pubmed_results
string = pubmed.get(query, topK=1)
if string is not None:
string = string[0]
        print(string)
pubmed_results = fuzzy_process.extractOne(string, names, scorer=fuzz.partial_ratio)
return True
else:
return False
'''main'''
## from bot
query = 'valve disease'
def find_answer(query):
query = query.lower()
# load ontology
ontology = get_ontology('../data/doid.obo')
name2doid = {term.name: term.id for term in ontology.get_terms()}
doid2name = {term.id: term.name for term in ontology.get_terms()}
## exact match
if query in name2doid.keys():
doid = name2doid[query]
else:
# exact match -- no
th_get_q = Thread(target = fn_get_q, args = (query,))
th_get_wiki = Thread(target = fn_get_wiki, args = (query,))
th_get_pubmed = Thread(target = fn_get_pubmed, args = (query,))
th_get_q.start()
th_get_wiki.start()
th_get_pubmed.start()
## search engine query --> vertices, p=100(NLP??); synonyms
## new thread for synonyms???
## synonyms NLP
## new thread for NLP
## tree search on vertices (returned + synonyms)
## sleep ?
        th_get_q.join()
        print(query_results)
        th_get_wiki.join()
        print(wiki_results)
        th_get_pubmed.join()
        print(pubmed_results)
## final answer
## draw graph
doid = None
graph = None
return doid, graph
```
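The global-variable-plus-`Thread` pattern above can be expressed more safely with `concurrent.futures`, which returns results instead of writing to globals. A minimal sketch, where the worker functions are stand-ins for the fuzzy/wiki/pubmed lookups:

```python
from concurrent.futures import ThreadPoolExecutor

def lookup_q(query):       # stand-in for fn_get_q
    return ('q', query)

def lookup_wiki(query):    # stand-in for fn_get_wiki
    return ('wiki', query)

def lookup_pubmed(query):  # stand-in for fn_get_pubmed
    return ('pubmed', query)

def find_candidates(query):
    # Run the three lookups concurrently and collect their results in order.
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(fn, query) for fn in (lookup_q, lookup_wiki, lookup_pubmed)]
        return [f.result() for f in futures]

print(find_candidates('valve disease'))
```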
<a href="https://colab.research.google.com/github/mirianfsilva/The-Heat-Diffusion-Equation/blob/master/FiniteDiff_test.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### Implementation of schemes for the Heat Equation:
- Forward Time, Centered Space;
- Backward Time, Centered Space;
- Crank-Nicolson.
\begin{equation}
\partial_{t}u = \partial^2_{x}u, \quad 0 < x < 1, \quad t > 0
\end{equation}
\begin{equation}
\partial_{x}u(0,t) = 0, \quad \partial_{x}u(1,t) = 0
\end{equation}
\begin{equation}
u(x, 0) = \cos(\pi x)
\end{equation}
### Exact Solution:
\begin{equation}
u(x,t) = e^{-\pi^2 t}\cos(\pi x)
\end{equation}
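One can check numerically (a small stdlib-only sketch, not part of the original notebook) that this exact solution satisfies both the PDE and the Neumann boundary condition at x = 0, using centered differences:

```python
import math

def u(x, t):
    # Exact solution u(x,t) = exp(-pi^2 t) * cos(pi x)
    return math.exp(-math.pi**2 * t) * math.cos(math.pi * x)

h = 1e-4
x0, t0 = 0.3, 0.1
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)                 # centered in time
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2   # centered in space
print(abs(u_t - u_xx))                    # close to 0, so u_t = u_xx holds
print((u(h, t0) - u(-h, t0)) / (2 * h))   # close to 0, so u_x(0,t) = 0 holds
```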
```
#Numerical Differential Equations - Federal University of Minas Gerais
""" Utils """
import math, sys
import numpy as np
import sympy as sp
from scipy import sparse
from sympy import fourier_series, pi
from scipy.fftpack import *
from scipy.sparse import diags
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from os import path
count = 0
# Heat diffusion in a one-dimensional wire, solved with the explicit method
"""
Parameter choices explored:
λ = 2, λ = 1/2 and λ = 1/6
M = 4, M = 8, M = 10, M = 12 and M = 14
"""
#Heat function exact solution
def Solution(x, t):
return np.exp((-np.pi**2)*t)*np.cos(np.pi*x)
# ---- Surface plot ----
def surfaceplot(U, Uexact, tspan, xspan, M):
N = M**2
#meshgrid : Return coordinate matrices from coordinate vectors
X, T = np.meshgrid(tspan, xspan)
fig = plt.figure(figsize=plt.figaspect(0.3))
#fig2 = plt.figure(figsize=plt.figaspect(0.5))
#fig3 = plt.figure(figsize=plt.figaspect(0.5))
# ---- Exact Solution ----
ax = fig.add_subplot(1, 4, 1,projection='3d')
surf = ax.plot_surface(X, T, Uexact, linewidth=0, cmap=cm.jet, antialiased=True)
ax.set_title('Exact Solution')
ax.set_xlabel('Time')
ax.set_ylabel('Space')
ax.set_zlabel('U')
# ---- Method Aproximation Solution ----
ax1 = fig.add_subplot(1, 4, 2,projection='3d')
surf = ax1.plot_surface(X, T, U, linewidth=0, cmap=cm.jet, antialiased=True)
ax1.set_title('Approximation')
ax1.set_xlabel('Time')
ax1.set_ylabel('Space')
ax1.set_zlabel('U')
plt.tight_layout()
ax.view_init(30,230)
ax1.view_init(30,230)
fig.savefig(path.join("plot_METHOD{0}.png".format(count)),dpi=600)
plt.draw()
'''
Exact Solution for 1D reaction-diffusion equation:
u_t = k * u_xx
with Neumann boundary conditions
at x=0: u_x(0,t) = 0
at x=L: u_x(L,t) = 0
with L = 1 and initial conditions:
u(x,0) = np.cos(np.pi*x)
'''
def ExactSolution(M, T = 0.5, L = 1):
N = (M**2) #GRID POINTS on time interval
xspan = np.linspace(0, L, M)
tspan = np.linspace(0, T, N)
Uexact = np.zeros((M, N))
for i in range(0, M):
for j in range(0, N):
Uexact[i][j] = Solution(xspan[i], tspan[j])
return (Uexact, tspan, xspan)
'''
Forward Euler (explicit FTCS) method for the 1D diffusion equation:
    u_t = k * u_xx
with Neumann boundary conditions
    at x=0: u_x(0,t) = 0
    at x=L: u_x(L,t) = 0
with L = 1 and initial condition:
    u(x,0) = np.cos(np.pi*x)
'''
def ForwardEuler(M, lambd, T = 0.5, L = 1, k = 1):
#Parameters needed to solve the equation within the explicit method
#M = GRID POINTS on space interval
N = (M**2) #GRID POINTS on time interval
# ---- Length of the wire in x direction ----
x0, xL = 0, L
# ----- Spatial discretization step -----
dx = (xL - x0)/(M-1)
# ---- Final time ----
t0,tF = 0, T
# ----- Time step -----
dt = (tF - t0)/(N-1)
#lambd = dt*k/dx**2
# ----- Creates grids -----
xspan = np.linspace(x0, xL, M)
tspan = np.linspace(t0, tF, N)
# ----- Initializes matrix solution U -----
U = np.zeros((M, N))
# ----- Initial condition -----
U[:,0] = np.cos(np.pi*xspan)
# ----- Neumann boundary conditions -----
"""
To implement these boundary conditions, we again use “false points”, x_0 and x_N+1 which are external points.
We use a difference to approximate ∂u/∂x (xL,t) and set it equal to the desired boundary condition:
"""
    # Second-order one-sided differences for u_x at the boundaries:
    f = (-3*U[0,:] + 4*U[1,:] - U[2,:])/(2*dx)     # approximates u_x(0,t)
    U[0,:] = (4*U[1,:] - U[2,:])/3                 # enforce u_x(0,t) = 0
    g = (-3*U[-1,:] + 4*U[-2,:] - U[-3,:])/(2*dx)  # approximates u_x(L,t)
    U[-1,:] = (4*U[-2,:] - U[-3,:])/3              # enforce u_x(L,t) = 0
# ----- ftcs -----
for k in range(0, N-1):
for i in range(1, M-1):
U[i, k+1] = lambd*U[i-1, k] + (1-2*lambd)*U[i,k] + lambd*U[i+1,k]
return (U, tspan, xspan)
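# ---- Added stability sketch (not in the original notebook) ----
# The FTCS scheme above is stable only when lambd = k*dt/dx**2 <= 1/2:
# a Fourier mode with phase theta is multiplied each time step by
# g(theta) = 1 - 4*lambd*sin(theta/2)**2, and |g| <= 1 for all theta
# exactly when lambd <= 1/2. This is why lambd = 2 blows up while
# lambd = 1/2 and lambd = 1/6 stay bounded.
def ftcs_amplification(lambd, theta):
    # Amplification factor of FTCS for a Fourier mode with phase theta.
    return 1.0 - 4.0*lambd*math.sin(theta/2.0)**2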
U, tspan, xspan = ForwardEuler(M = 14, lambd = 1.0/6.0)
Uexact, x, t = ExactSolution(M = 14)
surfaceplot(U, Uexact, tspan, xspan, M = 14)
'''
Backward Euler (implicit BTCS) method for the 1D diffusion equation:
    u_t = k * u_xx
with Neumann boundary conditions
    at x=0: u_x(0,t) = 0
    at x=L: u_x(L,t) = 0
with L = 1 and initial condition:
    u(x,0) = np.cos(np.pi*x)
'''
def BackwardEuler(M, lambd, T = 0.5, L = 1, k = 1):
#Parameters needed to solve the equation within the explicit method
# M = GRID POINTS on space interval
N = (M**2) #GRID POINTS on time interval
# ---- Length of the wire in x direction ----
x0, xL = 0, L
# ----- Spatial discretization step -----
dx = (xL - x0)/(M-1)
# ---- Final time ----
t0, tF = 0, T
# ----- Time step -----
dt = (tF - t0)/(N-1)
# k = 1.0 Diffusion coefficient
#lambd = dt*k/dx**2
a = 1 + 2*lambd
xspan = np.linspace(x0, xL, M)
tspan = np.linspace(t0, tF, N)
main_diag = (1 + 2*lambd)*np.ones((1,M))
off_diag = -lambd*np.ones((1, M-1))
a = main_diag.shape[1]
diagonals = [main_diag, off_diag, off_diag]
#Sparse Matrix diagonals
A = sparse.diags(diagonals, [0,-1,1], shape=(a,a)).toarray()
A[0,1] = -2*lambd
A[M-1,M-2] = -2*lambd
# --- Initializes matrix U -----
U = np.zeros((M, N))
# --- Initial condition -----
U[:,0] = np.cos(np.pi*xspan)
    # ---- Neumann boundary conditions (second-order one-sided differences) ----
    # (-3*U[0,j] + 4*U[1,j] - U[2,j])/(2*dx) = 0
    f = U[0,:] = (4*U[1,:] - U[2,:])/3     # LeftBC
    # (-3*U[-1,j] + 4*U[-2,j] - U[-3,j])/(2*dx) = 0
    g = U[-1,:] = (4*U[-2,:] - U[-3,:])/3  # RightBC
for i in range(1, N):
c = np.zeros((M-2,1)).ravel()
b1 = np.asarray([2*lambd*dx*f[i], 2*lambd*dx*g[i]])
b1 = np.insert(b1, 1, c)
b2 = np.array(U[0:M, i-1])
b = b1 + b2 # Right hand side
U[0:M, i] = np.linalg.solve(A,b) # Solve x=A\b
return (U, tspan, xspan)
U, tspan, xspan = BackwardEuler(M = 14, lambd = 1.0/6.0)
Uexact, x, t = ExactSolution(M = 14)
surfaceplot(U, Uexact, tspan, xspan, M = 14)
'''
Crank-Nicolson method for the 1D diffusion equation:
    u_t = k * u_xx
with Neumann boundary conditions
    at x=0: u_x(0,t) = 0
    at x=L: u_x(L,t) = 0
with L = 1 and initial condition:
    u(x,0) = np.cos(np.pi*x)
'''
def CrankNicolson(M, lambd, T = 0.5, L = 1, k = 1):
#Parameters needed to solve the equation within the explicit method
# M = GRID POINTS on space interval
N = (M**2) #GRID POINTS on time interval
# ---- Length of the wire in x direction ----
x0, xL = 0, L
# ----- Spatial discretization step -----
dx = (xL - x0)/(M-1)
# ---- Final time ----
t0, tF = 0, T
# ----- Time step -----
dt = (tF - t0)/(N-1)
#lambd = dt*k/(2.0*dx**2)
a0 = 1 + 2*lambd
c0 = 1 - 2*lambd
xspan = np.linspace(x0, xL, M)
tspan = np.linspace(t0, tF, N)
maindiag_a0 = a0*np.ones((1,M))
offdiag_a0 = (-lambd)*np.ones((1, M-1))
maindiag_c0 = c0*np.ones((1,M))
offdiag_c0 = lambd*np.ones((1, M-1))
#Left-hand side tri-diagonal matrix
a = maindiag_a0.shape[1]
diagonalsA = [maindiag_a0, offdiag_a0, offdiag_a0]
A = sparse.diags(diagonalsA, [0,-1,1], shape=(a,a)).toarray()
A[0,1] = (-2)*lambd
A[M-1,M-2] = (-2)*lambd
#Right-hand side tri-diagonal matrix
c = maindiag_c0.shape[1]
diagonalsC = [maindiag_c0, offdiag_c0, offdiag_c0]
Arhs = sparse.diags(diagonalsC, [0,-1,1], shape=(c,c)).toarray()
Arhs[0,1] = 2*lambd
Arhs[M-1,M-2] = 2*lambd
# ----- Initializes matrix U -----
U = np.zeros((M, N))
#----- Initial condition -----
U[:,0] = np.cos(np.pi*xspan)
    # ----- Neumann boundary conditions -----
    # Handled with second-order one-sided finite differences:
    # (-3*U[0,j] + 4*U[1,j] - U[2,j])/(2*dx) = 0
    f = U[0,:] = (4*U[1,:] - U[2,:])/3     # LeftBC
    # (-3*U[-1,j] + 4*U[-2,j] - U[-3,j])/(2*dx) = 0
    g = U[-1,:] = (4*U[-2,:] - U[-3,:])/3  # RightBC
for k in range(1, N):
ins = np.zeros((M-2,1)).ravel()
b1 = np.asarray([4*lambd*dx*f[k], 4*lambd*dx*g[k]])
b1 = np.insert(b1, 1, ins)
b2 = np.matmul(Arhs, np.array(U[0:M, k-1]))
b = b1 + b2 # Right hand side
U[0:M, k] = np.linalg.solve(A,b) # Solve x=A\b
return (U, tspan, xspan)
U, tspan, xspan = CrankNicolson(M = 14, lambd = 1.0/6.0)
Uexact, x, t = ExactSolution(M = 14)
surfaceplot(U, Uexact, tspan, xspan, M = 14)
```
# Overview
This tool lets you create task files from CSVs and zip files uploaded through the browser.
```
import ipywidgets as ipw
import pandas as pd
import json, io, os, tempfile
import fileupload as fu
from IPython.display import display, FileLink
def upload_as_file_widget(callback=None):
"""Create an upload files button that creates a temporary file and calls a function with the path.
"""
_upload_widget = fu.FileUploadWidget()
def _virtual_file(change):
file_ext = os.path.splitext(change['owner'].filename)[-1]
print('Uploaded `{}`'.format(change['owner'].filename))
if callback is not None:
with tempfile.NamedTemporaryFile(suffix=file_ext) as f:
f.write(change['owner'].data)
callback(f.name)
_upload_widget.observe(_virtual_file, names='data')
return _upload_widget
def make_task(in_df,
image_path='Image Index',
output_labels='Finding Labels',
base_image_directory = 'sample_data'):
return {
'google_forms': {'form_url': 'https://docs.google.com/forms/d/e/1FAIpQLSfBmvqCVeDA7IZP2_mw_HZ0OTgDk2a0JN4VlY5KScECWC-_yw/viewform',
'sheet_url': 'https://docs.google.com/spreadsheets/d/1T02tRhe3IUUHYsMchc7hmH8nVI3uR0GffdX1PNxKIZA/edit?usp=sharing'
},
'dataset': {
'image_path': image_path, # column name
'output_labels': output_labels, # column name
'dataframe': in_df.to_dict(),
'base_image_directory': base_image_directory # path
}
}
def save_task(annotation_task, out_path='task.json'):
with open(out_path, 'w') as f:
json.dump(annotation_task, f)
return out_path
```
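The task file that `make_task`/`save_task` produce is plain JSON, so the round trip can be sketched with the stdlib alone. Here a plain dict stands in for `DataFrame.to_dict()`, and the column names are just illustrative:

```python
import json
import os
import tempfile

# A dict standing in for in_df.to_dict(): column name -> {row index -> value}.
dataframe_dict = {
    'Image Index': {0: 'img_0.png', 1: 'img_1.png'},
    'Finding Labels': {0: 'No Finding', 1: 'Cardiomegaly'},
}
task = {
    'dataset': {
        'image_path': 'Image Index',
        'output_labels': 'Finding Labels',
        'dataframe': dataframe_dict,
        'base_image_directory': 'sample_data',
    }
}
with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as f:
    json.dump(task, f)
    out_path = f.name
with open(out_path) as f:
    loaded = json.load(f)
os.unlink(out_path)  # tidy up the temporary file
# Note: JSON stringifies the integer row keys, so 0 comes back as '0'.
print(loaded['dataset']['image_path'])  # -> Image Index
```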
## Instructions
Load a CSV file and select the columns for the image path, labels and the name of the directory where the images are located
```
def _load_csv_app(in_path):
"""
A callback to create an app from an uploaded CSV file
>>> _load_csv_app('sample_data/dataset_overview.csv')
"""
ds_df = pd.read_csv(in_path)
table_viewer = ipw.HTML(value=ds_df.sample(3).T.style.render(), layout = ipw.Layout(width="45%"))
image_path_widget = ipw.Dropdown(
options=ds_df.columns,
value=ds_df.columns[0],
description='Image Path Column:',
disabled=False
)
output_labels_widget = ipw.Dropdown(
options=ds_df.columns,
value=ds_df.columns[0],
description='Label Column:',
disabled=False
)
all_dir_list = [p for p, _, _ in os.walk('.') if os.path.isdir(p) and not any([k.startswith('.') and len(k)>1 for k in p.split('/')])]
base_image_directory_widget = ipw.Select(
options=all_dir_list,
value=None,
rows=5,
description='Local Image Folder:',
disabled=False
)
def _create_task(btn):
c_task = make_task(ds_df,
image_path = image_path_widget.value,
output_labels = output_labels_widget.value,
base_image_directory = base_image_directory_widget.value
)
display(FileLink(save_task(c_task)))
create_but = ipw.Button(description='Create Task')
create_but.on_click(_create_task)
controls = ipw.VBox([image_path_widget, output_labels_widget,
base_image_directory_widget, create_but])
out_widget = ipw.HBox([controls,
table_viewer])
display(out_widget)
return out_widget
upload_as_file_widget(_load_csv_app)
```
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/transform/simple">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/transform/simple.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/tfx/blob/master/docs/tutorials/transform/simple.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
</table></div>
##### Copyright © 2020 Google Inc.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Preprocess data with TensorFlow Transform
***The Feature Engineering Component of TensorFlow Extended (TFX)***
This example colab notebook provides a very simple example of how <a target='_blank' href='https://www.tensorflow.org/tfx/transform/'>TensorFlow Transform (<code>tf.Transform</code>)</a> can be used to preprocess data using exactly the same code for both training a model and serving inferences in production.
TensorFlow Transform is a library for preprocessing input data for TensorFlow, including creating features that require a full pass over the training dataset. For example, using TensorFlow Transform you could:
* Normalize an input value by using the mean and standard deviation
* Convert strings to integers by generating a vocabulary over all of the input values
* Convert floats to integers by assigning them to buckets, based on the observed data distribution
TensorFlow has built-in support for manipulations on a single example or a batch of examples. `tf.Transform` extends these capabilities to support full passes over the entire training dataset.
The output of `tf.Transform` is exported as a TensorFlow graph which you can use for both training and serving. Using the same graph for both training and serving can prevent skew, since the same transformations are applied in both stages.
### Upgrade Pip
To avoid upgrading Pip on a local system, we first check that we are running in Colab. Local systems can of course be upgraded separately.
```
try:
import colab
!pip install --upgrade pip
except:
pass
```
### Install TensorFlow Transform
**Note: In Google Colab, because of package updates, the first time you run this cell you may need to restart the runtime (Runtime > Restart runtime ...).**
```
!pip install -q -U tensorflow_transform==0.24.1
```
## Did you restart the runtime?
If you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages.
## Imports
```
import pprint
import tempfile
import tensorflow as tf
import tensorflow_transform as tft
import tensorflow_transform.beam as tft_beam
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import schema_utils
```
## Data: Create some dummy data
We'll create some simple dummy data for our simple example:
* `raw_data` is the initial raw data that we're going to preprocess
* `raw_data_metadata` contains the schema that tells us the types of each of the columns in `raw_data`. In this case, it's very simple.
```
raw_data = [
{'x': 1, 'y': 1, 's': 'hello'},
{'x': 2, 'y': 2, 's': 'world'},
{'x': 3, 'y': 3, 's': 'hello'}
]
raw_data_metadata = dataset_metadata.DatasetMetadata(
schema_utils.schema_from_feature_spec({
'y': tf.io.FixedLenFeature([], tf.float32),
'x': tf.io.FixedLenFeature([], tf.float32),
's': tf.io.FixedLenFeature([], tf.string),
}))
```
## Transform: Create a preprocessing function
The _preprocessing function_ is the most important concept of tf.Transform. A preprocessing function is where the transformation of the dataset really happens. It accepts and returns a dictionary of tensors, where a tensor means a <a target='_blank' href='https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/Tensor'><code>Tensor</code></a> or <a target='_blank' href='https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/SparseTensor'><code>SparseTensor</code></a>. There are two main groups of API calls that typically form the heart of a preprocessing function:
1. **TensorFlow Ops:** Any function that accepts and returns tensors, which usually means TensorFlow ops. These add TensorFlow operations to the graph that transforms raw data into transformed data one feature vector at a time. These will run for every example, during both training and serving.
2. **TensorFlow Transform Analyzers/Mappers:** Any of the analyzers/mappers provided by tf.Transform. These also accept and return tensors, and typically contain a combination of TensorFlow ops and Beam computation, but unlike TensorFlow ops they only run in the Beam pipeline during analysis, which requires a full pass over the entire training dataset. The Beam computation runs only once, during training, and typically makes a full pass over the entire training dataset. It creates tensor constants, which are added to your graph. For example, `tft.min` computes the minimum of a tensor over the training dataset, while `tft.scale_by_min_max` first computes the min and max of a tensor over the training dataset and then scales the tensor to be within a user-specified range, [output_min, output_max]. tf.Transform provides a fixed set of such analyzers/mappers, but this will be extended in future versions.
Caution: When you apply your preprocessing function to serving inferences, the constants that were created by analyzers during training do not change. If your data has trend or seasonality components, plan accordingly.
Note: The `preprocessing_fn` is not directly callable. This means that
calling `preprocessing_fn(raw_data)` will not work. Instead, it must
be passed to the Transform Beam API as shown in the following cells.
```
def preprocessing_fn(inputs):
"""Preprocess input columns into transformed columns."""
x = inputs['x']
y = inputs['y']
s = inputs['s']
x_centered = x - tft.mean(x)
y_normalized = tft.scale_to_0_1(y)
s_integerized = tft.compute_and_apply_vocabulary(s)
x_centered_times_y_normalized = (x_centered * y_normalized)
return {
'x_centered': x_centered,
'y_normalized': y_normalized,
's_integerized': s_integerized,
'x_centered_times_y_normalized': x_centered_times_y_normalized,
}
```
## Putting it all together
Now we're ready to transform our data. We'll use Apache Beam with a direct runner, and supply three inputs:
1. `raw_data` - The raw input data that we created above
2. `raw_data_metadata` - The schema for the raw data
3. `preprocessing_fn` - The function that we created to do our transformation
<aside class="key-term"><b>Key Term:</b> <a target='_blank' href='https://beam.apache.org/'>Apache Beam</a> uses a <a target='_blank' href='https://beam.apache.org/documentation/programming-guide/#applying-transforms'>special syntax to define and invoke transforms</a>. For example, in this line:
<code><blockquote>result = pass_this | 'name this step' >> to_this_call</blockquote></code>
The method <code>to_this_call</code> is being invoked and passed the object called <code>pass_this</code>, and <a target='_blank' href='https://stackoverflow.com/questions/50519662/what-does-the-redirection-mean-in-apache-beam-python'>this operation will be referred to as <code>name this step</code> in a stack trace</a>. The result of the call to <code>to_this_call</code> is returned in <code>result</code>. You will often see stages of a pipeline chained together like this:
<code><blockquote>result = apache_beam.Pipeline() | 'first step' >> do_this_first() | 'second step' >> do_this_last()</blockquote></code>
and since that started with a new pipeline, you can continue like this:
<code><blockquote>next_result = result | 'doing more stuff' >> another_function()</blockquote></code></aside>
```
def main():
# Ignore the warnings
with tft_beam.Context(temp_dir=tempfile.mkdtemp()):
transformed_dataset, transform_fn = ( # pylint: disable=unused-variable
(raw_data, raw_data_metadata) | tft_beam.AnalyzeAndTransformDataset(
preprocessing_fn))
transformed_data, transformed_metadata = transformed_dataset # pylint: disable=unused-variable
print('\nRaw data:\n{}\n'.format(pprint.pformat(raw_data)))
print('Transformed data:\n{}'.format(pprint.pformat(transformed_data)))
if __name__ == '__main__':
main()
```
## Is this the right answer?
Previously, we used `tf.Transform` to do this:
```
x_centered = x - tft.mean(x)
y_normalized = tft.scale_to_0_1(y)
s_integerized = tft.compute_and_apply_vocabulary(s)
x_centered_times_y_normalized = (x_centered * y_normalized)
```
#### x_centered
With input of `[1, 2, 3]` the mean of x is 2, and we subtract it from x to center our x values at 0. So our result of `[-1.0, 0.0, 1.0]` is correct.
#### y_normalized
We wanted to scale our y values between 0 and 1. Our input was `[1, 2, 3]` so our result of `[0.0, 0.5, 1.0]` is correct.
#### s_integerized
We wanted to map our strings to indexes in a vocabulary, and there were only 2 words in our vocabulary ("hello" and "world"). So with input of `["hello", "world", "hello"]` our result of `[0, 1, 0]` is correct. Since "hello" occurs most frequently in this data, it will be the first entry in the vocabulary.
#### x_centered_times_y_normalized
We wanted to create a new feature by crossing `x_centered` and `y_normalized` using multiplication. Note that this multiplies the results, not the original values, and our new result of `[-0.0, 0.0, 1.0]` is correct.
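The four results above can be reproduced with plain Python arithmetic (a check of the expected values, not tf.Transform's implementation):

```python
from collections import Counter

x = [1.0, 2.0, 3.0]
y = [1.0, 2.0, 3.0]
s = ['hello', 'world', 'hello']

mean_x = sum(x) / len(x)
x_centered = [v - mean_x for v in x]                           # [-1.0, 0.0, 1.0]
y_normalized = [(v - min(y)) / (max(y) - min(y)) for v in y]   # [0.0, 0.5, 1.0]
# Vocabulary ordered by descending frequency: 'hello' -> 0, 'world' -> 1
vocab = {w: i for i, (w, _) in enumerate(Counter(s).most_common())}
s_integerized = [vocab[w] for w in s]                          # [0, 1, 0]
x_centered_times_y_normalized = [a * b for a, b in zip(x_centered, y_normalized)]
print(x_centered_times_y_normalized)                           # -> [-0.0, 0.0, 1.0]
```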
## U.S. GDP vs. Wage Income
### For every wage dollar paid, what is GDP output?
- Each worker on average currently contributes over
90,000 dollars annually in goods and services valued as GDP.
- Each worker on average currently earns about
43,300 dollars annually (steadily up from 35,000 since the 1990s).
- So one dollar in paid wages currently yields about
2.23 dollars of products and services --
but that multiplier is not constant historically.
### What can we say about GDP growth by observing wage growth?
We find the assumption of time-invariant multiplier
gives poor results, whereas we obtain a reasonable
regression fit (Appendix 3) by treating the multiplier as
time-variant (workers are increasingly more productive):
$\%(G) \approx 1.3 * \%(m w)$
In contrast, our *local numerical approximation*
derived in the conclusion suggests using the most
recent estimated parameters:
$\%(G) \approx 1.9 * \%(w)$
So roughly speaking, 1.0% wage growth equates to 1.9% GDP growth
(yet the data shows real wages can decline substantially during downturns).
The abuse of notation is due to the fact that
our observations are not in continuous-time,
but rather in interpolated discrete-time
and in (non-logarithmic) percentage terms.
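The link between wage growth and GDP growth rests on the identity G = m·w per worker, where small percentage growth rates approximately add, which is why treating the multiplier as time-variant changes the coefficient. A quick numeric sketch with purely illustrative numbers:

```python
# G = m * w (GDP per worker = multiplier times wage): growth rates add,
# up to a small cross term, since (1+a)(1+b) = 1 + a + b + a*b.
m1, w1 = 2.0, 40000.0    # multiplier and wage in period 1 (illustrative)
m2, w2 = 2.02, 40400.0   # both grow by 1%
g1, g2 = m1 * w1, m2 * w2

def pct(new, old):
    return 100.0 * (new / old - 1.0)

print(pct(g2, g1))  # -> ~2.01, close to pct(m) + pct(w) = 1.0 + 1.0
```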
Short URL: https://git.io/gdpwage
*Dependencies:*
- Repository: https://github.com/rsvp/fecon235
- Python: matplotlib, pandas
*CHANGE LOG*
2016-11-10 Revisit after two years. Use PREAMBLE-p6.16.0428.
Update results with newly estimated parameters.
This notebook should run under Python 2.7 or 3.
Appendix 3 modified to reflect trend fit of multiplier.
2014-12-07 Update code and commentary.
2014-08-15 First version.
```
from fecon235.fecon235 import *
# PREAMBLE-p6.16.0428 :: Settings and system details
from __future__ import absolute_import, print_function
system.specs()
pwd = system.getpwd() # present working directory as variable.
print(" :: $pwd:", pwd)
# If a module is modified, automatically reload it:
%load_ext autoreload
%autoreload 2
# Use 0 to disable this feature.
# Notebook DISPLAY options:
# Represent pandas DataFrames as text; not HTML representation:
import pandas as pd
pd.set_option( 'display.notebook_repr_html', False )
from IPython.display import HTML # useful for snippets
# e.g. HTML('<iframe src=http://en.mobile.wikipedia.org/?useformat=mobile width=700 height=350></iframe>')
from IPython.display import Image
# e.g. Image(filename='holt-winters-equations.png', embed=True) # url= also works
from IPython.display import YouTubeVideo
# e.g. YouTubeVideo('1j_HxD4iLn8', start='43', width=600, height=400)
from IPython.core import page
get_ipython().set_hook('show_in_pager', page.as_hook(page.display_page), 0)
# Or equivalently in config file: "InteractiveShell.display_page = True",
# which will display results in secondary notebook pager frame in a cell.
# Generate PLOTS inside notebook, "inline" generates static png:
%matplotlib inline
# "notebook" argument allows interactive zoom and resize.
```
## Examine U.S. population statistics
```
# Total US population in millions, released monthly:
pop = get( m4pop ) / 1000.0
plot( pop )
georet( pop, 12 )
```
This gives an annualized geometric growth rate of about 1.13%,
but one might also look at fertility rates, which sustain the population:
e.g. 2.1 children per female will ensure growth
(cf. fertility rates in Japan, which have been declining for decades).
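The `georet` call above is fecon235-specific. As a rough sketch of what an annualized geometric growth rate means for monthly data (note: the real `georet` reports more than this toy version, e.g. volatility):

```python
# Toy sketch (not fecon235's georet): annualized geometric growth
# rate of a monthly series, i.e. the constant yearly rate that
# compounds from the first to the last observation.
def geo_rate(series, periods_per_year=12):
    n = len(series) - 1              # number of periods elapsed
    total = series[-1] / series[0]   # total growth factor
    return total ** (periods_per_year / n) - 1

# Example: ~1% monthly growth compounds to ~12.68% annually.
monthly = [100 * 1.01 ** t for t in range(25)]  # 24 months of data
print(round(geo_rate(monthly), 4))  # → 0.1268
```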
```
# Fraction of population which works:
emppop = get( m4emppop ) / 100.0
```
Workers here are employed adults, which presumably excludes children
(20% of pop) and most elderly persons (14% of pop).
There is a dramatic drop in working% from about 64% in 2001
to about 59% recently.
```
plot( emppop )
# Total US workers in millions:
workers = todf( pop * emppop )
plot( workers )
georet( workers, 12 )
tail( workers )
```
**Total population and the number of workers grow annually around 1.15% --
but the annualized volatility for workers is much larger than the
total population (1.16% vs 0.09%). The decrease in workers due to the
Great Recession is remarkable, and since that period there has
been a steady increase to north of 190 million workers.**
## Examine U.S. Gross Domestic Product
```
# Deflator, scaled to 1 current dollar:
defl = get( m4defl )
# Nominal GDP in billions:
gdp = get( m4gdpus )
# The release cycle is quarterly, but we resample to monthly,
# in order to sync with the deflator.
# We do NOT use m4gdpusr directly because that is in 2009 dollars.
# Real GDP in current billions:
gdpr = todf( defl * gdp )
tail( gdpr )
# Real GDP: showing rise from $4 trillion economy
# in the 1960's to nearly $19 trillion.
plot( gdpr )
georet( gdpr, 12 )
```
Real GDP geometric rate of growth is 2.8% per annum
(presumably due to the working population).
We could say that is the *natural growth rate*
of the US economy.
### Real GDP per worker (NOT per capita)
```
# Real GDP per worker -- NOT per capita:
gdprworker = todf( gdpr / workers )
plot( gdprworker )
# plotted in thousands of dollars
```
**Chart shows each worker on average currently contributes
over *90,000 dollars annually*
of goods and services valued as GDP.**
```
georet( gdprworker, 12 )
```
Workers have generally been more *productive* since WW2,
increasingly contributing to GDP at an annual pace of 1.6%.
## Examine wage income
```
# Nominal annual INCOME, assuming 40 working hours per week, 50 weeks per year:
inc = get( m4wage ) * 2000
# REAL income in thousands per worker:
rinc = todf((defl * inc) / 1000.0)
# Income in thousands, current dollars per worker:
plot( rinc )
```
**INCOME chart shows each worker on average currently earns
about *43,300 dollars annually*
(steadily up from 35,000 since the 1990's).**
```
tail( rinc )
georet( rinc, 12 )
```
In general, real income does not always steadily go up,
as the chart demonstrates. A stagnating economy with high inflation
will wear away real wages.
Since 1964, the geometric rate of real wage growth has been
approximately 0.5% -- far less in comparison to the
natural growth rate of the economy.
## How do wages multiply out to GDP?
```
# Ratio of real GDP to real income per worker:
gdpinc = todf( gdprworker / rinc )
```
Implicitly our assumption is that workers earn wages at the
nonfarm non-supervisory private-sector rate.
This is not a bad assumption for our purposes,
provided changes in labor rates are applied uniformly
across the various other wage categories, since we are
focusing on the multiplier effect.
```
plot( gdpinc )
tail( gdpinc )
```
*The ratio of real GDP to real income per worker has increased
from 1.4 in the 1970's to 2.2 recently.*
(There is a noticeable temporary dip after the 2007 crisis.)
**One dollar in paid wages currently
yields 2.23 dollars of products and services.**
The time-series shows workers have become
more productive in producing national wealth.
Hypothesis: over the years, *technology* has exerted
upward pressure on productivity, and downward pressure on wages.
In other words, the slope of gdpinc is a function of
technological advances.
(Look for counterexamples in other countries.)
```
# Let's fit and plot the simplified time trend:
gdpinc_trend = trend( gdpinc )
# The computed slope will be relative to one month.
plot( gdpinc_trend )
```
The estimated slope implies that each year adds 0.02 to gdpinc multiplier.
Clearly, we can rule out a *constant* gdpinc multiplier effect.
Rather than a straight line estimator, we can forecast
the gdpinc multiplier using the Holt-Winters method,
one year ahead, month-by-month...
```
# Holt-Winters monthly forecasts:
holtfred( gdpinc, 12 )
```
Interestingly, forecasting the local terrain is more complex
than a global linear regression.
The Holt-Winters forecast for mid-2017 shows a
0.03 *decrease* in the gdpinc multiplier.
If the gdpinc multiplier were constant, then mathematically an
x% change in wages would translate to a straightforward x% change in GDP.
This is why the Federal Reserve, especially Janet Yellen,
pays so much attention to wage growth.
But our analysis clearly shows the multiplier is not stable.
Linear regression between real GDP growth and real wage growth
performs poorly when the multiplier is treated
as if it is time-invariant (see Appendix 2).
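To spell out the constant-multiplier claim: if $G_t = m w_t$ with $m$ fixed, the multiplier cancels from the growth rate, so wage growth passes straight through to GDP growth:
$\begin{aligned}
\frac{G_{t+1} - G_t}{G_t} = \frac{m w_{t+1} - m w_t}{m w_t} = \frac{w_{t+1} - w_t}{w_t}
\end{aligned}$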
## CONCLUSION: Numerical approximation for GDP growth based on observations from wage growth
We found evidence of a **time-variant multiplier**
$m_t$ such that $G_t = m_t w_t$.
Let us express GDP growth as the percentage change:
$\begin{aligned}
\frac{G_{t+1} - G_t}{G_t} = \frac{m_{t+1} w_{t+1}}{m_t w_t} - 1
\end{aligned}$
Notice that the RHS is just the growth rate of $m_t w_t$.
Abusing notation, we could write $\%(G) = \%(m w)$.
Empirically the multiplier varies linearly as a function of time.
Let us evaluate the GDP growth numerically via the RHS,
using the most recent multiplier and its expected linear incrementation,
assuming a wage increase of 1% *year-over-year*:
$\begin{aligned}
(\frac{2.23 + 0.02}{2.23}) {1.01} - 1 = 0.0191
\end{aligned}$
Under such assumptions, GDP increases 1.91% over one year.
In other words, **as a rough current approximation:
GDP_growth = 1.9 \* wage_growth,** i.e.
$\%(G) \approx 1.9 * \%(w)$ at current estimated parameters.
This is a useful approximation since GDP is only released quarterly,
whereas wage data is released monthly.
(The result also depends on the interpolation
method used in our *resample_main()*.)
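The back-of-envelope calculation above can be reproduced in a few lines of plain Python (the 2.23 multiplier and 0.02 annual increment are hard-coded from the notebook's estimates, not recomputed from data):

```python
# Sketch of the numerical approximation from the conclusion.
m_now = 2.23        # current gdpinc multiplier
m_increment = 0.02  # expected annual increase in the multiplier
wage_growth = 0.01  # assumed 1% year-over-year wage growth

gdp_growth = ((m_now + m_increment) / m_now) * (1 + wage_growth) - 1
print(round(gdp_growth, 4))  # → 0.0191
```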
Appendix 3 arrives at the following linear regression result:
$\%(G) \approx 1.3 * \%(m w)$
which takes the entire dataset since 1964 into account,
using gdpinc_trend as the time-varying multiplier.
- - - -
### APPENDIX 1: Linear regression of 0.49 R-squared if gdpinc multiplier is mistakenly treated as a constant
```
stat2( gdprworker[Y], rinc[Y] )
```
- - - -
### APPENDIX 2: Linear regression between real GDP growth and real wage growth: 0.19 R-squared when multiplier is treated as time-invariant
Note: an alternative is to use the difference between
logarithmic values, but we intentionally use the pcent() function YoY
since our data frequency is not even remotely continuous-time.
```
# Examine year-over-year percentage growth:
stat2( pcent(gdpr, 12)[Y], pcent(rinc, 12)[Y] )
```
- - - -
### APPENDIX 3: Improved linear regression of growth model: 0.60 R-squared with time-variant multiplier (trend based)
Let the Python variable mw represent the series $m_t w_t$
in our analytical model described in the conclusion:
```
# The string argument allows us to label a DataFrame column:
mw = todf( gdpinc_trend * rinc, 'mw' )
mwpc = todf( pcent( mw, 12), 'mwpc' )
gdprpc = todf( pcent( gdpr, 12), 'Gpc' )
dataf = paste( [gdprpc, mwpc] )
# The 0 in the formula means no intercept:
result = regressformula( dataf['1964':], 'Gpc ~ 0 + mwpc' )
print(result.summary())
```
R-squared after 1964 looks respectable at around 0.60; however,
the fit is terrible after the Great Recession.
The estimated coefficient implies this fitted equation:
$\%(G) \approx 1.3 * \%(m w)$
In contrast, our *local numerical approximation* derived in the conclusion
suggests for the most recent estimated parameters:
$\%(G) \approx 1.9 * \%(w)$
# WARNING
**Please make sure to "COPY AND EDIT NOTEBOOK" to use compatible library dependencies! DO NOT CREATE A NEW NOTEBOOK AND COPY+PASTE THE CODE - this will use latest Kaggle dependencies at the time you do that, and the code will need to be modified to make it work. Also make sure internet connectivity is enabled on your notebook**
# Preliminaries
First install a critical dependency for our code. **NOTE THAT THIS NOTEBOOK USES TENSORFLOW 1.14 BECAUSE ELMo WAS NOT PORTED TO TENSORFLOW 2.X AT THE TIME OF DEVELOPMENT. You can confirm if that is still the case now by going to https://tfhub.dev/s?q=elmo To see equivalent Tensorflow 2.X BERT Code for the Spam problem, see https://www.kaggle.com/azunre/tlfornlp-chapters2-3-spam-bert-tf2**
```
!pip install keras==2.2.4 # critical dependency
```
Write requirements to a file anytime you run the notebook, in case you have to go back and recover the Kaggle dependencies. **MOST OF THESE REQUIREMENTS WOULD NOT BE NECESSARY FOR A LOCAL INSTALLATION**
Latest known such requirements are hosted for each notebook in the companion github repo, and can be pulled down and installed here if needed. Companion github repo is located at https://github.com/azunre/transfer-learning-for-nlp
```
!pip freeze > kaggle_image_requirements.txt
# Import neural network libraries
import tensorflow as tf
import tensorflow_hub as hub
from keras import backend as K
import keras.layers as layers
from keras.models import Model, load_model
from keras.engine import Layer
# Initialize tensorflow/keras session
sess = tf.Session()
K.set_session(sess)
# Some other key imports
import os
import re
import pandas as pd
import numpy as np
import random
```
# Define Tokenization, Stop-word and Punctuation Removal Functions
Before proceeding, we must decide how many samples to draw from each class. We must also decide the maximum number of tokens per document, and the maximum length of each token. This is done by setting the following overarching hyperparameters.
```
Nsamp = 1000 # number of samples to generate in each class - 'pos', 'neg'
maxtokens = 50 # the maximum number of tokens per document
maxtokenlen = 20 # the maximum length of each token
```
**Tokenization**
```
def tokenize(row):
    if row is None or row == '':
        tokens = []
    else:
        tokens = row.split(" ")[:maxtokens]
    return tokens
```
**Use regular expressions to remove unnecessary characters**
Next, we define a function to remove punctuation marks and other nonword characters (using regular expressions) from the documents with the help of Python's built-in `re` library. In the same step, we truncate all tokens to the hyperparameter maxtokenlen defined above.
```
import re
def reg_expressions(row):
tokens = []
try:
for token in row:
token = token.lower()
token = re.sub(r'[\W\d]', "", token)
token = token[:maxtokenlen] # truncate token
tokens.append(token)
except:
token = ""
tokens.append(token)
return tokens
```
**Stop-word removal**
Let’s define a function to remove stopwords - words that occur so frequently in language that they offer no useful information for classification. This includes words such as “the” and “are”, and the popular library NLTK provides a heavily-used list that we will employ.
```
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
# print(stopwords) # see default stopwords
# it may be beneficial to drop negation words from the removal list, as they can change the positive/negative meaning
# of a sentence - but we didn't find it to make a difference for this problem
# stopwords.remove("no")
# stopwords.remove("nor")
# stopwords.remove("not")
def stop_word_removal(row):
    token = [token for token in row if token not in stopwords]
    token = filter(None, token)
    return token
```
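As a quick sanity check of the preprocessing pipeline, here is a self-contained sketch that chains the three functions on a sample review (a tiny hard-coded stopword list stands in for NLTK's downloaded list, and the functions are simplified restatements of the ones above):

```python
# Self-contained sketch of the tokenize -> stopword -> regex pipeline,
# with a tiny inline stopword list standing in for NLTK's.
import re

maxtokens, maxtokenlen = 50, 20
stopwords = ["the", "a", "is", "are"]

def tokenize(row):
    return row.split(" ")[:maxtokens] if row else []

def stop_word_removal(row):
    return [t for t in row if t and t not in stopwords]

def reg_expressions(row):
    return [re.sub(r'[\W\d]', "", t.lower())[:maxtokenlen] for t in row]

text = "The movie is a 10/10 masterpiece!"
# "The" survives stopword removal because lowercasing happens later,
# matching the order used in load_data below; "10/10" is stripped to "".
print(reg_expressions(stop_word_removal(tokenize(text))))
# → ['the', 'movie', '', 'masterpiece']
```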
# Download and Assemble IMDB Review Dataset
Download the labeled IMDB reviews
```
!wget -q "http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
!tar xzf aclImdb_v1.tar.gz
```
Shuffle and preprocess data
```
# function for shuffling data
def unison_shuffle(data, header):
    p = np.random.permutation(len(header))
    data = data[p]
    header = np.asarray(header)[p]
    return data, header

def load_data(path):
    data, sentiments = [], []
    for folder, sentiment in (('neg', 0), ('pos', 1)):
        folder = os.path.join(path, folder)
        for name in os.listdir(folder):
            with open(os.path.join(folder, name), 'r') as reader:
                text = reader.read()
            text = tokenize(text)
            text = stop_word_removal(text)
            text = reg_expressions(text)
            data.append(text)
            sentiments.append(sentiment)
    data_np = np.array(data)
    data, sentiments = unison_shuffle(data_np, sentiments)
    return data, sentiments
train_path = os.path.join('aclImdb', 'train')
test_path = os.path.join('aclImdb', 'test')
raw_data, raw_header = load_data(train_path)
print(raw_data.shape)
print(len(raw_header))
# Subsample required number of samples
random_indices = np.random.choice(range(len(raw_header)),size=(Nsamp*2,),replace=False)
data_train = raw_data[random_indices]
header = raw_header[random_indices]
print("DEBUG::data_train::")
print(data_train)
```
Display sentiments and their frequencies in the dataset, to ensure it is roughly balanced between classes
```
unique_elements, counts_elements = np.unique(header, return_counts=True)
print("Sentiments and their frequencies:")
print(unique_elements)
print(counts_elements)
# function for converting data into the right format, due to the difference in required format from sklearn models
# we expect a single string per review here, versus a list of tokens for the sklearn models previously explored
def convert_data(raw_data, header):
    converted_data, labels = [], []
    for i in range(raw_data.shape[0]):
        # combine list of tokens representing each review into a single string
        out = ' '.join(raw_data[i])
        converted_data.append(out)
        labels.append(header[i])
    converted_data = np.array(converted_data, dtype=object)[:, np.newaxis]
    return converted_data, np.array(labels)
data_train, header = unison_shuffle(data_train, header)
# split into independent 70% training and 30% testing sets
idx = int(0.7*data_train.shape[0])
# 70% of data for training
train_x, train_y = convert_data(data_train[:idx],header[:idx])
# remaining 30% for testing
test_x, test_y = convert_data(data_train[idx:],header[idx:])
print("train_x/train_y list details, to make sure it is of the right form:")
print(len(train_x))
print(train_x)
print(train_y[:5])
print(train_y.shape)
```
# Build, Train and Evaluate ELMo Model
Create a custom tf hub ELMO embedding layer
```
class ElmoEmbeddingLayer(Layer):
    def __init__(self, **kwargs):
        self.dimensions = 1024  # initialize output dimension of ELMo embedding
        self.trainable = True
        super(ElmoEmbeddingLayer, self).__init__(**kwargs)

    def build(self, input_shape):  # function for building ELMo embedding
        self.elmo = hub.Module('https://tfhub.dev/google/elmo/2', trainable=self.trainable,
                               name="{}_module".format(self.name))  # download pretrained ELMo model
        # extract trainable parameters, which are only a small subset of the total - this is a constraint of
        # the tf hub module as shared by the authors - see https://tfhub.dev/google/elmo/2
        # the trainable parameters are 4 scalar weights on the sum of the outputs of ELMo layers
        self.trainable_weights += K.tf.trainable_variables(scope="^{}_module/.*".format(self.name))
        super(ElmoEmbeddingLayer, self).build(input_shape)

    def call(self, x, mask=None):  # specify function for calling embedding
        result = self.elmo(K.squeeze(K.cast(x, tf.string), axis=1),
                           as_dict=True,
                           signature='default',
                           )['default']
        return result

    def compute_output_shape(self, input_shape):  # specify output shape
        return (input_shape[0], self.dimensions)
```
We now use the custom TF hub ELMo embedding layer within a higher-level function to define the overall model. More specifically, we put a dense trainable layer of output dimension 256 on top of the ELMo embedding.
```
# Function to build model
def build_model():
    input_text = layers.Input(shape=(1,), dtype="string")
    embedding = ElmoEmbeddingLayer()(input_text)
    dense = layers.Dense(256, activation='relu')(embedding)
    pred = layers.Dense(1, activation='sigmoid')(dense)
    model = Model(inputs=[input_text], outputs=pred)
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.summary()
    return model
# Build and fit
model = build_model()
history = model.fit(train_x,
train_y,
validation_data=(test_x, test_y),
epochs=5,
batch_size=32)
```
**Save trained model**
```
model.save('ElmoModel.h5')
```
**Visualize Convergence**
```
import matplotlib.pyplot as plt
df_history = pd.DataFrame(history.history)
fig,ax = plt.subplots()
plt.plot(range(df_history.shape[0]),df_history['val_acc'],'bs--',label='validation')
plt.plot(range(df_history.shape[0]),df_history['acc'],'r^--',label='training')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.title('ELMo Movie Review Classification Training')
plt.legend(loc='best')
plt.grid()
plt.show()
fig.savefig('ELMoConvergence.eps', format='eps')
fig.savefig('ELMoConvergence.pdf', format='pdf')
fig.savefig('ELMoConvergence.png', format='png')
fig.savefig('ELMoConvergence.svg', format='svg')
```
**Make figures downloadable to local system in interactive mode**
```
from IPython.display import HTML
def create_download_link(title="Download file", filename="data.csv"):
    html = '<a href={filename}>{title}</a>'
    html = html.format(title=title, filename=filename)
    return HTML(html)
create_download_link(filename='ELMoConvergence.svg')
# you must remove all downloaded files - having too many of them on completion will make Kaggle reject your notebook
!rm -rf aclImdb
!rm aclImdb_v1.tar.gz
```
##### Copyright 2020 The Cirq Developers
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Noise
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/noise"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/noise.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/noise.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/noise.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
```
try:
    import cirq
except ImportError:
    print("installing cirq...")
    !pip install --quiet cirq
    print("installed cirq.")
import cirq
```
For simulation, it is useful to have `Gate` objects that enact noisy quantum evolution. Cirq supports modeling noise via *operator sum* representations of noise (these evolutions are also known as quantum operations or quantum dynamical maps).
This formalism models evolution of the density matrix $\rho$ via
$$
\rho \rightarrow \sum_{k = 1}^{m} A_k \rho A_k^\dagger
$$
where $A_k$ are known as *Kraus operators*. These operators are not necessarily unitary but must satisfy the trace-preserving property
$$
\sum_k A_k^\dagger A_k = I .
$$
A channel with $m = 1$ unitary Kraus operator is called *coherent* (and is equivalent to a unitary gate operation), otherwise the channel is called *incoherent*. For a given noisy channel, Kraus operators are not necessarily unique. For more details on these operators, see [John Preskill's lecture notes](http://theory.caltech.edu/~preskill/ph219/chap3_15.pdf).
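As a quick numerical illustration (not part of the original docs), we can verify the trace-preserving property for a single-qubit bit-flip channel with $p = 0.1$, whose Kraus operators are $A_0 = \sqrt{1-p}\,I$ and $A_1 = \sqrt{p}\,X$:

```python
# Verify sum_k A_k^dagger A_k = I for the bit-flip channel with p = 0.1.
import numpy as np

p = 0.1
I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
kraus = [np.sqrt(1 - p) * I, np.sqrt(p) * X]

completeness = sum(A.conj().T @ A for A in kraus)
assert np.allclose(completeness, I)  # trace-preserving property holds
```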
## Common channels
Cirq defines many commonly used quantum channels in [`ops/common_channels.py`](https://github.com/quantumlib/Cirq/blob/master/cirq/ops/common_channels.py). For example, the single-qubit bit-flip channel
$$
\rho \rightarrow (1 - p) \rho + p X \rho X
$$
with parameter $p = 0.1$ can be created as follows.
```
"""Get a single-qubit bit-flip channel."""
bit_flip = cirq.bit_flip(p=0.1)
```
To see the Kraus operators of a channel, the `cirq.channel` protocol can be used. (See the [protocols guide](./protocols.ipynb).)
```
for i, kraus in enumerate(cirq.channel(bit_flip)):
    print(f"Kraus operator {i + 1} is:\n", kraus, end="\n\n")
```
As mentioned, all channels are subclasses of `cirq.Gate`s. As such, they can act on qubits and be used in circuits in the same manner as gates.
```
"""Example of using channels in a circuit."""
# See the number of qubits a channel acts on.
nqubits = bit_flip.num_qubits()
print(f"Bit flip channel acts on {nqubits} qubit(s).\n")
# Apply the channel to each qubit in a circuit.
circuit = cirq.Circuit(
bit_flip.on_each(cirq.LineQubit.range(3))
)
print(circuit)
```
Channels can even be controlled.
```
"""Example of controlling a channel."""
# Get the controlled channel.
controlled_bit_flip = bit_flip.controlled(num_controls=1)
# Use it in a circuit.
circuit = cirq.Circuit(
controlled_bit_flip(*cirq.LineQubit.range(2))
)
print(circuit)
```
In addition to the bit-flip channel, other common channels predefined in Cirq are shown below. Definitions of these channels can be found in their docstrings - e.g., `help(cirq.depolarize)`.
* `cirq.phase_flip`
* `cirq.phase_damp`
* `cirq.amplitude_damp`
* `cirq.depolarize`
* `cirq.asymmetric_depolarize`
* `cirq.reset`
For example, the asymmetric depolarizing channel is defined by
$$
\rho \rightarrow (1-p_x-p_y-p_z) \rho + p_x X \rho X + p_y Y \rho Y + p_z Z \rho Z
$$
and can be instantiated as follows.
```
"""Get an asymmetric depolarizing channel."""
depo = cirq.asymmetric_depolarize(
p_x=0.10,
p_y=0.05,
p_z=0.15,
)
circuit = cirq.Circuit(
depo.on_each(cirq.LineQubit(0))
)
print(circuit)
```
## The `channel` and `mixture` protocols
We have seen the `cirq.channel` protocol which returns the Kraus operators of a channel. Some channels have the interpretation of randomly applying a single unitary Kraus operator $U_k$ with probability $p_k$, namely
$$
\rho \rightarrow \sum_k p_k U_k \rho U_k^\dagger \quad \text{where} \quad \sum_k p_k = 1 \text{ and } U_k U_k^\dagger = I.
$$
For example, the bit-flip channel from above
$$
\rho \rightarrow (1 - p) \rho + p X \rho X
$$
can be interpreted as doing nothing (applying identity) with probability $1 - p$ and flipping the bit (applying $X$) with probability $p$. Channels with these interpretations support the `cirq.mixture` protocol. This protocol returns the probabilities and unitary Kraus operators of the channel.
```
"""Example of using the mixture protocol."""
for prob, kraus in cirq.mixture(bit_flip):
    print(f"With probability {prob}, apply\n", kraus, end="\n\n")
```
Channels that do not have this interpretation do not support the `cirq.mixture` protocol. Such channels apply Kraus operators with probabilities that depend on the state $\rho$.
An example of a channel which does not support the mixture protocol is the amplitude damping channel with parameter $\gamma$ defined by Kraus operators
$$
M_0 = \begin{bmatrix} 1 & 0 \cr 0 & \sqrt{1 - \gamma} \end{bmatrix}
\text{and }
M_1 = \begin{bmatrix} 0 & \sqrt{\gamma} \cr 0 & 0 \end{bmatrix} .
$$
```
"""The amplitude damping channel is an example of a channel without a mixture."""
channel = cirq.amplitude_damp(0.1)
if cirq.has_mixture(channel):
    print(f"Channel {channel} has a _mixture_ or _unitary_ method.")
else:
    print(f"Channel {channel} does not have a _mixture_ or _unitary_ method.")
```
To summarize:
* Every `Gate` in Cirq supports the `cirq.channel` protocol.
- If magic method `_channel_` is not defined, `cirq.channel` looks for `_mixture_` then for `_unitary_`.
* A subset of channels which support `cirq.channel` also support the `cirq.mixture` protocol.
- If magic method `_mixture_` is not defined, `cirq.mixture` looks for `_unitary_`.
* A subset of channels which support `cirq.mixture` also support the `cirq.unitary` protocol.
For concrete examples, consider `cirq.X`, `cirq.BitFlipChannel`, and `cirq.AmplitudeDampingChannel` which are all subclasses of `cirq.Gate`.
* `cirq.X` defines the `_unitary_` method.
- As a result, it supports the `cirq.unitary` protocol, the `cirq.mixture` protocol, and the `cirq.channel` protocol.
* `cirq.BitFlipChannel` defines the `_mixture_` method but not the `_unitary_` method.
- As a result, it only supports the `cirq.mixture` protocol and the `cirq.channel` protocol.
* `cirq.AmplitudeDampingChannel` defines the `_channel_` method, but not the `_mixture_` method or the `_unitary_` method.
- As a result, it only supports the `cirq.channel` protocol.
## Custom channels
Channels not defined in `cirq.ops.common_channels` can be user-defined. Defining custom channels is similar to defining [custom gates](./custom_gates.ipynb).
A minimal example for defining the channel
$$
\rho \mapsto (1 - p) \rho + p Y \rho Y
$$
is shown below.
```
"""Minimal example of defining a custom channel."""
class BitAndPhaseFlipChannel(cirq.SingleQubitGate):
    def __init__(self, p: float) -> None:
        self._p = p

    def _mixture_(self):
        ps = [1.0 - self._p, self._p]
        ops = [cirq.unitary(cirq.I), cirq.unitary(cirq.Y)]
        return tuple(zip(ps, ops))

    def _has_mixture_(self) -> bool:
        return True

    def _circuit_diagram_info_(self, args) -> str:
        return f"BitAndPhaseFlip({self._p})"
```
Note: The `_has_mixture_` magic method is not strictly required but is recommended.
We can now instantiate this channel and get its mixture:
```
"""Custom channels can be used like any other channels."""
bit_phase_flip = BitAndPhaseFlipChannel(p=0.05)
for prob, kraus in cirq.mixture(bit_phase_flip):
    print(f"With probability {prob}, apply\n", kraus, end="\n\n")
```
Note: Since `_mixture_` is defined, the `cirq.channel` protocol can also be used.
The custom channel can be used in a circuit just like other predefined channels.
```
"""Example of using a custom channel in a circuit."""
circuit = cirq.Circuit(
bit_phase_flip.on_each(*cirq.LineQubit.range(3))
)
circuit
```
Note: If a custom channel does not have a mixture, it should instead define the `_channel_` magic method to return a sequence of Kraus operators (as `numpy.ndarray`s). Defining a `_has_channel_` method which returns `True` is optional but recommended.
This method of defining custom channels is the most general, but simple channels such as the custom `BitAndPhaseFlipChannel` can also be created directly from a `Gate` with the convenient `Gate.with_probability` method.
```
"""Create a channel with Gate.with_probability."""
channel = cirq.Y.with_probability(probability=0.05)
```
This produces the same mixture as the custom `BitAndPhaseFlip` channel above.
```
for prob, kraus in cirq.mixture(channel):
    print(f"With probability {prob}, apply\n", kraus, end="\n\n")
```
Note that the order of Kraus operators is reversed from above, but this of course does not affect the action of the channel.
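A small NumPy check (an illustration, not part of the original docs) that reordering the terms of a mixture leaves the channel's action on a state unchanged:

```python
# The channel action rho -> sum_k p_k U_k rho U_k^dagger is a plain sum,
# so the order of the (probability, unitary) pairs is irrelevant.
import numpy as np

p = 0.05
I = np.eye(2)
Y = np.array([[0., -1j], [1j, 0.]])
mixture = [(1 - p, I), (p, Y)]

rho = np.array([[1., 0.], [0., 0.]])  # |0><0|
apply = lambda mix: sum(pr * U @ rho @ U.conj().T for pr, U in mix)
assert np.allclose(apply(mixture), apply(mixture[::-1]))
```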
## Simulating noisy circuits
### Density matrix simulation
The `cirq.DensityMatrixSimulator` can simulate any noisy circuit (i.e., can apply any quantum channel) because it stores the full density matrix $\rho$. This simulation strategy updates the state $\rho$ by directly applying the Kraus operators of each quantum channel.
```
"""Simulating a circuit with the density matrix simulator."""
# Get a circuit.
qbit = cirq.GridQubit(0, 0)
circuit = cirq.Circuit(
cirq.X(qbit),
cirq.amplitude_damp(0.1).on(qbit)
)
# Display it.
print("Simulating circuit:")
print(circuit)
# Simulate with the density matrix simulator.
dsim = cirq.DensityMatrixSimulator()
rho = dsim.simulate(circuit).final_density_matrix
# Display the final density matrix.
print("\nFinal density matrix:")
print(rho)
```
Note that the density matrix simulator supports the `run` method which only gives access to measurements as well as the `simulate` method (used above) which gives access to the full density matrix.
### Monte Carlo wavefunction simulation
Noisy circuits with arbitrary channels can also be simulated with the `cirq.Simulator`. When simulating such a channel, a single Kraus operator is randomly sampled (according to the probability distribution) and applied to the wavefunction. This method is known as "Monte Carlo (wavefunction) simulation" or "quantum trajectories."
Note: For channels which do not support the `cirq.mixture` protocol, the probability of applying each Kraus operator depends on the state. In contrast, for channels which do support the `cirq.mixture` protocol, the probability of applying each Kraus operator is independent of the state.
```
"""Simulating a noisy circuit via Monte Carlo simulation."""
# Get a circuit.
qbit = cirq.NamedQubit("Q")
circuit = cirq.Circuit(cirq.bit_flip(p=0.5).on(qbit))
# Display it.
print("Simulating circuit:")
print(circuit)
# Simulate with the cirq.Simulator.
sim = cirq.Simulator()
psi = sim.simulate(circuit).dirac_notation()
# Display the final wavefunction.
print("\nFinal wavefunction:")
print(psi)
```
To see that the output is stochastic, you can run the cell above multiple times. Since $p = 0.5$ in the bit-flip channel, you should get $|0\rangle$ roughly half the time and $|1\rangle$ roughly half the time. The `run` method with many repetitions can also be used to see this behavior.
```
"""Example of Monte Carlo wavefunction simulation with the `run` method."""
circuit = cirq.Circuit(
cirq.bit_flip(p=0.5).on(qbit),
cirq.measure(qbit),
)
res = sim.run(circuit, repetitions=100)
print(res.histogram(key=qbit))
```
## Adding noise to circuits
Often circuits are defined with just unitary operations, but we want to simulate them with noise. There are several methods for inserting noise in Cirq.
For any circuit, the `with_noise` method can be called to insert a channel after every moment.
```
"""One method to insert noise in a circuit."""
# Define some noiseless circuit.
circuit = cirq.testing.random_circuit(
qubits=3, n_moments=3, op_density=1, random_state=11
)
# Display the noiseless circuit.
print("Circuit without noise:")
print(circuit)
# Add noise to the circuit.
noisy = circuit.with_noise(cirq.depolarize(p=0.01))
# Display it.
print("\nCircuit with noise:")
print(noisy)
```
This circuit can then be simulated using the methods described above.
The `with_noise` method creates a `cirq.NoiseModel` from its input and adds noise to each moment. A `cirq.NoiseModel` can be explicitly created and used to add noise to a single operation, single moment, or series of moments as follows.
```
"""Add noise to an operation, moment, or sequence of moments."""
# Create a noise model.
noise_model = cirq.NoiseModel.from_noise_model_like(cirq.depolarize(p=0.01))
# Get a qubit register.
qreg = cirq.LineQubit.range(2)
# Add noise to an operation.
op = cirq.CNOT(*qreg)
noisy_op = noise_model.noisy_operation(op)
# Add noise to a moment.
moment = cirq.Moment(cirq.H.on_each(qreg))
noisy_moment = noise_model.noisy_moment(moment, system_qubits=qreg)
# Add noise to a sequence of moments.
circuit = cirq.Circuit(cirq.H(qreg[0]), cirq.CNOT(*qreg))
noisy_circuit = noise_model.noisy_moments(circuit, system_qubits=qreg)
```
Note: In the last two examples, the argument `system_qubits` can be a subset of the qubits in the moment(s).
The output of each "noisy method" is a `cirq.OP_TREE` which can be converted to a circuit by passing it into the `cirq.Circuit` constructor. For example, we create a circuit from the `noisy_moment` below.
```
"""Creating a circuit from a noisy cirq.OP_TREE."""
cirq.Circuit(noisy_moment)
```
Another technique is to pass a noise channel to the density matrix simulator as shown below.
```
"""Define a density matrix simulator with a noise model."""
noisy_dsim = cirq.DensityMatrixSimulator(
noise=cirq.generalized_amplitude_damp(p=0.1, gamma=0.5)
)
```
This will not explicitly add channels to the circuit being simulated, but the circuit will be simulated as though these channels were present.
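One way to sanity-check what such a noise model represents: a valid channel's Kraus operators must satisfy the completeness relation $\sum_i K_i^\dagger K_i = I$. Below is a NumPy sketch using one common textbook parameterization of generalized amplitude damping (whether it matches Cirq's `p`/`gamma` convention exactly is an assumption here, but the completeness check holds regardless):
```
import numpy as np

def gad_kraus(p, gamma):
    """Kraus operators for generalized amplitude damping (textbook form)."""
    K0 = np.sqrt(p) * np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
    K1 = np.sqrt(p) * np.array([[0, np.sqrt(gamma)], [0, 0]])
    K2 = np.sqrt(1 - p) * np.array([[np.sqrt(1 - gamma), 0], [0, 1]])
    K3 = np.sqrt(1 - p) * np.array([[0, 0], [np.sqrt(gamma), 0]])
    return [K0, K1, K2, K3]

ks = gad_kraus(p=0.1, gamma=0.5)
completeness = sum(K.conj().T @ K for K in ks)
print(np.allclose(completeness, np.eye(2)))  # True
```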
Other than these general methods, channels can be added to circuits at any moment just as gates are. The channels can be different, be correlated, act on a subset of qubits, be custom defined, etc.
```
"""Defining a circuit with multiple noisy channels."""
qreg = cirq.LineQubit.range(4)
circ = cirq.Circuit(
cirq.H.on_each(qreg),
cirq.depolarize(p=0.01).on_each(qreg),
cirq.qft(*qreg),
bit_phase_flip.on_each(qreg[1::2]),
cirq.qft(*qreg, inverse=True),
cirq.reset(qreg[1]),
cirq.measure(*qreg),
cirq.bit_flip(p=0.07).controlled(1).on(*qreg[2:]),
)
print("Circuit with multiple channels:\n")
print(circ)
```
Circuits can also be modified with standard methods like `insert` to add channels at any point in the circuit. For example, to model simple state preparation errors, one can add bit-flip channels to the start of the circuit as follows.
```
"""Example of inserting channels in circuits."""
circ.insert(0, cirq.bit_flip(p=0.1).on_each(qreg))
print(circ)
```
- title: Equivalence between Policy Gradients and Soft Q-Learning
- summary: Inspecting the gradients of entropy-augmented policy updates to show their equivalence
- author: Braden Hoagland
- date: 2019-08-12
- image: /static/images/soft_q.png
# Introduction
This article will dive into a lot of the math surrounding the gradients of different maximum entropy RL learning methods. In practice, we usually work in the space of objective functions: with both policy gradients and Q-learning, we'll form an objective function and allow an autodiff library to calculate the gradients for us. We never have to see what's going on behind the scenes, which has its pros and cons. A benefit is that working with objective functions is much easier than calculating gradients by hand. On the other hand, it's easy to lose sight of what's really going on when we work at such an abstract level.
This abstraction issue is tackled in the paper `Equivalence Between Policy Gradients and Soft Q-Learning` (https://arxiv.org/abs/1704.06440), and I think it provides some pretty eye-opening insights into what the most common RL algorithms are really doing. I'll be working off of version 4 of the paper from Oct. 2018, the most recent version of the paper at the time of writing.
First I'll walk through some of the basic definitions in the max-entropy RL setting, then I'll pick out the most important bits of math from the paper that show how entropy-augmented Q-learning is really just a policy gradient method.
# Maximum Entropy RL and the Boltzmann Policy
In standard RL, we try to maximize expected cumulative reward $\mathbb{E}[\sum_t r_t]$. In the max-entropy setting, we augment this reward signal with an entropy bonus. The expected cumulative reward of a policy $\pi$ is commonly denoted as $\eta(\pi)$
\begin{align*}
\eta(\pi) &= \mathbb{E} \Big[ \sum_t (r_t + \alpha \mathcal{H}(\pi)) \Big] \\
&= \mathbb{E} \Big[ \sum_t \big( r_t - \alpha \log\pi(a_t | s_t) \big) \Big]
\end{align*}
where $\pi$ is our current policy and $\alpha$ weights how important the entropy is in our reward definition. This intuitively makes the reward seem higher when our policy exhibits high entropy, allowing it to explore its environment more extensively. A key component of this augmented objective is that the entropy is *inside* the sum. Thus an optimal policy will not only try to act with high entropy *now*, but will act in such a way that it finds highly-entropic states in the *future*.
The paper uses slightly different notation, opting to use KL divergence (AKA "relative entropy") instead of just entropy. This uses a reference policy $\bar{\pi}$, which can be thought of as an old, worse policy that we wish to improve on
\begin{align*}
\eta(\pi) &= \mathbb{E} \Big[ \sum_t \big( r_t - \alpha \log\pi(a_t|s_t) + \alpha \log\bar{\pi}(a_t|s_t) \big) \Big] \\
&= \mathbb{E} \Big[ \sum_t \big(r_t - \alpha D_{KL}(\pi \,\Vert\, \bar{\pi}) \big) \Big]
\end{align*}
In the max-entropy setting, optimal policies are stochastic and proportional to exponential of the optimal Q-function. This can be expressed formally as
$$ \pi^* \propto e^{Q^*(s,a)/\alpha} $$
If this doesn't seem very intuitive, I would recommend a quick scan of the article https://bair.berkeley.edu/blog/2017/10/06/soft-q-learning/. It offers a brief introduction to max-entropy RL (specifically for Q-learning) and some helpful intuitions as to why the above relationship is a good property for a policy to have.
To actually get a policy in this form, we'll change up the definition slightly
$$
\pi = \frac{\bar{\pi} \, e^{Q(s,a) / \alpha}}{\mathbb{E}_{\bar{a}\sim\bar{\pi}} [e^{Q(s,\bar{a}) / \alpha}]}
$$
The numerator of this expression is simply stating that we want our new policy to be like our old policy, but slightly in the direction of $e^Q$. If $\alpha$ is higher (i.e. we want more entropy), we move less in the direction of $e^Q$. The denominator is a normalization constant that ensures that our entire expression is still a valid probability distribution (i.e. the sum over all possible actions comes out to 1).
You may have noticed that the denominator of our policy is really just $e^{V/\alpha}$, where $V$ is the soft analogue of the usual value function $V = \mathbb{E}_{a}[Q]$. We'll use this to simplify our policy
\begin{align*}
V(s) &= \alpha \log \mathbb{E}_{a\sim\bar{\pi}} \big[ e^{Q(s,a)/\alpha} \big] \\
\pi &= \bar{\pi} \, e^{(Q(s,a) - V(s)) / \alpha}
\end{align*}
This new policy definition shows more directly that our policy is proportional to the exponential of the advantage. If our policy is proportional to $e^Q$, it should also be proportional to $e^A$, so this makes sense. From now on, we'll refer to this policy as the 'Boltzmann Policy' and denote it $\pi^B$.
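A quick numerical sketch (not from the paper) makes these definitions concrete: starting from a uniform reference policy $\bar{\pi}$ and arbitrary Q-values, the Boltzmann policy is always a valid distribution, and as $\alpha \to 0$ its mass concentrates on the greedy (argmax) action:
```
import numpy as np

def boltzmann_policy(q, pi_bar, alpha):
    """pi = pi_bar * exp((Q - V)/alpha), with V = alpha * log E_{pi_bar}[exp(Q/alpha)]."""
    v = alpha * np.log(np.sum(pi_bar * np.exp(q / alpha)))
    return pi_bar * np.exp((q - v) / alpha)

q = np.array([1.0, 2.0, 0.5])
pi_bar = np.ones(3) / 3  # uniform reference policy

for alpha in (1.0, 0.1, 0.01):
    pi = boltzmann_policy(q, pi_bar, alpha)
    # pi always sums to 1; it concentrates on argmax(Q) as alpha shrinks.
    print(alpha, pi.round(4))
```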
# Soft Q-Learning with Boltzmann Backups
From this point onward, there will inevitably be sections of math that seem to leave out non-trivial amounts of work. This is because I think this paper mainly benefits our intuitions about RL. The math proves these new intuitions, but by itself is hard to read. If you're curious and wish to go through all the derivations, I would highly recommend working through the full paper on your own. With that disclaimer out of the way, we can get started...
With normal Q-learning, we define our backup operator $\mathcal{T}$ as follows
$$
\mathcal{T}Q = \mathbb{E}_{r,s'} \big[ r + \gamma \mathbb{E}_{a'\sim\pi}[Q(s', a')] \big]
$$
In the max-entropy setting, we'll have to add in an entropy bonus to the reward signal and simplify accordingly
\begin{align*}
\mathcal{T}Q &= \mathbb{E}_{r,s'} \big[ r + \gamma \mathbb{E}_{a'}[Q(s', a')] - \alpha D_{KL} \big( \pi(\cdot|s') \;\Vert\; \bar{\pi}(\cdot|s') \big) \big] \\
&= \mathbb{E}_{r,s'} \big[ r + \gamma \alpha \log \mathbb{E}_{a'\sim\bar{\pi}}[e^{Q(s',a')/\alpha}] \big]
\end{align*}
See equations 11 and 13 from the paper (which rely on equations 2-6) if you want to see just how exactly that simplification works. To actually perform the optimization step $Q \gets \mathcal{T}Q$, we'll minimize the mean squared error between our current $Q$ and an estimate of $\mathcal{T}Q$. Our regression targets can be defined
\begin{align*}
y &= r + \gamma \alpha \log \mathbb{E}_{a'\sim\bar{\pi}} \big[ e^{Q(s', a') / \alpha} \big] \\
&= r + \gamma V(s')
\end{align*}
Using Boltzmann backups instead of the traditional Q-learning backups is what transforms normal Q-learning into what's conventionally called "soft" Q-learning. That's really all there is to it.
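In code, the only change from a standard ("hard") Q-learning target is replacing the max over next-state Q-values with the soft value. A minimal sketch, assuming discrete actions and a uniform reference policy $\bar{\pi}$ (both assumptions are mine, for illustration):
```
import numpy as np

def soft_q_target(r, q_next, gamma=0.99, alpha=0.1):
    """y = r + gamma * alpha * log E_{a'~pi_bar}[exp(Q(s',a')/alpha)], uniform pi_bar."""
    v_soft = alpha * np.log(np.mean(np.exp(q_next / alpha)))
    return r + gamma * v_soft

def hard_q_target(r, q_next, gamma=0.99):
    """Standard Q-learning target: y = r + gamma * max_a' Q(s', a')."""
    return r + gamma * q_next.max()

q_next = np.array([1.0, 1.5, 0.2])
print(soft_q_target(1.0, q_next))  # slightly below the hard target
print(hard_q_target(1.0, q_next))
```
As $\alpha \to 0$ the soft target approaches the hard one; a larger $\alpha$ pulls the target toward an average over actions instead of the max.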
# Policy Gradients and Entropy
I'm assuming you have a solid grasp of policy gradients if you're reading this article, so I'm gonna focus on how they usually aren't applied correctly in the max-entropy setting. PG methods are commonly augmented with an entropy term, as in the following example from the paper
$$
\mathbb{E}_{t, s,a} \Big[ \nabla_\theta \log\pi_\theta(a|s) \sum_{t' \geq t} r_{t'} - \alpha D_{KL}\big (\pi_\theta(\cdot|s) \;\Vert\; \pi(\cdot|s) \big) \Big]
$$
This example essentially tries to maximize reward-to-go with an entropy bonus for the *current* timestep only. Maximizing this objective technically isn't what we want, even if it's common practice. What we really want is to maximize a sum over all rewards and entropies that our agent experiences from now into the future.
# Soft Q-Learning = Policy Gradient
The first of two conclusions that this paper comes to is that Soft Q-Learning and the Policy Gradient have exact first-order equivalence. Using the value function and Boltzmann policy definitions from earlier, we can derive the gradient of $\mathbb{E}_{s,a} \big[ \frac{1}{2} \Vert Q_\theta(s,a) - y \Vert^2 \big]$. The paper is able to produce the following expression
$$
\mathbb{E}_{s,a} \Big[ \color{red}{-\alpha \nabla_\theta \log\pi_\theta(a|s) \Delta_{TD} + \alpha^2 \nabla_\theta D_{KL}\big( \pi_\theta(\cdot|s) \;\Vert\; \bar{\pi}(\cdot|s) \big)} + \color{blue}{\nabla_\theta \frac{1}{2} \Vert V_\theta(s) - \hat{V} \Vert^2} \Big]
$$
where $\Delta_{TD}$ is the discounted n-step TD error and $\hat{V}$ is the value regression target formed by $\Delta_{TD}$.
That's kind of a lot, but we can break it down pretty easily. The terms in red represent 1) the usual policy gradient and 2) an additional KL divergence gradient term. The red terms overall represent the gradient you get if you use a policy gradient algorithm with a KL divergence term as your entropy bonus (the actor loss in an actor-critic formulation). The term in blue is quite simply the gradient used to minimize the mean squared error between our current value estimates and our value targets (the critic loss in an actor-critic formulation).
Don't forget that we never explicitly tried to calculate these terms. They came about naturally as an effect of minimizing mean squared error of our Q function and a Boltzmann backup target.
# Soft Q-Learning and the Natural Policy Gradient
The next section of the paper details another connection between Soft Q-learning and policy gradient methods, specifically that damped Q-learning updates are exactly equivalent to natural policy gradient updates.
The natural policy gradient weights the policy gradient with the Fisher information matrix $\mathbb{E}_{s,a} \Big[ \big( \nabla_\theta \log\pi_\theta(a|s) \big)^T \big( \nabla_\theta \log\pi_\theta(a|s) \big) \Big]$. The paper shows that the natural policy gradient in the max-entropy setting is equivalent not to soft Q-learning by itself, but instead to a damped version. In this damped version, we calculate a backed-up Q value and then interpolate between it and the current Q value estimate (basically using Polyak averaging instead of running gradient descent on a mean squared error term).
Although not nearly as direct, this connection highlights how higher-order connections between soft Q-learning and policy gradient methods exist. Higher-order equalities between functions point to functions that are increasingly similar, so this connection really drives the point home that soft Q-learning is deceptively like the policy gradient methods we've been using all this time.
# Experimental Results
The paper authors decided to be nice to us and actually test the theory they derived on some Atari games.
They started out with testing whether or not the usual way of adding entropy bonuses to policy gradient methods is actually worse than the theoretical claims they had just made. As it turns out, using future entropy bonuses $\Big( \text{i.e. } \big( \sum r + \mathcal{H} \big) \Big)$ instead of the simpler, immediate entropy bonus $\Big( \text{i.e. } \big( \sum r \big) + \mathcal{H} \Big)$ results in either similar or superior performance. The below graphs show the results from the experiments, with the future entropy version in blue and the immediate entropy version in red.

They then tested how soft Q-learning compared to normal Q-learning. To make traditional DQN into soft Q-learning, they just modified the regression targets for the Q function. They used the normal target, a target with a KL divergence penalty, and a target with just an entropy bonus. They found that just the entropy bonus resulted in the most improvement, although both soft methods outperformed the "hard" DQN.

To round things out, they tested soft Q-learning and the policy gradient on the same Atari environments to see if they were equivalent in practice. After all, the math shows that their expectations are equivalent, but the variance of those expectations could be different. The experiments they ran make it seem like the two methods are pretty close to each other, with no method seeming largely superior.

# Conclusion and Future Work
Hopefully this made you reconsider what's really going on under the hood with Q-learning. Personally, it blew my mind that two seemingly disparate learning methods could boil down to the same expected update. The theoretical possibilities that this connection could lead to are also incredibly exciting.
Of course, this paper focuses its empirical testing just on environments with discrete action spaces. Since the Boltzmann policy is intractable to sample from in continuous action spaces, more advanced soft Q-learning algorithms (such as Soft Actor-Critic) are currently being pioneered to get accurate results in those more complicated settings as well.
----
<img src="../../../files/refinitiv.png" width="20%" style="vertical-align: top;">
# Data Library for Python
----
## Content layer - Pricing stream - Used as a real-time data cache
This notebook demonstrates how to retrieve level 1 streaming data (such as trades and quotes) either directly from the Refinitiv Data Platform or via Refinitiv Workspace or CodeBook. The example shows how to define a Pricing stream object, which automatically manages a streaming cache available for access at any time. Your application can then reach into this cache and pull out real-time snapshots as Pandas DataFrames by just calling a simple access method.
Using a Pricing stream object that way prevents your application from sending too many requests to the platform. This is particularly useful if your application needs to retrieve real-time snapshots at regular and short intervals.
#### Learn more
To learn more about the Refinitiv Data Library for Python please join the Refinitiv Developer Community. By [registering](https://developers.refinitiv.com/iam/register) and [logging in](https://developers.refinitiv.com/content/devportal/en_us/initCookie.html) to the Refinitiv Developer Community portal you will get free access to a number of learning materials, such as
[Quick Start guides](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-library-for-python/quick-start),
[Tutorials](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-library-for-python/learning),
[Documentation](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-library-for-python/docs)
and much more.
#### Getting Help and Support
If you have any questions regarding the API usage, please post them on
the [Refinitiv Data Q&A Forum](https://community.developers.refinitiv.com/spaces/321/index.html).
The Refinitiv Developer Community will be happy to help.
## Set the configuration file location
For ease of use, you have the option to set initialization parameters of the Refinitiv Data Library in the _refinitiv-data.config.json_ configuration file. This file must be located beside your notebook, in your user folder, or in a folder defined by the _RD_LIB_CONFIG_PATH_ environment variable. The _RD_LIB_CONFIG_PATH_ environment variable is the option used by this series of examples. The following code sets this environment variable.
```
import os
os.environ["RD_LIB_CONFIG_PATH"] = "../../../Configuration"
```
## Some Imports to start with
```
import refinitiv.data as rd
from refinitiv.data.content import pricing
from pandas import DataFrame
from IPython.display import display, clear_output
```
## Open the data session
The open_session() function creates and opens a session based on the information contained in the refinitiv-data.config.json configuration file. Please edit this file to set the session type and other parameters required for the session you want to open.
```
rd.open_session()
```
## Retrieve data
### Create and open a Pricing stream object
The Pricing stream object is created for a list of instruments and fields. The fields parameter is optional. If you omit it, the Pricing stream will retrieve all fields available for the requested instruments.
```
stream = rd.content.pricing.Definition(
universe = ['EUR=', 'GBP=', 'JPY=', 'CAD='],
fields = ['BID', 'ASK']
).get_stream()
```
The open method tells the Pricing stream object to subscribe to the streams of the requested instruments.
```
stream.open()
```
As soon as the open method returns, the stream object is ready to be used. Its internal cache is constantly kept updated with the latest streaming information received from Eikon / Refinitiv Workspace. All this happens behind the scenes, waiting for your application to pull out data from the cache.
### Extract snapshot data from the streaming cache
Once the stream is opened, you can use the get_snapshot method to pull data out of its internal cache. get_snapshot can be called any number of times. Because these calls return the latest received values, successive calls to get_snapshot may return different values. Returned DataFrames do not change in real time; get_snapshot must be called every time your application needs fresh values.
```
df = stream.get_snapshot()
display(df)
```
### Get a snapshot for a subset of instruments and fields
```
df = stream.get_snapshot(
universe = ['EUR=', 'GBP='],
fields = ['BID', 'ASK']
)
display(df)
```
### Other options to get values from the streaming cache
#### Direct access to real-time fields
```
print('GBP/BID:', stream['GBP=']['BID'])
print('EUR/BID:', stream['EUR=']['BID'])
```
#### Direct access to a streaming instrument
```
gbp = stream['GBP=']
print(gbp['BID'])
```
#### Iterate on fields
```
print('GBP=')
for field_name, field_value in stream['GBP=']:
print('\t' + field_name + ': ', field_value)
print('JPY=')
for field_name, field_value in stream['JPY=']:
print('\t' + field_name + ': ', field_value)
```
#### Iterate on streaming instruments and fields
```
for streaming_instrument in stream:
print(streaming_instrument.name)
for field_name, field_value in streaming_instrument:
print('\t' + field_name + ': ', field_value)
```
### Close the stream
```
stream.close()
```
Once close is called, the Pricing stream object stops updating its internal cache. The get_snapshot method can still be called, but after the close it will always return the same values.
### Invalid or un-licensed instruments
What happens if you request an invalid RIC or an instrument you are not entitled to?
Let's request a mixture of valid and invalid RICs
```
mixed = rd.content.pricing.Definition(
['EUR=', 'GBP=', 'JPY=', 'CAD=', 'BADRIC'],
fields=['BID', 'ASK']
).get_stream()
mixed.open()
mixed.get_snapshot()
```
You can check the Status of any instrument, so let's check the invalid one
```
display(mixed['BADRIC'].status)
```
As you will note, for an invalid instrument we get:
{'status': <StreamState.Closed: 1>, **'code': 'NotFound'**, 'message': '** The Record could not be found'}
However, if you are not licensed for the instrument you would see something like:
{'status': <StreamState.Closed: 1>, **'code': 'NotEntitled'**, 'message': 'A21: DACS User Profile denied access to vendor'}
**NOTE**: The exact wording of **message** can change over time - therefore, only use the **code** value for any programmatic decision making.
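Following that advice, a small helper that branches only on the **code** field might look like this (a sketch — it assumes `status` behaves like the mapping shown above):
```
def instrument_problem(stream_item):
    """Return a short diagnosis based only on the stable 'code' field."""
    status = getattr(stream_item, 'status', None) or {}
    code = status.get('code')
    if code == 'NotFound':
        return 'invalid RIC'
    if code == 'NotEntitled':
        return 'not licensed'
    return None  # no known problem

# Usage against the stream above: instrument_problem(mixed['BADRIC'])
```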
```
mixed.close()
```
## Close the session
```
rd.close_session()
```
# Start-to-Finish Example: `GiRaFFE_NRPy` 3D tests
### Author: Patrick Nelson
### Adapted from [Start-to-Finish Example: Head-On Black Hole Collision](../Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide.ipynb)
## This module implements a basic GRFFE code to evolve one-dimensional GRFFE waves.
### NRPy+ Source Code for this module:
* [GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Exact_Wald.py](../../edit/in_progress/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Exact_Wald.py) [\[**tutorial**\]](Tutorial-GiRaFFEfood_NRPy_Exact_Wald.ipynb) Generates Exact Wald initial data
* [GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Aligned_Rotator.py](../../edit/in_progress/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Aligned_Rotator.py) [\[**tutorial**\]](Tutorial-GiRaFFEfood_NRPy_Aligned_Rotator.ipynb) Generates Aligned Rotator initial data
* [GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests.py](../../edit/in_progress/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests.py) [\[**tutorial**\]](Tutorial-GiRaFFEfood_NRPy_1D_tests.ipynb) Generates Alfvén Wave initial data.
* [GiRaFFE_NRPy/Afield_flux.py](../../edit/in_progress/GiRaFFE_NRPy/Afield_flux.py) [\[**tutorial**\]](Tutorial-GiRaFFE_NRPy-Afield_flux.ipynb) Generates the expressions to find the flux term of the induction equation.
* [GiRaFFE_NRPy/GiRaFFE_NRPy_A2B.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_A2B.py) [\[**tutorial**\]](Tutorial-GiRaFFE_NRPy-Afield_flux.ipynb) Generates the driver to compute the magnetic field from the vector potential.
* [GiRaFFE_NRPy/GiRaFFE_NRPy_BCs.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_BCs.py) [\[**tutorial**\]](Tutorial-GiRaFFE_NRPy-BCs.ipynb) Generates the code to apply boundary conditions to the vector potential, scalar potential, and three-velocity.
* [GiRaFFE_NRPy/GiRaFFE_NRPy_C2P_P2C.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_C2P_P2C.py) [\[**tutorial**\]](Tutorial-GiRaFFE_NRPy-C2P_P2C.ipynb) Generates the conservative-to-primitive and primitive-to-conservative solvers.
* [GiRaFFE_NRPy/GiRaFFE_NRPy_Metric_Face_Values.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_Metric_Face_Values.py) [\[**tutorial**\]](Tutorial-GiRaFFE_NRPy-Metric_Face_Values.ipynb) Generates code to interpolate metric gridfunctions to cell faces.
* [GiRaFFE_NRPy/GiRaFFE_NRPy_PPM.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_PPM.py) [\[**tutorial**\]](Tutorial-GiRaFFE_NRPy-PPM.ipynb) Generates code to reconstruct primitive variables on cell faces.
* [GiRaFFE_NRPy/GiRaFFE_NRPy_Source_Terms.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_Source_Terms.py) [\[**tutorial**\]](Tutorial-GiRaFFE_NRPy-Source_Terms.ipynb) Generates the expressions to find the flux term of the Poynting flux evolution equation.
* [GiRaFFE_NRPy/Stilde_flux.py](../../edit/in_progress/GiRaFFE_NRPy/Stilde_flux.py) [\[**tutorial**\]](Tutorial-GiRaFFE_NRPy-Stilde_flux.ipynb) Generates the expressions to find the flux term of the Poynting flux evolution equation.
* [../GRFFE/equations.py](../../edit/GRFFE/equations.py) [\[**tutorial**\]](../Tutorial-GRFFE_Equations-Cartesian.ipynb) Generates code necessary to compute the source terms.
* [../GRHD/equations.py](../../edit/GRHD/equations.py) [\[**tutorial**\]](../Tutorial-GRHD_Equations-Cartesian.ipynb) Generates code necessary to compute the source terms.
Here we use NRPy+ to generate the C source code necessary to set up initial data for an Alfvén wave (see [the original GiRaFFE paper](https://arxiv.org/pdf/1704.00599.pdf)). Then we use it to generate the RHS expressions for [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html) time integration based on the [explicit Runge-Kutta fourth-order scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4).
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows
1. [Step 1](#setup): Set core NRPy+ parameters for numerical grids
1. [Step 2](#grffe): Output C code for GRFFE evolution
1. [Step 2.a](#mol): Output macros for Method of Lines timestepping
1. [Step 3](#gf_id): Import `GiRaFFEfood_NRPy` initial data modules
1. [Step 4](#cparams): Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h`
1. [Step 5](#mainc): `GiRaFFE_NRPy_standalone.c`: The Main C Code
<a id='setup'></a>
# Step 1: Set up core functions and parameters for solving GRFFE equations \[Back to [top](#toc)\]
$$\label{setup}$$
```
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
# First, we'll add the parent directory to the list of directories Python will check for modules.
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
# Step P1: Import needed NRPy+ core modules:
from outputC import outCfunction, lhrh, add_to_Cfunction_dict # NRPy+: Core C code output module
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("GiRaFFE_unstaggered_new_way_standalone_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
# !rm -r ScalarWaveCurvilinear_Playground_Ccodes
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(Ccodesdir)
cmd.mkdir(outdir)
# Step P5: Set timestepping algorithm (we adopt the Method of Lines)
REAL = "double" # Best to use double here.
default_CFL_FACTOR= 0.5 # (GETS OVERWRITTEN WHEN EXECUTED.) In pure axisymmetry (symmetry_axes = 2 below) 1.0 works fine. Otherwise 0.5 or lower.
# Step P6: Set the finite differencing order to 2.
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER",2)
# Step P7: Enable SIMD-optimized code?
# I.e., generate BSSN and Ricci C code kernels using SIMD-vectorized
# compiler intrinsics, which *greatly improve the code's performance*,
# though at the expense of making the C-code kernels less
# human-readable.
# * Important note in case you wish to modify the BSSN/Ricci kernels
# here by adding expressions containing transcendental functions
# (e.g., certain scalar fields):
# Note that SIMD-based transcendental function intrinsics are not
# supported by the default installation of gcc or clang (you will
# need to use e.g., the SLEEF library from sleef.org, for this
# purpose). The Intel compiler suite does support these intrinsics
# however without the need for external libraries.
enable_SIMD = False
# Step 1.b: Enable reference metric precomputation.
enable_rfm_precompute = False
if enable_SIMD and not enable_rfm_precompute:
print("ERROR: SIMD does not currently handle transcendental functions,\n")
print(" like those found in rfmstruct (rfm_precompute).\n")
print(" Therefore, enable_SIMD==True and enable_rfm_precompute==False\n")
print(" is not supported.\n")
sys.exit(1)
# Step 1.c: Enable "FD functions". In other words, all finite-difference stencils
# will be output as inlined static functions. This is essential for
# compiling highly complex FD kernels with using certain versions of GCC;
# GCC 10-ish will choke on BSSN FD kernels at high FD order, sometimes
# taking *hours* to compile. Unaffected GCC versions compile these kernels
# in seconds. FD functions do not slow the code performance, but do add
# another header file to the C source tree.
# With gcc 7.5.0, enable_FD_functions=True decreases performance by 10%
enable_FD_functions = False
thismodule = "Start_to_Finish-GiRaFFE_NRPy-3D_tests-unstaggered_new_way"
TINYDOUBLE = par.Cparameters("REAL", thismodule, "TINYDOUBLE", 1e-100)
import GiRaFFE_NRPy.GiRaFFE_NRPy_Main_Driver_new_way as md
# par.set_paramsvals_value("GiRaFFE_NRPy.GiRaFFE_NRPy_C2P_P2C::enforce_speed_limit_StildeD = False")
par.set_paramsvals_value("GiRaFFE_NRPy.GiRaFFE_NRPy_C2P_P2C::enforce_current_sheet_prescription = False")
```
<a id='grffe'></a>
# Step 2: Output C code for GRFFE evolution \[Back to [top](#toc)\]
$$\label{grffe}$$
We will first write the C codes needed for GRFFE evolution. We have already written a module to generate all these codes and call the functions in the appropriate order, so we will import that here. We will take the slightly unusual step of doing this before we generate the initial data functions because the main driver module will register all the gridfunctions we need. It will also generate functions that, in addition to their normal spot in the MoL timestepping, will need to be called during the initial data step to make sure all the variables are appropriately filled in.
<a id='mol'></a>
## Step 2.a: Output macros for Method of Lines timestepping \[Back to [top](#toc)\]
$$\label{mol}$$
Now, we generate the code to implement the method of lines using the fourth-order Runge-Kutta algorithm.
```
RK_method = "RK4"
# Step 3: Generate Runge-Kutta-based (RK-based) timestepping code.
# As described above the Table of Contents, this is a 3-step process:
# 3.A: Evaluate RHSs (RHS_string)
# 3.B: Apply boundary conditions (post_RHS_string, pt 1)
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method,
RHS_string = """
GiRaFFE_NRPy_RHSs(&params,auxevol_gfs,RK_INPUT_GFS,RK_OUTPUT_GFS);""",
post_RHS_string = """
GiRaFFE_NRPy_post_step(&params,xx,auxevol_gfs,RK_OUTPUT_GFS,n+1);\n""",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
```
<a id='gf_id'></a>
# Step 3: Import `GiRaFFEfood_NRPy` initial data modules \[Back to [top](#toc)\]
$$\label{gf_id}$$
With the preliminaries out of the way, we will write the C functions to set up initial data. There are two categories of initial data that must be set: the spacetime metric variables, and the GRFFE plasma variables. We will set up the spacetime first.
```
# There are several initial data routines we need to test. We'll control which one we use with a string option
initial_data = "ExactWald" # Valid options: "ExactWald", "AlignedRotator"
spacetime = "ShiftedKerrSchild" # Valid options: "ShiftedKerrSchild", "flat"
if spacetime == "ShiftedKerrSchild":
# Exact Wald is more complicated. We'll need the Shifted Kerr Schild metric in Cartesian coordinates.
import BSSN.ShiftedKerrSchild as sks
sks.ShiftedKerrSchild(True)
import reference_metric as rfm
par.set_parval_from_str("reference_metric::CoordSystem","Cartesian")
rfm.reference_metric()
# Use the Jacobian matrix to transform the vectors to Cartesian coordinates.
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
rfm.reference_metric()
Jac_dUCart_dDrfmUD,Jac_dUrfm_dDCartUD = rfm.compute_Jacobian_and_inverseJacobian_tofrom_Cartesian()
# Transform the coordinates of the Jacobian matrix from spherical to Cartesian:
par.set_parval_from_str("reference_metric::CoordSystem","Cartesian")
rfm.reference_metric()
tmpa,tmpb,tmpc = sp.symbols("tmpa,tmpb,tmpc")
for i in range(3):
for j in range(3):
Jac_dUCart_dDrfmUD[i][j] = Jac_dUCart_dDrfmUD[i][j].subs([(rfm.xx[0],tmpa),(rfm.xx[1],tmpb),(rfm.xx[2],tmpc)])
Jac_dUCart_dDrfmUD[i][j] = Jac_dUCart_dDrfmUD[i][j].subs([(tmpa,rfm.xxSph[0]),(tmpb,rfm.xxSph[1]),(tmpc,rfm.xxSph[2])])
Jac_dUrfm_dDCartUD[i][j] = Jac_dUrfm_dDCartUD[i][j].subs([(rfm.xx[0],tmpa),(rfm.xx[1],tmpb),(rfm.xx[2],tmpc)])
Jac_dUrfm_dDCartUD[i][j] = Jac_dUrfm_dDCartUD[i][j].subs([(tmpa,rfm.xxSph[0]),(tmpb,rfm.xxSph[1]),(tmpc,rfm.xxSph[2])])
gammaSphDD = ixp.zerorank2()
for i in range(3):
for j in range(3):
gammaSphDD[i][j] += sks.gammaSphDD[i][j].subs(sks.r,rfm.xxSph[0]).subs(sks.th,rfm.xxSph[1])
betaSphU = ixp.zerorank1()
for i in range(3):
betaSphU[i] += sks.betaSphU[i].subs(sks.r,rfm.xxSph[0]).subs(sks.th,rfm.xxSph[1])
alpha = sks.alphaSph.subs(sks.r,rfm.xxSph[0]).subs(sks.th,rfm.xxSph[1])
gammaDD = rfm.basis_transform_tensorDD_from_rfmbasis_to_Cartesian(Jac_dUrfm_dDCartUD, gammaSphDD)
unused_gammaUU,gammaDET = ixp.symm_matrix_inverter3x3(gammaDD)
sqrtgammaDET = sp.sqrt(gammaDET)
betaU = rfm.basis_transform_vectorD_from_rfmbasis_to_Cartesian(Jac_dUrfm_dDCartUD, betaSphU)
# Description and options for this initial data
desc = "Generate a spinning black hole with Shifted Kerr Schild metric."
loopopts_id ="AllPoints,Read_xxs"
elif spacetime == "flat":
gammaDD = ixp.zerorank2(DIM=3)
for i in range(3):
for j in range(3):
if i==j:
gammaDD[i][j] = sp.sympify(1) # else: leave as zero
betaU = ixp.zerorank1() # All should be 0
alpha = sp.sympify(1)
# Description and options for this initial data
desc = "Generate a flat spacetime metric."
loopopts_id ="AllPoints" # we don't need to read coordinates for flat spacetime.
name = "set_initial_spacetime_metric_data"
values_to_print = [
lhrh(lhs=gri.gfaccess("auxevol_gfs","gammaDD00"),rhs=gammaDD[0][0]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","gammaDD01"),rhs=gammaDD[0][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","gammaDD02"),rhs=gammaDD[0][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","gammaDD11"),rhs=gammaDD[1][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","gammaDD12"),rhs=gammaDD[1][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","gammaDD22"),rhs=gammaDD[2][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","betaU0"),rhs=betaU[0]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","betaU1"),rhs=betaU[1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","betaU2"),rhs=betaU[2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","alpha"),rhs=alpha)
]
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params ="const paramstruct *params,REAL *xx[3],REAL *auxevol_gfs",
body = fin.FD_outputC("returnstring",values_to_print,params="outCverbose=False"),
loopopts = loopopts_id)
```
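The Jacobian-based change of basis above follows the standard tensor transformation law $\gamma^{\rm Cart}_{ij} = \Lambda^k{}_i \Lambda^l{}_j \gamma^{\rm Sph}_{kl}$, with $\Lambda^k{}_i = \partial x_{\rm Sph}^k/\partial x_{\rm Cart}^i$. A small standalone sympy check of this law (independent of NRPy+'s `reference_metric` machinery), using flat space where the Cartesian result must be the identity:

```
import sympy as sp

# Flat 3-metric in spherical coordinates: gamma_Sph = diag(1, r^2, r^2 sin^2(th)).
x, y, z = sp.symbols('x y z', positive=True)
r  = sp.sqrt(x**2 + y**2 + z**2)
th = sp.acos(z/r)

# Jacobian Lambda^k_i = d(x_Sph^k)/d(x_Cart^i).
sph_coords = sp.Matrix([r, th, sp.atan2(y, x)])
Jac = sph_coords.jacobian(sp.Matrix([x, y, z]))

gammaSph  = sp.diag(1, r**2, r**2*sp.sin(th)**2)
# Tensor transformation law: gamma_Cart = Jac^T gamma_Sph Jac.
gammaCart = Jac.T * gammaSph * Jac

# For flat space, gamma_Cart must be the identity; verify at a sample point.
diff = (gammaCart - sp.eye(3)).subs({x: 0.5, y: 0.3, z: 0.7}).evalf()
maxdiff = max(abs(v) for v in diff)
print(maxdiff)  # ~ 0 to machine precision
```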
Now, we will write out the initial data function for the GRFFE variables.
```
import GiRaFFEfood_NRPy.GiRaFFEfood_NRPy as gid
if initial_data=="ExactWald":
gid.GiRaFFEfood_NRPy_generate_initial_data(ID_type = initial_data, stagger_enable = False,M=sks.M,KerrSchild_radial_shift=sks.r0,gammaDD=gammaDD,sqrtgammaDET=sqrtgammaDET)
desc = "Generate exact Wald initial test data for GiRaFFEfood_NRPy."
elif initial_data=="SplitMonopole":
gid.GiRaFFEfood_NRPy_generate_initial_data(ID_type = initial_data, stagger_enable = False,M=sks.M,a=sks.a,KerrSchild_radial_shift=sks.r0,alpha=alpha,betaU=betaSphU,gammaDD=gammaDD,sqrtgammaDET=sqrtgammaDET)
desc = "Generate Split Monopole initial test data for GiRaFFEfood_NRPy."
elif initial_data=="AlignedRotator":
gid.GiRaFFEfood_NRPy_generate_initial_data(ID_type = initial_data, stagger_enable = True)
desc = "Generate aligned rotator initial test data for GiRaFFEfood_NRPy."
else:
print("Unsupported Initial Data string "+initial_data+"! Supported ID: ExactWald, SplitMonopole, or AlignedRotator")
name = "initial_data"
values_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","AD0"),rhs=gid.AD[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","AD1"),rhs=gid.AD[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","AD2"),rhs=gid.AD[2]),\
lhrh(lhs=gri.gfaccess("auxevol_gfs","ValenciavU0"),rhs=gid.ValenciavU[0]),\
lhrh(lhs=gri.gfaccess("auxevol_gfs","ValenciavU1"),rhs=gid.ValenciavU[1]),\
lhrh(lhs=gri.gfaccess("auxevol_gfs","ValenciavU2"),rhs=gid.ValenciavU[2]),\
# lhrh(lhs=gri.gfaccess("auxevol_gfs","BU0"),rhs=gid.BU[0]),\
# lhrh(lhs=gri.gfaccess("auxevol_gfs","BU1"),rhs=gid.BU[1]),\
# lhrh(lhs=gri.gfaccess("auxevol_gfs","BU2"),rhs=gid.BU[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","psi6Phi"),rhs=sp.sympify(0))\
]
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params ="const paramstruct *params,REAL *xx[3],REAL *auxevol_gfs,REAL *out_gfs",
body = fin.FD_outputC("returnstring",values_to_print,params="outCverbose=False"),
loopopts ="AllPoints,Read_xxs")
```
<a id='cparams'></a>
# Step 4: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](#toc)\]
$$\label{cparams}$$
Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.
Then we output `free_parameters.h`, which sets initial data parameters as well as the grid domain parameters used below.
```
# Step 3.e: Output C codes needed for declaring and setting Cparameters; also set free_parameters.h
# Step 3.e.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.e.ii: Set free_parameters.h
with open(os.path.join(Ccodesdir,"free_parameters.h"),"w") as file:
file.write("""// Override parameter defaults with values based on command line arguments and NGHOSTS.
params.Nxx0 = atoi(argv[1]);
params.Nxx1 = atoi(argv[2]);
params.Nxx2 = atoi(argv[3]);
params.Nxx_plus_2NGHOSTS0 = params.Nxx0 + 2*NGHOSTS;
params.Nxx_plus_2NGHOSTS1 = params.Nxx1 + 2*NGHOSTS;
params.Nxx_plus_2NGHOSTS2 = params.Nxx2 + 2*NGHOSTS;
// Step 0d: Set up space and time coordinates
// Step 0d.i: Declare \Delta x^i=dxx{0,1,2} and invdxx{0,1,2}, as well as xxmin[3] and xxmax[3]:
const REAL xxmin[3] = {-1.5,-1.5,-1.5};
const REAL xxmax[3] = { 1.5, 1.5, 1.5};
params.dxx0 = (xxmax[0] - xxmin[0]) / ((REAL)params.Nxx0+1);
params.dxx1 = (xxmax[1] - xxmin[1]) / ((REAL)params.Nxx1+1);
params.dxx2 = (xxmax[2] - xxmin[2]) / ((REAL)params.Nxx2+1);
printf("dxx0,dxx1,dxx2 = %.5e,%.5e,%.5e\\n",params.dxx0,params.dxx1,params.dxx2);
params.invdx0 = 1.0 / params.dxx0;
params.invdx1 = 1.0 / params.dxx1;
params.invdx2 = 1.0 / params.dxx2;
const int poison_grids = 0;
// Standard GRFFE parameters:
params.GAMMA_SPEED_LIMIT = 2000.0;
params.diss_strength = 0.1;
""")
if initial_data=="ExactWald":
with open(os.path.join(Ccodesdir,"free_parameters.h"),"a") as file:
file.write("""params.r0 = 0.4;
params.a = 0.0;
""")
```
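The grid-spacing and timestep arithmetic written into `free_parameters.h` (together with the CFL condition applied in `main()` below) can be reproduced in a few lines of Python; the numbers mirror the defaults above, and `Nxx = 64` matches the run at the end of this notebook:

```
# Mirror of the grid setup from free_parameters.h (illustrative only).
Nxx = (64, 64, 64)                 # command-line resolution
xxmin, xxmax = -1.5, 1.5           # cubical domain, as hard-coded above
CFL_FACTOR, t_final = 0.5, 0.5     # values used in main()

# Note the (Nxx + 1) convention this notebook uses for the grid spacing:
dxx = [(xxmax - xxmin) / (N + 1) for N in Nxx]
dt  = CFL_FACTOR * min(dxx)        # CFL condition on the smallest spacing
Nt  = int(t_final / dt + 0.5)      # +0.5 so C-style truncation rounds to nearest

print(dxx[0], dt, Nt)  # Nt = 22 timesteps for this configuration
```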
<a id='bc_functs'></a>
# Step 5: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](#toc)\]
$$\label{bc_functs}$$
Next, we would apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb). For the moment, however, we use this module only because it writes the file `gridfunction_defines.h`.
```
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(Ccodesdir,enable_copy_of_static_Ccodes=False)
```
<a id='mainc'></a>
# Step 6: `GiRaFFE_NRPy_standalone.c`: The Main C Code \[Back to [top](#toc)\]
$$\label{mainc}$$
```
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER),
# and set the CFL_FACTOR (which can be overwritten at the command line)
with open(os.path.join(Ccodesdir,"GiRaFFE_NRPy_REAL__NGHOSTS__CFL_FACTOR.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(3)+"""
#define NGHOSTS_A2B """+str(2)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set the CFL Factor. Can be overwritten at command line.
REAL CFL_FACTOR = """+str(default_CFL_FACTOR)+";")
```
Here, we write the main function and add it to the C function dictionaries so that it can be correctly added to the make file.
```
#include "GiRaFFE_NRPy_REAL__NGHOSTS__CFL_FACTOR.h"
#include "declare_Cparameters_struct.h"
def add_to_Cfunction_dict_main__GiRaFFE_NRPy_3D_tests_unstaggered():
includes = ["NRPy_basic_defines.h", "GiRaFFE_main_defines.h", "NRPy_function_prototypes.h", "time.h", "set_initial_spacetime_metric_data.h", "initial_data.h"]
desc = """main() function:
Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
Step 1: Set up GRFFE initial data
Step 2: Evolve the initial data forward in time using the Method of Lines with the RK4 algorithm.
Step 3: Output relative error between numerical and exact solution.
Step 4: Free all allocated memory
"""
prefunc = """const int NSKIP_1D_OUTPUT = 1;
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
"""
c_type = "int"
name = "main"
params = "int argc, const char *argv[]"
body = """
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if(argc != 4 || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < NGHOSTS) {
printf("Error: Expected three command-line arguments: ./GiRaFFE_NRPy_standalone [Nx] [Ny] [Nz],\\n");
printf("where Nx is the number of grid points in the x direction, and so forth.\\n");
printf("Nx,Ny,Nz MUST BE larger than NGHOSTS (= %d)\\n",NGHOSTS);
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
#include "set_Cparameters-nopointer.h"
// ... and then set up the numerical grid structure in time:
const REAL t_final = 0.5;
const REAL CFL_FACTOR = 0.5; // Set the CFL Factor
// Step 0c: Allocate memory for gridfunctions
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *evol_gfs_exact = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *auxevol_gfs_exact = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// For debugging, it can be useful to poison the grids by setting everything to infinity (1.0/0.0) initially.
if(poison_grids) {
for(int ii=0;ii<NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot;ii++) {
y_n_gfs[ii] = 1.0/0.0;
y_nplus1_running_total_gfs[ii] = 1.0/0.0;
//k_odd_gfs[ii] = 1.0/0.0;
//k_even_gfs[ii] = 1.0/0.0;
diagnostic_output_gfs[ii] = 1.0/0.0;
evol_gfs_exact[ii] = 1.0/0.0;
}
for(int ii=0;ii<NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot;ii++) {
auxevol_gfs[ii] = 1.0/0.0;
auxevol_gfs_exact[ii] = 1.0/0.0;
}
}
// Step 0d: Set up coordinates: Set dx, and then dt based on dx_min and CFL condition
// This is probably already defined above, but just in case...
#ifndef MIN
#define MIN(A, B) ( ((A) < (B)) ? (A) : (B) )
#endif
REAL dt = CFL_FACTOR * MIN(dxx0,MIN(dxx1,dxx2)); // CFL condition
int Nt = (int)(t_final / dt + 0.5); // The number of points in time.
//Add 0.5 to account for C rounding down integers.
// Step 0e: Set up cell-centered Cartesian coordinate grids
REAL *xx[3];
xx[0] = (REAL *)malloc(sizeof(REAL)*Nxx_plus_2NGHOSTS0);
xx[1] = (REAL *)malloc(sizeof(REAL)*Nxx_plus_2NGHOSTS1);
xx[2] = (REAL *)malloc(sizeof(REAL)*Nxx_plus_2NGHOSTS2);
for(int j=0;j<Nxx_plus_2NGHOSTS0;j++) xx[0][j] = xxmin[0] + (j-NGHOSTS+1)*dxx0;
for(int j=0;j<Nxx_plus_2NGHOSTS1;j++) xx[1][j] = xxmin[1] + (j-NGHOSTS+1)*dxx1;
for(int j=0;j<Nxx_plus_2NGHOSTS2;j++) xx[2][j] = xxmin[2] + (j-NGHOSTS+1)*dxx2;
// Step 1: Set up initial data to be exact solution at time=0:
//REAL time;
set_initial_spacetime_metric_data(&params,xx,auxevol_gfs);
initial_data(&params,xx,auxevol_gfs,y_n_gfs);
// Fill in the remaining quantities
apply_bcs_potential(&params,y_n_gfs);
driver_A_to_B(&params,y_n_gfs,auxevol_gfs);
//override_BU_with_old_GiRaFFE(&params,auxevol_gfs,0);
GiRaFFE_NRPy_prims_to_cons(&params,auxevol_gfs,y_n_gfs);
apply_bcs_velocity(&params,auxevol_gfs);
// Extra steps, useful for debugging:
GiRaFFE_NRPy_cons_to_prims(&params,xx,auxevol_gfs,y_n_gfs);
//GiRaFFE_NRPy_prims_to_cons(&params,auxevol_gfs,y_n_gfs);
//GiRaFFE_NRPy_cons_to_prims(&params,xx,auxevol_gfs,y_n_gfs);
//GiRaFFE_NRPy_prims_to_cons(&params,auxevol_gfs,y_n_gfs);
//GiRaFFE_NRPy_cons_to_prims(&params,xx,auxevol_gfs,y_n_gfs);
for(int n=0;n<=Nt;n++) { // Main loop to progress forward in time.
//for(int n=0;n<=1;n++) { // Main loop to progress forward in time.
// Step 1a: Set current time to correct value & compute exact solution
//time = ((REAL)n)*dt;
/* Step 2: Validation: Output relative error between numerical and exact solution, */
if((n)%NSKIP_1D_OUTPUT ==0) {
// Step 2c: Output relative error between exact & numerical at center of grid.
const int i0mid=Nxx_plus_2NGHOSTS0/2;
const int i1mid=Nxx_plus_2NGHOSTS1/2;
const int i2mid=Nxx_plus_2NGHOSTS2/2;
char filename[100];
sprintf(filename,"out%d-%08d.txt",Nxx0,n);
FILE *out2D = fopen(filename, "w");
for(int i0=0;i0<Nxx_plus_2NGHOSTS0;i0++) {
const int idx = IDX3S(i0,i1mid,i2mid);
fprintf(out2D,"%.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e\\n",
xx[0][i0],
auxevol_gfs[IDX4ptS(BU0GF,idx)],auxevol_gfs[IDX4ptS(BU1GF,idx)],auxevol_gfs[IDX4ptS(BU2GF,idx)],
y_n_gfs[IDX4ptS(AD0GF,idx)],y_n_gfs[IDX4ptS(AD1GF,idx)],y_n_gfs[IDX4ptS(AD2GF,idx)],
y_n_gfs[IDX4ptS(STILDED0GF,idx)],y_n_gfs[IDX4ptS(STILDED1GF,idx)],y_n_gfs[IDX4ptS(STILDED2GF,idx)],
auxevol_gfs[IDX4ptS(VALENCIAVU0GF,idx)],auxevol_gfs[IDX4ptS(VALENCIAVU1GF,idx)],auxevol_gfs[IDX4ptS(VALENCIAVU2GF,idx)],
y_n_gfs[IDX4ptS(PSI6PHIGF,idx)]);
}
fclose(out2D);
set_initial_spacetime_metric_data(&params,xx,auxevol_gfs_exact);
initial_data(&params,xx,auxevol_gfs_exact,evol_gfs_exact);
// Fill in the remaining quantities
driver_A_to_B(&params,evol_gfs_exact,auxevol_gfs_exact);
GiRaFFE_NRPy_prims_to_cons(&params,auxevol_gfs_exact,evol_gfs_exact);
sprintf(filename,"out%d-%08d_exact.txt",Nxx0,n);
FILE *out2D_exact = fopen(filename, "w");
for(int i0=0;i0<Nxx_plus_2NGHOSTS0;i0++) {
const int idx = IDX3S(i0,i1mid,i2mid);
fprintf(out2D_exact,"%.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e\\n",
xx[0][i0],
auxevol_gfs_exact[IDX4ptS(BU0GF,idx)],auxevol_gfs_exact[IDX4ptS(BU1GF,idx)],auxevol_gfs_exact[IDX4ptS(BU2GF,idx)],
evol_gfs_exact[IDX4ptS(AD0GF,idx)],evol_gfs_exact[IDX4ptS(AD1GF,idx)],evol_gfs_exact[IDX4ptS(AD2GF,idx)],
evol_gfs_exact[IDX4ptS(STILDED0GF,idx)],evol_gfs_exact[IDX4ptS(STILDED1GF,idx)],evol_gfs_exact[IDX4ptS(STILDED2GF,idx)],
auxevol_gfs_exact[IDX4ptS(VALENCIAVU0GF,idx)],auxevol_gfs_exact[IDX4ptS(VALENCIAVU1GF,idx)],auxevol_gfs_exact[IDX4ptS(VALENCIAVU2GF,idx)],
evol_gfs_exact[IDX4ptS(PSI6PHIGF,idx)]);
}
fclose(out2D_exact);
}
// Step 3: Evolve the GRFFE initial data forward in time using the Method of Lines
// with the RK4 algorithm.
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
#include "MoLtimestepping/RK_MoL.h"
} // End main loop to progress forward in time.
// Step 4: Free all allocated memory
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
free(auxevol_gfs_exact);
free(evol_gfs_exact);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
"""
add_to_Cfunction_dict(
includes=includes,
desc=desc,
c_type=c_type, name=name, params=params,
prefunc = prefunc, body=body,
rel_path_to_Cparams=os.path.join("."), enableCparameters=False)
md.add_to_Cfunction_dict__AD_gauge_term_psi6Phi_flux_term(includes=["NRPy_basic_defines.h","GiRaFFE_basic_defines.h"])
md.add_to_Cfunction_dict__AD_gauge_term_psi6Phi_fin_diff(includes=["NRPy_basic_defines.h","GiRaFFE_basic_defines.h"])
md.add_to_Cfunction_dict__cons_to_prims(md.StildeD,md.BU,md.gammaDD,md.betaU,md.alpha,includes=["NRPy_basic_defines.h","GiRaFFE_basic_defines.h"])
md.add_to_Cfunction_dict__prims_to_cons(md.gammaDD,md.betaU,md.alpha,md.ValenciavU,md.BU,md.sqrt4pi,includes=["NRPy_basic_defines.h","GiRaFFE_basic_defines.h"])
import GiRaFFE_NRPy.GiRaFFE_NRPy_Source_Terms as source
source.add_to_Cfunction_dict__functions_for_StildeD_source_term(md.outCparams,md.gammaDD,md.betaU,md.alpha,
md.ValenciavU,md.BU,md.sqrt4pi,includes=["NRPy_basic_defines.h","GiRaFFE_basic_defines.h"])
import GiRaFFE_NRPy.Stilde_flux as Sf
Sf.add_to_Cfunction_dict__Stilde_flux(includes=["NRPy_basic_defines.h","GiRaFFE_basic_defines.h"], inputs_provided = True, alpha_face=md.alpha_face, gamma_faceDD=md.gamma_faceDD,
beta_faceU=md.beta_faceU, Valenciav_rU=md.Valenciav_rU, B_rU=md.B_rU,
Valenciav_lU=md.Valenciav_lU, B_lU=md.B_lU, sqrt4pi=md.sqrt4pi)
import GiRaFFE_NRPy.GiRaFFE_NRPy_Afield_flux_handwritten as Af
Af.add_to_Cfunction_dict__GiRaFFE_NRPy_Afield_flux(md.gammaDD, md.betaU, md.alpha, Ccodesdir)
import GiRaFFE_NRPy.GiRaFFE_NRPy_Metric_Face_Values as FCVAL
FCVAL.add_to_Cfunction_dict__GiRaFFE_NRPy_FCVAL(includes=["NRPy_basic_defines.h","GiRaFFE_basic_defines.h"])
import GiRaFFE_NRPy.GiRaFFE_NRPy_PPM as PPM
PPM.add_to_Cfunction_dict__GiRaFFE_NRPy_PPM(Ccodesdir)
import GiRaFFE_NRPy.GiRaFFE_NRPy_A2B as A2B
A2B.add_to_Cfunction_dict__GiRaFFE_NRPy_A2B(md.gammaDD,md.AD,md.BU,includes=["NRPy_basic_defines.h","GiRaFFE_basic_defines.h"])
import GiRaFFE_NRPy.GiRaFFE_NRPy_BCs as BC
BC.add_to_Cfunction_dict__GiRaFFE_NRPy_BCs()
md.add_to_Cfunction_dict__driver_function()
add_to_Cfunction_dict_main__GiRaFFE_NRPy_3D_tests_unstaggered()
```
Now, we register the remaining C functions and their contributions to `NRPy_basic_defines.h`, and then output `NRPy_basic_defines.h` and `NRPy_function_prototypes.h`.
```
import outputC as outC
outC.outputC_register_C_functions_and_NRPy_basic_defines() # #define M_PI, etc.
# Declare paramstruct, register set_Cparameters_to_default(),
# and output declare_Cparameters_struct.h and set_Cparameters[].h:
outC.NRPy_param_funcs_register_C_functions_and_NRPy_basic_defines(os.path.join(Ccodesdir))
gri.register_C_functions_and_NRPy_basic_defines(enable_griddata_struct=False, enable_bcstruct_in_griddata_struct=False,
enable_rfmstruct=False,
enable_MoL_gfs_struct=False,
extras_in_griddata_struct=None) # #define IDX3S(), etc.
fin.register_C_functions_and_NRPy_basic_defines(NGHOSTS_account_for_onezone_upwind=True,
enable_SIMD=enable_SIMD) # #define NGHOSTS, and UPWIND() macro if SIMD disabled
# Output functions for computing all finite-difference stencils.
# Must be called after defining all functions depending on FD stencils.
if enable_FD_functions:
fin.output_finite_difference_functions_h(path=Ccodesdir)
# Call this last: Set up NRPy_basic_defines.h and NRPy_function_prototypes.h.
outC.construct_NRPy_basic_defines_h(Ccodesdir, enable_SIMD=enable_SIMD)
with open(os.path.join(Ccodesdir,"GiRaFFE_basic_defines.h"),"w") as file:
file.write("""#define NGHOSTS_A2B """+str(2)+"\n"+"""extern int kronecker_delta[4][3];
extern int MAXFACE;
extern int NUL;
extern int MINFACE;
extern int VX,VY,VZ,BX,BY,BZ;
extern int NUM_RECONSTRUCT_GFS;
// Structure to track ghostzones for PPM:
typedef struct __gf_and_gz_struct__ {
REAL *gf;
int gz_lo[4],gz_hi[4];
} gf_and_gz_struct;
""")
with open(os.path.join(Ccodesdir,"GiRaFFE_main_defines.h"),"w") as file:
file.write("""#define NGHOSTS_A2B """+str(2)+"\n"+PPM.kronecker_code+"""const int VX=0,VY=1,VZ=2,BX=3,BY=4,BZ=5;
const int NUM_RECONSTRUCT_GFS = 6;
const int MAXFACE = -1;
const int NUL = +0;
const int MINFACE = +1;
// Structure to track ghostzones for PPM:
typedef struct __gf_and_gz_struct__ {
REAL *gf;
int gz_lo[4],gz_hi[4];
} gf_and_gz_struct;
""")
outC.construct_NRPy_function_prototypes_h(Ccodesdir)
cmd.new_C_compile(Ccodesdir, os.path.join("output", "GiRaFFE_NRPy_standalone"),
uses_free_parameters_h=True, compiler_opt_option="fast") # fastdebug or debug also supported
# !gcc -g -O2 -fopenmp GiRaFFE_standalone_Ccodes/GiRaFFE_NRPy_standalone.c -o GiRaFFE_NRPy_standalone -lm
# Change to output directory
os.chdir(outdir)
# Clean up existing output files
cmd.delete_existing_files("out*.txt")
cmd.delete_existing_files("out*.png")
# cmd.Execute(os.path.join(Ccodesdir,"output","GiRaFFE_NRPy_standalone"), "640 16 16", os.path.join(outdir,"out640.txt"))
cmd.Execute("GiRaFFE_NRPy_standalone", "64 64 64","out64.txt")
# cmd.Execute("GiRaFFE_NRPy_standalone", "239 15 15","out239.txt")
# !OMP_NUM_THREADS=1 valgrind --track-origins=yes -v ./GiRaFFE_NRPy_standalone 1280 32 32
# Return to root directory
os.chdir(os.path.join("../../"))
```
Now, we will load the data generated by the simulation and plot it in order to test for convergence.
```
import numpy as np
import matplotlib.pyplot as plt
Data_numer = np.loadtxt(os.path.join(Ccodesdir,"output","out64-00000020.txt"))
# Data_num_2 = np.loadtxt(os.path.join("GiRaFFE_standalone_Ccodes","output","out239-00000080.txt"))
# Data_old = np.loadtxt("/home/penelson/OldCactus/Cactus/exe/ABE-GiRaFFEfood_1D_AlfvenWave/giraffe-grmhd_primitives_bi.x.asc")
# Data_o_2 = np.loadtxt("/home/penelson/OldCactus/Cactus/exe/ABE-GiRaFFEfood_1D_AlfvenWave_2/giraffe-grmhd_primitives_bi.x.asc")
# Data_numer = Data_old[5000:5125,11:15] # The column range is chosen for compatibility with the plotting script.
# Data_num_2 = Data_o_2[19600:19845,11:15] # The column range is chosen for compatibility with the plotting script.
Data_exact = np.loadtxt(os.path.join(Ccodesdir,"output","out64-00000020_exact.txt"))
# Data_exa_2 = np.loadtxt(os.path.join("GiRaFFE_standalone_Ccodes","output","out239-00000080_exact.txt"))
predicted_order = 2.0
column = 3
plt.figure()
# # plt.plot(Data_exact[2:-2,0],np.log2(np.absolute((Data_numer[2:-2,column]-Data_exact[2:-2,column])/\
# # (Data_num_2[2:-2:2,column]-Data_exa_2[2:-2:2,column]))),'.')
plt.plot(Data_exact[:,0],Data_exact[:,column])
plt.plot(Data_exact[:,0],Data_numer[:,column],'.')
# plt.xlim(-0.0,1.0)
# # plt.ylim(-1.0,5.0)
# # plt.ylim(-0.0005,0.0005)
# plt.xlabel("x")
# plt.ylabel("BU2")
plt.show()
# # 0 1 2 3 4 5 6 7 8 9 10 11 12 13
# labels = ["x","BU0","BU1","BU2","AD0","AD1","AD2","StildeD0","StildeD1","StildeD2","ValenciavU0","ValenciavU1","ValenciavU2", "psi6Phi"]
# old_files = ["",
# "giraffe-grmhd_primitives_bi.x.asc","giraffe-grmhd_primitives_bi.x.asc","giraffe-grmhd_primitives_bi.x.asc",
# # "giraffe-em_ax.x.asc","giraffe-em_ay.x.asc","giraffe-em_az.x.asc",
# "cell_centered_Ai.txt","cell_centered_Ai.txt","cell_centered_Ai.txt",
# "giraffe-grmhd_conservatives.x.asc","giraffe-grmhd_conservatives.x.asc","giraffe-grmhd_conservatives.x.asc",
# "giraffe-grmhd_primitives_allbutbi.x.asc","giraffe-grmhd_primitives_allbutbi.x.asc","giraffe-grmhd_primitives_allbutbi.x.asc",
# "giraffe-em_psi6phi.x.asc"]
# column = 5
# column_old = [0,12,13,14,0,1,2,12,13,14,12,13,14,12]
# old_path = "/home/penelson/OldCactus/Cactus/exe/ABE-GiRaFFEfood_1D_AlfvenWave"
# new_path = os.path.join("GiRaFFE_standalone_Ccodes","output")
# data_old = np.loadtxt(os.path.join(old_path,old_files[column]))
# # data_old = data_old[250:375,:]# Select only the second timestep
# # data_old = data_old[125:250,:]# Select only the first timestep
# # data_old = data_old[0:125,:]# Select only the zeroth timestep
# data_new = np.loadtxt(os.path.join(new_path,"out119-00000001.txt"))
# deltaA_old = data_old[125:250,:] - data_old[0:125,:]
# data_new_t0 = np.loadtxt(os.path.join(new_path,"out119-00000000.txt"))
# deltaA_new = data_new[:,:] - data_new_t0[:,:]
# plt.figure()
# # plt.plot(data_new[3:-3,0],data_new[3:-3,column]-data_old[3:-3,column_old[column]])
# # plt.plot(data_new[:,0],data_new[:,column]-((3*np.sin(5*np.pi*data_new[:,0]/np.sqrt(1 - (-0.5)**2))/20 + 23/20)*(data_new[:,0]/2 + np.sqrt(1 - (-0.5)**2)/20 + np.absolute(data_new[:,0] + np.sqrt(1 - (-0.5)**2)/10)/2)*(-1e-100/2 + data_new[:,0]/2 - np.sqrt(1 - (-0.5)**2)/20 - np.absolute(-1e-100 + data_new[:,0] - np.sqrt(1 - (-0.5)**2)/10)/2)/((-1e-100 + data_new[:,0] - np.sqrt(1 - (-0.5)**2)/10)*(1e-100 + data_new[:,0] + np.sqrt(1 - (-0.5)**2)/10)) + 13*(data_new[:,0]/2 - np.sqrt(1 - (-0.5)**2)/20 + np.absolute(data_new[:,0] - np.sqrt(1 - (-0.5)**2)/10)/2)/(10*(1e-100 + data_new[:,0] - np.sqrt(1 - (-0.5)**2)/10)) + (-1e-100/2 + data_new[:,0]/2 + np.sqrt(1 - (-0.5)**2)/20 - np.absolute(-1e-100 + data_new[:,0] + np.sqrt(1 - (-0.5)**2)/10)/2)/(-1e-100 + data_new[:,0] + np.sqrt(1 - (-0.5)**2)/10))/np.sqrt(1 - (-0.5)**2))
# # plt.plot(data_new[1:,0]-(data_new[0,0]-data_new[1,0])/2.0,(data_new[0:-1,column]+data_new[1:,column])/2,'.',label="GiRaFFE_NRPy+injected BU")
# # plt.plot(data_new[1:,0]-(data_new[0,0]-data_new[1,0])/2.0,data_old[1:,column_old[column]],label="old GiRaFFE")
# # -(data_old[0,9]-data_old[1,9])/2.0
# # plt.plot(data_new[3:-3,0],deltaA_new[3:-3,column],'.')
# plt.plot(data_new[3:-3,0],deltaA_old[3:-3,column_old[column]]-deltaA_new[3:-3,column])
# # plt.xlim(-0.1,0.1)
# # plt.ylim(-0.2,0.2)
# plt.legend()
# plt.xlabel(labels[0])
# plt.ylabel(labels[column])
# plt.show()
# # print(np.argmin(deltaA_old[3:-3,column_old[column]]-deltaA_new[3:-3,column]))
```
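The commented-out convergence lines above compute $\log_2$ of the ratio of errors at two resolutions. When the fine grid halves the spacing, the observed convergence order is $p = \log_2(E_{\rm coarse}/E_{\rm fine})$; a standalone numpy sketch of this arithmetic with synthetic (not simulation) data:

```
import numpy as np

def observed_order(err_coarse, err_fine, refinement=2.0):
    # Observed convergence order from errors at two resolutions,
    # where the fine grid refines the spacing by `refinement`.
    return np.log(np.abs(err_coarse) / np.abs(err_fine)) / np.log(refinement)

# Synthetic errors for a second-order scheme, E(h) ~ C h^2:
h = 0.1
E_coarse = 3.0 * h**2
E_fine   = 3.0 * (h/2)**2
p = observed_order(E_coarse, E_fine)
print(p)  # -> 2.0
```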
This code will create an animation of the wave over time.
```
# import matplotlib.pyplot as plt
from matplotlib.pyplot import savefig
from IPython.display import HTML
import matplotlib.image as mgimg
import glob
import sys
from matplotlib import animation
cmd.delete_existing_files("out64-00*.png")
globby = glob.glob(os.path.join('GiRaFFE_standalone_Ccodes','output','out64-00*.txt'))
file_list = []
for x in sorted(globby):
file_list.append(x)
number_of_files = int(len(file_list)/2)
for timestep in range(number_of_files):
fig = plt.figure()
numer_filename = file_list[2*timestep]
exact_filename = file_list[2*timestep+1]
Numer = np.loadtxt(numer_filename)
Exact = np.loadtxt(exact_filename)
plt.title("Alfven Wave")
plt.xlabel("x")
plt.ylabel("BU2")
plt.xlim(-0.5,0.5)
plt.ylim(1.0,1.7)
plt.plot(Numer[3:-3,0],Numer[3:-3,3],'.',label="Numerical")
plt.plot(Exact[3:-3,0],Exact[3:-3,3],label="Exact")
plt.legend()
savefig(numer_filename+".png",dpi=150)
plt.close(fig)
sys.stdout.write("%c[2K" % 27)
sys.stdout.write("Processing file "+numer_filename+"\r")
sys.stdout.flush()
## VISUALIZATION ANIMATION, PART 2: Combine PNGs to generate movie ##
# https://stackoverflow.com/questions/14908576/how-to-remove-frame-from-matplotlib-pyplot-figure-vs-matplotlib-figure-frame
# https://stackoverflow.com/questions/23176161/animating-pngs-in-matplotlib-using-artistanimation
# !rm -f GiRaFFE_NRPy-1D_tests.mp4
cmd.delete_existing_files("GiRaFFE_NRPy-1D_tests.mp4")
fig = plt.figure(frameon=False)
ax = fig.add_axes([0, 0, 1, 1])
ax.axis('off')
myimages = []
for i in range(number_of_files):
img = mgimg.imread(file_list[2*i]+".png")
imgplot = plt.imshow(img)
myimages.append([imgplot])
ani = animation.ArtistAnimation(fig, myimages, interval=100, repeat_delay=1000)
plt.close()
ani.save('GiRaFFE_NRPy-1D_tests.mp4', fps=5,dpi=150)
```
```
%%HTML
<video width="480" height="360" controls>
<source src="GiRaFFE_NRPy-1D_tests.mp4" type="video/mp4">
</video>
```
```
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish-GiRaFFE_NRPy-3D_tests-unstaggered_new_way",location_of_template_file=os.path.join(".."))
```
```
import keras
import keras.backend as K
from keras.datasets import mnist
from keras.models import Sequential, Model, load_model
from keras.layers import Dense, Dropout, Activation, Flatten, Input, Lambda
from keras.layers import Conv2D, MaxPooling2D, AveragePooling2D, Conv1D, MaxPooling1D, LSTM, ConvLSTM2D, GRU, BatchNormalization, LocallyConnected2D, Permute, TimeDistributed, Bidirectional
from keras.layers import Concatenate, Reshape, Conv2DTranspose, Embedding, Multiply, Activation
from functools import partial
from collections import defaultdict
import os
import pickle
import numpy as np
import scipy.sparse as sp
from scipy import stats  # needed by r2() below for stats.linregress
import scipy.io as spio
import isolearn.io as isoio
import isolearn.keras as isol
import matplotlib.pyplot as plt
from sklearn import preprocessing
import pandas as pd
from sequence_logo_helper import dna_letter_at, plot_dna_logo
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
def contain_tf_gpu_mem_usage() :
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
set_session(sess)
contain_tf_gpu_mem_usage()
#optimus 5-prime functions
def test_data(df, model, test_seq, obs_col, output_col='pred'):
'''Predict mean ribosome load using model and test set UTRs'''
# Scale the test set mean ribosome load
scaler = preprocessing.StandardScaler()
scaler.fit(df[obs_col].values.reshape(-1,1))
# Make predictions
predictions = model.predict(test_seq).reshape(-1)
# Inverse scaled predicted mean ribosome load and return in a column labeled 'pred'
df.loc[:,output_col] = scaler.inverse_transform(predictions)
return df
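# Illustrative sanity check (synthetic data, not part of the pipeline): the
# StandardScaler round-trip used by test_data() above,
# inverse_transform(transform(x)), recovers x.
_scaler_check = preprocessing.StandardScaler()
_obs = np.array([[1.0], [2.0], [4.0]])
_scaler_check.fit(_obs)
assert np.allclose(_scaler_check.inverse_transform(_scaler_check.transform(_obs)), _obs)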
def one_hot_encode(df, col='utr', seq_len=50):
# Dictionary returning one-hot encoding of nucleotides.
nuc_d = {'a':[1,0,0,0],'c':[0,1,0,0],'g':[0,0,1,0],'t':[0,0,0,1], 'n':[0,0,0,0]}
# Create an empty matrix.
vectors=np.empty([len(df),seq_len,4])
# Iterate through UTRs and one-hot encode
for i,seq in enumerate(df[col].str[:seq_len]):
seq = seq.lower()
a = np.array([nuc_d[x] for x in seq])
vectors[i] = a
return vectors
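# Illustrative check of the one-hot scheme used by one_hot_encode() above:
# the sequence 'acgt' maps row-by-row onto the 4x4 identity matrix.
_demo_nuc_d = {'a':[1,0,0,0],'c':[0,1,0,0],'g':[0,0,1,0],'t':[0,0,0,1],'n':[0,0,0,0]}
_demo_vec = np.array([_demo_nuc_d[ch] for ch in 'acgt'])
assert _demo_vec.shape == (4, 4)
assert np.array_equal(_demo_vec, np.eye(4))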
def r2(x,y):
slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)
return r_value**2
#Train data
e_train = pd.read_csv("bottom5KIFuAUGTop5KIFuAUG.csv")
e_train.loc[:,'scaled_rl'] = preprocessing.StandardScaler().fit_transform(e_train.loc[:,'rl'].values.reshape(-1,1))
seq_e_train = one_hot_encode(e_train,seq_len=50)
x_train = seq_e_train
x_train = np.reshape(x_train, (x_train.shape[0], 1, x_train.shape[1], x_train.shape[2]))
y_train = np.array(e_train['scaled_rl'].values)
y_train = np.reshape(y_train, (y_train.shape[0],1))
print("x_train.shape = " + str(x_train.shape))
print("y_train.shape = " + str(y_train.shape))
#Load Predictor
predictor_path = 'optimusRetrainedMain.hdf5'
predictor = load_model(predictor_path)
predictor.trainable = False
predictor.compile(optimizer=keras.optimizers.SGD(lr=0.1), loss='mean_squared_error')
#Generate (original) predictions
pred_train = predictor.predict(x_train[:, 0, ...], batch_size=32)
###########################################
####################L2X####################
###########################################
from keras.callbacks import ModelCheckpoint
from keras.models import Model, Sequential
import numpy as np
import tensorflow as tf
from keras.layers import MaxPooling2D, Flatten, Conv2D, Input, GlobalMaxPooling2D, Multiply, Lambda, Embedding, Dense, Dropout, Activation
from keras.datasets import imdb
from keras import backend as K
from keras.engine.topology import Layer
# Define various Keras layers.
class Concatenate1D(Layer):
"""
Layer for concatenation.
"""
def __init__(self, **kwargs):
super(Concatenate1D, self).__init__(**kwargs)
def call(self, inputs):
input1, input2 = inputs
input1 = tf.expand_dims(input1, axis = -2) # [batchsize, 1, input1_dim]
dim1 = int(input2.get_shape()[1])
input1 = tf.tile(input1, [1, dim1, 1])
return tf.concat([input1, input2], axis = -1)
def compute_output_shape(self, input_shapes):
input_shape1, input_shape2 = input_shapes
input_shape = list(input_shape2)
input_shape[-1] = int(input_shape[-1]) + int(input_shape1[-1])
input_shape[-2] = int(input_shape[-2])
return tuple(input_shape)
class Concatenate2D(Layer):
"""
Layer for concatenation.
"""
def __init__(self, **kwargs):
super(Concatenate2D, self).__init__(**kwargs)
def call(self, inputs):
input1, input2 = inputs
input1 = tf.expand_dims(tf.expand_dims(input1, axis = -2), axis = -2) # [batchsize, 1, 1, input1_dim]
dim1 = int(input2.get_shape()[1])
dim2 = int(input2.get_shape()[2])
input1 = tf.tile(input1, [1, dim1, dim2, 1])
return tf.concat([input1, input2], axis = -1)
def compute_output_shape(self, input_shapes):
input_shape1, input_shape2 = input_shapes
input_shape = list(input_shape2)
input_shape[-1] = int(input_shape[-1]) + int(input_shape1[-1])
input_shape[-2] = int(input_shape[-2])
input_shape[-3] = int(input_shape[-3])
return tuple(input_shape)
class Sample_Concrete(Layer):
"""
Layer for sample Concrete / Gumbel-Softmax variables.
"""
def __init__(self, tau0, k, **kwargs):
self.tau0 = tau0
self.k = k
super(Sample_Concrete, self).__init__(**kwargs)
def call(self, logits):
# logits: [batch_size, d, 1]
logits_ = K.permute_dimensions(logits, (0,2,1))# [batch_size, 1, d]
d = int(logits_.get_shape()[2])
unif_shape = [batch_size, self.k, d]  # note: batch_size is taken from the enclosing (global) scope
uniform = K.random_uniform_variable(shape=unif_shape,
low = np.finfo(tf.float32.as_numpy_dtype).tiny,
high = 1.0)
gumbel = - K.log(-K.log(uniform))
noisy_logits = (gumbel + logits_)/self.tau0
samples = K.softmax(noisy_logits)
samples = K.max(samples, axis = 1)
logits = tf.reshape(logits,[-1, d])
threshold = tf.expand_dims(tf.nn.top_k(logits, self.k, sorted = True)[0][:,-1], -1)
discrete_logits = tf.cast(tf.greater_equal(logits,threshold),tf.float32)
output = K.in_train_phase(samples, discrete_logits)
return tf.expand_dims(output,-1)
def compute_output_shape(self, input_shape):
return input_shape
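# --- Illustration (not part of the original notebook): the Concrete /
# --- Gumbel-softmax trick used by Sample_Concrete above, in plain Python.
# --- At train time, Gumbel noise plus a temperature-tau softmax gives a soft,
# --- differentiable k-hot mask; at test time an exact hard top-k mask is used.
import math, random
random.seed(0)
_logits = [2.0, 0.5, -1.0, 1.5]  # hypothetical per-feature scores
_k, _tau = 2, 0.5
def _softmax(xs):
    m = max(xs)
    e = [math.exp(v - m) for v in xs]
    s = sum(e)
    return [v / s for v in e]
_samples = []
for _ in range(_k):  # one softmax sample per selected feature, as in call()
    _g = [-math.log(-math.log(random.random())) for _ in _logits]
    _samples.append(_softmax([(l + g) / _tau for l, g in zip(_logits, _g)]))
_relaxed_mask = [max(col) for col in zip(*_samples)]  # train-phase soft k-hot mask
_threshold = sorted(_logits)[-_k]
_hard_mask = [1.0 if l >= _threshold else 0.0 for l in _logits]  # test-phase hard mask
print(_hard_mask)  # [1.0, 0.0, 0.0, 1.0]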
def construct_gumbel_selector(X_ph, n_filters=32, n_dense_units=32):
"""
Build the L2X model for selection operator.
"""
first_layer = Conv2D(n_filters, (1, 7), padding='same', activation='relu', strides=1, name = 'conv1_gumbel')(X_ph)
# global info
net_new = GlobalMaxPooling2D(name = 'new_global_max_pooling1d_1')(first_layer)
global_info = Dense(n_dense_units, name = 'new_dense_1', activation='relu')(net_new)
# local info
net = Conv2D(n_filters, (1, 7), padding='same', activation='relu', strides=1, name = 'conv2_gumbel')(first_layer)
local_info = Conv2D(n_filters, (1, 7), padding='same', activation='relu', strides=1, name = 'conv3_gumbel')(net)
combined = Concatenate2D()([global_info,local_info])
net = Dropout(0.2, name = 'new_dropout_2')(combined)
net = Conv2D(n_filters, (1, 1), padding='same', activation='relu', strides=1, name = 'conv_last_gumbel')(net)
logits_T = Conv2D(1, (1, 1), padding='same', activation=None, strides=1, name = 'conv4_gumbel')(net)
return logits_T
def L2X(x_train, y_train, pred_train, x_val, y_val, pred_val, k=10, batch_size=32, epochs=5, hidden_dims=250):
"""
Generate scores on features on validation by L2X.
Train the L2X model with variational approaches
if train = True.
"""
Mean1D = Lambda(lambda x, k=k: K.sum(x, axis = 1) / float(k), output_shape=lambda x: [x[0],x[2]])
Mean2D = Lambda(lambda x, k=k: K.sum(x, axis = (1, 2)) / float(k), output_shape=lambda x: [x[0],x[3]])
print('Creating model...')
# P(S|X)
with tf.variable_scope('selection_model'):
X_ph = Input(shape=(x_train.shape[1], x_train.shape[2], x_train.shape[3]))
logits_T = construct_gumbel_selector(X_ph)
tau = 0.5
#Extra code: Flatten 2D
orig_logits_T = logits_T
logits_T = Lambda(lambda x: K.reshape(x, (K.shape(x)[0], x_train.shape[1] * x_train.shape[2], 1)))(logits_T)
T = Sample_Concrete(tau, k)(logits_T)
#Extra code: Inflate 2D
T = Lambda(lambda x: K.reshape(x, (K.shape(x)[0], x_train.shape[1], x_train.shape[2], 1)))(T)
# q(X_S)
with tf.variable_scope('prediction_model'):
#Same architecture as original predictor
net = Multiply()([X_ph, T])
net = Conv2D(activation="relu", padding='same', filters=120, kernel_size=(1, 8))(net)
net = Conv2D(activation="relu", padding='same', filters=120, kernel_size=(1, 8))(net)
net = Conv2D(activation="relu", padding='same', filters=120, kernel_size=(1, 8))(net)
net = Flatten()(net)
net = Dense(hidden_dims, activation='relu')(net)
net = Dropout(0.2)(net)
preds = Dense(pred_train.shape[1], activation='linear', name = 'new_dense')(net)
'''
#Default approximator
net = Mean2D(Multiply()([X_ph, T]))
net = Dense(hidden_dims)(net)
net = Dropout(0.2)(net)
net = Activation('relu')(net)
preds = Dense(pred_train.shape[1], activation='softmax', name = 'new_dense')(net)
'''
model = Model(inputs=X_ph, outputs=preds)
model.compile(loss='mean_squared_error', optimizer='rmsprop', metrics=['mean_squared_error'])
train_mse = np.mean((pred_train[:, 0] - y_train[:, 0])**2)
val_mse = np.mean((pred_val[:, 0] - y_val[:, 0])**2)
print('The train and validation mse of the original model is {} and {}'.format(train_mse, val_mse))
#print(model.summary())
'''
checkpoint = ModelCheckpoint("saved_models/l2x.hdf5", monitor='val_mean_squared_error', verbose=1, save_best_only=True, save_weights_only=True, mode='min')
model.fit(x_train, pred_train,
validation_data=(x_val, pred_val),
callbacks=[checkpoint],
epochs=epochs, batch_size=batch_size
)
'''
model.load_weights('saved_models/l2x.hdf5', by_name=True)
pred_model = Model([X_ph], [orig_logits_T, preds])
pred_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
pred_model.load_weights('saved_models/l2x.hdf5', by_name=True)
scores, q = pred_model.predict(x_val, verbose=1, batch_size=batch_size)
return scores, q
#Gradient saliency/backprop visualization
import matplotlib.collections as collections
import operator
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib.colors as colors
import matplotlib as mpl
from matplotlib.text import TextPath
from matplotlib.patches import PathPatch, Rectangle
from matplotlib.font_manager import FontProperties
from matplotlib import gridspec
from matplotlib.ticker import FormatStrFormatter
def plot_importance_scores(importance_scores, ref_seq, figsize=(12, 2), score_clip=None, sequence_template='', plot_start=0, plot_end=96) :
end_pos = ref_seq.find("#")
fig = plt.figure(figsize=figsize)
ax = plt.gca()
if score_clip is not None :
importance_scores = np.clip(np.copy(importance_scores), -score_clip, score_clip)
max_score = np.max(np.sum(importance_scores[:, :], axis=0)) + 0.01
for i in range(0, len(ref_seq)) :
mutability_score = np.sum(importance_scores[:, i])
dna_letter_at(ref_seq[i], i + 0.5, 0, mutability_score, ax)
plt.sca(ax)
plt.xlim((0, len(ref_seq)))
plt.ylim((0, max_score))
plt.axis('off')
plt.yticks([0.0, max_score], [0.0, max_score], fontsize=16)
for axis in fig.axes :
axis.get_xaxis().set_visible(False)
axis.get_yaxis().set_visible(False)
plt.tight_layout()
plt.show()
#Execute L2X benchmark on synthetic datasets
k = int(np.ceil(0.2 * 50))
batch_size = 32
hidden_dims = 40
epochs = 5
encoder = isol.OneHotEncoder(50)
score_clip = None
allFiles = ["optimus5_synthetic_random_insert_if_uorf_1_start_1_stop_variable_loc_512.csv",
"optimus5_synthetic_random_insert_if_uorf_1_start_2_stop_variable_loc_512.csv",
"optimus5_synthetic_random_insert_if_uorf_2_start_1_stop_variable_loc_512.csv",
"optimus5_synthetic_random_insert_if_uorf_2_start_2_stop_variable_loc_512.csv",
"optimus5_synthetic_examples_3.csv"]
for csv_to_open in allFiles :
#Load dataset for benchmarking
dataset_name = csv_to_open.replace(".csv", "")
benchmarkSet = pd.read_csv(csv_to_open)
seq_e_test = one_hot_encode(benchmarkSet, seq_len=50)
x_test = seq_e_test[:, None, ...]
print(x_test.shape)
pred_test = predictor.predict(x_test[:, 0, ...], batch_size=32)
y_test = pred_test
importance_scores_test, q_test = L2X(
x_train,
y_train,
pred_train,
x_test,
y_test,
pred_test,
k=k,
batch_size=batch_size,
epochs=epochs,
hidden_dims=hidden_dims
)
for plot_i in range(0, 3) :
print("Test sequence " + str(plot_i) + ":")
plot_dna_logo(x_test[plot_i, 0, :, :], sequence_template='N'*50, plot_sequence_template=True, figsize=(12, 1), plot_start=0, plot_end=50)
plot_importance_scores(np.maximum(importance_scores_test[plot_i, 0, :, :].T, 0.), encoder.decode(x_test[plot_i, 0, :, :]), figsize=(12, 1), score_clip=score_clip, sequence_template='N'*50, plot_start=0, plot_end=50)
#Save predicted importance scores
model_name = "l2x_" + dataset_name
np.save(model_name + "_importance_scores_test", importance_scores_test)
```
# Lab 2 Single Qubit Gates
Prerequisites
[Ch.1.3 Representing Qubit States](https://qiskit.org/textbook/ch-states/representing-qubit-states.html)
[Ch.1.4 Single Qubit Gates](https://qiskit.org/textbook/ch-states/single-qubit-gates.html)
Other relevant materials
[Grokking the Bloch Sphere](https://javafxpert.github.io/grok-bloch/)
```
import numpy as np
# Importing standard Qiskit libraries
from qiskit import QuantumCircuit, transpile, Aer, IBMQ, execute
from qiskit.tools.jupyter import *
from qiskit.visualization import *
from ibm_quantum_widgets import *
from qiskit.providers.aer import QasmSimulator
# Loading your IBM Quantum account(s)
provider = IBMQ.load_account()
backend = Aer.get_backend('statevector_simulator')
```
## Part 1 - Effect of Single-Qubit Gates on state |0>
### Goal
Create quantum circuits to apply various single qubit gates on state |0> and understand the change in state and phase of the qubit.
To see the effect of each of the gates, we will take a single circuit with 4 qubits, apply a different gate to each qubit, and plot the resulting state of each qubit on the Bloch sphere.
```
qc1 = QuantumCircuit(4)
# perform gate operations on individual qubits
qc1.x(0)
qc1.y(1)
qc1.z(2)
qc1.s(3)
# Draw circuit
qc1.draw()
# Plot Bloch sphere
out1 = execute(qc1,backend).result().get_statevector()
plot_bloch_multivector(out1)
```
|Effect on Qubit on application of Gate|Statevector|QSphere Plot|Statevector (Post Measurement)|
|-|-|-|-|
|Input State = $\begin{pmatrix}1+0j & 0+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0+0j & 1+0j\end{pmatrix}$<br>- qubit has probability 1 of being in state ‘1’<br><br>Post measurement, qubit state is ‘1’ with phase 0 | | | |
|Input State = $\begin{pmatrix}1+0j & 0+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0+0j & 0+1j\end{pmatrix}$<br>- qubit has probability 1 of being in state ‘1’<br><br>Post measurement, qubit state is ‘1’ with phase pi/2 | | | |
|Input State = $\begin{pmatrix}1+0j & 0+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}1+0j & 0+0j\end{pmatrix}$<br>- qubit has probability 1 of being in state ‘0’<br><br>Post measurement, qubit state is ‘0’ with phase 0 | | | |
|Input State = $\begin{pmatrix}1+0j & 0+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}1+0j & 0+0j\end{pmatrix}$<br>- qubit has probability 1 of being in state ‘0’<br><br>Post measurement, qubit state is ‘0’ with phase 0 | | | |
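The statevectors in the table can be checked by hand. A minimal sketch (not part of the lab) using plain 2×2 complex matrix arithmetic, with no Qiskit, reproduces the action of each gate on |0>:

```python
# Pauli and phase gates as 2x2 complex matrices
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]
S = [[1, 0], [0, 1j]]

def apply(gate, state):
    # matrix-vector product for a single-qubit state [amp0, amp1]
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

zero = [1, 0]              # |0>
print(apply(X, zero))      # [0, 1]  -> |1>, phase 0
print(apply(Y, zero))      # [0, 1j] -> |1>, phase pi/2
print(apply(Z, zero))      # [1, 0]  -> |0> unchanged
print(apply(S, zero))      # [1, 0]  -> |0> unchanged
```

The same `apply` helper works for any of the later input states as well.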
## Part 2 - Effect of Single-Qubit Gates on state |1>
### Goal
Create quantum circuits to apply various single qubit gates on state |1> and understand the change in state and phase of the qubit.
To see the effect of each of the gates, we will take a single circuit with 4 qubits, apply a different gate to each qubit, and plot the resulting state of each qubit on the Bloch sphere.
```
qc2 = QuantumCircuit(4)
# initialize qubits
qc2.x(range(4))
# perform gate operations on individual qubits
qc2.x(0)
qc2.y(1)
qc2.z(2)
qc2.s(3)
# Draw circuit
qc2.draw()
# Plot Bloch sphere
out2 = execute(qc2,backend).result().get_statevector()
plot_bloch_multivector(out2)
```
|Effect on Qubit on application of Gate|Statevector|QSphere Plot|Statevector (Post Measurement)|
|-|-|-|-|
|Input State = $\begin{pmatrix}0+0j & 1+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}1+0j & 0+0j\end{pmatrix}$<br>- qubit has probability 1 of being in state ‘0’<br><br>Post measurement, qubit state is ‘0’ with phase 0 | | | |
|Input State = $\begin{pmatrix}0+0j & 1+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0-1j & 0+0j\end{pmatrix}$<br>- qubit has probability 1 of being in state ‘0’<br><br>Post measurement, qubit state is ‘0’ with phase 3pi/2 | | | |
|Input State = $\begin{pmatrix}0+0j & 1+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0+0j & -1+0j\end{pmatrix}$<br>- qubit has probability 1 of being in state ‘1’<br><br>Post measurement, qubit state is ‘1’ with phase pi | | | |
|Input State = $\begin{pmatrix}0+0j & 1+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0+0j & 0+1j\end{pmatrix}$<br>- qubit has probability 1 of being in state ‘1’<br><br>Post measurement, qubit state is ‘1’ with phase pi/2 | | | |
## Part 3 - Effect of Single-Qubit Gates on state |+>
### Goal
Create quantum circuits to apply various single qubit gates on state |+> and understand the change in state and phase of the qubit.
To see the effect of each of the gates, we will take a single circuit with 4 qubits, apply a different gate to each qubit, and plot the resulting state of each qubit on the Bloch sphere.
```
qc3 = QuantumCircuit(4)
# initialize qubits
qc3.h(range(4))
# perform gate operations on individual qubits
qc3.x(0)
qc3.y(1)
qc3.z(2)
qc3.s(3)
# Draw circuit
qc3.draw()
# Plot Bloch sphere
out3 = execute(qc3,backend).result().get_statevector()
plot_bloch_multivector(out3)
```
|Effect on Qubit on application of Gate|Statevector|QSphere Plot|Probability (Histogram)|
|-|-|-|-|
|Input State = $\begin{pmatrix}0.707+0j & 0.707+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & 0.707+0j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & 0.707+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0-0.707j & 0+0.707j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & 0.707+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & -0.707+0j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & 0.707+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & 0+0.707j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
## Part 4 - Effect of Single-Qubit Gates on state |->
### Goal
Create quantum circuits to apply various single qubit gates on state |-> and understand the change in state and phase of the qubit.
To see the effect of each of the gates, we will take a single circuit with 4 qubits, apply a different gate to each qubit, and plot the resulting state of each qubit on the Bloch sphere.
```
qc4 = QuantumCircuit(4)
# initialize qubits
qc4.x(range(4))
qc4.h(range(4))
# perform gate operations on individual qubits
qc4.x(0)
qc4.y(1)
qc4.z(2)
qc4.s(3)
# Draw circuit
qc4.draw()
# Plot Bloch sphere
out4 = execute(qc4,backend).result().get_statevector()
plot_bloch_multivector(out4)
```
|Effect on Qubit on application of Gate|Statevector|QSphere Plot|Probability (Histogram)|
|-|-|-|-|
|Input State = $\begin{pmatrix}0.707+0j & -0.707+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}-0.707+0j & 0.707+0j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & -0.707+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0+0.707j & 0+0.707j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & -0.707+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & 0.707+0j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & -0.707+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & 0-0.707j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
## Part 5 - Effect of Single-Qubit Gates on state |i>
### Goal
Create quantum circuits to apply various single qubit gates on state |i> and understand the change in state and phase of the qubit.
To see the effect of each of the gates, we will take a single circuit with 4 qubits, apply a different gate to each qubit, and plot the resulting state of each qubit on the Bloch sphere.
```
qc5 = QuantumCircuit(4)
# initialize qubits
qc5.h(range(4))
qc5.s(range(4))
# perform gate operations on individual qubits
qc5.x(0)
qc5.y(1)
qc5.z(2)
qc5.s(3)
# Draw circuit
qc5.draw()
# Plot Bloch sphere
out5 = execute(qc5,backend).result().get_statevector()
plot_bloch_multivector(out5)
```
|Effect on Qubit on application of Gate|Statevector|QSphere Plot|Probability (Histogram)|
|-|-|-|-|
|Input State = $\begin{pmatrix}0.707+0j & 0+0.707j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0+0.707j & 0.707+0j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & 0+0.707j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & 0+0.707j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & 0+0.707j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & 0-0.707j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & 0+0.707j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & -0.707+0j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
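As a cross-check, the initialization used above (H then S on |0>) can be reproduced with plain matrix arithmetic (a sketch, independent of Qiskit) to confirm it yields |i> = (0.707, 0.707j):

```python
import math

inv = 1 / math.sqrt(2)
H = [[inv, inv], [inv, -inv]]   # Hadamard gate
S = [[1, 0], [0, 1j]]           # phase gate

def apply(gate, state):
    # matrix-vector product for a single-qubit state [amp0, amp1]
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

state = apply(S, apply(H, [1, 0]))   # S . H |0>
print(state)                          # [0.7071..., 0.7071...j], i.e. |i>
```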
## Part 6 - Effect of Single-Qubit Gates on state |-i>
### Goal
Create quantum circuits to apply various single qubit gates on state |-i> and understand the change in state and phase of the qubit.
To see the effect of each of the gates, we will take a single circuit with 4 qubits, apply a different gate to each qubit, and plot the resulting state of each qubit on the Bloch sphere.
```
qc6 = QuantumCircuit(4)
# initialize qubits
qc6.x(range(4))
qc6.h(range(4))
qc6.s(range(4))
# perform gate operations on individual qubits
qc6.x(0)
qc6.y(1)
qc6.z(2)
qc6.s(3)
# Draw circuit
qc6.draw()
# Plot Bloch sphere
out6 = execute(qc6,backend).result().get_statevector()
plot_bloch_multivector(out6)
```
|Effect on Qubit on application of Gate|Statevector|QSphere Plot|Probability (Histogram)|
|-|-|-|-|
|Input State = $\begin{pmatrix}0.707+0j & 0-0.707j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0-0.707j & 0.707+0j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & 0-0.707j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}-0.707+0j & 0+0.707j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & 0-0.707j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & 0+0.707j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & 0-0.707j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & 0.707+0j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
```
import qiskit
qiskit.__qiskit_version__
```
# PyStan: Golf case study
Source: https://mc-stan.org/users/documentation/case-studies/golf.html
```
import pystan
import numpy as np
import pandas as pd
from scipy.stats import norm
import requests
from lxml import html
from io import StringIO
from matplotlib import pyplot as plt
```
Aux functions for visualization
```
def stanplot_postetior_hist(stan_sample, params):
'''This function takes a PyStan posterior sample object and a tuple of parameter names, and plots a posterior distribution histogram for each named parameter'''
post_sample_params = {}
for p in params:
post_sample_params[p] = stan_sample.extract(p)[p]
fig, panes = plt.subplots(1,len(params))
fig.suptitle('Posterior Dist of Params')
for p,w in zip(params, panes):
w.hist(post_sample_params[p])
w.set_title(p)
fig.show()
def stanplot_posterior_lineplot(x, y, stan_sample, params, f, sample_size=100, alpha=0.05, color='green'):
'''Posterior dist line plot
params:
x: x-axis values from actual data used for training
y: y-axis values from actual data used for training
stan_sample: a fitted PyStan sample object
params: list of parameter names required for calculating the posterior curve
f: a function that describes the model. Should take `x` and `*params` as inputs and return a list (or list-coercible object) that will be used for plotting the sampled curves
sample_size: how many curves to draw from the posterior dist
alpha: transparency of drawn curves (from pyplot, default=0.05)
color: color of drawn curves (from pyplot. default='green')
'''
tmp = stan_sample.stan_args
total_samples = (tmp[0]['iter'] - tmp[0]['warmup']) * len(tmp)
sample_rows = np.random.choice(a=total_samples, size=sample_size, replace=False)
sampled_param_array = np.array(list(stan_sample.extract(params).values()))[:, sample_rows]
_ = plt.plot(x, y)
for param_tuple in zip(*sampled_param_array):
plt.plot(x, f(x, *param_tuple), color=color, alpha=alpha)
def sigmoid_linear_curve(x, a, b):
return 1 / (1 + np.exp(-1 * (a + b * x)))
def trig_curve(x, sigma, r=(1.68/2)/12, R=(4.25/2)/12):
return 2 * norm.cdf(np.arcsin((R - r) / x) / sigma) - 1
def overshot_curve(x, sigma_distance, sigma_angle, r=(1.68/2)/12, R=(4.25/2)/12, overshot=1., distance_tolerance=3.):
p_angle = 2 * norm.cdf(np.arcsin((R - r) / x) / sigma_angle) - 1
p_upper = norm.cdf((distance_tolerance - overshot) / ((x + overshot) * sigma_distance))
p_lower = norm.cdf((-1 * overshot) / ((x + overshot) * sigma_distance))
return p_angle * (p_upper - p_lower)
```
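The geometry behind `trig_curve` can be sanity-checked without Stan: a putt drops when the angular error, modelled as Normal with standard deviation sigma, stays within ±arcsin((R − r)/x). A stdlib-only sketch, using an illustrative sigma of roughly 1.5 degrees (not a fitted value):

```python
import math

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

r = (1.68 / 2) / 12   # golf ball radius, feet
R = (4.25 / 2) / 12   # hole radius, feet

def p_make(x, sigma):
    """P(success) when the angle error ~ N(0, sigma) must lie in +/- arcsin((R-r)/x)."""
    return 2 * norm_cdf(math.asin((R - r) / x) / sigma) - 1

sigma = 0.026  # ~1.5 degrees, an illustrative value
print(round(p_make(2, sigma), 3), round(p_make(15, sigma), 3))
```

As expected, short putts are near-certain and the probability drops off quickly with distance.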
## Data
Scrape webpage
```
url = 'https://statmodeling.stat.columbia.edu/2019/03/21/new-golf-putting-data-and-a-new-golf-putting-model'
xpath = '/html/body/div/div[3]/div/div[1]/div[3]/div[2]/pre[1]'
header = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
r = requests.get(url, headers=header)
```
Parse HTML to string
```
html_table = html.fromstring(r.text).xpath(xpath)[0]
```
Read the data into a Pandas DF
```
with StringIO(html_table.text) as f:
df = pd.read_csv(f, sep = ' ')
df.head()
```
And finally add some columns
```
df['p'] = df['y'] / df['n']
df['sd'] = np.sqrt(df['p'] * (1 - df['p']) / df['n'])
stan_data = {'x': df['x'], 'y': df['y'], 'n': df['n'], 'N': df.shape[0]}
```
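The `sd` column above is the usual binomial standard error of the observed make-rate p = y/n (assuming independent putts). For example, with hypothetical counts of 1346 makes out of 1443 attempts in one distance bin:

```python
import math

y, n = 1346, 1443                 # hypothetical makes / attempts for one distance
p = y / n
sd = math.sqrt(p * (1 - p) / n)   # binomial standard error of p
print(round(p, 3), round(sd, 4))
```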
### Plot data
```
#_ = df.plot(x='x', y='p')
plt.plot(df['x'], df['p'])
plt.fill_between(x=df['x'], y1=df['p'] - 2 * df['sd'], y2=df['p'] + 2 * df['sd'], alpha=0.3)
plt.show()
```
## Models
### Logistic model
```
stan_logistic = pystan.StanModel(file='./logistic.stan')
post_sample_logistic = stan_logistic.sampling(data=stan_data)
print(post_sample_logistic)
stanplot_postetior_hist(post_sample_logistic, ('a', 'b'))
stanplot_posterior_lineplot(df['x'], df['p'], post_sample_logistic, ('a', 'b'), sigmoid_linear_curve)
```
### Simple trigonometric model
```
stan_trig = pystan.StanModel(file='./trig.stan')
stan_data.update({'r': (1.68/2)/12, 'R': (4.25/2)/12})
post_sample_trig = stan_trig.sampling(data=stan_data)
print(post_sample_trig)
stanplot_postetior_hist(post_sample_trig, ('sigma', 'sigma_degrees'))
stanplot_posterior_lineplot(df['x'], df['p'], post_sample_trig, ('sigma',), trig_curve)
```
### Augmented trigonometric model
```
stan_overshot = pystan.StanModel(file='./trig_overshot.stan')
stan_data.update({'overshot': 1., 'distance_tolerance': 3.})
post_sample_overshot = stan_overshot.sampling(data=stan_data)
print(post_sample_overshot)
stanplot_postetior_hist(post_sample_overshot, ('sigma_distance', 'sigma_angle', 'sigma_y'))
stanplot_posterior_lineplot(
x=df['x'],
y=df['p'],
stan_sample=post_sample_overshot,
params=('sigma_distance', 'sigma_angle'),
f=overshot_curve
)
```
# #1 Discovering Butterfree - Feature Set Basics
Welcome to **Discovering Butterfree** tutorial series!
This first tutorial will cover some basics of the Butterfree library, and you'll learn how to create your first feature set :rocket: :rocket:
Before diving into the tutorial make sure you have a basic understanding of these main data concepts: **features**, **feature sets** and the **"Feature Store Architecture"**, you can read more about this [here]().
## Library Basics:
Butterfree's main objective is to make feature engineering easy. The library provides a high-level API for declarative feature definitions. Behind these abstractions, Butterfree is essentially an **ETL (Extract - Transform - Load)** framework, and this is reflected in the organization of the project.
### Extract
`from butterfree.extract import ...`
Module with the entities responsible for extracting data into the pipeline. The module provides the following tools:
* `readers`: data connectors. Currently Butterfree provides readers for files, tables registered in Spark Hive metastore, and Kafka topics.
* `pre_processing`: a utility tool for making some transformations or re-arrange the structure of the reader's input data before the feature engineering.
* `source`: a composition of `readers`. The entity responsible for merging datasets coming from the defined readers into a single dataframe input for the `Transform` stage.
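As a rough analogy (this is not Butterfree's API), the reader/source stage amounts to registering datasets and merging them with one declared query. A stdlib-only sketch with `sqlite3` standing in for the Spark session:

```python
import sqlite3

# register two "readers" as in-memory tables
con = sqlite3.connect(":memory:")
con.execute("create table listing_events (id int, region_id int, rent real)")
con.execute("create table region (id int, city text)")
con.executemany("insert into listing_events values (?, ?, ?)",
                [(1, 10, 900.0), (2, 20, 700.0)])
con.executemany("insert into region values (?, ?)",
                [(10, "Saffron City"), (20, "Pewter City")])

# the "source" step: one query joining every registered reader
query = """
select listing_events.*, region.city
from listing_events join region on listing_events.region_id = region.id
"""
print(con.execute(query).fetchall())
# -> [(1, 10, 900.0, 'Saffron City'), (2, 20, 700.0, 'Pewter City')]
```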
### Transform
`from butterfree.transform import ...`
The main module of the library, responsible for feature engineering, in other words, all the transformations on the data. The module provides the following main tools:
* `features`: the entity that defines what a feature is. Holds a transformation and metadata about the feature.
* `transformations`: provides a set of components for transforming the data, with the possibility to use Spark native functions, aggregations, SQL expressions and others.
* `feature_set`: an entity that defines a feature set. Holds features and the metadata around it.
### Load
`from butterfree.load import ...`
The module is responsible for saving the data in some data storage. The module provides the following tools:
* `writers`: provide connections to data sources to write data. Currently Butterfree provides ways to save data on S3 registered as tables Spark Hive metastore and to Cassandra DB.
* `sink`: a composition of writers. The entity responsible for triggering the writing jobs on a set of defined writers.
### Pipelines
Pipelines are responsible for integrating all other modules (`extract`, `transform`, `load`) in order to define complete ETL jobs from source data to data storage destination.
`from butterfree.pipelines import ...`
* `feature_set_pipeline`: defines an ETL pipeline for creating feature sets.
## Example:
Simulating the following scenario:
- We want to create a feature set with features about houses for rent (listings).
- We are interested in houses only for the **Kanto** region.
We have two sets of data:
- Table: `listing_events`. Table with data about events of house listings.
- File: `region.json`. Static file with data about the cities and regions.
Our desire is to have result dataset with the following schema:
| id | timestamp | rent | rent_over_area | bedrooms | bathrooms | area | bedrooms_over_area | bathrooms_over_area | latitude | longitude | h3 | city | region
| - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| int | timestamp | float | float | int | int | float | float | float | double | double | string | string | string |
For more information about H3 geohash click [here](https://h3geo.org/docs/)
The following code blocks will show how to generate this feature set using Butterfree library:
```
# setup spark
from pyspark import SparkContext, SparkConf
from pyspark.sql import session
conf = SparkConf().set('spark.driver.host','127.0.0.1')
sc = SparkContext(conf=conf)
spark = session.SparkSession(sc)
# fix working dir
import pathlib
import os
path = os.path.join(pathlib.Path().absolute(), '../..')
os.chdir(path)
# butterfree spark client
from butterfree.clients import SparkClient
spark_client = SparkClient()
```
### Showing test data
```
listing_events_df = spark.read.json(f"{path}/examples/data/listing_events.json")
listing_events_df.createOrReplaceTempView("listing_events") # creating listing_events table
print(">>> listing_events table:")
listing_events_df.toPandas()
print(">>> region.json file:")
spark.read.json(f"{path}/examples/data/region.json").toPandas()
```
### Extract
- For the extract part, we need the `Source` entity and the `FileReader` and `TableReader` for the data we have.
- We need to declare a query with the rule for joining the results of the readers too.
- As proposed in the problem we can filter the region dataset to get only **Kanto** region.
```
from butterfree.extract import Source
from butterfree.extract.readers import FileReader, TableReader
from butterfree.extract.pre_processing import filter
readers = [
TableReader(id="listing_events", table="listing_events",),
FileReader(id="region", path=f"{path}/examples/data/region.json", format="json",).with_(
transformer=filter, condition="region == 'Kanto'"
),
]
query = """
select
listing_events.*,
region.city,
region.lat,
region.lng,
region.region
from
listing_events
join region
on listing_events.region_id = region.id
"""
source = Source(readers=readers, query=query)
# showing source result
source_df = source.construct(spark_client)
source_df.toPandas()
```
### Transform
- At the transform part, a set of `Feature` objects is declared.
- An Instance of `FeatureSet` is used to hold the features.
- A `FeatureSet` can only be created when it is possible to define a unique tuple formed by key columns and a time reference. This is an **architectural requirement** for the data, so at least one `KeyFeature` and one `TimestampFeature` are needed.
- Every `Feature` needs a unique name, a description, and a data-type definition.
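The uniqueness requirement can be checked directly on raw rows. A small stand-alone sketch (hypothetical data, not Butterfree code) of what it means for (key, timestamp) to form a unique tuple:

```python
from collections import Counter

# hypothetical house-listing rows
rows = [
    {"id": 1, "timestamp": "2020-01-01", "rent": 900.0},
    {"id": 1, "timestamp": "2020-02-01", "rent": 950.0},
    {"id": 2, "timestamp": "2020-01-01", "rent": 700.0},
]

# each (key, timestamp) pair must identify exactly one row
counts = Counter((r["id"], r["timestamp"]) for r in rows)
duplicates = [pair for pair, n in counts.items() if n > 1]
assert not duplicates, f"non-unique (key, timestamp) pairs: {duplicates}"
print("unique key + timestamp tuple: ok")
```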
```
from butterfree.transform import FeatureSet
from butterfree.transform.features import Feature, KeyFeature, TimestampFeature
from butterfree.transform.transformations import SQLExpressionTransform
from butterfree.transform.transformations.h3_transform import H3HashTransform
from butterfree.constants import DataType
keys = [
KeyFeature(
name="id",
description="Unique identifier code for houses.",
dtype=DataType.BIGINT,
)
]
# from_ms = True because the data originally is not in a Timestamp format.
ts_feature = TimestampFeature(from_ms=True)
features = [
Feature(
name="rent",
description="Rent value by month described in the listing.",
dtype=DataType.FLOAT,
),
Feature(
name="rent_over_area",
description="Rent value by month divided by the area of the house.",
transformation=SQLExpressionTransform("rent / area"),
dtype=DataType.FLOAT,
),
Feature(
name="bedrooms",
description="Number of bedrooms of the house.",
dtype=DataType.INTEGER,
),
Feature(
name="bathrooms",
description="Number of bathrooms of the house.",
dtype=DataType.INTEGER,
),
Feature(
name="area",
description="Area of the house, in squared meters.",
dtype=DataType.FLOAT,
),
Feature(
name="bedrooms_over_area",
description="Number of bedrooms divided by the area.",
transformation=SQLExpressionTransform("bedrooms / area"),
dtype=DataType.FLOAT,
),
Feature(
name="bathrooms_over_area",
description="Number of bathrooms divided by the area.",
transformation=SQLExpressionTransform("bathrooms / area"),
dtype=DataType.FLOAT,
),
Feature(
name="latitude",
description="House location latitude.",
from_column="lat", # arg from_column is needed when changing column name
dtype=DataType.DOUBLE,
),
Feature(
name="longitude",
description="House location longitude.",
from_column="lng",
dtype=DataType.DOUBLE,
),
Feature(
name="h3",
description="H3 geospatial index hash.",
transformation=H3HashTransform(
h3_resolutions=[10], lat_column="latitude", lng_column="longitude",
),
dtype=DataType.STRING,
),
Feature(name="city", description="House location city.", dtype=DataType.STRING,),
Feature(
name="region",
description="House location region.",
dtype=DataType.STRING,
),
]
feature_set = FeatureSet(
name="house_listings",
entity="house", # entity: to which "business context" this feature set belongs
description="Features describing a house listing.",
keys=keys,
timestamp=ts_feature,
features=features,
)
# showing feature set result
feature_set_df = feature_set.construct(source_df, spark_client)
feature_set_df.toPandas()
```
### Load
- For the load part we need `Writer` instances and a `Sink`.
- Writers define where to load the data.
- The `Sink` takes the transformed data (feature set) and triggers the load to all the defined writers.
- `debug_mode` will create a temporary view instead of trying to write in a real data store.
```
from butterfree.load.writers import (
HistoricalFeatureStoreWriter,
OnlineFeatureStoreWriter,
)
from butterfree.load import Sink
writers = [HistoricalFeatureStoreWriter(debug_mode=True), OnlineFeatureStoreWriter(debug_mode=True)]
sink = Sink(writers=writers)
```
## Pipeline
- The `Pipeline` entity wraps all the other defined elements.
- The `run` command triggers the execution of the pipeline, end-to-end.
```
from butterfree.pipelines import FeatureSetPipeline
pipeline = FeatureSetPipeline(source=source, feature_set=feature_set, sink=sink)
result_df = pipeline.run()
```
### Showing the results
```
print(">>> Historical Feature house_listings feature set table:")
spark.table("historical_feature_store__house_listings").orderBy(
"id", "timestamp"
).toPandas()
print(">>> Online Feature house_listings feature set table:")
spark.table("online_feature_store__house_listings").orderBy("id", "timestamp").toPandas()
```
- We can see that all the desired features were created with very little code
- The **historical feature set** holds all the data, and we can see that it is partitioned by year, month and day (columns added in the `HistoricalFeatureStoreWriter`)
- In the **online feature set** there is only the latest data for each id
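The online-store behavior described above (keep only the most recent row per id) can be sketched with plain Python; the rows below are made up and only mirror the column names of this feature set:

```python
# Made-up (id, timestamp, rent) rows.
rows = [
    (1, 100, 1500.0),
    (1, 200, 1600.0),
    (2, 150, 900.0),
]

latest = {}
for house_id, ts, rent in rows:
    # keep only the record with the greatest timestamp for each id
    if house_id not in latest or ts > latest[house_id][0]:
        latest[house_id] = (ts, rent)

print(latest)  # {1: (200, 1600.0), 2: (150, 900.0)}
```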
## Stacking
### References:
[Kaggle ensembling guide](https://mlwave.com/kaggle-ensembling-guide/)
<p></p>
[Introduction to Ensembling/Stacking in Python](https://www.kaggle.com/arthurtok/introduction-to-ensembling-stacking-in-python)
#### 5-fold stacking

#### stacking network

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
# Plotting helper adapted from the scikit-learn documentation
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
plt.figure(figsize=(10, 6))  # adjust figure size
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
# Class to extend the Sklearn classifier
class SklearnHelper(object):
def __init__(self, clf, seed=0, params=None, seed_flag=False):
params = dict(params or {})  # copy so the caller's dict is not mutated
if seed_flag:
params['random_state'] = seed
self.clf = clf(**params)
def train(self, x_train, y_train):
self.clf.fit(x_train, y_train)
def predict(self, x):
return self.clf.predict(x)
def fit(self,x,y):
return self.clf.fit(x,y)
def feature_importances(self,x,y):
print(self.clf.fit(x,y).feature_importances_)
return self.clf.fit(x,y).feature_importances_
#Out-of-Fold Predictions
def get_oof(clf, x_train, y_train, x_test):
oof_train = np.zeros((ntrain,))
oof_test = np.zeros((ntest,))
oof_test_skf = np.empty((NFOLDS, ntest))
for i, (train_index, test_index) in enumerate(kf): # kf yields one (train_index, test_index) pair per fold
x_tr = x_train[train_index]
y_tr = y_train[train_index]
x_te = x_train[test_index]
clf.train(x_tr, y_tr)
oof_train[test_index] = clf.predict(x_te) # partial index from x_train
oof_test_skf[i, :] = clf.predict(x_test) # Row(n-Fold), Column(predict value)
#oof_test[:] = oof_test_skf.mean(axis=0) #predict value average by column, then output 1-row, ntest columns
#oof_test[:] = pd.DataFrame(oof_test_skf).mode(axis=0)[0]
#oof_test[:] = np.median(oof_test_skf, axis=0)
oof_test[:] = np.mean(oof_test_skf, axis=0)
return oof_train.reshape(-1, 1), oof_test.reshape(-1, 1) #make sure return n-rows, 1-column shape.
```
### Load Dataset
```
train = pd.read_csv('input/train.csv', encoding = "utf-8", dtype = {'type': np.int32})
test = pd.read_csv('input/test.csv', encoding = "utf-8")
# Remove the demo type-4 rows so they don't interfere with the modeling
train = train[train['type']!=4]
from sklearn.model_selection import train_test_split
X = train[['花瓣寬度','花瓣長度','花萼寬度','花萼長度']]
y = train['type']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=100)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
test_std = sc.transform(test[['花瓣寬度','花瓣長度','花萼寬度','花萼長度']])
```
### Model Build
```
# sklearn.cross_validation was removed in scikit-learn 0.20; use model_selection
# and materialize the splits so `enumerate(kf)` in get_oof still works.
from sklearn.model_selection import KFold
NFOLDS = 5 # set folds for out-of-fold prediction
SEED = 0 # for reproducibility
ntrain = X_train_std.shape[0] # X.shape[0]
ntest = test_std.shape[0] # test.shape[0]
kf = list(KFold(n_splits=NFOLDS, shuffle=True, random_state=SEED).split(np.arange(ntrain)))
# Put in our parameters for said classifiers
# Decision Tree
dt_params = {
'criterion':'gini',
'max_depth':5
}
# KNN
knn_params = {
'n_neighbors':5
}
# Random Forest parameters
rf_params = {
'n_jobs': -1,
'n_estimators': 500,
'criterion': 'gini',
'max_depth': 4,
#'min_samples_leaf': 2,
'warm_start': True,
'oob_score': True,
'verbose': 0
}
# Extra Trees Parameters
et_params = {
'n_jobs': -1,
'n_estimators': 800,
'max_depth': 6,
'min_samples_leaf': 2,
'verbose': 0
}
# AdaBoost parameters
ada_params = {
'n_estimators': 800,
'learning_rate' : 0.75
}
# Gradient Boosting parameters
gb_params = {
'n_estimators': 500,
'max_depth': 5,
'min_samples_leaf': 2,
'verbose': 0
}
# Support Vector Classifier parameters
svc_params = {
'kernel' : 'linear',
'C' : 1.0,
'probability': True
}
# Support Vector Classifier parameters
svcr_params = {
'kernel' : 'rbf',
'C' : 1.0,
'probability': True
}
# Bagging Classifier
bag_params = {
'n_estimators' : 500,
'oob_score': True
}
#XGBoost Classifier
xgbc_params = {
'n_estimators': 500,
'max_depth': 4,
'learning_rate': 0.05,
'nthread': -1
}
#Linear Discriminant Analysis
lda_params = {}
#Quadratic Discriminant Analysis
qda1_params = {
'reg_param': 0.8,
'tol': 0.00001
}
#Quadratic Discriminant Analysis
qda2_params = {
'reg_param': 0.6,
'tol': 0.0001
}
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier, ExtraTreesClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
dt = SklearnHelper(clf=DecisionTreeClassifier, seed=SEED, params=dt_params, seed_flag=True)
knn = SklearnHelper(clf=KNeighborsClassifier, seed=SEED, params=knn_params)
rf = SklearnHelper(clf=RandomForestClassifier, seed=SEED, params=rf_params, seed_flag=True)
et = SklearnHelper(clf=ExtraTreesClassifier, seed=SEED, params=et_params, seed_flag=True)
ada = SklearnHelper(clf=AdaBoostClassifier, seed=SEED, params=ada_params, seed_flag=True)
gb = SklearnHelper(clf=GradientBoostingClassifier, seed=SEED, params=gb_params, seed_flag=True)
svc = SklearnHelper(clf=SVC, seed=SEED, params=svc_params, seed_flag=True)
svcr = SklearnHelper(clf=SVC, seed=SEED, params=svcr_params, seed_flag=True)
bag = SklearnHelper(clf=BaggingClassifier, seed=SEED, params=bag_params, seed_flag=True)
xgbc = SklearnHelper(clf=XGBClassifier, seed=SEED, params=xgbc_params)
lda = SklearnHelper(clf=LinearDiscriminantAnalysis, seed=SEED, params=lda_params)
qda1 = SklearnHelper(clf=QuadraticDiscriminantAnalysis, seed=SEED, params=qda1_params)
qda2 = SklearnHelper(clf=QuadraticDiscriminantAnalysis, seed=SEED, params=qda2_params)
# Create NumPy arrays of the train, test and target dataframes to feed into our models
y_train = y_train.ravel()
#y.ravel()
#x_train = X.values # Creates an array of the train data
#x_test = test.values # Creats an array of the test data
#STD dataset:
x_train = X_train_std
x_test = test_std
# Create our OOF train and test predictions. These base results will be used as new features
dt_oof_train, dt_oof_test = get_oof(dt, x_train, y_train, x_test) # Decision Tree
knn_oof_train, knn_oof_test = get_oof(knn, x_train, y_train, x_test) # KNeighbors
rf_oof_train, rf_oof_test = get_oof(rf, x_train, y_train, x_test) # Random Forest
et_oof_train, et_oof_test = get_oof(et, x_train, y_train, x_test) # Extra Trees
ada_oof_train, ada_oof_test = get_oof(ada, x_train, y_train, x_test) # AdaBoost
gb_oof_train, gb_oof_test = get_oof(gb, x_train, y_train, x_test) # Gradient Boost
svc_oof_train, svc_oof_test = get_oof(svc, x_train, y_train, x_test) # SVM-l
svcr_oof_train, svcr_oof_test = get_oof(svcr, x_train, y_train, x_test) # SVM-r
bag_oof_train, bag_oof_test = get_oof(bag, x_train, y_train, x_test) # Bagging
xgbc_oof_train, xgbc_oof_test = get_oof(xgbc, x_train, y_train, x_test) # XGBoost
lda_oof_train, lda_oof_test = get_oof(lda, x_train, y_train, x_test) # Linear Discriminant Analysis
qda1_oof_train, qda1_oof_test = get_oof(qda1, x_train, y_train, x_test) # Quadratic Discriminant Analysis
qda2_oof_train, qda2_oof_test = get_oof(qda2, x_train, y_train, x_test) # Quadratic Discriminant Analysis
dt_features = dt.feature_importances(x_train,y_train)
##knn_features = knn.feature_importances(x_train,y_train)
rf_features = rf.feature_importances(x_train,y_train)
et_features = et.feature_importances(x_train, y_train)
ada_features = ada.feature_importances(x_train, y_train)
gb_features = gb.feature_importances(x_train,y_train)
##svc_features = svc.feature_importances(x_train,y_train)
##svcr_features = svcr.feature_importances(x_train,y_train)
##bag_features = bag.feature_importances(x_train,y_train)
xgbc_features = xgbc.feature_importances(x_train,y_train)
##lda_features = lda.feature_importances(x_train,y_train)
##qda1_features = qda1.feature_importances(x_train,y_train)
##qda2_features = qda2.feature_importances(x_train,y_train)
cols = X.columns.values
# Create a dataframe with features
feature_dataframe = pd.DataFrame( {'features': cols,
'Decision Tree': dt_features,
'Random Forest': rf_features,
'Extra Trees': et_features,
'AdaBoost': ada_features,
'Gradient Boost': gb_features,
'XGBoost': xgbc_features
})
# Create a new column containing the row-wise mean of the importances
feature_dataframe['mean'] = feature_dataframe.drop(columns='features').mean(axis=1)
feature_dataframe
```
### First-Level Summary
```
#First-level output as new features
base_predictions_train = pd.DataFrame({
'DecisionTree': dt_oof_train.ravel(),
'KNeighbors': knn_oof_train.ravel(),
'RandomForest': rf_oof_train.ravel(),
'ExtraTrees': et_oof_train.ravel(),
'AdaBoost': ada_oof_train.ravel(),
'GradientBoost': gb_oof_train.ravel(),
'SVM-l': svc_oof_train.ravel(),
'SVM-r': svcr_oof_train.ravel(),
'Bagging': bag_oof_train.ravel(),
'XGBoost': xgbc_oof_train.ravel(),
'LDA': lda_oof_train.ravel(),
'QDA-1': qda1_oof_train.ravel(),
'QDA-2': qda2_oof_train.ravel(),
'type': y_train
})
base_predictions_train.head()
x_train = np.concatenate(( #dt_oof_train,
knn_oof_train,
rf_oof_train,
et_oof_train,
ada_oof_train,
gb_oof_train,
svc_oof_train,
#svcr_oof_train,
bag_oof_train,
xgbc_oof_train,
lda_oof_train,
#qda1_oof_train,
qda2_oof_train
), axis=1)
x_test = np.concatenate(( #dt_oof_test,
knn_oof_test,
rf_oof_test,
et_oof_test,
ada_oof_test,
gb_oof_test,
svc_oof_test,
#svcr_oof_test,
bag_oof_test,
xgbc_oof_test,
lda_oof_test,
#qda1_oof_test,
qda2_oof_test
), axis=1)
```
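The concatenation above stacks the `(n, 1)` OOF columns side by side to form the level-2 feature matrix; a minimal shape check with made-up arrays:

```python
import numpy as np

# two fake OOF prediction columns, each shaped (n_samples, 1)
a = np.zeros((4, 1))
b = np.ones((4, 1))

stacked = np.concatenate((a, b), axis=1)
print(stacked.shape)  # (4, 2) — one column per first-level model
```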
### Second Level Summary
### Level-2 XGBoost
```
#Second level learning model
import xgboost as xgb
l2_gbm = xgb.XGBClassifier(
learning_rate = 0.05,
n_estimators= 2000,
max_depth= 4,
#min_child_weight= 2,
gamma=0.9,
subsample=0.8,
colsample_bytree=0.8,
#scale_pos_weight=1,
objective= 'binary:logistic',
nthread= -1
).fit(x_train, y_train)
#level-2 CV: x_train, y_train
from sklearn import metrics
print(metrics.classification_report(y_train, l2_gbm.predict(x_train)))
from sklearn.model_selection import KFold
cv = KFold(n_splits=5, random_state=None, shuffle=True)
estimator = l2_gbm
plot_learning_curve(estimator, "level2 - XGBoost", x_train, y_train, cv=cv, train_sizes=np.linspace(0.2, 1.0, 8))
#level2 - XGB
l2_gbm_pred = l2_gbm.predict(x_test)
metrics.precision_recall_fscore_support(y_train, l2_gbm.predict(x_train), average='weighted')
print(l2_gbm_pred)
```
### Level-2 Linear Discriminant Analysis
```
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
l2_lda = LinearDiscriminantAnalysis()
l2_lda.fit(x_train, y_train)
print(metrics.classification_report(y_train, l2_lda.predict(x_train)))
from sklearn.model_selection import KFold
cv = KFold(n_splits=5, random_state=None, shuffle=True)
estimator = l2_lda
#plot_learning_curve(estimator, "lv2 Linear Discriminant Analysis", x_train, y_train, cv=cv, train_sizes=np.linspace(0.2, 1.0, 8))
#level2 - LDA
l2_lda_pred = l2_lda.predict(x_test)
metrics.precision_recall_fscore_support(y_train, l2_lda.predict(x_train), average='weighted')
print(l2_lda_pred)
```
### Level-2 Quadratic Discriminant Analysis
```
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
l2_qda = QuadraticDiscriminantAnalysis(reg_param=0.01, tol=0.001)
l2_qda.fit(x_train, y_train)
print(metrics.classification_report(y_train, l2_qda.predict(x_train)))
from sklearn.model_selection import KFold
cv = KFold(n_splits=5, random_state=None, shuffle=True)
estimator = l2_qda
plot_learning_curve(estimator, "Quadratic Discriminant Analysis", x_train, y_train, cv=cv, train_sizes=np.linspace(0.2, 1.0, 8))
#level2 - QDA
l2_qda_pred = l2_qda.predict(x_test)
metrics.precision_recall_fscore_support(y_train, l2_qda.predict(x_train), average='weighted')
print(l2_qda_pred)
```
```
import numpy as np
import json
from PIL import Image, ImageDraw
import os
import cv2
import pandas as pd
from tqdm import tqdm
import shutil
import random
import matplotlib.pyplot as plt
%matplotlib inline
from procrustes import procrustes
from sklearn.decomposition import PCA
import sys
sys.path.append('../inference/')
from face_detector import FaceDetector
# this face detector is taken from here
# https://github.com/TropComplique/FaceBoxes-tensorflow
# (facial keypoints detector will be trained to work well with this detector)
```
The purpose of this script is to explore the images and annotations of the CelebA dataset.
It also cleans CelebA and converts the annotations into JSON format.
```
IMAGES_DIR = '/home/gpu2/hdd/dan/CelebA/img_celeba.7z/out/'
ANNOTATIONS_PATH = '/home/gpu2/hdd/dan/CelebA/list_landmarks_celeba.txt'
SPLIT_PATH = '/home/gpu2/hdd/dan/CelebA/list_eval_partition.txt'
```
# Read data
```
# collect paths to all images
all_paths = []
for name in tqdm(os.listdir(IMAGES_DIR)):
all_paths.append(os.path.join(IMAGES_DIR, name))
metadata = pd.DataFrame(all_paths, columns=['full_path'])
# strip root folder
metadata['name'] = metadata.full_path.apply(lambda x: os.path.relpath(x, IMAGES_DIR))
# number of images is taken from the official website
assert len(metadata) == 202599
# see all unique endings
metadata.name.apply(lambda x: x.split('.')[-1]).unique()
```
### Detect a face on each image
```
# load faceboxes detector
face_detector = FaceDetector('../inference/model-step-240000.pb', visible_device_list='0')
detections = []
for p in tqdm(metadata.full_path):
image = cv2.imread(p)
image = image[:, :, [2, 1, 0]] # to RGB
detections.append(face_detector(image))
# take only images where one high confidence box is detected
bad_images = [metadata.name[i] for i, (b, s) in enumerate(detections) if len(b) != 1 or s.max() < 0.5]
boxes = {}
for n, (box, score) in zip(metadata.name, detections):
if n not in bad_images:
ymin, xmin, ymax, xmax = box[0]
boxes[n] = (xmin, ymin, xmax, ymax)
```
### Read keypoints from annotations
```
def get_numbers(s):
s = s.strip().split(' ')
return [s[0]] + [int(i) for i in s[1:] if i]
with open(ANNOTATIONS_PATH, 'r') as f:
content = f.readlines()
content = content[2:]
content = [get_numbers(s) for s in content]
landmarks = {}
more_bad_images = []
for i in content:
name = i[0]
keypoints = [
[i[1], i[2]], # lefteye_x lefteye_y
[i[3], i[4]], # righteye_x righteye_y
[i[5], i[6]], # nose_x nose_y
[i[7], i[8]], # leftmouth_x leftmouth_y
[i[9], i[10]], # rightmouth_x rightmouth_y
]
# assert that landmarks are inside the box
if name in bad_images:
continue
xmin, ymin, xmax, ymax = boxes[name]
points = np.array(keypoints)
is_normal = (points[:, 0] > xmin).all() and\
(points[:, 0] < xmax).all() and\
(points[:, 1] > ymin).all() and\
(points[:, 1] < ymax).all()
if not is_normal:
more_bad_images.append(name)
landmarks[name] = keypoints
# number of weird landmarks
len(more_bad_images)
to_remove = more_bad_images + bad_images
metadata = metadata.loc[~metadata.name.isin(to_remove)]
metadata = metadata.reset_index(drop=True)
# backup results
metadata.to_csv('metadata.csv')
np.save('boxes.npy', boxes)
np.save('landmarks.npy', landmarks)
np.save('to_remove.npy', to_remove)
# metadata = pd.read_csv('metadata.csv', index_col=0)
# boxes = np.load('boxes.npy')[()]
# landmarks = np.load('landmarks.npy')[()]
# to_remove = np.load('to_remove.npy')
# size after cleaning
len(metadata)
```
# Show some bounding boxes and landmarks
```
def draw_boxes_on_image(path, box, keypoints):
image = Image.open(path)
draw = ImageDraw.Draw(image, 'RGBA')
xmin, ymin, xmax, ymax = box
fill = (255, 255, 255, 45)
outline = 'red'
draw.rectangle(
[(xmin, ymin), (xmax, ymax)],
fill=fill, outline=outline
)
for x, y in keypoints:
draw.ellipse([
(x - 2.0, y - 2.0),
(x + 2.0, y + 2.0)
], outline='red')
return image
i = random.randint(0, len(metadata) - 1) # choose a random image
some_boxes = boxes[metadata.name[i]]
keypoints = landmarks[metadata.name[i]]
draw_boxes_on_image(metadata.full_path[i], some_boxes, keypoints)
```
# Procrustes analysis (Pose-based Data Balancing strategy)
```
landmarks_array = []
boxes_array = []
for n in metadata.name:
landmarks_array.append(np.array(landmarks[n]))
boxes_array.append(np.array(boxes[n]))
landmarks_array = np.stack(landmarks_array, axis=0)
landmarks_array = landmarks_array.astype('float32')
boxes_array = np.stack(boxes_array)
mean_shape = landmarks_array.mean(0) # reference shape
num_images = len(landmarks_array)
aligned = []
for shape in tqdm(landmarks_array):
Z, _ = procrustes(mean_shape, shape, reflection=False)
aligned.append(Z)
aligned = np.stack(aligned)
pca = PCA(n_components=1)
projected = pca.fit_transform(aligned.reshape((-1, 10)))
projected = projected[:, 0]
plt.hist(projected, bins=40);
# frontal faces:
indices = np.where(np.abs(projected) < 5)[0]
# faces turned to the left:
# indices = np.where(projected > 15)[0]
# faces turned to the right:
# indices = np.where(projected < -30)[0]
i = indices[random.randint(0, len(indices) - 1)]
some_boxes = boxes[metadata.name[i]]
keypoints = landmarks[metadata.name[i]]
draw_boxes_on_image(metadata.full_path[i], some_boxes, keypoints)
# it is not strictly a yaw angle
metadata['yaw'] = projected
```
# Create train-val split
```
split = pd.read_csv(SPLIT_PATH, header=None, sep=' ')
split.columns = ['name', 'assignment']
split = split.loc[~split.name.isin(to_remove)]
split = split.reset_index(drop=True)
split.assignment.value_counts()
# "0" represents training image, "1" represents validation image, "2" represents testing image
train = list(split.loc[split.assignment.isin([0, 1]), 'name'])
val = list(split.loc[split.assignment.isin([2]), 'name'])
```
# Upsample rare poses
```
metadata['is_train'] = metadata.name.isin(train).astype('int')
bins = [metadata.yaw.min() - 1.0, -20.0, -5.0, 5.0, 20.0, metadata.yaw.max() + 1.0]
metadata['bin'] = pd.cut(metadata.yaw, bins, labels=False)
metadata.loc[metadata.is_train == 1, 'bin'].value_counts()
bins_to_upsample = [0, 1, 3, 4]
num_samples = 80000
val_metadata = metadata.loc[metadata.is_train == 0]
upsampled = [metadata.loc[(metadata.is_train == 1) & (metadata.bin == 2)]]
for b in bins_to_upsample:
to_use = (metadata.is_train == 1) & (metadata.bin == b)
m = metadata.loc[to_use].sample(n=num_samples, replace=True)
upsampled.append(m)
upsampled = pd.concat(upsampled)
upsampled.bin.value_counts()
metadata = pd.concat([upsampled, val_metadata])
```
# Convert
```
def get_annotation(name, new_name, width, height, translation):
xmin, ymin, xmax, ymax = boxes[name]
keypoints = landmarks[name]
tx, ty = translation
keypoints = [[p[0] - tx, p[1] - ty] for p in keypoints]
xmin, ymin = xmin - tx, ymin - ty
xmax, ymax = xmax - tx, ymax - ty
annotation = {
"filename": new_name,
"size": {"depth": 3, "width": width, "height": height},
"box": {"ymin": int(ymin), "ymax": int(ymax), "xmax": int(xmax), "xmin": int(xmin)},
"landmarks": keypoints
}
return annotation
# create folders for the converted dataset
TRAIN_DIR = '/mnt/datasets/dan/CelebA/train/'
shutil.rmtree(TRAIN_DIR, ignore_errors=True)
os.mkdir(TRAIN_DIR)
os.mkdir(os.path.join(TRAIN_DIR, 'images'))
os.mkdir(os.path.join(TRAIN_DIR, 'annotations'))
VAL_DIR = '/mnt/datasets/dan/CelebA/val/'
shutil.rmtree(VAL_DIR, ignore_errors=True)
os.mkdir(VAL_DIR)
os.mkdir(os.path.join(VAL_DIR, 'images'))
os.mkdir(os.path.join(VAL_DIR, 'annotations'))
counter = 0
for T in tqdm(metadata.itertuples()):
# get width and height of an image
image = cv2.imread(T.full_path)
h, w, c = image.shape
assert c == 3
# name of the image
name = T.name
assert name.endswith('.jpg')
if name in train:
result_dir = TRAIN_DIR
elif name in val:
result_dir = VAL_DIR
else:
print('Image is in neither the train nor the val split:', name)
break
# crop the image to save space
xmin, ymin, xmax, ymax = boxes[name]
width, height = xmax - xmin, ymax - ymin
assert width > 0 and height > 0
xmin = max(int(xmin - width), 0)
ymin = max(int(ymin - height), 0)
xmax = min(int(xmax + width), w)
ymax = min(int(ymax + height), h)
crop = image[ymin:ymax, xmin:xmax, :]
# we need to transform annotations after cropping
translation = [xmin, ymin]
# we need to rename images because of upsampling
new_name = str(counter) + '.jpg'
counter += 1
cv2.imwrite(os.path.join(result_dir, 'images', new_name), crop)
# save annotation for it
d = get_annotation(name, new_name, xmax - xmin, ymax - ymin, translation)
json_name = new_name[:-4] + '.json'
json.dump(d, open(os.path.join(result_dir, 'annotations', json_name), 'w'))
```
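The translation step inside `get_annotation` just shifts coordinates into the crop's frame; a quick check with made-up numbers:

```python
# a keypoint at (50, 60) in the full image; the crop starts at (xmin, ymin) = (30, 40)
tx, ty = 30, 40
x, y = 50, 60

translated = (x - tx, y - ty)
print(translated)  # (20, 20) — the same point, in crop coordinates
```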
<img align="left" width="40%" src="http://www.lsce.ipsl.fr/Css/img/banniere_LSCE_75.png">
<br>Patrick BROCKMANN - LSCE (Climate and Environment Sciences Laboratory)
<hr>
### Discover Milankovitch Orbital Parameters over Time by reproducing the figure from https://biocycle.atmos.colostate.edu/shiny/Milankovitch/
```
from bokeh.io import output_notebook, show
from bokeh.plotting import figure
from bokeh.layouts import gridplot, column
from bokeh.models import CustomJS, Slider, RangeSlider
from bokeh.models import Span
output_notebook()
import ipywidgets as widgets
from ipywidgets import Layout
from ipywidgets import interact
import pandas as pd
import numpy as np
```
### Download files
Data files from http://vo.imcce.fr/insola/earth/online/earth/earth.html
```
# note: the 250 Myr file is the one read below (the 100 Myr variant covers a shorter span)
! wget -nc http://vo.imcce.fr/insola/earth/online/earth/La2004/INSOLN.LA2004.BTL.250.ASC
! wget -nc http://vo.imcce.fr/insola/earth/online/earth/La2004/INSOLP.LA2004.BTL.ASC
```
### Read files
```
# t Time from J2000 in 1000 years
# e eccentricity
# eps obliquity (radians)
# pibar longitude of perihelion from moving equinox (radians)
df1 = pd.read_csv('INSOLN.LA2004.BTL.250.ASC', delim_whitespace=True, names=['t', 'e', 'eps', 'pibar'])
df1.set_index('t', inplace=True)
df2 = pd.read_csv('INSOLP.LA2004.BTL.ASC', delim_whitespace=True, names=['t', 'e', 'eps', 'pibar'])
df2.set_index('t', inplace=True)
#df = pd.read_csv('La2010a_ecc3.dat', delim_whitespace=True, names=['t', 'e'])
#df = pd.read_csv('La2010a_alkhqp3L.dat', delim_whitespace=True, names=['t','a','l','k','h','q','p'])
# INSOLP.LA2004.BTL.ASC has a FORTRAN DOUBLE notation D instead of E
for col in ['e', 'eps', 'pibar']:
df2[col] = df2[col].str.replace('D', 'E').astype(float)
df2['e'][0]
df = pd.concat([df1[::-1],df2[1:]])
df
# t Time from J2000 in 1000 years
# e eccentricity
# eps obliquity (radians)
# pibar longitude of perihelion from moving equinox (radians)
df['eccentricity'] = df['e']
df['perihelion'] = df['pibar']
df['obliquity'] = 180. * df['eps'] / np.pi
df['precession'] = df['eccentricity'] * np.sin(df['perihelion'])
#latitude <- 65. * pi / 180.
#Q.day <- S0*(1+eccentricity*sin(perihelion+pi))^2 *sin(latitude)*sin(obliquity)
latitude = 65. * np.pi / 180.
df['insolation'] = 1367 * ( 1 + df['eccentricity'] * np.sin(df['perihelion'] + np.pi))**2 * np.sin(latitude) * np.sin(df['eps'])
df
```
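As a quick sanity check of the precessional-index formula `e * sin(pibar)` used above (the values are illustrative, close to present-day Earth):

```python
import math

e = 0.0167             # illustrative eccentricity
pibar = math.pi / 2    # illustrative longitude of perihelion (radians)

precession = e * math.sin(pibar)
print(round(precession, 4))  # 0.0167 — sin(pi/2) = 1, so the index equals e
```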
### Build plot
```
a = widgets.IntRangeSlider(
layout=Layout(width='600px'),
value=[-2000, 50],
min=-250000,
max=21000,
step=100,
disabled=False,
continuous_update=False,
orientation='horizontal',
description='-249Myr to +21Myr:',
)
def plot1(limits):
years = df[limits[0]:limits[1]].index
zeroSpan = Span(location=0, dimension='height', line_color='black',
line_dash='solid', line_width=1)
p1 = figure(title='Eccentricity', active_scroll="wheel_zoom")
p1.line(years, df[limits[0]:limits[1]]['eccentricity'], color='red')
p1.yaxis.axis_label = "Eccentricity (dimensionless)"
p1.add_layout(zeroSpan)
p2 = figure(title='Obliquity', x_range=p1.x_range)
p2.line(years, df[limits[0]:limits[1]]['obliquity'], color='forestgreen')
p2.yaxis.axis_label = "Degrees"
p2.add_layout(zeroSpan)
p3 = figure(title='Precessional index', x_range=p1.x_range)
p3.line(years, df[limits[0]:limits[1]]['precession'], color='dodgerblue')
p3.yaxis.axis_label = "Index (dimensionless)"
p3.add_layout(zeroSpan)
p4 = figure(title='Mean Daily Insolation at 65N on Summer Solstice', x_range=p1.x_range)
p4.line(years, df[limits[0]:limits[1]]['insolation'], color='#ffc125')
p4.yaxis.axis_label = "Watts/m2"
p4.add_layout(zeroSpan)
show(gridplot([p1,p2,p3,p4], ncols=1, plot_width=600, plot_height=200))
interact(plot1, limits=a)
# Merged tool of subfigures is not marked as active
# https://github.com/bokeh/bokeh/issues/10659
p1 = figure(title='Eccentricity', active_scroll="wheel_zoom")
years = df[0:2000].index
p1.line(years, df[0:2000]['eccentricity'], color='red')
p2 = figure(title='Obliquity', x_range=p1.x_range)
p2.line(years, df[0:2000]['obliquity'], color='forestgreen')
show(gridplot([p1,p2], ncols=1, plot_width=600, plot_height=200, merge_tools=True))
```
| github_jupyter |
# Tutorial 5: Inception, ResNet and DenseNet

**Filled notebook:**
[](https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial5/Inception_ResNet_DenseNet.ipynb)
[](https://colab.research.google.com/github/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial5/Inception_ResNet_DenseNet.ipynb)
**Pre-trained models:**
[](https://github.com/phlippe/saved_models/tree/main/tutorial5)
[](https://drive.google.com/drive/folders/1zOgLKmYJ2V3uHz57nPUMY6tq15RmEtNg?usp=sharing)
In this tutorial, we will implement and discuss variants of modern CNN architectures. Many different architectures have been proposed over the past few years. Some of the most impactful ones, and still relevant today, are the following: [GoogleNet](https://arxiv.org/abs/1409.4842)/Inception architecture (winner of ILSVRC 2014), [ResNet](https://arxiv.org/abs/1512.03385) (winner of ILSVRC 2015), and [DenseNet](https://arxiv.org/abs/1608.06993) (best paper award CVPR 2017). All of them were state-of-the-art models at the time they were proposed, and the core ideas of these networks are the foundations for most current state-of-the-art architectures. Thus, it is important to understand these architectures in detail and learn how to implement them.
Let's start with importing our standard libraries here.
```
## Standard libraries
import os
import numpy as np
import random
from PIL import Image
from types import SimpleNamespace
## Imports for plotting
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg', 'pdf') # For export
from matplotlib.colors import to_rgb
import matplotlib
matplotlib.rcParams['lines.linewidth'] = 2.0
import seaborn as sns
sns.reset_orig()
## PyTorch
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data
import torch.optim as optim
# Torchvision
import torchvision
from torchvision.datasets import CIFAR10
from torchvision import transforms
```
We will use the same `set_seed` function as in the previous tutorials, as well as the path variables `DATASET_PATH` and `CHECKPOINT_PATH`. Adjust the paths if necessary.
```
# Path to the folder where the datasets are/should be downloaded (e.g. CIFAR10)
DATASET_PATH = "../data"
# Path to the folder where the pretrained models are saved
CHECKPOINT_PATH = "../saved_models/tutorial5"
# Function for setting the seed
def set_seed(seed):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
set_seed(42)
# Ensure that all operations are deterministic on GPU (if used) for reproducibility
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
```
We also provide pretrained models and Tensorboards (more on this later) for this tutorial, which we download below.
```
import urllib.request
from urllib.error import HTTPError
# Github URL where saved models are stored for this tutorial
base_url = "https://raw.githubusercontent.com/phlippe/saved_models/main/tutorial5/"
# Files to download
pretrained_files = ["GoogleNet.ckpt", "ResNet.ckpt", "ResNetPreAct.ckpt", "DenseNet.ckpt",
"tensorboards/GoogleNet/events.out.tfevents.googlenet",
"tensorboards/ResNet/events.out.tfevents.resnet",
"tensorboards/ResNetPreAct/events.out.tfevents.resnetpreact",
"tensorboards/DenseNet/events.out.tfevents.densenet"]
# Create checkpoint path if it doesn't exist yet
os.makedirs(CHECKPOINT_PATH, exist_ok=True)
# For each file, check whether it already exists. If not, try downloading it.
for file_name in pretrained_files:
    file_path = os.path.join(CHECKPOINT_PATH, file_name)
    if "/" in file_name:
        os.makedirs(file_path.rsplit("/", 1)[0], exist_ok=True)
    if not os.path.isfile(file_path):
        file_url = base_url + file_name
        print("Downloading %s..." % file_url)
        try:
            urllib.request.urlretrieve(file_url, file_path)
        except HTTPError as e:
            print("Something went wrong. Please try to download the file from the GDrive folder, or contact the author with the full output including the following error:\n", e)
```
Throughout this tutorial, we will train and evaluate the models on the CIFAR10 dataset. This allows you to compare the results obtained here with the model you have implemented in the first assignment. As we have learned from the previous tutorial about initialization, it is important to have the data preprocessed with a zero mean. Therefore, as a first step, we will calculate the mean and standard deviation of the CIFAR dataset:
```
train_dataset = CIFAR10(root=DATASET_PATH, train=True, download=True)
DATA_MEANS = (train_dataset.data / 255.0).mean(axis=(0,1,2))
DATA_STD = (train_dataset.data / 255.0).std(axis=(0,1,2))
print("Data mean", DATA_MEANS)
print("Data std", DATA_STD)
```
We will use this information to define a `transforms.Normalize` module which will normalize our data accordingly. Additionally, we will use data augmentation during training. This reduces the risk of overfitting and helps CNNs to generalize better. Specifically, we will apply two random augmentations.
First, we will flip each image horizontally with a probability of 50% (`transforms.RandomHorizontalFlip`). The object class usually does not change when flipping an image, and we don't expect any image information to depend on the horizontal orientation. This would, however, be different if we tried to detect digits or letters in an image, as those have a fixed orientation.
The second augmentation we use is called `transforms.RandomResizedCrop`. This transformation scales the image within a small range, while possibly changing the aspect ratio, and then crops it back to the original size. Thus, the actual pixel values change while the content and overall semantics of the image stay the same.
We will randomly split the training dataset into a training and a validation set. The validation set will be used for determining early stopping. After finishing the training, we test the models on the CIFAR test set.
```
test_transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize(DATA_MEANS, DATA_STD)
])
# For training, we add some augmentation. Networks are too powerful and would overfit.
train_transform = transforms.Compose([transforms.RandomHorizontalFlip(),
transforms.RandomResizedCrop((32,32),scale=(0.8,1.0),ratio=(0.9,1.1)),
transforms.ToTensor(),
transforms.Normalize(DATA_MEANS, DATA_STD)
])
# Loading the training dataset. We need to split it into a training and validation part
# We need to do a little trick because the validation set should not use the augmentation.
train_dataset = CIFAR10(root=DATASET_PATH, train=True, transform=train_transform, download=True)
val_dataset = CIFAR10(root=DATASET_PATH, train=True, transform=test_transform, download=True)
set_seed(42)
train_set, _ = torch.utils.data.random_split(train_dataset, [45000, 5000])
set_seed(42)
_, val_set = torch.utils.data.random_split(val_dataset, [45000, 5000])
# Loading the test set
test_set = CIFAR10(root=DATASET_PATH, train=False, transform=test_transform, download=True)
# We define a set of data loaders that we can use for various purposes later.
train_loader = data.DataLoader(train_set, batch_size=128, shuffle=True, drop_last=True, pin_memory=True, num_workers=4)
val_loader = data.DataLoader(val_set, batch_size=128, shuffle=False, drop_last=False, num_workers=4)
test_loader = data.DataLoader(test_set, batch_size=128, shuffle=False, drop_last=False, num_workers=4)
```
To verify that our normalization works, we can print out the mean and standard deviation of a single batch. The mean should be close to 0 and the standard deviation close to 1 for each channel:
```
imgs, _ = next(iter(train_loader))
print("Batch mean", imgs.mean(dim=[0,2,3]))
print("Batch std", imgs.std(dim=[0,2,3]))
```
Finally, let's visualize a few images from the training set and what they look like after random data augmentation:
```
NUM_IMAGES = 4
images = [train_dataset[idx][0] for idx in range(NUM_IMAGES)]
orig_images = [Image.fromarray(train_dataset.data[idx]) for idx in range(NUM_IMAGES)]
orig_images = [test_transform(img) for img in orig_images]
img_grid = torchvision.utils.make_grid(torch.stack(images + orig_images, dim=0), nrow=4, normalize=True, pad_value=0.5)
img_grid = img_grid.permute(1, 2, 0)
plt.figure(figsize=(8,8))
plt.title("Augmentation examples on CIFAR10")
plt.imshow(img_grid)
plt.axis('off')
plt.show()
plt.close()
```
## PyTorch Lightning
In this notebook and in many following ones, we will make use of the library [PyTorch Lightning](https://www.pytorchlightning.ai/). PyTorch Lightning is a framework that simplifies the code needed to train, evaluate, and test a model in PyTorch. It also handles logging to [TensorBoard](https://pytorch.org/tutorials/intermediate/tensorboard_tutorial.html), a visualization toolkit for ML experiments, and saves model checkpoints automatically with minimal code overhead on our side. This is extremely helpful for us as we want to focus on implementing different model architectures and spend as little time as possible on the rest of the code. Note that at the time of writing/teaching, the framework has been released in version 1.0. Future versions might have a slightly changed interface and thus might not work perfectly with the code (we will try to keep it up-to-date as much as possible).
Now, we will take the first step in PyTorch Lightning, and continue to explore the framework in our other tutorials. First, we import the library:
```
# PyTorch Lightning
try:
    import pytorch_lightning as pl
except ModuleNotFoundError:  # Google Colab does not have PyTorch Lightning installed by default. Hence, we do it here if necessary
    !pip install pytorch-lightning==1.0.3
    import pytorch_lightning as pl
```
PyTorch Lightning comes with a lot of useful functions, such as one for setting the seed:
```
# Setting the seed
pl.seed_everything(42)
```
Thus, in the future, we don't have to define our own `set_seed` function anymore.
In PyTorch Lightning, we define `pl.LightningModule`s (inheriting from `torch.nn.Module`) that organize our code into 5 main sections:
1. Initialization (`__init__`), where we create all necessary parameters/models
2. Optimizers (`configure_optimizers`) where we create the optimizers, learning rate scheduler, etc.
3. Training loop (`training_step`) where we only have to define the loss calculation for a single batch (the loop of optimizer.zero_grad(), loss.backward() and optimizer.step(), as well as any logging/saving operation, is done in the background)
4. Validation loop (`validation_step`) where similarly to the training, we only have to define what should happen per step
5. Test loop (`test_step`) which is the same as validation, only on a test set.
Therefore, we don't abstract the PyTorch code, but rather organize it and define some default operations that are commonly used. If you need to change something else in your training/validation/test loop, there are many possible functions you can override (see the [docs](https://pytorch-lightning.readthedocs.io/en/stable/lightning_module.html) for details).
Now we can look at an example of what a Lightning Module for training a CNN looks like:
```
class CIFARTrainer(pl.LightningModule):

    def __init__(self, model_name, model_hparams, optimizer_name, optimizer_hparams):
        """
        Inputs:
            model_name - Name of the model/CNN to run. Used for creating the model (see function below)
            model_hparams - Hyperparameters for the model, as dictionary.
            optimizer_name - Name of the optimizer to use. Currently supported: Adam, SGD
            optimizer_hparams - Hyperparameters for the optimizer, as dictionary. This includes learning rate, weight decay, etc.
        """
        super().__init__()
        # Exports the hyperparameters to a YAML file, and create "self.hparams" namespace
        self.save_hyperparameters()
        # Create model
        self.model = create_model(model_name, model_hparams)
        # Create loss module
        self.loss_module = nn.CrossEntropyLoss()
        # Example input for visualizing the graph in Tensorboard
        self.example_input_array = torch.zeros((1, 3, 32, 32), dtype=torch.float32)

    def forward(self, imgs):
        # Forward function that is run when visualizing the graph
        return self.model(imgs)

    def configure_optimizers(self):
        # We will support Adam or SGD as optimizers.
        if self.hparams.optimizer_name == "Adam":
            # AdamW is Adam with a correct implementation of weight decay (see here for details: https://arxiv.org/pdf/1711.05101.pdf)
            optimizer = optim.AdamW(self.parameters(), **self.hparams.optimizer_hparams)
        elif self.hparams.optimizer_name == "SGD":
            optimizer = optim.SGD(self.parameters(), **self.hparams.optimizer_hparams)
        else:
            assert False, "Unknown optimizer: \"%s\"" % self.hparams.optimizer_name
        # We will reduce the learning rate by 0.1 after 100 and 150 epochs
        scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 150], gamma=0.1)
        return [optimizer], [scheduler]

    def training_step(self, batch, batch_idx):
        # "batch" is the output of the training data loader.
        imgs, labels = batch
        preds = self.model(imgs)
        loss = self.loss_module(preds, labels)
        acc = (preds.argmax(dim=-1) == labels).float().mean()
        self.log('train_acc', acc, on_step=False, on_epoch=True)  # Logs the accuracy per epoch to tensorboard (weighted average over batches)
        self.log('train_loss', loss)
        return loss  # Return tensor to call ".backward" on

    def validation_step(self, batch, batch_idx):
        imgs, labels = batch
        preds = self.model(imgs).argmax(dim=-1)
        acc = (labels == preds).float().mean()
        self.log('val_acc', acc)  # By default logs it per epoch (weighted average over batches)

    def test_step(self, batch, batch_idx):
        imgs, labels = batch
        preds = self.model(imgs).argmax(dim=-1)
        acc = (labels == preds).float().mean()
        self.log('test_acc', acc)  # By default logs it per epoch (weighted average over batches), and returns it afterwards
```
We see that the code is organized and clear, which helps if someone else tries to understand your code.
Another important part of PyTorch Lightning is the concept of callbacks. Callbacks are self-contained functions that contain the non-essential logic of your Lightning Module. They are usually called after finishing a training epoch, but can also influence other parts of your training loop. For instance, we will use the following two pre-defined callbacks: `LearningRateMonitor` and `ModelCheckpoint`. The learning rate monitor adds the current learning rate to our TensorBoard, which helps to verify that our learning rate scheduler works correctly. The model checkpoint callback allows you to customize the saving routine of your checkpoints. For instance, how many checkpoints to keep, when to save, which metric to look out for, etc. We import them below:
```
# Callbacks
from pytorch_lightning.callbacks import LearningRateMonitor, ModelCheckpoint
```
To allow running multiple different models with the same Lightning module, we define a function below that maps a model name to the model class. At this stage, the dictionary `model_dict` is empty, but we will fill it throughout the notebook with our new models.
```
model_dict = {}
def create_model(model_name, model_hparams):
    if model_name in model_dict:
        return model_dict[model_name](**model_hparams)
    else:
        assert False, "Unknown model name \"%s\". Available models are: %s" % (model_name, str(model_dict.keys()))
```
Similarly, to use the activation function as another hyperparameter in our model, we define a "name to function" dict below:
```
act_fn_by_name = {
"tanh": nn.Tanh,
"relu": nn.ReLU,
"leakyrelu": nn.LeakyReLU,
"gelu": nn.GELU
}
```
If we passed the classes or objects directly as arguments to the Lightning module, we could not take advantage of PyTorch Lightning's automatic hyperparameter saving and loading.
Besides the Lightning module, the second most important module in PyTorch Lightning is the `Trainer`. The trainer is responsible for executing the training steps defined in the Lightning module and completes the framework. Similar to the Lightning module, you can override any key part that you don't want to be automated, but the default settings often represent best practices. For a full overview, see the [documentation](https://pytorch-lightning.readthedocs.io/en/stable/trainer.html). The most important functions we use below are:
* `trainer.fit`: Takes as input a lightning module, a training dataset, and an (optional) validation dataset. This function trains the given module on the training dataset with occasional validation (default once per epoch, can be changed)
* `trainer.test`: Takes as input a model and a dataset on which we want to test. It returns the test metric on the dataset.
For training and testing, we don't have to worry about things like setting the model to eval mode (`model.eval()`) as this is all done automatically. See below how we define a training function for our models:
```
def train_model(model_name, save_name=None, **kwargs):
    """
    Inputs:
        model_name - Name of the model you want to run. Is used to look up the class in "model_dict"
        save_name (optional) - If specified, this name will be used for creating the checkpoint and logging directory.
    """
    if save_name is None:
        save_name = model_name

    # Create a PyTorch Lightning trainer with the generation callback
    trainer = pl.Trainer(default_root_dir=os.path.join(CHECKPOINT_PATH, save_name),  # Where to save models
                         checkpoint_callback=ModelCheckpoint(save_weights_only=True, mode="max", monitor="val_acc"),  # Save the best checkpoint based on the maximum val_acc recorded. Saves only weights and not optimizer
                         gpus=1 if str(device) == "cuda:0" else 0,  # We run on a single GPU (if possible)
                         max_epochs=180,  # How many epochs to train for if no patience is set
                         callbacks=[LearningRateMonitor("epoch")],  # Log learning rate every epoch
                         progress_bar_refresh_rate=1)  # In case your notebook crashes due to the progress bar, consider increasing the refresh rate
    trainer.logger._log_graph = True  # If True, we plot the computation graph in tensorboard
    trainer.logger._default_hp_metric = None  # Optional logging argument that we don't need

    # Check whether pretrained model exists. If yes, load it and skip training
    pretrained_filename = os.path.join(CHECKPOINT_PATH, save_name + ".ckpt")
    if os.path.isfile(pretrained_filename):
        print("Found pretrained model at %s, loading..." % pretrained_filename)
        model = CIFARTrainer.load_from_checkpoint(pretrained_filename)  # Automatically loads the model with the saved hyperparameters
    else:
        pl.seed_everything(42)  # To be reproducible
        model = CIFARTrainer(model_name=model_name, **kwargs)
        trainer.fit(model, train_loader, val_loader)
        model = CIFARTrainer.load_from_checkpoint(trainer.checkpoint_callback.best_model_path)  # Load best checkpoint after training

    # Test best model on validation and test set
    val_result = trainer.test(model, test_dataloaders=val_loader, verbose=False)
    test_result = trainer.test(model, test_dataloaders=test_loader, verbose=False)
    result = {"test": test_result[0]["test_acc"], "val": val_result[0]["test_acc"]}
    return model, result
```
Finally, we can focus on the Convolutional Neural Networks we want to implement today: GoogleNet, ResNet, and DenseNet.
## Inception
The [GoogleNet](https://arxiv.org/abs/1409.4842), proposed in 2014, won the ImageNet Challenge thanks to its use of Inception modules. In general, we will mainly focus on the concept of Inception in this tutorial instead of the specifics of the GoogleNet architecture, as Inception has spawned many follow-up works ([Inception-v2](https://arxiv.org/abs/1512.00567), [Inception-v3](https://arxiv.org/abs/1512.00567), [Inception-v4](https://arxiv.org/abs/1602.07261), [Inception-ResNet](https://arxiv.org/abs/1602.07261), ...). These follow-up works mainly focus on increasing efficiency and enabling very deep Inception networks. However, for a fundamental understanding, it is sufficient to look at the original Inception block.
An Inception block applies four convolution blocks separately on the same feature map: a 1x1, 3x3, and 5x5 convolution, and a max pool operation. This allows the network to look at the same data with different receptive fields. Of course, learning only 5x5 convolutions would be theoretically more powerful. However, this is not only more computation- and memory-heavy but also tends to overfit much more easily. The overall Inception block looks as follows (figure credit - [Szegedy et al.](https://arxiv.org/abs/1409.4842)):
<center width="100%"><img src="inception_block.svg" style="display: block; margin-left: auto; margin-right: auto;" width="500px"/></center>
The additional 1x1 convolutions before the 3x3 and 5x5 convolutions are used for dimensionality reduction. This is especially crucial as the feature maps of all branches are merged afterward, and we don't want any explosion of feature size. As 5x5 convolutions are 25 times more expensive than 1x1 convolutions, we can save a lot of computation and parameters by reducing the dimensionality before the large convolutions.
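To make the savings concrete, we can do the arithmetic for the 5x5 branch of the first Inception block we use later in this notebook (64 input channels, a 1x1 reduction to 16 channels, and 8 output channels). The helper name `conv_params` is ours, not part of the tutorial code; we count only convolution weights and ignore biases and batch norm:

```python
# Weight count of a convolution: kernel_size^2 * input_channels * output_channels
def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

# With the 1x1 reduction: 64 -> 16 (1x1 conv), then 16 -> 8 (5x5 conv)
with_reduction = conv_params(1, 64, 16) + conv_params(5, 16, 8)
# Without the reduction: 64 -> 8 directly with a 5x5 convolution
without_reduction = conv_params(5, 64, 8)
print(with_reduction, without_reduction)  # 4224 12800
```

The reduced branch needs roughly a third of the parameters here, and the savings grow with the number of input channels.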
We can now try to implement the Inception Block ourselves:
```
class InceptionBlock(nn.Module):

    def __init__(self, c_in, c_red: dict, c_out: dict, act_fn):
        """
        Inputs:
            c_in - Number of input feature maps from the previous layers
            c_red - Dictionary with keys "3x3" and "5x5" specifying the output of the dimensionality reducing 1x1 convolutions
            c_out - Dictionary with keys "1x1", "3x3", "5x5", and "max"
            act_fn - Activation class constructor (e.g. nn.ReLU)
        """
        super().__init__()

        # 1x1 convolution branch
        self.conv_1x1 = nn.Sequential(
            nn.Conv2d(c_in, c_out["1x1"], kernel_size=1),
            nn.BatchNorm2d(c_out["1x1"]),
            act_fn()
        )

        # 3x3 convolution branch
        self.conv_3x3 = nn.Sequential(
            nn.Conv2d(c_in, c_red["3x3"], kernel_size=1),
            nn.BatchNorm2d(c_red["3x3"]),
            act_fn(),
            nn.Conv2d(c_red["3x3"], c_out["3x3"], kernel_size=3, padding=1),
            nn.BatchNorm2d(c_out["3x3"]),
            act_fn()
        )

        # 5x5 convolution branch
        self.conv_5x5 = nn.Sequential(
            nn.Conv2d(c_in, c_red["5x5"], kernel_size=1),
            nn.BatchNorm2d(c_red["5x5"]),
            act_fn(),
            nn.Conv2d(c_red["5x5"], c_out["5x5"], kernel_size=5, padding=2),
            nn.BatchNorm2d(c_out["5x5"]),
            act_fn()
        )

        # Max-pool branch
        self.max_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, padding=1, stride=1),
            nn.Conv2d(c_in, c_out["max"], kernel_size=1),
            nn.BatchNorm2d(c_out["max"]),
            act_fn()
        )

    def forward(self, x):
        x_1x1 = self.conv_1x1(x)
        x_3x3 = self.conv_3x3(x)
        x_5x5 = self.conv_5x5(x)
        x_max = self.max_pool(x)
        x_out = torch.cat([x_1x1, x_3x3, x_5x5, x_max], dim=1)
        return x_out
```
The GoogleNet architecture consists of stacking multiple Inception blocks with occasional max pooling to reduce the height and width of the feature maps. The original GoogleNet was designed for image sizes of ImageNet (224x224 pixels) and had almost 7 million parameters. As we train on CIFAR10 with image sizes of 32x32, we don't require such a heavy architecture, and instead, apply a reduced version. The number of channels for dimensionality reduction and output per filter (1x1, 3x3, 5x5, and max pooling) need to be manually specified and can be changed if interested. The general intuition is to have the most filters for the 3x3 convolutions, as they are powerful enough to take the context into account while requiring almost a third of the parameters of the 5x5 convolution.
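Since the four branch outputs are concatenated along the channel dimension, the entries of each `c_out` dictionary must sum to the input channel count of the following block (the max-pool layers in between keep the channel count unchanged). A quick pure-Python sanity check of the numbers used in the network below:

```python
# Output-channel dicts of the eight stacked Inception blocks in the GoogleNet below
block_c_out = [
    {"1x1": 16, "3x3": 32, "5x5": 8,  "max": 8},
    {"1x1": 24, "3x3": 48, "5x5": 12, "max": 12},
    {"1x1": 24, "3x3": 48, "5x5": 12, "max": 12},
    {"1x1": 16, "3x3": 48, "5x5": 16, "max": 16},
    {"1x1": 16, "3x3": 48, "5x5": 16, "max": 16},
    {"1x1": 32, "3x3": 48, "5x5": 24, "max": 24},
    {"1x1": 32, "3x3": 64, "5x5": 16, "max": 16},
    {"1x1": 32, "3x3": 64, "5x5": 16, "max": 16},
]
channel_counts = [sum(d.values()) for d in block_c_out]
print(channel_counts)  # [64, 96, 96, 96, 96, 128, 128, 128]
# Each sum is the c_in of the next block, and the final 128 matches
# the input size of the Linear layer in the classification head.
```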
```
class GoogleNet(nn.Module):

    def __init__(self, num_classes=10, act_fn_name="relu", **kwargs):
        super().__init__()
        self.hparams = SimpleNamespace(num_classes=num_classes,
                                       act_fn_name=act_fn_name,
                                       act_fn=act_fn_by_name[act_fn_name])
        self._create_network()
        self._init_params()

    def _create_network(self):
        # A first convolution on the original image to scale up the channel size
        self.input_net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            self.hparams.act_fn()
        )
        # Stacking inception blocks
        self.inception_blocks = nn.Sequential(
            InceptionBlock(64, c_red={"3x3": 32, "5x5": 16}, c_out={"1x1": 16, "3x3": 32, "5x5": 8, "max": 8}, act_fn=self.hparams.act_fn),
            InceptionBlock(64, c_red={"3x3": 32, "5x5": 16}, c_out={"1x1": 24, "3x3": 48, "5x5": 12, "max": 12}, act_fn=self.hparams.act_fn),
            nn.MaxPool2d(3, stride=2, padding=1),  # 32x32 => 16x16
            InceptionBlock(96, c_red={"3x3": 32, "5x5": 16}, c_out={"1x1": 24, "3x3": 48, "5x5": 12, "max": 12}, act_fn=self.hparams.act_fn),
            InceptionBlock(96, c_red={"3x3": 32, "5x5": 16}, c_out={"1x1": 16, "3x3": 48, "5x5": 16, "max": 16}, act_fn=self.hparams.act_fn),
            InceptionBlock(96, c_red={"3x3": 32, "5x5": 16}, c_out={"1x1": 16, "3x3": 48, "5x5": 16, "max": 16}, act_fn=self.hparams.act_fn),
            InceptionBlock(96, c_red={"3x3": 32, "5x5": 16}, c_out={"1x1": 32, "3x3": 48, "5x5": 24, "max": 24}, act_fn=self.hparams.act_fn),
            nn.MaxPool2d(3, stride=2, padding=1),  # 16x16 => 8x8
            InceptionBlock(128, c_red={"3x3": 48, "5x5": 16}, c_out={"1x1": 32, "3x3": 64, "5x5": 16, "max": 16}, act_fn=self.hparams.act_fn),
            InceptionBlock(128, c_red={"3x3": 48, "5x5": 16}, c_out={"1x1": 32, "3x3": 64, "5x5": 16, "max": 16}, act_fn=self.hparams.act_fn)
        )
        # Mapping to classification output
        self.output_net = nn.Sequential(
            nn.AdaptiveAvgPool2d((1, 1)),
            nn.Flatten(),
            nn.Linear(128, self.hparams.num_classes)
        )

    def _init_params(self):
        # Based on our discussion in Tutorial 4, we should initialize the convolutions according to the activation function
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, nonlinearity=self.hparams.act_fn_name)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def forward(self, x):
        x = self.input_net(x)
        x = self.inception_blocks(x)
        x = self.output_net(x)
        return x
```
Now, we can integrate our model to the model dictionary we defined above:
```
model_dict["GoogleNet"] = GoogleNet
```
The training of the model is handled by PyTorch Lightning, and we just have to define the command to start it. Note that we train for almost 200 epochs, which takes about an hour on Lisa's default GPUs (GTX1080Ti). We recommend using the pretrained models, and training your own model only if you are interested.
```
googlenet_model, googlenet_results = train_model(model_name="GoogleNet",
model_hparams={"num_classes": 10,
"act_fn_name": "relu"},
optimizer_name="Adam",
optimizer_hparams={"lr": 1e-3,
"weight_decay": 1e-4})
```
We will compare the results later in the notebook, but we can already print them here for a first glance:
```
print("GoogleNet Results", googlenet_results)
```
### Tensorboard log
A nice extra of PyTorch Lightning is the automatic logging to TensorBoard. To give you a better intuition of what TensorBoard can be used for, we can look at the board that PyTorch Lightning has generated when training the GoogleNet. TensorBoard provides inline functionality for Jupyter notebooks, and we use it here:
```
# Import tensorboard
from torch.utils.tensorboard import SummaryWriter
%load_ext tensorboard
# Opens tensorboard in notebook. Adjust the path to your CHECKPOINT_PATH!
%tensorboard --logdir ../saved_models/tutorial5/tensorboards/GoogleNet/
```
<center width="100%"><img src="tensorboard_screenshot_GoogleNet.png" width="1000px"></center>
TensorBoard is organized in multiple tabs. The main tab is the scalar tab where we can log the development of single numbers. For example, we have plotted the training loss, accuracy, learning rate, etc. If we look at the training or validation accuracy, we can really see the impact of using a learning rate scheduler. Reducing the learning rate gives our model a nice increase in training performance. Similarly, when looking at the training loss, we see a sudden decrease at this point. However, the high numbers on the training set compared to validation indicate that our model was overfitting which is inevitable for such large networks.
Another interesting tab in TensorBoard is the graph tab. It shows us the network architecture organized by building blocks from the input to the output. It basically shows the operations taken in the forward step of `CIFARTrainer`. Double-click on a module to open it. Feel free to explore the architecture from a different perspective. The graph visualization can often help you to validate that your model is actually doing what it is supposed to do, and you don't miss any layers in the computation graph.
## ResNet
The [ResNet](https://arxiv.org/abs/1512.03385) paper is one of the [most cited AI papers](https://www.natureindex.com/news-blog/google-scholar-reveals-most-influential-papers-research-citations-twenty-twenty), and has been the foundation for neural networks with more than 1,000 layers. Despite its simplicity, the idea of residual connections is highly effective as it supports stable gradient propagation through the network. Instead of modeling $x_{l+1}=F(x_{l})$, we model $x_{l+1}=x_{l}+F(x_{l})$ where $F$ is a non-linear mapping (usually a sequence of NN modules like convolutions, activation functions, and normalizations). If we do backpropagation on such residual connections, we obtain:
$$\frac{\partial x_{l+1}}{\partial x_{l}} = \mathbf{I} + \frac{\partial F(x_{l})}{\partial x_{l}}$$
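Applying the chain rule across a whole stack of residual blocks, from layer $l$ up to layer $L$, every factor carries this additive identity term:
$$\frac{\partial x_{L}}{\partial x_{l}} = \prod_{i=l}^{L-1}\left(\mathbf{I} + \frac{\partial F(x_{i})}{\partial x_{i}}\right)$$
so the gradient always contains a direct path from the loss back to $x_{l}$, no matter how deep the network is.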
The bias towards the identity matrix guarantees a stable gradient propagation that is less affected by $F$ itself. There have been many variants of ResNet proposed, which mostly concern the function $F$, or the operations applied on the sum. In this tutorial, we look at two of them: the original ResNet block, and the [Pre-Activation ResNet block](https://arxiv.org/abs/1603.05027). We visually compare the blocks below (figure credit - [He et al.](https://arxiv.org/abs/1603.05027)):
<center width="100%"><img src="resnet_block.svg" style="display: block; margin-left: auto; margin-right: auto;" width="300px"/></center>
The original ResNet block applies a non-linear activation function, usually ReLU, after the skip connection. In contrast, the pre-activation ResNet block applies the non-linearity at the beginning of $F$. Both have their advantages and disadvantages. For very deep networks, however, the pre-activation ResNet has been shown to perform better, as the gradient flow is guaranteed to contain the identity matrix as calculated above, and is not harmed by any non-linear activation applied to it. For comparison, in this notebook, we implement both ResNet types as shallow networks.
Let's start with the original ResNet block. The visualization above already shows what layers are included in $F$. One special case we have to handle is when we want to reduce the image dimensions in terms of width and height. The basic ResNet block requires $F(x_{l})$ to be of the same shape as $x_{l}$. Thus, we need to change the dimensionality of $x_{l}$ as well before adding to $F(x_{l})$. The original implementation used an identity mapping with stride 2 and padded additional feature dimensions with 0. However, the more common implementation is to use a 1x1 convolution with stride 2 as it allows us to change the feature dimensionality while being efficient in parameter and computation cost. The code for the ResNet block is relatively simple, and shown below:
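The halving of the spatial size can be checked with the standard convolution output-size formula. The helper `conv_out_size` below is ours, purely for illustration, not part of the model code:

```python
def conv_out_size(size, kernel, stride=1, padding=0):
    # Output height/width of a convolution: floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

# 1x1 convolution with stride 2 on the skip connection: 32x32 -> 16x16
print(conv_out_size(32, kernel=1, stride=2))             # 16
# First 3x3 convolution (padding 1, stride 2) inside the block: also 32x32 -> 16x16
print(conv_out_size(32, kernel=3, stride=2, padding=1))  # 16
```

Both branches of the residual sum therefore end up with matching spatial shapes.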
```
class ResNetBlock(nn.Module):

    def __init__(self, c_in, act_fn, subsample=False, c_out=-1):
        """
        Inputs:
            c_in - Number of input features
            act_fn - Activation class constructor (e.g. nn.ReLU)
            subsample - If True, we want to apply a stride inside the block and reduce the output shape by 2 in height and width
            c_out - Number of output features. Note that this is only relevant if subsample is True, as otherwise, c_out = c_in
        """
        super().__init__()
        if not subsample:
            c_out = c_in

        # Network representing F
        self.net = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, stride=1 if not subsample else 2, bias=False),  # No bias needed as the Batch Norm handles it
            nn.BatchNorm2d(c_out),
            act_fn(),
            nn.Conv2d(c_out, c_out, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(c_out)
        )

        # 1x1 convolution with stride 2 means we take the upper left value, and transform it to new output size
        self.downsample = nn.Conv2d(c_in, c_out, kernel_size=1, stride=2) if subsample else None
        self.act_fn = act_fn()

    def forward(self, x):
        z = self.net(x)
        if self.downsample is not None:
            x = self.downsample(x)
        out = z + x
        out = self.act_fn(out)
        return out
```
The second block we implement is the pre-activation ResNet block. For this, we have to change the order of layers in `self.net`, and do not apply an activation function on the output. Additionally, the downsampling operation has to apply a non-linearity as well, since the input $x_l$ has not yet been processed by a non-linearity. Hence, the block looks as follows:
```
class PreActResNetBlock(nn.Module):

    def __init__(self, c_in, act_fn, subsample=False, c_out=-1):
        """
        Inputs:
            c_in - Number of input features
            act_fn - Activation class constructor (e.g. nn.ReLU)
            subsample - If True, we want to apply a stride inside the block and reduce the output shape by 2 in height and width
            c_out - Number of output features. Note that this is only relevant if subsample is True, as otherwise, c_out = c_in
        """
        super().__init__()
        if not subsample:
            c_out = c_in

        # Network representing F
        self.net = nn.Sequential(
            nn.BatchNorm2d(c_in),
            act_fn(),
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, stride=1 if not subsample else 2, bias=False),
            nn.BatchNorm2d(c_out),
            act_fn(),
            nn.Conv2d(c_out, c_out, kernel_size=3, padding=1, bias=False)
        )

        # 1x1 convolution needs to apply non-linearity as well as not done on skip connection
        self.downsample = nn.Sequential(
            nn.BatchNorm2d(c_in),
            act_fn(),
            nn.Conv2d(c_in, c_out, kernel_size=1, stride=2, bias=False)
        ) if subsample else None

    def forward(self, x):
        z = self.net(x)
        if self.downsample is not None:
            x = self.downsample(x)
        out = z + x
        return out
```
Similarly to the model selection, we define a dictionary to create a mapping from string to block class. We will use the string name as hyperparameter value in our model to choose between the ResNet blocks. Feel free to implement any other ResNet block type and add it here as well.
```
resnet_blocks_by_name = {
"ResNetBlock": ResNetBlock,
"PreActResNetBlock": PreActResNetBlock
}
```
The overall ResNet architecture consists of stacking multiple ResNet blocks, some of which downsample the input. When talking about ResNet blocks in the whole network, we usually group them by output shape. Hence, if we say the ResNet has `[3,3,3]` blocks, it means that we have 3 groups of 3 ResNet blocks each, with subsampling taking place in the fourth and seventh blocks. The ResNet with `[3,3,3]` blocks on CIFAR10 is visualized below.
<center width="100%"><img src="resnet_notation.svg" width="500px"></center>
The three groups operate on the resolutions $32\times32$, $16\times16$ and $8\times8$ respectively. The blocks in orange denote ResNet blocks with downsampling. The same notation is used by many other implementations such as in the [torchvision library](https://pytorch.org/docs/stable/_modules/torchvision/models/resnet.html#resnet18) from PyTorch. Thus, our code looks as follows:
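The subsampling pattern can be reproduced with the same loop condition used in the `_create_network` method below (pure Python, no model needed):

```python
num_blocks = [3, 3, 3]
# Same condition as in the model: subsample the first block of each group, except the very first group
subsample_flags = [(bc == 0 and block_idx > 0)
                   for block_idx, block_count in enumerate(num_blocks)
                   for bc in range(block_count)]
# 1-indexed positions of the subsampling blocks: the fourth and seventh
print([i + 1 for i, flag in enumerate(subsample_flags) if flag])  # [4, 7]
```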
```
class ResNet(nn.Module):

    def __init__(self, num_classes=10, num_blocks=[3, 3, 3], c_hidden=[16, 32, 64], act_fn_name="relu", block_name="ResNetBlock", **kwargs):
        """
        Inputs:
            num_classes - Number of classification outputs (10 for CIFAR10)
            num_blocks - List with the number of ResNet blocks to use. The first block of each group uses downsampling, except the first group.
            c_hidden - List with the hidden dimensionalities in the different blocks. Usually multiplied by 2 the deeper we go.
            act_fn_name - Name of the activation function to use, looked up in "act_fn_by_name"
            block_name - Name of the ResNet block, looked up in "resnet_blocks_by_name"
        """
        super().__init__()
        assert block_name in resnet_blocks_by_name
        self.hparams = SimpleNamespace(num_classes=num_classes,
                                       c_hidden=c_hidden,
                                       num_blocks=num_blocks,
                                       act_fn_name=act_fn_name,
                                       act_fn=act_fn_by_name[act_fn_name],
                                       block_class=resnet_blocks_by_name[block_name])
        self._create_network()
        self._init_params()

    def _create_network(self):
        c_hidden = self.hparams.c_hidden

        # A first convolution on the original image to scale up the channel size
        if self.hparams.block_class == PreActResNetBlock:  # => Don't apply non-linearity on output
            self.input_net = nn.Sequential(
                nn.Conv2d(3, c_hidden[0], kernel_size=3, padding=1, bias=False)
            )
        else:
            self.input_net = nn.Sequential(
                nn.Conv2d(3, c_hidden[0], kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(c_hidden[0]),
                self.hparams.act_fn()
            )

        # Creating the ResNet blocks
        blocks = []
        for block_idx, block_count in enumerate(self.hparams.num_blocks):
            for bc in range(block_count):
                subsample = (bc == 0 and block_idx > 0)  # Subsample the first block of each group, except the very first one.
                blocks.append(
                    self.hparams.block_class(c_in=c_hidden[block_idx if not subsample else (block_idx - 1)],
                                             act_fn=self.hparams.act_fn,
                                             subsample=subsample,
                                             c_out=c_hidden[block_idx])
                )
        self.blocks = nn.Sequential(*blocks)

        # Mapping to classification output
        self.output_net = nn.Sequential(
            nn.AdaptiveAvgPool2d((1, 1)),
            nn.Flatten(),
            nn.Linear(c_hidden[-1], self.hparams.num_classes)
        )

    def _init_params(self):
        # Based on our discussion in Tutorial 4, we should initialize the convolutions according to the activation function
        # Fan-out focuses on the gradient distribution, and is commonly used in ResNets
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity=self.hparams.act_fn_name)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def forward(self, x):
        x = self.input_net(x)
        x = self.blocks(x)
        x = self.output_net(x)
        return x
```
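As a quick sanity check of the grouping logic in `_create_network`, the channel and subsampling bookkeeping can be reproduced in plain Python (no PyTorch needed). The values below mirror the defaults `num_blocks=[3,3,3]` and `c_hidden=[16,32,64]`:

```python
# Mirror the channel/subsampling bookkeeping of ResNet._create_network
num_blocks = [3, 3, 3]
c_hidden = [16, 32, 64]

layout = []  # (c_in, c_out, subsample) per block
for block_idx, block_count in enumerate(num_blocks):
    for bc in range(block_count):
        subsample = (bc == 0 and block_idx > 0)
        c_in = c_hidden[block_idx - 1 if subsample else block_idx]
        layout.append((c_in, c_hidden[block_idx], subsample))

# Subsampling happens in the 4th and 7th blocks (1-indexed), as described above
print([i + 1 for i, (_, _, s) in enumerate(layout) if s])  # → [4, 7]
print(layout[3])  # → (16, 32, True): the first downsampling block maps 16 -> 32 channels
```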
We also need to add the new ResNet class to our model dictionary:
```
model_dict["ResNet"] = ResNet
```
Finally, we can train our ResNet models. One difference to the GoogleNet training is that we explicitly use SGD with Momentum as optimizer instead of Adam. Adam often leads to a slightly worse accuracy on plain, shallow ResNets. It is not 100% clear why Adam performs worse in this context, but one possible explanation is related to ResNet's loss surface. ResNet has been shown to produce smoother loss surfaces than networks without skip connection (see [Li et al., 2018](https://arxiv.org/pdf/1712.09913.pdf) for details). A possible visualization of the loss surface with/out skip connections is below (figure credit - [Li et al.](https://arxiv.org/pdf/1712.09913.pdf)):
<center width="100%"><img src="resnet_loss_surface.svg" style="display: block; margin-left: auto; margin-right: auto;" width="600px"/></center>
The $x$ and $y$ axes show a projection of the parameter space, and the $z$ axis shows the loss values achieved by different parameter settings. On smooth surfaces like the one on the right, we might not need the adaptive learning rate that Adam provides. Instead, Adam can get stuck in local optima, while SGD finds the wider minima that tend to generalize better.
Answering this question in detail would require an extra tutorial, because it is not easy to answer. For now, we conclude: for ResNet architectures, treat the optimizer as an important hyperparameter, and try training with both Adam and SGD. Let's train the model below with SGD:
```
resnet_model, resnet_results = train_model(model_name="ResNet",
model_hparams={"num_classes": 10,
"c_hidden": [16,32,64],
"num_blocks": [3,3,3],
"act_fn_name": "relu"},
optimizer_name="SGD",
optimizer_hparams={"lr": 0.1,
"momentum": 0.9,
"weight_decay": 1e-4})
```
Let's also train the pre-activation ResNet as comparison:
```
resnetpreact_model, resnetpreact_results = train_model(model_name="ResNet",
model_hparams={"num_classes": 10,
"c_hidden": [16,32,64],
"num_blocks": [3,3,3],
"act_fn_name": "relu",
"block_name": "PreActResNetBlock"},
optimizer_name="SGD",
optimizer_hparams={"lr": 0.1,
"momentum": 0.9,
"weight_decay": 1e-4},
save_name="ResNetPreAct")
```
### Tensorboard log
Similarly to our GoogleNet model, we also have a TensorBoard log for the ResNet model. We can open it below.
```
# Opens tensorboard in notebook. Adjust the path to your CHECKPOINT_PATH! Feel free to change "ResNet" to "ResNetPreAct"
%tensorboard --logdir ../saved_models/tutorial5/tensorboards/ResNet/
```
<center width="100%"><img src="tensorboard_screenshot_ResNet.png" width="1000px"></center>
Feel free to explore the TensorBoard yourself, including the computation graph. In general, we can see that with SGD, the ResNet has a higher training loss than the GoogleNet in the first stage of the training. After reducing the learning rate however, the model achieves even higher validation accuracies. We compare the precise scores at the end of the notebook.
## DenseNet
[DenseNet](https://arxiv.org/abs/1608.06993) is another architecture for enabling very deep neural networks and takes a slightly different perspective on residual connections. Instead of modeling the difference between layers, DenseNet treats residual connections as a way to reuse features across layers, removing any need to learn redundant feature maps. Deeper in the network, the model learns abstract features to recognize patterns. However, some complex patterns consist of a combination of abstract features (e.g. hand, face, etc.) and low-level features (e.g. edges, basic color, etc.). To find these low-level features in the deep layers, standard CNNs have to learn to copy such feature maps, which wastes a lot of parameter complexity. DenseNet provides an efficient way of reusing features by having each convolution depend on all previous input features, while adding only a small number of filters to them. See the figure below for an illustration (figure credit - [Huang et al.](https://arxiv.org/abs/1608.06993)):
<center width="100%"><img src="densenet_block.svg" style="display: block; margin-left: auto; margin-right: auto;" width="500px"/></center>
The last layer, called the transition layer, is responsible for reducing the dimensionality of the feature maps in height, width, and channel size. Although transition layers technically break the identity backpropagation, there are only a few of them in a network, so they don't affect the gradient flow much.
We split the implementation of the layers in DenseNet into three parts: a `DenseLayer`, a `DenseBlock`, and a `TransitionLayer`. The module `DenseLayer` implements a single layer inside a dense block. It applies a 1x1 convolution for dimensionality reduction, followed by a 3x3 convolution. The output channels are concatenated to the original input and returned. Note that we apply Batch Normalization as the first layer of each block. This allows slightly different activations of the same features in different layers, depending on what is needed. Overall, we can implement it as follows:
```
class DenseLayer(nn.Module):
def __init__(self, c_in, bn_size, growth_rate, act_fn):
"""
Inputs:
c_in - Number of input channels
bn_size - Bottleneck size (factor of growth rate) for the output of the 1x1 convolution. Typically between 2 and 4.
growth_rate - Number of output channels of the 3x3 convolution
act_fn - Activation class constructor (e.g. nn.ReLU)
"""
super().__init__()
self.net = nn.Sequential(
nn.BatchNorm2d(c_in),
act_fn(),
nn.Conv2d(c_in, bn_size * growth_rate, kernel_size=1, bias=False),
nn.BatchNorm2d(bn_size * growth_rate),
act_fn(),
nn.Conv2d(bn_size * growth_rate, growth_rate, kernel_size=3, padding=1, bias=False)
)
def forward(self, x):
out = self.net(x)
out = torch.cat([out, x], dim=1)
return out
```
The module `DenseBlock` summarizes multiple dense layers applied in sequence. Each dense layer takes as input the original input concatenated with all previous layers' feature maps:
```
class DenseBlock(nn.Module):
def __init__(self, c_in, num_layers, bn_size, growth_rate, act_fn):
"""
Inputs:
c_in - Number of input channels
num_layers - Number of dense layers to apply in the block
bn_size - Bottleneck size to use in the dense layers
growth_rate - Growth rate to use in the dense layers
act_fn - Activation function to use in the dense layers
"""
super().__init__()
layers = []
for layer_idx in range(num_layers):
layers.append(
DenseLayer(c_in=c_in + layer_idx * growth_rate, # Input channels are original plus the feature maps from previous layers
bn_size=bn_size,
growth_rate=growth_rate,
act_fn=act_fn)
)
self.block = nn.Sequential(*layers)
def forward(self, x):
out = self.block(x)
return out
```
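The channel arithmetic inside a dense block can be checked with a small standalone sketch. The numbers are illustrative, matching the first block of the DenseNet below (`c_in=32`, `growth_rate=16`, 6 layers):

```python
# Input channels seen by each DenseLayer inside one DenseBlock
c_in, num_layers, growth_rate = 32, 6, 16

layer_inputs = [c_in + layer_idx * growth_rate for layer_idx in range(num_layers)]
block_output = c_in + num_layers * growth_rate  # every layer concatenates growth_rate new channels

print(layer_inputs)  # → [32, 48, 64, 80, 96, 112]
print(block_output)  # → 128
```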
Finally, the `TransitionLayer` takes as input the final output of a dense block and reduces its channel dimensionality using a 1x1 convolution. To reduce the height and width dimension, we take a slightly different approach than in ResNet and apply an average pooling with kernel size 2 and stride 2. This is because we don't have an additional connection to the output that would consider the full 2x2 patch instead of a single value. Besides, it is more parameter efficient than using a 3x3 convolution with stride 2. Thus, the layer is implemented as follows:
```
class TransitionLayer(nn.Module):
def __init__(self, c_in, c_out, act_fn):
super().__init__()
self.transition = nn.Sequential(
nn.BatchNorm2d(c_in),
act_fn(),
nn.Conv2d(c_in, c_out, kernel_size=1, bias=False),
nn.AvgPool2d(kernel_size=2, stride=2) # Average the output for each 2x2 pixel group
)
def forward(self, x):
return self.transition(x)
```
Now we can put everything together and create our DenseNet. To specify the number of layers, we use a similar notation as in ResNets and pass on a list of ints representing the number of layers per block. After each dense block except the last one, we apply a transition layer to reduce the dimensionality by 2.
```
class DenseNet(nn.Module):
def __init__(self, num_classes=10, num_layers=[6,6,6,6], bn_size=2, growth_rate=16, act_fn_name="relu", **kwargs):
super().__init__()
self.hparams = SimpleNamespace(num_classes=num_classes,
num_layers=num_layers,
bn_size=bn_size,
growth_rate=growth_rate,
act_fn_name=act_fn_name,
act_fn=act_fn_by_name[act_fn_name])
self._create_network()
self._init_params()
def _create_network(self):
c_hidden = self.hparams.growth_rate * self.hparams.bn_size # The start number of hidden channels
# A first convolution on the original image to scale up the channel size
self.input_net = nn.Sequential(
nn.Conv2d(3, c_hidden, kernel_size=3, padding=1) # No batch norm or activation function as done inside the Dense layers
)
# Creating the dense blocks, eventually including transition layers
blocks = []
for block_idx, num_layers in enumerate(self.hparams.num_layers):
blocks.append(
DenseBlock(c_in=c_hidden,
num_layers=num_layers,
bn_size=self.hparams.bn_size,
growth_rate=self.hparams.growth_rate,
act_fn=self.hparams.act_fn)
)
c_hidden = c_hidden + num_layers * self.hparams.growth_rate # Overall output of the dense block
if block_idx < len(self.hparams.num_layers)-1: # Don't apply transition layer on last block
blocks.append(
TransitionLayer(c_in=c_hidden,
c_out=c_hidden // 2,
act_fn=self.hparams.act_fn))
c_hidden = c_hidden // 2
self.blocks = nn.Sequential(*blocks)
# Mapping to classification output
self.output_net = nn.Sequential(
nn.BatchNorm2d(c_hidden), # The features have not passed a non-linearity until here.
self.hparams.act_fn(),
nn.AdaptiveAvgPool2d((1,1)),
nn.Flatten(),
nn.Linear(c_hidden, self.hparams.num_classes)
)
def _init_params(self):
# Based on our discussion in Tutorial 4, we should initialize the convolutions according to the activation function
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, nonlinearity=self.hparams.act_fn_name)
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
def forward(self, x):
x = self.input_net(x)
x = self.blocks(x)
x = self.output_net(x)
return x
```
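To see why the classifier head receives the channel count it does, we can trace the channel dimensionality through `_create_network` in plain Python, using the default hyperparameters:

```python
# Trace the channel dimensionality through DenseNet._create_network (defaults)
num_layers = [6, 6, 6, 6]
bn_size, growth_rate = 2, 16

c_hidden = growth_rate * bn_size  # 32 channels after the input convolution
trace = [c_hidden]
for block_idx, n in enumerate(num_layers):
    c_hidden += n * growth_rate            # each dense block adds n * growth_rate channels
    if block_idx < len(num_layers) - 1:    # transition layer halves, except after the last block
        c_hidden //= 2
    trace.append(c_hidden)

print(trace)  # → [32, 64, 80, 88, 184]; the final Linear layer therefore gets 184 channels
```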
Let's also add the DenseNet to our model dictionary:
```
model_dict["DenseNet"] = DenseNet
```
Lastly, we train our network. In contrast to ResNet, DenseNet does not show any issues with Adam, and hence we train it with this optimizer. The other hyperparameters are chosen to result in a network with a similar parameter size as the ResNet and GoogleNet. Commonly, when designing very deep networks, DenseNet is more parameter efficient than ResNet while achieving a similar or even better performance.
```
densenet_model, densenet_results = train_model(model_name="DenseNet",
model_hparams={"num_classes": 10,
"num_layers": [6,6,6,6],
"bn_size": 2,
"growth_rate": 16,
"act_fn_name": "relu"},
optimizer_name="Adam",
optimizer_hparams={"lr": 1e-3,
"weight_decay": 1e-4})
```
### Tensorboard log
Finally, we also have another TensorBoard for the DenseNet training. We take a look at it below:
```
# Opens tensorboard in notebook. Adjust the path to your CHECKPOINT_PATH!
%tensorboard --logdir ../saved_models/tutorial5/tensorboards/DenseNet/
```
<center width="100%"><img src="tensorboard_screenshot_DenseNet.png" width="1000px"></center>
The overall course of the validation accuracy and training loss resembles the training of GoogleNet, which is also related to training the network with Adam. Feel free to explore the training metrics yourself.
## Conclusion and Comparison
After discussing each model separately, and training all of them, we can finally compare them. First, let's organize the results of all models in a table:
```
%%html
<!-- Some HTML code to increase font size in the following table -->
<style>
th {font-size: 120%;}
td {font-size: 120%;}
</style>
```
```
import tabulate
from IPython.display import display, HTML
all_models = [
("GoogleNet", googlenet_results, googlenet_model),
("ResNet", resnet_results, resnet_model),
("ResNetPreAct", resnetpreact_results, resnetpreact_model),
("DenseNet", densenet_results, densenet_model)
]
table = [[model_name,
"%4.2f%%" % (100.0*model_results["val"]),
"%4.2f%%" % (100.0*model_results["test"]),
"{:,}".format(sum([np.prod(p.shape) for p in model.parameters()]))]
for model_name, model_results, model in all_models]
display(HTML(tabulate.tabulate(table, tablefmt='html', headers=["Model", "Val Accuracy", "Test Accuracy", "Num Parameters"])))
```
First of all, we see that all models perform reasonably well. Simple models, such as the ones you implemented in the practical, achieve considerably lower performance, which, besides the lower number of parameters, is also attributable to the architecture design choices. GoogleNet obtains the lowest performance on the validation and test set, although it is very close to DenseNet. A proper hyperparameter search over all the channel sizes in GoogleNet would likely improve its accuracy to a similar level, but this is also expensive given the large number of hyperparameters. ResNet outperforms both DenseNet and GoogleNet by more than 1% on the validation set, while there is only a minor difference between the original and pre-activation versions. We can conclude that for shallow networks, the placement of the activation function does not seem to be crucial, although papers have reported the contrary for very deep networks (e.g. [He et al.](https://arxiv.org/abs/1603.05027)).
In general, we can conclude that ResNet is a simple but powerful architecture. If we applied these models to more complex tasks with larger images and more layers inside the networks, we would likely see a bigger gap between GoogleNet and skip-connection architectures like ResNet and DenseNet. A comparison with deeper models on CIFAR10 can be found, for example, [here](https://github.com/kuangliu/pytorch-cifar). Interestingly, DenseNet outperforms the original ResNet in that setup but comes in slightly behind the Pre-Activation ResNet. The best model, a Dual Path Network ([Chen et al.](https://arxiv.org/abs/1707.01629)), is actually a combination of ResNet and DenseNet, showing that the two offer complementary advantages.
### Which model should I choose for my task?
We have reviewed four different models. So, which one should we choose if given a new task? Usually, starting with a ResNet is a good idea, given its superior performance on the CIFAR dataset and its simple implementation. Besides, for the parameter count we have chosen here, ResNet is the fastest, as DenseNet and GoogleNet have many more layers that are applied in sequence in our primitive implementation. However, if you have a really difficult task, such as semantic segmentation on HD images, more complex variants of ResNet and DenseNet are recommended.
| github_jupyter |
# Exploring Cartpole with Reinforcement Learning using Deep Q-learning
This notebook is a modification of the [PyTorch RL DQN](https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html) tutorial.
It follows the Reinforcement Learning, Q-learning & OpenAI class from RIIA 2019.
```
# Let's take a look at the Cartpole environment:
import gym
env = gym.make('CartPole-v0')
env.reset()
for _ in range(30):
    env.render()
    env.step(env.action_space.sample())  # Take a random action
env.close()
# Import the required libraries:
import gym
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from IPython import display
plt.ion()
from collections import namedtuple
from itertools import count
from PIL import Image
# The solutions use PyTorch; you may use Keras and/or TensorFlow if you prefer
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.transforms as T
# OpenAI "Cart pole" environment
enviroment = gym.make('CartPole-v0').unwrapped
enviroment.render()
# Check whether a GPU is available and use it if so
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print('Number of actions: {}'.format(enviroment.action_space.n))
print('State dimension: {}'.format(enviroment.observation_space))
# Temporal discount factor
gamma = 0.8
# Number of samples to draw from the experience replay buffer to train the network
No_grupo = 64
# Parameters for the epsilon-greedy rate, which decays exponentially
eps_inicial = 0.9
eps_final = 0.05
eps_tasa = 200
# Learning rate for stochastic gradient descent
lr = 0.001
# How often to update the target network
actualizar_red_med = 10
# Number of episodes to train for
No_episodios = 200
iters = 0
duracion_episodios = []
```
Define a function called `genera_accion` that receives the `estado` (state) vector and takes either the optimal action or a random one. The random action should be chosen with a probability that decays exponentially, so that the agent explores more at the beginning.
With probability $$\epsilon_{final}+(\epsilon_{inicial}-\epsilon_{final})\times e^{-iters/tasa_{\epsilon}}$$ a random action is chosen. The plot below shows this exponentially decaying rate.
```
plt.plot([eps_final + (eps_inicial - eps_final) * math.exp(-1. * iters / eps_tasa) for iters in range(1000)])
plt.title('Exponential decay of the exploration rate')
plt.xlabel('Iteration')
plt.ylabel('Exploration probability: $\epsilon$')
plt.show()
def genera_accion(estado):
global iters
decimal = random.uniform(0, 1)
limite_epsilon = eps_final + (eps_inicial - eps_final) * math.exp(-1. * iters / eps_tasa)
iters += 1
if decimal > limite_epsilon:
with torch.no_grad():
return red_estrategia(estado).max(0)[1].view(1)
else:
return torch.tensor([random.randrange(2)], device=device, dtype=torch.long)
```
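The schedule used inside `genera_accion` is easy to probe numerically. This small sketch reuses the hyperparameters defined above:

```python
import math

eps_inicial, eps_final, eps_tasa = 0.9, 0.05, 200

def epsilon(iters):
    # Probability of taking a random (exploratory) action at iteration `iters`
    return eps_final + (eps_inicial - eps_final) * math.exp(-1.0 * iters / eps_tasa)

print(round(epsilon(0), 3))     # → 0.9   (mostly exploration at the start)
print(round(epsilon(200), 3))   # → 0.363
print(round(epsilon(2000), 3))  # → 0.05  (almost pure exploitation)
```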
Create a neural network that receives the state vector and returns a vector whose dimension equals the number of actions.
```
class red_N(nn.Module):
def __init__(self):
super(red_N, self).__init__()
        # Dense layers
self.capa_densa1 = nn.Linear(4, 256)
self.capa_densa2 = nn.Linear(256, 128)
self.final = nn.Linear(128, 2)
def forward(self, x):
        # Network architecture, with ReLU activations on the two hidden layers
x = F.relu(self.capa_densa1(x))
x = F.relu(self.capa_densa2(x))
return self.final(x)
```
In the next cell we create an experience replay class with several methods:
`guarda`: stores the observation $(s_i,a_i,s_i',r_i)$
`muestra`: draws a sample of size `No_grupo`
`len`: returns the number of samples currently in the buffer
```
Transicion = namedtuple('Transicion',
('estado', 'accion', 'sig_estado', 'recompensa'))
class repositorioExperiencia(object):
def __init__(self, capacidad):
self.capacidad = capacidad
self.memoria = []
self.posicion = 0
def guarda(self, *args):
"""Guarda una transición."""
if len(self.memoria) < self.capacidad:
self.memoria.append(None)
self.memoria[self.posicion] = Transicion(*args)
self.posicion = (self.posicion + 1) % self.capacidad
def muestra(self, batch_size):
return random.sample(self.memoria, batch_size)
def __len__(self):
return len(self.memoria)
```
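The buffer overwrites its oldest transitions once `capacidad` is reached. Here is a minimal sketch of that circular-buffer behavior, using integers in place of transitions:

```python
# A capacity-3 circular buffer, mirroring the logic of repositorioExperiencia.guarda
capacidad = 3
memoria, posicion = [], 0

def guarda(x):
    global posicion
    if len(memoria) < capacidad:
        memoria.append(None)
    memoria[posicion] = x
    posicion = (posicion + 1) % capacidad  # wrap around once full

for t in range(5):
    guarda(t)

print(memoria)  # → [3, 4, 2]: transitions 0 and 1 were overwritten by 3 and 4
```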
In the next cell we define a function called `actualiza_q` that implements DQL:
1. Draw a sample of size `No_grupo`,
2. Using `red_estrategia`, compute $Q_{\theta}(s_t,a_t)$ for the sample
3. Compute $V^*(s_{t+1})$ using `red_etiqueta`
4. Compute the targets $y_j=r_j+\gamma\max_aQ_{\theta'}(s_{t+1},a)$
5. Compute the loss for $Q_{\theta}(s_t,a_t)-y_j$
6. Update $\theta$
```
def actualiza_q():
if len(memoria) < No_grupo:
return
transiciones = memoria.muestra(No_grupo)
# Transpose the batch (see http://stackoverflow.com/a/19343/3343043 for
# detailed explanation).
grupo = Transicion(*zip(*transiciones))
# Compute a mask of non-final states and concatenate the batch elements
estados_intermedios = torch.tensor(tuple(map(lambda s: s is not None,
grupo.sig_estado)), device=device, dtype=torch.uint8)
sig_estados_intermedios = torch.cat([s for s in grupo.sig_estado
if s is not None])
grupo_estado = torch.cat(grupo.estado)
accion_grupo = torch.cat(grupo.accion)
recompensa_grupo = torch.cat(grupo.recompensa)
    # Compute Q(s_t, a_t): one way is to use red_estrategia to compute Q(s_t)
    # and select the columns of the actions taken using the gather function
q_actual = red_estrategia(grupo_estado).gather(1, accion_grupo.unsqueeze(1))
    # Compute V*(s_{t+1}) for all next states in the batch using red_etiqueta
valores_sig_estado = torch.zeros(No_grupo, device=device)
valores_sig_estado[estados_intermedios] = red_etiqueta(sig_estados_intermedios).max(1)[0].detach()
    # Compute the targets
y_j = (valores_sig_estado * gamma) + recompensa_grupo
    # Compute the loss (the Huber loss below is a common alternative)
#perdida = F.smooth_l1_loss(q_actual, y_j.unsqueeze(1))
perdida = F.mse_loss(q_actual, y_j.unsqueeze(1))
    # Optimize the model
optimizador.zero_grad()
perdida.backward()
for param in red_estrategia.parameters():
param.grad.data.clamp_(-1, 1)
optimizador.step()
# Function to plot episode durations
def grafica_duracion(dur):
plt.figure(2)
plt.clf()
duracion_t = torch.tensor(duracion_episodios, dtype=torch.float)
    plt.title('Training...')
    plt.xlabel('Episode')
    plt.ylabel('Duration')
plt.plot(duracion_t.numpy())
    # Plot the moving average of the episode duration over 15 episodes
if len(duracion_t) >= 15:
media = duracion_t.unfold(0, 15, 1).mean(1).view(-1)
media = torch.cat((torch.zeros(14), media))
plt.plot(media.numpy())
plt.plot([200]*len(duracion_t))
    plt.pause(dur)  # Pause briefly so the plots can be seen
display.clear_output(wait=True)
display.display(plt.gcf())
red_estrategia = red_N().to(device)
red_etiqueta = red_N().to(device)
red_etiqueta.load_state_dict(red_estrategia.state_dict())
red_etiqueta.eval()
#optimizador = optim.RMSprop(red_estrategia.parameters())
optimizador = optim.Adam(red_estrategia.parameters(),lr=lr)
memoria = repositorioExperiencia(10000)
# Training
for episodio in range(0, No_episodios):
    # Reset the environment
estado = enviroment.reset()
estado = torch.tensor(estado, dtype = torch.float)
# Initialize variables
recompensa = 0
termina = False
for t in count():
        # Choose which action to take
accion = genera_accion(estado)
        # Apply the action and observe the environment's response
sig_estado, recompensa, termina, _ = enviroment.step(accion.item())
        # Convert observations to tensors
estado = torch.tensor(estado, dtype = torch.float)
sig_estado = torch.tensor(sig_estado, dtype = torch.float)
        # If the episode ended (termina = True), make the reward negative
if termina:
recompensa = -recompensa
recompensa = torch.tensor([recompensa], device=device)
        # Store the transition in memory
memoria.guarda(estado.unsqueeze(0), accion, sig_estado.unsqueeze(0), recompensa)
        # Update the Q-values of the policy network
actualiza_q()
        # Move to the next state
estado = sig_estado
        # Plot the episode durations
if termina:
duracion_episodios.append(t + 1)
break
    # Update red_etiqueta (the target network)
if episodio % actualizar_red_med == 0:
red_etiqueta.load_state_dict(red_estrategia.state_dict())
grafica_duracion(0.3)
print("**********************************")
print("Entrenamiento finalizado!\n")
print("**********************************")
grafica_duracion(15)
grafica_duracion(15)
```
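The heart of `actualiza_q` is the target computation $y_j = r_j + \gamma \max_a Q_{\theta'}(s_{t+1}, a)$. The sketch below illustrates it in plain Python with hypothetical Q-values; `None` stands in for a terminal next state, whose future value is zero (the role of the `estados_intermedios` mask above):

```python
gamma = 0.8

# Hypothetical target-network Q-values for three sampled next states
# (None marks a terminal state, where the future value is zero)
q_next = [[0.2, 0.7], [1.5, 1.1], None]
rewards = [1.0, 1.0, -1.0]

targets = []
for q, r in zip(q_next, rewards):
    v_next = max(q) if q is not None else 0.0  # V*(s') = max_a Q_target(s', a)
    targets.append(r + gamma * v_next)

print([round(t, 2) for t in targets])  # → [1.56, 2.2, -1.0]
```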
| github_jupyter |
```
import pandas as pd
import numpy as np
from boruta import BorutaPy
from IPython.display import display
```
### Data Prep
```
df = pd.read_csv('data/aml_df.csv')
df.drop(columns=['Unnamed: 0'], inplace=True)
display(df.info())
df.head()
#holdout validation set
final_val = df.sample(frac=0.2)
#X and y for holdout
final_X = final_val[model_columns]
final_y = final_val.iloc[:, -1]
# training data
data = df.drop(index= final_val.index)
X = data[model_columns]
y = data.iloc[:, -1]
display(X.info())
X.head()
```
# Feature Reduction
### Boruta
```
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=1000, max_depth=20, random_state=8, n_jobs=-1)
feat_selector = BorutaPy(rf, n_estimators='auto', verbose=2, max_iter = 200, random_state=8)
feat_selector.fit(X.values, y.values)
selected = X.values[:, feat_selector.support_]
print(selected.shape)
boruta_mask = feat_selector.support_
boruta_features = model_columns[boruta_mask]
boruta_df = df[model_columns[boruta_mask]]
```
### Lasso
```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import log_loss, accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
log_model = LogisticRegression(penalty='l1', solver='saga', max_iter=10000)
kf = KFold(n_splits=5, shuffle=True)
ll_performance = []
model_weights = []
for train_index, test_index in kf.split(X):
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
log_model.fit(X_train, y_train)
y_pred = log_model.predict_proba(X_test)
log_ll = log_loss(y_test, y_pred)
ll_performance.append(log_ll)
model_weights.append(log_model.coef_)
print(ll_performance)
average_weight = np.mean(model_weights, axis=0)[0]
def important_gene_mask(columns, coefs):
mask = coefs != 0
important_genes = columns[mask[0]]
print(len(important_genes))
return important_genes
lasso_k1 = set(important_gene_mask(model_columns, model_weights[0]))
lasso_k2 = set(important_gene_mask(model_columns, model_weights[1]))
lasso_k3 = set(important_gene_mask(model_columns, model_weights[2]))
lasso_k4 = set(important_gene_mask(model_columns, model_weights[3]))
lasso_k5 = set(important_gene_mask(model_columns, model_weights[4]))
lasso_gene_union = set.union(lasso_k1, lasso_k2, lasso_k3, lasso_k4, lasso_k5)
len(lasso_gene_union)
lasso_gene_intersection = set.intersection(lasso_k1, lasso_k2, lasso_k3, lasso_k4, lasso_k5)
len(lasso_gene_intersection)
lasso_columns = list(lasso_gene_union)
lasso_boruta_intersection = set.intersection(set(boruta_features), lasso_gene_intersection)
len(lasso_boruta_intersection)
lasso_boruta_intersection
gene_name = ['HOXA9', 'HOXA3', 'HOXA6', 'TPSG1', 'HOXA7', 'SPATA6', 'GPR12', 'LRP4',
'CPNE8', 'ST18', 'MPV17L', 'TRH', 'TPSAB1', 'GOLGA8M', 'GT2B11',
'ANKRD18B', 'AC055876.1', 'WHAMMP2', 'HOXA10-AS', 'HOXA10',
'HOXA-AS3', 'PDCD6IPP1', 'WHAMMP3']
gene_zip = list(zip(lasso_boruta_intersection, gene_name))
gene_zip
pd.DataFrame(gene_zip)
```
These are the features deemed most important by both the lasso rounds and Boruta.
```
boruta_not_lasso = set.difference(set(boruta_features), lasso_gene_union)
len(boruta_not_lasso)
```
25 features were considered important by Boruta but were not picked up by any of the lasso rounds... why?
| github_jupyter |
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Text classification with movie reviews
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/ko/r1/tutorials/keras/basic_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />구글 코랩(Colab)에서 실행하기</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/ko/r1/tutorials/keras/basic_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />깃허브(GitHub) 소스 보기</a>
</td>
</table>
Note: This document was translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that they exactly reflect the [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions for improving this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To get involved in translating or reviewing documentation, email [docs-ko@tensorflow.org](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ko).
This notebook classifies movie review text as *positive* or *negative*. This is an example of *binary* (two-class) classification, an important and widely applicable kind of machine-learning problem.
We'll use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb), which contains the text of 50,000 movie reviews collected from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*: they contain an equal number of positive and negative reviews.
This notebook uses [tf.keras](https://www.tensorflow.org/r1/guide/keras), a high-level Python API for building and training models in TensorFlow. For a more advanced text-classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/).
```
# keras.datasets.imdb is broken in TF 1.13 and 1.14 with numpy 1.16.3
!pip install tf_nightly
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
```
## Download the IMDB dataset
The IMDB dataset comes packaged with TensorFlow. It has already been preprocessed so that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.
The following code downloads the IMDB dataset to your machine (or uses a cached copy if you've already downloaded it):
```
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
```
The argument `num_words=10000` keeps the 10,000 most frequently occurring words in the training data. The rare words are discarded to keep the size of the data manageable.
## Explore the data
Let's take a moment to understand the format of the data. Each example is an array of integers representing the words of a movie review. Each label is an integer of either 0 or 1, where 0 is a negative review and 1 is a positive review.
```
print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
```
The text of the reviews has been converted to integers, each representing a specific word in a dictionary. Here's what the first review looks like:
```
print(train_data[0])
```
Movie reviews may be different lengths. The following code shows the number of words in the first and second reviews. Since inputs to a neural network must be the same length, we'll need to resolve this later.
```
len(train_data[0]), len(train_data[1])
```
### Convert the integers back to words
It may be useful to know how to convert integers back to text. Here we'll create a helper function to query a dictionary object that contains the integer-to-string mapping:
```
# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# The first few indices are reserved
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
```
Now we can use the `decode_review` function to display the text of the first review:
```
decode_review(train_data[0])
```
## Prepare the data
The reviews (arrays of integers) must be converted to tensors before being fed into the neural network. This conversion can be done in a couple of ways:
* One-hot encode the arrays to convert them into vectors of 0s and 1s. For example, the array [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then make this the first layer in the network, a `Dense` layer, that can handle floating-point vector data. This approach is memory-intensive, though, requiring a matrix of size `num_words * num_reviews`.
* Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape `max_length * num_reviews`. We can use an embedding layer capable of handling this shape as the first layer in our network.
In this tutorial, we'll use the second approach.
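For intuition, the first (one-hot, or more precisely multi-hot) approach can be sketched in plain NumPy. The helper name `multi_hot_encode` is ours, not part of the tutorial:

```python
import numpy as np

def multi_hot_encode(sequences, dimension=10000):
    # One row per review, one column per vocabulary word
    results = np.zeros((len(sequences), dimension), dtype=np.float32)
    for i, seq in enumerate(sequences):
        results[i, seq] = 1.0  # set the indices that occur in the review to 1
    return results

encoded = multi_hot_encode([[3, 5]], dimension=10)
print(encoded[0])  # → [0. 0. 0. 1. 0. 1. 0. 0. 0. 0.]
```

Note that this drops word order and word counts, which is one reason the padding-plus-embedding approach below tends to work better.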
Since the movie reviews must be the same length, we'll use the [pad_sequences](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences) function to standardize the lengths:
```
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
```
Let's look at the length of the examples now:
```
len(train_data[0]), len(train_data[1])
```
And inspect the (now padded) first review:
```
print(train_data[0])
```
## Build the model
The neural network is created by stacking layers. This requires two main architectural decisions:
* How many layers to use in the model?
* How many *hidden units* to use for each layer?
In this example, the input data consists of arrays of word indices. The labels to predict are either 0 or 1. Let's build a model for this problem:
```
# The input shape is the vocabulary size used for the movie review dataset (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16, input_shape=(None,)))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))
model.summary()
```
The layers are stacked sequentially to build the classifier:
1. The first layer is an `Embedding` layer. This layer takes the integer-encoded vocabulary and looks up the embedding vector for each word index. These vectors are learned as the model trains. The vectors add a dimension to the output array; the resulting dimensions are `(batch, sequence, embedding)`.
2. Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the `sequence` dimension. This is the simplest way to handle input of variable length.
3. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units.
4. The last layer is a fully-connected layer with a single output node. Using the `sigmoid` activation function, it outputs a float between 0 and 1, representing a probability or confidence level.
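The pooling step can be illustrated with plain NumPy (the shapes here are toy values, not the model's):

```python
import numpy as np

# Toy batch: 2 samples, sequence length 4, embedding dimension 3
batch = np.arange(24, dtype=np.float32).reshape(2, 4, 3)
# GlobalAveragePooling1D averages over the sequence axis, turning
# variable-length sequences into one fixed-length vector per sample.
pooled = batch.mean(axis=1)
print(pooled.shape)  # → (2, 3)
```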
### Hidden units
The above model has two intermediate or "hidden" layers between the input and output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer. In other words, it is the amount of freedom the network is allowed when learning an internal representation.
If a model has more hidden units (a higher-dimensional representation space) and/or more layers, the network can learn more complex representations. However, it makes the network more computationally expensive and may lead to learning unwanted patterns, ones that improve performance on training data but not on the test data. This is called *overfitting*, and we'll explore it later.
### Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a `sigmoid` activation), we'll use the `binary_crossentropy` loss function.
This isn't the only choice for a loss function; you could, for instance, choose `mean_squared_error`. But, generally, `binary_crossentropy` is better for dealing with probabilities: it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.
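As a rough intuition for that distance, binary cross-entropy can be computed by hand (a sketch, not the Keras implementation):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Clip to avoid log(0), then average the per-sample losses
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return float(-np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)))

# A confident correct prediction yields a small loss...
low = binary_crossentropy(np.array([1.0]), np.array([0.9]))
# ...while a confident wrong one is penalized heavily.
high = binary_crossentropy(np.array([1.0]), np.array([0.1]))
print(low, high)
```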
Later, when we explore regression problems (say, to predict the price of a house), we'll see how to use the mean squared error loss function.
Now, configure the model to use an optimizer and a loss function:
```
model.compile(optimizer=tf.train.AdamOptimizer(),
loss='binary_crossentropy',
metrics=['acc'])
```
## Create a validation set
When training, we want to check the accuracy of the model on data it hasn't seen before. Create a *validation set* by setting apart 10,000 examples from the original training data. (Why not use the test set now? Our goal is to develop and tune the model using only the training data, then use the test data just once to evaluate our accuracy.)
```
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
```
## Train the model
Train the model for 40 epochs in mini-batches of 512 samples. This is 40 iterations over all samples in the `x_train` and `y_train` tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:
```
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
```
## Evaluate the model
Let's see how the model performs. Two values are returned: loss (a number representing our error; lower is better) and accuracy.
```
results = model.evaluate(test_data, test_labels, verbose=2)
print(results)
```
This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%.
## Create a graph of accuracy and loss over time
`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:
```
history_dict = history.history
history_dict.keys()
```
There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
```
import matplotlib.pyplot as plt
acc = history_dict['acc']
val_acc = history_dict['val_acc']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" means "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# "b" means "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf()   # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```
In this plot, the dots represent the training loss and accuracy, and the solid lines are the validation loss and accuracy.
Notice that the training loss *decreases* with each epoch and the training accuracy *increases*. This is expected when using gradient-descent optimization: it should minimize the desired quantity on every iteration.
This isn't the case for the validation loss and accuracy; they seem to peak after about twenty epochs. This is an example of overfitting: the model performs better on the training data than it does on data it has never seen before. After this point, the model over-optimizes and learns representations *specific* to the training data that do not *generalize* to test data.
For this particular case, we could prevent overfitting by simply stopping the training after twenty or so epochs. Later, we'll see how to do this automatically with a callback.
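The stopping rule itself is simple; a minimal sketch of patience-based early stopping (plain Python, not the Keras callback API) looks like this:

```python
def best_epoch(val_losses, patience=3):
    """Return the 1-based epoch with the lowest validation loss seen
    before `patience` consecutive non-improving epochs occur."""
    best_loss, best_ep, wait = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best_loss:
            best_loss, best_ep, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                break  # stop training; no improvement for `patience` epochs
    return best_ep

print(best_epoch([0.50, 0.40, 0.35, 0.36, 0.37, 0.38, 0.39]))  # → 3
```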
## Preamble
### Import libraries
```
import os, sys
# Import Pandas
import pandas as pd
# Import Plotly and Cufflinks
# Plotly username and API key should be set in environment variables
import plotly
plotly.tools.set_credentials_file(username=os.environ['PLOTLY_USERNAME'], api_key=os.environ['PLOTLY_KEY'])
import plotly.graph_objs as go
import cufflinks as cf
# Import numpy
import numpy as np
```
## Step 1:
### Import CSV containing photovoltaic performance of solar cells into Pandas Data Frame object
```
# Import module to read in secure data
sys.path.append('../data/NREL')
import retrieve_data as rd
solar = rd.retrieve_dirks_sheet()
```
## Step 2:
### Clean the data for inconsistencies
```
sys.path.append('utils')
import process_data as prd
prd.clean_data(solar)
```
## Step 3:
### Import functions from utils and define notebook-specific functions
```
import degradation_utils as du
def get_mode_correlation_percent(df, mode_1, mode_2, weighted):
"""
Return the percent of rows where two modes are seen together
Args:
df (DataFrame): Pandas DataFrame that has been cleaned using the clean_data function
mode_1 (string): Degradation mode to find in the DataFrame in pairing with mode_2
mode_2 (string): Degradation mode to find in the DataFrame in pairing with mode_1
weighted (bool): If true, count all modules in a system as degrading
If false, count a system as one degrading module
Returns:
float: The percentage of modules with both specified degradation modes
"""
# Calculate total number of modules
total_modules = du.get_total_modules(df, weighted)
if total_modules == 0:
return 0
if weighted:
single_modules = len(df[(df['System/module'] == 'Module') & (df[mode_1] == 1) & (df[mode_2] == 1)])
specified = df[(df['System/module'] != 'System') | (df['No.modules'].notnull())]
systems = specified[(specified['System/module'] != 'Module') &
(specified[mode_1] == 1) & (specified[mode_2] == 1)]['No.modules'].sum()
total = single_modules + systems
return float(total) / total_modules
else:
return float(len((df[(df[mode_1] == 1) & (df[mode_2] == 1)]))) / total_modules
def get_heatmap_data(df, modes, weighted):
"""
Returns a DataFrame used to construct a heatmap based on frequency of two degradation modes appearing together
Args:
df (DataFrame): A *cleaned* DataFrame containing the data entries to check modes from
modes (List of String): A list of all modes to check for in the DataFrame
weighted (bool): If true, count all modules in a system as degrading
If false, count a system as one degrading module
Returns:
heatmap (DataFrame): DataFrame containing all of degradation modes correlation frequency results
"""
# Initialize DataFrame to hold heatmap data
heatmap = pd.DataFrame(data=None, columns=modes, index=modes)
# Calculate all single mode percentages
mode_percentages = {}
for mode in modes:
mode_percentages[mode] = du.get_mode_percentage(df, mode, weighted)
# Iterate through every pair of modes
for mode_1 in modes:
for mode_2 in modes:
if mode_1 == mode_2:
heatmap.at[mode_1, mode_2] = np.nan  # DataFrame.set_value was removed in pandas 1.0
else:
print(mode_1 + " & " + mode_2)
heatmap_reflection = heatmap.at[mode_2, mode_1]
# If already calculated the reflection, save and skip
if (not pd.isnull(heatmap_reflection)):
heatmap.at[mode_1, mode_2] = heatmap_reflection
print('Skip, already calculated')
continue
percentage_1 = mode_percentages[mode_1]
percentage_2 = mode_percentages[mode_2]
print('Percentage 1: ' + str(percentage_1))
print('Percentage 2: ' + str(percentage_2))
if (percentage_1 == 0 or percentage_2 == 0):
heatmap.at[mode_1, mode_2] = 0
continue
percentage_both = get_mode_correlation_percent(df, mode_1, mode_2, weighted)
print('Percentage Both: ' + str(percentage_both))
result = float(percentage_both) / (percentage_1 * percentage_2)
print('Result: ' + str(result))
heatmap.at[mode_1, mode_2] = result
return heatmap
```
## Step 4: Generate heatmaps of correlation frequency between degradation modes
### Calculation
Find the correlation strength of all pairs of degradation modes by using the following formula:
P(Degradation mode A & Degradation mode B) / P(Degradation mode A)P(Degradation mode B)
### Weighted: Multiply data entries for module systems by number of modules
Number of degrading modules = # of degrading single modules + (# of degrading systems · # of modules per degrading system)
Total number of modules = # of single modules + (# of systems · # of modules per system)
P(Degradation mode X) = Number of degrading modules / Total number of modules
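The ratio above is a lift-style measure: a value above 1 means the two degradation modes co-occur more often than independence would predict. A minimal sketch with illustrative numbers (not values from the dataset):

```python
def correlation_lift(p_both, p_a, p_b):
    # Guard against division by zero, mirroring the notebook's zero checks
    if p_a == 0 or p_b == 0:
        return 0.0
    return p_both / (p_a * p_b)

# If modes A and B were independent, P(A & B) would be 0.25 * 0.20 = 0.05;
# observing them together 10% of the time gives a lift of 2.
print(correlation_lift(0.10, 0.25, 0.20))
```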
#### Generate heatmap for the entire dataset, regardless of time
```
modes = ['Encapsulant discoloration', 'Major delamination', 'Minor delamination',
'Backsheet other', 'Internal circuitry discoloration', 'Hot spots', 'Fractured cells',
'Diode/J-box problem', 'Glass breakage', 'Permanent soiling', 'Frame deformation']
sys_heatmap_all = get_heatmap_data(solar, modes, True)
sys_heatmap_all
sys_heatmap_all.iplot(kind='heatmap',colorscale='spectral',
filename='sys-heatmap-all', margin=(200,150,120,30))
```
#### Generate heatmap for the dataset of all modules installed before 2000
```
modes = ['Encapsulant discoloration', 'Major delamination', 'Minor delamination',
'Backsheet other', 'Internal circuitry discoloration', 'Hot spots', 'Fractured cells',
'Diode/J-box problem', 'Glass breakage', 'Permanent soiling', 'Frame deformation']
specified = solar[solar['Begin.Year'] < 2000]
sys_heatmap_pre_2000 = get_heatmap_data(specified, modes, True)
sys_heatmap_pre_2000
sys_heatmap_pre_2000.iplot(kind='heatmap',colorscale='spectral',
filename='sys-heatmap-pre-2000', margin=(200,150,120,30))
```
#### Generate heatmap for the dataset of all modules installed post 2000
```
modes = ['Encapsulant discoloration', 'Major delamination', 'Minor delamination',
'Backsheet other', 'Internal circuitry discoloration', 'Hot spots', 'Fractured cells',
'Diode/J-box problem', 'Glass breakage', 'Permanent soiling', 'Frame deformation']
specified = solar[solar['Begin.Year'] >= 2000]
sys_heatmap_post_2000 = get_heatmap_data(specified, modes, True)
sys_heatmap_post_2000
sys_heatmap_post_2000.iplot(kind='heatmap',colorscale='spectral',
filename='sys-heatmap-post-2000', margin=(200,150,120,30))
```
### Unweighted: Count module systems as single module
Number of degrading modules = # of degrading single modules + # of degrading systems
Total number of modules = # of single modules + # of systems
P(Degradation mode X) = Number of degrading modules / Total number of modules
```
modes = ['Encapsulant discoloration', 'Major delamination', 'Minor delamination',
'Backsheet other', 'Internal circuitry discoloration', 'Hot spots', 'Fractured cells',
'Diode/J-box problem', 'Glass breakage', 'Permanent soiling', 'Frame deformation']
modes_heatmap_all = get_heatmap_data(solar, modes, False)
modes_heatmap_all
modes_heatmap_all.iplot(kind='heatmap',colorscale='spectral',
filename='modes-heatmap-all', margin=(200,150,120,30))
```
#### Generate heatmap for the dataset of all modules installed before 2000
```
modes = ['Encapsulant discoloration', 'Major delamination', 'Minor delamination',
'Backsheet other', 'Internal circuitry discoloration', 'Hot spots', 'Fractured cells',
'Diode/J-box problem', 'Glass breakage', 'Permanent soiling', 'Frame deformation']
specified = solar[solar['Begin.Year'] < 2000]
modes_heatmap_pre_2000 = get_heatmap_data(specified, modes, False)
modes_heatmap_pre_2000
modes_heatmap_pre_2000.iplot(kind='heatmap',colorscale='spectral',
filename='modes-heatmap-pre-2000', margin=(200,150,120,30))
```
#### Generate heatmap for the dataset of all modules installed post 2000
```
modes = ['Encapsulant discoloration', 'Major delamination', 'Minor delamination',
'Backsheet other', 'Internal circuitry discoloration', 'Hot spots', 'Fractured cells',
'Diode/J-box problem', 'Glass breakage', 'Permanent soiling', 'Frame deformation']
specified = solar[solar['Begin.Year'] >= 2000]
modes_heatmap_post_2000 = get_heatmap_data(specified, modes, False)
modes_heatmap_post_2000
modes_heatmap_post_2000.iplot(kind='heatmap',colorscale='spectral',
filename='modes-heatmap-post-2000', margin=(200,150,120,30))
```
# Task 9: Random Forests
_All credit for the code examples of this notebook goes to the book "Hands-On Machine Learning with Scikit-Learn & TensorFlow" by A. Geron. Modifications were made and text was added by K. Zoch in preparation for the hands-on sessions._
# Setup
First, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
```
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Function to save a figure. This also decides that all output files
# should be stored in the subdirectory 'output/forests'.
PROJECT_ROOT_DIR = "."
EXERCISE = "forests"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "output", EXERCISE, fig_id + ".png")
os.makedirs(os.path.dirname(path), exist_ok=True)  # create the output directory if needed
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
```
# Bagging decision trees
First, let's create some half-moon data (as done in one of the earlier tasks).
```
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=500, noise=0.30, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
```
This code example shows how "bagging" multiple decision trees can improve the classification performance, compared to a single decision tree. Notice how bias and variance change when combining 500 trees as in the example below (it can be seen very nicely in the plot). Please try the following:
1. How does the number of samples affect the performance of the ensemble classifier? Try changing it to the training size (m = 500), or go even higher.
2. How is the performance different when pasting is used instead of bagging (_no_ replacement of instances)?
3. How relevant is the number of trees in the ensemble?
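For question 2, the only sampling difference is replacement; a quick standalone illustration (not using scikit-learn):

```python
import random

random.seed(42)
population = list(range(10))

bagging_sample = [random.choice(population) for _ in range(10)]  # with replacement
pasting_sample = random.sample(population, 10)                   # without replacement

# Pasting draws each instance at most once; bagging may repeat some
# instances and omit others entirely.
print(sorted(set(pasting_sample)))  # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(len(bagging_sample))          # → 10
```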
```
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
# Create an instance of a bagging classifier, composed of
# 500 decision tree classifiers. bootstrap=True activates
# replacement when picking the random instances, i.e.
# turning it off will switch from bagging to pasting.
bag_clf = BaggingClassifier(
DecisionTreeClassifier(random_state=42), n_estimators=500,
max_samples=100, bootstrap=True, n_jobs=-1, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
# Create an instance of a single decision tree to compare with.
tree_clf = DecisionTreeClassifier(random_state=42)
tree_clf.fit(X_train, y_train)
y_pred_tree = tree_clf.predict(X_test)
# Now do the plotting of the two.
from matplotlib.colors import ListedColormap
def plot_decision_boundary(clf, X, y, axes=[-1.5, 2.5, -1, 1.5], alpha=0.5, contour=True):
x1s = np.linspace(axes[0], axes[1], 100)
x2s = np.linspace(axes[2], axes[3], 100)
x1, x2 = np.meshgrid(x1s, x2s)
X_new = np.c_[x1.ravel(), x2.ravel()]
y_pred = clf.predict(X_new).reshape(x1.shape)
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap)
if contour:
custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50'])
plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8)
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", alpha=alpha)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", alpha=alpha)
plt.axis(axes)
plt.xlabel(r"$x_1$", fontsize=18)
plt.ylabel(r"$x_2$", fontsize=18, rotation=0)
plt.figure(figsize=(11,4))
plt.subplot(121)
plot_decision_boundary(tree_clf, X, y)
plt.title("Decision Tree", fontsize=14)
plt.subplot(122)
plot_decision_boundary(bag_clf, X, y)
plt.title("Decision Trees with Bagging", fontsize=14)
save_fig("decision_tree_without_and_with_bagging_plot")
plt.show()
```
If you need an additional performance measure, you can use the accuracy score:
```
from sklearn.metrics import accuracy_score
print("Bagging ensemble: %s" % accuracy_score(y_test, y_pred))
print("Single tree: %s" % accuracy_score(y_test, y_pred_tree))
```
## Out-of-Bag evaluation
When a bagging classifier is used, its performance can be evaluated _out-of-bag_. Remember what bagging does, and how many instances (on average) are picked from all training instances when the bag is chosen to be the same size as the number of training instances. The fraction of chosen instances converges to
$$1 - \exp(-1) \approx 63.212\%$$
But that also means that roughly 37% of the instances are _not seen_ in training. The [BaggingClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.BaggingClassifier.html) can be configured to evaluate on these out-of-bag instances automatically:
```
bag_clf = BaggingClassifier(
DecisionTreeClassifier(random_state=42), n_estimators=500,
bootstrap=True, n_jobs=-1, oob_score=True, random_state=40)
bag_clf.fit(X_train, y_train)
bag_clf.oob_score_
```
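The 63% figure itself can be checked with a quick bootstrap simulation, independent of scikit-learn:

```python
import random

random.seed(0)
n = 100_000
# Draw a bootstrap sample of size n (with replacement) and count
# how many distinct training instances were actually seen.
seen = {random.randrange(n) for _ in range(n)}
print(round(len(seen) / n, 3))  # close to 1 - exp(-1) ≈ 0.632
```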
# Boosting via AdaBoost
The performance of decision trees can be much improved through the procedure of _hypothesis boosting_. AdaBoost, probably the most popular algorithm, uses a very common technique: models are trained _sequentially_, where each model tries to correct for mistakes the previous model made. AdaBoost in particular _boosts_ the weights of those instances that were classified incorrectly. The next classifier will then be more sensitive to these instances and probably do an overall better job. In the end, the outputs of all sequential classifiers are combined into a prediction value. Each classifier enters this global value weighted according to its error rate. Please check/answer the following questions to familiarise yourself with AdaBoost:
1. What is the error rate of a predictor?
2. How is the weight for each predictor calculated?
3. How are weights of instances updated if they were classified correctly? How are they updated if classified incorrectly?
4. How is the final prediction made from an AdaBoost ensemble?
5. The [AdaBoostClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html) implements the AdaBoost algorithm in Scikit-Learn. The following bit of code implements AdaBoost with decision tree classifiers. Make yourself familiar with the class and its arguments, then try to tweak it to achieve better performance than in the example below!
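As a pointer for questions 1 and 2: a predictor's error rate is its weighted fraction of misclassified training instances, and its weight grows as that error shrinks. A sketch of the SAMME-style weight formula (`predictor_weight` is our helper name, not a scikit-learn API):

```python
import numpy as np

def predictor_weight(error_rate, learning_rate=1.0):
    # A predictor at 50% error (random guessing on two classes) gets
    # weight 0; lower error rates earn larger positive weights.
    return learning_rate * np.log((1 - error_rate) / error_rate)

print(predictor_weight(0.5))                          # → 0.0
print(predictor_weight(0.1) > predictor_weight(0.3))  # → True
```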
```
from sklearn.ensemble import AdaBoostClassifier
ada_clf = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=1), n_estimators=200,
algorithm="SAMME.R", learning_rate=0.5, random_state=42)
ada_clf.fit(X_train, y_train)
plot_decision_boundary(ada_clf, X, y)
```
The following bit of code is a visualisation of how the weight adjustment in AdaBoost works. While not relying on the above AdaBoostClassifier class, this implements a support vector machine classifier ([SVC](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html)) and boosts the weights of incorrectly classified instances by hand. With different learning rates, the "amount" of boosting can be controlled.
```
from sklearn.svm import SVC
m = len(X_train)
plt.figure(figsize=(11, 4))
for subplot, learning_rate in ((121, 1), (122, 0.5)):
# Start with equal weights for all instances.
sample_weights = np.ones(m)
plt.subplot(subplot)
# Now let's go through five iterations where the
# weights get adjusted based on the previous step.
for i in range(5):
# As an example, use SVM classifier with Gaussian kernel.
svm_clf = SVC(kernel="rbf", C=0.05, gamma="auto", random_state=42)
svm_clf.fit(X_train, y_train, sample_weight=sample_weights)
y_pred = svm_clf.predict(X_train)
# The most important step: increase the weights of
# incorrectly predicted instances according to the
# learning_rate parameter.
sample_weights[y_pred != y_train] *= (1 + learning_rate)
# And do the plotting.
plot_decision_boundary(svm_clf, X, y, alpha=0.2)
plt.title("learning_rate = {}".format(learning_rate), fontsize=16)
if subplot == 121:
plt.text(-0.7, -0.65, "1", fontsize=14)
plt.text(-0.6, -0.10, "2", fontsize=14)
plt.text(-0.5, 0.10, "3", fontsize=14)
plt.text(-0.4, 0.55, "4", fontsize=14)
plt.text(-0.3, 0.90, "5", fontsize=14)
save_fig("boosting_plot")
plt.show()
```
# Gradient Boosting
An alternative to AdaBoost is gradient boosting. Again, gradient boosting sequentially trains multiple predictors which are then combined for a global prediction in the end. Gradient boosting fits each new predictor to the _residual errors_ made by the previous predictor, but doesn't touch instance weights. This can be visualised very well with a regression problem (of course, classification can also be performed). Scikit-Learn comes with the two classes [GradientBoostingRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingRegressor.html) and [GradientBoostingClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html) for these tasks. As a first step, the following example implements regression with decision trees by hand.
First, generate our random data.
```
np.random.seed(42)
X = np.random.rand(100, 1) - 0.5
y = 3*X[:, 0]**2 + 0.05 * np.random.randn(100)
from sklearn.tree import DecisionTreeRegressor
# Start with the first tree and fit it to X, y.
tree_reg1 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg1.fit(X, y)
# Calculate the residual errors the previous tree
# has made and fit a second tree to these.
y2 = y - tree_reg1.predict(X)
tree_reg2 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg2.fit(X, y2)
# Again, calculate the residual errors of the previous
# tree and fit a third tree.
y3 = y2 - tree_reg2.predict(X)
tree_reg3 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg3.fit(X, y3)
# And the rest is just plotting ...
def plot_predictions(regressors, X, y, axes, label=None, style="r-", data_style="b.", data_label=None):
x1 = np.linspace(axes[0], axes[1], 500)
y_pred = sum(regressor.predict(x1.reshape(-1, 1)) for regressor in regressors)
plt.plot(X[:, 0], y, data_style, label=data_label)
plt.plot(x1, y_pred, style, linewidth=2, label=label)
if label or data_label:
plt.legend(loc="upper center", fontsize=16)
plt.axis(axes)
plt.figure(figsize=(11,11))
plt.subplot(321)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h_1(x_1)$", style="g-", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Residuals and tree predictions", fontsize=16)
plt.subplot(322)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1)$", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Ensemble predictions", fontsize=16)
plt.subplot(323)
plot_predictions([tree_reg2], X, y2, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_2(x_1)$", style="g-", data_style="k+", data_label="Residuals")
plt.ylabel("$y - h_1(x_1)$", fontsize=16)
plt.subplot(324)
plot_predictions([tree_reg1, tree_reg2], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1)$")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.subplot(325)
plot_predictions([tree_reg3], X, y3, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_3(x_1)$", style="g-", data_style="k+")
plt.ylabel("$y - h_1(x_1) - h_2(x_1)$", fontsize=16)
plt.xlabel("$x_1$", fontsize=16)
plt.subplot(326)
plot_predictions([tree_reg1, tree_reg2, tree_reg3], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1) + h_3(x_1)$")
plt.xlabel("$x_1$", fontsize=16)
plt.ylabel("$y$", fontsize=16, rotation=0)
save_fig("gradient_boosting_plot")
plt.show()
```
The following piece of code now uses the Scikit-Learn class for regression with gradient boosting. Two examples are given: (1) with a fast learning rate, but only very few predictors, (2) with a slower learning rate, but a high number of predictors. Clearly, the second ensemble overfits the problem. Can you try to tweak the parameters to get a model that generalises better?
```
from sklearn.ensemble import GradientBoostingRegressor
# First regression instance with only three estimators,
# but a fast learning rate. The max_depth parameter
# controls the number of 'layers' in the decision
# tree estimators of the ensemble. Increase it to give
# the individual trees more capacity (lower bias, higher variance).
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=3, learning_rate=1.0, random_state=42)
gbrt.fit(X, y)
# Second instance with many estimators and slower
# learning rate.
gbrt_slow = GradientBoostingRegressor(max_depth=2, n_estimators=200, learning_rate=0.5, random_state=42)
gbrt_slow.fit(X, y)
plt.figure(figsize=(11,4))
plt.subplot(121)
plot_predictions([gbrt], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="Ensemble predictions")
plt.title("learning_rate={}, n_estimators={}".format(gbrt.learning_rate, gbrt.n_estimators), fontsize=14)
plt.subplot(122)
plot_predictions([gbrt_slow], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("learning_rate={}, n_estimators={}".format(gbrt_slow.learning_rate, gbrt_slow.n_estimators), fontsize=14)
save_fig("gbrt_learning_rate_plot")
plt.show()
```
One way to solve this overfitting is to use _early stopping_ to find the optimal number of iterations/predictors for this problem. For that, we first need to split the dataset into a training and a validation set, because of course we cannot evaluate performance on instances the predictor used in training. The following code uses the familiar [model_selection.train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function. It then trains another ensemble (with a fixed number of 120 predictors), but this time only on the training set. Errors are calculated on the validation set and the optimal number of iterations is extracted. The code also plots the validation-set performance to point out the optimal iteration.
```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
# Split dataset into training and validation set.
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=49)
# Fit an ensemble. Let's start with 120 estimators, which
# is probably too much (as we saw above).
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=120, random_state=42)
gbrt.fit(X_train, y_train)
# Calculate the errors for each iteration (on the validation set)
# and find the optimal iteration step.
errors = [mean_squared_error(y_val, y_pred)
for y_pred in gbrt.staged_predict(X_val)]
bst_n_estimators = np.argmin(errors) + 1  # staged_predict starts with one estimator
min_error = np.min(errors)
# Retrain a new ensemble with those settings.
gbrt_best = GradientBoostingRegressor(max_depth=2,n_estimators=bst_n_estimators, random_state=42)
gbrt_best.fit(X_train, y_train)
# And do the plotting of validation error as well
# as the optimised ensemble.
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.plot(errors, "b.-")
plt.plot([bst_n_estimators, bst_n_estimators], [0, min_error], "k--")
plt.plot([0, 120], [min_error, min_error], "k--")
plt.plot(bst_n_estimators, min_error, "ko")
plt.text(bst_n_estimators, min_error*1.2, "Minimum", ha="center", fontsize=14)
plt.axis([0, 120, 0, 0.01])
plt.xlabel("Number of trees")
plt.title("Validation error", fontsize=14)
plt.subplot(122)
plot_predictions([gbrt_best], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("Best model (%d trees)" % bst_n_estimators, fontsize=14)
save_fig("early_stopping_gbrt_plot")
plt.show()
```
```
# default_exp data.tabular
```
# Data Tabular
> Main Tabular functions used throughout the library. This is helpful when you have additional time series data like metadata, time series features, etc.
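The target-type inference used below (classification for non-numeric targets, regression otherwise) can be sketched in plain pandas; `infer_task` is an illustrative helper, not part of tsai:

```python
import pandas as pd

def infer_task(df, y_name):
    # Mirrors the y_block logic in get_tabular_ds: a target column that
    # is not numeric is treated as classification, a numeric one as regression.
    num_cols = df._get_numeric_data().columns
    return "classification" if y_name not in num_cols else "regression"

df = pd.DataFrame({"salary": ["<50k", ">=50k"], "age": [25, 40]})
print(infer_task(df, "salary"))  # → classification
print(infer_task(df, "age"))     # → regression
```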
```
#export
from tsai.imports import *
from tsai.utils import *
from fastai.tabular.all import *
#export
@delegates(TabularPandas.__init__)
def get_tabular_ds(df, procs=[Categorify, FillMissing, Normalize], cat_names=None, cont_names=None, y_names=None, groupby=None,
y_block=None, splits=None, do_setup=True, inplace=False, reduce_memory=True, device=None, **kwargs):
device = ifnone(device, default_device())
groupby = str2list(groupby)
cat_names = str2list(cat_names)
cont_names = str2list(cont_names)
y_names = str2list(y_names)
cols = []
for _cols in [groupby, cat_names, cont_names, y_names]:
if _cols is not None: cols.extend(_cols)
cols = list(set(cols))
if y_names is None: y_block = None
elif y_block is None:
num_cols = df._get_numeric_data().columns
y_block = CategoryBlock() if any(n not in num_cols for n in y_names) else RegressionBlock()
# otherwise keep the y_block passed in by the caller
pd.options.mode.chained_assignment=None
to = TabularPandas(df[cols], procs=procs, cat_names=cat_names, cont_names=cont_names, y_names=y_names, y_block=y_block,
splits=splits, do_setup=do_setup, inplace=inplace, reduce_memory=reduce_memory, device=device)
setattr(to, "groupby", groupby)
return to
#export
@delegates(DataLoaders.__init__)
def get_tabular_dls(df, procs=[Categorify, FillMissing, Normalize], cat_names=None, cont_names=None, y_names=None, bs=64,
y_block=None, splits=None, do_setup=True, inplace=False, reduce_memory=True, device=None, **kwargs):
to = get_tabular_ds(df, procs=procs, cat_names=cat_names, cont_names=cont_names, y_names=y_names,
y_block=y_block, splits=splits, do_setup=do_setup, inplace=inplace, reduce_memory=reduce_memory, device=device, **kwargs)
if splits is not None: bs = min(len(splits[0]), bs)
else: bs = min(len(df), bs)
return to.dataloaders(device=device, bs=bs, **kwargs)
#export
def preprocess_df(df, procs=[Categorify, FillMissing, Normalize], cat_names=None, cont_names=None, y_names=None, sample_col=None, reduce_memory=True):
cat_names = str2list(cat_names)
cont_names = str2list(cont_names)
y_names = str2list(y_names)
cols = []
for _cols in [cat_names, cont_names, y_names]:
if _cols is not None: cols.extend(_cols)
cols = list(set(cols))
pd.options.mode.chained_assignment=None
to = TabularPandas(df[cols], procs=procs, cat_names=cat_names, cont_names=cont_names, y_names=y_names, reduce_memory=reduce_memory)
procs = to.procs
if sample_col is not None:
sample_col = str2list(sample_col)
to = pd.concat([df[sample_col], to.cats, to.conts, to.ys], axis=1)
else:
to = pd.concat([to.cats, to.conts, to.ys], axis=1)
return to, procs
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
# df['salary'] = np.random.rand(len(df)) # uncomment to simulate a cont dependent variable
cat_names = ['workclass', 'education', 'education-num', 'marital-status', 'occupation', 'relationship', 'race', 'sex',
'capital-gain', 'capital-loss', 'native-country']
cont_names = ['age', 'fnlwgt', 'hours-per-week']
target = ['salary']
splits = RandomSplitter()(range_of(df))
dls = get_tabular_dls(df, cat_names=cat_names, cont_names=cont_names, y_names='salary', splits=splits, bs=512)
dls.show_batch()
metrics = mae if dls.c == 1 else accuracy
learn = tabular_learner(dls, layers=[200, 100], y_range=None, metrics=metrics)
learn.fit(1, 1e-2)
learn.dls.one_batch()
learn.model
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
cat_names = ['workclass', 'education', 'education-num', 'marital-status', 'occupation', 'relationship', 'race', 'sex',
'capital-gain', 'capital-loss', 'native-country']
cont_names = ['age', 'fnlwgt', 'hours-per-week']
target = ['salary']
df, procs = preprocess_df(df, procs=[Categorify, FillMissing, Normalize], cat_names=cat_names, cont_names=cont_names, y_names=target,
sample_col=None, reduce_memory=True)
df.head()
procs.classes, procs.means, procs.stds
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
```
| github_jupyter |
<img src="https://github.com/pmservice/ai-openscale-tutorials/raw/master/notebooks/images/banner.png" align="left" alt="banner">
# Part 4: Drift Monitor
The notebook will train, create and deploy a Credit Risk model. It will then configure OpenScale to monitor drift in data and accuracy by injecting sample payloads for viewing in the OpenScale Insights dashboard.
### Contents
- [1. Setup](#setup)
- [2. Configure credentials](#credentials)
- 3. Load the training data
- [4. OpenScale configuration](#openscale)
- [5. Generate drift model](#driftmodel)
- [6. Submit payload](#payload)
- [7. Enable drift monitoring](#monitor)
- [8. Run drift monitor](#driftrun)
# 1.0 Install Python Packages <a name="setup"></a>
```
import warnings
warnings.filterwarnings('ignore')
!rm -rf /home/spark/shared/user-libs/python3.6*
!pip install --upgrade ibm-ai-openscale==2.2.1 --no-cache --user | tail -n 1
!pip install --upgrade watson-machine-learning-client-V4==1.0.95 | tail -n 1
!pip install --upgrade pyspark==2.3 | tail -n 1
!pip install scikit-learn==0.20.2 | tail -n 1
```
### Action: restart the kernel!
```
import warnings
warnings.filterwarnings('ignore')
```
# 2.0 Configure credentials <a name="credentials"></a>
<font color=red>Replace the `username` and `password` values of `************` with your Cloud Pak for Data `username` and `password`. The value for `url` should match the `url` for your Cloud Pak for Data cluster, which you can get from the browser address bar (be sure to include the 'https://').</font> The credentials should look something like this (these are example values, not the ones you will use):
```
WOS_CREDENTIALS = {
"url": "https://zen.clusterid.us-south.containers.appdomain.cloud",
"username": "cp4duser",
"password" : "cp4dpass"
}
```
**NOTE: Make sure that there is no trailing forward slash / in the url**
```
WOS_CREDENTIALS = {
"url": "************",
"username": "************",
"password": "************"
}
WML_CREDENTIALS = WOS_CREDENTIALS.copy()
WML_CREDENTIALS['instance_id']='openshift'
WML_CREDENTIALS['version']='3.0.0'
```
Let's retrieve the variables for the model and deployment we set up in the initial setup notebook. **If the output does not show any values, check to ensure you have completed the initial setup before continuing.**
```
%store -r MODEL_NAME
%store -r DEPLOYMENT_NAME
%store -r DEFAULT_SPACE
print("Model Name: ", MODEL_NAME, ". Deployment Name: ", DEPLOYMENT_NAME, ". Deployment Space: ", DEFAULT_SPACE)
```
# 3.0 Load the training data
```
!rm german_credit_data_biased_training.csv
!wget https://raw.githubusercontent.com/IBM/credit-risk-workshop-cpd/master/data/openscale/german_credit_data_biased_training.csv
import pandas as pd
data_df = pd.read_csv('german_credit_data_biased_training.csv', sep=",", header=0)
data_df.head()
```
# 4.0 Configure OpenScale <a name="openscale"></a>
The notebook will now import the necessary libraries and set up a Python OpenScale client.
```
from ibm_ai_openscale import APIClient4ICP
from ibm_ai_openscale.engines import *
from ibm_ai_openscale.utils import *
from ibm_ai_openscale.supporting_classes import PayloadRecord, Feature
from ibm_ai_openscale.supporting_classes.enums import *
from watson_machine_learning_client import WatsonMachineLearningAPIClient
import json
wml_client = WatsonMachineLearningAPIClient(WML_CREDENTIALS)
ai_client = APIClient4ICP(WOS_CREDENTIALS)
ai_client.version
subscription = None
if subscription is None:
subscriptions_uids = ai_client.data_mart.subscriptions.get_uids()
for sub in subscriptions_uids:
if ai_client.data_mart.subscriptions.get_details(sub)['entity']['asset']['name'] == MODEL_NAME:
print("Found existing subscription.")
subscription = ai_client.data_mart.subscriptions.get(sub)
if subscription is None:
print("No subscription found. Please run openscale-initial-setup.ipynb to configure.")
```
### Set Deployment UID
```
wml_client.set.default_space(DEFAULT_SPACE)
wml_deployments = wml_client.deployments.get_details()
deployment_uid = None
for deployment in wml_deployments['resources']:
print(deployment['entity']['name'])
if DEPLOYMENT_NAME == deployment['entity']['name']:
deployment_uid = deployment['metadata']['guid']
break
print(deployment_uid)
```
# 5.0 Generate drift model <a name="driftmodel"></a>
Drift monitoring requires a trained drift detection model, which must be uploaded manually for WML. You can train, create and download a drift detection model using the code below. The entire code can be found in the [training_statistics_notebook](https://github.com/IBM-Watson/aios-data-distribution/blob/master/training_statistics_notebook.ipynb) (see the drift detection model generation section).
```
training_data_info = {
"class_label":'Risk',
"feature_columns":["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"],
"categorical_columns":["CheckingStatus","CreditHistory","LoanPurpose","ExistingSavings","EmploymentDuration","Sex","OthersOnLoan","OwnsProperty","InstallmentPlans","Housing","Job","Telephone","ForeignWorker"]
}
#Set model_type. Acceptable values are:["binary","multiclass","regression"]
model_type = "binary"
#model_type = "multiclass"
#model_type = "regression"
def score(training_data_frame):
WML_CREDENTAILS = WML_CREDENTIALS
#The data type of the label column and prediction column should be the same.
#User needs to make sure that the label column and prediction column arrays have the same unique class labels
prediction_column_name = "predictedLabel"
probability_column_name = "probability"
feature_columns = list(training_data_frame.columns)
training_data_rows = training_data_frame[feature_columns].values.tolist()
#print(training_data_rows)
payload_scoring = {
wml_client.deployments.ScoringMetaNames.INPUT_DATA: [{
"fields": feature_columns,
"values": [x for x in training_data_rows]
}]
}
score = wml_client.deployments.score(deployment_uid, payload_scoring)
score_predictions = score.get('predictions')[0]
prob_col_index = list(score_predictions.get('fields')).index(probability_column_name)
predict_col_index = list(score_predictions.get('fields')).index(prediction_column_name)
if prob_col_index < 0 or predict_col_index < 0:
raise Exception("Missing prediction/probability column in the scoring response")
import numpy as np
probability_array = np.array([value[prob_col_index] for value in score_predictions.get('values')])
prediction_vector = np.array([value[predict_col_index] for value in score_predictions.get('values')])
return probability_array, prediction_vector
#Generate drift detection model
from ibm_wos_utils.drift.drift_trainer import DriftTrainer
drift_detection_input = {
"feature_columns":training_data_info.get('feature_columns'),
"categorical_columns":training_data_info.get('categorical_columns'),
"label_column": training_data_info.get('class_label'),
"problem_type": model_type
}
drift_trainer = DriftTrainer(data_df,drift_detection_input)
if model_type != "regression":
#Note: batch_size can be customized by user as per the training data size
drift_trainer.generate_drift_detection_model(score,batch_size=data_df.shape[0])
#Note: Two column constraints are not computed beyond two_column_learner_limit(default set to 200)
#User can adjust the value depending on the requirement
drift_trainer.learn_constraints(two_column_learner_limit=200)
drift_trainer.create_archive()
#Generate a download link for drift detection model
from IPython.display import HTML
import base64
import io
def create_download_link_for_ddm( title = "Download Drift detection model", filename = "drift_detection_model.tar.gz"):
#Retains stats information
with open(filename,'rb') as file:
ddm = file.read()
b64 = base64.b64encode(ddm)
payload = b64.decode()
html = '<a download="{filename}" href="data:text/json;base64,{payload}" target="_blank">{title}</a>'
html = html.format(payload=payload,title=title,filename=filename)
return HTML(html)
create_download_link_for_ddm()
```
# 6.0 Submit payload <a name="payload"></a>
### Score the model so we can configure monitors
Now that the WML service has been bound and the subscription has been created, we need to send a request to the model before we configure OpenScale. This allows OpenScale to create a payload log in the datamart with the correct schema, so it can capture data coming into and out of the model.
```
fields = ["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"]
values = [
["no_checking",13,"credits_paid_to_date","car_new",1343,"100_to_500","1_to_4",2,"female","none",3,"savings_insurance",46,"none","own",2,"skilled",1,"none","yes"],
["no_checking",24,"prior_payments_delayed","furniture",4567,"500_to_1000","1_to_4",4,"male","none",4,"savings_insurance",36,"none","free",2,"management_self-employed",1,"none","yes"],
["0_to_200",26,"all_credits_paid_back","car_new",863,"less_100","less_1",2,"female","co-applicant",2,"real_estate",38,"none","own",1,"skilled",1,"none","yes"],
["0_to_200",14,"no_credits","car_new",2368,"less_100","1_to_4",3,"female","none",3,"real_estate",29,"none","own",1,"skilled",1,"none","yes"],
["0_to_200",4,"no_credits","car_new",250,"less_100","unemployed",2,"female","none",3,"real_estate",23,"none","rent",1,"management_self-employed",1,"none","yes"],
["no_checking",17,"credits_paid_to_date","car_new",832,"100_to_500","1_to_4",2,"male","none",2,"real_estate",42,"none","own",1,"skilled",1,"none","yes"],
["no_checking",33,"outstanding_credit","appliances",5696,"unknown","greater_7",4,"male","co-applicant",4,"unknown",54,"none","free",2,"skilled",1,"yes","yes"],
["0_to_200",13,"prior_payments_delayed","retraining",1375,"100_to_500","4_to_7",3,"male","none",3,"real_estate",37,"none","own",2,"management_self-employed",1,"none","yes"]
]
payload_scoring = {"fields": fields,"values": values}
payload = {
wml_client.deployments.ScoringMetaNames.INPUT_DATA: [payload_scoring]
}
scoring_response = wml_client.deployments.score(deployment_uid, payload)
print('Single record scoring result:', '\n fields:', scoring_response['predictions'][0]['fields'], '\n values: ', scoring_response['predictions'][0]['values'][0])
```
# 7.0 Enable drift monitoring <a name="monitor"></a>
```
subscription.drift_monitoring.enable(threshold=0.05, min_records=10,model_path="./drift_detection_model.tar.gz")
```
# 8.0 Run Drift monitor on demand <a name="driftrun"></a>
```
!rm german_credit_feed.json
!wget https://raw.githubusercontent.com/IBM/credit-risk-workshop-cpd/master/data/openscale/german_credit_feed.json
import random
with open('german_credit_feed.json', 'r') as scoring_file:
scoring_data = json.load(scoring_file)
fields = scoring_data['fields']
values = []
for _ in range(10):
current = random.choice(scoring_data['values'])
#set age of all rows to 100 to increase drift values on dashboard
current[12] = 100
values.append(current)
payload_scoring = {"fields": fields, "values": values}
payload = {
wml_client.deployments.ScoringMetaNames.INPUT_DATA: [payload_scoring]
}
scoring_response = wml_client.deployments.score(deployment_uid, payload)
drift_run_details = subscription.drift_monitoring.run(background_mode=False)
subscription.drift_monitoring.get_table_content()
```
## Congratulations!
You have finished this section of the hands-on lab for IBM Watson OpenScale. You can now view the OpenScale dashboard by going to the Cloud Pak for Data `Home` page, and clicking `Services`. Choose the `OpenScale` tile and click the menu to `Open`. Click on the tile for the model you've created to see the monitors.
OpenScale shows model performance over time. You have two options to keep data flowing to your OpenScale graphs:
* Download, configure and schedule the [model feed notebook](https://raw.githubusercontent.com/emartensibm/german-credit/master/german_credit_scoring_feed.ipynb). This notebook can be set up with your WML credentials, and scheduled to provide a consistent flow of scoring requests to your model, which will appear in your OpenScale monitors.
* Re-run this notebook. Running this notebook from the beginning will delete and re-create the model and deployment, and re-create the historical data. Please note that the payload and measurement logs for the previous deployment will continue to be stored in your datamart, and can be deleted if necessary.
This notebook has been adapted from notebooks available at https://github.com/pmservice/ai-openscale-tutorials.
| github_jupyter |
# Difference between gridded field (GRIB) and scattered observations (BUFR)
<img src="http://pandas.pydata.org/_static/pandas_logo.png" width=200>
In this example we will load a gridded model field in GRIB format and a set of observation data in BUFR format. We will then use Metview to examine the data, and compute and plot their differences. Then we will export the set of differences into a pandas dataframe for further inspection.
```
import metview as mv
use_mars = False # if False, then read data from disk
```
Metview retrieves/reads GRIB data into its [Fieldset](https://confluence.ecmwf.int/display/METV/Fieldset+Functions) class.
```
if use_mars:
t2m_grib = mv.retrieve(type='fc', date=-5, time=12, step=48, levtype='sfc', param='2t', grid='O160', gaussian='reduced')
else:
t2m_grib = mv.read('t2m_grib.grib')
```
Define our area of interest and set up some visual styling.
```
area = [30,-25,72,46] # S,W,N,E
europe = mv.geoview(
map_area_definition = "corners",
area = area,
coastlines = mv.mcoast(
map_coastline_land_shade = "on",
map_coastline_land_shade_colour = "#eeeeee",
map_grid_latitude_increment = 10,
map_grid_longitude_increment = 10)
)
auto_style = mv.mcont(contour_automatic_setting = "ecmwf")
grid_1x1 = mv.mcont(
contour = "off",
contour_grid_value_plot = "on",
contour_grid_value_plot_type = "marker",
contour_grid_value_marker_colour = "burgundy",
grib_scaling_of_retrieved_fields = "off"
)
```
Plot the locations of the grid points. We can see the spatial characteristics of the octahedral reduced Gaussian grid.
Plotting is performed through Metview's interface to the [Magics](https://confluence.ecmwf.int/display/MAGP/Magics) library developed at ECMWF. We will first define the view parameters (by default we will get a global map in cylindrical projection).
If we don't set the output destination to be Jupyter, we will get Metview's interactive display window.
```
mv.setoutput('jupyter')
mv.plot(europe, t2m_grib, auto_style, grid_1x1)
```
Metview retrieves/reads BUFR data into its [Bufr](https://confluence.ecmwf.int/display/METV/Observations+Functions) class.
```
if use_mars:
obs_3day = mv.retrieve(
type = "ob",
repres = "bu",
date = -3,
area = area
)
else:
obs_3day = mv.read('./obs_3day.bufr')
```
Plot the observations on the map.
```
obs_resize = mv.mobs(obs_size = 0.3, obs_ring_size = 0.3, obs_distance_apart = 1.8)
mv.plot(europe, obs_3day, obs_resize)
```
BUFR can contain a complex arrangement of data. Metview has a powerful BUFR examiner [tool](https://confluence.ecmwf.int/display/METV/CodesUI) to inspect the data contents and to see the available key names. This can be launched with the examine() function.
```
mv.examine(obs_3day)
```
With the information gleaned from that, we can filter the variable we require using the obsfilter() function. This returns a [Geopoints](https://confluence.ecmwf.int/display/METV/Geopoints) object, which has many more [functions](https://confluence.ecmwf.int/display/METV/Geopoints+Functions) available to it. Note: prior to Metview 5.1, only a numeric descriptor could be used to specify the parameter.
```
t2m_gpt = mv.obsfilter(
data = obs_3day,
parameter = 'airTemperatureAt2M',
output = 'geopoints'
)
```
Computing the difference between the gridded field and the scattered data is one line of code. Metview will, for each observation point, compute the interpolated value from the field at that location, perform the subtraction and put the result into a new Geopoints object.
```
diff = t2m_grib - t2m_gpt
```
We can then use Magics' powerful symbol plotting routine to assign colours and sizes based on the magnitude of the differences.
```
max_diff = mv.maxvalue(mv.abs(diff))
levels = [max_diff * x for x in [-1, -0.67, -0.33, -0.1, 0.1, 0.33, 0.67, 1]]
diff_symb = mv.msymb(
legend = "on",
symbol_type = "marker",
symbol_table_mode = "advanced",
symbol_outline = "on",
symbol_outline_colour = "charcoal",
symbol_advanced_table_selection_type = "list",
symbol_advanced_table_level_list = levels,
symbol_advanced_table_colour_method = "list",
symbol_advanced_table_colour_list = ["blue","sky","rgb(0.82,0.85,1)","white","rgb(0.9,0.8,0.8)","rgb(0.9,0.5,0.5)","red"],
symbol_advanced_table_height_list = [0.6,0.5,0.4,0.3,0.3,0.4,0.5,0.6]
)
mv.plot(europe, diff, diff_symb)
```
We can easily convert this to a pandas dataframe for further analysis.
```
df = diff.to_dataframe()
```
Print a summary of the whole data set:
```
df.describe()
```
Or print a summary of just the actual values:
```
df.value.describe()
```
Produce a quick scatterplot of latitude vs difference values:
```
df.plot.scatter(x='latitude', y='value', title='Scatterplot')
```
# Additional resources
- [Introductory Metview training course](https://confluence.ecmwf.int/display/METV/Data+analysis+and+visualisation+using+Metview)
- [Metview's Python interface](https://confluence.ecmwf.int/display/METV/Metview%27s+Python+Interface)
- [Function list](https://confluence.ecmwf.int/display/METV/List+of+Operators+and+Functions)
- [Gallery example (field-obs difference)](https://confluence.ecmwf.int/display/METV/Model-Obs%20Difference%20Example)
| github_jupyter |
# Hands-on: `pandas` & Data Wrangling
By now, you have some experience in using the `pandas` library which will be very helpful in this module. In this notebook, we will explore more of `pandas` but in the context of data wrangling. To be specific, we will be covering the following topics:
- Reading in data
- Descriptive statistics
- Data wrangling
- Filtering
- Aggregation
- Merging
Again, we import the necessary libraries first. Always remember to import first.
```
import pandas as pd
import numpy as np
```
## Data
The Philippines has an Open Data portal: https://data.gov.ph
In this notebook, we'll be using the [Public Elementary School Enrollment Statistics](https://data.gov.ph/?q=dataset/public-elementary-school-enrollment-statistics) provided by the Department of Education. The page contains two files. Download both files and save them to the same folder as this notebook.
## Reading Data
In the previous modules, we have already demonstrated how to read files using `pandas`. For more details, run the cells below to display the documentation for the commonly used functions for reading files. Try to **read the documentation** to see if what you're trying to do is something that can already be done by a library. Or you could simply **google** your concern; most of the time, someone has already encountered the same problem.
```
pd.read_csv?
pd.read_excel?
# by default, the encoding is utf-8, but since the data has some latin characters
# the encoding argument needs to be updated
# list of encodings can be found here https://docs.python.org/2.4/lib/standard-encodings.html
# read more about encodings here http://kunststube.net/encoding/
deped2012 = pd.read_csv('deped_publicelementaryenrollment2012.csv', encoding='latin1')
# the head function provides a preview of the first 5 rows of the data
deped2012.head()
# Let's read in the other file too
deped2015 = pd.read_csv('depend_publicelementaryenrollment2015.csv', encoding='latin1')
deped2015.head()
```
### Let's begin exploring the data...
Some of the most common questions to ask **first** before proceeding with your data is to know the basic details of what's **in** the data. This is an important first step to verify what you see in the preview (`head`) and what's in the entire file.
* How many rows and columns do we have?
* What is the data type of each column?
* What is the most common value? Mean? Standard deviation?
#### `shape`
A `pandas` `DataFrame` is essentially a 2D `numpy` array. Using the `shape` attribute of the `DataFrame`, we can easily check the dimensions of the data file we read. It returns a tuple of the dimensions.
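A tiny illustration on a throwaway frame (toy values):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame(np.zeros((4, 3)), columns=['a', 'b', 'c'])
print(toy.shape)        # (4, 3): 4 rows, 3 columns
print(type(toy.shape))  # <class 'tuple'>
```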
```
deped2012.shape
```
This means that the `deped_publicelementaryenrollment2012.csv` file has 463,908 rows and 10 columns.
#### `dtypes`
`dtypes` lets you check what data type each column is.
```
deped2012.dtypes
```
Notice that everything except `school_id` and `enrollment` is type `object`. In Python, a String is considered an `object`.
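A quick self-contained illustration (the column names here are invented, not from the dataset): string columns show up as `object`, numeric ones as `int64`/`float64`.

```python
import pandas as pd

toy = pd.DataFrame({
    'school_name': ['A', 'B'],  # strings  -> dtype object
    'enrollment': [120, 340],   # integers -> dtype int64
    'ratio': [0.5, 0.7],        # floats   -> dtype float64
})
print(toy.dtypes)
```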
#### `describe()`
`describe()` provides the basic descriptive statistics of the `DataFrame`. By default, it only includes the columns with numerical data. Non-numerical columns are omitted, but there are arguments that show the statistics for non-numerical data.
```
deped2012.describe()
```
By default we see the **descriptive statistics** of the numerical columns.
```
deped2012.describe(include=object)  # the np.object alias was removed in newer NumPy; plain object works
```
But by specifying the `include` argument, we can see the descriptive statistics of the specific data type we're looking for.
```
deped2012.describe?
```
### Data Wrangling
After looking at the basic information about the data, let's see how "clean" the data is.
#### Common Data Problems (from slides)
1. Missing values
2. Formatting issues / data types
3. Duplicate records
4. Varying representation / Handle categorical values
#### `isna()` / `isnull()`
To check if there's any missing values, `pandas` provides these two functions to detect them. This actually maps each individual cell to either True or False.
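On a two-row toy frame (hypothetical values), the mapping looks like this:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': [1.0, np.nan], 'y': ['a', 'b']})
print(df.isna())        # element-wise True/False mask
print(df.isna().sum())  # per-column count of missing values
```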
#### `dropna()`
To remove any records with missing values, `dropna()` may be used. It has a number of arguments to help narrow down the criteria for removing the records with missing values.
```
deped2012.isna?
deped2012.dropna?
deped2012.isna().sum()
```
In this case, there are no null values, which is great, but in most real-world datasets, expect null values.
```
deped2012_dropped = deped2012.dropna(inplace=False)
deped2012.shape, deped2012_dropped.shape
```
You'll see above that both shapes are identical because nothing happened when we applied `dropna`: there are no null values to begin with. But what if there were a null value in this dataset?
```
# This is just an ILLUSTRATION to show how to handle nan values. Don't change values to NaN unless NEEDED.
deped2012_copy = deped2012.copy() # We first make a copy of the dataframe
deped2012_copy.iloc[0,0] = np.nan # We modify the COPY (not the original)
deped2012_copy.head()
deped2012_copy.isna().sum()
```
The null value is now reflected in the output above.
```
deped2012_dropped = deped2012_copy.dropna(inplace=False)
deped2012_copy.shape, deped2012_dropped.shape
```
The 'dropped' dataframe now has a lower number of rows compared to the original one.
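Dropping rows is not the only remedy: `fillna` keeps the rows and substitutes a value instead. A hypothetical sketch (the column name and fill strategy are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'enrollment': [120.0, np.nan, 340.0]})

# Replace missing values with the column mean instead of dropping the row
filled = df['enrollment'].fillna(df['enrollment'].mean())
print(filled.tolist())  # [120.0, 230.0, 340.0]
```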
#### `duplicated()` --> `drop_duplicates()`
The `duplicated()` function returns the duplicated rows in the `DataFrame`. It also has a number of arguments for you to specify the subset of columns.
`drop_duplicates()` is the function to remove the duplicated rows found by `duplicated()`.
```
deped2012.duplicated?
deped2012.drop_duplicates?
deped2012.duplicated().sum()
```
We can see here that there are no duplicates.
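Since this dataset has none, here is a hypothetical frame that does contain a duplicate, to show the two functions in action:

```python
import pandas as pd

df = pd.DataFrame({'school_id': [1, 1, 2], 'enrollment': [100, 100, 250]})

print(df.duplicated().sum())  # 1 duplicated row
deduped = df.drop_duplicates()
print(deduped.shape)          # (2, 2)
```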
#### Varying representation
For categorical or textual data, unless the options provided are fixed, misspellings and different representations may exist in the same file.
To check the unique values of each column, a `pandas` `Series` has a function `unique()` which returns all the unique values of the column.
```
deped2012['province'].unique()
deped2012['year_level'].unique()
deped2012['region'].unique()
deped2015['region'].unique()
```
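Detecting the variants is only half the job; once spotted, they can usually be collapsed with string methods or an explicit mapping. A hypothetical sketch (the variant spellings are invented for illustration):

```python
import pandas as pd

regions = pd.Series(['Region IV-A', 'region iv-a', 'REGION IV-A ', 'Region III'])

# Normalise case and surrounding whitespace so the variants collapse to one value
cleaned = regions.str.strip().str.upper()
print(cleaned.unique())  # ['REGION IV-A' 'REGION III']
```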
### Summarizing Data
High data granularity is great for a detailed analysis. However, data is usually summarized or aggregated prior to visualization. `pandas` also provides an easy way to summarize data based on the columns you'd like using the `groupby` function.
We can call any of the following when grouping by columns:
- count()
- sum()
- min()
- max()
- std()
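On a toy frame (hypothetical values), the grouping pattern looks like this:

```python
import pandas as pd

df = pd.DataFrame({'region': ['A', 'A', 'B'],
                   'enrollment': [100, 150, 200]})

# Group rows by region, then aggregate the enrollment column
totals = df.groupby('region')['enrollment'].sum()
print(totals.to_dict())  # {'A': 250, 'B': 200}
```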
For columns that are categorical in nature, we can simply do `df['column'].value_counts()`. This will give the frequency of each unique value in the column.
```
pd.Series.value_counts?
```
Number of region instances
```
deped2015['region'].value_counts()
deped2012.groupby?
```
Number of enrollments per grade level
```
deped2012.groupby("year_level")['enrollment'].sum()
```
#### Exercise!
Let's try to get the following:
1. Total number of enrolled students per region and gender
2. Total number of enrolled students per year level and gender
```
deped2012.groupby(['region', 'gender'], as_index=False).sum()
deped2012.groupby(['year_level', 'gender']).sum()
```
### Filtering Data
```
deped2015.query("year_level=='grade 6'")
deped2015.query("year_level == 'grade 6' & school_id == 100004")
deped2015.query("year_level == 'grade 6' | year_level == 'grade 5'")[['region', 'province']]
```
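`query` is a readable shorthand for ordinary boolean indexing; the two are equivalent, as this small sketch on toy data shows:

```python
import pandas as pd

df = pd.DataFrame({'year_level': ['grade 5', 'grade 6', 'grade 6'],
                   'school_id': [100004, 100004, 200001]})

via_query = df.query("year_level == 'grade 6' & school_id == 100004")
via_mask = df[(df['year_level'] == 'grade 6') & (df['school_id'] == 100004)]

print(via_query.equals(via_mask))  # True
```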
### Merging Data
Data is sometimes separated into different files, or additional data from another source may need to be associated with an existing dataset. `pandas` provides means to combine different `DataFrame`s together (provided that there are common variables to connect them).
#### `pd.merge`
`merge()` is very similar to database-style joins. `pandas` allows merging of `DataFrame` and **named** `Series` objects together. A join can be done along columns or indexes.
#### `pd.concat`
`concat()` on the other hand combines `pandas` objects along a specific axis.
#### `df.append`
`append()` basically adds the rows of another `DataFrame` or `Series` to the end of the caller. Note that `append()` was deprecated in pandas 1.4 and removed in 2.0; `concat()` is its replacement.
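A minimal sketch of the first two on toy frames (the column names are hypothetical); `concat` is shown in place of `append`, which was removed in pandas 2.0:

```python
import pandas as pd

s2012 = pd.DataFrame({'school_id': [1, 2], 'enrollment': [100, 250]})
s2015 = pd.DataFrame({'school_id': [2, 3], 'enrollment': [260, 300]})

# merge: database-style join on the common column
joined = pd.merge(s2012, s2015, on='school_id', suffixes=('_2012', '_2015'))
print(joined.shape)  # (1, 3): only school_id 2 appears in both

# concat: stack the rows, the modern replacement for append
stacked = pd.concat([s2012, s2015], ignore_index=True)
print(stacked.shape)  # (4, 2)
```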
```
pd.merge?
pd.concat?
deped2012.append?
stats2012 = deped2012.groupby('school_id', as_index=False).sum()
stats2015 = deped2015.groupby('school_id', as_index=False).sum()
stats2012.head()
stats2015.tail()
stats2012.append(stats2015)
```
#### Exercise
The task is to compare the enrollment statistics of the elementary schools between 2012 and 2015.
1. Get the total number of enrolled students per school for each year
2. Merge the two `DataFrame`s together to show the summarized statistics for the two years for all schools.
```
stats2012 = deped2012.groupby('school_id', as_index=False).sum()
stats2015 = deped2015.groupby('school_id', as_index=False).sum()
stats2012.head()
stats2012.shape
stats2015.head()
stats2015.shape
```
The following is the wrong way of merging this.
```
merged = pd.merge(stats2012, stats2015)
merged.head()
merged.shape
```
#### Observations
1. Are the number of rows for both `DataFrames` the same or different? What's the implication if they're different?
2. Note the same column names for the two `DataFrames`. Based on the documentation for `merge()`, there's a parameter for suffixes for overlapping column names. If we want to avoid the "messy" suffixes, we can choose to rename columns prior to merging.
One way is to assign an array to the columns object representing the column names for ALL columns.
```ipython
stats2012.columns = ['school_id', '2012']
stats2015.columns = ['school_id', '2015']
```
But this is not good if you have too many columns... `pandas` has a function `rename()` to which we can pass a dictionary mapping old column names to new ones. The `inplace` parameter makes the rename modify the `DataFrame` in place rather than returning a changed copy.
```ipython
stats2012.rename(columns={'enrollment': '2012'}, inplace=True)
stats2015.rename(columns={'enrollment': '2015'}, inplace=True)
```
```
# try the code above
stats2012.columns = ['school_id', '2012']
stats2015.columns = ['school_id', '2015']
stats2012.head()
stats2015.head()
## Merge the two dataframes using different "how" parameters
# how : {'left', 'right', 'outer', 'inner'}, default 'inner'
inner_res = pd.merge(stats2012, stats2015)
inner_res.head()
inner_res.isna().sum()
inner_res.shape
```
Play around with the `how` parameter and observe the following:
- shape of the dataframe
- presence or absence of null values
- number of schools dropped with respect to the original dataframe
```
outer_res = pd.merge(stats2012, stats2015, how="outer")
outer_res.isna().sum()
left_res = pd.merge(stats2012, stats2015, how="left")
left_res.isna().sum()
```
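One related option worth knowing (an aside, not part of the original exercise): `indicator=True` adds a `_merge` column recording where each row came from, which makes the kept and dropped rows easy to audit. A toy sketch:

```python
import pandas as pd

s2012 = pd.DataFrame({'school_id': [1, 2], '2012': [100, 250]})
s2015 = pd.DataFrame({'school_id': [2, 3], '2015': [260, 300]})

# indicator adds a _merge column: left_only, right_only, or both
res = pd.merge(s2012, s2015, how='outer', indicator=True)
print(res['_merge'].tolist())  # ['left_only', 'both', 'right_only']
```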
For the following items, we will only be using the 2015 dataset.
1. Which region has the most number of schools? Does this region also have the most number of enrollees?
```
deped2015.groupby(['region']).sum().sort_values(by='enrollment', ascending=False)
```
2. Which region has the least number of schools? Does this region also have the least number of enrollees?
```
deped2015.groupby(['region']).sum().sort_values(by='enrollment', ascending=True)
```
3. Which school has the least number of enrollees?
```
deped2015.groupby(['school_name']).sum().sort_values(by='enrollment', ascending=True)
```
# Building your Deep Neural Network: Step by Step
Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!
- In this notebook, you will implement all the functions required to build a deep neural network.
- In the next assignment, you will use these functions to build a deep neural network for image classification.
**After this assignment you will be able to:**
- Use non-linear units like ReLU to improve your model
- Build a deeper neural network (with more than 1 hidden layer)
- Implement an easy-to-use neural network class
**Notation**:
- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer.
- Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.
- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example.
- Subscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations.
Let's get started!
## 1 - Packages
Let's first import all the packages that you will need during this assignment.
- [numpy](http://www.numpy.org) is the main package for scientific computing with Python.
- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.
- dnn_utils provides some necessary functions for this notebook.
- testCases provides some test cases to assess the correctness of your functions.
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
```
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v4 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
```
## 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment; you will:
- Initialize the parameters for a two-layer network and for an $L$-layer neural network.
- Implement the forward propagation module (shown in purple in the figure below).
- Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
- We give you the ACTIVATION function (relu/sigmoid).
- Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
- Stack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.
- Compute the loss.
- Implement the backward propagation module (denoted in red in the figure below).
- Complete the LINEAR part of a layer's backward propagation step.
- We give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward).
- Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
- Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
- Finally update the parameters.
<img src="images/final outline.png" style="width:800px;height:500px;">
<caption><center> **Figure 1**</center></caption><br>
**Note** that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps.
## 3 - Initialization
You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.
### 3.1 - 2-layer Neural Network
**Exercise**: Create and initialize the parameters of the 2-layer neural network.
**Instructions**:
- The model's structure is: *LINEAR -> RELU -> LINEAR -> SIGMOID*.
- Use random initialization for the weight matrices. Use `np.random.randn(shape)*0.01` with the correct shape.
- Use zero initialization for the biases. Use `np.zeros(shape)`.
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
# W1 = None
# b1 = None
# W2 = None
# b2 = None
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros(shape=(n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros(shape=(n_y, 1))
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(3,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td> [[ 0.01624345 -0.00611756 -0.00528172]
[-0.01072969 0.00865408 -0.02301539]] </td>
</tr>
<tr>
<td> **b1**</td>
<td>[[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[ 0.01744812 -0.00761207]]</td>
</tr>
<tr>
<td> **b2** </td>
<td> [[ 0.]] </td>
</tr>
</table>
### 3.2 - L-layer Neural Network
The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the `initialize_parameters_deep` function, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus, for example, if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:
<table style="width:100%">
<tr>
<td> </td>
<td> **Shape of W** </td>
<td> **Shape of b** </td>
<td> **Activation** </td>
<td> **Shape of Activation** </td>
<tr>
<tr>
<td> **Layer 1** </td>
<td> $(n^{[1]},12288)$ </td>
<td> $(n^{[1]},1)$ </td>
<td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td>
<td> $(n^{[1]},209)$ </td>
<tr>
<tr>
<td> **Layer 2** </td>
<td> $(n^{[2]}, n^{[1]})$ </td>
<td> $(n^{[2]},1)$ </td>
<td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td>
<td> $(n^{[2]}, 209)$ </td>
<tr>
<tr>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$</td>
<td> $\vdots$ </td>
<tr>
<tr>
<td> **Layer L-1** </td>
<td> $(n^{[L-1]}, n^{[L-2]})$ </td>
<td> $(n^{[L-1]}, 1)$ </td>
<td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td>
<td> $(n^{[L-1]}, 209)$ </td>
<tr>
<tr>
<td> **Layer L** </td>
<td> $(n^{[L]}, n^{[L-1]})$ </td>
<td> $(n^{[L]}, 1)$ </td>
<td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>
<td> $(n^{[L]}, 209)$ </td>
<tr>
</table>
Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if:
$$ W = \begin{bmatrix}
j & k & l\\
m & n & o \\
p & q & r
\end{bmatrix}\;\;\; X = \begin{bmatrix}
a & b & c\\
d & e & f \\
g & h & i
\end{bmatrix} \;\;\; b =\begin{bmatrix}
s \\
t \\
u
\end{bmatrix}\tag{2}$$
Then $WX + b$ will be:
$$ WX + b = \begin{bmatrix}
(ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\
(ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\
(pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u
\end{bmatrix}\tag{3} $$
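A quick numpy check of this broadcasting behavior (arbitrary random shapes, unrelated to the assignment's data): adding the $(n, 1)$ vector $b$ to the $(n, m)$ matrix $WX$ repeats $b$ across all $m$ columns, exactly as in equation (3):

```python
import numpy as np

np.random.seed(0)
W = np.random.randn(3, 3)
X = np.random.randn(3, 4)   # m = 4 examples
b = np.random.randn(3, 1)   # column vector, broadcast over the 4 columns

Z = np.dot(W, X) + b        # shape (3, 4)

# Same computation one column at a time, with no broadcasting involved
Z_cols = np.stack([W @ X[:, i] + b[:, 0] for i in range(X.shape[1])], axis=1)
print(np.allclose(Z, Z_cols))   # the two agree
```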
**Exercise**: Implement initialization for an L-layer Neural Network.
**Instructions**:
- The model's structure is *[LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID*. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
- Use random initialization for the weight matrices. Use `np.random.randn(shape) * 0.01`.
- Use zeros initialization for the biases. Use `np.zeros(shape)`.
- We will store $n^{[l]}$, the number of units in different layers, in a variable `layer_dims`. For example, the `layer_dims` for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means `W1`'s shape was (4,2), `b1` was (4,1), `W2` was (1,4) and `b2` was (1,1). Now you will generalize this to $L$ layers!
- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).
```python
if L == 1:
parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
```
```
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
"""
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td>
</tr>
<tr>
<td>**b1** </td>
<td>[[ 0.]
[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2** </td>
<td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]</td>
</tr>
<tr>
<td>**b2** </td>
<td>[[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
</table>
## 4 - Forward propagation module
### 4.1 - Linear Forward
Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:
- LINEAR
- LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)
The linear forward module (vectorized over all the examples) computes the following equations:
$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$
where $A^{[0]} = X$.
**Exercise**: Build the linear part of forward propagation.
**Reminder**:
The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find `np.dot()` useful. If your dimensions don't match, printing `W.shape` may help.
```
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
"""
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
"""
### START CODE HERE ### (≈ 1 line of code)
# Z = None
Z = np.dot(W, A) + b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
```
**Expected output**:
<table style="width:35%">
<tr>
<td> **Z** </td>
<td> [[ 3.26295337 -1.23429987]] </td>
</tr>
</table>
### 4.2 - Linear-Activation Forward
In this notebook, you will use two activation functions:
- **Sigmoid**: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the `sigmoid` function. This function returns **two** items: the activation value "`a`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call:
``` python
A, activation_cache = sigmoid(Z)
```
- **ReLU**: The mathematical formula for ReLU is $A = RELU(Z) = max(0, Z)$. We have provided you with the `relu` function. This function returns **two** items: the activation value "`A`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call:
``` python
A, activation_cache = relu(Z)
```
For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.
**Exercise**: Implement the forward propagation of the *LINEAR->ACTIVATION* layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
```
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
"""
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
"""
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
```
**Expected output**:
<table style="width:35%">
<tr>
<td> **With sigmoid: A ** </td>
<td > [[ 0.96890023 0.11013289]]</td>
</tr>
<tr>
<td> **With ReLU: A ** </td>
<td > [[ 3.43896131 0. ]]</td>
</tr>
</table>
**Note**: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers.
### 4.3 - L-Layer Model
For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (`linear_activation_forward` with RELU) $L-1$ times, then follows that with one `linear_activation_forward` with SIGMOID.
<img src="images/model_architecture_kiank.png" style="width:600px;height:300px;">
<caption><center> **Figure 2** : *[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model</center></caption><br>
**Exercise**: Implement the forward propagation of the above model.
**Instruction**: In the code below, the variable `AL` will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called `Yhat`, i.e., this is $\hat{Y}$.)
**Tips**:
- Use the functions you had previously written
- Use a for loop to replicate [LINEAR->RELU] (L-1) times
- Don't forget to keep track of the caches in the "caches" list. To add a new value `c` to a `list`, you can use `list.append(c)`.
```
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
"""
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_activation_forward() (there are L of them, indexed from 0 to L-1)
"""
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# print(parameters.keys())
# print(L)
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = linear_activation_forward(A_prev, \
parameters['W' + str(l)], \
parameters['b' + str(l)], \
activation = "relu")
caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
AL, cache = linear_activation_forward(A, \
parameters['W' + str(L)], \
parameters['b' + str(L)], \
activation = "sigmoid")
caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case_2hidden()
# print(str(parameters))
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
```
**Expected output**:
<table style="width:50%">
<tr>
<td> **AL** </td>
<td > [[ 0.03921668 0.70498921 0.19734387 0.04728177]]</td>
</tr>
<tr>
<td> **Length of caches list ** </td>
<td > 3 </td>
</tr>
</table>
Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions.
## 5 - Cost function
Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.
**Exercise**: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right)) \tag{7}$$
```
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
"""
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
"""
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
logprobs = np.multiply(np.log(AL),Y) + np.multiply(np.log(1-AL),(1-Y))
cost = -(1/m) * np.sum(logprobs)
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
```
**Expected Output**:
<table>
<tr>
<td>**cost** </td>
<td> 0.41493159961539694</td>
</tr>
</table>
## 6 - Backward propagation module
Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters.
**Reminder**:
<img src="images/backprop_kiank.png" style="width:650px;height:250px;">
<caption><center> **Figure 3** : Forward and Backward propagation for *LINEAR->RELU->LINEAR->SIGMOID* <br> *The purple blocks represent the forward propagation, and the red blocks represent the backward propagation.* </center></caption>
<!--
For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:
$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$
In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.
Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.
This is why we talk about **backpropagation**.
-->
Now, similar to forward propagation, you are going to build the backward propagation in three steps:
- LINEAR backward
- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)
### 6.1 - Linear backward
For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).
Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.
<img src="images/linearback_kiank.png" style="width:250px;height:300px;">
<caption><center> **Figure 4** </center></caption>
The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$. Here are the formulas you need:
$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$
$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$
**Exercise**: Use the 3 formulas above to implement linear_backward().
```
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
"""
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = 1/m * np.dot(dZ, A_prev.T)
db = 1/m * np.sum(dZ, axis=1, keepdims=True)
dA_prev = np.dot(W.T, dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td> **dA_prev** </td>
<td > [[ 0.51822968 -0.19517421]
[-0.40506361 0.15255393]
[ 2.37496825 -0.89445391]] </td>
</tr>
<tr>
<td> **dW** </td>
<td > [[-0.10076895 1.40685096 1.64992505]] </td>
</tr>
<tr>
<td> **db** </td>
<td> [[ 0.50629448]] </td>
</tr>
</table>
### 6.2 - Linear-Activation backward
Next, you will create **`linear_activation_backward`**, a function that merges the **`linear_backward`** helper with the backward step for the activation.
To help you implement `linear_activation_backward`, we provided two backward functions:
- **`sigmoid_backward`**: Implements the backward propagation for SIGMOID unit. You can call it as follows:
```python
dZ = sigmoid_backward(dA, activation_cache)
```
- **`relu_backward`**: Implements the backward propagation for RELU unit. You can call it as follows:
```python
dZ = relu_backward(dA, activation_cache)
```
If $g(.)$ is the activation function,
`sigmoid_backward` and `relu_backward` compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$
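The two helpers are given to you, but it may help to see how such functions are typically implemented. A sketch under the assumption that `activation_cache` simply holds `Z` (the actual `dnn_utils_v2` code may differ in details):

```python
import numpy as np

def relu_backward_sketch(dA, Z):
    """dZ = dA * g'(Z) for g = ReLU: the gradient passes only where Z > 0."""
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    return dZ

def sigmoid_backward_sketch(dA, Z):
    """dZ = dA * s(Z) * (1 - s(Z)), where s is the sigmoid function."""
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)

dA = np.array([[1.0, -2.0, 0.5]])
Z = np.array([[2.0, -1.0, 0.0]])
print(relu_backward_sketch(dA, Z))      # gradient zeroed where Z <= 0
print(sigmoid_backward_sketch(dA, Z))
```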
**Exercise**: Implement the backpropagation for the *LINEAR->ACTIVATION* layer.
```
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
"""
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = relu_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = sigmoid_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
return dA_prev, dW, db
dAL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
```
**Expected output with sigmoid:**
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td >[[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.10266786 0.09778551 -0.01968084]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.05729622]] </td>
</tr>
</table>
**Expected output with relu:**
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td > [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.44513824 0.37371418 -0.10478989]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.20837892]] </td>
</tr>
</table>
### 6.3 - L-Model Backward
Now you will implement the backward function for the whole network. Recall that when you implemented the `L_model_forward` function, at each iteration, you stored a cache which contains (X, W, b, and Z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the `L_model_backward` function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass.
<img src="images/mn_backward.png" style="width:450px;height:300px;">
<caption><center> **Figure 5** : Backward pass </center></caption>
**Initializing backpropagation**:
To backpropagate through this network, we know that the output is,
$A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute `dAL` $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.
To do so, use this formula (derived using calculus which you don't need in-depth knowledge of):
```python
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
```
You can then use this post-activation gradient `dAL` to keep going backward. As seen in Figure 5, you can now feed in `dAL` into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a `for` loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula :
$$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$
For example, for $l=3$ this would store $dW^{[l]}$ in `grads["dW3"]`.
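The `dAL` formula above can be sanity-checked numerically: it should match a finite-difference derivative of the per-example cross-entropy loss with respect to `AL` (a standalone check, independent of the graded functions):

```python
import numpy as np

def xent(a, y):
    """Per-example cross-entropy loss, elementwise."""
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

AL = np.array([[0.8, 0.3]])
Y = np.array([[1.0, 0.0]])

dAL = -(np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))   # analytic gradient

eps = 1e-7                                             # central finite difference
dAL_num = (xent(AL + eps, Y) - xent(AL - eps, Y)) / (2 * eps)
print(np.allclose(dAL, dAL_num, atol=1e-5))            # the two gradients agree
```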
**Exercise**: Implement backpropagation for the *[LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model.
```
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
"""
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
"""
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients.
# Inputs: "dAL, current_cache".
# Outputs: "grads["dAL-1"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
current_cache = caches[-1]
grads["dA" + str(L-1)], grads["dW" + str(L)], grads["db" + str(L)] \
= linear_activation_backward(dAL, current_cache, activation="sigmoid")
### END CODE HERE ###
# Loop from l=L-2 to l=0
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 1)], current_cache".
# Outputs: "grads["dA" + str(l)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
current_cache = caches[l]
# dA_prev_temp, dW_temp, db_temp = None
# grads["dA" + str(l)] = None
# grads["dW" + str(l + 1)] = None
# grads["db" + str(l + 1)] = None
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l+1)], \
current_cache, \
activation="relu")
grads["dA" + str(l)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print_grads(grads)
```
**Expected Output**
<table style="width:60%">
<tr>
<td > dW1 </td>
<td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]
[ 0. 0. 0. 0. ]
[ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td>
</tr>
<tr>
<td > db1 </td>
<td > [[-0.22007063]
[ 0. ]
[-0.02835349]] </td>
</tr>
<tr>
<td > dA1 </td>
<td > [[ 0.12913162 -0.44014127]
[-0.14175655 0.48317296]
[ 0.01663708 -0.05670698]] </td>
</tr>
</table>
### 6.4 - Update Parameters
In this section you will update the parameters of the model, using gradient descent:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$
where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary.
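As a quick numeric sanity check of equations (16)-(17), here is one gradient-descent step with toy values (a made-up 1x2 weight matrix and $\alpha = 0.1$, not taken from the assignment's test case):

```python
import numpy as np

# One gradient-descent step on a toy 1x2 weight matrix with alpha = 0.1.
W = np.array([[1.0, 2.0]])
dW = np.array([[0.5, -0.5]])
alpha = 0.1
W_new = W - alpha * dW  # each entry moves opposite its gradient
```

so `W_new` is `[[0.95, 2.05]]`: each weight takes a small step against its gradient, and a negative gradient increases the weight.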
**Exercise**: Implement `update_parameters()` to update your parameters using gradient descent.
**Instructions**:
Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
```
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
"""
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(L):
# parameters["W" + str(l+1)] = None
# parameters["b" + str(l+1)] = None
parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * grads["dW" + str(l + 1)]
parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - learning_rate * grads["db" + str(l + 1)]
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
```
**Expected Output**:
<table style="width:100%">
<tr>
<td > W1 </td>
<td > [[-0.59562069 -0.09991781 -2.14584584 1.82662008]
[-1.76569676 -0.80627147 0.51115557 -1.18258802]
[-1.0535704 -0.86128581 0.68284052 2.20374577]] </td>
</tr>
<tr>
<td > b1 </td>
<td > [[-0.04659241]
[-1.28888275]
[ 0.53405496]] </td>
</tr>
<tr>
<td > W2 </td>
<td > [[-0.55569196 0.0354055 1.32964895]]</td>
</tr>
<tr>
<td > b2 </td>
<td > [[-0.84610769]] </td>
</tr>
</table>
## 7 - Conclusion
Congrats on implementing all the functions required for building a deep neural network!
We know it was a long assignment, but going forward it will only get better. The next part of the assignment is easier.
In the next assignment you will put all these together to build two models:
- A two-layer neural network
- An L-layer neural network
You will in fact use these models to classify cat vs non-cat images!
# Info Extraction
It is much easier to extract model information from a PyTorch module than from ONNX: an ONNX graph does not store intermediate output shapes by default (though `onnx.shape_inference.infer_shapes` can annotate them).
```
import onnx
# Load the ONNX model
model = onnx.load("onnx/vgg19.onnx")
# Check that the IR is well formed
onnx.checker.check_model(model)
# Print a human readable representation of the graph
print(onnx.helper.printable_graph(model.graph))
#import onnx_caffe2.backend as backend
import onnx_tf.backend as backend
import numpy as np
import time
```
## Find Graph Edge (each link)
A Node is an operation, indexed from 0; an Entity is a data object, indexed from u'1' (meaning %1).
Basically, after iterating over every node once, every Entity will have been visited.
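The two lookup tables can be chained to walk the graph. A toy illustration with made-up indices (not from the VGG-19 graph):

```python
# Two-node toy graph: node 0 -> entity "1" -> node 1 -> entity "2".
Node2nextEntity = {0: "1", 1: "2"}   # node index -> entity it produces
Entity2nextNode = {"1": 1}           # entity -> node that consumes it

n = 0
e = Node2nextEntity[n]    # entity produced by node 0
nxt = Entity2nextNode[e]  # node consuming that entity
```

This node-to-entity-to-node hop is the basic step the sequence matcher repeats.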
```
def get_graph_order():
    Node2nextEntity = {}
    Entity2nextNode = {}
    for Node_idx, node in enumerate(model.graph.node):
        # node inputs: remember the first node that consumes each entity
        for Entity_idx in node.input:
            if Entity_idx not in Entity2nextNode:
                Entity2nextNode[Entity_idx] = Node_idx
        # node outputs: remember the entity each node produces
        for Entity_idx in node.output:
            if Node_idx not in Node2nextEntity:
                Node2nextEntity[Node_idx] = Entity_idx
    return Node2nextEntity, Entity2nextNode
Node2nextEntity, Entity2nextNode = get_graph_order()
len(Node2nextEntity), len(Entity2nextNode)
import pickle
pickle.dump(Node2nextEntity,open('onnx/vgg19_Node2nextEntity_dict.pkl','wb'))
pickle.dump(Entity2nextNode,open('onnx/vgg19_Entity2nextNode_dict.pkl','wb'))
```
## Get Subgroup
```
import pickle
Node2nextEntity = pickle.load(open('onnx/vgg19_Node2nextEntity_dict.pkl','rb'))
Entity2nextNode = pickle.load(open('onnx/vgg19_Entity2nextNode_dict.pkl','rb'))
def find_sequencial_nodes(search_target=['Conv', 'Add', 'Relu', 'MaxPool'], if_print=False):
    found_nodes = []
    for i, node in enumerate(model.graph.node):
        if if_print: print("\nnode[{}] ...".format(i))
        n_idx = i  # init
        is_fit = True
        for tar in search_target:
            try:
                assert model.graph.node[n_idx].op_type == tar  # check this node
                if if_print: print("node[{}] fit op_type [{}]".format(n_idx, tar))
                e_idx = Node2nextEntity[n_idx]  # find next Entity
                n_idx = Entity2nextNode[e_idx]  # find next Node
            except (AssertionError, KeyError, IndexError):
                is_fit = False
                if if_print: print("node[{}] doesn't fit op_type [{}]".format(n_idx, tar))
                break
        if is_fit:
            if if_print: print("node[{}] ...fit!".format(i))
            found_nodes.append(i)
        else:
            if if_print: print("node[{}] ...NOT fit!".format(i))
    if if_print: print("\nNode{} fit the matching pattern".format(found_nodes))
    return found_nodes
find_sequencial_nodes(search_target=['Conv', 'Add', 'Relu'], if_print = True)
find_sequencial_nodes(search_target=['Conv', 'Add', 'Relu', 'MaxPool'], if_print = False)
import itertools
def get_permutations(a):
    p = []
    for r in range(len(a) + 1):
        c = list(itertools.combinations(a, r))
        for cc in c:
            p += list(itertools.permutations(cc))
    return p
#a = [4,5,6]
#get_permutations(a)
search_head = ['Conv']
followings = ['Add', 'Relu', 'MaxPool']
search_targets = [ search_head+list(foll) for foll in get_permutations(followings)]
search_targets
matchings = [find_sequencial_nodes(search_target) for search_target in search_targets]
for i,matching in enumerate(matchings):
if matching!=[]:
print("\nsearch:{}, \nget matching node:{}".format(search_targets[i],matching))
```
<a href="https://colab.research.google.com/github/TheoPantaz/Motor-Imagery-Classification-with-Tensorflow-and-MNE/blob/master/Motor_Imagery_clsf.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Install mne
```
!pip install mne
```
Import libraries
```
import scipy.io as sio
import sklearn.preprocessing as skpr
import mne
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
```
Import data
```
from google.colab import drive
drive.mount('/content/drive')
def import_from_mat(filename):
    dataset = sio.loadmat(filename, chars_as_strings=True)
    return dataset['EEG'], dataset['LABELS'].flatten(), dataset['Fs'][0][0], dataset['events'].T
filename = '/content/drive/My Drive/PANTAZ_s2'
EEG, LABELS, Fs, events = import_from_mat(filename)
```
Standardize data
```
def standardize(data):
    scaler = skpr.StandardScaler()
    return scaler.fit_transform(data)
EEG = standardize(EEG)
```
Create mne object
```
channel_names = ['c1', 'c2', 'c3', 'c4', 'cp1', 'cp2', 'cp3', 'cp4']
channel_type = 'eeg'
def create_mne_object(EEG, channel_names, channel_type):
    info = mne.create_info(channel_names, Fs, ch_types=channel_type)
    raw = mne.io.RawArray(EEG.T, info)
    return raw
raw = create_mne_object(EEG, channel_names, channel_type)
```
filtering
```
def filtering(raw, low_freq, high_freq):
    # Notch filtering at the power-line frequency and its harmonic
    freqs = (50, 100)
    raw = raw.notch_filter(freqs=freqs)
    # Apply band-pass filter
    raw.filter(low_freq, high_freq, fir_design='firwin', skip_by_annotation='edge')
    return raw
low_freq = 7.
high_freq = 30.
filtered = filtering(raw, low_freq, high_freq)
```
Epoching the data
> IM_dur: duration of the original epoch, in seconds
> last_start_of_epoch: the point (as a fraction of the original epoch) at which the last new epoch starts
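A toy numeric sketch of this sliding-window scheme, using the same parameter values as below but a made-up event onset and assuming Fs = 250 Hz:

```python
import numpy as np

Fs = 250
IM_dur = int(4 * Fs)             # original epoch length: 1000 samples
step = int((1 / 250) * IM_dur)   # 4 samples between successive new epochs
last = int(0.5 * IM_dur)         # last new epoch starts 500 samples in
onset = 1000                     # made-up event onset (sample index)
starts = np.arange(onset, onset + last, step)  # overlapping epoch starts
```

With these numbers a single 4 s trial yields 125 overlapping epochs, which is where the extra training examples come from.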
```
def Epoch_Setup(events, IM_dur, step, last_start_of_epoch):
    IM_dur = int(IM_dur * Fs)
    step = int(step * IM_dur)
    last_start_of_epoch = int(last_start_of_epoch * IM_dur)
    print(last_start_of_epoch)
    steps_sum = int(last_start_of_epoch / step)
    new_events = [[], [], []]
    for index in events:
        new_events[0].extend(np.arange(index[0], index[0] + last_start_of_epoch, step))
        new_events[1].extend([0] * steps_sum)
        new_events[2].extend([index[-1]] * steps_sum)
    new_events = np.array(new_events).T
    return new_events

def Epochs(data, events, tmin, tmax):
    epochs = mne.Epochs(data, events=events, tmin=tmin, tmax=tmax, preload=True, baseline=None, proj=True)
    epoched_data = epochs.get_data()
    labels = epochs.events[:, -1]
    return epoched_data, labels
IM_dur = 4
step = 1/250
last_start_of_epoch = 0.5
tmix = -1
tmax = 2
new_events = Epoch_Setup(events, IM_dur, step, last_start_of_epoch)
epoched_data, labels = Epochs(filtered, new_events, tmix, tmax)
```
Split training and testing data
```
def data_split(data, labels, split):
    split = int(split * data.shape[0])
    # Use the `data` parameter, not the global `epoched_data`.
    X_train = data[:split]
    X_test = data[split:]
    Y_train = labels[:split]
    Y_test = labels[split:]
    return X_train, X_test, Y_train, Y_test
split = 0.5
X_train, X_test, Y_train, Y_test = data_split(epoched_data, labels, split)
print(X_train.shape)
print(Y_train.shape)
```
CSP fit and transform
```
components = 8
csp = mne.decoding.CSP(n_components=components, reg='oas', log = None, norm_trace=True)
X_train = csp.fit_transform(X_train, Y_train)
X_test = csp.transform(X_test)
```
Data reshape for Tensorflow model
> Create batches for LSTM
```
def reshape_data(X_train, X_test, labels, final_reshape):
    X_train = np.reshape(X_train, (int(X_train.shape[0] / final_reshape), final_reshape, X_train.shape[-1]))
    X_test = np.reshape(X_test, (int(X_test.shape[0] / final_reshape), final_reshape, X_test.shape[-1]))
    n_labels = []
    for i in range(0, len(labels), final_reshape):
        n_labels.append(labels[i])
    Labels = np.array(n_labels)
    Y_train = Labels[:X_train.shape[0]] - 1
    Y_test = Labels[X_train.shape[0]:] - 1
    return X_train, X_test, Y_train, Y_test
reshape_factor = int(last_start_of_epoch / step)
final_reshape = int(reshape_factor)
X_train, X_test, Y_train, Y_test = reshape_data(X_train, X_test, labels, final_reshape)
```
Create tensorflow model
```
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(128, input_shape=[None, X_train.shape[-1]], return_sequences=True),
    tf.keras.layers.LSTM(256),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.Adam(lr=0.0001), metrics=['accuracy'])
model.summary()
```
Model fit
```
history = model.fit(X_train, Y_train, epochs= 50, batch_size = 25, validation_data=(X_test, Y_test), verbose=1)
```
Accuracy and plot loss
```
%matplotlib inline
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'r', label='Training Loss')
plt.plot(epochs, val_loss, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
Running classifier
```
tmin = -1
tmax = 4
epoched_data_running, labels_running = Epochs(filtered, events, tmin, tmax)
split = 0.5
split = int(split * epoched_data_running.shape[0])
X_test_running = epoched_data_running[split:]
Y_test_running = LABELS[split:-1] - 1
w_length = int(Fs * 1.5) # running classifier: window length
w_step = int(Fs/250) # running classifier: window step size
w_start = np.arange(0, X_test_running.shape[2] - w_length, w_step)
final_reshape = int(reshape_factor/4)
scores = []
batch_data = []
for i, n in enumerate(w_start):
    data = csp.transform(X_test_running[..., n:n + w_length])
    batch_data.append(data)
    if (i + 1) % final_reshape == 0:
        batch_data = np.transpose(np.array(batch_data), (1, 0, 2))
        scores.append(model.evaluate(batch_data, Y_test_running))
        batch_data = []
scores = np.array(scores)
w_times = (np.arange(0, X_test_running.shape[2] - w_length, final_reshape * w_step) + w_length / 2.) / Fs + tmin
w_times = w_times[:-1]
plt.figure()
plt.plot(w_times, scores[:,1], label='Score')
plt.axvline(0, linestyle='--', color='k', label='Onset')
plt.axhline(0.5, linestyle='-', color='k', label='Chance')
plt.xlabel('time (s)')
plt.ylabel('classification accuracy')
plt.title('Classification score over time')
plt.legend(loc='lower right')
plt.show()
```
# Titanic Data Science Solutions
### This notebook is a companion to the book [Data Science Solutions](https://www.amazon.com/Data-Science-Solutions-Startup-Workflow/dp/1520545312).
The notebook walks us through a typical workflow for solving data science competitions at sites like Kaggle.
There are several excellent notebooks to study data science competition entries. However, many skip some of the explanation of how the solution is developed, as they are written by experts for experts. The objective of this notebook is to follow a step-by-step workflow, explaining each step and the rationale for every decision we take during solution development.
## Workflow stages
The competition solution workflow goes through seven stages described in the Data Science Solutions book.
1. Question or problem definition.
2. Acquire training and testing data.
3. Wrangle, prepare, cleanse the data.
4. Analyze, identify patterns, and explore the data.
5. Model, predict and solve the problem.
6. Visualize, report, and present the problem solving steps and final solution.
7. Supply or submit the results.
The workflow indicates the general sequence in which each stage may follow the other. However, there are use cases with exceptions.
- We may combine multiple workflow stages. We may analyze by visualizing data.
- Perform a stage earlier than indicated. We may analyze data before and after wrangling.
- Perform a stage multiple times in our workflow. Visualize stage may be used multiple times.
- Drop a stage altogether. We may not need supply stage to productize or service enable our dataset for a competition.
## Question and problem definition
Competition sites like Kaggle define the problem to solve or questions to ask while providing the datasets for training your data science model and testing the model results against a test dataset. The question or problem definition for Titanic Survival competition is [described here at Kaggle](https://www.kaggle.com/c/titanic).
> Given a training set of samples listing passengers who survived or did not survive the Titanic disaster, can our model determine, for a test dataset that does not contain the survival information, whether each passenger in the test set survived?
We may also want to develop some early understanding about the domain of our problem. This is described on the [Kaggle competition description page here](https://www.kaggle.com/c/titanic). Here are the highlights to note.
- On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. That translates to a 32% survival rate.
- One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew.
- Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper-class.
## Workflow goals
The data science solutions workflow solves for seven major goals.
**Classifying.** We may want to classify or categorize our samples. We may also want to understand the implications or correlation of different classes with our solution goal.
**Correlating.** One can approach the problem based on available features within the training dataset. Which features within the dataset contribute significantly to our solution goal? Statistically speaking, is there a [correlation](https://en.wikiversity.org/wiki/Correlation) between a feature and the solution goal? As the feature values change, does the solution state change as well, and vice versa? This can be tested both for numerical and categorical features in the given dataset. We may also want to determine correlation among features other than survival for subsequent goals and workflow stages. Correlating certain features may help in creating, completing, or correcting features.
**Converting.** For modeling stage, one needs to prepare the data. Depending on the choice of model algorithm one may require all features to be converted to numerical equivalent values. So for instance converting text categorical values to numeric values.
**Completing.** Data preparation may also require us to estimate any missing values within a feature. Model algorithms may work best when there are no missing values.
**Correcting.** We may also analyze the given training dataset for errors or possibly inaccurate values within features and try to correct these values or exclude the samples containing the errors. One way to do this is to detect any outliers among our samples or features. We may also completely discard a feature if it is not contributing to the analysis or may significantly skew the results.
**Creating.** Can we create new features based on an existing feature or a set of features, such that the new feature follows the correlation, conversion, completeness goals.
**Charting.** How to select the right visualization plots and charts depending on nature of the data and the solution goals.
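The Completing and Converting goals above can be sketched on a toy frame (the column names here are made up for illustration, not Titanic features):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"color": ["red", "blue", None, "red"],
                   "size": [1.0, np.nan, 3.0, 4.0]})

# Completing: fill missing values (mode for categorical, median for numeric).
df["color"] = df["color"].fillna(df["color"].mode()[0])
df["size"] = df["size"].fillna(df["size"].median())

# Converting: map text categories to numeric codes.
df["color_code"] = df["color"].map({"red": 0, "blue": 1})
```

After this, `color_code` is `[0, 1, 0, 0]` and `size` is `[1.0, 3.0, 3.0, 4.0]`, so every column is numeric and complete.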
## Refactor Release 2017-Jan-29
We are significantly refactoring the notebook based on (a) comments received by readers, (b) issues in porting notebook from Jupyter kernel (2.7) to Kaggle kernel (3.5), and (c) review of few more best practice kernels.
### User comments
- Combine training and test data for certain operations like converting titles across dataset to numerical values. (thanks @Sharan Naribole)
- Correct observation - nearly 30% of the passengers had siblings and/or spouses aboard. (thanks @Reinhard)
- Correctly interpreting logistic regression coefficients. (thanks @Reinhard)
### Porting issues
- Specify plot dimensions, bring legend into plot.
### Best practices
- Performing feature correlation analysis early in the project.
- Using multiple plots instead of overlays for readability.
```
# data analysis and wrangling
import pandas as pd
import numpy as np
import random as rnd
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# machine learning
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
```
## Acquire data
The Python Pandas package helps us work with our datasets. We start by acquiring the training and testing datasets into Pandas DataFrames. We also combine these datasets to run certain operations on both datasets together.
```
train_df = pd.read_csv('../input/train.csv')
test_df = pd.read_csv('../input/test.csv')
combine = [train_df, test_df]
```
## Analyze by describing data
Pandas also helps describe the datasets, answering the following questions early in our project.
**Which features are available in the dataset?**
Noting the feature names for directly manipulating or analyzing these. These feature names are described on the [Kaggle data page here](https://www.kaggle.com/c/titanic/data).
```
print(train_df.columns.values)
```
**Which features are categorical?**
These values classify the samples into sets of similar samples. Within categorical features are the values nominal, ordinal, ratio, or interval based? Among other things this helps us select the appropriate plots for visualization.
- Categorical: Survived, Sex, and Embarked. Ordinal: Pclass.
**Which features are numerical?**
These values change from sample to sample. Within numerical features, are the values discrete, continuous, or timeseries based? Among other things this helps us select the appropriate plots for visualization.
- Continuous: Age, Fare. Discrete: SibSp, Parch.
```
# preview the data
train_df.head()
```
**Which features are mixed data types?**
Numerical, alphanumeric data within same feature. These are candidates for correcting goal.
- Ticket is a mix of numeric and alphanumeric data types. Cabin is alphanumeric.
**Which features may contain errors or typos?**
This is harder to review for a large dataset, however reviewing a few samples from a smaller dataset may just tell us outright, which features may require correcting.
- Name feature may contain errors or typos as there are several ways used to describe a name including titles, round brackets, and quotes used for alternative or short names.
```
train_df.tail()
```
**Which features contain blank, null or empty values?**
These will require correcting.
- Cabin > Age > Embarked features contain a number of null values in that order for the training dataset.
- Cabin > Age are incomplete in case of test dataset.
**What are the data types for various features?**
Helping us during converting goal.
- Seven features are integer or floats. Six in case of test dataset.
- Five features are strings (object).
```
train_df.info()
print('_'*40)
test_df.info()
```
**What is the distribution of numerical feature values across the samples?**
This helps us determine, among other early insights, how representative is the training dataset of the actual problem domain.
- Total samples are 891 or 40% of the actual number of passengers on board the Titanic (2,224).
- Survived is a categorical feature with 0 or 1 values.
- Around 38% samples survived representative of the actual survival rate at 32%.
- Most passengers (> 75%) did not travel with parents or children.
- Nearly 30% of the passengers had siblings and/or spouse aboard.
- Fares varied significantly with few passengers (<1%) paying as high as $512.
- Few elderly passengers (<1%) within age range 65-80.
```
train_df.describe()
# Review survived rate using `percentiles=[.61, .62]` knowing our problem description mentions 38% survival rate.
# Review Parch distribution using `percentiles=[.75, .8]`
# SibSp distribution `[.68, .69]`
# Age and Fare `[.1, .2, .3, .4, .5, .6, .7, .8, .9, .99]`
```
**What is the distribution of categorical features?**
- Names are unique across the dataset (count=unique=891)
- Sex variable has two possible values with 65% male (top=male, freq=577/count=891).
- Cabin values have several duplicates across samples. Alternatively several passengers shared a cabin.
- Embarked takes three possible values. S port used by most passengers (top=S)
- Ticket feature has high ratio (22%) of duplicate values (unique=681).
```
train_df.describe(include=['O'])
```
### Assumptions based on data analysis
We arrive at the following assumptions based on the data analysis done so far. We may validate these assumptions further before taking appropriate actions.
**Correlating.**
We want to know how well each feature correlates with Survival. We want to do this early in our project and match these quick correlations with modelled correlations later in the project.
**Completing.**
1. We may want to complete Age feature as it is definitely correlated to survival.
2. We may want to complete the Embarked feature as it may also correlate with survival or another important feature.
**Correcting.**
1. Ticket feature may be dropped from our analysis as it contains high ratio of duplicates (22%) and there may not be a correlation between Ticket and survival.
2. Cabin feature may be dropped as it is highly incomplete or contains many null values both in training and test dataset.
3. PassengerId may be dropped from training dataset as it does not contribute to survival.
4. Name feature is relatively non-standard and may not contribute directly to survival, so it may be dropped.
**Creating.**
1. We may want to create a new feature called Family based on Parch and SibSp to get total count of family members on board.
2. We may want to engineer the Name feature to extract Title as a new feature.
3. We may want to create a new feature for Age bands. This turns a continuous numerical feature into an ordinal categorical feature.
4. We may also want to create a Fare range feature if it helps our analysis.
**Classifying.**
We may also add to our assumptions based on the problem description noted earlier.
1. Women (Sex=female) were more likely to have survived.
2. Children (Age<?) were more likely to have survived.
3. The upper-class passengers (Pclass=1) were more likely to have survived.
## Analyze by pivoting features
To confirm some of our observations and assumptions, we can quickly analyze our feature correlations by pivoting features against each other. We can only do so at this stage for features which do not have any empty values. It also makes sense doing so only for features which are categorical (Sex), ordinal (Pclass) or discrete (SibSp, Parch) type.
- **Pclass** We observe significant correlation (>0.5) among Pclass=1 and Survived (classifying #3). We decide to include this feature in our model.
- **Sex** We confirm the observation during problem definition that Sex=female had very high survival rate at 74% (classifying #1).
- **SibSp and Parch** These features have zero correlation for certain values. It may be best to derive a feature or a set of features from these individual features (creating #1).
```
train_df[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["Sex", "Survived"]].groupby(['Sex'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["SibSp", "Survived"]].groupby(['SibSp'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["Parch", "Survived"]].groupby(['Parch'], as_index=False).mean().sort_values(by='Survived', ascending=False)
```
## Analyze by visualizing data
Now we can continue confirming some of our assumptions using visualizations for analyzing the data.
### Correlating numerical features
Let us start by understanding correlations between numerical features and our solution goal (Survived).
A histogram chart is useful for analyzing continuous numerical variables like Age, where banding or ranges will help identify useful patterns. The histogram can indicate the distribution of samples using automatically defined bins or equally ranged bands. This helps us answer questions relating to specific bands (Did infants have a better survival rate?)
Note that the y-axis in histogram visualizations represents the count of samples or passengers, while the x-axis shows the Age values.
**Observations.**
- Infants (Age <=4) had high survival rate.
- Oldest passengers (Age = 80) survived.
- Large number of 15-25 year olds did not survive.
- Most passengers are in 15-35 age range.
**Decisions.**
This simple analysis confirms our assumptions as decisions for subsequent workflow stages.
- We should consider Age (our assumption classifying #2) in our model training.
- Complete the Age feature for null values (completing #1).
- We should band age groups (creating #3).
```
g = sns.FacetGrid(train_df, col='Survived')
g.map(plt.hist, 'Age', bins=20)
```
### Correlating numerical and ordinal features
We can combine multiple features for identifying correlations using a single plot. This can be done with numerical and categorical features which have numeric values.
**Observations.**
- Pclass=3 had most passengers, however most did not survive. Confirms our classifying assumption #2.
- Infant passengers in Pclass=2 and Pclass=3 mostly survived. Further qualifies our classifying assumption #2.
- Most passengers in Pclass=1 survived. Confirms our classifying assumption #3.
- Pclass varies in terms of Age distribution of passengers.
**Decisions.**
- Consider Pclass for model training.
```
# grid = sns.FacetGrid(train_df, col='Pclass', hue='Survived')
grid = sns.FacetGrid(train_df, col='Survived', row='Pclass', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend();
```
### Correlating categorical features
Now we can correlate categorical features with our solution goal.
**Observations.**
- Female passengers had much better survival rate than males. Confirms classifying (#1).
- Exception in Embarked=C where males had higher survival rate. This could be a correlation between Pclass and Embarked and in turn Pclass and Survived, not necessarily direct correlation between Embarked and Survived.
- Males had better survival rate in Pclass=3 when compared with Pclass=2 for C and Q ports. Completing (#2).
- Ports of embarkation have varying survival rates for Pclass=3 and among male passengers. Correlating (#1).
**Decisions.**
- Add Sex feature to model training.
- Complete and add Embarked feature to model training.
```
# grid = sns.FacetGrid(train_df, col='Embarked')
grid = sns.FacetGrid(train_df, row='Embarked', size=2.2, aspect=1.6)
grid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep')
grid.add_legend()
```
### Correlating categorical and numerical features
We may also want to correlate categorical features (with non-numeric values) and numeric features. We can consider correlating Embarked (Categorical non-numeric), Sex (Categorical non-numeric), Fare (Numeric continuous), with Survived (Categorical numeric).
**Observations.**
- Higher fare paying passengers had better survival. Confirms our assumption for creating (#4) fare ranges.
- Port of embarkation correlates with survival rates. Confirms correlating (#1) and completing (#2).
**Decisions.**
- Consider banding Fare feature.
```
# grid = sns.FacetGrid(train_df, col='Embarked', hue='Survived', palette={0: 'k', 1: 'w'})
grid = sns.FacetGrid(train_df, row='Embarked', col='Survived', size=2.2, aspect=1.6)
grid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None)
grid.add_legend()
```
## Wrangle data
We have collected several assumptions and decisions regarding our datasets and solution requirements. So far we did not have to change a single feature or value to arrive at these. Let us now execute our decisions and assumptions for correcting, creating, and completing goals.
### Correcting by dropping features
This is a good starting goal to execute. By dropping features we are dealing with fewer data points. Speeds up our notebook and eases the analysis.
Based on our assumptions and decisions we want to drop the Cabin (correcting #2) and Ticket (correcting #1) features.
Note that where applicable we perform operations on both training and testing datasets together to stay consistent.
```
print("Before", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]
"After", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape
```
### Creating new feature extracting from existing
We want to analyze if Name feature can be engineered to extract titles and test correlation between titles and survival, before dropping Name and PassengerId features.
In the following code we extract the Title feature using regular expressions. The RegEx pattern `' ([A-Za-z]+)\.'` matches the first word that ends with a dot character within the Name feature. The `expand=False` flag returns a Series rather than a DataFrame.
**Observations.**
When we plot Title, Age, and Survived, we note the following observations.
- Most titles band Age groups accurately. For example: Master title has Age mean of 5 years.
- Survival among Title Age bands varies slightly.
- Certain titles mostly survived (Mme, Lady, Sir) or did not (Don, Rev, Jonkheer).
**Decision.**
- We decide to retain the new Title feature for model training.
```
for dataset in combine:
    dataset['Title'] = dataset.Name.str.extract(r' ([A-Za-z]+)\.', expand=False)
pd.crosstab(train_df['Title'], train_df['Sex'])
```
We can replace many titles with a more common name or classify them as `Rare`.
```
for dataset in combine:
    dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess', 'Capt', 'Col',
                                                 'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
    dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
    dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
    dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')
train_df[['Title', 'Survived']].groupby(['Title'], as_index=False).mean()
```
We can convert the categorical titles to ordinal.
```
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
for dataset in combine:
    dataset['Title'] = dataset['Title'].map(title_mapping)
    dataset['Title'] = dataset['Title'].fillna(0)
train_df.head()
```
Now we can safely drop the Name feature from training and testing datasets. We also do not need the PassengerId feature in the training dataset.
```
train_df = train_df.drop(['Name', 'PassengerId'], axis=1)
test_df = test_df.drop(['Name'], axis=1)
combine = [train_df, test_df]
train_df.shape, test_df.shape
```
### Converting a categorical feature
Now we can convert features which contain strings to numerical values. This is required by most model algorithms. Doing so will also help us in achieving the feature completing goal.
Let us start by converting Sex feature to a new feature called Gender where female=1 and male=0.
```
for dataset in combine:
    dataset['Sex'] = dataset['Sex'].map( {'female': 1, 'male': 0} ).astype(int)
train_df.head()
```
### Completing a numerical continuous feature
Now we should start estimating and completing features with missing or null values. We will first do this for the Age feature.
We can consider three methods to complete a numerical continuous feature.
1. A simple way is to generate random numbers between mean and [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation).
2. More accurate way of guessing missing values is to use other correlated features. In our case we note correlation among Age, Gender, and Pclass. Guess Age values using [median](https://en.wikipedia.org/wiki/Median) values for Age across sets of Pclass and Gender feature combinations. So, median Age for Pclass=1 and Gender=0, Pclass=1 and Gender=1, and so on...
3. Combine methods 1 and 2. So instead of guessing age values based on median, use random numbers between mean and standard deviation, based on sets of Pclass and Gender combinations.
Method 1 and 3 will introduce random noise into our models. The results from multiple executions might vary. We will prefer method 2.
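For contrast, method 3 (a random draw between mean minus std and mean plus std, per Pclass and Gender group) could be sketched as below. The toy DataFrame and `rng` are illustrative stand-ins, not part of the tutorial's actual pipeline:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy frame standing in for train_df: Age has gaps to fill
toy = pd.DataFrame({
    'Sex': [0, 0, 1, 1, 0, 1],
    'Pclass': [1, 1, 2, 2, 1, 2],
    'Age': [40.0, np.nan, 30.0, np.nan, 38.0, 28.0],
})

for (sex, pclass), grp in toy.groupby(['Sex', 'Pclass']):
    ages = grp['Age'].dropna()
    mean, std = ages.mean(), ages.std()
    if np.isnan(std):  # single observation in the group: fall back to the mean
        std = 0.0
    mask = toy['Age'].isnull() & (toy['Sex'] == sex) & (toy['Pclass'] == pclass)
    # Method 3: a random draw from [mean - std, mean + std] per group
    toy.loc[mask, 'Age'] = rng.uniform(mean - std, mean + std)

print(toy['Age'].isnull().sum())
```

Because the filled values depend on the random seed, repeated runs give different imputations, which is exactly the noise that makes the deterministic method 2 preferable.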
```
# grid = sns.FacetGrid(train_df, col='Pclass', hue='Gender')
grid = sns.FacetGrid(train_df, row='Pclass', col='Sex', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend()
```
Let us start by preparing an empty array to contain guessed Age values based on Pclass x Gender combinations.
```
guess_ages = np.zeros((2,3))
guess_ages
```
Now we iterate over Sex (0 or 1) and Pclass (1, 2, 3) to calculate guessed values of Age for the six combinations.
```
for dataset in combine:
    for i in range(0, 2):
        for j in range(0, 3):
            guess_df = dataset[(dataset['Sex'] == i) &
                               (dataset['Pclass'] == j+1)]['Age'].dropna()
            # age_mean = guess_df.mean()
            # age_std = guess_df.std()
            # age_guess = rnd.uniform(age_mean - age_std, age_mean + age_std)
            age_guess = guess_df.median()
            # Convert the guessed age float to the nearest .5 age
            guess_ages[i, j] = int(age_guess / 0.5 + 0.5) * 0.5
    for i in range(0, 2):
        for j in range(0, 3):
            dataset.loc[(dataset.Age.isnull()) & (dataset.Sex == i) & (dataset.Pclass == j+1),
                        'Age'] = guess_ages[i, j]
    dataset['Age'] = dataset['Age'].astype(int)
train_df.head()
```
Let us create Age bands and determine correlations with Survived.
```
train_df['AgeBand'] = pd.cut(train_df['Age'], 5)
train_df[['AgeBand', 'Survived']].groupby(['AgeBand'], as_index=False).mean().sort_values(by='AgeBand', ascending=True)
```
Let us replace Age with ordinals based on these bands.
```
for dataset in combine:
    dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
    dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
    dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
    dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
    dataset.loc[ dataset['Age'] > 64, 'Age'] = 4
train_df.head()
```
We can now remove the AgeBand feature.
```
train_df = train_df.drop(['AgeBand'], axis=1)
combine = [train_df, test_df]
train_df.head()
```
### Create new feature combining existing features
We can create a new feature for FamilySize which combines Parch and SibSp. This will enable us to drop Parch and SibSp from our datasets.
```
for dataset in combine:
    dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
train_df[['FamilySize', 'Survived']].groupby(['FamilySize'], as_index=False).mean().sort_values(by='Survived', ascending=False)
```
We can create another feature called IsAlone.
```
for dataset in combine:
    dataset['IsAlone'] = 0
    dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1
train_df[['IsAlone', 'Survived']].groupby(['IsAlone'], as_index=False).mean()
```
Let us drop Parch, SibSp, and FamilySize features in favor of IsAlone.
```
train_df = train_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
test_df = test_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
combine = [train_df, test_df]
train_df.head()
```
We can also create an artificial feature combining Pclass and Age.
```
for dataset in combine:
    dataset['Age*Class'] = dataset.Age * dataset.Pclass
train_df.loc[:, ['Age*Class', 'Age', 'Pclass']].head(10)
```
### Completing a categorical feature
The Embarked feature takes S, Q, C values based on port of embarkation. Our training dataset has two missing values. We simply fill these with the most common occurrence.
```
freq_port = train_df.Embarked.dropna().mode()[0]
freq_port
for dataset in combine:
    dataset['Embarked'] = dataset['Embarked'].fillna(freq_port)
train_df[['Embarked', 'Survived']].groupby(['Embarked'], as_index=False).mean().sort_values(by='Survived', ascending=False)
```
### Converting categorical feature to numeric
We can now convert the completed Embarked feature by mapping its categorical port values to numbers.
```
for dataset in combine:
    dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)
train_df.head()
```
### Quick completing and converting a numeric feature
We can now complete the Fare feature for the single missing value in the test dataset using the median of the feature. We do this in a single line of code.
Note that we are not creating an intermediate new feature or doing any further correlation analysis to guess the missing value, as we are replacing only a single value. The completion goal achieves the desired requirement for the model algorithm to operate on non-null values.
We may also want to round off the fare to two decimals as it represents currency.
```
test_df['Fare'].fillna(test_df['Fare'].dropna().median(), inplace=True)
test_df.head()
```
We can now create FareBand.
```
train_df['FareBand'] = pd.qcut(train_df['Fare'], 4)
train_df[['FareBand', 'Survived']].groupby(['FareBand'], as_index=False).mean().sort_values(by='FareBand', ascending=True)
```
Convert the Fare feature to ordinal values based on the FareBand.
```
for dataset in combine:
    dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0
    dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
    dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
    dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3
    dataset['Fare'] = dataset['Fare'].astype(int)
train_df = train_df.drop(['FareBand'], axis=1)
combine = [train_df, test_df]
train_df.head(10)
```
And the test dataset.
```
test_df.head(10)
```
## Model, predict and solve
Now we are ready to train a model and predict the required solution. There are 60+ predictive modelling algorithms to choose from. We must understand the type of problem and solution requirement to narrow down to a select few models which we can evaluate. Our problem is a classification problem: we want to identify the relationship between the output (Survived or not) and the other variables or features (Gender, Age, Port...). We are also performing a category of machine learning called supervised learning, as we are training our model with a labelled dataset. With these two criteria - supervised learning plus classification - we can narrow down our choice of models to a few. These include:
- Logistic Regression
- KNN or k-Nearest Neighbors
- Support Vector Machines
- Naive Bayes classifier
- Decision Tree
- Random Forest
- Perceptron
- Artificial neural network
- RVM or Relevance Vector Machine
```
X_train = train_df.drop("Survived", axis=1)
Y_train = train_df["Survived"]
X_test = test_df.drop("PassengerId", axis=1).copy()
X_train.shape, Y_train.shape, X_test.shape
```
Logistic Regression is a useful model to run early in the workflow. Logistic regression measures the relationship between the categorical dependent variable (feature) and one or more independent variables (features) by estimating probabilities using a logistic function, which is the cumulative logistic distribution. Reference [Wikipedia](https://en.wikipedia.org/wiki/Logistic_regression).
Note the confidence score generated by the model based on our training dataset.
```
# Logistic Regression
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred = logreg.predict(X_test)
acc_log = round(logreg.score(X_train, Y_train) * 100, 2)
acc_log
```
We can use Logistic Regression to validate our assumptions and decisions for feature creating and completing goals. This can be done by calculating the coefficient of the features in the decision function.
Positive coefficients increase the log-odds of the response (and thus increase the probability), and negative coefficients decrease the log-odds of the response (and thus decrease the probability).
- Sex has the highest positive coefficient, implying that as the Sex value increases (male: 0 to female: 1), the probability of Survived=1 increases the most.
- Inversely, as Pclass increases, the probability of Survived=1 decreases the most.
- Age*Class is a good artificial feature to model, as it has the second highest negative coefficient.
- So is Title, with the second highest positive coefficient.
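The log-odds interpretation above can be made concrete with a small, self-contained calculation; the coefficient and feature values here are made up for illustration and are not the fitted model's:

```python
import numpy as np

# Hypothetical logistic-regression terms: intercept plus two feature contributions
# (coefficients invented for illustration: +2.2 for Sex, -0.8 for Pclass)
log_odds = -0.5 + 2.2 * 1 + (-0.8) * 3   # e.g. Sex=1 (female), Pclass=3

# The logistic function converts log-odds into a survival probability
prob = 1.0 / (1.0 + np.exp(-log_odds))

# A positive coefficient raises the probability as its feature grows;
# a negative coefficient lowers it
print(round(prob, 3))
```

Raising Pclass from 3 to 1 in this toy calculation would add 1.6 to the log-odds and push the probability well above one half, which is the direction the fitted coefficients describe.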
```
coeff_df = pd.DataFrame(train_df.columns.delete(0))
coeff_df.columns = ['Feature']
coeff_df["Correlation"] = pd.Series(logreg.coef_[0])
coeff_df.sort_values(by='Correlation', ascending=False)
```
Next we model using Support Vector Machines which are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training samples, each marked as belonging to one or the other of **two categories**, an SVM training algorithm builds a model that assigns new test samples to one category or the other, making it a non-probabilistic binary linear classifier. Reference [Wikipedia](https://en.wikipedia.org/wiki/Support_vector_machine).
Note that the model generates a confidence score which is higher than the Logistic Regression model.
```
# Support Vector Machines
svc = SVC()
svc.fit(X_train, Y_train)
Y_pred = svc.predict(X_test)
acc_svc = round(svc.score(X_train, Y_train) * 100, 2)
acc_svc
```
In pattern recognition, the k-Nearest Neighbors algorithm (or k-NN for short) is a non-parametric method used for classification and regression. A sample is classified by a majority vote of its neighbors, with the sample being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. Reference [Wikipedia](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm).
The KNN confidence score is better than Logistic Regression but worse than SVM.
```
knn = KNeighborsClassifier(n_neighbors = 3)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
acc_knn = round(knn.score(X_train, Y_train) * 100, 2)
acc_knn
```
In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features) in a learning problem. Reference [Wikipedia](https://en.wikipedia.org/wiki/Naive_Bayes_classifier).
The model generated confidence score is the lowest among the models evaluated so far.
```
# Gaussian Naive Bayes
gaussian = GaussianNB()
gaussian.fit(X_train, Y_train)
Y_pred = gaussian.predict(X_test)
acc_gaussian = round(gaussian.score(X_train, Y_train) * 100, 2)
acc_gaussian
```
The perceptron is an algorithm for supervised learning of binary classifiers (functions that can decide whether an input, represented by a vector of numbers, belongs to some specific class or not). It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. The algorithm allows for online learning, in that it processes elements in the training set one at a time. Reference [Wikipedia](https://en.wikipedia.org/wiki/Perceptron).
```
# Perceptron
perceptron = Perceptron()
perceptron.fit(X_train, Y_train)
Y_pred = perceptron.predict(X_test)
acc_perceptron = round(perceptron.score(X_train, Y_train) * 100, 2)
acc_perceptron
# Linear SVC
linear_svc = LinearSVC()
linear_svc.fit(X_train, Y_train)
Y_pred = linear_svc.predict(X_test)
acc_linear_svc = round(linear_svc.score(X_train, Y_train) * 100, 2)
acc_linear_svc
# Stochastic Gradient Descent
sgd = SGDClassifier()
sgd.fit(X_train, Y_train)
Y_pred = sgd.predict(X_test)
acc_sgd = round(sgd.score(X_train, Y_train) * 100, 2)
acc_sgd
```
This model uses a decision tree as a predictive model which maps features (tree branches) to conclusions about the target value (tree leaves). Tree models where the target variable can take a finite set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Reference [Wikipedia](https://en.wikipedia.org/wiki/Decision_tree_learning).
The model confidence score is the highest among models evaluated so far.
```
# Decision Tree
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
Y_pred = decision_tree.predict(X_test)
acc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2)
acc_decision_tree
```
The next model, Random Forest, is one of the most popular. Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks, that operate by constructing a multitude of decision trees (n_estimators=100) at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees. Reference [Wikipedia](https://en.wikipedia.org/wiki/Random_forest).
The model confidence score is the highest among models evaluated so far. We decide to use this model's output (Y_pred) for creating our competition submission of results.
```
# Random Forest
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train, Y_train)
Y_pred = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
acc_random_forest
```
### Model evaluation
We can now rank our evaluation of all the models to choose the best one for our problem. While both Decision Tree and Random Forest score the same, we choose to use Random Forest as they correct for decision trees' habit of overfitting to their training set.
```
models = pd.DataFrame({
'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression',
'Random Forest', 'Naive Bayes', 'Perceptron',
'Stochastic Gradient Decent', 'Linear SVC',
'Decision Tree'],
'Score': [acc_svc, acc_knn, acc_log,
acc_random_forest, acc_gaussian, acc_perceptron,
acc_sgd, acc_linear_svc, acc_decision_tree]})
models.sort_values(by='Score', ascending=False)
submission = pd.DataFrame({
"PassengerId": test_df["PassengerId"],
"Survived": Y_pred
})
# submission.to_csv('../output/submission.csv', index=False)
```
Our submission to the competition site Kaggle ranks 3,883rd of 6,082 competition entries. This result is indicative while the competition is running, and only accounts for part of the submission dataset. Not bad for our first attempt. Any suggestions to improve our score are most welcome.
## References
This notebook has been created based on great work done solving the Titanic competition and other sources.
- [A journey through Titanic](https://www.kaggle.com/omarelgabry/titanic/a-journey-through-titanic)
- [Getting Started with Pandas: Kaggle's Titanic Competition](https://www.kaggle.com/c/titanic/details/getting-started-with-random-forests)
- [Titanic Best Working Classifier](https://www.kaggle.com/sinakhorami/titanic/titanic-best-working-classifier)
```
"""
LICENSE MIT
2020
Guillaume Rozier
Website : http://www.covidtracker.fr
Mail : guillaume.rozier@telecomnancy.net
README:
This file contains scripts that download data from data.gouv.fr and then process it to build many graphs.
I'm currently cleaning the code, please ask me if something is not clear enough.
The charts are exported to 'charts/images/france'.
Data is downloaded to/imported from 'data/france'.
Requirements: please see the imports below (use pip3 to install them).
"""
import pandas as pd
import plotly.graph_objects as go
import france_data_management as data
from datetime import datetime
from datetime import timedelta
from plotly.subplots import make_subplots
import plotly
import math
import os
import json
PATH = "../../"
PATH_STATS = "../../data/france/stats/"
import locale
locale.setlocale(locale.LC_ALL, 'fr_FR.UTF-8')
def import_df_age():
    df = pd.read_csv(PATH+"data/france/vaccin/vacsi-a-fra.csv", sep=";")
    return df
df_new = pd.read_csv(PATH+"data/france/donnes-hospitalieres-covid19-nouveaux.csv", sep=";")
df_clage = pd.read_csv(PATH+"data/france/donnes-hospitalieres-clage-covid19.csv", sep=";")
df_new_france = df_new.groupby("jour").sum()
df_new_france.sum()
df_clage_france = df_clage.groupby(["jour", "cl_age90"]).sum().reset_index()
df_clage_france[df_clage_france.jour=="2021-04-12"]
df = import_df_age()
df["n_dose1"] = df["n_dose1"].replace({",": ""}, regex=True).astype("int")
df = df.groupby(["clage_vacsi"]).sum()/100
df = df[1:]
df["n_dose1_pourcent"] = round(df.n_dose1/df.n_dose1.sum()*100, 1)
clage_vacsi = [24, 29, 39, 49, 59, 64, 69, 74, 79, 80]
nb_pop = [5236809, 3593713, 8034961, 8316050, 8494520, 3979481, 3801413, 3404034, 2165960, 4081928]
df_age = pd.DataFrame()
df_age["clage_vacsi"]=clage_vacsi
df_age["nb_pop"]=nb_pop
df = df.merge(df_age, left_on="clage_vacsi", right_on="clage_vacsi")
df["pop_vac"] = df["n_dose1"]/df["nb_pop"]*100
df
fig = go.Figure()
fig.add_trace(go.Bar(
x=[str(age) + " ans" for age in df.clage_vacsi[:-1]]+["+ 80 ans"],
y=df.pop_vac,
text=[str(round(prct, 2)) + " %" for prct in df.pop_vac],
textposition='auto',))
fig.update_layout(
title={
'text': "% de population ayant reçu au moins 1 dose de vaccin",
'y':0.95,
'x':0.5,
'xanchor': 'center',
'yanchor': 'top'},
titlefont = dict(
size=20),
annotations = [
dict(
x=0,
y=1.07,
xref='paper',
yref='paper',
font=dict(size=14),
text='{}. Données : Santé publique France. Auteur : <b>@GuillaumeRozier - covidtracker.fr.</b>'.format(datetime.strptime("2021-01-27", '%Y-%m-%d').strftime('%d %b')),
showarrow = False
),
]
)
fig.update_yaxes(range=[0, 100])
fig.show()
fig = go.Figure()
fig.add_trace(go.Pie(
labels=[str(age) + " ans" for age in df.index[:-1]]+["+ 80 ans"],
values=df.n_dose1_pourcent,
text=[str(prct) + "" for prct in df.n_dose1],
textposition='auto',))
fig.update_layout(
title={
'text': "Nombre de vaccinés par tranche d'âge",
'y':0.95,
'x':0.5,
'xanchor': 'center',
'yanchor': 'top'},
titlefont = dict(
size=20),
annotations = [
dict(
x=0,
y=1.07,
xref='paper',
yref='paper',
font=dict(size=14),
text='{}. Données : Santé publique France. Auteur : <b>@GuillaumeRozier - covidtracker.fr.</b>'.format(datetime.strptime("2021-01-27", '%Y-%m-%d').strftime('%d %b')),
showarrow = False
),
]
)
fig.show()
#locale.setlocale(locale.LC_ALL, 'fr_FR.UTF-8')
import random
import numpy as np
n_sain = 20000
x_sain = np.random.rand(1, n_sain)[0]*100
values_sain = np.random.rand(1, n_sain)[0]*100
x_az = np.random.rand(1,30)[0]*100
values_az = np.random.rand(1,30)[0]*100
fig = go.Figure()
for idx in range(len(x_sain)):
    fig.add_trace(go.Scatter(
        x=[x_sain[idx]],
        y=[values_sain[idx]],
        mode="markers",
        showlegend=False,
        marker_color="rgba(14, 201, 4, 0.5)",  # "rgba(0, 0, 0, 0.5)",
        marker_size=2))
fig.add_trace(go.Scatter(
x=x_az,
y=values_az,
mode="markers",
showlegend=False,
marker_color="rgba(201, 4, 4,0.5)", #"rgba(0, 0, 0, 0.5)",
marker_size=2))
fig.update_yaxes(range=[0, 100], visible=False)
fig.update_xaxes(range=[0, 100], nticks=10)
fig.update_layout(
plot_bgcolor='rgb(255,255,255)',
title={
'text': "Admissions en réanimation pour Covid19",
'y':0.90,
'x':0.5,
'xanchor': 'center',
'yanchor': 'top'},
titlefont = dict(
size=20),
annotations = [
dict(
x=0.5,
y=1.2,
xref='paper',
yref='paper',
text='Auteur : covidtracker.fr.'.format(),
showarrow = False
)]
)
fig.write_image(PATH + "images/charts/france/points_astrazeneca.jpeg", scale=4, width=800, height=350)
import numpy as np
np.random.rand(1,20000000)
```
## First step in gap analysis is to determine the AEP based on operational data.
```
%load_ext autoreload
%autoreload 2
```
This notebook provides an overview and walk-through of the steps taken to produce a plant-level operational energy assessment (OA) of a wind plant in the PRUF project. The La Haute-Borne wind farm is used here and throughout the example notebooks.
Uncertainty in the annual energy production (AEP) estimate is calculated through a Monte Carlo approach. Specifically, inputs into the OA code as well as intermediate calculations are randomly sampled based on their specified or calculated uncertainties. By performing the OA assessment thousands of times under different combinations of the random sampling, a distribution of AEP values results from which uncertainty can be deduced. Details on the Monte Carlo approach will be provided throughout this notebook.
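The Monte Carlo idea can be sketched generically: sample each uncertain input, recompute the quantity of interest for every sample, and read the uncertainty off the resulting distribution. The numbers below are illustrative stand-ins, not plant data or OpenOA internals:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 2000

# Illustrative uncertain inputs: 100 GWh metered energy with 0.5% uncertainty,
# and a 5% loss fraction known to within 5% of its value
energy_gwh = 100.0 * rng.normal(1.0, 0.005, n_sim)
loss_frac = 0.05 * rng.normal(1.0, 0.05, n_sim)

# Each iteration yields one AEP estimate; the spread of the
# distribution is the uncertainty of the assessment
aep = energy_gwh * (1.0 - loss_frac)
print(round(aep.mean(), 1))
```

The real analysis replaces these two toy inputs with the full set of sampled quantities described later in this notebook (regression coefficients, outlier thresholds, windiness years, reanalysis product choice), but the sample-recompute-summarize loop is the same.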
### Step 1: Import plant data into notebook
A zip file included in the OpenOA 'examples/data' folder needs to be unzipped to run this step. Note that this zip file should be unzipped automatically as part of the project.prepare() function call below. Once unzipped, 4 CSV files will appear in the 'examples/data/la_haute_borne' folder.
```
# Import required packages
import os
import matplotlib.pyplot as plt
import numpy as np
import statsmodels.api as sm
import pandas as pd
import copy
from project_ENGIE import Project_Engie
from operational_analysis.methods import plant_analysis
```
In the call below, make sure the appropriate path to the CSV input files is specified. In this example, the CSV files are located directly in the 'examples/data/la_haute_borne' folder.
```
# Load plant object
project = Project_Engie('./data/la_haute_borne')
# Prepare data
project.prepare()
```
### Step 2: Review the data
Several Pandas data frames have now been loaded. Histograms showing the distribution of the plant-level metered energy, availability, and curtailment are shown below:
```
# Review plant data
fig, (ax1, ax2, ax3) = plt.subplots(ncols = 3, figsize = (15,5))
ax1.hist(project._meter.df['energy_kwh'], 40) # Metered energy data
ax2.hist(project._curtail.df['availability_kwh'], 40) # Curtailment and availability loss data
ax3.hist(project._curtail.df['curtailment_kwh'], 40) # Curtailment and availability loss data
plt.tight_layout()
plt.show()
```
### Step 3: Process the data into monthly averages and sums
The raw plant data can be in different time resolutions (in this case 10-minute periods). The following steps process the data into monthly averages and combine them into a single 'monthly' data frame to be used in the OA assessment.
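The 10-minute-to-monthly aggregation performed internally can be approximated with pandas `resample` on a toy series; the index, values, and gap positions below are invented for illustration:

```python
import numpy as np
import pandas as pd

# Toy 10-minute energy series with a gap, standing in for the revenue meter data
idx = pd.date_range('2021-01-01', periods=6 * 24 * 60, freq='10min')  # ~60 days
energy = pd.Series(1.0, index=idx)
energy.iloc[100:150] = np.nan  # simulate 50 missing records

# Monthly energy sums (NaNs are skipped by default)
monthly_sum = energy.resample('MS').sum()

# Fraction of missing 10-minute records per month, analogous to energy_nan_perc
nan_perc = energy.isna().resample('MS').mean()
print(nan_perc.round(4).iloc[0])
```

A month whose `nan_perc` exceeds some threshold would then be flagged and excluded, which is the role of the `nan_flag` field described below.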
```
project._meter.df.head()
```
First, we'll create a MonteCarloAEP object which is used to calculate long-term AEP. Two reanalysis products are specified as arguments.
```
pa = plant_analysis.MonteCarloAEP(project, reanal_products = ['era5', 'merra2'])
```
Let's view the result. Note the extra fields we've calculated that we'll use later for filtering:
- energy_nan_perc : the percentage of NaN values in the raw revenue meter data used in calculating the monthly sum. If this value is too large, we shouldn't include this month
- nan_flag : if too much energy, availability, or curtailment data was missing for a given month, flag the result
- num_days_expected : number of days in the month (useful for normalizing monthly gross energy later)
- num_days_actual : actual number of days per month as found in the data (used when trimming monthly data frame)
```
# View the monthly data frame
pa._aggregate.df.head()
```
### Step 4: Review reanalysis data
Reanalysis data will be used to long-term correct the operational energy over the plant period of operation to the long-term. It is important that we only use reanalysis data that show reasonable trends over time with no noticeable discontinuities. A plot like below, in which normalized annual wind speeds are shown from 1997 to present, provides a good first look at data quality.
The plot shows that both of the reanalysis products track each other reasonably well and seem well-suited for the analysis.
```
pa.plot_reanalysis_normalized_rolling_monthly_windspeed().show()
```
### Step 5: Review energy and loss data
It is useful to take a look at the energy data and make sure the values make sense. We begin with scatter plots of gross energy and wind speed for each reanalysis product. We also show a time series of gross energy, as well as availability and curtailment loss.
Let's start with the scatter plots of gross energy vs wind speed for each reanalysis product. Here we use the 'Robust Linear Model' (RLM) module of the Statsmodels package with the default Huber algorithm to produce a regression fit that excludes outliers. Data points in red show the outliers, and were excluded based on a Huber sensitivity factor of 3.0 (the factor is varied between 2.0 and 3.0 in the Monte Carlo simulation).
The plots below reveal that:
- there are some outliers
- both reanalysis products are strongly correlated with plant energy
```
pa.plot_reanalysis_gross_energy_data(outlier_thres=3).show()
```
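The outlier-exclusion idea can be sketched without the full RLM machinery: fit a line, estimate a robust residual scale, and flag points beyond a sensitivity factor (3.0 here, mirroring the Huber threshold). This is an illustrative stand-in on synthetic data, not OpenOA's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic wind speed vs gross energy with a few injected outliers
wind = rng.uniform(4, 10, 60)
energy = 2.0 * wind + rng.normal(scale=0.3, size=60)
energy[:3] += 8.0  # the first three points are outliers

# Ordinary least-squares fit
slope, intercept = np.polyfit(wind, energy, 1)
resid = energy - (slope * wind + intercept)

# Robust residual scale via the median absolute deviation (MAD),
# rescaled to be consistent with a normal standard deviation
mad = np.median(np.abs(resid - np.median(resid)))
scale = 1.4826 * mad

# Flag points whose residual exceeds the sensitivity factor times the scale
outliers = np.abs(resid - np.median(resid)) > 3.0 * scale
print(outliers.sum())
```

Varying the factor between 2.0 and 3.0, as the Monte Carlo inputs do, moves the flagging boundary and so changes which months survive into the regression.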
Next we show time series plots of the monthly gross energy, availability, and curtailment. Note that the availability and curtailment data were estimated based on SCADA data from the plant.
Long-term availability and curtailment losses for the plant are calculated based on average percentage losses for each calendar month. Summing those average values weighted by the fraction of long-term gross energy generated in each month yields the long-term annual estimates. Weighting by monthly long-term gross energy helps account for potential correlation between losses and energy production (e.g., high availability losses in summer months with lower energy production). The long-term losses are calculated in Step 9.
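The energy-weighted loss calculation described above reduces to a short computation; the monthly loss fractions and gross energies below are made-up numbers for illustration:

```python
import numpy as np

# Illustrative long-term values per calendar month (Jan..Dec), not plant data:
monthly_loss = np.array([0.02, 0.02, 0.03, 0.03, 0.04, 0.06,
                         0.06, 0.05, 0.04, 0.03, 0.02, 0.02])
monthly_gross_gwh = np.array([10, 9, 9, 8, 7, 6, 6, 6, 7, 8, 9, 10], dtype=float)

# Weight each month's loss by its share of long-term gross energy
weights = monthly_gross_gwh / monthly_gross_gwh.sum()
lt_loss = (monthly_loss * weights).sum()

# An unweighted mean would overstate the loss here, because the
# high-loss summer months produce less energy
print(round(lt_loss, 4), round(monthly_loss.mean(), 4))
```

With these numbers the weighted long-term loss comes out below the plain monthly average, which is exactly the correlation effect the weighting is meant to capture.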
```
pa.plot_aggregate_plant_data_timeseries().show()
```
### Step 6: Specify availability and curtailment data not representative of actual plant performance
There may be anomalies in the reported availability that shouldn't be considered representative of actual plant performance. Force majeure events (e.g. lightning) are a good example. Such losses aren't typically considered in pre-construction AEP estimates; therefore, plant availability loss reported in an operational AEP analysis should also not include such losses.
The 'availability_typical' and 'curtailment_typical' fields in the monthly data frame are initially set to True. Below, individual months can be set to 'False' if it is deemed those months are unrepresentative of long-term plant losses. By flagging these months as False, they will be omitted when assessing average availability and curtailment loss for the plant.
Justification for removing months from assessing average availability or curtailment should come from conversations with the owner/operator. For example, if a high-loss month is found, reasons for the high loss should be discussed with the owner/operator to determine if those losses can be considered representative of average plant operation.
```
# For illustrative purposes, let's suppose a few months aren't representative of long-term losses
pa._aggregate.df.loc['2014-11-01',['availability_typical','curtailment_typical']] = False
pa._aggregate.df.loc['2015-07-01',['availability_typical','curtailment_typical']] = False
```
### Step 7: Select reanalysis products to use
Based on the assessment of reanalysis products above (both the long-term trend and the relationship with plant energy), we now set which reanalysis products we will include in the OA. For this particular case study, we use both products given the strong regression relationships.
### Step 8: Set up Monte Carlo inputs
The next step is to set up the Monte Carlo framework for the analysis. Specifically, we identify each source of uncertainty in the OA estimate and use that uncertainty to create distributions of the input and intermediate variables from which we can sample for each iteration of the OA code. For input variables, we can create such distributions beforehand. For intermediate variables, we must sample separately for each iteration.
Detailed descriptions of the sampled Monte Carlo inputs, which can be specified when initializing the MonteCarloAEP object if values other than the defaults are desired, are provided below:
- slope, intercept, and num_outliers : These are intermediate variables that are calculated for each iteration of the code
- outlier_threshold : Sample values between 2 and 3 which set the Huber algorithm outlier detection parameter. Varying this threshold accounts for analyst subjectivity on what data points constitute outliers and which do not.
- metered_energy_fraction : Revenue meter energy measurements are associated with a measurement uncertainty of around 0.5%. This uncertainty is used to create a distribution centered at 1 (and with standard deviation therefore of 0.005). This column represents random samples from that distribution. For each iteration of the OA code, a value from this column is multiplied by the monthly revenue meter energy data before the data enter the OA code, thereby capturing the 0.5% uncertainty.
- loss_fraction : Reported availability and curtailment losses are estimates and are associated with uncertainty. For now, we assume the reported values are associated with an uncertainty of 5%. Similar to above, we therefore create a distribution centered at 1 (with std of 0.05) from which we sample for each iteration of the OA code. These sampled values are then multiplied by the availability and curtaiment data independently before entering the OA code to capture the 5% uncertainty in the reported values.
- num_years_windiness : This intends to capture the uncertainty associated with the number of historical years an analyst chooses to use in the windiness correction. The industry standard is typically 20 years and is based on the assumption that year-to-year wind speeds are uncorrelated. However, a growing body of research suggests that there is some correlation in year-to-year wind speeds and that there are trends in the resource on the decadal timescale. To capture this uncertainty both in the long-term trend of the resource and the analyst choice, we randomly sample integer values betweeen 10 and 20 as the number of years to use in the windiness correction.
- loss_threshold : Due to uncertainty in reported availability and curtailment estimates, months with high combined losses are associated with high uncertainty in the calculated gross energy. It is common to remove such data from analysis. For this analysis, we randomly sample float values between 0.1 and 0.2 (i.e. 10% and 20%) to serve as the criterion for the combined availability and curtailment losses. Specifically, months are excluded from analysis if their combined losses exceed that criterion for the given OA iteration.
- reanalysis_product : This captures the uncertainty of using different reanalysis products and, lacking a better method, is a proxy way of capturing uncertainty in the modelled monthly wind speeds. For each iteration of the OA code, one of the reanalysis products that we've already determined as valid (see the cells above) is selected.
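As a minimal sketch (not the OA code's actual implementation; the number of simulations and the product names are assumptions for illustration), sampling these Monte Carlo inputs might look like:

```python
import numpy as np

rng = np.random.default_rng(42)
num_sim = 2000  # number of Monte Carlo iterations (assumed)

# Draw one value per OA iteration, following the distributions described above
inputs = {
    # 0.5% revenue-meter uncertainty -> normal centered at 1, std 0.005
    'metered_energy_fraction': rng.normal(1.0, 0.005, num_sim),
    # 5% uncertainty on reported availability/curtailment losses
    'loss_fraction': rng.normal(1.0, 0.05, num_sim),
    # Huber outlier-detection threshold sampled between 2 and 3
    'outlier_threshold': rng.uniform(2.0, 3.0, num_sim),
    # Integer number of historical years for the windiness correction
    'num_years_windiness': rng.integers(10, 21, num_sim),  # 10..20 inclusive
    # Combined-loss exclusion criterion between 10% and 20%
    'loss_threshold': rng.uniform(0.1, 0.2, num_sim),
    # One of the reanalysis products already determined as valid
    'reanalysis_product': rng.choice(['era5', 'merra2'], num_sim),
}
```

Each OA iteration then reads one row from this table of samples, which is exactly the role of the Monte-Carlo tracker data frame discussed below.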
### Step 9: Run the OA code
We're now ready to run the Monte-Carlo based OA code. We repeat the OA process "num_sim" times using different sampling combinations of the input and intermediate variables to produce a distribution of AEP values.
A single line of code here in the notebook performs this step, but below is more detail on what is being done.
Steps in OA process:
- Set the wind speed and gross energy data to be used in the regression based on i) the reanalysis product to be used (Monte-Carlo sampled); ii) the NaN energy data criteria (1%); iii) Combined availability and curtailment loss criteria (Monte-Carlo sampled); and iv) the outlier criteria (Monte-Carlo sampled)
- Normalize gross energy to 30-day months
- Perform linear regression and determine slope and intercept values, their standard errors, and the covariance between the two
- Use the information above to create distributions of possible slope and intercept values (e.g. mean equal to slope, std equal to the standard error) from which we randomly sample a slope and intercept value (note that slope and intercept values are highly negatively-correlated so the sampling from both distributions are constrained accordingly)
- To perform the long-term correction, first determine the long-term monthly average wind speeds (i.e. average January wind speed, average February wind speed, etc.) based on a 10-20 year historical period, as determined by the Monte Carlo process.
- Apply the Monte-Carlo sampled slope and intercept values to the long-term monthly average wind speeds to calculate long-term monthly gross energy
- 'Denormalize' monthly long-term gross energy back to the normal number of days
- Calculate AEP by subtracting out the long-term availability loss (curtailment loss is left in as part of AEP)
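The covariance-constrained sampling of slope and intercept described in the steps above can be sketched as follows (the regression statistics here are made-up illustrative numbers, not values from this project):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical regression outputs: slope, intercept, standard errors, covariance
slope, intercept = 2.5, -10.0              # GWh/(m/s), GWh
se_slope, se_intercept = 0.1, 0.8
cov_si = -0.95 * se_slope * se_intercept   # strong negative correlation

# Jointly sample slope/intercept so their negative correlation is preserved
mean = [slope, intercept]
cov = [[se_slope**2, cov_si],
       [cov_si, se_intercept**2]]
samples = rng.multivariate_normal(mean, cov, size=2000)
mc_slope, mc_intercept = samples[:, 0], samples[:, 1]

# The sampled pairs reproduce the regression's negative correlation
corr = np.corrcoef(mc_slope, mc_intercept)[0, 1]
```

Sampling each parameter independently instead would produce unrealistic slope/intercept pairs and overstate the AEP uncertainty.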
```
# Run Monte-Carlo based OA
pa.run(num_sim=2000, reanal_subset=['era5', 'merra2'])
```
The key result is shown below: a distribution of AEP values from which uncertainty can be deduced. In this case, uncertainty is around 9%.
```
# Plot a distribution of AEP values from the Monte-Carlo OA method
pa.plot_result_aep_distributions().show()
```
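The quoted ~9% figure is simply the spread of the AEP distribution relative to its mean. A hedged sketch, with synthetic AEP samples standing in for `pa.results.aep_GWh`:

```python
import numpy as np

# Synthetic stand-in for the Monte Carlo AEP results (GWh/yr)
aep_GWh = np.random.default_rng(1).normal(100.0, 9.0, 2000)

# Uncertainty reported as the coefficient of variation: std / mean
uncertainty = aep_GWh.std() / aep_GWh.mean()
print('AEP uncertainty: %.1f%%' % (uncertainty * 100))  # ~9%
```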
### Step 10: Post-analysis visualization
Here we show some supplementary results of the Monte Carlo OA approach to help illustrate how it works.
First, it's worth looking at the Monte-Carlo tracker data frame again, now that the slope, intercept, and number of outlier fields have been completed. Note that for transparency, debugging, and analysis purposes, we've also included in the tracker data frame the number of data points used in the regression.
```
# Produce histograms of the various MC-parameters
mc_reg = pd.DataFrame(data = {'slope': pa._mc_slope.ravel(),
'intercept': pa._mc_intercept,
'num_points': pa._mc_num_points,
'metered_energy_fraction': pa._inputs.metered_energy_fraction,
'loss_fraction': pa._inputs.loss_fraction,
'num_years_windiness': pa._inputs.num_years_windiness,
'loss_threshold': pa._inputs.loss_threshold,
'reanalysis_product': pa._inputs.reanalysis_product})
```
It's useful to plot distributions of each variable to show what is happening in the Monte Carlo OA method. Based on the plot below, we observe the following:
- metered_energy_fraction and loss_fraction sampling follow normal distributions, as expected
- The slope and intercept distributions appear normally distributed, even though different reanalysis products are considered, resulting in different regression relationships. This is likely because the reanalysis products agree with each other closely.
- 24 data points were used for all iterations, indicating that there was no variation in the number of outlier months removed
- We see approximately equal sampling of the num_years_windiness, loss_threshold, and reanalysis_product, as expected
```
plt.figure(figsize=(15,15))
for s in np.arange(mc_reg.shape[1]):
plt.subplot(4,3,s+1)
plt.hist(mc_reg.iloc[:,s],40)
plt.title(mc_reg.columns[s])
plt.show()
```
It's worth highlighting the inverse relationship between slope and intercept values under the Monte Carlo approach. As stated earlier, slope and intercept values are strongly negatively correlated (e.g. as slope goes up, intercept goes down), which is captured by the covariance term from the linear regression. By constraining the random sampling of slope and intercept values with this covariance, we ensure we aren't sampling unrealistic combinations.
The plot below shows that the values are being sampled appropriately.
```
# Produce a scatter plot of the sampled slope and intercept values to show
# their negative correlation. Here we focus on the ERA-5 data
plt.figure(figsize=(8,6))
plt.plot(mc_reg.intercept[mc_reg.reanalysis_product =='era5'],mc_reg.slope[mc_reg.reanalysis_product =='era5'],'.')
plt.xlabel('Intercept (GWh)')
plt.ylabel('Slope (GWh / (m/s))')
plt.show()
```
We can look further at the influence of certain Monte Carlo parameters on the AEP result. For example, let's see what effect the choice of reanalysis product has on the result:
```
# Boxplot of AEP based on choice of reanalysis product
tmp_df=pd.DataFrame(data={'aep':pa.results.aep_GWh,'reanalysis_product':mc_reg['reanalysis_product']})
tmp_df.boxplot(column='aep',by='reanalysis_product',figsize=(8,6))
plt.ylabel('AEP (GWh/yr)')
plt.xlabel('Reanalysis product')
plt.title('AEP estimates by reanalysis product')
plt.suptitle("")
plt.show()
```
In this case, the two reanalysis products lead to similar AEP estimates, although MERRA2 yields slightly higher uncertainty.
We can also look at the effect on the number of years used in the windiness correction:
```
# Boxplot of AEP based on number of years in windiness correction
tmp_df=pd.DataFrame(data={'aep':pa.results.aep_GWh,'num_years_windiness':mc_reg['num_years_windiness']})
tmp_df.boxplot(column='aep',by='num_years_windiness',figsize=(8,6))
plt.ylabel('AEP (GWh/yr)')
plt.xlabel('Number of years in windiness correction')
plt.title('AEP estimates by windiness years')
plt.suptitle("")
plt.show()
```
As seen above, the number of years used in the windiness correction does not significantly impact the AEP estimate.
<img src='https://mundiwebservices.com/build/assets/Mundi-Logo-CMYK-colors.png' align='left' width='15%' ></img>
# Mundi GDAL
```
from mundilib import MundiCatalogue
# other tools
import os
import numpy as np
from osgeo import gdal
import matplotlib.pyplot as plt
```
### Processing of an in-memory image (display/make histogram/add mask, ...)
```
# getting image from Mundi
c = MundiCatalogue()
wms = c.get_collection("Sentinel2").mundi_wms('L1C')
response = wms.getmap(layers=['92_NDWI'],
srs='EPSG:3857',
bbox=(146453.3462,5397218.5672,176703.3001,5412429.5358), # Toulouse
size=(600, 300),
format='image/png',
time='2018-04-21/2018-04-21',
showlogo=False,
transparent=False)
# writing image
#out = open(image_file, 'wb')
#out.write(response.read())
#out.close()
# reading bytes stream through a virtual memory file - no need to save image on disk
data = response.read()
vsipath = '/vsimem/img'
gdal.FileFromMemBuffer(vsipath, data)
raster_ds = gdal.Open(vsipath)
print (type(raster_ds))
# Projection
print ("Projection: ", format(raster_ds.GetProjection()))
# Dimensions
print ("X: ", raster_ds.RasterXSize)
print ("Y: ", raster_ds.RasterYSize)
# Number of bands
print ("Nb of bands: ", raster_ds.RasterCount)
# band information
print ("Bands information:")
for band in range(raster_ds.RasterCount):
band += 1
srcband = raster_ds.GetRasterBand(band)
if srcband is None:
continue
stats = srcband.GetStatistics( True, True )
if stats is None:
continue
print (" - band #%d : Minimum=%.3f, Maximum=%.3f, Mean=%.3f, StdDev=%.3f" % ( \
band, stats[0], stats[1], stats[2], stats[3] ))
# Getting first band of the raster as separate variable
band1 = raster_ds.GetRasterBand(1)
# Check type of the variable 'band'
print (type(band1))
# Data type of the values
gdal.GetDataTypeName(band1.DataType)
# getting array from band dataset
band1_ds = band1.ReadAsArray()
# The .ravel method turns a 2-D NumPy array into a 1-D vector
print (band1_ds.shape)
print (band1_ds.ravel().shape)
# Print only selected metadata:
print ("No data value :", band1.GetNoDataValue()) # none
print ("Min value :", band1.GetMinimum())
print ("Max value :", band1.GetMaximum())
# Compute statistics if needed
if band1.GetMinimum() is None or band1.GetMaximum() is None:
band1.ComputeStatistics(0)
print("Statistics computed.")
# Fetch metadata for the band
band1.GetMetadata()
# see cmap values:
# cf. https://matplotlib.org/examples/color/colormaps_reference.html
for c in ["hot", "terrain", "ocean"]:
plt.imshow(band1_ds, cmap = c, interpolation='nearest')
plt.colorbar()
plt.tight_layout()
plt.show()
plt.imshow(band1_ds, cmap = "hot", interpolation='nearest')
plt.colorbar()
plt.tight_layout()
plt.show()
print ("\n--- raster content (head) ---")
print (band1_ds[1:10, ])
band1_hist_ds = band1_ds.ravel()
band1_hist_ds = band1_hist_ds[~np.isnan(band1_hist_ds)]
# 1 column, 1 line
fig, axes = plt.subplots(nrows=1, ncols=1)
axes.hist(band1_hist_ds, bins=10, histtype='bar', color='crimson', ec="pink")
#axes.hist(lidar_dem_hist, bins=[0, 25, 50, 75, 100, 150, 200, 255], histtype='bar', color='crimson', ec="pink")
axes.set_title("Distribution of pixel values", fontsize=16)
axes.set_xlabel('Pixel value (0-255)', fontsize=14)
axes.set_ylabel('Number of pixels', fontsize=14)
#axes.legend(prop={'size': 10})
plt.show()
# masking some pixels
masked_array = np.ma.masked_where(band1_ds<170, band1_ds)
plt.imshow(masked_array, cmap="hot", interpolation='nearest')
plt.show()
# adding of a line on image mask, changing pixel value with mask
masked_array[25:45,:] = 250
plt.imshow(masked_array, cmap="binary")
plt.show()
```
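The masking step above relies on NumPy masked arrays; a minimal standalone illustration of the same idea, using a tiny made-up band in place of the raster data:

```python
import numpy as np

# Tiny stand-in for a raster band
band = np.array([[10, 180, 200],
                 [160, 175, 90]])

# Hide pixels below the 170 threshold, as done for the band above
masked = np.ma.masked_where(band < 170, band)
print(masked)          # masked pixels print as '--'
print(masked.count())  # 3 pixels remain unmasked
```

Plotting functions such as `plt.imshow` skip masked elements, which is why the thresholded image shows gaps, and assigning to slices of the masked array (as done for the line overlay) writes through to the visible values.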
# End to End example to manage lifecycle of ML models deployed on the edge using SageMaker Edge Manager
**SageMaker Studio Kernel**: Data Science
## Contents
* Use Case
* Workflow
* Setup
* Building and Deploying the ML Model
* Running the fleet of Virtual Wind Turbines and Edge Devices
* Cleanup
## Use Case
The challenge we're trying to address here is to detect anomalies in the components of a Wind Turbine. Each wind turbine has many sensors that read data such as:
- Internal & external temperature
- Wind speed
- Rotor speed
- Air pressure
- Voltage (or current) in the generator
- Vibration in the GearBox (using an IMU -> Accelerometer + Gyroscope)
So, depending on the types of the anomalies we want to detect, we need to select one or more features and then prepare a dataset that 'explains' the anomalies. We are interested in three types of anomalies:
- Rotor speed (when the rotor is not in an expected speed)
- Produced voltage (when the generator is not producing the expected voltage)
- Gearbox vibration (when the vibration of the gearbox is far from the expected)
All three of these anomalies (or violations) depend on many variables while the turbine is working. Thus, to address this, let's use an ML model called an [Autoencoder](https://en.wikipedia.org/wiki/Autoencoder), with correlated features. This model is unsupervised: it learns the latent representation of the dataset and tries to predict (regression) the same tensor given as input. The strategy, then, is to use a dataset collected from a normal turbine (without anomalies). The model will then learn **'what a normal turbine is'**. When the sensor readings of a malfunctioning turbine are used as input, the model will not be able to rebuild the input, producing a prediction with a high error that is detected as an anomaly.
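The reconstruction-error principle can be illustrated with a toy linear 'autoencoder' (PCA with one latent dimension) on synthetic two-sensor data. This is only a sketch of the idea, not the workshop's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# 'Normal' training data: two correlated sensor channels (y ~ 2x)
t = rng.normal(size=(500, 1))
X_train = np.hstack([t, 2 * t]) + rng.normal(scale=0.05, size=(500, 2))

# Fit a 1-D linear latent space (PCA acts as a linear autoencoder)
mean = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
W = Vt[:1].T                          # encoder/decoder weights (2x1)

def reconstruction_error(X):
    Z = (X - mean) @ W                # encode to the latent space
    X_hat = Z @ W.T + mean            # decode back
    return np.linalg.norm(X - X_hat, axis=1)

# Threshold taken from the error distribution of 'normal' data
threshold = np.percentile(reconstruction_error(X_train), 99)

normal_point = np.array([[1.0, 2.0]])   # follows the learned pattern
anomaly = np.array([[1.0, -2.0]])       # violates it -> large error
```

A point that matches the learned relationship reconstructs with a small error, while the anomalous point's error far exceeds the threshold; the workshop's autoencoder applies the same principle to the full sensor tensor.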
## Workflow
In this example, you will create a robust end-to-end solution that manages the lifecycle of ML models deployed to a wind turbine fleet to detect the anomalies in the operation using SageMaker Edge Manager.
- Prepare a ML model
- download a pre-trained model;
- compile the ML model with SageMaker Neo for Linux x86_64;
- create a deployment package using SageMaker Edge Manager;
- download/unpack the deployment package;
- Download/unpack a package with the IoT certificates, required by the agent;
- Download/unpack **SageMaker Edge Agent** for Linux x86_64;
- Generate the protobuf/grpc stubs (.py scripts) - with these files we will send requests via unix:// sockets to the agent;
- Using some helper functions, we're going to interact with the agent and do some tests.
The following diagram shows the resources, required to run this experiment and understand how the agent works and how to interact with it.

## Step 1 - Setup
### Installing some required libraries
```
!apt-get -y update && apt-get -y install build-essential procps
!pip install --quiet -U numpy sysv_ipc boto3 grpcio-tools grpcio protobuf sagemaker
!pip install --quiet -U matplotlib==3.4.1 seaborn==0.11.1
!pip install --quiet -U grpcio-tools grpcio protobuf
!pip install --quiet paho-mqtt
!pip install --quiet ipywidgets
import boto3
import tarfile
import os
import stat
import io
import time
import sagemaker
import pandas as pd
import matplotlib.pyplot as plt
from datetime import datetime
import numpy as np
import glob
```
### Let's take a look at the dataset and its features
Download the dataset
```
%matplotlib inline
%config InlineBackend.figure_format='retina'
!mkdir -p data
!curl https://aws-ml-blog.s3.amazonaws.com/artifacts/monitor-manage-anomaly-detection-model-wind-turbine-fleet-sagemaker-neo/dataset_wind_turbine.csv.gz -o data/dataset_wind.csv.gz
parser = lambda date: datetime.strptime(date, '%Y-%m-%dT%H:%M:%S.%f+00:00')
df = pd.read_csv('data/dataset_wind.csv.gz', compression="gzip", sep=',', low_memory=False, parse_dates=[ 'eventTime'], date_parser=parser)
df.head()
```
Features:
- **nanoId**: id of the edge device that collected the data
- **turbineId**: id of the turbine that produced this data
- **arduino_timestamp**: timestamp of the arduino that was operating this turbine
- **nanoFreemem**: amount of free memory in bytes
- **eventTime**: timestamp of the row
- **rps**: rotation of the rotor in Rotations Per Second
- **voltage**: voltage produced by the generator in milivolts
- **qw, qx, qy, qz**: quaternion angular acceleration
- **gx, gy, gz**: gravity acceleration
- **ax, ay, az**: linear acceleration
- **gearboxtemp**: internal temperature
- **ambtemp**: external temperature
- **humidity**: air humidity
- **pressure**: air pressure
- **gas**: air quality
- **wind_speed_rps**: wind speed in Rotations Per Second
## Step 2 - Building and Deploying the ML Model
In the section below you will:
- Compile/Optimize your pre-trained model to your edge device (Linux X86_64) using [SageMaker NEO](https://docs.aws.amazon.com/sagemaker/latest/dg/neo.html)
- Create a deployment package with a signed model + the runtime used by SageMaker Edge Agent to load and invoke the optimized model
- Deploy the package using IoT Jobs
```
project_name='wind-turbine-farm'
s3_client = boto3.client('s3')
sm_client = boto3.client('sagemaker')
project_id = sm_client.describe_project(ProjectName=project_name)['ProjectId']
bucket_name = 'sagemaker-wind-turbine-farm-%s' % project_id
prefix='wind_turbine_anomaly'
sagemaker_session=sagemaker.Session(default_bucket=bucket_name)
role = sagemaker.get_execution_role()
print('Project name: %s' % project_name)
print('Project id: %s' % project_id)
print('Bucket name: %s' % bucket_name)
```
## Compiling/Packaging/Deploying our ML model to our edge devices
Invoking SageMaker NEO to compile the pre-trained model. To know how this model was trained please refer to the training notebook [here](https://github.com/aws-samples/amazon-sagemaker-edge-manager-workshop/tree/main/lab/02-Training).
Upload the pre-trained model to S3 bucket
```
model_file = open("model/model.tar.gz", "rb")
boto3.Session().resource("s3").Bucket(bucket_name).Object('model/model.tar.gz').upload_fileobj(model_file)
print("Model successfully uploaded!")
```
The following cell compiles the model for the target hardware and OS with the SageMaker Neo service. It also includes the [deep learning runtime](https://github.com/neo-ai/neo-ai-dlr) in the model package.
```
compilation_job_name = 'wind-turbine-anomaly-%d' % int(time.time()*1000)
sm_client.create_compilation_job(
CompilationJobName=compilation_job_name,
RoleArn=role,
InputConfig={
'S3Uri': 's3://%s/model/model.tar.gz' % sagemaker_session.default_bucket(),
'DataInputConfig': '{"input0":[1,6,10,10]}',
'Framework': 'PYTORCH'
},
OutputConfig={
'S3OutputLocation': 's3://%s/wind_turbine/optimized/' % sagemaker_session.default_bucket(),
'TargetPlatform': { 'Os': 'LINUX', 'Arch': 'X86_64' }
},
StoppingCondition={ 'MaxRuntimeInSeconds': 900 }
)
while True:
resp = sm_client.describe_compilation_job(CompilationJobName=compilation_job_name)
if resp['CompilationJobStatus'] in ['STARTING', 'INPROGRESS']:
print('Running...')
else:
print(resp['CompilationJobStatus'], compilation_job_name)
break
time.sleep(5)
```
### Building the Deployment Package with SageMaker Edge Manager
It will sign the model and create a deployment package with:
- The optimized model
- Model Metadata
```
import time
model_version = '1.0'
model_name = 'WindTurbineAnomalyDetection'
edge_packaging_job_name='wind-turbine-anomaly-%d' % int(time.time()*1000)
resp = sm_client.create_edge_packaging_job(
EdgePackagingJobName=edge_packaging_job_name,
CompilationJobName=compilation_job_name,
ModelName=model_name,
ModelVersion=model_version,
RoleArn=role,
OutputConfig={
'S3OutputLocation': 's3://%s/%s/model/' % (bucket_name, prefix)
}
)
while True:
resp = sm_client.describe_edge_packaging_job(EdgePackagingJobName=edge_packaging_job_name)
if resp['EdgePackagingJobStatus'] in ['STARTING', 'INPROGRESS']:
print('Running...')
else:
print(resp['EdgePackagingJobStatus'], compilation_job_name)
break
time.sleep(5)
```
### Deploy the package
Using IoT Jobs, we will notify the Python application in the edge devices. The application will:
- Download the deployment package
- Unpack it
- Load the new model (unloading previous versions, if any)
```
import boto3
import json
import sagemaker
import uuid
iot_client = boto3.client('iot')
sts_client = boto3.client('sts')
model_version = '1.0'
model_name = 'WindTurbineAnomalyDetection'
sagemaker_session=sagemaker.Session()
region_name = sagemaker_session.boto_session.region_name
account_id = sts_client.get_caller_identity()["Account"]
resp = iot_client.create_job(
jobId=str(uuid.uuid4()),
targets=[
'arn:aws:iot:%s:%s:thinggroup/WindTurbineFarm-%s' % (region_name, account_id, project_id),
],
document=json.dumps({
'type': 'new_model',
'model_version': model_version,
'model_name': model_name,
'model_package_bucket': bucket_name,
'model_package_key': "%s/model/%s-%s.tar.gz" % (prefix, model_name, model_version)
}),
targetSelection='SNAPSHOT'
)
```
Alright! Now, the deployment process will start on the connected edge devices!
## Step 3 - Running the fleet of Virtual Wind Turbines and Edge Devices
In this section you will run a local application written in Python3 that simulates 5 Wind Turbines and 5 edge devices. The SageMaker Edge Agent is deployed on the edge devices.
Here you'll be the **Wind Turbine Farm Operator**. It's possible to visualize the data flowing from the sensors to the ML Model and analyze the anomalies. Also, you'll be able to inject noise (pressing some buttons) in the data to simulate potential anomalies with the equipment.
<table border="0" cellpadding="0">
<tr>
<td align="center"><b>STEP-BY-STEP</b></td>
<td align="center"><b>APPLICATION ARCHITECTURE</b></td>
</tr>
<tr>
<td><img src="../imgs/EdgeManagerWorkshop_Macro.png" width="500px"></img></td>
<td><img src="../imgs/EdgeManagerWorkshop_App.png" width="500px"></img></td>
</tr>
</table>
The components of the application are:
- Simulator:
- [Simulator](app/simulator.py): Program that launches the virtual wind turbines and the edge devices. It uses Python threads to run all 10 processes
- [Wind Farm](app/windfarm.py): This is the application that runs on the edge device. It is responsible for reading the sensors, invoking the ML model, and analyzing the anomalies
- Edge Application:
- [Turbine](app/turbine.py): Virtual Wind Turbine. It reads the raw data collected from the 3D Printed Mini Turbine and streams it as a circular buffer. It also has a graphical representation in **IPython Widgets** that is rendered by the Simulator/Dashboard.
- [Over The Air](app/ota.py): This is a module integrated with **IoT Jobs**. In the previous exercise you created an IoT job to deploy the model. This module gets the job document, processes it, deploys the model to each edge device, and loads it via SageMaker Edge Manager.
- [Edge client](app/edgeagentclient.py): An abstraction layer on top of the **generated stubs** (proto compilation). It makes it easy to integrate **Wind Farm** with the SageMaker Edge Agent.
```
agent_config_package_prefix = 'wind_turbine_agent/config.tgz'
agent_version = '1.20210512.96da6cc'
agent_pkg_bucket = 'sagemaker-edge-release-store-us-west-2-linux-x64'
```
### Prepare the edge devices
1. First download the deployment package that contains the IoT + CA certificates and the configuration file of the SageMaker Edge Agent.
2. Then, download the SageMaker Edge Manager package and complete the deployment process.
> You can see all the artifacts that will be loaded/executed by the virtual Edge Device in **agent/**
```
if not os.path.isdir('agent'):
s3_client = boto3.client('s3')
# Get the configuration package with certificates and config files
with io.BytesIO() as file:
s3_client.download_fileobj(bucket_name, agent_config_package_prefix, file)
file.seek(0)
# Extract the files
tar = tarfile.open(fileobj=file)
tar.extractall('.')
tar.close()
# Download and install SageMaker Edge Manager
agent_pkg_key = 'Releases/%s/%s.tgz' % (agent_version, agent_version)
# get the agent package
with io.BytesIO() as file:
s3_client.download_fileobj(agent_pkg_bucket, agent_pkg_key, file)
file.seek(0)
# Extract the files
tar = tarfile.open(fileobj=file)
tar.extractall('agent')
tar.close()
# Adjust the permissions
os.chmod('agent/bin/sagemaker_edge_agent_binary', stat.S_IXUSR|stat.S_IWUSR|stat.S_IXGRP|stat.S_IWGRP)
```
### Finally, create the SageMaker Edge Agent client stubs, using the protobuffer compiler
SageMaker Edge Manager exposes a [gRPC API](https://grpc.io/docs/what-is-grpc/introduction/) to processes on the device. In order to use the gRPC API in your language of choice, you need to use the protobuf file `agent.proto` (the definition file for the gRPC interface) to generate a stub in your preferred language. Our example is written in Python, so below we generate the Python Edge Manager gRPC stubs.
```
!python3 -m grpc_tools.protoc --proto_path=agent/docs/api --python_out=app/ --grpc_python_out=app/ agent/docs/api/agent.proto
```
### SageMaker Edge Agent - local directory structure
```
agent
└───certificates
│ └───root
│ │ <<aws_region>>.pem # CA certificate used by Edge Manager to sign the model
│ │
│ └───iot
│ edge_device_<<device_id>>_cert.pem # IoT certificate
│ edge_device_<<device_id>>_key.pem # IoT private key
│ edge_device_<<device_id>>_pub.pem # IoT public key
│ ...
│
└───conf
│ config_edge_device_<<device_id>>.json # Edge Manager config file
│ ...
│
└───model
│ └───<<device_id>>
│ └───<<model_name>>
│ └───<<model_version>> # Artifacts from the Edge Manager model package
│ sagemaker_edge_manifest
│ ...
│
└───logs
│ agent<<device_id>>.log # Logs collected by the local application
│ ...
app
agent_pb2_grpc.py # grpc stubs generated by protoc
agent_pb2.py # agent stubs generated by protoc
...
```
## Simulating The Wind Turbine Farm
Now it's time to run our simulator and start playing with the turbines, the agents, and the anomalies.
> After clicking **Start**, each turbine will start buffering some data. This takes a few seconds, but once the process completes, the application runs in real time
> Try to press some buttons while the simulation is running, to inject noise in the data and see some anomalies
```
import sys
sys.path.insert(1, 'app')
import windfarm
import edgeagentclient
import turbine
import simulator
import ota
import boto3
from importlib import reload
reload(simulator)
reload(turbine)
reload(edgeagentclient)
reload(windfarm)
reload(ota)
# If there is an existing simulator running, halt it
try:
farm.halt()
except:
pass
iot_client = boto3.client('iot')
mqtt_host=iot_client.describe_endpoint(endpointType='iot:Data-ATS')['endpointAddress']
mqtt_port=8883
!mkdir -p agent/logs && rm -f agent/logs/*
simulator = simulator.WindTurbineFarmSimulator(5)
simulator.start()
farm = windfarm.WindTurbineFarm(simulator, mqtt_host, mqtt_port)
farm.start()
simulator.show()
```
> If you want to experiment with the deployment process, with the wind farm running, go back to Step 2, replace the variable **model_version** by the constant (string) '2.0' in the Json document used by the IoT Job. Then, create a new IoT Job to simulate how to deploy new versions of the model. Go back to this exercise to see the results.
```
try:
farm.halt()
except:
pass
print("Done")
```
## Cleanup
Run the next cell only if you already finished exploring/hacking the content of the workshop.
This code will delete all the resources created so far, including the **SageMaker Project** you've created
```
import boto3
import time
from shutil import rmtree
iot_client = boto3.client('iot')
sm_client = boto3.client('sagemaker')
s3_resource = boto3.resource('s3')
policy_name='WindTurbineFarmPolicy-%s' % project_id
thing_group_name='WindTurbineFarm-%s' % project_id
fleet_name='wind-turbine-farm-%s' % project_id
# Delete all files from the S3 Bucket
s3_resource.Bucket(bucket_name).objects.all().delete()
# now deregister the devices from the fleet
resp = sm_client.list_devices(DeviceFleetName=fleet_name)
devices = [d['DeviceName'] for d in resp['DeviceSummaries']]
if len(devices) > 0:
sm_client.deregister_devices(DeviceFleetName=fleet_name, DeviceNames=devices)
# now detach and clean up the IoT certificates and policy
for i,cert_arn in enumerate(iot_client.list_targets_for_policy(policyName=policy_name)['targets']):
for t in iot_client.list_principal_things(principal=cert_arn)['things']:
iot_client.detach_thing_principal(thingName=t, principal=cert_arn)
iot_client.detach_policy(policyName=policy_name, target=cert_arn)
certificateId = cert_arn.split('/')[-1]
iot_client.delete_role_alias(roleAlias='SageMakerEdge-%s' % fleet_name)
iot_client.delete_thing_group(thingGroupName=thing_group_name)
if os.path.isdir('agent'): rmtree('agent')
sm_client.delete_project(ProjectName=project_name)
```
Mission Complete!
# TRANSCOST Model
The TRANSCOST model is a vehicle-dedicated system model for determining the cost per flight (CpF) and Life Cycle Cost (LCC) of launch vehicle systems.
Three key cost areas make up the model:
1. Development Cost
1. Production Cost
1. Operations Cost
Each of these cost areas and strategies for modeling them will be reviewed before combining them to model the CpF.
## Development Costs
The Development Costs model can be separated into the following categories:
1. Systems Engineering ($f_0$)
1. Strap-on Boosters ($B$)
1. Vehicle Systems/Stages ($V$)
1. Engines ($E$)
These elements are combined into the following equation which gives the total development cost for a launch vehicle:
$$ C_D = f_0 \left( \sum{H_B} + \sum{H_V} + \sum{H_E} \right) f_6\ f_7\ f_8\ \left[PYr \right]$$
It is important to discuss the units of this equation: the Person-Year [PYr]. The Person-Year unit is a cost unit which is independent of inflation or changing currency exchange rates. The value of the Person-Year is determined by the total cost of maintaining an employee for a year, which includes direct wages, travel costs, office costs, and other indirect costs.
We will now go over each term in this expression to clarify its meaning and how to determine its value.
$f_0$: systems engineering/integration factor. When developing and producing launch vehicles, multiple stages and vehicle systems need to be integrated together. The integration of the multiple system elements imparts an increase to the development cost, which is captured by this term. This term can be calculated using:
$$ f_0 = 1.04^N$$
where $N$ is the number of stages or major system elements.
$f_6$: cost growth factor for deviating from the optimum schedule. Every project has an optimum schedule for development and funding, and deviating from it (working either faster or slower) imparts a penalty on the development cost. Historically, launch vehicles take between 5 and 9 years to develop. The value $f_6 = 1.0$ represents development that follows the ideal schedule perfectly; this will almost never be the case, and typical values for $f_6$ range between 1.0 and 1.5.
$f_7$: cost growth factor for parallel organizations. In order to have efficient delegation of tasks and conflict resolution, a prime contractor needs to be established. Having multiple co-prime contractors leads to many inefficiencies, which imparts a penalty on the development cost. This factor can be calculated using:
$$ f_7 = n^{0.2} $$
where $n$ is the number of parallel prime contractors. Having multiple contractors on a project imparts no penalty as long as they are organized in a prime contractor/subcontractor relationship.
$f_8$: person-year correction factor for productivity differences in different countries/regions. Productivity differences exist between different countries and regions, so this must be accounted for in the model. This factor is baselined for productivity in the United States, so for the US $f_8 = 1.0$. Some other countries of interest include Russia with $f_8 = 2.11$, China with $f_8 = 1.34$, Europe (ESA) with $f_8 = 0.86$, and France/Germany with $f_8 = 0.77$.
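As a quick numerical sketch of the factors above (the values of $N$ and $n$ are illustrative choices, not data from any particular vehicle):

```python
# Illustrative values for the development-cost factors
N = 3                   # number of stages / major system elements
f0 = 1.04 ** N          # systems engineering/integration factor
n = 2                   # number of parallel prime contractors
f7 = n ** 0.2           # cost growth factor for parallel organizations
f8 = 1.0                # person-year correction factor (United States)

print('f0 = %.3f' % f0)  # 1.04^3 is about 1.125
print('f7 = %.3f' % f7)  # 2^0.2 is about 1.149
```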
### Cost Exponential Relationships (CERs)
To calculate the development cost of each major vehicle element, denoted as $H$ in the above total development cost equation, we introduce a series of cost exponential relationships (CER). These CERs relate the reference mass of a stage or system element to its development cost.
CERs have been defined for the following vehicle elements:
1. Solid-propellant Rocket Motors
1. Liquid-propellant Rocket Engines with Turbopumps
1. Pressure-fed Rocket Engines
1. Airbreathing Turbo- and Ramjet-Engines
1. Large Solid-Propellant Rocket Boosters
1. Liquid Propellant Propulsion Systems/Modules
1. Expendable Ballistic Stages and Transfer Vehicles
1. Reusable Ballistic Launch Vehicles
1. Winged Orbital Rocket Vehicles
1. Horizontal Take-off First Stage Vehicles, Advanced Aircraft, and Aerospaceplanes
1. Vertical Take-off First Stage-Fly-back Rocket Vehicles
1. Crewed Ballistic Re-entry Capsules
1. Crewed Space Systems
The general form for these CERs is as follows:
$$ H = a\ M^x\ f_1\ f_2\ f_3\ f_8 $$
In this equation, $a$ and $x$ are empirically determined coefficients for a particular type of vehicle stage or system.
$M$: reference mass (dry mass), in kilograms, of the vehicle system, stage, or engine being considered.
$f_1$: development standard factor. This factor accounts for the relative status of the project in comparison to the state of the art or other existing projects. The development of a standard project that has similar systems already in operation would have $f_1 = 0.9 - 1.1$. The development of a project that is a minor variation on an existing product would have $f_1 = 0.4 - 0.6$. The development of a first-generation system would have $f_1 = 1.3 - 1.4$.
$f_3$: team experience factor. This factor accounts for the relevant experience of the team working on the development of a new project. An industry team with some related experience would have $f_3 = 1.0$. A very experienced team that has worked on similar projects previously would have $f_3 = 0.7 - 0.8$. A new team with little or no previous experience would have $f_3 = 1.3 - 1.4$.
$f_2$: technical quality factor. This factor is not as well-defined as the other cost factors. Its value is derived from technical characteristics of a particular vehicle element, and is defined uniquely for each vehicle element. Often, the fit of the CER is good enough without this factor, in which case $f_2 = 1.0$. For others, though, a particular relationship is derived for it. For instance, for the development of a liquid turbo-fed engine:
$$ f_2 = 0.026 \left(\ln{N_Q}\right)^2 $$
where $N_Q$ is the number of qualification firings for the engine. This indicates that development cost increases as the number of test firings increases.
### Example - Calculating Development Cost for SSMEs
Next we will consider an example to clarify the above model. We will look at the development costs of the Space Shuttle Main Engines.
First we find the appropriate CER for modeling this. The CER for liquid turbo-fed engines is:
$$ H = 277\ M^{0.48}\ f_1\ f_2\ f_3 $$
The development standard factor, $f_1$, can be taken to be 1.3 since this is a "first of its kind" project. The team experience factor, $f_3$, can be taken as 0.85 since much of the team had worked on the F-1 and J-2 engines at Rocketdyne.
We can calculate the technical quality factor, $f_2$, using the equation for turbo-fed liquid engines described in the previous section, knowing that the SSMEs required roughly 900 test firings.
Additionally, the SSME dry mass is 3180 kg.
We can then calculate the total development cost as follows:
```
import math
a = 277. # CER coefficient
x = 0.48 # CER exponent
f1 = 1.3 # development standard factor
f3 = 0.85 # team experience factor
N_Q = 900 # number of test firings
f2 = 0.026*(math.log(N_Q))**2 # technical quality factor
M = 3180 # dry mass of engine [kg]
H = a * M**x * f1 * f2 * f3 # development cost of SSME [PYr]
print(H)
```
From this calculation, we find a development cost of roughly 17672 PYr. The actual development cost was 18146 PYr, which is reasonably close to the calculation.
In order to find the development cost of the entire vehicle, the CER would need to be calculated for each major vehicle stage or system, then summed together and multiplied with the appropriate cost factors, as described in the total vehicle development cost equation above.
## Production Costs
The production cost model follows the same approach as the development cost model: a series of CERs is summed to find the total cost.
Three key cost areas make up the production cost model:
1. System management, vehicle integration, and checkout ($f_0$)
1. Vehicle systems ($S$)
1. Engines ($E$)
These elements are combined into the following vehicle production cost (per vehicle) equation:
$$ C_F = f_0^N \ \left( \sum\limits_{1}^n F_S + \sum\limits_{1}^n F_E \right)\ f_8 $$
We will now go over each term to clarify its meaning.
$f_0$: systems engineering/integration factor. Accounts for system management, integration, and checkout of each vehicle element. Typically between 1.02 and 1.03, depending on specifics of each element.
$N$: number of vehicle stages or system elements for the launch vehicle.
$n$: number of identical units per element on a single launch vehicle.
$f_8$: person-year correction factor. This is the same as described in the development costs section.
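To make the bookkeeping concrete, the per-vehicle production cost equation can be evaluated for a hypothetical two-stage vehicle. All per-unit CER results and factor values below are assumed for illustration only, not taken from TRANSCOST data tables:

```python
f0 = 1.025  # systems engineering/integration factor (typical range 1.02 - 1.03)
N = 2       # number of vehicle stages
f8 = 1.0    # person-year correction factor (US baseline)

# Assumed per-unit CER results for this hypothetical vehicle [PYr]
F_stages = [950.0, 420.0]   # one unit of each stage
F_engines = [180.0, 75.0]   # one engine set per stage

# Vehicle production cost per the equation above [PYr]
C_F = f0**N * (sum(F_stages) + sum(F_engines)) * f8
print(round(C_F, 1))
```

Each sum would normally contain one term per identical unit on the vehicle; here a single unit of each element is assumed for simplicity.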
### Cost Exponential Relationships (CERs)
To calculate the production cost of each major vehicle element, denoted as $F$ in the vehicle production cost equation above, we introduce a series of cost exponential relationships (CER). These CERs relate the reference mass of a stage or system element to its production cost.
Production CERs have been defined for the following stages/systems:
1. Solid Propellant Rocket Motors and Boosters
1. Liquid Propellant Rocket Engines
1. Propulsion Modules
1. Ballistic Vehicles/Stages (Expendable and Reusable)
1. High-speed Aircraft/Winged First Stage Vehicles
1. Winged Orbital Rocket Vehicles
1. Crewed Space Systems
The general form of these CERs for the production of the $i^{th}$ unit is as follows:
$$ F_i = a\ M^x\ f_{4,i} \ [PYr] $$
where $a$ and $x$ are empirically determined coefficients for each type of vehicle stage or system, and $M$ is the reference mass of the stage or system in kilograms.
$f_{4,i}$: cost reduction factor of the $i^{th}$ unit in series production. The cost reduction factor is influenced by several things, including the number of units produced, the production batch size, and the learning factor $p$. The learning factor is in turn influenced by product modifications and production rate.

The cost reduction factor for the production of the $i^{th}$ unit can be estimated using:
$$ f_{4,i} = i^{\frac{\ln{p}}{\ln{2}}} $$
It should be noted that the cost of vehicle maintenance, spares, refurbishment, or overhaul is NOT accounted for in the production cost model. These are instead accounted for in the operations costs.
### Example - Calculating Production Cost for Saturn V Second Stage
Consider the 1967 contract for a batch of 5 Saturn V second stages. These stages have unit numbers 11-15, and will be produced at a build rate of 2-3 per year.
The CER for a vehicle stage with cryogenic propellants is:
$$ F = 1.30\ M^{0.65}\ f_{4,i} \ [PYr] $$
The second stage has a mass of 29,700 kg and production of the Saturn V second stage has a learning factor of $p = 0.96$.
Our goal is to find the production cost for this batch of 5 Saturn V second stages.
First we will calculate the average cost reduction factor for units 11-15. Then we will find the production cost for the five units.
```
import math

num_units = 5
p = 0.96                  # learning factor
unit_nos = range(11, 16)  # production unit numbers 11-15
f4_sum = 0
for i in unit_nos:
    f4_i = i**(math.log(p)/math.log(2))  # cost reduction factor for unit i
    f4_sum += f4_i
f4_avg = f4_sum/num_units  # average cost reduction factor for the batch
a = 1.30   # CER coefficient
x = 0.65   # CER exponent
M = 29700  # dry mass of stage [kg]
F = num_units*a*M**x*f4_avg  # production cost of the batch [PYr]
print(F)
```
From this calculation, we find a production cost of roughly 4516 PYr for the 5 units. The actual production cost for these 5 units was 4437 PYr.
## Operations Costs
Modeling the operations cost is much more difficult than modeling the development and production cost due to the large number of operational influences, as well as scarce reliable reference data. That being said, we will try to model it as best as possible.
The operations cost has three key cost areas:
1. Direct Operations Cost (DOC)
1. Indirect Operations Cost (IOC)
1. Refurbishment and Spares Cost (RSC)
It should be noted that all payload-related activities are excluded from this model.
In the case of ELVs, operations costs make up around 20-35% of the total Cost per Flight (CpF). In the case of RLVs, the operations costs typically make up 35-70% of the total CpF.
### Direct Operations Cost (DOC)
The direct operations cost accounts for all activities directly related to the ground preparations of a launch vehicle, plus launch operations and checkout.
There are five cost areas that make up the DOC:
1. Ground Operations
1. Materials and Propellants
1. Flight and Mission Operations
1. Transportation and Recovery
1. Fees and Insurance
Some of these cost areas are easier to estimate than others. We will go over strategies for estimating each of these cost areas.
#### Ground Operations
Many things affect the cost of ground operations, including: the size and complexity of the vehicle; whether the vehicle is crewed or automated; the assembly, mating, and transportation mode of the vehicle (vertical or horizontal); the launch mode and associated launch facilities; and the number of launches per year.
The following provisional CER can be used to estimate the pre-launch ground operations cost:
$$ C_{PLO} = 8\ {M_0}^{0.67}\ L^{-0.9}\ N^{0.7}\ f_V\ f_C\ f_4\ f_8 $$
$M_0$: gross weight at lift-off (GLOW) of the vehicle in Mg (metric tons)
$L$: launch rate given as launches per year (LpA). This factor and the exponent of -0.9 define how the required team size grows with launch rate. If the exponent of $L$ were -1.0, this would imply a constant team size regardless of launch rate, which is unrealistic. As a side note, an important consideration with RLVs for determining the LpA (and the fleet size) is the necessary turn-around time for the vehicle.
$N$: number of stages or major vehicle elements. This represents how more operational effort is required with more systems.
$f_v$: launch vehicle type factor. This factor accounts for the varying operational effort required for different launch vehicle types.
For expendable multistage vehicles:
- liquid-propellant vehicles with cryogenic propellant: $f_v = 1.0$
- liquid-propellant vehicles with storable propellant: $f_v = 0.8$
- solid-propellant vehicles: $f_v = 0.3$
For reusable launch systems with integrated health control system:
- automated cargo vehicles (Cryo-SSTO): $f_v = 0.7$
- crewed/piloted vehicles: $f_v = 1.8$
For vehicles with different type stages, an average value should be used.
$f_c$: assembly and integration mode factor. This accounts for the difference in operational effort required for different assembly and checkout modes.
- Vertical assembly and checkout on the launch pad: $f_c = 1.0$
- Vertical assembly and checkout, then transport to launch pad: $f_c = 0.7$
- Horizontal assembly and checkout, transport to pad, erect: $f_c = 0.5$
$f_4$: cost reduction factor as described previously in the production costs section.
$f_8$: person-year correction factor as described previously in the development costs section.
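As a sketch of how this CER behaves, consider a hypothetical vehicle; every input below (GLOW, launch rate, factor values) is an assumption for illustration only:

```python
M0 = 450.0  # GLOW [Mg] (assumed)
L = 6       # launches per year (assumed)
N = 2       # number of stages (assumed)
f_v = 1.0   # cryogenic liquid-propellant vehicle
f_c = 0.7   # vertical assembly and checkout, then transport to pad
f_4 = 0.85  # cost reduction factor (assumed)
f_8 = 1.0   # US baseline

# Pre-launch ground operations cost per launch [PYr]
C_PLO = 8 * M0**0.67 * L**-0.9 * N**0.7 * f_v * f_c * f_4 * f_8
print(round(C_PLO, 1))
```

Note how the $L^{-0.9}$ term makes the per-launch cost fall, but only slowly, as the launch rate rises.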
#### Costs of Propellants and Gases
Propellants represent a relatively small fraction of the total CpF. Propellant costs are highly dependent on the production source capacity, as well as the country/region of purchase (for instance, LH2 costs nearly twice as much in Europe as it does in the US).
For liquid propellants, it is important to consider the mass of propellant that will be boiled off during filling, as well as the actual mass required to fill the tanks. For LOX, 50-70% additional propellant is required (beyond what is needed to fill the tanks) to account for boil-off. For LH2, 75-95% additional propellant is required.
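A minimal sketch of the resulting procurement masses, assuming hypothetical tank loads, applies the margins quoted above directly:

```python
# Assumed tank loads for a hypothetical cryogenic stage [Mg]
lox_in_tanks = 100.0
lh2_in_tanks = 16.0

# Margins from the text: LOX needs 50-70% extra, LH2 needs 75-95% extra
lox_to_purchase = (1.5 * lox_in_tanks, 1.7 * lox_in_tanks)
lh2_to_purchase = (1.75 * lh2_in_tanks, 1.95 * lh2_in_tanks)

print('LOX to purchase [Mg]:', lox_to_purchase)
print('LH2 to purchase [Mg]:', lh2_to_purchase)
```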
It should be noted that the cost of solid-propellants is included in the production cost and not the operations cost.
#### Launch, Flight and Mission Operations Cost
This cost area includes:
- Mission planning and preparation, including software update
- Launch and ascent flight control until payload separation
- Orbital and return flight operations in the case of reusable launch systems
- Flight safety control and tracking
This cost area does NOT include crew operations or in-orbit experiments.
For ELVs, the launch, flight and mission operations cost is relatively minimal given the short flight-time. For RLVs, this cost is much higher due to extended mission times and increased complexity.
For unmanned systems, the following provisional CER has been determined for the per-flight cost:
$$ C_m = 20\ \left(\sum{Q_N} \right)\ L^{-0.65}\ f_4\ f_8 \ [PYr] $$
$L$ is the launch rate, $f_4$ is the cost reduction factor, and $f_8$ is the person-year correction factor.
$Q_N$ is a vehicle complexity factor. It takes on a different value for different numbers and types of stages:
- Small solid motor stages: $Q_N = 0.15$ each.
- Expendable liquid-prop stages or large boosters: $Q_N = 0.4$ each.
- Recoverable or fly-back systems: $Q_N = 1.0$ each.
- Unmanned reusable orbital systems: $Q_N = 2.0$ each.
- Crewed orbital vehicles: $Q_N = 3.0$ each.
For example, we can consider the launch of the ATHENA Vehicle from Wallops Island. We will consider it in two cases:
1. Early Operations: 10th flight, 5 launches per year
1. Mature Operations: 50th flight, 8 launches per year
The ATHENA Vehicle is a four-stage vehicle with three small solid motor stages and a fourth expendable monopropellant liquid-fueled stage. It can be assumed that its production has a learning factor of 90%.
```
import math

p = 0.9        # learning factor
sum_QN = 0.85  # three small solid stages (3 * 0.15) + one expendable liquid stage (0.4)

# Early Operations, Case 1
L = 5           # launches per year
flight_num = 10
f_4 = flight_num**(math.log(p)/math.log(2))
print(f_4)
f_8 = 1.0
Cm_early = 20*sum_QN*L**(-0.65)*f_4*f_8
print('Cm_early: ' + str(Cm_early))

# Mature Operations, Case 2
L = 8
flight_num = 50
f_4 = flight_num**(math.log(p)/math.log(2))
f_8 = 1.0
print(f_4)
Cm_mature = 20*sum_QN*L**(-0.65)*f_4*f_8
print('Cm_mature: ' + str(Cm_mature))
```
For manned systems, an ADDITIONAL cost must be determined. The following provisional CER has been determined for the per-flight crewed operations cost:
$$ C_{ma} = 75\ {T_m}^{0.5}\ {N_a}^{0.5}\ L^{-0.8}\ f_4\ f_8\ [PYr] $$
$T_m$ is the mission duration in orbit in days, $N_a$ is the number of crew members, $L$ is the launch rate, $f_4$ is the cost reduction factor, and $f_8$ is the person-year correction factor.
The result of this CER must be added to the CER for the unmanned system to get the total launch, flight, and mission operations cost.
As an example, consider the Space Shuttle on its 10th flight at 4 LpA. Assume 7 crew onboard, a 14-day mission, and a learning factor of 90%.
We first calculate the unmanned CER for the vehicle system, and then calculate the additional mission cost for having a crewed system.
```
import math

T_m = 14        # mission duration in orbit [days]
N_a = 7         # number of crew members
L = 4           # launches per year
flight_num = 10
p = 0.9         # learning factor
f_4 = flight_num**(math.log(p)/math.log(2))
f_8 = 1.0
sum_QN = 5.4    # one crewed orbital vehicle + two recoverable boosters + one expendable liquid-prop stage
# find unmanned CER value
Cm_unmanned = 20*sum_QN*L**(-0.65)*f_4*f_8
# find manned CER value
C_ma = 75*T_m**0.5*N_a**0.5*L**(-0.8)*f_4*f_8
print('Cm_unmanned: ' + str(Cm_unmanned))
print('Cm_manned: ' + str(C_ma))
print('Sum: ' + str(Cm_unmanned + C_ma))
```
From the example of the shuttle, it can be seen that the majority of the launch, flight, and mission operations cost comes from having a crewed system, rather than an automated system.
#### Ground Transportation and Recovery Costs
This cost area includes things such as transportation of vehicle elements from their fabrication site to the launch area, transportation of reusable vehicles from a remote landing site to the launch area, and transportation of sea-launch facilities from the home harbour to the launch location and back.
Transportation costs for moving elements from their fabrication site to the launch area and cost of transporting sea-launch facilities cannot be accurately estimated with a CER.
However, a preliminary CER for the recovery cost for stages or boosters at sea is given by:
$$ C_{Rec} = \frac{1.5}{L}\left({7\ L^{0.7} + M^{0.83}}\right)\ f_8\ [PYr] $$
$L$ is the launch rate, $M$ is the recovery mass in Mg (metric tons), and $f_8$ is the person-year correction factor.
The specific cost per recovery decreases with launch rate and increases with recovery mass, which makes intuitive sense.
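As an illustration, assume a booster recovery mass of 90 Mg at 6 launches per year (both values assumed, not taken from a specific vehicle):

```python
L = 6       # launches per year (assumed)
M = 90.0    # recovery mass [Mg] (assumed)
f_8 = 1.0   # US baseline

# Recovery cost per launch from the provisional CER above [PYr]
C_Rec = (1.5 / L) * (7 * L**0.7 + M**0.83) * f_8
print(round(C_Rec, 1))
```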
#### Fees and Insurance Costs
A variety of fees and insurance costs contribute to the CpF for launch vehicles. Some of these fees include:
1. **Launch site user fee.** For most US launch sites, the US Department of Transportation (DOT) charges a per-launch fee. It should be noted that the DOC only considers the per-launch fee of using a launch site, and doesn't account for a yearly fixed general fee for using a launch site, which would be handled as part of the IOC.
1. **Public damage insurance.** Most governments require launch service providers to take out insurance against damage caused by parts of a launch vehicle falling to the ground.
1. **Launch vehicle insurance.** For ELVs, the insurance for a launch failure and payload loss normally has to be paid by the customer separately. For RLVs, the launch service provider is the owner of the vehicle and must insure it over its lifetime. However, the catastrophic failure rate for RLVs can be substantially reduced in comparison to ELVs due to increased redundancy, integrated health control systems, landing capabilities in case of emergencies, and the ability to perform flight tests.
1. **Surcharge for mission abort.** In the case of RLVs, there is a possibility that the vehicle performs an emergency landing without deploying or delivering the payload. In this case, the launch service provider would likely be obligated to provide a free re-launch to the customer. The cost of this mission abort must be considered by the launch provider. A mission abort could be more expensive than the rest of the DOC given necessary investigations that would follow.
### Refurbishment and Spares Cost (RSC)
This cost area accounts for the cost of refurbishment of launch vehicles. It is important to distinguish between the terms refurbishment and maintenance. Here, refurbishment refers to off-line activities, or major vehicle overhauls that have to be performed only after a certain number of flights. On the other hand, maintenance refers to on-line activities and includes everything that has to be done between two consecutive flights. Maintenance is accounted for in the pre-launch ground operations cost, and is not handled as part of the RSC.
Major refurbishment activities include:
1. Detailed vehicle system inspection (especially structure, tanks, and thermal protection)
1. Exchange of critical structure elements, such as TPS panels
1. Replacement of the complete main rocket engines
1. Exchange of critical components of the pressurization and feed system, power and electric system, and so on
The refurbishment costs for a vehicle element are typically treated as a percentage of the element production cost. The total refurbishment cost over the vehicle's lifetime must be distributed over the total number of flights to find the impact on the CpF. Like in calculating the development and production costs, engines and vehicle stages are handled separately.
#### Vehicle System Refurbishment Cost
The refurbishment and spares cost per flight for various air- and spacecraft is given in the chart below:

It is also important to note that the refurbishment cost and vehicle lifetime are NOT independent. The average per-flight refurbishment effort will increase with an increasing number of lifetime flights, since a larger number of vehicle elements will need to be exchanged. With this in mind, there may be an optimum number of vehicle reflights, beyond which it is more cost-effective to introduce a new vehicle than continue reusing an existing one.
#### Rocket Engine Refurbishment Cost
There is very little data available to quantify engine refurbishment cost. Data for the SSMEs, however, indicates a per-flight refurbishment cost of 11% of the original production cost. For future RLVs with a self-diagnosis system that indicates maintenance requirements, refurbishment effort can be expected to drop below 0.5% per flight, with refurbishment every 20 to 25 flights.
Engine lifetime is heavily influenced by the pressure levels involved. An effective strategy is to operate engines at approximately 90% of design thrust, which substantially lowers refurbishment effort and can decrease the CpF, despite the performance penalty of operating at lower thrust.
For solid rocket motors, cases can only be reused a few times due to relatively expensive recovery operations. In most scenarios, the cost-effectiveness of reusing solid rocket motors is questionable. In the case of the shuttle SRBs, the recovery and refurbishment effort for the two SRBs was more expensive than a pair of expendable SRBs without the recovery equipment would have been.
### Indirect Operations Cost (IOC)
The Indirect Operations Cost consists of all costs that represent a constant value per year, essentially independent of vehicle size and launch rate. This includes program administration and management, marketing and customer relations, general fees and taxes, technical support activities, and pilot training, among other things.
Three general cost elements make up the IOC:
1. Program Administration and System Management
1. Technical Support
1. Launch Site Support and Maintenance
The IOC typically adds up to a fixed cost budget per year, which must then be divided by the number of launches per year to find its contribution to the CpF. For approximately 6 - 12 LpA, the IOC typically represents 8 - 15% of the CpF. For low launch rates, however, its contribution to the CpF can be much larger.
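The fixed-budget character of the IOC can be sketched directly; the annual budget below is an assumed value for illustration only:

```python
annual_ioc = 250.0  # assumed fixed IOC budget [PYr per year]

# Per-flight IOC contribution shrinks as launch rate grows
for lpa in (2, 6, 12):
    per_flight = annual_ioc / lpa
    print(lpa, 'launches/yr ->', round(per_flight, 1), 'PYr per flight')
```

This is why low launch rates inflate the IOC share of the CpF so strongly.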
#### Program Administration and System Management
The most practical way of assessing the cost of this area is to estimate a number of staff required for these tasks. The staff has to cover a variety of tasks, including marketing and customer relations, vehicle procurement, contracts handling, and accounting. The related general overhead for the staff and these tasks, including rental charges, travel costs, computer power, exhibit and publication costs, and others must be included.
Costs of government fees, taxes, insurance costs, and financing costs also need to be considered here.
#### Technical Support
Launch service providers need to provide technical support capabilities for ground operations, including:
1. Supervision of technical standard and vehicle performance
1. Supervision of industrial contracts for vehicle procurement
1. Failure analysis and implementation of technical changes
1. Spares storage and administration (not belonging to refurbishment cost)
1. Pilot training and support for piloted vehicles
It is easiest to estimate these costs by estimating the number of staff required.
#### Launch Site Support and Maintenance
Launch sites operated by governmental organizations operate under a special budget, so launches of national spacecraft are therefore not charged with a launch site support and maintenance cost. For commercial endeavours, however, launch service providers typically have to pay a fixed fee per month or per year for use of the launch infrastructure, in addition to other per-launch fees (which are part of the DOC). This fee is entirely dependent on the specific launch site's fee schedule.
## Cost per Flight and Pricing
It is important now to make a distinction between Cost per Flight (CpF) and Price per Flight (PpF). Cost per Flight is the cost of production and operations per launch for the launch service provider. Price per Flight is the price charged by the launch service provider and paid for by the customer, which includes a development cost amortization charge and profit, in addition to the production and operations costs.
There are a few subtleties between CpF and PpF for ELVs and RLVs, as noted below.
The CpF includes:
1. Vehicle Cost
    - Fabrication, assembly, verification
- Expendable elements cost (RLVs only)
- Refurbishment and spares cost (RLVs only)
1. Direct Operations Cost
- Ground operations
- Flight and mission operations
- Propellants, gases, and consumables
- Ground transportation costs
- Launch facilities user fee
- Public damage fee
- Vehicle failure impact charge (ELVs only)
    - Mission abort and premature vehicle loss charge (RLVs only)
- Other charges (taxes, fees, ...)
1. Indirect Operations Cost
- Program administration and system management
- Marketing, customer relations, and contracts office
- Technical system support
- Launch site and range cost
Then, for PpF, there are the following items in addition to the CpF items:
4. Business Charges
- Development cost amortization charge
- Nominal profit
The total customer cost might also include an insurance fee for payload loss or launch failure (ELVs only) on top of all of this.
For comparison of different launch vehicle configurations and architectures within the same study, it may be appropriate to only consider the vehicle cost and direct operations cost. However, in order to get a complete CpF and be able to compare to existing vehicles, all cost items must be included in the model.
### Production Cost Amortization
In the case of RLVs, there is a charge for vehicle amortization, which is the production cost of the vehicle divided by the total expected number of flights. Little data exists for the maximum number of flights for a reusable vehicle, but it is expected to be somewhere between 100 and 300 flights. Choosing the optimal number of flights for a vehicle requires careful consideration of the refurbishment and amortization cost. There will likely exist a number of flights which yields a minimum CpF.
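This trade-off can be sketched with a toy model. The production cost and the linear growth of per-flight refurbishment with lifetime flights below are assumptions for illustration, not a TRANSCOST relationship:

```python
production_cost = 3000.0  # assumed vehicle production cost [PYr]
base_refurb = 1.0         # assumed per-flight refurbishment for a young vehicle [PYr]
wear_rate = 0.02          # assumed growth of refurbishment with lifetime flights

def vehicle_cost_per_flight(n_flights):
    amortization = production_cost / n_flights                  # spreads production cost
    refurbishment = base_refurb * (1 + wear_rate * n_flights)   # rises as elements wear out
    return amortization + refurbishment

# Amortization falls with more flights while refurbishment rises,
# so an interior minimum exists
optimum = min(range(50, 501), key=vehicle_cost_per_flight)
print(optimum)
```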
<img src="transcost_figures/cost_v_flights.png" alt="Drawing" style="width: 500px;"/>
Amortization of rocket engines must also be considered. The total number of flights per engine is likely substantially less than that of the vehicle, somewhere between 30 and 80 flights.
### Effects of Vehicle Size and Launch Frequency on CpF
CpF tends to decrease with launch frequency. This reflects the effects of the learning curve, as well as better distribution of indirect operations costs, which tend to be independent of launch frequency. The sensitivity of ELVs to launch frequency is greater than that of RLVs: IOC and DOC for RLVs are lower due to aircraft-like operations, which explains this difference in sensitivity.
CpF also tends to increase with GLOW or payload capability. For small launch vehicles, the CpF difference between ELVs and RLVs for the same payload mass is generally negligible. For large launch vehicles, the difference is substantial. The reason for this is that for large launch vehicles, the hardware cost becomes substantial, which gives RLVs a cost advantage, since major hardware is reused. Additionally, the major expenditures for RLVs are typically the operations costs, which are less sensitive to vehicle size.
<img src="transcost_figures/cpf_v_leo-payload.png" alt="Drawing" style="width: 500px;"/>
### Development Cost Amortization
In the case of government funded launch systems, the launch service provider typically is not concerned with a development cost amortization charge. This is different for commercial endeavours however. Development costs for very large or complicated launch vehicles are so high that it would likely be impossible to provide commercial funding, given that it could take 10 years or more for the investment to pay off. For this reason, commercially funded projects tend to be of a smaller scale.
Considering commercial endeavours: for a new ELV, the CpF would likely require a 15 - 40% development amortization charge in the case of 200 - 400 flights over its life-cycle. For a new RLV, the CpF would likely require a 200 - 400% surcharge for the same life-cycle.
However, despite the huge non-recurring development cost of the RLV, if we consider a case of an ELV with a 120M CpF and an RLV with a 35M CpF taking a payload of 8 Mg to LEO, we would expect the CpF for the RLV (including development amortization) to drop below that of the ELV once the number of launches exceeds 40 - 90. RLVs can almost certainly be competitive given a large enough total number of flights.
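The arithmetic behind that break-even range can be sketched as follows, using the CpF figures quoted above; the implied extra development cost is simply the per-flight saving times the number of launches:

```python
elv_cpf = 120.0  # ELV cost per flight [M$]
rlv_cpf = 35.0   # RLV cost per flight [M$]
saving_per_flight = elv_cpf - rlv_cpf

# Extra non-recurring RLV cost that would be recovered at the break-even points
for n_launches in (40, 90):
    print(n_launches, 'launches ->', n_launches * saving_per_flight, 'M$ recovered')
```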
<img src="transcost_figures/dev_amortization.png" alt="Drawing" style="width: 500px;"/>
### Pricing Strategies
1. **Standard pricing.** Price the launch vehicle based on the actual cost of the vehicle, flight operations, and amortization, plus profit.
1. **Pricing below cost.** A few situations might make this practical. For instance, if an additional launch can be done without affecting the IOC, and therefore cost relatively little. Another situation where this might make sense is if an interruption of the production line or a layoff of a specialized team can be avoided.
1. **Pricing according to payload mass.** This could make sense in the case of multiple payloads. However, it should be noted that the vehicle's maximum payload capacity will be reduced due to the necessity of more payload support structures. Additionally, the payload utilization factor usually decreases, since it is difficult to find multiple payloads whose combined mass exactly matches the payload capacity.
1. **Pricing for mini-satellites (piggy-back payloads).** This is for small satellites that can make up part of the residual payload capacity. Prices for this are typically negotiable.
## Cost of Unreliability/Insurance
Historically, liquid boosters and solid boosters have had a similar reliability of ~98%. However, reliability is an inherent problem for expendable vehicles, since stages and components cannot be tested in flight-like ways. Even when designing for redundancy, each production run involves new materials and slight variations.
### Cost of ELV's Unreliability and Insurance Fees
Launch vehicle failures not only have an impact on insurance rates (paid for by the customer), but also impose a cost penalty on the launch service provider, who then has to perform a failure analysis and implement technical improvements. Insurance costs vary widely and depend on recent launch successes and failures.
### Cost of RLV's Unreliability and Insurance Fees
The case of reliability and subsequent insurance costs is very different for RLVs than for ELVs. For RLVs:
1. The reliability will be higher due to better testing, and higher degree of redundancy and integrated health control systems
1. The flight can be aborted, with the vehicle landing at the launch site or an alternative site, and the payload can be saved
1. The vehicle loss insurance fee is paid by the launch provider as part of the DOC
In this case, the customer does not need to pay the vehicle insurance fee, and the payload insurance fee will be substantially less.
### Specific Costs vs. Total Annual Transportation Mass (Market Size) and Optimum RLV Payload Capability
If annual transportation demand increases, launch frequency and launch vehicle size will increase. These factors affect the specific transportation cost.
Based on two data studies (SPS and NEPTUNE), it was found that specific payload costs decrease with increasing market size. For an RLV and a market size of 1000 Mg/Yr, one could expect a specific cost of 2 - 10 PYr/Mg. For a market size of 10000 Mg/Yr, one could expect a specific cost of 0.3 - 2 PYr/Mg.
<img src="transcost_figures/spec-payload_v_total.png" alt="Drawing" style="width: 500px;"/>
Additionally, for a given market size, there exists an optimum RLV payload capacity that minimizes the specific cost. Larger launch vehicles typically have a higher payload utilization fraction and mass efficiency, which decreases the specific cost. However, for a fixed market size, a larger launch vehicle means a reduced launch rate, which increases the specific cost.
<img src="transcost_figures/spec-cost_v_payload.png" alt="Drawing" style="width: 600px;"/>
## Sources of Uncertainty
### Development Cost Model
Accuracy of the development model depends very much on:
1. consideration of ALL development cost criteria
1. realistic input data for the different vehicle and engines mass values, as well as for schedule
Risks:
1. Required technical changes and additional qualifications for technology that was chosen but not fully qualified
1. Changing vehicle specifications - vehicle design should be frozen at the start of the program
1. System mass was underestimated
1. Assumptions were made that everything would stay on schedule
### Production Cost Model
Criteria to consider:
1. Scope of verification/acceptance testing
1. Modification of the product during production
1. Production quantity - this is a huge uncertainty that has a large impact on production cost
1. Varying PYr-costs for a particular company - the rest of TRANSCOST uses an average PYr value for aerospace in the US
1. Personnel experience
### Operations Cost Model
Uncertainties arise from:
1. The duration of the operational phase and the total number of flights - this determines number of vehicles to be built
1. Launch frequency
1. Uncertainty over future launch site conditions
1. Required staff size and fixed annual cost
1. Technical problems during operational phase and subsequent failure investigation, implementation of modifications, interruptions to flight operations
| github_jupyter |
## Dependencies
```
import json, glob
import numpy as np
import pandas as pd
import tensorflow as tf
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts_aux import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras import layers
from tensorflow.keras.models import Model
```
# Load data
```
test = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv')
print('Test samples: %s' % len(test))
display(test.head())
```
# Model parameters
```
input_base_path = '/kaggle/input/208-robertabase/'
with open(input_base_path + 'config.json') as json_file:
config = json.load(json_file)
config
# vocab_path = input_base_path + 'vocab.json'
# merges_path = input_base_path + 'merges.txt'
base_path = '/kaggle/input/qa-transformers/roberta/'
vocab_path = base_path + 'roberta-base-vocab.json'
merges_path = base_path + 'roberta-base-merges.txt'
config['base_model_path'] = base_path + 'roberta-base-tf_model.h5'
config['config_path'] = base_path + 'roberta-base-config.json'
model_path_list = glob.glob(input_base_path + 'model' + '*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep = '\n')
```
# Tokenizer
```
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
```
# Pre process
```
test['text'].fillna('', inplace=True)
test['text'] = test['text'].apply(lambda x: x.lower())
test['text'] = test['text'].apply(lambda x: x.strip())
x_test, x_test_aux, x_test_aux_2 = get_data_test(test, tokenizer, config['MAX_LEN'], preprocess_fn=preprocess_roberta_test)
```
# Model
```
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=True)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
_, _, hidden_states = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
h11 = hidden_states[-2]
x = layers.SpatialDropout1D(.1)(h11)
start_logits = layers.Dense(1, name="start_logit", use_bias=False)(x)
start_logits = layers.Flatten()(start_logits)
end_logits = layers.Dense(1, name="end_logit", use_bias=False)(x)
end_logits = layers.Flatten()(end_logits)
start_probs = layers.Activation('softmax', name='y_start')(start_logits)
end_probs = layers.Activation('softmax', name='y_end')(end_logits)
model = Model(inputs=[input_ids, attention_mask], outputs=[start_probs, end_probs])
return model
```
# Make predictions
```
NUM_TEST_SAMPLES = len(test)
test_start_preds = np.zeros((NUM_TEST_SAMPLES, config['MAX_LEN']))
test_end_preds = np.zeros((NUM_TEST_SAMPLES, config['MAX_LEN']))
for model_path in model_path_list:
print(model_path)
model = model_fn(config['MAX_LEN'])
model.load_weights(model_path)
test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE']))
test_start_preds += test_preds[0]
test_end_preds += test_preds[1]
```
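Summing each model's softmax outputs before taking the argmax (as the loop above does) is soft-voting ensembling. A minimal sketch with made-up probabilities for two models over four token positions:

```python
import numpy as np

# Hypothetical start-position probabilities from two models (one sample, 4 tokens).
model_a = np.array([[0.10, 0.60, 0.20, 0.10]])
model_b = np.array([[0.05, 0.35, 0.50, 0.10]])

# The two models disagree on the argmax individually...
assert model_a.argmax(axis=-1)[0] == 1
assert model_b.argmax(axis=-1)[0] == 2

# ...but summing probabilities picks the position with the most shared mass.
ensemble = model_a + model_b
best = ensemble.argmax(axis=-1)[0]  # -> 1
```

Dividing the sum by the number of models would give an average instead, but the argmax is unchanged either way.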
# Post process
```
test['start'] = test_start_preds.argmax(axis=-1)
test['end'] = test_end_preds.argmax(axis=-1)
test['text_len'] = test['text'].apply(lambda x : len(x))
test['text_wordCnt'] = test['text'].apply(lambda x : len(x.split(' ')))
# test['end'].clip(0, test['text_len'], inplace=True)
# test['start'].clip(0, test['end'], inplace=True)
test['selected_text'] = test.apply(lambda x: decode(x['start'], x['end'], x['text'], config['question_size'], tokenizer), axis=1)
test['selected_text'] = test.apply(lambda x: x['text'] if (x['selected_text'] == '') else x['selected_text'], axis=1)
test['selected_text'].fillna(test['text'], inplace=True)
```
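`decode` here is a project-specific helper from the imported utility scripts. Conceptually, it maps the predicted start/end token indices back to a character span of the original text; a minimal sketch with hypothetical token offsets (not the real tokenizer output):

```python
text = "i love the beach"
# Hypothetical (start, end) character offsets for each token of `text`.
offsets = [(0, 1), (2, 6), (7, 10), (11, 16)]

def decode_span(start_tok, end_tok, text, offsets):
    # Slice the original text from the first to the last selected token.
    char_start = offsets[start_tok][0]
    char_end = offsets[end_tok][1]
    return text[char_start:char_end]

print(decode_span(1, 3, text, offsets))  # -> "love the beach"
```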
# Visualize predictions
```
display(test.head(10))
```
# Test set predictions
```
submission = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/sample_submission.csv')
submission['selected_text'] = test['selected_text']
submission.to_csv('submission.csv', index=False)
submission.head(10)
```
# Regularization
Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that **overfitting can be a serious problem** if the training dataset is not big enough. Such a model does well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen!
**You will learn to:** Use regularization in your deep learning models.
Let's first import the packages you are going to use.
### <font color='darkblue'> Updates to Assignment <font>
#### If you were working on a previous version
* The current notebook filename is version "2a".
* You can find your work in the file directory as version "2".
* To see the file directory, click on the Coursera logo at the top left of the notebook.
#### List of Updates
* Clarified explanation of 'keep_prob' in the text description.
* Fixed a comment so that keep_prob and 1-keep_prob add up to 100%
* Updated print statements and 'expected output' for easier visual comparisons.
```
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
```
**Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head.
<img src="images/field_kiank.png" style="width:600px;height:350px;">
<caption><center> <u> **Figure 1** </u>: **Football field**<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>
They give you the following 2D dataset from France's past 10 games.
```
train_X, train_Y, test_X, test_Y = load_2D_dataset()
```
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
**Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.
**Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well.
You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem.
## 1 - Non-regularized model
You will use the following neural network (already implemented for you below). This model can be used:
- in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use "`lambd`" instead of "`lambda`" because "`lambda`" is a reserved keyword in Python.
- in *dropout mode* -- by setting the `keep_prob` to a value less than one
You will first try the model without any regularization. Then, you will implement:
- *L2 regularization* -- functions: "`compute_cost_with_regularization()`" and "`backward_propagation_with_regularization()`"
- *Dropout* -- functions: "`forward_propagation_with_dropout()`" and "`backward_propagation_with_dropout()`"
In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
```
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
"""
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
```
Let's train the model without any regularization, and observe the accuracy on the train/test sets.
```
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
```
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Lets now look at two techniques to reduce overfitting.
## 2 - L2 Regularization
The standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$
To:
$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$
Let's modify your cost and observe the consequences.
**Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :
```python
np.sum(np.square(Wl))
```
Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
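As a standalone sanity check, the regularization term of formula (2) can be computed with small random matrices standing in for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$ (the shapes below are the assignment's layer sizes, the values are random stand-ins):

```python
import numpy as np

np.random.seed(0)
m, lambd = 5, 0.7
# Hypothetical weight matrices standing in for W1, W2, W3.
weights = [np.random.randn(20, 2), np.random.randn(3, 20), np.random.randn(1, 3)]

# (lambda / 2m) * sum over layers of the sum of squared entries
L2_regularization_cost = (lambd / (2 * m)) * sum(np.sum(np.square(W)) for W in weights)
print(L2_regularization_cost)
```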
```
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
"""
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = (lambd/(2*m)) * (np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3)))
    ### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
```
**Expected Output**:
<table>
<tr>
<td>
**cost**
</td>
<td>
1.78648594516
</td>
</tr>
</table>
Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost.
**Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
```
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
"""
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + (lambd/m) * W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + (lambd/m) * W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + (lambd/m) * W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = \n"+ str(grads["dW1"]))
print ("dW2 = \n"+ str(grads["dW2"]))
print ("dW3 = \n"+ str(grads["dW3"]))
```
**Expected Output**:
```
dW1 =
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
dW2 =
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
dW3 =
[[-1.77691347 -0.11832879 -0.09397446]]
```
Let's now run the model with L2 regularization $(\lambda = 0.7)$. The `model()` function will call:
- `compute_cost_with_regularization` instead of `compute_cost`
- `backward_propagation_with_regularization` instead of `backward_propagation`
```
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
```
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Observations**:
- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.
- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.
**What is L2-regularization actually doing?**:
L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes.
<font color='blue'>
**What you should remember** -- the implications of L2-regularization on:
- The cost computation:
- A regularization term is added to the cost
- The backpropagation function:
- There are extra terms in the gradients with respect to weight matrices
- Weights end up smaller ("weight decay"):
- Weights are pushed to smaller values.
## 3 - Dropout
Finally, **dropout** is a widely used regularization technique that is specific to deep learning.
**It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!
<!--
To understand drop-out, consider this conversation with a friend:
- Friend: "Why do you need all these neurons to train your network and classify images?".
- You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more features my model learns!"
- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"
- You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitely possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."
!-->
<center>
<video width="620" height="440" src="images/dropout1_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<br>
<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons contribute to neither the forward nor the backward propagation of the iteration. </center></caption>
<center>
<video width="620" height="440" src="images/dropout2_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time.
### 3.1 - Forward propagation with dropout
**Exercise**: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer.
**Instructions**:
You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 1 with probability (`keep_prob`), and 0 otherwise.
**Hint:** Let's say that keep_prob = 0.8, which means that we want to keep about 80% of the neurons and drop out about 20% of them. We want to generate a vector that has 1's and 0's, where about 80% of them are 1 and about 20% are 0.
This python statement:
`X = (X < keep_prob).astype(int)`
is conceptually the same as this if-else statement (for the simple case of a one-dimensional array) :
```
for i,v in enumerate(x):
if v < keep_prob:
x[i] = 1
else: # v >= keep_prob
x[i] = 0
```
Note that `X = (X < keep_prob).astype(int)` works with multi-dimensional arrays, and the resulting output preserves the dimensions of the input array.
Also note that without using `.astype(int)`, the result is an array of booleans `True` and `False`, which Python automatically converts to 1 and 0 if we multiply it with numbers. (However, it's better practice to convert data into the data type that we intend, so try using `.astype(int)`.)
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by `keep_prob`. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
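Steps 1-4 can be sketched in isolation to see why dividing by `keep_prob` ("inverted dropout") preserves the expected activation — the activations below are random stand-ins, not the assignment's data:

```python
import numpy as np

np.random.seed(0)
keep_prob = 0.8
A1 = np.random.rand(20, 10000)  # hypothetical activations for many examples

D1 = np.random.rand(*A1.shape) < keep_prob  # Steps 1-2: Bernoulli keep-mask
A1_dropped = (A1 * D1) / keep_prob          # Steps 3-4: shut down, then rescale

# About 20% of entries are zeroed, yet the mean activation is nearly unchanged.
assert abs(D1.mean() - keep_prob) < 0.01
assert abs(A1_dropped.mean() - A1.mean()) < 0.01
```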
```
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
"""
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
"""
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(*A1.shape) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = D1 < keep_prob # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = np.multiply(A1, D1) # Step 3: shut down some neurons of A1
A1 = A1/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(*A2.shape) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = D2 < keep_prob # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = np.multiply(A2, D2) # Step 3: shut down some neurons of A2
A2 = A2/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
```
**Expected Output**:
<table>
<tr>
<td>
**A3**
</td>
<td>
[[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
</td>
</tr>
</table>
### 3.2 - Backward propagation with dropout
**Exercise**: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache.
**Instruction**:
Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:
1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`.
2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).
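The two steps can be sketched in isolation, with random stand-ins for the cached mask and the upstream gradient:

```python
import numpy as np

np.random.seed(0)
keep_prob = 0.8
dA1 = np.random.randn(3, 5)            # hypothetical upstream gradient
D1 = np.random.rand(3, 5) < keep_prob  # the mask saved in the forward cache

dA1 = (dA1 * D1) / keep_prob  # Step 1: reapply the mask; Step 2: rescale

# Gradients flowing into dropped neurons are exactly zero.
assert np.all(dA1[~D1] == 0)
```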
```
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
"""
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = np.multiply(dA2, D2) # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = np.multiply(dA1, D1) # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = \n" + str(gradients["dA1"]))
print ("dA2 = \n" + str(gradients["dA2"]))
```
**Expected Output**:
```
dA1 =
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
dA2 =
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
```
Let's now run the model with dropout (`keep_prob = 0.86`). It means at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function `model()` will now call:
- `forward_propagation_with_dropout` instead of `forward_propagation`.
- `backward_propagation_with_dropout` instead of `backward_propagation`.
```
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary.
```
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Note**:
- A **common mistake** when using dropout is to use it both in training and testing. You should use dropout (randomly eliminate nodes) only in training.
- Deep learning frameworks like [tensorflow](https://www.tensorflow.org/api_docs/python/tf/nn/dropout), [PaddlePaddle](http://doc.paddlepaddle.org/release_doc/0.9.0/doc/ui/api/trainer_config_helpers/attrs.html), [keras](https://keras.io/layers/core/#dropout) or [caffe](http://caffe.berkeleyvision.org/tutorial/layers/dropout.html) come with a dropout layer implementation. Don't stress - you will soon learn some of these frameworks.
<font color='blue'>
**What you should remember about dropout:**
- Dropout is a regularization technique.
- You only use dropout during training. Don't use dropout (randomly eliminate nodes) during test time.
- Apply dropout both during forward and backward propagation.
- During training time, divide each dropout layer by keep_prob to keep the same expected value for the activations. For example, if keep_prob is 0.5, then we will on average shut down half the nodes, so the output will be scaled by 0.5 since only the remaining half are contributing to the solution. Dividing by 0.5 is equivalent to multiplying by 2. Hence, the output now has the same expected value. You can check that this works even when keep_prob is other values than 0.5.
## 4 - Conclusions
**Here are the results of our three models**:
<table>
<tr>
<td>
**model**
</td>
<td>
**train accuracy**
</td>
<td>
**test accuracy**
</td>
</tr>
<tr>
<td>
3-layer NN without regularization
</td>
<td>
95%
</td>
<td>
91.5%
</td>
</tr>
<tr>
<td>
3-layer NN with L2-regularization
</td>
<td>
94%
</td>
<td>
93%
</td>
</tr>
<tr>
<td>
3-layer NN with dropout
</td>
<td>
93%
</td>
<td>
95%
</td>
</tr>
</table>
Note that regularization hurts training set performance! This is because it limits the ability of the network to overfit to the training set. But since it ultimately gives better test accuracy, it is helping your system.
Congratulations for finishing this assignment! And also for revolutionizing French football. :-)
<font color='blue'>
**What we want you to remember from this notebook**:
- Regularization will help you reduce overfitting.
- Regularization will drive your weights to lower values.
- L2 regularization and Dropout are two very effective regularization techniques.
```
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
```
# Reflect Tables into SQLAlchemy ORM
```
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, inspect
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
engine.execute('SELECT * FROM Measurement LIMIT 10').fetchall()
engine.execute('SELECT * FROM Station LIMIT 10').fetchall()
inspector = inspect(engine)
inspector.get_table_names()
measurement_columns = inspector.get_columns('measurement')
for m_c in measurement_columns:
print(m_c['name'], m_c["type"])
station_columns = inspector.get_columns('station')
for m_c in station_columns:
print(m_c['name'], m_c["type"])
```
# Exploratory Climate Analysis
```
# Design a query to retrieve the last 12 months of precipitation data and plot the results
Measurement = Base.classes.measurement
last_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
# Calculate the date 1 year before the last date in the dataset (2017-08-23)
last_year = dt.date(2017, 8, 23) - dt.timedelta(days=365)
# Perform a query to retrieve the data and precipitation scores
Precipitation = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date > last_year).\
order_by(Measurement.date.desc()).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
df = pd.DataFrame(Precipitation[:], columns=['date', 'prcp'])
df.set_index('date', inplace=True)
# Sort the dataframe by date
df = df.sort_index()
df.head()
# Use Pandas Plotting with Matplotlib to plot the data
df.plot(kind="line",linewidth=4,figsize=(15,10))
plt.style.use('fivethirtyeight')
plt.xlabel("Date")
plt.title("Precipitation Analysis (From 8/24/16 to 8/23/17)")
# Rotate the xticks for the dates
plt.xticks(rotation=45)
plt.legend(["Precipitation"])
plt.tight_layout()
plt.show()
# Use Pandas to calculate the summary statistics for the precipitation data
df.describe()
# How many stations are available in this dataset?
stations_count = session.query(Measurement).group_by(Measurement.station).count()
print("There are {} stations.".format(stations_count))
# What are the most active stations?
# List the stations and the counts in descending order.
active_stations = session.query(Measurement.station, func.count(Measurement.tobs)).group_by(Measurement.station).\
order_by(func.count(Measurement.tobs).desc()).all()
active_stations
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature at the most active station
most_active_station = 'USC00519281'
active_station_stat = session.query(Measurement.station, func.min(Measurement.tobs), func.max(Measurement.tobs), func.avg(Measurement.tobs)).\
filter(Measurement.station == most_active_station).all()
active_station_stat
# A query to retrieve the last 12 months of temperature observation data (tobs).
# Filter by the station with the highest number of observations.
temperature = session.query(Measurement.station, Measurement.date, Measurement.tobs).\
    filter(Measurement.station == most_active_station).\
    filter(Measurement.date > last_year).\
    order_by(Measurement.date).all()
temperature
# Plot the results as a histogram with bins=12.
measure_df=pd.DataFrame(temperature)
hist_plot = measure_df['tobs'].hist(bins=12, figsize=(15,10))
plt.xlabel("Recorded Temperature")
plt.ylabel("Frequency")
plt.title("Last 12 Months Station Analysis for Most Active Station")
plt.show()
# Write a function called `calc_temps` that will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
    """TMIN, TAVG, and TMAX for a list of dates.

    Args:
        start_date (string): A date string in the format %Y-%m-%d
        end_date (string): A date string in the format %Y-%m-%d

    Returns:
        TMIN, TAVG, and TMAX
    """
    return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
        filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
trip_departure = dt.date(2018, 5, 1)
trip_arrival = dt.date(2018, 4, 2)
last_year = dt.timedelta(days=365)
trip_stat = (calc_temps((trip_arrival - last_year), (trip_departure - last_year)))
print(trip_stat)
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
average_temp = trip_stat[0][1]
minimum_temp = trip_stat[0][0]
maximum_temp = trip_stat[0][2]
peak_yerr = (maximum_temp - minimum_temp)/2  # half the peak-to-peak range, so the full error bar spans tmax - tmin
barvalue = [average_temp]
xvals = range(len(barvalue))
fig, ax = plt.subplots()
width = 0.4  # bar width
rects = ax.bar(xvals, barvalue, width, color='g', yerr=peak_yerr,
               error_kw=dict(elinewidth=6, ecolor='black'))
def autolabel(rects):
# attach some text labels
for rect in rects:
height = rect.get_height()
plt.text(rect.get_x()+rect.get_width()/2., .6*height, '%.2f'%float(height),
ha='left', va='top')
autolabel(rects)
plt.ylim(0, 100)
ax.set_xticks([])  # a single bar needs no x ticks
ax.set_xlabel("Trip")
ax.set_ylabel("Temp (F)")
ax.set_title("Trip Avg Temp")
fig.tight_layout()
plt.show()
#trip dates - last year
trip_arrival_date = trip_arrival - last_year
trip_departure_date = trip_departure - last_year
print(trip_arrival_date)
print(trip_departure_date)
# Calculate the rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
trip_arrival_date = trip_arrival - last_year
trip_departure_date = trip_departure - last_year
rainfall_trip_data = session.query(Measurement.station, Measurement.date, func.avg(Measurement.prcp), Measurement.tobs).\
    filter(Measurement.date >= trip_arrival_date).\
    filter(Measurement.date <= trip_departure_date).\
    group_by(Measurement.station).\
    order_by(func.avg(Measurement.prcp).desc()).all()
rainfall_trip_data
df_rainfall_stations = session.query(Station.station, Station.name, Station.latitude, Station.longitude, Station.elevation).\
order_by(Station.station.desc()).all()
df_rainfall_stations
df_rainfall = pd.DataFrame(rainfall_trip_data[:], columns=['station','date','precipitation','temperature'])
df_station = pd.DataFrame(df_rainfall_stations[:], columns=['station', 'name', 'latitude', 'longitude', 'elevation'])
df_station
result = pd.merge(df_rainfall, df_station, on='station')
df_result = result.drop(['date', 'precipitation', 'temperature'], axis=1)
df_result
```
## Optional Challenge Assignment
```
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
# Use the start and end date to create a range of dates
# Strip off the year and save a list of %m-%d strings
# Loop through the list of %m-%d strings and calculate the normals for each date
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
# Plot the daily normals as an area plot with `stacked=False`
```
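A minimal sketch of the steps above (the trip dates here are hypothetical, and the `daily_normals` query is left commented out because it needs the live `session` from above):

```python
import datetime as dt

# Set the (hypothetical) start and end date of the trip
trip_start = dt.date(2018, 1, 1)
trip_end = dt.date(2018, 1, 7)

# Use the start and end date to create a range of dates,
# then strip off the year and save a list of %m-%d strings
num_days = (trip_end - trip_start).days + 1
trip_dates = [trip_start + dt.timedelta(days=i) for i in range(num_days)]
month_days = [d.strftime("%m-%d") for d in trip_dates]

# Loop through the %m-%d strings and calculate the normals for each date:
# normals = [daily_normals(md)[0] for md in month_days]
# df = pd.DataFrame(normals, columns=["tmin", "tavg", "tmax"], index=trip_dates)
# df.plot.area(stacked=False)
print(month_days)
```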
| github_jupyter |
# Experiments for ER Graph
## Imports
```
%load_ext autoreload
%autoreload 2
import os
import sys
from collections import OrderedDict
import logging
import math
from matplotlib import pyplot as plt
import networkx as nx
import numpy as np
import torch
from torchdiffeq import odeint, odeint_adjoint
sys.path.append('../')
# Baseline imports
from gd_controller import AdjointGD
from dynamics_driver import ForwardKuramotoDynamics, BackwardKuramotoDynamics
# Nodec imports
from neural_net import EluTimeControl, TrainingAlgorithm
# Various Utilities
from utilities import evaluate, calculate_critical_coupling_constant, comparison_plot, state_plot
from nnc.helpers.torch_utils.oscillators import order_parameter_cos
logging.getLogger().setLevel(logging.CRITICAL) # set to info to look at loss values etc.
```
## Load graph parameters
Basic setup for calculations, graph, number of nodes, etc.
```
dtype = torch.float32
device = 'cpu'
graph_type = 'erdos_renyi'
adjacency_matrix = torch.load('../../data/'+graph_type+'_adjacency.pt')
parameters = torch.load('../../data/parameters.pt')
# driver vector is a column vector with 1 value for driver nodes
# and 0 for non drivers.
result_folder = '../../results/' + graph_type + os.path.sep
os.makedirs(result_folder, exist_ok=True)
```
## Load dynamics parameters
Load the natural frequencies and initial states, which are common to all graphs, and calculate the coupling constant, which differs per graph. We use a coupling constant equal to $10\%$ of the critical coupling constant.
```
total_time = parameters['total_time']
total_time = 5  # override the stored value for this experiment
natural_frequencies = parameters['natural_frequencies']
critical_coupling_constant = calculate_critical_coupling_constant(adjacency_matrix, natural_frequencies)
coupling_constant = 0.1*critical_coupling_constant
theta_0 = parameters['theta_0']
```
## NODEC
We now train NODEC with a shallow neural network. We initialize the parameters in a deterministic manner, and use stochastic gradient descent to train it. The learning rate, number of epochs and neural architecture may change per graph. We use different fractions of driver nodes.
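For reference, the order parameter $r$ reported below is computed by `order_parameter_cos`; assuming it implements the standard Kuramoto order parameter (an assumption, since that helper is imported from an external module), a minimal numpy version looks like this:

```python
import numpy as np

def order_parameter(theta):
    """Kuramoto order parameter r = |mean(exp(i*theta))|: 1 for full sync, ~0 for incoherence."""
    return float(np.abs(np.exp(1j * np.asarray(theta)).mean()))

print(order_parameter([0.0, 0.0, 0.0]))                               # fully synchronized -> 1.0
print(order_parameter(np.linspace(0, 2*np.pi, 100, endpoint=False)))  # evenly spread phases -> ~0
```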
```
fractions = np.linspace(0,1,10)
order_parameter_mean = []
order_parameter_std = []
samples = 1000
for p in fractions:
sample_arr = []
for i in range(samples):
print(p,i)
driver_nodes = int(p*adjacency_matrix.shape[0])
driver_vector = torch.zeros([adjacency_matrix.shape[0],1])
idx = torch.randperm(len(driver_vector))[:driver_nodes]
driver_vector[idx] = 1
forward_dynamics = ForwardKuramotoDynamics(adjacency_matrix,
driver_vector,
coupling_constant,
natural_frequencies
)
backward_dynamics = BackwardKuramotoDynamics(adjacency_matrix,
driver_vector,
coupling_constant,
natural_frequencies
)
neural_net = EluTimeControl([2])
for parameter in neural_net.parameters():
parameter.data = torch.ones_like(parameter.data)/1000 # deterministic init!
train_algo = TrainingAlgorithm(neural_net, forward_dynamics)
best_model = train_algo.train(theta_0, total_time, epochs=3, lr=0.3)
control_trajectory, state_trajectory =\
evaluate(forward_dynamics, theta_0, best_model, total_time, 100)
nn_control = torch.cat(control_trajectory).squeeze().cpu().detach().numpy()
nn_states = torch.cat(state_trajectory).cpu().detach().numpy()
nn_e = (nn_control**2).cumsum(-1)
nn_r = order_parameter_cos(torch.tensor(nn_states)).cpu().numpy()
sample_arr.append(nn_r[-1])
order_parameter_mean.append(np.mean(sample_arr))
order_parameter_std.append(np.std(sample_arr,ddof=1))
order_parameter_mean = np.array(order_parameter_mean)
order_parameter_std = np.array(order_parameter_std)
plt.figure()
plt.errorbar(fractions,order_parameter_mean,yerr=order_parameter_std/np.sqrt(samples),fmt="o")
plt.xlabel(r"fraction of controlled nodes")
plt.ylabel(r"$r(T)$")
plt.tight_layout()
plt.show()
np.savetxt("ER_drivers_K01.csv",np.c_[order_parameter_mean,order_parameter_std],header="order parameter mean\t order parameter std")
```
| github_jupyter |
Parallel Single-channel CSC
===========================
This example compares the use of [parcbpdn.ParConvBPDN](http://sporco.rtfd.org/en/latest/modules/sporco.admm.parcbpdn.html#sporco.admm.parcbpdn.ParConvBPDN) with [admm.cbpdn.ConvBPDN](http://sporco.rtfd.org/en/latest/modules/sporco.admm.cbpdn.html#sporco.admm.cbpdn.ConvBPDN) solving a convolutional sparse coding problem with a greyscale signal
$$\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| \sum_m \mathbf{d}_m * \mathbf{x}_{m} - \mathbf{s} \right\|_2^2 + \lambda \sum_m \| \mathbf{x}_{m} \|_1 \;,$$
where $\mathbf{d}_{m}$ is the $m^{\text{th}}$ dictionary filter, $\mathbf{x}_{m}$ is the coefficient map corresponding to the $m^{\text{th}}$ dictionary filter, and $\mathbf{s}$ is the input image.
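As a toy 1-D illustration with synthetic data (not using the SPORCO API), the objective above can be evaluated directly:

```python
import numpy as np

rng = np.random.RandomState(0)
lmbda = 5e-2

# three 1-D dictionary filters d_m and coefficient maps x_m
d = [rng.randn(5) for _ in range(3)]
x = [rng.randn(32) for _ in range(3)]

# synthesize the signal s as the sum of convolutions, so the data term vanishes
s = sum(np.convolve(dm, xm, mode="same") for dm, xm in zip(d, x))

recon = sum(np.convolve(dm, xm, mode="same") for dm, xm in zip(d, x))
data_term = 0.5 * np.sum((recon - s) ** 2)
l1_term = lmbda * sum(np.abs(xm).sum() for xm in x)
print(data_term, l1_term)   # the data term is 0 for this synthetic s
```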
```
from __future__ import print_function
from builtins import input
import pyfftw # See https://github.com/pyFFTW/pyFFTW/issues/40
import numpy as np
from sporco import util
from sporco import signal
from sporco import plot
plot.config_notebook_plotting()
import sporco.metric as sm
from sporco.admm import cbpdn
from sporco.admm import parcbpdn
```
Load example image.
```
img = util.ExampleImages().image('kodim23.png', zoom=1.0, scaled=True,
gray=True, idxexp=np.s_[160:416, 60:316])
```
Highpass filter example image.
```
npd = 16
fltlmbd = 10
sl, sh = signal.tikhonov_filter(img, fltlmbd, npd)
```
Load dictionary and display it.
```
D = util.convdicts()['G:12x12x216']
plot.imview(util.tiledict(D), fgsz=(7, 7))
lmbda = 5e-2
```
The `RelStopTol` options were chosen so that the two different methods stop with similar functional values.
Initialise and run standard serial CSC solver using ADMM with an equality constraint [[49]](http://sporco.rtfd.org/en/latest/zreferences.html#id51).
```
opt = cbpdn.ConvBPDN.Options({'Verbose': True, 'MaxMainIter': 200,
'RelStopTol': 5e-3, 'AuxVarObj': False,
'AutoRho': {'Enabled': False}})
b = cbpdn.ConvBPDN(D, sh, lmbda, opt=opt, dimK=0)
X = b.solve()
```
Initialise and run parallel CSC solver using ADMM dictionary partition method [[42]](http://sporco.rtfd.org/en/latest/zreferences.html#id43).
```
opt_par = parcbpdn.ParConvBPDN.Options({'Verbose': True, 'MaxMainIter': 200,
'RelStopTol': 1e-2, 'AuxVarObj': False, 'AutoRho':
{'Enabled': False}, 'alpha': 2.5})
b_par = parcbpdn.ParConvBPDN(D, sh, lmbda, opt=opt_par, dimK=0)
X_par = b_par.solve()
```
Report runtimes of different methods of solving the same problem.
```
print("ConvBPDN solve time: %.2fs" % b.timer.elapsed('solve_wo_rsdl'))
print("ParConvBPDN solve time: %.2fs" % b_par.timer.elapsed('solve_wo_rsdl'))
print("ParConvBPDN was %.2f times faster than ConvBPDN\n" %
(b.timer.elapsed('solve_wo_rsdl')/b_par.timer.elapsed('solve_wo_rsdl')))
```
Reconstruct images from sparse representations.
```
shr = b.reconstruct().squeeze()
imgr = sl + shr
shr_par = b_par.reconstruct().squeeze()
imgr_par = sl + shr_par
```
Report performances of different methods of solving the same problem.
```
print("Serial reconstruction PSNR: %.2fdB" % sm.psnr(img, imgr))
print("Parallel reconstruction PSNR: %.2fdB\n" % sm.psnr(img, imgr_par))
```
Display original and reconstructed images.
```
fig = plot.figure(figsize=(21, 7))
plot.subplot(1, 3, 1)
plot.imview(img, title='Original', fig=fig)
plot.subplot(1, 3, 2)
plot.imview(imgr, title=('Serial Reconstruction PSNR: %5.2f dB' %
sm.psnr(img, imgr)), fig=fig)
plot.subplot(1, 3, 3)
plot.imview(imgr_par, title=('Parallel Reconstruction PSNR: %5.2f dB' %
sm.psnr(img, imgr_par)), fig=fig)
fig.show()
```
Display low pass component and sum of absolute values of coefficient maps of highpass component.
```
fig = plot.figure(figsize=(21, 7))
plot.subplot(1, 3, 1)
plot.imview(sl, title='Lowpass component', fig=fig)
plot.subplot(1, 3, 2)
plot.imview(np.sum(abs(X), axis=b.cri.axisM).squeeze(),
cmap=plot.cm.Blues, title='Serial Sparse Representation',
fig=fig)
plot.subplot(1, 3, 3)
plot.imview(np.sum(abs(X_par), axis=b.cri.axisM).squeeze(),
cmap=plot.cm.Blues, title='Parallel Sparse Representation',
fig=fig)
fig.show()
```
| github_jupyter |
# Let's compare 4 different strategies for sentiment analysis:
1. **Custom model using open source package**. Build a custom model using scikit-learn and TF-IDF features on n-grams. This method is known to work well for English text.
2. **Integrate** a pre-built API. The "sentiment HQ" API provided by indico has been shown to achieve state-of-the-art accuracy, using a recurrent neural network.
3. **Word-level features**. A custom model, built from word-level text features from indico's "text features" API.
4. **RNN features**. A custom model built via transfer learning, using the recurrent features from indico's sentiment HQ model to train a new custom model.
Note: this notebook and the enclosed code snippets accompany the KDnuggets post:
### Semi-supervised feature transfer: the big practical benefit of deep learning today?
<img src="header.jpg">
### Download the data
1. Download the "Large Movie Review Dataset" from http://ai.stanford.edu/~amaas/data/sentiment/.
2. Decompress it.
3. Put it into some directory path that you define below.
Citation: Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. (2011). Learning Word Vectors for Sentiment Analysis. The 49th Annual Meeting of the Association for Computational Linguistics (ACL 2011).
### User parameters
```
seed = 3 # for reproducibility across experiments, just pick something
train_num = 100 # number of training examples to use
test_num = 100 # number of examples to use for testing
base_model_name = "sentiment_train%s_test%s" % (train_num, test_num)
lab2bin = {'pos': 1, 'neg': 0} # label -> binary class
pos_path = "~DATASETS/aclImdb/train/pos/" # filepath to the positive examples
neg_path = "~DATASETS/aclImdb/train/neg/" # file path to the negative examples
output_path = "OUTPUT" # path where output file should go
batchsize = 25 # send this many requests at once
max_num_examples = 25000.0 # for making subsets below
```
### Setup and imports
Install modules as needed (for example: `pip install indicoio`)
```
import os, io, glob, random, time
# from itertools import islice, chain, izip_longest
import numpy as np
import pandas as pd
from tqdm import tqdm
import pprint
pp = pprint.PrettyPrinter(indent=4)
import indicoio
from indicoio.custom import Collection
from indicoio.custom import collections as check_status
import sklearn
from sklearn import metrics
from sklearn import linear_model
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
import matplotlib.pyplot as plt # for plotting results
%matplotlib inline
import seaborn # just for the colors
```
### Define your indico API key
If you don't have a (free) API key, you can [get one here](https://indico.io/pay-per-call). Your first 10,000 calls per month are free.
```
indicoio.config.api_key = "" # Add your API key here
```
### Convenience function for making batches of examples
```
def batcher(seq, stride = 4):
    """
    Generator that strides across the input sequence,
    yielding the elements between each stride.
    """
    for pos in range(0, len(seq), stride):
        yield seq[pos : pos + stride]
# for making subsets below
train_subset = (train_num / 25000.0)
test_subset = (test_num / 25000.0)
random.seed(seed)
np.random.seed(seed)
```
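For example, with a stride of 2 the generator groups a five-element list into three chunks; a self-contained sanity check (restating the function so it runs on its own):

```python
def batcher(seq, stride=4):
    """Yield consecutive chunks of `seq` of length `stride` (the last chunk may be shorter)."""
    for pos in range(0, len(seq), stride):
        yield seq[pos:pos + stride]

print(list(batcher([1, 2, 3, 4, 5], stride=2)))   # [[1, 2], [3, 4], [5]]
```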
### Check that the requested paths exist
```
# check that paths exist
for p in [pos_path, neg_path]:
abs_path = os.path.abspath(p)
if not os.path.exists(abs_path):
os.makedirs(abs_path)
print(abs_path)
for p in [output_path]:
abs_path = os.path.abspath(p)
if not os.path.exists(abs_path): # and make output path if necessary
os.makedirs(abs_path)
print(abs_path)
```
### Query indico API to make sure everything is plumbed up correctly
```
# pre_status = check_status()
# pp.pprint(pre_status)
```
### Read data into a list of dictionary objects
where each dictionary object will be a single example. This makes it easy to manipulate later using dataframes, for cross-validation, visualization, etc.
### This dataset has pre-defined train/test splits
so rather than sampling our own, we'll use the existing splits to enable fair comparison with other published results.
```
train_data = [] # these lists will contain a bunch of little dictionaries, one for each example
test_data = []
# Positive examples (train)
examples = glob.glob(os.path.join(pos_path, "*")) # find all the positive examples, and read them
i = 0
for ex in examples:
d = {}
with open(ex, 'rb') as f:
d['label'] = 'pos' # label as "pos"
t = f.read().lower() # these files are already ascii text, so just lowercase them
d['text'] = t
d['pred_label'] = None # placeholder for predicted label
d['prob_pos'] = None # placeholder for predicted probability of a positive label
train_data.append(d) # add example to the list of training data
i +=1
print("Read %d positive training examples, of %d available (%.2f%%)" % (i, len(examples), (100.0*i)/len(examples)))
# Negative examples (train)
examples = glob.glob(os.path.join(neg_path, "*")) # find all the negative examples and read them
i = 0
for ex in examples:
d = {}
with open(ex, 'rb') as f:
d['label'] = 'neg'
t = f.read().lower()
d['text'] = t
d['pred_label'] = None
d['prob_pos'] = None
train_data.append(d)
i +=1
print("Read %d negative training examples, of %d available (%.2f%%)" % (i, len(examples), (100.0*i)/len(examples)))
# Positive examples (test)
# NB: this reads from the training directory; for a genuinely held-out test set,
# point these globs at aclImdb/test/... here and in the negative block below
examples = glob.glob(os.path.join(pos_path, "*"))
i = 0
for ex in examples:
d = {}
with open(ex, 'rb') as f:
d['label'] = 'pos'
t = f.read().lower() # these files are already ascii text
d['text'] = t
d['pred_label'] = None
d['prob_pos'] = None
test_data.append(d)
i +=1
print("Read %d positive test examples, of %d available (%.2f%%)" % (i, len(examples), (100.0*i)/len(examples)))
# Negative examples (test)
examples = glob.glob(os.path.join(neg_path, "*"))
i = 0
for ex in examples:
d = {}
with open(ex, 'rb') as f:
d['label'] = 'neg'
t = f.read().lower() # these files are already ascii text
d['text'] = t
d['pred_label'] = None
d['prob_pos'] = None
test_data.append(d)
i +=1
print("Read %d negative test examples, of %d available (%.2f%%)" % (i, len(examples), (100.0*i)/len(examples)))
# Populate a dataframe, shuffle, and subset as required
df_train = pd.DataFrame(train_data)
df_train = df_train.sample(frac = train_subset) # shuffle and subset by random sampling
print("After resampling, down to %d training records" % len(df_train))
df_test = pd.DataFrame(test_data)
df_test = df_test.sample(frac = test_subset) # shuffle and subset by random sampling
print("After resampling, down to %d test records" % len(df_test))
```
### Quick sanity check on the data, is everything as expected?
```
df_train.head(10) # sanity check
df_train.tail(10)
df_test.tail(10)
```
# Strategy A: scikit-learn
Build a custom model from scratch using sklearn (ngrams -> TFIDF -> LR)
### Define the vectorizer, logistic regression model, and overall pipeline
```
vectorizer = sklearn.feature_extraction.text.TfidfVectorizer(
max_features = int(1e5), # max vocab size (pretty large)
max_df = 0.50,
sublinear_tf = True,
use_idf = True,
encoding = 'ascii',
decode_error = 'replace',
analyzer = 'word',
ngram_range = (1,3),
stop_words = 'english',
lowercase = True,
norm = 'l2',
smooth_idf = True,
)
lr = linear_model.SGDClassifier(
alpha = 1e-5,
average = 10,
class_weight = 'balanced',
epsilon = 0.15,
eta0 = 0.0,
fit_intercept = True,
l1_ratio = 0.15,
learning_rate = 'optimal',
loss = 'log',
n_iter = 5,
n_jobs = -1,
penalty = 'l2',
power_t = 0.5,
random_state = seed,
shuffle = True,
verbose = 0,
warm_start = False,
)
classifier = Pipeline([('vectorizer', vectorizer),
('logistic_regression', lr)
])
```
### Fit the classifier
```
_ = classifier.fit(df_train['text'], df_train['label'])
```
### Get predictions
```
pred_sk = classifier.predict(df_test['text'])
y_true_sk = [lab2bin[ex] for ex in df_test['label']]
proba_sk = classifier.predict_proba(df_test['text']) # also get probas
```
### Compute and plot ROC and AUC
```
cname = base_model_name + "_sklearn"
plt.figure(figsize=(8,8))
probas_sk = []
y_pred_labels_sk = []
y_pred_sk = []
# get predictions
for i, pred in enumerate(pred_sk):
proba_pos = proba_sk[i][1]
probas_sk.append(proba_pos)
if float(proba_pos) >= 0.50:
pred_label = "pos"
elif float(proba_pos) < 0.50:
pred_label = "neg"
else:
print("ERROR on example %d" % i) # if this happens, need to fix something
y_pred_labels_sk.append(pred_label)
y_pred_sk.append(lab2bin[pred_label])
# compute ROC
fpr, tpr, thresholds = metrics.roc_curve(y_true_sk, probas_sk)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 1, label = "ROC area = %0.3f" % (roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title("%d training examples" % len(df_train))
plt.legend(loc="lower right")
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_ROC" + ".png")))
plt.show()
acc = metrics.accuracy_score(y_true_sk, y_pred_sk)
print("Accuracy: %.4f" % (acc))
```
# Put examples data into batches, for APIs
### Prepare batches of training examples
```
examples = [list(ex) for ex in zip(df_train['text'], df_train['label'])]
batches = [b for b in batcher(examples, batchsize)] # stores in memory, but the texts are small so no problem
```
### Prepare batches of test examples
```
test_examples = [list(ex) for ex in zip(df_test['text'], df_test['label'])] # test data
test_batches = [b for b in batcher(test_examples, batchsize)]
```
# Strategy B. Pre-trained sentiment HQ
```
# get predictions from sentiment-HQ API
cname = base_model_name + "_hq"
predictions_hq = []
for batch in tqdm(test_batches):
labels = [x[1] for x in batch]
texts = [x[0] for x in batch]
results = indicoio.sentiment_hq(texts)
for i, result in enumerate(results):
r = {}
r['label'] = labels[i]
r['text'] = texts[i]
r['proba'] = result
predictions_hq.append(r)
cname = base_model_name + "_hq"
plt.figure(figsize=(8,8))
# y_true = [df_test['label']]
probas = []
y_true = []
y_pred_labels = []
y_pred = []
for i, pred in enumerate(predictions_hq):
y_true.append(lab2bin[pred['label']])
proba = pred['proba']
probas.append(proba)
if float(proba) >= 0.50:
pl = 'pos'
elif float(proba) < 0.50:
pl= 'neg'
else:
print("Error. Check proba value and y_true logic")
pred_label = pl # pick the most likely class by predicted proba
y_pred_labels.append(pred_label)
y_pred.append(lab2bin[pred_label])
fpr, tpr, thresholds = metrics.roc_curve(y_true, probas)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 1, label = "ROC area = %0.3f" % (roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# plt.title("ROC plot model: '%s'" % cname)
plt.title("%d training examples" % len(examples))
plt.legend(loc="lower right")
# plt.savefig(os.path.abspath(cname + "_hq_ROC" + ".png"))
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_ROC" + ".png")))
plt.show()
acc = metrics.accuracy_score(y_true, y_pred)
print("Accuracy: %.4f" % (acc))
```
# Strategy C. Custom model using general text features.
### Create an indico custom collection using general (word-level) text features, and upload data
```
cname = base_model_name
print("This model will be cached as an indico custom collection using the name: '%s'" % cname)
collection = Collection(cname)
try:
collection.clear() # delete any previous data in this collection
collection.info()
collection = Collection(cname)
except:
print(" Error, probably because a collection with the given name didn't exist. Continuing...")
print(" Submitting %d training examples in %d batches..." % (len(examples), len(batches)))
for batch in tqdm(batches):
try:
collection.add_data(batch)
except Exception as e:
print("Exception: '%s' for batch:" % e)
pp.pprint(batch)
print(" training model: '%s'" % cname)
collection.train()
collection.wait() # blocks until the model is trained
# get predictions from the trained API model
predictions = []
cname = base_model_name
collection = Collection(cname)
for batch in tqdm(test_batches):
labels = [x[1] for x in batch]
texts = [x[0] for x in batch]
results = collection.predict(texts)
for i, result in enumerate(results):
r = {}
r['indico_result'] = result
r['label'] = labels[i]
r['text'] = texts[i]
r['proba'] = result['pos']
predictions.append(r)
pp.pprint(predictions[0]) # sanity check
```
### Draw ROC plot and compute metrics for the custom collection
```
plt.figure(figsize=(8,8))
probas = []
y_true = []
y_pred_labels = []
y_pred = []
for i, pred in enumerate(predictions):
y_true.append(lab2bin[pred['label']])
probas.append(pred['indico_result']['pos'])
pred_label = max(pred['indico_result'].keys(), key = (lambda x: pred['indico_result'][x])) # pick the most likely class by predicted proba
y_pred_labels.append(pred_label)
y_pred.append(lab2bin[pred_label])
fpr, tpr, thresholds = metrics.roc_curve(y_true, probas)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 1, label = "ROC area = %0.3f" % (roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title("%d training examples" % len(examples))
plt.legend(loc="lower right")
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_cc_ROC" + ".png")))
plt.show()
acc = metrics.accuracy_score(y_true, y_pred)
print("Accuracy: %.4f" % (acc))
```
# Strategy D. Custom model using sentiment features from the pretrained deep neural network.
```
cname = base_model_name + "_domain"
print("This model will be cached as an indico custom collection using the name: '%s'" % cname)
collection = Collection(cname, domain = "sentiment")
try:
collection.clear() # delete any previous data in this collection
collection.info()
collection = Collection(cname, domain = "sentiment")
except:
print(" Error, probably because a collection with the given name didn't exist. Continuing...")
print(" Submitting %d training examples in %d batches..." % (len(examples), len(batches)))
for batch in tqdm(batches):
try:
collection.add_data(batch, domain = "sentiment")
except Exception as e:
print("Exception: '%s' for batch:" % e)
pp.pprint(batch)
print(" training model: '%s'" % cname)
collection.train()
collection.wait()
```
### Get predictions for custom collection with sentiment domain text features
```
# get predictions from trained API
predictions_domain = []
cname = base_model_name + "_domain"
collection = Collection(cname, domain = "sentiment")
for batch in tqdm(test_batches):
labels = [x[1] for x in batch]
texts = [x[0] for x in batch]
results = collection.predict(texts, domain = "sentiment")
# batchsize = len(batch)
for i, result in enumerate(results):
r = {}
r['indico_result'] = result
r['label'] = labels[i]
r['text'] = texts[i]
r['proba'] = result['pos']
predictions_domain.append(r)
```
### Compute metrics and plot
```
cname = base_model_name + "_domain"
plt.figure(figsize=(8,8))
# y_true = [df_test['label']]
probas = []
y_true = []
y_pred_labels = []
y_pred = []
for i, pred in enumerate(predictions_domain):
y_true.append(lab2bin[pred['label']])
probas.append(pred['indico_result']['pos'])
pred_label = max(pred['indico_result'].keys(), key = (lambda x: pred['indico_result'][x])) # pick the most likely class by predicted proba
y_pred_labels.append(pred_label)
y_pred.append(lab2bin[pred_label])
fpr, tpr, thresholds = metrics.roc_curve(y_true, probas)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 1, label = "ROC area = %0.3f" % (roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# plt.title("ROC plot model: '%s'" % cname)
plt.title("%d training examples" % len(examples))
plt.legend(loc="lower right")
# plt.savefig(os.path.abspath(cname + "_cc_domain_ROC" + ".png"))
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_ROC" + ".png")))
plt.show()
acc = metrics.accuracy_score(y_true, y_pred)
print("Accuracy: %.4f" % (acc))
```
# Sanity check on results for all 4 strategies
Compare the first prediction for each to make sure all the right stuff is there...
```
print("Strategy A. Custom sklearn model using n-grams, TFIDF, LR:")
print(y_true_sk[0])
print(pred_sk[0])
print(proba_sk[0])
print("")
print("Strategy B. Sentiment HQ:")
pp.pprint(predictions_hq[0])
print("Strategy C. Custom collection using general text features:")
pp.pprint(predictions[0])
print("")
print("Strategy D. Custom collection using sentiment features:")
pp.pprint(predictions_domain[0])
print("")
```
# Compute overall metrics and plot
```
plt.figure(figsize=(10,10))
cname = base_model_name
# compute and draw curve for sklearn LR built from scratch
probas_sk = []
y_pred_labels_sk = []
y_pred_sk = []
for i, pred in enumerate(pred_sk):
proba_pos = proba_sk[i][1]
probas_sk.append(proba_pos)
if float(proba_pos) >= 0.50:
pred_label = "pos"
elif float(proba_pos) < 0.50:
pred_label = "neg"
else:
print("ERROR on example %d" % i)
y_pred_labels_sk.append(pred_label)
y_pred_sk.append(lab2bin[pred_label])
fpr_sk, tpr_sk, thresholds_sk = metrics.roc_curve(y_true_sk, probas_sk)
roc_auc_sk = metrics.auc(fpr_sk, tpr_sk)
plt.plot(fpr_sk, tpr_sk, lw = 2, color = "#a5acaf", label = "A. Custom sklearn ngram LR model; area = %0.3f" % roc_auc_sk)
# compute and draw curve for sentimentHQ
probas_s = []
y_true_s = []
y_pred_labels_s = []
y_pred_s = []
for i, pred in enumerate(predictions_hq):
y_true_s.append(lab2bin[pred['label']])
probas_s.append(pred['proba'])
if float(pred['proba']) >= 0.50:
pred_label = "pos"
elif float(pred['proba']) < 0.50:
pred_label = "neg"
else:
print("ERROR on example %d" % i)
y_pred_labels_s.append(pred_label)
y_pred_s.append(lab2bin[pred_label])
fpr_s, tpr_s, thresholds_s = metrics.roc_curve(y_true_s, probas_s)
roc_auc_s = metrics.auc(fpr_s, tpr_s)
plt.plot(fpr_s, tpr_s, lw = 2, color = "#b05ecc", label = "B. Sentiment HQ model; area = %0.3f" % roc_auc_s)
# Compute and draw curve for the custom collection using general text features
probas = []
y_true = []
y_pred_labels = []
y_pred = []
lab2bin = {'pos': 1,
'neg': 0}
for i, pred in enumerate(predictions):
y_true.append(lab2bin[pred['label']])
probas.append(pred['indico_result']['pos'])
pred_label = max(pred['indico_result'].keys(), key = (lambda x: pred['indico_result'][x])) # pick the most likely class by predicted proba
y_pred_labels.append(pred_label)
y_pred.append(lab2bin[pred_label])
fpr, tpr, thresholds = metrics.roc_curve(y_true, probas)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 2, color = "#ffbb3b", label = "C. Custom IMDB model using general text features; area = %0.3f" % (roc_auc))
# now compute and draw curve for the CC using sentiment text features
probas_d = []
y_true_d = []
y_pred_labels_d = []
y_pred_d = []
for i, pred in enumerate(predictions_domain):
y_true_d.append(lab2bin[pred['label']])
probas_d.append(pred['indico_result']['pos'])
pred_label = max(pred['indico_result'].keys(), key = (lambda x: pred['indico_result'][x]))
y_pred_labels_d.append(pred_label)
y_pred_d.append(lab2bin[pred_label])
fpr_d, tpr_d, thresholds_d = metrics.roc_curve(y_true_d, probas_d)
roc_auc_d = metrics.auc(fpr_d, tpr_d)
plt.plot(fpr_d, tpr_d, lw = 2, color = "#43b9af", label = "D. Custom IMDB model using sentiment text features; area = %0.3f" % roc_auc_d)
# Add other stuff to figure
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# plt.title("ROC: %d training examples" % len(examples))
plt.title("%d training examples" % len(examples))
plt.legend(loc="lower right")
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_comparison_ROC" + ".png")), dpi = 300)
plt.show()
```
## Accuracy metrics
```
acc_sk = metrics.accuracy_score(y_true_sk, y_pred_sk)
print("A. Sklearn model from scratch (sklearn) : %.4f" % (acc_sk))
acc_s = metrics.accuracy_score(y_true_s, y_pred_s)
print("B. Sentiment HQ : %.4f" % (acc_s))
acc = metrics.accuracy_score(y_true, y_pred)
print("C. Custom model using general text features : %.4f" % (acc))
acc_d = metrics.accuracy_score(y_true_d, y_pred_d)
print("D. Custom model using sentiment text features : %.4f" % (acc_d))
# print("Using (%d, %d, %d, %d) examples" % (len(y_pred), len(y_pred_d), len(y_pred_s), len(y_pred_sk)))
```
```
#Python Basics
#Dictionaries in Python
#Keys and Elements
#Dictionary is defined by {"key1": element1, "key2": element2, "key3": element3}
#Examples
dic1={"First Name":"Behdad", "Surname": "Jam", "Age": 35, "Records": [11.32, 14.34, 13.003]}
print(dic1)
History={"band1":1943, "bandx": 1967, "bandy": 1984, "band4":1933}
print(History)
#we can add a new value to dictionary as follows
dic1={"First Name":"Behdad", "Surname": "Jam", "Age": 35, "Records": [11.32, 14.34, 13.003]}
dic1["key1"]="John"
dic1["John's Age"]=37
dic1["Birthday"]="23rd May"
print(dic1)
History={"band1":1943, "bandx": 1967, "bandy": 1984, "band4":1933}
History["bands"]=1988
History["bandee"]=1976
History["bandbb"]=1944
print(History)
#we can delete an entry from a dictionary using del as follows
dictionary1= {'First Name': 'Behdad', 'Surname': 'Jam', 'Age': 35, "John's Age": 37, 'Birthday': '23rd May'}
print(dictionary1)
del(dictionary1['Birthday'])
print(dictionary1)
del(dictionary1['Age'])
print(dictionary1)
del(dictionary1['Surname'])
print(dictionary1)
del(dictionary1['First Name'])
print(dictionary1)
His= {'band1': 1943, 'bandx': 1967, 'bandy': 1984, 'band4': 1933}
print(His)
del(His['band1'])
print(His)
del(His['band4'])
print(His)
del(His['bandx'])
print(His)
#We can check whether a key is in the dictionary using the "in" operator as follows
History={"band1":1943, "bandx": 1967, "bandy": 1984, "band4":1933}
d1= {'First Name': 'Behdad', 'Surname': 'Jam', 'Age': 35, "John's Age": 37, 'Birthday': '23rd May'}
d2={'bandx': 1967, 'bandy': 1984, 'band4': 1933}
d3={'bandy': 1984}
g="band1" in History
print("band1 is in History: ", bool(g))
g="band1" in History
print("band1 is in History: ", bool(g))
g="bandx" in History
print("bandx is in History: ", bool(g))
g="bandy" in History
print("bandy is in History: ", bool(g))
g="band1" in d2
print("band1 is in d2: ", bool(g))
g="band1" in d3
print("band1 is in d3: ", bool(g))
g="bandx" in d3
print("bandx is in d3: ", bool(g))
g="bandy" in d2
print("bandy is in d2: ", bool(g))
g="band1" in d2
print("band1 is in d2: ", g)
g="band1" in d3
print("band1 is in d3: ", g)
g="bandx" in d3
print("bandx is in d3: ", g)
g="bandy" in d2
print("bandy is in d2: ", g)
### You can list all keys in a dictionary using the keys() method as follows:
History={"band1":1943, "bandx": 1967, "bandy": 1984, "band4":1933}
d1= {'First Name': 'Behdad', 'Surname': 'Jam', 'Age': 35, "John's Age": 37, 'Birthday': '23rd May'}
d2={'bandx': 1967, 'bandy': 1984, 'band4': 1933}
d3={'bandy': 1984}
print (History.keys())
print(d1.keys())
print(d2.keys())
print(d3.keys())
### You can list all values in a dictionary using the values() method as follows:
w1={"band1":1943, "bandx": 1967, "bandy": 1984, "band4":1933}
w2= {'First Name': 'Behdad', 'Surname': 'Jam', 'Age': 35, "John's Age": 37, 'Birthday': '23rd May'}
w3={'bandx': 1967, 'bandy': 1984, 'band4': 1933}
w4={'bandy': 1984}
print (w1.values())
print(w2.values())
print(w3.values())
print(w4.values())
#You can look up the value for any key as follows:
w1={"band1":1943, "bandx": 1967, "bandy": 1984, "band4":1933}
w2= {'First Name': 'Behdad', 'Surname': 'Jam', 'Age': 35, "John's Age": 37, 'Birthday': '23rd May'}
w3={'bandx': 1967, 'bandy': 1984, 'band4': 1933}
w4={'bandy': 1984}
#the value for key band1 is retrieved as follows
vw1= w1["band1"]
print(vw1)
#the value for key bandx is retrieved as follows
vw1= w1["bandx"]
print(vw1)
#the value for key bandy is retrieved as follows
vw1= w1["bandy"]
print(vw1)
```
```
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.model_selection import RandomizedSearchCV
import pandas as pd
df = np.genfromtxt('D:/Github/eeg.fem/public/data/Musical/6080072/data_for_train/ALL_PCA_64.csv',delimiter=',')
x = df[:, :-1]
y = df[:, -1]
X_train, X_test, y_train, y_test = train_test_split(x, y, random_state=3, test_size=0.3)
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [1e-5, 1, 1e5],
'gamma': [1e-5, 'scale', 1e5]}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=9)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [1e-1, 1, 1e3, 1e5],
'gamma': [1e-6, 1e-5, 1e-4,'scale']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=16)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [1e-2, 1, 1e3],
'gamma': [1e-6, 1e-5, 1e-4,'scale']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=12)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [1e-2, 1, 1e1],
'gamma': [1e-6, 1e-5, 1e-4,'scale']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=12)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [1e-2, 1, 1e1],
'gamma': [1e-4, 1e-3,'scale']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=9)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [1e-1, 1, 1e1],
'gamma': [1e-4, 1e-3,'scale']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=9)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [5e-1, 1, 1e1],
'gamma': [1e-4, 1e-3,'scale']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=9)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [5e-1, 2.5e-1, 1],
'gamma': [1e-4, 1e-3,'scale']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=9)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [5e-2, 1.5e-1, 2.5e-1],
'gamma': [1e-4, 1e-3,'scale']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=9)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [1e-3, 1.5e-1, 2.5e-1],
'gamma': [1e-4, 1e-3,'scale']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=9)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr')
params = {'C': [1e-3, 2e-3, 3e-3, 4e-3, 5e-3, 6e-3, 7e-3, 8e-3, 9e-3],
'gamma': ['scale'],
'tol': [1e-5, 1e-3, 1]}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=27)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','param_tol','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr')
params = {'C': [3e-3, 3.5e-3],
'gamma': ['scale'],
'tol': [1e-5, 1e-3, 1]}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=6)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_tol','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr')
params = {'C': [3.1e-3, 3.2e-3, 3.3e-3, 3.4e-3],
'gamma': ['scale'],
'tol': [1e-3],
'decision_function_shape': ['ovo', 'ovr']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=8)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr')
params = {'C': [3.1e-3, 3.15e-3, 3.2e-3],
'gamma': ['scale'],
'tol': [1e-3],
'decision_function_shape': ['ovr']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=8)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr')
params = {'C': [3.15e-3, 3.16e-3, 3.17e-3, 3.18e-3, 3.19e-3, 3.2e-3],
'gamma': ['scale'],
'tol': [1e-3],
'decision_function_shape': ['ovr']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=6)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','mean_test_score']])
```
```
%load_ext autoreload
%autoreload 2
import pickle
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import cv2
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier, IsolationForest
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC, SVC
from skopt import BayesSearchCV
from skopt.space import Real, Categorical, Integer
from skopt.plots import plot_objective, plot_histogram
from sklearn.pipeline import Pipeline
from src.utils.feats import load_gei
from src.utils.results import df_results
import pandas as pd
# Kfold
n_splits = 3
cv = KFold(n_splits=n_splits, random_state=42, shuffle=True)
# classifier
model = RandomForestClassifier(n_estimators=150, max_depth=None, random_state=0, criterion='gini')
datapath = "../data/feats/database24_gei_480x640.pkl"
dim = (64, 48)
crop_person = True
X, y = load_gei(datapath, dim=dim, crop_person=crop_person)
# pipeline class is used as estimator to enable
# search over different model types
pipe = Pipeline([
('model', KNeighborsClassifier())
])
# a single categorical value of the 'model' parameter sets the model class
# We will get ConvergenceWarnings because the problem is not well-conditioned.
# But that's fine, this is just an example.
# from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
# from sklearn.ensemble import RandomForestClassifier, IsolationForest
# from sklearn.neighbors import KNeighborsClassifier
# from sklearn.svm import LinearSVC, SVC
# explicit dimension classes can be specified like this
ada_search = {
'model': Categorical([AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=10), random_state=0)]),
'model__n_estimators': Integer(300, 1100),
'model__learning_rate': Real(0.1, 0.5, prior='uniform'),
}
# gdb_search = {
# 'model': Categorical([GradientBoostingClassifier(max_depth=None, random_state=0)]),
# 'model__learning_rate': Real(1e-3, 0.5, prior='uniform'),
# 'model__n_estimators': Integer(1, 400),
# 'model__max_depth': Integer(1, 400),
# }
knn_search = {
'model': Categorical([KNeighborsClassifier()]),
'model__n_neighbors': Integer(1,6),
}
rf_search = {
'model': Categorical([RandomForestClassifier(max_depth=None, random_state=0, criterion='gini')]),
'model__n_estimators': Integer(250, 400),
}
svc_search = {
'model': Categorical([SVC()]),
'model__C': Real(1e-6, 1e+6, prior='log-uniform'),
'model__gamma': Real(1e-6, 1e+1, prior='log-uniform'),
'model__degree': Integer(1,8),
'model__kernel': Categorical(['linear', 'poly', 'rbf']),
}
opt = BayesSearchCV(
pipe,
# (parameter space, # of evaluations)
[(ada_search, 32), (knn_search, 8), (svc_search, 128), (rf_search, 128)],
cv=cv,
scoring='accuracy'
)
opt.fit(X, y)
df = df_results(opt)
df.to_csv('results_classifiers_bayes_search.csv')
df
# 5 best ADA models
df[df['model__learning_rate']>0].head(5)
# 5 best knn models
df[df['model__n_neighbors']>0].head(5)
# 5 best RF models
df[df['model__n_estimators']>0].head(5)
```
```
import numpy as np
import tensorflow as tf
from sklearn.utils import shuffle
import re
import time
import collections
import os
import itertools
from sklearn.model_selection import train_test_split
def build_dataset(words, n_words, atleast=1):
count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]]
counter = collections.Counter(words).most_common(n_words)
counter = [i for i in counter if i[1] >= atleast]
count.extend(counter)
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
index = dictionary.get(word, 0)
if index == 0:
unk_count += 1
data.append(index)
count[0][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
def add_start_end(string):
string = string.split()
strings = []
for s in string:
s = list(s)
s[0] = '<%s'%(s[0])
s[-1] = '%s>'%(s[-1])
strings.extend(s)
return strings
with open('lemmatization-en.txt','r') as fopen:
texts = fopen.read().split('\n')
after, before = [], []
for i in texts[:10000]:
splitted = i.encode('ascii', 'ignore').decode("utf-8").lower().split('\t')
if len(splitted) < 2:
continue
after.append(add_start_end(splitted[0]))
before.append(add_start_end(splitted[1]))
print(len(after),len(before))
concat_from = list(itertools.chain(*before))
vocabulary_size_from = len(list(set(concat_from)))
data_from, count_from, dictionary_from, rev_dictionary_from = build_dataset(concat_from, vocabulary_size_from)
print('vocab from size: %d'%(vocabulary_size_from))
print('Most common words', count_from[4:10])
print('Sample data', data_from[:10], [rev_dictionary_from[i] for i in data_from[:10]])
print('filtered vocab size:',len(dictionary_from))
print("% of vocab used: {}%".format(round(len(dictionary_from)/vocabulary_size_from,4)*100))
concat_to = list(itertools.chain(*after))
vocabulary_size_to = len(list(set(concat_to)))
data_to, count_to, dictionary_to, rev_dictionary_to = build_dataset(concat_to, vocabulary_size_to)
print('vocab from size: %d'%(vocabulary_size_to))
print('Most common words', count_to[4:10])
print('Sample data', data_to[:10], [rev_dictionary_to[i] for i in data_to[:10]])
print('filtered vocab size:',len(dictionary_to))
print("% of vocab used: {}%".format(round(len(dictionary_to)/vocabulary_size_to,4)*100))
GO = dictionary_from['GO']
PAD = dictionary_from['PAD']
EOS = dictionary_from['EOS']
UNK = dictionary_from['UNK']
for i in range(len(after)):
after[i].append('EOS')
before[:10], after[:10]
class Stemmer:
def __init__(self, size_layer, num_layers, embedded_size,
from_dict_size, to_dict_size, learning_rate,
dropout = 0.5, beam_width = 15):
def lstm_cell(size, reuse=False):
return tf.nn.rnn_cell.GRUCell(size, reuse=reuse)
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None, None])
self.X_seq_len = tf.count_nonzero(self.X, 1, dtype=tf.int32)
self.Y_seq_len = tf.count_nonzero(self.Y, 1, dtype=tf.int32)
# encoder
encoder_embeddings = tf.Variable(tf.random_uniform([from_dict_size, embedded_size], -1, 1))
encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X)
batch_size = tf.shape(self.X)[0]
for n in range(num_layers):
(out_fw, out_bw), (state_fw, state_bw) = tf.nn.bidirectional_dynamic_rnn(
cell_fw = lstm_cell(size_layer // 2),
cell_bw = lstm_cell(size_layer // 2),
inputs = encoder_embedded,
sequence_length = self.X_seq_len,
dtype = tf.float32,
scope = 'bidirectional_rnn_%d'%(n))
encoder_embedded = tf.concat((out_fw, out_bw), 2)
bi_state = tf.concat((state_fw, state_bw), -1)
self.encoder_state = tuple([bi_state] * num_layers)
self.encoder_state = tuple(self.encoder_state[-1] for _ in range(num_layers))
main = tf.strided_slice(self.Y, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1)
# decoder
decoder_embeddings = tf.Variable(tf.random_uniform([to_dict_size, embedded_size], -1, 1))
decoder_cells = tf.nn.rnn_cell.MultiRNNCell([lstm_cell(size_layer) for _ in range(num_layers)])
dense_layer = tf.layers.Dense(to_dict_size)
training_helper = tf.contrib.seq2seq.TrainingHelper(
inputs = tf.nn.embedding_lookup(decoder_embeddings, decoder_input),
sequence_length = self.Y_seq_len,
time_major = False)
training_decoder = tf.contrib.seq2seq.BasicDecoder(
cell = decoder_cells,
helper = training_helper,
initial_state = self.encoder_state,
output_layer = dense_layer)
training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder = training_decoder,
impute_finished = True,
maximum_iterations = tf.reduce_max(self.Y_seq_len))
predicting_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
embedding = decoder_embeddings,
start_tokens = tf.tile(tf.constant([GO], dtype=tf.int32), [batch_size]),
end_token = EOS)
predicting_decoder = tf.contrib.seq2seq.BasicDecoder(
cell = decoder_cells,
helper = predicting_helper,
initial_state = self.encoder_state,
output_layer = dense_layer)
predicting_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder = predicting_decoder,
impute_finished = True,
maximum_iterations = 2 * tf.reduce_max(self.X_seq_len))
self.training_logits = training_decoder_output.rnn_output
self.predicting_ids = predicting_decoder_output.sample_id
masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.training_logits,
targets = self.Y,
weights = masks)
self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(self.cost)
y_t = tf.argmax(self.training_logits,axis=2)
y_t = tf.cast(y_t, tf.int32)
self.prediction = tf.boolean_mask(y_t, masks)
mask_label = tf.boolean_mask(self.Y, masks)
correct_pred = tf.equal(self.prediction, mask_label)
correct_index = tf.cast(correct_pred, tf.float32)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
size_layer = 128
num_layers = 2
embedded_size = 64
learning_rate = 1e-3
batch_size = 32
epoch = 15
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Stemmer(size_layer, num_layers, embedded_size, len(dictionary_from),
len(dictionary_to), learning_rate)
sess.run(tf.global_variables_initializer())
def str_idx(corpus, dic):
X = []
for i in corpus:
ints = []
for k in i:
try:
ints.append(dic[k])
except Exception as e:
ints.append(UNK)
X.append(ints)
return X
X = str_idx(before, dictionary_from)
Y = str_idx(after, dictionary_to)
train_X, test_X, train_Y, test_Y = train_test_split(X, Y, test_size = 0.2)
def pad_sentence_batch(sentence_batch, pad_int):
padded_seqs = []
seq_lens = []
max_sentence_len = max([len(sentence) for sentence in sentence_batch])
for sentence in sentence_batch:
padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
seq_lens.append(len(sentence))
return padded_seqs, seq_lens
from tqdm import tqdm
from sklearn.utils import shuffle
import time
for EPOCH in range(epoch):
lasttime = time.time()
total_loss, total_accuracy, total_loss_test, total_accuracy_test = 0, 0, 0, 0
train_X, train_Y = shuffle(train_X, train_Y)
test_X, test_Y = shuffle(test_X, test_Y)
pbar = tqdm(range(0, len(train_X), batch_size), desc='train minibatch loop')
for k in pbar:
index = min(k+batch_size,len(train_X))
batch_x, seq_x = pad_sentence_batch(train_X[k: k+batch_size], PAD)
batch_y, seq_y = pad_sentence_batch(train_Y[k: k+batch_size], PAD)
acc, loss, _ = sess.run([model.accuracy, model.cost, model.optimizer],
feed_dict={model.X:batch_x,
model.Y:batch_y})
total_loss += loss
total_accuracy += acc
pbar.set_postfix(cost=loss, accuracy = acc)
pbar = tqdm(range(0, len(test_X), batch_size), desc='test minibatch loop')
for k in pbar:
index = min(k+batch_size,len(test_X))
batch_x, seq_x = pad_sentence_batch(test_X[k: k+batch_size], PAD)
batch_y, seq_y = pad_sentence_batch(test_Y[k: k+batch_size], PAD)
acc, loss = sess.run([model.accuracy, model.cost],
feed_dict={model.X:batch_x,
model.Y:batch_y})
total_loss_test += loss
total_accuracy_test += acc
pbar.set_postfix(cost=loss, accuracy = acc)
total_loss /= (len(train_X) / batch_size)
total_accuracy /= (len(train_X) / batch_size)
total_loss_test /= (len(test_X) / batch_size)
total_accuracy_test /= (len(test_X) / batch_size)
print('epoch: %d, avg loss: %f, avg accuracy: %f'%(EPOCH, total_loss, total_accuracy))
print('epoch: %d, avg loss test: %f, avg accuracy test: %f'%(EPOCH, total_loss_test, total_accuracy_test))
predicted = sess.run(model.predicting_ids,
feed_dict={model.X:batch_x,
model.Y:batch_y})
for i in range(len(batch_x)):
print('row %d'%(i+1))
print('BEFORE:',''.join([rev_dictionary_from[n] for n in batch_x[i] if n not in [0,1,2,3]]))
print('REAL AFTER:',''.join([rev_dictionary_to[n] for n in batch_y[i] if n not in[0,1,2,3]]))
print('PREDICTED AFTER:',''.join([rev_dictionary_to[n] for n in predicted[i] if n not in[0,1,2,3]]),'\n')
```
<a href="https://colab.research.google.com/github/zaidalyafeai/Notebooks/blob/master/tf_Face_SSD.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Introduction
In this task we will detect faces in the wild using a single shot detector (SSD) model. The full SSD model is fairly involved, but we will build a simple implementation that works for the current task. SSD is an object-detection model that evaluates the whole image in a single pass, without the region proposals introduced in R-CNN, which makes SSD much faster. The basic architecture uses a CNN to extract features, and at the end we predict volumes of shape $[w, h, c + 5]$, where $(w,h)$ is the spatial size of the prediction volume and $c + 5$ covers the $c$ class scores plus 1 for the background class plus the 4 bounding-box offsets. Hence the size of the prediction module for one scale is $w \times h \times (c + 5)$. We predict these volumes at different scales and use IoU matching to infer the spatial location of the predicted boxes.

# Download The Dataset
We use the dataset from this [project](http://vis-www.cs.umass.edu/fddb/). Each image is annotated with an ellipse around every face it contains. This data set contains the annotations for 5171 faces in a set of 2845 images taken from the [Faces in the Wild data set](http://tamaraberg.com/faceDataset/index.html). Here is a sample 
```
!wget http://tamaraberg.com/faceDataset/originalPics.tar.gz
!wget http://vis-www.cs.umass.edu/fddb/FDDB-folds.tgz
!tar -xzvf originalPics.tar.gz >> tmp.txt
!tar -xzvf FDDB-folds.tgz >> tmp.txt
```
# Extract the Bounding Boxes
For each image we convert the ellipsoid annotations into rectangular regions that frame the faces in the image. Before that we need to explain the concept of anchor boxes. An **anchor box** located in a certain region of an image is the box responsible for predicting objects in that region. Given a set of ground-truth boxes, we match each one to its corresponding anchor box using the intersection-over-union (IoU) metric.

In the above example we see the anchor boxes with the associated true labels. The anchor box with the maximum IoU overlap with a ground-truth box is considered responsible for predicting that box. For simplicity we construct volumes of anchor boxes at only one scale.
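A small worked example of the IoU metric (a standalone sketch using the inclusive pixel convention, where a box `[x1, y1, x2, y2]` has width `x2 - x1 + 1`):

```python
def iou_example(boxA, boxB):
    # corners of the intersection rectangle
    xA, yA = max(boxA[0], boxB[0]), max(boxA[1], boxB[1])
    xB, yB = min(boxA[2], boxB[2]), min(boxA[3], boxB[3])
    inter = max(0, xB - xA + 1) * max(0, yB - yA + 1)
    # areas of each box and of their union
    areaA = (boxA[2] - boxA[0] + 1) * (boxA[3] - boxA[1] + 1)
    areaB = (boxB[2] - boxB[0] + 1) * (boxB[3] - boxB[1] + 1)
    return inter / (areaA + areaB - inter)

# two overlapping 3x3 boxes sharing a 2x2 corner: 4 / (9 + 9 - 4)
print(iou_example([0, 0, 2, 2], [1, 1, 3, 3]))  # 0.2857...
```

An IoU of 1 means the boxes coincide exactly; 0 means they do not overlap at all.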
```
from PIL import Image
import pickle
import os
import numpy as np
import cv2
import glob
```
Use a $4 \times 4$ grid of anchors
```
ANCHOR_SIZE = 4
def iou(boxA, boxB):
#evaluate the intersection points
xA = np.maximum(boxA[0], boxB[0])
yA = np.maximum(boxA[1], boxB[1])
xB = np.minimum(boxA[2], boxB[2])
yB = np.minimum(boxA[3], boxB[3])
# compute the area of intersection rectangle
interArea = np.maximum(0, xB - xA + 1) * np.maximum(0, yB - yA + 1)
# compute the area of both the prediction and ground-truth
# rectangles
boxAArea = (boxA[2] - boxA[0] + 1) * (boxA[3] - boxA[1] + 1)
boxBArea = (boxB[2] - boxB[0] + 1) * (boxB[3] - boxB[1] + 1)
#compute the union
unionArea = (boxAArea + boxBArea - interArea)
# return the intersection over union value
return interArea / unionArea
#for a given box, find the matching anchor box
def get_anchor(box):
max_iou = 0.0
matching_anchor = [0, 0, 0, 0]
matching_index = (0, 0)
i = 0
j = 0
w , h = (1/ANCHOR_SIZE, 1/ANCHOR_SIZE)
for x in np.linspace(0, 1, ANCHOR_SIZE +1)[:-1]:
j = 0
for y in np.linspace(0, 1, ANCHOR_SIZE +1)[:-1]:
xmin = x
ymin = y
xmax = (x + w)
ymax = (y + h)
anchor_box = [xmin, ymin, xmax, ymax]
curr_iou = iou(box, anchor_box)
#choose the location with the highest overlap
if curr_iou > max_iou:
matching_anchor = anchor_box
max_iou = curr_iou
matching_index = (i, j)
j += 1
i+= 1
return matching_anchor, matching_index
```
For each image we output a volume of boxes, mapping each true label to the corresponding location in a $(4, 4, 5)$ tensor. Note that here we have only two labels, 1 for face and 0 for background, so we can use binary cross-entropy.
```
def create_volume(boxes):
output = np.zeros((ANCHOR_SIZE, ANCHOR_SIZE, 5))
for box in boxes:
if max(box) == 0:
continue
_, (i, j) = get_anchor(box)
output[i,j, :] = [1] + box
return output
#read all the files for annotation
annot_files = glob.glob('FDDB-folds/*ellipseList.txt')
data = {}
for file in annot_files:
with open(file, 'r') as f:
rows = f.readlines()
j = len(rows)
i = 0
while(i < j):
#get the file name
file_name = rows[i].replace('\n', '')+'.jpg'
#get the number of boxes
num_boxes = int(rows[i+1])
boxes = []
img = Image.open(file_name)
w, h = img.size
#get all the bounding boxes
for k in range(1, num_boxes+1):
box = rows[i+1+k]
box = box.split(' ')[0:5]
box = [float(x) for x in box]
#convert ellipse to a box
xmin = int(box[3]- box[1])
ymin = int(box[4]- box[0])
xmax = int(xmin + box[1]*2)
ymax = int(ymin + box[0]*2)
boxes.append([xmin/w, ymin/h, xmax/w, ymax/h])
#convert the boxes to a volume of fixed size
data[file_name] = create_volume(boxes)
i = i + num_boxes+2
```
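To build intuition for the grid assignment above, here is a simplified, hypothetical helper that assigns a normalized box to the cell containing its centre (the notebook itself matches by maximum IoU instead, which is more robust for boxes spanning several cells):

```python
ANCHOR_SIZE = 4

def center_cell(box, grid=ANCHOR_SIZE):
    """Simplified assignment: index (i, j) of the grid cell containing
    the centre of a normalized box [xmin, ymin, xmax, ymax]."""
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    # clamp to the last cell for boxes touching the right/bottom edge
    return min(int(cx * grid), grid - 1), min(int(cy * grid), grid - 1)

# a box around the image centre lands in cell (1, 1) of the 4x4 grid
print(center_cell([0.3, 0.3, 0.6, 0.6]))  # (1, 1)
```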
# Imports
We use TensorFlow with eager execution, which evaluates tensors immediately without first instantiating a graph.
```
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Input
from tensorflow.keras.layers import Flatten, Dropout, BatchNormalization, Concatenate, Reshape, GlobalAveragePooling2D, Reshape
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input
import cv2
import matplotlib.pyplot as plt
import os
import numpy as np
from PIL import Image
from random import shuffle
import random
import tensorflow.contrib.eager as tfe
tf.enable_eager_execution()
```
#Create A Dataset
Here we use `tf.data` to build the input pipelines used for training and evaluation
```
def parse_training(filename, label):
image = tf.image.decode_jpeg(tf.read_file(filename), channels = 3)
image = tf.image.convert_image_dtype(image, tf.float32)
image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])
label = tf.cast(label, tf.float32)
return image, label
def parse_testing(filename, label):
image = tf.image.decode_jpeg(tf.read_file(filename), channels = 3)
image = tf.image.convert_image_dtype(image, tf.float32)
image = tf.image.resize_images(image, [IMG_SIZE, IMG_SIZE])
label = tf.cast(label, tf.float32)
return image, label
def create_dataset(ff, ll, training = True):
dataset = tf.data.Dataset.from_tensor_slices((ff, ll)).shuffle(len(ff) - 1)
if training:
dataset = dataset.map(parse_training, num_parallel_calls = 4)
else:
dataset = dataset.map(parse_testing, num_parallel_calls = 4)
dataset = dataset.batch(BATCH_SIZE)
return dataset
```
# Data Split
We hold out 10% of the data as a test set to be used for validation.
```
files = list(data.keys())
labels = list(data.values())
N = len(files)
M = int(0.9 * N)
#split files for images
train_files = files[:M]
test_files = files[M:]
#split labels
train_labels = labels[:M]
test_labels = labels[M:]
print('training', len(train_files))
print('testing' , len(test_files))
IMG_SIZE = 128
BATCH_SIZE = 32
ANCHOR_SIZE = 4  #spatial size of the output grid (128 / 2**5), used below
train_dataset = create_dataset(train_files, train_labels)
test_dataset = create_dataset(test_files, test_labels, training = False)
```
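Note that the split above relies on the insertion order of the annotation dictionary rather than a random shuffle. Shuffling the file/label pairs together before splitting is a common precaution; a small helper for that (not in the original, with an assumed fixed seed for reproducibility) could look like this:

```python
import random

def shuffled_split(files, labels, train_frac=0.9, seed=42):
    """Shuffle file/label pairs together, then split into train/test lists."""
    pairs = list(zip(files, labels))
    random.Random(seed).shuffle(pairs)
    m = int(train_frac * len(pairs))
    train, test = pairs[:m], pairs[m:]
    return ([f for f, _ in train], [l for _, l in train],
            [f for f, _ in test], [l for _, l in test])
```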
# Visualization
```
def plot_annot(img, boxes):
img = img.numpy()
boxes = boxes.numpy()
for i in range(0, ANCHOR_SIZE):
for j in range(0, ANCHOR_SIZE):
box = boxes[i, j, 1:] * IMG_SIZE
label = boxes[i, j, 0]
if np.max(box) > 0:
img = cv2.rectangle(img, (int(box[0]), int(box[1])), (int(box[2]), int(box[3])), (1, 0, 0), 1)
plt.axis('off')
plt.imshow(img)
plt.show()
for x, y in train_dataset:
plot_annot(x[0], y[0])
break
```
# Create a model
We use a ResNet-style model with multiple blocks (the skip connection here concatenates feature maps rather than adding them) and, at the end, a convolutional volume of size (4, 4, 5) as the prediction.
```
def conv_block(fs, x, activation = 'relu'):
conv = Conv2D(fs, (3, 3), padding = 'same', activation = activation)(x)
bnrm = BatchNormalization()(conv)
drop = Dropout(0.5)(bnrm)
return drop
def residual_block(fs, x):
y = conv_block(fs, x)
y = conv_block(fs, y)
y = conv_block(fs, y)
return Concatenate(axis = -1)([x, y])
inp = Input(shape = (IMG_SIZE, IMG_SIZE, 3))
block1 = residual_block(16, inp)
pool1 = MaxPooling2D(pool_size = (2, 2))(block1)
block2 = residual_block(32, pool1)
pool2 = MaxPooling2D(pool_size = (2, 2))(block2)
block3 = residual_block(64, pool2)
pool3 = MaxPooling2D(pool_size = (2, 2))(block3)
block4 = residual_block(128, pool3)
pool4 = MaxPooling2D(pool_size = (2, 2))(block4)
block5 = residual_block(256, pool4)
pool5 = MaxPooling2D(pool_size = (2, 2))(block5)
out = Conv2D(5, (3, 3), padding = 'same', activation = 'sigmoid')(pool5)
#create a model with one input and two outputs
model = tf.keras.models.Model(inputs = inp, outputs = out)
model.summary()
```
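With `IMG_SIZE = 128` and five 2x2 max-pooling layers, the spatial size of the output volume is 128 / 2^5 = 4, which is why the prediction grid is (4, 4, 5). A trivial helper makes the arithmetic explicit:

```python
def output_grid_size(img_size, num_pools, pool=2):
    """Spatial size after repeated pooling (assumes exact integer division)."""
    size = img_size
    for _ in range(num_pools):
        size //= pool
    return size
```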
# Loss and gradient
```
def loss(pred, y):
#extract the boxes that have values (i.e discard boxes that are zeros)
    mask = tf.cast(y[..., 0], tf.bool)  #tf.boolean_mask expects a boolean mask
boxA = tf.boolean_mask(y, mask)
boxB = tf.boolean_mask(pred, mask)
prediction_error = tf.keras.losses.binary_crossentropy(y[...,0], pred[...,0])
detection_error = tf.losses.absolute_difference(boxA[...,1:], boxB[...,1:])
return tf.reduce_mean(prediction_error) + 10*detection_error
def grad(model, x, y):
#record the gradient
with tf.GradientTape() as tape:
pred = model(x)
value = loss(pred, y)
#return the gradient of the loss function with respect to the model variables
return tape.gradient(value, model.trainable_variables)
optimizer = tf.train.AdamOptimizer()
```
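As a sanity check, the same masked loss can be written in plain NumPy. This is a sketch assuming the same structure as above: binary cross-entropy on the objectness channel, plus a 10x-weighted mean absolute error on the box coordinates, computed only in cells that actually contain a box:

```python
import numpy as np

def loss_np(pred, y, box_weight=10.0, eps=1e-7):
    """NumPy equivalent of the masked detection loss above."""
    p = np.clip(pred[..., 0], eps, 1 - eps)
    t = y[..., 0]
    # binary cross-entropy on the objectness channel
    bce = -np.mean(t * np.log(p) + (1 - t) * np.log(1 - p))
    # L1 error on box coordinates, only where a ground-truth box exists
    mask = y[..., 0] > 0
    l1 = np.mean(np.abs(y[mask][:, 1:] - pred[mask][:, 1:])) if mask.any() else 0.0
    return bce + box_weight * l1
```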
# Training and evaluation
```
epochs = 20
#initialize the history to record the metrics
train_loss_history = tfe.metrics.Mean('train_loss')
test_loss_history = tfe.metrics.Mean('test_loss')
best_loss = 1.0
for i in range(1, epochs + 1):
for x, y in train_dataset:
pred = model(x)
grads = grad(model, x, y)
        #update the parameters of the model
optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step = tf.train.get_or_create_global_step())
#record the metrics of the current batch
loss_value = loss(pred, y)
        #calculate the metrics of the current batch
train_loss_history(loss_value)
#loop over the test dataset
for x, y in test_dataset:
pred = model(x)
        #calculate the metrics of the current batch
loss_value = loss(pred, y)
#record the values of the metrics
test_loss_history(loss_value)
#print out the results
print("epoch: [{0:d}/{1:d}], Train: [loss: {2:0.4f}], Test: [loss: {3:0.4f}]".
format(i, epochs, train_loss_history.result(),
test_loss_history.result()))
current_loss = test_loss_history.result().numpy()
#save the best model
if current_loss < best_loss:
best_loss = current_loss
print('saving best model with loss ', current_loss)
model.save('keras.h5')
#clear the history after each epoch
train_loss_history.init_variables()
test_loss_history.init_variables()
from tensorflow.keras.models import load_model
best_model = load_model('keras.h5')
```
# Visualization
```
#visualize the predicted bounding box
def plot_pred(img_id):
font = cv2.FONT_HERSHEY_SIMPLEX
raw = cv2.imread(img_id)[:,:,::-1]
h, w = (512, 512)
img = cv2.resize(raw, (IMG_SIZE, IMG_SIZE)).astype('float32')
img = np.expand_dims(img, 0)/255.
boxes = best_model(img).numpy()[0]
raw = cv2.resize(raw, (w, h))
for i in range(0, ANCHOR_SIZE):
for j in range(0, ANCHOR_SIZE):
box = boxes[i, j, 1:] * w
lbl = round(boxes[i, j, 0], 2)
if lbl > 0.5:
color = [random.randint(0, 255) for _ in range(0, 3)]
raw = cv2.rectangle(raw, (int(box[0]), int(box[1])), (int(box[2]), int(box[3])), color, 3)
raw = cv2.rectangle(raw, (int(box[0]), int(box[1])-30), (int(box[0])+70, int(box[1])), color, cv2.FILLED)
raw = cv2.putText(raw, str(lbl), (int(box[0]), int(box[1])), font, 1, (255, 255, 255), 2)
plt.axis('off')
plt.imshow(raw)
plt.show()
img_id = np.random.choice(test_files)
plot_pred(img_id)
!wget https://pmctvline2.files.wordpress.com/2018/08/friends-revival-jennifer-aniston.jpg -O test.jpg
plot_pred('test.jpg')
```
# Working with Python: functions and modules
## Session 4: Using third party libraries
- [Matplotlib](#Matplotlib)
- [Exercise 4.1](#Exercise-4.1)
- [BioPython](#BioPython)
- [Working with sequences](#Working-with-sequences)
- [Connecting with biological databases](#Connecting-with-biological-databases)
- [Exercise 4.2](#Exercise-4.2)
## Matplotlib
[matplotlib](http://matplotlib.org/) is probably the single most used Python package for graphics. It provides both a very quick way to visualize data from Python and publication-quality figures in many formats.
matplotlib.pyplot is a collection of command style functions that make matplotlib work like MATLAB. Each pyplot function makes some change to a figure: e.g., creates a figure, creates a plotting area in a figure, plots some lines in a plotting area, decorates the plot with labels, etc.
Let's start with a very simple plot.
```
import matplotlib.pyplot as mpyplot
mpyplot.plot([1,2,3,4])
mpyplot.ylabel('some numbers')
mpyplot.show()
```
`plot()` is a versatile command, and will take an arbitrary number of arguments. For example, to plot x versus y, you can issue the command:
```
mpyplot.plot([1,2,3,4], [1,4,9,16])
```
For every x, y pair of arguments, there is an **optional third argument** which is the format string that indicates the color and line type of the plot. The letters and symbols of the format string are from MATLAB, and you concatenate a color string with a line style string. The default format string is `b-`, which is a solid blue line. For example, to plot the above with red circles, you would choose `ro`.
```
import matplotlib.pyplot as mpyplot
mpyplot.plot([1,2,3,4], [1,4,9,16], 'ro')
mpyplot.axis([0, 6, 0, 20])
mpyplot.show()
```
`matplotlib` has a few methods in the **`pyplot` module** that make creating common types of plots faster and more convenient because they automatically create a Figure and an Axes object. The most widely used are:
- [mpyplot.bar](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.bar) – creates a bar chart.
- [mpyplot.boxplot](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.boxplot) – makes a box and whisker plot.
- [mpyplot.hist](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.hist) – makes a histogram.
- [mpyplot.plot](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot) – creates a line plot.
- [mpyplot.scatter](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.scatter) – makes a scatter plot.
Calling any of these methods will automatically set up `Figure` and `Axes` objects and draw the plot. Each of these methods has different parameters that can be passed in to modify the resulting plot.
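For example, `mpyplot.hist` can be used as follows (the data here is synthetic, generated just for illustration; the `Agg` backend is selected so the example also runs without a display):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend; safe on headless machines
import matplotlib.pyplot as mpyplot
import numpy as np

values = np.random.default_rng(0).normal(size=1000)
counts, bins, patches = mpyplot.hist(values, bins=20)
mpyplot.xlabel('value')
mpyplot.ylabel('frequency')
mpyplot.savefig('hist.png')
```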
The [Pyplot tutorial](http://matplotlib.org/users/pyplot_tutorial.html) is where these simple examples come from. You can learn more from it in your own time if you wish.
Let's now plot the GC content along the sequence, which we calculated during the previous session while solving Exercises 3.3 and 3.4.
```
seq = 'ATGGTGCATCTGACTCCTGAGGAGAAGTCTGCCGTTACTGCCCTGTGGGGCAAGGTG'
gc = [40.0, 60.0, 80.0, 60.0, 40.0, 60.0, 40.0, 40.0, 40.0, 60.0,
40.0, 60.0, 60.0, 60.0, 60.0, 60.0, 60.0, 60.0, 60.0, 60.0,
60.0, 40.0, 40.0, 40.0, 40.0, 40.0, 60.0, 60.0, 80.0, 80.0,
80.0, 60.0, 40.0, 40.0, 20.0, 40.0, 60.0, 80.0, 80.0, 80.0,
80.0, 60.0, 60.0, 60.0, 80.0, 80.0, 100.0, 80.0, 60.0, 60.0,
60.0, 40.0, 60.0]
window_ids = range(len(gc))
import matplotlib.pyplot as mpyplot
mpyplot.plot(window_ids, gc, '--' )
mpyplot.xlabel('5 bases window id along the sequence')
mpyplot.ylabel('%GC')
mpyplot.title('GC plot for sequence\n' + seq)
mpyplot.show()
```
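For reference, the `gc` values above can be recomputed directly from the sequence. The following is a sketch of the sliding-window calculation from the previous session, assuming a window size of 5 and a step of 1 (consistent with the 53 values listed for the 57-base sequence):

```python
def gc_content(seq, window=5):
    """Percentage of G/C bases in each window of the given size, step 1."""
    values = []
    for start in range(len(seq) - window + 1):
        chunk = seq[start:start + window]
        gc_count = chunk.count('G') + chunk.count('C')
        values.append(100.0 * gc_count / window)
    return values
```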
## Exercise 4.1
Re-use the GapMinder world dataset to plot, in Jupyter using Matplotlib, life expectancy against GDP per capita for 1957 and 2007 as a scatter plot. Add a title to your graph as well as a legend.
## BioPython
The goal of Biopython is to make it as easy as possible to use Python for bioinformatics by creating high-quality, reusable modules and classes. Biopython features include parsers for various Bioinformatics file formats (BLAST, Clustalw, FASTA, Genbank,...), access to online services (NCBI, Expasy,...), interfaces to common and not-so-common programs (Clustalw, DSSP, MSMS...), a standard sequence class, various clustering modules, a KD tree data structure etc. and documentation as well as a tutorial: http://biopython.org/DIST/docs/tutorial/Tutorial.html.
## Working with sequences
We can create a sequence by defining a `Seq` object from a string. `Bio.Seq()` takes a string as input and converts it into a `Seq` object. We can print sequences, individual residues and lengths, and use other functions to get summary statistics.
```
# Creating sequence
from Bio.Seq import Seq
my_seq = Seq("AGTACACTGGT")
print(my_seq)
print(my_seq[10])
print(my_seq[1:5])
print(len(my_seq))
print(my_seq.count("A"))
```
We can use functions from `Bio.SeqUtils` to get an idea about a sequence
```
# Calculate the molecular weight
from Bio.SeqUtils import GC, molecular_weight
print(GC(my_seq))
print(molecular_weight(my_seq))
```
One-letter-code protein sequences can be converted into three-letter codes using the `seq3` utility
```
from Bio.SeqUtils import seq3
print(seq3(my_seq))
```
An alphabet defines how a string is to be treated as a sequence object. The `Bio.Alphabet` module defines the alphabets available in Biopython, and `Bio.Alphabet.IUPAC` provides basic definitions for DNA, RNA and proteins.
```
from Bio.Alphabet import IUPAC
my_dna = Seq("AGTACATGACTGGTTTAG", IUPAC.unambiguous_dna)
print(my_dna)
print(my_dna.alphabet)
my_dna.complement()
my_dna.reverse_complement()
my_dna.translate()
```
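Under the hood, `complement()` is just a base-by-base mapping, and `reverse_complement()` additionally reverses the strand. A plain-Python equivalent (a sketch without Biopython, handling unambiguous bases only) would be:

```python
COMPLEMENT = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}

def complement(dna):
    """Complement of a DNA string (unambiguous A/C/G/T bases only)."""
    return ''.join(COMPLEMENT[base] for base in dna)

def reverse_complement(dna):
    """Reverse complement: complement the strand, then reverse it."""
    return complement(dna)[::-1]
```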
### Parsing sequence file format: FASTA files
Sequence files can be parsed and read the same way we read other files.
```
with open( "data/glpa.fa" ) as f:
print(f.read())
```
Biopython provides specific functions to allow parsing/reading sequence files.
```
# Reading FASTA files
from Bio import SeqIO
with open("data/glpa.fa") as f:
for protein in SeqIO.parse(f, 'fasta'):
print(protein.id)
print(protein.seq)
```
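If Biopython is not available, a minimal FASTA parser for simple files can also be written by hand. This is a sketch that handles header lines and multi-line sequences, not a replacement for `SeqIO`:

```python
def parse_fasta(lines):
    """Yield (id, sequence) pairs from FASTA-formatted lines.
    The id is the first whitespace-separated token after '>'."""
    seq_id, chunks = None, []
    for line in lines:
        line = line.strip()
        if line.startswith('>'):
            if seq_id is not None:
                yield seq_id, ''.join(chunks)
            seq_id, chunks = line[1:].split()[0], []
        elif line:
            chunks.append(line)
    if seq_id is not None:
        yield seq_id, ''.join(chunks)
```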
Sequence objects can be written into files using file handles with the function `SeqIO.write()`. We need to provide the name of the output sequence file and the sequence file format.
```
# Writing FASTA files
from Bio import SeqIO
from Bio.SeqRecord import SeqRecord
from Bio.Seq import Seq
from Bio.Alphabet import IUPAC
sequence = 'MYGKIIFVLLLSEIVSISASSTTGVAMHTSTSSSVTKSYISSQTNDTHKRDTYAATPRAHEVSEISVRTVYPPEEETGERVQLAHHFSEPEITLIIFG'
seq = Seq(sequence, IUPAC.protein)
protein = [SeqRecord(seq, id="THEID", description='a description'),]
with open( "biopython.fa", "w") as f:
SeqIO.write(protein, f, 'fasta')
with open( "biopython.fa" ) as f:
print(f.read())
```
## Connecting with biological databases
Sequences can be searched and downloaded from public databases.
```
# Read FASTA file from NCBI GenBank
from Bio import Entrez
Entrez.email = 'A.N.Other@example.com' # Always tell NCBI who you are
handle = Entrez.efetch(db="nucleotide", id="71066805", rettype="gb")
seq_record = SeqIO.read(handle, "gb")
handle.close()
print(seq_record.id, 'with', len(seq_record.features), 'features')
print(seq_record.seq)
print(seq_record.format("fasta"))
# Read SWISSPROT record
from Bio import ExPASy
handle = ExPASy.get_sprot_raw('HBB_HUMAN')
prot_record = SeqIO.read(handle, "swiss")
handle.close()
print(prot_record.description)
print(prot_record.seq)
```
## Exercise 4.2
- Retrieve a FASTA file named `data/sample.fa` using BioPython and answer the following questions:
- How many sequences are in the file?
- What are the IDs and the lengths of the longest and the shortest sequences?
- Select sequences longer than 500bp. What is the average length of these sequences?
- Calculate and print the percentage of GC in each of the sequences.
- Write the newly created sequences into a FASTA file named `long_sequences.fa`
## Congratulations! You reached the end!