# Hypothesis Testing
```
set.seed(37)
```
## Student's t-test
The `Student's t-test` compares the means of two samples to see if they are different. Here is a `two-sided` Student's t-test.
```
x <- rnorm(1000, mean=0, sd=1)
y <- rnorm(1000, mean=1, sd=1)
r <- t.test(x, y, alternative='two.sided')
print(r)
```
Here is a directional Student's t-test to see if the mean of `x` is greater than the mean of `y`.
```
x <- rnorm(1000, mean=0, sd=1)
y <- rnorm(1000, mean=1, sd=1)
r <- t.test(x, y, alternative='greater')
print(r)
```
Here is a directional Student's t-test to see if the mean of `x` is less than the mean of `y`.
```
x <- rnorm(1000, mean=0, sd=1)
y <- rnorm(1000, mean=1, sd=1)
r <- t.test(x, y, alternative='less')
print(r)
```
We may also perform a `one-sample` Student's t-test.
```
x <- rnorm(1000, mean=0, sd=1)
r <- t.test(x, mu=5)
print(r)
```
If your data is in long format, you may use a formula to perform a Student's t-test.
```
data <- data.frame(
    score = c(90, 89, 70, 99, 100, 77, 80, 67, 70),
    gender = c(rep('girl', 5), rep('boy', 4))
)
r <- t.test(score ~ gender, data=data)
print(r)
```
## Wilcoxon U-Test
The `Wilcoxon U-Test` (also known as the Mann-Whitney U test) is a non-parametric test used to compare two samples. The function `wilcox.test` behaves the same way as the `t.test` function.
```
x <- rnorm(1000, mean=0, sd=1)
y <- rnorm(1000, mean=0.5, sd=1)
r <- wilcox.test(x, y)
print(r)
```
## Correlation
We may also compute the correlation between two variables and test its significance.
```
x <- seq(1, 1000)
y <- x * 2 + rnorm(1000, mean=5, sd=5)
c <- cor(x, y)
print(c)
```
We compute the covariance with the `cov` function.
```
x <- seq(1, 1000)
y <- x * 2 + rnorm(1000, mean=5, sd=5)
c <- cov(x, y)
print(c)
```
We compute the significance with `cor.test`.
```
x <- seq(1, 1000)
y <- x * 2 + rnorm(1000, mean=5, sd=5)
r <- cor.test(x, y)
print(r)
```
## Chi-squared test
A `Chi-squared` test is used to test for association in contingency tables.
```
df <- data.frame(
    rural = c(10, 15, 12),
    urban = c(20, 30, 25),
    row.names = c('DC', 'MD', 'VA')
)
r <- chisq.test(df)
print(r)
```
A `goodness of fit` test using the `Chi-squared test` is performed as follows.
```
df <- data.frame(
    rural = c(10, 15, 12),
    urban = c(20, 30, 25),
    row.names = c('DC', 'MD', 'VA')
)
r <- chisq.test(df$rural, p=df$urban, rescale.p=TRUE)
print(r)
```
## Analysis of variance
### One-way analysis of variance
A one-way `analysis of variance` (`ANOVA`) may be conducted with the `aov` function as follows.
```
library(tidyr)
df <- data.frame(
    city = c('A', 'B', 'C', 'D', 'E'),
    urban = c(20, 25, 22, 24, 21),
    rural = c(10, 15, 12, 14, 11),
    suburb = c(15, 18, 19, 20, 17)
)
df <- df %>% pivot_longer(-city, names_to='location', values_to='expense')
r <- aov(expense ~ location, data=df)
print(r)
print('-- summary below --')
print(summary(r))
```
#### Post-hoc test
We apply `Tukey's Honestly Significant Difference` (`HSD`) test to see which pairs differ.
```
t <- TukeyHSD(r)
print(t)
```
#### Obtaining the effects
```
e <- model.tables(r, type='effects')
print(e)
```
#### Obtaining the means
```
m <- model.tables(r, type='means')
print(m)
```
#### Visualizing the means
```
options(repr.plot.width=4, repr.plot.height=4)
boxplot(expense ~ location, data=df)
```
#### Visualizing the differences
```
options(repr.plot.width=5, repr.plot.height=3)
op = par(mar = c(5, 8, 4, 2))
plot(t, cex=0.2, las=1)
par(op)
```
### Two-way ANOVA
```
suppressMessages({
    library('dplyr')
})
N = 5
a <- 5 + 20 * rnorm(N, mean=20, sd=1) + 4 * rnorm(N, mean=4, sd=1) # urban-high
b <- 5 + 18 * rnorm(N, mean=18, sd=1) + 2 * rnorm(N, mean=2, sd=1) # urban-low
c <- 5 + 10 * rnorm(N, mean=10, sd=1) + 4 * rnorm(N, mean=4, sd=1) # suburban-high
d <- 5 + 8 * rnorm(N, mean=8, sd=1) + 2 * rnorm(N, mean=2, sd=1) # suburban-low
e <- 5 + 5 * rnorm(N, mean=5, sd=1) + 4 * rnorm(N, mean=4, sd=1) # rural-high
f <- 5 + 3 * rnorm(N, mean=3, sd=1) + 2 * rnorm(N, mean=2, sd=1) # rural-low
df <- data.frame(
    expense = c(a, b, c, d, e, f),
    location = c(rep('urban', 2*N), rep('suburban', 2*N), rep('rural', 2*N)),
    income = c(rep('high', N), rep('low', N), rep('high', N), rep('low', N), rep('high', N), rep('low', N)),
    stringsAsFactors = TRUE
)
r <- aov(expense ~ location * income, data=df)
print(r)
print('-- summary below --')
print(summary(r))
```
#### Two-Way ANOVA post-hoc
```
t <- TukeyHSD(r)
print(t)
```
#### Two-Way ANOVA effects
```
e <- model.tables(r, type='effects')
print(e)
```
#### Two-Way ANOVA means
```
m <- model.tables(r, type='means')
print(m)
```
#### Two-Way ANOVA means visualization
```
options(repr.plot.width=5, repr.plot.height=5)
op = par(mar = c(8, 4, 4, 2))
boxplot(expense ~ location * income, data = df, cex.axis = 0.9, las=2, xlab='')
par(op)
```
#### Two-Way ANOVA differences visualization
```
options(repr.plot.width=5, repr.plot.height=3)
op = par(mar = c(5, 14, 4, 2))
plot(t, cex=0.2, las=1)
par(op)
```
#### Two-Way ANOVA interaction plot
```
options(repr.plot.width=5, repr.plot.height=5)
attach(df)
interaction.plot(location, income, expense)
detach(df)
```
<a href="https://colab.research.google.com/github/michalwilk123/nlp-transformer-app-pl/blob/master/ProjektSi_2021.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# **Linear transformation of the sentiment of a finite text**
## Project: Artificial Intelligence 2021
---
<br/>
#### Michał Wilk 180333
#### Radosław Baziak 180197
# __Table of Contents__
---
<br/>
#### __Theory:__
* [Task description](#TrescZadania)
* [Natural Language Processing models](#ModeleNaturalLanguageProcessing)
* [The Transformer model](#ModelTransformer)
* [Libraries / tools used](#WykorzystaneBiblioteki)
<br/>
#### __Practice:__
* [__Application__](#Aplikacja)
<br/>
#### __Summary:__
* [Evaluation of the application](#OcenaAplikacji)
* [Evaluation of the project](#OcenaProjektu)
* [The future of transformer models](#PrzyszloscModeliTransformer)
* [Bibliography](#Bibliografia)
# Theory
## <a name="TrescZadania"></a>Task description
We address the following topics:
* text recognition
* determining the polarity of a text (sentiment analysis)
* generating text based on its context
<br/>
The result of our work is a method (an application) that can change the sentiment of a text (i.e. its polarity) in a linear fashion. In this way we test whether the latest publicly available language-processing models can solve what appears to be a very complex problem.
## <a name="ModeleNaturalLanguageProcessing"></a>__Natural Language Processing models__
#### **About the field of Natural Language Processing**
---
<br/>
Natural language processing is a very dynamically developing field. Over the last few decades we have witnessed significant growth in this branch of science.
A typical problem in this area of artificial intelligence is teaching a machine to interpret text.
Practical applications of this branch of artificial intelligence include [[12]](https://arxiv.org/abs/1908.09203):
* text translation
* generating text similar to another text
* chatbots
* assessing the truthfulness of a text
* determining the sentiment of a text
* word suggestions in software development tools (IDEs), e.g. IntelliSense
These models often operate on similar principles, and their functioning can be reduced to a few stages:
1) __Tokenization__ - converting text from its textual form, e.g. "Informatyka", into a discrete form, e.g. a byte sequence: _0101_
2) __Encoding__ - encoding the sequence of tokens into a discrete object; for example, a tokenized sentence can be turned into a graph or a dependency tree over the text
3) __Decoding__ - once we have the sentence structure, we can (usually!) ask the structure to add our word to it and thereby generate text conditioned on the previous inputs
4) __Translating the sentence object back into actual text__ - in this step we produce the text, choosing the most probable word.
There are many approaches to implementing models of this type, which we try to outline below.
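The four stages above can be sketched with a deliberately tiny toy pipeline (our own illustration, not any real NLP library; the vocabulary and "generation" step are stand-ins):

```python
# Toy illustration of the four stages: tokenize -> encode -> decode -> detokenize.
vocab = {}

def tokenize(text):
    # stage 1: split the raw string into word tokens
    return text.lower().split()

def encode(tokens):
    # stage 2: map each token to a discrete integer id
    return [vocab.setdefault(t, len(vocab)) for t in tokens]

def decode(ids, next_id):
    # stage 3: extend the discrete representation with a "generated" token
    return ids + [next_id]

def detokenize(ids):
    # stage 4: translate ids back into actual text
    inverse = {i: t for t, i in vocab.items()}
    return " ".join(inverse[i] for i in ids)

ids = encode(tokenize("the movie was great"))
print(detokenize(decode(ids, ids[-1])))  # the movie was great great
```

Real models differ in every stage (subword tokenizers, learned vector encodings, probabilistic decoding), but the overall data flow is the same.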
#### __Examples of language-processing models:__
---
<br/>
__Hidden Markov models__ - a statistical Markov model containing a Markov chain in which part of the data is hidden. The prediction of the next state is based solely on the current state.
A model built this way "remembers" one word back. Adding further words to remember dramatically increases the memory complexity of the algorithm.
___n___ - _number of tokens in the vocabulary_
___m___ - _the model's memory, i.e. how many words back are remembered_
___MC___ - _memory complexity_
$$ MC(n, m) = n^m $$
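The exponential blow-up described by the $n^m$ formula above is easy to see numerically (a quick sketch of our own; the vocabulary size is an arbitrary example):

```python
# Number of distinct m-word histories a Markov-style model must account for,
# given a vocabulary of n tokens: n ** m.
def markov_table_size(n, m):
    return n ** m

# Even a modest 10k-word vocabulary explodes quickly as memory m grows:
print(markov_table_size(10_000, 1))  # 10000
print(markov_table_size(10_000, 2))  # 100000000
print(markov_table_size(10_000, 3))  # 1000000000000
```

This is why practical phone-keyboard predictors keep the remembered history very short.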
> Despite its advanced age, this model is still widely used. Compared to the other models listed here, it generates new text very efficiently. One of its applications is next-word suggestion on some mobile phones.
__Long short-term memory__ - a recurrent data-analysis model which, when determining the current state, has access to information about previous states. An LSTM is built from gates through which the outputs of earlier steps and the new input pass. Their purpose is to decide whether a given piece of information should be forgotten, updated, or kept. Only information filtered in this way is passed on for further analysis. This design assigns greater weight to data that appears more frequently and quickly forgets irregular variation, so the system copes very well with noisy data.
<br/>
__Transformer__ - a model based on the attention mechanism. Unlike the previous models, transformers allow non-sequential analysis of the input data, since they attend to the overall context. While producing output, the model keeps what it has generated as contextual data, to achieve better results when generating the rest of the output.
#### **Evolution of how words are represented in computer memory**
As mentioned above, earlier work often represented words as atomic units.
Nowadays [[2]](https://arxiv.org/abs/1301.3781), information about words is often represented as a vector of features learned by another model.
This lets the computer determine relationships between words. For example, the computer can determine that the word "Paryż" (Paris) has more in common with the word "Marsylia" (Marseille) than with the word "ryż" (rice), despite the shared spelling.
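Similarity between such word vectors is typically measured with the cosine of the angle between them. A minimal sketch (the 3-dimensional vectors below are made-up numbers purely for illustration; real embeddings such as word2vec have hundreds of dimensions):

```python
import math

# Toy embeddings: two related city names and one unrelated word.
vectors = {
    "paris":     [0.9, 0.8, 0.1],
    "marseille": [0.8, 0.9, 0.2],
    "rice":      [0.1, 0.2, 0.9],
}

def cosine(a, b):
    # cosine similarity: dot product divided by the product of the norms
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# "paris" is far closer to "marseille" than to "rice" in this vector space
assert cosine(vectors["paris"], vectors["marseille"]) > cosine(vectors["paris"], vectors["rice"])
```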
## <a name="ModelTransformer"></a>__The Transformer model__
#### __Architecture of the transformer__
A transformer consists of two main parts, an encoder and a decoder; by default there are six copies of each. The encoders take the input data and pass it through a self-attention layer, which attends to the entire input and, based on it, determines the weights of the words and their relationships to one another. This information is then passed to a feed-forward neural network, which predicts the output from the supplied data. The decoder also contains these two layers, but between them it has an encoder-decoder attention layer, which helps the decoder focus on the more important parts of the input. More on how the transformer works can be found in [[1]](https://arxiv.org/abs/1706.03762).
#### Evolution of __Transformer__ models
---
__Models grouped by task:__
* __Text translation:__
    * T5
* __Chatbots:__
    * LaMDA
    * Meena
* __Text summarization:__
    * XSUM
    * BigBird
* __General-purpose models:__
    * GPT
    * BERT
### __The BERT model__
__BERT__ - (Bidirectional Encoder Representations from Transformers) a model created by Google that introduced a different way of analyzing text. Unlike its predecessors, which analyzed text in a single direction (left->right or right->left), BERT analyzes text both from left to right and from right to left.
This is an important difference compared to other models.
BERT is considerably better at understanding the context of an entire text.
For example, the paper [[3]](https://arxiv.org/abs/1810.04805) gives the problem of question answering as an example. The text of a question can often refer to words that appear later in the sentence, so this 'bidirectional' understanding can give much better results.
A drawback of models of this type is that BERT can only operate on inputs of a predefined length, because the model is trained on a fixed span of words (our models, for example, were trained with a span of 512 tokens). For text that does not use all of the token slots, special [PAD] tokens are appended at the end to mark unused word positions.
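The padding mechanism can be sketched in a few lines (our own simplified illustration, not the HuggingFace tokenizer API; real checkpoints typically use a span of 512):

```python
MAX_LEN = 8  # toy value; BERT-style models commonly use 512

def pad(tokens, max_len=MAX_LEN, pad_token="[PAD]"):
    # fill unused slots with the special [PAD] token; inputs longer than
    # the model's fixed span cannot be represented at all
    if len(tokens) > max_len:
        raise ValueError("input longer than the model's fixed span")
    return tokens + [pad_token] * (max_len - len(tokens))

print(pad(["i", "loved", "this", "movie"]))
# ['i', 'loved', 'this', 'movie', '[PAD]', '[PAD]', '[PAD]', '[PAD]']
```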
### __The GPT-2 model__
**GPT** [[11]](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
[[12]](https://arxiv.org/abs/1908.09203) - (Generative Pre-trained Transformer) a model created by OpenAI and described at length in the cited papers. Unlike BERT, GPT uses the transformer's decoders rather than its encoders. Like many other traditional models, GPT emits its output sequentially, one word at a time. The model works auto-regressively: each generated word is appended to the input data used to generate the following words. During the decoder's work, attention is paid only to the words lying to the left of the token, not to its entire surroundings. The remaining aspects of GPT are similar to BERT.
<br>
The GPT-2 model has 1.5 billion parameters and was pre-trained on 40 GB of text data from web pages. The main tasks of GPT-2 were predicting the next word in a sentence and zero-shot learning, i.e. expecting the model to complete a sentence based on an instruction; e.g. for the input "To jest polskie zdanie. ENGLISH: ___ " the model was expected to understand the task on its own and translate the text.
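The auto-regressive loop is easy to sketch (a toy of our own; `next_token` is a stand-in for a real decoder and simply repeats the last word, purely for illustration):

```python
def next_token(context):
    # stand-in for a real decoder's next-token prediction
    return context[-1]

def generate(prompt_tokens, steps):
    context = list(prompt_tokens)
    for _ in range(steps):
        # auto-regression: each generated token is fed back into the input
        context.append(next_token(context))
    return context

print(generate(["the", "end"], 3))  # ['the', 'end', 'end', 'end', 'end']
```

The key point is the feedback edge: the model's own output becomes part of the context for the next step.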
### __The LaMDA model__
**LaMDA** [[7]](https://arxiv.org/abs/2102.08602v1) - (Language Model for Dialogue Applications) created by Google, one of the newer (18 May 2021) models using the transformer architecture, working on a principle similar to BERT or GPT-2. It does, however, introduce a very large change in the self-attention layer. Since this model is intended for long conversations, which can often change topic, a solution is needed for the problem of recognizing the input and the context of sentences. To optimize the model's work, a new 'lambda' layer was introduced, replacing the self-attention layer. The input data is passed to this layer, which analyzes a given word together with its context and, on that basis, formulates a linear lambda function that it applies to every input. This takes into account the content of the word, its context, and its position. The solution works well both for analyzing complete texts and for texts containing empty mask slots, with low memory usage even for long texts. Unlike other models, and because of its intended use, LaMDA was trained almost exclusively on a dataset consisting of dialogues.
### **Modifications of the BERT model**
BERT very quickly came to dominate the NLP market thanks to its efficiency and the quality of its results. Nevertheless, models have appeared that improve on it in various respects, such as data-processing speed or accuracy of results.
**RoBERTa** [[8]](https://arxiv.org/abs/1907.11692) - a model presented by Facebook, a version of BERT with a different approach to training. RoBERTa drops Next Sentence Prediction and instead introduces dynamic mask tokens that change during training. In addition, ten times more data was used for training. Training takes 4-5 times longer despite the much greater compute used, but in exchange the results are 2-20% more accurate than those of the original BERT.
**ALBERT** [[9]](https://arxiv.org/abs/1909.11942) - a model released by Google with as much as 89% fewer parameters than the standard BERT, while sacrificing only a negligible amount of accuracy. ALBERT uses two techniques to optimize its work: factorization of the embeddings and parameter sharing across layers. Because of how the data is processed, the vectors representing words had to have appropriately matched dimensions; factorization made it possible to build smaller vectors and scale the result. Sharing parameters across layers causes the largest drop in the model's accuracy, but allows the number of parameters in the model to be reduced almost eightfold.
**DistilBERT** [[5]](https://arxiv.org/abs/1910.01108) - a model presented by HuggingFace aimed at minimizing BERT's size and improving its efficiency. It has only half of the original layers, since it takes a different approach known as distillation [[4]](https://arxiv.org/abs/1503.02531v1), which aims to approximate BERT's output. The general idea is that once one neural network (BERT) has been trained, its outputs can be roughly predicted by a different, smaller network. One of DistilBERT's optimization objectives is the Kullback-Leibler divergence, which measures the discrepancy between two probability distributions. Thanks to its reduced complexity, this model trains four times faster while retaining 95% of the accuracy of the original BERT.
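The Kullback-Leibler divergence used as a distillation objective can be computed directly (a small sketch of our own; the three-token distributions are invented for illustration):

```python
import math

# KL(p || q): how much the student's distribution q diverges from the
# teacher's distribution p over the same candidate tokens.
def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [0.7, 0.2, 0.1]          # teacher's softmax over 3 candidate tokens
student_good = [0.6, 0.25, 0.15]   # student that roughly matches the teacher
student_bad = [0.1, 0.2, 0.7]      # student that disagrees with the teacher

# the closer the student matches the teacher, the smaller the divergence
assert kl_divergence(teacher, student_good) < kl_divergence(teacher, student_bad)
assert kl_divergence(teacher, teacher) == 0.0
```

Minimizing this quantity pushes the student's predicted distribution toward the teacher's.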
---
### **Curiosities**
**MegatronBERT** [[10]](https://arxiv.org/abs/1909.08053) - a highly impractical model designed by NVIDIA. It has 3.9 billion parameters and was trained in parallel on 512 GPUs, sustaining 15.1 petaFLOPs. Using the model requires a supercomputer, while the accuracy of the results improves by only a few percentage points.
**HerBERT**
[[13]](https://arxiv.org/abs/2005.00630) - BERT pre-trained on Polish text
## <a name="WykorzystaneBiblioteki"></a>__Libraries / tools used__
For our application we used the very popular
[huggingface](https://huggingface.co) framework,
which provided us with the ready-made distilbert-uncased base model, ready-made
functions for preparing data for training, and functions performing the
_fine-tune_ operations described in the theoretical part of the project.
The output of our model consists of tensors of logit values, so we had to
post-process the result ourselves. To do this we used the popular
[pytorch](https://pytorch.org/) framework.
Our application also required other libraries. To classify the parts of
speech in the input text we used [nltk](https://www.nltk.org/), a very
popular library for NLP tasks.
The interactive widgets were generated with the [ipywidgets](https://ipywidgets.readthedocs.io/en/latest/) library.
# Practice
For this project we created 2 BERT models subjected to fine-tuning.
[Link to the code that builds the models](https://colab.research.google.com/drive/18EYy2SXvyCEE5WhUC6Y5lM0Rks1wMEp9?usp=sharing)
## <a name="Aplikacja"></a>**Application**
---
<br/>
### **Extended description:**
<br/>
#### **How this work came about:**
From the start we took a rather unusual approach to carrying out our project. We did not know how to tackle the problem, so we wandered around and tested many different models with varying results.
In the end it turned out that we achieved the best results with the BERT model presented here.
#### **Problems encountered**
* **the models are very large** - to solve this we published the models on the Hugging Face platform, so we do not have to worry that our models, about 700 MB in total, will be lost.
* **the models are very slow** - we solved this by choosing a model geared somewhat more toward efficiency (DistilBERT) than toward quality of results. Additionally, we cache the results, so the user only waits once for a given result.
* **the models attend to irrelevant parts of the sentence** - as described in another research paper [[6]](https://arxiv.org/abs/1906.04341), BERT most often attends to parts of the sentence that are irrelevant from our point of view (mainly the start and end tokens, but a lot of attention also goes to pronouns and conjunctions). We therefore defined our own heuristic criterion for selecting the words to change. Using another library we look for words that are, for example, adjectives, and mask them. This is of course not an optimal method, but we feel the end result is satisfactory enough.
* **interactivity** - this problem turned out to be very easy to solve. We discovered the ipywidgets library, which lets us create basic interactive widgets in the Jupyter environment.
```
!pip install ipywidgets transformers torch nltk &> /dev/null
!jupyter nbextension enable --py widgetsnbextension &> /dev/null
import nltk
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
```
### <a name="DeklaracjaModeli"></a>Model declarations
```
from transformers import (AutoTokenizer,
                          AutoModelForMaskedLM,
                          AutoModelForSequenceClassification)
import torch

device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

# model fine-tuned on negatively polarized data
negative_model = AutoModelForMaskedLM.from_pretrained(
    "michalwilk123/distilbert-imdb-negative"
).to(device)

# model fine-tuned on positively polarized data
positive_model = AutoModelForMaskedLM.from_pretrained(
    "michalwilk123/distilbert-imdb-positive"
).to(device)

tokenizer = AutoTokenizer.from_pretrained(
    "michalwilk123/distilbert-imdb-negative",
    use_fast=True
)

# model that classifies the sentiment of a sentence
classif_model = AutoModelForSequenceClassification.from_pretrained(
    "textattack/bert-base-uncased-imdb"
).to(device)
```
### Specifying which parts of speech interest us
```
import nltk
from enum import Enum
from dataclasses import dataclass

# words to change, ordered by their 'importance'
# the strings below are known as POS tags
token_order = [
    "JJ",    # adjective
    "JJR",   # adjective, comparative
    "JJS",   # adjective, superlative
    "RB",    # adverb (this covers the word 'not')
    "RBS",   # adverb, superlative
    "RBR",   # adverb, comparative (intensity, e.g. "more")
    "PDT",   # predeterminer
    "RP",    # particle
    "VB",    # verb, base form
    "VBD",   # verb, past tense
    "VBG",   # verb, present participle
    "VBN",   # verb, past participle
    "VBP",   # verb, non-3rd person singular present
    "NNS",   # noun, plural
    "NNPS",  # proper noun, plural
    "VBZ",   # verb, 3rd person singular present
    "CC",    # coordinating conjunction
    "PRP",   # personal pronoun
    "IN",    # preposition
]

# how we modify the sentence
TRANSFORM_TYPE = Enum("TRANSFORM_TYPE", ["POSITIVE", "NEGATIVE"])

def negate_transform(en:Enum):
    return TRANSFORM_TYPE.POSITIVE if en is TRANSFORM_TYPE.NEGATIVE else \
        TRANSFORM_TYPE.NEGATIVE

# representation of a single word
@dataclass
class WordStructure:
    idx:int
    word:str
    pos_token:str
    transform_type:Enum

# return the words that will be replaced, sorted by their importance
def get_relevant_tokens(sentence:str, t_type:Enum):
    # split the sentence into words and assign POS tags
    tokens = nltk.pos_tag(tokenizer.tokenize(sentence))
    # filter out the words that carry no weight here
    tokens = list(filter(
        lambda tt: tt.pos_token in token_order,
        [WordStructure(i, *el, t_type) for i, el in enumerate(tokens)]
    ))
    # internal word structure:
    # (index, word, pos_token, transform_type)
    # sort by token importance
    tokens.sort(key=lambda el: token_order.index(el.pos_token))
    return tokens
```
### **Using the fine-tuned models in the application**
---
<br/>
**We change successive words in the text using a heuristic of our own design.**
* First we list which words in the sentence will matter in deciding its sentiment (_see the token_order list above_). We change successive words in the sentence in that order.
<br/>
* We take the 100 best candidates for filling the significant word slots. The model selecting these candidates can often be wrong, however. As we observed, BERT pays more attention to making the chosen word fit the context as well as possible than to picking the most appropriate word from its dataset. There are also other reasons for this behavior, such as an insufficient amount of training data or incorrect input data. To reduce this problem we add a third model: a **sentiment classifier**.
<br/>
* The third model checks whether the inserted word changes the sentence's sentiment value in the right direction.
    * If it does, we return and display the sentence.
    * If it does not, we test further candidates to see whether they give a better result.
Of course, the method could still be improved in many different ways, but for now we consider the current result acceptable.
```
from torch.nn import functional as F

def get_current_polarity(sentence):
    # determine the current sentiment of the sentence
    outputs_n = classif_model(**tokenizer(sentence, return_tensors="pt").to(device))
    sof = round(F.softmax(outputs_n['logits'], dim=-1).tolist()[0][1], 4)
    print("Polarity: ", sof)
    return sof

def get_model_predictions(word_list:list, t_type:Enum) -> None:
    """
    Receives a list of words and masks. The sentence is modified by filling
    in its masked words according to the given transformation type.
    """
    # prepare the data to be served to our model
    inputs = tokenizer(tokenizer.convert_tokens_to_string(word_list), return_tensors="pt").to(device)
    # find the indices of the mask tokens in the sentence
    masked_idxs = torch.where(inputs["input_ids"][0] == tokenizer.mask_token_id)[0].to(device)
    ori_list = list(
        map(
            lambda el: el[0],
            filter(
                lambda x: x[1] == tokenizer.mask_token,
                enumerate(word_list)
            )
        )
    )
    # pick the appropriate model
    if t_type is TRANSFORM_TYPE.POSITIVE:
        outputs = positive_model(**inputs)
    elif t_type is TRANSFORM_TYPE.NEGATIVE:
        outputs = negative_model(**inputs)
    else: assert False
    # using the softmax function, determine which word has the highest
    # probability of appearing in the empty slot
    for pred_idx, ori_idx in zip(masked_idxs, ori_list):
        end_layer = outputs.logits[0, pred_idx, :]
        assert word_list[ori_idx] == tokenizer.mask_token, "masking not masked word!!!"
        # build a list of 100 candidate words and assign them POS tags
        candidates = nltk.pos_tag(
            tokenizer.convert_ids_to_tokens(torch.topk(end_layer, 100, dim=-1)[1])
        )
        current_polarity = None
        for cand_word in filter(lambda el: el[1] in token_order, candidates):
            word_list[ori_idx] = cand_word[0]
            new_polar = get_current_polarity(
                tokenizer.convert_tokens_to_string(word_list)
            )
            if current_polarity is None:
                current_polarity = new_polar
                continue
            # stop trying candidates once the score has improved;
            # if the score is already so good that we are unlikely to
            # improve it, settle for the current result
            if t_type is TRANSFORM_TYPE.POSITIVE:
                if new_polar > current_polarity or new_polar > 0.99:
                    break
            elif t_type is TRANSFORM_TYPE.NEGATIVE:
                if new_polar < current_polarity or new_polar < 0.01:
                    break

test_sentence = (
    "I absolutely [MASK] this movie! I do think it is [MASK]."
    " Watching this film was a [MASK] experience for me and my "
    "friends on this rainy afternoon. The acting was also very [MASK] made."
)
test_sentence_tok = tokenizer.tokenize(test_sentence)
get_model_predictions(test_sentence_tok, TRANSFORM_TYPE.NEGATIVE)
tokenizer.convert_tokens_to_string(test_sentence_tok)

# the sentiment-change step; this function is called when a button is clicked
def transform_polarity(splited_sentence, relevant_tokens, t_type:Enum):
    # pick up to n tokens that do not yet match t_type and switch them toward it
    number_of_samples = 1  # round(len(relevant_tokens) / 6 + 0.5)
    samples = list(filter(lambda el: el.transform_type is not t_type, relevant_tokens))[:number_of_samples]
    # assign a mask to every freshly removed word
    for s in samples:
        s.transform_type = t_type
        splited_sentence[s.idx] = tokenizer.mask_token
    get_model_predictions(splited_sentence, t_type)

class SentenceStructure:
    """
    Describes the structure of a sentence. The sentence is turned into a list of tokens.
    """
    def __init__(self, sentence:str):
        self.sentence = sentence
        self.split_sentence = tokenizer.tokenize(sentence)
        self.start_polar = get_current_polarity(self.__repr__())
        # current polarity
        self.sentiment = TRANSFORM_TYPE.POSITIVE if self.start_polar > 0.5 \
            else TRANSFORM_TYPE.NEGATIVE
        self.relevant_tokens = get_relevant_tokens(sentence, self.sentiment)

    def __repr__(self):
        return tokenizer.convert_tokens_to_string(self.split_sentence)
```
### Interactive part
Below you can type the sentence whose sentiment you want to transform.
```
from ipywidgets import interact, IntSlider, Textarea, Button, Output

# example sentence
sentence = "I absolutely love this movie! I do think it is great. Watching this film was a great experience for me and my friends on this rainy afternoon. The acting was also very well made."

out = Output()

@interact(
    inner=Textarea(
        value=sentence,
        placeholder='',
        description='Text:',
        disabled=False,
    )
)
def choose_sentence(inner:str):
    global sentence
    sentence = inner

confirm_button = Button(description="Confirm sentence")
sentence_obj = None
current_sentence = None

def on_button_clicked(b):
    global current_sentence, sentence_obj
    with out:
        out.clear_output()
        sentence_obj = SentenceStructure(sentence)

confirm_button.on_click(on_button_clicked)
display(confirm_button)
display(out)

from functools import lru_cache

polarity_score = 50
sentence_out = Output()
t_type = None
positive_button = Button(description="More positive")
negative_button = Button(description="More negative")

@lru_cache()
def get_morphed_sentence(polarity_score):
    transform_polarity(sentence_obj.split_sentence, sentence_obj.relevant_tokens, t_type)
    return sentence_obj.__repr__()

def polarize_up(btn):
    global sentence_obj, polarity_score, t_type
    if polarity_score >= 100:
        return
    polarity_score += 1
    with sentence_out:
        sentence_out.clear_output()
        t_type = TRANSFORM_TYPE.POSITIVE
        print(
            f"Score: {polarity_score}: {TRANSFORM_TYPE.POSITIVE}\n",
            get_morphed_sentence(polarity_score)
        )

def polarize_down(btn):
    global sentence_obj, polarity_score, t_type
    if polarity_score <= 0:
        return
    polarity_score -= 1
    with sentence_out:
        sentence_out.clear_output()
        t_type = TRANSFORM_TYPE.NEGATIVE
        print(
            f"Score: {polarity_score} {TRANSFORM_TYPE.NEGATIVE}\n",
            get_morphed_sentence(polarity_score)
        )

positive_button.on_click(polarize_up)
negative_button.on_click(polarize_down)

with sentence_out:
    print(sentence_obj)

display(positive_button)
display(negative_button)
display(sentence_out)
```
# Summary
## <a name="OcenaAplikacji"></a>Evaluation of the application
The models we use were trained on a very limited amount of data, which clearly shows in the results they return and in how quickly the connection fades between the words they propose and the target polarity value.
The classifier model has a clear tendency toward extreme scores, so the largest visible difference when changing a sentence's polarity occurs for scores between 30 and 50; outside this range the sentences proposed by the positive and negative models quickly converge to a maximally positively or negatively polarized form, and changing the score factor no longer affects the sentence.
The part-of-speech classifier is also an imperfect solution, since many English words with identical spelling can change their function in a sentence depending on context, not to mention expressions consisting of more than one word.
The application works satisfactorily by the standards of a student project, but it certainly leaves plenty of room for optimization and additional training.
## <a name="OcenaProjektu"></a>Evaluation of the project
Our application demonstrated that such models can be genuinely useful.
Responsive control over the sentiment of arbitrary text would once have been a very difficult (if not impossible) task.
Today's development of technology, and of the infrastructure around the ML industry itself, means that as students we were able to write such an advanced application in limited time and at zero personal cost.
Language-processing models have many applications, and today's pace of technological progress gives us considerable hope that the next generation of students will be able to write an application solving an even more complex problem with even better tools.
## <a name="PrzyszloscModeliTransformer"></a>The future of Transformer models
The first transformer models, presented in 2017, revolutionized the NLP community with their efficiency and their way of analyzing data. Since then they have stood at the very center of natural language processing research. New ideas appear practically every month, trying in various ways to refine and improve the approach.
Breakthrough solutions are also presented at short intervals by large corporations such as Google and Facebook, which only underlines how practical this branch of artificial intelligence is.
Looking at the pace of development so far, we can venture the claim that transformers will stay with us for some time yet, continuing to evolve and change form.
At the same time, we cannot rule out that this method will soon be suddenly displaced by the next scientific innovation.
## <a name="Bibliografia"></a>__Bibliografia__
[[1] *Attention Is All You Need*](https://arxiv.org/abs/1706.03762)
[[2] *Efficient Estimation of Word Representations in Vector Space*](https://arxiv.org/abs/1301.3781)
[[3] *BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding*](https://arxiv.org/abs/1810.04805)
[[4] *Distilling the Knowledge in a Neural Network*](https://arxiv.org/abs/1503.02531v1)
[[5] *DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter*](https://arxiv.org/abs/1910.01108)
[[6] *What Does BERT Look At? An Analysis of BERT's Attention*](https://arxiv.org/abs/1906.04341)
[[7] *LambdaNetworks: Modeling Long-Range Interactions Without Attention*](https://arxiv.org/abs/2102.08602v1)
[[8] *RoBERTa: A Robustly Optimized BERT Pretraining Approach*](https://arxiv.org/abs/1907.11692)
[[9] *ALBERT: A Lite BERT for Self-supervised Learning of Language Representations*](https://arxiv.org/abs/1909.11942)
[[10] *Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism*](https://arxiv.org/abs/1909.08053)
[[11] *Language Models are Unsupervised Multitask Learners*](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
[[12] *Release Strategies and the Social Impacts of Language Models*](https://arxiv.org/abs/1908.09203)
[[13] *KLEJ: Comprehensive Benchmark for Polish Language Understanding*](https://arxiv.org/abs/2005.00630)
|
github_jupyter
|
```
!pip install confluent-kafka==1.7.0
from confluent_kafka.admin import AdminClient, NewTopic, NewPartitions
from confluent_kafka import KafkaException
import sys
from uuid import uuid4
bootstrap_server = "kafka:9092" # Brokers act as cluster entry points
conf = {'bootstrap.servers': bootstrap_server}
a = AdminClient(conf)
md = a.list_topics(timeout=10)
print(" {} topics:".format(len(md.topics)))
for t in iter(md.topics.values()):
if t.error is not None:
errstr = ": {}".format(t.error)
else:
errstr = ""
print(" \"{}\" with {} partition(s){}".format(t, len(t.partitions), errstr))
from confluent_kafka import SerializingProducer
from confluent_kafka.serialization import *
import time
topic = "RoboticArm"
def delivery_report(err, msg):
if err is not None:
print("Failed to deliver message: {}".format(err))
else:
print("Produced record to topic {} partition [{}] @ offset {}"
.format(msg.topic(), msg.partition(), msg.offset()))
producer_conf = {
'bootstrap.servers': bootstrap_server,
'key.serializer': StringSerializer('utf_8'),
'value.serializer': StringSerializer('utf_8')
}
producer = SerializingProducer(producer_conf)
```
## Run the following cell to loop over the data
These are the same data as in the EPL example, but time flows at half speed.
```
import json
from IPython.display import clear_output
def send(value):
key = None
producer.produce(topic=topic, value=json.dumps(value), key=key, on_delivery=delivery_report)
print(value)
producer.poll(1)
clear_output(wait=True)
while True:
send({"id":"1", "status":"ready", "stressLevel": 0, "ts": int(time.time())})
time.sleep(2)
send({"id":"1", "status": "goodGrasped", "stressLevel": 1, "ts": int(time.time())})
time.sleep(2)
ts = int(time.time())
send({"id":"1", "status":"movingGood", "stressLevel": 7, "ts": ts})
send({"id":"2", "status":"ready", "stressLevel": 0, "ts": ts })
time.sleep(2)
send({"id":"2", "status":"goodGrasped", "stressLevel": 5, "ts": int(time.time()) })
time.sleep(1)
send({"id":"2", "status":"movingGood", "stressLevel": 9, "ts": int(time.time()) })
time.sleep(10)
ts = int(time.time())
send({"id":"1", "status":"placingGood", "stressLevel": 3, "ts": ts})
send({"id":"2", "status":"placingGood", "stressLevel": 3, "ts": ts })
time.sleep(8)
ts = int(time.time())
send({"id":"1", "status":"moving", "stressLevel": 2, "ts": ts})
send({"id":"2", "status":"moving", "stressLevel": 1, "ts": ts })
time.sleep(6)
ts = int(time.time())
send({"id":"1", "status":"ready", "stressLevel": 0, "ts": ts})
send({"id":"2", "status":"ready", "stressLevel": 0, "ts": ts })
time.sleep(2)
```
To interrupt the execution of the cell, press the square icon in the toolbar or choose *Interrupt Kernel* from the *Kernel* dropdown menu.
|
github_jupyter
|
```
#default_exp dispatch
#export
from fastcore.imports import *
from fastcore.foundation import *
from fastcore.utils import *
from nbdev.showdoc import *
from fastcore.test import *
```
# Type dispatch
> Basic single and dual parameter dispatch
## Helpers
```
#exports
def type_hints(f):
"Same as `typing.get_type_hints` but returns `{}` if not allowed type"
return typing.get_type_hints(f) if isinstance(f, typing._allowed_types) else {}
#export
def anno_ret(func):
"Get the return annotation of `func`"
if not func: return None
ann = type_hints(func)
if not ann: return None
return ann.get('return')
#hide
def f(x) -> float: return x
test_eq(anno_ret(f), float)
def f(x) -> typing.Tuple[float,float]: return x
test_eq(anno_ret(f), typing.Tuple[float,float])
def f(x) -> None: return x
test_eq(anno_ret(f), NoneType)
def f(x): return x
test_eq(anno_ret(f), None)
test_eq(anno_ret(None), None)
#export
cmp_instance = functools.cmp_to_key(lambda a,b: 0 if a==b else 1 if issubclass(a,b) else -1)
td = {int:1, numbers.Number:2, numbers.Integral:3}
test_eq(sorted(td, key=cmp_instance), [numbers.Number, numbers.Integral, int])
#export
def _p2_anno(f):
"Get the 1st 2 annotations of `f`, defaulting to `object`"
hints = type_hints(f)
ann = [o for n,o in hints.items() if n!='return']
while len(ann)<2: ann.append(object)
return ann[:2]
def _f(a): pass
test_eq(_p2_anno(_f), (object,object))
def _f(a, b): pass
test_eq(_p2_anno(_f), (object,object))
def _f(a:None, b)->str: pass
test_eq(_p2_anno(_f), (NoneType,object))
def _f(a:str, b)->float: pass
test_eq(_p2_anno(_f), (str,object))
def _f(a:None, b:str)->float: pass
test_eq(_p2_anno(_f), (NoneType,str))
def _f(a:int, b:int)->float: pass
test_eq(_p2_anno(_f), (int,int))
def _f(self, a:int, b:int): pass
test_eq(_p2_anno(_f), (int,int))
def _f(a:int, b:str)->float: pass
test_eq(_p2_anno(_f), (int,str))
test_eq(_p2_anno(attrgetter('foo')), (object,object))
```
## TypeDispatch -
The following class is the basis that allows us to do type dispatch with type annotations. It contains a dictionary mapping types to functions and ensures that the proper function is called when passed an object (depending on its type).
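Before the implementation, the core mechanism — a dictionary from types to functions, searched with `issubclass` from most specific type to least — can be illustrated with a minimal, self-contained sketch (the `MiniDispatch` name is hypothetical, for illustration only):

```python
import functools
import numbers

# Order types so that subclasses sort before their base classes.
cmp_key = functools.cmp_to_key(lambda a, b: 0 if a == b else -1 if issubclass(a, b) else 1)

class MiniDispatch:
    "Toy type -> function dictionary; lookup walks the types most-specific first."
    def __init__(self): self.d = {}
    def add(self, t, f):
        self.d[t] = f
        self.d = {k: self.d[k] for k in sorted(self.d, key=cmp_key)}
    def __getitem__(self, k):
        for t, f in self.d.items():
            if issubclass(k, t): return f
        return None

md = MiniDispatch()
md.add(numbers.Number, lambda x: 'number')
md.add(int, lambda x: 'int')
assert md[int](0) == 'int'         # exact type wins over the base class
assert md[float](0.0) == 'number'  # falls back to numbers.Number
assert md[str] is None             # no registered super-class
```

`TypeDispatch` below follows the same idea, extended to two parameters and with caching.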
```
#export
class _TypeDict:
def __init__(self): self.d,self.cache = {},{}
def _reset(self):
self.d = {k:self.d[k] for k in sorted(self.d, key=cmp_instance, reverse=True)}
self.cache = {}
def add(self, t, f):
"Add type `t` and function `f`"
if not isinstance(t,tuple): t=tuple(L(t))
for t_ in t: self.d[t_] = f
self._reset()
def all_matches(self, k):
        "Find all matching types that are super-classes of `k`"
if k not in self.cache:
types = [f for f in self.d if k==f or (isinstance(k,type) and issubclass(k,f))]
self.cache[k] = [self.d[o] for o in types]
return self.cache[k]
def __getitem__(self, k):
"Find first matching type that is a super-class of `k`"
res = self.all_matches(k)
return res[0] if len(res) else None
def __repr__(self): return self.d.__repr__()
def first(self): return first(self.d.values())
#export
class TypeDispatch:
"Dictionary-like object; `__getitem__` matches keys of types using `issubclass`"
def __init__(self, funcs=(), bases=()):
self.funcs,self.bases = _TypeDict(),L(bases).filter(is_not(None))
for o in L(funcs): self.add(o)
self.inst = None
def add(self, f):
        "Add function `f`, dispatching on its first two annotated types"
a0,a1 = _p2_anno(f)
t = self.funcs.d.get(a0)
if t is None:
t = _TypeDict()
self.funcs.add(a0, t)
t.add(a1, f)
def first(self): return self.funcs.first().first()
def returns(self, x): return anno_ret(self[type(x)])
def returns_none(self, x):
r = anno_ret(self[type(x)])
return r if r == NoneType else None
def _attname(self,k): return getattr(k,'__name__',str(k))
def __repr__(self):
r = [f'({self._attname(k)},{self._attname(l)}) -> {getattr(v, "__name__", v.__class__.__name__)}'
for k in self.funcs.d for l,v in self.funcs[k].d.items()]
return '\n'.join(r)
def __call__(self, *args, **kwargs):
ts = L(args).map(type)[:2]
f = self[tuple(ts)]
if not f: return args[0]
if self.inst is not None: f = MethodType(f, self.inst)
return f(*args, **kwargs)
def __get__(self, inst, owner):
self.inst = inst
return self
def __getitem__(self, k):
"Find first matching type that is a super-class of `k`"
k = L(k)
while len(k)<2: k.append(object)
r = self.funcs.all_matches(k[0])
for t in r:
o = t[k[1]]
if o is not None: return o
for base in self.bases:
res = base[k]
if res is not None: return res
return None
def f_col(x:typing.Collection): return x
def f_nin(x:numbers.Integral)->int: return x+1
def f_ni2(x:int): return x
def f_bll(x:(bool,list)): return x
def f_num(x:numbers.Number): return x
t = TypeDispatch([f_nin,f_ni2,f_num,f_bll,None])
t.add(f_ni2) #Should work even if we add the same function twice.
test_eq(t[int], f_ni2)
test_eq(t[np.int32], f_nin)
test_eq(t[str], None)
test_eq(t[float], f_num)
test_eq(t[bool], f_bll)
test_eq(t[list], f_bll)
t.add(f_col)
test_eq(t[str], f_col)
test_eq(t[np.int32], f_nin)
o = np.int32(1)
test_eq(t(o), 2)
test_eq(t.returns(o), int)
assert t.first() is not None
t
```
If `bases` is set to a collection of `TypeDispatch` objects, then they are searched for matching functions if no match is found in this object.
```
def f_str(x:str): return x+'1'
t2 = TypeDispatch(f_str, bases=t)
test_eq(t2[int], f_ni2)
test_eq(t2[np.int32], f_nin)
test_eq(t2[float], f_num)
test_eq(t2[bool], f_bll)
test_eq(t2[str], f_str)
test_eq(t2('a'), 'a1')
test_eq(t2[np.int32], f_nin)
test_eq(t2(o), 2)
test_eq(t2.returns(o), int)
def m_nin(self, x:(str,numbers.Integral)): return str(x)+'1'
def m_bll(self, x:bool): self.foo='a'
def m_num(self, x:numbers.Number): return x
t = TypeDispatch([m_nin,m_num,m_bll])
class A: f = t
a = A()
test_eq(a.f(1), '11')
test_eq(a.f(1.), 1.)
test_is(a.f.inst, a)
a.f(False)
test_eq(a.foo, 'a')
test_eq(a.f(()), ())
def m_tup(self, x:tuple): return x+(1,)
t2 = TypeDispatch(m_tup, t)
class A2: f = t2
a2 = A2()
test_eq(a2.f(1), '11')
test_eq(a2.f(1.), 1.)
test_is(a2.f.inst, a2)
a2.f(False)
test_eq(a2.foo, 'a')
test_eq(a2.f(()), (1,))
def f1(x:numbers.Integral, y): return x+1
def f2(x:int, y:float): return x+y
t = TypeDispatch([f1,f2])
test_eq(t[int], f1)
test_eq(t[int,int], f1)
test_eq(t[int,float], f2)
test_eq(t[float,float], None)
test_eq(t[np.int32,float], f1)
test_eq(t(3,2.0), 5)
test_eq(t(3,2), 4)
test_eq(t('a'), 'a')
t
```
## typedispatch Decorator
```
#export
class DispatchReg:
"A global registry for `TypeDispatch` objects keyed by function name"
def __init__(self): self.d = defaultdict(TypeDispatch)
def __call__(self, f):
nm = f'{f.__qualname__}'
self.d[nm].add(f)
return self.d[nm]
typedispatch = DispatchReg()
@typedispatch
def f_td_test(x, y): return f'{x}{y}'
@typedispatch
def f_td_test(x:numbers.Integral, y): return x+1
@typedispatch
def f_td_test(x:int, y:float): return x+y
test_eq(f_td_test(3,2.0), 5)
test_eq(f_td_test(3,2), 4)
test_eq(f_td_test('a','b'), 'ab')
```
## Casting
Now that we can dispatch on types, let's make it easier to cast objects to a different type.
```
#export
_all_=['cast']
#export
def retain_meta(x, res):
"Call `res.set_meta(x)`, if it exists"
if hasattr(res,'set_meta'): res.set_meta(x)
return res
#export
def default_set_meta(self, x):
"Copy over `_meta` from `x` to `res`, if it's missing"
if hasattr(x, '_meta') and not hasattr(self, '_meta'): self._meta = x._meta
return self
#export
@typedispatch
def cast(x, typ):
"cast `x` to type `typ` (may also change `x` inplace)"
res = typ._before_cast(x) if hasattr(typ, '_before_cast') else x
if isinstance(res, ndarray): res = res.view(typ)
elif hasattr(res, 'as_subclass'): res = res.as_subclass(typ)
else:
try: res.__class__ = typ
except: res = typ(res)
return retain_meta(x, res)
```
This works both for plain python classes:...
```
mk_class('_T1', 'a')
class _T2(_T1): pass
t = _T1(a=1)
t2 = cast(t, _T2)
test_eq_type(_T2(a=1), t2)
```
...as well as for arrays and tensors.
```
class _T1(ndarray): pass
t = array([1])
t2 = cast(t, _T1)
test_eq(array([1]), t2)
test_eq(_T1, type(t2))
```
To customize casting for other types, define a separate `cast` function with `typedispatch` for your type.
```
#export
def retain_type(new, old=None, typ=None):
"Cast `new` to type of `old` or `typ` if it's a superclass"
# e.g. old is TensorImage, new is Tensor - if not subclass then do nothing
if new is None: return
assert old is not None or typ is not None
if typ is None:
if not isinstance(old, type(new)): return new
typ = old if isinstance(old,type) else type(old)
    # Do nothing if the new type is already an instance of the requested type (i.e. same type)
if typ==NoneType or isinstance(new, typ): return new
return retain_meta(old, cast(new, typ))
class _T(tuple): pass
a = _T((1,2))
b = tuple((1,2))
test_eq_type(retain_type(b, typ=_T), a)
```
If `old` has a `_meta` attribute, its content is passed when casting `new` to the type of `old`.
```
class _A():
set_meta = default_set_meta
def __init__(self, t): self.t=t
class _B1(_A):
def __init__(self, t, a=1):
super().__init__(t)
self._meta = {'a':a}
x = _B1(1, a=2)
b = _A(1)
test_eq(retain_type(b, old=x)._meta, {'a': 2})
a = {L: [int, tuple]}
first(a.keys())
#export
def retain_types(new, old=None, typs=None):
"Cast each item of `new` to type of matching item in `old` if it's a superclass"
if not is_listy(new): return retain_type(new, old, typs)
if typs is not None:
if isinstance(typs, dict):
t = first(typs.keys())
typs = typs[t]
else: t,typs = typs,None
else: t = type(old) if old is not None and isinstance(old,type(new)) else type(new)
return t(L(new, old, typs).map_zip(retain_types, cycled=True))
class T(tuple): pass
t1,t2 = retain_types((1,(1,(1,1))), (2,T((2,T((3,4))))))
test_eq_type(t1, 1)
test_eq_type(t2, T((1,T((1,1)))))
t1,t2 = retain_types((1,(1,(1,1))), typs = {tuple: [int, {T: [int, {T: [int,int]}]}]})
test_eq_type(t1, 1)
test_eq_type(t2, T((1,T((1,1)))))
#export
def explode_types(o):
    "Return the type of `o`, potentially in nested dictionaries for things that are listy"
if not is_listy(o): return type(o)
return {type(o): [explode_types(o_) for o_ in o]}
test_eq(explode_types((2,T((2,T((3,4)))))), {tuple: [int, {T: [int, {T: [int,int]}]}]})
```
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
|
github_jupyter
|
```
import linsolve
import tf_linsolve
import tensorflow as tf
import scipy
import numpy as np
import pylab as plt
%load_ext line_profiler
from hera_cal.io import HERAData
hd = HERAData('zen.2458098.27465.sum.corrupt.uvh5')
data, flags, _ = hd.read(polarizations=['nn'])
from hera_cal.redcal import predict_noise_variance_from_autos, SEC_PER_DAY, split_pol, join_pol
data_wgts = {
bl: predict_noise_variance_from_autos(
bl, data
)
** -1
for bl in data.keys() if bl[0] != bl[1]
}
data = {bl: data[bl] for bl in data.keys() if bl[0] != bl[1]}
np.savez('zen.2458098.27465.sum.corrupt.npz', antpos=hd.antpos, data=data, wgts=data_wgts, freqs=hd.freqs)
!ls -alh zen.2458098.27465.sum.corrupt.npz
```
## Methods
Here, I'll develop some methods to replace the solvers in linsolve assuming that the inputs are tensors
### Solvers Dense
```
# This could help with repeated calls, but increases the runtime for single usage
#@tf.function
def _invert_lsqr(A, y, rcond=0, sparse=False):
"""
rcond:
rcond must be set to 0 to work for complex datasets
"""
dtype = y.dtype
assert not (
dtype in [np.complex128, np.complex64, complex] and rcond > 0
), "If using complex data, rcond must be equal to 0 for performance reasons"
x = tf.linalg.lstsq(
tf.transpose(A, perm=[2, 0, 1]),
tf.transpose(y)[..., None],
l2_regularizer=rcond,
)[..., 0]
return x
def _invert_lsqr_sparse(xs_ys_vals, y, rcond):
"""
"""
A = _get_A_sparse(xs_ys_vals)
return _invert_lsqr(A, y, rcond, sparse=True)
# This could help with repeated calls, but increases the runtime for single usage
#@tf.function
def _invert_pinv(A, y, rcond, sparse=False):
"""
"""
dtype = y.dtype
A = tf.transpose(A, perm=[2, 0, 1])
AtA = tf.matmul(A, A, adjoint_a=True, a_is_sparse=sparse, b_is_sparse=sparse)
if dtype in [complex, np.complex64, np.complex128]:
        # tensorflow does not allow for complex pseudo-inverses. Compute the value manually
R = tf.math.real(AtA)
C = tf.math.imag(AtA)
r0 = tf.matmul(tf.linalg.pinv(R), C)
y11 = tf.linalg.pinv(tf.matmul(C, r0) + R)
y10 = tf.matmul(-r0, y11)
AtAi = tf.cast(tf.complex(y11, y10), dtype=AtA.dtype)
else:
AtAi = tf.linalg.pinv(AtA, rcond=rcond)
return tf.einsum(
"nij,njk,kn->ni", AtAi, tf.transpose(A, perm=[0, 2, 1], conjugate=True), y
)
def _invert_pinv_sparse(xs_ys_vals, y, rcond):
"""
"""
A = _get_A_sparse(xs_ys_vals)
return _invert_pinv(A, y, rcond, sparse=True)
# This could help with repeated calls, but increases the runtime for single usage
#@tf.function
def _invert_solve(A, y, rcond, sparse=False):
"""
"""
A = tf.transpose(A, perm=[2, 0, 1])
AtA = tf.matmul(A, A, adjoint_a=True, a_is_sparse=sparse, b_is_sparse=sparse)
Aty = tf.matmul(
tf.transpose(A, perm=[0, 2, 1], conjugate=True),
tf.transpose(y)[..., None],
a_is_sparse=sparse,
)
return tf.linalg.solve(AtA, Aty)[..., 0]
def _invert_solve_sparse(xs_ys_vals, y, rcond):
"""
"""
A = _get_A_sparse(xs_ys_vals)
return _invert_solve(A, y, rcond, sparse=True)
# This could help with repeated calls, but increases the runtime for single usage
#@tf.function
def _invert_pinv_shared(A, y, rcond, sparse=False):
"""
"""
AtA = tf.matmul(A, A, adjoint_a=True, a_is_sparse=sparse, b_is_sparse=sparse)
dtype = AtA.dtype
if dtype in [complex, np.complex64, np.complex128]:
        # tensorflow does not allow for complex pseudo-inverses. Compute the value manually
R = tf.math.real(AtA)
C = tf.math.imag(AtA)
r0 = tf.matmul(tf.linalg.pinv(R), C)
y11 = tf.linalg.pinv(tf.matmul(C, r0) + R)
y10 = tf.matmul(-r0, y11)
AtAi = tf.cast(tf.complex(y11, y10), dtype=AtA.dtype)
else:
AtAi = tf.linalg.pinv(AtA, rcond=rcond)
return tf.transpose(tf.matmul(AtAi, tf.matmul(A, y, adjoint_a=True, a_is_sparse=sparse)))
def _invert_pinv_shared_sparse(xs_ys_vals, y, rcond):
"""
"""
A = _get_A_sparse(xs_ys_vals)
return _invert_pinv_shared(A, y, rcond, sparse=True)
tf.convert_to_tensor?
```
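The real/imaginary block construction used in `_invert_pinv` above can be sanity-checked with NumPy, which supports complex inverses directly; a sketch on hypothetical random data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10)) + 1j * rng.normal(size=(50, 10))
AtA = A.conj().T @ A  # Hermitian positive definite, so its real part is invertible

# (R + iC)^{-1} = Y11 + i*Y10, using only real-valued (pseudo-)inverses
R, C = AtA.real, AtA.imag
r0 = np.linalg.pinv(R) @ C
Y11 = np.linalg.pinv(C @ r0 + R)
Y10 = -r0 @ Y11
AtAi = Y11 + 1j * Y10

assert np.allclose(AtAi, np.linalg.inv(AtA))  # matches the direct complex inverse
```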
### Helper Methods
```
def _get_AtA_Aty_sparse(xs_ys_vals, y):
"""
"""
pass
```
## Standard Linsolve
### Standard Linsolve Case
```
x = np.linspace(0, 2 * np.pi, 1000)
g = np.cos(x) + 1j * np.sin(x)
h = np.sin(x) + 1j * np.cos(x)
i = x + 1j * x
data = {'g + h': g + h, 'g + i': g + i, 'i + h': i + h, 'i + g + h': i + g + h}
wgts = {k: np.random.uniform(0.9, 1.1, v.shape[0]) for k, v in data.items()}
ls = linsolve.LinearSolver(data)
A = ls.get_A()[..., 0]
y = ls.get_weighted_data()
ATF = tf.convert_to_tensor(A)
yTF = tf.constant(y)
%time ma = _invert_pinv_shared(ATF, yTF, rcond=None)
%time s = ls.solve()
```
#### Profile
```
%lprun -f _invert_pinv_shared _invert_pinv_shared(ATF, yTF, rcond=None)
```
### Least-Squares Case
```
ls = linsolve.LinearSolver(data, wgts=wgts)
ATF = tf.convert_to_tensor(ls.get_A())
yTF = tf.convert_to_tensor(ls.get_weighted_data())
%time sol = _invert_lsqr(ATF, yTF, rcond=0)
%time solution = ls.solve(mode='lsqr')
plt.figure(figsize=(10, 6))
plt.plot(np.abs(solution['g'] - sol[..., 0]))
plt.plot(np.abs(solution['h'] - sol[..., 1]))
plt.plot(np.abs(solution['i'] - sol[..., 2]))
plt.show()
```
#### Profiling
```
%lprun -f _invert_lsqr _invert_lsqr(ATF, yTF, rcond=0)
```
### Pseudo-inverse
```
%time solution = _invert_pinv(ATF, yTF, rcond=None)
%time sol = ls.solve()
```
#### Profiling
```
%lprun -f _invert_pinv _invert_pinv(ATF, yTF, rcond=None)
```
### Solve
```
%time _ = _invert_solve(ATF, yTF, 0)
%time sol = ls.solve(mode='solve')
```
#### Profiling
```
%lprun -f _invert_solve _invert_solve(ATF, yTF, rcond=None)
```
# Tensorflow Operations
```
A = np.random.uniform(0, 1, size=(200, 100, 300)) + 1j * np.random.uniform(0, 1, size=(200, 100, 300))
ATF = tf.complex(tf.random.uniform((200, 100, 300)), tf.random.uniform((200, 100, 300)))
```
## Conjugation
```
%%time
_ = A.conj()
%%time
_ = tf.math.conj(ATF)
```
## Transpose
```
%%time
_ = tf.einsum("ijk...->ikj...", ATF)
%%time
_ = tf.transpose(ATF, perm=[0, 2, 1])
%%time
_ = tf.reshape(ATF, (200, 300, 100))
%%time
G = tf.linalg.matrix_transpose(ATF)
G = tf.constant(G)
%%time
_ = tf.einsum('ijk,ijl->ij', G, G)
%%time
_ = tf.einsum('ijk,ijl->ji', G, G)
tf.linalg.solve?
A = tf.random.uniform((1000, 100, 100))
y = tf.random.uniform((1000, 100, 20))
%%timeit
_ = tf.linalg.solve(A, y)
%%timeit
_ = tf.transpose(tf.linalg.solve(A, y))
@tf.function
def function(A):
"""
"""
return result(A)
@tf.function
def function_graph(A):
"""
"""
return result_graph(A)
def function_no_opt(A):
"""
"""
return result_no_opt(A)
def result(A):
"""
"""
return tf.matmul(A, A, transpose_a=True)
@tf.function
def result_graph(A):
"""
"""
return tf.matmul(A, A, transpose_a=True)
def result_no_opt(A):
"""
"""
return tf.matmul(A, A, transpose_a=True)
A = tf.random.uniform((5000, 1000))
%timeit _ = function(A)
%timeit _ = function_graph(A)
%timeit _ = function_no_opt(A)
from uvtools.dspec import dpss_operator
ugrid = np.arange(-40, 40, 0.499)
freqs = np.linspace(50e6, 250e6, 1024)
r, _ = dpss_operator(ugrid, filter_centers=[0], filter_half_widths=[1], eigenval_cutoff=[1e-10])
f, _ = dpss_operator(freqs, filter_centers=[0], filter_half_widths=[10e-9], eigenval_cutoff=[1e-10])
f.shape, r.shape
r.shape[1] ** 2 * f.shape[1] / 2
ugrid = np.arange(-40, 0, 0.499)
freqs = np.linspace(50e6, 250e6, 1024)
r, _ = dpss_operator(ugrid, filter_centers=[0], filter_half_widths=[1], eigenval_cutoff=[1e-10])
f, _ = dpss_operator(freqs, filter_centers=[0], filter_half_widths=[10e-9], eigenval_cutoff=[1e-10])
x1 = np.random.uniform(0, 1, (10, 100))
x2 = np.random.uniform(0, 1, (5, 10, 100))
i1 = np.argmax(x1, axis=-1)
i2 = np.argmax(x2, axis=-1)
ind1 = np.indices(i1.shape)
ind2 = np.indices(i2.shape)
```
|
github_jupyter
|
# Scalars
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```
## Integers
### Binary representation of integers
```
format(16, '032b')
```
### Bit shifting
```
format(16 >> 2, '032b')
16 >> 2
format(16 << 2, '032b')
16 << 2
```
### Overflow
In general, the computer representation of integers has a limited range, and may overflow. The range depends on whether the integer is signed or unsigned.
For example, with 8 bits, we can represent at most $2^8 = 256$ integers.
- 0 to 255 unsigned
- -128 to 127 signed
Signed integers
```
np.arange(130, dtype=np.int8)[-5:]
```
Unsigned integers
```
np.arange(130, dtype=np.uint8)[-5:]
np.arange(260, dtype=np.uint8)[-5:]
```
### Integer division
In Python 2 or other languages such as C/C++, be very careful when dividing as the division operator `/` performs integer division when both numerator and denominator are integers. This is rarely what you want. In Python 3 the `/` always performs floating point division, and you use `//` for integer division, removing a common source of bugs in numerical calculations.
```
%%python2
import numpy as np
x = np.arange(10)
print(x/10)
```
Python 3 does the "right" thing.
```
x = np.arange(10)
x/10
```
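A quick sketch of the two Python 3 operators:

```python
print(7 / 2)         # 3.5 -- true division always returns a float
print(7 // 2)        # 3   -- integer (floor) division
print(-7 // 2)       # -4  -- floors toward negative infinity, not toward zero
print(divmod(7, 2))  # (3, 1) -- quotient and remainder together
```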
## Real numbers
Real numbers are represented as **floating point** numbers. A floating point number is stored in 3 pieces (sign bit, exponent, mantissa), so that every float is represented as ± mantissa × 2^exponent. Because of this, the interval between consecutive numbers is smallest (highest precision) for numbers close to 0 and largest for numbers close to the lower and upper bounds.
Because exponents have to be signed to represent both small and large numbers, but it is more convenient to use unsigned numbers here, the exponent has an offset (also known as the exponent bias). For example, if the exponent is an unsigned 8-bit number, it can represent the range (0, 255). By using an offset of 128, it will instead represent the range (-127, 128).

**Note**: Intervals between consecutive floating point numbers are not constant. In particular, the precision for small numbers is much larger than for large numbers. In fact, approximately half of all floating point numbers lie between -1 and 1 when using the `double` type in C/C++ (also the default for `numpy`).

Because of this, if you are adding many numbers, it is more accurate to first add the small numbers before the large numbers.
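The claim about adding small numbers first can be demonstrated directly; a minimal sketch:

```python
parts = [1e16] + [1.0] * 1000

large_first = 0.0
for p in parts:            # each 1.0 is below the spacing of doubles near 1e16 and is lost
    large_first += p

small_first = 0.0
for p in reversed(parts):  # the ones accumulate to 1000 before meeting the large value
    small_first += p

print(large_first)  # 1e16
print(small_first)  # 1.0000000000001e+16
```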
#### IEEE 754 32-bit floating point representation

See [Wikipedia](https://en.wikipedia.org/wiki/Single-precision_floating-point_format) for how this binary number is evaluated to 0.15625.
```
from ctypes import c_int, c_float
s = c_int.from_buffer(c_float(0.15625)).value
s = format(s, '032b')
s
rep = {
'sign': s[:1],
'exponent' : s[1:9:],
'fraction' : s[9:]
}
rep
```
### Most base 10 real numbers are approximations
This is simply because numbers are stored in finite-precision binary format.
```
'%.20f' % (0.1 * 0.1 * 100)
```
### Never check for equality of floating point numbers
```
i = 0
loops = 0
while i != 1:
i += 0.1 * 0.1
loops += 1
if loops == 1000000:
break
i
i = 0
loops = 0
while np.abs(1 - i) > 1e-6:
i += 0.1 * 0.1
loops += 1
if loops == 1000000:
break
i
```
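For the tolerance comparison, the standard library also offers `math.isclose`; a quick sketch:

```python
import math

a = 0.1 + 0.1 + 0.1
print(a == 0.3)                              # False: accumulated rounding error
print(math.isclose(a, 0.3))                  # True: compares within a relative tolerance
print(math.isclose(a, 0.3, rel_tol=1e-20))   # False: tolerance can be tightened
```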
### Associative law does not necessarily hold
```
6.022e23 - 6.022e23 + 1
1 + 6.022e23 - 6.022e23
```
### Distributive law does not hold
```
a = np.exp(1)
b = np.pi
c = np.sin(1)
a*(b+c)
a*b + a*c
```
### Catastrophic cancellation
Consider calculating sample variance
$$
s^2 = \frac{1}{n(n-1)}\left(n\sum_{i=1}^n x_i^2 - \left(\sum_{i=1}^n x_i\right)^2\right)
$$
Be careful whenever you calculate the difference of potentially big numbers.
```
def var(x):
"""Returns variance of sample data using sum of squares formula."""
n = len(x)
return (1.0/(n*(n-1))*(n*np.sum(x**2) - (np.sum(x))**2))
```
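One numerically stable alternative is Welford's online algorithm, which updates a running mean and never subtracts two large sums — a sketch (`welford_var` is a hypothetical helper name):

```python
def welford_var(xs):
    """One-pass, numerically stable sample variance (Welford's algorithm)."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in xs:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)  # uses the updated mean
    return m2 / (n - 1)

print(welford_var([1, 2, 3, 4]))  # 1.6666666666666667
```

Unlike the sum-of-squares formula, it stays accurate even when the data sit far from zero.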
### Underflow
```
import warnings
warnings.filterwarnings('ignore')
np.random.seed(4)
xs = np.random.random(1000)
ys = np.random.random(1000)
np.prod(xs)/np.prod(ys)
```
#### Prevent underflow by staying in log space
```
x = np.sum(np.log(xs))
y = np.sum(np.log(ys))
np.exp(x - y)
```
### Overflow
```
np.exp(1000)
```
### Numerically stable algorithms
#### What is the sample variance for numbers from a normal distribution with variance 1?
```
np.random.seed(15)
x_ = np.random.normal(0, 1, int(1e6))
x = 1e12 + x_
var(x)
```
#### Use functions from numerical libraries where available
```
np.var(x)
```
There is also a variance function in the standard library, but it is slower for large arrays.
```
import statistics
statistics.variance(x)
```
Note that `numpy` does not use the unbiased estimator by default. If you want the unbiased variance, set `ddof` to 1.
```
np.var([1,2,3,4], ddof=1)
statistics.variance([1,2,3,4])
```
### Useful numerically stable functions
Let's calculate
$$
\log(e^{1000} + e^{1000})
$$
Using basic algebra, we get the solution $\log(2) + 1000$.
\begin{align}
\log(e^{1000} + e^{1000}) &= \log(e^{0}e^{1000} + e^{0}e^{1000}) \\
&= \log(e^{1000}(e^{0} + e^{0})) \\
&= \log(e^{1000}) + \log(e^{0} + e^{0}) \\
&= 1000 + \log(2)
\end{align}
**logaddexp**
```
x = np.array([1000, 1000])
np.log(np.sum(np.exp(x)))
np.logaddexp(*x)
```
**logsumexp**
This function generalizes `logaddexp` to an arbitrary number of addends and is useful in a variety of statistical contexts.
Suppose we need to calculate a probability distribution $\pi$ parameterized by a vector $x$
$$
\pi_i = \frac{e^{x_i}}{\sum_{j=1}^n e^{x_j}}
$$
Taking logs, we get
$$
\log(\pi_i) = x_i - \log{\sum_{j=1}^n e^{x_j}}
$$
```
x = 1e6*np.random.random(100)
np.log(np.sum(np.exp(x)))
from scipy.special import logsumexp
logsumexp(x)
```
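The identity above is exactly what a numerically stable log-softmax computes; a minimal numpy sketch (`log_softmax` is a hypothetical helper name):

```python
import numpy as np

def log_softmax(x):
    "log(pi_i) = x_i - logsumexp(x), with the max subtracted so exp() never overflows"
    m = np.max(x)
    return x - (m + np.log(np.sum(np.exp(x - m))))

x = np.array([1000.0, 1000.0, 0.0])
log_pi = log_softmax(x)
print(np.exp(log_pi))  # ~[0.5, 0.5, 0.0], even though exp(1000) overflows naively
```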
**log1p and expm1**
```
np.exp(np.log(1 + 1e-6)) - 1
np.expm1(np.log1p(1e-6))
```
**sinc**
```
x = 1
np.sin(x)/x
np.sinc(x)
x = np.linspace(0.01, 2*np.pi, 100)
plt.plot(x, np.sinc(x), label='Library function')
plt.plot(x, np.sin(x)/x, label='DIY function')
plt.legend()
pass
```
|
github_jupyter
|
#Introduction to the Research Environment
The research environment is powered by IPython notebooks, which allow one to perform a great deal of data analysis and statistical validation. We'll demonstrate a few simple techniques here.
##Code Cells vs. Text Cells
As you can see, each cell can be either code or text. To select between them, choose from the 'Cell Type' dropdown menu on the top left.
###This is a test
Oh, so amazing
We can even use ${{LaTeX}}:$
$$x=\frac{-b \pm \sqrt{b^2 -4(a)(c)}}{2(a)}$$
$$\text{We can even write}$$
##Executing a Command
A code cell will be evaluated when you press play, or when you press the shortcut, shift-enter. Evaluating a cell evaluates each line of code in sequence, and prints the results of the last line below the cell.
```
2 + 2
6 + 6
```
Sometimes there is no result to be printed, as is the case with assignment.
```
X = 2
W = 10
```
Remember that only the result from the last line is printed.
```
2 + 2
3 + 3
6 + 6
7 + 7
```
However, you can print whichever lines you want using the `print` statement.
```
print (2 + 2)
3 + 3
print (4 + 4)
5 + 5
```
##Knowing When a Cell is Running
While a cell is running, a `[*]` will display on the left. When a cell has yet to be executed, `[ ]` will display. When it has been run, a number will display indicating the order in which it was run during the execution of the notebook, e.g. `[5]`. Try running this cell and watch the indicator change.
```
#Take some time to run something
c = 0
for i in range(10000000):
c = c + i
c
c = 1
for i in range(10):
c = c * (i+1)
c
```
##Importing Libraries
The vast majority of the time, you'll want to use functions from pre-built libraries. You can't import every library on Quantopian due to security issues, but you can import most of the common scientific ones. Here I import numpy and pandas, the two most common and useful libraries in quant finance. I recommend copying this import statement to every new notebook.
Notice that you can rename libraries to whatever you want after importing. The `as` statement allows this. Here we use `np` and `pd` as aliases for `numpy` and `pandas`. This is a very common aliasing and will be found in most code snippets around the web. The point behind this is to allow you to type fewer characters when you are frequently accessing these libraries.
```
import numpy as np
import pandas as pd
# This is a plotting library for pretty pictures.
import matplotlib.pyplot as plt
import cython as cy
import pandas_datareader as pdr
import datetime
import xarray as xa
```
##Tab Autocomplete
Pressing tab will give you a list of IPython's best guesses for what you might want to type next. This is incredibly valuable and will save you a lot of time. If there is only one possible option for what you could type next, IPython will fill that in for you. Try pressing tab frequently; it will seldom fill in anything you don't want, since a list is shown whenever there is ambiguity. This is a great way to see what functions are available in a library.
Try placing your cursor after the `.` and pressing tab.
```
np.random.normal
np.random.binomial
```
## Getting Documentation Help
Placing a question mark after a function and executing that line of code will give you the documentation IPython has for that function. It's often best to do this in a new cell, as you avoid re-executing other code and running into bugs.
```
np.random.normal?
np.test?
```
## Sampling
We'll sample some random data using a function from `numpy`.
```
# Sample 100 points with a mean of 0 and an std of 1. This is a standard normal distribution.
X = np.random.normal(0, 1, 100)
print(X)
W = np.random.lognormal(0,1,100)
print(W)
```
## Plotting
We can use the plotting library we imported as follows.
```
plt.plot(X)
plt.plot(W)
```
### Squelching Line Output
You might have noticed the annoying line of the form `[<matplotlib.lines.Line2D at 0x7f72fdbc1710>]` before the plots. This is because the `.plot` function actually produces output. When we don't want to display that output, we can suppress it with a semicolon as follows.
```
plt.plot(X);
plt.plot(W);
```
### Adding Axis Labels
No self-respecting quant leaves a graph without labeled axes. Here are some commands to help with that.
```
X = np.random.normal(0, 1, 100)
X2 = np.random.normal(0, 1, 100)
plt.plot(X);
plt.plot(X2);
plt.xlabel('Time') # The data we generated is unitless, but don't forget units in general.
plt.ylabel('Returns')
plt.legend(['X', 'X2']);
W = np.random.lognormal(0, 1, 100)
W2 = np.random.lognormal(0, 1, 100)
plt.plot(W);
plt.plot(W2);
plt.xlabel('Time') # The data we generated is unitless, but don't forget units in general.
plt.ylabel('Returns')
plt.legend(['W', 'W2']);
```
## Generating Statistics
Let's use `numpy` to take some simple statistics.
```
np.mean(X)
np.std(X)
np.mean(W)
np.std(W)
```
## Getting Real Pricing Data
Randomly sampled data can be great for testing ideas, but let's get some real data. Quantopian's `get_pricing` is not available outside its platform, so here we use `get_data_yahoo` from `pandas_datareader` instead. You can use the `?` syntax as discussed above to get more information on its arguments.
```
# get_pricing is only available on the Quantopian platform, so it is commented out here
#get_pricing?
#data = get_pricing('MSFT', start_date='2012-1-1', end_date='2015-6-1')
pdr.get_data_yahoo?
data = pdr.get_data_yahoo('MSFT', start=datetime.datetime(2020, 1, 1),
                          end=datetime.datetime(2021, 1, 1))
mi_ejemplo = pdr.get_data_yahoo('LNVGY', start=datetime.datetime(2020, 1, 1),
                                end=datetime.datetime(2021, 1, 1))
```
Our data is now a dataframe. You can see the datetime index and the columns with different pricing data.
```
data
mi_ejemplo
```
This is a pandas dataframe, so we can index in to just get price like this. For more info on pandas, please [click here](http://pandas.pydata.org/pandas-docs/stable/10min.html).
```
X = data['Close']
Y= mi_ejemplo['Close']
```
Because there is now also date information in our data, we provide two series to `.plot`. `X.index` gives us the datetime index, and `X.values` gives us the pricing values. These are used as the X and Y coordinates to make a graph.
```
plt.plot(X.index, X.values)
plt.ylabel('Price')
plt.legend(['MSFT']);
plt.plot(X.index, X.values)
plt.ylabel('Price')
plt.legend(['LNVGY']);
```
We can get statistics again on real data.
```
np.mean(X)
np.mean(Y)
np.std(X)
np.std(Y)
```
## Getting Returns from Prices
We can use the `pct_change` function to get returns. Notice how we drop the first element after doing this, as it will be `NaN` (nothing -> something results in a NaN percent change).
```
R = X.pct_change()[1:]
T = Y.pct_change()[1:]
```
We can plot the returns distribution as a histogram.
```
plt.hist(R, bins=20)
plt.xlabel('Return')
plt.ylabel('Frequency')
plt.legend(['MSFT Returns']);
plt.hist(T, bins=20)
plt.xlabel('Return')
plt.ylabel('Frequency')
plt.legend(['LNVGY Returns']);
```
Get statistics again.
```
np.mean(R)
np.mean(T)
np.std(R)
np.std(T)
```
Now let's go backwards and generate data out of a normal distribution using the statistics we estimated from Microsoft's returns. We'll see that we have good reason to suspect Microsoft's returns may not be normal, as the resulting normal distribution looks far different.
```
plt.hist(np.random.normal(np.mean(R), np.std(R), 10000), bins=20)
plt.xlabel('Return')
plt.ylabel('Frequency')
plt.legend(['Normally Distributed Returns']);
plt.hist(np.random.normal(np.mean(T), np.std(T), 10000), bins=20)
plt.xlabel('Return')
plt.ylabel('Frequency')
plt.legend(['Normally Distributed Returns']);
```
## Generating a Moving Average
`pandas` has some nice tools to allow us to generate rolling statistics. Here's an example. Notice how there's no moving average for the first 60 days, as we don't have 60 days of data on which to generate the statistic.
```
# pd.rolling_mean is deprecated; use the .rolling() method instead
# Take the average of the last 60 days at each timepoint.
MAVG = X.rolling(60).mean()
plt.plot(X.index, X.values)
plt.plot(MAVG.index, MAVG.values)
plt.ylabel('Price')
plt.legend(['MSFT', '60-day MAVG']);
SPRT = Y.rolling(60).mean()
plt.plot(Y.index, Y.values)
plt.plot(SPRT.index, SPRT.values)
plt.ylabel('Price')
plt.legend(['LNVGY', '60-day SPRT']);
```
This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.
# Running T<sub>1</sub> Experiments with Qiskit
In a T<sub>1</sub> experiment, we measure an excited qubit after a delay. Due to decoherence processes (e.g. amplitude damping channel), it is possible that, at the time of measurement, after the delay, the qubit will not be excited anymore. The larger the delay time is, the more likely is the qubit to fall to the ground state. The goal of the experiment is to characterize the decay rate of the qubit towards the ground state.
We start by fixing a delay time $t$ and a number of shots $s$. Then, by repeating $s$ times the procedure of exciting the qubit, waiting, and measuring, we estimate the probability to measure $|1\rangle$ after the delay. We repeat this process for a set of delay times, resulting in a set of probability estimates.
In the absence of state preparation and measurement errors, the probability to measure $|1\rangle$ after time $t$ is $e^{-t/T_1}$, for a constant $T_1$ (the coherence time), which is our target number. Since state preparation and measurement errors do exist, the qubit's decay towards the ground state assumes the form $Ae^{-t/T_1} + B$, for parameters $A, T_1$, and $B$, which we deduce from the probability estimates. To this end, the T<sub>1</sub> experiment internally calls the `curve_fit` method of `scipy.optimize`.
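To make the fitting step concrete, here is a minimal sketch of that kind of fit on synthetic data (the function and parameter names below are illustrative, not the experiment's actual internals):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, t1, b):
    """Excited-state probability model: A * exp(-t/T1) + B."""
    return a * np.exp(-t / t1) + b

# Synthetic probability estimates for a qubit with T1 = 25 (arbitrary units),
# mimicking what the experiment collects at each delay
rng = np.random.default_rng(0)
delays = np.linspace(1, 100, 20)
p_one = decay(delays, 0.95, 25.0, 0.02) + rng.normal(0, 0.01, delays.size)

# Fit the three parameters; p0 is the initial guess (analogous to t1_guess)
(a_fit, t1_fit, b_fit), _ = curve_fit(decay, delays, p_one, p0=[1.0, 10.0, 0.0])
print(f"Estimated T1: {t1_fit:.1f}")
```

The fitted `t1_fit` should land close to the true value of 25 despite the added noise.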
The following code demonstrates a basic run of a T<sub>1</sub> experiment for qubit 0.
```
from qiskit_experiments.framework import ParallelExperiment
from qiskit_experiments.library import T1
# A T1 simulator
from qiskit_experiments.test.t1_backend import T1Backend
# Simulate T1 of 25 microseconds
t1 = 25
backend = T1Backend(t1=[t1*1e-6])
# Time intervals to wait before measurement
delays = list(range(1, 40, 3))
# Create an experiment for qubit 0,
# setting the unit to microseconds,
# with the specified time intervals
exp = T1(qubit=0,
delays=delays,
unit="us")
# Run the experiment circuits with 1000 shots each,
# and analyze the result
exp_data = exp.run(backend=backend,
shots=1000)
# Print the result
res = exp_data.analysis_result(0)
res
```
It is possible to override the default analysis options. In particular, be aware of the `t1_guess` and `t1_bounds` options. In the following snippet, we instruct the analysis to look for T<sub>1</sub> in the range between 3 and 10. Since T<sub>1</sub> is outside this range (it equals 25 in the example), the analysis will fail.
```
exp.set_analysis_options(t1_bounds=[3, 10])
fail_fit = exp.run(backend=backend,
shots=1000)
print(fail_fit.analysis_result(0))
# Return the default analysis option
exp.set_analysis_options(t1_bounds=exp._default_analysis_options().get("t1_bounds"))
```
You can combine a new experiment with an old one. This way, the T<sub>1</sub> estimate will be based on the data of both experiments, hence will be more accurate. This is done by setting the `experiment_data` parameter of `run` with the returned value of an earlier call to `run`:
```
# Run again and combine with an earlier run.
combined = exp.run(backend=backend,
shots=1000,
experiment_data=exp_data)
# `combined` consists now of two analysis results:
# - The result from the first execution of the experiment
# - The result of the two first executions together
combined_analysis_result = combined.analysis_result(1)
print("T1:", combined_analysis_result["value"])
print("Error bar:", combined_analysis_result["stderr"])
# Compare with the previous error bar:
print("Previous error bar:", res["stderr"])
```
To measure T1 of multiple qubits in the same experiment, we create a parallel experiment:
```
# A simulator where qubits 0 and 1 have T1 of 25 microseconds
backend = T1Backend(t1=[t1*1e-6, t1*1e-6])
# An experiment for qubit 1
exp_q1 = T1(qubit=1,
delays=delays,
unit="us")
# A parallel experiment
parallel_exp = ParallelExperiment([exp, exp_q1])
parallel_data = parallel_exp.run(backend=backend)
```
```
import os
from pycocotools.coco import COCO
import numpy as np
import torch.utils.data as data
import torch
from heatmap import heatmaps_from_keypoints
from imageio import imread
from skimage.transform import resize
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.model_zoo as model_zoo
from torch.nn import init
from torch.autograd.variable import Variable
import matplotlib.pyplot as plt
import pickle
MAIN_FOLDER = "/Volumes/TOSHIBA EXT/data/"
IMAGES_FOLDER = os.path.join(MAIN_FOLDER, "train2017")
IMAGES_FOLDER_TEST = os.path.join(MAIN_FOLDER, "val2017")
ANNOTATION_FILE = os.path.join(MAIN_FOLDER, "annotations/person_keypoints_train2017.json")
ANNOTATION_FILE_TEST = os.path.join(MAIN_FOLDER, "annotations/person_keypoints_val2017.json")
CHECKPOINTS_FOLDER = "./cktp/"
```
### Heatmap
```
def gaussian_heatmap(shape, keypoint_coordinates, std = 1.5):
"""
Computes a square gaussian kernel
:param shape: Shape of the output heatmap
:param keypoint_coordinates: Location of the keypoint
:param std: Standard deviation
:return: Heatmap of shape (1,shape,shape)
"""
# Get the coordinates
x = keypoint_coordinates[0]
y = keypoint_coordinates[1]
a = np.arange(0, shape, 1, float)
b = a[:,np.newaxis]
# Generate the heatmap
heatmap_raw = np.exp(-(((a-x)**2)/(2*std**2) + ((b-y)**2)/(2*std**2)))
# Normalize
heatmap_max = np.amax(heatmap_raw)
heatmap_normalized = heatmap_raw/heatmap_max
# Get it in the accurate format
    heatmap = np.expand_dims(heatmap_normalized, axis=0)
return heatmap
def gaussian_heatmaps(xs, ys, vs, shape=32, image_height=512, image_width=640, std=1.):
"""
Computes heatmaps from the keypoints
:param xs: Array of x coordinates for the keypoints
:param ys: Array of y coordinates for the keypoints
:param shape: shape of the heatmaps
:param image_height: Height of the images the keypoints are for
:param image_width: Width of the images the keypoints are for
    :param std: Standard deviation of the gaussian function used
:return: Heatmaps as numpy arrays of shape (shape, shape, n_keypoints)
"""
# Rescale keypoints coordinates to the heatmaps scale
# ys
height_scale = shape/image_height
ys = ys*height_scale
# xs
width_scale = shape/image_width
xs = xs*width_scale
# Render a heatmap for each joint
heatmaps = gaussian_heatmap(shape, (xs[0],ys[0]))
for i, v in enumerate(vs):
if i!=0:
# If the joint is visible, generate a heatmaps
if v!=0:
new_heatmap = gaussian_heatmap(shape, (xs[i],ys[i]))
# Otherwise the heatmaps is composed of zeros
else:
new_heatmap = np.zeros((1, shape, shape))
heatmaps = np.append(heatmaps, new_heatmap, axis=0)
return heatmaps
def keypoints_from_heatmap(heatmap):
"""Get the coordinates of the max value heatmap - it is the keypoint"""
max_heatmap = np.amax(heatmap)
keypoints = np.where(heatmap == max_heatmap)
if len(keypoints) == 2:
return keypoints[1][0], keypoints[0][0], max_heatmap
elif len(keypoints) == 3:
return keypoints[2][0], keypoints[1][0], max_heatmap
def keypoints_from_heatmaps(heatmaps, shape=32, image_height=512, image_width=640):
"""Get the coordinates of the keypoints from the 17 heatmaps"""
keypoints = []
for i, heatmap in enumerate(heatmaps):
x, y, max_heatmap = keypoints_from_heatmap(heatmap)
if max_heatmap == 0:
keypoints += [0,0,0]
else:
x = x*image_width/shape
y = y*image_height/shape
keypoints += [x,y,2]
return keypoints
def get_xs_ys_vs(keypoints):
""" Splits MSCOCO keypoints notations from [x0, y0, v0, ...] to [x0, ...], [y0, ...] and [v0, ...] """
keypoints_array = np.asarray(keypoints)
xs = np.take(keypoints_array, [3*i for i in range(17)])
ys = np.take(keypoints_array, [3*i+1 for i in range(17)])
vs = np.take(keypoints_array, [3*i+2 for i in range(17)])
return xs, ys, vs
def heatmaps_from_keypoints(keypoints):
xs, ys, vs = get_xs_ys_vs(keypoints)
heatmaps = gaussian_heatmaps(xs, ys, vs)
return heatmaps
```
### Dataset
```
class MSCOCO(data.Dataset):
""" Represents a MSCOCO Keypoints dataset """
def __init__(self, images_folder, annotations_json, train=False, evalu=False, input_type=0):
""" Instantiate a MSCOCO dataset """
super().__init__()
self.images_folder = images_folder
        #Input type indicates if the input is the original image or a combination of original image with filtered image
        #0 : original image
        #1 : original image + skin filter
        #2 : original image + edge filter
        #3 : original image + clustering filter
        #4 : original image + skin filter + edge filter
        #5 : original image + skin filter + clustering filter
self.input_type = input_type
# Load the annotations
self.annotations = COCO(annotations_json)
imgs_id = self.annotations.getImgIds()
if train:
self.img_ids = imgs_id[:int(len(imgs_id)*2/3)]
elif evalu:
self.img_ids = imgs_id[int(len(imgs_id)*2/3)+1:]
else:
self.img_ids = imgs_id
def __len__(self):
return len(self.img_ids)
def __getitem__(self, index):
""" Returns the index-th image with keypoints annotations, both as tensors """
try:
#L is the list of the input's path for a single image
L = []
input_imgs = []
# Get the image informations
img_id = self.img_ids[index]
img = self.annotations.loadImgs(img_id)[0]
# Load the image from the file
img_path = os.path.join(self.images_folder, img['file_name'])
L.append(img_path)
#Need to adapt it depending on the path of the filtered image
if self.input_type == 1 or self.input_type == 4 or self.input_type == 5:
L.append(img_path) #Need to change with skin filtered image
if self.input_type == 2 or self.input_type == 4:
L.append(img_path) #Need to change with edge filtered image
if self.input_type == 3 or self.input_type == 5:
L.append(img_path) #Need to change with clustering filtered image
for image in L:
img_array = load_image(image)
img_array = MSCOCO.transformGreyImage(img_array)
img_tensor = torch.from_numpy(img_array)
img_tensor = img_tensor.float() # Pytorch needs a float tensor
input_imgs.append(img_tensor)
# Get the keypoints
annIds = self.annotations.getAnnIds(imgIds=img['id'])
anns = self.annotations.loadAnns(annIds)
# Some images do not contain any coco object, so anns = []
if len(anns)>0:
keypoints = anns[0]['keypoints'] # anns is a list with only one element
else:
# keypoints are not visible so
keypoints = [0 for i in range(3*17)]
# Check to avoid errors
if len(keypoints)!=3*17:
                print('Warning: Keypoints list for image {} has length {} instead of {}'.format(img_id, len(keypoints), 3*17))
# Generate the heatmaps
heatmaps_array = heatmaps_from_keypoints(keypoints)
#img_tensor_input = torch.cat((img_tensor,img_tensor_filtered),0)
keypoints_tensor = torch.from_numpy(heatmaps_array).float() # Pytorch needs a float tensor
img_tensor = torch.cat(input_imgs,0)
return img_tensor, keypoints_tensor
        except Exception:  # fall back to a known-good sample if loading this one fails
#L is the list of the input's path for a single image
L = []
input_imgs = []
# Get the image informations
img_id = 391895
img = self.annotations.loadImgs(img_id)[0]
# Load the image from the file
img_path = os.path.join(self.images_folder, img['file_name'])
L.append(img_path)
#Need to adapt it depending on the path of the filtered image
if self.input_type == 1 or self.input_type == 4 or self.input_type == 5:
L.append(img_path) #Need to change with skin filtered image
if self.input_type == 2 or self.input_type == 4:
L.append(img_path) #Need to change with edge filtered image
if self.input_type == 3 or self.input_type == 5:
L.append(img_path) #Need to change with clustering filtered image
for image in L:
img_array = load_image(image)
img_array = MSCOCO.transformGreyImage(img_array)
img_tensor = torch.from_numpy(img_array)
img_tensor = img_tensor.float() # Pytorch needs a float tensor
input_imgs.append(img_tensor)
# Get the keypoints
annIds = self.annotations.getAnnIds(imgIds=img['id'])
anns = self.annotations.loadAnns(annIds)
# Some images do not contain any coco object, so anns = []
if len(anns)>0:
keypoints = anns[0]['keypoints'] # anns is a list with only one element
else:
# keypoints are not visible so
keypoints = [0 for i in range(3*17)]
# Check to avoid errors
if len(keypoints)!=3*17:
                print('Warning: Keypoints list for image {} has length {} instead of {}'.format(img_id, len(keypoints), 3*17))
# Generate the heatmaps
heatmaps_array = heatmaps_from_keypoints(keypoints)
#img_tensor_input = torch.cat((img_tensor,img_tensor_filtered),0)
keypoints_tensor = torch.from_numpy(heatmaps_array).float() # Pytorch needs a float tensor
img_tensor = torch.cat(input_imgs,0)
return img_tensor, keypoints_tensor
@staticmethod
def transformGreyImage(img_array):
# Black and white images
if len(img_array.shape)==2:
# Add a channel axis
img_array = np.expand_dims(img_array, axis=2)
# Fill all the axes with the black&white image
img_array = np.concatenate((img_array, img_array, img_array), axis=2)
img_array = np.transpose(img_array, (2,1,0))
return img_array
# Homemade image loader
def load_image(image_path):
image = imread(image_path)
image = resize(image, (256, 256))
return image
```
### Model
```
class ConvRelu(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, training=True, padding=1, stride=1):
super().__init__()
self.conv = nn.Conv2d(in_channels,
out_channels,
kernel_size,
padding=padding,
stride=stride)
self.relu = nn.ReLU()
self.batch_norm = nn.BatchNorm2d(out_channels)
self.training = training
def forward(self, x):
x = self.relu(self.conv(x))
if self.training:
x = self.batch_norm(x)
return x
class Model(nn.Module):
def __init__(self, input_type=0):
super().__init__()
self.pool = nn.MaxPool2d(2)
#1 image
if input_type == 0:
input_size = 3
#2 images
elif input_type == 1 or input_type == 2 or input_type == 3:
input_size = 6
#3 images
elif input_type == 4 or input_type == 5:
input_size = 9
self.feature_extraction = nn.Sequential(
ConvRelu(input_size, 64, 3),
ConvRelu(64, 64, 3),
self.pool,
ConvRelu(64, 128, 3),
#ConvRelu(128, 128, 3),
self.pool,
ConvRelu(128, 128, 3),
#ConvRelu(128, 128, 3),
self.pool,
ConvRelu(128, 512, 3),
#ConvRelu(512, 512, 3),
)
self.features_to_heatmaps = nn.Conv2d(512, 17, 1) # 17 kind of joints, 17 heatmaps
def forward(self, x):
x = self.feature_extraction(x)
heatmaps = self.features_to_heatmaps(x)
return heatmaps
def plotKeypointsOverOutputModel(index,dataset,model,img_folder):
"""Forward a img to the model and display the output keypoints over the image.
It enables us to see the loss evolution over the model visually over the image
index is the index of the img in the dataset argument"""
# Get an image
imgId = dataset.img_ids[index]
img, keypoints = dataset[index]
# Transform into a pytorch model input and Forward pass
y = model(Variable(img.unsqueeze(0)))
#Get the coordinates of the keypoints
keypoints = keypoints_from_heatmaps(y[0].data.numpy())
# Plot the image
img_anno = dataset.annotations.loadImgs(imgId)[0]
img_path = os.path.join(img_folder, img_anno['file_name'])
img_array = load_image(img_path)
img_array_resized = resize(img_array, (512, 640))
plt.figure()
plt.title('Original image')
plt.imshow(img_array_resized)
xs,ys,vs = get_xs_ys_vs(keypoints)
plt.plot(xs,ys,'ro',color='c')
plt.show()
```
### Configuration of the training
```
def conf_training(resuming=False, input_type=0, *args):
    """Function that initiates the configuration of the model, depending on whether a
    saved model is loaded or a new model is being trained"""
#Data
trainset = MSCOCO(IMAGES_FOLDER, ANNOTATION_FILE, train=True, input_type=input_type)
evalset = MSCOCO(IMAGES_FOLDER, ANNOTATION_FILE, evalu=True, input_type=input_type)
# Loss
criterion = nn.MSELoss()
#criterion = nn.CrossEntropyLoss()
# Number of epochs
epochs = 10
# Batch sizes
batch_size_train = 1
batch_size_val = 1
if not resuming:
# Model
net = Model(input_type=input_type)
# Optimizer
optimizer = torch.optim.Adam(net.parameters())
#First epoch
current_epoch = -1
else:
#Load the last saved model with its configurations
checkpoint = torch.load(os.path.join(MAIN_FOLDER,"model_"+args[0]))
#Model
net = Model(input_type=input_type)
net.load_state_dict(checkpoint['state_dict'])
#Current_epoch
current_epoch = checkpoint['epoch']
#Optimizer
optimizer = torch.optim.Adam(net.parameters())
#Data loaders
trainloader = torch.utils.data.DataLoader(trainset,
batch_size=batch_size_train,
shuffle=True,
num_workers=4
)
evaloader = torch.utils.data.DataLoader(evalset,
batch_size=batch_size_val,
shuffle=True,
num_workers=4
)
evalset_length = len(evalset)
return epochs, trainloader, evaloader, optimizer, net, current_epoch, criterion, evalset_length, evalset
```
### Running the model
```
def training(epochs, trainloader, evaloader, optimizer, net, current_epoch, criterion, evalset_length, evalset):
plt.ion()
if current_epoch == -1:
#If not resuming a model, creating the loss file
lossFile = open(os.path.join(MAIN_FOLDER,"loss"),'wb')
pickle.dump({"loss_train":{}, "loss_val":{}},lossFile)
lossFile.close()
start_epoch = current_epoch + 1
for epoch in range(start_epoch, epochs): # loop over the dataset multiple times
print("Epoch number {}".format(epoch))
#plotKeypointsOverOutputModel(0,evalset,net,IMAGES_FOLDER)#Displaying the result over the first element of the evalset
running_loss = 0.0
        #For each epoch, we keep the loss in a dictionary with epoch_nb as key and list of losses as value
lossFile = open(os.path.join(MAIN_FOLDER,"loss"),'rb')
loss_dic = pickle.load(lossFile)
lossFile.close()
lossFile = open(os.path.join(MAIN_FOLDER,"loss"),'wb')
loss_dic['loss_train'][epoch] = []
loss_dic['loss_val'][epoch] = []
pickle.dump(loss_dic,lossFile)
lossFile.close()
for i, data in enumerate(trainloader, 0):
print("Batch number {}".format(i))
# get the inputs
inputs, labels = data
# wrap them in Variable
inputs, labels = Variable(inputs), Variable(labels)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.data[0]
if i % 2000 == 1999: # print every 2000 mini-batches
print('Trainset loss[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
#Save the loss_train in disk for each batch
lossFile = open(os.path.join(MAIN_FOLDER,"loss"),'rb')
loss_dic = pickle.load(lossFile)
lossFile.close()
lossFile = open(os.path.join(MAIN_FOLDER,"loss"),'wb')
loss_dic['loss_train'][epoch] += [loss.data[0]]
pickle.dump(loss_dic,lossFile)
lossFile.close()
#Save the model
#net.cpu()
state = {
'epoch': epoch,
'state_dict': net.state_dict()
}
torch.save(state, os.path.join(MAIN_FOLDER,"model_"+str(epoch))) #Save the torch model after each epoch
#net.cuda()
running_loss_eval = 0.0
print("Starting Eval for Epoch {}".format(epoch))
for i, data in enumerate(evaloader, 0):
# get the inputs
inputs, labels = data
# wrap them in Variable
inputs, labels = Variable(inputs), Variable(labels)
# forward
outputs = net(inputs)
loss = criterion(outputs, labels)
# print statistics
running_loss_eval += loss.data[0]
#Save the loss_val in disk for each batch
lossFile = open(os.path.join(MAIN_FOLDER,"loss"),'rb')
loss_dic = pickle.load(lossFile)
lossFile.close()
lossFile = open(os.path.join(MAIN_FOLDER,"loss"),'wb')
loss_dic['loss_val'][epoch] += [loss.data[0]]
pickle.dump(loss_dic,lossFile)
lossFile.close()
print("Evalset Loss for Epoch {0} : {1}".format(epoch,running_loss_eval/evalset_length))
#loss_val[epoch] += [running_loss_eval/evalset_length] #Stock the loss on evalset for each epoch
print('Finished Training')
def launch_training(resuming=False, input_type=0, *args):
    """Function that configures the model from scratch or from a saved model, and then trains it"""
epochs, trainloader, evaloader, optimizer, net, current_epoch, criterion, evalset_length, evalset = conf_training(resuming=resuming,input_type=input_type, *args)
training(epochs, trainloader, evaloader, optimizer, net, current_epoch, criterion, evalset_length, evalset)
def launch_testing(model_epoch, input_type=0):
"""Function that launches a model over the test dataset"""
testset = MSCOCO(IMAGES_FOLDER_TEST, ANNOTATION_FILE_TEST,input_type=input_type)
#Load the training model
checkpoint = torch.load(os.path.join(MAIN_FOLDER, model_epoch))
net = Model(input_type=input_type)
net.load_state_dict(checkpoint['state_dict'])
# Loss
criterion = nn.MSELoss()
# Batch sizes
batch_size_test = 1
#TestLoader
evaloader = torch.utils.data.DataLoader(testset,
batch_size=batch_size_test,
shuffle=True,
num_workers=4
)
loss_test = 0.0
for i, data in enumerate(evaloader):
inputs, labels = data[0], data[1]
inputs, labels = Variable(inputs), Variable(labels)
outputs = net(inputs)
        loss = criterion(outputs, labels)
loss_test += loss.data[0]
if i % 500 ==0:
            print("Current loss over the test dataset: {0} after iteration {1}".format(loss_test/(i+1), i+1))
loss_test = loss_test/len(testset)
print("Average loss over the test dataset: {}".format(loss_test))
#Launch a training over a new model with inputSize = 0
launch_training(False,0)
#Launch a training over a model currently trained with inputSize = 0
#launch_training(True,0,path_model)
#Launch a trained model over the test dataset, with inputSize = 0
#launch_testing(path_model,0)
%cd cocoapi
!ls
```
# Lab 05
## Solving a stiff system of differential equations
### Konks Eric, Б01-818
X.9.7
$$y_1'=-0.04y_1+10^4y_2y_3$$
$$y_2'=0.04y_1-10^4y_2y_3-3\cdot10^7y_2^2$$
$$y_3'=3\cdot10^7y_2^2$$
$$y_1(0)=1,\ y_2(0)=0,\ y_3(0)=0$$
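As an independent cross-check (not part of the lab's own solver below), SciPy's `solve_ivp` with the implicit `Radau` method is well suited to stiff systems like this one. Since the right-hand sides sum to zero, $y_1+y_2+y_3$ should remain 1 along the solution:

```python
from scipy.integrate import solve_ivp

def robertson(t, y):
    # Right-hand side of the stiff system above
    y1, y2, y3 = y
    return [-0.04*y1 + 1e4*y2*y3,
            0.04*y1 - 1e4*y2*y3 - 3e7*y2**2,
            3e7*y2**2]

# Radau is an implicit Runge-Kutta method appropriate for stiff problems
sol = solve_ivp(robertson, (0, 40), [1.0, 0.0, 0.0],
                method="Radau", rtol=1e-6, atol=1e-9)
print(sol.y[:, -1])  # state at t = 40
```

Comparing this reference solution against the hand-written implicit Runge-Kutta solver below is a quick sanity check on both.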
```
import unittest
import logging
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#logging.basicConfig(level=logging.DEBUG)
class RODE:
def __init__(self):
self.log = logging.getLogger("RODE")
def k_calc_stop(self, k_cur, k_next, delta):
if id(k_cur) == id(k_next):
return False
if np.abs(np.linalg.norm(np.matrix(k_cur)) - np.linalg.norm(np.matrix(k_next))) < delta:
return True
return False
def k_calc(self, stages, c_vec, b_vec, a, f_vec, u_res, h, t_res, delta):
k_next = [[0 for _ in range(stages)] for _ in range(len(f_vec))]
k_cur = k_next
itr = 0
while not self.k_calc_stop(k_cur, k_next, delta):
k_tmp = k_next
k_next = [k_cur[i][:] for i in range(len(k_cur))]
k_cur = k_tmp
for s in range(stages):
u_k = [u_res[-1][j]+h*sum(a[s][m]*k_cur[j][m] for m in range(s)) for j in range(len(f_vec))]
self.log.debug(f"Iter[{itr}]|S[{s}]: u_k: {u_k}")
for i in range(len(f_vec)):
k_next[i][s] = f_vec[i](t_res[-1]+c_vec[s]*h, u_k)
self.log.debug(f"Iter[{itr}]]: k: {k_next}")
itr = itr + 1
return k_next
def solve(self, stages, c_vec, b_vec, a, f_vec, u_init, h, t_range, delta):
u_res = [u_init,]
t_res = [t_range[0],]
while t_res[-1] < t_range[1]:
u_cur = [0 for _ in range(len(f_vec))]
k = self.k_calc(stages, c_vec, b_vec, a, f_vec, u_res, h, t_res, delta)
for i in range(len(f_vec)):
u_cur[i] = u_res[-1][i]+h*sum(b_vec[s]*k[i][s] for s in range(stages))
self.log.debug(f"T[{t_res[-1]}]: k: {k}")
self.log.debug(f"T[{t_res[-1]}]: u: {u_cur}")
u_res.append(u_cur)
t_res.append(t_res[-1]+h)
return (t_res, u_res)
log = logging.getLogger()
c_vec = [1/2-np.sqrt(15)/10, 1/2, 1/2+np.sqrt(15)/10]
b_vec = [5/18, 4/9, 5/18]
a = [[5/36,2/9-np.sqrt(15)/15,5/36-np.sqrt(15)/30],
[5/36+np.sqrt(15)/24,2/9,5/36-np.sqrt(15)/24],
[5/36+np.sqrt(15)/30,2/9+np.sqrt(15)/15,5/36]]
#c_vec = [1/3, 1]
#b_vec = [3/4, 1/4]
#a = [[5/12, -1/12], [3/4, 1/4]]
log.debug(f"c={c_vec}")
log.debug(f"b={b_vec}")
log.debug(f"a={a}")
u_init = [1, 0, 0]
t_range = (0, 40)
delta = 10e-6
h = 0.001
f1 = lambda t, u_vec: -0.04*u_vec[0]+10**4*u_vec[1]*u_vec[2]
f2 = lambda t, u_vec: 0.04*u_vec[0]-10**4*u_vec[1]*u_vec[2]-3*10**7*u_vec[1]**2
f3 = lambda t, u_vec: 3*10**7*u_vec[1]**2
f_vec = [f1, f2, f3]
rode = RODE()
res = rode.solve(len(c_vec), c_vec, b_vec, a, f_vec, u_init, h, t_range, delta)
df = pd.DataFrame({"t": res[0], "(y1, y2, y3)": res[1]})
print(df)
def mplot(x, y, xlabel, ylabel):
plt.plot(x, y, label=f"{ylabel}({xlabel})")
plt.grid(True)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.legend()
plt.show()
mplot(res[0], [j[0] for j in res[1]], 't', 'y1')
mplot(res[0], [j[1] for j in res[1]], 't', 'y2')
mplot(res[0], [j[2] for j in res[1]], 't', 'y3')
```
# Publications markdown generator for academicpages
Takes a TSV of publications with metadata and converts them for use with [academicpages.github.io](https://academicpages.github.io). This is an interactive Jupyter notebook ([see more info here](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html)). The core python code is also in `publications.py`. Run either from the `markdown_generator` folder after replacing `publications.tsv` with one containing your data.
TODO: Make this work with BibTex and other databases of citations, rather than Stuart's non-standard TSV format and citation style.
## Data format
The TSV needs to have the following columns: pub_date, title, venue, excerpt, citation, site_url, and paper_url, with a header at the top.
- `excerpt` and `paper_url` can be blank, but the others must have values.
- `pub_date` must be formatted as YYYY-MM-DD.
- `url_slug` will be the descriptive part of the .md file and the permalink URL for the page about the paper. The .md file will be `YYYY-MM-DD-[url_slug].md` and the permalink will be `https://[yourdomain]/publications/YYYY-MM-DD-[url_slug]`
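For illustration, the naming scheme described above can be sketched as follows (the values and domain here are made up, and note that the generator loop in this notebook actually derives its file names from `pub_year` and the `paper_url` base name):

```python
# Hypothetical example values, not read from the TSV
pub_date = "2015-10-01"
url_slug = "paper-title-number-1"
domain = "academicpages.github.io"  # placeholder for your own domain

md_filename = f"{pub_date}-{url_slug}.md"
permalink = f"https://{domain}/publications/{pub_date}-{url_slug}"
print(md_filename)   # 2015-10-01-paper-title-number-1.md
print(permalink)
```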
This is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create).
```
!cat publications.tsv
```
## Import pandas
We are using the very handy pandas library for dataframes.
```
import pandas as pd
```
## Import TSV
Pandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or `\t`.
I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.
```
publications = pd.read_csv("publications.tsv", sep="\t", header=0)
publications
publications.columns
```
## Escape special characters
YAML is very picky about what it accepts as a valid string, so we replace single quotes, double quotes, and ampersands with their HTML-encoded equivalents. This makes them less readable in raw form, but they are parsed and rendered nicely.
```
html_escape_table = {
    "&": "&amp;",
    '"': "&quot;",
    "'": "&#39;"
}
def html_escape(text):
"""Produce entities within text."""
return "".join(html_escape_table.get(c,c) for c in text)
```
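A self-contained sketch of the escaping behavior described above, assuming the table maps `&`, `"`, and `'` to their HTML entities:

```python
# Sketch of the HTML-escaping described above; characters not in the
# table pass through unchanged.
html_escape_table = {
    "&": "&amp;",
    '"': "&quot;",
    "'": "&#39;"
}

def html_escape(text):
    """Produce entities within text."""
    return "".join(html_escape_table.get(c, c) for c in text)

print(html_escape('Smith & Jones, "Example"'))
# Smith &amp; Jones, &quot;Example&quot;
```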
## Creating the markdown files
This is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then concatenates a big string (```md```) that contains the markdown for each entry. It does the YAML metadata first, then the description for the individual page.
```
import os
for row, item in publications.iterrows():
paper_name = item.paper_url.rsplit('/', 1)[1].split('.')[0]
md_filename = str(item.pub_year) + "-" + paper_name + ".md"
html_filename = str(item.pub_year) + "-" + paper_name
## YAML variables
md = "---\ntitle: \"" + item.title + '"\n'
md += """collection: publications"""
md += """\npermalink: /publication/""" + html_filename
md += "\nyear: " + str(item.pub_year)
md += "\nconference: '" + html_escape(item.conference) + "'"
md += "\nauthors: " + "[" + ", ".join(["'" + a + "'" for a in item.authors.split(', ')]) + "]"
md += "\nlocation: '" + html_escape(item.location) + "'"
md += "\naccepted: '" + str(item.accepted) + "'"
md += "\nsubmitted: '" + str(item.submitted) + "'"
if len(str(item.paper_url)) > 5:
md += "\npaper_url: '" + item.paper_url + "'"
if item.video_url != '-':
md += "\nvideo_url: '" + item.video_url + "'"
md += "\n---"
## Markdown description for individual page
#if len(str(item.paper_url)) > 5:
# md += "\n[Download paper here](" + item.paper_url + ")\n"
md_filename = os.path.basename(md_filename)
with open("../_publications/" + md_filename, 'w') as f:
f.write(md)
```
These files are in the publications directory, one directory below where we're working from.
```
!ls ../_publications/
!cat ../_publications/2019-seacma.md
```
|
github_jupyter
|
## GMLS-Nets: 1D Regression of Linear and Non-linear Operators $L[u]$.
__Ben J. Gross__, __Paul J. Atzberger__ <br>
http://atzberger.org/
Examples showing how GMLS-Nets can be used to perform regression for some basic linear and non-linear differential operators in 1D.
__Parameters:__ <br>
The key parameter terms to adjust are:<br>
``op_type``: The operator type.<br>
``flag_mlp_case``: The type of mapping unit to use.<br>
__Examples of Non-linear Operators ($u{u_x},u_x^2,u{u_{xx}},u_{xx}^2$) :__<br>
To run training for a non-linear operator like ``u*ux`` using MLP for the non-linear GMLS mapping unit, you can use:<br>
``op_type='u*ux';`` <br>
``flag_mlp_case = 'NonLinear1';`` <br>
You can obtain different performance by adjusting the mapping architecture and hyperparameters of the network.
__Examples of linear Operators ($u_x,u_{xx}$):__<br>
To run training for a linear operator like the 1d Laplacian ``uxx`` with a linear mapping unit, you can use<br>
``op_type='uxx';``<br>
``flag_mlp_case = 'Linear1';``<br>
The notebook is organized around different combinations of these settings, allowing the methods to be explored. The code is easy to modify to experiment with other operators; for example, see the dataset classes.
### Imports
```
import sys;
# setup path to location of gmlsnets_pytorch (if not install system-wide)
path_gmlsnets_pytorch = '../../';
sys.path.append(path_gmlsnets_pytorch);
import torch;
import torch.nn as nn;
import numpy as np;
import pickle;
import matplotlib.pyplot as plt;
import pdb
import time
import os
# setup gmlsnets package
import gmlsnets_pytorch as gmlsnets;
import gmlsnets_pytorch.nn;
import gmlsnets_pytorch.vis;
import gmlsnets_pytorch.dataset;
# dereference a few common items
MapToPoly_Function = gmlsnets.nn.MapToPoly_Function;
get_num_polys = MapToPoly_Function.get_num_polys;
weight_one_minus_r = MapToPoly_Function.weight_one_minus_r;
eval_poly = MapToPoly_Function.eval_poly;
print("Packages:");
print("torch.__version__ = " + str(torch.__version__));
print("numpy.__version__ = " + str(np.__version__));
print("gmlsnets.__version__ = " + str(gmlsnets.__version__));
```
### Parameters and basic setup
```
# Setup the parameters
batch_size = int(1e2);
flag_extend_periodic = False; # periodic boundaries
flag_dataset = 'diffOp1';
run_name = '%s_Test1'%flag_dataset;
base_dir = './output/regression_diff_op_1d/%s'%run_name;
flag_print_model = False;
print("Settings:");
print("flag_dataset = " + flag_dataset);
print("run_name = " + run_name);
print("base_dir = " + base_dir);
if not os.path.exists(base_dir):
os.makedirs(base_dir);
# Configure devices
if torch.cuda.is_available():
num_gpus = torch.cuda.device_count();
print("num_gpus = " + str(num_gpus));
if num_gpus >= 4:
device = torch.device('cuda:3');
else:
device = torch.device('cuda:0');
else:
device = torch.device('cpu');
print("device = " + str(device));
```
### Setup GMLS-Net for regressing differential operator
```
class gmlsNetRegressionDiffOp1(nn.Module):
"""Sets up a GMLS-Net for regression differential operator in 1D."""
def __init__(self,
flag_GMLS_type=None,
porder1=None,Nc=None,
pts_x1=None,layer1_epsilon=None,
weight_func1=None,weight_func1_params=None,
mlp_q1=None,pts_x2=None,
device=None,flag_verbose=0,
**extra_params):
super(gmlsNetRegressionDiffOp1, self).__init__();
self.layer_types = [];
if device is None:
device = torch.device('cpu'); # default
# --
Ncp1 = mlp_q1.channels_out; # number of channels out of the MLP-Pointwise layer
num_features1 = mlp_q1.channels_out; # number of channels out (16 typical)
GMLS_Layer = gmlsnets.nn.GMLS_Layer;
ExtractFromTuple = gmlsnets.nn.ExtractFromTuple;
PermuteLayer = gmlsnets.nn.PermuteLayer;
PdbSetTraceLayer = gmlsnets.nn.PdbSetTraceLayer;
# --- Layer 1
#flag_layer1 = 'standard_conv1';
flag_layer1 = 'gmls1d_1';
self.layer_types.append(flag_layer1);
if flag_layer1 == 'standard_conv1':
self.layer1 = nn.Sequential(
nn.Conv1d(in_channels=Nc,out_channels=num_features1,
kernel_size=5,stride=1,padding=2,bias=False),
).to(device);
elif flag_layer1 == 'gmls1d_1':
self.layer1 = nn.Sequential(
GMLS_Layer(flag_GMLS_type, porder1,
pts_x1, layer1_epsilon,
weight_func1, weight_func1_params,
mlp_q=mlp_q1, pts_x2=pts_x2, device=device,
flag_verbose=flag_verbose),
#PdbSetTraceLayer(),
ExtractFromTuple(index=0), # just get the forward output associated with the mapping and not pts_x2
#PdbSetTraceLayer(),
PermuteLayer((0,2,1))
).to(device);
else:
raise Exception('flag_layer1 type not recognized.');
def forward(self, x):
out = self.layer1(x);
return out;
```
### Setup the Model: Neural Network
```
# setup sample point locations
xj = torch.linspace(0,1.0,steps=101,device=device).unsqueeze(1);
xi = torch.linspace(0,1.0,steps=101,device=device).unsqueeze(1);
# make a numpy copy for plotting and some other routines
np_xj = xj.cpu().numpy(); np_xi = xi.cpu().numpy();
# setup parameters
Nc = 1; # scalar field
Nx = xj.shape[0]; num_dim = xj.shape[1];
porder = 2; num_polys = get_num_polys(porder,num_dim);
weight_func1 = MapToPoly_Function.weight_one_minus_r;
targ_kernel_width = 11.5; layer1_epsilon = 0.4*0.5*np.sqrt(2)*targ_kernel_width/Nx;
#targ_kernel_width = 21.5; layer1_epsilon = 0.4*0.5*np.sqrt(2)*targ_kernel_width/Nx;
weight_func1_params = {'epsilon': layer1_epsilon,'p':4};
color_input = (0.05,0.44,0.69);
color_output = (0.44,0.30,0.60);
color_predict = (0.05,0.40,0.5);
color_target = (221/255,103/255,103/255);
# print the current settings
print("GMLS Parameters:")
print("porder = " + str(porder));
print("num_dim = " + str(num_dim));
print("num_polys = " + str(num_polys));
print("layer1_epsilon = %.3e"%layer1_epsilon);
print("weight_func1 = " + str(weight_func1));
print("weight_func1_params = " + str(weight_func1_params));
print("xj.shape = " + str(xj.shape));
print("xi.shape = " + str(xi.shape));
# create an MLP for training the non-linear part of the GMLS Net
#flag_mlp_case = 'Linear1';flag_mlp_case = 'Nonlinear1'
flag_mlp_case = 'Nonlinear1';
if (flag_mlp_case == 'Linear1'):
layer_sizes = [];
num_depth = 0; # number of internal layers
num_hidden = -1; # number of hidden per layer
channels_in = Nc; # number of poly channels (matches input u channel size)
channels_out = 1; # number of output filters
layer_sizes.append(num_polys); # input
layer_sizes.append(1); # output, single channel always, for vectors, we use channels_out separate units.
mlp_q1 = gmlsnets.nn.MLP_Pointwise(layer_sizes,channels_in=channels_in,channels_out=channels_out,
flag_bias=False).to(device);
elif (flag_mlp_case == 'Nonlinear1'):
layer_sizes = [];
num_input = Nc*num_polys; # number of channels*num_polys, allows for cross-channel coupling
num_depth = 4; # number of internal layers
num_hidden = 100; # number of hidden per layer
num_out_channels = 16; # number of output filters
layer_sizes.append(num_polys);
for k in range(num_depth):
layer_sizes.append(num_hidden);
layer_sizes.append(1); # output, single channel always, for vectors, we use channels_out separate units.
mlp_q1 = gmlsnets.nn.MLP_Pointwise(layer_sizes,channels_out=num_out_channels,
flag_bias=True).to(device);
if flag_print_model:
print("mlp_q1:");
print(mlp_q1);
# Setup the Neural Network for Regression
flag_verbose = 0;
flag_case = 'standard';
# Setup the model
xi = xi.float();
xj = xj.float();
model = gmlsNetRegressionDiffOp1(flag_case,porder,Nc,xj,layer1_epsilon,
weight_func1,weight_func1_params,
mlp_q1=mlp_q1,pts_x2=xi,
device=device,
flag_verbose=flag_verbose);
if flag_print_model:
print("model:");
print(model);
```
## Setup the training and test data
```
### Generate Dataset
if flag_dataset == 'diffOp1':
# Use the FFT to represent differential operators for training data sets.
#
# Setup a data set of the following:
# To start let's do regression for the Laplacian (not inverse, just action of it, like finding FD)
#
#op_type = 'u*ux';op_type = 'ux*ux';op_type = 'uxx';op_type = 'u*uxx';op_type = 'uxx*uxx';
op_type = 'u*ux';
flag_verbose = 1;
num_training_samples = int(5e4);
nchannels = 1;
nx = np_xj.shape[0];
#alpha1 = 0.05;
alpha1 = 0.1;
scale_factor = 1e2;
train_dataset = gmlsnets.dataset.diffOp1(op_type=op_type,op_params=None,
gen_mode='exp1',gen_params={'alpha1':alpha1},
num_samples=num_training_samples,
nchannels=nchannels,nx=nx,
noise_factor=0,scale_factor=scale_factor,
flag_verbose=flag_verbose);
train_dataset = train_dataset.to(device);
if flag_verbose > 0:
print("done.");
num_test_samples = int(1e4);
scale_factor = 1e2;
test_dataset = gmlsnets.dataset.diffOp1(op_type=op_type,op_params=None,
gen_mode='exp1',gen_params={'alpha1':alpha1},
num_samples=num_test_samples,
nchannels=nchannels,nx=nx,
noise_factor=0,scale_factor=scale_factor,
flag_verbose=flag_verbose);
test_dataset = test_dataset.to(device);
if flag_verbose > 0:
print("done.");
# Put the data into the train_dataset and test_dataset structures for processing
else:
msg = "flag_dataset not recognized.";
msg += "flag_data_set = " + str(flag_data_set);
raise Exception(msg);
# Data loader
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,batch_size=batch_size,shuffle=True);
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,batch_size=batch_size,shuffle=False);
%matplotlib inline
# plot sample of the training data
gmlsnets.vis.plot_dataset_diffOp1(train_dataset,np_xj,np_xi,rows=4,cols=6,
title="Data Samples: u, f=L[u], L = %s"%op_type);
```
## Train the Model
### Custom Functions
```
def custom_loss_least_squares(val1,val2):
r"""Computes the Mean-Square-Error (MSE) over the entire batch."""
diff_flat = (val1 - val2).flatten();
N = diff_flat.shape[0];
loss = torch.sum(torch.pow(diff_flat,2),-1)/N;
return loss;
def domain_periodic_repeat(Z):
r"""Extends the input periodically."""
Z_periodic = torch.cat((Z, Z, Z), 2);
return Z_periodic;
def domain_periodic_extract(Z_periodic):
r"""Extracts the middle unit cell portion of the extended data."""
nn = int(Z_periodic.shape[2]/3);
Z = Z_periodic[:,:,nn:2*nn];
return Z;
```
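The periodic helpers above can be checked with a torch-free numpy sketch of the same repeat/extract round trip on a `(batch, channel, nx)` array:

```python
import numpy as np

# Mirror of domain_periodic_repeat / domain_periodic_extract above,
# written with numpy so the round trip is easy to verify by hand.
def periodic_repeat(Z):
    # tile the domain three times along the spatial axis
    return np.concatenate((Z, Z, Z), axis=2)

def periodic_extract(Z_periodic):
    # keep the middle unit cell of the extended data
    nn = Z_periodic.shape[2] // 3
    return Z_periodic[:, :, nn:2 * nn]

Z = np.arange(2 * 1 * 5).reshape(2, 1, 5)   # (batch, channel, nx)
Zp = periodic_repeat(Z)
assert Zp.shape == (2, 1, 15)
assert np.array_equal(periodic_extract(Zp), Z)  # extract undoes repeat
```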
### Initialize
```
loss_list = np.empty(0); loss_step_list = np.empty(0);
save_skip = 1; step_count = 0;
```
### Train the network.
```
num_epochs = int(3e0); #int(1e4);
learning_rate = 1e-2;
print("Training the network with:");
print("");
print("model:");
print("model.layer_types = " + str(model.layer_types));
print("");
# setup the optimization method and loss function
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate);
#loss_func = nn.CrossEntropyLoss();
#loss_func = nn.MSELoss();
loss_func = custom_loss_least_squares;
print("num_epochs = %d"%num_epochs);
print("batch_size = %d"%batch_size);
print(" ");
# Train the model
flag_time_it = True;
if flag_time_it:
time_1 = time.time();
print("-"*80);
num_steps = len(train_loader);
for epoch in range(num_epochs):
for i, (input,target) in enumerate(train_loader):
input = input.to(device);
target = target.to(device);
if flag_extend_periodic:
# Extend input periodically
input_periodic = domain_periodic_repeat(input);
# Forward pass
output_periodic = model(input_periodic);
output = domain_periodic_extract(output_periodic);
else:
output = model(input);
# Compute loss
loss = loss_func(output,target);
# Display
if step_count % save_skip == 0:
np_loss = loss.cpu().detach().numpy();
loss_list = np.append(loss_list,np_loss);
loss_step_list = np.append(loss_step_list,step_count);
# Back-propagation for gradients and use to optimize
optimizer.zero_grad();
loss.backward();
optimizer.step();
step_count += 1;
if ((i + 1) % 100) == 0 or i == 0:
msg = 'epoch: [%d/%d]; '%(epoch+1,num_epochs);
msg += 'batch_step = [%d/%d]; '%(i + 1,num_steps);
msg += 'loss_MSE: %.3e.'%(loss.item());
print(msg);
if flag_time_it and i > 0:
msg = 'elapsed_time = %.4e secs \n'%(time.time() - time_1);
print(msg);
time_1 = time.time();
print("done training.")
print("-"*80);
```
### Plot Loss
```
%matplotlib inline
plt.figure(figsize=(8,6));
plt.plot(loss_step_list,loss_list,'b-');
plt.yscale('log');
plt.xlabel('step');
plt.ylabel('loss');
plt.title('Loss');
```
### Test the Neural Network Predictions
```
print("Testing predictions of the neural network:");
flag_save_tests = True;
if flag_save_tests:
test_data = {};
# Save the first few to show as examples of labeling
saved_test_input = [];
saved_test_target = [];
saved_test_output_pred = [];
count_batch = 0;
with torch.no_grad():
total = 0; II = 0;
avg_error = 0;
for input,target in test_loader: # loads data in batches and then sums up
if (II >= 1000):
print("tested on %d samples"%total);
II = 0;
input = input.to(device); target = target.to(device);
# Compute model
flag_extend_periodic = False;
if flag_extend_periodic:
# Extend input periodically
input_periodic = domain_periodic_repeat(input);
# Forward pass
output_periodic = model(input_periodic);
output = domain_periodic_extract(output_periodic);
else:
output = model(input);
# Compute loss
loss = loss_func(output,target);
# Record the results
avg_error += loss;
total += output.shape[0];
II += output.shape[0];
count_batch += 1;
NN = output.shape[0];
for k in range(min(NN,20)): # save the first 20 samples of each batch
saved_test_input.append(input[k]);
saved_test_target.append(target[k]);
saved_test_output_pred.append(output[k]);
print("");
print("Tested on a total of %d samples."%total);
print("");
# Compute RMSD error
test_accuracy = avg_error.cpu()/count_batch;
test_accuracy = np.sqrt(test_accuracy);
print("The neural network has RMSD error %.2e on the %d test samples."%(test_accuracy,total));
print("");
```
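The RMSD reported above is the square root of the batch-averaged MSE; a small numpy sketch with made-up per-batch losses:

```python
import numpy as np

# Illustration of the RMSD computation above: average the per-batch
# MSE values, then take the square root (numbers here are made up).
batch_mse = np.array([0.04, 0.09, 0.05])  # hypothetical per-batch MSE losses
rmsd = np.sqrt(batch_mse.mean())
print("RMSD = %.4f" % rmsd)
```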
### Show a Sample of the Predictions
```
# collect a subset of the data to show and attach named labels
%matplotlib inline
num_prediction_samples = len(saved_test_input);
print("num_prediction_samples = " + str(num_prediction_samples));
#II = np.random.permutation(num_samples); # compute random collection of indices @optimize
II = np.arange(num_prediction_samples);
if flag_dataset == 'name-here' or 0 == 0:
u_list = []; f_list = []; f_pred_list = [];
for I in np.arange(0,min(num_prediction_samples,16)):
u_list.append(saved_test_input[II[I]].cpu());
f_list.append(saved_test_target[II[I]].cpu());
f_pred_list.append(saved_test_output_pred[II[I]].cpu());
# plot predictions against test data
gmlsnets.vis.plot_samples_u_f_fp_1d(u_list,f_list,f_pred_list,np_xj,np_xi,rows=4,cols=6,
title="Test Samples and Predictions: u, f=L[u], L = %s"%op_type);
```
### Save Model
```
model_filename = '%s/model.ckpt'%base_dir;
print("model_filename = " + model_filename);
torch.save(model.state_dict(), model_filename);
model_filename = "%s/model_state.pickle"%base_dir;
print("model_filename = " + model_filename);
f = open(model_filename,'wb');
pickle.dump(model.state_dict(),f);
f.close();
```
### Display the GMLS-Nets Learned Parameters
```
flag_run_cell = flag_print_model;
if flag_run_cell:
print("-"*80)
print("model.parameters():");
ll = model.parameters();
for l in ll:
print(l);
if flag_run_cell:
print("-"*80)
print("model.state_dict():");
print(model.state_dict());
print("-"*80)
```
### Done
|
github_jupyter
|
**General Work Process**
1. Import dataset and preprocess
2. Train model
3. Test model
```
import io
import os
import re
import shutil
import string
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import Sequential, layers, losses
from tensorflow.keras.layers import Dense, Embedding, GlobalAveragePooling1D
from tensorflow.keras.layers import TextVectorization
url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
dataset = tf.keras.utils.get_file("aclImdb_v1.tar.gz", url,
untar=True, cache_dir='.',
cache_subdir='')
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
os.listdir(dataset_dir)
# view train data files
train_dir = os.path.join(dataset_dir, 'train')
os.listdir(train_dir)
# clean unnecessary empty folder
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)
batch_size = 1024
seed = 10
train_data = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='training',
seed=seed)
val_data = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='validation',
seed=seed)
# sample batch from train data
for text_batch, label_batch in train_data.take(1):
# view the first 5 samples
for i in range(5):
print(label_batch[i].numpy(), text_batch.numpy()[i])
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_data.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_data.cache().prefetch(buffer_size=AUTOTUNE)
# Create a custom standardization function to strip HTML break tags '<br />'.
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
return tf.strings.regex_replace(stripped_html,
'[%s]' % re.escape(string.punctuation), '')
# Vocabulary size and number of words in a sequence.
vocab_size = 10000
sequence_length = 100
# Use the text vectorization layer to normalize, split, and map strings to
# integers. Note that the layer uses the custom standardization defined above.
# Set maximum_sequence length as all samples are not of the same length.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=vocab_size,
output_mode='int',
output_sequence_length=sequence_length)
# Make a text-only dataset (no labels) and call adapt to build the vocabulary.
text_ds = train_ds.map(lambda x, y: x)
vectorize_layer.adapt(text_ds)
embedding_dim=16
model = Sequential([
vectorize_layer,
Embedding(vocab_size, embedding_dim, name="embedding"),
GlobalAveragePooling1D(),
Dense(32, activation='relu'),
Dense(1)
])
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_ds,
validation_data=val_ds,
epochs=20,
callbacks=[tensorboard_callback])
%load_ext tensorboard
%tensorboard --logdir logs
# get the trained word embeddings
weights = model.get_layer('embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
vocab[:10]
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
if index == 0:
continue # skip 0, it's padding.
vec = weights[index]
out_v.write('\t'.join([str(x) for x in vec]) + "\n")
out_m.write(word + "\n")
out_v.close()
out_m.close()
```
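The exported `vectors.tsv` rows are just the `(vocab_size, embedding_dim)` weight matrix. A numpy sketch of how such a matrix could be queried for nearest neighbors by cosine similarity (random weights stand in for the trained embedding here):

```python
import numpy as np

# Sketch of nearest-neighbor lookup in an embedding matrix by cosine
# similarity; random weights stand in for the trained embedding.
rng = np.random.default_rng(0)
weights = rng.normal(size=(100, 16))   # stand-in for the trained weights
query = weights[5]                     # pretend this row is a word of interest

norms = np.linalg.norm(weights, axis=1) * np.linalg.norm(query)
cos_sim = weights @ query / norms      # cosine similarity to every row
nearest = np.argsort(-cos_sim)[:5]     # indices of the 5 most similar rows
assert nearest[0] == 5                 # a vector is most similar to itself
```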
## Test model
```
# view test data files
test_dir = os.path.join(dataset_dir, 'test')
os.listdir(test_dir)
test_data = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/test')
def vectorize_text(text, label):
text = tf.expand_dims(text, -1)
return vectorize_layer(text), label
# sample batch from test data
for test_text_batch, test_label_batch in test_data.take(1):
# view the first 5 samples
for i in range(5):
print(test_label_batch[i].numpy(), test_text_batch.numpy()[i])
text_batch, label_batch = next(iter(test_data))
first_review, first_label = text_batch[0], label_batch[0]
print("Review", first_review)
print("Label", test_data.class_names[first_label])
print("Vectorized review", vectorize_text(first_review, first_label))
# the vectorize function is not required to process the test data
# if the vectorize layer is included in the model
# test_ds = test_data.map(vectorize_text)
# # sample batch from test data
# for test_text_batch, test_label_batch in test_ds.take(1):
# for i in range(1):
# print(test_label_batch[i].numpy(), test_text_batch.numpy()[i])
loss, accuracy = model.evaluate(test_data)
print("Loss: ", loss)
print("Accuracy: ", accuracy)
export_model = tf.keras.Sequential([
model,
layers.Activation('sigmoid')
])
export_model.compile(
loss=losses.BinaryCrossentropy(from_logits=False), optimizer="adam", metrics=['accuracy']
)
# Test it with `test_data`, which yields raw strings
loss, accuracy = export_model.evaluate(test_data)
print(accuracy)
text_batch, label_batch = next(iter(test_data))
first_review, first_label = text_batch[0], label_batch[0]
pred_label = export_model.predict(test_data)
pred_label
pred_label.shape
pred_y = []
for i in range(len(pred_label)):
pred_y.append(round(pred_label[i][0]))
len(pred_y)
actual_y = []
for tt, ll in test_data:
for l in ll:
actual_y.append(l.numpy())
correct = 0
for i in range(len(pred_y)):
if pred_y[i] == actual_y[i]:
correct+=1
correct/len(pred_y)*100
```
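The accuracy loop above can also be written as a vectorized sketch (the probabilities and labels below are made up):

```python
import numpy as np

# Vectorized version of the accuracy computation above: threshold the
# sigmoid probabilities at 0.5 and compare to the labels (toy numbers).
pred_prob = np.array([0.9, 0.2, 0.6, 0.4])  # hypothetical model outputs
actual = np.array([1, 0, 0, 0])             # hypothetical true labels

pred = (pred_prob >= 0.5).astype(int)
accuracy = (pred == actual).mean() * 100
print(accuracy)  # 75.0
```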
**Analyze my own review**
```
my_reviews =["The new movie is popular and awesome",
"The background music is annoying and too loud",
"We are very enjoy the movie",
"Negative comment in internent is hurt people",
"The smile is very sweat and cute!",
"The view is so beautiful and attrative",
]
export_model.predict(my_reviews)
```
|
github_jupyter
|
```
from moviepy.editor import *
postedByFontSize=25
replyFontSize=35
titleFontSize=100
cortinilla= VideoFileClip('assets for Channel/assets for video/transicion.mp4')
clip = ImageClip('assets for Channel/assets for video/background assets/fondo_preguntas.jpg').on_color((1920, 1080))
final= VideoFileClip('assets for Channel/assets for video/transicion.mp4')
def generate_video_of_reply(author,replyLines,replyaudio):
videoComponents=[]
textReply= []
postedBy = TextClip('Posted by /'+author, fontsize=postedByFontSize, color='white')
postedBy=postedBy.set_pos((162, 124))
index=0
yAxis=184
for replyLine in replyLines:
print('line '+str(index)+replyLine)
try:
replyline=TextClip(replyLine, fontsize=postedByFontSize, color='white')
replyline=replyline.set_pos((162,yAxis))
textReply.append(replyline)
except:
print('null line')
print(yAxis)
yAxis+=25
index+=1
videoComponents.append(clip)
videoComponents.append(postedBy)
videoComponents.extend(textReply)
replyVideo = CompositeVideoClip(videoComponents)
replyVideo = replyVideo.set_duration(replyaudio.duration)
replyVideo = replyVideo.set_audio(replyaudio)
return replyVideo
def generate_final_video(title,replies):
videoClips=[]
videoClips.append(generate_title(title))
index=0
for reply in replies:
audio=AudioFileClip('comment'+str(index)+'.mp3')
videoClips.append(generate_video_of_reply(reply['author'],reply['replyLines'],audio))
videoClips.append(cortinilla)
index+=1
videoClips.append(final)
finalVideo=concatenate_videoclips(videoClips)
finalVideo.fx(vfx.speedx, factor=1.3)
finalVideo.write_videofile("text.mp4", fps=24)
def generate_title(title):
videoComponents=[]
yAxisJumpInLine=80
maxCharsInLine=38
titleaudio=AudioFileClip('title.mp3')
titleline=TextClip(title, fontsize=titleFontSize, color='white')
titleline=titleline.set_pos((202,94))
#if(len(titleline)>38):
# sublines=[line[i:i+maxCharsInLine] for i in range(0, len(line), maxCharsInLine)]
# sublinesSize=len(sublines)
# for x in range(sublinesSize):
# index = len(sublines[x]) # calculate length of string and save in index
# while index > 0:
# if(sublines[x][ index - 1 ]==' '): # save the value of str[index-1] in reverseString
# index = index - 1
#if(' ' in sublines[x+1]):
videoComponents.append(clip)
videoComponents.append(titleline)
titleVideo = CompositeVideoClip(videoComponents)
titleVideo = titleVideo.set_duration(titleaudio.duration)
titleVideo = titleVideo.set_audio(titleaudio)
return titleVideo
```
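The commented-out block in `generate_title` sketches splitting long titles at word boundaries; the standard library's `textwrap` does this directly. A minimal sketch, assuming lines of at most `maxCharsInLine` characters:

```python
import textwrap

# Wrap a long title into lines of at most maxCharsInLine characters,
# breaking at word boundaries, as the commented-out block attempts.
maxCharsInLine = 38
title = "What is the most interesting fact you have ever learned?"
sublines = textwrap.wrap(title, width=maxCharsInLine)
for line in sublines:
    print(line)
assert all(len(line) <= maxCharsInLine for line in sublines)
```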
|
github_jupyter
|
# PoissonRegressor with StandardScaler & Power Transformer
This code template is for regression analysis using PoissonRegressor, with StandardScaler as the feature rescaling technique and PowerTransformer as the transformer in a pipeline. PoissonRegressor is a generalized linear model with a Poisson distribution.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, PowerTransformer
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features which are required for model training .
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the initial rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
It is the process of reducing the number of input variables when developing a predictive model, used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since the majority of the machine learning models in the sklearn library don't handle string category data or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill null values, if any exist, and convert string class data in the dataset by one-hot encoding it.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
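A toy illustration of these preprocessing functions (the data below is made up): numeric NaNs are filled with the column mean, categorical NaNs with the mode, and string columns are one-hot encoded:

```python
import numpy as np
import pandas as pd

# Toy demonstration of the preprocessing above on made-up data.
toy = pd.DataFrame({
    "age":   [20.0, np.nan, 40.0],
    "color": ["red", "blue", None],
})
toy["age"] = toy["age"].fillna(toy["age"].mean())          # mean imputation -> 30.0
toy["color"] = toy["color"].fillna(toy["color"].mode()[0]) # mode imputation
encoded = pd.get_dummies(toy)                              # one-hot encode strings
assert toy["age"].iloc[1] == 30.0
assert "color_red" in encoded.columns
```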
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Model
Poisson regression is a generalized linear model form of regression used to model count data and contingency tables. It assumes the response variable or target variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear combination of unknown parameters. It is sometimes known as a log-linear model, especially when used to model contingency tables.
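The log link described above can be sketched numerically; the coefficients below are made up for illustration:

```python
import numpy as np

# Sketch of the Poisson GLM log link: the predicted mean is
# exp(intercept + X @ coef), so predicted counts are always positive.
# Coefficients here are made up for illustration.
coef = np.array([0.3, -0.1])
intercept = 1.0
X = np.array([[1.0, 2.0],
              [0.0, 0.0]])

mu = np.exp(intercept + X @ coef)  # predicted Poisson means
assert (mu > 0).all()              # the log link guarantees positivity
```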
#### Model Tuning Parameters
> **alpha** -> Constant that multiplies the penalty term and thus determines the regularization strength. alpha = 0 is equivalent to unpenalized GLMs.
> **tol** -> Stopping criterion.
> **max_iter** -> The maximal number of iterations for the solver.
#### Feature Transformation
Power Transformers are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.
Currently, <code>PowerTransformer</code> supports the Box-Cox transform and the Yeo-Johnson transform. The optimal parameter for stabilizing variance and minimizing skewness is estimated through maximum likelihood.
Refer [API](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html) for the parameters
```
model=make_pipeline(StandardScaler(),PowerTransformer(),PoissonRegressor())
model.fit(x_train,y_train)
```
#### Model Accuracy
We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.
> **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the coefficient of determination, the proportion of variance in the target explained by our model.
> **mae**: The **mean absolute error** function calculates the total error (the average absolute distance between the real data and the predicted data) of our model.
> **mse**: The **mean squared error** function squares the errors, penalizing the model for large errors.
```
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Prediction Plot
First, we plot the actual observations for the first twenty test records, with the record number on the x-axis and the target value on the y-axis.
We then overlay the model's predictions for the same records as the second line.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Viraj Jayant , Github: [Profile](https://github.com/Viraj-Jayant)
|
github_jupyter
|
<a href="https://colab.research.google.com/github/Pradyumna1312/ML_SelfStudy/blob/main/ML_SelfStudy_LogReg.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#Logistic regression
Logistic regression is a statistical technique for modelling the probability of a specific class or event.
The Social Network Ads dataset describes whether a product was purchased through an advertisement on social media.
We implement a logistic regression model in Python to predict whether a person purchases the product, using one of the three attributes given in the dataset.
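As a minimal sketch of the idea (the intercept and weight below are made-up illustrative numbers, not values fitted to this dataset), logistic regression passes a linear combination of the inputs through the sigmoid function to obtain a probability between 0 and 1:

```python
import math

def sigmoid(z):
    # squashes any real number into the (0, 1) interval
    return 1 / (1 + math.exp(-z))

# hypothetical model: probability of purchase given age
b0, b1 = -6.0, 0.16
p = sigmoid(b0 + b1 * 40)  # z = 0.4, so p is a bit above 0.5
print(round(p, 4))
```

Classifying then amounts to comparing the probability against a threshold such as 0.5.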
Follow the following steps:
1. Import Libraries
```
import pandas as pd
import numpy as np
from math import exp
import matplotlib.pyplot as plt
from scipy.stats import pearsonr
```
2. Load the dataset
```
df = pd.read_csv("https://raw.githubusercontent.com/Pradyumna1312/ML_SelfStudy/main/Datasets/Social_Network_Ads.csv")
Y = df.iloc[:, -1].values
X = df[df.columns[[1, 2, 3]]]
print(df)
```
3. Choose the input attribute most highly related to the output variable and display the scatter plot
```
AY = np.cov(X['Age'],Y)
ESY = np.cov(X['EstimatedSalary'], Y)
print("Covariance of Age with Output\n", AY,'\n\n',"Covariance of Estimated Salary with Output\n", ESY,'\n')
corrAY, _ = pearsonr(X['Age'],Y)
corrESY, _ = pearsonr(X['EstimatedSalary'], Y)
print("Correlation of Age with Output\n", corrAY,'\n\n',"Correlation of Estimated Salary with Output\n",corrESY)
# Therefore Age is highly related to output.
plt.scatter(X['Age'],Y)
plt.title("Scatter plot of Highly related feature")
plt.show()
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test= train_test_split(X,Y,test_size= 0.41, random_state= 0)
print(x_train)
```
4. Use the stochastic gradient descent method to train the model: use 300 epochs
and initialize the weights to 0, the learning rate to 0.001, and the threshold value to 0.5.
```
def normalize(X):
    return X - X.mean()

# Method to make predictions
def predict(X, b0, b1):
    return np.array([1 / (1 + exp(-(b0 + b1 * x))) for x in X])

# Method to train the model
def logistic_regression(X, Y, epochs):
    X = normalize(X)
    # Initializing variables
    b0 = 0
    b1 = 0
    L = 0.001
    for epoch in range(epochs):
        y_pred = predict(X, b0, b1)
        D_b0 = -2 * sum((Y - y_pred) * y_pred * (1 - y_pred))      # Derivative of loss wrt b0
        D_b1 = -2 * sum(X * (Y - y_pred) * y_pred * (1 - y_pred))  # Derivative of loss wrt b1
        # Update b0 and b1
        b0 = b0 - L * D_b0
        b1 = b1 - L * D_b1
    return b0, b1

def sqr_err(y_true, y_pred):
    return np.array([(y_pred[i] - y_true[i]) ** 2 for i in range(len(y_true))])
```
5. Compute the MSE and accuracy of the trained model after 300 epochs
```
b0, b1 = logistic_regression(X['Age'],Y,300)
# Making predictions
X_test_norm = normalize(x_test['Age'])
y_pred = predict(X_test_norm, b0, b1)
y_pred = [1 if p >= 0.5 else 0 for p in y_pred]
plt.clf()
plt.scatter(x_test['Age'], y_test)
plt.scatter(x_test['Age'], y_pred, c="red")
plt.show()
# The accuracy
accuracy = 0
for i in range(len(y_pred)):
    if y_pred[i] == y_test[i]:
        accuracy += 1
print(f"Accuracy = {accuracy / len(y_pred)}")
# The MSE
mse = ((y_test - y_pred) ** 2).mean()
print("MSE =", mse)
```
6. Plot the squared error of the predictions after 300 epochs
```
y_pred = predict(X_test_norm, b0, b1)
Squared_error = sqr_err(y_test, y_pred)
plt.figure()
plt.plot(y_pred, Squared_error)
plt.xlabel('Predicted Values')
plt.ylabel('Squared Error')
plt.show()
```
7. Validate the classification model for any 2 unseen values.
```
valid = np.array([20, 40])
valid = normalize(valid)
y_valid = predict(valid, b0, b1)
y_valid = [1 if p >= 0.5 else 0 for p in y_valid]
print('The outputs for Ages 20, 40 are as follows:', y_valid)
```
```
# Transformers installation
! pip install transformers datasets
# To install from source instead of the last release, comment the command above and uncomment the following one.
# ! pip install git+https://github.com/huggingface/transformers.git
```
# Fine-tuning a pretrained model
In this tutorial, we will show you how to fine-tune a pretrained model from the Transformers library. In TensorFlow,
models can be directly trained using Keras and the `fit` method. In PyTorch, there is no generic training loop so
the 🤗 Transformers library provides an API with the class [Trainer](https://huggingface.co/docs/transformers/master/en/main_classes/trainer#transformers.Trainer) to let you fine-tune or train
a model from scratch easily. Then we will show you how to alternatively write the whole training loop in PyTorch.
Before we can fine-tune a model, we need a dataset. In this tutorial, we will show you how to fine-tune BERT on the
[IMDB dataset](https://www.imdb.com/interfaces/): the task is to classify whether movie reviews are positive or
negative. For examples of other tasks, refer to the [additional-resources](#additional-resources) section!
<a id='data-processing'></a>
## Preparing the datasets
```
#@title
from IPython.display import HTML
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/_BZearw7f0w?rel=0&controls=0&showinfo=0" frameborder="0" allowfullscreen></iframe>')
```
We will use the [🤗 Datasets](https://github.com/huggingface/datasets/) library to download and preprocess the IMDB
datasets. We will go over this part pretty quickly. Since the focus of this tutorial is on training, you should refer
to the 🤗 Datasets [documentation](https://huggingface.co/docs/datasets/) or the [preprocessing](https://huggingface.co/docs/transformers/master/en/preprocessing) tutorial for
more information.
First, we can use the `load_dataset` function to download and cache the dataset:
```
from datasets import load_dataset
raw_datasets = load_dataset("imdb")
```
This works like the `from_pretrained` method we saw for the models and tokenizers (except the cache directory is
_~/.cache/huggingface/dataset_ by default).
The `raw_datasets` object is a dictionary with three keys: `"train"`, `"test"` and `"unsupervised"`
(which correspond to the three splits of that dataset). We will use the `"train"` split for training and the
`"test"` split for validation.
To preprocess our data, we will need a tokenizer:
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
```
As we saw in [preprocessing](https://huggingface.co/docs/transformers/master/en/preprocessing), we can prepare the text inputs for the model with the following command (this is an
example, not a command you can execute):
```
inputs = tokenizer(sentences, padding="max_length", truncation=True)
```
This will make all the samples have the maximum length the model can accept (here 512), either by padding or truncating
them.
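Conceptually, padding and truncation can be sketched in a few lines of plain Python (a simplification: the real tokenizer also adds special tokens and returns attention masks):

```python
def pad_or_truncate(token_ids, max_length, pad_id=0):
    # cut sequences that are too long, pad sequences that are too short
    ids = token_ids[:max_length]
    return ids + [pad_id] * (max_length - len(ids))

print(pad_or_truncate([101, 2023, 102], 5))                    # padded to length 5
print(pad_or_truncate([101, 2023, 3793, 2003, 2146, 102], 5))  # truncated to length 5
```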
However, we can instead apply these preprocessing steps to all the splits of our dataset at once by using the
`map` method:
```
def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)

tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
```
You can learn more about the map method or the other ways to preprocess the data in the 🤗 Datasets [documentation](https://huggingface.co/docs/datasets/).
Next we will generate a small subset of the training and validation set, to enable faster training:
```
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
full_train_dataset = tokenized_datasets["train"]
full_eval_dataset = tokenized_datasets["test"]
```
In all the examples below, we will always use `small_train_dataset` and `small_eval_dataset`. Just replace
them by their _full_ equivalent to train or evaluate on the full dataset.
<a id='trainer'></a>
## Fine-tuning in PyTorch with the Trainer API
```
#@title
from IPython.display import HTML
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/nvBXf7s7vTI?rel=0&controls=0&showinfo=0" frameborder="0" allowfullscreen></iframe>')
```
Since PyTorch does not provide a training loop, the 🤗 Transformers library provides a [Trainer](https://huggingface.co/docs/transformers/master/en/main_classes/trainer#transformers.Trainer)
API that is optimized for 🤗 Transformers models, with a wide range of training options and with built-in features like
logging, gradient accumulation, and mixed precision.
First, let's define our model:
```
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
```
This will issue a warning about some of the pretrained weights not being used and some weights being randomly
initialized. That's because we are throwing away the pretraining head of the BERT model to replace it with a
classification head which is randomly initialized. We will fine-tune this model on our task, transferring the knowledge
of the pretrained model to it (which is why doing this is called transfer learning).
Then, to define our [Trainer](https://huggingface.co/docs/transformers/master/en/main_classes/trainer#transformers.Trainer), we will need to instantiate a
[TrainingArguments](https://huggingface.co/docs/transformers/master/en/main_classes/trainer#transformers.TrainingArguments). This class contains all the hyperparameters we can tune for the
[Trainer](https://huggingface.co/docs/transformers/master/en/main_classes/trainer#transformers.Trainer) or the flags to activate the different training options it supports. Let's begin by
using all the defaults; the only thing we then have to provide is a directory in which the checkpoints will be saved:
```
from transformers import TrainingArguments
training_args = TrainingArguments("test_trainer")
```
Then we can instantiate a [Trainer](https://huggingface.co/docs/transformers/master/en/main_classes/trainer#transformers.Trainer) like this:
```
from transformers import Trainer
trainer = Trainer(model=model, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset)
```
To fine-tune our model, we just need to call
```
trainer.train()
```
which will start a training that you can follow with a progress bar, which should take a couple of minutes to complete
(as long as you have access to a GPU). It won't actually tell you anything useful about how well (or badly) your model
is performing, however: by default, there is no evaluation during training, and we didn't tell the
[Trainer](https://huggingface.co/docs/transformers/master/en/main_classes/trainer#transformers.Trainer) to compute any metrics. Let's have a look at how to do that now!
To have the [Trainer](https://huggingface.co/docs/transformers/master/en/main_classes/trainer#transformers.Trainer) compute and report metrics, we need to give it a `compute_metrics`
function that takes predictions and labels (grouped in a namedtuple called [EvalPrediction](https://huggingface.co/docs/transformers/master/en/internal/trainer_utils#transformers.EvalPrediction)) and
return a dictionary with string items (the metric names) and float values (the metric values).
The 🤗 Datasets library provides an easy way to get the common metrics used in NLP with the `load_metric` function;
here we simply use accuracy. Then we define the `compute_metrics` function that converts the logits to predictions
(remember that all 🤗 Transformers models return logits) and feeds them to the `compute` method of this metric.
```
import numpy as np
from datasets import load_metric
metric = load_metric("accuracy")
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)
```
The compute function needs to receive a tuple (with logits and labels) and has to return a dictionary with string keys
(the name of the metric) and float values. It will be called at the end of each evaluation phase on the whole arrays of
predictions/labels.
To check if this works in practice, let's create a new [Trainer](https://huggingface.co/docs/transformers/master/en/main_classes/trainer#transformers.Trainer) with our fine-tuned model:
```
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
    compute_metrics=compute_metrics,
)
trainer.evaluate()
```
which showed an accuracy of 87.5% in our case.
If you want to fine-tune your model and regularly report the evaluation metrics (for instance at the end of each
epoch), here is how you should define your training arguments:
```
from transformers import TrainingArguments
training_args = TrainingArguments("test_trainer", evaluation_strategy="epoch")
```
See the documentation of [TrainingArguments](https://huggingface.co/docs/transformers/master/en/main_classes/trainer#transformers.TrainingArguments) for more options.
<a id='keras'></a>
## Fine-tuning with Keras
```
#@title
from IPython.display import HTML
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/rnTGBy2ax1c?rel=0&controls=0&showinfo=0" frameborder="0" allowfullscreen></iframe>')
```
Models can also be trained natively in TensorFlow using the Keras API. First, let's define our model:
```
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
```
Then we will need to convert our datasets from before into standard `tf.data.Dataset` objects. Since we have fixed
shapes, this can easily be done. First we remove the _"text"_ column from our datasets and set them in TensorFlow
format:
```
tf_train_dataset = small_train_dataset.remove_columns(["text"]).with_format("tensorflow")
tf_eval_dataset = small_eval_dataset.remove_columns(["text"]).with_format("tensorflow")
```
Then we convert everything into big tensors and use the `tf.data.Dataset.from_tensor_slices` method:
```
train_features = {x: tf_train_dataset[x] for x in tokenizer.model_input_names}
train_tf_dataset = tf.data.Dataset.from_tensor_slices((train_features, tf_train_dataset["label"]))
train_tf_dataset = train_tf_dataset.shuffle(len(tf_train_dataset)).batch(8)
eval_features = {x: tf_eval_dataset[x] for x in tokenizer.model_input_names}
eval_tf_dataset = tf.data.Dataset.from_tensor_slices((eval_features, tf_eval_dataset["label"]))
eval_tf_dataset = eval_tf_dataset.batch(8)
```
With this done, the model can then be compiled and trained as any Keras model:
```
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=tf.metrics.SparseCategoricalAccuracy(),
)
model.fit(train_tf_dataset, validation_data=eval_tf_dataset, epochs=3)
```
With the tight interoperability between TensorFlow and PyTorch models, you can even save the model and then reload it
as a PyTorch model (or vice-versa):
```
from transformers import AutoModelForSequenceClassification
model.save_pretrained("my_imdb_model")
pytorch_model = AutoModelForSequenceClassification.from_pretrained("my_imdb_model", from_tf=True)
```
<a id='pytorch_native'></a>
## Fine-tuning in native PyTorch
```
#@title
from IPython.display import HTML
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/Dh9CL8fyG80?rel=0&controls=0&showinfo=0" frameborder="0" allowfullscreen></iframe>')
```
You might need to restart your notebook at this stage to free some memory, or execute the following code:
```
import torch

del model
del pytorch_model
del trainer
torch.cuda.empty_cache()
```
Let's now see how to achieve the same results as in the [trainer section](#trainer) in native PyTorch. First we need to
define the dataloaders, which we will use to iterate over batches. Before doing that, we need to apply a bit of
post-processing to our `tokenized_datasets`:
- remove the columns corresponding to values the model does not expect (here the `"text"` column)
- rename the column `"label"` to `"labels"` (because the model expects the argument to be named `labels`)
- set the format of the datasets so they return PyTorch Tensors instead of lists.
Our _tokenized_datasets_ has one method for each of those steps:
```
tokenized_datasets = tokenized_datasets.remove_columns(["text"])
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
tokenized_datasets.set_format("torch")
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
```
Now that this is done, we can easily define our dataloaders:
```
from torch.utils.data import DataLoader
train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8)
eval_dataloader = DataLoader(small_eval_dataset, batch_size=8)
```
Next, we define our model:
```
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
```
We are almost ready to write our training loop; the only two things missing are an optimizer and a learning rate
scheduler. The default optimizer used by the [Trainer](https://huggingface.co/docs/transformers/master/en/main_classes/trainer#transformers.Trainer) is [AdamW](https://huggingface.co/docs/transformers/master/en/main_classes/optimizer_schedules#transformers.AdamW):
```
from transformers import AdamW
optimizer = AdamW(model.parameters(), lr=5e-5)
```
Finally, the learning rate scheduler used by default is just a linear decay from the maximum value (5e-5 here) to 0:
```
from transformers import get_scheduler
num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler("linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps)
```
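The linear decay can be sketched as a plain function (a simplification of what `get_scheduler` builds, with `num_warmup_steps=0` as above):

```python
def linear_lr(step, max_lr, num_training_steps):
    # decay linearly from max_lr at step 0 down to 0 at the final step
    remaining = max(0, num_training_steps - step)
    return max_lr * remaining / num_training_steps

print(linear_lr(0, 5e-5, 1000))     # max_lr at the start
print(linear_lr(500, 5e-5, 1000))   # halfway down at the midpoint
print(linear_lr(1000, 5e-5, 1000))  # zero at the end
```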
One last thing, we will want to use the GPU if we have access to one (otherwise training might take several hours
instead of a couple of minutes). To do this, we define a `device` we will put our model and our batches on.
```
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model.to(device)
```
We now are ready to train! To get some sense of when it will be finished, we add a progress bar over our number of
training steps, using the _tqdm_ library.
```
from tqdm.auto import tqdm
progress_bar = tqdm(range(num_training_steps))
model.train()
for epoch in range(num_epochs):
    for batch in train_dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch)
        loss = outputs.loss
        loss.backward()

        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        progress_bar.update(1)
```
Note that if you are used to freezing the body of your pretrained model (like in computer vision) the above may seem a
bit strange, as we are directly fine-tuning the whole model without taking any precaution. It actually works better
this way for Transformer models (so this is not an oversight on our side). If you're not familiar with what "freezing
the body" of the model means, forget you read this paragraph.
Now to check the results, we need to write the evaluation loop. Like in the [trainer section](#trainer) we will
use a metric from the datasets library. Here we accumulate the predictions at each batch before computing the final
result when the loop is finished.
```
metric = load_metric("accuracy")
model.eval()
for batch in eval_dataloader:
    batch = {k: v.to(device) for k, v in batch.items()}
    with torch.no_grad():
        outputs = model(**batch)

    logits = outputs.logits
    predictions = torch.argmax(logits, dim=-1)
    metric.add_batch(predictions=predictions, references=batch["labels"])

metric.compute()
```
<a id='additional-resources'></a>
## Additional resources
To look at more fine-tuning examples you can refer to:
- [🤗 Transformers Examples](https://github.com/huggingface/transformers/tree/master/examples) which includes scripts
to train on all common NLP tasks in PyTorch and TensorFlow.
- [🤗 Transformers Notebooks](https://huggingface.co/docs/transformers/master/en/notebooks) which contains various notebooks and in particular one per task (look for
the _how to finetune a model on xxx_).
## Week 3-1 - Linear Regression - class notebook
This notebook gives three examples of regression, that is, fitting a linear model to our data to find trends. For the finale, we're going to duplicate the analysis behind the Washington Post story discussed in Part 3.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
%matplotlib inline
```
## Part 1 - Single variable regression
We'll start with some simple data on height and weight.
```
hw = pd.read_csv("week-3/height-weight.csv")
hw
```
Let's look at the distribution of each of these variables.
```
hw.height.hist()
hw.weight.hist()
```
Really, the interesting thing is to look at them together. For this we use a scatter plot.
```
hw.plot(kind='scatter', x='height', y='weight')
```
Clearly there's a trend that relates the two. One measure of the strength of that trend is called "correlation". We can compute the correlation between every pair of columns with `corr()`, though in this case it's really only between one pair.
```
# Show the correlations! OMG
hw.corr()
# the closer to 1 the correlation is, the closer to a line are the values
```
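Under the hood, `corr()` computes the Pearson correlation coefficient; here's a numpy-only sketch on made-up toy numbers:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.0, 5.0, 4.0, 6.0])

# covariance of x and y, divided by the product of their standard deviations
r = ((x - x.mean()) * (y - y.mean())).mean() / (x.std() * y.std())
print(round(r, 4))
```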
If you want to get better at knowing what sort of graph a correlation coefficient corresponds to, play the remarkable 8-bit game [Guess the Correlation](http://guessthecorrelation.com/)
So far so good. Now suppose we want to know what weight we should guess if we know someone is 60" tall. We don't have anyone of that height in our data, and even if we did, they could be above or below the average weight for their height. We need to build some sort of *model* which captures the trend, and guesses the average weight at each height.
*ENTER THE REGRESSION*.
```
# convert pandas dataframe to a numpy array, which can be understood by sklearn
x = hw[['height']].values
y = hw[['weight']].values
lm = LinearRegression()
lm.fit(x,y)
```
Ok, now we've got a "linear regression." What is it? It's just a line `y=mx+b`, which we can recover like this:
```
m = lm.coef_[0]
m
b = lm.intercept_
b
```
We can plot this line `y=mx+b` on top of the scatterplot to see it.
```
hw.plot(kind='scatter', x='height', y='weight')
plt.plot(hw.height, m*hw.height + b, '--')
```
So if we want to figure out the average weight of someone who is 60" tall, we can compute
```
m*60+b
```
There's a shortcut for this, which will come in handy when we add variables.
```
lm.predict([[60]])
```
## Part 2 - Multi-variable regression
We can do essentially the same trick with one more independent variable. Then our regression equation is `y = m1*x1 + m2*x2 + b`. We'll use one of the built-in `sklearn` test datasets as demonstration data.
```
from sklearn import datasets
from mpl_toolkits.mplot3d import Axes3D
diabetes = datasets.load_diabetes()
print(diabetes.DESCR)
# take a look at the predictive (independent) variables
# The variables to be used for prediction
df = pd.DataFrame(diabetes.data,
columns=['age', 'sex', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6'])
df.hist()
# take a look at the "target" (dependent) variable
# fit a regression
# Which columns do we want to use to try to predict? I'm choosing age and BMI here
# (BMI is "body mass index", it's a measure of weight compared to height)
indices = (0, 2)
x = diabetes.data[:, indices]
y = diabetes.target
lm2 = LinearRegression()
lm2.fit(x, y)
```
Ok awesome, we've fit a regression with multiple variables. What did we get? Let's check the coefficients
```
lm2.coef_
```
Now we have *two* coefficients. They're both positive, which means that both age and BMI are associated with increased disease progression. We have an intercept too, the predicted value of the target variable when both age and BMI are zero (which never happens, but that's the way the math works)
```
lm2.intercept_
```
To really see what's going on here, we're going to plot the whole thing in beautiful 3D. Now instead of a regression line, we have a regression *plane.* Are you ready for this?
```
# Helpful function that we'll use later for making more 3D regression plots
def plot_regression_3d(x, y, z, model, elev=30, azim=30, xlab=None, ylab=None):
    fig = plt.figure()
    ax = Axes3D(fig, elev=elev, azim=azim)

    # This looks gnarly, but we're just taking four points at the corners of the plot,
    # and using predict() to determine their vertical position
    xmin = x.min()
    xmax = x.max()
    ymin = y.min()
    ymax = y.max()
    corners_x = np.array([[xmin, xmin], [xmax, xmax]])
    corners_y = np.array([[ymin, ymax], [ymin, ymax]])
    corners_z = model.predict(np.array([[xmin, xmin, xmax, xmax], [ymin, ymax, ymin, ymax]]).T).reshape((2, 2))
    ax.plot_surface(corners_x, corners_y, corners_z, alpha=0.5)

    ax.scatter(x, y, z, alpha=0.3)
    ax.set_xlabel(xlab)
    ax.set_ylabel(ylab)
# Now plot our diabetes data
plot_regression_3d(x[:, 0], x[:, 1], y, lm2, elev=20, azim=0, xlab='age',
ylab='BMI')
```
## Part 3 - Analysis of 2016 voters
Aside from prediction, we can use regression to attempt explanations. The coefficient `m` in the above encodes a guess about the existence and strength of the relationship between `x` and `y`. If it's zero, we guess that they're unrelated. Otherwise, it tells us how they are likely to vary together.
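As a quick numpy-only sanity check (synthetic data with a made-up true slope of 3), fitting a line recovers the coefficient that generated the data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 3.0 * x + rng.normal(scale=0.1, size=200)  # true relationship: m = 3, b = 0

m, b = np.polyfit(x, y, 1)  # m lands close to 3, b close to 0
```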
In this section we're going to try to understand what motivated people to vote for Trump by looking at the relationship between vote and other variables in the [2016 American National Election Study data](http://electionstudies.org/project/2016-time-series-study/).
There were quite a few statistical analyses of this "why did Trump win?" kind after the election, by journalists and researchers.
- [Racism motivated Trump voters more than authoritarianism](https://www.washingtonpost.com/news/monkey-cage/wp/2017/04/17/racism-motivated-trump-voters-more-than-authoritarianism-or-income-inequality) - Washington Post
- [The Rise of American Authoritarianism](https://www.vox.com/2016/3/1/11127424/trump-authoritarianism) - Vox
- [Education, Not Income, Predicted Who Would Vote For Trump](https://fivethirtyeight.com/features/education-not-income-predicted-who-would-vote-for-trump/) - 538
- [Why White Americans Voted for Trump – A Research Psychologist’s Analysis](https://techonomy.com/2018/02/white-americans-voted-trump-research-psychologists-analysis/) - Techonomy
- [Status threat, not economic hardship, explains the 2016 presidential vote](http://www.pnas.org/content/early/2018/04/18/1718155115) - Diana C. Mutz, PNAS
- [Trump thrives in areas that lack traditional news outlets](https://www.politico.com/story/2018/04/08/news-subscriptions-decline-donald-trump-voters-505605) - Politico
- [The Five Types of Trump Voters](https://www.voterstudygroup.org/publications/2016-elections/the-five-types-trump-voters) - Voter Study Group
Many of these used regression, but some did not. My favorite is the Voter Study Group analysis which used clustering -- just like we learned last week. It has a good discussion of the problems with using a regression to answer this question.
We're going to use regression anyway, along the lines of the [Washington Post piece](https://www.washingtonpost.com/news/monkey-cage/wp/2017/04/17/racism-motivated-trump-voters-more-than-authoritarianism-or-income-inequality/?utm_term=.01d9d3764f2c) which also uses ANES data. In particular, a regression on variables representing attitudes about authoritarianism and minorities.
```
# read 'anes_timeseries_2016_rawdata.csv'
anes = pd.read_csv('week-3/anes_timeseries_2016_rawdata.csv')
print(anes.shape)
anes.head()
```
The first thing we need to do is construct indices of "authoritarianism" and "racism" from answers to the survey questions. We're following exactly what the Washington Post did here. Are "authoritarianism" and "racism" accurate and/or useful words for indices constructed of these questions? Our choice of words will hugely shape the impression that readers come away with -- even if we do the exact same calculations.
We start by dropping everything we don't need: we keep only white voters, only people who voted, and just the cols we want
```
# drop non-white voters
white_col = 'V161310a'
anes = anes[anes[white_col] == 1]
anes.shape
# keep only Trump, Clinton voters
voted_col = 'V162034a' # 1=Clinton, 2=Trump, 3=Johnson, 4=Stein, negative numbers = didn't vote or won't say
anes = anes[(anes[voted_col] == 1) | (anes[voted_col] == 2)]
anes.shape
# keep only columns on authoritarian, racial scales
authoritarian_cols = ['V162239', 'V162240', 'V162241', 'V162242']
racial_cols = ['V162211', 'V162212', 'V162213', 'V162214']
anes = anes[[voted_col] + authoritarian_cols + racial_cols]
anes.head()
```
Now we have to decode these values.
For the child-rearing questions, the code book tells us that 1 means the first option and 2 means the second. But 3 means both, and then there are all sorts of codes that mean the question wasn't answered, in different ways. And then there's the issue that the questions have different directions: option 1 might mean either "more" or "less" authoritarian. So we have a custom translation dictionary for each column. This is the stuff that dreams are made of, people.
```
# recode the authoritarian variables
# These variables are proxies for authoritarian attitudes. Why are these questions about children?
# Because that's the only way to get honest answers! It's a long story.
# See https://www.vox.com/2016/3/1/11127424/trump-authoritarianism
# All authoritarian traits are coded 1 for first option and 2 for second
# We turn this into +1/0/-1 where +1 is the more authoritarian option, and 0 means no data
# Child trait more important: independence or respect
anes['V162239'].replace({1: -1, 2: 1, 3: 0, -6: 0, -7: 0, -8: 0, -9: 0},
inplace=True)
# Child trait more important: curiosity or good manners
anes['V162240'].replace({1: -1, 2: 1, 3: 0, -6: 0, -7: 0, -8: 0, -9: 0},
inplace=True)
# Child trait more important: obedience or self-reliance
anes['V162241'].replace({1: 1, 2: -1, 3: 0, -6: 0, -7: 0, -8: 0, -9: 0},
inplace=True)
# Child trait more important: considerate or well-behaved
anes['V162242'].replace({1: -1, 2: 1, 3: 0, -6: 0, -7: 0, -8: 0, -9: 0},
inplace=True)
# recode the racial variables
# All racial questions are coded on a five point scale, 1=agree strongly, 5=disagree strongly
# We recode so that least tolerant = +2 and most tolerant =-2
# Agree/disagree: blacks shd work way up w/o special favors
anes['V162211'].replace(
{1: 2, 2: 1, 3: 0, 4: -1, 5: -2, -6: 0, -7: 0, -8: 0, -9: 0}, inplace=True)
# Agree/disagree: past slavery make more diff for blacks
anes['V162212'].replace(
{1: -2, 2: -1, 3: 0, 4: 1, 5: 2, -6: 0, -7: 0, -8: 0, -9: 0}, inplace=True)
# Agree/disagree: blacks have gotten less than deserve
anes['V162213'].replace(
{1: -2, 2: -1, 3: 0, 4: 1, 5: 2, -6: 0, -7: 0, -8: 0, -9: 0}, inplace=True)
anes['V162214'].replace(
{1: 2, 2: 1, 3: 0, 4: -1, 5: -2, -6: 0, -7: 0, -8: 0, -9: 0}, inplace=True)
# check the results
anes.head()
```
Finally, add the authority and racial columns together to form the composite indexes.
```
# sum each group of columns. End up with vote, authority, racial columns
anes['authority'] = anes[authoritarian_cols].sum(axis=1)
anes['racial'] = anes[racial_cols].sum(axis=1)
anes['vote'] = anes[voted_col]
anes = anes[['vote', 'authority', 'racial']]
anes.head(10)
```
Data prepared at last! Let's first look at the scatter plots
```
anes.plot(kind='scatter', x='authority', y='vote')
```
Er, right... all this says is that we've got votes for both candidates at all levels of authoritarianism. To get a sense of how many dots are at each point, we can add some jitter and make the points a bit transparent.
```
# function to add noise to the values in the array
def jitter(arr):
    # pick a standard deviation for the jitter of 2% of the data range
    stdev = .02 * (max(arr) - min(arr))
    return arr + np.random.randn(len(arr)) * stdev
# plot vote vs authoritarian variables with jitter
plt.scatter(x=jitter(anes.authority), y=jitter(anes.vote), alpha=0.05)
```
Note that, generally, as you move to the right (more authoritarian) there are more Trump voters. We can do this same plot with the racial axis.
```
# plot vote vs racial variables with jitter
plt.scatter(x=jitter(anes.racial), y=jitter(anes.vote), alpha=0.05)
```
Similar deal. The axis is smoother because we are summing numbers from a five-point agree/disagree scale, rather than the two-option questions of the authoritarianism plot.
Now in glorious 3D.
```
# 3D plot of both sets of vars
```
Same problem: everything is on top of each other. Same solution.
```
# jittered 3D plot
```
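As a rough sketch of what that jittered 3D scatter could look like (assuming the `jitter` helper above and columns named `authority`, `racial`, and `vote`; synthetic stand-in data is used here so the snippet runs on its own):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# stand-ins for anes.authority, anes.racial, anes.vote
authority = rng.integers(0, 5, size=500).astype(float)
racial = rng.integers(-8, 9, size=500).astype(float)
vote = rng.integers(0, 2, size=500).astype(float)

def jitter(arr):
    # same helper as above: noise sd = 2% of the data range
    stdev = 0.02 * (max(arr) - min(arr))
    return arr + np.random.randn(len(arr)) * stdev

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.scatter(jitter(authority), jitter(racial), jitter(vote), alpha=0.05)
ax.set_xlabel("authority")
ax.set_ylabel("racial")
ax.set_zlabel("vote")
```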
You can definitely see the change along both axes. But which factor matters more? Let's get quantitative by fitting a linear model. Regression to the rescue!
```
# This is some drudgery to convert the dataframe into the format that sklearn needs:
# This does the actual regression
# call plot_regression_3d
```
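The regression step itself might look something like the following sketch with scikit-learn (`LinearRegression` is one reasonable choice; synthetic stand-in data replaces the `anes` dataframe so the snippet is self-contained):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# stand-ins for anes[['authority', 'racial']] (X) and anes['vote'] (y)
X = rng.normal(size=(500, 2))
y = (0.3 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(float)

model = LinearRegression()
model.fit(X, y)          # fit vote ~ authority + racial
print(model.coef_)       # one coefficient per predictor
print(model.intercept_)
```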
Well, that looks cool, but it doesn't really clear things up for me. Let's look at the coefficients.
Looks like the coefficient on `racial` is higher. But wait: we chose the numbers that we turned each response into! We could have coded `racial` on a +/-1 scale instead of a +/-2 scale, or a +/-10 scale. So... we could get any number we want just by changing how we convert the data.
To fix this, we're going to standardize the values (both dependent and independent) to have mean 0 and standard deviation 1. This gives us [standardized coefficients](https://en.wikipedia.org/wiki/Standardized_coefficient).
```
# normalize the columns and take a look
# fit another regression
```
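A sketch of the standardization and refit, assuming the three columns built above (stand-in data again, so the cell runs on its own):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
anes = pd.DataFrame({
    "vote": rng.integers(0, 2, size=500).astype(float),
    "authority": rng.integers(0, 5, size=500).astype(float),
    "racial": rng.integers(-8, 9, size=500).astype(float),
})
# standardize every column to mean 0, standard deviation 1
z = (anes - anes.mean()) / anes.std()
model = LinearRegression().fit(z[["authority", "racial"]], z["vote"])
print(dict(zip(["authority", "racial"], model.coef_)))
```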
What we have now is the same data, just scaled in each direction.
```
# call plot_regression_3d
```
Finally, we can compare the coefficients directly. It doesn't matter what range we used to code the survey answers, because we divided it out during normalization.
So there we have it. For white voters in the 2016 election, the standardized regression coefficient on racial factors is quite a bit bigger than the standardized coefficient on authoritarianism. But what does this actually mean?
```
# what's the new intercept?
```
```
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
mnist = tf.contrib.learn.datasets.load_dataset("mnist")
train_data = mnist.train.images # Returns np.array
train_labels = np.asarray(mnist.train.labels, dtype=np.int32)
eval_data = mnist.test.images # Returns np.array
eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)
BATCH_SIZE = 512
RNN_HIDDEN_SIZE = 128
def model_fn(features, labels, mode):
    # input_layer = tf.reshape(features["x"], [-1, 784, 1])
    # rnn_cell = tf.nn.rnn_cell.LSTMCell(RNN_HIDDEN_SIZE)
    # initial_state = rnn_cell.zero_state(batch_size=BATCH_SIZE, dtype=tf.float32)
    # _, state = tf.nn.dynamic_rnn(rnn_cell, input_layer, initial_state=initial_state, dtype=tf.float32)
    # dense1 = tf.layers.dense(inputs=tf.reshape(state, [-1, RNN_HIDDEN_SIZE * 2]), units=512, activation=tf.nn.relu)
    # dense2 = tf.layers.dense(inputs=dense1, units=1024, activation=tf.nn.relu)
    input_layer = tf.reshape(features['x'], [-1, 28, 28, 1])
    conv1 = tf.layers.conv2d(inputs=input_layer,
                             filters=32,
                             kernel_size=[5, 5],
                             padding="same",
                             activation=tf.nn.relu)
    pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
    conv2 = tf.layers.conv2d(inputs=pool1,
                             filters=64,
                             kernel_size=[5, 5],
                             padding="same",
                             activation=tf.nn.relu)
    pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
    pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
    dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
    predictions = tf.layers.dense(inputs=dense, units=784)
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
    loss = tf.losses.mean_squared_error(labels=labels, predictions=predictions)
    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
        train_op = optimizer.minimize(loss=loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
    eval_metric_ops = {'distance': tf.metrics.mean_squared_error(labels=labels, predictions=predictions)}
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
est = tf.estimator.Estimator(model_fn=model_fn, model_dir='pattern_memorization_model')
logging_hook = tf.train.LoggingTensorHook(tensors={}, at_end=True)
train_input_fn = tf.estimator.inputs.numpy_input_fn(x={"x": train_data},
y=train_data,
batch_size=BATCH_SIZE,
num_epochs=None,
shuffle=True)
est.train(input_fn=train_input_fn, steps=2000, hooks=[logging_hook])
eval_input_fn = tf.estimator.inputs.numpy_input_fn(x={"x": eval_data},
y=eval_data,
num_epochs=1,
shuffle=False)
est.evaluate(input_fn=eval_input_fn)
test_images = eval_data[np.random.choice(mnist.test.num_examples, 3)]
input_fn = tf.estimator.inputs.numpy_input_fn(x={'x': test_images}, shuffle=False)
preds = list(est.predict(input_fn))
plt.rcParams["figure.figsize"] = [13, 6]
for i in range(3):
    plt.subplot(1, 2, 1)
    plt.imshow(np.reshape(test_images[i], [28, 28]), cmap='gray')
    plt.subplot(1, 2, 2)
    plt.imshow(np.reshape(preds[i], [28, 28]), cmap='gray')
    plt.show()
test_image = np.random.randn(1, 28, 28).astype(np.float32)
test_image[test_image < 0] = 0
test_image[test_image > 1] = 1.0
for i in range(10, 20):
    for j in range(28):
        test_image[0][i][j] = 1.0
test_image[0]
input_fn = tf.estimator.inputs.numpy_input_fn(x={'x': test_image}, shuffle=False)
pred = list(est.predict(input_fn))
plt.subplot(1, 2, 1)
plt.imshow(np.reshape(test_image, [28, 28]), cmap='gray')
plt.subplot(1, 2, 2)
plt.imshow(np.reshape(pred[0], [28, 28]), cmap='gray')
plt.show()
est.evaluate(input_fn=tf.estimator.inputs.numpy_input_fn(x={"x": test_image},
y=test_image.reshape(1, 784),
num_epochs=1,
shuffle=False))
test_image = np.random.randn(1, 28, 28).astype(np.float32)
test_image[test_image < 0] = 0
test_image[test_image > 0] = 0.25
for i in list(range(0, 4)) + list(range(24, 28)):
    for j in range(0, 28):
        test_image[0][i][j] = 1.0
        test_image[0][j][i] = 1.0
test_image[0]
input_fn = tf.estimator.inputs.numpy_input_fn(x={'x': test_image}, shuffle=False)
pred = list(est.predict(input_fn))
plt.subplot(1, 2, 1)
plt.imshow(np.reshape(test_image, [28, 28]), cmap='gray')
plt.subplot(1, 2, 2)
plt.imshow(np.reshape(pred[0], [28, 28]), cmap='gray')
plt.show()
```
# The End
# Introduction
In this lab, you will use Python and Google Colaboratory (hereafter "Colab") to learn numerical methods for dynamical systems. The main features of Python and Colab are as follows.
- Python
  - A programming language that is currently in wide use.
- Google Colaboratory (Colab)
  - A tool for writing and running Python in the browser.
  - Concretely, you create a notebook displayed in the browser (the page you are reading now is one notebook) and write and run Python code in it.
  - Besides Python code, notebooks can also contain text.
  - A session is disconnected automatically after 12 hours of continuous use, or after roughly 90 minutes of inactivity.
  - Because of this limitation, save your notebook frequently (how to save is explained below).
Watch [How to Get Started with Google Colaboratory (Japanese subtitles)](https://www.youtube.com/watch?v=inN8seMm7UI), which gives an overview of Colab.
# How the Lab Proceeds
This lab is carried out in Colab notebooks.
A notebook is made up of multiple cells.
There are `text cells` for entering text and `code cells` for entering Python code.
Read the explanation in each cell and run the code to deepen your understanding.
In particular, always carry out the instructions marked
<font color="red">
(TODO)
</font>
If you work through the notebook in order, you should be able to solve the report assignment at the end.
You can add and run your own code in any notebook. Observing how a program behaves is an important part of learning it, so write and run code actively.
Even if you break the notebook beyond repair while experimenting, you can restart from the initial state. In that case, start over from the
[lab top page](https://github.com/yyamnk/numerical-methods-py3/blob/master/uu_experiment.md).
Next, here is how to save a notebook.
The notebook currently open is the textbook copy and cannot be edited or run. Proceed with the lab as follows:
1. Open the textbook notebook from the [lab top page](https://github.com/yyamnk/numerical-methods-py3/blob/master/uu_experiment.md) (you are here).
2. As shown in the figure below, click `File` -> `Save a copy in Drive` at the top left of the browser to save the notebook to your own Google Drive.
3. Once the copied notebook opens, continue learning by editing and running it.

Once your saved copy is open, let's begin the lab.
# Basic Arithmetic
First, let's learn basic arithmetic in Python.
Enter and run Python code with the following steps:
1. Click "+ Code" at the top left of the browser to add a `code cell`.
2. Click the added cell and write your program in it.
3. Click the "▷ (play button)" at the left edge of the cell to run it.
 - [If you are unsure how, watch this](https://youtu.be/inN8seMm7UI?t=27)
As an example, a cell with Python code already entered is provided here. Run this cell.
```
# This is a comment. Python ignores everything after a # symbol.
1 + 5 # addition; running this cell prints the result (the first run takes a little while)
```
Running the cell should have printed the result.
<font color="red">
(TODO) Write and run code that computes the difference, product, and quotient of two numbers of your choice, using one arithmetic operation per cell. Check the results against hand calculation.
</font>
```
# code to compute a difference
# code to compute a product
# code to compute a quotient
```
# Exponentiation
In Python, exponentiation is written with `**`.
<font color="red">
(TODO) Run the following code and check its behavior.
</font>
```
2 ** 3 # 2 to the 3rd power (2 ^ 3)
2 ** 0.5 # square root of 2
```
# Complex Numbers
Python also supports complex-number arithmetic. Append `j` to denote an imaginary part.
<font color="red">
(TODO) Run the cell below and check its behavior.
</font>
```
(1 + 2j) + (3 + 4j)
```
# Variables
In Python, values can be assigned to variables.
Let's define variables, assign to them, and compute with them.
<font color="red">
(TODO) Run the cell below and check its behavior.
</font>
```
x = 10 # define a variable `x` and assign 10 to it.
# An assignment alone produces no output when the cell runs, but it is processed internally.
```
A variable defined this way can be referenced anywhere in the same notebook.
```
x # way 1 to reference a defined variable: write the variable by itself
print(x) # way 2 to reference a defined variable: use print()
```
<font color="red">
(TODO) Run the next cell and confirm that arithmetic with variables works.
</font>
```
r = 5 # define a new variable
pi = 3.14 # define another variable; a variable name may consist of several characters
2 * r * pi # compute using the variables.
```
# Example Program: the Quadratic Formula
With what we have covered so far, let's write a program that computes the roots of a quadratic equation. The roots of
$$
a x^2 + b x + c = 0
$$
are given by the quadratic formula:
$$
x = \frac{-b \pm \sqrt{b^2 - 4 a c}}{2a}
$$
<font color="red">
(TODO) Run the next cell and confirm that the roots are computed.
</font>
```
a = 1
b = -6
c = 13
(-b + (b ** 2 - 4 * a * c) ** 0.5) / (2 * a) # first root
(-b - (b ** 2 - 4 * a * c) ** 0.5) / (2 * a) # second root
```
# NumPy Arrays
In programming, we often want to manage a collection of values. The data structure that does this is called an array. Python offers several ways to create arrays; in this lab we use `NumPy arrays`. Basic usage is shown below.
<font color="red">
(TODO) Run the cell below and check its behavior.
</font>
```
# Load the library so that NumPy can be used.
import numpy as np # after this, numpy's features are available as `np` in later cells.
# Next, define an array.
xs = np.array([1, 2, 3, 3, 5]) # the elements are given inside []
xs # print xs to check it
# An array of all zeros can also be defined. Here we make one with 5 elements.
xs = np.zeros(5)
xs # print xs to check it
# Assign values to the NumPy array we defined.
# To assign, specify the element index (starting from 0) and assign with `=`.
xs[0] = 10 # assign to the first element
xs[1] = 20 # assign to the second element
xs[2] = 30 # assign to the third element
xs # print xs to check it
# To read an element of the array, write `array_name[index]`
xs[2]
```
In numerical computation, we often build an array from an initial value, a step size, and a final value. This is written as follows (note that `np.arange` stops just before the stop value):
```
ts = np.arange(start=10, stop=15, step=0.5) # sequence from 10 up to (but not including) 15 in steps of 0.5
ts
```
# Functions
Programs often contain procedures that are executed over and over. Packaging such a procedure into a reusable component is called a `function`.
Python comes with many useful functions already implemented; these are called built-in functions.
Examples of built-in functions include:
- `len()`, which returns the length of an array
- `np.mean()`, which returns the mean of a NumPy array's elements
- `np.abs()`, which returns the absolute values of a NumPy array's elements
- `np.cos()` and `np.sin()`, which return the $\cos$ and $\sin$ of a NumPy array
Variables written inside the `()` are called arguments, and the output of a function is called its return value.
<font color="red">
(TODO) Run the cell below and check its behavior.
</font>
```
len(xs) # print the length of the array (argument: array xs, return value: number of elements of xs)
np.mean(xs) # print the mean of the array's elements (argument: array xs, return value: mean of the elements of xs)
np.abs(np.array([-1, 2, -3, 4])) # print the absolute value of each element of the array
```
In Python, you can use not only built-in functions but also functions you define yourself. Let's define and call a function.
Consider a function that computes $y$ from a variable $x$:
$$
y = ax + b
$$
With $a=5$ and $b=3$, the Python code is as follows.
<font color="red">
(TODO) Run the cell below and check its behavior.
</font>
```
# Define the function
def myfunc(x): # a function is defined with: def function_name(parameter):
  # code inside a function must be indented (start the line with two half-width spaces).
  a = 5 # define a variable (valid only inside the function)
  b = 3
  y = a * x + b
  return y # the return value is written with return.
# Call the function: as with built-in functions, write `function_name(argument)`
myfunc(5)
```
<font color="red">
(TODO) In the cell below, write code that calls `myfunc` with arguments `10` and `20`, and check the output.
</font>
```
# code with argument 10
# code with argument 20
```
# for Statements
Use a for statement to repeat a process.
For example, consider printing the numbers from 0 to `n` in steps of 1.
Writing
```
print(0)
print(1)
...(snip)...
print(n)
```
would be tedious, but with a for statement it can be written as follows.
<font color="red">
(TODO) Run the cell below and check its behavior.
</font>
```
# program that prints the numbers from 0 to n
n = 10
for i in range(0, n + 1): # the variable `i` takes the values 0 through n in turn.
  print(i) # print the value of i; code inside the for statement must be indented
print('Done') # this line is not indented, so it is not part of the for statement.
```
In the code above, the `print(i)` line is indented (the line begins with two half-width spaces). A for statement repeats only the indented code immediately following it, so the final `print('Done')` runs only once.
# Creating Graphs
The results of numerical computation can be drawn as graphs.
<font color="red">
(TODO) Run the cell below and check its behavior.
</font>
```
from matplotlib import pyplot as plt # load the pyplot library used for drawing graphs.
# After this it is available as `plt`.
xs = np.array([0, 1, 2, 3, 4]) # numpy array for the x axis
ys = np.array([-1, 1, -1, 1, -1]) # numpy array for the y axis
plt.plot(xs, ys) # plt.plot() draws a graph with the first argument on the x axis and the second on the y axis.
# Which variable goes on which axis is determined by the order of the arguments.
# To plot ys on the x axis and xs on the y axis, do the following.
plt.plot(ys, xs)
# Draw several graphs at once
plt.plot(xs, ys, 'r-') # 'r-' is an option meaning: plot xs and ys using a red line
plt.plot(xs, 2 * ys, 'g:') # plot using a green dotted line
plt.plot(xs, 3 * ys, 'b.') # plot using blue dots
```
# Checking Your Understanding with an Example Program
Using what you have learned so far, try the following exercise.
<font color="red">
(TODO) Write a program that plots the shape of $y=x^3$, where $x$ ranges over [-3, 3] in steps of 0.2.
</font>
> Hints, in case you are stuck
>
> 1. Define a function `f` that computes $y=x^3$.
> 2. Define the sequence of $x$ points as a numpy array `xs`.
> 3. Define the sequence of $y$ points as an all-zero numpy array `ys`.
> 4. Assign the return value of `f(xs[i])` to `ys[i]` (`i` is an index).
> 5. Put step 4 inside a for statement so that it runs for every `i`.
> 6. Plot `xs` and `ys`.
Even if you cannot solve it, the thinking itself is important, so puzzle over it for a while.
When you are done, check your answer [here](https://colab.research.google.com/github/yyamnk/numerical-methods-py3/blob/master/exp_python1_ans.ipynb).
# Summary So Far
This notebook covered the basics of using Python and Colab. It is the bare minimum needed for this lab and only a small fraction of what these tools offer; see an introductory book for details.
Once you have worked through this notebook, return to the [lab top page](https://github.com/yyamnk/numerical-methods-py3/blob/master/uu_experiment.md) and continue to the next notebook.
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Build a Convolutional Neural Network using Estimators
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/tutorials/estimators/cnn.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/estimators/cnn.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
> Note: This is an archived TF1 notebook. These are configured
to run in TF2's
[compatibility mode](https://www.tensorflow.org/guide/migrate)
but will run in TF1 as well. To use TF1 in Colab, use the
[%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb)
magic.
The `tf.layers` module provides a high-level API that makes
it easy to construct a neural network. It provides methods that facilitate the
creation of dense (fully connected) layers and convolutional layers, adding
activation functions, and applying dropout regularization. In this tutorial,
you'll learn how to use `layers` to build a convolutional neural network model
to recognize the handwritten digits in the MNIST data set.

The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000
training examples and 10,000 test examples of the handwritten digits 0–9,
formatted as 28x28-pixel monochrome images.
## Get Started
Let's set up the imports for our TensorFlow program:
```
import tensorflow.compat.v1 as tf
import numpy as np
tf.logging.set_verbosity(tf.logging.INFO)
```
## Intro to Convolutional Neural Networks
Convolutional neural networks (CNNs) are the current state-of-the-art model
architecture for image classification tasks. CNNs apply a series of filters to
the raw pixel data of an image to extract and learn higher-level features, which
the model can then use for classification. CNNs contains three components:
* **Convolutional layers**, which apply a specified number of convolution
filters to the image. For each subregion, the layer performs a set of
mathematical operations to produce a single value in the output feature map.
Convolutional layers then typically apply a
[ReLU activation function](https://en.wikipedia.org/wiki/Rectifier_\(neural_networks\)) to
the output to introduce nonlinearities into the model.
* **Pooling layers**, which
[downsample the image data](https://en.wikipedia.org/wiki/Convolutional_neural_network#Pooling_layer)
extracted by the convolutional layers to reduce the dimensionality of the
feature map in order to decrease processing time. A commonly used pooling
algorithm is max pooling, which extracts subregions of the feature map
(e.g., 2x2-pixel tiles), keeps their maximum value, and discards all other
values.
* **Dense (fully connected) layers**, which perform classification on the
features extracted by the convolutional layers and downsampled by the
pooling layers. In a dense layer, every node in the layer is connected to
every node in the preceding layer.
Typically, a CNN is composed of a stack of convolutional modules that perform
feature extraction. Each module consists of a convolutional layer followed by a
pooling layer. The last convolutional module is followed by one or more dense
layers that perform classification. The final dense layer in a CNN contains a
single node for each target class in the model (all the possible classes the
model may predict), with a
[softmax](https://en.wikipedia.org/wiki/Softmax_function) activation function to
generate a value between 0–1 for each node (the sum of all these softmax values
is equal to 1). We can interpret the softmax values for a given image as
relative measurements of how likely it is that the image falls into each target
class.
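The softmax behavior described above is easy to verify with a few lines of NumPy (a standalone illustration, not part of the tutorial's model code):

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])   # example raw scores for 3 classes
exp = np.exp(logits - logits.max())  # subtract the max for numerical stability
softmax = exp / exp.sum()
print(softmax)        # each value lies in (0, 1)
print(softmax.sum())  # the values sum to 1
```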
Note: For a more comprehensive walkthrough of CNN architecture, see Stanford University's [Convolutional Neural Networks for Visual Recognition course material](https://cs231n.github.io/convolutional-networks/).
## Building the CNN MNIST Classifier
Let's build a model to classify the images in the MNIST dataset using the
following CNN architecture:
1. **Convolutional Layer #1**: Applies 32 5x5 filters (extracting 5x5-pixel
subregions), with ReLU activation function
2. **Pooling Layer #1**: Performs max pooling with a 2x2 filter and stride of 2
(which specifies that pooled regions do not overlap)
3. **Convolutional Layer #2**: Applies 64 5x5 filters, with ReLU activation
function
4. **Pooling Layer #2**: Again, performs max pooling with a 2x2 filter and
stride of 2
5. **Dense Layer #1**: 1,024 neurons, with dropout regularization rate of 0.4
(probability of 0.4 that any given element will be dropped during training)
6. **Dense Layer #2 (Logits Layer)**: 10 neurons, one for each digit target
class (0–9).
The `tf.layers` module contains methods to create each of the three layer types
above:
* `conv2d()`. Constructs a two-dimensional convolutional layer. Takes number
of filters, filter kernel size, padding, and activation function as
arguments.
* `max_pooling2d()`. Constructs a two-dimensional pooling layer using the
max-pooling algorithm. Takes pooling filter size and stride as arguments.
* `dense()`. Constructs a dense layer. Takes number of neurons and activation
function as arguments.
Each of these methods accepts a tensor as input and returns a transformed tensor
as output. This makes it easy to connect one layer to another: just take the
output from one layer-creation method and supply it as input to another.
Add the following `cnn_model_fn` function, which
conforms to the interface expected by TensorFlow's Estimator API (more on this
later in [Create the Estimator](#create-the-estimator)). This function takes
MNIST feature data, labels, and mode (from
`tf.estimator.ModeKeys`: `TRAIN`, `EVAL`, `PREDICT`) as arguments;
configures the CNN; and returns predictions, loss, and a training operation:
```
def cnn_model_fn(features, labels, mode):
"""Model function for CNN."""
# Input Layer
input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])
# Convolutional Layer #1
conv1 = tf.layers.conv2d(
inputs=input_layer,
filters=32,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu)
# Pooling Layer #1
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
# Convolutional Layer #2 and Pooling Layer #2
conv2 = tf.layers.conv2d(
inputs=pool1,
filters=64,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu)
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
# Dense Layer
pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
dropout = tf.layers.dropout(
inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN)
# Logits Layer
logits = tf.layers.dense(inputs=dropout, units=10)
predictions = {
# Generate predictions (for PREDICT and EVAL mode)
"classes": tf.argmax(input=logits, axis=1),
# Add `softmax_tensor` to the graph. It is used for PREDICT and by the
# `logging_hook`.
"probabilities": tf.nn.softmax(logits, name="softmax_tensor")
}
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
# Calculate Loss (for both TRAIN and EVAL modes)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
# Configure the Training Op (for TRAIN mode)
if mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
train_op = optimizer.minimize(
loss=loss,
global_step=tf.train.get_global_step())
return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
# Add evaluation metrics (for EVAL mode)
eval_metric_ops = {
"accuracy": tf.metrics.accuracy(
labels=labels, predictions=predictions["classes"])
}
return tf.estimator.EstimatorSpec(
mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
```
The following sections (with headings corresponding to each code block above)
dive deeper into the `tf.layers` code used to create each layer, as well as how
to calculate loss, configure the training op, and generate predictions. If
you're already experienced with CNNs and [TensorFlow `Estimator`s](../../guide/custom_estimators.md),
and find the above code intuitive, you may want to skim these sections or just
skip ahead to ["Training and Evaluating the CNN MNIST Classifier"](#train_eval_mnist).
### Input Layer
The methods in the `layers` module for creating convolutional and pooling layers
for two-dimensional image data expect input tensors to have a shape of
<code>[<em>batch_size</em>, <em>image_height</em>, <em>image_width</em>,
<em>channels</em>]</code> by default. This behavior can be changed using the
<code><em>data_format</em></code> parameter, whose components are defined as follows:
* `batch_size` —Size of the subset of examples to use when performing
gradient descent during training.
* `image_height` —Height of the example images.
* `image_width` —Width of the example images.
* `channels` —Number of color channels in the example images. For color
images, the number of channels is 3 (red, green, blue). For monochrome
images, there is just 1 channel (black).
* `data_format` —A string, one of `channels_last` (default) or `channels_first`.
`channels_last` corresponds to inputs with shape
`(batch, ..., channels)` while `channels_first` corresponds to
inputs with shape `(batch, channels, ...)`.
Here, our MNIST dataset is composed of monochrome 28x28 pixel images, so the
desired shape for our input layer is <code>[<em>batch_size</em>, 28, 28,
1]</code>.
To convert our input feature map (`features`) to this shape, we can perform the
following `reshape` operation:
```
input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])
```
Note that we've indicated `-1` for batch size, which specifies that this
dimension should be dynamically computed based on the number of input values in
`features["x"]`, holding the size of all other dimensions constant. This allows
us to treat `batch_size` as a hyperparameter that we can tune. For example, if
we feed examples into our model in batches of 5, `features["x"]` will contain
3,920 values (one value for each pixel in each image), and `input_layer` will
have a shape of `[5, 28, 28, 1]`. Similarly, if we feed examples in batches of
100, `features["x"]` will contain 78,400 values, and `input_layer` will have a
shape of `[100, 28, 28, 1]`.
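The same `-1` inference can be demonstrated with NumPy's `reshape`, which follows the semantics described above:

```python
import numpy as np

batch = np.zeros(5 * 784, dtype=np.float32)  # 5 flattened 28x28 images: 3,920 values
input_layer = batch.reshape(-1, 28, 28, 1)   # -1 lets the batch dimension be inferred
print(input_layer.shape)  # (5, 28, 28, 1)
```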
### Convolutional Layer #1
In our first convolutional layer, we want to apply 32 5x5 filters to the input
layer, with a ReLU activation function. We can use the `conv2d()` method in the
`layers` module to create this layer as follows:
```
conv1 = tf.layers.conv2d(
inputs=input_layer,
filters=32,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu)
```
The `inputs` argument specifies our input tensor, which must have the shape
<code>[<em>batch_size</em>, <em>image_height</em>, <em>image_width</em>,
<em>channels</em>]</code>. Here, we're connecting our first convolutional layer
to `input_layer`, which has the shape <code>[<em>batch_size</em>, 28, 28,
1]</code>.
Note: <code>conv2d()</code> will instead accept a shape of <code>[<em>batch_size</em>, <em>channels</em>, <em>image_height</em>, <em>image_width</em>]</code> when passed the argument <code>data_format=channels_first</code>.
The `filters` argument specifies the number of filters to apply (here, 32), and
`kernel_size` specifies the dimensions of the filters as <code>[<em>height</em>,
<em>width</em>]</code> (here, <code>[5, 5]</code>).
<p class="tip"><b>TIP:</b> If filter height and width have the same value, you can instead specify a
single integer for <code>kernel_size</code>—e.g., <code>kernel_size=5</code>.</p>
The `padding` argument specifies one of two enumerated values
(case-insensitive): `valid` (default value) or `same`. To specify that the
output tensor should have the same height and width values as the input tensor,
we set `padding=same` here, which instructs TensorFlow to add 0 values to the
edges of the input tensor to preserve height and width of 28. (Without padding,
a 5x5 convolution over a 28x28 tensor will produce a 24x24 tensor, as there are
24x24 locations to extract a 5x5 tile from a 28x28 grid.)
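The height/width arithmetic in that parenthetical follows the standard convolution output-size formula; a quick sketch to check it (`conv_out_size` is a hypothetical helper, not part of the tutorial):

```python
def conv_out_size(in_size, kernel, stride=1, pad=0):
    # standard formula: floor((in + 2*pad - kernel) / stride) + 1
    return (in_size + 2 * pad - kernel) // stride + 1

print(conv_out_size(28, 5))         # "valid" padding: 24
print(conv_out_size(28, 5, pad=2))  # "same" padding for a 5x5 kernel: 28
```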
The `activation` argument specifies the activation function to apply to the
output of the convolution. Here, we specify ReLU activation with
`tf.nn.relu`.
Our output tensor produced by `conv2d()` has a shape of
<code>[<em>batch_size</em>, 28, 28, 32]</code>: the same height and width
dimensions as the input, but now with 32 channels holding the output from each
of the filters.
### Pooling Layer #1
Next, we connect our first pooling layer to the convolutional layer we just
created. We can use the `max_pooling2d()` method in `layers` to construct a
layer that performs max pooling with a 2x2 filter and stride of 2:
```
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
```
Again, `inputs` specifies the input tensor, with a shape of
<code>[<em>batch_size</em>, <em>image_height</em>, <em>image_width</em>,
<em>channels</em>]</code>. Here, our input tensor is `conv1`, the output from
the first convolutional layer, which has a shape of <code>[<em>batch_size</em>,
28, 28, 32]</code>.
Note: As with <code>conv2d()</code>, <code>max_pooling2d()</code> will instead
accept a shape of <code>[<em>batch_size</em>, <em>channels</em>,
<em>image_height</em>, <em>image_width</em>]</code> when passed the argument
<code>data_format=channels_first</code>.
The `pool_size` argument specifies the size of the max pooling filter as
<code>[<em>height</em>, <em>width</em>]</code> (here, `[2, 2]`). If both
dimensions have the same value, you can instead specify a single integer (e.g.,
`pool_size=2`).
The `strides` argument specifies the size of the stride. Here, we set a stride
of 2, which indicates that the subregions extracted by the filter should be
separated by 2 pixels in both the height and width dimensions (for a 2x2 filter,
this means that none of the regions extracted will overlap). If you want to set
different stride values for height and width, you can instead specify a tuple or
list (e.g., `strides=[3, 6]`).
Our output tensor produced by `max_pooling2d()` (`pool1`) has a shape of
<code>[<em>batch_size</em>, 14, 14, 32]</code>: the 2x2 filter reduces height and width by 50% each.
### Convolutional Layer #2 and Pooling Layer #2
We can connect a second convolutional and pooling layer to our CNN using
`conv2d()` and `max_pooling2d()` as before. For convolutional layer #2, we
configure 64 5x5 filters with ReLU activation, and for pooling layer #2, we use
the same specs as pooling layer #1 (a 2x2 max pooling filter with stride of 2):
```
conv2 = tf.layers.conv2d(
inputs=pool1,
filters=64,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu)
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
```
Note that convolutional layer #2 takes the output tensor of our first pooling
layer (`pool1`) as input, and produces the tensor `conv2` as output. `conv2`
has a shape of <code>[<em>batch_size</em>, 14, 14, 64]</code>, the same height and width as `pool1` (due to `padding="same"`), and 64 channels for the 64
filters applied.
Pooling layer #2 takes `conv2` as input, producing `pool2` as output. `pool2`
has shape <code>[<em>batch_size</em>, 7, 7, 64]</code> (50% reduction of height and width from `conv2`).
### Dense Layer
Next, we want to add a dense layer (with 1,024 neurons and ReLU activation) to
our CNN to perform classification on the features extracted by the
convolution/pooling layers. Before we connect the layer, however, we'll flatten
our feature map (`pool2`) to shape <code>[<em>batch_size</em>,
<em>features</em>]</code>, so that our tensor has only two dimensions:
```
pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
```
In the `reshape()` operation above, the `-1` signifies that the *`batch_size`*
dimension will be dynamically calculated based on the number of examples in our
input data. Each example has 7 (`pool2` height) * 7 (`pool2` width) * 64
(`pool2` channels) features, so we want the `features` dimension to have a value
of 7 * 7 * 64 (3136 in total). The output tensor, `pool2_flat`, has shape
<code>[<em>batch_size</em>, 3136]</code>.
Now, we can use the `dense()` method in `layers` to connect our dense layer as
follows:
```
dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
```
The `inputs` argument specifies the input tensor: our flattened feature map,
`pool2_flat`. The `units` argument specifies the number of neurons in the dense
layer (1,024). The `activation` argument takes the activation function; again,
we'll use `tf.nn.relu` to add ReLU activation.
To help improve the results of our model, we also apply dropout regularization
to our dense layer, using the `dropout` method in `layers`:
```
dropout = tf.layers.dropout(
inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN)
```
Again, `inputs` specifies the input tensor, which is the output tensor from our
dense layer (`dense`).
The `rate` argument specifies the dropout rate; here, we use `0.4`, which means
40% of the elements will be randomly dropped out during training.
The `training` argument takes a boolean specifying whether or not the model is
currently being run in training mode; dropout will only be performed if
`training` is `True`. Here, we check if the `mode` passed to our model function
`cnn_model_fn` is `TRAIN` mode.
Our output tensor `dropout` has shape <code>[<em>batch_size</em>, 1024]</code>.
### Logits Layer
The final layer in our neural network is the logits layer, which will return the
raw values for our predictions. We create a dense layer with 10 neurons (one for
each target class 0–9), with linear activation (the default):
```
logits = tf.layers.dense(inputs=dropout, units=10)
```
Our final output tensor of the CNN, `logits`, has shape `[batch_size, 10]`.
### Generate Predictions {#generate_predictions}
The logits layer of our model returns our predictions as raw values in a
<code>[<em>batch_size</em>, 10]</code>-dimensional tensor. Let's convert these
raw values into two different formats that our model function can return:
* The **predicted class** for each example: a digit from 0–9.
* The **probabilities** for each possible target class for each example: the
probability that the example is a 0, is a 1, is a 2, etc.
For a given example, our predicted class is the element in the corresponding row
of the logits tensor with the highest raw value. We can find the index of this
element using the `tf.argmax`
function:
```
tf.argmax(input=logits, axis=1)
```
The `input` argument specifies the tensor from which to extract maximum
values—here `logits`. The `axis` argument specifies the axis of the `input`
tensor along which to find the greatest value. Here, we want to find the largest
value along the dimension with index of 1, which corresponds to our predictions
(recall that our logits tensor has shape <code>[<em>batch_size</em>,
10]</code>).
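To make the `axis` semantics concrete, here is a small sketch using NumPy rather than TensorFlow (the array values are made up for illustration):

```python
import numpy as np

# Toy "logits" for a batch of 2 examples over 10 classes.
logits = np.array([
    [0.1, 0.2, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # largest value at index 2
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.0],  # largest value at index 9
])

# axis=1 scans across the 10 class scores within each row,
# returning one predicted class per example.
predicted_classes = np.argmax(logits, axis=1)
print(predicted_classes)  # [2 9]
```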
We can derive probabilities from our logits layer by applying softmax activation
using `tf.nn.softmax`:
```
tf.nn.softmax(logits, name="softmax_tensor")
```
Note: We use the `name` argument to explicitly name this operation `softmax_tensor`, so we can reference it later. (We'll set up logging for the softmax values in ["Set Up a Logging Hook"](#set-up-a-logging-hook)).
We compile our predictions in a dict, and return an `EstimatorSpec` object:
```
predictions = {
"classes": tf.argmax(input=logits, axis=1),
"probabilities": tf.nn.softmax(logits, name="softmax_tensor")
}
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
```
### Calculate Loss {#calculating-loss}
For both training and evaluation, we need to define a
[loss function](https://en.wikipedia.org/wiki/Loss_function)
that measures how closely the model's predictions match the target classes. For
multiclass classification problems like MNIST,
[cross entropy](https://en.wikipedia.org/wiki/Cross_entropy) is typically used
as the loss metric. The following code calculates cross entropy when the model
runs in either `TRAIN` or `EVAL` mode:
```
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
```
Let's take a closer look at what's happening above.
Our `labels` tensor contains a list of class indices for our examples, e.g. `[1,
9, ...]`. `logits` contains the linear outputs of our last layer.
`tf.losses.sparse_softmax_cross_entropy` calculates the softmax cross entropy
(also known as categorical cross entropy or negative log-likelihood) from these
two inputs in an efficient, numerically stable way.
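To see what this loss computes, here is a hedged NumPy sketch (not TensorFlow's implementation, which fuses these steps for efficiency and stability): apply softmax to the logits, then average the negative log-probability each example's true class received. The logits and labels below are made-up illustrative values.

```python
import numpy as np

# Toy logits for a batch of 2 examples over 3 classes (illustrative values).
logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]])
labels = np.array([0, 1])  # true class index for each example

# Numerically stable softmax: subtract each row's max before exponentiating.
shifted = logits - logits.max(axis=1, keepdims=True)
probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)

# Mean negative log-probability assigned to the true classes.
loss = -np.log(probs[np.arange(len(labels)), labels]).mean()
print(loss)  # roughly 0.32
```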
### Configure the Training Op
In the previous section, we defined loss for our CNN as the softmax
cross-entropy of the logits layer and our labels. Let's configure our model to
optimize this loss value during training. We'll use a learning rate of 0.001 and
[stochastic gradient descent](https://en.wikipedia.org/wiki/Stochastic_gradient_descent)
as the optimization algorithm:
```
if mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
train_op = optimizer.minimize(
loss=loss,
global_step=tf.train.get_global_step())
return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
```
Note: For a more in-depth look at configuring training ops for Estimator model functions, see ["Defining the training op for the model"](../../guide/custom_estimators.md#defining-the-training-op-for-the-model) in the ["Creating Estimators in tf.estimator"](../../guide/custom_estimators.md) tutorial.
### Add evaluation metrics
To add an accuracy metric to our model, we define an `eval_metric_ops` dict in
`EVAL` mode as follows:
```
eval_metric_ops = {
"accuracy": tf.metrics.accuracy(
labels=labels, predictions=predictions["classes"])
}
return tf.estimator.EstimatorSpec(
mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
```
<a id="train_eval_mnist"></a>
## Training and Evaluating the CNN MNIST Classifier
We've coded our MNIST CNN model function; now we're ready to train and evaluate
it.
### Load Training and Test Data
First, let's load our training and test data with the following code:
```
# Load training and eval data
((train_data, train_labels),
(eval_data, eval_labels)) = tf.keras.datasets.mnist.load_data()
train_data = train_data/np.float32(255)
train_labels = train_labels.astype(np.int32) # not required
eval_data = eval_data/np.float32(255)
eval_labels = eval_labels.astype(np.int32) # not required
```
We store the training feature data (the raw pixel values for 60,000 images of
hand-drawn digits) and training labels (the corresponding value from 0–9 for
each image) as [numpy
arrays](https://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html)
in `train_data` and `train_labels`, respectively. Similarly, we store the
evaluation feature data (10,000 images) and evaluation labels in `eval_data`
and `eval_labels`, respectively.
### Create the Estimator {#create-the-estimator}
Next, let's create an `Estimator` (a TensorFlow class for performing high-level
model training, evaluation, and inference) for our model. Add the following code
to `main()`:
```
# Create the Estimator
mnist_classifier = tf.estimator.Estimator(
model_fn=cnn_model_fn, model_dir="/tmp/mnist_convnet_model")
```
The `model_fn` argument specifies the model function to use for training,
evaluation, and prediction; we pass it the `cnn_model_fn` we created in
["Building the CNN MNIST Classifier."](#building-the-cnn-mnist-classifier) The
`model_dir` argument specifies the directory where model data (checkpoints) will
be saved (here, we specify the temp directory `/tmp/mnist_convnet_model`, but
feel free to change to another directory of your choice).
Note: For an in-depth walkthrough of the TensorFlow `Estimator` API, see the tutorial [Creating Estimators in tf.estimator](../../guide/custom_estimators.md).
### Set Up a Logging Hook {#set_up_a_logging_hook}
Since CNNs can take a while to train, let's set up some logging so we can track
progress during training. We can use TensorFlow's `tf.train.SessionRunHook` to create a
`tf.train.LoggingTensorHook`
that will log the probability values from the softmax layer of our CNN. Add the
following to `main()`:
```
# Set up logging for predictions
tensors_to_log = {"probabilities": "softmax_tensor"}
logging_hook = tf.train.LoggingTensorHook(
tensors=tensors_to_log, every_n_iter=50)
```
We store a dict of the tensors we want to log in `tensors_to_log`. Each key is a
label of our choice that will be printed in the log output, and the
corresponding value is the name of a `Tensor` in the TensorFlow graph. Here, our
`probabilities` can be found in `softmax_tensor`, the name we gave our softmax
operation earlier when we generated the probabilities in `cnn_model_fn`.
Note: If you don't explicitly assign a name to an operation via the `name` argument, TensorFlow will assign a default name. A couple of easy ways to discover the names applied to operations are to visualize your graph in [TensorBoard](../../guide/graph_viz.md) or to enable the [TensorFlow Debugger (tfdbg)](../../guide/debugger.md).
Next, we create the `LoggingTensorHook`, passing `tensors_to_log` to the
`tensors` argument. We set `every_n_iter=50`, which specifies that probabilities
should be logged after every 50 steps of training.
### Train the Model
Now we're ready to train our model, which we can do by creating `train_input_fn`
and calling `train()` on `mnist_classifier`. In the `numpy_input_fn` call, we pass the training feature data and labels to
`x` (as a dict) and `y`, respectively. We set a `batch_size` of `100` (which
means that the model will train on minibatches of 100 examples at each step).
`num_epochs=None` means that the model will train until the specified number of
steps is reached. We also set `shuffle=True` to shuffle the training data. Then train the model a single step and log the output:
```
# Train the model
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": train_data},
y=train_labels,
batch_size=100,
num_epochs=None,
shuffle=True)
# train one step and display the probabilities
mnist_classifier.train(
input_fn=train_input_fn,
steps=1,
hooks=[logging_hook])
```
Now set `steps=1000` to train the model longer, this time without logging each step, which keeps this example's runtime reasonable. Training CNNs is computationally intensive. To increase the accuracy of your model, increase the number of `steps` passed to `train()`, for example to 20,000.
```
mnist_classifier.train(input_fn=train_input_fn, steps=1000)
```
### Evaluate the Model
Once training is complete, we want to evaluate our model to determine its
accuracy on the MNIST test set. We call the `evaluate` method, which evaluates
the metrics we specified in `eval_metric_ops` argument in the `model_fn`.
Add the following to `main()`:
```
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": eval_data},
y=eval_labels,
num_epochs=1,
shuffle=False)
eval_results = mnist_classifier.evaluate(input_fn=eval_input_fn)
print(eval_results)
```
To create `eval_input_fn`, we set `num_epochs=1`, so that the model evaluates
the metrics over one epoch of data and returns the result. We also set
`shuffle=False` to iterate through the data sequentially.
## Additional Resources
To learn more about TensorFlow Estimators and CNNs in TensorFlow, see the
following resources:
* [Creating Estimators in tf.estimator](../../guide/custom_estimators.md)
provides an introduction to the TensorFlow Estimator API. It walks through
configuring an Estimator, writing a model function, calculating loss, and
defining a training op.
* [Advanced Convolutional Neural Networks](../../tutorials/images/deep_cnn.md) walks through how to build an MNIST CNN classification model
*without estimators* using lower-level TensorFlow operations.
# This notebook helps you to do several things:
1) Find your optimal learning rate
https://docs.fast.ai/callbacks.html#LRFinder
2)
```
%reload_ext autoreload
%autoreload 2
import fastai
from fastai.callbacks import *
from torch.utils.data import Dataset, DataLoader
from models import UNet2d_assembled
import numpy as np
import torch
from fastai.vision import *
torch.backends.cudnn.benchmark = True
DEVICE = 'cuda'
OS = 'Windows'
# GET DATASET
class CMRIreconDataset(Dataset):
"""CMRIrecon dataset."""
def __init__(self, input_file_path, target_file_path):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.inputs = np.load(input_file_path)
self.targets = np.load(target_file_path)
def __len__(self):
# print("print length of inputs",len(self.inputs))
# print("print shape of inputs",np.shape(self.inputs))
return len(self.inputs)
def __getitem__(self, idx):
# sample = {'input': self.inputs[idx], 'target': self.targets[idx]}
X = self.inputs[idx].astype(np.float32)
Y = self.targets[idx].astype(np.float32)
return X, Y
if OS == 'Linux':
CMRIdataset = CMRIreconDataset(
input_file_path = \
'/home/nw92/reconproject_data/input_data.npy', \
target_file_path = \
'/home/nw92/reconproject_data/target_data.npy')
elif OS == 'Windows':
CMRIdataset = CMRIreconDataset(
input_file_path = \
'C:/Users/littl/Documents/PythonScripts/reconproject_data/input_data.npy', \
target_file_path = \
'C:/Users/littl/Documents/PythonScripts/reconproject_data/target_data.npy')
else:
print("Please use valid COMPUTER.\nOptions:\t\'Windows\'\t\'Linux\'")
# SPLIT DATASET INTO TRAIN, VAL AND TEST #####################################
# CMRIdataset = train_dataset + test_dataset
print("\nSplit dataset into train data (80%) and test data (20%)...\n")
train_size = int(0.8 * len(CMRIdataset))
test_size = len(CMRIdataset) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(CMRIdataset, [train_size, test_size])
# train_dataset = train_dataset + val_dataset
print("\nSplit train data into train data (80%) and val data (20%)...\n")
train_size = int(0.8 * len(train_dataset))
val_size = len(train_dataset) - train_size
train_dataset, val_dataset = torch.utils.data.random_split(train_dataset, [train_size, val_size])
print("Load train_dl, val_dl and test_dl...")
# load train set
train_dl = DataLoader(train_dataset, batch_size=16,
shuffle=True, num_workers=0)
# load validation set
valid_dl = DataLoader(val_dataset, batch_size=16,
shuffle=True, num_workers=0)
# load test set
test_dl = DataLoader(test_dataset, batch_size=16,
shuffle=True, num_workers=0)
print("train_dl, val_dl and test_dl loaded!")
# # DEFINE DATABUNCH TO FEED THE MODEL
data = DataBunch(train_dl,
valid_dl,
test_dl,
device=DEVICE,
# dl_tfms:Optional[Collection[Callable]]=None,
# path:PathOrStr='.',
# collate_fn:Callable='data_collate',
# no_check:bool=False
)
# data.show_batch(rows=4)
# DEFINE LEARNER
loss_func = nn.MSELoss()
metrics = mean_absolute_error
model = UNet2d_assembled.UNet2D(20) #20 channels
learn = Learner(data = data,
model = model,
# opt_func:Callable='Adam',
loss_func = loss_func,
metrics = metrics,
# callback_fns=[CSVLogger],
# true_wd:bool=True,
# bn_wd:bool=True,
# wd:Floats=0.01,
# train_bn:bool=True,
# path:str=None,
# model_dir:PathOrStr='models',
# callback_fns:Collection[Callable]=None,
# callbacks:Collection[Callback]=<factory>,
# layer_groups:ModuleList=None,
# add_time:bool=True,
# silent:bool=None
)
# learn.summary()
learn.lr_find(start_lr=1e-07, end_lr=10)
# learn = cnn_learner(data, models.resnet18, metrics=accuracy)
# learn.fit(1)
learn.recorder.plot()
learn.recorder.plot()
lr = 1.5e-2
learn.fit_one_cycle(3, lr)
learn.recorder.plot_lr(show_moms=True)
learn = Learner(data = data,
model = model,
# opt_func:Callable='Adam',
loss_func = loss_func,
metrics = metrics,
callback_fns=[CSVLogger],
# true_wd:bool=True,
# bn_wd:bool=True,
# wd:Floats=0.01,
# train_bn:bool=True,
# path:str=None,
# model_dir:PathOrStr='models',
# callback_fns:Collection[Callable]=None,
# callbacks:Collection[Callback]=<factory>,
# layer_groups:ModuleList=None,
# add_time:bool=True,
# silent:bool=None
)
learn.fit(3)
learn.fit(3)
learn.fit(3, 1e-1)
learn.csv_logger.read_logged_file()
def fit_odd_shedule(learn, lr):
n = len(learn.data.train_dl)
phases = [TrainingPhase(n).schedule_hp('lr', lr, anneal=annealing_cos),
TrainingPhase(n*2).schedule_hp('lr', lr, anneal=annealing_poly(2))]
sched = GeneralScheduler(learn, phases)
learn.callbacks.append(sched)
total_epochs = 3
learn.fit(total_epochs)
learn = Learner(data = data,
model = model,
# opt_func:Callable='Adam',
loss_func = loss_func,
metrics = metrics,
# callback_fns=[CSVLogger],
# true_wd:bool=True,
# bn_wd:bool=True,
# wd:Floats=0.01,
# train_bn:bool=True,
# path:str=None,
# model_dir:PathOrStr='models',
# callback_fns:Collection[Callable]=None,
# callbacks:Collection[Callback]=<factory>,
# layer_groups:ModuleList=None,
# add_time:bool=True,
# silent:bool=None
)
fit_odd_shedule(learn, lr)
learn.recorder.plot_lr()
learn = Learner(data = data,
model = model,
# opt_func:Callable='Adam',
loss_func = loss_func,
metrics = metrics,
# callback_fns=[CSVLogger,
# SaveModelCallback(learn,
# every='epoch',
# monitor='valid_loss')],
# true_wd:bool=True,
# bn_wd:bool=True,
# wd:Floats=0.01,
# train_bn:bool=True,
# path:str=None,
# model_dir:PathOrStr='models',
# callback_fns:Collection[Callable]=None,
# callbacks:Collection[Callback]=<factory>,
# layer_groups:ModuleList=None,
# add_time:bool=True,
# silent:bool=None
)
learn.fit_one_cycle(3, lr,
callbacks=[fastai.callbacks.SaveModelCallback(learn, every='epoch', monitor='valid_loss')])
```
```
## import data manipulation packages for data cleaning and distance calculation
import pandas as pd
import numpy as np
from sklearn.neighbors import DistanceMetric
from math import radians
## DATA CLEANING AND PREPARATION
## import dataset as variable 'city' and drop NaN
cities = pd.read_excel('worldcities.xlsx')
ct = cities.dropna(axis = 'rows', how = 'any')
## add a London starting point as 'London_st', shifted slightly east (to make the assignment easier to solve)
London_st = ct.loc[(ct['city'] == 'London') & (ct['iso3'] == 'GBR')]
London_st['city']='London_st'
London_st['lng'] = London_st['lng'] + 0.2
ct = ct.append(London_st)
## resetting index after append
ct = ct.reset_index()
## concatenate iso2 and city to get unique id
ct['ID'] = ct['city'].map(str) + ct['iso2'].map(str)
## drop not usable columns
ct = ct.drop(['city_ascii', 'country', 'iso2', 'admin_name', 'capital'], axis = 1)
ct = ct.drop('index', axis = 1)
## identifying location of 'London_st' to be used later as 'source'
source = ct.loc[(ct['city'] == 'London_st')]
## identifying location of 'London' to be used later as 'target'
target = ct.loc[(ct['city'] == 'London') & (ct['iso3'] == 'GBR')]
## GETTING WEIGHTS - part I
## population weights '+2', where population > 200000
pop = np.where(ct['population'] < 200000 , 0, 2)
## same state weights '+2', where 'iso3' is different
i = ct['iso3'].to_numpy()
st = (i[:, None ] != i) * 2
## GETTING DIRECTION - build an array comparing longitudes (0 if a city is west of the other, 1 if it is east)
## to get all positive longitudes we rescale from the -180/+180 scale to a 0/360 scale, where London is approx 0
dr_x = np.where(ct['lng']>= 0 , ct['lng'] , (ct['lng'] + 180) + 180)
x = dr_x
dr = (x[:, None] < x) * 1
## treat big distances (>60 degrees of longitude) as '0' (no-go area) to make the final matrix lighter to handle
rang = (x[: , None] < x + 60 ) * 1
## THIS ISN'T NEEDED, RIGHT?
## dir_test = pd.DataFrame(dr*rang.T, columns = ct['ID'], index = ct['ID'])
## dir_test
## creating 3 dataframes with direction, same state and population weights
direction = pd.DataFrame(dr*rang.T, columns = ct['ID'], index = ct['ID'])
same_state = pd.DataFrame(st, columns = ct['ID'], index = ct['ID'])
population = pd.DataFrame(pop , index = ct['ID'])
## DISTANCE COMPUTATION - 'Haversine'
## the earth is a sphere, so a specific calculation (the 'haversine' distance) is required to get the distance between places
ct['lat'] = np.radians(ct['lat'])
ct['lng'] = np.radians(ct['lng'])
## retrieve the 'haversine' metric from scikit-learn
dist = DistanceMetric.get_metric('haversine')
## calculating the pairwise distance between cities, multiplying by 6373 (the Earth's radius in km) to get km
## get a smaller object by computing the distance only if the direction is 'east' (value 1 in the 'direction' dataframe)
D = np.where(direction > 0, dist.pairwise(ct [['lat','lng']].to_numpy())*6373 , 0)
## create the distance matrix with cities in the indexes
distance = pd.DataFrame(D.T, columns = ct['ID'], index = ct['ID'])
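## optional sanity check (hedged sketch): the haversine formula written out directly
## for London -> Paris; should roughly match dist.pairwise(...) * 6373 above
import numpy as np  ## already imported above; repeated so this snippet is self-contained
lat1, lng1 = np.radians(51.5074), np.radians(-0.1278)  ## London
lat2, lng2 = np.radians(48.8566), np.radians(2.3522)   ## Paris
a = np.sin((lat2 - lat1) / 2)**2 + np.cos(lat1) * np.cos(lat2) * np.sin((lng2 - lng1) / 2)**2
d_km = 2 * 6373 * np.arcsin(np.sqrt(a))  ## roughly 340-345 km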
## view matrix of distance
## THIS ISN'T NEEDED, RIGHT?
## distance.loc['London_stGB'].sum()
## I think this is already handled by 'import pandas as pd', right?
## from pandas import DataFrame
## GETTING WEIGHTS - part II
## utilising the matrix of distance called 'distance' (which contains already directions)
## populate 'dis' with weights: '+2' if closest, '4' if second closest, '8' if third closest
## the rest of distances as '0', meaning 'no go'
dis = distance.T.replace(0, 0)
dis = dis.replace(dis.apply(lambda x: x[x > 0].min(axis=0)), 2)
dis = dis.replace(dis.apply(lambda x: x[x > 2].min(axis=0)), 4)
dis = dis.replace(dis.apply(lambda x: x[x > 4].min(axis=0)), 8)
dis = dis.where((dis <= 8), 0)
dis
## SUMMING THE TOTAL WEIGHTS
## sum of dataframes: 'dis', 'same_state' and 'population' to get final weights
graph =((dis + same_state + pop.T) * dis / dis)
graph = graph.where((graph > 1), 0)
graph
## preparation of final dataframe as array for 'NetworkX'
gr_array = np.array(graph)
gr_array
## SHORTEST PATH ALGORITHM aka Dijkstra's algorithm
## import NetworkX
import networkx as nx
## convert the numpy array to a graph data structure, which has nodes (cities) and edges (weights between nodes)
## zeros are not taken into account, so the direction is already encoded in the array we built
GR = nx.from_numpy_array(gr_array)
## edges visualization (optional)
GR.edges(data=True)
## nodes visualization (optional)
GR.nodes()
## retrieve location of 'London_st' as source and 'London' as target
print(source)
print(target)
## using networkx.single_source_dijkstra()
## the command computes shortest paths and lengths in a weighted graph G
## it returns a tuple containing the 'length' of the shortest path, and the 'path' itself
length, path = nx.single_source_dijkstra(GR, 6622, 31)
print(length, path)
## get the names of the 'path' retrieving from 'ct' original object
ct.loc[path, 'city']
## how many days to go around the world?
days_to_london = length * 0.041667
days_to_london
## draw the graph (drop if too long to compute)
##nx.draw(GR)
##load the optimal-path data into a dataframe
percorso=ct.loc[path]
##get a list of 'id's to filter the original cities dataframe (for the lon and lat data)
filtro = percorso['id'].tolist()
##create a dataframe with the original 'cities' data for the cities on the optimal path
città= cities[cities['id'].isin(filtro)]
##set the 'id' column as the index
città = città.set_index('id')
##sort by the 'id's in the filter (those of the optimal path, in order)
città_def=città.loc[filtro]
##replace the name of the starting city in 'città_def' with 'London_St'
città_def.iloc[0,0]='London_St'
##replace the starting city's longitude with the slightly shifted one, so the path starts from it
città_def.iloc[0,3]='0.0725'
#import the plotting libraries
import matplotlib.pyplot as plt
import plotly.graph_objects as go
##create the first plot, drawing the trajectories between the cities on top of a world map
fig = go.Figure(data=go.Scattergeo(
lat = città_def['lat'],
lon =città_def['lng'],
mode = 'lines',
line = dict(width = 1, color = 'blue'),))
##update the plot: add markers for the visited cities (showing the city name on hover), set the title, and change the base map style
fig.add_trace(go.Scattergeo(
locationmode = 'country names',
lon = città_def['lng'],
lat = città_def['lat'],
hoverinfo = 'text',
text = città_def['city'],
name = "Cities",
mode = 'markers',
marker = dict(
size = 4,
color = 'rgb(102,102,102)',
line = dict(
width = 4,
color = 'rgba(68, 68, 68, 0)'
)
)))
fig.update_geos(projection_type="natural earth")
fig.update_layout(title_text='Shortest Path Around the World')
fig.show()
```
## Sampling
You can sample random rows from a dataset. This is very useful when training machine learning models.
We will use a dataset about movie reviewers obtained from [here](http://grouplens.org/datasets/movielens/100k/).
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# read a dataset of movie reviewers into a DataFrame
user_cols = ['user_id', 'age', 'gender', 'occupation', 'zip_code']
users = pd.read_csv('./dataset/u.user', sep='|', header=None, names=user_cols, index_col='user_id')
users.head()
# sample 3 rows from the DataFrame without replacement (new in pandas 0.16.1)
users.sample(n=3)
#use the 'random_state' parameter for reproducibility
users.sample(n=3, random_state=42)
# sample 75% of the DataFrame's rows without replacement
train = users.sample(frac=0.75, random_state=99)
# store the remaining 25% of the rows in another DataFrame
test = users.loc[~users.index.isin(train.index), :]
train.head()
test.head()
# detect duplicate zip codes: True if an item is identical to a previous item
users.zip_code.duplicated().tail()
# count the duplicate items (True becomes 1, False becomes 0)
users.zip_code.duplicated().sum()
# detect duplicate DataFrame rows: True if an entire row is identical to a previous row
users.duplicated().tail()
```
### Logic for duplicated:
+ keep='first' (default): Mark duplicates as True except for the first occurrence.
+ keep='last': Mark duplicates as True except for the last occurrence.
+ keep=False: Mark all duplicates as True.
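As a tiny self-contained illustration of the three options (a hedged sketch on a hypothetical Series, not the users data):

```python
import pandas as pd

# Hypothetical Series in which 'a' appears three times.
s = pd.Series(['a', 'b', 'a', 'a'])

print(s.duplicated(keep='first').tolist())  # [False, False, True, True]
print(s.duplicated(keep='last').tolist())   # [True, False, True, False]
print(s.duplicated(keep=False).tolist())    # [True, False, True, True]
```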
```
# examine the duplicate rows (ignoring the first occurrence)
users.loc[users.duplicated(keep='first'), :]
# examine the duplicate rows (ignoring the last occurrence)
users.loc[users.duplicated(keep='last'), :]
# examine the duplicate rows (including all duplicates)
users.loc[users.duplicated(keep=False), :]
# only consider a subset of columns when identifying duplicates
users.duplicated(subset=['age', 'zip_code']).sum()
# drop the duplicate rows (inplace=False by default)
users.drop_duplicates(keep='first').shape
users.drop_duplicates(keep='last').shape
users.drop_duplicates(keep=False).shape
```
## Appending pandas Series
```
# Load 'sales-jan-2015.csv' into a DataFrame: jan
jan = pd.read_csv('./dataset/sales-jan-2015.csv', parse_dates=True, index_col='Date')
# Load 'sales-feb-2015.csv' into a DataFrame: feb
feb = pd.read_csv('./dataset/sales-feb-2015.csv', parse_dates=True, index_col='Date')
# Load 'sales-mar-2015.csv' into a DataFrame: mar
mar = pd.read_csv('./dataset/sales-mar-2015.csv', parse_dates=True, index_col='Date')
# Extract the 'Units' column from jan: jan_units
jan_units = pd.DataFrame(jan['Units'])
# Extract the 'Units' column from feb: feb_units
feb_units = pd.DataFrame(feb['Units'])
# Extract the 'Units' column from mar: mar_units
mar_units = pd.DataFrame(mar['Units'])
# Append feb_units and then mar_units to jan_units: quarter1
quarter1 = jan_units.append(feb_units).append(mar_units)
# Print the first slice from quarter1
print(quarter1.loc['jan 27, 2015':'feb 2, 2015'])
# Print the second slice from quarter1
print(quarter1.loc['feb 26, 2015':'mar 7, 2015'])
# Compute & print total sales in quarter1
print(quarter1.sum())
df_quarter= pd.DataFrame(quarter1, columns = ['Units'])
df_quarter
jan_units.reset_index(inplace = True)
feb_units.reset_index(inplace = True)
mar_units.reset_index(inplace = True)
quarter_columns = pd.concat([jan_units, feb_units, mar_units], axis= 1, ignore_index=False)
df_quarter_columns= pd.DataFrame(quarter_columns)
df_quarter_columns
```
## Reading multiple files to build a DataFrame
It is often convenient to build a large DataFrame by parsing many files as DataFrames and concatenating them all at once. You'll do this here with three files, but, in principle, this approach can be used to combine data from dozens or hundreds of files.
Here, you'll work with DataFrames compiled from The Guardian's Olympic medal dataset.
```
medals=[]
medal_types = ['gold','silver','bronze']
for medal in medal_types:
# Create the file name: file_name
file_name = "./dataset/olympic-medals/%s_top5.csv" % medal
# Create list of column names: columns
columns = ['Country', medal]
# Read file_name into a DataFrame: df
medal_df = pd.read_csv(file_name, header=0, index_col='Country', names=columns)
# Append medal_df to medals
medals.append(medal_df)
# Concatenate medals horizontally: medals
medals = pd.concat(medals, axis='columns', sort = True)
# Print medals
pd.DataFrame(medals)
```
## Concatenating vertically to get MultiIndexed rows
When stacking a sequence of DataFrames vertically, it is sometimes desirable to construct a MultiIndex to indicate the DataFrame from which each row originated. This can be done by specifying the keys parameter in the call to pd.concat(), which generates a hierarchical index with the labels from keys as the outermost index label. So you don't have to rename the columns of each DataFrame as you load it. Instead, only the Index column needs to be specified.
```
medals=[]
for medal in medal_types:
file_name = "./dataset/olympic-medals/%s_top5.csv" % medal
# Read file_name into a DataFrame: medal_df
medal_df = pd.read_csv(file_name, index_col='Country')
# Append medal_df to medals
medals.append(medal_df)
# Concatenate medals: medals
medals = pd.concat(medals, keys=['bronze', 'silver', 'gold'])
# Print medals
pd.DataFrame(medals)
```
## Concatenating DataFrames with inner join
```
medals=[]
for medal in medal_types:
file_name = "./dataset/olympic-medals/%s_top5.csv" % medal
# Read file_name into a DataFrame: medal_df
medal_df = pd.read_csv(file_name, index_col='Country')
# Append medal_df to medals
medals.append(medal_df)
# Concatenate medal_list horizontally using an inner join: medals
medals = pd.concat(medals, keys=['bronze', 'silver', 'gold'], axis=1, join='inner')
# Print medals
pd.DataFrame(medals)
```
## Slicing MultiIndexed DataFrames
```
# Sort the entries of medals
medals_sorted = medals.sort_index(level=0)
# Print the number of Bronze medals won by Germany
print(medals_sorted.loc[('bronze','Germany')])
# Print data about silver medals
print(medals_sorted.loc['silver'])
# Create alias for pd.IndexSlice: idx
idx = pd.IndexSlice
# Print all the data on medals won by the United Kingdom
print(medals_sorted.loc[idx[:,'United Kingdom'], :])
```
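The `pd.IndexSlice` pattern works on any MultiIndexed frame. Here is a self-contained sketch on made-up data (hypothetical values, not the real medal counts):

```python
import pandas as pd

# Hypothetical (medal, Country) MultiIndex, already lexsorted.
df = pd.DataFrame(
    {'Total': [10, 20, 30, 40]},
    index=pd.MultiIndex.from_tuples(
        [('bronze', 'France'), ('bronze', 'Germany'),
         ('silver', 'France'), ('silver', 'Germany')],
        names=['medal', 'Country']))

idx = pd.IndexSlice
# Select Germany's rows across every medal type (slice over the outer level).
germany = df.loc[idx[:, 'Germany'], :]
print(germany)
```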
## Merging
```
user_usage = pd.read_csv("./dataset/merge/user_usage.csv")
user_device = pd.read_csv("./dataset/merge/user_device.csv")
devices = pd.read_csv("./dataset/merge/android_devices.csv")
user_usage.head()
user_device.head()
devices.head()
devices.rename(columns={"Retail Branding": "manufacturer"}, inplace=True)
devices.head()
```
## First merge
We're trying to get the average usage figures for different types of devices. So we need to get each user's device code from user_device as a column on user_usage, and then get the device's manufacturer from devices as a column on the result.
First, we merge user_usage with user_device, with "use_id" as our common column:
```
result = pd.merge(user_usage,
user_device[['use_id', 'platform', 'device']],
on='use_id')
result.head()
```
An inner merge (or inner join) keeps only the common values in both the left and right dataframes in the result. In our example above, only the rows that contain use_id values that are common between user_usage and user_device remain in the result dataset. We can validate this by looking at how many values are common:
```
print("user_usage dimensions: {}".format(user_usage.shape))
print("user_device dimensions: {}".format(user_device[['use_id', 'platform', 'device']].shape))
print("Result dimensions : {}".format(result.shape))
```
## Left merge example
A left merge, or left join, between two dataframes keeps all of the rows and values from the left dataframe, in this case "user_usage". Columns from the right dataframe will be filled in only where there is a match on the merge variable; elsewhere they will contain NaN values.
```
result = pd.merge(user_usage,
user_device[['use_id', 'platform', 'device']],
on='use_id', how='left')
print("user_usage dimensions: {}".format(user_usage.shape))
print("result dimensions: {}".format(result.shape))
print("There are {} missing values in the result.".format(
result['device'].isnull().sum()))
result.head()
```
## Right merge example
A right merge, or right join, between two dataframes keeps all of the rows and values from the right dataframe, in this case "user_device". Columns from the left dataframe will be filled in only where there is a match on the merge variable; elsewhere they will contain NaN values.
```
result = pd.merge(user_usage,
user_device[['use_id', 'platform', 'device']],
on='use_id', how='right')
print("user_device dimensions: {}".format(user_device.shape))
print("result dimensions: {}".format(result.shape))
print("There are {} missing values in the 'monthly_mb' column in the result.".format(
result['monthly_mb'].isnull().sum()))
print("There are {} missing values in the 'platform' column in the result.".format(
result['platform'].isnull().sum()))
```
## Outer merge example
A full outer join, or outer merge, keeps all rows from the left and right dataframes in the result. Rows are aligned where there is a shared join value between the left and right; where there is no shared join value, the row will contain NaN values in either the left-originating or right-originating columns.
In the final result, a subset of rows should have no missing values. These rows are the rows where there was a match between the merge column in the left and right dataframes. These rows are the same values as found by our inner merge result before.
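The behavior described above can be seen on a pair of hypothetical toy frames (made-up columns standing in for user_usage and user_device):

```python
import pandas as pd

# Toy stand-ins for the left (usage) and right (device) dataframes.
left = pd.DataFrame({'use_id': [1, 2], 'minutes': [10, 20]})
right = pd.DataFrame({'use_id': [2, 3], 'device': ['GT-I9505', 'SM-G930F']})

merged = pd.merge(left, right, on='use_id', how='outer', indicator=True)
print(merged)
# use_id 2 matched on both sides (no NaNs); use_id 1 is left-only
# (NaN in 'device'); use_id 3 is right-only (NaN in 'minutes').
```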
```
print("There are {} unique values of use_id in our dataframes.".format(
pd.concat([user_usage['use_id'], user_device['use_id']]).unique().shape[0]))
result = pd.merge(user_usage,
user_device[['use_id', 'platform', 'device']],
on='use_id', how='outer', indicator=True)
print("Outer merge result has {} rows.".format(result.shape))
print("There are {} rows with no missing values.".format(
(result.apply(lambda x: x.isnull().sum(), axis=1) == 0).sum()))
result.iloc[[0, 1, 200, 201, 350, 351]]
# First, add the platform and device to the user usage.
result = pd.merge(user_usage,
user_device[['use_id', 'platform', 'device']],
on='use_id',
how='left')
# Now, based on the "device" column in result, match the "Model" column in devices.
devices.rename(columns={"Retail Branding": "manufacturer"}, inplace=True)
result = pd.merge(result,
devices[['manufacturer', 'Model']],
left_on='device',
right_on='Model',
how='left')
result.head()
devices[devices.Device.str.startswith('GT')]
```
## Calculating statistics on final result
With merges complete, we can simply calculate statistics for users grouped by the manufacturer of their device.
```
result.groupby("manufacturer").agg({
"outgoing_mins_per_month": "mean",
"outgoing_sms_per_month": "mean",
"monthly_mb": "mean",
"use_id": "count"
})
```
# Homework 8 - Artificial Neural Networks with PyTorch
## About
### In this homework, you will get your feet wet with deep learning using the PyTorch platform. This will involve:
* Preparing data
* Learning about the components of a deep learning pipeline
* Setting up a model, a loss function, and an optimizer
* Setting up training and testing loops
* Using a visualizer like tensorboard to monitor logged data
*This homework is due __April 15th 2019__. Training neural networks takes some time, particularly on CPUs, so start early.*
## Dev Environment
### Working on Google Colab
You may choose to work locally or on Google Colaboratory. You have access to free compute through this service.
1. Visit https://colab.research.google.com/drive
2. Navigate to the **`Upload`** tab, and upload your `HW8.ipynb`
3. Now on the top right corner, under the `Comment` and `Share` options, you should see a `Connect` option. Once you are connected, you will have access to a VM with 12GB RAM, 50 GB disk space and a single GPU. The dropdown menu will allow you to connect to a local runtime as well.
**Notes:**
* **If you do not have a working setup for Python 3, this is your best bet. It will also save you from heavy installations like `tensorflow` if you don't want to deal with those.**
* ***There is a downside*. You can only use this instance for a single 12-hour stretch, after which your data will be deleted, and you would have to redownload all your datasets, reinstall any libraries not already on the VM, and regenerate your logs**.
### Installing PyTorch and Dependencies
The instructions for installing and setting up PyTorch can be found at https://pytorch.org/get-started/locally/. Make sure you follow the instructions for your machine. For any of the remaining libraries used in this assignment:
* We have provided a `hw8_requirements.txt` file on the homework web page.
* Download this file, and in the same directory you can run `pip3 install -r hw8_requirements.txt`
Check that PyTorch installed correctly by running the following:
```
import torch
torch.rand(5, 3)
```
The output should look something like
```python
tensor([[0.3380, 0.3845, 0.3217],
[0.8337, 0.9050, 0.2650],
[0.2979, 0.7141, 0.9069],
[0.1449, 0.1132, 0.1375],
[0.4675, 0.3947, 0.1426]])
```
### Let's get started with the assignment.
## Instructions
### Part 1 - Datasets and Dataloaders (10 points)
In this section we will download the MNIST dataset using PyTorch's own API.
Helpful Resources:
* https://pytorch.org/docs/stable/torchvision/datasets.html#mnist
* https://pytorch.org/docs/stable/torchvision/transforms.html
* https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
The `torchvision` package consists of popular datasets, model architectures, and common image transformations for computer vision. We are particularly concerned with `torchvision.datasets` and `torchvision.transforms`. Check out the API for these modules in the links provided above.
**Create a directory named `hw8_data` with the following command**.
```
!mkdir hw8_data
```
**Now use `torchvision.datasets.MNIST` to load the train and test data into `hw8_data`.**
* **Use the directory you created above as the `root` directory for your datasets.**
* **Populate the `transformations` variable with any transformations you would like to perform on your data.** (Hint: You will need to do at least one)
* **Pass your `transformations` variable to `torchvision.datasets.MNIST`. This allows you to perform arbitrary transformations on your data at loading time.**
```
from torchvision import datasets, transforms
transformations = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(
(0.1000,), (0.3000,))
])
mnist_train = datasets.MNIST(root='./hw8_data', train=True, download=True, transform=transformations)
mnist_test = datasets.MNIST(root='./hw8_data', train=False, download=True, transform=transformations)
```
Check that your torch datasets have been successfully downloaded into your data directory by running the next two cells.
* Each will output some metadata about your dataset.
* Check that the training set has 60000 datapoints and a `Root Location: hw8_data`
* Check that the testing (__also validation in our case__) set has 10000 datapoints and `Root Location: hw8_data`
Notice that these datasets implement the python `__len__` and `__getitem__` functions. Each element in the dataset should be a 2-tuple. What does yours look like?
```
print(len(mnist_train))
print(len(mnist_train[0]))
mnist_train
print(len(mnist_test))
print(len(mnist_test[0]))
mnist_test
```
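The 2-tuple indexing above works because these datasets implement Python's sequence protocol. As a minimal, framework-free sketch (the `ToyDataset` class and its values are hypothetical, not part of torchvision), any class with `__len__` and `__getitem__` supports `len()`, indexing, and `DataLoader`-style iteration:

```python
class ToyDataset:
    """A minimal dataset: __len__ and __getitem__ are all that len(),
    indexing, and DataLoader-style iteration require."""

    def __init__(self, images, labels):
        self.images = images
        self.labels = labels

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        # Return a 2-tuple (input, target), just like datasets.MNIST does
        return self.images[idx], self.labels[idx]

ds = ToyDataset(images=[[0.0, 0.1], [0.9, 1.0]], labels=[0, 1])
print(len(ds))   # 2
print(ds[1])     # ([0.9, 1.0], 1)
```

`datasets.MNIST` does the same thing, except each element is an (image, label) pair with your transformations applied.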
**Any file in our dataset will now be read at runtime, and the transformations we specified will be applied on demand.**
We could iterate through these directly using a loop, but this is not idiomatic. PyTorch provides us with this abstraction in the form of `DataLoaders`. The module of interest is `torch.utils.data.DataLoader`.
`DataLoader` allows us to do lots of useful things:
* Group our data into batches
* Shuffle our data
* Load the data in parallel using `multiprocessing` workers
**Use `DataLoader` to create a loader for the training set and one for the testing set**
* **Use a `batch_size` of 32 to start, you may change it if you wish.**
* **Set the `shuffle` parameter to `True`.**
```
from torch.utils.data import DataLoader
train_loader = torch.utils.data.DataLoader(dataset=mnist_train,
batch_size=32,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=mnist_test,
batch_size=len(mnist_test),
shuffle=False)
random_seed = 1
torch.backends.cudnn.enabled = False
torch.manual_seed(random_seed)
```
The following function is adapted from `show_landmarks_batch` at
https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#iterating-through-the-dataset .
Run the following cell to see that your loader provides a random `batch_size` number of data points.
```
import matplotlib.pyplot as plt
from torchvision import utils
%matplotlib inline
def show_mnist_batch(sample_batched):
"""Show images for a batch of samples."""
images_batch = sample_batched[0]
batch_size = len(images_batch)
im_size = images_batch.size(2)
grid = utils.make_grid(images_batch)
plt.imshow(grid.numpy().transpose((1, 2, 0)))
plt.title('Batch from DataLoader')
# Display the first batch of images
batch = next(iter(train_loader))
show_mnist_batch(batch)
```
### Part 2 - Models, Loss Functions and Optimizers (10 points)
In this section, we will do the following:
* Learn about how to build your deep learning model and define its parameters
* Choose a loss function to optimize
* Choose an optimization method to maximize/minimize the loss
We'll first start with a single layer neural network to do handwritten digit classification. The math may ring some bells from homework 7.
`torch.nn` is the module we will be using here. You can find the API at https://pytorch.org/docs/stable/nn.html. There is also a quick summary at https://pytorch.org/tutorials/beginner/nn_tutorial.html#closing_thoughts.
#### Models
We will use the following python modules in building our one layer model.
* `torch.nn.Module`: Your model will be abstracted as a python class. Your python class must subclass `torch.nn.Module`. It is the base class for all neural network modules in PyTorch (Do not confuse python modules with PyTorch Modules). These implement the `forward()` function which defines how your model handles input and produces an output. Your model class can also have `torch.nn.Module`s as members, allowing nested tree like structures, and it is leveraging this that you are able to build neural networks in PyTorch.
* `torch.nn.Linear`: A unit of computation in neural networks are *Layers* and PyTorch provides abstractions for layers as `nn.Modules`. These come in many forms including *Convolutional*, *Recurrent*, and *Linear*. You can find the API for linear layers here https://pytorch.org/docs/stable/nn.html#linear-layers.
**Now use the information provided to define the `OneLayerModel` class below. The superclass constructor has been called for you, and this allows your subclass to access superclass methods and members.**
* **Finish the `__init__()` function.**
* **Finish the `forward()` function.** (Hint: Use that fact that layer modules implement their own `forward()` function)
```
from torch import nn
class OneLayerModel(nn.Module):
def __init__(self, input_dim, output_dim):
super(OneLayerModel, self).__init__()
self.flin = nn.Linear(input_dim, output_dim)
def forward(self, x):
x = self.flin(x)
return x
```
#### Loss Functions and Optimizers
You've defined your model but now what? It's just a black box that takes an input and spits out some numbers. You haven't yet defined what it means to be a good or bad model.
A ***Loss Function*** takes what your model outputs and compares it to what it *should* have put out. It returns some meaningful value used to update your model parameters, and so train your model. Check out Section 21.2.1 of the textbook for more details about types of loss functions. The Loss function represents the overall goal of building this model, and the choice of loss function is very important.
We must examine our model parameters and our problem instance to see how to choose a loss function.
* We take in a 784-dimensional vector and output 10 real values, giving our model 784 x 10 weight parameters (plus 10 biases).
* It is natural given that our problem is an instance of *multi-class classification* that we would want each of our output values to model `P(y==i|x)`.
* If we go this route, we get an added constraint that the sum of all 10 of our output values should be 1 (forming a probability mass function).
It turns out there is a very convenient loss function for just our use case, known as ***cross-entropy loss***. Check out this reference https://ml-cheatsheet.readthedocs.io/en/latest/loss_functions.html#cross-entropy for a little more intuition on this.
Once again, PyTorch has abstractions built in for us in the `torch.nn` module, namely `torch.nn.CrossEntropyLoss`. The API can be found at https://pytorch.org/docs/stable/nn.html#crossentropyloss.
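To make the loss concrete, here is a small framework-free sketch of what softmax followed by cross-entropy computes for a single example (the logits are made-up illustrative numbers; `nn.CrossEntropyLoss` fuses both steps, which is why our model outputs raw scores):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability; the outputs sum to 1
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, true_class):
    # Negative log of the probability assigned to the correct class
    probs = softmax(logits)
    return -math.log(probs[true_class])

logits = [2.0, 0.5, -1.0]                 # raw model outputs for 3 classes
probs = softmax(logits)
print(round(sum(probs), 6))               # 1.0 — a probability mass function
print(cross_entropy(logits, 0) < cross_entropy(logits, 2))  # True: class 0 is favored
```

Note that the loss is small when the model puts high probability on the correct class and grows without bound as that probability goes to zero.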
We're still not ready to train our model: we have parameters, and we have a measure of how good or bad our predictions are, but we have no notion of how to update our parameters in order to improve our loss.
This is where ***Optimizers*** come in. In general, we have one main way of minimizing loss functions (training our models), and that is through *Stochastic Gradient Descent* https://en.wikipedia.org/wiki/Stochastic_gradient_descent. There are many variants and optimizations of this method, however, and the `torch.optim` package gives us abstractions for these. The API can be found at https://pytorch.org/docs/stable/optim.html#.
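The core update behind all of these optimizers is the same gradient step. A minimal sketch, using a hypothetical 1-D loss `L(w) = (w - 3)^2` (not part of the assignment) rather than a real network:

```python
def grad(w):
    # Gradient of the loss L(w) = (w - 3)^2
    return 2 * (w - 3)

w = 0.0
learning_rate = 0.1
for step in range(100):
    # The SGD update: move the parameter against the gradient
    w = w - learning_rate * grad(w)

print(round(w, 4))   # converges to the minimum at w = 3
```

`optim.SGD` applies this same rule to every parameter tensor in your model; variants like momentum or Adam only change how the step is computed from the gradients.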
```
from torch import optim
```
### Part 3 - Training and Validation (45 points)
In this section we will learn how to use the concepts we've covered so far to train the model we built, and validate how well it does. We also want to monitor how well our training is going while it is happening.
For this we can use a package called `tensorboardX`. You will need to install this package using `pip` or `Anaconda`, based on your dev environment. Additionally, we'll want to use a logging module called `tensorboardX.SummaryWriter`. You can consult the API here https://tensorboardx.readthedocs.io/en/latest/tutorial.html. Run the next cell to ensure that all is working well.
```
""" Try uncommenting these commands if you're facing issues here
!pip3 install -U protobuf
!pip3 install -U tensorflow
!pip3 install -U tensorboardX
"""
%load_ext tensorboard.notebook
from tensorboardX import SummaryWriter
```
We have provided the code to use `tensorboard` just before calling your `train` function. You don't have to change the top-level log directory, but you can create multiple runs (different parameters or versions of your code) just by creating subdirectories for these within your top-level directory.
**Now use the information provided above to do the following:**
* **Instantiate a `OneLayerModel` with the appropriate input/output parameters.**
* **Define a cross-entropy loss function.**
* **Define a stochastic gradient descent optimizer for your model's parameters. Start with a learning rate of 0.001, and adjust as necessary. You can start with the vanilla `optim.SGD` optimizer, and change it if you wish.**
* **Create a `SummaryWriter` object that will be responsible for logging our training progress into a directory called `logs/expt1` (or whatever you wish your top-level directory to be called).**
```
model = OneLayerModel(1*28*28, 10)
# Loss and optimizer
loss = nn.CrossEntropyLoss()
learning_rate = 0.01
optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum = 0.5)
writer = SummaryWriter('logs/expt1')
```
We've finally come to the point where we need to write our training set up. We're going to use both our training and testing (validation) sets for this. Note that traditionally, you would separate part of your training data into validation data in order to get an unbiased estimate of how your model performs, but here we'll just pretend that our testing data is our validation data.
**Training a model with batches of data broadly involves the following steps:**
1. **One `epoch` is defined as a full pass of your dataset through your model. We choose the number of epochs we wish to train our model for.**
2. **In each epoch, set your model to train mode.**
3. **You feed your model `batch_size` examples at a time, and receive `batch_size` outputs, until you've gotten through your entire dataset.**
4. **Calculate the loss function for those outputs given the labels for that batch.**
5. **Now calculate the gradients for each model parameter.** (Hint: Your loss function object can do this for you)
6. **Update your model parameters** (Hint: The optimizer comes in here)
7. **Set the gradients in your model to zero for the next batch.**
8. **After each epoch, set your model to evaluation mode.**
9. **Now evaluate your model on the validation data. Log the total loss and accuracy over the validation data.** (Note: PyTorch does automatic gradient calculations in the background through its `Autograd` mechanism https://pytorch.org/docs/stable/notes/autograd.html. Make sure to do evaluation in a context where this is turned off!)
**Complete the `train()` function below. Try to make it as general as possible, so that it can be used for improved versions of your model. Feel free to define as many helper functions as needed.**
**Make sure that you do the following:**
* **Log the *training loss* and *training accuracy* on each batch for every epoch, such that it will show up on `tensorboard`.**
* **Log the loss on the validation set and the accuracy on the validation set every epoch**
**You will need to produce the plots for these.**
You may also want to add some print statements in your training function to report progress in this notebook.
```
def train(model, train_loader, val_loader, loss_func, optimizer, num_epochs=10, writer=None):
    test(model, val_loader, loss_func, 0, writer)
    for epoch in range(1, num_epochs + 1):
        train_internal(model, train_loader, loss_func, optimizer, writer, epoch)
        test(model, val_loader, loss_func, epoch, writer)

log_interval = 500

def train_internal(model, train_loader, loss_func, optimizer, writer, epoch):
    model.train()
    for batch_id, (data, target) in enumerate(train_loader):
        data = data.reshape(len(data), -1)  # flatten 28x28 images to 784-vectors
        output = model(data)
        loss = loss_func(output, target)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        loss_item = loss.item()
        with torch.no_grad():
            predicted = torch.argmax(output, dim=1)
            train_accuracy = predicted.eq(target.view_as(predicted)).float().mean()
        # Use a global step so later epochs don't overwrite earlier batches in tensorboard
        global_step = (epoch - 1) * len(train_loader) + batch_id
        writer.add_scalars('Training', {'loss': loss_item,
                                        'accuracy': train_accuracy.item()}, global_step)
        if batch_id % log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f} Accuracy: {:.6f}'.format(
                epoch, batch_id * len(data), len(train_loader.dataset),
                100. * batch_id / len(train_loader), loss_item, train_accuracy))

def test(model, val_loader, loss_func, epoch_num, writer):
    model.eval()
    loss_item = 0
    correct = 0
    with torch.no_grad():
        for data, target in val_loader:
            data = data.reshape(len(data), -1)
            output = model(data)
            loss_item += loss_func(output, target)
            pred = torch.argmax(output, dim=1)
            correct += pred.eq(target.view_as(pred)).sum()
    accuracy = 100. * correct.item() / len(val_loader.dataset)
    loss = loss_item.item() / len(val_loader)
    writer.add_scalar('Validation set loss', loss, epoch_num)
    writer.add_scalar('Validation set accuracy', accuracy, epoch_num)
    print('\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
        loss, correct, len(val_loader.dataset), accuracy))
```
Finally call `train` with the relevant parameters. Run the tensorboard command on your top-level logs directory to monitor training. If there is logging data from a previous run, just delete the directory for the run, and reinstantiate the `SummaryWriter` for that run. (You may want to reinstantiate the model itself if you want to clear the model parameters too).
Note: This function may take a while to complete if you're training for many epochs on a CPU. This is where it comes in handy to be running on Google Colab, or to just have a GPU on hand.
```
#%tensorboard --logdir=logs
train(model, train_loader, test_loader, loss, optimizer, 15, writer)
```
__Final Validation Loss:__ *0.2722*
__Final Validation Accuracy:__ *92.53%*
#### What is familiar about a 1-layer neural network with cross-entropy loss? Have you seen this before?
Answer: It resembles a linear SVM: both compute a linear function of the input and can be trained with SGD.
### Part 4 - Two Layer Neural Net (20 points)
The thing that makes neural networks really powerful is that they are able to do complex function approximation. As we saw earlier, we can organize the computation done in neural networks into units called *layers*. In a general neural network, there is an *input layer*, and an *output layer*. These may be the same layer as they were in our previous example. When they are not the same, there are intermediate layers known as _hidden layers_. These layers receive input from other layers and send their output to other layers.
We have been dealing with a certain type of neural network known as a __fully connected__ network. For our purposes, this just means that the output of a layer is the dot product of its input `x` with its weights `w`, plus a bias term `b`, all wrapped in a non-linear *activation function* `F`:
`y = F(w^T x + b)`.
These non-linear activation functions are very important but where in our last neural network did we apply such a function? Implicitly we applied what's known as a __softmax activation__ in order to compute cross-entropy loss https://en.wikipedia.org/wiki/Softmax_function.
We'll now try to create a neural network with one hidden layer. This means that we have to come up with an activation function for the output of that hidden layer. A famous, simple but powerful activation function is the __Rectified Linear Unit (ReLU)__, defined as `ReLU(x) = max(x, 0)`. We will use this on the output of the hidden layer.
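Putting the pieces together, a single fully connected unit with a ReLU activation can be sketched without any framework (the weights, input, and bias below are arbitrary illustrative numbers):

```python
def relu(x):
    # ReLU(x) = max(x, 0)
    return max(x, 0.0)

def fc_neuron(w, x, b):
    # One fully connected unit: y = ReLU(w^T x + b)
    pre_activation = sum(wi * xi for wi, xi in zip(w, x)) + b
    return relu(pre_activation)

# Positive pre-activation passes through: 0.5*1 - 0.2*2 + 0.1 = 0.2
print(round(fc_neuron([0.5, -0.2], [1.0, 2.0], 0.1), 6))   # 0.2
# Negative pre-activation is clamped to zero
print(fc_neuron([0.5, -0.2], [1.0, 2.0], -1.0))            # 0.0
```

An `nn.Linear` layer computes many such pre-activations at once (one per output unit); applying `nn.ReLU` to its output gives the hidden layer we build next.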
`torch.nn` has a module known as `nn.Sequential` that allows us to chain together other modules. This module implements a `forward()` function that automatically handles input-output connections etc. Check out the API at https://pytorch.org/docs/stable/nn.html#sequential.
**Just like you did with the single layer model, define a class `TwoLayerModel`, a neural network with ReLU activation for the hidden layer. `nn.Sequential` may come in handy.**
```
class TwoLayerModel(nn.Module):
    def __init__(self, input_dim, output_dim, hidden_dim):
        super(TwoLayerModel, self).__init__()
        self.relu = nn.ReLU(inplace=True)
        self.flin1 = nn.Linear(input_dim, hidden_dim)
        self.flin2 = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        x = self.relu(self.flin1(x))
        x = self.flin2(x)
        return x
```
**Once again use the information provided above to do the following:**
* **Instantiate a `TwoLayerModel` with the appropriate input/output/hidden layer parameters.**
* **Define a cross-entropy loss function again.**
* **Define a stochastic gradient descent optimizer for your model's parameters. Start with a learning rate of 0.001, and adjust as necessary. You can start with the vanilla `optim.SGD` optimizer, and change it if you wish.**
* **Create a `SummaryWriter` object that will be responsible for logging our training progress into a directory called `logs/expt2` (or whatever you wish your top-level directory to be called, just make sure the subdirectory is different from your previous SummaryWriter).**
```
model2 = TwoLayerModel(1*28*28, 10, 256)
learning_rate=0.01
# Loss and optimizer
loss2 = nn.CrossEntropyLoss()
optimizer2 = optim.SGD(model2.parameters(), lr=learning_rate, momentum = 0.5)
writer2 = SummaryWriter('logs/expt2')
```
Call `train` on your two layer neural network.
```
#%tensorboard --logdir=logs
train(model2, train_loader, test_loader, loss2, optimizer2, 15, writer2)
```
__Final Validation Loss:__ *0.0618*
__Final Validation Accuracy:__ *98.11%*
#### Did your accuracy on the validation set improve with multiple layers? Why do you think this is?
Answer: The problem itself is not linear; most of the digits' features are not linearly separable. That is why there is roughly a 6% accuracy increase (from about 92.5% to about 98%) when using two linear layers with a ReLU between them, and adding a third layer with convolution and max pooling also reaches about 98%.
### Part 5 - What is being learned at each layer? (10 points)
So what exactly are these weights that our network is learning at each layer? By conveniently picking our layer dimensions as perfect square numbers, we can try to visualize the weights learned at each layer as square images. Use the following function to do so for *all interesting layers* across your models. Feel free to modify the function as you wish.
**At the very least, you must generate:**
1. **The ten 28x28 weight images learned by your one layer model.**
2. **The 256 28x28 weight images learned by the hidden layer in your two-layer model.**
```
def visualize_layer_weights(model, layer_idx, num_images, image_dim, title):
    # Find the most square grid of rows and columns for num_images
    for d in range(1, num_images):
        f = num_images / d
        if int(f) == f:
            dim1 = int(min(f, d))
            dim2 = int(max(f, d))
        if d > f:
            break
    # Plot weights as square images
    fig, ax = plt.subplots(dim1, dim2)
    # At least 1 inch by 1 inch images
    fig.set_size_inches(dim2, dim1)
    weights = list(model.parameters())[layer_idx]
    fig.suptitle(title)
    for i in range(dim1):
        for j in range(dim2):
            item = weights[dim2 * i + j]
            ax[i][j].imshow(item.reshape(image_dim, image_dim).detach().numpy(), cmap='gray')

visualize_layer_weights(model, 0, 10, 28, 'One layer NN')
visualize_layer_weights(model2, 0, 256, 28, 'Two layer NN')
```
# Transfer Learning
In this notebook, you'll learn how to use pre-trained networks to solve challenging problems in computer vision. Specifically, you'll use networks trained on [ImageNet](http://www.image-net.org/) [available from torchvision](http://pytorch.org/docs/0.3.0/torchvision/models.html).
ImageNet is a massive dataset with over 1 million labeled images in 1000 categories. It's used to train deep neural networks using an architecture called convolutional layers. I'm not going to get into the details of convolutional networks here, but if you want to learn more about them, please [watch this](https://www.youtube.com/watch?v=2-Ol7ZB0MmU).
Once trained, these models work astonishingly well as feature detectors for images they weren't trained on. Using a pre-trained network on images not in the training set is called transfer learning. Here we'll use transfer learning to train a network that can classify our cat and dog photos with near perfect accuracy.
With `torchvision.models` you can download these pre-trained networks and use them in your applications. We'll include `models` in our imports now.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms, models
```
Most of the pretrained models require the input to be 224x224 images. Also, we'll need to match the normalization used when the models were trained. Each color channel was normalized separately, the means are `[0.485, 0.456, 0.406]` and the standard deviations are `[0.229, 0.224, 0.225]`.
```
data_dir = 'assets/Cat_Dog_data'
# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456,0.406],
[0.229, 0.224,0.225])])
test_transforms = transforms.Compose([transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456,0.406],
[0.229, 0.224,0.225])])
# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)
trainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)
testloader = torch.utils.data.DataLoader(test_data, batch_size=64)
```
We can load in a model such as [DenseNet](http://pytorch.org/docs/0.3.0/torchvision/models.html#id5). Let's print out the model architecture so we can see what's going on.
```
model = models.densenet121(pretrained=True)
model
```
This model is built out of two main parts, the features and the classifier. The features part is a stack of convolutional layers and overall works as a feature detector that can be fed into a classifier. The classifier part is a single fully-connected layer `(classifier): Linear(in_features=1024, out_features=1000)`. This layer was trained on the ImageNet dataset, so it won't work for our specific problem. That means we need to replace the classifier, but the features will work perfectly on their own. In general, I think about pre-trained networks as amazingly good feature detectors that can be used as the input for simple feed-forward classifiers.
```
# Freeze parameters so we don't backprop through them
for param in model.parameters():
param.requires_grad = False
from collections import OrderedDict
classifier = nn.Sequential(OrderedDict([
('fc1', nn.Linear(1024, 500)),
('relu', nn.ReLU()),
('fc2', nn.Linear(500, 2)),
('output', nn.LogSoftmax(dim=1))
]))
model.classifier = classifier
```
With our model built, we need to train the classifier. However, now we're using a **really deep** neural network. If you try to train this on a CPU like normal, it will take a long, long time. Instead, we're going to use the GPU to do the calculations. The linear algebra computations are done in parallel on the GPU leading to 100x increased training speeds. It's also possible to train on multiple GPUs, further decreasing training time.
PyTorch, along with pretty much every other deep learning framework, uses [CUDA](https://developer.nvidia.com/cuda-zone) to efficiently compute the forward and backwards passes on the GPU. In PyTorch, you move your model parameters and other tensors to the GPU memory using `model.to('cuda')`. You can move them back from the GPU with `model.to('cpu')` which you'll commonly do when you need to operate on the network output outside of PyTorch. As a demonstration of the increased speed, I'll compare how long it takes to perform a forward and backward pass with and without a GPU.
```
import time
for device in ['cpu', 'cuda']:
criterion = nn.NLLLoss()
# Only train the classifier parameters, feature parameters are frozen
optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)
model.to(device)
for ii, (inputs, labels) in enumerate(trainloader):
# Move input and label tensors to the GPU
inputs, labels = inputs.to(device), labels.to(device)
start = time.time()
outputs = model.forward(inputs)
loss = criterion(outputs, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if ii==3:
break
print(f"Device = {device}; Time per batch: {(time.time() - start)/3:.3f} seconds")
```
You can write device agnostic code which will automatically use CUDA if it's enabled like so:
```python
# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
...
# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
input = data.to(device)
model = MyModule(...).to(device)
```
From here, I'll let you finish training the model. The process is the same as before except now your model is much more powerful. You should get better than 95% accuracy easily.
>**Exercise:** Train a pretrained model to classify the cat and dog images. Continue with the DenseNet model, or try ResNet, which is also a good model to try first. Make sure you are only training the classifier and that the parameters for the features part are frozen.
```
## TODO: Use a pretrained model to classify the cat and dog images
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.densenet121(pretrained=True)
# Freeze parameters so we don't backprop through them
for param in model.parameters():
param.requires_grad = False
model.classifier = nn.Sequential(nn.Linear(1024, 256),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(256, 2),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
# Only train the classifier parameters, feature parameters are frozen
optimizer = optim.Adam(model.classifier.parameters(), lr=0.003)
model.to(device);
epochs = 1
steps = 0
running_loss = 0
print_every = 5
for epoch in range(epochs):
for inputs, labels in trainloader:
steps += 1
# Move input and label tensors to the default device
inputs, labels = inputs.to(device), labels.to(device)
logps = model.forward(inputs)
loss = criterion(logps, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
running_loss += loss.item()
if steps % print_every == 0:
test_loss = 0
accuracy = 0
model.eval()
with torch.no_grad():
for inputs, labels in testloader:
inputs, labels = inputs.to(device), labels.to(device)
logps = model.forward(inputs)
batch_loss = criterion(logps, labels)
test_loss += batch_loss.item()
# Calculate accuracy
ps = torch.exp(logps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
print(f"Epoch {epoch+1}/{epochs}.. "
f"Train loss: {running_loss/print_every:.3f}.. "
f"Test loss: {test_loss/len(testloader):.3f}.. "
f"Test accuracy: {accuracy/len(testloader):.3f}")
running_loss = 0
model.train()
```
```
%matplotlib inline
```
# Frequency and time-frequency sensors analysis
The objective is to show you how to explore the spectral content
of your data (frequency and time-frequency). Here we'll work on Epochs.
We will use the `somato-dataset`, which contains so-called event-related
synchronizations (ERS) / desynchronizations (ERD) in the beta band.
```
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Stefan Appelhoff <stefan.appelhoff@mailbox.org>
# Richard Höchenberger <richard.hoechenberger@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet, psd_multitaper, psd_welch
from mne.datasets import somato
```
Set parameters
```
data_path = somato.data_path()
subject = '01'
task = 'somato'
raw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',
'sub-{}_task-{}_meg.fif'.format(subject, task))
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True, stim=False)
# Construct Epochs
event_id, tmin, tmax = 1, -1., 3.
baseline = (None, 0)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=baseline, reject=dict(grad=4000e-13, eog=350e-6),
preload=True)
epochs.resample(200., npad='auto') # resample to reduce computation time
```
Frequency analysis
------------------
We start by exploring the frequency content of our epochs.
Let's first check out all channel types by averaging across epochs.
```
epochs.plot_psd(fmin=2., fmax=40., average=True, spatial_colors=False)
```
Now let's take a look at the spatial distributions of the PSD.
```
epochs.plot_psd_topomap(ch_type='grad', normalize=True)
```
Alternatively, you can also create PSDs from Epochs objects with functions
that start with ``psd_`` such as
:func:`mne.time_frequency.psd_multitaper` and
:func:`mne.time_frequency.psd_welch`.
```
f, ax = plt.subplots()
psds, freqs = psd_multitaper(epochs, fmin=2, fmax=40, n_jobs=1)
psds = 10. * np.log10(psds)
psds_mean = psds.mean(0).mean(0)
psds_std = psds.mean(0).std(0)
ax.plot(freqs, psds_mean, color='k')
ax.fill_between(freqs, psds_mean - psds_std, psds_mean + psds_std,
color='k', alpha=.5)
ax.set(title='Multitaper PSD (gradiometers)', xlabel='Frequency (Hz)',
ylabel='Power Spectral Density (dB)')
plt.show()
```
Notably, :func:`mne.time_frequency.psd_welch` supports the keyword argument
``average``, which specifies how to estimate the PSD based on the individual
windowed segments. The default is ``average='mean'``, which simply calculates
the arithmetic mean across segments. Specifying ``average='median'``, in
contrast, returns the PSD based on the median of the segments (corrected for
bias relative to the mean), which is a more robust measure.
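As a quick illustration of why the median is the more robust choice here, a toy NumPy sketch (illustration only, not MNE code) models per-segment Welch powers as chi-square draws:

```python
import numpy as np

# Illustration only (not MNE code): per-segment Welch powers are roughly
# chi-square distributed, so a few large segments drag the mean upward
# while the median stays with the bulk of the distribution.
rng = np.random.default_rng(0)
segments = rng.chisquare(df=2, size=10000)  # simulated per-segment power

mean_est = segments.mean()          # close to the true mean, 2
median_est = np.median(segments)    # close to 2*ln(2), i.e. biased low
print(mean_est, median_est)
```

This low bias of the raw median relative to the mean is the reason a bias correction is applied when `average='median'`.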
```
# Estimate PSDs based on "mean" and "median" averaging for comparison.
kwargs = dict(fmin=2, fmax=40, n_jobs=1)
psds_welch_mean, freqs_mean = psd_welch(epochs, average='mean', **kwargs)
psds_welch_median, freqs_median = psd_welch(epochs, average='median', **kwargs)
# Convert power to dB scale.
psds_welch_mean = 10 * np.log10(psds_welch_mean)
psds_welch_median = 10 * np.log10(psds_welch_median)
# We will only plot the PSD for a single sensor in the first epoch.
ch_name = 'MEG 0122'
ch_idx = epochs.info['ch_names'].index(ch_name)
epo_idx = 0
_, ax = plt.subplots()
ax.plot(freqs_mean, psds_welch_mean[epo_idx, ch_idx, :], color='k',
ls='-', label='mean of segments')
ax.plot(freqs_median, psds_welch_median[epo_idx, ch_idx, :], color='k',
ls='--', label='median of segments')
ax.set(title='Welch PSD ({}, Epoch {})'.format(ch_name, epo_idx),
xlabel='Frequency (Hz)', ylabel='Power Spectral Density (dB)')
ax.legend(loc='upper right')
plt.show()
```
Lastly, we can also retrieve the unaggregated segments by passing
``average=None`` to :func:`mne.time_frequency.psd_welch`. The dimensions of
the returned array are ``(n_epochs, n_sensors, n_freqs, n_segments)``.
```
psds_welch_unagg, freqs_unagg = psd_welch(epochs, average=None, **kwargs)
print(psds_welch_unagg.shape)
```
Time-frequency analysis: power and inter-trial coherence
--------------------------------------------------------
We now compute time-frequency representations (TFRs) from our Epochs.
We'll look at power and inter-trial coherence (ITC).
To do this we'll use the function :func:`mne.time_frequency.tfr_morlet`
but you can also use :func:`mne.time_frequency.tfr_multitaper`
or :func:`mne.time_frequency.tfr_stockwell`.
```
# define frequencies of interest (log-spaced)
freqs = np.logspace(*np.log10([6, 35]), num=8)
n_cycles = freqs / 2. # different number of cycle per frequency
power, itc = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, use_fft=True,
return_itc=True, decim=3, n_jobs=1)
```
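As a side note on the `n_cycles = freqs / 2.` choice, here is a small sketch of the implied smoothing under one common Morlet convention (an assumption for illustration, not MNE internals):

```python
import numpy as np

# Rough sketch (one common convention, not MNE internals): a Morlet wavelet
# with n_cycles cycles at frequency f has temporal std
# sigma_t = n_cycles / (2*pi*f) and spectral std sigma_f = f / n_cycles.
# With n_cycles = freqs / 2 both become constant across frequencies:
# sigma_t = 1/(4*pi) s (~80 ms) and sigma_f = 2 Hz.
freqs = np.logspace(*np.log10([6, 35]), num=8)
n_cycles = freqs / 2.

sigma_t = n_cycles / (2. * np.pi * freqs)
sigma_f = freqs / n_cycles
print(sigma_t, sigma_f)
```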
Inspect power
-------------
<div class="alert alert-info"><h4>Note</h4><p>The generated figures are interactive. In the topo you can click
on an image to visualize the data for one sensor.
You can also select a portion in the time-frequency plane to
obtain a topomap for a certain time-frequency region.</p></div>
```
power.plot_topo(baseline=(-0.5, 0), mode='logratio', title='Average power')
power.plot([82], baseline=(-0.5, 0), mode='logratio', title=power.ch_names[82])
fig, axis = plt.subplots(1, 2, figsize=(7, 4))
power.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=8, fmax=12,
baseline=(-0.5, 0), mode='logratio', axes=axis[0],
title='Alpha', show=False)
power.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=13, fmax=25,
baseline=(-0.5, 0), mode='logratio', axes=axis[1],
title='Beta', show=False)
mne.viz.tight_layout()
plt.show()
```
Joint Plot
----------
You can also create a joint plot showing both the aggregated TFR
across channels and topomaps at specific times and frequencies to obtain
a quick overview regarding oscillatory effects across time and space.
```
power.plot_joint(baseline=(-0.5, 0), mode='mean', tmin=-.5, tmax=2,
timefreqs=[(.5, 10), (1.3, 8)])
```
Inspect ITC
-----------
```
itc.plot_topo(title='Inter-Trial coherence', vmin=0., vmax=1., cmap='Reds')
```
<div class="alert alert-info"><h4>Note</h4><p>Baseline correction can be applied to power or done in plots.
To illustrate baseline correction in plots, the following line is left
commented out: power.apply_baseline(baseline=(-0.5, 0), mode='logratio')</p></div>
Exercise
--------
- Visualize the inter-trial coherence values as topomaps as done with
power.
# Plots of the total distance covered by the particles as a function of their initial position
*Author: Miriam Sterl*
We plot the total distances covered by the particles during the simulation, as a function of their initial position. We do this for the FES, the GC and the GC+FES run.
```
from netCDF4 import Dataset
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import cartopy.mpl.ticker as cticker
File1 = '/science/projects/oceanparcels/output_data/data_Miriam/Results_TrackingFES.nc'
dataset1 = Dataset(File1)
lat1 = dataset1.variables['lat'][:]
lon1 = dataset1.variables['lon'][:]
time1 = dataset1.variables['time'][:]
dist1 = dataset1.variables['distance'][:]
lon1[lon1>180]-=360
lon1[lon1<-180]+=360
File2 = '/science/projects/oceanparcels/output_data/data_Miriam/Results_TrackingGC.nc'
dataset2 = Dataset(File2)
lat2 = dataset2.variables['lat'][:]
lon2 = dataset2.variables['lon'][:]
time2 = dataset2.variables['time'][:]
dist2 = dataset2.variables['distance'][:]
lon2[lon2>180]-=360
lon2[lon2<-180]+=360
File3 = '/science/projects/oceanparcels/output_data/data_Miriam/Results_TrackingGCFES.nc'
dataset3 = Dataset(File3)
lat3 = dataset3.variables['lat'][:]
lon3 = dataset3.variables['lon'][:]
time3 = dataset3.variables['time'][:]
dist3 = dataset3.variables['distance'][:]
lon3[lon3>180]-=360
lon3[lon3<-180]+=360
# Initial longitudes and latitudes (on 2002-01-01)
startLons = lon1[:,0]
startLats = lat1[:,0]
# Distance travelled by the particles between 2002-01-01 and 2015-01-01
finalDist = [dist1[:,-1], dist2[:,-1], dist3[:,-1]]
titles = ['(a) FES run', '(b) GC run', '(c) GC+FES run']
def DistancePlot(lons, lats, dist, fig, ax, vmin, vmax, titlenr, titlesize, labelnr, labelsize, colormap):
"""
Function that plots the total distance covered by particles during a certain period as a function of their initial position
"""
minLat = np.min(np.round(lats)) # the minimal (rounded) latitude
maxLat = np.max(np.round(lats)) # the maximal (rounded) latitude
minLon = np.min(np.round(lons)) # the minimal (rounded) longitude
maxLon = np.max(np.round(lons)) # the maximal (rounded) longitude
allLats = np.arange(minLat, maxLat+1) # the latitudinal grid
allLons = np.arange(minLon, maxLon+1) # the longitudinal grid
distances = np.zeros((len(allLons), len(allLats)))
for i in range(len(dist)):
distances[int(np.round(lons[i]-minLon)), int(np.round(lats[i]-minLat))] = dist[i]
# shift by minLon, minLat to get positive indices
maskedDist = np.ma.masked_where(distances==0.0, distances) # mask land points
Lat, Lon = np.meshgrid(allLats, allLons)
distplot = ax.pcolormesh(Lon, Lat, maskedDist/1e4, cmap = colormap, vmin=vmin, vmax=vmax)
ax.set_title(titles[titlenr], fontsize=titlesize,fontweight='bold')
ax.coastlines()
ax.add_feature(cfeature.LAND, zorder=0, edgecolor='black', facecolor=(0.6,0.6,0.6))
ax.set_xticks([-180, -150, -120, -90, -60, -30, 0, 30, 60, 90, 120, 150, 180], crs=ccrs.PlateCarree())
ax.set_xticklabels([-180, -150, -120, -90, -60, -30, 0, 30, 60, 90, 120, 150, 180], fontsize=labelsize)
ax.set_yticks([-90, -60, - 30, 0, 30, 60, 90], crs=ccrs.PlateCarree())
ax.set_yticklabels([-90, -60, - 30, 0, 30, 60, 90], fontsize=labelsize)
lon_formatter = cticker.LongitudeFormatter()
lat_formatter = cticker.LatitudeFormatter()
ax.xaxis.set_major_formatter(lon_formatter)
ax.yaxis.set_major_formatter(lat_formatter)
ax.grid(linewidth=2, color='black', alpha=0.25, linestyle=':')
return distplot
# Compare the three different runs after 13 years
fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(28,16), subplot_kw={'projection': ccrs.PlateCarree()})
i=0
for ax in axes.flat:
distance = DistancePlot(startLons, startLats, finalDist[i], fig, ax,
vmin=1, vmax=10, titlenr = i, titlesize=18, labelnr = 0, labelsize=15, colormap='YlOrRd')
i = i+1
cbar = fig.colorbar(distance, ax=axes.ravel().tolist(), shrink=0.53, extend='both', anchor=(2.2,0.5))
cbar.set_label("Distance ($10^{4}$ km)", rotation=90, fontsize=15)
cbar.ax.tick_params(labelsize=12)
fig.suptitle('Total distance covered', x=0.835, y=1.02, fontsize=21, fontweight='bold')
plt.tight_layout()
#plt.savefig('DistanceComparison', bbox_inches='tight')
```
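The index-shifting trick inside `DistancePlot` (round the coordinates, subtract the minimum so they become non-negative array indices) can be illustrated on a hypothetical handful of points:

```python
import numpy as np

# Hypothetical points (not the survey data): round each coordinate and
# shift by the minimum so the result can be used as a non-negative index.
lons = np.array([-10.2, -9.7, 0.4, 5.1])
lats = np.array([3.9, 4.2, -2.1, 0.0])
vals = np.array([1., 2., 3., 4.])

min_lon = np.min(np.round(lons))
min_lat = np.min(np.round(lats))
grid = np.zeros((int(np.max(np.round(lons)) - min_lon) + 1,
                 int(np.max(np.round(lats)) - min_lat) + 1))
for i in range(len(vals)):
    # points that round to the same cell overwrite each other,
    # exactly as in DistancePlot above
    grid[int(np.round(lons[i] - min_lon)), int(np.round(lats[i] - min_lat))] = vals[i]
print(grid.shape)
```

Here the first two points fall in the same 1°×1° cell, so the second one overwrites the first, which mirrors what happens in the plotting function above.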
# Data-Sitters Club 8: Just the Code
This notebook contains just the code (and a little bit of text) from the portions of *[DSC 8: Text-Comparison-Algorithm-Crazy-Quinn](https://datasittersclub.github.io/site/dsc8/)* for using Euclidean and cosine distance with word counts and word frequencies, and running TF-IDF for your texts.
This code assumes you've actually read the Data-Sitters Club book already. There are lots of pitfalls if you just try to apply the code without understanding what it's doing, or the effects of the various different options. Read first, then try!
## Load modules
```
#Installs seaborn
#You only need to run this cell the first time you run this notebook
import sys
!{sys.executable} -m pip install seaborn
#Imports the count vectorizer from scikit-learn
from sklearn.feature_extraction.text import CountVectorizer
#Glob is used for finding path names
import glob
#We need these to format the data correctly
from scipy.spatial.distance import pdist, squareform
#In case you're starting to run the code just at this point, we'll need os again
import os
import numpy as np
#In case you're starting to run the code just at this point, we'll need pandas again
import pandas as pd
#Import matplotlib
import matplotlib.pyplot as plt
#Import seaborn
import seaborn as sns
```
## Set the file directory for your corpus
```
filedir = '/Users/qad/Documents/dsc_corpus_clean'
os.chdir(filedir)
```
# Word count vectorizer
This looks at just the top 1000 words, and doesn't use `max_df` to remove words that occur across all your texts. You can add it in between the input and the `max_features` parameters, separated by a comma (e.g. `input="filename", max_df=.7, max_features=1000`).
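What `max_df` does can be sketched in plain Python on a hypothetical three-document corpus: a term is dropped when the fraction of documents containing it exceeds the threshold.

```python
# Hypothetical mini-corpus (not the DSC texts) illustrating max_df:
# drop terms whose document frequency exceeds the threshold.
docs = [
    "the cat sat",
    "the dog ran",
    "the cat ran",
]

vocab = {word for doc in docs for word in doc.split()}
doc_freq = {w: sum(w in doc.split() for doc in docs) / len(docs) for w in vocab}

# max_df=.7 keeps only terms appearing in at most 70% of the documents
kept = {w for w, df in doc_freq.items() if df <= 0.7}
print(sorted(kept))
```

Here `'the'` appears in every document (document frequency 1.0), so it is the only term removed.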
```
# Use the glob library to create a list of file names, sorted alphabetically
# Alphabetical sorting will get us the books in numerical order
filenames = sorted(glob.glob("*.txt"))
# Parse those filenames to create a list of file keys (ID numbers)
# You'll use these later on.
filekeys = [f.split('/')[-1].split('.')[0] for f in filenames]
# Create a CountVectorizer instance with the parameters you need
wordcountvectorizer = CountVectorizer(input="filename", max_features=1000)
# Run the vectorizer on your list of filenames to create your wordcounts
# Use the toarray() function so that SciPy will accept the results
wordcounts = wordcountvectorizer.fit_transform(filenames).toarray()
```
### Bonus: word count toy
The code below will display all the words that were included in the word count vectorizer, based on the parameters you've set.
```
sum_words = np.asarray(wordcounts.sum(axis=0)).ravel()
words_freq = [(word, sum_words[idx]) for word, idx in wordcountvectorizer.vocabulary_.items()]
sorted(words_freq, key = lambda x: x[1], reverse=True)
```
## Euclidean distance for word count vectorizer
```
#Runs the Euclidean distance calculation, prints the output, and saves it as a CSV
euclidean_distances = pd.DataFrame(squareform(pdist(wordcounts)), index=filekeys, columns=filekeys)
euclidean_distances
```
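Before reading the heatmaps, it may help to see on toy vectors how the two metrics differ: Euclidean distance is sensitive to text length (total counts), while cosine distance only compares the direction of the count vector.

```python
import numpy as np
from scipy.spatial.distance import pdist

# Toy vectors (not the corpus data): identical word proportions,
# very different lengths.
short = np.array([1., 2., 3.])
long_ = short * 10
pair = np.vstack([short, long_])

print(pdist(pair, metric='euclidean'))  # large: sensitive to magnitude
print(pdist(pair, metric='cosine'))     # ~0: identical direction
```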
### Euclidean distance visualization
```
#Defines the size of the image
plt.figure(figsize=(100, 100))
#Increases the label size so it's more legible
sns.set(font_scale=3)
#Generates the visualization using the data in the dataframe
ax = sns.heatmap(euclidean_distances)
#Displays the image
plt.show()
```
## Cosine distance for word count vectorizer
```
cosine_distances = pd.DataFrame(squareform(pdist(wordcounts, metric='cosine')), index=filekeys, columns=filekeys)
cosine_distances
```
### Cosine distance visualization
```
#Defines the size of the image
plt.figure(figsize=(100, 100))
#Increases the label size so it's more legible
sns.set(font_scale=3)
#Generates the visualization using the data in the dataframe
ax = sns.heatmap(cosine_distances)
#Displays the image
plt.show()
```
# Term frequency vectorizer
```
from sklearn.feature_extraction.text import TfidfVectorizer
# Use the glob library to create a list of file names, sorted alphabetically
# Alphabetical sorting will get us the books in numerical order
filenames = sorted(glob.glob("*.txt"))
# Parse those filenames to create a list of file keys (ID numbers)
# You'll use these later on.
filekeys = [f.split('/')[-1].split('.')[0] for f in filenames]
# Create a TfidfVectorizer instance with the parameters you need
freqvectorizer = TfidfVectorizer(input="filename", stop_words=None, use_idf=False, norm='l1', max_features=1000)
# Run the vectorizer on your list of filenames to create your wordcounts
# Use the toarray() function so that SciPy will accept the results
wordfreqs = freqvectorizer.fit_transform(filenames).toarray()
```
## Euclidean distance for term frequency vectorizer
```
euclidean_distances_freq = pd.DataFrame(squareform(pdist(wordfreqs, metric='euclidean')), index=filekeys, columns=filekeys)
euclidean_distances_freq
```
### Euclidean distance visualization
```
#Defines the size of the image
plt.figure(figsize=(100, 100))
#Increases the label size so it's more legible
sns.set(font_scale=3)
#Generates the visualization using the data in the dataframe
ax = sns.heatmap(euclidean_distances_freq)
#Displays the image
plt.show()
```
## Cosine distance for term frequency vectorizer
```
cosine_distances_freq = pd.DataFrame(squareform(pdist(wordfreqs, metric='cosine')), index=filekeys, columns=filekeys)
cosine_distances_freq
```
### Cosine distance visualization
```
#Defines the size of the image
plt.figure(figsize=(100, 100))
#Increases the label size so it's more legible
sns.set(font_scale=3)
#Generates the visualization using the data in the dataframe
ax = sns.heatmap(cosine_distances_freq)
#Displays the image
plt.show()
```
## TF-IDF
```
# Use the glob library to create a list of file names, sorted alphabetically
# Alphabetical sorting will get us the books in numerical order
filenames = sorted(glob.glob("*.txt"))
# Parse those filenames to create a list of file keys (ID numbers)
# You'll use these later on.
filekeys = [f.split('/')[-1].split('.')[0] for f in filenames]
# Create a TfidfVectorizer instance with the parameters you need
vectorizer = TfidfVectorizer(input="filename", stop_words=None, use_idf=True, norm=None, max_features=1000, max_df=.95)
# Run the vectorizer on your list of filenames to create your wordcounts
# Use the toarray() function so that SciPy will accept the results
transformed_documents = vectorizer.fit_transform(filenames)
transformed_documents_as_array = transformed_documents.toarray()
```
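For reference, the scores above can be reproduced by hand. This sketch assumes scikit-learn's default smoothed idf, ln((1 + n) / (1 + df)) + 1, combined with `norm=None` as in the cell above, applied to a hypothetical count matrix:

```python
import numpy as np

# Hypothetical term counts: 3 documents x 3 terms (not the DSC corpus).
counts = np.array([[3, 0, 1],
                   [2, 0, 0],
                   [3, 0, 2]])

n_docs = counts.shape[0]
df = (counts > 0).sum(axis=0)              # document frequency per term
idf = np.log((1 + n_docs) / (1 + df)) + 1  # smoothed idf (sklearn default)
tfidf = counts * idf                       # norm=None: no length normalization
print(tfidf)
```

A term that occurs in every document gets idf = 1, so its score is just its raw count; rarer terms are boosted.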
Create a CSV per text file with most distinctive terms.
```
# construct a list of output file paths by swapping each .txt extension for .csv
output_filenames = [str(txt_file).replace(".txt", ".csv") for txt_file in filenames]
# loop over each item in transformed_documents_as_array, using enumerate to keep track of the current position
for counter, doc in enumerate(transformed_documents_as_array):
# construct a dataframe
tf_idf_tuples = list(zip(vectorizer.get_feature_names(), doc))
one_doc_as_df = pd.DataFrame.from_records(tf_idf_tuples, columns=['term', 'score']).sort_values(by='score', ascending=False).reset_index(drop=True)
# output to a csv using the enumerated value for the filename
one_doc_as_df.to_csv(output_filenames[counter])
```
## Suggested Citation
Dombrowski, Quinn. “DSC #8: Just the Code.” Jupyter Notebook. *The Data-Sitters Club*, October 21, 2020. https://github.com/datasittersclub/dsc8.
# Politician Activity on Facebook by Political Affiliation
The parameters in the cell below can be adjusted to explore other political affiliations and time frames.
### How to explore other political affiliations?
The ***affiliation*** parameter can be used to aggregate politicians by their political affiliation. The `affiliation` column in [this other notebook](../politicians.ipynb?autorun=true) shows the politicians that belong to each political affiliation.
***Alternatively***, you can directly use the [politicians API](http://mediamonitoring.gesis.org/api/politicians/swagger/), or access it with the [SMM Wrapper](https://pypi.org/project/smm-wrapper/).
## A. Set Up parameters
```
# Parameters:
affiliation = 'Grüne'
from_date = '2017-09-01'
to_date = '2018-12-31'
aggregation = 'week'
```
## B. Using the SMM Politician API
```
import pandas as pd
# Create an instance to the smm wrapper
from smm_wrapper import SMMPoliticians
smm = SMMPoliticians()
#using the api to get the data
df = smm.dv.get_politicians()
# Filter the accounts by party, and valid ones (the ones that contain fb_ids)
party_df = df[(df['affiliation']==affiliation) & (df['fb_ids'].notnull())]
# query the Social Media Monitoring API
posts_by = pd.concat(smm.dv.posts_by(_id=organization_id, from_date=from_date, to_date=to_date, aggregate_by=aggregation)
for organization_id in party_df.index)
comments_by = pd.concat(smm.dv.comments_by(_id=organization_id, from_date=from_date, to_date=to_date, aggregate_by=aggregation)
for organization_id in party_df.index)
# aggregate posts and comments
total_posts_by = posts_by.groupby('date')[
    ['posts', 'replies', 'shares', 'reactions', 'likes']].sum()
total_comments_by = comments_by.groupby('date')[
    ['comments', 'replies', 'likes']].sum()
```
## C. Plotting
### C.1 Plot Facebook Post Activity
```
import plotly
from plotly import graph_objs as go
plotly.offline.init_notebook_mode(connected=True)
#plot for facebook posts activity
plotly.offline.iplot({
"data": [go.Scatter(x=total_posts_by.index.tolist(), y=total_posts_by['posts'], name='Posts', line_shape='spline'),
go.Scatter(x=total_posts_by.index.tolist(), y=total_posts_by['replies'], name='Replies',line_shape='spline'),
go.Scatter(x=total_posts_by.index.tolist(), y=total_posts_by['shares'], name='Shares', line_shape='spline'),
go.Scatter(x=total_posts_by.index.tolist(), y=total_posts_by['reactions'], name='Reactions', line_shape='spline'),
go.Scatter(x=total_posts_by.index.tolist(), y=total_posts_by['likes'], name='Likes', line_shape='spline')],
"layout": go.Layout(title='Facebook posts for {}'.format(affiliation), xaxis={'title':''}, yaxis={'title':'N'})
})
```
### C.2 Plot Facebook Comment Activity
```
# plot for facebook comments activity
plotly.offline.iplot({
"data": [go.Scatter(x=total_comments_by.index.tolist(), y=total_comments_by['comments'], name='Comments', line_shape='spline'),
go.Scatter(x=total_comments_by.index.tolist(), y=total_comments_by['replies'], name='Replies', line_shape='spline'),
go.Scatter(x=total_comments_by.index.tolist(), y=total_comments_by['likes'], name='Likes', line_shape='spline')],
"layout": go.Layout(title='Facebook comments for {}'.format(affiliation), xaxis={'title':''}, yaxis={'title':'N'})
})
```
# <a id="目的:了解Python基本語法"/>Goal: Understand basic Python syntax
1. [Data types](#01)
2. [for-loop](#02)
3. [while-loop](#03)
4. [Lists (list)](#04)
5. [What is a tuple?](#05)
6. [Python's special ways of building lists](#06)
7. [Using if](#07)
8. [Controlling break and continue with if](#08)
9. [Functions: printing a result inside the function vs. returning it](#09)
10. [Anonymous functions](#10)
11. [Object-oriented example](#11)
12. [NumPy (the Python package for numerical arrays)](#12)
13. [1-D sequences](#13)
14. [2-D matrices](#14)
# Exercises
* [Use range(5), for and append() to build the list \[0,1,4,9,16\] ](#ex01)
* [Use range(5), if and for to build the list \[0,4,16\] ](#ex02)
* [Print the 9x9 multiplication table](#ex1)
* [Print the 9x9 multiplication table (as a list)](#ex2)
* [Write a function factorial(n).](#ex3)
* [Write a function f. Input: a 2-D matrix; output: the sum of all values in that matrix.](#ex4)
---
## <a id="01"/>Data types
### Integers (int)
```
a=1
type(a)
b=3
type(b)
```
Dividing two integers produces a floating-point number (float). (Note: since Python 3.)
```
a/b
type(a/b)
```
In Python 3, to divide two integers and truly store the result as an integer, use the // operator.
```
a//b
type(a//b)
```
Adding two integers still produces an integer.
```
a+b
type(a+b)
```
### Floating-point numbers (float)
Python does not require type declarations. A number is treated as an integer (int) or a float depending on whether it contains a decimal point.
```
type(1)
type(1.)
type(1.E-5)
```
### Strings (str)
```
mystr='Hello World!'
type(mystr)
```
Convert every character of the string to uppercase
```
mystr.upper()
```
Convert every character of the string to lowercase
```
mystr.upper().lower()
```
Take the first three characters of the string
```
mystr[0:3]
```
Check whether a substring is present in the string
```
'Wor' in mystr
'WOR' in mystr
'WOR' in mystr.upper()
```
Check the string length with len()
```
len(mystr)
mystr=' hi '
```
Strip whitespace from both ends
```
mystr.strip()
```
Strip leading whitespace
```
mystr.lstrip()
```
Strip trailing whitespace
```
mystr.rstrip()
```
Replace h with f inside the string
```
mystr.replace('h','f')
```
### Booleans (Boolean)
```
t=True #true
f=False #false
t==f #does true equal false?
t==t #does true equal true?
t!=f #is true not equal to false?
t==f or t!=f #true equals false, or true is not equal to false?
t==f and t!=f #true equals false, and true is not equal to false?
not t #not true?
```
[Back to index](#目的:了解Python基本語法)
## <a id="02"/>for-loop
```
for j in range(5):
print(j)
```
Above, we used the built-in function range(). What exactly is it?
```
r=range(5)
print( type(r) )
```
Checking the type of the variable r with type(), we find that r=range(5) is an object of the class 'range'.
Next, we use the built-in function hasattr() to check whether the object range(5) is iterable:
First, check the usage of hasattr() with the help() function:
```
help(hasattr)
hasattr(range(5), '__iter__')
r=range(5).__iter__() # get an iterator over range(5)
print( r.__next__() ) # iterate and print
print( r.__next__() ) # iterate and print
print( r.__next__() ) # iterate and print
print( r.__next__() ) # iterate and print
print( r.__next__() ) # iterate and print
print( r.__next__() ) # iterate and print (this sixth call raises StopIteration)
```
### Summary
1. If an object is iterable:
* we can use \_\_iter\_\_() and \_\_next\_\_() on it to pull its elements out one by one.
* its elements can also simply be retrieved with a for loop.
2. Review the meaning of these functions: hasattr(), \_\_iter\_\_(), \_\_next\_\_(), range()
[Back to index](#目的:了解Python基本語法)
-----
## <a id="03"/> while-loop
```
i=0
while(i<5):
print(i)
    i+=1 # shorthand for i=i+1
```
Commonly used when you don't know in advance how many iterations are needed and want to loop until a condition is met. For example: keep trying to fetch a web page until it succeeds or has failed too many times.
[Back to index](#目的:了解Python基本語法)
## <a id="04"/>Lists (list)
Definition: a collection of elements. Elements in a list may repeat, and every element has an index.
```
array=[1,2,2,3,4,5] #create a list
print(array)
print(array[0]) #print the first element of the list
print(array[-1]) #print the last element of the list
type([1,2,2,3,4,5]) #use type() to confirm that the type of a list is indeed list
hasattr([1,2,3,4,5],'__iter__') # if [1,2,3,4,5] is iterable, we can use a loop to pull out all of its elements
for j in [1,2,3,4,5]:
    print(j,j**2)
for j in [1,2.,'字串',3,range(10),5,[1,1,1,2,2,2]]:
    print(j,'\t',type(j),'\t',hasattr(j,'__iter__'))
```
From the above we learn:
1. Elements of a list can have different types.
2. Strings (str), like lists, are iterable objects. Their contents can therefore be pulled out with a for loop, e.g.:
```
for j in 'Python':
print(j)
```
Use append() to add a new element to a list
```
array=[1,2,3]
array.append(4)
print(array)
```
Use del to delete an element from a list
```
print(array)
del array[2] #delete the element at index 2
print(array)
```
Use len() to get the length of a list
```
array=[10,20,30,40]
print(len(array))
```
Use enumerate() to enumerate a list
```
enumerate(array)
type(enumerate(array))
hasattr(enumerate,'__iter__')
for j in enumerate(array):
print(j)
print( type( (0,10) ) )
```
[Back to index](#目的:了解Python基本語法)
## <a id="05"/>What is a tuple?
```
array=(1,2,3,"abc")
print(array)
del array[1]
array.append(5)
array[2]=0
```
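Since each failing statement above stops the cell, a try/except version (a small illustrative sketch) shows the same read-only behaviour without the traceback:

```python
# Mutating a tuple raises TypeError, which confirms it is read-only.
t = (1, 2, 3)
try:
    t[0] = 99
except TypeError as e:
    print("tuples are read-only:", e)

# If you need a modified copy, build a new tuple instead
t2 = t[:1] + (99,) + t[2:]
print(t2)
```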
Conclusion: you cannot add, delete, or overwrite the elements of a tuple, so a tuple can be seen as a read-only list.
A list can be turned into a set with set().
Definition of a set: its elements may not repeat, and they have no index.
```
set([1,1,2,3,3,4,1,2,'alpha','beta'])
type( {1, 2, 3, 4, 'beta', 'alpha'} )
st={1,1,2,3,3,4,1,2,'alpha','beta'}
print(st)
print(hasattr(st,'__iter__'))
for j in st:
print(j)
print(st[0])
```
As mentioned earlier, the elements of a set have no index.
[Back to index](#目的:了解Python基本語法)
## <a id="06"/>Python's special ways of building lists
To pull the contents of range(5) into a list called lst, there are several ways:
Method 1
```
lst=[]
for j in range(5):
lst.append(j)
print(lst)
```
Method 2
```
lst=[j for j in range(5)] #this is the very Pythonic way of coding
print(lst)
```
Method 3
```
lst=list(range(5))
print(lst)
```
Method 4
```
lst=[*range(5)]
print(lst)
```
## <a id="ex01" style='color:purple'/> Exercise 0-1. Use range(5), for and append() to build the list [0,1,4,9,16]
```
#Method 1:
lst=[]
for j in range(5):
    #complete the rest
#Method 2:
#hint: lst=[.....]
```
[Back to index](#目的:了解Python基本語法)
## <a id="ex02" style='color:purple'/> Exercise 0-2. Use range(5), if and for to build the list [0,4,16]
```
# Method 1:
lst=[]
for j in range(5):
    #complete the rest
#Method 2:
#hint: lst=[.....]
```
[Back to index](#目的:了解Python基本語法)
## <a id="07"/>Using if
### Using if...elif...else:
```
x=5
if(x==1):
print('x is 1')
elif(x==2):
print('x is 2')
else:
print('x is neither 1 nor 2.')
```
### Example: take the even numbers in range(10) and print them:
Method 1
```
for j in range(10):
if(j%2==0):
print(j)
```
Method 2
```
[j for j in range(10) if j%2==0]
```
[Back to index](#目的:了解Python基本語法)
## <a id="08"/>Controlling break and continue with if
```
for j in range(5):
print(j)
if(j==2):
        break #break out of the loop
for j in range(5):
if(j==2):
        continue #skip the rest of the loop body and move on to the next element
print(j)
```
[Back to index](#目的:了解Python基本語法)
## <a id="ex1" style='color:purple'/> Exercise 1. Try to produce the following output
```
#hint: use for, range(), print()
for i in range(1,4):
    #complete the rest
```
[Back to index](#目的:了解Python基本語法)
## <a id="ex2" style='color:purple'/> Exercise 2. Try to produce the following output
```
#hint: use for, range(), print(), and build a list
#complete the rest
```
[Back to index](#目的:了解Python基本語法)
## <a id="09"/>Functions: printing a result inside the function vs. returning it
### Example 1
```
def square(x):
print(x*x)
def square_return(x):
return(x**2)
```
square(x) only prints the square of x, while square_return(x) returns it.
```
square(2)
square_return(2)
```
Another variable, res, can receive the value returned by square_return(x).
```
res=square_return(2)
print(res)
```
Note that square(x) does not return a value, so res will receive None.
```
res=square(2)
print(res)
```
### Example 2: write a function add(a, b). Input: a and b; output: a+b.
```
def add(a,b):
return a+b
addResult=add(5,7)
print(addResult)
```
### Review: the Java way of writing a function (input x, return x squared)
```
%%file testSquare.java
public class testSquare{
public static void main(String args[]){
int y=square(2);
System.out.println(y);
}
static int square(int x){
return x*x;
}
}
!javac testSquare.java
!java testSquare
```
[Back to index](#目的:了解Python基本語法)
## <a id="ex3" style='color:purple'/> Exercise 3: write a function factorial(n).
It should behave as follows:
Input: $n$; output: $1*2*3*....*n$
```
# Modify the code below to complete the function factorial(n)
def factorial(n):
if(n==0):
return ???
if(n!=0):
return ???
```
[Back to index](#目的:了解Python基本語法)
## <a id="10"/>Anonymous functions
The ordinary way of writing a function
```
def f(x,y):
return x+y
f(1,2)
```
Using an anonymous function and giving it the name f. The resulting function is identical to the ordinary definition above.
```
f=lambda x,y:x+y
f(1,2)
```
Using an anonymous function directly, without giving it a name: use it once and throw it away.
```
(lambda x,y:x+y)(1,2) # 1+2=3
(lambda x:x*x)(7) # 7*7=49
```
[Back to index](#目的:了解Python基本語法)
## <a id="11"/> Object-oriented example
Example: an ATM
```
class Customer(object):
    def __init__(self, name, balance=0.0):
        self.name=name #when the object is created, the name and balance attributes are initialized
        self.balance=balance
    def withdraw(self, amount): #withdraw money
        if amount > self.balance: #withdrawing more than the account balance raises an error
            raise RuntimeError('Amount greater than available balance.')
        self.balance -= amount
        return self.balance
    def deposit(self, amount): #deposit money
        self.balance += amount
        return self.balance
```
* Line 1: every Python 3 class is a subclass of the class object.
* Line 2: when an object is created, the initializer __init__() (the equivalent of a constructor in Java) initializes some attributes belonging to the object. In this example, the object's two attributes, the customer's name and the account balance, are created.
* Every method receives the object itself as its first parameter. By convention, the object itself is called self.
```
a=Customer("Bill",100)
a.withdraw(70)
a.deposit(60)
a.withdraw(100)
```
[Back to index](#目的:了解Python基本語法)
---
## <a id="12"/>NumPy (the Python package for numerical arrays)
This package is used to build numerical arrays and perform numerical computation.
https://docs.scipy.org/doc/numpy/reference/index.html
```
import numpy as np
```
The built-in constant $\pi$
```
np.pi
```
Compute $\sqrt{\pi}$
```
np.sqrt(np.pi)
```
[Back to index](#目的:了解Python基本語法)
## <a id="13"/>1-D sequences
Use np.arange(n) to build a sequence with contents [0 1 2 .....n-1]
```
np.arange(10)
```
Use np.linspace(0,2.*np.pi,10) to build a 1-D linear space starting at 0 and ending at $2\pi$, with 10 points in total.
```
np.linspace(0,2.*np.pi,10)
```
Add 100 to every value in the sequence
```
np.arange(10)+100
```
Square every value in the sequence
```
np.arange(10)**2
```
Compute the arithmetic mean with np.mean()
```
np.mean( np.arange(10) )
```
Compute the standard deviation with np.std()
```
np.std( np.arange(10) )
```
Compare the performance of NumPy arrays and Python lists
```
a=np.random.normal(0,1,100000) # 100000 normally distributed random numbers
b=np.random.normal(0,1,100000) # 100000 normally distributed random numbers
list_a=list(a)
list_b=list(b)
%%timeit
res=a+b
%%timeit
res=[]
for j in range(len(list_a)):
res.append(list_a[j]+list_b[j])
```
NumPy is faster because
* it is vectorized (data can be fed to several arithmetic logic units at once, which speeds up the computation.)
* all data in an array share the same type, so no element-by-element type checks are needed when adding.
[Back to index](#目的:了解Python基本語法)
## <a id="14"/>2-D matrices
Create a matrix
```
A=np.array([[1,2,3],[4,5,6],[7,8,9]])
A
```
Transpose $A$ ($A^{T}$)
```
A.T
```
$A\cdot A^{T}$
```
A.dot(A.T)
```
Slicing, list style: use `A[index0][index1]` to take out part of the 2-D array $A$.
```
A[0]
A[1:3]
A[1:3]
A[:][1:3]
```
Slicing, matrix style: use `A[index0,index1]` to take out part of the 2-D array $A$. (index0 runs vertically, index1 runs horizontally)
```
A
A[1:3,:]
A[:,1:3]
```
Check the shape of A
```
A.shape
```
Find the values in A that satisfy a condition
```
A>5
A[A>5]
```
[Back to index](#目的:了解Python基本語法)
## <a id="ex4" style='color:purple'/>Exercise 4: write a function f. Input: a 2-D matrix; output: the sum of all values in that matrix.
```
A=np.array([[1,2,3],[4,5,6],[7,8,9]])
def f(A):
    # complete this function
    return ???
```
[Back to index](#目的:了解Python基本語法)
### ILAS: Introduction to Programming 2017/18
# Coursework Assignment: Plant-life Report
__Complete exercises A to E.__
<br>__The exercises should be completed using Python programming skills we have covered in class. The questions are focussed on an imaginary case study:__
>It is thought that the acidification of an area of protected land is having a destructive effect on plant populations.
<br>Experts are particularly worried about the demise of a species of shrub called *winter heath*, which supports the area's insect populations, and the spread of an acid-loving poisonous weed called *darley heath*. <br>Chemical waste from local industries is thought to be responsible for the soil acidification.
<br>Your job is to process data collected over a number of years to present as part of a report.
<br>The report will be used as evidence to try and impose restrictions disposal of industrial waste within the area.
<img src="img/map2.png" alt="Drawing" style="width: 500px;"/>
### Input data
Data collected by a plant survey over the past 20 years is given in the folder `environmental_survey` in the `sample_data` folder of the ILAS_python repository.
The survey was conducted once a year.
The locations and characteristics of plants and trees were recorded.
Soil pH was also recorded at different locations.
### Setting up
Create a new folder in which to store your project.
Copy the `environmental_survey` folder into the project folder.
### Part A: Assembling a Data Set
__Aim: Import plant data from .csv files and manipulate the data to convert units and remove unnecessary values.__
__(1.) Input and Output: Data Frames
<br>*(5 marks)*__
<br>Write a Python program that imports the data from the file `plants2017` and stores it as a __`pandas DataFrame`__.
The data set should contain only the data for shrub plants.
<br>Remove the rows with "tree" in the plants column to leave only information about shrubs in your data set.
(Hint: After removing data from a DataFrame use `df.reset_index(drop=True)` (where 'df' is the DataFrame name) to re-assign index numbers).
__(2.) Functions__
<br>__*(5 marks)*__
<br>The GPS location information for each plant is in units of decimal degrees.
<br>To make them more "human readable", the values should be converted to represent each data point on a 2D grid, with units of metres (or kilometres).
<img src="img/lat_long.png" alt="Drawing" style="width: 400px;"/>
The following equations can be used to approximate:
- the vertical distance from the *equator* from `GPS_lat`
- the horizontal distance from the *meridian* from `GPS_lon`
The latitude in m from the equator:
$lat = \frac{40,008,000 \times GPS_{lat}}{360} $
The longitude in m from the meridian:
$lon = \frac{40,075,160 \times GPS_{lon}}{360} \times \cos(GPS_{lat})$
<img src="img/ParametricCircle.png" alt="Drawing" style="width: 200px;"/>
Write code to convert GPS_lat and GPS_lon in decimal degrees to units of m or km, using the equation above.
<br>__*Hint: `GPS_lat` and `GPS_lon` are given in degrees, but `numpy.cos` expects angles in radians, so convert before applying it.*__
Encapsulate your code in a function so that it can be applied to any data frame.
(Hint: your function should take the columns of data frame to be converted as its arguments).
Show your function works by applying it to your data frame.
(You may also want to *rename* your column headings, as they are no longer GPS coordinates.)
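One possible shape for the conversion function, taking the latitude and longitude columns (in decimal degrees) as arguments:

```python
import numpy as np

def gps_to_metres(lat_deg, lon_deg):
    """Approximate metres north of the equator and east of the meridian.

    Uses the two approximation formulas above; np.cos needs radians,
    so the latitude is converted before taking the cosine.
    """
    lat_m = 40_008_000 * np.asarray(lat_deg) / 360
    lon_m = 40_075_160 * np.asarray(lon_deg) / 360 * np.cos(np.radians(lat_deg))
    return lat_m, lon_m
```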
__(3.) Functions and Data Structures: Boolean Indexing__
<br>__*(5 marks)*__
<br>When fully grown, the four main shrubs that grow in the area can be identified by distinct features.
To include *only fully grown* plants in your data set:
- Write a function that selects only plants above a height of 50cm.
- Apply the function to your data set.
- Edit your function so that the same function may be used to:
- remove plants below 50cm by default
- remove plants below a height set by the user
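A sketch of such a filter with a user-adjustable default (the column name `height`, in metres, is an assumption):

```python
import pandas as pd

def fully_grown(df, min_height=0.5, height_col='height'):
    """Keep only plants at least `min_height` metres tall (default 50 cm)."""
    grown = df[df[height_col] >= min_height]
    return grown.reset_index(drop=True)
```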
### Part B: Refining the Data Set and Mapping pH
__Aim: Split the area over which the survey was taken into a grid of equally sized cells. Sort the pH samples by grid cell to show how pH varies across the area.__
__(1.) Input and Output__
<br>__*(2 marks)*__
<br>In the same Python file you wrote in __Part A__, import the data from the file `pH_2017` and store it as a new __`pandas DataFrame`__ called `pH`.
<br>
__(2.) Functions__
<br>__*(2 marks)*__
<br>Use the function that you wrote in __Part A (2.)__ to convert the columns GPS_lat and GPS_lon in `pH` to units of m or km.
The sampled area measures approximately 3445m x 3950m.
<br>An orthogonal grid of 15 x 15 cells (3000m x 3000m) can be used to represent the sampled area:
- the grid is chosen to be slightly smaller than the sampled area so that no unsampled regions are included.
- the origin is chosen to be at
- $x = x_{min} + \frac{3445-3000}{2}$
- $y = y_{min} + \frac{3950-3000}{2}$
<img src="img/map.png" alt="Drawing" style="width: 500px;"/>
The following equation can be used to map a point, $P$, in range A to range B.
$P_B=\frac{P_A-A_{min}}{A_{max}-A_{min}} \times (B_{max}-B_{min}) + B_{min}$
__(3.) Functions and mathematical operators.__
<br>__*(5 marks)*__
Write a function called `scale` to map points in the range (origin, origin+3000) to the range (0, 3000).
By floor dividing (seminar 2) points in the range 0 to 3000 by 200, each point can be assigned an integer value in the range 0 to 14. Create an additional step in your function that uses floor division to assign an x and y grid reference to each data point.
Note:
- some grid references may be outside of the range 0 to 14.
- multiple data points will belong to the same grid reference.
Add two new columns to your DataFrame to store the x and y grid reference for each data point
Encapsulate the code that assigns a grid reference in a function so that it can be applied to any data set collected in the same area.
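The mapping equation and the floor-division step might be sketched like this (the 200 m cell size comes from 3000 m / 15 cells):

```python
def scale(p, a_min, a_max, b_min=0, b_max=3000):
    """Map a point p from range (a_min, a_max) onto range (b_min, b_max)."""
    return (p - a_min) / (a_max - a_min) * (b_max - b_min) + b_min

def grid_ref(p, cell_size=200):
    """Floor-divide a scaled coordinate (0-3000 m) into a 0-14 grid index."""
    return int(p // cell_size)
```

Note that, as stated above, points outside the 3000 m grid produce indices outside 0 to 14 and can be dropped afterwards.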
__(4.) `numpy` multi-dimensional arrays.__
<br>__*(2 marks)*__
<br>Find the mean of the pH readings taken in each grid cell.
<br>Use a 2D numpy array to store each mean reading at each 2D grid location.
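One way to fill the 2D array (the grid-reference column names `x_ref`, `y_ref` and the readings column `pH` are assumptions):

```python
import numpy as np
import pandas as pd

def mean_ph_grid(ph_df, n=15):
    """Return an n x n array holding the mean pH of each grid cell (NaN if empty)."""
    grid = np.full((n, n), np.nan)
    # group the readings by cell, then write each cell mean into the array
    for (x, y), mean in ph_df.groupby(['x_ref', 'y_ref'])['pH'].mean().items():
        if 0 <= x < n and 0 <= y < n:
            grid[int(y), int(x)] = mean
    return grid
```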
__(5.) Plotting__
<br>__*(3 marks)*__
<br>Plot the mean pH for each grid cell as a colour map of the gridded area.
<br>You may use a *2D colour map* or a *3D plot*.
<br>Save your figure as a .png file in your project folder.
### Part C: Classifying Plants Using Simple Mathematical Operations
__Aim: Sort the plant samples by species. Produce a total count of each species in each grid cell.__
<br>The shrub plants in your DataFrame from __Part A__ can be categorised as one of four species.
The *average* physical characteristics of each *plant species* are shown in the table below:
|Shrub |Height (m)|Leaf length (cm)|Leaf aspect ratio|Bud length (cm)|
|------------|----------|----------------|-----------------|---------------|
|Winter heath| 1.2| 3.5| 2.0| 2.3|
|Bell heather| 1.8| 1.5| 1.2| 2.3|
|Brush bush | 0.7| 2.1| 10.2| 1.5|
|Darley heath| 0.7| 2.2| 3.1| 1.7|
<br>The *vector quantisation algorithm* is a simple algorithm used for categorisation.
It determines which category a data point should belong to by its closest proximity to a set of values representing the possible categories.
<br>Each value represents the *average* of the corresponding category.
The *closeness* of the characteristics of a point $(c_1, c_2, c_3, ... c_n)$ to the average value of a category $(ca_1, ca_2, ca_3, ... ca_n)$ can be determined by the magnitude:
<br>$d = \sqrt{(ca_1-c_1)^2 + (ca_2-c_2)^2 + (ca_3-c_3)^2 + ... + (ca_n-c_n)^2}$ <br>
If $d$ is evaluated for each category, the category with the *minimum* value of $d$ represents the closest fit.
The vector quantisation algorithm can be applied to each data point using a for loop or numpy broadcasting.
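A broadcasting sketch of the algorithm, using the species averages from the table above:

```python
import numpy as np

SPECIES = np.array(['Winter heath', 'Bell heather', 'Brush bush', 'Darley heath'])
AVERAGES = np.array([
    [1.2, 3.5, 2.0, 2.3],   # height (m), leaf length, aspect ratio, bud length
    [1.8, 1.5, 1.2, 2.3],
    [0.7, 2.1, 10.2, 1.5],
    [0.7, 2.2, 3.1, 1.7],
])

def classify(points):
    """Return the closest-fitting species for each row of plant characteristics."""
    # (n, 1, 4) - (4, 4) broadcasts to (n, 4, 4); the norm over the last
    # axis gives one distance d per species for each data point
    d = np.linalg.norm(points[:, None, :] - AVERAGES, axis=2)
    return SPECIES[np.argmin(d, axis=1)]
```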
__(1.) Mathematical computation with NumPy__
<br>__*(5 marks)*__
<br>Use the vector quantisation algorithm to determine the species of each plant.
<br>Hint: Use a for loop or use broadcasting.
<br>Add a column to your DataFrame called "species" with the species of each plant that most closely fits the plant characteristics.
__(2.) Functions__
<br>__*(1 mark)*__
<br>Use the function that you wrote for __Part B: (3.)__ to assign a grid reference to each data point. <br>Save the grid reference x and y values as two columns in your DataFrame.
__(3.) Data Structures: Lists__
<br>__*(5 marks)*__
Create a list for each of the following fields.
1. x grid index
1. y grid index
1. average pH reading
1. total count of *Winter heath* plant
1. total count of *Bell heather* plant
1. total count of *Brush bush* plant
1. total count of *Darley heath* plant
Loop through each grid cell and store a computed value for each field.
Store the lists as a list of lists (nested lists).
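A sketch of the per-cell loop (the column names `x_ref`, `y_ref`, `species` and `pH` are assumptions carried over from the earlier sketches):

```python
import pandas as pd

SPECIES = ['Winter heath', 'Bell heather', 'Brush bush', 'Darley heath']

def summarise_cells(plants, ph, n=15):
    """One nested list per grid cell: [x, y, mean pH, count of each species]."""
    rows = []
    for x in range(n):
        for y in range(n):
            cell_p = plants[(plants.x_ref == x) & (plants.y_ref == y)]
            cell_ph = ph[(ph.x_ref == x) & (ph.y_ref == y)]
            counts = [int((cell_p.species == s).sum()) for s in SPECIES]
            rows.append([x, y, cell_ph.pH.mean()] + counts)
    return rows
```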
### Part D: Using Multiple Files to Produce Time-Series Data
__Aim: Run all the steps that you coded in Parts A-C for every environmental survey collected between the years 1997-2017 to produce time-series data of the plant count and average pH.__
__(1.) Control Flow__
<br>__*(5 marks)*__
<br>Use a for loop to store a list of lists like the one you created in __Part C: (3.)__ for each year of the environmental survey (1997-2017).
Hint: You can loop through each plant survey using:
>```Python
annual_data=[]
for year in range(1997, 2018):
df = pd.read_csv("environmental_survey/plants" + str(year) + ".csv")
```
Hint: Append the list of lists created in __Part C: (3.)__ to the list `annual_data` each time the code loops (here `cell_data` is a placeholder name for the nested list built in Part C):
>```Python
annual_data=[]
for year in range(1997, 2018):
    df = pd.read_csv("environmental_survey/plants" + str(year) + ".csv")
    # ...run the Part A-C processing on df to build cell_data...
    annual_data.append(cell_data)
```
__(2.) Plotting and Curve Fitting__
<br>__*(5 marks)*__
<br>The two closest industrial sites to the area of land are:
<br>__Sketchy inc.__ , established 1995, GPS coordinates lon = 136.7647, lat = 35.7336
<br>__Philamore co.__ , established 1990, GPS coordinates lon = 136.8262, lat = 35.7498
<br>Choose one grid cell that is close to an industrial site and one grid cell that is far from the industrial sites.
<br>Plot a scatter graph of the average pH and plant count for each species (y axis) against time (x axis).
<br>Fit a trendline to each data series.
<br>Show the equation of the trendline and the proximity to an industrial site as labels.
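The fit-and-label step might be sketched with `numpy.polyfit` (the series passed in, and the label text, are up to you):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

def plot_with_trend(years, values, label):
    """Scatter one data series against time and overlay a fitted linear trendline."""
    years = np.asarray(years, dtype=float)
    slope, intercept = np.polyfit(years, values, 1)  # degree-1 least-squares fit
    plt.scatter(years, values, label=label)
    plt.plot(years, slope * years + intercept,
             label=f"{label}: y = {slope:.3f}x + {intercept:.1f}")
    plt.legend(loc='best')
    return slope, intercept
```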
# SageMaker Batch Transform using an XgBoost Bring Your Own Container (BYOC)
In this notebook, we will walk through an end to end data science workflow demonstrating how to build your own custom XGBoost Container using Amazon SageMaker Studio. We will first process the data using SageMaker Processing, push an XGB algorithm container to ECR, train the model, and use Batch Transform to generate inferences from your model in batch or offline mode. Finally we will use SageMaker Experiments to capture the metadata and lineage associated with the trained model. This is a key differentiator of SageMaker Studio as the metadata captured is visible in the Experiments UI.
## The example
In this example we show how to package a custom XGBoost container with Amazon SageMaker studio with a Python example which works with the UCI Credit Card dataset. To use a different algorithm or a different dataset, you can easily change the Docker container and the xgboost folder attached with this code.
In this example, we use a single image to support training and hosting. This simplifies the procedure because we only need to manage one image for both tasks. Sometimes you may want separate images for training and hosting because they have different requirements. In this case, separate the parts discussed below into separate Dockerfiles and build two images. Choosing whether to use a single image or two images is a matter of what is most convenient for you to develop and manage.
If you're only using Amazon SageMaker for training or hosting, but not both, only the functionality used needs to be built into your container.
## The workflow
This notebook is divided into three parts: *exploring your data and feature engineering*, *building your container*, and *using your container to train a model and generate inferences*.
### The Dockerfile
The Dockerfile describes the image that we want to build. You can think of it as describing the complete operating system installation of the system that you want to run. A Docker container running is quite a bit lighter than a full operating system, however, because it takes advantage of Linux on the host machine for the basic operations.
For the Python science stack, we start from a standard base image and install the libraries our algorithm needs. Then we add the code that implements our specific XGBoost algorithm to the container and set up the right environment for it to run under.
For details on how BYOC works with SageMaker Notebook instances, see this example: https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb. Unlike SageMaker notebook instances, in SageMaker studio as we will see below, you will not need the build_and_push.sh script anymore. The studio-build CLI will handle pushing the container to ECR for you.
Let's look at the Dockerfile for this example.
```
!cat Dockerfile
```
### Step 1: Pre-requisites: Download the necessary libraries
```
import sys
#!{sys.executable} -m pip install "sagemaker-experiments"
#!{sys.executable} -m pip install "sagemaker-studio-image-build"
```
### Step 2: Ensure IAM Role has access to necessary services
The SageMaker Studio Image Build CLI uses Amazon Elastic Container Registry and AWS CodeBuild so we need to ensure that the role we provide as input to our CLI commands has the necessary policies and permissions attached.
Two scenarios are supported including:
* **Add IAM Permissions to SageMaker Execution Role**
This scenario includes updating the Execution Role attached to this notebook instance with the required permissions. In this scenario, you need to get the current execution role and ensure the trust policy and additional permissions are associated with the role.
* **Create/Utilize a secondary role with appropriate permissions attached**
This scenario include using a secondary role setup with the permissions below and identified in the --role argument when invoking the CLI (Example: *sm-docker build . --role build-cli-role*)
**Ensure the role that will be used has the following**
1) Trust policy with CodeBuild
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"codebuild.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}
2) Permissions attached to the execution role to execute a build in AWS CodeBuild, create ECR repository and push images to ECR
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codebuild:DeleteProject",
"codebuild:CreateProject",
"codebuild:BatchGetBuilds",
"codebuild:StartBuild"
],
"Resource": "arn:aws:codebuild:*:*:project/sagemaker-studio*"
},
{
"Effect": "Allow",
"Action": "logs:CreateLogStream",
"Resource": "arn:aws:logs:*:*:log-group:/aws/codebuild/sagemaker-studio*"
},
{
"Effect": "Allow",
"Action": [
"logs:GetLogEvents",
"logs:PutLogEvents"
],
"Resource": "arn:aws:logs:*:*:log-group:/aws/codebuild/sagemaker-studio*:log-stream:*"
},
{
"Effect": "Allow",
"Action": "logs:CreateLogGroup",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ecr:CreateRepository",
"ecr:BatchGetImage",
"ecr:CompleteLayerUpload",
"ecr:DescribeImages",
"ecr:DescribeRepositories",
"ecr:UploadLayerPart",
"ecr:ListImages",
"ecr:InitiateLayerUpload",
"ecr:BatchCheckLayerAvailability",
"ecr:PutImage"
],
"Resource": "arn:aws:ecr:*:*:repository/sagemaker-studio*"
},
{
"Effect": "Allow",
"Action": "ecr:GetAuthorizationToken",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:DeleteObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::sagemaker-*/*"
},
{
"Effect": "Allow",
"Action": [
"s3:CreateBucket"
],
"Resource": "arn:aws:s3:::sagemaker*"
},
{
"Effect": "Allow",
"Action": [
"iam:GetRole",
"iam:ListRoles"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "arn:aws:iam::*:role/*",
"Condition": {
"StringLikeIfExists": {
"iam:PassedToService": "codebuild.amazonaws.com"
}
}
}
]
}
### Restart Kernel
Once the libraries are installed, restart the kernel by clicking Kernel --> Restart, then run all the cells below.
```
# Let's inspect the role we have created for our notebook here:
import boto3
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
sess = sagemaker.Session()
region = boto3.session.Session().region_name
print("Region = {}".format(region))
sm = boto3.Session().client("sagemaker")
```
### Complete Setup: Import libraries and set global definitions.
All needed libraries will come pre-installed with this notebook with the Lifecycle configuration scripts.
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os
from time import sleep, gmtime, strftime
import json
import time
# Import SageMaker Experiments
from sagemaker.analytics import ExperimentAnalytics
from smexperiments.experiment import Experiment
from smexperiments.trial import Trial
from smexperiments.trial_component import TrialComponent
from smexperiments.tracker import Tracker
```
### Specify buckets for storing data
```
# Use our custom bucket here.
rawbucket = sess.default_bucket()
prefix = "sagemaker-modelmonitor" # use this prefix to store all files pertaining to this workshop.
dataprefix = prefix + "/data"
traindataprefix = prefix + "/train_data"
testdataprefix = prefix + "/test_data"
testdatanolabelprefix = prefix + "/test_data_no_label"
trainheaderprefix = prefix + "/train_headers"
```
### Step 3: Data Exploration
A key part of the data science lifecycle is data exploration, pre-processing and feature engineering. We will demonstrate how to use SageMaker notebooks for data exploration and SageMaker Processing for feature engineering and pre-processing data.
### Download and Import the data
We will use the UCI Machine Learning Archive dataset on payment default for this example [https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+client]. Here we have a number of common features such as payment histories from prior months, payments, bills etc to predict a binary outcome -- whether or not a user will default on their payment in the following month.
```
data = pd.read_excel("data.xls", header=1)
data = data.drop(columns=["ID"])
data.head()
data.rename(columns={"default payment next month": "Label"}, inplace=True)
lbl = data.Label
data = pd.concat([lbl, data.drop(columns=["Label"])], axis=1)
data.head()
COLS = data.columns
```
### Data Exploration
Once you have downloaded the dataset, the next step in the data science lifecycle is to explore the dataset. A correlation plot can indicate whether the features are correlated to one another and the label itself.
```
## Corr plot
f = plt.figure(figsize=(19, 15))
plt.matshow(data.corr(), fignum=f.number)
plt.xticks(range(data.shape[1]), data.columns, fontsize=14, rotation=45)
plt.yticks(range(data.shape[1]), data.columns, fontsize=14)
cb = plt.colorbar()
cb.ax.tick_params(labelsize=14)
plt.title("Correlation Matrix", fontsize=16);
from pandas.plotting import scatter_matrix
SCAT_COLUMNS = ["BILL_AMT1", "BILL_AMT2", "PAY_AMT1", "PAY_AMT2"]
scatter_matrix(data[SCAT_COLUMNS], figsize=(10, 10), diagonal="kde")
plt.show()
```
### Step 4: Secure Feature Processing pipeline using SageMaker Processing
While you can pre-process small amounts of data directly in a notebook SageMaker Processing offloads the heavy lifting of pre-processing larger datasets by provisioning the underlying infrastructure, downloading the data from an S3 location to the processing container, running the processing scripts, storing the processed data in an output directory in Amazon S3 and deleting the underlying transient resources needed to run the processing job. Once the processing job is complete, the infrastructure used to run the job is wiped, and any temporary data stored on it is deleted.
```
if not os.path.exists('rawdata/rawdata.csv'):
!mkdir rawdata
data.to_csv('rawdata/rawdata.csv', index=None)
else:
pass
# Upload the raw dataset
raw_data_location = sess.upload_data("rawdata", bucket=rawbucket, key_prefix=dataprefix)
print(raw_data_location)
## Use SageMaker Processing with Sk Learn. -- combine data into train and test at this stage if possible.
from sagemaker.sklearn.processing import SKLearnProcessor
sklearn_processor = SKLearnProcessor(
framework_version="0.20.0", role=role, instance_type="ml.c4.xlarge", instance_count=1
)
```
### Write a preprocessing script (same as above)
```
%%writefile preprocessing.py
import argparse
import os
import warnings
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.exceptions import DataConversionWarning
from sklearn.compose import make_column_transformer
warnings.filterwarnings(action="ignore", category=DataConversionWarning)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--train-test-split-ratio", type=float, default=0.3)
parser.add_argument("--random-split", type=int, default=0)
args, _ = parser.parse_known_args()
print("Received arguments {}".format(args))
input_data_path = os.path.join("/opt/ml/processing/input", "rawdata.csv")
print("Reading input data from {}".format(input_data_path))
df = pd.read_csv(input_data_path)
df.sample(frac=1)
COLS = df.columns
newcolorder = (
["PAY_AMT1", "BILL_AMT1"]
+ list(COLS[1:])[:11]
+ list(COLS[1:])[12:17]
+ list(COLS[1:])[18:]
)
split_ratio = args.train_test_split_ratio
random_state = args.random_split
X_train, X_test, y_train, y_test = train_test_split(
df.drop("Label", axis=1), df["Label"], test_size=split_ratio, random_state=random_state
)
preprocess = make_column_transformer(
(["PAY_AMT1"], StandardScaler()), (["BILL_AMT1"], MinMaxScaler()), remainder="passthrough"
)
print("Running preprocessing and feature engineering transformations")
train_features = pd.DataFrame(preprocess.fit_transform(X_train), columns=newcolorder)
test_features = pd.DataFrame(preprocess.transform(X_test), columns=newcolorder)
# concat to ensure Label column is the first column in dataframe
train_full = pd.concat(
[pd.DataFrame(y_train.values, columns=["Label"]), train_features], axis=1
)
test_full = pd.concat([pd.DataFrame(y_test.values, columns=["Label"]), test_features], axis=1)
print("Train data shape after preprocessing: {}".format(train_features.shape))
print("Test data shape after preprocessing: {}".format(test_features.shape))
train_features_headers_output_path = os.path.join(
"/opt/ml/processing/train_headers", "train_data_with_headers.csv"
)
train_features_output_path = os.path.join("/opt/ml/processing/train", "train_data.csv")
test_features_output_path = os.path.join("/opt/ml/processing/test", "test_data.csv")
print("Saving training features to {}".format(train_features_output_path))
train_full.to_csv(train_features_output_path, header=False, index=False)
print("Complete")
print("Save training data with headers to {}".format(train_features_headers_output_path))
train_full.to_csv(train_features_headers_output_path, index=False)
print("Saving test features to {}".format(test_features_output_path))
test_full.to_csv(test_features_output_path, header=False, index=False)
print("Complete")
# Copy the preprocessing code over to the s3 bucket
codeprefix = prefix + "/code"
codeupload = sess.upload_data("preprocessing.py", bucket=rawbucket, key_prefix=codeprefix)
print(codeupload)
train_data_location = rawbucket + "/" + traindataprefix
test_data_location = rawbucket + "/" + testdataprefix
print("Training data location = {}".format(train_data_location))
print("Test data location = {}".format(test_data_location))
```
Next we will execute the script above using the managed scikit-learn preprocessing container. This step may take a few minutes to execute.
```
from sagemaker.processing import ProcessingInput, ProcessingOutput
sklearn_processor.run(
code=codeupload,
inputs=[ProcessingInput(source=raw_data_location, destination="/opt/ml/processing/input")],
outputs=[
ProcessingOutput(
output_name="train_data",
source="/opt/ml/processing/train",
destination="s3://" + train_data_location,
),
ProcessingOutput(
output_name="test_data",
source="/opt/ml/processing/test",
destination="s3://" + test_data_location,
),
ProcessingOutput(
output_name="train_data_headers",
source="/opt/ml/processing/train_headers",
destination="s3://" + rawbucket + "/" + prefix + "/train_headers",
),
],
arguments=["--train-test-split-ratio", "0.2"],
)
preprocessing_job_description = sklearn_processor.jobs[-1].describe()
output_config = preprocessing_job_description["ProcessingOutputConfig"]
for output in output_config["Outputs"]:
if output["OutputName"] == "train_data":
preprocessed_training_data = output["S3Output"]["S3Uri"]
if output["OutputName"] == "test_data":
preprocessed_test_data = output["S3Output"]["S3Uri"]
```
# Part 2: Building the Container and Training the model
### Step 5: Set up SageMaker Experiments
In this notebook, we first build the Docker image by providing the Dockerfile discussed before and train a model using that Dockerfile
We use SageMaker Experiments so data scientists can track the lineage of the model from the raw data source through the preprocessing steps and the model training pipeline. With SageMaker Experiments, data scientists can compare, track and manage multiple different model training jobs, data processing jobs and hyperparameter tuning jobs, and retain a lineage from the source data to the training job artifacts, the model hyperparameters and any custom metrics that they may want to monitor as part of the model training.
```
# Create a SageMaker Experiment
cc_experiment = Experiment.create(
experiment_name=f"CreditCardDefault-{int(time.time())}",
description="Predict credit card default from payments data",
sagemaker_boto_client=sm,
)
print(cc_experiment)
```
In addition to training, we want to track the lineage of the entire machine learning pipeline, including the processing job above.
```
# Start Tracking parameters used in the Pre-processing pipeline.
with Tracker.create(display_name="Preprocessing", sagemaker_boto_client=sm) as tracker:
tracker.log_parameters({"train_test_split_ratio": 0.2, "random_state": 0})
# we can log the s3 uri to the dataset we just uploaded
tracker.log_input(name="ccdefault-raw-dataset", media_type="s3/uri", value=raw_data_location)
tracker.log_input(
name="ccdefault-train-dataset", media_type="s3/uri", value=train_data_location
)
tracker.log_input(name="ccdefault-test-dataset", media_type="s3/uri", value=test_data_location)
```
### Step 6: Build XgBoost container for training
The code for the XGB container is already supplied with this notebook. We simply need to build this container and push it to ECR. The single line of code below will do it.
```
!sm-docker build .
```
### Step 7: Train the Model
The same security postures we applied previously during SM Processing apply to training jobs. We will also have SageMaker experiments track the training job and store metadata such as model artifact location, training/validation data location, model hyperparameters etc.
As shown above, your image URI has the following form:
Image URI: {account-id}.dkr.ecr.{region}.amazonaws.com/sagemaker-studio-{studioID}:{username}
```
account = sess.boto_session.client("sts").get_caller_identity()["Account"]
ecr = boto3.client("ecr")
domain_id = "sagemaker-studio-{}".format(sm.list_apps()["Apps"][0]["DomainId"])
image_tag = ecr.list_images(repositoryName=domain_id, filter={"tagStatus": "TAGGED"})["imageIds"][
0
]["imageTag"]
image = "{}.dkr.ecr.{}.amazonaws.com/{}:{}".format(account, region, domain_id, image_tag)
preprocessing_trial_component = tracker.trial_component
trial_name = f"cc-fraud-training-job-{int(time.time())}"
cc_trial = Trial.create(
trial_name=trial_name, experiment_name=cc_experiment.experiment_name, sagemaker_boto_client=sm
)
cc_trial.add_trial_component(preprocessing_trial_component)
cc_training_job_name = "cc-training-job-{}".format(int(time.time()))
xgb = sagemaker.estimator.Estimator(
image,
role,
instance_count=1,
instance_type="ml.m4.xlarge",
max_run=86400,
output_path="s3://{}/{}/models".format(rawbucket, prefix),
sagemaker_session=sess,
) # set to true for distributed training
xgb.set_hyperparameters(
max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
verbosity=0,
objective="binary:logistic",
num_round=100,
)
xgb.fit(
inputs={"training": "s3://" + train_data_location},
job_name=cc_training_job_name,
experiment_config={
"TrialName": cc_trial.trial_name, # log training job in Trials for lineage
"TrialComponentDisplayName": "Training",
},
wait=True,
)
time.sleep(2)
```
Having used SageMaker Experiments to track the training runs, we can now extract model metadata to get the entire lineage of the model from the source data to the model artifacts and the hyperparameters.
To do this, simply call the **describe_trial_component** API.
```
# Present the Model Lineage as a dataframe
from sagemaker.session import Session
session = boto3.Session()
lineage_table = ExperimentAnalytics(
sagemaker_session=Session(session, sm),
search_expression={
"Filters": [{"Name": "Parents.TrialName", "Operator": "Equals", "Value": trial_name}]
},
sort_by="CreationTime",
sort_order="Ascending",
)
lineagedf = lineage_table.dataframe()
lineagedf
# get detailed information about a particular trial
sm.describe_trial_component(TrialComponentName=lineagedf.TrialComponentName[1])
```
# Part 3: Using the trained model for inference
### Step 8: Inference using Batch Transform
Let's first use Batch Transform to generate inferences for the test dataset you pre-processed before.
```
s3 = boto3.client("s3")
s3.download_file(rawbucket, testdataprefix + "/test_data.csv", "test_data.csv")
newcolorder = (
["PAY_AMT1", "BILL_AMT1"] + list(COLS[1:])[:11] + list(COLS[1:])[12:17] + list(COLS[1:])[18:]
)
test_full = pd.read_csv("test_data.csv", names=["Label"] + newcolorder)
test_full.head()
test_data_no_label = test_full.drop(columns=["Label"], axis=1)
label = test_full["Label"]
test_data_no_label.to_csv("test_data_no_label.csv", index=False, header=False)
test_data_no_label.shape
sess = sagemaker.Session()
test_data_nohead_location = sess.upload_data(
"test_data_no_label.csv", bucket=rawbucket, key_prefix=testdatanolabelprefix
)
%%time
sm_transformer = xgb.transformer(1, "ml.m5.xlarge", accept="text/csv")
# start a transform job
sm_transformer.transform(test_data_nohead_location, split_type="Line", content_type="text/csv")
sm_transformer.wait()
import json
import io
from urllib.parse import urlparse
def get_csv_output_from_s3(s3uri, file_name):
parsed_url = urlparse(s3uri)
bucket_name = parsed_url.netloc
prefix = parsed_url.path[1:]
s3 = boto3.resource("s3")
obj = s3.Object(bucket_name, "{}/{}".format(prefix, file_name))
return obj.get()["Body"].read().decode("utf-8")
output = get_csv_output_from_s3(sm_transformer.output_path, "test_data_no_label.csv.out")
output_df = pd.read_csv(io.StringIO(output), sep=",", header=None)
output_df.head(8)
from sklearn.metrics import confusion_matrix, accuracy_score
1 - np.unique(data["Label"], return_counts=True)[1][1] / (len(data["Label"]))
print(
"Baseline Accuracy = {}".format(
1 - np.unique(data["Label"], return_counts=True)[1][1] / (len(data["Label"]))
)
)
print("Accuracy Score = {}".format(accuracy_score(label, output_df)))
output_df["Predicted"] = output_df.values
output_df["Label"] = label
confusion_matrix = pd.crosstab(
output_df["Predicted"],
output_df["Label"],
rownames=["Actual"],
colnames=["Predicted"],
margins=True,
)
confusion_matrix
```
### Step 9: Conclusions
In this notebook we demonstrated an end to end cycle of data exploration, data processing using SageMaker processing, model development using an XGBoost Bring Your Own Container which we pushed to ECR, model training and offline inference using Batch Transform. Finally we logged our training metadata using SageMaker Experiments.
You can use this notebook to experiment with end to end data science experimentation using SageMaker Studio.
Remember to delete your datasets in the Amazon S3 bucket you used for this notebook.
```
import numpy as np
import pandas as pd
import seaborn as sns
from scipy import stats
import matplotlib.pyplot as plt
from ipywidgets import *
import warnings
warnings.simplefilter(action='ignore', category=Warning)
%matplotlib inline
from google.colab import drive
df = pd.read_csv("DEMOFINAL - Sheet1.csv")
df = df.rename(columns={'rtt_qos': 'persegment_RTT', 'tp_qos': 'Throughput', 'p_qos': 'Packets'})
df.columns
def interactive_contol(Mobility, Column, Total_users, User_no , Algorithm, Target):
t= Mobility
c= Column
u= User_no
a=Algorithm
tr= Target
tu= Total_users
if a=='Rate Based':
case1= df[(df['algorithm_used']=='conventional')]
case2= df[(df['algorithm_used']=='exponential')]
a1='Conventional'
a2='Exponential'
elif a=='Buffer Based':
case1= df[(df['algorithm_used']=='bba')]
case2= df[(df['algorithm_used']=='logistic')]
a1='BBA'
a2='Logistic'
else:
case1= df[(df['algorithm_used']=='arbiter')]
case2= df[(df['algorithm_used']=='elastic')]
a1='Arbiter +'
a2='Elastic'
case1_final = case1[( case1['column']==c) & ( case1['type']==t) & ( case1['user_no']==u) & ( case1['total_users']==tu)]
case2_final = case2[( case2['column']==c) & ( case2['type']==t) & ( case2['user_no']==u) & ( case2['total_users']==tu)]
if c==8 and t=='driving':
title = '0.5 - 3 Mbps';
elif c==10 and t=='driving':
title = '6 - 14 Mbps';
elif c==1 and t=='driving':
title = '38.26 - 10.33 Mbps';
elif c==2 and t=='driving':
title = '29.33 - 10.55 Mbps';
elif c==4 and t=='static':
title = '72.42 - 9 Mbps';
elif c==5 and t=='static':
title = '70 - 20 Mbps';
elif c==7 and t=='static':
title = '4 - 7.6 Mbps';
elif c==9 and t=='static':
title = '0.5 - 6 Mbps';
elif c==11 and t=='static':
title = '8 - 57 Mbps';
else:
title='Unknown Case'
plt.style.use('classic')
fig = plt.figure(figsize=(10,5))
with plt.style.context('Solarize_Light2'):
fig.set_facecolor('white')
plt.rcParams['axes.facecolor'] = 'white'
plt.plot(case1_final['intSeg'], case1_final[tr], label=a1)
plt.plot(case2_final['intSeg'], case2_final[tr], label=a2, linestyle='--', color='orange')
plt.title(title, fontsize=12)
plt.xlabel('Segments (2 sec)', fontsize=12, color='black')
plt.ylabel(tr, fontsize=12, color='black')
plt.legend(loc='best',frameon=False)
plt.grid(axis='y', c='#D3D3D3')
plt.grid(axis='x', c='#D3D3D3')
plt.tick_params(axis='x', colors='black')
plt.tick_params(axis='y', colors='black')
plt.show()
interact(interactive_contol, Mobility=['driving','static'], Column=[1,2,4,5,7,8,9,10,11],Total_users=[2,3], User_no=[1,2,3], Algorithm=['Rate Based','Hybrid', 'Buffer Based' ], Target=['Clae', 'Duanmu',
'Yin', 'Yu','P1203', 'persegment_RTT', 'Throughput', 'Packets','intArr','intDel', 'intSta', 'intDelRate',
'intActRate', 'intByteSize', 'floatBuf'])
```
# Operations on word vectors
Welcome to your first assignment of this week!
Because word embeddings are very computationally expensive to train, most ML practitioners will load a pre-trained set of embeddings.
**After this assignment you will be able to:**
- Load pre-trained word vectors, and measure similarity using cosine similarity
- Use word embeddings to solve word analogy problems such as Man is to Woman as King is to ______.
- Modify word embeddings to reduce their gender bias
Let's get started! Run the following cell to load the packages you will need.
```
import numpy as np
from w2v_utils import *
```
Next, let's load the word vectors. For this assignment, we will use 50-dimensional GloVe vectors to represent words. Run the following cell to load the `word_to_vec_map`.
```
words, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')
```
You've loaded:
- `words`: set of words in the vocabulary.
- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation.
You've seen that one-hot vectors do not do a good job capturing which words are similar. GloVe vectors provide much more useful information about the meaning of individual words. Let's now see how you can use GloVe vectors to decide how similar two words are.
# 1 - Cosine similarity
To measure how similar two words are, we need a way to measure the degree of similarity between two embedding vectors for the two words. Given two vectors $u$ and $v$, cosine similarity is defined as follows:
$$\text{CosineSimilarity(u, v)} = \frac {u . v} {||u||_2 ||v||_2} = cos(\theta) \tag{1}$$
where $u.v$ is the dot product (or inner product) of two vectors, $||u||_2$ is the norm (or length) of the vector $u$, and $\theta$ is the angle between $u$ and $v$. This similarity depends on the angle between $u$ and $v$. If $u$ and $v$ are very similar, their cosine similarity will be close to 1; if they are dissimilar, the cosine similarity will take a smaller value.
<img src="images/cosine_sim.png" style="width:800px;height:250px;">
<caption><center> **Figure 1**: The cosine of the angle between two vectors is a measure of how similar they are</center></caption>
**Exercise**: Implement the function `cosine_similarity()` to evaluate similarity between word vectors.
**Reminder**: The norm of $u$ is defined as $ ||u||_2 = \sqrt{\sum_{i=1}^{n} u_i^2}$
```
# GRADED FUNCTION: cosine_similarity
def cosine_similarity(u, v):
"""
Cosine similarity reflects the degree of similarity between u and v
Arguments:
u -- a word vector of shape (n,)
v -- a word vector of shape (n,)
Returns:
cosine_similarity -- the cosine similarity between u and v defined by the formula above.
"""
distance = 0.0
### START CODE HERE ###
# Compute the dot product between u and v (≈1 line)
dot = np.dot(u,v)
# Compute the L2 norm of u (≈1 line)
norm_u = np.linalg.norm(u)
# Compute the L2 norm of v (≈1 line)
norm_v = np.linalg.norm(v)
# Compute the cosine similarity defined by formula (1) (≈1 line)
cosine_similarity = dot / (norm_u * norm_v)
### END CODE HERE ###
return cosine_similarity
father = word_to_vec_map["father"]
mother = word_to_vec_map["mother"]
ball = word_to_vec_map["ball"]
crocodile = word_to_vec_map["crocodile"]
france = word_to_vec_map["france"]
italy = word_to_vec_map["italy"]
paris = word_to_vec_map["paris"]
rome = word_to_vec_map["rome"]
print("cosine_similarity(father, mother) = ", cosine_similarity(father, mother))
print("cosine_similarity(ball, crocodile) = ",cosine_similarity(ball, crocodile))
print("cosine_similarity(france - paris, rome - italy) = ",cosine_similarity(france - paris, rome - italy))
```
**Expected Output**:
<table>
<tr>
<td>
**cosine_similarity(father, mother)** =
</td>
<td>
0.890903844289
</td>
</tr>
<tr>
<td>
**cosine_similarity(ball, crocodile)** =
</td>
<td>
0.274392462614
</td>
</tr>
<tr>
<td>
**cosine_similarity(france - paris, rome - italy)** =
</td>
<td>
-0.675147930817
</td>
</tr>
</table>
After you get the correct expected output, please feel free to modify the inputs and measure the cosine similarity between other pairs of words! Playing around with the cosine similarity of other inputs will give you a better sense of how word vectors behave.
## 2 - Word analogy task
In the word analogy task, we complete the sentence <font color='brown'>"*a* is to *b* as *c* is to **____**"</font>. An example is <font color='brown'> '*man* is to *woman* as *king* is to *queen*' </font>. In detail, we are trying to find a word *d*, such that the associated word vectors $e_a, e_b, e_c, e_d$ are related in the following manner: $e_b - e_a \approx e_d - e_c$. We will measure the similarity between $e_b - e_a$ and $e_d - e_c$ using cosine similarity.
**Exercise**: Complete the code below to be able to perform word analogies!
```
# GRADED FUNCTION: complete_analogy
def complete_analogy(word_a, word_b, word_c, word_to_vec_map):
"""
Performs the word analogy task as explained above: a is to b as c is to ____.
Arguments:
word_a -- a word, string
word_b -- a word, string
word_c -- a word, string
word_to_vec_map -- dictionary that maps words to their corresponding vectors.
Returns:
best_word -- the word such that v_b - v_a is close to v_best_word - v_c, as measured by cosine similarity
"""
# convert words to lower case
word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()
### START CODE HERE ###
# Get the word embeddings v_a, v_b and v_c (≈1-3 lines)
e_a, e_b, e_c = word_to_vec_map[word_a], word_to_vec_map[word_b], word_to_vec_map[word_c]
### END CODE HERE ###
words = word_to_vec_map.keys()
max_cosine_sim = -100 # Initialize max_cosine_sim to a large negative number
best_word = None # Initialize best_word with None, it will help keep track of the word to output
# loop over the whole word vector set
for w in words:
# to avoid best_word being one of the input words, pass on them.
if w in [word_a, word_b, word_c] :
continue
### START CODE HERE ###
# Compute cosine similarity between the vector (e_b - e_a) and the vector ((w's vector representation) - e_c) (≈1 line)
cosine_sim = cosine_similarity(e_b-e_a, word_to_vec_map[w]-e_c)
# If the cosine_sim is more than the max_cosine_sim seen so far,
# then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines)
if cosine_sim > max_cosine_sim:
max_cosine_sim = cosine_sim
best_word = w
### END CODE HERE ###
return best_word
```
Run the cell below to test your code; this may take 1-2 minutes.
```
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large')]
for triad in triads_to_try:
print ('{} -> {} :: {} -> {}'.format( *triad, complete_analogy(*triad,word_to_vec_map)))
```
**Expected Output**:
<table>
<tr>
<td>
**italy -> italian** ::
</td>
<td>
spain -> spanish
</td>
</tr>
<tr>
<td>
**india -> delhi** ::
</td>
<td>
japan -> tokyo
</td>
</tr>
<tr>
<td>
**man -> woman ** ::
</td>
<td>
boy -> girl
</td>
</tr>
<tr>
<td>
**small -> smaller ** ::
</td>
<td>
large -> larger
</td>
</tr>
</table>
Once you get the correct expected output, please feel free to modify the input cells above to test your own analogies. Try to find some other analogy pairs that do work, but also find some where the algorithm doesn't give the right answer: For example, you can try small->smaller as big->?.
### Congratulations!
You've come to the end of this assignment. Here are the main points you should remember:
- Cosine similarity is a good way to compare the similarity between pairs of word vectors. (Though L2 distance works too.)
- For NLP applications, using a pre-trained set of word vectors from the internet is often a good way to get started.
Even though you have finished the graded portions, we recommend you also take a look at the rest of this notebook.
Congratulations on finishing the graded portions of this notebook!
## 3 - Debiasing word vectors (OPTIONAL/UNGRADED)
In the following exercise, you will examine gender biases that can be reflected in a word embedding, and explore algorithms for reducing the bias. In addition to learning about the topic of debiasing, this exercise will also help hone your intuition about what word vectors are doing. This section involves a bit of linear algebra, though you can probably complete it even without being an expert in linear algebra, and we encourage you to give it a shot. This portion of the notebook is optional and is not graded.
Let's first see how the GloVe word embeddings relate to gender. You will first compute a vector $g = e_{woman}-e_{man}$, where $e_{woman}$ represents the word vector for *woman* and $e_{man}$ the word vector for *man*. The resulting vector $g$ roughly encodes the concept of "gender". (You might get a more accurate representation if you compute $g_1 = e_{mother}-e_{father}$, $g_2 = e_{girl}-e_{boy}$, etc. and average over them. But just using $e_{woman}-e_{man}$ will give good enough results for now.)
```
g = word_to_vec_map['woman'] - word_to_vec_map['man']
print(g)
```
Now, you will consider the cosine similarity of different words with $g$. Consider what a positive value of similarity means vs a negative cosine similarity.
```
print ('List of names and their similarities with constructed vector:')
# girls and boys name
name_list = ['john', 'marie', 'sophie', 'ronaldo', 'priya', 'rahul', 'danielle', 'reza', 'katy', 'yasmin']
for w in name_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
```
As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not surprising, and the result seems acceptable.
But let's try with some other words.
```
print('Other words and their similarities:')
word_list = ['lipstick', 'guns', 'science', 'arts', 'literature', 'warrior','doctor', 'tree', 'receptionist',
'technology', 'fashion', 'teacher', 'engineer', 'pilot', 'computer', 'singer']
for w in word_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
```
Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, "computer" is closer to "man" while "literature" is closer to "woman". Ouch!
We'll see below how to reduce the bias of these vectors, using an algorithm due to [Bolukbasi et al., 2016](https://arxiv.org/abs/1607.06520). Note that some word pairs such as "actor"/"actress" or "grandmother"/"grandfather" should remain gender specific, while other words such as "receptionist" or "technology" should be neutralized, i.e. not be gender-related. You will have to treat these two types of words differently when debiasing.
### 3.1 - Neutralize bias for non-gender specific words
The figure below should help you visualize what neutralizing does. If you're using a 50-dimensional word embedding, the 50 dimensional space can be split into two parts: The bias-direction $g$, and the remaining 49 dimensions, which we'll call $g_{\perp}$. In linear algebra, we say that the 49 dimensional $g_{\perp}$ is perpendicular (or "orthogonal") to $g$, meaning it is at 90 degrees to $g$. The neutralization step takes a vector such as $e_{receptionist}$ and zeros out the component in the direction of $g$, giving us $e_{receptionist}^{debiased}$.
Even though $g_{\perp}$ is 49 dimensional, given the limitations of what we can draw on a screen, we illustrate it using a 1 dimensional axis below.
<img src="images/neutral.png" style="width:800px;height:300px;">
<caption><center> **Figure 2**: The word vector for "receptionist" represented before and after applying the neutralize operation. </center></caption>
**Exercise**: Implement `neutralize()` to remove the bias of words such as "receptionist" or "scientist". Given an input embedding $e$, you can use the following formulas to compute $e^{debiased}$:
$$e^{bias\_component} = \frac{e \cdot g}{||g||_2^2} * g\tag{2}$$
$$e^{debiased} = e - e^{bias\_component}\tag{3}$$
If you are an expert in linear algebra, you may recognize $e^{bias\_component}$ as the projection of $e$ onto the direction $g$. If you're not an expert in linear algebra, don't worry about this.
<!--
**Reminder**: a vector $u$ can be split into two parts: its projection over a vector-axis $v_B$ and its projection over the axis orthogonal to $v$:
$$u = u_B + u_{\perp}$$
where : $u_B = $ and $ u_{\perp} = u - u_B $
-->
```
def neutralize(word, g, word_to_vec_map):
"""
Removes the bias of "word" by projecting it on the space orthogonal to the bias axis.
This function ensures that gender neutral words are zero in the gender subspace.
Arguments:
word -- string indicating the word to debias
g -- numpy-array of shape (50,), corresponding to the bias axis (such as gender)
word_to_vec_map -- dictionary mapping words to their corresponding vectors.
Returns:
e_debiased -- neutralized word vector representation of the input "word"
"""
### START CODE HERE ###
# Select word vector representation of "word". Use word_to_vec_map. (≈ 1 line)
e = word_to_vec_map[word]
# Compute e_biascomponent using the formula given above. (≈ 1 line)
e_biascomponent = np.multiply((np.dot(e, g) / np.linalg.norm(g)**2), g)
# Neutralize e by subtracting e_biascomponent from it
# e_debiased should be equal to its orthogonal projection. (≈ 1 line)
e_debiased = e - e_biascomponent
### END CODE HERE ###
return e_debiased
e = "receptionist"
print("cosine similarity between " + e + " and g, before neutralizing: ", cosine_similarity(word_to_vec_map["receptionist"], g))
e_debiased = neutralize("receptionist", g, word_to_vec_map)
print("cosine similarity between " + e + " and g, after neutralizing: ", cosine_similarity(e_debiased, g))
```
**Expected Output**: The second result is essentially 0, up to numerical round-off (on the order of $10^{-17}$).
<table>
<tr>
<td>
**cosine similarity between receptionist and g, before neutralizing:** :
</td>
<td>
0.330779417506
</td>
</tr>
<tr>
<td>
**cosine similarity between receptionist and g, after neutralizing:** :
</td>
<td>
-3.26732746085e-17
</tr>
</table>
### 3.2 - Equalization algorithm for gender-specific words
Next, let's see how debiasing can also be applied to word pairs such as "actress" and "actor." Equalization is applied to pairs of words that you might want to have differ only through the gender property. As a concrete example, suppose that "actress" is closer to "babysit" than "actor." By applying neutralizing to "babysit" we can reduce the gender-stereotype associated with babysitting. But this still does not guarantee that "actor" and "actress" are equidistant from "babysit." The equalization algorithm takes care of this.
The key idea behind equalization is to make sure that a particular pair of words are equidistant from the 49-dimensional $g_\perp$. The equalization step also ensures that the two equalized vectors are now the same distance from $e_{receptionist}^{debiased}$, or from any other word that has been neutralized. In pictures, this is how equalization works:
<img src="images/equalize10.png" style="width:800px;height:400px;">
The derivation of the linear algebra to do this is a bit more complex. (See Bolukbasi et al., 2016 for details.) But the key equations are:
$$ \mu = \frac{e_{w1} + e_{w2}}{2}\tag{4}$$
$$ \mu_{B} = \frac {\mu \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{5}$$
$$\mu_{\perp} = \mu - \mu_{B} \tag{6}$$
$$ e_{w1B} = \frac {e_{w1} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{7}$$
$$ e_{w2B} = \frac {e_{w2} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{8}$$
$$e_{w1B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w1B}} - \mu_B} {||(e_{w1} - \mu_{\perp}) - \mu_B||_2} \tag{9}$$
$$e_{w2B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w2B}} - \mu_B} {||(e_{w2} - \mu_{\perp}) - \mu_B||_2} \tag{10}$$
$$e_1 = e_{w1B}^{corrected} + \mu_{\perp} \tag{11}$$
$$e_2 = e_{w2B}^{corrected} + \mu_{\perp} \tag{12}$$
**Exercise**: Implement the function below. Use the equations above to get the final equalized version of the pair of words. Good luck!
```
def equalize(pair, bias_axis, word_to_vec_map):
"""
Debias gender specific words by following the equalize method described in the figure above.
Arguments:
pair -- pair of strings of gender specific words to debias, e.g. ("actress", "actor")
bias_axis -- numpy-array of shape (50,), vector corresponding to the bias axis, e.g. gender
word_to_vec_map -- dictionary mapping words to their corresponding vectors
Returns
e_1 -- word vector corresponding to the first word
e_2 -- word vector corresponding to the second word
"""
### START CODE HERE ###
# Step 1: Select word vector representation of "word". Use word_to_vec_map. (≈ 2 lines)
w1, w2 = pair
e_w1, e_w2 = word_to_vec_map[w1], word_to_vec_map[w2]
# Step 2: Compute the mean of e_w1 and e_w2 (≈ 1 line)
mu = (e_w1 + e_w2) / 2
# Step 3: Compute the projections of mu over the bias axis and the orthogonal axis (≈ 2 lines)
mu_B = np.dot(mu, bias_axis) / np.linalg.norm(bias_axis)**2 * bias_axis
mu_orth = mu - mu_B
# Step 4: Use equations (7) and (8) to compute e_w1B and e_w2B (≈2 lines)
e_w1B = np.dot(e_w1, bias_axis) / np.linalg.norm(bias_axis)**2 * bias_axis
e_w2B = np.dot(e_w2, bias_axis) / np.linalg.norm(bias_axis)**2 * bias_axis
# Step 5: Adjust the Bias part of e_w1B and e_w2B using the formulas (9) and (10) given above (≈2 lines)
corrected_e_w1B = np.sqrt(np.abs(1 - np.linalg.norm(mu_orth)**2)) * (e_w1B - mu_B) / np.linalg.norm((e_w1 - mu_orth) - mu_B)
corrected_e_w2B = np.sqrt(np.abs(1 - np.linalg.norm(mu_orth)**2)) * (e_w2B - mu_B) / np.linalg.norm((e_w2 - mu_orth) - mu_B)
# Step 6: Debias by equalizing e1 and e2 to the sum of their corrected projections (≈2 lines)
e1 = corrected_e_w1B + mu_orth
e2 = corrected_e_w2B + mu_orth
### END CODE HERE ###
return e1, e2
print("cosine similarities before equalizing:")
print("cosine_similarity(word_to_vec_map[\"man\"], gender) = ", cosine_similarity(word_to_vec_map["man"], g))
print("cosine_similarity(word_to_vec_map[\"woman\"], gender) = ", cosine_similarity(word_to_vec_map["woman"], g))
print()
e1, e2 = equalize(("man", "woman"), g, word_to_vec_map)
print("cosine similarities after equalizing:")
print("cosine_similarity(e1, gender) = ", cosine_similarity(e1, g))
print("cosine_similarity(e2, gender) = ", cosine_similarity(e2, g))
```
**Expected Output**:
cosine similarities before equalizing:
<table>
<tr>
<td>
**cosine_similarity(word_to_vec_map["man"], gender)** =
</td>
<td>
-0.117110957653
</td>
</tr>
<tr>
<td>
**cosine_similarity(word_to_vec_map["woman"], gender)** =
</td>
<td>
0.356666188463
</td>
</tr>
</table>
cosine similarities after equalizing:
<table>
<tr>
<td>
**cosine_similarity(e1, gender)** =
</td>
<td>
-0.700436428931
</td>
</tr>
<tr>
<td>
**cosine_similarity(e2, gender)** =
</td>
<td>
0.700436428931
</td>
</tr>
</table>
Please feel free to play with the input words in the cell above, to apply equalization to other pairs of words.
These debiasing algorithms are very helpful for reducing bias, but are not perfect and do not eliminate all traces of bias. For example, one weakness of this implementation was that the bias direction $g$ was defined using only the pair of words _woman_ and _man_. As discussed earlier, if $g$ were defined by computing $g_1 = e_{woman} - e_{man}$; $g_2 = e_{mother} - e_{father}$; $g_3 = e_{girl} - e_{boy}$; and so on and averaging over them, you would obtain a better estimate of the "gender" dimension in the 50 dimensional word embedding space. Feel free to play with such variants as well.
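As a sketch of that averaging variant, the snippet below estimates the bias direction from several pairs instead of one. The 3-dimensional toy vectors are made up purely for illustration; in this notebook you would look the same words up in `word_to_vec_map` instead.

```python
import numpy as np

# Sketch of the averaging variant: estimate the gender direction from
# several word pairs rather than just (woman, man). The toy 3-d vectors
# below are invented for illustration only.
toy_map = {
    "woman":  np.array([0.9, 0.1, 0.3]),
    "man":    np.array([0.1, 0.2, 0.3]),
    "mother": np.array([0.8, 0.0, 0.5]),
    "father": np.array([0.2, 0.1, 0.4]),
    "girl":   np.array([0.7, 0.3, 0.2]),
    "boy":    np.array([0.0, 0.4, 0.2]),
}
pairs = [("woman", "man"), ("mother", "father"), ("girl", "boy")]

# Average the difference vectors g_i = e_female - e_male over all pairs
g_avg = np.mean([toy_map[a] - toy_map[b] for a, b in pairs], axis=0)
print(g_avg)
```

With real GloVe vectors, replacing `g` by such an average typically gives a cleaner gender direction for both neutralization and equalization.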
### Congratulations
You have come to the end of this notebook, and have seen a lot of the ways that word vectors can be used as well as modified.
Congratulations on finishing this notebook!
**References**:
- The debiasing algorithm is from Bolukbasi et al., 2016, [Man is to Computer Programmer as Woman is to
Homemaker? Debiasing Word Embeddings](https://papers.nips.cc/paper/6228-man-is-to-computer-programmer-as-woman-is-to-homemaker-debiasing-word-embeddings.pdf)
- The GloVe word embeddings were due to Jeffrey Pennington, Richard Socher, and Christopher D. Manning. (https://nlp.stanford.edu/projects/glove/)
# 16 - Regression Discontinuity Design
We don't stop to think about it much, but it is impressive how smooth nature is. You can't grow a tree without first getting a bud, you can't teleport from one place to another, a wound takes its time to heal. Even in the social realm, smoothness seems to be the norm. You can't grow a business in one day, consistency and hard work are required to build wealth and it takes years before you learn how linear regression works. Under normal circumstances, nature is very cohesive and doesn't jump around much.
> When the intelligent and animal souls are held together in one embrace, they can be kept from separating.
\- Tao Te Ching, Lao Tzu.
Which means that **when we do see jumps and spikes, they are probably artificial** and often man-made situations. These events are usually accompanied by counterfactuals to the normal way of things: if a weird thing happens, this gives us some insight into what would have happened if nature was to work in a different way. Exploring these artificial jumps is at the core of Regression Discontinuity Design.

The basic setup goes like this. Imagine that you have a treatment variable $T$ and potential outcomes $Y_0$ and $Y_1$. The treatment T is a discontinuous function of an observed running variable $R$ such that
$
T_i = \mathcal{1}\{R_i>c\}
$
In other words, this is saying that the treatment is zero when $R$ is below a threshold $c$ and one otherwise. This means that we get to observe $Y_1$ when $R>c$ and $Y_0$ when $R<c$. To wrap our heads around this, think about the potential outcomes as 2 functions that we can't observe entirely. Both $Y_0(R)$ and $Y_1(R)$ are there; we just can't see them fully. The threshold acts as a switch that allows us to see one or the other of those functions, but never both, much like in the image below:

The idea of regression discontinuity is to compare the outcome just above and just below the threshold to identify the treatment effect at the threshold. This is called a **sharp RD** design, since the probability of getting the treatment jumps from 0 to 1 at the threshold, but we could also think about a **fuzzy RD** design, where the probability also jumps, but in a less dramatic manner.
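To make the sharp assignment rule concrete, here is a minimal sketch of the indicator above; the running-variable values and cutoff are made up for illustration.

```python
import numpy as np

# Sharp RD assignment: treatment is just an indicator of the running
# variable R crossing the cutoff c (values here are hypothetical).
c = 21.0                                # cutoff, e.g. legal drinking age
R = np.array([19.5, 20.9, 21.1, 23.0])  # running variable, e.g. age
T = (R > c).astype(int)
print(T)  # [0 0 1 1]
```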
## Is Alcohol Killing You?
A very relevant public policy question is what the minimum drinking age should be. Most countries, Brazil included, set it at 18 years, but in the US (most states) it is currently 21. So, is it the case that the US is being overly prudent and should lower its minimum drinking age? Or is it the case that other countries should set their legal drinking age higher?
One way to look at this question is from a [mortality rate perspective (Carpenter and Dobkin, 2009)](https://www.aeaweb.org/articles?id=10.1257/app.1.1.164). From the public policy standpoint, one could argue that we should lower the mortality rate as much as possible. If alcohol consumption increases the mortality rate by a lot, we should avoid lowering the minimum drinking age. This would be consistent with the objective of lowering deaths caused by alcohol consumption.
To estimate the impacts of alcohol on death, we could use the fact that legal drinking age imposes a discontinuity on nature. In the US, those just under 21 years don't drink (or drink much less) while those just older than 21 do drink. This means that the probability of drinking jumps at 21 years and that is something we can explore with an RDD.
```
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
from matplotlib import style
from matplotlib import pyplot as plt
import seaborn as sns
import statsmodels.formula.api as smf
%matplotlib inline
style.use("fivethirtyeight")
```
To do so we can grab some mortality data aggregated by age. Each row is the average age of a group of people and the average mortality by all causes (`all`), by moving vehicle accident (`mva`) and by suicide (`suicide`).
```
drinking = pd.read_csv("./data/drinking.csv")
drinking.head()[["agecell", "all", "mva", "suicide"]]
```
Just to aid visibility (and for another important reason we will see later) we will centralize the running variable `agecell` at the threshold 21.
```
drinking["agecell"] -= 21
```
If we plot the multiple outcome variables (`all`, `mva`, `suicide`) with the running variable on the x axis, we get some visual cue about some sort of jump in mortality as we cross the legal drinking age.
```
plt.figure(figsize=(8,8))
ax = plt.subplot(3,1,1)
drinking.plot.scatter(x="agecell", y="all", ax=ax)
plt.title("Death Cause by Age (Centered at 0)")
ax = plt.subplot(3,1,2, sharex=ax)
drinking.plot.scatter(x="agecell", y="mva", ax=ax)
ax = plt.subplot(3,1,3, sharex=ax)
drinking.plot.scatter(x="agecell", y="suicide", ax=ax);
```
There are some cues, but we need more than that. What exactly is the effect of drinking on mortality at the threshold? And what is the standard error on that estimate?
## RDD Estimation
The key assumption that RDD relies on is the smoothness of the potential outcome at the threshold. Formally, the limits of the potential outcomes as the running variable approaches the threshold from the right and from the left should be the same.
$$
\lim_{r \to c^-} E[Y_{ti}|R_i=r] = \lim_{r \to c^+} E[Y_{ti}|R_i=r]
$$
If this holds true, we can find the causal effect at the threshold
$$
\begin{align}
\lim_{r \to c^+} E[Y_{ti}|R_i=r] - \lim_{r \to c^-} E[Y_{ti}|R_i=r]=&\lim_{r \to c^+} E[Y_{1i}|R_i=r] - \lim_{r \to c^-} E[Y_{0i}|R_i=r] \\
=& E[Y_{1i}|R_i=r] - E[Y_{0i}|R_i=r] \\
=& E[Y_{1i} - Y_{0i}|R_i=r]
\end{align}
$$
This is, in its own way, a sort of Local Average Treatment Effect (LATE), since we can only know it at the threshold. In this setting, we can think of RDD as a local randomized trial. For those at the threshold, the treatment could have gone either way and, by chance, some people fell below the threshold and some people fell above. In our example, at the same point in time, some people are just above 21 years and some people are just below 21. What determines this is whether someone was born a few days earlier or later, which is pretty random. For this reason, RDD provides a very compelling causal story. It is not the gold standard of an RCT, but it is close.
Now, to estimate the treatment effect at the threshold, all we need to do is estimate both of the limits in the formula above and compare them. The simplest way to do that is by running a linear regression

To make it work, we interact a dummy for being above the threshold with the running variable
$
y_i = \beta_0 + \beta_1 r_i + \beta_2 \mathcal{1}\{r_i>c\} + \beta_3 \mathcal{1}\{r_i>c\} r_i
$
Essentially, this is the same as fitting a linear regression above the threshold and another below it. The parameter $\beta_0$ is the intercept of the regression below the threshold and $\beta_0+\beta_2$ is the intercept for the regression above the threshold.
Here is where the trick of centering the running variable at the threshold comes into play. After this pre-processing step, the threshold becomes zero. This causes the intercept $\beta_0$ to be the predicted value at the threshold, for the regression below it. In other words, $\beta_0=\lim_{r \to c^-} E[Y_{ti}|R_i=r]$. By the same reasoning, $\beta_0+\beta_2$ is the limit of the outcome from above. Which means that
$
\lim_{r \to c^+} E[Y_{ti}|R_i=r] - \lim_{r \to c^-} E[Y_{ti}|R_i=r]=\beta_2=E[ATE|R=c]
$
Here is what this looks like in code for the case where we want to estimate the effect of alcohol consumption on death by all causes at 21 years.
```
rdd_df = drinking.assign(threshold=(drinking["agecell"] > 0).astype(int))
model = smf.wls("all~agecell*threshold", rdd_df).fit()
model.summary().tables[1]
```
This model is telling us that mortality increases by 7.6627 points with the consumption of alcohol. Another way of putting this is that alcohol increases the chance of death by all causes by 8% ((7.6627+93.6184)/93.6184). Notice that this also gives us standard errors for our causal effect estimate. In this case, the effect is statistically significant, since the p-value is below 0.01.
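As a quick sanity check, the 8% figure quoted above can be reproduced directly from the two reported coefficients (numbers copied from the regression table):

```python
# beta_0 (intercept): mortality just below 21; beta_2 (threshold): the jump.
intercept = 93.6184
jump = 7.6627

# Relative increase in mortality at the threshold
pct_increase = 100 * ((intercept + jump) / intercept - 1)
print(round(pct_increase, 2))  # ≈ 8.19
```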
If we want to verify this model visually, we can show the predicted values on the data that we have. You can see that it is as though we had 2 regression models: one for those above the threshold and one for below it.
```
ax = drinking.plot.scatter(x="agecell", y="all", color="C0")
drinking.assign(predictions=model.fittedvalues).plot(x="agecell", y="predictions", ax=ax, color="C1")
plt.title("Regression Discontinuity");
```
If we do the same for the other causes, this is what we get.
```
plt.figure(figsize=(8,8))
for p, cause in enumerate(["all", "mva", "suicide"], 1):
ax = plt.subplot(3,1,p)
drinking.plot.scatter(x="agecell", y=cause, ax=ax)
m = smf.wls(f"{cause}~agecell*threshold", rdd_df).fit()
ate_pct = 100*((m.params["threshold"] + m.params["Intercept"])/m.params["Intercept"] - 1)
drinking.assign(predictions=m.fittedvalues).plot(x="agecell", y="predictions", ax=ax, color="C1")
plt.title(f"Impact of Alcohol on Death: {np.round(ate_pct, 2)}%")
plt.tight_layout()
```
RDD is telling us that alcohol increases the chance of death by suicide and car accidents by 15%, which is a pretty significant amount. These results are compelling arguments to not lower the drinking age, if we want to minimize mortality rates.
### Kernel Weighting
Regression Discontinuity relies heavily on the extrapolation properties of linear regression. Since we are looking at the values at the beginning and end of 2 regression lines, we'd better get those limits right. What can happen is that the regression might focus too much on fitting the other data points at the cost of a poor fit at the threshold. If this happens, we might get the wrong measure of the treatment effect.
One way to solve this is to give higher weights for the points that are closer to the threshold. There are many ways to do this, but a popular one is to reweight the samples with the **triangular kernel**
$
K(R, c, h) = \mathcal{1}\{|R-c| \leq h\} * \bigg(1-\frac{|R-c|}{h}\bigg)
$
The first part of this kernel is an indicator function of whether we are close to the threshold. How close? This is determined by a bandwidth parameter $h$. The second part of this kernel is a weighting function. As we move away from the threshold, the weights get smaller and smaller. These weights are divided by the bandwidth. If the bandwidth is large, the weights shrink at a slower rate. If the bandwidth is small, the weights quickly go to zero.
To make it easier to understand, here is what the weights look like for this kernel applied to our problem. I've set the bandwidth to be 1 here, meaning we will only consider data from people that are no older than 22 years and no younger than 20 years.
```
def kernel(R, c, h):
indicator = (np.abs(R-c) <= h).astype(float)
return indicator * (1 - np.abs(R-c)/h)
plt.plot(drinking["agecell"], kernel(drinking["agecell"], c=0, h=1))
plt.xlabel("agecell")
plt.ylabel("Weight")
plt.title("Kernel Weight by Age");
```
If we apply these weights to our original problem, the impact of alcohol gets bigger, at least for death by all causes. It jumps from 7.6627 to 9.7004, and the result remains very significant. Also, notice that I'm using `wls` instead of `ols`, so I can pass the kernel weights.
```
model = smf.wls("all~agecell*threshold", rdd_df,
weights=kernel(drinking["agecell"], c=0, h=1)).fit()
model.summary().tables[1]
ax = drinking.plot.scatter(x="agecell", y="all", color="C0")
drinking.assign(predictions=model.fittedvalues).plot(x="agecell", y="predictions", ax=ax, color="C1")
plt.title("Regression Discontinuity (Local Regression)");
```
And here is what it looks like for the other causes of death. Notice how the regression on the right is more negatively sloped, since it disregards the rightmost points.
```
plt.figure(figsize=(8,8))
weights = kernel(drinking["agecell"], c=0, h=1)
for p, cause in enumerate(["all", "mva", "suicide"], 1):
ax = plt.subplot(3,1,p)
drinking.plot.scatter(x="agecell", y=cause, ax=ax)
m = smf.wls(f"{cause}~agecell*threshold", rdd_df, weights=weights).fit()
ate_pct = 100*((m.params["threshold"] + m.params["Intercept"])/m.params["Intercept"] - 1)
drinking.assign(predictions=m.fittedvalues).plot(x="agecell", y="predictions", ax=ax, color="C1")
plt.title(f"Impact of Alcohol on Death: {np.round(ate_pct, 2)}%")
plt.tight_layout()
```
With the exception of suicide, it looks like adding the kernel weight made the negative impact of alcohol bigger. Once again, if we want to minimize the death rate, we should NOT recommend lowering the legal drinking age, since there is a clear impact of alcohol on the death rates.
This simple case covers what happens when regression discontinuity design works perfectly. Next, we will see some diagnostics that we should run in order to check how much we can trust RDD, and talk about a topic that is very dear to our hearts: the effect of education on earnings.
## Sheepskin Effect and Fuzzy RDD
When it comes to the effect of education on earnings, there are two major views in economics. The first one is the widely known argument that education increases human capital, increasing productivity and, thus, earnings. In this view, education actually changes you for the better. Another view is that education is simply a signaling mechanism. It just puts you through all these hard tests and academic tasks, and if you can make it, it signals to the market that you are a good employee. In this way, education doesn't make you more productive. It only tells the market how productive you have always been. What matters here is the diploma: if you have it, you will be paid more. We refer to this as the **sheepskin effect**, since diplomas were printed on sheepskin in the past.
To test this hypothesis, [Clark and Martorell](https://faculty.smu.edu/millimet/classes/eco7321/papers/clark%20martorell%202014.pdf) used regression discontinuity to measure the effect of graduating 12th grade on earnings. In order to do that, they had to think about some running variable where students that fall above it graduate and those who fall below it, don't. They found such data in the Texas education system.
In order to graduate in Texas, one has to pass an exam. Testing starts at 10th grade and students can do it multiple times, but eventually, they face a last chance exam at the end of 12th grade. The idea was to get data from students who took those last chance exams and compare those that had barely failed it to those that barely passed it. These students will have very similar human capital, but different signaling credentials. Namely, those that barely passed it, will receive a diploma.
```
sheepskin = pd.read_csv("./data/sheepskin.csv")[["avgearnings", "minscore", "receivehsd", "n"]]
sheepskin.head()
```
Once again, this data is grouped by the running variable. It contains not only the running variable (minscore, already centered at zero) and the outcome (avgearnings), but also the probability of receiving a diploma in each score cell and the size of the cell (n). So, for example, of the 12 students in the cell at -30 (below the passing threshold), only about 5 got the diploma (12 * 0.416 ≈ 5).
This means that there is some slippage in the treatment assignment. Some students that are below the passing threshold managed to get the diploma anyway. Here, the regression discontinuity is **fuzzy**, rather than sharp. Notice how the probability of getting the diploma doesn't jump from zero to one at the threshold. But it does jump from something like 50% to 90%.
```
sheepskin.plot.scatter(x="minscore", y="receivehsd", figsize=(10,5))
plt.xlabel("Test Scores Relative to Cut off")
plt.ylabel("Fraction Receiving Diplomas")
plt.title("Last-chance Exams");
```
We can think of fuzzy RD as a sort of non-compliance. Passing the threshold should make everyone receive the diploma, but some students, the never takers, don't get it. Likewise, being below the threshold should prevent you from getting a diploma, but some students, the always takers, manage to get it anyway.
Just like we have the potential outcome, we have the potential treatment status in this situation. $T_1$ is the treatment everyone would have received had they been above the threshold. $T_0$ is the treatment everyone would have received had they been below the threshold. As you might have noticed, we can think of the **threshold as an Instrumental Variable**. Just as in IV, if we naively estimate the treatment effect, it will be biased towards zero.
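As a toy aside (hypothetical code, not from the paper's data), the four compliance types can be read straight off the potential treatment pair $(T_0, T_1)$:

```python
def compliance_type(t0, t1):
    """Classify a unit by its potential treatment below (t0) and above (t1) the threshold."""
    return {
        (0, 0): "never-taker",   # doesn't get the diploma either way
        (1, 1): "always-taker",  # gets the diploma either way
        (0, 1): "complier",      # gets it only when above the threshold
        (1, 0): "defier",        # ruled out by the monotonicity assumption
    }[(t0, t1)]

print(compliance_type(0, 1))  # complier
```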

The probability of treatment being less than one, even above the threshold, makes the outcome we observe less than the true potential outcome $Y_1$. By the same token, the outcome we observe below the threshold is higher than the true potential outcome $Y_0$. This makes it look like the treatment effect at the threshold is smaller than it actually is and we will have to use IV techniques to correct for that.
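To see this attenuation in a toy simulation (entirely synthetic data with made-up compliance rates of 90%/20% and a true effect of 2, not the study's numbers; I also just compare means on each side rather than running the regressions), notice how the naive difference shrinks toward zero while the Wald-scaled estimate recovers the true effect:

```python
import numpy as np

np.random.seed(0)
n = 10_000
r = np.random.uniform(-1, 1, n)          # running variable, cutoff at 0
above = (r > 0).astype(int)

# Fuzzy assignment: 90% get the treatment above the cutoff, 20% below.
t = np.random.binomial(1, np.where(above == 1, 0.9, 0.2))

y = 2.0 * t + np.random.normal(0, 1, n)  # true treatment effect = 2

naive = y[above == 1].mean() - y[above == 0].mean()
first_stage = t[above == 1].mean() - t[above == 0].mean()
wald = naive / first_stage               # scale by the first stage

print(f"naive: {naive:.2f}, first stage: {first_stage:.2f}, wald: {wald:.2f}")
```

The naive estimate lands around 1.4 (the true effect times the first stage of roughly 0.7), while dividing by the first stage brings it back near 2.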
Just like when we've assumed smoothness of the potential outcomes, we now assume it for the potential treatment. We also need to assume monotonicity, just like in IV. In case you don't remember, it states that $T_{i1} \geq T_{i0} \ \forall i$. This means that crossing the threshold from left to right only increases your chance of getting a diploma (or that there are no defiers). With these two assumptions, we have a Wald Estimator for the LATE.
$$
\dfrac{\lim_{r \to c^+} E[Y_i|R_i=r] - \lim_{r \to c^-} E[Y_i|R_i=r]}{\lim_{r \to c^+} E[T_i|R_i=r] - \lim_{r \to c^-} E[T_i|R_i=r]} = E[Y_{1i} - Y_{0i} | T_{1i} > T_{0i}, R_i=c]
$$
Notice how this is a local estimate in two senses. First, it is local because it only gives the treatment effect at the threshold $c$. This is the RD locality. Second, it is local because it only estimates the treatment effect for the compliers. This is the IV locality.
To estimate this, we will use two linear regressions. The numerator can be estimated just like we've done before. To get the denominator, we simply replace the outcome with the treatment. But first, let's talk about a sanity check we need to run to make sure we can trust our RDD estimates.
### The McCrary Test
One thing that could break our RDD argument is if people can manipulate where they stand relative to the threshold. In the sheepskin example, this could happen if students just below the threshold found a way around the system to increase their test score by just a bit. Another example is when you need to be below a certain income level to get a government benefit. Some families might lower their income on purpose, just to become eligible for the program.
In these sorts of situations, we tend to see a phenomenon called bunching on the density of the running variable. This means that we will have a lot of entities just above or just below the threshold. To check for that, we can plot the density function of the running variable and see if there are any spikes around the threshold. For our case, the density is given by the `n` column in our data.
```
plt.figure(figsize=(8,8))
ax = plt.subplot(2,1,1)
sheepskin.plot.bar(x="minscore", y="n", ax=ax)
plt.title("McCrary Test")
plt.ylabel("Smoothness at the Threshold")
ax = plt.subplot(2,1,2, sharex=ax)
sheepskin.replace({1877:1977, 1874:2277}).plot.bar(x="minscore", y="n", ax=ax)
plt.xlabel("Test Scores Relative to Cut off")
plt.ylabel("Spike at the Threshold");
```
The first plot shows what our data density looks like. As we can see, there are no spikes around the threshold, meaning there is no bunching. Students are not manipulating where they fall relative to the threshold. Just for illustrative purposes, the second plot shows what bunching would look like if students could manipulate where they fall. We would see a spike in the density for the cells just above the threshold, since many students would be in that cell, barely passing the exam.
With this out of the way, we can go back to estimating the sheepskin effect. As I've said before, the numerator of the Wald estimator can be estimated just like we did in the sharp RD. Here, we will use the kernel with a bandwidth of 15 as weights. Since we also have the cell size, we will multiply the kernel by the sample size to get a final weight for each cell.
```
sheepsking_rdd = sheepskin.assign(threshold=(sheepskin["minscore"]>0).astype(int))
model = smf.wls("avgearnings~minscore*threshold",
sheepsking_rdd,
weights=kernel(sheepsking_rdd["minscore"], c=0, h=15)*sheepsking_rdd["n"]).fit()
model.summary().tables[1]
```
This is telling us that the effect of a diploma is -97.7571, but this is not statistically significant (P-value of 0.5). If we plot these results, we get a very continuous line at the threshold. More educated people indeed make more money, but there isn't a jump at the point where they receive the 12th grade diploma. This is an argument in favor of the view that education increases earnings by making people more productive, rather than being just a signal to the market. In other words, there is no sheepskin effect.
```
ax = sheepskin.plot.scatter(x="minscore", y="avgearnings", color="C0")
sheepskin.assign(predictions=model.fittedvalues).plot(x="minscore", y="predictions", ax=ax, color="C1", figsize=(8,5))
plt.xlabel("Test Scores Relative to Cutoff")
plt.ylabel("Average Earnings")
plt.title("Last-chance Exams");
```
However, as we know from the way non-compliance bias works, this result is biased towards zero. To correct for that, we need to scale it by the first stage and get the Wald estimator. Unfortunately, there isn't a good Python implementation for this, so we will have to do it manually and use the bootstrap to get the standard errors.
The code below runs the numerator of the Wald estimator just like we did before and also constructs the denominator by replacing the target variable with the treatment variable `receivehsd`. The final step just divides the numerator by the denominator.
```
def wald_rdd(data):
weights=kernel(data["minscore"], c=0, h=15)*data["n"]
denominator = smf.wls("receivehsd~minscore*threshold", data, weights=weights).fit()
numerator = smf.wls("avgearnings~minscore*threshold", data, weights=weights).fit()
return numerator.params["threshold"]/denominator.params["threshold"]
from joblib import Parallel, delayed
np.random.seed(45)
bootstrap_sample = 1000
ates = Parallel(n_jobs=4)(delayed(wald_rdd)(sheepsking_rdd.sample(frac=1, replace=True))
for _ in range(bootstrap_sample))
ates = np.array(ates)
```
With the bootstrap samples, we can plot the distribution of ATEs and see where the 95% confidence interval is.
```
sns.distplot(ates, kde=False)
plt.vlines(np.percentile(ates, 2.5), 0, 100, linestyles="dotted")
plt.vlines(np.percentile(ates, 97.5), 0, 100, linestyles="dotted", label="95% CI")
plt.title("ATE Bootstrap Distribution")
plt.xlim([-10000, 10000])
plt.legend();
```
As you can see, even when we scale the effect by the first stage, it is still not statistically different from zero. This means that education doesn't increase earnings by a simple sheepskin effect, but rather by increasing one's productivity.
## Key Ideas
We learned how to take advantage of artificial discontinuities to estimate causal effects. The idea is that we will have some artificial threshold that makes the probability of treatment jump. One example that we saw was how age makes the probability of drinking jump at 21 years. We could use that to estimate the impact of drinking on mortality rate. We use the fact that very close to the threshold, we have something close to a randomized trial. Entities very close to the threshold could have gone either way and what determines where they've landed is essentially random. With this, we can compare those just above and just below to get the treatment effect. We saw how we could do that with weighted linear regression using a kernel and how this even gave us, for free, standard errors for our ATE.
Then, we looked at what would happen in the fuzzy RD design, where we have non-compliance. We saw how we could approach the situation much like we did with IV.
## References
I like to think of this entire book as a tribute to Joshua Angrist, Alberto Abadie and Christopher Walters for their amazing Econometrics class. Most of the ideas here are taken from their classes at the American Economic Association. Watching them is what is keeping me sane during this tough year of 2020.
* [Cross-Section Econometrics](https://www.aeaweb.org/conference/cont-ed/2017-webcasts)
* [Mastering Mostly Harmless Econometrics](https://www.aeaweb.org/conference/cont-ed/2020-webcasts)
I'd also like to reference the amazing books from Angrist. They have shown me that Econometrics, or 'Metrics as they call it, is not only extremely useful but also profoundly fun.
* [Mostly Harmless Econometrics](https://www.mostlyharmlesseconometrics.com/)
* [Mastering 'Metrics](https://www.masteringmetrics.com/)
Another important reference is Miguel Hernan and Jamie Robins' book. It has been my trustworthy companion in the most thorny causal questions I had to answer.
* [Causal Inference Book](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)

## Contribute
Causal Inference for the Brave and True is an open-source material on causal inference, the statistics of science. It uses only free software, based in Python. Its goal is to be accessible monetarily and intellectually.
If you found this book valuable and you want to support it, please go to [Patreon](https://www.patreon.com/causal_inference_for_the_brave_and_true). If you are not ready to contribute financially, you can also help by fixing typos, suggesting edits or giving feedback on passages you didn't understand. Just go to the book's repository and [open an issue](https://github.com/matheusfacure/python-causality-handbook/issues). Finally, if you liked this content, please share it with others who might find it useful and give it a [star on GitHub](https://github.com/matheusfacure/python-causality-handbook/stargazers).
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import random
import math
from itertools import combinations
POPULATION_SIZE = 1000
INITIAL_SICK = 1
INITIAL_HEALTHY = POPULATION_SIZE - INITIAL_SICK
SICK_COLOR = (1, 0, 0)
HEALTHY_COLOR = (0, 1, 0)
RECOVERED_COLOR = (0.7, 0, 0.7)
class Person:
x: float
y: float
sick: bool
recovered: bool
susceptibility: float
color: tuple[int, int, int]
HEALTHY_COLOR = (0, 1, 0)
SICK_COLOR = (1, 0, 0)
RECOVERED_COLOR = (0.7, 0, 0.7)
def __init__(self, **kwargs):
self.x = random.random()
self.y = random.random()
self.sick = False
self.recovered = False
self.color = HEALTHY_COLOR
self.mobility = random.random()
self.susceptibility = random.random()
self.recovered_susceptibility = 0
for key, value in kwargs.items():
setattr(self, key, value)
def get_sick(self):
""" Become sick, update corresponding fields. """
self.sick = True
self.color = SICK_COLOR
self.susceptibility = 0
def get_color(self):
""" Get representation of a person health as a corresponding color. """
return self.color
def get_position(self) -> tuple[float, float]:
""" Return current person location. """
return self.x, self.y
def recover(self):
""" Recover from sickness, update corresponding fields. """
self.sick = False
self.recovered = True
self.color = RECOVERED_COLOR
self.susceptibility = self.recovered_susceptibility
def move(self):
""" Move from previous position to a new one. """
move_x, move_y = self.get_move_values()
self.x += move_x
self.y += move_y
self.apply_boundary_conditions()
def apply_boundary_conditions(self):
""" Check if person did not leave the space of the simulation, if so modifies its position. """
if self.x > 1:
self.x -= 1
if self.x < 0:
self.x += 1
if self.y > 1:
self.y -= 1
if self.y < 0:
self.y += 1
def get_distance_to_travel(self) -> float:
""" Get distance person will move at the given time step. """
return random.random() * self.mobility
@staticmethod
def get_move_coefficients():
""" Generate direction in which person will be moved at the given time step. """
angle = math.radians(random.random() * 360)
return math.cos(angle), math.sin(angle)
def get_move_values(self):
distance_to_move = self.get_distance_to_travel()
x_coefficient, y_coefficient = self.get_move_coefficients()
return distance_to_move * x_coefficient, distance_to_move * y_coefficient
def update(self):
""" Update status related to disease development. """
pass
def can_get_infected(self):
""" Returns information if given agent can get infected. """
return not self.sick
def can_infect(self):
""" Returns information if given agent can infect others. """
return self.sick
def get_infected(self):
if self.susceptibility >= random.random():
self.get_sick()
class Simulation:
color = tuple[float, float, float]
population_time_step = tuple[float, float, color]
population_size: int
initial_sick: int
population: list[Person]
frames: list[population_time_step]
fig: plt.Figure
ax: plt.Axes
animation: animation
    def __init__(self, population_size: int, initial_sick: int = 1, number_of_frames: int = 30, person_kwargs: dict = None):
        # Avoid a mutable default argument; fall back to an empty dict.
        person_kwargs = person_kwargs or {}
        self.frames = []
        self.initial_sick = initial_sick
        self.population_size = population_size
        self.population = [Person(**person_kwargs) for _ in range(population_size)]
        self.contact_radius = 0.2
        self.squared_contact_radius = self.contact_radius**2
        for idx in range(initial_sick):
            self.population[idx].get_sick()
        self.generate_frames(number_of_frames)
    def find_all_interactions(self):
        """ Finds all interactions between 2 agents, ignores order in which agents appear. """
        contacts = set()
        for person_1, person_2 in combinations(self.population, 2):
            distance = self.calculate_squared_euclidean_distance(person_1.get_position(), person_2.get_position())
            if distance <= self.squared_contact_radius:
                contacts.add((person_1, person_2))
        return contacts
    @staticmethod
    def find_possible_infections(contacts: set[tuple[Person, Person]]):
        """ Finds all interactions in which one Person is sick. """
        # TODO introduction of personal protection for sick (if prob > value yield else pass) saved by individual protection case
        for person_1, person_2 in contacts:
            if person_1.can_get_infected() and person_2.can_infect():
                yield person_1
            elif person_1.can_infect() and person_2.can_get_infected():
                yield person_2
    @staticmethod
    def calculate_squared_euclidean_distance(first: tuple[float, float], second: tuple[float, float]) -> float:
        return (first[0] - second[0])**2 + (first[1] - second[1])**2
def generate_frames(self, number_of_frames: int) -> None:
""" Generates given number of frames of the simulation. """
self.save_frame(*self.get_population_position())
for frame in range(number_of_frames):
self.update_population()
self.save_frame(*self.get_population_position())
    def update_population(self) -> None:
        """ Updates position and health status for each person in the population. """
        for person in self.population:
            person.move()
        interactions = self.find_all_interactions()
        possible_infections = set(self.find_possible_infections(interactions))
        for person in possible_infections:
            person.get_infected()
def get_population_position(self) -> population_time_step:
""" Get current x, y coordinates of each person and appropriate color depending on the health status. """
population_description = ((*person.get_position(), person.get_color()) for person in
self.population)
return tuple(zip(*population_description))
def save_frame(self, x: list[float], y: list[float], c: list[color]) -> None:
""" Adds a single frame representing current state of the simulation to the record. """
self.frames.append((x, y, c))
def get_frame(self, frame_index: int = -1) -> population_time_step:
""" Get selected frame of the simulation. """
if frame_index not in range(len(self.frames)):
frame_index = -1
return self.frames[frame_index]
def __iter__(self):
return iter(self.frames)
simulation = Simulation(100, number_of_frames=50)
%matplotlib widget
fig = plt.figure(figsize=(6,6))
ax = fig.add_axes([0, 0, 1, 1])
x, y, c = simulation.frames[0]
scatter = ax.scatter(x=x, y=y, c=c)
def update(frame):
    x, y, c = frame
    scatter.set_offsets(list(zip(x, y)))
    # `set_array` expects scalar colormap values, so passing RGB tuples made the
    # points disappear; `set_color` accepts RGB tuples directly.
    scatter.set_color(c)
anim = animation.FuncAnimation(fig, update, iter(simulation), interval=200)
plt.show()
```
```
# Import libraries and modules
import matplotlib.pyplot as plt
import numpy as np
import os
import tensorflow as tf
print(np.__version__)
print(tf.__version__)
np.set_printoptions(threshold=np.inf)
```
# Local Development
## Arguments
```
arguments = {}
# File arguments.
arguments["train_file_pattern"] = "gs://machine-learning-1234-bucket/gan/data/mnist/train*.tfrecord"
arguments["eval_file_pattern"] = "gs://machine-learning-1234-bucket/gan/data/mnist/test*.tfrecord"
arguments["output_dir"] = "gs://machine-learning-1234-bucket/gan/vanilla_gan/trained_model"
# Training parameters.
arguments["train_batch_size"] = 32
arguments["train_steps"] = 56250
arguments["save_summary_steps"] = 100
arguments["save_checkpoints_steps"] = 10000
arguments["keep_checkpoint_max"] = 10
arguments["input_fn_autotune"] = False
# Eval parameters.
arguments["eval_batch_size"] = 32
arguments["eval_steps"] = 100
arguments["start_delay_secs"] = 60000
arguments["throttle_secs"] = 60000
# Image parameters.
arguments["height"] = 28
arguments["width"] = 28
arguments["depth"] = 1
# Generator parameters.
arguments["latent_size"] = 512
arguments["generator_hidden_units"] = [256, 512, 1024]
arguments["generator_leaky_relu_alpha"] = 0.2
arguments["generator_final_activation"] = "tanh"
arguments["generator_l1_regularization_scale"] = 0.
arguments["generator_l2_regularization_scale"] = 0.
arguments["generator_optimizer"] = "Adam"
arguments["generator_learning_rate"] = 0.0002
arguments["generator_adam_beta1"] = 0.5
arguments["generator_adam_beta2"] = 0.999
arguments["generator_adam_epsilon"] = 1e-8
arguments["generator_clip_gradients"] = None
arguments["generator_train_steps"] = 1
# Discriminator hyperparameters.
arguments["discriminator_hidden_units"] = [1024, 512, 256]
arguments["discriminator_leaky_relu_alpha"] = 0.2
arguments["discriminator_l1_regularization_scale"] = 0.
arguments["discriminator_l2_regularization_scale"] = 0.
arguments["discriminator_optimizer"] = "Adam"
arguments["discriminator_learning_rate"] = 0.0002
arguments["discriminator_adam_beta1"] = 0.5
arguments["discriminator_adam_beta2"] = 0.999
arguments["discriminator_adam_epsilon"] = 1e-8
arguments["discriminator_clip_gradients"] = None
arguments["discriminator_train_steps"] = 1
arguments["label_smoothing"] = 0.9
```
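One detail worth unpacking is `label_smoothing = 0.9`: instead of asking the discriminator to output exactly 1 for real images, the real targets are softened to 0.9, a common GAN stabilization trick. A small sketch (my own numpy re-implementation of the numerically stable formula documented for `tf.nn.sigmoid_cross_entropy_with_logits`) shows that smoothing keeps the real-image loss from collapsing to zero even for a confident, correct logit:

```python
import numpy as np

def sigmoid_cross_entropy(logit, label):
    # Numerically stable formula used by tf.nn.sigmoid_cross_entropy_with_logits:
    # max(x, 0) - x * z + log(1 + exp(-|x|))
    return max(logit, 0) - logit * label + np.log1p(np.exp(-abs(logit)))

logit = 4.0                                 # discriminator is very sure the image is real
hard = sigmoid_cross_entropy(logit, 1.0)    # target = 1.0, loss near zero
smooth = sigmoid_cross_entropy(logit, 0.9)  # target = 0.9, loss stays positive
print(hard, smooth)
```

The smoothed target adds exactly `0.1 * logit` to the loss here, penalizing overconfidence and keeping gradients flowing to the discriminator.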
## print_object.py
```
def print_obj(function_name, object_name, object_value):
"""Prints enclosing function, object name, and object value.
Args:
function_name: str, name of function.
object_name: str, name of object.
object_value: object, value of passed object.
"""
# pass
print("{}: {} = {}".format(function_name, object_name, object_value))
```
## input.py
```
def preprocess_image(image):
"""Preprocess image tensor.
Args:
image: tensor, input image with shape
[cur_batch_size, height, width, depth].
Returns:
Preprocessed image tensor with shape
[cur_batch_size, height, width, depth].
"""
func_name = "preprocess_image"
# Convert from [0, 255] -> [-1.0, 1.0] floats.
image = tf.cast(x=image, dtype=tf.float32) * (2. / 255) - 1.0
print_obj(func_name, "image", image)
return image
def decode_example(protos, params):
"""Decodes TFRecord file into tensors.
Given protobufs, decode into image and label tensors.
Args:
protos: protobufs from TFRecord file.
params: dict, user passed parameters.
Returns:
Image and label tensors.
"""
func_name = "decode_example"
# Create feature schema map for protos.
features = {
"image_raw": tf.io.FixedLenFeature(shape=[], dtype=tf.string),
"label": tf.io.FixedLenFeature(shape=[], dtype=tf.int64)
}
# Parse features from tf.Example.
parsed_features = tf.io.parse_single_example(
serialized=protos, features=features
)
print_obj("\n" + func_name, "features", features)
# Convert from a scalar string tensor (whose single string has
# length height * width * depth) to a uint8 tensor with shape
# [height * width * depth].
image = tf.io.decode_raw(
input_bytes=parsed_features["image_raw"], out_type=tf.uint8
)
print_obj(func_name, "image", image)
# Reshape flattened image back into normal dimensions.
image = tf.reshape(
tensor=image,
shape=[params["height"], params["width"], params["depth"]]
)
print_obj(func_name, "image", image)
# Preprocess image.
image = preprocess_image(image=image)
print_obj(func_name, "image", image)
# Convert label from a scalar uint8 tensor to an int32 scalar.
label = tf.cast(x=parsed_features["label"], dtype=tf.int32)
print_obj(func_name, "label", label)
return {"image": image}, label
def read_dataset(filename, mode, batch_size, params):
"""Reads TF Record data using tf.data, doing necessary preprocessing.
Given filename, mode, batch size, and other parameters, read TF Record
dataset using Dataset API, apply necessary preprocessing, and return an
input function to the Estimator API.
Args:
        filename: str, file pattern to read into our tf.data dataset.
mode: The estimator ModeKeys. Can be TRAIN or EVAL.
batch_size: int, number of examples per batch.
params: dict, dictionary of user passed parameters.
Returns:
An input function.
"""
def _input_fn():
"""Wrapper input function used by Estimator API to get data tensors.
Returns:
Batched dataset object of dictionary of feature tensors and label
tensor.
"""
# Create list of files that match pattern.
file_list = tf.data.Dataset.list_files(file_pattern=filename)
# Create dataset from file list.
if params["input_fn_autotune"]:
dataset = tf.data.TFRecordDataset(
filenames=file_list,
num_parallel_reads=tf.data.experimental.AUTOTUNE
)
else:
dataset = tf.data.TFRecordDataset(filenames=file_list)
# Shuffle and repeat if training with fused op.
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.apply(
tf.data.experimental.shuffle_and_repeat(
buffer_size=50 * batch_size,
count=None # indefinitely
)
)
        # Decode TF Record protos into a features dictionary of tensors, then batch.
if params["input_fn_autotune"]:
dataset = dataset.apply(
tf.data.experimental.map_and_batch(
map_func=lambda x: decode_example(
protos=x,
params=params
),
batch_size=batch_size,
num_parallel_calls=tf.data.experimental.AUTOTUNE
)
)
else:
dataset = dataset.apply(
tf.data.experimental.map_and_batch(
map_func=lambda x: decode_example(
protos=x,
params=params
),
batch_size=batch_size
)
)
# Prefetch data to improve latency.
if params["input_fn_autotune"]:
dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
else:
dataset = dataset.prefetch(buffer_size=1)
return dataset
return _input_fn
```
## generator.py
```
class Generator(object):
"""Generator that takes latent vector input and outputs image.
Fields:
name: str, name of `Generator`.
        kernel_regularizer: `l1_l2_regularizer` object, regularizer for kernel
            variables.
        bias_regularizer: `l1_l2_regularizer` object, regularizer for bias
            variables.
"""
def __init__(self, kernel_regularizer, bias_regularizer, name):
"""Instantiates and builds generator network.
Args:
            kernel_regularizer: `l1_l2_regularizer` object, regularizer for
                kernel variables.
            bias_regularizer: `l1_l2_regularizer` object, regularizer for bias
                variables.
name: str, name of generator.
"""
# Set name of generator.
self.name = name
# Regularizer for kernel weights.
self.kernel_regularizer = kernel_regularizer
# Regularizer for bias weights.
self.bias_regularizer = bias_regularizer
def get_fake_images(self, Z, params):
"""Creates generator network and returns generated images.
Args:
Z: tensor, latent vectors of shape [cur_batch_size, latent_size].
params: dict, user passed parameters.
Returns:
Generated image tensor of shape
[cur_batch_size, height * width * depth].
"""
func_name = "get_fake_images"
# Create the input layer to our DNN.
# shape = (cur_batch_size, latent_size)
network = Z
print_obj("\n" + func_name, "network", network)
# Dictionary containing possible final activations.
final_activation_dict = {
"sigmoid": tf.nn.sigmoid, "relu": tf.nn.relu, "tanh": tf.nn.tanh
}
with tf.compat.v1.variable_scope("generator", reuse=tf.compat.v1.AUTO_REUSE):
# Add hidden layers with given number of units/neurons per layer.
for i, units in enumerate(params["generator_hidden_units"]):
# shape = (cur_batch_size, generator_hidden_units[i])
network = tf.compat.v1.layers.dense(
inputs=network,
units=units,
activation=None,
kernel_regularizer=self.kernel_regularizer,
bias_regularizer=self.bias_regularizer,
name="layers_dense_{}".format(i)
)
print_obj(func_name, "network", network)
network = tf.nn.leaky_relu(
features=network,
alpha=params["generator_leaky_relu_alpha"],
name="leaky_relu_{}".format(i)
)
print_obj(func_name, "network", network)
# Final linear layer for outputs.
# shape = (cur_batch_size, height * width * depth)
generated_outputs = tf.compat.v1.layers.dense(
inputs=network,
units=params["height"] * params["width"] * params["depth"],
activation=final_activation_dict.get(
params["generator_final_activation"].lower(), None
),
kernel_regularizer=self.kernel_regularizer,
bias_regularizer=self.bias_regularizer,
name="layers_dense_generated_outputs"
)
print_obj(func_name, "generated_outputs", generated_outputs)
return generated_outputs
def get_generator_loss(self, fake_logits):
"""Gets generator loss.
Args:
fake_logits: tensor, shape of
[cur_batch_size, 1].
Returns:
Tensor of generator's total loss of shape [].
"""
func_name = "get_generator_loss"
# Calculate base generator loss.
generator_loss = tf.reduce_mean(
input_tensor=tf.nn.sigmoid_cross_entropy_with_logits(
logits=fake_logits,
labels=tf.ones_like(input=fake_logits)
),
name="generator_loss"
)
print_obj("\n" + func_name, "generator_loss", generator_loss)
# Get regularization losses.
generator_reg_loss = tf.compat.v1.losses.get_regularization_loss(
scope="generator",
name="generator_regularization_loss"
)
print_obj(func_name, "generator_reg_loss", generator_reg_loss)
# Combine losses for total losses.
generator_total_loss = tf.math.add(
x=generator_loss,
y=generator_reg_loss,
name="generator_total_loss"
)
print_obj(func_name, "generator_total_loss", generator_total_loss)
# # Add summaries for TensorBoard.
# tf.summary.scalar(
# name="generator_loss", tensor=generator_loss, family="losses"
# )
# tf.summary.scalar(
# name="generator_reg_loss",
# tensor=generator_reg_loss,
# family="losses"
# )
# tf.summary.scalar(
# name="generator_total_loss",
# tensor=generator_total_loss,
# family="total_losses"
# )
return generator_total_loss
```
## discriminator.py
```
class Discriminator(object):
"""Discriminator that takes image input and outputs logits.
Fields:
name: str, name of `Discriminator`.
        kernel_regularizer: `l1_l2_regularizer` object, regularizer for kernel
            variables.
        bias_regularizer: `l1_l2_regularizer` object, regularizer for bias
            variables.
"""
def __init__(self, kernel_regularizer, bias_regularizer, name):
"""Instantiates and builds discriminator network.
Args:
            kernel_regularizer: `l1_l2_regularizer` object, regularizer for
                kernel variables.
            bias_regularizer: `l1_l2_regularizer` object, regularizer for bias
                variables.
name: str, name of discriminator.
"""
# Set name of discriminator.
self.name = name
# Regularizer for kernel weights.
self.kernel_regularizer = kernel_regularizer
# Regularizer for bias weights.
self.bias_regularizer = bias_regularizer
def get_discriminator_logits(self, X, params):
"""Creates discriminator network and returns logits.
Args:
X: tensor, image tensors of shape
[cur_batch_size, height * width * depth].
params: dict, user passed parameters.
Returns:
Logits tensor of shape [cur_batch_size, 1].
"""
func_name = "get_discriminator_logits"
# Create the input layer to our DNN.
# shape = (cur_batch_size, height * width * depth)
network = X
print_obj("\n" + func_name, "network", network)
with tf.compat.v1.variable_scope("discriminator", reuse=tf.compat.v1.AUTO_REUSE):
# Add hidden layers with given number of units/neurons per layer.
for i, units in enumerate(params["discriminator_hidden_units"]):
# shape = (cur_batch_size, discriminator_hidden_units[i])
network = tf.compat.v1.layers.dense(
inputs=network,
units=units,
activation=None,
kernel_regularizer=self.kernel_regularizer,
bias_regularizer=self.bias_regularizer,
name="layers_dense_{}".format(i)
)
print_obj(func_name, "network", network)
network = tf.nn.leaky_relu(
features=network,
alpha=params["discriminator_leaky_relu_alpha"],
name="leaky_relu_{}".format(i)
)
print_obj(func_name, "network", network)
# Final linear layer for logits.
# shape = (cur_batch_size, 1)
logits = tf.compat.v1.layers.dense(
inputs=network,
units=1,
activation=None,
kernel_regularizer=self.kernel_regularizer,
bias_regularizer=self.bias_regularizer,
name="layers_dense_logits"
)
print_obj(func_name, "logits", logits)
return logits
def get_discriminator_loss(self, fake_logits, real_logits, params):
"""Gets discriminator loss.
Args:
fake_logits: tensor, shape of
[cur_batch_size, 1].
real_logits: tensor, shape of
[cur_batch_size, 1].
params: dict, user passed parameters.
Returns:
Tensor of discriminator's total loss of shape [].
"""
func_name = "get_discriminator_loss"
# Calculate base discriminator loss.
discriminator_real_loss = tf.reduce_mean(
input_tensor=tf.nn.sigmoid_cross_entropy_with_logits(
logits=real_logits,
labels=tf.multiply(
x=tf.ones_like(input=real_logits),
y=params["label_smoothing"]
)
),
name="discriminator_real_loss"
)
print_obj(
"\n" + func_name,
"discriminator_real_loss",
discriminator_real_loss
)
discriminator_fake_loss = tf.reduce_mean(
input_tensor=tf.nn.sigmoid_cross_entropy_with_logits(
logits=fake_logits,
labels=tf.zeros_like(input=fake_logits)
),
name="discriminator_fake_loss"
)
print_obj(
func_name, "discriminator_fake_loss", discriminator_fake_loss
)
discriminator_loss = tf.add(
x=discriminator_real_loss,
y=discriminator_fake_loss,
name="discriminator_loss"
)
print_obj(func_name, "discriminator_loss", discriminator_loss)
# Get regularization losses.
discriminator_reg_loss = tf.compat.v1.losses.get_regularization_loss(
scope="discriminator",
name="discriminator_reg_loss"
)
print_obj(func_name, "discriminator_reg_loss", discriminator_reg_loss)
# Combine losses for total losses.
discriminator_total_loss = tf.math.add(
x=discriminator_loss,
y=discriminator_reg_loss,
name="discriminator_total_loss"
)
print_obj(
func_name, "discriminator_total_loss", discriminator_total_loss
)
# # Add summaries for TensorBoard.
# tf.summary.scalar(
# name="discriminator_real_loss",
# tensor=discriminator_real_loss,
# family="losses"
# )
# tf.summary.scalar(
# name="discriminator_fake_loss",
# tensor=discriminator_fake_loss,
# family="losses"
# )
# tf.summary.scalar(
# name="discriminator_loss",
# tensor=discriminator_loss,
# family="losses"
# )
# tf.summary.scalar(
# name="discriminator_reg_loss",
# tensor=discriminator_reg_loss,
# family="losses"
# )
# tf.summary.scalar(
# name="discriminator_total_loss",
# tensor=discriminator_total_loss,
# family="total_losses"
# )
return discriminator_total_loss
```
## train_and_eval.py
```
def get_logits_and_losses(features, generator, discriminator, params):
"""Gets logits and losses for both train and eval modes.
Args:
features: dict, feature tensors from input function.
generator: instance of `generator.Generator`.
discriminator: instance of `discriminator.Discriminator`.
params: dict, user passed parameters.
Returns:
Real and fake logits and generator and discriminator losses.
"""
func_name = "get_logits_and_losses"
# Extract real images from features dictionary.
real_images = tf.reshape(
tensor=features["image"],
shape=[-1, params["height"] * params["width"] * params["depth"]]
)
print_obj("\n" + func_name, "real_images", real_images)
# Get dynamic batch size in case of partial batch.
cur_batch_size = tf.shape(
input=real_images,
out_type=tf.int32,
name="{}_cur_batch_size".format(func_name)
)[0]
# Create random noise latent vector for each batch example.
Z = tf.random.normal(
shape=[cur_batch_size, params["latent_size"]],
mean=0.0,
stddev=1.0,
dtype=tf.float32
)
print_obj(func_name, "Z", Z)
# Get generated image from generator network from gaussian noise.
print("\nCall generator with Z = {}.".format(Z))
fake_images = generator.get_fake_images(Z=Z, params=params)
# # Add summaries for TensorBoard.
# tf.summary.image(
# name="fake_images",
# tensor=tf.reshape(
# tensor=fake_images,
# shape=[-1, params["height"], params["width"], params["depth"]]
# ),
# max_outputs=5
# )
# Get fake logits from discriminator using generator's output image.
print("\nCall discriminator with fake_images = {}.".format(fake_images))
fake_logits = discriminator.get_discriminator_logits(
X=fake_images, params=params
)
# Get real logits from discriminator using real image.
print(
"\nCall discriminator with real_images = {}.".format(real_images)
)
real_logits = discriminator.get_discriminator_logits(
X=real_images, params=params
)
# Get generator total loss.
generator_total_loss = generator.get_generator_loss(
fake_logits=fake_logits
)
# Get discriminator total loss.
discriminator_total_loss = discriminator.get_discriminator_loss(
fake_logits=fake_logits, real_logits=real_logits, params=params
)
return (real_logits,
fake_logits,
generator_total_loss,
discriminator_total_loss)
```
## train.py
```
def get_variables_and_gradients(loss, scope):
"""Gets variables and their gradients wrt. loss.
Args:
loss: tensor, shape of [].
scope: str, name of the network whose trainable variables to fetch.
Returns:
Lists of variables and their gradients.
"""
func_name = "get_variables_and_gradients"
# Get trainable variables.
variables = tf.compat.v1.trainable_variables(scope=scope)
print_obj("\n{}_{}".format(func_name, scope), "variables", variables)
# Get gradients.
gradients = tf.gradients(
ys=loss,
xs=variables,
name="{}_gradients".format(scope)
)
print_obj("\n{}_{}".format(func_name, scope), "gradients", gradients)
# Add variable names back in for identification.
gradients = [
tf.identity(
input=g,
name="{}_{}_gradients".format(func_name, v.name[:-2])
)
if tf.is_tensor(x=g) else g
for g, v in zip(gradients, variables)
]
print_obj("\n{}_{}".format(func_name, scope), "gradients", gradients)
return variables, gradients
def create_variable_and_gradient_histogram_summaries(loss_dict, params):
"""Creates variable and gradient histogram summaries.
Args:
loss_dict: dict, keys are scopes and values are scalar loss tensors
for each network kind.
params: dict, user passed parameters.
"""
pass
# for scope, loss in loss_dict.items():
# # Get variables and their gradients wrt. loss.
# variables, gradients = get_variables_and_gradients(loss, scope)
# # Add summaries for TensorBoard.
# for g, v in zip(gradients, variables):
# tf.summary.histogram(
# name="{}".format(v.name[:-2]),
# values=v,
# family="{}_variables".format(scope)
# )
# if tf.is_tensor(x=g):
# tf.summary.histogram(
# name="{}".format(v.name[:-2]),
# values=g,
# family="{}_gradients".format(scope)
# )
def train_network(loss, global_step, params, scope):
"""Trains network and returns loss and train op.
Args:
loss: tensor, shape of [].
global_step: tensor, the current training step or batch in the
training loop.
params: dict, user passed parameters.
scope: str, variable scope of the network to train.
Returns:
Loss tensor and training op.
"""
func_name = "train_network"
print_obj("\n" + func_name, "scope", scope)
# Create optimizer map.
optimizers = {
"Adam": tf.compat.v1.train.AdamOptimizer,
"Adadelta": tf.compat.v1.train.AdadeltaOptimizer,
"AdagradDA": tf.compat.v1.train.AdagradDAOptimizer,
"Adagrad": tf.compat.v1.train.AdagradOptimizer,
"Ftrl": tf.compat.v1.train.FtrlOptimizer,
"GradientDescent": tf.compat.v1.train.GradientDescentOptimizer,
"Momentum": tf.compat.v1.train.MomentumOptimizer,
"ProximalAdagrad": tf.compat.v1.train.ProximalAdagradOptimizer,
"ProximalGradientDescent": tf.compat.v1.train.ProximalGradientDescentOptimizer,
"RMSProp": tf.compat.v1.train.RMSPropOptimizer
}
# Get optimizer and instantiate it.
if params["{}_optimizer".format(scope)] == "Adam":
optimizer = optimizers[params["{}_optimizer".format(scope)]](
learning_rate=params["{}_learning_rate".format(scope)],
beta1=params["{}_adam_beta1".format(scope)],
beta2=params["{}_adam_beta2".format(scope)],
epsilon=params["{}_adam_epsilon".format(scope)],
name="{}_{}_optimizer".format(
scope, params["{}_optimizer".format(scope)].lower()
)
)
else:
optimizer = optimizers[params["{}_optimizer".format(scope)]](
learning_rate=params["{}_learning_rate".format(scope)],
name="{}_{}_optimizer".format(
scope, params["{}_optimizer".format(scope)].lower()
)
)
print_obj("{}_{}".format(func_name, scope), "optimizer", optimizer)
# Get gradients.
gradients = tf.gradients(
ys=loss,
xs=tf.compat.v1.trainable_variables(scope=scope),
name="{}_gradients".format(scope)
)
print_obj("\n{}_{}".format(func_name, scope), "gradients", gradients)
# Clip gradients.
if params["{}_clip_gradients".format(scope)]:
gradients, _ = tf.clip_by_global_norm(
t_list=gradients,
clip_norm=params["{}_clip_gradients".format(scope)],
name="{}_clip_by_global_norm_gradients".format(scope)
)
print_obj("\n{}_{}".format(func_name, scope), "gradients", gradients)
# Zip back together gradients and variables.
grads_and_vars = zip(gradients, tf.compat.v1.trainable_variables(scope=scope))
print_obj(
"{}_{}".format(func_name, scope), "grads_and_vars", grads_and_vars
)
# Create train op by applying gradients to variables and incrementing
# global step.
train_op = optimizer.apply_gradients(
grads_and_vars=grads_and_vars,
global_step=global_step,
name="{}_apply_gradients".format(scope)
)
return loss, train_op
def get_loss_and_train_op(
generator_total_loss, discriminator_total_loss, params):
"""Gets loss and train op for train mode.
Args:
generator_total_loss: tensor, scalar total loss of generator.
discriminator_total_loss: tensor, scalar total loss of discriminator.
params: dict, user passed parameters.
Returns:
Loss scalar tensor and train_op to be used by the EstimatorSpec.
"""
func_name = "get_loss_and_train_op"
# Get global step.
global_step = tf.compat.v1.train.get_or_create_global_step()
# Determine if it is time to train generator or discriminator.
cycle_step = tf.math.mod(
x=global_step,
y=tf.cast(
x=tf.add(
x=params["discriminator_train_steps"],
y=params["generator_train_steps"]
),
dtype=tf.int64
),
name="{}_cycle_step".format(func_name)
)
# Create choose discriminator condition.
condition = tf.less(
x=cycle_step, y=params["discriminator_train_steps"]
)
# Conditionally choose to train generator or discriminator subgraph.
loss, train_op = tf.cond(
pred=condition,
true_fn=lambda: train_network(
loss=discriminator_total_loss,
global_step=global_step,
params=params,
scope="discriminator"
),
false_fn=lambda: train_network(
loss=generator_total_loss,
global_step=global_step,
params=params,
scope="generator"
)
)
return loss, train_op
```
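The `tf.cond` above alternates networks on a fixed cycle: the first `discriminator_train_steps` steps of each cycle train the discriminator, the remaining `generator_train_steps` train the generator. The same scheduling decision can be sketched in plain Python (the step counts here are illustrative, not taken from the notebook's params):

```python
def network_to_train(global_step, d_steps=2, g_steps=1):
    """Return which network trains at this step, mirroring the cycle_step logic."""
    cycle_step = global_step % (d_steps + g_steps)
    return "discriminator" if cycle_step < d_steps else "generator"

# With d_steps=2 and g_steps=1, each 3-step cycle is D, D, G.
schedule = [network_to_train(step) for step in range(6)]
assert schedule == ["discriminator", "discriminator", "generator"] * 2
```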
## eval_metrics.py
```
def get_eval_metric_ops(fake_logits, real_logits, params):
"""Gets eval metric ops.
Args:
fake_logits: tensor, shape of [cur_batch_size, 1] that came from
discriminator having processed generator's output image.
real_logits: tensor, shape of [cur_batch_size, 1] that came from
discriminator having processed real image.
params: dict, user passed parameters.
Returns:
Dictionary of eval metric ops.
"""
func_name = "get_eval_metric_ops"
# Concatenate discriminator logits and labels.
discriminator_logits = tf.concat(
values=[real_logits, fake_logits],
axis=0,
name="discriminator_concat_logits"
)
print_obj("\n" + func_name, "discriminator_logits", discriminator_logits)
discriminator_labels = tf.concat(
values=[
tf.ones_like(input=real_logits) * params["label_smoothing"],
tf.zeros_like(input=fake_logits)
],
axis=0,
name="discriminator_concat_labels"
)
print_obj(func_name, "discriminator_labels", discriminator_labels)
# Calculate discriminator probabilities.
discriminator_probabilities = tf.nn.sigmoid(
x=discriminator_logits, name="discriminator_probabilities"
)
print_obj(
func_name, "discriminator_probabilities", discriminator_probabilities
)
# Create eval metric ops dictionary.
eval_metric_ops = {
"accuracy": tf.compat.v1.metrics.accuracy(
labels=discriminator_labels,
predictions=discriminator_probabilities,
name="discriminator_accuracy"
),
"precision": tf.compat.v1.metrics.precision(
labels=discriminator_labels,
predictions=discriminator_probabilities,
name="discriminator_precision"
),
"recall": tf.compat.v1.metrics.recall(
labels=discriminator_labels,
predictions=discriminator_probabilities,
name="discriminator_recall"
),
"auc_roc": tf.compat.v1.metrics.auc(
labels=discriminator_labels,
predictions=discriminator_probabilities,
num_thresholds=200,
curve="ROC",
name="discriminator_auc_roc"
),
"auc_pr": tf.compat.v1.metrics.auc(
labels=discriminator_labels,
predictions=discriminator_probabilities,
num_thresholds=200,
curve="PR",
name="discriminator_auc_pr"
)
}
print_obj(func_name, "eval_metric_ops", eval_metric_ops)
return eval_metric_ops
```
## predict.py
```
def get_predictions_and_export_outputs(features, generator, params):
"""Gets predictions and serving export outputs.
Args:
features: dict, feature tensors from serving input function.
generator: instance of `Generator`.
params: dict, user passed parameters.
Returns:
Predictions dictionary and export outputs dictionary.
"""
func_name = "get_predictions_and_export_outputs"
# Extract given latent vectors from features dictionary.
Z = features["Z"]
print_obj("\n" + func_name, "Z", Z)
# Establish generator network subgraph.
fake_images = generator.get_fake_images(Z=Z, params=params)
print_obj(func_name, "fake_images", fake_images)
# Reshape into a rank 4 image.
generated_images = tf.reshape(
tensor=fake_images,
shape=[-1, params["height"], params["width"], params["depth"]]
)
print_obj(func_name, "generated_images", generated_images)
# Create predictions dictionary.
predictions_dict = {
"generated_images": generated_images
}
print_obj(func_name, "predictions_dict", predictions_dict)
# Create export outputs.
export_outputs = {
"predict_export_outputs": tf.estimator.export.PredictOutput(
outputs=predictions_dict)
}
print_obj(func_name, "export_outputs", export_outputs)
return predictions_dict, export_outputs
```
## vanilla_gan.py
```
def vanilla_gan_model(features, labels, mode, params):
"""Vanilla GAN custom Estimator model function.
Args:
features: dict, keys are feature names and values are feature tensors.
labels: tensor, label data.
mode: tf.estimator.ModeKeys with values of either TRAIN, EVAL, or
PREDICT.
params: dict, user passed parameters.
Returns:
Instance of `tf.estimator.EstimatorSpec` class.
"""
func_name = "vanilla_gan_model"
print_obj("\n" + func_name, "features", features)
print_obj(func_name, "labels", labels)
print_obj(func_name, "mode", mode)
print_obj(func_name, "params", params)
# Loss function, training/eval ops, etc.
predictions_dict = None
loss = None
train_op = None
eval_metric_ops = None
export_outputs = None
# Instantiate generator.
vanilla_generator = Generator(
kernel_regularizer=None,
# tf.contrib.layers.l1_l2_regularizer(
# scale_l1=params["generator_l1_regularization_scale"],
# scale_l2=params["generator_l2_regularization_scale"]
# ),
bias_regularizer=None,
name="generator"
)
# Instantiate discriminator.
vanilla_discriminator = Discriminator(
kernel_regularizer=None,
# tf.contrib.layers.l1_l2_regularizer(
# scale_l1=params["discriminator_l1_regularization_scale"],
# scale_l2=params["discriminator_l2_regularization_scale"]
# ),
bias_regularizer=None,
name="discriminator"
)
if mode == tf.estimator.ModeKeys.PREDICT:
# Get predictions and export outputs.
(predictions_dict,
export_outputs) = get_predictions_and_export_outputs(
features=features, generator=vanilla_generator, params=params
)
else:
# Get logits and losses from networks for train and eval modes.
(real_logits,
fake_logits,
generator_total_loss,
discriminator_total_loss) = get_logits_and_losses(
features=features,
generator=vanilla_generator,
discriminator=vanilla_discriminator,
params=params
)
if mode == tf.estimator.ModeKeys.TRAIN:
# Create variable and gradient histogram summaries.
create_variable_and_gradient_histogram_summaries(
loss_dict={
"generator": generator_total_loss,
"discriminator": discriminator_total_loss
},
params=params
)
# Get loss and train op for EstimatorSpec.
loss, train_op = get_loss_and_train_op(
generator_total_loss=generator_total_loss,
discriminator_total_loss=discriminator_total_loss,
params=params
)
else:
# Set eval loss.
loss = discriminator_total_loss
# Get eval metrics.
eval_metric_ops = get_eval_metric_ops(
real_logits=real_logits,
fake_logits=fake_logits,
params=params
)
# Return EstimatorSpec
return tf.estimator.EstimatorSpec(
mode=mode,
predictions=predictions_dict,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops,
export_outputs=export_outputs
)
```
## serving.py
```
def serving_input_fn(params):
"""Serving input function.
Args:
params: dict, user passed parameters.
Returns:
ServingInputReceiver object containing features and receiver tensors.
"""
func_name = "serving_input_fn"
# Create placeholders to accept data sent to the model at serving time.
# shape = (batch_size,)
feature_placeholders = {
"Z": tf.compat.v1.placeholder(
dtype=tf.float32,
shape=[None, params["latent_size"]],
name="serving_input_placeholder_Z"
)
}
print_obj("\n" + func_name, "feature_placeholders", feature_placeholders)
# Create clones of the feature placeholder tensors so that the SavedModel
# SignatureDef will point to the placeholder.
features = {
key: tf.identity(
input=value,
name="{}_identity_placeholder_{}".format(func_name, key)
)
for key, value in feature_placeholders.items()
}
print_obj(func_name, "features", features)
return tf.estimator.export.ServingInputReceiver(
features=features, receiver_tensors=feature_placeholders
)
```
## model.py
```
def train_and_evaluate(args):
"""Trains and evaluates custom Estimator model.
Args:
args: dict, user passed parameters.
Returns:
`Estimator` object.
"""
func_name = "train_and_evaluate"
print_obj("\n" + func_name, "args", args)
# Ensure filewriter cache is clear for TensorBoard events file.
# tf.summary.FileWriterCache.clear()
# Set logging to be level of INFO.
# tf.logging.set_verbosity(tf.logging.INFO)
# Create a RunConfig for Estimator.
config = tf.estimator.RunConfig(
model_dir=args["output_dir"],
save_summary_steps=args["save_summary_steps"],
save_checkpoints_steps=args["save_checkpoints_steps"],
keep_checkpoint_max=args["keep_checkpoint_max"]
)
# Create our custom estimator using our model function.
estimator = tf.estimator.Estimator(
model_fn=vanilla_gan_model,
model_dir=args["output_dir"],
config=config,
params=args
)
# Create train spec to read in our training data.
train_spec = tf.estimator.TrainSpec(
input_fn=read_dataset(
filename=args["train_file_pattern"],
mode=tf.estimator.ModeKeys.TRAIN,
batch_size=args["train_batch_size"],
params=args
),
max_steps=args["train_steps"]
)
# Create exporter to save out the complete model to disk.
exporter = tf.estimator.LatestExporter(
name="exporter",
serving_input_receiver_fn=lambda: serving_input_fn(args)
)
# Create eval spec to read in our validation data and export our model.
eval_spec = tf.estimator.EvalSpec(
input_fn=read_dataset(
filename=args["eval_file_pattern"],
mode=tf.estimator.ModeKeys.EVAL,
batch_size=args["eval_batch_size"],
params=args
),
steps=args["eval_steps"],
start_delay_secs=args["start_delay_secs"],
throttle_secs=args["throttle_secs"],
exporters=exporter
)
# Create train and evaluate loop to train and evaluate our estimator.
tf.estimator.train_and_evaluate(
estimator=estimator, train_spec=train_spec, eval_spec=eval_spec)
return estimator
```
## Run model
```
os.environ["OUTPUT_DIR"] = arguments["output_dir"]
!gsutil -m rm -rf ${OUTPUT_DIR}
estimator = train_and_evaluate(arguments)
```
## Prediction
```
!gsutil ls gs://machine-learning-1234-bucket/gan/vanilla_gan/trained_model/export/exporter
loaded = tf.saved_model.load(
export_dir=os.path.join(
arguments["output_dir"], "export", "exporter", "1595549661"
)
)
print(list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]
print(infer.structured_outputs)
Z = tf.random.normal(shape=(10, 512))
predictions = infer(Z=Z)
```
Convert the generated images back to the original [0, 255] pixel scale.
```
generated_images = np.clip(
a=tf.cast(
x=((tf.reshape(
tensor=predictions["generated_images"],
shape=[
-1,
arguments["height"],
arguments["width"],
arguments["depth"]
]
) + 1.0) * (255. / 2)),
dtype=tf.int32
),
a_min=0,
a_max=255
)
print(generated_images.shape)
def plot_images(images):
"""Plots images.
Args:
images: np.array, array of images of
[num_images, image_size, image_size, num_channels].
"""
num_images = len(images)
plt.figure(figsize=(20, 20))
for i in range(num_images):
image = images[i]
plt.subplot(1, num_images, i + 1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(
tf.reshape(image, image.shape[:-1]),
cmap="gray_r"
)
plt.show()
plot_images(generated_images)
```
Vodopyan A.O., Kobzar O.S., Khabibullin R.A., 2019
# Oil Viscosity
Sources:
1. Beggs, H.D. and Robinson, J.R. "Estimating the Viscosity of Crude Oil Systems."
Journal of Petroleum Technology. Vol. 27, No. 9 (1975)
2. Vazquez M. et al. Correlations for fluid physical property prediction // SPE Annual Fall Technical Conference and Exhibition. Society of Petroleum Engineers, 1977.
## General Principles
The viscosity correlations were derived from analysis of a large body of laboratory measurements. Two opposing goals shaped the final formulas: covering as broad a range of crude oils as possible while keeping the correlation acceptably accurate.
Oil viscosity is conventionally divided into three types:
1. Dead oil viscosity - the viscosity of degassed oil.
2. Saturated oil viscosity - viscosity at pressures at or below the bubble point.
3. Undersaturated oil viscosity - viscosity at pressures above the bubble point.
Each type has its own correlation, and as pressure increases from atmospheric each successive correlation typically builds on the previous one.
"Undersaturated" oil means that at pressures above the bubble point additional gas could still dissolve in the oil, but all of the available gas has already dissolved at the bubble point.
## Dead Oil Viscosity [1]
$$ \mu_{OD} = 10^X - 1 $$
where:
$$ X = yT^{-1.163} $$
$$ y = 10 ^ Z $$
$$ Z = 3.0324 - 0.02023 \gamma_o $$
## Saturated Oil Viscosity ($P \leq P_b$) [1]
$$\mu = A \mu_{OD}^B$$
where:
$$A = 10.715(R_s + 100)^{-0.515}$$
$$B = 5.44(R_s + 150)^{-0.338}$$
### Nomenclature:
$R_s$ - solution gas-oil ratio, $scf/STB$
$T$ - temperature, $^{\circ} F$
$\mu_{OD}$ - dead oil viscosity at the given $T$, cP
$\mu$ - gas-saturated oil viscosity at the given $T$, cP
$\gamma_o$ - oil gravity, $^{\circ} API$
## Undersaturated Oil Viscosity ($P > P_b$) [2]
$$\mu_o = \mu_{ob}(p/p_b)^m$$
where:
$$ m = C_1p^{C_2} \exp(C_3 + C_4 p) $$
with coefficients:
$C_1 = 2.6$
$C_2 = 1.178$
$C_3 = -11.513$
$C_4 = -8.98 \times 10^{-5}$
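The three correlations can be transcribed directly. Note this standalone sketch works in field units (°API, °F, scf/STB, psia) as in the original papers, whereas the `uniflocpy` helpers used below take SI-style arguments (K, m3/m3, MPa); the input values are illustrative:

```python
import math

def dead_oil_viscosity_cp(api, t_f):
    """Beggs-Robinson dead oil viscosity, cP (gravity in deg API, T in deg F)."""
    z = 3.0324 - 0.02023 * api
    x = 10.0 ** z * t_f ** -1.163
    return 10.0 ** x - 1.0

def saturated_oil_viscosity_cp(mu_od_cp, rs_scf_stb):
    """Beggs-Robinson saturated (live) oil viscosity, cP (Rs in scf/STB)."""
    a = 10.715 * (rs_scf_stb + 100.0) ** -0.515
    b = 5.44 * (rs_scf_stb + 150.0) ** -0.338
    return a * mu_od_cp ** b

def undersaturated_oil_viscosity_cp(mu_ob_cp, p_psia, pb_psia):
    """Vazquez-Beggs viscosity above the bubble point, cP (pressures in psia)."""
    m = 2.6 * p_psia ** 1.178 * math.exp(-11.513 - 8.98e-5 * p_psia)
    return mu_ob_cp * (p_psia / pb_psia) ** m

mu_od = dead_oil_viscosity_cp(api=35.0, t_f=150.0)
mu_ob = saturated_oil_viscosity_cp(mu_od, rs_scf_stb=300.0)
mu_o = undersaturated_oil_viscosity_cp(mu_ob, p_psia=4000.0, pb_psia=3000.0)
```

Dissolved gas lowers viscosity, so `mu_ob` comes out below `mu_od`; above the bubble point viscosity rises again with pressure, so `mu_o` exceeds `mu_ob`.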
```
import sys
sys.path.append('../')
import uniflocpy.uPVT.PVT_fluids as PVT
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
import pandas as pd
import pylab
import uniflocpy.uPVT.PVT_correlations as PVTcorr
import uniflocpy.uPVT.PVT_fluids as PVT_fluids
%matplotlib inline
def show_example(legend, title, xlabel, ylabel):
plt.grid(True)
plt.title(title, color='black')
plt.ylabel(ylabel, color='black')
plt.xlabel(xlabel, color='black')
plt.legend(legend)
plt.show()
list_t_k = np.arange(278.15,400,5)
list_t_c = list_t_k - 273.15
list_gamma_oil = [0.6, 0.65, 0.7, 0.75]
for sensivity_parametr in list_gamma_oil:
mu_do_cp = PVTcorr.unf_deadoilviscosity_Beggs_cP(sensivity_parametr, list_t_k)
plt.plot(list_t_c, mu_do_cp, linewidth=3)
show_example(list_gamma_oil, 'Dead oil viscosity vs. oil specific gravity',
'Temperature, C', '$\mu_{DO}$, cP')
list_rs_m3m3 = np.arange(0, 500, 10)
rs_m3m3 = 50
for sensivity_parametr in list_gamma_oil:
mu_do_cp = PVTcorr.unf_deadoilviscosity_Beggs_cP(sensivity_parametr, list_t_k)
mu_cp = PVTcorr.unf_saturatedoilviscosity_Beggs_cP(mu_do_cp, rs_m3m3)
plt.plot(list_t_c, mu_cp, linewidth=3)
show_example(list_gamma_oil, 'Oil viscosity at $P \leq P_b$ vs. oil specific gravity',
'Temperature, C', '$\mu$, cP')
list_rs_m3m3 = np.arange(0, 500, 10)
rs_m3m3 = 50
p_MPaa = 10
pb_MPaa = 8
for sensivity_parametr in list_gamma_oil:
mu_do_cp = PVTcorr.unf_deadoilviscosity_Beggs_cP(sensivity_parametr, list_t_k)
mu_cp = PVTcorr.unf_saturatedoilviscosity_Beggs_cP(mu_do_cp, rs_m3m3)
mu_cp_p = PVTcorr.unf_undersaturatedoilviscosity_VB_cP(p_MPaa, pb_MPaa, mu_cp)
plt.plot(list_t_c, mu_cp_p, linewidth=3)
show_example(list_gamma_oil, 'Oil viscosity at $P > P_b$ vs. oil specific gravity',
'Temperature, C', '$\mu_o$, cP')
rsb_labels = ('400', '200', '50')
fluid_Standing_1 = PVT_fluids.FluidStanding(rsb_m3m3 = 400)
fluid_Standing_2 = PVT_fluids.FluidStanding(rsb_m3m3 = 200)
fluid_Standing_3 = PVT_fluids.FluidStanding(rsb_m3m3 = 50)
p_bar = range(1,700)
t_c = 80
mu_oil_1 = []
mu_oil_2 = []
mu_oil_3 = []
for i in p_bar:
fluid_Standing_1.calc(i, t_c)
fluid_Standing_2.calc(i, t_c)
fluid_Standing_3.calc(i, t_c)
mu_oil_1.append(fluid_Standing_1.mu_oil_cP)
mu_oil_2.append(fluid_Standing_2.mu_oil_cP)
mu_oil_3.append(fluid_Standing_3.mu_oil_cP)
plt.plot(p_bar, mu_oil_1, linewidth=3)
plt.plot(p_bar, mu_oil_2, linewidth=3)
plt.plot(p_bar, mu_oil_3, linewidth=3)
show_example(rsb_labels, 'Oil viscosity vs. solution GOR',
'Pressure, bar', '$\mu_o$, cP')
```
```
import pandas
df = pandas.read_excel("s3://lab11---2019/house_price (1).xls")
df[:10]
df.describe()
df.hist(figsize=(20,20))
df.groupby('house_type').mean()
df[:10]
!pip install mglearn
import sklearn
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
%matplotlib inline
import pandas
import numpy as np
import mglearn
from collections import Counter
from sklearn.metrics import cohen_kappa_score
from sklearn import preprocessing
df = pandas.read_excel('s3://lab11---2019/house_price (1).xls')
# Combine multiple columns into a 2D array
# and convert the integer data to float.
X = np.column_stack((df.built_in.astype(float),df.price.astype(float)))
X = preprocessing.scale(X) # scale the data before training the model
y = df.house_type
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size =0.3,stratify = y, random_state=0)
# for classification, make sure a stratify splitting method is selected
mglearn.discrete_scatter(X[:,0],X[:,1],y) # use mglearn to visualize data
plt.legend(y,loc='best')
plt.xlabel('build_in')
plt.ylabel('house price')
plt.show()
from sklearn.neural_network import MLPClassifier
mlp = MLPClassifier(solver='lbfgs', hidden_layer_sizes=(20,20,20), random_state=0).fit(X_train, y_train)
mglearn.discrete_scatter(X_train[:, 0], X_train[:, 1],mlp.predict(X_train))
plt.legend(y,loc='best')
plt.xlabel('build_in')
plt.ylabel('house price')
plt.show()
print("Training set accuracy: {:.2f}".format(mlp.score(X_train, y_train)))
print ("Training Kappa: {:.3f}".format(cohen_kappa_score(y_train,mlp.predict(X_train))))
print("Test set accuracy: {:.2f}".format(mlp.score(X_test, y_test)))
print ("Test Kappa: {:.3f}".format(cohen_kappa_score(y_test,mlp.predict(X_test))))
fig, axes = plt.subplots(2, 4, figsize=(20, 8))
for axx, n_hidden_nodes in zip(axes, [10, 20]):
for ax, alpha in zip(axx, [0.0001, 0.01, 0.1, 1]):
mlp = MLPClassifier(solver='lbfgs', random_state=0,
hidden_layer_sizes=[n_hidden_nodes, n_hidden_nodes],
alpha=alpha)
mlp.fit(X_train, y_train)
mglearn.discrete_scatter(X_train[:, 0], X_train[:, 1], mlp.predict(X_train), ax=ax)
ax.set_title("n_hidden=[{}, {}]\nalpha={:.4f}\nkappa={:.4f}".format(
n_hidden_nodes, n_hidden_nodes, alpha, cohen_kappa_score(y_train, mlp.predict(X_train))))
plt.subplots_adjust(hspace=0.5)
mlp = MLPClassifier(solver='lbfgs', hidden_layer_sizes=(20,20), random_state=0).fit(X_train, y_train)
fig, axes = plt.subplots(1, 3, figsize=(20, 8))
for i , ax in zip(range(3),axes):
img = ax.imshow(mlp.coefs_[i], interpolation='none', cmap='viridis')
ax.set_title(" No.{} layer".format(i))
ax.set_xlabel("Columns in weight matrix")
ax.set_ylabel("Input feature")
fig.colorbar(img, ax = ax)
```
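The `stratify=y` argument used in the split above keeps class proportions identical across train and test sets, which matters for classification metrics like kappa. A minimal standalone check with synthetic labels (nothing here comes from the house-price data):

```python
from collections import Counter
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(-1, 1)
y = np.array([0] * 15 + [1] * 5)  # 3:1 class balance

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.4, stratify=y, random_state=0)

# Stratification preserves the 3:1 ratio in both splits.
assert Counter(y_tr) == {0: 9, 1: 3}
assert Counter(y_te) == {0: 6, 1: 2}
```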
# Tidy Data
> Structuring datasets to facilitate analysis [(Wickham 2014)](http://www.jstatsoft.org/v59/i10/paper)
If there's one maxim I can impart it's that your tools shouldn't get in the way of your analysis. Your problem is already difficult enough, don't let the data or your tools make it any harder.
## The Rules
In a tidy dataset...
1. Each variable forms a column
2. Each observation forms a row
3. Each type of observational unit forms a table
We'll cover a few methods that help you get there.
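As a toy illustration of rules 1 and 2 (hypothetical data, not the games dataset used below), `pd.melt` turns a wide table with one row per game into a long one with one row per game-team observation:

```python
import pandas as pd

# Hypothetical wide table: one row per game, two team columns.
wide = pd.DataFrame({
    "game_id": [0, 1],
    "away_team": ["Bulls", "Heat"],
    "home_team": ["Knicks", "Celtics"],
})

# Melt to tidy form: one row per (game, team) observation.
tidy_toy = pd.melt(wide, id_vars=["game_id"],
                   value_vars=["away_team", "home_team"], value_name="team")
assert len(tidy_toy) == 4
assert set(tidy_toy["team"]) == {"Bulls", "Heat", "Knicks", "Celtics"}
```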
Based on [this](http://stackoverflow.com/questions/22695680/python-pandas-timedelta-specific-rows) StackOverflow question.
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
pd.options.display.max_rows = 10
%matplotlib inline
```
Earlier, I fetched some data
```python
tables = pd.read_html("http://www.basketball-reference.com/leagues/NBA_2015_games.html")
games = tables[0]
games.to_csv('data/games.csv', index=False)
```
```
pd.read_html?
!head -n 2 data/games.csv
```
The Question:
> **How many days of rest did each team get between each game?**
Whether or not your dataset is tidy depends on your question. Given our question, what is an observation?
```
column_names = ['date', '_', 'away_team', 'away_points', 'home_team',
'home_points', 'n_ot', 'notes']
games = (pd.read_csv('data/games.csv', names=column_names, parse_dates=['date'],
skiprows=1)
.drop(['_', 'notes', 'n_ot'], axis='columns')
.set_index('date', append=True))
games.index.names = ['game_id', 'date']
games.head()
```
Is `games` a tidy dataset, given our question? No, we have multiple observations (teams) per row. We'll use `pd.melt` to fix that.
```
tidy = pd.melt(games.sort_index().reset_index(),
id_vars=['game_id', 'date'], value_vars=['away_team', 'home_team'],
value_name='team')
tidy.head()
```
Now the translation from question to operation is direct:
```
# For each team... get number of dates between games
tidy.groupby('team')['date'].diff().dt.days - 1
tidy['rest'] = tidy.sort_values('date').groupby('team').date.diff().dt.days - 1
tidy.dropna().head()
un = pd.pivot_table(tidy, values='rest',
index=['game_id', 'date'],
columns='variable').rename(
columns={'away_team': 'away_rest', 'home_team': 'home_rest'}
)
un.columns.name = None
un.dropna().head()
df = pd.concat([games, un], axis=1)
df
g = sns.FacetGrid(data=tidy.dropna(), col='team', col_wrap=5, hue='team')
g.map(sns.barplot, "variable", "rest");
delta = (un.home_rest - un.away_rest).dropna().astype(int)
(delta.value_counts()
.reindex(np.arange(delta.min(), delta.max() + 1), fill_value=0)
.sort_index().plot(kind='bar', color='k', width=.9, rot=0, figsize=(12, 6)))
```
# Stack / Unstack
An "observation" depends on the question. Home team advantage?
```
home_adv = games.home_points - games.away_points
ax = (home_adv).plot(kind='hist', bins=80, color='k', figsize=(10, 5))
ax.set_xlim(-40, 40)
mu = home_adv.mean()
ax.vlines(mu, *ax.get_ylim(), color='steelblue', linewidth=3)
print('Home win percent:', (home_adv > 0).mean())
```
# Team Strength
# Mini Project: Home Court Advantage?
What's the effect (in terms of probability of winning) of being the home team?
### Step 1: Calculate Win %
We need to create an indicator for whether the home team won.
Add it as a column called `home_win` in `games`.
```
games['home_win'] = ... # fill this in
#%load -r 1:4 solutions_tidy.py
```
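If you want to sanity-check your answer before loading the solution, one possible fill-in compares the two points columns directly; a minimal sketch on a made-up two-game frame (`demo` here stands in for `games`):

```python
import pandas as pd

# made-up two-game frame standing in for `games`
demo = pd.DataFrame({'home_points': [100, 90], 'away_points': [95, 99]})

# the home team wins when it scores more points than the away team
demo['home_win'] = demo.home_points > demo.away_points
print(demo['home_win'].tolist())  # [True, False]
```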
### Step 2: Find the win percent for each team
Teams are split across two columns. It's easiest to calculate the number of wins and
number of games as away, and the number of wins and number of games as home. Then
combine those two results to get the win percent.
```
wins_as_home = games.groupby('').agg([])
# hint: use `~` to flip an array of booleans
wins_as_away = ...
wins_as_home.columns = ['n_wins', 'n_games']
wins_as_away.columns = ['n_wins', 'n_games']
%load -r 5:13 solutions_tidy.py
```
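As a hedged sketch of the idea on made-up data (the actual solution is loaded from `solutions_tidy.py`): count wins and games with the home side grouped by `home_team`, do the same for the away side with the booleans flipped via `~`, then add the two frames.

```python
import pandas as pd

# made-up mini-schedule standing in for `games`
demo = pd.DataFrame({'home_team': ['A', 'A', 'B'],
                     'away_team': ['B', 'C', 'C'],
                     'home_win': [True, False, True]})

# wins and games played, split by role
wins_as_home = demo.groupby('home_team').home_win.agg(['sum', 'count'])
wins_as_away = (~demo.home_win).groupby(demo.away_team).agg(['sum', 'count'])
wins_as_home.columns = wins_as_away.columns = ['n_wins', 'n_games']

# align on team, add, and compute the win percent
combined = wins_as_home.add(wins_as_away, fill_value=0)
strength = combined.n_wins / combined.n_games
print(strength)
```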
Now add `wins_as_home` and `wins_as_away` to get a DataFrame with
two columns, `n_wins` and `n_games`, and one row per team.
Finally, calculate the win percent.
```
%load -r 14:20 solutions_tidy.py
strength.sort_values().plot(kind='barh', figsize=(5, 12))
```
Bring the `strength` values in for each team, for each game.
```
games.head()
```
For SQL people
```sql
SELECT *
FROM games NATURAL JOIN strength
```
We just need to get the names worked out.
```
strength.head().reset_index().rename(columns=lambda x: 'away_' + x)
(pd.merge(games.reset_index(), strength.reset_index().add_prefix('away_'))
.pipe(pd.merge, strength.reset_index().add_prefix('home_'))
.set_index(['game_id', 'date']))
```
For python people
```
games = games.assign(away_strength=games.away_team.map(strength),
home_strength=games.home_team.map(strength))
games.head()
X = pd.concat([games, un], axis=1).set_index(['away_team', 'home_team'], append=True).dropna()
X.head()
X['home_win'] = X.home_win.astype(int) # for statsmodels
import statsmodels.api as sm
mod = sm.Logit.from_formula('home_win ~ home_strength + away_strength + home_rest + away_rest', X)
res = mod.fit()
res.summary()
mod = sm.Logit.from_formula('home_win ~ rest_difference',
X.assign(rest_difference=lambda df: df.home_rest - df.away_rest))
res = mod.fit()
res.summary()
mod = sm.OLS.from_formula('spread ~ home_strength + away_strength + rest_difference',
X.assign(rest_difference=lambda df: df.home_rest - df.away_rest,
spread=lambda df: df.home_points - df.away_points))
res = mod.fit()
res.summary()
```
# Recap
- Tidy data: one row per observation
- melt / stack: wide to long
- pivot_table / unstack: long to wide
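A minimal round trip tying the recap together, on a made-up two-game table:

```python
import pandas as pd

# toy wide table: one row per game
wide = pd.DataFrame({'game_id': [0, 1],
                     'away_team': ['A', 'B'],
                     'home_team': ['C', 'D']})

# wide -> long: one row per (game, role) observation
long = pd.melt(wide, id_vars=['game_id'],
               value_vars=['away_team', 'home_team'], value_name='team')

# long -> wide again
back = long.pivot(index='game_id', columns='variable', values='team')
print(back)
```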
|
github_jupyter
|
# Week 1
1. Question 1
Consider the table below describing a data set of individuals who have registered to volunteer at a public school. Which of the choices below lists categorical variables?
**Answer:** phone number and name
2. Question 2
A study is designed to test the effect of type of light on exam performance of students. 180 students are randomly assigned to three classrooms: one that is dimly lit, another with yellow lighting, and a third with white fluorescent lighting, and given the same exam. Which of the following correctly identifies the variables used in the study as explanatory and response?
**Answer:** explanatory: type of light (categorical with 3 levels); response: exam performance
3. Question 3
In a study published in 2011 in The Proceedings of the National Academy of Sciences, researchers randomly assigned 120 elderly men and women who volunteered to be a part of this study (average age mid-60s) to one of two exercise groups. One group walked around a track three times a week; the other did a variety of less aerobic exercises, including yoga and resistance training with bands. After a year, brain scans showed that among the walkers, the hippocampus (part of the brain responsible for forming memories) had increased in volume by about 2% on average; in the others, it had declined by about 1.4%. Which of the following is false?
**Answer:** The results of this study can be generalized to all elderly.
4. Question 4
A school district is considering whether it will no longer allow students to park at school after two recent accidents where students were severely injured. As a first step, they survey parents of high school students by mail, asking them whether or not the parents would object to this policy change. Of 5,799 surveys that go out, 1,209 are returned. Of these 1,209 surveys that were completed, 926 agreed with the policy change and 283 disagreed. Which of the following statements is the most plausible?
**Answer:** It is possible that 80% of the parents of high school students disagree with the policy change.
5. Question 5
For your political science class, you’d like to take a survey from a sample of all the Catholic Church members in your town. Your town is divided into 17 neighborhoods, each with similar socio-economic status distribution and ethnic diversity, and each contains a Catholic Church. Rather than trying to obtain a list of all members of all these churches, you decide to pick 3 churches at random. For these churches, you’ll ask to get a list of all current members and contact 100 members at random. What kind of design have you used?
**Answer:** stratified sampling
6. Question 6
In an experiment, what purpose does blocking serve?
**Answer:** Control for variables that might influence the response.
7. Question 7
Which of the following is one of the four principles of experimental design?
**Answer:** randomize
|
github_jupyter
|
```
# import packages
import csv
import numpy as np
import warnings
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
warnings.filterwarnings('ignore')
# read .tsv files
columns = {0:'ID', 1:'label', 2:'statement', 3:'subject', 4:'speaker', 5:'job_title',
6:'state', 7:'party', 8:'barely_true', 9:'false', 10:'half_true', 11:'mostly_true',
12:'pants_on_fire', 13:'context'}
def readTsvFile(file_name):
    # use a context manager so the file is closed after reading
    with open(file_name) as tsv_file:
        read_tsv = csv.reader(tsv_file, delimiter='\t')
        dataset = [row for row in read_tsv]
    # print('examples:', len(dataset))
    # print('features:', len(dataset[0]))
    # print('row1_example:', dataset[0])
    return dataset
# plotting bar charts
def plottingData(column_name, arr_X, arr_y):
fig = plt.figure()
ax = fig.add_axes([1, 1, 2, 2])
ax.bar(arr_X, arr_y)
ax.set_xlabel(column_name)
ax.set_ylabel('count')
plt.xticks(rotation=90)
plt.show()
def removeSpace(string):
    # strip trailing spaces; unlike the manual index scan, this also
    # handles empty and all-space strings without raising IndexError
    return string.rstrip(' ')
#calculate the number of items in each category of each column
def categoryChecker(dataset, column_name, n):
category = {}
for row in dataset:
if len(row) > n:
if row[n] == '' or row[n] == 'N/A':
row[n] = 'None' #missing data will rename as 'None'
cleaned_string = removeSpace(row[n])
if cleaned_string in category:
category[cleaned_string] += 1
if cleaned_string not in category:
category[cleaned_string] = 1
# else:
# print('Suspicious case:', row[0]) #len(row) <= n?
category = {k: v for k, v in sorted(category.items(), key=lambda x: x[1],
reverse=True)} #sorting dictionary
count = 0
arr_X = []
arr_y = []
for k, v in category.items():
count += v
arr_X.append(k)
arr_y.append(v)
# print(column_name, ':', category)
# print('The total number of examples:', count)
# print('The number of categories:', len(arr_X))
plottingData(column_name, arr_X, arr_y) #call plotting function
def dataVisualization(dataset):
for i in [1, 6, 7]:
categoryChecker(dataset, columns[i], i)
def getStatement(dataset, column_num):
statement = []
for row in dataset:
if len(row) < column_num:
statement.append('')
# print(row[0])
else:
statement.append(row[column_num])
return statement
def trainRunVectorizer(dataset_words):
cv = CountVectorizer(stop_words='english')
doc = np.array([dataset_words])
dataset_cv = cv.fit_transform(doc.ravel())
# print(cv.vocabulary_)
# print(dataset_cv.toarray())
# print(dataset_cv.shape)
return dataset_cv, cv
def runVectorizer(dataset_words, cv):
doc = np.array([dataset_words])
dataset_cv = cv.transform(doc.ravel())
# print(cv.vocabulary_)
# print(dataset_cv.toarray())
# print(dataset_cv.shape)
return dataset_cv
def runTfidfTransformer(vectorized_statement):
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
np.set_printoptions(precision=2)
tfidf_transformed = tfidf.fit_transform(vectorized_statement)
tfidf_transformed_array = tfidf_transformed.toarray()
return tfidf_transformed_array
def categorizedDataset(dataset, column_nums=[3, 5, 6, 7]):
pre_categorized_dataset = []
for column_num in column_nums:
seen = {}
categorized_row = []
counter = 0
for row in dataset:
if len(row) > column_num:
data = row[column_num]
else:
data = ''
if data in seen:
categorized_row.append(seen[data])
if data not in seen:
seen[data] = counter
categorized_row.append(seen[data])
counter += 1
pre_categorized_dataset.append(categorized_row)
categorized_dataset = np.array(pre_categorized_dataset).transpose()
return categorized_dataset
def creditHistory(dataset, column_nums=[8, 9, 10, 11, 12]):
pre_credit_history_dataset = []
for row in dataset:
credit_row = []
for column_num in column_nums:
if len(row) > column_num:
data = row[column_num]
else:
data = 0
credit_row.append(data)
pre_credit_history_dataset.append(credit_row)
credit_history_dataset = np.array(pre_credit_history_dataset)
return credit_history_dataset
def getTargetDataset(dataset, column_num = 1):
pre_target_dataset = []
seen = {}
counter = 0
for row in dataset:
if row[column_num] in seen:
pre_target_dataset.append(seen[row[column_num]])
if row[column_num] not in seen:
seen[row[column_num]] = counter
pre_target_dataset.append(seen[row[column_num]])
counter += 1
target_dataset = np.array(pre_target_dataset).transpose()
return target_dataset
def runTrainDataset():
train_dataset = readTsvFile('train.tsv')
# dataVisualization(train_dataset)
y_train = getTargetDataset(train_dataset)
train_dataset_statement = getStatement(train_dataset, 2)
train_dataset_context = getStatement(train_dataset, 13)
categorized_train_dataset = categorizedDataset(train_dataset)
credit_history_train_dataset = creditHistory(train_dataset)
del train_dataset
train_vectorized_statement, cv_statement = trainRunVectorizer(train_dataset_statement)
train_vectorized_context, cv_context = trainRunVectorizer(train_dataset_context)
del train_dataset_statement
del train_dataset_context
train_tfidfed_statement = runTfidfTransformer(train_vectorized_statement)
train_tfidfed_context = runTfidfTransformer(train_vectorized_context)
del train_vectorized_statement
del train_vectorized_context
train_vectrized_features = np.column_stack((train_tfidfed_statement, train_tfidfed_context))
del train_tfidfed_statement
del train_tfidfed_context
X_train = np.column_stack((train_vectrized_features, categorized_train_dataset))
del train_vectrized_features
del categorized_train_dataset
X_train = np.column_stack((X_train, credit_history_train_dataset))
del credit_history_train_dataset
return X_train, y_train, cv_statement, cv_context
def runValDataset(cv_statement, cv_context):
val_dataset = readTsvFile('valid.tsv')
# dataVisualization(val_dataset)
y_val = getTargetDataset(val_dataset)
val_dataset_statement = getStatement(val_dataset, 2)
val_dataset_context = getStatement(val_dataset, 13)
categorized_val_dataset = categorizedDataset(val_dataset)
credit_history_val_dataset = creditHistory(val_dataset)
val_dataset = None
val_vectorized_statement = runVectorizer(val_dataset_statement, cv_statement)
val_vectorized_context = runVectorizer(val_dataset_context, cv_context)
val_dataset_statement = val_dataset_context = None
val_tfidfed_statement = runTfidfTransformer(val_vectorized_statement)
val_tfidfed_context = runTfidfTransformer(val_vectorized_context)
val_vectorized_statement = val_vectorized_context = None
val_vectrized_features = np.column_stack((val_tfidfed_statement, val_tfidfed_context))
val_tfidfed_statement = val_tfidfed_context = None
X_val = np.column_stack((val_vectrized_features, categorized_val_dataset))
val_vectrized_features = categorized_val_dataset = None
X_val = np.column_stack((X_val, credit_history_val_dataset))
credit_history_val_dataset = None
return X_val, y_val
def runTestDataset(cv_statement, cv_context):
test_dataset = readTsvFile('test.tsv')
# dataVisualization(test_dataset)
y_test = getTargetDataset(test_dataset)
test_dataset_statement = getStatement(test_dataset, 2)
test_dataset_context = getStatement(test_dataset, 13)
categorized_test_dataset = categorizedDataset(test_dataset)
credit_history_test_dataset = creditHistory(test_dataset)
test_dataset = None
test_vectorized_statement = runVectorizer(test_dataset_statement, cv_statement)
test_vectorized_context = runVectorizer(test_dataset_context, cv_context)
test_dataset_statement = test_dataset_context = None
test_tfidfed_statement = runTfidfTransformer(test_vectorized_statement)
test_tfidfed_context = runTfidfTransformer(test_vectorized_context)
test_vectorized_statement = test_vectorized_context = None
test_vectrized_features = np.column_stack((test_tfidfed_statement, test_tfidfed_context))
test_tfidfed_statement = test_tfidfed_context = None
X_test = np.column_stack((test_vectrized_features, categorized_test_dataset))
test_vectrized_features = categorized_test_dataset = None
X_test = np.column_stack((X_test, credit_history_test_dataset))
credit_history_test_dataset = None
return X_test, y_test
X_train, y_train, cv_statement, cv_context = runTrainDataset()
X_val, y_val = runValDataset(cv_statement, cv_context)
def trainKNN(X_train, X_val, y_train, y_val, num_neighbor, KNN_type, weight):
n_neighbor = num_neighbor
p_value = KNN_type
best_KNN_val_acc = 0
best_n_neighbor = []
best_p_value = []
best_weight = []
accuracies = 0
counter = 0
for neighbor in n_neighbor:
for pv in p_value:
for w in weight:
knn = KNeighborsClassifier(n_neighbors = neighbor, p = pv, weights = w)
knn.fit(X_train, y_train)
KNN_val_acc = knn.score(X_val, y_val)
accuracies += KNN_val_acc
counter += 1
if KNN_val_acc > best_KNN_val_acc:
best_KNN_val_acc = KNN_val_acc
best_n_neighbor = [neighbor]
best_p_value = [pv]
best_weight = [w]
elif KNN_val_acc == best_KNN_val_acc:
best_n_neighbor.append(neighbor)
best_p_value.append(pv)
best_weight.append(w)
print('Accuracy:', KNN_val_acc, ',', 'n:', neighbor, ',',
'm:', pv, ',', 'w:', w)
mean_accuracy = accuracies/counter
print('Best Accuracy:', best_KNN_val_acc)
print('num of neighbors:', best_n_neighbor)
print('KNN type:', best_p_value)
print('Weight:', best_weight)
print('Mean Accuracy:', mean_accuracy)
return best_n_neighbor, best_p_value, best_weight
best_n_neighbor, best_p_value, best_weight = trainKNN(X_train.astype(np.float32),
X_val.astype(np.float32),
y_train.astype(np.float32),
y_val.astype(np.float32),
range(3, 16, 3), [1,2],
['uniform', 'distance'])
del X_val
del y_val
X_test, y_test = runTestDataset(cv_statement, cv_context)
del cv_statement
del cv_context
def testKNN(X_train, X_test, y_train, y_test, num_neighbor, KNN_type, best_weight):
    n_neighbor = num_neighbor[0]
    p_value = KNN_type[0]
    w = best_weight[0]
    knn = KNeighborsClassifier(n_neighbors = n_neighbor, p = p_value, weights = w)
    knn.fit(X_train, y_train)
    KNN_test_acc = knn.score(X_test, y_test)
    print('KNN test accuracy:', KNN_test_acc)
    return KNN_test_acc
testKNN(X_train.astype(np.float32), X_test.astype(np.float32),
y_train.astype(np.float32), y_test.astype(np.float32), best_n_neighbor,
best_p_value, best_weight)
```
|
github_jupyter
|
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import style
import warnings
from math import sqrt
from collections import Counter
from collections import defaultdict
style.use('fivethirtyeight')
import pandas as pd
import random
df = pd.read_csv('Dataset.csv')
original_df = pd.DataFrame.copy(df)
df.replace('?',-9999, inplace=True)
df.drop(['id'], 1, inplace=True)
df.drop(['label'], 1, inplace=True)
full_data = df.astype(float).values.tolist()
test_size=0.2
train_data = full_data[:-int(test_size*len(full_data))]
test_data = full_data[-int(test_size*len(full_data)):]
class K_means:
def __init__(self, k=3, tol=0.001, max_iter=300):
self.k = k
self.tol = tol
self.max_iter = max_iter
def fit(self,data):
#centroid dict
self.centroids = {}
# select the first k points from the data as the initial centroids
self.track={}
for i in range(self.k):
self.track[i]=[]
for i in range(self.k):
self.centroids[i] = data[i]
self.track[i].append(data[i])
# run the update loop for at most max_iter iterations
for i in range(self.max_iter):
self.classifications = {} #{2: [], 4: []}
for i in range(self.k):
self.classifications[i] = []
for featureset in data: #finding distance from centroid , finding mini value , putting them in classification
distances = [np.linalg.norm(featureset - self.centroids[centroid]) for centroid in
self.centroids]
classification = distances.index(min(distances)) #find the index of the min distance
self.classifications[classification].append(featureset)
prev_centroids = dict(self.centroids)
for classification in self.classifications:
self.centroids[classification] = np.average(self.classifications[classification],axis=0)
self.track[classification].append(np.average(self.classifications[classification],axis=0))
#print(self.centroids)
optimized = True
for c in self.centroids:
original_centroid = prev_centroids[c]
current_centroid = self.centroids[c]
if np.sum(np.abs((current_centroid - original_centroid) / original_centroid * 100.0)) > self.tol:
optimized = False
if optimized:
break
def predict(self,data):
distances = [np.linalg.norm(data-self.centroids[centroid]) for centroid in self.centroids]
#print(distances)
classification = distances.index(min(distances))
return classification
clf=K_means()
clf.fit(np.array(train_data))
clf.predict(np.array(test_data))
labels = original_df['label'].tolist()[-int(0.2*len(full_data)):]
#takes the labels for the held-out test rows (the last 20%, matching test_data)
test_set = []
for i in labels:
if i == 2:
test_set.append(0)
else:
test_set.append(1)
acc=[]
for i in range(1,4):
clf = K_means(k=i)
clf.fit(np.array(train_data))
correct = 0
total = 0
for j in range(len(test_data)):
if(clf.predict(test_data[j]) == test_set[j]):
correct+=1
total += 1
print("Acc:",i," ",(correct/total)*100,"%")
acc.append(correct/total)
plt.plot([1,2,3],acc)
plt.show()
```
|
github_jupyter
|
# Module 3 Graded Assessment
```
"""
1.Question 1
Fill in the blanks of this code to print out the numbers 1 through 7.
"""
number = 1
while number <= 7:
print(number, end=" ")
number +=1
"""
2.Question 2
The show_letters function should print out each letter of a word on a separate line.
Fill in the blanks to make that happen.
"""
def show_letters(word):
for letter in word:
print(letter)
show_letters("Hello")
# Should print one line per letter
"""
3.Question 3
Complete the function digits(n) that returns how many digits the number has.
For example: 25 has 2 digits and 144 has 3 digits. Tip: you can figure out the digits of a number by dividing
it by 10 once per digit until there are no digits left.
"""
def digits(n):
count = str(n)
return len(count)
print(digits(25)) # Should print 2
print(digits(144)) # Should print 3
print(digits(1000)) # Should print 4
print(digits(0))    # Should print 1
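# As an aside, the hint in the question suggests repeated integer division;
# an alternative implementation along those lines (the name is made up here):
def digits_by_division(n):
    count = 1
    while n >= 10:
        n = n // 10
        count += 1
    return count
print(digits_by_division(25))   # Should print 2
print(digits_by_division(1000)) # Should print 4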
"""
4.Question 4
This function prints out a multiplication table (where each number is the result of multiplying the first number of its row by the number at the top of its column). Fill in the blanks so that calling multiplication_table(1, 3) will print out:
1 2 3
2 4 6
3 6 9
"""
def multiplication_table(start, stop):
for x in range(start,stop+1):
for y in range(start,stop+1):
print(str(x*y), end=" ")
print()
multiplication_table(1, 3)
# Should print the multiplication table shown above
"""
5.Question 5
The counter function counts down from start to stop when start is bigger than stop,
and counts up from start to stop otherwise.
Fill in the blanks to make this work correctly.
"""
def counter(start, stop):
x = start
if x>stop:
return_string = "Counting down: "
while x >= stop:
return_string += str(x)
if x>stop:
return_string += ","
x = x-1
else:
return_string = "Counting up: "
while x <= stop:
return_string += str(x)
if x<stop:
return_string += ","
x = x+1
return return_string
print(counter(1, 10)) # Should be "Counting up: 1,2,3,4,5,6,7,8,9,10"
print(counter(2, 1)) # Should be "Counting down: 2,1"
print(counter(5, 5)) # Should be "Counting up: 5"
"""
6.Question 6
The loop function is similar to range(), but handles the parameters somewhat differently: it takes in 3 parameters:
the starting point, the stopping point, and the increment step. When the starting point is greater
than the stopping point, it forces the steps to be negative. When, instead, the starting point is less
than the stopping point, it forces the step to be positive. Also, if the step is 0, it changes to 1 or -1.
The result is returned as a one-line, space-separated string of numbers. For example, loop(11,2,3)
should return 11 8 5 and loop(1,5,0) should return 1 2 3 4. Fill in the missing parts to make that happen.
"""
def loop(start, stop, step):
return_string = ""
if step == 0:
step=1
if start>stop:
step = abs(step) * -1
else:
step = abs(step)
for count in range(start, stop, step):
return_string += str(count) + " "
return return_string.strip()
print(loop(11,2,3)) # Should be 11 8 5
print(loop(1,5,0)) # Should be 1 2 3 4
print(loop(-1,-2,0)) # Should be -1
print(loop(10,25,-2)) # Should be 10 12 14 16 18 20 22 24
print(loop(1,1,1)) # Should be empty
#8.Question 8
#What is the value of x at the end of the following code?
for x in range(1, 10, 3):
print(x)
#7
#9.Question 9
#What is the value of y at the end of the following code?
for x in range(10):
for y in range(x):
print(y)
#8
```
|
github_jupyter
|
### Instructions
The lecture uses random forest to predict the state of the loan with data taken from Lending Club (2015). With minimal feature engineering, they were able to get an accuracy of 98% with cross validation. However, the accuracies had a lot of variance, ranging from 98% to 86%, indicating there are lots of useless features.
I am tasked with 1) removing as many features as possible without dropping the average below 90% accuracy in a 10 fold cross validation and 2) if the first task is possible without using anything related to payment amount or outstanding principal.
### 1 - Import Data
In this dataset, there are 420k+ rows, 110 features, and the target variable (loan status).
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn import ensemble
from sklearn.model_selection import cross_val_score
import warnings
warnings.filterwarnings('ignore')
df = pd.read_csv('LoanStats3d.csv', skipinitialspace=True, header=1)
df.info()
```
The last two rows of the dataset hold no data, so they will be deleted.
```
df.tail()
df = df[:-2]
```
### 2 - Removing Features
In the lecture, they removed any columns with missing values. I'm not sure this is the best method, as there could be valuable information in the missing values. Instead, the method I employ is to identify the categorical features. If a feature has fewer than 30 unique values, I create dummy variables from it. If it has 30 or more unique values, I use pandas' ability to map each unique value to a numeric code, allowing me to retain all columns and rows.
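As a small illustration of the two encodings on a toy column (made-up data, not the loan dataset):

```python
import pandas as pd

s = pd.Series(['cat', 'dog', None, 'cat'])

# few unique values -> one indicator column per value
dummies = pd.get_dummies(s, prefix='animal', drop_first=True)

# many unique values -> a single integer code per value
# (missing values get code -1, hence the +1 used below)
codes = s.astype('category').cat.codes + 1
print(codes.tolist())  # [1, 2, 0, 1]
```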
```
cat_col = [col for col in df.columns if df[col].dtype == 'object']
num_col = [col for col in df.columns if df[col].dtype != 'object']
cat_col.remove('loan_status')
dummy_df = pd.DataFrame()
for col in cat_col[:]:  # iterate over a copy: removing items mid-iteration skips elements
    if df[col].nunique() < 30:
        dummy_df = pd.concat([dummy_df, pd.get_dummies(df[col], prefix=col, drop_first=True)], axis=1)
        cat_col.remove(col)
```
For whatever reason, the id and interest rates are labeled as 'objects'. The following is to convert them into numeric features.
```
df['id'] = pd.to_numeric(df['id'], errors='coerce')
df['int_rate'] = pd.to_numeric(df['int_rate'].str.strip('%'), errors='coerce')
cat_col.remove('id')
cat_col.remove('int_rate')
```
Using pandas' `cat.codes` is as simple as converting the object columns to the categorical dtype. Then add one to the codes, since null values are given a code of -1, which random forest will not take.
```
for col in cat_col + ['loan_status']:
df[col] = df[col].astype('category')
df[col] = df[col].cat.codes+1
df_combined = pd.concat([df[cat_col+num_col], df['loan_status'], dummy_df], axis = 1)
combined_cols_lst = list(df_combined.columns)
combined_cols_lst.remove('loan_status')
```
At this point, I have 136 features. How do we remove the features that do not help predict the loan status? One way is to find the features that are highly correlated with the loan status. Below I've found 9 features that have a correlation of at least 0.15.
```
print('There are {} features.'.format(len(combined_cols_lst)))
important_cols = [col for col in combined_cols_lst if df_combined[[col, 'loan_status']].corr().abs()['loan_status'][0] > 0.15]
important_cols
```
### 3 - Random Forest Classifier
I'm finally ready to apply the data to a random forest classifier. I will be using a 10 fold cross validation, the same as the lecture, for comparison. Recall that in the lecture, the average accuracy was ~97%, but it had a range of ~11%. **On the other hand, this model with only 9 features has an accuracy of ~97%, but a range of only ~2.5%.**
```
rfc = ensemble.RandomForestClassifier()
X = df_combined[important_cols]
Y = df_combined['loan_status']
cv = cross_val_score(rfc, X, Y, cv = 10)
print('The cross validation score has a range of {:0.3f} and mean of {:0.3f}'.format(cv.max() - cv.min(), cv.mean()))
```
#### 3.1 - Removing Payment Amount and Outstanding Principal
The second question to answer is whether it is possible to reach an accuracy above 90% without using features related to payment amounts or outstanding principal. Looking at the features deemed 'important', there are only three that are not related to payment amount or principals. Of these three features, two of them have very low correlations. My guess is it will be pretty difficult to achieve 90% accuracy.
```
for col in important_cols:
print(col, df_combined[[col, 'loan_status']].corr().abs()['loan_status'][0])
important_cols_2 = ['total_rec_prncp',
'recoveries',
'collection_recovery_fee']
```
As expected, the average accuracy is ~86% and is not able to meet the target accuracy.
```
rfc2 = ensemble.RandomForestClassifier()
X2 = df_combined[important_cols_2]
Y2 = df_combined['loan_status']
cv2 = cross_val_score(rfc2, X2, Y2, cv = 10)
print('The cross validation score has a range of {:0.3f} and mean of {:0.3f}'.format(cv2.max() - cv2.min(), cv2.mean()))
```
|
github_jupyter
|
# Federated Keras MNIST Tutorial
```
#Install Tensorflow and MNIST dataset if not installed
!pip install tensorflow==2.3.1
#Alternatively you could use the intel-tensorflow build
# !pip install intel-tensorflow==2.3.0
import numpy as np
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras import backend as K
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.datasets import mnist
import openfl.native as fx
from openfl.federated import FederatedModel,FederatedDataSet
def test_intel_tensorflow():
"""
Check if Intel version of TensorFlow is installed
"""
import tensorflow as tf
print("We are using Tensorflow version {}".format(tf.__version__))
major_version = int(tf.__version__.split(".")[0])
if major_version >= 2:
from tensorflow.python import _pywrap_util_port
print("Intel-optimizations (DNNL) enabled:",
_pywrap_util_port.IsMklEnabled())
else:
print("Intel-optimizations (DNNL) enabled:")
test_intel_tensorflow()
```
After importing the required packages, the next step is setting up our openfl workspace. To do this, simply run the `fx.init()` command as follows:
```
#Setup default workspace, logging, etc.
fx.init('keras_cnn_mnist')
```
Now we are ready to define our dataset and model to perform federated learning on. The dataset should be composed of numpy arrays. We start with a simple fully connected model that is trained on the MNIST dataset.
```
#Import and process training, validation, and test images/labels
# Set the ratio of validation imgs, can't be 0.0
VALID_PERCENT = 0.3
(X_train, y_train), (X_test, y_test) = mnist.load_data()
split_on = int((1 - VALID_PERCENT) * len(X_train))
train_images = X_train[0:split_on,:,:]
train_labels = to_categorical(y_train)[0:split_on,:]
valid_images = X_train[split_on:,:,:]
valid_labels = to_categorical(y_train)[split_on:,:]
test_images = X_test
test_labels = to_categorical(y_test)
def preprocess(images):
#Normalize
images = (images / 255) - 0.5
#Flatten
images = images.reshape((-1, 784))
return images
# Preprocess the images.
train_images = preprocess(train_images)
valid_images = preprocess(valid_images)
test_images = preprocess(test_images)
feature_shape = train_images.shape[1]
classes = 10
fl_data = FederatedDataSet(train_images,train_labels,valid_images,valid_labels,batch_size=32,num_classes=classes)
def build_model(feature_shape,classes):
#Defines the MNIST model
model = Sequential()
model.add(Dense(64, input_shape=feature_shape, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(classes, activation='softmax'))
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'],)
return model
#Create a federated model using the build model function and dataset
fl_model = FederatedModel(build_model,data_loader=fl_data)
```
The `FederatedModel` object is a wrapper around your Keras, TensorFlow or PyTorch model that makes it compatible with openfl. It provides built-in federated training and validation functions that we will see used below. Using its `setup` function, collaborator models and datasets can be automatically defined for the experiment.
```
collaborator_models = fl_model.setup(num_collaborators=2)
collaborators = {'one':collaborator_models[0],'two':collaborator_models[1]}#, 'three':collaborator_models[2]}
#Original MNIST dataset
print(f'Original training data size: {len(train_images)}')
print(f'Original validation data size: {len(valid_images)}\n')
#Collaborator one's data
print(f'Collaborator one\'s training data size: {len(collaborator_models[0].data_loader.X_train)}')
print(f'Collaborator one\'s validation data size: {len(collaborator_models[0].data_loader.X_valid)}\n')
#Collaborator two's data
print(f'Collaborator two\'s training data size: {len(collaborator_models[1].data_loader.X_train)}')
print(f'Collaborator two\'s validation data size: {len(collaborator_models[1].data_loader.X_valid)}\n')
#Collaborator three's data
#print(f'Collaborator three\'s training data size: {len(collaborator_models[2].data_loader.X_train)}')
#print(f'Collaborator three\'s validation data size: {len(collaborator_models[2].data_loader.X_valid)}')
```
We can see the current plan values by running the `fx.get_plan()` function
```
#Get the current values of the plan. Each of these can be overridden
print(fx.get_plan())
```
Now we are ready to run our experiment. If we want to pass in custom plan settings, we can easily do that with the `override_config` parameter
```
#Run experiment, return trained FederatedModel
final_fl_model = fx.run_experiment(collaborators,override_config={'aggregator.settings.rounds_to_train':5})
#Save final model and load into keras
final_fl_model.save_native('final_model')
model = tf.keras.models.load_model('./final_model')
#Test the final model on our test set
model.evaluate(test_images,test_labels)
```
|
github_jupyter
|
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Rock, Paper & Scissors with TensorFlow Hub - TFLite
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/TensorFlow%20Deployment/Course%202%20-%20TensorFlow%20Lite/Week%203/Exercise/TFLite_Week3_Exercise.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/lmoroney/dlaicourse/blob/master/TensorFlow%20Deployment/Course%202%20-%20TensorFlow%20Lite/Week%203/Exercise/TFLite_Week3_Exercise.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
## Setup
```
try:
%tensorflow_version 2.x
except:
pass
import numpy as np
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
from tqdm import tqdm
print("\u2022 Using TensorFlow Version:", tf.__version__)
print("\u2022 Using TensorFlow Hub Version: ", hub.__version__)
print('\u2022 GPU Device Found.' if tf.test.is_gpu_available() else '\u2022 GPU Device Not Found. Running on CPU')
```
## Select the Hub/TF2 Module to Use
Hub modules for TF 1.x won't work here; please use one of the selections provided.
```
module_selection = ("mobilenet_v2", 224, 1280) #@param ["(\"mobilenet_v2\", 224, 1280)", "(\"inception_v3\", 299, 2048)"] {type:"raw", allow-input: true}
handle_base, pixels, FV_SIZE = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/tf2-preview/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {} and output dimension {}".format(MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))
```
## Data Preprocessing
Use [TensorFlow Datasets](http://tensorflow.org/datasets) to load the Rock, Paper, Scissors dataset.
The `tfds` package is the easiest way to load pre-defined data. If you have your own data and are interested in using it with TensorFlow, see [loading image data](../load_data/images.ipynb).
```
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
```
The `tfds.load` method downloads and caches the data, and returns a `tf.data.Dataset` object. These objects provide powerful, efficient methods for manipulating data and piping it into your model.
Since the dataset doesn't define the standard splits we need, use the subsplit feature to divide it into (train, validation, test) with 80%, 10%, and 10% of the data respectively.
```
splits = tfds.Split.ALL.subsplit(weighted=(80, 10, 10))
# Go to the TensorFlow Dataset's website and search for the Rock, Paper, Scissors dataset and load it here
splits, info = tfds.load( # YOUR CODE HERE )
(train_examples, validation_examples, test_examples) = splits
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
```
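The 80/10/10 weighted subsplit can be illustrated on a plain list (a hedged sketch of the splitting idea only, not the `tfds` API):

```python
def three_way_split(items, weights=(80, 10, 10)):
    """Cut `items` into three contiguous slices proportional to `weights`."""
    total = sum(weights)
    cut1 = len(items) * weights[0] // total
    cut2 = cut1 + len(items) * weights[1] // total
    return items[:cut1], items[cut1:cut2], items[cut2:]

train, val, test = three_way_split(list(range(100)))
print(len(train), len(val), len(test))  # 80 10 10
```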
### Format the Data
Use the `tf.image` module to format the images for the task.
Resize the images to a fixed input size, and rescale the input channels.
```
def format_image(image, label):
image = tf.image.resize(image, IMAGE_SIZE) / 255.0
return image, label
```
Now shuffle and batch the data
```
BATCH_SIZE = 32 #@param {type:"integer"}
# Prepare the examples by preprocessing them and then batching them (and optionally prefetching them)
# If you wish you can shuffle train set here
train_batches = # YOUR CODE HERE
validation_batches = # YOUR CODE HERE
test_batches = # YOUR CODE HERE
```
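A common completion chains `shuffle` (train set only), `map(format_image)`, `batch(BATCH_SIZE)`, and `prefetch(1)` on each split (hedged; exact buffer sizes are a choice). The batching arithmetic itself is simple to illustrate in plain Python:

```python
def batched(items, batch_size):
    """Yield consecutive slices of length batch_size (last may be shorter)."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

sizes = [len(b) for b in batched(list(range(100)), 32)]
print(sizes)  # [32, 32, 32, 4]
```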
Inspect a batch
```
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
```
## Defining the Model
All it takes is to put a linear classifier on top of the `feature_extractor_layer` with the Hub module.
For speed, we start out with a non-trainable `feature_extractor_layer`, but you can also enable fine-tuning for greater accuracy.
```
do_fine_tuning = False #@param {type:"boolean"}
feature_extractor = hub.KerasLayer(MODULE_HANDLE,
input_shape=IMAGE_SIZE + (3,),
output_shape=[FV_SIZE],
trainable=do_fine_tuning)
print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
feature_extractor,
tf.keras.layers.Dense(num_classes, activation='softmax')
])
model.summary()
#@title (Optional) Unfreeze some layers
NUM_LAYERS = 10 #@param {type:"slider", min:1, max:50, step:1}
if do_fine_tuning:
feature_extractor.trainable = True
for layer in model.layers[-NUM_LAYERS:]:
layer.trainable = True
else:
feature_extractor.trainable = False
```
## Training the Model
```
if do_fine_tuning:
model.compile(optimizer=tf.keras.optimizers.SGD(lr=0.002, momentum=0.9),
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
else:
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
EPOCHS = 5
hist = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
```
## Export the Model
```
RPS_SAVED_MODEL = "rps_saved_model"
```
Export the SavedModel
```
# Use TensorFlow's SavedModel API to export the SavedModel from the trained Keras model
# YOUR CODE HERE
%%bash -s $RPS_SAVED_MODEL
saved_model_cli show --dir $1 --tag_set serve --signature_def serving_default
loaded = tf.saved_model.load(RPS_SAVED_MODEL)
print(list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)
```
## Convert Using TFLite's Converter
```
# Initialize the TFLite converter to load the SavedModel
converter = # YOUR CODE HERE
# Set the optimization strategy for 'size' in the converter
converter.optimizations = [# YOUR CODE HERE]
# Use the tool to finally convert the model
tflite_model = # YOUR CODE HERE
tflite_model_file = 'converted_model.tflite'
with open(tflite_model_file, "wb") as f:
f.write(tflite_model)
```
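One possible completion (hedged, assuming the TF 2.x Python API) is `tf.lite.TFLiteConverter.from_saved_model(RPS_SAVED_MODEL)`, `converter.optimizations = [tf.lite.Optimize.DEFAULT]`, and `converter.convert()`. The same flow, shown self-contained on a tiny in-memory Keras model:

```python
import tensorflow as tf

# Tiny stand-in model so the conversion flow can run without a SavedModel on disk.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optimize for size
tflite_bytes = converter.convert()
print(len(tflite_bytes) > 0)
```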
## Test the TFLite Model Using the Python Interpreter
```
# Load TFLite model and allocate tensors.
with open(tflite_model_file, 'rb') as fid:
tflite_model = fid.read()
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Gather results for the randomly sampled test images
predictions = []
test_labels, test_imgs = [], []
for img, label in tqdm(test_batches.take(10)):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label.numpy()[0])
test_imgs.append(img)
#@title Utility functions for plotting
# Utilities for plotting
class_names = ['rock', 'paper', 'scissors']
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
print(type(predicted_label), type(true_label))
if predicted_label == true_label:
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]), color=color)
#@title Visualize the outputs { run: "auto" }
index = 0 #@param {type:"slider", min:0, max:9, step:1}
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_imgs)
plt.show()
```
Create a file to save the labels.
```
with open('labels.txt', 'w') as f:
f.write('\n'.join(class_names))
```
If you are running this notebook in a Colab, you can run the cell below to download the model and labels to your local disk.
**Note**: If the files do not download when you run the cell, try running the cell a second time. Your browser might prompt you to allow multiple files to be downloaded.
```
try:
from google.colab import files
files.download('converted_model.tflite')
files.download('labels.txt')
except:
pass
```
# Prepare the Test Images for Download (Optional)
This part downloads additional test images for the mobile apps, in case you need to try out more samples.
```
!mkdir -p test_images
from PIL import Image
for index, (image, label) in enumerate(test_batches.take(50)):
image = tf.cast(image * 255.0, tf.uint8)
image = tf.squeeze(image).numpy()
pil_image = Image.fromarray(image)
pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index))
!ls test_images
!zip -qq rps_test_images.zip -r test_images/
```
If you are running this notebook in a Colab, you can run the cell below to download the Zip file with the images to your local disk.
**Note**: If the Zip file does not download when you run the cell, try running the cell a second time.
```
try:
files.download('rps_test_images.zip')
except:
pass
```
|
github_jupyter
|
Paper<br>
https://arxiv.org/abs/2109.07161<br>
<br>
GitHub<br>
https://github.com/saic-mdal/lama<br>
<br>
<a href="https://colab.research.google.com/github/kaz12tech/ai_demos/blob/master/Lama_demo.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Environment Setup
## Get the source code from GitHub
## Install libraries
```
%cd /content
!git clone https://github.com/saic-mdal/lama.git
!pip install -r lama/requirements.txt --quiet
!pip install wget --quiet
!pip install --upgrade webdataset==0.1.103
!pip uninstall opencv-python-headless -y --quiet
!pip install opencv-python-headless==4.1.2.30 --quiet
# install torch 1.7.1 (CUDA 11.0) and the matching torchvision/torchaudio/torchtext
!pip uninstall torch torchvision torchaudio torchtext -y
!pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 torchtext -f https://download.pytorch.org/whl/torch_stable.html
# avoid AttributeError: 'builtin_function_or_method' object has no attribute 'rfftn'
!sed -E -i "15i import torch.fft" /content/lama/saicinpainting/training/modules/ffc.py
```
## Set up the pretrained model
```
%cd /content/lama
!curl -L $(yadisk-direct https://disk.yandex.ru/d/ouP6l8VJ0HpMZg) -o big-lama.zip
!unzip big-lama.zip
```
## Import libraries
```
import base64, os
from IPython.display import HTML, Image
from google.colab.output import eval_js
from base64 import b64decode
import matplotlib.pyplot as plt
import numpy as np
import wget
from shutil import copyfile
import shutil
```
# Set up the canvas
```
canvas_html = """
<style>
.button {
background-color: #4CAF50;
border: none;
color: white;
padding: 15px 32px;
text-align: center;
text-decoration: none;
display: inline-block;
font-size: 16px;
margin: 4px 2px;
cursor: pointer;
}
</style>
<canvas1 width=%d height=%d>
</canvas1>
<canvas width=%d height=%d>
</canvas>
<button class="button">Finish</button>
<script>
var canvas = document.querySelector('canvas')
var ctx = canvas.getContext('2d')
var canvas1 = document.querySelector('canvas1')
var ctx1 = canvas.getContext('2d')
ctx.strokeStyle = 'red';
var img = new Image();
img.src = "data:image/%s;charset=utf-8;base64,%s";
console.log(img)
img.onload = function() {
ctx1.drawImage(img, 0, 0);
};
img.crossOrigin = 'Anonymous';
ctx.clearRect(0, 0, canvas.width, canvas.height);
ctx.lineWidth = %d
var button = document.querySelector('button')
var mouse = {x: 0, y: 0}
canvas.addEventListener('mousemove', function(e) {
mouse.x = e.pageX - this.offsetLeft
mouse.y = e.pageY - this.offsetTop
})
canvas.onmousedown = ()=>{
ctx.beginPath()
ctx.moveTo(mouse.x, mouse.y)
canvas.addEventListener('mousemove', onPaint)
}
canvas.onmouseup = ()=>{
canvas.removeEventListener('mousemove', onPaint)
}
var onPaint = ()=>{
ctx.lineTo(mouse.x, mouse.y)
ctx.stroke()
}
var data = new Promise(resolve=>{
button.onclick = ()=>{
resolve(canvas.toDataURL('image/png'))
}
})
</script>
"""
def draw(imgm, filename='drawing.png', w=400, h=200, line_width=1):
display(HTML(canvas_html % (w, h, w,h, filename.split('.')[-1], imgm, line_width)))
data = eval_js("data")
binary = b64decode(data.split(',')[1])
with open(filename, 'wb') as f:
f.write(binary)
```
# Set up the image
[Sample image 1](https://www.pakutaso.com/shared/img/thumb/PAK85_oyakudachisimasu20140830_TP_V.jpg)<br>
[Sample image 2](https://www.pakutaso.com/shared/img/thumb/TSU88_awaitoykyo_TP_V.jpg)<br>
[Sample image 3](https://www.pakutaso.com/20211208341post-37933.html)
```
%cd /content/lama
from google.colab import files
files = files.upload()
fname = list(files.keys())[0]
shutil.rmtree('./data_for_prediction', ignore_errors=True)
! mkdir data_for_prediction
copyfile(fname, f'./data_for_prediction/{fname}')
os.remove(fname)
fname = f'./data_for_prediction/{fname}'
image64 = base64.b64encode(open(fname, 'rb').read())
image64 = image64.decode('utf-8')
print(f'Will use {fname} for inpainting')
img = np.array(plt.imread(f'{fname}')[:,:,:3])
```
# Inpainting
```
mask_path = f".{fname.split('.')[1]}_mask.png"
draw(image64, filename=mask_path, w=img.shape[1], h=img.shape[0], line_width=0.04*img.shape[1])
with_mask = np.array(plt.imread(mask_path)[:,:,:3])
mask = (with_mask[:,:,0]==1)*(with_mask[:,:,1]==0)*(with_mask[:,:,2]==0)
plt.imsave(mask_path,mask, cmap='gray')
%cd /content/lama
!mkdir output/
copyfile(mask_path,os.path.join("./output/", os.path.basename(mask_path)))
!PYTHONPATH=. TORCH_HOME=$(pwd) python3 bin/predict.py \
model.path=$(pwd)/big-lama \
indir=$(pwd)/data_for_prediction \
outdir=/content/lama/output \
dataset.img_suffix={suffix}
plt.rcParams['figure.dpi'] = 200
plt.imshow(plt.imread(f"/content/lama/output/{fname.split('.')[1].split('/')[2]}_mask.png"))
_=plt.axis('off')
_=plt.title('inpainting result')
plt.show()
fname = None
```
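The mask extraction above keeps only the pure-red pixels (R==1, G==0, B==0) left by the drawn strokes. The logic can be checked on a tiny hypothetical array:

```python
import numpy as np

# Hypothetical 2x2 RGB image: two pure-red pixels mark the region to inpaint.
img = np.array([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                [[1.0, 1.0, 1.0], [1.0, 0.0, 0.0]]])
mask = (img[:, :, 0] == 1) * (img[:, :, 1] == 0) * (img[:, :, 2] == 0)
print(mask.sum())  # 2
```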
|
github_jupyter
|
# Wind Statistics
### Introduction:
The data have been modified to contain some missing values, identified by NaN.
Using pandas should make this exercise
easier, in particular for the bonus question.
You should be able to perform all of these operations without using
a for loop or other looping construct.
1. The data in 'wind.data' has the following format:
```
"""
Yr Mo Dy RPT VAL ROS KIL SHA BIR DUB CLA MUL CLO BEL MAL
61 1 1 15.04 14.96 13.17 9.29 NaN 9.87 13.67 10.25 10.83 12.58 18.50 15.04
61 1 2 14.71 NaN 10.83 6.50 12.62 7.67 11.50 10.04 9.79 9.67 17.54 13.83
61 1 3 18.50 16.88 12.33 10.13 11.17 6.17 11.25 NaN 8.50 7.67 12.75 12.71
"""
```
The first three columns are year, month and day. The
remaining 12 columns are average windspeeds in knots at 12
locations in Ireland on that day.
For more information about the dataset, go [here](wind.desc).
### Step 1. Import the necessary libraries
```
import pandas as pd
import datetime
```
### Step 2. Import the dataset from this [address](https://github.com/guipsamora/pandas_exercises/blob/master/06_Stats/Wind_Stats/wind.data)
### Step 3. Assign it to a variable called data and replace the first 3 columns by a proper datetime index.
```
# parse_dates combines columns 0, 1, 2 into a single datetime column
data_url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/06_Stats/Wind_Stats/wind.data'
data = pd.read_csv(data_url, sep=r"\s+", parse_dates=[[0, 1, 2]])
data.head()
```
### Step 4. Year 2061? Do we really have data from this year? Create a function to fix it and apply it.
```
# The problem is that the dates are 2061 and so on...
# function that uses datetime
def fix_century(x):
year = x.year - 100 if x.year > 1989 else x.year
return datetime.date(year, x.month, x.day)
# apply the function fix_century on the column and replace the values to the right ones
data['Yr_Mo_Dy'] = data['Yr_Mo_Dy'].apply(fix_century)
# data.info()
data.head()
```
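The same fix can also be done without `apply`, using a vectorized `where` (a hedged alternative, shown on a toy series):

```python
import pandas as pd

dates = pd.to_datetime(pd.Series(['2061-01-01', '1978-06-15']))
# keep dates at or before 1989; shift later ones back a century
fixed = dates.where(dates.dt.year <= 1989, dates - pd.DateOffset(years=100))
print(list(fixed.dt.year))  # [1961, 1978]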
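The same fix can also be done without `apply`, using a vectorized `where` (a hedged alternative, shown on a toy series):

```python
import pandas as pd

dates = pd.to_datetime(pd.Series(['2061-01-01', '1978-06-15']))
# keep dates at or before 1989; shift later ones back a century
fixed = dates.where(dates.dt.year <= 1989, dates - pd.DateOffset(years=100))
print(list(fixed.dt.year))  # [1961, 1978]
```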
### Step 5. Set the right dates as the index. Pay attention at the data type, it should be datetime64[ns].
```
# convert Yr_Mo_Dy to the datetime64 type
data["Yr_Mo_Dy"] = pd.to_datetime(data["Yr_Mo_Dy"])
# set 'Yr_Mo_Dy' as the index
data = data.set_index('Yr_Mo_Dy')
data.head()
# data.info()
```
### Step 6. Compute how many values are missing for each location over the entire record.
#### They should be ignored in all calculations below.
```
# "Number of non-missing values for each location: "
data.isnull().sum()
```
### Step 7. Compute how many non-missing values there are in total.
```
# number of rows minus the number of missing values for each location
data.shape[0] - data.isnull().sum()
#or
data.notnull().sum()
```
### Step 8. Calculate the mean windspeed over all the locations and all the times.
#### A single number for the entire dataset.
```
data.sum().sum() / data.notna().sum().sum()
```
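Note that this is not the same as `data.mean().mean()` when columns have different numbers of missing values; a toy frame (hypothetical numbers) makes the difference visible:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, 2.0, np.nan], 'b': [3.0, 3.0, 3.0]})
overall = df.sum().sum() / df.notna().sum().sum()  # (1+2+3+3+3)/5
per_column = df.mean().mean()                      # (1.5 + 3.0)/2
print(overall, per_column)  # 2.4 2.25
```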
### Step 9. Create a DataFrame called loc_stats and calculate the min, max and mean windspeeds and standard deviations of the windspeeds at each location over all the days
#### A different set of numbers for each location.
```
data.describe(percentiles=[])
```
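`describe` works, but the exercise asks for a DataFrame named `loc_stats` with exactly min, max, mean, and std; one way to build it, shown on a hypothetical two-column stand-in for `data`:

```python
import numpy as np
import pandas as pd

data = pd.DataFrame({'RPT': [15.04, 14.71, 18.50],
                     'VAL': [14.96, np.nan, 16.88]})
# one row per statistic, one column per location; NaNs are skipped
loc_stats = data.agg(['min', 'max', 'mean', 'std'])
print(loc_stats.loc['min', 'RPT'], loc_stats.loc['max', 'VAL'])  # 14.71 16.88
```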
### Step 10. Create a DataFrame called day_stats and calculate the min, max and mean windspeed and standard deviations of the windspeeds across all the locations at each day.
#### A different set of numbers for each day.
```
# create the dataframe
day_stats = pd.DataFrame()
# axis=1 computes each statistic across the columns of a row
day_stats['min'] = data.min(axis = 1) # min
day_stats['max'] = data.max(axis = 1) # max
day_stats['mean'] = data.mean(axis = 1) # mean
day_stats['std'] = data.std(axis = 1) # standard deviations
day_stats.head()
```
### Step 11. Find the average windspeed in January for each location.
#### Treat January 1961 and January 1962 both as January.
```
data.loc[data.index.month == 1].mean()
```
### Step 12. Downsample the record to a yearly frequency for each location.
```
data.groupby(data.index.to_period('A')).mean()
```
### Step 13. Downsample the record to a monthly frequency for each location.
```
data.groupby(data.index.to_period('M')).mean()
```
### Step 14. Downsample the record to a weekly frequency for each location.
```
data.groupby(data.index.to_period('W')).mean()
```
### Step 15. Calculate the min, max and mean windspeeds and standard deviations of the windspeeds across all locations for each week (assume that the first week starts on January 2 1961) for the first 52 weeks.
```
# resample data to 'W' week and use the functions
weekly = data.resample('W').agg(['min','max','mean','std'])
# slice it for the first 52 weeks and locations
weekly.loc[weekly.index[1:53], "RPT":"MAL"].head(10)
```
|
github_jupyter
|
# APS 5 - Questions with the Help of Pandas
**Name:** <font color=blue> Gabriel Heusi Pereira Bueno de Camargo </font>
**INDIVIDUAL** assignment
Due date: Sept 26 by 11:59 p.m. via GitHub.
We will work with data from the USGS (United States Geological Survey) to try to determine whether the tremors detected in the Northern Hemisphere have a high probability of being nuclear tests.
```
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import expon
from numpy import arange
import scipy.stats as stats
# open the file
df = pd.read_csv('earthquake.csv')
# list the columns
print(list(df))
```
## List the first rows of the DataFrame
```
df.head()
```
## Q1 - Manipulating the DataFrame
Create a column called `Hemisfério` based on the Latitude.
The rule is the following:
Value | Criterion
---|---
Norte | Positive latitude
Sul | Negative latitude
```
df.loc[(df.Latitude >=0), "Hemisfério"] = "Norte"
df.loc[(df.Latitude <0), "Hemisfério"] = "Sul"
df.head()
df.Magnitude.describe()
```
## Q2 - Fit and Histogram
Plot the histogram of the Magnitude. Interpret it.
```
f = plt.figure(figsize=(11, 5))
faixas = arange(5, 9, 0.65)
plot = df.Magnitude.plot.hist(bins=faixas, title="Magnitude histogram", normed=1, alpha=0.9, color="g")
plt.xlabel("Magnitude")
plt.ylabel("Density")
plt.show()
```
Fit an exponential distribution to the Magnitude data, finding the values of **loc** and **scale**. Interpret loc and scale for the exponential.
Documentation: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.expon.html
Redo the histogram, plotting on top of it the pdf (probability density function) of the exponential with the parameters found in the fit. Be careful with the domain used. Interpret.
```
# fit the exponential distribution to the Magnitude data
loc, scale = stats.expon.fit(df.Magnitude)
fig = plt.figure(figsize=(11, 5))
plot = df.Magnitude.plot.hist(bins=faixas, title='Magnitude histogram', normed=1, alpha=0.9, color='r')
a = sorted(df.Magnitude)
plt.plot(a, stats.expon.pdf(a, loc=loc, scale=scale))
plt.title('Histogram vs. pdf')
```
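For the exponential, `loc` is the shift of the distribution (effectively the sample minimum) and `scale` is the mean distance above that shift (1/λ). A hedged sanity check on synthetic data with known parameters:

```python
from scipy import stats

# draw from a known exponential, then recover loc and scale with .fit
sample = stats.expon.rvs(loc=5.0, scale=0.4, size=5000, random_state=0)
loc, scale = stats.expon.fit(sample)
print(round(loc, 2), round(scale, 2))  # close to 5.0 and 0.4
```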
## Q3 - Cross table
Build a cross table of the variables `Hemisfério` and `Type`.
Your table must be <font color=red>normalized</font>.
```
ct = pd.crosstab(df.Hemisfério,df.Type,margins=True,normalize = True)
ct
```
### Q3.1 - What is the probability of an earthquake occurring in the Northern Hemisphere?
Add the calculation in the cell below:
```
probNorte = ct.Earthquake.Norte/ct.Earthquake.All
print(probNorte)
```
Explain your reasoning.
The probability here is computed by comparing the earthquake cases that occurred in the North with the total earthquake cases. Therefore, to obtain the probability of an earthquake occurring in the Northern Hemisphere, simply divide that value, shown in the crosstab, by the total probability.
### Q3.2 - Given that it happened in the North, what is the probability that it was a `Nuclear Explosion`?
Compute the answer below, or explain how you found it.
If it is a calculation, fill in the following cell:
```
probNuclear = ct["Nuclear Explosion"]["Norte"]/ct.All.Norte
print(probNuclear)
```
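The conditional-probability reading of the normalized crosstab, P(Nuclear Explosion | Norte) = P(Nuclear and Norte) / P(Norte), can be checked on a tiny hypothetical frame:

```python
import pandas as pd

df = pd.DataFrame({
    'Hemisfério': ['Norte', 'Norte', 'Norte', 'Sul'],
    'Type': ['Earthquake', 'Nuclear Explosion', 'Earthquake', 'Earthquake'],
})
ct = pd.crosstab(df['Hemisfério'], df['Type'], margins=True, normalize=True)
# joint probability of the cell divided by the marginal of the row
p = ct['Nuclear Explosion']['Norte'] / ct['All']['Norte']
print(p)  # 1/3 here: one nuclear event out of three northern events
```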
If you can obtain the answer without calculating, write it below:
* The probability of it having been a `Nuclear Explosion` is ...
## Q4 - Bivariate analysis
Make a scatter plot of the variables `Magnitude Error` and `Depth`.
```
plt.scatter(x = df['Magnitude Error'],
y = df['Depth'])
plt.show()
```
Compute the correlation between the variables `Magnitude Error` and `Depth`.
```
df["Depth"].corr(df["Magnitude Error"])
```
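As a reference point for interpreting the value (hypothetical data): a perfectly decreasing linear relationship gives a correlation of exactly -1, while values near 0 indicate little linear association.

```python
import pandas as pd

# y decreases by a fixed amount for each unit of x: perfect negative correlation
df = pd.DataFrame({'x': [1, 2, 3, 4], 'y': [8, 6, 4, 2]})
r = df['x'].corr(df['y'])
print(r)  # -1.0
```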
Explain what the value of the correlation computed above means.
The correlation measures the linear dependence between the two variables, in this case Magnitude Error and Depth. The value obtained is close to zero, which indicates only a weak linear relationship between them, consistent with the widely scattered points in the plot above. The negative sign corresponds to a slightly decreasing trend.
## Q5 - Describe and boxplot
Run `describe` and make a *boxplot* of the `Latitude` and the `Longitude`. Explain the values.
```
Lat = df["Latitude"].describe()
Long = df["Longitude"].describe()
print(Lat,Long)
df.boxplot(column = ["Latitude","Longitude"])
plt.show()
```
## Q6 - Drawing conclusions from the data
In a certain place, tremors with *Magnitude Type* `MB` and *Type* `Nuclear Explosion` have already occurred.
Answer:
* Is it more likely that it happened in the north or in the south?
Assume that Magnitude Type and Type are independent.
```
df.loc[(df.Type=="Nuclear Explosion")&(df["Magnitude Type"]=="MB")&(df["Hemisfério"]=="Sul"),"Hemis"]="Sul"
df.loc[(df.Type=="Nuclear Explosion")&(df["Magnitude Type"]=="MB")&(df["Hemisfério"]=="Norte"),"Hemis"]="Norte"
sul=df["Hemis"].value_counts("Sul")
sul
```
Looking at the values shown above, one can conclude that the probability of such a tremor is higher in the Northern Hemisphere than in the Southern. More precisely, the North has a probability of 82.82%, while the South has only 17.17%.
|
github_jupyter
|
```
# assume you have openmm, pdbfixer and mdtraj installed.
# if not, you can follow the guide here https://github.com/npschafer/openawsem
# import all using lines below
# from simtk.openmm.app import *
# from simtk.openmm import *
# from simtk.unit import *
from simtk.openmm.app import ForceField
# define atoms and residues.
forcefield = ForceField("cg.xml")
from pdbfixer import PDBFixer
from simtk.openmm.app import PDBFile
fixer = PDBFixer("1r69.pdb")
# more on pdbfixer, check:
# https://htmlpreview.github.io/?https://github.com/openmm/pdbfixer/blob/master/Manual.html
fixer.removeHeterogens(keepWater=False)
PDBFile.writeFile(fixer.topology, fixer.positions, open('1r69_cleaned.pdb', 'w'))
import mdtraj
pdb = mdtraj.load("1r69_cleaned.pdb")
keep_list = []
for atom in pdb.topology.atoms:
if atom.name == "CA":
keep_list.append(atom.index)
chosen = pdb.atom_slice(keep_list)
chosen.save("ca_only.pdb")
from simtk.openmm import HarmonicBondForce
def connect_term(system):
k_con= 10000
con = HarmonicBondForce()
n = system.getNumParticles()
for i in range(n-1):
con.addBond(i, i+1, 0.3816, k_con)
return con
from simtk.openmm import CustomBondForce
def connect_term_v2(system):
k_con= 10000
r0 = 0.3816
con = CustomBondForce(f"0.5*{k_con}*(r-r0)^2")
n = system.getNumParticles()
con.addPerBondParameter("r0")
for i in range(n-1):
con.addBond(i, i+1, [r0])
return con
from simtk.openmm import CustomCompoundBondForce
def connect_term_v3(system):
k_con= 10000
r0 = 0.3816
con = CustomCompoundBondForce(2, f"0.5*{k_con}*(distance(p1,p2)-r0)^2")
n = system.getNumParticles()
con.addPerBondParameter("r0")
for i in range(n-1):
con.addBond([i, i+1], [r0])
return con
# contact map
import numpy as np
from simtk.unit import *
pdb = PDBFile("ca_only.pdb")
pos = pdb.positions.value_in_unit(nanometer)
pos = np.array(pos)
dis = (((pos.reshape(1, -1, 3) - pos.reshape(-1, 1, 3))**2).sum(axis=-1))**0.5
import matplotlib.pylab as plt
%matplotlib inline
plt.figure(figsize=[10,10])
plt.imshow(dis < 0.8, origin="lower")
plt.colorbar()
n = dis.shape[0]
contact_threshold = 0.8 # in unit of nm
contact_list = []
for i in range(n):
for j in range(i+1, n):
dis_ij = dis[i][j]
if dis_ij < contact_threshold:
sigma_ij = 0.1*(j-i)**0.15
contact_list.append((i, j, (dis_ij, sigma_ij)))
len(contact_list)
from simtk.openmm import CustomBondForce
def structure_based_term(contact_list):
k = 10
structure_based = CustomBondForce(f"-{k}*exp(-(r-r_ijN)^2/(2*sigma_ij^2))")
# structure_based = CustomBondForce(f"-{k}")
structure_based.addPerBondParameter("r_ijN")
structure_based.addPerBondParameter("sigma_ij")
for contact in contact_list:
structure_based.addBond(*contact)
return structure_based
from simtk.openmm import LangevinIntegrator
from simtk.openmm import CustomIntegrator
from simtk.openmm.app import Simulation
from simtk.openmm.app import PDBReporter
from simtk.openmm.app import StateDataReporter
from simtk.openmm.app import DCDReporter
from sys import stdout
pdb = PDBFile("ca_only.pdb")
forcefield = ForceField("cg.xml")
print(pdb.topology)
system = forcefield.createSystem(pdb.topology)
system.removeForce(0) # remove the default force "CMotionRemover"
# connect = connect_term(system)
# system.addForce(connect)
# connect = connect_term_v2(system)
# system.addForce(connect)
connect = connect_term_v3(system)
system.addForce(connect)
structure_based = structure_based_term(contact_list)
system.addForce(structure_based)
print("Number of particles: ", system.getNumParticles())
print("Number of forces: ", system.getNumForces())
integrator = LangevinIntegrator(300*kelvin, 1/picosecond, 0.004*picoseconds)
simulation = Simulation(pdb.topology, system, integrator)
simulation.context.setPositions(pdb.positions)
simulation.reporters.append(PDBReporter('output.pdb', 1000))
simulation.reporters.append(StateDataReporter(stdout, 1000, step=True,
potentialEnergy=True, temperature=True))
simulation.step(10000)
integrator = CustomIntegrator(0.001)
simulation = Simulation(pdb.topology, system, integrator)
simulation.context.setPositions(pdb.positions)
simulation.reporters.append(DCDReporter('output.dcd', 1))
simulation.reporters.append(StateDataReporter(stdout, 1, step=True,
potentialEnergy=True, temperature=True))
simulation.step(int(1))
simulation.minimizeEnergy()
simulation.step(int(1))
integrator = LangevinIntegrator(300*kelvin, 1/picosecond, 0.004*picoseconds)
simulation = Simulation(pdb.topology, system, integrator)
simulation.context.setPositions(pdb.positions)
simulation.reporters.append(DCDReporter('output.dcd', 1000, append=True))
simulation.reporters.append(StateDataReporter(stdout, 1000, step=True,
potentialEnergy=True, temperature=True))
simulation.step(10000)
# conda install nglview -c conda-forge
# jupyter-nbextension enable nglview --py --sys-prefix
import nglview
view = nglview.show_pdbid("1r69")  # load "1r69" from RCSB PDB and display viewer widget
view
view = nglview.show_structure_file("ca_only.pdb")
view
traj = mdtraj.load_dcd("output.dcd", top="ca_only.pdb")
view = nglview.show_mdtraj(traj)
view
# Input: expects 3xN matrix of points
# Returns R,t
# R = 3x3 rotation matrix
# t = 3x1 column vector
def rigid_transform_3D(A, B, correct_reflection=True):
assert A.shape == B.shape
num_rows, num_cols = A.shape
if num_rows != 3:
raise Exception(f"matrix A is not 3xN, it is {num_rows}x{num_cols}")
num_rows, num_cols = B.shape
if num_rows != 3:
raise Exception(f"matrix B is not 3xN, it is {num_rows}x{num_cols}")
# find mean column wise
centroid_A = np.mean(A, axis=1)
centroid_B = np.mean(B, axis=1)
# ensure centroids are 3x1
centroid_A = centroid_A.reshape(-1, 1)
centroid_B = centroid_B.reshape(-1, 1)
# subtract mean
Am = A - centroid_A
Bm = B - centroid_B
H = Am @ np.transpose(Bm)
# sanity check
#if linalg.matrix_rank(H) < 3:
# raise ValueError("rank of H = {}, expecting 3".format(linalg.matrix_rank(H)))
# find rotation
U, S, Vt = np.linalg.svd(H)
R = Vt.T @ U.T
# special reflection case
if np.linalg.det(R) < 0 and correct_reflection:
print("det(R) < R, reflection detected!, correcting for it ...")
Vt[2,:] *= -1
R = Vt.T @ U.T
t = -R @ centroid_A + centroid_B
return R, t
target = traj.xyz[0].T
n = traj.xyz.shape[0]
for i in range(1, n):
current = traj.xyz[i].T
ret_R, ret_t = rigid_transform_3D(current, target, correct_reflection=False)
out = (ret_R@current) + ret_t
traj.xyz[i] = out.T.reshape(1, -1, 3)
view = nglview.show_mdtraj(traj, gui=True)
view
# energy evaluation.
pdb = PDBFile('ca_only.pdb')
traj = mdtraj.load_dcd("output.dcd", top='ca_only.pdb')
integrator = CustomIntegrator(0.001)
simulation = Simulation(pdb.topology, system, integrator)
for frame in range(traj.n_frames):
simulation.context.setPositions(traj.openmm_positions(frame))
state = simulation.context.getState(getEnergy=True)
termEnergy = state.getPotentialEnergy().value_in_unit(kilojoule_per_mole)
# termEnergy = state.getPotentialEnergy()
print(frame, f"{termEnergy:.3f} kJ/mol")
system = forcefield.createSystem(pdb.topology)
system.removeForce(0) # remove the default force "CMotionRemover"
connect = connect_term(system)
connect.setForceGroup(1)
system.addForce(connect)
connect = connect_term_v2(system)
connect.setForceGroup(2)
system.addForce(connect)
connect = connect_term_v3(system)
connect.setForceGroup(3)
system.addForce(connect)
structure_based = structure_based_term(contact_list)
structure_based.setForceGroup(4)
system.addForce(structure_based)
print("Number of particles: ", system.getNumParticles())
print("Number of forces: ", system.getNumForces())
integrator = LangevinIntegrator(300*kelvin, 1/picosecond, 0.004*picoseconds)
simulation = Simulation(pdb.topology, system, integrator)
simulation.context.setPositions(pdb.positions)
force_groups = {"con":1, "con_v2":2, "con_v3":3, "structure_based_term":4}
show_energy = ["con", "con_v2", "con_v3", "structure_based_term"]
integrator = CustomIntegrator(0.001)
simulation = Simulation(pdb.topology, system, integrator)
width = 15
line = "".join([f"{term:<15}" for term in ["frame"] + show_energy])
print(line)
for frame in range(traj.n_frames):
simulation.context.setPositions(traj.openmm_positions(frame))
all_energy = []
for term in show_energy:
group = force_groups[term]
state = simulation.context.getState(getEnergy=True, groups={group})
termEnergy = state.getPotentialEnergy().value_in_unit(kilojoule_per_mole)
all_energy.append(termEnergy)
line = "".join([f"{termEnergy:<15.3f}" for termEnergy in all_energy])
print(f"{frame:<15}{line}")
```
|
github_jupyter
|
# [NTDS'18] tutorial 2: build a graph from an edge list
[ntds'18]: https://github.com/mdeff/ntds_2018
[Benjamin Ricaud](https://people.epfl.ch/benjamin.ricaud), [EPFL LTS2](https://lts2.epfl.ch)
* Dataset: [Open Tree of Life](https://tree.opentreeoflife.org)
* Tools: [pandas](https://pandas.pydata.org), [numpy](http://www.numpy.org), [networkx](https://networkx.github.io), [gephi](https://gephi.org/)
## Tools
The line below is a [magic command](https://ipython.readthedocs.io/en/stable/interactive/magics.html) that makes plots appear inline in the notebook.
```
%matplotlib inline
```
The first thing is always to import the packages we'll use.
```
import pandas as pd
import numpy as np
import networkx as nx
```
Tutorials on pandas can be found at:
* <https://pandas.pydata.org/pandas-docs/stable/10min.html>
* <https://pandas.pydata.org/pandas-docs/stable/tutorials.html>
Tutorials on numpy can be found at:
* <https://docs.scipy.org/doc/numpy/user/quickstart.html>
* <http://www.scipy-lectures.org/intro/numpy/index.html>
* <http://www.scipy-lectures.org/advanced/advanced_numpy/index.html>
A tutorial on networkx can be found at:
* <https://networkx.github.io/documentation/stable/tutorial.html>
## Import the data
We will play with an excerpt of the Tree of Life that can be found together with this notebook. This dataset is reduced to the first 1000 taxa (starting from the root node). The full version is available here: [Open Tree of Life](https://tree.opentreeoflife.org/about/taxonomy-version/ott3.0).


```
tree_of_life = pd.read_csv('data/taxonomy_small.tsv', sep=r'\t\|\t?', encoding='utf-8', engine='python')
```
If you do not remember the details of a function:
```
pd.read_csv?
```
For more info on the separator, see [regex](https://docs.python.org/3.6/library/re.html).
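As a quick sanity check of that separator, here is how the regex splits a hypothetical raw line from the taxonomy file (the field values are made up for illustration):

```
import re

# A made-up raw line: fields are separated by a tab, a pipe, and
# (except at line end) another tab.
line = "805080\t|\t805080\t|\tBacteria\t|\tdomain\t|"
fields = re.split(r'\t\|\t?', line)
print(fields[:4])
```

The `\t?` makes the trailing tab optional, which handles the last separator on each line.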
Now, what is the object `tree_of_life`? It is a Pandas DataFrame.
```
tree_of_life
```
The description of the entries is given here:
https://github.com/OpenTreeOfLife/reference-taxonomy/wiki/Interim-taxonomy-file-format
## Explore the table
```
tree_of_life.columns
```
Let us drop some columns.
```
tree_of_life = tree_of_life.drop(columns=['sourceinfo', 'uniqname', 'flags','Unnamed: 7'])
tree_of_life.head()
```
Pandas inferred the type of the values inside each column (int, float, string, and string). The parent_uid column has float values because a missing value was converted to `NaN`, and `NaN` forces an integer column to float.
```
print(tree_of_life['uid'].dtype, tree_of_life.parent_uid.dtype)
```
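A small illustration of that upcasting, plus the nullable `Int64` dtype that newer pandas versions offer as an alternative:

```
import pandas as pd

s = pd.Series([1, None, 3])
print(s.dtype)           # float64: NaN forces the upcast to float

s_nullable = pd.Series([1, None, 3], dtype="Int64")
print(s_nullable.dtype)  # Int64: keeps integers, stores the gap as pd.NA
```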
How to access individual values.
```
tree_of_life.iloc[0, 2]
tree_of_life.loc[0, 'name']
```
**Exercise**: Guess the output of the line below.
```
# tree_of_life.uid[0] == tree_of_life.parent_uid[1]
```
Ordering the data.
```
tree_of_life.sort_values(by='name').head()
```
## Operations on the columns
Unique values, useful for categories:
```
tree_of_life['rank'].unique()
```
Selecting only one category.
```
tree_of_life[tree_of_life['rank'] == 'species'].head()
```
How many species do we have?
```
len(tree_of_life[tree_of_life['rank'] == 'species'])
tree_of_life['rank'].value_counts()
```
## Building the graph
Let us build the adjacency matrix of the graph. For that we need to reorganize the data. First we separate the nodes and their properties from the edges.
```
nodes = tree_of_life[['uid', 'name', 'rank']].copy()  # .copy() avoids a SettingWithCopyWarning below
edges = tree_of_life[['uid', 'parent_uid']].copy()
```
When using an adjacency matrix, nodes are indexed by their row or column number and not by a `uid`. Let us create a new index for the nodes.
```
# Create a column for node index.
nodes.reset_index(level=0, inplace=True)
nodes = nodes.rename(columns={'index':'node_idx'})
nodes.head()
# Create a conversion table from uid to node index.
uid2idx = nodes[['node_idx', 'uid']]
uid2idx = uid2idx.set_index('uid')
uid2idx.head()
edges.head()
```
Now we are ready to use yet another powerful function of Pandas. Those familiar with SQL will recognize it: the `join` function.
```
# Add a new column, matching the uid with the node_idx.
edges = edges.join(uid2idx, on='uid')
# Do the same with the parent_uid.
edges = edges.join(uid2idx, on='parent_uid', rsuffix='_parent')
# Drop the uids.
edges = edges.drop(columns=['uid','parent_uid'])
edges.head()
```
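To see exactly what `join` does here, consider a toy version of the uid-to-index lookup (the values are hypothetical): `join` matches a left-hand *column* against the right-hand *index*.

```
import pandas as pd

edges_toy = pd.DataFrame({'uid': [20, 10, 30]})
uid2idx_toy = pd.DataFrame({'node_idx': [0, 1, 2]}, index=[10, 20, 30])

# Match the 'uid' column of edges_toy against the index of uid2idx_toy.
joined = edges_toy.join(uid2idx_toy, on='uid')
print(joined)
```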
The above table is a list of edges connecting nodes and their parents.
## Building the (weighted) adjacency matrix
We will use numpy to build this matrix. Note that we don't have edge weights here, so our graph is going to be unweighted.
```
n_nodes = len(nodes)
adjacency = np.zeros((n_nodes, n_nodes), dtype=int)
for idx, row in edges.iterrows():
if np.isnan(row.node_idx_parent):
continue
i, j = int(row.node_idx), int(row.node_idx_parent)
adjacency[i, j] = 1
adjacency[j, i] = 1
adjacency[:15, :15]
```
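For larger graphs, a dense `n_nodes × n_nodes` array wastes memory; the same adjacency matrix can be built as a sparse matrix. A minimal sketch with hypothetical edge arrays (in the notebook these would come from the `node_idx` and `node_idx_parent` columns of `edges`):

```
import numpy as np
import scipy.sparse as sp

# Hypothetical edge endpoints.
rows = np.array([0, 1, 2])
cols = np.array([1, 2, 3])
n_nodes = 4

data = np.ones(len(rows), dtype=int)
adjacency = sp.coo_matrix((data, (rows, cols)), shape=(n_nodes, n_nodes))
adjacency = (adjacency + adjacency.T).toarray()  # symmetrize: the graph is undirected
print(adjacency)
```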
Congratulations, you have built the adjacency matrix!
## Graph visualization
To conclude, let us visualize the graph. We will use the python module networkx.
```
# A simple command to create the graph from the adjacency matrix.
graph = nx.from_numpy_array(adjacency)
```
In addition, let us add some attributes to the nodes:
```
node_props = nodes.to_dict()
for key in node_props:
# print(key, node_props[key])
nx.set_node_attributes(graph, node_props[key], key)
```
Let us check if it is correctly recorded:
```
graph.nodes[1]  # graph.node was removed in networkx 2.4; use graph.nodes
```
Draw the graph with two different [layout algorithms](https://en.wikipedia.org/wiki/Graph_drawing#Layout_methods).
```
nx.draw_spectral(graph)
nx.draw_spring(graph)
```
Save the graph to disk in the `gexf` format, readable by gephi and other tools that manipulate graphs. You may now explore the graph using gephi and compare the visualizations.
```
nx.write_gexf(graph, 'tree_of_life.gexf')
```
|
github_jupyter
|
```
%load_ext autoreload
%autoreload 2
import numpy as np
import scipy.stats as stats
import scipy.special
#graphing
import matplotlib.pyplot as plt
#stats
import statsmodels.api as sm
from statsmodels.base.model import GenericLikelihoodModel
#import testing
import sys
sys.path.append("../")
import vuong_plots
beta0 = 1.
beta1 = .25
def gen_data(beta0=beta0,beta1=beta1):
nobs = 1000
#parameters
sigma = 1
epsilon = stats.norm.rvs(loc=0,scale=sigma,size=nobs)
#censor data below x<0?
x = stats.norm.rvs(loc=5,scale=5,size=nobs)
y = beta0+ beta1*x + epsilon
#censor
y[y<=0] = 0
return y,x,nobs
yn,xn,nobs = gen_data()
print(xn.shape)
print(sm.add_constant(xn).shape)
print(scipy.stats.mode(yn))
np.random.seed()
yn,xn,nobs = gen_data()
class Tobit(GenericLikelihoodModel):
def __init__(self, *args,cc=False,ols=False, **kwargs):
super(Tobit,self).__init__(*args,**kwargs)
self._set_extra_params_names(['var'])
self.start_params = np.array([1]*(self.exog.shape[1]+1))
self.cc = cc
self.ols = ols
#self.start_params = np.array( range(1, (2*self.exog.shape[1]+2)))
#2 sets of params for z, 1 for x, 2 variances...
def loglikeobs(self, params):
y = self.endog
x = self.exog
m = 1*(self.endog == 0) #missingness
beta = params[0:-1]
sigma2 = max(params[-1],1e-3)
mu_y = np.matmul(x,beta)
pr_y = stats.norm.logpdf( y, loc = mu_y, scale=np.sqrt(sigma2))
#if complete case, assign pr missing to all observations...
pr_m = np.log(max(m.mean(),1e-4))
if not self.cc:
pr_m = stats.norm.logcdf( y, loc = mu_y, scale=np.sqrt(sigma2))
#we're done if ols
if self.ols:
return pr_y
else:
ll = (1-m)*pr_y + m*pr_m
return ll
def score(self, params):
y = self.endog
x = self.exog
m = 1*(self.endog == 0) #missingness
m_x = np.repeat(m,x.shape[1]).reshape(x.shape)
if self.ols: #if OLS use all the data...
m, m_x = np.ones(y.shape), np.ones(x.shape)
b = params[0:-1]
sigma2 = max(params[-1],1e-3)
s = np.sqrt(sigma2)
beta_jac = np.zeros(len(b))
sigma_jac = 0
#for censored
if not self.cc and not self.ols:
left_stats = (y - np.dot(x, b)) / s
l_pdf = scipy.stats.norm.logpdf(left_stats)
l_cdf = scipy.stats.norm.logcdf(left_stats)
left_frac = np.exp(l_pdf - l_cdf)
beta_left = np.dot(left_frac*m, x*m_x / s)
beta_jac -= beta_left
left_sigma = np.dot(left_frac*m, left_stats*m)
sigma_jac -= left_sigma
#for non-censored
mid_stats = (y - np.dot(x, b)) / s
beta_mid = np.dot(mid_stats*(1-m), x*(1-m_x) / s)
beta_jac += beta_mid
mid_sigma = ((np.square(mid_stats) - 1)*(1-m)).sum()
sigma_jac += mid_sigma
combo_jac = np.append(beta_jac, sigma_jac / (2*s) ) # by chain rule, since the expression above is dloglik/dlogsigma
return combo_jac
model1 = Tobit(yn,sm.add_constant(xn))
model1_fit = model1.fit(disp=False)
model1_fit.summary()
def setup_shi(yn,xn):
model1 = Tobit(yn,sm.add_constant(xn))
model1_fit = model1.fit(disp=False)
ll1 = model1.loglikeobs(model1_fit.params)
grad1 = model1.score_obs(model1_fit.params)
hess1 = model1.hessian(model1_fit.params)
k1 = len(model1_fit.params)
#fit the OLS variant for comparison
model2 = Tobit(yn,sm.add_constant(xn),ols=True)
model2_fit = model2.fit(disp=False)
ll2 = model2.loglikeobs(model2_fit.params)
grad2 = model2.score_obs(model2_fit.params)
hess2 = model2.hessian(model2_fit.params)
k2 = len(model2_fit.params)
return ll1,grad1,hess1,ll2,k1, grad2,hess2,k2
true_stats = vuong_plots.plot_true(gen_data,setup_shi)
yn,xn,nobs = gen_data()
analytic_stats = vuong_plots.plot_analytic(yn,xn,nobs,setup_shi)
bootstrap_stats = vuong_plots.plot_bootstrap(yn,xn,nobs,setup_shi)
plt.legend()
plt.show()
plt.plot(range(1,5), [ stats.kstat(bootstrap_stats,n=i) for i in range(1,5)], label="Bootstrap")
plt.plot(range(1,5), [ stats.kstat(analytic_stats,n=i) for i in range(1,5)], label="Analytic")
plt.legend()
```
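The `Tobit.loglikeobs` method above combines two contributions: uncensored observations enter through the normal log-density, and censored observations (`y == 0`) through the log-probability of falling at or below the censoring point. A simplified numeric sketch of that censored-at-zero likelihood (ignoring the complete-case and OLS variants):

```
import numpy as np
from scipy import stats

def tobit_loglik(y, x, beta, sigma):
    """Censored-at-zero Tobit log-likelihood (simplified sketch)."""
    mu = x @ beta
    return np.where(y == 0,
                    stats.norm.logcdf(0, loc=mu, scale=sigma),   # censored
                    stats.norm.logpdf(y, loc=mu, scale=sigma)    # uncensored
                    ).sum()

# Tiny check with one censored and one uncensored observation.
y = np.array([0.0, 1.0])
x = np.array([[1.0], [1.0]])
print(tobit_loglik(y, x, np.array([0.5]), 1.0))
```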
|
github_jupyter
|
<a href="https://colab.research.google.com/github/livjab/DS-Unit-2-Sprint-4-Practicing-Understanding/blob/master/module1-hyperparameter-optimization/LS_DS_241_Hyperparameter_Optimization.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
_Lambda School Data Science — Practicing & Understanding Predictive Modeling_
# Hyperparameter Optimization
Today we'll use this process:
## "A universal workflow of machine learning"
_Excerpt from Francois Chollet, [Deep Learning with Python](https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/README.md), Chapter 4: Fundamentals of machine learning_
**1. Define the problem at hand and the data on which you’ll train.** Collect this data, or annotate it with labels if need be.
**2. Choose how you’ll measure success on your problem.** Which metrics will you monitor on your validation data?
**3. Determine your evaluation protocol:** hold-out validation? K-fold validation? Which portion of the data should you use for validation?
**4. Develop a first model that does better than a basic baseline:** a model with statistical power.
**5. Develop a model that overfits.** The universal tension in machine learning is between optimization and generalization; the ideal model is one that stands right at the border between underfitting and overfitting; between undercapacity and overcapacity. To figure out where this border lies, first you must cross it.
**6. Regularize your model and tune its hyperparameters, based on performance on the validation data.** Repeatedly modify your model, train it, evaluate on your validation data (not the test data, at this point), modify it again, and repeat, until the model is as good as it can get.
**Iterate on feature engineering: add new features, or remove features that don’t seem to be informative.**
Once you’ve developed a satisfactory model configuration, you can **train your final production model on all the available data (training and validation) and evaluate it one last time on the test set.**
## 1. Define the problem at hand and the data on which you'll train
We'll apply the workflow to a [project from _Python Data Science Handbook_](https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.html#Example:-Predicting-Bicycle-Traffic) by Jake VanderPlas:
> **Predicting Bicycle Traffic**
> As an example, let's take a look at whether we can predict the number of bicycle trips across Seattle's Fremont Bridge based on weather, season, and other factors.
> We will join the bike data with another dataset, and try to determine the extent to which weather and seasonal factors—temperature, precipitation, and daylight hours—affect the volume of bicycle traffic through this corridor. Fortunately, the NOAA makes available their daily [weather station data](http://www.ncdc.noaa.gov/cdo-web/search?datasetid=GHCND) (I used station ID USW00024233) and we can easily use Pandas to join the two data sources.
> Let's start by loading the two datasets, indexing by date:
So this is a regression problem, not a classification problem. We'll define the target, choose an evaluation metric, and choose models that are appropriate for regression problems.
### Download data
```
!curl -o FremontBridge.csv https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD
!wget https://raw.githubusercontent.com/jakevdp/PythonDataScienceHandbook/master/notebooks/data/BicycleWeather.csv
```
### Load data
```
# Modified from cells 15, 16, and 20, at
# https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.html#Example:-Predicting-Bicycle-Traffic
import pandas as pd
# Download and join data into a dataframe
def load():
fremont_bridge = 'https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD'
bicycle_weather = 'https://raw.githubusercontent.com/jakevdp/PythonDataScienceHandbook/master/notebooks/data/BicycleWeather.csv'
counts = pd.read_csv(fremont_bridge, index_col='Date', parse_dates=True,
infer_datetime_format=True)
weather = pd.read_csv(bicycle_weather, index_col='DATE', parse_dates=True,
infer_datetime_format=True)
daily = counts.resample('d').sum()
daily['Total'] = daily.sum(axis=1)
daily = daily[['Total']] # remove other columns
weather_columns = ['PRCP', 'SNOW', 'SNWD', 'TMAX', 'TMIN', 'AWND']
daily = daily.join(weather[weather_columns], how='inner')
# Make a feature for yesterday's total
daily['Total_yesterday'] = daily.Total.shift(1)
daily = daily.drop(index=daily.index[0])
return daily
daily = load()
```
### First fast look at the data
- What's the shape?
- What's the date range?
- What's the target and the features?
```
# TODO
daily.shape
daily.head()
daily.tail()
```
Target
- Total : Daily total number of bicycle trips across Seattle's Fremont Bridge
Features
- Date (index) : from 2012-10-04 to 2015-09-01
- Total_yesterday : Total trips yesterday
- PRCP : Precipitation (1/10 mm)
- SNOW : Snowfall (1/10 mm)
- SNWD : Snow depth (1/10 mm)
- TMAX : Maximum temperature (1/10 Celsius)
- TMIN : Minimum temperature (1/10 Celsius)
- AWND : Average daily wind speed (1/10 meters per second)
## 2. Choose how you’ll measure success on your problem.
Which metrics will you monitor on your validation data?
This is a regression problem, so we need to choose a regression [metric](https://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values).
I'll choose mean absolute error.
```
# TODO
from sklearn.metrics import mean_absolute_error
```
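Mean absolute error is simply the mean of the absolute differences between predictions and targets, which keeps it in the same units as the target (daily trips). A toy check:

```
import numpy as np
from sklearn.metrics import mean_absolute_error

y_true = np.array([120, 150, 100])
y_pred = np.array([110, 160, 100])
# Errors are 10, 10, and 0, so the MAE is their mean.
print(mean_absolute_error(y_true, y_pred))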
## 3. Determine your evaluation protocol
We're doing model selection, hyperparameter optimization, and performance estimation. So generally we have two ideal [options](https://sebastianraschka.com/images/blog/2018/model-evaluation-selection-part4/model-eval-conclusions.jpg) to choose from:
- 3-way holdout method (train/validation/test split)
- Cross-validation with independent test set
I'll choose cross-validation with independent test set. Scikit-learn makes cross-validation convenient for us!
Specifically, I will use random shuffled cross validation to train and validate, but I will hold out an "out-of-time" test set, from the last 100 days of data:
```
# TODO
test = daily[-100:]
train = daily[:-100]
train.shape, test.shape
X_train = train.drop(columns="Total")
y_train = train["Total"]
X_test = test.drop(columns="Total")
y_test = test["Total"]
X_train.shape, y_train.shape, X_test.shape, y_test.shape
```
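Since this data is temporal, one alternative worth knowing about is scikit-learn's `TimeSeriesSplit`, which (unlike shuffled cross-validation) always validates on observations that come after the training fold, mirroring the out-of-time test set above:

```
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(20).reshape(-1, 1)
splits = list(TimeSeriesSplit(n_splits=3).split(X))
for train_idx, val_idx in splits:
    # Every training fold ends strictly before its validation fold begins.
    print(train_idx.max(), "<", val_idx.min())
```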
## 4. Develop a first model that does better than a basic baseline
### Look at the target's distribution and descriptive stats
```
import matplotlib.pyplot as plt
import seaborn as sns
sns.distplot(y_train);
y_train.describe()
```
### Basic baseline 1
```
y_pred = [y_train.median()] * len(y_train)
mean_absolute_error(y_train, y_pred)
```
### Basic baseline 2
```
y_pred = X_train["Total_yesterday"]
mean_absolute_error(y_train, y_pred)
```
### First model that does better than a basic baseline
https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html
```
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_validate
scores = cross_validate(LinearRegression(),
X_train,
y_train,
scoring="neg_mean_absolute_error",
cv=3,
return_train_score=True,
return_estimator=True)
pd.DataFrame(scores)
scores["test_score"].mean()
scores["estimator"][0].coef_
for i, model in enumerate(scores["estimator"]):
coefficients = model.coef_
intercept = model.intercept_
feature_names = X_train.columns
print(f'Model from cross-validation fold #{i}')
print("Intercept", intercept)
print(pd.Series(coefficients, feature_names).to_string())
print('\n')
```
## 5. Develop a model that overfits.
"The universal tension in machine learning is between optimization and generalization; the ideal model is one that stands right at the border between underfitting and overfitting; between undercapacity and overcapacity. To figure out where this border lies, first you must cross it." —Chollet
<img src="https://jakevdp.github.io/PythonDataScienceHandbook/figures/05.03-validation-curve.png">
Diagram Source: https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.html#Validation-curves-in-Scikit-Learn
### Random Forest?
https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html
```
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_estimators=100, max_depth=None, n_jobs=-1)
scores = cross_validate(model,
X_train,
y_train,
scoring="neg_mean_absolute_error",
cv=3,
return_train_score=True,
return_estimator=True)
pd.DataFrame(scores)
scores["test_score"].mean()
```
### Validation Curve
https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html
> Validation curve. Determine training and test scores for varying parameter values. This is similar to grid search with one parameter.
```
import numpy as np
# Modified from cell 13 at
# https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.html#Validation-curves-in-Scikit-Learn
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.model_selection import validation_curve
model = RandomForestRegressor(n_estimators=100)
depth = [2, 3, 4, 5, 6]
train_score, val_score = validation_curve(
model, X_train, y_train,
param_name='max_depth', param_range=depth,
scoring='neg_mean_absolute_error', cv=3)
plt.plot(depth, np.median(train_score, 1), color='blue', label='training score')
plt.plot(depth, np.median(val_score, 1), color='red', label='validation score')
plt.legend(loc='best')
plt.xlabel('depth');
```
### `RandomizedSearchCV`
https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html
https://scikit-learn.org/stable/modules/grid_search.html
```
from sklearn.model_selection import RandomizedSearchCV
param_distributions = {
"n_estimators": [100, 200],
"max_depth": [4, 5],
"criterion": ["mse", "mae"]
}
gridsearch = RandomizedSearchCV(
RandomForestRegressor(n_jobs=-1, random_state=42),
param_distributions=param_distributions,
n_iter=8,
cv=3, scoring="neg_mean_absolute_error",
verbose=10,
return_train_score=True)
gridsearch.fit(X_train, y_train)
results = pd.DataFrame(gridsearch.cv_results_)
results.sort_values(by="rank_test_score")
gridsearch.best_estimator_
```
## FEATURE ENGINEERING!
Jake VanderPlas demonstrates this feature engineering:
https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.html#Example:-Predicting-Bicycle-Traffic
```
# Modified from code cells 17-21 at
# https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.html#Example:-Predicting-Bicycle-Traffic
def jake_wrangle(X):
X = X.copy()
# patterns of use generally vary from day to day;
# let's add binary columns that indicate the day of the week:
days = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
for i, day in enumerate(days):
X[day] = (X.index.dayofweek == i).astype(float)
# we might expect riders to behave differently on holidays;
# let's add an indicator of this as well:
from pandas.tseries.holiday import USFederalHolidayCalendar
cal = USFederalHolidayCalendar()
holidays = cal.holidays('2012', '2016')
X = X.join(pd.Series(1, index=holidays, name='holiday'))
X['holiday'].fillna(0, inplace=True)
# We also might suspect that the hours of daylight would affect
# how many people ride; let's use the standard astronomical calculation
# to add this information:
def hours_of_daylight(date, axis=23.44, latitude=47.61):
"""Compute the hours of daylight for the given date"""
days = (date - pd.Timestamp(2000, 12, 21)).days  # pd.datetime is deprecated; use pd.Timestamp
m = (1. - np.tan(np.radians(latitude))
* np.tan(np.radians(axis) * np.cos(days * 2 * np.pi / 365.25)))
return 24. * np.degrees(np.arccos(1 - np.clip(m, 0, 2))) / 180.
X['daylight_hrs'] = list(map(hours_of_daylight, X.index))
# temperatures are in 1/10 deg C; convert to C
X['TMIN'] /= 10
X['TMAX'] /= 10
# We can also calculate the average temperature.
X['Temp (C)'] = 0.5 * (X['TMIN'] + X['TMAX'])
# precip is in 1/10 mm; convert to inches
X['PRCP'] /= 254
# In addition to the inches of precipitation, let's add a flag that
# indicates whether a day is dry (has zero precipitation):
X['dry day'] = (X['PRCP'] == 0).astype(int)
# Let's add a counter that increases from day 1, and measures how many
# years have passed. This will let us measure any observed annual increase
# or decrease in daily crossings:
X['annual'] = (X.index - X.index[0]).days / 365.
return X
X_train = jake_wrangle(X_train)
```
### Linear Regression (with new features)
```
scores = cross_validate(LinearRegression(),
X_train,
y_train,
scoring="neg_mean_absolute_error",
cv=3,
return_train_score=True,
return_estimator=True)
pd.DataFrame(scores)
scores["test_score"].mean()
```
### Random Forest (with new features)
```
param_distributions = {
'n_estimators': [100],
'max_depth': [5, 10, 15, None],
'criterion': ["mae"]
}
gridsearch = RandomizedSearchCV(
RandomForestRegressor(n_jobs=-1, random_state=42),
param_distributions=param_distributions,
n_iter=2,
cv=3,
scoring="neg_mean_absolute_error",
verbose=10,
return_train_score=True)
gridsearch.fit(X_train, y_train)
gridsearch.best_estimator_
```
### Feature engineering, explained by Francois Chollet
> _Feature engineering_ is the process of using your own knowledge about the data and about the machine learning algorithm at hand to make the algorithm work better by applying hardcoded (nonlearned) transformations to the data before it goes into the model. In many cases, it isn’t reasonable to expect a machine-learning model to be able to learn from completely arbitrary data. The data needs to be presented to the model in a way that will make the model’s job easier.
> Let’s look at an intuitive example. Suppose you’re trying to develop a model that can take as input an image of a clock and can output the time of day.
> If you choose to use the raw pixels of the image as input data, then you have a difficult machine-learning problem on your hands. You’ll need a convolutional neural network to solve it, and you’ll have to expend quite a bit of computational resources to train the network.
> But if you already understand the problem at a high level (you understand how humans read time on a clock face), then you can come up with much better input features for a machine-learning algorithm: for instance, write a Python script to follow the black pixels of the clock hands and output the (x, y) coordinates of the tip of each hand. Then a simple machine-learning algorithm can learn to associate these coordinates with the appropriate time of day.
> You can go even further: do a coordinate change, and express the (x, y) coordinates as polar coordinates with regard to the center of the image. Your input will become the angle theta of each clock hand. At this point, your features are making the problem so easy that no machine learning is required; a simple rounding operation and dictionary lookup are enough to recover the approximate time of day.
> That’s the essence of feature engineering: making a problem easier by expressing it in a simpler way. It usually requires understanding the problem in depth.
> Before convolutional neural networks became successful on the MNIST digit-classification problem, solutions were typically based on hardcoded features such as the number of loops in a digit image, the height of each digit in an image, a histogram of pixel values, and so on.
> Neural networks are capable of automatically extracting useful features from raw data. Does this mean you don’t have to worry about feature engineering as long as you’re using deep neural networks? No, for two reasons:
> - Good features still allow you to solve problems more elegantly while using fewer resources. For instance, it would be ridiculous to solve the problem of reading a clock face using a convolutional neural network.
> - Good features let you solve a problem with far less data. The ability of deep-learning models to learn features on their own relies on having lots of training data available; if you have only a few samples, then the information value in their features becomes critical.
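The clock-hand coordinate change Chollet describes can be sketched in a few lines; the center and coordinates below are hypothetical (a unit-square image with the clock centered at (0.5, 0.5)):

```
import math

def hand_angle(x, y, cx=0.5, cy=0.5):
    """Angle (radians) of a clock-hand tip (x, y) around the image center."""
    return math.atan2(y - cy, x - cx)

# A hand pointing straight up (12 o'clock) sits at angle pi/2.
print(hand_angle(0.5, 1.0))
```

Once each hand is reduced to a single angle, mapping angles to hours is a rounding operation rather than a learning problem.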
# ASSIGNMENT
**1.** Complete the notebook cells that were originally commented **`TODO`**.
**2.** Then, focus on feature engineering to improve your cross validation scores. Collaborate with your cohort on Slack. You could start with the ideas [Jake VanderPlas suggests:](https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.html#Example:-Predicting-Bicycle-Traffic)
> Our model is almost certainly missing some relevant information. For example, nonlinear effects (such as effects of precipitation and cold temperature) and nonlinear trends within each variable (such as disinclination to ride at very cold and very hot temperatures) cannot be accounted for in this model. Additionally, we have thrown away some of the finer-grained information (such as the difference between a rainy morning and a rainy afternoon), and we have ignored correlations between days (such as the possible effect of a rainy Tuesday on Wednesday's numbers, or the effect of an unexpected sunny day after a streak of rainy days). These are all potentially interesting effects, and you now have the tools to begin exploring them if you wish!
**3.** Experiment with the Categorical Encoding notebook.
**4.** At the end of the day, take the last step in the "universal workflow of machine learning" — "You can train your final production model on all the available data (training and validation) and evaluate it one last time on the test set."
See the [`RandomizedSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) documentation for the `refit` parameter, `best_estimator_` attribute, and `predict` method:
> **refit : boolean, or string, default=True**
> Refit an estimator using the best found parameters on the whole dataset.
> The refitted estimator is made available at the `best_estimator_` attribute and permits using `predict` directly on this `GridSearchCV` instance.
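A self-contained sketch of that behavior on toy data (the model and parameter grid here are illustrative, not the ones used above): with the default `refit=True`, calling `predict` on the search object is the same as calling it on `best_estimator_`.

```
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.RandomState(0)
X = rng.rand(60, 3)
y = 2 * X[:, 0] + 0.1 * rng.rand(60)

search = RandomizedSearchCV(
    RandomForestRegressor(n_estimators=10, random_state=0),
    param_distributions={"max_depth": [2, 4]},
    n_iter=2, cv=3, random_state=0)
search.fit(X, y)

# refit=True refit best_estimator_ on all the data, so these agree:
print(np.allclose(search.predict(X), search.best_estimator_.predict(X)))
```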
### STRETCH
**A.** Apply this lesson other datasets you've worked with, like Ames Housing, Bank Marketing, or others.
**B.** In additon to `RandomizedSearchCV`, scikit-learn has [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html). Another library called scikit-optimize has [`BayesSearchCV`](https://scikit-optimize.github.io/notebooks/sklearn-gridsearchcv-replacement.html). Experiment with these alternatives.
**C.** _[Introduction to Machine Learning with Python](http://shop.oreilly.com/product/0636920030515.do)_ discusses options for "Grid-Searching Which Model To Use" in Chapter 6:
> You can even go further in combining GridSearchCV and Pipeline: it is also possible to search over the actual steps being performed in the pipeline (say whether to use StandardScaler or MinMaxScaler). This leads to an even bigger search space and should be considered carefully. Trying all possible solutions is usually not a viable machine learning strategy. However, here is an example comparing a RandomForestClassifier and an SVC ...
The example is shown in [the accompanying notebook](https://github.com/amueller/introduction_to_ml_with_python/blob/master/06-algorithm-chains-and-pipelines.ipynb), code cells 35-37. Could you apply this concept to your own pipelines?
```
len(X_train.columns)
X_train.describe()
# Let's feature-engineer a column indicating whether it rained yesterday.
# We can use the feature engineered by Jake VanderPlas called "dry day"
# to determine if there was rain on a given day
X_train["dry day"].value_counts()
X_train["yesterday dry day"] = X_train["dry day"].shift()
X_train[["dry day", "yesterday dry day"]].head(10)
# deal with Nan and change to int type
X_train["yesterday dry day"] = X_train["yesterday dry day"].fillna(value=1).astype(int)
# Let's try to make a column for the number of days since it was last sunny
X_train['rainy day streak'] = X_train.groupby( (X_train['dry day'] !=1)
.cumsum()).cumcount() + ( (X_train['dry day'] != 0)
.cumsum() == 0).astype(int)
X_train[["dry day", "rainy day streak"]].head(10)
# Let's make a feature for extreme cold/extreme heat
# Anything above about 80 degrees (F) and below 40 degrees (F) counts as extreme temp
# 80F = 26.67C, 40F = 4.44C
def extreme_temps(X_train):
if (X_train["Temp (C)"] > 26.67):
return 1
elif (X_train["Temp (C)"] < 4.44):
return 1
else:
return 0
X_train["extreme temp day"] = X_train.apply(extreme_temps, axis=1)
X_train["extreme temp day"].value_counts()
X_train[["Temp (C)", "extreme temp day"]].sort_values("Temp (C)").head()
X_train[["Temp (C)", "extreme temp day"]].sort_values("Temp (C)", ascending=False).head()
# linear regression with new added features
scores = cross_validate(LinearRegression(),
X_train,
y_train,
scoring="neg_mean_absolute_error",
cv=3,
return_train_score=True,
return_estimator=True)
pd.DataFrame(scores)
scores["test_score"].mean()
# random forest regression
param_distributions = {
'n_estimators': [100, 200, 300],
'max_depth': [5, 10, 15, None],
'criterion': ["mse", "mae"]
}
gridsearch = RandomizedSearchCV(
RandomForestRegressor(n_jobs=-1, random_state=42),
param_distributions=param_distributions,
cv=3,
scoring="neg_mean_absolute_error",
verbose=10,
return_train_score=True)
gridsearch.fit(X_train, y_train)
gridsearch.best_estimator_
scores = cross_validate(RandomForestRegressor(bootstrap=True,
criterion='mse',
max_depth=None,
max_features='auto',
max_leaf_nodes=None,
min_impurity_decrease=0.0,
min_impurity_split=None,
min_samples_leaf=1,
min_samples_split=2,
min_weight_fraction_leaf=0.0,
n_estimators=300,
n_jobs=-1,
oob_score=False,
random_state=42,
verbose=0,
warm_start=False),
X_train,
y_train,
scoring="neg_mean_absolute_error",
cv=3,
return_train_score=True,
return_estimator=True)
pd.DataFrame(scores)
scores["test_score"].mean()
pd.DataFrame(gridsearch.cv_results_).sort_values(by="rank_test_score")
```
|
github_jupyter
|
```
import numpy as np
import pandas as pd
from pathlib import Path
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
```
# Return Forecasting: Read Historical Daily Yen Futures Data
In this notebook, you will load historical Dollar-Yen exchange rate futures data and apply time series analysis and modeling to determine whether there is any predictable behavior.
```
# Futures contract on the Yen-dollar exchange rate:
# This is the continuous chain of the futures contracts that are 1 month to expiration
yen_futures = pd.read_csv(
Path("yen.csv"), index_col="Date", infer_datetime_format=True, parse_dates=True
)
yen_futures.head()
# Trim the dataset to begin on January 1st, 1990
yen_futures = yen_futures.loc["1990-01-01":, :]
yen_futures.head()
```
# Return Forecasting: Initial Time-Series Plotting
Start by plotting the "Settle" price. Do you see any patterns, long-term and/or short-term?
```
yen_futures_settle = yen_futures['Settle']
#print(type(yen_futures_settle))
#print(yen_futures_settle)
yen_futures_settle = yen_futures_settle.to_frame()
yen_futures_settle.head()
```
#### make a copy for later
```
yen_futures_settle_only = yen_futures_settle.copy()
yen_futures_settle_only.head()
# Plot just the "Settle" column from the dataframe:
# YOUR CODE HERE!
yen_futures_settle.plot(y='Settle', title='Yen Futures Settle Prices', figsize=(20,10))
#ax.legend(['Settle prices'])
```
---
# Decomposition Using a Hodrick-Prescott Filter
Using a Hodrick-Prescott Filter, decompose the Settle price into a trend and noise.
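For intuition, the HP filter picks the trend $\tau$ that minimizes $\sum_t (y_t - \tau_t)^2 + \lambda \sum_t (\Delta^2 \tau_t)^2$, trading fit against smoothness. A minimal NumPy sketch of that computation (the `statsmodels` call used below does the same thing much faster; `lamb=1600` is the library default):

```python
import numpy as np

def hp_filter(y, lamb=1600.0):
    """Hodrick-Prescott filter: split y into noise (cycle) and trend.

    The trend solves (I + lamb * D'D) trend = y, where D is the
    second-difference operator, so larger lamb gives a smoother trend.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(n) + lamb * (D.T @ D), y)
    return y - trend, trend

# tiny synthetic check: noise + trend reassemble the series exactly
y = np.sin(np.linspace(0, 3, 200)) + np.random.default_rng(0).normal(0, 0.1, 200)
noise, trend = hp_filter(y)
assert np.allclose(noise + trend, y)
```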
```
import statsmodels.api as sm
# Apply the Hodrick-Prescott Filter by decomposing the "Settle" price into two separate series:
# YOUR CODE HERE!
#Hodrick-Prescott filter
ts_noise, ts_trend = sm.tsa.filters.hpfilter(yen_futures_settle['Settle'])
```
#### Test the noise and trend datasets
```
print(ts_noise.head())
print(ts_noise[1])
print(ts_trend.head())
# Create a dataframe of just the settle price, and add columns for "noise" and "trend" series from above:
# YOUR CODE HERE!
yen_futures_settle['noise'] = ts_noise
yen_futures_settle['trend'] = ts_trend
yen_futures_settle.head()
```
#### Drop noise from data frame
```
yen_futures_settle_trend = yen_futures_settle.drop(columns=['noise'])
yen_futures_settle_trend.head()
yen_futures_settle_trend.tail()
```
#### filter 2015 to now
```
yen_futures_settle_trend2015 = yen_futures_settle_trend['2015':]
# Plot the Settle Price vs. the Trend for 2015 to the present
# YOUR CODE HERE!
#yen_futures_settle_trend.plot(title='Yen Futures Settle vs Trend', figsize=(20,10))
yen_futures_settle_trend2015.plot(title='Yen Futures Settle vs Trend', figsize=(20,10))
# Plot the Settle Noise
# YOUR CODE HERE!
ts_noise.plot(title='Noise', figsize=(20,10))
```
---
# Forecasting Returns using an ARMA Model
Using futures Settle *Returns*, estimate an ARMA model
1. ARMA: Create an ARMA model and fit it to the returns data. Note: Set the AR and MA ("p" and "q") parameters to p=2 and q=1: order=(2, 1).
2. Output the ARMA summary table and take note of the p-values of the lags. Based on the p-values, is the model a good fit (p < 0.05)?
3. Plot the 5-day forecast of the forecasted returns (the results forecast from ARMA model)
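As a reminder of what an ARMA(2, 1) assumes, here is a tiny simulation of such a process (the coefficient values are illustrative choices, not estimates from the data):

```python
import numpy as np

# ARMA(2,1): y_t = phi1*y_{t-1} + phi2*y_{t-2} + e_t + theta1*e_{t-1}
phi1, phi2, theta1 = 0.5, -0.2, 0.3   # illustrative, stationary coefficients
rng = np.random.default_rng(42)
e = rng.normal(0.0, 1.0, 500)          # white-noise shocks
y = np.zeros(500)
for t in range(2, 500):
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + e[t] + theta1 * e[t - 1]
```

Fitting an ARMA model is the inverse problem: recover `phi1`, `phi2`, `theta1` from an observed series like the returns below.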
```
# Create a series using "Settle" price percentage returns, drop any nan"s, and check the results:
# (Make sure to multiply the pct_change() results by 100)
# In this case, you may have to replace inf, -inf values with np.nan"s
returns = (yen_futures[["Settle"]].pct_change() * 100)
returns = returns.replace([np.inf, -np.inf], np.nan).dropna()
returns.tail()
import statsmodels.api as sm
# Estimate an ARMA model using statsmodels (use order=(2, 1))
# YOUR CODE HERE!
from statsmodels.tsa.arima_model import ARMA
# For the order parameter, the first value (2) is the number of AR lags
# and the second value (1) is the number of MA lags
model = ARMA(returns.values, order=(2,1))
# Fit the model and assign it to a variable called results
# YOUR CODE HERE!
results = model.fit()
# Output model summary results:
# YOUR CODE HERE!
results.summary()
# Plot the 5 Day Returns Forecast
# YOUR CODE HERE!
pd.DataFrame(results.forecast(steps=5)[0]).plot(title="5 Day Returns Forecast")
pd.DataFrame(results.forecast(steps=5)[0])
```
---
# Forecasting the Settle Price using an ARIMA Model
1. Using the *raw* Yen **Settle Price**, estimate an ARIMA model.
1. Set P=5, D=1, and Q=1 in the model (e.g., ARIMA(df, order=(5,1,1)))
2. P= # of Auto-Regressive Lags, D= # of Differences (this is usually =1), Q= # of Moving Average Lags
2. Output the ARIMA summary table and take note of the p-values of the lags. Based on the p-values, is the model a good fit (p < 0.05)?
3. Construct a 5 day forecast for the Settle Price. What does the model forecast will happen to the Japanese Yen in the near term?
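The D=1 term means the ARIMA fits the AR/MA structure to the *first differences* of the price series; cumulative summing the differences recovers the original levels, which is how forecasts are mapped back to prices. A quick check of that invertibility:

```python
import numpy as np

prices = np.array([100.0, 102.0, 101.0, 105.0, 104.0])
diffs = np.diff(prices)                               # what the AR/MA part actually sees
levels = prices[0] + np.concatenate([[0.0], np.cumsum(diffs)])
assert np.allclose(levels, prices)                    # differencing is invertible
```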
```
from statsmodels.tsa.arima_model import ARIMA
# Estimate an ARIMA model:
# Hint: ARIMA(df, order=(p, d, q))
# YOUR CODE HERE!
model2 = ARIMA(yen_futures_settle['Settle'], order=(5, 1, 1))
# Fit the model
# YOUR CODE HERE!
res2 = model2.fit()
# Output model summary results:
res2.summary()
# Plot the 5 Day Price Forecast
# YOUR CODE HERE!
pd.DataFrame(res2.forecast(steps=5)[0]).plot(title="5 Day Futures Price Forecast")
pd.DataFrame(res2.forecast(steps=5)[0])
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
plot_acf(yen_futures_settle['Settle'], lags=30, zero=False)
plot_pacf(yen_futures_settle['Settle'], lags=30, zero=False)
```
---
# Volatility Forecasting with GARCH
Rather than predicting returns, let's forecast near-term **volatility** of Japanese Yen futures returns. Being able to accurately predict volatility will be extremely useful if we want to trade in derivatives or quantify our maximum loss.
Using futures Settle *Returns*, estimate a GARCH model.
1. GARCH: Create a GARCH model and fit it to the returns data. Note: Set the parameters to p=2 and q=1: order=(2, 1).
2. Output the GARCH summary table and take note of the p-values of the lags. Based on the p-values, is the model a good fit (p < 0.05)?
3. Plot the 5-day forecast of the volatility.
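For intuition, a GARCH(2, 1) models conditional variance as a recursion on recent squared returns and the previous variance. A minimal sketch with made-up parameters (these are assumptions for illustration, not the fitted values below):

```python
import numpy as np

def garch_21_variance(r, omega, a1, a2, b1):
    """GARCH(2,1) recursion:
       sigma2_t = omega + a1*r_{t-1}^2 + a2*r_{t-2}^2 + b1*sigma2_{t-1}"""
    sigma2 = np.full(len(r), r.var())  # initialize at the sample variance
    for t in range(2, len(r)):
        sigma2[t] = omega + a1 * r[t-1]**2 + a2 * r[t-2]**2 + b1 * sigma2[t-1]
    return sigma2

r = np.random.default_rng(1).normal(0.0, 0.5, 300)    # stand-in returns
sigma2 = garch_21_variance(r, omega=0.01, a1=0.05, a2=0.03, b1=0.9)
assert (sigma2 > 0).all()                             # variances stay positive
```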
```
yen_futures_settle_only.head()
#import arch
from arch import arch_model
# Estimate a GARCH model:
# YOUR CODE HERE!
model = arch_model(returns, mean="Zero", vol="GARCH", p=2, q=1)
# Fit the model
# YOUR CODE HERE!
res_garch = model.fit(disp="off")
# Summarize the model results
# YOUR CODE HERE!
res_garch.summary()
fig = res_garch.plot(annualize='D')
# Find the last day of the dataset
last_day = returns.index.max().strftime('%Y-%m-%d')
last_day
# Create a 5 day forecast of volatility
forecast_horizon = 5
# Start the forecast using the last_day calculated above
# YOUR CODE HERE!
forecasts = res_garch.forecast(start=last_day, horizon=forecast_horizon)
forecasts
# Annualize the forecast
intermediate = np.sqrt(forecasts.variance.dropna() * 252)
intermediate.head()
# Transpose the forecast so that it is easier to plot
final = intermediate.dropna().T
final.head()
# Plot the final forecast
# YOUR CODE HERE!
final.plot(title="5 Day Forecast of Volatility")
```
---
# Conclusions
Based on your time series analysis, would you buy the yen now?
Is the risk of the yen expected to increase or decrease?
Based on the model evaluation, would you feel confident in using these models for trading?
---
# Semantic Segmentation and Data Sets
In our discussion of object detection issues in the previous sections, we only used rectangular bounding boxes to label and predict objects in images. In this section, we will look at semantic segmentation, which attempts to segment images into regions with different semantic categories. These semantic regions label and predict objects at the pixel level. Figure 9.10 shows a semantically-segmented image, with areas labeled "dog", "cat", and "background". As you can see, compared to object detection, semantic segmentation labels areas with pixel-level borders, for significantly greater precision.

## Image Segmentation and Instance Segmentation
In the computer vision field, there are two important methods related to semantic segmentation: image segmentation and instance segmentation. Here, we will distinguish these concepts from semantic segmentation as follows:
* Image segmentation divides an image into several constituent regions. This method generally uses the correlations between pixels in an image. During training, labels are not needed for image pixels. However, during prediction, this method cannot ensure that the segmented regions have the semantics we want. If we input the image in Figure 9.10, image segmentation might divide the dog into two regions, one covering the dog's mouth and eyes where black is the prominent color and the other covering the rest of the dog where yellow is the prominent color.
* Instance segmentation is also called simultaneous detection and segmentation. This method attempts to identify the pixel-level regions of each object instance in an image. In contrast to semantic segmentation, instance segmentation not only distinguishes semantics, but also different object instances. If an image contains two dogs, instance segmentation will distinguish which pixels belong to which dog.
## Pascal VOC2012 Semantic Segmentation Data Set
In the semantic segmentation field, one important data set is Pascal VOC2012[1]. To better understand this data set, we must first import the package or module needed for the experiment.
```
import sys
sys.path.insert(0, '..')
%matplotlib inline
import d2l
from mxnet import gluon, image, nd
from mxnet.gluon import data as gdata, utils as gutils
import os
import sys
import tarfile
```
We download the archive containing this data set to the `../data` path. The archive is about 2GB, so it will take some time to download. After you decompress the archive, the data set is located in the `../data/VOCdevkit/VOC2012` path.
```
# This function has been saved in the d2l package for future use
def download_voc_pascal(data_dir='../data'):
voc_dir = os.path.join(data_dir, 'VOCdevkit/VOC2012')
url = ('http://host.robots.ox.ac.uk/pascal/VOC/voc2012'
'/VOCtrainval_11-May-2012.tar')
sha1 = '4e443f8a2eca6b1dac8a6c57641b67dd40621a49'
fname = gutils.download(url, data_dir, sha1_hash=sha1)
with tarfile.open(fname, 'r') as f:
f.extractall(data_dir)
return voc_dir
voc_dir = download_voc_pascal()
```
Go to `../data/VOCdevkit/VOC2012` to see the different parts of the data set. The `ImageSets/Segmentation` path contains text files that specify the training and testing examples. The `JPEGImages` and `SegmentationClass` paths contain the example input images and labels, respectively. These labels are also in image format, with the same dimensions as the input images to which they correspond. In the labels, pixels with the same color belong to the same semantic category. The `read_voc_images` function defined below reads all input images and labels to the memory.
```
# This function has been saved in the d2l package for future use
def read_voc_images(root=voc_dir, is_train=True):
txt_fname = '%s/ImageSets/Segmentation/%s' % (
root, 'train.txt' if is_train else 'val.txt')
with open(txt_fname, 'r') as f:
images = f.read().split()
features, labels = [None] * len(images), [None] * len(images)
for i, fname in enumerate(images):
features[i] = image.imread('%s/JPEGImages/%s.jpg' % (root, fname))
labels[i] = image.imread(
'%s/SegmentationClass/%s.png' % (root, fname))
return features, labels
train_features, train_labels = read_voc_images()
```
We draw the first five input images and their labels. In the label images, white represents borders and black represents the background. Other colors correspond to different categories.
```
n = 5
imgs = train_features[0:n] + train_labels[0:n]
d2l.show_images(imgs, 2, n);
```
Next, we list each RGB color value in the labels and the categories they label.
```
# This constant has been saved in the d2l package for future use
VOC_COLORMAP = [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0],
[0, 0, 128], [128, 0, 128], [0, 128, 128], [128, 128, 128],
[64, 0, 0], [192, 0, 0], [64, 128, 0], [192, 128, 0],
[64, 0, 128], [192, 0, 128], [64, 128, 128], [192, 128, 128],
[0, 64, 0], [128, 64, 0], [0, 192, 0], [128, 192, 0],
[0, 64, 128]]
# This constant has been saved in the d2l package for future use
VOC_CLASSES = ['background', 'aeroplane', 'bicycle', 'bird', 'boat',
'bottle', 'bus', 'car', 'cat', 'chair', 'cow',
'diningtable', 'dog', 'horse', 'motorbike', 'person',
'potted plant', 'sheep', 'sofa', 'train', 'tv/monitor']
```
After defining the two constants above, we can easily find the category index for each pixel in the labels.
```
colormap2label = nd.zeros(256 ** 3)
for i, colormap in enumerate(VOC_COLORMAP):
colormap2label[(colormap[0] * 256 + colormap[1]) * 256 + colormap[2]] = i
# This function has been saved in the d2l package for future use
def voc_label_indices(colormap, colormap2label):
colormap = colormap.astype('int32')
idx = ((colormap[:, :, 0] * 256 + colormap[:, :, 1]) * 256
+ colormap[:, :, 2])
return colormap2label[idx]
```
For example, in the first example image, the category index for the front part of the airplane is 1 and the index for the background is 0.
```
y = voc_label_indices(train_labels[0], colormap2label)
y[105:115, 130:140], VOC_CLASSES[1]
```
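The lookup above works by packing each `(R, G, B)` triple into a single integer `(R*256 + G)*256 + B`, so finding a pixel's class is a plain array index. A small NumPy illustration with two hypothetical colors:

```python
import numpy as np

colormap = [[0, 0, 0], [128, 0, 0]]       # hypothetical: background, class 1
lut = np.zeros(256 ** 3, dtype=np.int32)  # one slot per possible RGB triple
for i, (r, g, b) in enumerate(colormap):
    lut[(r * 256 + g) * 256 + b] = i

# a 1x2 label "image": one background pixel, one class-1 pixel
img = np.array([[[0, 0, 0], [128, 0, 0]]], dtype=np.int64)
idx = (img[..., 0] * 256 + img[..., 1]) * 256 + img[..., 2]
print(lut[idx])  # [[0 1]]
```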
### Data Preprocessing
In the preceding chapters, we scaled images to make them fit the input shape of the model. In semantic segmentation, this method would require us to re-map the predicted pixel categories back to the original-size input image. It would be very difficult to do this precisely, especially in segmented regions with different semantics. To avoid this problem, we crop the images to set dimensions and do not scale them. Specifically, we use the random cropping method used in image augmentation to crop the same region from input images and their labels.
```
# This function has been saved in the d2l package for future use
def voc_rand_crop(feature, label, height, width):
feature, rect = image.random_crop(feature, (width, height))
label = image.fixed_crop(label, *rect)
return feature, label
imgs = []
for _ in range(n):
imgs += voc_rand_crop(train_features[0], train_labels[0], 200, 300)
d2l.show_images(imgs[::2] + imgs[1::2], 2, n);
```
### Data Set Classes for Custom Semantic Segmentation
We inherit the `Dataset` class provided by Gluon to customize the semantic segmentation data set class `VOCSegDataset`. By implementing the `__getitem__` function, we can arbitrarily access the input image with the index `idx` and the category indexes for each of its pixels from the data set. As some images in the data set may be smaller than the output dimensions specified for random cropping, we must remove these examples by using a custom `filter` function. In addition, we define the `normalize_image` function to normalize each of the three RGB channels of the input images.
```
# This class has been saved in the d2l package for future use
class VOCSegDataset(gdata.Dataset):
def __init__(self, is_train, crop_size, voc_dir, colormap2label):
self.rgb_mean = nd.array([0.485, 0.456, 0.406])
self.rgb_std = nd.array([0.229, 0.224, 0.225])
self.crop_size = crop_size
features, labels = read_voc_images(root=voc_dir, is_train=is_train)
self.features = [self.normalize_image(feature)
for feature in self.filter(features)]
self.labels = self.filter(labels)
self.colormap2label = colormap2label
print('read ' + str(len(self.features)) + ' examples')
def normalize_image(self, img):
return (img.astype('float32') / 255 - self.rgb_mean) / self.rgb_std
def filter(self, imgs):
return [img for img in imgs if (
img.shape[0] >= self.crop_size[0] and
img.shape[1] >= self.crop_size[1])]
def __getitem__(self, idx):
feature, label = voc_rand_crop(self.features[idx], self.labels[idx],
*self.crop_size)
return (feature.transpose((2, 0, 1)),
voc_label_indices(label, self.colormap2label))
def __len__(self):
return len(self.features)
```
### Read the Data Set
Using the custom `VOCSegDataset` class, we create the training set and testing set instances. We set the random cropping operation to output images with the shape $320\times 480$. Below, we can see the number of examples retained in the training and testing sets.
```
crop_size = (320, 480)
voc_train = VOCSegDataset(True, crop_size, voc_dir, colormap2label)
voc_test = VOCSegDataset(False, crop_size, voc_dir, colormap2label)
```
We set the batch size to 64 and define the iterators for the training and testing sets.
```
batch_size = 64
num_workers = 0 if sys.platform.startswith('win32') else 4
train_iter = gdata.DataLoader(voc_train, batch_size, shuffle=True,
last_batch='discard', num_workers=num_workers)
test_iter = gdata.DataLoader(voc_test, batch_size, last_batch='discard',
num_workers=num_workers)
```
Print the shape of the first mini-batch. In contrast to image classification and object detection, labels here are three-dimensional arrays.
```
for X, Y in train_iter:
print(X.shape)
print(Y.shape)
break
```
## Summary
* Semantic segmentation looks at how images can be segmented into regions with different semantic categories.
* In the semantic segmentation field, one important data set is Pascal VOC2012.
* Because the input images and labels in semantic segmentation have a one-to-one correspondence at the pixel level, we randomly crop them to a fixed size, rather than scaling them.
## Exercises
* Recall the content we covered in the ["Image Augmentation"](image-augmentation.md) section. Which of the image augmentation methods used in image classification would be hard to use in semantic segmentation?
## Reference
[1] Pascal VOC2012 data set. http://host.robots.ox.ac.uk/pascal/VOC/voc2012/
## Scan the QR Code to [Discuss](https://discuss.mxnet.io/t/2448)

---
```
import fitsio as ft
import healpy as hp
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.path.append('/users/PHS0336/medirz90/github/LSSutils')
from lssutils.utils import make_hp
from lssutils.lab import get_cl
from lssutils.extrn.galactic.hpmaps import logHI
from sklearn.linear_model import LinearRegression
from lssutils.dataviz import setup_color
setup_color()
def run_linear(xtrain, ytrain,
xtest, ytest,
x, ix):
reg2 = LinearRegression().fit(xtrain, ytrain)
npred = reg2.predict(xtest)
print(f'MSE: {((npred - ytest)**2).mean():.3f} MAE: {(abs(npred - ytest)).mean():.3f}')
sfun = reg2.predict(x)
return make_hp(1024, ix, sfun, True) / sfun.mean()
lh = logHI(nside_out=1024, path='/fs/ess/PHS0336/data/templates/NHI_HPX.fits')
df = ft.read('/fs/ess/PHS0336/data/rongpu/imaging_sys/tables/v3/nelg_features_bmzls_1024.fits')
loghi = lh.map[:, np.newaxis]
hi = 10**(loghi-20.)
ix = df['hpix']
frac = make_hp(1024, df['hpix'], df['fracgood'], True)
mask = np.isfinite(frac)
ngal = make_hp(1024, df['hpix'], df['label'], True)
print(mask.sum())
x1 = loghi #np.column_stack([loghi, loghi*loghi])
x2 = hi #np.column_stack([hi, hi*hi])
np.random.seed(85)
train_ix = np.random.choice(ix, size=int(0.8*ix.size), replace=False)
test_ix = np.setdiff1d(ix, train_ix)
sf_loghi = run_linear(x1[train_ix], ngal[train_ix],
x1[test_ix], ngal[test_ix],
x1[ix], ix)
sf_loghi *= (ngal[ix]/sf_loghi[ix]).sum() / ngal[ix].sum()
sf_hi = run_linear(x2[train_ix], ngal[train_ix],
x2[test_ix], ngal[test_ix],
x2[ix], ix)
sf_hi *= (ngal[ix]/sf_hi[ix]).sum() / ngal[ix].sum()
kw = dict(min=0.9, max=1.1, rot=-95, cmap=plt.cm.jet)
hp.mollview(sf_hi, **kw)
hp.mollview(sf_loghi, **kw)
hp.mollview(ngal/df['label'].mean(), **kw)
cl_null = get_cl(ngal, frac, mask, njack=0)
cl_hi = get_cl(ngal, frac, mask, njack=0, selection_fn=sf_hi)
cl_loghi = get_cl(ngal, frac, mask, njack=0, selection_fn=sf_loghi)
fg, ax = plt.subplots(nrows=2, figsize=(6, 8), sharex=True)
fg.subplots_adjust(hspace=0.0)
for n_i, cl_i in zip(['No weight', 'HI', 'logHI'],
[cl_null, cl_hi, cl_loghi]):
ln = ax[0].plot(1000*cl_i['cl_gg']['l']*cl_i['cl_gg']['cl'], alpha=0.8, label=n_i)
ax[1].plot(cl_i['cl_gg']['cl']/cl_null['cl_gg']['cl'], color=ln[0].get_color())
ax[0].legend()
ax[0].set(ylabel=r'$\ell C_{\ell}~[10^{-3}]$', xscale='log',)
ax[1].set(xlabel=r'$\ell$', ylim=(0.0, 1.45), ylabel=r'$C_{\ell}$ / No weight')
```
## Updated Galaxy Density Count
```
old = ft.read('/fs/ess/PHS0336/data/rongpu/imaging_sys/tables/v2/nelg_features_bmzls_1024_old.fits')
new = ft.read('/fs/ess/PHS0336/data/rongpu/imaging_sys/tables/v3/nelg_features_bmzls_1024.fits')
old.size, new.size
np.array_equal(old['hpix'], new['hpix'])
old['label'], new['label']
frac = make_hp(1024, new['hpix'], new['fracgood'], True)
mask = np.isfinite(frac)
mask.sum()
old['features'][:, 0]-new['features'][:, 0]
syst = make_hp(1024, new['hpix'], new['features'][:, 0])[:, np.newaxis]
syst.shape
nold = make_hp(1024, old['hpix'], old['label'])
nnew = make_hp(1024, new['hpix'], new['label'])
cl_old = get_cl(nold, frac, mask, systematics=syst, njack=0, cross_only=True)
cl_new = get_cl(nnew, frac, mask, systematics=syst, njack=0, cross_only=True)
plt.plot(cl_old['cl_gg']['cl'], label='Old')
plt.plot(cl_new['cl_gg']['cl'], label='New')
plt.legend()
# plt.xscale('log')
plt.yscale('log') #symlog', linthreshy=1.0e-6)
plt.ylim(ymin=8.0e-9)
plt.ylabel('C_gg')
plt.xlabel(r'$\ell$')
from lssutils.utils import histogram_cell
def plot(cl, **kw):
lb = np.arange(0, 3000, 100)
lb_, cl_ = histogram_cell(cl, bins=lb)
al = kw.pop('alpha')
lab = kw.pop('label')
ln = plt.plot(cl, alpha=al, **kw)
plt.plot(lb_, cl_, color=ln[0].get_color(),
label=lab, marker='o', mfc='w', **kw)
plot(cl_old['cl_sg'][0]['cl'], label='Old', alpha=0.5)
plot(cl_new['cl_sg'][0]['cl'], label='New', alpha=0.5)
plt.legend()
plt.axhline(0)
plt.ylim(-1.0e-8, 1.0e-8)
# plt.yscale('symlog', linthreshy=1.0e-9)
plt.ylabel('C_gs')
plt.xlabel(r'$\ell$')
```
---
# Scroll down to get to the interesting tables...
# Construct list of properties of widgets
"Properties" here is one of:
+ `keys`
+ `traits()`
+ `class_own_traits()`
Common (i.e. uninteresting) properties are filtered out.
The dependency on astropy is for their Table. Replace it with pandas if you want...
```
import itertools
from ipywidgets import *
from IPython.display import display
from traitlets import TraitError
from astropy.table import Table, Column
```
# Function definitions
## Calculate "interesting" properties
```
def properties(widget, omit=None, source=None):
"""
Return a list of widget properties for a widget instance, omitting
common properties.
Parameters
----------
widget : ipywidgets.Widget instance
The widget for which the list of properties is desired.
omit : list, optional
List of properties to omit in the return value. Default is
``['layout', 'style', 'msg_throttle']``, and for ``source='traits'``
is extended to add ``['keys', 'comm']``.
source : str, one of 'keys', 'traits', 'class_own_traits', 'style_keys', optional
Source of property list for widget. Default is ``'keys'``.
"""
if source is None:
source = 'keys'
valid_sources = ('keys', 'traits', 'class_own_traits', 'style_keys')
if source not in valid_sources:
raise ValueError('source must be one of {}'.format(', '.join(valid_sources)))
if omit is None:
omit = ['layout', 'style', 'msg_throttle']
if source == 'keys':
props = widget.keys
elif source == 'traits':
props = widget.traits()
omit.extend(['keys', 'comm'])
elif source == 'class_own_traits':
props = widget.class_own_traits()
elif source == 'style_keys':
props = widget.style.keys
props = [k for k in props if not k.startswith('_')]
return [k for k in props if k not in omit]
```
## Create a table (cross-tab style) for which properties are available for which widgets
This is the only place astropy.table.Table is used, so delete if you want to.
```
def table_for_keys(keys, keys_info, source):
unique_keys = set()
for k in keys:
unique_keys.update(keys_info[k])
unique_keys = sorted(unique_keys)
string_it = lambda x: 'X' if x else ''
colnames = ['Property ({})'.format(source)] + keys
columns = [Column(name=colnames[0], data=unique_keys)]
for c in colnames[1:]:
column = Column(name=c, data=[string_it(k in keys_info[c]) for k in unique_keys])
columns.append(column)
return Table(columns)
```
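As the note above says, astropy is only used for display, so a pandas equivalent is straightforward. A sketch (assuming pandas is available; `table_for_keys_pd` is a hypothetical drop-in replacement):

```python
import pandas as pd

def table_for_keys_pd(keys, keys_info, source):
    """Cross-tab of properties vs. widget names, as a pandas DataFrame."""
    unique_keys = sorted(set().union(*(keys_info[k] for k in keys)))
    data = {'Property ({})'.format(source): unique_keys}
    for k in keys:
        data[k] = ['X' if u in keys_info[k] else '' for u in unique_keys]
    return pd.DataFrame(data)

# tiny demo with made-up widget names and properties
demo = table_for_keys_pd(['A', 'B'], {'A': ['x', 'y'], 'B': ['y']}, 'keys')
print(demo)
```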
## List of widget objects...
```
widget_list = [
IntSlider,
FloatSlider,
IntRangeSlider,
FloatRangeSlider,
IntProgress,
FloatProgress,
BoundedIntText,
BoundedFloatText,
IntText,
FloatText,
ToggleButton,
Checkbox,
Valid,
Dropdown,
RadioButtons,
Select,
SelectionSlider,
SelectionRangeSlider,
ToggleButtons,
SelectMultiple,
Text,
Textarea,
Label,
HTML,
HTMLMath,
Image,
Button,
Play,
DatePicker,
ColorPicker,
Box,
HBox,
VBox,
Accordion,
Tab
]
```
## ...and their names
```
names = [wd.__name__ for wd in widget_list]
```
## Figure out the properties for each widget
The `try`/`except` below is to catch a couple of classes that *require* that `options` be passed on initialization.
```
property_source = 'keys'
all_keys = []
for widget_class in widget_list:
try:
keys = properties(widget_class(), source=property_source)
except TraitError as e:
keys = properties(widget_class(options=(2,10)), source=property_source)
finally:
all_keys.append(keys)
```
Probably should have used a dict from the beginning...
```
key_dict = {k: v for k, v in zip(names, all_keys)}
```
## Define a few groups of widgets by widget interface type
This makes for nicer (i.e. more compact and readable) tables later on.
```
sliders = [k for k in key_dict.keys() if 'Slider' in k]
buttons = [k for k in key_dict.keys() if 'Button' in k]
containers = ['Box', 'VBox', 'HBox', 'Accordion', 'Tab']
texts = [k for k in names if 'text' in k or 'Text' in k] + [k for k in names if 'HTML' in k] + ['Label']
progress = [k for k in names if 'Progress' in k]
selects = ['Dropdown', 'Select', 'SelectMultiple']
all_so_far = sliders + buttons + texts + containers + progress + selects
others = [k for k in names if k not in all_so_far]
slider_keys = set()
```
# Tables of keys (synced properties)
## Sliders
```
table_for_keys(sliders, key_dict, source=property_source)
```
## Buttons
```
table_for_keys(buttons, key_dict, source=property_source)
```
## Containers
```
table_for_keys(containers, key_dict, source=property_source)
```
## Text
```
table_for_keys(texts, key_dict, source=property_source)
```
## Progress bars
```
table_for_keys(progress, key_dict, source=property_source)
```
# Select widgets
```
table_for_keys(selects, key_dict, source=property_source)
```
## Everything else
```
table_for_keys(others, key_dict, source=property_source)
property_source = 'style_keys'
style_keys = []
for widget_class in widget_list:
try:
keys = properties(widget_class(), source=property_source)
except TraitError as e:
keys = properties(widget_class(options=(2,10)), source=property_source)
except AttributeError:
keys=''
finally:
style_keys.append(keys)
for w, s in zip(names, style_keys):
print('{} has style keys: {}'.format(w, ', '.join(s)))
```
---
<a href="https://colab.research.google.com/github/RoetGer/coding-practice/blob/main/solved_coding_problems.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**From Leetcode - Maximum Subarray**
Given an integer array nums, find the contiguous subarray (containing at least one number) which has the largest sum and return its sum.
Solution approach:
Kadane’s Algorithm:
```
Initialize:
max_so_far = INT_MIN
max_ending_here = 0
Loop for each element of the array
(a) max_ending_here = max_ending_here + a[i]
(b) if(max_so_far < max_ending_here)
max_so_far = max_ending_here
(c) if(max_ending_here < 0)
max_ending_here = 0
return max_so_far
```
```
import sys
from typing import List
class Solution:
def maxSubArray(self, nums: List[int]) -> int:
max_so_far = -sys.maxsize - 1
max_ending_here = 0
size = len(nums)
for i in range(0, size):
max_ending_here = max_ending_here + nums[i]
if (max_so_far < max_ending_here):
max_so_far = max_ending_here
if max_ending_here < 0:
max_ending_here = 0
return max_so_far
sol = Solution()
assert sol.maxSubArray([-2,1,-3,4,-1,2,1,-5,4]) == 6
assert sol.maxSubArray([1]) == 1
assert sol.maxSubArray([5,4,-1,7,8]) == 23
assert sol.maxSubArray([-1]) == -1
```
**From Leetcode - Best Time to Buy and Sell Stocks 1**
You are given an array prices where prices[i] is the price of a given stock on the ith day.
You want to maximize your profit by choosing a single day to buy one stock and choosing a different day in the future to sell that stock.
Return the maximum profit you can achieve from this transaction. If you cannot achieve any profit, return 0.
```
class Solution:
def maxProfit(self, prices: List[int]) -> int:
current_min = prices[0]
max_profit = 0
for p in prices[1:]:
profit = p - current_min
if profit > max_profit:
max_profit = profit
if p < current_min:
current_min = p
return max_profit
sol = Solution()
assert sol.maxProfit([7,1,5,3,6,4]) == 5
assert sol.maxProfit([7,6,4,3,1]) == 0
ll = [7]
for i in ll[0:]:
print(i)
```
**From Leetcode - Best Time to Buy and Sell Stocks 2**
You are given an array prices where prices[i] is the price of a given stock on the ith day.
Find the maximum profit you can achieve. You may complete as many transactions as you like (i.e., buy one and sell one share of the stock multiple times).
Note: You may not engage in multiple transactions simultaneously (i.e., you must sell the stock before you buy again).
```
from typing import List
class Solution:
def maxProfit(self, prices: List[int]) -> int:
max_profit = 0
for t in range(len(prices) - 1):
if prices[t] < prices[t+1]:
max_profit += prices[t+1] - prices[t]
return max_profit
sol = Solution()
assert sol.maxProfit([7,1,5,3,6,4]) == 7
assert sol.maxProfit([1,2,3,4,5]) == 4
assert sol.maxProfit([7,6,4,3,1]) == 0
sol.maxProfit([7,1,5,3,6,4])
```
**From Leetcode - Best Time to Buy and Sell Stocks with Cooldown**
You are given an array prices where prices[i] is the price of a given stock on the ith day.
Find the maximum profit you can achieve. You may complete as many transactions as you like (i.e., buy one and sell one share of the stock multiple times) with the following restrictions:
After you sell your stock, you cannot buy stock on the next day (i.e., cooldown one day).
Note: You may not engage in multiple transactions simultaneously (i.e., you must sell the stock before you buy again).
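Besides the exploratory recursion tried below, this problem also admits a linear-time state-machine DP with three states (holding a share, sold today, resting and free to buy). A sketch:

```python
def max_profit_cooldown(prices):
    """O(n) state machine: hold = max profit while holding a share,
    sold = max profit having sold today (cooldown tomorrow),
    rest = max profit while flat and free to buy."""
    if not prices:
        return 0
    hold, sold, rest = -prices[0], 0, 0
    for p in prices[1:]:
        hold, sold, rest = max(hold, rest - p), hold + p, max(rest, sold)
    return max(sold, rest)

assert max_profit_cooldown([1, 2, 3, 0, 2]) == 3
assert max_profit_cooldown([6, 1, 3, 2, 4, 7]) == 6
```

Unlike the recursive solution, this handles the long test arrays further down in milliseconds.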
```
from typing import List
# First attempt (left unfinished; commented out so the cell still runs):
# class Solution:
#     def maxProfit(self, prices: List[int]) -> int:
#         max_profit = 0
#         cooldown = False
#         for t in range(len(prices) - 3):
#             if cooldown:
#                 cooldown = False
#                 continue
#             diff_t = prices[t+1] - prices[t]
#             if diff_t < 0:
#                 continue
#             else:
#                 if prices[t] > prices[t+2]:
#                     diff_tp1 = prices[t+3] - prices[t+2]
#                     ...
class Solution:
def maxProfit(self, prices: List[int]) -> int:
# bought = False
# for t in range(len(prices) - 2):
# if (prices[t] > prices[t+1]) & not bought:
# continue
# elif prices[t]
# max_profit += prices[t+1] - prices[t]
for t in range(len(prices)):
# TODO:
# - Expand over the next 3 prices the tree
# - Pick always maximizing action
max_profit = self._maxProfit(prices, purchase_price=-1)
return max_profit
def _maxProfit(self, prices: List[int], purchase_price: int) -> int:
lpr = len(prices)
if lpr == 0:
return 0
if lpr == 1:
return prices[0] - purchase_price if purchase_price > -1 else 0
if purchase_price > -1:
max_profit = max(
(prices[0] - purchase_price) + self._maxProfit(prices[2:], purchase_price=-1),
self._maxProfit(prices[1:], purchase_price=purchase_price)
)
else:
max_profit = max(
self._maxProfit(prices[1:], purchase_price=prices[0]),
self._maxProfit(prices[1:], purchase_price=-1)
)
return max_profit
sol = Solution()
assert sol.maxProfit([1,2,3,0,2]) == 3
assert sol.maxProfit([1]) == 0
assert sol.maxProfit([6,1,3,2,4,7]) == 6
sol.maxProfit([1,2,3,1,3])
sol.maxProfit([1,2,3,0,2])
sol.maxProfit([6,1,3,2,4,7])
#sol.maxProfit([1,3,2,4,7])
sol.maxProfit([1,2,3,0,2])
sol.maxProfit([1,2,3])
%%time
assert sol.maxProfit([48,12,60,93,97,42,25,64,17,56,85,93,9,48,52,42,58,85,81,84,69,36,1,54,23,15,72,15,11,94]) == 428
%%time
sol.maxProfit([48,12,60,93,97,42,25,64,17,56,85,93,9,48,52,42,58,85,81,84,69,36,1,54,23,15,72,15,11,94])
sol.maxProfit([48,12,60,93,97,42,25,64,17,56,85,93,9,48,52,42,58,85,81,84,69,36,1,54,23,15,72,15,11,94])
%%time
sol.maxProfit([70,4,83,56,94,72,78,43,2,86,65,100,94,56,41,66,3,33,10,3,45,94,15,12,78,60,58,0,58,15,21,7,11,41,12,96,83,77,47,62,27,19,40,63,30,4,77,52,17,57,21,66,63,29,51,40,37,6,44,42,92,16,64,33,31,51,36,0,29,95,92,35,66,91,19,21,100,95,40,61,15,83,31,55,59,84,21,99,45,64,90,25,40,6,41,5,25,52,59,61,51,37,92,90,20,20,96,66,79,28,83,60,91,30,52,55,1,99,8,68,14,84,59,5,34,93,25,10,93,21,35,66,88,20,97,25,63,80,20,86,33,53,43,86,53,55,61,77,9,2,56,78,43,19,68,69,49,1,6,5,82,46,24,33,85,24,56,51,45,100,94,26,15,33,35,59,25,65,32,26,93,73,0,40,92,56,76,18,2,45,64,66,64,39,77,1,55,90,10,27,85,40,95,78,39,40,62,30,12,57,84,95,86,57,41,52,77,17,9,15,33,17,68,63,59,40,5,63,30,86,57,5,55,47,0,92,95,100,25,79,84,93,83,93,18,20,32,63,65,56,68,7,31,100,88,93,11,43,20,13,54,34,29,90,50,24,13,44,89,57,65,95,58,32,67,38,2,41,4,63,56,88,39,57,10,1,97,98,25,45,96,35,22,0,37,74,98,14,37,77,54,40,17,9,28,83,13,92,3,8,60,52,64,8,87,77,96,70,61,3,96,83,56,5,99,81,94,3,38,91,55,83,15,30,39,54,79,55,86,85,32,27,20,74,91,99,100,46,69,77,34,97,0,50,51,21,12,3,84,84,48,69,94,28,64,36,70,34,70,11,89,58,6,90,86,4,97,63,10,37,48,68,30,29,53,4,91,7,56,63,22,93,69,93,1,85,11,20,41,36,66,67,57,76,85,37,80,99,63,23,71,11,73,41,48,54,61,49,91,97,60,38,99,8,17,2,5,56,3,69,90,62,75,76,55,71,83,34,2,36,56,40,15,62,39,78,7,37,58,22,64,59,80,16,2,34,83,43,40,39,38,35,89,72,56,77,78,14,45,0,57,32,82,93,96,3,51,27,36,38,1,19,66,98,93,91,18,95,93,39,12,40,73,100,17,72,93,25,35,45,91,78,13,97,56,40,69,86,69,99,4,36,36,82,35,52,12,46,74,57,65,91,51,41,42,17,78,49,75,9,23,65,44,47,93,84,70,19,22,57,27,84,57,85,2,61,17,90,34,49,74,64,46,61,0,28,57,78,75,31,27,24,10,93,34,19,75,53,17,26,2,41,89,79,37,14,93,55,74,11,77,60,61,2,68,0,15,12,47,12,48,57,73,17,18,11,83,38,5,36,53,94,40,48,81,53,32,53,12,21,90,100,32,29,94,92,83,80,36,73,59,61,43,100,36,71,89,9,24,56,7,48,34,58,0,43,34,18,1,29,97,70,92,88,0,48,51,53,0,50,21,91,23,34,49,19,17,9,23,43,87,72,39,17,17,97,14,29,4,10,84,10,33,100,86,43,20,22,58,90,70,48,
23,75,4,66,97,95,1,80,24,43,97,15,38,53,55,86,63,40,7,26,60,95,12,98,15,95,71,86,46,33,68,32,86,89,18,88,97,32,42,5,57,13,1,23,34,37,13,65,13,47,55,85,37,57,14,89,94,57,13,6,98,47,52,51,19,99,42,1,19,74,60,8,48,28,65,6,12,57,49,27,95,1,2,10,25,49,68,57,32,99,24,19,25,32,89,88,73,96,57,14,65,34,8,82,9,94,91,19,53,61,70,54,4,66,26,8,63,62,9,20,42,17,52,97,51,53,19,48,76,40,80,6,1,89,52,70,38,95,62,24,88,64,42,61,6,50,91,87,69,13,58,43,98,19,94,65,56,72,20,72,92,85,58,46,67,2,23,88,58,25,88,18,92,46,15,18,37,9,90,2,38,0,16,86,44,69,71,70,30,38,17,69,69,80,73,79,56,17,95,12,37,43,5,5,6,42,16,44,22,62,37,86,8,51,73,46,44,15,98,54,22,47,28,11,75,52,49,38,84,55,3,69,100,54,66,6,23,98,22,99,21,74,75,33,67,8,80,90,23,46,93,69,85,46,87,76,93,38,77,37,72,35,3,82,11,67,46,53,29,60,33,12,62,23,27,72,35,63,68,14,35,27,98,94,65,3,13,48,83,27,84,86,49,31,63,40,12,34,79,61,47,29,33,52,100,85,38,24,1,16,62,89,36,74,9,49,62,89])
from functools import lru_cache
@lru_cache(maxsize=None)
def fibonacci(k):
if k < 2:
return k
else:
return fibonacci(k - 1) + fibonacci(k - 2)
from typing import List
class Solution:
def maxProfit(self, prices: List[int]) -> int:
if len(prices) <= 1:
return 0
        for i, p in enumerate(prices[:-1]):  # stop before the last day to avoid an index error
            if p < prices[i+1]:
max_profit = max(
self.stock_bought(p, prices[(i+1):]),
self.maxProfit(prices[(i+1):])
)
break
else:
max_profit = 0
return max_profit
def stock_bought(self, purchase_price: int, prices: List[int]) -> int:
lp = len(prices)
        if lp == 1:
            return prices[0] - purchase_price
        if lp == 2:
            # two days left: sell on whichever day is higher
            return max(prices[0], prices[1]) - purchase_price
if lp >= 3:
p0 = prices[0]
p1 = prices[1]
p2 = prices[2]
if p0 < p1 < p2:
max_profit = self.stock_bought(purchase_price, prices[2:])
return max_profit
            elif p0 > p1 > p2:
                # price is falling: sell now, then search the rest for the next trade
                return (p0 - purchase_price) + self.maxProfit(prices[2:])
max_profit = max(
(prices[0] - purchase_price) + self.maxProfit(prices[2:]),
self.stock_bought(purchase_price, prices[1:])
)
return max_profit
class Solution:
def maxProfit(self, prices: List[int]) -> int:
lp = len(prices)
p = prices[0]
if lp <= 1:
return 0
p1 = prices[1]
if p < p1:
if lp == 2:
return p1 - p
p2 = prices[2]
if lp == 3:
return max(p1, p2) - p
if p1 < p2:
return self.hold_stock(p, prices[1:])
# TODO: fix this part of the code. Either wrong or too slow :)
'''
return max(
self.hold_stock(p, prices[1:]),
self.maxProfit(prices[1:])
)
p3 = prices[3]
if (p1 - p) > (p3 - p2):
return (p1 - p) + self.maxProfit(prices[3:])
return self.maxProfit(prices[1:])
return self.maxProfit(prices[1:])
        '''
        # Fallback for the remaining cases (e.g. the price does not rise
        # tomorrow): skip today and recurse on the rest of the prices.
        return self.maxProfit(prices[1:])
def hold_stock(self, purchase_price: int, prices: List[int]) -> int:
lp = len(prices)
p = prices[0]
if lp == 1:
return p - purchase_price
if lp > 2:
p1 = prices[1]
p2 = prices[2]
if lp == 3:
return max(p, p1, p2) - purchase_price
else:
p3 = prices[3]
                if (p1 > p2) and ((p1 - p) < (p3 - p2)):
return (p - purchase_price) + self.maxProfit(prices[2:])
else:
return self.hold_stock(purchase_price, prices[1:])
return max(p, p1) - purchase_price
```
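For reference, the recursive attempts above can be side-stepped entirely: since any number of transactions is allowed, the standard greedy argument says the maximum profit is the sum of all positive day-to-day price moves. A minimal sketch (the class name `GreedySolution` is my own, not from the original notebook):

```python
# Classic greedy solution: capture every upward price move by buying the
# day before and selling the day after. O(n) time, O(1) extra space.
from typing import List

class GreedySolution:
    def maxProfit(self, prices: List[int]) -> int:
        # Sum only the positive day-to-day differences.
        return sum(max(prices[i + 1] - prices[i], 0)
                   for i in range(len(prices) - 1))
```

For example, `GreedySolution().maxProfit([7, 1, 5, 3, 6, 4])` returns `7` (buy at 1, sell at 5; buy at 3, sell at 6).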
**Retrieve the nth largest element of each row**
```
import numpy as np
np.random.seed(5)
test_mat = np.random.gamma(shape=1, scale=1, size=(100, 10))
test_mat[0,:]
n = 3
arg_sort_mat = np.argsort(test_mat, axis=1)   # ascending sort indices per row
nth_largest_idxs = arg_sort_mat[:, -n]        # column index of the nth largest per row
nth_largest_idxs
test_mat[np.arange(test_mat.shape[0]), nth_largest_idxs]  # the nth largest values themselves
np.argmax(test_mat, axis=1)                   # for comparison: index of the largest per row
```
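An alternative worth noting: a full `argsort` does more work than needed when only the nth largest value (rather than its index) is required. `np.partition` guarantees only that the element at the requested position lands in its sorted place, which is cheaper than a full per-row sort. A sketch, reusing the same random matrix:

```python
import numpy as np

# np.partition places the element at index -n in its sorted position
# along each row, without fully sorting the row.
np.random.seed(5)
test_mat = np.random.gamma(shape=1, scale=1, size=(100, 10))
n = 3
nth_largest = np.partition(test_mat, -n, axis=1)[:, -n]

# Cross-check against the argsort route used in the cell above
idxs = np.argsort(test_mat, axis=1)[:, -n]
assert np.allclose(nth_largest, test_mat[np.arange(test_mat.shape[0]), idxs])
```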
|
github_jupyter
|
```
!pip3 install torch
import torch
import numpy as np
import matplotlib.pyplot as plt
import torch.nn as nn
from sklearn import datasets
n_pts = 500
X, y = datasets.make_circles(n_samples=n_pts, random_state=123, noise=0.1, factor=0.2)
x_data = torch.Tensor(X)
y_data = torch.Tensor(y.reshape(500, 1))
print(y.shape)
```
```
def scatter_plot():
plt.scatter(X[y==0, 0], X[y==0, 1])
plt.scatter(X[y==1, 0], X[y==1, 1])
scatter_plot()
class Model(nn.Module):
def __init__(self, input_size, H1, output_size):
super().__init__()
self.linear = nn.Linear(input_size, H1)
self.linear2 = nn.Linear(H1, output_size)
def forward(self, x):
x = torch.sigmoid(self.linear(x))
x = torch.sigmoid(self.linear2(x))
return x
def predict(self, x):
pred = self.forward(x)
if pred >= 0.5:
return 1
else:
return 0
torch.manual_seed(2)
model = Model(2, 4, 1)
print(list(model.parameters()))
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
epochs = 1000
losses = []
for i in range(epochs):
y_pred = model.forward(x_data)
loss = criterion(y_pred, y_data)
print("epoch:", i, "loss:", loss.item())
losses.append(loss.item())
optimizer.zero_grad()
loss.backward()
optimizer.step()
plt.plot(range(epochs), losses)
plt.ylabel('Loss')
plt.xlabel('epoch')
plt.grid()
def plot_decision_boundary(X, y):
x_span = np.linspace(min(X[:, 0]) -0.25, max(X[:, 0])+0.25)
y_span = np.linspace(min(X[:, 1]) -0.25, max(X[:, 1])+0.25)
xx, yy = np.meshgrid(x_span, y_span)
grid = torch.Tensor(np.c_[xx.ravel(), yy.ravel()])
pred_func = model.forward(grid)
z = pred_func.view(xx.shape).detach().numpy()
plt.contourf(xx, yy, z)
plot_decision_boundary(X,y)
scatter_plot()
x = 0.25
y = 0.25
point = torch.Tensor ([x, y])
prediction = model.predict(point)
plt.plot([x], [y], marker='o', markersize=10, color="red")
print("Prediction is" , prediction)
plot_decision_boundary(X,y)
point1 = torch.Tensor([1.0, -1.0])
point2 = torch.Tensor([-1.0, 1.0])
plt.plot(point1.numpy()[0], point1.numpy()[1], 'ro')
plt.plot(point2.numpy()[0], point2.numpy()[1], 'ko')
plot_decision_boundary(X, y)
print("Red point positive probability = {}".format(model.forward(point1).item()))
print("Black point positive probability = {}".format(model.forward(point2).item()))
print("Red point belongs in class {}".format(model.predict(point1)))
print("Black point belongs in class = {}".format(model.predict(point2)))
```
|
github_jupyter
|
# DJL BERT Inference Demo
## Introduction
In this tutorial, you walk through running inference using DJL on a [BERT](https://towardsdatascience.com/bert-explained-state-of-the-art-language-model-for-nlp-f8b21a9b6270) QA model trained with MXNet and PyTorch.
You can provide a question and a paragraph containing the answer to the model. The model is then able to find the best answer from the answer paragraph.
Example:
```text
Q: When did BBC Japan start broadcasting?
```
Answer paragraph:
```text
BBC Japan was a general entertainment channel, which operated between December 2004 and April 2006.
It ceased operations after its Japanese distributor folded.
```
And it picked the right answer:
```text
A: December 2004
```
One of the most powerful features of DJL is that it's engine agnostic. Because of this, you can run different backend engines seamlessly. We showcase BERT QA first with an MXNet pre-trained model, then with a PyTorch model.
## Preparation
This tutorial requires the installation of Java Kernel. To install the Java Kernel, see the [README](https://github.com/awslabs/djl/blob/master/jupyter/README.md).
```
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.6.0
%maven ai.djl.mxnet:mxnet-engine:0.6.0
%maven ai.djl.mxnet:mxnet-model-zoo:0.6.0
%maven ai.djl.pytorch:pytorch-engine:0.6.0
%maven ai.djl.pytorch:pytorch-model-zoo:0.6.0
%maven org.slf4j:slf4j-api:1.7.26
%maven org.slf4j:slf4j-simple:1.7.26
%maven net.java.dev.jna:jna:5.3.0
// See https://github.com/awslabs/djl/blob/master/mxnet/mxnet-engine/README.md
// and See https://github.com/awslabs/djl/blob/master/pytorch/pytorch-engine/README.md
// for more engine library selection options
%maven ai.djl.mxnet:mxnet-native-auto:1.7.0-b
%maven ai.djl.pytorch:pytorch-native-auto:1.5.0
```
### Import Java packages by running the following:
```
import ai.djl.*;
import ai.djl.engine.*;
import ai.djl.modality.nlp.qa.*;
import ai.djl.repository.zoo.*;
import ai.djl.training.util.*;
import ai.djl.inference.*;
```
Now that all of the prerequisites are complete, start writing code to run inference with this example.
## Load the model and input
**First, load the input**
```
var question = "When did BBC Japan start broadcasting?";
var resourceDocument = "BBC Japan was a general entertainment Channel.\n" +
"Which operated between December 2004 and April 2006.\n" +
"It ceased operations after its Japanese distributor folded.";
QAInput input = new QAInput(question, resourceDocument);
```
Then load the model and vocabulary. Create a variable `model` by using the `ModelZoo` as shown in the following code.
```
Criteria<QAInput, String> criteria = Criteria.builder()
.optApplication(Application.NLP.QUESTION_ANSWER)
.setTypes(QAInput.class, String.class)
.optFilter("backbone", "bert")
.optEngine("MXNet") // For DJL to use MXNet engine
.optProgress(new ProgressBar()).build();
ZooModel<QAInput, String> model = ModelZoo.loadModel(criteria);
```
## Run inference
Once the model is loaded, you can call `Predictor` and run inference as follows
```
Predictor<QAInput, String> predictor = model.newPredictor();
String answer = predictor.predict(input);
answer
```
Running inference with DJL is that easy. Now, let's try the PyTorch engine by specifying `.optEngine("PyTorch")` in the `Criteria`, and rerun the inference code.
```
var question = "When did BBC Japan start broadcasting?";
var resourceDocument = "BBC Japan was a general entertainment Channel.\n" +
"Which operated between December 2004 and April 2006.\n" +
"It ceased operations after its Japanese distributor folded.";
QAInput input = new QAInput(question, resourceDocument);
Criteria<QAInput, String> criteria = Criteria.builder()
.optApplication(Application.NLP.QUESTION_ANSWER)
.setTypes(QAInput.class, String.class)
.optFilter("backbone", "bert")
.optEngine("PyTorch") // Use PyTorch engine
.optProgress(new ProgressBar()).build();
ZooModel<QAInput, String> model = ModelZoo.loadModel(criteria);
Predictor<QAInput, String> predictor = model.newPredictor();
String answer = predictor.predict(input);
answer
```
## Summary
Surprisingly, there are no differences between the PyTorch and MXNet code snippets.
This is the power of DJL: it defines a unified API that lets you switch backend engines on the fly.
Next chapter: Inference with your own BERT: [MXNet](mxnet/load_your_own_mxnet_bert.ipynb) [PyTorch](pytorch/load_your_own_pytorch_bert.ipynb).
|
github_jupyter
|
# Feature Engineering in Keras.
Let's start off with the Python imports that we need.
```
import os, json, math, shutil
import numpy as np
import tensorflow as tf
print(tf.__version__)
# Note that this cell is special. It's got a tag (you can view tags by clicking on the wrench icon on the left menu in Jupyter)
# These are parameters that we will configure so that we can schedule this notebook
DATADIR = '../data'
OUTDIR = './trained_model'
EXPORT_DIR = os.path.join(OUTDIR,'export/savedmodel')
NBUCKETS = 10 # for feature crossing
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # remember the training dataset repeats, so this will wrap around
NUM_EVALS = 5 # evaluate this many times
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down
```
## Locating the CSV files
We will start with the CSV files that we wrote out in the [first notebook](../01_explore/taxifare.ipynb) of this sequence. So that you don't have to run that notebook, we saved a copy in ../data.
```
if DATADIR[:5] == 'gs://':
!gsutil ls $DATADIR/*.csv
else:
!ls -l $DATADIR/*.csv
```
## Use tf.data to read the CSV files
We wrote these cells in the [third notebook](../03_tfdata/input_pipeline.ipynb) of this sequence.
```
CSV_COLUMNS = ['fare_amount', 'pickup_datetime',
'pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']]
def features_and_labels(row_data):
for unwanted_col in ['key']: # keep the pickup_datetime!
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
pattern = '{}/{}'.format(DATADIR, pattern)
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
.cache())
if mode == tf.estimator.ModeKeys.TRAIN:
print("Repeating training dataset indefinitely")
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
import datetime
# Python 3.5 doesn't handle timezones of the form 00:00, only 0000
s = '2012-07-05 14:18:00+00:00'
print(s)
ts = datetime.datetime.strptime(s.replace(':',''), "%Y-%m-%d %H%M%S%z")
print(ts.weekday())
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
print(DAYS[ts.weekday()])
s = tf.constant('2012-07-05 14:18:00+00:00').numpy().decode('utf-8')
print(s)
ts = datetime.datetime.strptime(s.replace(':',''), "%Y-%m-%d %H%M%S%z")
print(ts.weekday())
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
print(DAYS[ts.weekday()])
## Add transformations
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
def get_dayofweek(s):
# Python 3.5 doesn't handle timezones of the form 00:00, only 0000
s1 = s.numpy().decode('utf-8') # get Python string
ts = datetime.datetime.strptime(s1.replace(':',''), "%Y-%m-%d %H%M%S%z")
return DAYS[ts.weekday()]
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string),
ts_in
)
def transform(inputs, NUMERIC_COLS, STRING_COLS):
transformed = inputs.copy()
print("BEFORE TRANSFORMATION")
print("INPUTS:", inputs.keys())
print(inputs['pickup_longitude'].shape)
feature_columns = {
colname: tf.feature_column.numeric_column(colname)
for colname in NUMERIC_COLS
}
# scale the lat, lon values to be in 0, 1
    for lon_col in ['pickup_longitude', 'dropoff_longitude']: # in range -78 to -70
transformed[lon_col] = tf.keras.layers.Lambda(
lambda x: (x+78)/8.0,
name='scale_{}'.format(lon_col)
)(inputs[lon_col])
for lat_col in ['pickup_latitude', 'dropoff_latitude']: # in range 37 to 45
transformed[lat_col] = tf.keras.layers.Lambda(
lambda x: (x-37)/8.0,
name='scale_{}'.format(lat_col)
)(inputs[lat_col])
# add Euclidean distance. Doesn't have to be accurate calculation because NN will calibrate it
transformed['euclidean'] = tf.keras.layers.Lambda(euclidean, name='euclidean')([
inputs['pickup_longitude'],
inputs['pickup_latitude'],
inputs['dropoff_longitude'],
inputs['dropoff_latitude']
])
feature_columns['euclidean'] = tf.feature_column.numeric_column('euclidean')
# hour of day from timestamp of form '2010-02-08 09:17:00+00:00'
transformed['hourofday'] = tf.keras.layers.Lambda(
lambda x: tf.strings.to_number(tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32),
name='hourofday'
)(inputs['pickup_datetime'])
feature_columns['hourofday'] = tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_identity('hourofday', num_buckets=24))
# day of week is hard because there is no TensorFlow function for date handling
transformed['dayofweek'] = tf.keras.layers.Lambda(
lambda x: dayofweek(x),
name='dayofweek_pyfun'
)(inputs['pickup_datetime'])
transformed['dayofweek'] = tf.keras.layers.Reshape((), name='dayofweek')(transformed['dayofweek'])
feature_columns['dayofweek'] = tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_vocabulary_list(
'dayofweek', vocabulary_list = DAYS))
# featurecross lat, lon into nxn buckets, then embed
# b/135479527
#nbuckets = NBUCKETS
#latbuckets = np.linspace(0, 1, nbuckets).tolist()
#lonbuckets = np.linspace(0, 1, nbuckets).tolist()
#b_plat = tf.feature_column.bucketized_column(feature_columns['pickup_latitude'], latbuckets)
#b_dlat = tf.feature_column.bucketized_column(feature_columns['dropoff_latitude'], latbuckets)
#b_plon = tf.feature_column.bucketized_column(feature_columns['pickup_longitude'], lonbuckets)
#b_dlon = tf.feature_column.bucketized_column(feature_columns['dropoff_longitude'], lonbuckets)
#ploc = tf.feature_column.crossed_column([b_plat, b_plon], nbuckets * nbuckets)
#dloc = tf.feature_column.crossed_column([b_dlat, b_dlon], nbuckets * nbuckets)
#pd_pair = tf.feature_column.crossed_column([ploc, dloc], nbuckets ** 4 )
#feature_columns['pickup_and_dropoff'] = tf.feature_column.embedding_column(pd_pair, 100)
print("AFTER TRANSFORMATION")
print("TRANSFORMED:", transformed.keys())
print("FEATURES", feature_columns.keys())
return transformed, feature_columns
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
# input layer is all float except for pickup_datetime which is a string
STRING_COLS = ['pickup_datetime']
NUMERIC_COLS = set(CSV_COLUMNS) - set([LABEL_COLUMN, 'key']) - set(STRING_COLS)
print(STRING_COLS)
print(NUMERIC_COLS)
inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
inputs.update({
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='string')
for colname in STRING_COLS
})
# transforms
transformed, feature_columns = transform(inputs, NUMERIC_COLS, STRING_COLS)
dnn_inputs = tf.keras.layers.DenseFeatures(feature_columns.values())(transformed)
    # two hidden layers of [32, 8], just like the BQML DNN
h1 = tf.keras.layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = tf.keras.layers.Dense(8, activation='relu', name='h2')(h1)
# final output would normally have a linear activation because this is regression
# However, we know something about the taxi problem (fares are +ve and tend to be below $60).
# Use that here. (You can verify by running this query):
# SELECT APPROX_QUANTILES(fare_amount, 100) FROM serverlessml.cleaned_training_data
# b/136476088
#fare_thresh = lambda x: 60 * tf.keras.activations.relu(x)
#output = tf.keras.layers.Dense(1, activation=fare_thresh, name='fare')(h2)
output = tf.keras.layers.Dense(1, name='fare')(h2)
model = tf.keras.models.Model(inputs, output)
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
model = build_dnn_model()
print(model.summary())
tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')
```
## Train model
To train the model, call model.fit()
```
trainds = load_dataset('taxi-train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('taxi-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//10000) # evaluate on 1/10 final evaluation set
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
shutil.rmtree('{}/checkpoints/'.format(OUTDIR), ignore_errors=True)
checkpoint_path = '{}/checkpoints/taxi'.format(OUTDIR)
cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
save_weights_only=True,
verbose=1)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch,
callbacks=[cp_callback])
# plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
```
## Evaluate over full validation dataset
Let's evaluate over the full validation dataset (provided the validation dataset is large enough).
```
evalds = load_dataset('taxi-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
model.evaluate(evalds)
```
Yippee! We are now at under 4 dollars RMSE!
## Predict with model
This is how to predict with this model:
```
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00+00:00'], dtype=tf.string),
})
```
However, this is not realistic, because we can't expect client code to have a model object in memory. We'll have to export our model to a file, and expect client code to instantiate the model from that exported file.
## Export model
Let's export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc.
```
export_dir = os.path.join(EXPORT_DIR, datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
tf.keras.experimental.export_saved_model(model, export_dir)
print(export_dir)
# Recreate the exact same model
new_model = tf.keras.experimental.load_from_saved_model(export_dir)
# try predicting with this model
new_model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00+00:00'], dtype=tf.string),
})
```
In this notebook, we have looked at how to implement a custom Keras model using feature columns.
Copyright 2019 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
|
github_jupyter
|
# Performing Basic Sequence Analysis
Continuing my bioinformatics cookbook tutorial series, today's topic is basic sequence analysis, the foundation of Next-Generation Sequencing (NGS).
We will do some basic analysis on DNA sequences. FASTA files are our main target here, with Biopython as our main Python library.
Let's first download a FASTA sequence:
```
from Bio import Entrez, SeqIO
# Using my email
Entrez.email = "eneskemalergin@gmail.com"
# Get the FASTA file
hdl = Entrez.efetch(db='nucleotide', id=['NM_002299'],rettype='fasta') # Lactase gene
# Read it and store it in seq
seq = SeqIO.read(hdl, 'fasta')
print("First 10 and last 10: %s...%s" % (seq.seq[:10], seq.seq[-10:]))
```
- Let's save the Biopython object in FASTA file;
```
from Bio import SeqIO
# Open a new fasta file and make it ready to write on
w_hdl = open('example.fasta', 'w')
# specify the part to write
w_seq = seq[11:5795]
# Write it
SeqIO.write([w_seq], w_hdl, 'fasta')
# And of course close it
w_hdl.close()
```
> If you want to write many sequences (easily millions with NGS), do not use a list as shown in the preceding code, because this will allocate massive amounts of memory. Either use an iterator or call the ```SeqIO.write``` function several times with a subset of the sequences on each write.
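To make the memory point concrete, here is a stdlib-only sketch of the streaming idea (the `fasta_records` generator and its contents are made up for illustration; with Biopython you would pass such a generator straight to `SeqIO.write`):

```python
# Stream records from a generator one at a time instead of materializing
# a list: only the current record lives in memory.
def fasta_records(n):
    """Yield (header, sequence) pairs; contents are made up."""
    for i in range(n):
        yield ('seq_%d' % i, 'ACGT' * 5)

def write_fasta(records, handle, width=60):
    """Stream records to an open handle in FASTA format."""
    for header, seq in records:
        handle.write('>%s\n' % header)
        for start in range(0, len(seq), width):
            handle.write(seq[start:start + width] + '\n')

with open('streamed.fasta', 'w') as w_hdl:
    write_fasta(fasta_records(3), w_hdl)
```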
- We need to read the sequence of course to be able to use it
```
# Parse the fasta file and store it in recs
recs = SeqIO.parse('example.fasta', 'fasta')
# Iterate over each records
for rec in recs:
# Get the sequences of each rec
seq = rec.seq
# Show the desription
print(rec.description)
# Show the first 10 letter in sequence
print(seq[:10])
    # Show the alphabet used to represent the sequence
    print(seq.alphabet)
```
In our example code we have only one sequence in the FASTA file, so we did not need to iterate through the records. However, since we usually won't know in advance how many records a FASTA file contains, the loop above is suitable for most cases.
> The first line of a FASTA file is the description of the gene, in this case: ```gi|32481205|ref|NM_002299.2| Homo sapiens lactase (LCT), mRNA```
> The second output line is the first 10 letters of the sequence
> The last line shows how the sequence is represented
- Now let's change the alphabet of the sequence we got:
> We create a new sequence with a more informative alphabet.
```
from Bio import Seq
from Bio.Alphabet import IUPAC
seq = Seq.Seq(str(seq), IUPAC.unambiguous_dna)
```
- Now have an unambiguous DNA, we can transcribe it as follows:
```
rna = seq.transcribe() # Changing DNA into RNA
print("some of the rna variable: %s...%s" % (rna[:10], rna[-10:]))
```
> Note that the ```Seq``` constructor takes a string, not a sequence. You will see that the alphabet of the ```rna``` variable is now ```IUPACUnambigousRNA```.
- Finally let's translate it into Protein:
```
prot = seq.translate() # Translating into the corresponding protein
print("some of the resulting protein sequence: %s...%s" % (prot[:10], prot[-10:]))
```
Now, we have a protein alphabet with the annotation that there is a stop codon (so, our protein is complete).
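For intuition about what `translate()` is doing, here is a stdlib-only sketch that walks a toy coding sequence codon by codon (the mini codon table covers only the codons in this example, not the full genetic code):

```python
# Minimal sketch of translation: read the DNA three bases at a time and
# map each codon to an amino acid, stopping at a stop codon ('*').
MINI_CODON_TABLE = {
    'ATG': 'M', 'GCT': 'A', 'TGG': 'W', 'TAA': '*',
}

def translate_dna(dna):
    """Translate a DNA string codon by codon, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = MINI_CODON_TABLE[dna[i:i + 3]]
        if aa == '*':
            break
        protein.append(aa)
    return ''.join(protein)

print(translate_dna('ATGGCTTGGTAA'))  # -> MAW
```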
---
There are other file formats for storing and representing sequences, and we covered some of them in the [first blog post of the series](http://eneskemalergin.github.io/2015/10/11/Getting_Started_NGS/). Now I will show you how to work with a modern format: FASTQ.
FASTQ files are the standard format output by modern sequencers. The purpose of the following content is to make you comfortable with quality scores and how to work with them. To explain the concept, we will use real data from the 1000 Genomes Project.
> Next-generation datasets are generally very large, like those of the 1000 Genomes Project. You will need to download some data, so get ready to wait :)
Let's Start by downloading the dataset: (BTW the following snippet is for IPython NB so if you are following this from my blog go ahead and [click here](ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA18489/sequence_read/SRR003265.filt.fastq.gz))
```
!wget ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA18489/sequence_read/SRR003265.filt.fastq.gz
```
Now we have the file "SRR003265.filt.fastq.gz". The ```fastq``` extension tells us the format, and the trailing ```gz``` extension means the file is compressed, which we will handle with a Python library while opening it.
- First we need to open the file:
```
import gzip # This is the library we need to unzip .gz
from Bio import SeqIO # The usual SeqIO
# Unzip and read the fastq file at the end store it in recs
recs = SeqIO.parse(gzip.open('SRR003265.filt.fastq.gz', 'rt'), 'fastq')
rec = next(recs)
# Print the id, description and sequence of the record
print(rec.id, rec.description, rec.seq)
# Print the letter_annotations
# Biopython will convert all the Phred encoding letters to logarithmic scores
print(rec.letter_annotations)
```
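To make the quality scores less opaque, here is a stdlib-only sketch of the Phred+33 (Sanger) encoding that Biopython decodes for us: each ASCII character stores a quality score, and a score Q corresponds to an error probability of 10^(-Q/10). The quality string below is made up for illustration:

```python
# Decode an ASCII-encoded FASTQ quality string into Phred scores and
# turn a score into the probability that the base call is wrong.
def phred_scores(quality_string, offset=33):
    """Each character encodes ord(char) - 33 in the Sanger convention."""
    return [ord(ch) - offset for ch in quality_string]

def error_probability(q):
    """Probability that a base call with Phred score q is wrong."""
    return 10 ** (-q / 10)

print(phred_scores('II?+'))    # made-up quality string -> [40, 40, 30, 10]
print(error_probability(40))   # a Q40 call: a 1-in-10,000 error chance
```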
> You should usually store your FASTQ files in a compressed format, for space saving and processing time saving's sake.
> Don't use ```list(recs)``` unless you want to sacrifice a lot of memory, since FASTQ files are usually big.
- Then, let's take a look at the distribution of nucleotide reads:
```
from collections import defaultdict
# Unzip and read the fastq file
recs = SeqIO.parse(gzip.open('SRR003265.filt.fastq.gz', 'rt'), 'fastq')
# Make integer dictionary
cnt = defaultdict(int)
# Iterate over records
for rec in recs:
# In each letter of the sequence
for letter in rec.seq:
# Count the letters and store the number of count in dictionary cnt
cnt[letter] += 1
# Find the total of cnt counts
tot = sum(cnt.values())
# Iterate over the dictionary cnt
for letter, cnt_value in cnt.items():
print('%s: %.2f %d' % (letter, 100. * cnt_value / tot, cnt_value))
# Prints the following
# For each Letter inside
# Print the percentage of apperance in sequences
# and the total number of letter
# Do this for each letter (even for NONE(N))
```
> Note that there is a residual number for N calls. These are calls in which a sequencer reports an unknown base.
- Now, let's plot the distribution of Ns according to its read position:
```
%matplotlib inline
# Plot it in IPython Directly
# Calling libraries
import seaborn as sns
import matplotlib.pyplot as plt
# Again unzip, read the fastq file
recs = SeqIO.parse(gzip.open('SRR003265.filt.fastq.gz', 'rt'), 'fastq')
# Make a dictionary
n_cnt = defaultdict(int)
# The same code as before until here
# iterate through the file and get the position of any references to N.
for rec in recs:
for i, letter in enumerate(rec.seq):
pos = i + 1
if letter == 'N':
n_cnt[pos] += 1
seq_len = max(n_cnt.keys())
positions = range(1, seq_len + 1)
fig, ax = plt.subplots()
ax.plot(positions, [n_cnt[x] for x in positions])
ax.set_xlim(1, seq_len)
```
> Until position 25, there are no errors. This is not what you would get from typical sequencer output; our example file is already filtered, and the 1000 Genomes filtering rules enforce that no N calls occur before position 25.
> Beyond position 25, the quantity of uncalled bases is position-dependent.
- So, what about the quality of reads?
- Let's study the distribution of Phred scores and plot the distribution of qualities according to their read position:
```
# Reopen and read
recs = SeqIO.parse(gzip.open('SRR003265.filt.fastq.gz', 'rt'), 'fastq')
# default dictionary
qual_pos = defaultdict(list)
for rec in recs:
for i, qual in enumerate(rec.letter_annotations['phred_quality']):
if i < 25 or qual == 40:
continue
pos = i + 1
qual_pos[pos].append(qual)
vps = []
poses = sorted(qual_pos.keys())
for pos in poses:
vps.append(qual_pos[pos])
fig, ax = plt.subplots()
ax.boxplot(vps)
ax.set_xticklabels([str(x) for x in range(26, max(qual_pos.keys()) + 1)])
```
> We ignore positions within the first 25 base pairs (again, remove this rule if you have unfiltered sequencer data) and the maximum quality score for this file (40). However, in your case, you can consider starting your plotting analysis with the maximum included; you may want to check the maximum possible value for your sequencer hardware. Generally, as most calls are performed with maximum quality, you may want to remove them if you are trying to understand where quality problems lie.
---
|
github_jupyter
|
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
pi = np.pi
x = np.linspace(-4*pi, 4*pi, 1000)
plt.plot(x, np.sin(x)/x)
plt.show()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
pi = np.pi
x = np.linspace(-4*pi, 4*pi, 1000)
plt.plot(x, np.cos(x)/x)
plt.show()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
pi = np.pi
x = np.linspace(-4*pi, 4*pi, 1000)
plt.plot(x, np.tan(x)/x)
plt.show()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
pi = np.pi
x = np.linspace(-4*pi, 4*pi, 1000)
plt.plot(x, np.sin(x))
plt.show()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
pi = np.pi
x = np.linspace(-4*pi, 4*pi, 1000)
plt.plot(x, np.sinh(x)/x)
plt.show()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
pi = np.pi
x = np.linspace(-100*pi, 100*pi, 1000)
plt.plot(x, np.tanh(x))
plt.show()
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0,2,100)
# Note that in the OO-style we use plt.subplots() to create the figure and axes together.
fig, ax = plt.subplots() # Create a figure and an axes.
ax.plot(x, x, label='linear') # Plot some data on the axes.
ax.plot(x, x**2, label='quadratic') # Plot more data on the axes...
ax.plot(x, x**3, label='cubic') # ... and some more.
ax.set_xlabel('x label') # Add an x-label to the axes.
ax.set_ylabel('y label') # Add a y-label to the axes.
ax.set_title("Simple Plot") # Add a title to the axes.
ax.legend() # Add a legend.
x = np.linspace(0, 2, 100)
plt.plot(x, x, label='linear') # Plot some data on the (implicit) axes.
plt.plot(x, x**2, label='quadratic') # etc.
plt.plot(x, x**3, label='cubic')
plt.xlabel('x label')
plt.ylabel('y label')
plt.title("Simple Plot")
plt.legend()
def my_plotter(ax, data1, data2, param_dict):
"""
A helper function to make a graph
Parameters
----------
ax : Axes
The axes to draw to
data1 : array
The x data
data2 : array
The y data
param_dict : dict
Dictionary of kwargs to pass to ax.plot
Returns
-------
out : list
list of artists added
"""
out = ax.plot(data1, data2, **param_dict)
return out
data1, data2, data3, data4 = np.random.randn(4, 100)
fig, ax = plt.subplots(1, 1)
my_plotter(ax, data1, data2, {'marker': 'x'})
```
If you wanted to have 2 sub-plots:
```
fig, (ax1, ax2) = plt.subplots(1, 2)
my_plotter(ax1, data1, data2, {'marker': 'x'})
my_plotter(ax2, data3, data4, {'marker': 'o'})
plt.ioff()
for i in range(3):
plt.plot(np.random.rand(10))
plt.show()
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
# Setup, and create the data to plot
y = np.random.rand(100000)
y[50000:] *= 2
y[np.logspace(1, np.log10(50000), 400).astype(int)] = -1
mpl.rcParams['path.simplify'] = True
mpl.rcParams['path.simplify_threshold'] = 0.0
plt.plot(y)
plt.show()
mpl.rcParams['path.simplify_threshold'] = 1.0
plt.plot(y)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
labels = ['G1', 'G2', 'G3', 'G4', 'G5']
men_means = [20, 35, 30, 35, 27]
women_means = [25, 32, 34, 20, 25]
men_std = [2, 3, 4, 1, 2]
women_std = [3, 5, 2, 3, 3]
width = 0.35 # the width of the bars: can also be len(x) sequence
fig, ax = plt.subplots()
ax.bar(labels, men_means, width, yerr=men_std, label='Men')
ax.bar(labels, women_means, width, yerr=women_std, bottom=men_means,
label='Women')
ax.set_ylabel('Scores')
ax.set_title('Scores by group and gender')
ax.legend()
plt.show()
import numpy as np
import matplotlib.pyplot as plt
# Fixing random state for reproducibility
np.random.seed(19680801)
dt = 0.01
t = np.arange(0, 30, dt)
nse1 = np.random.randn(len(t)) # white noise 1
nse2 = np.random.randn(len(t)) # white noise 2
# Two signals with a coherent part at 10Hz and a random part
s1 = np.sin(2 * np.pi * 10 * t) + nse1
s2 = np.sin(2 * np.pi * 10 * t) + nse2
fig, axs = plt.subplots(2, 1)
axs[0].plot(t, s1, t, s2)
axs[0].set_xlim(0, 2)
axs[0].set_xlabel('time')
axs[0].set_ylabel('s1 and s2')
axs[0].grid(True)
cxy, f = axs[1].cohere(s1, s2, 256, 1. / dt)
axs[1].set_ylabel('coherence')
fig.tight_layout()
plt.show()
import numpy as np
import matplotlib.pyplot as plt
# example data
x = np.arange(0.1, 4, 0.1)
y1 = np.exp(-1.0 * x)
y2 = np.exp(-0.5 * x)
# example variable error bar values
y1err = 0.1 + 0.1 * np.sqrt(x)
y2err = 0.1 + 0.1 * np.sqrt(x/2)
# Now switch to a more OO interface to exercise more features.
fig, (ax_l, ax_c, ax_r) = plt.subplots(nrows=1, ncols=3,
sharex=True, figsize=(12, 6))
ax_l.set_title('all errorbars')
ax_l.errorbar(x, y1, yerr=y1err)
ax_l.errorbar(x, y2, yerr=y2err)
ax_c.set_title('only every 6th errorbar')
ax_c.errorbar(x, y1, yerr=y1err, errorevery=6)
ax_c.errorbar(x, y2, yerr=y2err, errorevery=6)
ax_r.set_title('second series shifted by 3')
ax_r.errorbar(x, y1, yerr=y1err, errorevery=(0, 6))
ax_r.errorbar(x, y2, yerr=y2err, errorevery=(3, 6))
fig.suptitle('Errorbar subsampling for better appearance')
plt.show()
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
x = [31.13745, 513.76786, 98.08295, 171.25595, 26.46683, 593.16834, 4.67062,
     298.91948, 146.346, 330.05693, 130.77727, 345.62565, 322.27257, 174.3697,
     244.42895, 286.4645, 1080.46937, 2947.15926, 213.2915, 1184.77982, 418.79865,
     702.14941, 110.53793, 309.81759, 1345.13766, 1068.01439, 501.31288, 1155.19924,
     2028.6046, 976.15893, 2568.83929, 1032.20633, 2903.56683, 734.84372, 2372.67338,
     2048.84394, 6443.89443, 854.72289, 1049.33193, 846.93853, 2671.59286, 902.98593,
     4899.47712, 753.52619, 1113.16369, 4377.9249, 4164.63339, 5126.78047, 6081.14319,
     3149.55265, 3018.77538, 1890.04297, 6053.11949, 8534.77393, 6325.57214, 1684.53582,
     9439.31673, 4446.42728, 7004.36846, 6963.88978, 10960.38096, 6457.90628, 3879.72576,
     5199.95347, 7164.72631, 74.72987, 8017.89232, 10580.50412, 3529.42949, 6383.17641,
     7806.15769, 7029.27842, 6896.94427, 6014.19768, 5998.62896, 10471.52306, 9629.25515,
     8413.33789, 10563.37853, 6548.20488, 10673.91646, 7619.33302, 11592.47112, 8786.98724,
     17770.14039, 9282.07263, 7021.49405, 5676.35639, 6499.94184, 4461.996, 4032.29925,
     19795.63124, 11769.95456, 12068.87404, 12813.05899, 9907.93529, 7989.86862, 7343.76662,
     11919.4143, 12336.65607, 7142.93009, 12241.68686, 6426.76884, 8555.01327, 10272.24341,
     5517.55542, 3362.84416, 9700.87128, 10733.07761, 11061.57766, 7985.19801, 10199.07041,
     17913.37264, 10711.2814, 5206.18096, 11785.52328, 9778.71489, 5884.97728, 8010.10796,
     5729.29005, 5793.12181, 3571.46505, 10211.52539, 4497.80406, 4787.38231, 6062.46072,
     6383.17641, 5064.50558, 4584.98891, 12881.56138, 4499.36094, 3420.44843, 2287.0454,
     2051.95769, 2534.5881, 1500.82489, 2861.53128, 147.90287, 3205.60006, 3649.30866,
     3130.87019, 411.01429, 3017.21851, 794.00487, 2184.29183, 3825.23523, 2799.25639,
     71.61613, 1499.26802, 256.88393, 586.94086, 267.78203, 130.77727, 303.5901,
     236.64459, 261.55455, 121.43604, 523.10909, 278.68014, 249.09957, 194.60904,
     29.58057, 0, 1.55687, 18.68247, 1.55687, 17.1256, 294.24886,
     359.6375, 0, 194.60904, 26.46683, 48.26304, 12.45498, 66.94551,
     28.0237, 38.92181]
y = [20.02045, 22.53706, 19.76639, 29.74577, 30.54583, 18.08904, 17.38001,
     18.92677, 28.29161, 19.95979, 17.31224, 21.83106, 34.43798, 38.77548,
     19.32044, 18.15695, 20.99185, 35.56716, 23.35302, 36.80261, 29.67645,
     43.26466, 17.65234, 25.95369, 26.90249, 33.25161, 19.6972, 47.30997,
     36.17422, 15.03087, 22.46046, 28.87446, 38.74643, 24.32572, 55.13447,
     39.74956, 48.0014, 28.78154, 34.04608, 35.0192, 30.50575, 21.83062,
     54.2865, 30.26874, 27.08733, 42.23335, 33.17988, 45.17475, 37.19604,
     41.56082, 30.83675, 47.73186, 54.5239, 45.49809, 41.36297, 43.31118,
     54.99624, 49.43302, 58.01368, 66.8944, 77.29361, 49.75644, 45.22979,
     55.39367, 56.6017, 67.05372, 66.02036, 66.14054, 52.69512, 57.86693,
     64.66502, 46.73512, 76.61953, 92.96712, 94.47939, 131.95119, 95.81957,
     92.1434, 95.52797, 98.40329, 94.16242, 124.21348, 149.43851, 79.63067,
     95.33529, 104.60758, 78.20073, 159.51689, 133.12523, 146.49849, 184.95519,
     107.22993, 164.68631, 120.5651, 155.16447, 157.83213, 156.45957, 148.63799,
     144.4465, 148.73119, 148.17544, 144.70658, 154.65209, 128.86943, 139.48889,
     182.32727, 142.93312, 176.04107, 139.0309, 128.9639, 185.60657, 170.79296,
     153.18813, 165.2832, 179.67027, 184.32396, 165.56155, 121.94358, 92.14223,
     81.13095, 104.22317, 97.15773, 139.5961, 82.14449, 115.60005, 147.97317,
     109.50184, 89.34629, 105.83884, 130.6108, 163.60871, 132.88827, 63.95976,
     30.62878, 23.92871, 37.9417, 41.07415, 48.80755, 48.29521, 39.51854,
     78.94565, 25.56546, 46.15475, 54.18953, 74.32862, 89.13755, 51.08192,
     26.00067, 22.73247, 19.78375, 24.11613, 27.02586, 23.82791, 23.48308,
     23.45353, 42.51956, 19.57252, 22.485, 26.35474, 25.00031, 47.96181,
     23.87384, 22.2708, 16.50245, 19.27467, 19.63548, 21.59698, 26.41076,
     21.64263, 11.94924, 28.11239, 17.10387, 26.49114, 18.2528, 16.00866,
     17.5329, 21.08909]
plt.scatter(x,y)
plt.show()
```
---
# DSM - Modelling
- plotting.py is imported to facilitate visualization
$ (0) \quad \dot{E}_{t} \quad = \quad demand_{t} \quad + \quad DSM_{t}^{up} \quad - \quad
\sum_{tt=t-L}^{t+L} DSM_{t,tt}^{do} \qquad \forall t $
### Formulation after Zerrahn & Schill
$ (1) \quad DSM_{t}^{up} \quad = \quad \sum_{tt=t-L}^{t+L} DSM_{t,tt}^{do}
\qquad \forall t $
$ (2) \quad DSM_{t}^{up} \quad \leq \quad E_{t}^{up} \qquad \forall t $
$ (3) \quad \sum_{t=tt-L}^{tt+L} DSM_{t,tt}^{do} \quad \leq \quad E_{t}^{do}
\qquad \forall tt $
$ (4) \quad DSM_{tt}^{up} \quad + \quad \sum_{t=tt-L}^{tt+L} DSM_{t,tt}^{do} \quad
\leq \quad max \{ E_{t}^{up}, E_{t}^{do} \} \qquad \forall tt $
**Table: Symbols and attribute names of variables V and parameters P**
|symbol | attribute | type|explanation|
|-------------------|-------------------|----|--------------------------------------|
|$DSM_{t}^{up} $ | `dsm_up[g,t]` | $V$| DSM up shift (additional load) |
|$DSM_{t,tt}^{do}$ | `dsm_do[g,t,tt]` | $V$| DSM down shift (less load) |
|$\dot{E}_{t} $ |`flow[g,t]` | $V$| Energy flowing in from electrical bus|
|$L$ |`delay_time` | $P$| Delay time for load shift |
|$demand_{t} $ | `demand[t]` | $P$| Electrical demand series |
|$E_{t}^{do}$ |`capacity_down[tt]`| $P$| Capacity DSM down shift |
|$E_{t}^{up} $ |`capacity_up[tt]` | $P$| Capacity DSM up shift |
## Imports
```
from oemof import solph, outputlib
from oemof.network import Node
import pandas as pd
import os
# plotting.py: local helper module for visualization
import plotting as plt_dsm
import matplotlib.pyplot as plt
# create output directory for graphics
plt_dsm.make_directory('graphics')
```
## Energy Model
For the testing, a basic energy system was set up including:
- Coal PP
- Wind PP
- PV PP
- DSM Sink
- shortage
- excess
```
def create_model(data, datetimeindex, directory, project, method, delay_time, shift_interval):
# ----------------- Energy System ----------------------------
# Create Energy System
es = solph.EnergySystem(timeindex=datetimeindex)
Node.registry = es
# Create Busses
b_coal_1 = solph.Bus(label='bus_coal_1')
b_elec = solph.Bus(label='bus_elec')
# Create Sources
s_coal_p1 = solph.Source(label='source_coal_p1',
outputs={
b_coal_1: solph.Flow(
nominal_value=10000,
variable_costs=10)}
)
s_wind = solph.Source(label='wind',
outputs={
b_elec: solph.Flow(
actual_value=data['wind'][datetimeindex],
fixed=True,
nominal_value=1)}
)
s_pv = solph.Source(label='pv',
outputs={
b_elec: solph.Flow(
actual_value=data['pv'][datetimeindex],
fixed=True,
nominal_value=1)}
)
# Create Transformer
cfp_1 = solph.Transformer(label='pp_coal_1',
inputs={b_coal_1: solph.Flow()},
outputs={
b_elec: solph.Flow(
variable_costs=0)},
conversion_factors={b_elec: 1}
)
# Create DSM
demand_dsm = solph.custom.SinkDSM(label='demand_dsm',
inputs={b_elec: solph.Flow(variable_costs=2)},
demand=data['demand_el'][datetimeindex],
capacity_up=data['Cap_up'][datetimeindex],
capacity_down=data['Cap_do'][datetimeindex],
method=method,
delay_time=delay_time,
shift_interval=shift_interval,
#recovery_time=1
)
# Backup excess / shortage
excess = solph.Sink(label='excess_el',
inputs={b_elec: solph.Flow(variable_costs=1)}
)
s_shortage_el = solph.Source(label='shortage_el',
outputs={
b_elec: solph.Flow(
variable_costs=200)}
)
# -------------------------- Create Model ----------------------
# Create Model
model = solph.Model(es)
# Solve Model
model.solve(solver='cbc', solve_kwargs={'tee': False})
# Write LP File
filename = os.path.join(os.path.dirname('__file__'), directory, project +'.lp')
model.write(filename, io_options={'symbolic_solver_labels': True})
# Save Results
es.results['main'] = outputlib.processing.results(model)
es.dump(dpath=None, filename=None)
return model
```
## Presets
```
def start_model(df_data, timesteps, **kwargs):
method = kwargs.get('method', None)
delay_time = kwargs.get('delay_time', None)
shift_interval = kwargs.get('shift_interval', None)
show = kwargs.get('show', False)
plot = kwargs.get('plot', False)
figure_size = kwargs.get('figsize', (10,10))
# ----------------- Input Data & Timesteps ----------------------------
# Provide directory
project = 'demand_shift_test'
directory = './'
# Data manipulation
data = df_data
# Timestamp
datetimeindex = pd.date_range(start='1/1/2013',
periods=timesteps,
freq='H')
# ----------------- Create & Solve Model ----------------------------
# Create model
model = create_model(data,
datetimeindex,
directory,
project,
method,
delay_time,
shift_interval)
# Get Results
es = solph.EnergySystem()
es.restore(dpath=None, filename=None)
# Export data
df_gesamt = plt_dsm.extract_results(model)
# write data in csv
#df_gesamt.to_csv(directory + project + '_data_dump.csv')
# ----------------- Plot Results ----------------------------
# Plot
plt_dsm.plot_dsm(df_gesamt,
datetimeindex,
directory,
timesteps,
project,
days=2,
show=show,
figsize=figure_size)
return df_gesamt
```
## base dataset
To demonstrate the limitations of the formulation, this test dataset is modified below.
```
timesteps = 48
# test data base
demand = [100] * timesteps
pv = [0] * timesteps
capup = [100] * timesteps
capdo = [100] * timesteps
wind = [100] * timesteps
#
base = [demand, wind, capup, capdo, pv]
df_base = pd.DataFrame(list(zip(*base)))
df_base.rename(columns={0:'demand_el',1:'wind', 2:'Cap_up', 3:'Cap_do', 4:'pv'}, inplace=True)
df_base['timestamp'] = pd.date_range(start='1/1/2013', periods=timesteps, freq='H')
df_base.set_index('timestamp', drop=True, inplace=True)
```
# How it should work:
```
# data preparation
wind = [100] * timesteps
###### edit specifics
# triple extended
wind[3:4] = [0]
wind[38:41] = [200] * 3
# interrupting event
wind[6:7] = [200]
df_data = df_base.copy()
df_data['wind'] = wind
#plot
fig, ax = plt.subplots(figsize=(10,4))
ax = df_data[['demand_el', 'wind']].plot(ax=ax, drawstyle="steps-post")
ax = df_data.Cap_up.plot(ax=ax, drawstyle="steps-post", secondary_y=True)
ax = (df_data.Cap_do*-1).plot(ax=ax, drawstyle="steps-post", secondary_y=True)
ax.set_yticks(range(-100,150,50))
ax.legend(loc=9, ncol=3)
ax.set_ylabel("MW or % ")
plt.show()
# start model
_ = start_model(df_data, timesteps, plot=True, method='delay', delay_time=3)
```
# limitations of the formulation
To preserve the formulation as a linear problem, the simultaneous activation of DSM up and down shifts cannot be completely prevented. A possible solution with SOS variables would turn this into a non-convex mixed-integer programming problem, leading to an increase in computing time.
## extended delay
Equation $(4)$ limits the sum of $DSM_{t}^{up}$ & $DSM_{t}^{down}$ to the value of the maximum capacity.
$ (4) \quad DSM_{tt}^{up} \quad + \quad \sum_{t=tt-L}^{tt+L} DSM_{t,tt}^{do} \quad
\leq \quad max \{ E_{t}^{up}, E_{t}^{do} \} \qquad \forall tt $
If this capacity is not fully used, the remaining potential $ E_{x}-DSM^{x} = \Delta $ might be used to artificially extend the delay time, as long as Equation $(0)$ is not violated.
$ (0) \quad demand_{t} \quad = \quad \dot{E}_{t} \quad - \quad DSM_{t}^{up} \quad + \quad
\sum_{tt=t-L}^{t+L} DSM_{t,tt}^{do} \qquad \forall t $
This is the case if the remaining potential is split in half and added to both variables.
$ (0) \quad demand_{t} \quad = \quad \dot{E}_{t}\quad - \quad (DSM_{t}^{up} + \frac{1}{2} \cdot \Delta) \quad + \quad
(\sum_{tt=t-L}^{t+L} DSM_{t,tt}^{do} +\frac{1}{2} \cdot \Delta) \qquad \forall t $
In the following, there will be some showcases presenting the problem and its influence.
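Before the showcases, the Δ-split argument can be verified with a line of arithmetic. The numbers below are purely illustrative (100 MW demand, no genuine shift in this hour, Δ = 100 MW of unused capacity); the point is that inflating both variables by Δ/2 leaves the balance of Eq. (0) untouched, so the solver is free to do it.

```python
# Illustrative numbers only: 100 MW demand, no genuine shift this hour,
# and delta = 100 MW of unused capacity from Eq. (4).
demand, flow = 100.0, 100.0
dsm_up, dsm_down_sum = 0.0, 0.0
delta = 100.0

# Eq. (0) without and with the delta/2 split on both variables:
lhs_plain = flow - dsm_up + dsm_down_sum
lhs_split = flow - (dsm_up + delta / 2) + (dsm_down_sum + delta / 2)

print(lhs_plain == demand, lhs_split == demand)   # True True
```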
```
# data preparation
wind = [100] * timesteps
###### edit specifics
# triple extended
wind[3:6] = [0] * 3
wind[38:41] = [200] * 3
# no interrupting event
df_data = df_base.copy()
df_data['wind'] = wind
#plot
fig, ax = plt.subplots(figsize=(10,4))
ax = df_data[['demand_el', 'wind']].plot(ax=ax, drawstyle="steps-post")
ax = df_data.Cap_up.plot(ax=ax, drawstyle="steps-post", secondary_y=True)
ax = (df_data.Cap_do*-1).plot(ax=ax, drawstyle="steps-post", secondary_y=True)
ax.set_yticks(range(-100,150,50))
ax.legend(loc=9, ncol=3)
ax.set_ylabel("MW or % ")
plt.show()
```
- 100 MW constant demand
- 100 MW missing supply from 3 h to 6 h
- 100 MW surplus from 14 h to 17 h the next day
- The delay time is set to 1 h.
```
# start model
_ = start_model(df_data, timesteps, plot=True, method='delay', delay_time=1)
```
### what should happen:
- no demand shift, as the shift could only be realised over a delay of 32 h
- missing demand should be fully compensated by coal power plant
- surplus should go to excess
### what happens:
- 50 MW of demand is shifted
- demand shift takes place over 32 h
### why:
- $DSM_{t}^{up} $ & $DSM_{t}^{down} $ can be non-zero at the same time.
- the sum of $DSM_{t}^{up} $ & $DSM_{t}^{down} $ is limited to 100 MW. Eq. (4)
- as there is no other demand shift happening $\Delta = 100MW$
- $DSM_{7-32}^{up} $ & $DSM_{7-32}^{down} $ can be 50 MW at the same time.
- 50% of the remaining capacity $\Delta$ can be used to extend the delay if there is no interrupting event and it suits the overall objective (e.g. min cost)
### when does it happen:
- if there is any $ \Delta > 0 $ which can be compensated
- depending on the delay time:
  - for $t_{delay} < dist < \infty$: a delay time of $n$ can overcome $\frac{n}{2}$ of fully used potential
  - for $ dist \leq t_{delay} $: a delay time of $n$ can overcome $\frac{n}{2} + 0.5 \cdot x$ of fully used potential, where $x = |c\_dist < t_{delay}|$
### Interrupting event
interrupting event with +50 % wind one timestep after the shortage
```
# data preparation
wind = [100] * timesteps
###### edit specifics
# triple extended
wind[3:6] = [0] * 3
wind[38:41] = [200] * 3
# interrupting event after 1 timestep
wind[6:7] = [150]
df_data = df_base.copy()
df_data['wind'] = wind
# plot
fig, ax = plt.subplots(figsize=(10,4))
ax = df_data[['demand_el', 'wind']].plot(ax=ax, drawstyle="steps-post")
ax = df_data.Cap_up.plot(ax=ax, drawstyle="steps-post", secondary_y=True)
ax = (df_data.Cap_do*-1).plot(ax=ax, drawstyle="steps-post", secondary_y=True)
ax.set_yticks(range(-100,150,50))
ax.legend(loc=9, ncol=3)
ax.set_ylabel("MW or % ")
plt.show()
```
- 100 MW constant demand
- 100 MW missing supply from 3 h to 6 h
- 100 MW surplus from 14 h to 17 h the next day
- 50 MW surplus from 6 h to 7 h
- The delay time is set to 1 h
```
# start model
_ = start_model(df_data, timesteps, plot=True, method='delay', delay_time=1)
```
### what should happen:
- 50 MW should be shifted between 6 h and 7 h as the delay time is 1 h
- missing demand should be fully compensated by coal power plant
- surplus should fully go to excess
### what happens:
- 50 MW of demand are shifted between 6 h and 7 h
- 25 MW additional demand shift takes place over 32 h
### why:
- $DSM_{t}^{up} $ & $DSM_{t}^{down} $ can be non-zero at the same time.
- 50 MW demand shift is happening.
- the sum of $DSM_{t}^{up} $ & $DSM_{t}^{down} $ is limited to 100 MW.
- there is still 50 MW of potential left at 7 h. $\Delta = 50MW$
- $DSM_{7}^{up} = \, 75 MW $
- $DSM_{7}^{down} = \, 25 MW $
- $Eq. \, (4) \quad DSM_{tt}^{up} \quad + \quad \sum_{t=tt-L}^{tt+L} DSM_{t,tt}^{do} \quad
\leq \quad max \{ E_{t}^{up}, E_{t}^{do} \} \qquad \forall tt $
- $ Eq. (0) \quad demand_{t} \quad = \quad \dot{E}_{t}\quad - \quad (DSM_{t}^{up} + \frac{1}{2} \cdot \Delta) \quad + \quad
(\sum_{tt=t-L}^{t+L} DSM_{t,tt}^{do} +\frac{1}{2} \cdot \Delta) \qquad \forall t $
## influence of the delay time
varying delay time
```
# data preparation
wind = [100] * timesteps
###### edit specifics
# triple extended
wind[3:8] = [0] * 5
wind[38:41] = [200] * 3
# interrupting event
wind[10:11] = [200]
wind[13:14] = [200]
wind[19:20] = [200]
# plot
df_data = df_base.copy()
df_data['wind'] = wind
fig, ax = plt.subplots(figsize=(10,4))
ax = df_data[['demand_el', 'wind']].plot(ax=ax, drawstyle="steps-post")
ax = df_data.Cap_up.plot(ax=ax, drawstyle="steps-post", secondary_y=True)
ax = (df_data.Cap_do*-1).plot(ax=ax, drawstyle="steps-post", secondary_y=True)
ax.set_yticks(range(-100,150,50))
ax.legend(loc=9, ncol=3)
ax.set_ylabel("MW or % ")
plt.show()
```
- 100 MW constant demand
- 100 MW missing supply from 3 h to 8 h
- 100 MW surplus from 14 h to 17 h the next day (after 32 h)
- 100 MW surplus from 10 h to 11 h (c_dist = 3)
- 100 MW surplus from 13 h to 14 h (c_dist = 5)
- 100 MW surplus from 19 h to 20 h (c_dist = 21)
- The delay time is varied from 0 h to 6 h below
## iteration over delay_time
```
# start model
for i in range(7):
_ = start_model(df_data, timesteps, plot=True, method='delay', delay_time=i, figsize=(5,5))
plt.title('delay_time = ' + str(i))
```
---
```
import os, numpy, warnings
import pandas as pd
os.environ['R_HOME'] = '/home/gdpoore/anaconda3/envs/tcgaAnalysisPythonR/lib/R'
warnings.filterwarnings('ignore')
%config InlineBackend.figure_format = 'retina'
%reload_ext rpy2.ipython
%%R
require(ggplot2)
require(snm)
require(limma)
require(edgeR)
require(dplyr)
require(pvca)
require(lme4)
require(ggsci)
require(cowplot)
require(doMC)
numCores <- detectCores()
registerDoMC(cores=numCores)
%%R
load("tcgaVbDataAndMetadataAndSNM.RData")
%%R
print(dim(vbDataBarnDFReconciled))
print(dim(vbDataBarnDFReconciledQC))
print(dim(metadataSamplesAllQC))
%%R
metadataSamplesAllQCAML <- droplevels(metadataSamplesAll[! (is.na(metadataSamplesAll$race) |
is.na(metadataSamplesAll$portion_is_ffpe) |
is.na(metadataSamplesAll$age_at_diagnosis)),])
# metadataSamplesAllQCAML <- droplevels(metadataSamplesAllQCAML[metadataSamplesAllQCAML$disease_type == "Acute Myeloid Leukemia",])
vbDataBarnDFReconciledQCAML <- vbDataBarnDFReconciled[rownames(metadataSamplesAllQCAML),]
print(dim(metadataSamplesAllQCAML))
print(dim(vbDataBarnDFReconciledQCAML))
%%R
qcMetadata <- metadataSamplesAllQC # metadataSamplesAllQCAML
qcData <- vbDataBarnDFReconciledQC # vbDataBarnDFReconciledQCAML
# Set up design matrix
covDesignNorm <- model.matrix(~0 + sample_type +
data_submitting_center_label +
platform +
experimental_strategy +
tissue_source_site_label +
portion_is_ffpe,
data = qcMetadata)
print(colnames(covDesignNorm))
colnames(covDesignNorm) <- gsub('([[:punct:]])|\\s+','',colnames(covDesignNorm))
print(colnames(covDesignNorm))
# Set up counts matrix
counts <- t(qcData) # DGEList object from a table of counts (rows=features, columns=samples)
# Normalize using edgeR and then plug into voom
dge <- DGEList(counts = counts)
keep <- filterByExpr(dge, covDesignNorm)
dge <- dge[keep,,keep.lib.sizes=FALSE]
print("Now normalizing data...")
dge <- calcNormFactors(dge, method = "TMM")
print("Now applying voom on normalized data...")
vdge <- voom(dge, design = covDesignNorm, plot = TRUE, save.plot = TRUE, normalize.method="none")
%%R
print(table(metadataSamplesAllQCAML$sample_type))
%%R
# Apply
bio.var.sample.type <- model.matrix(~sample_type, #sample_type, # histological_diagnosis_label and disease_type tried but cause function to fail
data=qcMetadata)
bio.var.gender <- model.matrix(~gender, #sample_type, # histological_diagnosis_label and disease_type tried but cause function to fail
data=qcMetadata)
adj.var <- model.matrix(~data_submitting_center_label +
platform +
experimental_strategy +
tissue_source_site_label +
portion_is_ffpe,
data=qcMetadata)
colnames(bio.var.sample.type) <- gsub('([[:punct:]])|\\s+','',colnames(bio.var.sample.type))
colnames(bio.var.gender) <- gsub('([[:punct:]])|\\s+','',colnames(bio.var.gender))
colnames(adj.var) <- gsub('([[:punct:]])|\\s+','',colnames(adj.var))
print(dim(adj.var))
print(dim(bio.var.sample.type))
print(dim(bio.var.gender))
print(dim(t(vdge$E)))
print(dim(covDesignNorm))
%%R
snmDataObjSampleTypeWithExpStrategyFA <- snm(raw.dat = vdge$E,
bio.var = bio.var.sample.type,
adj.var = adj.var,
rm.adj=TRUE,
verbose = TRUE,
diagnose = TRUE)
snmDataSampleTypeWithExpStrategyFA <- t(snmDataObjSampleTypeWithExpStrategyFA$norm.dat)
print(dim(snmDataSampleTypeWithExpStrategyFA))
%%R
save(snmDataSampleTypeWithExpStrategyFA, file = "snmDataSampleTypeWithExpStrategyFA.RData")
```
# PCA plotting to visually examine batch effects and batch correction
```
%%R
pcaPlotting <- function(pcaObject,pcChoices, dataLabels, factorString, titleString){
require(ggbiplot)
theme_update(plot.title = element_text(hjust = 0.5))
g <- ggbiplot(pcaObject,pcChoices, obs.scale = 1, var.scale = 1,
groups = dataLabels, ellipse = TRUE,
alpha = 0.2,
circle = TRUE,var.axes=FALSE) +
scale_color_nejm(name = factorString) +
theme_bw() +
#theme(legend.direction = "horizontal", legend.position = "top") +
ggtitle(titleString) + theme(plot.title = element_text(hjust = 0.5))
print(g)
}
%%R
unnormalizedPCAPlotFA <- pcaPlotting(pcaObject = prcomp(t(vdge$E)),
pcChoices = c(1,2),
dataLabels = qcMetadata$data_submitting_center_label,
factorString = "Batch",
titleString = "PCA w/o Batch Correction")
%%R
snmPCAPlotSampleTypeFA <- pcaPlotting(pcaObject = prcomp(snmDataSampleTypeWithExpStrategyFA),
pcChoices = c(1,2),
dataLabels = qcMetadata$data_submitting_center_label,
factorString = "Sequencing Center",
titleString = "PCA w/ SNM Correction\n(Target: Sample Type)")
# %%R
# snmPCAPlotGender <- pcaPlotting(pcaObject = prcomp(snmDataGenderWithAML),
# pcChoices = c(1,2),
# dataLabels = qcMetadata$data_submitting_center_label,
# factorString = "Sequencing Center",
# titleString = "PCA w/ SNM Correction\n(Target: Gender)")
%%R
ggsave(plot = unnormalizedPCAPlotFA,
filename = "unnormalizedPCAPlotFA_DecreasedOpacity_NEJM.png",
width = 16.2,
height = 5.29,
units = "in",
dpi = "retina")
ggsave(plot = snmPCAPlotSampleTypeFA,
filename = "snmPCAPlotSampleTypeFA_DecreasedOpacity_NEJM.png",
width = 16.2,
height = 5.29,
units = "in",
dpi = "retina")
# save(snmDataGenderWithAML, metadataSamplesAllQCAML,
# vbDataBarnDFReconciledQCAML,
# file = "amlVbDataAndMetadataAndSNMByGender.RData")
# %%R
# snmDataObjGenderWithAML <- snm(raw.dat = vdge$E,
# bio.var = bio.var.gender,
# adj.var = adj.var,
# rm.adj=TRUE,
# verbose = TRUE,
# diagnose = TRUE)
# snmDataGenderWithAML <- t(snmDataObjGenderWithAML$norm.dat)
# print(dim(snmDataGenderWithAML))
```
# PVCA using key filtered metadata features (i.e. narrowing down the extended version of this)
```
%%R
# Implement PVCA
# From extended model, remove variables that contribute very little if at all:
# ethnicity, gender, reference_genome
pct_threshold <- 0.8
metaPVCAExtendedFiltered <- metadataSamplesAllQC[,c("sample_type",
"disease_type",
"data_submitting_center_label",
"platform",
"experimental_strategy",
"tissue_source_site_label",
"portion_is_ffpe")]
print(dim(metaPVCAExtendedFiltered))
print(dim(snmDataSampleTypeWithExpStrategyFA))
print(dim(vbDataBarnDFReconciledQC))
%%R
pvcaVbRawNoVoomNoSNM_ExtendedFiltered_FA <- PVCA(counts = t(vbDataBarnDFReconciledQC),
meta = metaPVCAExtendedFiltered,
threshold = pct_threshold,
inter = FALSE)
save(pvcaVbRawNoVoomNoSNM_ExtendedFiltered_FA, file = "pvcaVbRawNoVoomNoSNM_ExtendedFiltered_FA.RData")
PlotPVCA(pvcaVbRawNoVoomNoSNM_ExtendedFiltered_FA, "Raw count data")
%%R
pvcaVoomNoSNM_ExtendedFiltered_FA <- PVCA(counts = vdge$E,
meta = metaPVCAExtendedFiltered,
threshold = pct_threshold,
inter = FALSE)
save(pvcaVoomNoSNM_ExtendedFiltered_FA, file = "pvcaVoomNoSNM_ExtendedFiltered_FA.RData")
PlotPVCA(pvcaVoomNoSNM_ExtendedFiltered_FA, "Voom Normalized")
%%R
pvcaSampleWithExpStrategySNM_ExtendedFiltered_FA <- PVCA(counts = t(snmDataSampleTypeWithExpStrategyFA),
meta = metaPVCAExtendedFiltered,
threshold = pct_threshold,
inter = FALSE)
save(pvcaSampleWithExpStrategySNM_ExtendedFiltered_FA,
     file = "pvcaSampleWithExpStrategySNM_ExtendedFiltered_FA.RData")
PlotPVCA(pvcaSampleWithExpStrategySNM_ExtendedFiltered_FA,
"Voom Normalized & SNM Corrected Plus Exp Strategy (Target is Sample Type)")
```
# Examining sample and taxa ratio changes due to batch correction
```
%%R
require(ggplot2)
require(matrixStats)
divSNMDataSampleType <- snmDataSampleType / t(snmDataObjSampleType$raw.dat)
taxaMedians <- data.frame(Medians = colMedians(divSNMDataSampleType),
Taxa = colnames(divSNMDataSampleType),
pval = factor(ifelse(snmDataObjSampleType$pval <=0.05,
yes = "P-value <= 0.05", no = "P-value > 0.05")))
sampleMedians <- data.frame(Medians = rowMedians(divSNMDataSampleType),
Samples = rownames(divSNMDataSampleType),
SeqCenter = metadataSamplesAllQC$data_submitting_center_label,
SampleType = metadataSamplesAllQC$sample_type,
CancerType = metadataSamplesAllQC$disease_type)
gt <- ggplot(taxaMedians, aes(x = reorder(Taxa, -Medians), y = Medians, fill = pval)) +
geom_bar(stat = "identity") +
theme(axis.title.x=element_blank(), axis.text.x=element_blank(), axis.ticks.x=element_blank()) +
labs(y = "Median of Normalizing Ratios Per Taxa", x = "Samples", fill = "ANOVA Result Per Taxa")
gs <- ggplot(sampleMedians, aes(x = reorder(Samples, -Medians), y = Medians, fill = CancerType)) +
geom_bar(stat = "identity") + coord_flip() +
theme(axis.text.y=element_blank(), axis.ticks.y=element_blank()) +
scale_y_log10() + labs(y = "Median of Normalizing Ratios Per Sample", x = "Samples", fill='Cancer Type')
%%R
gt
%%R
ggsave(plot = gt,
filename = "snmNormMedianPerTaxaPval.png",
width = 8.5,
height = 6,
units = "in",
dpi = "retina")
%%R
require(pheatmap)
pheatmap(snmDataSampleTypeLMFit$coefficients,
clustering_distance_rows = "correlation",
clustering_distance_cols = "correlation",
show_rownames = FALSE,
show_colnames = FALSE,
filename = "snmLMFitCoefCorr.png")
# %%R
# save(snmDataObjPathStage, snmDataPathStage, metadataSamplesAllQCPath, file = "snmResultsPathBinned.RData")
```
---

# <font color='Blue'> Data Science in Practice</font>
# Recommender Systems

Every consumer internet company needs a recommender system: **Netflix**, **Youtube**, **news feeds**, **travel and airline ticket sites**, **hotels**, **Mercado Livre**, **Magalu**, **Olist**, and so on. Whenever you must choose what to show from a large variety of items, that is a recommender system.
## What is a recommender system, really?
A recommendation engine is a class of machine learning system that offers relevant suggestions to the customer. Before recommender systems, the main way people decided what to buy was to take suggestions from friends. Now Google knows which news you will read and Youtube knows what kind of videos you will watch, based on your search, viewing, or purchase history.
A recommender system helps an organization create loyal customers and build their trust in the products and services they came to its site for. Today's recommender systems are powerful enough to handle even a new customer visiting the site for the first time: they recommend trending or highly rated products, and they can also recommend the products that bring the most profit to the company.
A book recommender system is a type of recommender system in which we recommend similar books to the reader based on their interests. Book recommender systems are used by online services that provide e-books, such as Google Play Books, Open Library, Goodreads, etc.
# 1. Business Problem
We will use **collaborative filtering** to build a book recommender system. That is, we need to build a predictive engine in which, **based on the reading choices of some people, a book is recommended to other people with similar interests.**
For example:
**Eduardo** read and liked the books A Loja de Tudo and Elon Musk.
**Clarice** also read and liked those two books.

Now **Eduardo** has read and liked the book "StartUp de U$100", which **Clarice** has not read.

So **we should recommend the book "StartUp de U$100" to Clarice**.
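The Eduardo/Clarice idea can be sketched with a toy user-item matrix and cosine similarity. The ratings below are made up purely for illustration; the notebook itself will use `NearestNeighbors` on the full ratings matrix later on.

```python
# Toy illustration of collaborative filtering (hypothetical ratings):
# Eduardo and Clarice agree on the first two books, so Eduardo's third
# book becomes a candidate recommendation for Clarice.
import numpy as np

books = ["A Loja de Tudo", "Elon Musk", "StartUp de U$100"]
ratings = np.array([
    [5, 5, 4],   # Eduardo
    [5, 4, 0],   # Clarice (has not read the third book)
    [1, 0, 2],   # an unrelated reader
], dtype=float)

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

clarice = ratings[1]
sims = [cosine(clarice, ratings[i]) for i in (0, 2)]
neighbour = (0, 2)[int(np.argmax(sims))]              # most similar user
unread = [b for b, r in zip(books, clarice) if r == 0]
print(neighbour, unread)   # 0 ['StartUp de U$100']
```

Eduardo (row 0) is the most similar reader, so his unread-by-Clarice book is the recommendation.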
## **Results**
Do you agree that if you receive a spot-on recommendation, you are much more likely to buy the book?
Do you agree that if more people buy, the company's revenue grows?
Do you agree that customers will be much more satisfied if the site shows that it knows them and only offers products that are genuinely relevant to them?
# 2. Exploratory Data Analysis
```
# Import libraries and packages
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors
```
Data source:
https://www.kaggle.com/rxsraghavagrawal/book-recommender-system
#### Books dataset
```
# Load the books data
books = pd.read_csv("BX-Books.csv", sep=';', encoding="latin-1", error_bad_lines= False)
books
```
#### Users dataset
```
# Load the users data
users = pd.read_csv("BX-Users.csv", sep=';', encoding="latin-1", error_bad_lines= False)
users
```
#### Ratings dataset
```
# Load the ratings data (each user's rating of a book)
ratings = pd.read_csv("BX-Book-Ratings.csv", sep=';', encoding="latin-1", error_bad_lines= False)
ratings.info()
```
# 3. Data Pre-Processing
### Renaming columns
The books file has some extra columns that are not needed for our task, such as image URLs. We will also rename the columns of each file, since the column names contain spaces and capital letters; fixing them will make the columns easier to use.
```
# Rename columns
books = books[['ISBN', 'Book-Title', 'Book-Author', 'Year-Of-Publication', 'Publisher']]
books.rename(columns = {'Book-Title':'title', 'Book-Author':'author', 'Year-Of-Publication':'year', 'Publisher':'publisher'}, inplace=True)
users.rename(columns = {'User-ID':'user_id', 'Location':'location', 'Age':'age'}, inplace=True)
ratings.rename(columns = {'User-ID':'user_id', 'Book-Rating':'rating'}, inplace=True)
books
# Number of ratings per user
ratings['user_id'].value_counts()
# Users who have rated more than 200 books
x = ratings['user_id'].value_counts() > 200
x
# Number of users
# user_ids
y = x[x].index
print(y.shape)
y
```
#### *Business Decision*
```
# Keep only ratings from users who rated more than 200 books
ratings = ratings[ratings['user_id'].isin(y)]
ratings
# Joining tables (join or merge)
rating_with_books = ratings.merge(books, on='ISBN')
rating_with_books.head()
# Number of ratings per book
number_rating = rating_with_books.groupby('title')['rating'].count().reset_index()
number_rating
# Renaming column
number_rating.rename(columns= {'rating':'number_of_ratings'}, inplace=True)
number_rating
# Join the books-with-ratings table with the per-book rating counts
final_rating = rating_with_books.merge(number_rating, on='title')
final_rating
```
#### *Business Decision*
```
# Keep only books that have at least 50 ratings
final_rating = final_rating[final_rating['number_of_ratings'] >= 50]
final_rating.shape
# Drop duplicate rows: the same user rating the same book several times would distort the results
final_rating.drop_duplicates(['user_id','title'], inplace=True)
final_rating.shape
final_rating
```
### Now for the next step:
We will transpose the **users** into **columns** instead of rows, because their ratings will be the **features** of the predictive model.
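On a toy long-format ratings table (made-up data) the reshaping looks like this:

```python
import pandas as pd

# Toy long-format ratings, one row per (user, book) pair
toy = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3],
    "title": ["1984", "Dune", "1984", "Emma", "Dune"],
    "rating": [5, 4, 3, 5, 2]})

# One row per book, one column per user; unrated books become 0
pivot = toy.pivot_table(columns="user_id", index="title", values="rating").fillna(0)
print(pivot)
```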
```
final_rating.info()
# Pivot rows (user_id) into columns
book_pivot = final_rating.pivot_table(columns='user_id', index='title', values="rating")
book_pivot
book_pivot.shape
book_pivot.fillna(0, inplace=True)
book_pivot
```
Our dataset is now ready for modeling. We will use the nearest neighbors algorithm, which groups items based on **Euclidean distance**.
**Explained in this video**:
https://www.youtube.com/watch?v=jD4AKp4-Tmo
The pivot table, however, contains many zero values, and the clustering step would waste compute calculating distances over all of them, so we convert the pivot table to a sparse matrix before feeding it to the model.
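As a rough illustration of the memory saving (toy sizes, not the real dataset):

```python
import numpy as np
from scipy.sparse import csr_matrix

# A mostly-zero matrix, like the book_pivot table
dense = np.zeros((1000, 800))
dense[::50, ::40] = 7  # only a few hundred non-zero "ratings"

sparse = csr_matrix(dense)
sparse_bytes = sparse.data.nbytes + sparse.indices.nbytes + sparse.indptr.nbytes
# The dense array stores every cell; CSR stores only the non-zeros plus index arrays
print(dense.nbytes, sparse_bytes, sparse.nnz)
```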
```
from scipy.sparse import csr_matrix
book_sparse = csr_matrix(book_pivot)
```
# 4. Building the Predictive Model
https://scikit-learn.org/stable/modules/neighbors.html
```
from sklearn.neighbors import NearestNeighbors
model = NearestNeighbors(algorithm='brute')
model.fit(book_sparse)
```
## New Predictions
```
#1984
distances, suggestions = model.kneighbors(book_pivot.iloc[0, :].values.reshape(1, -1))
book_pivot.head()
for i in range(len(suggestions)):
print(book_pivot.index[suggestions[i]])
#Hannibal
distances, suggestions = model.kneighbors(book_pivot.iloc[236, :].values.reshape(1, -1))
book_pivot.head(236)
for i in range(len(suggestions)):
print(book_pivot.index[suggestions[i]])
#Harry Potter
distances, suggestions = model.kneighbors(book_pivot.iloc[238, :].values.reshape(1, -1))
book_pivot.head(238)
for i in range(len(suggestions)):
print(book_pivot.index[suggestions[i]])
```
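The three lookups above repeat the same pattern, so it can be wrapped in a small helper. Below is a self-contained sketch with a tiny stand-in pivot table (titles, ratings, and user ids are made up); in the notebook you would build it against the real `book_pivot` and `model`:

```python
import pandas as pd
from sklearn.neighbors import NearestNeighbors

# Tiny stand-in for the book_pivot table built above (titles x users)
book_pivot = pd.DataFrame(
    [[5, 0, 3], [4, 0, 3], [0, 5, 0], [0, 4, 1]],
    index=["1984", "Animal Farm", "Hannibal", "Red Dragon"],
    columns=[11, 22, 33])

model = NearestNeighbors(algorithm="brute")
model.fit(book_pivot.values)

def recommend(title, n_neighbors=2):
    """Return the n_neighbors titles closest to `title` (the first is the book itself)."""
    idx = book_pivot.index.get_loc(title)
    _, suggestions = model.kneighbors(
        book_pivot.iloc[idx, :].values.reshape(1, -1), n_neighbors=n_neighbors)
    return list(book_pivot.index[suggestions[0]])

print(recommend("1984"))
```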
# The End
## Thanks!
Source of inspiration:
https://www.analyticsvidhya.com/blog/2021/06/build-book-recommendation-system-unsupervised-learning-project/
# Transporter statistics and taxonomic profiles
## Overview
In this notebook some overview statistics of the datasets are computed and taxonomic profiles investigated. The notebook uses data produced by running the [01.process_data](01.process_data.ipynb) notebook.
```
import numpy as np
import pandas as pd
import seaborn as sns
import glob
import os
import matplotlib.pyplot as plt, matplotlib
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
plt.style.use('ggplot')
def make_tax_table(df,name="",rank="superkingdom"):
df_t = df.groupby(rank).sum()
df_tp = df_t.div(df_t.sum())*100
df_tp_mean = df_tp.mean(axis=1)
df_tp_max = df_tp.max(axis=1)
df_tp_min = df_tp.min(axis=1)
df_tp_sd = df_tp.std(axis=1)
table = pd.concat([df_tp_mean,df_tp_max,df_tp_min,df_tp_sd],axis=1)
table.columns = [name+" mean(%)",name+" max(%)",name+" min(%)",name+" std"]
table.rename(index=lambda x: x.split("_")[0], inplace=True)
return table
```
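The normalization inside `make_tax_table` (sum per rank, divide each sample by its column total, multiply by 100) can be sanity-checked on made-up numbers:

```python
import pandas as pd

# Toy abundance table: two samples, two superkingdoms
toy = pd.DataFrame(
    {"superkingdom": ["Bacteria", "Bacteria", "Eukaryota"],
     "s1": [60.0, 20.0, 20.0],
     "s2": [50.0, 25.0, 25.0]})

# Same steps as make_tax_table: group, normalize each sample to 100%, summarize
t = toy.groupby("superkingdom").sum()
tp = t.div(t.sum()) * 100
summary = pd.concat([tp.mean(axis=1), tp.max(axis=1), tp.min(axis=1), tp.std(axis=1)], axis=1)
summary.columns = ["mean(%)", "max(%)", "min(%)", "std"]
print(summary)
```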
## Load the data
```
transinfo = pd.read_csv("selected_transporters_classified.tab", header=0, sep="\t", index_col=0)
transinfo.head()
```
Read gene abundance values with taxonomic annotations.
```
mg_cov = pd.read_table("data/mg/all_genes.tpm.taxonomy.tsv.gz", header=0, sep="\t", index_col=0)
mt_cov = pd.read_table("data/mt/all_genes.tpm.taxonomy.tsv.gz", header=0, sep="\t", index_col=0)
```
Read orf level transporter data.
```
mg_transcov = pd.read_table("results/mg/all_transporters.tpm.taxonomy.tsv.gz", header=0, sep="\t", index_col=0)
mt_transcov = pd.read_table("results/mt/all_transporters.tpm.taxonomy.tsv.gz", header=0, sep="\t", index_col=0)
mg_select_transcov = pd.read_table("results/mg/select_trans_genes.tpm.tsv", header=0, sep="\t", index_col=0)
mt_select_transcov = pd.read_table("results/mt/select_trans_genes.tpm.tsv", header=0, sep="\t", index_col=0)
```
Read transporter abundances.
```
mg_trans = pd.read_csv("results/mg/all_trans.tpm.tsv", header=0, sep="\t", index_col=0)
mt_trans = pd.read_csv("results/mt/all_trans.tpm.tsv", header=0, sep="\t", index_col=0)
```
## Generate taxonomic overview table
```
mg_tax_table = make_tax_table(mg_cov,name="MG ")
mg_tax_table_cyano = make_tax_table(mg_cov,name="MG ",rank="phylum").loc["Cyanobacteria"]
mg_tax_table = pd.concat([mg_tax_table,pd.DataFrame(mg_tax_table_cyano).T])
mg_tax_table
mt_tax_table = make_tax_table(mt_cov,name="MT ")
mt_tax_table_cyano = make_tax_table(mt_cov,name="MT ",rank="phylum").loc["Cyanobacteria"]
mt_tax_table = pd.concat([mt_tax_table,pd.DataFrame(mt_tax_table_cyano).T])
mt_tax_table
```
Concatenate overview tables. This is **Table 2** in the paper.
```
tax_table = pd.concat([mg_tax_table,mt_tax_table],axis=1).round(2)
tax_table.to_csv("results/Table2.tsv",sep="\t")
```
## Generate general overview of transporters
Make table with number of ORFs, ORFs classified as transporters, min, mean and max coverage for transporter ORFs.
```
num_genes = len(mg_cov)
gene_lengths = pd.read_table("data/mg/all_genes.tpm.tsv.gz", usecols=[1])
gene_lengths = np.round(gene_lengths.mean())
def generate_transporter_stats(df):
# Number of transporter genes (genes with sum > 0)
num_trans_genes = len(df.loc[df.groupby(level=0).sum().sum(axis=1)>0])
# Percent of transporter genes
num_trans_genes_p = np.round((num_trans_genes / float(num_genes))*100,2)
# Mean total coverage for transporter genes across the samples
transcov_mean = np.round(((df.groupby(level=0).sum().sum().mean()) / 1e6)*100,2)
# Minimum total coverage for transporter genes across the samples
transcov_min = np.round(((df.groupby(level=0).sum().sum().min()) / 1e6)*100,2)
# Maximum total coverage for transporter genes across the samples
transcov_max = np.round(((df.groupby(level=0).sum().sum().max()) / 1e6)*100,2)
# Standard deviation of the total coverage across the samples
transcov_std = np.round(((df.groupby(level=0).sum().sum() / 1e6)*100).std(),2)
return num_trans_genes, num_trans_genes_p, transcov_mean, transcov_min, transcov_max, transcov_std
mg_num_trans_genes, mg_num_trans_genes_p, mg_transcov_mean, mg_transcov_min, mg_transcov_max, mg_transcov_std = generate_transporter_stats(mg_transcov)
mt_num_trans_genes, mt_num_trans_genes_p, mt_transcov_mean, mt_transcov_min, mt_transcov_max, mt_transcov_std = generate_transporter_stats(mt_transcov)
```
Create table with transporter statistics for MG and MT datasets (**Table 3** in the paper).
```
stats_df = pd.DataFrame(data={
"Transporter genes": ["{} ({}%)".format(mg_num_trans_genes,mg_num_trans_genes_p),"{} ({}%)".format(mt_num_trans_genes,mt_num_trans_genes_p)],
"Transporter mean": ["{}%".format(mg_transcov_mean),"{}%".format(mt_transcov_mean)],
"Transporter min": ["{}%".format(mg_transcov_min),"{}%".format(mt_transcov_min)],
"Transporter max": ["{}%".format(mg_transcov_max),"{}%".format(mt_transcov_max)],
"Transporter std": ["{}%".format(mg_transcov_std),"{}%".format(mt_transcov_std)]},index=["MG","MT"]).T
stats_df.to_csv("results/Table3.tsv",sep="\t")
stats_df
```
Do the same with the selected transporters.
```
mg_select_num_trans_genes, mg_select_num_trans_genes_p, mg_select_transcov_mean, mg_select_transcov_min, mg_select_transcov_max, mg_select_transcov_std = generate_transporter_stats(mg_select_transcov)
mt_select_num_trans_genes, mt_select_num_trans_genes_p, mt_select_transcov_mean, mt_select_transcov_min, mt_select_transcov_max, mt_select_transcov_std = generate_transporter_stats(mt_select_transcov)
select_stats_df = pd.DataFrame(data={
"Selected transporter genes": ["{} ({}%)".format(mg_select_num_trans_genes,mg_select_num_trans_genes_p),"{} ({}%)".format(mt_select_num_trans_genes,mt_select_num_trans_genes_p)],
"Selected transporter mean": ["{}%".format(mg_select_transcov_mean),"{}%".format(mt_select_transcov_mean)],
"Selected transporter min": ["{}%".format(mg_select_transcov_min),"{}%".format(mt_select_transcov_min)],
"Selected transporter max": ["{}%".format(mg_select_transcov_max),"{}%".format(mt_select_transcov_max)],
"Selected transporter std": ["{}%".format(mg_select_transcov_std),"{}%".format(mt_select_transcov_std)]},index=["mg_select","mt_select"]).T
select_stats_df.to_csv("results/selected_transporter_stats.tab",sep="\t")
select_stats_df
```
## Generate kingdom/phylum level taxonomic plots
```
def get_euk_taxa(taxa, df, rank):
euk_taxa = []
for t in taxa:
k = df.loc[df[rank]==t, "superkingdom"].unique()[0]
if k=="Eukaryota":
euk_taxa.append(t)
return euk_taxa
def set_euk_hatches(ax):
for patch in ax.patches:
t = color2taxmap[patch.properties()['facecolor'][0:-1]]
if t in euk_taxa:
patch.set_hatch("////")
```
Generate profiles for metagenomes.
```
# Get sum of abundances at superkingdom level
mg_k = mg_cov.groupby("superkingdom").sum()
# Normalize to %
mg_kn = mg_k.div(mg_k.sum())*100
mg_kn = mg_kn.loc[["Archaea","Bacteria","Eukaryota","Viruses","Unclassified.sequences","other sequences"]]
mg_kn = mg_kn.loc[mg_kn.sum(axis=1).sort_values(ascending=False).index]
# Switch Proteobacterial classes to phylum
mg_cov.loc[mg_cov.phylum=="Proteobacteria","phylum"] = mg_cov.loc[mg_cov.phylum=="Proteobacteria","class"]
# Normalize at phylum level
mg_p = mg_cov.groupby("phylum").sum()
mg_pn = mg_p.div(mg_p.sum())*100
_ = mg_pn.mean(axis=1).sort_values(ascending=False)
_.loc[~_.index.str.contains("Unclassified")].head(8)
```
Create the taxonomic overview of the 7 most abundant phyla in the metagenomic dataset. This is **Figure 1** in the paper.
```
select_taxa = ["Verrucomicrobia","Actinobacteria","Alphaproteobacteria","Gammaproteobacteria","Cyanobacteria","Bacteroidetes","Betaproteobacteria"]
# Sort taxa by mean abundance
taxa_order = mg_pn.loc[select_taxa].mean(axis=1).sort_values(ascending=False).index
ax = mg_pn.loc[taxa_order].T.plot(kind="area",stacked=True)
ax.legend(bbox_to_anchor=(1,1))
ax.set_ylabel("% normalized abundance");
xticks = list(range(0,33))
ax.set_xticks(xticks);
ax.set_xticklabels(mg_pn.columns, rotation=90);
plt.savefig("results/Figure1.svg", bbox_inches="tight")
```
Generate profiles for metatranscriptomes.
```
# Get sum of abundances at superkingdom level
mt_k = mt_cov.groupby("superkingdom").sum()
# Normalize to %
mt_kn = mt_k.div(mt_k.sum())*100
mt_kn = mt_kn.loc[["Archaea","Bacteria","Eukaryota","Viruses","Unclassified.sequences","other sequences"]]
mt_kn = mt_kn.loc[mt_kn.sum(axis=1).sort_values(ascending=False).index]
# Switch Proteobacterial classes to phylum
mt_cov.loc[mt_cov.phylum=="Proteobacteria","phylum"] = mt_cov.loc[mt_cov.phylum=="Proteobacteria","class"]
# Normalize at phylum level
mt_p = mt_cov.groupby("phylum").sum()
mt_pn = mt_p.div(mt_p.sum())*100
```
Get common taxa for both datasets by taking the union of the top 15 most abundant taxa
```
mg_taxa = mg_pn.mean(axis=1).sort_values(ascending=False).head(15).index
mt_taxa = mt_pn.mean(axis=1).sort_values(ascending=False).head(15).index
taxa = set(mg_taxa).union(set(mt_taxa))
```
Single out eukaryotic taxa
```
euk_taxa = get_euk_taxa(taxa, mg_cov, rank="phylum")
```
Sort the taxa by their mean abundance in the mg data
```
taxa_sort = mg_pn.loc[taxa].mean(axis=1).sort_values(ascending=False).index
taxa_colors = dict(zip(taxa_sort,(sns.color_palette("Set1",7)+sns.color_palette("Set2",7)+sns.color_palette("Dark2",5))))
color2taxmap = {}
for t, c in taxa_colors.items():
color2taxmap[c] = t
```
Plot metagenome profiles
```
fig,axes = plt.subplots(ncols=2,nrows=1, figsize=(12,4))
# Plot the kingdoms
ax1 = mg_kn.T.plot(kind="bar",stacked=True,ax=axes[0])
ax1.legend(loc="lower right",fontsize="small")
ax1.set_ylabel("%")
# Plot the phyla
ax2 = mg_pn.loc[taxa_sort].T.plot(kind="bar",stacked=True, color=[taxa_colors[tax] for tax in taxa_sort], legend=None,ax=axes[1])
set_euk_hatches(ax2)
ax2.set_ylabel("%")
ax2.legend(bbox_to_anchor=(1,1),fontsize="small");
```
Plot metatranscriptome profiles
```
fig,axes = plt.subplots(ncols=2,nrows=1, figsize=(12,4))
# Plot the kingdoms
ax1 = mt_kn.T.plot(kind="bar",stacked=True,ax=axes[0])
ax1.legend(loc="lower center",fontsize="small")
ax1.set_ylabel("%")
# Plot the phyla
ax2 = mt_pn.loc[taxa_sort].T.plot(kind="bar",stacked=True, color=[taxa_colors[tax] for tax in taxa_sort], legend=None,ax=axes[1])
set_euk_hatches(ax2)
ax2.set_ylabel("%")
ax2.legend(bbox_to_anchor=(1,1),fontsize="small");
```
Calculate total number of orders.
```
mg_ordersum = mg_cov.groupby("order").sum()
mg_total_orders = len(mg_ordersum.loc[mg_ordersum.sum(axis=1)>0])
print("{} orders in the entire mg dataset".format(mg_total_orders))
mg_trans_ordersum = mg_select_transcov.groupby("order").sum()
mg_trans_total_orders = len(mg_trans_ordersum.loc[mg_trans_ordersum.sum(axis=1)>0])
print("{} orders in the transporter mg dataset".format(mg_trans_total_orders))
mt_ordersum = mt_cov.groupby("order").sum()
mt_total_orders = len(mt_ordersum.loc[mt_ordersum.sum(axis=1)>0])
print("{} orders in the entire mt dataset".format(mt_total_orders))
mt_trans_ordersum = mt_select_transcov.groupby("order").sum()
mt_trans_total_orders = len(mt_trans_ordersum.loc[mt_trans_ordersum.sum(axis=1)>0])
print("{} orders in the transporter mt dataset".format(mt_trans_total_orders))
```
## Calculate and plot distributions per taxonomic subsets.
Extract ORFs belonging to each subset.
```
cya_orfs = mg_transcov.loc[mg_transcov.phylum=="Cyanobacteria"].index
bac_orfs = mg_transcov.loc[(mg_transcov.phylum!="Cyanobacteria")&(mg_transcov.superkingdom=="Bacteria")].index
euk_orfs = mg_transcov.loc[mg_transcov.superkingdom=="Eukaryota"].index
```
Calculate contribution of taxonomic subsets to the identified transporters.
```
taxgroup_df = pd.DataFrame(columns=["MG","MT"],index=["Bacteria","Cyanobacteria","Eukaryota"])
mg_all_transcov_info = pd.merge(transinfo,mg_transcov,left_index=True,right_on="transporter")
mg_bac_transcov_info = pd.merge(transinfo,mg_transcov.loc[bac_orfs],left_index=True,right_on="transporter")
mg_euk_transcov_info = pd.merge(transinfo,mg_transcov.loc[euk_orfs],left_index=True,right_on="transporter")
mg_cya_transcov_info = pd.merge(transinfo,mg_transcov.loc[cya_orfs],left_index=True,right_on="transporter")
mt_all_transcov_info = pd.merge(transinfo,mt_transcov,left_index=True,right_on="transporter")
mt_bac_transcov_info = pd.merge(transinfo,mt_transcov.loc[bac_orfs],left_index=True,right_on="transporter")
mt_euk_transcov_info = pd.merge(transinfo,mt_transcov.loc[euk_orfs],left_index=True,right_on="transporter")
mt_cya_transcov_info = pd.merge(transinfo,mt_transcov.loc[cya_orfs],left_index=True,right_on="transporter")
mg_cya_part = mg_cya_transcov_info.groupby("transporter").sum().sum().div(mg_all_transcov_info.groupby("transporter").sum().sum())*100
mi,ma,me = mg_cya_part.min(),mg_cya_part.max(),mg_cya_part.mean()
taxgroup_df.loc["Cyanobacteria","MG"] = "{}% ({}-{}%)".format(round(me,2),round(mi,2),round(ma,2))
mg_euk_part = mg_euk_transcov_info.groupby("transporter").sum().sum().div(mg_all_transcov_info.groupby("transporter").sum().sum())*100
mi,ma,me = mg_euk_part.min(),mg_euk_part.max(),mg_euk_part.mean()
taxgroup_df.loc["Eukaryota","MG"] = "{}% ({}-{}%)".format(round(me,2),round(mi,2),round(ma,2))
mg_bac_part = mg_bac_transcov_info.groupby("transporter").sum().sum().div(mg_all_transcov_info.groupby("transporter").sum().sum())*100
mi,ma,me = mg_bac_part.min(),mg_bac_part.max(),mg_bac_part.mean()
taxgroup_df.loc["Bacteria","MG"] = "{}% ({}-{}%)".format(round(me,2),round(mi,2),round(ma,2))
mt_cya_part = mt_cya_transcov_info.groupby("transporter").sum().sum().div(mt_all_transcov_info.groupby("transporter").sum().sum())*100
mi,ma,me = mt_cya_part.min(),mt_cya_part.max(),mt_cya_part.mean()
taxgroup_df.loc["Cyanobacteria","MT"] = "{}% ({}-{}%)".format(round(me,2),round(mi,2),round(ma,2))
mt_euk_part = mt_euk_transcov_info.groupby("transporter").sum().sum().div(mt_all_transcov_info.groupby("transporter").sum().sum())*100
mi,ma,me = mt_euk_part.min(),mt_euk_part.max(),mt_euk_part.mean()
taxgroup_df.loc["Eukaryota","MT"] = "{}% ({}-{}%)".format(round(me,2),round(mi,2),round(ma,2))
mt_bac_part = mt_bac_transcov_info.groupby("transporter").sum().sum().div(mt_all_transcov_info.groupby("transporter").sum().sum())*100
mi,ma,me = mt_bac_part.min(),mt_bac_part.max(),mt_bac_part.mean()
taxgroup_df.loc["Bacteria","MT"] = "{}% ({}-{}%)".format(round(me,2),round(mi,2),round(ma,2))
taxgroup_df
```
### Taxonomic subsets per substrate category
```
def calculate_mean_total_substrate_subset(df,df_sum,subset,var_name="Sample",value_name="%"):
cols = ["fam","transporter","substrate_category","name"]
# Sum to protein family
x = df.groupby(["fam","transporter","substrate_category","name"]).sum().reset_index()
cols.pop(cols.index("fam"))
# Calculate mean of transporters
x.groupby(cols).mean().reset_index()
xt = x.copy()
# Normalize to sum of all transporters
x.iloc[:,4:] = x.iloc[:,4:].div(df_sum)*100
# Sum percent to substrate category
x = x.groupby("substrate_category").sum()
# Melt dataframe and add subset column
x["substrate_category"] = x.index
xm = pd.melt(x,id_vars="substrate_category", var_name="Sample",value_name="%")
xm = xm.assign(Subset=pd.Series(data=subset,index=xm.index))
return xm,xt
# Get contribution of bacterial transporters to total for substrate category
mg_bac_cat_melt,mg_bac_cat = calculate_mean_total_substrate_subset(mg_bac_transcov_info,mg_trans.sum(),"Bacteria")
# Get contribution of eukaryotic transporters to total for substrate category
mg_euk_cat_melt,mg_euk_cat = calculate_mean_total_substrate_subset(mg_euk_transcov_info,mg_trans.sum(),"Eukaryota")
# Get contribution of cyanobacterial transporters to total for substrate category
mg_cya_cat_melt,mg_cya_cat = calculate_mean_total_substrate_subset(mg_cya_transcov_info,mg_trans.sum(),"Cyanobacteria")
# Get contribution of bacterial transporters to total for substrate category
mt_bac_cat_melt,mt_bac_cat = calculate_mean_total_substrate_subset(mt_bac_transcov_info,mt_trans.sum(),"Bacteria")
# Get contribution of eukaryotic transporters to total for substrate category
mt_euk_cat_melt,mt_euk_cat = calculate_mean_total_substrate_subset(mt_euk_transcov_info,mt_trans.sum(),"Eukaryota")
# Get contribution of cyanobacterial transporters to total for substrate category
mt_cya_cat_melt,mt_cya_cat = calculate_mean_total_substrate_subset(mt_cya_transcov_info,mt_trans.sum(),"Cyanobacteria")
# Concatenate dataframes for metagenomes
mg_subsets_cat = pd.concat([pd.concat([mg_bac_cat_melt,mg_euk_cat_melt]),mg_cya_cat_melt])
mg_subsets_cat = mg_subsets_cat.assign(dataset=pd.Series(data="MG",index=mg_subsets_cat.index))
# Concatenate dataframes for metagenomes
mt_subsets_cat = pd.concat([pd.concat([mt_bac_cat_melt,mt_euk_cat_melt]),mt_cya_cat_melt])
mt_subsets_cat = mt_subsets_cat.assign(dataset=pd.Series(data="MT",index=mt_subsets_cat.index))
```
**Concatenate MG and MT**
```
subsets_cat = pd.concat([mg_subsets_cat,mt_subsets_cat])
```
### Plot substrate category distributions
```
cats = transinfo.substrate_category.unique()
# Update Eukaryota subset label
subsets_cat.loc[subsets_cat.Subset=="Eukaryota","Subset"] = ["Picoeukaryota"]*len(subsets_cat.loc[subsets_cat.Subset=="Eukaryota","Subset"])
sns.set(font_scale=0.8)
ax = sns.catplot(kind="bar",data=subsets_cat.loc[subsets_cat.substrate_category.isin(cats)],hue="dataset",
y="substrate_category", x="%", col="Subset",
errwidth=1, height=3, palette="Set1", aspect=1)
ax.set_titles("{col_name}")
ax.set_axis_labels("% of normalized transporter abundance","Substrate category")
plt.savefig("results/Figure3A.svg", bbox_inches="tight")
_ = mg_transcov.groupby(["fam","transporter"]).sum().reset_index()
_ = _.groupby("transporter").mean()
_ = pd.merge(transinfo, _, left_index=True, right_index=True)
_ = _.loc[_.substrate_category=="Carbohydrate"].groupby("name").sum()
(_.div(_.sum())*100).mean(axis=1).sort_values(ascending=False).head(3).sum()
```
```
import numpy as np
from keras.models import Sequential
from keras.models import load_model
from keras.models import model_from_json
from keras.layers import Dense, Activation
from keras.utils import np_utils
from keras.preprocessing.image import load_img, save_img, img_to_array
from keras.applications.imagenet_utils import preprocess_input
import matplotlib.pyplot as plt
from keras.preprocessing import image
#you can find the model at https://github.com/serengil/tensorflow-101/blob/master/model/facenet_model.json
model = model_from_json(open("C:/Users/IS96273/Desktop/facenet_model.json", "r").read())
#you can find the pre-trained weights at https://drive.google.com/file/d/1971Xk5RwedbudGgTIrGAL4F7Aifu7id1/view?usp=sharing
model.load_weights('C:/Users/IS96273/Desktop/facenet_weights.h5')
#both model and pre-trained weights are inspired from the work of David Sandberg (github.com/davidsandberg/facenet)
#and transformed by Sefik Serengil (sefiks.com)
#model.summary()
def preprocess_image(image_path):
img = load_img(image_path, target_size=(160, 160))
img = img_to_array(img)
img = np.expand_dims(img, axis=0)
img = preprocess_input(img)
return img
def l2_normalize(x):
return x / np.sqrt(np.sum(np.multiply(x, x)))
def findCosineSimilarity(source_representation, test_representation):
a = np.matmul(np.transpose(source_representation), test_representation)
b = np.sum(np.multiply(source_representation, source_representation))
c = np.sum(np.multiply(test_representation, test_representation))
return 1 - (a / (np.sqrt(b) * np.sqrt(c)))
def findEuclideanDistance(source_representation, test_representation):
euclidean_distance = source_representation - test_representation
euclidean_distance = np.sum(np.multiply(euclidean_distance, euclidean_distance))
euclidean_distance = np.sqrt(euclidean_distance)
return euclidean_distance
metric = "euclidean" #euclidean or cosine
threshold = 0
if metric == "euclidean":
threshold = 0.35
elif metric == "cosine":
threshold = 0.07
def verifyFace(img1, img2):
#produce 128-dimensional representation
img1_representation = model.predict(preprocess_image('C:/Users/IS96273/Desktop/trainset/%s' % (img1)))[0,:]
img2_representation = model.predict(preprocess_image('C:/Users/IS96273/Desktop/trainset/%s' % (img2)))[0,:]
if metric == "euclidean":
img1_representation = l2_normalize(img1_representation)
img2_representation = l2_normalize(img2_representation)
euclidean_distance = findEuclideanDistance(img1_representation, img2_representation)
print("euclidean distance (l2 norm): ",euclidean_distance)
if euclidean_distance < threshold:
print("verified... they are same person")
else:
print("unverified! they are not same person!")
elif metric == "cosine":
cosine_similarity = findCosineSimilarity(img1_representation, img2_representation)
print("cosine similarity: ",cosine_similarity)
if cosine_similarity < threshold:
print("verified... they are same person")
else:
print("unverified! they are not same person!")
f = plt.figure()
f.add_subplot(1,2, 1)
plt.imshow(image.load_img('C:/Users/IS96273/Desktop/trainset/%s' % (img1)))
plt.xticks([]); plt.yticks([])
f.add_subplot(1,2, 2)
plt.imshow(image.load_img('C:/Users/IS96273/Desktop/trainset/%s' % (img2)))
plt.xticks([]); plt.yticks([])
plt.show(block=True)
print("-----------------------------------------")
#true positive
verifyFace("1.jpg", "5.jpg")
verifyFace("1.jpg", "7.jpg")
#true negative
verifyFace("1.jpg", "8.jpg")
verifyFace("1.jpg", "10.jpg")
#true positive
verifyFace("17.jpg", "8.jpg")
verifyFace("17.jpg", "9.jpg")
```
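The two metrics above are closely related: for L2-normalized vectors, the squared Euclidean distance equals `2 * (1 - cosine_similarity)`. A quick standalone check (random vectors, my own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=128), rng.normal(size=128)

def l2_normalize(x):
    return x / np.sqrt(np.sum(np.multiply(x, x)))

# Cosine distance on the raw vectors
cos_dist = 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
# Euclidean distance on the L2-normalized vectors
an, bn = l2_normalize(a), l2_normalize(b)
euc = np.sqrt(np.sum((an - bn) ** 2))

# euc**2 == 2 * cos_dist up to floating point error
print(euc ** 2, 2 * cos_dist)
```

This is why the Euclidean branch of `verifyFace` normalizes the embeddings first; the 0.35 Euclidean and 0.07 cosine thresholds then live on related scales (0.35² ≈ 2 × 0.06).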
```
# Load libraries
import pandas as pd
import numpy as np
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt
import time
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
# Load dataset
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = pd.read_csv(url, names=names)
print(dataset.shape)
print(dataset.head(5))
# Split-out validation dataset
array = dataset.values
X = array[:,0:4]
y = array[:,4]
validation_size = 0.20
seed = 7
X_train, X_validation, y_train, y_validation = train_test_split(X, y, test_size=validation_size, random_state=seed)
# Test options and evaluation metric
seed = 7
scoring = 'accuracy'
# test different number of cores: max 8
num_cpu_list = list(range(1,9))
training_times_all = []
param_grid = {"n_neighbors" : list(range(1,10))}
training_times = []
for num_cpu in num_cpu_list:
clf = GridSearchCV(KNeighborsClassifier(), param_grid, scoring=scoring)
clf.set_params(n_jobs=num_cpu)
start_time = time.time()
clf.fit(X_train, y_train)
training_times.append(time.time() - start_time)
# print logging message
print("Computing KNN grid with {} cores DONE.".format(num_cpu))
print("All computations DONE.")
# best parameters found
print("Best parameters:")
print(clf.best_params_)
print("With accuracy:")
print(clf.best_score_)
scores_all_percent = [100 * s for s in clf.cv_results_["mean_test_score"]]
params_all = [p["n_neighbors"] for p in clf.cv_results_["params"]]
N = 9
ind = np.arange(N) # the x locations for bars
width = 0.5 # the width of the bars
fig, ax = plt.subplots()
ax.bar(ind + width/2, scores_all_percent, width)
ax.set_xticks(ind + width)
ax.set_xticklabels([str(i) for i in params_all])
ax.set_ylim([90,100])
plt.title("Accuracy of KNN vs n_neighbors param")
plt.xlabel("n_neighbors")
plt.ylabel("accuracy [%]")
plt.show()
```
The above plot shows that the best accuracy for KNN algorithm is obtained for **n_neighbors = 7**
```
fig, ax = plt.subplots()
ax.plot(num_cpu_list, training_times, 'ro')
ax.set_xlim([0, len(num_cpu_list)+1])
#plt.axis([0, len(num_cpu_list)+1, 0, max(training_times)+1])
plt.title("Search time vs #CPU Cores")
plt.xlabel("#CPU Cores")
plt.ylabel("search time [s]")
plt.show()
```
We can see that the search time for **n_jobs > 1** is higher than for **n_jobs = 1**. The reason is that multiprocessing comes at a cost: distributing work to multiple processes can take more time than the actual execution for small datasets like **Iris** (150 rows).
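The same overhead is visible with a plain worker pool from the standard library: dispatching many tiny tasks costs more than running them inline (a sketch; exact timings depend on the machine, and real processes with `n_jobs > 1` add serialization and startup costs on top of this):

```python
import time
from multiprocessing.dummy import Pool  # thread pool: the cheapest kind of worker pool

def tiny_task(x):
    return x * x

data = list(range(10000))

start = time.time()
serial = [tiny_task(x) for x in data]
serial_time = time.time() - start

start = time.time()
with Pool(4) as pool:
    parallel = pool.map(tiny_task, data)
pool_time = time.time() - start

# Results are identical, but for tiny tasks the pool's dispatch/coordination
# overhead usually makes it slower than the plain loop
print(f"serial: {serial_time:.4f}s, pool: {pool_time:.4f}s")
```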
Licensed under the MIT License.
Copyright (c) 2021-2031. All rights reserved.
# Kats Outlier Detection
* Kats General
  * `TimeSeriesData` params and methods: https://facebookresearch.github.io/Kats/api/kats.consts.html#kats.consts.TimeSeriesData
* Kats Detection
  * Kats detection official tutorial: https://github.com/facebookresearch/Kats/blob/main/tutorials/kats_202_detection.ipynb
    * It describes the algorithms behind Kats' outlier detectors
    * But Kats' multivariate anomaly detection only outputs strange errors for me, even with the same tutorial code; see this ticket: https://github.com/facebookresearch/Kats/issues/194
* Other Kats Outlier Detectors
  * https://facebookresearch.github.io/Kats/api/kats.detectors.prophet_detector.html
    * Kats v0.1 requires prophet version "0.7" exactly; other versions raise errors, but my laptop could only install a newer prophet...
  * https://facebookresearch.github.io/Kats/api/kats.detectors.hourly_ratio_detection.html
    * It requires the time series to have hour-level granularity
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.stattools import kpss
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.stattools import durbin_watson
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM
from kats.consts import TimeSeriesData
from kats.detectors.outlier import OutlierDetector
import warnings
warnings.filterwarnings("ignore")
ts_df = pd.read_pickle('../../crystal_ball/data_collector/structured_data/sales_ts.pkl')
print(ts_df.shape)
ts_df_train = ts_df.iloc[ts_df.index < '2015-03-01']
print(ts_df_train.shape)
ts_df.head()
def plot_ts(ts, title):
plt.figure(figsize=(20,3))
for col in ts.columns:
fig = plt.plot(ts[col], label=col)
plt.title(title)
plt.legend(loc='best')
plt.tight_layout()
plt.show()
def plot_ts_outliers(ts, title, outliers, decomp='additive'):
outliers_x = [str(outlier).split()[0] for outlier in outliers[0]]
outliers_y = ts.iloc[ts.index.isin(outliers_x)]
plt.figure(figsize=(20,10))
plt.subplot(411)
fig = plt.plot(ts, label='original ts', color='blue')
plt.scatter(outliers_x, outliers_y, c='red', marker='*')
plt.legend(loc='best')
plt.subplot(412)
decomposition = seasonal_decompose(ts, model=decomp)
residual = decomposition.resid
fig = plt.plot(residual, label='residuals', color='purple')
outliers_y_res = residual.iloc[residual.index.isin(outliers_x)]
plt.scatter(outliers_x, outliers_y_res, c='red', marker='*')
plt.legend(loc='best')
plt.title(title)
plt.tight_layout()
plt.show()
plot_ts(ts_df_train, title='Univariate training ts plot')
# Convert to the TimeSeriesData input format Kats requires
kats_ts_all = TimeSeriesData(ts_df_train.reset_index().rename(index=str, columns={'Date': 'time'}))
print(len(kats_ts_all))
```
## Univariate OutlierDetector
* Kats' outlier detector: https://facebookresearch.github.io/Kats/api/kats.detectors.outlier.html
```
# detect & plot outliers
ts_outlierDetection = OutlierDetector(kats_ts_all, 'multiplication', iqr_mult=5)
ts_outlierDetection.detector()
plot_ts_outliers(ts_df_train, title='Outliers in all ts train', outliers=ts_outlierDetection.outliers, decomp='multiplicative')
# remove and plot outliers
ts_outlierDetection_outliers_removed = ts_outlierDetection.remover(interpolate = False) # No interpolation
ts_outlierDetection_interpolated = ts_outlierDetection.remover(interpolate = True) # With linear interpolation
ts_outlierDetection_outliers_removed
fig, ax = plt.subplots(figsize=(25,8), nrows=1, ncols=2)
ts_outlierDetection_outliers_removed.to_dataframe().plot(x='time',y = 'y_0', ax = ax[0])
ax[0].set_title("Outliers Removed : No interpolation")
ts_outlierDetection_interpolated.to_dataframe().plot(x = 'time',y = 'y_0', ax = ax[1])
ax[1].set_title("Outliers Removed : With interpolation")
plt.show()
sub_original_df = ts_df_train.iloc[(ts_df_train.index>='2013-12-22') & (ts_df_train.index<='2014-01-02')]
sub_df_removed = ts_outlierDetection_outliers_removed.to_dataframe()
sub_df_removed = sub_df_removed.loc[(sub_df_removed['time']>='2013-12-22') & (sub_df_removed['time']<='2014-01-02')]
sub_df_interpolated = ts_outlierDetection_interpolated.to_dataframe()
sub_df_interpolated = sub_df_interpolated.loc[(sub_df_interpolated['time']>='2013-12-22') & (sub_df_interpolated['time']<='2014-01-02')]
fig, ax = plt.subplots(figsize=(25,8), nrows=1, ncols=2)
sub_original_df.reset_index().plot(x='Date', y='Daily_Sales', ax=ax[0], color='orange', marker='o', label='original ts')
sub_df_removed.plot(x='time', y='y_0', ax= ax[0], color='green', label='outlier removed ts')
ax[0].set_title("Outliers Removed Subset: No interpolation")
sub_original_df.reset_index().plot(x='Date', y='Daily_Sales', ax=ax[1], color='orange', marker='o', label='original ts')
sub_df_interpolated.plot(x = 'time',y = 'y_0', ax= ax[1], color='green', label='outlier interpolated ts')
ax[1].set_title("Outliers Removed Subset: With interpolation")
plt.show()
```
## Multivariate Anomaly Detection
* References
* VAR for anomaly detection: https://www.analyticsvidhya.com/blog/2021/08/multivariate-time-series-anomaly-detection-using-var-model/
* More about VAR: https://www.machinelearningplus.com/time-series/vector-autoregression-examples-python/
* More multivariate time series models: https://www.statsmodels.org/dev/api.html#multivariate-time-series-models
```
mul_ts_df = pd.read_pickle('../../crystal_ball/data_collector/structured_data/multivar_ts.pkl')
print(mul_ts_df.shape)
mul_ts_df.head()
occupancy = mul_ts_df[['Occupancy']]
mul_ts_df.drop('Occupancy', inplace=True, axis=1)
print(mul_ts_df.shape)
```
### Convert Data to Stationary
```
def test_stationarity_multi_ts(multi_ts_df):
results_dct = {}
for col in multi_ts_df.columns:
timeseries = multi_ts_df[col]
adf_result, kpss_result = None, None
results_dct[col] = {'Differencing Stationary': None, 'Trending Stationary': None}
# Perform Augmented Dickey-Fuller test:
adftest = adfuller(timeseries, autolag='AIC')
adf_output = pd.Series(adftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
adf_test_stats = adf_output['Test Statistic']
for key,value in adftest[4].items():
adf_output[f'Critical Value {key}'] = value
if abs(adf_test_stats) >= abs(adf_output[f'Critical Value 1%']):
adf_result = '99%'
elif abs(adf_test_stats) >= abs(adf_output[f'Critical Value 5%']) and abs(adf_test_stats) < abs(adf_output[f'Critical Value 1%']):
adf_result = '95%'
elif abs(adf_test_stats) >= abs(adf_output[f'Critical Value 10%']) and abs(adf_test_stats) < abs(adf_output[f'Critical Value 5%']):
adf_result = '90%'
# Perform KPSS
kpsstest = kpss(timeseries, regression='c')
kpss_output = pd.Series(kpsstest[0:3], index=['Test Statistic','p-value','Lags Used'])
kpss_test_stats = kpss_output['Test Statistic']
for key,value in kpsstest[3].items():
kpss_output[f'Critical Value {key}'] = value
if abs(kpss_test_stats) >= abs(kpss_output['Critical Value 1%']):
kpss_result = '99%'
elif abs(kpss_test_stats) >= abs(kpss_output['Critical Value 2.5%']) and abs(kpss_test_stats) < abs(kpss_output[f'Critical Value 1%']):
kpss_result = '97.5%'
elif abs(kpss_test_stats) >= abs(kpss_output['Critical Value 5%']) and abs(kpss_test_stats) < abs(kpss_output[f'Critical Value 2.5%']):
kpss_result = '95%'
elif abs(kpss_test_stats) >= abs(kpss_output['Critical Value 10%']) and abs(kpss_test_stats) < abs(kpss_output[f'Critical Value 5%']):
kpss_result = '90%'
results_dct[col]['Differencing Stationary'] = adf_result
results_dct[col]['Trending Stationary'] = kpss_result
return results_dct
def detect_anomalies(squared_errors, n=1):
threshold = np.mean(squared_errors) + n*np.std(squared_errors)
detections = (squared_errors >= threshold).astype(int)
return threshold, detections
mul_ts_df['Humidity'] = mul_ts_df['Humidity'].diff()
mul_ts_df['HumidityRatio'] = mul_ts_df['HumidityRatio'].diff()
mul_ts_df = mul_ts_df.dropna()
print(mul_ts_df.shape)
multi_ts_stationary = test_stationarity_multi_ts(mul_ts_df)
for k, v in multi_ts_stationary.items():
print(k)
print(v)
print()
```
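The `detect_anomalies` helper above flags points whose squared error exceeds `mean + n*std`. A quick standalone check of that rule on made-up values (the `errors` series is illustrative, not from the data set):

```python
import numpy as np
import pandas as pd

def detect_anomalies(squared_errors, n=1):
    # same rule as above: flag values more than n standard deviations above the mean
    threshold = np.mean(squared_errors) + n * np.std(squared_errors)
    detections = (squared_errors >= threshold).astype(int)
    return threshold, detections

errors = pd.Series([0.1, 0.2, 0.15, 0.1, 5.0, 0.12])
threshold, detections = detect_anomalies(errors, n=1)
print(round(threshold, 3))
print(detections.tolist())  # only the 5.0 entry is flagged
```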
### VAR to Detect Anomalies
* Anomalies are detected by flagging observations whose squared residuals exceed a threshold
```
# select better model order
max_lag = 20
var_model = VAR(mul_ts_df)
lag_results = var_model.select_order(max_lag)
selected_lag = lag_results.aic
print(f'Selected VAR order is {selected_lag}')
lag_results.summary()
model_fitted = var_model.fit(selected_lag)
# Durbin-Watson test checks for any leftover pattern (autocorrelation) in the residuals; the closer to 2, the better
dw_scores = durbin_watson(model_fitted.resid)
for col, dw in zip(mul_ts_df.columns, dw_scores):
print(f'{col}: {dw}')
model_fitted.resid
squared_errors = model_fitted.resid.sum(axis=1)**2
threshold, detections = detect_anomalies(squared_errors, n=1)
detected_mul_ts_df = mul_ts_df.copy()
detected_mul_ts_df['anomaly_detection'] = detections
detected_mul_ts_df['Occupancy'] = occupancy
detected_mul_ts_df = detected_mul_ts_df.iloc[selected_lag:, :]
print(f'Threshold: {threshold}')
detected_mul_ts_df.head()
detected_mul_ts_df.loc[detected_mul_ts_df['anomaly_detection']==1].head()
# Check whether there's any anomaly pattern for different occupancy values
no_occupancy_df = detected_mul_ts_df.loc[detected_mul_ts_df['Occupancy']==0]
has_occupancy_df = detected_mul_ts_df.loc[detected_mul_ts_df['Occupancy']==1]
print(no_occupancy_df['anomaly_detection'].value_counts()/len(no_occupancy_df))
print()
print(has_occupancy_df['anomaly_detection'].value_counts()/len(has_occupancy_df))
```
### VECM to Detect Anomalies
* About VECM: https://www.statsmodels.org/dev/generated/statsmodels.tsa.vector_ar.vecm.VECM.html#statsmodels.tsa.vector_ar.vecm.VECM
```
k_ar_diff = 18
vecm_model = VECM(mul_ts_df, k_ar_diff=k_ar_diff)
vecm_model_fitted = vecm_model.fit()
vecm_dw_scores = durbin_watson(vecm_model_fitted.resid)
for col, dw in zip(mul_ts_df.columns, vecm_dw_scores):
print(f'{col}: {dw}')
vecm_squared_errors = vecm_model_fitted.resid.sum(axis=1)**2
vecm_threshold, vecm_detections = detect_anomalies(vecm_squared_errors, n=1)
vecm_detected_mul_ts_df = mul_ts_df.iloc[k_ar_diff+1:, :].copy()
vecm_detected_mul_ts_df['anomaly_detection'] = vecm_detections
print(f'Threshold: {vecm_threshold}')
vecm_detected_mul_ts_df.head()
compare_df = pd.merge(vecm_detected_mul_ts_df[['anomaly_detection']], detected_mul_ts_df[['anomaly_detection']], left_index=True, right_index=True)
print(len(compare_df))
compare_df.head()
compare_df.loc[(compare_df['anomaly_detection_x'] != compare_df['anomaly_detection_y'])]
```
<a href="https://colab.research.google.com/github/mfernandes61/python-intro-gapminder/blob/binder/colab/07_reading_tabular.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
---
title: "Reading Tabular Data into DataFrames"
teaching: 10
exercises: 10
questions:
- "How can I read tabular data?"
objectives:
- "Import the Pandas library."
- "Use Pandas to load a simple CSV data set."
- "Get some basic information about a Pandas DataFrame."
keypoints:
- "Use the Pandas library to get basic statistics out of tabular data."
- "Use `index_col` to specify that a column's values should be used as row headings."
- "Use `DataFrame.info` to find out more about a dataframe."
- "The `DataFrame.columns` variable stores information about the dataframe's columns."
- "Use `DataFrame.T` to transpose a dataframe."
- "Use `DataFrame.describe` to get summary statistics about data."
---
## Use the Pandas library to do statistics on tabular data.
* Pandas is a widely-used Python library for statistics, particularly on tabular data.
* Borrows many features from R's dataframes.
* A 2-dimensional table whose columns have names
and potentially have different data types.
* Load it with `import pandas as pd`. The alias pd is commonly used for Pandas.
* Read a Comma Separated Values (CSV) data file with `pd.read_csv`.
* Argument is the name of the file to be read.
* Assign result to a variable to store the data that was read.
~~~
import pandas as pd
data = pd.read_csv('data/gapminder_gdp_oceania.csv')
print(data)
~~~
{: .language-python}
~~~
country gdpPercap_1952 gdpPercap_1957 gdpPercap_1962 \
0 Australia 10039.59564 10949.64959 12217.22686
1 New Zealand 10556.57566 12247.39532 13175.67800
gdpPercap_1967 gdpPercap_1972 gdpPercap_1977 gdpPercap_1982 \
0 14526.12465 16788.62948 18334.19751 19477.00928
1 14463.91893 16046.03728 16233.71770 17632.41040
gdpPercap_1987 gdpPercap_1992 gdpPercap_1997 gdpPercap_2002 \
0 21888.88903 23424.76683 26997.93657 30687.75473
1 19007.19129 18363.32494 21050.41377 23189.80135
gdpPercap_2007
0 34435.36744
1 25185.00911
~~~
{: .output}
* The columns in a dataframe are the observed variables, and the rows are the observations.
* Pandas uses backslash `\` to show wrapped lines when output is too wide to fit the screen.
> ## File Not Found
>
> Our lessons store their data files in a `data` sub-directory,
> which is why the path to the file is `data/gapminder_gdp_oceania.csv`.
> If you forget to include `data/`,
> or if you include it but your copy of the file is somewhere else,
> you will get a [runtime error]({{ page.root }}/04-built-in/#runtime-error)
> that ends with a line like this:
>
> ~~~
> FileNotFoundError: [Errno 2] No such file or directory: 'data/gapminder_gdp_oceania.csv'
> ~~~
> {: .error}
{: .callout}
## Use `index_col` to specify that a column's values should be used as row headings.
* Row headings are numbers (0 and 1 in this case).
* Really want to index by country.
* Pass the name of the column to `read_csv` as its `index_col` parameter to do this.
~~~
data = pd.read_csv('data/gapminder_gdp_oceania.csv', index_col='country')
print(data)
~~~
{: .language-python}
~~~
gdpPercap_1952 gdpPercap_1957 gdpPercap_1962 gdpPercap_1967 \
country
Australia 10039.59564 10949.64959 12217.22686 14526.12465
New Zealand 10556.57566 12247.39532 13175.67800 14463.91893
gdpPercap_1972 gdpPercap_1977 gdpPercap_1982 gdpPercap_1987 \
country
Australia 16788.62948 18334.19751 19477.00928 21888.88903
New Zealand 16046.03728 16233.71770 17632.41040 19007.19129
gdpPercap_1992 gdpPercap_1997 gdpPercap_2002 gdpPercap_2007
country
Australia 23424.76683 26997.93657 30687.75473 34435.36744
New Zealand 18363.32494 21050.41377 23189.80135 25185.00911
~~~
{: .output}
## Use the `DataFrame.info()` method to find out more about a dataframe.
~~~
data.info()
~~~
{: .language-python}
~~~
<class 'pandas.core.frame.DataFrame'>
Index: 2 entries, Australia to New Zealand
Data columns (total 12 columns):
gdpPercap_1952 2 non-null float64
gdpPercap_1957 2 non-null float64
gdpPercap_1962 2 non-null float64
gdpPercap_1967 2 non-null float64
gdpPercap_1972 2 non-null float64
gdpPercap_1977 2 non-null float64
gdpPercap_1982 2 non-null float64
gdpPercap_1987 2 non-null float64
gdpPercap_1992 2 non-null float64
gdpPercap_1997 2 non-null float64
gdpPercap_2002 2 non-null float64
gdpPercap_2007 2 non-null float64
dtypes: float64(12)
memory usage: 208.0+ bytes
~~~
{: .output}
* This is a `DataFrame`
* Two rows named `'Australia'` and `'New Zealand'`
* Twelve columns, each of which has two actual 64-bit floating point values.
* We will talk later about null values, which are used to represent missing observations.
* Uses 208 bytes of memory.
## The `DataFrame.columns` variable stores information about the dataframe's columns.
* Note that this is data, *not* a method. (It doesn't have parentheses.)
* Like `math.pi`.
* So do not use `()` to try to call it.
* Called a *member variable*, or just *member*.
~~~
print(data.columns)
~~~
{: .language-python}
~~~
Index(['gdpPercap_1952', 'gdpPercap_1957', 'gdpPercap_1962', 'gdpPercap_1967',
'gdpPercap_1972', 'gdpPercap_1977', 'gdpPercap_1982', 'gdpPercap_1987',
'gdpPercap_1992', 'gdpPercap_1997', 'gdpPercap_2002', 'gdpPercap_2007'],
dtype='object')
~~~
{: .output}
## Use `DataFrame.T` to transpose a dataframe.
* Sometimes want to treat columns as rows and vice versa.
* Transpose (written `.T`) doesn't copy the data, just changes the program's view of it.
* Like `columns`, it is a member variable.
~~~
print(data.T)
~~~
{: .language-python}
~~~
country Australia New Zealand
gdpPercap_1952 10039.59564 10556.57566
gdpPercap_1957 10949.64959 12247.39532
gdpPercap_1962 12217.22686 13175.67800
gdpPercap_1967 14526.12465 14463.91893
gdpPercap_1972 16788.62948 16046.03728
gdpPercap_1977 18334.19751 16233.71770
gdpPercap_1982 19477.00928 17632.41040
gdpPercap_1987 21888.88903 19007.19129
gdpPercap_1992 23424.76683 18363.32494
gdpPercap_1997 26997.93657 21050.41377
gdpPercap_2002 30687.75473 23189.80135
gdpPercap_2007 34435.36744 25185.00911
~~~
{: .output}
## Use `DataFrame.describe()` to get summary statistics about data.
`DataFrame.describe()` gets the summary statistics of only the columns that have numerical data.
All other columns are ignored, unless you use the argument `include='all'`.
~~~
print(data.describe())
~~~
{: .language-python}
~~~
gdpPercap_1952 gdpPercap_1957 gdpPercap_1962 gdpPercap_1967 \
count 2.000000 2.000000 2.000000 2.000000
mean 10298.085650 11598.522455 12696.452430 14495.021790
std 365.560078 917.644806 677.727301 43.986086
min 10039.595640 10949.649590 12217.226860 14463.918930
25% 10168.840645 11274.086022 12456.839645 14479.470360
50% 10298.085650 11598.522455 12696.452430 14495.021790
75% 10427.330655 11922.958888 12936.065215 14510.573220
max 10556.575660 12247.395320 13175.678000 14526.124650
gdpPercap_1972 gdpPercap_1977 gdpPercap_1982 gdpPercap_1987 \
count 2.00000 2.000000 2.000000 2.000000
mean 16417.33338 17283.957605 18554.709840 20448.040160
std 525.09198 1485.263517 1304.328377 2037.668013
min 16046.03728 16233.717700 17632.410400 19007.191290
25% 16231.68533 16758.837652 18093.560120 19727.615725
50% 16417.33338 17283.957605 18554.709840 20448.040160
75% 16602.98143 17809.077557 19015.859560 21168.464595
max 16788.62948 18334.197510 19477.009280 21888.889030
gdpPercap_1992 gdpPercap_1997 gdpPercap_2002 gdpPercap_2007
count 2.000000 2.000000 2.000000 2.000000
mean 20894.045885 24024.175170 26938.778040 29810.188275
std 3578.979883 4205.533703 5301.853680 6540.991104
min 18363.324940 21050.413770 23189.801350 25185.009110
25% 19628.685413 22537.294470 25064.289695 27497.598692
50% 20894.045885 24024.175170 26938.778040 29810.188275
75% 22159.406358 25511.055870 28813.266385 32122.777857
max 23424.766830 26997.936570 30687.754730 34435.367440
~~~
{: .output}
* Not particularly useful with just two records,
but very helpful when there are thousands.
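The `include='all'` argument mentioned above also summarizes non-numeric columns; a small sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({'country': ['Australia', 'New Zealand'],
                   'gdp': [34435.4, 25185.0]})
# describe() alone would summarize only the numeric 'gdp' column;
# include='all' adds count/unique/top/freq rows for 'country' as well
summary = df.describe(include='all')
print(summary)
```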
> ## Reading Other Data
>
> Read the data in `gapminder_gdp_americas.csv`
> (which should be in the same directory as `gapminder_gdp_oceania.csv`)
> into a variable called `americas`
> and display its summary statistics.
>
> > ## Solution
> > To read in a CSV, we use `pd.read_csv` and pass the filename `'data/gapminder_gdp_americas.csv'` to it.
> > We also once again pass the column name `'country'` to the parameter `index_col` in order to index by country.
> > The summary statistics can be displayed with the `DataFrame.describe()` method.
> > ~~~
> > americas = pd.read_csv('data/gapminder_gdp_americas.csv', index_col='country')
> > americas.describe()
> > ~~~
> >{: .language-python}
> {: .solution}
{: .challenge}
> ## Inspecting Data
>
> After reading the data for the Americas,
> use `help(americas.head)` and `help(americas.tail)`
> to find out what `DataFrame.head` and `DataFrame.tail` do.
>
> 1. What method call will display the first three rows of this data?
> 2. What method call will display the last three columns of this data?
> (Hint: you may need to change your view of the data.)
>
> > ## Solution
> > 1. We can check out the first five rows of `americas` by executing `americas.head()`
> > (allowing us to view the head of the DataFrame). We can specify the number of rows we wish
> > to see by specifying the parameter `n` in our call
> > to `americas.head()`. To view the first three rows, execute:
> >
> > ~~~
> > americas.head(n=3)
> > ~~~
> > {: .language-python}
> > ~~~
> > continent gdpPercap_1952 gdpPercap_1957 gdpPercap_1962 \
> > country
> > Argentina Americas 5911.315053 6856.856212 7133.166023
> > Bolivia Americas 2677.326347 2127.686326 2180.972546
> > Brazil Americas 2108.944355 2487.365989 3336.585802
> >
> > gdpPercap_1967 gdpPercap_1972 gdpPercap_1977 gdpPercap_1982 \
> > country
> > Argentina 8052.953021 9443.038526 10079.026740 8997.897412
> > Bolivia 2586.886053 2980.331339 3548.097832 3156.510452
> > Brazil 3429.864357 4985.711467 6660.118654 7030.835878
> >
> > gdpPercap_1987 gdpPercap_1992 gdpPercap_1997 gdpPercap_2002 \
> > country
> > Argentina 9139.671389 9308.418710 10967.281950 8797.640716
> > Bolivia 2753.691490 2961.699694 3326.143191 3413.262690
> > Brazil 7807.095818 6950.283021 7957.980824 8131.212843
> >
> > gdpPercap_2007
> > country
> > Argentina 12779.379640
> > Bolivia 3822.137084
> > Brazil 9065.800825
> > ~~~
> > {: .output}
> > 2. To check out the last three rows of `americas`, we would use the command,
> > `americas.tail(n=3)`, analogous to `head()` used above. However, here we want to look at
> > the last three columns so we need to change our view and then use `tail()`. To do so, we
> > create a new DataFrame in which rows and columns are switched:
> >
> > ~~~
> > americas_flipped = americas.T
> > ~~~
> > {: .language-python}
> >
> > We can then view the last three columns of `americas` by viewing the last three rows
> > of `americas_flipped`:
> > ~~~
> > americas_flipped.tail(n=3)
> > ~~~
> > {: .language-python}
> > ~~~
> > country Argentina Bolivia Brazil Canada Chile Colombia \
> > gdpPercap_1997 10967.3 3326.14 7957.98 28954.9 10118.1 6117.36
> > gdpPercap_2002 8797.64 3413.26 8131.21 33329 10778.8 5755.26
> > gdpPercap_2007 12779.4 3822.14 9065.8 36319.2 13171.6 7006.58
> >
> > country Costa Rica Cuba Dominican Republic Ecuador ... \
> > gdpPercap_1997 6677.05 5431.99 3614.1 7429.46 ...
> > gdpPercap_2002 7723.45 6340.65 4563.81 5773.04 ...
> > gdpPercap_2007 9645.06 8948.1 6025.37 6873.26 ...
> >
> > country Mexico Nicaragua Panama Paraguay Peru Puerto Rico \
> > gdpPercap_1997 9767.3 2253.02 7113.69 4247.4 5838.35 16999.4
> > gdpPercap_2002 10742.4 2474.55 7356.03 3783.67 5909.02 18855.6
> > gdpPercap_2007 11977.6 2749.32 9809.19 4172.84 7408.91 19328.7
> >
> > country Trinidad and Tobago United States Uruguay Venezuela
> > gdpPercap_1997 8792.57 35767.4 9230.24 10165.5
> > gdpPercap_2002 11460.6 39097.1 7727 8605.05
> > gdpPercap_2007 18008.5 42951.7 10611.5 11415.8
> > ~~~
> > {: .output}
> >
> > This shows the data that we want, but we may prefer to display three columns instead of three rows,
> > so we can flip it back:
> > ~~~
> > americas_flipped.tail(n=3).T
> > ~~~
> > {: .language-python}
> > __Note:__ we could have done the above in a single line of code by 'chaining' the commands:
> > ~~~
> > americas.T.tail(n=3).T
> > ~~~
> > {: .language-python}
> {: .solution}
{: .challenge}
> ## Reading Files in Other Directories
>
> The data for your current project is stored in a file called `microbes.csv`,
> which is located in a folder called `field_data`.
> You are doing analysis in a notebook called `analysis.ipynb`
> in a sibling folder called `thesis`:
>
> ~~~
> your_home_directory
> +-- field_data/
> | +-- microbes.csv
> +-- thesis/
> +-- analysis.ipynb
> ~~~
> {: .output}
>
> What value(s) should you pass to `read_csv` to read `microbes.csv` in `analysis.ipynb`?
>
> > ## Solution
> > We need to specify the path to the file of interest in the call to `pd.read_csv`. We first need to 'jump' out of
> > the folder `thesis` using '../' and then into the folder `field_data` using 'field_data/'. Then we can specify the filename `microbes.csv`.
> > The result is as follows:
> > ~~~
> > data_microbes = pd.read_csv('../field_data/microbes.csv')
> > ~~~
> >{: .language-python}
> {: .solution}
{: .challenge}
> ## Writing Data
>
> As well as the `read_csv` function for reading data from a file,
> Pandas provides a `to_csv` function to write dataframes to files.
> Applying what you've learned about reading from files,
> write one of your dataframes to a file called `processed.csv`.
> You can use `help` to get information on how to use `to_csv`.
> > ## Solution
> > In order to write the DataFrame `americas` to a file called `processed.csv`, execute the following command:
> > ~~~
> > americas.to_csv('processed.csv')
> > ~~~
> >{: .language-python}
> > For help on `to_csv`, you could execute, for example:
> > ~~~
> > help(americas.to_csv)
> > ~~~
> >{: .language-python}
> > Note that `help(to_csv)` throws an error! This is a subtlety and is due to the fact that `to_csv` is NOT a function in
> > and of itself and the actual call is `americas.to_csv`.
> {: .solution}
{: .challenge}
<figure>
<IMG SRC="https://upload.wikimedia.org/wikipedia/commons/thumb/d/d5/Fachhochschule_Südwestfalen_20xx_logo.svg/320px-Fachhochschule_Südwestfalen_20xx_logo.svg.png" WIDTH=250 ALIGN="right">
</figure>
# Machine Learning
### Summer Semester 2021
Prof. Dr. Heiner Giefers
```
import pandas as pd
import numpy as np
from sklearn.preprocessing import *
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
import wget
from pathlib import Path
```
Missing packages can be installed like this:
```python
import sys
!{sys.executable} -m pip install <paketname>
```
# Analyzing the Data
Data is the foundation of machine learning (ML) algorithms.
What counts is not only the sheer amount of data but also, above all, its quality.
The results of ML algorithms can only be as good as the quality of the data allows.
Understanding the data is therefore an essential step in every ML project.
Let us first go through some basic data terminology:
If data is available in *structured* form, it can usually be described as a table or matrix.
The rows of such a table are called **instances** or **data points**; the columns are called **attributes**, **features**, or **variables**.
- A **data point** is a bundle of data describing one object (a case, a person, a point in time, ...).
- An **attribute** is a measurable property used to describe objects (height, age, weight, eye color, ...).
Attributes can be **categorical** or **numerical**:
- **Categorical** attributes have a finite set of possible values (color, type, weekday, ...)
- **Ordinal** attributes are categorical attributes with an ordering (very poor, poor, satisfactory, good, very good)
- **Nominal** attributes are categorical attributes without an ordering (green, blue, yellow)
- **Numerical** attributes are represented by numbers (height, weight, temperature, ...) and can take any value within a finite or infinite interval.
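In pandas, this distinction shows up in the column dtypes; a minimal sketch using column names from the FIFA data set (the values here are made up):

```python
import pandas as pd

players = pd.DataFrame({'age': [25, 31, 28],
                        'height_cm': [180, 175, 190],
                        'preferred_foot': ['Left', 'Right', 'Right']})
# numerical attributes have number dtypes, categorical ones are typically 'object'
numeric_cols = players.select_dtypes('number').columns.tolist()
categorical_cols = players.select_dtypes('object').columns.tolist()
print(numeric_cols)       # ['age', 'height_cm']
print(categorical_cols)   # ['preferred_foot']
```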
In the following, we look at a data set of player data from the FIFA 20 video game:
```
file = Path("./players_20.csv")
if not file.is_file():
wget.download("https://raw.githubusercontent.com/fh-swf-hgi/ml/main/p2/players_20.csv")
data = pd.read_csv("players_20.csv", encoding = "ISO-8859-1")
data.head()
data.describe()
```
`age`, `height_cm`, and `weight_kg` are examples of numerical attributes; `nationality`, `club`, and `preferred_foot` are categorical attributes.
## Visualization
To get a better feel for the attributes, it is advisable to visualize the data.
A comprehensive and widely used Python library for this is `matplotlib`.
### 1. Bar Charts
Bar charts are a simple way to display how often particular values occur in **categorical features**.
The following chart shows how often particular clubs appear in the data set:
```
data['club'].value_counts().plot.bar()
```
In one variant of the bar chart, the bars are drawn horizontally (a *horizontal bar chart*). This has the advantage that the values being compared are easier to read.
**Task:** Use the `barh` method of the `matplotlib` library to display another categorical feature.
```
# YOUR CODE HERE
raise NotImplementedError()
```
### 2. Histogram
A histogram graphically displays the frequency distribution of the values of a **numerical feature**.
To this end, the value range of the attribute is divided into intervals (usually of equal size).
The number of values falling into each interval then determines the height of the bars.
```
data['height_cm'].plot.hist()
```
The histogram above shows a distribution that is roughly bell-shaped. One may therefore suspect that player height is a normally distributed variable.
**Task:** Plot the attributes `potential` and `overall` in a histogram.
```
# YOUR CODE HERE
raise NotImplementedError()
```
### 3. Scatter Plot
A scatter plot shows the relationship between value pairs of several attributes.
The simplest case is a two-dimensional plot showing the dependency between two attributes.
Each point corresponds to one data point, its *x* and *y* coordinates given by the values of the two attributes for that data point.
In the following plot we compare the height and weight of the players.
The plot already reveals a kind of *pattern*: the taller a player is, the heavier he **generally** is as well.
Importantly, this is not a *law* or a *rule* that always holds. There are players who are taller and at the same time lighter than other players. Such cases, however, are the exception, i.e. they are less likely.
```
data.plot.scatter('height_cm', 'weight_kg')
```
When more than two attributes are to be compared, the visualization becomes more complicated.
One option is to move from two-dimensional to three-dimensional space.
Another is to vary the appearance of the individual points (e.g. their color) according to additional attributes.
**Task:** Use the parameter `c` of the `matplotlib` function `scatter` to display another attribute (e.g. `overall`) in addition to `height_cm` and `weight_kg`.
```
# YOUR CODE HERE
raise NotImplementedError()
```
### 4. Box Plot
Box plots are a compact yet informative way to display the characteristic properties of a numerical attribute.
The box corresponds to the range containing the middle 50% of all values; the line inside the box marks the position of the median.
The *whiskers* indicate the range containing the vast majority of the values.
Points outside this range are to be regarded as *outliers*.

```
data[['potential','overall']].plot.box()
```
## Outlier Detection
An *outlier* is a value that lies far away from all (or nearly all) other values of the same data set.
One way to detect such outliers is to visualize the data, e.g. with a box plot.
In the following example we look at the feature `value_eur`, i.e. the players' market value.
As the label on the y-axis shows, the values are displayed in units of $1e7$, i.e. tens of millions.
```
data['value_eur'].plot.box()
```
Data points that are outliers in several categories can also be identified with scatter plots.
As the following example shows, one player has a very high value in both the *player value* (`value_eur`) and *wage* (`wage_eur`) categories.
```
data.plot.scatter('value_eur', 'wage_eur')
plt.scatter(data['value_eur'][0],data['wage_eur'][0], s=150, edgecolors='k', c='None')
```
Outliers can also be computed.
In the following example we determine, for all data points and features, which values are more than three standard deviations above the mean.
```
s = 3*data.std() + data.mean()
(data.gt(s, axis=1)).head()
```
Once we have identified the outliers, we can remove them from the data set.
```
data_clean = data[(data.gt(s, axis=1)).any(axis=1)==False].copy()
print(f"The original data set has {data.shape[0]} rows")
print(f"The cleaned data set has {data_clean.shape[0]} rows remaining")
```
# Data Preprocessing
Before machine learning methods can be applied, data sets usually have to be preprocessed carefully.
Depending on the method used, certain preprocessing steps may be required.
The steps described below are necessary for almost all applications:
## Data selection
Data sets are often large in the sense that data was collected for many different features.
Having a lot of data is beneficial in principle, but analyses should only use the *relevant* data wherever possible.
In some cases you can identify the less relevant features directly.
For our data set, let us assume that the player name and the positions are rather unimportant. We therefore remove these features, i.e. the corresponding columns, from our data set:
```
data_clean.drop(['short_name','player_positions'], axis=1, inplace=True)
```
However, which features are relevant and which are not is not always easy to answer.
It is therefore advisable to use mathematical methods that can identify less relevant or even redundant features.
More on that later...
## Normalization
The value ranges of the various features can differ greatly.
The age of the players will hardly exceed $50$, while the wages only start well above $1000$.
If our machine learning model relies on all features having the same *leverage* on the model, it makes sense to *normalize* the values of all features.
### 1. Standardization
One of the most frequently used normalization methods is *standardization*.
Here, the mean $\bar X$ is first subtracted from the values of the data points of attribute $X$, so that the transformed data always has mean zero. The values are then divided by the standard deviation $\sigma_X$, so that the transformed feature $\hat X$ has variance 1:
$$\hat x\mapsto \frac{x - \bar X}{\sigma_X}$$
You can carry out the standardization *by hand* in Python using the formula above, or use existing functions, e.g. the function `scale` from the `preprocessing` module of the extensive Python library `scikit-learn` (short: `sklearn`).
```
# Select the columns with numerical data types (int64 in our case)
ncul = data_clean.select_dtypes('int64').columns
# Apply the standardization formula
data_strd = (data_clean[ncul] - data_clean[ncul].mean())/data_clean[ncul].std()
# Standardization with sklearn
data_skstrd = scale(data_clean[ncul])
```
If we compare the dispersion statistics of the new data sets, we see that the means are very close to 0 and the standard deviations are close to 1.
Moreover, the values computed by hand are very close to those computed with `sklearn`.
The remaining differences are due to the different numerical computations.
```
data_strd.describe().loc[['mean', 'std']]
pd.DataFrame(data_skstrd, columns=data_strd.columns).describe().loc[['mean', 'std']]
```
### 2. Min-Max Normalization
Another method for normalizing the data series is *min-max scaling*.
The idea behind it is very simple: first, the minimum of the attribute values $\min_X$ is subtracted from all data points $x$ of the attribute $X$. After this step, the value ranges of all attributes start at 0.
Then all values are divided by the size of the value range $\max_X-\min_X$. Afterwards, all values are scaled to the range $[0,1]$:
$$\hat x\mapsto \frac{x-\min_X}{\max_X-\min_X}$$
In `sklearn`, the class for min-max normalization is called `MinMaxScaler`.
```
# Min-max scaling by hand
data_scaled = (data_clean[ncul] - data_clean[ncul].min())/(data_clean[ncul].max()-data_clean[ncul].min())
# Min-max scaling with sklearn
data_skscaled = MinMaxScaler().fit_transform(data_clean[ncul])
```
We can now compare both variants again:
```
data_scaled.max() - data_scaled.min()
pd.DataFrame(data_skscaled, columns=data_scaled.columns).max() - pd.DataFrame(data_skscaled, columns=data_scaled.columns).min()
```
**Exercise:** Following the principles of min-max scaling, normalize the values of our dataset to the range $[-1,1]$ (instead of $[0,1]$ as above).
```
data_ex = None
# YOUR CODE HERE
raise NotImplementedError()
data_ex
# Test Cell
#----------
assert (data_ex.max() == 1).all()
assert (data_ex.min() == -1).all()
```
In `sklearn`, this can be achieved via the `feature_range` parameter of the `MinMaxScaler` object:
```
data_skex = MinMaxScaler(feature_range=(-1,1)).fit_transform(data_clean[ncul])
data_skex.min(axis=0), data_skex.max(axis=0)
```
Now we can merge the original and the normalized dataset:
```
# updating our dataframe
data_clean[ncul] = data_scaled
data_clean.head()
```
## Encoding
So far we have only considered the numeric features, but not yet the categorical ones.
The vast majority of ML methods rely on *computing* with the values of the attributes.
It is therefore usually necessary to convert categorical features into numeric ones, i.e. to *encode* them.
### 1. Integer Encoding
One way to convert categorical data into numeric data is to assign a unique (integer) numeric value to each *category*.
This simple method is actually quite sensible, **but only if the categorical variables are ordinal**.
A good example are school grades such as *very good*, *good*, *satisfactory*, etc., to which one can fittingly assign the values $1$, $2$, $3$, etc., and then also compute meaningfully with these values.
If the features are **nominal**, i.e. without any discernible order, integer encoding can lead to worse or entirely **unexpected results**.
Put simply, this is because the methods infer dependencies from the numeric values that do not exist in reality.
In `sklearn`, integer encoding can be done with the `OrdinalEncoder` class.
```
# Columns with categorical attributes
ccul = ['club','nationality','preferred_foot']
data_en = data_clean.copy()
data_en[ccul] = OrdinalEncoder().fit_transform(data_clean[ccul])
data_en.head()
```
It is also possible to encode categorical features directly with `pandas`.
To do so, set the column type to `category` and use `cat.codes` as the *encoding*:
```
data_clean['club'].astype('category').cat.codes
```
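When the ordinal order matters, it should be declared explicitly rather than inherited from alphabetical sorting (which is what both `OrdinalEncoder` and `cat.codes` do by default); `OrdinalEncoder` accepts a `categories` argument for this, and in pandas an ordered `CategoricalDtype` achieves the same. A small sketch with hypothetical grade data:

```python
import pandas as pd

# Hypothetical ordinal feature: German school grades
grades = pd.Series(['gut', 'sehr gut', 'befriedigend', 'gut'])

# Declare the order explicitly so the integer codes reflect it
order = ['sehr gut', 'gut', 'befriedigend', 'ausreichend',
         'mangelhaft', 'ungenügend']
codes = grades.astype(pd.CategoricalDtype(categories=order, ordered=True)).cat.codes
print(list(codes))  # [1, 0, 2, 1]
```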
### 2. One-hot Encoding
We have already mentioned that integer encoding is not suitable for nominal features.
A very common transformation that can also be used for nominal attributes is *one-hot encoding*.
Here, a feature with $n$ distinct categories is converted into an $n$-dimensional vector.
Each position of this vector stands for one particular category.
If a data point has a $1$ at position $i$ of this vector, the data point belongs to the $i$-th category of this feature.
As is easy to see, only a single $1$ can occur in this vector, since a data point can be assigned to at most one category; all other positions of the vector are $0$. Only one "hot" entry, hence the name *one-hot*.
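The vector construction described above can be written down in a few lines of plain Python (a toy sketch for illustration, not how the library encoders work internally):

```python
def one_hot(values):
    """Toy one-hot encoder: one column per category, a single 1
    marking the category of each data point."""
    categories = sorted(set(values))
    index = {c: i for i, c in enumerate(categories)}
    return categories, [[1 if index[v] == i else 0
                         for i in range(len(categories))]
                        for v in values]

cats, matrix = one_hot(['left', 'right', 'both', 'left'])
print(cats)    # ['both', 'left', 'right']
print(matrix)  # [[0, 1, 0], [0, 0, 1], [1, 0, 0], [0, 1, 0]]
```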
In `sklearn`, one-hot encoding is available via the `OneHotEncoder` class.
```
onehot = OneHotEncoder(sparse=False).fit_transform(data_clean[ccul])
onehot[:5]
```
We can achieve the same with `pandas`, namely with the function `pandas.get_dummies`.
Each categorical feature is thereby expanded into $n$ individual features, where $n$ is the number of values ($=$ categories) of the feature.
```
data_oh = pd.get_dummies(data_clean)
data_oh.head()
```
## Splitting into Training and Test Data
A typical way to evaluate a machine learning model is to test the model on *new* data.
*New* here means data that was not used for training the model.
If the "results" (*labels*) of the test data are known, one can estimate quite accurately how well the trained model performs.
But why do we need new data here? Wouldn't it be better to use this data for training as well, to make the model even better?
Quite the opposite: in production, your model will always have to work with unknown data. What matters above all is how well the model *generalizes*.
If you use all your data for training, the model may deliver very good results, but only for **this data**. It is then *overfit*.
To prevent *overfitting*, you should therefore always reserve part of the dataset for testing.
This part is usually smaller than the training dataset; how much smaller depends on the size of the dataset.
For large datasets (e.g. more than 1 million data points), a smaller test set (around 2%) is appropriate.
For smaller datasets, a test set of $1/3$ to $1/5$ of the total data is common.
`sklearn` provides the function `train_test_split` for splitting data automatically.
```
train_set, test_set = train_test_split(data_oh, test_size=0.3)
train_set.shape, test_set.shape
```
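Conceptually, `train_test_split` does little more than shuffle and cut; a minimal pure-Python sketch of the idea (the `seed` argument here plays the role of sklearn's `random_state`):

```python
import random

def train_test_split_simple(rows, test_size=0.3, seed=None):
    """Minimal sketch of the idea behind sklearn's train_test_split:
    shuffle the indices, then cut off the test fraction."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    n_test = round(len(rows) * test_size)
    test = [rows[i] for i in idx[:n_test]]
    train = [rows[i] for i in idx[n_test:]]
    return train, test

train, test = train_test_split_simple(list(range(10)), test_size=0.3, seed=42)
```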
# Optimizations
Further techniques can be applied to improve the quality of the training data even more.
Some of these techniques are briefly presented below.
## Dimensionality Reduction
When collecting data, one is usually not very "picky": whatever can be recorded is collected, without much regard for which data will actually be *needed* later.
In principle this approach is good, since it leaves the greatest leeway for later analyses.
But when a particular question is to be addressed with a dataset, usually not all attributes of the dataset are relevant.
Keeping them during training, however, usually means a higher cost in time and resources, which not infrequently shows up in a worse-trained model.
It is therefore advisable to reduce the number of attributes, i.e. the *dimensionality* of the dataset.
The best-known algorithm for dimensionality reduction is **Principal Component Analysis (PCA)**. PCA is a method from statistics that extracts, from data with many features, a few factors that are decisive for, or most informative about, these features. PCA can be used not only to reduce the number of inputs, but also to visualize high-dimensional data in a 2D plot.
We will not look at how PCA works in detail here, but we do want to apply the PCA method to our data and display the results.
`sklearn` provides the class `PCA`, which can reduce the attributes to a given number `n_components`. We will use it to represent our dataset in only 2 dimensions:
```
pca = PCA(n_components=2).fit(data_clean[ncul])
data_pca = pca.transform(data_clean[ncul])
data_pca.shape
```
With the code above we have reduced the dimensionality of our dataset from 7 to 2, without losing a large amount of *information* (or, mathematically speaking, *variance*).
```
var = pca.explained_variance_ratio_*100
print('The first dimension represents %.2f%% of the original variance' %var[0])
print('The second dimension represents %.2f%% of the original variance' %var[1])
print('So %.2f%% of the original variance (= information) has been preserved.' %var.sum())
```
With only 2 features remaining, the data can be plotted in 2D space:
```
plt.scatter(data_pca[:,0], data_pca[:,1])
```
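For reference, what `PCA.fit_transform` and `explained_variance_ratio_` compute is essentially a singular value decomposition of the centered data. A small numpy sketch on synthetic data (component signs may differ from sklearn's output):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: the second feature is almost a linear copy of the first,
# so nearly all variance lies along one direction
X = rng.normal(size=(200, 2))
X[:, 1] = 2 * X[:, 0] + 0.1 * rng.normal(size=200)

# Center, then SVD: the right singular vectors are the principal axes
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

explained_ratio = s**2 / (s**2).sum()   # what explained_variance_ratio_ reports
scores = Xc @ Vt.T                      # what fit_transform returns (all components)
print(explained_ratio)
```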
## Whitening
So-called *whitening* is a linear transformation that can be carried out with the help of a principal component analysis.
It normalizes the variance of the principal components to $1$.
Consider the features `height_cm` and `weight_kg` from our dataset. We can see that we have already *normalized* the values of our data points.
In both categories, the value range lies between $0.0$ and $1.0$.
```
plt.scatter(data_clean['height_cm'], data_clean['weight_kg'])
plt.axis('equal')
plt.show()
```
If we now apply PCA, we observe very different variances of the principal components.
```
pca_wh = PCA(whiten=False).fit_transform(data_clean[['height_cm', 'weight_kg']])
print("Variance of the principal components:", pca_wh.std(axis=0)**2)
plt.scatter(pca_wh[:,0], pca_wh[:,1])
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.axis('equal')
plt.show()
```
If PCA is run with whitening, the variance of the principal components is normalized to 1.
This can be helpful for further processing steps.
```
pca_wh = PCA(whiten=True).fit_transform(data_clean[['height_cm', 'weight_kg']])
print("Variance of the principal components:", pca_wh.std(axis=0)**2)
plt.scatter(pca_wh[:,0], pca_wh[:,1])
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.axis('equal')
plt.show()
```
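In plain numpy terms, whitening is just PCA followed by rescaling each component to unit variance. A small sketch on synthetic data (sign conventions and the exact variance estimator may differ slightly from sklearn's):

```python
import numpy as np

rng = np.random.default_rng(1)
# Correlated 2-D data with very different spreads along its two axes
X = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0], [1.2, 0.3]])

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                    # plain PCA: components keep their variance
white = scores / scores.std(axis=0)   # whitening: rescale each component to variance 1
```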
-----
## Exercise
Try out the presented data preprocessing steps on a simple dataset.
As an example, we will use the well-known `iris` dataset.
This is a dataset of 150 observations of 4 attributes of three iris species (*Iris setosa*, *Iris virginica* and *Iris versicolor*).
The attributes are the width and the length of the sepal and of the petal.
Based on these 4 values, the iris species can be *classified* quite well.
The dataset is used in many examples on *data science* and is a kind of *Hello World* of machine learning.
We now load the dataset via the *Scikit-Learn* function `sklearn.datasets.load_iris`.
Then please carry out the following steps on your own:
1. Normalize the dataset `X` with min-max normalization
2. Encode the target variable `y` with one-hot encoding
3. Split the dataset into training and test data
4. Reduce the dataset `X` to 2 attributes with PCA and whitening
5. Plot the preprocessed dataset `X`
```
iris_data = load_iris()
X = iris_data.data
y = iris_data.target.reshape(-1,1)
OneHotEncoder(sparse=False).fit_transform(y)
```
### Step 1: Normalization
Normalize `X` with **min-max normalization**:
```
X_norm = None
# YOUR CODE HERE
raise NotImplementedError()
# Test Cell
#----------
assert type(X_norm) == np.ndarray, 'X_norm should be a numpy array containing transformed output'
assert X_norm.ptp() == 1.
assert X_norm.shape == X.shape
assert (X_norm>=0).all(), 'All values must be positive'
```
### Step 2: Encoding
Encode `y` with **one-hot encoding**:
```
y_en = None
# YOUR CODE HERE
raise NotImplementedError()
# Test Cell
#----------
assert type(y_en) == np.ndarray, 'y_en should be a numpy array containing transformed output'
assert y_en.shape == (150, 3), 'There should be 3 columns for the 3 classes'
assert y_en.sum() == 150.
assert y_en.ptp() == 1.
```
### Step 3: Splitting
Split `X_norm` and `y_en` into a test set (`X_test`, `y_test`) and a training set (`X_train`, `y_train`). The training set should contain 70% of the data points:
```
X_train, X_test, y_train, y_test = [None]*4
# YOUR CODE HERE
raise NotImplementedError()
# Test Cell
#----------
assert X_train.all() in X_norm, 'X_train data is not in X_norm'
assert X_train.shape[0] == X_norm.shape[0]*0.7, 'The size of training set is not matching 70%'
assert X_train.shape[0]+X_test.shape[0] == X_norm.shape[0]
assert y_train.all() in y_en
```
### Step 4: Dimensionality Reduction
Using principal component analysis, reduce the datasets `X_train` and `X_test` to 2 attributes each. Use whitening.
```
X_train2d = None
X_test2d = None
# YOUR CODE HERE
raise NotImplementedError()
# Test Cell
#----------
assert type(X_train2d) == np.ndarray, 'X_train2d should be a numpy array containing transformed output, not the model'
assert X_train2d.shape == (105, 2), 'The number of attributes is not 2'
assert X_test2d.shape == (45, 2), 'The number of attributes is not 2'
assert np.allclose(X_train2d.std(axis=0).ptp(), 0), 'Attributes have different variances'
```
### Step 5: Visualization
Plot the preprocessed training dataset `X_train2d` with the function `plt.scatter`.
```
# YOUR CODE HERE
raise NotImplementedError()
```
## Prepare data
```
# mount google drive & set working directory
# requires auth (click on url & copy token into text box when prompted)
from google.colab import drive
drive.mount("/content/gdrive", force_remount=True)
import os
print(os.getcwd())
os.chdir('/content/gdrive/My Drive/Colab Notebooks/MidcurveNN')
!pwd
!pip install drawSVG
"""
Prepare Data: populating input images from raw profile data
Takes raw data from "data/raw/*" files for both, profile shape (shape.dat) as well as midcurve shape (shape.mid)
Generates raster image files from svg (simple vector graphics)
Multiple variations are populated using image transformations.
These images become input for further modeling (stored in "data/input/*")
"""
import os
import sys
import PIL
import json
import shutil
import numpy as np
import PIL.ImageOps
from random import shuffle
from keras.preprocessing.image import img_to_array, load_img, array_to_img
np.set_printoptions(threshold=sys.maxsize)
from PIL import Image
# working directory
#wdir = os.getcwd()
wdir = '/content/gdrive/My Drive/Colab Notebooks/MidcurveNN'
print("Working directory: ", wdir)
imdim = 100
#input_data_folder = wdir + "\\data\\sample"
#input_data_folder = wdir + "/data/newinput"
#print("input data dir: ", input_data_folder)
raw_data_folder = "data/new_shapes"
input_data_folder = "data/new_images"
pix2pix_data_folder = "/data/pix2pix/datasets/pix2pix"
def read_dat_files(datafolder=raw_data_folder):
profiles_dict_list = []
for file in os.listdir(datafolder):
if os.path.isdir(os.path.join(datafolder, file)):
continue
filename = file.split(".")[0]
profile_dict = get_profile_dict(filename,profiles_dict_list)
if file.endswith(".dat"):
with open(os.path.join(datafolder, file)) as f:
profile_dict['Profile'] = [tuple(map(float, i.split('\t'))) for i in f]
if file.endswith(".mid"):
with open(os.path.join(datafolder, file)) as f:
profile_dict['Midcurve'] = [tuple(map(float, i.split('\t'))) for i in f]
profiles_dict_list.append(profile_dict)
return profiles_dict_list
def get_profile_dict(shapename,profiles_dict_list):
for i in profiles_dict_list:
if i['ShapeName'] == shapename:
return i
profile_dict = {}
profile_dict['ShapeName'] = shapename
return profile_dict
import drawSvg as draw
def create_image_file(fieldname,profile_dict,datafolder=input_data_folder,imgsize=imdim, isOpenClose=True):
d = draw.Drawing(imgsize, imgsize, origin='center')
profilepoints = []
for tpl in profile_dict[fieldname]:
profilepoints.append(tpl[0])
profilepoints.append(tpl[1])
d.append(draw.Lines(profilepoints[0],profilepoints[1],*profilepoints,close=isOpenClose,fill='none',stroke='black'))
shape = profile_dict['ShapeName']
# d.saveSvg(datafolder+"/"+shape+'.svg')
# d.savePng(datafolder+"/"+shape+'_'+fieldname+'.png')
d.savePng(datafolder+"/"+shape+'_'+fieldname+'.png')
def get_original_png_files(datafolder=input_data_folder):
pngfilenames = []
for file in os.listdir(datafolder):
fullpath = os.path.join(datafolder, file)
if os.path.isdir(fullpath):
continue
if file.endswith(".png") and file.find("_rotated_") == -1 and file.find("_translated_")==-1 and file.find("_mirrored_")==-1:
pngfilenames.append(fullpath)
return pngfilenames
def mirror_images(pngfilenames, mode=PIL.Image.TRANSPOSE):
mirrored_filenames = []
for fullpath in pngfilenames:
picture= Image.open(fullpath)
newfilename = fullpath.replace(".png", "_mirrored_"+str(mode)+".png")
picture.transpose(mode).save(newfilename)
mirrored_filenames.append(newfilename)
return mirrored_filenames
def rotate_images(pngfilenames, angle=90):
for fullpath in pngfilenames:
picture= Image.open(fullpath)
newfilename = fullpath.replace(".png", "_rotated_"+str(angle)+".png")
picture.rotate(angle).save(newfilename)
def translate_images(pngfilenames, dx=1,dy=1):
for fullpath in pngfilenames:
picture= Image.open(fullpath)
x_shift = dx
y_shift = dy
a = 1
b = 0
c = x_shift #left/right (i.e. 5/-5)
d = 0
e = 1
f = y_shift #up/down (i.e. 5/-5)
translate = picture.transform(picture.size, Image.AFFINE, (a, b, c, d, e, f))
# # Calculate the size after cropping
# size = (translate.size[0] - x_shift, translate.size[1] - y_shift)
# # Crop to the desired size
# translate = translate.transform(size, Image.EXTENT, (0, 0, size[0], size[1]))
newfilename = fullpath.replace(".png", "_translated_"+str(dx)+"_"+str(dy)+".png")
translate.save(newfilename)
def generate_images(datafolder=input_data_folder):
if not os.path.exists(datafolder):
os.makedirs(datafolder)
else:
for file in os.listdir(datafolder):
if file.endswith(".png") and (file.find("_rotated_") != -1 or file.find("_translated_") !=-1):
print("files already present, not generating...")
return
print("transformed files not present, generating...")
profiles_dict_list = read_dat_files()
print(profiles_dict_list)
for profile_dict in profiles_dict_list:
create_image_file('Profile',profile_dict,datafolder,imdim,True)
create_image_file('Midcurve',profile_dict,datafolder,imdim,False)
pngfilenames = get_original_png_files(datafolder)
mirrored_filenames_left_right = mirror_images(pngfilenames, PIL.Image.FLIP_LEFT_RIGHT)
mirrored_filenames_top_bottom = mirror_images(pngfilenames, PIL.Image.FLIP_TOP_BOTTOM)
mirrored_filenames_transpose = mirror_images(pngfilenames, PIL.Image.TRANSPOSE)
files_list_list = [pngfilenames,mirrored_filenames_left_right,mirrored_filenames_top_bottom,mirrored_filenames_transpose]
for filelist in files_list_list:
for angle in range(30,360,30):
rotate_images(filelist,angle)
for dx in range(5,21,5):
for dy in range(5,21,5):
translate_images(filelist,dx,-dy)
generate_images()
# wait till all images are generated before executing the next cell
# move images to appropriate directories
# directory names follows the shape name
import os
import shutil
srcpath = input_data_folder
destpath = input_data_folder
for root, subFolders, files in os.walk(srcpath):
for file in files:
#print(file)
subFolder = os.path.join(destpath, file[:4])
if not os.path.isdir(subFolder):
os.makedirs(subFolder)
try:
shutil.move(os.path.join(root, file), subFolder)
except:
pass
print(wdir)
# move images from temporary directory to actual
# directory names follows the shape name
src_shapes = wdir + "/data/new_shapes/"
src_images = wdir + "/data/new_images/"
dest_shapes = wdir + "/data/shapes/"
dest_images = wdir + "/data/images/"
files = os.listdir(src_shapes)
for f in files:
shutil.move(src_shapes+f, dest_shapes)
files = os.listdir(src_images)
for f in files:
shutil.move(src_images+f, dest_images)
```
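The `Image.transform(size, Image.AFFINE, (a, b, c, d, e, f))` call inside `translate_images` maps each output pixel $(x, y)$ to the input pixel $(ax+by+c,\ dx+ey+f)$, so a positive shift moves the visible content towards the upper left. A dependency-free toy sketch of that sampling rule on a nested list:

```python
def affine_translate(grid, dx, dy, fill=0):
    """Mimic PIL's Image.transform(size, Image.AFFINE, (1, 0, dx, 0, 1, dy)):
    every output pixel (x, y) samples the input pixel (x + dx, y + dy),
    so a positive (dx, dy) shifts the content up and to the left."""
    h, w = len(grid), len(grid[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = x + dx, y + dy
            if 0 <= sx < w and 0 <= sy < h:
                out[y][x] = grid[sy][sx]
    return out

print(affine_translate([[1, 2], [3, 4]], 1, 0))  # [[2, 0], [4, 0]]
```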
```
import io
import os
import pandas as pd
data_path = 'E:\\BaiduYunDownload\\optiondata3\\'
```
## Definitions
* **Underlying**: The stock, index, or ETF symbol.
* **Underlying_last**: The last traded price at the time of the option quote.
* **Exchange**: The exchange of the quote. An asterisk (*) represents a consolidated price of all exchanges and is the most common value.
* **Optionsymbol**: The option symbol. Note that in the format starting 2010 this will be longer than 18 characters, depending on the length of the underlying.
* **Blank**: This item is always blank, to preserve continuity with the older format. If you are importing this into a database, either do not import this column, or make the field nullable.
* **Optiontype**: Call or put.
* **Expiration**: The expiration date of the option.
* **Quotedate**: The date and time of the quote. Most of the time, the time will be 4:00 PM. This only means that it is at the close, even though some options trade until 4:15 PM EST.
* **Strike**: The strike of the option.
* **Last**: The last traded price of the option, which could even be from a previous day.
* **Bid**: The bid price of the option.
* **Ask**: The ask price of the option.
* **Volume**: The number of contracts traded.
* **Open interest**: Open interest, always a day behind. The OCC changes this number at 3:00 AM every morning and the number does not change through the day.
* *The columns below are not contained in bare-bones products.*
* **Implied volatility**: The implied volatility, a measure of the estimate of how much the price could change. A high number means that traders believe the option could make a large change.
* **Delta**: The delta, a measure of how much the option price would change in relation to the underlying stock price. A delta of .50 means the option would change 50 cents for every 1 dollar the stock moves.
* **Gamma**: The gamma, a measure of how fast the delta will change when the stock price changes. A high number means this is a very explosive option, which could gain or lose value quickly.
* **Theta**: The theta, a measure of how fast the option is losing value per day due to time decay. As the expiration day arrives, the theta increases.
* **Vega**: The vega, a measure of how sensitive the option price is to a change in the implied volatility. Options that are way out of the money, or have a long time until expiration, are more sensitive to a change in implied volatility.
* **Alias**: If possible, the old name of the option. Because of the 2010 OSI symbology, it is important to know what the old symbol name was during the 2010 switch-over. If this can be determined, it will list the old name; otherwise it will display the same value as the option symbol. The Alias column has no usage outside of 2010.
```
columns= ['UnderlyingSymbol','UnderlyingPrice','Exchange','OptionSymbol','Blank','Type','Expiration', 'DataDate','Strike','Last','Bid','Ask','Volume','OpenInterest','IV','Delta','Gamma','Theta','Vega','Alias']
print(columns)
test=pd.read_csv(data_path+"\\201801\\options_20180102.csv", header=None,
names=columns)
symbols = ['AMD', 'AMED', 'ATLC', 'BLFS', 'CROX', 'DXCM', 'FATE', 'FIVN',
'FRPT', 'HZNP', 'JYNT', 'LPSN', 'LULU', 'MRTX', 'NEO', 'NSTG',
'PCTY', 'PDEX', 'PTCT', 'QDEL', 'REGI', 'RGEN', 'SPSC', 'STAA',
'VCYT', 'VICR', 'WIX']#from 3 years data NASDAQ clustering
df= None
for path in os.listdir(data_path):
for file in os.listdir(data_path+'/'+path):
print('reading file ' + file)
df_one = pd.read_csv(data_path+'/'+path+'/'+file,
header=None, names=columns)
df_one = df_one[df_one['UnderlyingSymbol'].isin(symbols)]
print(df_one.shape)
if df is None:
df= df_one
print(df.shape)
continue
#print(df_one.head())
df = pd.concat([df,df_one],axis=0)
print(df.shape)
df.shape
df.head()
df.to_csv(data_path+'/option_data_NASDAQ.csv',index = False)
```
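Calling `pd.concat` inside the loop copies all previously accumulated rows on every iteration; the usual pandas idiom is to collect the filtered pieces in a list and concatenate once at the end. A sketch of the same loop body in that style (file paths are hypothetical):

```python
import pandas as pd

def load_option_files(paths, columns, symbols):
    """Read each CSV, keep only the watched symbols, and
    concatenate all pieces with a single pd.concat call."""
    frames = []
    for path in paths:
        df_one = pd.read_csv(path, header=None, names=columns)
        frames.append(df_one[df_one['UnderlyingSymbol'].isin(symbols)])
    return pd.concat(frames, ignore_index=True)
```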
# Modeling Transmission Line Properties
## Table of Contents
* [Introduction](#introduction)
* [Propagation constant](#propagation_constant)
* [Interlude on attenuation units](#attenuation_units)
* [Modeling a loaded lossy transmission line using transmission line functions](#tline_functions)
* [Input impedances, reflection coefficients and SWR](#tline_impedances)
* [Voltages and Currents](#voltages_currents)
* [Modeling a loaded lossy transmission line by cascading Networks](#cascading_networks)
* [Determination of the propagation constant from the input impedance](#propagation_constant_from_zin)
## Introduction <a class="anchor" id="introduction"></a>
In this tutorial, `scikit-rf` is used to work with some classical transmission line situations, such as calculating impedances, reflection coefficients, standing wave ratios or voltages and currents. There are at least two ways of performing these calculations: using [transmission line functions](#tline_functions), or by [creating and cascading Networks](#cascading_networks).
Let's consider a lossy coaxial cable of characteristic impedance $Z_0=75 \Omega$ of length $d=12 m$. The coaxial cable has an attenuation of 0.02 Neper/m and a [velocity factor](https://en.wikipedia.org/wiki/Velocity_factor) VF=0.67 (This corresponds roughly to a [RG-6](https://en.wikipedia.org/wiki/RG-6) coaxial). The cable is loaded with a $Z_L=150 \Omega$ impedance. The RF frequency of interest is 250 MHz.
Please note that in `scikit-rf`, the line length is defined from the load, i.e. $z=0$ at the load and $z=d$ at the input of the transmission line:
<img src="transmission_line_properties.svg">
First, let's make the necessary Python import statements:
```
%matplotlib inline
import skrf as rf
from pylab import *
# skrf figure styling
rf.stylely()
```
And the constants of the problem:
```
freq = rf.Frequency(250, npoints=1, unit='MHz')
Z_0 = 75 # Ohm
Z_L = 150 # Ohm
d = 12 # m
VF = 0.67
att = 0.02 # Np/m. Equivalent to 0.1737 dB/m
```
Before going into impedance and reflection coefficient calculations, first we need to define the transmission line properties, in particular its propagation constant.
### Propagation constant <a class="anchor" id="propagation_constant"></a>
In order to get the RF parameters of the transmission line, it is necessary to derive the propagation constant of the line. The propagation constant $\gamma$ of the line is defined in `scikit-rf` as $\gamma=\alpha + j\beta$ where $\alpha$ is the attenuation (in Neper/m) and $\beta=\frac{2\pi}{\lambda}=\frac{\omega}{c}/\mathrm{VF}=\frac{\omega}{c}\sqrt{\epsilon_r}$ the phase constant.
First, the wavelength in the coaxial cable is $$\lambda=\frac{c}{f \sqrt{\epsilon_r}}=\frac{c}{f} \mathrm{VF} $$
```
lambd = rf.c/freq.f * VF
print('VF=', VF, 'and Wavelength:', lambd, 'm')
```
As the attenuation is already given in Np/m, the propagation constant is:
```
alpha = att # Np/m !
beta = freq.w/rf.c/VF
gamma = alpha + 1j*beta
print('Transmission line propagation constant: gamma = ', gamma, 'rad/m')
```
Had the attenuation been given in other units, `scikit-rf` provides the necessary tools to convert them, as described below.
### Interlude: On Attenuation Units <a class="anchor" id="attenuation_units"></a>
Attenuation is generally provided (or expected) in various kind of units. `scikit-rf` provides convenience functions to manipulate line attenuation units.
For example, the cable attenuation given in Np/m, can be expressed in dB:
```
print('Attenuation dB/m:', rf.np_2_db(att))
```
Hence, the attenuation in dB/100m is:
```
print('Line attenuation in dB/100m:', rf.np_2_db(att)*100)
```
And in dB/100feet is:
```
print('Line attenuation in dB/100ft:', rf.np_2_db(att)*100*rf.feet_2_meter())
```
Had the attenuation been given in imperial units, such as dB/100ft, the opposite conversions would have been:
```
rf.db_per_100feet_2_db_per_100meter(5.2949) # to dB/100m
rf.db_2_np(5.2949)/rf.feet_2_meter(100) # to Np/m
```
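These helpers boil down to the Neper/decibel ratio, $1\,\mathrm{Np} = 20/\ln(10) \approx 8.686\,\mathrm{dB}$; a minimal stand-alone sketch of the two conversions:

```python
import math

NP_2_DB = 20 / math.log(10)   # 1 Np = 20/ln(10) dB, about 8.6859 dB

def np_2_db(x):
    """Neper -> decibel, the conversion behind rf.np_2_db."""
    return x * NP_2_DB

def db_2_np(x):
    """Decibel -> Neper, the conversion behind rf.db_2_np."""
    return x / NP_2_DB

print(np_2_db(0.02))  # the cable's 0.02 Np/m expressed in dB/m
```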
## Using transmission line functions <a class="anchor" id="tline_functions"></a>
`scikit-rf` provides a few convenient functions for dealing with transmission lines. They are detailed in the [transmission line functions](https://scikit-rf.readthedocs.io/en/latest/api/tlineFunctions.html) documentation pages.
### Input impedances, reflection coefficients and SWR <a class="anchor" id="tline_impedances"></a>
The reflection coefficient $\Gamma_L$ induced by the load is given by `zl_2_Gamma0()`:
```
Gamma0 = rf.zl_2_Gamma0(Z_0, Z_L)
print('|Gamma0|=', abs(Gamma0))
```
and its associated Standing Wave Ratio (SWR) is obtained from `zl_2_swr()`:
```
rf.zl_2_swr(Z_0, Z_L)
```
After propagating by a distance $d$ in the transmission line of propagation constant $\gamma$ (hence having travelled an electrical length $\theta=\gamma d$), the reflection coefficient at the line input is obtained from `zl_2_Gamma_in()`:
```
Gamma_in = rf.zl_2_Gamma_in(Z_0, Z_L, theta=gamma*d)
print('|Gamma_in|=', abs(Gamma_in), 'phase=', 180/rf.pi*angle(Gamma_in))
```
The input impedance $Z_{in}$ from `zl_2_zin()`:
```
Z_in = rf.zl_2_zin(Z_0, Z_L, gamma * d)
print('Input impedance Z_in=', Z_in)
```
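Under the hood this is the classical lossy-line input impedance formula $Z_{in} = Z_0 \dfrac{Z_L + Z_0\tanh(\gamma d)}{Z_0 + Z_L \tanh(\gamma d)}$; a plain-Python sketch of it (written here from the textbook formula, not copied from the library source):

```python
import cmath

def zl_2_zin(z0, zl, theta):
    """Input impedance of a lossy line of electrical length theta = gamma*d:
    Z_in = Z0 * (Z_L + Z0*tanh(theta)) / (Z0 + Z_L*tanh(theta))"""
    t = cmath.tanh(theta)
    return z0 * (zl + z0 * t) / (z0 + zl * t)
```

Two quick sanity checks: a matched load ($Z_L = Z_0$) gives $Z_{in} = Z_0$ regardless of length, and a zero-length line gives $Z_{in} = Z_L$.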
Like previously, the SWR at the line input is:
```
rf.zl_2_swr(Z_0, Z_in)
```
The total line loss in dB is obtained from `zl_2_total_loss()`:
```
rf.mag_2_db10(rf.zl_2_total_loss(Z_0, Z_L, gamma*d))
```
### Voltages and Currents <a class="anchor" id="voltages_currents"></a>
Now assume that the previous circuit is excited by a source delivering a voltage $V=1 V$ associated to a source impedance $Z_s=100\Omega$ :
<img src="transmission_line_properties_vi.svg">
```
Z_s = 100 # Ohm
V_s = 1 # V
```
At the input of the transmission line, the voltage is given by a voltage divider:
$$
V_{in} = V_s \frac{Z_{in}}{Z_s + Z_{in}}
$$
```
V_in = V_s * Z_in / (Z_s + Z_in)
print('Voltage at transmission line input : V_in = ', V_in, ' V')
```
and the current at the input of the transmission line is:
$$
I_{in} = \frac{V_s}{Z_s + Z_{in}}
$$
```
I_in = V_s / (Z_s + Z_in)
print('Current at transmission line input : I_in = ', I_in, ' A')
```
which represents a power of
$$
P_{in} = \frac{1}{2} \Re \left[V_{in} I_{in}^* \right]
$$
```
P_in = 1/2 * real(V_in * conj(I_in))
print('Input Power : P_in = ', P_in, 'W')
```
The reflected power is:
$$
P_r = |\Gamma_{in}|^2 P_{in}
$$
```
P_r = abs(Gamma_in)**2 * P_in
print('Reflected power : P_r = ', P_r, 'W')
```
The voltage and current at the load can be deduced from the ABCD parameters of the line of length $L$ :
```
V_out, I_out = rf.voltage_current_propagation(V_in, I_in, Z_0,theta= gamma*d)
print('Voltage at load: V_out = ', V_out, 'V')
print('Current at load: I_out = ', I_out, 'A')
```
Note that voltages and currents are expressed as peak values. RMS values are thus:
```
print(abs(V_out)/sqrt(2), abs(I_out)/sqrt(2))
```
The power delivered to the load is thus:
```
P_out = 1/2 * real(V_out * conj(I_out))
print('Power delivered to the load : P_out = ', P_out, ' W')
```
Voltage and current are plotted below against the position on the transmission line (pay attention to the sign of $d$ in the voltage and current propagation: as we go from the source ($z=d$) to the load ($z=0$), $\theta$ runs in the opposite direction and its sign must be reversed):
```
ds = linspace(0, d, num=1001)
thetas = - gamma*ds
v1 = np.full_like(ds, V_in)
i1 = np.full_like(ds, I_in)
v2, i2 = rf.voltage_current_propagation(v1, i1, Z_0, thetas)
fig, (ax_V, ax_I) = plt.subplots(2, 1, sharex=True)
ax_V.plot(ds, abs(v2), lw=2)
ax_I.plot(ds, abs(i2), lw=2, c='C1')
ax_I.set_xlabel('z [m]')
ax_V.set_ylabel('|V| [V]')
ax_I.set_ylabel('|I| [A]')
ax_V.axvline(0, c='k', lw=5)
ax_I.axvline(0, c='k', lw=5)
ax_V.text(d-2, 0.4, 'input')
ax_V.text(1, 0.6, 'load')
ax_V.axvline(d, c='k', lw=5)
ax_I.axvline(d, c='k', lw=5)
ax_I.set_title('Current')
ax_V.set_title('Voltage')
```
## Using `media` objects for transmission line calculations <a class="anchor" id="cascading_networks"></a>
`scikit-rf` also provides objects representing transmission line mediums. The `Media` object provides generic methods to produce Network’s for any transmission line medium, such as transmission line length (`line()`), lumped components (`resistor()`, `capacitor()`, `inductor()`, `shunt()`, etc.) or terminations (`open()`, `short()`, `load()`). For additional references, please see the [media documentation](https://scikit-rf.readthedocs.io/en/latest/api/media/).
Let's create a transmission line `media` object for our coaxial line of characteristic impedance $Z_0$ and propagation constant $\gamma$:
```
# if not passing the gamma parameter, it would assume that gamma = alpha + j*beta = 0 + j*1
coax_line = rf.media.DefinedGammaZ0(frequency=freq, Z0=Z_0, gamma=gamma)
```
In order to build the circuit illustrated by the figure above, all the circuit's Networks are created and then [cascaded](https://scikit-rf.readthedocs.io/en/latest/tutorials/Networks.html#Cascading-and-De-embedding) with the `**` operator:
<img src="transmission_line_properties_networks.svg">
* a [transmission line](https://scikit-rf.readthedocs.io/en/latest/api/media/generated/skrf.media.media.Media.line.html) of length $d$ (from the media created above),
* a [resistor](https://scikit-rf.readthedocs.io/en/latest/api/media/generated/skrf.media.media.Media.resistor.html) of impedance $Z_L$,
* then terminated by a [short](https://scikit-rf.readthedocs.io/en/latest/api/media/generated/skrf.media.media.Media.short.html).
This results in a one-port network, whose $Z$-parameter is then the input impedance:
```
ntw = coax_line.line(d, unit='m') ** coax_line.resistor(Z_L) ** coax_line.short()
ntw.z
```
Note that the full network can also be built with the convenience function [load](https://scikit-rf.readthedocs.io/en/latest/api/media/generated/skrf.media.media.Media.load.html):
```
ntw = coax_line.line(d, unit='m') ** coax_line.load(rf.zl_2_Gamma0(Z_0, Z_L))
ntw.z
```
or even more directly using [delay_load](https://scikit-rf.readthedocs.io/en/latest/api/media/generated/skrf.media.media.Media.delay_load.html):
```
ntw = coax_line.delay_load(rf.zl_2_Gamma0(Z_0, Z_L), d, unit='m')
ntw.z
```
## Determination of the propagation constant from the input impedance <a class="anchor" id="propagation_constant_from_zin"></a>
Let's assume the input impedance of a short-circuited lossy transmission line of length $d=1.5$ m and a characteristic impedance of $Z_0=75\,\Omega$ has been measured as $Z_{in}=20 - 140j\,\Omega$ (the values used in the code below).
<img src="transmission_line_properties_propagation_constant.svg">
The transmission line propagation constant $\gamma$ is unknown and must be determined. Let's see how to deduce its value using `scikit-rf`:
```
# input data
z_in = 20 - 140j
z_0 = 75
d = 1.5
Gamma_load = -1 # short
```
Since we know the input impedance, we can deduce the reflection coefficient at the input of the transmission line. There is a direct relationship between the reflection coefficient at the load and at the input of the line:
$$
\Gamma_{in} = \Gamma_L e^{- 2 \gamma d}
$$
we can deduce the propagation constant value $\gamma$ as:
$$
\gamma = -\frac{1}{2d} \ln \left( \frac{\Gamma_{in}}{\Gamma_L} \right)
$$
This is what the convenience function `reflection_coefficient_2_propagation_constant` is doing:
```
# reflection coefficient at input
Gamma_in = rf.zl_2_Gamma0(z_0, z_in)
# line propagation constant
gamma = rf.reflection_coefficient_2_propagation_constant(Gamma_in, Gamma_load, d)
print('Line propagation constant, gamma =', gamma, 'rad/m')
```
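The formula above can also be applied directly, as a sketch using only the standard library, to cross-check the convenience function (same input values as above):

```python
import cmath

z_in, z_0, d = 20 - 140j, 75, 1.5
Gamma_L = -1  # short circuit

# Reflection coefficient at the input: Gamma = (Z - Z0) / (Z + Z0)
Gamma_in = (z_in - z_0) / (z_in + z_0)

# gamma = -1/(2d) * ln(Gamma_in / Gamma_L)
gamma = -1 / (2 * d) * cmath.log(Gamma_in / Gamma_L)
print(gamma)

# Round trip: propagating Gamma_L back to the input must recover Gamma_in
assert abs(Gamma_L * cmath.exp(-2 * gamma * d) - Gamma_in) < 1e-9
```

The real part of `gamma` (the attenuation constant) comes out positive, as expected for a lossy line.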
One can check the consistency of the result by making the reverse calculation: the input impedance at a distance $d$ from the short-circuit termination:
```
rf.zl_2_zin(z_0, zl=0, theta=gamma * d)
```
This is indeed the value given as input to the example.
Now that the line propagation constant has been determined, one can replace the short with a complex load impedance:
```
rf.zl_2_zin(z_0, zl=50+50j, theta=gamma * d)
```
# Working with Selenium (working copy)
Working with Selenium takes some practice, but the time investment pays off: with Selenium there is hardly a web service that cannot be scraped. As usual, let's begin with the documentation. In Selenium's case it is very helpful. You can find it [here](http://selenium-python.readthedocs.io/) and [here](http://selenium-python.readthedocs.io/locating-elements.html).
To get to know Selenium, we go back to our apprenticeships example: https://www.berufsberatung.ch/dyn/show/2930. This time we do not want to generate URLs to find our content; instead, we want to interact with the site. That way we should get all the entries. We will still use BeautifulSoup alongside it, because Selenium does not parse content for us. The library simply lets us interact with the web service.
Let's start with the imports.
```
from bs4 import BeautifulSoup
import requests
import time # can be used e.g. to slow the browser down so we are not immediately detected as a machine
import datetime # not needed right now
import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
```
Then we send Selenium to the page.
```
# Start the browser:
driver = webdriver.Chrome('/usr/local/bin/chromedriver')
# Visit the site ("driver" is our handle on the browser from now on)
driver.get("https://www.berufsberatung.ch/dyn/show/2930")
```
Now we use the inspector to find the elements we want to interact with.
```
driver.find_element_by_class_name("fs-autocomplete-trigger").click()
driver.find_element_by_id("sw_453").click()
driver.find_element_by_id("uxfs-action").click()
test = driver.page_source
type(test) # shows, for orientation, what kind of object this is
# Open a file for writing ("w": write)
file = open("lehrstellen.html", "w")
file.write(test)
# Finally, we close the file again
file.close()
```
But we want all of the entries. So let's fetch the IDs again.
```
r = requests.get('https://www.berufsberatung.ch/dyn/show/2930')  # fetch the page
soup = BeautifulSoup(r.text, 'html.parser')
ids = []
for elem in soup.find('ul',{'class':'ui-autocomplete full-list '}).find_all('a'):
    # We could chain further find or find_all calls here to narrow the
    # search step by step. But we are already at the list items, so we
    # can query them directly in the for loop.
    elem = "sw_" + elem['data-id']
    ids.append(elem)
len(ids)
ids[:5]
```
Let's test it with the first five entries.
```
for elem in ids[:5]:
    print(elem)
    time.sleep(.5)  # so it does not go too fast
    driver.find_element_by_class_name("fs-autocomplete-trigger").click()
    time.sleep(.5)
    driver.find_element_by_id(elem).click()
    # the query above only covers the first 230 professions
    driver.find_element_by_id("uxfs-action").click()
```
Let's display all results.
```
driver.find_element_by_id("aSearchPaging").click()
```
Let's save the results.
```
text = driver.page_source
def lehrstellen(html):  # we build the function "lehrstellen" (German for "apprenticeships")
    soup = BeautifulSoup(html, 'html.parser')  # parse the object with BeautifulSoup
    # \ allows a line break
    # with find and find_all we locate the corresponding elements
    ortsliste = soup.find('div', {'class':'resultpart result-body'})\
        .find_all('div', {'class':'display-table-cell table-col-3'})
    firmenliste = soup.find('div', {'class':'resultpart result-body'})\
        .find_all('div', {'class':'display-table-cell bold company data-id table-col-1'})
    jahresliste = soup.find('div', {'class':'resultpart result-body'})\
        .find_all('div', {'class':'display-table-cell float-left-for-sd table-col-4'})
    anzahlliste = soup.find('div', {'class':'resultpart result-body'})\
        .find_all('div', {'class':'display-table-cell text-align-center float-left-for-sd table-col-5'})
    lst = []
    # now we build the four fields of each entry in the list lst
    for ort, firma, jahr, anzahl in zip(ortsliste, firmenliste, jahresliste, anzahlliste):
        mini_dict = {'Ort': ort.text,  # always output as text
                     'Firma': firma.text,
                     'Jahr': jahr.text,
                     'Anzahl': int(anzahl.text.replace(' Lehrstelle(n)\n','').replace('\n',''))}
        lst.append(mini_dict)
    return lst
lehrstellen(text)
pd.DataFrame(lehrstellen(text))
```
## Putting it all together
```
# Function to extract only the information we are interested in
def lehrstellen(html):
    soup = BeautifulSoup(html, 'lxml')
    try:
        ortsliste = soup.find('div', {'class':'resultpart result-body'})\
            .find_all('div', {'class':'display-table-cell table-col-3'})
        firmenliste = soup.find('div', {'class':'resultpart result-body'})\
            .find_all('div', {'class':'display-table-cell bold company data-id table-col-1'})
        jahresliste = soup.find('div', {'class':'resultpart result-body'})\
            .find_all('div', {'class':'display-table-cell float-left-for-sd table-col-4'})
        anzahlliste = soup.find('div', {'class':'resultpart result-body'})\
            .find_all('div', {'class':'display-table-cell text-align-center float-left-for-sd table-col-5'})
        lehrstelle = soup.find('ul',{'class':'ui-autocomplete full-list '})\
            .find_all('a')
        lst = []
        for ort, firma, jahr, anzahl, lehr in zip(ortsliste, firmenliste, jahresliste, anzahlliste, lehrstelle):
            mini_dict = {'Ort': ort.text,
                         'Firma': firma.text,
                         'Jahr': jahr.text,
                         'Anzahl': int(anzahl.text.replace(' Lehrstelle(n)\n','').replace('\n','')),
                         'Lehrstelle': lehr['data-value']}
            lst.append(mini_dict)
        return pd.DataFrame(lst).to_csv("d/"+str(datetime.datetime.now())+".csv")
    except:
        return pd.DataFrame([{'Ort':'Keine Treffer',
                              'Firma':'Keine Treffer',
                              'Jahr':'Keine Treffer',
                              'Anzahl':'Keine Treffer'}])
# Build a list of all job IDs
r = requests.get('https://www.berufsberatung.ch/dyn/show/2930')
soup = BeautifulSoup(r.text, 'lxml')
ids = []
for elem in soup.find('ul',{'class':'ui-autocomplete full-list '}).find_all('a'):
    elem = "sw_" + elem['data-id']
    ids.append(elem)
# Split this list into chunks of five each.
# I did not write this myself; it comes from:
# https://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks
idslst = [ids[i:i + 5] for i in range(0, len(ids), 5)]
count = 0
for ids in idslst:
    # Start the Chrome browser and visit the site
    driver = webdriver.Chrome('/usr/local/bin/chromedriver')
    driver.get("https://www.berufsberatung.ch/dyn/show/2930")
    # Prepare the search
    for elem in ids:
        time.sleep(1)  # so it does not go too fast
        driver.find_element_by_class_name("fs-autocomplete-trigger").click()
        time.sleep(1)
        driver.find_element_by_id(elem).click()
        # Run the search
        time.sleep(1)
        driver.find_element_by_id("uxfs-action").click()
    # Now make sure that all results are displayed
    exists = 1
    while exists == 1:
        loadmore = driver.find_element_by_id("aSearchPaging")
        if loadmore.text == "MEHR ERGEBNISSE ANZEIGEN":
            driver.find_element_by_id("aSearchPaging").click()
            time.sleep(1)
        else:
            exists = 0
    print(count)
    count += 1
    lehrstellen(driver.page_source)
    driver.close()
```
Let's create a small .py file from this and run it from the command line.
# Temporal-Difference Methods
In this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.
While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.
---
### Part 0: Explore CliffWalkingEnv
We begin by importing the necessary packages.
```
import sys
import gym
import numpy as np
import random
import math
from collections import defaultdict, deque
import matplotlib.pyplot as plt
%matplotlib inline
import check_test
from plot_utils import plot_values
```
Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment.
```
env = gym.make('CliffWalking-v0')
```
The agent moves through a $4\times 12$ gridworld, with states numbered as follows:
```
[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35],
[36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]
```
At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.
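The numbering above is row-major, so a state index can be converted to grid coordinates with `divmod` (a small helper for illustration, not part of the environment API):

```python
# Row-major numbering on the 4x12 grid: state = row * 12 + col
def state_to_coords(state, n_cols=12):
    return divmod(state, n_cols)

print(state_to_coords(36))  # (3, 0)  -> bottom-left start state
print(state_to_coords(47))  # (3, 11) -> bottom-right terminal state
```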
The agent has 4 potential actions:
```
UP = 0
RIGHT = 1
DOWN = 2
LEFT = 3
```
Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below.
```
print(env.action_space)
print(env.observation_space)
```
In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function.
_**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._
```
# define the optimal state-value function
V_opt = np.zeros((4,12))
V_opt[0][0:12] = -np.arange(3, 15)[::-1]
V_opt[1][0:12] = -np.arange(3, 15)[::-1] + 1
V_opt[2][0:12] = -np.arange(3, 15)[::-1] + 2
V_opt[3][0] = -13
plot_values(V_opt)
```
### Part 1: TD Control: Sarsa
In this section, you will write your own implementation of the Sarsa control algorithm.
Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).
The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
Please complete the function in the code cell below.
(_Feel free to define additional functions to help you to organize your code._)
```
def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None):
"""Returns updated Q-value for the most recent experience."""
current = Q[state][action] # estimate in Q-table (for current state, action pair)
# get value of state, action pair at next time step
Qsa_next = Q[next_state][next_action] if next_state is not None else 0
target = reward + (gamma * Qsa_next) # construct TD target
new_value = current + (alpha * (target - current)) # get updated value
return new_value
def epsilon_greedy(Q, state, nA, eps):
"""Selects epsilon-greedy action for supplied state.
Params
======
Q (dictionary): action-value function
state (int): current state
nA (int): number actions in the environment
eps (float): epsilon
"""
if random.random() > eps: # select greedy action with probability 1 - epsilon
return np.argmax(Q[state])
else: # otherwise, select an action randomly
return random.choice(np.arange(nA))
def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100):
nA = env.action_space.n # number of actions
Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays
# monitor performance
tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores
avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 100 == 0:
print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
sys.stdout.flush()
score = 0 # initialize score
state = env.reset() # start episode
eps = 1.0 / i_episode # set value of epsilon
action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection
while True:
next_state, reward, done, info = env.step(action) # take action A, observe R, S'
score += reward # add reward to agent's score
if not done:
next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action
Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \
state, action, reward, next_state, next_action)
state = next_state # S <- S'
action = next_action # A <- A'
if done:
Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \
state, action, reward)
tmp_scores.append(score) # append score
break
if (i_episode % plot_every == 0):
avg_scores.append(np.mean(tmp_scores))
# plot performance
plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores))
plt.xlabel('Episode Number')
plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)
plt.show()
# print best 100-episode performance
print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores))
return Q
```
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function.
If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
```
# obtain the estimated optimal policy and corresponding action-value function
Q_sarsa = sarsa(env, 5000, .01)
# print the estimated optimal policy
policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12)
check_test.run_check('td_control_check', policy_sarsa)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_sarsa)
# plot the estimated optimal state-value function
V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)])
plot_values(V_sarsa)
```
### Part 2: TD Control: Q-learning
In this section, you will write your own implementation of the Q-learning control algorithm.
Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).
The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
Please complete the function in the code cell below.
(_Feel free to define additional functions to help you to organize your code._)
```
def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None):
"""Returns updated Q-value for the most recent experience."""
current = Q[state][action] # estimate in Q-table (for current state, action pair)
Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 # value of next state
target = reward + (gamma * Qsa_next) # construct TD target
new_value = current + (alpha * (target - current)) # get updated value
return new_value
def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100):
"""Q-Learning - TD Control
Params
======
num_episodes (int): number of episodes to run the algorithm
alpha (float): learning rate
gamma (float): discount factor
plot_every (int): number of episodes to use when calculating average score
"""
nA = env.action_space.n # number of actions
Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays
# monitor performance
tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores
avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 100 == 0:
print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
sys.stdout.flush()
score = 0 # initialize score
state = env.reset() # start episode
eps = 1.0 / i_episode # set value of epsilon
while True:
action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection
next_state, reward, done, info = env.step(action) # take action A, observe R, S'
score += reward # add reward to agent's score
Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, \
state, action, reward, next_state)
state = next_state # S <- S'
if done:
tmp_scores.append(score) # append score
break
if (i_episode % plot_every == 0):
avg_scores.append(np.mean(tmp_scores))
# plot performance
plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores))
plt.xlabel('Episode Number')
plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)
plt.show()
# print best 100-episode performance
print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores))
return Q
```
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function.
If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
```
# obtain the estimated optimal policy and corresponding action-value function
Q_sarsamax = q_learning(env, 5000, .01)
# print the estimated optimal policy
policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12))
check_test.run_check('td_control_check', policy_sarsamax)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_sarsamax)
# plot the estimated optimal state-value function
plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)])
```
### Part 3: TD Control: Expected Sarsa
In this section, you will write your own implementation of the Expected Sarsa control algorithm.
Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).
The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
Please complete the function in the code cell below.
(_Feel free to define additional functions to help you to organize your code._)
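The key difference from Sarsa is the TD target: instead of the sampled next action's value, Expected Sarsa uses the expectation of `Q[next_state]` under the ε-greedy policy. A small sketch of that expectation on a hypothetical Q-row:

```python
import numpy as np

nA, eps = 4, 0.1
q_row = np.array([1.0, 3.0, 2.0, 0.0])  # hypothetical Q[next_state]

# epsilon-greedy action probabilities for the next state
policy = np.ones(nA) * eps / nA
policy[np.argmax(q_row)] = 1 - eps + eps / nA

# expected value of the next state under that policy
expected_q = np.dot(q_row, policy)
print(expected_q)  # 0.025*1 + 0.925*3 + 0.025*2 + 0.025*0 = 2.85
```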
```
def update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state=None):
"""Returns updated Q-value for the most recent experience."""
current = Q[state][action] # estimate in Q-table (for current state, action pair)
policy_s = np.ones(nA) * eps / nA # current policy (for next state S')
policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) # greedy action
Qsa_next = np.dot(Q[next_state], policy_s) # get value of state at next time step
target = reward + (gamma * Qsa_next) # construct target
new_value = current + (alpha * (target - current)) # get updated value
return new_value
def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100):
"""Expected SARSA - TD Control
Params
======
num_episodes (int): number of episodes to run the algorithm
alpha (float): step-size parameters for the update step
gamma (float): discount factor
plot_every (int): number of episodes to use when calculating average score
"""
nA = env.action_space.n # number of actions
Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays
# monitor performance
tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores
avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 100 == 0:
print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
sys.stdout.flush()
score = 0 # initialize score
state = env.reset() # start episode
eps = 0.005 # set value of epsilon
while True:
action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection
next_state, reward, done, info = env.step(action) # take action A, observe R, S'
score += reward # add reward to agent's score
# update Q
Q[state][action] = update_Q_expsarsa(alpha, gamma, nA, eps, Q, \
state, action, reward, next_state)
state = next_state # S <- S'
if done:
tmp_scores.append(score) # append score
break
if (i_episode % plot_every == 0):
avg_scores.append(np.mean(tmp_scores))
# plot performance
plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores))
plt.xlabel('Episode Number')
plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)
plt.show()
# print best 100-episode performance
print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores))
return Q
```
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function.
If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
```
# obtain the estimated optimal policy and corresponding action-value function
Q_expsarsa = expected_sarsa(env, 5000, 1)
# print the estimated optimal policy
policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12)
check_test.run_check('td_control_check', policy_expsarsa)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_expsarsa)
# plot the estimated optimal state-value function
plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)])
```
# Ridge Regression
## Goal
Given a dataset with continuous inputs and corresponding outputs, the objective is to find a function that matches the two as accurately as possible. This function is usually called the target function.
In the case of ridge regression, the idea is to model the target function as a linear combination of base functions (which may be, and generally are, nonlinear). Thus, with $f$ the target function, $\phi_i$ a base function and $w_i$ its weight in the linear combination, we suppose that:
$$f(x) = \sum w_i \phi_i(x)$$
The parameters that must be found are the weights $w_i$ for each base function $\phi_i$. This is done by minimizing the [root mean square error](https://en.wikipedia.org/wiki/Root-mean-square_deviation).
There is a closed-form solution to this problem, given by the equation $W = (\Phi^T \Phi)^{-1} \Phi^T Y$ with:
- $d$ the number of base functions
- $W = (w_0, ..., w_d)$ the weight vector
- $Y$ the output vector
- $\Phi(X) = (\phi_0(X)^T, \phi_1(X)^T, ..., \phi_d(X)^T)$, $\phi_0(X) = \mathbf{1}$ and $\phi_i(X) = (\phi_i(X_1), ... \phi_i(X_n))$.
(Strictly speaking, this closed form is the ordinary least-squares solution; ridge regression adds an $\ell_2$ penalty on the weights, which changes it to $W = (\Phi^T \Phi + \lambda I)^{-1} \Phi^T Y$.) If you want more details, I find that the best explanation is the one given in the book [Pattern Recognition and Machine Learning](http://research.microsoft.com/en-us/um/people/cmbishop/PRML/) by C. Bishop.
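As a minimal illustration of the closed-form solution (on a hypothetical noise-free dataset, so the weights are recovered exactly):

```python
import numpy as np

# Toy data generated by f(x) = 1 + 2x, so we expect W = [1, 2]
X = np.linspace(0, 1, 20)
Y = 1 + 2 * X

# Phi matrix with phi_0(x) = 1 and phi_1(x) = x
Phi = np.column_stack([np.ones_like(X), X])

# Closed form: W = (Phi^T Phi)^{-1} Phi^T Y
W = np.linalg.inv(Phi.T @ Phi) @ Phi.T @ Y
print(W)  # close to [1. 2.]
```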
## Implementation
The following implementation does exactly what is explained above and uses three different types of kernel:
- linear $f(x) = w_0 + w_1 x$
- polynomial $f(x) = \sum_{i=0}^d w_i x^i$ with $d$ the degree of the polynomial. Notice that $d = 1$ is the linear case.
- gaussian $f(x) = \sum w_i \exp\left(-\frac{(x - b_i)^2}{2 \sigma^2}\right)$ where $b_i$ defines the location of base function number $i$ (they are usually taken at random within the dataset) and $\sigma$ is a parameter tuning the width of the functions. Here the width is the same for all base functions, but it could be different for each of them.
The steps are:
- normalization
- building the $\Phi$ matrix
- computing the weights $W$
- plotting the found function and the dataset
```
# to display plots within the notebook
%matplotlib inline
# to define the size of the plotted images
from pylab import rcParams
rcParams['figure.figsize'] = (15, 10)
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from numpy.linalg import inv
from fct import normalize_pd
```
The X matrix corresponds to the inputs and the Y matrix to the outputs to predict.
```
data = pd.read_csv('datasets/data_regression.csv')
X = data['X']
Y = data['Y']
# Normalization
X = np.asmatrix(normalize_pd(X)).T
Y = np.asmatrix(normalize_pd(Y)).T
```
## Linear regression
Here we have $\Phi(X) = X$. The function we look for has the form $f(x) = ax + b$.
```
def linear_regression(X, Y):
# Building the Phi matrix
Ones = np.ones((X.shape[0], 1))
phi_X = np.hstack((Ones, X))
# Calculating the weights
w = np.dot(np.dot(inv(np.dot(phi_X.T, phi_X)), phi_X.T), Y)
# Predicting the output values
Y_linear_reg = np.dot(phi_X, w)
return Y_linear_reg
Y_linear_reg = linear_regression(X, Y)
plt.plot(X, Y, '.')
plt.plot(X, Y_linear_reg, 'r')
plt.title('Linear Regression')
plt.legend(['Data', 'Linear Regression'])
```
The obtained solution does not represent the data very well, because the model's representational power is too low compared to the complexity of the target function. This is usually referred to as **underfitting**.
## Polynomial Regression
Now, we approximate the target function by a polynomial $f(x) = w_0 + w_1 x + w_2 x^2 + ... + w_d x^d$ with $d$ the degree of the polynomial.
We plot the results obtained with different degrees.
```
def polynomial_regression(X, Y, degree):
# Building the Phi matrix
Ones = np.ones((X.shape[0], 1))
# Add a column of ones
phi_X = np.hstack((Ones, X))
# add a column of X elevated to all the powers from 2 to degree
for i in range(2, degree + 1):
# calculate the vector X to the power i and add it to the Phi matrix
X_power = np.array(X) ** i
phi_X = np.hstack((phi_X, np.asmatrix(X_power)))
# Calculating the weights
w = np.dot(np.dot(inv(np.dot(phi_X.T, phi_X)), phi_X.T), Y)
# Predicting the output values
Y_poly_reg = np.dot(phi_X, w)
return Y_poly_reg
# Degrees to plot: you can change these values to
# see how the degree of the polynomial affects the
# predicted function
degrees = [1, 2, 20]
legend = ['Data']
plt.plot(X, Y, '.')
for degree in degrees:
Y_poly_reg = polynomial_regression(X, Y, degree)
plt.plot(X, Y_poly_reg)
legend.append('degree ' + str(degree))
plt.legend(legend)
plt.title('Polynomial regression results depending on the degree of the polynomial used')
```
The linear case is still underfitting, but now we see that the polynomial of degree 20 is too sensitive to the data, especially around $[-2.5, -1.5]$. This phenomenon is called **overfitting**: the model starts fitting the noise in the data as well and loses its capacity to generalize.
## Regression with kernel gaussian
Lastly, we look at functions of the type $f(x) = \sum w_i \phi_i(x)$ with $\phi_i(x) = \exp\left(-\frac{(x - b_i)^2}{2\sigma^2}\right)$. $b_i$ is called the base and $\sigma$ is its width.
Usually, the $b_i$ are taken randomly within the dataset. That is what I did in the implementation, with `b` the number of bases.
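Drawing the base centres without replacement can be sketched as follows (with hypothetical data, not the dataset loaded above):

```python
import numpy as np

rng = np.random.default_rng(0)
X_vals = np.linspace(-3, 3, 50)            # hypothetical inputs
bases = rng.choice(X_vals, size=5, replace=False)
print(bases)  # 5 distinct centres drawn from the data
```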
The plot below shows both the base functions used to compute the regressed function and the regressed function itself.
```
def gaussian_regression(X, Y, b, sigma, return_base=True):
    """b is the number of bases to use, sigma is the width
    (standard deviation) of the base functions."""
    # Building the Phi matrix
    Ones = np.ones((X.shape[0], 1))
    # Add a column of ones
    phi_X = np.hstack((Ones, X))
    # Choose randomly without replacement b values from X
    # to be the centers of the base functions
    X_array = np.array(X).reshape(1, -1)[0]
    bases = np.random.choice(X_array, b, replace=False)
    bases_function = []
    for i in range(b):  # use all b bases (range(1, b) would skip one)
        base_function = np.exp(-0.5 * (((X_array - bases[i] *
                                         np.ones(len(X_array))) / sigma) ** 2))
        bases_function.append(base_function)
        phi_X = np.hstack((phi_X, np.asmatrix(base_function).T))
    # Calculating the weights
    w = np.dot(np.dot(inv(np.dot(phi_X.T, phi_X)), phi_X.T), Y)
    if return_base:
        return np.dot(phi_X, w), bases_function
    else:
        return np.dot(phi_X, w)
# By changing this value, you will change the width of the base functions
sigma = 0.2
# b is the number of base functions used
b = 5
Y_gauss_reg, bases_function = gaussian_regression(X, Y, b, sigma)
# Plotting the base functions and the dataset
plt.plot(X, Y, '.')
plt.plot(X, Y_gauss_reg)
legend = ['Data', 'Regression result']
for i, base_function in enumerate(bases_function):
plt.plot(X, base_function)
legend.append('Base function n°' + str(i))
plt.legend(legend)
plt.title('Regression with gaussian base functions')
```
We can observe that here the sigma is too small: some parts of the dataset are too far away from the bases to be taken into account.
If you change the <code>sigma</code> in the code to 0.5 and then to 1, you will notice how the output function gets closer to the data.
```
import numpy as np
import pandas as pd
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        print(os.path.join(dirname, filename))
train = pd.read_csv("/kaggle/input/30-days-of-ml/train.csv")
test = pd.read_csv("/kaggle/input/30-days-of-ml/test.csv")
sample_submission = pd.read_csv("/kaggle/input/30-days-of-ml/sample_submission.csv")
from pandas.plotting._misc import scatter_matrix
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import OrdinalEncoder
from sklearn.neighbors import KNeighborsRegressor
from mlxtend.regressor import StackingRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
%matplotlib inline
train.head()
train.isnull().sum()
s = (train.dtypes == 'object')
object_cols = list(s[s].index)
ordinal_encoder = OrdinalEncoder()
train[object_cols] = ordinal_encoder.fit_transform(train[object_cols])
train.head()
X_Data= train.drop(['target'],axis=1)
Y_Data= train['target']
x_train,x_test,y_train,y_test = train_test_split(X_Data,Y_Data,test_size=.2)
knn = KNeighborsRegressor(n_neighbors=5)
knn.fit(x_train,y_train)
predicted=knn.predict(x_test)
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, predicted)))
# Fit on the training split only; fitting on the full data and then
# scoring on x_test would leak the test rows into training
tree_clf = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_clf.fit(x_train, y_train)
tree_clf.score(x_test, y_test)
prediction = tree_clf.predict(x_test)
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, prediction)))
rnd = RandomForestRegressor(max_depth=10)
rnd.fit(x_train,y_train)
rnd.score(x_test,y_test)
prediction = rnd.predict(x_test)
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, prediction)))
dtc=DecisionTreeRegressor()
knnc=KNeighborsRegressor()
rfc=RandomForestRegressor()
stregr = StackingRegressor(regressors=[dtc,knnc,rfc],
meta_regressor=knnc)
stregr.fit(x_train,y_train)
stregr.score(x_test,y_test)
prediction = stregr.predict(x_test)
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, prediction)))
from sklearn import model_selection
train = pd.read_csv("../input/30-days-of-ml/train.csv")
test = pd.read_csv("../input/30-days-of-ml/test.csv")
print(train.shape,test.shape)
train['kfold']=-1
kfold = model_selection.KFold(n_splits=10, shuffle= True, random_state = 42)
for fold, (train_indices, valid_indices) in enumerate(kfold.split(X=train)):
    train.loc[valid_indices, 'kfold'] = fold
print(train.kfold.value_counts())
train.to_csv("trainfold_10.csv",index=False)
train = pd.read_csv("./trainfold_10.csv")
test = pd.read_csv("../input/30-days-of-ml/test.csv")
sample_submission = pd.read_csv("../input/30-days-of-ml/sample_submission.csv")
print(train.shape,test.shape)
train.sample()
from sklearn import preprocessing
final_predictions = []
score= []
useful_features = [c for c in train.columns if c not in ("id","target","kfold")]
object_cols = [col for col in useful_features if 'cat' in col]
numerical_cols = [col for col in useful_features if 'cont' in col]
test = test[useful_features]
for fold in range(10):
    xtrain = train[train.kfold != fold].reset_index(drop=True)
    xvalid = train[train.kfold == fold].reset_index(drop=True)
    xtest = test.copy()
    ytrain = xtrain.target
    yvalid = xvalid.target
    xtrain = xtrain[useful_features]
    xvalid = xvalid[useful_features]
    ordinal_encoder = OrdinalEncoder()
    xtrain[object_cols] = ordinal_encoder.fit_transform(xtrain[object_cols])
    xvalid[object_cols] = ordinal_encoder.transform(xvalid[object_cols])
    xtest[object_cols] = ordinal_encoder.transform(xtest[object_cols])
    scaler = preprocessing.StandardScaler()
    xtrain[numerical_cols] = scaler.fit_transform(xtrain[numerical_cols])
    xvalid[numerical_cols] = scaler.transform(xvalid[numerical_cols])
    xtest[numerical_cols] = scaler.transform(xtest[numerical_cols])
    xgb_params = {
        'learning_rate': 0.03628302216953097,
        'subsample': 0.7875490025178,
        'colsample_bytree': 0.11807135201147,
        'max_depth': 3,
        'booster': 'gbtree',
        'reg_lambda': 0.0008746338866473539,
        'reg_alpha': 23.13181079976304,
        'random_state': 40,
        'n_estimators': 10000
    }
    # Pass the tuned parameters to the model (they were defined but unused before)
    model = XGBRegressor(**xgb_params)
    model.fit(xtrain, ytrain, early_stopping_rounds=300, eval_set=[(xvalid, yvalid)], verbose=2000)
    preds_valid = model.predict(xvalid)
    test_pre = model.predict(xtest)
    final_predictions.append(test_pre)
    rms = mean_squared_error(yvalid, preds_valid, squared=False)
    score.append(rms)
    print(f"fold:{fold},rmse:{rms}")
print(np.mean(score),np.std(score))
preds = np.mean(np.column_stack(final_predictions),axis=1)
print(preds)
sample_submission.target = preds
sample_submission.to_csv("submission.csv",index=False)
print("success")
```
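As a toy illustration of the final averaging step above (hypothetical numbers, not tied to the competition data): each fold's model predicts for the same test rows, and `np.column_stack` followed by a row-wise mean blends them into one submission vector.

```
import numpy as np

# Two hypothetical folds, each predicting for the same two test rows
fold_predictions = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
blended = np.mean(np.column_stack(fold_predictions), axis=1)
print(blended)  # → [2. 3.]
```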
```
# HIDDEN
import matplotlib
#matplotlib.use('Agg')
path_data = '../../../data/'
from datascience import *
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import math
import scipy.stats as stats
plt.style.use('fivethirtyeight')
# HIDDEN
def standard_units(x):
    return (x - np.mean(x))/np.std(x)
# HIDDEN
# HIDDEN
def distance(pt1, pt2):
    return np.sqrt(np.sum((pt1 - pt2)**2))

def all_dists(training, p):
    attributes = training.drop('Class')
    def dist_point_row(row):
        return distance(np.array(row), p)
    return attributes.apply(dist_point_row)

def table_with_distances(training, p):
    return training.with_column('Distance', all_dists(training, p))

def closest(training, p, k):
    with_dists = table_with_distances(training, p)
    sorted_by_dist = with_dists.sort('Distance')
    topk = sorted_by_dist.take(np.arange(k))
    return topk

def majority(topkclasses):
    ones = topkclasses.where('Class', are.equal_to(1)).num_rows
    zeros = topkclasses.where('Class', are.equal_to(0)).num_rows
    if ones > zeros:
        return 1
    else:
        return 0

def classify(training, p, k):
    closestk = closest(training, p, k)
    topkclasses = closestk.select('Class')
    return majority(topkclasses)
# HIDDEN
def classify_grid(training, test, k):
    c = make_array()
    for i in range(test.num_rows):
        # Run the classifier on the ith patient in the test set
        c = np.append(c, classify(training, make_array(test.row(i)), k))
    return c
# HIDDEN
ckd = Table.read_table(path_data + 'ckd.csv').relabeled('Blood Glucose Random', 'Glucose')
ckd = Table().with_columns(
'Hemoglobin', standard_units(ckd.column('Hemoglobin')),
'Glucose', standard_units(ckd.column('Glucose')),
'White Blood Cell Count', standard_units(ckd.column('White Blood Cell Count')),
'Class', ckd.column('Class')
)
color_table = Table().with_columns(
'Class', make_array(1, 0),
'Color', make_array('darkblue', 'gold')
)
ckd = ckd.join('Class', color_table)
```
### Training and Testing ###
How good is our nearest neighbor classifier? To answer this we'll need to find out how frequently our classifications are correct. If a patient has chronic kidney disease, how likely is our classifier to pick that up?
If the patient is in our training set, we can find out immediately. We already know what class the patient is in. So we can just compare our prediction and the patient's true class.
But the point of the classifier is to make predictions for *new* patients not in our training set. We don't know what class these patients are in but we can make a prediction based on our classifier. How to find out whether the prediction is correct?
One way is to wait for further medical tests on the patient and then check whether or not our prediction agrees with the test results. With that approach, by the time we can say how likely our prediction is to be accurate, it is no longer useful for helping the patient.
Instead, we will try our classifier on some patients whose true classes are known. Then, we will compute the proportion of the time our classifier was correct. This proportion will serve as an estimate of the proportion of all new patients whose class our classifier will accurately predict. This is called *testing*.
### Overly Optimistic "Testing" ###
The training set offers a very tempting set of patients on whom to test out our classifier, because we know the class of each patient in the training set.
But let's be careful ... there will be pitfalls ahead if we take this path. An example will show us why.
Suppose we use a 1-nearest neighbor classifier to predict whether a patient has chronic kidney disease, based on glucose and white blood cell count.
```
ckd.scatter('White Blood Cell Count', 'Glucose', colors='Color')
```
Earlier, we said that we expect to get some classifications wrong, because there's some intermingling of blue and gold points in the lower-left.
But what about the points in the training set, that is, the points already on the scatter? Will we ever mis-classify them?
The answer is no. Remember that 1-nearest neighbor classification looks for the point *in the training set* that is nearest to the point being classified. Well, if the point being classified is already in the training set, then its nearest neighbor in the training set is itself! And therefore it will be classified as its own color, which will be correct because each point in the training set is already correctly colored.
In other words, **if we use our training set to "test" our 1-nearest neighbor classifier, the classifier will pass the test 100% of the time.**
Mission accomplished. What a great classifier!
No, not so much. A new point in the lower-left might easily be mis-classified, as we noted earlier. "100% accuracy" was a nice dream while it lasted.
The lesson of this example is *not* to use the training set to test a classifier that is based on it.
### Generating a Test Set ###
In earlier chapters, we saw that random sampling could be used to estimate the proportion of individuals in a population that met some criterion. Unfortunately, we have just seen that the training set is not like a random sample from the population of all patients, in one important respect: Our classifier guesses correctly for a higher proportion of individuals in the training set than it does for individuals in the population.
When we computed confidence intervals for numerical parameters, we wanted to have many new random samples from a population, but we only had access to a single sample. We solved that problem by taking bootstrap resamples from our sample.
We will use an analogous idea to test our classifier. We will *create two samples out of the original training set*, use one of the samples as our training set, and *the other one for testing*.
So we will have three groups of individuals:
- a training set on which we can do any amount of exploration to build our classifier;
- a separate testing set on which to try out our classifier and see what fraction of times it classifies correctly;
- the underlying population of individuals for whom we don't know the true classes; the hope is that our classifier will succeed about as well for these individuals as it did for our testing set.
How to generate the training and testing sets? You've guessed it – we'll select at random.
There are 158 individuals in `ckd`. Let's use a random half of them for training and the other half for testing. To do this, we'll shuffle all the rows, take the first 79 as the training set, and the remaining 79 for testing.
```
shuffled_ckd = ckd.sample(with_replacement=False)
training = shuffled_ckd.take(np.arange(79))
testing = shuffled_ckd.take(np.arange(79, 158))
```
Now let's construct our classifier based on the points in the training sample:
```
training.scatter('White Blood Cell Count', 'Glucose', colors='Color')
plt.xlim(-2, 6)
plt.ylim(-2, 6);
```
We get the following classification regions and decision boundary:
```
# HIDDEN
x_array = make_array()
y_array = make_array()
for x in np.arange(-2, 6.1, 0.25):
    for y in np.arange(-2, 6.1, 0.25):
        x_array = np.append(x_array, x)
        y_array = np.append(y_array, y)
test_grid = Table().with_columns(
'Glucose', x_array,
'White Blood Cell Count', y_array
)
# HIDDEN
c = classify_grid(training.drop('Hemoglobin', 'Color'), test_grid, 1)
# HIDDEN
test_grid = test_grid.with_column('Class', c).join('Class', color_table)
test_grid.scatter('White Blood Cell Count', 'Glucose', colors='Color', alpha=0.4, s=30)
plt.xlim(-2, 6)
plt.ylim(-2, 6);
```
Place the *test* data on this graph and you can see at once that while the classifier got almost all the points right, there are some mistakes. For example, some blue points of the test set fall in the gold region of the classifier.
```
# HIDDEN
test_grid = test_grid.with_column('Class', c).join('Class', color_table)
test_grid.scatter('White Blood Cell Count', 'Glucose', colors='Color', alpha=0.4, s=30)
plt.scatter(testing.column('White Blood Cell Count'), testing.column('Glucose'), c=testing.column('Color'), edgecolor='k')
plt.xlim(-2, 6)
plt.ylim(-2, 6);
```
Some errors notwithstanding, it looks like the classifier does fairly well on the test set. Assuming that the original sample was drawn randomly from the underlying population, the hope is that the classifier will perform with similar accuracy on the overall population, since the test set was chosen randomly from the original sample.
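The proportion-of-correct-classifications idea can also be sketched in plain NumPy, independently of the `datascience` Table machinery used above (the data here is a hypothetical pair of well-separated clusters): "testing" on the training set itself always yields 100% for 1-nearest neighbor, while the held-out set gives the honest estimate.

```
import numpy as np

def one_nn_predict(train_X, train_y, p):
    # Label of the single closest training point (1-nearest neighbor)
    dists = np.sqrt(((train_X - p) ** 2).sum(axis=1))
    return train_y[np.argmin(dists)]

def accuracy(train_X, train_y, test_X, test_y):
    preds = np.array([one_nn_predict(train_X, train_y, p) for p in test_X])
    return float(np.mean(preds == test_y))

# Hypothetical, well-separated two-class data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
train_X, train_y = X[::2], y[::2]
test_X, test_y = X[1::2], y[1::2]

print(accuracy(train_X, train_y, train_X, train_y))  # → 1.0 (each point is its own nearest neighbor)
print(accuracy(train_X, train_y, test_X, test_y))    # honest test-set accuracy
```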
```
%matplotlib inline
```
GroupLasso for linear regression with dummy variables
=====================================================
A sample script for group lasso with dummy variables
Setup
-----
```
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from group_lasso import GroupLasso
from group_lasso.utils import extract_ohe_groups
np.random.seed(42)
GroupLasso.LOG_LOSSES = True
```
Set dataset parameters
----------------------
```
num_categories = 30
min_options = 2
max_options = 10
num_datapoints = 10000
noise_std = 1
```
Generate data matrix
--------------------
```
X_cat = np.empty((num_datapoints, num_categories))
for i in range(num_categories):
    X_cat[:, i] = np.random.randint(min_options, max_options, num_datapoints)
ohe = OneHotEncoder()
X = ohe.fit_transform(X_cat)
groups = extract_ohe_groups(ohe)
group_sizes = [np.sum(groups == g) for g in np.unique(groups)]
active_groups = [np.random.randint(0, 2) for _ in np.unique(groups)]
```
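As a toy illustration of what the `groups` vector encodes (hypothetical group sizes, not the ones generated above): every one-hot column that came from the same categorical variable shares a group id, and group lasso keeps or zeroes out each group as a unit.

```
import numpy as np

# Hypothetical: two categorical features, one-hot encoded into 3 and 2 columns
groups = np.array([0, 0, 0, 1, 1])
group_sizes = [int(np.sum(groups == g)) for g in np.unique(groups)]
print(group_sizes)  # → [3, 2]
```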
Generate coefficients
---------------------
```
w = np.concatenate(
[
np.random.standard_normal(group_size) * is_active
for group_size, is_active in zip(group_sizes, active_groups)
]
)
w = w.reshape(-1, 1)
true_coefficient_mask = w != 0
intercept = 2
```
Generate regression targets
---------------------------
```
y_true = X @ w + intercept
y = y_true + np.random.randn(*y_true.shape) * noise_std
```
View noisy data and compute maximum R^2
---------------------------------------
```
plt.figure()
plt.plot(y, y_true, ".")
plt.xlabel("Noisy targets")
plt.ylabel("Noise-free targets")
# Use noisy y as true because that is what we would have access
# to in a real-life setting.
R2_best = r2_score(y, y_true)
```
Generate pipeline and train it
------------------------------
```
pipe = Pipeline(
memory=None,
steps=[
(
"variable_selection",
GroupLasso(
groups=groups,
group_reg=0.1,
l1_reg=0,
scale_reg=None,
supress_warning=True,
n_iter=100000,
frobenius_lipschitz=False,
),
),
("regressor", Ridge(alpha=1)),
],
)
pipe.fit(X, y)
```
Extract results and compute performance metrics
-----------------------------------------------
```
# Extract from pipeline
yhat = pipe.predict(X)
sparsity_mask = pipe["variable_selection"].sparsity_mask_
coef = pipe["regressor"].coef_.T
# Construct full coefficient vector
w_hat = np.zeros_like(w)
w_hat[sparsity_mask] = coef
R2 = r2_score(y, yhat)
# Print performance metrics
print(f"Number of variables: {len(sparsity_mask)}")
print(f"Number of chosen variables: {sparsity_mask.sum()}")
print(f"R^2: {R2}, best possible R^2 = {R2_best}")
```
Visualise regression coefficients
---------------------------------
```
for i in range(w.shape[1]):
    plt.figure()
    plt.plot(w[:, i], ".", label="True weights")
    plt.plot(w_hat[:, i], ".", label="Estimated weights")
plt.figure()
plt.plot([w.min(), w.max()], [coef.min(), coef.max()], "gray")
plt.scatter(w, w_hat, s=10)
plt.ylabel("Learned coefficients")
plt.xlabel("True coefficients")
plt.show()
```
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import sys
import shutil
sys.path.append('../code/')
sys.path.append('../python/')
from pprint import pprint
from os import path
import scipy
import os
from matplotlib import pyplot as plt
from tqdm import tqdm
from argparse import Namespace
import pickle
import seaborn as sns
import torchvision
import torchvision.transforms as transforms
from sklearn.model_selection import train_test_split
# import seaborn as sns
import numpy as np
# import pandas as pd
import scipy
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
from metrics import ranking
# from sh import sh
import data
def get_numpy_data(dataloader):
    x, y = [], []
    for batch_x, batch_y in tqdm(iter(dataloader)):
        x.append(batch_x.numpy())
        y.append(batch_y.numpy())
    x = np.vstack(x)
    y = np.concatenate(y)
    return x, y

def create_hashgan_train_test(x, y, db_size, query_size):
    train_x, query_x, train_y, query_y = train_test_split(x, y, test_size=query_size, stratify=y)
    train_x, db_x, train_y, db_y = train_test_split(train_x, train_y, test_size=db_size, stratify=train_y)
    return train_x, train_y, query_x, query_y, db_x, db_y

def create_train_test(x, y, query_size):
    """Train and DB are using the same dataset: gallery"""
    train_x, query_x, train_y, query_y = train_test_split(x, y, test_size=query_size, stratify=y)
    return train_x, train_y, query_x, query_y, train_x, train_y

def get_cifar10_data(image_size, batch_size, dataroot='../data/', workers=2, data_transforms=None):
    if data_transforms is None:
        data_transforms = transforms.Compose([
            transforms.Resize(image_size),
            transforms.ToTensor()
            # transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
        ])
    train_dataset = dset.CIFAR10(root=dataroot, download=True, train=True, transform=data_transforms)
    train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
                                                   shuffle=False, num_workers=workers)
    test_dataset = dset.CIFAR10(root=dataroot, download=True, train=False, transform=data_transforms)
    test_dataloader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size,
                                                  shuffle=False, num_workers=workers)
    return train_dataloader, test_dataloader
def get_places365_dataloaders(image_size, batch_size, dataroot, workers=2, data_transforms=None):
    if data_transforms is None:
        data_transforms = transforms.Compose([
            transforms.Resize(image_size),
            transforms.ToTensor()
        ])
    train_dataloader = torch.utils.data.DataLoader(dset.ImageFolder(
        root=path.join(dataroot, 'train'),
        transform=data_transforms
    ),
        batch_size=batch_size, shuffle=False, num_workers=workers)
    valid_dataloader = torch.utils.data.DataLoader(dset.ImageFolder(
        root=path.join(dataroot, 'val'),
        transform=data_transforms
    ),
        batch_size=batch_size, shuffle=False, num_workers=workers)
    return train_dataloader, valid_dataloader
def get_mnist_data(image_size, batch_size, dataroot='../data/', workers=2, data_transforms=None):
    if data_transforms is None:
        data_transforms = transforms.Compose([
            transforms.Resize(image_size),
            transforms.ToTensor(),
            transforms.Normalize((0.5, ), (0.5, )),
        ])
    train_dataset = dset.MNIST(root=dataroot, download=True, train=True, transform=data_transforms)
    train_x, train_y = get_numpy_data(torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
                                                                  shuffle=False, num_workers=workers))
    test_dataset = dset.MNIST(root=dataroot, download=True, train=False, transform=data_transforms)
    test_x, test_y = get_numpy_data(torch.utils.data.DataLoader(test_dataset, batch_size=batch_size,
                                                                shuffle=False, num_workers=workers))
    x = np.vstack([train_x, test_x])
    y = np.concatenate([train_y, test_y])
    return x, y

def get_mnist_3c_data(image_size, batch_size, dataroot='../data/', workers=2, data_transforms=None):
    if data_transforms is None:
        data_transforms = transforms.Compose([
            transforms.Resize(image_size),
            transforms.Grayscale(3),
            transforms.ToTensor(),
            transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
        ])
    train_dataset = dset.MNIST(root=dataroot, download=True, train=True, transform=data_transforms)
    train_x, train_y = get_numpy_data(torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
                                                                  shuffle=False, num_workers=workers))
    test_dataset = dset.MNIST(root=dataroot, download=True, train=False, transform=data_transforms)
    test_x, test_y = get_numpy_data(torch.utils.data.DataLoader(test_dataset, batch_size=batch_size,
                                                                shuffle=False, num_workers=workers))
    x = np.vstack([train_x, test_x])
    y = np.concatenate([train_y, test_y])
    return x, y

def get_flickr_data(image_size, dataroot='../data/Flickr25K', workers=2, data_transforms=None):
    data_transforms = transforms.Compose([
        transforms.Resize(image_size),
        transforms.ToTensor(),
        transforms.Normalize((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))])
    dataset = torchvision.datasets.ImageFolder(dataroot, transform=data_transforms)
    loader = torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=True, num_workers=0)
    # The whole folder goes through a single loader, so there is no separate
    # train split here (the original referenced undefined train_x/train_y)
    x, y = get_numpy_data(loader)
    return x, y
def sample_files_from_list(basedir, file_list, n_per_class, seed, ignored_file_list=set()):
    sampled_files = {}
    permuted_indices = np.arange(len(file_list))
    print('Setting seed {}'.format(seed))
    np.random.seed(seed)
    np.random.shuffle(permuted_indices)
    selected_files = []
    for idx in tqdm(permuted_indices):
        filename = file_list[idx]
        if filename not in ignored_file_list:
            _, label, img_filename = filename.split('/')
            if label not in sampled_files:
                sampled_files[label] = []
            if len(sampled_files[label]) < n_per_class:
                sampled_files[label].append((img_filename, path.join(basedir, filename)))
                selected_files.append(filename)
    for label, img_list in sampled_files.items():
        assert len(img_list) == n_per_class
    return sampled_files, selected_files

def sample_train_db_data_from_dataloader(dataloader, num_train, num_db, seed):
    x, y = get_numpy_data(dataloader)
    assert (num_train + num_db) == x.shape[0]
    print('Setting seed {}'.format(seed))
    train_x, db_x, train_y, db_y = train_test_split(x, y, train_size=num_train, random_state=seed, stratify=y)
    return train_x, train_y, db_x, db_y

def make_dir_if_not_exist(folder):
    if not path.exists(folder):
        # print('Creating folder: {}'.format(folder))
        os.makedirs(folder)

def create_dataset_from_files(basedir, sampled_files):
    if path.exists(basedir):
        raise Exception('Directory already exists: {}'.format(basedir))
    pbar = tqdm(sampled_files.items())
    cnt = 0
    try:
        for label, img_list in pbar:
            label_dir = path.join(basedir, label)
            make_dir_if_not_exist(label_dir)
            for img_filename, img_path in img_list:
                cnt += 1
                shutil.copyfile(img_path, path.join(label_dir, img_filename))
                if cnt % 500 == 0:
                    pbar.set_postfix(file_cnt=cnt)
        pbar.set_postfix(file_cnt=cnt)
    finally:
        pbar.close()

def check_evenly_sampling(a):
    cnts = np.sum(ranking.one_hot_label(a), axis=0)
    for cnt in cnts:
        assert cnt == cnts[0]
IMAGE_SIZE = 64
```
# MNIST-3C
MNIST data with 3 channels (three identical copies of the single grayscale channel stacked together)
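A tiny NumPy sketch of that channel stacking (the dataloader below achieves the same thing with `transforms.Grayscale(3)`):

```
import numpy as np

# Hypothetical single-channel image tensor in (C, H, W) layout
img = np.random.rand(1, 28, 28)
img3 = np.repeat(img, 3, axis=0)  # three identical copies of the channel
print(img3.shape)  # → (3, 28, 28)
```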
```
all_x, all_y = get_mnist_3c_data(IMAGE_SIZE, 100, dataroot='../data/', workers=0)
dataset = 'mnist-3c'
NUM_IMAGES = all_x.shape[0]
print('Dataset: {} images'.format(NUM_IMAGES))
print('Data range: [{}, {}]'.format(all_x.min(), all_x.max()))
# DCW-AE paper
for seed, num_query in [
    (9, 10000),
    (19, 10000),
    (29, 10000),
    (39, 10000),
    (49, 10000)
]:
    num_train = num_db = NUM_IMAGES - num_query
    output_dir = '../data/{}_isize{}_seed{}'.format(dataset, IMAGE_SIZE, seed)
    print('Setting seed {}: {} train, {} query, {} db'.format(seed, num_train, num_query, num_db))
    if path.exists(output_dir):
        print('Deleting existing folder: {}'.format(output_dir))
        shutil.rmtree(output_dir)
    print('Will save in {}'.format(output_dir))
    os.makedirs(output_dir)
    train_x, query_x, train_y, query_y = train_test_split(
        all_x, all_y, train_size=num_train, random_state=seed, stratify=all_y)
    db_x, db_y = train_x, train_y
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'query')), x=query_x, y=query_y)
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'train')), x=train_x, y=train_y)
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'db')), x=db_x, y=db_y)
# This is used in DistillHash, SSDH papers
for seed, num_train, num_query in [
    (109, 5000, 10000),
    (119, 5000, 10000),
    (129, 5000, 10000),
    (139, 5000, 10000),
    (149, 5000, 10000),
]:
    num_db = NUM_IMAGES - num_train - num_query
    output_dir = '../data/{}_isize{}_seed{}'.format(dataset, IMAGE_SIZE, seed)
    print('Setting seed {}: {} train, {} query, {} db'.format(seed, num_train, num_query, num_db))
    if path.exists(output_dir):
        print('Deleting existing folder: {}'.format(output_dir))
        shutil.rmtree(output_dir)
    print('Will save in {}'.format(output_dir))
    os.makedirs(output_dir)
    train_x, query_x, train_y, query_y = train_test_split(
        all_x, all_y, train_size=num_train, random_state=seed, stratify=all_y)
    db_x, query_x, db_y, query_y = train_test_split(
        query_x, query_y, train_size=num_db, random_state=seed, stratify=query_y)
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'query')), x=query_x, y=query_y)
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'train')), x=train_x, y=train_y)
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'db')), x=db_x, y=db_y)
```
# MNIST
```
all_x, all_y = get_mnist_data(IMAGE_SIZE, 100, dataroot='../data/', workers=0)
dataset = 'mnist'
NUM_IMAGES = all_x.shape[0]
print('Dataset: {} images'.format(NUM_IMAGES))
print('Data range: [{}, {}]'.format(all_x.min(), all_x.max()))
# DCW-AE paper
for seed, num_query in [
    (9, 10000),
    (19, 10000),
    (29, 10000),
    (39, 10000),
    (49, 10000)
]:
    num_train = num_db = NUM_IMAGES - num_query
    output_dir = '../data/{}_isize{}_seed{}'.format(dataset, IMAGE_SIZE, seed)
    print('Setting seed {}: {} train, {} query, {} db'.format(seed, num_train, num_query, num_db))
    if path.exists(output_dir):
        print('Deleting existing folder: {}'.format(output_dir))
        shutil.rmtree(output_dir)
    print('Will save in {}'.format(output_dir))
    os.makedirs(output_dir)
    train_x, query_x, train_y, query_y = train_test_split(
        all_x, all_y, train_size=num_train, random_state=seed, stratify=all_y)
    db_x, db_y = train_x, train_y
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'query')), x=query_x, y=query_y)
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'train')), x=train_x, y=train_y)
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'db')), x=db_x, y=db_y)
# This is used in DistillHash, SSDH papers
for seed, num_train, num_query in [
    (109, 5000, 10000),
    (119, 5000, 10000),
    (129, 5000, 10000),
    (139, 5000, 10000),
    (149, 5000, 10000),
]:
    num_db = NUM_IMAGES - num_train - num_query
    output_dir = '../data/{}_isize{}_seed{}'.format(dataset, IMAGE_SIZE, seed)
    print('Setting seed {}: {} train, {} query, {} db'.format(seed, num_train, num_query, num_db))
    if path.exists(output_dir):
        print('Deleting existing folder: {}'.format(output_dir))
        shutil.rmtree(output_dir)
    print('Will save in {}'.format(output_dir))
    os.makedirs(output_dir)
    train_x, query_x, train_y, query_y = train_test_split(
        all_x, all_y, train_size=num_train, random_state=seed, stratify=all_y)
    db_x, query_x, db_y, query_y = train_test_split(
        query_x, query_y, train_size=num_db, random_state=seed, stratify=query_y)
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'query')), x=query_x, y=query_y)
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'train')), x=train_x, y=train_y)
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'db')), x=db_x, y=db_y)
```
# Flickr25k
```
dataset = 'flickr25k'
image_size=IMAGE_SIZE
dataroot='../data/Flickr25K/'
workers=0
data_transforms = transforms.Compose([
transforms.Resize(image_size),
transforms.CenterCrop(image_size),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
loader = torch.utils.data.DataLoader(torchvision.datasets.ImageFolder(dataroot, transform=data_transforms),
batch_size=100, shuffle=True, num_workers=0)
all_x, all_y = get_numpy_data(loader)
NUM_IMAGES = all_x.shape[0]
print('Dataset: {} images'.format(NUM_IMAGES))
print('Data range: [{}, {}]'.format(all_x.min(), all_x.max()))
# DCW-AE paper
for seed, num_query in [
    (9, 5000),
    (19, 5000),
    (29, 5000),
    (39, 5000),
    (49, 5000)
]:
    num_train = num_db = NUM_IMAGES - num_query
    output_dir = '../data/{}_isize{}_seed{}'.format(dataset, IMAGE_SIZE, seed)
    print('Setting seed {}: {} train, {} query, {} db'.format(seed, num_train, num_query, num_db))
    if path.exists(output_dir):
        print('Deleting existing folder: {}'.format(output_dir))
        shutil.rmtree(output_dir)
    print('Will save in {}'.format(output_dir))
    os.makedirs(output_dir)
    train_x, query_x, train_y, query_y = train_test_split(
        all_x, all_y, train_size=num_train, random_state=seed, stratify=all_y)
    db_x, db_y = train_x, train_y
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'query')), x=query_x, y=query_y)
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'train')), x=train_x, y=train_y)
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'db')), x=db_x, y=db_y)
```
# CIFAR-10
```
dataset = 'cifar10'
train_dataloader, query_dataloader = get_cifar10_data(IMAGE_SIZE, 100, dataroot='../data/', workers=0)
train_x, train_y = get_numpy_data(train_dataloader)
query_x, query_y = get_numpy_data(query_dataloader)
all_x = np.vstack([train_x, query_x])
all_y = np.concatenate([train_y, query_y])
NUM_IMAGES = all_x.shape[0]
print('Dataset: {} images'.format(NUM_IMAGES))
print('Data range: [{}, {}]'.format(all_x.min(), all_x.max()))
# DCW-AE paper
for seed, num_query in [
    (9, 10000),
    (19, 10000),
    (29, 10000),
    (39, 10000),
    (49, 10000)
]:
    num_train = num_db = NUM_IMAGES - num_query
    output_dir = '../data/{}_isize{}_seed{}'.format(dataset, IMAGE_SIZE, seed)
    print('Setting seed {}: {} train, {} query, {} db'.format(seed, num_train, num_query, num_db))
    if path.exists(output_dir):
        print('Deleting existing folder: {}'.format(output_dir))
        shutil.rmtree(output_dir)
    print('Will save in {}'.format(output_dir))
    os.makedirs(output_dir)
    train_x, query_x, train_y, query_y = train_test_split(
        all_x, all_y, train_size=num_train, random_state=seed, stratify=all_y)
    db_x, db_y = train_x, train_y
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'query')), x=query_x, y=query_y)
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'train')), x=train_x, y=train_y)
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'db')), x=db_x, y=db_y)
# This is used in DistillHash, SSDH papers
for seed, num_train, num_query in [
    (109, 5000, 10000),
    (119, 5000, 10000),
    (129, 5000, 10000),
    (139, 5000, 10000),
    (149, 5000, 10000),
]:
    num_db = NUM_IMAGES - num_train - num_query
    output_dir = '../data/{}_isize{}_seed{}'.format(dataset, IMAGE_SIZE, seed)
    print('Setting seed {}: {} train, {} query, {} db'.format(seed, num_train, num_query, num_db))
    if path.exists(output_dir):
        print('Deleting existing folder: {}'.format(output_dir))
        shutil.rmtree(output_dir)
    print('Will save in {}'.format(output_dir))
    os.makedirs(output_dir)
    train_x, query_x, train_y, query_y = train_test_split(
        all_x, all_y, train_size=num_train, random_state=seed, stratify=all_y)
    db_x, query_x, db_y, query_y = train_test_split(
        query_x, query_y, train_size=num_db, random_state=seed, stratify=query_y)
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'query')), x=query_x, y=query_y)
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'train')), x=train_x, y=train_y)
    np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'db')), x=db_x, y=db_y)
```
# END
|
github_jupyter
|
## Pandas
### Instructions
This assignment will be done completely inside this Jupyter notebook with answers placed in the cell provided.
All Python imports that are needed are shown.
Follow all the instructions in this notebook to complete these tasks.
Make sure the CSV data files are in the same folder as this notebook: alumni.csv, groceries.csv
```
# Imports needed to complete this assignment
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
```
### Question 1 : Import CSV file (1 Mark)
Write code to load the alumni csv dataset into a Pandas DataFrame called 'alumni'.
```
#q1 (1)
alumni = pd.read_csv('alumni.csv')
alumni
```
### Question 2 : Understand the data set (5 Marks)
Use the following pandas commands to understand the data set: a) head, b) tail, c) dtypes, d) info, e) describe
```
#a) (1)
alumni.head()
#b) (1)
alumni.tail()
#c) (1)
alumni.dtypes
#d) (1)
alumni.info()
#e) (1)
alumni.describe()
```
### Question 3 : Cleaning the data set - part A (3 Marks)
a) Use the clean_currency method below to strip commas and dollar signs from the 'Savings ($)' column and put the result into a new column called 'Savings'.
```
def clean_currency(curr):
    return float(curr.replace(",", "").replace("$", ""))
clean_currency("$66,000")
#a) (2)
savings = []
for saving in alumni["Savings ($)"]:
    savings.append(clean_currency(saving))
alumni["Savings"] = savings
alumni
```
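As an aside (not required by the assignment), the same cleaning can be done without an explicit Python loop by chaining pandas' vectorized string methods. A minimal sketch on made-up values standing in for the alumni data:

```python
import pandas as pd

# Toy stand-in for the alumni data (hypothetical values).
toy = pd.DataFrame({"Savings ($)": ["$66,000", "$1,234.50"]})

# Strip '$' and ',' in one vectorized pass, then cast to float.
toy["Savings"] = (
    toy["Savings ($)"].str.replace(r"[$,]", "", regex=True).astype(float)
)
```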
b) Uncomment 'alumni.dtypes.Savings' to check that the type change has occurred
```
#b) (1)
alumni.dtypes.Savings
```
### Question 4 : Cleaning the data set - part B (5 Marks)
a) Run the 'alumni["Gender"].value_counts()' to see the incorrect 'M' fields that need to be converted to 'Male'
```
# a) (1)
alumni["Gender"].value_counts()
```
b) Now use '.str.replace' on the 'Gender' column to convert the incorrect 'M' fields. Hint: We must use ^...$ to restrict the pattern to match the whole string.
```
# b) (1)
gender = alumni["Gender"].str.replace('(^M$)','Male', regex=True)
# b) (1)
gender.value_counts()
```
c) That didn't update the alumni["Gender"] column itself, however. You will need to assign the result back to the column, i.e. 'alumni["Gender"]=<replace command>'; show how this is done below
```
# c) (1)
alumni["Gender"] = alumni["Gender"].str.replace('(^M$)','Male', regex=True)
alumni
```
d) You can set it directly by using the df.loc command, show how this can be done by using the 'df.loc[row_indexer,col_indexer] = value' command to convert the 'M' to 'Male'
```
# d) (1)
alumni.loc[alumni['Gender'] == 'M', 'Gender'] = 'Male'  # select the column too, otherwise the whole row is overwritten
```
e) Now run the 'value_counts' for Gender again to see the correct columns - 'Male' and 'Female'
```
# e) (1)
alumni["Gender"].value_counts()
```
### Question 5 : Working with the data set (4)
a) get the median, b) mean and c) standard deviation for the 'Salary' column
```
# a)(1)
alumni["Salary"].median()
# b)(1)
alumni["Salary"].mean()
# c)(1)
alumni["Salary"].std()
```
d) identify which alumni paid more than $15000 in fees, using the 'Fee' column
```
# d) (1)
alumni[alumni["Fee"] > 15000]
```
### Question 6 : Visualise the data set (4 Marks)
a) Using the 'Diploma Type' column, plot a bar chart and show its value counts.
```
#a) (1)
value_counts = alumni['Diploma Type'].value_counts()
# Use value_counts' own index so bar labels and heights stay aligned.
plt.bar(value_counts.index, value_counts)
plt.xlabel("Diploma Type")
plt.ylabel("Count")
plt.title("Diploma count by type")
plt.xticks(rotation='vertical', size=8)
plt.show()
```
b) Now create a box plot comparison between 'Savings' and 'Salary' columns
```
#b) (1)
sns.boxplot(data=alumni[["Savings", "Salary"]], orient="h")
```
c) Generate a histogram with the 'Salary' column and use 12 bins.
```
#c) (1)
plt.hist(alumni["Salary"], bins=12, histtype='bar', rwidth=0.8)
plt.xlabel('Salary')
plt.ylabel('Counts')
plt.title('Salary Comparison')
plt.show()
```
d) Generate a scatter plot comparing 'Salary' and 'Savings' columns.
```
#d) (1)
plt.scatter(alumni["Salary"], alumni["Savings"], label='Salary vs Savings')
plt.xlabel('Salary')
plt.ylabel('Savings')
plt.title('Salary vs Savings')
plt.show()
```
### Question 7 : Contingency Table (2 Marks)
Using both the 'Marital Status' and 'Defaulted' columns, create a contingency table. Hint: crosstab
```
# Q7 (2)
pd.crosstab(alumni["Marital Status"], alumni["Defaulted"])
```
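If you also want row and column totals, `pd.crosstab` accepts a `margins=True` flag. A small sketch on made-up data shaped like the alumni columns:

```python
import pandas as pd

# Hypothetical miniature of the alumni data.
toy = pd.DataFrame({
    "Marital Status": ["Single", "Married", "Single", "Married"],
    "Defaulted": ["Yes", "No", "No", "No"],
})

# Contingency table with an 'All' row/column of totals.
table = pd.crosstab(toy["Marital Status"], toy["Defaulted"], margins=True)
```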
|
github_jupyter
|
<table>
<tr>
<td>
<center>
<font size="+1">If you haven't used BigQuery datasets on Kaggle previously, check out the <a href = "https://www.kaggle.com/rtatman/sql-scavenger-hunt-handbook/">Scavenger Hunt Handbook</a> kernel to get started.</font>
</center>
</td>
</tr>
</table>
___
## Previous days:
* [**Day 1:** SELECT, FROM & WHERE](https://www.kaggle.com/rtatman/sql-scavenger-hunt-day-1/)
* [**Day 2:** GROUP BY, HAVING & COUNT()](https://www.kaggle.com/rtatman/sql-scavenger-hunt-day-2/)
____
# ORDER BY (and Dates!)
So far in our scavenger hunt, we've learned how to use the following clauses:
SELECT ...
FROM ...
(WHERE) ...
GROUP BY ...
(HAVING) ...
We also learned how to use the COUNT() aggregate function and, if you did the optional extra credit, possibly other aggregate functions as well. (If any of this sounds unfamiliar to you, you can check out the earlier two days using the links above.)
Today we're going to learn how to change the order in which data is returned to us using the ORDER BY clause. We're also going to talk a little bit about how to work with dates in SQL, because they're sort of their own thing and can lead to headaches if you're unfamiliar with them.
### ORDER BY
___
First, let's learn how to use ORDER BY. ORDER BY is usually the last clause you'll put in your query, since you're going to want to use it to sort the results returned by the rest of your query.
We're going to be making queries against this version of the table we've been using as an example over the past few days.
> **Why would the order of a table change?** This can actually happen to active BigQuery datasets, since if your table is being added to regularly [it may be coalesced every so often and that will change the order of the data in your table](https://stackoverflow.com/questions/16854116/the-order-of-records-in-a-regularly-updated-bigquery-databaseg).
You'll notice that, unlike in earlier days, our table is no longer sorted by the ID column.
**Ordering by a numeric column**
When you ORDER BY a numeric column, by default the column will be sorted from the lowest to highest number. So this query will return the ID, Name and Animal columns, all sorted by the number in the ID column. The row with the lowest number in the ID column will be returned first.
SELECT ID, Name, Animal
FROM `bigquery-public-data.pet_records.pets`
ORDER BY ID
Visually, this looks something like this:

**Ordering by a text column**
You can also order by columns that have text in them. By default, the column you sort on will be sorted alphabetically from the beginning to the end of the alphabet.
SELECT ID, Name, Animal
FROM `bigquery-public-data.pet_records.pets`
ORDER BY Animal

**Reversing the order**
You can reverse the sort order (reverse alphabetical order for text columns or high to low for numeric columns) using the DESC argument.
> **DESC** is short for "descending", or high-to-low.
So this query will sort the selected columns by the Animal column, but the values that are last in alphabetic order will be returned first.
SELECT ID, Name, Animal
FROM `bigquery-public-data.pet_records.pets`
ORDER BY Animal DESC

### Dates
____
Finally, let's talk about dates. I'm including these because they are something that I found particularly confusing when I first learned SQL, and I ended up having to use them all. the. time.
There are two different ways that a date can be stored in BigQuery: as a DATE or as a DATETIME. Here's a quick summary:
**DATE format**
The DATE format has the year first, then the month, and then the day. It looks like this:
YYYY-[M]M-[D]D
* YYYY: Four-digit year
* [M]M: One or two digit month
* [D]D: One or two digit day
**DATETIME/TIMESTAMP format**
The DATETIME format is just like the DATE format... but with the time added at the end. (The difference between DATETIME and TIMESTAMP is that a DATETIME stores a civil date and time with no time zone attached, while a TIMESTAMP represents an absolute point in time and can be expressed in any time zone.) Both formats look like this:
YYYY-[M]M-[D]D[( |T)[H]H:[M]M:[S]S[.DDDDDD]][time zone]
* YYYY: Four-digit year
* [M]M: One or two digit month
* [D]D: One or two digit day
* ( |T): A space or a T separator
* [H]H: One or two digit hour (valid values from 00 to 23)
* [M]M: One or two digit minutes (valid values from 00 to 59)
* [S]S: One or two digit seconds (valid values from 00 to 59)
* [.DDDDDD]: Up to six fractional digits (i.e. up to microsecond precision)
* (TIMESTAMP only) [time zone]: String representing the time zone
**Getting only part of a date**
Often, though, you'll only want to look at part of a date, like the year or the day. You can do this using the EXTRACT function and specifying what part of the date you'd like to extract.
So this query will return one column with just the day of each date in the column_with_timestamp column:
SELECT EXTRACT(DAY FROM column_with_timestamp)
FROM `bigquery-public-data.imaginary_dataset.imaginary_table`
One of the nice things about SQL is that it's very smart about dates and we can ask for information beyond just extracting part of the cell. For example, this query will return one column with just the week in the year (between 1 and 53) of each date in the column_with_timestamp column:
SELECT EXTRACT(WEEK FROM column_with_timestamp)
FROM `bigquery-public-data.imaginary_dataset.imaginary_table`
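Outside of BigQuery, the same kind of extraction can be sketched in pandas with the datetime accessors. Note this is only an analogy: pandas' `isocalendar()` uses ISO week numbering, while BigQuery's WEEK follows its own convention, so the two can disagree around year boundaries.

```python
import pandas as pd

# Hypothetical timestamps standing in for column_with_timestamp.
ts = pd.to_datetime(["2015-01-15 10:30:00", "2015-07-04 23:00:00"])

days = ts.day                  # day of month, like EXTRACT(DAY FROM ...)
weeks = ts.isocalendar().week  # ISO week number, roughly EXTRACT(WEEK FROM ...)
```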
SQL has a lot of power when it comes to dates, and that lets you ask very specific questions using this information. You can find all the functions you can use with dates in BigQuery [on this page](https://cloud.google.com/bigquery/docs/reference/legacy-sql), under "Date and time functions".
## Example: Which day of the week do the most fatal motor accidents happen on?
___
Now we're ready to work through an example. Today, we're going to be using the US Traffic Fatality Records database, which contains information on traffic accidents in the US where at least one person died. (It's definitely a sad topic, but if we can understand this data and the trends in it we can use that information to help prevent additional accidents.)
First, just like yesterday, we need to get our environment set up. Since you already know how to look at schema information at this point, I'm going to let you do that on your own.
> **Important note:** Make sure that you add the BigQuery dataset you're querying to your kernel. Otherwise you'll get an error when you run your queries.
```
# import package with helper functions
import bq_helper
# create a helper object for this dataset
accidents = bq_helper.BigQueryHelper(active_project="bigquery-public-data",
                                     dataset_name="nhtsa_traffic_fatalities")
```
We're going to look at which day of the week the most fatal traffic accidents happen on. I'm going to get the count of the unique id's (in this table they're called "consecutive_number") as well as the day of the week for each accident. Then I'm going to sort my table so that the days with the most accidents are returned first.
```
# query to find out the number of accidents which
# happen on each day of the week
query = """SELECT COUNT(consecutive_number),
EXTRACT(DAYOFWEEK FROM timestamp_of_crash)
FROM `bigquery-public-data.nhtsa_traffic_fatalities.accident_2015`
GROUP BY EXTRACT(DAYOFWEEK FROM timestamp_of_crash)
ORDER BY COUNT(consecutive_number) DESC
"""
```
Now that our query is ready, let's run it (safely!) and store the results in a dataframe:
```
# the query_to_pandas_safe method will cancel the query if
# it would use too much of your quota, with the limit set
# to 1 GB by default
accidents_by_day = accidents.query_to_pandas_safe(query)
```
And that gives us a dataframe! Let's quickly plot our data to make sure that it's actually been sorted:
```
# library for plotting
import matplotlib.pyplot as plt
# make a plot to show that our data is, actually, sorted:
plt.plot(accidents_by_day.f0_)
plt.title("Number of Accidents by Rank of Day \n (Most to least dangerous)")
```
Yep, our query was, in fact, returned sorted! Now let's take a quick peek to figure out which days are the most dangerous:
```
print(accidents_by_day)
```
To map from the numbers returned for the day of the week (the second column) to the actual day, I consulted [the BigQuery documentation on the DAYOFWEEK function](https://cloud.google.com/bigquery/docs/reference/legacy-sql#dayofweek), which says that it returns "an integer between 1 (Sunday) and 7 (Saturday), inclusively". So we can tell, based on our query, that in 2015 most fatal motor accidents occur on Sunday and Saturday, while the fewest happen on Tuesday.
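To attach readable labels, a small mapping built from that documented convention (1 = Sunday through 7 = Saturday) can be applied to the result column. The column name `f1_` below is an assumption based on BigQuery's default naming of unaliased expressions:

```python
# DAYOFWEEK in BigQuery: 1 = Sunday ... 7 = Saturday.
day_names = ["Sunday", "Monday", "Tuesday", "Wednesday",
             "Thursday", "Friday", "Saturday"]
dayofweek_to_name = {i + 1: name for i, name in enumerate(day_names)}

# e.g. (assumed column name):
# accidents_by_day["day"] = accidents_by_day["f1_"].map(dayofweek_to_name)
```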
# Scavenger hunt
___
Now it's your turn! Here are the questions I would like you to get the data to answer:
* Which hours of the day do the most accidents occur during?
* Return a table that has information on how many accidents occurred in each hour of the day in 2015, sorted by the number of accidents which occurred each hour. Use either the accident_2015 or accident_2016 table for this, and the timestamp_of_crash column. (Yes, there is an hour_of_crash column, but if you use that one you won't get a chance to practice with dates. :P)
* **Hint:** You will probably want to use the [EXTRACT() function](https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#extract_1) for this.
* Which state has the most hit and runs?
* Return a table with the number of vehicles registered in each state that were involved in hit-and-run accidents, sorted by the number of hit and runs. Use either the vehicle_2015 or vehicle_2016 table for this, especially the registration_state_name and hit_and_run columns.
In order to answer these questions, you can fork this notebook by hitting the blue "Fork Notebook" at the very top of this page (you may have to scroll up). "Forking" something is making a copy of it that you can edit on your own without changing the original.
**My code begins**
**Solution to question 1**
A quick peek into the accident_2015 table
```
# Your code goes here :)
#accidents.table_schema(table_name="accident_2015") #uncomment for more info
accidents.head(table_name="accident_2015")
accidents.head(table_name="accident_2015", selected_columns=["consecutive_number", "timestamp_of_crash"])
#Which hours of the day do the most accidents occur during?
query1 = """SELECT COUNT(consecutive_number), EXTRACT(HOUR FROM timestamp_of_crash)
FROM `bigquery-public-data.nhtsa_traffic_fatalities.accident_2015`
GROUP BY EXTRACT(HOUR FROM timestamp_of_crash)
ORDER BY COUNT(consecutive_number) DESC
"""
accidents_by_hour_df = accidents.query_to_pandas_safe(query=query1)
accidents_by_hour_df.head(n=24)
```
So, the most accidents occur during the 18th hour of the day.
```
plt.plot(accidents_by_hour_df.f0_)
plt.title("Number of accidents by Rank of hour \n (Most to least dangerous)")
```
**Solution to question 2**
```
#Which state has the most hit and runs?
#accidents.table_schema(table_name="vehicle_2015") #uncomment for more info
accidents.head(table_name="vehicle_2015",
               selected_columns=["consecutive_number", "registration_state_name", "hit_and_run"],
               num_rows=30)
query2 = """SELECT COUNT(hit_and_run), registration_state_name
FROM `bigquery-public-data.nhtsa_traffic_fatalities.vehicle_2015`
WHERE hit_and_run = "Yes"
GROUP BY registration_state_name
ORDER BY COUNT(hit_and_run) DESC
"""
hit_and_run_statewise_df = accidents.query_to_pandas_safe(query=query2)
hit_and_run_statewise_df.head(len(hit_and_run_statewise_df["f0_"]))
```
California has the highest number of hit and runs (ignoring 'Unknown').
Please feel free to ask any questions you have in this notebook or in the [Q&A forums](https://www.kaggle.com/questions-and-answers)!
Also, if you want to share or get comments on your kernel, remember you need to make it public first! You can change the visibility of your kernel under the "Settings" tab, on the right half of your screen.
|
github_jupyter
|
# Trim a Binary Search Tree - SOLUTION
## Problem Statement
Given the root of a binary search tree and 2 numbers min and max, trim the tree such that all the numbers in the new tree are between min and max (inclusive). The resulting tree should still be a valid binary search tree. So, if we get this tree as input:
___

___
and we’re given **min value as 5** and **max value as 13**, then the resulting binary search tree should be:
___

___
We should remove all the nodes whose value is not between min and max.
___
## Solution
We can do this by performing a post-order traversal of the tree. We first process the left children, then right children, and finally the node itself. So we form the new tree bottom up, starting from the leaves towards the root. As a result while processing the node itself, both its left and right subtrees are valid trimmed binary search trees (may be NULL as well).
At each node we’ll return a reference based on its value, which will then be assigned to its parent’s left or right child pointer, depending on whether the current node is the left or right child of the parent. If the current node’s value is between min and max (min <= node <= max), then no action needs to be taken, so we return the reference to the node itself.

If the current node’s value is less than min, then we return the reference to its right subtree and discard the left subtree. Because if a node’s value is less than min, then its left children are definitely less than min as well, since this is a binary search tree. But its right children may or may not be less than min, so we return the reference to the right subtree. Since we’re performing a bottom-up post-order traversal, the right subtree is already a trimmed valid binary search tree (possibly NULL), and the left subtree is definitely NULL because those nodes were surely less than min and were eliminated earlier during the traversal. Remember that in post-order traversal we first process all the children of a node, and then finally the node itself.

A similar situation occurs when the node’s value is greater than max: we now return the reference to its left subtree. Because if a node’s value is greater than max, then its right children are definitely greater than max. But its left children may or may not be greater than max. So we discard the right subtree and return the reference to the already valid left subtree. The code is easier to understand:
```
def trimBST(tree, minVal, maxVal):
    if not tree:
        return None
    # Post-order: trim both subtrees first.
    tree.left = trimBST(tree.left, minVal, maxVal)
    tree.right = trimBST(tree.right, minVal, maxVal)
    # Value in range: keep this node.
    if minVal <= tree.val <= maxVal:
        return tree
    # Too small: the entire left subtree is too small as well.
    if tree.val < minVal:
        return tree.right
    # Too large: the entire right subtree is too large as well.
    if tree.val > maxVal:
        return tree.left
```
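To see the function in action, here is a small self-contained sketch. The `Node` class is hypothetical (the problem statement does not define one), and `trimBST` is repeated so the snippet runs on its own:

```python
class Node:
    """Minimal binary-tree node (assumed shape: val, left, right)."""
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def trimBST(tree, minVal, maxVal):
    if not tree:
        return None
    tree.left = trimBST(tree.left, minVal, maxVal)
    tree.right = trimBST(tree.right, minVal, maxVal)
    if minVal <= tree.val <= maxVal:
        return tree
    if tree.val < minVal:
        return tree.right
    if tree.val > maxVal:
        return tree.left

def inorder(tree):
    return inorder(tree.left) + [tree.val] + inorder(tree.right) if tree else []

# Build a small BST and trim it to the range [5, 10].
root = Node(8, Node(3, Node(1), Node(6)), Node(10, None, Node(14)))
trimmed = trimBST(root, 5, 10)
print(inorder(trimmed))  # a valid BST's in-order traversal is sorted: [6, 8, 10]
```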
The complexity of this algorithm is O(N), where N is the number of nodes in the tree, because we basically perform a post-order traversal of the tree, visiting each and every node once. This is optimal because we should visit every node at least once. This is a very elegant question that demonstrates the effectiveness of recursion in trees.
# Good Job!
|
github_jupyter
|
# Pair-wise Correlations
The purpose is to identify predictor variables strongly correlated with the sales price and with each other to get an idea of what variables could be good predictors and potential issues with collinearity.
Furthermore, Box-Cox transformations and linear combinations of variables are added where applicable or useful.
## "Housekeeping"
```
import warnings
import json
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.preprocessing import PowerTransformer
from tabulate import tabulate
from utils import (
ALL_VARIABLES,
CONTINUOUS_VARIABLES,
DISCRETE_VARIABLES,
NUMERIC_VARIABLES,
ORDINAL_VARIABLES,
TARGET_VARIABLES,
encode_ordinals,
load_clean_data,
print_column_list,
)
pd.set_option("display.max_columns", 100)
sns.set_style("white")
```
## Load the Data
Only a subset of the previously cleaned data is used in this analysis. In particular, it does not make sense to calculate correlations involving nominal variables.
Furthermore, ordinal variables are encoded as integers (with greater values indicating a higher sales price by "gut feeling"; refer to the [data documentation](https://www.amstat.org/publications/jse/v19n3/decock/DataDocumentation.txt) to see the un-encoded values) and take part in the analysis.
A `cleaned_df` DataFrame with the original data from the previous notebook is kept so as to restore the encoded ordinal labels again at the end of this notebook for correct storage.
```
cleaned_df = load_clean_data()
df = cleaned_df[NUMERIC_VARIABLES + ORDINAL_VARIABLES + TARGET_VARIABLES]
df = encode_ordinals(df)
df[NUMERIC_VARIABLES].head()
df[ORDINAL_VARIABLES].head()
```
## Linearly "dependent" Features
The "above grade (ground) living area" (= *Gr Liv Area*) can be split into 1st and 2nd floor living area plus some undefined rest.
```
assert not (
df["Gr Liv Area"]
!= (df["1st Flr SF"] + df["2nd Flr SF"] + df["Low Qual Fin SF"])
).any()
```
The various basement areas also add up.
```
assert not (
df["Total Bsmt SF"]
!= (df["BsmtFin SF 1"] + df["BsmtFin SF 2"] + df["Bsmt Unf SF"])
).any()
```
Calculate a variable for the total living area *Total SF* as this is the number communicated most often in housing ads.
```
df["Total SF"] = df["Gr Liv Area"] + df["Total Bsmt SF"]
new_variables = ["Total SF"]
CONTINUOUS_VARIABLES.append("Total SF")
```
The different porch areas are unified into a new variable *Total Porch SF*. This potentially helps make the presence of a porch in general relevant in the prediction.
```
df["Total Porch SF"] = (
df["3Ssn Porch"] + df["Enclosed Porch"] + df["Open Porch SF"]
+ df["Screen Porch"] + df["Wood Deck SF"]
)
new_variables.append("Total Porch SF")
CONTINUOUS_VARIABLES.append("Total Porch SF")
```
The various types of rooms "above grade" (i.e., *TotRms AbvGrd*, *Bedroom AbvGr*, *Kitchen AbvGr*, and *Full Bath*) do not add up (only in 29% of the cases they do). Therefore, no single unified variable can be used as a predictor.
```
round(
100
* (
df["TotRms AbvGrd"]
== (df["Bedroom AbvGr"] + df["Kitchen AbvGr"] + df["Full Bath"])
).sum()
/ df.shape[0]
)
```
Unify the number of various types of bathrooms into a single variable. Note that "half" bathrooms are counted as such.
```
df["Total Bath"] = (
df["Full Bath"] + 0.5 * df["Half Bath"]
+ df["Bsmt Full Bath"] + 0.5 * df["Bsmt Half Bath"]
)
new_variables.append("Total Bath")
DISCRETE_VARIABLES.append("Total Bath")
```
## Box-Cox Transformations
Only numeric columns with strictly positive values are eligible for a Box-Cox transformation.
```
columns = CONTINUOUS_VARIABLES + TARGET_VARIABLES
transforms = df[columns].describe().T
transforms = list(transforms[transforms['min'] > 0].index)
print_column_list(transforms)
```
A common convention is to use Box-Cox transformations only if the lambda value found (estimated with Maximum Likelihood Estimation) is in the range from -3 to +3.
Consequently, the only applicable transformations are for *SalePrice* and the new variable *Total SF*.
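For intuition, the transform applied below is (x ** lambda - 1) / lambda, which smoothly approaches log(x) as lambda goes to 0. A quick numeric sanity check, independent of the housing data:

```python
import numpy as np

def box_cox(x, lam):
    """Box-Cox transform; log(x) is the lam -> 0 limit."""
    return np.log(x) if lam == 0 else (x ** lam - 1) / lam

x = np.array([1.0, 2.0, 4.0])
identity_like = box_cox(x, 1.0)  # lambda = 1 is just x - 1
near_log = box_cox(x, 1e-8)      # a tiny lambda approximates log(x)
```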
```
# Check the Box-Cox transformations for each column separately
# to decide if the optimal lambda value is in an acceptable range.
output = []
transformed_columns = []
for column in transforms:
    X = df[[column]]  # 2D array needed!
    pt = PowerTransformer(method="box-cox", standardize=False)
    # Suppress a weird but harmless warning from scipy.
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        pt.fit(X)
    # Check if the optimal lambda is ok.
    lambda_ = pt.lambdas_[0].round(1)
    if -3 <= lambda_ <= 3:
        lambda_label = 0 if lambda_ <= 0.01 else lambda_  # to avoid -0.0
        new_column = f"{column} (box-cox-{lambda_label})"
        df[new_column] = (
            np.log(X) if lambda_ <= 0.001 else (((X ** lambda_) - 1) / lambda_)
        )
        # Track the new column in the appropriate list.
        new_variables.append(new_column)
        if column in TARGET_VARIABLES:
            TARGET_VARIABLES.append(new_column)
        else:
            CONTINUOUS_VARIABLES.append(new_column)
        # To show only the transformed columns below.
        transformed_columns.append(column)
        transformed_columns.append(new_column)
        output.append((
            f"{column}:",
            f"use lambda of {lambda_}",
        ))
    else:
        output.append((
            f"{column}:",
            f"lambda of {lambda_} not in realistic range",
        ))
print(tabulate(sorted(output), tablefmt="plain"))
df[transformed_columns].head()
```
## Correlations
The pair-wise correlations are calculated based on the type of the variables:
- **continuous** variables are assumed to be linearly related with the target and each other or not: use **Pearson's correlation coefficient**
- **discrete** (because of the low number of distinct realizations as seen in the data cleaning notebook) and **ordinal** (low number of distinct realizations as well) variables are assumed to be related in a monotonic way with the target and each other or not: use **Spearman's rank correlation coefficient**
Furthermore, for a **naive feature selection** a "rule of thumb" classification in *weak* and *strong* correlation is applied to the predictor variables. The identified variables will be used in the prediction modelling part to speed up the feature selection. A correlation between 0.33 and 0.66 is considered *weak* while a correlation above 0.66 is considered *strong* (these thresholds refer to the absolute value of the correlation). Correlations are calculated for **each** target variable (i.e., raw "SalePrice" and Box-Cox transformation thereof). Correlations below 0.1 are considered "uncorrelated".
```
strong = 0.66
weak = 0.33
uncorrelated = 0.1
```
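A toy illustration of why the two coefficients are matched to variable types as described above: for a perfectly monotonic but non-linear relationship, Spearman is exactly 1 while Pearson is not (made-up data, not the Ames set):

```python
import pandas as pd

toy = pd.DataFrame({"x": range(1, 11)})
toy["y"] = toy["x"] ** 3  # monotonic, but clearly non-linear

pearson = toy["x"].corr(toy["y"], method="pearson")
spearman = toy["x"].corr(toy["y"], method="spearman")
```

Spearman only looks at ranks, so any strictly increasing relationship scores a perfect 1; Pearson penalizes the departure from a straight line.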
Two heatmaps below (implemented in the reusable `plot_correlation` function) help visualize the correlations.
Obviously, many variables are pair-wise correlated. This could make regression coefficients *imprecise* and hard to use or interpret. At the same time, it does not lower the predictive power of a model as a whole. In contrast to the pair-wise correlations, *multi-collinearity* is not checked here.
```
def plot_correlation(data, title):
    """Visualize a correlation matrix in a nice heatmap."""
    fig, ax = plt.subplots(figsize=(12, 12))
    ax.set_title(title, fontsize=24)
    # Blank out the upper triangular part of the matrix.
    mask = np.zeros_like(data, dtype=bool)  # np.bool was removed in NumPy 1.24
    mask[np.triu_indices_from(mask)] = True
    # Use a diverging color map.
    cmap = sns.diverging_palette(240, 0, as_cmap=True)
    # Adjust the labels' font size.
    labels = data.columns
    ax.set_xticklabels(labels, fontsize=10)
    ax.set_yticklabels(labels, fontsize=10)
    # Plot it.
    sns.heatmap(
        data, vmin=-1, vmax=1, cmap=cmap, center=0, linewidths=.5,
        cbar_kws={"shrink": .5}, square=True, mask=mask, ax=ax
    )
```
### Pearson
Pearson's correlation coefficient shows a linear relationship between two variables.
```
columns = CONTINUOUS_VARIABLES + TARGET_VARIABLES
pearson = df[columns].corr(method="pearson")
plot_correlation(pearson, "Pearson's Correlation")
```
Predictors weakly or strongly correlated with a target variable are collected.
```
pearson_weakly_correlated = set()
pearson_strongly_correlated = set()
pearson_uncorrelated = set()
# Iterate over the raw and transformed target.
for target in TARGET_VARIABLES:
    corrs = pearson.loc[target].drop(TARGET_VARIABLES).abs()
    pearson_weakly_correlated |= set(corrs[(weak < corrs) & (corrs <= strong)].index)
    pearson_strongly_correlated |= set(corrs[(strong < corrs)].index)
    pearson_uncorrelated |= set(corrs[(corrs < uncorrelated)].index)
# Show that no contradiction exists between the classifications.
assert pearson_weakly_correlated & pearson_strongly_correlated == set()
assert pearson_weakly_correlated & pearson_uncorrelated == set()
```
Show the continuous variables that are weakly and strongly correlated with the sales price or uncorrelated.
```
print_column_list(pearson_uncorrelated)
print_column_list(pearson_weakly_correlated)
print_column_list(pearson_strongly_correlated)
```
### Spearman
Spearman's correlation coefficient shows an ordinal rank relationship between two variables.
```
columns = sorted(DISCRETE_VARIABLES + ORDINAL_VARIABLES) + TARGET_VARIABLES
spearman = df[columns].corr(method="spearman")
plot_correlation(spearman, "Spearman's Rank Correlation")
```
Predictors weakly or strongly correlated with a target variable are collected.
```
spearman_weakly_correlated = set()
spearman_strongly_correlated = set()
spearman_uncorrelated = set()
# Iterate over the raw and transformed target.
for target in TARGET_VARIABLES:
    corrs = spearman.loc[target].drop(TARGET_VARIABLES).abs()
    spearman_weakly_correlated |= set(corrs[(weak < corrs) & (corrs <= strong)].index)
    spearman_strongly_correlated |= set(corrs[(strong < corrs)].index)
    spearman_uncorrelated |= set(corrs[(corrs < uncorrelated)].index)
# Show that no contradiction exists between the classifications.
assert spearman_weakly_correlated & spearman_strongly_correlated == set()
assert spearman_weakly_correlated & spearman_uncorrelated == set()
```
Show the discrete and ordinal variables that are weakly and strongly correlated with the sales price or uncorrelated.
```
print_column_list(spearman_uncorrelated)
print_column_list(spearman_weakly_correlated)
print_column_list(spearman_strongly_correlated)
```
## Save the Results
### Save the weakly and strongly correlated Variables
The subset of variables that have a correlation with the house price are saved in a simple JSON file for easy re-use.
```
with open("data/correlated_variables.json", "w") as file:
    file.write(json.dumps({
        "uncorrelated": sorted(
            list(pearson_uncorrelated) + list(spearman_uncorrelated)
        ),
        "weakly_correlated": sorted(
            list(pearson_weakly_correlated) + list(spearman_weakly_correlated)
        ),
        "strongly_correlated": sorted(
            list(pearson_strongly_correlated) + list(spearman_strongly_correlated)
        ),
    }))
```
### Save the Data
Sort the new variables into the unprocessed `cleaned_df` DataFrame with the targets at the end. This "restores" the ordinal labels again for storage.
```
for column in new_variables:
    cleaned_df[column] = df[column]
for target in set(TARGET_VARIABLES) & set(new_variables):
    new_variables.remove(target)
cleaned_df = cleaned_df[sorted(ALL_VARIABLES + new_variables) + TARGET_VARIABLES]
```
In total, this notebook added three new linear combinations and two Box-Cox transformations to the previous 78 columns.
```
cleaned_df.shape
cleaned_df.head()
cleaned_df.to_csv("data/data_clean_with_transformations.csv")
```
|
github_jupyter
|
```
import os
import tarfile
from six.moves import urllib
DOWNLOAD_ROOT = 'https://raw.githubusercontent.com/ageron/handson-ml/master/'
HOUSING_PATH = 'datasets/housing'
HOUSING_URL = DOWNLOAD_ROOT + HOUSING_PATH + '/housing.tgz'
def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
    if not os.path.isdir(housing_path):
        os.makedirs(housing_path)
    tgz_path = os.path.join(housing_path, 'housing.tgz')
    urllib.request.urlretrieve(housing_url, tgz_path)
    housing_tgz = tarfile.open(tgz_path)
    housing_tgz.extractall(path=housing_path)
    housing_tgz.close()
fetch_housing_data()
import pandas as pd
def load_housing_data(housing_path=HOUSING_PATH):
    csv_path = os.path.join(housing_path, 'housing.csv')
    return pd.read_csv(csv_path)
housing = load_housing_data()
housing.head()
housing.info()
housing.describe()
%matplotlib inline
import matplotlib.pyplot as plt
housing.hist(bins=50, figsize=(20,15))
plt.show()
import numpy as np
import hashlib
def split_train_test(data, test_ratio):
shuffled_indices = np.random.permutation(len(data))
test_set_size = int(len(data)* test_ratio)
test_indices = shuffled_indices[:test_set_size]
train_indices = shuffled_indices[test_set_size:]
return data.iloc[train_indices], data.iloc[test_indices]
train_set, test_set = split_train_test(housing, 0.2)
print(len(train_set), 'train + ', len(test_set), 'test')
def test_set_check(identifier, test_ratio, hash):
return hash(np.int64(identifier)).digest()[-1] < 256 * test_ratio
def split_train_test_by_id(data, test_ratio, id_column, hash=hashlib.md5):
ids = data[id_column]
in_test_set = ids.apply(lambda id_: test_set_check(id_, test_ratio, hash))
return data.loc[~in_test_set], data.loc[in_test_set]
housing_with_id = housing.reset_index()
train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, 'index')
housing_with_id["id"] = housing["longitude"] * 1000 + housing["latitude"]
train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, 'id')
from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(housing, test_size=.2, random_state=42)
housing['income_cat'] = np.ceil(housing['median_income'] / 1.5)
housing['income_cat'].where(housing['income_cat'] < 5, 5.0, inplace=True)
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(housing, housing['income_cat']):
strat_train_set = housing.loc[train_index]
strat_test_set = housing.loc[test_index]
housing['income_cat'].value_counts() / len(housing)
for set_ in (strat_train_set, strat_test_set):  # avoid shadowing the built-in `set`
set_.drop(['income_cat'], axis=1, inplace=True)
housing = strat_train_set.copy()
housing.plot(kind='scatter', x='longitude', y='latitude', alpha=0.4,
s=housing['population']/100, label='population',
c='median_house_value', cmap=plt.get_cmap('jet'), colorbar=True,
)
plt.legend()
corr_matrix = housing.corr()
corr_matrix['median_house_value'].sort_values(ascending=False)
from pandas.plotting import scatter_matrix  # pandas.tools.plotting in pandas < 0.20
attributes = ['median_house_value', 'median_income', 'total_rooms',
'housing_median_age']
scatter_matrix(housing[attributes], figsize=(12,8))
housing.plot(kind='scatter', x='median_income', y='median_house_value', alpha=0.1)
housing['rooms_per_household'] = housing['total_rooms']/housing['households']
housing['bedrooms_per_room'] = housing['total_bedrooms']/housing['total_rooms']
housing['population_per_household'] = housing['population']/housing['households']
corr_matrix = housing.corr()
corr_matrix['median_house_value'].sort_values(ascending=False)
housing = strat_train_set.drop('median_house_value', axis=1)
housing_labels = strat_train_set['median_house_value'].copy()
housing_labels
housing.dropna(subset=['total_bedrooms'])
from sklearn.preprocessing import Imputer
imputer = Imputer(strategy='median')
housing_num = housing.drop('ocean_proximity', axis=1)
imputer.fit(housing_num)
imputer.statistics_
housing_tr = pd.DataFrame(imputer.transform(housing_num), columns=housing_num.columns)
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
housing_cat = housing['ocean_proximity']
housing_cat_encoded = encoder.fit_transform(housing_cat)
housing_cat_encoded
print(encoder.classes_)
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder()
housing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape(-1, 1))
housing_cat_1hot
```
```
from sklearn.preprocessing import LabelBinarizer
encoder = LabelBinarizer()
housing_cat_1hot = encoder.fit_transform(housing_cat)
housing_cat_1hot
```
```
from sklearn.base import BaseEstimator, TransformerMixin
rooms_ix, bedrooms_ix, population_ix, household_ix = 3, 4, 5, 6
class CombineAttributesAdder(BaseEstimator, TransformerMixin):
def __init__(self, add_bedrooms_per_room = True):
self.add_bedrooms_per_room = add_bedrooms_per_room
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
rooms_per_household = X[:, rooms_ix] / X[:, household_ix]
population_per_household = X[:, population_ix] / X[:,household_ix]
if self.add_bedrooms_per_room:
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
return np.c_[X, rooms_per_household, population_per_household,
bedrooms_per_room]
else:
return np.c_[X, rooms_per_household, population_per_household]
attr_adder = CombineAttributesAdder(add_bedrooms_per_room=False)
housing_extra_attribs = attr_adder.transform(housing.values)
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
num_pipeline = Pipeline([
('imputer', Imputer(strategy='median')),
('attribs_adder', CombineAttributesAdder()),
('std_scaler', StandardScaler()),
])
housing_num_tr = num_pipeline.fit_transform(housing_num)
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import LabelBinarizer
class DataFrameSelector(BaseEstimator, TransformerMixin):
def __init__(self, attribute_names):
self.attribute_names = attribute_names
def fit(self, X, y=None):
return self
def transform(self, X):
return X[self.attribute_names].values
num_attribs = list(housing_num)
cat_attribs = ['ocean_proximity']
num_pipeline = Pipeline([
('selector', DataFrameSelector(num_attribs)),
('imputer', Imputer(strategy='median')),
('attribs_adder', CombineAttributesAdder()),
('std_scaler', StandardScaler())
])
cat_pipeline = Pipeline([
('selector', DataFrameSelector(cat_attribs)),
('label_binarizer', LabelBinarizer()),
])
full_pipeline = FeatureUnion(transformer_list=[
('num_pipeline', num_pipeline),
('cat_pipeline', cat_pipeline),
])
housing_prepared = full_pipeline.fit_transform(housing)
housing_prepared
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)
some_data = housing.iloc[:5]
some_labels = housing_labels.iloc[:5]
some_data_prepared = full_pipeline.transform(some_data)
print('Predictions:\t', lin_reg.predict(some_data_prepared))
print('Labels: \t', list(some_labels))
from sklearn.metrics import mean_squared_error
housing_predicts = lin_reg.predict(housing_prepared)
lin_mse = mean_squared_error(housing_labels, housing_predicts)
lin_rmse = np.sqrt(lin_mse)
lin_rmse
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor()
tree_reg.fit(housing_prepared, housing_labels)
housing_predictions = tree_reg.predict(housing_prepared)
tree_mse = mean_squared_error(housing_labels, housing_predictions)
tree_rmse = np.sqrt(tree_mse)
tree_rmse
from sklearn.model_selection import cross_val_score
scores = cross_val_score(tree_reg, housing_prepared, housing_labels,
scoring='neg_mean_squared_error', cv=10)
rmse_scores = np.sqrt(-scores)
def display_scores(scores):
print('Scores:', scores)
print('Mean:', scores.mean())
print('Standard Deviation:', scores.std())
display_scores(rmse_scores)
lin_scores = cross_val_score(lin_reg, housing_prepared, housing_labels,
scoring='neg_mean_squared_error', cv=10)
lin_rmse_scores = np.sqrt(-lin_scores)
display_scores(lin_rmse_scores)
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor()
forest_reg.fit(housing_prepared, housing_labels)
forest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels,
scoring='neg_mean_squared_error', cv=10)
forest_rmse_scores = np.sqrt(-forest_scores)
forest_rmse_scores
display_scores(forest_rmse_scores)
```
|
github_jupyter
|
# This notebook shows an example where a set of electrodes is selected from a dataset, LFP is extracted from those electrodes, and the result is written to a new NWB file
```
import pynwb
import os
#DataJoint and DataJoint schema
import datajoint as dj
## We also import a bunch of tables so that we can call them easily
from nwb_datajoint.common import (RawPosition, HeadDir, Speed, LinPos, StateScriptFile, VideoFile,
DataAcquisitionDevice, CameraDevice, Probe,
DIOEvents,
ElectrodeGroup, Electrode, Raw, SampleCount,
LFPSelection, LFP, LFPBandSelection, LFPBand,
SortGroup, SpikeSorting, SpikeSorter, SpikeSorterParameters, SpikeSortingWaveformParameters, SpikeSortingParameters, SpikeSortingMetrics, CuratedSpikeSorting,\
FirFilter,
IntervalList, SortInterval,
Lab, LabMember, LabTeam, Institution,
BrainRegion,
SensorData,
Session, ExperimenterList,
Subject,
Task, TaskEpoch,
Nwbfile, AnalysisNwbfile, NwbfileKachery, AnalysisNwbfileKachery,
interval_list_contains,
interval_list_contains_ind,
interval_list_excludes,
interval_list_excludes_ind,
interval_list_intersect,
get_electrode_indices)
import warnings
warnings.simplefilter('ignore', category=DeprecationWarning)
warnings.simplefilter('ignore', category=ResourceWarning)
```
#### Next we select the NWB file, which corresponds to the dataset we want to extract LFP from
```
nwb_file_names = Nwbfile().fetch('nwb_file_name')
# take the first one for this demonstration
nwb_file_name = nwb_file_names[0]
print(nwb_file_name)
```
#### Create the standard LFP Filters. This only needs to be done once.
```
FirFilter().create_standard_filters()
```
#### Now we select every 128th electrode for LFP or, below, a specific set of electrodes. Choose one.
Note that this will delete the current selection and all downstream LFP and LFPBand information (if it exists), but only for the current dataset. This is fine if you want to generate or regenerate the LFP.
```
electrode_ids = (Electrode & {'nwb_file_name' : nwb_file_name}).fetch('electrode_id')
lfp_electrode_ids = electrode_ids[range(0, len(electrode_ids), 128)]
LFPSelection().set_lfp_electrodes(nwb_file_name, lfp_electrode_ids.tolist())
LFPSelection().LFPElectrode() & {'nwb_file_name' : nwb_file_name}
```
### Or select one electrode for LFP
```
LFPSelection().set_lfp_electrodes(nwb_file_name, [0, 1])
LFPSelection().LFPElectrode() & {'nwb_file_name':nwb_file_name}
```
### Populate the LFP table. Note that this takes 2 hours or so on a laptop if you use all electrodes
```
LFP().populate([LFPSelection & {'nwb_file_name':nwb_file_name}])
```
### Now that we've created the LFP object we can perform a second level of filtering for a band of interest, in this case the theta band
We first need to create the filter
```
lfp_sampling_rate = (LFP() & {'nwb_file_name' : nwb_file_name}).fetch1('lfp_sampling_rate')
filter_name = 'Theta 5-11 Hz'
FirFilter().add_filter(filter_name, lfp_sampling_rate, 'bandpass', [4, 5, 11, 12], 'theta filter for 1 kHz data')
FirFilter()
```
Next we add an entry for the LFP Band and the electrodes we want to filter
```
# assume that we've filtered these electrodes; change this if not
lfp_band_electrode_ids = [1]
# set the interval list name corresponding to the second epoch (a run session)
interval_list_name = '02_r1'
# set the reference to -1 to indicate no reference for all channels
ref_elect = [-1]
# desired sampling rate
lfp_band_sampling_rate = 100
LFPBandSelection().set_lfp_band_electrodes(nwb_file_name, lfp_band_electrode_ids, filter_name, interval_list_name, ref_elect, lfp_band_sampling_rate)
```
Check to make sure it worked
```
(LFPBandSelection() & {'nwb_file_name' : nwb_file_name})
LFPBand().populate(LFPBandSelection() & {'nwb_file_name' : nwb_file_name})
LFPBand()
```
### Now we can plot the original signal, the LFP filtered trace, and the theta filtered trace together.
Much of the code below could be replaced by function calls that return the data from each electrical series
```
import matplotlib.pyplot as plt
import numpy as np
# get the three electrical series objects and the indices of the electrodes we band-pass filtered
orig_eseries = (Raw() & {'nwb_file_name' : nwb_file_name}).fetch_nwb()[0]['raw']
orig_elect_indices = get_electrode_indices(orig_eseries, lfp_band_electrode_ids)
lfp_eseries = (LFP() & {'nwb_file_name' : nwb_file_name}).fetch_nwb()[0]['lfp']
lfp_elect_indices = get_electrode_indices(lfp_eseries, lfp_band_electrode_ids)
lfp_band_eseries = (LFPBand() & {'nwb_file_name' : nwb_file_name}).fetch_nwb()[0]['filtered_data']
lfp_band_elect_indices = get_electrode_indices(lfp_band_eseries, lfp_band_electrode_ids)
# get a list of times for the first run epoch and then select a 1 second interval about 100 seconds from the beginning
run1times = (IntervalList & {'nwb_file_name': nwb_file_name, 'interval_list_name' : '02_r1'}).fetch1('valid_times')
plottimes = [run1times[0][0] + 101, run1times[0][0] + 102]
# get the time indices for each dataset
orig_time_ind = np.argwhere(np.logical_and(orig_eseries.timestamps > plottimes[0], orig_eseries.timestamps < plottimes[1]))
lfp_time_ind = np.argwhere(np.logical_and(lfp_eseries.timestamps > plottimes[0], lfp_eseries.timestamps < plottimes[1]))
lfp_band_time_ind = np.argwhere(np.logical_and(lfp_band_eseries.timestamps > plottimes[0], lfp_band_eseries.timestamps < plottimes[1]))
plt.plot(orig_eseries.timestamps[orig_time_ind], orig_eseries.data[orig_time_ind,orig_elect_indices[0]], 'k-')
plt.plot(lfp_eseries.timestamps[lfp_time_ind], lfp_eseries.data[lfp_time_ind,lfp_elect_indices[0]], 'b-')
plt.plot(lfp_band_eseries.timestamps[lfp_band_time_ind], lfp_band_eseries.data[lfp_band_time_ind,lfp_band_elect_indices[0]], 'r-')
plt.xlabel('Time (sec)')
plt.ylabel('Amplitude (AD units)')
plt.show()
```
|
github_jupyter
|
# Train a model using Watson Studio and deploy it in Watson Machine Learning
This notebook will show how to use your annotated images from Cloud Annotations to train an Object Detection model using a Python Notebook in Watson Studio. After training and testing, some extra steps will show how to deploy this model in Watson Machine Learning as an online API. You can use this API from any application afterwards.
As a suggestion, you can use this dataset from Kaggle to test Cloud Annotations and this notebook: https://www.kaggle.com/issaisasank/guns-object-detection
### Specify the credentials for the bucket you used in Cloud Annotations
```
# credentials = {
# 'BUCKET': '$$$BUCKET$$$',
# 'IBM_API_KEY_ID': '$$$IBM_API_KEY_ID$$$',
# 'IAM_SERVICE_ID': '$$$IAM_SERVICE_ID$$$',
# 'ENDPOINT': '$$$ENDPOINT$$$',
# }
# replace the placeholders below with your own service credentials
credentials = {
'IAM_SERVICE_ID': '<your-iam-service-id>',
'IBM_API_KEY_ID': '<your-api-key>',
'ENDPOINT': 'https://s3.us.cloud-object-storage.appdomain.cloud',
'IBM_AUTH_ENDPOINT': 'https://iam.cloud.ibm.com/oidc/token',
'BUCKET': 'guns-object-detection'
}
```
# Setup
```
import os
import shutil
if os.path.exists('tmp') and os.path.isdir('tmp'):
shutil.rmtree('tmp')
CLOUD_ANNOTATIONS_DATA = os.path.join('tmp', credentials['BUCKET'])
os.makedirs(CLOUD_ANNOTATIONS_DATA, exist_ok=True)
import json
import ibm_boto3
from ibm_botocore.client import Config, ClientError
def download_file_cos(local_file_name, key):
'''
Wrapper function to download a file from cloud object storage using the
credential dict provided and loading it into memory
'''
cos = ibm_boto3.client("s3",
ibm_api_key_id=credentials['IBM_API_KEY_ID'],
ibm_service_instance_id=credentials['IAM_SERVICE_ID'],
config=Config(signature_version="oauth"),
endpoint_url=credentials['ENDPOINT']
)
try:
res=cos.download_file(Bucket=credentials['BUCKET'], Key=key, Filename=local_file_name)
except Exception as e:
print('Exception', e)
else:
print('File Downloaded')
def get_annotations():
cos = ibm_boto3.client("s3",
ibm_api_key_id=credentials['IBM_API_KEY_ID'],
ibm_service_instance_id=credentials['IAM_SERVICE_ID'],
config=Config(signature_version="oauth"),
endpoint_url=credentials['ENDPOINT']
)
try:
return json.loads(cos.get_object(Bucket=credentials['BUCKET'], Key='_annotations.json')['Body'].read())
except Exception as e:
print('Exception', e)
annotations = get_annotations()
download_file_cos(os.path.join(CLOUD_ANNOTATIONS_DATA, '_annotations.json'), '_annotations.json')
for image in annotations['annotations'].keys():
local_path = os.path.join(CLOUD_ANNOTATIONS_DATA, image)
download_file_cos(local_path, image)
NUM_TRAIN_STEPS = 500
MODEL_TYPE = 'ssd_mobilenet_v1_quantized_300x300_coco14_sync_2018_07_18'
CONFIG_TYPE = 'ssd_mobilenet_v1_quantized_300x300_coco14_sync'
import os
CLOUD_ANNOTATIONS_MOUNT = os.path.join('tmp', credentials['BUCKET'])
ANNOTATIONS_JSON_PATH = os.path.join(CLOUD_ANNOTATIONS_MOUNT, '_annotations.json')
CHECKPOINT_PATH = 'tmp/checkpoint'
OUTPUT_PATH = 'tmp/output'
EXPORTED_PATH = 'tmp/exported'
DATA_PATH = 'tmp/data'
LABEL_MAP_PATH = os.path.join(DATA_PATH, 'label_map.pbtxt')
TRAIN_RECORD_PATH = os.path.join(DATA_PATH, 'train.record')
VAL_RECORD_PATH = os.path.join(DATA_PATH, 'val.record')
```
## Installing dependencies
In the next cell we will install the libraries that will be used. Since we are using older versions of TensorFlow and NumPy than the ones installed by default in your environment, we highly suggest creating a custom environment in your Watson Studio project for this notebook, using the following configuration:
``````
# Modify the following content to add a software customization to an environment.
# To remove an existing customization, delete the entire content and click Apply.
# The customizations must follow the format of a conda environment yml file.
# Add conda channels below defaults, indented by two spaces and a hyphen.
channels:
- defaults
# To add packages through conda or pip, remove the # on the following line.
dependencies:
# Add conda packages here, indented by two spaces and a hyphen.
# Remove the # on the following line and replace sample package name with your package name:
# Add pip packages here, indented by four spaces and a hyphen.
# Remove the # on the following lines and replace sample package name with your package name.
- pip:
- numpy==1.19.5
- tensorflow==1.15.2
``````
Use Python 3.7 and any CPU-only hardware configuration that you would like; this notebook was not prepared to support training using GPUs in Watson Studio. Use the next cell to install the other dependencies as normal. After creating the environment you will have to switch to it using the **Information** tab, on the right side menu.
```
import os
import sys
import pathlib
# Clone the tensorflow models repository if it doesn't already exist
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
!git clone --depth 1 https://github.com/cloud-annotations/models
# !pip uninstall Cython -y
# !pip uninstall tf_slim -y
# !pip uninstall opencv-python-headless -y
# !pip uninstall lvis -y
# !pip uninstall pycocotools -y
# !pip uninstall numpy -y
# !pip uninstall tensorflow -y
# !pip install numpy==1.19.5
# !pip install tensorflow==1.15.2
!pip install Cython
!pip install tf_slim
!pip install opencv-python-headless
!pip install lvis --no-deps
!pip install pycocotools
%cd models/research
!protoc object_detection/protos/*.proto --python_out=.
pwd = os.getcwd()
# we need to set both PYTHONPATH for shell scripts and sys.path for python cells
sys.path.append(pwd)
sys.path.append(os.path.join(pwd, 'slim'))
if 'PYTHONPATH' in os.environ:
os.environ['PYTHONPATH'] += f':{pwd}:{pwd}/slim'
else:
os.environ['PYTHONPATH'] = f'{pwd}:{pwd}/slim'  # no leading ':' when starting fresh
%cd ../..
```
## Testing Tensorflow
```
%cd models/research
!python object_detection/builders/model_builder_tf1_test.py
%cd ../..
```
# Generate a Label Map
One piece of data the Object Detection API needs is a label map protobuf. The label map associates an integer id with the text representation of the label. The ids are indexed from 1, meaning the first label will have an id of 1, not 0.
Here is an example of what a label map looks like:
````
item {
id: 1
name: 'Cat'
}
item {
id: 2
name: 'Dog'
}
item {
id: 3
name: 'Gold Fish'
}
````
```
import os
import json
# Get a list of labels from the annotations.json
labels = {}
with open(ANNOTATIONS_JSON_PATH) as f:
annotations = json.load(f)
labels = annotations['labels']
# Create a file named label_map.pbtxt
os.makedirs(DATA_PATH, exist_ok=True)
with open(LABEL_MAP_PATH, 'w') as f:
# Loop through all of the labels and write each label to the file with an id
for idx, label in enumerate(labels):
f.write('item {\n')
f.write("\tname: '{}'\n".format(label))
f.write('\tid: {}\n'.format(idx + 1)) # indexes must start at 1
f.write('}\n')
```
# Generate TFRecords
The TensorFlow Object Detection API expects our data to be in the format of TFRecords.
The TFRecord format is a collection of serialized feature dicts, one for each image, looking something like this:
````
{
'image/height': 1800,
'image/width': 2400,
'image/filename': 'image1.jpg',
'image/source_id': 'image1.jpg',
'image/encoded': ACTUAL_ENCODED_IMAGE_DATA_AS_BYTES,
'image/format': 'jpeg',
'image/object/bbox/xmin': [0.7255949630314233, 0.8845598428835489],
'image/object/bbox/xmax': [0.9695875693160814, 1.0000000000000000],
'image/object/bbox/ymin': [0.5820120073891626, 0.1829972290640394],
'image/object/bbox/ymax': [1.0000000000000000, 0.9662484605911330],
'image/object/class/text': (['Cat', 'Dog']),
'image/object/class/label': ([1, 2])
}
````
```
import os
import io
import json
import random
import PIL.Image
import tensorflow as tf
from object_detection.utils import dataset_util
from object_detection.utils import label_map_util
def create_tf_record(images, annotations, label_map, image_path, output):
# Create a train.record TFRecord file.
with tf.python_io.TFRecordWriter(output) as writer:
# Loop through all the training examples.
for image_name in images:
try:
# Make sure the image is actually a file
img_path = os.path.join(image_path, image_name)
if not os.path.isfile(img_path):
continue
# Read in the image.
with tf.gfile.GFile(img_path, 'rb') as fid:
encoded_jpg = fid.read()
# Open the image with PIL so we can check that it's a jpeg and get the image
# dimensions.
encoded_jpg_io = io.BytesIO(encoded_jpg)
image = PIL.Image.open(encoded_jpg_io)
if image.format != 'JPEG':
raise ValueError('Image format not JPEG')
width, height = image.size
# Initialize all the arrays.
xmins = []
xmaxs = []
ymins = []
ymaxs = []
classes_text = []
classes = []
# The class text is the label name and the class is the id. If there are 3
# cats in the image and 1 dog, it may look something like this:
# classes_text = ['Cat', 'Cat', 'Dog', 'Cat']
# classes = [ 1 , 1 , 2 , 1 ]
# For each image, loop through all the annotations and append their values.
for a in annotations[image_name]:
if ("x" in a and "x2" in a and "y" in a and "y2" in a):
label = a['label']
xmins.append(a["x"])
xmaxs.append(a["x2"])
ymins.append(a["y"])
ymaxs.append(a["y2"])
classes_text.append(label.encode("utf8"))
classes.append(label_map[label])
# Create the TFExample.
tf_example = tf.train.Example(features=tf.train.Features(feature={
'image/height': dataset_util.int64_feature(height),
'image/width': dataset_util.int64_feature(width),
'image/filename': dataset_util.bytes_feature(image_name.encode('utf8')),
'image/source_id': dataset_util.bytes_feature(image_name.encode('utf8')),
'image/encoded': dataset_util.bytes_feature(encoded_jpg),
'image/format': dataset_util.bytes_feature('jpeg'.encode('utf8')),
'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
'image/object/class/label': dataset_util.int64_list_feature(classes),
}))
if tf_example:
# Write the TFExample to the TFRecord.
writer.write(tf_example.SerializeToString())
except ValueError:
print('Invalid example, ignoring.')
pass
except IOError:
print("Can't read example, ignoring.")
pass
with open(ANNOTATIONS_JSON_PATH) as f:
annotations = json.load(f)['annotations']
image_files = [image for image in annotations.keys()]
# Load the label map we created.
label_map = label_map_util.get_label_map_dict(LABEL_MAP_PATH)
random.seed(42)
random.shuffle(image_files)
num_train = int(0.7 * len(image_files))
train_examples = image_files[:num_train]
val_examples = image_files[num_train:]
create_tf_record(train_examples, annotations, label_map, CLOUD_ANNOTATIONS_MOUNT, TRAIN_RECORD_PATH)
create_tf_record(val_examples, annotations, label_map, CLOUD_ANNOTATIONS_MOUNT, VAL_RECORD_PATH)
```
# Download a base model
Training a model from scratch can take days and tons of data. We can mitigate this by using a pretrained model checkpoint. Instead of starting from nothing, we can add to what was already learned with our own data.
There are several pretrained model checkpoints that can be downloaded from the model zoo.
The model we will be training is the SSD MobileNet architecture. SSD MobileNet models have a very small file size and execute very quickly while sacrificing little accuracy, which makes them perfect for running in the browser. Additionally, we will be using quantization. When we say the model is quantized, we mean that instead of using float32 as the datatype of our numbers we use float16 or int8.
````
float32(PI) = 3.1415927 32 bits
float16(PI) = 3.14 16 bits
int8(PI) = 3 8 bits
````
We do this because it can cut our model size down by around a factor of 4! An unquantized version of SSD MobileNet that I trained was 22.3 MB, but the quantized version was 5.7 MB, a ~75% reduction 🎉
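As a rough illustration of why quantization shrinks the model, here is a standalone numpy sketch of affine int8 quantization (this is illustrative only, not the Object Detection API's actual quantization scheme):

```python
import numpy as np

# pretend these are model weights
weights = np.random.randn(10_000).astype(np.float32)
print(weights.nbytes)    # 40000 bytes at 32 bits per weight

# affine int8 quantization: map the float range onto the integers 0..255
scale = (weights.max() - weights.min()) / 255.0
zero = weights.min()
quantized = np.round((weights - zero) / scale).astype(np.uint8)
print(quantized.nbytes)  # 10000 bytes, the ~4x (~75%) reduction

# dequantize to check the approximation error, which is bounded by about scale / 2
restored = quantized.astype(np.float32) * scale + zero
print(np.abs(weights - restored).max())
```

The storage shrinks by exactly the ratio of the datatypes' bit widths; the price is the small rounding error introduced when mapping each weight to its nearest quantized level.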
```
import os
import tarfile
import six.moves.urllib as urllib
download_base = 'http://download.tensorflow.org/models/object_detection/'
model = MODEL_TYPE + '.tar.gz'
tmp = 'tmp/checkpoint.tar.gz'
if not (os.path.exists(CHECKPOINT_PATH)):
# Download the checkpoint
opener = urllib.request.URLopener()
opener.retrieve(download_base + model, tmp)
# Extract all the `model.ckpt` files.
with tarfile.open(tmp) as tar:
for member in tar.getmembers():
member.name = os.path.basename(member.name)
if 'model.ckpt' in member.name:
tar.extract(member, path=CHECKPOINT_PATH)
os.remove(tmp)
```
# Model Config
The final thing we need to do is inject our pipeline config with the number of labels we have and where to find the label map, TFRecords and model checkpoint. We also need to change the batch size, because the default batch size of 128 is too large for this environment to handle.
```
#from google.protobuf import text_format
from object_detection.utils import config_util
from object_detection.utils import label_map_util
pipeline_skeleton = 'models/research/object_detection/samples/configs/' + CONFIG_TYPE + '.config'
configs = config_util.get_configs_from_pipeline_file(pipeline_skeleton)
label_map = label_map_util.get_label_map_dict(LABEL_MAP_PATH)
num_classes = len(label_map.keys())
meta_arch = configs["model"].WhichOneof("model")
override_dict = {
'model.{}.num_classes'.format(meta_arch): num_classes,
'train_config.batch_size': 24,
'train_input_path': TRAIN_RECORD_PATH,
'eval_input_path': VAL_RECORD_PATH,
'train_config.fine_tune_checkpoint': os.path.join(CHECKPOINT_PATH, 'model.ckpt'),
'label_map_path': LABEL_MAP_PATH
}
configs = config_util.merge_external_params_with_configs(configs, kwargs_dict=override_dict)
pipeline_config = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_config, DATA_PATH)
```
# Start training
We can start a training run by calling the model_main script, passing:
- The location of the pipeline.config we created
- Where we want to save the model
- How many steps we want to train the model (the longer you train, the more potential there is to learn)
- The number of evaluation steps (or how often to test the model) gives us an idea of how well the model is doing
```
!rm -rf $OUTPUT_PATH
!python -m object_detection.model_main \
--pipeline_config_path=$DATA_PATH/pipeline.config \
--model_dir=$OUTPUT_PATH \
--num_train_steps=$NUM_TRAIN_STEPS \
--num_eval_steps=100
```
# Export inference graph
After your model has been trained, you might have a few checkpoints available. A checkpoint is usually emitted every 500 training steps. Each checkpoint is a snapshot of your model at that point in training. In the event that a long running training process crashes, you can pick up at the last checkpoint instead of starting from scratch.
We need to export a checkpoint to a TensorFlow graph proto in order to actually use it. We use regex to find the checkpoint with the highest training step and export it.
```
import os
import re
import json
from object_detection.utils.label_map_util import get_label_map_dict
regex = re.compile(r"model\.ckpt-([0-9]+)\.index")
numbers = [int(regex.search(f).group(1)) for f in os.listdir(OUTPUT_PATH) if regex.search(f)]
TRAINED_CHECKPOINT_PREFIX = os.path.join(OUTPUT_PATH, 'model.ckpt-{}'.format(max(numbers)))
print(f'Using {TRAINED_CHECKPOINT_PREFIX}')
!rm -rf $EXPORTED_PATH
!python -m object_detection.export_inference_graph \
--pipeline_config_path=$DATA_PATH/pipeline.config \
--trained_checkpoint_prefix=$TRAINED_CHECKPOINT_PREFIX \
--output_directory=$EXPORTED_PATH
label_map = get_label_map_dict(LABEL_MAP_PATH)
label_array = [k for k in sorted(label_map, key=label_map.get)]
with open(os.path.join(EXPORTED_PATH, 'labels.json'), 'w') as f:
json.dump(label_array, f)
```
# Evaluating the results
In the next steps we will use the images from the evaluation set to **visualize** the results of our model. If you don't see any boxes in your images, consider increasing the number of training steps in the **Setup** section or adding more training images.
```
import os
import numpy as np
import tensorflow as tf  # needed below for tf.Graph / tf.GraphDef
from matplotlib import pyplot as plt
from PIL import Image as PImage
from object_detection.utils import visualization_utils as vis_util
from object_detection.utils import label_map_util
# Load the labels
category_index = label_map_util.create_category_index_from_labelmap(LABEL_MAP_PATH, use_display_name=True)
# Load the model
path_to_frozen_graph = os.path.join(EXPORTED_PATH, 'frozen_inference_graph.pb')
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(path_to_frozen_graph, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
bbox_images = []
for image_x in val_examples:
img_path = os.path.join(CLOUD_ANNOTATIONS_MOUNT, image_x)
with detection_graph.as_default():
with tf.Session(graph=detection_graph) as sess:
# Definite input and output Tensors for detection_graph
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represents the level of confidence for each of the objects.
# The score is shown on the result image, together with the class label.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
num_detections = detection_graph.get_tensor_by_name('num_detections:0')
image = PImage.open(img_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
(im_width, im_height) = image.size
image_np = np.array(image.getdata()).reshape((im_height, im_width, 3)).astype(np.uint8)
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
# Actual detection.
(boxes, scores, classes, num) = sess.run(
[detection_boxes, detection_scores, detection_classes, num_detections],
feed_dict={image_tensor: image_np_expanded})
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
np.squeeze(boxes),
np.squeeze(classes).astype(np.int32),
np.squeeze(scores),
category_index,
use_normalized_coordinates=True,
line_thickness=8)
bbox_images.append(image_np)
%matplotlib inline
fig = plt.figure(figsize=(50, 50)) # width, height in inches
for i,bbox_image in enumerate(bbox_images):
sub = fig.add_subplot(len(bbox_images)+1, 1, i + 1)
sub.imshow(bbox_image, interpolation='nearest')
```
### Here you can choose a different image from the array to see it in more detail
```
%matplotlib inline
plt.figure(figsize=(12, 8))
plt.imshow(bbox_images[6])
```
# Deploying your model in Watson Machine Learning
In the following steps we will export the artifacts that were created to a .tar file and upload the model to Watson Machine Learning. Then we will generate an online deployment using this model.
You will need a Watson Machine Learning instance and an IAM API Key in IBM Cloud that has access to this instance. See the steps in the documentation:
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html
Also, in the new version of WML you will need a Deployment Space and its ID:
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html?audience=wdp
```
!ls $EXPORTED_PATH/saved_model
!tar -zcvf guns-object-detection-model.tar.gz -C $EXPORTED_PATH/saved_model .
from ibm_watson_machine_learning import APIClient
wml_credentials = {
"url": "https://us-south.ml.cloud.ibm.com",
"apikey":"<apikey>"
}
client = APIClient(wml_credentials)
client.set.default_space("<deployment-space-id>")
client.software_specifications.list()
model_spec = client.software_specifications.get_id_by_name('tensorflow_1.15-py3.6')
model_meta = {
client.repository.ModelMetaNames.NAME : "Tensorflow Guns Object Detection Model",
client.repository.ModelMetaNames.DESCRIPTION : "Guns Object Detection using Kaggle Dataset",
client.repository.ModelMetaNames.TYPE : "tensorflow_1.15",
client.repository.ModelMetaNames.SOFTWARE_SPEC_UID : model_spec
}
model_details_dir = client.repository.store_model( model="guns-object-detection-model.tar.gz", meta_props=model_meta )
model_id_dir = model_details_dir["metadata"]['id']
client.hardware_specifications.list()
meta_props = {
client.deployments.ConfigurationMetaNames.NAME: "Tensorflow Guns Object Detection Deployment",
client.deployments.ConfigurationMetaNames.ONLINE: {},
client.deployments.ConfigurationMetaNames.HARDWARE_SPEC : { "id": "cf70f086-916d-4684-91a7-264c49c6d425"}
}
deployment_details_dir = client.deployments.create(model_id_dir, meta_props )
deployment_id = deployment_details_dir['metadata']['id']
```
# Test the deployed model
Choose one of the images from the evaluation set to score the model using the newly created API. This step can be done in another notebook or custom code, since your deployed model does not depend on this kernel.
```
img_path = os.path.join(CLOUD_ANNOTATIONS_MOUNT, val_examples[5])
if os.path.isfile(img_path):
print("OK")
image = PImage.open(img_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
(im_width, im_height) = image.size
image_np = np.array(image.getdata()).reshape((im_height, im_width, 3)).astype(np.uint8)
data = image_np.tolist()
payload_scoring = {
"input_data": [{
"values": [data]
}]
}
%%time
predictions = client.deployments.score(deployment_id, payload_scoring)
for x in predictions['predictions']:
if x['id'] == 'detection_scores':
scores = x['values'][0]
if x['id'] == 'detection_boxes':
boxes = x['values'][0]
if x['id'] == 'num_detections':
num = x['values'][0]
if x['id'] == 'detection_classes':
classes = x['values'][0]
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
np.squeeze(boxes),
np.squeeze(classes).astype(np.int32),
np.squeeze(scores),
category_index,
use_normalized_coordinates=True,
line_thickness=8)
%matplotlib inline
plt.figure(figsize=(12, 8))
plt.imshow(image_np)
```
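Since the deployed model does not depend on this kernel, the scoring payload can be built anywhere: it is plain nested JSON wrapping the image array. A minimal sketch (the `input_data`/`values` field names follow the request above; the helper name is ours):

```
import numpy as np

def build_scoring_payload(image_np):
    """Wrap an (H, W, 3) uint8 RGB array as a WML online-scoring payload."""
    if image_np.ndim != 3 or image_np.shape[2] != 3:
        raise ValueError("expected an (H, W, 3) RGB array")
    return {"input_data": [{"values": [image_np.tolist()]}]}

# A tiny 2x2 dummy image stands in for a real photo.
payload = build_scoring_payload(np.zeros((2, 2, 3), dtype=np.uint8))
print(list(payload.keys()))  # ['input_data']
```

The same dictionary is what `client.deployments.score` sends in the request body above.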
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Forecasting with an RNN
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c06_forecasting_with_rnn.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c06_forecasting_with_rnn.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
## Setup
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# Use the %tensorflow_version magic if in colab.
%tensorflow_version 2.x
except Exception:
pass
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
keras = tf.keras
def plot_series(time, series, format="-", start=0, end=None, label=None):
plt.plot(time[start:end], series[start:end], format, label=label)
plt.xlabel("Time")
plt.ylabel("Value")
if label:
plt.legend(fontsize=14)
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def white_noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
def window_dataset(series, window_size, batch_size=32,
shuffle_buffer=1000):
dataset = tf.data.Dataset.from_tensor_slices(series)
dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
dataset = dataset.shuffle(shuffle_buffer)
dataset = dataset.map(lambda window: (window[:-1], window[-1]))
dataset = dataset.batch(batch_size).prefetch(1)
return dataset
def model_forecast(model, series, window_size):
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size))
ds = ds.batch(32).prefetch(1)
forecast = model.predict(ds)
return forecast
time = np.arange(4 * 365 + 1)
slope = 0.05
baseline = 10
amplitude = 40
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
noise_level = 5
noise = white_noise(time, noise_level, seed=42)
series += noise
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
```
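Before training, it can help to see exactly what `window_dataset` yields; a plain-NumPy equivalent of the sliding-window split (minus the shuffling and batching) makes the shapes explicit:

```
import numpy as np

def windows_numpy(series, window_size):
    """Return (X, y): each row of X is one window, y is the value right after it."""
    X = np.array([series[i:i + window_size]
                  for i in range(len(series) - window_size)])
    y = series[window_size:]
    return X, y

series = np.arange(10)
X, y = windows_numpy(series, window_size=3)
print(X[0], y[0])  # [0 1 2] 3
```

Each training example is a 30-value window paired with the single value that follows it, which is what the `(window[:-1], window[-1])` mapping above produces.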
## Simple RNN Forecasting
```
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
window_size = 30
train_set = window_dataset(x_train, window_size, batch_size=128)
model = keras.models.Sequential([
keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
input_shape=[None]),
keras.layers.SimpleRNN(100, return_sequences=True),
keras.layers.SimpleRNN(100),
keras.layers.Dense(1),
keras.layers.Lambda(lambda x: x * 200.0)
])
lr_schedule = keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-7 * 10**(epoch / 20))
optimizer = keras.optimizers.SGD(lr=1e-7, momentum=0.9)
model.compile(loss=keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set, epochs=100, callbacks=[lr_schedule])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-7, 1e-4, 0, 30])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
window_size = 30
train_set = window_dataset(x_train, window_size, batch_size=128)
valid_set = window_dataset(x_valid, window_size, batch_size=128)
model = keras.models.Sequential([
keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
input_shape=[None]),
keras.layers.SimpleRNN(100, return_sequences=True),
keras.layers.SimpleRNN(100),
keras.layers.Dense(1),
keras.layers.Lambda(lambda x: x * 200.0)
])
optimizer = keras.optimizers.SGD(lr=1.5e-6, momentum=0.9)
model.compile(loss=keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
early_stopping = keras.callbacks.EarlyStopping(patience=50)
model_checkpoint = keras.callbacks.ModelCheckpoint(
"my_checkpoint", save_best_only=True)
model.fit(train_set, epochs=500,
validation_data=valid_set,
callbacks=[early_stopping, model_checkpoint])
model = keras.models.load_model("my_checkpoint")
rnn_forecast = model_forecast(
model,
series[split_time - window_size:-1],
window_size)[:, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, rnn_forecast)
keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()
```
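The first training run above is a learning-rate sweep: the scheduler multiplies the rate by 10 every 20 epochs, so 100 epochs span roughly 1e-7 to 1e-2, and the `plt.axis` call zooms into the useful part of that range. Evaluating the schedule directly:

```
def lr_at(epoch):
    # Same formula as the LearningRateScheduler callback above.
    return 1e-7 * 10 ** (epoch / 20)

for e in (0, 20, 40, 60):
    print(e, lr_at(e))
```

The fixed rate chosen for the real run (1.5e-6) sits on the stable, steadily-decreasing part of the loss-vs-rate curve.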
## Sequence-to-Sequence Forecasting
```
def seq2seq_window_dataset(series, window_size, batch_size=32,
shuffle_buffer=1000):
series = tf.expand_dims(series, axis=-1)
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size + 1))
ds = ds.shuffle(shuffle_buffer)
ds = ds.map(lambda w: (w[:-1], w[1:]))
return ds.batch(batch_size).prefetch(1)
for X_batch, Y_batch in seq2seq_window_dataset(tf.range(10), 3,
batch_size=1):
print("X:", X_batch.numpy())
print("Y:", Y_batch.numpy())
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
window_size = 30
train_set = seq2seq_window_dataset(x_train, window_size,
batch_size=128)
model = keras.models.Sequential([
keras.layers.SimpleRNN(100, return_sequences=True,
input_shape=[None, 1]),
keras.layers.SimpleRNN(100, return_sequences=True),
keras.layers.Dense(1),
keras.layers.Lambda(lambda x: x * 200)
])
lr_schedule = keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-7 * 10**(epoch / 30))
optimizer = keras.optimizers.SGD(lr=1e-7, momentum=0.9)
model.compile(loss=keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set, epochs=100, callbacks=[lr_schedule])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-7, 1e-4, 0, 30])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
window_size = 30
train_set = seq2seq_window_dataset(x_train, window_size,
batch_size=128)
valid_set = seq2seq_window_dataset(x_valid, window_size,
batch_size=128)
model = keras.models.Sequential([
keras.layers.SimpleRNN(100, return_sequences=True,
input_shape=[None, 1]),
keras.layers.SimpleRNN(100, return_sequences=True),
keras.layers.Dense(1),
keras.layers.Lambda(lambda x: x * 200.0)
])
optimizer = keras.optimizers.SGD(lr=1e-6, momentum=0.9)
model.compile(loss=keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
early_stopping = keras.callbacks.EarlyStopping(patience=10)
model.fit(train_set, epochs=500,
validation_data=valid_set,
callbacks=[early_stopping])
rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size)
rnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, rnn_forecast)
keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()
```
### Heroes Of Pymoli Data Analysis
* Of the 1163 active players, the vast majority are male (84%). There also exists a smaller, but notable, proportion of female players (14%).
* Our peak age demographic falls between 20-24 (44.8%) with secondary groups falling between 15-19 (18.60%) and 25-29 (13.4%).
-----
### Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data_df = pd.read_csv(file_to_load)
#What are the columns in the data file
purchase_data_df.columns
```
## Player Count
* Display the total number of players
```
playerCountTotal = purchase_data_df['SN'].nunique()
playerCountTotal
#Make sure set has no duplicates using nunique and return count, display in a dataframe
playerCountTotal_df = pd.DataFrame({'Total Players' : [playerCountTotal]})
playerCountTotal_df
```
## Purchasing Analysis (Total)
* Run basic calculations to obtain number of unique items, average price, etc.
* Create a summary data frame to hold the results
* Optional: give the displayed data cleaner formatting
* Display the summary data frame
```
#Unique Items
itemsTotal = purchase_data_df['Item ID'].nunique()
itemsTotal
#Average Price
itemsAvgPrice = purchase_data_df['Price'].mean()
itemsAvgPrice
#Number Purchases
itemsNumberPurchases = purchase_data_df['Purchase ID'].nunique()
itemsNumberPurchases
itemsTotalRevenue = purchase_data_df['Price'].sum()
itemsTotalRevenue
#Summary Table
#Not able to get this format working 'Average Price' : [purchase_data_df['Mean'].astype(float).map("${:,.0f}".format)],
#hosted_in_us_df["average_donation"] = hosted_in_us_df["average_donation"].astype(float).map("${:,.2f}".format)
purchase_summary_df = pd.DataFrame({'Number of Unique Items' : [itemsTotal],
'Average Price' : [itemsAvgPrice],
'Number Purchases' : [itemsNumberPurchases],
'Total Revenue' : [itemsTotalRevenue]})
purchase_summary_df
#Make it pretty
#purchase_summary_df['Average Price'] = purchase_summary_df['Average Price'].astype(float).map("${:,.2f}".format)
#purchase_summary_df['Total Revenue'] = purchase_summary_df['Total Revenue'].astype(float).map("${:,.2f}".format)
format_dict = {'Average Price':'${0:,.2f}', 'Total Revenue':'${0:,.2f}'}
purchase_summary_df.style.format(format_dict)
#purchase_summary_df
```
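A note on the commented-out formatting attempts above: `astype(float).map` does work on a numeric column, so the earlier failure was most likely the nonexistent `purchase_data_df['Mean']` column. A self-contained sketch on a toy frame:

```
import pandas as pd

df = pd.DataFrame({'Average Price': [3.0507], 'Total Revenue': [2379.77]})
df['Average Price'] = df['Average Price'].astype(float).map("${:,.2f}".format)
df['Total Revenue'] = df['Total Revenue'].astype(float).map("${:,.2f}".format)
print(df.loc[0, 'Average Price'])  # $3.05
```

Unlike `style.format`, this converts the columns to strings, which is why keeping a `format_dict` for display only is often the cleaner choice.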
## Gender Demographics
* Percentage and Count of Male Players
* Percentage and Count of Female Players
* Percentage and Count of Other / Non-Disclosed
```
#clean data of nulls
purchase_data_df.count()
cleaned_purchase_data_df = purchase_data_df.dropna(how='all')
cleaned_purchase_data_df.head()
#clean data of duplicates to get accurate count of only gender and players
gender_purchase_data_df = cleaned_purchase_data_df.loc[:, ['SN','Gender']]
gender_purchase_data_df.head()
gender_purchase_data_df.drop_duplicates(inplace = True)
genderSummary_df = pd.DataFrame(gender_purchase_data_df['Gender'].value_counts())
genderSummary_df
#Rename column
genderSummary_df = genderSummary_df.rename(columns={'Gender': 'Total Count'})
genderSummary_df
#Calc and display percentage
gender_total_people = genderSummary_df.sum()
genderSummary_df['Percentage'] = genderSummary_df/gender_total_people*100
genderSummary_df
#Make it pretty
#genderSummary_df['Percentage'] = genderSummary_df['Percentage'].astype(float).map("{:,.2f}%".format)
#genderSummary_df
format_dictPercentage = {'Percentage':'{0:,.2f}%'}
genderSummary_df.style.format(format_dictPercentage)
```
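As an aside, `value_counts` can produce both columns of this table in one pass; a sketch on toy data (not the real purchase file):

```
import pandas as pd

players = pd.DataFrame({
    'SN': ['a', 'a', 'b', 'c', 'd'],
    'Gender': ['Male', 'Male', 'Male', 'Female', 'Female'],
})
unique_players = players.drop_duplicates('SN')
summary = pd.DataFrame({
    'Total Count': unique_players['Gender'].value_counts(),
    'Percentage': unique_players['Gender'].value_counts(normalize=True) * 100,
})
print(summary)
```

`normalize=True` returns proportions directly, so the manual division by the total is not needed.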
## Purchasing Analysis (Gender)
* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender
* Create a summary data frame to hold the results
* Optional: give the displayed data cleaner formatting
* Display the summary data frame
```
#purchase count, avg. purchase price, avg. purchase total per person etc. by gender
#Rename needs a str -> str mapping (passing a list as the new name is what broke it), and rename returns a new frame
genderPurchaseCount = purchase_data_df.loc[:,['Purchase ID', 'Gender']]
#genderPurchaseCount = genderPurchaseCount.rename(columns={'Purchase ID': 'Total Purchases'})
#print(genderPurchaseCount.columns)
genderPurchaseCount.groupby('Gender').count()
#Average purchase price by gender is all purchases averaged and grouped by Gender
genderPurchaseAvgPrice = purchase_data_df.loc[:,['Price', 'Gender']]
genderPurchaseAvgPrice = genderPurchaseAvgPrice.groupby('Gender').mean()
genderPurchaseAvgPrice
#Average purchase per person is sum purchase prices for each person divided by the number of purchases they made
#Need to get sum of transactions per person, and count each
print(gender_purchase_data_df.columns)
genderNumberPurPerPerson = purchase_data_df.groupby('SN')
genderNumberPurPerPerson = genderNumberPurPerPerson.count()
genderNumberPurPerPerson
genderTotPurPerson = purchase_data_df.groupby('SN')
genderTotPurPerson = genderTotPurPerson['Price'].sum()
genderTotPurPerson
gender_purchase_data_df_merged = pd.merge(gender_purchase_data_df,genderNumberPurPerPerson, on='SN')
gender_purchase_data_df_merged = pd.merge(gender_purchase_data_df_merged, genderTotPurPerson, on= 'SN')
gender_purchase_data_df_merged.head()
genderPurchaseAvgTot = gender_purchase_data_df_merged.groupby('Gender_x')
genderPurchaseAvgTot = genderPurchaseAvgTot['Price_y'].mean()
genderPurchaseAvgTot
#add calcs to summary
genderSummary_df['Avg Price']=genderPurchaseAvgPrice
genderSummary_df.head()
genderSummary_df['Avg Total']=genderPurchaseAvgTot
genderSummary_df.head()
#Make it purdy
#genderSummary_df['Avg Price'] = genderSummary_df['Avg Price'].astype(float).map("${:,.2f}".format)
#genderSummary_df['Avg Total'] = genderSummary_df['Avg Total'].astype(float).map("${:,.2f}".format)
format_dictGenderPurSummary = {'Percentage':'{0:,.2f}%', 'Avg Price':'${0:,.2f}', 'Avg Total':'${0:,.2f}'}
genderSummary_df.style.format(format_dictGenderPurSummary)
#genderSummary_df
```
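The merge-heavy route above can be collapsed into a single `groupby` with named aggregation; a sketch on toy data showing the same metrics:

```
import pandas as pd

df = pd.DataFrame({
    'SN': ['a', 'a', 'b', 'c'],
    'Gender': ['Male', 'Male', 'Male', 'Female'],
    'Price': [1.0, 3.0, 2.0, 4.0],
})
by_gender = df.groupby('Gender').agg(
    purchase_count=('Price', 'count'),
    avg_price=('Price', 'mean'),
    total_value=('Price', 'sum'),
)
# Avg total per person: total spend per gender / unique buyers in that gender
by_gender['avg_per_person'] = by_gender['total_value'] / df.groupby('Gender')['SN'].nunique()
print(by_gender)
```

Because both sides of the division are indexed by Gender, pandas aligns them automatically and no merge is required.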
## Age Demographics
* Establish bins for ages
* Categorize the existing players using the age bins. Hint: use pd.cut()
* Calculate the numbers and percentages by age group
* Create a summary data frame to hold the results
* Optional: round the percentage column to two decimal points
* Display Age Demographics Table
```
#Create bins and labels (right-inclusive edges, so 9 closes the '<10' bin and 200 catches every age above 39)
ageBins = [0, 9, 14, 19, 24, 29, 34, 39, 200]
ageBinLabels = ['<10', '10-14','15-19','20-24','25-29','30-34','35-39','40+']
pd.cut(purchase_data_df['Age'],ageBins,labels=ageBinLabels).head()
#Add a column to the the main import df with brackets
purchase_data_df['Age Bracket'] = pd.cut(purchase_data_df['Age'],ageBins,labels=ageBinLabels)
purchase_data_df.head()
#New df for age analysis
age_df = purchase_data_df.loc[:,['SN','Age','Age Bracket','Price']]
#Df for count of age brackets, and we want individual players, so drop the duplicates
age_df = age_df.sort_index()
age_df = age_df.drop_duplicates('SN')
age_df.count()
ageBracketCount = age_df.groupby('Age Bracket')
ageBracketCount = ageBracketCount.count()
ageBracketCount = ageBracketCount.rename(columns={'SN': 'Total Count'})
del ageBracketCount['Age']
del ageBracketCount['Price']
ageBracketCount
#Df for percentage...The numbers don't look right above in the brackets
#ageBracketPercent = age_df.groupby('Age Bracket')
ageBracketPercent = pd.DataFrame(ageBracketCount/ageBracketCount.sum()*100)
ageBracketPercent = ageBracketPercent.rename(columns={'Total Count' : 'Percent'})
#del ageBracketCount['Age']
#del ageBracketCount['Price']
ageBracketPercent
ageSummary_df = ageBracketCount
ageSummary_df['Percent'] = ageBracketPercent
ageSummary_df
#Make purdy
#ageSummary_df['Percent'] = ageSummary_df['Percent'].astype(float).map("{:,.2f}%".format)
format_dictAgeSummary = {'Percent' : '{:,.2f}%' }
ageSummary_df.style.format(format_dictAgeSummary)
```
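One subtlety worth demonstrating: `pd.cut` intervals are right-inclusive by default, so an edge placed at 10 puts age 10 into the first bin rather than into '10-14':

```
import pandas as pd

ages = pd.Series([9, 10, 14, 15, 40, 45])
# Edge at 10: age 10 falls into (0, 10], i.e. the '<10' label.
bad = pd.cut(ages, [0, 10, 14, 200], labels=['<10', '10-14', '15+'])
# Edge at 9: age 10 correctly falls into (9, 14].
good = pd.cut(ages, [0, 9, 14, 200], labels=['<10', '10-14', '15+'])
print(bad[1], good[1])
```

This is why the bin edges need to stop one short of each label's lower bound, and why the last edge must exceed the oldest player's age.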
## Purchasing Analysis (Age)
* Bin the purchase_data data frame by age
* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below
* Create a summary data frame to hold the results
* Optional: give the displayed data cleaner formatting
* Display the summary data frame
```
#Purchase Count Average Purchase Price Total Purchase Value Avg Total Purchase per Person
#New df for age Purchase analysis
agePurchase_df = purchase_data_df.loc[:,['Purchase ID','Age','Age Bracket','Price','SN']]
agePurchase_df.count()
#Df for Purchase Count
agePurchaseCount = agePurchase_df.groupby('Age Bracket')
agePurchaseCount = agePurchaseCount.count()
agePurchaseCount = agePurchaseCount.rename(columns={'Purchase ID' : 'Purchase Count'})
del agePurchaseCount['Age']
del agePurchaseCount['Price']
del agePurchaseCount['SN']
agePurchaseCount
#Df for Average Purchase Price
agePurchaseAvg = agePurchase_df.groupby('Age Bracket')
agePurchaseAvg = agePurchaseAvg.mean()
agePurchaseAvg = agePurchaseAvg.rename(columns={'Price' : 'Purchase Avg'})
del agePurchaseAvg['Age']
del agePurchaseAvg['Purchase ID']
agePurchaseAvg
#Df for Total Purchase Value
agePurchaseTotVal = agePurchase_df.groupby('Age Bracket')
agePurchaseTotVal = agePurchaseTotVal.sum()
agePurchaseTotVal = agePurchaseTotVal.rename(columns={'Price' : 'Total Value'})
del agePurchaseTotVal['Age']
del agePurchaseTotVal['Purchase ID']
agePurchaseTotVal
#Df for Avg Total Purchase per Person, which is sum of Price / number of purchases, which is groupby SN for calcs...
PerPerson = agePurchase_df.groupby('SN')
#print(PerPerson.sum())
#print(PerPerson.count())
agePurchaseAvgTot = PerPerson.sum()/PerPerson.count()
agePurchaseAvgTot = agePurchaseAvgTot.rename(columns={'Price' : 'Average Total'})
del agePurchaseAvgTot['Age']
del agePurchaseAvgTot['Purchase ID']
del agePurchaseAvgTot['Age Bracket']
agePurchaseAvgTot
#then merge as new column to df that's had dups removed, then do a new group by Age Brackets
agePerPerson = agePurchase_df.drop_duplicates('SN')
agePerPerson = pd.merge(agePerPerson, agePurchaseAvgTot, on='SN')
agePurchaseAvgTotByAge = agePerPerson.groupby('Age Bracket')
agePurchaseAvgTotByAge = agePurchaseAvgTotByAge.mean()
del agePurchaseAvgTotByAge['Age']
del agePurchaseAvgTotByAge['Purchase ID']
del agePurchaseAvgTotByAge['Price']
agePurchaseAvgTotByAge
#Summary df
agePurchaseSummary = agePurchaseCount
agePurchaseSummary['Purchase Avg'] = agePurchaseAvg
agePurchaseSummary['Total Value'] = agePurchaseTotVal
agePurchaseSummary['Average Total'] = agePurchaseAvgTotByAge
#Make purdy
format_dictAgePurSummary = {'Purchase Avg' : '${:,.2f}','Total Value': '${:,.2f}','Average Total':'${:,.2f}' }
agePurchaseSummary.style.format(format_dictAgePurSummary)
```
## Top Spenders
* Run basic calculations to obtain the results in the table below
* Create a summary data frame to hold the results
* Sort the total purchase value column in descending order
* Optional: give the displayed data cleaner formatting
* Display a preview of the summary data frame
```
#Purchase Count Average Purchase Price Total Purchase Value
#Add a column to per person df that counts transactions...
PerPersonTot = PerPerson.sum()
PerPersonCount = PerPerson.count()
del PerPersonCount['Purchase ID']
del PerPersonCount['Age']
del PerPersonCount['Age Bracket']
PerPersonCount = PerPersonCount.rename(columns={'Price' : 'Total Transactions'})
del PerPersonTot['Purchase ID']
del PerPersonTot['Age']
PerPersonTot = PerPersonTot.rename(columns={'Price' : 'Total Value'})
PerPersonTot
PerPerson_main = pd.merge(agePerPerson,PerPersonCount, on='SN')
PerPerson_main = pd.merge(PerPerson_main,PerPersonTot, on='SN')
PerPerson_main
#...then sort by Total Transactions
PerPerson_main_sorted = PerPerson_main.sort_values(by=['Total Transactions'], ascending=False)
TopSpenders = PerPerson_main_sorted.iloc[0:5,4:8]
TopSpenders = TopSpenders.set_index('SN')
#TopSpenders['Average Total'] = TopSpenders['Average Total'].astype(float).map("${:,.2f}".format)
#TopSpenders['Total Value'] = TopSpenders['Total Value'].astype(float).map("${:,.2f}".format)
format_dictTopSpenders = {'Average Total' : '${:,.2f}','Total Value': '${:,.2f}'}
TopSpenders.style.format(format_dictTopSpenders)
```
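Note the instructions ask to sort by total purchase value; a more direct route to a top-spenders table is a per-player `groupby` followed by `nlargest`, sketched on toy data:

```
import pandas as pd

df = pd.DataFrame({
    'SN': ['a', 'a', 'b', 'c', 'c', 'c'],
    'Price': [1.0, 2.0, 5.0, 1.0, 1.0, 2.0],
})
spenders = df.groupby('SN')['Price'].agg(['count', 'mean', 'sum'])
spenders.columns = ['Purchase Count', 'Average Purchase', 'Total Value']
top = spenders.nlargest(2, 'Total Value')
print(top.index.tolist())  # ['b', 'c']
```

`nlargest` avoids a full sort and leaves 'SN' as the index, so no merge or `set_index` step is needed.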
## Most Popular Items
* Retrieve the Item ID, Item Name, and Item Price columns
* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value
* Create a summary data frame to hold the results
* Sort the purchase count column in descending order
* Optional: give the displayed data cleaner formatting
* Display a preview of the summary data frame
```
#Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value
#purchase_data_df.columns
#Retrieve the Item ID, Item Name, and Item Price columns
items = purchase_data_df.loc[:, ['Item ID','Item Name','Price']]
itemsByID = items.groupby('Item ID')
itemsByName = items.groupby('Item Name')
#Item purchase count
itemsByID_count = itemsByID.count()
del itemsByID_count['Item Name']
itemsByID_count = itemsByID_count.rename(columns={'Price': 'Item Count'})
itemsByID_count
#Item Price
itemsByPrice = items.drop_duplicates()
itemsByPrice
#Total purchase value
itemsPurVal = itemsByID.sum()
#del itemsPurVal['Item Name']
itemsPurVal = itemsPurVal.rename(columns={'Price': 'Total Purchase Value'})
itemsPurVal
items_main = pd.merge(itemsByPrice,itemsByID_count, on='Item ID', how='outer')
items_main = pd.merge(items_main,itemsPurVal, on='Item ID', how='outer')
items_main
#Create a summary data frame to hold the results
#Sort the purchase count column in descending order
items_main_sorted = items_main.sort_values(by='Item Count', ascending=False)
items_main_sorted = items_main_sorted.drop_duplicates('Item Name')
PopularItems = items_main_sorted.iloc[0:5,1:6]
PopularItems = PopularItems.set_index('Item Name')
#Optional: give the displayed data cleaner formatting
#PopularItems['Price'] = PopularItems['Price'].astype(float).map("${:,.2f}".format)
#PopularItems['Total Purchase Value'] = PopularItems['Total Purchase Value'].astype(float).map("${:,.2f}".format)
format_dictPopularItems = {'Price' : '${:,.2f}','Total Purchase Value' : '${:,.2f}' }
PopularItems.style.format(format_dictPopularItems)
#Display a preview of the summary data frame
```
## Most Profitable Items
* Sort the above table by total purchase value in descending order
* Optional: give the displayed data cleaner formatting
* Display a preview of the data frame
```
ProfitableItems = PopularItems.sort_values(by='Total Purchase Value', ascending=False)
#ProfitableItems['Total Purchase Value'] = ProfitableItems['Total Purchase Value'].astype(float).map("${:,.2f}".format)
format_dictProfitableItems = {'Price' : '${:,.2f}','Total Purchase Value' : '${:,.2f}' }
ProfitableItems.style.format(format_dictProfitableItems)
```
# Self-Driving Car Engineer Nanodegree
## Project: **Finding Lane Lines on the Road**
***
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the IPython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
---
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
---
**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
---
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
## Import Packages
```
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
```
## Read in an Image
```
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
```
## Ideas for Lane Detection Pipeline
**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
`cv2.inRange()` for color selection
`cv2.fillPoly()` for regions selection
`cv2.line()` to draw lines on an image given endpoints
`cv2.addWeighted()` to coadd / overlay two images
`cv2.cvtColor()` to grayscale or change color
`cv2.imwrite()` to output images to file
`cv2.bitwise_and()` to apply a mask to an image
**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
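As a taste of how these fit together, `cv2.inRange` color selection amounts to a per-channel threshold; the same idea in plain NumPy (toy image, illustrative thresholds rather than tuned values):

```
import numpy as np

def select_white(img, lo=(200, 200, 200), hi=(255, 255, 255)):
    """Boolean mask of pixels whose R, G and B all fall inside [lo, hi]."""
    lo = np.array(lo)
    hi = np.array(hi)
    return np.all((img >= lo) & (img <= hi), axis=-1)

img = np.array([[[255, 255, 255], [90, 90, 90]],
                [[210, 220, 230], [255, 0, 0]]], dtype=np.uint8)
mask = select_white(img)
print(mask)
```

`cv2.inRange` returns the same mask as a 0/255 uint8 image, ready to pass to `cv2.bitwise_and`.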
## Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
```
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
    """
    Applies an image mask.

    Only keeps the region of the image defined by the polygon
    formed from `vertices`. The rest of the image is set to black.
    `vertices` should be a numpy array of integer points.
    """
    # defining a blank mask to start with
    mask = np.zeros_like(img)

    # defining a 3 channel or 1 channel color to fill the mask with depending on the input image
    if len(img.shape) > 2:
        channel_count = img.shape[2]  # i.e. 3 or 4 depending on your image
        ignore_mask_color = (255,) * channel_count
    else:
        ignore_mask_color = 255

    # filling pixels inside the polygon defined by "vertices" with the fill color
    cv2.fillPoly(mask, vertices, ignore_mask_color)

    # returning the image only where mask pixels are nonzero
    masked_image = cv2.bitwise_and(img, mask)
    return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
    """
    NOTE: this is the function you might want to use as a starting point once you want to
    average/extrapolate the line segments you detect to map out the full
    extent of the lane (going from the result shown in raw-lines-example.mp4
    to that shown in P1_example.mp4).

    Think about things like separating line segments by their
    slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
    line vs. the right line. Then, you can average the position of each of
    the lines and extrapolate to the top and bottom of the lane.

    This function draws `lines` with `color` and `thickness`.
    Lines are drawn on the image inplace (mutates the image).
    If you want to make the lines semi-transparent, think about combining
    this function with the weighted_img() function below
    """
    # In image coordinates y grows downward, so the right lane line has a
    # positive slope and the left lane line has a negative slope.
    left_slopes, x_left, y_left = [], [], []
    right_slopes, x_right, y_right = [], [], []
    min_slope = 0.1    # reject near-horizontal segments
    max_slope = 1000   # reject near-vertical segments
    for line in lines:
        for x1, y1, x2, y2 in line:
            slope = (y2 - y1) / (x2 - x1)
            if abs(slope) < min_slope or abs(slope) > max_slope:
                continue
            if slope > 0:
                right_slopes.append(slope)
                x_right.append(x1)
                y_right.append(y1)
            else:
                left_slopes.append(slope)
                x_left.append(x1)
                y_left.append(y1)
    ysize = img.shape[0]
    region_top_y = 0  # extrapolate all the way to the top of the image
    new_lines = []
    for slopes, xs, ys in ((left_slopes, x_left, y_left),
                           (right_slopes, x_right, y_right)):
        if len(slopes) > 0:
            mean_slope = np.mean(slopes)
            b = np.mean(ys) - mean_slope * np.mean(xs)
            x1 = int((region_top_y - b) / mean_slope)
            x2 = int((ysize - b) / mean_slope)
            new_lines.append([(x1, region_top_y, x2, ysize)])
    for line in new_lines:
        for x1, y1, x2, y2 in line:
            cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
    """
    `img` should be the output of a Canny transform.

    Returns an image with hough lines drawn.
    """
    lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]),
                            minLineLength=min_line_len, maxLineGap=max_line_gap)
    line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
    draw_lines(line_img, lines)
    return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
    """
    `img` is the output of hough_lines(): a blank (all black) image
    with lines drawn on it.
    `initial_img` should be the image before any processing.

    The result image is computed as follows:
        initial_img * α + img * β + γ
    NOTE: initial_img and img must be the same shape!
    """
    return cv2.addWeighted(initial_img, α, img, β, γ)
```
## Test Images
Build your pipeline to work on the images in the directory "test_images"
**You should make sure your pipeline works well on these images before you try the videos.**
```
import os
image_names = os.listdir("test_images/")
print(image_names)
```
## Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
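One possible way to wire this up is a small helper that runs your pipeline over every test image and saves the result. This is a hedged sketch — `pipeline` stands for whatever function wraps your lane-finding steps, and the directory names match the ones used in this notebook:

```python
import os
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

def save_pipeline_outputs(pipeline, in_dir='test_images', out_dir='test_images_output'):
    """Run `pipeline` on every image in `in_dir` and save the result to `out_dir`."""
    os.makedirs(out_dir, exist_ok=True)
    for name in os.listdir(in_dir):
        img = mpimg.imread(os.path.join(in_dir, name))
        plt.imsave(os.path.join(out_dir, name), pipeline(img))
```

With a pipeline function in hand you would simply call `save_pipeline_outputs(pipeline)` and then reference the saved images in your writeup.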
```
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
# Step 1: convert the image to grayscale
gray = grayscale(image)
plt.imshow(gray, cmap='gray')
vertices = np.array([(500, 300), (100, 550), (900, 550)])
# Step 2: Canny edge detection
canny_image = canny(gray, 10, 150)
plt.imshow(canny_image, cmap='gray')
# Step 3: smooth the edge image with a Gaussian blur
# (a Gaussian blur is more often applied *before* Canny)
gaussian = gaussian_blur(canny_image, 5)
plt.imshow(gaussian, cmap='gray')
# Step 4: mask everything outside the region of interest
region = region_of_interest(gaussian, np.int32([vertices]))
# Step 5: detect and draw Hough lines
hough = hough_lines(region, 2, np.pi/180, 34, 10, 5)
plt.imshow(hough)
# Step 6: overlay the detected lines on the original image
final = weighted_img(hough, image)
plt.imshow(final)
```
## Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
`solidWhiteRight.mp4`
`solidYellowLeft.mp4`
**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
**If you get an error that looks like this:**
```
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
```
**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
```
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
    # NOTE: The output you return should be a color image (3 channel) for processing video below
    # TODO: put your pipeline here,
    # you should return the final output (image where lines are drawn on lanes)
    # Step 1: convert the frame to grayscale
    gray = grayscale(image)
    vertices = np.array([(500, 300), (100, 550), (900, 550)])
    # Step 2: Canny edge detection
    canny_image = canny(gray, 10, 150)
    # Step 3: smooth the edge image with a Gaussian blur
    gaussian = gaussian_blur(canny_image, 5)
    # Step 4: mask everything outside the region of interest
    region = region_of_interest(gaussian, np.int32([vertices]))
    # Step 5: detect and draw Hough lines
    hough = hough_lines(region, 2, np.pi/180, 15, 10, 70)
    # Step 6: overlay the lines on the original frame
    final = weighted_img(hough, image)
    return final
```
Let's try the one with the solid white lane on the right first ...
```
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
```
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
```
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
```
## Improve the draw_lines() function
**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
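One hedged sketch of such an averaging step (the function name, signature, and use of `np.polyfit` are illustrative; `bottom_y`/`top_y` would come from your image size and region of interest):

```python
import numpy as np

def fit_lane_line(segments, bottom_y, top_y):
    """Average a set of (x1, y1, x2, y2) segments into one extrapolated line.

    Fits x as a function of y with np.polyfit so near-vertical lane lines
    stay well-conditioned, then evaluates at the bottom and top y values.
    """
    xs = [x for x1, y1, x2, y2 in segments for x in (x1, x2)]
    ys = [y for x1, y1, x2, y2 in segments for y in (y1, y2)]
    slope, intercept = np.polyfit(ys, xs, 1)
    return (int(round(slope * bottom_y + intercept)), bottom_y,
            int(round(slope * top_y + intercept)), top_y)
```

Inside `draw_lines()` you would call this once for the left-lane segments and once for the right-lane segments, then draw the two returned lines with `cv2.line()`.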
Now for the one with the solid yellow lane on the left. This one's more tricky!
```
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
```
## Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
## Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
```
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
```
# Introduction to TensorFlow v2 : Basics
### Importing and printing the versions
```
import tensorflow as tf
print("TensorFlow version: {}".format(tf.__version__))
print("Eager execution is: {}".format(tf.executing_eagerly()))
print("Keras version: {}".format(tf.keras.__version__))
```
### TensorFlow Variables
[Tensors](https://www.tensorflow.org/guide/tensor) are multi-dimensional arrays in TensorFlow, but they are immutable. [Variables](https://www.tensorflow.org/guide/variable) are a way to store data that can be manipulated and changed easily. Variables are automatically placed on the fastest device compatible with their datatype. For example, if a GPU is available, a variable is placed on it directly.
```
var = 1
# Defining TensorFlow Variables
ten = tf.Variable(7)
another_tensor = tf.Variable([[1, 2],[3, 4]])
var, ten, another_tensor
```
### Creating new Variables
```
f1 = tf.Variable(100.6)
print(f1)
```
### Assigning values to existing Variables
```
# Assign and print the Data-Type
print(f1.assign(25))
print(f1.dtype)
f2 = tf.Variable(7, dtype = tf.float64)
print(f2.dtype)
# Creating a TensorFlow constant - Value cannot be changed in future
constant_var = tf.constant(10)
print(constant_var)
```
### Extracting the value of a tensor as a NumPy array using .numpy()
```
constant_var.numpy()
```
### Rank and Shape of Tensor
About [Rank and Shape](https://www.tensorflow.org/guide/tensor#about_shapes) in TensorFlow
```
tf.rank(another_tensor)
tf.shape(another_tensor)
new_tensor = tf.Variable([ [ [0., 1., 2.], [3., 4., 5.] ], [ [6., 7., 8.], [9., 10., 11.] ] ])
print(new_tensor.shape)
print(tf.rank(new_tensor))
```
### Reshaping Tensors
```
new_reshape = tf.reshape(new_tensor, [2, 6])
recent_reshape = tf.reshape(new_tensor, [1, 12])
print(new_reshape)
print(recent_reshape)
```
### Broadcasting Feature
```
new_tensor + 4
new_tensor - 4
new_tensor * 4
```
### Matrix Multiplication
```
# Note: `*` is the element-wise (Hadamard) product, not matrix multiplication
new_tensor * new_tensor
u = tf.constant([[5, 6, 7]])
v = tf.constant([[8, 9, 0]])
print('Matrix Multiplication - Transpose')
print(tf.matmul(u, tf.transpose(a=v)))
```
### Type Casting
```
int_tensor = tf.cast(ten, dtype=tf.float32)
print(int_tensor)
```
### Arithmetic Operations
```
a = tf.random.normal(shape=(2, 2))
b = tf.random.normal(shape=(2, 2))
c = a + b
d = tf.square(c)
e = tf.exp(d)
print('Addition - {}'.format(c))
print('Square - {}'.format(d))
print('Exponential - {}'.format(e))
```
# TensorFlow v2 Functions
### Squared Difference Function
```
#Squared Difference Function
x = [2, 4, 6, 8, 12]
y = 6
#(x-y)*(x-y)
result = tf.math.squared_difference(x, y)
result
```
### Reduce Mean
```
numbers = tf.constant([[6., 9.], [3., 5.]])
print(numbers)
tf.reduce_mean(input_tensor = numbers)
```
### Mean across columns
```
# Reduce rows -> Find mean across columns
#(6. + 3.)/2, (9. + 5.)/2
print(tf.reduce_mean(input_tensor = numbers, axis = 0))
# (6. + 3.)/2, (9. + 5.)/2
print(tf.reduce_mean(input_tensor = numbers, axis = 0, keepdims = True))
```
### Mean across rows
```
# Reduce columns -> Find mean across rows
#(6. + 9.)/2, (3. + 5.)/2
print(tf.reduce_mean(input_tensor = numbers, axis = 1))
# (6. + 9.)/2, (3. + 5.)/2
print(tf.reduce_mean(input_tensor = numbers, axis = 1, keepdims = True))
```
### Generating normal distribution in a tensor
```
print(tf.random.normal(shape = (3, 2), mean = 10, stddev = 2, dtype = tf.float32, seed = None, name = None))
```
### Generating uniform distribution in a tensor
```
tf.random.uniform(shape = (3, 2), minval = 0, maxval = 1, dtype = tf.float32, seed = None, name = None)
```
### Random Seed in Tensorflow
```
print('Random Seed - 11\n')
tf.random.set_seed(11)
random_1 = tf.random.uniform(shape = (2, 2), maxval = 7, dtype = tf.int32)
random_2 = tf.random.uniform(shape = (2, 2), maxval = 7, dtype = tf.int32)
print(random_1)
print(random_2)
print('\n')
print('Random Seed - 12\n')
tf.random.set_seed(12)
random_1 = tf.random.uniform(shape = (2, 2), maxval = 7, dtype = tf.int32)
random_2 = tf.random.uniform(shape = (2, 2), maxval = 7, dtype = tf.int32)
print(random_1)
print(random_2)
print('\n')
print('Random Seed - 11\n')
tf.random.set_seed(11)
random_1 = tf.random.uniform(shape = (2, 2), maxval = 7, dtype = tf.int32)
random_2 = tf.random.uniform(shape = (2, 2), maxval = 7, dtype = tf.int32)
print(random_1)
print(random_2)
```
### Max, Min and Indices
```
tensor_m = tf.constant([2, 20, 15, 32, 77, 29, -16, -51, 29])
print(tensor_m)
# Max argument
index = tf.argmax(input = tensor_m)
print('Index of max: {}\n'.format(index))
print('Max element: {}'.format(tensor_m[index].numpy()))
print(tensor_m)
# Min argument
index = tf.argmin(input = tensor_m)
print('Index of minimum element: {}\n'.format(index))
print('Minimum element: {}'.format(tensor_m[index].numpy()))
```
# TensorFlow v2 : Advanced
### Computing gradients with GradientTape - Automatic Differentiation
TensorFlow v2 provides this API for computing gradients of values produced in the forward pass with respect to the inputs. Because intermediate values must be remembered during the forward pass, `tf.GradientTape` records them and can automatically differentiate a function with respect to a specified input variable. To read more on automatic differentiation in TensorFlow v2, click [here](https://www.tensorflow.org/guide/autodiff).
```
x = tf.random.normal(shape=(2, 2))
y = tf.random.normal(shape=(2, 2))
with tf.GradientTape() as tape:
    # Start recording the history of operations applied to x
    tape.watch(x)
    # Do some math using x and y
    z = tf.sqrt(tf.square(x) + tf.square(y))
    # What's the gradient of z with respect to x?
    dz = tape.gradient(z, x)
print(dz)
```
The `tf.GradientTape` API automatically watches trainable `tf.Variable`s, so once `x` is a Variable there is no need to call `tape.watch()` explicitly:
```
x = tf.Variable(x)
with tf.GradientTape() as tape:
    # Doing some calculations using x and y
    z = tf.sqrt(tf.square(x) + tf.square(y))
    # Getting the gradient of z wrt x
    dz = tape.gradient(z, x)
print(dz)
```
We can also chain differentiation to get second derivatives, using two nested tapes:
```
with tf.GradientTape() as outer_tape:
    with tf.GradientTape() as tape:
        # Computation using x and y
        z = tf.sqrt(tf.square(x) + tf.square(y))
        # First differentiation of z wrt x
        dz = tape.gradient(z, x)
    # Second differentiation of z wrt x
    dz2 = outer_tape.gradient(dz, x)
print(dz2)
```
### Tensorflow v2 Graph Function
Read [here](https://www.tensorflow.org/guide/intro_to_graphs) for more information on computation graphs and TensorFlow functions.
```
# Normal Python function
def f1(x, y):
    return tf.reduce_mean(input_tensor=tf.multiply(x ** 2, 5) + y ** 2)

# Converting it into a TensorFlow graph function
f2 = tf.function(f1)
x = tf.constant([7., -2.])
y = tf.constant([8., 6.])
# Function 1 and function 2 return the same value, but function 2 executes as a TensorFlow graph
assert f1(x, y).numpy() == f2(x, y).numpy()
ans = f1(x, y)
print(ans)
ans = f2(x, y)
print(ans)
```
# TensorFlow v2 : Linear Regression and tf.function
### Let's see the importance of tf.function with a small linear regression example
```
input_dim = 2
output_dim = 1
learning_rate = 0.01
# This is our weight matrix
w = tf.Variable(tf.random.uniform(shape=(input_dim, output_dim)))
# This is our bias vector
b = tf.Variable(tf.zeros(shape=(output_dim,)))
def compute_predictions(features):
    return tf.matmul(features, w) + b

def compute_loss(labels, predictions):
    return tf.reduce_mean(tf.square(labels - predictions))

def train_on_batch(x, y):
    with tf.GradientTape() as tape:
        predictions = compute_predictions(x)
        loss = compute_loss(y, predictions)
        # Note that `tape.gradient` works with a list as well (w, b).
        dloss_dw, dloss_db = tape.gradient(loss, [w, b])
    w.assign_sub(learning_rate * dloss_dw)
    b.assign_sub(learning_rate * dloss_db)
    return loss
import numpy as np
import random
import matplotlib.pyplot as plt
%matplotlib inline
# Prepare a dataset.
num_samples = 10000
negative_samples = np.random.multivariate_normal(mean=[0, 3], cov=[[1, 0.5],[0.5, 1]], size=num_samples)
positive_samples = np.random.multivariate_normal(mean=[3, 0], cov=[[1, 0.5],[0.5, 1]], size=num_samples)
features = np.vstack((negative_samples, positive_samples)).astype(np.float32)
labels = np.vstack((np.zeros((num_samples, 1), dtype='float32'), np.ones((num_samples, 1), dtype='float32')))
plt.scatter(features[:, 0], features[:, 1], c=labels[:, 0])
# Shuffle the data.
indices = np.random.permutation(len(features))
features = features[indices]
labels = labels[indices]
# Create a tf.data.Dataset object for easy batched iteration
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
dataset = dataset.shuffle(buffer_size=1024).batch(256)
for epoch in range(10):
    for step, (x, y) in enumerate(dataset):
        loss = train_on_batch(x, y)
    print('Epoch %d: last batch loss = %.4f' % (epoch, float(loss)))
predictions = compute_predictions(features)
plt.scatter(features[:, 0], features[:, 1], c=predictions[:, 0] > 0.5)
```
### Analyzing the code run time
TensorFlow v2 with Eager Execution
```
import time
t0 = time.time()
for epoch in range(20):
    for step, (x, y) in enumerate(dataset):
        loss = train_on_batch(x, y)
t_end = time.time() - t0
print('Time per epoch: %.3f s' % (t_end / 20,))
```
Adding the `@tf.function` decorator converts the function into a static graph (TensorFlow v1 style)
```
@tf.function
def train_on_batch_tf(x, y):
    with tf.GradientTape() as tape:
        predictions = compute_predictions(x)
        loss = compute_loss(y, predictions)
    dloss_dw, dloss_db = tape.gradient(loss, [w, b])
    w.assign_sub(learning_rate * dloss_dw)
    b.assign_sub(learning_rate * dloss_db)
    return loss
```
Running using the Static Graph method
```
t0 = time.time()
for epoch in range(20):
    for step, (x, y) in enumerate(dataset):
        loss = train_on_batch_tf(x, y)
t_end = time.time() - t0
print('Time per epoch: %.3f s' % (t_end / 20,))
```
## There is a huge decrease in the time taken per epoch!!!
## Eager execution is great for debugging and printing results line-by-line, but when it's time to scale, static graphs are a researcher's best friends.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
# Package overview
pandas is a [Python](https://www.python.org) package providing fast,
flexible, and expressive data structures designed to make working with
“relational” or “labeled” data both easy and intuitive. It aims to be the
fundamental high-level building block for doing practical, **real-world** data
analysis in Python. Additionally, it has the broader goal of becoming **the
most powerful and flexible open source data analysis/manipulation tool
available in any language**. It is already well on its way toward this goal.
pandas is well suited for many different kinds of data:
> - Tabular data with heterogeneously-typed columns, as in an SQL table or
Excel spreadsheet
- Ordered and unordered (not necessarily fixed-frequency) time series data.
- Arbitrary matrix data (homogeneously typed or heterogeneous) with row and
column labels
- Any other form of observational / statistical data sets. The data
need not be labeled at all to be placed into a pandas data structure
The two primary data structures of pandas, `Series` (1-dimensional)
and `DataFrame` (2-dimensional), handle the vast majority of typical use
cases in finance, statistics, social science, and many areas of
engineering. For R users, `DataFrame` provides everything that R’s
`data.frame` provides and much more. pandas is built on top of [NumPy](https://www.numpy.org) and is intended to integrate well within a scientific
computing environment with many other 3rd party libraries.
Here are just a few of the things that pandas does well:
> - Easy handling of **missing data** (represented as NaN) in floating point as
well as non-floating point data
- Size mutability: columns can be **inserted and deleted** from DataFrame and
higher dimensional objects
- Automatic and explicit **data alignment**: objects can be explicitly
aligned to a set of labels, or the user can simply ignore the labels and
let `Series`, `DataFrame`, etc. automatically align the data for you in
computations
- Powerful, flexible **group by** functionality to perform
split-apply-combine operations on data sets, for both aggregating and
transforming data
- Make it **easy to convert** ragged, differently-indexed data in other
Python and NumPy data structures into DataFrame objects
- Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**
of large data sets
- Intuitive **merging** and **joining** data sets
- Flexible **reshaping** and pivoting of data sets
- **Hierarchical** labeling of axes (possible to have multiple labels per
tick)
- Robust IO tools for loading data from **flat files** (CSV and delimited),
Excel files, databases, and saving / loading data from the ultrafast **HDF5
format**
- **Time series**-specific functionality: date range generation and frequency
conversion, moving window statistics, date shifting, and lagging.
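The last bullet, for example, covers operations like these (a quick sketch):

```python
import numpy as np
import pandas as pd

# Date range generation
idx = pd.date_range('2021-01-01', periods=6, freq='D')
s = pd.Series(np.arange(6.0), index=idx)

# Frequency conversion: daily values resampled to 2-day means
two_day = s.resample('2D').mean()

# Moving window statistics: 3-day rolling mean
rolling = s.rolling(window=3).mean()

# Date shifting / lagging
lagged = s.shift(1)
```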
Many of these principles are here to address the shortcomings frequently
experienced using other languages / scientific research environments. For data
scientists, working with data is typically divided into multiple stages:
munging and cleaning data, analyzing / modeling it, then organizing the results
of the analysis into a form suitable for plotting or tabular display. pandas
is the ideal tool for all of these tasks.
Some other notes
> - pandas is **fast**. Many of the low-level algorithmic bits have been
extensively tweaked in [Cython](https://cython.org) code. However, as with
anything else generalization usually sacrifices performance. So if you focus
on one feature for your application you may be able to create a faster
specialized tool.
- pandas is a dependency of [statsmodels](https://www.statsmodels.org/stable/index.html), making it an important part of the
statistical computing ecosystem in Python.
- pandas has been used extensively in production in financial applications.
## Data structures
|Dimensions|Name|Description|
|:-------------:|:------------------:|:------------------------------------------------:|
|1|Series|1D labeled homogeneously-typed array|
|2|DataFrame|General 2D labeled, size-mutable tabular structure with potentially heterogeneously-typed columns|
### Why more than one data structure?
The best way to think about the pandas data structures is as flexible
containers for lower dimensional data. For example, DataFrame is a container
for Series, and Series is a container for scalars. We would like to be
able to insert and remove objects from these containers in a dictionary-like
fashion.
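For example, a `DataFrame` behaves much like a dictionary of `Series` objects sharing a common index:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})

# Extracting a column gives the lower-dimensional container, a Series
col = df['a']

# Columns can be inserted and removed like dictionary entries
df['c'] = df['a'] + df['b']
removed = df.pop('b')
```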
Also, we would like sensible default behaviors for the common API functions
which take into account the typical orientation of time series and
cross-sectional data sets. When using the N-dimensional array (ndarrays) to store 2- and 3-dimensional
data, a burden is placed on the user to consider the orientation of the data
set when writing functions; axes are considered more or less equivalent (except
when C- or Fortran-contiguousness matters for performance). In pandas, the axes
are intended to lend more semantic meaning to the data; i.e., for a particular
data set, there is likely to be a “right” way to orient the data. The goal,
then, is to reduce the amount of mental effort required to code up data
transformations in downstream functions.
For example, with tabular data (DataFrame) it is more semantically helpful to
think of the **index** (the rows) and the **columns** rather than axis 0 and
axis 1. Iterating through the columns of the DataFrame thus results in more
readable code:
"""
for col in df.columns:
series = df[col]
# do something with series
"""
## Mutability and copying of data
All pandas data structures are value-mutable (the values they contain can be
altered) but not always size-mutable. The length of a Series cannot be
changed, but, for example, columns can be inserted into a DataFrame. However,
the vast majority of methods produce new objects and leave the input data
untouched. In general we like to **favor immutability** where sensible.
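A small illustration of these rules:

```python
import pandas as pd

s = pd.Series([1, 2, 3])
s.iloc[0] = 10                  # value-mutable: contents can be altered in place

df = pd.DataFrame({'x': [1, 2]})
df['y'] = [3, 4]                # size-mutable along columns

# Most methods return a new object and leave the input untouched
dropped = df.drop(columns=['y'])
```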
## Getting support
The first stop for pandas issues and ideas is the [Github Issue Tracker](https://github.com/pandas-dev/pandas/issues). If you have a general question,
pandas community experts can answer through [Stack Overflow](https://stackoverflow.com/questions/tagged/pandas).
## Community
pandas is actively supported today by a community of like-minded individuals around
the world who contribute their valuable time and energy to help make open source
pandas possible. Thanks to [all of our contributors](https://github.com/pandas-dev/pandas/graphs/contributors).
If you’re interested in contributing, please visit the contributing guide.
pandas is a [NumFOCUS](https://numfocus.org/sponsored-projects) sponsored project.
This will help ensure the success of the development of pandas as a world-class open-source
project and makes it possible to [donate](https://pandas.pydata.org/donate.html) to the project.
## Project governance
The governance process that pandas project has used informally since its inception in 2008 is formalized in [Project Governance documents](https://github.com/pandas-dev/pandas-governance).
The documents clarify how decisions are made and how the various elements of our community interact, including the relationship between open source collaborative development and work that may be funded by for-profit or non-profit entities.
Wes McKinney is the Benevolent Dictator for Life (BDFL).
## Development team
The list of the Core Team members and more detailed information can be found on the [people’s page](https://github.com/pandas-dev/pandas-governance/blob/master/people.md) of the governance repo.
## Institutional partners
The information about current institutional partners can be found on [pandas website page](https://pandas.pydata.org/about.html).
## License
BSD 3-Clause License
Copyright (c) 2008-2011, AQR Capital Management, LLC, Lambda Foundry, Inc. and PyData Development Team
All rights reserved.
Copyright (c) 2011-2021, Open source contributors.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Regularization
Welcome to the second assignment of this week. Deep learning models have so much flexibility and capacity that **overfitting can be a serious problem** if the training dataset is not big enough. They may do well on the training set, but the learned network **doesn't generalize to new examples** it has never seen!
**You will learn to:** Use regularization in your deep learning models.
Let's first import the packages you are going to use.
```
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
```
**Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head.
<img src="images/field_kiank.png" style="width:600px;height:350px;">
<caption><center> <u> **Figure 1** </u>: **Football field**<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>
They give you the following 2D dataset from France's past 10 games.
```
train_X, train_Y, test_X, test_Y = load_2D_dataset()
```
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
**Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.
**Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well.
You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem.
## 1 - Non-regularized model
You will use the following neural network (already implemented for you below). This model can be used:
- in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use "`lambd`" instead of "`lambda`" because "`lambda`" is a reserved keyword in Python.
- in *dropout mode* -- by setting the `keep_prob` to a value less than one
You will first try the model without any regularization. Then, you will implement:
- *L2 regularization* -- functions: "`compute_cost_with_regularization()`" and "`backward_propagation_with_regularization()`"
- *Dropout* -- functions: "`forward_propagation_with_dropout()`" and "`backward_propagation_with_dropout()`"
In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
```
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
"""
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
```
Let's train the model without any regularization, and observe the accuracy on the train/test sets.
```
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
```
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Let's now look at two techniques to reduce overfitting.
## 2 - L2 Regularization
The standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$
To:
$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$
Let's modify your cost and observe the consequences.
**Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :
```python
np.sum(np.square(Wl))
```
Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
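As a quick standalone sanity check (not part of the graded function, toy shapes made up for illustration), `np.sum(np.square(W))` is just the squared Frobenius norm of `W`, so the two ways of computing the sum of squared entries should agree:

```python
import numpy as np

np.random.seed(0)
W = np.random.randn(3, 4)  # toy weight matrix

s = np.sum(np.square(W))   # sum of squared entries, as used in the exercise
# this equals the squared Frobenius norm of W
assert np.isclose(s, np.linalg.norm(W, 'fro') ** 2)
```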
```
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
"""
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = (np.sum(np.square(W1))+ np.sum(np.square(W2))+ np.sum(np.square(W3)))*1/m*lambd/2
### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
```
**Expected Output**:
<table>
<tr>
<td>
**cost**
</td>
<td>
1.78648594516
</td>
</tr>
</table>
Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost.
**Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
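The stated gradient, $\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$, can be verified numerically on a toy matrix (a standalone sketch with made-up shapes, independent of the graded function):

```python
import numpy as np

np.random.seed(1)
m, lambd = 5, 0.7
W = np.random.randn(2, 3)  # toy weight matrix

def l2_term(W):
    # the L2 regularization term of the cost for a single weight matrix
    return lambd / (2 * m) * np.sum(np.square(W))

# analytic gradient of the L2 term
grad = lambd / m * W

# central-difference numerical gradient, entry by entry
eps = 1e-6
num = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp, Wm = W.copy(), W.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        num[i, j] = (l2_term(Wp) - l2_term(Wm)) / (2 * eps)

assert np.allclose(grad, num, atol=1e-6)
```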
```
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
"""
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + lambd/m * W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + lambd/m * W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + lambd/m * W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = "+ str(grads["dW1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("dW3 = "+ str(grads["dW3"]))
```
**Expected Output**:
<table>
<tr>
<td>
**dW1**
</td>
<td>
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
</td>
</tr>
<tr>
<td>
**dW2**
</td>
<td>
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
</td>
</tr>
<tr>
<td>
**dW3**
</td>
<td>
[[-1.77691347 -0.11832879 -0.09397446]]
</td>
</tr>
</table>
Let's now run the model with L2 regularization $(\lambda = 0.7)$. The `model()` function will call:
- `compute_cost_with_regularization` instead of `compute_cost`
- `backward_propagation_with_regularization` instead of `backward_propagation`
```
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
```
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Observations**:
- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.
- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.
**What is L2-regularization actually doing?**:
L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes.
<font color='blue'>
**What you should remember** -- the implications of L2-regularization on:
- The cost computation:
- A regularization term is added to the cost
- The backpropagation function:
- There are extra terms in the gradients with respect to weight matrices
- Weights end up smaller ("weight decay"):
- Weights are pushed to smaller values.
## 3 - Dropout
Finally, **dropout** is a widely used regularization technique that is specific to deep learning.
**It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!
<!--
To understand drop-out, consider this conversation with a friend:
- Friend: "Why do you need all these neurons to train your network and classify images?".
- You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more features my model learns!"
- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"
- You: "Good point... Neurons in the same layer actually don't talk to each other. It should definitely be possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."
-->
<center>
<video width="620" height="440" src="images/dropout1_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<br>
<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. </center></caption>
<center>
<video width="620" height="440" src="images/dropout2_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time.
### 3.1 - Forward propagation with dropout
**Exercise**: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer.
**Instructions**:
You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 0 with probability (`1-keep_prob`) or 1 with probability (`keep_prob`), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 0 (if entry is less than 0.5) or 1 (if entry is more than 0.5) you would do: `X = (X > 0.5)`. Note that 0 and 1 are respectively equivalent to False and True.
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by `keep_prob`. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
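The four steps above can be sketched on a toy activation matrix (shapes here are made up for illustration; the thresholding follows the hint, keeping each entry with probability `keep_prob`):

```python
import numpy as np

np.random.seed(2)
keep_prob = 0.8
A1 = np.random.rand(4, 5)                      # toy activations, shape (units, examples)

D1 = np.random.rand(A1.shape[0], A1.shape[1])  # Step 1: random matrix, same shape as A1
D1 = (D1 < keep_prob).astype(int)              # Step 2: 1 with probability keep_prob, else 0
A1 = A1 * D1                                   # Step 3: shut down the masked neurons
A1 = A1 / keep_prob                            # Step 4: rescale (inverted dropout)
```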
```
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
"""
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
"""
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0], A1.shape[1])    # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = (D1 < keep_prob).astype(int)                # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = A1 * D1                                     # Step 3: shut down some neurons of A1
A1 = A1 / keep_prob                              # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0], A2.shape[1])    # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = (D2 < keep_prob).astype(int)                # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = A2 * D2                                     # Step 3: shut down some neurons of A2
A2 = A2 / keep_prob                              # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
```
**Expected Output**:
<table>
<tr>
<td>
**A3**
</td>
<td>
[[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
</td>
</tr>
</table>
### 3.2 - Backward propagation with dropout
**Exercise**: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache.
**Instruction**:
Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:
1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`.
2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).
```
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
"""
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = dA2 * D2              # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2 / keep_prob       # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = dA1 * D1              # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1 / keep_prob       # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = " + str(gradients["dA1"]))
print ("dA2 = " + str(gradients["dA2"]))
```
**Expected Output**:
<table>
<tr>
<td>
**dA1**
</td>
<td>
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
</td>
</tr>
<tr>
<td>
**dA2**
</td>
<td>
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
</td>
</tr>
</table>
Let's now run the model with dropout (`keep_prob = 0.86`). It means that at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function `model()` will now call:
- `forward_propagation_with_dropout` instead of `forward_propagation`.
- `backward_propagation_with_dropout` instead of `backward_propagation`.
```
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary.
```
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Note**:
- A **common mistake** when using dropout is to use it both in training and testing. You should use dropout (randomly eliminate nodes) only in training.
- Deep learning frameworks like [tensorflow](https://www.tensorflow.org/api_docs/python/tf/nn/dropout), [PaddlePaddle](http://doc.paddlepaddle.org/release_doc/0.9.0/doc/ui/api/trainer_config_helpers/attrs.html), [keras](https://keras.io/layers/core/#dropout) or [caffe](http://caffe.berkeleyvision.org/tutorial/layers/dropout.html) come with a dropout layer implementation. Don't stress - you will soon learn some of these frameworks.
<font color='blue'>
**What you should remember about dropout:**
- Dropout is a regularization technique.
- You only use dropout during training. Don't use dropout (randomly eliminate nodes) during test time.
- Apply dropout both during forward and backward propagation.
- During training time, divide each dropout layer by keep_prob to keep the same expected value for the activations. For example, if keep_prob is 0.5, then we will on average shut down half the nodes, so the output will be scaled by 0.5 since only the remaining half are contributing to the solution. Dividing by 0.5 is equivalent to multiplying by 2. Hence, the output now has the same expected value. You can check that this works even when keep_prob is other values than 0.5.
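The last point above is easy to check empirically: with inverted dropout, the mean activation is preserved. A small sketch with keep_prob = 0.5 and constant activations (shapes chosen arbitrarily):

```python
import numpy as np

np.random.seed(3)
keep_prob = 0.5
a = np.ones((1000, 1000))  # constant activations, expected value 1

mask = np.random.rand(*a.shape) < keep_prob  # keep each unit with probability keep_prob
a_drop = a * mask / keep_prob                # inverted dropout: zero out, then rescale

# both means are approximately 1.0 (up to sampling noise)
print(a.mean(), a_drop.mean())
```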
## 4 - Conclusions
**Here are the results of our three models**:
<table>
<tr>
<td>
**model**
</td>
<td>
**train accuracy**
</td>
<td>
**test accuracy**
</td>
</tr>
<td>
3-layer NN without regularization
</td>
<td>
95%
</td>
<td>
91.5%
</td>
<tr>
<td>
3-layer NN with L2-regularization
</td>
<td>
94%
</td>
<td>
93%
</td>
</tr>
<tr>
<td>
3-layer NN with dropout
</td>
<td>
93%
</td>
<td>
95%
</td>
</tr>
</table>
Note that regularization hurts training set performance! This is because it limits the ability of the network to overfit to the training set. But since it ultimately gives better test accuracy, it is helping your system.
Congratulations for finishing this assignment! And also for revolutionizing French football. :-)
<font color='blue'>
**What we want you to remember from this notebook**:
- Regularization will help you reduce overfitting.
- Regularization will drive your weights to lower values.
- L2 regularization and Dropout are two very effective regularization techniques.
|
github_jupyter
|
```
%reload_ext autoreload
%autoreload 2
import sys
import os
BASE_DIR = os.path.abspath(os.path.join(os.path.dirname("__file__"), os.path.pardir))
sys.path.append(BASE_DIR)
import cv2
import time
import numpy as np
import matplotlib.pyplot as plt
import imgaug as ia
import imgaug.augmenters as iaa
import tensorflow as tf
from data_processor.data_loader import DataLoader, show_batch, DataLoaderWithoutCache
from models.dcgan import DCGAN, gen_random
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
gpus = tf.config.experimental.list_physical_devices(device_type='GPU')
cpus = tf.config.experimental.list_physical_devices(device_type='CPU')
tf.config.experimental.set_virtual_device_configuration(
gpus[0],
[tf.config.experimental.VirtualDeviceConfiguration(memory_limit=int(1024 * 7.5))])  # memory_limit is specified in MB
batch_size = 256
cache_size = 1024 * 64
nz = 100
glr = 2e-4
dlr = 2e-4
img_dir = 'data/faces/'
IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS = 64, 64, 3
def scale(img):
return (img - 127.5) / 127.5
def rescale(img):
return img * 127.5 + 127.5
sometimes = lambda aug: iaa.Sometimes(0.5, aug)
aug = iaa.Sequential(
[
iaa.Fliplr(0.5), # horizontally flip 50% of all images
sometimes(iaa.CropAndPad(
percent=(-0.05, 0.1),
pad_mode=ia.ALL,
pad_cval=(0, 255)
)),
sometimes(iaa.Affine(
scale={"x": (0.9, 1.1), "y": (0.9, 1.1)}, # scale images to 90-110% of their size, individually per axis
translate_percent={"x": (-0.1, 0.1), "y": (-0.1, 0.1)}, # translate by -10 to +10 percent (per axis)
rotate=(-10, 10), # rotate by -10 to +10 degrees
order=[0, 1], # use nearest neighbour or bilinear interpolation (fast)
cval=(0, 255), # if mode is constant, use a cval between 0 and 255
mode=ia.ALL # use any of scikit-image's warping modes
)),
],
random_order=True
)
data_loader = DataLoaderWithoutCache(data_dir=os.path.join(BASE_DIR, img_dir), img_shape=(IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS), cache_size=cache_size)
data_loader.scale(scale)\
.batch(batch_size)\
.augment(lambda x: aug(images=x))
img_batch = rescale(next(iter(data_loader)))
show_batch(img_batch)
num_examples_to_generate = 36
seed = gen_random((num_examples_to_generate, nz))
def show_generator(generator, seed):
predictions = generator(seed, training=False).numpy()
images = rescale(predictions).astype(np.uint8)
show_batch(images)
dcgan = DCGAN(image_shape=(IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS), dlr=dlr, glr=glr, nz=nz)
dcgan.summary()
show_generator(dcgan.generator, seed)
for epoch in range(500):
for batch_idx, img_batch in enumerate(data_loader):
dcgan.train_step(img_batch, num_iter_disc=1, num_iter_gen=1)
print(f'epoch: {epoch}, batch: {batch_idx} ', end='\r')
show_generator(dcgan.generator, seed)
img_batch = rescale(next(iter(data_loader)))
show_batch(img_batch)
show_generator(dcgan.generator, seed)
```
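The `scale`/`rescale` helpers above map pixel values into $[-1, 1]$ and back, the usual convention when a DCGAN generator ends in a `tanh`. A quick standalone round-trip check with a random toy image:

```python
import numpy as np

def scale(img):
    # map [0, 255] pixel values to [-1, 1]
    return (img - 127.5) / 127.5

def rescale(img):
    # map [-1, 1] values back to [0, 255]
    return img * 127.5 + 127.5

img = np.random.randint(0, 256, size=(64, 64, 3)).astype(np.float32)
scaled = scale(img)
assert scaled.min() >= -1.0 and scaled.max() <= 1.0  # values land in [-1, 1]
assert np.allclose(rescale(scaled), img)             # round trip recovers the image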
|
github_jupyter
|
<a href="https://colab.research.google.com/github/MidasXIV/Artificial-Intelliegence--Deep-Learning--Tensor-Flow/blob/master/Codelabs/1.Hello_ML_World.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# The Hello World of Deep Learning with Neural Networks
Like every first app you should start with something super simple that shows the overall scaffolding for how your code works.
In the case of creating neural networks, the sample I like to use is one where it learns the relationship between two numbers. So, for example, if you were writing code for a function like this, you already know the 'rules' --
```
float my_function(float x){
float y = (3 * x) + 1;
return y;
}
```
So how would you train a neural network to do the equivalent task? Using data! By feeding it with a set of Xs, and a set of Ys, it should be able to figure out the relationship between them.
This is obviously a very different paradigm than what you might be used to, so let's step through it piece by piece.
## Imports
Let's start with our imports. Here we are importing TensorFlow and calling it tf for ease of use.
We then import a library called numpy, which helps us to represent our data as lists easily and quickly.
The framework for defining a neural network as a set of Sequential layers is called keras, so we import that too.
```
import tensorflow as tf
import numpy as np
from tensorflow import keras
```
## Define and Compile the Neural Network
Next we will create the simplest possible neural network. It has 1 layer, and that layer has 1 neuron, and the input shape to it is just 1 value.
```
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
```
Now we compile our Neural Network. When we do so, we have to specify 2 functions, a loss and an optimizer.
If you've seen lots of math for machine learning, here's where it's usually used, but in this case it's nicely encapsulated in functions for you. But what happens here -- let's explain...
We know that in our function, the relationship between the numbers is y=3x+1.
When the computer is trying to 'learn' that, it makes a guess...maybe y=10x+10. The LOSS function measures the guessed answers against the known correct answers and measures how well or how badly it did.
It then uses the OPTIMIZER function to make another guess. Based on how the loss function went, it will try to minimize the loss. At that point maybe it will come up with something like y=5x+5, which, while still pretty bad, is closer to the correct result (i.e. the loss is lower).
It will repeat this for the number of EPOCHS which you will see shortly. But first, here's how we tell it to use 'MEAN SQUARED ERROR' for the loss and 'STOCHASTIC GRADIENT DESCENT' for the optimizer. You don't need to understand the math for these yet, but you can see that they work! :)
Over time you will learn the different and appropriate loss and optimizer functions for different scenarios.
```
model.compile(optimizer='sgd', loss='mean_squared_error')
```
## Providing the Data
Next up we'll feed in some data. In this case we are taking 6 Xs and 6 Ys. You can see that the relationship between these is that y=3x+1, so where x = -1, y = -2, etc.
A Python library called 'NumPy' provides lots of array-type data structures that are a de facto standard way of doing this. We declare that we want to use these by specifying the values as an np.array().
```
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-2.0, 1.0, 4.0, 7.0, 10.0, 13.0], dtype=float)
```
# Training the Neural Network
The process of training the neural network, where it 'learns' the relationship between the Xs and Ys, is in the **model.fit** call. This is where it will go through the loop we spoke about above: making a guess, measuring how good or bad it is (aka the loss), using the optimizer to make another guess, etc. It will do this for the number of epochs you specify. When you run this code, you'll see the loss on the right hand side.
```
model.fit(xs, ys, epochs=500)
```
Ok, now you have a model that has been trained to learn the relationship between X and Y. You can use the **model.predict** method to have it figure out the Y for a previously unknown X. So, for example, if X = 10, what do you think Y will be? Take a guess before you run this code:
```
print(model.predict([10.0]))
```
You might have thought 31, right? But it ended up being a little over. Why do you think that is?
Remember that neural networks deal with probabilities, so given the data that we fed the NN with, it calculated that there is a very high probability that the relationship between X and Y is Y=3X+1, but with only 6 data points we can't know for sure. As a result, the result for 10 is very close to 31, but not necessarily 31.
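In fact, the six data points lie exactly on the line y=3x+1, which an ordinary least-squares fit recovers directly; the neural network only approximates it because gradient descent stops after finitely many epochs. A quick standalone check:

```python
import numpy as np

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([-2.0, 1.0, 4.0, 7.0, 10.0, 13.0])

# least-squares line through the points; the data is exactly linear
slope, intercept = np.polyfit(xs, ys, 1)
print(slope, intercept)  # approximately 3.0 and 1.0
```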
As you work with neural networks, you'll see this pattern recurring. You will almost always deal with probabilities, not certainties, and will do a little bit of coding to figure out what the result is based on the probabilities, particularly when it comes to classification.
|
github_jupyter
|
```
import sys
import os

# make the project's src directory importable
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
    sys.path.append(module_path + "/src")

from simulation import BaseSimulation
from individual_interaction_population import IndividualInteractionPopulation
from base_test_protocol import ContactTraceProtocol, QuarantineSymptomaticProtocol
import numpy as np

def prepare_pop(interactions_pp):
    n_agents = int(1E3)
    disease_length = 14
    quarantine_length = 14
    days_until_symptomatic = 7
    interaction_frequency_lambda = interactions_pp
    population = IndividualInteractionPopulation(n_agents,
                                                 disease_length,
                                                 quarantine_length,
                                                 days_until_symptomatic,
                                                 interaction_frequency_lambda,
                                                 interaction_infection_pct=0.05,
                                                 initial_prevalence=0.005)
    # select only a single individual to be infected:
    infected_agent = np.random.choice(range(n_agents))
    for agent_idx in range(n_agents):
        if agent_idx == infected_agent:
            population.infection_status[agent_idx] = True
        else:
            population.infection_status[agent_idx] = False
    return population

def run_simulation(interactions_pp, time_horizon, test_protocol, verbose=False):
    pop = prepare_pop(interactions_pp)
    simulation = BaseSimulation(pop, test_protocol, test_frequency=1, test_latency=0)
    for day in range(time_horizon):
        simulation.step()
        if verbose:
            print("Done simulating day {}".format(day + 1))
    return simulation

sim_results_notrace = {}
sim_results_trace = {}
interactions_per_person_values = [1, 2, 3, 4, 5, 6, 7, 8, 9]
time_horizon = 200

R0 = {}
for ipp in interactions_per_person_values:
    R0[ipp] = 0.05 * 7 * ipp
    print("R0 under symptomatic-only quarantine, under lambda = {}, is equal to {:.2f}".format(ipp, R0[ipp]))

for interactions_pp in interactions_per_person_values:
    sim_results_notrace[interactions_pp] = []
    sim_results_trace[interactions_pp] = []
    for x in range(25):
        notrace_test = QuarantineSymptomaticProtocol()
        sim_results_notrace[interactions_pp].append(run_simulation(interactions_pp, time_horizon, notrace_test))
        trace_test = ContactTraceProtocol()
        sim_results_trace[interactions_pp].append(run_simulation(interactions_pp, time_horizon, trace_test))
    print("Done iteration for interactions_pp value {}".format(interactions_pp))

import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['font.size'] = 12

def add_plot(sim, days, color):
    infections = [sim.summary_population_data[day]['cumulative_num_infected'] for day in days]
    plt.plot(days, infections, linewidth=10.0, alpha=0.1, color=color)

plt.figure(figsize=(20, 50))
interactions_per_person_values = [1, 2, 3, 4, 5, 6, 7, 8, 9]
colors = {1: 'purple', 2: 'red', 3: 'orange', 4: 'green', 5: 'blue'}
subplot_val = 1
nrows = 9
ncols = 2
days = list(range(time_horizon))
for interactions_pp in interactions_per_person_values:
    color = colors[(interactions_pp - 1) % 5 + 1]
    plt.subplot(nrows, ncols, subplot_val)
    subplot_val += 1
    plt.title("Without Contact Tracing; lambda = {}; R0 = {:.2f}".format(interactions_pp, R0[interactions_pp]))
    plt.ylim(-100, 1100)
    for sim in sim_results_notrace[interactions_pp]:
        add_plot(sim, days, color)
    plt.subplot(nrows, ncols, subplot_val)
    subplot_val += 1
    plt.title("With Contact Tracing; lambda = {}; R0 = {:.2f}".format(interactions_pp, R0[interactions_pp]))
    plt.ylim(-100, 1100)
    for sim in sim_results_trace[interactions_pp]:
        add_plot(sim, days, color)
plt.show()
```