# NLP Intent Recognition
Hello and welcome to the codecentric.AI bootcamp!
Today we want to look at an advanced topic from the field of _natural language processing_, _NLP_ for short:
> How do you teach voice assistants, chatbots, and similar systems to recognize a user's intent from their utterances?
In English this problem is generally called _intent recognition_ and belongs to the ambitious field of _natural language understanding_, _NLU_ for short. The following [YouTube video](https://www.youtube.com/watch?v=H_3R8inCOvM) offers an introduction to the topic:
```
# load the video
from IPython.display import IFrame
IFrame('https://www.youtube.com/embed/H_3R8inCOvM', width=850, height=650)
```
Together in this tutorial, with the help of the NLU library [Rasa NLU](https://rasa.com/docs/nlu/), we will teach a WeatherBot to understand and answer simple question patterns about the weather. For example, it will answer the question
> `"Wie warm war es 1989?"` ("How warm was it in 1989?")
with
<img src="img/answer-1.svg" width="85%" align="middle">
and the question
> `"Welche Temperatur hatten wir in Schleswig-Holstein und in Baden-Württemberg?"` ("What temperature did we have in Schleswig-Holstein and in Baden-Württemberg?")
with
<img src="img/answer-2.svg" width="85%" align="middle">
The following screencast gives an overview of the notebook:
```
# load the video
from IPython.display import IFrame
IFrame('https://www.youtube.com/embed/pVwO4Brs4kY', width=850, height=650)
```
So that we can get going right away, we import two more standard libraries and define the data directory:
```
import os
import numpy as np
DATA_DIR = 'data'
```
## Our starting point
In general, the task of recognizing the underlying intent behind an utterance is sometimes hard even for humans. If a computer is to solve this difficult task, we have to think about what output we expect for a given input, i.e. an (unstructured) utterance: in other words, how intents are modeled and structured.
The following approach to intent recognition is widely used:
- each utterance is assigned to a _domain_, i.e. a subject area,
- for each _domain_ there is a fixed set of _intents_, i.e. a range of intentions,
- each intent can be made concrete through _parameters_ and has a number of _slots_ for them, which can be filled with values like the parameters of a function or the fields of a form.
For the utterances
> - `"Wie warm war es 1990 in Berlin?"` ("How warm was it in Berlin in 1990?")
> - `"Welche Temperatur hatten wir in Hessen im Jahr 2018?"` ("What temperature did we have in Hesse in 2018?")
> - `"Wie komme ich zum Hauptbahnhof?"` ("How do I get to the main station?")
_intent recognition_ could thus, for example, return the following results:
> - `{'intent': 'Frag_Temperatur', 'slots': {'Ort': 'Berlin', 'Jahr': '1990'}}`
> - `{'intent': 'Frag_Temperatur', 'slots': {'Ort': 'Hessen', 'Jahr': '2018'}}`
> - `{'intent': 'Frag_Weg', 'slots': {'Start': None, 'Ziel': 'Hauptbahnhof'}}`
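A structured result like this is straightforward to act on in code. As a minimal sketch (the dispatch function and the response texts are hypothetical; only the intent and slot names come from the example above):

```python
result = {'intent': 'Frag_Temperatur', 'slots': {'Ort': 'Berlin', 'Jahr': '1990'}}

def answer(result):
    # Hypothetical dispatch: choose a response per recognized intent and
    # fill in the recognized slot values.
    if result['intent'] == 'Frag_Temperatur':
        return 'Looking up the temperature for {Ort} in {Jahr}...'.format(**result['slots'])
    return 'Sorry, I cannot handle that intent yet.'

answer(result)  # -> 'Looking up the temperature for Berlin in 1990...'
```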
A whole range of NLP libraries that support intent recognition in one form or another is available for Python, for example
- [Rasa NLU](https://rasa.com/docs/nlu/) ("Language Understanding for chatbots and AI assistants"),
- [snips](https://snips-nlu.readthedocs.io/en/latest/) ("Using Voice to Make Technology Disappear"),
- [DeepPavlov](http://deeppavlov.ai) ("an open-source conversational AI library"),
- [NLP Architect](http://nlp_architect.nervanasys.com/index.html) by Intel ("for exploring state-of-the-art deep learning topologies and techniques for natural language processing and natural language understanding"),
- [pytext](https://pytext-pytext.readthedocs-hosted.com/en/latest/index.html) by Facebook ("a deep-learning based NLP modeling framework built on PyTorch").
In the following we choose the library Rasa NLU, because we can conveniently generate extensive training data for it with an open-source tool (chatette). Rasa NLU in turn uses the NLP library [spaCy](https://spacy.io), the machine-learning library [scikit-learn](https://scikit-learn.org/stable/) and the deep-learning library [TensorFlow](https://www.tensorflow.org/).
## Intent recognition from start to finish with Rasa NLU
Let's look at how a language engine for intent recognition can be trained! We first restrict ourselves to a few intents and little training data and walk through the required steps from start to finish.
### Step 1: Describe the intents with training data
First we have to describe the intents by means of training data. _Rasa NLU_ expects both together in a single file, either in the human-friendly [Markdown format](http://markdown.de/) or in the computer-friendly [JSON format](https://de.wikipedia.org/wiki/JavaScript_Object_Notation). An example of such training data in Markdown format is the following Python string, which we save to the file `intents.md`:
```
TRAIN_INTENTS = """
## intent: Frag_Temperatur
- Wie [warm](Eigenschaft) war es [1900](Zeit) in [Brandenburg](Ort)
- Wie [kalt](Eigenschaft) war es in [Hessen](Ort) [1900](Zeit)
- Was war die Temperatur [1977](Zeit) in [Sachsen](Ort)
## intent: Frag_Ort
- Wo war es [1998](Zeit) am [kältesten](Superlativ:kalt)
- Finde das [kältesten](Superlativ:kalt) Bundesland im Jahr [2004](Zeit)
- Wo war es [2010](Zeit) [kälter](Komparativ:kalt) als [1994](Zeit) in [Rheinland-Pfalz](Ort)
## intent: Frag_Zeit
- Wann war es in [Bayern](Ort) am [kühlsten](Superlativ:kalt)
- Finde das [kälteste](Superlativ:kalt) Jahr im [Saarland](Ort)
- Wann war es in [Schleswig-Holstein](Ort) [wärmer](Komparativ:warm) als in [Baden-Württemberg](Ort)
## intent: Ende
- Ende
- Auf Wiedersehen
- Tschuess
"""
INTENTS_PATH = os.path.join(DATA_DIR, 'intents.md')
def write_file(filename, text):
with open(filename, 'w', encoding='utf-8') as file:
file.write(text)
write_file(INTENTS_PATH, TRAIN_INTENTS)
```
Here each intent is first declared in the form
> `## intent: NAME`
where `NAME` is to be replaced by the name of the intent. The intent is then described by a list of example utterances. Within the example utterances, the parameters (slots) are marked in the form
> `[WERT](SLOT)`
where `SLOT` is the name of the slot and `WERT` is the corresponding part of the utterance.
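To make the annotation format concrete, here is a small helper (not part of Rasa NLU, just an illustration) that recovers the plain utterance and the annotated slots from one such training line with a regular expression. Note that a colon inside the slot name, as in `[kältesten](Superlativ:kalt)`, additionally maps the value to a synonym:

```python
import re

# Matches one annotation of the form [WERT](SLOT).
ANNOTATION = re.compile(r'\[(?P<value>[^\]]+)\]\((?P<slot>[^)]+)\)')

def parse_example(line):
    # Collect (slot, value) pairs, then strip the markup to get the plain text.
    slots = [(m.group('slot'), m.group('value')) for m in ANNOTATION.finditer(line)]
    text = ANNOTATION.sub(lambda m: m.group('value'), line)
    return text, slots

parse_example('Wie [warm](Eigenschaft) war es [1900](Zeit) in [Brandenburg](Ort)')
# -> ('Wie warm war es 1900 in Brandenburg',
#     [('Eigenschaft', 'warm'), ('Zeit', '1900'), ('Ort', 'Brandenburg')])
```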
### Step 2: Configure the language engine...
The _Rasa NLU_ language engine is designed as a pipeline and is [very flexibly configurable](https://rasa.com/docs/nlu/components/#section-pipeline). Two [example configurations](https://rasa.com/docs/nlu/choosing_pipeline/) already ship with Rasa:
- `spacy_sklearn` uses pretrained word vectors together with a [scikit-learn implementation](https://scikit-learn.org/stable/modules/svm.html) of a linear [support-vector machine](https://en.wikipedia.org/wiki/Support-vector_machine) for classification and is recommended for small training sets (<1000 examples). Since this pipeline needs pretrained word vectors and spaCy, it can only be used for [most Western European languages](https://rasa.com/docs/nlu/languages/#section-languages). Note, however, that version 0.20.1 of scikit-learn and version 0.13.8 of Rasa NLU are not compatible.
- `tensorflow_embedding` trains embeddings of utterances and of intents into the same vector space for classification and is recommended for larger training sets (>1000 examples). The underlying idea comes from the paper [StarSpace: Embed All The Things!](https://arxiv.org/abs/1709.03856). It is very versatile and is, for example, also suitable for [question answering](https://en.wikipedia.org/wiki/Question_answering). This pipeline requires no prior knowledge about the language used, so it is universally applicable, and it can also be trained to recognize multiple intents in a single utterance.
To fill the slots, both pipelines use a [Python implementation](http://www.chokkan.org/software/crfsuite/) of [conditional random fields](https://en.wikipedia.org/wiki/Conditional_random_field).
The pipeline configuration is described by a YAML file. The following Python string corresponds to the `tensorflow_embedding` configuration:
```
CONFIG_TF = """
pipeline:
- name: "tokenizer_whitespace"
- name: "ner_crf"
- name: "ner_synonyms"
- name: "intent_featurizer_count_vectors"
- name: "intent_classifier_tensorflow_embedding"
"""
```
### Step 3: ...train it...
Once the training data and the pipeline configuration are in place, the language engine can be trained. With Rasa this is usually done via a command-line interface or directly [in Python](https://rasa.com/docs/nlu/python/). The following function `train` expects the configuration as a Python string and the name of the file with the training data, and returns the trained language engine as an instance of an `Interpreter` class:
```
import rasa_nlu.training_data
import rasa_nlu.config
from rasa_nlu.model import Trainer, Interpreter
MODEL_DIR = 'models'
def train(config=CONFIG_TF, intents_path=INTENTS_PATH):
config_path = os.path.join(DATA_DIR, 'rasa_config.yml')
write_file(config_path, config)
trainer = Trainer(rasa_nlu.config.load(config_path))
trainer.train(rasa_nlu.training_data.load_data(intents_path))
return Interpreter.load(trainer.persist(MODEL_DIR))
interpreter = train()
```
### Step 4: ...and test it!
We now test whether the language engine `interpreter` correctly understands the following test utterances:
```
TEST_UTTERANCES = [
'Was war die durchschnittliche Temperatur 2004 in Mecklenburg-Vorpommern',
'Nenn mir das wärmste Bundesland 2018',
'In welchem Jahr war es in Nordrhein-Westfalen heißer als 1990',
'Wo war es 2000 am kältesten',
'Bis bald',
]
```
The `parse` method of `interpreter` expects an utterance as a Python string, applies intent recognition, and returns a very detailed result:
```
interpreter.parse(TEST_UTTERANCES[0])
```
The result essentially comprises
- the name of the detected intent together with a certainty or confidence between 0 and 1,
- for each detected parameter its start and end position in the utterance, its value, and again a confidence,
- a ranking of the possible intents by the confidence with which they were suspected in this utterance.
For a clearer presentation and easier further processing, we condense the result a little with the functions `extract_intent` and `extract_confidences`. Then we walk through our test utterances:
```
def extract_intent(intent):
return (intent['intent']['name'] if intent['intent'] else None,
[(ent['entity'], ent['value']) for ent in intent['entities']])
def extract_confidences(intent):
return (intent['intent']['confidence'] if intent['intent'] else None,
[ent['confidence'] for ent in intent['entities']])
def test(interpreter, utterances=TEST_UTTERANCES):
for utterance in utterances:
intent = interpreter.parse(utterance)
print('<', utterance)
print('>', extract_intent(intent))
print(' ', extract_confidences(intent))
print()
test(interpreter)
```
The result is not quite convincing yet, but then we also provided only very little training data!
## Generating training data with Chatette
So for successful training we need much more training data. But as soon as you start writing down further examples, you quickly come up with many small variations that can be combined quite freely. For example, we can start a question about the temperature in Berlin in 1990 with any of the phrases
> - "Wie warm war es..."
> - "Wie kalt war es..."
> - "Welche Temperatur hatten wir..."
and then finish with
> - "...in Berlin 1990"
> - "...1990 in Berlin"
insert "im Jahr" before "1990", and so on. Instead of writing down all conceivable combinations, it makes more sense to describe the possibilities with rules and to have training data generated from them. That is exactly what the Python tool [chatette](https://github.com/SimGus/Chatette) makes possible, which we use in the following. This tool reads rules, which must follow a special syntax, from a file and then generates training data for Rasa NLU in JSON format from them.
### Rules for generating training data
In the following we first set up a basic stock of rules for the intents `Frag_Temperatur`, `Frag_Ort`, `Frag_Zeit`, and `Ende` in a Python dictionary and then explain in more detail how the rules are structured:
```
RULES = {
'@[Ort]': (
'Brandenburg', 'Baden-Wuerttemberg', 'Bayern', 'Hessen',
'Rheinland-Pfalz', 'Schleswig-Holstein', 'Saarland', 'Sachsen',
),
'@[Zeit]': set(map(str, np.random.randint(1891, 2018, size=5))),
'@[Komparativ]': ('wärmer', 'kälter',),
'@[Superlativ]': ('wärmsten', 'kältesten',),
'%[Frag_Temperatur]': ('Wie {warm/kalt} war es ~[zeit_ort]',
'Welche Temperatur hatten wir ~[zeit_ort]',
'Wie war die Temperatur ~[zeit_ort]',
),
'%[Frag_Ort]': (
'~[wo_war] es @[Zeit] @[Komparativ] als {@[Zeit]/in @[Ort]}',
'~[wo_war] es @[Zeit] am @[Superlativ]',
),
    '%[Frag_Zeit]': (
'~[wann_war] es in @[Ort] @[Komparativ] als {@[Zeit]/in @[Ort]}',
'~[wann_war] es in @[Ort] am @[Superlativ]',
),
'%[Ende]': ('Ende', 'Auf Wiedersehen', 'Tschuess',),
'~[finde]': ('Sag mir', 'Finde'),
'~[wie_war]': ('Wie war', '~[finde]',),
'~[was_war]': ('Was war', '~[finde]',),
'~[wo_war]': ('Wo war', 'In welchem {Bundesland|Land} war',),
'~[wann_war]': ('Wann war', 'In welchem Jahr war',),
'~[zeit_ort]': ('@[Zeit] in @[Ort]', '@[Ort] in @[Zeit]',),
'~[Bundesland]': ('Land', 'Bundesland',),
}
```
Each rule consists of a name, or placeholder, and a set of phrases. Depending on whether the name has the form
> `%[NAME]`, `@[NAME]` or `~[NAME]`
the rule describes an
> _intent_, a _slot_, or an _alternative_
named `NAME`. Each phrase may in turn contain placeholders for slots and alternatives. When chatette generates the training data, it replaces each of these placeholders by one of the phrases listed in the rule for the respective slot or alternative. In addition, phrases may contain
- alternatives of the form `{_|_|_}`,
- optional parts of the form `[_?]`
and a few other special constructs. More details can be found in chatette's [syntax specification](https://github.com/SimGus/Chatette/wiki/Syntax-specifications).
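The effect of such `{_|_|_}` alternatives is easy to illustrate with a toy expander (this is not chatette's implementation, just a sketch of the idea) that produces every concrete variant of a phrase:

```python
import itertools
import re

# Expand every {a|b|...} alternative group in a phrase into all variants.
def expand(template):
    # Split while keeping the {...} groups as separate parts.
    parts = re.split(r'(\{[^}]*\})', template)
    choices = [p[1:-1].split('|') if p.startswith('{') else [p] for p in parts]
    return [''.join(combo) for combo in itertools.product(*choices)]

expand('In welchem {Bundesland|Land} war es am {wärmsten|kältesten}')
# -> ['In welchem Bundesland war es am wärmsten',
#     'In welchem Bundesland war es am kältesten',
#     'In welchem Land war es am wärmsten',
#     'In welchem Land war es am kältesten']
```

With a handful of such groups and aliases per phrase, a few rules quickly multiply into hundreds of distinct training utterances.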
### Generating the training data
The rules, stored compactly in the Python dictionary, now have to be formatted for chatette so that each rule's name starts a new paragraph and the possible phrases are then listed neatly indented, line by line. The following function `format_rules` does this. In addition, it inserts a directive stating how many training examples should be generated per intent:
```
def format_rules(rules, train_samples):
train_str = "('training':'{}')".format(train_samples)
llines = [[name if (name[0] != '%') else name + train_str]
+ [' ' + val for val in rules[name]] + [''] for name in rules]
return '\n'.join((l for lines in llines for l in lines))
```
Now we apply chatette to generate the training data. chatette offers a convenient [command-line interface](https://github.com/SimGus/Chatette/wiki/Command-line-interface) for this, but we use the underlying Python modules directly.
The following function `chatette` expects, like `format_rules`, a Python dictionary with rules, writes them suitably formatted to a file, deletes any previously generated training files, and then generates new training data according to the rules.
```
from chatette.adapters import RasaAdapter
from chatette.parsing import Parser
from chatette.generator import Generator
import glob
TRAIN_SAMPLES = 400
CHATETTE_DIR = os.path.join(DATA_DIR, 'chatette')
def chatette(rules=RULES, train_samples=TRAIN_SAMPLES):
rules_path = os.path.join(DATA_DIR, 'intents.chatette')
write_file(rules_path, format_rules(rules, train_samples))
with open(rules_path, 'r') as rule_file:
parser = Parser(rule_file)
parser.parse()
generator = Generator(parser)
for f in glob.glob(os.path.join(CHATETTE_DIR, '*')):
os.remove(f)
RasaAdapter().write(CHATETTE_DIR, list(generator.generate_train()),
generator.get_entities_synonyms())
chatette(train_samples=400)
```
### And now: a new test!
Do the more extensive training data really bring an improvement? Let's have a look! To compare different language engines, we use the following function:
```
def train_and_test(config=CONFIG_TF, utterances=TEST_UTTERANCES):
interpreter = train(config, CHATETTE_DIR)
test(interpreter, utterances)
return interpreter
interpreter = train_and_test()
```
Here only the last utterance was not understood, but that is hardly surprising.
## Our little WeatherBot
Experimenting is more fun when things hiss and bang now and then. Or when at least some other reaction occurs. So we build ourselves a little WeatherBot that can react to the recognized intents. First we write an input-processing-output loop for it. It expects two parameters: first the language engine `interpreter`, and second a Python dictionary `handlers` that maps each intent name to a handler. The handler is then called with the recognized intent and should return whether the loop is to be terminated:
```
def dialog(interpreter, handlers):
quit = False
while not quit:
intent = extract_intent(interpreter.parse(input('>')))
print('<', intent)
intent_name = intent[0]
if intent_name in handlers:
quit = handlers[intent_name](intent)
```
As an example, we implement a handler for the intent `Frag_Temperatur` right away and respond to all other intents with a standard answer:
```
def message(msg, quit=False):
print(msg)
return quit
HANDLERS = {
'Ende': lambda intent: message('=> Oh, wie schade. Bis bald!', True),
'Frag_Zeit': lambda intent: message('=> Das ist eine gute Frage.'),
'Frag_Ort': lambda intent: message('=> Dafür wurde ich nicht programmiert.'),
'Frag_Temperatur': lambda intent: message('=> Das weiss ich nicht.')
}
```
To answer the temperature questions, we use [archive data](ftp://ftp-cdc.dwd.de/pub/CDC/regional_averages_DE/annual/air_temperature_mean/regional_averages_tm_year.txt) from the [Deutscher Wetterdienst](https://www.dwd.de) (German Meteorological Service), which we have already preprocessed a little. The routine `show` presents the requested temperature data as a line chart, a bar chart, or in text form, depending on the number of years and federal states given. The actual handler `frag_temperatur` checks whether the given years and places are valid and, if either of the two is missing, simply substitutes all years or all federal states:
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import set_matplotlib_formats
%matplotlib inline
set_matplotlib_formats('svg')
sns.set()
DATA_PATH = os.path.join(DATA_DIR, 'temperaturen.txt')
temperature = pd.read_csv(DATA_PATH, index_col=0, sep=';')
def show(times, places):
if (len(places) == 0) and (len(times) == 0):
print('Keine zulässigen Orte oder Zeiten')
elif (len(places) == 1) and (len(times) == 1):
print(temperature.loc[times, places])
else:
if (len(places) > 1) and (len(times) == 1):
temperature.loc[times[0], places].plot.barh()
if (len(places) == 1) and (len(times) > 1):
temperature.loc[times, places[0]].plot.line()
if (len(places) > 1) and (len(times) > 1):
temperature.loc[times, places].plot.line()
plt.legend(bbox_to_anchor=(1.05,1), loc=2, borderaxespad=0.)
plt.show()
def frag_temperatur(intent):
def validate(options, ent_name, fn):
chosen = [fn(value) for (name, value) in intent[1] if name == ent_name]
return list(set(options) & set(chosen)) if chosen else options
places = validate(list(temperature.columns), 'Ort', lambda x:x)
times = validate(list(temperature.index), 'Zeit', int)
show(times, places)
return False
HANDLERS['Frag_Temperatur'] = frag_temperatur
```
Now the WeatherBot can be tested! For example with
> "Wie warm war es in Baden-Württemberg und Sachsen?" ("How warm was it in Baden-Württemberg and Saxony?")
```
dialog(interpreter, HANDLERS)
```
## Intent recognition done yourself: a bi-LSTM network with Keras
In principle we have now seen how intent recognition can be applied quite easily with the help of Rasa NLU. But how exactly does it work? In this second part of the notebook we will
- implement a bidirectional recurrent network as presented in the video,
- prepare the training data generated with chatette so that we can train the network with it,
and see that this works quite well and is not that hard at all!
### Reading and preparing the intents
First we read the training data that chatette output in JSON format to the file `RASA_INTENTS`, and look at the format of the entries:
```
import json
CHATETTE_DIR = os.path.join(DATA_DIR, 'chatette')
RASA_INTENTS = os.path.join(CHATETTE_DIR, 'output.json')
def load_intents():
with open(RASA_INTENTS) as intents_file:
intents = json.load(intents_file)
return intents['rasa_nlu_data']['common_examples']
sample_intent = load_intents()[0]
```
As already explained in the [video](https://www.youtube.com/watch?v=H_3R8inCOvM), intent recognition involves two tasks:
- the _classification_ of the intent based on the given utterance and
- the filling of the slots.
The second task can be viewed as _sequence tagging_: for each token of the utterance, we have to determine whether or not it represents the parameter for a slot. For the example intent
> `{'entities': [{'end': 20, 'entity': 'Zeit', 'start': 16, 'value': '1993'},
> {'end': 35, 'entity': 'Ort', 'start': 24, 'value': 'Brandenburg'}],
> 'intent': 'Frag_Temperatur',
> 'text': 'Wie warm war es 1993 in Brandenburg'}`
the input for these two tasks would thus be the token sequence
> `['Wie', 'warm', 'war', 'es', '1993', 'in', 'Brandenburg']`
and the desired outputs would be
> `'Frag_Temperatur'`
and the tag sequence
> `['-', '-', '-', '-', 'Zeit', '-', 'Ort']`
respectively. The following function extracts the desired input and the outputs for these two tasks from the loaded example intents:
```
import spacy
from itertools import accumulate
nlp = spacy.load('de_core_news_sm')
def tokenize(text):
return [word for word in nlp(text)]
NO_ENTITY = '-'
def intent_and_sequences(intent):
def get_tag(offset):
"""Returns the tag (+slot name) for token starting at `offset`"""
ents = [ent['entity'] for ent in intent['entities'] if ent['start'] == offset]
return ents[0] if ents else NO_ENTITY
token = tokenize(intent['text'])
# `offsets` is the list of starting positions of the token
offsets = list(accumulate([0,] + [len(t.text_with_ws) for t in token]))
return (intent['intent'], token, list(map(get_tag, offsets[:-1])))
intent_and_sequences(sample_intent)
```
### Converting symbolic data into numeric data
The prepared intents now each contain
1. the sequence of tokens as "input",
2. the name of the intent as the result of the classification, and
3. the sequence of slot names as the result of the sequence tagging.
For further processing we have to convert these categorical data into numeric data. Suitable choices are
- word vectors for 1. and
- one-hot encoding for 2. and 3.
In addition, we have to bring the input sequence and the tag sequence to a fixed length.
Let's start with the one-hot encoding. Given a set of objects, the following function creates a pair of Python dictionaries, one mapping each object to a one-hot code and the other mapping each index back to the corresponding object.
```
def ohe(s):
codes = np.eye(len(s))
numerated = list(enumerate(s))
return ({value: codes[idx] for (idx, value) in numerated},
{idx: value for (idx, value) in numerated})
```
The next helper function takes a list of elements and either truncates it to a given length or pads it to that length with a given filler element.
```
def fill(items, max_len, filler):
if len(items) < max_len:
return items + [filler] * (max_len - len(items))
else:
return items[0:max_len]
```
We wrap the conversion of the prepared intent triples into numeric data in a [scikit-learn transformer](https://scikit-learn.org/stable/data_transforms.html), because the one-hot encoding of the intent names and slot names is learned during the conversion and may be needed again later for new test data.
```
from sklearn.base import BaseEstimator, TransformerMixin
MAX_LEN = 20
VEC_DIM = len(list(nlp(' '))[0].vector)
class IntentNumerizer(BaseEstimator, TransformerMixin):
def __init__(self):
pass
def fit(self, X, y=None):
intent_names = set((x[0] for x in X))
self.intents_ohe, self.idx_intents = ohe(intent_names)
self.nr_intents = len(intent_names)
tag_lists = list(map(lambda x: set(x[2]), X)) + [[NO_ENTITY]]
tag_names = frozenset().union(*tag_lists)
self.tags_ohe, self.idx_tags = ohe(tag_names)
self.nr_tags = len(tag_names)
return self
def transform_utterance(self, token):
return np.stack(fill([tok.vector for tok in token], MAX_LEN,
np.zeros((VEC_DIM))))
def transform_tags(self, tags):
return np.stack([self.tags_ohe[t] for t in fill(tags, MAX_LEN, NO_ENTITY)])
def transform(self, X):
return (np.stack([self.transform_utterance(x[1]) for x in X]),
np.stack([self.intents_ohe[x[0]] for x in X]),
np.stack([self.transform_tags(x[2]) for x in X]))
def revert(self, intent_idx, tag_idxs):
return (self.idx_intents[intent_idx],
[self.idx_tags[t] for t in tag_idxs])
```
### A Keras implementation of a bi-LSTM network for intent recognition
We now use Keras to implement a network architecture that has been proposed in the research literature and is shown schematically in the following diagram:
<img src="img/birnn.svg" style="background:white" width="80%" align="middle">
Here,
1. the input is represented as a sequence of word vectors, as explained above,
2. this input is first processed forwards by a recurrent layer,
3. the final state of this layer is used to initialize a subsequent recurrent layer, which processes the input sequence backwards,
4. the final state of this layer is passed on to a layer with exactly as many neurons as there are intent classes, for the classification of the intent,
5. the outputs of the two recurrent layers are concatenated for each step, and
6. for each word, the concatenated output is passed on to a bundle of as many neurons as there are slot types, for the classification of that word's tag.
We now reproduce exactly this structure with Keras, using the functional API. As loss function we use categorical cross-entropy in both cases. For the recurrent layers we use LSTM cells, which we will look at more closely in a moment.
```
from keras.models import Model
from keras.layers import Input, LSTM, Concatenate, TimeDistributed, Dense
UNITS = 256
def build_bilstm(input_dim, nr_intents, nr_tags, units=UNITS):
inputs = Input(shape=(MAX_LEN, input_dim))
lstm_params = {'units': units, 'return_sequences': True, 'return_state': True}
fwd = LSTM(**lstm_params)(inputs)
bwd = LSTM(**lstm_params)(inputs, initial_state=fwd[1:])
merged = Concatenate()([fwd[0], bwd[0]])
tags = TimeDistributed(Dense(nr_tags, activation='softmax'))(merged)
intent = Dense(nr_intents, activation='softmax')(bwd[2])
model = Model(inputs=inputs, outputs=[intent, tags])
    model.compile(optimizer='Adam', loss='categorical_crossentropy')
return model
```
Let's take a closer look at how such an LSTM cell is built:
<img src="img/lstm.svg" style="background:white" width="70%" align="middle">
The name 'LSTM' stands for _long short-term memory_ and comes from the fact that, in addition to the input of the current step, such a cell receives not only the output of the previous step but also a memory value from the previous step. Based on the current input and the previous output, the LSTM cell then successively
1. decides in a _forget gate_ how much of the old memory value should be forgotten,
2. decides in an _input gate_ how much of the new input should be taken into the new memory value,
3. forms the current output from the current memory in an _output gate_.
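In formulas, writing $x_t$ for the current input, $h_{t-1}$ for the previous output, and $c_{t-1}$ for the previous memory value, one common formulation of these three steps is (notation varies slightly across the literature; $\sigma$ denotes the logistic function and $\odot$ the element-wise product):

$$
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)}\\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{(new memory value)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)}\\
h_t &= o_t \odot \tanh(c_t) && \text{(new output)}
\end{aligned}
$$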
### Training and testing the bi-LSTM network
Now let's see how well this works! To do so, we have to put everything together, which we do in two steps.
The function `train_test_data` expects as input rules, as we stored them for chatette in a Python dictionary, and returns the correspondingly generated intents in numerically prepared form, split into training and validation data, together with the fitted `IntentNumerizer`.
```
TRAIN_RATIO = 0.7
def train_test_data(rules=RULES, train_ratio=TRAIN_RATIO):
structured_intents = list(map(intent_and_sequences, load_intents()))
intent_numerizer = IntentNumerizer()
X, y, Y = intent_numerizer.fit_transform(structured_intents)
nr_samples = len(y)
shuffled_indices = np.random.permutation(nr_samples)
split = int(nr_samples * train_ratio)
train_indices, test_indices = (shuffled_indices[0:split], shuffled_indices[split:])
y_train, X_train, Y_train = y[train_indices], X[train_indices], Y[train_indices]
y_test, X_test, Y_test = y[test_indices], X[test_indices], Y[test_indices]
return intent_numerizer, X_train, y_train, Y_train, X_test, y_test, Y_test
```
Using these training and test data, the following function `build_interpreter` trains and validates the neural network built by `build_bilstm` and returns an interpreter function. This function expects an utterance as input, transforms it with the fitted `IntentNumerizer`, and performs intent recognition with the previously trained network.
```
BATCH_SIZE = 128
EPOCHS = 10
def build_interpreter(rules=RULES, units=UNITS, batch_size=BATCH_SIZE, epochs=EPOCHS):
def interpreter(utterance):
x = intent_numerizer.transform_utterance(tokenize(utterance))
y, Y = model.predict(np.stack([x]))
tag_idxs = np.argmax(Y[0], axis=1)
intent_idx = np.argmax(y[0])
return intent_numerizer.revert(intent_idx, tag_idxs)
intent_numerizer, X_train, y_train, Y_train, X_test, y_test, Y_test = train_test_data(rules)
model = build_bilstm(X_train.shape[2], y_train.shape[1], Y_train.shape[2], units)
model.fit(x=X_train, y=[y_train, Y_train],
validation_data=(X_test,[y_test, Y_test]),
batch_size=batch_size, epochs=epochs)
return interpreter
```
And now we are ready to test!
```
interpreter = build_interpreter()
interpreter('Welche ungefähre Temperatur war 1992 und 2018 in Sachsen')
```
And now it's your turn: the WeatherBot cannot do much yet, but it is now quite easy to train! And with the self-built intent recognition it will surely get even better! The notebook with exercises on intent recognition will give you a few ideas.
_Have fun, and see you soon in a new lesson of the codecentric.AI bootcamp!_
# RISE Tutorial - Part 1
> The basics of making interactive presentations with jupyter notebooks
- featured: false
- hide: false
- toc: true
- badges: true
- comments: true
- categories: [jupyter, rise]
- image: images/preview/rise.gif
- permalink: /tutorial-rise-1/
This is part 1 of 3 of the [tutorial on interactive presentations in jupyter notebooks](https://sebastiandres.github.io/blog/tutorial-rise/).
## What is RISE?
RISE is a Jupyter notebook extension that, instead of displaying the cells on one long web page, displays them as a slideshow using the JavaScript library [reveal.js](https://revealjs.com/).
It is still a web page (just like the notebook), but the cells are grouped into slides.
## What content can I include?
You can mix content using markdown and code cells, as needed:
* **Markdown**: text, LaTeX, images, tables, etc.
* **Code**: code, simple or interactive plots, videos, sound, iframes, JavaScript, among others.
As a general rule, if it renders correctly in the notebook, it will look fine on the slide. Be creative!
## How do I install it?
Installing the RISE extension is extremely easy.
Just use pip or conda:
```
pip install rise
```
or
```
conda install -c conda-forge rise
```
That adds the start-presentation button (highlighted in red).

You can also add `rise` to your `requirements.txt` file so it gets installed automatically when an environment is created.
If you run into problems, check the [additional installation details](https://rise.readthedocs.io/en/stable/installation.html).
## How do I configure what goes on each slide?
Patience, one more step is still required.
In the Jupyter notebook menu, you need to select `View/Cell Toolbar/Slideshow` to enable configuring each cell's slide type.
This is required because metadata is added to each notebook cell recording which slide it belongs to (or whether it should be skipped).

That leaves everything set up so you can select each cell's slide type.

## How do I move through the slides?
When you click the "Start presentation" button, the presentation starts at the currently active cell.
* Advance to the next slide or fragment with `Space` (or the right arrow).
* Go back to the previous slide or fragment with `Shift Space` (or the left arrow).
* Advance to the next sub-slide with `Page Up`.
* Go back to the previous sub-slide with `Page Down`.
One difference from a typical PowerPoint presentation is that there are 2 dimensions: slides, which advance from left to right as usual, but also sub-slides, which are optional slides that advance from top to bottom.

Note: In general, it is hard to remember the order of slides and sub-slides. Personally, I never use sub-slides for this reason and prefer to keep only a "horizontal" order.
## How are the slides configured?
There are several cell types with different behaviors:
* `-`: the default value. The cell is shown with the previous slide.
* `Slide`: starts a new slide (horizontal direction).
* `Sub-slide`: starts a new sub-slide (vertical direction).
* `Fragment`: is appended to the previous cell, but not shown immediately.
* `Skip`: the cell is not shown in the slides.
* `Notes`: not shown in the slides; only shown in the speaker notes.
## What options are there?
There are multiple features accessible from the keyboard during the presentation, but the main ones to remember are:
* `?`: show all shortcuts.
* `,`: hide the buttons.
* `\`: black out the screen. Useful for discussing something without visual distractions.
The features are controlled with the following buttons:

## Can you edit during the presentation?
Yes!
You can edit and run markdown and code cells during the presentation.
Use the same double-click to enter edit mode, and `Alt Enter` to run the cell.

## How do I control the size?
A common problem is that when you connect the computer to another screen or projector, not all of the code, text, or images fit on the slide.
All you have to do is use `Ctrl +` and `Ctrl -` to adjust the size (`Command +` and `Command -` on Mac), the same way you adjust the size of a web page.

## Where are the speaker notes?
You can open the speaker notes by pressing `t`.
To use the speaker notes you need at least 2 screens: a public one to share and another one to keep private.

## Share your structure
When running code in a presentation, it is good practice to show the folder structure and the files you will be working with. That helps clear up questions about how the code works.
This is easy to achieve with magic commands (such as `%ls`) or by running bash code (such as `!ls`).
The difference between the two is the following:
* Magic commands are specific to and defined by each kernel. A command may exist in Python but not in R.
* `!` lets you run instructions in the terminal and is more flexible. One of the most common uses is offering to install the libraries, such as:
```bash
pip install rise matplotlib
```
## Plots
When displaying plots, it is practical for them not to open a separate window, but to be added to the output of the cell. This is common practice in jupyter notebook/lab, but it matters even more for slides.
```
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np

with plt.xkcd():
    fig = plt.figure(figsize=(14, 6))
    x = np.linspace(-5, 5, num=100)
    y = np.abs(np.sin(2 * x) / x)
    plt.plot(x, y)
```
## Visual aids
Inserting gifs into presentations has served me well. A good animated gif is a nice compromise between an image and a movie, and provides an animation that illustrates some process. There are many programs for recording gifs.
One solution that has worked well for me for simple animated gifs is to build a diagram as a PowerPoint animation, and then record a gif with appropriately chosen timings.
That concludes part 1 of the [tutorial on interactive presentations in Jupyter notebooks](https://sebastiandres.github.io/blog/tutorial-rise/).
| github_jupyter |
### Imports
```
# Import the dependencies.
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from citipy import citipy
```
### Get Random Coordinates
```
# Create a set of random latitude and longitude combinations.
lats = np.random.uniform(low=-90.000, high=90.000, size=2000)
lngs = np.random.uniform(low=-180.000, high=180.000, size=2000)
lat_lngs = zip(lats, lngs)
lat_lngs
# Add the latitudes and longitudes to a list.
coordinates = list(lat_lngs)
```
### Get the cities from the coordinates
```
# Create a list for holding the cities.
cities = []
# Identify the nearest city for each latitude and longitude combination.
for coordinate in coordinates:
    city = citipy.nearest_city(coordinate[0], coordinate[1]).city_name
    # If the city is unique, then we will add it to the cities list.
    if city not in cities:
        cities.append(city)
# Print the city count to confirm sufficient count.
len(cities)
```
### Get the required weather data from OpenWeatherMap using API calls
Data Needed:
- Latitude and longitude
- Maximum temperature
- Percent humidity
- Percent cloudiness
- Wind speed
- Weather description (for example, clouds, fog, light rain, clear sky)
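Before running the full loop, it helps to see how these fields map onto the JSON the API returns. A minimal sketch with a hypothetical response dict (shaped like an OpenWeatherMap current-weather payload; the values are made up):

```python
from datetime import datetime, timezone

# Hypothetical response dict, shaped like an OpenWeatherMap payload.
sample = {
    "coord": {"lat": 42.6, "lon": -73.97},
    "main": {"temp_max": 55.9, "humidity": 54},
    "clouds": {"all": 90},
    "wind": {"speed": 3.36},
    "sys": {"country": "US"},
    "weather": [{"description": "light rain"}],
    "dt": 1527874800,
}

record = {
    "Lat": sample["coord"]["lat"],
    "Max Temp": sample["main"]["temp_max"],
    "Humidity": sample["main"]["humidity"],
    "Cloudiness": sample["clouds"]["all"],
    "Wind Speed": sample["wind"]["speed"],
    "Country": sample["sys"]["country"],
    "Current Description": sample["weather"][0]["description"],
    "Date": datetime.fromtimestamp(sample["dt"], tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S"),
}
print(record)
```

The retrieval loop below performs exactly this kind of extraction for each city, wrapped in a try/except so a missing city does not abort the run.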
```
# Import the requests library.
import requests
# Import the API key.
from config import weather_api_key
# Import the time library and the datetime module from the datetime library
import time
from datetime import datetime
# Starting URL for Weather Map API Call.
url = "http://api.openweathermap.org/data/2.5/weather?units=Imperial&APPID=" + weather_api_key
# Create an empty list to hold the weather data.
city_data = []
# Print the beginning of the logging.
print("Beginning Data Retrieval ")
print("-----------------------------")
# Create counters.
record_count = 1
set_count = 1
# Loop through all the cities in the list.
for i, city in enumerate(cities):
    # Group cities in sets of 50 for logging purposes.
    if (i % 50 == 0 and i >= 50):
        set_count += 1
        record_count = 1
        # time.sleep(60)
    # Create endpoint URL with each city.
    city_url = url + "&q=" + city.replace(" ","+")
    # Log the URL, record, and set numbers and the city.
    print(f"Processing Record {record_count} of Set {set_count} | {city}")
    # Add 1 to the record count.
    record_count += 1
    # Run an API request for each of the cities.
    try:
        # Parse the JSON and retrieve data.
        city_weather = requests.get(city_url).json()
        # Parse out the needed data.
        city_lat = city_weather["coord"]["lat"]
        city_lng = city_weather["coord"]["lon"]
        city_max_temp = city_weather["main"]["temp_max"]
        city_humidity = city_weather["main"]["humidity"]
        city_clouds = city_weather["clouds"]["all"]
        city_wind = city_weather["wind"]["speed"]
        city_country = city_weather["sys"]["country"]
        city_description = city_weather["weather"][0]["description"]
        # Convert the date to ISO standard.
        city_date = datetime.utcfromtimestamp(city_weather["dt"]).strftime('%Y-%m-%d %H:%M:%S')
        # Append the city information into city_data list.
        city_data.append({"City": city.title(),
                          "Lat": city_lat,
                          "Lng": city_lng,
                          "Max Temp": city_max_temp,
                          "Humidity": city_humidity,
                          "Cloudiness": city_clouds,
                          "Wind Speed": city_wind,
                          "Country": city_country,
                          "Current Description": city_description,
                          "Date": city_date})
    # If an error is experienced, skip the city.
    except:
        print("City not found. Skipping...")
        pass
# Indicate that Data Loading is complete.
print("-----------------------------")
print("Data Retrieval Complete ")
print("-----------------------------")
# Convert the array of dictionaries to a Pandas DataFrame.
city_data_df = pd.DataFrame(city_data)
city_data_df.head(10)
```
### Output to CSV
```
# Create the output file (CSV).
output_data_file = "Weather_Database/WeatherPy_Database.csv"
# Export the City_Data into a CSV.
city_data_df.to_csv(output_data_file, index_label="City_ID")
```
| github_jupyter |
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="fig/cover-small.jpg">
*This notebook contains an excerpt from the [Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython).*
*The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).*
<!--NAVIGATION-->
| [Contents](Index.ipynb) | [How to Run Python Code](01-How-to-Run-Python-Code.ipynb) >
# 1. Introduction
Conceived in the late 1980s as a teaching and scripting language, Python has since become an essential tool for many programmers, engineers, researchers, and data scientists across academia and industry.
As an astronomer focused on building and promoting the free open tools for data-intensive science, I've found Python to be a near-perfect fit for the types of problems I face day to day, whether it's extracting meaning from large astronomical datasets, scraping and munging data sources from the Web, or automating day-to-day research tasks.
The appeal of Python is in its simplicity and beauty, as well as the convenience of the large ecosystem of domain-specific tools that have been built on top of it.
For example, most of the Python code in scientific computing and data science is built around a group of mature and useful packages:
- [NumPy](http://numpy.org) provides efficient storage and computation for multi-dimensional data arrays.
- [SciPy](http://scipy.org) contains a wide array of numerical tools such as numerical integration and interpolation.
- [Pandas](http://pandas.pydata.org) provides a DataFrame object along with a powerful set of methods to manipulate, filter, group, and transform data.
- [Matplotlib](http://matplotlib.org) provides a useful interface for creation of publication-quality plots and figures.
- [Scikit-Learn](http://scikit-learn.org) provides a uniform toolkit for applying common machine learning algorithms to data.
- [IPython/Jupyter](http://jupyter.org) provides an enhanced terminal and an interactive notebook environment that is useful for exploratory analysis, as well as creation of interactive, executable documents. For example, the manuscript for this report was composed entirely in Jupyter notebooks.
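As a tiny taste of how these packages interlock (an illustrative sketch, not an excerpt from the book), the following builds an array with NumPy, wraps it in a Pandas DataFrame, and summarizes one column:

```python
import numpy as np
import pandas as pd

# NumPy supplies the efficient numerical array...
values = np.linspace(0, 1, num=5)

# ...and Pandas wraps it in a labeled DataFrame for filtering and aggregation.
df = pd.DataFrame({"x": values, "x_squared": values ** 2})
print(df["x_squared"].sum())   # 1.875
```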
No less important are the numerous other tools and packages which accompany these: if there is a scientific or data analysis task you want to perform, chances are someone has written a package that will do it for you.
To tap into the power of this data science ecosystem, however, first requires familiarity with the Python language itself.
I often encounter students and colleagues who have (sometimes extensive) backgrounds in computing in some language – MATLAB, IDL, R, Java, C++, etc. – and are looking for a brief but comprehensive tour of the Python language that respects their level of knowledge rather than starting from ground zero.
This report seeks to fill that niche.
As such, this report in no way aims to be a comprehensive introduction to programming, or a full introduction to the Python language itself; if that is what you are looking for, you might check out one of the recommended references listed in [Resources for Learning](16-Further-Resources.ipynb).
Instead, this will provide a whirlwind tour of some of Python's essential syntax and semantics, built-in data types and structures, function definitions, control flow statements, and other aspects of the language.
My aim is that readers will walk away with a solid foundation from which to explore the data science stack just outlined.
## Using Code Examples
Supplemental material (code examples, exercises, etc.) is available for download at https://github.com/jakevdp/WhirlwindTourOfPython/.
This book is here to help you get your job done.
In general, if example code is offered with this book, you may use it in your programs and documentation.
You do not need to contact us for permission unless you’re reproducing a significant portion of the code.
For example, writing a program that uses several chunks of code from this book does not require permission.
Selling or distributing a CD-ROM of examples from O’Reilly books does require permission.
Answering a question by citing this book and quoting example code does not require permission.
Incorporating a significant amount of example code from this book into your product’s documentation does require permission.
We appreciate, but do not require, attribution.
An attribution usually includes the title, author, publisher, and ISBN.
For example: "A Whirlwind Tour of Python by Jake VanderPlas (O’Reilly). Copyright 2016 O’Reilly Media, Inc., 978-1-491-96465-1."
If you feel your use of code examples falls outside fair use or the permission given above, feel free to contact us at permissions@oreilly.com.
## Installation and Practical Considerations
Installing Python and the suite of libraries that enable scientific computing is straightforward whether you use Windows, Linux, or Mac OS X. This section will outline some of the considerations when setting up your computer.
### Python 2 vs Python 3
This report uses the syntax of Python 3, which contains language enhancements that are not compatible with the *2.x* series of Python.
Though Python 3.0 was first released in 2008, adoption has been relatively slow, particularly in the scientific and web development communities.
This is primarily because it took some time for many of the essential packages and toolkits to be made compatible with the new language internals.
Since early 2014, however, stable releases of the most important tools in the data science ecosystem have been fully-compatible with both Python 2 and 3, and so this book will use the newer Python 3 syntax.
Even though that is the case, the vast majority of code snippets in this book will also work without modification in Python 2: in cases where a Py2-incompatible syntax is used, I will make every effort to note it explicitly.
### Installation with conda
Though there are various ways to install Python, the one I would suggest – particularly if you wish to eventually use the data science tools mentioned above – is via the cross-platform Anaconda distribution.
There are two flavors of the Anaconda distribution:
- [Miniconda](http://conda.pydata.org/miniconda.html) gives you the Python interpreter itself, along with a command-line tool called ``conda`` which operates as a cross-platform package manager geared toward Python packages, similar in spirit to the ``apt`` or ``yum`` tools that Linux users might be familiar with.
- [Anaconda](https://www.continuum.io/downloads) includes both Python and ``conda``, and additionally bundles a suite of other pre-installed packages geared toward scientific computing.
Any of the packages included with Anaconda can also be installed manually on top of Miniconda; for this reason I suggest starting with Miniconda.
To get started, download and install the Miniconda package – make sure to choose a version with Python 3 – and then install the IPython notebook package:
```
[~]$ conda install ipython-notebook
```
For more information on ``conda``, including information about creating and using conda environments, refer to the Miniconda package documentation linked at the above page.
## The Zen of Python
Python aficionados are often quick to point out how "intuitive", "beautiful", or "fun" Python is.
While I tend to agree, I also recognize that beauty, intuition, and fun often go hand in hand with familiarity, and so for those familiar with other languages such florid sentiments can come across as a bit smug.
Nevertheless, I hope that if you give Python a chance, you'll see where such impressions might come from.
And if you *really* want to dig into the programming philosophy that drives much of the coding practice of Python power-users, a nice little Easter egg exists in the Python interpreter: simply close your eyes, meditate for a few minutes, and ``import this``:
```
import this
```
With that, let's start our tour of the Python language.
<!--NAVIGATION-->
| [Contents](Index.ipynb) | [How to Run Python Code](01-How-to-Run-Python-Code.ipynb) >
| github_jupyter |
<a href="https://colab.research.google.com/github/egy1st/denmune-clustering-algorithm/blob/main/colab/validation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import pandas as pd
import time
import os.path
import warnings
warnings.filterwarnings('ignore')
# install the DenMune clustering algorithm using the pip command from the official Python repository, PyPI
# from https://pypi.org/project/denmune/
!pip install denmune
# then import it
from denmune import DenMune
# clone datasets from our repository datasets
if not os.path.exists('datasets'):
    !git clone https://github.com/egy1st/datasets
```
You can get your validation results using any of 3 methods:
- by showing the Analyzer
- by extracting values from the validity list returned by the `fit_predict` function
- by extracting values from the Analyzer dictionary
The algorithm is associated with five built-in validity measures, which are:
- ACC, Accuracy
- F1 score
- NMI index (Normalized Mutual Information)
- AMI index (Adjusted Mutual Information)
- ARI index (Adjusted Rand Index)
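To build intuition for what these pair-counting indices measure, here is a pure-Python sketch (illustrative only, not DenMune's implementation) of the plain Rand index, i.e. the fraction of point pairs on which two labelings agree. ARI is the chance-adjusted version of this quantity:

```python
from itertools import combinations

def rand_index(y_true, y_pred):
    """Fraction of point pairs on which the two labelings agree."""
    agree = total = 0
    for (t1, p1), (t2, p2) in combinations(zip(y_true, y_pred), 2):
        # Agreement: the pair is together in both labelings, or apart in both.
        agree += ((t1 == t2) == (p1 == p2))
        total += 1
    return agree / total

print(rand_index([0, 0, 1, 1, 2, 2], [0, 0, 1, 2, 2, 2]))  # 0.8
```

Note that the index is invariant to a relabeling of the clusters, which is exactly the property needed when comparing cluster assignments to ground truth.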
```
# Let us show the analyzer by setting show_analyzer to True, which is actually the default parameter value
data_path = 'datasets/denmune/shapes/'
dataset = "aggregation"
knn = 6
data_file = data_path + dataset + '.csv'
X_train = pd.read_csv(data_file, sep=',', header=None)
y_train = X_train.iloc[:, -1]
X_train = X_train.drop(X_train.columns[-1], axis=1)
print ("Dataset:", dataset)
dm = DenMune(train_data=X_train,
             train_truth=y_train,
             k_nearest=knn,
             rgn_tsne=False)
labels, validity = dm.fit_predict(show_noise=True, show_analyzer=True)
# Secondly, we can extract the validity list returned by the fit_predict function
dm = DenMune(train_data=X_train, train_truth=y_train, k_nearest=knn, rgn_tsne=False)
labels, validity = dm.fit_predict(show_plots=False, show_noise=True, show_analyzer=False)
validity
Accuracy = validity['train']['ACC']
print('Accuracy:', Accuracy, 'correctly identified points')
F1_score = validity['train']['F1']
print('F1 score:', round(F1_score*100, 2), '%')
NMI = validity['train']['NMI']
print('NMI index:', round(NMI*100, 2), '%')
AMI = validity['train']['AMI']
print('AMI index:', round(AMI*100, 2), '%')
ARI = validity['train']['ARI']
print('ARI index:', round(ARI*100, 2), '%')
# Thirdly, we can extract values from the Analyzer dictionary
dm = DenMune(train_data=X_train, train_truth=y_train, k_nearest=knn, rgn_tsne=False)
labels, validity = dm.fit_predict(show_plots=False, show_noise=True, show_analyzer=False)
dm.analyzer
Accuracy = dm.analyzer['validity']['train']['ACC']
print('Accuracy:', Accuracy, 'correctly identified points')
F1_score = dm.analyzer['validity']['train']['F1']
print('F1 score:', round(F1_score*100, 2), '%')
NMI = dm.analyzer['validity']['train']['NMI']
print('NMI index:', round(NMI*100, 2), '%')
AMI = dm.analyzer['validity']['train']['AMI']
print('AMI index:', round(AMI*100, 2), '%')
ARI = dm.analyzer['validity']['train']['ARI']
print('ARI index:', round(ARI*100, 2), '%')
```
| github_jupyter |
Copyright 2018 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
**This tutorial is for educational purposes only and is not intended for use in clinical diagnosis or clinical decision-making or for any other clinical use.**
# Training/Inference on Breast Density Classification Model on AutoML Vision
The goal of this tutorial is to train, deploy and run inference on a breast density classification model. Breast density is thought to be a factor for an increase in the risk for breast cancer. This will emphasize using the [Cloud Healthcare API](https://cloud.google.com/healthcare/) in order to store, retrieve and transcode medical images (in DICOM format) in a managed and scalable way. This tutorial will focus on using [Cloud AutoML Vision](https://cloud.google.com/vision/automl/docs/beginners-guide) to scalably train and serve the model.
**Note: This is the AutoML version of the Cloud ML Engine Codelab found [here](./breast_density_cloud_ml.ipynb).**
## Requirements
- A Google Cloud project.
- Project has [Cloud Healthcare API](https://cloud.google.com/healthcare/docs/quickstart) enabled.
- Project has [Cloud AutoML API ](https://cloud.google.com/vision/automl/docs/quickstart) enabled.
- Project has [Cloud Build API](https://cloud.google.com/cloud-build/docs/quickstart-docker) enabled.
- Project has [Kubernetes engine API](https://console.developers.google.com/apis/api/container.googleapis.com/overview?project=) enabled.
- Project has [Cloud Resource Manager API](https://console.cloud.google.com/cloud-resource-manager) enabled.
## Notebook dependencies
We will need to install the hcls_imaging_ml_toolkit package found [here](./toolkit). This toolkit helps make working with DICOM objects and the Cloud Healthcare API easier.
In addition, we will install [dicomweb-client](https://dicomweb-client.readthedocs.io/en/latest/) to help us interact with the DICOMweb API and [pydicom](https://pydicom.github.io/pydicom/dev/index.html), which is used to help us construct DICOM objects.
```
%%bash
pip3 install git+https://github.com/GoogleCloudPlatform/healthcare.git#subdirectory=imaging/ml/toolkit
pip3 install dicomweb-client
pip3 install pydicom
```
## Input Dataset
The dataset that will be used for training is the [TCIA CBIS-DDSM](https://wiki.cancerimagingarchive.net/display/Public/CBIS-DDSM) dataset. This dataset contains ~2500 mammography images in DICOM format. Each image is given a [BI-RADS breast density ](https://breast-cancer.ca/densitbi-rads/) score from 1 to 4. In this tutorial, we will build a binary classifier that distinguishes between breast density "2" (*scattered density*) and "3" (*heterogeneously dense*). These are the two most common and variably assigned scores. In the literature, this is said to be [particularly difficult for radiologists to consistently distinguish](https://aapm.onlinelibrary.wiley.com/doi/pdf/10.1002/mp.12683).
```
project_id = "MY_PROJECT" # @param
location = "us-central1"
dataset_id = "MY_DATASET" # @param
dicom_store_id = "MY_DICOM_STORE" # @param
# Input data used by AutoML must be in a bucket with the following format.
automl_bucket_name = "gs://" + project_id + "-vcm"
%%bash -s {project_id} {location} {automl_bucket_name}
# Create bucket.
gsutil -q mb -c regional -l $2 $3
# Allow Cloud Healthcare API to write to bucket.
PROJECT_NUMBER=`gcloud projects describe $1 | grep projectNumber | sed 's/[^0-9]//g'`
SERVICE_ACCOUNT="service-${PROJECT_NUMBER}@gcp-sa-healthcare.iam.gserviceaccount.com"
COMPUTE_ENGINE_SERVICE_ACCOUNT="${PROJECT_NUMBER}-compute@developer.gserviceaccount.com"
gsutil -q iam ch serviceAccount:${SERVICE_ACCOUNT}:objectAdmin $3
gsutil -q iam ch serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT}:objectAdmin $3
gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${SERVICE_ACCOUNT} --role=roles/pubsub.publisher
gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT} --role roles/pubsub.admin
# Allow compute service account to create datasets and dicomStores.
gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT} --role roles/healthcare.dicomStoreAdmin
gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT} --role roles/healthcare.datasetAdmin
import json
import os
import google.auth
from google.auth.transport.requests import AuthorizedSession
from hcls_imaging_ml_toolkit import dicom_path
credentials, project = google.auth.default()
authed_session = AuthorizedSession(credentials)
# Path to Cloud Healthcare API.
HEALTHCARE_API_URL = 'https://healthcare.googleapis.com/v1'
# Create Cloud Healthcare API dataset.
path = os.path.join(HEALTHCARE_API_URL, 'projects', project_id, 'locations', location, 'datasets?dataset_id=' + dataset_id)
headers = {'Content-Type': 'application/json'}
resp = authed_session.post(path, headers=headers)
assert resp.status_code == 200, 'error creating Dataset, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
# Create Cloud Healthcare API DICOM store.
path = os.path.join(HEALTHCARE_API_URL, 'projects', project_id, 'locations', location, 'datasets', dataset_id, 'dicomStores?dicom_store_id=' + dicom_store_id)
resp = authed_session.post(path, headers=headers)
assert resp.status_code == 200, 'error creating DICOM store, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
dicom_store_path = dicom_path.Path(project_id, location, dataset_id, dicom_store_id)
```
Next, we are going to transfer the DICOM instances to the Cloud Healthcare API.
Note: We are transferring >100GB of data, so this will take some time to complete.
```
# Store DICOM instances in Cloud Healthcare API.
path = 'https://healthcare.googleapis.com/v1/{}:import'.format(dicom_store_path)
headers = {'Content-Type': 'application/json'}
body = {
'gcsSource': {
'uri': 'gs://gcs-public-data--healthcare-tcia-cbis-ddsm/dicom/**'
}
}
resp = authed_session.post(path, headers=headers, json=body)
assert resp.status_code == 200, 'error importing DICOM instances, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
response = json.loads(resp.text)
operation_name = response['name']
import time
def wait_for_operation_completion(path, timeout, sleep_time=30):
    success = False
    while time.time() < timeout:
        print('Waiting for operation completion...')
        resp = authed_session.get(path)
        assert resp.status_code == 200, 'error polling for Operation results, code: {0}, response: {1}'.format(resp.status_code, resp.text)
        response = json.loads(resp.text)
        if 'done' in response:
            if response['done'] == True and 'error' not in response:
                success = True
            break
        time.sleep(sleep_time)
    print('Full response:\n{0}'.format(resp.text))
    assert success, "operation did not complete successfully in time limit"
    print('Success!')
    return response
path = os.path.join(HEALTHCARE_API_URL, operation_name)
timeout = time.time() + 40*60 # Wait up to 40 minutes.
_ = wait_for_operation_completion(path, timeout)
```
### Explore the Cloud Healthcare DICOM dataset (optional)
This is an optional section to explore the Cloud Healthcare DICOM dataset. In the following code, we simply list the studies that we have loaded into the Cloud Healthcare API. You can modify the *num_of_studies_to_print* parameter to print as many studies as desired.
```
num_of_studies_to_print = 2 # @param
path = os.path.join(HEALTHCARE_API_URL, dicom_store_path.dicomweb_path_str, 'studies')
resp = authed_session.get(path)
assert resp.status_code == 200, 'error querying Dataset, code: {0}, response: {1}'.format(resp.status_code, resp.text)
response = json.loads(resp.text)
print(json.dumps(response[:num_of_studies_to_print], indent=2))
```
## Convert DICOM to JPEG
The ML model that we will build requires that the dataset be in JPEG. We will leverage the Cloud Healthcare API to transcode DICOM to JPEG.
First we will create a [Google Cloud Storage](https://cloud.google.com/storage/) bucket to hold the output JPEG files. Next, we will use the ExportDicomData API to transform the DICOMs to JPEGs.
```
# Folder to store input images for AutoML Vision.
jpeg_folder = automl_bucket_name + "/images/"
```
Next we will convert the DICOMs to JPEGs using the [ExportDicomData](https://cloud.google.com/sdk/gcloud/reference/beta/healthcare/dicom-stores/export/gcs).
```
%%bash -s {jpeg_folder} {project_id} {location} {dataset_id} {dicom_store_id}
gcloud beta healthcare --project $2 dicom-stores export gcs $5 --location=$3 --dataset=$4 --mime-type="image/jpeg; transfer-syntax=1.2.840.10008.1.2.4.50" --gcs-uri-prefix=$1
```
Meanwhile, you should be able to observe the JPEG images being added to your Google Cloud Storage bucket.
Next, we will join the training data stored in Google Cloud Storage with the labels in the TCIA website. The output of this step is a [CSV file that is input to AutoML](https://cloud.google.com/vision/automl/docs/prepare). This CSV contains a list of pairs of (IMAGE_PATH, LABEL).
```
# tensorflow==1.15.0 to have same versions in all environments - dataflow, automl, ai-platform
!pip install tensorflow==1.15.0 --ignore-installed
# CSV to hold (IMAGE_PATH, LABEL) list.
input_data_csv = automl_bucket_name + "/input.csv"
import csv
import os
import re
from tensorflow.python.lib.io import file_io
import scripts.tcia_utils as tcia_utils
# Get map of study_uid -> file paths.
path_list = file_io.get_matching_files(os.path.join(jpeg_folder, '*/*/*'))
study_uid_to_file_paths = {}
pattern = r'^{0}(?P<study_uid>[^/]+)/(?P<series_uid>[^/]+)/(?P<instance_uid>.*)'.format(jpeg_folder)
for path in path_list:
    match = re.search(pattern, path)
    study_uid_to_file_paths[match.group('study_uid')] = path
# Get map of study_uid -> labels.
study_uid_to_labels = tcia_utils.GetStudyUIDToLabelMap()
# Join the two maps, output results to CSV in Google Cloud Storage.
with file_io.FileIO(input_data_csv, 'w') as f:
    writer = csv.writer(f, delimiter=',')
    for study_uid, label in study_uid_to_labels.items():
        if study_uid in study_uid_to_file_paths:
            writer.writerow([study_uid_to_file_paths[study_uid], label])
```
## Training
***This section will focus on using AutoML through its API. AutoML can also be used through the user interface found [here](https://console.cloud.google.com/vision/). All of the steps below can also be done through the web UI.***
We will use [AutoML Vision ](https://cloud.google.com/automl/) to train the classification model. AutoML provides a fully managed solution for training the model. All we will do is input the list of input images and labels. The trained model in AutoML will be able to classify the mammography images as either "2" (scattered density) or "3" (heterogeneously dense).
As a first step, we will create an AutoML dataset.
```
automl_dataset_display_name = "MY_AUTOML_DATASET" # @param
import json
import os
# Path to AutoML API.
AUTOML_API_URL = 'https://automl.googleapis.com/v1beta1'
# Path to request creation of AutoML dataset.
path = os.path.join(AUTOML_API_URL, 'projects', project_id, 'locations', location, 'datasets')
# Headers (request in JSON format).
headers = {'Content-Type': 'application/json'}
# Body (encoded in JSON format).
config = {'display_name': automl_dataset_display_name, 'image_classification_dataset_metadata': {'classification_type': 'MULTICLASS'}}
resp = authed_session.post(path, headers=headers, json=config)
assert resp.status_code == 200, 'creating AutoML dataset, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
# Record the AutoML dataset name.
response = json.loads(resp.text)
automl_dataset_name = response['name']
```
Next, we will import the CSV containing the (IMAGE_PATH, LABEL) pairs into the AutoML dataset. **Please ignore errors regarding an existing ground truth.**
```
# Path to request import into AutoML dataset.
path = os.path.join(AUTOML_API_URL, automl_dataset_name + ':importData')
# Body (encoded in JSON format).
config = {'input_config': {'gcs_source': {'input_uris': [input_data_csv]}}}
resp = authed_session.post(path, headers=headers, json=config)
assert resp.status_code == 200, 'error importing AutoML dataset, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
# Record operation_name so we can poll for it later.
response = json.loads(resp.text)
operation_name = response['name']
```
The output of the previous step is an [operation](https://cloud.google.com/vision/automl/docs/models#get-operation) whose status we need to poll. We will poll until the operation's "done" field is set to true. This will take a few minutes, so we wait for it to finish.
```
path = os.path.join(AUTOML_API_URL, operation_name)
timeout = time.time() + 40*60 # Wait up to 40 minutes.
_ = wait_for_operation_completion(path, timeout)
```
Next, we will train the model to perform classification. We will set the training budget to be a maximum of 1hr (but this can be modified below). The cost of using AutoML can be found [here](https://cloud.google.com/vision/automl/pricing). Typically, the longer the model is trained for, the more accurate it will be.
```
# Name of the model.
model_display_name = "MY_MODEL_NAME" # @param
# Training budget (1 hr).
training_budget = 1 # @param
# Path to request import into AutoML dataset.
path = os.path.join(AUTOML_API_URL, 'projects', project_id, 'locations', location, 'models')
# Headers (request in JSON format).
headers = {'Content-Type': 'application/json'}
# Body (encoded in JSON format).
automl_dataset_id = automl_dataset_name.split('/')[-1]
config = {'display_name': model_display_name, 'dataset_id': automl_dataset_id, 'image_classification_model_metadata': {'train_budget': training_budget}}
resp = authed_session.post(path, headers=headers, json=config)
assert resp.status_code == 200, 'error creating AutoML model, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
# Record operation_name so we can poll for it later.
response = json.loads(resp.text)
operation_name = response['name']
```
The output of the previous step is also an [operation](https://cloud.google.com/vision/automl/docs/models#get-operation) whose status we need to poll. We will poll until the operation's "done" field is set to true. This will take a few minutes to complete.
```
path = os.path.join(AUTOML_API_URL, operation_name)
timeout = time.time() + 40*60 # Wait up to 40 minutes.
sleep_time = 5*60 # Update each 5 minutes.
response = wait_for_operation_completion(path, timeout, sleep_time)
full_model_name = response['response']['name']
# google.cloud.automl to make api calls to Cloud AutoML
!pip install google-cloud-automl
from google.cloud import automl_v1
client = automl_v1.AutoMlClient()
response = client.deploy_model(full_model_name)
print(u'Model deployment finished. {}'.format(response.result()))
```
Next, we will check out the accuracy metrics for the trained model. The following command will return the [AUC (ROC)](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc), [precision](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall) and [recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall) for the model, for various ML classification thresholds.
```
# Path to request to get model accuracy metrics.
path = os.path.join(AUTOML_API_URL, full_model_name, 'modelEvaluations')
resp = authed_session.get(path)
assert resp.status_code == 200, 'error getting AutoML model evaluations, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
```
## Inference
To allow medical imaging ML models to be easily integrated into clinical workflows, an *inference module* can be used. A standalone modality, a PACS system, or a DICOM router can push DICOM instances into Cloud Healthcare [DICOM stores](https://cloud.google.com/healthcare/docs/introduction), triggering ML models to run inference. These inference results can then be structured into various DICOM formats (e.g. DICOM [structured reports](http://dicom.nema.org/MEDICAL/Dicom/2014b/output/chtml/part20/sect_A.3.html)) and stored in the Cloud Healthcare API, from which they can be retrieved by the customer.
The inference module is built as a [Docker](https://www.docker.com/) container and deployed using [Kubernetes](https://kubernetes.io/), allowing you to easily scale your deployment. The dataflow for inference can look as follows (see corresponding diagram below):
1. Client application uses [STOW-RS](ftp://dicom.nema.org/medical/Dicom/2013/output/chtml/part18/sect_6.6.html) to push a new DICOM instance to the Cloud Healthcare DICOMWeb API.
2. The insertion of the DICOM instance triggers a [Cloud Pubsub](https://cloud.google.com/pubsub/) message to be published. The *inference module* will pull incoming Pubsub messages and will receive a message for the previously inserted DICOM instance.
3. The *inference module* will retrieve the instance in JPEG format from the Cloud Healthcare API using [WADO-RS](ftp://dicom.nema.org/medical/Dicom/2013/output/chtml/part18/sect_6.5.html).
4. The *inference module* will send the JPEG bytes to the model hosted on AutoML.
5. AutoML will return the prediction back to the *inference module*.
6. The *inference module* will package the prediction into a DICOM instance. This can potentially be a DICOM structured report, [presentation state](ftp://dicom.nema.org/MEDICAL/dicom/2014b/output/chtml/part03/sect_A.33.html), or even burnt text on the image. In this codelab, we will focus on just DICOM structured reports, specifically [Comprehensive Structured Reports](http://dicom.nema.org/dicom/2013/output/chtml/part20/sect_A.3.html). The structured report is then stored back in the Cloud Healthcare API using STOW-RS.
7. The client application can query for (or retrieve) the structured report by using [QIDO-RS](http://dicom.nema.org/dicom/2013/output/chtml/part18/sect_6.7.html) or WADO-RS. Pubsub can also be used by the client application to poll for the newly created DICOM structured report instance.
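The core of such an inference module (steps 3-6 above) can be sketched as a single message handler. All names below are illustrative stand-ins, not the actual `inference.py` API; the three callables represent the WADO-RS retrieval, the AutoML prediction call, and the STOW-RS upload:

```python
def handle_pubsub_message(instance_path, retrieve_jpeg, predict, store_report):
    """One iteration of the inference loop for a single Pubsub message.

    instance_path: DICOMweb path of the newly inserted instance.
    retrieve_jpeg: callable standing in for the WADO-RS fetch (step 3).
    predict:       callable standing in for the AutoML call (steps 4-5),
                   returning a (label, score) pair.
    store_report:  callable standing in for the STOW-RS upload (step 6).
    """
    jpeg_bytes = retrieve_jpeg(instance_path)   # step 3: WADO-RS retrieval
    label, score = predict(jpeg_bytes)          # steps 4-5: model prediction
    report = {                                  # step 6: minimal stand-in for
        'referenced_instance': instance_path,   # a DICOM structured-report payload
        'label': label,
        'score': score,
    }
    store_report(report)                        # step 6: STOW-RS upload
    return report
```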

To begin, we will create a new DICOM store that will store our inference source (DICOM mammography instance) and results (DICOM structured report). In order to enable Pubsub notifications to be triggered on inserted instances, we will give the DICOM store a Pubsub channel to publish on.
```
# Pubsub config.
pubsub_topic_id = "MY_PUBSUB_TOPIC_ID" # @param
pubsub_subscription_id = "MY_PUBSUB_SUBSCRIPTION_ID" # @param
# DICOM store to hold the DICOM instances used for inference.
inference_dicom_store_id = "MY_INFERENCE_DICOM_STORE" # @param
pubsub_subscription_name = "projects/" + project_id + "/subscriptions/" + pubsub_subscription_id
inference_dicom_store_path = dicom_path.FromPath(dicom_store_path, store_id=inference_dicom_store_id)
%%bash -s {pubsub_topic_id} {pubsub_subscription_id} {project_id} {location} {dataset_id} {inference_dicom_store_id}
# Create Pubsub channel.
gcloud beta pubsub topics create $1
gcloud beta pubsub subscriptions create $2 --topic $1
# Create a Cloud Healthcare DICOM store that publishes on the given Pubsub topic.
TOKEN=`gcloud beta auth application-default print-access-token`
NOTIFICATION_CONFIG="{notification_config: {pubsub_topic: \"projects/$3/topics/$1\"}}"
curl -s -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${TOKEN}" -d "${NOTIFICATION_CONFIG}" https://healthcare.googleapis.com/v1/projects/$3/locations/$4/datasets/$5/dicomStores?dicom_store_id=$6
# Enable Cloud Healthcare API to publish on given Pubsub topic.
PROJECT_NUMBER=`gcloud projects describe $3 | grep projectNumber | sed 's/[^0-9]//g'`
SERVICE_ACCOUNT="service-${PROJECT_NUMBER}@gcp-sa-healthcare.iam.gserviceaccount.com"
gcloud beta pubsub topics add-iam-policy-binding $1 --member="serviceAccount:${SERVICE_ACCOUNT}" --role="roles/pubsub.publisher"
```
Next, we will build the *inference module* using the [Cloud Build API](https://cloud.google.com/cloud-build/docs/api/reference/rest/). This will create a Docker container that will be stored in [Google Container Registry](https://cloud.google.com/container-registry/). The inference module code is found in *[inference.py](./scripts/inference/inference.py)*. The build script used to build the Docker container for this module is *[cloudbuild.yaml](./scripts/inference/cloudbuild.yaml)*. Progress of the build can be monitored on the [Cloud Build dashboard](https://console.cloud.google.com/cloud-build/builds?project=).
```
%%bash -s {project_id}
PROJECT_ID=$1
gcloud builds submit --config scripts/inference/cloudbuild.yaml --timeout 1h scripts/inference
```
Next, we will deploy the *inference module* to Kubernetes: we create a Kubernetes cluster and then a Deployment for the *inference module*.
```
%%bash -s {project_id} {location} {pubsub_subscription_name} {full_model_name} {inference_dicom_store_path}
gcloud container clusters create inference-module --region=$2 --scopes https://www.googleapis.com/auth/cloud-platform --num-nodes=1
PROJECT_ID=$1
SUBSCRIPTION_PATH=$3
MODEL_PATH=$4
INFERENCE_DICOM_STORE_PATH=$5
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: inference-module
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: inference-module
    spec:
      containers:
        - name: inference-module
          image: gcr.io/${PROJECT_ID}/inference-module:latest
          command:
            - "/opt/inference_module/bin/inference_module"
            - "--subscription_path=${SUBSCRIPTION_PATH}"
            - "--model_path=${MODEL_PATH}"
            - "--dicom_store_path=${INFERENCE_DICOM_STORE_PATH}"
            - "--prediction_service=AutoML"
EOF
```
Next, we will store a mammography DICOM instance from the TCIA dataset to the DICOM store. This is the image that we will request inference for. Pushing this instance to the DICOM store will result in a Pubsub message, which will trigger the *inference module*.
```
# DICOM Study/Series UID of input mammography image that we'll push for inference.
input_mammo_study_uid = "1.3.6.1.4.1.9590.100.1.2.85935434310203356712688695661986996009"
input_mammo_series_uid = "1.3.6.1.4.1.9590.100.1.2.374115997511889073021386151921807063992"
input_mammo_instance_uid = "1.3.6.1.4.1.9590.100.1.2.289923739312470966435676008311959891294"
from google.cloud import storage
from dicomweb_client.api import DICOMwebClient
from dicomweb_client import session_utils
import pydicom
storage_client = storage.Client()
bucket = storage_client.bucket('gcs-public-data--healthcare-tcia-cbis-ddsm', user_project=project_id)
blob = bucket.blob("dicom/{}/{}/{}.dcm".format(input_mammo_study_uid,input_mammo_series_uid,input_mammo_instance_uid))
blob.download_to_filename('example.dcm')
dataset = pydicom.dcmread('example.dcm')
session = session_utils.create_session_from_gcp_credentials()
study_path = dicom_path.FromPath(inference_dicom_store_path, study_uid=input_mammo_study_uid)
dicomweb_url = os.path.join(HEALTHCARE_API_URL, study_path.dicomweb_path_str)
dcm_client = DICOMwebClient(dicomweb_url, session)
dcm_client.store_instances(datasets=[dataset])
```
You should be able to observe the *inference module*'s logs by running the following command. In the logs, you should see that the inference module successfully received the Pubsub message and ran inference on the DICOM instance; the logs should also include the inference results. It can take a few minutes for the Kubernetes deployment to start up, so you may need to run this a few times.
```
!kubectl logs -l app=inference-module
```
You can also query the Cloud Healthcare DICOMWeb API (using QIDO-RS) to see that the DICOM structured report has been inserted for the study. The structured report contents can be found under tag **"0040A730"**.
You can optionally also use WADO-RS to retrieve the instance (e.g. for viewing).
```
dcm_client.search_for_instances(study_path.study_uid, fields=['all'])
```
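A QIDO-RS JSON response is a list of dictionaries keyed by DICOM tag, so the structured-report content can be pulled out of tag **"0040A730"** with a small helper (an illustrative sketch, not part of the codelab's scripts):

```python
def extract_sr_content(qido_results):
    """Return the content-sequence values (tag 0040A730) for each
    instance dict in a QIDO-RS JSON response, skipping instances
    that lack the tag (i.e. non-structured-report instances)."""
    contents = []
    for instance in qido_results:
        tag = instance.get('0040A730')
        if tag is not None:
            contents.append(tag.get('Value', []))
    return contents
```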
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Custom training: basics
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/r1/tutorials/eager/custom_training.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/r1/tutorials/eager/custom_training.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: This document was translated by the TensorFlow community. Since community translations are best-effort, there is no guarantee that this exactly matches the [official English documentation](https://www.tensorflow.org/?hl=en). If you can improve this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To get involved in writing or reviewing translations, email [docs-ko@tensorflow.org](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ko).
In the previous tutorial, we covered the TensorFlow APIs for automatic differentiation, a basic building block for machine learning. In this tutorial, we will use the TensorFlow primitives introduced there to do some simple machine learning.
TensorFlow also includes `tf.keras`, a high-level neural network API that provides useful abstractions to reduce boilerplate. We strongly recommend these higher-level APIs for people working with neural networks. However, in this short tutorial we cover training from first principles to establish a strong foundation.
## Setup
```
import tensorflow.compat.v1 as tf
```
## Variables
Tensors in TensorFlow are immutable, stateless objects. Machine learning models, however, need to have changing state: as your model trains, the same code computing predictions should behave differently over time (hopefully toward a lower loss!). To represent state that must change over the course of the computation, you can rely on the fact that Python is a stateful programming language:
```
# Using Python state
x = tf.zeros([10, 10])
x += 2  # This is equivalent to x = x + 2, which does not mutate the original value of x.
print(x)
```
TensorFlow has stateful operations built in, and these are often easier to use than low-level Python representations of your state. For example, to represent weights in a model, it is usually convenient and efficient to use TensorFlow variables.
A variable is an object that stores a value and, when used in a TensorFlow computation, implicitly reads from this stored value. There are operations (`tf.assign_sub`, `tf.scatter_update`, etc.) that manipulate the value stored in a TensorFlow variable.
```
v = tf.Variable(1.0)
assert v.numpy() == 1.0
# Re-assign the value.
v.assign(3.0)
assert v.numpy() == 3.0
# Use `v` in a TensorFlow operation like tf.square() and reassign.
v.assign(tf.square(v))
assert v.numpy() == 9.0
```
Computations using variables are automatically traced when gradients are computed. For variables representing embeddings, TensorFlow will do sparse updates by default, which are more computation- and memory-efficient.
Using variables is also a quick way to let a reader of your code know that this piece of state is mutable.
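For example, a computation involving a variable inside a `GradientTape` context is differentiated automatically (a minimal sketch):

```python
import tensorflow.compat.v1 as tf

v = tf.Variable(2.0)
with tf.GradientTape() as tape:
    y = v * v + 3.0 * v   # y = v^2 + 3v
# dy/dv = 2v + 3, i.e. 7.0 at v = 2
grad = tape.gradient(y, v)
```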
## Example: fitting a linear model
Let's now use the concepts we have covered so far --- `Tensor`, `GradientTape`, `Variable` --- to build and train a simple model. This typically involves a few steps:
1. Define the model.
2. Define a loss function.
3. Obtain training data.
4. Run through the training data and use an "optimizer" to adjust the variables to fit the data.
In this tutorial, we'll walk through a simple example of a linear model: `f(x) = x * W + b`, which has two variables, `W` and `b`. Furthermore, we'll synthesize the data such that a well-trained model would have `W = 3.0` and `b = 2.0`.
### Define the model
Let's define a simple class to encapsulate the variables and the computation.
```
class Model(object):
    def __init__(self):
        # Initialize the variables to (5.0, 0.0).
        # In practice, these should be initialized to random values.
        self.W = tf.Variable(5.0)
        self.b = tf.Variable(0.0)

    def __call__(self, x):
        return self.W * x + self.b

model = Model()
assert model(3.0).numpy() == 15.0
```
### Define a loss function
A loss function measures how well the model's output for a given input matches the desired output. Let's use the standard mean squared error loss.
```
def loss(predicted_y, desired_y):
    return tf.reduce_mean(tf.square(predicted_y - desired_y))
```
### Obtain training data
Let's synthesize the training data with some noise.
```
TRUE_W = 3.0
TRUE_b = 2.0
NUM_EXAMPLES = 1000
inputs = tf.random_normal(shape=[NUM_EXAMPLES])
noise = tf.random_normal(shape=[NUM_EXAMPLES])
outputs = inputs * TRUE_W + TRUE_b + noise
```
Before training the model, let's visualize where it currently stands: we'll plot the model's predictions in red and the training data in blue.
```
import matplotlib.pyplot as plt
plt.scatter(inputs, outputs, c='b')
plt.scatter(inputs, model(inputs), c='r')
plt.show()
print('Current loss: ')
print(loss(model(inputs), outputs).numpy())
```
### Define a training loop
We now have our network and our training data. Let's train the model: using the training data, we update the model's variables (`W` and `b`) so that the loss goes down via [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent). There are many variants of gradient descent, implemented in `tf.train.Optimizer`; using those implementations is highly recommended, but in this tutorial we'll implement the basic math ourselves.
```
def train(model, inputs, outputs, learning_rate):
    with tf.GradientTape() as t:
        current_loss = loss(model(inputs), outputs)
    dW, db = t.gradient(current_loss, [model.W, model.b])
    model.W.assign_sub(learning_rate * dW)
    model.b.assign_sub(learning_rate * db)
```
Finally, let's repeatedly run through the training data and see how `W` and `b` evolve.
```
model = Model()
# Collect the history of W-values and b-values to plot later.
Ws, bs = [], []
epochs = range(10)
for epoch in epochs:
    Ws.append(model.W.numpy())
    bs.append(model.b.numpy())
    current_loss = loss(model(inputs), outputs)
    train(model, inputs, outputs, learning_rate=0.1)
    print('Epoch %2d: W=%1.2f b=%1.2f, loss=%2.5f' %
          (epoch, Ws[-1], bs[-1], current_loss))
# Plot the saved values.
plt.plot(epochs, Ws, 'r',
epochs, bs, 'b')
plt.plot([TRUE_W] * len(epochs), 'r--',
[TRUE_b] * len(epochs), 'b--')
plt.legend(['W', 'b', 'true W', 'true_b'])
plt.show()
```
## Next steps
In this tutorial, we covered variables, and we built and trained a simple linear model using the TensorFlow primitives discussed so far.
In theory, this is pretty much all you need to use TensorFlow for machine learning research. In practice, particularly for neural networks, higher-level APIs like `tf.keras` are much more convenient, since they provide higher-level building blocks (called "layers"), utilities to save and restore state, a suite of loss functions, a suite of optimization strategies, and more.
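For comparison, the same linear fit takes only a few lines with `tf.keras` (a sketch; the single-unit layer and the learning rate are illustrative choices, not part of the tutorial above):

```python
import numpy as np
import tensorflow as tf

np.random.seed(0)
# Synthesize the same data as above: y = 3x + 2 + noise.
inputs = np.random.normal(size=(1000, 1)).astype(np.float32)
outputs = 3.0 * inputs + 2.0 + np.random.normal(size=(1000, 1)).astype(np.float32)

# A single Dense unit is exactly the linear model f(x) = x * W + b.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer=tf.keras.optimizers.SGD(0.1), loss='mse')
model.fit(inputs, outputs, epochs=10, verbose=0)

W, b = model.layers[0].get_weights()  # should approach 3.0 and 2.0
```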
<a href="https://colab.research.google.com/github/masvgp/math_3280/blob/main/CS246_Colab_5.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# CS246 - Colab 5
## PageRank
### Setup
First of all, we authenticate a Google Drive client to download the dataset we will be processing in this Colab.
**Make sure to follow the interactive instructions.**
```
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
id='1EoolSK32_U74I4FeLox88iuUB_SUUYsI'
downloaded = drive.CreateFile({'id': id})
downloaded.GetContentFile('web-Stanford.txt')
```
If you executed the cells above, you should be able to see the dataset we will use for this Colab under the "Files" tab on the left panel.
Next, we import some of the common libraries needed for our task.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
### Data Loading
For this Colab we will be using [NetworkX](https://networkx.github.io), a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks.
The dataset we will analyze is a snapshot of the Web Graph centered around [stanford.edu](https://stanford.edu), collected in 2002. Nodes represent pages from Stanford University (stanford.edu) and directed edges represent hyperlinks between them. [[More Info]](http://snap.stanford.edu/data/web-Stanford.html)
```
import networkx as nx
G = nx.read_edgelist('web-Stanford.txt', create_using=nx.DiGraph)
print(nx.info(G))
```
### Your Task
To begin with, let's simplify our analysis by ignoring the dangling nodes and the disconnected components in the original graph.
Use NetworkX to identify the **largest** weakly connected component in the ```G``` graph. From now on, use this connected component for all the following tasks.
```
# YOUR CODE HERE
```
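One possible approach is sketched below (illustrative, not the official solution):

```python
import networkx as nx

def largest_wcc(g):
    """Return the subgraph induced by the largest weakly
    connected component of the directed graph g."""
    nodes = max(nx.weakly_connected_components(g), key=len)
    return g.subgraph(nodes).copy()
```

`G_wcc = largest_wcc(G)` would then be the graph used for the following tasks.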
Compute the PageRank vector, using the default parameters in NetworkX: [`networkx.pagerank`](https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.link_analysis.pagerank_alg.pagerank.html#networkx.algorithms.link_analysis.pagerank_alg.pagerank)
```
# YOUR CODE HERE
```
In 1999, Barabási and Albert proposed an elegant mathematical model which can generate graphs with topological properties similar to the Web Graph (also called Scale-free Networks).
If you complete the steps below, you should obtain some empirical evidence that the Random Graph model is inferior compared to the Barabási–Albert model when it comes to generating a graph resembling the World Wide Web!
As such, we will use two different graph generator methods, and then we will test how well they approximate the Web Graph structure by means of comparing the respective PageRank vectors. [[NetworkX Graph generators]](https://networkx.github.io/documentation/stable/reference/generators.html#)
Using for both methods ```seed = 1```, generate:
1. a random graph (with the fast method), setting ```n``` equal to the number of nodes in the original connected component, and ```p = 0.00008```
2. a Barabasi-Albert graph (with the standard method), setting ```n``` equal to the number of nodes in the original connected component, and finding the right ***integer*** value for ```m``` such as the resulting number of edges **approximates by excess** the number of edges in the original connected component
and compute the PageRank vectors for both graphs.
```
# YOUR CODE HERE
```
Compare the PageRank vectors obtained on the generated graphs with the PageRank vector you computed on the original connected component.
**Sort** the components of each vector by value, and use cosine similarity as similarity measure.
Feel free to use any implementation of the cosine similarity available in third-party libraries, or implement your own with ```numpy```.
```
# YOUR CODE HERE
```
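A minimal `numpy` implementation of the comparison might look like this (a sketch; function and variable names are illustrative):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two 1-D vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def compare_pageranks(pr_a, pr_b):
    """Sort each PageRank dict's values in descending order and compare
    the resulting vectors (truncated to the shorter length)."""
    a = sorted(pr_a.values(), reverse=True)
    b = sorted(pr_b.values(), reverse=True)
    n = min(len(a), len(b))
    return cosine_similarity(a[:n], b[:n])
```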
Once you have working code for each cell above, **head over to Gradescope, read carefully the questions, and submit your solution for this Colab**!
## Activity 4.06: Visualizing the Impact of Education on Annual Salary and Weekly Working Hours
You are asked to find out whether people's education has an influence on their annual salary and weekly working hours. You survey 500 people in the state of New York about their age, annual salary, weekly working hours, and education. First, you want to know the percentage of each education type, so use a tree map. Two violin plots will then visualize the annual salary and the weekly working hours; compare in each case to what extent education has an impact.
It should also be taken into account that all visualizations in this activity are designed to be suitable for color blind people. In principle, this is always a good idea to bear in mind.
```
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import squarify
sns.set()
```
Use pandas to read the dataset age_salary_hours.csv located in the Dataset folder. Use a colormap that is suitable for colorblind people.
```
data = pd.read_csv("../../Datasets/age_salary_hours.csv")
data.head()
```
Use a tree map to visualize the percentages for each education type.
```
# Compute percentages from dataset
degrees = set(data['Education'])
percentages = []
for degree in degrees:
    percentages.append(data[data['Education'] == degree].shape[0])
percentages = np.array(percentages)
percentages = ((percentages / percentages.sum()) * 100)
percentages
# Create labels for tree map
labels = [degree + '\n({0:.1f}%)'.format(percentage) for degree, percentage in zip(degrees, percentages)]
labels
# Create figure
plt.figure(figsize=(9, 6), dpi=200)
squarify.plot(percentages, label=labels, color=sns.color_palette('colorblind', len(degrees)))
plt.axis('off')
# Add title
plt.title('Degrees')
# Show plot
plt.show()
```
Create a subplot with two rows to visualize two violin plots for the annual salary and the weekly working hours, respectively. Compare in each case to what extent education has an impact. To exclude pensioners, only consider people younger than 65. Use a colormap that is suitable for colorblind people. `subplots()` can be used in combination with Seaborn's plots by simply passing the `ax` argument with the respective `Axes`.
```
ordered_degrees = sorted(list(degrees))
ordered_degrees = [ordered_degrees[4], ordered_degrees[3], ordered_degrees[1], ordered_degrees[0], ordered_degrees[2]]
ordered_degrees
data = data.loc[data['Age'] < 65]
data
# Set color palette to colorblind
sns.set_palette('colorblind')
# Create subplot with two rows
fig, ax = plt.subplots(2, 1, dpi=200, figsize=(8, 8))
sns.violinplot('Education', 'Annual Salary', data=data, cut=0, order=ordered_degrees, ax=ax[0])
ax[0].set_xticklabels(ax[0].get_xticklabels(), rotation=10)
sns.violinplot('Education', 'Weekly hours', data=data, cut=0, order=ordered_degrees, ax=ax[1])
ax[1].set_xticklabels(ax[1].get_xticklabels(), rotation=10)
plt.tight_layout()
# Add title
fig.suptitle('Impact of Education on Annual Salary and Weekly Working Hours')
# Show figure
plt.show()
```
# Lab 2: networkX Drawing and Network Properties
```
import matplotlib.pyplot as plt
import pandas as pd
import networkx as nx
```
## TOC
1. [Q1](#Q1)
2. [Q2](#Q2)
3. [Q3](#Q3)
4. [Q4](#Q4)
```
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(11, 8))
ax = axes.flatten()
path = nx.path_graph(5)
nx.draw_networkx(path, with_labels=True, ax=ax[0])
ax[0].set_title('Path')
cycle = nx.cycle_graph(5)
nx.draw_networkx(cycle, node_color='green', with_labels=True, ax=ax[1])
ax[1].set_title('Cycle')
complete = nx.complete_graph(5)
nx.draw_networkx(complete, node_color='#A0CBE2', edge_color='red', width=2, with_labels=False, ax=ax[2])
ax[2].set_title('Complete')
star = nx.star_graph(5)
pos=nx.spring_layout(star)
nx.draw_networkx(star, pos, with_labels=True, ax=ax[3])
ax[3].set_title('Star')
for i in range(4): ax[i].set_axis_off()
plt.show()
```
### Q1:
*Use one sentence each to briefly describe the characteristics of each graph
type (its shape, edges, etc..)*
$V$ = a set of vertices, where $V = \{v_1, v_2, \ldots, v_n\}$
$E$ = a set of edges, where $E \subseteq \{\{v_x,v_y\}\mid v_x,v_y\in V\}$
Let $G$ = ($V$, $E$) be an undirected graph
- **Path Graph** := Suppose there are n vertices ($v_0, v_1, ... , v_n$) in $G$, such that $\forall e_{(v_x,v_y)} \in E $ | $0 \leq x \leq n-1$; $y = x + 1$
- **Cycle Graph** := Suppose there are n vertices ($v_0, v_1, ... , v_n$) in $G$, such that $\forall e_{(v_x,v_y)} \in E $ | $0 \leq x \leq n; \{(0 \leq x \leq n-1) \Rightarrow (y = x + 1)\} \land \{(x = n) \Rightarrow (y = 0)\}$
- **Complete Graph**:= Suppose there are n vertices ($v_0, v_1, ... , v_n$) in $G$, such that $\forall e_{(v_x,v_y)} \in E $ | $x \neq y; 0 \leq x,y \leq n$
- **Star Graph** := Suppose there are n vertices ($v_0, v_1, ... , v_n$) in $G$, such that $\forall e_{(v_x,v_y)} \in E $ | $x = 0; 1 \leq y \leq n$
```
G = nx.lollipop_graph(3,2)
nx.draw(G, with_labels=True)
plt.show()
list(nx.connected_components(G))
nx.clustering(G)
```
### Q2:
*How many connected components are there in the graph? What are they?*
There is only one connected component in the graph: it contains all five vertices of the graph.
### Q3:
*Which nodes have the highest local clustering coefficient? Explain (from the
definition) why they have high clustering coefficient.*
Nodes 0 and 1 have the highest local clustering coefficient of 1: each has exactly two neighbors (the other of the two and node 2), and those neighbors are connected to each other, so $(2 \times 1 \text{ link between neighbors}) \div (2 \text{ degree} \times (2 - 1)) = 1$.
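This calculation can be checked directly against NetworkX:

```python
import networkx as nx

G = nx.lollipop_graph(3, 2)
# Node 0's neighbors are {1, 2}; they are connected to each other,
# so there is 1 link between neighbors and node 0 has degree 2:
k = G.degree(0)
links_between_neighbors = 1
manual = 2 * links_between_neighbors / (k * (k - 1))
assert manual == nx.clustering(G, 0) == 1.0
```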
```
def netMeta(net):
    meta = {}
    meta["radius"] = nx.radius(net)
    meta["diameter"] = nx.diameter(net)
    meta["eccentricity"] = nx.eccentricity(net)
    meta["center"] = nx.center(net)
    meta["periphery"] = nx.periphery(net)
    meta["density"] = nx.density(net)
    return meta
netMeta(G)
def netAna(net):
    cols = ['Node name', "Betweenness centrality", "Degree centrality", "Closeness centrality", "Eigenvector centrality"]
    rows = []
    a = nx.betweenness_centrality(net)
    b = nx.degree_centrality(net)
    c = nx.closeness_centrality(net)
    d = nx.eigenvector_centrality(net)
    for v in net.nodes():
        rows.append([v, a[v], b[v], c[v], d[v]])
    df = pd.DataFrame(rows, columns=cols)
    df.set_index('Node name', inplace=True)
    return df
G_stat = netAna(G)
G_stat
G_stat.sort_values(by=['Eigenvector centrality'])
```
### Q4:
*Which node(s) has the highest betweenness, degree, closeness, eigenvector
centrality? Explain using the definitions and graph structures.*
Node 2 has the highest betweenness, degree, closeness, and eigenvector centrality
Because node 2 has the most geodesics passing through it, it has the highest degree of 3, it has the shortest average path length, and it has the most references from its neighbors.
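This can be verified by recomputing the centralities and taking the argmax of each (a quick check):

```python
import networkx as nx

G = nx.lollipop_graph(3, 2)
centralities = {
    'betweenness': nx.betweenness_centrality(G),
    'degree': nx.degree_centrality(G),
    'closeness': nx.closeness_centrality(G),
    'eigenvector': nx.eigenvector_centrality(G),
}
# Node with the highest value for each measure.
tops = {name: max(cent, key=cent.get) for name, cent in centralities.items()}
print(tops)  # node 2 tops every measure
```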
```
pathlengths = []
print("source vertex {target:length, }")
for v in G.nodes():
    spl = dict(nx.single_source_shortest_path_length(G, v))
    print('{} {} '.format(v, spl))
    for p in spl:
        pathlengths.append(spl[p])
print('')
print("average shortest path length %s" % (sum(pathlengths) / len(pathlengths)))
dist = {}
for p in pathlengths:
    if p in dist:
        dist[p] += 1
    else:
        dist[p] = 1
print('')
print("length #paths")
verts = dist.keys()
for d in sorted(verts):
    print('%s %d' % (d, dist[d]))
mapping = {0: 'a', 1: 'b', 2: 'c', 3: 'd', 4: 'e'}
H = nx.relabel_nodes(G, mapping)
nx.draw(H, with_labels=True)
plt.show()
```
# Lesson 1 Experiments
This section just reproduces lesson 1 logic using my own code and with 30 tennis and 30 basketball player images. I chose all male players for simplicity.
```
# Put these at the top of every notebook, to get automatic reloading and inline plotting
%reload_ext autoreload
%autoreload 2
%matplotlib inline
# This file contains all the main external libs we'll use
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
from typing import List, Union
from pathlib import Path
```
## Download the Sample Data
Only execute the cell below once! If the commands below don't work, try the direct link [here](https://1drv.ms/u/s!AkhwiUY5vHPCs03Q26908HIwKFkG).
```
!wget 'https://onedrive.live.com/download?cid=C273BC3946897048&resid=C273BC3946897048%216605&authkey=AIVFQLj7IoJYiz4' -O foo.zip
!unzip -d data foo.zip
!rm foo.zip
```
## Load the Sample Data
```
sz=224
path = Path('data/tennisbball')
path.absolute(), list(path.glob('*'))
sample = plt.imread(next(iter((path / 'valid' / 'tennis').iterdir())))
plt.imshow(sample)
plt.figure()
sample = plt.imread(next(iter((path / 'valid' / 'bball').iterdir())))
plt.imshow(sample)
sample.shape, sample[:4,:4]
torch.cuda.is_available(),torch.backends.cudnn.enabled
```
## Construct the Model
Define the model architecture
```
#tfms_from_model -- model based image transforms (preprocessing stats)
arch=resnet50
data = ImageClassifierData.from_paths(path, test_name='test', test_with_labels=True, tfms=tfms_from_model(arch, sz))
#precompute=True to save conv layer activations! pass False if you want to run the data viz below
learner = ConvLearner.pretrained(f=arch, data=data, precompute=False)
```
## Train a Model
This section trains a model using transfer learning.
```
learner.fit(0.01, 15)
#uncomment line below to save the model
#learner.save('tennis_v_bball.lrnr')
```
## Load/Visualize an Existing Model
Or if you've already trained a model, skip the above section and start from here.
```
learner.load('tennis_v_bball.lrnr')
probs = np.exp(learner.predict())
probs
#TODO: improve
def display_images(images: List[Union[Path, np.ndarray]], columns: int,
                   titles: List[str] = None, figsize=None) -> None:
    if not titles:
        titles = [f'Image {i+1}' for i in range(len(images))]
    rows = len(images) // columns + int(len(images) % columns > 0)
    if figsize is None:
        figsize = (60, 60)
    plt.figure(figsize=figsize)
    for i, (image, title) in enumerate(zip(images, titles)):
        if isinstance(image, Path):
            image = np.array(PIL.Image.open(image))
        plt.subplot(rows, columns, i + 1)
        plt.imshow(image)
        plt.title(title, fontsize=10 * columns)
        plt.axis('off')
#val images
predictions = probs.argmax(axis=1)
images, titles = [], []
for prob, pclass, fname in zip(probs, predictions, data.val_ds.fnames):
    images.append(path / fname)
    titles.append(f'{fname} -- {prob[pclass]:.{3}f} ({data.classes[pclass]})')
display_images(images, 4, titles)

test_probs = np.exp(learner.predict(is_test=True))
test_predictions = test_probs.argmax(axis=1)

#test images
images, titles = [], []
for prob, pclass, fname in zip(test_probs, test_predictions, data.test_ds.fnames):
    images.append(path / fname)
    titles.append(f'{fname} -- {prob[pclass]:.{3}f} ({data.classes[pclass]})')
display_images(images, 4, titles)
```
## Dataviz -- Activations
```
#check out the model structure
model = learner.model
model
#
# utilize torch hooks to capture the activations for any conv layer. for simplicity we use a
# batch size of 1.
#
class ActivationHook:
    def __init__(self):
        self.output = []

    def __call__(self, module, input, output):
        self.output = output.data

def find_layers(module, ltype):
    rv = []
    if isinstance(module, ltype):
        rv.append(module)
    else:
        for c in module.children():
            rv.extend(find_layers(c, ltype))
    return rv

def capture_activations(model, x):
    layers = find_layers(model, nn.Conv2d)
    hooks = [ActivationHook() for _ in layers]
    handles = [conv.register_forward_hook(hook) for conv, hook in zip(layers, hooks)]
    model(x)
    for h in handles:
        h.remove()
    return [h.output for h in hooks]
bs = data.bs
data.bs = 1
dl = data.get_dl(data.test_ds, False)
i = iter(dl)
ball_x = next(i)[0]
noball_x = next(i)[0]
data.bs = bs
ball_activations = capture_activations(model, Variable(ball_x))
noball_activations = capture_activations(model, Variable(noball_x))
for i, layer_output in enumerate(ball_activations):
    print(f'Layer {i}: {layer_output.squeeze().shape}')
#layer 5, filter 18, 36 seems to like circular type things
layer_idx = 0
images = []
titles = []
num_filters = ball_activations[layer_idx].shape[1]
asize = ball_activations[layer_idx].shape[2]
def filter_activations_to_image(activations, lidx, fidx):
    a = activations[lidx].squeeze()  # choose conv layer & discard batch dimension
    a = a[fidx]  # choose conv filter
    a = (a - a.mean()) / (3 * a.std()) + 0.5  # center and scale down
    a = a.clamp(0, 1).numpy()  # and finally clamp
    return a

buff_size = 10
for filter_idx in range(num_filters):
    a0 = filter_activations_to_image(ball_activations, layer_idx, filter_idx)
    a1 = filter_activations_to_image(noball_activations, layer_idx, filter_idx)
    z = np.hstack([a0, np.ones((asize, buff_size)), a1])
    plt.imshow(z, cmap='gray')
    plt.axis('off')
    plt.title(f'Filter {filter_idx}')
    plt.show()
```
## DataViz -- Filters
We can also look at filters. This is easiest at the first layer where each filter is 3 dimensional.
```
import matplotlib.colors as mc
import math
conv = find_layers(learner.model, nn.Conv2d)[0]
weight = conv.weight.data.numpy()
num_filters, depth, w, h = weight.shape
rows = int(num_filters**0.5)
cols = int(math.ceil(num_filters/rows))
border = 1
img = np.zeros((depth, rows*h + (1+rows)*border, cols*w + (1+cols)*border))
for f in range(num_filters):
    r = f // cols  # note: f // rows only gives the right row when rows == cols
    c = f % cols
    x = border + r * (w + border)
    y = border + c * (w + border)
    norm = mc.Normalize()
    img[:, x:x+w, y:y+h] = norm(weight[f, :, :, :])
plt.figure(figsize=(12,12))
plt.imshow(img.transpose(1,2,0))
_ = plt.axis('off')
```
We can also visualize subsequent layers, though it's not so pretty. We can map each dimension of each filter back into grayscale.
```
# for i, conv in enumerate(find_layers(learner.model, nn.Conv2d)):
# print(conv, conv.weight.shape)
weight = find_layers(learner.model, nn.Conv2d)[2].weight.data.numpy()
num_filters, depth, w, h = weight.shape
rows = num_filters
cols = depth
border = 1
img = np.zeros((rows*h + (1+rows)*border, cols*w + (1+cols)*border))
for f in range(num_filters):
    norm = mc.Normalize()
    normed = norm(weight[f, :, :, :])  # normalize over all the weights in a filter
    for d in range(depth):
        r = f
        c = d
        x = border + r * (w + border)
        y = border + c * (w + border)
        img[x:x+w, y:y+h] = normed[d]
plt.figure(figsize=(18,18))
plt.imshow(img, cmap='gray')
_ = plt.axis('off')
```
## Occlusion
We can also mask out portions of the image by sliding a gray block over the image repeatedly and record how the predictions change.
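Before the fastai-specific cells below, the core sliding-window idea — including the stride the later TODO asks for — can be sketched with NumPy alone. Here `predict` is a hypothetical stand-in for the real model, returning a single class probability:

```python
import numpy as np

def occlusion_heatmap(image, predict, block_size=50, stride=10, fill=0.75):
    """Slide a gray block over the image; average the class probability
    over every block position that covered each pixel."""
    w, h, _ = image.shape
    prob_sum = np.zeros((w, h))
    counts = np.zeros((w, h))
    for x in range(0, w - block_size + 1, stride):
        for y in range(0, h - block_size + 1, stride):
            occluded = image.copy()
            occluded[x:x + block_size, y:y + block_size] = fill
            p = predict(occluded)  # probability of the target class
            prob_sum[x:x + block_size, y:y + block_size] += p
            counts[x:x + block_size, y:y + block_size] += 1
    return prob_sum / np.maximum(counts, 1)

# Smoke test with a dummy "model" that just returns the mean pixel value.
img = np.random.rand(64, 64, 3).astype(np.float32)
heat = occlusion_heatmap(img, predict=lambda im: float(im.mean()),
                         block_size=16, stride=8)
print(heat.shape)  # (64, 64)
```

A larger stride trades heatmap resolution for far fewer forward passes; the cells below use stride 1, which is why they are slow.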
```
block_size = 50
image_path = path / data.test_ds.fnames[0]
orig_image = open_image(image_path)
orig_image[50:250, 50:250] = np.full((200, 200, 3), 0.75)
scaled_image = Scale(sz=224).do_transform(orig_image, False)
# orig_image[0:block_size, 0:block_size] = np.full((block_size, block_size, 3), 0.75)
plt.imshow(orig_image)
_ = plt.axis('off')
block_size = 50
image_path = path / data.test_ds.fnames[0]
orig_image = open_image(image_path)
# image[0:200, 0:200] = np.full((200,200,3), 0.75)
scaled_image = Scale(sz=224).do_transform(orig_image, False)
# image[0:block_size, 0:block_size] = np.full((block_size,block_size,3), 0.75)
# plt.imshow(image)
plt.axis('off')
#the prediction for the smaller image should be essentially unchanged
print(learner.model(VV(tfms_from_model(arch, sz)[1](scaled_image)).unsqueeze(0)).exp())
w,h,_ = scaled_image.shape
learner.model.eval()
t0 = time.time()
prob_map = np.zeros((2, w, h))
z = 0
#TODO: add stride for efficiency.
for x in tqdm(range(1 - block_size, w)):
    for y in range(1 - block_size, h):
        image = np.array(scaled_image)
        x0, x1 = max(0, x), min(w, x + block_size)
        y0, y1 = max(0, y), min(h, y + block_size)
        image[x0:x1, y0:y1] = np.full((x1 - x0, y1 - y0, 3), 0.75)
        image = tfms_from_model(arch, sz)[1](image)
        predictions = learner.model(VV(image).unsqueeze(0))
        prob_map[0, x0:x1, y0:y1] += predictions.exp().data[0][0]
        prob_map[1, x0:x1, y0:y1] += 1
np.save('probs-heatmap.npy', prob_map)
heatmap = prob_map[0]/prob_map[1]
plt.subplot(1,2,1)
plt.imshow(1 - heatmap, cmap='jet')
plt.axis('off')
plt.subplot(1,2,2)
plt.imshow(orig_image)
_ = plt.axis('off')
block_size = 50
image_path = path / 'valid/bball/29.jpg'
orig_image = open_image(image_path)
# image[0:200, 0:200] = np.full((200,200,3), 0.75)
scaled_image = Scale(sz=224).do_transform(orig_image, False)
# orig_image[0:block_size, 0:block_size] = np.full((block_size,block_size,3), 0.75)
# plt.imshow(orig_image)
# plt.axis('off')
#the prediction for the smaller image should be essentially unchanged
print(learner.model(VV(tfms_from_model(arch, sz)[1](scaled_image)).unsqueeze(0)).exp())
w,h,_ = scaled_image.shape
learner.model.eval()
t0 = time.time()
prob_map = np.zeros((2, w, h))
z = 0
#TODO: add stride for efficiency.
for x in tqdm(range(1 - block_size, w)):
    for y in range(1 - block_size, h):
        image = np.array(scaled_image)
        x0, x1 = max(0, x), min(w, x + block_size)
        y0, y1 = max(0, y), min(h, y + block_size)
        image[x0:x1, y0:y1] = np.full((x1 - x0, y1 - y0, 3), 0.75)
        image = tfms_from_model(arch, sz)[1](image)
        predictions = learner.model(VV(image).unsqueeze(0))
        prob_map[0, x0:x1, y0:y1] += predictions.exp().data[0][0]
        prob_map[1, x0:x1, y0:y1] += 1
np.save('probs-giannis-heatmap.npy', prob_map)
heatmap = prob_map[0]/prob_map[1]
plt.subplot(1,2,1)
plt.imshow(1 - heatmap, cmap='jet')
plt.axis('off')
plt.subplot(1,2,2)
plt.imshow(orig_image)
_ = plt.axis('off')
block_size = 50
image_path = path / 'valid/tennis/23.jpg'
orig_image = open_image(image_path)
# image[0:200, 0:200] = np.full((200,200,3), 0.75)
scaled_image = Scale(sz=224).do_transform(orig_image, False)
# orig_image[0:block_size, 0:block_size] = np.full((block_size,block_size,3), 0.75)
plt.imshow(scaled_image)
# plt.axis('off')
#the prediction for the smaller image should be essentially unchanged
print(learner.model(VV(tfms_from_model(arch, sz)[1](scaled_image)).unsqueeze(0)).exp())
w,h,_ = scaled_image.shape
learner.model.eval()
t0 = time.time()
prob_map = np.zeros((2, w, h))
z = 0
#TODO: add stride for efficiency.
for x in tqdm(range(1 - block_size, w)):
    for y in range(1 - block_size, h):
        image = np.array(scaled_image)
        x0, x1 = max(0, x), min(w, x + block_size)
        y0, y1 = max(0, y), min(h, y + block_size)
        image[x0:x1, y0:y1] = np.full((x1 - x0, y1 - y0, 3), 0.75)
        image = tfms_from_model(arch, sz)[1](image)
        predictions = learner.model(VV(image).unsqueeze(0))
        prob_map[0, x0:x1, y0:y1] += predictions.exp().data[0][0]
        prob_map[1, x0:x1, y0:y1] += 1
np.save('probs-tennis-heatmap.npy', prob_map)
heatmap = prob_map[0]/prob_map[1]
plt.subplot(1,2,1)
plt.imshow(heatmap, cmap='jet')
plt.axis('off')
plt.subplot(1,2,2)
plt.imshow(orig_image)
_ = plt.axis('off')
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Recurrent Neural Networks (RNN) with Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/keras/rnn">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/keras/rnn.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/keras/rnn.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a target="_blank" href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/keras/rnn.ipynb">
<img src="https://www.tensorflow.org/images/download_logo_32px.png" />
Download notebook</a>
</td>
</table>
Recurrent neural networks (RNN) are a class of neural networks that is powerful for modeling sequence data such as time series or natural language.
Schematically, a RNN layer uses a `for` loop to iterate over the timesteps of a sequence, while maintaining an internal state that encodes information about the timesteps it has seen so far.
The Keras RNN API is designed with a focus on:
- **Ease of use**: the built-in `tf.keras.layers.RNN`, `tf.keras.layers.LSTM`, `tf.keras.layers.GRU` layers enable you to quickly build recurrent models without having to make difficult configuration choices.
- **Ease of customization**: You can also define your own RNN cell layer (the inner part of the `for` loop) with custom behavior, and use it with the generic `tf.keras.layers.RNN` layer (the `for` loop itself). This allows you to quickly prototype different research ideas in a flexible way with minimal code.
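The `for`-loop-with-internal-state idea can be sketched in a few lines of NumPy. The weights below are random placeholders, purely illustrative, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
timesteps, input_dim, units = 5, 3, 4

# Placeholder parameters (randomly initialized, illustrative only).
W = rng.normal(size=(input_dim, units))  # input -> state
U = rng.normal(size=(units, units))      # state -> state (the recurrence)
b = np.zeros(units)

x = rng.normal(size=(timesteps, input_dim))
h = np.zeros(units)  # internal state, carried across timesteps

outputs = []
for t in range(timesteps):  # the "for loop over timesteps"
    h = np.tanh(x[t] @ W + h @ U + b)
    outputs.append(h)

outputs = np.stack(outputs)
print(outputs.shape)  # (5, 4): one output vector per timestep
print(h.shape)        # (4,):   the final state, as returned by default
```

Everything below is about letting Keras manage this loop, its state, and its parameters for you.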
## Setup
```
import collections
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
```
## Build a simple model
There are three built-in RNN layers in Keras:
1. `tf.keras.layers.SimpleRNN`, a fully-connected RNN where the output from the previous timestep is fed to the next timestep.
2. `tf.keras.layers.GRU`, first proposed in [Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation](https://arxiv.org/abs/1406.1078).
3. `tf.keras.layers.LSTM`, first proposed in [Long Short-Term Memory](https://www.bioinf.jku.at/publications/older/2604.pdf).
In early 2015, Keras had the first reusable open-source Python implementations of LSTM and GRU.
Here is a simple example of a `Sequential` model that processes sequences of integers, embeds each integer into a 64-dimensional vector, then processes the sequence of vectors using a `LSTM` layer.
```
model = tf.keras.Sequential()
# Add an Embedding layer expecting input vocab of size 1000, and
# output embedding dimension of size 64.
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# Add a LSTM layer with 128 internal units.
model.add(layers.LSTM(128))
# Add a Dense layer with 10 units.
model.add(layers.Dense(10))
model.summary()
```
## Outputs and states
By default, the output of a RNN layer contains a single vector per sample. This vector is the RNN cell output corresponding to the last timestep, containing information about the entire input sequence. The shape of this output is `(batch_size, units)` where `units` corresponds to the `units` argument passed to the layer's constructor.
A RNN layer can also return the entire sequence of outputs for each sample (one vector per timestep per sample), if you set `return_sequences=True`. The shape of this output is `(batch_size, timesteps, units)`.
```
model = tf.keras.Sequential()
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# The output of GRU will be a 3D tensor of shape (batch_size, timesteps, 256)
model.add(layers.GRU(256, return_sequences=True))
# The output of SimpleRNN will be a 2D tensor of shape (batch_size, 128)
model.add(layers.SimpleRNN(128))
model.add(layers.Dense(10))
model.summary()
```
In addition, a RNN layer can return its final internal state(s). The returned states can be used to resume the RNN execution later, or [to initialize another RNN](https://arxiv.org/abs/1409.3215). This setting is commonly used in the encoder-decoder sequence-to-sequence model, where the encoder final state is used as the initial state of the decoder.
To configure a RNN layer to return its internal state, set the `return_state` parameter to `True` when creating the layer. Note that `LSTM` has 2 state tensors, but `GRU` only has one.
To configure the initial state of the layer, just call the layer with additional keyword argument `initial_state`.
Note that the shape of the state needs to match the unit size of the layer, like in the example below.
```
encoder_vocab = 1000
decoder_vocab = 2000
encoder_input = layers.Input(shape=(None, ))
encoder_embedded = layers.Embedding(input_dim=encoder_vocab, output_dim=64)(encoder_input)
# Return states in addition to output
output, state_h, state_c = layers.LSTM(
64, return_state=True, name='encoder')(encoder_embedded)
encoder_state = [state_h, state_c]
decoder_input = layers.Input(shape=(None, ))
decoder_embedded = layers.Embedding(input_dim=decoder_vocab, output_dim=64)(decoder_input)
# Pass the 2 states to a new LSTM layer, as initial state
decoder_output = layers.LSTM(
64, name='decoder')(decoder_embedded, initial_state=encoder_state)
output = layers.Dense(10)(decoder_output)
model = tf.keras.Model([encoder_input, decoder_input], output)
model.summary()
```
## RNN layers and RNN cells
In addition to the built-in RNN layers, the RNN API also provides cell-level APIs. Unlike RNN layers, which process whole batches of input sequences, the RNN cell only processes a single timestep.
The cell is the inside of the `for` loop of a RNN layer. Wrapping a cell inside a `tf.keras.layers.RNN` layer gives you a layer capable of processing batches of sequences, e.g. `RNN(LSTMCell(10))`.
Mathematically, `RNN(LSTMCell(10))` produces the same result as `LSTM(10)`. In fact, the implementation of this layer in TF v1.x was just creating the corresponding RNN cell and wrapping it in a RNN layer. However using the built-in `GRU` and `LSTM` layers enables the use of CuDNN and you may see better performance.
There are three built-in RNN cells, each of them corresponding to the matching RNN layer.
- `tf.keras.layers.SimpleRNNCell` corresponds to the `SimpleRNN` layer.
- `tf.keras.layers.GRUCell` corresponds to the `GRU` layer.
- `tf.keras.layers.LSTMCell` corresponds to the `LSTM` layer.
The cell abstraction, together with the generic `tf.keras.layers.RNN` class, make it very easy to implement custom RNN architectures for your research.
## Cross-batch statefulness
When processing very long sequences (possibly infinite), you may want to use the pattern of **cross-batch statefulness**.
Normally, the internal state of a RNN layer is reset every time it sees a new batch (i.e. every sample seen by the layer is assumed to be independent of the past). The layer will only maintain a state while processing a given sample.
If you have very long sequences though, it is useful to break them into shorter sequences, and to feed these shorter sequences sequentially into a RNN layer without resetting the layer's state. That way, the layer can retain information about the entirety of the sequence, even though it's only seeing one sub-sequence at a time.
You can do this by setting `stateful=True` in the constructor.
If you have a sequence `s = [t0, t1, ... t1546, t1547]`, you would split it into e.g.
```
s1 = [t0, t1, ... t100]
s2 = [t101, ... t201]
...
s16 = [t1501, ... t1547]
```
Then you would process it via:
```python
lstm_layer = layers.LSTM(64, stateful=True)
for s in sub_sequences:
    output = lstm_layer(s)
```
When you want to clear the state, you can use `layer.reset_states()`.
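The splitting step above can be sketched as a simple chunking helper (pure Python, illustrative only — chunks here hold at most `chunk_size` timesteps):

```python
def split_sequence(seq, chunk_size):
    """Split a long sequence into consecutive sub-sequences of at most
    chunk_size timesteps, preserving order."""
    return [seq[i:i + chunk_size] for i in range(0, len(seq), chunk_size)]

s = list(range(1548))             # stand-in for [t0, t1, ..., t1547]
subs = split_sequence(s, 100)
print(len(subs))                  # 16 chunks
print(subs[0][0], subs[0][-1])    # 0 99
print(subs[-1][0], subs[-1][-1])  # 1500 1547
```

Each chunk would then be fed to the stateful layer in order, as in the loop above.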
> Note: In this setup, sample `i` in a given batch is assumed to be the continuation of sample `i` in the previous batch. This means that all batches should contain the same number of samples (batch size). E.g. if a batch contains `[sequence_A_from_t0_to_t100, sequence_B_from_t0_to_t100]`, the next batch should contain `[sequence_A_from_t101_to_t200, sequence_B_from_t101_to_t200]`.
Here is a complete example:
```
paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)
lstm_layer = layers.LSTM(64, stateful=True)
output = lstm_layer(paragraph1)
output = lstm_layer(paragraph2)
output = lstm_layer(paragraph3)
# reset_states() will reset the cached state to the original initial_state.
# If no initial_state was provided, zero-states will be used by default.
lstm_layer.reset_states()
```
### RNN State Reuse
<a id="rnn_state_reuse"></a>
The recorded states of the RNN layer are not included in `layer.weights()`. If you would like to reuse the state from a RNN layer, you can retrieve the states value via `layer.states` and use it as the initial state for a new layer via the Keras functional API, like `new_layer(inputs, initial_state=layer.states)`, or via model subclassing.

Please also note that a Sequential model cannot be used in this case, since it only supports layers with a single input and output; the extra initial-state input makes it impossible to use here.
```
paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)
lstm_layer = layers.LSTM(64, stateful=True)
output = lstm_layer(paragraph1)
output = lstm_layer(paragraph2)
existing_state = lstm_layer.states
new_lstm_layer = layers.LSTM(64)
new_output = new_lstm_layer(paragraph3, initial_state=existing_state)
```
## Bidirectional RNNs
For sequences other than time series (e.g. text), it is often the case that a RNN model can perform better if it not only processes sequence from start to end, but also backwards. For example, to predict the next word in a sentence, it is often useful to have the context around the word, not only just the words that come before it.
Keras provides an easy API for you to build such bidirectional RNNs: the `tf.keras.layers.Bidirectional` wrapper.
```
model = tf.keras.Sequential()
model.add(layers.Bidirectional(layers.LSTM(64, return_sequences=True),
input_shape=(5, 10)))
model.add(layers.Bidirectional(layers.LSTM(32)))
model.add(layers.Dense(10))
model.summary()
```
Under the hood, `Bidirectional` will copy the RNN layer passed in, and flip the `go_backwards` field of the newly copied layer, so that it will process the inputs in reverse order.
The output of the `Bidirectional` RNN will be, by default, the concatenation of the forward layer output and the backward layer output. If you need a different merging behavior, e.g. sum, change the `merge_mode` parameter in the `Bidirectional` wrapper constructor. For more details about `Bidirectional`, please check [the API docs](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Bidirectional).
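The default `merge_mode='concat'` behavior — a forward pass plus a reversed-input backward pass, concatenated per timestep — can be mimicked in NumPy. The weights are random placeholders, not the Keras implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
timesteps, input_dim, units = 6, 3, 4

def simple_rnn(x, W, U):
    """Minimal tanh RNN returning one output vector per timestep."""
    h = np.zeros(units)
    outs = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ W + h @ U)
        outs.append(h)
    return np.stack(outs)

x = rng.normal(size=(timesteps, input_dim))
Wf, Uf = rng.normal(size=(input_dim, units)), rng.normal(size=(units, units))
Wb, Ub = rng.normal(size=(input_dim, units)), rng.normal(size=(units, units))

fwd = simple_rnn(x, Wf, Uf)              # left-to-right pass
bwd = simple_rnn(x[::-1], Wb, Ub)[::-1]  # right-to-left pass, re-aligned
merged = np.concatenate([fwd, bwd], axis=-1)  # merge_mode='concat'
print(merged.shape)  # (6, 8): the feature dimension doubles
```

This is why a `Bidirectional(LSTM(64))` layer outputs 128 features per timestep by default.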
## Performance optimization and CuDNN kernels in TensorFlow 2.0
In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN kernels by default when a GPU is available. With this change, the prior `keras.layers.CuDNNLSTM/CuDNNGRU` layers have been deprecated, and you can build your model without worrying about the hardware it will run on.
Since the CuDNN kernel is built with certain assumptions, this means the layer **will not be able to use the CuDNN kernel if you change the defaults of the built-in LSTM or GRU layers**. E.g.:
- Changing the `activation` function from `tanh` to something else.
- Changing the `recurrent_activation` function from `sigmoid` to something else.
- Using `recurrent_dropout` > 0.
- Setting `unroll` to True, which forces LSTM/GRU to decompose the inner `tf.while_loop` into an unrolled `for` loop.
- Setting `use_bias` to False.
- Using masking when the input data is not strictly right padded (if the mask corresponds to strictly right padded data, CuDNN can still be used. This is the most common case).
For the detailed list of constraints, please see the documentation for the [LSTM](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/LSTM) and [GRU](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/GRU) layers.
### Using CuDNN kernels when available
Let's build a simple LSTM model to demonstrate the performance difference.
We'll use as input sequences the sequence of rows of MNIST digits (treating each row of pixels as a timestep), and we'll predict the digit's label.
```
batch_size = 64
# Each MNIST image batch is a tensor of shape (batch_size, 28, 28).
# Each input sequence will be of size (28, 28) (height is treated like time).
input_dim = 28
units = 64
output_size = 10 # labels are from 0 to 9
# Build the RNN model
def build_model(allow_cudnn_kernel=True):
    # CuDNN is only available at the layer level, and not at the cell level.
    # This means `LSTM(units)` will use the CuDNN kernel,
    # while RNN(LSTMCell(units)) will run on the non-CuDNN kernel.
    if allow_cudnn_kernel:
        # The LSTM layer with default options uses CuDNN.
        lstm_layer = tf.keras.layers.LSTM(units, input_shape=(None, input_dim))
    else:
        # Wrapping a LSTMCell in a RNN layer will not use CuDNN.
        lstm_layer = tf.keras.layers.RNN(
            tf.keras.layers.LSTMCell(units),
            input_shape=(None, input_dim))
    model = tf.keras.models.Sequential([
        lstm_layer,
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(output_size)]
    )
    return model
```
### Load MNIST dataset
```
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
sample, sample_label = x_train[0], y_train[0]
```
### Create a model instance and compile it
We choose `sparse_categorical_crossentropy` as the loss function for the model. The output of the model has shape `[batch_size, 10]`. The target for the model is an integer vector, with each integer in the range 0 to 9.
```
model = build_model(allow_cudnn_kernel=True)
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer='sgd',
metrics=['accuracy'])
model.fit(x_train, y_train,
validation_data=(x_test, y_test),
batch_size=batch_size,
epochs=5)
```
### Build a new model without CuDNN kernel
```
slow_model = build_model(allow_cudnn_kernel=False)
slow_model.set_weights(model.get_weights())
slow_model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer='sgd',
metrics=['accuracy'])
slow_model.fit(x_train, y_train,
validation_data=(x_test, y_test),
batch_size=batch_size,
epochs=1) # We only train for one epoch because it's slower.
```
As you can see, the model built with CuDNN is much faster to train compared to the model that uses the regular TensorFlow kernel.

The same CuDNN-enabled model can also be used to run inference in a CPU-only environment. The `tf.device` annotation below is just forcing the device placement. The model will run on CPU by default if no GPU is available.
You simply don't have to worry about the hardware you're running on anymore. Isn't that pretty cool?
```
with tf.device('CPU:0'):
    cpu_model = build_model(allow_cudnn_kernel=True)
    cpu_model.set_weights(model.get_weights())
    result = tf.argmax(cpu_model.predict_on_batch(tf.expand_dims(sample, 0)), axis=1)
    print('Predicted result is: %s, target result is: %s' % (result.numpy(), sample_label))
    plt.imshow(sample, cmap=plt.get_cmap('gray'))
```
## RNNs with list/dict inputs, or nested inputs
Nested structures allow implementers to include more information within a single timestep. For example, a video frame could have audio and video input at the same time. The data shape in this case could be:
`[batch, timestep, {"video": [height, width, channel], "audio": [frequency]}]`
In another example, handwriting data could have both coordinates x and y for the current position of the pen, as well as pressure information. So the data representation could be:
`[batch, timestep, {"location": [x, y], "pressure": [force]}]`
The following code provides an example of how to build a custom RNN cell that accepts such structured inputs.
### Define a custom cell that support nested input/output
See [Custom Layers and Models](custom_layers_and_models.ipynb) for details on
writing your own layers.
```
class NestedCell(tf.keras.layers.Layer):

    def __init__(self, unit_1, unit_2, unit_3, **kwargs):
        self.unit_1 = unit_1
        self.unit_2 = unit_2
        self.unit_3 = unit_3
        self.state_size = [tf.TensorShape([unit_1]),
                           tf.TensorShape([unit_2, unit_3])]
        self.output_size = [tf.TensorShape([unit_1]),
                            tf.TensorShape([unit_2, unit_3])]
        super(NestedCell, self).__init__(**kwargs)

    def build(self, input_shapes):
        # expect input_shape to contain 2 items, [(batch, i1), (batch, i2, i3)]
        i1 = input_shapes[0][1]
        i2 = input_shapes[1][1]
        i3 = input_shapes[1][2]
        self.kernel_1 = self.add_weight(
            shape=(i1, self.unit_1), initializer='uniform', name='kernel_1')
        self.kernel_2_3 = self.add_weight(
            shape=(i2, i3, self.unit_2, self.unit_3),
            initializer='uniform',
            name='kernel_2_3')

    def call(self, inputs, states):
        # inputs should be in [(batch, input_1), (batch, input_2, input_3)]
        # state should be in shape [(batch, unit_1), (batch, unit_2, unit_3)]
        input_1, input_2 = tf.nest.flatten(inputs)
        s1, s2 = states
        output_1 = tf.matmul(input_1, self.kernel_1)
        output_2_3 = tf.einsum('bij,ijkl->bkl', input_2, self.kernel_2_3)
        state_1 = s1 + output_1
        state_2_3 = s2 + output_2_3
        output = (output_1, output_2_3)
        new_states = (state_1, state_2_3)
        return output, new_states

    def get_config(self):
        return {'unit_1': self.unit_1, 'unit_2': self.unit_2, 'unit_3': self.unit_3}
```
### Build a RNN model with nested input/output
Let's build a Keras model that uses a `tf.keras.layers.RNN` layer and the custom cell we just defined.
```
unit_1 = 10
unit_2 = 20
unit_3 = 30
i1 = 32
i2 = 64
i3 = 32
batch_size = 64
num_batches = 100
timestep = 50
cell = NestedCell(unit_1, unit_2, unit_3)
rnn = tf.keras.layers.RNN(cell)
input_1 = tf.keras.Input((None, i1))
input_2 = tf.keras.Input((None, i2, i3))
outputs = rnn((input_1, input_2))
model = tf.keras.models.Model([input_1, input_2], outputs)
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
```
### Train the model with randomly generated data
Since there isn't a good candidate dataset for this model, we use random Numpy data for demonstration.
```
input_1_data = np.random.random((batch_size * num_batches, timestep, i1))
input_2_data = np.random.random((batch_size * num_batches, timestep, i2, i3))
target_1_data = np.random.random((batch_size * num_batches, unit_1))
target_2_data = np.random.random((batch_size * num_batches, unit_2, unit_3))
input_data = [input_1_data, input_2_data]
target_data = [target_1_data, target_2_data]
model.fit(input_data, target_data, batch_size=batch_size)
```
With the Keras `tf.keras.layers.RNN` layer, you are only expected to define the math logic for an individual step within the sequence, and the `tf.keras.layers.RNN` layer will handle the sequence iteration for you. It's an incredibly powerful way to quickly prototype new kinds of RNNs (e.g. a LSTM variant).
For more details, please visit the [API docs](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/RNN).
# Tutorial
In this notebook, we will see how to pass your own encoder and decoder's architectures to your VAE model using pythae!
```
# If you run on colab uncomment the following line
#!pip install git+https://github.com/clementchadebec/benchmark_VAE.git
import torch
import torchvision.datasets as datasets
import matplotlib.pyplot as plt
import numpy as np
import os
%matplotlib inline
%load_ext autoreload
%autoreload 2
```
### Get the data
```
mnist_trainset = datasets.MNIST(root='../data', train=True, download=True, transform=None)
n_samples = 200
dataset = mnist_trainset.data[np.array(mnist_trainset.targets)==2][:n_samples].reshape(-1, 1, 28, 28) / 255.
fig, axes = plt.subplots(2, 10, figsize=(10, 2))
for i in range(2):
    for j in range(10):
        axes[i][j].matshow(dataset[i*10 + j].reshape(28, 28), cmap='gray')
        axes[i][j].axis('off')
plt.tight_layout(pad=0.8)
```
## Let's build a custom auto-encoding architecture!
### First thing, you need to import the ``BaseEncoder`` and ``BaseDecoder`` as well as ``ModelOutput`` classes from pythae by running
```
from pythae.models.nn import BaseEncoder, BaseDecoder
from pythae.models.base.base_utils import ModelOutput
```
### Then build your own architectures
```
import torch.nn as nn
class Encoder_VAE_MNIST(BaseEncoder):
def __init__(self, args):
BaseEncoder.__init__(self)
self.input_dim = (1, 28, 28)
self.latent_dim = args.latent_dim
self.n_channels = 1
self.conv_layers = nn.Sequential(
nn.Conv2d(self.n_channels, 128, 4, 2, padding=1),
nn.BatchNorm2d(128),
nn.ReLU(),
nn.Conv2d(128, 256, 4, 2, padding=1),
nn.BatchNorm2d(256),
nn.ReLU(),
nn.Conv2d(256, 512, 4, 2, padding=1),
nn.BatchNorm2d(512),
nn.ReLU(),
nn.Conv2d(512, 1024, 4, 2, padding=1),
nn.BatchNorm2d(1024),
nn.ReLU(),
)
self.embedding = nn.Linear(1024, args.latent_dim)
self.log_var = nn.Linear(1024, args.latent_dim)
def forward(self, x: torch.Tensor):
h1 = self.conv_layers(x).reshape(x.shape[0], -1)
output = ModelOutput(
embedding=self.embedding(h1),
log_covariance=self.log_var(h1)
)
return output
class Decoder_AE_MNIST(BaseDecoder):
def __init__(self, args):
BaseDecoder.__init__(self)
self.input_dim = (1, 28, 28)
self.latent_dim = args.latent_dim
self.n_channels = 1
self.fc = nn.Linear(args.latent_dim, 1024 * 4 * 4)
self.deconv_layers = nn.Sequential(
nn.ConvTranspose2d(1024, 512, 3, 2, padding=1),
nn.BatchNorm2d(512),
nn.ReLU(),
nn.ConvTranspose2d(512, 256, 3, 2, padding=1, output_padding=1),
nn.BatchNorm2d(256),
nn.ReLU(),
nn.ConvTranspose2d(256, self.n_channels, 3, 2, padding=1, output_padding=1),
nn.Sigmoid(),
)
def forward(self, z: torch.Tensor):
h1 = self.fc(z).reshape(z.shape[0], 1024, 4, 4)
output = ModelOutput(reconstruction=self.deconv_layers(h1))
return output
```
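A quick sanity check on why the encoder ends with ``nn.Linear(1024, ...)``: with kernel 4, stride 2, padding 1, each conv roughly halves the spatial size, so a 28×28 input shrinks to 1×1 with 1024 channels. The standard PyTorch conv-arithmetic formulas, applied by hand:

```python
def conv_out(n, k, s, p):
    # Conv2d output size: floor((n + 2p - k) / s) + 1
    return (n + 2 * p - k) // s + 1

def deconv_out(n, k, s, p, op=0):
    # ConvTranspose2d output size: (n - 1) * s - 2p + k + output_padding
    return (n - 1) * s - 2 * p + k + op

# encoder: four Conv2d(k=4, s=2, p=1) layers
sizes = [28]
for _ in range(4):
    sizes.append(conv_out(sizes[-1], 4, 2, 1))
print(sizes)  # [28, 14, 7, 3, 1] -> 1024 channels * 1 * 1 = 1024 features

# decoder: fc reshapes to 4x4, then three ConvTranspose2d layers back to 28x28
d = deconv_out(4, 3, 2, 1)          # 7
d = deconv_out(d, 3, 2, 1, op=1)    # 14
d = deconv_out(d, 3, 2, 1, op=1)    # 28
print(d)
```

This is also why the decoder's first deconv uses no ``output_padding`` while the later two do: the sizes only line up back to 28 with that combination.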
### Define a model configuration (in which the latent dimension is stated). Here, we use the VAE model.
```
from pythae.models import VAEConfig
model_config = VAEConfig(
input_dim=(1, 28, 28),
latent_dim=10
)
```
### Build your encoder and decoder
```
encoder = Encoder_VAE_MNIST(model_config)
decoder = Decoder_AE_MNIST(model_config)
```
### Last but not least, build your VAE model by passing the ``encoder`` and ``decoder`` arguments
```
from pythae.models import VAE
model = VAE(
model_config=model_config,
encoder=encoder,
decoder=decoder
)
```
### Now you can see that the model you've just built contains the custom encoder and decoder
```
model
```
### *note*: If you want to launch a training of such a model, try to ensure that the provided architectures are suited for the data. pythae performs a model sanity check before launching training and raises an error if the model cannot encode and decode an input data point.
## Train the model!
```
from pythae.trainers import BaseTrainerConfig
from pythae.pipelines import TrainingPipeline
```
### Build the training pipeline with your ``BaseTrainerConfig`` instance
```
training_config = BaseTrainerConfig(
output_dir='my_model_with_custom_archi',
learning_rate=1e-3,
batch_size=200,
steps_saving=None,
num_epochs=200)
pipeline = TrainingPipeline(
model=model,
training_config=training_config)
```
### Launch the ``Pipeline``
```
torch.manual_seed(8)
torch.cuda.manual_seed(8)
pipeline(
train_data=dataset
)
```
### *note 1*: You will now see that an ``encoder.pkl`` and a ``decoder.pkl`` appear in the folder ``my_model_with_custom_archi/training_YYYY_MM_DD_hh_mm_ss/final_model`` to allow model rebuilding with your own architectures ``Encoder_VAE_MNIST`` and ``Decoder_AE_MNIST``.
### *note 2*: Model rebuilding is based on the [dill](https://pypi.org/project/dill/) library, which allows classes to be reloaded without importing them. Hence, you should still be able to reload the model even if the classes ``Encoder_VAE_MNIST`` or ``Decoder_AE_MNIST`` were not imported.
```
last_training = sorted(os.listdir('my_model_with_custom_archi'))[-1]
print(last_training)
```
### You can now reload the model easily using the classmethod ``VAE.load_from_folder``
```
model_rec = VAE.load_from_folder(os.path.join('my_model_with_custom_archi', last_training, 'final_model'))
model_rec
```
## The model can now be used to generate new samples!
```
from pythae.samplers import NormalSampler
sampler = NormalSampler(
model=model_rec
)
gen_data = sampler.sample(
num_samples=25
)
import matplotlib.pyplot as plt
fig, axes = plt.subplots(nrows=5, ncols=5, figsize=(10, 10))
for i in range(5):
for j in range(5):
axes[i][j].imshow(gen_data[i*5 +j].cpu().reshape(28, 28), cmap='gray')
axes[i][j].axis('off')
plt.tight_layout(pad=0.)
```
# Chapter 3: pandas
```
#load watermark
%load_ext watermark
%watermark -a 'Gopala KR' -u -d -v -p watermark,numpy,pandas,matplotlib,nltk,sklearn,tensorflow,theano,mxnet,chainer,seaborn,keras,tflearn,bokeh,gensim
```
# pandas DataFrames
```
import numpy as np
import scipy as sp
import pandas as pd
```
## Load the data file into data frame
```
from pandas.io.parsers import read_csv
df = read_csv("WHO_first9cols.csv")
print("Dataframe Top 5 rows:\n", df.head())
print("Shape:\n", df.shape)
print("\n")
print("Length:\n", len(df))
print("\n")
print("Column Headers:\n", df.columns)
print("\n")
print("Data types:\n", df.dtypes)
print("\n")
print("Index:\n", df.index)
print("\n")
print("Values:\n", df.values)
```
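If ``WHO_first9cols.csv`` isn't available, the same inspection methods can be tried on a small hand-made DataFrame (a self-contained sketch; the column names here are just illustrative):

```python
import pandas as pd

df_demo = pd.DataFrame({"Country": ["Afghanistan", "Albania", "Algeria"],
                        "CountryID": [1, 2, 3],
                        "Continent": [1, 2, 3]})
print(df_demo.head())
print("Shape:", df_demo.shape)        # (3, 3): rows x columns
print("Columns:", list(df_demo.columns))
print("Dtypes:\n", df_demo.dtypes)    # object for strings, int64 for ints
```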
# pandas Series
```
country_col = df["Country"]
print("Type df:\n", type(df), "\n")
print("Type country col:\n", type(country_col), "\n")
print("Series shape:\n", country_col.shape, "\n")
print("Series index:\n", country_col.index, "\n")
print("Series values:\n", country_col.values, "\n")
print("Series name:\n", country_col.name, "\n")
print("Last 2 countries:\n", country_col[-2:], "\n")
print("Last 2 countries type:\n", type(country_col[-2:]), "\n")
last_col = df.columns[-1]
print("Last df column signs:\n", last_col, np.sign(df[last_col]), "\n")
np.sum([0, np.nan])
df.dtypes
print(np.sum(df[last_col] - df[last_col].values))
```
# Querying Data in pandas
```
!pip install quandl
import quandl
sunspots = quandl.get("SIDC/SUNSPOTS_A")
print("Head 2:\n", sunspots.head(2) )
print("Tail 2:\n", sunspots.tail(2))
last_date = sunspots.index[-1]
print("Last value:\n",sunspots.loc[last_date])
print("Values slice by date:\n", sunspots["20020101": "20131231"])
print("Slice from a list of indices:\n", sunspots.iloc[[2, 4, -4, -2]])
print("Scalar with Iloc:", sunspots.iloc[0, 0])
print("Scalar with iat", sunspots.iat[1, 0])
print("Boolean selection:\n", sunspots[sunspots > sunspots.mean()])
print("Boolean selection with column label:\n", sunspots[sunspots['Number of Observations'] > sunspots['Number of Observations'].mean()])
```
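The Quandl call needs network access (and, these days, an API key); the same indexing patterns — label slices, positional selection, scalar access, boolean masks — can be tried offline on a synthetic time series (a sketch, not the sunspot data):

```python
import pandas as pd

# six yearly timestamps, 2000-01-01 .. 2005-01-01
idx = pd.date_range("2000-01-01", periods=6, freq=pd.DateOffset(years=1))
ts = pd.DataFrame({"Number": [10.0, 40.0, 20.0, 80.0, 30.0, 60.0]}, index=idx)

print(ts.loc["2001":"2003"])          # label-based slice by date (inclusive)
print(ts.iloc[[0, -1]])               # position-based selection
print(ts.iat[1, 0])                   # fast scalar access -> 40.0
print(ts[ts["Number"] > ts["Number"].mean()])  # boolean selection
```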
# Statistics with pandas DataFrame
```
import quandl
# Data from http://www.quandl.com/SIDC/SUNSPOTS_A-Sunspot-Numbers-Annual
# PyPi url https://pypi.python.org/pypi/Quandl
sunspots = quandl.get("SIDC/SUNSPOTS_A")
print("Describe", sunspots.describe(),"\n")
print("Non NaN observations", sunspots.count(),"\n")
print("MAD", sunspots.mad(),"\n")
print("Median", sunspots.median(),"\n")
print("Min", sunspots.min(),"\n")
print("Max", sunspots.max(),"\n")
print("Mode", sunspots.mode(),"\n")
print("Standard Deviation", sunspots.std(),"\n")
print("Variance", sunspots.var(),"\n")
print("Skewness", sunspots.skew(),"\n")
print("Kurtosis", sunspots.kurt(),"\n")
```
# Data Aggregation
```
import pandas as pd
from numpy.random import seed
from numpy.random import rand
from numpy.random import randint
import numpy as np
seed(42)
df = pd.DataFrame({'Weather' : ['cold', 'hot', 'cold', 'hot',
'cold', 'hot', 'cold'],
'Food' : ['soup', 'soup', 'icecream', 'chocolate',
'icecream', 'icecream', 'soup'],
'Price' : 10 * rand(7), 'Number' : randint(1, 9)})
print(df)
weather_group = df.groupby('Weather')
i = 0
for name, group in weather_group:
i = i + 1
print("Group", i, name)
print(group)
print("Weather group first\n", weather_group.first())
print("Weather group last\n", weather_group.last())
print("Weather group mean\n", weather_group.mean())
wf_group = df.groupby(['Weather', 'Food'])
print("WF Groups", wf_group.groups)
print("WF Aggregated\n", wf_group.agg([np.mean, np.median]))
```
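To see exactly what ``groupby`` computes, a tiny deterministic sketch (no random numbers) where the group means can be checked by hand:

```python
import pandas as pd

df_g = pd.DataFrame({"Weather": ["cold", "hot", "cold", "hot"],
                     "Price": [1.0, 10.0, 3.0, 20.0]})
means = df_g.groupby("Weather")["Price"].mean()
print(means)
# cold: (1 + 3) / 2 = 2.0, hot: (10 + 20) / 2 = 15.0
```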
# Concatenating and appending DataFrames
```
print("df :3\n", df[:3])
print("Concat Back together\n", pd.concat([df[:3], df[3:]]))
print("Appending rows\n", df[:3].append(df[5:]))
```
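Note that ``DataFrame.append`` was deprecated in pandas 1.4 and removed in pandas 2.0; ``pd.concat`` covers the same use, as in this sketch:

```python
import pandas as pd

a = pd.DataFrame({"x": [1, 2]})
b = pd.DataFrame({"x": [3]})
combined = pd.concat([a, b], ignore_index=True)  # replaces a.append(b)
print(combined)
```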
# Joining DataFrames
```
dests = pd.read_csv('dest.csv')
print("Dests\n", dests)
tips = pd.read_csv('tips.csv')
print("Tips\n", tips)
print("Merge() on key\n", pd.merge(dests, tips, on='EmpNr'))
print("Dests join() tips\n", dests.join(tips, lsuffix='Dest', rsuffix='Tips'))
print("Inner join with merge()\n", pd.merge(dests, tips, how='inner'))
print("Outer join\n", pd.merge(dests, tips, how='outer'))
```
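The ``dest.csv`` and ``tips.csv`` files aren't included here; a self-contained sketch with small made-up stand-in frames (keyed on ``EmpNr`` like the originals) shows the difference between inner and outer merges:

```python
import pandas as pd

dests = pd.DataFrame({"EmpNr": [5, 9], "Dest": ["The Hague", "Rotterdam"]})
tips = pd.DataFrame({"EmpNr": [5, 7], "Amount": [10.0, 2.5]})

inner = pd.merge(dests, tips, on="EmpNr", how="inner")  # only EmpNr 5 matches
outer = pd.merge(dests, tips, on="EmpNr", how="outer")  # keeps 5, 7 and 9, with NaNs
print(inner)
print(outer)
```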
# Handling Missing Values
```
df = pd.read_csv('WHO_first9cols.csv')
# Select first 2 rows of Country and Net primary school enrolment ratio male (%)
df = df[['Country', df.columns[-2]]][:2]
print("New df\n", df)
print("Null Values\n", pd.isnull(df))
print("Total Null Values\n", pd.isnull(df).sum())
print("Not Null Values\n", df.notnull())
print("Last Column Doubled\n", 2 * df[df.columns[-1]])
print("Last Column plus NaN\n", df[df.columns[-1]] + np.nan)
print("Zero filled\n", df.fillna(0))
```
# Dealing with Dates
```
print("Date range", pd.date_range('1/1/1900', periods=42, freq='D'))
import sys
try:
print("Date range", pd.date_range('1/1/1677', periods=4, freq='D'))
except:
etype, value, _ = sys.exc_info()
print("Error encountered", etype, value)
offset = pd.DateOffset(seconds=2 ** 33/10 ** 9)
mid = pd.to_datetime('1/1/1970')
print("Start valid range", mid - offset)
print("End valid range", mid + offset)
print("With format", pd.to_datetime(['19021112', '19031230'], format='%Y%m%d'))
print("Illegal date coerced", pd.to_datetime(['1902-11-12', 'not a date'], errors='coerce'))
```
# Pivot Tables
```
seed(42)
N = 7
df = pd.DataFrame({
'Weather' : ['cold', 'hot', 'cold', 'hot',
'cold', 'hot', 'cold'],
'Food' : ['soup', 'soup', 'icecream', 'chocolate',
'icecream', 'icecream', 'soup'],
'Price' : 10 * rand(N), 'Number' : randint(1, 9)})
print("DataFrame\n", df)
print(pd.pivot_table(df, columns=['Food'], aggfunc=np.sum))
```
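With deterministic data, the pivot-table aggregation is easy to verify by hand — a sketch using ``index`` (instead of ``columns``, as above) purely for readability:

```python
import pandas as pd

df_p = pd.DataFrame({"Food": ["soup", "soup", "icecream"],
                     "Price": [1.0, 2.0, 5.0]})
table = pd.pivot_table(df_p, values="Price", index="Food", aggfunc="sum")
print(table)
# soup: 1.0 + 2.0 = 3.0, icecream: 5.0
```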
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from ttim import *
```
### Theis
```
from scipy.special import exp1
def theis(r, t, T, S, Q):
u = r ** 2 * S / (4 * T * t)
h = -Q / (4 * np.pi * T) * exp1(u)
return h
def theisQr(r, t, T, S, Q):
u = r ** 2 * S / (4 * T * t)
return -Q / (2 * np.pi) * np.exp(-u) / r
T = 500
S = 1e-4
t = np.logspace(-5, 0, 100)
r = 30
Q = 788
htheis = theis(r, t, T, S, Q)
Qrtheis = theisQr(r, t, T, S, Q)
ml = ModelMaq(kaq=25, z=[20, 0], Saq=S/20, tmin=1e-5, tmax=1)
w = Well(ml, tsandQ=[(0, Q)], rw=1e-5)
ml.solve()
h = ml.head(r, 0, t)
Qx, Qy = ml.disvec(r, 0, t)
plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.semilogx(t, htheis, 'b', label='theis')
plt.semilogx(t, h[0], 'r--', label='ttim')
plt.xlabel('time (day)')
plt.ylabel('head (m)')
plt.legend();
plt.subplot(122)
plt.semilogx(t, Qrtheis, 'b', label='theis')
plt.semilogx(t, Qx[0], 'r--', label='ttim')
plt.xlabel('time (day)')
plt.ylabel('Qr (m^2/d)')
plt.legend(loc='best');
def test(M=10):
ml = ModelMaq(kaq=25, z=[20, 0], Saq=S/20, tmin=1e-5, tmax=1, M=M)
w = Well(ml, tsandQ=[(0, Q)], rw=1e-5)
ml.solve(silent=True)
h = ml.head(r, 0, t)
return htheis - h[0]
enumba = test(M=10)
plt.plot(t, enumba, 'C1')
plt.xlabel('time (d)')
plt.ylabel('head difference Theis - TTim');
plt.plot(t, Qrtheis - Qx[0])
plt.xlabel('time (d)')
plt.ylabel('Qx difference Theis - TTim');
def compare(M=10):
ml = ModelMaq(kaq=25, z=[20, 0], Saq=S/20, tmin=1e-5, tmax=1, M=M)
w = Well(ml, tsandQ=[(0, Q)], rw=1e-5)
ml.solve(silent=True)
h = ml.head(r, 0, t)
rmse = np.sqrt(np.mean((h[0] - htheis)**2))
return rmse
Mlist = np.arange(1, 21)
rmse = np.zeros(len(Mlist))
for i, M in enumerate(Mlist):
rmse[i] = compare(M)
plt.semilogy(Mlist, rmse)
plt.xlabel('Number of terms M')
plt.xticks(np.arange(1, 21))
plt.ylabel('relative error')
plt.title('comparison between TTim solution and Theis \n solution using numba and M terms')
plt.grid()
def volume(r, t=1):
return -2 * np.pi * r * ml.head(r, 0, t) * ml.aq.Scoefaq[0]
from scipy.integrate import quad
quad(volume, 1e-5, np.inf)
from scipy.special import exp1
def theis2(r, t, T, S, Q, tend):
u1 = r ** 2 * S / (4 * T * t)
u2 = r ** 2 * S / (4 * T * (t[t > tend] - tend))
h = -Q / (4 * np.pi * T) * exp1(u1)
h[t > tend] -= -Q / (4 * np.pi * T) * exp1(u2)
return h
ml2 = ModelMaq(kaq=25, z=[20, 0], Saq=S/20, tmin=1e-5, tmax=10)
w2 = Well(ml2, tsandQ=[(0, Q), (1, 0)])
ml2.solve()
t2 = np.linspace(0.01, 2, 100)
htheis2 = theis2(r, t2, T, S, Q, tend=1)
h2 = ml2.head(r, 0, t2)
plt.plot(t2, htheis2, 'b', label='theis')
plt.plot(t2, h2[0], 'r--', label='ttim')
plt.legend(loc='best');
```
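The Theis well function is ``exp1(u)``; for small ``u`` it is well approximated by the Cooper–Jacob form ``-0.5772 - ln(u)``. A small self-contained check, using the standard power series for ``E1`` instead of SciPy so it runs anywhere:

```python
import math

EULER_GAMMA = 0.5772156649015329

def exp1_series(u, terms=30):
    # E1(u) = -gamma - ln(u) + sum_{n>=1} (-1)**(n+1) * u**n / (n * n!)
    s = -EULER_GAMMA - math.log(u)
    fact = 1.0
    for n in range(1, terms + 1):
        fact *= n
        s += (-1) ** (n + 1) * u ** n / (n * fact)
    return s

u = 1e-4                      # small u: late time or small radius
full = exp1_series(u)
cooper_jacob = -EULER_GAMMA - math.log(u)
print(full, cooper_jacob)     # nearly identical for small u
```

The series converges quickly for the ``u`` values of interest here, and the difference between the two expressions is of order ``u``.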
### Hantush
```
T = 500
S = 1e-4
c = 1000
t = np.logspace(-5, 0, 100)
r = 30
Q = 788
from scipy.integrate import quad
def integrand_hantush(y, r, lab):
return np.exp(-y - r ** 2 / (4 * lab ** 2 * y)) / y
def hantush(r, t, T, S, c, Q, tstart=0):
lab = np.sqrt(T * c)
u = r ** 2 * S / (4 * T * (t - tstart))
F = quad(integrand_hantush, u, np.inf, args=(r, lab))[0]
return -Q / (4 * np.pi * T) * F
hantushvec = np.vectorize(hantush)
ml = ModelMaq(kaq=25, z=[21, 20, 0], c=[1000], Saq=S/20, topboundary='semi', tmin=1e-5, tmax=1)
w = Well(ml, tsandQ=[(0, Q)])
ml.solve()
hhantush = hantushvec(30, t, T, S, c, Q)
h = ml.head(r, 0, t)
plt.semilogx(t, hhantush, 'b', label='hantush')
plt.semilogx(t, h[0], 'r--', label='ttim')
plt.legend(loc='best');
```
### Well with wellbore storage
```
T = 500
S = 1e-4
t = np.logspace(-5, 0, 100)
rw = 0.3
Q = 788
ml = ModelMaq(kaq=25, z=[20, 0], Saq=S/20, tmin=1e-5, tmax=1)
w = Well(ml, rw=rw, tsandQ=[(0, Q)])
ml.solve()
hnostorage = ml.head(rw, 0, t)
ml = ModelMaq(kaq=25, z=[20, 0], Saq=S/20, tmin=1e-5, tmax=1)
w = Well(ml, rw=rw, tsandQ=[(0, Q)], rc=rw)
ml.solve()
hstorage = ml.head(rw, 0, t)
plt.semilogx(t, hnostorage[0], label='no storage')
plt.semilogx(t, hstorage[0], label='with storage')
plt.legend(loc='best')
plt.xticks([1/(24*60*60), 1/(24 * 60), 1/24, 1], ['1 sec', '1 min', '1 hr', '1 d']);
```
### Slug test
```
k = 25
H = 20
S = 1e-4 / H
t = np.logspace(-7, -1, 100)
rw = 0.2
rc = 0.2
delh = 1
ml = ModelMaq(kaq=k, z=[H, 0], Saq=S, tmin=1e-7, tmax=1)
Qslug = np.pi * rc ** 2 * delh
w = Well(ml, tsandQ=[(0, -Qslug)], rw=rw, rc=rc, wbstype='slug')
ml.solve()
h = w.headinside(t)
plt.semilogx(t, h[0])
plt.xticks([1 / (24 * 60 * 60) / 10, 1 / (24 * 60 * 60), 1 / (24 * 60), 1 / 24],
['0.1 sec', '1 sec', '1 min', '1 hr']);
```
### Slug test in 5-layer aquifer
Well in top 2 layers
```
k = 25
H = 20
Ss = 1e-4 / H
t = np.logspace(-7, -1, 100)
rw = 0.2
rc = 0.2
delh = 1
ml = Model3D(kaq=k, z=np.linspace(H, 0, 6), Saq=Ss, tmin=1e-7, tmax=1)
Qslug = np.pi * rc**2 * delh
w = Well(ml, tsandQ=[(0, -Qslug)], rw=rw, rc=rc, layers=[0, 1], wbstype='slug')
ml.solve()
hw = w.headinside(t)
plt.semilogx(t, hw[0], label='inside well')
h = ml.head(0.2 + 1e-8, 0, t)
for i in range(2, 5):
plt.semilogx(t, h[i], label='layer' + str(i))
plt.legend()
plt.xticks([1/(24*60*60)/10, 1/(24*60*60), 1/(24 * 60), 1/24], ['0.1 sec', '1 sec', '1 min', '1 hr']);
```
20 layers
```
k = 25
H = 20
S = 1e-4 / H
t = np.logspace(-7, -1, 100)
rw = 0.2
rc = 0.2
delh = 1
ml = Model3D(kaq=k, z=np.linspace(H, 0, 21), Saq=S, tmin=1e-7, tmax=1)
Qslug = np.pi * rc**2 * delh
w = Well(ml, tsandQ=[(0, -Qslug)], rw=rw, rc=rc, layers=np.arange(8), wbstype='slug')
ml.solve()
hw = w.headinside(t)
plt.semilogx(t, hw[0], label='inside well')
h = ml.head(0.2 + 1e-8, 0, t)
for i in range(8, 20):
plt.semilogx(t, h[i], label='layer' + str(i))
plt.legend()
plt.xticks([1/(24*60*60)/10, 1/(24*60*60), 1/(24 * 60), 1/24], ['0.1 sec', '1 sec', '1 min', '1 hr']);
```
### Head Well
```
ml = ModelMaq(kaq=25, z=[20, 0], Saq=1e-5, tmin=1e-3, tmax=1000)
w = HeadWell(ml, tsandh=[(0, -1)], rw=0.2)
ml.solve()
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
ml.xsection(0.2, 100, 0, 0, 100, t=[0.1, 1, 10], sstart=0.2, newfig=False)
t = np.logspace(-3, 3, 100)
dis = w.discharge(t)
plt.subplot(1,2,2)
plt.semilogx(t, dis[0], label='rw=0.2')
ml = ModelMaq(kaq=25, z=[20, 0], Saq=1e-5, tmin=1e-3, tmax=1000)
w = HeadWell(ml, tsandh=[(0, -1)], rw=0.3)
ml.solve()
dis = w.discharge(t)
plt.semilogx(t, dis[0], label='rw=0.3')
plt.xlabel('time (d)')
plt.ylabel('discharge (m3/d)')
plt.legend();
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler,LabelEncoder
from sklearn.metrics import r2_score,mean_absolute_error,mean_squared_error
data=pd.read_csv("/Users/chohan/Desktop/ML_DL_ Hackathon/MachineHack Machine Learning Challenge Predicting House Prices In Bengaluru/Data/train.csv")
data.head()
data.tail()
data.shape
data.info()
data.describe()
data.columns
data.isna().sum()
data.isnull().sum()
data.dtypes
data["area_type"].unique()
data["size"].unique()
data["society"].unique()
data["total_sqft"].unique()
data["bath"].unique()
data["availability"].unique()
data["balcony"].unique()
#Check Null Values of Bath
data["bath"].isna().sum()
#Mean of a Bath value
mean_bath=data["bath"].mean().round()
#Replace all Null value to mean_bath
data["bath"]=data["bath"].replace(np.nan,mean_bath).astype(float)
#After changing check the nan value
data["bath"].isna().sum()
#Check Null Values of Balcony
data["balcony"].isna().sum()
#Replace and Mean of Balcony
mean_balcony=data["balcony"].mean().round()
data["balcony"]=data["balcony"].replace(np.nan,mean_balcony).astype(float)
#After changing check the nan value
data["balcony"].isna().sum()
import re
total_sqrft=[]
for i in range(len(data)):
a=data["total_sqft"][i]
total_sqrft.append(a)
total_sqft1=[]
for i in range(len(total_sqrft)):
result=re.sub("\d - \d","",total_sqrft[i])
total_sqft1.append(result)
total_sqft2=[]
for i in range(0,len(total_sqft1)):
result=re.sub("[Sq. Meter,Perch, Yards,A,G,C,n,u,o]","",total_sqft1[i])
total_sqft2.append(result)
data["total_sqft"]=total_sqft2
def total_sqftreplace():
total_sqrft=[]
for i in range(len(data)):
a=data["total_sqft"][i]
total_sqrft.append(a)
total_sqft1=[]
for i in range(len(total_sqrft)):
result=re.sub("\d - \d","",total_sqrft[i])
total_sqft1.append(result)
total_sqft2=[]
for i in range(0,len(total_sqft1)):
result=re.sub("[Sq. Meter,Perch, Yards,A,G,C,n,u,o]","",total_sqft1[i])
total_sqft2.append(result)
data["total_sqft"]=total_sqft2
data["total_sqft"]=data["total_sqft"].replace("",data["total_sqft"][0])
new=[]
for i in range(0,len(data["total_sqft"])):
a=eval(data["total_sqft"][i])
new.append(a)
print(a)
data["total_sqft"]=new
data["total_sqft"]=data["total_sqft"].astype(float)
return data["total_sqft"]
data["total_sqft"].head()
data.drop(columns=['availability', 'location', 'size', 'society'],inplace=True)
data.head()
data.dtypes
data["total_sqft"].replace("",data["total_sqft"][0])
len(total_sqft2)
new=[]
for i in range(0,len(data["total_sqft"])):
a=eval(data["total_sqft"][i])
new.append(a)
print(a)
data["total_sqft"]=new
data["total_sqft"]=data["total_sqft"].astype(float)
data.dtypes
data["area_type"].unique()
data["area_type"]=data["area_type"].replace(['Super built-up Area', 'Plot Area', 'Built-up Area',
'Carpet Area'],[0,1,2,3])
data["area_type"]=data["area_type"].astype(float)
data.dtypes
sns.scatterplot(data["bath"],data["price"],hue=data["bath"])
sns.scatterplot(data["balcony"],data["price"],hue=data["balcony"],palette="deep")
sns.scatterplot(data["area_type"],data["price"])
sns.countplot(data["balcony"])
sns.countplot(data["bath"])
sns.countplot(data["area_type"])
X=data.iloc[:,0:4].values
Y=data.iloc[:,-1].values
X
sc=StandardScaler()
x_labeled=sc.fit(X).transform(X)
x_labeled[0]
x_train,x_test,y_train,y_test=train_test_split(x_labeled,Y,test_size=0.2,random_state=True)
print(len(x_train),len(y_train))
print(len(x_test),len(y_test))
print(x_train.shape,y_train.shape)
print(x_test.shape,y_test.shape)
linear_model=LinearRegression()
linear_model.fit(x_train,y_train)
print(linear_model.intercept_)
print(linear_model.coef_)
predict=linear_model.predict(x_test[1:7])
print("Training_Accuracy:",linear_model.score(x_train,y_train)*100)
print("Testing_Accuracy:",linear_model.score(x_test,y_test)*100)
print("Model_Accuracy:",r2_score(Y,linear_model.predict(x_labeled))*100)
random_forest_model=RandomForestRegressor(n_estimators=50)
random_forest_model.fit(x_train,y_train)
predict1=random_forest_model.predict(x_test[0:7])
print("Training_Accuracy:",random_forest_model.score(x_train,y_train)*100)
print("Testing_Accuracy:",random_forest_model.score(x_test,y_test)*100)
print("Model_Accuracy:",r2_score(Y,random_forest_model.predict(x_labeled))*100)
decision_tree_model=DecisionTreeRegressor(criterion="mse")
decision_tree_model.fit(x_train,y_train)
predict2=decision_tree_model.predict(x_test[0:7])
print("Training_Accuracy:",decision_tree_model.score(x_train,y_train)*100)
print("Testing_Accuracy:",decision_tree_model.score(x_test,y_test)*100)
print("Model_Accuracy:",r2_score(Y,decision_tree_model.predict(x_labeled))*100)
data=pd.read_csv("/Users/chohan/Desktop/ML_DL_ Hackathon/MachineHack Machine Learning Challenge Predicting House Prices In Bengaluru/Data/test.csv")
data.head()
data.isnull().sum()
data.dtypes
data.columns
data.drop(columns=['availability', 'location', 'size', 'society',"price"],inplace=True)
bath_mean=data["bath"].mean().round()
balcony_mean=data["balcony"].mean().round()
# data["bath"]=data["bath"].replace(np.nan,mean_bath).astype(float)
data["bath"]=data["bath"].replace(np.nan,bath_mean)
data["balcony"]=data["balcony"].replace(np.nan,balcony_mean)
data["area_type"]=data["area_type"].replace(['Super built-up Area', 'Plot Area', 'Built-up Area','Carpet Area'],[0,1,2,3])
data["area_type"]=data["area_type"].astype(float)
data["bath"]=data["bath"].astype(float)
data["balcony"]=data["balcony"].astype(float)
data.isnull().sum()
total_sqftreplace()
data.dtypes
x=data.iloc[:,0:4].values
x[0]
y_hat=random_forest_model.predict(x)
y_hat
y_hat1=decision_tree_model.predict(x)
y_hat1
y_hat2=linear_model.predict(x)
y_hat2
```
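The imputation step used above (replace NaNs in ``bath``/``balcony`` with the rounded column mean) can be sketched in isolation on a tiny Series:

```python
import numpy as np
import pandas as pd

s = pd.Series([2.0, 4.0, np.nan, 3.0], name="bath")
mean_rounded = s.mean().round()        # mean of [2, 4, 3] = 3.0
filled = s.replace(np.nan, mean_rounded).astype(float)
print(filled.tolist())  # [2.0, 4.0, 3.0, 3.0]
```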
<a href="https://colab.research.google.com/github/oughtinc/ergo/blob/notebooks-readme/notebooks/covid-19-inference.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Notes
* Switch to Italy
* Graph data and results
* Add variable for true initial infections, lockdown start date (11 March)
* Add lag time (see Jacob/NYT models)
* Add patient recovery
# Setup
Install [Ergo](https://github.com/oughtinc/ergo) (our forecasting library) and a few tools we'll use in this colab:
```
!pip install --quiet poetry # Fixes https://github.com/python-poetry/poetry/issues/532
!pip install --quiet pendulum seaborn
!pip install --quiet torchviz
# !pip uninstall --yes --quiet ergo
!pip install --quiet git+https://github.com/oughtinc/ergo.git@william
# !pip install --upgrade --no-cache-dir --quiet git+https://github.com/oughtinc/ergo.git
import ergo
confirmed_infections = ergo.data.covid19.ConfirmedInfections()
%load_ext google.colab.data_table
import re
import ergo
import pendulum
import pandas
import seaborn
from types import SimpleNamespace
from typing import List
from pendulum import DateTime
from matplotlib import pyplot
```
# Questions
Here are Metaculus ids for the questions we'll load, and some short names that will allow us to associate questions with variables in our model:
```
question_ids = [3704, 3712, 3713, 3711, 3722, 3761, 3705, 3706] # 3740, 3736,
question_names = [
# "WHO Eastern Mediterranean Region on 2020/03/27",
# "WHO Region of the Americas on 2020/03/27",
# "WHO Western Pacific Region on 2020/03/27",
# "WHO South-East Asia Region on 2020/03/27",
"South Korea on 2020/03/27",
# "United Kingdom on 2020/03/27",
# "WHO African Region on 2020/03/27",
# "WHO European Region on 2020/03/27",
# "Bay Area on 2020/04/01",
# "San Francisco on 2020/04/01"
]
```
We load the question data from Metaculus:
```
metaculus = ergo.Metaculus(username="ought", password="********")  # password redacted; use your own credentials
questions = [metaculus.get_question(id, name=name) for id, name in zip(question_ids, question_names)]
ergo.MetaculusQuestion.to_dataframe(questions)
```
# Data
Our most important data is the data about confirmed cases (from Hopkins):
```
confirmed_infections = ergo.data.covid19.ConfirmedInfections()
```
# Assumptions
Assumptions are things that should be inferred from data but currently aren't:
```
assumptions = SimpleNamespace()
assumptions.lockdown_start = {
"Italy": pendulum.datetime(2020,3,11),
"Spain": pendulum.datetime(2020,3,15),
}
```
# Model
Main model:
```
import torch
import pyro
Area = str
@ergo.model
def model(start: DateTime, end: DateTime, areas: List[Area], training=True):
for area in areas:
doubling_time = ergo.lognormal_from_interval(1., 14., name=f"doubling_time {area}")
doubling_time_lockdown = ergo.lognormal_from_interval(1., torch.max(doubling_time, ergo.to_float(1.1)), name=f"doubling_time_lockdown {area}")
observation_noise = ergo.halfnormal_from_interval(0.1, name=f"observation_noise {area}")
predicted = ergo.to_float(confirmed_infections(area, start))
for i in range(1,(end - start).days):
date = start.add(days=i)
datestr = date.format('YYYY/MM/DD')
confirmed = None
try:
confirmed = ergo.to_float(confirmed_infections(area, date))
ergo.tag(confirmed, f"actual {area} on {datestr}")
except KeyError:
pass
doubling_time_today = doubling_time
if area in assumptions.lockdown_start.keys() and date >= assumptions.lockdown_start[area]:
doubling_time_today = doubling_time_lockdown
predicted = predicted * 2**(1. / doubling_time_today)
ergo.tag(predicted, f"predicted {area} on {datestr}")
if (not training) or (confirmed is not None):
predict_observed = ergo.normal(predicted,
predicted*observation_noise,
name=f"predict_observed {area} on {datestr}",
obs=confirmed)
```
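The growth update in the model multiplies the predicted count by ``2**(1/doubling_time)`` each day, so after ``doubling_time`` days the count doubles. A quick check of that arithmetic, independent of Pyro:

```python
count = 100.0
doubling_time = 5.0
for _ in range(5):                      # simulate doubling_time days
    count *= 2 ** (1.0 / doubling_time)
print(count)  # 200.0 (up to float rounding)
```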
Run the model:
```
start_date = pendulum.datetime(2020,3,1)
end_date = pendulum.datetime(2020,4,1)
areas = ["Italy", "Spain"]
model_args = (start_date, end_date, areas)
import pandas as pd
from pyro.infer import SVI, Trace_ELBO
from pyro.infer import Predictive
import functools
def infer_and_run(model, num_samples=5000, num_iterations=2000,
debug=False, learning_rate=0.01,
early_stopping_patience=200) -> pd.DataFrame:
"""
debug - whether to output debug information
num_iterations - Number of optimizer iterations
learning_rate - Optimizer learning rate
early_stopping_patience - Stop training if loss hasn't improved for this many iterations
"""
def to_numpy(d):
return {k:v.detach().numpy() for k, v in d.items()}
def debug_output(guide):
quantiles = to_numpy(guide.quantiles([0.05, 0.5, 0.95]))
for k, v in quantiles.items():
print(f"{k}: {v[1]:.4f} [{v[0]:.4f}, {v[2]:.4f}]")
guide = pyro.infer.autoguide.AutoNormal(model,
init_loc_fn=pyro.infer.autoguide.init_to_median)
pyro.clear_param_store()
guide(training=True)
adam = pyro.optim.Adam({"lr": 0.01})
svi = SVI(model, guide, adam, loss=Trace_ELBO())
if debug:
debug_output(guide)
print()
best_loss = None
last_improvement = None
for j in range(num_iterations):
# calculate the loss and take a gradient step
loss = svi.step(training=True)
if best_loss is None or best_loss > loss:
best_loss = loss
last_improvement = j
if j % 100 == 0:
if debug:
print("[iteration %04d]" % (j + 1 ))
print(f"loss: {loss:.4f}")
debug_output(guide)
print()
if j > (last_improvement + early_stopping_patience):
print("Stopping Early")
break
print(f"Final loss: {loss:.4f}")
predictive = Predictive(model, guide=guide, num_samples=num_samples)
raw_samples = predictive(training=False)
return pandas.DataFrame(to_numpy(raw_samples))
samples = infer_and_run(functools.partial(model, *model_args),
num_iterations=1000,
num_samples=1000,
debug=True)
samples.describe().transpose()
from datetime import datetime
to_plot = [
# ("predict_observed", "predict_observed {area} on {date}"),
("predicted", "predicted {area} on {date}"),
("actual", "actual {area} on {date}"),
]
high_quantile = 0.95
low_quantile = 0.05
for area in areas:
for name, template in to_plot:
indices = [x for x in range((end_date - start_date).days)]
highs = []
lows = []
means = []
for i in indices:
date = start_date.add(days=i)
datestr = date.format('YYYY/MM/DD')
key = template.format(area = area, date=datestr)
try:
means.append(samples[key].mean())
highs.append(samples[key].quantile(high_quantile))
lows.append(samples[key].quantile(low_quantile))
except KeyError:
means.append(float("NaN"))
highs.append(float("NaN"))
lows.append(float("NaN"))
pyplot.fill_between(indices, lows, highs, label=name, alpha=0.5)
pyplot.plot(indices, means, label=name)
pyplot.title(area)
pyplot.legend()
pyplot.yscale("log")
pyplot.show()
```
```
%load_ext autoreload
%autoreload 2
import argparse
import sys
from time import sleep
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from rdkit.Chem.Crippen import MolLogP
from sklearn.metrics import accuracy_score, mean_squared_error
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
#from utils import read_ZINC_smiles, smiles_to_onehot, partition, OneHotLogPDataSet
from tqdm import tnrange, tqdm_notebook
import pandas as pd
import seaborn as sns
parser = argparse.ArgumentParser()
args = parser.parse_args("")
args.seed = 123
args.val_size = 0.15
args.test_size = 0.15
args.shuffle = True
np.random.seed(args.seed)
torch.manual_seed(args.seed)
```
## 1. Pre-Processing
```
def read_ZINC_smiles(file_name, num_mol):
f = open(file_name, 'r')
contents = f.readlines()
smi_list = []
logP_list = []
for i in tqdm_notebook(range(num_mol), desc='Reading Data'):
smi = contents[i].strip()
m = Chem.MolFromSmiles(smi)
smi_list.append(smi)
logP_list.append(MolLogP(m))
logP_list = np.asarray(logP_list).astype(float)
return smi_list, logP_list
def smiles_to_onehot(smi_list):
def smiles_to_vector(smiles, vocab, max_length):
while len(smiles) < max_length:
smiles += " "
vector = [vocab.index(str(x)) for x in smiles]
one_hot = np.zeros((len(vocab), max_length), dtype=int)
for i, elm in enumerate(vector):
one_hot[elm][i] = 1
return one_hot
vocab = np.load('./vocab.npy')
smi_total = []
for i, smi in tqdm_notebook(enumerate(smi_list), desc='Converting to One Hot'):
smi_onehot = smiles_to_vector(smi, list(vocab), 120)
smi_total.append(smi_onehot)
return np.asarray(smi_total)
def convert_to_graph(smiles_list):
adj = []
adj_norm = []
features = []
maxNumAtoms = 50
for i in tqdm_notebook(smiles_list, desc='Converting to Graph'):
# Mol
iMol = Chem.MolFromSmiles(i.strip())
#Adj
iAdjTmp = Chem.rdmolops.GetAdjacencyMatrix(iMol)
# Feature
if( iAdjTmp.shape[0] <= maxNumAtoms):
# Feature-preprocessing
iFeature = np.zeros((maxNumAtoms, 58))
iFeatureTmp = []
for atom in iMol.GetAtoms():
iFeatureTmp.append( atom_feature(atom) ) ### atom features only
iFeature[0:len(iFeatureTmp), 0:58] = iFeatureTmp ### 0 padding for feature-set
features.append(iFeature)
# Adj-preprocessing
iAdj = np.zeros((maxNumAtoms, maxNumAtoms))
iAdj[0:len(iFeatureTmp), 0:len(iFeatureTmp)] = iAdjTmp + np.eye(len(iFeatureTmp))
adj.append(np.asarray(iAdj))
features = np.asarray(features)
return features, adj
def atom_feature(atom):
return np.array(one_of_k_encoding_unk(atom.GetSymbol(),
['C', 'N', 'O', 'S', 'F', 'H', 'Si', 'P', 'Cl', 'Br',
'Li', 'Na', 'K', 'Mg', 'Ca', 'Fe', 'As', 'Al', 'I', 'B',
'V', 'Tl', 'Sb', 'Sn', 'Ag', 'Pd', 'Co', 'Se', 'Ti', 'Zn',
'Ge', 'Cu', 'Au', 'Ni', 'Cd', 'Mn', 'Cr', 'Pt', 'Hg', 'Pb']) +
one_of_k_encoding(atom.GetDegree(), [0, 1, 2, 3, 4, 5]) +
one_of_k_encoding_unk(atom.GetTotalNumHs(), [0, 1, 2, 3, 4]) +
one_of_k_encoding_unk(atom.GetImplicitValence(), [0, 1, 2, 3, 4, 5]) +
[atom.GetIsAromatic()]) # (40, 6, 5, 6, 1)
def one_of_k_encoding(x, allowable_set):
if x not in allowable_set:
raise Exception("input {0} not in allowable set{1}:".format(x, allowable_set))
#print list((map(lambda s: x == s, allowable_set)))
return list(map(lambda s: x == s, allowable_set))
def one_of_k_encoding_unk(x, allowable_set):
"""Maps inputs not in the allowable set to the last element."""
if x not in allowable_set:
x = allowable_set[-1]
return list(map(lambda s: x == s, allowable_set))
class GCNDataset(Dataset):
def __init__(self, list_feature, list_adj, list_logP):
self.list_feature = list_feature
self.list_adj = list_adj
self.list_logP = list_logP
def __len__(self):
return len(self.list_feature)
def __getitem__(self, index):
return self.list_feature[index], self.list_adj[index], self.list_logP[index]
def partition(list_feature, list_adj, list_logP, args):
num_total = list_feature.shape[0]
num_train = int(num_total * (1 - args.test_size - args.val_size))
num_val = int(num_total * args.val_size)
num_test = int(num_total * args.test_size)
feature_train = list_feature[:num_train]
adj_train = list_adj[:num_train]
logP_train = list_logP[:num_train]
feature_val = list_feature[num_train:num_train + num_val]
adj_val = list_adj[num_train:num_train + num_val]
logP_val = list_logP[num_train:num_train + num_val]
feature_test = list_feature[num_total - num_test:]
adj_test = list_adj[num_total - num_test:]
logP_test = list_logP[num_total - num_test:]
train_set = GCNDataset(feature_train, adj_train, logP_train)
val_set = GCNDataset(feature_val, adj_val, logP_val)
test_set = GCNDataset(feature_test, adj_test, logP_test)
partition = {
'train': train_set,
'val': val_set,
'test': test_set
}
return partition
list_smi, list_logP = read_ZINC_smiles('ZINC.smiles', 2000)
list_feature, list_adj = convert_to_graph(list_smi)
args.dict_partition = partition(list_feature, list_adj, list_logP, args)
```
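As a quick, self-contained check of the two encoding helpers defined above (restated here so the snippet runs on its own): a known symbol produces exactly one `True` at its position, while an unknown symbol falls through to the last ("unknown") slot.

```python
# Restated from the helpers above so this cell is self-contained.
def one_of_k_encoding(x, allowable_set):
    if x not in allowable_set:
        raise Exception("input {0} not in allowable set{1}:".format(x, allowable_set))
    return list(map(lambda s: x == s, allowable_set))

def one_of_k_encoding_unk(x, allowable_set):
    """Maps inputs not in the allowable set to the last element."""
    if x not in allowable_set:
        x = allowable_set[-1]
    return list(map(lambda s: x == s, allowable_set))

print(one_of_k_encoding('N', ['C', 'N', 'O']))       # [False, True, False]
print(one_of_k_encoding_unk('Xe', ['C', 'N', 'O']))  # [False, False, True]
```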
## 2. Model Construction
```
class GatedSkipConnection(nn.Module):
def __init__(self, in_dim, new_dim, out_dim, activation):
super(GatedSkipConnection, self).__init__()
self.in_dim = in_dim
self.new_dim = new_dim
self.out_dim = out_dim
self.activation = activation
self.linear_in = nn.Linear(in_dim, out_dim)
self.linear_new = nn.Linear(new_dim, out_dim)
self.sigmoid = nn.Sigmoid()
def forward(self, input_x, new_x):
z = self.gate_coefficient(input_x, new_x)
if (self.in_dim != self.out_dim):
input_x = self.linear_in(input_x)
if (self.new_dim != self.out_dim):
new_x = self.linear_new(new_x)
out = torch.mul(new_x, z) + torch.mul(input_x, 1.0-z)
return out
def gate_coefficient(self, input_x, new_x):
X1 = self.linear_in(input_x)
X2 = self.linear_new(new_x)
gate_coefficient = self.sigmoid(X1 + X2)
return gate_coefficient
class GraphConvolution(nn.Module):
def __init__(self, in_dim, hidden_dim, activation, sc='no'):
super(GraphConvolution, self).__init__()
self.in_dim = in_dim
self.hidden_dim = hidden_dim
self.activation = activation
self.sc = sc
self.linear = nn.Linear(self.in_dim,
self.hidden_dim)
nn.init.xavier_uniform_(self.linear.weight)
self.gated_skip_connection = GatedSkipConnection(self.in_dim,
self.hidden_dim,
self.hidden_dim,
self.activation)
def forward(self, x, adj):
out = self.linear(x)
out = torch.matmul(adj, out)
if (self.sc == 'gsc'):
out = self.gated_skip_connection(x, out)
elif (self.sc == 'no'):
out = self.activation(out)
else:
out = self.activation(out)
return out
class ReadOut(nn.Module):
def __init__(self, in_dim, out_dim, activation):
super(ReadOut, self).__init__()
self.in_dim = in_dim
self.out_dim= out_dim
self.linear = nn.Linear(self.in_dim,
self.out_dim)
nn.init.xavier_uniform_(self.linear.weight)
self.activation = activation
def forward(self, x):
out = self.linear(x)
out = torch.sum(out, dim=1)
out = self.activation(out)
return out
class Predictor(nn.Module):
def __init__(self, in_dim, out_dim, activation=None):
super(Predictor, self).__init__()
self.in_dim = in_dim
self.out_dim = out_dim
self.linear = nn.Linear(self.in_dim,
self.out_dim)
nn.init.xavier_uniform_(self.linear.weight)
self.activation = activation
def forward(self, x):
out = self.linear(x)
if self.activation != None:
out = self.activation(out)
return out
class LogPPredictor(nn.Module):
def __init__(self,
n_layer,
in_dim,
hidden_dim_1,
hidden_dim_2,
out_dim,
sc='no'):
super(LogPPredictor, self).__init__()
self.n_layer = n_layer
self.graph_convolution_1 = GraphConvolution(in_dim, hidden_dim_1, nn.ReLU(), sc)
self.graph_convolution_2 = GraphConvolution(hidden_dim_1, hidden_dim_1, nn.ReLU(), sc)
self.readout = ReadOut(hidden_dim_1, hidden_dim_2, nn.Sigmoid())
self.predictor_1 = Predictor(hidden_dim_2, hidden_dim_2, nn.ReLU())
self.predictor_2 = Predictor(hidden_dim_2, hidden_dim_2, nn.Tanh())
self.predictor_3 = Predictor(hidden_dim_2, out_dim)
def forward(self, x, adj):
out = self.graph_convolution_1(x, adj)
for i in range(self.n_layer-1):
out = self.graph_convolution_2(out, adj)
out = self.readout(out)
out = self.predictor_1(out)
out = self.predictor_2(out)
out = self.predictor_3(out)
return out
args.batch_size = 10
args.lr = 0.001
args.l2_coef = 0.001
args.optim = optim.Adam
args.criterion = nn.MSELoss()
args.epoch = 10
args.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
use_gpu = lambda x=True: torch.set_default_tensor_type(torch.cuda.DoubleTensor if torch.cuda.is_available() and x else torch.FloatTensor)
use_gpu()
print(args.device)
model = LogPPredictor(1, 58, 64, 128, 1, 'gsc')
model.to(args.device)
list_train_loss = list()
list_val_loss = list()
acc = 0
mse = 0
optimizer = args.optim(model.parameters(),
lr=args.lr,
weight_decay=args.l2_coef)
data_train = DataLoader(args.dict_partition['train'],
batch_size=args.batch_size,
shuffle=args.shuffle)
data_val = DataLoader(args.dict_partition['val'],
batch_size=args.batch_size,
shuffle=args.shuffle)
for epoch in tqdm_notebook(range(args.epoch), desc='Epoch'):
model.train()
epoch_train_loss = 0
for i, batch in enumerate(data_train):
list_feature = torch.tensor(batch[0])
list_adj = torch.tensor(batch[1])
list_logP = torch.tensor(batch[2])
list_logP = list_logP.view(-1,1)
list_feature, list_adj, list_logP = list_feature.to(args.device), list_adj.to(args.device), list_logP.to(args.device)
optimizer.zero_grad()
list_pred_logP = model(list_feature, list_adj)
train_loss = args.criterion(list_pred_logP, list_logP)
epoch_train_loss += train_loss.item()
train_loss.backward()
optimizer.step()
list_train_loss.append(epoch_train_loss/len(data_train))
model.eval()
epoch_val_loss = 0
with torch.no_grad():
for i, batch in enumerate(data_val):
list_feature = torch.tensor(batch[0])
list_adj = torch.tensor(batch[1])
list_logP = torch.tensor(batch[2])
list_logP = list_logP.view(-1,1)
list_feature, list_adj, list_logP = list_feature.to(args.device), list_adj.to(args.device), list_logP.to(args.device)
list_pred_logP = model(list_feature, list_adj)
val_loss = args.criterion(list_pred_logP, list_logP)
epoch_val_loss += val_loss.item()
list_val_loss.append(epoch_val_loss/len(data_val))
data_test = DataLoader(args.dict_partition['test'],
batch_size=args.batch_size,
shuffle=args.shuffle)
model.eval()
with torch.no_grad():
logP_total = list()
pred_logP_total = list()
for i, batch in enumerate(data_test):
list_feature = torch.tensor(batch[0])
list_adj = torch.tensor(batch[1])
list_logP = torch.tensor(batch[2])
logP_total += list_logP.tolist()
list_logP = list_logP.view(-1,1)
list_feature, list_adj, list_logP = list_feature.to(args.device), list_adj.to(args.device), list_logP.to(args.device)
list_pred_logP = model(list_feature, list_adj)
pred_logP_total += list_pred_logP.tolist()
mse = mean_squared_error(logP_total, pred_logP_total)
data = np.vstack((list_train_loss, list_val_loss))
data = np.transpose(data)
epochs = np.arange(args.epoch)
df_loss = pd.DataFrame(data, epochs, ["Train Loss", "Validation Loss"])
sns.set_style("darkgrid", {"axes.facecolor": ".9"})
grid = sns.lineplot(data=df_loss)
grid.set_title("Loss vs Epoch (logP)")
grid.set_ylabel("Loss")
grid.set_xlabel("Epoch")
from time import sleep
for i in tqdm_notebook(range(10), desc='1', leave=True, position=1):
for j in tqdm_notebook(range(100), desc='2', leave=False, position=2):
sleep(0.01)
```
# Control
In this notebook we want to control the chaos in the Henon map. The Henon map is defined by
$$
\begin{align}
x_{n+1}&=1-ax_n^2+y_n\\
y_{n+1}&=bx_n
\end{align}.
$$
```
from plotly import offline as py
from plotly import graph_objs as go
py.init_notebook_mode(connected=True)
```
### Fixed points
First we need to find the fixed points of the Henon map. From $y_n=y_{n+1}=bx_n$ we can eliminate $y_n$ in the first equation. The quadratic equation obtained after elimination with $x_n=x_{n+1}$ yields,
$$
\begin{align}
x^*=\frac{b-1\pm\sqrt{4a+(b-1)^2}}{2a},
&&
y^*=bx^*,
\end{align}
$$
as the fixed points of the Henon map.
```
def henon_map(x0, y0, a, b, N):
x = [x0]
y = [y0]
for i in range(N):
xn = x[-1]
yn = y[-1]
x.append(1 - a * xn**2 + yn)
y.append(b * xn)
return x, y
def fixed_points(a, b):
u = (b - 1) / (2 * a)
v = np.sqrt(4 * a + (b - 1)**2) / (2 * a)
x1 = u - v
x2 = u + v
y1 = b * x1
y2 = b * x2
return [(x1, y1), (x2, y2)]
((xf1, yf1), (xf2, yf2)) = fixed_points(a=1.4, b=0.3)
radius = 0.1
layout = go.Layout(
title='Henon Attractor',
xaxis=dict(title='x'),
yaxis=dict(title='y', scaleanchor='x'),
showlegend=False,
shapes=[
{
'type': 'circle',
'xref': 'x',
'yref': 'y',
'x0': xf1 + radius,
'y0': yf1 + radius,
'x1': xf1 - radius,
'y1': yf1 - radius,
'line': { 'color': 'gray' },
},
{
'type': 'circle',
'xref': 'x',
'yref': 'y',
'x0': xf2 + radius,
'y0': yf2 + radius,
'x1': xf2 - radius,
'y1': yf2 - radius,
},
]
)
x = []
y = []
for i in range(50):
x0, y0 = np.random.uniform(0.2, 0.8, 2)
xx, yy = henon_map(x0, y0, a=1.4, b=0.3, N=100)
if np.abs(xx[-1]) < 10 and np.abs(yy[-1]) < 10:
x += xx
y += yy
figure = go.Figure([
go.Scatter(x=x, y=y, mode='markers', marker=dict(size=3))
], layout)
py.iplot(figure)
```
So the second fixed point (positive sign) sits on the attractor.
```
def fixed_point(a, b):
return fixed_points(a, b)[1]
fixed_point(a=1.4, b=0.3)
```
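A quick numerical check (with the map and the positive-sign fixed point restated so the snippet is self-contained) confirms that this point is invariant under one iteration:

```python
import numpy as np

a, b = 1.4, 0.3
# fixed point with the positive sign (the one on the attractor)
xf = ((b - 1) + np.sqrt(4 * a + (b - 1)**2)) / (2 * a)
yf = b * xf

# one Henon iteration starting at the fixed point
x_next = 1 - a * xf**2 + yf
y_next = b * xf

print(np.isclose(x_next, xf), np.isclose(y_next, yf))  # True True
```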
We assume that coordinates and parameters are sufficiently close such that the following Taylor expansion is valid,$$
\boldsymbol{x}_{n+1}
=
\boldsymbol{F}\left(\boldsymbol{x}^*,\boldsymbol{r}_0\right)
+
\frac{d\boldsymbol{F}}{d\boldsymbol{x}_n}\Bigr|_{\boldsymbol{x}^*,\boldsymbol{r}_0}\left(\boldsymbol{x}_n-\boldsymbol{x}^*\right)
+
\frac{d\boldsymbol{F}}{d\boldsymbol{r}_n}\Bigr|_{\boldsymbol{x}^*,\boldsymbol{r}_0}\left(\boldsymbol{r}_n-\boldsymbol{r}_0\right).$$
In the regime where these linear approximations are valid we can use, $$
\Delta\boldsymbol{r}_n
=
\gamma\left(\boldsymbol{x}_n-\boldsymbol{x}^*\right). $$
Further introducing $\Delta\boldsymbol{x}_n=\boldsymbol{x}_n-\boldsymbol{x}^*$ we can rewrite the map as, $$
\Delta\boldsymbol{x}_{n+1}
=
\underbrace{\left(
\frac{d\boldsymbol{F}}{d\boldsymbol{x}_n}\Bigr|_{\boldsymbol{x}^*,\boldsymbol{r}_0}
+
\gamma\,\frac{d\boldsymbol{F}}{d\boldsymbol{r}_n}\Bigr|_{\boldsymbol{x}^*,\boldsymbol{r}_0}
\right)}_{A}
\Delta\boldsymbol{x}_n.$$
The Jacobians are $$
\begin{align}
\frac{d\boldsymbol{F}}{d\boldsymbol{x}_n}\Bigr|_{\boldsymbol{x}^*,\boldsymbol{r}_0}
=
\begin{pmatrix}
-2 a_0 x^* & 1 \\
b_0 & 0
\end{pmatrix},
&&
\frac{d\boldsymbol{F}}{d\boldsymbol{r}_n}\Bigr|_{\boldsymbol{x}^*,\boldsymbol{r}_0}
=
\begin{pmatrix}
-{x^*}^2 & 0 \\
0 & x^*
\end{pmatrix}
\end{align}. $$
Thus the matrix $A$ reads, $$
A
=
\begin{pmatrix}
-2a_0x^*-\gamma{x^*}^2 & 1 \\
b_0 & \gamma x^*
\end{pmatrix}.
$$ The optimal value for $\gamma$ can be found by demanding $A\Delta\boldsymbol{x}_n=0$.
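As an illustration (a sketch, not part of the original derivation), we can build $A$ for a grid of scalar $\gamma$ values and look for the $\gamma$ that minimizes the spectral radius of $A$ — the magnitude of its largest eigenvalue — since that controls how fast $\Delta\boldsymbol{x}_n$ decays:

```python
import numpy as np

a0, b0 = 1.4, 0.3
# fixed point on the attractor (positive sign)
xf = ((b0 - 1) + np.sqrt(4 * a0 + (b0 - 1)**2)) / (2 * a0)

def spectral_radius(gamma):
    # the controlled-map matrix A from the derivation above
    A = np.array([[-2 * a0 * xf - gamma * xf**2, 1.0],
                  [b0, gamma * xf]])
    return np.max(np.abs(np.linalg.eigvals(A)))

gammas = np.linspace(-5, 5, 1001)
radii = [spectral_radius(g) for g in gammas]
best_gamma = gammas[int(np.argmin(radii))]
print(best_gamma, min(radii))
```

The uncontrolled case corresponds to $\gamma=0$; any $\gamma$ with a smaller spectral radius already damps perturbations faster than the free map.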
```
def eigenvalues(a, b, gamma=1.0):
xf, yf = fixed_point(a, b)
A = np.array([
[-2 * a * xf - gamma * xf**2, 1],
[b, gamma * xf]
])
return np.linalg.eigvals(A)
eigenvalues(a=1.4, b=0.3)
```
The Jacobian of the Henon map close to $(x^*,a_0,b_0)$ is given through, $$
\begin{pmatrix}
-2 a_0 x^* & 1 \\
b_0 & 0
\end{pmatrix},$$
and has eigenvalues $$\lambda=-a_0\left[x^*\pm\sqrt{{x^*}^2+b_0/a_0^2}\right]$$
```
fixed_point(a=1.4, b=0.3)
```
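We can cross-check this closed form against `numpy.linalg.eigvals` (fixed point restated so the snippet runs standalone):

```python
import numpy as np

a0, b0 = 1.4, 0.3
xf = ((b0 - 1) + np.sqrt(4 * a0 + (b0 - 1)**2)) / (2 * a0)

# Jacobian of the Henon map at the fixed point
J = np.array([[-2 * a0 * xf, 1.0],
              [b0, 0.0]])
numeric = np.sort(np.linalg.eigvals(J))

# closed-form eigenvalues: lambda = -a0 * (xf -+ sqrt(xf**2 + b0 / a0**2))
closed = np.sort(np.array([
    -a0 * (xf + np.sqrt(xf**2 + b0 / a0**2)),
    -a0 * (xf - np.sqrt(xf**2 + b0 / a0**2)),
]))
print(np.allclose(numeric, closed))  # True
```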
## Extracting Titanic Disaster Data From Kaggle
```
!pip install python-dotenv
from dotenv import load_dotenv, find_dotenv
# find .env automatically by walking up directories until it's found
dotenv_path = find_dotenv()
# load up the entries as environment variables
load_dotenv(dotenv_path)
# extracting environment variable using os.environ.get
import os
KAGGLE_USERNAME = os.environ.get("KAGGLE_USERNAME")
print(KAGGLE_USERNAME)
# imports
import requests
from requests import session
import os
from dotenv import load_dotenv, find_dotenv
# payload for post
payload = {
'action': 'login',
'username': os.environ.get("KAGGLE_USERNAME"),
'password': os.environ.get("KAGGLE_PASSWORD")
}
# url for train file (get the link from Kaggle website)
url = 'https://www.kaggle.com/c/titanic/download/train.csv'
# setup session
with session() as c:
# post request
c.post('https://www.kaggle.com/account/login', data=payload)
# get request
response = c.get(url)
# print response text
print(response.text)
from requests import session
# payload
payload = {
'action': 'login',
'username': os.environ.get("KAGGLE_USERNAME"),
'password': os.environ.get("KAGGLE_PASSWORD")
}
def extract_data(url, file_path):
'''
extract data from kaggle
'''
# setup session
with session() as c:
c.post('https://www.kaggle.com/account/login', data=payload)
# open file in binary mode to write the downloaded bytes
with open(file_path, 'wb') as handle:
response = c.get(url, stream=True)
for block in response.iter_content(1024):
handle.write(block)
# urls
train_url = 'https://www.kaggle.com/c/titanic/download/train.csv'
test_url = 'https://www.kaggle.com/c/titanic/download/test.csv'
# file paths
raw_data_path = os.path.join(os.path.pardir,'data','raw')
train_data_path = os.path.join(raw_data_path, 'train.csv')
test_data_path = os.path.join(raw_data_path, 'test.csv')
# extract data
extract_data(train_url,train_data_path)
extract_data(test_url,test_data_path)
!ls -l ../data/raw
```
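Kaggle answers unauthenticated download requests with an HTML login page rather than the CSV, so it is worth checking what actually landed on disk before relying on it. A minimal sketch (the `PassengerId` header assumed here is simply the first column of the Titanic training file):

```python
# Sketch: detect whether a downloaded "CSV" is actually an HTML error/login page.
def looks_like_csv(file_path, expected_prefix='PassengerId'):
    with open(file_path) as handle:
        first_line = handle.readline().strip()
    # an HTML response starts with a tag, a real export starts with the header row
    return not first_line.lower().startswith('<') and first_line.startswith(expected_prefix)
```

If the check returns `False`, the session login above most likely did not succeed.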
### Building the file script
```
get_raw_data_script_file = os.path.join(os.path.pardir,'src','data','get_raw_data.py')
%%writefile $get_raw_data_script_file
# -*- coding: utf-8 -*-
import os
from dotenv import find_dotenv, load_dotenv
from requests import session
import logging
# payload for login to kaggle
payload = {
'action': 'login',
'username': os.environ.get("KAGGLE_USERNAME"),
'password': os.environ.get("KAGGLE_PASSWORD")
}
def extract_data(url, file_path):
'''
method to extract data
'''
with session() as c:
c.post('https://www.kaggle.com/account/login', data=payload)
with open(file_path, 'wb') as handle:
response = c.get(url, stream=True)
for block in response.iter_content(1024):
handle.write(block)
def main(project_dir):
'''
main method
'''
# get logger
logger = logging.getLogger(__name__)
logger.info('getting raw data')
# urls
train_url = 'https://www.kaggle.com/c/titanic/download/train.csv'
test_url = 'https://www.kaggle.com/c/titanic/download/test.csv'
# file paths
raw_data_path = os.path.join(project_dir,'data','raw')
train_data_path = os.path.join(raw_data_path, 'train.csv')
test_data_path = os.path.join(raw_data_path, 'test.csv')
# extract data
extract_data(train_url,train_data_path)
extract_data(test_url,test_data_path)
logger.info('downloaded raw training and test data')
if __name__ == '__main__':
# getting root directory
project_dir = os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)
# setup logger
log_fmt = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
logging.basicConfig(level=logging.INFO, format=log_fmt)
# find .env automatically by walking up directories until it's found
dotenv_path = find_dotenv()
# load up the entries as environment variables
load_dotenv(dotenv_path)
# call the main
main(project_dir)
!python $get_raw_data_script_file
```
# Single model
```
from consav import runtools
runtools.write_numba_config(disable=0,threads=4)
%matplotlib inline
%load_ext autoreload
%autoreload 2
# Local modules
from Model import RetirementClass
import SimulatedMinimumDistance as SMD
import figs
import funs
# Global modules
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import time
```
### Solve and simulate model
```
tic1 = time.time()
Single = RetirementClass()
tic2 = time.time()
Single.recompute()
tic3 = time.time()
Single.solve()
tic4 = time.time()
Single.simulate(accuracy=True,tax=True)
tic5 = time.time()
print('Class :', round(tic2-tic1,2))
print('Precompute:', round(tic3-tic2,2))
print('Solve :', round(tic4-tic3,2))
print('Simulate :', round(tic5-tic4,2))
tic1 = time.time()
Single.solve()
tic2 = time.time()
Single.simulate(accuracy=True,tax=True)
tic3 = time.time()
print('Solve :', round(tic2-tic1,2))
print('Simulate :', round(tic3-tic2,2))
```
### Retirement probabilities from solution
Women
```
G = figs.choice_probs(Single,ma=0)
G['legendsize'] = 12
G['marker'] = 'o'
figs.MyPlot(G,linewidth=3).savefig('figs/Model/Single_ChoiceProb_Women.png')
```
Men
```
G = figs.choice_probs(Single,ma=1)
G['legendsize'] = 12
G['marker'] = 'o'
figs.MyPlot(G,linewidth=3).savefig('figs/Model/Single_ChoiceProb_Men.png')
```
### Simulation
```
def rename_gender(G_lst):
G_lst[0]['label'] = ['Women']
G_lst[1]['label'] = ['Men']
936092.2561647706 - np.nansum(Single.sol.c)
37833823.081779644 - np.nansum(Single.sol.v)
print(np.nansum(Single.par.labor))
print(np.nansum(Single.par.erp))
print(np.nansum(Single.par.oap))
Single.par.T_erp
68.51622393567519 - np.nansum(Single.par.erp)
Single.par.pension_male = np.array([10.8277686, 18.94859504])
Single.par.pension_female = np.array([ 6.6438835, 11.62679612])
transitions.precompute_inc_single(Single.par)
Single.solve()
Single.simulate()
Single.par.start_T = 53
Single.par.simT = Single.par.end_T - Single.par.start_T + 1
Single.par.var = np.array([0.202, 0.161])
Single.par.reg_labor_male = np.array((1.166, 0.360, 0.432, -0.406))
Single.par.reg_labor_female = np.array((4.261, 0.326, 0.303, -0.289))
Single.par.priv_pension_female = 728*1000/Single.par.denom
Single.par.priv_pension_male = 1236*1000/Single.par.denom
Single.solve(recompute=True)
Single.simulate()
np.nanmean(Single.sim.m[:,0])
Single.sim.m[:,0] = 20
Single.simulate()
Gw = figs.retirement_probs(Single,MA=[0])
Gm = figs.retirement_probs(Single,MA=[1])
rename_gender([Gw,Gm])
figs.MyPlot([Gw,Gm],linewidth=3).savefig('figs/Model/SimSingleProbs')
Gw = figs.lifecycle(Single,var='m',MA=[0],ages=[57,80])
Gm = figs.lifecycle(Single,var='m',MA=[1],ages=[57,80])
rename_gender([Gw,Gm])
figs.MyPlot([Gw,Gm],linewidth=3,save=False)
Gw = figs.lifecycle(Single,var='c',MA=[0],ages=[57,80])
Gm = figs.lifecycle(Single,var='c',MA=[1],ages=[57,80])
rename_gender([Gw,Gm])
figs.MyPlot([Gw,Gm],linewidth=3,save=False)
```
### Consumption functions
Retired
```
G = figs.policy(Single,var='c',T=list(range(77,87))[::2],MA=[0],ST=[3],RA=[0],D=[0],label=['t'])
G['legendsize'] = 12
figs.MyPlot(G,ylim=[0,12],save=False)
G = figs.policy(Single,var='c',T=list(range(97,111))[::2],MA=[0],ST=[3],RA=[0],D=[0],label=['t'])
G['legendsize'] = 12
figs.MyPlot(G,ylim=[0,16],save=False)
```
Working
```
G = figs.policy(Single,var='c',T=list(range(57,67))[::2],MA=[0],ST=[3],RA=[0],D=[1],label=['t'])
G['legendsize'] = 12
figs.MyPlot(G,ylim=[0,8],save=False)
G = figs.policy(Single,var='c',T=list(range(67,75))[::2],MA=[0],ST=[3],RA=[0],D=[1],label=['t'])
G['legendsize'] = 12
figs.MyPlot(G,ylim=[0,10],save=False)
```
### Simulation - Retirement
```
def rename(G_lst):
G_lst[0]['label'] = ['High skilled']
G_lst[1]['label'] = ['Base']
G_lst[2]['label'] = ['Low skilled']
```
Women
```
G_hs = figs.retirement_probs(Single,MA=[0],ST=[1,3])
G_base = figs.retirement_probs(Single,MA=[0])
G_ls = figs.retirement_probs(Single,MA=[0],ST=[0,2])
rename([G_hs,G_base,G_ls])
figs.MyPlot([G_hs,G_base,G_ls],linewidth=3,save=False)
```
Men
```
G_hs = figs.retirement_probs(Single,MA=[1],ST=[1,3])
G_base = figs.retirement_probs(Single,MA=[1])
G_ls = figs.retirement_probs(Single,MA=[1],ST=[0,2])
rename([G_hs,G_base,G_ls])
figs.MyPlot([G_hs,G_base,G_ls],linewidth=3,save=False)
```
### Simulation - Consumption
Women
```
G_hs = figs.lifecycle(Single,var='c',MA=[0],ST=[1,3],ages=[57,80])
G_base = figs.lifecycle(Single,var='c',MA=[0],ages=[57,80])
G_ls = figs.lifecycle(Single,var='c',MA=[0],ST=[0,2],ages=[57,80])
rename([G_hs,G_base,G_ls])
figs.MyPlot([G_hs,G_base,G_ls],linewidth=3,save=False)
```
Men
```
G_hs = figs.lifecycle(Single,var='c',MA=[1],ST=[1,3],ages=[57,80])
G_base = figs.lifecycle(Single,var='c',MA=[1],ages=[57,80])
G_ls = figs.lifecycle(Single,var='c',MA=[1],ST=[0,2],ages=[57,80])
rename([G_hs,G_base,G_ls])
figs.MyPlot([G_hs,G_base,G_ls],linewidth=3,save=False)
```
### Simulation - Wealth
Women
```
G_hs = figs.lifecycle(Single,var='m',MA=[0],ST=[1,3],ages=[57,68])
G_base = figs.lifecycle(Single,var='m',MA=[0],ages=[57,68])
G_ls = figs.lifecycle(Single,var='m',MA=[0],ST=[0,2],ages=[57,68])
rename([G_hs,G_base,G_ls])
figs.MyPlot([G_hs,G_base,G_ls],linewidth=3,save=False)
```
Men
```
G_hs = figs.lifecycle(Single,var='m',MA=[1],ST=[1,3],ages=[57,68])
G_base = figs.lifecycle(Single,var='m',MA=[1],ages=[57,68])
G_ls = figs.lifecycle(Single,var='m',MA=[1],ST=[0,2],ages=[57,68])
rename([G_hs,G_base,G_ls])
figs.MyPlot([G_hs,G_base,G_ls],linewidth=3,save=False)
```
### Euler errors
```
MA = [0,1]
ST = [0,1,2,3]
ages = [Single.par.start_T,Single.par.end_T-1]
for ma in MA:
for st in ST:
funs.log_euler(Single,MA=[ma],ST=[st],ages=ages,plot=True)
print('Total:',funs.log_euler(Single,ages=ages)[0])
Na = Single.par.Na
funs.resolve(Single,Na=np.linspace(50,1000))
Single.par.Na = Na
Single.recompute() # reset
a_phi = test.par.a_phi
funs.resolve(test,a_phi = np.linspace(1.0,2.0,num=10))
test.par.a_phi = a_phi
test.solve(recompute=True) # reset
```
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Logistic-Regression" data-toc-modified-id="Logistic-Regression-1"><span class="toc-item-num">1 </span>Logistic Regression</a></span><ul class="toc-item"><li><span><a href="#Logistic-Function" data-toc-modified-id="Logistic-Function-1.1"><span class="toc-item-num">1.1 </span>Logistic Function</a></span></li><li><span><a href="#Interpreting-the-Intercept" data-toc-modified-id="Interpreting-the-Intercept-1.2"><span class="toc-item-num">1.2 </span>Interpreting the Intercept</a></span></li><li><span><a href="#Defining-The-Cost-Function" data-toc-modified-id="Defining-The-Cost-Function-1.3"><span class="toc-item-num">1.3 </span>Defining The Cost Function</a></span></li><li><span><a href="#Gradient" data-toc-modified-id="Gradient-1.4"><span class="toc-item-num">1.4 </span>Gradient</a></span></li><li><span><a href="#Stochastic/Mini-batch-Gradient" data-toc-modified-id="Stochastic/Mini-batch-Gradient-1.5"><span class="toc-item-num">1.5 </span>Stochastic/Mini-batch Gradient</a></span></li><li><span><a href="#Implementation" data-toc-modified-id="Implementation-1.6"><span class="toc-item-num">1.6 </span>Implementation</a></span></li><li><span><a href="#Comparing-Result-and-Convergence-Behavior" data-toc-modified-id="Comparing-Result-and-Convergence-Behavior-1.7"><span class="toc-item-num">1.7 </span>Comparing Result and Convergence Behavior</a></span></li><li><span><a href="#Pros-and-Cons-of-Logistic-Regression" data-toc-modified-id="Pros-and-Cons-of-Logistic-Regression-1.8"><span class="toc-item-num">1.8 </span>Pros and Cons of Logistic Regression</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
```
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', 'notebook_format'))
from formats import load_style
load_style(plot_style = False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,matplotlib,sklearn
```
# Logistic Regression
**Logistic regression** is an excellent tool to know for classification problems, i.e. problems where the output value that we wish to predict takes on only a small number of discrete values. Here we'll focus on the binary classification problem, where the output can take on only two distinct classes. To make our examples more concrete, we will consider the Glass dataset.
```
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/glass/glass.data'
col_names = ['id', 'ri', 'na', 'mg', 'al', 'si', 'k', 'ca', 'ba', 'fe', 'glass_type']
glass = pd.read_csv(url, names = col_names, index_col = 'id')
glass.sort_values('al', inplace = True)
# convert the glass type into binary outcome
# types 1, 2, 3 are window glass
# types 5, 6, 7 are household glass
glass['household'] = glass['glass_type'].map({1: 0, 2: 0, 3: 0, 5: 1, 6: 1, 7: 1})
glass.head()
```
Our task is to predict the `household` column using the `al` column. Let's visualize the relationship between the input and output, and train the logistic regression to see the outcome it produces.
```
logreg = LogisticRegression(C = 1e9)
X = glass['al'].values.reshape(-1, 1) # sklearn doesn't accept a 1d-array, convert it to 2d
y = np.array(glass['household'])
logreg.fit(X, y)
# predict the probability that each observation belongs to class 1
# The first column indicates the predicted probability of class 0,
# and the second column indicates the predicted probability of class 1
glass['household_pred_prob'] = logreg.predict_proba(X)[:, 1]
# plot the predicted probability (familiarize yourself with the S-shape)
# change default figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
plt.scatter(glass['al'], glass['household'])
plt.plot(glass['al'], glass['household_pred_prob'])
plt.xlabel('al')
plt.ylabel('household')
plt.show()
```
As we can see, logistic regression can output the probabilities of observation belonging to a specific class and these probabilities can be converted into class predictions by choosing a cutoff value (e.g. probability higher than 0.5 is classified as class 1).
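The cutoff step itself is a single comparison; as a sketch with made-up probabilities:

```python
import numpy as np

pred_prob = np.array([0.1, 0.4, 0.6, 0.9])  # hypothetical predicted probabilities
pred_class = (pred_prob > 0.5).astype(int)  # cutoff at 0.5
print(pred_class)  # [0 0 1 1]
```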
## Logistic Function
In **Logistic Regression**, the log-odds of a categorical response being "true" (1) is modeled as a linear combination of the features:
\begin{align*}
\log \left({p\over 1-p}\right) &= w_0 + w_1x_1, ..., w_jx_j \nonumber \\
&= w^Tx \nonumber
\end{align*}
Where:
- $w_{0}$ is the intercept term, and $w_1$ to $w_j$ represents the parameters for all the other features (a total of j features).
- By convention we assume that $x_0 = 1$, so that we can re-write the whole thing using the matrix notation $w^Tx$.
This is called the **logit function**. The equation can be re-arranged into the **logistic function**:
$$p = \frac{e^{w^Tx}} {1 + e^{w^Tx}}$$
Or in the more commonly seen form:
$$h_w(x) = \frac{1}{ 1 + e^{-w^Tx} }$$
Let's take a look at the plot of the function:
```
x_values = np.linspace(-5, 5, 100)
y_values = [1 / (1 + np.exp(-x)) for x in x_values]
plt.plot(x_values, y_values)
plt.title('Logistic Function')
plt.show()
```
The **logistic function** has some nice properties. The y-value represents a probability and is always bounded between 0 and 1, which is exactly what we want for probabilities. An x value of 0 gives a probability of 0.5; a more positive x value gives a higher probability, while a more negative x value gives a lower probability.
Toy sample code of how to predict the probability given the data and the weight is provided below.
```
def predict_probability(data, weights):
"""probability predicted by the logistic regression"""
score = np.dot(data, weights)
predictions = 1 / (1 + np.exp(-score))
return predictions
```
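For example, with a column of ones for the intercept and hypothetical weights $w_0 = 0$, $w_1 = 1$ (the helper is restated here so the cell is self-contained, and these weights are illustrative, not the ones fitted above), the probabilities trace out the S-shape:

```python
import numpy as np

def predict_probability(data, weights):
    """probability predicted by the logistic regression"""
    score = np.dot(data, weights)
    predictions = 1 / (1 + np.exp(-score))
    return predictions

# column of ones for the intercept term, then the feature value
data = np.array([[1.0, -2.0],
                 [1.0, 0.0],
                 [1.0, 2.0]])
weights = np.array([0.0, 1.0])  # hypothetical w0, w1
print(predict_probability(data, weights))  # approximately [0.119 0.5 0.881]
```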
## Interpreting the Intercept
We can check logistic regression's coefficient does in fact generate the log-odds.
```
# compute predicted log-odds for al = 2 using the equation
# convert log-odds to odds
# convert odds to probability
logodds = logreg.intercept_ + logreg.coef_[0] * 2
odds = np.exp(logodds)
prob = odds / (1 + odds)
print(prob)
logreg.predict_proba([[2]])[:, 1]
# examine the coefficient for al
print('a1', logreg.coef_[0])
```
**Interpretation:** 1 unit increase in `al` is associated with a 4.18 unit increase in the log-odds of the observation being classified as `household 1`. We can confirm that again by doing the calculation ourselves.
```
# increasing al by 1 (so that al now becomes 3)
# increases the log-odds by 4.18
logodds = logodds + logreg.coef_[0]
odds = np.exp(logodds)
prob = odds / (1 + odds)
print(prob)
logreg.predict_proba([[3]])[:, 1]
```
## Defining The Cost Function
When utilizing logistic regression, we are trying to learn the $w$ values that maximize the probability of correctly classifying our glass samples. Suppose someone gave us some $w$ values for the logistic regression model, how would we determine whether they were good values or not? What we would hope is that for the households of class 1, the probability values are close to 1, and for the households of class 0, the probability is close to 0.
But we don't care about getting the correct probability for just one observation, we want to correctly classify all our observations. If we assume our data are independent and identically distributed (think of it as all of them are treated equally), we can just take the product of all our individually calculated probabilities and that becomes the objective function we want to maximize. So in math:
$$\prod_{class1}h_w(x)\prod_{class0}\big(1 - h_w(x)\big)$$
The $\prod$ symbol means take the product of the $h_w(x)$ for the observations that are classified as that class. You will notice that for observations that are labeled as class 0, we are taking 1 minus the logistic function. That is because we are trying to find a value to maximize, and since observations that are labeled as class 0 should have a probability close to zero, 1 minus the probability should be close to 1. This procedure is also known as the **maximum likelihood estimation** and the following link contains a nice discussion of maximum likelihood using linear regression as an example. [Blog: The Principle of Maximum Likelihood](http://suriyadeepan.github.io/2017-01-22-mle-linear-regression/)
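To make the objective concrete, here is a tiny numeric illustration with made-up predicted probabilities for three class-1 and two class-0 observations:

```python
import numpy as np

# hypothetical predicted probabilities h_w(x) for five observations
probs = np.array([0.9, 0.8, 0.7, 0.2, 0.1])
labels = np.array([1, 1, 1, 0, 0])

# product of h_w(x) for class 1 and (1 - h_w(x)) for class 0
likelihood = np.prod(np.where(labels == 1, probs, 1 - probs))
print(likelihood)  # 0.9 * 0.8 * 0.7 * 0.8 * 0.9 ≈ 0.36288
```

A better set of weights would push the class-1 probabilities toward 1 and the class-0 probabilities toward 0, driving this product closer to 1.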
Next we will re-write the original cost function as:
$$\ell(w) = \sum_{i=1}^{N}y_{i}log(h_w(x_{i})) + (1-y_{i})log(1-h_w(x_{i}))$$
Where:
- We define $y_{i}$ to be 1 when the $i_{th}$ observation is labeled class 1 and 0 when labeled as class 0, then we only compute $h_w(x_{i})$ for observations that are labeled class 1 and $1 - h_w(x_{i})$ for observations that are labeled class 0, which is still the same idea as the original function.
- Next we'll transform the original $h_w(x_{i})$ by taking the log. As we'll see later, this logarithm transformation makes our cost function more convenient to work with, and because the logarithm is a monotonically increasing function, the logarithm of a function achieves its maximum value at the same points as the function itself. When we take the log, the product across all data points becomes a sum. See [log rules](http://www.mathwords.com/l/logarithm_rules.htm) for more details (Hint: log(ab) = log(a) + log(b)).
- The $N$ simply represents the total number of the data.
Often times you'll also see the notation above be simplified in the form of a maximum likelihood estimator:
$$ \ell(w) = \sum_{i=1}^{N} log \big( P( y_i \mid x_i, w ) \big) $$
The equation above simply denotes the idea that we estimate the parameters $w$ by maximizing the conditional probability of $y_i$ given $x_i$.
Now by definition of probability in the logistic regression model: $h_w(x_{i}) = 1 \big/ \big( 1 + e^{-w^T x_i} \big)$ and $1- h_w(x_{i}) = e^{ -w^T x_i } \big/ \big( 1 + e^{ -w^T x_i } \big)$. By substituting these expressions into our $\ell(w)$ equation and simplifying it further we can obtain a simpler expression.
$$
\begin{align}
\ell(w)
&= \sum_{i=1}^{N}y_{i}log(h_w(x_{i})) + (1-y_{i})log(1-h_w(x_{i})) \nonumber \\
&= \sum_{i=1}^{N} y_{i} log( \frac{1}{ 1 + e^{ -w^T x_i } } ) + ( 1 - y_{i} )
log( \frac{ e^{ -w^T x_i } }{ 1 + e^{ -w^T x_i } } ) \nonumber \\
&= \sum_{i=1}^{N} -y_{i} log( 1 + e^{ -w^T x_i } ) + ( 1 - y_{i} )
( -w^T x_i - log( 1 + e^{ -w^T x_i } ) ) \nonumber \\
&= \sum_{i=1}^{N} ( y_{i} - 1 ) ( w^T x_i ) - log( 1 + e^{ -w^T x_i } ) \nonumber
\end{align}
$$
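We can sanity-check this algebra numerically: for arbitrary toy data and weights, the simplified expression should agree with the original $y\log(h) + (1-y)\log(1-h)$ form.

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(10, 3)            # toy features
y = rng.randint(0, 2, size=10)  # toy 0/1 labels
w = rng.randn(3)                # arbitrary weights

scores = X @ w
h = 1 / (1 + np.exp(-scores))

original = np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))
simplified = np.sum((y - 1) * scores - np.log(1 + np.exp(-scores)))
print(np.isclose(original, simplified))  # True
```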
We'll use the formula above to compute the log likelihood for the entire dataset, which is used to assess the convergence of the algorithm. Toy code provided below.
```
def compute_avg_log_likelihood(data, label, weights):
"""
the function uses a simple check to prevent overflow problem,
where numbers gets too large to represent and is converted to inf
an example of overflow is provided below, when this problem occurs,
simply use the original score (without taking the exponential)
scores = np.array( [ -10000, 200, 300 ] )
logexp = np.log( 1 + np.exp(-scores) )
logexp
"""
scores = np.dot(data, weights)
logexp = np.log(1 + np.exp(-scores))
# simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
log_likelihood = np.sum((label - 1) * scores - logexp) / data.shape[0]
return log_likelihood
```
**Note:** We made one tiny modification to the log likelihood function: we added a ${1/N}$ term which averages the log likelihood across all data points. The ${1/N}$ term will make it easier for us to compare stochastic gradient ascent with batch gradient ascent later.
## Gradient
Now that we obtain the formula to assess our algorithm, we'll dive into the meat of the algorithm, which is to derive the gradient for the formula (the derivative of the formula with respect to each coefficient):
$$\ell(w) = \sum_{i=1}^{N} ( y_{i} - 1 ) ( w^T x_i ) - log( 1 + e^{ -w^T x_i } )$$
And it turns out the derivative of the log likelihood with respect to a single coefficient $w_j$ is as follows (the form is the same for all coefficients):
$$
\frac{\partial\ell(w)}{\partial w_j} = \sum_{i=1}^N (x_{ij})\left( y_i - \frac{1}{ 1 + e^{-w^Tx_i} } \right )
$$
To compute it, you simply need the following two terms:
- $\left( y_i - \frac{1}{ 1 + e^{-w^Tx_i} } \right )$ is the vector containing the difference between the predicted probability and the original label.
- $x_{ij}$ is the vector containing the $j_{th}$ feature's value.
For a step by step derivation, consider going through the following link. [Blog: Maximum likelihood and gradient descent demonstration](https://zlatankr.github.io/posts/2017/03/06/mle-gradient-descent), it uses a slightly different notation, but the walkthrough should still be pretty clear.
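Another way to convince yourself the gradient formula is right, without following the full derivation, is to compare it against a finite-difference approximation of the log likelihood (a standard sanity check, shown here with toy data):

```python
import numpy as np

def log_likelihood(X, y, w):
    scores = X @ w
    return np.sum((y - 1) * scores - np.log(1 + np.exp(-scores)))

def gradient(X, y, w):
    # dl/dw_j = sum_i x_ij * (y_i - sigmoid(w^T x_i))
    errors = y - 1 / (1 + np.exp(-(X @ w)))
    return X.T @ errors

rng = np.random.RandomState(1)
X = rng.randn(20, 3)
y = rng.randint(0, 2, size=20)
w = rng.randn(3)

# central-difference approximation along each coordinate
eps = 1e-6
numeric = np.array([
    (log_likelihood(X, y, w + eps * e) - log_likelihood(X, y, w - eps * e)) / (2 * eps)
    for e in np.eye(3)
])
print(np.allclose(numeric, gradient(X, y, w), atol=1e-4))  # True
```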
## Stochastic/Mini-batch Gradient
The problem with computing the gradient (or so called batched gradient) is the term $\sum_{i=1}^{N}$. This means that we must sum the contributions over all the data points to calculate the gradient, and this can be problematic if the dataset we're studying is extremely large. Thus, in stochastic gradient, we can use a single point as an approximation to the gradient:
$$
\frac{\partial\ell_i(w)}{\partial w_j} = (x_{ij})\left( y_i - \frac{1}{ 1 + e^{-w^Tx_i} } \right )
$$
**Note1:** Because the **stochastic gradient** algorithm uses each row of data in turn to update the gradient, any implicit ordering in the data will negatively affect the convergence of the algorithm. At an extreme, what if the data were sorted so that all the positive examples came before the negative ones? In that case, even if most examples are negative, we might converge on an answer of +1 because we never get to see the other data. To avoid this, one practical trick is to shuffle the data before we begin so the rows are in random order.
**Note2:** Stochastic gradient computes the gradient using only 1 data point to update the parameters, while batch gradient uses all $N$ data points. An alternative to these two extremes is a simple change that allows us to use a **mini-batch** of $B \leq N$ data points to calculate the gradient. This simple approach is faster than batch gradient but less noisy than stochastic gradient, which uses only 1 data point. Given a mini-batch (or a set of data points) $\mathbf{x}_{i}, \mathbf{x}_{i+1} \ldots \mathbf{x}_{i+B}$, the gradient function for this mini-batch of data points is given by:
$$
\frac{1}{B} \sum_{s = i}^{i+B} \frac{\partial\ell_s(w)}{\partial w_j} = \frac{1}{B} \sum_{s = i}^{i+B} x_{sj}\left( y_s - \frac{1}{ 1 + e^{-w^Tx_s} } \right )
$$
Here, the $\frac{1}{B}$ means that we are normalizing the gradient update rule by the batch size $B$. In other words, we update the coefficients using the **average gradient over data points** (instead of using a pure summation). By using the average gradient, we ensure that the magnitude of the gradient is approximately the same for all batch sizes. This way, we can more easily compare various batch sizes and study the effect it has on the algorithm.
## Implementation
Recall our task is to find the optimal value for each individual weight to lower the cost. This requires taking the partial derivative of the cost/error function with respect to a single weight, and then running gradient descent for each individual weight to update them. Thus, for any individual weight $w_j$, we'll compute the following:
$$ w_j^{(t + 1)} = w_j^{(t)} + \alpha \cdot \frac{1}{B} \sum_{s = i}^{i+B} \frac{\partial\ell_s(w)}{\partial w_j}$$
Where:
- $\alpha$ denotes the learning rate, or so-called step size; in other places you'll see it denoted as $\eta$.
- $w_j^{(t)}$ denotes the weight of the $j_{th}$ feature at iteration $t$.
And we'll do this iteratively for each weight, many times, until the whole network's cost function is minimized.
```
# put the code together into one cell
def predict_probability(data, weights):
"""probability predicted by the logistic regression"""
score = np.dot(data, weights)
predictions = 1 / (1 + np.exp(-score))
return predictions
def compute_avg_log_likelihood(data, label, weights):
"""
the function uses a simple check to prevent overflow problem,
where numbers gets too large to represent and is converted to inf
an example of overflow is provided below, when this problem occurs,
simply use the original score (without taking the exponential)
scores = np.array([-10000, 200, 300])
logexp = np.log(1 + np.exp(-scores))
logexp
"""
scores = np.dot(data, weights)
logexp = np.log(1 + np.exp(-scores))
# simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
log_likelihood = np.sum((label - 1) * scores - logexp) / data.shape[0]
return log_likelihood
def logistic_regression(data, label, step_size, batch_size, max_iter):
# weights of the model are initialized as zero
data_num = data.shape[0]
feature_num = data.shape[1]
weights = np.zeros(data.shape[1])
# `i` keeps track of the starting index of current batch
# and shuffle the data before starting
i = 0
permutation = np.random.permutation(data_num)
data, label = data[permutation], label[permutation]
# do a linear scan over data, for each iteration update the weight using
# batches of data, and store the log likelihood record to visualize convergence
log_likelihood_record = []
for _ in range(max_iter):
# extract the batched data and label use it to compute
# the predicted probability using the current weight and the errors
batch = slice(i, i + batch_size)
batch_data, batch_label = data[batch], label[batch]
predictions = predict_probability(batch_data, weights)
errors = batch_label - predictions
# loop over each coefficient to compute the derivative and update the weight
for j in range(feature_num):
derivative = np.dot(errors, batch_data[:, j])
weights[j] += step_size * derivative / batch_size
# track whether log likelihood is increasing after
# each weight update
log_likelihood = compute_avg_log_likelihood(
data = batch_data,
label = batch_label,
weights = weights
)
log_likelihood_record.append(log_likelihood)
# update starting index of for the batches
# and if we made a complete pass over data, shuffle again
# and refresh the index that keeps track of the batch
i += batch_size
if i + batch_size > data_num:
permutation = np.random.permutation(data_num)
data, label = data[permutation], label[permutation]
i = 0
# We return the list of log likelihoods for plotting purposes.
return weights, log_likelihood_record
```
## Comparing Result and Convergence Behavior
We'll use the logistic regression code that we've implemented and compare the predicted auc score with scikit-learn's implementation. This only serves to check that the predicted results are similar and that our toy code is correctly implemented. Then we'll also explore the convergence difference between batch gradient descent and stochastic gradient descent.
```
# manually append the coefficient term,
# every good open-source library does not
# require this additional step from the user
data = np.c_[np.ones(X.shape[0]), X]
# using our logistic regression code
weights_batch, log_likelihood_batch = logistic_regression(
data = data,
label = np.array(y),
step_size = 5e-1,
batch_size = X.shape[0], # batch gradient descent
max_iter = 200
)
# compare both logistic regression's auc score
logreg = LogisticRegression(C = 1e9)
logreg.fit(X, y)
pred_prob = logreg.predict_proba(X)[:, 1]
proba = predict_probability(data, weights_batch)
# check that the auc score is similar
auc1 = metrics.roc_auc_score(y, pred_prob)
auc2 = metrics.roc_auc_score(y, proba)
print('auc', auc1, auc2)
weights_sgd, log_likelihood_sgd = logistic_regression(
data = data,
label = y,
step_size = 5e-1,
batch_size = 30, # stochastic gradient descent
max_iter = 200
)
weights_minibatch, log_likelihood_minibatch = logistic_regression(
data = data,
label = y,
step_size = 5e-1,
batch_size = 100, # mini-batch gradient descent
max_iter = 200
)
plt.figure(figsize = (10, 7))
plt.plot(log_likelihood_sgd, label = 'stochastic gradient descent')
plt.plot(log_likelihood_batch, label = 'batch gradient descent')
plt.plot(log_likelihood_minibatch, label = 'mini-batch gradient descent')
plt.legend(loc = 'best')
plt.xlabel('# of iterations')
plt.ylabel('Average log likelihood')
plt.title('Convergence Plot')
plt.show()
```
Based on the convergence plot above, we can see that it's a good idea to use mini-batch gradient descent, since it strikes a good balance between batch gradient, which converges steadily but can be computationally too expensive when the dataset is large, and stochastic gradient, which is faster to train but whose result can be too noisy.
## Pros and Cons of Logistic Regression
We'll end this notebook listing out some pros and cons of this method.
**Pros:**
- Highly interpretable (if you remember how).
- Model training and prediction are fast. Thus can be desirable in large-scale applications when we're dealing with millions of parameters.
- Almost no parameter tuning is required (excluding regularization).
- Outputs well-calibrated predicted probabilities.
**Cons:**
- Presumes a linear relationship between the features and the log-odds of the response.
- Performance is (generally) not competitive with the best supervised learning methods.
- Can't automatically learn feature interactions.
# Reference
- [Notebook: Logistic Regression](http://nbviewer.jupyter.org/github/justmarkham/DAT8/blob/master/notebooks/12_logistic_regression.ipynb)
- [Coursera: Washington Classification](https://www.coursera.org/learn/ml-classification)
- [Blog: Maximum likelihood and gradient descent demonstration](https://zlatankr.github.io/posts/2017/03/06/mle-gradient-descent)
# Low-Level API
## Prerequisites
If you've already completed the instructions on the **Installation** page, then let's get started.
```
import aiqc
from aiqc import datum
```
## 1. Ingest a `Dataset`
### Object Relational Model (ORM)
At the moment, AIQC supports the following types of Datasets:
* Single-file tabular/ flat/ delimited datasets.
* Multi-file image datasets.
End users only need to worry about passing the right inputs to the Dataset class, but there are a few objects doing the legwork beneath the hood:
* `Dataset` ORM class with subclasses of either `Tabular` or `Image`.
* `File` ORM class representing one or more files, with subclasses of either `Tabular` or `Image`.
* Dedicated `Tabular` and `Image` ORM classes for attributes specific to those data types (e.g. dtype mappings for flat files and colorscale for images).
> Considering these types in the future: Sequence/ time series: multi-file tabular (e.g. 3D numpy, HDF5). Graph: multi-file nodes and multi-file edges (e.g. DGL).
### Persisting and Compressing Structured Data
By default the actual bytes of the file are persisted to the SQLite `BlobField`. It gets gzip compressed, reducing the size by up to 90%. Maximum BlobField size is 2.147 GB, but once you factor in compression, your bottleneck is more likely to be memory beyond that size. The bytes themselves are Parquet (single-partitioned) because, using the PyArrow engine, it preserves every dtype except certain datetimes (which are honestly better off parsed into floats/ ordinal ints). Parquet is also integrated nicely into both Spark and Dask, frameworks for distributed, in-memory computation.
Persisting the file ensures reproducibility by: (a) keeping the data packaged alongside the experiment, and (b) helping entry-level users move away from relying on mutable dataframes they have had in-memory for extended periods of time or floating around on shared file systems.
> *However, we realize that a different approach will be required at scale, so the `source_path` of the file is recorded whenever possible. In the future we could just read the data from that path (e.g. NFS, RDBMS, HDFS, S3) if the BlobField is none. Or just switch our data fetching/ filtering to Dask because it uses the Pandas API and Parquet.*
### Data Sources
You can make a dataset from either:
* `Dataset.Tabular`
* In-memory data structures (pandas dataframe, numpy array).
* Flat files (csv, tsv, parquet).
* Accepts urls.
* `Dataset.Image`
* Any image file format that can be read by the Pillow library.
* Accepts urls.
#### `Dataset.Tabular.from_pandas()`
```
df = datum.to_pandas('iris.tsv')
dataset = aiqc.Dataset.Tabular.from_pandas(
dataframe = df
, name = 'tab separated plants'
, dtype = None #passed to pd.Dataframe(dtype)/ inferred
, column_names = None #passed to pd.Dataframe(columns)
)
```
> Optionally, `dtype`, as seen in the `pandas.DataFrame.astype(dtype)` [docs](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.astype.html), can be specified as either a single type for all columns, or as a dictionary that maps a specific type to each column name. This encodes features for analysis. We read NumPy into Pandas before persisting it, so `columns` and `dtype` are read directly by `pd.DataFrame()`.
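For example (the column names here are illustrative), a `dtype` dictionary behaves just as it does in plain Pandas:

```python
import pandas as pd

df = pd.DataFrame({'petal_width': [0.2, 1.3], 'species': ['setosa', 'versicolor']})
# map a specific dtype to each column, as `pd.DataFrame.astype` would
typed = df.astype({'petal_width': 'float32', 'species': 'category'})
print(typed.dtypes)
```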
#### `Dataset.Tabular.from_numpy()`
Must be a 2D NumPy `ndarray`.
> *In the future, we may add support for ingesting 3D arrays as multi-file sequences.*
Regular *ndarrays* don't have column names, and I didn't like the API for *structured arrays*, so you have to pass in column names as a list. If you don't, column names will be numerically assigned in ascending order (zero-based index); I didn't like the range object either, so numerically assigned columns are stringified into string-based numbers.
```
arr = df.to_numpy()
cols = list(df.columns)
other_dataset = aiqc.Dataset.Tabular.from_numpy(
ndarray = arr
, name = None
, dtype = None #passed to pd.Dataframe(dtype)/ inferred
, column_names = cols #passed to pd.Dataframe(columns)
)
```
#### `Dataset.Tabular.from_path()`
Intended for flat files, delimited text, and structured tabular data. It's read in via Pandas, so it supports URLs to raw data and bytes as well.
```
file_path = datum.get_path('iris_10x.tsv')
# We'll keep this larger dataset handy for `Foldset` creation later.
big_dataset = aiqc.Dataset.Tabular.from_path(
file_path = file_path
, source_file_format = 'tsv'
, name = None
, dtype = None
, column_names = None
, skip_header_rows = 'infer' #passed to `pd.read_csv(header)`. Incompatible w Parquet.
)
```
> If you leave `name` blank, it will default to a human-readable timestamp with the appropriate file extension (e.g. '2020_10_13-01_28_13_PM.tsv').
#### Image Datasets
Image datasets are somewhat multi-modal in that, in order to perform supervised learning on them, they require a loosely coupled `Dataset.Tabular` that contains their labels.
```
df = datum.to_pandas(name='brain_tumor.csv')
df.head()
```
The `['status']` column of this dataframe serves as the Label of that sample. We'll construct a `Dataset.Tabular` from this.
```
tabular_dataset = aiqc.Dataset.Tabular.from_pandas(dataframe=df)
tabular_label = tabular_dataset.make_label(columns=['status'])
```
#### `Dataset.Image.from_urls()`
During ingestion, all image files must have the same `Image.mode` and `Image.size` according to the Pillow library.
> https://pillow.readthedocs.io/en/stable/handbook/concepts.html
`from_urls(urls:list)` needs a list of urls. In order to perform supervised learning, the order of this list must line up with the samples in your Tabular dataset.
> We happen to have this list prepared in the `['url']` column of the dataframe above, which acts as a manifest in that it contains the URL of the image file for each sample, solely for the purposes of initial ingestion. We'll construct a `Dataset.Image` from this.
```
image_urls = datum.get_remote_urls(manifest_name='brain_tumor.csv')
image_dataset = aiqc.Dataset.Image.from_urls(urls=image_urls)
image_featureset = image_dataset.make_featureset()
```
Skipping forward a bit, we bring the heterogenous `Featureset` and `Label` together in the `Splitset`, and they can be used as normal. You can even construct a `Foldset` from this splitset.
```
image_splitset = image_featureset.make_splitset(
label_id = tabular_label.id
, size_test = 0.24
, size_validation = 0.12
)
```
#### `Dataset.Image.from_folder()`
When reading images from a locally accessible folder, the fantastic `natsort.natsorted` library is used as the source of truth for the order of the files.
> Python reads files by insertion order rather than alpha-numerically, which isn't intuitive for humans. So make sure your tabular manifest has the same order as `natsorted`. https://natsort.readthedocs.io/en/master/api.html#natsort.natsorted
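To see why natural ordering matters, here is a minimal stdlib sketch of the kind of sort key that `natsorted` applies (the real library handles many more cases):

```python
import re

def natural_key(name):
    # split into text/number chunks so that numbers compare numerically
    return [int(tok) if tok.isdigit() else tok for tok in re.split(r'(\d+)', name)]

files = ['scan_10.png', 'scan_2.png', 'scan_1.png']
print(sorted(files))                   # ['scan_1.png', 'scan_10.png', 'scan_2.png']
print(sorted(files, key=natural_key))  # ['scan_1.png', 'scan_2.png', 'scan_10.png']
```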
```
image_dataset = aiqc.Dataset.Image.from_folder("/Users/layne/desktop/brain_tumor_preprocessed")
```
Here you can see the first 3 files that comprise that dataset.
```
image_dataset.files[:3]
image_featureset = image_dataset.make_featureset()
image_splitset = image_featureset.make_splitset(
label_id = tabular_label.id
, size_test = 0.24
, size_validation = 0.12
)
```
### Reading Datasets
All of the sample-related objects in the API have `to_numpy()` and `to_pandas()` methods that accept the following arguments:
* `samples=[]` list of indices to fetch.
* `columns=[]` list of columns to fetch.
* In some cases you can specify a `split`/ `fold` name.
For structured data, since the `Dataset` itself is fairly removed from the `File.Tabular` it creates, you can get that tabular file with `Dataset.Tabular.get_main_tabular(dataset_id)` to inspect attributes like `dtypes` and `columns`.
Later, we'll see how these arguments allow downstream objects like `Splitset` and `Foldset` to slice up the data.
### `Dataset.Tabular.to_pandas()`
```
df = dataset.to_pandas()
df.head()
df = aiqc.Dataset.to_pandas(
id = dataset.id
, samples = [0,13,29,79]
, columns = ['sepal_length', 'sepal_width']
)
df.tail()
```
### `Dataset.Tabular.to_numpy()`
```
arr = dataset.to_numpy(
samples = [0,13,29,79]
, columns = ['petal_length', 'petal_width']
)
arr[:4]
arr = aiqc.Dataset.to_numpy(id=dataset.id)
arr[:4]
```
### `Dataset.Image.to_pillow()`
Returns a list of `PIL.Image`'s. You can actually see the image when you call them.
```
images_pillow = aiqc.Dataset.Image.to_pillow(id=image_dataset.id, samples=[60,61,62])
images_pillow[1]
```
### `Dataset.Image.to_numpy()`
This simply performs `np.array(Pillow.Image)`. It returns an N-dimensional array whose dimensions vary based on the `mode` (aka colorscale) of the image. For example, it returns a 3D array of 2D images for black and white, or a 4D array of 3D images for color, which changes the class of convolutional layer you would use (`Conv1D`:`Conv3D`).
```
images_pillow = aiqc.Dataset.Image.to_numpy(id=image_dataset.id, samples=[60,61,62])
images_pillow[1]
```
> At the moment, we haven't found it necessary to provide a `to_pandas` method for images: they have no need for column names, the dtypes are homogeneous, images are used as a whole so there is no filtering, Pandas isn't great with 3D data, and Pillow is integrated with NumPy.
## 2. Select the `Label` column(s).
### ORM
From a Dataset, pick the column(s) that you want to predict/ train against. Creating a `Label` won't duplicate your data! It simply marks the Dataset `columns` to be used for supervised learning.
Later, we'll see that a `Label` triggers:
* The `supervision` attribute of a `Splitset` to be either `'unsupervised'`/`'supervised'`.
* Approval/ rejection of the `Algorithm.analysis_type`. For example, you wouldn't perform regression on a string label.
Part of the magic of this library is that it prevents you from making silly mistakes like these so that you aren't faced with some obscure NumPy/ Tensor, dtype/ dimensionality error on the nth layer of your neural network.
For categorical labels, but not for continuous/float labels, the `Label.unique_classes` are recorded.
### Deriving Labels
Keep the name of the label column handy as you may want to re-use it later when excluding features.
```
label_column = 'species'
```
Implicit IDs
```
label = dataset.make_label(columns=[label_column])
```
> `columns=[label_column]` is a list in case users have already OneHotEncoded (OHEd) their label. If multiple columns are provided, then they must already be in OHE format. I'm not keen on supporting multi-label/ simultaneous analysis, but that could change based on feasibility and user demand.
Explicit IDs
```
other_label = aiqc.Label.from_dataset(
dataset_id=other_dataset.id
, columns=[label_column]
)
```
### Reading Labels
The `Label` comes in handy when we need to fetch what is traditionally referred to as '*Y*' in tutorials. It also accepts a `samples` argument, so that `Splitset` can subset it.
```
label.to_pandas().tail()
label.to_numpy(samples=[0,33,66,99,132])[:5]
```
## 3. Select the `Featureset` column(s).
### ORM
Creating a Featureset won't duplicate your data! It simply records the Dataset `columns` to be used as features during training.
There are three ways to define which columns you want to use as features:
- `exclude_columns=[]` e.g. use all columns except the label column.
- `include_columns=[]` e.g. only use these columns that I think are informative.
- Leave both of the above blank and all columns will be used (e.g. images or unsupervised learning).
For structured data, since the Featureset is far removed from the `File.Tabular` that it is derived from, there is a `Featureset.get_dtypes()` method. This will come in handy when we are selecting dtypes/columns to include/ exclude in our `Featurecoder`(s).
### Deriving Featuresets
Via `include_columns=[]`
```
include_columns = [
'sepal_length',
'petal_length',
'petal_width'
]
featureset = dataset.make_featureset(include_columns=include_columns)
```
Via `exclude_columns=[]`
```
featureset = dataset.make_featureset(exclude_columns=[label_column])
featureset.columns
```
Either way, any excluded columns will be recorded since they are used for dropping.
```
featureset.columns_excluded
```
Again, for images, just perform `Dataset.Image.make_featureset()` since you'll likely want to include all pixels and your label column is in a separate, coupled Dataset.
### Reading Featuresets
```
featureset.to_numpy()[:4]
featureset.to_pandas(samples=[0,16,32,64]).tail()
```
## 4. Slice samples with a `Splitset`.
A `Splitset` divides the samples of the Dataset into the *splits* shown in the table below. It is the central object of the data preparation side of the ORM in that it touches `Label`, `Featureset`, `Foldset`, and `Encoderset`. It is the only mandatory data preparation object required by the training `Batch`.
Both continuous and categorical `Labels` are automatically stratified.
| Split | Description |
|-----------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| train | The samples that the model will be trained upon. <br/>Later, we’ll see how we can make *cross-folds from our training split*. <br/>Unsupervised learning will only have a training split. |
| validation (optional) | The samples used for training evaluation. <br/>Ensures that the test set is not revealed to the model during training. |
| test (optional) | The samples the model has never seen during training. <br/>Used to assess how well the model will perform on unobserved, natural data when it is applied in the real world aka how generalizable it is. |
Again, creating a Splitset won't duplicate your data. It simply denotes the sample indices (aka rows) to be used in the splits that you specify!
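Conceptually, a stratified split boils down to partitioning row indices; here is a rough sketch using scikit-learn (illustrative only, not AIQC's actual implementation):

```python
import numpy as np
from sklearn.model_selection import train_test_split

y = np.array([0] * 50 + [1] * 50)  # toy binary labels
indices = np.arange(len(y))

# 70:30 train:test, stratified on the label
train_idx, test_idx = train_test_split(
    indices, test_size=0.30, stratify=y, random_state=0
)
samples = {'train': train_idx.tolist(), 'test': test_idx.tolist()}
print(len(samples['train']), len(samples['test']))  # 70 30
```

Because of the stratification, the 30 test samples contain 15 of each class, mirroring the 50:50 population.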
### Split Strategies
#### a) Default supervised 70-30 split.
If you only provide a Label, then 70:30 train:test splits will be generated.
```
splitset = featureset.make_splitset(label_id=label.id)
```
#### b) Specifying test size.
```
splitset = featureset.make_splitset(
label_id = label.id
, size_test = 0.30
)
```
#### c) Specifying validation size.
```
splitset = featureset.make_splitset(
label_id = label.id
, size_test = 0.20
, size_validation = 0.12
)
```
#### d) Taking the whole dataset as a training split.
```
splitset_unsupervised = featureset.make_splitset()
```
> Label-based stratification is used to ensure equally distributed label classes for both categorical and continuous data.
>
> If you want more control over stratification of continuous splits, specify the number of bins for grouping via `bin_count`.
#### e) Stratification of continuous labels.
All splits are stratified by default in that they contain similar distributions of unique label classes so that each split is a statistically accurate representation of the population as a whole.
In order to support this process for continuous labels, binning/ discretization is utilized. For example, if 4 bins are used, values from *0.0 to 1.0* would be binned as *[0.0-0.25, 0.25-0.50, 0.50-0.75, 0.75-1.0]*. This is controlled by the `make_splitset(bin_count:int)` argument.
> Reference the handy `Pandas.qcut()` and the source code `pd.qcut(x=array_to_bin, q=bin_count, labels=False, duplicates='drop')` for more detail.
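A small illustration of the binning step with made-up continuous labels:

```python
import numpy as np
import pandas as pd

continuous_labels = np.array([0.05, 0.2, 0.4, 0.55, 0.7, 0.8, 0.9, 1.0])
bin_count = 4

# each value is replaced by an integer bin id, which can then be stratified on
bins = pd.qcut(x=continuous_labels, q=bin_count, labels=False, duplicates='drop')
print(bins)  # [0 0 1 1 2 2 3 3]
```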
### Reading Splitsets
```
splitset.samples.keys()
```
`.keys()` of 1st layer are referred to as "split_name" in the source code: e.g. 'train' as well as, optionally, 'validation' and 'test'.
`Splitset.samples` on disk:
```
{
'train': [<sample_indices>],
'validation': [<sample_indices>],
'test': [<sample_indices>]
}
```
You can also verify the actual size of your splits.
```
splitset.sizes
```
Again, on disk the `samples` dictionary only contains sample indices; the fetch methods below hydrate those indices into the actual data.
### `Splitset.to_numpy()`
When fetched to memory, the `.keys()` of the 2nd layer are: 'features' and, optionally, 'labels'.
Note that if you specified neither a `size_validation` nor a `size_test`, then your dictionary will contain neither a `['validation']` nor a `['test']` split.
```
splitset.to_numpy()['train']['features'][:4]
```
### `Splitset.to_pandas()`
Getting more fine-tuned, both the numpy and pandas methods support a few optional filters for the sake of memory-efficiency when fetching larger splits.
For example, imagine you are fetching data to specifically encode the only float column in the featureset of the test split. You don't need the labels and you don't need the other columns.
```
splitset.to_pandas(
splits = ['test']
, include_label = False
, include_featureset = True
, feature_columns = ['sepal_width']
)['test']['features'].head()
```
## 5. Optionally, create a `Foldset` for cross-validation.
### ORM
*Reference the [scikit-learn documentation](https://scikit-learn.org/stable/modules/cross_validation.html) to learn more about folding.*

We refer to the left out fold (blue) as the `fold_validation` and the remaining training data as the `folds_train_combined` (green).
> *In the future, we may introduce more folding `strategies` aside from leave-one-out.*
#### `Fold` objects
For the sake of determining which samples get trained upon, the only thing that matters is the slice of data that gets left out.
> Tip - DO NOT use a `Foldset` unless *(total sample count / fold_count)* still gives you an accurate representation of your sample population. If you are ignoring that advice and stretching to perform cross-validation, then at least ensure that the total sample count is evenly divisible by the fold_count. Both of these tips help avoid poorly stratified/ undersized folds that perform either too well (only the most common label class is present) or too poorly (a handful of samples and a few inaccurate predictions on a normally good model).
>
> Tip - The sample indices of the validation fold are not discarded. In fact, `fold_validation` can actually be used alongside a split `validation` for double validation 🤘. However, it's more sensible to skip the validation split when cross-validating because you'll want each `fold_validation` to be as large (representative of the population) as possible. Folds naturally have fewer samples, so a handful of incorrect predictions have the potential to offset your aggregate metrics.
>
> Candidly, if you've ever performed cross-validation manually, let alone systematically, you'll know that, barring stratification of continuous labels, it's easy enough to construct the folds, but then it's a pain to generate performance metrics (e.g. `zero_division`, absent OHE classes) due to the absence of outlying classes and bins. Time has been invested to handle these scenarios elegantly so that folds can be treated as first-class-citizens alongside splits. That being said, if you try to do something undersized like "150 samples in their dataset and a `fold_count` > 3 with `unique_classes` > 4," then you may run into edge cases.
Similar to `Splitset.samples`, there is a `Fold.samples` dictionary of sample indices with the following `.keys()`:
* `samples['folds_train_combined']` - all the included folds.
* `samples['fold_validation']` - the fold that got left out.
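The relationship between those two keys can be sketched with plain NumPy (hypothetical indices; an illustration of leave-one-out folding, not AIQC's actual implementation):

```python
import numpy as np

# Hypothetical pool of 20 training-sample indices.
train_indices = np.arange(20)
fold_count = 5
folds = np.array_split(train_indices, fold_count)

# For each Fold, one slice is left out as `fold_validation` and the
# remaining slices are concatenated into `folds_train_combined`.
fold_samples = [
    {
        'fold_validation': folds[i],
        'folds_train_combined': np.concatenate(folds[:i] + folds[i + 1:]),
    }
    for i in range(fold_count)
]

print(fold_samples[0]['fold_validation'])            # [0 1 2 3]
print(len(fold_samples[0]['folds_train_combined']))  # 16
```

Every sample appears in exactly one `fold_validation` across the 5 folds, so each sample gets used for validation exactly once.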

### Deriving Foldsets
```
big_label = big_dataset.make_label(columns=[label_column])
big_fset = big_dataset.make_featureset(exclude_columns=[label_column])
big_splits = big_fset.make_splitset(
label_id = big_label.id
, size_test = 0.30
, bin_count=3
)
```
Now we are ready to generate 5 `Fold` objects that belong to the `Foldset`.
```
foldset = big_splits.make_foldset(fold_count=5, bin_count=3)
list(foldset.folds)
```
### Reading Foldsets
##### Sample indices of each Fold:
```
foldset.folds[0].samples['folds_train_combined'][:10]
foldset.folds[0].samples['fold_validation'][:10]
```
### `Foldset.to_numpy()`
In order to reduce the memory footprint, the `to_numpy()` and `to_pandas()` methods introduce the `fold_index` argument.
If no `fold_index` is specified, then they will fetch all folds and give each fold a numeric key according to its index.
Either way, you need to use the fold's index as the first key when accessing the dictionary.
```
foldset.to_numpy(fold_index=0)[0]['fold_validation']['features'][:4]
```
### `Foldset.to_pandas()`
Similar to the `splits:list` argument of `splitset.to_numpy()`, the `fold_names:list` argument allows you to pluck the `['folds_train_combined']` and `['fold_validation']` slices. Just make sure you remember to specify all 3 levels of keys when accessing the result.
```
foldset.to_pandas(
fold_index = 0
, fold_names = ['folds_train_combined']
, include_label = True
, include_featureset = False
)[0]['folds_train_combined']['labels'].tail()
```
## 6. Optionally, stage an `Encoderset` for encoding.
### Background
Certain algorithms either (a) require features and/ or labels formatted a certain way, or (b) perform MUCH better when their values are normalized. For example:
* Converting ordinal or categorical string data `[dog, cat, fish]` into one-hot encoded format `[[1,0,0][0,1,0][0,0,1]]`.
* Scaling continuous features from (-1 to 1) or (0.0 to 1.0). Or transforming them to resemble a more Gaussian distribution.
There are two phases of encoding:
1. `fit` - where the encoder learns about the values of the samples made available to it. Ideally, you only want to `fit` aka learn from your training split so that you are not *"leaking"* information from your validation and test splits into your encoder!
2. `transform` - where the encoder transforms all of the samples in the population.
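The two phases can be sketched with a plain scikit-learn scaler (hypothetical data; AIQC automates this bookkeeping for you):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical splits of a single continuous feature.
train = np.array([[1.0], [2.0], [3.0], [4.0]])
test = np.array([[2.5], [10.0]])

scaler = StandardScaler()
scaler.fit(train)  # phase 1: learn mean/ std from the train split only.

# phase 2: transform every split using the *train* statistics,
# so no information from the test split leaks into the encoder.
train_scaled = scaler.transform(train)
test_scaled = scaler.transform(test)

print(scaler.mean_)       # [2.5]
print(test_scaled[0][0])  # 0.0 because 2.5 equals the train mean.
```

Note that the test split is transformed with statistics it never contributed to, which is exactly the leakage-avoidance the text describes.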
AIQC has solved the following challenges related to encoding:
* How does one dynamically `fit` on only the training samples in advanced scenarios like cross-validation where a different fold is used for validation each time?
* For certain encoders, especially categorical ones, there is arguably no leakage. If an encoder is arbitrarily assigning values/ tags to a sample through a process that is not aggregate-informed, then the information that is revealed to the `fit` is largely irrelevant. As an analogy, if we are examining swan color and all of a sudden there is a black swan... it's clearly not white, so slap a non-white label on it and move on. In fact, the prediction process and performance metric calculation may fail if it doesn't know how to handle the previously unseen category.
* Certain encoders only accept certain dtypes. Certain encoders only accept certain dimensionality (e.g. 1D, 2D, 3D) or shape patterns (odd-by-odd square). Unfortunately, there is not much uniformity here.
* Certain encoders output extraneous objects that don't work with deep learning libraries.
> *For now, only `sklearn.preprocessing` methods are supported. That may change as we add support for more low-level tensor-based frameworks like PyTorch.*
Keeping this in mind, we create an `Encoderset` for our `Splitset`. We can attach a `Labelcoder` and/ or `Featurecoder`(s).
```
encoderset = splitset.make_encoderset()
```
And then import any scikit-learn encoders that you need. AIQC only supports the uppercase methods (e.g. `RobustScaler`, but not `robust_scale`) because the lowercase methods do not separate the `fit` and `transform` steps. FYI, most of the uppercase methods have a combined `fit_transform` method if you need them.
> https://scikit-learn.org/stable/modules/classes.html#module-sklearn.preprocessing
```
from sklearn.preprocessing import *
```
## 7. Optionally, set a single `Labelcoder`.
### Background
The simplistic `Labelcoder` is a good warmup for the more advanced `Featurecoder`.
Of course, you cannot encode Labels if your `Splitset` does not have labels in the first place.
The process is straightforward. You provide an instantiated encoder [e.g. `StandardScaler()` not `StandardScaler`], and then AIQC will:
* Verify that the encoder works with your `Label`'s dtype, sample values, and figure out what dimensionality it needs in order to succeed.
* Validate the attributes of your encoder to smooth out any common errors they would cause.
* Determine whether the encoder should be `fit` either (a) exclusively on the train split, or (b) if it is not prone to leakage, inclusively on the entire dataset thereby reducing the chance of errors arising.
```
labelcoder = encoderset.make_labelcoder(
sklearn_preprocess = OneHotEncoder(sparse=False)
)
```
## 8. Optionally, determine a sequence of `Featurecoder`(s).
### Background
The `Featurecoder` has the same validation process as the `Labelcoder`. However, it is not without its own challenges:
* We want to be able to apply different encoders to features of different dtypes. So it's likely that the same encoder will neither be applied to all columns, nor will all encoders be applied at the same exact time.
* Additionally, even within the same dtype (e.g. float/ continuous), different distributions call for different encoders.
* Commonly used encoders such as `OneHotEncoder` can output multiple columns from a single column input. Therefore, the structure of the feature columns is not fixed during encoding.
* And finally, throughout this entire process, we need to avoid data leakage.
For these reasons, `Featurecoder`'s are applied sequentially; in an ordered chain, one after the other. After an encoder is applied, its columns are removed from the raw featureset and placed into an intermediary cache specific to each split/ fold.
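As a minimal sketch (hypothetical data) of why the column structure is not fixed: a single categorical column expands into one column per category after one-hot encoding.

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# One hypothetical categorical feature column.
species = np.array([['dog'], ['cat'], ['fish'], ['cat']])

# `.toarray()` densifies the sparse output; on older scikit-learn you
# could instead pass `sparse=False` as seen elsewhere in this notebook.
ohe = OneHotEncoder()
encoded = ohe.fit_transform(species).toarray()

print(encoded.shape)       # (4, 3) -- one input column became three.
print(ohe.categories_[0])  # ['cat' 'dog' 'fish']
```

Any encoder applied later in the chain therefore has to work against the leftover raw columns, not the original column layout.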
> Right now, `Featurecoder` cannot be created for `Dataset.Image.Featureset`. I'm not opposed to changing this, but I would just have to account for 3D arrays.
### Filtering feature columns
The filtering mode is either:
* Inclusive (`include=True`) encode columns that match the filter.
* Exclusive (`include=False`) encode columns outside of the filter.
Then you can select:
1. An optional list of `dtypes`.
2. An optional list of `columns` names.
* The column filter is applied after the dtype filter.
> You can create a filter for all columns by setting `include=False` and then setting both `dtypes` and `columns` to `None`.
After submitting your encoder, if `verbose=True` is enabled:
* The validation rules help determine why it may have failed.
* The print statements help determine which columns your current filter matched, and which raw columns remain.
```
featurecoder = encoderset.make_featurecoder(
sklearn_preprocess = PowerTransformer(method='yeo-johnson', copy=False)
, include = True
, dtypes = ['float64']
, columns = None
, verbose = True
)
```
You can also view this information via the following attributes: `matching_columns`, `leftover_dtypes`, and `leftover_columns`.
## 9. Create an `Algorithm` aka model.
### ORM
Now that our data has been prepared, we transition to the other half of the ORM where the focus is the logic that will be applied to that data.
> An `Algorithm` is the ORM's codename for a machine learning model since *Model* is the most important *reserved word* for ORMs.
The following attributes tell AIQC how to handle the Algorithm behind the scenes:
* `library` - right now, only 'keras' is supported.
* Each library's model object and callbacks (history, early stopping) need to be handled differently.
* `analysis_type` - right now, these types are supported:
* `'classification_multi'`, `'classification_binary'`, `'regression'`.
* Used to determine which performance metrics to run.
* Must be compatible with the type of label fed to it.
### Model Definition
The `Algorithm` is composed of the functions:
* `function_model_build`.
* `function_model_train`.
* `function_model_predict` (optional, inferred by `analysis_type`).
* `function_model_loss` (optional, inferred by `analysis_type`).
> May provide overridable defaults for build and train in the future.
You can name the functions whatever you want, but do not change the predetermined arguments (e.g. `input_shape`,`**hyperparameters`, `model`, etc.).
As we define these functions, we'll see that we can pass *hyperparameters* into these functions like so: `hyperparameters['<some_variable_name>']` using the `**hyperparameters` kwarg. Later, we'll provide a list of values for each entry in the hyperparameters dictionary.
Let's import the modules that we need.
```
import keras
from keras import metrics
from keras.models import Sequential
from keras.callbacks import History
from keras.layers import Dense, Dropout
```
> Later, when running your `Job`'s, if you receive a "module not found" error, then you can try troubleshooting by importing that module directly within the function where it is used.
#### Function to build model
If you normally don't wrap the building of your model in a function, don't be scared, all you have to do is add `return model` to the bottom of it, and AIQC will handle the rest. Also, if using `hyperparameters` feels intimidating, you can skip them.
The automatically provided `features_shape` and `label_shape` are handy because:
* The number of feature/ label columns is mutable due to encoders (e.g. OHE).
* Shapes can be less obvious in multi-dimensional scenarios like colored images.
> You can customize the metrics if you so desire (e.g. change the loss or accuracy), but they will only be applied to the training process/ `History` callback. We'll see later that AIQC will calculate metrics for you automatically.
```
def function_model_build(features_shape, label_shape, **hyperparameters):
model = Sequential()
model.add(Dense(units=hyperparameters['neuron_count'], input_shape=features_shape, activation='relu', kernel_initializer='he_uniform'))
model.add(Dropout(0.2))
model.add(Dense(units=hyperparameters['neuron_count'], activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(units=label_shape[0], activation='softmax'))
opt = keras.optimizers.Adamax(hyperparameters['learning_rate'])
model.compile(
loss = 'categorical_crossentropy'
, optimizer = opt
, metrics = ['accuracy']
)
return model
```
#### Function to train model
* `samples_train` - the appropriate data will be fed into the training cycle. For example, `Foldset.samples[fold_index]['folds_train_combined']` or `Splitset.samples['train']`.
* `samples_evaluate` - the appropriate data is made available for evaluation. For example, `Foldset.samples[fold_index]['fold_validation']`, `Splitset.samples['validation']`, or `Splitset.samples['test']`.
```
def function_model_train(model, samples_train, samples_evaluate, **hyperparameters):
model.fit(
samples_train["features"]
, samples_train["labels"]
, validation_data = (
samples_evaluate["features"]
, samples_evaluate["labels"]
)
, verbose = 0
, batch_size = 3
, epochs = hyperparameters['epoch_count']
, callbacks=[History()]
)
return model
```
##### Optional, callback to stop training early.
*Early stopping* isn't just about efficiency in reducing the number of `epochs`. If you've specified 300 epochs, there's a chance your model catches on to the underlying patterns early, say around 75-125 epochs. At this point, there's also a good chance that what it learns in the remaining epochs will cause it to overfit on patterns that are specific to the training data, and thereby lose its simplicity/ generalizability.
> The `val_` prefix refers to the evaluation samples.
>
> Remember, regression does not have accuracy metrics.
>
> `TrainingCallback.Keras.MetricCutoff` is a custom class we wrote to make multi-metric cutoffs easier, so you won't find information about it in the official Keras documentation.
```
def function_model_train(model, samples_train, samples_evaluate, **hyperparameters):
#Define one or more metrics to monitor.
metrics_cuttoffs = [
{"metric":"val_accuracy", "cutoff":0.9, "above_or_below":"above"},
{"metric":"val_loss", "cutoff":0.2, "above_or_below":"below"}
]
cutoffs = aiqc.TrainingCallback.Keras.MetricCutoff(metrics_cuttoffs)
# Remember to append `cutoffs` to the list of callbacks.
callbacks=[History(), cutoffs]
# No changes here.
model.fit(
samples_train["features"]
, samples_train["labels"]
, validation_data = (
samples_evaluate["features"]
, samples_evaluate["labels"]
)
, verbose = 0
, batch_size = 3
, epochs = hyperparameters['epoch_count']
, callbacks = callbacks
)
return model
```
#### Optional, function to predict samples
`function_model_predict` will be generated for you automatically if set to `None`. The `analysis_type` and `library` of the Algorithm help determine how to handle the predictions.
##### a) Regression default.
```
def function_model_predict(model, samples_predict):
predictions = model.predict(samples_predict['features'])
return predictions
```
##### b) Classification binary default.
All classification `predictions`, both multiclass and binary, must be returned in ordinal format.
> For most libraries, classification algorithms output *probabilities* as opposed to actual predictions when running `model.predict()`. We want to return both of these objects `predictions, probabilities` (the order matters) to generate performance metrics behind the scenes.
```
def function_model_predict(model, samples_predict):
probabilities = model.predict(samples_predict['features'])
# This is the official keras replacement for binary classes `.predict_classes()`
# It returns one array per sample: `[[0][1][0][1]]`
predictions = (probabilities > 0.5).astype("int32")
return predictions, probabilities
```
##### c) Classification multiclass default.
```
def function_model_predict(model, samples_predict):
import numpy as np
probabilities = model.predict(samples_predict['features'])
# This is the official keras replacement for multiclass `.predict_classes()`
# It returns one ordinal array per sample: `[[0][2][1][2]]`
predictions = np.argmax(probabilities, axis=-1)
return predictions, probabilities
```
#### Optional, function to calculate loss
When creating an `Algorithm`, the evaluate function will be generated for you automatically if set to `None`. The `analysis_type` and `library` of the Algorithm help determine how to handle the predictions.
The only tricky thing here is that `keras.metrics` can return multiple metrics, like *accuracy* and/ or *R^2*. All we are after in this case is the loss for the split/ fold in question.
```
def function_model_loss(model, samples_evaluate):
metrics = model.evaluate(samples_evaluate['features'], samples_evaluate['labels'], verbose=0)
if (isinstance(metrics, list)):
loss = metrics[0]
elif (isinstance(metrics, float)):
loss = metrics
else:
raise ValueError(f"\nYikes - The 'metrics' returned are neither a list nor a float:\n{metrics}\n")
return loss
```
> In contrast to openly specifying a loss function, for example `keras.losses.<loss_fn>()`, the use of `.evaluate()` is consistent because it comes from the compiled model. Also, although `model.compiled_loss` would be more efficient, it requires making encoded `y_true` and `y_pred` available to the user, whereas `.evaluate()` can be called with the same arguments as the other `function_model_*` functions, and many deep learning libraries support this approach.
#### Group the functions together in an `Algorithm`!
```
algorithm = aiqc.Algorithm.make(
library = "keras"
, analysis_type = "classification_multi"
, function_model_build = function_model_build
, function_model_train = function_model_train
, function_model_predict = function_model_predict # Optional
, function_model_loss = function_model_loss # Optional
)
```
> <!> Remember to use `make` and not `create`. Deceptively, `create` runs because it is a standard, built-in ORM method. However, it does so without any validation logic.
## 10. Optionally, associate a `Hyperparamset` with your model.
The `hyperparameters` below will be automatically fed into the functions above as `**kwargs` via the `**hyperparameters` argument we saw earlier.
For example, wherever you see `hyperparameters['epoch_count']`, it will pull from the *key:value* pair `"epoch_count": [30, 60]` seen below, where "model A" would train for 30 epochs and "model B" for 60 epochs.
```
hyperparameters = {
"neuron_count": [12]
, "epoch_count": [30, 60]
, "learning_rate": [0.01, 0.03]
}
hyperparamset = aiqc.Hyperparamset.from_algorithm(
algorithm_id = algorithm.id
, hyperparameters = hyperparameters
)
```
> The number of unique combinations escalates quickly, so in the future, we will provide different strategies for generating and selecting parameters to experiment with.
#### `Hyperparamcombo` objects.
Each unique combination of hyperparameters is recorded as a `Hyperparamcombo`.
Ultimately, a training `Job` will be constructed for each unique combination of hyperparameters aka `Hyperparamcombo`.
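The expansion of a hyperparameters dictionary into unique combinations can be sketched with the standard library (an illustration of grid expansion, not necessarily AIQC's exact internals):

```python
from itertools import product

hyperparameters = {
    "neuron_count": [12],
    "epoch_count": [30, 60],
    "learning_rate": [0.01, 0.03],
}

# Cartesian product of every value list yields each unique combination.
keys = list(hyperparameters.keys())
combos = [dict(zip(keys, values)) for values in product(*hyperparameters.values())]

print(len(combos))  # 1 * 2 * 2 = 4
print(combos[0])    # {'neuron_count': 12, 'epoch_count': 30, 'learning_rate': 0.01}
```

This is also why the note below warns that the number of combinations escalates quickly: the count is the product of the lengths of every value list.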
```
hyperparamset.hyperparamcombo_count
hyperparamcombos = hyperparamset.hyperparamcombos
for h in hyperparamcombos:
print(h.hyperparameters)
hyperparamcombos[0].get_hyperparameters(as_pandas=True)
```
## 11. Create a `Batch` of training `Jobs`.
The `Batch` is the central object of the "logic side" of the ORM. It ties together everything we need for training and hyperparameter tuning.
```
batch = aiqc.Batch.from_algorithm(
algorithm_id = algorithm.id
, splitset_id = splitset.id
, hyperparamset_id = hyperparamset.id # Optional.
, foldset_id = None # Optional.
, encoderset_id = encoderset.id # Optional.
, repeat_count = 3
)
```
* `repeat_count:int` allows us to run the same `Job` multiple times. Normally, each `Job` has 1 `Result` object associated with it upon completion. However, when `repeat_count` (> 1 of course) is used, a single `Job` will have multiple `Results`.
> Due to the fact that training is a *nondeterministic* process, we are likely to get different results each time we train a model, even if we use the same set of parameters. Perhaps you have the right topology and parameters, but, this time around, the model just didn't recognize the patterns. Similar to flipping a coin, there is a degree of chance in it, but the real trend averages out upon repetition.
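A tiny simulation (made-up accuracy numbers, not real training results) of why repeats help:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Simulate a model whose "true" accuracy is 0.85, but whose individual
# training runs wobble due to random initialization and shuffling.
repeat_accuracies = rng.normal(loc=0.85, scale=0.03, size=3)

# The per-repeat numbers vary; their mean is a steadier estimate.
print(repeat_accuracies.round(3))
print(repeat_accuracies.mean().round(3))
```

Averaging the `Result` metrics of repeated `Job`s plays the same role: it smooths out the run-to-run chance the note above describes.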
* `hide_test:bool` excludes the test split from the performance metrics and visualizations. This avoids data leakage by forcing the user to make decisions based on the performance of their model on the training and evaluation samples.
### `Job` objects.
Each `Job` in the Batch represents a `Hyperparamcombo` that needs to be trained.
> If a `Foldset` is used during `Batch` creation, then (a) the number of Jobs equals the `hyperparamcombo_count` multiplied by the `fold_count`, and (b) each Job will have a `Fold`. Additionally, a superficial `Jobset` will be used to keep track of all Jobs related to that Foldset.
`poll_statuses(as_pandas:bool=False)` is used to determine which Job-repeats have been completed.
```
batch.poll_statuses(as_pandas=True)
```
### Execute all `Jobs`.
There are two ways to execute a Batch of Jobs:
#### 1. `batch.run_jobs(in_background=False)`
* Jobs are simply run in a loop on the main *Process*.
* Stop the Jobs with a keyboard interrupt e.g. `ctrl+Z/D/C` in Python shell or `i,i` in Jupyter.
* It is the more reliable approach on Win/Mac/Lin.
* Although this locks your main process (can't write more code) while models train, you can still fire up a second shell session or notebook.
* Prototype your training jobs in this method so that you can see any errors that arise in the console.
#### 2. `batch.run_jobs(in_background=True)`; experimental
* The Jobs loop is executed on a separate, parallel `multiprocessing.Process`
* Stop the Jobs with `batch.stop_jobs()`, which kills the parallel *Process* unless it already failed.
* The benefit is that you can continue to code while your models are trained. There is no performance boost.
* On Mac and Linux (Unix), `'fork'` multiprocessing is used (`force=True`), which allows us to display the progress bar. FYI, even in 'fork' mode, Python multiprocessing is much more fragile in Python 3.8, which seems to be caused by how pickling is handled in passing variables to the child process.
* On Windows, `'spawn'` multiprocessing is used, which requires polling:
* `batch.poll_statuses()`
* `batch.poll_progress(raw:bool=False, loop:bool=False, loop_delay:int=3)` where `raw=True` is just a float, `loop=True` won't stop checking jobs until they are all complete, and `loop_delay=3` checks the progress every 3 seconds.
* It is a known bug that the `aiqc.TrainingCallback.Keras.MetricCutoff` class does not work with `in_background=True` as of Python 3.8.
* Also, during stress tests, I observed that when running multiple batches at the same time, the SQLite database would lock when simultaneous writes were attempted.
#### 3. Future, distributed cloud execution.
* In the future, we look to provide options for horizontal and vertical scale via either AWS or Azure.
```
batch.run_jobs(in_background=False)
```
The queue is interruptible. You can stop the execution of a batch and resume it later.
> This also comes in handy if either your machine or Python kernel crashes or is interrupted by accident. Whatever the reason, rest easy, just `run_jobs()` again to pick up where you left off. Be aware that the `tqdm` iteration time in the progress bar will be wrong because it will be divided by the jobs already run.
## 12. Assess the `Results`.
Each `Job` has a `Result`. The following attributes are automatically written to the `Result` after training.
* `model_file`: HDF5 bytes of the model.
* `history`: per epoch metrics recorded during training.
* `predictions`: dictionary of predictions per split/ fold.
* `probabilities`: dictionary of prediction probabilities per split/ fold.
* `metrics`: dictionary of single-value metrics depending on the analysis_type.
* `plot_data`: metrics readily formatted for plotting.
> The dictionary attributes use split/ fold-based keys.
### Fetching the trained model.
```
compiled_model = batch.jobs[0].results[0].get_model()
compiled_model
```
### Fetching metrics.
```
batch.jobs[0].results[0].metrics
```
## 13. Metrics & Visualization
For more information on visualization of performance metrics, reference the [Visualization & Metrics](visualization.html) documentation.
# CNTK 201A Part A: CIFAR-10 Data Loader
This tutorial will show how to prepare image data sets for use with deep learning algorithms in CNTK. The CIFAR-10 dataset (http://www.cs.toronto.edu/~kriz/cifar.html) is a popular dataset for image classification, collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. It is a labeled subset of the [80 million tiny images](http://people.csail.mit.edu/torralba/tinyimages/) dataset.
The CIFAR-10 dataset is not included in the CNTK distribution but can be easily downloaded and converted to a CNTK-supported format.
CNTK 201A tutorial is divided into two parts:
- Part A: Familiarizes you with the CIFAR-10 data and converts them into CNTK supported format. This data will be used later in the tutorial for image classification tasks.
- Part B: We will introduce image understanding tutorials.
If you are curious about how well computers can perform on CIFAR-10 today, Rodrigo Benenson maintains a [blog](http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#43494641522d3130) on the state-of-the-art performance of various algorithms.
```
from __future__ import print_function
from PIL import Image
import getopt
import numpy as np
import pickle as cp
import os
import shutil
import struct
import sys
import tarfile
import xml.etree.cElementTree as et
import xml.dom.minidom
try:
from urllib.request import urlretrieve
except ImportError:
from urllib import urlretrieve
# Config matplotlib for inline plotting
%matplotlib inline
```
## Data download
The CIFAR-10 dataset consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class.
There are 50,000 training images and 10,000 test images. The 10 classes are: airplane, automobile, bird,
cat, deer, dog, frog, horse, ship, and truck.
```
# CIFAR Image data
imgSize = 32
numFeature = imgSize * imgSize * 3
```
We first set up a few helper functions to download the CIFAR data. The archive contains the files data_batch_1, data_batch_2, ..., data_batch_5, as well as test_batch. Each of these files is a Python "pickled" object produced with cPickle. To prepare the input data for use in CNTK we use three operations:
> `readBatch`: Unpack the pickle files
> `loadData`: Compose the data into single train and test objects
> `saveTxt`: As the name suggests, saves the label and the features into text files for both training and testing.
```
def readBatch(src):
with open(src, 'rb') as f:
if sys.version_info[0] < 3:
d = cp.load(f)
else:
d = cp.load(f, encoding='latin1')
data = d['data']
feat = data
res = np.hstack((feat, np.reshape(d['labels'], (len(d['labels']), 1))))
return res.astype(np.int)
def loadData(src):
print ('Downloading ' + src)
fname, h = urlretrieve(src, './delete.me')
print ('Done.')
try:
print ('Extracting files...')
with tarfile.open(fname) as tar:
tar.extractall()
print ('Done.')
print ('Preparing train set...')
trn = np.empty((0, numFeature + 1), dtype=np.int)
for i in range(5):
batchName = './cifar-10-batches-py/data_batch_{0}'.format(i + 1)
trn = np.vstack((trn, readBatch(batchName)))
print ('Done.')
print ('Preparing test set...')
tst = readBatch('./cifar-10-batches-py/test_batch')
print ('Done.')
finally:
os.remove(fname)
return (trn, tst)
def saveTxt(filename, ndarray):
with open(filename, 'w') as f:
labels = list(map(' '.join, np.eye(10, dtype=np.uint).astype(str)))
for row in ndarray:
row_str = row.astype(str)
label_str = labels[row[-1]]
feature_str = ' '.join(row_str[:-1])
f.write('|labels {} |features {}\n'.format(label_str, feature_str))
```
In addition to saving the images in the text format, we also save them in PNG format and compute the mean of the images. `saveImage` and `saveMean` are the two functions used for this purpose.
```
def saveImage(fname, data, label, mapFile, regrFile, pad, **key_parms):
# data in CIFAR-10 dataset is in CHW format.
pixData = data.reshape((3, imgSize, imgSize))
if ('mean' in key_parms):
key_parms['mean'] += pixData
if pad > 0:
pixData = np.pad(pixData, ((0, 0), (pad, pad), (pad, pad)), mode='constant', constant_values=128)
img = Image.new('RGB', (imgSize + 2 * pad, imgSize + 2 * pad))
pixels = img.load()
for x in range(img.size[0]):
for y in range(img.size[1]):
pixels[x, y] = (pixData[0][y][x], pixData[1][y][x], pixData[2][y][x])
img.save(fname)
mapFile.write("%s\t%d\n" % (fname, label))
# compute per channel mean and store for regression example
channelMean = np.mean(pixData, axis=(1,2))
regrFile.write("|regrLabels\t%f\t%f\t%f\n" % (channelMean[0]/255.0, channelMean[1]/255.0, channelMean[2]/255.0))
def saveMean(fname, data):
root = et.Element('opencv_storage')
et.SubElement(root, 'Channel').text = '3'
et.SubElement(root, 'Row').text = str(imgSize)
et.SubElement(root, 'Col').text = str(imgSize)
meanImg = et.SubElement(root, 'MeanImg', type_id='opencv-matrix')
et.SubElement(meanImg, 'rows').text = '1'
et.SubElement(meanImg, 'cols').text = str(imgSize * imgSize * 3)
et.SubElement(meanImg, 'dt').text = 'f'
et.SubElement(meanImg, 'data').text = ' '.join(['%e' % n for n in np.reshape(data, (imgSize * imgSize * 3))])
tree = et.ElementTree(root)
tree.write(fname)
x = xml.dom.minidom.parse(fname)
with open(fname, 'w') as f:
f.write(x.toprettyxml(indent = ' '))
```
`saveTrainImages` and `saveTestImages` are simple wrapper functions to iterate through the data set.
```
def saveTrainImages(filename, foldername):
if not os.path.exists(foldername):
os.makedirs(foldername)
data = {}
dataMean = np.zeros((3, imgSize, imgSize)) # mean is in CHW format.
with open('train_map.txt', 'w') as mapFile:
with open('train_regrLabels.txt', 'w') as regrFile:
for ifile in range(1, 6):
with open(os.path.join('./cifar-10-batches-py', 'data_batch_' + str(ifile)), 'rb') as f:
if sys.version_info[0] < 3:
data = cp.load(f)
else:
data = cp.load(f, encoding='latin1')
for i in range(10000):
fname = os.path.join(os.path.abspath(foldername), ('%05d.png' % (i + (ifile - 1) * 10000)))
saveImage(fname, data['data'][i, :], data['labels'][i], mapFile, regrFile, 4, mean=dataMean)
dataMean = dataMean / (50 * 1000)
saveMean('CIFAR-10_mean.xml', dataMean)
def saveTestImages(filename, foldername):
if not os.path.exists(foldername):
os.makedirs(foldername)
with open('test_map.txt', 'w') as mapFile:
with open('test_regrLabels.txt', 'w') as regrFile:
with open(os.path.join('./cifar-10-batches-py', 'test_batch'), 'rb') as f:
if sys.version_info[0] < 3:
data = cp.load(f)
else:
data = cp.load(f, encoding='latin1')
for i in range(10000):
fname = os.path.join(os.path.abspath(foldername), ('%05d.png' % i))
saveImage(fname, data['data'][i, :], data['labels'][i], mapFile, regrFile, 0)
# URLs for the train image and labels data
url_cifar_data = 'http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz'
# Paths for saving the text files
data_dir = './data/CIFAR-10/'
train_filename = data_dir + '/Train_cntk_text.txt'
test_filename = data_dir + '/Test_cntk_text.txt'
train_img_directory = data_dir + '/Train'
test_img_directory = data_dir + '/Test'
root_dir = os.getcwd()
if not os.path.exists(data_dir):
os.makedirs(data_dir)
try:
os.chdir(data_dir)
trn, tst = loadData(url_cifar_data)
print ('Writing train text file...')
saveTxt(r'./Train_cntk_text.txt', trn)
print ('Done.')
print ('Writing test text file...')
saveTxt(r'./Test_cntk_text.txt', tst)
print ('Done.')
print ('Converting train data to png images...')
saveTrainImages(r'./Train_cntk_text.txt', 'train')
print ('Done.')
print ('Converting test data to png images...')
saveTestImages(r'./Test_cntk_text.txt', 'test')
print ('Done.')
finally:
os.chdir("../..")
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Eager execution
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/eager"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/eager.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/eager.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/eager.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
TensorFlow's eager execution is an imperative programming environment that
evaluates operations immediately, without building graphs: operations return
concrete values instead of constructing a computational graph to run later. This
makes it easy to get started with TensorFlow and debug models, and it
reduces boilerplate as well. To follow along with this guide, run the code
samples below in an interactive `python` interpreter.
Eager execution is a flexible machine learning platform for research and
experimentation, providing:
* *An intuitive interface*—Structure your code naturally and use Python data
structures. Quickly iterate on small models and small data.
* *Easier debugging*—Call ops directly to inspect running models and test
changes. Use standard Python debugging tools for immediate error reporting.
* *Natural control flow*—Use Python control flow instead of graph control
flow, simplifying the specification of dynamic models.
Eager execution supports most TensorFlow operations and GPU acceleration.
Note: Some models may experience increased overhead with eager execution
enabled. Performance improvements are ongoing, but please
[file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a
problem and share your benchmarks.
## Setup and basic usage
```
from __future__ import absolute_import, division, print_function, unicode_literals
import os
try:
# %tensorflow_version only exists in Colab.
  %tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import cProfile
```
In TensorFlow 2.0, eager execution is enabled by default.
```
tf.executing_eagerly()
```
Now you can run TensorFlow operations and the results will return immediately:
```
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
```
Enabling eager execution changes how TensorFlow operations behave—now they
immediately evaluate and return their values to Python. `tf.Tensor` objects
reference concrete values instead of symbolic handles to nodes in a computational
graph. Since there isn't a computational graph to build and run later in a
session, it's easy to inspect results using `print()` or a debugger. Evaluating,
printing, and checking tensor values does not break the flow for computing
gradients.
Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy
operations accept `tf.Tensor` arguments. TensorFlow
[math operations](https://www.tensorflow.org/api_guides/python/math_ops) convert
Python objects and NumPy arrays to `tf.Tensor` objects. The
`tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
```
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
```
## Dynamic control flow
A major benefit of eager execution is that all the functionality of the host
language is available while your model is executing. So, for example,
it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
```
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
```
This has conditionals that depend on tensor values and it prints these values
at runtime.
## Eager training
### Computing gradients
[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation)
is useful for implementing machine learning algorithms such as
[backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training
neural networks. During eager execution, use `tf.GradientTape` to trace
operations for computing gradients later.
You can use `tf.GradientTape` to train and/or compute gradients in eager. It is especially useful for complicated training loops.
Since different operations can occur during each call, all
forward-pass operations get recorded to a "tape". To compute the gradient, play
the tape backwards and then discard it. By default, a particular `tf.GradientTape` can only
compute one gradient; subsequent calls throw a runtime error.
```
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
```
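When you do need several gradients from the same computation, a tape created with `persistent=True` can be replayed multiple times. This is a minimal sketch of that workaround (the numbers are chosen purely for illustration):

```python
import tensorflow as tf

w = tf.Variable(3.0)

# A persistent tape may be replayed more than once.
with tf.GradientTape(persistent=True) as tape:
    y = w * w      # y = w**2
    z = y * y      # z = w**4

dy_dw = tape.gradient(y, w)  # d(w**2)/dw = 2*w   = 6.0
dz_dw = tape.gradient(z, w)  # d(w**4)/dw = 4*w**3 = 108.0
del tape  # release the tape's resources once you are done

print(dy_dw.numpy(), dz_dw.numpy())
```

Dropping `persistent=True` here and calling `gradient` a second time would raise the runtime error described above.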
### Train a model
The following example creates a multi-layer model that classifies the standard
MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build
trainable graphs in an eager execution environment.
```
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
```
Even without training, call the model and inspect the output in eager execution:
```
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
```
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
```
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
```
Note: Use the assert functions in `tf.debugging` to check whether a condition holds. This works in both eager and graph execution.
```
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train(epochs):
for epoch in range(epochs):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train(epochs = 3)
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
```
### Variables and optimizers
`tf.Variable` objects store mutable `tf.Tensor`-like values accessed during
training to make automatic differentiation easier.
The collections of variables can be encapsulated into layers or models, along with methods that operate on them. See [Custom Keras layers and models](./keras/custom_layers_and_models.ipynb) for details. The main difference between layers and models is that models add methods like `Model.fit`, `Model.evaluate`, and `Model.save`.
For example, the automatic differentiation example above
can be rewritten:
```
class Linear(tf.keras.Model):
def __init__(self):
super(Linear, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
```
Next:
1. Create the model.
2. Compute the derivatives of the loss function with respect to the model parameters.
3. Choose a strategy for updating the variables based on the derivatives.
```
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
steps = 300
for i in range(steps):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
```
Note: Variables persist until the last reference to the Python object
is removed, at which point the variable is deleted.
### Object-based saving
A `tf.keras.Model` includes a convenient `save_weights` method allowing you to easily create a checkpoint:
```
model.save_weights('weights')
status = model.load_weights('weights')
```
Using `tf.train.Checkpoint` you can take full control over this process.
This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb).
```
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
```
To save and load models, `tf.train.Checkpoint` stores the internal state of objects,
without requiring hidden variables. To record the state of a `model`,
an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
```
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
```
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details.
### Object-oriented metrics
`tf.keras.metrics` are stored as objects. Update a metric by passing the new data to
the callable, and retrieve the result using the metric object's `result` method,
for example:
```
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
```
### Summaries and TensorBoard
[TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool for
understanding, debugging and optimizing the model training process. It uses
summary events that are written while executing the program.
You can use `tf.summary` to record summaries of variables in eager execution.
For example, to record summaries of `loss` once every 100 training steps:
```
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
steps = 1000
with writer.as_default(): # or call writer.set_as_default() before the loop.
for i in range(steps):
step = i + 1
# Calculate loss with your real train function.
loss = 1 - 0.001 * step
if step % 100 == 0:
tf.summary.scalar('loss', loss, step=step)
!ls tb/
```
## Advanced automatic differentiation topics
### Dynamic models
`tf.GradientTape` can also be used in dynamic models. This example of a
[backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search)
algorithm looks like normal NumPy code, except that it computes gradients and is
differentiable, despite the complex control flow:
```
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically tracked.
# But to calculate a gradient from a tensor, you must `watch` it.
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
```
### Custom gradients
Custom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to the
inputs, outputs, or intermediate results. For example, here's an easy way to clip
the norm of the gradients in the backward pass:
```
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
```
Custom gradients are commonly used to provide a numerically stable gradient for a
sequence of operations:
```
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
```
Here, the `log1pexp` function can be analytically simplified with a custom
gradient. The implementation below reuses the value for `tf.exp(x)` that is
computed during the forward pass—making it more efficient by eliminating
redundant calculations:
```
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
```
## Performance
Computation is automatically offloaded to GPUs during eager execution. If you
want control over where a computation runs you can enclose it in a
`tf.device('/gpu:0')` block (or the CPU equivalent):
```
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
```
A `tf.Tensor` object can be copied to a different device to execute its
operations:
```
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
```
### Benchmarks
For compute-heavy models, such as
[ResNet50](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/resnet50)
training on a GPU, eager execution performance is comparable to `tf.function` execution.
But the gap grows larger for models with less computation, and there is work to
be done to optimize hot code paths for models with lots of small operations.
## Work with functions
While eager execution makes development and debugging more interactive,
TensorFlow 1.x style graph execution has advantages for distributed training, performance
optimizations, and production deployment. To bridge this gap, TensorFlow 2.0 introduces `function`s via the `tf.function` API. For more information, see the [tf.function](./function.ipynb) guide.
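As a small, hedged sketch of the idea (assuming TensorFlow 2.x), decorating a Python function with `tf.function` traces it into a reusable graph while the call site stays unchanged:

```python
import tensorflow as tf

@tf.function
def square_sum(a, b):
    # Traced into a graph on the first call; later calls with the
    # same input signature reuse the compiled graph.
    return tf.reduce_sum(a * a + b * b)

x = tf.constant([1.0, 2.0])
y = tf.constant([3.0, 4.0])
result = square_sum(x, y)
print(result.numpy())  # 1 + 4 + 9 + 16 = 30.0
```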
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Train Your Own Model and Serve It With TensorFlow Serving
In this notebook, you will train a neural network to classify images of handwritten digits from the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. You will then save the trained model, and serve it using [TensorFlow Serving](https://www.tensorflow.org/tfx/guide/serving).
**Warning: This notebook is designed to be run in a Google Colab only**. It installs packages on the system and requires root access. If you want to run it in a local Jupyter notebook, please proceed with caution.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/TensorFlow%20Deployment/Course%204%20-%20TensorFlow%20Serving/Week%201/Exercises/TFServing_Week1_Exercise_Answer.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/lmoroney/dlaicourse/blob/master/TensorFlow%20Deployment/Course%204%20-%20TensorFlow%20Serving/Week%201/Exercises/TFServing_Week1_Exercise_Answer.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
## Setup
```
try:
%tensorflow_version 2.x
except:
pass
import os
import json
import tempfile
import requests
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
print("\u2022 Using TensorFlow Version:", tf.__version__)
```
## Import the MNIST Dataset
The [MNIST](http://yann.lecun.com/exdb/mnist/) dataset contains 70,000 grayscale images of the digits 0 through 9. The images show individual digits at a low resolution (28 by 28 pixels).
Even though these are really images, we will load them as NumPy arrays and not as binary image objects.
```
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# EXERCISE: Scale the values of the arrays below to be between 0.0 and 1.0.
train_images = train_images / 255.0
test_images = test_images / 255.0
```
In the cell below use the `.reshape` method to resize the arrays to the following sizes:
```python
train_images.shape: (60000, 28, 28, 1)
test_images.shape: (10000, 28, 28, 1)
```
```
# EXERCISE: Reshape the arrays below.
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1)
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1)
print('\ntrain_images.shape: {}, of {}'.format(train_images.shape, train_images.dtype))
print('test_images.shape: {}, of {}'.format(test_images.shape, test_images.dtype))
```
## Look at a Sample Image
```
idx = 42
plt.imshow(test_images[idx].reshape(28,28), cmap=plt.cm.binary)
plt.title('True Label: {}'.format(test_labels[idx]), fontdict={'size': 16})
plt.show()
```
## Build a Model
In the cell below build a `tf.keras.Sequential` model that can be used to classify the images of the MNIST dataset. Feel free to use the simplest possible CNN. Make sure your model has the correct `input_shape` and the correct number of output units.
```
# EXERCISE: Create a model.
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(input_shape=(28,28,1), filters=8, kernel_size=3,
strides=2, activation='relu', name='Conv1'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation=tf.nn.softmax, name='Softmax')
])
model.summary()
```
## Train the Model
In the cell below configure your model for training using the `adam` optimizer, `sparse_categorical_crossentropy` as the loss, and `accuracy` for your metrics. Then train the model for the given number of epochs, using the `train_images` array.
```
# EXERCISE: Configure the model for training.
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
epochs = 5
# EXERCISE: Train the model.
history = model.fit(train_images, train_labels, epochs=epochs)
```
## Evaluate the Model
```
# EXERCISE: Evaluate the model on the test images.
results_eval = model.evaluate(test_images, test_labels, verbose=0)
for metric, value in zip(model.metrics_names, results_eval):
print(metric + ': {:.3}'.format(value))
```
## Save the Model
```
MODEL_DIR = tempfile.gettempdir()
version = 1
export_path = os.path.join(MODEL_DIR, str(version))
if os.path.isdir(export_path):
print('\nAlready saved a model, cleaning up\n')
!rm -r {export_path}
model.save(export_path, save_format="tf")
print('\nexport_path = {}'.format(export_path))
!ls -l {export_path}
```
## Examine Your Saved Model
```
!saved_model_cli show --dir {export_path} --all
```
## Add TensorFlow Serving Distribution URI as a Package Source
```
# This is the same as you would do from your command line, but without the [arch=amd64], and no sudo
# You would instead do:
# echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list && \
# curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -
!echo "deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | tee /etc/apt/sources.list.d/tensorflow-serving.list && \
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | apt-key add -
!apt update
```
## Install TensorFlow Serving
```
!apt-get install tensorflow-model-server
```
## Run the TensorFlow Model Server
You will now launch the TensorFlow model server with a bash script. In the cell below use the following parameters when running the TensorFlow model server:
* `rest_api_port`: Use port `8501` for your requests.
* `model_name`: Use `digits_model` as your model name.
* `model_base_path`: Use the environment variable `MODEL_DIR` defined below as the base path to the saved model.
```
os.environ["MODEL_DIR"] = MODEL_DIR
# EXERCISE: Fill in the missing code below.
%%bash --bg
nohup tensorflow_model_server \
--rest_api_port=8501 \
--model_name=digits_model \
--model_base_path="${MODEL_DIR}" >server.log 2>&1
!tail server.log
```
## Create JSON Object with Test Images
In the cell below construct a JSON object and use the first three images of the testing set (`test_images`) as your data.
```
# EXERCISE: Create JSON Object
data = json.dumps({"signature_name": "serving_default", "instances": test_images[0:3].tolist()})
```
## Make Inference Request
In the cell below, send a predict request as a POST to the server's REST endpoint, and pass it your test data. You should ask the server to give you the latest version of your model.
```
# EXERCISE: Fill in the code below
headers = {"content-type": "application/json"}
json_response = requests.post('http://localhost:8501/v1/models/digits_model:predict', data=data, headers=headers)
predictions = json.loads(json_response.text)['predictions']
```
## Plot Predictions
```
plt.figure(figsize=(10,15))
for i in range(3):
plt.subplot(1,3,i+1)
plt.imshow(test_images[i].reshape(28,28), cmap = plt.cm.binary)
plt.axis('off')
color = 'green' if np.argmax(predictions[i]) == test_labels[i] else 'red'
plt.title('Prediction: {}\nTrue Label: {}'.format(np.argmax(predictions[i]), test_labels[i]), color=color)
plt.show()
```
# MARATONA BEHIND THE CODE 2020
## CHALLENGE 2: PART 1
### Introduction
In data science projects aimed at building *machine learning* (statistical learning) models, the initial data is very rarely already in the ideal format for model building. Several intermediate preprocessing steps are needed, such as encoding categorical variables, normalizing numerical variables, handling missing data, and so on. The **scikit-learn** library, one of the most popular open-source *machine learning* libraries in the world, provides many built-in functions for the most common data transformations. However, in a typical machine learning workflow, these transformations must be applied at least twice: first to "train" the model, and then again whenever new data is sent as input to be classified by that model.
To make this kind of workflow easier, scikit-learn also provides a tool called **Pipeline**, which is simply an ordered list of transformations to be applied to the data. To help develop and manage the full life cycle of these applications, in addition to Pipelines, data science teams can use **Watson Machine Learning**, which offers dozens of tools for training, managing, hosting, and evaluating machine-learning-based models. Watson Machine Learning can also encapsulate pipelines and models in an API ready for use and integration with other applications.
During challenge 2, you will learn how to build a **Pipeline** for a classification model and host it as an API with the help of Watson Machine Learning. Once hosted, you can integrate the model with other applications, such as virtual assistants and much more. This notebook presents a working example of creating a model and a pipeline in scikit-learn (which you can use as a template for your solution!).
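To make the idea concrete before the challenge data appears, here is a minimal, self-contained sketch of such a Pipeline. The toy data and step names below are illustrative only, not the challenge dataset:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

# Toy feature matrix with a missing value, and binary labels.
X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, 6.0], [8.0, 5.0]])
y = np.array([0, 0, 1, 1])

# The pipeline applies the exact same imputation at fit time
# and later at predict time, which is the point of using one.
pipe = Pipeline([
    ('imputer', SimpleImputer(strategy='mean')),
    ('clf', DecisionTreeClassifier(random_state=0)),
])
pipe.fit(X, y)
print(pipe.predict(X))
```

Because the imputer is fitted inside the pipeline, any new sample with missing values is filled in with the training means before reaching the classifier.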
## ** ATTENTION **
This notebook serves an educational purpose only; you may change the code as you wish, and nothing here will be evaluated/scored.
The recommendation is that you experiment and test different algorithms here before moving on to *part 2*, where your model will be deployed to **Watson Machine Learning** :)
### Working with scikit-learn Pipelines
```
# First, install scikit-learn version 0.20.3 and xgboost version 0.71 in this notebook's kernel
# ** BE CAREFUL WHEN CHANGING LIBRARY VERSIONS -- DIFFERENT VERSIONS MAY BE INCOMPATIBLE WITH WATSON STUDIO **
# NOTE: the xgboost installation takes a considerable amount of time
!pip install scikit-learn==0.20.3 --upgrade
!pip install xgboost==0.71 --upgrade
# Next, we import the various libraries that will be used:
# Package for working with JSON
import json
# Package for making HTTP requests
import requests
# Package for data exploration and analysis
import pandas as pd
# Package with numerical methods and matrix representations
import numpy as np
# Package for building models based on the Gradient Boosting technique
import xgboost as xgb
# scikit-learn packages for data preprocessing
# "SimpleImputer" is a transformation for filling in missing values in datasets
from sklearn.impute import SimpleImputer
# scikit-learn packages for model training and pipeline construction
# Method for splitting a dataset into train and test samples
from sklearn.model_selection import train_test_split
# Method for creating decision-tree-based models
from sklearn.tree import DecisionTreeClassifier
# Class for creating a machine-learning pipeline
from sklearn.pipeline import Pipeline
# scikit-learn packages for model evaluation
# Methods for cross-validating the created model
from sklearn.model_selection import KFold, cross_validate
```
### Importing a .csv from your IBM Cloud Pak for Data project into this notebook's kernel
First we will import the dataset provided for the challenge, which is already included in this project!
You can import the data from a .csv file directly into the notebook kernel as a DataFrame of the Pandas library, which is widely used for data manipulation in Python.
To perform the import, just select the next cell and follow the instructions in the image below:

After selecting the **"Insert to code"** option, the cell below will be filled in with the code needed to import and read the data from the .csv file as a Pandas DataFrame.
```
<<< INSERT THE DATASET AS A PANDAS DATAFRAME IN THIS CELL! >>>
```
There are 15 columns in the provided dataset: fourteen of them are feature variables (input data) and one is a target variable (the one we want our model to be able to predict).
The feature variables are:
MATRICULA - the student's enrollment number
NOME - the student's full name
REPROVACOES_DE - number of failures in the ``Direito Empresarial`` (Business Law) course
REPROVACOES_EM - number of failures in the ``Empreendedorismo`` (Entrepreneurship) course
REPROVACOES_MF - number of failures in the ``Matemática Financeira`` (Financial Mathematics) course
REPROVACOES_GO - number of failures in the ``Gestão Operacional`` (Operational Management) course
NOTA_DE - the student's simple grade average in ``Direito Empresarial`` (0-10)
NOTA_EM - the student's simple grade average in ``Empreendedorismo`` (0-10)
NOTA_MF - the student's simple grade average in ``Matemática Financeira`` (0-10)
NOTA_GO - the student's simple grade average in ``Gestão Operacional`` (0-10)
INGLES - binary variable indicating whether the student has knowledge of English (0 -> yes or 1 -> no).
H_AULA_PRES - hours of in-person study completed by the student
TAREFAS_ONLINE - number of online assignments submitted by the student
FALTAS - the student's accumulated absences (all courses)
The target variable is:
PERFIL - a *string* indicating one of five possibilities:
"EXCELENTE" - the student does not need mentoring
"MUITO BOM" - the student does not need mentoring
"HUMANAS" - the student needs mentoring exclusively in courses with humanities content
"EXATAS" - the student needs mentoring only in courses with exact-sciences content
"DIFICULDADE" - the student needs mentoring in two or more courses
With a model capable of classifying a student into one of these categories, we can automate part of the student mentoring through virtual assistants, which will be able to recommend study practices and personalized content based on each student's needs.
### Exploring the provided data
We can continue exploring the provided data with the ``info()`` function:
```
df_data_1.info()
```
We can see that there are variables of type ``float64`` ("decimal" numbers), of type ``int64`` (integers), and of type ``object`` (in this case *strings*, i.e. text).
Since most supervised statistical-learning algorithms only accept numeric values as input, the "object" variables must be preprocessed before this dataset is used to train a model. We can also see that there are missing values in several columns. These missing values must likewise be handled before models are built on this base dataset.
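Both issues can also be spotted programmatically; a minimal sketch on an invented mini-frame whose column names mirror the challenge dataset (the values are made up):

```python
import pandas as pd
import numpy as np

# Hypothetical mini-frame standing in for df_data_1, just to illustrate the checks
df_demo = pd.DataFrame({
    "NOTA_DE": [7.5, np.nan, 6.0],
    "INGLES": [0, 1, 0],
    "PERFIL": ["EXATAS", "EXCELENTE", "HUMANAS"],
})

# String columns that will need encoding or dropping before training
object_cols = df_demo.select_dtypes(include="object").columns.tolist()
# Per-column count of missing values
null_counts = df_demo.isnull().sum()

print(object_cols)
print(null_counts)
```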
The ``describe()`` function generates various statistics about the numeric variables that can also be useful:
```
df_data_1.describe()
```
### Visualizations
To visualize the provided dataset, we can use the ``matplotlib`` and ``seaborn`` libraries:
```
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=(28, 4))
sns.countplot(ax=axes[0], x='REPROVACOES_DE', data=df_data_1)
sns.countplot(ax=axes[1], x='REPROVACOES_EM', data=df_data_1)
sns.countplot(ax=axes[2], x='REPROVACOES_MF', data=df_data_1)
sns.countplot(ax=axes[3], x='REPROVACOES_GO', data=df_data_1)
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=(28, 4))
sns.distplot(df_data_1['NOTA_DE'], ax=axes[0])
sns.distplot(df_data_1['NOTA_EM'], ax=axes[1])
sns.distplot(df_data_1['NOTA_MF'], ax=axes[2])
sns.distplot(df_data_1['NOTA_GO'].dropna(), ax=axes[3])
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=(28, 4))
sns.countplot(ax=axes[0], x='INGLES', data=df_data_1)
sns.countplot(ax=axes[1], x='FALTAS', data=df_data_1)
sns.countplot(ax=axes[2], x='H_AULA_PRES', data=df_data_1)
sns.countplot(ax=axes[3], x='TAREFAS_ONLINE', data=df_data_1)
fig = plt.plot()
sns.countplot(x='PERFIL', data=df_data_1)
```
## ** ATTENTION **
You may notice from the figure above that this dataset is imbalanced, i.e., the number of samples for each class we want to classify differs considerably. Participants are free to add or remove **ROWS** in the provided dataset, including using balancing libraries such as ``imblearn``. However, be **very careful**!!! You must not change the data types, nor remove or reorder the columns of the provided dataset. All operations of that kind must be done through scikit-learn Transforms :)
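If you choose to balance by adding rows, one simple option that needs no extra library is oversampling with ``sklearn.utils.resample``; a minimal sketch on an invented toy frame (the real code would operate on the challenge dataset):

```python
import pandas as pd
from sklearn.utils import resample

# Toy imbalanced frame; 'PERFIL' mirrors the challenge target column (values invented)
df = pd.DataFrame({"NOTA_DE": range(10),
                   "PERFIL": ["EXCELENTE"] * 8 + ["DIFICULDADE"] * 2})

# Oversample every class up to the size of the largest one
max_size = df["PERFIL"].value_counts().max()
balanced = pd.concat([
    resample(group, replace=True, n_samples=max_size, random_state=42)
    for _, group in df.groupby("PERFIL")
]).reset_index(drop=True)

print(balanced["PERFIL"].value_counts())
```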
<hr>
### Preprocessing the data
For data preprocessing, two basic transforms are presented in this notebook, demonstrating how to build a Pipeline with a working model. This working Pipeline should be improved by the participant so that the final model reaches the highest possible accuracy, guaranteeing a higher score in the challenge. The improvement may consist only of changes to the data preprocessing, the choice of a different model-training algorithm, or even a change of *framework* (however, only a ready-made example of integrating Watson Machine Learning with *scikit-learn* is provided).
The first transform (a step in our Pipeline) will be dropping the "NOME" column from our dataset; besides not being a numeric variable, it is also unrelated to the students' performance in the courses. There are ready-made functions in scikit-learn for this transform, but our example will demonstrate how to create a custom transform from scratch in scikit-learn. If desired, the participant can use this example to create other transforms and add them to the final Pipeline :)
#### Transform 1: dropping columns from the dataset
To create a custom data transform in scikit-learn, you basically need to create a class with the ``transform`` and ``fit`` methods. The logic of our transform is executed in the transform method.
The next cell shows the complete code of a ``DropColumns`` transform for removing columns from a pandas DataFrame.
```
from sklearn.base import BaseEstimator, TransformerMixin

# All sklearn Transforms must have the `transform` and `fit` methods
class DropColumns(BaseEstimator, TransformerMixin):
    def __init__(self, columns):
        self.columns = columns

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        # First make a copy of the input dataframe 'X'
        data = X.copy()
        # Return a new dataframe without the unwanted columns
        return data.drop(labels=self.columns, axis='columns')
```
To apply this transform to a pandas DataFrame, just instantiate a *DropColumns* object and call the transform() method.
```
# Instantiate a DropColumns transform
rm_columns = DropColumns(
    columns=["NOME"]  # this transform takes a list with the names of the unwanted columns
)

print(rm_columns)

# Show the columns of the original dataset
print("Columns of the original dataset: \n")
print(df_data_1.columns)

# Apply the ``DropColumns`` transform to the base dataset
rm_columns.fit(X=df_data_1)

# Rebuild a pandas DataFrame from the transform's output
df_data_2 = pd.DataFrame.from_records(
    data=rm_columns.transform(X=df_data_1),
)

# Show the columns of the transformed dataset
print("Columns of the dataset after the ``DropColumns`` transform: \n")
print(df_data_2.columns)
```
Note that the "NOME" column was removed and our dataset now has only 14 columns.
#### Transform 2: handling missing data
To handle the missing data in our dataset, we will now use a ready-made transform from the scikit-learn library, called **SimpleImputer**.
This transform supports several strategies for handling missing data. The official documentation can be found at: https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html
In this example we will simply turn all missing values into zero.
```
from sklearn.impute import SimpleImputer

# Create a ``SimpleImputer`` object
si = SimpleImputer(
    missing_values=np.nan,  # missing values are of type ``np.nan`` (pandas default)
    strategy='constant',    # the chosen strategy replaces missing values with a constant
    fill_value=0,           # the constant used to fill the missing values is an int64=0
    verbose=0,
    copy=True
)

# Show the missing values of the dataset after the first transform (df_data_2)
print("Null values before the SimpleImputer transform: \n\n{}\n".format(df_data_2.isnull().sum(axis=0)))

# Apply the SimpleImputer ``si`` to the df_data_2 dataset (output of the first transform)
si.fit(X=df_data_2)

# Rebuild a new pandas DataFrame with the imputed data (df_data_3)
df_data_3 = pd.DataFrame.from_records(
    data=si.transform(X=df_data_2),  # SimpleImputer.transform(<<pandas dataframe>>) returns a list of lists
    columns=df_data_2.columns        # the original columns must be preserved in this transform
)

# Show the missing values of the dataset after the second transform (SimpleImputer) (df_data_3)
print("Null values in the dataset after the SimpleImputer transform: \n\n{}\n".format(df_data_3.isnull().sum(axis=0)))
```
Note that there are no missing values left in our dataset :)
It is worth pointing out that replacing missing values with 0 is not always the best strategy. Participants are encouraged to study and implement different strategies for handling missing values in order to improve their model and their final score.
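For instance, a median-based strategy is a one-line change; a minimal sketch on an invented toy column:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Toy column with one gap (values invented)
df = pd.DataFrame({"NOTA_GO": [2.0, np.nan, 8.0, 6.0]})

# Median imputation instead of the constant 0 used above
si_median = SimpleImputer(missing_values=np.nan, strategy="median")
filled = si_median.fit_transform(df)
print(filled.ravel())  # the NaN becomes the median of [2.0, 8.0, 6.0] = 6.0
```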
### Training a classification model
With preprocessing done, we now have the dataset in the format needed to train our model:
```
df_data_3.head()
```
In the provided example, we will use all columns except the **PERFIL** column as *features* (input variables).
The **PERFIL** variable will be the model's target, as described in the challenge statement.
#### Defining the model's features
```
# Define the columns that will be features (note the NOME column is not present)
features = [
    "MATRICULA", "REPROVACOES_DE", "REPROVACOES_EM", "REPROVACOES_MF", "REPROVACOES_GO",
    "NOTA_DE", "NOTA_EM", "NOTA_MF", "NOTA_GO",
    "INGLES", "H_AULA_PRES", "TAREFAS_ONLINE", "FALTAS",
]

# Define the target variable
target = ["PERFIL"]

# Prepare the arguments for the ``scikit-learn`` methods
X = df_data_3[features]
y = df_data_3[target]
```
The input set (X):
```
X.head()
```
The corresponding target variables (y):
```
y.head()
```
#### Splitting the dataset into a training set and a test set
We will split the provided dataset into two groups: one to train our model, and another to test the result through a blind test. The split can easily be done with scikit-learn's *train_test_split()* method:
```
from sklearn.model_selection import train_test_split

# Split the data into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=337)
```
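Because the dataset is imbalanced, passing ``stratify`` to *train_test_split()* keeps the class proportions equal in both splits; a minimal sketch on invented data:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy data mirroring the X/y shapes used above (values invented)
X_toy = pd.DataFrame({"NOTA_DE": range(20)})
y_toy = pd.Series(["EXCELENTE"] * 16 + ["DIFICULDADE"] * 4, name="PERFIL")

# stratify=y_toy preserves the 80/20 class ratio in both splits
X_tr, X_te, y_tr, y_te = train_test_split(
    X_toy, y_toy, test_size=0.25, random_state=337, stratify=y_toy
)
print(y_te.value_counts())
```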
<hr>
#### Creating a decision-tree model
In the provided example we will create a classifier based on **decision trees**.
Background material on decision trees is available in the official scikit-learn documentation: https://scikit-learn.org/stable/modules/tree.html
The first step is basically to instantiate a *DecisionTreeClassifier()* object from the scikit-learn library.
```
from sklearn.tree import DecisionTreeClassifier

# Create a decision tree with the ``scikit-learn`` library:
decision_tree = DecisionTreeClassifier()
```
#### Training the decision-tree classifier
```
# Train the model (the *fit()* method is called with the training sets)
decision_tree.fit(X_train, y_train)
```
#### Running predictions and evaluating the decision tree
```
# Blind test of the trained model
y_pred = decision_tree.predict(X_test)
X_test.head()
print(y_pred)

from sklearn.metrics import accuracy_score

# Accuracy achieved by the decision tree
print("Accuracy: {}%".format(100 * round(accuracy_score(y_test, y_pred), 2)))
```
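A single hold-out score can be noisy; the ``KFold`` and ``cross_validate`` methods imported at the top of the notebook give a more stable estimate. A minimal sketch on synthetic data (the real call would use the challenge's X and y):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, cross_validate
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for (X, y); values are generated, not from the challenge data
X_demo, y_demo = make_classification(n_samples=100, n_features=5, random_state=0)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_validate(DecisionTreeClassifier(random_state=0),
                        X_demo, y_demo, cv=cv, scoring="accuracy")
print(scores["test_score"].mean())  # mean accuracy across the 5 folds
```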
<hr>
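The transforms and the classifier shown above can also be chained into a single scikit-learn Pipeline, which keeps all preprocessing inside the model object as the challenge rules require. A minimal sketch on an invented toy frame (column names mirror the dataset, values are made up):

```python
import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.tree import DecisionTreeClassifier

# Toy frame mimicking the challenge layout (values invented)
df = pd.DataFrame({
    "NOTA_DE": [7.0, np.nan, 4.0, 8.0, 3.0, 9.0],
    "FALTAS": [1, 4, 2, 0, 5, 1],
    "PERFIL": ["EXCELENTE", "DIFICULDADE", "EXATAS", "EXCELENTE", "DIFICULDADE", "EXCELENTE"],
})
X_demo = df.drop(columns=["PERFIL"])
y_demo = df["PERFIL"]

# Chaining imputation and the classifier keeps the preprocessing inside the model
pipe = Pipeline([
    ("imputer", SimpleImputer(strategy="constant", fill_value=0)),
    ("clf", DecisionTreeClassifier(random_state=0)),
])
pipe.fit(X_demo, y_demo)
preds = pipe.predict(X_demo)
print(preds)
```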
This notebook demonstrated how to work with transforms and models using the scikit-learn library. Participants are encouraged to run their own experiments by editing the code provided here until a model with high accuracy is reached.
Once you are satisfied with your model, you can move on to the second stage of the challenge -- wrapping your model as a REST API ready for use with Watson Machine Learning!
The notebook for the second stage is already in this project; just open the **ASSETS** tab and start it! Before you do, don't forget to shut down this notebook's kernel to reduce the consumption of your free tier of IBM Cloud Pak for Data.
```
import os
import time
import random
import pandas as pd
import numpy as np
import gc
import re
import torch
from torchtext import data
import spacy
from tqdm import tqdm_notebook, tnrange
from tqdm.auto import tqdm
from unidecode import unidecode
tqdm.pandas(desc='Progress')
from collections import Counter
from textblob import TextBlob
from nltk import word_tokenize
import torch as t
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
from torch.autograd import Variable
from torchtext.data import Example
from torch.optim.optimizer import Optimizer
import torchtext
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score
```
### Basic Parameters
```
embed_size = 300 # how big is each word vector
max_features = 120000 # how many unique words to use (i.e num rows in embedding vector)
maxlen = 70 # max number of words in a question to use
batch_size = 512 # how many samples to process at once
n_epochs = 5 # how many times to iterate over all samples
n_splits = 5 # Number of K-fold Splits
SEED = 1029 # seed_everything
def seed_everything(seed=1029):
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.backends.cudnn.deterministic = True

seed_everything()
## FUNCTIONS TAKEN FROM https://www.kaggle.com/gmhost/gru-capsule
def load_glove(word_index):
    EMBEDDING_FILE = '../input/embeddings/glove.840B.300d/glove.840B.300d.txt'

    def get_coefs(word, *arr):
        return word, np.asarray(arr, dtype='float32')[:300]

    embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(EMBEDDING_FILE))
    all_embs = np.stack(embeddings_index.values())
    emb_mean, emb_std = -0.005838499, 0.48782197
    embed_size = all_embs.shape[1]

    # word_index = tokenizer.word_index
    nb_words = min(max_features, len(word_index))
    embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size))
    for word, i in word_index.items():
        if i >= max_features: continue
        embedding_vector = embeddings_index.get(word)
        if embedding_vector is not None: embedding_matrix[i] = embedding_vector
    return embedding_matrix

def load_fasttext(word_index):
    EMBEDDING_FILE = '../input/embeddings/wiki-news-300d-1M/wiki-news-300d-1M.vec'

    def get_coefs(word, *arr):
        return word, np.asarray(arr, dtype='float32')

    embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(EMBEDDING_FILE) if len(o) > 100)
    all_embs = np.stack(embeddings_index.values())
    emb_mean, emb_std = all_embs.mean(), all_embs.std()
    embed_size = all_embs.shape[1]

    # word_index = tokenizer.word_index
    nb_words = min(max_features, len(word_index))
    embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size))
    for word, i in word_index.items():
        if i >= max_features: continue
        embedding_vector = embeddings_index.get(word)
        if embedding_vector is not None: embedding_matrix[i] = embedding_vector
    return embedding_matrix

def load_para(word_index):
    EMBEDDING_FILE = '../input/embeddings/paragram_300_sl999/paragram_300_sl999.txt'

    def get_coefs(word, *arr):
        return word, np.asarray(arr, dtype='float32')

    embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(EMBEDDING_FILE, encoding="utf8", errors='ignore') if len(o) > 100)
    all_embs = np.stack(embeddings_index.values())
    emb_mean, emb_std = -0.0053247833, 0.49346462
    embed_size = all_embs.shape[1]

    # word_index = tokenizer.word_index
    nb_words = min(max_features, len(word_index))
    embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size))
    for word, i in word_index.items():
        if i >= max_features: continue
        embedding_vector = embeddings_index.get(word)
        if embedding_vector is not None: embedding_matrix[i] = embedding_vector
    return embedding_matrix
df_train = pd.read_csv("../input/train.csv")
df_test = pd.read_csv("../input/test.csv")
df = pd.concat([df_train ,df_test],sort=True)
def build_vocab(texts):
    sentences = texts.apply(lambda x: x.split()).values
    vocab = {}
    for sentence in sentences:
        for word in sentence:
            try:
                vocab[word] += 1
            except KeyError:
                vocab[word] = 1
    return vocab
vocab = build_vocab(df['question_text'])
sin = len(df_train[df_train["target"]==0])
insin = len(df_train[df_train["target"]==1])
persin = (sin/(sin+insin))*100
perinsin = (insin/(sin+insin))*100
print("# Sincere questions: {:,}({:.2f}%) and # Insincere questions: {:,}({:.2f}%)".format(sin,persin,insin,perinsin))
print("# Test samples: {:,}({:.3f}% of train samples)".format(len(df_test),len(df_test)/len(df_train)))
def build_vocab(texts):
    sentences = texts.apply(lambda x: x.split()).values
    vocab = {}
    for sentence in sentences:
        for word in sentence:
            try:
                vocab[word] += 1
            except KeyError:
                vocab[word] = 1
    return vocab
def known_contractions(embed):
    known = []
    for contract in contraction_mapping:
        if contract in embed:
            known.append(contract)
    return known

def clean_contractions(text, mapping):
    specials = ["’", "‘", "´", "`"]
    for s in specials:
        text = text.replace(s, "'")
    text = ' '.join([mapping[t] if t in mapping else t for t in text.split(" ")])
    return text

def correct_spelling(x, dic):
    for word in dic.keys():
        x = x.replace(word, dic[word])
    return x

def unknown_punct(embed, punct):
    unknown = ''
    for p in punct:
        if p not in embed:
            unknown += p
            unknown += ' '
    return unknown

def clean_numbers(x):
    x = re.sub('[0-9]{5,}', '00000', x)
    x = re.sub('[0-9]{4}', '0000', x)
    x = re.sub('[0-9]{3}', '000', x)
    x = re.sub('[0-9]{2}', '00', x)
    return x

def clean_special_chars(text, punct, mapping):
    for p in mapping:
        text = text.replace(p, mapping[p])
    for p in punct:
        text = text.replace(p, f' {p} ')
    specials = {'\u200b': ' ', '…': ' ... ', '\ufeff': '', 'करना': '', 'है': ''}  # other special characters to handle last
    for s in specials:
        text = text.replace(s, specials[s])
    return text

def add_lower(embedding, vocab):
    count = 0
    for word in vocab:
        if word in embedding and word.lower() not in embedding:
            embedding[word.lower()] = embedding[word]
            count += 1
    print(f"Added {count} words to embedding")
puncts = [',', '.', '"', ':', ')', '(', '-', '!', '?', '|', ';', "'", '$', '&', '/', '[', ']', '>', '%', '=', '#', '*', '+', '\\', '•', '~', '@', '£',
'·', '_', '{', '}', '©', '^', '®', '`', '<', '→', '°', '€', '™', '›', '♥', '←', '×', '§', '″', '′', 'Â', '█', '½', 'à', '…',
'“', '★', '”', '–', '●', 'â', '►', '−', '¢', '²', '¬', '░', '¶', '↑', '±', '¿', '▾', '═', '¦', '║', '―', '¥', '▓', '—', '‹', '─',
'▒', ':', '¼', '⊕', '▼', '▪', '†', '■', '’', '▀', '¨', '▄', '♫', '☆', 'é', '¯', '♦', '¤', '▲', 'è', '¸', '¾', 'Ã', '⋅', '‘', '∞',
'∙', ')', '↓', '、', '│', '(', '»', ',', '♪', '╩', '╚', '³', '・', '╦', '╣', '╔', '╗', '▬', '❤', 'ï', 'Ø', '¹', '≤', '‡', '√', ]
mispell_dict = {"ain't": "is not", "aren't": "are not","can't": "cannot", "'cause": "because", "could've": "could have", "couldn't": "could not", "didn't": "did not", "doesn't": "does not", "don't": "do not", "hadn't": "had not", "hasn't": "has not", "haven't": "have not", "he'd": "he would","he'll": "he will", "he's": "he is", "how'd": "how did", "how'd'y": "how do you", "how'll": "how will", "how's": "how is", "I'd": "I would", "I'd've": "I would have", "I'll": "I will", "I'll've": "I will have","I'm": "I am", "I've": "I have", "i'd": "i would", "i'd've": "i would have", "i'll": "i will", "i'll've": "i will have","i'm": "i am", "i've": "i have", "isn't": "is not", "it'd": "it would", "it'd've": "it would have", "it'll": "it will", "it'll've": "it will have","it's": "it is", "let's": "let us", "ma'am": "madam", "mayn't": "may not", "might've": "might have","mightn't": "might not","mightn't've": "might not have", "must've": "must have", "mustn't": "must not", "mustn't've": "must not have", "needn't": "need not", "needn't've": "need not have","o'clock": "of the clock", "oughtn't": "ought not", "oughtn't've": "ought not have", "shan't": "shall not", "sha'n't": "shall not", "shan't've": "shall not have", "she'd": "she would", "she'd've": "she would have", "she'll": "she will", "she'll've": "she will have", "she's": "she is", "should've": "should have", "shouldn't": "should not", "shouldn't've": "should not have", "so've": "so have","so's": "so as", "this's": "this is","that'd": "that would", "that'd've": "that would have", "that's": "that is", "there'd": "there would", "there'd've": "there would have", "there's": "there is", "here's": "here is","they'd": "they would", "they'd've": "they would have", "they'll": "they will", "they'll've": "they will have", "they're": "they are", "they've": "they have", "to've": "to have", "wasn't": "was not", "we'd": "we would", "we'd've": "we would have", "we'll": "we will", "we'll've": "we will have", "we're": "we are", "we've": "we have", "weren't": "were not", "what'll": "what will", "what'll've": "what will have", "what're": "what are", "what's": "what is", "what've": "what have", "when's": "when is", "when've": "when have", "where'd": "where did", "where's": "where is", "where've": "where have", "who'll": "who will", "who'll've": "who will have", "who's": "who is", "who've": "who have", "why's": "why is", "why've": "why have", "will've": "will have", "won't": "will not", "won't've": "will not have", "would've": "would have", "wouldn't": "would not", "wouldn't've": "would not have", "y'all": "you all", "y'all'd": "you all would","y'all'd've": "you all would have","y'all're": "you all are","y'all've": "you all have","you'd": "you would", "you'd've": "you would have", "you'll": "you will", "you'll've": "you will have", "you're": "you are", "you've": "you have", 'colour': 'color', 'centre': 'center', 'favourite': 'favorite', 'travelling': 'traveling', 'counselling': 'counseling', 'theatre': 'theater', 'cancelled': 'canceled', 'labour': 'labor', 'organisation': 'organization', 'wwii': 'world war 2', 'citicise': 'criticize', 'youtu ': 'youtube ', 'Qoura': 'Quora', 'sallary': 'salary', 'Whta': 'What', 'narcisist': 'narcissist', 'howdo': 'how do', 'whatare': 'what are', 'howcan': 'how can', 'howmuch': 'how much', 'howmany': 'how many', 'whydo': 'why do', 'doI': 'do I', 'theBest': 'the best', 'howdoes': 'how does', 'mastrubation': 'masturbation', 'mastrubate': 'masturbate', "mastrubating": 'masturbating', 'pennis': 'penis', 'Etherium': 'Ethereum', 'narcissit': 'narcissist', 'bigdata': 'big data', '2k17': '2017', '2k18': '2018', 'qouta': 'quota', 'exboyfriend': 'ex boyfriend', 'airhostess': 'air hostess', "whst": 'what', 'watsapp': 'whatsapp', 'demonitisation': 'demonetization', 'demonitization': 'demonetization', 'demonetisation': 'demonetization'}
def clean_text(x):
    x = str(x)
    for punct in puncts:
        x = x.replace(punct, f' {punct} ')
    return x

def _get_mispell(mispell_dict):
    mispell_re = re.compile('(%s)' % '|'.join(mispell_dict.keys()))
    return mispell_dict, mispell_re

mispellings, mispellings_re = _get_mispell(mispell_dict)

def replace_typical_misspell(text):
    def replace(match):
        return mispellings[match.group(0)]
    return mispellings_re.sub(replace, text)
```
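As a quick sanity check of the misspelling replacement above, the same regex construction can be exercised on a tiny stand-in dictionary (entries copied from ``mispell_dict``; ``re.escape`` is added here for safety and was not in the original):

```python
import re

# Tiny stand-in for the large mispell_dict above
mini_dict = {"won't": "will not", "Whta": "What", "qouta": "quota"}
mini_re = re.compile("(%s)" % "|".join(map(re.escape, mini_dict.keys())))

def replace_mini(text):
    # Substitute each matched key with its corrected form
    return mini_re.sub(lambda m: mini_dict[m.group(0)], text)

print(replace_mini("Whta is the qouta?"))  # -> "What is the quota?"
```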
Extra feature part taken from https://github.com/wongchunghang/toxic-comment-challenge-lstm/blob/master/toxic_comment_9872_model.ipynb
```
from sklearn.preprocessing import StandardScaler
def add_features(df):
    df['question_text'] = df['question_text'].progress_apply(lambda x: str(x))
    df['total_length'] = df['question_text'].progress_apply(len)
    df['capitals'] = df['question_text'].progress_apply(lambda comment: sum(1 for c in comment if c.isupper()))
    df['caps_vs_length'] = df.progress_apply(lambda row: float(row['capitals']) / float(row['total_length']),
                                             axis=1)
    df['num_words'] = df.question_text.str.count(r'\S+')
    df['num_unique_words'] = df['question_text'].progress_apply(lambda comment: len(set(w for w in comment.split())))
    df['words_vs_unique'] = df['num_unique_words'] / df['num_words']
    return df
def load_and_prec():
    train_df = pd.read_csv("../input/train.csv")
    test_df = pd.read_csv("../input/test.csv")
    print("Train shape : ", train_df.shape)
    print("Test shape : ", test_df.shape)

    # lower
    train_df["question_text"] = train_df["question_text"].apply(lambda x: x.lower())
    test_df["question_text"] = test_df["question_text"].apply(lambda x: x.lower())

    # Clean the text
    train_df["question_text"] = train_df["question_text"].progress_apply(lambda x: clean_text(x))
    test_df["question_text"] = test_df["question_text"].apply(lambda x: clean_text(x))

    # Clean numbers
    train_df["question_text"] = train_df["question_text"].progress_apply(lambda x: clean_numbers(x))
    test_df["question_text"] = test_df["question_text"].apply(lambda x: clean_numbers(x))

    # Clean misspellings
    train_df["question_text"] = train_df["question_text"].progress_apply(lambda x: replace_typical_misspell(x))
    test_df["question_text"] = test_df["question_text"].apply(lambda x: replace_typical_misspell(x))

    ## fill up the missing values
    train_X = train_df["question_text"].fillna("_##_").values
    test_X = test_df["question_text"].fillna("_##_").values

    ###################### Add Features ###############################
    # https://github.com/wongchunghang/toxic-comment-challenge-lstm/blob/master/toxic_comment_9872_model.ipynb
    train = add_features(train_df)
    test = add_features(test_df)
    features = train[['caps_vs_length', 'words_vs_unique']].fillna(0)
    test_features = test[['caps_vs_length', 'words_vs_unique']].fillna(0)
    ss = StandardScaler()
    ss.fit(np.vstack((features, test_features)))
    features = ss.transform(features)
    test_features = ss.transform(test_features)
    ###########################################################################

    ## Tokenize the sentences
    tokenizer = Tokenizer(num_words=max_features)
    tokenizer.fit_on_texts(list(train_X))
    train_X = tokenizer.texts_to_sequences(train_X)
    test_X = tokenizer.texts_to_sequences(test_X)

    ## Pad the sentences
    train_X = pad_sequences(train_X, maxlen=maxlen)
    test_X = pad_sequences(test_X, maxlen=maxlen)

    ## Get the target values
    train_y = train_df['target'].values

    # # Splitting to training and a final test set
    # train_X, x_test_f, train_y, y_test_f = train_test_split(list(zip(train_X, features)), train_y, test_size=0.2, random_state=SEED)
    # train_X, features = zip(*train_X)
    # x_test_f, features_t = zip(*x_test_f)

    # shuffling the data
    np.random.seed(SEED)
    trn_idx = np.random.permutation(len(train_X))
    train_X = train_X[trn_idx]
    train_y = train_y[trn_idx]
    features = features[trn_idx]

    return train_X, test_X, train_y, features, test_features, tokenizer.word_index
    # return train_X, test_X, train_y, x_test_f, y_test_f, features, test_features, features_t, tokenizer.word_index
    # return train_X, test_X, train_y, tokenizer.word_index

# x_train, x_test, y_train, word_index = load_and_prec()
x_train, x_test, y_train, features, test_features, word_index = load_and_prec()
# x_train, x_test, y_train, x_test_f, y_test_f, features, test_features, features_t, word_index = load_and_prec()
np.save("x_train",x_train)
np.save("x_test",x_test)
np.save("y_train",y_train)
np.save("features",features)
np.save("test_features",test_features)
np.save("word_index.npy",word_index)
x_train = np.load("x_train.npy")
x_test = np.load("x_test.npy")
y_train = np.load("y_train.npy")
features = np.load("features.npy")
test_features = np.load("test_features.npy")
word_index = np.load("word_index.npy", allow_pickle=True).item()  # the saved object is a dict, so pickling must be allowed
features.shape
# missing entries in the embedding are set using np.random.normal so we have to seed here too
seed_everything()
glove_embeddings = load_glove(word_index)
paragram_embeddings = load_para(word_index)
embedding_matrix = np.mean([glove_embeddings, paragram_embeddings], axis=0)
# vocab = build_vocab(df['question_text'])
# add_lower(embedding_matrix, vocab)
del glove_embeddings, paragram_embeddings
gc.collect()
np.shape(embedding_matrix)
splits = list(StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=SEED).split(x_train, y_train))
splits[:3]
```
### Cyclic CLR
Code taken from https://www.kaggle.com/dannykliu/lstm-with-attention-clr-in-pytorch
```
# code inspired from: https://github.com/anandsaha/pytorch.cyclic.learning.rate/blob/master/cls.py
class CyclicLR(object):
    def __init__(self, optimizer, base_lr=1e-3, max_lr=6e-3,
                 step_size=2000, mode='triangular', gamma=1.,
                 scale_fn=None, scale_mode='cycle', last_batch_iteration=-1):
        if not isinstance(optimizer, Optimizer):
            raise TypeError('{} is not an Optimizer'.format(
                type(optimizer).__name__))
        self.optimizer = optimizer

        if isinstance(base_lr, list) or isinstance(base_lr, tuple):
            if len(base_lr) != len(optimizer.param_groups):
                raise ValueError("expected {} base_lr, got {}".format(
                    len(optimizer.param_groups), len(base_lr)))
            self.base_lrs = list(base_lr)
        else:
            self.base_lrs = [base_lr] * len(optimizer.param_groups)

        if isinstance(max_lr, list) or isinstance(max_lr, tuple):
            if len(max_lr) != len(optimizer.param_groups):
                raise ValueError("expected {} max_lr, got {}".format(
                    len(optimizer.param_groups), len(max_lr)))
            self.max_lrs = list(max_lr)
        else:
            self.max_lrs = [max_lr] * len(optimizer.param_groups)

        self.step_size = step_size

        if mode not in ['triangular', 'triangular2', 'exp_range'] \
                and scale_fn is None:
            raise ValueError('mode is invalid and scale_fn is None')

        self.mode = mode
        self.gamma = gamma

        if scale_fn is None:
            if self.mode == 'triangular':
                self.scale_fn = self._triangular_scale_fn
                self.scale_mode = 'cycle'
            elif self.mode == 'triangular2':
                self.scale_fn = self._triangular2_scale_fn
                self.scale_mode = 'cycle'
            elif self.mode == 'exp_range':
                self.scale_fn = self._exp_range_scale_fn
                self.scale_mode = 'iterations'
        else:
            self.scale_fn = scale_fn
            self.scale_mode = scale_mode

        self.batch_step(last_batch_iteration + 1)
        self.last_batch_iteration = last_batch_iteration

    def batch_step(self, batch_iteration=None):
        if batch_iteration is None:
            batch_iteration = self.last_batch_iteration + 1
        self.last_batch_iteration = batch_iteration
        for param_group, lr in zip(self.optimizer.param_groups, self.get_lr()):
            param_group['lr'] = lr

    def _triangular_scale_fn(self, x):
        return 1.

    def _triangular2_scale_fn(self, x):
        return 1 / (2. ** (x - 1))

    def _exp_range_scale_fn(self, x):
        return self.gamma ** x

    def get_lr(self):
        step_size = float(self.step_size)
        cycle = np.floor(1 + self.last_batch_iteration / (2 * step_size))
        x = np.abs(self.last_batch_iteration / step_size - 2 * cycle + 1)
        lrs = []
        param_lrs = zip(self.optimizer.param_groups, self.base_lrs, self.max_lrs)
        for param_group, base_lr, max_lr in param_lrs:
            base_height = (max_lr - base_lr) * np.maximum(0, (1 - x))
            if self.scale_mode == 'cycle':
                lr = base_lr + base_height * self.scale_fn(cycle)
            else:
                lr = base_lr + base_height * self.scale_fn(self.last_batch_iteration)
            lrs.append(lr)
        return lrs
import torch as t
import torch.nn as nn
import torch.nn.functional as F
embedding_dim = 300
embedding_path = '../save/embedding_matrix.npy'  # or False, to not use a pre-trained matrix
use_pretrained_embedding = True
hidden_size = 60
gru_len = hidden_size
Routings = 4 #5
Num_capsule = 5
Dim_capsule = 5#16
dropout_p = 0.25
rate_drop_dense = 0.28
LR = 0.001
T_epsilon = 1e-7
num_classes = 30
class Embed_Layer(nn.Module):
    def __init__(self, embedding_matrix=None, vocab_size=None, embedding_dim=300):
        super(Embed_Layer, self).__init__()
        self.encoder = nn.Embedding(vocab_size + 1, embedding_dim)
        if use_pretrained_embedding:
            # self.encoder.weight.data.copy_(t.from_numpy(np.load(embedding_path)))  # option 1: load an npy file saved with np.save
            self.encoder.weight.data.copy_(t.from_numpy(embedding_matrix))  # option 2: copy the in-memory matrix

    def forward(self, x, dropout_p=0.25):
        return nn.Dropout(p=dropout_p)(self.encoder(x))
class GRU_Layer(nn.Module):
    def __init__(self):
        super(GRU_Layer, self).__init__()
        self.gru = nn.GRU(input_size=300,
                          hidden_size=gru_len,
                          bidirectional=True)

    def init_weights(self):
        ih = (param.data for name, param in self.named_parameters() if 'weight_ih' in name)
        hh = (param.data for name, param in self.named_parameters() if 'weight_hh' in name)
        b = (param.data for name, param in self.named_parameters() if 'bias' in name)
        for k in ih:
            nn.init.xavier_uniform_(k)
        for k in hh:
            nn.init.orthogonal_(k)
        for k in b:
            nn.init.constant_(k, 0)

    def forward(self, x):
        return self.gru(x)
# core caps_layer with squash func
class Caps_Layer(nn.Module):
def __init__(self, input_dim_capsule=gru_len * 2, num_capsule=Num_capsule, dim_capsule=Dim_capsule, \
routings=Routings, kernel_size=(9, 1), share_weights=True,
activation='default', **kwargs):
super(Caps_Layer, self).__init__(**kwargs)
self.num_capsule = num_capsule
self.dim_capsule = dim_capsule
self.routings = routings
self.kernel_size = kernel_size
self.share_weights = share_weights
if activation == 'default':
self.activation = self.squash
else:
self.activation = nn.ReLU(inplace=True)
if self.share_weights:
self.W = nn.Parameter(
nn.init.xavier_normal_(t.empty(1, input_dim_capsule, self.num_capsule * self.dim_capsule)))
else:
self.W = nn.Parameter(
t.randn(BATCH_SIZE, input_dim_capsule, self.num_capsule * self.dim_capsule))  # 64, i.e. the batch size
def forward(self, x):
if self.share_weights:
u_hat_vecs = t.matmul(x, self.W)
else:
raise NotImplementedError('the non-shared-weights case is not implemented yet')
batch_size = x.size(0)
input_num_capsule = x.size(1)
u_hat_vecs = u_hat_vecs.view((batch_size, input_num_capsule,
self.num_capsule, self.dim_capsule))
u_hat_vecs = u_hat_vecs.permute(0, 2, 1, 3)  # to (batch_size, num_capsule, input_num_capsule, dim_capsule)
b = t.zeros_like(u_hat_vecs[:, :, :, 0]) # (batch_size,num_capsule,input_num_capsule)
for i in range(self.routings):
b = b.permute(0, 2, 1)
c = F.softmax(b, dim=2)
c = c.permute(0, 2, 1)
b = b.permute(0, 2, 1)
outputs = self.activation(t.einsum('bij,bijk->bik', (c, u_hat_vecs))) # batch matrix multiplication
# outputs shape (batch_size, num_capsule, dim_capsule)
if i < self.routings - 1:
b = t.einsum('bik,bijk->bij', (outputs, u_hat_vecs)) # batch matrix multiplication
return outputs # (batch_size, num_capsule, dim_capsule)
# text version of squash, slightly different from the original one
def squash(self, x, axis=-1):
s_squared_norm = (x ** 2).sum(axis, keepdim=True)
scale = t.sqrt(s_squared_norm + T_epsilon)
return x / scale
class Capsule_Main(nn.Module):
def __init__(self, embedding_matrix=None, vocab_size=None):
super(Capsule_Main, self).__init__()
self.embed_layer = Embed_Layer(embedding_matrix, vocab_size)
self.gru_layer = GRU_Layer()
self.gru_layer.init_weights()
self.caps_layer = Caps_Layer()
self.dense_layer = Dense_Layer()
def forward(self, content):
content1 = self.embed_layer(content)
content2, _ = self.gru_layer( content1)
content3 = self.caps_layer(content2)
output = self.dense_layer(content3)
return output
class Attention(nn.Module):
def __init__(self, feature_dim, step_dim, bias=True, **kwargs):
super(Attention, self).__init__(**kwargs)
self.supports_masking = True
self.bias = bias
self.feature_dim = feature_dim
self.step_dim = step_dim
self.features_dim = 0
weight = torch.zeros(feature_dim, 1)
nn.init.xavier_uniform_(weight)
self.weight = nn.Parameter(weight)
if bias:
self.b = nn.Parameter(torch.zeros(step_dim))
def forward(self, x, mask=None):
feature_dim = self.feature_dim
step_dim = self.step_dim
eij = torch.mm(
x.contiguous().view(-1, feature_dim),
self.weight
).view(-1, step_dim)
if self.bias:
eij = eij + self.b
eij = torch.tanh(eij)
a = torch.exp(eij)
if mask is not None:
a = a * mask
a = a / (torch.sum(a, 1, keepdim=True) + 1e-10)  # epsilon inside the denominator to avoid division by zero
weighted_input = x * torch.unsqueeze(a, -1)
return torch.sum(weighted_input, 1)
class NeuralNet(nn.Module):
def __init__(self):
super(NeuralNet, self).__init__()
fc_layer = 16
fc_layer1 = 16
self.embedding = nn.Embedding(max_features, embed_size)
self.embedding.weight = nn.Parameter(torch.tensor(embedding_matrix, dtype=torch.float32))
self.embedding.weight.requires_grad = False
self.embedding_dropout = nn.Dropout2d(0.1)
self.lstm = nn.LSTM(embed_size, hidden_size, bidirectional=True, batch_first=True)
self.gru = nn.GRU(hidden_size * 2, hidden_size, bidirectional=True, batch_first=True)
self.lstm2 = nn.LSTM(hidden_size * 2, hidden_size, bidirectional=True, batch_first=True)
self.lstm_attention = Attention(hidden_size * 2, maxlen)
self.gru_attention = Attention(hidden_size * 2, maxlen)
self.bn = nn.BatchNorm1d(16, momentum=0.5)
self.linear = nn.Linear(hidden_size*8+3, fc_layer1) #643:80 - 483:60 - 323:40
self.relu = nn.ReLU()
self.dropout = nn.Dropout(0.1)
self.fc = nn.Linear(fc_layer**2,fc_layer)
self.out = nn.Linear(fc_layer, 1)
self.lincaps = nn.Linear(Num_capsule * Dim_capsule, 1)
self.caps_layer = Caps_Layer()
def forward(self, x):
# Capsule(num_capsule=10, dim_capsule=10, routings=4, share_weights=True)(x)
h_embedding = self.embedding(x[0])
h_embedding = torch.squeeze(
self.embedding_dropout(torch.unsqueeze(h_embedding, 0)))
h_lstm, _ = self.lstm(h_embedding)
h_gru, _ = self.gru(h_lstm)
##Capsule Layer
content3 = self.caps_layer(h_gru)
content3 = self.dropout(content3)
batch_size = content3.size(0)
content3 = content3.view(batch_size, -1)
content3 = self.relu(self.lincaps(content3))
##Attention Layer
h_lstm_atten = self.lstm_attention(h_lstm)
h_gru_atten = self.gru_attention(h_gru)
# global average pooling
avg_pool = torch.mean(h_gru, 1)
# global max pooling
max_pool, _ = torch.max(h_gru, 1)
f = torch.tensor(x[1], dtype=torch.float).cuda()
#[512,160]
conc = torch.cat((h_lstm_atten, h_gru_atten,content3, avg_pool, max_pool,f), 1)
conc = self.relu(self.linear(conc))
conc = self.bn(conc)
conc = self.dropout(conc)
out = self.out(conc)
return out
```
### Training
The method for training is borrowed from https://www.kaggle.com/hengzheng/pytorch-starter
```
class MyDataset(Dataset):
def __init__(self,dataset):
self.dataset = dataset
def __getitem__(self, index):
data, target = self.dataset[index]
return data, target, index
def __len__(self):
return len(self.dataset)
def sigmoid(x):
return 1 / (1 + np.exp(-x))
# matrix for the out-of-fold predictions
train_preds = np.zeros((len(x_train)))
# matrix for the predictions on the test set
test_preds = np.zeros((len(df_test)))
# always call this before training for deterministic results
seed_everything()
# x_test_cuda_f = torch.tensor(x_test_f, dtype=torch.long).cuda()
# test_f = torch.utils.data.TensorDataset(x_test_cuda_f)
# test_loader_f = torch.utils.data.DataLoader(test_f, batch_size=batch_size, shuffle=False)
x_test_cuda = torch.tensor(x_test, dtype=torch.long).cuda()
test = torch.utils.data.TensorDataset(x_test_cuda)
test_loader = torch.utils.data.DataLoader(test, batch_size=batch_size, shuffle=False)
avg_losses_f = []
avg_val_losses_f = []
global_test_saver = []
for split_idx, (train_idx, valid_idx) in enumerate(splits):
# split data in train / validation according to the KFold indices
# also, convert them to a torch tensor and store them on the GPU (done with .cuda())
x_train = np.array(x_train)
y_train = np.array(y_train)
features = np.array(features)
x_train_fold = torch.tensor(x_train[train_idx.astype(int)], dtype=torch.long).cuda()
y_train_fold = torch.tensor(y_train[train_idx.astype(int), np.newaxis], dtype=torch.float32).cuda()
kfold_X_features = features[train_idx.astype(int)]
kfold_X_valid_features = features[valid_idx.astype(int)]
x_val_fold = torch.tensor(x_train[valid_idx.astype(int)], dtype=torch.long).cuda()
y_val_fold = torch.tensor(y_train[valid_idx.astype(int), np.newaxis], dtype=torch.float32).cuda()
# model = BiLSTM(lstm_layer=2,hidden_dim=40,dropout=DROPOUT).cuda()
model = NeuralNet()
# make sure everything in the model is running on the GPU
model.cuda()
# define binary cross entropy loss
# note that the model returns logit to take advantage of the log-sum-exp trick
# for numerical stability in the loss
loss_fn = torch.nn.BCEWithLogitsLoss(reduction='sum')
step_size = 300
base_lr, max_lr = 0.001, 0.003
optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()),
lr=max_lr)
################################################################################################
scheduler = CyclicLR(optimizer, base_lr=base_lr, max_lr=max_lr,
step_size=step_size, mode='exp_range',
gamma=0.99994)
###############################################################################################
train = torch.utils.data.TensorDataset(x_train_fold, y_train_fold)
valid = torch.utils.data.TensorDataset(x_val_fold, y_val_fold)
train = MyDataset(train)
valid = MyDataset(valid)
##No need to shuffle the data again here. Shuffling happens when splitting for kfolds.
train_loader = torch.utils.data.DataLoader(train, batch_size=batch_size, shuffle=True)
valid_loader = torch.utils.data.DataLoader(valid, batch_size=batch_size, shuffle=False)
print('Fold ',str(split_idx+1))
for epoch in range(n_epochs):
# set train mode of the model. This enables operations that are only applied during training, like dropout
start_time = time.time()
model.train()
avg_loss = 0.
for i, (x_batch, y_batch, index) in enumerate(train_loader):
# Forward pass: compute predicted y by passing x to the model.
################################################################################################
f = kfold_X_features[index]
y_pred = model([x_batch,f])
################################################################################################
################################################################################################
if scheduler:
scheduler.batch_step()
################################################################################################
# Compute and print loss.
loss = loss_fn(y_pred, y_batch)
# Before the backward pass, use the optimizer object to zero all of the
# gradients for the Tensors it will update (which are the learnable weights
# of the model)
optimizer.zero_grad()
# Backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# Calling the step function on an Optimizer makes an update to its parameters
optimizer.step()
avg_loss += loss.item() / len(train_loader)
# set evaluation mode of the model. This disables operations that are only applied during training, like dropout
model.eval()
# predict all the samples in y_val_fold batch per batch
valid_preds_fold = np.zeros((x_val_fold.size(0)))
test_preds_fold = np.zeros((len(df_test)))
avg_val_loss = 0.
for i, (x_batch, y_batch, index) in enumerate(valid_loader):
f = kfold_X_valid_features[index]
y_pred = model([x_batch,f]).detach()
avg_val_loss += loss_fn(y_pred, y_batch).item() / len(valid_loader)
valid_preds_fold[i * batch_size:(i+1) * batch_size] = sigmoid(y_pred.cpu().numpy())[:, 0]
elapsed_time = time.time() - start_time
print('Epoch {}/{} \t loss={:.4f} \t val_loss={:.4f} \t time={:.2f}s'.format(
epoch + 1, n_epochs, avg_loss, avg_val_loss, elapsed_time))
avg_losses_f.append(avg_loss)
avg_val_losses_f.append(avg_val_loss)
# predict all samples in the test set batch per batch
for i, (x_batch,) in enumerate(test_loader):
f = test_features[i * batch_size:(i+1) * batch_size]
y_pred = model([x_batch,f]).detach()
test_preds_fold[i * batch_size:(i+1) * batch_size] = sigmoid(y_pred.cpu().numpy())[:, 0]
train_preds[valid_idx] = valid_preds_fold
test_preds += test_preds_fold / len(splits)
global_test_saver.append(test_preds_fold)
print('All \t loss={:.4f} \t val_loss={:.4f} \t '.format(np.average(avg_losses_f),np.average(avg_val_losses_f)))
diff = np.zeros([n_splits,n_splits])
for ii in range(n_splits):
for jj in range(ii,n_splits):
diff[ii,jj] = int(np.sum(np.abs(global_test_saver[ii] - global_test_saver[jj])))
diff_sum = np.sum(diff)
print(diff)
print(diff_sum / (n_splits)/(n_splits - 1) * 2 /len(global_test_saver[0]))
for ii in range(n_splits):
for jj in range(ii,n_splits):
diff[ii,jj] = int(np.sum(np.power(global_test_saver[ii] - global_test_saver[jj],2)))
diff_sum = np.sum(diff)
print(diff)
print(diff_sum / (n_splits)/(n_splits - 1) * 2 /len(global_test_saver[0]))
# x_train, x_test_f, y_train, y_test_f
```
### Find the final Threshold
Borrowed from: https://www.kaggle.com/ziliwang/baseline-pytorch-bilstm
```
def bestThreshold(y_train, train_preds):
    tmp = [0, 0, 0]  # idx, cur, max
    delta = 0
    for tmp[0] in tqdm(np.arange(0.1, 0.501, 0.01)):
        tmp[1] = f1_score(y_train, np.array(train_preds) > tmp[0])
        if tmp[1] > tmp[2]:
            delta = tmp[0]
            tmp[2] = tmp[1]
    print('best threshold is {:.4f} with F1 score: {:.4f}'.format(delta, tmp[2]))
    return delta
delta = bestThreshold(y_train, train_preds)
submission = df_test[['qid']].copy()
submission['prediction'] = (test_preds > delta).astype(int)
submission.to_csv('submission.csv', index=False)
```
# Discrete stochastic Erlang SEIR model
Author: Lam Ha @lamhm
Date: 2018-10-03
## Calculate Discrete Erlang Probabilities
The following function is to calculate the discrete truncated Erlang probability, given $k$ and $\gamma$:
\begin{equation*}
p_i =
\frac{1}{C(n^{E})}
\Bigl(\sum_{j=0}^{k-1}
\frac{e^{-(i-1)\gamma} \times ((i-1)\gamma)^{j}} {j!}
-\sum_{j=0}^{k-1}
\frac{e^{-i\gamma} \times (i\gamma)^{j}} {j!}\Bigr),\quad\text{for $i=1,...,n^{E}$}.
\end{equation*}
where
\begin{equation*}
n^{E} = \operatorname{argmin}_n\Bigl(C(n) = 1 - \sum_{j=0}^{k-1}
\frac{e^{-n\gamma} \times (n\gamma)^{j}} {j!} > 0.99 \Bigr)
\end{equation*}
**N.B. The formula of $p_i$ here is slightly different from what is shown in the original paper because the latter (which is likely to be wrong) would lead to negative probabilities.**
```
#' @param k The shape parameter of the Erlang distribution.
#' @param gamma The rate parameter of the Erlang distribution.
#' @return A vector containing all p_i values, for i = 1 : n.
compute_erlang_discrete_prob <- function(k, gamma) {
    n_bin <- 0
    factorials <- 1  ## 0! = 1
    for (i in 1:k) {
        factorials[i + 1] <- factorials[i] * i  ## factorials[i + 1] = i!
    }
    one_sub_cumulative_probs <- NULL
    cumulative_prob <- 0
    while (cumulative_prob <= 0.99) {
        n_bin <- n_bin + 1
        one_sub_cumulative_probs[n_bin] <- 0
        for (j in 0:(k - 1)) {
            one_sub_cumulative_probs[n_bin] <-
                one_sub_cumulative_probs[n_bin] +
                (
                    exp(-n_bin * gamma)
                    * ((n_bin * gamma) ^ j)
                    / factorials[j + 1]  ## factorials[j + 1] = j!
                )
        }
        cumulative_prob <- 1 - one_sub_cumulative_probs[n_bin]
    }
    one_sub_cumulative_probs <- c(1, one_sub_cumulative_probs)
    density_prob <-
        head(one_sub_cumulative_probs, -1) - tail(one_sub_cumulative_probs, -1)
    density_prob <- density_prob / cumulative_prob
    return(density_prob)
}
```
The implementation above calculates the discrete probabilities $p_i$ based on the cumulative distribution function of the Erlang distribution:
\begin{equation*}
p_i = CDF_{Erlang}(x = i) - CDF_{Erlang}(x = i-1)
\end{equation*}
Meanwhile, the estimates of $p_i$ in the original paper seem to be based on the probability density function:
\begin{equation*}
p_i = PDF_{Erlang}(x = i)
\end{equation*}
While the two methods give slightly different estimates, they do not lead to any visible differences in the results of the subsequent simulations. This implementation uses the CDF function since it leads to faster runs.
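As a cross-check (not part of the original notebook), the same truncated, renormalised CDF-difference probabilities can be sketched in plain Python using only the standard library; the function names here are illustrative:

```python
import math

def erlang_cdf(x, k, gamma):
    # CDF of an Erlang(k, gamma) distribution at x:
    # 1 - sum_{j=0}^{k-1} e^{-x*gamma} (x*gamma)^j / j!
    return 1.0 - sum(math.exp(-x * gamma) * (x * gamma) ** j / math.factorial(j)
                     for j in range(k))

def discrete_erlang_probs(k, gamma):
    # n_E = smallest n with C(n) = CDF(n) > 0.99, as in the formula above
    n = 1
    while erlang_cdf(n, k, gamma) <= 0.99:
        n += 1
    # p_i = (CDF(i) - CDF(i-1)) / C(n_E), i.e. truncated and renormalised
    cdf = [erlang_cdf(i, k, gamma) for i in range(n + 1)]
    return [(cdf[i] - cdf[i - 1]) / cdf[n] for i in range(1, n + 1)]

probs = discrete_erlang_probs(k=5, gamma=1.0)
```

Because the head-minus-tail differences in the R function telescope to exactly these CDF differences, both versions return the same vector up to floating-point error.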
## Simulate the SEIR Dynamics
The next function is to simulate the SEIR (susceptible, exposed, infectious, recovered) dynamics of an epidemic, assuming that transmission is frequency-dependent, i.e.
\begin{equation*}
\beta = \beta_0 \frac{I(t)}{N}
\end{equation*}
where $N$ is the population size, $I(t)$ is the number of infectious people at time $t$, and $\beta_0$ is the base transmission rate.
This model does not consider births and deaths (i.e. $N$ is constant).
The times that individuals spend in the E and the I classes are assumed to follow Erlang distributions with given shapes ($k^E$, $k^I$) and rates ($\gamma^E$, $\gamma^I$).
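As an illustrative sketch (not the original R code), one discrete time step of this frequency-dependent transmission could look as follows in Python; `seir_step` is a hypothetical name, and `new_infectious` / `new_recovered` stand in for the counts produced by the Erlang sub-block bookkeeping in the R function below:

```python
import numpy as np

rng = np.random.default_rng(1)

def seir_step(S, E, I, R, beta0, new_infectious, new_recovered):
    # each susceptible is exposed with probability 1 - exp(-beta0 * I / N)
    N = S + E + I + R                      # constant population size
    p_exposure = 1.0 - np.exp(-beta0 * I / N)
    new_exposed = rng.binomial(S, p_exposure)
    return (S - new_exposed,
            E + new_exposed - new_infectious,
            I + new_infectious - new_recovered,
            R + new_recovered)

# one step from the paper-like initial state; the totals stay constant
state = seir_step(9999, 1, 0, 0, beta0=0.25, new_infectious=1, new_recovered=0)
```

Note that the four returned compartments always sum to $N$, which is the "no births and deaths" assumption made explicit.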
```
#' @param initial_state A vector that contains 4 numbers corresponding to the
#' initial values of the 4 classes: S, E, I, and R.
#' @param parameters A vector that contains 5 numbers corresponding to the
#' following parameters: the shape and the rate parameters
#' of the Erlang distribution that will be used to
#' calculate the transition rates between the E components
#' (i.e. k[E] and gamma[E]), the shape and the rate parameters
#' of the Erlang distribution that will be used to
#' calculate the transition rates between the I components
#' (i.e. k[I] and gamma[I]), and the base transmission rate
#' (i.e. beta).
#' @param max_time The length of the simulation.
#' @return A data frame containing the values of S, E, I, and R over time
#' (from 1 to max_time).
seir_simulation <- function(initial_state, parameters, max_time) {
names(initial_state) <- c("S", "E", "I", "R")
names(parameters) <- c( "erlang_shape_for_E", "erlang_rate_for_E",
"erlang_shape_for_I", "erlang_rate_for_I",
"base_transmission_rate" )
population_size <- sum(initial_state)
sim_data <- data.frame( time = c(1 : max_time),
S = NA, E = NA, I = NA, R = NA )
sim_data[1, 2:5] <- initial_state
## Initialise a matrix to store the states of the exposed sub-blocks over time.
exposed_block_adm_rates <- compute_erlang_discrete_prob(
k = parameters["erlang_shape_for_E"],
gamma = parameters["erlang_rate_for_E"]
)
n_exposed_blocks <- length(exposed_block_adm_rates)
exposed_blocks <- matrix( data = 0, nrow = max_time,
ncol = n_exposed_blocks )
exposed_blocks[1, n_exposed_blocks] <- sim_data$E[1]
## Initialise a matrix to store the states of the infectious sub-blocks over time.
infectious_block_adm_rates <- compute_erlang_discrete_prob(
k = parameters["erlang_shape_for_I"],
gamma = parameters["erlang_rate_for_I"]
)
n_infectious_blocks <- length(infectious_block_adm_rates)
infectious_blocks <- matrix( data = 0, nrow = max_time,
ncol = n_infectious_blocks )
infectious_blocks[1, n_infectious_blocks] <- sim_data$I[1]
## Run the simulation from time t = 2 to t = max_time
for (time in 2 : max_time) {
transmission_rate <-
parameters["base_transmission_rate"] * sim_data$I[time - 1] /
population_size
exposure_prob <- 1 - exp(-transmission_rate)
new_exposed <- rbinom(1, sim_data$S[time - 1], exposure_prob)
new_infectious <- exposed_blocks[time - 1, 1]
new_recovered <- infectious_blocks[time - 1, 1]
if (new_exposed > 0) {
exposed_blocks[time, ] <- t(
rmultinom(1, size = new_exposed,
prob = exposed_block_adm_rates)
)
}
exposed_blocks[time, ] <-
exposed_blocks[time, ] +
c( exposed_blocks[time - 1, 2 : n_exposed_blocks], 0 )
if (new_infectious > 0) {
infectious_blocks[time, ] <- t(
rmultinom(1, size = new_infectious,
prob = infectious_block_adm_rates)
)
}
infectious_blocks[time, ] <-
infectious_blocks[time, ] +
c( infectious_blocks[time - 1, 2 : n_infectious_blocks], 0 )
sim_data$S[time] <- sim_data$S[time - 1] - new_exposed
sim_data$E[time] <- sum(exposed_blocks[time, ])
sim_data$I[time] <- sum(infectious_blocks[time, ])
sim_data$R[time] <- sim_data$R[time - 1] + new_recovered
}
return(sim_data)
}
```
To run a simulation, simply call the `seir_simulation()` function defined above.
Below is an example simulation where $k^E = 5$, $\gamma^E = 1$, $k^I = 10$, $\gamma^I = 1$, and $\beta_0 = 0.25$ ($R_0 = \beta_0\frac{k^I}{\gamma^I} = 2.5$). The population size is $N = 10,000$. The simulation starts with 1 exposed case; everyone else belongs to the susceptible class. These settings are the same as those of simulation 11 in the original paper.
**N.B. Since this is a stochastic model, there is a chance that the outbreak does not occur, even with a high $R_0$.**
```
sim <- seir_simulation( initial_state = c(S = 9999, E = 1, I = 0, R = 0),
parameters = c(5, 1, 10, 1, 0.25),
max_time = 300 )
```
## Visualisation
```
library(ggplot2)
ggplot(sim, aes(time)) +
geom_line(aes(y = S, colour = "Susceptible"), lwd = 1) +
geom_line(aes(y = E, colour = "Exposed"), lwd = 1) +
geom_line(aes(y = I, colour = "Infectious"), lwd = 1) +
geom_line(aes(y = R, colour = "Recovered"), lwd = 1) +
xlab("Time") + ylab("Number of Individuals")
```
## Test Case
```
set.seed(12345)
test_sim <- seir_simulation( initial_state = c(S = 9999, E = 1, I = 0, R = 0),
parameters = c(5, 1, 10, 1, 0.25),
max_time = 100 )
test_result <- as.matrix( tail(test_sim, 3) )
correct_result <- matrix( c( 98, 7384, 794, 1015, 807,
99, 7184, 864, 1068, 884,
100, 6986, 920, 1144, 950), nrow = 3, byrow = T )
n_correct_cells <- sum(correct_result == test_result)
cat("\n--------------------\n")
if (n_correct_cells == 15) {
cat(" Test PASSED\n")
} else {
cat(" Test FAILED\n")
}
cat("--------------------\n\n")
```
```
import numpy as np
import random
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
def GenerateData(c1, r1, c2, r2, N):
X = []
Y = []
for i in range(N):
r_1 = np.random.uniform(0, r1)
theta = np.random.uniform(0, 2*np.pi)
x1 = c1 + r_1 * np.array([np.cos(theta), np.sin(theta)])
X.append(x1)
Y.append(1)
r_2 = np.random.uniform(r1, r2)
theta = np.random.uniform(0, 2*np.pi)
x2 = c2 + r_2 * np.array([np.cos(theta), np.sin(theta)])
X.append(x2)
Y.append(0)
X = np.array(X)
Y = np.array(Y)
Y = Y.reshape((len(Y),1))
return X, Y
X, Y = GenerateData(0, 1, 0, 2, 2000)
print(X.shape)
print(Y.shape)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42)
def PlotData(X, Y):
plt.scatter(X[:,0], X[:,1], c = Y)
PlotData(X_test, Y_test)
def TransformFeatures(X):
transformed_X = np.zeros((len(X), 6))
transformed_X[:,0] = X[:,0]
transformed_X[:,1] = X[:,1]
transformed_X[:,2] = X[:,0]**2
transformed_X[:,3] = X[:,1]**2
transformed_X[:,4] = X[:,0]*X[:,1]
transformed_X[:,5] = np.ones((len(X)))
return transformed_X
transformed_X_train = TransformFeatures(X_train)
def Probability(Z):
return 1.0/(1 + np.exp(-Z))
def ComputeLogit(X, W):
return np.dot(X, W)
def Forward(X, W):
transformed_X = TransformFeatures(X)
Z = ComputeLogit(transformed_X, W)
prediction = Probability(Z)
return prediction
W = np.random.rand(6,1)
def Visualize(X, Y, W):
h = 0.01
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
points = np.c_[xx.ravel(), yy.ravel()]
probs = Forward(points, W)  # Forward already applies TransformFeatures internally; also avoid shadowing Probability()
probs = probs.reshape(xx.shape)
plt.subplot(1, 2, 1)
plt.contourf(xx, yy, probs, alpha=1)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired, alpha=0.5)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
Visualize(X_train, Y_train,W)
def ComputeLoss(X, Y, W):
transformed_X = TransformFeatures(X)
Z = ComputeLogit(transformed_X, W)
prediction = Probability(Z)
loss = -np.sum(Y * np.log(prediction) + (1 - Y) * np.log(1 - prediction))/(Y.shape[0])
return loss
def ComputeGradient(X, prediction, Y):
transformed_X = TransformFeatures(X)
dW = np.average(transformed_X * (prediction - Y), axis=0)
dW = dW.reshape((dW.shape[0],1))
return dW
def SelectBatch(X, Y, indexes):
X_batch = []
Y_batch = []
for i in indexes:
X_batch.append(X[i])
Y_batch.append(Y[i])
return np.array(X_batch), np.array(Y_batch)
def Train(X, Y, W, num_epochs, batch_size, learning_rate = 0.01):
Loss = []
for epoch in range(num_epochs):
loss = ComputeLoss(X, Y, W)
Loss.append(loss)
indexes = np.random.choice(X.shape[0], batch_size)
X_batch, Y_batch = SelectBatch(X, Y, indexes)
prediction = Forward(X_batch, W)
dW = ComputeGradient(X_batch, prediction, Y_batch)
W = W - learning_rate * dW
return Loss, W
W = np.random.rand(6,1)
Visualize(X_train, Y_train ,W)
Loss, W = Train(X_train, Y_train, W, 1000, 1, 0.1)
def Accuracy(X, Y, W):
prediction = Forward(X, W)
prediction[prediction < 0.5] = 0
prediction[prediction >=0.5] = 1
overlap = np.zeros_like(prediction)
overlap[prediction == Y] = 1
accuracy = np.sum(overlap)/len(overlap)
return accuracy
train_acc = Accuracy(X_train, Y_train, W)
print("Train_acc = ", train_acc)
test_acc = Accuracy(X_test, Y_test, W)
print("Test_acc = ", test_acc)
Visualize(X, Y, W)
plt.plot(Loss)
plt.title('Loss')
plt.xlabel('EPOCH')
plt.ylabel('Loss')
```
# <div align="center">Random Forest Classification in Python</div>
---------------------------------------------------------------------
You can find me on GitHub:
> ###### [GitHub](https://github.com/lev1khachatryan)
<img src="asset/main.png" />
<a id="top"></a> <br>
## Notebook Content
1. [The random forests algorithm](#1)
2. [How does the algorithm work?](#2)
3. [Its advantages and disadvantages](#3)
4. [Finding important features](#4)
5. [Comparison between random forests and decision trees](#5)
6. [Building a classifier with scikit-learn](#6)
7. [Finding important features with scikit-learn](#7)
<a id="1"></a> <br>
# <div align="center">1. The Random Forests Algorithm</div>
---------------------------------------------------------------------
[go to top](#top)
Random forests is a supervised learning algorithm that can be used both for classification and regression, and it is flexible and easy to use. A forest is comprised of trees; generally, the more trees it has, the more robust the forest is. Random forests creates decision trees on randomly selected data samples, gets a prediction from each tree, and selects the best solution by means of voting. It also provides a pretty good indicator of feature importance.
Random forests has a variety of applications, such as recommendation engines, image classification and feature selection. It can be used to classify loyal loan applicants, identify fraudulent activity and predict diseases. It lies at the base of the Boruta algorithm, which selects important features in a dataset.
Let’s understand the algorithm in layman’s terms. Suppose you want to go on a trip and you would like to travel to a place which you will enjoy.
So what do you do to find a place that you will like? You can search online, read reviews on travel blogs and portals, or you can also ask your friends.
Let’s suppose you have decided to ask your friends, and talked with them about their past travel experience to various places. You will get some recommendations from every friend. Now you have to make a list of those recommended places. Then, you ask them to vote (or select one best place for the trip) from the list of recommended places you made. The place with the highest number of votes will be your final choice for the trip.
In the above decision process, there are two parts. First, asking your friends about their individual travel experience and getting one recommendation out of multiple places they have visited. This part is like using the decision tree algorithm. Here, each friend makes a selection of the places he or she has visited so far.
The second part, after collecting all the recommendations, is the voting procedure for selecting the best place in the list of recommendations. This whole process of getting recommendations from friends and voting on them to find the best place is known as the random forests algorithm.
It technically is an ensemble method (based on the divide-and-conquer approach) of decision trees generated on a randomly split dataset. This collection of decision tree classifiers is also known as the forest. The individual decision trees are generated using an attribute selection indicator such as information gain, gain ratio, and Gini index for each attribute. Each tree depends on an independent random sample. In a classification problem, each tree votes and the most popular class is chosen as the final result. In the case of regression, the average of all the tree outputs is considered as the final result. It is simpler and more powerful compared to the other non-linear classification algorithms.
<a id="2"></a> <br>
# <div align="center">2. How does the algorithm work?</div>
---------------------------------------------------------------------
[go to top](#top)
It works in four steps:
1) Select random samples from a given dataset.
2) Construct a decision tree for each sample and get a prediction result from each decision tree.
3) Perform a vote for each predicted result.
4) Select the prediction result with the most votes as the final prediction.
<img src="asset/1.png" />
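The four steps can be sketched end-to-end with scikit-learn's `DecisionTreeClassifier` as the base learner; this is a toy illustration on synthetic data, not the tutorial's own implementation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
rng = np.random.default_rng(0)

# Steps 1-2: draw a bootstrap sample and fit one decision tree per sample
trees = []
for _ in range(25):
    idx = rng.integers(0, len(X), len(X))   # step 1: random sample with replacement
    trees.append(DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]))  # step 2

# Steps 3-4: every tree votes; the majority class is the forest's prediction
votes = np.stack([tree.predict(X) for tree in trees])    # shape (n_trees, n_samples)
forest_pred = (votes.mean(axis=0) >= 0.5).astype(int)    # majority vote (binary labels)
accuracy = (forest_pred == y).mean()
```

For regression, the last step would average the tree outputs instead of taking a majority vote.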
<a id="3"></a> <br>
# <div align="center">3. Its advantages and disadvantages</div>
---------------------------------------------------------------------
[go to top](#top)
### Advantages:
* Random forests is considered as a highly accurate and robust method because of the number of decision trees participating in the process.
* It is less prone to overfitting than a single decision tree. The main reason is that it averages many predictions, which reduces the variance of the model.
* The algorithm can be used in both classification and regression problems.
* Random forests can also handle missing values. There are two ways to handle these: using median values to replace continuous variables, and computing the proximity-weighted average of missing values.
* You can get the relative feature importance, which helps in selecting the most contributing features for the classifier.
### Disadvantages:
* Random forests is slow in generating predictions because it has multiple decision trees. Whenever it makes a prediction, all the trees in the forest have to make a prediction for the same given input and then perform voting on it. This whole process is time-consuming.
* The model is difficult to interpret compared to a decision tree, where you can easily make a decision by following the path in the tree.
<a id="4"></a> <br>
# <div align="center">4. Finding important features</div>
---------------------------------------------------------------------
[go to top](#top)
Random forests also offers a good feature selection indicator. Scikit-learn provides an extra variable with the model, which shows the relative importance or contribution of each feature in the prediction. It automatically computes the relevance score of each feature in the training phase. Then it scales the relevance down so that the sum of all scores is 1.
This score will help you choose the most important features and drop the least important ones for model building.
Random forest uses ***gini importance*** or mean decrease in impurity (***MDI***) to calculate the importance of each feature. Gini importance is also known as the total decrease in node impurity: it measures how much the splits on a variable reduce impurity, summed over all trees in the forest. The larger the decrease, the more significant the variable is. Here, the mean decrease is a significant parameter for variable selection, and the Gini index can describe the overall explanatory power of the variables.
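As a minimal sketch (synthetic data, illustrative parameters), the scaled MDI scores described above are exposed through scikit-learn's `feature_importances_` attribute:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# toy data: with shuffle=False, the 2 informative features are columns 0 and 1
X, y = make_classification(n_samples=300, n_features=6, n_informative=2,
                           n_redundant=0, shuffle=False, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = clf.feature_importances_   # MDI scores, scaled to sum to 1
```

On data like this, the two informative columns should receive most of the total importance, which is exactly the signal used to drop weak features before refitting.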
<a id="5"></a> <br>
# <div align="center">5. Random Forests vs Decision Trees</div>
---------------------------------------------------------------------
[go to top](#top)
* Random forests is a set of multiple decision trees.
* Deep decision trees may suffer from overfitting, but random forests prevents overfitting by creating trees on random subsets.
* Decision trees are computationally faster.
* Random forests is difficult to interpret, while a decision tree is easily interpretable and can be converted to rules.
<a id="6"></a> <br>
# <div align="center">6. Building a Classifier using Scikit-learn</div>
---------------------------------------------------------------------
[go to top](#top)
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from sklearn.ensemble import RandomForestClassifier # Import Random Forest Classifier
from sklearn.model_selection import train_test_split # Import train_test_split function
from sklearn import metrics #Import scikit-learn metrics module for accuracy calculation
from sklearn.metrics import auc, \
confusion_matrix, \
classification_report, \
roc_curve, \
roc_auc_score, \
precision_recall_curve, \
average_precision_score, \
accuracy_score, \
balanced_accuracy_score, \
precision_score, \
recall_score
def roc_curve_plot(fpr, tpr):
'''
Plot ROC curve
Parameters:
fpr: float
tpr: float
Returns:
plot: ROC curve graph
'''
x = np.linspace(0,1,100)
plt.figure(figsize = (10,6))
plt.plot(fpr, tpr)
plt.plot(x,x,".", markersize = 1.6)
plt.title("ROC Curve")
plt.xlabel("FPR")
plt.ylabel("TPR")
plt.show()
users = pd.read_csv('input/All_Users.csv')
KPIs = pd.read_csv('input/KPIs_2&3.csv')
Activities = pd.merge(users, KPIs)
Activities.fillna(0, inplace =True)
Activities['Learn'] = Activities.L + Activities.UL
Activities['Social_1'] = Activities.UC + Activities.UP + Activities.DP
Activities['Social_2'] = Activities.CP + Activities.P + Activities.OP
Checkins = pd.read_csv('input/Checkins_4,5&6.csv')
retained_activities = pd.read_csv('input/KPIs_4,5&6.csv')
Retention = pd.merge(pd.merge(users, Checkins, how = 'left'), retained_activities)
Retention.fillna(0, inplace =True)
Retention['Learn'] = Retention.L + Retention.UL
Retention['Social_1'] = Retention.UC + Retention.UP + Retention.DP
Retention['Social_2'] = Retention.CP + Retention.P + Retention.OP
Retention['Total'] = Retention.Learn + Retention.Social_1 + Retention.Social_2
Retention['y'] = np.where((Retention.NofCheckins > 0) & (Retention.Total >= 3) & (Retention.Learn >= 0) & (Retention.Social_1 >= 0), 1 , 0)
# columns to use
X_col = ['UC', 'UP', 'DP', 'CP', 'L', 'UL', 'P', 'OP', 'F']
# X_col = ['Learn', 'Social_1', 'Social_2']
y_col = 'y'
X = Activities[X_col]
y = Retention[y_col]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1) # 70% training and 30% test
from sklearn.model_selection import GridSearchCV
rfc = RandomForestClassifier(random_state=42)
param_grid = {
'n_estimators': [3, 4, 5, 200, 500],
'max_features': ['auto', 'sqrt', 'log2'],
'max_depth' : [4,5,6,7,8],
'criterion' :['gini', 'entropy']
}
CV_rfc = GridSearchCV(estimator=rfc, param_grid=param_grid, cv= 5)
CV_rfc.fit(X_train, y_train)
CV_rfc.best_params_
clf = RandomForestClassifier(random_state=42, max_features='auto', n_estimators= 500, max_depth=8, criterion='entropy')
clf.fit(X_train,y_train)
y_pred=clf.predict(X_test)
pred = clf.predict(X_test)
pred_prob = clf.predict_proba(X_test)
'''
Obtain confusion_matrix
'''
tn, fp, fn, tp = confusion_matrix(y_true = y_test, y_pred = pred, labels = np.array([0,1])).ravel()
print(tn, fp, fn, tp)
'''
Calculate auc(Area Under the Curve) for positive class
'''
fpr, tpr, thresholds = roc_curve(y_true = y_test, y_score = pred_prob[:,1], pos_label = 1)
auc_random_forest = auc(fpr,tpr)
print(auc_random_forest)
roc_curve_plot(fpr=fpr, tpr=tpr)
'''
Calculation of metrics using standard functions
'''
print('Accuracy: {}'.format(accuracy_score(y_test,pred)))
print('Balanced accuracy: {}'.format(balanced_accuracy_score(y_test, pred)))
print('Precision: {}'.format(precision_score(y_test, pred)))
print('Recall: {}'.format(recall_score(y_test, pred)))
```
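As a sanity check on the standard metric functions used above, precision and recall can also be derived directly from the confusion-matrix counts. The counts below are made up for illustration:

```python
# hypothetical confusion-matrix counts (tn, fp, fn, tp),
# in the order returned by confusion_matrix(...).ravel() above
tn, fp, fn, tp = 50, 10, 5, 35

accuracy = (tp + tn) / (tp + tn + fp + fn)   # fraction of correct predictions
precision = tp / (tp + fp)                   # how many predicted positives are real
recall = tp / (tp + fn)                      # how many real positives were found

print(accuracy, precision, recall)  # → 0.85 0.7777777777777778 0.875
```

Comparing these hand-computed values against `accuracy_score`, `precision_score` and `recall_score` is a quick way to confirm which class is treated as positive.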
<a id="7"></a> <br>
# <div align="center">7. Finding Important Features in Scikit-learn</div>
---------------------------------------------------------------------
[go to top](#top)
Here, you are finding important features or selecting features in the dataset. In scikit-learn, you can perform this task in the following steps:
1) First, you need to create a random forests model.
2) Second, use the feature importance variable to see feature importance scores.
3) Third, visualize these scores using the seaborn library.
```
feature_imp = pd.Series(clf.feature_importances_, index=X_train.columns.values).sort_values(ascending=False)
feature_imp
```
You can also visualize the feature importances; visualizations are easier to understand and interpret.
For visualization, you can use a combination of matplotlib and seaborn. Because seaborn is built on top of matplotlib, it offers a number of customized themes and additional plot types, while matplotlib remains the underlying plotting engine; both are useful for good visualizations.
```
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# Creating a bar plot
sns.barplot(x=feature_imp, y=feature_imp.index)
# Add labels to your graph
plt.xlabel('Feature Importance Score')
plt.ylabel('Features')
plt.title("Visualizing Important Features")
plt.show();
```
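Impurity-based scores like `feature_importances_` can be biased toward high-cardinality features. Permutation importance is a common complementary check; the sketch below uses a synthetic dataset rather than the course data, so the numbers are purely illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=6, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
# mean drop in test-set accuracy when each feature is shuffled
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f'feature {i}: {result.importances_mean[i]:.3f}')
```

Features whose shuffling barely changes the test score contribute little to the model's predictions, regardless of their impurity-based ranking.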
---
# 01. GAN with MNIST
```
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.datasets as dsets
import torchvision.transforms as transforms
import numpy as np
import os
import matplotlib.pyplot as plt
%matplotlib inline
# check for GPU availability
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
```
## Load MNIST Data
```
mnist_train = dsets.MNIST(root='~/.pytorch/MNIST_data/',
                          train=True,
                          transform=transforms.Compose([
                              transforms.ToTensor(),
                              transforms.Normalize((0.5,), (0.5,))  # MNIST is single-channel
                          ]),
                          download=True)
batch_size = 100
train_loader = torch.utils.data.DataLoader(dataset=mnist_train,
batch_size=batch_size,
shuffle=True)
def imshow(img, title):
    npimg = img.numpy()
    fig = plt.figure(figsize=(5, 15))
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.title(title)
    plt.show()
images, labels = next(iter(train_loader))
imshow(torchvision.utils.make_grid(images, normalize=True), "Train Image")
```
## Define Model
```
image_size = 28*28
latent_size = 100
class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        self.discriminator = nn.Sequential(
            nn.Linear(image_size, 512),
            nn.LeakyReLU(0.2),
            nn.Dropout(0.5),
            nn.Linear(512, 512),
            nn.LeakyReLU(0.2),
            nn.Dropout(0.5),
            nn.Linear(512, 1),
            nn.Sigmoid()
        )

    def forward(self, x):
        x = x.view(-1, image_size)
        out = self.discriminator(x)
        return out
class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.generator = nn.Sequential(
            nn.Linear(latent_size, 512),
            nn.BatchNorm1d(512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.BatchNorm1d(512),
            nn.ReLU(),
            nn.Linear(512, image_size),
            nn.Tanh()
        )

    def forward(self, x):
        out = self.generator(x)
        out = out.view(-1, 1, 28, 28)
        return out
D = Discriminator().to(device)
G = Generator().to(device)
print(D(images.to(device)).shape)
z = torch.randn(batch_size, latent_size).to(device)
print(G(z).shape)
```
## Training
```
def gan_loss(x, target_is_real):
    loss = nn.BCELoss()  # binary cross-entropy: real vs. fake
    if target_is_real:
        target_tensor = torch.ones(batch_size, 1)
    else:
        target_tensor = torch.zeros(batch_size, 1)
    return loss(x, target_tensor.to(device))
G_optimizer = optim.Adam(G.parameters(), lr=0.0001)
D_optimizer = optim.Adam(D.parameters(), lr=0.0001)
num_epochs = 500
for epoch in range(num_epochs):
    total_batch = len(mnist_train) // batch_size
    for i, (images, labels) in enumerate(train_loader):
        images = images.to(device)
        # ---------------------------------------
        # Train D
        prob_real = D(images)
        D_loss_real = gan_loss(prob_real, True)
        z = torch.randn(batch_size, latent_size).to(device)
        fake_images = G(z)
        prob_fake = D(fake_images)
        D_loss_fake = gan_loss(prob_fake, False)
        D_loss = (D_loss_real + D_loss_fake) / 2
        D.zero_grad()
        D_loss.backward()
        D_optimizer.step()
        # ---------------------------------------
        # Train G
        z = torch.randn(batch_size, latent_size).to(device)
        fake_images = G(z)
        prob_fake = D(fake_images)
        G_loss = gan_loss(prob_fake, True)
        G.zero_grad()
        G_loss.backward()
        G_optimizer.step()
        # ---------------------------------------
        if (i + 1) == total_batch:
            print('Epoch [%d/%d], Iter [%d/%d], D_Loss: %.4f, G_Loss: %.4f'
                  % (epoch + 1, num_epochs, i + 1, total_batch, D_loss.item(), G_loss.item()))
```
## Generate Image
```
z = torch.randn(batch_size, latent_size).to(device)
images = G(z)
imshow(torchvision.utils.make_grid(images.data.cpu(), normalize=True), "Test Image")
```
---
Data Analysis in R
====================
---
Part 3 - Data analysis: a map of deaths and infections
--------------------
Since the pandemic was caused by the COVID-19 virus, which spread across the entire world, it makes sense to display the data on a world map.
<br><br>
A key element was coloring the map so that the shade represents the number of deaths in a given country, with darker colors indicating a worse situation. Additionally, hovering the cursor over a marker within a particular country brings up a box with that country's statistics.
<br><br>
Given the static nature of the service used to visualize the data, it was decided to save the generated file in *.html* format. As noted in Part 1, a static page is more convenient for presenting data that does not change. The number of deaths should not change retroactively; of course, newly reported fatalities may be added to the previous totals, in which case the script simply has to be run again.
<br><br>
This approach is typical for Big Data: large data sets are processed and visualized so that results can be obtained quickly and presented to a wider audience. When the analysis becomes most needed, the static service will be a good basis for developing a dynamic version that also reflects interactions and changes.
# 1. Preparation
As in the script that prepares the data (described in Part 2), we first need to free memory and load the libraries. Again, a line is added to install any libraries that are not yet present in the given environment. In a production environment this situation should not occur, since the libraries are prepared in advance.
```
#######################################################
#
# INIT - INITIAL SECTION
#
#######################################################
#clear terminal
cat("\f")
#Remove all variables without CONST_IMPORT_DATASET
CONST_IMPORT_DATASET = c('df_COVID19Base', 'df_Country_DICT')
ls_VariablesToRm <- NULL
ls_VariablesToRm <- ls()
#if the variable is missing, fail and exit immediately
ls_VariablesToRm <-ls_VariablesToRm[! ls_VariablesToRm %in% CONST_IMPORT_DATASET]
rm(list=ls_VariablesToRm)
#clear memory
gc()
```
# 2. Installing and loading the libraries
Before running the code, the following libraries should be installed and loaded:
1. [utils](https://www.rdocumentation.org/packages/utils/versions/3.6.2)
2. [rgdal](https://www.rdocumentation.org/packages/rgdal/versions/1.5-16)
3. [leaflet](https://rstudio.github.io/leaflet/)
5. [lubridate](https://lubridate.tidyverse.org/)
6. [htmltools](https://cran.r-project.org/web/packages/htmltools/htmltools.pdf)
7. [zoo](https://cran.r-project.org/web/packages/zoo/zoo.pdf)
8. [mapview](https://cran.r-project.org/web/packages/mapview/mapview.pdf)
9. [webshot](https://cran.r-project.org/web/packages/webshot/webshot.pdf)
10. [htmlwidgets](https://cran.r-project.org/web/packages/htmlwidgets/htmlwidgets.pdf)
11. [install_phantomJS](https://www.rdocumentation.org/packages/webshot/versions/0.5.2/topics/install_phantomjs)
In a production environment the libraries will already be installed and loaded, so this step can be skipped. For the purposes of this course, however, it is necessary in order to run the code on a local machine. Note that the conditional instruction `if(!require(library_name))` installs a library only if it has not already been installed in the given programming environment. The exception is PhantomJS, a tool that makes it possible to create '*.html*' files together with their interactive elements.
```
###############################
# INIT- LIBRARY SECTION
#these libraries are necessary
if(!require("utils")) install.packages("utils")
library(utils)
if(!require("rgdal")) install.packages("rgdal")
library(rgdal)
if(!require("leaflet")) install.packages("leaflet")
library(leaflet)
if(!require("lubridate")) install.packages("lubridate")
library(lubridate)
if(!require("htmltools")) install.packages("htmltools")
library(htmltools)
if(!require("zoo")) install.packages("zoo")
library(zoo)
if(!require("mapview")) install.packages("mapview")
library(mapview)
if(!require("webshot")) install.packages("webshot")
library(webshot)
if(!require("htmlwidgets")) install.packages("htmlwidgets")
library(htmlwidgets)
install_phantomjs()
```
# 3. Functions
The next step is to tidy the code by wrapping repeated fragments in functions. The code below defines three functions:
1. tag.map.title $\;\;$- sets the title of the static HTML page
2. trySilent $\;\;\;\;\;\;\;$- error handling; more information on exceptions: https://www.rdocumentation.org/packages/rJava/versions/0.9-13/topics/Exceptions
3. geomSeq $\;\;\;\;\;$- generates the geometric sequences used to build the color-palette ranges for the map
```
###############################
# INIT- FUNCTION AND SECTION
#function add title of map
tag.map.title <- tags$style(HTML("
.leaflet-control.map-title {
transform: translate(-50%,20%);
position: fixed !important;
left: 50%;
text-align: center;
padding-left: 10px;
padding-right: 10px;
background: rgba(255,255,255,0.75);
font-weight: bold;
font-size: 16px;
}
"))
#function error handling
trySilent <- function(code, silent =TRUE) {
tryCatch(code, error = function(c) {
msg <- conditionMessage(c)
if (!silent) message(c)
invisible(structure(msg, class = "try-error"))
})
}
#create geometric sequences - used to build the color palette ranges
geomSeq <- function(start,ratio,begin,end){
begin=begin-1
end=end-1
start*ratio**(begin:end)
}
```
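For example, with start 4 and ratio 2 the helper produces the doubling sequence used for the per-capita palette bounds (a quick check, assuming `geomSeq` above has been defined):

```
geomSeq(4, 2, 1, 10)
# 4 8 16 32 64 128 256 512 1024 2048
```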
# 4. Constants and parameterization
The definitions section declares the constants, as well as variables that organize the use of parameters in the code and make it easier to follow how the code executes.
<br>
The file paths are examples; they should always be adjusted to the actual location of the files in your system.
<br>
This part consists of:
1. Variables concerning the data for analysis and the values for the maps (shapes): `CONST_EXPORT_DATASET`, `df_COVID19Base_MAP`;
2. Variables pointing to the location of the shapes, the folder where the generated maps will be saved, and the license information.
3. Constants defining the color palettes used for the maps.
```
###############################
# INIT- CONST AND GLOBAL VAR SECTION
#datasets to export / not to clear
CONST_EXPORT_DATASET <- CONST_IMPORT_DATASET
#use data with new dataSet for Covid
df_COVID19Base_MAP <- df_COVID19Base
#files with country shape (map)
CONST_COUTRYSHAPE_FILE <- "../../shape/Countries_WGS84"
#path to save map file
CONST_save_folder_name <- "../../map/"
#key for identity country
CONST_COUNTRY_KEY <- "country"
CONST_COUNTRYID_KEY <- "countryID"
#date for first and last (map)
CONST_START_DATE_MAP <- '2020-04-24'
CONST_STOP_DATE_MAP <- ymd((today("GMT") -1)) # default yesterday
#CONSTANT part of map
#copyright
CONST_strCopyrightHtml <- "MIT License, Copyright (c) [2020] [Roman Chadzymski]"
CONST_copyrightHtml <- tags$div(HTML(CONST_strCopyrightHtml))
CONST_strCopyrightPng <- "<font size='1'>MIT License,(c) [Roman Chadzymski]</font>"
CONST_copyrightPng <- tags$div(HTML(CONST_strCopyrightPng))
#add a color palette for cumulative deaths - "CumDeath"
colors_CumDeath <- c('green','#238b45','#a1d99b','#fec69d','#fd9c55', '#fd8d3c', '#fc4e2a', '#e31a1c', '#e31a1c', '#800026', 'black')
value_gt_CumDeath <- c(-1,3,6,12,24,48,96,2200, 6500,13000, 26000)
value_lo_CumDeath <- c(10,20,40,80,250,750,2200,6500,13000,26000,100000)
#add a color palette for cumulative deaths - "DeathPre1m70"
colors_CumDeathPre1m70 <- c('green','#238b45','#a1d99b','#fec69d','#fd9c55', '#fd8d3c', '#fc4e2a', '#e31a1c', '#e31a1c', '#800026', 'black')
value_gt_CumDeathPre1m70 <- c(-1,geomSeq(4, 2 , 1, 10))
value_lo_CumDeathPre1m70 <- c(4,geomSeq(4, 2 , 2, 11))
value_lo_CumDeathPre1m70[length(value_lo_CumDeathPre1m70)] <- value_lo_CumDeathPre1m70[length(value_lo_CumDeathPre1m70)]*2
#add a color palette for cumulative deaths - "DeathPre1mA"
colors_CumDeathPre1mA <- c('green','#238b45','#a1d99b','#fec69d','#fd9c55', '#fd8d3c', '#fc4e2a', '#e31a1c', '#e31a1c', '#800026', 'black')
value_gt_CumDeathPre1mA <- c(-1,geomSeq(1, 2 , 1, 10))
value_lo_CumDeathPre1mA <- c(4,geomSeq(1, 2 , 2, 11))
value_lo_CumDeathPre1mA[length(value_lo_CumDeathPre1mA)] <- value_lo_CumDeathPre1mA[length(value_lo_CumDeathPre1mA)]*2
#add dataFrame color palette
CONST_df_collor_palette <- data.frame(colors_CumDeath, value_gt_CumDeath, value_lo_CumDeath,
colors_CumDeathPre1m70, value_gt_CumDeathPre1m70, value_lo_CumDeathPre1m70,
colors_CumDeathPre1mA, value_gt_CumDeathPre1mA, value_lo_CumDeathPre1mA)
#removes vector use to create dataFrame
rm( list= c("colors_CumDeath", "value_gt_CumDeath", "value_lo_CumDeath",
"colors_CumDeathPre1m70", "value_gt_CumDeathPre1m70", "value_lo_CumDeathPre1m70",
"colors_CumDeathPre1mA", "value_gt_CumDeathPre1mA", "value_lo_CumDeathPre1mA"))
#add titles for the maps
CONST_prefix_file_for_use <- c(1,2,3)
CONST_prefix_file <- c("CumDeath","CumDeathPre1m70", "CumDeathPre1mA")
CONST_title_map <- c("summary mortality -- MAP OF COVID-19 CORONAVIRUS PANDEMIC" ,"MAP OF COVID-19 CORONAVIRUS PANDEMIC (mortality per capita over 70)","mortality per capita -- MAP OF COVID-19 CORONAVIRUS PANDEMIC")
```
# 5. Preparing the maps
In this section the maps and shapes are loaded from files in the previously defined location `CONST_COUTRYSHAPE_FILE`. A `*key` is also added to tag the country shapes.
```
#######################################################
#
# MAIN - EXTRACT (IMPORT) DATA SECTION
#
#######################################################
###############################
# MAIN - Load shape of map
#load map layer with shape
Country.map <- readOGR(dsn = CONST_COUTRYSHAPE_FILE , layer = "Countries_WGS84", stringsAsFactors = FALSE)
setnames(Country.map@data, "CNTRY_NAME", CONST_COUNTRY_KEY)
#join (enrichment) add shapes
df_country_OK <- merge(x=Country.map@data, y=df_Country_DICT, by = CONST_COUNTRY_KEY, all.x=TRUE)
setnames(df_country_OK, "countryID", "countryID_01")
df_country_OK <- merge(x=df_country_OK, y=df_Country_DICT, by.x = CONST_COUNTRY_KEY, by.y = 'countryCommonName' , all.x=TRUE)
setnames(df_country_OK, "countryID", "countryID_02")
df_country_OK <- merge(x=df_country_OK, y=df_Country_DICT, by.x = CONST_COUNTRY_KEY, by.y = 'countryOfficialName' , all.x=TRUE)
setnames(df_country_OK, "countryID", "countryID_03")
df_country_OK <- df_country_OK[, c("country", 'OBJECTID', 'countryID_01','countryID_02','countryID_03')]
df_country_OK <- unite(df_country_OK, 'countryID' , c('countryID_01','countryID_02','countryID_03'))
df_country_OK$countryID <- gsub('NA_', '', df_country_OK$countryID)
df_country_OK$countryID <- substr(df_country_OK$countryID,1,3)
#countryID exceptions
df_country_OK[df_country_OK$OBJECTID==21,]$countryID <- 'BHS' #Bahamas BHS
df_country_OK[df_country_OK$OBJECTID==26,]$countryID <- 'MMR' #Myanma MMR
df_country_OK[df_country_OK$OBJECTID==28,]$countryID <- 'BLR' #Belarus BLR
df_country_OK[df_country_OK$OBJECTID==80,]$countryID <- 'GMB' #Gambia GMB
#countryID must not be null
df_country_OKF <- df_country_OK[df_country_OK$countryID!='NA',]
#rows where countryID is null - ERROR SET
df_country_NA <- df_country_OK[df_country_OK$countryID=='NA',]
#join (enrichment) add shapes to countryID
Country.map <- merge(x=Country.map, y=df_country_OKF[,c('OBJECTID',"countryID")], by = 'OBJECTID', all=FALSE)
```
Each country is filled with a color corresponding to its number of deaths and infections: the darker the shade, the worse the epidemic situation in that country. The file paths are given as examples, so they should be adapted to the actual locations of the files in your system.
```
###############################
# MAIN - add columns (enrichment) with the color palette to the base COVID data frame
#add a color palette based on the cumulative number of deaths
paletteColor <- CONST_df_collor_palette[,c(1:3)]
df_COVID19Base_MAP[,'colors_CumDeath'] <- "green"
for (i in 1:11) {
trySilent(df_COVID19Base_MAP[(!is.na(df_COVID19Base_MAP$cumsumDeaths)) & df_COVID19Base_MAP$cumsumDeaths>paletteColor[i,2] & df_COVID19Base_MAP$cumsumDeaths<=paletteColor[i,3],'colors_CumDeath'] <- as.character(paletteColor[i,1]))
}
#add a color palette based on the cumulative number of deaths per DeathsPer1m70
paletteColor <- CONST_df_collor_palette[,c(4:6)]
df_COVID19Base_MAP[,'colors_CumDeathPre1m70'] <- "green"
for (i in 1:11) {
trySilent(df_COVID19Base_MAP[(!is.na(df_COVID19Base_MAP$cumsumDeathsPer1m70)) & df_COVID19Base_MAP$cumsumDeathsPer1m70>paletteColor[i,2] & df_COVID19Base_MAP$cumsumDeathsPer1m70<=paletteColor[i,3],'colors_CumDeathPre1m70'] <- as.character(paletteColor[i,1]))
}
#add a color palette based on the cumulative number of deaths per DeathsPer1mA
paletteColor <- CONST_df_collor_palette[,c(7:9)]
df_COVID19Base_MAP[,'colors_CumDeathPre1mA'] <- "green"
for (i in 1:11) {
trySilent(df_COVID19Base_MAP[(!is.na(df_COVID19Base_MAP$cumsumDeathsPer1mA)) & df_COVID19Base_MAP$cumsumDeathsPer1mA>paletteColor[i,2] & df_COVID19Base_MAP$cumsumDeathsPer1mA<=paletteColor[i,3],]$colors_CumDeathPre1mA <- as.character(paletteColor[i,1]))
}
```
The final step is to build the map by placing the colored shapes on it. In addition, a marker is placed at the center of each shape with a table that is shown on hover; the table displays the statistics for that territory.
```
###############################
# MAIN - Generate the maps
for (dateRaport in seq(from = as.Date(CONST_START_DATE_MAP), to = as.Date(CONST_STOP_DATE_MAP), by = 1)) {
print(format(as.Date(dateRaport), "%d/%m/%Y"))
#filter data for one map (on date)
data_M_01 <- df_COVID19Base_MAP[df_COVID19Base_MAP$dateRep==format(as.Date(dateRaport), "%d/%m/%Y"),]
data_M_01$ID <- seq.int(nrow(data_M_01))
#join (enrichment) add shapes
data_M_01 <- merge(x=Country.map, y=data_M_01, by = CONST_COUNTRYID_KEY, all=FALSE)
#create label for map
PopulationOver70 = data_M_01@data$popData2018_F_70_74 + data_M_01@data$popData2018_M_70_74 + data_M_01@data$popData2018_F_75_79 + data_M_01@data$popData2018_M_75_79+ data_M_01@data$popData2018_F_80 + data_M_01@data$popData2018_M_80
data_M_01@data$label <- paste("<p><font size='3'> <b>",data_M_01@data$countryOfficialName,"</b></font></p>",
"<p>deaths:",data_M_01@data$cumsumDeaths ,"</p>",
"<p>daysAfter10Deaths:",data_M_01@data$index10Deaths ,"</p>",
"<p>weeksAfter10Deaths:",data_M_01@data$index10WeekDeaths,"</p>",
"<p>-----------------------------------------</p>",
"<p>cases:",data_M_01@data$cumsumCases,"</p>",
"<p>daysAfter100Cases:",data_M_01@data$index100Cases,"</p>",
"<p>weeksAfter100Cases:",data_M_01@data$index100WeekCases,"</p>",
"<p>-----------------------------------------</p>",
"<p>population:",ceiling(data_M_01@data$popData2018/(10^6)),"mln</p>",
"<p>populationOver70:",ceiling(PopulationOver70/(10^6)),"mln</p>"
)
```
The finished map is then converted into a ready-to-use HTML file.
```
# uses data_M_01 from the enclosing loop; args: vector of shape colors, map title, file-name prefix
generateMap <- function(colorsShape, titleMap, save_prefix_name) {
map_html <-leaflet(data = data_M_01) %>%
addTiles() %>%
addControl(titleMap, position = "topleft", className="map-title") %>%
addControl(CONST_copyrightHtml, position = "bottomleft") %>%
addProviderTiles(providers$Stamen.TonerLite) %>%
setView(lat = 52, lng=20, zoom = 3) %>%
addPolygons(fillColor = colorsShape,
fillOpacity = 0.8,
color = "#BDBDC3",
weight = 1) %>%
addCircleMarkers(lng = data_M_01@data$lng,
lat = data_M_01@data$lat,
color = "white",
radius = 0,
label = lapply(data_M_01@data$label,HTML),
group = 'general')%>%
addCircleMarkers(lng = data_M_01@data$lng,
lat = data_M_01@data$lat,
color = '#1919ff',
radius = ifelse(data_M_01@data$index10WeekDeaths<3, data_M_01@data$index10WeekDeaths*5,0),
label = lapply(data_M_01@data$label,HTML),
group = '1-2_weeksAfter10Death') %>%
addCircleMarkers(lng = data_M_01@data$lng,
lat = data_M_01@data$lat,
color = '#00004c',
radius = ifelse((data_M_01@data$index10WeekDeaths>2 & data_M_01@data$index10WeekDeaths<5), data_M_01@data$index10WeekDeaths*5,0),
label = lapply(data_M_01@data$label,HTML),
group = '3-4_weeksAfter10Death') %>%
addCircleMarkers(lng = data_M_01@data$lng,
lat = data_M_01@data$lat,
color = '#000000',
radius = ifelse((data_M_01@data$index10WeekDeaths>4 & data_M_01@data$index10WeekDeaths<7), data_M_01@data$index10WeekDeaths*5,0),
label = lapply(data_M_01@data$label,HTML),
group = '5-6_weeksAfter10Death') %>%
addCircleMarkers(lng = data_M_01@data$lng,
lat = data_M_01@data$lat,
color = '#b2b2ff',
radius = ifelse((data_M_01@data$index10WeekDeaths>6), data_M_01@data$index10WeekDeaths*5,0),
label = lapply(data_M_01@data$label,HTML),
group = '>=7_weeksAfter10Death') %>%
addCircleMarkers(lng = data_M_01@data$lng,
lat = data_M_01@data$lat,
color = 'red',
radius = ifelse(((data_M_01@data$avgDeaths10DWeek-data_M_01@data$avgDeaths10DWeekPrv)/data_M_01@data$avgDeaths10DWeek)<0,0,10),
label = lapply(data_M_01@data$label,HTML),
group = 'avgDeath-UP') %>%
addCircleMarkers(lng = data_M_01@data$lng,
lat = data_M_01@data$lat,
color = 'green',
radius = ifelse(((data_M_01@data$avgDeaths10DWeek-data_M_01@data$avgDeaths10DWeekPrv)/data_M_01@data$avgDeaths10DWeek)<0,10,0),
label = lapply(data_M_01@data$label,HTML),
group = 'avgDeath-DOWN') %>%
addCircleMarkers(lng = data_M_01@data$lng,
lat = data_M_01@data$lat,
color = 'yellow',
radius = ((data_M_01@data$popData2018_F_70_74 + data_M_01@data$popData2018_M_70_74 + data_M_01@data$popData2018_F_75_79 + data_M_01@data$popData2018_M_75_79+ data_M_01@data$popData2018_F_80 + data_M_01@data$popData2018_M_80)/(10^6)),
label = lapply(data_M_01@data$label,HTML),
group = 'popultionOver70mln') %>%
addLayersControl(baseGroups = c("(default)","1-2_weeksAfter10Death","3-4_weeksAfter10Death","5-6_weeksAfter10Death",">=7_weeksAfter10Death", "popultionOver70mln", "avgDeath-UP", "avgDeath-DOWN"),
overlayGroups = c("avgDeath-UP", "avgDeath-DOWN","popultionOver70mln"),
options = layersControlOptions(collapsed = FALSE, autoZIndex = TRUE)
)
```
Finally, the maps are saved to the location specified by the constant `CONST_save_folder_name`.
```
#save map as file
saveWidget(map_html, file = paste(CONST_save_folder_name,paste(save_prefix_name,paste(as.Date(dateRaport),"html", sep = "."), sep = "_")))
}
for ( x in CONST_prefix_file_for_use) {
#title for map
strTitle <- paste(CONST_title_map[x], as.Date(dateRaport), sep = ' -- ')
titleHtml <- tags$div(
tag.map.title, HTML(strTitle)
)
#prefix map name
save_prefix_name <- CONST_prefix_file[x]
#color of shape
name_colour_column <- paste('colors_',CONST_prefix_file[x], sep = '')
dataColorColumn <- data_M_01@data[,name_colour_column]
#generate map
generateMap(dataColorColumn, titleHtml, save_prefix_name)
}
}
```
# 6. Cleaning up
Finally, the memory should be cleared. Since the data have been saved as HTML, there is no need to write any further output files. The standard procedure is therefore applied: removing variables with `rm()` and freeing RAM with `gc()`.
```
#######################################################
#
# FINAL SECTION
#
#######################################################
#Remove all variables without CONST_EXPORT_DATASET
ls_VariablesToRm <- NULL
ls_VariablesToRm <- ls()
ls_VariablesToRm <-ls_VariablesToRm[! ls_VariablesToRm %in% CONST_EXPORT_DATASET]
rm(list=ls_VariablesToRm)
gc()
```
---
## Traffic on the I-94 Interstate highway.

source:<a href=https://en.wikipedia.org/wiki/Interstate_94> https://en.wikipedia.org/wiki/Interstate_94 </a>
## Introduction to Interstate 94 (I-94)
I-94 is an east–west Interstate Highway connecting the Great Lakes and northern Great Plains regions of the United States.
The goal of our analysis is to determine a few indicators of heavy traffic on I-94.
These indicators can be weather type, time of the day, time of the week, etc.
### INDEX <a id='0'></a>
<a href='#1'>1. The I-94 Traffic Dataset</a>
<a href='#2'>2. Analyzing Traffic Volume</a>
<a href='#3'>3. Traffic Volume: Day vs. Night</a>
<a href='#4'>4. Traffic Volume: Day vs. Night (II)</a>
<a href='#5'>5. Time Indicators</a>
<a href='#6'>6. Time Indicators (II)</a>
<a href='#7'>7. Time Indicators (III)</a>
<a href='#8'>8. Weather Indicators</a>
<a href='#9'>9. Weather Types</a>
<a href='#10'>10. Next steps:</a>
## <a id='1'>1. The I-94 Traffic Dataset</a>
```
#%%html
#<style>
#table {float:left}
#</style>
```
|Attribute Information: | |
|:---|:---|
| `holiday`| Categorical US National holidays plus regional holiday, Minnesota State Fair|
| `temp` | Numeric Average temp in kelvin|
|`rain_1h` | Numeric Amount in mm of rain that occurred in the hour|
|`snow_1h` |Numeric Amount in mm of snow that occurred in the hour |
| `clouds_all` |Numeric Percentage of cloud cover |
| `weather_main` |Categorical Short textual description of the current weather |
|`weather_description`| Categorical Longer textual description of the current weather|
|`date_time`| DateTime Hour of the data collected in local CST time|
|`traffic_volume` |Numeric Hourly I-94 ATR 301 reported westbound traffic volume|
```
import pandas as pd
traffic = pd.read_csv('Metro_Interstate_Traffic_Volume.csv')
traffic.tail()
traffic.info()
```
<a href='#0'> ← index </a>
## <a id='2'>2. Analyzing Traffic Volume</a>
Hourly Interstate 94 westbound traffic volume for MN DoT Station 301, approximately midway between Minneapolis and St Paul, MN.
Hourly weather features and holidays are included to see the impact on traffic volume.
This means that the results of our analysis will be about the westbound traffic in the proximity of that station. In other words, we should avoid generalizing our results for the entire I-94 highway.
```
import matplotlib.pyplot as plt
%matplotlib inline
traffic['traffic_volume'].describe()
```
### Do night and day influence the traffic volume?
- About 25% of the time, there were 1,193 cars or fewer passing the station each hour — **this probably occurs during the night**, or when a road is under construction
- About 25% of the time, the traffic volume was four times as much (4,933 cars or more)
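The quartile reading above follows directly from how `describe()` reports percentiles. A toy example with made-up hourly counts shows that the 25th percentile is the value at or below which about a quarter of the observations fall:

```python
import pandas as pd

# hypothetical hourly traffic counts, for illustration only
s = pd.Series([100, 800, 1200, 3000, 4500, 5000, 5500, 6000])
q1 = s.quantile(0.25)           # 25th percentile
share_below = (s <= q1).mean()  # fraction of hours at or below the first quartile

print(q1, share_below)  # → 1100.0 0.25
```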
<a href='#0'> ← index </a>
## <a id='3'>3. Traffic Volume: Day vs. Night</a>
This possibility of night and day influencing traffic volume gives our analysis an interesting direction:
- Comparing daytime data with nighttime data.
We will start by dividing the data set into two parts:
- **Daytime data**: hours from 7 a.m. to 7 p.m. (12 hours).
- **Nighttime data**: hours from 7 p.m. to 7 a.m. (12 hours).
Although this is not a perfect criterion for distinguishing between night and daytime hours, it is a good starting point.
```
traffic['date_time'] = pd.to_datetime(traffic['date_time']).copy() # text to datetime
traffic['date_time']
traffic['date_time'].dt.hour.unique()
```
I extract the hours with the `dt.hour` accessor and assign the resulting Series of hour values to the variable `alles_bool`:
```
alles_bool = traffic['date_time'].dt.hour
alles_bool
```
These are the values of the hours I will work with.
```
horas_alles = traffic['date_time'].dt.hour  # take the hour values directly (alles_bool is an integer Series, not a Boolean mask)
horas_alles.unique()
plt.hist(horas_alles)
plt.title('Raw Traffic Volume (whole data set)')
plt.xticks(ticks=[ 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
19, 20, 21, 22, 23, 0, 1, 2, 3, 4, 5, 6, 8, 7],
labels=[ '09:00','10:00','11:00','12:00','13:00','14:00',
'15:00','16:00','17:00','18:00','19:00','20:00',
'21:00','22:00','23:00','24:00','01:00','02:00',
'03:00','04:00','05:00','06:00','08:00','07:00'],
rotation=85)
plt.show()
```
A histogram shows the central tendency, the spread or dispersion, and the shape of a distribution.
Here we can see the distribution of traffic volume over a day; there are three time slots where vehicle movement is heavier:
- **Morning from 07:00 to 09:00 AM**
- **Noon from 14:00 to 16:00 PM**
- **Night from 22:00 to 01:00 AM**.
We continue with the idea of separating day and night.
## Day
We set the interval that corresponds to the hours of the day:
values greater than or equal to 7 and less than or equal to 19 give us the interval of daytime hours.
```
day_bool = (traffic['date_time'].dt.hour >=7) & (traffic['date_time'].dt.hour <=19)
horas_daytime = traffic.loc[day_bool,'date_time'].dt.hour
horas_daytime.unique()
horas_daytime
traffic_daytime = traffic[day_bool].copy()
traffic_daytime
```
## Night
We do the same operation with the hours that correspond to those of the night.
```
night_bool = (traffic['date_time'].dt.hour >=19) | (traffic['date_time'].dt.hour <=7)  # note: hours 7 and 19 also satisfy the daytime condition above
horas_nighttime = traffic.loc[night_bool,'date_time'].dt.hour
horas_nighttime.unique()
traffic_night = traffic[night_bool].copy()
traffic_night
```
<a href='#0'> ← index </a>
## <a id='4'>4. Traffic Volume: Day vs. Night (II)</a>
Now we're going to compare the distribution of traffic volume at night and during the day.
```
plt.figure(figsize = (15,4))
plt.subplot(1,2,1)
plt.hist(horas_daytime)
plt.xlabel('Day hours from 7 AM to 7 PM ')
plt.ylabel('Cars per hour')
plt.title('Distribution Traffic Volume: Day')
plt.xticks(ticks=[7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
           labels=['07:00','08:00','09:00','10:00','11:00','12:00',
                   '13:00','14:00','15:00','16:00','17:00','18:00','19:00'],
           rotation=85)
plt.subplot(1,2,2)
bins = 'auto'
plt.hist( horas_nighttime, bins)
plt.xlabel('Night hours from 7 PM to 7 AM')
plt.ylabel('Cars per hour')
plt.title('Distribution Traffic Volume: Night')
plt.xticks(ticks=[19, 20, 21, 22, 23, 0, 1, 2, 3, 4, 5, 6, 7],
           labels=['19:00','20:00','21:00','22:00','23:00','00:00','01:00','02:00',
                   '03:00','04:00','05:00','06:00','07:00'],
           rotation=85)
plt.show()
traffic_daytime.describe()
traffic_night.describe()
```
## Analyzed results
Looking in more detail at the time slots mentioned above, traffic begins to increase from 6 AM until 8 AM, drops to about half its volume until around 1 PM, rises again until 2 PM, falls once more, and then increases again to coincide with the end of the working day and the return home, which lasts until about 1 AM.
We also see that the volume of vehicles at night is much lower than during the day, although there are occasional moments when the return home causes the volume to increase significantly for a short period of time. The rest of the time the volume is much lower.
<a href='#0'> ← index </a>
## <a id='5'>5. Time Indicators</a>
Previously, we determined that the traffic at night is generally light.
Our goal is to find indicators of heavy traffic, so we decided to only focus on the daytime data moving forward.
One of the possible indicators of heavy traffic is time. There might be more people on the road in a certain month, on a certain day, or at a certain time of the day.
We're going to look at a few line plots showing how the traffic volume changed according to the following parameters:
#### - Month
#### - Day of the week
#### - Time of day
The fastest way to get the average traffic volume for each month is by using the `DataFrame.groupby()` method.
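As a minimal illustration of the `groupby` pattern (toy numbers, not the real dataset):

```python
import pandas as pd

toy = pd.DataFrame({
    'month': [1, 1, 2, 2, 2],
    'traffic_volume': [1000, 3000, 4000, 5000, 6000],
})

# Average traffic volume per month
by_month = toy.groupby('month')['traffic_volume'].mean()
print(by_month.loc[1], by_month.loc[2])  # 2000.0 5000.0
```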
## - Monthly
```
traffic_daytime['month_day'] = traffic_daytime['date_time'].dt.month
by_month_day = traffic_daytime.groupby('month_day').mean()
by_month_day['traffic_volume']
```
Generating a line plot to visualize how the traffic volume changed each month on average.
Analyze the line plot.
```
plt.plot(by_month_day['traffic_volume'])
plt.xticks(ticks=[1, 2, 3, 4, 5, 6,7,8,9,10,11,12],
labels=['January', 'February', 'March','April',
'May','June', 'July','August','September', 'October', 'November','December'],
rotation=30)
plt.xlabel('Month')
plt.ylabel('Cars per hour')
plt.title('Traffic Volume / Monthly')
plt.show()
```
Traffic volume seems to be concentrated in certain months of the year, excluding the coldest months and July, which seems to be the month most Americans choose for vacation.
<a href='#0'> ← index </a>
## <a id='6'>6. Time Indicators (II)</a>
In the previous screen, we generated a line plot showing how the traffic volume changed each month on average.
We'll now continue with building line plots for another time unit:
## Day of the week.
To get the traffic volume averages for each day of the week, we'll need to use the following code:
```
traffic_daytime['dayofweek'] = traffic_daytime['date_time'].dt.dayofweek
by_dayofweek = traffic_daytime.groupby('dayofweek').mean()
by_dayofweek['traffic_volume'] # 0 is Monday, 6 is Sunday
plt.plot(by_dayofweek['traffic_volume'])
plt.title('Traffic Volume / Day of week (day time) ')
plt.xticks(ticks=[0, 1, 2, 3, 4, 5, 6], labels=[ 'Monday', 'Tuesday',
'Wednesday', 'Thursday',
'Friday', 'Saturday','Sunday'],
rotation=30)
plt.show()
```
It seems that the road is busier on weekdays than on weekends.
<a href='#0'> ← index </a>
## <a id='7'>7. Time Indicators (III)</a>
We found that the traffic volume is significantly heavier on business days compared to the weekends.
We'll now generate a line plot for the time of day.
The weekends, however, will drag down the average values, so we're going to look at the averages separately.
To do that, we'll start by **splitting the data based on the day type**:
**- business day or weekend.**
Below, we show you how to split the dataset so you can focus on plotting the graphs. While your variable names may vary, the logic of the code should be the same.
```
# Splitting Data
traffic_daytime['hour_day'] = traffic_daytime['date_time'].dt.hour # Filtering days
bussiness_days = traffic_daytime.copy()[traffic_daytime['dayofweek'] <= 4] # 0-4: Monday-Friday
weekend = traffic_daytime.copy()[traffic_daytime['dayofweek'] >= 5] # 5-6: Saturday-Sunday
by_hour_business = bussiness_days.groupby('hour_day').mean()
by_hour_weekend = weekend.groupby('hour_day').mean()
print(by_hour_business['traffic_volume'])
print(by_hour_weekend['traffic_volume'])
plt.figure(figsize=(10, 12))
plt.subplot(3, 2, 1)
plt.title('by hour business traffic_volume (day time)')
plt.plot(by_hour_business['traffic_volume'])
plt.xticks(ticks=[7,8,9,10,11,12,13,14,15,16,17,18,19,],
labels=['07:00','08:00','09:00','10:00','11:00',
'12:00','13:00','14:00','15:00','16:00','17:00','18:00','19:00'],
rotation=85)
plt.subplot(3, 2, 2)
plt.title('by hour weekend traffic_volume (day time)')
plt.plot(by_hour_weekend['traffic_volume'])
plt.xticks(ticks=[7,8,9,10,11,12,13,14,15,16,17,18,19,],
labels=['07:00','08:00','09:00','10:00','11:00',
'12:00','13:00','14:00','15:00','16:00','17:00','18:00','19:00'],
rotation=85)
plt.show()
```
### Summary
- From what we have seen so far, road use appears to be directly related to weekday work schedules, and the resulting increase in the number of vehicles per hour may cause some kind of jam.
- Weekend use, however, is much more gradual: nothing related to work schedules seems to set the tempo of use, and the volume of vehicles is also lower than on weekdays.
<a href='#0'> ← index </a>
## <a id='8'>8. Weather Indicators</a>
```
# conversion from Kelvin to Celsius (temperatures in the dataset are in Kelvin)
def K_to_C(K):
return (K - 273.15)
traffic_daytime.corr()['traffic_volume'].sort_values(ascending = False)
plt.figure(figsize=(10, 15))
plt.subplot(3, 2, 1)
plt.title('traffic_volume vs temp (day time ºC)')
plt.scatter(traffic_daytime['traffic_volume'],K_to_C(traffic_daytime['temp']))
plt.xlabel('Car volume hour')
plt.ylabel('temp Celsius')
plt.subplot(3, 2, 2)
plt.title('by_hour_weekend vs clouds_all (day time)')
plt.scatter(traffic_daytime['traffic_volume'],traffic_daytime['clouds_all'])
plt.subplot(3, 2, 3)
plt.title('by_hour_weekend vs rain_1h (day time)')
plt.scatter(traffic_daytime['traffic_volume'],traffic_daytime['rain_1h'])
plt.xlabel('Car volume hour')
plt.subplot(3, 2, 4)
plt.title('by_hour_weekend vs snow_1h (day time)')
plt.scatter(traffic_daytime['traffic_volume'],traffic_daytime['snow_1h'])
plt.xlabel('Car volume hour')
plt.show()
```
The scatter charts do not suggest that any of these variables signals a reduction or increase in traffic due to variations in temperature, cloud cover, rain, or snow.
<a href='#0'> ← index </a>
## <a id='9'>9. Weather Types</a>
Previously, we examined the correlation between `traffic_volume` and the numerical weather columns. However, we didn't find any reliable indicator of heavy traffic.
To see if we can find more useful data, we'll look next at the categorical weather-related columns:
`weather_main ` and ` weather_description`
We're going to calculate the average traffic volume associated with each unique value in these two columns.
```
by_weather_main_day = traffic_daytime.groupby('weather_main').mean()
by_weather_description_day = traffic_daytime.groupby('weather_description').mean()
by_weather_main_day['traffic_volume'].plot.barh()
plt.title('Car volume vs different weather conditions (day)')
plt.xlabel('car volume')
plt.show()
plt.figure(figsize=(15,15))
A = by_weather_description_day['traffic_volume'].sort_values(ascending = False)
A.plot.barh()
plt.title('Importance of weather conditions on traffic (day)')
plt.xlabel('car volume')
plt.show()
```
Traffic volume exceeds 5,000 cars for the following weather descriptions:
- shower snow
- light rain and snow
- proximity thunderstorm with drizzle
- thunderstorm with light drizzle
- shower drizzle

Shower snow is the weather condition under which the highest number of vehicles per day crosses the freeway. Under any other condition, the volume of traffic on the freeway starts to drop.
<a href='#0'> ← index </a>
### Summary
### Time indicators for the day:
- The traffic is usually heavier during warm months (March–October) compared to cold months (November–February).
- The traffic is heavier on business days compared to the weekends.
- On business days, the rush hours are around 7 AM and 4 PM.
### Weather indicators:
- When shower snow, light rain and snow, or a proximity thunderstorm with drizzle hangs over the highway, the volume of cars tends towards 5,000 vehicles per hour.
- The weather event with the strongest effect on the freeway is a drizzle storm, which causes freeway usage to drop below 2,500 vehicles per hour.
<a href='#0'> ← index </a>
## <a id='10'>10. Beyond the day:</a>
- Use the nighttime data to look for heavy traffic indicators.
- Find more time and weather indicators.
```
traffic_night
traffic_night['month_night'] = traffic_night['date_time'].dt.month
traffic_night
traffic_night.info()
```
### Monthly night.
```
by_month_night = traffic_night.groupby('month_night').mean() # groups the numeric columns by the 12 months
by_month_night
by_month_night['traffic_volume']
plt.plot(by_month_night['traffic_volume'])
plt.xticks(ticks=[1, 2, 3, 4, 5, 6,7,8,9,10,11,12],
labels=['January', 'February', 'March','April',
'May','June', 'July','August','September', 'October', 'November','December'],
rotation=30)
plt.xlabel('Month')
plt.ylabel('Cars per hour')
plt.title('Traffic Volume / 12 Month (night)')
plt.show()
```
The volume of vehicles at night is lower and is concentrated mainly in the warmer months of the calendar, except for July, mid-March, and mid-September.
### Day of week
```
traffic_night['dayofweek'] = traffic_night['date_time'].dt.dayofweek
traffic_night['dayofweek']
by_nightofweek = traffic_night.groupby('dayofweek').mean()
by_nightofweek
by_nightofweek['traffic_volume'] # 0 is Monday, 6 is Sunday
plt.plot(by_nightofweek['traffic_volume'])
plt.title('Traffic Volume / Day of week (night time) ')
plt.xticks(ticks=[0, 1, 2, 3, 4, 5, 6], labels=[ 'Monday', 'Tuesday',
'Wednesday', 'Thursday',
'Friday', 'Saturday','Sunday'],
rotation=30)
plt.show()
```
We see how the volume increases during the week and drops again when the weekend arrives.
```
traffic_night
traffic_night['hour_night'] = traffic_night['date_time'].dt.hour
bussiness_night = traffic_night.copy()[traffic_night['dayofweek'] <= 4] # 4 == Friday
weekend_night = traffic_night.copy()[traffic_night['dayofweek'] >= 5] # 5 == Saturday
by_hour_business_night = bussiness_night.groupby('hour_night').mean()
by_hour_weekend_night = weekend_night.groupby('hour_night').mean()
print(by_hour_business_night['traffic_volume'])
print(by_hour_weekend_night['traffic_volume'])
plt.figure(figsize=(10, 12))
plt.subplot(3, 2, 1)
plt.title('by_hour_business traffic_volume (night)')
plt.plot(by_hour_business_night['traffic_volume'])
plt.xticks(ticks=[0, 1, 2, 3, 4, 5, 6, 7, 19, 20, 21, 22, 23],
           labels=['00:00','01:00','02:00','03:00','04:00','05:00','06:00','07:00','19:00','20:00','21:00','22:00','23:00'],
rotation=85)
plt.subplot(3, 2, 2)
plt.title('by_hour_weekend traffic_volume (night)')
plt.plot(by_hour_weekend_night['traffic_volume'])
plt.xticks(ticks=[0, 1, 2, 3, 4, 5, 6, 7, 19, 20, 21, 22, 23],
           labels=['00:00','01:00','02:00','03:00','04:00','05:00','06:00','07:00','19:00','20:00','21:00','22:00','23:00'],
           rotation=85)
plt.show()
plt.figure(figsize=(10, 15))
plt.subplot(3, 2, 1)
plt.title('traffic_volume vs temp (night ºC)')
plt.scatter(traffic_night['traffic_volume'],K_to_C(traffic_night['temp']))
plt.xlabel('Car volume hour')
plt.ylabel('temp Celsius')
plt.subplot(3, 2, 2)
plt.title('by_hour_weekend vs clouds_all (night)')
plt.scatter(traffic_night['traffic_volume'],traffic_night['clouds_all'])
plt.subplot(3, 2, 3)
plt.title('by_hour_weekend vs rain_1h (night)')
plt.scatter(traffic_night['traffic_volume'],traffic_night['rain_1h'])
plt.xlabel('Car volume hour')
plt.subplot(3, 2, 4)
plt.title('by_hour_weekend vs snow_1h (night)')
plt.scatter(traffic_night['traffic_volume'],traffic_night['snow_1h'])
plt.xlabel('Car volume hour')
plt.show()
```
As during the day, none of these variables appears to signal any reduction or increase in traffic due to variations in temperature, cloud cover, rain, or snow.
Again, to see if we can find more useful data, we'll next look at the categorical weather-related columns:
`weather_main` and `weather_description`
We're going to calculate the average traffic volume associated with each unique value in these two columns:
```
by_weather_main_night = traffic_night.groupby('weather_main').mean()
by_weather_description_night = traffic_night.groupby('weather_description').mean()
by_weather_main_night['traffic_volume'].plot.barh()
plt.title('Car volume vs different weather conditions (night)')
plt.xlabel('car volume')
plt.show()
plt.figure(figsize=(15,10))
A = by_weather_description_night['traffic_volume'].sort_values(ascending = False)
A.plot.barh()
plt.title('Importance of weather conditions on traffic (night)')
plt.xlabel('car volume')
plt.show()
by_weather_description_day.corr()['traffic_volume'].sort_values(ascending = False)
```
### Observation
We can affirm that there is a correlation between the volume of traffic and some of the conditions described in `weather_description` for the day.
These indicators show the importance (from bottom to top) that these meteorological phenomena have on the volume of traffic.
For example, a rainy day will have less relevance than, say, a specific day of the month.
```
by_weather_description_day['month_day'].sort_values(ascending = False)
by_weather_description_night.corr()['traffic_volume'].sort_values(ascending = False)
```
At night, on the other hand, what most influences the volume of traffic is the hour of the night, while the temperature causes the car volume to decrease.
```
top_badweather_day = by_weather_description_day['traffic_volume'].sort_values()
top_badweather_night = by_weather_description_night['traffic_volume'].sort_values()
df_day = pd.DataFrame(top_badweather_day) # already a Series, not a dictionary as in the eBay project
df_day.rename({'traffic_volume':'traffic_volume_day'}, axis=1, inplace=True)
df_night = pd.DataFrame(top_badweather_night)
df_night.rename({'traffic_volume':'traffic_volume_night'}, axis=1, inplace=True)
df = pd.concat([df_night,df_day],axis =1, sort=True )
df
```
Daytime and nighttime weather conditions, and how they affect traffic:
```
plt.figure(figsize=(15,15))
plt.subplot(3, 2, 1)
A = by_weather_description_day['traffic_volume'].sort_values()
A.plot.bar()
plt.ylabel('Car volume')
plt.xlabel('Day')
plt.subplot(3, 2, 2)
B = by_weather_description_night['traffic_volume'].sort_values()
B.plot.bar()
plt.ylabel('Car volume')
plt.xlabel('Night')
plt.show()
%%html
<style>
table {float:left}
</style>
```
## Conclusions:
| **Most Traffic** | **Peak months** | **Workday hours** | **Weekend hours** |
| :--- | :--- | :--- | :--- |
| **Day time** | Feb-May / Aug-Oct | 07:00-09:00 | 11:00-15:00 |
| **Night time** | May-June / Jul-Aug | 04:00-07:00 | 05:00-07:00 |
It's remarkable that light rain and snow at night is responsible for the minimal traffic on the highway, while the same meteorological circumstance during the day is second to last in importance when it comes to taking the car.
The months of the year, the days of the week, working days versus holidays, and weather phenomena are all factors that determine the use of the highway.
However, comparing day and night, the lack of visibility should not be ignored: the combination of the aforementioned atmospheric phenomena and nighttime dramatically reduces the volume of cars on the highway.
<a href='#0'> ← index </a>
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sn
usage = pd.read_csv('usage_data.csv')
usage.head()
usage.shape
usage.isna().sum()
duplicate = usage.duplicated()
duplicate.sum()
usage.drop_duplicates(inplace=True)
usage.shape
m_part_consumption = pd.read_csv('maintenance_part_consumption.csv')
m_part_consumption.head()
m_part_consumption.shape
m_part_consumption.isna().sum()
m_part_consumption.duplicated().sum()
m_part_consumption.drop_duplicates(inplace=True)
m_part_consumption.shape
m_part_consumption.describe(include='all')
m_part_consumption.query('Quantity<0')
m_part_consumption['Quantity'] = abs(m_part_consumption['Quantity'])
m_part_consumption.query('Quantity<0')
m_failure = pd.read_csv('maintenance_failure.csv')
m_failure.head()
m_failure.shape
m_failure.isna().sum()
m_failure.duplicated().sum()
```
We convert the Usage Data table into the following columns using the `groupby` function:
- `Age_initial` — the time when the asset first reported for maintenance service
- `Age_last_known` — the time when the asset last reported for maintenance service
- `Fleet_service_count` — the number of maintenance visits, found via the count of unique `Time` values
- `Distance_initial` — the distance traveled by the asset when it first reported for maintenance service
- `Distance_lastknown` — the distance traveled by the asset when it last reported for maintenance service

Using this data we will further investigate the total time spent and the total distance traveled by each asset between maintenance services.
```
usage.head(5)
usage_data_groups = usage.groupby('Asset', as_index=False).agg(Age_initial=('Time',min), Age_last_known=('Time',max),
Fleet_service_count=('Time','nunique'),
Distance_initial=('Use',min),
Distance_lastknown=('Use', max)
).round(2)
usage_data_groups
# After converting the Usage Data table to the new version. What is the Fleet_serviced_count for Asset “A000495”?
usage_data_groups.query("Asset =='A000495'")
m_part_consumption.drop(['Time'], axis=1, inplace=True)
m_part_consumption
```
Each asset can have multiple failure reasons, and each reason might need multiple parts in different quantities for the maintenance. We therefore pivot the table with `Asset` as the index, `Reason` as the columns, and `Quantity` and `Part` as the values. Use the following code to pivot.
```
maintenance_part_consumption_pivottable_part = pd.pivot_table(
m_part_consumption,
values=["Quantity", "Part"],
index=["Asset"],
columns=["Reason"],
aggfunc={"Quantity": np.sum, "Part": lambda x: len(x.unique())},
fill_value=0,
)
maintenance_part_consumption_pivottable_part
# After pivoting successfully, How many unique parts were used for the asset A006104 for the failure reason R565?
maintenance_part_consumption_pivottable_part.query("Asset =='A006104' ")
maintenance_part_consumption_pivottable_part.head(20)
# Maintenance failure data has few missing values.
# It will be difficult(and not efficient also) to impute the values as the failure bin contains only binary values.
# So let's drop it.
m_failure.dropna(subset=['failure_bin'] ,inplace=True)
m_failure
# Select top 3 failure reasons. You can also plot barplot to check visually.
plt.figure(figsize=(10,10))
sn.barplot(x='Reason',y='reason_of_failure', data=m_part_consumption.groupby('Reason')['Reason'].agg([('reason_of_failure', 'count')]).reset_index())
m_part_consumption['Reason'].value_counts()
# Total how much quantity of parts was required for the following reasons?
# .sum() totals the quantities; .count() would only count non-null entries
maintenance_part_consumption_pivottable_part.loc[:,'Quantity'][['R707','R446']].sum()
m_part_consumption.head()
# Select the top 2 reason which has highest unique parts.
m_part_consumption.groupby('Reason')['Part'].nunique().sort_values(ascending=False)
y = m_part_consumption.loc[(m_part_consumption['Reason']=='R064')]['Asset'].unique()
y
# How many unique parts were used for the asset A654809 for the reason R064?
maintenance_part_consumption_pivottable_part.loc['A654809']
# Which part is most frequently repaired? Enter the part number.
m_part_consumption[m_part_consumption['Quantity']!=0]['Part'].value_counts().reset_index()
# Sum the total quantity for all the parts and enter the one highest value.
m_part_consumption.groupby('Part')['Quantity'].sum().sort_values(ascending=False).reset_index()
usage_data_groups.head()
# Plot histogram of Fleet_serviced_count, and select the range where we see the highest service counts in that range.
plt.hist(usage_data_groups['Fleet_service_count'])
sn.distplot(usage_data_groups['Fleet_service_count'], bins=40)
```
Add the following columns to the table:
`Distance_by_fleet` — average distance traveled per service, derived by dividing `Distance_service` by `Fleet_service_count`
`Time_by_fleet` — average time in service per visit, derived by dividing `Time_in_service` by `Fleet_service_count`
```
usage_data_groups['Time_in_service'] = usage_data_groups['Age_last_known']-usage_data_groups['Age_initial']
usage_data_groups['Distance_service'] = usage_data_groups['Distance_lastknown']-usage_data_groups['Distance_initial']
usage_data_groups
usage_data_groups['Distance_by_fleet'] = usage_data_groups['Distance_service']/usage_data_groups['Fleet_service_count']
usage_data_groups['Time_by_fleet'] = usage_data_groups['Time_in_service']/usage_data_groups['Fleet_service_count']
usage_data_groups
# Draw distplot to check whether distribution of Time_in_service is Left or right skewed?
sn.distplot(usage_data_groups['Time_in_service'])
# Draw distplot of the Distance_service column. Do you see any issue with the data points?
sn.distplot(usage_data_groups['Distance_service'])
# Draw boxplot for 'Time_in_service', 'Distance_initial', 'Distance_service', 'Fleet_serviced_count' and select the plots which have outliers.
plt.boxplot(usage_data_groups['Time_in_service'])
plt.boxplot(usage_data_groups['Distance_initial'])
distence_service_box = plt.boxplot(usage_data_groups['Distance_service'])
plt.boxplot(usage_data_groups['Fleet_service_count'])
plt.figure(figsize=(22,10))
usage_data_groups_outliers = ['Time_in_service','Distance_initial','Distance_service','Fleet_service_count']
for i in enumerate(usage_data_groups_outliers):
plt.subplot(2,4,i[0]+1)
box = plt.boxplot(usage_data_groups[i[1]])
plt.title(i[1])
#Q1 quantile and Q3 quantile value can be found using below code
Q1, Q3 = [item.get_ydata()[0] for item in box['whiskers']]
IQR = Q3 - Q1
Lower_Fence = Q1 - (1.5 * IQR)
Upper_Fence = Q3 + (1.5 * IQR)
print('For {} -- Q1: {}, Q3: {}, IQR: {}, Lower_Fence: {}, Upper_Fence: {}'.format(i[1], Q1, Q3, IQR, Lower_Fence, Upper_Fence))
# for distance service Upper_Fence: 3492.0949999999975
# let's analyze Distance_service outliers
usage_data_groups.query('Distance_service >= 3492.0949999999975')
usage_data_groups['Distance_service'] = np.where(
usage_data_groups['Distance_service']> 3492.0949999999975,
3492.0949999999975, usage_data_groups['Distance_service']
)
plt.hist(usage_data_groups['Distance_service']) # the outliers are now capped
usage_data_groups['Distance_service'].max()
usage_data_groups.query("Asset == 'A067512'")
# Draw the barplot for “failure_bin”. Select the right option from below.
sn.barplot('failure_bin', 'count', data = m_failure.groupby('failure_bin')['failure_bin'].agg(['count']).reset_index())
```
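The quartiles and fences pulled from the boxplot whiskers above can equivalently be computed with `numpy` quantiles — a sketch on toy numbers:

```python
import numpy as np

values = np.array([10, 12, 11, 13, 12, 14, 500])  # 500 is an obvious outlier

q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
upper_fence = q3 + 1.5 * iqr

# Cap values above the upper fence, as done for Distance_service above
capped = np.where(values > upper_fence, upper_fence, values)
print(capped.max() <= upper_fence)  # True
```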
# Scenario 3
```
# Merge all the transformed datasets
# usage_data_groups
# maintenance_part_consumption_pivottable_part.shape
# m_failure
maintenance_part_consumption_pivottable_part = maintenance_part_consumption_pivottable_part.reset_index()
column_list = ['Asset']
for a,b in maintenance_part_consumption_pivottable_part.columns.to_flat_index():
if a== 'Part':
column_list.append(b+'__UP')
elif a== 'Quantity':
column_list.append(b+'__TQ')
maintenance_part_consumption_pivottable_part.columns = column_list
maintenance_part_consumption_pivottable_part.head()
merged_data = m_failure.merge(usage_data_groups, on=['Asset'], how='left')
merged_data
merged_data_new = merged_data.merge(maintenance_part_consumption_pivottable_part, on=['Asset'], how='left')
merged_data_new.shape
merged_data_new
merged_data_new.describe()
# R782__TQ has all values 0, so it could be dropped.
# merged_data_new.drop(['R782__TQ'], axis=1, inplace=True)
```
Make a new data frame and copy the following columns from the merged dataframe/table.
"Asset",
"failure_bin",
"Fleet_serviced_count",
"Time_in_service",
"Distance_service",
"Distance_by_fleet",
"Time_by_fleet"
```
# "Distance_by_fleet",
# "Time_by_fleet",
New_data_frame = merged_data_new[[
"Asset",
"failure_bin",
"Fleet_service_count",
"Time_in_service",
"Distance_service",
]]
maintenance_part_consumption_pivottable_part.columns
New_data_frame['Total_failures_quantity'] = merged_data_new[[
'R044__TQ', 'R064__TQ', 'R119__TQ',
'R193__TQ', 'R364__TQ', 'R396__TQ', 'R417__TQ', 'R446__TQ', 'R565__TQ',
'R575__TQ', 'R606__TQ', 'R707__TQ', 'R783__TQ'
]].sum(axis=1)
New_data_frame
New_data_frame[New_data_frame['Asset']=='A067512']
New_data_frame['Total_failures_uniqueparts'] = merged_data_new[[
'R044__UP', 'R064__UP', 'R119__UP', 'R193__UP', 'R364__UP',
'R396__UP', 'R417__UP', 'R446__UP', 'R565__UP', 'R575__UP', 'R606__UP',
'R707__UP', 'R782__UP', 'R783__UP'
]].sum(axis=1)
New_data_frame.head(1)
merged_data_new.head(1)
#"Distance_by_fleet",
# "Time_by_fleet" is not added to index
#Total assets came for maintenance?
#Total assets which had breakdown?
a = New_data_frame.groupby('failure_bin')['Asset'].agg([['Total_asset', 'count']]).reset_index()
a
New_data_frame.groupby('failure_bin')['Asset'].agg('count')
# Enter Total_failures_quantity for the assets that came for maintenance
b = New_data_frame.groupby('failure_bin')['Total_failures_quantity'].agg([['Total_failures_quantity_sum','sum']]).reset_index()
b
# Calculate the average Quantity of parts used and Unique parts used for the type of assets.
b['Total_failures_quantity_sum']/a['Total_asset']
# Enter Total_failures_uniqueparts for the assets which had breakdown.
c = New_data_frame.groupby('failure_bin')['Total_failures_uniqueparts'].agg([['Total_failures_uniqueparts_sum','sum']]).reset_index()
c
# Calculate the average Quantity of parts used and Unique parts used for the type of assets.
c['Total_failures_uniqueparts_sum']/a['Total_asset']
New_data_frame.head()
```
## Bivariate Analysis
```
sn.pairplot(New_data_frame)
# plt.figure(figsize=(15,15))
sn.heatmap(New_data_frame.corr(), annot=True)
```
<a href="https://colab.research.google.com/github/zjzsu2000/CMPE297_AdvanceDL_Project/blob/main/Data_Preprocessing/Final_result.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import pandas as pd
import numpy as np
from google.colab import drive
from keras.models import Sequential, Model, load_model
from keras.layers import Dense, Activation, LeakyReLU, BatchNormalization, LSTM, Bidirectional, Input, Concatenate
from keras import backend as K
from keras.callbacks import TensorBoard
from keras.optimizers import Adam
from keras.utils import plot_model
from sklearn.model_selection import train_test_split
drive.mount('/gdrive')
option = pd.read_csv('/gdrive/My Drive/Data set/Option/WIX_call-options-black-scholes.csv')
lstm_option = pd.read_csv('/gdrive/My Drive/Data set/Option/underlying/call/WIX_underlying_call.csv')
model_lstm = load_model('/gdrive/Shareddrives/CMPE297_49_project/models/model1_lstm_call_1.h5')
model2 = load_model('/gdrive/Shareddrives/CMPE297_49_project/models/model2_call21_all_4000.h5')
model3= load_model('/gdrive/Shareddrives/CMPE297_49_project/models/model3_call_sigma5_all_1024_4000.h5')
lstm_option.tail()
model_lstm.summary()
lstm_test = lstm_option[lstm_option['OptionSymbol']=='WIX200117C00060000']
lstm_test
lstm_test = lstm_test[['UnderlyingPrice','Strike','Bid','Ask','sigma_21','date_diff','treasury_rate','1','2','3','4','5','6','7','8','9','10'
,'11','12','13','14','15','16','17','18','19','20','21']]
lstm_test = lstm_test.dropna()
lstm_test
#test_lstm_f = lstm_test[['UnderlyingPrice','Strike','sigma_21','date_diff','treasury_rate']]
test_lstm_x = lstm_test.drop(['Bid','sigma_21','Ask'], axis=1).values
#[['1','2','3','4','5','6','7','8','9','10'
# ,'11','12','13','14','15','16','17','18','19','20','21']]
test_lstm_y = (lstm_test.Bid + lstm_test.Ask) /2
N_TIMESTEPS = 21
test_lstm_x = [test_lstm_x[:, -N_TIMESTEPS:].reshape(test_lstm_x.shape[0], N_TIMESTEPS, 1), test_lstm_x[:, :4]]
test_lstm_x[1]
model_lstm.evaluate(test_lstm_x, test_lstm_y, batch_size=1024)  # test_lstm_x already holds both model inputs; test_lstm_f is commented out above
lstmpre = model_lstm.predict(test_lstm_x)
test_lstm_y
lstmpre
from matplotlib import pyplot as plt4
plt4.figure(figsize=(20, 10))
plt4.ylabel('price')
plt4.xlabel('day')
plt4.title('LSTM compare')
plt4.plot(lstmpre,label = 'LSTM')
plt4.plot(test_lstm_y.values, label = 'real')
plt4.legend()
model2.summary()
option.head()
option.OptionSymbol.value_counts()
test = option[option['OptionSymbol']=='WIX200117C00060000']
test = test[['UnderlyingPrice','Strike','Bid','Ask','sigma_21','date_diff','treasury_rate']]
test=test.dropna()
test
test_x = test[['UnderlyingPrice','Strike','sigma_21','date_diff','treasury_rate']]
test_y = (test.Bid + test.Ask) /2
model2.evaluate(test_x,test_y,batch_size=1024)
pred= model2.predict(test_x)
len(pred)
len(test_y)
test_y
from matplotlib import pyplot as plt
plt.figure(figsize=(20, 10))
plt.ylabel('price')
plt.xlabel('day')
plt.title('model2 compare')
plt.plot(pred,label = 'model2')
plt.plot(test_y.values, label = 'real')
plt.legend()
model3.summary()
pre3=model3.predict(test_x)
len(pre3[:,0])
test_y3 = test[['Bid','Ask']]
test_yb = test_y3.Bid.values
test_ya = test_y3.Ask.values
from matplotlib import pyplot as plt2
plt2.figure(figsize=(20, 10))
plt2.ylabel('price')
plt2.xlabel('day')
plt2.title('model3 Bid')
plt2.plot(pre3[:,0],label = 'model3')
plt2.plot(test_yb, label = 'Bid')
plt2.legend()
from matplotlib import pyplot as plt3
plt3.figure(figsize=(20, 10))
plt3.ylabel('price')
plt3.xlabel('day')
plt3.title('model3 Ask')
plt3.plot(pre3[:,1],label = 'model3')
plt3.plot(test_ya, label = 'Ask')
plt3.legend()
```
```
# hide
# all_tutorial
! [ -e /content ] && pip install -Uqq mrl-pypi # upgrade mrl on colab
```
# Tutorial - Conditional LSTM Language Models
>Training and using conditional LSTM language models
## LSTM Language Models
LSTM language models are a type of autoregressive generative model. This particular type of model is a good fit for RL-based optimization as they are light, robust and easy to optimize. These models make use of the [LSTM](https://arxiv.org/abs/1402.1128) architecture design.
Language models are trained in a self-supervised fashion by next token prediction. Given a series of tokens, the model predicts a probability distribution over the next token. Self-supervised training is very fast and doesn't require any data labels: each text string in the dataset labels itself.
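The "each string labels itself" idea can be sketched in plain Python, assuming an already-numericalized token sequence:

```python
# Next-token prediction pairs: the targets are the inputs shifted left by one
tokens = [2, 15, 7, 42, 3]   # e.g. [bos, C, =, O, eos] after numericalization

inputs  = tokens[:-1]        # what the model sees at each step
targets = tokens[1:]         # what it must predict at each step

for x, y in zip(inputs, targets):
    print(f'input {x} -> target {y}')
```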
During generation, we sample from the model in an autoregressive fashion. Given an input token, the model predicts a distribution over the next token. We then sample from that distribution and feed the selected token back into the model. We repeat this process until either an end-of-sentence (EOS) token is predicted, or the generated sequence reaches a maximum allowed length.
During sampling, we save the log probability of each token predicted. This gives us a probability value for the model's estimated likelihood of the generated compound. We can also backprop through this value.
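The sampling-with-log-probabilities loop can be sketched in plain Python (the toy `next_token_probs` "model" below is purely illustrative, not the MRL API):

```python
import math
import random

def sample_autoregressive(next_token_probs, bos=0, eos=1, max_len=10, rng=None):
    """Toy autoregressive sampler: feed each sampled token back in,
    accumulating the log probability of the generated sequence."""
    rng = rng or random.Random(0)
    tokens, logp = [bos], 0.0
    for _ in range(max_len):
        probs = next_token_probs(tokens[-1])   # distribution over the next token
        r, cum, tok = rng.random(), 0.0, len(probs) - 1
        for i, p in enumerate(probs):          # inverse-CDF sampling
            cum += p
            if r < cum:
                tok = i
                break
        tokens.append(tok)
        logp += math.log(probs[tok])           # save log prob of the chosen token
        if tok == eos:
            break
    return tokens, logp

# hypothetical 3-token "model": 60% EOS, 20% each of the other tokens
toks, lp = sample_autoregressive(lambda t: [0.2, 0.6, 0.2])
```

The accumulated `lp` is what a reinforcement-learning objective would backprop through in the real model.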
### Conditional Language Models
Conditional language models condition the generated sequences on some latent vector. Conditioning can be implemented in two ways:
- hidden conditioning: use the latent vector to initialize the hidden state of the model
- output conditioning: concatenate the latent vector to the model activations right before the LSTM layers
The latent vector itself is produced by an encoder network.
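A shape-level sketch of the two conditioning modes (plain Python lists stand in for tensors; the function names are illustrative, not the `Conditional_LSTM_LM` internals):

```python
def condition_hidden(latent, n_layers):
    # hidden conditioning: the latent vector becomes the initial
    # hidden state of every LSTM layer
    return [list(latent) for _ in range(n_layers)]

def condition_output(latent, token_embedding):
    # output conditioning: concatenate the latent vector onto the
    # token embedding right before it enters the LSTM stack
    return list(token_embedding) + list(latent)

z = [0.1, 0.2]
h0 = condition_hidden(z, n_layers=3)       # 3 copies of z as initial states
x = condition_output(z, [0.5, 0.5, 0.5])   # embedding grows from size 3 to 5
```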
```
import sys
sys.path.append('..')
from mrl.imports import *
from mrl.core import *
from mrl.chem import *
from mrl.torch_imports import *
from mrl.torch_core import *
from mrl.layers import *
from mrl.dataloaders import *
from mrl.g_models.all import *
from mrl.train.agent import *
from mrl.vocab import *
```
## Setup
Before creating a model, we need to set up our data.
Our raw data is in the form of SMILES strings. We need to convert these to tensors.
First we need a `Vocab` to handle converting strings to tokens and mapping those tokens to integers. We will use the `CharacterVocab` class with the `SMILES_CHAR_VOCAB` vocabulary. This will tokenize SMILES on a character basis.
More sophisticated tokenization schemes exist, but character tokenization is nice for its simplicity: it has a small, compact vocabulary. Other strategies can tokenize by more meaningful subwords, but they create a long tail of low-frequency tokens and lots of `unk` characters.
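A minimal character-level vocab might look like the following sketch (a toy illustration; the actual `CharacterVocab` API may differ):

```python
class TinyCharVocab:
    """Toy character-level vocabulary: specials first, then sorted characters."""
    def __init__(self, chars, specials=('bos', 'eos', 'pad', 'unk')):
        self.itos = list(specials) + sorted(set(chars))
        self.stoi = {s: i for i, s in enumerate(self.itos)}

    def tokenize(self, smile):
        # wrap the string with beginning/end-of-sequence markers
        return ['bos'] + list(smile) + ['eos']

    def numericalize(self, tokens):
        # unknown characters map to the 'unk' token
        unk = self.stoi['unk']
        return [self.stoi.get(t, unk) for t in tokens]

vocab = TinyCharVocab('CcNnOo()=123')
ids = vocab.numericalize(vocab.tokenize('CCO'))
```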
```
# if in the repo
df = pd.read_csv('../files/smiles.csv')
# if on Colab:
# download_files()
# df = pd.read_csv('files/smiles.csv')
df.head()
vocab = CharacterVocab(SMILES_CHAR_VOCAB)
```
`vocab` first tokenizes SMILES into characters, then numericalizes the tokens into integer keys.
```
' '.join(vocab.tokenize(df.smiles.values[0]))
' '.join([str(i) for i in vocab.numericalize(vocab.tokenize(df.smiles.values[0]))])
```
## Dataset
Now we need a dataset. For a conditional language model, we have to decide what data we want to send to the encoder to generate a latent vector.
For this tutorial, we are going to use molecular fingerprints to generate the latent vector. The model will use the encoder to map the input fingerprint to a latent vector. Then the latent vector will be used to condition the hidden state of the decoder, which will reconstruct the SMILES string corresponding to the fingerprint.
To do this, we will use the `Vec_To_Text_Dataset` dataset with `ECFP6` as our fingerprint function
```
dataset = Vec_To_Text_Dataset(df.smiles.values, vocab, ECFP6)
dataloader = dataset.dataloader(32)
```
Now we can look at the actual data
```
x,y = next(iter(dataloader))
x
y
```
The `x` tensor is a tuple containing `(smiles_ints, fingerprint)`. The fingerprint (`x[1]`) will be used to generate the latent vector, while the integer-coded SMILES string (`x[0]`) will be sent to the decoder along with the latent vector.
You will notice the `y` tensor is the same as the `x[0]` tensor with the values shifted by one. This is because the goal of autoregressive language modeling is to predict the next character given the previous series of characters.
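The shift can be sketched directly:

```python
def shift_targets(token_ids):
    """Next-token targets: y[t] = x[t+1], so the model predicts
    each token from everything that came before it."""
    x = token_ids[:-1]
    y = token_ids[1:]
    return x, y

# hypothetical integer-coded sequence: bos, C, C, O, eos
x, y = shift_targets([0, 5, 7, 9, 1])
```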
## Model Creation
We can create a model through the `Conditional_LSTM_LM` class.
First we need an encoder. Since our encoder data is a fingerprint, we will use a MLP-type encoder and set our latent vector to have a dimension of `512`
```
enc_drops = [0.1, 0.1]
d_latent = 512
encoder = MLP_Encoder(2048, [1024, 512], d_latent, enc_drops)
```
Now we create the model
```
d_vocab = len(vocab.itos)
bos_idx = vocab.stoi['bos']
d_embedding = 256
d_hidden = 1024
n_layers = 3
tie_weights = True
input_dropout = 0.3
lstm_dropout = 0.3
model = Conditional_LSTM_LM(
encoder,
d_vocab,
d_embedding,
d_hidden,
d_latent,
n_layers,
input_dropout=input_dropout,
lstm_dropout=lstm_dropout,
norm_latent=True,
condition_hidden=True,
condition_output=False,
bos_idx=bos_idx,
)
```
We can examine the encoder-decoder structure of the model. The `MLP_Encoder` section contains three MLP layers; the `decoder` section consists of an embedding, three LSTM layers, and an output layer.
```
model
```
Now we'll put the model into a `GenerativeAgent` to manage supervised training.
We need to specify a loss function - we will use standard cross entropy
```
loss_function = CrossEntropy()
agent = GenerativeAgent(model, vocab, loss_function, dataset, base_model=False)
```
Now we can train in a supervised fashion on next token prediction
```
agent.train_supervised(32, 1, 1e-3)
```
This was just a quick example to show the training API. We're not going to do a whole training process here. To train custom models, repeat this code with your own set of SMILES.
## Pre-trained Models
The MRL model zoo offers a number of pre-trained models. We'll load one of these to continue.
We'll use the `FP_Cond_LSTM_LM_Small_ZINC` model, which was pre-trained on a chunk of the ZINC database.
```
del model
del agent
gc.collect()
torch.cuda.empty_cache()
from mrl.model_zoo import FP_Cond_LSTM_LM_Small_ZINC
agent = FP_Cond_LSTM_LM_Small_ZINC(drop_scale=0.5, base_model=False)
```
Now with a fully pre-trained model, we can look at drawing samples
```
preds, lps = agent.model.sample_no_grad(256, 100, temperature=1.)
preds
lps.shape
```
The `sample_no_grad` function gives us two outputs - `preds` and `lps`.
`preds` is a long tensor of size `(bs, sl)` containing the integer tokens of the samples.
`lps` is a float tensor of size `(bs, sl)` containing the log probabilities of each value in `preds`
We can now reconstruct the predictions back into SMILES strings
```
smiles = agent.reconstruct(preds)
smiles[:10]
mols = to_mols(smiles)
```
Now let's look at some key generation statistics:
- diversity - the fraction of unique samples
- valid - the fraction of chemically valid samples
```
div = len(set(smiles))/len(smiles)
val = len([i for i in mols if i is not None])/len(mols)
print(f'Diversity:\t{div:.3f}\nValid:\t\t{val:.3f}')
valid_mols = [i for i in mols if i is not None]
draw_mols(valid_mols[:16], mols_per_row=4)
```
## Conditional Generation
Let's look further at how conditional generation works. One important aspect of conditioning on a latent space is the ability to sample from it. This poses an interesting challenge for a conditional language model because, unlike a VAE, there is no set prior distribution.
The `Conditional_LSTM_LM` class handles this with the `norm_latent` argument. If `norm_latent=True` (as above), the latent vectors are all normalized to a magnitude of 1 before being sent to the decoder. This is similar to what is done in StyleGAN-type models.
By setting the norm constraint, we can sample valid latent vectors by sampling from a normal distribution and normalizing the resulting vectors.
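The sampling scheme can be sketched in plain Python (illustrative only; the notebook does this with GPU tensors):

```python
import math
import random

def sample_unit_latents(n, d, rng=None):
    """Sample latent vectors from N(0, I) and normalize each one to unit
    length, matching the norm_latent constraint described above."""
    rng = rng or random.Random(0)
    latents = []
    for _ in range(n):
        v = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(x * x for x in v))
        latents.append([x / norm for x in v])
    return latents

zs = sample_unit_latents(4, 512)
```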
```
latents = to_device(torch.randn((64, agent.model.encoder.d_latent)))
preds, lps = agent.model.sample_no_grad(latents.shape[0], 100, z=latents, temperature=1.)
smiles = agent.reconstruct(preds)
mols = to_mols(smiles)
mols = [i for i in mols if i is not None]
draw_mols(mols[:16], mols_per_row=4)
```
## Conditional Samples on a Single Input
One advantage of conditional generation is the ability to sample multiple compounds from a specific input.
Here we get a latent vector from a specific molecule in the dataset.
```
smile = df.smiles.values[0]
to_mol(smile)
smile_ds = dataset.new([smile])
batch = to_device(smile_ds.collate_function([smile_ds[0]]))
x,y = batch
agent.model.eval();
latents = agent.model.x_to_latent(x)
latents.shape
```
Now we sample 64 molecules from the same latent vector. The diversity of the outputs will depend on the sampling temperature used and the degree of dropout (if enabled)
```
agent.model.train();
preds, lps = agent.model.sample_no_grad(64, 100, z=latents.repeat(64,1), temperature=1.)
smiles = agent.reconstruct(preds)
print(len(set(smiles)), len(set(smiles))/len(smiles))
smiles = list(set(smiles))
mols = to_mols(smiles)
mols = [i for i in mols if i is not None]
len(mols)
draw_mols(mols[:16], mols_per_row=4)
```
In the above section, we generated 64 compounds and ended up with 10 unique SMILES strings.
If we want more diversity, we can sample with a higher temperature.
You'll see we get 40 unique SMILES. However, only 36 of them are valid SMILES strings. This is the tradeoff of using higher sampling temperatures
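Temperature acts on the logits before the softmax; a small sketch shows why higher temperatures increase diversity:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Higher temperature flattens the distribution (more diverse samples,
    but more invalid ones); lower temperature sharpens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
p_cold = softmax_with_temperature(logits, 0.5)
p_hot = softmax_with_temperature(logits, 2.0)
# the top token's probability drops as temperature rises
```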
```
preds, lps = agent.model.sample_no_grad(64, 100, z=latents.repeat(64,1), temperature=2.)
smiles = agent.reconstruct(preds)
print(len(set(smiles)), len(set(smiles))/len(smiles))
smiles = list(set(smiles))
mols = to_mols(smiles)
mols = [i for i in mols if i is not None]
len(mols)
draw_mols(mols[:16], mols_per_row=4)
```
## Prior
The `Conditional_LSTM_LM` supports a defined prior distribution for sampling. By default, this prior is not enabled.
We can set the prior using the latent vector we generated earlier to focus the model's prior distribution on that area of the latent space.
To create this distribution, we first need to define the log-variance of the latent space. We will set the log variance to `-5` to get a tight distribution around the latent vector.
```
logvar = to_device(torch.zeros(latents.shape)-5)
agent.model.set_prior_from_latent(latents.squeeze(0), logvar.squeeze(0), trainable=False)
preds, lps = agent.model.sample_no_grad(128, 100, temperature=1.)
smiles = agent.reconstruct(preds)
print(len(set(smiles)), len(set(smiles))/len(smiles))
smiles = list(set(smiles))
mols = to_mols(smiles)
mols = [i for i in mols if i is not None]
len(mols)
draw_mols(mols[:16], mols_per_row=4)
```
With the small but nonzero variance around the latent vector, we get compounds that are different but share many of the features of the original molecule
| github_jupyter |
# Isolation Forest (IF) Algorithm Documentation
The aim of this document is to explain the Isolation Forest algorithm in Seldon's outlier detection framework.
First, we provide a high level overview of the algorithm and the use case, then we will give a detailed explanation of the implementation.
## Overview
Outlier detection has many applications, ranging from preventing credit card fraud to detecting computer network intrusions. The available data is typically unlabeled and detection needs to be done in real-time. The outlier detector can be used as a standalone algorithm, or to detect anomalies in the input data of another predictive model.
The IF outlier detection algorithm predicts whether the input features are an outlier or not, depending on a threshold level set by the user. The algorithm first needs to be pretrained on a representative batch of data.
As observations arrive, the algorithm will:
- calculate an anomaly score for the observation
- predict that the observation is an outlier if the anomaly score is below the threshold level
## Why Isolation Forests?
Isolation forests are tree-based models specifically used for outlier detection. The IF isolates observations by randomly selecting a feature and then randomly selecting a split value between the minimum and maximum values of that feature. The number of splits required to isolate a sample equals the path length from the root node to the terminating node. This path length, averaged over a forest of random trees, is a measure of normality and is used to define an anomaly score. Outliers can typically be isolated more quickly, leading to shorter paths. In the scikit-learn implementation, lower anomaly scores indicate a higher probability that an observation is an outlier.
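The intuition that outliers need fewer random splits to isolate can be demonstrated with a minimal one-dimensional sketch (the real algorithm builds an ensemble of trees over subsamples and multiple features):

```python
import random

def isolation_path_length(x, data, rng, max_depth=50):
    """Depth at which point x becomes isolated under random splits --
    a single-tree, single-feature sketch of the Isolation Forest idea."""
    depth = 0
    while len(data) > 1 and depth < max_depth:
        lo, hi = min(data), max(data)
        if lo == hi:
            break
        split = rng.uniform(lo, hi)
        # keep only the side of the split that contains x
        data = [p for p in data if (p < split) == (x < split)]
        depth += 1
    return depth

rng = random.Random(42)
normal = [rng.gauss(0, 1) for _ in range(200)]
data = normal + [10.0]  # one obvious outlier

def avg_depth(x, n_trees=30):
    return sum(isolation_path_length(x, data, random.Random(s))
               for s in range(n_trees)) / n_trees

outlier_depth = avg_depth(10.0)
inlier_depth = avg_depth(normal[0])
# the outlier is isolated in far fewer splits than an inlier
```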
## Implementation
### 1. Defining and training the IF model
The model takes 4 hyperparameters:
- contamination: the fraction of expected outliers in the data set
- number of estimators: the number of base estimators; number of trees in the forest
- max samples: fraction of samples used for each base estimator
- max features: fraction of features used for each base estimator
``` python
!python train.py \
--dataset 'kddcup99' \
--samples 50000 \
--keep_cols "$cols_str" \
--contamination .1 \
--n_estimators 100 \
--max_samples .8 \
--max_features 1. \
--save_path './models/'
```
The model is saved in the folder specified by "save_path".
### 2. Making predictions
In order to make predictions, which can then be served by Seldon Core, the pre-trained model is loaded when defining an OutlierIsolationForest object. The "threshold" argument defines the anomaly score below which a sample is classified as an outlier. The threshold is a key hyperparameter and needs to be picked carefully for each application.
``` python
class OutlierIsolationForest(object):
""" Outlier detection using Isolation Forests.
Arguments:
- threshold (float): anomaly score threshold; scores below threshold are outliers
Functions:
- predict: detect and return outliers
- send_feedback: add target labels as part of the feedback loop
- metrics: return custom metrics
"""
def __init__(self,threshold=0.,load_path='./models/'):
self.threshold = threshold
self.N = 0 # total sample count up until now
# load pre-trained model
with open(load_path + 'model.pickle', 'rb') as f:
self.clf = pickle.load(f)
```
The predict method does the actual outlier detection.
``` python
def predict(self,X,feature_names):
""" Detect outliers from mse using the threshold.
Arguments:
- X: input data
- feature_names
"""
```
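Although the body of `predict` is elided here, the thresholding step it performs reduces to a simple comparison. A sketch, assuming anomaly scores have already been computed (e.g. via scikit-learn's `decision_function`):

```python
def classify_by_threshold(anomaly_scores, threshold=0.0):
    """Label samples whose anomaly score falls below the threshold
    as outliers (1) and the rest as inliers (0)."""
    return [1 if score < threshold else 0 for score in anomaly_scores]

labels = classify_by_threshold([-0.2, 0.05, 0.3], threshold=0.0)
```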
## References
Scikit-learn Isolation Forest:
- https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.IsolationForest.html
| github_jupyter |
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Hyperparameter Tuning of the Online Anomaly Detection algorithm
## Introduction
In the previous notebook, you learned to leverage the AML SDK features for Machine Learning experimentation to test the performance of our online solution for Anomaly Detection. These tools allowed you to test the solution with different parameter settings.
In this lab, we are going to take it a step further and use Azure `HyperDrive` to do the hard work of finding the best parameters for us.
Typically it would be used to tune hyperparameters in Machine learning algorithms, such as the regularization constant in a support vector machine, or the number of hidden layers in a neural network.
However, HyperDrive was designed with an extremely flexible architecture. You can combine it with any script that accepts hyperparameter arguments and returns a number that you are trying to either minimize or maximize by finding the right hyperparameter settings. This is exactly what we are going to do here.
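Conceptually, HyperDrive's random sampling is a managed version of the following loop (an illustrative stand-in, not the HyperDrive API; the objective function here is made up):

```python
import random

def random_search(objective, param_space, n_trials=20, rng=None):
    """Sample parameter settings at random, evaluate the objective,
    and keep the best result -- the core of what HyperDrive automates."""
    rng = rng or random.Random(0)
    best_params, best_score = None, float('-inf')
    for _ in range(n_trials):
        params = {name: rng.choice(values) for name, values in param_space.items()}
        score = objective(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# hypothetical objective that peaks at window_size=1000, com=12
obj = lambda window_size, com: -abs(window_size - 1000) / 1000 - abs(com - 12) / 12
best, score = random_search(obj, {'window_size': [100, 500, 1000, 2000],
                                  'com': [4, 6, 12, 24]})
```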
## Getting started
Let's get started. First let's import some Python libraries.
```
#%matplotlib inline
import numpy as np
import os
import azureml
from azureml.core import Workspace, Run
# check core SDK version number
print("Azure ML SDK Version: ", azureml.core.VERSION)
```
## Diagnostics
Opt-in diagnostics for better experience, quality, and security of future releases.
```
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
```
## Initialize workspace
Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`.
```
from azureml.core.workspace import Workspace
# config_path = '/dbfs/tmp/'
# If you are running this on Jupyter, you may want to run
config_path = '..'
ws = Workspace.from_config(path=os.path.join(config_path, 'aml_config','config.json'))
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
```
## Create an Azure ML experiment
Let's create an experiment named "ADMLExp" and a folder to hold the training scripts. The script runs will be recorded under the experiment in Azure.
```
from azureml.core import Experiment
script_folder = 'scripts'
os.makedirs(script_folder, exist_ok=True)
exp = Experiment(workspace=ws, name='ADMLExp')
```
## Download telemetry dataset
In order to test on the telemetry dataset, we first need to download it from a public blob storage container and save it in a local `data` folder.
```
import os
import urllib
data_path = os.path.join(config_path, 'data')
os.makedirs(data_path, exist_ok=True)
container = 'https://sethmottstore.blob.core.windows.net/predmaint/'
urllib.request.urlretrieve(container + 'telemetry.csv', filename=os.path.join(data_path, 'telemetry.csv'))
urllib.request.urlretrieve(container + 'anoms.csv', filename=os.path.join(data_path, 'anoms.csv'))
```
## Upload dataset to default datastore
A [datastore](https://docs.microsoft.com/azure/machine-learning/service/how-to-access-data) is a place where data can be stored that is then made accessible to a Run, either by mounting or by copying the data to the compute target. A datastore can be backed by either Azure Blob Storage or an Azure File Share (ADLS will be supported in the future). For simple data handling, each workspace provides a default datastore that can be used in case the data is not already in Blob Storage or File Share.
In this next step, we will upload the training and test set into the workspace's default datastore, which we will later mount on a Batch AI cluster for training.
```
ds = ws.get_default_datastore()
ds.upload(src_dir=data_path, target_path='telemetry', overwrite=True, show_progress=True)
```
## Create Batch AI cluster as compute target
[Batch AI](https://docs.microsoft.com/en-us/azure/batch-ai/overview) is a service for provisioning and managing clusters of Azure virtual machines for running machine learning workloads. Let's create a new Batch AI cluster in the current workspace, if it doesn't already exist. We will then run the training script on this compute target.
If we could not find a cluster with the given name, we will create a new one here. We will create an AmlCompute cluster of `Standard_DS3_v2` CPU VMs. This process is broken down into 3 steps:
1. create the configuration (this step is local and only takes a second)
2. create the Batch AI cluster (this step will take about **20 seconds**)
3. provision the VMs to bring the cluster to its initial size (`min_nodes=0` here, so nodes are only provisioned once jobs are submitted). This step provides only sparse output in the process. Please make sure to wait until the call returns before moving to the next cell
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
cluster_name = "ADPMAmlCompute"
try:
# look for the existing cluster by name
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
if type(compute_target) is AmlCompute:
print('Found existing compute target {}.'.format(cluster_name))
else:
print('{} exists but it is not a Batch AI cluster. Please choose a different name.'.format(cluster_name))
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size="Standard_DS3_v2",
#vm_priority='lowpriority', # optional
idle_seconds_before_scaledown=1800,
#autoscale_enabled=True,
min_nodes=0,
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
# can poll for a minimum number of nodes and for a specific timeout.
# if no min node count is provided it uses the scale settings for the cluster
compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
# Use the 'status' property to get a detailed status for the current cluster.
print(compute_target.status.serialize())
```
## Download the execution script into the script folder
The execution script is already created for you. You can simply copy it into the script folder. You could also use the one from the previous lab, but let's play it safe.
```
# download the script file from the repo.
urllib.request.urlretrieve(
'https://raw.githubusercontent.com/Azure/LearnAI-ADPM/master/solutions/sample_run_AmlCompute.py',
filename=os.path.join(script_folder, 'sample_run_AmlCompute.py'))
```
Make sure the execution script looks correct.
```
with open(os.path.join(script_folder,'sample_run_AmlCompute.py'), 'r') as f:
print(f.read())
```
## Configure Estimator and policy for hyperparameter tuning
We have trained the model with one set of hyperparameters; now let's see how we can do hyperparameter tuning by launching multiple runs on the cluster. First, let's define the parameter space using random sampling.
```
from azureml.train.hyperdrive import *
ps = RandomParameterSampling(
{
'--window_size': choice(100, 500, 1000, 2000, 5000),
'--com': choice(4, 6, 12, 24)
}
)
```
Next, we will create a new estimator without the above parameters, since they will be passed in later. Note we still need to keep the `data-folder` parameter, since that's not a hyperparameter we will sweep.
## Create an AzureML training Estimator
Next, we construct an `azureml.train.Estimator` estimator object, use the Batch AI cluster as compute target, and pass the mount-point of the datastore to the training code as a parameter.
The estimator provides a simple way of launching a custom job on a compute target. It will automatically provide a Docker image; if additional pip or conda packages are required, their names can be passed in via the `pip_packages` and `conda_packages` arguments and they will be included in the resulting image.
In our case, we will need to install the following `pip_packages`: `numpy`, `pandas`, `scikit-learn`.
Unlike in the previous lab, we do not provide hyperparameters as `script_params` to the Estimator, because they will be set by `HyperDrive`.
```
from azureml.train.estimator import Estimator
script_params = {
'--data-folder': ws.get_default_datastore().as_mount(),
# We are not using the following parameters, because they will be set by HyperDrive
# '--window_size': 500,
# '--com': 12
}
est = Estimator(source_directory=script_folder,
script_params=script_params,
compute_target=compute_target,
entry_script='sample_run_AmlCompute.py',
pip_packages=['numpy','pandas','scikit-learn','pyculiarity'])
```
Now we will define an early termination policy. The `BanditPolicy` basically states to check the job every 2 iterations. If the primary metric (defined later) falls outside of the top 10% range, Azure ML terminates the job. This saves us from continuing to explore hyperparameters that don't show promise of reaching our target metric.
```
policy = BanditPolicy(evaluation_interval=2, slack_factor=0.1, delay_evaluation=250)
```
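For a maximized metric, the bandit rule can be paraphrased as the following check (a sketch of the documented behavior, not the service's actual code):

```python
def bandit_should_terminate(current_metric, best_metric, slack_factor=0.1):
    """Terminate a run whose metric falls below best / (1 + slack_factor),
    i.e. it is no longer within the allowed slack of the best run so far."""
    return current_metric < best_metric / (1 + slack_factor)

# with the best run at 0.80 and slack_factor=0.1, the cutoff is ~0.727
bandit_should_terminate(0.70, 0.80)  # terminated
bandit_should_terminate(0.75, 0.80)  # kept running
```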
Now we are ready to configure a run configuration object and specify the primary metric `fbeta_score` that's recorded in your training runs. If you go back to the training script, you will notice that this value is being logged periodically. We also tell the service that we are looking to maximize this value. Finally, we set the maximum number of runs to 30 and the maximum number of concurrent runs to 4, which matches the number of nodes in our compute cluster.
```
htc = HyperDriveRunConfig(estimator=est,
hyperparameter_sampling=ps,
primary_metric_name='fbeta_score',
primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
policy=policy,
max_total_runs=30,
max_concurrent_runs=4)
```
Finally, let's launch the hyperparameter tuning job.
```
htr = exp.submit(config=htc)
htr
```
Alternatively, you can also use the widget again
```
from azureml.widgets import RunDetails
RunDetails(htr).show()
```
If you want, you can wait for completion, by uncommenting the next cell. But you can also skip the cell and look at preliminary results. You will still have to wait for about a minute before the first results show up.
```
# htr.wait_for_completion(show_output = True)
```
## Find and register best model
When all the jobs finish, we can find the one that achieved the best primary metric.
**Note**: If you get a `TrainingException` or a `KeyError` below, you probably just have to wait until the first training run is completed.
```
best_run = htr.get_best_run_by_primary_metric()
```
### Hands-on lab
Go to the Azure portal and explore how HyperDrive logs run metrics there.
### End lab
Now let's list the model files uploaded during the run.
```
run_details = best_run.get_details()
print("arguments of best run: %s" % (run_details['runDefinition']['Arguments']))
best_run.get_metrics()['final_fbeta_score']
run_details
```
### Hands-on lab
Use python `help(azureml.train.hyperdrive)` and explore the documentation for HyperDrive.
Above, we used a BanditPolicy. Try to fully understand what the parameters of the policy are.
Try to pick one other policy and see whether you can replace the BanditPolicy above and run a new HyperDrive job.
### End Lab
## Clean up
We can also delete the compute cluster. But remember: if you set `min_nodes` to 0 when you created the cluster, all nodes are deleted automatically once the jobs are finished, so you don't have to delete the cluster itself since it won't incur any cost. Next time you submit jobs to it, the cluster will automatically "grow" up to `max_nodes`, which is set to 4.
```
# delete the cluster if you need to.
compute_target.delete()
```
| github_jupyter |
<center>
<img src="../../img/ods_stickers.jpg">
## Open Machine Learning Course
<center>Author: [Yury Kashnitsky](https://www.linkedin.com/in/festline)
First baseline in Kaggle Inclass [competition](https://www.kaggle.com/c/how-good-is-your-medium-article) "How good is your Medium article?"
Import libraries.
```
import os
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
import json
from tqdm import tqdm_notebook
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import mean_absolute_error
```
The following code will help to throw away all HTML tags from an article content.
```
from html.parser import HTMLParser
class MLStripper(HTMLParser):
def __init__(self):
self.reset()
self.strict = False
self.convert_charrefs= True
self.fed = []
def handle_data(self, d):
self.fed.append(d)
def get_data(self):
return ''.join(self.fed)
def strip_tags(html):
s = MLStripper()
s.feed(html)
return s.get_data()
```
Let's have two paths – to raw data (downloaded from competition's page and ungzipped) and to processed data. Change this if you'd like to.
```
PATH_TO_RAW_DATA = '../../raw_data/'
PATH_TO_PROCESSED_DATA = '../../processed_data/'
```
Assume you have all data downloaded from competition's [page](https://www.kaggle.com/c/how-good-is-your-medium-article/data) in the PATH_TO_RAW_DATA folder and `.gz` files are ungzipped.
```
!ls -l $PATH_TO_RAW_DATA
```
Supplementary function to read a JSON line without crashing on escape characters.
```
def read_json_line(line=None):
result = None
try:
result = json.loads(line)
except Exception as e:
# Find the offending character index:
idx_to_replace = int(str(e).split(' ')[-1].replace(')',''))
# Remove the offending character:
new_line = list(line)
new_line[idx_to_replace] = ' '
new_line = ''.join(new_line)
return read_json_line(line=new_line)
return result
```
This function takes a JSON and forms a txt file leaving only article content. When you resort to feature engineering and extract various features from articles, a good idea is to modify this function.
```
def preprocess(path_to_inp_json_file, path_to_out_txt_file):
with open(path_to_inp_json_file, encoding='utf-8') as inp_file, \
open(path_to_out_txt_file, 'w', encoding='utf-8') as out_file:
for line in tqdm_notebook(inp_file):
json_data = read_json_line(line)
content = json_data['content'].replace('\n', ' ').replace('\r', ' ')
content_no_html_tags = strip_tags(content)
out_file.write(content_no_html_tags + '\n')
%%time
preprocess(path_to_inp_json_file=os.path.join(PATH_TO_RAW_DATA, 'train.json'),
path_to_out_txt_file=os.path.join(PATH_TO_PROCESSED_DATA, 'train_raw_content.txt'))
%%time
preprocess(path_to_inp_json_file=os.path.join(PATH_TO_RAW_DATA, 'test.json'),
path_to_out_txt_file=os.path.join(PATH_TO_PROCESSED_DATA, 'test_raw_content.txt'))
!wc -l $PATH_TO_PROCESSED_DATA/*_raw_content.txt
```
We'll use a linear model (`Ridge`) with a very simple feature extractor – `CountVectorizer`, meaning that we resort to the Bag-of-Words approach. For now, we are leaving only 50k features.
```
cv = CountVectorizer(max_features=50000)
%%time
with open(os.path.join(PATH_TO_PROCESSED_DATA, 'train_raw_content.txt'), encoding='utf-8') as input_train_file:
X_train = cv.fit_transform(input_train_file)
%%time
with open(os.path.join(PATH_TO_PROCESSED_DATA, 'test_raw_content.txt'), encoding='utf-8') as input_test_file:
X_test = cv.transform(input_test_file)
X_train.shape, X_test.shape
```
Read targets from file.
```
train_target = pd.read_csv(os.path.join(PATH_TO_RAW_DATA, 'train_log1p_recommends.csv'),
index_col='id')
y_train = train_target['log_recommends'].values
```
Make a 30%-holdout set.
```
train_part_size = int(0.7 * train_target.shape[0])
X_train_part = X_train[:train_part_size, :]
y_train_part = y_train[:train_part_size]
X_valid = X_train[train_part_size:, :]
y_valid = y_train[train_part_size:]
```
Now we are ready to fit a linear model.
```
from sklearn.linear_model import Ridge
ridge = Ridge(random_state=17)
%%time
ridge.fit(X_train_part, y_train_part);
ridge_pred = ridge.predict(X_valid)
```
Let's plot predictions and targets for the holdout set. Recall that these are #recommendations (= #claps) of Medium articles with the `np.log1p` transformation.
```
plt.hist(y_valid, bins=30, alpha=.5, color='red', label='true', range=(0,10));
plt.hist(ridge_pred, bins=30, alpha=.5, color='green', label='pred', range=(0,10));
plt.legend();
```
As we can see, the prediction is far from perfect, and we get MAE $\approx$ 1.3 that corresponds to $\approx$ 2.7 error in #recommendations.
```
valid_mae = mean_absolute_error(y_valid, ridge_pred)
valid_mae, np.expm1(valid_mae)
```
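The `np.log1p`/`np.expm1` pair used for the target transformation are inverses, which is what lets us translate the log-space MAE back into raw recommendation counts (stdlib `math` equivalents shown):

```python
import math

# the target is log(1 + n_recommends), so a MAE of ~1.3 in log space
# corresponds to roughly expm1(1.3) ~ 2.7 recommendations
log_mae = 1.3
raw_error = math.expm1(log_mae)

# log1p and expm1 undo each other
roundtrip = math.expm1(math.log1p(42))
```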
Finally, train the model on the full accessible training set, make predictions for the test set and form a submission file.
```
%%time
ridge.fit(X_train, y_train);
%%time
ridge_test_pred = ridge.predict(X_test)
def write_submission_file(prediction, filename,
path_to_sample=os.path.join(PATH_TO_RAW_DATA, 'sample_submission.csv')):
submission = pd.read_csv(path_to_sample, index_col='id')
submission['log_recommends'] = prediction
submission.to_csv(filename)
write_submission_file(prediction=ridge_test_pred,
filename='first_ridge.csv')
```
With this, you'll get 1.91185 on the [public leaderboard](https://www.kaggle.com/c/how-good-is-your-medium-article/leaderboard). This is much higher than our validation MAE, which indicates that the target distribution in the test set differs somewhat from that of the training set (recent Medium articles are more popular). This shouldn't confuse us as long as we see a correlation between local improvements and improvements on the leaderboard.
Some ideas for improvement:
- Engineer good features, this is the key to success. Some simple features will be based on publication time, authors, content length and so on
- You may not ignore HTML and extract some features from there
- You'd better experiment with your validation scheme. You should see a correlation between your local improvements and LB score
- Try TF-IDF, ngrams, Word2Vec and GloVe embeddings
- Try various NLP techniques like stemming and lemmatization
- Tune hyperparameters. In our example, we've kept only 50k features and used the default `alpha=1` regularization strength in `Ridge`; both can be changed
- SGD and Vowpal Wabbit will learn much faster
- In our course, we don't cover neural nets, but nothing stops you from using GRUs or LSTMs in this competition.
```
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
import tensorflow as tf
physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
print(physical_devices)
```
From https://www.tensorflow.org/api_docs/python/tf/nn/conv2d
Given an input tensor of shape `batch_shape + [in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:
1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a virtual tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch vector.
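The three steps above can be sketched in plain NumPy for a tiny single-channel, VALID-padding example:

```python
import numpy as np

x = np.arange(16, dtype=float).reshape(4, 4)   # 4x4 input image
w = np.ones((2, 2))                            # 2x2 filter
out_h = out_w = 3                              # output size for VALID padding

# 1. Flatten the filter to a 2-D matrix.
w_flat = w.reshape(-1, 1)                      # shape (4, 1)
# 2. Extract image patches into rows of a virtual tensor.
patches = np.array([x[i:i+2, j:j+2].ravel()
                    for i in range(out_h) for j in range(out_w)])
# 3. Right-multiply each patch by the filter matrix.
out = (patches @ w_flat).reshape(out_h, out_w)
```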
```
class spectralConv2D(keras.layers.Layer):
def __init__(self, filters = 32,kernel_size=(3,3),strides=(1, 1), padding='valid',activation=tf.nn.relu):
# Initialization
super(spectralConv2D, self).__init__()
assert len(kernel_size) > 1 , "Please input the Kernel Size as a 2D tuple"
self.strides = strides
self.padding = padding
self.filters = filters
self.kernel_size = kernel_size
self.activation = activation
def build(self, input_shape):
# Initialize Filters
# Assuming Input_shape is channels_last
kernel_shape = [self.kernel_size[0],self.kernel_size[1],input_shape[3],self.filters]
self.kernel_real = self.add_weight( shape=kernel_shape,
initializer="glorot_uniform",
trainable=True)
self.kernel_imag = self.add_weight( shape=kernel_shape,
initializer="glorot_uniform",
trainable=True)
self.bias = self.add_weight(shape=(self.filters,), initializer="zeros", trainable=True)
    def call(self, inputs):
        # Combine the real and imaginary weight tensors into a complex kernel.
        kernel = tf.complex(self.kernel_real, self.kernel_imag)
        # ifft2d transforms the two innermost axes, so move the spatial
        # axes to the end, transform, and move them back.
        kernel = tf.transpose(kernel, [2, 3, 0, 1])
        spatial_kernel = tf.transpose(tf.signal.ifft2d(kernel), [2, 3, 0, 1])
        # The convolution expects a real-valued kernel.
        spatial_kernel = tf.math.real(spatial_kernel)
        convolution_output = tf.nn.convolution(
            inputs,
            spatial_kernel,
            strides=list(self.strides),
            padding=self.padding.upper()
        )
        convolution_output = tf.nn.bias_add(convolution_output, self.bias)
        if self.activation is not None:
            convolution_output = self.activation(convolution_output)
        return convolution_output
```
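The layer above stores the kernel as separate real and imaginary weight tensors in the Fourier domain; the identity it relies on (an inverse 2-D FFT recovers a spatial kernel) can be checked with plain NumPy:

```python
import numpy as np

# A real-valued 3x3 spatial kernel and its spectral representation.
k = np.arange(9, dtype=float).reshape(3, 3)
K = np.fft.fft2(k)                  # complex: real + imaginary parts
k_back = np.fft.ifft2(K)            # inverse transform

# The spatial kernel is recovered; the imaginary residue is ~0.
recovered = k_back.real
```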
# Decision Tree interpretability notebook
```
import os
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.tree import plot_tree
from dtreeviz.trees import *
from pycaret import classification
```
### Exploratory data analysis
It is important to specify the data path correctly. Initially, we can do a quick exploration.
```
data_folder_path = os.path.join('..', 'data')
data_file = 'ds.csv'
df = pd.read_csv(os.path.join(data_folder_path, data_file))
df.describe()
df.gender.value_counts()
playerTypes = pd.get_dummies(df['PlayerType'])
df = pd.concat([df.drop("PlayerType", axis=1), playerTypes], axis=1)
df.head()
```
### Classification Set-up
Definition of the main setup parameters: the target and the numeric features are specified explicitly.
```
classification_setup = classification.setup(
data=df,
target='gender',
numeric_features=[c for c in df.columns if c not in ['gender', 'matchPeriod', 'PlayerType']]
)
```
Initial exploration to understand the general performance of different classification algorithms (focus on Accuracy and AUC)
```
classification.compare_models()
```
Decision tree implementation, pruning the tree at 40 samples per leaf and scoring each split by its entropy gain.
The model is evaluated with 10-fold cross-validation.
```
classification.set_config('seed', 7940)
dt_model = classification.create_model('dt', min_samples_leaf=40, criterion='entropy')
dt_model
tuned_dt_model = classification.tune_model(dt_model)
dt_model = classification.load_model('./static_models/dt_6_10')
dt_model
```
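Under the hood, pycaret's `'dt'` wraps scikit-learn's `DecisionTreeClassifier`; a minimal standalone sketch of the same configuration on synthetic data (the dataset below is made up for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=7940)

# Pre-pruning: every leaf must keep at least 40 samples;
# candidate splits are scored by entropy (information gain).
tree = DecisionTreeClassifier(min_samples_leaf=40, criterion='entropy',
                              random_state=7940)
scores = cross_val_score(tree, X, y, cv=10)    # 10-fold cross-validation
```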
### Decision Tree implementation
- Feature importance as an aggregated from each split.
- Full Tree visualization.
```
classification_setup[0].columns
plot_options = ["auc","threshold","pr","confusion_matrix","error","class_report","boundary","rfe","learning","manifold","calibration","vc","dimension","feature","parameter"]
classification.plot_model(dt_model, plot='feature', save=True)
#Importance of the features, measured by how much the node purity is improved on average.
interpretation_options = ['summary', 'correlation', 'reason']
classification.interpret_model(dt_model, interpretation_options[0])
classification.interpret_model(dt_model, interpretation_options[1], feature='Clearances')
viz = dtreeviz(dt_model, classification_setup[0], df.gender, target_name='gender', feature_names=classification_setup[0].columns, class_names=['Female', 'Male'], orientation='TD', fontname='serif')
viz.view()
classification.plot_model(dt_model, plot='boundary')
plot_tree(dt_model, filled=True)
```
### Scientific Reporting
```
coef_df = pd.DataFrame({'Feature': classification_setup[0].columns, 'Coefficients': dt_model.feature_importances_})
coef_df.sort_values(by=['Coefficients'], ascending=False).head(10).to_latex(index=False)
```
```
import os
#download face database.
if not os.path.isfile('./FaceDB.zip'):
!gdown --id 'replace this with google drive link'
#or download it from wherever you want it
#unzip the database.
if not os.path.isdir('./FaceDB'):
!unzip -oq FaceDB.zip -d "./FaceDB"
#create folder for trained models to be saved.
if not os.path.isdir('./saved_models'):
os.mkdir('./saved_models')
import tensorflow as tf
#using tensorflow's keras implementation
#so the models can be converted to .tflite.
keras = tf.keras
print(tf.__version__)
from sklearn import preprocessing
#getting images from a path.
#path must have sub-folders.
#each sub-folders contains pictures of one person.
def ImgClassLists(path, imageExtension):
imagePaths = []
classes = []
#walk the path.
for root, _, files in sorted(os.walk(path)):
#sort the files iterator.
for file in sorted(files):
if file.endswith(imageExtension):
#get image paths.
imagePath = os.path.join(root, file)
imagePaths.append(imagePath)
#for each sub-folder provide an unique class name.
className = root[len(path) : ]
classes.append(className)
#turn class names into numbers.
#from ["class-a", "class-a", "class-b", "class-b", "class-b", "class-c"]
#to [0 , 0 , 1 , 1 , 1 , 2 ].
le = preprocessing.LabelEncoder()
le.fit(classes)
classes = le.transform(classes)
return (imagePaths, classes)
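# Quick check of the encoding above (uses the preprocessing import):
# sorted unique class names get integer ids, so
# ["class-b", "class-a", "class-b"] -> [1, 0, 1].
_demo_le = preprocessing.LabelEncoder()
_demo_codes = list(_demo_le.fit_transform(["class-b", "class-a", "class-b"]))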
#MobileNet implementation taken from the TF Keras library.
def MobileNet(shape, num_classes, alpha = 1, include_top = True, weights = None, num_inputs = None):
model = tf.keras.applications.MobileNet(input_shape = shape, alpha = alpha, include_top = include_top, weights = weights, classes = num_classes)
return model
#Keras EffNet implementation taken from
#https://github.com/arthurdouillard/keras-effnet.
from keras.models import Model
from keras.layers import *
from keras.activations import *
from keras.callbacks import *
def get_post(x_in):
x = LeakyReLU()(x_in)
x = BatchNormalization()(x)
return x
def get_block(x_in, ch_in, ch_out):
x = Conv2D(ch_in,
kernel_size=(1, 1),
padding='same',
use_bias=False)(x_in)
x = get_post(x)
x = DepthwiseConv2D(kernel_size=(1, 3), padding='same', use_bias=False)(x)
x = get_post(x)
x = MaxPool2D(pool_size=(2, 1),
strides=(2, 1))(x) # Separable pooling
x = DepthwiseConv2D(kernel_size=(3, 1),
padding='same',
use_bias=False)(x)
x = get_post(x)
x = Conv2D(ch_out,
kernel_size=(2, 1),
strides=(1, 2),
padding='same',
use_bias=False)(x)
x = get_post(x)
return x
def Effnet(input_shape, nb_classes, include_top=True, weights=None, num_inputs=None):
x_in = Input(shape=input_shape)
x = get_block(x_in, 32, 64)
x = get_block(x, 64, 128)
x = get_block(x, 128, 256)
if include_top:
x = Flatten()(x)
x = Dense(nb_classes, activation='softmax')(x)
model = Model(inputs=x_in, outputs=x)
if weights is not None:
model.load_weights(weights, by_name=True)
return model
#ShuffleNet implementation taken from
#https://github.com/arthurdouillard/keras-shufflenet.
#Validation of the converted .tflite model doesn't work, so this model is
#skipped for .tflite conversion and integration.
'''
from keras.models import Model
from keras.layers import *
from keras.activations import *
from keras.callbacks import *
import keras.backend as K
def _stage(tensor, nb_groups, in_channels, out_channels, repeat):
x = _shufflenet_unit(tensor, nb_groups, in_channels, out_channels, 2)
for _ in range(repeat):
x = _shufflenet_unit(x, nb_groups, out_channels, out_channels, 1)
return x
def _pw_group(tensor, nb_groups, in_channels, out_channels):
"""Pointwise grouped convolution."""
nb_chan_per_grp = in_channels // nb_groups
pw_convs = []
for grp in range(nb_groups):
x = Lambda(lambda x: x[:, :, :, nb_chan_per_grp * grp: nb_chan_per_grp * (grp + 1)])(tensor)
grp_out_chan = int(out_channels / nb_groups + 0.5)
pw_convs.append(
Conv2D(grp_out_chan,
kernel_size=(1, 1),
padding='same',
use_bias=False,
strides=1)(x)
)
return Concatenate(axis=-1)(pw_convs)
def _shuffle(x, nb_groups):
def shuffle_layer(x):
_, w, h, n = K.int_shape(x)
nb_chan_per_grp = n // nb_groups
x = K.reshape(x, (-1, w, h, nb_chan_per_grp, nb_groups))
x = K.permute_dimensions(x, (0, 1, 2, 4, 3)) # Transpose only grps and chs
x = K.reshape(x, (-1, w, h, n))
return x
return Lambda(shuffle_layer)(x)
def _shufflenet_unit(tensor, nb_groups, in_channels, out_channels, strides, shuffle=True, bottleneck=4):
bottleneck_channels = out_channels // bottleneck
x = _pw_group(tensor, nb_groups, in_channels, bottleneck_channels)
x = BatchNormalization()(x)
x = Activation('relu')(x)
if shuffle:
x = _shuffle(x, nb_groups)
x = DepthwiseConv2D(kernel_size=(3, 3),
padding='same',
use_bias=False,
strides=strides)(x)
x = BatchNormalization()(x)
x = _pw_group(x, nb_groups, bottleneck_channels,
out_channels if strides < 2 else out_channels - in_channels)
x = BatchNormalization()(x)
if strides < 2:
x = Add()([tensor, x])
else:
avg = AveragePooling2D(pool_size=(3, 3),
strides=2,
padding='same')(tensor)
x = Concatenate(axis=-1)([avg, x])
x = Activation('relu')(x)
return x
def _info(nb_groups):
return {
1: [24, 144, 288, 576],
2: [24, 200, 400, 800],
3: [24, 240, 480, 960],
4: [24, 272, 544, 1088],
8: [24, 384, 768, 1536]
}[nb_groups], [None, 3, 7, 3]
def ShuffleNet(input_shape, nb_classes, include_top=True, weights=None, nb_groups=8, num_inputs=None):
x_in = Input(shape=input_shape)
x = Conv2D(24,
kernel_size=(3, 3),
strides=2,
use_bias=False,
padding='same')(x_in)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(3, 3),
strides=2,
padding='same')(x)
channels_list, repeat_list = _info(nb_groups)
for i, (out_channels, repeat) in enumerate(zip(channels_list[1:], repeat_list[1:]), start=1):
x = _stage(x, nb_groups, channels_list[i-1], out_channels, repeat)
if include_top:
x = GlobalAveragePooling2D()(x)
x = Dense(nb_classes, activation='softmax')(x)
model = Model(inputs=x_in, outputs=x)
if weights is not None:
model.load_weights(weights, by_name=True)
return model
'''
#L-CNN implementation taken from
#https://github.com/radu-dogaru/LightWeight_Binary_CNN_and_ELM_Keras.
#Radu Dogaru and Ioana Dogaru, "BCONV-ELM: Binary Weights Convolutional Neural
#Network Simulator based on Keras/Tensorflow, for Low Complexity
#Implementations", Proceedings of the ISEEE 2019 conference, submitted.
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Activation
from keras.layers import Conv2D, DepthwiseConv2D, MaxPooling2D, AveragePooling2D
def LCNN(input_shape, num_classes, include_top=True, weights=None, num_inputs=None):
nhid1 = 0 # hidden-1 neurons (put 0 if nhid2=0, or a desired value)
nhid2 = 0 # hidden-2 neurons (take 0 for 0 or 1 hidden layer)
nr_conv = 2 # 0, 1, 2 or 3 (number of convolution layers - generally, a bigger number allows for better accuracy with the same complexity)
filtre1=8 ; filtre2=8 ; filtre3=8 # filters (kernels) per each layer
csize1=3; csize2=3 ; csize3=3 # convolution kernel size (square kernel)
psize1=4; psize2=4 ; psize3=4 # pooling size (square)
str1=2; str2=2; str3=2 # stride pooling (downsampling rate)
pad='same'; # padding style ('valid' is also an alternative)
type_conv=2 # 1='depth_wise' or 2='normal'
model = Sequential()
if nr_conv>=1:
if type_conv==2:
model.add(Conv2D(filtre1, padding=pad, kernel_size=(csize1, csize1), input_shape=input_shape))
elif type_conv==1:
model.add(DepthwiseConv2D(kernel_size=csize2, padding=pad, input_shape=input_shape, depth_multiplier=filtre1, use_bias=False))
#model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(psize1, psize1),strides=(str1,str1),padding=pad))
#model.add(Activation('relu'))
if nr_conv>=2:
if type_conv==2:
model.add(Conv2D(filtre2, padding=pad, kernel_size=(csize2, csize2)) )
elif type_conv==1:
model.add(DepthwiseConv2D(kernel_size=csize2, padding=pad, depth_multiplier=filtre2, use_bias=False))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(psize2, psize2),strides=(str2,str2),padding=pad))
#model.add(Activation('relu'))
if nr_conv==3:
if type_conv==2:
model.add(Conv2D(filtre3, padding=pad, kernel_size=(csize3, csize3)) )
elif type_conv==1:
model.add(DepthwiseConv2D(kernel_size=csize3, padding=pad, depth_multiplier=filtre3, use_bias=False))
#model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(psize3, psize3),strides=(str3,str3),padding=pad))
#model.add(Activation('relu'))
model.add(Activation('relu'))
model.add(Flatten())
#model.add(Activation('relu'))
#model.add(Dropout(0.25))
elif nr_conv==0:
model.add(Flatten(input_shape=input_shape))
# ---- first fc hidden layer
if nhid1>0:
model.add(Dense(nhid1, activation='relu'))
#model.add(Dropout(0.5))
# ---- second fc hidden layer
if nhid2>0:
model.add(Dense(nhid2, activation='relu'))
# model.add(Dropout(0.2))
# output layer
if (nhid1+nhid2)==0:
model.add(Dense(num_classes, activation='softmax',input_shape=(num_inputs,)))
else:
model.add(Dense(num_classes, activation='softmax'))
return model
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from keras.utils import to_categorical
#get a valid image size from the supported sizes array.
#if optional argument is provided, the function returns the closest image size
#from the supported sizes array to the wanted image size.
def calc_supported_image_size(image, supported_sizes, wanted_image_size = None):
if wanted_image_size is None:
image_shape = image.shape
wanted_image_size = int((image_shape[0] + image_shape[1]) / 2)
#calculate the closest differences from the wanted image sizes to the
#supported sizes.
min_diff_size = abs(wanted_image_size - supported_sizes[0])
min_diff_size_idx = 0
for idx in range(1, len(supported_sizes)):
temp_min_diff_size = abs(wanted_image_size - supported_sizes[idx])
if temp_min_diff_size < min_diff_size:
min_diff_size = temp_min_diff_size
min_diff_size_idx = idx
#return the closest size to the wanted image size from the supported
#size array.
return supported_sizes[min_diff_size_idx]
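# Sanity check of the loop above: it is equivalent to taking the supported
# size with the minimal absolute difference, e.g. 100 -> 128 here.
_closest = min([64, 128, 256], key=lambda s: abs(100 - s))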
#split the images into test and train image sets.
def image_split(splittableImagePaths, splittableClasses, test_size = 0.25, wanted_image_size = None, supported_sizes = [128]):
#get valid image size
image_size = calc_supported_image_size(np.asarray(Image.open(splittableImagePaths[0])), supported_sizes = supported_sizes, wanted_image_size = wanted_image_size)
#populate array with images that have been resized to the calculated image size.
splittableImages = []
#convert images to rgb for working with color images.
#resize images to the valid image size.
#pixel values range from 0 to 255.
for img in splittableImagePaths:
splittableImages.append(np.asarray(Image.open(img).convert('RGB').resize((image_size, image_size)), dtype = np.uint8))
#stratify is used for choosing an equal amount of pictures from each class
#for the test and train image sets respectively.
#test size is used to choose the percentage of how many images
#will be used for testing.
x_train, x_test, y_train, y_test = train_test_split(splittableImages, splittableClasses, test_size = test_size, stratify = splittableClasses)
#converts the classes array to a binary matrix
#that is understandable to the model algorithm.
#example:
#y_train = [0, 2, 1]
#encoded_y_train = [[1, 0, 0],
# [0, 0, 1],
# [0, 1, 0]]
encoded_y_train = to_categorical(y_train)
encoded_y_test = to_categorical(y_test)
return np.array(x_train), np.array(x_test), np.array(encoded_y_train), np.array(encoded_y_test)
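# The binary-matrix encoding above, illustrated with plain NumPy:
# class ids [0, 2, 1] become one-hot rows of an identity matrix.
_demo_onehot = np.eye(3)[np.array([0, 2, 1])]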
import matplotlib.pyplot as plt
import time as ti
from tensorflow.keras.models import load_model
#supported databases dictionary with their image extensions.
databases = {"ESSEX" : ".jpg", "JAFFE" : ".tiff", "ORL" : ".pgm"}
#supported models dictionary with their function implementations.
models = {"MobileNet" : MobileNet, "EffNet" : Effnet, "LCNN" : LCNN}
#model accuracy to be hit.
#any accuracies bigger than that will be ignored.
threshold_acc = 0.985
#go through all models.
for current_model in models:
#go through all databases.
for current_database in databases:
print("----------------{} + {}----------------".format(current_database, current_model))
test_size = 0.25
#get image paths and classes.
(splittableImagePaths, splittableClasses) = ImgClassLists("./FaceDB/{}/".format(current_database), databases[current_database])
#split the images into different datasets.
#x_train = images to use for the training process.
#y_train = classes to use for the training process.
#x_test = images to use for the testing process.
#y_test = classes to use for the testing process.
x_train, x_test, y_train, y_test = image_split(splittableImagePaths, splittableClasses, test_size = test_size)
input_shape = np.shape(x_train)[1:4]
num_classes = np.shape(y_train)[1]
print("Using database: {}".format(current_database))
print("Image shape: {}".format(input_shape))
print("Number of images: {}".format(len(splittableImagePaths)))
print("Number of classes: {}".format(num_classes))
#directory to save the model into.
saved_model_path = './saved_models/'
#model to be saved under this name.
saved_model_name = 'model_{}_{}'.format(current_database, current_model)
#model extension.
saved_model_ext = '.h5'
#converted lite model extension.
saved_model_lite = '.tflite'
#learning rate value.
lr = 0.001
#loss function.
loss = 'categorical_crossentropy'
#optimizer to be used.
optimizer = tf.keras.optimizers.Adam(lr = lr)
#metrics to be tracked.
metrics = ['accuracy']
#call the function to invoke chosen model.
model = models[current_model](input_shape, num_classes, num_inputs = np.shape(x_test)[1])
#compile model with set parameters
model.compile(loss = loss, optimizer = optimizer, metrics = metrics)
print("Using deep learning model: {}".format(current_model))
print("Loss function: {}".format(loss))
print("Metrics array: {}".format(metrics))
print("Optimizer config: {}".format(optimizer))
print("Learning rate: {}".format(lr))
#train "batch_size" images at a time.
batch_size = 5
#how many times to repeat the training process .
epochs = 100
#scores array to count the accuracy value for each epoch.
scores = np.zeros(epochs)
best_acc = 0
best_epoch = 0
last_epoch = 0
print('--- {} training images, {} test images, {} batch size ---'.format(len(x_train), len(x_test), batch_size))
#timestamp before beginning training.
t1 = ti.time()
for k in range(epochs):
print('\nEpoch {} out of {} running...'.format(k + 1, epochs))
#train the model with the provided train data set.
model.fit(x_train, y_train, batch_size = batch_size, verbose = 1, epochs = 1, validation_data = (x_test, y_test))
#check the accuracy of the model with the test data set.
score = model.evaluate(x_test, y_test, verbose = 0)
#remember best accuracy.
scores[k] = score[1]
last_epoch = k + 1
if score[1] > best_acc:
best_acc = score[1]
best_epoch = last_epoch
print('Improved accuracy on epoch {} : {}%'.format(k + 1, best_acc * 100))
#remember best weight values from best accuracy.
best_weights = model.get_weights()
if best_acc > threshold_acc:
print('Threshold accuracy surpassed. Stopping training at epoch {} out of {}.'.format(k + 1, epochs))
break
#timestamp after finishing training.
t2 = ti.time()
print('\nBest accuracy on epoch {} out of {} : {}%'.format(best_epoch, epochs, best_acc * 100))
print('Fitting took {} seconds.'.format(t2 - t1))
#set the best weights for the model.
model.set_weights(best_weights)
#save the trained model as a .h5 file.
model.save(saved_model_path + saved_model_name + saved_model_ext)
#plot the scores array for each database and model.
plt.figure()
plt.title("{} + {}".format(current_database, current_model))
plt.xlim(0, last_epoch)
plt.ylim(0, 1)
plt.xlabel('Epochs')
plt.ylabel('Accuracy (0 - min, 1 - max)')
plt.plot(scores[:last_epoch])
print('\nEvaluating {}\n'.format(saved_model_name + saved_model_ext))
t1 = ti.time()
#evaluate the trained model.
score = model.evaluate(x_test, y_test, verbose = 1)
t2 = ti.time()
print('\nTest accuracy: {}%'.format(score[1] * 100))
print('Test duration : {} seconds'.format(t2 - t1))
print('Latency (per input sample) : {} ms.'.format(1000*(t2-t1)/np.shape(x_test)[0]))
print('Converting model to .tflite...')
#loading the model from a filepath.
model = load_model(saved_model_path + saved_model_name + saved_model_ext)
#initializing tflite converter.
tflite_converter = tf.lite.TFLiteConverter.from_keras_model(model)
#the tflite model can use the post training quantize
#further into development.
tflite_converter.post_training_quantize=True
#convert the .h5 keras file to a .tflite file.
tflite_model = tflite_converter.convert()
#save the model.
open(saved_model_path + saved_model_name + saved_model_lite, "wb").write(tflite_model)
print('Validating .tflite model...')
#checking if the conversion was done successfully.
interpreter = tf.lite.Interpreter(model_path = saved_model_path + saved_model_name + saved_model_lite)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype = np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
model.summary()
#zip the saved models folder.
!zip -r saved_models.zip ./saved_models/
#download the zip.
from google.colab import files
files.download("saved_models.zip")
#generating labels file.
#labels file is used in the .tflite integration process.
#labels file is used to get the classes from the image databases
#and for each of them to be given a confidence score.
databases = {"ESSEX" : ".jpg", "JAFFE" : ".tiff", "ORL" : ".pgm"}
for db in databases:
#get the classes from the image folder.
classes = []
path = "./FaceDB/{}/".format(db)
for root, _, files in sorted(os.walk(path)):
#sort the files iterator.
for file in sorted(files):
if file.endswith(databases[db]):
#class name is based on the folder it's contained in.
className = root[len(path) : ]
classes.append(className)
#get only unique entries from the classes array.
unique = []
for cls in classes:
if cls not in unique:
unique.append(cls)
#save the unique class entries in a text file.
with open('labels_{}.txt'.format(db), 'w') as f:
f.write('\n'.join(unique))
```
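The manual de-duplication loop at the end can also be written with `dict.fromkeys`, which keeps first-seen order; a quick check:

```python
classes = ["alice", "bob", "alice", "carol", "bob"]
unique = list(dict.fromkeys(classes))   # first-seen order preserved
```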
# OpenCV example. Show webcam image and detect face.
It uses Lena's face and adds random noise to it if the video capture doesn't work for some reason.
https://gist.github.com/astanin/3097851
## OpenCV Haar Cascade classifier
http://docs.opencv.org/trunk/d7/d8b/tutorial_py_face_detection.html
https://github.com/opencv/opencv/tree/master/data/haarcascades
Exit by pressing `ESC`
```
from __future__ import print_function
import numpy as np
import cv2
# local modules
from video import create_capture
from common import clock, draw_str
def detect(img, cascade):
rects = cascade.detectMultiScale(img, scaleFactor=1.3, minNeighbors=4, minSize=(30, 30),
flags=cv2.CASCADE_SCALE_IMAGE)
if len(rects) == 0:
return []
rects[:,2:] += rects[:,:2]
return rects
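# detectMultiScale returns boxes as (x, y, w, h); the in-place addition
# above converts them to corner form (x1, y1, x2, y2). For example:
_box = np.array([[10, 20, 30, 40]])
_box[:, 2:] += _box[:, :2]        # -> [[10, 20, 40, 60]]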
def draw_rects(img, rects, color):
for x1, y1, x2, y2 in rects:
cv2.rectangle(img, (x1, y1), (x2, y2), color, 2)
import sys, getopt
print(__doc__)
args, video_src = getopt.getopt(sys.argv[2:], '', ['cascade=', 'nested-cascade='])
try:
video_src = video_src[0]
except:
video_src = 0
args = dict(args)
fullpath_openCV = '/home/patechoc/anaconda2/share/OpenCV/haarcascades/'
cascade_fn = args.get('--cascade', fullpath_openCV + "haarcascade_frontalface_alt.xml")
nested_fn = args.get('--nested-cascade', fullpath_openCV + "haarcascade_eye.xml")
cascade = cv2.CascadeClassifier(cascade_fn)
nested = cv2.CascadeClassifier(nested_fn)
cam = create_capture(video_src, fallback='synth:bg=./data/lena.jpg:noise=0.05')
while True:
ret, img = cam.read()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)
t = clock()
rects = detect(gray, cascade)
vis = img.copy()
draw_rects(vis, rects, (0, 255, 0))
if not nested.empty():
for x1, y1, x2, y2 in rects:
roi = gray[y1:y2, x1:x2]
vis_roi = vis[y1:y2, x1:x2]
subrects = detect(roi.copy(), nested)
draw_rects(vis_roi, subrects, (255, 0, 0))
dt = clock() - t
draw_str(vis, (20, 20), 'time: %.1f ms' % (dt*1000))
cv2.imshow('facedetect', vis)
if cv2.waitKey(5) == 27:
break
cv2.destroyAllWindows()
```
# Neural Nets with Keras
In this notebook you will learn how to implement neural networks using the Keras API. We will use TensorFlow's own implementation, *tf.keras*, which comes bundled with TensorFlow.
Don't hesitate to look at the documentation at [keras.io](https://keras.io/). All the code examples should work fine with tf.keras, the only difference is how to import Keras:
```python
# keras.io code:
from keras.layers import Dense
output_layer = Dense(10)
# corresponding tf.keras code:
from tensorflow.keras.layers import Dense
output_layer = Dense(10)
# or:
from tensorflow import keras
output_layer = keras.layers.Dense(10)
```
In this notebook, we will not use any TensorFlow-specific code, so everything you see would run just the same way on [keras-team](https://github.com/keras-team/keras) or any other Python implementation of the Keras API (except for the imports).
## Imports
```
%matplotlib inline
%load_ext tensorboard.notebook
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import sklearn
import sys
import tensorflow as tf
from tensorflow import keras # tf.keras
import time
print("python", sys.version)
for module in mpl, np, pd, sklearn, tf, keras:
print(module.__name__, module.__version__)
assert sys.version_info >= (3, 5) # Python ≥3.5 required
assert tf.__version__ >= "2.0" # TensorFlow ≥2.0 required
```
**Note**: The preview version of TensorFlow 2.0 shows up as version 1.13. That's okay. To test that this behaves like TF 2.0, we verify that `tf.function()` is present.

## Exercise 1 – TensorFlow Playground
Visit the [TensorFlow Playground](http://playground.tensorflow.org).
* **Layers and patterns**: try training the default neural network by clicking the "Run" button (top left). Notice how it quickly finds a good solution for the classification task. Notice that the neurons in the first hidden layer have learned simple patterns, while the neurons in the second hidden layer have learned to combine the simple patterns of the first hidden layer into more complex patterns. In general, the more layers, the more complex the patterns can be.
* **Activation function**: try replacing the Tanh activation function with the ReLU activation function, and train the network again. Notice that it finds a solution even faster, but this time the boundaries are linear. This is due to the shape of the ReLU function.
* **Local minima**: modify the network architecture to have just one hidden layer with three neurons. Train it multiple times (to reset the network weights, just add and remove a neuron). Notice that the training time varies a lot, and sometimes it even gets stuck in a local minimum.
* **Too small**: now remove one neuron to keep just 2. Notice that the neural network is now incapable of finding a good solution, even if you try multiple times. The model has too few parameters and it systematically underfits the training set.
* **Large enough**: next, set the number of neurons to 8 and train the network several times. Notice that it is now consistently fast and never gets stuck. This highlights an important finding in neural network theory: large neural networks almost never get stuck in local minima, and even when they do these local optima are almost as good as the global optimum. However, they can still get stuck on long plateaus for a long time.
* **Deep net and vanishing gradients**: now change the dataset to be the spiral (bottom right dataset under "DATA"). Change the network architecture to have 4 hidden layers with 8 neurons each. Notice that training takes much longer, and often gets stuck on plateaus for long periods of time. Also notice that the neurons in the highest layers (i.e. on the right) tend to evolve faster than the neurons in the lowest layers (i.e. on the left). This problem, called the "vanishing gradients" problem, can be alleviated using better weight initialization and other techniques, better optimizers (such as AdaGrad or Adam), or using Batch Normalization.
* **More**: go ahead and play with the other parameters to get a feel of what they do. In fact, after this course you should definitely play with this UI for at least one hour, it will grow your intuitions about neural networks significantly.

## Exercise 2 – Image classification with tf.keras
### Load the Fashion MNIST dataset
Let's start by loading the fashion MNIST dataset. Keras has a number of functions to load popular datasets in `keras.datasets`. The dataset is already split for you between a training set and a test set, but it can be useful to split the training set further to have a validation set:
```
fashion_mnist = keras.datasets.fashion_mnist
(X_train_full, y_train_full), (X_test, y_test) = (
fashion_mnist.load_data())
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
```
The training set contains 55,000 grayscale images, each 28x28 pixels:
```
X_train.shape
```
Each pixel intensity is represented by a uint8 (byte) from 0 to 255:
```
X_train[0]
```
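Models usually train on floats rather than raw bytes, so a common preprocessing step (a sketch, not part of the original cells) scales intensities to [0, 1]:

```python
import numpy as np

# Byte intensities 0..255 scaled to floats in [0.0, 1.0].
X = np.array([[0, 127, 255]], dtype=np.uint8)
X_scaled = X.astype(np.float32) / 255.0
```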
You can plot an image using Matplotlib's `imshow()` function, with a `'binary'`
color map:
```
plt.imshow(X_train[0], cmap="binary")
plt.show()
```
The labels are the class IDs (represented as uint8), from 0 to 9:
```
y_train
```
Here are the corresponding class names:
```
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
"Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
```
So the first image in the training set is a coat:
```
class_names[y_train[0]]
```
The validation set contains 5,000 images, and the test set contains 10,000 images:
```
X_valid.shape
X_test.shape
```
Let's take a look at a sample of the images in the dataset:
```
n_rows = 5
n_cols = 10
plt.figure(figsize=(n_cols*1.4, n_rows * 1.6))
for row in range(n_rows):
for col in range(n_cols):
index = n_cols * row + col
plt.subplot(n_rows, n_cols, index + 1)
plt.imshow(X_train[index], cmap="binary", interpolation="nearest")
plt.axis('off')
plt.title(class_names[y_train[index]])
plt.show()
```
This dataset has the same structure as the famous MNIST dataset (which you can load using `keras.datasets.mnist.load_data()`), except the images represent fashion items rather than handwritten digits, and it is much more challenging. A simple linear model can reach 92% accuracy on MNIST, but only 83% on fashion MNIST.
### Build a classification neural network with Keras
### 2.1)
Build a `Sequential` model (`keras.models.Sequential`), without any argument, then add four layers to it by calling its `add()` method:
* a `Flatten` layer (`keras.layers.Flatten`) to convert each 28x28 image to a single row of 784 pixel values. Since it is the first layer in your model, you should specify the `input_shape` argument, leaving out the batch size: `[28, 28]`.
* a `Dense` layer (`keras.layers.Dense`) with 300 neurons (aka units), and the `"relu"` activation function.
* Another `Dense` layer with 100 neurons, also with the `"relu"` activation function.
* A final `Dense` layer with 10 neurons (one per class), and with the `"softmax"` activation function to ensure that the sum of all the estimated class probabilities for each image is equal to 1.
```
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu"))
# TODO: add the two remaining Dense layers described above
```
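As a quick reminder of what the `"softmax"` activation in the last layer computes, here is a small NumPy sketch (the logits are made up for illustration):

```python
import numpy as np

z = np.array([2.0, 1.0, 0.1])        # made-up logits for 3 classes
probs = np.exp(z) / np.exp(z).sum()  # softmax: exponentiate, then normalize
# probs sums to 1, so it can be read as a distribution over the classes
```

The class with the largest logit also gets the largest probability, so `argmax` is unchanged by the softmax.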
### 2.2)
Alternatively, you can pass a list containing the 4 layers to the constructor of the `Sequential` model. The model's `layers` attribute holds the list of layers.
### 2.3)
Call the model's `summary()` method and examine the output. Also, try using `keras.utils.plot_model()` to save an image of your model's architecture. Alternatively, you can uncomment the following code to display the image within Jupyter.
**Warning**: you will need `pydot` and `graphviz` to use `plot_model()`.
### 2.4)
After a model is created, you must call its `compile()` method to specify the `loss` function and the `optimizer` to use. In this case, you want to use the `"sparse_categorical_crossentropy"` loss, and the `"sgd"` optimizer (stochastic gradient descent). Moreover, you can optionally specify a list of additional metrics that should be measured during training. In this case you should specify `metrics=["accuracy"]`. **Note**: you can find more loss functions in `keras.losses`, more metrics in `keras.metrics` and more optimizers in `keras.optimizers`.
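To see why the "sparse" variant fits here: the labels are plain class IDs rather than one-hot vectors, and the loss for one instance is just the negative log of the probability predicted for the true class. A small NumPy sketch with made-up numbers:

```python
import numpy as np

# made-up predicted probabilities over 10 classes for one image
p = np.full(10, 0.05)
p[3] = 0.55            # the model is fairly confident in class 3
y = 3                  # sparse label: just the class ID, no one-hot encoding
loss = -np.log(p[y])   # sparse categorical cross-entropy, about 0.598
```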
### 2.5)
Now your model is ready to be trained. Call its `fit()` method, passing it the input features (`X_train`) and the target classes (`y_train`). Set `epochs=10` (or else it will just run for a single epoch). You can also (optionally) pass the validation data by setting `validation_data=(X_valid, y_valid)`. If you do, Keras will compute the loss and the additional metrics (the accuracy in this case) on the validation set at the end of each epoch. If the performance on the training set is much better than on the validation set, your model is probably overfitting the training set (or there is a bug, such as a mismatch between the training set and the validation set).
**Note**: the `fit()` method will return a `History` object containing training stats. Make sure to preserve it (`history = model.fit(...)`).
### 2.6)
Try running `pd.DataFrame(history.history).plot()` to plot the learning curves. To make the graph more readable, you can also set `figsize=(8, 5)`, call `plt.grid(True)` and `plt.gca().set_ylim(0, 1)`.
### 2.7)
Try running `model.fit()` again, and notice that training continues where it left off.
### 2.8)
Call the model's `evaluate()` method, passing it the test set (`X_test` and `y_test`). This will compute the loss (cross-entropy) on the test set, as well as all the additional metrics (in this case, the accuracy). Your model should achieve over 80% accuracy on the test set.
### 2.9)
Define `X_new` as the first 10 instances of the test set. Call the model's `predict()` method to estimate the probability of each class for each instance (for better readability, you may use the output array's `round()` method):
### 2.10)
Often, you may only be interested in the most likely class. Use `np.argmax()` to get the class ID of the most likely class for each instance. **Tip**: you want to set `axis=1`.
### 2.11)
Call the model's `predict_classes()` method for `X_new`. You should get the same result as above.
### 2.12)
(Optional) It is often useful to know how confident the model is for each prediction. Try finding the estimated probability for each predicted class using `np.max()`.
### 2.13)
(Optional) You will often want the top k classes and their estimated probabilities rather than just the most likely class. You can use `np.argsort()` for this.

## Exercise 2 - Solution
### 2.1)
Build a `Sequential` model (`keras.models.Sequential`), without any argument, then add four layers to it by calling its `add()` method:
* a `Flatten` layer (`keras.layers.Flatten`) to convert each 28x28 image to a single row of 784 pixel values. Since it is the first layer in your model, you should specify the `input_shape` argument, leaving out the batch size: `[28, 28]`.
* a `Dense` layer (`keras.layers.Dense`) with 300 neurons (aka units), and the `"relu"` activation function.
* Another `Dense` layer with 100 neurons, also with the `"relu"` activation function.
* A final `Dense` layer with 10 neurons (one per class), and with the `"softmax"` activation function to ensure that the sum of all the estimated class probabilities for each image is equal to 1.
```
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu"))
model.add(keras.layers.Dense(100, activation="relu"))
model.add(keras.layers.Dense(10, activation="softmax"))
```
### 2.2)
Alternatively, you can pass a list containing the 4 layers to the constructor of the `Sequential` model. The model's `layers` attribute holds the list of layers.
```
model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax")
])
model.layers
```
### 2.3)
Call the model's `summary()` method and examine the output. Also, try using `keras.utils.plot_model()` to save an image of your model's architecture. Alternatively, you can uncomment the following code to display the image within Jupyter.
```
model.summary()
keras.utils.plot_model(model, "my_mnist_model.png", show_shapes=True)
```
```
%%html
<img src="my_mnist_model.png" />
```
**Warning**: at present, you need to use `from tensorflow.python.keras.utils.vis_utils import model_to_dot` instead of simply `keras.utils.model_to_dot`. See [TensorFlow issue 24639](https://github.com/tensorflow/tensorflow/issues/24639).
```
from IPython.display import SVG
from tensorflow.python.keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model, show_shapes=True).create(prog='dot', format='svg'))
```
### 2.4)
After a model is created, you must call its `compile()` method to specify the `loss` function and the `optimizer` to use. In this case, you want to use the `"sparse_categorical_crossentropy"` loss, and the `"sgd"` optimizer (stochastic gradient descent). Moreover, you can optionally specify a list of additional metrics that should be measured during training. In this case you should specify `metrics=["accuracy"]`. **Note**: you can find more loss functions in `keras.losses`, more metrics in `keras.metrics` and more optimizers in `keras.optimizers`.
```
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="sgd", metrics=["accuracy"])
```
### 2.5)
Now your model is ready to be trained. Call its `fit()` method, passing it the input features (`X_train`) and the target classes (`y_train`). Set `epochs=10` (or else it will just run for a single epoch). You can also (optionally) pass the validation data by setting `validation_data=(X_valid, y_valid)`. If you do, Keras will compute the loss and the additional metrics (the accuracy in this case) on the validation set at the end of each epoch. If the performance on the training set is much better than on the validation set, your model is probably overfitting the training set (or there is a bug, such as a mismatch between the training set and the validation set).
**Note**: the `fit()` method will return a `History` object containing training stats. Make sure to preserve it (`history = model.fit(...)`).
```
history = model.fit(X_train, y_train, epochs=10,
                    validation_data=(X_valid, y_valid))
```
### 2.6)
Try running `pd.DataFrame(history.history).plot()` to plot the learning curves. To make the graph more readable, you can also set `figsize=(8, 5)`, call `plt.grid(True)` and `plt.gca().set_ylim(0, 1)`.
```
def plot_learning_curves(history):
    pd.DataFrame(history.history).plot(figsize=(8, 5))
    plt.grid(True)
    plt.gca().set_ylim(0, 1)
    plt.show()

plot_learning_curves(history)
```
### 2.7)
Try running `model.fit()` again, and notice that training continues where it left off.
```
history = model.fit(X_train, y_train, epochs=10,
                    validation_data=(X_valid, y_valid))
```
### 2.8)
Call the model's `evaluate()` method, passing it the test set (`X_test` and `y_test`). This will compute the loss (cross-entropy) on the test set, as well as all the additional metrics (in this case, the accuracy). Your model should achieve over 80% accuracy on the test set.
```
model.evaluate(X_test, y_test)
```
### 2.9)
Define `X_new` as the first 10 instances of the test set. Call the model's `predict()` method to estimate the probability of each class for each instance (for better readability, you may use the output array's `round()` method):
```
n_new = 10
X_new = X_test[:n_new]
y_proba = model.predict(X_new)
y_proba.round(2)
```
### 2.10)
Often, you may only be interested in the most likely class. Use `np.argmax()` to get the class ID of the most likely class for each instance. **Tip**: you want to set `axis=1`.
```
y_pred = y_proba.argmax(axis=1)
y_pred
```
### 2.11)
Call the model's `predict_classes()` method for `X_new`. You should get the same result as above.
```
y_pred = model.predict_classes(X_new)
y_pred
```
### 2.12)
(Optional) It is often useful to know how confident the model is for each prediction. Try finding the estimated probability for each predicted class using `np.max()`.
```
y_proba.max(axis=1).round(2)
```
### 2.13)
(Optional) You will often want the top k classes and their estimated probabilities rather than just the most likely class. You can use `np.argsort()` for this.
```
k = 3
top_k = np.argsort(-y_proba, axis=1)[:, :k]
top_k
row_indices = np.tile(np.arange(len(top_k)), [k, 1]).T
y_proba[row_indices, top_k].round(2)
```

## Exercise 3 – Scale the features
### 3.1)
When using Gradient Descent, it is usually best to ensure that the features all have a similar scale, preferably with a Normal distribution. Try to standardize the pixel values and see if this improves the performance of your neural network.
**Tips**:
* For each feature (pixel intensity), you must subtract the `mean()` of that feature (across all instances, so use `axis=0`) and divide by its standard deviation (`std()`, again `axis=0`). Alternatively, you can use Scikit-Learn's `StandardScaler`.
* Make sure you compute the means and standard deviations on the training set, and use these statistics to scale the training set, the validation set and the test set (you should not fit the validation set or the test set, and computing the means and standard deviations counts as "fitting").
### 3.2)
Plot the learning curves. Do they look better than earlier?

## Exercise 3 – Solution
### 3.1)
When using Gradient Descent, it is usually best to ensure that the features all have a similar scale, preferably with a Normal distribution. Try to standardize the pixel values and see if this improves the performance of your neural network.
```
# Option 1: standardize manually, using statistics computed on the training set only
pixel_means = X_train.mean(axis=0)
pixel_stds = X_train.std(axis=0)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds

# Option 2 (equivalent): Scikit-Learn's StandardScaler, fit on the training set only
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float32).reshape(-1, 28 * 28)).reshape(-1, 28, 28)
X_valid_scaled = scaler.transform(X_valid.astype(np.float32).reshape(-1, 28 * 28)).reshape(-1, 28, 28)
X_test_scaled = scaler.transform(X_test.astype(np.float32).reshape(-1, 28 * 28)).reshape(-1, 28, 28)

model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="sgd", metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=20,
                    validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
```
### 3.2)
Plot the learning curves. Do they look better than earlier?
```
plot_learning_curves(history)
```

## Exercise 4 – Use Callbacks
### 4.1)
The `fit()` method accepts a `callbacks` argument. Try training your model with a large number of epochs, a validation set, and with a few callbacks from `keras.callbacks`:
* `TensorBoard`: specify a log directory. It should be a subdirectory of a root logdir, such as `./my_logs/run_1`, and it should be different every time you train your model. You can use a timestamp in the subdirectory's path to ensure that it changes at every run.
* `EarlyStopping`: specify `patience=5`
* `ModelCheckpoint`: specify the path of the checkpoint file to save (e.g., `"my_mnist_model.h5"`) and set `save_best_only=True`
Notice that the `EarlyStopping` callback will interrupt training before it reaches the requested number of epochs. This reduces the risk of overfitting.
```
root_logdir = os.path.join(os.curdir, "my_logs")
```
### 4.2)
The Jupyter plugin for tensorboard was loaded at the beginning of this notebook (`%load_ext tensorboard.notebook`), so you can now simply start it by using the `%tensorboard` magic command. Explore the various tabs available, in particular the SCALARS tab to view learning curves, the GRAPHS tab to view the computation graph, and the PROFILE tab which is very useful to identify bottlenecks if you run into performance issues.
```
%tensorboard --logdir=./my_logs
```
### 4.3)
The early stopping callback only stopped training after 5 epochs without progress, so your model may already have started to overfit the training set. Fortunately, since the `ModelCheckpoint` callback only saved the best models (on the validation set), the last saved model is the best on the validation set, so try loading it using `keras.models.load_model()`. Finally, evaluate it on the test set.
### 4.4)
Look at the list of available callbacks at https://keras.io/callbacks/

## Exercise 4 – Solution
### 4.1)
The `fit()` method accepts a `callbacks` argument. Try training your model with a large number of epochs, a validation set, and with a few callbacks from `keras.callbacks`:
* `TensorBoard`: specify a log directory. It should be a subdirectory of a root logdir, such as `./my_logs/run_1`, and it should be different every time you train your model. You can use a timestamp in the subdirectory's path to ensure that it changes at every run.
* `EarlyStopping`: specify `patience=5`
* `ModelCheckpoint`: specify the path of the checkpoint file to save (e.g., `"my_mnist_model.h5"`) and set `save_best_only=True`
Notice that the `EarlyStopping` callback will interrupt training before it reaches the requested number of epochs. This reduces the risk of overfitting.
```
import time

model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="sgd", metrics=["accuracy"])
logdir = os.path.join(root_logdir, "run_{}".format(time.time()))
callbacks = [
    keras.callbacks.TensorBoard(logdir),
    keras.callbacks.EarlyStopping(patience=5),
    keras.callbacks.ModelCheckpoint("my_mnist_model.h5", save_best_only=True),
]
history = model.fit(X_train_scaled, y_train, epochs=50,
                    validation_data=(X_valid_scaled, y_valid),
                    callbacks=callbacks)
```
### 4.2)
Done
### 4.3)
The early stopping callback only stopped training after 5 epochs without progress, so your model may already have started to overfit the training set. Fortunately, since the `ModelCheckpoint` callback only saved the best models (on the validation set), the last saved model is the best on the validation set, so try loading it using `keras.models.load_model()`. Finally, evaluate it on the test set.
```
model = keras.models.load_model("my_mnist_model.h5")
model.evaluate(X_test_scaled, y_test)
```
### 4.4)
Look at the list of available callbacks at https://keras.io/callbacks/

## Exercise 5 – A neural net for regression
### 5.1)
Load the California housing dataset using `sklearn.datasets.fetch_california_housing`. This returns an object with a `DESCR` attribute describing the dataset, a `data` attribute with the input features, and a `target` attribute with the labels. The goal is to predict the price of houses in a district (a census block) given some stats about that district. This is a regression task (predicting values).
### 5.2)
Split the dataset into a training set, a validation set and a test set using Scikit-Learn's `sklearn.model_selection.train_test_split()` function.
### 5.3)
Scale the input features (e.g., using a `sklearn.preprocessing.StandardScaler`). Once again, don't forget that you should not fit the validation set or the test set, only the training set.
### 5.4)
Now build, train and evaluate a neural network to tackle this problem. Then use it to make predictions on the test set.
**Tips**:
* Since you are predicting a single value per district (the median house price), there should only be one neuron in the output layer.
* Usually for regression tasks you don't want to use any activation function in the output layer (in some cases you may want to use `"relu"` or `"softplus"` if you want to constrain the predicted values to be positive, or `"sigmoid"` or `"tanh"` if you want to constrain them to the range 0 to 1, or -1 to 1, respectively).
* A good loss function for regression is generally the `"mean_squared_error"` (aka `"mse"`). When there are many outliers in your dataset, you may prefer to use the `"mean_absolute_error"` (aka `"mae"`), which is a bit less precise but less sensitive to outliers.
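A tiny made-up example of that outlier sensitivity: with one large error among four predictions, the squared term dominates the MSE while the MAE stays moderate:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 2.1, 3.1, 100.0])  # one large outlier error

mse = np.mean((y_true - y_pred) ** 2)   # about 2304, dominated by the outlier
mae = np.mean(np.abs(y_true - y_pred))  # about 24.1
```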

## Exercise 5 – Solution
### 5.1)
Load the California housing dataset using `sklearn.datasets.fetch_california_housing`. This returns an object with a `DESCR` attribute describing the dataset, a `data` attribute with the input features, and a `target` attribute with the labels. The goal is to predict the price of houses in a district (a census block) given some stats about that district. This is a regression task (predicting values).
```
from sklearn.datasets import fetch_california_housing
housing = fetch_california_housing()
print(housing.DESCR)
housing.data.shape
housing.target.shape
```
### 5.2)
Split the dataset into a training set, a validation set and a test set using Scikit-Learn's `sklearn.model_selection.train_test_split()` function.
```
from sklearn.model_selection import train_test_split
X_train_full, X_test, y_train_full, y_test = train_test_split(housing.data, housing.target, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X_train_full, y_train_full, random_state=42)
len(X_train), len(X_valid), len(X_test)
```
### 5.3)
Scale the input features (e.g., using a `sklearn.preprocessing.StandardScaler`). Once again, don't forget that you should not fit the validation set or the test set, only the training set.
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_valid_scaled = scaler.transform(X_valid)
X_test_scaled = scaler.transform(X_test)
```
### 5.4)
Now build, train and evaluate a neural network to tackle this problem. Then use it to make predictions on the test set.
```
model = keras.models.Sequential([
    keras.layers.Dense(30, activation="relu", input_shape=X_train.shape[1:]),
    keras.layers.Dense(1)
])
model.compile(loss="mean_squared_error", optimizer="sgd")
callbacks = [keras.callbacks.EarlyStopping(patience=10)]
history = model.fit(X_train_scaled, y_train,
                    validation_data=(X_valid_scaled, y_valid), epochs=100,
                    callbacks=callbacks)
model.evaluate(X_test_scaled, y_test)
model.predict(X_test_scaled)
plot_learning_curves(history)
```

## Exercise 6 – Hyperparameter search
### 6.1)
Try training your model multiple times, with a different learning rate each time (e.g., 1e-4, 3e-4, 1e-3, 3e-3, 3e-2), and compare the learning curves. For this, you need to create a `keras.optimizers.SGD` optimizer and specify the `learning_rate` in its constructor, then pass this `SGD` instance to the `compile()` method using the `optimizer` argument.
### 6.2)
Let's look at a more sophisticated way to tune hyperparameters. Create a `build_model()` function that takes three arguments, `n_hidden`, `n_neurons`, `learning_rate`, and builds, compiles and returns a model with the given number of hidden layers, the given number of neurons and the given learning rate. It is good practice to give a reasonable default value to each argument.
### 6.3)
Create a `keras.wrappers.scikit_learn.KerasRegressor` and pass the `build_model` function to the constructor. This gives you a Scikit-Learn compatible predictor. Try training it and using it to make predictions. Note that you can pass `epochs`, `callbacks` and `validation_data` to the `fit()` method.
### 6.4)
Use a `sklearn.model_selection.RandomizedSearchCV` to search the hyperparameter space of your `KerasRegressor`.
**Tips**:
* create a `param_distribs` dictionary where each key is the name of a hyperparameter you want to fine-tune (e.g., `"n_hidden"`), and each value is the list of values you want to explore (e.g., `[0, 1, 2, 3]`), or a Scipy distribution from `scipy.stats`.
* You can use the reciprocal distribution for the learning rate (e.g, `reciprocal(3e-3, 3e-2)`).
* Create a `RandomizedSearchCV`, passing the `KerasRegressor` and the `param_distribs` to its constructor, as well as the number of iterations (`n_iter`), and the number of cross-validation folds (`cv`). If you are short on time, you can set `n_iter=10` and `cv=3`. You may also want to set `verbose=2`.
* Finally, call the `RandomizedSearchCV`'s `fit()` method on the training set. Once again you can pass it `epochs`, `validation_data` and `callbacks` if you want to.
* The best parameters found will be available in the `best_params_` attribute, the best score will be in `best_score_`, and the best model will be in `best_estimator_`.
### 6.5)
Evaluate the best model found on the test set. You can either use the best estimator's `score()` method, or get its underlying Keras model *via* its `model` attribute, and call this model's `evaluate()` method. Note that the estimator returns the negative mean square error (it's a score, not a loss, so higher is better).
### 6.6)
Finally, save the best Keras model found. **Tip**: it is available via the best estimator's `model` attribute; you just need to call its `save()` method.
**Tip**: while a randomized search is nice and simple, there are more powerful (but complex) options available out there for hyperparameter search, for example:
* [Hyperopt](https://github.com/hyperopt/hyperopt)
* [Hyperas](https://github.com/maxpumperla/hyperas)
* [Sklearn-Deap](https://github.com/rsteca/sklearn-deap)
* [Scikit-Optimize](https://scikit-optimize.github.io/)
* [Spearmint](https://github.com/JasperSnoek/spearmint)
* [PyMC3](https://docs.pymc.io/)
* [GPFlow](https://gpflow.readthedocs.io/)
* [Yelp/MOE](https://github.com/Yelp/MOE)
* Commercial services such as: [Google Cloud ML Engine](https://cloud.google.com/ml-engine/docs/tensorflow/using-hyperparameter-tuning), [Arimo](https://arimo.com/) or [Oscar](http://oscar.calldesk.ai/)

## Exercise 6 – Solution
### 6.1)
Try training your model multiple times, with a different learning rate each time (e.g., 1e-4, 3e-4, 1e-3, 3e-3, 3e-2), and compare the learning curves. For this, you need to create a `keras.optimizers.SGD` optimizer and specify the `learning_rate` in its constructor, then pass this `SGD` instance to the `compile()` method using the `optimizer` argument.
```
learning_rates = [1e-4, 3e-4, 1e-3, 3e-3, 1e-2, 3e-2]
histories = []
for learning_rate in learning_rates:
    model = keras.models.Sequential([
        keras.layers.Dense(30, activation="relu", input_shape=X_train.shape[1:]),
        keras.layers.Dense(1)
    ])
    optimizer = keras.optimizers.SGD(learning_rate)
    model.compile(loss="mean_squared_error", optimizer=optimizer)
    callbacks = [keras.callbacks.EarlyStopping(patience=10)]
    history = model.fit(X_train_scaled, y_train,
                        validation_data=(X_valid_scaled, y_valid), epochs=100,
                        callbacks=callbacks)
    histories.append(history)

for learning_rate, history in zip(learning_rates, histories):
    print("Learning rate:", learning_rate)
    plot_learning_curves(history)
```
### 6.2)
Let's look at a more sophisticated way to tune hyperparameters. Create a `build_model()` function that takes three arguments, `n_hidden`, `n_neurons`, `learning_rate`, and builds, compiles and returns a model with the given number of hidden layers, the given number of neurons and the given learning rate. It is good practice to give a reasonable default value to each argument.
```
def build_model(n_hidden=1, n_neurons=30, learning_rate=3e-3):
    model = keras.models.Sequential()
    options = {"input_shape": X_train.shape[1:]}
    for layer in range(n_hidden + 1):
        model.add(keras.layers.Dense(n_neurons, activation="relu", **options))
        options = {}
    model.add(keras.layers.Dense(1, **options))
    optimizer = keras.optimizers.SGD(learning_rate)
    model.compile(loss="mse", optimizer=optimizer)
    return model
```
### 6.3)
Create a `keras.wrappers.scikit_learn.KerasRegressor` and pass the `build_model` function to the constructor. This gives you a Scikit-Learn compatible predictor. Try training it and using it to make predictions. Note that you can pass `epochs`, `callbacks` and `validation_data` to the `fit()` method.
```
keras_reg = keras.wrappers.scikit_learn.KerasRegressor(build_model)
keras_reg.fit(X_train_scaled, y_train, epochs=100,
              validation_data=(X_valid_scaled, y_valid),
              callbacks=[keras.callbacks.EarlyStopping(patience=10)])
keras_reg.predict(X_test_scaled)
```
### 6.4)
Use a `sklearn.model_selection.RandomizedSearchCV` to search the hyperparameter space of your `KerasRegressor`.
```
from scipy.stats import reciprocal
from sklearn.model_selection import RandomizedSearchCV

param_distribs = {
    "n_hidden": [0, 1, 2, 3],
    "n_neurons": np.arange(1, 100),
    "learning_rate": reciprocal(3e-4, 3e-2),
}
rnd_search_cv = RandomizedSearchCV(keras_reg, param_distribs, n_iter=10, cv=3, verbose=2)
rnd_search_cv.fit(X_train_scaled, y_train, epochs=100,
                  validation_data=(X_valid_scaled, y_valid),
                  callbacks=[keras.callbacks.EarlyStopping(patience=10)])
rnd_search_cv.best_params_
rnd_search_cv.best_score_
rnd_search_cv.best_estimator_
```
### 6.5)
Evaluate the best model found on the test set. You can either use the best estimator's `score()` method, or get its underlying Keras model *via* its `model` attribute, and call this model's `evaluate()` method. Note that the estimator returns the negative mean square error (it's a score, not a loss, so higher is better).
```
rnd_search_cv.score(X_test_scaled, y_test)
model = rnd_search_cv.best_estimator_.model
model.evaluate(X_test_scaled, y_test)
```
### 6.6)
Finally, save the best Keras model found. **Tip**: it is available via the best estimator's `model` attribute; you just need to call its `save()` method.
```
model.save("my_fine_tuned_housing_model.h5")
```

## Exercise 7 – The functional API
Not all neural network models are simply sequential. Some may have complex topologies. Some may have multiple inputs and/or multiple outputs. For example, a Wide & Deep neural network (see [paper](https://ai.google/research/pubs/pub45413)) connects all or part of the inputs directly to the output layer, as shown on the following diagram:
<img src="images/wide_and_deep_net.png" title="Wide and deep net" width=300 />
### 7.1)
Use Keras' functional API to implement a Wide & Deep network to tackle the California housing problem.
**Tips**:
* You need to create a `keras.layers.Input` layer to represent the inputs. Don't forget to specify the input `shape`.
* Create the `Dense` layers, and connect them by using them like functions. For example, `hidden1 = keras.layers.Dense(30, activation="relu")(input)` and `hidden2 = keras.layers.Dense(30, activation="relu")(hidden1)`
* Use the `keras.layers.concatenate()` function to concatenate the input layer and the second hidden layer's output.
* Create a `keras.models.Model` and specify its `inputs` and `outputs` (e.g., `inputs=[input]`).
* Then use this model just like a `Sequential` model: you need to compile it, display its summary, train it, evaluate it and use it to make predictions.
### 7.2)
After the Sequential API and the Functional API, let's try the Subclassing API:
* Create a subclass of the `keras.models.Model` class.
* Create all the layers you need in the constructor (e.g., `self.hidden1 = keras.layers.Dense(...)`).
* Use the layers to process the `input` in the `call()` method, and return the output.
* Note that you do not need to create a `keras.layers.Input` in this case.
* Also note that `self.output` is used by Keras, so you should use another name for the output layer (e.g., `self.output_layer`).
**When should you use the Subclassing API?**
* Both the Sequential API and the Functional API are declarative: you first declare the list of layers you need and how they are connected, and only then can you feed your model with actual data. The models that these APIs build are just static graphs of layers. This has many advantages (easy inspection, debugging, saving, loading, sharing, etc.), and they cover the vast majority of use cases, but if you need to build a very dynamic model (e.g., with loops or conditional branching), or if you want to experiment with new ideas using an imperative programming style, then the Subclassing API is for you. You can pretty much do any computation you want in the `call()` method, possibly with loops and conditions, using Keras layers or even low-level TensorFlow operations.
* However, this extra flexibility comes at the cost of less transparency. Since the model is defined within the `call()` method, Keras cannot fully inspect it. All it sees is the list of model attributes (which include the layers you define in the constructor), so when you display the model summary you just see a list of unconnected layers. Consequently, you cannot save or load the model without writing extra code. So this API is best used only when you really need the extra flexibility.
```
class MyModel(keras.models.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        # create layers here
    def call(self, input):
        # write any code here, using layers or even low-level TF code
        return output

model = MyModel()
```
### 7.3)
Now suppose you want to send only features 0 to 4 directly to the output, and only features 2 to 7 through the hidden layers, as shown on the following diagram. Use the functional API to build, train and evaluate this model.
**Tips**:
* You need to create two `keras.layers.Input` (`input_A` and `input_B`)
* Build the model using the functional API, as above, but when you build the `keras.models.Model`, remember to set `inputs=[input_A, input_B]`
* When calling `fit()`, `evaluate()` and `predict()`, instead of passing `X_train_scaled`, pass `(X_train_scaled_A, X_train_scaled_B)` (two NumPy arrays containing only the appropriate features copied from `X_train_scaled`).
<img src="images/multiple_inputs.png" title="Multiple inputs" width=300 />
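The feature split itself is just NumPy slicing; note that features 2 to 4 appear in both inputs. A minimal sketch on a made-up array (in the exercise you would slice `X_train_scaled`, `X_valid_scaled` and `X_test_scaled` the same way):

```python
import numpy as np

X = np.arange(16, dtype=np.float32).reshape(2, 8)  # 2 made-up instances, 8 features
X_A = X[:, :5]   # features 0 to 4, sent directly toward the output
X_B = X[:, 2:8]  # features 2 to 7, sent through the hidden layers
```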
### 7.4)
Build the multi-input and multi-output neural net represented in the following diagram.
<img src="images/multiple_inputs_and_outputs.png" title="Multiple inputs and outputs" width=400 />
**Why?**
There are many use cases in which having multiple outputs can be useful:
* Your task may require multiple outputs, for example, you may want to locate and classify the main object in a picture. This is both a regression task (finding the coordinates of the object's center, as well as its width and height) and a classification task.
* Similarly, you may have multiple independent tasks to perform based on the same data. Sure, you could train one neural network per task, but in many cases you will get better results on all tasks by training a single neural network with one output per task. This is because the neural network can learn features in the data that are useful across tasks.
* Another use case is as a regularization technique (i.e., a training constraint whose objective is to reduce overfitting and thus improve the model's ability to generalize). For example, you may want to add some auxiliary outputs in a neural network architecture (as shown in the diagram) to ensure that the underlying part of the network learns something useful on its own, without relying on the rest of the network.
**Tips**:
* Building the model is pretty straightforward using the functional API. Just make sure you specify both outputs when creating the `keras.models.Model`, for example `outputs=[output, aux_output]`.
* Each output has its own loss function. In this scenario, they will be identical, so you can either specify `loss="mse"` (this loss will apply to both outputs) or `loss=["mse", "mse"]`, which does the same thing.
* The final loss used to train the whole network is just a weighted sum of all loss functions. In this scenario, you want to give a much smaller weight to the auxiliary output, so when compiling the model, you must specify `loss_weights=[0.9, 0.1]`.
* When calling `fit()` or `evaluate()`, you need to pass the labels for all outputs. In this scenario the labels will be the same for the main output and for the auxiliary output, so make sure to pass `(y_train, y_train)` instead of `y_train`.
* The `predict()` method will return both the main output and the auxiliary output.

## Exercise 7 – Solution
### 7.1)
Use Keras' functional API to implement a Wide & Deep network to tackle the California housing problem.
```
input = keras.layers.Input(shape=X_train.shape[1:])
hidden1 = keras.layers.Dense(30, activation="relu")(input)
hidden2 = keras.layers.Dense(30, activation="relu")(hidden1)
concat = keras.layers.concatenate([input, hidden2])
output = keras.layers.Dense(1)(concat)
model = keras.models.Model(inputs=[input], outputs=[output])
model.compile(loss="mean_squared_error", optimizer="sgd")
model.summary()
history = model.fit(X_train_scaled, y_train, epochs=10,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.predict(X_test_scaled)
```
### 7.2)
After the Sequential API and the Functional API, let's try the Subclassing API:
* Create a subclass of the `keras.models.Model` class.
* Create all the layers you need in the constructor (e.g., `self.hidden1 = keras.layers.Dense(...)`).
* Use the layers to process the `input` in the `call()` method, and return the output.
* Note that you do not need to create a `keras.layers.Input` in this case.
* Also note that `self.output` is used by Keras, so you should use another name for the output layer (e.g., `self.output_layer`).
```
class MyModel(keras.models.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.hidden1 = keras.layers.Dense(30, activation="relu")
        self.hidden2 = keras.layers.Dense(30, activation="relu")
        self.output_ = keras.layers.Dense(1)

    def call(self, input):
        hidden1 = self.hidden1(input)
        hidden2 = self.hidden2(hidden1)
        concat = keras.layers.concatenate([input, hidden2])
        output = self.output_(concat)
        return output

model = MyModel()
model.compile(loss="mse", optimizer="sgd")
history = model.fit(X_train_scaled, y_train, epochs=10,
validation_data=(X_valid_scaled, y_valid))
model.summary()
model.evaluate(X_test_scaled, y_test)
model.predict(X_test_scaled)
```
### 7.3)
Now suppose you want to send only features 0 to 4 directly to the output, and only features 2 to 7 through the hidden layers, as shown on the diagram. Use the functional API to build, train and evaluate this model.
```
input_A = keras.layers.Input(shape=[5])
input_B = keras.layers.Input(shape=[6])
hidden1 = keras.layers.Dense(30, activation="relu")(input_B)
hidden2 = keras.layers.Dense(30, activation="relu")(hidden1)
concat = keras.layers.concatenate([input_A, hidden2])
output = keras.layers.Dense(1)(concat)
model = keras.models.Model(inputs=[input_A, input_B], outputs=[output])
model.compile(loss="mean_squared_error", optimizer="sgd")
model.summary()
X_train_scaled_A = X_train_scaled[:, :5]
X_train_scaled_B = X_train_scaled[:, 2:]
X_valid_scaled_A = X_valid_scaled[:, :5]
X_valid_scaled_B = X_valid_scaled[:, 2:]
X_test_scaled_A = X_test_scaled[:, :5]
X_test_scaled_B = X_test_scaled[:, 2:]
history = model.fit([X_train_scaled_A, X_train_scaled_B], y_train, epochs=10,
validation_data=([X_valid_scaled_A, X_valid_scaled_B], y_valid))
model.evaluate([X_test_scaled_A, X_test_scaled_B], y_test)
model.predict([X_test_scaled_A, X_test_scaled_B])
```
### 7.4)
Build the multi-input and multi-output neural net represented in the diagram.
```
input_A = keras.layers.Input(shape=X_train_scaled_A.shape[1:])
input_B = keras.layers.Input(shape=X_train_scaled_B.shape[1:])
hidden1 = keras.layers.Dense(30, activation="relu")(input_B)
hidden2 = keras.layers.Dense(30, activation="relu")(hidden1)
concat = keras.layers.concatenate([input_A, hidden2])
output = keras.layers.Dense(1)(concat)
aux_output = keras.layers.Dense(1)(hidden2)
model = keras.models.Model(inputs=[input_A, input_B],
outputs=[output, aux_output])
model.compile(loss="mean_squared_error", loss_weights=[0.9, 0.1],
optimizer="sgd")
model.summary()
history = model.fit([X_train_scaled_A, X_train_scaled_B], [y_train, y_train], epochs=10,
validation_data=([X_valid_scaled_A, X_valid_scaled_B], [y_valid, y_valid]))
model.evaluate([X_test_scaled_A, X_test_scaled_B], [y_test, y_test])
y_pred, y_pred_aux = model.predict([X_test_scaled_A, X_test_scaled_B])
y_pred
y_pred_aux
```

## Exercise 8 – Deep Nets
Let's go back to Fashion MNIST and build deep nets to tackle it. We need to load it, split it and scale it.
```
fashion_mnist = keras.datasets.fashion_mnist
(X_train_full, y_train_full), (X_test, y_test) = fashion_mnist.load_data()
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float32).reshape(-1, 28 * 28)).reshape(-1, 28, 28)
X_valid_scaled = scaler.transform(X_valid.astype(np.float32).reshape(-1, 28 * 28)).reshape(-1, 28, 28)
X_test_scaled = scaler.transform(X_test.astype(np.float32).reshape(-1, 28 * 28)).reshape(-1, 28, 28)
```
### 8.1)
Build a sequential model with 20 hidden dense layers, with 100 neurons each, using the ReLU activation function, plus the output layer (10 neurons, softmax activation function). Try to train it for 10 epochs on Fashion MNIST and plot the learning curves. Notice that progress is very slow.
### 8.2)
Update the model to add a `BatchNormalization` layer after every hidden layer. Notice that performance progresses much faster per epoch, although computations are much more intensive. Display the model summary and notice all the non-trainable parameters (the moving means and moving variances; the scale $\gamma$ and offset $\beta$ parameters are trainable).
### 8.3)
Try moving the BN layers before the hidden layers' activation functions. Does this affect the model's performance?
### 8.4)
Remove all the BN layers, and just use the SELU activation function instead (always use SELU with LeCun Normal weight initialization). Notice that you get better performance than with BN, and training is much faster. Isn't it marvelous? :-)
### 8.5)
Try training for 10 additional epochs, and notice that the model starts overfitting. Try adding a Dropout layer (with a 50% dropout rate) just before the output layer. Does it reduce overfitting? What about the final validation accuracy?
**Warning**: you should not use regular Dropout, as it breaks the self-normalizing property of the SELU activation function. Instead, use AlphaDropout, which is designed to work with SELU.

## Exercise 8 – Solution
### 8.1)
Build a sequential model with 20 hidden dense layers, with 100 neurons each, using the ReLU activation function, plus the output layer (10 neurons, softmax activation function). Try to train it for 10 epochs on Fashion MNIST and plot the learning curves. Notice that progress is very slow.
```
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
for _ in range(20):
    model.add(keras.layers.Dense(100, activation="relu"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=10,
validation_data=(X_valid_scaled, y_valid))
plot_learning_curves(history)
```
### 8.2)
Update the model to add a `BatchNormalization` layer after every hidden layer. Notice that performance progresses much faster per epoch, although computations are much more intensive. Display the model summary and notice all the non-trainable parameters (the moving means and moving variances; the scale $\gamma$ and offset $\beta$ parameters are trainable).
```
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
for _ in range(20):
    model.add(keras.layers.Dense(100, activation="relu"))
    model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=10,
validation_data=(X_valid_scaled, y_valid))
plot_learning_curves(history)
model.summary()
```
### 8.3)
Try moving the BN layers before the hidden layers' activation functions. Does this affect the model's performance?
```
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
for _ in range(20):
    model.add(keras.layers.Dense(100))
    model.add(keras.layers.BatchNormalization())
    model.add(keras.layers.Activation("relu"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=10,
validation_data=(X_valid_scaled, y_valid))
plot_learning_curves(history)
```
### 8.4)
Remove all the BN layers, and just use the SELU activation function instead (always use SELU with LeCun Normal weight initialization). Notice that you get better performance than with BN, and training is much faster. Isn't it marvelous? :-)
```
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
for _ in range(20):
    model.add(keras.layers.Dense(100, activation="selu",
                                 kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=10,
validation_data=(X_valid_scaled, y_valid))
plot_learning_curves(history)
```
### 8.5)
Try training for 10 additional epochs, and notice that the model starts overfitting. Try adding a Dropout layer (with a 50% dropout rate) just before the output layer. Does it reduce overfitting? What about the final validation accuracy?
```
history = model.fit(X_train_scaled, y_train, epochs=10,
validation_data=(X_valid_scaled, y_valid))
plot_learning_curves(history)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
for _ in range(20):
    model.add(keras.layers.Dense(100, activation="selu",
                                 kernel_initializer="lecun_normal"))
model.add(keras.layers.AlphaDropout(rate=0.5))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=20,
validation_data=(X_valid_scaled, y_valid))
plot_learning_curves(history)
```
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
# Numerical Integration
The definite integral $\int_a^b f(x) dx$ can be computed exactly if the primitive $F$ of $f$ is known, e.g.
```
f = lambda x: np.divide(np.dot(x,np.exp(x)),np.power(x+1,2))
F = lambda x: np.divide(np.exp(x),(x+1))
a = 0; b = 1;
I_ex = F(b) - F(a)
I_ex
```
In many cases the primitive is unknown though and one has to resort to numerical integration. The idea is to approximate the integrand by a function whose integral is known, e.g. piecewise linear interpolation.
- [Riemann Sums](https://www.math.ubc.ca/~pwalls/math-python/integration/riemann-sums/): sum of rectangles
- [Trapezoid Rule](https://www.math.ubc.ca/~pwalls/math-python/integration/trapezoid-rule/): sum of trapezoids
or piecewise quadratic interpolation
- [Simpson Rule](https://www.math.ubc.ca/~pwalls/math-python/integration/simpsons-rule/): quadratic polynomial on each subinterval
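The rectangle idea can be sketched in a few lines. The following is a minimal illustration (not taken from the linked pages): a left Riemann sum, which uses the value of $f$ at the left endpoint of each subinterval as the rectangle height.

```python
import numpy as np

def left_riemann(f, a, b, N=50):
    # Sum of N rectangles; heights are f at the left endpoint of each subinterval
    x = np.linspace(a, b, N + 1)   # N+1 points make N subintervals
    dx = (b - a) / N
    return dx * np.sum(f(x[:-1]))

# \int_0^1 3x^2 dx = 1; the left sum slightly underestimates an increasing f
print(left_riemann(lambda x: 3 * x**2, 0, 1, 1000))
```

For an increasing integrand the left sum underestimates and the right sum (using `x[1:]`) overestimates; the trapezoid rule below is exactly their average.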
Trapezoids:
The definite integral of $f(x)$ is equal to the (net) area under the curve $y=f(x)$ over the interval $[a,b]$. Riemann sums approximate definite integrals by using sums of rectangles to approximate the area.
The trapezoid rule gives a better approximation of a definite integral by summing the areas of the trapezoids connecting the points
$$(x_{i-1},0),(x_i,0),(x_{i-1},f(x_{i-1})),(x_i,f(x_i))$$
for each subinterval $[x_{i-1},x_i]$ of a partition. Note that the area of each trapezoid is the sum of a rectangle and a triangle
$$(x_i-x_{i-1})f(x_{i-1})+\frac{1}{2}(x_i-x_{i-1})(f(x_i)-f(x_{i-1}))=\frac{1}{2}(f(x_i)+f(x_{i-1}))(x_i-x_{i-1})$$
For example, we can use a single trapezoid to approximate:
$$\int_0^1 e^{-x^2}\,dx$$
First, let's plot the curve $y=e^{-x^2}$ and the trapezoid on the interval $[0,1]$:
```
x = np.linspace(-0.5,1.5,100)
y = np.exp(-x**2)
plt.plot(x,y)
x0 = 0; x1 = 1;
y0 = np.exp(-x0**2); y1 = np.exp(-x1**2);
plt.fill_between([x0,x1],[y0,y1])
plt.xlim([-0.5,1.5]); plt.ylim([0,1.5]);
plt.show()
```
Approximate the integral by the area of the trapezoid:
```
A = 0.5*(y1 + y0)*(x1 - x0)
print("Trapezoid area:", A)
```
## Trapezoid Rule
This choice leads to the trapezoidal rule. If the interval $[a,b]$ is divided into subintervals $[x_k, x_{k+1}]$ of the same length $h = (b-a)/n$, with $x_0 := a$ and $x_n := b$, the summed version reads
$$\int_a^b f(x) dx \approx \frac{h}{2}(f(a) + f(b)) + h \sum_{k=1}^{n-1} f(x_k) =: T(h). $$
This is implemented in `trapz`. The error of the numerical integral is
$$\left| T(h) - \int_a^b f(x) dx \right| = \frac{(b-a)h^2}{12} |f''(\xi)|, \quad \xi\in[a,b]$$
so if the number of intervals is doubled (and hence $h$ is halved) then the error is expected to decrease by a factor of 4. Let's check:
Let's write a function called trapz which takes input parameters $f,a,b$ and $N$ and returns the approximation $T_N(f)$. Furthermore, let's assign default value $N=50$. ([source](https://www.math.ubc.ca/~pwalls/math-python/integration/trapezoid-rule/))
```
def trapz(f,a,b,N=50):
    '''Approximate the integral of f(x) from a to b by the trapezoid rule.

    The trapezoid rule approximates the integral \int_a^b f(x) dx by the sum:
    (dx/2) \sum_{k=1}^N (f(x_k) + f(x_{k-1}))
    where x_k = a + k*dx and dx = (b - a)/N.

    Parameters
    ----------
    f : function
        Vectorized function of a single variable
    a , b : numbers
        Interval of integration [a,b]
    N : integer
        Number of subintervals of [a,b]

    Returns
    -------
    float
        Approximation of the integral of f(x) from a to b using the
        trapezoid rule with N subintervals of equal length.

    Examples
    --------
    >>> trapz(np.sin,0,np.pi/2,1000)
    0.9999997943832332
    '''
    x = np.linspace(a,b,N+1) # N+1 points make N subintervals
    y = f(x)
    y_right = y[1:]  # right endpoints
    y_left = y[:-1]  # left endpoints
    dx = (b - a)/N
    T = (dx/2) * np.sum(y_right + y_left)
    return T
```
Let's test our function on an integral where we know the answer
$$\int_0^1 3x^2 dx=1$$
```
trapz(lambda x : 3*x**2,0,1,10000)
```
The SciPy subpackage `scipy.integrate` contains several functions for approximating definite integrals and numerically solving differential equations. Let's import the subpackage under the name `spi`.
```
import scipy.integrate as spi
```
The function `scipy.integrate.trapz` computes the approximation of a definite integral by the trapezoid rule. Consulting the documentation, we see that all we need to do is supply arrays of $x$ and $y$ values for the integrand, and `scipy.integrate.trapz` returns the approximation of the integral using the trapezoid rule. The number of points we give to `scipy.integrate.trapz` is up to us, but we have to remember that more points give a better approximation while taking more time to compute!
```
N = 10000; a = 0; b = 1;
x = np.linspace(a,b,N+1)
y = 3*x**2
approximation = spi.trapz(y,x)
print(approximation)
```
## Simpson Rule
Simpson's rule uses a quadratic polynomial on each subinterval of a partition to approximate the function $f(x)$ and to compute the definite integral. This is an improvement over the trapezoid rule which approximates $f(x)$ by a straight line on each subinterval of a partition.
Here $[a,b]$ is divided into an even number $2n$ of intervals, so $h=(b-a)/(2n)$.
The formula for Simpson's rule is
$$\int_a^b f(x) dx \approx \frac{h}{3} \left( f(a) + f(b) + 4 \sum_{k=1}^{n} f(x_{2k-1}) + 2 \sum_{k=1}^{n-1} f(x_{2k}) \right) =: S(h). $$
The error goes like $h^4$ (instead of $h^2$ for the trapezoidal rule):
$$\left| S(h) - \int_a^b f(x) dx \right| = \frac{(b-a)h^4}{180} |f^{(4)}(\xi)|, \quad \xi\in[a,b].$$
So when the number of intervals is doubled, the error should decrease by a factor of 16:
Let's write a function called simps which takes input parameters $f,a,b$ and $N$ and returns the approximation $S_N(f)$. Furthermore, let's assign a default value $N=50$.
```
def simps(f,a,b,N=50):
    '''Approximate the integral of f(x) from a to b by Simpson's rule.

    Simpson's rule approximates the integral \int_a^b f(x) dx by the sum:
    (dx/3) \sum_{i=1}^{N/2} (f(x_{2i-2}) + 4f(x_{2i-1}) + f(x_{2i}))
    where x_i = a + i*dx and dx = (b - a)/N.

    Parameters
    ----------
    f : function
        Vectorized function of a single variable
    a , b : numbers
        Interval of integration [a,b]
    N : (even) integer
        Number of subintervals of [a,b]

    Returns
    -------
    float
        Approximation of the integral of f(x) from a to b using
        Simpson's rule with N subintervals of equal length.

    Examples
    --------
    >>> simps(lambda x : 3*x**2,0,1,10)
    1.0
    '''
    if N % 2 == 1:
        raise ValueError("N must be an even integer.")
    dx = (b-a)/N
    x = np.linspace(a,b,N+1)
    y = f(x)
    S = dx/3 * np.sum(y[0:-1:2] + 4*y[1::2] + y[2::2])
    return S
```
Let's test our function on an integral where we know the answer
$$\int_0^1 3x^2 dx=1$$
```
simps(lambda x : 3*x**2,0,1,10)
```
The SciPy subpackage `scipy.integrate` contains several functions for approximating definite integrals and numerically solving differential equations. Let's import the subpackage under the name spi.
```
import scipy.integrate as spi
```
The function `scipy.integrate.simps` computes the approximation of a definite integral by Simpson's rule. Consulting the documentation, we see that all we need to do is supply arrays of $x$ and $y$ values for the integrand, and `scipy.integrate.simps` returns the approximation of the integral using Simpson's rule.
```
N = 10; a = 0; b = 1;
x = np.linspace(a,b,N+1)
y = 3*x**2
approximation = spi.simps(y,x)
print(approximation)
```
```
! pip install datasets transformers
from huggingface_hub import notebook_login
notebook_login()
model_checkpoint = "xlm-roberta-large" # "xlm-roberta-base" # "xlm-roberta-large" # "bert-base-multilingual-uncased" # "distilbert-base-uncased"
batch_size = 4
from google.colab import drive
drive.mount('/content/drive')
from datasets import load_dataset, load_metric
data_files = {"train": "/content/drive/My Drive/nlp/datasets/train2.json", "validation": "/content/drive/My Drive/nlp/datasets/test2.json"}
#datasets = load_dataset("json", data_files="/content/drive/My Drive/nlp/datasets/dev-v2.0.json")
datasets = load_dataset("json", data_files=data_files, field="data")
datasets = load_dataset("squad")
datasets
datasets["train"] = datasets["train"].select(list(range(0, 30000)))
datasets["validation"] = datasets["validation"].select(list(range(0, 1000)))
datasets["test"] = datasets["validation"].select(list(range(1000, 5928)))
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
#!pip install transformers
#!pip install transformers[sentencepiece]
#from transformers import BertTokenizer, RobertaTokenizer, CamembertTokenizer, PreTrainedTokenizerFast
#import sentencepiece
#just SLO
#tokenizer = CamembertTokenizer.from_pretrained('drive/My Drive/ucenje_BERT/bert_pretrained_just_slo/sloberta/', return_offsets_mapping=True)
import transformers
assert isinstance(tokenizer, transformers.PreTrainedTokenizerFast)
max_length = 384 # The maximum length of a feature (question and context)
doc_stride = 128 # The authorized overlap between two part of the context when splitting it is needed.
pad_on_right = tokenizer.padding_side == "right"
def prepare_train_features(examples):
    # Some of the questions have lots of whitespace on the left, which is not useful and will make the
    # truncation of the context fail (the tokenized question will take a lot of space). So we remove that
    # left whitespace
    examples["question"] = [q.lstrip() for q in examples["question"]]

    # Tokenize our examples with truncation and padding, but keep the overflows using a stride. This results
    # in one example possibly giving several features when a context is long, each of those features having a
    # context that overlaps a bit the context of the previous feature.
    tokenized_examples = tokenizer(
        examples["question" if pad_on_right else "context"],
        examples["context" if pad_on_right else "question"],
        truncation="only_second" if pad_on_right else "only_first",
        max_length=max_length,
        stride=doc_stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )

    # Since one example might give us several features if it has a long context, we need a map from a feature to
    # its corresponding example. This key gives us just that.
    sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
    # The offset mappings will give us a map from token to character position in the original context. This will
    # help us compute the start_positions and end_positions.
    offset_mapping = tokenized_examples.pop("offset_mapping")

    # Let's label those examples!
    tokenized_examples["start_positions"] = []
    tokenized_examples["end_positions"] = []

    for i, offsets in enumerate(offset_mapping):
        # We will label impossible answers with the index of the CLS token.
        input_ids = tokenized_examples["input_ids"][i]
        cls_index = input_ids.index(tokenizer.cls_token_id)

        # Grab the sequence corresponding to that example (to know what is the context and what is the question).
        sequence_ids = tokenized_examples.sequence_ids(i)

        # One example can give several spans, this is the index of the example containing this span of text.
        sample_index = sample_mapping[i]
        answers = examples["answers"][sample_index]
        # If no answers are given, set the cls_index as answer.
        if len(answers["answer_start"]) == 0:
            tokenized_examples["start_positions"].append(cls_index)
            tokenized_examples["end_positions"].append(cls_index)
        else:
            # Start/end character index of the answer in the text.
            start_char = answers["answer_start"][0]
            end_char = start_char + len(answers["text"][0])

            # Start token index of the current span in the text.
            token_start_index = 0
            while sequence_ids[token_start_index] != (1 if pad_on_right else 0):
                token_start_index += 1

            # End token index of the current span in the text.
            token_end_index = len(input_ids) - 1
            while sequence_ids[token_end_index] != (1 if pad_on_right else 0):
                token_end_index -= 1

            # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).
            if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):
                tokenized_examples["start_positions"].append(cls_index)
                tokenized_examples["end_positions"].append(cls_index)
            else:
                # Otherwise move the token_start_index and token_end_index to the two ends of the answer.
                # Note: we could go after the last offset if the answer is the last word (edge case).
                while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:
                    token_start_index += 1
                tokenized_examples["start_positions"].append(token_start_index - 1)
                while offsets[token_end_index][1] >= end_char:
                    token_end_index -= 1
                tokenized_examples["end_positions"].append(token_end_index + 1)

    return tokenized_examples
tokenized_datasets = datasets.map(prepare_train_features, batched=True, remove_columns=datasets["train"].column_names)
from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer
model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)
#from transformers import BertForSequenceClassification, CamembertForQuestionAnswering, AdamW, BertConfig
#slo
#model = CamembertForQuestionAnswering.from_pretrained(
# 'drive/My Drive/ucenje_BERT/bert_pretrained_just_slo/sloberta/'
#)
model_name = model_checkpoint.split("/")[-1]
args = TrainingArguments(
f"{model_name}-finetuned-squad",
evaluation_strategy = "epoch",
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=1,
weight_decay=0.01,
push_to_hub=True,
)
from transformers import default_data_collator
data_collator = default_data_collator
trainer = Trainer(
model,
args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
)
trainer.train()
trainer.save_model("squad-multilingual-base-1-epoch")
```
## Evaluation
```
import torch
for batch in trainer.get_eval_dataloader():
    break
batch = {k: v.to(trainer.args.device) for k, v in batch.items()}
with torch.no_grad():
    output = trainer.model(**batch)
output.keys()
output.start_logits.shape, output.end_logits.shape
output.start_logits.argmax(dim=-1), output.end_logits.argmax(dim=-1)
n_best_size = 20
import numpy as np
start_logits = output.start_logits[0].cpu().numpy()
end_logits = output.end_logits[0].cpu().numpy()
# Gather the indices the best start/end logits:
start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
valid_answers = []
for start_index in start_indexes:
    for end_index in end_indexes:
        if start_index <= end_index:  # We need to refine that test to check the answer is inside the context
            valid_answers.append(
                {
                    "score": start_logits[start_index] + end_logits[end_index],
                    "text": ""  # We need to find a way to get back the original substring corresponding to the answer in the context
                }
            )
def prepare_validation_features(examples):
    # Some of the questions have lots of whitespace on the left, which is not useful and will make the
    # truncation of the context fail (the tokenized question will take a lot of space). So we remove that
    # left whitespace
    examples["question"] = [q.lstrip() for q in examples["question"]]

    # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
    # in one example possibly giving several features when a context is long, each of those features having a
    # context that overlaps a bit the context of the previous feature.
    tokenized_examples = tokenizer(
        examples["question" if pad_on_right else "context"],
        examples["context" if pad_on_right else "question"],
        truncation="only_second" if pad_on_right else "only_first",
        max_length=max_length,
        stride=doc_stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )

    # Since one example might give us several features if it has a long context, we need a map from a feature to
    # its corresponding example. This key gives us just that.
    sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")

    # We keep the example_id that gave us this feature and we will store the offset mappings.
    tokenized_examples["example_id"] = []

    for i in range(len(tokenized_examples["input_ids"])):
        # Grab the sequence corresponding to that example (to know what is the context and what is the question).
        sequence_ids = tokenized_examples.sequence_ids(i)
        context_index = 1 if pad_on_right else 0

        # One example can give several spans, this is the index of the example containing this span of text.
        sample_index = sample_mapping[i]
        tokenized_examples["example_id"].append(examples["id"][sample_index])

        # Set to None the offset_mapping that are not part of the context so it's easy to determine if a token
        # position is part of the context or not.
        tokenized_examples["offset_mapping"][i] = [
            (o if sequence_ids[k] == context_index else None)
            for k, o in enumerate(tokenized_examples["offset_mapping"][i])
        ]

    return tokenized_examples

validation_features = datasets["validation"].map(
    prepare_validation_features,
    batched=True,
    remove_columns=datasets["validation"].column_names
)
raw_predictions = trainer.predict(validation_features)
validation_features.set_format(type=validation_features.format["type"], columns=list(validation_features.features.keys()))
max_answer_length = 30
start_logits = output.start_logits[0].cpu().numpy()
end_logits = output.end_logits[0].cpu().numpy()
offset_mapping = validation_features[0]["offset_mapping"]
# The first feature comes from the first example. For the more general case, we will need to match the example_id to
# an example index
context = datasets["validation"][0]["context"]
# Gather the indices the best start/end logits:
start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
valid_answers = []
for start_index in start_indexes:
    for end_index in end_indexes:
        # Don't consider out-of-scope answers, either because the indices are out of bounds or correspond
        # to part of the input_ids that are not in the context.
        if (
            start_index >= len(offset_mapping)
            or end_index >= len(offset_mapping)
            or offset_mapping[start_index] is None
            or offset_mapping[end_index] is None
        ):
            continue
        # Don't consider answers with a length that is either < 0 or > max_answer_length.
        if end_index < start_index or end_index - start_index + 1 > max_answer_length:
            continue
        if start_index <= end_index:  # We need to refine that test to check the answer is inside the context
            start_char = offset_mapping[start_index][0]
            end_char = offset_mapping[end_index][1]
            valid_answers.append(
                {
                    "score": start_logits[start_index] + end_logits[end_index],
                    "text": context[start_char: end_char]
                }
            )
valid_answers = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[:n_best_size]
valid_answers
import collections
examples = datasets["validation"]
features = validation_features
example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
features_per_example = collections.defaultdict(list)
for i, feature in enumerate(features):
    features_per_example[example_id_to_index[feature["example_id"]]].append(i)
from tqdm.auto import tqdm
def postprocess_qa_predictions(examples, features, raw_predictions, n_best_size = 20, max_answer_length = 30):
all_start_logits, all_end_logits = raw_predictions
# Build a map example to its corresponding features.
example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
features_per_example = collections.defaultdict(list)
for i, feature in enumerate(features):
features_per_example[example_id_to_index[feature["example_id"]]].append(i)
# The dictionaries we have to fill.
predictions = collections.OrderedDict()
# Logging.
print(f"Post-processing {len(examples)} example predictions split into {len(features)} features.")
# Let's loop over all the examples!
for example_index, example in enumerate(tqdm(examples)):
# Those are the indices of the features associated to the current example.
feature_indices = features_per_example[example_index]
min_null_score = None # Only used if squad_v2 is True.
valid_answers = []
context = example["context"]
# Looping through all the features associated to the current example.
for feature_index in feature_indices:
# We grab the predictions of the model for this feature.
start_logits = all_start_logits[feature_index]
end_logits = all_end_logits[feature_index]
# This is what will allow us to map some the positions in our logits to span of texts in the original
# context.
offset_mapping = features[feature_index]["offset_mapping"]
# Update minimum null prediction.
cls_index = features[feature_index]["input_ids"].index(tokenizer.cls_token_id)
feature_null_score = start_logits[cls_index] + end_logits[cls_index]
if min_null_score is None or min_null_score < feature_null_score:
min_null_score = feature_null_score
# Go through all possibilities for the `n_best_size` greater start and end logits.
start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
for start_index in start_indexes:
for end_index in end_indexes:
# Don't consider out-of-scope answers, either because the indices are out of bounds or correspond
# to part of the input_ids that are not in the context.
try:
if (
start_index >= len(offset_mapping)
or end_index >= len(offset_mapping)
or offset_mapping[start_index] is None
or offset_mapping[end_index] is None
):
continue
# Don't consider answers with a length that is either < 0 or > max_answer_length.
if end_index < start_index or end_index - start_index + 1 > max_answer_length:
continue
start_char = offset_mapping[start_index][0]
end_char = offset_mapping[end_index][1]
valid_answers.append(
{
"score": start_logits[start_index] + end_logits[end_index],
"text": context[start_char: end_char]
}
)
except Exception as exc:
    print(f"error while scoring span ({start_index}, {end_index}): {exc}")
if len(valid_answers) > 0:
best_answer = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[0]
else:
# In the very rare edge case we have not a single non-null prediction, we create a fake prediction to avoid
# failure.
best_answer = {"text": "", "score": 0.0}
# Let's pick our final answer: the best one or the null answer (only for squad_v2)
#if not squad_v2:
# predictions[example["id"]] = best_answer["text"]
#else:
# answer = best_answer["text"] if best_answer["score"] > min_null_score else ""
# predictions[example["id"]] = answer
#answer = best_answer["text"] if best_answer["score"] > min_null_score else ""
#predictions[example["id"]] = answer
predictions[example["id"]] = best_answer["text"]
return predictions
final_predictions = postprocess_qa_predictions(datasets["validation"], validation_features, raw_predictions.predictions)
metric = load_metric("squad") #squad_v2
#id squad_v2 -> formatted_predictions = [{"id": k, "prediction_text": v, "no_answer_probability": 0.0} for k, v in final_predictions.items()]
formatted_predictions = [{"id": k, "prediction_text": v} for k, v in final_predictions.items()]
#formatted_predictions = [{"id": k, "prediction_text": v, "no_answer_probability": 0.0} for k, v in final_predictions.items()]
references = [{"id": ex["id"], "answers": ex["answers"]} for ex in datasets["validation"]]
metric.compute(predictions=formatted_predictions, references=references)
formatted_predictions[0:10]
import json
with open("predictions.json", "w") as predictions_file:
json.dump(formatted_predictions, predictions_file)
references[0:10]
import json
with open("references.json", "w") as references_file:
json.dump(references, references_file)
def compute_f1(prediction, truth):
pred_tokens = prediction.split()
truth_tokens = truth.split()
# if either the prediction or the truth is no-answer then f1 = 1 if they agree, 0 otherwise
if len(pred_tokens) == 0 or len(truth_tokens) == 0:
return int(pred_tokens == truth_tokens)
common_tokens = set(pred_tokens) & set(truth_tokens)
# if there are no common tokens then f1 = 0
if len(common_tokens) == 0:
return 0
prec = len(common_tokens) / len(pred_tokens)
rec = len(common_tokens) / len(truth_tokens)
return 2 * (prec * rec) / (prec + rec)
compute_f1(formatted_predictions[4]['prediction_text'], references[4]['answers']['text'][0])
```
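The slicing trick `np.argsort(logits)[-1 : -n_best_size - 1 : -1]` used above for `start_indexes` and `end_indexes` is worth unpacking: it selects the indices of the `n_best_size` largest logits, largest first. A minimal illustration:

```python
import numpy as np

# Indices of the n largest values, in descending order -- the same slice
# pattern used to pick candidate start/end positions in the post-processing.
logits = np.array([0.1, 2.5, -1.0, 1.7, 0.3])
n_best = 3
idx = np.argsort(logits)[-1 : -n_best - 1 : -1].tolist()
print(idx)  # [1, 3, 4]
```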
| github_jupyter |
# Principal Component Analysis (PCA)
We will implement the PCA algorithm and then apply it (once again) to the MNIST digit dataset.
## Learning objective
1. Write code that implements PCA.
2. Write code that implements PCA for high-dimensional datasets.
Let's first import the packages we need for this week.
```
# PACKAGE: DO NOT EDIT THIS CELL
import numpy as np
import timeit
# PACKAGE: DO NOT EDIT THIS CELL
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
from ipywidgets import interact
from load_data import load_mnist
MNIST = load_mnist()
images, labels = MNIST['data'], MNIST['target']
%matplotlib inline
```
Now, let's plot a digit from the dataset:
```
plt.figure(figsize=(4,4))
plt.imshow(images[0].reshape(28,28), cmap='gray');
```
Before we implement PCA, we will need to do some data preprocessing. In this assessment, some of the steps
will be implemented by you, others we will take care of. However, when you are working on real-world problems, you will need to do all these steps by yourself!
The preprocessing steps we will do are
1. Convert the unsigned 8-bit integer (uint8) encoding of the pixels to floating point numbers between 0 and 1.
2. Subtract from each image the mean $\boldsymbol \mu$.
3. Scale each dimension of each image by $\frac{1}{\sigma}$, where $\sigma$ is the standard deviation.
The steps above ensure that our images will have zero mean and unit variance. These preprocessing
steps are also known as [Data Normalization or Feature Scaling](https://en.wikipedia.org/wiki/Feature_scaling).
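The three steps above can be sketched directly in numpy. This is a minimal illustration on a toy uint8 array, not the graded implementation below:

```python
import numpy as np

# Toy "images": 2 samples, 3 pixel dimensions, uint8-encoded.
raw = np.array([[0, 128, 255],
                [64, 128, 192]], dtype=np.uint8)

X = raw.astype(np.float64) / 255.0   # 1. uint8 -> float in [0, 1]
mu = X.mean(axis=0)                  # 2. per-dimension mean
sigma = X.std(axis=0)
sigma[sigma == 0] = 1.0              # avoid dividing by zero on constant dimensions
Xbar = (X - mu) / sigma              # 3. scale each dimension by 1/sigma

print(np.allclose(Xbar.mean(axis=0), 0.0))  # True: zero mean
```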
## 1. PCA
Now we will implement PCA. Before we do that, let's pause for a moment and
think about the steps for performing PCA. Assume that we are performing PCA on
some dataset $\boldsymbol X$ for $M$ principal components.
We then need to perform the following steps, which we break into parts:
1. Data normalization (`normalize`).
2. Find eigenvalues and corresponding eigenvectors for the covariance matrix $S$.
Sort by the largest eigenvalues and the corresponding eigenvectors (`eig`).
After these steps, we can then compute the projection and reconstruction of the data onto the space spanned by the top $n$ eigenvectors.
```
# GRADED FUNCTION: DO NOT EDIT THIS LINE
def normalize(X):
"""Normalize the given dataset X
Args:
X: ndarray, dataset
Returns:
(Xbar, mean, std): tuple of ndarray, Xbar is the normalized dataset
with mean 0 and standard deviation 1; mean and std are the
mean and standard deviation respectively.
Note:
You will encounter dimensions where the standard deviation is
zero, for those when you do normalization the normalized data
will be NaN. Handle this by setting using `std = 1` for those
dimensions when doing normalization.
"""
mu = np.mean(X, axis=0)
std = np.std(X, axis=0)
std_filled = std.copy()
std_filled[std==0] = 1.
Xbar = (X-mu)/std_filled
return Xbar, mu, std
def eig(S):
"""Compute the eigenvalues and corresponding eigenvectors
for the covariance matrix S.
Args:
S: ndarray, covariance matrix
Returns:
(eigvals, eigvecs): ndarray, the eigenvalues and eigenvectors
Note:
the eigenvals and eigenvecs should be sorted in descending
order of the eigen values
"""
vals, vecs = np.linalg.eig(S)
#argsort sorts in ascending order, so we have to reverse it
sort = np.argsort(vals)[::-1]
return(vals[sort], vecs[:, sort])
def projection_matrix(B):
"""Compute the projection matrix onto the space spanned by `B`
Args:
B: ndarray of dimension (D, M), the basis for the subspace
Returns:
P: the projection matrix
"""
return B@np.linalg.inv(B.T@B)@B.T
def PCA(X, num_components):
"""
Args:
X: ndarray of size (N, D), where D is the dimension of the data,
and N is the number of datapoints
num_components: the number of principal components to use.
Returns:
X_reconstruct: ndarray of the reconstruction
of X from the first `num_components` principal components.
"""
# your solution should take advantage of the functions you have implemented above.
#First, let's normalize our data (DONE OUTSIDE THE FUNCTION!!)
# X, _, _ = normalize(X)
#Second, calculate the covariance matrix
cov_matrix = np.cov(X, rowvar = False, bias=True)
#Third, get the eigen decomposition of the cov_matrix
_, vecs = eig(cov_matrix)
#Then, reconstruct projection matrix from some of the eigen vectors
P = projection_matrix(vecs[:, :num_components]) # projection matrix
#Finally, reconstruct X
X = (P @ X.T).T
return X
## Some preprocessing of the data
NUM_DATAPOINTS = 1000
X = (images.reshape(-1, 28 * 28)[:NUM_DATAPOINTS]) / 255.
Xbar, mu, std = normalize(X)
for num_component in range(1, 20):
from sklearn.decomposition import PCA as SKPCA
# We can compute a standard solution given by scikit-learn's implementation of PCA
pca = SKPCA(n_components=num_component, svd_solver='full')
sklearn_reconst = pca.inverse_transform(pca.fit_transform(Xbar))
reconst = PCA(Xbar, num_component)
np.testing.assert_almost_equal(reconst, sklearn_reconst)
print("Number of components:", num_component)
print("Difference between our PCA and SKlearn's:", np.square(reconst - sklearn_reconst).sum())
```
The greater the number of principal components we use, the smaller our reconstruction
error will be. Now, let's answer the following question:
> How many principal components do we need
> in order to reach a Mean Squared Error (MSE) of less than $100$ for our dataset?
We have provided a function in the next cell which computes the mean squared error (MSE), which will be useful for answering the question above.
```
def mse(predict, actual):
"""Helper function for computing the mean squared error (MSE)"""
return np.square(predict - actual).sum(axis=1).mean()
loss = []
reconstructions = []
# iterate over different number of principal components, and compute the MSE
for num_component in range(1, 100):
if num_component%10==0:
print("#Remaining:", 100-num_component)
reconst = PCA(Xbar, num_component)
error = mse(reconst, Xbar)
reconstructions.append(reconst)
# print('n = {:d}, reconstruction_error = {:f}'.format(num_component, error))
loss.append((num_component, error))
reconstructions = np.asarray(reconstructions)
reconstructions = reconstructions * std + mu # "unnormalize" the reconstructed image
loss = np.asarray(loss)
import pandas as pd
# create a table showing the number of principal components and MSE
pd.DataFrame(loss).head()
```
We can also put these numbers into perspective by plotting them.
```
fig, ax = plt.subplots()
ax.plot(loss[:,0], loss[:,1]);
ax.axhline(100, linestyle='--', color='r', linewidth=2)
ax.xaxis.set_ticks(np.arange(1, 100, 5));
ax.set(xlabel='num_components', ylabel='MSE', title='MSE vs number of principal components');
```
But _numbers don't tell us everything_! Just what does it mean _qualitatively_ for the loss to decrease from around
$450.0$ to less than $100.0$?
Let's find out! In the next cell, the leftmost image drawn is the original digit; to its right we show reconstructions of the image, using an increasing number of principal components.
```
@interact(image_idx=(0, 1000))
def show_num_components_reconst(image_idx):
fig, ax = plt.subplots(figsize=(20., 20.))
actual = X[image_idx]
# concatenate the actual and reconstructed images as large image before plotting it
x = np.concatenate([actual[np.newaxis, :], reconstructions.astype(float)[:, image_idx]])
ax.imshow(np.hstack(x.reshape(-1, 28, 28)[np.arange(10)]), cmap='gray');
ax.axvline(28, color='orange', linewidth=2)
```
We can also browse through the reconstructions for other digits. Once again, `interact` comes in handy for visualizing the reconstructions.
```
@interact(i=(0, 10))
def show_pca_digits(i=1):
"""Show the i th digit and its reconstruction"""
plt.figure(figsize=(4,4))
actual_sample = X[i].reshape(28,28)
reconst_sample = (reconst[i, :] * std + mu).reshape(28, 28).astype(float)
plt.imshow(np.hstack([actual_sample, reconst_sample]), cmap='gray')
plt.show()
```
## 2. PCA for high-dimensional datasets
Sometimes, the dimensionality of our dataset may be larger than the number of samples we
have. Then it might be inefficient to perform PCA with your implementation above. Instead,
as mentioned in the lectures, you can implement PCA in a more efficient manner, which we
call "PCA for high dimensional data" (PCA_high_dim).
Below are the steps for performing PCA on a high-dimensional dataset:
1. Compute the matrix $XX^T$ (an $N \times N$ matrix, where $N \ll D$)
2. Compute eigenvalues $\lambda$s and eigenvectors $V$ for $XX^T$
3. Compute the eigenvectors for the original covariance matrix as $X^TV$. Choose the eigenvectors associated with the M largest eigenvalues to be the basis of the principal subspace $U$.
4. Compute the orthogonal projection of the data onto the subspace spanned by columns of $U$. Functions you wrote for earlier assignments will be useful.
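The key fact behind step 3 — if $\boldsymbol v$ is an eigenvector of $XX^T$, then $X^T\boldsymbol v$ is an eigenvector of $X^TX$ with the same eigenvalue — can be checked numerically. A small sketch, independent of the graded code below:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 5, 20                        # few samples, many dimensions
X = rng.standard_normal((N, D))

# Eigendecomposition of the small N x N matrix X X^T (eigh: symmetric matrix)
vals, V = np.linalg.eigh(X @ X.T)

# Map the top eigenvector back: u = X^T v is an eigenvector of X^T X,
# since (X^T X)(X^T v) = X^T (X X^T v) = lambda X^T v.
u = X.T @ V[:, -1]
print(np.allclose((X.T @ X) @ u, vals[-1] * u))  # True: same eigenvalue
```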
```
# GRADED FUNCTION: DO NOT EDIT THIS LINE
### PCA for high dimensional datasets
def PCA_high_dim(X, n_components):
"""Compute PCA for small sample size but high-dimensional features.
Args:
X: ndarray of size (N, D), where D is the dimension of the sample,
and N is the number of samples
num_components: the number of principal components to use.
Returns:
X_reconstruct: (N, D) ndarray. the reconstruction
of X from the first `num_components` principal components.
"""
N, D = X.shape
# Compute the matrix \frac{1}{N}XX^T.
M = (1/N) * (X @ X.T)
# Compute the eigenvalues.
eig_vals, eig_vecs = eig(M)
# Compute the eigenvectors for the original PCA problem.
U = X.T @ eig_vecs
# Compute the projection matrix,
P = projection_matrix(U[: ,:n_components]) # projection matrix
# Reconstruct X using the projection matrix
X = (P @ X.T).T
return X
```
Given the same dataset, `PCA` and `PCA_high_dim` should give identical results. We can use this __invariant__ to test our implementation of `PCA_high_dim`, assuming that we have correctly implemented `PCA`.
```
np.testing.assert_almost_equal(PCA(Xbar, 2), PCA_high_dim(Xbar, 2))
```
Now let's compare the running time between `PCA` and `PCA_high_dim`.
__Tips__ for running benchmarks or computationally expensive code:
When you have a computation that takes a non-negligible amount of time, try separating the code that produces the output from the code that analyzes the results (e.g. plotting them, computing statistics of the results). This way, you don't have to recompute everything when you want to produce more analysis.
The next cell includes a function that records the time taken to execute a function `f` by repeating it `repeat` times. You do not need to modify the function, but you can use it to compare the running times of any functions you are interested in.
```
def time(f, repeat=10):
times = []
for _ in range(repeat):
start = timeit.default_timer()
f()
stop = timeit.default_timer()
times.append(stop-start)
return np.mean(times), np.std(times)
```
We first benchmark the time taken to compute $\boldsymbol X^T\boldsymbol X$ and $\boldsymbol X\boldsymbol X^T$. Jupyter's magic command `%time` is quite handy.
The next cell finds the running time for computing $X^TX$ and $XX^T$ for different dimensions of X.
```
times_mm0 = []
times_mm1 = []
# iterate over datasets of different size
for datasetsize in np.arange(4, 784, step=20):
XX = Xbar[:datasetsize] # select the first `datasetsize` samples in the dataset
# record the running time for computing X.T @ X
mu, sigma = time(lambda : XX.T @ XX)
times_mm0.append((datasetsize, mu, sigma))
# record the running time for computing X @ X.T
mu, sigma = time(lambda : XX @ XX.T)
times_mm1.append((datasetsize, mu, sigma))
times_mm0 = np.asarray(times_mm0)
times_mm1 = np.asarray(times_mm1)
```
Having recorded the running times for computing `X.T @ X` and `X @ X.T`, we can plot them.
```
fig, ax = plt.subplots()
ax.set(xlabel='size of dataset', ylabel='running time')
bar = ax.errorbar(times_mm0[:, 0], times_mm0[:, 1], times_mm0[:, 2], label="$X^T X$ (PCA)", linewidth=2)
ax.errorbar(times_mm1[:, 0], times_mm1[:, 1], times_mm1[:, 2], label="$X X^T$ (PCA_high_dim)", linewidth=2)
ax.legend();
```
Alternatively, use the `%time` magic command for benchmarking functions.
```
%time Xbar.T @ Xbar
%time Xbar @ Xbar.T
pass # Put this here so that our output does not show result of computing `Xbar @ Xbar.T`
```
Next, we benchmark `PCA` and `PCA_high_dim`.
```
times0 = []
times1 = []
# iterate over datasets of different size
for datasetsize in np.arange(4, 784, step=100):
XX = Xbar[:datasetsize]
npc = 2
mu, sigma = time(lambda : PCA(XX, npc), repeat=10)
times0.append((datasetsize, mu, sigma))
mu, sigma = time(lambda : PCA_high_dim(XX, npc), repeat=10)
times1.append((datasetsize, mu, sigma))
times0 = np.asarray(times0)
times1 = np.asarray(times1)
```
Let's plot the running time. Spend some time and think about what this plot means. We mentioned in lectures that `PCA_high_dim` is advantageous when
we have dataset size $N$ < data dimension $M$. Although the two running-time curves in our plot do not intersect exactly at $N = M$, the plot does show the trend.
```
fig, ax = plt.subplots()
ax.set(xlabel='number of datapoints', ylabel='run time')
ax.errorbar(times0[:, 0], times0[:, 1], times0[:, 2], label="PCA", linewidth=2)
ax.errorbar(times1[:, 0], times1[:, 1], times1[:, 2], label="PCA_high_dim", linewidth=2)
ax.legend();
```
Again, with the magic command `%time`.
```
%time PCA(Xbar, 2)
%time PCA_high_dim(Xbar, 2)
pass
```
| github_jupyter |
# 012_importing_datasets
[Source](https://github.com/iArunava/Python-TheNoTheoryGuide/)
```
# Required Imports
import pandas as pd
import sklearn as sk
import sqlite3
from pandas.io import sql
# Importing CSV files from local directory
# NOTE: Make sure the Path you use contains the dataset named 'whereisthatdataset.csv'
df1 = pd.read_csv ('./assets/whereisthatdataset.csv') # Using relative path
df2 = pd.read_csv ('/home/arunava/Datasets/whereisthatdataset.csv') # Using absolute path
df1.head(3)
# If a dataset comes without headers then you need to pass `header=None`
# Note: This Dataset comes with headers,
# specifying `headers=None` leads python to treat the first row as part of the dataset
df1 = pd.read_csv ('./assets/whereisthatdataset.csv', header=None)
df1.head(3)
# Specify header names while importing datasets with (or without) headers
df1 = pd.read_csv ('./assets/whereisthatdataset.csv', header=None, names=['Where', 'on', 'earth', 'did', 'you', 'got', 'this', 'dataset', 'of', 'Pigeons', 'racing'])
df1.head(3)
# Importing file from URL
df1 = pd.read_csv('https://raw.githubusercontent.com/iArunava/Python-TheNoTheoryGuide/master/assets/whereisthatdataset.csv')
df1.head(3)
# Reading Data from text file
# NOTE: Use `sep` to specify how your data is separated
df1 = pd.read_table ('./assets/whereisthatdataset.txt', sep=',')
df2 = pd.read_csv ('./assets/whereisthatdataset.txt', sep=',')
df1.head(3)
# Read excel file
# NOTE: you need the 'xlrd' module to read .xls files
# (recent pandas versions use the `sheet_name` argument; older ones used `sheetname`)
df1 = pd.read_excel ('./assets/whereisthatdataset.xls', sheet_name='whereisthatdataset', skiprows=1)
df1.head(3)
# Read SAS file
df1 = pd.read_sas ('./assets/whereisthatdataset.sas7bdat')
df1.head(3)
# Read SQL Table
conn = sqlite3.connect ('./assets/whereisthatdataset.db')
query = 'SELECT * FROM whereisthattable;'
df1 = pd.read_sql(query, con=conn)
df1.head(3)
# Read sample rows and columns
# nrows: Number of rows to select
# usecols: list of cols to use (either all string or unicode)
sdf1 = pd.read_csv ('./assets/whereisthatdataset.csv', nrows=4, usecols=[1, 5, 7])
sdf2 = pd.read_csv ('./assets/whereisthatdataset.csv', nrows=4, usecols=['Breeder', 'Sex', 'Arrival'])
sdf1
# Skip rows while importing
# NOTE: If you don't set header=None, pandas will treat the first remaining row as the header row
df1 = pd.read_csv ('./assets/whereisthatdataset.csv', header=None, skiprows=5)
df1.head(3)
# Specify Missing Values
# na_values: a list of values which, if present in the dataset, will be treated as missing
df1 = pd.read_csv ('./assets/whereisthatdataset.csv', na_values=['NaN'])
df1.head(3)
```
| github_jupyter |
```
# install composer, hiding output to keep the notebook clean
! pip install mosaicml > /dev/null 2>&1
```
# Using the Functional API
In this tutorial, we'll see an example of using Composer's algorithms in a standalone fashion with no changes to the surrounding code and no requirement to use the Composer trainer. We'll be training a simple model on CIFAR-10, similar to the [PyTorch classifier tutorial](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html). Because we'll be using a toy model trained for only a few epochs, we won't get the same speed or accuracy gains we might expect from a more realistic problem. However, this notebook should still serve as a useful illustration of how to use various algorithms. For examples of more realistic results, see the MosaicML [Explorer](https://app.mosaicml.com/explorer/imagenet).
First, we need to define our original model, dataloader, and training loop. Let's start with the dataloader:
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
datadir = './data'
batch_size = 1024
transform = transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
]
)
trainset = torchvision.datasets.CIFAR10(root=datadir, train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root=datadir, train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
shuffle=False, num_workers=2)
```
As you can see, we compose two transforms, one which transforms the images to tensors and another that normalizes them. We apply these transformations to both the train and test sets. Now, let's define our model. We're going to use a toy convolutional neural network so that the training epochs finish quickly.
```
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 16, kernel_size=(3, 3), stride=2)
self.conv2 = nn.Conv2d(16, 32, kernel_size=(3, 3))
self.norm = nn.BatchNorm2d(32)
self.pool = nn.AdaptiveAvgPool2d(1)
self.fc1 = nn.Linear(32, 128)
self.fc2 = nn.Linear(128, 64)
self.fc3 = nn.Linear(64, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = self.conv2(x)
x = F.relu(self.norm(x))
x = torch.flatten(self.pool(x), 1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
```
Finally, let's write a simple training loop that prints the accuracy on the test set at the end of each epoch. We'll just run a few epochs for brevity.
```
from tqdm.notebook import tqdm
import composer.functional as cf
num_epochs = 5
def train_and_eval(model, train_loader, test_loader):
torch.manual_seed(42)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = model.to(device)
opt = torch.optim.Adam(model.parameters())
for epoch in range(num_epochs):
print(f"---- Beginning epoch {epoch} ----")
model.train()
progress_bar = tqdm(train_loader)
for X, y in progress_bar:
    # move both the inputs and the labels to the training device
    X, y = X.to(device), y.to(device)
    y_hat = model(X)
    loss = F.cross_entropy(y_hat, y)
    progress_bar.set_postfix_str(f"train loss: {loss.detach().cpu().numpy():.4f}")
    loss.backward()
    opt.step()
    opt.zero_grad()
model.eval()
num_right = 0
eval_size = 0
for X, y in test_loader:
    X, y = X.to(device), y.to(device)
    y_hat = model(X)
    num_right += (y_hat.argmax(dim=1) == y).sum().cpu().numpy()
eval_size += len(y)
acc_percent = 100 * num_right / eval_size
print(f"Epoch {epoch} validation accuracy: {acc_percent:.2f}%")
```
Great. Now, let's instantiate this baseline model and see how it fares on our dataset.
```
model = Net()
train_and_eval(model, trainloader, testloader)
```
Now that we have this baseline, let's add algorithms to improve our data pipeline and model. We'll start by adding some data augmentation, accessed via `cf.colout_batch`.
```
# create dataloaders for the train and test sets
shared_transforms = [
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
]
train_transforms = shared_transforms[:] + [cf.colout_batch]
test_transform = transforms.Compose(shared_transforms)
train_transform = transforms.Compose(train_transforms)
trainset = torchvision.datasets.CIFAR10(root=datadir, train=True,
download=True, transform=train_transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root=datadir, train=False,
download=True, transform=test_transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
shuffle=False, num_workers=2)
```
Let's see how our model does with just these changes.
```
model = Net()
# only use one data augmentation since our small model runs quickly
# and allows the dataloader little time to do anything fancy
train_and_eval(model, trainloader, testloader)
```
As we might expect, adding data augmentation doesn't help us when we aren't training long enough to start overfitting.
Let's try using some algorithms that modify the model. We're going to keep things simple and just add a [Squeeze-and-Excitation](https://docs.mosaicml.com/en/latest/method_cards/squeeze_excite.html) module after the larger of the two conv2d operations in our model.
```
# squeeze-excite can add a lot of overhead for small
# conv2d operations, so only add it after convs with a
# minimum number of channels
cf.apply_squeeze_excite(model, latent_channels=64, min_channels=16)
```
Now let's see how our model does with the above algorithm applied.
```
train_and_eval(model, trainloader, testloader)
```
Adding squeeze-excite gives us another few percentage points of accuracy and does so with little decrease in the number of iterations per second. Great!
Of course, this is a toy model and dataset, but it serves to illustrate how to use Composer's algorithms inside your own training loops, with minimal changes to your code. If you hit any problems or have questions, feel free to [open an issue](https://github.com/mosaicml/composer/issues/new/) or reach out to us [on Slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-w0tiddn9-WGTlRpfjcO9J5jyrMub1dg).
| github_jupyter |
# 3. Python modules
In the course, and in your general (data analytics and visualization) work with Python you will typically be using the __numpy__ and __matplotlib.pyplot__ Python 'modules'. A Python _module_ is a "library" with useful procedures (in practice either a file `library.py` or a folder `library/`). To use procedures in a library, one can refer to them by their full name, after an `import` statement, such as in
#### import
```
import numpy
x = numpy.arange(0.0,1.0,0.05)
print(x)
import matplotlib.pyplot
matplotlib.pyplot.plot(numpy.sin(x*2.0*numpy.pi),'-+')
```
This gets pretty heavy and unreadable, though, so one can use short-cut names for the libraries:
#### import as
```
import numpy as np
x = np.arange(0.0,1.0,0.05)
print(x)
import matplotlib.pyplot as pl
pl.plot(np.sin(x*2.0*np.pi),'-+');
```
### %pylab inline and from ... import
It is possible to import all procedures from a library, using this syntax:
```
from numpy import *
from matplotlib.pyplot import *
```
Since using the `numpy` and `matplotlib.pyplot` procedures in interactive work is extremely common and basic, there is a "magic command" (one among several) that accomplishes this, and also chooses (via the argument), the type of graphics interpreter to use:
```
%pylab inline
x = arange(0.0,1.0,0.05)
print(x)
plot(sin(x*2.0*pi),'-+',label='sin')
legend()
```
### Don't do this in the general case!
For several reasons this is not recommended in the general case (it leads to so-called _namespace pollution_):
1. The number of imported procedures can be huge, and libraries may contain names that overlap with variable names, or with names in other libraries which can cause confusion.
2. Procedures may mask (overload) builtin Python procedures (such as `abs(), sum(), int()`and so on), which can also cause confusion.
3. Many Python editors have automatic syntax control, and will not detect missing or mismatched procedure names in this case.
4. Not much typing is gained by omitting `np.` and `pl.`, and one quickly gets used to typing the abbreviations.
5. It is good practice to keep the `name.` syntax in front of the procedure, because it informs the reader (yourself in a couple of months!) in which library the procedure is defined.
So, when using other libraries, and also when using `numpy` and `matplotlib.pyplot` in files with pure Python code (perhaps a personal library), it is better to type names that include the (abbreviated) library name !
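A concrete illustration of point 2 (a hypothetical session; after a star import, numpy's `sum` shadows the builtin):

```python
# The builtin sum with a start value concatenates lists:
print(sum([[1, 2], [3, 4]], []))   # [1, 2, 3, 4]

from numpy import *                # star import: numpy's sum now shadows the builtin

# numpy's sum adds up all the elements instead:
print(sum([[1, 2], [3, 4]]))       # 10
```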
#### __Task:__
Let's take a look at the `numpy.roll` function, which we will need in the next Jupyter Notebook
```
import numpy as np
help(np.roll)
```
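A quick illustration of what `np.roll` does (a positive shift moves elements to the right, wrapping around the end):

```python
import numpy as np

a = np.arange(5)        # [0 1 2 3 4]
print(np.roll(a, 1))    # [4 0 1 2 3]
print(np.roll(a, -1))   # [1 2 3 4 0]
```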
Suppose we have a list `f=[0,1,2,1,0,-1,-2,-1]`, which we would refer to as $f_i$ in math notation.
What would be the expression that produces the list that corresponds to $f_{i+1}$ ?
#### __Absalon turn in__:
Just type the expression into the Absalon text field (no notebook upload)
## Next
Reaching this point, you are now ready to start working on numerical methods! Open the __1e Centered derivatives__ notebook, to learn about numerical derivatives.
| github_jupyter |
```
# default_exp distributed
#export
from fastai.basics import *
from fastai.callback.progress import ProgressCallback
from torch.nn.parallel import DistributedDataParallel, DataParallel
from fastai.data.load import _FakeLoader
```
# Distributed and parallel training
> Callbacks and helper functions to train in parallel or use distributed training
## Parallel
Patch the parallel models so they work with RNNs
```
#export
@patch
def reset(self: DataParallel):
if hasattr(self.module, 'reset'): self.module.reset()
#export
@log_args
class ParallelTrainer(Callback):
run_after,run_before = TrainEvalCallback,Recorder
def __init__(self, device_ids): self.device_ids = device_ids
def before_fit(self): self.learn.model = DataParallel(self.learn.model, device_ids=self.device_ids)
def after_fit(self): self.learn.model = self.learn.model.module
#export
@patch
def to_parallel(self: Learner, device_ids=None):
self.add_cb(ParallelTrainer(device_ids))
return self
#export
@patch
def detach_parallel(self: Learner):
"Remove ParallelTrainer callback from Learner."
self.remove_cb(ParallelTrainer)
return self
#export
@patch
@contextmanager
def parallel_ctx(self: Learner, device_ids=None):
"A context manager to adapt a learner to train in data parallel mode."
try:
self.to_parallel(device_ids)
yield self
finally:
self.detach_parallel()
```
## Distributed
Patch the parallel models so they work with RNNs
```
#export
@patch
def reset(self: DistributedDataParallel):
if hasattr(self.module, 'reset'): self.module.reset()
```
Convenience functions to set up/tear down torch distributed data parallel mode.
```
#export
def setup_distrib(gpu=None):
if gpu is None: return gpu
gpu = int(gpu)
torch.cuda.set_device(int(gpu))
if num_distrib() > 1:
torch.distributed.init_process_group(backend='nccl', init_method='env://')
return gpu
#export
def teardown_distrib():
if torch.distributed.is_initialized(): torch.distributed.destroy_process_group()
```
### DataLoader
We need to change the dataloaders so that they only get one part of the batch each (otherwise there is no point in using distributed training).
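The core idea of `get_idxs` below — pad the index list to a multiple of `world_size`, then let each rank take every `world_size`-th index starting at its own offset — can be sketched in isolation (`shard_indices` is a hypothetical helper name, not part of fastai):

```python
import math

def shard_indices(idxs, rank, world_size):
    # Pad with leading samples so every rank gets the same number of indices,
    # then stride through the list starting at this rank's offset.
    target = math.ceil(len(idxs) / world_size) * world_size
    idxs = idxs + idxs[: target - len(idxs)]
    return idxs[rank::world_size]

print(shard_indices(list(range(10)), rank=1, world_size=4))  # [1, 5, 9]
```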
```
#export
@log_args(but_as=TfmdDL.__init__)
class DistributedDL(TfmdDL):
_round_to_multiple=lambda number,multiple:int(math.ceil(number/multiple)*multiple)
def _broadcast(self,t,rank):
"Broadcasts t from rank `rank` to all other ranks. Returns t so t is same for all ranks after call."
t = LongTensor(t).cuda() # nccl only works with cuda tensors
torch.distributed.broadcast(t,rank)
return t.cpu().tolist()
def __len__(self):
return DistributedDL._round_to_multiple(len(self.dl),self.world_size)//self.world_size
def get_idxs(self):
idxs = self.dl.get_idxs() # compute get_idxs in all ranks (we'll only use rank 0 but size must be consistent)
idxs = self._broadcast(idxs,0) # broadcast and receive it from rank 0 to all
n_idxs = len(idxs)
# add extra samples to make it evenly divisible
idxs += idxs[:(DistributedDL._round_to_multiple(n_idxs,self.world_size)-n_idxs)]
# subsample
return idxs[self.rank::self.world_size]
def sample(self):
# this gets executed in fake_l context (e.g. subprocesses) so we cannot call self.get_idxs() here
return (b for i,b in enumerate(self._idxs) if i//(self.bs or 1)%self.nw==self.offs)
def before_iter(self):
self.dl.before_iter()
self._idxs = self.get_idxs()
def randomize(self): self.dl.randomize()
def after_batch(self,b): return self.dl.after_batch(b)
def after_iter(self): self.dl.after_iter()
def create_batches(self,samps): return self.dl.create_batches(samps)
def __init__(self,dl,rank,world_size):
store_attr('dl,rank,world_size')
self.bs,self.device,self.drop_last,self.dataset = dl.bs,dl.device,dl.drop_last,dl.dataset
self.fake_l = _FakeLoader(self, dl.fake_l.pin_memory, dl.fake_l.num_workers, dl.fake_l.timeout)
_tmp_file = tempfile.NamedTemporaryFile().name # i tried putting this inside self / _broadcast to no avail
# patch _broadcast with a mocked version so we can test DistributedDL w/o a proper DDP setup
@patch
def _broadcast(self:DistributedDL,t,rank):
t = LongTensor(t)
if rank == self.rank: torch.save(t,_tmp_file)
else: t.data = torch.load(_tmp_file)
return t.tolist()
dl = TfmdDL(list(range(50)), bs=16, num_workers=2)
for i in range(4):
dl1 = DistributedDL(dl, i, 4)
test_eq(list(dl1)[0], torch.arange(i, 52, 4)%50)
dl = TfmdDL(list(range(50)), bs=16, num_workers=2, shuffle=True)
res = []
for i in range(4):
dl1 = DistributedDL(dl, i, 4)
res += list(dl1)[0].tolist()
#All items should be sampled (we cannot test order b/c shuffle=True)
test_eq(np.unique(res), np.arange(50))
from fastai.callback.data import WeightedDL
dl = WeightedDL(list(range(50)), bs=16, num_workers=2, shuffle=True,wgts=list(np.arange(50)>=25))
res = []
for i in range(4):
dl1 = DistributedDL(dl, i, 4)
res += list(dl1)[0].tolist()
test(res,[25]*len(res),operator.ge) # all res >=25
test(res,[25]*len(res),lambda a,b: ~(a<b)) # all res NOT < 25
#export
@log_args
class DistributedTrainer(Callback):
run_after,run_before = TrainEvalCallback,Recorder
fup = None # for `find_unused_parameters` in DistributedDataParallel()
def __init__(self, cuda_id=0,sync_bn=True): store_attr('cuda_id,sync_bn')
def before_fit(self):
opt_kwargs = { 'find_unused_parameters' : DistributedTrainer.fup } if DistributedTrainer.fup is not None else {}
self.learn.model = DistributedDataParallel(
nn.SyncBatchNorm.convert_sync_batchnorm(self.model) if self.sync_bn else self.model,
device_ids=[self.cuda_id], output_device=self.cuda_id, **opt_kwargs)
self.old_dls = list(self.dls)
self.learn.dls.loaders = [self._wrap_dl(dl) for dl in self.dls]
if rank_distrib() > 0: self.learn.logger=noop
def _wrap_dl(self, dl):
return dl if isinstance(dl, DistributedDL) else DistributedDL(dl, rank_distrib(), num_distrib())
def before_train(self): self.learn.dl = self._wrap_dl(self.learn.dl)
def before_validate(self): self.learn.dl = self._wrap_dl(self.learn.dl)
def after_fit(self):
self.learn.model = self.learn.model.module
self.learn.dls.loaders = self.old_dls
```
Attach or remove a callback that adapts the model to use `DistributedDL`, so that it trains in distributed data parallel mode.
```
#export
@patch
def to_distributed(self: Learner, cuda_id,sync_bn=True):
self.add_cb(DistributedTrainer(cuda_id,sync_bn))
if rank_distrib() > 0: self.remove_cb(ProgressCallback)
return self
#export
@patch
def detach_distributed(self: Learner):
if num_distrib() <=1: return self
self.remove_cb(DistributedTrainer)
if rank_distrib() > 0 and not hasattr(self, 'progress'): self.add_cb(ProgressCallback())
return self
#export
@patch
@contextmanager
def distrib_ctx(self: Learner, cuda_id=None,sync_bn=True):
"A context manager to adapt a learner to train in distributed data parallel mode."
# Figure out the GPU to use from rank. Create a dpg if none exists yet.
if cuda_id is None: cuda_id = rank_distrib()
if not torch.distributed.is_initialized():
setup_distrib(cuda_id)
cleanup_dpg = torch.distributed.is_initialized()
else: cleanup_dpg = False
# Adapt self to DistributedDataParallel, yield, and cleanup afterwards.
try:
if num_distrib() > 1: self.to_distributed(cuda_id,sync_bn)
yield self
finally:
self.detach_distributed()
if cleanup_dpg: teardown_distrib()
```
### `distrib_ctx` context manager
**`distrib_ctx(cuda_id)`** prepares a learner to train in distributed data parallel mode. It assumes these [environment variables](https://pytorch.org/tutorials/intermediate/dist_tuto.html#initialization-methods) have all been set up properly, as is done for example for scripts launched by [`python -m fastai.launch`](https://github.com/fastai/fastai/blob/master/fastai/launch.py).
#### Typical usage:
```
with learn.distrib_ctx(): learn.fit(.....)
```
It attaches a `DistributedTrainer` callback and `DistributedDL` data loader to the learner, then executes `learn.fit(.....)`. Upon exiting the context, it removes the `DistributedTrainer` and `DistributedDL`, and destroys any locally created distributed process group. The process is still attached to the GPU though.
```
#export
def rank0_first(func):
"Execute `func` in the Rank-0 process first, then in other ranks in parallel."
dummy_l = Learner(DataLoaders(device='cpu'), nn.Linear(1,1), loss_func=lambda: 0)
with dummy_l.distrib_ctx():
if rank_distrib() == 0: res = func()
distrib_barrier()
if rank_distrib() != 0: res = func()
return res
```
**`rank0_first(f)`** calls `f()` in rank-0 process first, then in parallel on the rest, in distributed training mode. In single process, non-distributed training mode, `f()` is called only once as expected.
One application of `rank0_first()` is to make fresh downloads via `untar_data()` safe in distributed training scripts launched by `python -m fastai.launch <script>`:
> <code>path = untar_data(URLs.IMDB)</code>
becomes:
> <code>path = <b>rank0_first(lambda:</b> untar_data(URLs.IMDB))</code>
Some learner factory methods may use `untar_data()` to **download pretrained models** by default:
> <code>learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)</code>
becomes:
> <code>learn = <b>rank0_first(lambda:</b> text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy))</code>
Otherwise, multiple processes will download at the same time and corrupt the data.
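The pattern behind `rank0_first` can be illustrated without any GPUs by substituting a `threading.Barrier` for `distrib_barrier` (`simulate_rank0_first` is a hypothetical name used only for this sketch):

```python
import threading

def simulate_rank0_first(world_size=4):
    """Toy model of rank0_first: rank 0 does the 'download' before the
    barrier; every other rank does it only after the barrier releases."""
    barrier = threading.Barrier(world_size)
    order, lock = [], threading.Lock()

    def worker(rank):
        if rank == 0:
            with lock:
                order.append(rank)   # rank 0 acts first, alone
        barrier.wait()               # all ranks meet here (distrib_barrier analogue)
        if rank != 0:
            with lock:
                order.append(rank)

    threads = [threading.Thread(target=worker, args=(r,)) for r in range(world_size)]
    for t in threads: t.start()
    for t in threads: t.join()
    return order

print(simulate_rank0_first()[0])  # 0: rank 0 always acts first
```

By the time the other ranks run, rank 0 has already created the file, so they hit the cached copy instead of racing to download it.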
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
```
import tensorflow.keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation
from tensorflow.keras.preprocessing.text import Tokenizer
import tensorflow as tf
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer
from nltk.probability import FreqDist
import pandas as pd
import time
motivational='''
The first and greatest victory is to conquer self Don’t stop until you’re proud.
Life shrinks or expands in proportion to one’s courage. Upgrade your conviction to match your destiny.
Tough times don’t last. Tough people do. It is during the hard times when the ‘hero’ within us is revealed.
Our greatest glory is not in never falling, but in rising every time we fall. As long as the mind can envision the fact that you can do something, you can do it, as long as you really believe 100 percent.
Don’t try to be perfect. Quitting lasts forever.
Set a goal so large that you can’t achieve it until you grow into the person who can. Think about what might go right.
The universe is a process. Try to be better than you were yesterday.
Remember it’s just a bad day, not a bad life. Take a deep breath, stay positive and know that things will get better.
The only person you are destined to become is the person you decide to be. Work hard, stay consistent, and be patient.
Courage is one step ahead of fear. Upgrade your conviction to match your destiny.
The path to success is to take massive, determined actions. When you face your struggles, you overcome them.
Don’t think about what might go wrong. Be so good they can’t ignore you.
The mind is the limit. Work hard, stay consistent, and be patient.
Goals may give focus, but dreams give power. You can be anything you want to be, do anything you set out to accomplish if you hold to that desire with singleness of purpose.
Use what you have. Never give up
Make the most of yourself….for that is all there is of you. You have to memorize to be disciplined.
The pain you feel today will be the strength you feel tomorrow. Willing is not enough; we must do.
Keep going Try to be better than you were yesterday.
Don’t downgrade your dream just to fit your reality. Work hard, stay consistent, and be patient.
The future belongs to those who believe in the beauty of their dreams. Nothing can be done without hope and confidence.
'''
motivational=motivational.lower()
motivational=motivational.split('\n')
motivational=list(set(motivational))
motivational.remove('')
demotivational='''Sex is mathematics. Individuality no longer an issue. What does intelligence signify? Define reason. Desire - meaningless. Intellect is not a cure. Justice is dead.
Just imagine how terrible it might have been if we’d been at all competent.
When you wish upon a falling star, your dreams can come true. Unless it's really a meteor hurtling to the Earth which will destroy all life. Then you're pretty much hosed no matter what you wish for. Unless it's death by meteorite.
There are no stupid questions, but there are a LOT of inquisitive idiots.
Nothing says "you're a loser" more than owning a motivational poster about being a winner.
Accept that you're just a product, not a gift.
Teach every child you meet the importance of forgiveness. It's our only hope of surviving their wrath once they realize just how badly we've screwed things up for them.
The United States was a big country where everybody wore funny t-shirts and ate too much.
You have to make the good out of the bad because that is all you have got to make it out of.
You can do anything you set your mind to when you have vision, determination, and an endless supply of expendable labor.
Happy people do not wake up for breakfast.
Life is only logical, and to think it's a gift is depressing.
Try & try until you cannot succeed.
Every dead body on Mount Everest was once a highly motivated person. Stay lazy my friends. It may save your life one day.
Furthermore, having lost faith in himself, he thought it his duty to undermine the nation's faith in itself.
If you're not a part of the solution, there's good money to be made in prolonging the problem.
The first step towards failure is trying.
Those who doubt your ability probably have a valid reason.
The best things in life are actually really expensive.
Dream is the only way for you to escape the miserable reality of your life.'''
demotivational=demotivational.lower()
demotivational=demotivational.split('\n')
tokenizer = RegexpTokenizer(r'\w+')
stopWords = set(stopwords.words('english'))
for i in range(len(motivational)):
    motivational[i]=tokenizer.tokenize(motivational[i])
for i in range(len(demotivational)):
    demotivational[i]=tokenizer.tokenize(demotivational[i])
demot=[]
mot=[]
for i in motivational:
tempmot=[]
for j in i:
if j not in stopWords:
tempmot.append(j)
mot.append(tempmot)
for i in demotivational:
tempdemot=[]
for j in i:
if j not in stopWords:
tempdemot.append(j)
demot.append(tempdemot)
for i in range(len(mot)):
mot[i] = nltk.FreqDist(mot[i])
for i in range(len(demot)):
demot[i] = nltk.FreqDist(demot[i])
df1=pd.DataFrame.from_dict(mot)
df1['class']=0
df2=pd.DataFrame.from_dict(demot)
df2['class']=1
final=pd.concat([df1,df2])
final=final.fillna(0.0)
final.tail()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(final.drop('class',axis=1)
, final['class'], test_size=0.25, random_state=42)
X_train=X_train.values
X_test=X_test.values
y_train=y_train.values
y_test=y_test.values
nodes=[8, 16, 32, 64, 128, 256, 512,1024]
timesnodes=[]
trainaccnodes=[]
testaccnodes=[]
layers=[2,3,4,5]
timeslayers=[]
trainacclayers=[]
testacclayers=[]
```
# 8 nodes
```
model=Sequential()
model.add(Dense(8,input_dim=X_train.shape[1],activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='sgd',loss='binary_crossentropy',metrics=['accuracy'])
t=time.time()
model.fit(X_train,y_train,epochs=100)
timesnodes.append(time.time()-t)
trainaccnodes.append(model.evaluate(X_train,y_train)[1])
testaccnodes.append(model.evaluate(X_test,y_test)[1])
```
# 16 nodes
```
model=Sequential()
model.add(Dense(16,input_dim=X_train.shape[1],activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='sgd',loss='binary_crossentropy',metrics=['accuracy'])
t=time.time()
model.fit(X_train,y_train,epochs=100)
timesnodes.append(time.time()-t)
trainaccnodes.append(model.evaluate(X_train,y_train)[1])
testaccnodes.append(model.evaluate(X_test,y_test)[1])
```
# 32 nodes
```
model=Sequential()
model.add(Dense(32,input_dim=X_train.shape[1],activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='sgd',loss='binary_crossentropy',metrics=['accuracy'])
t=time.time()
model.fit(X_train,y_train,epochs=100)
timesnodes.append(time.time()-t)
trainaccnodes.append(model.evaluate(X_train,y_train)[1])
testaccnodes.append(model.evaluate(X_test,y_test)[1])
```
# 64 nodes
```
model=Sequential()
model.add(Dense(64,input_dim=X_train.shape[1],activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='sgd',loss='binary_crossentropy',metrics=['accuracy'])
t=time.time()
model.fit(X_train,y_train,epochs=100)
timesnodes.append(time.time()-t)
trainaccnodes.append(model.evaluate(X_train,y_train)[1])
testaccnodes.append(model.evaluate(X_test,y_test)[1])
```
# 128 nodes
```
model=Sequential()
model.add(Dense(128,input_dim=X_train.shape[1],activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='sgd',loss='binary_crossentropy',metrics=['accuracy'])
t=time.time()
model.fit(X_train,y_train,epochs=100)
timesnodes.append(time.time()-t)
trainaccnodes.append(model.evaluate(X_train,y_train)[1])
testaccnodes.append(model.evaluate(X_test,y_test)[1])
```
# 256 nodes
```
model=Sequential()
model.add(Dense(256,input_dim=X_train.shape[1],activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='sgd',loss='binary_crossentropy',metrics=['accuracy'])
t=time.time()
model.fit(X_train,y_train,epochs=100)
timesnodes.append(time.time()-t)
trainaccnodes.append(model.evaluate(X_train,y_train)[1])
testaccnodes.append(model.evaluate(X_test,y_test)[1])
```
# 512 nodes
```
model=Sequential()
model.add(Dense(512,input_dim=X_train.shape[1],activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='sgd',loss='binary_crossentropy',metrics=['accuracy'])
t=time.time()
model.fit(X_train,y_train,epochs=100)
timesnodes.append(time.time()-t)
trainaccnodes.append(model.evaluate(X_train,y_train)[1])
testaccnodes.append(model.evaluate(X_test,y_test)[1])
```
# 1024 nodes
```
model=Sequential()
model.add(Dense(1024,input_dim=X_train.shape[1],activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='sgd',loss='binary_crossentropy',metrics=['accuracy'])
t=time.time()
model.fit(X_train,y_train,epochs=100)
timesnodes.append(time.time()-t)
trainaccnodes.append(model.evaluate(X_train,y_train)[1])
testaccnodes.append(model.evaluate(X_test,y_test)[1])
```
# 2 layers
```
model=Sequential()
model.add(Dense(32,input_dim=X_train.shape[1],activation='sigmoid'))
model.add(Dense(32,activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='sgd',loss='binary_crossentropy',metrics=['accuracy'])
t=time.time()
model.fit(X_train,y_train,epochs=100)
timeslayers.append(time.time()-t)
trainacclayers.append(model.evaluate(X_train,y_train)[1])
testacclayers.append(model.evaluate(X_test,y_test)[1])
```
# 3 layers
```
model=Sequential()
model.add(Dense(32,input_dim=X_train.shape[1],activation='sigmoid'))
model.add(Dense(32,activation='sigmoid'))
model.add(Dense(32,activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='sgd',loss='binary_crossentropy',metrics=['accuracy'])
t=time.time()
model.fit(X_train,y_train,epochs=100)
timeslayers.append(time.time()-t)
trainacclayers.append(model.evaluate(X_train,y_train)[1])
testacclayers.append(model.evaluate(X_test,y_test)[1])
```
# 4 layers
```
model=Sequential()
model.add(Dense(32,input_dim=X_train.shape[1],activation='sigmoid'))
model.add(Dense(32,activation='sigmoid'))
model.add(Dense(32,activation='sigmoid'))
model.add(Dense(32,activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='sgd',loss='binary_crossentropy',metrics=['accuracy'])
t=time.time()
model.fit(X_train,y_train,epochs=100)
timeslayers.append(time.time()-t)
trainacclayers.append(model.evaluate(X_train,y_train)[1])
testacclayers.append(model.evaluate(X_test,y_test)[1])
```
# 5 layers
```
model=Sequential()
model.add(Dense(32,input_dim=X_train.shape[1],activation='sigmoid'))
model.add(Dense(32,activation='sigmoid'))
model.add(Dense(32,activation='sigmoid'))
model.add(Dense(32,activation='sigmoid'))
model.add(Dense(32,activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='sgd',loss='binary_crossentropy',metrics=['accuracy'])
t=time.time()
model.fit(X_train,y_train,epochs=100)
timeslayers.append(time.time()-t)
trainacclayers.append(model.evaluate(X_train,y_train)[1])
testacclayers.append(model.evaluate(X_test,y_test)[1])
import matplotlib.pyplot as plt
layers=['2','3','4','5']
nodes=[str(i) for i in nodes]
from pylab import rcParams
import numpy as np
rcParams['figure.figsize'] = 9,5
import seaborn as sns
plt.title("Varying Number of Nodes")
plt.xlabel('Nodes')
plt.ylabel('Training accuracy')
sns.barplot(nodes,trainaccnodes)
import seaborn as sns
plt.title("Varying Number of Nodes")
plt.xlabel('Nodes')
plt.ylabel('Testing accuracy')
sns.barplot(nodes,testaccnodes)
plt.xlabel('Nodes')
plt.ylabel('Time (in seconds)')
sns.barplot(nodes,timesnodes)
plt.title("Varying Number of Nodes")
plt.title("Varying Number of layers")
plt.xlabel('Layers')
plt.ylabel('Training accuracy')
sns.barplot(layers,trainacclayers)
plt.title("Varying Number of layers")
plt.xlabel('Layers')
plt.ylabel('Testing accuracy')
sns.barplot(layers,testacclayers)
plt.xlabel('Layers')
plt.ylabel('Time (in seconds)')
sns.barplot(layers,timeslayers)
plt.title("Varying Number of Layers")
```
# Fully-Connected Neural Nets
In this notebook we will implement fully-connected networks using a modular approach. For each layer we will implement a `forward` and a `backward` function. The `forward` function will receive inputs, weights, and other parameters and will return both an output and a `cache` object storing data needed for the backward pass, like this:
```python
def layer_forward(x, w):
""" Receive inputs x and weights w """
# Do some computations ...
z = # ... some intermediate value
# Do some more computations ...
out = # the output
cache = (x, w, z, out) # Values we need to compute gradients
return out, cache
```
The backward pass will receive upstream derivatives and the `cache` object, and will return gradients with respect to the inputs and weights, like this:
```python
def layer_backward(dout, cache):
"""
Receive derivative of loss with respect to outputs and cache,
and compute derivative with respect to inputs.
"""
# Unpack cache values
x, w, z, out = cache
# Use values in cache to compute derivatives
dx = # Derivative of loss with respect to x
dw = # Derivative of loss with respect to w
return dx, dw
```
After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.
In addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.
```
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from deeplearning.classifiers.fc_net import *
from deeplearning.data_utils import get_CIFAR10_data
from deeplearning.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from deeplearning.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
```
# Affine layer: forward
Open the file `deeplearning/layers.py` and implement the `affine_forward` function.
Once you are done you can test your implementation by running the following:
```
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print('Testing affine_forward function:')
print('difference: ', rel_error(out, correct_out))
```
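For comparison, one possible way to write the forward pass is a single reshape followed by a matrix multiply. This is only a sketch, not necessarily the assignment's reference solution:

```python
import numpy as np

def affine_forward_sketch(x, w, b):
    # Flatten each of the N examples to a row, then apply the linear map.
    out = x.reshape(x.shape[0], -1) @ w + b
    cache = (x, w, b)        # everything the backward pass will need
    return out, cache

x = np.arange(6.0).reshape(2, 3)
w = np.ones((3, 2))
b = np.zeros(2)
out, _ = affine_forward_sketch(x, w, b)
print(out)  # [[ 3.  3.] [12. 12.]]
```

The reshape handles arbitrary input shapes like the `(N, 4, 5, 6)` tensors in the check above, since only the leading batch dimension is preserved.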
# Affine layer: backward
Now implement the `affine_backward` function and test your implementation using numeric gradient checking.
```
# Test the affine_backward function
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around 1e-10
print('Testing affine_backward function:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
```
# ReLU layer: forward
Implement the forward pass for the ReLU activation function in the `relu_forward` function and test your implementation using the following:
```
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 1e-8
print('Testing relu_forward function:')
print('difference: ', rel_error(out, correct_out))
```
# ReLU layer: backward
Now implement the backward pass for the ReLU activation function in the `relu_backward` function and test your implementation using numeric gradient checking. Note that the ReLU activation is not differentiable at 0, but typically we don't worry about this and simply assign either 0 or 1 as the derivative by convention.
```
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 1e-12
print('Testing relu_backward function:')
print('dx error: ', rel_error(dx_num, dx))
```
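Both ReLU passes fit in a few lines; a sketch using that convention (`relu_forward_sketch` and `relu_backward_sketch` are hypothetical names):

```python
import numpy as np

def relu_forward_sketch(x):
    out = np.maximum(0, x)
    return out, x                     # cache the input for the backward pass

def relu_backward_sketch(dout, cache):
    x = cache
    # Gradient flows only where the input was strictly positive;
    # at exactly 0 the derivative is taken to be 0 by convention.
    return dout * (x > 0)

x = np.array([[-1.0, 0.0, 2.0]])
out, cache = relu_forward_sketch(x)
print(out)                                           # [[0. 0. 2.]]
print(relu_backward_sketch(np.ones_like(x), cache))  # [[0. 0. 1.]]
```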
# "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file `deeplearning/layer_utils.py`.
For now take a look at the `affine_relu_forward` and `affine_relu_backward` functions, and run the following to numerically gradient check the backward pass:
```
from deeplearning.layer_utils import affine_relu_forward, affine_relu_backward
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print('Testing affine_relu_forward:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
```
# Loss layers: Softmax and SVM
Here we provide two loss functions that we will use to train our deep neural networks. You should understand how they work by looking at the implementations in `deeplearning/layers.py`.
You can make sure that the implementations are correct by running the following:
```
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print('Testing svm_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print('\nTesting softmax_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
```
# Two-layer network
Open the file `deeplearning/classifiers/fc_net.py` and complete the implementation of the `TwoLayerNet` class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. Run the cell below to test your implementation.
```
N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
std = 1e-2
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)
print ('Testing initialization ... ')
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'
print ('Testing test-time forward pass ... ')
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
[[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],
[12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],
[12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'
print ('Testing training loss (no regularization)')
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'
model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'
for reg in [0.0, 0.7]:
print ('Running numeric gradient check with reg = ', reg)
model.reg = reg
loss, grads = model.loss(X, y)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
print ('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
```
# Solver
Following a modular design, for this assignment we have split the logic for training models into a separate class from the models themselves.
Open the file `deeplearning/solver.py` and read through it to familiarize yourself with the API. After doing so, use a `Solver` instance to train a `TwoLayerNet` that achieves at least `50%` accuracy on the validation set.
```
model = TwoLayerNet()
solver = None
##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #
# 50% accuracy on the validation set. #
##############################################################################
optim_config = {}
optim_config['learning_rate'] = 0.001
solver = Solver(model, data, optim_config=optim_config, lr_decay=0.9, num_epochs=10)
solver.train()
##############################################################################
# END OF YOUR CODE #
##############################################################################
# Run this cell to visualize training loss and train / val accuracy, and save the log file of the
# experiment for submission.
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
solver.record_histories_as_npz('submission_logs/train_2layer_fc.npz')
```
# Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the `FullyConnectedNet` class in the file `deeplearning/classifiers/fc_net.py`.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
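The forward pass of such a network is just a loop of affine transforms with a ReLU between them and a plain affine layer producing the final scores. A standalone sketch (`mlp_forward` is a hypothetical name, and it omits the cache and backward pass you will need for training):

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass through a stack of affine+ReLU layers of arbitrary
    depth, with a plain affine layer (no ReLU) computing the scores."""
    h = x.reshape(x.shape[0], -1)
    for i, (w, b) in enumerate(zip(weights, biases)):
        h = h @ w + b
        if i < len(weights) - 1:      # ReLU on all but the last layer
            h = np.maximum(0, h)
    return h

rng = np.random.default_rng(0)
dims = [15, 20, 30, 10]               # input, two hidden layers, num classes
ws = [rng.normal(scale=5e-2, size=(d1, d2)) for d1, d2 in zip(dims, dims[1:])]
bs = [np.zeros(d) for d in dims[1:]]
scores = mlp_forward(rng.normal(size=(2, 15)), ws, bs)
print(scores.shape)  # (2, 10)
```

In `FullyConnectedNet` the same loop structure applies, except each step also stores a cache so the backward pass can walk the layers in reverse.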
## Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less.
```
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print ('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64)
loss, grads = model.loss(X, y)
print ('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print ('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
```
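For reference, the `rel_error` helper used throughout these checks is conventionally defined as below (a sketch — your notebook may already define it in an earlier cell):

```python
import numpy as np

def rel_error(x, y):
    """Maximum relative error between two arrays, guarded against division by zero."""
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
```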
As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
```
# TODO: Use a three-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
############################################################################
# TODO: Tune these parameters to get 100% train accuracy within 20 epochs. #
############################################################################
weight_scale = 2e-1
learning_rate = 1e-4
############################################################################
# END OF YOUR CODE #
############################################################################
model = FullyConnectedNet([100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
solver.record_histories_as_npz('submission_logs/overfit_3layer_fc.npz')
```
Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
```
## TODO: Use a five-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
############################################################################
# TODO: Tune these parameters to get 100% train accuracy within 20 epochs. #
############################################################################
learning_rate = 1e-3
weight_scale = 1e-1
############################################################################
# END OF YOUR CODE #
############################################################################
model = FullyConnectedNet([100, 100, 100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
solver.record_histories_as_npz('submission_logs/overfit_5layer_fc.npz')
```
# Inline question:
Did you notice anything about the comparative difficulty of training the three-layer net vs. training the five-layer net?
# Answer:
It was considerably harder to find working hyperparameters for the five-layer network.
The deeper network is much more sensitive to the weight initialization scale, so the range of usable `weight_scale` and `learning_rate` values is far narrower.
# Update rules
So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.
# SGD+Momentum
Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochastic gradient descent.
Open the file `deeplearning/optim.py` and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function `sgd_momentum` and run the following to check your implementation. You should see errors less than 1e-8.
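The expected values in the check below pin down the convention used here: the velocity accumulates raw gradients (`v <- mu*v + dw`) and the weights step against it (`w <- w - lr*v`). A minimal sketch consistent with that convention, assuming the `config`-dict API described in `deeplearning/optim.py`:

```python
import numpy as np

def sgd_momentum(w, dw, config=None):
    """SGD with momentum: the velocity accumulates gradients,
    and the weights step against the accumulated direction."""
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))

    v = config['momentum'] * v + dw           # accumulate the gradient
    next_w = w - config['learning_rate'] * v  # step against the velocity
    config['velocity'] = v
    return next_w, config
```

With `momentum=0` this reduces to vanilla SGD.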
```
from deeplearning.optim import sgd_momentum
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(0.5, 1.5, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-1, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[-0.504, -0.45805263, -0.41210526, -0.36615789, -0.32021053],
[-0.27426316, -0.22831579, -0.18236842, -0.13642105, -0.09047368],
[-0.04452632, 0.00142105, 0.04736842, 0.09331579, 0.13926316],
[ 0.18521053, 0.23115789, 0.27710526, 0.32305263, 0.369 ]])
expected_velocity = np.asarray([
[1.04, 1.10684211, 1.17368421, 1.24052632, 1.30736842],
[1.37421053, 1.44105263, 1.50789474, 1.57473684, 1.64157895],
[1.70842105, 1.77526316, 1.84210526, 1.90894737, 1.97578947],
[2.04263158, 2.10947368, 2.17631579, 2.24315789, 2.31 ]])
print ('next_w error: ', rel_error(next_w, expected_next_w))
print ('velocity error: ', rel_error(expected_velocity, config['velocity']))
```
Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge a bit faster.
```
num_train = 4000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
print ('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': 1e-2,
},
verbose=True)
solvers[update_rule] = solver
solver.train()
solver.record_histories_as_npz("submission_logs/optimizer_experiment_{}".format(update_rule))
  print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in solvers.items():
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
```
# RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file `deeplearning/optim.py`, implement the RMSProp update rule in the `rmsprop` function and implement the Adam update rule in the `adam` function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning 4 (2012).
[2] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015.
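As a reference sketch, the update rules consistent with the expected values in the checks below look like this (defaults follow the common conventions: RMSProp decay 0.99, Adam betas 0.9/0.999, with Adam incrementing `t` before bias correction):

```python
import numpy as np

def rmsprop(w, dw, config=None):
    """RMSProp: scale each step by a moving average of squared gradients."""
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('decay_rate', 0.99)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('cache', np.zeros_like(w))

    config['cache'] = (config['decay_rate'] * config['cache']
                       + (1 - config['decay_rate']) * dw ** 2)
    next_w = w - config['learning_rate'] * dw / (np.sqrt(config['cache']) + config['epsilon'])
    return next_w, config

def adam(w, dw, config=None):
    """Adam: bias-corrected moving averages of the gradient (m) and its square (v)."""
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-3)
    config.setdefault('beta1', 0.9)
    config.setdefault('beta2', 0.999)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('m', np.zeros_like(w))
    config.setdefault('v', np.zeros_like(w))
    config.setdefault('t', 0)

    config['t'] += 1  # increment the timestep before bias correction
    config['m'] = config['beta1'] * config['m'] + (1 - config['beta1']) * dw
    config['v'] = config['beta2'] * config['v'] + (1 - config['beta2']) * dw ** 2
    m_hat = config['m'] / (1 - config['beta1'] ** config['t'])
    v_hat = config['v'] / (1 - config['beta2'] ** config['t'])
    next_w = w - config['learning_rate'] * m_hat / (np.sqrt(v_hat) + config['epsilon'])
    return next_w, config
```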
```
# Test RMSProp implementation; you should see errors less than 1e-7.
from deeplearning.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print ('next_w error: ', rel_error(expected_next_w, next_w))
print ('cache error: ', rel_error(expected_cache, config['cache']))
# Test Adam implementation; you should see errors around 1e-7 or less.
from deeplearning.optim import adam
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
expected_t = 6
print ('next_w error: ', rel_error(expected_next_w, next_w))
print ('v error: ', rel_error(expected_v, config['v']))
print ('m error: ', rel_error(expected_m, config['m']))
print ('t error: ', rel_error(expected_t, config['t']))
```
Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules. As a sanity check, you should see that RMSProp and Adam typically obtain at least 45% training accuracy within 5 epochs.
```
learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
print ('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rates[update_rule]
},
verbose=True)
solvers[update_rule] = solver
solver.train()
solver.record_histories_as_npz("submission_logs/optimizer_experiment_{}".format(update_rule))
  print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in solvers.items():
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
```
# Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the `best_model` variable and the solver used in the `best_solver` variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the `BatchNormalization.ipynb` and `Dropout.ipynb` notebooks before completing this part, since those techniques can help you train powerful models.
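Random search over log-uniform ranges is a common way to organize this hunt. A self-contained sketch of the sampler (the ranges are illustrative assumptions, not tuned values):

```python
import random

def sample_hyperparams(n_trials, seed=0):
    """Log-uniformly sample (learning_rate, weight_scale, reg) settings."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        trials.append({
            'learning_rate': 10 ** rng.uniform(-4, -2),  # 1e-4 .. 1e-2
            'weight_scale': 10 ** rng.uniform(-2, -1),   # 1e-2 .. 1e-1
            'reg': 10 ** rng.uniform(-4, -1),            # 1e-4 .. 1e-1
        })
    return trials

# Each trial would be fed to FullyConnectedNet(..., weight_scale=t['weight_scale'],
# reg=t['reg']) and a Solver with optim_config={'learning_rate': t['learning_rate']},
# keeping whichever model scores best on the validation set.
```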
```
best_model = None
best_solver = None
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #
# find batch normalization and dropout useful. Store your best model in the #
# best_model variable and the solver used to train it in the best_solver #
# variable. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
```
# Test your model
Run your best model on the validation and test sets and record the training logs of the best solver. You should achieve above 50% accuracy on the validation set.
```
y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)
y_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)
val_acc = (y_val_pred == data['y_val']).mean()
test_acc = (y_test_pred == data['y_test']).mean()
print ('Validation set accuracy: ', val_acc)
print ('Test set accuracy: ', test_acc)
best_solver.record_histories_as_npz('submission_logs/best_fc_model.npz')
```
```
import pandas as pd
import numpy as np
file = "Resources/purchase_data.csv"
purchase_data = pd.read_csv(file)
# Purchasing Analysis
total_revenue = (purchase_data["Price"]).sum()
total_number_players = len(purchase_data["SN"].value_counts())
unique_items = len(purchase_data["Item ID"].unique())
unique_items
average_price = purchase_data["Price"].mean()
Total_Number_purchases = purchase_data["Purchase ID"].count()
Total_Revenue = purchase_data["Price"].sum()
# DataFrames
print("average_price :", "$" + str(average_price))
print("total_number_players:" +str(total_number_players))
Dataframe = pd.DataFrame({"Number of Unique Items" : [unique_items] ,"Average Purchase Price " : [average_price] , "Total Number of Purchases" : [Total_Number_purchases],
"Total Revenue" : [Total_Revenue]})
Dataframe.style.format({"average_price" : "{:,.2f}",
"Total_Revenue" : "{:,.2f}"})
#Percentage and Count of Male Players
gender_count = purchase_data.groupby("Gender")
count_gender = gender_count.nunique()["SN"]
Percentage = (count_gender / total_number_players) * 100
Percentage_DataFrame = pd.DataFrame({"Percentage Players": Percentage,
                                     "Total Count": count_gender})
Percentage_DataFrame.index.name = None
Percentage_DataFrame.style.format({"Percentage Players" : "{:,.2f}"})
### Age Demographics
purchase_data["Age"]
bins = [0, 10, 20, 25, 30, 35, 45, 200]
bins_group = ["<10", "10-20", "20-25", "25-30", "30-35", "35-45", "45+"]
purchase_data["Age Grouping"] = pd.cut(purchase_data["Age"],bins,labels=bins_group)
purchase_data["Age Grouping"].value_counts()
Age_Grouping = purchase_data.groupby("Age Grouping")
total_count_age = Age_Grouping["SN"].nunique()
Avge_Price = Age_Grouping["Price"].mean()
total_purchase = Age_Grouping['Price'].sum()
Avg_price_person = total_purchase / total_count_age
AgeGroup = pd.DataFrame({"Purchase Count" :total_count_age, "Average Purchase Price":Avge_Price,
"Total Purchase Value": total_purchase,
"Average Purchase Total per Person by Age Group" :Avg_price_person })
AgeGroup.dropna().style.format({"Average Purchase Price" : "{:,.2f}",
"Total Purchase Value" : "{:,.2f}",
"Average Purchase Total per Person by Age Group" :"{:,.2f}" })
# Count the total purchases by gender
purchase_count = gender_count["Purchase ID"].count()
purchase_count
# Average purchase
avg_p_price = gender_count["Price"].mean()
# Total purchase value by gender
purch_total = gender_count["Price"].sum()
# Average purchase total by gender divided by purchase count of unique shoppers
avg_p_per_person = purch_total/count_gender
Gender_analysis = pd.DataFrame({"Purchase Count" : purchase_count, "Average Purchase Price":avg_p_price,
"Total Purchase Value" :purch_total,
"Average Purchase Total per Person by Gender":avg_p_per_person })
Gender_analysis.style.format({"Average Purchase Price" : "{:,.2f}", "Total Purchase Value": "{:,.2f}",
"Average Purchase Total per Person by Gender" : "{:,.2f}"})
### Top Spenders
spenders = pd.DataFrame(purchase_data)
spenders_group = spenders.groupby(['SN'])
top_spenders = pd.DataFrame({"Purchase Count":spenders_group["Price"].count(), "Average Purchase Price":spenders_group["Price"].mean(),"Total Purchase Value":spenders_group["Price"].sum()})
top_spenders = top_spenders.sort_values("Total Purchase Value", ascending=False)
top_spenders["Average Purchase Price"] = top_spenders["Average Purchase Price"].map("${:.2f}".format)
top_spenders["Total Purchase Value"] = top_spenders["Total Purchase Value"].map("${:,.2f}".format)
top_spenders.head()
### Most Popular Items
popular_items = pd.DataFrame(purchase_data)
popular_grp = popular_items.groupby(['Item ID','Item Name'])
popular_grp = pd.DataFrame({"Purchase Count":popular_grp["Price"].count(), "Item Price":popular_grp["Price"].mean(),"Total Purchase Value":popular_grp["Price"].sum()})
popular_grp = popular_grp.sort_values("Purchase Count", ascending=False)
popular_grp["Item Price"] = popular_grp["Item Price"].map("${:.2f}".format)
popular_grp["Total Purchase Value"] = popular_grp["Total Purchase Value"].map("${:,.2f}".format)
popular_grp.head()
### Most Profitable Items
profitable_items = pd.DataFrame(purchase_data)
profitable_items_grp = profitable_items.groupby(['Item ID','Item Name'])
profitable_items = pd.DataFrame({"Purchase Count":profitable_items_grp["Price"].count(), "Item Price":profitable_items_grp["Price"].mean(),"Total Purchase Value":profitable_items_grp["Price"].sum()})
profitable_items = profitable_items.sort_values("Total Purchase Value", ascending=False)
profitable_items["Item Price"]= profitable_items["Item Price"].map("${:,.2f}".format )
profitable_items["Total Purchase Value"] = profitable_items["Total Purchase Value"].map("${:,.2f}".format)
profitable_items.head()
```
## How to Test
### Equivalence partitioning
Think hard about the different cases the code will run under: this is science, not coding!
We can't write a test for every possible input: this is an infinite amount of work.
We need to write tests to rule out different bugs. There's no need to separately test *equivalent* inputs.
Let's look at an example of this question outside of coding:
* Research Project : Evolution of agricultural fields in Saskatchewan from aerial photography
* In silico translation : Compute overlap of two rectangles
```
%matplotlib inline
from matplotlib.path import Path
import matplotlib.patches as patches
import matplotlib.pyplot as plt
```
Let's make a little fragment of matplotlib code to visualise a pair of fields.
```
def show_fields(field1, field2):
def vertices(left, bottom, right, top):
verts = [
(left, bottom),
(left, top),
(right, top),
(right, bottom),
(left, bottom),
]
return verts
codes = [Path.MOVETO, Path.LINETO, Path.LINETO, Path.LINETO, Path.CLOSEPOLY]
path1 = Path(vertices(*field1), codes)
path2 = Path(vertices(*field2), codes)
fig = plt.figure()
ax = fig.add_subplot(111)
patch1 = patches.PathPatch(path1, facecolor="orange", lw=2)
patch2 = patches.PathPatch(path2, facecolor="blue", lw=2)
ax.add_patch(patch1)
ax.add_patch(patch2)
ax.set_xlim(0, 5)
ax.set_ylim(0, 5)
show_fields((1.0, 1.0, 4.0, 4.0), (2.0, 2.0, 3.0, 3.0))
```
Here we can see that the area of overlap is the same as the area of the smaller field: 1.
We could now go ahead and write a subroutine to calculate that, and also write some test cases for our answer.
But first, let's consider the question abstractly: what other cases, *not equivalent to this one*, might there be?
For example, this case is still just a full overlap, and is sufficiently equivalent that it's not worth another test:
```
show_fields((1.0, 1.0, 4.0, 4.0), (2.5, 1.7, 3.2, 3.4))
```
But this case is no longer a full overlap, and should be tested separately:
```
show_fields((1.0, 1.0, 4.0, 4.0), (2.0, 2.0, 3.0, 4.5))
```
On a piece of paper, sketch now the other cases you think should be treated as non-equivalent. Some answers are below:
```
for _ in range(50):
print("Spoiler space")
show_fields((1.0, 1.0, 4.0, 4.0), (2, 2, 4.5, 4.5)) # Overlap corner
show_fields((1.0, 1.0, 4.0, 4.0), (2.0, 2.0, 3.0, 4.0)) # Just touching
show_fields((1.0, 1.0, 4.0, 4.0), (4.5, 4.5, 5, 5)) # No overlap
show_fields((1.0, 1.0, 4.0, 4.0), (2.5, 4, 3.5, 4.5)) # Just touching from outside
show_fields((1.0, 1.0, 4.0, 4.0), (4, 4, 4.5, 4.5)) # Touching corner
```
### Using our tests
OK, so how might our tests be useful?
Here's some code that **might** correctly calculate the area of overlap:
```
def overlap(field1, field2):
    left1, bottom1, right1, top1 = field1
    left2, bottom2, right2, top2 = field2
overlap_left = max(left1, left2)
overlap_bottom = max(bottom1, bottom2)
overlap_right = min(right1, right2)
overlap_top = min(top1, top2)
overlap_height = overlap_top - overlap_bottom
overlap_width = overlap_right - overlap_left
return overlap_height * overlap_width
```
So how do we check our code?
The manual approach would be to look at some cases by eye, run the code once, and check:
```
overlap((1.0, 1.0, 4.0, 4.0), (2.0, 2.0, 3.0, 3.0))
```
That looks OK.
But we can do better, we can write code which **raises an error** if it gets an unexpected answer:
```
assert overlap((1.0, 1.0, 4.0, 4.0), (2.0, 2.0, 3.0, 3.0)) == 1.0
assert overlap((1.0, 1.0, 4.0, 4.0), (2.0, 2.0, 3.0, 4.5)) == 2.0
assert overlap((1.0, 1.0, 4.0, 4.0), (2.0, 2.0, 4.5, 4.5)) == 4.0
assert overlap((1.0, 1.0, 4.0, 4.0), (4.5, 4.5, 5, 5)) == 0.0
print(overlap((1.0, 1.0, 4.0, 4.0), (4.5, 4.5, 5, 5)))
show_fields((1.0, 1.0, 4.0, 4.0), (4.5, 4.5, 5, 5))
```
What? Why is this wrong?
In our calculation, we are actually getting:
```
overlap_left = 4.5
overlap_right = 4
overlap_width = -0.5
overlap_height = -0.5
```
Both width and height are negative, resulting in a positive area.
The above code didn't take into account the non-overlap correctly.
It should be:
```
def overlap(field1, field2):
    left1, bottom1, right1, top1 = field1
    left2, bottom2, right2, top2 = field2
overlap_left = max(left1, left2)
overlap_bottom = max(bottom1, bottom2)
overlap_right = min(right1, right2)
overlap_top = min(top1, top2)
overlap_height = max(0, (overlap_top - overlap_bottom))
overlap_width = max(0, (overlap_right - overlap_left))
return overlap_height * overlap_width
assert overlap((1, 1, 4, 4), (2, 2, 3, 3)) == 1.0
assert overlap((1, 1, 4, 4), (2, 2, 3, 4.5)) == 2.0
assert overlap((1, 1, 4, 4), (2, 2, 4.5, 4.5)) == 4.0
assert overlap((1, 1, 4, 4), (4.5, 4.5, 5, 5)) == 0.0
assert overlap((1, 1, 4, 4), (2.5, 4, 3.5, 4.5)) == 0.0
assert overlap((1, 1, 4, 4), (4, 4, 4.5, 4.5)) == 0.0
```
Note that we reran our other tests to check that our fix didn't break something else (we call such breakage "fallout").
### Boundary cases
"Boundary cases" are an important area to test:
* Limit between two equivalence classes: edge and corner sharing fields
* Wherever indices appear, check values at ``0``, ``N``, ``N+1``
* Empty arrays:
``` python
atoms = [read_input_atom(input_atom) for input_atom in input_file]
energy = force_field(atoms)
```
* What happens if ``atoms`` is an empty list?
* What happens when a matrix/data-frame reaches one row, or one column?
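Returning to the fields example, the edge- and corner-sharing boundary cases translate directly into tests. A self-contained sketch (repeating the corrected `overlap` so the example runs on its own):

```python
def overlap(field1, field2):
    # Fields are (left, bottom, right, top), matching show_fields above.
    left1, bottom1, right1, top1 = field1
    left2, bottom2, right2, top2 = field2
    overlap_left = max(left1, left2)
    overlap_bottom = max(bottom1, bottom2)
    overlap_right = min(right1, right2)
    overlap_top = min(top1, top2)
    return max(0, overlap_top - overlap_bottom) * max(0, overlap_right - overlap_left)

# Boundary cases sit exactly on the limit between equivalence classes:
assert overlap((1, 1, 4, 4), (2, 2, 3, 4)) == 2.0   # edge-sharing: top edges touch
assert overlap((1, 1, 4, 4), (4, 4, 5, 5)) == 0.0   # corner-sharing: zero area
assert overlap((1, 1, 4, 4), (1, 1, 4, 4)) == 9.0   # identical fields
assert overlap((1, 1, 4, 4), (2, 3, 3, 3)) == 0.0   # degenerate zero-height field
```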
### Positive *and* negative tests
* **Positive tests**: code should give correct answer with various inputs
* **Negative tests**: code should crash as expected given invalid inputs, rather than lying
Bad input should be expected and should fail early and explicitly.
Testing should ensure that explicit failures do indeed happen.
### Raising exceptions
In Python, we can signal an error state by raising an error:
```
def I_only_accept_positive_numbers(number):
# Check input
if number < 0:
raise ValueError("Input " + str(number) + " is negative")
# Do something
I_only_accept_positive_numbers(5)
I_only_accept_positive_numbers(-5)
```
There are standard "Exception" types, like `ValueError` we can `raise`
We would like to be able to write tests like this:
```
assert I_only_accept_positive_numbers(-5) == # Gives a value error
```
But to do that, we need to learn about more sophisticated testing tools, called "test frameworks".
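With pytest, for example, such a negative test is written with `pytest.raises` (a sketch assuming pytest is installed; the function is repeated so the example is self-contained):

```python
import pytest

def i_only_accept_positive_numbers(number):
    # Check input
    if number < 0:
        raise ValueError("Input " + str(number) + " is negative")

def test_rejects_negative_numbers():
    # The test passes precisely when the expected ValueError is raised.
    with pytest.raises(ValueError):
        i_only_accept_positive_numbers(-5)
```

Running `pytest` on a file containing this test reports it as passing; if the function silently accepted `-5`, the test would fail.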
# Description for Modules
* **pandas** -> read our csv files
* **numpy** -> convert the data to a suitable form to feed into the classification model
* **seaborn** and **matplotlib** -> for visualizations
* **sklearn** -> to use logistic regression
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from google.colab import files
uploaded = files.upload()
from sklearn.linear_model import LogisticRegression
```
# Reading the "diabetes.csv"
# The following features have been provided to help us predict whether a person is diabetic or not:
* **Pregnancies**: Number of times pregnant
* **Glucose**: Plasma glucose concentration over 2 hours in an oral glucose tolerance test
* **BloodPressure**: Diastolic blood pressure (mm Hg)
* **SkinThickness**: Triceps skin fold thickness (mm)
* **Insulin**: 2-Hour serum insulin (mu U/ml)
* **BMI**: Body mass index (weight in kg/(height in m)2)
* **DiabetesPedigreeFunction**: Diabetes pedigree function (a function which scores likelihood of diabetes based on family history)
* **Age**: Age (years)
* **Outcome**: Class variable (0 if non-diabetic, 1 if diabetic)
```
diabetes_df = pd.read_csv("diabetes.csv")
diabetes_df.head(15)
diabetes_df.info()
```
In the above data, you can see that there are many missing values (encoded as zeros) in
* **Insulin**
* **SkinThickness**
* **BloodPressure**

We could replace the missing values with the mean of the respective feature.
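A sketch of that replacement on a tiny stand-in frame (in the notebook this would operate on `diabetes_df`; note the gaps here are zeros rather than NaN, so we convert them first):

```python
import numpy as np
import pandas as pd

# A tiny stand-in frame; in the notebook this would be diabetes_df.
df = pd.DataFrame({"Insulin": [0, 100, 200], "BloodPressure": [70, 0, 90]})

# Zeros in these columns are physiologically impossible, so treat them as
# missing, then fill each gap with the column mean of the observed values.
cols = ["Insulin", "BloodPressure"]
df[cols] = df[cols].replace(0, np.nan)
df[cols] = df[cols].fillna(df[cols].mean())
```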
```
corr = diabetes_df.corr()
print(corr)
sns.heatmap(corr,
xticklabels = corr.columns,
yticklabels = corr.columns,
vmin = 0, vmax = 0.5)
```
In the above heatmap, brighter colours indicate more correlation.
> **Glucose**, **# of pregnancies**, **BMI** and **age** have significant correlation with **outcome** variable.
> Other notable correlations appear between **age** -> **pregnancies**, **BMI** -> **skin thickness**, and **insulin** -> **skin thickness**.
```
outcome = diabetes_df["Outcome"]
outcome.head()
sns.set_theme(style = "darkgrid", palette = "deep")
sns.countplot(x = 'Outcome', data = diabetes_df)
```
```
sns.set_theme(style = "darkgrid", palette = "deep")
sns.barplot(x = 'Outcome', y = "Age", data = diabetes_df, saturation = 1.6)
```
# Data Preparation
Splitting the data into:
* **Training Data**
* **Test Data**
* **Check Data**
```
dfTrain = diabetes_df[:650]
dfTest = diabetes_df[650:750]
dfcheck = diabetes_df[750:]
```
**Separating label and features for both training and testing**
```
trainLabel = np.asarray(dfTrain['Outcome'])
trainData = np.asarray(dfTrain.drop("Outcome", axis=1))
testLabel = np.asarray(dfTest['Outcome'])
testData = np.asarray(dfTest.drop("Outcome", axis=1))
means = np.mean(trainData, axis = 0)
stds = np.std(trainData, axis = 0)
trainData = (trainData - means) / stds
testData = (testData - means) / stds
diabetesCheck = LogisticRegression()
diabetesCheck.fit(trainData, trainLabel)
accuracy = diabetesCheck.score(testData, testLabel)
print("accuracy = ", accuracy * 100, "%")
```
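The `dfcheck` slice set aside earlier is meant for exactly this kind of spot-check. A self-contained sketch of the pattern on synthetic data — the key point being that held-out rows must be normalized with the *training* means and stds, just as `testData` was above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X_train = rng.randn(200, 3)
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
X_check = rng.randn(5, 3)  # stand-in for the dfcheck rows

# Reuse the training statistics when normalizing held-out rows.
means, stds = X_train.mean(axis=0), X_train.std(axis=0)
model = LogisticRegression().fit((X_train - means) / stds, y_train)
probs = model.predict_proba((X_check - means) / stds)[:, 1]
```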
# sports-book-manager
## Example 1:
### Using the BookScraper class
Importing the scraper class and setting the domain and directory paths.
```
import sports_book_manager.book_scrape_class as bs
PointsBet = bs.BookScraper(domain=r'https://nj.pointsbet.com/sports',
directories={'NHL':r'/ice-hockey/NHL',
'NBA':r'/basketball/NBA'
}
)
```
Setting the element values for the scraper. Lines and Odds data are stored in the same container class as other data, so parent element values need to be set for them. `driver_path` is needed only if Selenium cannot find the chromedriver using `webdriver.Chrome()`.
```
PointsBet.update_html_elements(driver_path=..., ancestor_container='.facf5sk',
event_row=".f1oyvxkl", team_class='.fji5frh.fr8jv7a.f1wtz5iq', line_class='.fsu5r7i',
line_parent_tag='button', line_parent_attr='data-test', line_parent_val='Market0OddsButton',
odds_class='.fheif50', odds_parent_tag='button', odds_parent_attr='data-test',
odds_parent_val='Market0OddsButton')
```
Retrieving the PointsBet odds in a pandas dataframe.
```
data = PointsBet.retrieve_sports_book('NBA')
data
```
## Example 2:
### Using sports-book-manager functions to compare sports book wagers to model outputs
First we will pull an example of hockey wagers from the data folder in the sports-book-manager package.
```
from os import path
import sports_book_manager
import pandas as pd
data_path = path.join(path.abspath(sports_book_manager.__file__), '..', 'data')
df = pd.read_csv(f'{data_path}\\hockey_odds.csv')
df
```
For `implied_probability_calculator` and `model_probability` to work, the values must be numeric. This is a good practice to follow in any case, as we don't know what data type the scraper will assign to the data.
```
df['Odds'] = pd.to_numeric(df['Odds'])
df['Lines'] = pd.to_numeric(df['Lines'])
```
Using pandas' `apply` with a lambda, we can run `implied_probability_calculator` on every row to create a new column.
```
import sports_book_manager.implied_probability_calculator as ipc
df['implied_probability'] = df.apply(lambda x: ipc.implied_probability_calculator(x['Odds']), axis = 1)
df
```
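For context, the implied probability of American (moneyline) odds is conventionally computed as below. This is an assumption about what `implied_probability_calculator` does internally, not the package's actual source:

```python
def american_odds_implied_probability(odds):
    """Convert American odds to the book's implied win probability (hypothetical helper)."""
    if odds < 0:
        return -odds / (-odds + 100)  # favourites, e.g. -110 -> 0.5238...
    return 100 / (odds + 100)         # underdogs, e.g. +150 -> 0.40
```

Note the implied probabilities across both sides of a market sum to more than 1 — the excess is the book's margin (the "vig").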
Getting an example of model outputs from the data folder in the sports-book-manager package and merging the two dataframes together.
```
model = pd.read_csv(f'{data_path}\\model_output_example.csv')
df_join = df.join(model, lsuffix='Teams', rsuffix='team')
df_join = df_join.drop(columns=['team'])
```
We use the lambda function again to perform the model_probability calculations which evaluate the model's expected probability of a team winning at the margin set with the market line.
```
import sports_book_manager.model_probability as mp
df_join['model_probability'] = df_join.apply(lambda x: mp.model_probability(x['mean_win_margin'],
x['sd'], x['Lines']), axis = 1)
df_join
```
We can then format our pandas dataframe to compare the market's expected probability to the model's and find the bets the model feels best about.
```
df_join['difference'] = df_join.apply(lambda x: (x['model_probability']) - x['implied_probability'], axis=1)
df_join = df_join.sort_values(by=['difference'], ascending=False)
df_join
```
<a href="https://colab.research.google.com/github/jamestheengineer/data-science-from-scratch-Python/blob/master/Chapter_16.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# Only do this once per VM, otherwise you'll get multiple clones and nested directories
#!git clone https://github.com/jamestheengineer/data-science-from-scratch-Python.git
#%cd data-science-from-scratch-Python/
#!pip install import-ipynb
#import import_ipynb
# Logistic Regression
# Data we'll be using
tuples = [(0.7,48000,1),(1.9,48000,0),(2.5,60000,1),(4.2,63000,0),(6,76000,0),(6.5,69000,0),(7.5,76000,0),(8.1,88000,0),(8.7,83000,1),(10,83000,1),(0.8,43000,0),(1.8,60000,0),(10,79000,1),(6.1,76000,0),(1.4,50000,0),(9.1,92000,0),(5.8,75000,0),(5.2,69000,0),(1,56000,0),(6,67000,0),(4.9,74000,0),(6.4,63000,1),(6.2,82000,0),(3.3,58000,0),(9.3,90000,1),(5.5,57000,1),(9.1,102000,0),(2.4,54000,0),(8.2,65000,1),(5.3,82000,0),(9.8,107000,0),(1.8,64000,0),(0.6,46000,1),(0.8,48000,0),(8.6,84000,1),(0.6,45000,0),(0.5,30000,1),(7.3,89000,0),(2.5,48000,1),(5.6,76000,0),(7.4,77000,0),(2.7,56000,0),(0.7,48000,0),(1.2,42000,0),(0.2,32000,1),(4.7,56000,1),(2.8,44000,1),(7.6,78000,0),(1.1,63000,0),(8,79000,1),(2.7,56000,0),(6,52000,1),(4.6,56000,0),(2.5,51000,0),(5.7,71000,0),(2.9,65000,0),(1.1,33000,1),(3,62000,0),(4,71000,0),(2.4,61000,0),(7.5,75000,0),(9.7,81000,1),(3.2,62000,0),(7.9,88000,0),(4.7,44000,1),(2.5,55000,0),(1.6,41000,0),(6.7,64000,1),(6.9,66000,1),(7.9,78000,1),(8.1,102000,0),(5.3,48000,1),(8.5,66000,1),(0.2,56000,0),(6,69000,0),(7.5,77000,0),(8,86000,0),(4.4,68000,0),(4.9,75000,0),(1.5,60000,0),(2.2,50000,0),(3.4,49000,1),(4.2,70000,0),(7.7,98000,0),(8.2,85000,0),(5.4,88000,0),(0.1,46000,0),(1.5,37000,0),(6.3,86000,0),(3.7,57000,0),(8.4,85000,0),(2,42000,0),(5.8,69000,1),(2.7,64000,0),(3.1,63000,0),(1.9,48000,0),(10,72000,1),(0.2,45000,0),(8.6,95000,0),(1.5,64000,0),(9.8,95000,0),(5.3,65000,0),(7.5,80000,0),(9.9,91000,0),(9.7,50000,1),(2.8,68000,0),(3.6,58000,0),(3.9,74000,0),(4.4,76000,0),(2.5,49000,0),(7.2,81000,0),(5.2,60000,1),(2.4,62000,0),(8.9,94000,0),(2.4,63000,0),(6.8,69000,1),(6.5,77000,0),(7,86000,0),(9.4,94000,0),(7.8,72000,1),(0.2,53000,0),(10,97000,0),(5.5,65000,0),(7.7,71000,1),(8.1,66000,1),(9.8,91000,0),(8,84000,0),(2.7,55000,0),(2.8,62000,0),(9.4,79000,0),(2.5,57000,0),(7.4,70000,1),(2.1,47000,0),(5.3,62000,1),(6.3,79000,0),(6.8,58000,1),(5.7,80000,0),(2.2,61000,0),(4.8,62000,0),(3.7,64000,0),(4.1,85000,0),(2.3,51000,0),(3.5,58000,0),(0.9,43000,0)
,(0.9,54000,0),(4.5,74000,0),(6.5,55000,1),(4.1,41000,1),(7.1,73000,0),(1.1,66000,0),(9.1,81000,1),(8,69000,1),(7.3,72000,1),(3.3,50000,0),(3.9,58000,0),(2.6,49000,0),(1.6,78000,0),(0.7,56000,0),(2.1,36000,1),(7.5,90000,0),(4.8,59000,1),(8.9,95000,0),(6.2,72000,0),(6.3,63000,0),(9.1,100000,0),(7.3,61000,1),(5.6,74000,0),(0.5,66000,0),(1.1,59000,0),(5.1,61000,0),(6.2,70000,0),(6.6,56000,1),(6.3,76000,0),(6.5,78000,0),(5.1,59000,0),(9.5,74000,1),(4.5,64000,0),(2,54000,0),(1,52000,0),(4,69000,0),(6.5,76000,0),(3,60000,0),(4.5,63000,0),(7.8,70000,0),(3.9,60000,1),(0.8,51000,0),(4.2,78000,0),(1.1,54000,0),(6.2,60000,0),(2.9,59000,0),(2.1,52000,0),(8.2,87000,0),(4.8,73000,0),(2.2,42000,1),(9.1,98000,0),(6.5,84000,0),(6.9,73000,0),(5.1,72000,0),(9.1,69000,1),(9.8,79000,1),]
data = [list(row) for row in tuples]
xs = [[1.0] + row[:2] for row in data] # [1, experience, salary]
ys = [row[2] for row in data] # paid_account
from matplotlib import pyplot as plt
from Chapter_10 import rescale
from Chapter_15 import least_squares_fit, predict
from Chapter_08 import gradient_step
learning_rate = 0.001
rescaled_xs = rescale(xs)
beta = least_squares_fit(rescaled_xs, ys, learning_rate, 1000, 1)
# [0.26, 0.43, -0.43]
predictions = [predict(x_i, beta) for x_i in rescaled_xs]
plt.scatter(predictions, ys)
plt.xlabel("predicted")
plt.ylabel("actual")
plt.show()
# Logistic function is going to be better for this problem
import math
from Chapter_04 import Vector, dot

def logistic(x: float) -> float:
    return 1.0 / (1 + math.exp(-x))

def logistic_prime(x: float) -> float:
    y = logistic(x)
    return y * (1 - y)
def _negative_log_likelihood(x: Vector, y: float, beta: Vector) -> float:
    """The negative log likelihood for one data point"""
    if y == 1:
        return -math.log(logistic(dot(x, beta)))
    else:
        return -math.log(1 - logistic(dot(x, beta)))

from typing import List

def negative_log_likelihood(xs: List[Vector],
                            ys: List[float],
                            beta: Vector) -> float:
    return sum(_negative_log_likelihood(x, y, beta)
               for x, y in zip(xs, ys))
from Chapter_04 import vector_sum

def _negative_log_partial_j(x: Vector, y: float, beta: Vector, j: int) -> float:
    """
    The jth partial derivative for one data point.
    Here j is the index of the coefficient, not of the data point.
    """
    return -(y - logistic(dot(x, beta))) * x[j]

def _negative_log_gradient(x: Vector, y: float, beta: Vector) -> Vector:
    """The gradient for one data point."""
    return [_negative_log_partial_j(x, y, beta, j)
            for j in range(len(beta))]

def negative_log_gradient(xs: List[Vector],
                          ys: List[float],
                          beta: Vector) -> Vector:
    return vector_sum([_negative_log_gradient(x, y, beta)
                       for x, y in zip(xs, ys)])
# Applying the model. We'll split our data into a training set and a test set
from Chapter_11 import train_test_split
import random
import tqdm
random.seed(0)
x_train, x_test, y_train, y_test = train_test_split(rescaled_xs, ys, 0.33)
learning_rate = 0.01
# pick a random starting point
beta = [random.random() for _ in range(3)]
with tqdm.trange(5000) as t:
    for epoch in t:
        gradient = negative_log_gradient(x_train, y_train, beta)
        beta = gradient_step(beta, gradient, -learning_rate)
        loss = negative_log_likelihood(x_train, y_train, beta)
        t.set_description(f"loss: {loss:.3f} beta: {beta}")
from Chapter_10 import scale
means, stdevs = scale(xs)
beta_unscaled = [(beta[0]
- beta[1] * means[1] / stdevs[1]
- beta[2] * means[2] / stdevs[2]),
beta[1] / stdevs[1],
beta[2] / stdevs[2]]
# Should be [8.9, 1.6, -0.000288]
print(beta_unscaled)
# Goodness of fit
# Let's see what happens if we predict a paid account if the probability exceeds 50%
true_positives = false_positives = true_negatives = false_negatives = 0
for x_i, y_i in zip(x_test, y_test):
    prediction = logistic(dot(beta, x_i))
    if y_i == 1 and prediction >= 0.5:   # TP: paid and we predict paid
        true_positives += 1
    elif y_i == 1:                       # FN: paid and we predict unpaid
        false_negatives += 1
    elif prediction >= 0.5:              # FP: unpaid and we predict paid
        false_positives += 1
    else:                                # TN: unpaid and we predict unpaid
        true_negatives += 1
precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)
print(precision, recall)
predictions = [logistic(dot(beta, x_i)) for x_i in x_test]
plt.scatter(predictions, y_test, marker='+')
plt.xlabel("predicted probability")
plt.ylabel("actual outcome")
plt.title("Logistic Regression Predicted vs. Actual")
plt.show()
```
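As a quick sanity check, the analytic derivative `logistic_prime` can be compared against a central-difference approximation of `logistic`. This is a minimal, self-contained sketch that simply repeats the two definitions from the cell above:

```python
import math

def logistic(x: float) -> float:
    return 1.0 / (1 + math.exp(-x))

def logistic_prime(x: float) -> float:
    y = logistic(x)
    return y * (1 - y)

# Central-difference approximation of the derivative at several points
h = 1e-5
for x in [-2.0, 0.0, 0.5, 3.0]:
    numeric = (logistic(x + h) - logistic(x - h)) / (2 * h)
    assert abs(numeric - logistic_prime(x)) < 1e-8
print("logistic_prime agrees with the numerical derivative")
```

If `logistic_prime` had a bug (say, returning `y * y` instead of `y * (1 - y)`), the assertion would fail immediately, which makes this a cheap guard before running thousands of gradient steps.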
# Challenge - Gene Expression Classification - Workbook
### Introduction
This notebook walks through an end-to-end GPU machine learning workflow where cuDF is used for processing the data and cuML is used to train machine learning models on it.
After completing this exercise, you will be able to use cuDF to load data from disk, combine tables, scale features, use one-hot encoding, and even write your own GPU kernels to efficiently transform feature columns. Additionally, you will learn how to pass this data to cuML and how to train ML models on it. The trained model is saved and will be used for prediction.
You don't need to be familiar with cuDF or cuML already. Since our aim is to go from ETL to ML training, a detailed introduction is out of scope for this notebook. We recommend [Introduction to cuDF](../../CuDF/01-Intro_to_cuDF.ipynb) for additional information.
### Problem Statement:
We are trying to classify patients with acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL) using machine learning (classification) algorithms. This dataset comes from a proof-of-concept study published in 1999 by Golub et al. It showed how new cases of cancer could be classified by gene expression monitoring (via DNA microarray) and thereby provided a general approach for identifying new cancer classes and assigning tumors to known classes.
Here is the dataset link: https://www.kaggle.com/crawford/gene-expression.
## Here is the list of exercises and modules to work on in the lab:
- Convert the serial pandas computations to cuDF operations.
- Utilize cuML to accelerate the machine learning models.
- Experiment with Dask to create a cluster, distribute the data, and scale the operations.
You will start writing code from <a href='#dask1'>here</a>, but make sure you execute the data processing blocks to understand the dataset.
### 1. Data Processing
The first step is downloading the dataset and putting it in the data directory for use in this tutorial. Download the dataset from the link above and place it in the (host/data) folder. Now we will import the necessary libraries.
```
import numpy as np; print('NumPy Version:', np.__version__)
import pandas as pd
import sys
import sklearn; print('Scikit-Learn Version:', sklearn.__version__)
from sklearn import preprocessing
from sklearn.utils import resample
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score, roc_curve, auc
from sklearn.preprocessing import OrdinalEncoder, StandardScaler
import cudf
import cupy
# import for model building
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import mean_squared_error
from cuml.metrics.regression import r2_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn import linear_model
from sklearn.metrics import accuracy_score
from sklearn import model_selection, datasets
from cuml.dask.common import utils as dask_utils
from dask.distributed import Client, wait
from dask_cuda import LocalCUDACluster
import dask_cudf
from cuml.dask.ensemble import RandomForestClassifier as cumlDaskRF
from sklearn.ensemble import RandomForestClassifier as sklRF
```
We'll read the dataframe into y from the csv file, view its dimensions and observe the first 5 rows of the dataframe.
```
%%time
y = pd.read_csv('../../../data/actual.csv')
print(y.shape)
y.head()
```
Let's convert our target variable categories to numbers.
```
y['cancer'].value_counts()
# Recode label to numeric
y = y.replace({'ALL':0,'AML':1})
labels = ['ALL', 'AML'] # for plotting convenience later on
```
Read the training and test data provided in the challenge from the data folder. View their dimensions.
```
# Import training data
df_train = pd.read_csv('../../../data/data_set_ALL_AML_train.csv')
print(df_train.shape)
# Import testing data
df_test = pd.read_csv('../../../data/data_set_ALL_AML_independent.csv')
print(df_test.shape)
```
Observe the first few rows of the train dataframe and the data format.
```
df_train.head()
```
Observe the first few rows of the test dataframe and the data format.
```
df_test.head()
```
As we can see, the dataset has categorical values, but only in the columns whose names start with "call". We won't use these categorical columns, so we remove them.
```
# Remove "call" columns from training and testing data
train_to_keep = [col for col in df_train.columns if "call" not in col]
test_to_keep = [col for col in df_test.columns if "call" not in col]
X_train_tr = df_train[train_to_keep]
X_test_tr = df_test[test_to_keep]
```
Rename the columns and reindex for formatting purposes and ease in reading the data.
```
train_columns_titles = ['Gene Description', 'Gene Accession Number', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10',
'11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25',
'26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38']
X_train_tr = X_train_tr.reindex(columns=train_columns_titles)
test_columns_titles = ['Gene Description', 'Gene Accession Number','39', '40', '41', '42', '43', '44', '45', '46',
'47', '48', '49', '50', '51', '52', '53', '54', '55', '56', '57', '58', '59',
'60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '70', '71', '72']
X_test_tr = X_test_tr.reindex(columns=test_columns_titles)
```
We will take the transpose of the dataframe so that each row is a patient and each column is a gene.
```
X_train = X_train_tr.T
X_test = X_test_tr.T
print(X_train.shape)
X_train.head()
```
Now we clean the data, removing the extra columns and converting the values to numeric.
```
# Clean up the column names for training and testing data
X_train.columns = X_train.iloc[1]
X_train = X_train.drop(["Gene Description", "Gene Accession Number"]).apply(pd.to_numeric)
# Clean up the column names for Testing data
X_test.columns = X_test.iloc[1]
X_test = X_test.drop(["Gene Description", "Gene Accession Number"]).apply(pd.to_numeric)
print(X_train.shape)
print(X_test.shape)
X_train.head()
```
We have the 38 patients as rows in the training set, and the other 34 as rows in the testing set. Each of those datasets has 7129 gene expression features. But we haven't yet associated the target labels with the right patients. You will recall that all the labels are stored in a single dataframe. Let's split the labels so that the patients and labels match up across the training and testing dataframes: we subset the first 38 patients' cancer types for training and use the rest for testing.
```
X_train = X_train.reset_index(drop=True)
y_train = y[y.patient <= 38].reset_index(drop=True)
# Subset the rest for testing
X_test = X_test.reset_index(drop=True)
y_test = y[y.patient > 38].reset_index(drop=True)
```
Generate descriptive statistics to analyse the data further.
```
X_train.describe()
```
Clearly there is some variation in the scales across the different features. Many machine learning models work much better with data that's on the same scale, so let's create a scaled version of the dataset.
```
X_train_fl = X_train.astype(float, 64)
X_test_fl = X_test.astype(float, 64)
# Apply the same scaling to both datasets
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train_fl)
X_test = scaler.transform(X_test_fl) # note that we transform rather than fit_transform
```
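The key detail in the cell above is that `fit_transform` is only called on the training data; the test data is transformed using the training set's statistics. Here is a minimal pure-Python sketch of that distinction, using toy numbers rather than the gene data:

```python
# The mean and standard deviation are computed on the TRAINING data only,
# then reused to transform the test data.
train = [1.0, 2.0, 3.0, 4.0]
test = [2.0, 6.0]

mean = sum(train) / len(train)
var = sum((v - mean) ** 2 for v in train) / len(train)
std = var ** 0.5

scaled_train = [(v - mean) / std for v in train]
scaled_test = [(v - mean) / std for v in test]  # same mean/std as training

print(scaled_train)  # centered on 0 with unit variance
print(scaled_test)   # NOT necessarily centered: it reuses the training statistics
```

Fitting the scaler on the test set instead would leak information about the test distribution into the pipeline, which is why `transform` rather than `fit_transform` is used there.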
<a id='dask1'></a>
### 2. Conversion to CuDF Dataframe
Convert the pandas dataframes to cuDF dataframes to carry out the cuML tasks that follow.
```
#Modify the code in this cell
%%time
X_cudf_train = cudf.DataFrame() #Pass X train dataframe here
X_cudf_test = cudf.DataFrame() #Pass X test dataframe here
y_cudf_train = cudf.DataFrame() #Pass y train dataframe here
#y_cudf_test = cudf.Series(y_test.values) #Pass y test dataframe here
```
### 3. Model Building
#### Dask Integration
We will try using the Random Forests Classifier and implement using CuML and Dask.
#### Start Dask cluster
```
#Modify the code in this cell
# This will use all GPUs on the local host by default
cluster = LocalCUDACluster() #Set 1 thread per worker using arguments to cluster
c = Client() #Pass the cluster as an argument to Client
# Query the client for all connected workers
workers = c.has_what().keys()
n_workers = len(workers)
n_streams = 8 # Performance optimization
```
#### Define Parameters
In addition to the number of examples, random forest fitting performance depends heavily on the number of columns in a dataset and (especially) on the maximum depth to which trees are allowed to grow. Lower `max_depth` values can greatly speed up fitting, though going too low may reduce accuracy.
```
# Random Forest building parameters
max_depth = 12
n_bins = 16
n_trees = 1000
```
#### Distribute data to worker GPUs
```
X_train = X_train.astype(np.float32)
X_test = X_test.astype(np.float32)
y_train = y_train.astype(np.int32)
y_test = y_test.astype(np.int32)
n_partitions = n_workers
def distribute(X, y):
    # First convert to cudf (with real data, you would likely load in cuDF format to start)
    X_cudf = cudf.DataFrame.from_pandas(pd.DataFrame(X))
    y_cudf = cudf.Series(y)
    # Partition with Dask
    # In this case, each worker will train on 1/n_partitions fraction of the data
    X_dask = dask_cudf.from_cudf(X_cudf, npartitions=n_partitions)
    y_dask = dask_cudf.from_cudf(y_cudf, npartitions=n_partitions)
    # Persist to cache the data in active memory
    X_dask, y_dask = \
        dask_utils.persist_across_workers(c, [X_dask, y_dask], workers=workers)
    return X_dask, y_dask
#Modify the code in this cell
X_train_dask, y_train_dask = distribute() #Pass train data as arguments here
X_test_dask, y_test_dask = distribute() #Pass test data as arguments here
```
#### Create the Scikit-learn model
Since cuML's multi-node, multi-GPU random forest has no direct scikit-learn equivalent, we will train a standard single-node scikit-learn random forest on the CPU for comparison.
```
%%time
# Use all available CPU cores
skl_model = sklRF(max_depth=max_depth, n_estimators=n_trees, n_jobs=-1)
skl_model.fit(X_train, y_train.iloc[:,1])
```
#### Train the distributed cuML model
```
#Modify the code in this cell
%%time
cuml_model = cumlDaskRF(max_depth=max_depth, n_estimators=n_trees, n_bins=n_bins, n_streams=n_streams)
cuml_model.fit() # Pass X and y train dask data here
wait(cuml_model.rfs) # Allow asynchronous training tasks to finish
```
#### Predict and check accuracy
```
#Modify the code in this cell
skl_y_pred = skl_model.predict(X_test)
cuml_y_pred = cuml_model.predict().compute().to_array() #Pass the X test dask data as argument here
# Due to randomness in the algorithm, you may see slight variation in accuracies
print("SKLearn accuracy: ", accuracy_score(y_test.iloc[:,1], skl_y_pred))
print("CuML accuracy: ", accuracy_score()) #Pass the y test dask data and predicted values from CuML model as argument here
```
<a id='ex4'></a><br>
### 4. CONCLUSION
Let's compare the performance of our solution!
| Algorithm | Implementation | Accuracy | Time | Algorithm | Implementation | Accuracy | Time |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
Write down your observations and compare the CuML and Scikit learn scores. They should be approximately equal. We hope that you found this exercise exciting and beneficial in understanding RAPIDS better. Share your highest accuracy and try to use the unique features of RAPIDS for accelerating your data science pipelines. Don't restrict yourself to the previously explained concepts, but use the documentation to apply more models and functions and achieve the best results.
### 5. References
<p xmlns:dct="http://purl.org/dc/terms/">
<a rel="license"
href="http://creativecommons.org/publicdomain/zero/1.0/">
<center><img src="http://i.creativecommons.org/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" /></center>
</a>
</p>
- The dataset is licensed under a CC0: Public Domain license.
- Molecular Classification of Cancer: Class Discovery and Class Prediction by Gene Expression. Science 286:531-537. (1999). Published: 1999.10.14. T.R. Golub, D.K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J.P. Mesirov, H. Coller, M. Loh, J.R. Downing, M.A. Caligiuri, C.D. Bloomfield, and E.S. Lander
## Licensing
This material is released by OpenACC-Standard.org, in collaboration with NVIDIA Corporation, under the Creative Commons Attribution 4.0 International (CC BY 4.0).
```
# example code
# import the csv (comma-separated values) library
import csv
# create and write ("w") to a file (if it exists it will be overwritten)
file = open("Stars.csv", "w")
newRecord = "Brian,73,Taurus\n"
file.write(str(newRecord))
file.close()
# append data points ("a") - means to add something
file = open("Stars.csv", "a")
name = input("Enter name: ")
age = input("Enter age: ")
star = input("Enter star sign: ")
newRecord = name + ", " + age + ", " + star + "\n"
file.write(str(newRecord))
file.close()
# read file ("r")
file = open("Stars.csv", "r")
for row in file:
    print(row)
# read specific rows (reads the second line == index 1)
file = open("Stars.csv", "r")
reader = csv.reader(file)
rows = list(reader)
print(rows[1])
# look for a specific line
file = open("Stars.csv", "r")
search = input("what are you looking for? ")
reader = csv.reader(file)
for row in file:
    if search in str(row):
        print(row)
# challenge 111 (create a new csv file)
# import csv library
import csv
# write new file
file = open("Books.csv", "w")
newRecord = "To Kill a Mockingbird, Harper Lee, 1960\n"
file.write(str(newRecord))
newRecord = "A Brief History of Time, Stephen Hawking, 1988\n"
file.write(str(newRecord))
newRecord = "The Great Gatsby, F. Scott Fitzgerald, 1925\n"
file.write(str(newRecord))
newRecord = "The Man Who Mistook His Wife for a Hat, Oliver Sacks, 1985\n"
file.write(str(newRecord))
newRecord = "Pride and Prejudice, Jane Austen, 1813\n"
file.write(str(newRecord))
file.close()
# challenge 112
file = open("Books.csv", "a")
book = input("book title: ")
author = input("book author: ")
published = input("published: ")
newRecord = book + ", " + author + ", " + published + "\n"
file.write(str(newRecord))
file.close()
file = open("Books.csv", "r")
for row in file:
    print(row)
# challenge 113 (adding to the list)
import csv
# how many records to add
how_many = int(input("how many books do you want to add? "))
file = open("Books.csv", "a")
for i in range(0, how_many):
    book = input("book title: ")
    author = input("book author: ")
    published = input("published: ")
    newRecord = book + ", " + author + ", " + published + "\n"
    file.write(str(newRecord))
file.close()
# search for specific line content and print the whole line
print("---" * 5)
file = open("Books.csv", "r")
search_author = input("which author are you interested in? ")
reader = csv.reader(file)
# go through the whole document
for row in file:
    # look for the specific value
    if search_author in str(row):
        print(row)
# challenge 114
# import the library for csv
import csv
# range, by user input
start = int(input("start year: "))
end = int(input("end year: "))
# the data is stored as str, therefore it must be kept in a temporary list
# open the file and save it as a list
file = list(csv.reader(open("Books.csv")))
# create an empty list
tempo_list = []
# all the rows in the file will be saved in the list (tempo_list)
for row in file:
    tempo_list.append(row)
x = 0
for row in tempo_list:
    if int(tempo_list[x][2]) >= start and int(tempo_list[x][2]) <= end:
        print(tempo_list[x])
    x = x + 1
# challenge 115
import csv
file = open("Books.csv", "r")
row_index = 1
for row in file:
    print(row_index, ": ", row)
    row_index = row_index + 1
# challenge 116 (very hard!)
import csv
# import csv data into a list
file = list(csv.reader(open("Books.csv")))
# create an empty list (only for temporary usage)
bookies = []
# extend the list with all the lines in the csv file
for row in file:
    bookies.append(row)
print("---" * 20)
print("original list: ")
print("---" * 20)
for i in bookies:
    print(i)
# delete: ask which line
print("---" * 20)
delete = int(input("which row do you want to delete from the list? "))
del bookies[delete]
print("---" * 20)
print("list w/o picked line: ")
print("---" * 20)
for i in bookies:
    print(i)
print("---" * 20)
alter = int(input("which line do you want to change? "))
where = int(input("which index: "))
new = input("what shall be the new value? ")
bookies[alter][where] = new
print("---" * 20)
print("list w/o picked line and with changed value: ")
print("---" * 20)
for i in bookies:
    print(i)
print("---" * 20)
print("the list is now being stored in the original file")
print("---" * 20)
file = open("Books.csv", "w")
x = 0
for row in bookies:
    # rebuild each record from the temporary list, by index
    newrecord = bookies[x][0] + ", " + bookies[x][1] + ", " + bookies[x][2] + "\n"
    file.write(newrecord)
    x = x + 1
file.close()
# challenge 117 (math quiz) (should have two questions)
import random
import csv
# creation of a new file
file = open("Math.csv", "w")
newRecord = "We are giving random math problems to people: \n"
file.write(str(newRecord))
file.close()
# coming up with the math problem
num_1 = random.randint(-50, 50)
num_2 = random.randint(-50, 50)
score = 0
correct_answer = num_1 + num_2
name = input("what is your name: ")
print("""Dear %s, number one is %d, and number two is %d.""" % (name, num_1, num_2))
correct = "correct"
your_answer = int(input("what is the sum: "))
if your_answer == correct_answer:
    score = score + 1
    correct = "correct"
    print("Correct! we add one point to your score, which is now: ", score)
else:
    correct = "wrong"
    print("unfortunately, that is not correct... ")
# adding to the csv file
file = open("Math.csv", "a")
addings = name + ", " + str(num_1) + "+" + str(num_2) + "=" + str(correct_answer) + " (correct answer), " + name + " said: " + str(your_answer) + ", that is " + correct + "\n"
file.write(str(addings))
file.close()
```
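The scripts above build each CSV row by hand with string concatenation, which breaks as soon as a field itself contains a comma. The standard library's `csv.writer` handles quoting automatically. A small sketch (the file name `Books2.csv` is just an example):

```python
import csv

# Write records with csv.writer, which quotes fields containing commas for us
with open("Books2.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["To Kill a Mockingbird", "Harper Lee", "1960"])
    writer.writerow(["Goodnight, Moon", "Margaret Wise Brown", "1947"])  # comma in title

# Read the file back; the quoted comma survives as part of the title field
with open("Books2.csv", newline="") as f:
    rows = list(csv.reader(f))
print(rows[1][0])  # Goodnight, Moon
```

Using the `with` statement also closes the file automatically, so the explicit `file.close()` calls from the challenges above become unnecessary.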
# Preprocessing Structured Data
Before we actually feed the data into any deep learning system we should look through it carefully. In addition to the kinds of big-picture problems that might arise in collecting data from a noisy world, we need to look out for missing values, strange outliers, and potential errors in the data. The data doesn't have to be completely error free, although obviously that would be best. Frequently, with the size of data we're dealing with, it is not realistic to completely scrub the data of any errors.
Once we have a collection of data that has a tolerable amount of errors (ideally error free, though that does not HAVE to be the case) we have to transform it into a deep learning friendly format. There are a number of tricks that machine learning practitioners apply to get better results from the same data.
It's also wise to explore the data and look for interesting outliers, correlation between different parts of the data, and other anomalies, oddities, and trends. Of course, we're hoping that our deep learning system can tease these out even better than we could—but that's not a good reason to shirk your own responsibility to understand the dataset. Sophisticated as they are, neural nets are still just tools, and understanding the data can help us hone our tools in the areas where they'll be most successful.
For this lab we're going to use a public domain dataset from Kaggle. You can find the dataset here:
https://www.kaggle.com/new-york-city/nyc-property-sales
To run this code you'll need to download and unzip that data.
There is useful supporting information about this dataset as well at the following two URLs:
https://www1.nyc.gov/assets/finance/downloads/pdf/07pdf/glossary_rsf071607.pdf
https://www1.nyc.gov/assets/finance/jump/hlpbldgcode.html
The dataset is a record of every building/condo/apartment that was sold in New York City over a 12 month period.
```
# Pandas is a fantastic and powerful tool for working with structured data
# it's the best of spreadsheets + python, and it has quickly become a go to
# library for data scientists in python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Your path may vary, change this appropriately
path_to_ny_sales = 'datasets/nyc-rolling-sales.csv'
# One of the things we love about pandas is that it's easy to load CSV data
# into a "data frame"
sales_df = pd.read_csv(path_to_ny_sales)
# And, it makes it easy to take a look at the first n items:
sales_df.head(5)
# And a summary with bundles of useful information on the numerical fields
sales_df.describe()
# Sometimes we get some unexpected datatypes when loading data
for col, dtype in zip(sales_df.columns, sales_df.dtypes):
    print(col, dtype)
# Look at these types... many are wrong or misleading.
# The first two columns are... kind of irrelevant. The first is just the index for the data,
# the second is named 'Unnamed: 0', which doesn't sound important. Lets look at it anyway:
# We can check if there is ever a duplicate value in that column:
print(any(sales_df['Unnamed: 0'].duplicated()))
# And we can see which ones. This syntax often confuses Pandas newbs
# .duplicated() returns a parallel dataframe with one column set to
# True if the value in the 'Unnamed: 0' column is duplicated, false otherwise
# for every entry in the dataframe. Using the == comparison operator like this
# within the [] of a dataframe access acts as a filter.
multi_sale_units = sales_df[sales_df['Unnamed: 0'].duplicated() == True]
multi_sale_units
# So there were ~84,500 sales records, and 57,812 records where the 'Unnamed: 0' appeared
# in more than one record. Lets look at ONE such value:
building_8413_records = sales_df[sales_df['Unnamed: 0'] == 8413]
building_8413_records
# Well... the duplicate values suggest it's not an ID, my first hypothesis. Lets plot a histogram
# and see if it's revealing.
sales_df.hist(column='Unnamed: 0', bins=100)
plt.show()
# weird. The data is undocumented, and has a strange distribution.
# Lets see if it correlates with anything?
sales_df.corrwith(sales_df['Unnamed: 0'])
# There is a very weak correlation with block and zip code... which is spurious, because those are both
# actually categorical columns, not numerical columns. Lets see if it has any correlation with what we
# CARE about specifically:
sales_df['SALE PRICE'].corr(sales_df['Unnamed: 0'])
# Uh oh — looks like we've got some problems in our sale amount data...
# Lets take a look:
sales_df['SALE PRICE']
# Looks like the data is a string type, and sometimes has a value of -
# The documentation suggests the - value means that there was no sale
# just a property transfer for nothing, such as an inheritance.
# Lets try to coerce the data to numeric where possible:
coerced_sales = pd.to_numeric(sales_df['SALE PRICE'], errors='coerce')
# Values that cannot be coerced are changed to Not a Number (NaN).
# We can use this code to examine those values:
only_non_numerics = sales_df['SALE PRICE'][coerced_sales.isna()]
# And this to print all the unique values from only_non_numerics
only_non_numerics.unique()
# So, indeed, the only value that wasn't a number as a string was the ' - ' value.
# good to know. Lets go ahead and coerce them all
sales_df['SALE PRICE'] = pd.to_numeric(sales_df['SALE PRICE'], errors='coerce')
sales_df['SALE PRICE'] = sales_df['SALE PRICE'].fillna(0)
# Now we should be able to check the correlation we wanted to originally:
sales_df['SALE PRICE'].corr(sales_df['Unnamed: 0'])
# So... I'm going to go out on a limb and say 'Unnamed: 0' is a junk column. Lets delete it
# along with a few others that we don't want to use.
sales_df.columns
sales_df = sales_df.drop(columns=[
'Unnamed: 0',
'ADDRESS', # Hard to parse. Block/zip/borough/neighborhood capture all the value we need.
'APARTMENT NUMBER', # Likely irrelevant to the price. Ought to be categorical, which would make the data large.
'SALE DATE', # Everything was within a 12 month period, likely irrelevant and hard to parse.
'LOT' # A lot is a unique identifier within a block, and categorical. Not worth it.
])
# Look again with dropped columns
sales_df.head(5)
sales_df.describe()
# Two other columns should be numeric, but are objects. Lets look at them too:
convert_to_numeric = [
'LAND SQUARE FEET',
'GROSS SQUARE FEET'
]
for col in convert_to_numeric:
    coerced = pd.to_numeric(sales_df[col], errors='coerce')
    only_non_numerics = sales_df[col][coerced.isna()]
    # And this to print all the unique values from only_non_numerics
    print(col, only_non_numerics.unique())
# So... similarly there are missing values. But, unlike the sale data, we don't have
# any clues about what this means, and it's hard to imagine a building that
# occupies zero square feet... We'll apply another common tactic called "imputation":
# we're just going to use the mean value when there is missing data. It's better than nothing,
# even though it may be wrong.
from sklearn.impute import SimpleImputer
# First lets just coerce the values to nan
for col in convert_to_numeric:
    coerced = pd.to_numeric(sales_df[col], errors='coerce')
    sales_df[col] = coerced
    sales_df[col] = sales_df[col].astype('float')
# Then, we can use the Imputer to fill in any missing values
imputer = SimpleImputer(missing_values = np.nan, strategy = 'mean')
# Only fit it on our two relevant columns, to save time
imputer.fit(sales_df[convert_to_numeric])
imputed_values = imputer.transform(sales_df[convert_to_numeric])
# Now replace our old Series with the new imputed values.
sales_df['LAND SQUARE FEET'] = imputed_values[:, 0]
sales_df['GROSS SQUARE FEET'] = imputed_values[:, 1]
sales_df.describe()
# One really cool and helpful thing we can do in pandas is checkout the correlation matrix:
correlation_matrix = sales_df.corr()
plt.matshow(correlation_matrix)
plt.xticks(range(len(correlation_matrix.columns)), correlation_matrix.columns, rotation='vertical');
plt.yticks(range(len(correlation_matrix.columns)), correlation_matrix.columns);
plt.show()
# We can see that everything perfectly correlates with itself, obviously.
# Some of this is still spurious, since for example ZIP CODE seems to correlate
# weakly with SALE PRICE. But it's actually a categorical value, not a numeric one.
# Lets inform pandas that these values ought to be considered categorical.
categorical_columns = [
'BOROUGH',
'BLOCK',
'ZIP CODE',
'TAX CLASS AT TIME OF SALE'
]
for c in categorical_columns:
    sales_df[c] = sales_df[c].astype('category')
# Try the matrix again:
correlation_matrix = sales_df.corr()
plt.matshow(correlation_matrix)
plt.xticks(range(len(correlation_matrix.columns)), correlation_matrix.columns, rotation='vertical');
plt.yticks(range(len(correlation_matrix.columns)), correlation_matrix.columns);
plt.show()
# Not surprising that total units seems to correlate most with price.
# Interesting that residential units seems more correlated than commercial
# What haven't we looked at...
sales_df.columns
sales_df['EASE-MENT'].unique()
# It only has one value, junk it.
sales_df = sales_df.drop(columns=['EASE-MENT'])
# Lets plot two interesting charts:
for col, dtype in zip(sales_df.columns, sales_df.dtypes):
    if dtype not in ['float', 'int', 'float64', 'int64']: continue
    print(col)
    sales_df.boxplot(column=[col])
    sales_df.hist(column=[col])
    plt.tight_layout()
    plt.show()
```
Something to note about these charts: all of our numerical data seems to have a handful of extreme outliers. This might not be a problem, because the outliers are likely correlated. That is, the building with 1000+ units is probably also one of the sale price outliers. But it does make the histograms fairly unhelpful.
We could consider pruning these outliers before going ahead with the rest of this data processing. Let's use some rough-and-tumble outlier detection code from Stack Overflow and replot.
```
from scipy import stats
import numpy as np
# Re-plot each numeric column with the outliers filtered out:
for col, dtype in zip(sales_df.columns, sales_df.dtypes):
    if dtype not in ['float', 'int', 'float64', 'int64']: continue
    print(col)
    # Quick and dirty outlier filtering: anything over 2 standard deviations
    # from the mean is filtered out.
    filtered_col = sales_df[col][np.abs(stats.zscore(sales_df[col])) < 2]
    filtered_col.plot.box()
    plt.show()
    filtered_col.hist(bins=10)
    plt.show()
```
# Cleaning vs Preparing
What we've done above is mostly just cleaning the data: we looked for missing values and did some spot/sanity checks. We did one thing that you might consider preparing: making some columns categorical. In addition to making sure the data is clean and error-free, it's common practice to prepare the data so that it plays nicely with neural networks. Two common examples are centering the mean at 0 and normalizing the range to (0 to 1) or (-1 to 1).
Why? Consider this: year built, square feet, and total units are all going to impact the sale price. One of those might be more impactful than another, but in the end our neural network is doing a bunch of complex addition and multiplication with those values. Year is always going to be in a range of roughly 1900 to 2017, while units are almost always between 0 and 50 or so. 1900, when used as a multiplicative scalar, is going to have a bigger impact than 50.
For this and other reasons, it's common to normalize the data so that every datapoint is reduced to its place within the distribution, and to center that distribution between -1 and 1 or 0 and 1. Let's normalize all our numeric values to be between 0 and 1. Note that there are other scaling choices we could make; see the reading resources for this section.
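As a quick illustration of what min-max scaling computes per column, the arithmetic can be written out by hand. The year values below are made up for illustration:

```
import numpy as np

# Hypothetical column of year-built values
years = np.array([1900.0, 1950.0, 2000.0, 2017.0])

# Min-max scaling: x' = (x - min) / (max - min), which maps the column into [0, 1]
scaled = (years - years.min()) / (years.max() - years.min())
print(scaled)  # first entry is 0.0, last entry is 1.0
```

This is exactly the per-column transform that sklearn's `MinMaxScaler` applies in the next cell, just without remembering the min/max for reuse on new data.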
```
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
# Note that we are NOT going to scale "sale price" because
# ultimately that will be our target value. We still need the
# label to be in the format we wish to predict.
cols_to_scale = [
'RESIDENTIAL UNITS',
'COMMERCIAL UNITS',
'TOTAL UNITS',
'LAND SQUARE FEET',
'GROSS SQUARE FEET',
'YEAR BUILT'
]
scaled_cols = scaler.fit_transform(sales_df[cols_to_scale])
# Wow, was it really that easy?
scaled_cols
# So, we just got back an NDArray, and we need to put these
# columns back into a dataframe.
scaled_df = sales_df.copy(deep=True)
for i, col in enumerate(cols_to_scale):
    scaled_df[col] = scaled_cols[:, i]
scaled_df.head(5)
scaled_df.describe()
# Even though we labeled some columns as "category" we still need to one-hot
# encode them. Pandas makes this super easy too:
scaled_dummy_df = pd.get_dummies(scaled_df)
scaled_dummy_df.head(1)
# Note that this takes a while; it's processing a lot of data.
# Note also that pandas automatically looks for columns with
# a categorical type, so being explicit above was important
# to making this part easy.
# Holy crap, 12,413 columns!
# Note that all our numeric columns are between 0 and 1, except SALE PRICE
# All that's left to do here is to separate the labels from the features.
x_train = scaled_dummy_df.drop(columns=['SALE PRICE'])
y_train = scaled_dummy_df['SALE PRICE']
x_train.head(1)
y_train.head(1)
# Sweet, let's make a simple neural net with Keras to make sure we can run the data
# through it. We don't expect great predictions out of this simple model; we just
# want to be sure that the data runs through it end to end:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
model = Sequential()
# Sigmoid and other functions that squash the output might not be
# very appropriate for this task, because our target values are
# quite large!
print(len(x_train.columns))
model.add(Dense(units=32, activation='relu', input_shape=(len(x_train.columns),)))
# For regression it's common to use a linear activation function
# since our output could be anything. In our case, it would never
# make sense to guess less than 0, so I'm using relu
model.add(Dense(units=1, activation='relu'))
# This function provides useful text data for our network
model.summary()
# MSE is pretty common for regression tasks
model.compile(optimizer="adam", loss='mean_squared_error')
history = model.fit(x_train, y_train, batch_size=128, epochs=5, verbose=True, validation_split=.2)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')
plt.show()
# So... our model didn't do so great. Okay, it did terribly.
# It's off by a lot and is clearly overfitting the training data.
# Why might we be getting such poor performance?
# How could we improve?
# What should we do to the data?
# What about to the model?
# Two things I would look at:
#   Consider discretizing the price and building a classifier instead of a regressor!
#   Neural networks tend to be much better at classification tasks.
#   Plus, it's just easier to predict 1 of 10 values than a point in a
#   continuous space of $0-$100,000,000 or so.
#   Consider dropping the rows with a SALE PRICE of 0 or any very low value;
#   those are not representative of actual sale prices!
```
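One way to sketch the discretization idea from the final cell: bin the sale prices into quantile-based classes with `pd.qcut`, turning the regression target into a small classification problem. The prices below are made up for illustration, not taken from the dataset:

```
import pandas as pd

# Made-up sale prices standing in for sales_df['SALE PRICE']
prices = pd.Series([50_000, 120_000, 300_000, 750_000, 1_500_000,
                    3_000_000, 8_000_000, 20_000_000])

# Drop implausible near-zero "sales" first, then cut into 4 quantile bins,
# so each class gets roughly the same number of examples.
prices = prices[prices > 10_000]
price_class = pd.qcut(prices, q=4, labels=['low', 'mid', 'high', 'luxury'])
print(price_class.value_counts())  # each of the 4 classes gets 2 of the 8 values
```

A classifier trained against `price_class` only has to pick one of four buckets, which is a much easier target than an exact dollar amount.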
# Analysis of Apartment Sale Listings
You have data from the Yandex.Realty service: an archive of listings for apartments for sale in Saint Petersburg and neighboring settlements over several years. The goal is to learn how to determine the market value of real estate, and your task is to establish the relevant parameters. This will make it possible to build an automated system that flags anomalies and fraudulent activity.
Two kinds of data are available for each apartment on sale. The first kind is entered by the user; the second is obtained automatically from map data, for example the distance to the city center, the airport, the nearest park, and the nearest body of water.
### Step 1. Open the data file and review the general information
```
import pandas as pd
from IPython.display import display
import matplotlib.pyplot as plt
try:
    data = pd.read_csv('/datasets/real_estate_data.csv', sep='\t')  # path on the Yandex platform
except:
    data = pd.read_csv('real_estate_data.csv', sep='\t')  # local path
def data_info(dt):
    dt.info()
    # random rows of the dataframe; the number of rows is given in parentheses
    display(dt.sample(5))
    # percentage of missing values per column, for clarity
    display(pd.DataFrame(round((dt.isna().mean() * 100), 2)).style.background_gradient('coolwarm'))
data_info(data)
```
### Conclusion
- rename several columns to the conventional style
- fill the missing values in ceiling_height, floors_total, balcony, is_apartment, parks_around_3000, ponds_around_3000
- in locality_name, replace missing values with 'неизвестно' ("unknown")
- convert first_day_exposition to the year-month-day date format
### Step 2. Data preprocessing
```
# check the column names
print(data.columns)
# rename the columns to the conventional style
data.rename(columns = {'cityCenters_nearest' : 'city_centers_nearest',
                       'parks_around3000' : 'parks_around_3000',
                       'ponds_around3000' : 'ponds_around_3000'}, inplace = True)
# fill the missing values in ceiling_height and floors_total with the median
data = data.fillna(
    {
        'ceiling_height' : data['ceiling_height'].median(),
        'floors_total' : data['floors_total'].median(),
    }
)
# fill the missing values in balcony, is_apartment, parks_around_3000 and ponds_around_3000 with 0
data = data.fillna(
    {
        'is_apartment' : 0,
        'balcony' : 0,
        'parks_around_3000' : 0,
        'ponds_around_3000' : 0,
    }
)
# fill locality_name with the value 'неизвестно' ("unknown")
data['locality_name'] = data['locality_name'].fillna('неизвестно')
# convert first_day_exposition to a date format
data['first_day_exposition'] = pd.to_datetime(data['first_day_exposition'], format = '%Y.%m.%d')
display(pd.DataFrame(round((data.isna().mean() * 100), 2)).style.background_gradient('coolwarm'))
```
### Conclusions on missing values
- we renamed several columns to the conventional style;
- we filled the gaps in 'ceiling_height' and 'floors_total' with the median, since ceiling heights vary within a fairly narrow range and the dataframe gives no indication of housing class (economy, comfort, business) that would let us take a median per category; the same holds for the number of floors per building. Moreover, the median number of floors is 9, which matches reality well (the average height of secondary-market housing in million-plus cities is about 10 floors in Saint Petersburg, according to the domofond website);
- we filled the gaps in 'balcony', 'is_apartment', 'parks_around_3000' and 'ponds_around_3000' with 0: if there are no balconies, the user may simply not have listed them; the apartment column is boolean, so if it is empty, the property is not an apartment-type unit; for the number of parks and ponds we set 0, since there may well be none within a 3 km radius;
- in locality_name we replaced missing values with 'неизвестно' ("unknown");
- we converted first_day_exposition to a date format;
- living_area and kitchen_area are left unchanged, since the living area and kitchen area cannot be computed without information on the area of the other rooms;
- airports_nearest, city_centers_nearest, parks_nearest, ponds_nearest: the distances to these objects could only be restored manually, which would take too long.
```
# change the data type to integer via a helper function
def to_int(data_f, colum):
    data_f[colum] = data_f[colum].astype('int')
to_int(data, 'balcony')
to_int(data, 'floors_total')
to_int(data, 'parks_around_3000')
to_int(data, 'ponds_around_3000')
# change the data type to boolean
data['is_apartment'] = data['is_apartment'].astype('bool')
data.info()
# convert the distance columns from meters to kilometers
def kilometr(dt, metr):
    dt[metr] = round(dt[metr] / 1000, 2)
kilometr(data, 'airports_nearest')
kilometr(data, 'city_centers_nearest')
kilometr(data, 'parks_nearest')
kilometr(data, 'ponds_nearest')
# round the area values to 1 decimal place to make the numbers easier to read
def sign(dt, square_met):
    dt[square_met] = round(dt[square_met], 1)
sign(data, 'kitchen_area')
sign(data, 'living_area')
sign(data, 'total_area')
display(data.head(5))
```
### Conclusion
- we converted 'balcony', 'days_exposition', 'floors_total', 'parks_around_3000' and 'ponds_around_3000' to integer via a helper function, since these columns contain only whole numbers;
- is_apartment was converted to boolean, since the column carries only yes/no information;
- we converted the distance columns from meters to kilometers and rounded them to 2 decimal places to simplify the information;
- we rounded the area values to 1 decimal place to make the numbers easier to read
### Step 3. Calculate and add to the table
```
# price per square meter
data['square_meter'] = (data['last_price'] / data['total_area']).astype('int')
# day of the week, month and year
data['weekday'] = pd.DatetimeIndex(data['first_day_exposition']).weekday
data['month'] = pd.DatetimeIndex(data['first_day_exposition']).month
data['year'] = pd.DatetimeIndex(data['first_day_exposition']).year
# categorize floors into first, last and other
def floors(row):
    floor = row['floor']
    floors_total = row['floors_total']
    if floor == 1:
        return 'первый'    # "first"
    elif floor == floors_total:
        return 'последний' # "last"
    else:
        return 'другой'    # "other"
data['floor_category'] = data.apply(floors, axis=1)
# ratio of living area to total area
data['ratio_liv_total'] = round(data['living_area'] / data['total_area'], 2)
# ratio of kitchen area to total area
data['ratio_kitch_total'] = round(data['kitchen_area'] / data['total_area'], 2)
display(data.head(5))
```
### Step 4. Exploratory data analysis
```
# plot a histogram of total apartment area
data['total_area'].hist(bins = 50, range =(0, 300))
plt.title('Total_area')
plt.xlabel('Total_area')
plt.ylabel('Count')
plt.show()
# show a box-and-whisker plot
data.boxplot(column = 'total_area')
plt.show()
```
- apartments from 28 to 47 sq m are the most common: studios, one-room and small two-room apartments; there is another small peak for larger two-room apartments from 52 to 60 sq m. There are also rare outlying values beyond 200 sq m.
```
# plot a histogram of apartment price
data['last_price'].hist(bins = 30, range =(0, 20000000))
plt.title('Last_price')
plt.xlabel('Last_price')
plt.ylabel('Count')
plt.show()
```
- apartments priced roughly from 2.8 to 4.6 million are the most common. There are also rare outlying values beyond 20 million.
```
# plot a histogram of the number of rooms
data['rooms'].hist(bins = 10, range = (0, 10))
plt.title('Rooms')
plt.xlabel('Rooms')
plt.ylabel('Count')
plt.show()
# show a box-and-whisker plot
data.boxplot(column = 'rooms')
plt.show()
```
- One- and two-room apartments sell most often. There are also rare outlying values beyond 10 rooms.
```
# plot a histogram of ceiling height
data['ceiling_height'].hist(bins = 15, range =(2, 5), color = 'Green', edgecolor = 'Blue')
plt.title('Ceiling_height')
plt.xlabel('Ceiling_height')
plt.ylabel('Count')
plt.show()
```
- Ceilings roughly 2.6 to 2.8 m high are the most common. There are rare outlying values above 5 meters and below 2 meters.
#### Examining how long apartments take to sell
```
# plot a histogram of time on the market
data['days_exposition'].hist(bins = 60, range = (0, 1500))
plt.title('Days_exposition')
plt.xlabel('Days_exposition')
plt.ylabel('Count')
plt.show()
# numerical summary of the column
print(data['days_exposition'].describe())
# show a box-and-whisker plot
data.boxplot(column = 'days_exposition')
plt.title('Days_exposition')
plt.ylabel('Count')
plt.show()
print()
# show the box-and-whisker plot again at a larger scale
data.boxplot(column = 'days_exposition')
plt.ylim(-5, 600)
plt.title('Days_exposition')
plt.ylabel('Count')
plt.show()
print('Share of apartments sold within 4 days - {:.2%}'.
      format(round(len(data.query('days_exposition <= 4')) / len(data), 4)))
```
- the lower whisker rests at 0, the minimum value; the upper one ends near 510;
- in half of the cases an apartment sells within 95 days;
- in 75% of cases it sells within 232 days;
- sales lasting 800 days are already rare. Almost nothing sells after 1000 days, and beyond roughly 1200 days the histogram merges with zero (which does not mean it is exactly 0);
- having computed the share of listings by days posted, we can conclude that apartments are almost never sold within 4 days, as the value of 1.43% indicates.
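The "very fast" and "unusually long" boundaries described above can be formalized with quantiles. A sketch on made-up durations (not the real days_exposition values):

```
import pandas as pd

# Made-up days-on-market values standing in for data['days_exposition']
days = pd.Series([3, 10, 45, 95, 120, 232, 400, 900, 1300])

q1, q3 = days.quantile(0.25), days.quantile(0.75)
iqr = q3 - q1
fast = days[days < q1]               # unusually fast sales
slow = days[days > q3 + 1.5 * iqr]   # unusually long sales (classic IQR rule)
print(len(fast), len(slow))  # prints: 2 1
```

With such thresholds, "very fast" means faster than the bottom quarter of sales, and "unusually long" means beyond the standard 1.5 * IQR box-plot whisker.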
#### Removing rare and outlying values
```
# later we will need the columns total_area, rooms, city_centers_nearest, ceiling_height
# compute the share of apartments larger than 200 sq m
print('Share of apartments larger than 200 sq m - {:.2%}'.format(round(len(data.query('total_area > 200')) / len(data), 4)))
# compute the share of apartments with 6 or more rooms
print('Share of apartments with 6 or more rooms - {:.2%}'.format(round(len(data.query('rooms >= 6')) / len(data), 4)))
print()
print('Histogram of ceiling heights from 0 to 2.2 m')
# plot a histogram of ceiling height
data['ceiling_height'].hist(bins = 20, range = (0, 2.2))
plt.show()
# compute the share of apartments with ceilings lower than 2 m
print('Share of apartments with ceilings lower than 2 m - {:.2%}'.
      format(round(len(data.query('ceiling_height <= 2')) / len(data), 4)))
print()
print('Histogram of ceiling heights above 4 m')
# plot a histogram of ceiling height
data['ceiling_height'].hist(bins = 20, range = (4, 15))
plt.show()
# compute the share of apartments with ceilings of 4 m or more
print('Share of apartments with ceilings of 4 m or more - {:.2%}'.
      format(round(len(data.query('ceiling_height >= 4')) / len(data), 4)))
print()
print('Histogram of the number of photos')
# plot a histogram of the number of photos per listing
data['total_images'].hist(bins = 20, range = (0, 20))
plt.show()
# compute the share of listings with few photos
print('Share of listings with 3 photos or fewer - {:.2%}'.
      format(round(len(data.query('total_images <= 3')) / len(data), 4)))
# create a new dataframe with the filtered values
good_data = data.query('total_area < 200 and rooms < 6 and (2 < ceiling_height < 4) and (total_images > 3)')
```
- we can drop apartments larger than 200 sq m: such apartments are very rare (under 1% of the data), and we can assume these values are erroneous or do not reflect reality
- we can drop apartments with 6 or more rooms for the same reason: they make up under 1% of the data and are likely erroneous or unrealistic
- we can drop apartments with ceilings lower than 2 m: they make up under 1% of the data and are likely erroneous
- we can drop apartments with ceilings of 4 m or more: they make up under 1% of the data and are likely erroneous
- after examining the total_images column, we concluded that a small number of photos is suspicious: an apartment cannot be assessed without photos. We remove listings with 3 photos or fewer, since this may indicate fraud.
#### Factors influencing apartment price
```
# examine how area, number of rooms and distance from the center affect the price
def corr_price(dt, col_1, col_2):
    cor = dt[col_1].corr(dt[col_2])
    if abs(cor) <= 0.2:
        print('Very weak relationship {}, between {} and {}.'.format(round(cor, 2), col_1, col_2))
    elif abs(cor) <= 0.5:
        print('Weak relationship {}, between {} and {}.'.format(round(cor, 2), col_1, col_2))
    elif abs(cor) <= 0.7:
        print('Moderate relationship {}, between {} and {}.'.format(round(cor, 2), col_1, col_2))
    elif abs(cor) <= 0.9:
        print('Strong relationship {}, between {} and {}.'.format(round(cor, 2), col_1, col_2))
    elif abs(cor) > 0.9:
        print('Very strong relationship {}, between {} and {}.'.format(round(cor, 2), col_1, col_2))
    dt.plot(x=col_1, y=col_2, kind='scatter', grid = True)
corr_price(good_data, 'total_area', 'last_price')
corr_price(good_data, 'rooms', 'last_price')
corr_price(good_data, 'city_centers_nearest', 'last_price')
```
- price and area show a moderate positive correlation of 0.65; as area grows, price no longer rises as sharply
- price and the number of rooms show a weak positive correlation of 0.36; price rises up to three-room apartments
- price and distance from the center show a weak negative correlation of -0.26: as the distance grows, prices fall
- two apartments stand out, roughly 180 to 185 sq m in area and located close to the city center
```
# examine how the floor affects the price
corr_floor = good_data.pivot_table(index = 'floor_category', values = 'last_price', aggfunc = ['median', 'mean'])
corr_floor.plot(kind='bar', grid = True, legend = True)
plt.title('floor_category - last_price')
plt.ylabel('last_price')
plt.show()
```
- first-floor prices are lower than last-floor prices, and the most expensive apartments are on floors that are neither first nor last
```
# examine how the listing date affects the price
def corr_date(dt, col_1, col_2):
    cor = dt[col_1].corr(dt[col_2])
    if abs(cor) <= 0.2:
        print('Very weak relationship {}'.format(round(cor, 5)))
    elif abs(cor) <= 0.5:
        print('Weak relationship {}'.format(round(cor, 2)))
    elif abs(cor) <= 0.7:
        print('Moderate relationship {}'.format(round(cor, 2)))
    elif abs(cor) <= 0.9:
        print('Strong relationship {}'.format(round(cor, 2)))
    elif abs(cor) > 0.9:
        print('Very strong relationship {}'.format(round(cor, 2)))
    dt.plot(x=col_2, y=col_1, kind = 'scatter', grid = True)
corr_date(good_data, 'last_price', 'weekday')
corr_date(good_data, 'last_price', 'month')
corr_date(good_data, 'last_price', 'year')
```
- there is no correlation between price and the day of the week, month or year: dates have no effect on apartment prices
#### The 10 localities with the largest number of listings
```
# select the 10 localities with the largest number of listings
locality_table = good_data.pivot_table(index = 'locality_name',
values = ['last_price', 'square_meter'], aggfunc = ['count','median', 'mean'])
locality_table.columns = ['count', 'count_squa_met', 'median', 'median_squa_met', 'mean', 'mean_squa_met']
locality_table_10 = locality_table.sort_values(by = 'count', ascending=False).head(10)
display(locality_table_10)
locality_table_10.plot(y = ['median', 'mean'], kind='bar', grid = True,
                       legend = True, title = 'Apartment prices')
plt.ylabel('last_price')
plt.show()
locality_table_10.plot(y = ['median_squa_met', 'mean_squa_met'], kind='bar', grid = True,
                       legend = True, title = 'Price per square meter')
plt.ylabel('square_meter')
plt.show()
```
- by the median, apartment prices in Saint Petersburg are slightly higher than in Pushkino, while by the arithmetic mean the gap is much larger;
- the picture for price per square meter is similar; only in Saint Petersburg and Pushkino does the price exceed 100 thousand per sq m;
- across all the figures, the cheapest real estate is in Vyborg, with a price below 60 thousand per sq m.
```
# create a new column city_cent_near_km: the distance from the center rounded to a whole number of kilometers
good_data['city_cent_near_km'] = round(data['city_centers_nearest'], 0)
display(good_data.head())
```
#### Identifying apartments in the city center
```
# identify the apartments that belong to the city center
centre_room = good_data.query('locality_name == "Санкт-Петербург"')
spb_centre = centre_room.pivot_table(index = 'city_cent_near_km', values = 'square_meter',
aggfunc = ['median', 'mean'])
spb_centre.columns = ['median', 'mean']
spb_centre.plot(y = ['median', 'mean'], style = 'o-', grid = True,
                legend = True, figsize = (8, 6), title = 'Price per sq m at each km')
plt.ylabel('square_meter')
plt.show()
# create a dataframe with the listings we assign to the center of Saint Petersburg
centre_room_spb = centre_room.query('city_cent_near_km <= 7')
```
- the chart shows that values rise sharply after 3 km; presumably some parks, ponds or landmarks are located there. The price rises up to 7 km and then drops sharply. We therefore take a distance of up to and including 7 km as the center, since beyond 7 km the price only falls.
#### Analyzing the parameters and price factors for the center, compared with the whole city
```
# plot a histogram of total apartment area
centre_room_spb['total_area'].hist(bins = 50)
plt.title('Total_area')
plt.xlabel('Total_area')
plt.ylabel('Count')
plt.show()
# plot a histogram of apartment price
centre_room_spb['last_price'].hist(bins = 50, range = (0, 50000000))
plt.title('Last_price')
plt.xlabel('Last_price')
plt.ylabel('Count')
plt.show()
# plot a histogram of the number of rooms
centre_room_spb['rooms'].hist(bins = 10)
plt.title('Rooms')
plt.xlabel('Rooms')
plt.ylabel('Count')
plt.show()
# plot a histogram of ceiling height
centre_room_spb['ceiling_height'].hist(bins = 15)
plt.title('Ceiling_height')
plt.xlabel('Ceiling_height')
plt.ylabel('Count')
plt.show()
```
- apartments from 35 to 75 sq m (one-, two- and three-room) are the most common here, unlike the analysis of all listings, where apartments from 28 to 47 sq m (studios, one- and two-room) dominated;
- apartments priced around 6 million are the most common, versus roughly 2.8 to 4.6 million across all listings;
- two- and three-room apartments are the most common, versus one- and two-room apartments across all listings;
- ceilings of 2.6 m and higher are the most common, roughly matching the figures from the analysis of all listings
- CONCLUSION:
apartments in the center are almost twice as expensive and have more rooms and more area.
```
# examine how area, number of rooms and distance from the center affect the price
def corr_price_centr(dt, col_1, col_2):
    cor = dt[col_1].corr(dt[col_2])
    if abs(cor) <= 0.2:
        print('Very weak relationship {}'.format(round(cor, 2)))
    elif abs(cor) <= 0.5:
        print('Weak relationship {}'.format(round(cor, 2)))
    elif abs(cor) <= 0.7:
        print('Moderate relationship {}'.format(round(cor, 2)))
    elif abs(cor) <= 0.9:
        print('Strong relationship {}'.format(round(cor, 2)))
    elif abs(cor) > 0.9:
        print('Very strong relationship {}'.format(round(cor, 2)))
    dt.plot(x = col_1, y = col_2, kind='scatter', grid = True)
corr_price_centr(centre_room_spb, 'total_area', 'last_price')
corr_price_centr(centre_room_spb, 'rooms', 'last_price')
corr_price_centr(centre_room_spb, 'city_centers_nearest', 'last_price')
```
- price and area show a moderate positive correlation of 0.55, lower than in the earlier analysis (0.65): prices depend even less on increasing area;
- price and the number of rooms show a weak positive correlation of 0.25, lower than in the earlier analysis (0.36): prices depend even less on the number of rooms;
- price and distance from the center show a very weak negative correlation of -0.03, lower than in the earlier analysis (-0.24): prices barely depend on the distance, which makes sense, since here we are looking only at the city center;
```
# examine how the floor affects the price
corr_floor_centr = centre_room_spb.pivot_table(index = 'floor_category', values = 'last_price', aggfunc = ['median', 'mean'])
corr_floor_centr.plot(kind='bar', grid = True, legend = True)
plt.title('floor_category-last_price')
plt.ylabel('last_price')
plt.show()
```
- first-floor prices are lower than last-floor prices, and the most expensive apartments are on floors that are neither first nor last. The figures here mirror those from the earlier analysis.
```
# examine how the listing date affects the price
def corr_date_centr(dt, col_1, col_2):
    cor = dt[col_1].corr(dt[col_2])
    if abs(cor) <= 0.2:
        print('Very weak relationship {}'.format(round(cor, 5)))
    elif abs(cor) <= 0.5:
        print('Weak relationship {}'.format(round(cor, 2)))
    elif abs(cor) <= 0.7:
        print('Moderate relationship {}'.format(round(cor, 2)))
    elif abs(cor) <= 0.9:
        print('Strong relationship {}'.format(round(cor, 2)))
    elif abs(cor) > 0.9:
        print('Very strong relationship {}'.format(round(cor, 2)))
    dt.plot(x=col_1, y=col_2, kind = 'scatter', grid = True)
corr_date_centr(centre_room_spb, 'last_price', 'weekday')
corr_date_centr(centre_room_spb, 'last_price', 'month')
corr_date_centr(centre_room_spb, 'last_price', 'year')
```
- there is no correlation between price and the day of the week, month or year, exactly as in the earlier analysis.
### Step 5. Overall conclusion
- apartment price is influenced most by total area; the number of rooms has a much smaller effect;
- as expected, the price decreases with distance from the city center;
- the listing date has no effect on the price;
- by floor, the cheapest offers are on the first floor, and the most expensive are on floors between the first and the last;
- among the 10 localities, the most expensive square meter is in Saint Petersburg and Pushkino, above 100 thousand per sq m;
- among the 10 localities, the cheapest square meter is in Vyborg, below 60 thousand per sq m.
### Project readiness checklist
Put an 'x' in the completed items, then press Shift+Enter.
- [x] the file has been opened
- [x] the files have been examined (first rows displayed, info() method)
- [x] missing values have been identified
- [x] missing values have been filled in
- [x] there is an explanation of which missing values were found
- [x] data types have been changed
- [x] there is an explanation of which columns had their types changed and why
- [x] calculated and added to the table: price per square meter
- [x] calculated and added to the table: day of the week, month and year of listing publication
- [x] calculated and added to the table: apartment floor; options are first, last, other
- [x] calculated and added to the table: ratio of living to total area, and ratio of kitchen area to total area
- [x] the following parameters have been studied: area, price, number of rooms, ceiling height
- [x] histograms have been built for each parameter
- [x] task completed: "Study the time it takes to sell an apartment. Build a histogram. Compute the mean and the median. Describe how long a sale usually takes. When can a sale be considered very fast, and when unusually long?"
- [x] task completed: "Remove rare and outlying values. Describe the patterns you found."
- [x] task completed: "Which factors influence an apartment's price the most? Study whether the price depends on area, number of rooms, floor (first or last), and distance from the center. Also study the dependence on the listing date: day of the week, month and year. Select the 10 localities with the largest number of listings. Compute the average price per square meter in these localities. Identify the localities with the highest and lowest housing prices. These data can be found by name in the 'locality_name' column."
- [x] task completed: "Study the apartment listings: each apartment has information on the distance to the center. Select the apartments in Saint Petersburg ('locality_name'). Your task is to determine which area counts as the center. Create a column with the distance to the center in kilometers, rounded to whole numbers. Then compute the average price for each kilometer. Build a chart showing how the price depends on the distance from the center. Find the boundary where the chart changes sharply: that is the central zone."
- [x] task completed: "Select the segment of apartments in the center. Analyze this area and study the following parameters: area, price, number of rooms, ceiling height. Also identify the factors that influence apartment price (number of rooms, floor, distance from the center, listing date). Draw conclusions. Do they differ from the overall conclusions for the whole city?"
- [x] every stage has conclusions
- [x] there is an overall conclusion
<a href="https://cognitiveclass.ai"><img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/Logos/organization_logo/organization_logo.png" width = 400> </a>
<h1 align=center><font size = 5>Regression Models with Keras</font></h1>
## Introduction
As we discussed in the videos, despite the popularity of more powerful libraries such as PyTorch and TensorFlow, they are not easy to use and have a steep learning curve. So, for people who are just starting to learn deep learning, there is no better library to use than Keras.
Keras is a high-level API for building deep learning models. It has gained favor for its ease of use and syntactic simplicity, which facilitate fast development. As you will see in this lab and the other labs in this course, building a very complex deep learning network can be achieved with Keras with only a few lines of code. You will appreciate Keras even more once you learn how to build deep models using PyTorch and TensorFlow in the other courses.
So, in this lab, you will learn how to use the Keras library to build a regression model.
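As a taste of that brevity, here is a tiny hypothetical network, not the model built later in this lab: two dense layers on 8 input features, compiled for regression in a handful of lines.

```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

# A toy two-layer regression network on 8 input features
model = Sequential([
    Input(shape=(8,)),
    Dense(10, activation='relu'),
    Dense(1)
])
model.compile(optimizer='adam', loss='mean_squared_error')
model.summary()  # (8*10 + 10) + (10*1 + 1) = 101 trainable parameters
```

The real model for the concrete dataset follows the same pattern, just with the right input size and layer widths.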
## Table of Contents
<div class="alert alert-block alert-info" style="margin-top: 20px">
<font size = 3>
1. <a href="#item31">Download and Clean Dataset</a>
2. <a href="#item32">Import Keras</a>
3. <a href="#item33">Build a Neural Network</a>
4. <a href="#item34">Train and Test the Network</a>
</font>
</div>
<a id="item31"></a>
## Download and Clean Dataset
Let's start by importing the <em>pandas</em> and the Numpy libraries.
```
import pandas as pd
import numpy as np
```
We will be playing around with the same dataset that we used in the videos.
<strong>The dataset is about the compressive strength of different samples of concrete based on the volumes of the different ingredients that were used to make them. Ingredients include:</strong>
<strong>1. Cement</strong>
<strong>2. Blast Furnace Slag</strong>
<strong>3. Fly Ash</strong>
<strong>4. Water</strong>
<strong>5. Superplasticizer</strong>
<strong>6. Coarse Aggregate</strong>
<strong>7. Fine Aggregate</strong>
Let's download the data and read it into a <em>pandas</em> dataframe.
```
concrete_data = pd.read_csv('https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0101EN/labs/data/concrete_data.csv')
concrete_data.head()
```
So the first concrete sample has 540 cubic meters of cement, 0 cubic meters of blast furnace slag, 0 cubic meters of fly ash, 162 cubic meters of water, 2.5 cubic meters of superplasticizer, 1040 cubic meters of coarse aggregate, and 676 cubic meters of fine aggregate. Such a concrete mix, at 28 days old, has a compressive strength of 79.99 MPa.
#### Let's check how many data points we have.
```
concrete_data.shape
```
So, there are approximately 1000 samples to train our model on. Because we have so few samples, we have to be careful not to overfit the training data.
Let's check the dataset for any missing values.
```
concrete_data.describe()
concrete_data.isnull().sum()
```
The data looks very clean and is ready to be used to build our model.
#### Split data into predictors and target
The target variable in this problem is the concrete sample strength. Therefore, our predictors will be all the other columns.
```
concrete_data_columns = concrete_data.columns
predictors = concrete_data[concrete_data_columns[concrete_data_columns != 'Strength']] # all columns except Strength
target = concrete_data['Strength'] # Strength column
```
<a id="item2"></a>
Let's do a quick sanity check of the predictors and the target dataframes.
```
predictors.head()
target.head()
```
Finally, the last step is to normalize the data by subtracting the mean and dividing by the standard deviation.
```
predictors_norm = (predictors - predictors.mean()) / predictors.std()
predictors_norm.head()
```
Let's save the number of predictors to *n_cols* since we will need this number when building our network.
```
n_cols = predictors_norm.shape[1] # number of predictors
```
<a id="item1"></a>
<a id='item32'></a>
## Import Keras
Recall from the videos that Keras normally runs on top of a low-level library such as TensorFlow. This means that to use the Keras library, you have to install TensorFlow first, and when you import Keras it will explicitly display which backend it is running on. In CC Labs, we used TensorFlow as the backend for Keras, so that should be clearly printed when we import Keras.
#### Let's go ahead and import the Keras library
```
import keras
```
As you can see, the TensorFlow backend was used to install the Keras library.
Let's import the rest of the packages from the Keras library that we will need to build our regression model.
```
from keras.models import Sequential
from keras.layers import Dense
```
<a id='item33'></a>
## Build a Neural Network
Let's define a function that defines our regression model for us so that we can conveniently call it to create our model.
```
# define regression model
def regression_model():
# create model
model = Sequential()
model.add(Dense(50, activation='relu', input_shape=(n_cols,)))
model.add(Dense(50, activation='relu'))
model.add(Dense(1))
# compile model
model.compile(optimizer='adam', loss='mean_squared_error')
return model
```
The function above creates a model that has two hidden layers, each with 50 hidden units.
<a id="item4"></a>
<a id='item34'></a>
## Train and Test the Network
Let's call the function now to create our model.
```
# build the model
model = regression_model()
```
Next, we will train and test the model at the same time using the *fit* method. We will leave out 30% of the data for validation and we will train the model for 100 epochs.
```
# fit the model
model.fit(predictors_norm, target, validation_split=0.3, epochs=100, verbose=2)
```
<strong>You can refer to this [link](https://keras.io/models/sequential/) to learn about other functions that you can use for prediction or evaluation.</strong>
Feel free to vary the following and note what impact each change has on the model's performance:
1. Increase or decrease the number of neurons in the hidden layers
2. Add more hidden layers
3. Increase the number of epochs
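Before rerunning the network, it can help to estimate how each variation changes the model size. The helper below is an illustrative sketch (the function name is ours, not part of the lab) that counts the parameters of a stack of dense layers; this dataset has 8 predictors:

```
def dense_param_count(n_inputs, hidden_units, n_outputs=1):
    # parameters (weights + biases) of a stack of fully connected layers
    total, fan_in = 0, n_inputs
    for units in hidden_units + [n_outputs]:
        total += fan_in * units + units  # weight matrix plus bias vector
        fan_in = units
    return total

print(dense_param_count(8, [50, 50]))      # the lab's model -> 3051
print(dense_param_count(8, [100, 100]))    # variation 1: wider layers
print(dense_param_count(8, [50, 50, 50]))  # variation 2: one more hidden layer
```

More epochs do not change the parameter count, but they increase training time roughly linearly.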
### Thank you for completing this lab!
This notebook was created by [Alex Aklson](https://www.linkedin.com/in/aklson/). I hope you found this lab interesting and educational. Feel free to contact me if you have any questions!
This notebook is part of a course on **Coursera** called *Introduction to Deep Learning & Neural Networks with Keras*. If you accessed this notebook outside the course, you can take this course online by clicking [here](https://cocl.us/DL0101EN_Coursera_Week3_LAB1).
<hr>
Copyright © 2019 [IBM Developer Skills Network](https://cognitiveclass.ai/?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
| github_jupyter |
```
#danaderp July'19
#GenerativeLSTM
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.layers import Dot, Input, Dense, Reshape, LSTM, Conv2D, Flatten, MaxPooling1D, Dropout, MaxPooling2D
from tensorflow.keras.layers import Embedding, Multiply, Subtract
from tensorflow.keras.models import Sequential, Model
from tensorflow.python.keras.layers import Lambda
from tensorflow.keras.callbacks import CSVLogger, ModelCheckpoint
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
!pip install Keras
!pip install sklearn
!pip install tqdm
!pip install regex
import os, glob, json  # glob is used below when loading datasets by pattern
import sys
import tqdm
from pathlib import Path
print(tf.keras.__version__)
import encoderBPE as enco
import functools
from scipy import stats
import scipy
import scipy.stats
from collections import Counter
from sklearn.preprocessing import normalize
#import seaborn as sns
print(os.getcwd())
from keras.utils import np_utils
import pandas as pd
import numpy as np
import re
#import nltk
import matplotlib.pyplot as plt
pd.options.display.max_colwidth = 200
%matplotlib inline
class Args():
def __init__(self, trn_dataset, model_name, combine, batch_size,
learning_rate, optimizer, noise, top_k, top_p, run_name, sample_every,
sample_length, sample_num, save_every, val_dataset, val_batch_size,
val_batch_count, val_every, pretrained, iterations, test_dataset):
self.trn_dataset = trn_dataset
self.model_name = model_name
self.combine = combine
self.batch_size = batch_size
self.learning_rate = learning_rate
self.optimizer = optimizer
self.noise = noise
self.top_k = top_k
self.top_p = top_p
self.run_name = run_name
self.sample_every = sample_every
self.sample_length = sample_length
self.sample_num = sample_num
self.save_every = save_every
self.val_dataset = val_dataset
self.val_batch_size = val_batch_size
self.val_batch_count = val_batch_count
self.val_every = val_every
self.pretrained = pretrained
self.iterations = iterations
self.test_dataset = test_dataset
args = Args(
trn_dataset="/tf/src/data/methods/DATA00M_[god-r]/train", #HardCoded
model_name="117M",
combine=50000,
batch_size=1, # DO NOT TOUCH. INCREASING THIS WILL RAIN DOWN HELL FIRE ONTO YOUR COMPUTER.
learning_rate=0.00002,
optimizer="sgd",
noise=0.0,
top_k=40,
top_p=0.0,
run_name="unconditional_experiment",
sample_every=100,
sample_length=1023,
sample_num=1,
save_every=1000,
val_dataset="/tf/src/data/methods/DATA00M_[god-r]/valid", #HardCoded
val_batch_size=1,
val_batch_count=40,
val_every=100,
pretrained=True,
iterations=1000,
test_dataset="/tf/src/data/methods/DATA00M_[god-r]/test"
)
"""Byte pair encoding utilities"""
def get_encoder():
with open('encoder.json', 'r') as f:
encoder = json.load(f)
with open( 'vocab.bpe', 'r', encoding="utf-8") as f:
bpe_data = f.read()
bpe_merges = [tuple(merge_str.split()) for merge_str in bpe_data.split('\n')[1:-1]]
return enco.Encoder(
encoder=encoder,
bpe_merges=bpe_merges,
)
enc = get_encoder()
'''Load the dataset
@enc encoder of the BPE vocabulary
@path path to the original datasets
'''
def load_dataset(enc, path):
paths = []
if os.path.isfile(path):
# Simple file
paths.append(path)
elif os.path.isdir(path):
# Directory
for i, (dirpath, _, fnames) in enumerate(os.walk(path)):
for fname in fnames:
paths.append(os.path.join(dirpath, fname))
else:
# Assume glob
paths = glob.glob(path)
token_chunks = []
raw_text = ''
for i, path in enumerate(tqdm.tqdm(paths)):
#if i >= 100000: break #<------ Hyperparameter to limit the samples in the dataset
try:
with open(path, 'r') as fp:
raw_text += fp.read()
raw_text += '<|endoftext|>' #<----- Special subword to activate the uncoditional sampling
tokens = np.stack(enc.encode(raw_text))
token_chunks.append(tokens)
raw_text = ''
except Exception as e:
print(e)
return token_chunks
#[Inspection] Checking the load
path = args.trn_dataset
paths = []
if os.path.isfile(path):
# Simple file
paths.append(path)
elif os.path.isdir(path):
# Directory
for i, (dirpath, _, fnames) in enumerate(os.walk(path)):
for fname in fnames:
paths.append(os.path.join(dirpath, fname))
else:
# Assume glob
paths = glob.glob(path)
#[Inspection] raw data
raw_text = open(paths[1], 'r').read()
print(raw_text),print(len(raw_text))
#Loading actual training and validation dataset
trn_set = load_dataset(enc, args.trn_dataset)
val_set = load_dataset(enc, args.val_dataset)
#[Inspect] Decoder
print(trn_set[0])
first_method_decoded = enc.decode(trn_set[0])
print(first_method_decoded)
test_set = load_dataset(enc, args.test_dataset)
#[Inspection] Size of datasets
len(trn_set), len(val_set)
#[Inspection] embedded data
print(trn_set[3]), print(len(trn_set[3]))
#[Inspection] vocabulary dimensions
json_data = open( 'encoder.json', 'r', encoding="utf-8").read()
dic_subwords = json.loads(json_data)
dic_subwords.keys()
#[Inspection]Looking for the mapping (lookup table) for the vocabulary
dic_subwords['return']
# create mapping of unique chars to integers
#chars = sorted(list(set(raw_text)))
#char_to_int = dict((c, i) for i, c in enumerate(chars)) #Lookup Table
#embed_methods = functools.reduce(lambda a,b : a+b,[doc for doc in trn_set])
#Exploratory Analysis
n_methods_trn = len(trn_set)
n_methods_val = len(val_set)
print("Total Methods Train: ", n_methods_trn)
print("Total Methods Validation: ", n_methods_val)
x_trn = [len(doc) for doc in trn_set]
x_val = [len(doc) for doc in val_set]
n_subwords_train = sum(x_trn) #n_chars
n_subwords_val = sum(x_val) #n_chars
print("Total Subwords Train: ", n_subwords_train)
print("Total Subwords Val: ", n_subwords_val)
max_subword_train = max(x_trn)
min_subword_train = min(x_trn)
median_subword_train = np.median(x_trn)
mad_subword_train= stats.median_absolute_deviation(x_trn)
print("Total Max Subword Method Trn: ", max_subword_train)
print("Total Min Subword Method Trn: ", min_subword_train)
print("Total Median Subword Method Trn: ", median_subword_train)
print("Total MAD Subword Method Trn: ", mad_subword_train)
print("Total Avg", np.average(x_trn))
print("Total Std", np.std(x_trn))
#Approx Distribution of the Training Size
#Approx Distribution of the Training Size
#Uniform sampling without replacement from the original set
#(a factor of 1 keeps the full set; use e.g. 0.1 to subsample 10%)
reduced_x_trn = np.random.choice(x_trn,
                                 int(n_methods_trn*1), replace=False)
normalized_reduced_x_trn = normalize([reduced_x_trn])
normalized_reduced_x_trn[0]
#_ = plt.hist(normalized_reduced_x_trn[0], bins=10) # arguments are passed to np.histogram
#plt.title("Histogram with 'auto' bins")
#plt.show()
a = normalized_reduced_x_trn[0]
res = stats.relfreq(a, numbins=25) #Calculate relative frequencies
res.frequency
#Calculate space of values for x
x = res.lowerlimit + np.linspace(0, res.binsize*res.frequency.size, res.frequency.size)
fig = plt.figure(figsize=(5, 4))
ax = fig.add_subplot(1, 1, 1)
ax.bar(x, res.frequency, width=res.binsize)
ax.set_title('Relative frequency histogram')
ax.set_xlim([x.min(), x.max()])
plt.show()
# Create a figure instance
fig = plt.figure(1, figsize=(9, 6))
# Create an axes instance
ax = fig.add_subplot(111)
# Create the boxplot
bp = ax.boxplot(normalized_reduced_x_trn[0])
size = len(normalized_reduced_x_trn[0])
x = np.arange(size)  # scipy.arange was removed; use numpy's arange
#y = scipy.int_(scipy.round_(scipy.stats.vonmises.rvs(5,size=size)*47))
y = normalized_reduced_x_trn[0]
h = plt.hist(y, bins=range(48))
dist_names = ['gamma', 'beta', 'rayleigh', 'norm', 'pareto']
for dist_name in dist_names:
dist = getattr(scipy.stats, dist_name)
param = dist.fit(y)
pdf_fitted = dist.pdf(x, *param[:-2], loc=param[-2], scale=param[-1]) * size
plt.plot(pdf_fitted, label=dist_name)
plt.xlim(0,47)
plt.legend(loc='upper right')
plt.show()
max_subword_val = max(x_val)
min_subword_val = min(x_val)
median_subword_val = np.median(x_val)
mad_subword_val= stats.median_absolute_deviation(x_val)
print("Total Max Subword Method Trn: ", max_subword_val)
print("Total Min Subword Method Trn: ", min_subword_val)
print("Total Median Subword Method Trn: ", median_subword_val)
print("Total MAD Subword Method Trn: ", mad_subword_val)
print("Total Avg", np.average(x_val))
print("Total Std", np.std(x_val))
#Vocabulary size
n_vocab = len(dic_subwords)
print("Total Vocab: ", n_vocab)
#From toke to subword
token_subword = {v: k for k, v in dic_subwords.items()}
token_subword[0]
counter = Counter()
for method in trn_set:
for subword in method:
counter[subword] += 1
counter
#For Training Dataset
subwords, counts = zip(*counter.most_common(20))
indices = np.arange(len(counts))
plt.figure(figsize=(14, 3))
plt.bar(indices, counts, 0.8)
plt.xticks(indices, subwords);
#enc.decode(
for i in subwords:
print("*"+enc.decode([i])+"*")
enc.decode(subwords)
#Distribution of Subwords
plt.hist(x_trn, bins='auto');
plt.title('Distribution of sentence lengths')
plt.xlabel('Approximate number of words');
#Training Many to Many LSTM
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
# The maximum length sentence we want for a single input in characters
seq_length = 100 #<------ Hyperparameter
examples_per_epoch_trn = sum(list(map(lambda x: x//seq_length, [len(doc) for doc in trn_set])))
examples_per_epoch_val = sum(list(map(lambda x: x//seq_length, [len(doc) for doc in val_set])))
print(examples_per_epoch_trn, examples_per_epoch_val)
subword_trn_dataset = tf.data.Dataset.from_tensor_slices(trn_set)
subword_dataset_trn = [tf.data.Dataset.from_tensor_slices(doc) for doc in trn_set]
subword_dataset_val = [tf.data.Dataset.from_tensor_slices(doc) for doc in val_set]
sequences_list_trn = [subword_dataset.batch(seq_length+1, drop_remainder=False) for
subword_dataset in subword_dataset_trn]
sequences_list_trn
initial_batch_dataset = sequences_list_trn[0]
for batch_dataset in sequences_list_trn[1:]: #without the first batchdataset
initial_batch_dataset = initial_batch_dataset.concatenate(batch_dataset)
sequences_trn = initial_batch_dataset
sequences_list_val = [subword_dataset.batch(seq_length+1, drop_remainder=False) for
subword_dataset in subword_dataset_val]
sequences_val = sequences_list_val[0]
for batch_dataset in sequences_list_val[1:]: #without the first batchdataset
sequences_val = sequences_val.concatenate(batch_dataset)
sequences_trn
for item in sequences_trn.take(5):
print("*"+enc.decode(item.numpy())+"*")
print(enc.decode(trn_set[0]))
def split_input_target(chunk):
input_text = chunk[:-1]
target_text = chunk[1:]
return input_text, target_text
dataset = sequences_trn.map(split_input_target)
dataset
#Print the first examples input and target values:
for input_example, target_example in dataset.take(2):
print ('Input data: ', enc.decode(input_example.numpy()))
print ('Target data:', enc.decode(target_example.numpy()))
################################
range(0, n_subwords_train - 100, 1)
#[Inspection] Input Sequence or context
(trn_set[0][0:0 + 100])
#[Inspection] Predicted Subword
trn_set[0][0 + 100]
# prepare the dataset of input to output pairs encoded as integers
seq_length = min_subword_train - 1 #<------- [Hyperparameter] Should be 256 to compare to GPT
dataX = []
dataY = []
for method in range(n_methods_trn):
for i in range(0, len(trn_set[method]) - seq_length,1): #start, stop, steps
seq_in = trn_set[method][i:i + seq_length]#Context
seq_out = trn_set[method][i + seq_length] #Predicted Subwords
dataX.append(seq_in) #X datapoint
dataY.append(seq_out) #Y prediction
n_patterns = len(dataX)
print ("Total Patterns: ", n_patterns)
n_outcome = len(dataY)
print("Total Patterns (outcome): ",n_outcome)
dataX
dataY
#Data set organization
from tempfile import mkdtemp
import os.path as path
```
First we must transform the list of input sequences into the form [samples, time steps, features] expected by an LSTM network.
Next we need to rescale the integers to the range 0-to-1 to make the patterns easier to learn by the LSTM network, which uses the sigmoid activation function by default.
Finally, we need to convert the output patterns (single subwords encoded as integers) into a one-hot encoding.
Each y value is converted into a sparse vector whose length equals the vocabulary size, full of zeros except for a 1 in the column of the subword (integer) that the pattern represents.
```
#Memoization [Avoid]
file_train_x = path.join(mkdtemp(), 'temp_corpora_train_x.dat') #Update per experiment
#Shaping (define the shape before the memmap that uses it)
shape_train_x = (n_patterns, seq_length, 1)
shape_train_x
#Data sets [Avoid]
X = np.memmap(
    filename = file_train_x,
    dtype='float32',
    mode='w+',
    shape = shape_train_x)
X = np.reshape(dataX, (n_patterns, seq_length, 1))
X.shape
# normalize
X = X / float(n_vocab)
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
X.shape
y
```
We can now define our LSTM model. Here we define a single hidden LSTM layer with 256 memory units. The network uses dropout with a probability of 0.2. The output layer is a Dense layer using the softmax activation function to output, for each class in the vocabulary, a probability prediction between 0 and 1.
The problem is really a single-subword classification problem with as many classes as there are output columns, and as such is defined as optimizing the log loss (cross entropy), here using the Adam optimization algorithm for speed.
```
# define the LSTM model
model = Sequential()
model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2])))
model.add(Dropout(0.2))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
```
There is no test dataset. We are modeling the entire training dataset to learn the probability of each character in a sequence.
We are not interested in the most accurate (classification accuracy) model of the training dataset; that would be a model that predicts each subword in the training dataset perfectly. Instead we are interested in a generalization of the dataset that minimizes the chosen loss function. We are seeking a balance between generalization and overfitting, but short of memorization.
The network is slow to train (about 300 seconds per epoch on an Nvidia K520 GPU). Because of the slowness, and because of our optimization requirements, we will use model checkpointing to record all of the network weights to file each time an improvement in loss is observed at the end of an epoch. We will use the best set of weights (lowest loss) to instantiate our generative model in the next section.
```
# define the checkpoint
filepath="weights-improvement-{epoch:02d}-{loss:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
model.fit(X, y, epochs=20, batch_size=128, callbacks=callbacks_list)
```
## Generating Text with an LSTM Network
```
# load the network weights
filename = "weights-improvement-19-1.9435.hdf5"
model.load_weights(filename)
model.compile(loss='categorical_crossentropy', optimizer='adam')
# with BPE, decoding is handled by enc.decode; the char-level lookup from the
# original char-RNN tutorial is not needed here
# int_to_char = dict((i, c) for i, c in enumerate(chars))
```
The simplest way to use the Keras LSTM model to make predictions is to start with a seed sequence as input, generate the next subword, then update the seed sequence by appending the generated subword and trimming off the first one. This process is repeated for as long as we want to predict new subwords (e.g. a sequence of 1,000 subwords in length).
We can pick a random input pattern as our seed sequence, then print the generated subwords as we produce them.
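A minimal sketch of such a generation loop is shown below. To keep the snippet self-contained, the trained `model.predict` and the BPE decoder `enc.decode` are replaced by illustrative stand-ins (`predict_next`, and printing raw ids); with the real notebook state you would call `model.predict` and `enc.decode(generated)` instead:

```
import numpy as np

# hypothetical stand-ins for the notebook state, for illustration only
n_vocab = 50000
dataX = [list(np.random.randint(0, n_vocab, size=100))]  # one seed pattern

def predict_next(x):
    # placeholder for model.predict(x): a probability row over the vocabulary
    probs = np.random.rand(1, n_vocab)
    return probs / probs.sum()

start = np.random.randint(0, len(dataX))
pattern = list(dataX[start])
generated = []
for _ in range(20):  # number of subwords to generate
    # reshape to [samples, time steps, features] and normalize as in training
    x = np.reshape(pattern, (1, len(pattern), 1)) / float(n_vocab)
    index = int(np.argmax(predict_next(x)))  # greedy sampling
    generated.append(index)
    pattern.append(index)  # slide the window: append the prediction...
    pattern = pattern[1:]  # ...and trim off the oldest subword
print(len(generated))  # with the real model: print(enc.decode(generated))
```

Greedy argmax is the simplest choice; temperature or top-k sampling are common alternatives for more varied output.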
| github_jupyter |
# Decision Trees
First we'll load some fake data on past hires I made up. Note how we use pandas to convert a csv file into a DataFrame:
```
import numpy as np
import pandas as pd
from sklearn import tree
input_file = "PastHires.csv"
df = pd.read_csv(input_file, header = 0)
df.head()
```
scikit-learn needs everything to be numerical for decision trees to work. So, we'll map Y,N to 1,0 and levels of education to some scale of 0-2. In the real world, you'd need to think about how to deal with unexpected or missing data! By using map(), we know we'll get NaN for unexpected values.
```
d = {'Y': 1, 'N': 0}
df['Hired'] = df['Hired'].map(d)
df['Employed?'] = df['Employed?'].map(d)
df['Top-tier school'] = df['Top-tier school'].map(d)
df['Interned'] = df['Interned'].map(d)
d = {'BS': 0, 'MS': 1, 'PhD': 2}
df['Level of Education'] = df['Level of Education'].map(d)
df.head()
```
Next we need to separate the features from the target column that we're trying to build a decision tree for.
```
features = list(df.columns[:6])
features
```
Now actually construct the decision tree:
```
y = df["Hired"]
X = df[features]
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X,y)
```
... and display it. Note you need to have pydotplus installed for this to work. (!pip install pydotplus)
To read this decision tree, each condition branches left for "true" and right for "false". When you end up at a value, the value array represents how many samples exist for each target value. So value = [0. 5.] means there are 0 "no hires" and 5 "hires" by the time we get to that point, and value = [3. 0.] means 3 no-hires and 0 hires.
```
from IPython.display import Image
from io import StringIO  # sklearn.externals.six was removed in recent scikit-learn
import pydotplus
dot_data = StringIO()
tree.export_graphviz(clf, out_file=dot_data,
feature_names=features)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
```
## Ensemble learning: using a random forest
We'll use a random forest of 10 decision trees to predict employment of specific candidate profiles:
```
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=10)
clf = clf.fit(X, y)
#Predict employment of an employed 10-year veteran
print (clf.predict([[10, 1, 4, 0, 0, 0]]))
#...and an unemployed 10-year veteran
print (clf.predict([[10, 0, 4, 0, 0, 0]]))
```
## Activity
Modify the test data to create an alternate universe where I hire everyone I normally wouldn't have, and vice versa. Compare the resulting decision tree to the one built from the original data.
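A minimal sketch of the "alternate universe" setup: the inversion is just flipping the 0/1 target column. The tiny dataframe below is an illustrative stand-in for the `PastHires` data; in the notebook you would flip `df['Hired']` and refit `clf` on it:

```
import pandas as pd

# tiny stand-in for the PastHires dataframe (values are illustrative)
df = pd.DataFrame({'Years Experience': [10, 0, 7],
                   'Employed?': [1, 0, 0],
                   'Hired': [1, 0, 1]})

# alternate universe: hire everyone who wasn't hired, and vice versa
df_alt = df.copy()
df_alt['Hired'] = 1 - df_alt['Hired']
print(df_alt['Hired'].tolist())  # -> [0, 1, 0]
```

Refitting the same `DecisionTreeClassifier` on `df_alt` and re-exporting the graph lets you compare the two trees side by side.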
| github_jupyter |
# UMAP on the PBMC dataset of Zheng
```
%load_ext autoreload
%autoreload 2
%env CUDA_VISIBLE_DEVICES=2
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
import umap
from firelight.visualizers.colorization import get_distinct_colors
from matplotlib.colors import ListedColormap
import pickle
import matplotlib.lines as mlines
import matplotlib
from umap.my_plot import plot_all_losses, hists_from_graph_embd
from umap.my_utils import filter_graph
dir_path = "../data/zheng_pbmc"
fig_path = "../figures"
seed = 0
# load the data
pca50 = pd.read_csv(os.path.join(dir_path,
"pbmc_qc_final.txt"),
sep='\t',
header=None)
pca50.shape
cell_types = pd.read_csv(os.path.join(dir_path,
"pbmc_qc_final_labels.txt"),
sep=',',
header=None).to_numpy().flatten()
np.unique(cell_types)
# rename cell types by stripping "CD..." prefixes if possible
for i in range(len(cell_types)):
words = cell_types[i].split(" ")
if words[0].startswith("CD") and not len(words) == 1:
words = words[1:]
cell_types[i] = " ".join(words)
labels = np.zeros(len(cell_types)).astype(int)
name_to_label = {}
for i, phase in enumerate(np.unique(cell_types)):
name_to_label[phase] = i
labels[cell_types==phase] = i
np.random.seed(seed)
colors = get_distinct_colors(len(name_to_label))
cmap = ListedColormap(colors)
np.random.shuffle(colors)
try:
with open(os.path.join(dir_path, f"umapperns_after_seed_{seed}.pkl"), "rb") as file:
umapperns_after = pickle.load(file)
embd_after = umapperns_after.embedding_
except FileNotFoundError:
umapperns_after = umap.UMAP(metric="cosine",
n_neighbors=30,
n_epochs=750,
log_losses="after",
random_state=seed,
verbose=True)
embd_after = umapperns_after.fit_transform(pca50)
with open(os.path.join(dir_path, f"umapperns_after_seed_{seed}.pkl"), "wb") as file:
pickle.dump(umapperns_after, file, pickle.HIGHEST_PROTOCOL)
plt.figure(figsize=(8,8))
scatter = plt.scatter(-embd_after[:,1],
-embd_after[:,0],
c=labels,
s=0.1,
alpha=1.0,
cmap=cmap)
plt.axis("off")
plt.gca().set_aspect("equal")
# dummy dots for legend
dots = []
for i in range(len(np.unique(cell_types))):
dot = mlines.Line2D([], [], color=colors[i], marker='.', linestyle="none",
markersize=15, label=np.unique(cell_types)[i])
dots.append(dot)
plt.legend(handles=dots, prop={'size': 20}, loc=(1,0))
plt.savefig(os.path.join(fig_path, f"pbmc_after_seed_{seed}.png"),
bbox_inches = 'tight',
pad_inches = 0,
dpi=300)
start=10 # omit early epochs where UMAP's sampling approximation is poor
matplotlib.rcParams.update({'font.size': 15})
fig_losses_after = plot_all_losses(umapperns_after.aux_data,start=start)
fig_losses_after.savefig(os.path.join(fig_path, f"pbmc_after_losses_{start}_seed_{seed}.png"),
bbox_inches = 'tight',
pad_inches = 0,
dpi=300)
alpha=0.5
min_dist = 0.1
spread = 1.0
a, b= umap.umap_.find_ab_params(spread=spread, min_dist=min_dist)
fil_graph = filter_graph(umapperns_after.graph_, umapperns_after.n_epochs).tocoo()
hist_high_pbmc, \
hist_high_pos_pbmc, \
hist_target_pbmc, \
hist_target_pos_pbmc, \
hist_low_pbmc, \
hist_low_pos_pbmc, \
bins_pbmc = hists_from_graph_embd(graph=fil_graph,
embedding=embd_after,
a=a,
b=b)
# plot histogram of positive high-dimensional edges
plt.rcParams.update({'font.size': 22})
plt.figure(figsize=(8, 5))
plt.hist(bins_pbmc[:-1], bins_pbmc, weights=hist_high_pos_pbmc, alpha=alpha, label=r"$\mu_{ij}$")
plt.hist(bins_pbmc[:-1], bins_pbmc, weights=hist_target_pos_pbmc, alpha=alpha, label=r"$\nu_{ij}^*$")
plt.hist(bins_pbmc[:-1], bins_pbmc, weights=hist_low_pos_pbmc, alpha=alpha, label=r"$\nu_{ij}$")
plt.legend(loc="upper center", ncol=3)
#plt.yscale("symlog", linthresh=1)
plt.gca().spines['left'].set_position("zero")
plt.gca().spines['bottom'].set_position("zero")
#plt.savefig(os.path.join(fig_path, f"pbmc_hist_sims_pos_seed_{seed}.png"),
# bbox_inches = 'tight',
# pad_inches = 0,dpi=300)
```
| github_jupyter |
```
import numpy as np
from jax.config import config
config.update("jax_enable_x64", True)
config.update('jax_platform_name', 'gpu')
import jax.numpy as jnp
from jax import jit, vmap, lax, grad
from jax.test_util import check_grads
from jax.interpreters import ad, batching, xla
import jax
from caustics import ehrlich_aberth
from functools import partial
@partial(jit, static_argnames=('N',))
def compute_polynomial_coeffs(w, a, e1, N=2):
wbar = jnp.conjugate(w)
p_0 = -a**2 + wbar**2
p_1 = a**2*w - 2*a*e1 + a - w*wbar**2 + wbar
p_2 = 2*a**4 - 2*a**2*wbar**2 + 4*a*wbar*e1 - 2*a*wbar - 2*w*wbar
p_3 = -2*a**4*w + 4*a**3*e1 - 2*a**3 + 2*a**2*w*wbar**2 - 4*a*w*wbar*e1 +\
2*a*w*wbar + 2*a*e1 - a - w
p_4 = -a**6 + a**4*wbar**2 - 4*a**3*wbar*e1 + 2*a**3*wbar + 2*a**2*w*wbar +\
4*a**2*e1**2 - 4*a**2*e1 + 2*a**2 - 4*a*w*e1 + 2*a*w
p_5 = a**6*w - 2*a**5*e1 + a**5 - a**4*w*wbar**2 - a**4*wbar + 4*a**3*w*wbar*e1 -\
2*a**3*w*wbar + 2*a**3*e1 - a**3 - 4*a**2*w*e1**2 + 4*a**2*w*e1 - a**2*w
p = jnp.array([p_0, p_1, p_2, p_3, p_4, p_5])
return p
# Lens postion
a = 0.5*0.9
# Lens mass ratio
e1 = 0.8
e2 = 1. - e1
ncoeffs = 6
# Compute complex polynomial coefficients for each source position
w_points = jnp.linspace(0.39, 0.4, 5000).astype(jnp.complex128)
wgrid = w_points[:, None]
wgrid = wgrid[10, :][:, None] # select just one polynomial
coeffs = vmap(vmap(lambda w: compute_polynomial_coeffs(w, a, e1)))(wgrid).reshape(1, -1)
coeffs.shape
# This fails
test_fn = lambda p: ehrlich_aberth(p)[2]
check_grads(test_fn, (coeffs,), 1)
# Evaluate jvp using finite differences
delta = jnp.zeros_like(coeffs)
delta = delta.at[0, np.random.randint(0, len(coeffs))].set(1e-07 + 2e-07j) # perturbation in random coefficient
df_finite_diff = ehrlich_aberth(coeffs + delta) - ehrlich_aberth(coeffs)
df_finite_diff
# Evaluate jvp using _ehrlich_aberth_jvp from ops.py
f, df = jax.jvp(ehrlich_aberth, (coeffs,), (delta,))
df
# Check roots
vmap(lambda x: jnp.polyval(coeffs.reshape(-1), x))(f)
@jax.custom_jvp
def fn(p):
return ehrlich_aberth(p)
@fn.defjvp
def fn_jvp(args, tangents):
p = args[0]
dp = tangents[0]
size = p.shape[0] # number of polynomials
deg = p.shape[1] - 1 # degree of polynomials
# Roots
z = ehrlich_aberth(p) # shape (size * deg,)
z = z.reshape((size, deg)) # shape (size, deg)
# Evaluate the derivative of the polynomials at the roots
p_deriv = vmap(jnp.polyder)(p)
df_dz = vmap(lambda coeffs, root: jnp.polyval(coeffs, root))(p_deriv, z)
def zero_tangent(tan, val):
return lax.zeros_like_array(val) if type(tan) is ad.Zero else tan
# The Jacobian of f with respect to coefficient p evaluated at each of the
# roots. Shape (size, deg, deg + 1).
df_dp = vmap(vmap(lambda z: jnp.power(z, jnp.arange(deg + 1)[::-1])))(z)
# Jacobian of the roots multiplied by the tangents, shape (size, deg)
dz = (
vmap(
lambda df_dp_i: jnp.sum(df_dp_i * zero_tangent(dp, p), axis=1),
in_axes=1, # vmap over all roots
)(df_dp).T
/ (-df_dz)
)
return (
z.reshape(-1),
dz.reshape(-1),
)
df_finite_diff2 = fn(coeffs + delta) - fn(coeffs)
df_finite_diff2
# ????
f2, df2 = jax.jvp(fn, (coeffs,), (delta,))
df2
# This fails because the precision isn't high enough
check_grads(lambda p: fn(p)[2], (coeffs,), 1)
```
| github_jupyter |
```
print(3)
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
sess = tf.InteractiveSession()
image = np.array([[[[1],[2],[3]],[[4],[5],[6]],[[7],[8],[9]]]], dtype=np.float32)
print("image.shape", image.shape)
weight = tf.constant([[[[1.]],[[1.]]],[[[1.]],[[1.]]]])
print("weight.shpae", weight.shape)
conv2d = tf.nn.conv2d(image, weight, strides=[1,1,1,1], padding='VALID')
conv2d_img = conv2d.eval()
print("conv2d img.shape", conv2d_img.shape)
conv2d_img = np.swapaxes(conv2d_img, 0, 3)
for i, one_img in enumerate(conv2d_img):
print(one_img.reshape(2,2))
plt.subplot(1,2,i+1), plt.imshow(one_img.reshape(2,2), cmap='gray')
# sess = tf.InteractiveSession()
# image = np.array([[[1],[2],[3]],[[4],[5],[6]],[[7],[8],[9]]])
# print(image.shape)
# plt.imshow(image.reshape(3,3), cmap="Greys")
import numpy as np
np.repeat(np.array([1,2]),2)
np.array([[1,2]]) + np.array([[1],[2]])
def get_im2col_indices(x_shape, field_height, field_width, padding=1, stride=1):
    # First figure out what the size of the output should be
    N, C, H, W = x_shape
    assert (H + 2 * padding - field_height) % stride == 0
    assert (W + 2 * padding - field_width) % stride == 0
    out_height = (H + 2 * padding - field_height) // stride + 1
    out_width = (W + 2 * padding - field_width) // stride + 1
    i0 = np.repeat(np.arange(field_height), field_width)
    i0 = np.tile(i0, C)
    i1 = stride * np.repeat(np.arange(out_height), out_width)
    j0 = np.tile(np.arange(field_width), field_height*C)
    j1 = stride * np.tile(np.arange(out_width), out_height)
    i = i0.reshape(1,-1) + i1.reshape(-1,1)
    j = j0.reshape(1,-1) + j1.reshape(-1,1)
    k = np.repeat(np.arange(C), field_width*field_height)  # channel index per column
    return (k, i, j)
x = np.arange(120).reshape(2,3,4,5)
N, C, H, W = x.shape
padding = 0
stride = 1
field_height = 3
field_width = 3
assert (H + 2 * padding - field_height) % stride == 0
assert (W + 2 * padding - field_width) % stride == 0
out_height = (H + 2 * padding - field_height) // stride + 1
out_width = (W + 2 * padding - field_width) // stride + 1
i0 = np.repeat(np.arange(field_height), field_width)
i0 = np.tile(i0, C)
i1 = stride * np.repeat(np.arange(out_height), out_width)
j0 = np.tile(np.arange(field_width), field_height*C)
j1 = stride * np.tile(np.arange(out_width), out_height)
i = i0.reshape(1,-1) + i1.reshape(-1,1)
j = j0.reshape(1,-1) + j1.reshape(-1,1)
k = np.repeat(np.arange(C), field_width*field_height)
print(x)
cols = x[:,k,i,j]
print('-'*10)
cols.transpose(1,2,0).reshape(field_height*field_width*C, -1)
np.dot(np.array([[1, 2], [3, 4]]), np.array([1, 2]).reshape(2, 1))
```
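To check that the index arithmetic above really implements im2col, here is a self-contained sketch (independent of the scratch cells above, with the function's return value added) that extracts patches with the `(k, i, j)` indices, computes a convolution as a single matrix product, and verifies it against a direct nested-loop convolution:

```python
import numpy as np

def get_im2col_indices(x_shape, fh, fw, padding=0, stride=1):
    N, C, H, W = x_shape
    out_h = (H + 2 * padding - fh) // stride + 1
    out_w = (W + 2 * padding - fw) // stride + 1
    i0 = np.tile(np.repeat(np.arange(fh), fw), C)
    i1 = stride * np.repeat(np.arange(out_h), out_w)
    j0 = np.tile(np.arange(fw), fh * C)
    j1 = stride * np.tile(np.arange(out_w), out_h)
    i = i0.reshape(1, -1) + i1.reshape(-1, 1)   # (out_h*out_w, C*fh*fw)
    j = j0.reshape(1, -1) + j1.reshape(-1, 1)
    k = np.repeat(np.arange(C), fh * fw)        # channel index of each column
    return k, i, j

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3, 4, 5))           # N, C, H, W
w = rng.standard_normal((1, 3, 3, 3))           # one 3x3 filter over 3 channels

k, i, j = get_im2col_indices(x.shape, 3, 3)
cols = x[:, k, i, j]                            # (N, out_h*out_w, C*fh*fw)
out_im2col = (cols @ w.reshape(1, -1).T).reshape(2, 2, 3)

# Direct nested-loop convolution for comparison.
out_loop = np.zeros((2, 2, 3))
for n in range(2):
    for a in range(2):
        for b in range(3):
            out_loop[n, a, b] = np.sum(x[n, :, a:a+3, b:b+3] * w[0])

print(np.allclose(out_im2col, out_loop))  # True
```

The key point is that the patch columns are ordered channel-major, then row-major within each window, which matches the row-major flattening of the filter tensor.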
# Deep Convolutional Neural Networks (AlexNet)
In the nearly twenty years after LeNet was proposed, neural networks were for a time surpassed by other machine learning methods, such as support vector machines. Although LeNet achieved good results on early small datasets, its performance on larger, real-world datasets was unsatisfactory. On the one hand, neural networks are computationally expensive. Although some hardware accelerators for neural networks appeared in the 1990s, they never became as widespread as GPUs did later, so training a convolutional neural network with many channels, many layers, and a large number of parameters was hard to accomplish at the time. On the other hand, researchers had not yet deeply explored areas such as parameter initialization and non-convex optimization algorithms, which generally made training complex neural networks difficult.
As we saw in the previous section, neural networks can classify images directly from their raw pixels. This approach, called end-to-end, saves many intermediate steps. For a long time, however, the more popular approach relied on features hand-designed and generated through researchers' diligence and ingenuity. The main pipeline of this style of image classification research was:
1. Acquire an image dataset;
2. Generate features for the images using existing feature-extraction functions;
3. Classify the image features with a machine learning model.
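The three-step pipeline can be made concrete with a minimal, self-contained NumPy sketch: a synthetic toy dataset of bright and dark images, a hand-crafted feature (per-image mean and standard deviation), and a nearest-centroid classifier. This is only an illustrative stand-in, not a real computer vision pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: acquire a (synthetic) image dataset: bright vs. dark 8x8 "images".
bright = rng.uniform(0.6, 1.0, size=(50, 8, 8))
dark = rng.uniform(0.0, 0.4, size=(50, 8, 8))
images = np.concatenate([bright, dark])
labels = np.array([1] * 50 + [0] * 50)

# Step 2: hand-crafted feature extraction (mean and std of each image).
def extract_features(imgs):
    return np.stack([imgs.mean(axis=(1, 2)), imgs.std(axis=(1, 2))], axis=1)

feats = extract_features(images)

# Step 3: classify the features with a simple model (nearest centroid).
centroids = np.stack([feats[labels == c].mean(axis=0) for c in (0, 1)])

def predict(imgs):
    f = extract_features(imgs)
    dists = np.linalg.norm(f[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

print((predict(images) == labels).mean())  # training accuracy: 1.0
```

In the classical view, only step 3 counted as "machine learning"; steps 1 and 2 were where computer vision practitioners actually spent their effort.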
At the time, the "machine learning" part was considered to cover only this final step. If you had talked with machine learning researchers back then, they would have told you that machine learning was both important and beautiful: elegant theorems proved the properties of many classifiers, and the field was vibrant, rigorous, and extremely useful. If you had talked with computer vision researchers, however, you would have heard a different story. They would tell you the unspoken truth of image recognition: what really matters in a computer vision pipeline is the data and the features. In other words, using a cleaner dataset and more effective features mattered more to the image classification result than the choice of machine learning model.
## Learning Feature Representations
Since features are so important, how should they be represented?
As mentioned, for a long time features were extracted from data by all kinds of hand-designed functions. Indeed, many researchers kept improving image classification results by proposing new feature-extraction functions, and for a time this contributed significantly to the progress of computer vision.
Other researchers, however, disagreed. They argued that the features themselves should also be learned, and further believed that, in order to represent sufficiently complex inputs, features should be represented hierarchically. Researchers holding this view believed that multi-layer neural networks might learn multi-level representations of data, representing increasingly abstract concepts or patterns level by level. Take image classification as an example, and recall the object-edge-detection example in the ["Two-Dimensional Convolutional Layer"](conv-layer.ipynb) section. In a multi-layer neural network, the first-level representation of an image might be whether an edge appears at a particular location and angle; the second level might combine these edges into interesting patterns, such as textures; and at the third level, the textures of the previous level might aggregate further into patterns corresponding to specific parts of objects. Representing the input level by level in this way, the model can eventually complete the classification task fairly easily from the final-level representation. It should be stressed that this hierarchical representation of the input is determined by the parameters of the multi-layer model, and all of those parameters are learned.
Despite a persistent group of researchers who kept trying to learn hierarchical representations of visual data, for a long time these ambitions went unfulfilled. Several contributing factors are worth analyzing one by one.
### Missing Ingredient 1: Data
Deep models with many features require large amounts of labeled data to outperform other classical methods. Constrained by the limited storage of early computers and the limited research budgets of the 1990s, most research was based on small public datasets. For example, many research papers were based on a handful of public datasets from the University of California, Irvine (UCI), many of which contained only a few hundred to a few thousand images. This situation improved with the big-data wave that arose around 2010. In particular, the ImageNet dataset, created in 2009, contains 1,000 object classes, each with up to thousands of different images, a scale that no other public dataset of the time could match. ImageNet pushed both computer vision and machine learning research into a new phase, in which the earlier traditional methods no longer held the advantage.
### Missing Ingredient 2: Hardware
Deep learning makes heavy demands on computing resources. The limited compute power of early hardware made training more complex neural networks difficult. The arrival of general-purpose GPUs changed this landscape. GPUs had long been designed for image processing and computer games, in particular for high-throughput matrix and vector multiplication serving basic graphics transformations. Fortunately, the mathematics involved is similar to that of convolutional layers in deep networks. The concept of the general-purpose GPU emerged in 2001, along with programming frameworks such as OpenCL and CUDA, and GPUs began to be adopted by the machine learning community around 2010.
## AlexNet
AlexNet burst onto the scene in 2012. The model is named after Alex Krizhevsky, the first author of the paper [1]. AlexNet used an 8-layer convolutional neural network and won the ImageNet 2012 image recognition challenge by a large margin. It demonstrated for the first time that learned features can surpass hand-designed features, breaking the prevailing paradigm of computer vision research at a stroke.
AlexNet's design philosophy is very similar to LeNet's, but there are also notable differences.
First, compared with the relatively small LeNet, AlexNet consists of 8 layers of transformations: 5 convolutional layers, 2 fully connected hidden layers, and 1 fully connected output layer. We describe the design of these layers in detail below.
The convolution window in AlexNet's first layer has shape $11\times11$. Since the vast majority of ImageNet images are more than 10 times taller and wider than MNIST images, objects in ImageNet images occupy more pixels, so a larger convolution window is needed to capture them. The window shape shrinks to $5\times5$ in the second layer and to $3\times3$ thereafter. In addition, the first, second, and fifth convolutional layers are each followed by a max-pooling layer with a $3\times3$ window and a stride of 2. Moreover, AlexNet uses tens of times more convolution channels than LeNet.
Immediately after the last convolutional layer come two fully connected layers with 4,096 outputs each. These two huge fully connected layers account for nearly 1 GB of model parameters. Because of the limited GPU memory of the time, the original AlexNet used a dual-data-stream design so that each GPU only had to handle half of the model. Fortunately, GPU memory has advanced considerably over the past few years, so this special design is usually no longer needed.
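The size of those fully connected layers is easy to check with a back-of-the-envelope count. The sketch below assumes the layer shapes of the original ImageNet AlexNet (a 256x6x6 feature map feeding two 4096-unit layers and a 1000-way output); the raw float32 weights alone come to a few hundred MB, so the "nearly 1 GB" figure presumably also accounts for gradients and optimizer state during training:

```python
# Parameter count of AlexNet's fully connected layers (original ImageNet shapes).
conv_out = 256 * 6 * 6          # size of the feature map entering the first FC layer
fc_sizes = [conv_out, 4096, 4096, 1000]

params = 0
for n_in, n_out in zip(fc_sizes[:-1], fc_sizes[1:]):
    params += n_in * n_out + n_out  # weights + biases

print(f"FC parameters: {params:,}")              # ~58.6 million
print(f"float32 size: {params * 4 / 2**20:.0f} MiB")
```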
Second, AlexNet replaced the sigmoid activation function with the simpler ReLU activation function. On the one hand, ReLU is cheaper to compute; for example, it involves no exponentiation, unlike sigmoid. On the other hand, ReLU makes the model easier to train under different parameter initialization schemes. This is because when the output of the sigmoid function is very close to 0 or 1, the gradient in those regions is almost 0, so backpropagation cannot continue to update some of the model parameters, whereas the gradient of ReLU in the positive interval is always 1. Therefore, with poorly initialized parameters, the sigmoid function may yield near-zero gradients even in the positive interval, preventing the model from being trained effectively.
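This vanishing-gradient argument is easy to verify numerically with a small NumPy sketch comparing the two gradients at a few pre-activation values:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x)), at most 0.25
    s = sigmoid(x)
    return s * (1.0 - s)

def relu_grad(x):
    # d/dx relu(x) is 1 for positive inputs, 0 otherwise
    return (x > 0).astype(float)

x = np.array([-10.0, -1.0, 0.5, 1.0, 10.0])
print("sigmoid'(x):", np.round(sigmoid_grad(x), 6))
print("relu'(x):   ", relu_grad(x))
```

Even at a large positive input such as `x = 10`, sigmoid's gradient is about `4.5e-5`, effectively zero, while ReLU's gradient stays exactly 1 everywhere in the positive interval.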
Third, AlexNet controls the model complexity of the fully connected layers with dropout (see the ["Dropout"](../chapter_deep-learning-basics/dropout.ipynb) section), whereas LeNet does not use dropout.
Fourth, AlexNet introduced extensive image augmentation, such as flipping, cropping, and color changes, to enlarge the dataset further and thereby mitigate overfitting. We will cover this technique in detail in the later ["Image Augmentation"](../chapter_computer-vision/image-augmentation.ipynb) section.
Below we implement a slightly simplified version of AlexNet.
```
import d2lzh as d2l
from mxnet import gluon, init, nd
from mxnet.gluon import data as gdata, nn
import os
import sys
net = nn.Sequential()
# Use a larger 11 x 11 window to capture objects, together with a stride of 4 to greatly reduce
# the output height and width. The number of output channels is also much larger than in LeNet
net.add(nn.Conv2D(96, kernel_size=11, strides=4, activation='relu'),
nn.MaxPool2D(pool_size=3, strides=2),
# Shrink the convolution window; use a padding of 2 so input and output have the same
# height and width, and increase the number of output channels
nn.Conv2D(256, kernel_size=5, padding=2, activation='relu'),
nn.MaxPool2D(pool_size=3, strides=2),
# Three consecutive convolutional layers with smaller windows. Except for the final one,
# the number of output channels is increased further. The first two convolutional layers
# are not followed by pooling layers that would reduce the input height and width
nn.Conv2D(384, kernel_size=3, padding=1, activation='relu'),
nn.Conv2D(384, kernel_size=3, padding=1, activation='relu'),
nn.Conv2D(256, kernel_size=3, padding=1, activation='relu'),
nn.MaxPool2D(pool_size=3, strides=2),
# The fully connected layers here have several times more outputs than those in LeNet.
# Use dropout layers to mitigate overfitting
nn.Dense(4096, activation="relu"), nn.Dropout(0.5),
nn.Dense(4096, activation="relu"), nn.Dropout(0.5),
# Output layer. Since we use Fashion-MNIST here, the number of classes is 10
# instead of the 1000 in the paper
nn.Dense(10))
```
We construct a single-channel data example with height and width of 224 to observe the output shape of each layer.
```
X = nd.random.uniform(shape=(1, 1, 224, 224))
net.initialize()
for layer in net:
X = layer(X)
print(layer.name, 'output shape:\t', X.shape)
```
## Reading the Data
Although the paper trains AlexNet on ImageNet, we continue to use the Fashion-MNIST dataset from earlier sections for this demonstration, because training on ImageNet would take a long time. When reading the data, we add an extra step that enlarges the image height and width to 224, the size AlexNet expects. This is done with a `Resize` instance: we apply `Resize` before the `ToTensor` instance, and chain the two transforms with a `Compose` instance for convenient invocation.
```
# This function is saved in the d2lzh package for later use
def load_data_fashion_mnist(batch_size, resize=None, root=os.path.join(
'~', '.mxnet', 'datasets', 'fashion-mnist')):
root = os.path.expanduser(root)  # Expand the user path '~'
transformer = []
if resize:
transformer += [gdata.vision.transforms.Resize(resize)]
transformer += [gdata.vision.transforms.ToTensor()]
transformer = gdata.vision.transforms.Compose(transformer)
mnist_train = gdata.vision.FashionMNIST(root=root, train=True)
mnist_test = gdata.vision.FashionMNIST(root=root, train=False)
num_workers = 0 if sys.platform.startswith('win32') else 4
train_iter = gdata.DataLoader(
mnist_train.transform_first(transformer), batch_size, shuffle=True,
num_workers=num_workers)
test_iter = gdata.DataLoader(
mnist_test.transform_first(transformer), batch_size, shuffle=False,
num_workers=num_workers)
return train_iter, test_iter
batch_size = 128
# If an "out of memory" error occurs, reduce batch_size or resize
train_iter, test_iter = load_data_fashion_mnist(batch_size, resize=224)
```
## Training
Now we can start training AlexNet. Compared with LeNet in the previous section, the main change here is the use of a smaller learning rate.
```
lr, num_epochs, ctx = 0.01, 5, d2l.try_gpu()
net.initialize(force_reinit=True, ctx=ctx, init=init.Xavier())
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': lr})
d2l.train_ch5(net, train_iter, test_iter, batch_size, trainer, ctx, num_epochs)
```
## Summary
* AlexNet is similar in structure to LeNet, but uses more convolutional layers and a larger parameter space to fit the large-scale ImageNet dataset. It marks the dividing line between shallow and deep neural networks.
* Although the implementation of AlexNet appears to take only a few more lines of code than that of LeNet, this conceptual shift, and the truly excellent experimental results it produced, took the research community many years to achieve.
## Exercises
* Try increasing the number of training epochs. Compared with LeNet, how do AlexNet's results differ? Why?
* AlexNet may be too complex for the Fashion-MNIST dataset. Try simplifying the model to make training faster while keeping the accuracy from dropping significantly.
* Modify the batch size and observe the changes in accuracy and in memory or GPU memory usage.
## References
[1] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105).
## Scan the QR Code to Access the [Discussion Forum](https://discuss.gluon.ai/t/topic/1258)

# Predict Blood Donation for Future Expectancy
Forecasting blood supply is a serious and recurrent problem for blood collection managers: in January 2019, "Nationwide, the Red Cross saw 27,000 fewer blood donations over the holidays than they see at other times of the year." Machine learning can be used to learn the patterns in the data to help to predict future blood donations and therefore save more lives.
## Project Tasks
- Inspecting transfusion.data file
- Loading the blood donations data
- Inspecting transfusion DataFrame
- Creating target column
- Checking target incidence
- Splitting transfusion into train and test datasets
- Selecting model using TPOT
- Checking the variance
- Log normalization
- Training the linear regression model
- Conclusion
## 1. Inspecting transfusion.data file
Blood transfusion saves lives - from replacing lost blood during major surgery or a serious injury to treating various illnesses and blood disorders. Ensuring that there's enough blood in supply whenever needed is a serious challenge for the health professionals. According to WebMD "about 5 million Americans need a blood transfusion every year".
## 2. Loading the blood donations data
```
import pandas as pd
import numpy as np
df = pd.read_csv('transfusion.data')
df
```
## 3. Inspecting transfusion DataFrame
Let's briefly return to our discussion of the RFM model. RFM stands for Recency, Frequency and Monetary Value, and it is commonly used in marketing to identify your best customers. In our case, our customers are blood donors.
RFMTC is a variation of the RFM model. Below is a description of what each column means in our dataset:
* R (Recency - months since the last donation)
* F (Frequency - total number of donations)
* M (Monetary - total blood donated in c.c.)
* T (Time - months since the first donation)
* a binary variable representing whether he/she donated blood in March 2007 (1 stands for donating blood; 0 stands for not donating blood)
It looks like every column in our DataFrame has the numeric type, which is exactly what we want when building a machine learning model. Let's verify our hypothesis.
```
df.info()
df.head()
df.shape
df.isnull().sum()
df.columns
```
## 4. Creating target column
We are aiming to predict the value in the "whether he/she donated blood in March 2007" column.
Let's rename it to `target` so that it's more convenient to work with.
```
df.rename(columns={'whether he/she donated blood in March 2007': 'target'},inplace=True)
df.head()
```
## 5. Checking target incidence
We want to predict whether or not the same donor will give blood the next time the vehicle comes to campus. The model for this is a binary classifier, meaning that there are only 2 possible outcomes:
0 - the donor will not give blood
1 - the donor will give blood
Target incidence is defined as the number of cases of each individual target value in a dataset. That is, how many 0s in the target column compared to how many 1s? Target incidence gives us an idea of how balanced (or imbalanced) is our dataset.
```
df.target.value_counts(normalize=True).round(2)
```
## 6. Splitting transfusion into train and test datasets
Target incidence informed us that in our dataset 0s appear 76% of the time and 1s appear 24%.
Now we create a train/test split and then move on to prediction.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df.drop(columns='target'),df.target,test_size=0.25,random_state=42,stratify=df.target)
X_train.head()
```
## 7. Selecting model using TPOT
TPOT is a Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.
TPOT will automatically explore hundreds of possible pipelines to find the best one for our dataset. Note, the outcome of this search will be a scikit-learn pipeline, meaning it will include any pre-processing steps as well as the model.
We are using TPOT to help us zero in on one model that we can then explore and optimize further.
```
from tpot import TPOTClassifier
from sklearn.metrics import roc_auc_score
tpot = TPOTClassifier(generations=5,population_size=20,verbosity=2,scoring='roc_auc',random_state=42,disable_update_check=True,config_dict='TPOT light')
tpot.fit(X_train, y_train)
tpot_auc_score = roc_auc_score(y_test, tpot.predict_proba(X_test)[:, 1])
print(f'\nAUC score: {tpot_auc_score:.4f}')
print('\nBest pipeline steps:', end='\n')
for idx, (name, transform) in enumerate(tpot.fitted_pipeline_.steps, start=1):
print(f'{idx}.{transform}')
```
## 8. Checking the variance
TPOT picked Logistic Regression as the best model for our dataset with no pre-processing steps, giving us the AUC score of 0.7853. This is a great starting point. Let's see if we can make it better.
One of the assumptions for linear models is that the data and the features we are giving it are related in a linear fashion, or can be measured with a linear distance metric. If a feature in our dataset has a high variance that's an order of magnitude or more greater than the other features, this could impact the model's ability to learn from other features in the dataset.
Correcting for high variance is called normalization. It is one of the possible transformations you do before training a model. Let's check the variance to see if such transformation is needed.
```
X_train.var().round(2)
```
## 9. Log normalization
Monetary (c.c. blood)'s variance is very high in comparison to any other column in the dataset. This means that, unless accounted for, this feature may get more weight by the model (i.e., be seen as more important) than any other feature.
One way to correct for high variance is to use log normalization.
```
X_train_normed, X_test_normed = X_train.copy(), X_test.copy()
col_to_normalize = 'Monetary (c.c. blood)'
for df_ in [X_train_normed, X_test_normed]:
df_['Monetary_log'] = np.log(df_[col_to_normalize])
df_.drop(columns=col_to_normalize, inplace=True)
X_train_normed.var().round(2)
```
## 10. Training the logistic regression model
The variance looks much better now. Notice that now Time (months) has the largest variance, but it's not the orders of magnitude higher than the rest of the variables, so we'll leave it as is.
We are now ready to train the logistic regression model.
```
from sklearn import linear_model
import pickle as pkl
logreg = linear_model.LogisticRegression(C=0.1, dual=False, penalty='l2',solver='liblinear',random_state=42)
# Train the model
logreg.fit(X_train_normed, y_train)
logreg_auc_score = roc_auc_score(y_test, logreg.predict_proba(X_test_normed)[:, 1])
print(f'\nAUC score: {logreg_auc_score:.4f}')
```
## 11. Conclusion
The demand for blood fluctuates throughout the year. As one prominent example, blood donations slow down during busy holiday seasons. An accurate forecast for the future supply of blood allows for an appropriate action to be taken ahead of time and therefore saving more lives.
In this notebook, we explored automatic model selection using TPOT, and the AUC score we got was 0.7853. This is better than simply choosing 0 all the time (the target incidence suggests that such a model would have a 76% success rate). We then log normalized our training data and slightly improved the AUC score, from 0.7853 to 0.7868. In the field of machine learning, even small improvements in accuracy can be important, depending on the purpose.
Another benefit of using logistic regression model is that it is interpretable.
## Homework: Multilingual Embedding-based Machine Translation (7 points)
**In this homework** **<font color='red'>YOU</font>** will build a machine translation system without using parallel corpora, alignment, attention, 100500-layer super-cool recurrent neural networks, and all that kind of superstuff.
But even without parallel corpora this system can be good enough (hopefully).
For our system we choose two kindred Slavic languages: Ukrainian and Russian.
### Feel the difference!
(_синій кіт_ vs. _синій кит_)

### Fragment of the Swadesh list for some Slavic languages
The Swadesh list is a lexicostatistical tool. It's named after the American linguist Morris Swadesh and contains basic vocabulary. The list is used to determine the subgroupings and relatedness of languages.
So we can see some kind of word invariance for different Slavic languages.
| Russian | Belorussian | Ukrainian | Polish | Czech | Bulgarian |
|-----------------|--------------------------|-------------------------|--------------------|-------------------------------|-----------------------|
| женщина | жанчына, кабета, баба | жінка | kobieta | žena | жена |
| мужчина | мужчына | чоловік, мужчина | mężczyzna | muž | мъж |
| человек | чалавек | людина, чоловік | człowiek | člověk | човек |
| ребёнок, дитя | дзіця, дзіцёнак, немаўля | дитина, дитя | dziecko | dítě | дете |
| жена | жонка | дружина, жінка | żona | žena, manželka, choť | съпруга, жена |
| муж | муж, гаспадар | чоловiк, муж | mąż | muž, manžel, choť | съпруг, мъж |
| мать, мама | маці, матка | мати, матір, неня, мама | matka | matka, máma, 'стар.' mateř | майка |
| отец, тятя | бацька, тата | батько, тато, татусь | ojciec | otec | баща, татко |
| много | шмат, багата | багато | wiele | mnoho, hodně | много |
| несколько | некалькі, колькі | декілька, кілька | kilka | několik, pár, trocha | няколко |
| другой, иной | іншы | інший | inny | druhý, jiný | друг |
| зверь, животное | жывёла, звер, істота | тварина, звір | zwierzę | zvíře | животно |
| рыба | рыба | риба | ryba | ryba | риба |
| птица | птушка | птах, птиця | ptak | pták | птица |
| собака, пёс | сабака | собака, пес | pies | pes | куче, пес |
| вошь | вош | воша | wesz | veš | въшка |
| змея, гад | змяя | змія, гад | wąż | had | змия |
| червь, червяк | чарвяк | хробак, черв'як | robak | červ | червей |
| дерево | дрэва | дерево | drzewo | strom, dřevo | дърво |
| лес | лес | ліс | las | les | гора, лес |
| палка | кій, палка | палиця | patyk, pręt, pałka | hůl, klacek, prut, kůl, pálka | палка, пръчка, бастун |
But the context distributions of these languages demonstrate even more invariance. And we can use this fact for our purposes.
## Data
```
import gensim
import numpy as np
from gensim.models import KeyedVectors
```
Download embeddings here:
* [cc.uk.300.vec.zip](https://yadi.sk/d/9CAeNsJiInoyUA)
* [cc.ru.300.vec.zip](https://yadi.sk/d/3yG0-M4M8fypeQ)
Load embeddings for ukrainian and russian.
```
uk_emb = KeyedVectors.load_word2vec_format("cc.uk.300.vec")
ru_emb = KeyedVectors.load_word2vec_format("cc.ru.300.vec")
ru_emb.most_similar([ru_emb["август"]], topn=10)
uk_emb.most_similar([uk_emb["серпень"]])
ru_emb.most_similar([uk_emb["серпень"]])
```
Load small dictionaries of corresponding word pairs as the train and test sets.
```
def load_word_pairs(filename):
uk_ru_pairs = []
uk_vectors = []
ru_vectors = []
with open(filename, "r") as inpf:
for line in inpf:
uk, ru = line.rstrip().split("\t")
if uk not in uk_emb or ru not in ru_emb:
continue
uk_ru_pairs.append((uk, ru))
uk_vectors.append(uk_emb[uk])
ru_vectors.append(ru_emb[ru])
return uk_ru_pairs, np.array(uk_vectors), np.array(ru_vectors)
uk_ru_train, X_train, Y_train = load_word_pairs("ukr_rus.train.txt")
uk_ru_test, X_test, Y_test = load_word_pairs("ukr_rus.test.txt")
```
## Embedding space mapping
Let $x_i \in \mathrm{R}^d$ be the distributed representation of word $i$ in the source language, and let $y_i \in \mathrm{R}^d$ be the vector representation of its translation. Our purpose is to learn a linear transform $W$ that minimizes the Euclidean distance between $Wx_i$ and $y_i$ for some subset of word embeddings. Thus we can formulate the so-called Procrustes problem:
$$W^*= \arg\min_W \sum_{i=1}^n||Wx_i - y_i||_2$$
or
$$W^*= \arg\min_W ||WX - Y||_F$$
where $||*||_F$ - Frobenius norm.
In Greek mythology, Procrustes or "the stretcher" was a rogue smith and bandit from Attica who attacked people by stretching them or cutting off their legs, so as to force them to fit the size of an iron bed. We do the same bad thing to the source embedding space. Our Procrustean bed is the target embedding space.


But wait...$W^*= \arg\min_W \sum_{i=1}^n||Wx_i - y_i||_2$ looks like simple multiple linear regression (without intercept fit). So let's code.
```
from sklearn.linear_model import LinearRegression
mapping = LinearRegression(fit_intercept=False)
mapping.fit(X_train, Y_train)
```
Let's take a look at the neighbours of the vector of the word _"серпень"_ (_"август"_ in Russian) after the linear transform.
```
august = mapping.predict(uk_emb["серпень"].reshape(1, -1))
ru_emb.most_similar(august)
```
We can see that the neighbourhood of this embedding consists of different months, but the right variant is in ninth place.
As quality measure we will use precision top-1, top-5 and top-10 (for each transformed Ukrainian embedding we count how many right target pairs are found in top N nearest neighbours in Russian embedding space).
```
from scipy.spatial.distance import cdist
def precision(pairs, mapped_vectors, topn=1):
"""
:args:
pairs = list of right word pairs [(uk_word_0, ru_word_0), ...]
mapped_vectors = list of embeddings after mapping from source embedding space to destination embedding space
topn = the number of nearest neighbours in destination embedding space to choose from
:returns:
precision_val, float number, total number of words for those we can find right translation at top K.
"""
assert len(pairs) == len(mapped_vectors)
num_matches = 0
for i, (_, ru) in enumerate(pairs):
similar_results = ru_emb.most_similar(mapped_vectors[i].reshape(1, -1), topn=topn)
words = [word for word, _ in similar_results]
num_matches += (ru in words)
precision_val = num_matches / len(pairs)
return precision_val
assert precision([("серпень", "август")], august, topn=5) == 0.0
assert precision([("серпень", "август")], august, topn=9) == 1.0
assert precision([("серпень", "август")], august, topn=10) == 1.0
assert precision(uk_ru_test, X_test) == 0.0
assert precision(uk_ru_test, Y_test) == 1.0
precision_top1 = precision(uk_ru_test, mapping.predict(X_test), 1)
precision_top5 = precision(uk_ru_test, mapping.predict(X_test), 5)
assert precision_top1 >= 0.635
assert precision_top5 >= 0.813
```
## Making it better (orthogonal Procrustean problem)
It can be shown (see original paper) that a self-consistent linear mapping between semantic spaces should be orthogonal.
We can restrict transform $W$ to be orthogonal. Then we will solve next problem:
$$W^*= \arg\min_W ||WX - Y||_F \text{, where: } W^TW = I$$
$$I \text{- identity matrix}$$
Instead of making yet another regression problem we can find optimal orthogonal transformation using singular value decomposition. It turns out that optimal transformation $W^*$ can be expressed via SVD components:
$$X^TY=U\Sigma V^T\text{, singular value decomposition}$$
$$W^*=UV^T$$
```
from numpy.linalg import svd
def learn_transform(X_train, Y_train):
"""
:returns: W* : float matrix[emb_dim x emb_dim] as defined in formulae above
"""
U, _, VT = svd(X_train.T.dot(Y_train))
return U.dot(VT)
W = learn_transform(X_train, Y_train)
ru_emb.most_similar([np.matmul(uk_emb["серпень"], W)])
assert precision(uk_ru_test, np.matmul(X_test, W)) >= 0.653
assert precision(uk_ru_test, np.matmul(X_test, W), 5) >= 0.824
```
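Because the checks in the cell above depend on the downloaded embeddings, here is a self-contained sanity check of `learn_transform` on synthetic data (the function is repeated so the sketch runs on its own): generate a random orthogonal matrix, map random "source" vectors through it, and verify that the SVD solution recovers the transform exactly:

```python
import numpy as np
from numpy.linalg import svd

def learn_transform(X_train, Y_train):
    # Optimal orthogonal map: X^T Y = U S V^T  =>  W* = U V^T
    U, _, VT = svd(X_train.T.dot(Y_train))
    return U.dot(VT)

rng = np.random.default_rng(0)
d, n = 50, 300
# A random orthogonal d x d matrix via QR decomposition.
W_true, _ = np.linalg.qr(rng.standard_normal((d, d)))
X = rng.standard_normal((n, d))
Y = X @ W_true                                   # noise-free "translations"

W_hat = learn_transform(X, Y)
print(np.allclose(W_hat, W_true))                # recovered up to float error
print(np.allclose(W_hat @ W_hat.T, np.eye(d)))   # and W_hat is orthogonal
```

With noise-free data the recovery is exact; with real embeddings the solution is instead the closest orthogonal map in the Frobenius sense.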
## UK-RU Translator
Now we are ready to make a simple word-based translator: for each word in the source language, we find the nearest word in the target language in the shared embedding space.
```
with open("fairy_tale.txt", "r") as inpf:
uk_sentences = [line.rstrip().lower() for line in inpf]
import re
def translate(sentence):
"""
:args:
sentence - sentence in Ukrainian (str)
:returns:
translation - sentence in Russian (str)
* find ukrainian embedding for each word in sentence
* transform ukrainian embedding vector
* find nearest russian word and replace
"""
tokens = sentence.split(" ")
translation = []
for token in tokens:
search_result = re.search(r'^(\W*)(.*?)(\W*)$', token)
if search_result is None:
print('Token: ', token)
word = search_result.group(2)
if word == '':
translate = ''
elif word not in uk_emb:
translate = '_'
else:
translate = ru_emb.most_similar([np.matmul(uk_emb[word], W)], topn=1)[0][0]
translation.append(search_result.group(1) + translate + search_result.group(3))
return ' '.join(translation)
assert translate(".") == "."
assert translate("1 , 3") == "1 , 3"
assert translate("кіт зловив мишу") == "кот поймал мышку"
for sentence in uk_sentences:
print("src: {}\ndst: {}\n".format(sentence, translate(sentence)))
```
Not so bad, right? We can easily improve the translation by using a language model and by taking several nearest neighbours in the shared embedding space instead of just one. But next time.
## Would you like to learn more?
### Articles:
* [Exploiting Similarities among Languages for Machine Translation](https://arxiv.org/pdf/1309.4168) - entry point for multilingual embedding studies by Tomas Mikolov (the author of W2V)
* [Offline bilingual word vectors, orthogonal transformations and the inverted softmax](https://arxiv.org/pdf/1702.03859) - orthogonal transform for unsupervised MT
* [Word Translation Without Parallel Data](https://arxiv.org/pdf/1710.04087)
* [Loss in Translation: Learning Bilingual Word Mapping with a Retrieval Criterion](https://arxiv.org/pdf/1804.07745)
* [Unsupervised Alignment of Embeddings with Wasserstein Procrustes](https://arxiv.org/pdf/1805.11222)
### Repos (with ready-to-use multilingual embeddings):
* https://github.com/facebookresearch/MUSE
* https://github.com/Babylonpartners/fastText_multilingual
# Ray Crash Course - Actors
© 2019-2021, Anyscale. All Rights Reserved

Using Ray _tasks_ is great for distributing work around a cluster, but we've said nothing so far about managing distributed _state_, one of the big challenges in distributed computing. Ray tasks are great for _stateless_ computation, but we need something for _stateful_ computation.
Python classes are a familiar mechanism for encapsulating state. Just as Ray tasks extend the familiar concept of Python _functions_, Ray addresses stateful computation by extending _classes_ to become Ray _actors_.
> **Tip:** For more about Ray, see [ray.io](https://ray.io) or the [Ray documentation](https://docs.ray.io/en/latest/).
## What We Mean by Distributed State
If you've worked with data processing libraries like [Pandas](https://pandas.pydata.org/) or big data tools like [Apache Spark](https://spark.apache.org), you know that they provide rich features for manipulating large, structured _data sets_, i.e., the analogs of tables in a database. Some tools even support partitioning of these data sets over clusters for scalability.
This isn't the kind of distributed "state" Ray addresses. Instead, it's the more open-ended _graph of objects_ found in more general-purpose applications. For example, it could be the state of a game engine used in a reinforcement learning (RL) application or the total set of parameters in a giant neural network, some of which now have hundreds of millions of parameters.
## Conway's Game of Life
Let's explore Ray's actor model using [Conway's Game of Life](https://en.wikipedia.org/wiki/Conway's_Game_of_Life), a famous _cellular automaton_.
Here is an example of a notable pattern of game evolution, _Gosper's glider gun_:

(credit: Lucas Vieira - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=101736)
We'll use an implementation of Conway's Game of Life as a nontrivial example of maintaining state, the current grid of living and dead cells. We'll see how to leverage Ray to scale it.
> **Note:** Sadly, [John Horton Conway](https://en.wikipedia.org/wiki/John_Horton_Conway), the inventor of this automaton, passed away from COVID-19 on April 11, 2020. This lesson is dedicated to Professor Conway.
Let's start with some imports
```
import ray, time, statistics, sys, os
import numpy as np
import os
sys.path.append("..") # For library helper functions
```
I've never seen this done anywhere else, but our implementation of Game of Life doesn't just use `1` for living cells; it uses the number of iterations they've been alive, so `1-N`. I'll exploit this when we graph the game.
```
from game_of_life import Game, State, ConwaysRules
```
Utility functions for plotting using Holoviews and Bokeh, as well as running and timing games.
```
from actor_lesson_util import new_game_of_life_graph, new_game_of_life_grid, run_games, run_ray_games, show_cmap
```
The implementation is a bit long, so all the code is contained in [`game_of_life.py`](game_of_life.py).
(You can also run that file as a standalone script from the command line, try `python game_of_life.py --help`. On MacOS and Linux machines, the script is executable, so you can omit the `python`).
The first class is the `State`, which encapsulates the board state as an `N x N` grid of _cells_, where `N` is specified by the user. (For simplicity, we just use square grids.) There are two ways to initialize the game, specifying a starting grid or a size, in which case the cells are set randomly. The sample below just shows the size option. `State` instances are _immutable_, because the `Game` (discussed below) keeps a sequence of them, representing the lifetime states of the game.
For smaller grids, it's often possible that the game reaches a terminal state where it stops evolving. Larger grids are more likely to exhibit different cyclic patterns that would evolve forever, thereby making those runs appear to be _immortal_, except they eventually get disrupted by evolving neighbors.
```python
class State:
def __init__(self, size = 10):
# The version in the file also lets you pass in a grid of initial cells.
self.size = size
self.grid = np.random.randint(2, size = size*size).reshape((size, size))
def living_cells(self):
cells = [(i,j) for i in range(self.size) for j in range(self.size) if self.grid[i][j] != 0]
return zip(*cells)
```
Next, `ConwaysRules` encapsulates the logic of computing the new state of a game from the current state, using the update rules defined as follows:
* Any live cell with fewer than two live neighbours dies, as if by underpopulation.
* Any live cell with two or three live neighbours lives on to the next generation.
* Any live cell with more than three live neighbours dies, as if by overpopulation.
* Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
This class is stateless; `step()` is passed a `State` instance and it returns a new instance for the updated state.
```python
class ConwaysRules:
def step(self, state):
"""
Determine the next values for all the cells, based on the current
state. Creates a new State with the changes.
"""
new_grid = state.grid.copy()
for i in range(state.size):
for j in range(state.size):
new_grid[i][j] = self.apply_rules(i, j, state)
new_state = State(grid = new_grid)
return new_state
def apply_rules(self, i, j, state):
# Compute and return the next state for grid[i][j]
return ...
```
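The elided `apply_rules` could be filled in many ways; below is one hypothetical, self-contained sketch (independent of `game_of_life.py`, and vectorized rather than per-cell) that counts live neighbours via a zero-padded grid and applies the four rules, checked on the classic "blinker" oscillator:

```python
import numpy as np

def step(grid):
    """One Game of Life step on a 2D 0/1 array (cells outside the grid count as dead)."""
    padded = np.pad(grid, 1)
    # Sum the 8 neighbours of every cell by shifting the padded grid.
    neighbours = sum(
        padded[1 + di : 1 + di + grid.shape[0], 1 + dj : 1 + dj + grid.shape[1]]
        for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)
    )
    born = (grid == 0) & (neighbours == 3)                       # reproduction
    survives = (grid == 1) & ((neighbours == 2) | (neighbours == 3))
    return (born | survives).astype(int)

# A vertical "blinker" flips to horizontal and back on every step.
blinker = np.zeros((5, 5), dtype=int)
blinker[1:4, 2] = 1
after_one = step(blinker)
after_two = step(after_one)
print(after_one[2, 1:4])             # horizontal row of three: [1 1 1]
print((after_two == blinker).all())  # back to the original: True
```

The real implementation tracks cell ages rather than plain 0/1 values, but the neighbour-counting logic is the same.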
Finally, the game holds a sequence of states and the rules "engine".
```python
class Game:
def __init__(self, initial_state, rules):
self.states = [initial_state]
self.rules = rules
def step(self, num_steps = 1):
"""Take 1 or more steps, returning a list of new states."""
new_states = [self.rules.step(self.states[-1]) for _ in range(num_steps)]
self.states.extend(new_states)
return new_states
```
Okay, let's try it out!!
```
steps = 100 # Use a larger number for a long-running game.
game_size = 100
plot_size = 800
max_cell_age = 10 # clip the age of cells for graphing.
use_fixed_cell_sizes = True # Keep the points the same size. Try False, too!
```
For the graphs, we'll use a "greenish" background that looks good with `RdYlBu` color map.
However, if you have red-green color blindness, change the `bgcolor` string to `white`! Or, try the second combination with a custom color map `cmap` and background color `white` or `darkgrey`.
```
# Color maps from Bokeh:
cmap = 'RdYlBu' # others: 'Turbo' 'YlOrBr'
bgcolor = '#C0CfC8' # a greenish color, but not great for forms of red-green color blindness, where 'white' is better.
# A custom color map created at https://projects.susielu.com/viz-palette. Works best with white or dark grey background
#cmap=['#ffd700', '#ffb14e', '#fa8775', '#ea5f94', '#cd34b5', '#9d02d7', '#0000ff']
#bgcolor = 'darkgrey' # 'white'
def new_game(game_size):
initial_state = State(size = game_size)
rules = ConwaysRules()
game = Game(initial_state=initial_state, rules=rules)
return game
game = new_game(10)
print(game.states[0])
```
Now let's create a graph for a game of life using the imported utility function, `new_game_of_life_grid` (with only one graph in the "grid" for now).
**Note:** It will be empty for now.
```
_, graphs = new_game_of_life_grid(game_size, plot_size, x_grid=1, y_grid=1, shrink_factor=1.0,
bgcolor=bgcolor, cmap=cmap,
use_fixed_cell_sizes=use_fixed_cell_sizes, max_cell_age=max_cell_age)
graphs[0]
```
To make sure we don't consume too much driver memory, since games can grow large, let's write a function, `do_trial`, to run the experiment, then when it returns, the games will go out of scope and their memory will be reclaimed. It will use a library function we imported, `run_games` and the `new_game` function above to do most of the work.
(You might wonder why we don't create the `graphs` inside the function. It's essentially impossible to show the grid **before** the games run **and** to do the update visualization after it's shown inside one function inside a notebook cell. We have to build the grid, render it separately, then call `do_trial`.)
```
def do_trial(graphs, num_games=1, steps=steps, batch_size=1, game_size_for_each=game_size, pause_between_batches=0.0):
games = [new_game(game_size_for_each) for _ in range(num_games)]
return run_games(games, graphs, steps, batch_size, pause_between_batches)
%time num_games, steps, batch_size, duration = do_trial(graphs, steps=steps, pause_between_batches=0.1)
num_games, steps, batch_size, duration
```
If you can't see the plot or see it update, click here for a screen shot:
* [colored background](../images/ConwaysGameOfLife-Snapshot.png)
* [white background](../images/ConwaysGameOfLife-Snapshot-White-Background.png)
(Want to run longer? Pass a larger value for `steps` in the previous cell. 1000 takes several minutes, but you'll see interesting patterns develop.)
The first line of output is written by `run_games`, which is called by `do_trial`. The next two lines are output from the `%time` "magic". The fourth line shows the values returned by `run_games` through `do_trial`, which we'll use more fully in the exercise below.
How much time did it take? Note that there were `steps*0.1` seconds of sleep time between steps, so the rest is compute time. Does that account for the difference between the _user_ time and the _wall_ time?
```
steps*0.1
```
Yes, this covers most of the extra wall time.
A point's color changed as it lived longer. Here is the _color map_ used, where the top color corresponds to the longest-lived cells.
```
show_cmap(cmap=cmap, max_index=max_cell_age)
```
If you can't see the color map in the previous cell output, click [here](../images/ConwaysGameOfLife-ColorMap-RdYlBu.png) for the color map `RdYlBu`.
You could experiment with different values for `max_cell_age`.
> **Mini Exercise:** Change the value passed for `use_fixed_cell_sizes` to be `False` (in the cell that calls `new_game_of_life_grid`). Then rerun the `%time do_trial()` cell. What happens to the graph?
### Running Lots of Games
Suppose we wanted to run many of these games at the same time. For example, we might use reinforcement learning to find the initial state that maximizes some _reward_, like the most live cells after `N` steps or for immortal games. You could try writing a loop that starts `M` games and run the previous step loop interleaving games. Let's try that, with smaller grids.
```
x_grid = 5
y_grid = 3
shrink_factor = y_grid # Instead of 1 N-size game, build N/shrink_factor-size games
small_game_size = round(game_size/shrink_factor)
```
First build a grid of graphs, like before:
```
gridspace, all_graphs = new_game_of_life_grid(small_game_size, plot_size, x_grid, y_grid, shrink_factor,
bgcolor=bgcolor, cmap=cmap,
use_fixed_cell_sizes=use_fixed_cell_sizes, max_cell_age=max_cell_age)
gridspace
%time num_games, steps, batch_size, duration = do_trial(all_graphs, num_games=x_grid*y_grid, steps=steps, batch_size=1, game_size_for_each=small_game_size, pause_between_batches=0.1)
num_games, steps, batch_size, duration
```
If you can't see the plot or see it update, click here for a screen shot:
* [colored background](../images/ConwaysGameOfLife-Grid-Snapshot.png)
* [white background](../images/ConwaysGameOfLife-Grid-Snapshot-White-Background.png) (captured earlier in the run)
How much time did it take? You can perceive a "wave" across the graphs at each time step, because the games aren't running concurrently. Sometimes, a "spurt" of updates will happen, etc. Not ideal...
There were the same `steps*0.1` seconds of sleep time between steps, not dependent on the number of games, so the rest is compute time.
## Improving Performance with Ray
Let's start Ray as before in the [first lesson](01-Ray-Tasks.ipynb).
```
ray.init(ignore_reinit_error=True)
```
Running on your laptop? Click the output of the next cell to open the Ray Dashboard.
If you are running on the Anyscale platform, use the dashboard URL provided to you.
```
print(f'Dashboard: http://{ray.get_dashboard_url()}')
```
## Actors - Ray's Tool for Distributed State
Python is an object-oriented language. We often encapsulate bits of state in classes, like we did for `State` above. Ray leverages this familiar mechanism to manage distributed state.
Recall that adding the `@ray.remote` annotation to a _function_ turned it into a _task_. If we use the same annotation on a Python _class_, we get an _actor_.
### Why "Actor"?
The [Actor Model of Concurrency](https://en.wikipedia.org/wiki/Actor_model) is almost 50 years old! It's a _message-passing_ model, where autonomous blocks of code, the actors, receive messages from other actors asking them to perform work or return some results. Implementations provide thread safety while the messages are processed, so the user of an actor model implementation doesn't have to worry about writing thread-safe code. Because many messages might arrive while one is being processed, they are stored in a queue and processed one at a time, in order of arrival.
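To make the mechanics concrete, here is a toy, single-process sketch of the actor idea using only the Python standard library: a mailbox queue, one worker thread that owns the state, and one-slot reply queues standing in for futures. This is just an illustration of the model (all names here are invented), not how Ray implements actors.

```python
import queue
import threading

class CounterActor:
    """Toy actor: messages are queued and processed one at a time,
    so the state (self.count) needs no locking."""
    def __init__(self):
        self.count = 0
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            msg = self._mailbox.get()      # messages are handled in arrival order
            if msg is None:                # poison pill: stop the actor
                break
            self.count += 1                # safe: only this thread mutates state
            msg.put(self.count)            # reply through the one-slot queue

    def next(self):
        """Send a message; return a 'future' (a one-slot reply queue)."""
        reply_box = queue.Queue(maxsize=1)
        self._mailbox.put(reply_box)
        return reply_box

    def stop(self):
        self._mailbox.put(None)

actor = CounterActor()
futures = [actor.next() for _ in range(100)]   # fire off 100 messages
results = [f.get() for f in futures]           # block until each reply arrives
actor.stop()
print(results[-1])  # -> 100
```

Even though 100 messages were sent "at once", the actor processed them serially, which is why the counts come back in order.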
There are many other implementations of the actor model, including [Erlang](https://www.erlang.org/), the first system to create a production-grade implementation, initially used for telecom switches, and [Akka](https://akka.io), a JVM implementation inspired by Erlang.
> **Tip:** The [Ray Package Reference](https://ray.readthedocs.io/en/latest/package-ref.html) in the [Ray Docs](https://ray.readthedocs.io/en/latest/) is useful for exploring the API features we'll learn.
Let's start by simply making `Game` an actor. We'll just subclass it and add `@ray.remote` to the subclass.
There's one other change we have to make: if we want to access the `state` and `rules` instances in an actor, we can't just use `mygame.state`, for example, as you would normally do for Python instances. Instead, we have to add "getter" methods for them.
Here's our Game actor definition.
```
@ray.remote
class RayGame(Game):
def __init__(self, initial_state, rules):
super().__init__(initial_state, rules)
def get_states(self):
return self.states
def get_rules(self):
return self.rules
```
To construct an instance and call methods, you use `.remote` as for tasks:
```
def new_ray_game(game_size):
initial_state = State(size = game_size)
rules = ConwaysRules()
ray_game_actor = RayGame.remote(initial_state, rules) # Note that .remote(...) is used to construct the instance.
return ray_game_actor
```
We'll use the following function to try out the implementation, but then take the Ray actor out of scope when we're done. This is because actors remain pinned to a worker as long as the driver (this notebook) has a reference to them. We don't want that wasted space...
```
def try_ray_game_actor():
ray_game_actor = new_ray_game(small_game_size)
print(f'Actor for game: {ray_game_actor}')
init_states = ray.get(ray_game_actor.step.remote())
print(f'\nInitial state:\n{init_states[0]}')
new_states = ray.get(ray_game_actor.step.remote())
print(f'\nState after step #1:\n{new_states[0]}')
try_ray_game_actor()
```
> **Key Points:** To summarize:
>
> 1. Declare an _actor_ by annotating a class with `@ray.remote`, just like declaring a _task_ from a function.
> 2. Add _accessor_ methods for any data members that you need to read or write, because using direct access, such as `my_game.state`, doesn't work for actors.
> 3. Construct actor instances with `my_instance = MyClass.remote(...)`.
> 4. Call methods with `my_instance.some_method.remote(...)`.
> 5. Use `ray.get()` and `ray.wait()` to retrieve results, just like you do for task results.
> **Tip:** If you start getting warnings about lots of Python processes running or you have too many actors scheduled, you can safely ignore these messages for now, but the performance measurements below won't be as accurate.
Okay, now let's repeat our grid experiment with a Ray-enabled Game of Life. Let's define a helper function, `do_ray_trial`, which is analogous to `do_trial` above. It encapsulates some of the steps for the same reason mentioned above: so that our actors go out of scope and the worker slots are reclaimed when the function call returns.
We call a library function `run_ray_games` to run these games. It's somewhat complicated, because it uses `ray.wait()` to process updates as soon as they are available, and also has hooks for batch processing and running without graphing (see below).
We'll create the graphs separately and pass them into `do_ray_trial`.
```
def do_ray_trial(graphs, num_games=1, steps=steps, batch_size=1, game_size_for_each=game_size, pause_between_batches=0.0):
game_actors = [new_ray_game(game_size_for_each) for _ in range(num_games)]
return run_ray_games(game_actors, graphs, steps, batch_size, pause_between_batches)
ray_gridspace, ray_graphs = new_game_of_life_grid(small_game_size, plot_size, x_grid, y_grid, shrink_factor,
bgcolor=bgcolor, cmap=cmap,
use_fixed_cell_sizes=use_fixed_cell_sizes, max_cell_age=max_cell_age)
ray_gridspace
%time do_ray_trial(ray_graphs, num_games=x_grid*y_grid, steps=steps, batch_size=1, game_size_for_each=small_game_size, pause_between_batches=0.1)
```
(Can't see the image? It's basically the same as the previous grid example.)
How did your times compare? For example, using a recent model MacBook Pro laptop, this run took roughly 19 seconds vs. 21 seconds for the previous run without Ray. That's not much of an improvement. Why?
In fact, updating the graphs causes enough overhead to remove most of the speed advantage of using Ray. We also sleep briefly between generations for nicer output. However, using Ray does produce smoother graph updates.
So, if we want to study more performance optimizations, we should remove the graphing overhead, which we'll do for the rest of this lesson.
Let's run the two trials without graphs and compare the performance. We'll use no pauses between "batches" and run the same number of games as the number of CPUs (cores) Ray says we have. This is actually the number of workers Ray started for us, which is 2x the number of physical cores:
```
num_cpus_float = ray.cluster_resources()['CPU']
num_cpus_float
```
As soon as you start the next two cells, switch to the Ray Dashboard and watch the CPU utilization. You'll see the Ray workers are idle, because we aren't using them right now, and the total CPU utilization will be well under 100%. For example, on a four-core laptop, the total CPU utilization will be 20-25%, or roughly 1/4th capacity.
Why? We're running the whole computation in the Python process for this notebook, which only utilizes one core.
```
%time do_trial(None, num_games=round(num_cpus_float), steps=steps, batch_size=1, game_size_for_each=game_size, pause_between_batches=0.0)
```
Now use Ray. Again, as soon as you start the next cell, switch to the Ray Dashboard and watch the CPU utilization. Now, the Ray workers will be utilized (but not 100%) and the total CPU utilization will be higher. You'll probably see 70-80% utilization.
Hence, now we're running on all cores.
```
%time do_ray_trial(None, num_games=round(num_cpus_float), steps=steps, batch_size=1, game_size_for_each=game_size, pause_between_batches=0.0)
```
So, using Ray does help when running parallel games. On a typical laptop, the performance boost is about 2-3 times better. It's not 15 times better (the number of concurrent games), because the computation is CPU intensive for each game with frequent memory access, so all the available cores are fully utilized. We would see much more impressive improvements on a cluster with a lot of CPU cores when running a massive number of games.
Notice the times for `user` and `total` times reported for the non-Ray and Ray runs (which are printed by the `%time` "magic"). They are only measuring the time for the notebook Python process, i.e., our "driver" program, not the whole application. Without Ray, all the work is done in this process, as we said previously, so the `user` and `total` times roughly equal the wall clock time. However, for Ray, these times are very low; the notebook is mostly idle, while the work is done in the separate Ray worker processes.
## More about Actors
Let's finish with a discussion of additional important information about actors, including recapping some points mentioned above.
### Actor Scheduling and Lifetimes
For the most part, when Ray runs actor code, it uses the same _task_ mechanisms we discussed in the [Ray Tasks](01-Ray-Tasks.ipynb) lesson. Actor constructor and method invocations work just like task invocations. However, there are a few notable differences:
* Once a _task_ finishes, it is removed from the worker that executed it, while an actor is _pinned_ to the worker until all Python references to it in the driver program are out of scope. That is, the usual garbage collection mechanism in Python determines when an actor is no longer needed and is removed from a worker. The reason the actor must remain in memory is because it holds state that might be needed, whereas tasks are stateless.
* Currently, each actor instance uses tens of MB of memory overhead. Hence, just as you should avoid having too many fine-grained tasks, you should avoid too many actor instances. (Reducing the overhead per actor is an ongoing improvement project.)
We explore actor scheduling and lifecycles in much greater depth in lesson [03: Ray Internals](03-Ray-Internals.ipynb) in the [Advanced Ray](../advanced-ray/00-Advanced-Ray-Overview.ipynb) tutorial.
### Durability of Actor State
At this time, Ray provides no built-in mechanism for _persisting_ actor state, i.e., writing to disk or a database in case of process failure. Hence, if a worker or whole server goes down with actor instances, their state is lost.
This is an area where Ray will evolve and improve in the future. For now, an important design consideration is to decide when you need to _checkpoint_ state and to use an appropriate mechanism for this purpose. Some of the Ray APIs explored in other tutorials have built-in checkpoint features, such as saving snapshots of trained models to a file system.
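For illustration, here is a hedged sketch of manual checkpointing with a plain Python class. The class, file format, and checkpoint policy are all invented for this example; it only shows the general pattern of snapshotting state so a "restarted" instance can resume from the last snapshot.

```python
import json
import os
import tempfile

class CheckpointedCounter:
    """Hypothetical stateful worker that snapshots its state to disk
    every `checkpoint_every` updates, so it can be restored after a crash."""
    def __init__(self, path, checkpoint_every=10):
        self.path = path
        self.checkpoint_every = checkpoint_every
        self.count = 0
        if os.path.exists(path):               # restore from the last snapshot
            with open(path) as f:
                self.count = json.load(f)["count"]

    def next(self):
        self.count += 1
        if self.count % self.checkpoint_every == 0:
            with open(self.path, "w") as f:    # good enough for a sketch
                json.dump({"count": self.count}, f)
        return self.count

path = os.path.join(tempfile.mkdtemp(), "counter.json")
c1 = CheckpointedCounter(path)
for _ in range(25):
    c1.next()
c2 = CheckpointedCounter(path)   # "restart": resumes from the last checkpoint
print(c2.count)  # -> 20 (the snapshot at count 20; counts 21-25 were lost)
```

Note the tradeoff this makes explicit: work done since the last checkpoint is lost on failure, so the checkpoint frequency balances durability against I/O overhead.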
## Extra - Does It Help to Run with Larger Batch Sizes?
You can read this section but choose to skip running the code for time's sake. The outcomes are discussed at the end.
You'll notice that we defined `run_games` and `do_trial`, as well as `run_ray_games` and `do_ray_trial` to take an optional `batch_size` that defaults to `1`. The idea is that maybe running game steps in batches, rather than one step at a time, will improve performance (but look less pleasing in the graphs).
This concept works in some contexts, such as minimizing the number of messages sent in networks (that is, fewer, but larger payloads), but it actually doesn't help a lot here, because each game is played in a single process, whether using Ray or not (at least as currently implemented...). Batching reduces the number of method invocations, but it's not an important amount of overhead in our case.
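To see what batching does and doesn't change, here is a toy sketch (the `ToyGame` class is invented for this illustration): batching cuts the number of `step` invocations, but the total work per generation is unchanged, so it only helps when per-invocation overhead dominates.

```python
class ToyGame:
    """Toy stand-in for a game: tracks generations and method invocations."""
    def __init__(self):
        self.generation = 0
        self.calls = 0                 # how many times step() was invoked

    def step(self, num_steps=1):       # batch_size maps to num_steps here
        self.calls += 1
        self.generation += num_steps   # same total work either way
        return self.generation

steps = 100
per_step = ToyGame()
for _ in range(steps):
    per_step.step(1)                   # batch_size=1: one call per generation

batched = ToyGame()
for _ in range(steps // 25):
    batched.step(25)                   # batch_size=25: only 4 calls total

print(per_step.calls, batched.calls)   # -> 100 4
```

Both games reach generation 100; batching shrank the call count by 25x, which matters for network round trips but not for in-process method calls like these.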
Let's confirm our suspicion about batching, that it doesn't help a lot.
Let's time several batch sizes without and with Ray. We'll run several times with each batch size to get an informal sense of the variation possible.
Once again, watch the Ray Dashboard while the next two code cells run.
```
for batch in [1, 10, 25, 50]:
for run in [0, 1]:
do_trial(graphs = None, num_games=1, steps=steps, batch_size=batch, game_size_for_each=game_size, pause_between_batches=0.0)
```
There isn't a significant difference based on batch size.
What about Ray? If we're running just one game, the results should be about the same.
```
for batch in [1, 10, 25, 50]:
for run in [0, 1]:
do_ray_trial(graphs = None, num_games=1, steps=steps, batch_size=batch, game_size_for_each=game_size, pause_between_batches=0.0)
```
With Ray's background activity, there is likely to be a little more variation in the numbers, but the conclusion is the same: the batch size doesn't matter, because batching adds no additional asynchrony to the computation.
# Exercises
When we needed to run multiple games concurrently as fast as possible, Ray was an easy win. If we graphed them while running, the wall-clock time is about the same, due to the graphics overhead, but the graphs updated more smoothly and each one looked independent.
Just as for Ray tasks, actors add some overhead, so there will be a crossing point for small problems where the concurrency provided by Ray won't be as beneficial. This exercise uses a simple actor example to explore this tradeoff.
See the [solutions notebook](solutions/Ray-Crash-Course-Solutions.ipynb) for a discussion of questions posed in this exercise.
## Exercise 1
Let's investigate Ray Actor performance. Answers to the questions posed here are in the [solutions](solutions/Ray-Crash-Course-Solutions.ipynb) notebook.
Consider the following class and actor, which simulate a busy process using `time.sleep()`:
```
class Counter:
"""Remember how many times ``next()`` has been called."""
def __init__(self, pause):
self.count = 0
self.pause = pause
def next(self):
time.sleep(self.pause)
self.count += 1
return self.count
@ray.remote
class RayCounter(Counter):
"""Remember how many times ``next()`` has been called."""
def __init__(self, pause):
super().__init__(pause)
def get_count(self):
return self.count
```
Recall that for an actor we need an accessor method to get the current count.
Here are methods to time them.
```
def counter_trial(count_to, num_counters = 1, pause = 0.01):
print('not ray: count_to = {:5d}, num counters = {:4d}, pause = {:5.3f}: '.format(count_to, num_counters, pause), end='')
start = time.time()
counters = [Counter(pause) for _ in range(num_counters)]
for i in range(num_counters):
for n in range(count_to):
counters[i].next()
duration = time.time() - start
print('time = {:9.5f} seconds'.format(duration))
return count_to, num_counters, pause, duration
def ray_counter_trial(count_to, num_counters = 1, pause = 0.01):
print('ray: count_to = {:5d}, num counters = {:4d}, pause = {:5.3f}: '.format(count_to, num_counters, pause), end='')
start = time.time()
final_count_futures = []
counters = [RayCounter.remote(pause) for _ in range(num_counters)]
for i in range(num_counters):
for n in range(count_to):
counters[i].next.remote()
final_count_futures.append(counters[i].get_count.remote())
ray.get(final_count_futures) # Discard result, but wait until finished!
duration = time.time() - start
print('time = {:9.5f} seconds'.format(duration))
return count_to, num_counters, pause, duration
```
Let's get a sense of what the performance looks like:
```
count_to = 10
for num_counters in [1, 2, 3, 4]:
counter_trial(count_to, num_counters, 0.0)
for num_counters in [1, 2, 3, 4]:
counter_trial(count_to, num_counters, 0.1)
for num_counters in [1, 2, 3, 4]:
counter_trial(count_to, num_counters, 0.2)
```
When there is no sleep pause, the results are almost instantaneous. For nonzero pauses, the times scale linearly in the pause size and the number of `Counter` instances. This is expected, since `Counter` and `counter_trial` are completely synchronous.
What about for Ray?
```
count_to = 10
for num_counters in [1, 2, 3, 4]:
ray_counter_trial(count_to, num_counters, 0.0)
for num_counters in [1, 2, 3, 4]:
ray_counter_trial(count_to, num_counters, 0.1)
for num_counters in [1, 2, 3, 4]:
ray_counter_trial(count_to, num_counters, 0.2)
```
Ray has higher overhead, so the zero-pause times for `RayCounter` are much longer than for `Counter`, but the times are roughly independent of the number of counters, because the instances are now running in parallel, unlike before. However, the times _per counter_ still grow linearly in the pause time and they are very close to the times per counter for `Counter` instances. Here's a repeat run to show what we mean:
```
count_to=10
num_counters = 1
for pause in range(0,6):
counter_trial(count_to, num_counters, pause*0.1)
ray_counter_trial(count_to, num_counters, pause*0.1)
```
Ignoring pause = 0, can you explain why the Ray times are consistently slightly larger than the non-Ray times? Study the implementations for `ray_counter_trial` and `RayCounter`. What code is synchronous and blocking vs. concurrent? In fact, is there _any_ code that is actually concurrent when you have just one instance of `Counter` or `RayCounter`?
To finish, let's look at the behavior for smaller pause steps, 0.0 to 0.1, and plot the times.
```
count_to=10
num_counters = 1
pauses=[]
durations=[]
ray_durations=[]
for pause in range(0,11):
pauses.append(pause*0.01)
_, _, _, duration = counter_trial(count_to, num_counters, pause*0.01)
durations.append(duration)
_, _, _, duration = ray_counter_trial(count_to, num_counters, pause*0.01)
ray_durations.append(duration)
from bokeh_util import two_lines_plot # utility we used in the previous lesson
from bokeh.plotting import show, figure
from bokeh.layouts import gridplot
two_lines = two_lines_plot(
"Pause vs. Execution Times (Smaller Is Better)", 'Pause', 'Time', 'No Ray', 'Ray',
pauses, durations, pauses, ray_durations,
x_axis_type='linear', y_axis_type='linear')
show(two_lines, plot_width=800, plot_height=400)
```
(Can't see the plot? Click [here](../images/actor-trials.png) for a screen shot.)
Once past zero pauses, the Ray overhead is constant. It doesn't grow with the pause time. Can you explain why it doesn't grow?
Run the next cell when you are finished with this notebook:
```
ray.shutdown() # "Undo ray.init()". Terminate all the processes started in this notebook.
```
The next lesson, [Why Ray?](03-Why-Ray.ipynb), takes a step back and explores the origin and motivations for Ray, and Ray's growing ecosystem of libraries and tools.
# Summarising the mean and the variance using the IMNN
For this example we are going to use the IMNN and the LFI module to infer the unknown mean, $\mu$, and variance, $\Sigma$, of $n_{\bf d}=10$ data points of a 1D random Gaussian field, ${\bf d}=\{d_i\sim\mathcal{N}(\mu,\Sigma)|i\in[1, n_{\bf d}]\}$. This is an interesting problem since we know the likelihood analytically, but it is non-Gaussian
$$\mathcal{L}({\bf d}|\mu,\Sigma) = \prod_i^{n_{\bf d}}\frac{1}{\sqrt{2\pi|\Sigma|}}\exp\left[-\frac{1}{2}\frac{(d_i-\mu)^2}{\Sigma}\right]$$
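Since this likelihood is known analytically, it is worth writing its log down directly as a baseline; a small numpy sketch (the function name is ours, not from the modules imported below):

```python
import numpy as np

def log_likelihood(d, mu, sigma2):
    """Log of the Gaussian likelihood above, summed over the n_d points:
    sum_i [ -0.5*log(2*pi*sigma2) - 0.5*(d_i - mu)^2 / sigma2 ]."""
    return np.sum(-0.5 * np.log(2.0 * np.pi * sigma2)
                  - 0.5 * (d - mu) ** 2 / sigma2)

rng = np.random.default_rng(0)
d = rng.normal(0.0, 1.0, size=10)
print(log_likelihood(d, mu=0.0, sigma2=1.0))
```

Note the non-Gaussian shape in $\Sigma$: the parameter appears both in the normalisation and in the exponent, which is why the posterior over $(\mu, \Sigma)$ is an interesting test case.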
```
import numpy as np
import tensorflow as tf
import imnn_tf
from imnn_tf.lfi import GaussianApproximation, \
ApproximateBayesianComputation, PopulationMonteCarlo
from make_data import GenerateGaussianNoise
from make_data import AnalyticLikelihood
import matplotlib.pyplot as plt
import tensorflow_probability as tfp
import glob
tfd = tfp.distributions
print("IMNN {}\nTensorFlow {}\nTensorFlow Probability {}\nnumpy {}".format(
imnn_tf.__version__, tf.__version__, tfp.__version__, np.__version__))
```
The objective is to use neural networks to fit the function that optimally extracts information from data about parameters of interest.
The neural network takes some data ${\bf d}$ and maps it to a compressed summary $\mathscr{f}:{\bf d}\to{\bf x}$ where ${\bf x}$ can have the same size as the dimensionality of the parameter space, rather than the data space, potentially without losing any information. To do so we maximise the Fisher information of the summary statistics provided by the neural network, and in doing so, find a functional form of the optimal compression.
To train the neural network, a batch of simulations ${\bf d}_{\sf sim}^{\sf fid}$ is created at a fiducial parameter value $\boldsymbol{\theta}^{\rm fid}$ for training (and another for validation). These simulations are compressed by the neural network to obtain some statistic ${\bf x}_{\sf sim}^{\sf fid}$, i.e. the output of the neural network. We can use these to calculate the covariance ${\bf C_\mathscr{f}}$ of the compressed summaries. The sensitivity to model parameters uses the derivative of the simulation. This can be provided analytically or numerically using ${\bf d}_{\sf sim}^{\sf fid+}$ created above the fiducial parameter value $\boldsymbol{\theta}^{\sf fid+}$ and ${\bf d}_{\sf sim}^{\sf fid-}$ created below the fiducial parameter value $\boldsymbol{\theta}^{\sf fid-}$. These simulations are compressed using the network and used to find the derivative of the mean of the summaries
$$\frac{\partial\boldsymbol{\mu}_\mathscr{f}}{\partial\theta_\alpha}=\frac{1}{n_\textrm{derivs}}\sum_{i=1}^{n_\textrm{derivs}}\frac{{\bf x}_{\sf sim}^{\sf fid+}-{\bf x}_{\sf sim}^{\sf fid-}}{\boldsymbol{\theta}^{\sf fid+}-\boldsymbol{\theta}^{\sf fid-}}.$$
If the derivative of the simulations with respect to the parameters can be calculated analytically (or via autograd, etc.) then that can be used directly using the chain rule since the derivative of the network outputs with respect to the network input can be calculated easily
$$\frac{\partial\mu_\mathscr{f}}{\partial\theta_\alpha} = \frac{1}{n_{\textrm{sims}}}\sum_{i=1}^{n_{\textrm{sims}}}\frac{\partial{\bf x}_i}{\partial{\bf d}_j}\frac{\partial{\bf d}_j}{\partial\theta_\alpha}\delta_{ij}.$$
We then use ${\bf C}_\mathscr{f}$ and $\boldsymbol{\mu}_\mathscr{f},_\alpha$ to calculate the Fisher information
$${\bf F}_{\alpha\beta} = \boldsymbol{\mu}_\mathscr{f},^T_{\alpha}{\bf C}^{-1}_\mathscr{f}\boldsymbol{\mu}_\mathscr{f},_{\beta}.$$
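Putting the last few formulas together, here is a hedged numpy sketch of the Fisher construction, with made-up summaries and a made-up $\boldsymbol{\mu}_\mathscr{f},_\alpha$ (in the IMNN these quantities come from the network and the simulations):

```python
import numpy as np

# Hypothetical shapes: n_s fiducial summaries of dimension n_summaries,
# and dmu_dtheta of shape (n_summaries, n_params) from the derivative step.
rng = np.random.default_rng(1)
n_s, n_summaries, n_params = 1000, 2, 2

x_fid = rng.normal(size=(n_s, n_summaries))      # x at the fiducial point
C = np.cov(x_fid, rowvar=False)                  # C_f: covariance of summaries
dmu_dtheta = np.array([[1.0, 0.0],               # dmu_f/dtheta, invented here
                       [0.0, 2.0]])

# F_ab = (dmu/dtheta_a)^T C^-1 (dmu/dtheta_b)
F = dmu_dtheta.T @ np.linalg.inv(C) @ dmu_dtheta
print(np.linalg.det(F))
```

By construction $F$ is symmetric and, for summaries that are sensitive to all the parameters, positive definite, so $\ln|{\bf F}|$ below is well defined.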
Since any linear rescaling of the summaries is also a summary, when maximising the Fisher information we set their scale using
$$\Lambda = -\ln|{\bf F}_{\alpha\beta}|+r(\Lambda_2)\Lambda_2$$
where
$$\Lambda_2 = ||{\bf C}_\mathscr{f}-\mathbb{1}||_2 + ||{\bf C}_\mathscr{f}^{-1}-\mathbb{1}||_2$$
is a regularisation term whose strength is dictated by
$$r(\Lambda_2) = \frac{\lambda\Lambda_2}{\Lambda_2 + \exp(-\alpha\Lambda_2)}$$
with $\lambda$ as a strength and $\alpha$ as a rate parameter which can be determined from a closeness condition on the Frobenius norm of the difference between the covariance (and inverse covariance) and the identity matrix.
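A numpy sketch of this loss (a standalone re-implementation for illustration, not the IMNN's internal code; default $\lambda$ and $\alpha$ are invented):

```python
import numpy as np

def regularised_loss(F, C, lam=10.0, alpha=0.1):
    """Sketch of the loss above: -ln|F| plus a strength-modulated penalty
    pushing the summary covariance (and its inverse) towards the identity."""
    Cinv = np.linalg.inv(C)
    I = np.eye(C.shape[0])
    # Lambda_2 = ||C - 1||_2 + ||C^-1 - 1||_2 (Frobenius norms)
    lam2 = np.linalg.norm(C - I) + np.linalg.norm(Cinv - I)
    # r(Lambda_2) = lam * Lambda_2 / (Lambda_2 + exp(-alpha * Lambda_2))
    r = lam * lam2 / (lam2 + np.exp(-alpha * lam2))
    return -np.log(np.linalg.det(F)) + r * lam2

F = np.eye(2) * 5.0
print(regularised_loss(F, C=np.eye(2)))   # C = identity: penalty vanishes
```

When the summary covariance equals the identity, $\Lambda_2 = 0$ and $r(0) = 0$, so the loss reduces to $-\ln|{\bf F}|$ and training purely maximises the Fisher information.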
### Generate data
For the mean and variance problem described above we can generate the training and validation data using the premade module `GenerateGaussianNoise`. Our fiducial set will be created with model parameters $\mu=0$ and $\Sigma=1$. In this example we use 1000 simulations for the fiducial training set (with $\mu^\textrm{fid}=0$ and $\Sigma^\textrm{fid}=1$), 1000 simulations for the fiducial validation set (also with $\mu^\textrm{fid}=0$ and $\Sigma^\textrm{fid}=1$), and we will use numerical derivatives with 1000 simulations for each parameter in each direction (1000 simulations at $\mu^-=-0.1$ and $\Sigma^\textrm{fid}=1$, 1000 simulations at $\mu^+=0.1$ and $\Sigma^\textrm{fid}=1$, 1000 simulations at $\mu^\textrm{fid}=0$ and $\Sigma^-=0.9$, and 1000 simulations at $\mu^\textrm{fid}=0$ and $\Sigma^{+}=1.1$). Obviously, we'll need a set of derivative simulations for the validation set too. There will be some sensitivity in fitting the network to how the numerical derivatives are performed (how far apart the $\delta\theta$ are). Note that for the numerical derivatives it is best to use seed-matched simulations, i.e. if there are 1000 simulations for each derivative, then the first simulation for each derivative of those 1000 would all have the same seed, only varying by parameter value. The second simulation for each derivative would all have the same seed, differing from the first, and so on. This increases stability and can massively reduce the number of simulations needed (by orders of magnitude in principle).
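The benefit of seed matching is easy to see in a small numpy sketch (the `simulate` helper is invented here): writing $d = \mu + \sqrt{\Sigma}\,z$, reusing the same $z$ at $\mu^-$ and $\mu^+$ makes the noise cancel exactly in the finite difference, while unmatched seeds leave it in.

```python
import numpy as np

def simulate(mu, sigma2, seed, n_d=10):
    """Hypothetical simulator: d ~ N(mu, sigma2), with the noise z fixed by seed."""
    z = np.random.default_rng(seed).normal(size=n_d)
    return mu + np.sqrt(sigma2) * z

delta_mu = 0.2                       # mu+ - mu- = 0.1 - (-0.1)
seed = 42
d_minus = simulate(-0.1, 1.0, seed)
d_plus = simulate(+0.1, 1.0, seed)   # same seed: the noise z cancels exactly
dd_dmu = (d_plus - d_minus) / delta_mu
print(dd_dmu)                        # -> all ones: exact derivative w.r.t. mu

d_plus_unmatched = simulate(+0.1, 1.0, seed + 1)
print(np.std((d_plus_unmatched - d_minus) / delta_mu))  # noisy without matching
```

With matched seeds the derivative estimate is exact here; without them, the estimate is dominated by the difference of two independent noise draws divided by a small $\delta\theta$, which is why seed matching can reduce the required number of simulations so dramatically.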
In this example, we will imagine that the data is too big to fit in memory and so we will use TFRecords which are a highly efficient serialised format (efficient for reading, but large in storage). This method is preferred for speed over the function loading method `IMNN - function loading.ipynb`. Note that our example here will fit in memory and so the data should be passed as an array for efficiency (see `IMNN - data in memory.ipynb`). First we will generate the data. A description of how to create the tfrecords for ingestion by the network can be found in `TFRecords.ipynb`. We can generate the data for this problem using the `GenerateGaussianNoise` module which places the TFRecords in `data/tfrecords`.
```
GN = GenerateGaussianNoise()
GN.save("tfrecords")
```
We just need a list of TFRecords to pass to the IMNN which will build the dataset.
```
fiducial = glob.glob("data/tfrecords/fiducial*")
validation_fiducial = glob.glob("data/tfrecords/validation_fiducial*")
derivative = glob.glob("data/tfrecords/derivative*")
validation_derivative = glob.glob("data/tfrecords/validation_derivative*")
```
### Setup network
The IMNN is built around a keras(-like) model and optimiser for simplicity, which takes in the data and compresses (sensibly) down to as few as the number of parameters in the model. Here we will use a fully connected network with two layers and 128 neurons in each and using `leaky relu` for activation. We will also use the `Adam` optimiser.
```
model = tf.keras.Sequential(
[tf.keras.Input(shape=GN.input_shape),
tf.keras.layers.Dense(128),
tf.keras.layers.LeakyReLU(0.01),
tf.keras.layers.Dense(128),
tf.keras.layers.LeakyReLU(0.01),
tf.keras.layers.Dense(GN.n_summaries),
])
opt = tf.keras.optimizers.Adam()
```
### Initialise the IMNN
The IMNN needs a bunch of details about the number of simulations used for training, the number of parameters, the number of summaries, the neural network and the optimiser, the fiducial parameters and numerical derivative parameter differences as well as the data to fit the summarising function. If all the data fits into memory at once then we can set `at_once=GN.n_s` which will process all the data at once. We can save the neural network by setting `save=True` and passing a directory and filename to save to. Here we will save the network in a directory called `model` (and called `model`).
```
imnn = imnn_tf.IMNN(n_s=GN.n_s, n_d=GN.n_d, n_params=GN.n_params, n_summaries=GN.n_summaries,
model=model, optimiser=opt, θ_fid=GN.θ_fid, δθ=GN.δθ, input_shape=GN.input_shape,
fiducial=fiducial, derivative=derivative, validation_fiducial=validation_fiducial,
validation_derivative=validation_derivative, at_once=GN.n_s, check_shape=True,
verbose=True, directory="model", filename="model", save=True)
```
We can train the IMNN by running `fit`. This can be done for a fixed number of iterations, `n_iterations`, or run with early stopping (which is preferred, to make sure that the Fisher information has saturated). For stopping, we can set a `patience` and `min_iterations` to prevent any initial settling of the fitting procedure from triggering it. Checkpointing can be done every `x` epochs by setting `checkpoint=x`, and the weights can be saved to a file `filename` using `weight_file=filename`.
```
imnn.fit(patience=10, min_iterations=1000)
```
A history dictionary (`imnn.history`) collects diagnostics during training. It contains
- `det_F` : determinant of the Fisher information
- `det_C` : determinant of the covariance of the summaries
- `det_Cinv` : determinant of the inverse covariance of the summaries
- `det_dμ_dθ` : derivative of the mean of the summaries with respect to the parameters
- `reg` : value of the regulariser
- `r` : value of the regulariser strength
and the same for the validation set
- `val_det_F` : determinant of the Fisher information for the validation set
- `val_det_C` : determinant of the covariance of the summaries from the validation set
- `val_det_Cinv` : determinant of the inverse covariance of the summaries from the validation set
- `val_det_dμ_dθ` : derivative of the mean of the summaries from the validation set with respect to the parameters
which can be plotted using
```
imnn.plot(known_det_fisher=50)
```
Note that when initialising the IMNN, `imnn = IMNN.IMNN(...)`, once training has completed we can also load a model by setting `load=True` and passing a `filename` and `directory` which contains the model. In this case, `model=None` and `optimiser=None` can be set. Note that optimisers with a state should be loaded externally; loading keras models might be easier to do externally too. Once the network is fit it is not actually necessary to reload the IMNN module, though: the model can be used directly for likelihood-free inference by defining an estimator like `imnn.get_MLE(data)`, which we can recover from a saved IMNN instance using
```python
import numpy as np
import tensorflow as tf

# note: `model` is the trained network, loaded separately (e.g. as a saved Keras model)
estimator_parameters = np.load("model/model/estimator.npz")
Finv = estimator_parameters["Finv"]
θ_fid = estimator_parameters["θ_fid"]
dμ_dθ = estimator_parameters["dμ_dθ"]
Cinv = estimator_parameters["Cinv"]
μ = estimator_parameters["μ"]

@tf.function
def estimator(data):
    return tf.add(
        θ_fid,
        tf.einsum(
            "ij,jk,kl,ml->mi",
            Finv,
            dμ_dθ,
            Cinv,
            model(data) - μ))
```
or a numpy-like alternative.
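For example, a minimal NumPy sketch of the same estimator might look like this, with small identity arrays standing in for the saved `estimator.npz` contents and for the network summaries (all values here are illustrative assumptions, not outputs from the text):

```python
import numpy as np

n_params, n_summaries, n_data = 2, 2, 3

# Stand-ins for the quantities saved in estimator.npz (illustrative only)
Finv = np.eye(n_params)           # inverse Fisher information
theta_fid = np.zeros(n_params)    # fiducial parameter values
dmu_dtheta = np.eye(n_summaries)  # derivative of mean summaries w.r.t. parameters
Cinv = np.eye(n_summaries)        # inverse covariance of the summaries
mu = np.zeros(n_summaries)        # mean of the summaries at the fiducial point

def estimator(summaries):
    # theta_fid + Finv . dmu_dtheta . Cinv . (s - mu), batched over rows
    return theta_fid + np.einsum(
        "ij,jk,kl,ml->mi", Finv, dmu_dtheta, Cinv, summaries - mu)

summaries = np.ones((n_data, n_summaries))  # stand-in for model(data)
print(estimator(summaries).shape)
```

The `tf.einsum` call in the TensorFlow version performs the same batched contraction; this NumPy form is convenient for quick checks away from the graph.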
## Inferring the mean and variance
Let's observe some data generated from a Gaussian distribution with mean $\mu=0$ and variance $\Sigma=1$. We're going to generate the data from seed 37 (for no particular reason), and once the data are generated we're going to forget that we ever knew these values. Simulations from this model can be made using
```
generator = GenerateGaussianNoise()
θ_target = np.array([0., 1.])[np.newaxis, :]
target_data = generator.simulator(
    parameters=θ_target,
    seed=37,
    simulator_args={"input_shape": generator.input_shape})
generator.plot_data(target_data, label="Observed data")
```
For the inference we start by defining our prior as a uniform distribution; for simplicity, this can be a TensorFlow Probability distribution. We choose the prior to be uniform from -10 to 10 for the mean and from 0.1 to 10 for the variance (the lower bound keeps the variance away from zero).
$$p(\mu,\Sigma)=\textrm{Uniform}\left[\textrm{lower}=(-10, 0.1),\textrm{upper}=(10,10)\right]$$
```
prior = tfd.Blockwise([tfd.Uniform(-10., 10.),
                       tfd.Uniform(0.1, 10.)])
```
In the `AnalyticLikelihood` module we have routines for calculating the exact likelihood for this problem.
```
AL = AnalyticLikelihood(
    parameters=2,
    data=target_data,
    prior=prior,
    generator=generator,
    labels=[r"$\mu$", r"$\Sigma$"])
```
### Gaussian approximation to the posterior
We can compare an estimate of the mean and variance of the target data from the network with the exact values; note that it is not necessary for these to coincide particularly well.
```
print("Mean and variance of observed data = {}\nNetwork estimate of observed data = {}".format(
    AL.get_estimate(target_data), imnn.get_estimate(target_data.astype(np.float32))))
```
Now we can use the IMNN to try to infer the parameters of the model given the observations, i.e. the mean and variance of the target data. A more in-depth overview of the LFI module is provided in `examples`.
```
GA = GaussianApproximation(
    target_data=target_data.astype(np.float32),
    prior=prior,
    Fisher=imnn.F,
    get_estimate=imnn.get_estimate,
    labels=[r"$\mu$", r"$\Sigma$"])
```
The inverse Fisher information describes the Cramér-Rao bound, i.e. the minimum variance of a Gaussian approximation of the likelihood about the fiducial parameter values. We can therefore use the Fisher information to make an approximation to the posterior. The inverse Fisher can be viewed using
```
GA.plot_Fisher(figsize=(5, 5));
```
And the Gaussian approximation to the posterior is given by
```
ax = AL.plot(
    gridsize=(1000, 1000),
    figsize=(7, 7),
    color="C0",
    label="Analytic posterior");
GA.plot(
    gridsize=(1000, 1000),
    ax=ax,
    color="C2",
    label="Gaussian approximation");
```
## Approximate Bayesian computation
We can also do approximate Bayesian computation using the IMNN outputs as sufficient statistics describing the data. The ABC draws parameter values from the prior and makes simulations at these points. These simulations are then summarised, i.e. we find the mean and variance of the simulations in this case, and then the distance between these estimates and the estimate of the target data can be calculated. Estimates within some small ϵ-ball around the target estimate are approximately samples from the posterior. Note that the larger the value of ϵ, the worse the approximation to the posterior.
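To make the rejection scheme concrete, here is a library-free toy sketch for this Gaussian model (illustrative only; this is not the `ApproximateBayesianComputation` implementation, and the seed, ϵ and draw counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observed" data from the hidden truth mu=0, Sigma=1
target = rng.normal(0.0, 1.0, size=100)
target_estimate = np.array([target.mean(), target.var()])

accepted = []
for _ in range(20000):
    # Draw parameters from the uniform prior used above
    mean = rng.uniform(-10.0, 10.0)
    var = rng.uniform(0.1, 10.0)
    # Simulate and summarise (here the summary is just the mean and variance)
    sim = rng.normal(mean, np.sqrt(var), size=100)
    estimate = np.array([sim.mean(), sim.var()])
    # Accept draws whose summaries fall inside the epsilon-ball
    if np.linalg.norm(estimate - target_estimate) < 0.5:
        accepted.append((mean, var))

samples = np.array(accepted)
print(samples.mean(axis=0))  # rough posterior mean of (mu, Sigma)
```

Shrinking ϵ tightens the approximation to the posterior at the cost of a lower acceptance rate, which is exactly the trade-off described above.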
```
ABC = ApproximateBayesianComputation(
    target_data=target_data.astype(np.float32),
    prior=prior,
    Fisher=imnn.F,
    get_estimate=imnn.get_estimate,
    simulator=generator.simulator,
    labels=[r"$\mu$", r"$\Sigma$"])
ax = AL.plot(
    gridsize=(1000, 1000),
    figsize=(7, 7),
    color="C0",
    label="Analytic posterior");
GA.plot(
    gridsize=(1000, 1000),
    ax=ax,
    color="C2",
    label="Gaussian approximation");
ABC.plot(
    ϵ=1.,
    accepted=2000,
    draws=10000,
    ax=ax,
    color="C1",
    label="ABC posterior at ϵ={}".format(1),
    bins=50);
ABC.scatter_plot(axes="parameter_estimate", rejected=0.01);
ABC.scatter_plot(axes="estimate_estimate", rejected=0.01);
ABC.scatter_plot(axes="parameter_parameter", rejected=0.01);
```
## Population Monte Carlo
Whilst we can obtain approximate posteriors using ABC, the rejection rate is very high because we always sample from the prior. Population Monte Carlo (PMC) uses statistics of the population of samples to propose new parameter values, so each new simulation is more likely to be accepted. This removes the need to define an ϵ parameter for the acceptance distance. Instead we start with a population from the prior and iteratively move samples inwards. Once it becomes difficult to move the population any further, i.e. the number of attempts needed to accept a parameter becomes very large, the distribution is taken to be a stable approximation to the posterior.
The whole module works very similarly to `ABC` with a few changes in arguments.
```
PMC = PopulationMonteCarlo(
    target_data=target_data.astype(np.float32),
    prior=prior,
    Fisher=imnn.F,
    get_estimate=imnn.get_estimate,
    simulator=generator.simulator,
    labels=[r"$\mu$", r"$\Sigma$"])
ax = AL.plot(
    gridsize=(1000, 1000),
    figsize=(7, 7),
    color="C0",
    label="Analytic posterior");
GA.plot(
    gridsize=(1000, 1000),
    ax=ax,
    color="C2",
    label="Gaussian approximation");
ABC.plot(
    ax=ax,
    color="C1",
    label="ABC posterior at ϵ={}".format(1),
    bins=50)
PMC.plot(
    draws=2000,
    initial_draws=5000,
    criterion=0.1,
    percentile=75,
    ax=ax,
    color="C3",
    label="PMC posterior",
    bins=50);
PMC.scatter_plot(axes="parameter_estimate")
PMC.scatter_plot(axes="estimate_estimate")
PMC.scatter_plot(axes="parameter_parameter")
```
# Brand Names for Image Retrieval From Instagram
**Goal:** process data to get all brand names to search for images on Instagram
### 1. Import dependencies
Install non-standard libraries: requests, BeautifulSoup
```
import os
import pandas as pd
import numpy as np
```
### 2. Get Brand Names from Data
```
# to specify
directory= r'C:\Users\Anonym\Documents\GitHub\DLfM_BrandManagement\other_project\data\brand_hashtag_and_account'
os.chdir(directory)
```
#### 2.1 Load all dataframes
There are dataframes for the apparel and beverage brand categories. In addition, there are combined dataframes that cover both brand categories and add information about each brand's full name and Instagram name.
First, we load the apparel brand dataframes.
```
#convert to dataframe
data1 = pd.read_csv('apparel_brands.txt', header=None)
data1.columns = ["apparel_brands"]
data1.sort_values(by='apparel_brands')
print(data1.head())
print(data1.info())
data2 = pd.read_csv('apparel_full_brand_names.txt', header=None)
data2.columns = ["apparel_full_brand_names"]
data2.sort_values(by='apparel_full_brand_names')
print(data2.head())
print(data2.info())
data3 = pd.read_csv('official_apparel_brands.txt', header=None)
data3.columns = ["official_apparel_brands"]
data3.sort_values(by='official_apparel_brands')
print(data3.head())
print(data3.info())
```
Next, load all beverage brands.
```
data4 = pd.read_csv('beverage_brands.txt', header=None)
data4.columns = ["beverage_brands"]
print(data4.head())
print(data4.info())
data5 = pd.read_csv('beverage_full_brand_names.txt', header=None)
data5.columns = ["beverage_full_brand_names"]
print(data5.head())
print(data5.info())
data6 = pd.read_csv('official_beverage_brands.txt', header=None)
data6.columns = ["official_beverage_brands"]
print(data6.head())
print(data6.info())
```
Get the combined apparel + beverage dataframes.
These map the account name of a firm's user profile to a full brand name, and an Instagram hashtag to a full brand name (and vice versa).
```
data7 = pd.read_csv('official_account_to_brand_name.txt')
data7.sort_values(by='firm_account')
data7.columns = ['firm_account', 'brand_full_name']
print(data7)
print(data7.info())
data8 = pd.read_csv('instagram_hashtag_to_brand_name.txt')
data8.columns = ['instagram_hashtag', 'brand_full_name']
print(data8)
print(data8.info())
```
#### 2.2 Build global dataframe from single dataframes
From the individual dataframes about apparel and beverage, a global dataframe is built which gives an overview over all brand names and categorizes them into a brand category.
```
data78 = pd.merge(data7, data8, how='outer', left_on='brand_full_name', right_on='brand_full_name')
data78 = data78[['instagram_hashtag', 'firm_account','brand_full_name' ]] #reorder columns
data78.info()
data13 = pd.concat([data1, data3], axis=1)
data13.head()
# data cleaning
data2['apparel_full_brand_names'].replace(" Victoria's Secret", "Victoria's Secret", inplace=True) # normalise Victoria's Secret: strip leading space
data7['brand_full_name'].replace(" Victoria`s Secret", "Victoria's Secret", inplace=True) # different apostrophe and leading space
data2['apparel_full_brand_names'].replace(" Levi's", "Levi's", inplace=True) # normalise Levi's: strip leading space
data7['brand_full_name'].replace(" Levi`s", "Levi's", inplace=True) # different apostrophe and leading space
data27 = pd.merge(data2, data7, how='outer', left_on='apparel_full_brand_names', right_on='brand_full_name')
data27.head()
data1327 = pd.merge(data13, data27, how='outer', left_on='official_apparel_brands', right_on='firm_account')
data1327 = data1327.where(pd.notnull(data1327), None)
data1327
import numpy as np
def f(row):
    if row['apparel_full_brand_names'] is not None:
        val = 'apparel'
    else:
        val = 'beverage'
    return val
data1327['brand_category'] = data1327.apply(f, axis=1)
data1327.head()
data46 = pd.merge(data4, data6, how='outer', left_on='beverage_brands', right_on='official_beverage_brands')
data57 = pd.merge(data5, data7, how='outer', left_on='beverage_full_brand_names', right_on='brand_full_name')
data57
data4657 = pd.merge(data46, data57, how='outer', left_on='official_beverage_brands', right_on='firm_account')
data4657 = data4657.where(pd.notnull(data4657), None)
data4657
import numpy as np
def f(row):
    if row['beverage_full_brand_names'] is None:
        val = 'apparel'
    else:
        val = 'beverage'
    return val
data4657['brand_category'] = data4657.apply(f, axis=1)
data4657.head()
bigdf = pd.merge(data1327, data4657, how='outer')
bigdf = pd.merge(bigdf, data78, how='outer')
bigdf
```
### 3. Save Dataframes to csv
Specify the directory in which to store all dataframe data for further processing.
```
# to specify
directory= r"C:\Users\Anonym\Documents\GitHub\DLfM_BrandManagement\data"
folder = 'instagram_urls'
os.chdir(directory)
path = os.path.join(directory, folder)
try:
    os.mkdir(folder)
except FileExistsError:
    pass
os.chdir(path)
```
Put all firm account names (of all brand categories) into a list, which can be iterated over to retrieve images from these firm accounts on Instagram. It is used to specify user-profile Instagram pages.
```
firm_usernames = data7.firm_account.values.tolist()
def write_list_to_file(guest_list, filename):
    """Write the list to a csv file."""
    with open(filename, "w") as outfile:
        for entries in guest_list:
            outfile.write(entries)
            outfile.write("\n")
write_list_to_file(firm_usernames, 'firm_usernames.csv')
```
Put all hashtag names (of all brand categories) into a list, which can be iterated over to retrieve images from these hashtag results on Instagram. It is used to specify hashtag Instagram pages.
```
instagram_hashtags = data8.instagram_hashtag.values.tolist()
def write_list_to_file(guest_list, filename):
    """Write the list to a csv file."""
    with open(filename, "w") as outfile:
        for entries in guest_list:
            outfile.write(entries)
            outfile.write("\n")
write_list_to_file(instagram_hashtags, 'instagram_hashtags.csv')
```
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).
Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
```
NAME = ""
COLLABORATORS = ""
```
---
<!--NOTEBOOK_HEADER-->
*This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);
content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).*
<!--NAVIGATION-->
< [Visualization with the `PyMOLMover`](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.06-Visualization-and-PyMOL-Mover.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Visualization and `pyrosetta.distributed.viewer`](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.08-Visualization-and-pyrosetta.distributed.viewer.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.07-RosettaScripts-in-PyRosetta.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
# RosettaScripts in PyRosetta
Keywords: RosettaScripts, script, xml, XMLObjects
## Overview
RosettaScripts is another way to script custom modules in PyRosetta. It is much simpler than PyRosetta, but it can be extremely powerful, and it has great documentation. There are also many publications that give `RosettaScript` examples, or whole protocols as a `RosettaScript` instead of a mover or application. In addition, some early Rosetta code was written with `RosettaScripts` in mind, and some important variables may still only be fully accessible via `RosettaScripts`.
Recent versions of Rosetta have enabled full RosettaScript protocols to be run in PyRosetta. A new class called `XMLObjects` has also enabled the setup of specific Rosetta class types in PyRosetta instead of constructing them from code. This tutorial will introduce how to use this integration to get the most out of Rosetta. Note that some tutorials, such as the parametric protein design notebook, use RosettaScripts almost exclusively, as it is simpler than setting everything up manually in code.
## RosettaScripts
A RosettaScript is made up of different sections where different types of Rosetta classes are constructed. You will see many of these types throughout the notebooks to come. Briefly:
`ScoreFunctions`: A scorefunction evaluates the energy of a pose through physical and statistical energy terms
`ResidueSelectors`: These select a list of residues in a pose according to some criteria
`Movers`: These do things to a pose. They all have an `apply()` method that you will see shortly.
`TaskOperations`: These control side-chain packing and design
`SimpleMetrics`: These return some metric value of a pose. This value can be a real number, string, or a composite of values.
### Skeleton RosettaScript Format
```xml
<ROSETTASCRIPTS>
    <SCOREFXNS>
    </SCOREFXNS>
    <RESIDUE_SELECTORS>
    </RESIDUE_SELECTORS>
    <TASKOPERATIONS>
    </TASKOPERATIONS>
    <SIMPLE_METRICS>
    </SIMPLE_METRICS>
    <FILTERS>
    </FILTERS>
    <MOVERS>
    </MOVERS>
    <PROTOCOLS>
    </PROTOCOLS>
    <OUTPUT />
</ROSETTASCRIPTS>
```
Anything outside of the \< \> notation is ignored and can be used to comment the xml file
### RosettaScript Example
```xml
<ROSETTASCRIPTS>
    <SCOREFXNS>
    </SCOREFXNS>
    <RESIDUE_SELECTORS>
        <CDR name="L1" cdrs="L1"/>
    </RESIDUE_SELECTORS>
    <MOVE_MAP_FACTORIES>
        <MoveMapFactory name="movemap_L1" bb="0" chi="0">
            <Backbone residue_selector="L1" />
            <Chi residue_selector="L1" />
        </MoveMapFactory>
    </MOVE_MAP_FACTORIES>
    <SIMPLE_METRICS>
        <TimingProfileMetric name="timing" />
        <SelectedResiduesMetric name="rosetta_sele" residue_selector="L1" rosetta_numbering="1"/>
        <SelectedResiduesPyMOLMetric name="pymol_selection" residue_selector="L1" />
        <SequenceMetric name="sequence" residue_selector="L1" />
        <SecondaryStructureMetric name="ss" residue_selector="L1" />
    </SIMPLE_METRICS>
    <MOVERS>
        <MinMover name="min_mover" movemap_factory="movemap_L1" tolerance=".1" />
        <RunSimpleMetrics name="run_metrics1" metrics="pymol_selection,sequence,ss,rosetta_sele" prefix="m1_" />
        <RunSimpleMetrics name="run_metrics2" metrics="timing,ss" prefix="m2_" />
    </MOVERS>
    <PROTOCOLS>
        <Add mover_name="run_metrics1"/>
        <Add mover_name="min_mover" />
        <Add mover_name="run_metrics2" />
    </PROTOCOLS>
</ROSETTASCRIPTS>
```
Rosetta will carry out the order of operations specified in PROTOCOLS. An important point is that SimpleMetrics and Filters never change the sequence or conformation of the structure.
The movers do change the pose, and the output file will be the result of sequentially applying the movers in the protocols section. The standard scores of the output will be carried over from any protocol doing scoring, unless the OUTPUT tag is specified, in which case the corresponding score function from the SCOREFXNS block will be used.
## RosettaScripts Documentation
It is recommended to read up on RosettaScripts here. Note that each type of Rosetta class has a list and documentation of ALL accessible components. This is extremely useful to get an idea of what Rosetta can do and how to use it in PyRosetta.
https://www.rosettacommons.org/docs/latest/scripting_documentation/RosettaScripts/RosettaScripts
```
# Notebook setup
import sys
if 'google.colab' in sys.modules:
    !pip install pyrosettacolabsetup
    import pyrosettacolabsetup
    pyrosettacolabsetup.setup()
    print("Notebook is set for PyRosetta use in Colab. Have fun!")
```
## Running Whole Protocols via RosettaScriptsParser
Here we will use the parser to generate a ParsedProtocol (mover). This mover can then be run with the apply method on a pose of interest.
Let's run the protocol above. We will be running this on the file itself.
```
from pyrosetta import *
from rosetta.protocols.rosetta_scripts import *
init('-no_fconfig @inputs/rabd/common')
pose = pose_from_pdb("inputs/rabd/my_ab.pdb")
original_pose = pose.clone()
if not os.getenv("DEBUG"):
    parser = RosettaScriptsParser()
    protocol = parser.generate_mover_and_apply_to_pose(pose, "inputs/min_L1.xml")
    protocol.apply(pose)
```
## Running via XMLObjects and strings
Next, we will use XMLObjects to create a protocol from a string. Note that in code, `XMLObjects` uses special functionality of the `RosettaScriptsParser`. Also note that `XMLObjects` has a `create_from_file` method that will take a path to an XML file.
```
pose = original_pose.clone()
min_L1 = """
<ROSETTASCRIPTS>
    <SCOREFXNS>
    </SCOREFXNS>
    <RESIDUE_SELECTORS>
        <CDR name="L1" cdrs="L1"/>
    </RESIDUE_SELECTORS>
    <MOVE_MAP_FACTORIES>
        <MoveMapFactory name="movemap_L1" bb="0" chi="0">
            <Backbone residue_selector="L1" />
            <Chi residue_selector="L1" />
        </MoveMapFactory>
    </MOVE_MAP_FACTORIES>
    <SIMPLE_METRICS>
        <TimingProfileMetric name="timing" />
        <SelectedResiduesMetric name="rosetta_sele" residue_selector="L1" rosetta_numbering="1"/>
        <SelectedResiduesPyMOLMetric name="pymol_selection" residue_selector="L1" />
        <SequenceMetric name="sequence" residue_selector="L1" />
        <SecondaryStructureMetric name="ss" residue_selector="L1" />
    </SIMPLE_METRICS>
    <MOVERS>
        <MinMover name="min_mover" movemap_factory="movemap_L1" tolerance=".1" />
        <RunSimpleMetrics name="run_metrics1" metrics="pymol_selection,sequence,ss,rosetta_sele" prefix="m1_" />
        <RunSimpleMetrics name="run_metrics2" metrics="timing,ss" prefix="m2_" />
    </MOVERS>
    <PROTOCOLS>
        <Add mover_name="run_metrics1"/>
        <Add mover_name="min_mover" />
        <Add mover_name="run_metrics2" />
    </PROTOCOLS>
</ROSETTASCRIPTS>
"""
if not os.getenv("DEBUG"):
    xml = XmlObjects.create_from_string(min_L1)
    protocol = xml.get_mover("ParsedProtocol")
    protocol.apply(pose)
```
## Constructing Rosetta objects using XMLObjects
### Pulling from whole script
Here we will use our previous XMLObject that we setup using our script to pull a specific component from it. Note that while this is very useful for running pre-defined Rosetta objects, we will not have any tab completion for it as it will be a generic type - which means we will be unable to further modify it.
Lets grab the residue selector and then see which residues are L1.
```
if not os.getenv("DEBUG"):
    L1_sele = xml.get_residue_selector("L1")
    L1_res = L1_sele.apply(pose)
    for i in range(1, len(L1_res)+1):
        if L1_res[i]:
            print("L1 Residue: ", pose.pdb_info().pose2pdb(i), ":", i)
```
### Constructing from single section
Here, instead of parsing a whole script, we'll simply create the same L1 selector from the string itself. This can be done for nearly every Rosetta class type in the script. The 'static' part of the name means that we do not have to construct the XMLObject first; we can simply call its function.
```
if not os.getenv("DEBUG"):
    L1_sele = XmlObjects.static_get_residue_selector('<CDR name="L1" cdrs="L1"/>')
    L1_res = L1_sele.apply(pose)
    for i in range(1, len(L1_res)+1):
        if L1_res[i]:
            print("L1 Residue: ", pose.pdb_info().pose2pdb(i), ":", i)
```
Do these residues match what we had before? Why do both of these seem a bit slower? The actual residue selection is extremely quick, but validating the XML against a schema (which checks to make sure the string that you passed is valid and works) takes time.
And that's it! That should be everything you need to know about RosettaScripts in PyRosetta. Enjoy!
For XMLObjects, each type has a corresponding function (with and without static), these are listed below, but tab completion will help you here. As you have seen above, the static functions are called on the class type, `XmlObjects`, while the non-static objects are called on an instance of the class after parsing a script, in our example, it was called `xml`.
```
.get_score_function / .static_get_score_function
.get_residue_selector / .static_get_residue_selector
.get_simple_metric / .static_get_simple_metric
.get_filter / .static_get_filter
.get_mover / .static_get_mover
.get_task_operation / .static_get_task_operation
```
<a href="https://colab.research.google.com/github/OSGeoLabBp/tutorials/blob/master/english/python/python_in_a_nutshell.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#Python in a Nutshell
##Introduction
Python is a widespread and popular script language which supports object-oriented programming and has many extension modules, for example GDAL, OGR, numpy, OpenCV, open3d, etc. While the Python run-time environment is available on different operating systems (e.g. Linux, OS X, Android, Windows), there are many end-user programs which use Python as a script language to extend their functionality (e.g. GIMP, QGIS, Apache, PostgreSQL, GRASS, ...). For those programs, the Python interpreter compiles the source code into so-called byte code, so the programs run quickly, faster than in some other script languages.
Python 2 reached the end of its lifetime, with its development closed in 2020. Version 3 development started in 2008, and the two versions are incompatible. Nowadays everybody uses the new version for new projects, but there are many projects that have not been upgraded to the new version yet. The examples in this document were tested in Python 3, but they probably also work in Python 2.7.
Depending on the operating system, there are different installation options. For GIS users on Windows, OSGeo4W is the optimal choice, as it installs Python with many extension modules alongside the GIS programs. Python is available in the OSGeo4W Shell window!
***Did you know?***
*The language’s name isn’t about snakes, but about the popular British comedy troupe Monty Python.*
*If you are interested you can find more interesting facts about python at the following [link](https://data-flair.training/blogs/facts-about-python-programming/).*
##Basics
In [Jupyter notebook](https://jupyter.org), it is possible to execute Python commands in code blocks. The background of a code block is grey, with a triangle at the top left corner. After clicking on the triangle to execute the code block, the results are displayed below it. Comments start with '#' (hash mark).
Simple basic mathematical operations:
```
5 * 4
3 ** 4 # exponentiation, 3 to the power of 4
(2 + 8) // 3 # integer division
(2 + 8) / 3
56.12 // 12.34 # result is float!
```
The results of the expressions can be stored in variables:
```
a = 5 * 4
l = True # Bool value
c = None # special undefined value
s = "Hello world" # string
print(a, l, c, s)
```
There is no automatic type conversion for variables and constants in Python (most script languages apply automatic type conversion, e.g. JavaScript, awk, Tcl). Therefore, the variable type is not explicitly given (there is no declaration); it is set at the first assignment. We can query the type of a variable using the **type** function.
```
b = '16' # a string variable
a + b # 'a' got its value in the previous code block
```
In the code block above, we tried to add an integer and a string value, an operation that is not supported.
```
type(a)
type(b)
type(b) is str
a + int(b) # explicit type conversion helps
a * b # repeat string 'b' 'a' times
s[0] # first character of a string, the value of s is 'Hello world'
s[1:3] # second and third characters
s[:5] # first five characters
s[6:] # from the sixth character to the end
s[-1] # last character
s[-5:] # last five characters
len(s) # length of string
s[:5] + ' Dolly' # concatenation of strings
```
Mathematical (trigonometric) functions are in an external module, so it is necessary to import the **math** module.
```
import math
math.sin(1)
print(f'{math.sin(1):6.4f}') # formatted output
```
Modules can be imported in two ways. **import** *module_name* imports everything from the module, and a member of the module is referred to as *module_name.member_name*. The other method is used when only some specified members are imported, using the **from** *module_name* **import** *member_name* form. In this case, it is not necessary to put the module name in front of the member name, but be careful to avoid name collisions. In the second form, you can specify a comma-separated list of member names.
```
from math import cos
cos(1)
```
The modules usually have documentation where the available functions can be seen. If you would like to check these, use **help(*module_name*)**!
```
help(math)
```
More information about modules also can be found on the internet. For example, the **math** module's functions can be seen on the following URL: https://docs.python.org/3/library/math.html. Remember, if you are not sure about something, Google is your friend!
##Python data structures
The base data types of Python can be divided into two groups, mutable and unmutable.
Mutable objects: the data can be partially changed in place.
list, set, dictionary
---
Immutable objects: the stored data cannot be changed.
bool, int, float, string, tuple
The usage of simple variable types (e.g. bool, int, float) is similar to other programming languages, so we'll discuss only the compound types (e.g. list, tuple, dictionary, set). Since Python 3, even simple data types are represented by objects.
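The practical difference can be seen with `id`, which returns an object's identity: mutating a list keeps the same object, while "changing" an int rebinds the name to a new object. A small sketch:

```python
nums = [1, 2, 3]
list_id = id(nums)
nums.append(4)              # mutable: the list is changed in place
print(id(nums) == list_id)  # True, still the same object

n = 10
int_id = id(n)
n += 1                      # immutable: the name is rebound to a new object
print(id(n) == int_id)      # False, 11 is a different object than 10
```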
###Lists
Lists are ordered and mutable Python containers. Lists can contain elements of different types, even other lists. The items are indexed starting from zero.
```
l0 = [] # create an empty list
l1 = list() # create an empty list
l2 = ['apple', 5, 4] # different data types in the same list
print(l2[0]) # access members by index
print(l2[1:]) # index ranges similar to strings
print(len(l2))
l2[0] = 3 # lists are mutable
print(l2)
l2.append(5) # extending list
print(l2)
l2[0:3] = [1] # replace a range of list with another list
print(l2)
del l2[0] # delete list item
print(l2)
l3 = [[1, 2], [6, 4]] # list of lists
print(l3[0], l3[0][1])
l4 = l3 + [3, 7] # concatenating lists
print(l4)
l4 = l3 + [[3, 7]]
print(l4)
```
You can use **help(list)** to get more information about lists.
```
help(list)
```
There are three operators related to lists:
* **+** concatenate lists
* **\*** repeat the list (similar to strings)
* **in** is value in the list
Open a new code block and try the operators above. The output of **help(list)** may also help you try some other methods of list objects (e.g. pop, sort, reverse).
###Tuples
Tuples are list-like, immutable, ordered data types.
```
t = (65, 6.35) # create a new tuple, brackets are not obligatory -> t = 65, 6.35
print(t[0]) # indexing
u = tuple() # empty tuple
v = 2, # the comma is necessary to create a tuple of one item
print(v)
```
###List comprehension
List comprehension is an effective tool to apply the same operation to all elements of a list/tuple.
```
l = [2, 6, -3, 4]
l2 = [x**2 for x in l] # square of all members of the list into a new list
print(l2)
l3 = (i**0.5 for i in l if i > 0) # generator expression: square root of the positive members
print(l3)
print(tuple(l3))
```
The map function is similar: it applies a function to each member of the list.
```
from math import sin
l4 = map(sin, l)
print(l4)
print(tuple(l4))
print([a for a in l if a % 2]) # odd numbers from the list
```
###Sets
Sets are mutable unordered data types. It is easy to create a set from a string or a list.
```
a = "abcabdseacbfds"
s = set(a) # create a set; repeated values are stored once
print(s)
t = set((1, 3, 2, 4, 3, 2, 5, 1))
print(t)
e = set() # empty set
u = set(['apple', 'plum', 'peach', 'apple'])
print(u)
```
There are different operators for sets, including difference, union, intersection and symmetric difference.
```
a = set('abcdefgh')
b = set('fghijklm')
print(a - b) # difference of sets
print(a | b) # union of sets
print(a & b) # intersection of sets
print(a ^ b) # symmetric difference
```
###Dictionaries
Dictionaries are mutable unordered data types. Each member of a dictionary has a key, which is used as an index and can be a string or a number. The values of a dictionary can be lists or dictionaries.
```
dic = {} # empty dictionary
dic['first'] = 4 # adding new key and value to the list
dic[5] = 123
print(dic)
dic['first'] = 'apple' # new value for a key
print(dic)
print('first' in dic) # is the key in dic?
d = {'b': 1, 12: 4, 'lista': [1, 5, 8]} # initialization
t = {(1,1): 2, (1,2): 4, (2,1): -1, (2,2): 6} # keys are tuples
print(t[1,1])
```
###Some restrictions for immutable objects:
```
txt = "Hello world!"
#txt[0] ='h' # a part of a string cannot be changed!
print(id(txt))
txt = 'h' + txt[1:] # here we create a new object using an existing one; we didn't change its value
print(id(txt)) # the two ids are different; the memory allocated for the old str will be freed by garbage collection
print(txt)
t = (2, 4, 6)
#t.append(8) # tuple cannot be extended
#t = t + (8) # oops, this would not work as expected: (8) is just an integer
print(id(t))
t = t + (8,) # (8) is an integer for Python; (8,) is a one-element tuple
print(t)
print(id(t))
```
##Python programs
Interactive use of Python is good for simple tasks or for trying different things. In productive use, Python codes are written into files (modules) and are run as a single command. A complex task is usually divided into smaller code blocks (function or objects), connected by the use of loops and conditional expressions. Before we start to write programs, it is important to highlight a speciality of the syntax of the Python: the hierarchy of code blocks are marked by the number of spaces at the beginning of the program lines, forcing the users to write more readable programs. Usually four spaces are used to indent code blocks. In case of complex programs, an Integrated Development Environment (IDE) is very useful. The simplest IDE is **IDLE** which is written in pure Python. There are several open source IDEs (e.g. Spyder, Eric) and propriatery ones. The minimal requirements to develop Python programs are:
* Install Python and the necessary modules on your machine (there are lots of installation guides for different operating systems)
* Install a text editor with Python syntax highlighting (for example Notepad++)
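Since the block structure is defined purely by indentation, a misaligned line changes the meaning of a program (or raises an `IndentationError`). A small sketch of how nesting works (the variable names are just for illustration):

```python
x = 7
result = []
if x > 5:
    result.append("big")        # 4 spaces: inside the if block
    if x > 6:
        result.append("bigger") # 8 spaces: one level deeper
result.append("done")           # no indentation: outside both blocks
print(result)
```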
Let's create our first program, whose objective is to calculate the sum of the first 100 integers (a loop with a counter). `"""` marks the start and the end of a multiline string, which is commonly used as a multiline comment.
```
""" My first program
the sum of the first 100 numbers
"""
s = 0
for i in range(1, 101):
    s += i
print(s)
```
The first three lines of this program are a comment (a multiline string). The colon (:) at the end of the fifth line marks the start of a new block; the following indented lines belong to this block (in our program it is a single-line block). The *range* function creates an iterator between the two parameters (first inclusive, last exclusive, which is why we write 101 to finish at 100). The *s* variable is initialized to zero, and the integer values are added up in the loop. If you copy the code above into a text file (e.g. first.py), you can run it from the command line:
`python first.py`
## Python functions
Python functions are flexible building blocks of a program. Functions have input parameters and can return a single or compound data.
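For example, a function can return several values at once as a tuple, which the caller can unpack into separate variables. A quick sketch (the `minmax` helper is our own example, not a built-in):

```python
def minmax(values):
    """ return the smallest and the largest element as a tuple """
    return min(values), max(values)

lo, hi = minmax([4, -2, 9, 0])  # tuple unpacking on the caller's side
print(lo, hi)
```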
We'll show several solutions to calculate the factorial of *n*.
n! = 1 * 2 * 3 * ... * n
In the first solution we'll use the *while* loop.
```
def f(n):
    """ factorial calculation using while loop """
    w = 1
    while n > 0:
        w *= n
        n -= 1
    return w
```
Let's try our function.
```
n = 6
print(n, '!:', f(n))
print(f'15!: {f(15)}')
```
In the second version of factorial, we'll use the *for* loop.
```
def f1(n):
    """ factorial calculation using for loop """
    w = 1
    for i in range(1, n+1): # range(1, n) would give the series from 1 to n-1!
        w *= i
    return w
```
Let's try it too.
```
n = 6
print(f'{n}!: {f1(n)}')
print(f'15!: {f1(15)}')
```
We can use the recursive formula of factorials.
n! = n * (n-1)!
```
def f2(n):
    """ recursive factorial calculation """
    if n <= 1: # stop criterion for the recursion
        return 1
    return n * f2(n-1) # the function calls itself, so it is a recursive function
```
Let's try it.
```
print(f'6!: {f2(6)}')
```
Finally, let's write a function without an explicit loop or recursion, using NumPy's `prod` function.
```
import numpy as np

def f3(n):
    """ factorial calculation using numpy.prod """
    return np.prod(range(1, n+1))
```
Let's try it.
```
f3(6)
```
### Function parameters
A function can have optional parameters with default values. The following function has two obligatory parameters (x, a0) and two optional ones (a1, a2). Optional parameters have to be at the end of the parameter list.
```
def quadratic(x, a0, a1=1, a2=0):
    """ calculate the value of a quadratic function """
    return a0 + a1 * x + a2 * x**2
# valid usages
print(quadratic(2.5, 3)) # a1=1 and a2=0
print(quadratic(4.1, 1.5, 3.3)) # a2=0
print(quadratic(-2.3, 5.1, -2, 0.56))
```
The order of the function parameters is given by the function definition. You can change the order by using keyword arguments when you call the function.
```
print(quadratic(a2=2, a1=4, a0=1, x=3))
print(quadratic(3, 5, a2=4)) # x=3, a0=5, a1=1
```
## Object Oriented Programming (OOP)
In Python 3.x, all variables are objects (instances of a class). Now we'll create a custom class. In this example, we'll create a class for 2D points.
```
class Point2D(object):
    """ class for 2D points """

    def __init__(self, east=0, north=0): # the constructor
        """ initialize point
        :param east: first coordinate
        :param north: second coordinate
        """
        self.east = east # member variable
        self.north = north

    def abs(self):
        """ distance from the origin
        :returns: distance
        """
        return (self.east**2 + self.north**2)**0.5

    def __str__(self):
        """ convert point to string for printing
        :returns: string with coordinates
        """
        return "{:.3f}; {:.3f}".format(self.east, self.north)

    def __add__(self, p):
        """ sum of two vectors
        :param p: point to add
        :returns: sum as a Point2D
        """
        return Point2D(self.east + p.east, self.north + p.north)
```
Let's create some instances of the point class.
```
p1 = Point2D() # origin
p2 = Point2D(10,8)
p3 = Point2D(-1, 4)
print(p1.east, p1.north)
print(p1)
print(p2)
print(p2.abs())
print(p2.__doc__)
print('p2 + p3: ', p2 + p3) # p2 + p3 -> p2.__add__(p3)
```
## Samples for Pythonic and non-Pythonic code
**Swap the values of two variables**
Non-Pythonic
```
tmp = a; a = b; b = tmp
```
Pythonic
```
a, b = b, a
```
**Assign the same value to several variables**
Non-Pythonic
```
a = 0; b = 0; c = 0
```
Pythonic
```
a = b = c = 0
```
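One caveat with the chained form: for mutable objects it binds all names to the *same* object, so a change made through one name is visible through the others.

```python
a = b = []      # a and b refer to the SAME list object
a.append(1)
print(b)        # b changed too
x, y = [], []   # two independent lists
x.append(1)
print(y)        # y is unaffected
```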
**Value is in an interval**
Non-Pythonic
```
if a < b and b < c:
```
Pythonic
```
if a < b < c:
```
**Value is equal to one of a set**
Non-Pythonic
```
if a == 'apple' or a == 'peach' or a == 'plum':
```
Pythonic
```
if a in ('apple', 'peach', 'plum'):
```
**Conditional assignment**
Non-Pythonic
```
if b > 2:
    a = 1
else:
    a = 2
```
Pythonic
```
a = 1 if b > 2 else 2
```
**Loops**
Non-Pythonic
```
i = 0
while i < 10:
    print(i)
    i += 1
```
Pythonic
```
for i in range(10):
    print(i)
```
Non-Pythonic
```
my_list = ['Joe', 'Fred', 'Tim']
index = 0
while index < len(my_list):
    print(index, my_list[index])
    index += 1
```
Pythonic
```
my_list = ['Joe', 'Fred', 'Tim']
for index, item in enumerate(my_list):
    print(index, item)
```
**Reverse list/string**
Non-Pythonic
```
s = 'python'
w = ''
for i in range(len(s)-1, -1, -1):
    w = w + s[i]
```
Pythonic
```
s = 'python'
w = s[::-1]
```
**Remove duplicate items from a list**
Non-Pythonic
```
my_list = [1, 2, 1, 3, 2, 4, 3]
l = []
for i in my_list:
    if i not in l:
        l.append(i)
```
Pythonic
```
my_list = [1, 2, 1, 3, 2, 4, 3]
l = list(set(my_list))
```
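Note that a `set` does not keep the original order of the elements. If the order matters, `dict.fromkeys` (dictionaries preserve insertion order since Python 3.7) is an equally short alternative:

```python
my_list = [1, 2, 1, 3, 2, 4, 3]
l = list(dict.fromkeys(my_list))  # keeps the first occurrence of each item
print(l)
```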
# MultiAgentEnvironment: simple Map
```
#%autosave 30
import glob
import sys
from operator import itemgetter
import numpy as np
import random
import time
import math
import networkx as nx
from networkx.algorithms.shortest_paths.generic import shortest_path_length
import ray
from ray import tune
#from ray.tune.logger import pretty_print
from ray.rllib.policy.policy import Policy, PolicySpec
from ray.rllib.models.tf.tf_modelv2 import TFModelV2
from ray.rllib.models.tf.fcnet import FullyConnectedNetwork
#from ray.rllib.agents.callbacks import DefaultCallbacks
from ray.rllib.env import MultiAgentEnv
from ray.rllib.utils.framework import try_import_tf
from gym.spaces import Discrete, Box, Tuple, MultiDiscrete, Dict, MultiBinary
from ray.rllib.utils.spaces.repeated import Repeated
import matplotlib.pyplot as plt
tf1, tf, tfv = try_import_tf() # preferred TF import for Ray
from threading import Thread, Event
print("Imports successful")
######## Utility Classes ########
class BoolTimer(Thread):
    """A boolean value that toggles after a specified number of seconds.
    Example:
        bt = BoolTimer(30.0, False)
        bt.start()
    Used in the centrality baseline to limit the computation time.
    """

    def __init__(self, interval, initial_state=True):
        Thread.__init__(self)
        self.interval = interval
        self.state = initial_state
        self.finished = Event()

    def __bool__(self):
        return bool(self.state)

    def run(self):
        self.finished.wait(self.interval)
        if not self.finished.is_set():
            self.state = not self.state
        self.finished.set()
######## Static helper functions ########
def shuffle_actions(action_dict, action_space=None):
    """
    Used to shuffle the action dict to ensure that agents with a lower id are not always preferred
    when picking up parcels over other agents that chose the same action.
    For debugging: pass the environment's action_space to check that all actions are contained in it.
    """
    keys = list(action_dict)
    random.shuffle(keys)
    shuffled = {}
    for agent in keys:
        if action_space is not None: # extra check during development -> disable for later training
            assert action_space.contains(action_dict[agent]), f"Action {action_dict[agent]} taken by agent {agent} not in action space"
        shuffled[agent] = action_dict[agent]
    return shuffled
def load_graph(data):
    """Loads topology (map) from json file into a networkX graph and returns the graph"""
    nodes = data["nodes"]
    edges = data["edges"]
    g = nx.DiGraph() # directed graph
    g.add_nodes_from(nodes)
    for node in nodes: # add attribute values
        g.nodes[node]["id"] = nodes[node]["id"]
        g.nodes[node]["type"] = nodes[node]["type"]
    for edge in edges: # add edges with attributes
        f = edges[edge]["from"]
        t = edges[edge]["to"]
        weight_road, weight_air, _type = sys.maxsize, sys.maxsize, None
        if edges[edge]["road"] >= 0:
            weight_road = edges[edge]["road"]
            _type = 'road'
        if edges[edge]["air"] >= 0:
            weight_air = edges[edge]["air"]
            _type = 'both' if _type == 'road' else 'air'
        weight = min(weight_road, weight_air) # needed for optimality baseline
        g.add_edge(f, t, type=_type, road=weight_road, air=weight_air, weight=weight)
    #print(list(g.nodes(data=True)))
    return g
```
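For example, a polling loop can use a `BoolTimer` directly as its condition; once the interval elapses, the value flips and the loop ends. The class is repeated below so the snippet runs stand-alone, and the 0.2-second interval is chosen just to keep the demo short:

```python
from threading import Thread, Event
import time

class BoolTimer(Thread):
    """A boolean value that toggles after a specified number of seconds."""
    def __init__(self, interval, initial_state=True):
        Thread.__init__(self)
        self.interval = interval
        self.state = initial_state
        self.finished = Event()
    def __bool__(self):
        return bool(self.state)
    def run(self):
        self.finished.wait(self.interval)
        if not self.finished.is_set():
            self.state = not self.state
        self.finished.set()

running = BoolTimer(0.2)   # True now, flips to False after 0.2 s
running.start()
iterations = 0
while running:             # loop body runs until the timer flips
    iterations += 1
    time.sleep(0.01)
running.join()
print(iterations, bool(running))
```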
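The `-1` convention in the edge data means "edge not usable for this transport mode". The core of the road/air handling in `load_graph` can be sketched without networkx (`edge_attrs` is our own illustrative helper, not part of the notebook's code):

```python
import sys

def edge_attrs(edge):
    """Mirror load_graph's handling of road/air weights: -1 means not traversable."""
    weight_road = edge["road"] if edge["road"] >= 0 else sys.maxsize
    weight_air = edge["air"] if edge["air"] >= 0 else sys.maxsize
    if edge["road"] >= 0 and edge["air"] >= 0:
        _type = 'both'
    elif edge["road"] >= 0:
        _type = 'road'
    elif edge["air"] >= 0:
        _type = 'air'
    else:
        _type = None
    # the combined weight is used by the optimality baseline
    return _type, weight_road, weight_air, min(weight_road, weight_air)

print(edge_attrs({"road": 5, "air": 3}))   # usable by both modes
print(edge_attrs({"road": -1, "air": 2}))  # drone-only edge
```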
## Environment Code
```
# Map definition --> USE_CASE MAP
topology_UseCase = {
'nodes': {
0: {'id': 0, 'type': 'parking'},
1: {'id': 1, 'type': 'parking'},
2: {'id': 2, 'type': 'parking'},
3: {'id': 3, 'type': 'parking'},
4: {'id': 4, 'type': 'parking'},
5: {'id': 5, 'type': 'parking'},
6: {'id': 6, 'type': 'parking'},
7: {'id': 7, 'type': 'parking'},
8: {'id': 8, 'type': 'parking'},
9: {'id': 9, 'type': 'parking'},
10: {'id': 10, 'type': 'parking'},
11: {'id': 11, 'type': 'parking'},
12: {'id': 12, 'type': 'parking'},
13: {'id': 13, 'type': 'road'}
},
'edges': {
# lower half
"e01":{"from": 0,"to": 4,"road": 5, "air": 5},
"e02":{"from": 4,"to": 0,"road": 5, "air": 5},
"e03":{"from": 0,"to": 5,"road": -1, "air": 3},
"e04":{"from": 5,"to": 0,"road": -1, "air": 3},
"e09":{"from": 0,"to": 1,"road": 3, "air": 3},
"e10":{"from": 1,"to": 0,"road": 3, "air": 3},
"e05":{"from": 0,"to": 2,"road": -1, "air": 2},
"e06":{"from": 2,"to": 0,"road": -1, "air": 2},
"e07":{"from": 1,"to": 3,"road": 8, "air": 4},
"e08":{"from": 3,"to": 1,"road": 8, "air": 4},
"e11":{"from": 1,"to": 2,"road": -1, "air": 2},
"e12":{"from": 2,"to": 1,"road": -1, "air": 2},
"e13":{"from": 1,"to": 5,"road": 8, "air": 4},
"e14":{"from": 5,"to": 1,"road": 8, "air": 4},
"e15":{"from": 1,"to": 6,"road": 5, "air": 5},
"e16":{"from": 6,"to": 1,"road": 5, "air": 5},
"e17":{"from": 2,"to": 3,"road": 10, "air": 5},
"e18":{"from": 3,"to": 2,"road": 10, "air": 5},
"e19":{"from": 3,"to": 12,"road": 12, "air": 6},
"e20":{"from": 12,"to": 3,"road": 12, "air": 6},
"e21":{"from": 4,"to": 5,"road": 5, "air": 5},
"e22":{"from": 5,"to": 4,"road": 5, "air": 5},
"e23":{"from": 5,"to": 6,"road": 5, "air": 5},
"e24":{"from": 6,"to": 5,"road": 5, "air": 5},
"e25":{"from": 6,"to": 12,"road": 10, "air": 5},
"e26":{"from": 12,"to": 6,"road": 10, "air": 5},
"e27":{"from": 12,"to": 13,"road": 10, "air": -1},
"e28":{"from": 13,"to": 12,"road": 10, "air": -1},
# Bridges
"e29":{"from": 4,"to": 7,"road": 3, "air": -1},
"e30":{"from": 7,"to": 4,"road": 3, "air": -1},
"e31":{"from": 5,"to": 8,"road": 3, "air": -1},
"e32":{"from": 8,"to": 5,"road": 3, "air": -1},
"e33":{"from": 6,"to": 9,"road": 3, "air": -1},
"e34":{"from": 9,"to": 6,"road": 3, "air": -1},
# Upper half
"e35":{"from": 8,"to": 7,"road": 8, "air": 3},
"e36":{"from": 7,"to": 8,"road": 8, "air": 3},
"e37":{"from": 8,"to": 9,"road": 8, "air": 3},
"e38":{"from": 9,"to": 8,"road": 8, "air": 3},
"e39":{"from": 7,"to": 10,"road": 6, "air": 2},
"e40":{"from": 10,"to": 7,"road": 6, "air": 2},
"e41":{"from": 10,"to": 11,"road": 12, "air": 3},
"e42":{"from": 11,"to": 10,"road": 12, "air": 3},
"e43":{"from": 9,"to": 11,"road": 10, "air": 5},
"e44":{"from": 11,"to": 9,"road": 10, "air": 5}
}
}
class Case_Environment(MultiAgentEnv):

    def __init__(self, env_config: dict = {}):
        # ensure config file includes all necessary settings
        assert 'NUMBER_STEPS_PER_EPISODE' in env_config
        assert 'NUMBER_OF_DRONES' in env_config
        assert 'NUMBER_OF_CARS' in env_config
        assert 'INIT_NUMBER_OF_PARCELS' in env_config
        assert 'TOPOLOGY' in env_config
        assert 'MAX_NUMBER_OF_PARCELS' in env_config
        assert 'THRESHOLD_ADD_NEW_PARCEL' in env_config
        assert 'BASELINE_FLAG' in env_config
        assert 'BASELINE_TIME_CONSTRAINT' in env_config
        assert 'BASELINE_OPT_CONSTANT' in env_config
        assert 'CHARGING_STATION_NODES' in env_config
        assert 'MAX_BATTERY_POWER' in env_config
        assert 'REWARDS' in env_config
        topology = env_config['TOPOLOGY']
        self.graph = load_graph(topology)
        # Map config
        self.NUMBER_OF_DRONES = env_config['NUMBER_OF_DRONES']
        self.NUMBER_OF_CARS = env_config['NUMBER_OF_CARS']
        self.NUMBER_OF_EDGES = self.graph.number_of_edges()
        self.NUMBER_OF_NODES = self.graph.number_of_nodes()
        self.CHARGING_STATION_NODES = env_config['CHARGING_STATION_NODES']
        self.MAX_BATTERY_POWER = env_config['MAX_BATTERY_POWER']
        # Simulation config
        self.NUMBER_STEPS_PER_EPISODE = env_config['NUMBER_STEPS_PER_EPISODE']
        self.INIT_NUMBER_OF_PARCELS = env_config['INIT_NUMBER_OF_PARCELS']
        self.RANDOM_SEED = env_config.get('RANDOM_SEED', None)
        self.MAX_NUMBER_OF_PARCELS = env_config['MAX_NUMBER_OF_PARCELS']
        self.THRESHOLD_ADD_NEW_PARCEL = env_config['THRESHOLD_ADD_NEW_PARCEL']
        self.BASELINE_FLAG = env_config['BASELINE_FLAG']
        self.BASELINE_TIME_CONSTRAINT = env_config['BASELINE_TIME_CONSTRAINT']
        self.BASELINE_OPT_CONSTANT = env_config['BASELINE_OPT_CONSTANT']
        self.DEBUG_LOG = env_config.get('DEBUG_LOGS', False)
        # Some sanity checks on the settings
        if self.DEBUG_LOG:
            assert self.MAX_NUMBER_OF_PARCELS >= self.INIT_NUMBER_OF_PARCELS, "Number of initial parcels exceeds max parcel limit"
        # Reward constants
        self.STEP_PENALTY = env_config['REWARDS']['STEP_PENALTY']
        self.PARCEL_DELIVERED = env_config['REWARDS']['PARCEL_DELIVERED']
        # compute other rewards
        self.BATTERY_DIED = self.STEP_PENALTY * self.NUMBER_STEPS_PER_EPISODE
        self.BATTERY_DIED_WITH_PARCEL = self.BATTERY_DIED * 2
        #self.DELIVERY_CONTRIBUTION: depends on active agents --> computed in prepare_global_reward()
        #self.ALL_DELIVERED_CONTRIB: depends on active agents --> computed in prepare_global_reward(episode_success=True)
        # Computed constants
        self.NUMBER_OF_AGENTS = self.NUMBER_OF_DRONES + self.NUMBER_OF_CARS
        self.PARCEL_STATE_DELIVERED = self.NUMBER_OF_AGENTS + self.NUMBER_OF_NODES
        self.NUMBER_OF_ACTIONS = 1 + self.NUMBER_OF_NODES + 1 + self.MAX_NUMBER_OF_PARCELS
        self.ACTION_DROPOFF = 1 + self.NUMBER_OF_NODES # first action NOOP is 0
        # seed RNGs
        self.seed(self.RANDOM_SEED)
        self.state = None
        self.current_step = None
        self.blocked_agents = None
        self.parcels_delivered = None
        self.done_agents = None
        self.all_done = None
        self.allowed_actions = None
        # baseline related
        self.baseline_missions = None
        self.agents_base = None
        self.o_employed = None
        # metrics for the evaluation
        self.parcel_delivered_steps = None # --> {p1: 20, p2: 240, p3: 140}
        self.parcel_added_steps = None # --> {p1: 0, p2: 0, p3: 50}
        self.agents_crashed = None # --> {c_2: 120, d_0: 242}
        self.metrics = None
        self.agents = [*["d_" + str(i) for i in range(self.NUMBER_OF_DRONES)],
                       *["c_" + str(i) for i in range(self.NUMBER_OF_DRONES, self.NUMBER_OF_DRONES + self.NUMBER_OF_CARS)]]
        # Define observation and action spaces for individual agents
        self.action_space = Discrete(self.NUMBER_OF_ACTIONS)
        #---- Repeated obs space: represents a parcel with (id, location, destination) --> parcel_id starts at 1
        parcel_space = Dict({'id': Discrete(self.MAX_NUMBER_OF_PARCELS + 1),
                             'location': Discrete(self.NUMBER_OF_NODES + self.NUMBER_OF_AGENTS + 1),
                             'destination': Discrete(self.NUMBER_OF_NODES)})
        self.observation_space = Dict({
            'obs': Dict({
                'state': Dict({'position': Discrete(self.NUMBER_OF_NODES),
                               'battery': Discrete(self.MAX_BATTERY_POWER + 1), # [0-100]
                               'has_parcel': Discrete(self.MAX_NUMBER_OF_PARCELS + 1),
                               'current_step': Discrete(self.NUMBER_STEPS_PER_EPISODE + 1)}),
                'parcels': Repeated(parcel_space, max_len=self.MAX_NUMBER_OF_PARCELS)}),
            'allowed_actions': MultiBinary(self.NUMBER_OF_ACTIONS)})
        #TODO: why is reset() not called by env?
        self.reset()
    def step(self, action_dict):
        """conduct the state transitions caused by actions in action_dict
        :returns:
            - observation_dict: observations for agents that need to act in the next round
            - rewards_dict: rewards for agents following their chosen actions
            - done_dict: indicates end of episode if max_steps reached or all parcels delivered
            - info_dict: pass data to custom logger
        """
        if self.DEBUG_LOG:
            print(f"Debug log flag set to {self.DEBUG_LOG}")
        # ensure no disadvantage for agents with higher IDs if an action conflicts with one taken by another agent
        action_dict = shuffle_actions(action_dict)
        self.current_step += 1
        # grant step penalty reward
        agent_rewards = {agent: self.STEP_PENALTY for agent in self.agents}
        # setting an agent done twice might cause a crash when used with tune -> https://github.com/ray-project/ray/issues/10761
        dones = {}
        self.metrics['step'] = self.current_step
        # dynamically add parcel
        if random.random() <= self.THRESHOLD_ADD_NEW_PARCEL and len(self.state['parcels']) < self.MAX_NUMBER_OF_PARCELS:
            p_id, parcel = self.generate_parcel()
            assert p_id not in self.state['parcels'], "Duplicate parcel ID generated"
            self.state['parcels'][p_id] = parcel
            if self.BASELINE_FLAG:
                self.compute_central_delivery(p_id)
                self.metrics["optimal"] = self.compute_optimality_baseline(p_id, extra_charge=self.BASELINE_OPT_CONSTANT)
            for agent in self.agents:
                self.allowed_actions[agent][self.ACTION_DROPOFF + p_id] = np.array([1]).astype(bool)
        if self.BASELINE_FLAG:
            old_actions = action_dict
            action_dict = {}
            # replace actions with the actions recommended by the central baseline
            for agent, action in old_actions.items():
                if len(self.baseline_missions[agent]) > 0:
                    new_action = self.baseline_missions[agent][0]
                    if type(new_action) is tuple: # dropoff action with minimal time
                        if self.current_step >= new_action[1]:
                            new_action = new_action[0]
                            self.baseline_missions[agent].pop(0)
                        else: # agent has to wait for the previous subroute agent
                            new_action = 0
                    else: # move or pickup or charge
                        self.baseline_missions[agent].pop(0)
                    action_dict[agent] = new_action
                else: # agent has no baseline mission -> Noop
                    action_dict[agent] = 0
        # carry out state transition
        # handle NOOP actions: -> action == 0
        noop_agents = {agent: action for agent, action in action_dict.items() if action == 0}
        effectual_agents_items = {agent: action for agent, action in action_dict.items() if action > 0}.items()
        # transaction between agents is modelled as pickup of a just offloaded (=dropped) parcel --> handle dropoff first
        moving_agents = {agent: action for agent, action in effectual_agents_items if 0 < action <= self.NUMBER_OF_NODES}
        dropoff_agents = {agent: action for agent, action in effectual_agents_items if action == self.ACTION_DROPOFF}
        pickup_agents = {agent: action for agent, action in effectual_agents_items if action > self.ACTION_DROPOFF}
        # handle noop / charge decisions:
        for agent, action in noop_agents.items():
            # check if recharging is possible
            current_pos = self.state['agents'][agent]['position']
            if current_pos in self.CHARGING_STATION_NODES:
                self.state['agents'][agent]['battery'] = self.MAX_BATTERY_POWER
        # handle movement actions:
        for agent, action in moving_agents.items():
            # get current agent position from state
            self.state['agents'][agent]['battery'] += -1
            current_pos = self.state['agents'][agent]['position']
            # networkX: use node instead of edge:
            destination = action - 1
            if self.graph.has_edge(current_pos, destination):
                # agent chose an existing edge -> check if its type is suitable
                agent_type = 'road' if agent[:1] == 'c' else 'air'
                if self.graph[current_pos][destination]["type"] in [agent_type, 'both']:
                    # edge has the correct type
                    self.state['agents'][agent]['position'] = destination
                    self.state['agents'][agent]['battery'] += -(self.graph[current_pos][destination][agent_type] + 1)
                    if self.state['agents'][agent]['battery'] < 0: # ensure a negative battery value does not break obs_space
                        # battery below 0 --> reset to 0 (stay in obs space)
                        self.state['agents'][agent]['battery'] = 0
                    self.blocked_agents[agent] = self.graph[current_pos][destination][agent_type]
                    self.update_allowed_actions_nodes(agent)
        # handle dropoff decisions: -> action == self.ACTION_DROPOFF
        for agent, action in dropoff_agents.items():
            self.state['agents'][agent]['battery'] += -1
            if self.state['agents'][agent]['has_parcel'] > 0: # agent has a parcel
                parcel_id = self.state['agents'][agent]['has_parcel']
                self.state['agents'][agent]['has_parcel'] = 0
                self.state['parcels'][parcel_id][0] = self.state['agents'][agent]['position']
                if self.state['parcels'][parcel_id][0] == self.state['parcels'][parcel_id][1]:
                    # delivery successful
                    agent_rewards[agent] += self.PARCEL_DELIVERED # local reward
                    # global contribution rewards
                    active_agents, reward = self.prepare_global_reward()
                    for a_id in active_agents:
                        agent_rewards[a_id] += reward
                    self.state['parcels'][parcel_id][0] = self.PARCEL_STATE_DELIVERED
                    self.parcels_delivered[int(parcel_id) - 1] = True # parcel_ids start at 1
                    self.metrics['delivered'].update({"p_" + str(parcel_id): self.current_step})
                self.update_allowed_actions_parcels(agent)
        # handle pickup decisions:
        for agent, action in pickup_agents.items():
            self.state['agents'][agent]['battery'] += -1
            if self.state['agents'][agent]['has_parcel'] == 0: # agent has free parcel capacity
                # convert action_id to parcel_id
                parcel_id = action - self.ACTION_DROPOFF
                if self.DEBUG_LOG:
                    assert parcel_id in self.state['parcels'], f"parcel {parcel_id} not in ENV"
                if self.state['parcels'][parcel_id][0] == self.state['agents'][agent]['position']:
                    # successful pickup operation
                    self.state['parcels'][parcel_id][0] = self.NUMBER_OF_NODES + int(agent[2:])
                    self.state['agents'][agent]['has_parcel'] = int(parcel_id)
                    self.update_allowed_actions_parcels(agent)
        # unblock agents for the next round
        self.blocked_agents = {agent: remaining_steps - 1 for agent, remaining_steps in self.blocked_agents.items() if remaining_steps > 1}
        # handle dones - out of battery or max_steps or goal reached
        for agent in action_dict.keys():
            if agent not in self.done_agents and self.state['agents'][agent]['battery'] <= 0:
                agent_rewards[agent] = self.BATTERY_DIED_WITH_PARCEL if self.state['agents'][agent]['has_parcel'] != 0 else self.BATTERY_DIED
                dones[agent] = True
                self.done_agents.append(agent)
                self.metrics['crashed'].update({agent: self.current_step})
        if len(self.done_agents) == self.NUMBER_OF_AGENTS:
            # all agents dead
            self.all_done = True
        # check if the episode terminated because the goal was reached or all agents crashed -> avoid setting done twice
        if self.current_step >= self.NUMBER_STEPS_PER_EPISODE or (all(self.parcels_delivered) and len(self.parcels_delivered) == self.MAX_NUMBER_OF_PARCELS):
            # check if the episode was a success:
            if self.current_step < self.NUMBER_STEPS_PER_EPISODE:
                # grant global reward for all parcels delivered
                active_agents, reward = self.prepare_global_reward(_episode_success=True)
                for a_id in active_agents:
                    agent_rewards[a_id] += reward
            self.all_done = True
        dones['__all__'] = self.all_done
        parcel_obs = self.get_parcel_obs()
        # obs / rewards / dones / info
        return {agent: {'obs': {'state': {'position': self.state['agents'][agent]['position'],
                                          'battery': self.state['agents'][agent]['battery'],
                                          'has_parcel': self.state['agents'][agent]['has_parcel'],
                                          'current_step': self.current_step},
                                'parcels': parcel_obs},
                        'allowed_actions': self.allowed_actions[agent]}
                for agent in self.agents if agent not in self.blocked_agents and agent not in self.done_agents}, \
            {agent: agent_rewards[agent] for agent in self.agents}, \
            dones, \
            {}
    def seed(self, seed=None):
        tf.random.set_seed(seed)
        np.random.seed(seed)
        random.seed(seed)
    def reset(self):
        """resets variables; returns dict with observations, keys are agent_ids"""
        self.current_step = 0
        self.blocked_agents = {}
        self.parcels_delivered = [False for _ in range(self.MAX_NUMBER_OF_PARCELS)]
        self.done_agents = []
        self.all_done = False
        self.metrics = {"step": self.current_step,
                        "delivered": {},
                        "crashed": {},
                        "added": {},
                        "optimal": None}
        # baseline
        self.baseline_missions = {agent: [] for agent in self.agents}
        self.o_employed = [0 for _ in range(self.NUMBER_OF_AGENTS)]
        self.agents_base = None # env.agents in the pseudocode
        self.allowed_actions = {agent: np.array([1 for act in range(self.NUMBER_OF_ACTIONS)]) for agent in self.agents}
        # reset state
        self.state = {'agents': {},
                      'parcels': {}}
        # generate initial parcels
        for _ in range(self.INIT_NUMBER_OF_PARCELS):
            p_id, parcel = self.generate_parcel()
            self.state['parcels'][p_id] = parcel
        parcel_obs = self.get_parcel_obs()
        # init agents
        self.state['agents'] = {agent: {'position': self._random_feasible_agent_position(agent),
                                        'battery': self.MAX_BATTERY_POWER,
                                        'has_parcel': 0} for agent in self.agents}
        if self.BASELINE_FLAG:
            for parcel in self.state['parcels']:
                self.compute_central_delivery(parcel, debug_log=False)
                # TODO really return something here??
                self.metrics['optimal'] = self.compute_optimality_baseline(parcel, extra_charge=self.BASELINE_OPT_CONSTANT, debug_log=False)
        # compute allowed actions per agent --> move to function
        for agent in self.agents:
            self.update_allowed_actions_nodes(agent)
            self.update_allowed_actions_parcels(agent)
        agent_obs = {agent: {'obs': {'state': {'position': state['position'],
                                               'battery': state['battery'],
                                               'has_parcel': state['has_parcel'],
                                               'current_step': self.current_step},
                                     'parcels': parcel_obs},
                             'allowed_actions': self.allowed_actions[agent]}
                     for agent, state in self.state['agents'].items()}
        return {**agent_obs}
    def _random_feasible_agent_position(self, agent_id):
        """Needed to avoid car agents being initialized at nodes only reachable by drones
        and thus being trapped from the beginning. Ensures that car agents start at a node of type 'road' or 'parking'.
        """
        position = random.randrange(self.NUMBER_OF_NODES)
        # not necessary on the case study map
        #if agent_id[0] == 'c': # agent is a car
        #    while self.graph.nodes[position]['type'] == 'air': # position not reachable by car
        #        position = random.randrange(self.NUMBER_OF_NODES)
        return position
    def update_allowed_actions_nodes(self, agent):
        new_pos = self.state['agents'][agent]['position']
        next_steps = list(self.graph.neighbors(new_pos))
        agent_type = 'air' if agent[0] == 'd' else 'road'
        allowed_nodes = np.zeros(self.NUMBER_OF_NODES)
        for neighbor in next_steps:
            if self.graph[new_pos][neighbor]['type'] in [agent_type, 'both']:
                allowed_nodes[neighbor] = 1
        self.allowed_actions[agent][1:self.NUMBER_OF_NODES+1] = np.array(allowed_nodes).astype(bool)
    def update_allowed_actions_parcels(self, agent):
        """ Allow only the dropoff or pickup actions, depending on the has_parcel value of the agent.
        Pickup is not concerned with the parcel actually being at the agent's current location, only with free capacity
        and the parcel already being added to the ENV"""
        num_parcels = len(self.state['parcels'])
        allowed_parcels = np.zeros(self.MAX_NUMBER_OF_PARCELS)
        dropoff = 1
        if self.state['agents'][agent]['has_parcel'] == 0:
            dropoff = 0
            allowed_parcels = np.concatenate([np.ones(num_parcels), np.zeros(self.MAX_NUMBER_OF_PARCELS - num_parcels)])
        self.allowed_actions[agent][self.NUMBER_OF_NODES+1:] = np.array([dropoff, *allowed_parcels]).astype(bool)
    def get_parcel_obs(self):
        parcel_obs = [{'id': pid, 'location': parcel[0], 'destination': parcel[1]} for (pid, parcel) in self.state['parcels'].items()]
        return parcel_obs
    def generate_parcel(self):
        """generate a new parcel id and a new parcel with random nodes for location and destination.
        p_ids (int) start at 1; the two nodes are sampled without replacement to avoid parcels that spawn at their destination"""
        p_id = len(self.state['parcels']) + 1
        parcel = random.sample(range(self.NUMBER_OF_NODES), 2) # => initial location != destination
        self.metrics['added'].update({p_id: self.current_step})
        return p_id, parcel
    def prepare_global_reward(self, _episode_success=False):
        """ computes a global reward for all active agents still in the environment.
        If _episode_success is set to True, all parcels have been delivered and the ALL_DELIVERED reward is granted.
        :returns: list of active agents and the reward value
        """
        agents_alive = list(set(self.agents).difference(set(self.done_agents)))
        if self.DEBUG_LOG:
            assert len(agents_alive) > 0
        reward = self.PARCEL_DELIVERED * (self.NUMBER_STEPS_PER_EPISODE - self.current_step) / self.NUMBER_STEPS_PER_EPISODE \
            if _episode_success else self.PARCEL_DELIVERED / len(agents_alive)
        return agents_alive, reward
    #------ BASELINE related methods ------#
    def compute_optimality_baseline(self, parcel_id, extra_charge=2.5, debug_log=False):
        """Used in the optimality baseline
        Input: parcel_id, (extra_charge)
        Output: new total delivery rounds needed for all parcels
        """
        parcel = self.state['parcels'][parcel_id]
        path_time = 2 + shortest_path_length(self.graph, parcel[0], parcel[1], 'weight')
        _time = math.ceil(path_time * extra_charge) # round up to the next integer
        min_index = self.o_employed.index(min(self.o_employed))
        self.o_employed[min_index] += _time
        return max(self.o_employed)
    def compute_central_delivery(self, p_id, debug_log=False):
        """Used in the central baseline, iteratively tries to find a good delivery route
        with the available agents in the time specified in BASELINE_TIME_CONSTRAINT
        Input: parcel_id
        Output: Dict: {agent_id: [actions], ...} --> update that dict! (merged in this function with previous actions!)
        """
        if self.agents_base is None:
            self.agents_base = {a_id: (a['position'], 0) for (a_id, a) in self.state["agents"].items()} # last instructed pos + its step count
        min_time = None
        new_missions = {} # key: agent, value: [actions]
        source = self.state["parcels"][p_id][0]
        target = self.state["parcels"][p_id][1]
        shortest_paths_generator = nx.shortest_simple_paths(self.graph, source, target, weight='weight')
        running = BoolTimer(self.BASELINE_TIME_CONSTRAINT)
        running.start()
        while running:
            # default --> assign the full route to the nearest drone
            if min_time is None:
                air_route = nx.shortest_path(self.graph, source=source, target=target, weight="air")
                air_route_time = nx.shortest_path_length(self.graph, source=source, target=target, weight="air")
                air_route.pop(0) # remove source node from path
                # assign the most suitable drone
                best_drone = None
                for (a_id, a_tp) in self.agents_base.items():
                    if a_id[0] != 'd': # filter for correct agent type
                        continue
                    journey_time = nx.shortest_path_length(self.graph, source=a_tp[0], target=source, weight="air") + a_tp[1]
                    if min_time is None or journey_time < min_time:
                        min_time = journey_time
                        best_drone = (a_id, a_tp)
                # construct path for the agent; increment node_ids by one to get the corresponding actions
                drone_route = nx.shortest_path(self.graph, source=best_drone[1][0], target=source, weight="air")
                drone_route_actions = [x+1 for x in drone_route[1:]] + [(self.ACTION_DROPOFF + p_id, 0)] + [x+1 for x in air_route[1:]] + [self.ACTION_DROPOFF]
                min_time += air_route_time + 2 # add 2 steps for pick & drop
                self._add_charging_stops_to_route(drone_route_actions, debug_log=debug_log)
                new_missions[best_drone[0]] = (drone_route_actions, min_time)
            else: # try to improve the existing base mission
                try:
                    shortest_route = next(shortest_paths_generator)
                except StopIteration:
                    break # all existing shortest paths already tried
                subroutes = self._path_to_subroutes(shortest_route, debug_log=debug_log)
                duration, min_agents = self._find_best_agents(subroutes, min_time, debug_log=debug_log)
                if duration < min_time:
                    # faster delivery route found!
                    assert duration < min_time, "Central baseline preferred a longer route..."
                    # update min_time
                    min_time = duration
                    new_missions = self._build_missions(min_agents, subroutes, p_id, debug_log=debug_log)
        #---- end while = timer expired
        # now save the best mission found in the ENV
        for agent in new_missions.keys():
            # retrieve target node from mission; depends on charging stops and the case that no move is necessary
            target = None
            if isinstance(new_missions[agent][0][-2], int):
                target = new_missions[agent][0][-2]
                if target == 0:
                    target = new_missions[agent][0][-3] # a charging action was added before the dropoff
                target -= 1 # action - 1 = node_id
            else: # handle case no move necessary -> pick action before dropoff
                target = self.agents_base[agent][0]
            self.agents_base[agent] = (target, self.agents_base[agent][1] + new_missions[agent][1])
            self.baseline_missions[agent].extend(new_missions[agent][0])
        return new_missions
def _find_best_agents(self, subroutes, min_time, debug_log=False):
""" For use in centrality_baseline. Finds best available agents for traversing a set of subroutes
and returns these with the total duration for doing so.
Input: subroutes = [(edge_type, [nodes]), ...]
"""
min_agents = {}
temp_agents_base = {k: v for k, v in self.agents_base.items()} # shallow copy for temporary planning (values are immutable tuples, so copying the dict suffices)
for i,r in enumerate(subroutes):
# init some helper vars
a_type = "d" if r[0] == 'air' else 'c'
min_time_sub = None # best time (min) for this subroute (closest agent)
best_agent_sub = None
# iterate over agents of correct type! --> later: Busy / unbusy
for (a_id, a_tp) in temp_agents_base.items():
#reminder: a_tp is tuple of latest future position (node, timestep)
# filter for correct agent type - even if type is 'both' one agent can still take only its edge type
weight_agent = r[0]
if r[0] == 'both':
weight_agent = 'road' if a_id[0] == 'c' else 'air'
else: # wrong agent type
if a_id[0] != a_type: # todo replace with parameter variable in function!
continue
journey_time = nx.shortest_path_length(self.graph, source=a_tp[0], target=r[1][0], weight=weight_agent) + a_tp[1] # earliest time agent can be there
if min_time_sub is None or journey_time < min_time_sub:
min_time_sub = journey_time
best_agent_sub = (a_id, a_tp) # a_id, a_tp = (latest location, timestep)
# closest available agent found
best_agent_weight = 'road' if best_agent_sub[0][0] == 'c' else 'air' # agent ids look like 'c_3' / 'd_0', so the first character gives the type
duration_sub = min_time_sub + nx.shortest_path_length(self.graph, source=r[1][0], target=r[1][-1], weight=best_agent_weight) + 1
# update agent state in temporary planning
temp_agents_base[best_agent_sub[0]] = (r[1][-1], duration_sub)
# agent_tuple, duration_subroutes_until_then
min_agents[i] = (best_agent_sub, duration_sub)
if debug_log: assert duration_sub < sys.maxsize, "Non existent edge taken somewhere..."
# check if current subroute already longer than the min one
if duration_sub > min_time:
break # already worse, check next simple path
return duration_sub, min_agents
def _build_missions(self, min_agents, subroutes, parcel_id, debug_log = False):
"""For use in centrality_baseline. Computes list of actions for delivery of parcel parcel_id
and necessary duration for execution from list of agents, subroutes"""
new_mission = {}
for i, s in enumerate(subroutes):
best_agent_pos = min_agents[i][0][1][0]
#earliest time to start that action
time_pickup = min_agents[i-1][1] if i > 0 else 0 # First subroute pickup as soon as possible
# construct the actual delivery path to pickup
delivery_route = nx.shortest_path(self.graph, source=best_agent_pos, target=s[1][0], weight=s[0])
delivery_route_actions = [x+1 for x in delivery_route[1:]] + [(self.ACTION_DROPOFF + parcel_id, time_pickup)] + [x+1 for x in s[1][1:]] + [self.ACTION_DROPOFF]
if debug_log: assert min_agents[i][0][0] not in new_mission # mission keys are agent ids; no preferable case where one agent picks up the same parcel twice --> holding is always better than following
self._add_charging_stops_to_route(delivery_route_actions, debug_log=debug_log)
new_mission[min_agents[i][0][0]] = (delivery_route_actions, min_agents[i][1])
return new_mission
def _add_charging_stops_to_route(self, route_actions, debug_log=False):
"""For use in centrality_baseline. Iterates through a list of actions and inserts a charging action
after every move action to a node with a charging station.
Tuples representing dropoff actions with minimal execution time are updated to account for eventual delays. """
delay = 0
for i, n in enumerate(route_actions):
if isinstance(n, tuple):
if delay > 0: route_actions[i] = (n[0], n[1] + delay)
else:
if n-1 in self.CHARGING_STATION_NODES:
delay += 1
route_actions.insert(i+1, 0)
def _path_to_subroutes(self, path, debug_log= False):
"""For use in centrality_baseline. Takes path in the graph as input and returns list of subroutes
split at changes of the edge types with the type"""
# get subroutes by their edge_types
e_type_prev = None
subroutes = []
_subroute = [path[0]]
if len(path) > 1:
e_type_prev = self.graph.edges[path[0], path[1]]['type']
for i, node in enumerate(path[1:-1], start=1):
e_type_next = self.graph.edges[node, path[i+1]]['type']
_subroute.append(node)
if e_type_next != e_type_prev:
subroutes.append((e_type_prev, _subroute))
_subroute = [node]
e_type_prev = e_type_next
_subroute.append(path[-1])
subroutes.append((e_type_prev, _subroute)) # don't forget last subroute
return subroutes
```
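The subroute splitting performed by `_path_to_subroutes` can be illustrated with a small self-contained sketch. The `edge_types` dict below is a hypothetical stand-in for the networkx edge attributes the environment actually uses; the key idea is that the path is cut wherever the edge type changes, and the boundary node is shared by both neighbouring subroutes:

```python
# Standalone sketch of the subroute-splitting idea used by _path_to_subroutes:
# cut the path wherever the edge type changes; the cut node appears in both
# the ending and the starting subroute.
def split_path_by_edge_type(path, edge_types):
    """Split `path` into (edge_type, node_list) subroutes."""
    if len(path) < 2:
        return [(None, list(path))]
    subroutes = []
    current = [path[0]]
    prev_type = edge_types[(path[0], path[1])]
    for i, node in enumerate(path[1:-1], start=1):
        next_type = edge_types[(node, path[i + 1])]
        current.append(node)
        if next_type != prev_type:
            subroutes.append((prev_type, current))
            current = [node]  # boundary node starts the next subroute
            prev_type = next_type
    current.append(path[-1])
    subroutes.append((prev_type, current))  # don't forget the last subroute
    return subroutes

# Example: road segment 0-1-2, then air segment 2-3
edge_types = {(0, 1): 'road', (1, 2): 'road', (2, 3): 'air'}
print(split_path_by_edge_type([0, 1, 2, 3], edge_types))
```

The real method additionally handles the `'both'` edge type via the agent-type check in `_find_best_agents`.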
## Agent Model and Experiment Evaluation Code
```
from gym.spaces import Discrete, Box, Tuple, MultiDiscrete, Dict, MultiBinary
#from ray.rllib.utils.spaces.space_utils import flatten_space
#from ray.rllib.models.preprocessors import DictFlatteningPreprocessor
# Parametric-action agent model --> apply Action Masking!
class ParametricAgentModel(TFModelV2):
def __init__(self, obs_space, action_space, num_outputs, model_config, name, *args, **kwargs):
super(ParametricAgentModel, self).__init__(obs_space, action_space, num_outputs, model_config, name, *args, **kwargs)
assert isinstance(action_space, Discrete), f'action_space is a {type(action_space)}, but should be Discrete!'
# Adjust for number of agents/parcels/Nodes!! -> Simply copy found shape from the thrown exception
true_obs_shape = (2099, )
action_embed_size = action_space.n
self.action_embed_model = FullyConnectedNetwork(
Box(0, 1, shape=true_obs_shape), # TODO: adjust the upper bound of 1 here? --> must this match the actual obs space?
action_space,
action_embed_size,
model_config,
name + '_action_embedding')
def forward(self, input_dict, state, seq_lens):
action_mask = input_dict['obs']['allowed_actions']
action_embedding, _ = self.action_embed_model.forward({'obs_flat': input_dict["obs_flat"]}, state, seq_lens)
intent_vector = tf.expand_dims(action_embedding, 1)
action_logits = tf.reduce_sum(intent_vector, axis=1) # NOTE: expand_dims followed by reduce_sum over the same axis returns the embedding unchanged; the mask below does the actual filtering
inf_mask = tf.maximum(tf.math.log(action_mask), tf.float32.min)
return action_logits + inf_mask, state
def value_function(self):
return self.action_embed_model.value_function()
## Proposed way to train / evaluate MARL policy from github Issues: --> https://github.com/ray-project/ray/issues/9123 and https://github.com/ray-project/ray/issues/9208
from ray.rllib.agents.dqn import DQNTrainer
def train(config, name, save_dir, stop_criteria, num_samples, verbosity=1):
"""
Train an RLlib agent using tune until any of the configured stopping criteria is met.
:param stop_criteria: Dict with stopping criteria.
See https://docs.ray.io/en/latest/tune/api_docs/execution.html#tune-run
:return: Return the path to the saved agent (checkpoint) and tune's ExperimentAnalysis object
See https://docs.ray.io/en/latest/tune/api_docs/analysis.html#experimentanalysis-tune-experimentanalysis
"""
print("Start training")
analysis = ray.tune.run(DQNTrainer, verbose=verbosity, config=config, local_dir=save_dir,
stop=stop_criteria, name=name, num_samples=num_samples,
checkpoint_at_end=True, resume=True)
# list of lists: one list per checkpoint; each checkpoint list contains 1st the path, 2nd the metric value
checkpoints = analysis.get_trial_checkpoints_paths(trial=analysis.get_best_trial('episode_reward_mean', mode='max'),
metric='episode_reward_mean')
# retrieve the checkpoint path; we only have a single checkpoint, so take the first one
checkpoint_path = checkpoints[0][0]
print(f"Saved trained model in checkpoint {checkpoint_path} - achieved episode_reward_mean: {checkpoints[0][1]}")
return checkpoint_path, analysis
def load(config, path):
"""
Load a trained RLlib agent from the specified path. Call this before testing the trained agent.
"""
agent = DQNTrainer(config=config) #, env=env_class)
agent.restore(path)
return agent
def test(env_class, env_config, policy_mapping_fcn, agent):
"""Test trained agent for a single episode. Return the retrieved env metrics for this episode and the episode reward"""
# instantiate env class
env = env_class(env_config)
episode_reward = 0
done = False
obs = env.reset()
while not done: # run until episode ends
actions = {}
for agent_id, agent_obs in obs.items():
# Here: policy_id == agent_id - added this to avoid confusion for other policy mappings
policy_id = policy_mapping_fcn(agent_id, episode=None, worker=None)
actions[agent_id] = agent.compute_action(agent_obs, policy_id=policy_id)
obs, reward, done, info = env.step(actions)
done = done['__all__']
# sum up reward for all agents
episode_reward += sum(reward.values())
# Retrieve custom metrics from ENV
return env.metrics, episode_reward
def train_and_test_scenarios(config, seeds=None):
""" Trains the policies once, then evaluates them over the scenarios indicated by the configured seeds """
# TODO how to distinguish between the different algos ?
print("Starte: run_function_trainer!")
# prepare the config dicts
NAME = config['NAME']
SAVE_DIR = config['SAVE_DIR']
ENVIRONMENT = config['ENV']
# Simulations
NUMBER_STEPS_PER_EPISODE = config['NUMBER_STEPS_PER_EPISODE']
STOP_CRITERIA = config['STOP_CRITERIA']
NUMBER_OF_SAMPLES = config['NUMBER_OF_SAMPLES']
#MAP / ENV
NUMBER_OF_DRONES = config['NUMBER_OF_DRONES']
NUMBER_OF_CARS = config['NUMBER_OF_CARS']
NUMBER_OF_AGENTS = NUMBER_OF_DRONES + NUMBER_OF_CARS
MAX_NUMBER_OF_PARCELS = config['MAX_NUMBER_OF_PARCELS']
# TESTING
SEEDS = config['SEEDS']
env_config = {
'DEBUG_LOGS':False,
'TOPOLOGY': config['TOPOLOGY'],
# Simulation config
'NUMBER_STEPS_PER_EPISODE': NUMBER_STEPS_PER_EPISODE,
#'NUMBER_OF_TIMESTEPS': NUMBER_OF_TIMESTEPS,
'RANDOM_SEED': None, # 42
# Map
'CHARGING_STATION_NODES': config['CHARGING_STATION_NODES'],
# Entities
'NUMBER_OF_DRONES': NUMBER_OF_DRONES,
'NUMBER_OF_CARS': NUMBER_OF_CARS,
'MAX_BATTERY_POWER': config['MAX_BATTERY_POWER'], # TODO split this for drone and car??
'INIT_NUMBER_OF_PARCELS': config['INIT_NUMBER_OF_PARCELS'],
'MAX_NUMBER_OF_PARCELS': config['MAX_NUMBER_OF_PARCELS'],
'THRESHOLD_ADD_NEW_PARCEL': config['THRESHOLD_ADD_NEW_PARCEL'],
# Baseline settings
'BASELINE_FLAG': False, # is set True in the test function when needed
'BASELINE_OPT_CONSTANT': config['BASELINE_OPT_CONSTANT'],
'BASELINE_TIME_CONSTRAINT': config['BASELINE_TIME_CONSTRAINT'],
# TODO
#Rewards
'REWARDS': config['REWARDS']
}
run_config = {
'num_gpus': config['NUM_GPUS'],
'num_workers': config['NUM_WORKERS'],
'env': ENVIRONMENT,
'env_config': env_config,
'multiagent': {
'policies': {
# tuple values: policy, obs_space, action_space, config
**{a: (None, None, None, { 'model': {'custom_model': ParametricAgentModel }, 'framework': 'tf'}) for a in ['d_'+ str(j) for j in range(NUMBER_OF_DRONES)] + ['c_'+ str(i) for i in range(NUMBER_OF_DRONES, NUMBER_OF_CARS + NUMBER_OF_DRONES)]}
},
'policy_mapping_fn': policy_mapping_fn,
'policies_to_train': ['d_'+ str(i) for i in range(NUMBER_OF_DRONES)] + ['c_'+ str(i) for i in range(NUMBER_OF_DRONES, NUMBER_OF_CARS + NUMBER_OF_DRONES)]
},
#'log_level': "INFO",
"hiddens": [], # For DQN
"dueling": False, # For DQN
}
# Train and Evaluate the agents !
checkpoint, analysis = train(run_config, NAME, SAVE_DIR, STOP_CRITERIA, NUMBER_OF_SAMPLES)
print("Training finished - Checkpoint: ", checkpoint)
env_class = ENVIRONMENT
# Restore trained policies for evaluation
agent = load(run_config, checkpoint)
print("Agent loaded - Agent: ", agent)
# Run the test cases for the specified seeds
runs = {'max_steps': NUMBER_STEPS_PER_EPISODE, 'max_parcels': MAX_NUMBER_OF_PARCELS, 'max_agents': NUMBER_OF_AGENTS}
for seed in SEEDS:
print(seed)
env_config['RANDOM_SEED'] = seed
# env_config is referenced (not copied) inside run_config, so the seed update above propagates
assert run_config['env_config']['RANDOM_SEED'] == seed
result = test_scenario(run_config, agent)
runs.update(result)
return runs
def test_scenario(config, agent):
"""
Loads a pretrained agent, initializes an environment from the seed
and then evaluates it over one episode with the Marl agents and the central baseline.
Returns: dict with results for graph creation for both evaluation / inference runs
"""
# TODO: distinguish run_config vs. env_config
# How to store the two runs --> {seed + [marl / base]: result}
env_class = config['env']
env_config = config['env_config']
seed = env_config['RANDOM_SEED']
policy_mapping_fn = config['multiagent']['policy_mapping_fn']
# Test with MARL
metrics_marl, reward_marl = test(env_class, env_config, policy_mapping_fn, agent)
# Test with CentralBase
env_config['BASELINE_FLAG'] = True
metrics_central, reward_central = test(env_class, env_config, policy_mapping_fn, agent)
env_config['BASELINE_FLAG'] = False
# ASSERT that both optimal values are equal
#assert metrics_marl['optimal'] == metrics_central['optimal']
return {"M_" + str(seed): metrics_marl, "C_" + str(seed): metrics_central}
def policy_mapping_fn(agent_id, episode, worker, **kwargs):
return agent_id
basic_config = {
# experiment
'NAME': 'case_study',
'SAVE_DIR': 'Exp_casestudy',
'ALGO': DQNTrainer,
'ENV': Case_Environment,
'DEBUG_LOGS':False,
'NUM_GPUS': 1,
'NUM_WORKERS': 20,
'NUMBER_OF_SAMPLES': 1,
# Simulation config
'NUMBER_STEPS_PER_EPISODE': 1200,
# Map
'TOPOLOGY': topology_UseCase,
'CHARGING_STATION_NODES': [0,1,5,7,9,12],
# Entities
'NUMBER_OF_DRONES': 2,
'NUMBER_OF_CARS': 2,
'INIT_NUMBER_OF_PARCELS': 15,
'MAX_NUMBER_OF_PARCELS': 15,
'THRESHOLD_ADD_NEW_PARCEL': 0.1, # 10% chance
'MAX_BATTERY_POWER': 100,
#Baseline
'BASELINE_TIME_CONSTRAINT': 10,
'BASELINE_OPT_CONSTANT': 2.5,
#TESTING
'SEEDS': [72, 21, 44, 66, 86, 14],
#Rewards
'REWARDS': {
'PARCEL_DELIVERED': 200,
'STEP_PENALTY': -0.1,
},
'STOP_CRITERIA': {
'timesteps_total': 12_000_000,
}
}
print("Test evaluation functions")
experiment_results = train_and_test_scenarios(basic_config)
print("Test evaluation finished ;)")
experiment_results
saved_results = {'max_steps': 1200,
'max_parcels': 15,
'max_agents': 4,
'M_72': {'step': 1200,
'delivered': {},
'crashed': {'c_3': 90},
'added': {1: 0,
2: 0,
3: 0,
4: 0,
5: 0,
6: 0,
7: 0,
8: 0,
9: 0,
10: 0,
11: 0,
12: 0,
13: 0,
14: 0,
15: 0},
'optimal': None},
'C_72': {'step': 1200,
'delivered': {'p_3': 21, 'p_6': 51},
'crashed': {},
'added': {1: 0,
2: 0,
3: 0,
4: 0,
5: 0,
6: 0,
7: 0,
8: 0,
9: 0,
10: 0,
11: 0,
12: 0,
13: 0,
14: 0,
15: 0},
'optimal': 171},
'M_21': {'step': 1200,
'delivered': {},
'crashed': {'d_0': 93},
'added': {1: 0,
2: 0,
3: 0,
4: 0,
5: 0,
6: 0,
7: 0,
8: 0,
9: 0,
10: 0,
11: 0,
12: 0,
13: 0,
14: 0,
15: 0},
'optimal': None},
'C_21': {'step': 1200,
'delivered': {'p_2': 15, 'p_3': 32, 'p_1': 37},
'crashed': {},
'added': {1: 0,
2: 0,
3: 0,
4: 0,
5: 0,
6: 0,
7: 0,
8: 0,
9: 0,
10: 0,
11: 0,
12: 0,
13: 0,
14: 0,
15: 0},
'optimal': 129},
'M_44': {'step': 1200,
'delivered': {},
'crashed': {'c_3': 362, 'c_2': 825},
'added': {1: 0,
2: 0,
3: 0,
4: 0,
5: 0,
6: 0,
7: 0,
8: 0,
9: 0,
10: 0,
11: 0,
12: 0,
13: 0,
14: 0,
15: 0},
'optimal': None},
'C_44': {'step': 1200,
'delivered': {'p_5': 28},
'crashed': {},
'added': {1: 0,
2: 0,
3: 0,
4: 0,
5: 0,
6: 0,
7: 0,
8: 0,
9: 0,
10: 0,
11: 0,
12: 0,
13: 0,
14: 0,
15: 0},
'optimal': 131},
'M_66': {'step': 1200,
'delivered': {},
'crashed': {'c_2': 1075},
'added': {1: 0,
2: 0,
3: 0,
4: 0,
5: 0,
6: 0,
7: 0,
8: 0,
9: 0,
10: 0,
11: 0,
12: 0,
13: 0,
14: 0,
15: 0},
'optimal': None},
'C_66': {'step': 1200,
'delivered': {'p_9': 77, 'p_5': 82, 'p_13': 100, 'p_8': 118},
'crashed': {},
'added': {1: 0,
2: 0,
3: 0,
4: 0,
5: 0,
6: 0,
7: 0,
8: 0,
9: 0,
10: 0,
11: 0,
12: 0,
13: 0,
14: 0,
15: 0},
'optimal': 123},
'M_86': {'step': 1200,
'delivered': {},
'crashed': {},
'added': {1: 0,
2: 0,
3: 0,
4: 0,
5: 0,
6: 0,
7: 0,
8: 0,
9: 0,
10: 0,
11: 0,
12: 0,
13: 0,
14: 0,
15: 0},
'optimal': None},
'C_86': {'step': 1200,
'delivered': {'p_1': 13, 'p_6': 28, 'p_3': 30, 'p_11': 84, 'p_13': 114},
'crashed': {},
'added': {1: 0,
2: 0,
3: 0,
4: 0,
5: 0,
6: 0,
7: 0,
8: 0,
9: 0,
10: 0,
11: 0,
12: 0,
13: 0,
14: 0,
15: 0},
'optimal': 111},
'M_14': {'step': 1200,
'delivered': {},
'crashed': {'c_3': 356, 'd_0': 368},
'added': {1: 0,
2: 0,
3: 0,
4: 0,
5: 0,
6: 0,
7: 0,
8: 0,
9: 0,
10: 0,
11: 0,
12: 0,
13: 0,
14: 0,
15: 0},
'optimal': None},
'C_14': {'step': 1200,
'delivered': {'p_4': 12, 'p_2': 29, 'p_9': 50, 'p_7': 66, 'p_11': 94},
'crashed': {},
'added': {1: 0,
2: 0,
3: 0,
4: 0,
5: 0,
6: 0,
7: 0,
8: 0,
9: 0,
10: 0,
11: 0,
12: 0,
13: 0,
14: 0,
15: 0},
'optimal': 148}}
def create_chart_bars(results_dict):
""" Function that plots a bar graph with the duration of one episode
run with MARL agents/ Centrality Baseline/ Optimality Baseline, recorded in the :param results_dict:.
"""
# Design choices
colors = {
"marl": 'blue',
"central": 'green',
"optimal": 'red'
}
# Retrieve settings values from results dict
max_steps = results_dict['max_steps']
max_parcels = results_dict['max_parcels']
max_agents = results_dict['max_agents']
# Filtered shallow copy for further computations (drop the 'max_*' settings keys)
scenario_results = {k: v for k, v in results_dict.items() if not k.startswith('max')}
# TODO: instead of an "all dead" marker => simply use a "not all delivered" marker
merged = {} # key is seed as str, value is a dict with [marl, central, optimal, all_dead_marl, all_dead_cent]
# Retrieve the data
for run_id, res in scenario_results.items():
split_id = run_id.split('_') # --> type, seed
key_type, key_seed = split_id[0], split_id[1]
# merge data from the runs with same seed (marl + baselines)
# add new dict if seed not encountered yet
if key_seed not in merged:
merged[key_seed] = {}
_key_delivered = 'marl'
_key_crashed = 'all_dead_marl'
if key_type == 'C':
# Baseline run
merged[key_seed]['optimal'] = res['optimal']
_key_delivered = 'central'
_key_crashed = 'all_dead_central'
# Retrieve number of steps in run
last_step = res['step']
merged[key_seed][_key_delivered] = last_step
all_parcels_delivered = len(res['delivered']) == max_parcels # were all parcels delivered
merged[key_seed][_key_delivered + '_all'] = all_parcels_delivered
if not all_parcels_delivered:
merged[key_seed][_key_delivered] = max_steps
#print("Merged: ", merged)
# example data = [[30, 25, 50, 20],
# [40, 23, 51, 17],
# [35, 22, 45, 19]]
data = [[],[],[],[], []]
labels = []
# split data into type of run
for seed,values in merged.items():
labels.append('S_'+seed)
data[0].append(values['marl'])
data[1].append(values['central'])
data[2].append(values['optimal'])
data[3].append(values['marl_all'])
data[4].append(values['central_all'])
print("Data: ", data)
X = np.arange(len(labels))
#print(X)
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.bar(X + 0.00, data[2], color = colors['optimal'], label="Optimality Baseline", width = 0.25)
ax.bar(X + 0.25, data[0], color = colors['marl'], label="MARL System", width = 0.25, alpha=0.8)
ax.bar(X + 0.50, data[1], color = colors['central'], label="Centrality Baseline", width = 0.25)
plt.xlabel("Experiments")
plt.ylabel("Steps taken")
# Add experiment identifiers x-Axis
plt.xticks(X + 0.25, labels)
# Add legend
ax.legend()
# Plot duration bar graphs from the results
create_chart_bars(experiment_results)
import matplotlib.ticker as mtick
def create_chart_episode_events(results_dict, draw_crashes = False):
""" Function that plots a graph with either the deliveries of parcels or the crashes of agents
over the course of an episode, recorded in the :param results_dict:.
Set the :param draw_crashes: flag for plotting crashes, default are deliveries.
"""
# TODO: also save the graphs to disk, or only display them here?
# TODO: plot parcel additions as well?
# TODO: rethink filling with max_value for the mean computation
# Design choices
colors = {
"marl": 'blue',
"central": 'green',
"optimal": 'red'
}
alpha = 0.6 # Opacity of individual value lines
alpha_opt = 0.2
opt_marker_size = 15
line_width_mean = 5
line_width_optimal = 2
fig = plt.figure()
ax = fig.add_subplot(111)
# Retrieve settings values from results dict
max_steps = results_dict['max_steps']
max_parcels = results_dict['max_parcels']
max_agents = results_dict['max_agents']
_len = max_agents if draw_crashes else max_parcels
_len += 1 # start plot at origin
_key = 'crashed' if draw_crashes else 'delivered'
# Filtered shallow copy for further computations (drop the 'max_*' settings keys)
scenario_results = {k: v for k, v in results_dict.items() if not k.startswith('max')}
# for computation of mean
m_values, c_values, o_values = [], [], []
Y = [str(i) for i in range(0, _len)]
# iterate over configs
for scenario, results in scenario_results.items():
# Default settings -> MARL run
color = colors["marl"]
_type = m_values
label= "marl_system"
if scenario[0] == 'C':
# Baseline run
color = colors["central"]
_type = c_values
label = "centrality_baseline"
# Retrieve and plot optimality baseline
optimal_time = results['optimal']
assert optimal_time is not None
if not draw_crashes: ax.plot(optimal_time, 0, "*", color = colors["optimal"], label="optimality_baseline", markersize=opt_marker_size, alpha= alpha_opt, clip_on=False)
o_values.append(optimal_time)
_num_steps = results['step']
X = [0] + list(results[_key].values())
X = X + [max_steps]*(_len - len(X)) # Fill X up with max_step values for not delivered parcels / not crashed agents
_type.append(X) # add X to the respective mean list
#Y = [str(i) for i in range(0, len(results[_key].values())+1)]
#print("Data: ", results[_key].values())
#print("new X: ", X)
#print("new Y: ", Y)
ax.step(X, Y, label=label, where='post', color=color, alpha=alpha)
# Attempt to improve the filling mess in the plot...
#X = X + [max_steps]*(_len - len(X)) # Fill X up with max_step values for not delivered parcels / not crashed agents
#_type.append(X) # add X to the respective mean list
# compute mean values
m_mean = np.mean(np.array(m_values), axis=0)
c_mean = np.mean(np.array(c_values), axis=0)
o_mean = np.mean(np.array(o_values), axis=0)
ax.step(m_mean, Y, label="marl_system", where='post', color=colors["marl"], linewidth=line_width_mean)
ax.step(c_mean, Y, label="centrality_baseline", where='post', color=colors["central"], linewidth=line_width_mean)
# star for opt: if not draw_crashes: plt.plot(o_mean, 0, "*", label="optimality_baseline", color="r", markersize=opt_marker_size, clip_on=False)
# better?: vertical line for opt
if not draw_crashes: ax.axvline(o_mean, label="optimality_baseline", color=colors["optimal"], linewidth=2, alpha=alpha_opt+0.3)
# y axis as percentage
max_percent = max_agents if draw_crashes else max_parcels
xticks = mtick.PercentFormatter(max_percent)
ax.yaxis.set_major_formatter(xticks)
# Lables and Legend
plt.xlabel("steps")
ylabel = "% of crashed agents" if draw_crashes else "% of delivered parcels"
plt.ylabel(ylabel)
handles, labels = plt.gca().get_legend_handles_labels()
by_label = dict(zip(labels, handles))
plt.legend(by_label.values(), by_label.keys())
# Margins
plt.ylim(bottom=0)
plt.xlim()
plt.margins(x=0, y=0)
plt.show()
# Plot delivery graphs from the results
create_chart_episode_events(experiment_results, draw_crashes=False)
# Plot crash graphs from the results
create_chart_episode_events(experiment_results, draw_crashes=True)
```
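The action masking applied in `ParametricAgentModel.forward` can be reproduced numerically in a few lines. This is a hedged NumPy sketch of the log-mask trick (not the RLlib model itself): adding `log(mask)`, clipped to the most negative representable float32, drives the post-softmax probability of disallowed actions to zero:

```python
import numpy as np

def mask_logits(logits, allowed_mask):
    """Add log(mask) (clipped to float32 min) to the logits."""
    with np.errstate(divide='ignore'):  # log(0) -> -inf is intended here
        inf_mask = np.maximum(np.log(allowed_mask), np.finfo(np.float32).min)
    return logits + inf_mask

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

logits = np.array([1.0, 2.0, 3.0], dtype=np.float32)
mask = np.array([1.0, 0.0, 1.0], dtype=np.float32)  # action 1 is disallowed
probs = softmax(mask_logits(logits, mask))
print(probs)  # probability of action 1 is numerically zero
```

Clipping to `float32.min` instead of using `-inf` directly avoids NaNs when the framework later subtracts or multiplies the masked logits.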
### Manual Actions for debugging
```
##################
env_config = {
'DEBUG_LOGS':False,
'TOPOLOGY': topology_UseCase,
# Simulation config
'NUMBER_STEPS_PER_EPISODE': 1000,
#'NUMBER_OF_TIMESTEPS': NUMBER_OF_TIMESTEPS,
'RANDOM_SEED': None, # 42
# Map
'CHARGING_STATION_NODES': [0,1,2,3,4],
# Entities
'NUMBER_OF_DRONES': 2,
'NUMBER_OF_CARS': 2,
'MAX_BATTERY_POWER': 100, # TODO split this for drone and car??
'INIT_NUMBER_OF_PARCELS': 3,
'MAX_NUMBER_OF_PARCELS': 3,
'THRESHOLD_ADD_NEW_PARCEL': 0.01,
# Baseline settings
'BASELINE_FLAG': False, # is set True in the test function when needed
'BASELINE_OPT_CONSTANT': 2.5,
'BASELINE_TIME_CONSTRAINT': 5,
# TODO
#Rewards
'REWARDS': {
'PARCEL_DELIVERED': 200,
'STEP_PENALTY': -0.1,
},
}
env = Case_Environment(env_config)
env.state
#env.ACTION_DROPOFF
# TODO select actions to give agent 0 reward!!
#print(env.action_space)
actions_1 = {'d_0': 6, 'd_1': 2, 'c_2': 3, 'c_3': 4}
#actions_1 = {'d_0': 0, 'd_1': 0, 'c_2': 5, 'c_3': 5}
actions_2 = {'d_0': 2, 'd_1': 8, 'c_2': 7, 'c_3':0}
actions_3 = {'d_0': 5, 'd_1': 5, 'c_2':2 }
new_obs, rewards, dones, infos = env.step(actions_1)
print(infos)
print(f"New Obs are: {new_obs}")
print(rewards)
print("------------------")
new_obs2, rewards2, dones2, infos2 = env.step(actions_2)
print(f"New Obs are: {new_obs2}")
print(rewards2)
print("------------------")
new_obs3, rewards3, dones3, infos3 = env.step(actions_3)
print(f"New Obs are: {new_obs3}")
print(rewards3)
actions_4 = {'d_0': 0, 'd_1': 0, 'c_2': 1, 'c_3': 0}
actions_5 = {'d_0': 0, 'd_1': 0, 'c_2': 0, 'c_3': 0}
actions_6 = {'d_0': 0, 'd_1': 0, 'c_2': 5, 'c_3': 0}
new_obs, rewards, dones, infos = env.step(actions_4)
print(f"New Obs are: {new_obs}")
print(rewards)
print("------------------")
new_obs2, rewards2, dones2, infos2 = env.step(actions_5)
print(f"New Obs are: {new_obs2}")
print(rewards2)
print("------------------")
new_obs3, rewards3, dones3, infos3 = env.step(actions_6)
print(f"New Obs are: {new_obs3}")
print(rewards3)
print("------------------")
print(dones3)
# TENSORBOARD
# Load the TensorBoard notebook extension
%load_ext tensorboard
#Start tensorboard below the notebook
#%tensorboard --logdir logs
```
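The debugging cell above (and the `test` function earlier) both follow the standard multi-agent episode loop: every agent submits an action each step, and the episode ends when the env reports `done['__all__']`. A minimal sketch, where `ToyMultiAgentEnv` is a hypothetical stand-in for `Case_Environment`:

```python
# Minimal multi-agent step loop; ToyMultiAgentEnv is a hypothetical stand-in
# for Case_Environment and only mimics the dict-based step interface.
class ToyMultiAgentEnv:
    def __init__(self, n_steps=3):
        self.n_steps = n_steps
        self.t = 0

    def reset(self):
        self.t = 0
        return {'d_0': 0, 'c_1': 0}  # one observation per agent

    def step(self, actions):
        self.t += 1
        obs = {a: self.t for a in actions}
        rewards = {a: -0.1 for a in actions}        # step penalty per agent
        done = {'__all__': self.t >= self.n_steps}  # episode-level done flag
        return obs, rewards, done, {}

env = ToyMultiAgentEnv()
obs, total, done = env.reset(), 0.0, False
while not done:
    actions = {agent_id: 0 for agent_id in obs}  # e.g. "do nothing"
    obs, rewards, done_dict, info = env.step(actions)
    done = done_dict['__all__']
    total += sum(rewards.values())  # sum rewards over all agents
print(total)
```

In the real environment the actions would instead come from `agent.compute_action(agent_obs, policy_id=...)`, exactly as in `test`.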
```
import math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import norm
# Plot normal distribution areas
k = 3 # will plot the areas below -k, above k, and between -k and k
mean = 0 # plotting assumes mean = 0
std = 1
plt.rcParams["figure.figsize"] = (35,35)
plt.fill_between(x=np.arange(-4*std+mean,-k*std+mean,0.01),
y1= norm.pdf(np.arange(-4*std+mean,-k*std+mean,0.01),mean,std) ,
facecolor='red',
alpha=0.35)
plt.fill_between(x=np.arange(k*std+mean,4*std+mean,0.01),
                 y1= norm.pdf(np.arange(k*std+mean,4*std+mean,0.01),mean,std) ,
facecolor='red',
alpha=0.35)
plt.fill_between(x=np.arange(-k*std+mean,k*std+mean,0.01),
y1= norm.pdf(np.arange(-k*std+mean,k*std+mean,0.01),mean,std) ,
facecolor='blue',
alpha=0.35)
prob_under_minusk = norm.cdf(x= -k,
loc = 0,
scale= 1)
prob_over_k = 1 - norm.cdf(x= k,
loc = 0,
scale= 1)
between_prob = 1-(prob_under_minusk+prob_over_k)
plt.text(x=-1.8, y=0.03, s= round(prob_under_minusk,3))
plt.text(x=-0.2, y=0.1, s= round(between_prob,3))
plt.text(x=1.4, y=0.03, s= round(prob_over_k,3))
plt.show()
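# The shaded areas above can be cross-checked analytically using only the
# stdlib: for a standard normal variable Z, P(-k < Z < k) = erf(k / sqrt(2)).
import math

def prob_within_k_sigma(k_check):
    """P(-k < Z < k) for a standard normal Z."""
    return math.erf(k_check / math.sqrt(2))

for k_check in (1, 2, 3):
    print(k_check, round(prob_within_k_sigma(k_check), 4))  # the 68-95-99.7 rule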
# -*- coding: utf-8 -*-
# <nbformat>2</nbformat>
# <markdowncell>
# <h1>Readings</h1>
# <ul>
# <li>Bishop: 3.1.0-3.1.4</li>
# <li>Ng: Lecture 2 pdf, page 4, LMS algorithm</li>
# <li>Ng: Lecture 2 pdf, page 13, Locally weighted linear regression</li>
# <li>Bishop: 3.3.0-3.3.2</li>
# </ul>
# <p><font color="blue"><em><b>Regression</b></em></font>: Given the value of a D-dimensional input vector $\mathbf{x}$, predict the value of one or more <em>target</em> variables</p>
# <p><font color="blue"><b><em>Linear</em></b></font>: The models discussed in this section are <em>linear</em> with respect to the adjustable parameters, <em>not</em>
# necessarily with respect to the input variables. </p>
# <markdowncell>
# <h1>Creating A Model</h1>
# In this notebook, our objective is to construct models that can predict the value of some target variable, $t$, given some
# input vector, $\mathbf{x}$, where the target value can occupy any value in some space - though here we'll only consider the space of
# real valued vectors. We want the models to allow for uncertainty in the accuracy of the model and/or noise on the observed data.
# We also want the model to provide some information on our confidence in a given prediction.
#
# The first step is to construct a mathematical model that adequately represents the observations we wish to predict.
# The model we will use is described in the next two subsections. It is **important to note** that the model itself is independent
# of the use of a frequentist or Bayesian viewpoint. It is *how we obtain the free parameters* of the model that is affected by using
# frequentist or Bayesian approaches. However, if the model is a poor choice for a particular observation, then its predictive
# capability is likely to be poor whether we use a frequentist or Bayesian approach to obtain the parameters.
# <markdowncell>
# <h2><font size="4">Gaussian Noise: Model Assumption 1</font></h2>
# We will *assume* throughout this notebook that the target variable is described by <br/><br/>
# $t = y(\mathbf{x},\mathbf{w}) + \epsilon$
# <br/><br/>
# where $y(\mathbf{x},\mathbf{w})$ is an as of yet undefined function of $\mathbf{x}$ and $\mathbf{w}$ and $\epsilon$ is a <font color="red"><em>Gaussian</em></font> distributed noise component.
#
# **Gaussian Noise?** The derivations provided below all assume Gaussian noise on the target data. Is this a good assumption? In many cases yes. The argument hinges
# on the use of the [Central_Limit_Theorem](http://en.wikipedia.org/wiki/Central_limit_theorem), which basically says that the **sum** of many independent random
# variables behaves like a Gaussian distributed random variable. The _noise_ term in this model, $\epsilon$, can be thought of as the sum of features
# not included in the model function, $y(\mathbf{x},\mathbf{w})$. Assuming these features are themselves independent random variables, the Central Limit Theorem suggests a Gaussian model
# is appropriate, provided there are many independent unaccounted-for features. It is possible that there is only a small number of unaccounted-for features,
# or that there is genuine _non-Gaussian_ noise in our observation measurements, e.g. sensor shot noise that often has a Poisson distribution. In such cases, the assumption is no longer valid.
# <markdowncell>
# <h2><font size="4">General Linear Model: Model Assumption 2</font></h2>
# In order to proceed, we need to define a model for $y(\mathbf{x},\mathbf{w})$. We will use the *general linear regression* model defined as follows <br/><br/>
# $y(\mathbf{x},\mathbf{w}) = \sum_{j=0}^{M-1} w_j\phi_j(\mathbf{x}) = \mathbf{w}^T\mathbf{\phi}(\mathbf{x})$ <br/><br/>
# where $\mathbf{x}$ is a $D$ dimensional input vector, $M$ is the number of free parameters in the model, $\mathbf{w}$ is a column
# vector of the free parameters, and
# $\phi(\mathbf{x}) = \\{\phi_0(\mathbf{x}),\phi_1(\mathbf{x}), \ldots,\phi_{M-1}(\mathbf{x})\\}$ with $\phi_0(\mathbf{x})=1$ is a set of basis functions where
# each $\phi_i$ is in the real valued function space
# $\\{f \in \mathbf{R}^D\Rightarrow\mathbf{R}^1\\}$. It is important to note that the set of basis functions, $\phi$, <font color="red">need
# not be linear</font> with respect to $\mathbf{x}$. Further, note that this model defines an entire class of models. In order to
# contruct an actual predictive model for some observable quantity, we will have to make a further assumption on the choice of the
# set of basis functions, $\phi$. However, for the purposes of deriving general results, we can delay this choice.
#
# Note that $\mathbf{w}^T$ is a $1 \times M$ vector and that $\mathbf{\phi}(\mathbf{x})$ is an $M \times 1$ vector, so that the target, $y$,
# is a scalar. This will be extended to $K$ dimensional target variables below.
#
#
# <markdowncell>
# <h1>Frequentist View: Maximum Likelihood</h1>
# Let's now embark on the path of obtaining the free parameters, $\mathbf{w}$, of our model. We will begin using a *frequentist*, or
# *maximum likelihood*, approach. This approach assumes that we first obtain observation training data, $\mathbf{t}$, and that the *best*
# value of $\mathbf{w}$, is that which maximizes the likelihood function, $p(\mathbf{t}|\mathbf{w})$.
#
# <p>Under the Gaussian noise assumption, it can be shown that the likelihood function for the training data is <br/><br/>
#
# $p(\mathbf{t}|\mathbf{X},\mathbf{w},\sigma^2) = \prod_{n=1}^N ND(t_n|\mathbf{w}^T\phi(\mathbf{x}_n),\sigma^2)$ <br/><br/>
#
# and its logarithm is <br/><br/>
#
# $\ln p(\mathbf{t}|\mathbf{X},\mathbf{w},\sigma^2) = \frac{N}{2}\ln\frac{1}{\sigma^2} -\frac{N}{2}\ln(2\pi) - \frac{1}{2\sigma^2}\sum_{n=1}^N
# \{t_n -\mathbf{w}^T\phi(\mathbf{x}_n)\}^2$ <br/><br/>
#
# where $\mathbf{X}=\{\mathbf{x}_1,\ldots,\mathbf{x}_N\}$ is the input value set for the corresponding $N$ oberved output values contained in the vector
# $\mathbf{t}$, and $ND(\mu,\sigma^2)$ is the Normal Distribution (Gaussian). (I used ND instead of the standard N to avoid confusion
# with the product limit).
#
# Setting the derivative of the log-likelihood with respect to $\mathbf{w}$ equal to zero, one can obtain
# the maximum likelihood parameters, given by the <em>normal equations</em>: <br/><br/>
# $\mathbf{w}_{ML} = \left(\mathbf{\Phi}^T\mathbf{\Phi}\right)^{-1}\mathbf{\Phi}^T\mathbf{t}$ <br/><br/>
# where $\Phi$ is the $N \times M$ <em>design matrix</em> with elements $\Phi_{n,j}=\phi_j(\mathbf{x}_n)$, and $\mathbf{t}$ is the $N \times K$
# matrix of training set target values (for $K=1$, it is simply a column vector). Note that $\left(\mathbf{\Phi}^T\mathbf{\Phi}\right)^{-1}$ is an $M \times M$ matrix and $\mathbf{\Phi}^T$ is $M \times N$, so that $\mathbf{w}_{ML}=\left(\mathbf{\Phi}^T \mathbf{\Phi}\right)^{-1}\mathbf{\Phi}^T\mathbf{t}$ is
# $(M \times M)\times(M \times N)\times(N \times K) = M \times K$, where $M$ is the number of free parameters and $K$ is the number of predicted
# target values for a given input. <br/>
# </p>
#
# Note that the only term in the likelihood function that depends on $\mathbf{w}$ is the last term. <font color="red">Thus, maximizing the likelihood
# function with respect to $\mathbf{w}$ __under the assumption of Gaussian noise__ is equivalent to minimizing a
# sum-of-squares error function. </font>
#
# <p>
# The quantity, $\mathbf{\Phi}^\dagger=\left(\mathbf{\Phi}^T\mathbf{\Phi}\right)^{-1}\mathbf{\Phi}^T$ is known as the
# <em>Moore-Penrose pseudo-inverse</em> of $\Phi$. When $\Phi$ is square and invertible, the pseudo-inverse reduces to the ordinary inverse, $\Phi^{-1}$. When
# $\Phi^T\Phi$ is singular or ill-conditioned, the pseudo-inverse can be computed with techniques such as <em>singular value decomposition</em>.
# </p>
# <markdowncell>
# <h3>Example 1</h3>
# <h4>(a) Linear Data</h4>
# <p>Let's generate data of the form $y = m x + b + \epsilon$, where $\epsilon$ is a random Gaussian component with zero mean. Given this data, let's apply the maximum likelihood
# solution to find values for the parameters $m$ and $b$. Given that we know our data is linear, we choose basis functions $\phi_0(x)=1$ and $\phi_1(x)=x$. Thus,
# our model will be $y=\theta_0\phi_0(x) + \theta_1\phi_1(x)$, where presumably the solution should yield $\theta_0 \approx b$ and $\theta_1 \approx m$.
# </p>
```
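The normal-equations solution derived above can be sketched directly in NumPy. This is a minimal illustration on synthetic linear data (the names `m_true`, `b_true`, `Phi`, and `w_ml` are my own, not from the text):

```
import numpy as np

rng = np.random.RandomState(0)

# synthetic linear data: t = m*x + b + Gaussian noise
m_true, b_true = 2.0, -1.0
x = np.linspace(0.0, 1.0, 200)
t = m_true * x + b_true + 0.1 * rng.randn(x.size)

# design matrix with basis functions phi_0(x) = 1, phi_1(x) = x
Phi = np.column_stack([np.ones_like(x), x])

# normal equations: w_ML = (Phi^T Phi)^{-1} Phi^T t
w_ml = np.linalg.solve(Phi.T @ Phi, Phi.T @ t)

# equivalently, via the Moore-Penrose pseudo-inverse
w_pinv = np.linalg.pinv(Phi) @ t

print(w_ml)  # approximately [b_true, m_true]
```

In practice `np.linalg.lstsq(Phi, t, rcond=None)` computes the same solution more robustly when $\Phi^T\Phi$ is ill-conditioned.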
Jupyter notebooks can contain Markdown (formatted text) or Python code.
This is a Markdown cell.
Below are Python cells. You can perform standard arithmetic in Python cells.
```
3 + 7
7 * 4
```
You can assign values to variables using `=`. This doesn't print any values out.
```
age = 42
age
```
Values include integers (above) and strings (of characters).
```
first_name = 'Charles'
first_name
```
Variable names are case-sensitive. `Age` is different from `age`.
```
Age
```
You can evaluate series of code statements by putting them on different lines.
```
age
first_name
```
As you see above, only the last-evaluated statement is printed. To print multiple statements, use the `print()` function.
```
print(age)
# Do something in code in between
print(first_name)
```
You can also print multiple variables, by separating them by commas.
```
print('In three years', first_name, 'will be', age + 3, 'years old.')
first_name_company = 'genentech'
first_name_company
```
You can type the start of a variable, such as `first_`, then press `Tab` to see a list of suggestions.
`whos` lists all local variable names, types, and values.
```
whos
```
If you make a typo, you can delete variables using the `del` keyword, although it's recommended that you fix the typo and restart your kernel instead (Refresh icon above).
```
del age
age
whos
```
You can change the value of variables by reassigning them. You can even assign variables to values of a different type (`int` to `str` below), and this will update the variable type.
```
age = 42
age = '53'
whos
```
Strings are sequences of characters. You can subselect specific characters using square brackets `string_name[index]`. Indexing starts at `0` (not `1`), similar to how the ground floor is the `0`th floor in UK, and the `1`st floor is `1` floor *offset* from the ground.
```
atom_name = 'helium'
print(atom_name[0])
print(atom_name[3])
```
You can select subsequences of strings by using `[low_index:high_index]`
```
print(atom_name[0:2])
print(atom_name[0:4])
```
Negative indices count backwards from the end of the string. `-1` is the last character, `-2` is the second to last character, etc.
```
atom_name = 'helium'
print(atom_name[-1])
```
`len()` provides the length of the string (number of characters). Spaces are included in the length
```
print(atom_name)
print(len(atom_name))
print('Length of atom_name is', len(atom_name))
atom_name = 'helium'
atom_name_length = len(atom_name)
print(atom_name_length)
```
Values are only assigned when you re-run a code block, so don't forget to update a value that needs to be used. Below, `atom_name_length` is still 6 until it gets reassigned.
```
atom_name = 'hydrogen'
print(atom_name_length)
atom_name_length = len(atom_name)
print(atom_name_length)
```
## Variable types
You can get the data type of a variable using `whos` or the `type()` function. `class` indicates that `int`, `str`, `builtin_function_or_method` are *types* of objects.
```
whos
print(type(print))
print(type(3))
print(type(first_name))
```
The variable type determines what operations we can perform on the variables. We can subtract integers. We cannot subtract strings.
```
print(5 - 3)
print('helloh' - 'h')  # this raises a TypeError: strings do not support '-'
print('hello' - 3)     # this also raises a TypeError
```
We can `+` strings. This puts the strings together (without any spaces)
```
print('hello' + 'my' + 'name' + 'is')
```
If we can `+` strings, then multiplication (`*`) is the same as addition multiple times. These two cells are equivalent.
```
'hello' * 3
'hello' + 'hello' + 'hello'
```
String multiplication is useful when it is tedious to write out a string many times.
```
print('=' * 20)
print('important text')
print('=' * 20)
```
Note that strings that "represent" integers are still strings. Most programming languages (including Python) require you to be specific about the desired operation. Below, `'234' * 2` repeats the string, instead of multiplying the value. You can convert the string to an integer first using `int()`.
```
number_string = '234'
print(number_string * 2)
int(number_string) * 2
```
The one exception is that Python will convert between integers and floats (decimals) as necessary. Note that there can be slight precision errors with floats.
```
2 + 2.4
10 / 3
```
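A classic illustration of float precision: `0.1 + 0.2` is not exactly `0.3`, because these decimals cannot be represented exactly in binary floating point.

```
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```

When comparing floats, check that the difference is smaller than some tolerance instead of using `==`.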
In Python, `^` is not the power operator; it is reserved for the bitwise XOR operation.
Two asterisks with no space (`**`) indicate the power operation.
```
three_squared = 3**2
print(three_squared)
```
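For comparison, here is what `^` actually does: it XORs the binary representations of the integers, which is easy to confuse with exponentiation.

```
print(3 ** 2)  # 9 (power)
print(3 ^ 2)   # 1 (bitwise XOR: 0b11 ^ 0b10 == 0b01)
```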
## Some common errors
```
# Be sure to close quotation marks, brackets, and parentheses
name = 'Charles
# Extra equals sign, without a space
age = = 52
age == 52 # <- `==` is valid syntax but not always what you want.
```
## Documentation
It is good practice to document and comment your code, so future readers can understand your approach.
You can do all sorts of formatting with Markdown. For example:
```markdown
# Documentation text
## Header 2
Here is some text.
Include bullet points:
* bullet 1
* bullet 2
```
prints out the following:
# Documentation text
## Header 2
Here is some text.
Include bullet points:
* bullet 1
* bullet 2
```
# You can also comment in Code-blocks using hashtags
print(1234) # comments can also go on the same line as code, but be careful of extra-long comments that are difficult to read (like this one)
```
In `.py` files, you may also see triple-quotes:
```python
'''
Multi-line comment
here
'''
```
to indicate comments. These are really just unassigned string literals; in Jupyter notebooks, prefer `#` comments.
```
# Comment things using hashtags in Jupyter
```
**Predicting IDC in Breast Cancer Histology Images**
Breast cancer is the most common form of cancer in women, and invasive ductal carcinoma (IDC) is the most common form of breast cancer. Accurately identifying and categorizing breast cancer subtypes is an important clinical task, and automated methods can be used to save time and reduce error.
The goal of this script is to identify IDC when it is present in otherwise unlabeled histopathology images. The dataset consists of approximately five thousand 50x50 pixel RGB digital images of H&E-stained breast histopathology samples that are labeled as either IDC or non-IDC. These numpy arrays are small patches that were extracted from digital images of breast tissue samples. The breast tissue contains many cells but only some of them are cancerous. Patches that are labeled "1" contain cells that are characteristic of invasive ductal carcinoma. For more information about the data, see https://www.ncbi.nlm.nih.gov/pubmed/27563488 and http://spie.org/Publications/Proceedings/Paper/10.1117/12.2043872.
For more information about IDC and breast cancer, please review the following publications:
* https://www.ncbi.nlm.nih.gov/pubmed/27864452
* https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3893344/
* https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4952020/
*Step 1: Import Modules*
```
import numpy as np
import matplotlib.pylab as plt
from scipy.misc import imresize, imread  # note: deprecated and removed in newer SciPy versions; use imageio/Pillow instead
import itertools
import sklearn
from sklearn import model_selection
from sklearn.model_selection import train_test_split, KFold, cross_val_score, StratifiedKFold, learning_curve, GridSearchCV
from sklearn.metrics import confusion_matrix, make_scorer, accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
import keras
from keras import backend as K
from keras.callbacks import Callback, EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
from keras.preprocessing.image import ImageDataGenerator
from keras.utils.np_utils import to_categorical
from keras.models import Sequential, model_from_json
from keras.optimizers import SGD, RMSprop, Adam, Adagrad, Adadelta
from keras.layers import Dense, Dropout, Activation, Flatten, BatchNormalization, Conv2D, MaxPool2D, MaxPooling2D
%matplotlib inline
```
*Step 2: Load Data*
```
X = np.load('../input/X.npy') # images
Y = np.load('../input/Y.npy') # labels associated to images (0 = no IDC, 1 = IDC)
```
*Step 3: Describe Data*
```
def describeData(a,b):
print('Total number of images: {}'.format(len(a)))
print('Number of IDC(-) Images: {}'.format(np.sum(b==0)))
print('Number of IDC(+) Images: {}'.format(np.sum(b==1)))
print('Percentage of positive images: {:.2f}%'.format(100*np.mean(b)))
print('Image shape (Width, Height, Channels): {}'.format(a[0].shape))
describeData(X,Y)
```
*Step 4: Plot Data*
```
imgs0 = X[Y==0] # (0 = no IDC, 1 = IDC)
imgs1 = X[Y==1]
def plotOne(a,b):
"""
Plot one numpy array
"""
plt.subplot(1,2,1)
plt.title('IDC (-)')
plt.imshow(a[100])
plt.subplot(1,2,2)
plt.title('IDC (+)')
plt.imshow(b[100])
plotOne(imgs0, imgs1)
def plotTwo(a,b):
"""
Plot a bunch of numpy arrays sorted by label
"""
for row in range(3):
plt.figure(figsize=(20, 10))
for col in range(3):
plt.subplot(1,8,col+1)
plt.title('IDC (-)')
plt.imshow(a[row+col])
plt.axis('off')
plt.subplot(1,8,col+4)
plt.title('IDC (+)')
plt.imshow(b[row+col])
plt.axis('off')
plotTwo(imgs0, imgs1)
```
*Step 4: Preprocess Data*
```
def plotHistogram(a):
"""
Plot histogram of RGB Pixel Intensities
"""
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.imshow(a)
plt.axis('off')
plt.title('IDC(+)' if Y[1] else 'IDC(-)')
histo = plt.subplot(1,2,2)
histo.set_ylabel('Count')
histo.set_xlabel('Pixel Intensity')
n_bins = 30
plt.hist(a[:,:,0].flatten(), bins= n_bins, lw = 0, color='r', alpha=0.5);
plt.hist(a[:,:,1].flatten(), bins= n_bins, lw = 0, color='g', alpha=0.5);
plt.hist(a[:,:,2].flatten(), bins= n_bins, lw = 0, color='b', alpha=0.5);
plotHistogram(X[100])
```
The pixel values range from 0 to 255, but we want them scaled from 0 to 1. This will make the data compatible with a wide variety of classification algorithms.
We also want to set aside 20% of the data for testing. A held-out test set does not prevent overfitting, but it lets us detect it.
```
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)
# Reduce Sample Size for DeBugging
X_train = X_train[0:30000]
Y_train = Y_train[0:30000]
X_test = X_test[0:30000]
Y_test = Y_test[0:30000]
# Normalize the data (pixel values range from 0 to 255)
X_train = X_train / 255.0
X_test = X_test / 255.0
print("Training Data Shape:", X_train.shape, Y_train.shape)
print("Testing Data Shape:", X_test.shape, Y_test.shape)
plotHistogram(X_train[100])
```
Now the data is scaled from 0 to 1.
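A side note on the split above: `train_test_split` was called without `stratify`, so with an imbalanced dataset the IDC(+) fraction can drift between the train and test subsets. A minimal standalone sketch of a stratified split (using simulated labels, not the notebook's `X`/`Y`):

```
import numpy as np
from sklearn.model_selection import train_test_split

# simulated imbalanced labels: 80% IDC(-), 20% IDC(+)
y = np.array([0] * 800 + [1] * 200)
X_fake = np.arange(len(y)).reshape(-1, 1)  # stand-in features

Xtr, Xte, ytr, yte = train_test_split(
    X_fake, y, test_size=0.2, stratify=y, random_state=0)

# class proportions are preserved in both subsets
print(ytr.mean(), yte.mean())  # both ~0.2
```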
Next we can try using some standard classification algorithms to predict whether or not IDC is present in each given sample.
*Step 5: Evaluate Classification Algorithms*
```
# Make Data 1D for compatability with standard classifiers
X_trainShape = X_train.shape[1]*X_train.shape[2]*X_train.shape[3]
X_testShape = X_test.shape[1]*X_test.shape[2]*X_test.shape[3]
X_trainFlat = X_train.reshape(X_train.shape[0], X_trainShape)
X_testFlat = X_test.reshape(X_test.shape[0], X_testShape)
#runLogisticRegression
def runLogisticRegression(a,b,c,d):
"""Run LogisticRegression w/ Kfold CV"""
model = LogisticRegression()
model.fit(a,b)
kfold = model_selection.KFold(n_splits=10)
accuracy = model_selection.cross_val_score(model, c,d, cv=kfold, scoring='accuracy')
mean = accuracy.mean()
stdev = accuracy.std()
print('LogisticRegression - Training set accuracy: %s (%s)' % (mean, stdev))
print('')
runLogisticRegression(X_trainFlat, Y_train, X_testFlat, Y_test)
# Compare Performance of Classification Algorithms
def compareABunchOfDifferentModelsAccuracy(a,b,c,d):
"""
compare performance of classifiers on X_train, X_test, Y_train, Y_test
http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html#sklearn.metrics.accuracy_score
http://scikit-learn.org/stable/modules/model_evaluation.html#accuracy-score
"""
print('')
print('Compare Multiple Classifiers:')
print('')
print('K-Fold Cross-Validation Accuracy:')
print('')
models = []
models.append(('LR', LogisticRegression()))
models.append(('RF', RandomForestClassifier()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('SVM', SVC()))
models.append(('LSVM', LinearSVC()))
models.append(('GNB', GaussianNB()))
models.append(('DTC', DecisionTreeClassifier()))
#models.append(('GBC', GradientBoostingClassifier()))
#models.append(('LDA', LinearDiscriminantAnalysis()))
resultsAccuracy = []
names = []
for name, model in models:
model.fit(a, b)
kfold = model_selection.KFold(n_splits=10)
accuracy_results = model_selection.cross_val_score(model, c, d, cv=kfold, scoring='accuracy')
resultsAccuracy.append(accuracy_results)
names.append(name)
accuracyMessage = "%s: %f (%f)" % (name, accuracy_results.mean(), accuracy_results.std())
print(accuracyMessage)
# boxplot algorithm comparison
fig = plt.figure()
fig.suptitle('Algorithm Comparison: Accuracy')
ax = fig.add_subplot(111)
plt.boxplot(resultsAccuracy)
ax.set_xticklabels(names)
ax.set_ylabel('Cross-Validation: Accuracy Score')
plt.show()
return
compareABunchOfDifferentModelsAccuracy(X_trainFlat, Y_train, X_testFlat, Y_test)
def defineModels():
"""
This function just defines each abbreviation used in the previous function (e.g. LR = Logistic Regression)
"""
print('')
print('LR = LogisticRegression')
print('RF = RandomForestClassifier')
print('KNN = KNeighborsClassifier')
print('SVM = Support Vector Machine SVC')
print('LSVM = LinearSVC')
print('GNB = GaussianNB')
print('DTC = DecisionTreeClassifier')
#print('GBC = GradientBoostingClassifier')
#print('LDA = LinearDiscriminantAnalysis')
print('')
return
defineModels()
```
With the Support Vector Machine we are getting ~75% accuracy. Next I will plot a confusion matrix for the results that were produced by the Support Vector Machine in order to verify that we do not have too many false positives. I will also plot a learning curve to see if our model is overfitting or if our model has high bias.
```
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
"""
Plots a learning curve. http://scikit-learn.org/stable/modules/learning_curve.html
"""
plt.figure()
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
def plotLotsOfLearningCurves(a,b):
"""Plot a bunch of learning curves http://scikit-learn.org/stable/modules/learning_curve.html"""
models = []
models.append(('Support Vector Machine', SVC()))
for name, model in models:
plot_learning_curve(model, 'Learning Curve For %s Classifier'% (name), a,b, (0.5,1), 10)
plotLotsOfLearningCurves(X_trainFlat, Y_train)
# Look at confusion matrix
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.figure(figsize = (5,5))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
#Run SVC w/ Confusion Matrix
def runSVCconfusion(a,b,c,d):
"""Run SVC w/ Kfold CV + Confusion Matrix"""
model = SVC()
model.fit(a, b)
prediction = model.predict(c)
kfold = model_selection.KFold(n_splits=10)
accuracy = model_selection.cross_val_score(model, c,d, cv=kfold, scoring='accuracy')
mean = accuracy.mean()
stdev = accuracy.std()
print('\nSupport Vector Machine - Training set accuracy: %s (%s)' % (mean, stdev),"\n")
cnf_matrix = confusion_matrix(d, prediction)
np.set_printoptions(precision=2)
    class_names = ["Diagnosis: IDC(-)", "Diagnosis: IDC(+)"]
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names,
title='Confusion matrix, without normalization')
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
title='Normalized confusion matrix')
plt.show()
runSVCconfusion(X_trainFlat, Y_train, X_testFlat, Y_test)
```
In these confusion plots the Y-axis represents the true labels ["IDC(-)" or "IDC(+)"] while the X-axis represents the labels predicted by the Support Vector Machine. Ideally, the predicted labels will match the true labels. This is actually pretty good! But on the learning curve you can see that the training score tracks very closely to the cross-validation score, which makes me suspicious that the model might be overfitting. In any case, we should be able to improve our model's accuracy by using neural networks. Next I will use the original 2-D data and try to solve this classification problem with 2D convolutional neural networks.
```
# Encode labels to hot vectors (ex : 2 -> [0,0,1,0,0,0,0,0,0,0])
Y_train = to_categorical(Y_train, num_classes = 2)
Y_test = to_categorical(Y_test, num_classes = 2)
# Special callback to see learning curves
class MetricsCheckpoint(Callback):
"""Callback that saves metrics after each epoch"""
def __init__(self, savepath):
super(MetricsCheckpoint, self).__init__()
self.savepath = savepath
self.history = {}
def on_epoch_end(self, epoch, logs=None):
for k, v in logs.items():
self.history.setdefault(k, []).append(v)
np.save(self.savepath, self.history)
def plotKerasLearningCurve():
plt.figure(figsize=(10,5))
metrics = np.load('logs.npy')[()]
filt = ['acc'] # try to add 'loss' to see the loss learning curve
for k in filter(lambda x : np.any([kk in x for kk in filt]), metrics.keys()):
l = np.array(metrics[k])
plt.plot(l, c= 'r' if 'val' not in k else 'b', label='val' if 'val' in k else 'train')
x = np.argmin(l) if 'loss' in k else np.argmax(l)
y = l[x]
plt.scatter(x,y, lw=0, alpha=0.25, s=100, c='r' if 'val' not in k else 'b')
plt.text(x, y, '{} = {:.4f}'.format(x,y), size='15', color= 'r' if 'val' not in k else 'b')
plt.legend(loc=4)
plt.axis([0, None, None, None]);
plt.grid()
plt.xlabel('Number of epochs')
def runKerasCNN(a,b,c,d):
"""
https://github.com/fchollet/keras/blob/master/examples/mnist_cnn.py
"""
batch_size = 128
num_classes = 2
epochs = 12
img_rows, img_cols = X_train.shape[1],X_train.shape[2]
input_shape = (img_rows, img_cols, 3)
x_train = a
y_train = b
x_test = c
y_test = d
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
verbose=1,
epochs=epochs,
validation_data=(x_test, y_test),callbacks = [MetricsCheckpoint('logs')])
score = model.evaluate(x_test, y_test, verbose=0)
print('\nKeras CNN #1A - accuracy:', score[1],'\n')
y_pred = model.predict(c)
map_characters = {0: 'IDC(-)', 1: 'IDC(+)'}
print('\n', sklearn.metrics.classification_report(np.where(d > 0)[1], np.argmax(y_pred, axis=1), target_names=list(map_characters.values())), sep='')
Y_pred_classes = np.argmax(y_pred,axis = 1)
Y_true = np.argmax(Y_test,axis = 1)
confusion_mtx = confusion_matrix(Y_true, Y_pred_classes)
plot_confusion_matrix(confusion_mtx, classes = list(map_characters.values()))
runKerasCNN(X_train, Y_train, X_test, Y_test)
plotKerasLearningCurve()
```
The confusion matrix illustrates that this model is predicting IDC(+) too often, and the learning curve illustrates that the validation score is consistently less than the training score. Together, these results suggest that our model may have some bias.
I will try a different artificial neural network.
```
def runAnotherKeras(a, b,c,d):
# my CNN architechture is In -> [[Conv2D->relu]*2 -> MaxPool2D -> Dropout]*2 -> Flatten -> Dense -> Dropout -> Out
batch_size = 128
num_classes = 2
epochs = 12
# input image dimensions
img_rows, img_cols = X_train.shape[1],X_train.shape[2]
input_shape = (img_rows, img_cols, 3)
model = Sequential()
model.add(Conv2D(filters = 32, kernel_size = (3,3),padding = 'Same',
activation ='relu', input_shape = input_shape))
model.add(Conv2D(filters = 32, kernel_size = (3,3),padding = 'Same',
activation ='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same',
activation ='relu'))
model.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same',
activation ='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(Conv2D(filters = 86, kernel_size = (3,3),padding = 'Same',
activation ='relu'))
model.add(Conv2D(filters = 86, kernel_size = (3,3),padding = 'Same',
activation ='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(Flatten())
#model.add(Dense(1024, activation = "relu"))
#model.add(Dropout(0.5))
model.add(Dense(512, activation = "relu"))
model.add(Dropout(0.5))
model.add(Dense(2, activation = "softmax"))
optimizer = RMSprop(lr=0.001, decay=1e-6)
model.compile(optimizer = optimizer , loss = "categorical_crossentropy", metrics=["accuracy"])
model.fit(a,b,
batch_size=batch_size,
verbose=1,
epochs=epochs,
validation_data=(c,d),callbacks = [MetricsCheckpoint('logs')])
score = model.evaluate(c,d, verbose=0)
print('\nKeras CNN #2 - accuracy:', score[1], '\n')
y_pred = model.predict(c)
map_characters = {0: 'IDC(-)', 1: 'IDC(+)'}
print('\n', sklearn.metrics.classification_report(np.where(d > 0)[1], np.argmax(y_pred, axis=1), target_names=list(map_characters.values())), sep='')
Y_pred_classes = np.argmax(y_pred,axis = 1)
Y_true = np.argmax(Y_test,axis = 1)
confusion_mtx = confusion_matrix(Y_true, Y_pred_classes)
plot_confusion_matrix(confusion_mtx, classes = list(map_characters.values()))
runAnotherKeras(X_train, Y_train, X_test, Y_test)
plotKerasLearningCurve()
```
The confusion matrix illustrates that this model is predicting IDC(-) too often, and the learning curve illustrates that the validation score is consistently less than the training score. Together, these results suggest that our model suffers from high bias.
I will try another network architecture, and I will also include a data augmentation step in order to try to decrease the bias in our model.
```
def kerasAugmentation(a,b,c,d):
img_rows, img_cols = 50,50
input_shape = (img_rows, img_cols, 3)
batch_size = 128
num_classes = 2
epochs = 12
map_characters = {0: 'IDC(-)', 1: 'IDC(+)'}
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', input_shape=input_shape))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(256, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(256, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(1024))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
opt = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',optimizer=opt,metrics=['accuracy'])
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images
vertical_flip=False) # randomly flip images
datagen.fit(a)
model.fit_generator(datagen.flow(a,b, batch_size=32),
steps_per_epoch=len(a) / 32, epochs=epochs, validation_data = [c, d],callbacks = [MetricsCheckpoint('logs')])
score = model.evaluate(c,d, verbose=0)
print('\nKeras CNN #3B - accuracy:', score[1],'\n')
y_pred = model.predict(c)
print('\n', sklearn.metrics.classification_report(np.where(d > 0)[1], np.argmax(y_pred, axis=1), target_names=list(map_characters.values())), sep='')
Y_pred_classes = np.argmax(y_pred,axis = 1)
Y_true = np.argmax(Y_test,axis = 1)
confusion_mtx = confusion_matrix(Y_true, Y_pred_classes)
plot_confusion_matrix(confusion_mtx, classes = list(map_characters.values()))
kerasAugmentation(X_train, Y_train, X_test, Y_test)
plotKerasLearningCurve()
```
This model picked IDC(+) every single time, which suggests high bias, but the learning curve suggests that there may be some overfitting. Either way, this model will not work.
I will now try another model where I change the network architecture but retain the data augmentation step.
```
def runAnotherKerasAugmentedConfusion(a,b,c,d):
# my CNN architechture is In -> [[Conv2D->relu]*2 -> MaxPool2D -> Dropout]*2 -> Flatten -> Dense -> Dropout -> Out
batch_size = 128
num_classes = 2
epochs = 16
img_rows, img_cols = X_train.shape[1],X_train.shape[2]
input_shape = (img_rows, img_cols, 3)
model = Sequential()
model.add(Conv2D(filters = 32, kernel_size = (3,3),padding = 'Same',
activation ='relu', input_shape = input_shape))
model.add(Conv2D(filters = 32, kernel_size = (3,3),padding = 'Same',
activation ='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same',
activation ='relu'))
model.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same',
activation ='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(Conv2D(filters = 86, kernel_size = (3,3),padding = 'Same',
activation ='relu'))
model.add(Conv2D(filters = 86, kernel_size = (3,3),padding = 'Same',
activation ='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(Flatten())
#model.add(Dense(1024, activation = "relu"))
#model.add(Dropout(0.5))
model.add(Dense(512, activation = "relu"))
model.add(Dropout(0.5))
model.add(Dense(2, activation = "softmax"))
optimizer = RMSprop(lr=0.001, decay=1e-6)
model.compile(optimizer = optimizer , loss = "categorical_crossentropy", metrics=["accuracy"])
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images
vertical_flip=False) # randomly flip images
datagen.fit(a)
model.fit_generator(datagen.flow(a,b, batch_size=32),steps_per_epoch=len(a) / 32, epochs=epochs, validation_data = [c, d],callbacks = [MetricsCheckpoint('logs')])
score = model.evaluate(c,d, verbose=0)
print('\nKeras CNN #2B - accuracy:', score[1],'\n')
y_pred = model.predict(c)
map_characters = {0: 'IDC(-)', 1: 'IDC(+)'}
print('\n', sklearn.metrics.classification_report(np.where(d > 0)[1], np.argmax(y_pred, axis=1), target_names=list(map_characters.values())), sep='')
Y_pred_classes = np.argmax(y_pred,axis = 1)
Y_true = np.argmax(Y_test,axis = 1)
confusion_mtx = confusion_matrix(Y_true, Y_pred_classes)
plot_confusion_matrix(confusion_mtx, classes = list(map_characters.values()))
runAnotherKerasAugmentedConfusion(X_train, Y_train, X_test, Y_test)
plotKerasLearningCurve()
```
The confusion matrix illustrates that this model is predicting IDC(-) far too often, and the learning curve illustrates that the validation score is consistently less than the training score. Together, these results suggest that our model suffers from high bias despite the data augmentation step.
I will try another network architecture.
```
# Create the model
def yetAnotherKeras(a,b,c,d):
model = Sequential()
model.add(Conv2D(32, (5, 5), activation='relu', input_shape=(50, 50, 3))) # first layer : convolution
model.add(MaxPooling2D(pool_size=(3, 3))) # second layer : pooling (reduce the size of the image per 3)
model.add(Conv2D(32, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='sigmoid')) # output: 2 values between 0 and 1, one score per class
model.summary()
model.compile(loss=keras.losses.binary_crossentropy, # Use binary crossentropy as a loss function
optimizer=keras.optimizers.Adam(),
metrics=['accuracy'])
model.fit(a,b,
batch_size=128,
epochs=12,
verbose=1,
validation_data = [c,d],
callbacks = [MetricsCheckpoint('logs')])
map_characters = {0: 'IDC(-)', 1: 'IDC(+)'}
y_pred = model.predict(c)
print('\n', sklearn.metrics.classification_report(np.where(d > 0)[1], np.argmax(y_pred, axis=1), target_names=list(map_characters.values())), sep='')
Y_pred_classes = np.argmax(y_pred,axis = 1)
Y_true = np.argmax(Y_test,axis = 1)
confusion_mtx = confusion_matrix(Y_true, Y_pred_classes)
plot_confusion_matrix(confusion_mtx, classes = list(map_characters.values()))
yetAnotherKeras(X_train,Y_train,X_test,Y_test)
plotKerasLearningCurve()
```
This is a decent result. The learning curve here suggests that our model does not have too much bias. If anything, the model may be overfitting a bit, given the close relationship between the training and validation scores.
I will try using a different network architecture and once again I will also include a data augmentation step.
```
def runKerasCNNAugment(a,b,c,d):
"""
Run Keras CNN: https://github.com/fchollet/keras/blob/master/examples/mnist_cnn.py
"""
batch_size = 128
num_classes = 2
epochs = 12
img_rows, img_cols = X_train.shape[1],X_train.shape[2]
input_shape = (img_rows, img_cols, 3)
x_train = a
y_train = b
x_test = c
y_test = d
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images
vertical_flip=False) # randomly flip images
model.fit_generator(datagen.flow(a,b, batch_size=32),
steps_per_epoch=len(a) / 32, epochs=epochs, validation_data = [c, d],callbacks = [MetricsCheckpoint('logs')])
score = model.evaluate(c,d, verbose=0)
print('\nKeras CNN #1C - accuracy:', score[1],'\n')
y_pred = model.predict(c)
map_characters = {0: 'IDC(-)', 1: 'IDC(+)'}
print('\n', sklearn.metrics.classification_report(np.where(d > 0)[1], np.argmax(y_pred, axis=1), target_names=list(map_characters.values())), sep='')
score = model.evaluate(x_test, y_test, verbose=0)
Y_pred_classes = np.argmax(y_pred,axis = 1)
Y_true = np.argmax(Y_test,axis = 1)
confusion_mtx = confusion_matrix(Y_true, Y_pred_classes)
plot_confusion_matrix(confusion_mtx, classes = list(map_characters.values()))
runKerasCNNAugment(X_train, Y_train, X_test, Y_test)
plotKerasLearningCurve()
```
This is our best result yet: 76% accuracy, with a distribution of predicted labels similar to the distribution of actual labels (50/50). The learning curve suggests that there is not too much overfitting given the different shapes of the training and cross-validation curves, and both the confusion matrix and the learning curve suggest that the model does not have high bias. But with only two categories (IDC(-)/IDC(+)), we should hope to do better than 80% accuracy. Soon I will experiment with different data augmentation approaches in an attempt to improve our model's accuracy. In the future, tools like this can be used to save time, cut costs, and increase the accuracy of imaging-based diagnostic approaches in the healthcare industry.
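Accuracy alone can hide class skew like the over-prediction of IDC(-) seen earlier; per-class recall and balanced accuracy, computed straight from the confusion matrix, make the check explicit. A minimal numpy sketch with hypothetical counts (not this model's actual numbers):

```
import numpy as np

def per_class_recall(cm):
    # rows = true labels, columns = predicted labels
    cm = np.asarray(cm, dtype=float)
    return np.diag(cm) / cm.sum(axis=1)

# hypothetical counts for [IDC(-), IDC(+)]
cm = [[700, 300],
      [180, 820]]
recalls = per_class_recall(cm)      # recall for each class
balanced_accuracy = recalls.mean()  # mean of per-class recalls
```

With a 50/50 label split the two summaries coincide; under imbalance, balanced accuracy is the safer number to report.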
To Do:
1) Improve data visualization
2) Optimize data augmentation
3) Optimize NN architecture
<a href="https://githubtocolab.com/giswqs/geemap/blob/master/examples/notebooks/57_cartoee_blend.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
Uncomment the following line to install [geemap](https://geemap.org) and [cartopy](https://scitools.org.uk/cartopy/docs/latest/installing.html#installing) if needed. Keep in mind that cartopy can be challenging to install. If you are unable to install cartopy on your computer, you can try Google Colab with the [notebook example](https://colab.research.google.com/github/giswqs/geemap/blob/master/examples/notebooks/cartoee_colab.ipynb).
See below the commands to install cartopy and geemap using conda/mamba:
```
conda create -n carto python=3.8
conda activate carto
conda install mamba -c conda-forge
mamba install cartopy scipy -c conda-forge
mamba install geemap -c conda-forge
jupyter notebook
```
```
# !pip install cartopy scipy
# !pip install geemap
```
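Before running the rest of the notebook, it can help to confirm the environment actually has the dependencies. A small stdlib-only check (the package names are taken from the install commands above):

```
import importlib.util

def missing_packages(names):
    """Return the names that cannot be imported in this environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

missing = missing_packages(["ee", "geemap", "cartopy"])
if missing:
    print("Please install:", ", ".join(missing))
```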
# Creating publication-quality maps with multiple Earth Engine layers
```
import ee
import geemap
from geemap import cartoee
import cartopy.crs as ccrs
%pylab inline
```
## Create an interactive map
```
Map = geemap.Map()
image = (
ee.ImageCollection('MODIS/MCD43A4_006_NDVI')
.filter(ee.Filter.date('2018-04-01', '2018-05-01'))
.select("NDVI")
.first()
)
vis_params = {
'min': 0.0,
'max': 1.0,
'palette': [
'FFFFFF',
'CE7E45',
'DF923D',
'F1B555',
'FCD163',
'99B718',
'74A901',
'66A000',
'529400',
'3E8601',
'207401',
'056201',
'004C00',
'023B01',
'012E01',
'011D01',
'011301',
],
}
Map.setCenter(-7.03125, 31.0529339857, 2)
Map.addLayer(image, vis_params, 'MODIS NDVI')
countries = ee.FeatureCollection('users/giswqs/public/countries')
style = {"color": "00000088", "width": 1, "fillColor": "00000000"}
Map.addLayer(countries.style(**style), {}, "Countries")
ndvi = image.visualize(**vis_params)
blend = ndvi.blend(countries.style(**style))
Map.addLayer(blend, {}, "Blend")
Map
```
## Plot an image with the default projection
```
# specify region to focus on
bbox = [180, -88, -180, 88]
fig = plt.figure(figsize=(15, 10))
# plot the result with cartoee using a PlateCarree projection (default)
ax = cartoee.get_map(blend, region=bbox)
cb = cartoee.add_colorbar(ax, vis_params=vis_params, loc='right')
ax.set_title(label='MODIS NDVI', fontsize=15)
# ax.coastlines()
plt.show()
```
## Plot an image with a different projection
```
fig = plt.figure(figsize=(15, 10))
projection = ccrs.EqualEarth(central_longitude=-180)
# plot the result with cartoee using an Equal Earth projection
ax = cartoee.get_map(blend, region=bbox, proj=projection)
cb = cartoee.add_colorbar(ax, vis_params=vis_params, loc='right')
ax.set_title(label='MODIS NDVI', fontsize=15)
# ax.coastlines()
plt.show()
```
# Posture detection for team wellbeing app
* Ported to ONNX runtime
* Simplified processing from https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch
* Added heatmap postprocessing & posture scoring (check out scripts/simplified_functions.py for details)
* Bonus: __Live demo at the end!__
```
import matplotlib.pyplot as plt
import cv2
import numpy as np
from scripts.simplified_functions import sigmoid, normalize, find_distance, find_peaks, choose_best_peak, \
get_keypoint_locations, get_signals_from_keypoints, \
get_scores_from_signals, preprocess_image
import onnx
import onnxruntime as ort
# Should be CPU by default, which is ok
print('Device: {}'.format(ort.get_device()))
```
## Defining some variables
```
# We will only be using 6 of the key points
kpt_names = ['nose', 'neck', 'r_sho', 'r_elb', 'r_wri', 'l_sho', 'l_elb',
'l_wri', 'r_hip', 'r_knee', 'r_ank', 'l_hip', 'l_knee',
'l_ank', 'r_eye', 'l_eye', 'r_ear', 'l_ear']
# points of interest
poi = [0, 1, 2, 5, 14, 15]
print([kpt_names[i] for i in poi])
# default positions (y,x) of different parts with the origin on the top left
default_keypoints = [(0.4,0.5), (0.75,0.5), (0.75,0.3), (0.75,0.7),(0.35,0.45), (0.35,0.55)]
# they're really just telling the algorithm roughly where to look for these keypoints by default
plt.figure(figsize=(3,2)); plt.scatter([x[1] for x in default_keypoints], [-x[0] for x in default_keypoints]);
```
## Processing
### Workflow within the app:
1. **Calibration**
* First, we need to do **Steps 0 to 4** on a calibration image to get `calib_signals`.
* In the code below, I'm using the `default_keypoints` as a placeholder for calibration keypoints, but in the app, we want to get their actual "good posture" keypoints.
2. **Evaluation**
* We do **Steps 0 to 4** on a current image to get the current `signals`.
* Then do **Step 5 and 6** which compares current `signals` with `calib_signals` to get the final score
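The comparison in Steps 5 and 6 can be pictured with a toy scoring rule. This is purely illustrative — the real logic lives in `scripts/simplified_functions.py`, and the `tolerance` parameter below is an invented stand-in:

```
import numpy as np

def toy_scores(signals, calib_signals, tolerance=0.2):
    # 1.0 when a signal matches its calibration value,
    # falling linearly to 0.0 at a deviation of `tolerance` or more
    deviation = np.abs(np.asarray(signals) - np.asarray(calib_signals))
    return np.clip(1.0 - deviation / tolerance, 0.0, 1.0)

scores = toy_scores([0.50, 0.62, 0.30, 0.45], [0.50, 0.60, 0.40, 0.45])
final_score = scores.mean()  # Step 6: average the 4 scores
```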
```
calib_keypoints = default_keypoints
# %%timeit
"""
0. load .onnx and start onnx session
1. load test image (or acquire it from webcam)
2. inference is run here, will take about 2 seconds. the output will be heatmaps
3. postprocess heatmaps to get keypoint locations
4. process keypoint locations to get signals
"""
# model = onnx.load("pose360smallfloat16.onnx")
model = onnx.load("old_onnx/pose360small.onnx")
sess = ort.InferenceSession('pose360smallfloat16.onnx')
# img = cv2.imread('calibration/test2.jpg')
img = cv2.imread('../tw_onnxruntime/csharp/sample/Microsoft.ML.OnnxRuntime.ResNet50v2Sample/bin/Debug/netcoreapp3.1/test11.jpg')
input_tensor = preprocess_image(img)
output_tensor = sess.run(None, {'input': input_tensor})
keypoint_locations = get_keypoint_locations(output_tensor, poi, calib_keypoints)
signals = get_signals_from_keypoints(keypoint_locations)
"""
5. scoring by comparing signals with calibration signals (4 scores from 0-1)
"""
calib_signals = get_signals_from_keypoints(calib_keypoints)
scores = get_scores_from_signals(signals, calib_signals)
"""
6. average the 4 scores to get final score (from 0-1)
"""
final_score = sum(scores)/4
print(final_score)
keypoint_locations
output_tensor[0][0].dtype = np.float32
np.array(output_tensor[0][2], dtype=np.float32).max()
plt.imshow(np.array(output_tensor[0][poi[1]] + img[::16,::16,:].mean(2)/255/10, dtype=np.float32))
plt.colorbar()
# note: draw_box is defined in the Live Demo section further below
for keypoint in keypoint_locations:
draw_box(img, keypoint)
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
```
#### End of code needed for integration with windows app
---
---
## Live Demo
#### 1. Calibration: Sit in a correct posture, then run the cell below.
* hold it there until the image shows up, make sure your shoulders and head are visible
```
def draw_box(img, pos, size=9):
h,w,_ = img.shape
y, x = int(pos[0]*h), int(pos[1]*w)
y1, y2 = max(y-size, 0), min(y+size, h-1)
x1, x2 = max(x-size, 0), min(x+size, w-1)
img[y1:y2, x1:x2, :] = 255
cap = cv2.VideoCapture(0)
cap.set(3,640)
cap.set(4,360)
ret, calib_img = cap.read()
calib_input = preprocess_image(calib_img)
calib_output = sess.run(None, {'input': calib_input})
calib_keypoints = get_keypoint_locations(calib_output, poi, default_keypoints)
calib_signals = get_signals_from_keypoints(calib_keypoints)
for keypoint in calib_keypoints:
draw_box(calib_img, keypoint)
plt.imshow(cv2.cvtColor(calib_img, cv2.COLOR_BGR2RGB))
cap.release()
```
#### 2. Live demo, run the cell below.
* you must have all 6 parts visible for the demo to work; if not, it'll tell you how many are missing
* because it's running on CPU only it might be quite slow (~1 Hz)
```
import pandas as pd
from matplotlib.backends.backend_agg import FigureCanvasAgg
from matplotlib.figure import Figure
def generate_live_plot4(time_series, window_size=5, max_data_length=20):
fig = Figure(figsize=(4, 6), dpi=100)
canvas = FigureCanvasAgg(fig)
labels = ['shoulder uprightness', 'neck uprightness', 'eye-screen distance', '"non-slouchness"']
colors = ['gray','gray','gray','gray']
axs = fig.subplots(4)
for i,col in enumerate(time_series.columns):
axs[i].set_ylim([-0.2,1.2])
time_series[col].reset_index(drop=True).rolling(window_size,min_periods=1).mean().iloc[-max_data_length:].plot.area(ax=axs[i],label=labels[i],color=colors[i], alpha=0.3)
axs[i].legend(loc='upper left')
score = time_series[col].iloc[-1]*0.3
axs[i].set_facecolor((1-score, 0.7+score, 0.7))
canvas.draw()
s, (width, height) = canvas.print_to_buffer()
live_plot = np.frombuffer(s, np.uint8).reshape((height, width, 4))
return live_plot
cap = cv2.VideoCapture(0)
cap.set(3,640)
cap.set(4,360)
ret, calib_img = cap.read()
time_series = pd.DataFrame()
n = 0
while(True):
ret, img = cap.read()
if (img is not None):
input_tensor = preprocess_image(img)
# inference is run here, will take about 2 seconds
output_tensor = sess.run(None, {'input': input_tensor})
# ONNX output --> keypoint locations --> signals ---> score
keypoint_locations = get_keypoint_locations(output_tensor, poi, calib_keypoints)
# if there's any point missing, get_keypoint_locations() will output None
if keypoint_locations is None:
continue
signals = get_signals_from_keypoints(keypoint_locations)
# to get scores, calibration signals are required
scores = get_scores_from_signals(signals, calib_signals)
time_series = time_series.append([scores])
for keypoint in keypoint_locations:
draw_box(img, keypoint)
live_plot = generate_live_plot4(time_series, 5,20)[:,:,:3][:,:,::-1]
cv2.imshow('video', img)
cv2.imshow('plot', live_plot)
if cv2.waitKey(1) == 27: # when escape key is pressed
break
cap.release()
cv2.destroyAllWindows()
cap.release()
cv2.destroyAllWindows()
```
### STUFF FOR DEBUGGING
not part of the integration!
```
# Take a look at the heatmaps to make sure theyre ok
for i in poi:
heatmap = np.squeeze(output_tensor[0][:,i,:,:]) # convert it back to 2D array
plt.figure(figsize=(2,2))
plt.imshow(heatmap); plt.show()
plt.scatter(x=[x[1]/80 for x in keypoint_locations], y=[-x[0]/45 for x in keypoint_locations], c=['r','purple','b','c','g','y'])
plt.scatter(x=[x[1] for x in calib if x != None], y=[-x[0] for x in calib if x != None])
plt.show()
plt.imshow(np.squeeze(output_tensor[0][:,2,:,:]))
plt.scatter(x=[x[1]*80 for x in calib if x != None], y=[x[0]*45 for x in calib if x != None])
plt.scatter(x=[x[1]*80 for x in keypoint_locations if x != None], y=[x[0]*45 for x in keypoint_locations if x != None])
plt.show()
# I roughly came up w these numbers for now.
calib = [(0.4,0.5), (0.75,0.5), (0.75,0.3), None, None,
(0.75,0.7), None, None, None, None, None, None, None, None,
(0.35,0.45), (0.35,0.55), None, None]
```
# Using the MANN Package to convert and prune an existing TensorFlow model
In this notebook, we utilize the MANN package on an existing TensorFlow model to convert existing layers to MANN layers and then prune the model.
```
# Load the MANN package and TensorFlow
import tensorflow as tf
import mann
# Load the data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train = x_train/255
x_test = x_test/255
# Load the model to be used
vgg16 = tf.keras.applications.VGG16(
include_top = False, # Don't include the top layers
weights = 'imagenet', # Load the imagenet weights
input_shape = x_train.shape[1:] # Input shape is the shape of the images
)
```
## Create the model to be trained
In the following cell, we create the model using the existing VGG model fed into fully-connected layers.
```
# Build the model using VGG16 and a few layers on top of it
model = tf.keras.models.Sequential()
model.add(vgg16)
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(512, activation = 'relu'))
model.add(tf.keras.layers.Dense(512, activation = 'relu'))
model.add(tf.keras.layers.Dense(512, activation = 'relu'))
model.add(tf.keras.layers.Dense(10, activation = 'softmax'))
# Compile the model
model.compile(
loss = 'sparse_categorical_crossentropy',
metrics = ['accuracy'],
optimizer = 'adam'
)
# Present model summary
model.summary()
```
## Convert the model and perform initial pruning
In the following cell, we convert the model and perform initial pruning of the model to 40%.
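Conceptually, magnitude-based pruning keeps the largest-magnitude weights and masks the rest. A numpy sketch of the idea (not mann's actual implementation):

```
import numpy as np

def magnitude_mask(weights, sparsity_pct):
    # mask (zero out) roughly the smallest `sparsity_pct` percent of weights
    threshold = np.percentile(np.abs(weights), sparsity_pct)
    return (np.abs(weights) >= threshold).astype(weights.dtype)

w = np.array([[0.50, -0.01],
              [0.02, -1.20]])
mask = magnitude_mask(w, 40)  # 40% target sparsity, as above
pruned = w * mask             # small-magnitude weights are zeroed
```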
```
# Use the add_layer_masks function to add masking layers to the model
converted_model = mann.utils.add_layer_masks(model)
# Compile the model
converted_model.compile(
loss = 'sparse_categorical_crossentropy',
metrics = ['accuracy'],
optimizer = 'adam'
)
# Mask the model using magnitude as the metric
converted_model = mann.utils.mask_model(
converted_model,
40,
method = 'magnitude'
)
# Recompile the model for the weights to take effect
converted_model.compile(
loss = 'sparse_categorical_crossentropy',
metrics = ['accuracy'],
optimizer = 'adam'
)
# Present the model summary
converted_model.summary()
```
## Train and further prune the model
In this cell, we create the ActiveSparsification callback and train the model using that callback to prune the model as the model improves in performance.
```
# Create the sparsification callback object
callback = mann.utils.ActiveSparsification(
performance_cutoff = 0.75, # The accuracy score the model needs to achieve
starting_sparsification = 40, # Starting sparsification
sparsification_rate = 5 # Sparsification increase every time the model achieves performance cutoff
)
# Fit the converted (masked) model
converted_model.fit(
x_train,
y_train,
epochs = 1000,
callbacks = [callback],
validation_split = 0.2,
batch_size = 256
)
```
## Convert the model back to remove masking layers
In the following cell, we remove the layer masks created for training, while completely preserving performance.
```
# Convert the model back, removing the masking layers
model = mann.utils.remove_layer_masks(converted_model)
# Present the model
model.summary()
```
## Report accuracy and save model
```
# Get the predictions on test data
preds = model.predict(x_test).argmax(axis = 1)
# Print the accuracy
print(f'Model Accuracy: {(preds.flatten() == y_test.flatten()).sum().astype(int)/y_test.flatten().shape[0]}')
# Save the model
model.save('cifar_vgg16.h5')
```
# Demistifying GANs in TensorFlow 2.0
```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
print(tf.__version__)
```
## Global Parameters
```
BATCH_SIZE = 256
BUFFER_SIZE = 60000
EPOCHES = 300
OUTPUT_DIR = "img" # The output directory where the images of the generator are stored during training
```
## Loading the MNIST dataset
```
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
#train_images[0]
(train_images[0].shape)
train_images.shape
plt.imshow(train_images[1], cmap = "gray")
```
### Adding the Data to tf.Dataset
```
train_images = train_images.astype("float32")
train_images = (train_images - 127.5) / 127.5  # scale pixels to [-1, 1] to match the generator's tanh output
train_dataset = tf.data.Dataset.from_tensor_slices(train_images.reshape(train_images.shape[0],784)).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
train_dataset
```
## Generator Network
```
class Generator(keras.Model):
def __init__(self, random_noise_size = 100):
super().__init__(name='generator')
#layers
self.input_layer = keras.layers.Dense(units = random_noise_size)
self.dense_1 = keras.layers.Dense(units = 128)
self.leaky_1 = keras.layers.LeakyReLU(alpha = 0.01)
self.dense_2 = keras.layers.Dense(units = 128)
self.leaky_2 = keras.layers.LeakyReLU(alpha = 0.01)
self.dense_3 = keras.layers.Dense(units = 256)
self.leaky_3 = keras.layers.LeakyReLU(alpha = 0.01)
self.output_layer = keras.layers.Dense(units=784, activation = "tanh")
def call(self, input_tensor):
## Definition of Forward Pass
x = self.input_layer(input_tensor)
x = self.dense_1(x)
x = self.leaky_1(x)
x = self.dense_2(x)
x = self.leaky_2(x)
x = self.dense_3(x)
x = self.leaky_3(x)
return self.output_layer(x)
def generate_noise(self,batch_size, random_noise_size):
return np.random.uniform(-1,1, size = (batch_size, random_noise_size))
```
### Objective Function
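The code below implements the non-saturating variant of the GAN objective from Goodfellow et al.:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{data}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$

Instead of minimizing $\log(1 - D(G(z)))$, which saturates early in training when the discriminator easily rejects fakes, the generator maximizes $\log D(G(z))$ — hence the `tf.ones_like` labels on the fake outputs below.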
```
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits = True)
def generator_objective(dx_of_gx):
# Labels are ones here because the generator is trained as if its fake images were real.
return cross_entropy(tf.ones_like(dx_of_gx), dx_of_gx)
```
### Plotting The Noise (Fake Image)
```
generator = Generator()
fake_image = generator(np.random.uniform(-1,1, size =(1,100)))
fake_image = tf.reshape(fake_image, shape = (28,28))
plt.imshow(fake_image, cmap = "gray")
```
## Discriminator Network
```
class Discriminator(keras.Model):
def __init__(self):
super().__init__(name = "discriminator")
#Layers
self.input_layer = keras.layers.Dense(units = 784)
self.dense_1 = keras.layers.Dense(units = 128)
self.leaky_1 = keras.layers.LeakyReLU(alpha = 0.01)
self.dense_2 = keras.layers.Dense(units = 128)
self.leaky_2 = keras.layers.LeakyReLU(alpha = 0.01)
self.dense_3 = keras.layers.Dense(units = 128)
self.leaky_3 = keras.layers.LeakyReLU(alpha = 0.01)
self.logits = keras.layers.Dense(units = 1) # This neuron tells us if the input is fake or real
def call(self, input_tensor):
## Definition of Forward Pass
x = self.input_layer(input_tensor)
x = self.dense_1(x)
x = self.leaky_1(x)
x = self.dense_2(x)
x = self.leaky_2(x)
x = self.dense_3(x)
x = self.leaky_3(x)
x = self.logits(x)
return x
discriminator = Discriminator()
```
### Objective Function
```
def discriminator_objective(d_x, g_z, smoothing_factor = 0.9):
"""
d_x = real output
g_z = fake output
"""
real_loss = cross_entropy(tf.ones_like(d_x) * smoothing_factor, d_x) # Real images get label 1, smoothed down to 0.9
fake_loss = cross_entropy(tf.zeros_like(g_z), g_z) # Every noise vector yields a fake image, so its label is 0
total_loss = real_loss + fake_loss
return total_loss
```
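To see what the smoothing factor buys, here is a small numpy sketch of binary cross-entropy on probabilities (the Keras loss above works on logits, but the effect is the same): with the target at 0.9 instead of 1.0, an over-confident discriminator is penalized, because the loss is minimized at D(x) = 0.9 rather than at saturation.

```
import numpy as np

def binary_ce(label, prob, eps=1e-7):
    # elementwise binary cross-entropy on a probability
    p = np.clip(prob, eps, 1 - eps)
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

# a very confident discriminator output on a real image:
hard = binary_ce(1.0, 0.99)      # small loss with the hard label
smoothed = binary_ce(0.9, 0.99)  # larger loss with the smoothed label
best = binary_ce(0.9, 0.9)       # the smoothed optimum sits at D(x) = 0.9
```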
## Optimizer
```
generator_optimizer = keras.optimizers.RMSprop()
discriminator_optimizer = keras.optimizers.RMSprop()
```
## Training Functions
```
@tf.function()
def training_step(generator: Generator, discriminator: Discriminator, images: np.ndarray, k: int = 1, batch_size = 32):
for _ in range(k):
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
noise = generator.generate_noise(batch_size, 100)
g_z = generator(noise)
d_x_true = discriminator(images) # Trainable?
d_x_fake = discriminator(g_z) # dx_of_gx
discriminator_loss = discriminator_objective(d_x_true, d_x_fake)
# Adjusting Gradient of Discriminator
gradients_of_discriminator = disc_tape.gradient(discriminator_loss, discriminator.trainable_variables)
discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables)) # Takes a list of gradient and variables pairs
generator_loss = generator_objective(d_x_fake)
# Adjusting Gradient of Generator
gradients_of_generator = gen_tape.gradient(generator_loss, generator.trainable_variables)
generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
seed = np.random.uniform(-1,1, size = (1, 100)) # generating some noise for the training
# Just to make sure the output directory exists..
import os
directory=OUTPUT_DIR
if not os.path.exists(directory):
os.makedirs(directory)
def training(dataset, epoches):
for epoch in range(epoches):
for batch in dataset:
training_step(generator, discriminator, batch ,batch_size = BATCH_SIZE, k = 1)
## After ith epoch plot image
if (epoch % 50) == 0:
fake_image = tf.reshape(generator(seed), shape = (28,28))
print("{}/{} epoches".format(epoch, epoches))
#plt.imshow(fake_image, cmap = "gray")
plt.imsave("{}/{}.png".format(OUTPUT_DIR,epoch),fake_image, cmap = "gray")
%%time
training(train_dataset, EPOCHES)
```
## Testing the Generator
```
fake_image = generator(np.random.uniform(-1,1, size = (1, 100)))
plt.imshow(tf.reshape(fake_image, shape = (28,28)), cmap="gray")
```
## Obsolete Training Function
I tried to implement the training step with the k factor as described in the original paper. I achieved much worse results than with the function above. Maybe I did something wrong?!
```
@tf.function()
def training_step(generator: Generator, discriminator: Discriminator, images: np.ndarray, k: int = 1, batch_size = 256):
for _ in range(k):
with tf.GradientTape() as disc_tape:
noise = generator.generate_noise(batch_size, 100)
g_z = generator(noise)
d_x_true = discriminator(images) # Trainable?
d_x_fake = discriminator(g_z) # dx_of_gx
discriminator_loss = discriminator_objective(d_x_true, d_x_fake, smoothing_factor=0.9)
# Adjusting Gradient of Discriminator
gradients_of_discriminator = disc_tape.gradient(discriminator_loss, discriminator.trainable_variables)
discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables)) # Takes a list of gradient and variables pairs
with tf.GradientTape() as gen_tape:
noise = generator.generate_noise(batch_size, 100)
d_x_fake = discriminator(generator(noise))
generator_loss = generator_objective(d_x_fake)
# Adjusting Gradient of Generator
gradients_of_generator = gen_tape.gradient(generator_loss, generator.trainable_variables)
generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
```
```
import re
from glob import glob
import requests
import pandas as pd
import os
from probml_utils.url_utils import is_dead_url,make_url_from_chapter_no_and_script_name, extract_scripts_name_from_caption
from TexSoup import TexSoup
```
## Get chapter names
```
chap_names = {}
for chap_no in range(1,24):
suppl = f"../../pml-book/pml1/supplements/chap{chap_no}.md"
with open(suppl, "r") as fp:
text = fp.read()
names = re.findall(r"Chapter.+?[(](.+)[)]",text)
chap_names[chap_no] = names[0]
print(chap_no, names)
df = pd.DataFrame(chap_names.items(), columns=["chap_no","chap_name"])
df
df.to_csv("chapter_no_to_name_mapping.csv", index=None)
```
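The `re.findall` pattern above pulls the parenthesized chapter name out of the heading line of each supplement file; on a hypothetical heading it behaves like this:

```
import re

line = "Chapter 3 (Probability: multivariate models)"  # hypothetical heading line
names = re.findall(r"Chapter.+?[(](.+)[)]", line)
chap_name = names[0]
```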
## Create a Readme.md
```
content = '''
# "Probabilistic Machine Learning: An Introduction"
## Chapters
|Chapter|Name| Notebooks|
|-|-|-|
'''
for chap_no in range(1,24):
chap_url = f"https://github.com/probml/pyprobml/tree/master/notebooks/book1/{chap_no:02d}"
content+=f"| {chap_no} | {chap_names[chap_no]} | [{chap_no:02d}/]({chap_no:02d}/) |\n"
content
readme_file = "../notebooks/book1/README.md"
with open(readme_file,"w") as fp:
fp.write(content)
```
## Chapterwise README.md
```
with open("pml1.lof") as fp:
LoF_File_Contents = fp.read()
soup = TexSoup(LoF_File_Contents)
# create mapping of fig_no to list of script_name
url_mapping = {}
for caption in soup.find_all("numberline"):
fig_no = str(caption.contents[0])
extracted_scripts = extract_scripts_name_from_caption(str(caption))
if len(extracted_scripts) == 1:
url_mapping[fig_no] = extracted_scripts[0]
elif len(extracted_scripts) > 1:
url_mapping[fig_no] = "fig_"+fig_no.replace(".","_")+".ipynb"
else:
url_mapping[fig_no] = ""
url_mapping
chapter_wise_mappping = {}
for fig_no in url_mapping:
chap_no = int(fig_no.split(".")[0])
if chap_no not in chapter_wise_mappping:
chapter_wise_mappping[chap_no] = {}
chapter_wise_mappping[chap_no][fig_no] = url_mapping[fig_no]
chapter_wise_mappping
book1_figures = os.listdir("../../pml-book/book1-figures/")
image_mapping = {}
for each in book1_figures:
fig_no = re.findall(r"\d+\.\d+", each)[0]
try:
image_mapping[fig_no].append(each)
except:
image_mapping[fig_no] = [each]
image_mapping
def get_figure_text(fig_no):
if fig_no not in image_mapping:
return "-"
url = "https://github.com/probml/pml-book/blob/main/book1-figures/"
text = ""
for fig in image_mapping[fig_no]:
text += f"[{fig}]({os.path.join(url,fig)})<br/>"
return text
def extract_url(line):
links = re.findall(r"(https[^)]+)\)", line)
if links:
return links
return None
dead = []
for chap_no in chapter_wise_mappping:
if chap_no == 23:
continue #not present in pyprobml
content = f'''
# Chapter {chap_no}: {chap_names[chap_no]}
## Figures
|Figure No. | Notebook | Figure |
|--|--|--|
'''
for fig_no in chapter_wise_mappping[chap_no]:
notebook_link = f"[{chapter_wise_mappping[chap_no][fig_no]}]({chapter_wise_mappping[chap_no][fig_no]})" if chapter_wise_mappping[chap_no][fig_no] != "" else "-"
content += f"| {fig_no} | {notebook_link} "
content+= f"| {get_figure_text(fig_no)} |\n"
# append supplementary
suppl = f"../../pml-book/pml1/supplements/chap{chap_no}.md"
with open(suppl, "r") as fp:
text = fp.read()
print(chap_no,len(text.split("\n")))
if len(text.split("\n")) > 3:
content += "## Supplementary material\n"
text = "\n".join(text.split("\n")[1:])
#change tutorial location from probml_notebooks to pyprobml
text = text.replace("https://github.com/probml/probml-notebooks/blob/main/markdown/","https://github.com/probml/pyprobml/tree/master/tutorials/")
content+=text
#print(content)
# save this as README.md
readme_file = f"../notebooks/book1/{chap_no:02d}/README.md"
with open(readme_file,"w") as fp:
fp.write(content)
# for line in lines:
# links = extract_url(line)
# if links:
# for link in links:
# if "http" in link and is_dead_url(link):
# print(link)
# dead.append(link)
# text = "\n".join(lines)
# content+=text
```
# Landsat Collection 2 Level-2 Surface Reflectance
**Date modified:** 23 August 2021
## Product overview
### Background
Digital Earth Africa (DE Africa) provides free and open access to a copy of [Landsat Collection 2 Level-2](https://www.usgs.gov/core-science-systems/nli/landsat/landsat-collection-2-level-2-science-products) products over Africa. These products are produced and provided by the United States Geological Survey (USGS).
The [Landsat series](https://www.usgs.gov/core-science-systems/nli/landsat) of Earth Observation satellites, jointly led by USGS and NASA, have been continuously acquiring images of the Earth’s land surface since 1972. DE Africa provides data from Landsat 5, 7 and 8 satellites, including historical observations dating back to late 1980s and regularly updated new acquisitions.
New Level-2 Landsat 7 and Landsat 8 data are available after 15 to 27 days from acquisition. See [Landsat Collection 2 Generation Timeline](https://www.usgs.gov/media/images/landsat-collection-2-generation-timeline) for details.
USGS Landsat Collection 2 was released early 2021 and offers improved processing, geometric accuracy, and radiometric calibration compared to previous Collection 1 products. The Level-2 products are endorsed by the Committee on Earth Observation Satellites (CEOS) to be Analysis Ready Data for Land ([CARD4L](https://ceos.org/ard/))-compliant. This internationally-recognised certification ensures these products have been processed to a minimum set of requirements and organised into a form that allows immediate analysis with a minimum of additional user effort and interoperability both through time and with other datasets.
USGS Landsat Collection 2 Level-2 includes:
* Surface Reflectance
* Surface Temperature
This document provides technical specifications for the Surface Reflectance product. Information for the Surface Temperature product can be found in the [Landsat Collection 2 Level-2 Surface Temperature specification](https://docs.digitalearthafrica.org/en/latest/data_specs/Landsat_C2_ST_specs.html).
Surface reflectance is the fraction of incoming solar radiation that is reflected from Earth's surface. Variations in satellite measured radiance due to atmospheric properties have been corrected for so images acquired over the same area at different times are comparable and can be used readily to detect changes on Earth's surface.
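In practice the delivered pixel values are scaled integers: applying the scale and offset listed in Table 2 (2.75e-05 and -0.2 for the surface-reflectance bands, with 0 reserved as the no-data value) recovers reflectance. A numpy sketch:

```
import numpy as np

def dn_to_reflectance(dn, scale=2.75e-05, offset=-0.2, nodata=0):
    # Table 2 conversion: reflectance = 2.75e-05 * DN - 0.2
    dn = np.asarray(dn)
    reflectance = dn * scale + offset
    return np.where(dn == nodata, np.nan, reflectance)

dn = np.array([0, 7273, 43636], dtype=np.uint16)
reflectance = dn_to_reflectance(dn)  # nan, ~0.0, ~1.0
```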
### Specifications
#### Spatial and temporal coverage
DE Africa provides Landsat Collection 2 Level-2 from Landsat 5, 7 and 8 as separate products. Relevant coverage and metadata can be viewed on the DE Africa Metadata Explorer:
* [Landsat 5 Collection 2 Level-2 Surface Reflectance](https://explorer.digitalearth.africa/products/ls5_sr)
* [Landsat 7 Collection 2 Level-2 Surface Reflectance](https://explorer.digitalearth.africa/products/ls7_sr)
* [Landsat 8 Collection 2 Level-2 Surface Reflectance](https://explorer.digitalearth.africa/products/ls8_sr)
**Table 1: Landsat Collection 2 Level-2 Surface Reflectance product specifications**
|Satellite | Landsat 5 | Landsat 7 | Landsat 8 |
|----------|:---------:|:---------:|:---------:|
|Instrument| Multispectral Scanner (MSS), Thematic Mapper (TM)| Enhanced Thematic Mapper (ETM+)| Operational Land Imager (OLI), Thermal Infrared Sensor (TIRS) |
|Number of bands | 10 | 10 | 10 |
|Cell size - X (metres) | 30 | 30 | 30 |
|Cell size - Y (metres) | 30 | 30 | 30 |
|Coordinate reference system |Universal Transverse Mercator (UTM) | UTM | UTM |
|Temporal resolution | Every 16 days | Every 16 days | Every 16 days |
|Temporal range| 1984 – 2012 | 1999 – present | 2013 – present |
|Parent dataset| [Landsat Collection 2 Level-1](https://www.usgs.gov/land-resources/nli/landsat/landsat-collection-2-level-1-data) | [Landsat Collection 2 Level-1](https://www.usgs.gov/land-resources/nli/landsat/landsat-collection-2-level-1-data) | [Landsat Collection 2 Level-1](https://www.usgs.gov/land-resources/nli/landsat/landsat-collection-2-level-1-data) |
|Update frequency| NA (archive)| Daily | Daily |
#### Measurements
**Table 2: Landsat 5 and Landsat 7 Level-2 Surface Reflectance measurements**
|Band ID|Description |Units | Range | Data type| No data$^\dagger$ | Conversion$^\ddagger$ |
|----------|-------------|----------------|----------------|:---------:|:----------:|:----------:|
|SR_B1 | Surface reflectance band 1 (Blue) | Unitless | `1-65455`| `uint16` | `0` | 2.75e-05 \* DN - 0.2 |
|SR_B2 | Surface reflectance band 2 (Green) | Unitless | `1-65455`| `uint16` | `0` | 2.75e-05 \* DN - 0.2 |
|SR_B3 | Surface reflectance band 3 (Red) | Unitless | `1-65455`| `uint16` | `0` | 2.75e-05 \* DN - 0.2 |
|SR_B4 | Surface reflectance band 4 (Near-Infrared (NIR)) | Unitless | `1-65455`| `uint16` | `0` | 2.75e-05 \* DN - 0.2 |
|SR_B5 | Surface reflectance band 5 (Short Wavelength Infrared (SWIR) 1) | Unitless | `1-65455`| `uint16` | `0` | 2.75e-05 \* DN - 0.2 |
|SR_B7 | Surface reflectance band 7 (SWIR 2) | Unitless | `1-65455`| `uint16` | `0` | 2.75e-05 \* DN - 0.2 |
|QA_PIXEL | Pixel quality | Bit Index | `0-65535` | `uint16` | `1` | NA |
|QA_RADSAT | Radiometric saturation | Bit Index | `0-65535` | `uint16` | `0` | NA |
|SR_ATMOS \_OPACITY | Atmospheric opacity | Unitless | `0-32767` | `int16` | `-9999` | 0.001 \* DN |
|SR_CLOUD \_QA | Cloud mask quality | Bit Index | `0-255` | `uint8` | `0` | NA |
$^\dagger$ No data or fill value.
$^\ddagger$ Physical measurement can be derived from the Digital Number (DN) stored in the product using the conversion equation listed.
More information can be found in the [Landsat 4-7 Collection 2 Science Product Guide](https://www.usgs.gov/media/files/landsat-4-7-collection-2-level-2-science-product-guide).
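To illustrate the conversion, here is a minimal NumPy sketch (the DN values below are arbitrary examples, not real pixel data):

```python
import numpy as np

# Hypothetical DN values for an SR band; 0 is the fill/no-data value (Table 2).
dn = np.array([0, 7273, 21818, 43636], dtype=np.uint16)

# Apply the scale and offset from Table 2 (2.75e-05 * DN - 0.2),
# masking fill pixels with NaN so they drop out of later statistics.
reflectance = np.where(dn == 0, np.nan, 2.75e-05 * dn.astype(np.float64) - 0.2)
```

A DN of 21818 maps to a reflectance of roughly 0.4; the fill value 0 becomes NaN rather than a (meaningless) reflectance of -0.2.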
**Table 3: Landsat 8 Level-2 Surface Reflectance measurements**
The Landsat 8 Level-2 science product is generated using a different algorithm and has different output measurements compared to Landsat 5 and Landsat 7.
|Band ID|Description |Units | Range | Data type| No data$^\dagger$ | Conversion$^\ddagger$ |
|----------|-------------|----------------|----------------|:---------:|:----------:|:----------:|
|SR_B1 | Surface reflectance band 1 (Coastal Aerosol) | Unitless | `1-65455`| `uint16` | `0` | 2.75e-05 \* DN - 0.2 |
|SR_B2 | Surface reflectance band 2 (Blue) | Unitless | `1-65455`| `uint16` | `0` | 2.75e-05 \* DN - 0.2 |
|SR_B3 | Surface reflectance band 3 (Green) | Unitless | `1-65455`| `uint16` | `0` | 2.75e-05 \* DN - 0.2 |
|SR_B4 | Surface reflectance band 4 (Red) | Unitless | `1-65455`| `uint16` | `0` | 2.75e-05 \* DN - 0.2 |
|SR_B5 | Surface reflectance band 5 (NIR) | Unitless | `1-65455`| `uint16` | `0` | 2.75e-05 \* DN - 0.2 |
|SR_B6 | Surface reflectance band 6 (SWIR 1) | Unitless | `1-65455`| `uint16` | `0` | 2.75e-05 \* DN - 0.2 |
|SR_B7 | Surface reflectance band 7 (SWIR 2) | Unitless | `1-65455`| `uint16` | `0` | 2.75e-05 \* DN - 0.2 |
|QA_PIXEL | Pixel quality | Bit Index | `0-65535` | `uint16` | `1` | NA |
|QA_RADSAT | Radiometric saturation | Bit Index | `0-65535` | `uint16` | `0` | NA |
|SR_QA \_AEROSOL | Aerosol level | Bit Index | `0-255` | `uint8` | `1` | NA |
$^\dagger$ No data or fill value.
$^\ddagger$ Physical measurement can be derived from the Digital Number (DN) stored in the product using the conversion equation listed.
More information can be found in the [Landsat 8 OLI/TIRS Collection 2 Science Product Guide](https://www.usgs.gov/media/files/landsat-8-collection-2-level-2-science-product-guide) and the [Landsat 8-9 OLI/TIRS Collection 2 Level 2 Data Format Control Book](https://www.usgs.gov/media/files/landsat-8-9-olitirs-collection-2-level-2-data-format-control-book).
#### Quality assessment bands
Pixel quality assessment (QA_PIXEL) bands for Landsat 5, 7 and 8 are generated by the CFMask algorithm. Different bit definitions are used because the cirrus band is only available on Landsat 8. This band is relevant to both Surface Reflectance and Surface Temperature products.
**Table 4: Pixel quality assessment (QA_PIXEL) bit index.**
| Bit | Landsat 5 & 7 | Landsat 8 | Description Values |
|-----|------|------------|-------------------|
| 0 | Fill | Fill | 0 for image data; 1 for fill data |
| 1 | Dilated Cloud | Dilated Cloud | 0 for cloud is not dilated or no cloud; 1 for cloud dilation |
| 2 | Unused | Cirrus | 0 for cirrus confidence is not high; 1 for high confidence cirrus |
| 3 | Cloud | Cloud | 0 for cloud confidence is not high; 1 for high confidence cloud |
| 4 | Cloud Shadow | Cloud Shadow | 0 for Cloud Shadow Confidence is not high; 1 for high confidence cloud shadow |
| 5 | Snow | Snow | 0 for Snow/Ice Confidence is not high; 1 for high confidence snow cover |
| 6 | Clear | Clear | 0 if Cloud or Dilated Cloud bits are set; 1 if Cloud and Dilated Cloud bits are not set |
| 7 | Water | Water | 0 for land or cloud; 1 for water |
| 8-9 | Cloud Confidence | Cloud Confidence | 00 for no confidence level set; 01 Low confidence; 10 Medium confidence; 11 High confidence |
| 10-11 | Cloud Shadow Confidence | Cloud Shadow Confidence | 00 for no confidence level set; 01 Low confidence; 10 Reserved; 11 High confidence |
| 12-13 | Snow/Ice Confidence | Snow/Ice Confidence | 00 for no confidence level set; 01 Low confidence; 10 Reserved; 11 High confidence |
| 14-15 | Unused | Cirrus Confidence | 00 for no confidence level set; 01 Low confidence; 10 Reserved; 11 High confidence |
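Individual QA_PIXEL flags can be tested with bitwise operations. A minimal sketch (the QA values below are invented examples, not real pixels):

```python
import numpy as np

# Invented QA_PIXEL values: bit 3 (Cloud) set, bit 7 (Water) set, and all clear.
qa = np.array([0b0000000000001000,
               0b0000000010000000,
               0], dtype=np.uint16)

def bit_set(values, bit):
    """Return a boolean mask where the given bit is set."""
    return (values & (1 << bit)) > 0

cloud = bit_set(qa, 3)   # Table 4: bit 3 = Cloud
water = bit_set(qa, 7)   # Table 4: bit 7 = Water
```

The same helper works for any single-bit flag in Tables 4-6; two-bit confidence fields (bits 8-15) need a shift and a two-bit mask instead.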
Radiometric saturation quality assessment (QA_RADSAT) bands are different for Landsat 5, 7 and 8 because the sensors have different spectral bands. This band is relevant to both Surface Reflectance and Surface Temperature products.
**Table 5: Radiometric saturation quality assessment (QA_RADSAT) bit index.**
| Bit | Landsat 5 | Landsat 7 | Landsat 8 | Description Values |
|-----|-----------|-----------|-----------|--------------------|
| 0 | Band 1 (Blue) | Band 1 (Blue) | Band 1 (Coastal)| 0 no saturation; 1 saturated data |
| 1 | Band 2 (Green) | Band 2 (Green) | Band 2 (Blue) | 0 no saturation; 1 saturated data |
| 2 | Band 3 (Red) | Band 3 (Red) | Band 3 (Green) | 0 no saturation; 1 saturated data |
| 3 | Band 4 (NIR) | Band 4 (NIR) | Band 4 (Red) | 0 no saturation; 1 saturated data |
| 4 | Band 5 (SWIR1) | Band 5 (SWIR1) | Band 5 (NIR) | 0 no saturation; 1 saturated data |
| 5 | Band 6 (TIR) | Band 6L (TIR)† | Band 6 (SWIR1) | 0 no saturation; 1 saturated data |
| 6 | Band 7 (SWIR2) | Band 7 (SWIR2) | Band 7 (SWIR2) | 0 no saturation; 1 saturated data |
| 7 | Unused | Unused | Unused | 0 |
| 8 | Unused | Band 6H (TIR)‡ | Band 9 (Cirrus) | 0 no saturation; 1 saturated data |
| 9 | Dropped Pixel | Dropped Pixel | Unused | 0 Pixel present; 1 detector doesn’t have a value – no data |
| 10 | Unused | Unused | Unused | 0 |
| 11 | Unused | Unused | Terrain occlusion | 0 no terrain occlusion; 1 terrain occlusion |
| 12 | Unused | Unused | Unused | 0 |
| 13 | Unused | Unused | Unused | 0 |
| 14 | Unused | Unused | Unused | 0 |
| 15 | Unused | Unused |Unused | 0 |
$^\dagger$, $^\ddagger$ For Landsat 7 products, the Band 6 TOA brightness temperature product is generated from ETM+ Band 6 High gain (6H) and Band 6 Low gain (6L) merged together. The merged Band 6 is comprised of pixels that are not saturated in Band 6H. When Band 6H pixels are saturated with a brightness temperature outside of the 6H dynamic range (from 240K to 322K), they will be filled with pixels from the 6L band even if those pixels are saturated.
For Landsat 5 and 7, another cloud mask band (SR_CLOUD_QA) is available but is less accurate than the QA_PIXEL band.
**Table 6: Landsat 5 and Landsat 7 cloud mask (SR_CLOUD_QA) bit index.**
| Bit | Attribute |
|-----|-----------|
| 0 | Dark Dense Vegetation (DDV) |
| 1 | Cloud |
| 2 | Cloud shadow |
| 3 | Adjacent to cloud |
| 4 | Snow |
| 5 | Water |
| 6 | Unused |
| 7 | Unused |
For Landsat 8, aerosol retrieval information that may have impacted the product is provided in the SR_QA_AEROSOL band. The default "Aerosol Level" is Climatology (00), which means no aerosol correction was applied. Pixels with an "Aerosol Level" classified as high are not recommended for use.
**Table 7: Landsat 8 aerosol level (SR_QA_AEROSOL) bit index.**
| Bit | Flag | Description Values |
|-----|------|--------------------|
| 0 | Fill | 0 Pixel is not fill; 1 Pixel is fill |
| 1 | Valid aerosol retrieval | 0 Pixel retrieval is not valid; 1 Pixel retrieval is valid |
| 2 | Water | 0 Pixel is not water; 1 Pixel is water |
| 3 | Unused | 0 |
| 4 | Unused | 0 |
| 5 | Interpolated Aerosol | 0 Pixel is not aerosol interpolated; 1 Pixel is aerosol interpolated |
| 6-7 | Aerosol Level | 00 Climatology; 01 Low; 10 Medium; 11 High |
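The two-bit Aerosol Level field can be extracted by shifting and masking. A small sketch (the input values are invented examples):

```python
def aerosol_level(value):
    """Extract the two-bit Aerosol Level field (bits 6-7) from SR_QA_AEROSOL."""
    return (value >> 6) & 0b11

# 0b11000000 -> 3 (High), 0b01000000 -> 1 (Low), 0 -> 0 (Climatology)
high = aerosol_level(0b11000000)
low = aerosol_level(0b01000000)
climatology = aerosol_level(0)
```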
### Processing
Landsat Collection 2 Level-2 products are processed by the USGS from Collection 2 Level-1 inputs.
Landsat 8 OLI surface reflectance products are generated using the Land Surface Reflectance Code (LaSRC) algorithm. Landsat 4-5 TM and Landsat 7 ETM+ surface reflectance products are generated using the Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) algorithm.
### Media and example images
<img src="../_static/data_specs/Landsat_C2_specs/ls_libya.png" alt="Landsat composites for Libya" width="600" align="left"/>
**Figure 1: Landsat false color composites (highlighting vegetation) over an area in Tripoli District, Libya, showing changes between selected dates from 1984 to 2021.**
### Related services
* [Water Observations from Space](https://docs.digitalearthafrica.org/en/latest/data_specs/WOfS_specs.html)
* [GeoMAD cloud-free composite services](https://docs.digitalearthafrica.org/en/latest/data_specs/GeoMAD_specs.html)
* [Landsat Collection 2 Level-2 Surface Temperature](https://docs.digitalearthafrica.org/en/latest/data_specs/Landsat_C2_ST_specs.html)
### References
[USGS Collection Level-2 website](https://www.usgs.gov/core-science-systems/nli/landsat/landsat-collection-2)
### License
There are no restrictions on Landsat data downloaded from the USGS; it can be used or redistributed as desired. USGS request that you include a [statement of the data source](https://www.usgs.gov/centers/eros/data-citation?qt-science_support_page_related_con=0#qt-science_support_page_related_con) when citing, copying, or reprinting USGS Landsat data or images.
### Acknowledgements
Landsat Level- 2 Surface Reflectance Science Product courtesy of the U.S. Geological Survey.
## Data access
### Amazon Web Services S3
Landsat Collection 2 Level-2 is available in AWS S3, sponsored by the [Public Dataset Program](https://registry.opendata.aws/deafrica-landsat/).
**Table 8: AWS data access details.**
|AWS S3 details | |
|----------|-------------|
|Bucket ARD | `arn:aws:s3:::deafrica-landsat`|
|Region | `af-south-1` |
The bucket is in the AWS region `af-south-1` (Cape Town). Additional region specifications can be applied as follows:
`aws s3 ls --region=af-south-1 s3://deafrica-landsat/`
The file paths follow the format `collection02/level-2/standard/<sensor>/<year>/<path>/<row>/<scene_id>/`.
**Table 9: AWS file path convention.**
|File path element | Description |Example |
|----------|-------------|-----------------|
|`sensor`| Landsat sensor name: `tm`, `etm`, or `oli-tirs` for Landsat 5, 7, and 8 respectively | `oli-tirs` |
|`year` | Observation year | `2021` |
|`path` | Landsat orbit path id | `172` |
|`row` | Landsat orbit row id | `057` |
|`scene_id` | Landsat scene id | `LC08_L2SP_172057_20210101_20210308_02_T1` |
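A small sketch assembling such a prefix from Table 9's elements (the helper name `scene_prefix` is illustrative, not part of any DE Africa tooling):

```python
# Hypothetical helper building the S3 key prefix for one Landsat scene.
def scene_prefix(sensor, year, path, row, scene_id):
    return f"collection02/level-2/standard/{sensor}/{year}/{path}/{row}/{scene_id}/"

prefix = scene_prefix("oli-tirs", "2021", "172", "057",
                      "LC08_L2SP_172057_20210101_20210308_02_T1")
```

The resulting prefix can be appended to `s3://deafrica-landsat/` for listing or download.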
### OGC Web Services (OWS)
This product is available through DE Africa's OWS.
**Table 10: OWS data access details.**
|OWS details | |
|----------|-------------|
|Name | `DE Africa Services` |
|Web Map Services (WMS) URL | `https://ows.digitalearth.africa/wms?version=1.3.0` |
| Web Coverage Service (WCS) URL | `https://ows.digitalearth.africa/wcs?version=2.1.0`|
| Layer name | `ls5_sr`, `ls7_sr`, `ls8_sr` |
Digital Earth Africa OWS details can be found at [https://ows.digitalearth.africa/](https://ows.digitalearth.africa/).
For instructions on how to connect to OWS, see [this tutorial](https://training.digitalearthafrica.org/en/latest/OWS_tutorial.html).
### Open Data Cube (ODC)
The Landsat Collection 2 Level-2 products can be accessed through the Digital Earth Africa ODC API, which is available through the [Digital Earth Africa Sandbox](https://sandbox.digitalearth.africa/hub/login).
**ODC product names:** `ls5_sr`, `ls7_sr`, `ls8_sr`
Specific bands of data can be called by using either the default names or any of a band's alternative names, as listed in the table below. ODC `Datacube.load` commands without specified bands will load all bands.
**Table 11: Landsat 5 and Landsat 7 Level-2 Surface Reflectance (ODC product ls5_sr and ls7_sr) band names.**
|Band name| Alternative names| Fill value |
|----------|-------------|:------:|
|SR_B1 | band_1, blue | `0` |
|SR_B2 | band_2, green | `0` |
|SR_B3 | band_3, red | `0` |
|SR_B4 | band_4, nir | `0` |
|SR_B5 | band_5, swir_1 | `0` |
|SR_B7 | band_7, swir_2 | `0` |
|QA_PIXEL | pq, pixel_quality | `1` |
|QA_RADSAT | radsat, radiometric_saturation | `0` |
|SR_ATMOS_OPACITY | atmos_opacity | `-9999` |
|SR_CLOUD_QA | cloud_qa | `0` |
**Table 12: Landsat 8 Level-2 Surface Reflectance (ODC product ls8_sr) band names.**
|Band name| Alternative names| Fill value |
|----------|-------------|:------:|
|SR_B1 | band_1, coastal_aerosol | `0` |
|SR_B2 | band_2, blue | `0` |
|SR_B3 | band_3, green | `0` |
|SR_B4 | band_4, red | `0` |
|SR_B5 | band_5, nir | `0` |
|SR_B6 | band_6, swir_1 | `0` |
|SR_B7 | band_7, swir_2 | `0` |
|QA_PIXEL | pq, pixel_quality | `1` |
|QA_RADSAT | radsat, radiometric_saturation | `0` |
|SR_QA_AEROSOL | qa_aerosol, aerosol_qa | `1` |
Band names are case-sensitive.
For examples on how to use the ODC API, see the DE Africa [example notebook repository](https://github.com/digitalearthafrica/deafrica-sandbox-notebooks).
## Technical information
### Surface Reflectance
The surface reflectance products for Landsat 5, 7 and 8 are generated using two different methods.
Landsat 5 TM and Landsat 7 ETM+ Collection 2 Surface Reflectance are generated using the Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) algorithm. The software applies Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric correction routines to Level-1 data products. Water vapor, ozone, atmospheric height, aerosol optical thickness, and digital elevation are input with Landsat data to the Second Simulation of a Satellite Signal in the Solar Spectrum (6S) radiative transfer models to generate top of atmosphere (TOA) reflectance, surface reflectance, TOA brightness temperature, and masks for clouds, cloud shadows, adjacent clouds, land, and water.
Landsat 8 OLI Collection 2 Surface Reflectance data are generated using the Land Surface Reflectance Code (LaSRC), which makes use of the coastal aerosol band to perform aerosol inversion tests, uses auxiliary climate data from MODIS, and a unique radiative transfer model.
For more information on the different processing algorithms and caveats of the products, visit the [Landsat Collection 2 Surface Reflectance webpage](https://www.usgs.gov/core-science-systems/nli/landsat/landsat-collection-2-surface-reflectance).
## Association Rules Generation from Frequent Itemsets
Function to generate association rules from frequent itemsets
> from mlxtend.frequent_patterns import association_rules
## Overview
Rule generation is a common task in the mining of frequent patterns. _An association rule is an implication expression of the form $X \rightarrow Y$, where $X$ and $Y$ are disjoint itemsets_ [1]. A more concrete example based on consumer behaviour would be $\{Diapers\} \rightarrow \{Beer\}$ suggesting that people who buy diapers are also likely to buy beer. To evaluate the "interest" of such an association rule, different metrics have been developed. The current implementation makes use of the `confidence` and `lift` metrics.
### Metrics
The currently supported metrics for evaluating association rules and setting selection thresholds are listed below. Given a rule "A -> C", *A* stands for antecedent and *C* stands for consequent.
#### 'support':
$$\text{support}(A\rightarrow C) = \text{support}(A \cup C), \;\;\; \text{range: } [0, 1]$$
- introduced in [3]
The support metric is defined for itemsets, not association rules. The table produced by the association rule mining algorithm contains three different support metrics: 'antecedent support', 'consequent support', and 'support'. Here, 'antecedent support' computes the proportion of transactions that contain the antecedent A, and 'consequent support' computes the support for the itemset of the consequent C. The 'support' metric then computes the support of the combined itemset A $\cup$ C -- note that 'support' is bounded above by min('antecedent support', 'consequent support').
Typically, support is used to measure the abundance or frequency (often interpreted as significance or importance) of an itemset in a database. We refer to an itemset as a "frequent itemset" if its support is larger than a specified minimum-support threshold. Note that in general, due to the *downward closure* property, all subsets of a frequent itemset are also frequent.
#### 'confidence':
$$\text{confidence}(A\rightarrow C) = \frac{\text{support}(A\rightarrow C)}{\text{support}(A)}, \;\;\; \text{range: } [0, 1]$$
- introduced in [3]
The confidence of a rule A->C is the probability of seeing the consequent in a transaction given that it also contains the antecedent. Note that the metric is not symmetric or directed; for instance, the confidence for A->C is different than the confidence for C->A. The confidence is 1 (maximal) for a rule A->C if the consequent and antecedent always occur together.
#### 'lift':
$$\text{lift}(A\rightarrow C) = \frac{\text{confidence}(A\rightarrow C)}{\text{support}(C)}, \;\;\; \text{range: } [0, \infty]$$
- introduced in [4]
The lift metric is commonly used to measure how much more often the antecedent and consequent of a rule A->C occur together than we would expect if they were statistically independent. If A and C are independent, the Lift score will be exactly 1.
#### 'leverage':
$$\text{leverage}(A\rightarrow C) = \text{support}(A\rightarrow C) - \text{support}(A) \times \text{support}(C), \;\;\; \text{range: } [-1, 1]$$
- introduced in [5]
Leverage computes the difference between the observed frequency of A and C appearing together and the frequency that would be expected if A and C were independent. A leverage value of 0 indicates independence.
#### 'conviction':
$$\text{conviction}(A\rightarrow C) = \frac{1 - \text{support}(C)}{1 - \text{confidence}(A\rightarrow C)}, \;\;\; \text{range: } [0, \infty]$$
- introduced in [6]
A high conviction value means that the consequent is highly dependent on the antecedent. For instance, in the case of a perfect confidence score, the denominator becomes 0 (due to 1 - 1), for which the conviction score is defined as 'inf'. Similar to lift, if items are independent, the conviction is 1.
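The metrics above can be checked with a small worked example. The support values below are toy numbers, not taken from any real dataset:

```python
# Toy supports for a rule A -> C.
support_a, support_c, support_ac = 0.6, 0.5, 0.4

confidence = support_ac / support_a                # 0.4 / 0.6 ~ 0.667
lift = confidence / support_c                      # 0.667 / 0.5 ~ 1.333
leverage = support_ac - support_a * support_c      # 0.4 - 0.3 = 0.1
conviction = (1 - support_c) / (1 - confidence)    # 0.5 / 0.333 = 1.5
```

All four values are above their independence baselines (confidence > support(C), lift > 1, leverage > 0, conviction > 1), so under these toy numbers A and C co-occur more often than independence would predict.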
## References
[1] Tan, Steinbach, Kumar. Introduction to Data Mining. Pearson New International Edition. Harlow: Pearson Education Ltd., 2014. (pp. 327-414).
[2] Michael Hahsler, http://michael.hahsler.net/research/association_rules/measures.html
[3] R. Agrawal, T. Imielinski, and A. Swami. Mining associations between sets of items in large databases. In Proc. of the ACM SIGMOD Int'l Conference on Management of Data, pages 207-216, Washington D.C., May 1993
[4] S. Brin, R. Motwani, J. D. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket data. In SIGMOD 1997, Proceedings ACM SIGMOD International Conference on Management of Data, pages 255-264, Tucson, Arizona, USA, May 1997
[5] Piatetsky-Shapiro, G., Discovery, analysis, and presentation of strong rules. Knowledge Discovery in Databases, 1991: p. 229-248.
[6] Sergey Brin, Rajeev Motwani, Jeffrey D. Ullman, and Shalom Tsur. Dynamic itemset counting and implication rules for market basket data. In SIGMOD 1997, Proceedings ACM SIGMOD International Conference on Management of Data, pages 255-264, Tucson, Arizona, USA, May 1997
## Example 1 -- Generating Association Rules from Frequent Itemsets
The `association_rules()` function takes DataFrames of frequent itemsets as produced by the `apriori` function in *mlxtend.frequent_patterns*. To demonstrate its usage, we first create a pandas `DataFrame` of frequent itemsets as generated by the [`apriori`](./apriori.md) function:
```
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori
dataset = [['Milk', 'Onion', 'Nutmeg', 'Kidney Beans', 'Eggs', 'Yogurt'],
['Dill', 'Onion', 'Nutmeg', 'Kidney Beans', 'Eggs', 'Yogurt'],
['Milk', 'Apple', 'Kidney Beans', 'Eggs'],
['Milk', 'Unicorn', 'Corn', 'Kidney Beans', 'Yogurt'],
['Corn', 'Onion', 'Onion', 'Kidney Beans', 'Ice cream', 'Eggs']]
te = TransactionEncoder()
te_ary = te.fit(dataset).transform(dataset)
df = pd.DataFrame(te_ary, columns=te.columns_)
frequent_itemsets = apriori(df, min_support=0.6, use_colnames=True)
frequent_itemsets
```
The `association_rules()` function allows you to (1) specify your metric of interest and (2) the corresponding threshold. Currently implemented measures are **confidence** and **lift**. Let's say you are interested in rules derived from the frequent itemsets only if the level of confidence is above the 70 percent threshold (`min_threshold=0.7`):
```
from mlxtend.frequent_patterns import association_rules
association_rules(frequent_itemsets, metric="confidence", min_threshold=0.7)
```
## Example 2 -- Rule Generation and Selection Criteria
If you are interested in rules according to a different metric of interest, you can simply adjust the `metric` and `min_threshold` arguments. For example, if you are only interested in rules that have a lift score of >= 1.2, you would do the following:
```
rules = association_rules(frequent_itemsets, metric="lift", min_threshold=1.2)
rules
```
Pandas `DataFrames` make it easy to filter the results further. Let's say we are only interested in rules that satisfy the following criteria:
1. at least 2 antecedents
2. a confidence > 0.75
3. a lift score > 1.2
We could compute the antecedent length as follows:
```
rules["antecedent_len"] = rules["antecedents"].apply(lambda x: len(x))
rules
```
Then, we can use pandas' selection syntax as shown below:
```
rules[ (rules['antecedent_len'] >= 2) &
(rules['confidence'] > 0.75) &
(rules['lift'] > 1.2) ]
```
Similarly, using the Pandas API, we can select entries based on the "antecedents" or "consequents" columns:
```
rules[rules['antecedents'] == {'Eggs', 'Kidney Beans'}]
```
**Frozensets**
Note that the entries in the "itemsets" column are of type `frozenset`, which is a built-in Python type that is similar to a Python `set` but immutable, which makes it more efficient for certain query or comparison operations (https://docs.python.org/3.6/library/stdtypes.html#frozenset). Since `frozenset`s are sets, the item order does not matter. I.e., the query
`rules[rules['antecedents'] == {'Eggs', 'Kidney Beans'}]`
is equivalent to any of the following three
- `rules[rules['antecedents'] == {'Kidney Beans', 'Eggs'}]`
- `rules[rules['antecedents'] == frozenset(('Eggs', 'Kidney Beans'))]`
- `rules[rules['antecedents'] == frozenset(('Kidney Beans', 'Eggs'))]`
## Example 3 -- Frequent Itemsets with Incomplete Antecedent and Consequent Information
Most metrics computed by `association_rules` depend on the consequent and antecedent support scores of a given rule, as provided in the frequent itemset input DataFrame. Consider the following example:
```
import pandas as pd
data = {'itemsets': [['177', '176'], ['177', '179'],
                     ['176', '178'], ['176', '179'],
                     ['93', '100'], ['177', '178'],
                     ['177', '176', '178']],
        'support': [0.253623, 0.253623, 0.217391,
                    0.217391, 0.181159, 0.108696, 0.108696]}
freq_itemsets = pd.DataFrame(data)
freq_itemsets
```
Note that this is a "cropped" DataFrame that doesn't contain the support values of the item subsets. This can create problems if we want to compute the association rule metrics for, e.g., `176 => 177`.
For example, the confidence is computed as
$$\text{confidence}(A\rightarrow C) = \frac{\text{support}(A\rightarrow C)}{\text{support}(A)}, \;\;\; \text{range: } [0, 1]$$
But we do not have $\text{support}(A)$. All we know about "A"'s support is that it is at least 0.253623.
In such scenarios, where not all metrics can be computed due to an incomplete input DataFrame, you can use the `support_only=True` option, which will only compute the support column of a given rule, since it does not require as much information:
$$\text{support}(A\rightarrow C) = \text{support}(A \cup C), \;\;\; \text{range: } [0, 1]$$
"NaN's" will be assigned to all other metric columns:
```
from mlxtend.frequent_patterns import association_rules
res = association_rules(freq_itemsets, support_only=True, min_threshold=0.1)
res
```
To clean up the representation, you may want to do the following:
```
res = res[['antecedents', 'consequents', 'support']]
res
```
## API
```
with open('../../api_modules/mlxtend.frequent_patterns/association_rules.md', 'r') as f:
print(f.read())
```
```
import keras
keras.__version__
```
# A first look at a neural network
This notebook contains the code samples found in Chapter 2, Section 1 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
----
We will now take a look at a first concrete example of a neural network, which makes use of the Python library Keras to learn to classify
hand-written digits. Unless you already have experience with Keras or similar libraries, you will not understand everything about this
first example right away. You probably haven't even installed Keras yet. Don't worry, that is perfectly fine. In the next chapter, we will
review each element in our example and explain them in detail. So don't worry if some steps seem arbitrary or look like magic to you!
We've got to start somewhere.
The problem we are trying to solve here is to classify grayscale images of handwritten digits (28 pixels by 28 pixels), into their 10
categories (0 to 9). The dataset we will use is the MNIST dataset, a classic dataset in the machine learning community, which has been
around for almost as long as the field itself and has been very intensively studied. It's a set of 60,000 training images, plus 10,000 test
images, assembled by the National Institute of Standards and Technology (the NIST in MNIST) in the 1980s. You can think of "solving" MNIST
as the "Hello World" of deep learning -- it's what you do to verify that your algorithms are working as expected. As you become a machine
learning practitioner, you will see MNIST come up over and over again, in scientific papers, blog posts, and so on.
The MNIST dataset comes pre-loaded in Keras, in the form of a set of four Numpy arrays:
```
from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
```
`train_images` and `train_labels` form the "training set", the data that the model will learn from. The model will then be tested on the
"test set", `test_images` and `test_labels`. Our images are encoded as Numpy arrays, and the labels are simply an array of digits, ranging
from 0 to 9. There is a one-to-one correspondence between the images and the labels.
Let's have a look at the training data:
```
train_images.shape
len(train_labels)
train_labels
```
Let's have a look at the test data:
```
test_images.shape
len(test_labels)
test_labels
```
Our workflow will be as follow: first we will present our neural network with the training data, `train_images` and `train_labels`. The
network will then learn to associate images and labels. Finally, we will ask the network to produce predictions for `test_images`, and we
will verify if these predictions match the labels from `test_labels`.
Let's build our network -- again, remember that you aren't supposed to understand everything about this example just yet.
```
from keras import models
from keras import layers
network = models.Sequential()
network.add(layers.Dense(512, activation='relu', input_shape=(28 * 28,)))
network.add(layers.Dense(10, activation='softmax'))
```
The core building block of neural networks is the "layer", a data-processing module which you can conceive as a "filter" for data. Some
data comes in, and comes out in a more useful form. Precisely, layers extract _representations_ out of the data fed into them -- hopefully
representations that are more meaningful for the problem at hand. Most of deep learning really consists of chaining together simple layers
which will implement a form of progressive "data distillation". A deep learning model is like a sieve for data processing, made of a
succession of increasingly refined data filters -- the "layers".
Here our network consists of a sequence of two `Dense` layers, which are densely-connected (also called "fully-connected") neural layers.
The second (and last) layer is a 10-way "softmax" layer, which means it will return an array of 10 probability scores (summing to 1). Each
score will be the probability that the current digit image belongs to one of our 10 digit classes.
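The softmax itself can be sketched in plain NumPy (a simplified illustration, not Keras' internal implementation):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: shift by the max, exponentiate, normalize."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Three raw scores instead of ten, for brevity; the ten outputs of the
# final Dense layer are turned into probabilities the same way.
scores = softmax(np.array([2.0, 1.0, 0.1]))
```

The outputs are all positive, sum to 1, and preserve the ordering of the raw scores, which is why the largest output can be read as the predicted class.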
To make our network ready for training, we need to pick three more things, as part of the "compilation" step:
* A loss function: this is how the network will be able to measure how good a job it is doing on its training data, and thus how it will be
able to steer itself in the right direction.
* An optimizer: this is the mechanism through which the network will update itself based on the data it sees and its loss function.
* Metrics to monitor during training and testing. Here we will only care about accuracy (the fraction of the images that were correctly
classified).
The exact purpose of the loss function and the optimizer will be made clear throughout the next two chapters.
```
network.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
```
Before training, we will preprocess our data by reshaping it into the shape that the network expects, and scaling it so that all values are in
the `[0, 1]` interval. Previously, our training images for instance were stored in an array of shape `(60000, 28, 28)` of type `uint8` with
values in the `[0, 255]` interval. We transform it into a `float32` array of shape `(60000, 28 * 28)` with values between 0 and 1.
```
train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255
```
We also need to categorically encode the labels, a step which we explain in chapter 3:
```
from keras.utils import to_categorical
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
```
We are now ready to train our network, which in Keras is done via a call to the `fit` method of the network:
we "fit" the model to its training data.
```
network.fit(train_images, train_labels, epochs=5, batch_size=128)
```
Two quantities are being displayed during training: the "loss" of the network over the training data, and the accuracy of the network over
the training data.
We quickly reach an accuracy of 0.989 (i.e. 98.9%) on the training data. Now let's check that our model performs well on the test set too:
```
test_loss, test_acc = network.evaluate(test_images, test_labels)
print('test_acc:', test_acc)
```
Our test set accuracy turns out to be 97.8% -- that's quite a bit lower than the training set accuracy.
This gap between training accuracy and test accuracy is an example of "overfitting",
the fact that machine learning models tend to perform worse on new data than on their training data.
Overfitting will be a central topic in chapter 3.
This concludes our very first example -- you just saw how we could build and train a neural network to classify handwritten digits, in
less than 20 lines of Python code. In the next chapter, we will go over in detail every moving piece we just previewed, and clarify what is really
going on behind the scenes. You will learn about "tensors", the data-storing objects going into the network, about tensor operations, which
layers are made of, and about gradient descent, which allows our network to learn from its training examples.
<a href="https://colab.research.google.com/github/jdclifton2/praiseanalysis/blob/main/praise_to_and_from.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# The first part of this notebook loads the data. It was done by @ygg_anderson
```
import panel as pn
pn.extension()
import pandas as pd
import numpy as np
#import hvplot.pandas
import param, random
import datetime as dt
import matplotlib.pyplot as plt
%matplotlib inline
periods = [
"#17 May 7",
"#16 Apr 24",
"#15 Apr 9",
"#14 Mar 26",
"#13 Mar 12",
"#12 Feb 26",
"#11 Feb 12",
"#10 Jan 29",
"#9 Jan 15",
"#8 Jan 1",
"#7 Dec 18",
"#6 Dec 4",
"#5 Nov 20", #
"#4 Nov 6", #
"#3 Oct 23", #
"#2 Oct 9",
"#1 Sept 24", #
"#0 Sept 7 (historic)", #
]
data = []
for i, period in enumerate(periods):
if i not in (17, 16, 14, 12, 13):
df = pd.read_excel('data/TEC Praise Quantification.xlsx', skiprows=2, sheet_name=period,engine='openpyxl', usecols="A:M")
df[['v1','v2','v3']] = list(df.columns[6:9])
df.columns = list(df.columns[:6]) + ['v1 norm', 'v2 norm', 'v3 norm'] + list(df.columns[9:])
df['period'] = period
df = df.dropna(thresh=8)
data.append(df)
combined_data = pd.concat(data)
combined_data
```
```
quantifiers = combined_data[combined_data[['IH per Praise', 'IH per person', 'Unnamed: 12']].isna().all(axis=1)]
quantifiers
```
```
receivers = combined_data[~combined_data[['IH per Praise', 'IH per person', 'Unnamed: 12']].isna().all(axis=1)]
receivers
```
---
# Investigations from octopus🐙
I'm going to do some basic analysis of the parts of the data set that are encoded with words.
```
import seaborn as sns #seaborn is my plotting tool of choice
receivers.columns
```
### Where does praise happen?
```
sources = receivers.groupby("Unnamed: 3").count()
sources
ax = sns.barplot(y = sources.index, x = sources["To"], order = sources.sort_values("To", ascending = False).index)
ax.set(xlabel="Count", ylabel="Source", title = "Where Praise is Given")
room_sources = receivers.groupby("Room").count()
room_sources
plt.figure(figsize=(10,8))
ax = sns.barplot(y = room_sources.index, x = room_sources["To"], order = room_sources.sort_values("To", ascending = False).index)
ax.set(xlabel="Count", ylabel="Source", title = "Where Praise is Given")
```
### Do some basic cleaning and analysis.
Now I'm going to create a new data frame that incorporates how many times each user gave and received praise. The next few cells accomplish this by using pivot_tables to create data frames which are then merged, with missing values replaced by 0.
```
praise_to = receivers.pivot_table(index = ["To"], aggfunc = 'size' ).to_frame(name = "to")
praise_to
praise_from = receivers.pivot_table(index = ["From"], aggfunc = 'size').to_frame(name = "from")
praise_from
praise_to_and_from = pd.concat([praise_from, praise_to], axis = 1)
praise_to_and_from = praise_to_and_from.fillna(0)
praise_to_and_from.head(5)
```
### A naming issue in our data
So now we have **praise_to_and_from** as a data frame where each row is a user, and we see how many times they gave praise (measured in **from**) and received praise (measured in **to**).
There is a **naming** issue that should be addressed at some point. Look below.
```
zep_df = praise_to_and_from.filter(like = "zep", axis = 0)
zep_df
ygg_df = praise_to_and_from.filter(like = "ygg", axis = 0)
ygg_df
```
The issue is that some users receive praise with variations on their names. It would be good to consolidate these users for this analysis. However, this is an issue that's unlikely to affect most users, so we will come back to it.
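One lightweight way to consolidate such variants is an alias map from each known spelling to a canonical name; the variant spellings and counts below are hypothetical placeholders, not values from the dataset:

```
# Hypothetical alias map: each known variant spelling -> canonical user name
aliases = {"Zeptimus": "zeptimusQ", "zeptimusq": "zeptimusQ"}

counts = {"zeptimusQ": 10, "Zeptimus": 3, "zeptimusq": 1}  # made-up praise counts

consolidated = {}
for name, count in counts.items():
    canonical = aliases.get(name, name)  # fall back to the name itself
    consolidated[canonical] = consolidated.get(canonical, 0) + count

print(consolidated)  # {'zeptimusQ': 14}
```

With the real data frame, the same idea could be applied with `praise_to_and_from.rename(index=aliases).groupby(level=0).sum()`.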
### Relationship Between Givers and Receivers
Let's make a scatterplot to see if we think there is a relationship between these two quantities.
```
ax = sns.scatterplot(x = "from", y = "to", data = praise_to_and_from)
ax.set(title = "Praise: To and From")
```
There doesn't appear to be much of a relationship, primarily because **so many users** have zero values of **from**; this includes many users who are frequent praise recipients but do not give much. This is partially distorted by the naming issues above.
```
sns.histplot(data = praise_to_and_from, x = "to")
sns.histplot(data = praise_to_and_from.query("to > 0"), x = "to")
sum(praise_to_and_from["to"] <= 1)/len(praise_to_and_from)
praise_to_and_from["from"] == 0
sum(praise_to_and_from["from"] == 0)/len(praise_to_and_from)
praise_to_and_from[praise_to_and_from["from"] > 0]["from"].mean()
```
Almost 80% of users here have not given any praise (this might be skewed *slightly* by the naming issue above, but probably not much). This is probably described by a Pareto distribution -- with 20% of the users giving 100% of the praise. Among users who gave some praise, they gave an average of roughly 90 praises.
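The 80/20 claim can be made concrete with a small concentration computation; the numbers below are illustrative, not the real data:

```
# Illustrative check of the "20% of users give all the praise" claim
given = sorted([0] * 80 + [90] * 20, reverse=True)  # 80 of 100 users gave nothing
top_n = int(len(given) * 0.2)                       # size of the top 20% of givers
share = sum(given[:top_n]) / sum(given)
print(f"top 20% of users give {share:.0%} of all praise")  # 100%
```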
```
praise_to_and_from.describe()
praise_to_and_from["generosity"] = praise_to_and_from["from"]/(praise_to_and_from["to"] + 1)
praise_to_and_from
praise_to_and_from.query("generosity > 0").query("generosity < 1")
```
# Investigations from courier
```
# Run this cell to install the pygraphviz library
#!apt-get install python3-dev graphviz libgraphviz-dev pkg-config
#!pip install pygraphviz
import networkx as nx
import pygraphviz
```
We will attempt to make a graph from the dataframe.
First we will begin with a simple proof of concept.
```
df = pd.read_csv("recievers.csv")
df.head()
df.columns
G = nx.DiGraph()
edges = df[['To','From']]
for ind, tup in edges[1:2].iterrows():
to = tup[0]
fro = tup[1]
G.add_edge(fro, to)
G.edges
G = nx.nx_agraph.to_agraph(G)
G.layout(prog="dot")
G.draw("file.png")
edges[1:2]
```

Now we create a function that will automate this process.
```
def viz_graph(df, file_name):
"""
Creates a visualization of a graph created from a dataset of praise.
:param df: The dataframe that contains who the praise is from and who it is to.
Note that this dataframe should ONLY contain columns To and From.
:param file_name: The name of the image that will be saved to your computer.
"""
G = nx.DiGraph()
for ind, tup in df.iterrows():
to = tup[0]
fro = tup[1]
G.add_edge(fro, to)
G = nx.nx_agraph.to_agraph(G)
G.layout(prog="dot")
G.draw(file_name)
return G
#viz_graph(edges, "praise_graph.png")
```
You can use boolean conditions to get more specific graphs.
```
#viz_graph(edges[edges["To"] == "zeptimusQ"], "zeptimus_praise_graph.png")
def to_user_graph(df, user, file_name):
    """
    Creates a visualization of a graph created from a dataset of praise.
    :param df: The dataframe (or path to a CSV file) that contains who the praise is from and who it is to.
    :param user: All praise given to this user will be graphed.
    :param file_name: The name of the image that will be saved to your computer.
    """
    if isinstance(df, str):  # allow passing a CSV path instead of a dataframe
        df = pd.read_csv(df)
    edges = df[["To", "From"]]
    G = nx.DiGraph()
    for ind, tup in edges[edges["To"] == user].iterrows():
        to = tup[0]
        fro = tup[1]
        G.add_edge(fro, to)
    G = nx.nx_agraph.to_agraph(G)
    G.layout(prog="dot")
    G.draw(file_name)
    return G
def from_user_graph(df, user, file_name):
    """
    Creates a visualization of a graph created from a dataset of praise.
    :param df: The dataframe (or path to a CSV file) that contains who the praise is from and who it is to.
    :param user: All praise given by this user will be graphed.
    :param file_name: The name of the image that will be saved to your computer.
    """
    if isinstance(df, str):  # allow passing a CSV path instead of a dataframe
        df = pd.read_csv(df)
    edges = df[["To", "From"]]
    G = nx.DiGraph()
    # filter on the "From" column and keep the edge direction giver -> receiver
    for ind, tup in edges[edges["From"] == user].iterrows():
        to = tup[0]
        fro = tup[1]
        G.add_edge(fro, to)
    G = nx.nx_agraph.to_agraph(G)
    G.layout(prog="dot")
    G.draw(file_name)
    return G
```
# You can utilize these 2 forms to create user graphs. The first one will create a graph of donations from the provided user and the second form will graph donations to the provided user.
```
df_str = "recievers.csv" #@param {type:"string"}
user = "zeptimusQ" #@param {type:"string"}
file_name = "zeptimusQ_to_graph.png" #@param {type:"string"}
from_user_graph(df=df_str, user=user,file_name=file_name)
df_str = "recievers.csv" #@param {type:"string"}
user = "zeptimusQ" #@param {type:"string"}
file_name = "zeptimusQ_from_graph.png" #@param {type:"string"}
to_user_graph(df=df_str, user=user,file_name=file_name)
df.columns
df.isna().sum()
```
We observe many null values in the final 3 columns. We will drop these for now.
```
df.drop(['Cred per Praise', 'Cred per person', 'To.1'], inplace=True, axis=1)
```
# Date inconsistency
It appears that the format of the dates in the data is inconsistent. Some entries opt for the format Month-Day-Year while others use Year-Month-Day.
```
df['Date']
df["Date"].tail()
df["Date"].head()
```
We will drop any NA dates.
```
df = df.dropna(subset=["Date"])  # drop rows whose Date is missing
from dateutil import parser
```
First we begin by turning all of the dates in the column into strings.
```
df["Date"] = df["Date"].apply(str)
```
Some other dates just have the word "dup" instead of a date. We can get rid of these.
```
df[df["Date"] == "dup"]
df = df[df["Date"] != "dup"]
df = df[df["Date"] != "nan"]
def normalize_dates(date_str):
"""
This function takes in a string that contains a date and converts it to the
format Year-Month-Day.
:param date_str: The string containing the date.
:return: The date in the format Year-Month-Day.
"""
d = parser.parse(date_str)
return d.strftime("%Y-%m-%d")
```
A quick sanity check to ensure that we have the correct results after this function was applied.
```
df["Date"].apply(normalize_dates)
df["Date"]
df["Date"] = pd.to_datetime(df["Date"])
```
Now we add features for the year, month, and day.
```
df["Year"] = df["Date"].dt.year
df["Month"] = df["Date"].dt.month
df["Day"] = df["Date"].dt.day
```
Now we can group the data based on these features.
```
days = df.groupby(['Day']).To.nunique().reset_index()
days
```
Consider a plot of the number of praises given broken down by day of the month. We see that the most praise is given on the 5th day of the month.
```
fig, ax = plt.subplots()
ax.bar(days["Day"], days["To"], color='k')
ax.set_ylim(0, days["To"].max()+5)
fig.autofmt_xdate()
plt.xlabel('Day of Month')
plt.ylabel('Counts')
plt.show()
months = df.groupby(['Month']).To.nunique().reset_index()
```
Consider a plot of the number of praises given broken down by the month. We see that the most praise is given in April. Interestingly, no praise at all is given between the months of June-October.
```
fig, ax = plt.subplots()
ax.bar(months["Month"], months["To"], color='k')
ax.set_ylim(0, months["To"].max()+5)
fig.autofmt_xdate()
plt.xlabel('Month')
plt.ylabel('Counts')
plt.show()
dates = df.groupby(['Date']).To.nunique().reset_index()
fig, ax = plt.subplots()
ax.plot_date(dates["Date"], dates["To"], color='k', xdate=True)
ax.set_ylim(0, dates["To"].max()+5)
fig.autofmt_xdate()
plt.xlabel('Date')
plt.ylabel('Counts')
plt.show()
sns.lineplot(x="Date", y="To", data=dates)
dates.sort_values(by=['To'], ascending=False)
df.to_csv("processed_praise.csv")
```
<a href="https://colab.research.google.com/github/christianhidber/easyagents/blob/master/jupyter_notebooks/intro_cartpole.ipynb"
target="_parent">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
# CartPole Gym environment with TfAgents
## Install packages (gym, tfagents, tensorflow,....)
#### suppress package warnings, prepare matplotlib, if in colab: load additional packages for rendering
```
%matplotlib inline
import matplotlib.pyplot as plt
import sys
import warnings
warnings.filterwarnings('ignore')
if 'google.colab' in sys.modules:
!apt-get update >/dev/null
!apt-get install xvfb >/dev/null
!pip install pyvirtualdisplay >/dev/null
from pyvirtualdisplay import Display
Display(visible=0, size=(960, 720)).start()
else:
# for local installation
sys.path.append('..')
```
#### install easyagents
```
import sys
if 'google.colab' in sys.modules:
!pip install easyagents >/dev/null
```
The fc_layers argument defines the policy's neural network architecture. Here we use 3 fully connected layers
with 100 neurons in the first, 50 in the second and 25 in the final layer.
By default fc_layers=(75,75) is used.
The first argument of the train method is a list of callbacks. Through callbacks we define the plots generated during
training, the logging behaviour or control training duration.
By passing [plot.State(), plot.Loss(), plot.Actions(), plot.Rewards()] we add in particular the State() plot,
depicting the last observation state of the last evaluation episode. plot.Actions() displays a histogram of the
actions taken for each episode played during the last evaluation period.
Besides num_iterations there are quite a few parameters to specify the exact training duration (e.g.
num_episodes_per_iteration, num_epochs_per_iteration, max_steps_per_episode,...).
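The two paragraphs above refer to a PPO training cell that is missing from this excerpt. A plausible reconstruction, mirroring the DqnAgent call shown later in this notebook (the exact training arguments are assumptions):

```
from easyagents.agents import PpoAgent
from easyagents.callbacks import plot

# 3 fully connected layers: 100, 50 and 25 neurons (the default is fc_layers=(75, 75))
ppoAgent = PpoAgent('CartPole-v0', fc_layers=(100, 50, 25))
ppoAgent.train([plot.State(), plot.Loss(), plot.Actions(), plot.Rewards()],
               num_iterations=50)
```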
## Switching the algorithm
Switching from Ppo to Dqn is easy, essentially just replace PpoAgent with DqnAgent (the evaluation may take a few
minutes):
```
from easyagents.agents import DqnAgent
from easyagents.callbacks import plot
%%time
dqnAgent = DqnAgent('CartPole-v0', fc_layers=(100, ))
dqnAgent.train([plot.State(), plot.Loss(), plot.Actions(), plot.Rewards()],
num_iterations=20000, num_iterations_between_eval=1000)
```
Since Dqn by default only takes 1 step per iteration (and thus an episode spans over several iterations) we increased
the num_iterations parameter.
## Next: custom training, creating a movie & switching backends.
* see
[Orso on colab](https://colab.research.google.com/github/christianhidber/easyagents/blob/master/jupyter_notebooks/intro_orso.ipynb)
(an example of a gym environment implementation based on a routing problem)
Tutorials table of content:
- [Tutorial 1: Run a first scenario](./Tutorial-1_Run_your_first_scenario.ipynb)
- [Tutorial 2: Add contributivity measurements methods](./Tutorial-2_Add_contributivity_measurement.ipynb)
- Tutorial 3: Use a custom dataset
# Tutorial 3: Use a homemade dataset
With this example, we dive deeper into the potential of the library and run a scenario on a new dataset that we will implement ourselves.
## 1 - Prerequisites
In order to run this example, you'll need to:
* use Python 3.7+
* install this package https://pypi.org/project/mplc/
If you did not follow our first tutorials, it is highly recommended to [take a look at them!](https://github.com/SubstraFoundation/distributed-learning-contributivity/tree/master/notebooks/examples/)
```
!pip install mplc
```
## 2 - Context
In collaborative data science projects partners sometimes need to train a model on multiple datasets, contributed by different data providing partners. In such cases the partners might have to measure how much each dataset involved contributed to the performance of the model. This is useful for example as a basis to agree on how to share the reward of the ML challenge or the future revenues derived from the predictive model, or to detect possible corrupted datasets or partners not playing by the rules. The library explores this question and the opportunity to implement some mechanisms helping partners in such scenarios to measure each dataset's *contributivity* (as *contribution to the performance of the model*).
In the [first tutorial](./Tutorial-1_Run_your_first_scenario.ipynb), you learned how to parametrize and run a scenario.
In the [second tutorial](./Tutorial-2_Add_contributivity_measurement.ipynb), you discovered how to add to your scenario run one of the contributivity measurement methods available.
In this third tutorial, we are going to use a custom dataset.
### The dataset : Sentiment140
We are going to use a subset of the [sentiment140](http://help.sentiment140.com/for-students) dataset and try to
classify short film reviews into positive and negative sentiments.
*The whole machine learning process is inspired from this [article](https://medium.com/@alyafey22/sentiment-classification-from-keras-to-the-browser-7eda0d87cdc6)*
Please note that the library provides a really easy way to adapt a common single-partner machine learning use case with TensorFlow to a multi-partner case, with contributivity measurement.
```
# imports
import seaborn as sns
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
import re
from keras.models import Sequential
from keras.layers import Dense, GRU, Embedding
from mplc.dataset import Dataset
from mplc.scenario import Scenario
sns.set()
```
## 3 - Generation, and preparation of the dataset
The scenario object needs a dataset object to run. In the previous tutorials, we indicated which one to generate automatically by passing the name of a pre-implemented dataset to the scenario constructor.
Here, we will create this dataset object ourselves and pass it to the scenario constructor. To do so, we are going to create a new class which inherits from the mplc.Dataset abstract class.
A subclass of Dataset needs a few attributes and methods. First, the constructor of the Dataset object takes a few arguments.
### Dataset generator:
The structure of the dataset generator is represented below:
```python
dataset = Dataset(
"name",
x_train,
x_test,
y_train,
y_test,
input_shape,
num_classes,
)
```
#### Data labels
The data labels can take whatever shape you need, with only one condition:
the labels must be convertible into string format in a way that preserves equality. If label1 is equal to (respectively different from) label2, then str(label1) must be equal to (respectively different from) str(label2).
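A quick sanity check of this condition on toy label sets (purely illustrative):

```
def labels_ok(labels):
    """Check that str() preserves (in)equality between labels."""
    return all((a == b) == (str(a) == str(b)) for a in labels for b in labels)

print(labels_ok([0, 1, 2, 0]))  # True  -> valid label set
print(labels_ok([0, 1, "1"]))   # False -> 1 != "1" but str(1) == str("1")
```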
#### Model generator
This method needs to be implemented; it returns the model to use, which will be trained by the `Scenario` object.
Note: It is mandatory to have loss and accuracy as metrics for your model.
#### Train/validation/test splits
The `Dataset` constructor (called via `super()`) must be provided with separate train and test sets (referred to as the global train set and global test set).
The global train set is then further split into a global train set and a global validation set by the function `train_val_split_global`. Please note that if this function is not overwritten, sklearn's `train_test_split` function will be called by default, and 10% of the training set will be used as the validation set.
In the multi-partner learning computations, the global validation set is used for early stopping and the global test set is used for performance evaluation.
The global train set is then split amongst partners (according to the scenario configuration) to populate the partner's local datasets.
For each partner, the local dataset will be split into separated train, validation and test sets, using the `train_test_split_local` and `train_val_split_local` methods.
These are not mandatory; by default the local dataset will not be split.
Note that currently the local validation and test sets are not used, but they are available for further developments of multi-partner learning and contributivity measurement approaches.
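To make the default behaviour concrete, here is a minimal pure-Python sketch of a 90/10 global train/validation split; this is a hypothetical stand-in for illustration, not the library's actual implementation (which delegates to sklearn's `train_test_split`):

```
def train_val_split_global(x_train, y_train, val_proportion=0.1):
    """Split a global train set: keep the first 90% for training, last 10% for validation."""
    split_idx = int(len(x_train) * (1 - val_proportion))
    return (x_train[:split_idx], y_train[:split_idx],
            x_train[split_idx:], y_train[split_idx:])

x = list(range(100))
y = [v % 2 for v in x]
x_tr, y_tr, x_val, y_val = train_val_split_global(x, y)
print(len(x_tr), len(x_val))  # 90 10
```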
### Dataset construction
Now that we know all of that, we can create our dataset class.
#### Download and unzip data if needed
```
!curl https://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip --output trainingandtestdata.zip
!unzip trainingandtestdata.zip
```
#### Define our Dataset class
```
class Sentiment140(Dataset):
def __init__(self):
x, y = self.load_data()
self.max_tokens = self.getMax(x)
self.num_words = None
        self.word_index = self.tokenize(x)
        self.num_words = len(self.word_index)
        x = self.create_sequences(x)
        y = self.preprocess_dataset_labels(y)
        self.input_shape = self.max_tokens
        self.num_classes = len(np.unique(y))
        print('length of the dictionary ', len(self.word_index))
        print('max token ', self.max_tokens)
        print('num classes', self.num_classes)
        (x_train, x_test) = train_test_split(x, shuffle=False)
        (y_train, y_test) = train_test_split(y, shuffle=False)
        super(Sentiment140, self).__init__(dataset_name='sentiment140',
                                           num_classes=self.num_classes,
                                           input_shape=self.input_shape,
                                           x_train=x_train,
                                           y_train=y_train,
                                           x_test=x_test,
                                           y_test=y_test)

    @staticmethod
    def load_data():  # load the data, transform the .csv into usable dataframes
        df_train = pd.read_csv("training.1600000.processed.noemoticon.csv", encoding="raw_unicode_escape", header=None)
        df_test = pd.read_csv("testdata.manual.2009.06.14.csv", encoding="raw_unicode_escape", header=None)
        df_train.columns = ["polarity", "id", "date", "query", "user", "text"]
        df_test.columns = ["polarity", "id", "date", "query", "user", "text"]
        # We keep only a fraction of the whole dataset
        df_train = df_train.sample(frac=0.1)
        x = df_train["text"]
        y = df_train["polarity"]
        return x, y

    # Preprocessing methods
    @staticmethod
    def process(txt):
        out = re.sub(r'[^a-zA-Z0-9\s]', '', txt)
        out = out.split()
        out = [word.lower() for word in out]
        return out

    @staticmethod
    def getMax(data):
        max_tokens = 0
        for txt in data:
            if max_tokens < len(txt.split()):
                max_tokens = len(txt.split())
        return max_tokens

    def tokenize(self, x, thresh=5):
        # the texts to tokenize are now passed in explicitly (the original
        # version referenced an undefined global `x`)
        count = dict()
        idx = 1
        word_index = dict()
        for txt in x:
            words = self.process(txt)
            for word in words:
                if word in count.keys():
                    count[word] += 1
                else:
                    count[word] = 1
        most_counts = [word for word in count.keys() if count[word] >= thresh]
        for word in most_counts:
            word_index[word] = idx
            idx += 1
        return word_index
def create_sequences(self,data):
tokens = []
for txt in data:
words = self.process(txt)
seq = [0] * self.max_tokens
i = 0
for word in words:
start = self.max_tokens-len(words)
if word.lower() in self.word_index.keys():
seq[i+start] = self.word_index[word]
i+=1
tokens.append(seq)
return np.array(tokens)
@staticmethod
def preprocess_dataset_labels( label):
label = np.array([e/4 for e in label])
return label
def generate_new_model(self): # Define the model generator
model = Sequential()
embedding_size = 8
model.add(Embedding(input_dim=self.num_words,
output_dim=embedding_size,
input_length=self.max_tokens,
name='layer_embedding'))
model.add(GRU(units=16, name = "gru_1",return_sequences=True))
model.add(GRU(units=8, name = "gru_2" ,return_sequences=True))
model.add(GRU(units=4, name= "gru_3"))
model.add(Dense(1, activation='sigmoid',name="dense_1"))
model.compile(loss='binary_crossentropy',
optimizer="Adam",
metrics=['accuracy'])
return model
```
#### Create dataset
And we can eventually generate our object!
```
my_dataset = Sentiment140()
```
## 4 - Create the custom scenario
The dataset can be passed to the scenario through the `dataset` argument.
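For completeness, here is a hypothetical sketch of that step, continuing from the cells above; the constructor parameters (`partners_count`, `amounts_per_partner`, `epoch_count`) are assumptions based on the library's documented examples and should be checked against your installed version of mplc:

```
# Hypothetical sketch -- parameter names assumed, check the mplc docs
my_scenario = Scenario(
    partners_count=3,                     # number of simulated data partners
    amounts_per_partner=[0.4, 0.3, 0.3],  # how the global train set is split
    dataset=my_dataset,                   # our custom Sentiment140 dataset
    epoch_count=2,
)
my_scenario.run()
```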
# That's it!
Now you can explore our other tutorials for a better overview of what can be done with `mplc`!
This work is collaborative; enthusiasts are welcome to comment on open issues and PRs or open new ones.
Should you be interested in this open effort and would like to share any question, suggestion or input, you can use the following channels:
- This Github repository (issues or PRs)
- Substra Foundation's [Slack workspace](https://substra-workspace.slack.com/join/shared_invite/zt-cpyedcab-FHYgpy08efKJ2FCadE2yCA), channel `#workgroup-mpl-contributivity`
- Email: hello@substra.org

# Programming and Database Fundamentals for Data Scientists - EAS503
Python classes and objects.
In this notebook we will discuss the notion of classes and objects, which are a fundamental concept in object-oriented programming. Using the keyword `class`, one can define a class.
Before learning about how to define classes, we will first understand the need for defining classes.
### A Simple Banking Application
Read data from `csv` files containing customer and account information, find all customers with more than \$25,000 in their bank account, and send them a letter with some offer (which requires finding their address).
```
# Logical design
import csv
# load customer information
customerMap = {}
with open('customers.csv','r') as f:
rd = csv.reader(f)
next(rd)
for row in rd:
customerMap[int(row[0])] = (row[1],row[2])
# load account information
accountsMap = {}
with open('accounts.csv','r') as f:
rd = csv.reader(f)
next(rd)
for row in rd:
if int(row[1]) not in accountsMap.keys():
accountsMap[int(row[1])] = []
l = accountsMap[int(row[1])]
l.append(int(row[2]))
accountsMap[int(row[1])] = l
customerMap
accountsMap
for k in accountsMap.keys():
if sum(accountsMap[k]) > 25000:
print(customerMap[k])
# OOD
class Customer:
def __init__(self, customerid, name, address):
self.__name = name
self.__customerid = customerid
self.__address = address
self.__accounts = []
def add_account(self,account):
self.__accounts.append(account)
def get_total(self):
s = 0
for a in self.__accounts:
s = s + a.get_amount()
return s
def get_name(self):
return self.__name
class Account:
def __init__(self,accounttype,amount):
self.__accounttype = accounttype
self.__amount = amount
def get_amount(self):
return self.__amount
import csv
customers = {}
with open('./customers.csv') as f:
reader = csv.reader(f)
next(reader)
for row in reader:
customer = Customer(row[0],row[1],row[2])
customers[row[0]] = customer
with open('./accounts.csv') as f:
reader = csv.reader(f)
next(reader)
for row in reader:
customerid = row[1]
account = Account(row[0],int(row[2]))
customers[customerid].add_account(account)
for c in customers.keys():
if customers[c].get_total() > 25000:
print(customers[c].get_name())
```
## Defining Classes
More details about `class` definition
```
# this class has no __init__ function
class myclass:
def mymethod_myclass(self):
print("hey")
myobj = myclass()
myobj.mymethod_myclass()
# this class has no __init__ function
class myclass:
# we define a field
__classtype='My Class'
def mymethod(self):
print("This is "+self.__classtype)
def getClasstype(self):
return self.__classtype
# making fields private
myobj = myclass()
myobj.mymethod()
print(myobj.getClasstype())
myobj = myclass()
myobj.mymethod()
# this class has no __init__ function
class myclass:
# we define a global field
classtype='My Class'
def mymethod(self):
print("this is a method")
self.a = 'g'
#print("This is "+self.classtype) # note that we are explicitly referencing the field of the class
def mymethod2(self):
print("This is "+self.classtype)
print(self.a)
m = myclass()
m.mymethod()
type(m)
myobj = myclass()
myobj.mymethod()
myobj.mymethod2()
```
#### Issues with defining fields outside the `__init__` function
If a class-level field is mutable, it is shared across all instances:
```
# this class has no __init__ function
class myclass:
# we define a field
version='1.0.1'
classtypes=['int']
def __init__(self):
self.a = ['m']
def mymethod(self):
print(self.classtypes) # note that we are explicitly referencing the field of the class
print(self.a)
def mystaticmethod():
print('This class is open source')
myobj1 = myclass()
myobj2 = myclass()
myobj1.mymethod()
myobj2.mymethod()
myobj1.classtypes.append('float')
myobj1.a.append('n')
myobj1.mymethod()
myobj2.mymethod()
```
#### How to avoid the above issue?
Define mutable fields within `__init__`
```
# this class has an __init__ function
class myclass:
def __init__(self):
# we define a field
self.classtypes=['int']
def mymethod(self):
print(self.classtypes) # note that we are explicitly referencing the field of the class
myobj1 = myclass()
myobj2 = myclass()
myobj1.mymethod()
myobj2.mymethod()
myobj1.classtypes.append('float')
myobj1.mymethod()
myobj2.mymethod()
# you can directly access the field
myobj1.mymethod()
```
#### Hide fields from external use
```
class account:
def __init__(self,u,p):
self.username = u
self.password = p
act = account('chandola','chandola')
print(act.password)
class account:
def __init__(self,u,p):
self.__username = u
self.__password = p
def getUsername(self):
return self.__username
def checkPassword(self,p):
if p == self.__getPassword():
return True
else:
return False
def __getPassword(self):
return self.__password
act = account('chandola','chandola')
print(act.getUsername())
print(act.checkPassword('chandola'))
print(act.__getPassword())  # raises AttributeError: __getPassword is private
# this class has an __init__ function
class myclass:
def __init__(self):
# we define a field
self.__classtypes=['int']
myobj1 = myclass()
myobj1.__classtypes
# the private field will be accessible to the class methods
class myclass:
def __init__(self):
# we define a field
self.__classtypes=['int']
def appendType(self,newtype):
self.__classtypes.append(newtype)
myobj1 = myclass()
myobj1.appendType('float')
# still cannot access the field outside
myobj1.__classtypes
# solution -- create a getter method
class myclass:
def __init__(self):
# we define a field
self.__classtypes=['int']
def appendType(self,newtype):
self.__classtypes.append(newtype)
def getClasstypes(self):
return self.__classtypes
myobj1 = myclass()
myobj1.appendType('float')
myobj1.getClasstypes()
print(['s','g','h'])
```
One can create `getter` and `setter` methods to manipulate fields. While the names of the methods can be arbitrary, a good programming practice is to use `getFieldNameWithoutUnderscores()` and `setFieldNameWithoutUnderscores()`.
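For example, for a private field `self.__balance` the convention gives:

```
class Wallet:
    def __init__(self, balance):
        self.__balance = balance

    def getBalance(self):            # getter for the private __balance field
        return self.__balance

    def setBalance(self, balance):   # setter for the private __balance field
        self.__balance = balance

w = Wallet(100)
w.setBalance(250)
print(w.getBalance())  # 250
```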
## Inheritance in Python
Ability to define subclasses.
Let us assume that we want to have defined a class called `Employee` that has some information about a bank employee and some supporting methods.
```
class Employee:
def __init__(self,firstname,lastname,empid):
self.__firstname = firstname
self.__lastname = lastname
self.__empid = empid
# following is a special function used by the Python in-built print() function
def __str__(self):
return "Employee name is "+self.__firstname+" "+self.__lastname
def checkid(self,inputid):
if inputid == self.__empid:
return True
else:
return False
def getfirstname(self):
return self.__firstname
def getlastname(self):
return self.__lastname
emp1 = Employee("Homer","Simpson",777)
print(emp1)
print(emp1.checkid(777))
```
Now we want to create a new class called `Manager` which retains some properties of an `Employee` but adds some more.
```
class Manager(Employee):
def __init__(self,firstname,lastname,empid):
super().__init__(firstname,lastname,empid)
mng1 = Manager("Charles","Burns",666)
print(mng1)
```
But we want to add extra fields and set them in the constructor
```
class Manager(Employee):
def __init__(self,firstname,lastname,empid,managerid):
super().__init__(firstname,lastname,empid)
self.__managerid = managerid
def checkmanagerid(self,inputid):
if inputid == self.__managerid:
return True
else:
return False
mng1 = Manager("Charles","Burns",666,111)
print(mng1)
mng1.checkid(666)
mng1.checkmanagerid(111)
```
You can modify methods of base classes
```
class Manager(Employee):
def __init__(self,firstname,lastname,empid,managerid):
super().__init__(firstname,lastname,empid)
self.__managerid = managerid
def checkmanagerid(self,inputid):
if inputid == self.__managerid:
return True
else:
return False
def __str__(self):
        # The commented line would fail: inside Manager, self.__firstname is
        # mangled to self._Manager__firstname, but the field belongs to Employee
        #return "Manager name is "+self.__firstname+" "+self.__lastname
return "Manager name is "+self.getfirstname()+" "+self.getlastname()
mng1 = Manager("Charles","Burns",666,111)
print(mng1)
```
**Remember** - Derived classes cannot access private fields of the base class directly
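The reason is Python's name mangling: a field named `__x` inside class `Base` is actually stored as `_Base__x`, so lookups from a derived class resolve to a different mangled name. A small self-contained sketch (hypothetical classes, not from the notebook):

```python
class Base:
    def __init__(self):
        self.__secret = 42      # actually stored as _Base__secret

class Derived(Base):
    def peek(self):
        # this looks up _Derived__secret, which does not exist
        return self.__secret

d = Derived()
try:
    d.peek()
except AttributeError:
    print("Derived cannot see Base's private field")
print(d._Base__secret)          # -> 42, the mangled name it was stored under
```

This is why `Manager.__str__` must go through the public `getfirstname()` / `getlastname()` accessors.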
### Inheriting from multiple classes
Consider a scenario where you have an additional class, `Citizen`, that holds other information about a person. Can we create a derived class that inherits the properties of both the `Employee` and the `Citizen` class?
```
class Citizen:
def __init__(self,ssn,homeaddress):
self.__ssn = ssn
self.__homeaddress = homeaddress
def __str__(self):
return "Person located at "+self.__homeaddress
ctz1 = Citizen("123-45-6789","742 Evergreen Terrace")
print(ctz1)
# multiple inheritance is straightforward: list both base classes
class Manager2(Employee,Citizen):
def __init__(self,firstname,lastname,empid,managerid,ssn,homeaddress):
Citizen.__init__(self,ssn,homeaddress)
Employee.__init__(self,firstname,lastname,empid)
self.__managerid = managerid
def __str__(self):
return "Manager name is "+Employee.getfirstname(self)+" "+Employee.getlastname(self)+", "+Citizen.__str__(self)
mgr2 = Manager2("Charles","Burns",666,111,"123-45-6789","742 Evergreen Terrace")
print(mgr2)
```
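When both base classes define the same attribute, Python resolves the lookup using the method resolution order (MRO), which follows the order of the base-class list. A small sketch (hypothetical classes, not from the notebook):

```python
class A:
    def who(self):
        return "A"

class B:
    def who(self):
        return "B"

class C(A, B):      # A comes first in the base list
    pass

print(C().who())                              # -> A
print([cls.__name__ for cls in C.__mro__])    # -> ['C', 'A', 'B', 'object']
```

So in `Manager2(Employee, Citizen)`, an attribute defined on both bases would be found on `Employee` first.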
# Random effects on the genetic structure of populations
***
We study basic population models of the genetic structure of populations, as presented in the FMI course ["Въведение в изчислителната биология" (Introduction to Computational Biology), Assoc. Prof. P. Rashkov (IMI-BAS)](http://www.math.bas.bg/nummeth/rashkov/teaching/), 2018-2019
***
## A stochastic model for a population with two phenotypes
We consider a population of white-winged and black-winged moths.
The wing color of an individual in the population is determined by a locus with two alleles, which we denote **W** and **w**.
When the genotype at the locus is:
- **WW** or **Ww** - the moth is white-winged
- **ww** - the moth is black-winged
#### Hardy-Weinberg law
>In an ideal population the ratio between the alleles remains unchanged over the generations.
"Ideal" here means the following conditions hold (for one locus with 2 alleles):
- Generations are non-overlapping
- Mating between individuals is random
- All individuals are equally likely to survive
- There are no mutations
- The frequency of a given allele is the same throughout the population
#### An improved model
Let $\alpha$ denote the fraction of white-winged moths (WW or Ww) that survive, and $\gamma$ the fraction of black-winged moths (ww) that survive.
Clearly, if $\alpha > \gamma$ the white-winged moths have an advantage; conversely, if $\gamma > \alpha$ the black-winged moths do.
Let $p_n = \frac{\text{number of W alleles}}{\text{total number of alleles}}$
To study the genotype of future generations, we will use the [Punnett square](https://en.wikipedia.org/wiki/Punnett_square):
<table style="width: 50%; text-align: center !important;">
<tbody>
<tr>
<td></td>
<td></td>
<th colspan=2>Female</th>
</tr>
<tr>
<td></td>
<td></td>
<th>W</th>
<th>w</th>
</tr>
<tr>
<th rowspan=2>Male</th>
<th>W</th>
<td>$\alpha p_n^2$</td>
<td>$\alpha p_n (1 - p_n)$</td>
</tr>
<tr>
<th>w</th>
<td>$\alpha p_n (1 - p_n)$</td>
<td>$\gamma (1 - p_n)^2$</td>
</tr>
</tbody>
</table>
It follows that the probabilities for the genotype of the offspring are:
- **WW**: $\alpha p_n^2$
- **Ww**: $2 \alpha p_n (1 - p_n)$
- **ww**: $\gamma (1 - p_n)^2$
Then for $p_{n+1}$ we obtain:
\begin{equation}
p_{n+1} = \frac{\alpha p_n^2 + 2 \alpha p_n (1 - p_n) \frac{1}{2}}{\alpha p_n^2 + 2 \alpha p_n (1 - p_n) + \gamma (1 - p_n)^2} \Leftrightarrow p_{n+1} = \frac{\alpha p_n}{p_n^2(\gamma - \alpha) - 2 p_n (\gamma - \alpha) + \gamma}
\end{equation}
- If $\alpha = \gamma$, we recover the Hardy-Weinberg law (the ratio between the alleles remains unchanged)
- If $\alpha \neq \gamma$, we look for equilibrium points. We can also see them on the plot:
```
import math
import matplotlib.pyplot as plt
from numpy import *
def f(pn, alpha=0.5, gamma=0.7):
return (alpha * pn) / (pn**2 * (gamma - alpha) - 2 * pn * (gamma - alpha) + gamma)
def show_balance_points(initial, **kwargs):
previous = initial
for i in range(50):
current = f(previous, **kwargs)
plt.plot(previous, current, 'bo')
previous = current
plt.xlabel("p_n")
plt.ylabel("p_n+1")
plt.show()
```
When $\alpha > \gamma$, $p_n \rightarrow 1$, so the white-winged moths have the advantage:
```
show_balance_points(0.3, alpha=0.7, gamma=0.3)
```
When $\alpha < \gamma$, $p_n \rightarrow 0$, so the black-winged moths have the advantage:
```
show_balance_points(0.3, alpha=0.4, gamma=0.6)
```
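The two equilibrium points can also be checked numerically: iterating the map drives $p_n$ toward 1 when $\alpha > \gamma$ and toward 0 when $\alpha < \gamma$. A small sketch (it re-defines the map `f` from above so the cell runs on its own):

```python
def f(p, alpha, gamma):
    return (alpha * p) / (p**2 * (gamma - alpha) - 2 * p * (gamma - alpha) + gamma)

def limit(p0, alpha, gamma, steps=100000):
    # iterate p_{n+1} = f(p_n) starting from p0
    p = p0
    for _ in range(steps):
        p = f(p, alpha, gamma)
    return p

print(limit(0.3, alpha=0.7, gamma=0.3))  # approaches 1: white-winged wins
print(limit(0.3, alpha=0.4, gamma=0.6))  # approaches 0: black-winged wins
```

Note the convergence toward $p = 1$ is slow (the derivative of $f$ at the fixed point equals 1), which is why many iterations are used.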
## Stochastic models of genetic drift
> **Genetic drift** is a change in gene (allele) frequencies that is random and has no adaptive character.
>The genetic structure of natural populations is **strongly dependent on random genetic drift**, since random effects can:
>- destroy the genetic diversity built up by mutation
>- counteract natural selection,
>- build statistical associations between different loci in the genome
>Clarifying most questions in evolutionary biology therefore requires taking random effects into account.
>The importance of random effects **depends on many parameters of the model** (the population size or the mutation rate).
>In this exercise we will derive simple stochastic models in order to separate the roles of the different factors and to get a clearer picture of genetic drift.
### Basic model
In the basic model we consider only one locus with alleles A and a.
The two alleles have frequencies fA and fa = 1 − fA.
For simplicity we assume that generations are discrete and that mutation and natural selection occur in each of them.
In principle we would have to simulate mutation and natural selection as random processes.
To simplify the model, we consider an approximation in which:
1. mutation and selection are both deterministic,
2. the finite population size is accounted for by drawing a sample of N genomes (N is the population size) after mutation and natural selection have taken place.
```
def model(population_size=1000, mutation_rate=0.0001, selection_strength=0, generations=5000, initial_frequency=0.1, sampling=True):
    frequencies = []
    previous_frequency = initial_frequency
    for i in range(1, generations):
        # Mutation: m(1 − fA) + (1 − m)fA
        frequency = mutation_rate * (1 - previous_frequency) + (1 - mutation_rate) * previous_frequency
        # Selection against A with strength s:
        average_adaptability = (1 - selection_strength) * frequency + (1 - frequency)
        frequency = frequency * (1 - selection_strength) / average_adaptability
        # Sampling (genetic drift): draw N genomes from a binomial distribution
        if sampling:
            frequency = random.binomial(population_size, frequency) / population_size
        frequencies.append(frequency)
        previous_frequency = frequency
    plt.plot(frequencies)
    plt.show()
model(population_size=1000, generations=5000, initial_frequency=0.1)
model(population_size=10000, generations=5000, initial_frequency=0.1)
model(population_size=10000000, generations=5000, initial_frequency=0.1)
model(population_size=10000, generations=5000, initial_frequency=0.1)
model(population_size=10000, generations=5000, initial_frequency=0.1, sampling=False)
model(population_size=10000, generations=500, initial_frequency=0.1, sampling=False)
model(population_size=10000, generations=5000, initial_frequency=0.6, sampling=False)
model(population_size=10000, generations=5000, initial_frequency=0.6, sampling=True)
model(population_size=10000, generations=5000, initial_frequency=0.5, sampling=False)
model(population_size=10000, generations=5000, initial_frequency=0.5, sampling=True)
model(population_size=10000, generations=5000, initial_frequency=0.1, sampling=False, mutation_rate=0.03)
model(population_size=10000, generations=5000, initial_frequency=0.1, sampling=True, mutation_rate=0.03)
model(population_size=10000000, generations=5000, initial_frequency=0.1, sampling=True, mutation_rate=0.03)
model(population_size=10000, generations=5000, initial_frequency=0.1, mutation_rate=0.03, selection_strength=0.09)
model(population_size=10000, generations=5000, initial_frequency=0.1, mutation_rate=0.01, selection_strength=0.01, sampling=False)
model(population_size=10000, generations=5000, initial_frequency=0.1, mutation_rate=0.01, selection_strength=0.09, sampling=False)
```
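With sampling switched off the model is fully deterministic, so the long-run frequency can be found by simply iterating one generation to a fixed point. A standalone sketch (re-implementing the mutation and selection updates used above):

```python
def step(f, m, s):
    # one deterministic generation, same update rules as the model above
    f = m * (1 - f) + (1 - m) * f       # mutation
    w_bar = (1 - s) * f + (1 - f)       # mean fitness
    return f * (1 - s) / w_bar          # selection against A

freq = 0.1
for _ in range(100000):
    freq = step(freq, m=0.01, s=0.09)
print(freq)  # mutation-selection balance: A persists at a low frequency
```

The deterministic trajectory settles at a balance between mutation (pushing frequencies toward each other) and selection (pushing the A frequency down), which is the flat line the `sampling=False` runs approach.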
### Extended model (crossing over / recombination)
```
DEFAULT_FREQUENCIES = {'ab': 0.1, 'Ab': 0.25, 'aB': 0.25, 'AB': 0.4}
def _opposite_allel(allel):
if ord(allel) < 97:
return allel.lower()
return allel.upper()
def _different_allel(locus, diff_index):
new_locus = ''
for i, allel in enumerate(locus):
if (i + 1) in diff_index:
new_locus += _opposite_allel(allel)
else:
new_locus += allel
return new_locus
def _mutate(frequencies, m): # m - mutation_rate
new_frequencies = dict()
for locus, freq in frequencies.items():
new_frequencies[locus] = ((1 - m)**2)*freq +\
(1 - m) * m * (frequencies[_different_allel(locus, (1,))] + frequencies[_different_allel(locus, (2,))]) +\
(m**2)*frequencies[_different_allel(locus, (1,2))]
return new_frequencies
def _select(f, s): # f - frequencies, s - selection strengths (dicts per locus type)
new_frequencies = dict()
average_adaptability = sum([s[locus] * f[locus] for locus in f])
for locus in f:
new_frequencies[locus] = f[locus] * s[locus] / average_adaptability
return new_frequencies
def _recombinate(f, r): # f - frequencies, r - recombination rate
    D = f['ab'] * f['AB'] - f['Ab'] * f['aB'] # linkage disequilibrium coefficient
new_frequencies = dict()
new_frequencies['ab'] = f['ab'] - r * D
new_frequencies['AB'] = f['AB'] - r * D
new_frequencies['Ab'] = f['Ab'] + r * D
new_frequencies['aB'] = f['aB'] + r * D
return new_frequencies
def model_two_loci(
population_size=1000, mutation_rate=0.0001, selection_strengths=None,
generations=5000, initial_frequencies=None, recombination_rate=0.1):
initial_frequencies = initial_frequencies or DEFAULT_FREQUENCIES
    assert abs(sum(initial_frequencies.values()) - 1.0) < 1e-9, "Frequencies must sum to 1.0"
frequencies_values = list()
previous_frequencies = initial_frequencies
for i in range(1, generations):
# Mutation:
frequencies = _mutate(previous_frequencies, mutation_rate)
# Selection:
frequencies = _select(frequencies, selection_strengths)
# Recombination:
frequencies = _recombinate(frequencies, recombination_rate)
# Sampling:
freqs = sorted(frequencies.items())
freqs_values = [x[1] for x in freqs]
freqs_values = [x / population_size for x in random.multinomial(population_size, freqs_values)]
frequencies_values.append(freqs_values)
frequencies = dict(zip([x[0] for x in freqs], freqs_values))
previous_frequencies = frequencies
graph = plt.plot(frequencies_values)
first_legend = plt.legend(graph, sorted(previous_frequencies))
plt.ylabel("frequency")
plt.xlabel("generation")
plt.rcParams["figure.figsize"] = 10, 8
plt.show()
model_two_loci(
initial_frequencies={'ab': 1, 'Ab': 0, 'aB': 0, 'AB': 0},
selection_strengths={'ab': 1, 'Ab': 1.01, 'aB': 1.01, 'AB': 1.02},
population_size=10000, generations=1000, recombination_rate=0.1)
model_two_loci(
initial_frequencies={'ab': 1, 'Ab': 0, 'aB': 0, 'AB': 0},
selection_strengths={'ab': 1, 'Ab': 1.01, 'aB': 1.01, 'AB': 1.02},
population_size=10000, generations=1000, recombination_rate=0.4)
```
```
import twint
import os
import nest_asyncio
nest_asyncio.apply()
import logging
import pandas as pd
# logging.basicConfig(filename='twint_data_collection.log',level=logging.DEBUG)
# logging.debug('This message should go to the log file')
# logging.warning('And this, too')
pd.set_option('display.max_colwidth', None)
# Configure
c = twint.Config()
c.Username = "neuripsconf"
c.Store_object = True
# c.User_full = True
# Run
twint.run.Following(c)
follow_list = twint.output.follows_list
len(follow_list)
import json
# for i in ["AndrewYNg", "ylecun", "icmlconf"]:
all_users_info_dict = []
def get_user_info(username):
c = twint.Config()
c.Username = str(username)
c.Store_object = True
twint.run.Lookup(c)
users = twint.output.users_list[-1]
    print(username)
print(users.username)
user_info_dict = twint.storage.write_meta.userData(users)
print(user_info_dict)
all_users_info_dict.append(user_info_dict)
users.bio
c.Username = "ylecun"
c.Store_object = True
twint.run.Lookup(c)
users = twint.output.users_list[0]
users.name
follow_list = twint.output.follows_list
len(follow_list)
ML_KEYWORDS = ["ai", "ml", "artificial intelligence", "machine learning", "deeplearning", "machinelearning", "nlproc", "nlp", "computer vision", "computervision", "cv", " reinforcement learning", "rl", "kaggle", "datascience", "google brain", "deepmind", "googleai"]
test = ""
if any(word in str(test).lower() for word in ML_KEYWORDS):
print ("hi")
relevant_user = []
import twint
import os
import nest_asyncio
nest_asyncio.apply()
import logging
logging.basicConfig(filename='twint_data_collection.log',level=logging.DEBUG)
def is_relevant_user(user):
twint.output.clean_lists()
c = twint.Config()
c.Username = str(user)
c.Store_object = True
twint.run.Lookup(c)
# print (twint.output.users_list)
user = twint.output.users_list[0]
print (user.name)
if any(word in str(user.bio).lower() for word in ML_KEYWORDS):
return True, user
else:
return False, None
def main():
relevant_user.append("neuripsconf")
for target in relevant_user:
# logging.info('target user: ', target," ====> Starting to find followings <====")
# Configure
c = twint.Config()
c.Username = str(target)
c.Store_object = True
# Run
twint.run.Following(c)
follow_list = twint.output.follows_list
# logging.info("====> Starting to find relevant users <====")
for user in follow_list:
rel, rel_user_info = is_relevant_user(user)
if rel:
if user not in relevant_user:
relevant_user.append(user)
# logging.info ("==> User :" + str(user) + " added!")
if len(relevant_user) % 100 == 0:
# logging.info ("====> Starting to find write to files <====")
with open('relevant_user.txt', 'w') as f:
for item in relevant_user:
f.write("%s\n" % item)
# logging.info("====> Completed Writting to file! <====")
# else:
# logging.info("==> User :" + str(user) + " already present in the list!")
# else:
# logging.info("==> User :" + str(user) + " not added!")
import twint
import os
import nest_asyncio
import logging
import json
import pandas as pd
nest_asyncio.apply()
logging.basicConfig(filename="twint_data_collection.log", level=logging.DEBUG)
ML_KEYWORDS = [
"ai ",
"ml ",
    ".ai",
    "fast.ai",
    "artificial intelligence",
"machine learning",
"deeplearning",
"machinelearning",
"nlproc",
"nlp ",
"computer vision",
"computervision",
"cv ",
"reinforcement learning",
"rl ",
"kaggle",
"datascience",
"data science",
"google brain",
"deepmind",
"googleai",
"data scientist",
"pattern analysis",
"statistical modelling",
"computational learning",
"natural language processing",
"vision and learning",
"data visualization",
"matplotlib",
"computer science",
"data ethics",
"stats ",
"deepmind",
"intelligent systems",
"a.i.",
"pytorch",
"tensorflow",
"keras",
"theano",
"bayesian statistics",
"openai",
"forecasting"
]
def write_to_file(relevant_user):
if len(relevant_user) % 10 == 0:
with open("relevant_user.txt", "w+") as f:
for item in relevant_user:
f.write("%s\n" % item)
def write_user_info(relevant_user_info_list):
if len(relevant_user_info_list) > 100:
relevant_user_info_df = pd.DataFrame(relevant_user_info_list)
        main_csv = pd.read_csv("relevant_user_info.csv")
        # DataFrame.append returns a new frame; the result must be re-assigned
        main_csv = main_csv.append(relevant_user_info_df, ignore_index=True)
        main_csv.to_csv("relevant_user_info.csv", index=False)
empty_list = []
return empty_list
return relevant_user_info_list
def is_relevant_user(user):
twint.output.clean_lists()
c = twint.Config()
c.Username = str(user)
c.Store_object = True
# c.Hide_output = True
twint.run.Lookup(c)
users_list = twint.output.users_list
user = twint.output.users_list[0]
user_info_dict = twint.storage.write_meta.userData(user)
if any(word in str(user.bio).lower() for word in ML_KEYWORDS):
print ( "====> ", user_info_dict)
return True, user_info_dict
else:
return False, None
def main():
relevant_user = []
relevant_user_info_list = []
relevant_user.append("neuripsconf")
for target in relevant_user:
print("==> ", target)
c = twint.Config()
c.Username = str(target)
# c.Hide_output = True
c.Store_object = True
twint.run.Following(c)
follow_list = twint.output.follows_list
for user in follow_list:
print("hi")
rel, rel_user_info = is_relevant_user(user)
if rel:
relevant_user_info_list.append(rel_user_info)
relevant_user_info_list = write_user_info(relevant_user_info_list)
if rel and user not in relevant_user:
relevant_user.append(user)
write_to_file(relevant_user)
break
main()
if True and True:
print ("hi")
c = twint.Config()
c.Username = "noneprivacy"
# c.Hide_output = True
c.Pandas = True
# c.Store_pandas = True
c.Pandas_clean=True
c.Store_object = True
c.Limit= 20
twint.run.Lookup(c)
followed = twint.storage.panda.User_df
twint.storage.panda.clean()
followed.shape
followed.values
twint.output.clean_lists()
c = twint.Config()
c.Username = "noneprivacy"
c.Store_object = True
c.Hide_output = True
c.Store_object = True
twint.run.Lookup(c)
users_list = twint.output.users_list
user = twint.output.users_list[0]
user_info_dict = twint.storage.write_meta.userData(user)
relevant_user_info_list = []
relevant_user_info_list.append(user_info_dict)
with open('relevant_user_info.json', 'w') as fout:
json.dump(relevant_user_info_list , fout)
main_csv = pd.DataFrame({})
main_csv.head()
with open('relevant_user.txt') as f:
lines = f.read().splitlines()
# lines[24:]
import twint
c = twint.Config()
c.Username = "neuripsconf"
# c.Custom["tweet"] = ["id"]
# c.Custom["user"] = ["bio"]
c.Limit = 10
c.Store_csv = True
c.Output = "neuripsconf.csv"
twint.run.Search(c)
ls ../twitter_thought_leader/data/raw
import json
data = json.load(open("../twitter_thought_leader/data/raw/user_info.json"))
len(data)
import pandas as pd
df = pd.DataFrame(data)
df.head()
df.to_csv("../twitter_thought_leader/data/raw/results_user_info.csv", index=False)
# /home/hustle/playground/twitter_thought_leader/data/raw/backup_july_22/tweets_1800+_users_1K_tweets_22_July
from os import listdir
from os.path import isfile, join
mypath = "/home/hustle/playground/twitter_thought_leader/data/raw/backup_july_22/tweets_1800+_users_1K_tweets_22_July/tweets"
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
downloaded_user = []
for i in onlyfiles:
fname = i.split(".")[0]
downloaded_user.append(fname)
len(set(downloaded_user))
with open("../twitter_thought_leader/data/raw/"+ "relevant_user.txt") as f:
relevant_user = f.read().splitlines()
len(set(relevant_user))
relevant_user_b2 = list(set(relevant_user) - set(downloaded_user))
len(relevant_user_b2)
with open("../twitter_thought_leader/data/raw/"+ "relevant_user_batch_2.txt", "w+") as f:
for item in relevant_user_b2:
f.write("%s\n" % item)
import tweepy
import os
import time
def twitter_auth():
    access_token = os.environ.get('TWITTER_ACCESS_KEY')
    access_token_secret = os.environ.get('TWITTER_ACCESS_SECRET_KEY')
    consumer_key = os.environ.get('TWITTER_API_KEY')
    consumer_secret = os.environ.get('TWITTER_API_SECRET_KEY')
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    return tweepy.API(auth)
def collect_follower_using_twitter_api():
    api = twitter_auth()
    follow_dict = {}
    with open("../twitter_thought_leader/data/raw/" + "relevant_user.txt") as f:
        relevant_user = f.read().splitlines()
    for user in relevant_user:
        # cursor over the accounts this user follows; pause to respect rate limits
        follow_dict[str(user)] = tweepy.Cursor(api.friends, screen_name=str(user)).items()
        time.sleep(2)
    return follow_dict
import pandas as pd
df = pd.read_csv("../twitter_thought_leader/data/raw/relevant_user_info_complete.csv",engine='python')
df.shape
drop_dup_df = df.drop_duplicates()
drop_dup_df.shape
drop_dup_df["bio"]
ML_KEYWORDS =[
"#ai",
" ai",
"ai ",
"_ai",
"ai,",
"ai.",
"ai+",
"a.i.",
"#ml",
"ml.",
".ml",
"ml@",
"ml,",
"ml ",
".ai",
"#rl",
" rl ",
"#cv",
"cv ",
"ai/",
"ml/",
"rl/",
"fast.ai",
"artificial intelligence",
"artificialintelligence",
"machine learn",
"machinelearning",
"deep learn",
"deeplearning",
"nlproc",
"nlp",
"stanfordnlp",
"stanfordai",
"stanfordailabs",
"data sci",
"computer vision",
"computervision",
"reinforcement learning",
"kaggle",
"datascience",
"data science",
"google brain",
"deepmind",
"deep mind",
"googleai",
"google ai",
"googlebrain",
"data scientist",
"pattern analysis",
"statistical modelling",
"computational learning",
"natural language processing",
"vision and learning",
"data visualization",
"matplotlib",
"computer science",
"data ethics",
"stats ",
"autonomous cars",
"gan",
"openai",
"icml",
"neurips",
"intelligent systems",
"pytorch",
"tensorflow",
"keras",
"theano",
"bayesian statistics",
"openai",
"forecasting",
"iclr"
]
test = ""
if any(word in str(test).lower() for word in ML_KEYWORDS):
print ("hi")
def is_relevant(test):
if any(word in str(test).lower() for word in ML_KEYWORDS):
return True
return False
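# --- Sketch (not part of the original pipeline): keyword matching on word
# boundaries. The hand-padded variants above ("ai ", " rl ", "cv ", ...) try
# to avoid substring hits by hand; a regex with \b does this systematically,
# so "ai" does not match "air". The keyword list here is a small hypothetical one.
import re
_KEYWORDS = ["ai", "ml", "nlp", "machine learning", "deep learning", "pytorch"]
_KEYWORD_RE = re.compile(
    r"\b(" + "|".join(re.escape(w) for w in _KEYWORDS) + r")\b",
    re.IGNORECASE)
def is_relevant_regex(bio):
    return bool(_KEYWORD_RE.search(str(bio)))
print(is_relevant_regex("Research in machine learning"))  # True
print(is_relevant_regex("Repair your air conditioner"))   # False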
df = pd.read_csv("/home/hustle/playground/twitter_thought_leader/data/raw/2nd_degree_relevant_user_info.csv",engine="python")
df.shape
rel_df = df[df["bio"].apply(is_relevant)]
rel_df = rel_df.drop_duplicates(subset=['id'])
rel_df.shape
non_rel_df = df[~df["bio"].apply(is_relevant)]
non_rel_df = non_rel_df.drop_duplicates(subset=['id'])
non_rel_df.shape
non_rel_df.to_csv("/home/hustle/playground/twitter_thought_leader/data/raw/non_rel_user_info.csv", index=False)
rel_df = rel_df.drop_duplicates(subset=['id'])
rel_df.shape
rel_df.to_csv("/home/hustle/playground/twitter_thought_leader/data/raw/2nd_degree_relevant_user_info.csv", index=False)
import os
import glob
import pandas as pd
os.chdir("/home/hustle/playground/twitter_thought_leader/data/raw/follow_lists")
extension = 'csv'
all_filenames = [i for i in glob.glob('*.{}'.format(extension))]
all_filenames
combined_csv = pd.concat([pd.read_csv(f, engine="python") for f in all_filenames ])
#export to csv
combined_csv.shape
combined_csv_rel_df = combined_csv[combined_csv["bio"].apply(is_relevant)]
non_rel_df = combined_csv[~combined_csv["bio"].apply(is_relevant)]
combined_csv_rel_df = combined_csv_rel_df.drop_duplicates(subset=['id'])
combined_csv_rel_df.to_csv("/home/hustle/playground/twitter_thought_leader/data/raw/rel_user_info.csv", mode="a", index=False, header=False)
rel_user_info = pd.read_csv("/home/hustle/playground/twitter_thought_leader/data/raw/rel_user_info.csv",engine="python")
rel_user_info.shape
combined_rel_df = rel_user_info[rel_user_info["bio"].apply(is_relevant)]
combined_rel_df.shape
combined_rel_df = combined_rel_df.drop_duplicates(subset=['id'])
combined_rel_df.shape
combined_rel_df.to_csv("/home/hustle/playground/twitter_thought_leader/data/raw/rel_user_info.csv", index=False)
pd.Series(combined_rel_df.username).to_csv("/home/hustle/playground/twitter_thought_leader/data/raw/4k_rel_user.txt",index=False)
df = pd.read_csv("/home/hustle/playground/twitter_thought_leader/data/raw/rel_user_info.csv",engine="python")
rel_df = df[df["bio"].apply(is_relevant)]
non_rel_df = df[~df["bio"].apply(is_relevant)]
rel_df.to_csv("/home/hustle/playground/twitter_thought_leader/data/raw/rel_user_info.csv", index=False)
# non_rel_df
with open("/home/hustle/playground/twitter_thought_leader/data/raw/"+ "refined_relevant_user_list.txt") as f:
relevant_user = f.read().splitlines()
len(relevant_user)
len(set(relevant_user))
len(set(rel_df.username) - set(relevant_user))
relevant_user = list(set(rel_df.username) - set(relevant_user))
ser = pd.Series(relevant_user)
ser.shape
ser.to_csv("/home/hustle/playground/twitter_thought_leader/data/raw/"+ "refined_relevant_user_list.txt", mode='a', index=False, header = False)
non_rel_df.shape
non_rel_df
from os import listdir
from os.path import isfile, join
import time
def check_if_file_present(username):
mypath = '/home/hustle/playground/twitter_thought_leader/data/raw/follow_lists/'
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
downloaded_user = []
for i in onlyfiles:
fname = i.split(".")[0]
downloaded_user.append(fname)
    if username not in downloaded_user:
        raise FileNotFoundError("follow list for " + username + " was not written")
def subprocess_cmd(command):
process = subprocess.Popen(command,stdout=subprocess.PIPE, shell=True)
proc_stdout = process.communicate()[0].strip()
print( proc_stdout)
list_of_user = [ "HEPfeickert",
"ClementDelangue",
"FedPernici"]
def get_follow_list_with_retry(username):
count = 0
while count < 5 :
print("attempt: " + str(count) + " for user: " + username)
count = count + 1
try:
command = 'cd /home/hustle/playground/twitter_thought_leader/data/raw/follow_lists/; conda activate tp; twint -u ' + username + ' --following -o ' + username + '.txt'
print(command)
subprocess_cmd(command)
check_if_file_present(username)
        except Exception:
print ("sleeping for 20 secs")
time.sleep(20)
continue
print("Completed for user: " + username)
break
for i in list_of_user:
get_follow_list_with_retry(i)
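# --- Sketch: the fixed 20-second sleep above can be replaced with exponential
# backoff (assumption: failures are mostly rate limiting, so waiting longer on
# each successive failure helps). Generic helper, not tied to twint:
import time

def with_retry(fn, attempts=5, base_delay=1.0):
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                                    # out of attempts
            time.sleep(base_delay * (2 ** attempt))      # 1s, 2s, 4s, ...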
follow_list
import os
command = "twint -u neuripsconf --following --user-full -o neuripsconf.csv --csv"
os.system(command)
import subprocess
result = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE)
result.stdout
def subprocess_cmd(command):
process = subprocess.Popen(command,stdout=subprocess.PIPE, shell=True)
proc_stdout = process.communicate()[0].strip()
print( proc_stdout)
subprocess_cmd('conda activate tp; twint -u neuripsconf --following -o neuripsconf.txt --csv')
import glob
read_files = glob.glob("/home/hustle/playground/twitter_thought_leader/data/raw/test/*.txt")
with open("/home/hustle/playground/twitter_thought_leader/data/raw/selected_2nd_degree_result.txt", "wb") as outfile:
for f in read_files:
with open(f, "rb") as infile:
outfile.write(infile.read())
len(read_files)
fd = pd.read_csv("/home/hustle/playground/twitter_thought_leader/data/raw/pandeyparul_relevant_user_info.csv",engine='python')
fd.bio
rel_df = fd[fd["bio"].apply(is_relevant)]
len(set(rel_df.username))
with open("/home/hustle/playground/twitter_thought_leader/data/raw/"+ "4k_rel_user.txt") as f:
relevant_user = f.read().splitlines()
with open("/home/hustle/playground/twitter_thought_leader/data/raw/2nd_degree_follow_list.txt") as f:
new_relevant_user = f.read().splitlines()
len(set(new_relevant_user))
new_users = set(rel_df.username) - set(relevant_user)
len(new_users)
ser = pd.Series(list(new_users))
ser.to_csv("/home/hustle/playground/twitter_thought_leader/data/raw/"+ "4k_rel_user.txt", mode='a', index=False, header = False)
ser = pd.Series(list(set(rel_df.username) - set(relevant_user)))
ser.to_csv("/home/hustle/playground/twitter_thought_leader/data/raw/"+ "refined_relevant_user_list.txt", mode='a', index=False, header=False)
with open("/home/hustle/playground/twitter_thought_leader/data/raw/follow_lists/"+ "pandeyparul.txt") as f:
pandeyparul_relevant_user = f.read().splitlines()
len(pandeyparul_relevant_user)
with open("/home/hustle/playground/twitter_thought_leader/data/raw/"+ "4k_rel_user.txt") as f:
relevant_user = f.read().splitlines()
mypath = '/home/hustle/playground/twitter_thought_leader/data/raw/tweets_last_1500/'
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
downloaded_user = []
for i in onlyfiles:
fname = i.split(".")[0]
downloaded_user.append(fname)
len(relevant_user)
len(set(pandeyparul_relevant_user) - set(downloaded_user))
rel_user_info = pd.read_csv("/home/hustle/playground/twitter_thought_leader/data/raw/rel_user_info.csv",engine="python")
rel_user_info.shape
rel_user_info = rel_user_info.drop_duplicates(subset=['id'])
rel_user_info.shape
rel_user_info_list = list(rel_user_info.username)
rel_user_info_list
len(set(pandeyparul_relevant_user) - set(rel_user_info_list))
user_list = [
# "/home/hustle/playground/twitter_thought_leader/data/raw/follow_lists/JayAlammar.txt",
# "/home/hustle/playground/twitter_thought_leader/data/raw/follow_lists/abhi1thakur.txt",
# "/home/hustle/playground/twitter_thought_leader/data/raw/follow_lists/suzatweet.txt",
# "/home/hustle/playground/twitter_thought_leader/data/raw/follow_lists/fchollet.txt",
# "/home/hustle/playground/twitter_thought_leader/data/raw/follow_lists/A_K_Nain.txt",
# "/home/hustle/playground/twitter_thought_leader/data/raw/follow_lists/NirantK.txt",
# "/home/hustle/playground/twitter_thought_leader/data/raw/follow_lists/omarsar0.txt",
"/home/hustle/playground/twitter_thought_leader/data/raw/follow_lists/lexfridman.txt",
"/home/hustle/playground/twitter_thought_leader/data/raw/follow_lists/bhutanisanyam1.txt",
"/home/hustle/playground/twitter_thought_leader/data/raw/follow_lists/drfeifei.txt",
# "/home/hustle/playground/twitter_thought_leader/data/raw/follow_lists/PralayRamteke.txt",
]
import glob
# read_files = glob.glob("/home/hustle/playground/twitter_thought_leader/data/raw/follow_lists/*.txt")
with open("/home/hustle/playground/twitter_thought_leader/data/raw/user_list_1_result.txt", "wb") as outfile:
for f in user_list:
with open(f, "rb") as infile:
outfile.write(infile.read())
with open("/home/hustle/playground/twitter_thought_leader/data/raw/"+ "selected_2nd_degree_result.txt") as f:
relevant_user = f.read().splitlines()
len(relevant_user)
len(set(relevant_user))
import os
import glob
import pandas as pd
os.chdir("/home/hustle/playground/twitter_thought_leader/data/raw/follow_lists")
extension = 'csv'
all_filenames = [i for i in glob.glob('*.{}'.format(extension))]
all_filenames
combined_csv = pd.concat([pd.read_csv(f, engine="python") for f in all_filenames ])
#export to csv
combined_csv.shape
combined_csv_rel_df = combined_csv[combined_csv["bio"].apply(is_relevant)]
non_rel_df = combined_csv[~combined_csv["bio"].apply(is_relevant)]
combined_csv_rel_df.shape
combined_csv_rel_df = combined_csv_rel_df.drop_duplicates(subset=['id'])
combined_csv_rel_df.shape
(set(combined_csv_rel_df.username) - set(downloaded_user))
ser = list(set(relevant_user) - set(downloaded_user))
ser = pd.Series(ser)
ser.to_csv("/home/hustle/playground/twitter_thought_leader/data/raw/"+ "selected_2nd_degree.txt", index=False, header = False)
os.chdir("/home/hustle/playground/twitter_thought_leader/data/interim/cleaned_tweets")
extension = 'txt'
all_filenames = [i for i in glob.glob('*.{}'.format(extension))]
print(len(all_filenames))
```
# BigQuery ML models with feature engineering
In this notebook, we will use BigQuery ML to build more sophisticated models for taxifare prediction.
This is a continuation of our [first models](../../02_bqml/solution/first_model.ipynb) we created earlier with BigQuery ML but now with more feature engineering.
## Learning Objectives
1. Create and train a new Linear Regression model with BigQuery ML
2. Evaluate and predict with the linear model
3. Apply transformations using SQL to prune the taxi cab dataset
4. Create a feature cross for day-hour combination using SQL
5. Examine ways to reduce model overfitting with regularization
6. Create and train a DNN model with BigQuery ML
Each learning objective will correspond to a __#TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solution/feateng_bqml.ipynb).
```
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
import os
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID
if PROJECT == "your-gcp-project-here":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
```
## Create a BigQuery Dataset and Google Cloud Storage Bucket
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called __serverlessml__ if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
```
%%bash
## Create a BigQuery dataset for serverlessml if it doesn't exist
datasetexists=$(bq ls -d | grep -w serverlessml)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: serverlessml"
bq --location=US mk --dataset \
--description 'Taxi Fare' \
$PROJECT:serverlessml
echo -e "\nHere are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${PROJECT}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${PROJECT}
echo -e "\nHere are your current buckets:"
gsutil ls
fi
```
## Model 4: With some transformations
BigQuery ML automatically scales the inputs, so we don't need to do scaling ourselves, but human insight can still help.
Since we'll repeat this quite a bit, let's make a dataset with 1 million rows.
```
%%bigquery
CREATE OR REPLACE TABLE serverlessml.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers
FROM `nyc-tlc.yellow.trips`
# The full dataset has 1+ Billion rows, let's take only 1 out of 1,000 (or 1 Million total)
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1
# placeholder for additional filters as part of TODO 3 later
%%bigquery
# Tip: You can CREATE MODEL IF NOT EXISTS as well
CREATE OR REPLACE MODEL serverlessml.model4_feateng
TRANSFORM(
* EXCEPT(pickup_datetime)
, ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean
, CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek
, CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday
)
# TODO 1: Specify the BigQuery ML options for a linear model to predict fare amount
# OPTIONS()
AS
SELECT * FROM serverlessml.feateng_training_data
```
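A note on the sampling filter above: `ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1` keeps only rows whose hashed timestamp falls into one bucket out of 1,000, so the roughly 1M-row sample is repeatable across query runs. Here is a minimal Python sketch of the same idea, using MD5 rather than BigQuery's Farm Hash, so the selected keys differ:

```python
import hashlib

def in_sample(key: str, one_out_of: int = 1000, bucket: int = 1) -> bool:
    """Repeatable membership test: keep a row iff its hashed key falls into
    the chosen bucket (mimics ABS(MOD(FARM_FINGERPRINT(key), 1000)) = 1)."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % one_out_of == bucket

# The same key always gets the same verdict, so the sample is stable...
assert in_sample("2015-01-01 12:00:00") == in_sample("2015-01-01 12:00:00")

# ...and roughly 1 in 1,000 distinct keys is kept.
kept = sum(in_sample(f"2015-01-01 {i}") for i in range(100_000))
print(kept)  # roughly 100
```

Hashing the timestamp rather than using `RAND()` is what makes the training table reproducible between runs.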
Once the training is done, visit the [BigQuery Cloud Console](https://console.cloud.google.com/bigquery) and look at the model that has been trained. Then, come back to this notebook.
Note that BigQuery automatically splits the data we give it: it trains on only part of the data and uses the rest for evaluation. We can look at eval statistics on that held-out data:
```
%%bigquery
SELECT *, SQRT(loss) AS rmse FROM ML.TRAINING_INFO(MODEL serverlessml.model4_feateng)
%%bigquery
# TODO 2: Evaluate and predict with the linear model
# Write a SQL query to take the SQRT() of the Mean Squared Error as your loss metric for evaluation
# Hint: Use ML.EVALUATE on your newly trained model
```
What is the RMSE? Could we do any better?
Try re-creating the above feateng_training_data table with additional filters and re-running training and evaluation.
### TODO 3: Apply transformations using SQL to prune the taxi cab dataset
Now let's reduce the noise in our training dataset by only training on trips with a non-zero distance and fares of at least $2.50. Additionally, we will apply some geolocation boundaries for New York City. Copy the filters below into your previous feateng_training_data table creation and re-train your model.
```sql
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
```
Yippee! We're now below our target of 6 dollars in RMSE, and we got there with just a linear model.
## Making predictions with BigQuery ML
This is how the prediction query would look that we saw earlier [heading 1.3 miles uptown](https://www.google.com/maps/dir/'40.742104,-73.982683'/'40.755174,-73.983766'/@40.7481394,-73.993579,15z/data=!3m1!4b1!4m9!4m8!1m3!2m2!1d-73.982683!2d40.742104!1m3!2m2!1d-73.983766!2d40.755174) in New York City.
```
%%bigquery
SELECT * FROM ML.PREDICT(MODEL serverlessml.model4_feateng, (
SELECT
-73.982683 AS pickuplon,
40.742104 AS pickuplat,
-73.983766 AS dropofflon,
40.755174 AS dropofflat,
3.0 AS passengers,
TIMESTAMP('2019-06-03 04:21:29.769443 UTC') AS pickup_datetime
))
```
## Improving the model with feature crosses
Let's do a [feature cross](https://developers.google.com/machine-learning/crash-course/feature-crosses/video-lecture) of the day-hour combination instead of using them raw.
```
%%bigquery
CREATE OR REPLACE MODEL serverlessml.model5_featcross
TRANSFORM(
* EXCEPT(pickup_datetime)
, ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean
# TODO 4: Create a feature cross for day-hour combination using SQL
, ML.( # <--- Enter the correct function for a BigQuery ML feature cross ahead of the (
STRUCT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday)
) AS day_hr
)
OPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg')
AS
SELECT * FROM serverlessml.feateng_training_data
%%bigquery
SELECT *, SQRT(loss) AS rmse FROM ML.TRAINING_INFO(MODEL serverlessml.model5_featcross)
%%bigquery
SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model5_featcross)
```
Sometimes (not the case above), the training RMSE is quite reasonable, but the evaluation RMSE is terrible. This is an indication of overfitting.
When we do feature crosses, we run the risk of overfitting (for example, when a particular day-hour combo doesn't have enough taxi rides).
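To get a feel for why: DAYOFWEEK takes 7 values and HOUR takes 24, so the cross creates 7 * 24 = 168 distinct categories, and a sparse day-hour cell may contain too few rides to estimate a stable weight. A quick sketch of the enumeration:

```python
from itertools import product

days = [str(d) for d in range(1, 8)]   # DAYOFWEEK in BigQuery: 1..7
hours = [str(h) for h in range(24)]    # HOUR: 0..23

# The feature cross treats every (day, hour) pair as its own category.
day_hr = [f"{d}_{h}" for d, h in product(days, hours)]
print(len(day_hr))  # 168
```

With 1 million training rows that is still about 6,000 rides per cell on average, but the rarest cells (e.g. early weekday mornings) get far fewer.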
## Reducing overfitting
Let's add [L2 regularization](https://developers.google.com/machine-learning/glossary/#L2_regularization) to help reduce overfitting. Let's set it to 0.1
```
%%bigquery
CREATE OR REPLACE MODEL serverlessml.model6_featcross_l2
TRANSFORM(
* EXCEPT(pickup_datetime)
, ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean
, ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday)) AS day_hr
)
# TODO 5: Set the model options for a linear regression model to predict fare amount with 0.1 L2 Regularization
# Tip: Refer to the documentation for syntax:
# https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-create
OPTIONS()
AS
SELECT * FROM serverlessml.feateng_training_data
%%bigquery
SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model6_featcross_l2)
```
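For intuition about what an `l2_reg` setting like 0.1 does, here is a small numpy sketch of ridge regression in closed form. It illustrates the penalty's shrinking effect on the weights; it is not BigQuery ML's actual optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([3.0, -2.0, 0.5, 0.0, 1.0])
y = X @ w_true + rng.normal(scale=0.1, size=100)

def ridge(X, y, l2):
    # Closed-form ridge solution: w = (X'X + l2 * I)^-1 X'y
    return np.linalg.solve(X.T @ X + l2 * np.eye(X.shape[1]), X.T @ y)

w_ols = ridge(X, y, 0.0)   # no penalty
w_l2 = ridge(X, y, 10.0)   # heavy penalty, to make the shrinkage visible

# The penalized weight vector has a smaller norm than the unpenalized one.
print(np.linalg.norm(w_l2) < np.linalg.norm(w_ols))  # True
```

Shrinking weights toward zero is what keeps rarely-seen feature-cross cells from getting large, noise-driven coefficients.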
These sorts of experiments would have taken days to do otherwise. We did it in minutes, thanks to BigQuery ML! The advantage of doing all this in the TRANSFORM clause is that the client code doing the PREDICT doesn't change. Our model improvement is transparent to client code.
```
%%bigquery
SELECT * FROM ML.PREDICT(MODEL serverlessml.model6_featcross_l2, (
SELECT
-73.982683 AS pickuplon,
40.742104 AS pickuplat,
-73.983766 AS dropofflon,
40.755174 AS dropofflat,
3.0 AS passengers,
TIMESTAMP('2019-06-03 04:21:29.769443 UTC') AS pickup_datetime
))
```
## Let's try feature crossing the locations too
Because lat and lon have meaning only in conjunction, not by themselves, it may be useful to treat the fields as a pair instead of as separate numeric values. However, lat and lon are continuous numbers, so we have to discretize them first. That's what ML.BUCKETIZE does.
Here are some of the preprocessing functions in BigQuery ML:
* ML.FEATURE_CROSS(STRUCT(features)) does a feature cross of all the combinations
* ML.POLYNOMIAL_EXPAND(STRUCT(features), degree) creates x, x^2, x^3, etc.
* ML.BUCKETIZE(f, split_points) where split_points is an array
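As a rough Python analogue of ML.BUCKETIZE, the sketch below maps a continuous value to the name of the bin it falls into using `np.digitize` (the exact bin-label format that BigQuery ML emits may differ):

```python
import numpy as np

def bucketize(value, split_points):
    # Index of the bin that `value` falls into, given sorted split points.
    return f"bin_{int(np.digitize(value, split_points))}"

# GENERATE_ARRAY(-78, -70, 0.01) -> 0.01-degree longitude bins
lon_splits = np.arange(-78, -70, 0.01)
print(bucketize(-73.982683, lon_splits))  # 'bin_402'
```

Concatenating the bucketized pickup and dropoff coordinates, as in the TRANSFORM below, turns each trip's endpoints into a single categorical value per cell pair.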
```
%%bigquery
-- BQML chooses the wrong gradient descent strategy here. It will get fixed in (b/141429990)
-- But for now, as a workaround, explicitly specify optimize_strategy='BATCH_GRADIENT_DESCENT'
CREATE OR REPLACE MODEL serverlessml.model7_geo
TRANSFORM(
fare_amount
, ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean
, ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday), 2) AS day_hr
, CONCAT(
ML.BUCKETIZE(pickuplon, GENERATE_ARRAY(-78, -70, 0.01)),
ML.BUCKETIZE(pickuplat, GENERATE_ARRAY(37, 45, 0.01)),
ML.BUCKETIZE(dropofflon, GENERATE_ARRAY(-78, -70, 0.01)),
ML.BUCKETIZE(dropofflat, GENERATE_ARRAY(37, 45, 0.01))
) AS pickup_and_dropoff
)
OPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg', l2_reg=0.1, optimize_strategy='BATCH_GRADIENT_DESCENT')
AS
SELECT * FROM serverlessml.feateng_training_data
%%bigquery
SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model7_geo)
```
Yippee! We're now below our target of 6 dollars in RMSE.
## DNN
You could, of course, train a more sophisticated model. Change "linear_reg" above to "dnn_regressor" and see if it improves things.
__Note: This takes 20 - 25 minutes to run.__
```
%%bigquery
-- This is alpha and may not work for you.
CREATE OR REPLACE MODEL serverlessml.model8_dnn
TRANSFORM(
fare_amount
, ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean
, CONCAT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING),
CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING)) AS day_hr
, CONCAT(
ML.BUCKETIZE(pickuplon, GENERATE_ARRAY(-78, -70, 0.01)),
ML.BUCKETIZE(pickuplat, GENERATE_ARRAY(37, 45, 0.01)),
ML.BUCKETIZE(dropofflon, GENERATE_ARRAY(-78, -70, 0.01)),
ML.BUCKETIZE(dropofflat, GENERATE_ARRAY(37, 45, 0.01))
) AS pickup_and_dropoff
)
-- at the time of writing, l2_reg wasn't supported yet.
# TODO 6: Create a DNN model (dnn_regressor) with hidden_units [32,8]
OPTIONS()
AS
SELECT * FROM serverlessml.feateng_training_data
%%bigquery
SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model8_dnn)
```
We really need the L2 reg (recall that we got 4.77 without the feateng). It's time to do [Feature Engineering in Keras](../../06_feateng_keras/labs/taxifare_fc.ipynb).
Copyright 2019 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
| github_jupyter |
# Exploratory Data Analysis
---
*By Ihza Gonzales*
This notebook aims to explore the data collected. A series of graphs specific for time series data will be used. The distribution of stats and salaries will also be explored.
## Import Libraries
---
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
#%matplotlib inline
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf
pd.set_option('display.max_rows', None)
import warnings
warnings.filterwarnings('ignore')
```
## Functions Implemented
---
```
# Code modified from code written by Matthew Garton.
def plot_series(df, cols=None, title='Title', xlab=None, ylab=None, steps=1):
# Set figure size to be (18, 9).
plt.figure(figsize=(18,9))
# Iterate through each column name.
for col in cols:
# Generate a line plot of the column name.
# You only have to specify Y, since our
# index will be a datetime index.
plt.plot(df[col])
# Generate title and labels.
plt.title(title, fontsize=26)
plt.xlabel(xlab, fontsize=20)
plt.ylabel(ylab, fontsize=20)
# Enlarge tick marks.
plt.yticks(fontsize=18)
plt.xticks(df.index[0::steps], fontsize=15);
```
## Exploring a Batter for Time Series
---
```
df = pd.read_csv('../data/clean_players_bat/Enrique-Hernandez-571771.csv')
df.head()
df.shape
plot_series(df, ['AVG'], title = "AVG", steps= 50)
```
The spikes in the AVG are due to the fact that the stats reset at the beginning of each season. If a player does well in his first few games, his AVG will be very high, and vice versa if he starts the season badly.
```
games_30 = df.rolling(30).mean()
plot_series(games_30, ['AVG'], title = "AVG for rolling mean of 30 for Enrique (Kike) Hernandez", steps= 50)
```
There seems to be a trend toward the 2021 season, and typically near the end of each season. Kike looks to have an upward trend this season. Apart from that, his average has been fairly stationary.
```
plot_acf(df['AVG'], lags = 30);
```
Since there are large, positive values at the lower lags, there is a trend in the data. For seasonality, a scallop shape would have to be present, but there is not one. This means the data has no seasonality.
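As a reminder of what the ACF measures: the lag-k autocorrelation is the correlation of the series with itself shifted by k steps, and large positive values that decay slowly at low lags are the signature of a trend. A small sketch on a synthetic drifting series (not the notebook's data):

```python
import numpy as np

def autocorr(x, lag):
    # Correlation between the series and a copy of itself shifted by `lag`.
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

trend = np.linspace(0.20, 0.30, 200)                    # slowly drifting "AVG"
noise = np.random.default_rng(1).normal(0, 0.002, 200)
series = trend + noise

print(autocorr(series, 1) > 0.9)  # True: strong positive low-lag autocorrelation
```

On a trendless white-noise series the same statistic would hover near zero at every lag.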
```
plot_pacf(df['AVG'], lags = 30);
```
There does not seem to be a pattern in the fluctuations, which means this data has no seasonality.
## Exploring Pitcher for Time Series
```
df = pd.read_csv('../data/clean_players_pitch/Max-Scherzer-453286.csv')
df.head()
df.shape
plot_series(df, ['ERA'], title = "ERA", steps= 30)
```
There does not seem to be any consistent fluctuations in the data. There is a considerable spike around 2020-07-23.
```
games_30 = df.rolling(30).mean()
plot_series(games_30, ['ERA'], title = "ERA for rolling mean of 30 for Max Scherzer", steps= 50)
```
It looks like he had a very high ERA in 2020, which means he pitched poorly during that season. But as he gets into 2021, his ERA has started to trend down.
```
plot_acf(df['ERA'], lags = 30);
```
There is a trend in the data because there are large, positive values at the lower lags. There is no obvious scallop shape, which means the data has no seasonality.
```
plot_pacf(df['ERA'], lags = 30);
```
There are no obvious patterns in the fluctuations, which means that the data has no seasonality.
## Exploring Stats and Salaries of Batters
---
```
bat = pd.read_csv('../data/mlb_players_bat.csv').drop('Unnamed: 0', axis = 1)
bat.head()
# Convert salary from object to int
bat['salary'] = bat['salary'].str.replace(',', '', regex=False).str.replace('$', '', regex=False).astype(int)
#Copied from https://stackoverflow.com/questions/38516481/trying-to-remove-commas-and-dollars-signs-with-pandas-in-python
bat.describe()
```
This provides the summary statistics for each of the batter stats. There is a lot of variability in how many at bats a player has: some players have only 50 at bats while others have a max of 664. This is something to consider because it shows that some players might not have enough data to forecast well.
```
corr_matrix = bat.corr()
plt.figure(figsize=(25,20))
sns.heatmap(corr_matrix,
annot = True,
vmin = -1,
vmax = 1,
cmap = 'rocket');
#copied from lesson 3.04
```
This is the correlation of all stats and the salary of batters. Many of the stats are very highly correlated with each other, some even reaching 1. This makes sense: the more hits a player has, the more runs, home runs, or runs batted in they can have. Salary is not highly correlated with any stat; a triple (3B) even has a very low correlation with salary.
```
sns.pairplot(data = bat,
y_vars=['salary'],
x_vars = ['R',
'HR',
'RBI',
'BB'],
hue = 'Pos',
diag_kind=None);
```
This shows salary against how many runs (R), home runs (HR), runs batted in (RBI), or walks (BB) a player has, with the color representing the batter's position. None of the graphs show any distinct pattern. Walks are probably the most distinguishable, with a slight linear pattern. The other thing to note is that many players have a low salary even though they could be performing well.
```
bat.hist(figsize = (17, 18));
```
Many of the stats, as well as the salary, are right skewed (the long tail points toward high values). The average (AVG), on-base percentage (OBP), slugging (SLG), and on-base plus slugging percentage (OPS) have a roughly normal distribution.
```
plt.style.use('fivethirtyeight')
bat['salary'].hist()
plt.xlabel('Salary')
plt.ylabel('Count')
plt.title('Distribution of Batter Salary');
```
Looking at the batter salary more closely: it is highly right skewed, with over half of the batters having a salary of less than 5 million.
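The direction of the skew can be confirmed numerically: the Fisher-Pearson skewness coefficient is positive when the long tail is on the right, as it is for salaries. A sketch on synthetic salary-like data (not the actual dataset):

```python
import numpy as np

def sample_skewness(x):
    # Fisher-Pearson coefficient: mean((x - mean)^3) / std^3.
    x = np.asarray(x, dtype=float)
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

# Salary-like data: most values small, a long tail of very large ones.
rng = np.random.default_rng(0)
salaries = rng.lognormal(mean=14, sigma=1.0, size=5000)

print(sample_skewness(salaries) > 0)  # True: long right tail => positive skew
```

A symmetric distribution would give a coefficient near zero, and a long left tail a negative one.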
## Exploring Stats and Salaries of Pitchers
---
```
pitch = pd.read_csv('../data/mlb_players_pitch.csv').drop('Unnamed: 0', axis = 1)
pitch.head()
# Convert salary from object to int
pitch['salary'] = pitch['salary'].str.replace(',', '', regex=False).str.replace('$', '', regex=False).astype(int)
#Copied from https://stackoverflow.com/questions/38516481/trying-to-remove-commas-and-dollars-signs-with-pandas-in-python
pitch.describe()
```
For the summary statistics of the pitchers, let's look at innings pitched (IP): the min is 10 while the max is 213. This is also indicative that there are different kinds of pitchers: closers may pitch one inning or less, while starting pitchers will pitch 5-7 innings and sometimes a whole game.
```
corr_matrix = pitch.corr()
plt.figure(figsize=(25,20))
sns.heatmap(corr_matrix,
annot = True,
vmin = -1,
vmax = 1,
cmap = 'rocket');
#copied from lesson 3.04
```
Like the stats for batters, the stats for pitchers are very highly correlated with each other. The stats that are not highly correlated are earned run average (ERA) and walks/hits per inning pitched (WHIP). As for salary, it is not very correlated with the stats, with some correlations almost at 0.
```
sns.pairplot(data = pitch,
y_vars=['ERA'],
x_vars = ['WHIP',
'K',
'salary',
'IP'],
diag_kind=None);
```
Based on the heatmap, it is worth looking at ERA further. It makes sense that ERA and WHIP are highly correlated: WHIP is walks and hits per inning pitched, and a pitcher with a high WHIP lets more batters on base, making runs more likely. It also makes sense that the relationship between ERA and strikeouts (K) is nearly flat: if a pitcher strikes out the batter, it is less likely that a run will score. As for salary, the weak correlation partly makes sense, as there are plenty of "rookie" pitchers; only a handful of really good pitchers, typically starting pitchers, earn the top salaries, and being expected to pitch more innings allows them a somewhat higher ERA.
```
pitch.hist(figsize = (17, 18));
```
All the stats aside from ERA and WHIP are right skewed.
```
plt.style.use('fivethirtyeight')
pitch['salary'].hist()
plt.xlabel('Salary')
plt.ylabel('Count')
plt.title('Distribution of Pitcher Salary');
```
Salary is heavily right skewed, with well over half of pitchers making less than 5 million. There also seems to be a break between pitchers making around 20 million and those making 25 million.
## Recap
---
Explored the time series data, batter salary and stats, and pitcher salary and stats. This information will allow for better modeling decisions.
| github_jupyter |
```
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import scipy
import warnings
warnings.filterwarnings("ignore")
```
### Load data
Note that the data files needed are large; they can be generated by the cells supplied in the experiment notebook. However, these will take some time to generate. One can therefore reduce the number of runs and use those instead.
```
if False:
#BCNN and SNPE-C results
ID = 'data'
## SNPE-C
sbi_post = np.load(f'{ID}/sbi_{ID}_post.npy')
sbi_time = np.load(f'{ID}/sbi_{ID}_time.npy')
## BCNN
bcnn_post = np.load(f'{ID}/bcnn_{ID}_post.npy')
bcnn_time = np.load(f'{ID}/bcnn_{ID}_time.npy')
else:
## SNPE-C
sbi_post = np.load(f'SBI_10_10gen_large_sample.npy')
sbi_time = np.load('SBI_10_10gen_large_sample_times.npy')
sbi_post = sbi_post[:5,:8,:,:]
sbi_time = sbi_time[:5,:8]
## BCNN
bcnn_post = np.load('bnn_res_5_5round_8gen_theta_thresh.npy')
bcnn_time = np.load('bnn_res_5_5round_8gen_time_thresh.npy')
bcnn_post = bcnn_post[:,1:,:,:] # first sample is simply from prior, remove.
## ABC-SMC
smc_post = np.load('smcabc_posterior_5gen.npy',allow_pickle=True)
Y = np.empty(shape=(5,8,1000,3))
for i in range(Y.shape[0]):
for j in range(Y.shape[1]):
Y[i,j,:,:] = smc_post[i][j][:1000][:]
smc_post = Y
smc_time = np.load('smcabc_posterior_5gen_time.npy')
```
# Main paper figures
### Compute mean and std
```
sbi_post_mean = sbi_post.mean(axis=2)
sbi_post_std = sbi_post.std(axis=2)
sbi_time_mean = sbi_time.mean(axis=0)
sbi_time_std = sbi_time.std(axis=0)
bcnn_post_mean = bcnn_post.mean(axis=2)
bcnn_post_std = bcnn_post.std(axis=2)
bcnn_time_mean = bcnn_time.mean(axis=0)
bcnn_time_std = bcnn_time.std(axis=0)
smc_post_mean = np.mean(smc_post, axis=2)
smc_post_std = np.std(smc_post, axis=2)
```
### Compute the MSE
```
theta_true = np.log([[1.0,0.005, 1.0]])
theta_ = np.expand_dims(theta_true,axis=[0,1])
sbi_post_mse = ((theta_ - sbi_post)**2).mean(axis=(2,3))
bcnn_post_mse = ((theta_ - bcnn_post)**2).mean(axis=(2,3))
smc_post_mse = ((theta_ - smc_post)**2).mean(axis=(2,3))
```
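A note on the shapes: `np.expand_dims(theta_true, axis=[0,1])` turns the (1, 3) array of true parameters into shape (1, 1, 1, 3), which broadcasts against the posterior arrays of shape (runs, rounds, samples, parameters); averaging over axes 2 and 3 then leaves one MSE per run and round. A shape-only sketch with dummy data:

```python
import numpy as np

theta_true = np.log([[1.0, 0.005, 1.0]])            # shape (1, 3)
theta_ = np.expand_dims(theta_true, axis=[0, 1])    # shape (1, 1, 1, 3)

# Dummy posterior draws: (runs, rounds, samples, parameters)
post = np.zeros((5, 8, 1000, 3)) + theta_           # broadcasts to (5, 8, 1000, 3)

mse = ((theta_ - post) ** 2).mean(axis=(2, 3))      # one value per run and round
print(mse.shape)  # (5, 8)
```

With the dummy draws equal to the true parameters, every entry of `mse` is exactly zero, which is a quick sanity check on the broadcasting.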
## Figure 4 - MSE of BNN, SNPE-C, and ABC-SMC
```
import matplotlib
matplotlib.rcParams['ps.useafm'] = True
matplotlib.rcParams['pdf.use14corefonts'] = True
matplotlib.rcParams['text.usetex'] = True
sns.set_theme()
font_size = 8
sns.set_context("paper", rc={"font.size":font_size,"axes.titlesize":font_size,"axes.labelsize":font_size, "axis.legendsize":font_size })
sns.set_style("darkgrid", {"axes.facecolor": ".9"})
f = plt.figure(figsize=(4.25, 2), constrained_layout=True)
gs = f.add_gridspec(1, 1)
#Theta 1,2,3 mse(<post>)
ax = f.add_subplot(gs[0, 0])
ax.errorbar(x=np.arange(8)+1, y=sbi_post_mse.mean(axis=0)[:],
yerr=sbi_post_mse.std(axis=0)[:],
capsize=5, color='C0', label='SNPE-C')
ax.errorbar(x=np.arange(8)+1, y=bcnn_post_mse.mean(axis=0)[:],
yerr=bcnn_post_mse.std(axis=0)[:],
capsize=5, color='C2', label='BCNN')
ax.errorbar(x=np.arange(8)+1, y=smc_post_mse.mean(axis=0)[:],
yerr=smc_post_mse.std(axis=0)[:],
capsize=5, color='C1', label='ABC-SMC')
ax.set_ylabel('MSE')
ax.set_xlabel('Round')
plt.legend(loc='upper right')
#plt.yscale('log')
#plt.savefig('lv_mse.pdf',dpi=350, bbox_inches = 'tight', pad_inches = 0)
```
## Figure 5 - Snapshot of $p(\theta | D)$
```
def posterior_snaps(run_idx=0, save=True):
def multivar(grid, x, y, xlabel='', ylabel='', label='',color='C0'):
ax = f.add_subplot(grid)
sns.kdeplot(x=x, y=y, ax=ax, label=label,color=color)
ax.set_ylim(np.log(0.002),np.log(2))
ax.set_xlim(np.log(0.002),np.log(2))
ax.set_xticks([])
ax.set_yticks([])
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
return ax
def singlevar(grid, x, y, xlabel='', ylabel='', label='',color='C0'):
ax = f.add_subplot(grid)
ax.plot(x, y, marker='x', ms=5, label=label,color=color)
ax.set_ylim(np.log(0.002),np.log(2))
ax.set_xlim(np.log(0.002),np.log(2))
ax.set_xticks([])
ax.set_yticks([])
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
return ax
font_size = 8
sns.set_context("paper", rc={"font.size":font_size,"axes.titlesize":font_size,"axes.labelsize":font_size, "axis.legendsize":font_size })
sns.set_style("darkgrid", {"axes.facecolor": ".9"})
f = plt.figure(figsize=(4.25, 4.25), constrained_layout=True)
widths = [1, 1, 1]# 1, 1, 1, 1, 1]
heights = [1, 1, 1]#, 1, 1, 1, 1, 1, 1]
gs = f.add_gridspec(3,3, width_ratios=widths, height_ratios=heights)
# k1 x k2
ax = multivar(gs[0, 0],x=smc_post[run_idx, 1, :, 1], y=smc_post[run_idx, 1, :, 0],ylabel=r'$(\theta_2, \theta_1)$',color='C1')
ax = multivar(gs[0, 1],x=smc_post[run_idx, 3, :, 1], y=smc_post[run_idx, 3, :, 0],color='C1')
ax = multivar(gs[0, 2],x=smc_post[run_idx, 7, :, 1], y=smc_post[run_idx, 7, :, 0], label='ABC-SMC',color='C1')
ax = multivar(gs[0, 0],x=bcnn_post[run_idx, 1, :, 1], y=bcnn_post[run_idx, 1, :, 0],ylabel=r'$(\theta_2, \theta_1)$',color='C2')
ax = multivar(gs[0, 1],x=bcnn_post[run_idx, 3, :, 1], y=bcnn_post[run_idx, 3, :, 0],color='C2')
ax = multivar(gs[0, 2],x=bcnn_post[run_idx, 7, :, 1], y=bcnn_post[run_idx, 7, :, 0],label='BCNN',color='C2')
ax = multivar(gs[0, 0],x=sbi_post[run_idx, 1, :, 1], y=sbi_post[run_idx, 1, :, 0],ylabel=r'$(\theta_2, \theta_1)$',color='C0')
ax = multivar(gs[0, 1],x=sbi_post[run_idx, 3, :, 1], y=sbi_post[run_idx, 3, :, 0],color='C0')
ax = multivar(gs[0, 2],x=sbi_post[run_idx, 7, :, 1], y=sbi_post[run_idx, 7, :, 0], label='SNPE-C',color='C0')
ax = singlevar(gs[0, 0],x=theta_true[0,1],y=theta_true[0,0],color='C3',ylabel=r'$(\theta_2, \theta_1)$')
ax = singlevar(gs[0, 1],x=theta_true[0,1],y=theta_true[0,0],color='C3')
ax = singlevar(gs[0, 2],x=theta_true[0,1],y=theta_true[0,0],color='C3',label='truth')
ax.legend(loc='lower right')
# k1 x k3
ax = multivar(gs[1, 0],x=smc_post[run_idx, 1, :, 2], y=smc_post[run_idx, 1, :, 0],ylabel=r'$(\theta_3, \theta_1)$',color='C1')
ax = multivar(gs[1, 1],x=smc_post[run_idx, 3, :, 2], y=smc_post[run_idx, 3, :, 0],color='C1')
ax = multivar(gs[1, 2],x=smc_post[run_idx, 7, :, 2], y=smc_post[run_idx, 7, :, 0], label='ABC-SMC',color='C1')
ax = multivar(gs[1, 0],x=bcnn_post[run_idx, 1, :, 2], y=bcnn_post[run_idx, 1, :, 0],ylabel=r'$(\theta_3, \theta_1)$',color='C2')
ax = multivar(gs[1, 1],x=bcnn_post[run_idx, 3, :, 2], y=bcnn_post[run_idx, 3, :, 0],color='C2')
ax = multivar(gs[1, 2],x=bcnn_post[run_idx, 7, :, 2], y=bcnn_post[run_idx, 7, :, 0],color='C2')
ax = multivar(gs[1, 0],x=sbi_post[run_idx, 1, :, 2], y=sbi_post[run_idx, 1, :, 0],ylabel=r'$(\theta_3, \theta_1)$', color='C0')
ax = multivar(gs[1, 1],x=sbi_post[run_idx, 3, :, 2], y=sbi_post[run_idx, 3, :, 0], color='C0')
ax = multivar(gs[1, 2],x=sbi_post[run_idx, 7, :, 2], y=sbi_post[run_idx, 7, :, 0], color='C0')
ax = singlevar(gs[1, 0],x=theta_true[0,2],y=theta_true[0,0],color='C3',ylabel=r'$(\theta_3, \theta_1)$')
ax = singlevar(gs[1, 1],x=theta_true[0,2],y=theta_true[0,0],color='C3')
ax = singlevar(gs[1, 2],x=theta_true[0,2],y=theta_true[0,0],color='C3',label='truth')
# k2 x k3
ax = multivar(gs[2, 0],x=smc_post[run_idx, 1, :, 2], y=smc_post[run_idx, 1, :, 1], xlabel='Round 2',ylabel=r'$(\theta_3, \theta_2)$',color='C1')
    ax = multivar(gs[2, 1],x=smc_post[run_idx, 3, :, 2], y=smc_post[run_idx, 3, :, 1], xlabel='Round 4',color='C1')
    ax = multivar(gs[2, 2],x=smc_post[run_idx, 7, :, 2], y=smc_post[run_idx, 7, :, 1], xlabel='Round 8', label='ABC-SMC',color='C1')
ax = multivar(gs[2, 0],x=bcnn_post[run_idx, 1, :, 2], y=bcnn_post[run_idx, 1, :, 1],xlabel='Round 2',ylabel=r'$(\theta_3, \theta_2)$',color='C2')
ax = multivar(gs[2, 1],x=bcnn_post[run_idx, 3, :, 2], y=bcnn_post[run_idx, 3, :, 1],xlabel='Round 4',color='C2')
ax = multivar(gs[2, 2],x=bcnn_post[run_idx, 7, :, 2], y=bcnn_post[run_idx, 7, :, 1],xlabel='Round 8',color='C2')
ax = multivar(gs[2, 0],x=sbi_post[run_idx, 1, :, 2], y=sbi_post[run_idx, 1, :, 1], xlabel='Round 2',ylabel=r'$(\theta_3, \theta_2)$', color='C0')
ax = multivar(gs[2, 1],x=sbi_post[run_idx, 3, :, 2], y=sbi_post[run_idx, 3, :, 1], xlabel='Round 4', color='C0')
ax = multivar(gs[2, 2],x=sbi_post[run_idx, 7, :, 2], y=sbi_post[run_idx, 7, :, 1], xlabel='Round 8', color='C0')
ax = singlevar(gs[2, 0],x=theta_true[0,2],y=theta_true[0,1],color='C3',xlabel='Round 2',ylabel=r'$(\theta_3, \theta_2)$')
ax = singlevar(gs[2, 1],x=theta_true[0,2],y=theta_true[0,1],color='C3',xlabel='Round 4')
ax = singlevar(gs[2, 2],x=theta_true[0,2],y=theta_true[0,1],color='C3',label='truth',xlabel='Round 8')
if save:
plt.savefig(f'lv_dens_{run_idx}.pdf',dpi=350, bbox_inches = 'tight', pad_inches = 0)
posterior_snaps(run_idx=0,save=False)
```
## Supplemental figures
### Figure S8 - multiple runs of the inference procedure with different random seeds
```
for i in range(bcnn_post.shape[0]):
posterior_snaps(run_idx=i,save=False)
```
### Figure S5 - impact of # of classes (bins$^2$)
```
bcnn3 = np.load('bnn_res_3_5round_8gen_theta_thresh.npy')
bcnn4 = np.load('bnn_res_4_5round_8gen_theta_thresh.npy')
bcnn5 = np.load('bnn_res_5_5round_8gen_theta_thresh.npy')
bcnn3_mse = ((theta_ - bcnn3)**2).mean(axis=(2,3))
bcnn4_mse = ((theta_ - bcnn4)**2).mean(axis=(2,3))
bcnn5_mse = ((theta_ - bcnn5)**2).mean(axis=(2,3))
sns.set_theme()
font_size = 8
sns.set_context("paper", rc={"font.size":font_size,"axes.titlesize":font_size,"axes.labelsize":font_size, "axis.legendsize":font_size })
sns.set_style("darkgrid", {"axes.facecolor": ".9"})
f = plt.figure(figsize=(4.25, 2), constrained_layout=True)
gs = f.add_gridspec(1, 1)
#Theta 1,2,3 mse(<post>)
ax = f.add_subplot(gs[0, 0])
ax.errorbar(x=np.arange(9)+1, y=bcnn3_mse.mean(axis=0)[:],
yerr=bcnn3_mse.std(axis=0)[:],
capsize=5, color='C0', label='9 classes')
ax.errorbar(x=np.arange(9)+1, y=bcnn4_mse.mean(axis=0)[:],
yerr=bcnn4_mse.std(axis=0)[:],
capsize=5, color='C2', label='16 classes')
ax.errorbar(x=np.arange(9)+1, y=bcnn5_mse.mean(axis=0)[:],
yerr=bcnn5_mse.std(axis=0)[:],
capsize=5, color='C1', label='25 classes')
ax.set_ylabel('MSE')
ax.set_xlabel('Round')
plt.yscale('log')
plt.legend(loc='upper right')
#plt.savefig('lv_bins.pdf',dpi=350, bbox_inches = 'tight', pad_inches = 0)
```
### Figure S6 - threshold or not
```
bcnn5_no = np.load('bnn_res_5_5round_8gen_theta.npy')
bcnn5_no_mse = ((theta_ - bcnn5_no)**2).mean(axis=(2,3))
sns.set_theme()
font_size = 8
sns.set_context("paper", rc={"font.size":font_size,"axes.titlesize":font_size,"axes.labelsize":font_size, "axis.legendsize":font_size })
sns.set_style("darkgrid", {"axes.facecolor": ".9"})
f = plt.figure(figsize=(4.25, 2), constrained_layout=True)
gs = f.add_gridspec(1, 1)
#Theta 1,2,3 mse(<post>)
ax = f.add_subplot(gs[0, 0])
ax.errorbar(x=np.arange(9)+1, y=bcnn5_no_mse.mean(axis=0)[:],
yerr=bcnn5_no_mse.std(axis=0)[:],
capsize=5, color='C0', label=r'$\delta = 0.0$')
ax.errorbar(x=np.arange(9)+1, y=bcnn5_mse.mean(axis=0)[:],
yerr=bcnn5_mse.std(axis=0)[:],
capsize=5, color='C1', label=r'$\delta = 0.05$')
ax.set_ylabel('MSE')
ax.set_xlabel('Round')
plt.yscale('log')
plt.legend(loc='upper right')
#plt.savefig('lv_thresh.pdf',dpi=350, bbox_inches = 'tight', pad_inches = 0)
```
### Figure S7 - Elapsed time
```
sns.set_theme()
sns.set_context("paper", font_scale=1.5, rc={"lines.linewidth": 1.5})
sns.set_style("darkgrid", {"axes.facecolor": ".9"})
mpl.rcParams['ps.useafm'] = True
mpl.rcParams['pdf.use14corefonts'] = True
mpl.rcParams['text.usetex'] = True
sns.set_theme()
font_size = 8
sns.set_context("paper", rc={"font.size":font_size,"axes.titlesize":font_size,"axes.labelsize":font_size, "axis.legendsize":font_size })
sns.set_style("darkgrid", {"axes.facecolor": ".9"})
f = plt.figure(figsize=(4.25, 2))
gs = f.add_gridspec(1, 2)
#Theta 1 E(<post>)
ax = f.add_subplot(gs[0, 0])
ax.errorbar(x=np.arange(8)+1, y=bcnn_time.mean(axis=0)/60,
yerr=bcnn_time.std(axis=0)/60,
capsize=5, color='C0', label='BCNN')
ax.errorbar(x=np.arange(8)+1, y=sbi_time.mean(axis=0)/60,
yerr=sbi_time.std(axis=0)/60,
capsize=5, color='C1', label='SNPE')
ax.set_xlabel("Round")
ax.set_ylabel("time/round")
ax.set_xticks(np.arange(8)+1)
ax = f.add_subplot(gs[0, 1])
bcnn_cumsum = np.cumsum(bcnn_time, axis=1)
sbi_cumsum = np.cumsum(sbi_time, axis=1)
ax.errorbar(x=np.arange(8)+1, y=bcnn_cumsum.mean(axis=0)/60,
yerr=bcnn_cumsum.std(axis=0)/60,
capsize=5, color='C0', label='BCNN')
ax.errorbar(x=np.arange(8)+1, y=sbi_cumsum.mean(axis=0)/60,
yerr=sbi_cumsum.std(axis=0)/60,
capsize=5, color='C1', label='SNPE-C')
ax.set_xlabel("Round")
ax.set_ylabel("cumsum(time)")
ax.set_xticks(np.arange(8)+1)
plt.legend()
plt.tight_layout()
#plt.savefig('lv_time.pdf',dpi=350, bbox_inches = 'tight', pad_inches = 0)
```
| github_jupyter |
# Tutorial 11: Normalizing Flows for image modeling

**Filled notebook:**
[](https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial11/NF_image_modeling.ipynb)
[](https://colab.research.google.com/github/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial11/NF_image_modeling.ipynb)
**Pre-trained models:**
[](https://github.com/phlippe/saved_models/tree/main/tutorial11)
[](https://drive.google.com/drive/folders/1gttZ5DSrpKwn9g3RcizqA5qG7NFLMgvv?usp=sharing)
**Recordings:**
[](https://youtu.be/U1fwesIusbg)
[](https://youtu.be/qMoGcRhVrF8)
[](https://youtu.be/YoAWiaEt41Y)
[](https://youtu.be/nTyDvn-ADJ4)
**Author:** Phillip Lippe
In this tutorial, we will take a closer look at complex, deep normalizing flows. The most popular, current application of deep normalizing flows is to model datasets of images. As for other generative models, images are a good domain to start working on because (1) CNNs are widely studied and strong models exist, (2) images are high-dimensional and complex, and (3) images are discrete integers. In this tutorial, we will review current advances in normalizing flows for image modeling, and get hands-on experience on coding normalizing flows. Note that normalizing flows are commonly parameter heavy and therefore computationally expensive. We will use relatively simple and shallow flows to save computational cost and allow you to run the notebook on CPU, but keep in mind that a simple way to improve the scores of the flows we study here is to make them deeper.
Throughout this notebook, we make use of [PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/latest/). The first cell imports our usual libraries.
```
## Standard libraries
import os
import math
import time
import numpy as np
## Imports for plotting
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg', 'pdf') # For export
from matplotlib.colors import to_rgb
import matplotlib
matplotlib.rcParams['lines.linewidth'] = 2.0
import seaborn as sns
sns.reset_orig()
## Progress bar
from tqdm.notebook import tqdm
## PyTorch
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data
import torch.optim as optim
# Torchvision
import torchvision
from torchvision.datasets import MNIST
from torchvision import transforms
# PyTorch Lightning
try:
import pytorch_lightning as pl
except ModuleNotFoundError: # Google Colab does not have PyTorch Lightning installed by default. Hence, we do it here if necessary
!pip install --quiet "pytorch-lightning>=1.4"
import pytorch_lightning as pl
from pytorch_lightning.callbacks import LearningRateMonitor, ModelCheckpoint
# Path to the folder where the datasets are/should be downloaded (e.g. MNIST)
DATASET_PATH = "../data"
# Path to the folder where the pretrained models are saved
CHECKPOINT_PATH = "../saved_models/tutorial11"
# Setting the seed
pl.seed_everything(42)
# Ensure that all operations are deterministic on GPU (if used) for reproducibility
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# Fetching the device that will be used throughout this notebook
device = torch.device("cpu") if not torch.cuda.is_available() else torch.device("cuda:0")
print("Using device", device)
```
Again, we have a few pretrained models. We download them below to the specified path above.
```
import urllib.request
from urllib.error import HTTPError
# Github URL where saved models are stored for this tutorial
base_url = "https://raw.githubusercontent.com/phlippe/saved_models/main/tutorial11/"
# Files to download
pretrained_files = ["MNISTFlow_simple.ckpt", "MNISTFlow_vardeq.ckpt", "MNISTFlow_multiscale.ckpt"]
# Create checkpoint path if it doesn't exist yet
os.makedirs(CHECKPOINT_PATH, exist_ok=True)
# For each file, check whether it already exists. If not, try downloading it.
for file_name in pretrained_files:
file_path = os.path.join(CHECKPOINT_PATH, file_name)
if not os.path.isfile(file_path):
file_url = base_url + file_name
print(f"Downloading {file_url}...")
try:
urllib.request.urlretrieve(file_url, file_path)
except HTTPError as e:
print("Something went wrong. Please try to download the file from the GDrive folder, or contact the author with the full output including the following error:\n", e)
```
We will use the MNIST dataset in this notebook. MNIST constitutes, despite its simplicity, a challenge for small generative models as it requires the global understanding of an image. At the same time, we can easily judge whether generated images come from the same distribution as the dataset (i.e. represent real digits), or not.
To deal better with the discrete nature of the images, we transform them from a range of 0-1 to a range of 0-255 as integers.
```
# Convert images from 0-1 to 0-255 (integers)
def discretize(sample):
return (sample * 255).to(torch.int32)
# Transformations applied on each image => make them a tensor and discretize
transform = transforms.Compose([transforms.ToTensor(),
discretize])
# Loading the training dataset. We need to split it into a training and validation part
train_dataset = MNIST(root=DATASET_PATH, train=True, transform=transform, download=True)
pl.seed_everything(42)
train_set, val_set = torch.utils.data.random_split(train_dataset, [50000, 10000])
# Loading the test set
test_set = MNIST(root=DATASET_PATH, train=False, transform=transform, download=True)
# We define a set of data loaders that we can use for various purposes later.
# Note that for actually training a model, we will use different data loaders
# with a lower batch size.
train_loader = data.DataLoader(train_set, batch_size=256, shuffle=False, drop_last=False)
val_loader = data.DataLoader(val_set, batch_size=64, shuffle=False, drop_last=False, num_workers=4)
test_loader = data.DataLoader(test_set, batch_size=64, shuffle=False, drop_last=False, num_workers=4)
```
In addition, we will define below a function to simplify the visualization of images/samples. Some training examples of the MNIST dataset are shown below.
```
def show_imgs(imgs, title=None, row_size=4):
# Form a grid of pictures (we use max. 8 columns)
num_imgs = imgs.shape[0] if isinstance(imgs, torch.Tensor) else len(imgs)
is_int = imgs.dtype==torch.int32 if isinstance(imgs, torch.Tensor) else imgs[0].dtype==torch.int32
nrow = min(num_imgs, row_size)
ncol = int(math.ceil(num_imgs/nrow))
imgs = torchvision.utils.make_grid(imgs, nrow=nrow, pad_value=128 if is_int else 0.5)
np_imgs = imgs.cpu().numpy()
# Plot the grid
plt.figure(figsize=(1.5*nrow, 1.5*ncol))
plt.imshow(np.transpose(np_imgs, (1,2,0)), interpolation='nearest')
plt.axis('off')
if title is not None:
plt.title(title)
plt.show()
plt.close()
show_imgs([train_set[i][0] for i in range(8)])
```
## Normalizing Flows as generative model
In the previous lectures, we have seen Energy-based models, Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) as examples of generative models. However, none of them explicitly learns the probability density function $p(x)$ of the real input data. While VAEs model a lower bound, energy-based models only implicitly learn the probability density. GANs on the other hand provide us with a sampling mechanism for generating new data, without offering a likelihood estimate. The generative model we will look at here, called Normalizing Flows, actually models the true data distribution $p(x)$ and provides us with an exact likelihood estimate. Below, we can visually compare VAEs, GANs and Flows
(figure credit - [Lilian Weng](https://lilianweng.github.io/lil-log/2018/10/13/flow-based-deep-generative-models.html)):
<center width="100%"><img src="comparison_GAN_VAE_NF.png" width="600px"></center>
The major difference compared to VAEs is that flows use *invertible* functions $f$ to map the input data $x$ to a latent representation $z$. To realize this, $z$ must be of the same shape as $x$. This is in contrast to VAEs where $z$ is usually much lower dimensional than the original input data. However, an invertible mapping also means that for every data point $x$, we have a corresponding latent representation $z$ which allows us to perform lossless reconstruction ($z$ to $x$). In the visualization above, this means that $x=x'$ for flows, no matter what invertible function $f$ and input $x$ we choose.
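As a minimal 1D sketch of this lossless-reconstruction property (the function below is a toy choice for illustration, not one of the flow layers used later):

```python
import numpy as np

# Any invertible mapping gives lossless reconstruction: x -> z -> x' with x == x'
f = lambda x: np.exp(x) + 1.0       # invertible: maps R onto (1, inf)
f_inv = lambda z: np.log(z - 1.0)

x = np.array([-0.5, 0.0, 1.7])
print(np.allclose(f_inv(f(x)), x))  # True
```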
Nonetheless, how are normalizing flows modeling a probability density with an invertible function? The answer to this question is the rule for change of variables. Specifically, given a prior density $p_z(z)$ (e.g. Gaussian) and an invertible function $f$, we can determine $p_x(x)$ as follows:
$$
\begin{split}
\int p_x(x) dx & = \int p_z(z) dz = 1 \hspace{1cm}\text{(by definition of a probability distribution)}\\
\Leftrightarrow p_x(x) & = p_z(z) \left|\frac{dz}{dx}\right| = p_z(f(x)) \left|\frac{df(x)}{dx}\right|
\end{split}
$$
Hence, in order to determine the probability of $x$, we only need to determine its probability in latent space, and get the derivative of $f$. Note that this is for a univariate distribution, and $f$ is required to be invertible and smooth. For a multivariate case, the derivative becomes a Jacobian of which we need to take the determinant. As we usually use the log-likelihood as objective, we write the multivariate term with logarithms below:
$$
\log p_x(\mathbf{x}) = \log p_z(f(\mathbf{x})) + \log{} \left|\det \frac{df(\mathbf{x})}{d\mathbf{x}}\right|
$$
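We can verify this formula numerically for a toy univariate flow $f(x)=2x$ with a standard-normal prior; the induced $p_x$ is then $\mathcal{N}(0, 0.5^2)$ in closed form (a sketch for illustration, not one of the tutorial's models):

```python
import numpy as np

def log_normal(x, mu=0.0, sigma=1.0):
    # Log density of a univariate Gaussian
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

# Flow f(x) = 2x with standard-normal prior p_z  =>  x ~ N(0, 0.5^2)
x = np.linspace(-2, 2, 9)
log_px_flow = log_normal(2 * x) + np.log(2.0)  # log p_z(f(x)) + log|df/dx|
log_px_exact = log_normal(x, sigma=0.5)        # known closed form
print(np.allclose(log_px_flow, log_px_exact))  # True
```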
Although we now know how a normalizing flow obtains its likelihood, it might not be clear what a normalizing flow does intuitively. For this, we should look from the inverse perspective of the flow starting with the prior probability density $p_z(z)$. If we apply an invertible function on it, we effectively "transform" its probability density. For instance, if $f^{-1}(z)=z+1$, we shift the density by one while still remaining a valid probability distribution, and being invertible. We can also apply more complex transformations, like scaling: $f^{-1}(z)=2z+1$, but there you might see a difference. When you scale, you also change the volume of the probability density, as for example on uniform distributions (figure credit - [Eric Jang](https://blog.evjang.com/2018/01/nf1.html)):
<center width="100%"><img src="uniform_flow.png" width="300px"></center>
You can see that the height of $p(y)$ should be lower than $p(x)$ after scaling. This change in volume represents $\left|\frac{df(x)}{dx}\right|$ in our equation above, and ensures that even after scaling, we still have a valid probability distribution. We can go on with making our function $f$ more complex. However, the more complex $f$ becomes, the harder it will be to find the inverse $f^{-1}$ of it, and to calculate the log-determinant of the Jacobian $\log{} \left|\det \frac{df(\mathbf{x})}{d\mathbf{x}}\right|$. An easier trick is to stack multiple invertible functions $f_{1,...,K}$ after each other, as all together they still represent a single, invertible function. Using multiple, learnable invertible functions, a normalizing flow attempts to transform $p_z(z)$ slowly into a more complex distribution which should finally be $p_x(x)$. We visualize the idea below
(figure credit - [Lilian Weng](https://lilianweng.github.io/lil-log/2018/10/13/flow-based-deep-generative-models.html)):
<center width="100%"><img src="normalizing_flow_layout.png" width="700px"></center>
Starting from $z_0$, which follows the prior Gaussian distribution, we sequentially apply the invertible functions $f_1,f_2,...,f_K$, until $z_K$ represents $x$. Note that in the figure above, the functions $f$ represent the inverted function from $f$ we had above (here: $f:Z\to X$, above: $f:X\to Z$). This is just a different notation and has no impact on the actual flow design because all $f$ need to be invertible anyways. When we estimate the log likelihood of a data point $x$ as in the equations above, we run the flows in the opposite direction than visualized above. Multiple flow layers have been proposed that use a neural network as learnable parameters, such as the planar and radial flow. However, we will focus here on flows that are commonly used in image modeling, and will discuss them in the rest of the notebook along with the details of how to train a normalizing flow.
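A quick numerical check of the stacking property — the log-determinants of composed invertible maps simply add (toy scalar maps, purely for illustration):

```python
import numpy as np

# Two simple invertible maps and their log|det Jacobian| (scalars here)
f1 = lambda x: 3.0 * x          # |df1/dx| = 3  ->  ldj = log 3
f2 = lambda x: x + 1.0          # |df2/dx| = 1  ->  ldj = 0
composed = lambda x: f2(f1(x))  # 3x + 1, still invertible

x, eps = 0.7, 1e-6
# Central-difference derivative of the composition
num_deriv = (composed(x + eps) - composed(x - eps)) / (2 * eps)
ldj_sum = np.log(3.0) + 0.0     # sum of the individual log-determinants
print(np.isclose(np.log(abs(num_deriv)), ldj_sum))  # True
```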
## Normalizing Flows on images
To become familiar with normalizing flows, especially for the application of image modeling, it is best to discuss the different elements in a flow along with the implementation. As a general concept, we want to build a normalizing flow that maps an input image (here MNIST) to an equally sized latent space:
<center width="100%" style="padding: 10px"><img src="image_to_gaussian.svg" width="450px"></center>
As a first step, we will implement a template of a normalizing flow in PyTorch Lightning. During training and validation, a normalizing flow performs density estimation in the forward direction. For this, we apply a series of flow transformations on the input $x$ and estimate the probability of the input by determining the probability of the transformed point $z$ given a prior, and the change of volume caused by the transformations. During inference, we can do both density estimation and sampling new points by inverting the flow transformations. Therefore, we define a function `_get_likelihood` which performs density estimation, and `sample` to generate new examples. The functions `training_step`, `validation_step` and `test_step` all make use of `_get_likelihood`.
The standard metric used in generative models, and in particular normalizing flows, is bits per dimension (bpd). Bpd is motivated from an information theory perspective and describes how many bits we would need to encode a particular example in our modeled distribution. The fewer bits we need, the more likely the example is under our modeled distribution. When we test for the bits per dimension of our test dataset, we can judge whether our model generalizes to new samples of the dataset and didn't memorize the training dataset. In order to calculate the bits per dimension score, we can rely on the negative log-likelihood and change the log base (as bits are measured in base 2 while the NLL is usually computed with the natural logarithm):
$$\text{bpd} = \text{nll} \cdot \log_2\left(\exp(1)\right) \cdot \left(\prod d_i\right)^{-1}$$
where $d_1,...,d_K$ are the dimensions of the input. For images, this would be the height, width and channel number. We divide the log likelihood by these extra dimensions to have a metric which we can compare for different image resolutions. In the original image space, MNIST examples have a bits per dimension score of 8 (we need 8 bits to encode each pixel as there are 256 possible values).
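As a sanity check of this formula: a model that assigns a uniform probability of $1/256$ to every pixel value should score exactly 8 bpd, independent of the image size (a quick sketch):

```python
import numpy as np

# Uniform model over 256 pixel values on an MNIST-shaped image (C x H x W)
img_dims = (1, 28, 28)
nll = -np.log(1.0 / 256) * np.prod(img_dims)   # total negative log-likelihood in nats
bpd = nll * np.log2(np.e) / np.prod(img_dims)  # change of base, normalized per dimension
print(round(bpd, 6))  # 8.0
```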
```
class ImageFlow(pl.LightningModule):
def __init__(self, flows, import_samples=8):
"""
Inputs:
flows - A list of flows (each a nn.Module) that should be applied on the images.
import_samples - Number of importance samples to use during testing (see explanation below). Can be changed at any time
"""
super().__init__()
self.flows = nn.ModuleList(flows)
self.import_samples = import_samples
# Create prior distribution for final latent space
self.prior = torch.distributions.normal.Normal(loc=0.0, scale=1.0)
# Example input for visualizing the graph
self.example_input_array = train_set[0][0].unsqueeze(dim=0)
def forward(self, imgs):
# The forward function is only used for visualizing the graph
return self._get_likelihood(imgs)
def encode(self, imgs):
# Given a batch of images, return the latent representation z and ldj of the transformations
z, ldj = imgs, torch.zeros(imgs.shape[0], device=self.device)
for flow in self.flows:
z, ldj = flow(z, ldj, reverse=False)
return z, ldj
def _get_likelihood(self, imgs, return_ll=False):
"""
Given a batch of images, return the likelihood of those.
If return_ll is True, this function returns the log likelihood of the input.
Otherwise, the output metric is bits per dimension (scaled negative log likelihood)
"""
z, ldj = self.encode(imgs)
log_pz = self.prior.log_prob(z).sum(dim=[1,2,3])
log_px = ldj + log_pz
nll = -log_px
# Calculating bits per dimension
bpd = nll * np.log2(np.exp(1)) / np.prod(imgs.shape[1:])
return bpd.mean() if not return_ll else log_px
@torch.no_grad()
def sample(self, img_shape, z_init=None):
"""
Sample a batch of images from the flow.
"""
# Sample latent representation from prior
if z_init is None:
z = self.prior.sample(sample_shape=img_shape).to(device)
else:
z = z_init.to(device)
# Transform z to x by inverting the flows
ldj = torch.zeros(img_shape[0], device=device)
for flow in reversed(self.flows):
z, ldj = flow(z, ldj, reverse=True)
return z
def configure_optimizers(self):
optimizer = optim.Adam(self.parameters(), lr=1e-3)
# A scheduler is optional, but can help in flows to get the last bpd improvement
scheduler = optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.99)
return [optimizer], [scheduler]
def training_step(self, batch, batch_idx):
# Normalizing flows are trained by maximum likelihood => return bpd
loss = self._get_likelihood(batch[0])
self.log('train_bpd', loss)
return loss
def validation_step(self, batch, batch_idx):
loss = self._get_likelihood(batch[0])
self.log('val_bpd', loss)
def test_step(self, batch, batch_idx):
# Perform importance sampling during testing => estimate likelihood M times for each image
samples = []
for _ in range(self.import_samples):
img_ll = self._get_likelihood(batch[0], return_ll=True)
samples.append(img_ll)
img_ll = torch.stack(samples, dim=-1)
# To average the probabilities, we need to go from log-space to exp, and back to log.
# Logsumexp provides us a stable implementation for this
img_ll = torch.logsumexp(img_ll, dim=-1) - np.log(self.import_samples)
# Calculate final bpd
bpd = -img_ll * np.log2(np.exp(1)) / np.prod(batch[0].shape[1:])
bpd = bpd.mean()
self.log('test_bpd', bpd)
```
The `test_step` function differs from the training and validation step in that it makes use of importance sampling. We will discuss the motivation and details behind this after understanding how flows model discrete images in continuous space.
### Dequantization
Normalizing flows rely on the rule of change of variables, which is naturally defined in continuous space. Applying flows directly on discrete data leads to undesired density models where arbitrarily high likelihoods are placed on a few, particular values. See the illustration below:
<center><img src="dequantization_issue.svg" width="40%"/></center>
The black points represent the discrete points, and the green volume the density modeled by a normalizing flow in continuous space. The flow would continue to increase the likelihood for $x=0,1,2,3$ while having no volume on any other point. Remember that in continuous space, we have the constraint that the overall volume of the probability density must be 1 ($\int p(x)dx=1$). Otherwise, we don't model a probability distribution anymore. However, the discrete points $x=0,1,2,3$ represent delta peaks with no width in continuous space. This is why the flow can place an infinitely high likelihood on these few points while still representing a distribution in continuous space. Nonetheless, the learned density does not tell us anything about the distribution among the discrete points, as in discrete space, the likelihoods of those four points would have to sum to 1, not to infinity.
To prevent such degenerate solutions, a common approach is to add a small amount of noise to each discrete value, which is also referred to as dequantization. Considering $x$ as an integer (as it is the case for images), the dequantized representation $v$ can be formulated as $v=x+u$ where $u\in[0,1)^D$. Thus, the discrete value $1$ is modeled by a distribution over the interval $[1.0, 2.0)$, the value $2$ by a volume over $[2.0, 3.0)$, etc. Our objective of modeling $p(x)$ becomes:
$$ p(x) = \int p(x+u)du = \int \frac{q(u|x)}{q(u|x)}p(x+u)du = \mathbb{E}_{u\sim q(u|x)}\left[\frac{p(x+u)}{q(u|x)} \right]$$
with $q(u|x)$ being the noise distribution. For now, we assume it to be uniform, which can also be written as $p(x)=\mathbb{E}_{u\sim U(0,1)^D}\left[p(x+u) \right]$.
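To make this concrete, here is a 1D sketch (with made-up probabilities) showing that uniform dequantization recovers the discrete pmf exactly, since the dequantized density is constant on each unit cell:

```python
import numpy as np

# Toy discrete pmf over {0, 1, 2, 3}
pmf = np.array([0.1, 0.4, 0.3, 0.2])
# Piecewise-constant dequantized density: p(v) = pmf[floor(v)]
p_cont = lambda v: pmf[np.floor(v).astype(int)]

rng = np.random.default_rng(0)
x = 2
u = rng.uniform(0.0, 1.0, size=10_000)  # q(u|x) uniform on [0, 1)
estimate = np.mean(p_cont(x + u))       # Monte Carlo E_u[p(x+u)]
print(np.isclose(estimate, pmf[x]))     # True: recovers the discrete pmf
```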
In the following, we will implement Dequantization as a flow transformation itself. After adding noise to the discrete values, we additionally transform the volume into a Gaussian-like shape. This is done by scaling $x+u$ between $0$ and $1$, and applying the inverse of the sigmoid function, $\sigma^{-1}(z) = \log z - \log(1-z)$. If we did not do this, we would face two problems:
1. The input is scaled between 0 and 256 while the prior distribution is a Gaussian with mean $0$ and standard deviation $1$. In the first iterations after initializing the parameters of the flow, we would have extremely low likelihoods for large values like $256$. This would cause the training to diverge instantaneously.
2. As the output distribution is a Gaussian, it is beneficial for the flow to have a similarly shaped input distribution. This will reduce the modeling complexity that is required by the flow.
Overall, we can implement dequantization as follows:
```
class Dequantization(nn.Module):
def __init__(self, alpha=1e-5, quants=256):
"""
Inputs:
alpha - small constant that is used to scale the original input.
Prevents dealing with values very close to 0 and 1 when inverting the sigmoid
quants - Number of possible discrete values (usually 256 for 8-bit image)
"""
super().__init__()
self.alpha = alpha
self.quants = quants
def forward(self, z, ldj, reverse=False):
if not reverse:
z, ldj = self.dequant(z, ldj)
z, ldj = self.sigmoid(z, ldj, reverse=True)
else:
z, ldj = self.sigmoid(z, ldj, reverse=False)
z = z * self.quants
ldj += np.log(self.quants) * np.prod(z.shape[1:])
z = torch.floor(z).clamp(min=0, max=self.quants-1).to(torch.int32)
return z, ldj
def sigmoid(self, z, ldj, reverse=False):
# Applies an invertible sigmoid transformation
if not reverse:
ldj += (-z-2*F.softplus(-z)).sum(dim=[1,2,3])
z = torch.sigmoid(z)
else:
z = z * (1 - self.alpha) + 0.5 * self.alpha # Scale to prevent boundaries 0 and 1
ldj += np.log(1 - self.alpha) * np.prod(z.shape[1:])
ldj += (-torch.log(z) - torch.log(1-z)).sum(dim=[1,2,3])
z = torch.log(z) - torch.log(1-z)
return z, ldj
def dequant(self, z, ldj):
# Transform discrete values to continuous volumes
z = z.to(torch.float32)
z = z + torch.rand_like(z).detach()
z = z / self.quants
ldj -= np.log(self.quants) * np.prod(z.shape[1:])
return z, ldj
```
A good check whether a flow is correctly implemented or not, is to verify that it is invertible. Hence, we will dequantize a randomly chosen training image, and then quantize it again. We would expect that we would get the exact same image out:
```
## Testing invertibility of dequantization layer
pl.seed_everything(42)
orig_img = train_set[0][0].unsqueeze(dim=0)
ldj = torch.zeros(1,)
dequant_module = Dequantization()
deq_img, ldj = dequant_module(orig_img, ldj, reverse=False)
reconst_img, ldj = dequant_module(deq_img, ldj, reverse=True)
d1, d2 = torch.where(orig_img.squeeze() != reconst_img.squeeze())
if len(d1) != 0:
print("Dequantization was not invertible.")
for i in range(d1.shape[0]):
print("Original value:", orig_img[0,0,d1[i], d2[i]].item())
print("Reconstructed value:", reconst_img[0,0,d1[i], d2[i]].item())
else:
print("Successfully inverted dequantization")
# Layer is not strictly invertible due to float precision constraints
# assert (orig_img == reconst_img).all().item()
```
In contrast to our expectation, the test fails. However, this is no reason to doubt our implementation here, as only a single value is not equal to the original. This is caused by numerical inaccuracies in the sigmoid inverse. While the input space to the inverted sigmoid is scaled between 0 and 1, the output space is between $-\infty$ and $\infty$. And as we use 32 bits to represent the numbers (in addition to applying logs over and over again), such inaccuracies can occur and should not be worrisome. Nevertheless, it is good to be aware of them, and they can be reduced by using a double tensor (float64).
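The precision issue can be illustrated directly with NumPy: round-tripping a value near the boundary through the inverse sigmoid and back is far less exact in float32 than in float64 (a standalone sketch mirroring the layer's `sigmoid` transform, not the layer itself):

```python
import numpy as np

def inv_sigmoid(p):
    # log(p) - log(1 - p), the inverse of the logistic sigmoid
    return np.log(p) - np.log(1.0 - p)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Round-trip a value close to the boundary 1 in both precisions
p32 = np.float32(0.99999)
p64 = np.float64(0.99999)
err32 = abs(float(sigmoid(inv_sigmoid(p32))) - float(p32))
err64 = abs(float(sigmoid(inv_sigmoid(p64))) - float(p64))
print(err32, err64)  # float64 typically round-trips many orders of magnitude more accurately
```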
Finally, we can take our dequantization and actually visualize the distribution it transforms the discrete values into:
```
def visualize_dequantization(quants, prior=None):
"""
Function for visualizing the dequantization values of discrete values in continuous space
"""
# Prior over discrete values. If not given, a uniform is assumed
if prior is None:
prior = np.ones(quants, dtype=np.float32) / quants
prior = prior / prior.sum() * quants # In the following, we assume 1 for each value means uniform distribution
inp = torch.arange(-4, 4, 0.01).view(-1, 1, 1, 1) # Possible continuous values we want to consider
ldj = torch.zeros(inp.shape[0])
dequant_module = Dequantization(quants=quants)
# Invert dequantization on continuous values to find corresponding discrete value
out, ldj = dequant_module.forward(inp, ldj, reverse=True)
inp, out, prob = inp.squeeze().numpy(), out.squeeze().numpy(), ldj.exp().numpy()
prob = prob * prior[out] # Probability scaled by categorical prior
# Plot volumes and continuous distribution
sns.set_style("white")
fig = plt.figure(figsize=(6,3))
x_ticks = []
for v in np.unique(out):
indices = np.where(out==v)
color = to_rgb(f"C{v}")
plt.fill_between(inp[indices], prob[indices], np.zeros(indices[0].shape[0]), color=color+(0.5,), label=str(v))
plt.plot([inp[indices[0][0]]]*2, [0, prob[indices[0][0]]], color=color)
plt.plot([inp[indices[0][-1]]]*2, [0, prob[indices[0][-1]]], color=color)
x_ticks.append(inp[indices[0][0]])
x_ticks.append(inp.max())
plt.xticks(x_ticks, [f"{x:.1f}" for x in x_ticks])
plt.plot(inp,prob, color=(0.0,0.0,0.0))
# Set final plot properties
plt.ylim(0, prob.max()*1.1)
plt.xlim(inp.min(), inp.max())
plt.xlabel("z")
plt.ylabel("Probability")
plt.title(f"Dequantization distribution for {quants} discrete values")
plt.legend()
plt.show()
plt.close()
visualize_dequantization(quants=8)
```
The visualized distribution shows the sub-volumes that are assigned to the different discrete values. The value $0$ has its volume on $(-\infty, -1.9)$, the value $1$ is represented by the interval $[-1.9, -1.1)$, etc. The volume for each discrete value has the same probability mass. That's why the volumes close to the center (e.g. 3 and 4) have a smaller area on the z-axis than others ($z$ is being used to denote the output of the whole dequantization flow).
Effectively, the consecutive normalizing flow models discrete images by the following objective:
$$\log p(x) = \log \mathbb{E}_{u\sim q(u|x)}\left[\frac{p(x+u)}{q(u|x)} \right] \geq \mathbb{E}_{u}\left[\log \frac{p(x+u)}{q(u|x)} \right]$$
Although normalizing flows are exact in likelihood, we have a lower bound. Specifically, this is an example of Jensen's inequality, because we need to move the log into the expectation so we can use Monte Carlo estimates. In general, the gap of this bound is considerably smaller than that of the ELBO in variational autoencoders. Actually, we can tighten the bound ourselves by estimating the expectation not with one, but with $M$ samples. In other words, we can apply importance sampling which leads to the following inequality:
$$\log p(x) = \log \mathbb{E}_{u\sim q(u|x)}\left[\frac{p(x+u)}{q(u|x)} \right] \geq \mathbb{E}_{u}\left[\log \frac{1}{M} \sum_{m=1}^{M} \frac{p(x+u_m)}{q(u_m|x)} \right] \geq \mathbb{E}_{u}\left[\log \frac{p(x+u)}{q(u|x)} \right]$$
The importance sampling $\frac{1}{M} \sum_{m=1}^{M} \frac{p(x+u_m)}{q(u_m|x)}$ becomes $\mathbb{E}_{u\sim q(u|x)}\left[\frac{p(x+u)}{q(u|x)} \right]$ if $M\to \infty$, so that the more samples we use, the tighter the bound is. During testing, we can make use of this property and have it implemented in `test_step` in `ImageFlow`. In theory, we could also use this tighter bound during training. However, related work has shown that this does not necessarily lead to an improvement given the additional computational cost, and it is more efficient to stick with a single estimate [5].
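The tightening effect of averaging multiple importance samples in log-space can be seen with toy log-weights (randomly generated here, standing in for $\log \frac{p(x+u_m)}{q(u_m|x)}$):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy log importance weights, M = 8 noise samples for each of 1000 inputs
log_w = rng.normal(loc=-1.0, scale=0.5, size=(1000, 8))

def logmeanexp(a, axis=-1):
    # Numerically stable log of the mean of exp(a), as logsumexp - log(M)
    m = a.max(axis=axis, keepdims=True)
    return np.squeeze(m, axis=axis) + np.log(np.mean(np.exp(a - m), axis=axis))

single = log_w[:, 0].mean()       # M = 1 estimate of the bound
multi = logmeanexp(log_w).mean()  # M = 8 importance-sampled estimate
print(multi > single)             # True: more samples give a tighter (higher) bound
```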
### Variational Dequantization
Dequantization uses a uniform distribution for the noise $u$ which effectively leads to images being represented as hypercubes (cube in high dimensions) with sharp borders. However, modeling such sharp borders is not easy for a flow as it uses smooth transformations to convert it into a Gaussian distribution.
Another way of looking at it is to change the prior distribution in the previous visualization. Imagine we have independent Gaussian noise on the pixels, which is commonly the case for any real-world picture. Therefore, the flow would have to model a distribution as above, but with the individual volumes scaled as follows:
```
visualize_dequantization(quants=8, prior=np.array([0.075, 0.2, 0.4, 0.2, 0.075, 0.025, 0.0125, 0.0125]))
```
Transforming such a probability into a Gaussian is a difficult task, especially with such hard borders. Dequantization has therefore been extended to more sophisticated, learnable distributions beyond uniform in a variational framework. In particular, if we remember the learning objective $\log p(x) = \log \mathbb{E}_{u}\left[\frac{p(x+u)}{q(u|x)} \right]$, the uniform distribution can be replaced by a learned distribution $q_{\theta}(u|x)$ with support over $u\in[0,1)^D$. This approach is called Variational Dequantization and has been proposed by Ho et al. [3]. How can we learn such a distribution? We can use a second normalizing flow that takes $x$ as external input and learns a flexible distribution over $u$. To ensure a support over $[0,1)^D$, we can apply a sigmoid activation function as final flow transformation.
Inheriting the original dequantization class, we can implement variational dequantization as follows:
```
class VariationalDequantization(Dequantization):
def __init__(self, var_flows, alpha=1e-5):
"""
Inputs:
var_flows - A list of flow transformations to use for modeling q(u|x)
alpha - Small constant, see Dequantization for details
"""
super().__init__(alpha=alpha)
self.flows = nn.ModuleList(var_flows)
def dequant(self, z, ldj):
z = z.to(torch.float32)
img = (z / 255.0) * 2 - 1 # We condition the flows on x, i.e. the original image
# Prior of u is a uniform distribution as before
# As most flow transformations are defined on [-infinity,+infinity], we apply an inverse sigmoid first.
deq_noise = torch.rand_like(z).detach()
deq_noise, ldj = self.sigmoid(deq_noise, ldj, reverse=True)
for flow in self.flows:
deq_noise, ldj = flow(deq_noise, ldj, reverse=False, orig_img=img)
deq_noise, ldj = self.sigmoid(deq_noise, ldj, reverse=False)
# After the flows, apply u as in standard dequantization
z = (z + deq_noise) / 256.0
ldj -= np.log(256.0) * np.prod(z.shape[1:])
return z, ldj
```
Variational dequantization can be used as a substitute for dequantization. We will compare dequantization and variational dequantization in later experiments.
### Coupling layers
Next, we look at possible transformations to apply inside the flow. A recent popular flow layer, which works well in combination with deep neural networks, is the coupling layer introduced by Dinh et al. [1]. The input $z$ is arbitrarily split into two parts, $z_{1:j}$ and $z_{j+1:d}$, of which the first remains unchanged by the flow. Yet, $z_{1:j}$ is used to parameterize the transformation for the second part, $z_{j+1:d}$. Various transformations have been proposed recently [3,4], but here we will settle for the simplest and most efficient one: affine coupling. In this coupling layer, we apply an affine transformation by shifting the input by a bias $\mu$ and scaling it by $\sigma$. In other words, our transformation looks as follows:
$$z'_{j+1:d} = \mu_{\theta}(z_{1:j}) + \sigma_{\theta}(z_{1:j}) \odot z_{j+1:d}$$
The functions $\mu$ and $\sigma$ are implemented as a shared neural network, and the sum and multiplication are performed element-wise. The LDJ is thereby the sum of the logs of the scaling factors: $\sum_i \left[\log \sigma_{\theta}(z_{1:j})\right]_i$. Inverting the layer is as simple as subtracting the bias and dividing by the scale:
$$z_{j+1:d} = \left(z'_{j+1:d} - \mu_{\theta}(z_{1:j})\right) / \sigma_{\theta}(z_{1:j})$$
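We can check the invertibility of this affine transformation with plain NumPy, using toy $\mu$ and $\sigma$ functions in place of the shared network:

```python
import numpy as np

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=4), rng.normal(size=4)

# Toy mu/sigma "networks" that only see the unchanged part z1
mu = 0.5 * z1
sigma = np.exp(0.1 * z1)        # strictly positive scale

z2_out = mu + sigma * z2        # forward affine coupling on the second part
z2_rec = (z2_out - mu) / sigma  # inverse: subtract the bias, divide by the scale
ldj = np.sum(np.log(sigma))     # log-det Jacobian contribution of this layer
print(np.allclose(z2_rec, z2))  # True
```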
We can also visualize the coupling layer in form of a computation graph, where $z_1$ represents $z_{1:j}$, and $z_2$ represents $z_{j+1:d}$:
<center width="100%" style="padding: 10px"><img src="coupling_flow.svg" width="450px"></center>
In our implementation, we will realize the splitting of variables as masking. The variables to be transformed, $z_{j+1:d}$, are masked when passing $z$ to the shared network to predict the transformation parameters. When applying the transformation, we mask the parameters for $z_{1:j}$ so that we have an identity operation for those variables:
```
class CouplingLayer(nn.Module):
def __init__(self, network, mask, c_in):
"""
Coupling layer inside a normalizing flow.
Inputs:
network - A PyTorch nn.Module constituting the deep neural network for mu and sigma.
Output shape should be twice the channel size as the input.
mask - Binary mask (0 or 1) where 0 denotes that the element should be transformed,
while 1 means the latent will be used as input to the NN.
c_in - Number of input channels
"""
super().__init__()
self.network = network
self.scaling_factor = nn.Parameter(torch.zeros(c_in))
# Register mask as buffer as it is a tensor which is not a parameter,
# but should be part of the modules state.
self.register_buffer('mask', mask)
def forward(self, z, ldj, reverse=False, orig_img=None):
"""
Inputs:
z - Latent input to the flow
ldj - The current ldj of the previous flows.
The ldj of this layer will be added to this tensor.
reverse - If True, we apply the inverse of the layer.
orig_img (optional) - Only needed in VarDeq. Allows external
input to condition the flow on (e.g. original image)
"""
# Apply network to masked input
z_in = z * self.mask
if orig_img is None:
nn_out = self.network(z_in)
else:
nn_out = self.network(torch.cat([z_in, orig_img], dim=1))
s, t = nn_out.chunk(2, dim=1)
# Stabilize scaling output
s_fac = self.scaling_factor.exp().view(1, -1, 1, 1)
s = torch.tanh(s / s_fac) * s_fac
# Mask outputs (only transform the second part)
s = s * (1 - self.mask)
t = t * (1 - self.mask)
# Affine transformation
if not reverse:
# Whether we first shift and then scale, or the other way round,
# is a design choice, and usually does not have a big impact
z = (z + t) * torch.exp(s)
ldj += s.sum(dim=[1,2,3])
else:
z = (z * torch.exp(-s)) - t
ldj -= s.sum(dim=[1,2,3])
return z, ldj
```
For stabilization purposes, we apply a $\tanh$ activation function to the scaling output. This prevents sudden large output values for the scaling that can destabilize training. To still allow scaling factors below $-1$ or above $1$, we have a learnable parameter per dimension, called `scaling_factor`, which scales the $\tanh$ to different limits. Below, we visualize the effect of the scaling factor on the output activation of the scaling terms:
```
with torch.no_grad():
x = torch.arange(-5,5,0.01)
scaling_factors = [0.5, 1, 2]
sns.set()
fig, ax = plt.subplots(1, 3, figsize=(12,3))
for i, scale in enumerate(scaling_factors):
y = torch.tanh(x / scale) * scale
ax[i].plot(x.numpy(), y.numpy())
ax[i].set_title("Scaling factor: " + str(scale))
ax[i].set_ylim(-3, 3)
plt.subplots_adjust(wspace=0.4)
sns.reset_orig()
plt.show()
```
Coupling layers generalize to any masking technique we could think of. However, the most common approach for images is to split the input $z$ in half, using either a checkerboard mask or a channel mask. A checkerboard mask splits the variables across the height and width dimensions and assigns every other pixel to $z_{j+1:d}$; the mask is shared across channels. In contrast, the channel mask assigns half of the channels to $z_{j+1:d}$, and the other half to $z_{1:j}$. Note that when we apply multiple coupling layers, we invert the masking for every other layer so that each variable is transformed a similar number of times.
Let's implement a function that creates a checkerboard mask and a channel mask for us:
```
def create_checkerboard_mask(h, w, invert=False):
x, y = torch.arange(h, dtype=torch.int32), torch.arange(w, dtype=torch.int32)
xx, yy = torch.meshgrid(x, y)
mask = torch.fmod(xx + yy, 2)
mask = mask.to(torch.float32).view(1, 1, h, w)
if invert:
mask = 1 - mask
return mask
def create_channel_mask(c_in, invert=False):
mask = torch.cat([torch.ones(c_in//2, dtype=torch.float32),
torch.zeros(c_in-c_in//2, dtype=torch.float32)])
mask = mask.view(1, c_in, 1, 1)
if invert:
mask = 1 - mask
return mask
```
We can also visualize the corresponding masks for an image of size $8\times 8\times 2$ (2 channels):
```
checkerboard_mask = create_checkerboard_mask(h=8, w=8).expand(-1,2,-1,-1)
channel_mask = create_channel_mask(c_in=2).expand(-1,-1,8,8)
show_imgs(checkerboard_mask.transpose(0,1), "Checkerboard mask")
show_imgs(channel_mask.transpose(0,1), "Channel mask")
```
As a last aspect of coupling layers, we need to decide on the deep neural network we want to apply inside them. The input to the layers is an image, and hence we stick with a CNN. Because the input to a transformation depends on all transformations before, it is crucial to ensure a good gradient flow through the CNN back to the input, which can be optimally achieved by a ResNet-like architecture. Specifically, we use a Gated ResNet that adds a $\sigma$-gate to the skip connection, similarly to the input gate in LSTMs. The details are not crucial here, and the network is strongly inspired by Flow++ [3], in case you are interested in building even stronger models.
```
class ConcatELU(nn.Module):
"""
Activation function that applies ELU in both direction (inverted and plain).
Allows non-linearity while providing strong gradients for any input (important for final convolution)
"""
def forward(self, x):
return torch.cat([F.elu(x), F.elu(-x)], dim=1)
class LayerNormChannels(nn.Module):
def __init__(self, c_in):
"""
This module applies layer norm across channels in an image. Has been shown to work well with ResNet connections.
Inputs:
c_in - Number of channels of the input
"""
super().__init__()
self.layer_norm = nn.LayerNorm(c_in)
def forward(self, x):
x = x.permute(0, 2, 3, 1)
x = self.layer_norm(x)
x = x.permute(0, 3, 1, 2)
return x
class GatedConv(nn.Module):
def __init__(self, c_in, c_hidden):
"""
This module applies a two-layer convolutional ResNet block with input gate
Inputs:
c_in - Number of channels of the input
c_hidden - Number of hidden dimensions we want to model (usually similar to c_in)
"""
super().__init__()
self.net = nn.Sequential(
nn.Conv2d(c_in, c_hidden, kernel_size=3, padding=1),
ConcatELU(),
nn.Conv2d(2*c_hidden, 2*c_in, kernel_size=1)
)
def forward(self, x):
out = self.net(x)
val, gate = out.chunk(2, dim=1)
return x + val * torch.sigmoid(gate)
class GatedConvNet(nn.Module):
def __init__(self, c_in, c_hidden=32, c_out=-1, num_layers=3):
"""
Module that summarizes the previous blocks to a full convolutional neural network.
Inputs:
c_in - Number of input channels
c_hidden - Number of hidden dimensions to use within the network
c_out - Number of output channels. If -1, 2 times the input channels are used (affine coupling)
num_layers - Number of gated ResNet blocks to apply
"""
super().__init__()
c_out = c_out if c_out > 0 else 2 * c_in
layers = []
layers += [nn.Conv2d(c_in, c_hidden, kernel_size=3, padding=1)]
for layer_index in range(num_layers):
layers += [GatedConv(c_hidden, c_hidden),
LayerNormChannels(c_hidden)]
layers += [ConcatELU(),
nn.Conv2d(2*c_hidden, c_out, kernel_size=3, padding=1)]
self.nn = nn.Sequential(*layers)
self.nn[-1].weight.data.zero_()
self.nn[-1].bias.data.zero_()
def forward(self, x):
return self.nn(x)
```
### Training loop
Finally, we can put dequantization, variational dequantization and coupling layers together to build our full normalizing flow on MNIST images. We apply 8 coupling layers in the main flow, and 4 for variational dequantization if applied. We use a checkerboard mask throughout the network since, with a single input channel (black-and-white images), a channel mask cannot be applied. The overall architecture is visualized below.
<center width="100%" style="padding: 20px"><img src="vanilla_flow.svg" width="900px"></center>
```
def create_simple_flow(use_vardeq=True):
flow_layers = []
if use_vardeq:
vardeq_layers = [CouplingLayer(network=GatedConvNet(c_in=2, c_out=2, c_hidden=16),
mask=create_checkerboard_mask(h=28, w=28, invert=(i%2==1)),
c_in=1) for i in range(4)]
flow_layers += [VariationalDequantization(var_flows=vardeq_layers)]
else:
flow_layers += [Dequantization()]
for i in range(8):
flow_layers += [CouplingLayer(network=GatedConvNet(c_in=1, c_hidden=32),
mask=create_checkerboard_mask(h=28, w=28, invert=(i%2==1)),
c_in=1)]
flow_model = ImageFlow(flow_layers).to(device)
return flow_model
```
For implementing the training loop, we use the framework of PyTorch Lightning to reduce the code overhead. If interested, you can take a look at the generated TensorBoard file, in particular the graph, to see an overview of the flow transformations that are applied. Note that we again provide pre-trained models (see later in the notebook), as normalizing flows are particularly expensive to train. The pre-trained checkpoints also include the validation and test results, since evaluation can take some time as well with the added importance sampling.
```
def train_flow(flow, model_name="MNISTFlow"):
# Create a PyTorch Lightning trainer
trainer = pl.Trainer(default_root_dir=os.path.join(CHECKPOINT_PATH, model_name),
gpus=1 if torch.cuda.is_available() else 0,
max_epochs=200,
gradient_clip_val=1.0,
callbacks=[ModelCheckpoint(save_weights_only=True, mode="min", monitor="val_bpd"),
LearningRateMonitor("epoch")])
trainer.logger._log_graph = True
trainer.logger._default_hp_metric = None # Optional logging argument that we don't need
train_data_loader = data.DataLoader(train_set, batch_size=128, shuffle=True, drop_last=True, pin_memory=True, num_workers=8)
result = None
# Check whether pretrained model exists. If yes, load it and skip training
pretrained_filename = os.path.join(CHECKPOINT_PATH, model_name + ".ckpt")
if os.path.isfile(pretrained_filename):
print("Found pretrained model, loading...")
ckpt = torch.load(pretrained_filename)
flow.load_state_dict(ckpt['state_dict'])
result = ckpt.get("result", None)
else:
print("Start training", model_name)
trainer.fit(flow, train_data_loader, val_loader)
# Test best model on validation and test set if no result has been found
# Testing can be expensive due to the importance sampling.
if result is None:
val_result = trainer.test(flow, val_loader, verbose=False)
start_time = time.time()
test_result = trainer.test(flow, test_loader, verbose=False)
duration = time.time() - start_time
result = {"test": test_result, "val": val_result, "time": duration / len(test_loader) / flow.import_samples}
return flow, result
```
## Multi-scale architecture
One disadvantage of normalizing flows is that they operate on exactly the same dimensions as the input. If the input is high-dimensional, so is the latent space, which requires a larger computational cost to learn suitable transformations. However, particularly in the image domain, many pixels contain less information, in the sense that we could remove them without losing the semantic information of the image.
Based on this intuition, deep normalizing flows on images commonly apply a multi-scale architecture [1]. After the first $N$ flow transformations, we split off half of the latent dimensions and directly evaluate them on the prior. The other half is run through $N$ more flow transformations, and depending on the size of the input, we split it in half again or stop at this position. The two operations involved in this setup are `Squeeze` and `Split`, which we will review more closely and implement below.
### Squeeze and Split
When we want to remove half of the pixels in an image, we face the problem of deciding which variables to cut, and how to rearrange the image. Thus, the squeeze operation is commonly applied before a split: it divides the image into subsquares of shape $2\times 2\times C$ and reshapes them into $1\times 1\times 4C$ blocks. Effectively, we reduce the height and width of the image by a factor of 2 while scaling the number of channels by 4. Afterwards, we can perform the split operation over channels without needing to rearrange the pixels. The smaller scale also makes the overall architecture more efficient. Visually, the squeeze operation should transform the input as follows:
<center><img src="Squeeze_operation.svg" width="40%"/></center>
The input of $4\times 4\times 1$ is scaled to $2\times 2\times 4$ following the idea of grouping the pixels in $2\times 2\times 1$ subsquares. Next, let's try to implement this layer:
```
class SqueezeFlow(nn.Module):
def forward(self, z, ldj, reverse=False):
B, C, H, W = z.shape
if not reverse:
# Forward direction: H x W x C => H/2 x W/2 x 4C
z = z.reshape(B, C, H//2, 2, W//2, 2)
z = z.permute(0, 1, 3, 5, 2, 4)
z = z.reshape(B, 4*C, H//2, W//2)
else:
# Reverse direction: H/2 x W/2 x 4C => H x W x C
z = z.reshape(B, C//4, 2, 2, H, W)
z = z.permute(0, 1, 4, 2, 5, 3)
z = z.reshape(B, C//4, H*2, W*2)
return z, ldj
```
Before moving on, we can verify our implementation by comparing our output with the example figure above:
```
sq_flow = SqueezeFlow()
rand_img = torch.arange(1,17).view(1, 1, 4, 4)
print("Image (before)\n", rand_img)
forward_img, _ = sq_flow(rand_img, ldj=None, reverse=False)
print("\nImage (forward)\n", forward_img.permute(0,2,3,1)) # Permute for readability
reconst_img, _ = sq_flow(forward_img, ldj=None, reverse=True)
print("\nImage (reverse)\n", reconst_img)
```
The split operation divides the input into two parts and evaluates one part directly on the prior. So that our flow operation fits the interface of the previous layers, we will return the prior log-probability of the first part as the log-determinant Jacobian of the layer. This has the same effect as combining all variable splits at the end of the flow and evaluating them together on the prior.
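The equivalence claimed here is easy to check numerically: with a factorized standard-normal prior, adding the log-probability of the split variables to the LDJ gives the same total as evaluating all latent variables jointly at the end (a minimal standalone sketch):

```python
import torch

# A factorized standard-normal prior evaluates each dimension independently,
# so splitting the latents and summing the pieces matches the joint evaluation.
prior = torch.distributions.Normal(0.0, 1.0)
z = torch.randn(6)
z_keep, z_split = z.chunk(2)

joint = prior.log_prob(z).sum()
pieces = prior.log_prob(z_keep).sum() + prior.log_prob(z_split).sum()
print(torch.allclose(joint, pieces))  # True
```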
```
class SplitFlow(nn.Module):
def __init__(self):
super().__init__()
self.prior = torch.distributions.normal.Normal(loc=0.0, scale=1.0)
def forward(self, z, ldj, reverse=False):
if not reverse:
z, z_split = z.chunk(2, dim=1)
ldj += self.prior.log_prob(z_split).sum(dim=[1,2,3])
else:
z_split = self.prior.sample(sample_shape=z.shape).to(device)
z = torch.cat([z, z_split], dim=1)
ldj -= self.prior.log_prob(z_split).sum(dim=[1,2,3])
return z, ldj
```
### Building a multi-scale flow
After defining the squeeze and split operation, we are finally able to build our own multi-scale flow. Deep normalizing flows such as Glow and Flow++ [2,3] often apply a split operation directly after squeezing. However, with shallow flows, we need to be more thoughtful about where to place the split operation as we need at least a minimum amount of transformations on each variable. Our setup is inspired by the original RealNVP architecture [1] which is shallower than other, more recent state-of-the-art architectures.
Hence, for the MNIST dataset, we will apply the first squeeze operation after two coupling layers, but don't apply a split operation yet. Because we have only used two coupling layers and each variable has only been transformed once, a split operation would be too early. We apply two more coupling layers before finally applying a split flow and squeezing again. The last four coupling layers operate on a scale of $7\times 7\times 8$. The full flow architecture is shown below.
<center width="100%" style="padding: 20px"><img src="multiscale_flow.svg" width="1100px"></center>
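The latent shapes along this architecture can be tracked with simple bookkeeping (a sketch; shapes are given as channels × height × width):

```python
# Each squeeze quadruples the channels and halves the spatial size;
# each split halves the channels.
def squeeze(c, h, w):
    return 4 * c, h // 2, w // 2

def split(c, h, w):
    return c // 2, h, w

s1 = squeeze(1, 28, 28)  # after the first squeeze:  (4, 14, 14)
s2 = split(*s1)          # after the split:          (2, 14, 14)
s3 = squeeze(*s2)        # after the second squeeze: (8, 7, 7)
print(s1, s2, s3)
```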
Note that while the feature maps inside the coupling layers shrink with the height and width of the input, the increased number of channels is not directly taken into account. To counteract this, we increase the hidden dimensions of the coupling layers operating on the squeezed input. The hidden dimensions are often scaled by 2, as this increases the computational cost by roughly a factor of 4, canceling the factor-4 reduction in spatial size from the squeeze operation. However, we will choose the hidden dimensionalities $32, 48, 64$ for the three scales respectively, to keep the number of parameters reasonable and to show the efficiency of multi-scale architectures.
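The cancellation argument can be verified with a back-of-the-envelope count of multiply-adds for a 3×3 convolution (a sketch, ignoring constant terms like biases):

```python
# The cost of a conv layer scales with H * W * c_in * c_out * k^2.
def conv_cost(h, w, c_in, c_out, k=3):
    return h * w * c_in * c_out * k * k

base    = conv_cost(28, 28, 32, 32)  # hidden conv before squeezing
doubled = conv_cost(14, 14, 64, 64)  # after a squeeze, with hidden dims doubled
print(doubled / base)  # 1.0 - the two effects cancel exactly
```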
```
def create_multiscale_flow():
flow_layers = []
vardeq_layers = [CouplingLayer(network=GatedConvNet(c_in=2, c_out=2, c_hidden=16),
mask=create_checkerboard_mask(h=28, w=28, invert=(i%2==1)),
c_in=1) for i in range(4)]
flow_layers += [VariationalDequantization(vardeq_layers)]
flow_layers += [CouplingLayer(network=GatedConvNet(c_in=1, c_hidden=32),
mask=create_checkerboard_mask(h=28, w=28, invert=(i%2==1)),
c_in=1) for i in range(2)]
flow_layers += [SqueezeFlow()]
for i in range(2):
flow_layers += [CouplingLayer(network=GatedConvNet(c_in=4, c_hidden=48),
mask=create_channel_mask(c_in=4, invert=(i%2==1)),
c_in=4)]
flow_layers += [SplitFlow(),
SqueezeFlow()]
for i in range(4):
flow_layers += [CouplingLayer(network=GatedConvNet(c_in=8, c_hidden=64),
mask=create_channel_mask(c_in=8, invert=(i%2==1)),
c_in=8)]
flow_model = ImageFlow(flow_layers).to(device)
return flow_model
```
We can show the difference in number of parameters below:
```
def print_num_params(model):
num_params = sum([np.prod(p.shape) for p in model.parameters()])
print("Number of parameters: {:,}".format(num_params))
print_num_params(create_simple_flow(use_vardeq=False))
print_num_params(create_simple_flow(use_vardeq=True))
print_num_params(create_multiscale_flow())
```
Although the multi-scale flow has almost 3 times the parameters of the single-scale flow, it is not necessarily more computationally expensive than its counterpart. We will compare the runtimes in the following experiments as well.
## Analysing the flows
In the last part of the notebook, we will train all the models we have implemented above, and try to analyze the effect of the multi-scale architecture and variational dequantization.
### Training flow variants
Before we can analyse the flow models, we need to train them first. We provide pre-trained models that contain the validation and test performance, as well as run-time information. As flow models are computationally expensive, we advise you to rely on those pre-trained models for a first run through the notebook.
```
flow_dict = {"simple": {}, "vardeq": {}, "multiscale": {}}
flow_dict["simple"]["model"], flow_dict["simple"]["result"] = train_flow(create_simple_flow(use_vardeq=False), model_name="MNISTFlow_simple")
flow_dict["vardeq"]["model"], flow_dict["vardeq"]["result"] = train_flow(create_simple_flow(use_vardeq=True), model_name="MNISTFlow_vardeq")
flow_dict["multiscale"]["model"], flow_dict["multiscale"]["result"] = train_flow(create_multiscale_flow(), model_name="MNISTFlow_multiscale")
```
### Density modeling and sampling
Firstly, we can compare the models on their quantitative results. The following table shows all important statistics. The inference time specifies the time needed to determine the probability for a batch of 64 images for each model, and the sampling time the duration it took to sample a batch of 64 images.
```
%%html
<!-- Some HTML code to increase font size in the following table -->
<style>
th {font-size: 120%;}
td {font-size: 120%;}
</style>
```
```
import tabulate
from IPython.display import display, HTML
table = [[key,
"%4.3f bpd" % flow_dict[key]["result"]["val"][0]["test_bpd"],
"%4.3f bpd" % flow_dict[key]["result"]["test"][0]["test_bpd"],
"%2.0f ms" % (1000 * flow_dict[key]["result"]["time"]),
"%2.0f ms" % (1000 * flow_dict[key]["result"].get("samp_time", 0)),
"{:,}".format(sum([np.prod(p.shape) for p in flow_dict[key]["model"].parameters()]))]
for key in flow_dict]
display(HTML(tabulate.tabulate(table, tablefmt='html', headers=["Model", "Validation Bpd", "Test Bpd", "Inference time", "Sampling time", "Num Parameters"])))
```
As we initially expected, using variational dequantization improves upon standard dequantization in terms of bits per dimension. Although the difference of 0.04 bpd does not seem impressive at first, it is a considerable step for generative models (most state-of-the-art models improve upon previous models by 0.02-0.1 bpd on CIFAR, where the absolute bpd is about three times higher). While evaluating the probability of an image takes longer due to the variational dequantization, which also leads to a longer training time, it does not affect the sampling time. This is because inverting variational dequantization is the same as inverting standard dequantization: finding the next lower integer.
When we compare the two models to the multi-scale architecture, we see that the bits per dimension score drops by another 0.04 bpd. Additionally, the inference and sampling times improve notably despite the larger number of parameters. Thus, the multi-scale flow is not only stronger for density modeling, but also more efficient.
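For reference, the bits-per-dimension values in the table are the usual conversion of a negative log-likelihood from nats to bits, normalized by the number of dimensions (a sketch of the standard formula, not the notebook's exact implementation):

```python
import numpy as np

def nll_to_bpd(nll_nats, num_dims):
    # Divide by ln(2) to convert nats to bits, then normalize per dimension
    return nll_nats / (np.log(2) * num_dims)

print(nll_to_bpd(600.0, 28 * 28))  # ~1.10 bpd for a 28x28 image
```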
Next, we can test the sampling quality of the models. We should note that the samples for variational dequantization and standard dequantization are very similar, and hence we visualize here only the ones for variational dequantization and the multi-scale model. However, feel free to also test out the `"simple"` model. The seeds are set to obtain reproducible generations and are not cherry-picked.
```
pl.seed_everything(44)
samples = flow_dict["vardeq"]["model"].sample(img_shape=[16,1,28,28])
show_imgs(samples.cpu())
pl.seed_everything(44)
samples = flow_dict["multiscale"]["model"].sample(img_shape=[16,8,7,7])
show_imgs(samples.cpu())
```
From the few samples, we can see a clear difference between the single-scale and the multi-scale model. The single-scale model has only learned local, small correlations, while the multi-scale model was able to learn full, global relations that form digits. This showcases another benefit of the multi-scale model. In contrast to VAEs, the outputs are sharp, as normalizing flows can naturally model complex, multi-modal distributions, while VAEs suffer from independent decoder output noise. Nevertheless, the samples from this flow are far from perfect, as not all samples show true digits.
### Interpolation in latent space
Another popular test for the smoothness of the latent space of generative models is to interpolate between two training examples. As normalizing flows are strictly invertible, we can guarantee that any image is represented in the latent space. We again compare the variational dequantization model with the multi-scale model below.
```
@torch.no_grad()
def interpolate(model, img1, img2, num_steps=8):
"""
Inputs:
model - object of ImageFlow class that represents the (trained) flow model
img1, img2 - Image tensors of shape [1, 28, 28]. Images between which should be interpolated.
num_steps - Number of interpolation steps. 8 interpolation steps mean 6 intermediate pictures besides img1 and img2
"""
imgs = torch.stack([img1, img2], dim=0).to(model.device)
z, _ = model.encode(imgs)
alpha = torch.linspace(0, 1, steps=num_steps, device=z.device).view(-1, 1, 1, 1)
interpolations = z[0:1] * alpha + z[1:2] * (1 - alpha)
interp_imgs = model.sample(interpolations.shape[:1] + imgs.shape[1:], z_init=interpolations)
show_imgs(interp_imgs, row_size=8)
exmp_imgs, _ = next(iter(train_loader))
pl.seed_everything(42)
for i in range(2):
interpolate(flow_dict["vardeq"]["model"], exmp_imgs[2*i], exmp_imgs[2*i+1])
pl.seed_everything(42)
for i in range(2):
interpolate(flow_dict["multiscale"]["model"], exmp_imgs[2*i], exmp_imgs[2*i+1])
```
The interpolations of the multi-scale model result in more realistic digits (first row $7\leftrightarrow 8\leftrightarrow 6$, second row $9\leftrightarrow 4\leftrightarrow 6$), while the variational dequantization model focuses on local patterns that do not form a digit globally. For the multi-scale model, we actually did not do the "true" interpolation between the two images, as we did not consider the variables that were split along the flow (they have been sampled randomly for all samples). However, as we will see in the next experiment, the early variables do not affect the overall image much.
### Visualization of latents in different levels of multi-scale
In the following, we will focus more on the multi-scale flow. We want to analyse what information is stored in the variables split at early layers, and what is stored in the final variables. For this, we sample 8 images that share the same final latent variables, but differ in the other parts of the latent variables. Below we visualize three examples of this:
```
pl.seed_everything(44)
for _ in range(3):
z_init = flow_dict["multiscale"]["model"].prior.sample(sample_shape=[1,8,7,7])
z_init = z_init.expand(8, -1, -1, -1)
samples = flow_dict["multiscale"]["model"].sample(img_shape=z_init.shape, z_init=z_init)
show_imgs(samples.cpu())
```
We see that the early split variables indeed have a smaller effect on the image. Still, small differences can be spotted when we look carefully at the borders of the digits. For instance, the hole at the top of the 8 changes between samples although all of them represent the same coarse structure. This shows that the flow indeed learns to separate the higher-level information in the final variables, while the early split ones contain local noise patterns.
### Visualizing Dequantization
As a final part of this notebook, we will look at the effect of variational dequantization. We motivated variational dequantization by the difficulty of modeling sharp edges/borders, as a flow would rather prefer smooth, prior-like distributions. To check what noise distribution $q(u|x)$ the flows in the variational dequantization module have learned, we can plot a histogram of output values from the dequantization and variational dequantization modules.
```
def visualize_dequant_distribution(model: ImageFlow, imgs: torch.Tensor, title: str = None):
"""
Inputs:
model - The flow of which we want to visualize the dequantization distribution
imgs - Example training images of which we want to visualize the dequantization distribution
"""
imgs = imgs.to(device)
ldj = torch.zeros(imgs.shape[0], dtype=torch.float32).to(device)
with torch.no_grad():
dequant_vals = []
for _ in tqdm(range(8), leave=False):
d, _ = model.flows[0](imgs, ldj, reverse=False)
dequant_vals.append(d)
dequant_vals = torch.cat(dequant_vals, dim=0)
dequant_vals = dequant_vals.view(-1).cpu().numpy()
sns.set()
plt.figure(figsize=(10,3))
plt.hist(dequant_vals, bins=256, color=to_rgb("C0")+(0.5,), edgecolor="C0", density=True)
if title is not None:
plt.title(title)
plt.show()
plt.close()
sample_imgs, _ = next(iter(train_loader))
visualize_dequant_distribution(flow_dict["simple"]["model"], sample_imgs, title="Dequantization")
visualize_dequant_distribution(flow_dict["vardeq"]["model"], sample_imgs, title="Variational dequantization")
```
The dequantization distribution in the first plot shows that the MNIST images have a strong bias towards 0 (black), and that their distribution has a sharp border, as mentioned before. The variational dequantization module has indeed learned a much smoother, Gaussian-like distribution, which can be modeled much better. For the other values, we would need to visualize the distribution $q(u|x)$ on a deeper level, depending on $x$. However, as all $u$'s interact and depend on each other, we would need to visualize a distribution in 784 dimensions, which is not intuitive anymore.
## Conclusion
In conclusion, we have seen how to implement our own normalizing flow, and what difficulties arise if we want to apply them to images. Dequantization is a crucial step in mapping discrete images into continuous space to prevent undesirable delta-peak solutions. While dequantization creates hypercubes with hard borders, variational dequantization allows us to fit a flow much better to the data. This yields a lower bits-per-dimension score while not affecting the sampling speed. The most common flow element, the coupling layer, is simple to implement, and yet effective. Furthermore, multi-scale architectures help to capture the global image context while allowing us to efficiently scale up the flow. Normalizing flows are an interesting alternative to VAEs as they allow an exact likelihood estimate in continuous space, and we have the guarantee that every possible input $x$ has a corresponding latent vector $z$. However, flows can be applied even beyond continuous inputs and images, and allow us to exploit the data structure in latent space, e.g. on graphs for the task of molecule generation [6]. Recent advances in [Neural ODEs](https://arxiv.org/pdf/1806.07366.pdf) allow flows with an infinite number of layers, called Continuous Normalizing Flows, whose potential is yet to be fully explored. Overall, normalizing flows are an exciting research area which will continue to develop over the next couple of years.
## References
[1] Dinh, L., Sohl-Dickstein, J., and Bengio, S. (2017). “Density estimation using Real NVP,” In: 5th International Conference on Learning Representations, ICLR 2017. [Link](https://arxiv.org/abs/1605.08803)
[2] Kingma, D. P., and Dhariwal, P. (2018). “Glow: Generative Flow with Invertible 1x1 Convolutions,” In: Advances in Neural Information Processing Systems, vol. 31, pp. 10215--10224. [Link](http://papers.nips.cc/paper/8224-glow-generative-flow-with-invertible-1x1-convolutions.pdf)
[3] Ho, J., Chen, X., Srinivas, A., Duan, Y., and Abbeel, P. (2019). “Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design,” in Proceedings of the 36th International Conference on Machine Learning, vol. 97, pp. 2722–2730. [Link](https://arxiv.org/abs/1902.00275)
[4] Durkan, C., Bekasov, A., Murray, I., and Papamakarios, G. (2019). “Neural Spline Flows,” In: Advances in Neural Information Processing Systems, pp. 7509–7520. [Link](http://papers.neurips.cc/paper/8969-neural-spline-flows.pdf)
[5] Hoogeboom, E., Cohen, T. S., and Tomczak, J. M. (2020). “Learning Discrete Distributions by Dequantization,” arXiv preprint arXiv:2001.11235. [Link](https://arxiv.org/abs/2001.11235)
[6] Lippe, P., and Gavves, E. (2021). “Categorical Normalizing Flows via Continuous Transformations,” In: International Conference on Learning Representations, ICLR 2021. [Link](https://openreview.net/pdf?id=-GLNZeVDuik)
---
[](https://github.com/phlippe/uvadlc_notebooks/) If you found this tutorial helpful, consider ⭐-ing our repository.
[](https://github.com/phlippe/uvadlc_notebooks/issues) For any questions, typos, or bugs that you found, please raise an issue on GitHub.
---
```
!pip install beautifulsoup4
import urllib.request
import pandas as pd
import numpy as np
from bs4 import BeautifulSoup,Comment
import re
url = "http://www.hanban.org/hanbancn/template/ciotab_cn1.htm?v1"
response = urllib.request.urlopen(url)
#webContent = response.read().decode(response.headers.get_content_charset())
webContent = response.read().decode("utf-8")
print(len(webContent))
# info_dict->{name:{'country', 'city', 'type', 'date', 'status', 'url'}}
info_dict = {}
tabContent = BeautifulSoup(webContent, "html.parser").find("div", class_="tabContent")
continents = tabContent.find_all("div", class_="tcon")
for con in continents:
# Get the box of country names for this continent
nations = con.find("div", class_=re.compile(r"nation\d*"))
# Remove commented-out countries
for comment_country in nations(text=lambda text: isinstance(text, Comment)):
comment_country.extract()
# Get the list of countries for this continent
counties = [c.string for c in nations.find_all("a")]
# Get the schools box for this continent
schools = con.find("div", class_=re.compile(r"tcon_nationBox\d*"))
# Get the school tabs of each country in this continent
# find_all() ignores commented-out tabs, so no need to strip comments again
schools_nation = schools.find_all("div", class_="tcon_nation")
# Verify that the country list and the school list correspond
if len(schools_nation) != len(counties):
print("ERROR: schools tab does not match the country list.")
break
# Process the schools of each country
for idx, sc in enumerate(counties):
# Process Confucius Institutes
kys = schools_nation[idx].find("div", class_="KY")
# Check for commented-out institutes
comment_kys = kys.find_all(string=lambda text: isinstance(text, Comment))
# Process the commented-out institutes
if comment_kys:
for ckys in comment_kys:
# Comments are not part of the parse tree, so create another BeautifulSoup to parse them
ckys_bs = BeautifulSoup(ckys, "html.parser")
for cky in ckys_bs.find_all("a"):
ky_name = cky.string
if ky_name:
ky_name = ky_name.strip()
ky_url = cky.get("href") or "NaN"
info_dict[ky_name] = {'type':"孔子学院", 'country':counties[idx], 'status': 'hide', 'url':ky_url}
# Process institutes that are not commented out; entries with the same name are overwritten
kys = kys.find_all("a")
# Process each institute
for ky in kys:
ky_name = ky.string
if ky_name:
ky_name = ky_name.strip()
ky_url = ky.get("href") or "NaN"
#ky_id = re.findall(r'\d+', ky_url.split('/')[-1])[0]
# Save the information into the summary dict
info_dict[ky_name] = {'type':"孔子学院", 'country':counties[idx], 'status': 'show', 'url':ky_url}
        # Handle Confucius Classrooms (孔子课堂)
        coures = schools_nation[idx].find("div", class_="coures")
        # Check for classrooms that are commented out
        comment_coures = coures.find_all(string=lambda text: isinstance(text, Comment))
        # Handle the commented-out classrooms
        if comment_coures:
            for ccoures in comment_coures:
                # Comments carry no parse tree of their own, so parse each one
                # with a fresh BeautifulSoup instance
                ccoures_bs = BeautifulSoup(ccoures, "html.parser")
                for ccoure in ccoures_bs.find_all("a"):
                    coure_name = ccoure.string
                    if coure_name:
                        coure_name = coure_name.strip()
                        coure_url = ccoure.get("href") or "NaN"
                        info_dict[coure_name] = {'type': "孔子课堂", 'country': counties[idx], 'status': 'hide', 'url': coure_url}
        # Handle classrooms that are not commented out
        coures = coures.find_all("a")
        # Process each classroom
        for coure in coures:
            coure_name = coure.string
            if coure_name:
                coure_name = coure_name.strip()
                coure_url = coure.get("href") or "NaN"
                # coure_id = re.findall(r'\d+', coure_url.split('/')[-1])[0]
                info_dict[coure_name] = {'type': "孔子课堂", 'country': counties[idx], 'status': 'show', 'url': coure_url}
print(len(info_dict))
print(info_dict["伊利诺伊大学香槟分校孔子学院"])
print(info_dict["北佛罗里达大学孔子学院"])
# Crawl all sub-pages and store them in a dictionary
subsite_dict = {}
def is_url(string_url):
    urls = re.findall(r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\), ]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', string_url)
    return urls

for idx, name in enumerate(info_dict.keys()):
    if is_url(info_dict[name]['url']) and name not in subsite_dict:
        # Crawl the page; if interrupted, re-running the cell resumes where it stopped
        print(idx, "\tHandling:", name, "\turl:", info_dict[name]['url'])
        response = urllib.request.urlopen(info_dict[name]['url'])
        subwebContent = response.read().decode("utf-8")
        subsite_dict[name] = subwebContent
# Parse the "city" and "founding date" from each institute/classroom sub-page
for idx, name in enumerate(info_dict.keys()):
    print(idx, "\tHandling:", name)
    # Default to NaN
    info_dict[name]['city'] = "NaN"
    info_dict[name]['date'] = "NaN"
    # Skip entries whose page could not be crawled (use .get to avoid a KeyError
    # for entries that had no valid URL)
    if not subsite_dict.get(name):
        print("Skip1", name)
        continue
    # Create the parser
    bs = BeautifulSoup(subsite_dict[name], "html.parser")
    # Pages come in two layouts: <p> paragraphs and <table> rows
    if bs.find("table"):
        all_info = bs.find("div", class_="main_leftCon").find_all("table")
    else:
        all_info = bs.find("div", class_="main_leftCon").find_all("p")
    # Skip pages without the target content
    if not all_info:
        print("Skip2", name)
        continue
    # Parse entry by entry
    for line in all_info:
        info = [word for word in line.stripped_strings]
        if not info:
            continue
        if info[0].find("城市") != -1:
            # Make sure a city name is present
            if len(info) >= 2:
                info_dict[name]['city'] = info[1]
        if info[0].find("时间") != -1:
            # Match dates of the form ****年**月**日 (also -, / or . as separators)
            date_string = re.findall(r'\d{4}[-/.年]\d{1,2}[-/.月]\d{1,2}[-/.日]*', info[-1])
            # Make sure a date was found
            if date_string:
                # Drop the Chinese characters and normalize to ****-**-**
                date_list = re.findall(r'\d+', date_string[0])
                date = '-'.join(date_list)
                info_dict[name]['date'] = date
print(info_dict["伊利诺伊大学香槟分校孔子学院"])
print(info_dict["北佛罗里达大学孔子学院"])
print(info_dict["南太平洋大学孔子学院"])
print(info_dict["斯科奇•欧克伯恩学院孔子课堂"])
# Save the information to an Excel file
# (the `encoding` argument to to_excel was removed in newer pandas versions)
df = pd.DataFrame.from_dict(info_dict, orient='index')
df.to_excel("./hanban.xlsx")
```
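The date-normalization step at the end of the parsing loop is easy to verify in isolation. The sketch below reuses the two regular expressions from the scraper (with the stray `|` characters dropped from the character classes); `normalize_date` is just a hypothetical helper name for illustration:

```python
import re

def normalize_date(text):
    # Find a date such as "2007年3月15日" or "2007-3-15" inside the text
    date_string = re.findall(r'\d{4}[-/.年]\d{1,2}[-/.月]\d{1,2}[-/.日]*', text)
    if not date_string:
        return "NaN"
    # Keep only the digit groups and join them as ****-**-**
    date_list = re.findall(r'\d+', date_string[0])
    return '-'.join(date_list)

print(normalize_date("成立时间:2007年3月15日"))  # 2007-3-15
print(normalize_date("2010/5/1"))                # 2010-5-1
print(normalize_date("尚未成立"))                # NaN
```

Note that mixed separators such as "2007年3-15" are also accepted, since each separator is matched independently.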